The Development and Validation of a Novel Nanobody-Based Competitive ELISA for the Detection of Foot and Mouth Disease 3ABC Antibodies in Cattle

Effective management of foot and mouth disease (FMD) requires diagnostic tests to distinguish between infected and vaccinated animals (DIVA). To address this need, several enzyme-linked immunosorbent assay (ELISA) platforms have been developed; however, these tests vary in their sensitivity and specificity and are very expensive for developing countries. Camelid-derived single-domain antibody fragments, so-called Nanobodies, have demonstrated great efficacy for the development of serological diagnostics. This study describes the development of a novel Nanobody-based FMD 3ABC competitive ELISA for the serological detection of antibodies against FMD non-structural proteins (NSP) in Ugandan cattle herds. This in-house ELISA was validated using more than 600 sera from different Ugandan districts and virus serotype specificities. The evaluation of the assay demonstrated a high diagnostic sensitivity and specificity of 94% (95% CI: 88.9-97.2) and 97.67% (95% CI: 94.15-99.36), respectively, as well as the capability to detect NSP-specific antibodies against multiple FMD serotype infections. In comparison with the commercial PrioCHECK FMDV NSP test, there was strong concordance and high correlation and agreement in the performance of the two tests. This newly developed Nanobody-based FMD 3ABC competitive ELISA could clearly benefit routine disease diagnosis, the establishment of disease-free zones, and the improvement of FMD management and control in endemically complex environments, such as those found in Africa.

The global FMD control strategy includes reliable and effective surveillance and is supported by competent laboratory diagnostic services (9,15). Such diagnosis is typically carried out by a combination of virus isolation, serological tests, and nucleic acid recognition methods (9,16). Serological tests are an essential component of the FMD diagnostic algorithm because they are required for animal import/export certification, as well as to determine the "free-from-infection" status of animals and to demonstrate vaccine efficacy (17). In this regard, the detection of antibodies to viral non-structural proteins (NSPs) is considered one of the most important indicators of infection, irrespective of vaccination status (18), and is routinely performed in FMD-free and endemic countries where vaccination is used (19). Of the different NSPs studied, the 3ABC polyprotein was found to be the most reliable single indicator of infection (20). Currently, most assays for the detection of antibodies to NSPs are based on a recombinantly expressed 3ABC target antigen (21)(22)(23)(24)(25)(26), and several 3ABC commercial tests (kits) are available today (17,27). Although used worldwide, these tests vary in sensitivity and specificity and are expensive for developing countries (17,27,28). Camelidae such as camels, llamas, and alpacas mount a humoral immune response that has evolved to include heavy-chain-only antibodies (29,30). Unlike conventional IgGs, the antigen-binding fragment of these heavy-chain antibodies consists of one single variable domain, referred to as VHH or Nanobody (Nb) (31,32). Nbs are typically procured by cloning their genetic repertoire from B cells circulating in the blood of an immunized animal, constructing a cDNA library, and panning by phage display (31,32). The Nb is one of the smallest known antigen-binding antibody fragments.
Their reduced size, improved solubility, high stability, and antigen affinity make them an attractive new generation of detection components for diagnostic applications (30,(33)(34)(35). This study describes the development and validation of a new Nb-based FMD 3ABC competitive ELISA for the detection of anti-FMDV NSP antibodies in cattle serum in Uganda. The assay demonstrated high sensitivity and specificity for identifying NSP antibodies arising from several FMD serotype infections, with effective and robust performance and potentially low-cost production. This unique, tailor-made assay could clearly benefit routine disease diagnosis, the establishment of disease-free zones, and the improvement of FMD management and control in endemically complex environments, such as those found in Africa.

Construction and Expression of FMD 3ABC Recombinant Protein
The FMDV 3ABC gene of serotype O (O1/Israel/99, GenBank: AF189157.1), containing an inactivated 3Cpro protease (26), was codon optimized for expression in E. coli and synthesized commercially in the pJ411 expression vector (DNA2.0). In addition, a six-histidine sequence was added to the 5' end of the gene to generate a six-His (6xHis)-tagged protein. The cDNA construct was transformed into competent E. coli BL21(DE3) (Stratagene) and plated onto LB agar containing 25 µg/ml kanamycin (LB-Kan). A single colony of the transformed E. coli was inoculated into 10 ml of LB-Kan broth and cultured at 37 °C overnight (ON) with vigorous shaking at 225 rpm. The ON culture was diluted 1:100 into LB-Kan and grown at 37 °C with vigorous shaking until the optical density at 600 nm (OD600) reached 0.6-0.8. The culture was then supplemented with 1 mM isopropyl-β-D-thiogalactopyranoside (IPTG) and incubated at 37 °C for an additional 4 h. Following incubation, cells were harvested by centrifugation at 6000 rpm for 20 min and frozen at -80 °C until further use.

Purification of FMD 3ABC Recombinant Protein
The E. coli cell pellet containing the FMDV 3ABC protein was resuspended in lysis buffer [50 mM NaH2PO4, 300 mM NaCl, 5 mM β-mercaptoethanol (β-Me), and 10 mM imidazole, pH 8.0], and 1 mg/ml lysozyme, 3 U/ml Benzonase Nuclease, and protease inhibitor cocktail (Sigma-Aldrich) at a 1:100 dilution were added to the lysis buffer. After 30 min incubation on ice, the bacterial cell wall was disrupted by ultrasonication for 1 min at 80% amplitude (repeated three times) on ice. Following sonication, broken cells were centrifuged at 10,000 × g for 60 min at 4 °C to separate the soluble and insoluble protein fractions. The FMDV 3ABC recombinant protein was purified from the insoluble fraction and/or inclusion bodies. The insoluble fraction was washed with lysis buffer containing 1% Triton X-100, followed by two washes with lysis buffer without Triton X-100. The insoluble material was dissolved in denaturing solubilization buffer (50 mM NaH2PO4 [pH 8.0], 300 mM NaCl, 8.0 M urea, and 1 mM DTT) and mixed on a platform shaker for about 1 h at room temperature (RT). The mixture was then centrifuged at 10,000 × g for 30 min at 4 °C. The supernatant containing solubilized FMDV 3ABC was collected and loaded onto Ni-NTA resin (QIAGEN) pre-equilibrated with solubilization buffer. The protein was eluted from the column with solubilization buffer containing 0.25 M imidazole.
Next, the FMDV 3ABC protein was refolded by dilution to a uniform concentration of 0.7 mg/ml and dialyzed against refolding buffer containing 50 mM NaH2PO4 [pH 8.0], 150 mM NaCl, 8.0 M urea, 3 mM reduced glutathione, and 0.3 mM oxidized glutathione for 4 h at 4 °C. Afterward, another dialysis step was performed against a buffer containing 50 mM NaH2PO4 [pH 7.5], 150 mM NaCl, 3 mM reduced glutathione, 0.3 mM oxidized glutathione, and 3.0 M urea at 4 °C ON. The following day, an additional dialysis step was performed for 4 h at 4 °C against the same buffer with a urea concentration of 1.5 M. A final dialysis step was performed for 4 h at 4 °C against a buffer containing 20 mM NaH2PO4, 150 mM NaCl, and 5% v/v glycerol. The dialyzed, refolded FMDV 3ABC protein was then concentrated using a Centricon 30 kDa cutoff concentrator (Millipore, Merck). The purity and integrity of the FMDV 3ABC protein were assessed by SDS-PAGE and Western blot (WB) as reported elsewhere (26). For the detection of the 3ABC protein, an anti-FMDV-3ABC camel serum and a commercial anti-6xHis antibody (Sigma) were used as positive controls, while serum from a non-inoculated camel served as a negative control.

Ethical Statement
All animal experiments were performed according to Directive 2010/63/EU of the European Parliament for the protection of animals used for scientific purposes and were approved by the Israeli Ethical Committee for Animal Experiments (clearance numbers 11-220-6 and 13-220-3).

Camelus dromedarius Immunization
A healthy camel was immunized with the FMDV 3ABC protein or with four commercially synthesized (Sigma) peptides of 14 to 21 amino acids (aa) in length, which represent conserved sequence motifs derived from the FMDV 3ABC protein of all seven serotypes (36). All peptides were conjugated via their N-terminal cysteine to keyhole limpet hemocyanin (KLH) or to bovine serum albumin (BSA; Sigma). The aa sequences of the peptides used for immunization were: peptide 1A, CISIPSQKSVLYFLIEKGQHEA, derived from the FMD 3A protein; peptide 1B, CGPYEGPVKKPVALKVKAK, derived from the FMD 3B protein; and peptides 1C, CRVFEFEIKVKGQDMLSDAAL, and 2C, CMDGDTMPGLFAYRA, derived from the FMDV 3C protein. During immunization, the camel was injected seven times, once every 2 weeks, with FMDV antigen dissolved in PBS and mixed with an equal volume of Freund's incomplete adjuvant (Sigma). The first three injections included 1 mg/injection of the purified FMDV 3ABC protein. From the fourth injection onward, the camel was injected with a mixture of 0.5 mg of FMDV 3ABC protein and the four different KLH-conjugated peptides (100 µg each). After seven injections, peripheral blood lymphocytes (PBLs) from 100 ml of blood of the immunized camel were isolated by density gradient centrifugation using Histopaque-1077 (Sigma) and used to construct the Nb library. All camel experiments were performed according to guidelines approved by the Israeli ethics committee.

Generation of Phage-Display Library and Selection of anti-FMDV-3ABC Nbs
The generation of the anti-FMDV 3ABC Nb phage-display library was performed as previously reported (37). Briefly, PBLs were purified from the immunized camel by density centrifugation using Histopaque-1077 (Sigma-Aldrich). RNA was extracted using TRIzol reagent (Ambion) according to the manufacturer's instructions, and total cDNA was generated using the SuperScript First-Strand Synthesis System (Invitrogen) according to the manufacturer's instructions.
Total cDNA encoding all variable domains of both conventional and heavy-chain-only antibodies was amplified by PCR using the primers CALL001 (5'-GTCCTGGCTGCTCTTCTACAAGG-3') and CALL002 (5'-GGTACGTGCTGTTGAACTGTTCC-3'). The shortest PCR amplicon (0.7 kb), comprising the variable domains originating from heavy-chain-only antibody mRNA, was purified from a preparative agarose gel and used for a nested PCR with the primers A6E (5'-GATGTGCAGCTGCAGGAGTCTGGRGGAGG-3') and PMCF (5'-CTAGTGCGGCCGCTGAGGAGACGGTGACCTGGGT-3'). Afterward, the PCR products and the pMECS phagemid were digested with PstI and NotI restriction enzymes (Roche). The pool of amplified Nb DNA fragments ligated into the phage-display vector pMECS was transformed into E. coli TG1 electrocompetent cells to generate a library of 1.0 × 10^7 transformants. Next, Nbs were phage-displayed as previously described (37), and bio-panning was performed against the FMDV 3ABC protein or the mixture of the four synthetic peptides, which were BSA-conjugated for this step. Positive phage colonies were recovered by alkaline elution and re-amplified for further use in a second and third round of bio-panning. After three enrichment rounds, approximately 100 colonies were randomly picked, and the Nanobodies in the periplasmic extracts (PE) were screened by ELISA for specific binders to the FMDV 3ABC protein and/or the peptides, as previously reported (37), with minor modifications. Briefly, cells containing Nbs were disrupted by osmotic shock and centrifuged, and the Nbs residing in the supernatant were collected and incubated for 1 h at RT in microtiter plates (Nunc) pre-coated with 100 µl/well of 1 µg/ml FMDV 3ABC protein or individual synthetic peptides. All subsequent assay procedures were performed as previously described (37). Colonies were considered positive when the ratio of the OD405 between the test and control (non-coated) wells was ≥3. The sequences of positive-scoring constructs were determined with an automated DNA sequencer (ABI Prism 3100 Genetic Analyzer; Applied Biosystems, Foster City, CA, USA), and the Nb sequences were aligned.

Purification of Selected anti-FMD Nbs
Selected anti-FMDV 3ABC Nb DNA fragments, fused to an HA tag and a 6xHis tag at their C-termini in the pMECS vector, were transformed into E. coli WK6 and secreted into the periplasm. After ON bacterial induction at 28 °C with 1 mM IPTG, periplasmic extracts containing the soluble anti-FMDV 3ABC Nbs were obtained by osmotic shock as previously reported (37). Nbs were then purified by immobilized metal affinity chromatography (IMAC) on Ni-NTA resin (Sigma-Aldrich, St. Louis, MO, USA) and gel filtration on Superdex 75 HR 16/60 (Pharmacia, Gaithersburg, MD, USA) in PBS. The concentration of the Nbs was determined by OD280 measurement, using the individual theoretical extinction coefficients calculated with the ExPASy ProtParam web tool. Nbs were then aliquoted and stored at -80 °C until further use.
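The Nb quantification step described above is a straightforward Beer-Lambert calculation from the A280 reading and the ProtParam-derived extinction coefficient. A minimal sketch of that arithmetic is shown below; the extinction coefficient and molecular weight are illustrative placeholders, not values reported in this study.

```python
# Minimal sketch: estimating Nanobody concentration from an A280 reading via
# the Beer-Lambert law, as described in the text with ProtParam-derived
# extinction coefficients. The numbers below are illustrative placeholders.

def concentration_from_a280(a280, ext_coeff_m, mol_weight_da, path_cm=1.0):
    """Return (molar concentration in M, mass concentration in mg/ml)."""
    molar = a280 / (ext_coeff_m * path_cm)   # Beer-Lambert: A = epsilon * c * l
    mg_per_ml = molar * mol_weight_da        # mol/L * g/mol = g/L = mg/ml
    return molar, mg_per_ml

if __name__ == "__main__":
    # Example only: a ~14 kDa Nb with a theoretical epsilon(280) of 27,000 1/(M*cm)
    molar, mg_ml = concentration_from_a280(a280=0.54, ext_coeff_m=27000, mol_weight_da=14000)
    print(f"{molar * 1e6:.1f} uM  (~{mg_ml:.2f} mg/ml)")
```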
Sera Samples
A total of 415 serum samples were collected from infected, non-infected, and randomly collected cattle herds, obtained from the Uganda Virus Research Institute (UVRI), Entebbe, and Makerere University, Kampala, Uganda (Frank et al., under review). Sera were collected in Uganda in 2014 and 2015 during FMD outbreaks, and in the following years (2015 to 2017) from various cattle herds in previously infected districts as part of a national surveillance program. In addition, 216 serum samples collected from naïve and vaccinated calves were obtained from the Kimron Veterinary Institute (KVI), Beit Dagan, Israel. A detailed description of all samples used during the study is presented in Table 1. All serum samples used during this work were collected following the procedures prescribed and approved by the Ugandan institutional animal ethics committee.

Surface Plasmon Resonance Assay
The affinity of selected anti-FMDV 3ABC Nbs was measured by surface plasmon resonance (SPR) analysis as previously reported (37). The SPR binding studies were performed on a Biacore T200 instrument. A CM5 sensor chip (GE Healthcare) was coupled with 5 µg/ml FMDV 3ABC protein in 10 mM sodium acetate, pH 5.5, using amine coupling chemistry (NHS/EDC; N-hydroxysuccinimide/N-ethyl-N'-(dimethylaminopropyl)carbodiimide), as recommended by the manufacturer. The final change in response units (RU) was 840 RU. For affinity measurements, different concentrations of purified Nbs, ranging from 500 nM to 1.95 nM, were injected over the sensor chip at a flow rate of 30 µl/min in HEPES-buffered saline (HBS; 10 mM HEPES, pH 7.5, 150 mM NaCl, 3.5 mM EDTA, and 0.005% (v/v) Tween 20) running buffer. The contact time was 120 s, followed by a dissociation time of 600 s. Regeneration was performed for 60 s with 100 mM glycine, pH 2.0, followed by a stabilization time of 600 s. Kinetic parameters were evaluated with the BIAevaluation T200 software (Biacore), assuming 1:1 Langmuir binding with drift.
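To make the kinetic analysis concrete, the sketch below fits the plain 1:1 Langmuir binding model that underlies the BIAevaluation fit described above (the drift term is omitted) to a simulated sensorgram. The rate constants, Rmax, and analyte concentration are made up for illustration and are not the study's data.

```python
# A minimal sketch of the 1:1 Langmuir binding model assumed by the SPR
# kinetic fit described above (drift term omitted). Data are simulated.
import numpy as np
from scipy.optimize import curve_fit

T_ASSOC = 120.0  # s, contact time as stated in the text

def langmuir_1to1(t, ka, kd, rmax, conc):
    """Response (RU) vs. time for association (t <= T_ASSOC) then dissociation."""
    kobs = ka * conc + kd
    r_assoc = rmax * ka * conc / kobs * (1.0 - np.exp(-kobs * np.minimum(t, T_ASSOC)))
    r_end = rmax * ka * conc / kobs * (1.0 - np.exp(-kobs * T_ASSOC))
    r_dissoc = r_end * np.exp(-kd * np.clip(t - T_ASSOC, 0.0, None))
    return np.where(t <= T_ASSOC, r_assoc, r_dissoc)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    conc = 62.5e-9                              # one analyte concentration (M), illustrative
    t = np.linspace(0, 720, 721)                # 120 s association + 600 s dissociation
    noisy = langmuir_1to1(t, ka=2e5, kd=4e-3, rmax=120, conc=conc) + rng.normal(0, 1.0, t.size)

    popt, _ = curve_fit(lambda tt, ka, kd, rmax: langmuir_1to1(tt, ka, kd, rmax, conc),
                        t, noisy, p0=[1e5, 1e-3, 100])
    ka_fit, kd_fit, _ = popt
    print(f"ka={ka_fit:.2e} 1/(M*s)  kd={kd_fit:.2e} 1/s  KD={kd_fit/ka_fit:.2e} M")
```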
Indirect ELISA
The ELISA procedure was based on a previously described method (23,24) with minor modifications. Briefly, Maxisorb ELISA plates (Nunc) were coated at 4 °C for 16 h (ON) with 100 µl/well of FMDV 3ABC protein at a concentration of 60 ng/ml in PBS. Following incubation, plates were washed three times with washing buffer [PBS containing 0.05% Tween 20 (PBS-T)] and then blocked for 1 h at RT with 5% skimmed milk (Sigma). The same washing procedure was performed after each incubation step. Next, serial dilutions of the tested Nbs, at concentrations from 4 µg/ml to 60 ng/ml, were added at a volume of 100 µl/well and incubated for 1 h at RT. Following incubation, plates were washed and 100 µl/well of 1 µg/ml anti-HA antibody (Sigma) was added and incubated for 1 h at RT. Afterward, plates were washed and 100 µl/well of 1 µg/ml HRP-conjugated anti-mouse IgG antibody (Sigma) was added and incubated for an additional 1 h at RT. Finally, plates were washed four times and 90 µl/well of TMB One solution (SouthernBiotech) was added to develop the color reaction. The color reaction was stopped after 15 min with 1 M sulfuric acid and readouts were obtained by reading the absorbance at 470 nm using a standard luminometer (Thermo Labsystems Luminoskan Ascent).

Nb-Based 3ABC Competitive ELISA
The levels of circulating antibodies to the FMDV 3ABC protein were determined by a novel Nb-based competitive NSP ELISA. In principle, immunoassay plates (Maxisorp, Nunc) were coated ON at 4 °C with FMDV 3ABC protein at a concentration of 60 ng/ml in PBS. Following incubation, plates were washed three times in wash buffer (PBS containing 0.1% Tween-20) and blocked for 1 h at RT with 5% skimmed milk. The same washing procedure was performed after each incubation step. After the plates were washed, serum samples diluted 1:25 in diluent buffer (PBS containing 0.1% Tween-20 and 1% skimmed milk) were dispensed at a volume of 100 µl/well in duplicate and incubated ON at 4 °C. Following incubation, plates were washed and 100 µl/well of 0.25 µg/ml of the competing Nb in dilution buffer was added and incubated for 1 h at RT. Subsequently, plates were washed and a commercial monoclonal anti-HA antibody (Sigma-Aldrich), diluted 1:2000 in dilution buffer, was added to the wells and incubated for 1 h at RT. After washing the plates, 100 µl/well of rabbit anti-mouse IgG conjugated to HRP, diluted 1:2000, was added and incubated for an additional 1 h at RT. Finally, plates were washed four times and 90 µl/well of TMB One solution (SouthernBiotech) was added to develop the color reaction. The color reaction was stopped after 15 min with 1 M sulfuric acid and readouts were obtained by reading the absorbance at 470 nm using a standard luminometer (Thermo Labsystems Luminoskan Ascent).

Nb-Based 3ABC Competitive ELISA Cutoff Value
The cutoff value was determined with a control set of 150 negative cattle sera collected from uninfected animals in Uganda (Frank et al., under review), using 10-fold stratified cross-validation analysis (38). Percent inhibition (PI) values were calculated for each serum tested using the formula PI = [1 - (OD of the tested sample / OD of the negative control)] × 100. The signal of the negative control was obtained by adding no serum, only 0.25 µg/ml of Nb at a volume of 100 µl/well. Samples showing a PI value above 51% were considered positive, and those below 51% negative.

Nb-Based 3ABC Competitive ELISA Analytical Sensitivity
The analytical sensitivity of the Nb-based NSP competitive ELISA was assessed by determining the endpoint dilution of a positive control serum using a 2-fold dilution series from 1:5 to 1:320. The endpoint was the dilution at which the value for the positive test sample fell below the cutoff value and could no longer be discriminated from that of the negative control.
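As a worked illustration of the PI formula and the 51% cutoff call defined above, the short sketch below computes PI for a few hypothetical OD readings; all OD values are made up and are not data from the study.

```python
# Minimal sketch of the percent-inhibition (PI) calculation and the 51% cutoff
# used by the competitive ELISA described above. OD values are illustrative.

CUTOFF_PI = 51.0  # % inhibition threshold reported in the text

def percent_inhibition(od_sample: float, od_negative_control: float) -> float:
    """PI = [1 - (OD_sample / OD_negative_control)] x 100."""
    return (1.0 - od_sample / od_negative_control) * 100.0

def classify(od_sample: float, od_negative_control: float) -> str:
    pi = percent_inhibition(od_sample, od_negative_control)
    return "positive" if pi > CUTOFF_PI else "negative"

if __name__ == "__main__":
    od_neg_ctrl = 1.80                    # Nb only, no serum (maximal signal)
    for od in (0.35, 1.10, 1.65):         # strong, partial, and negligible inhibition
        pi = percent_inhibition(od, od_neg_ctrl)
        print(f"OD={od:.2f}  PI={pi:5.1f}%  -> {classify(od, od_neg_ctrl)}")
```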
Construction of FMD 3ABC Recombinant Protein and Peptides
A recombinant 3ABC protein was used for the detection of specific FMDV NSP antibodies. The construction and cloning of the FMDV 3ABC recombinant viral gene (serotype O1, GenBank: AF189157.1) included an inactivated 3Cpro/mut and a 6xHis tag marker. In addition, four peptides derived from the FMDV genes 3A, 3B, and 3Cpro were synthesized (one each from 3A and 3B and two from 3C, referred to as 3C1 and 3C2). These peptides were selected based on their amino acid sequence conservation among the seven serotypes of FMDV and therefore have the potential to interact with antibodies against the NSPs of multiple viral serotype strains. WB analysis was performed for the detection of the FMDV 3ABC protein, using serum from an FMDV-3ABC-immunized camel, with serum from a non-immunized camel as a negative control. Since FMDV 3ABC was recombinantly expressed as a His-tagged protein, an anti-His monoclonal antibody was also used as a second positive control. The results presented in Figure S1 show a specific, distinct band at a molecular weight of approximately 50 kDa, representing the FMDV 3ABC recombinant protein. This distinctive band was detected only when blotting against immunized camel serum or with the anti-His tag antibody control (Figure S1A). As expected, non-immunized serum showed no immunoreactivity toward the FMDV 3ABC protein. The specificity of the immunoreactivity was further validated by using the FMDV 3D recombinant protein as a negative antigen control. The results of 3ABC-immunized camel sera blotted against these two recombinant viral proteins showed a distinctive band only on the FMDV 3ABC blot (Figure S1B). The Coomassie blue staining result is also presented (Figure S1C).

Selection of anti-FMDV 3ABC Protein Nanobodies
Nbs against the FMDV 3ABC recombinant protein were identified and isolated from an immune phage-displayed Nb library, as shown schematically in Figure S2. More than 100 Nbs were isolated and tested against the FMDV 3ABC recombinant protein and the peptides (3A, 3B, 3C1, and 3C2). Screening results after panning, partly presented in Figure 1, revealed five classes of Nbs with distinctive immunorecognition profiles toward the FMDV protein and peptides. Most of the isolated Nbs demonstrated positive immunorecognition of either a single FMDV peptide or the protein. Out of the 100 Nbs tested, two (Nb19 and Nb94) showed the capacity to strongly recognize the 3ABC recombinant protein as well as an additional viral peptide, 3C2. These two Nbs, along with four other Nbs (Nb1, Nb4, Nb38, and Nb88) that demonstrated high immunoreactivity solely to the FMDV 3ABC recombinant protein, were selected for further evaluation. Additionally, Nb9, which showed low immunoreactivity to the FMDV 3ABC recombinant protein and peptides, was used as a negative control (Figure 1). The selected Nbs were then evaluated for their binding affinity to the FMDV 3ABC recombinant protein using SPR analysis and indirect ELISA. The SPR results presented in Table S1 show that five of the six Nbs tested had high binding affinity to the FMDV recombinant 3ABC protein, with KD values ranging from 1.67 × 10^-8 to 9.37 × 10^-8 M (Table S1). The indirect ELISA data presented in Figure 2A yielded binding results similar to the SPR analysis, demonstrating that all Nbs tested had high immunoreactivity to the FMDV 3ABC recombinant protein. As expected, Nb9 showed low binding affinity and immunoreactivity in both methods (Table S1 and Figure 2A). An additional assessment of Nb performance was carried out using a preliminary format of the competitive ELISA. The results presented in Figure 2B revealed that, of the six Nbs, Nb94 demonstrated the highest PI of 40%, compared with approximately 20% for the other Nbs tested. Based on this overall performance, Nb94 was selected for the construction of the in-house Nb-based 3ABC competitive ELISA.

The Construction of the In-house Nb-Based FMD 3ABC Competitive ELISA
Using Nb94 as the competitive component, an in-house 3ABC ELISA for the detection of FMD NSP antibodies in cattle serum was developed and validated. The development of the assay included the assessment of various parameters such as the antigen and Nb94 concentrations, serum dilution, incubation times, and temperature conditions (Figure S3). A schematic presentation of the assay configuration is shown in Figure S4. In addition, the analytical performance of the assay was evaluated, including diagnostic sensitivity and specificity, lower limit of detection, and repeatability (Figure 3). The results presented in Figure 3A show the assay to be highly predictive, with an Area Under the Curve (AUC) of 0.985, as determined by Receiver Operating Characteristic (ROC) analysis using a total of 222 sera (from infected, non-infected, and naïve animals). The Nb-based 3ABC competitive ELISA demonstrated a high diagnostic sensitivity and specificity of 94% (95% CI: 88.9-97.2) and 97.67% (95% CI: 94.15-99.36), respectively.
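The ROC and sensitivity/specificity evaluation reported above can be sketched roughly as follows, using simulated PI values instead of the study data; the distributions, sample sizes, and the Clopper-Pearson interval choice are assumptions for illustration (scikit-learn and statsmodels are assumed to be available).

```python
# A rough sketch of an ROC / sensitivity-specificity evaluation of the kind
# described above, on simulated percent-inhibition (PI) values.
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.proportion import proportion_confint

rng = np.random.default_rng(1)
CUTOFF_PI = 51.0

# Simulated PI values for infected (positive) and naive/non-infected (negative)
# animals -- illustrative distributions only, not the study data.
pi_pos = np.clip(rng.normal(80, 15, 150), 0, 100)
pi_neg = np.clip(rng.normal(20, 15, 172), 0, 100)
y_true = np.r_[np.ones_like(pi_pos), np.zeros_like(pi_neg)]
pi_all = np.r_[pi_pos, pi_neg]

auc = roc_auc_score(y_true, pi_all)

y_pred = (pi_all > CUTOFF_PI).astype(int)
tp = int(((y_pred == 1) & (y_true == 1)).sum())
fn = int(((y_pred == 0) & (y_true == 1)).sum())
tn = int(((y_pred == 0) & (y_true == 0)).sum())
fp = int(((y_pred == 1) & (y_true == 0)).sum())

# Exact (Clopper-Pearson) 95% confidence intervals for the two proportions.
se, se_ci = tp / (tp + fn), proportion_confint(tp, tp + fn, alpha=0.05, method="beta")
sp, sp_ci = tn / (tn + fp), proportion_confint(tn, tn + fp, alpha=0.05, method="beta")

print(f"AUC = {auc:.3f}")
print(f"Sensitivity = {se:.1%} (95% CI {se_ci[0]:.1%}-{se_ci[1]:.1%})")
print(f"Specificity = {sp:.1%} (95% CI {sp_ci[0]:.1%}-{sp_ci[1]:.1%})")
```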
The lower limit of detection, calculated using a set of positive (infected) and negative (non-infected) control sera and presented in Figure 3B, demonstrated clear discrimination between seropositive and seronegative controls across dilutions from 1:5 to 1:200, with the positive control yielding high PI values (approximately 90%) down to a 1:50 dilution, after which the PI values gradually decreased. Inhibition was still detectable at a dilution of 1:200. Overall, the assay was highly repeatable, as determined with the set of positive and negative controls tested on different days and by different operators (Figure 3C).

Sera Screening of Infected, Non-infected, Naïve, and Vaccinated Samples
Serum samples from infected and non-infected cattle in Uganda were screened with the assay (Figure 4). In addition, a total of 216 serum samples were obtained from KVI, Bet Dagan, Israel. These samples were collected from 72 calves at three different time points representing different FMD vaccination statuses: naïve (uninfected and unvaccinated), after the first vaccination, and after the second vaccination. The results presented in Table 2 and Figure 5A showed the Nb-based 3ABC competitive ELISA to have 100% specificity for the naïve group (72 of 72 negative), 99% for calves after the first vaccination (71 of 72 negative), and 93% for calves after two rounds of vaccination (67 of 72 negative). Comparison of the different sample groups presented in Figure 6 demonstrated significant discrimination (P < 0.001) between the FMD-infected samples and the other sample groups.

Randomized Cattle Herds in Uganda
UVRI health workers collected a total of 165 serum samples from four districts in Uganda during 2016-2017 as part of a serosurveillance study to assess the levels of FMD NSP antibodies in cattle herds. These samples were analyzed using the Nb-based 3ABC competitive ELISA. Of the 165 samples, 80 were collected in Nakasake district, 65 in Mbale district, 13 in Isingiro district, and 7 in Gomba district (Table 3). The results presented in Table 3 and Figure 7 showed that 21 of the 80 samples collected in Nakasake were positive for FMD NSP antibodies, as were 9 of 65 in Mbale, 4 of 13 in Isingiro, and 0 in Gomba. In total, a prevalence of 20% for FMD NSP antibodies (34 of 165) was found in this group of randomly collected samples.

Diagnostic Performance of the Nb-Based 3ABC Competitive ELISA Compared to the PrioCHECK NSP Test
The Nb-based 3ABC competitive ELISA results were compared with the PrioCHECK FMDV NSP test. A total of 631 serum samples were tested and analyzed in both assays. These samples represented different FMD statuses, including naïve, non-infected, infected, vaccinated, and randomized field-survey samples (Table 1). The results presented in Tables 2, 3 showed a strong correlation between the Nb-based 3ABC competitive ELISA and the PrioCHECK FMDV NSP test. High concordance between the two assays was observed for samples collected from naïve, non-infected, FMD serotype O and SAT1 infected, and first-time vaccinated calves (97-99%). Samples collected from animals infected with FMD serotype SAT2 and from animals vaccinated twice showed a lower concordance of 88-90% (Table 2). The comparison of the randomized field-survey samples showed an accordance of 96% between the two assays (Table 3). Data analysis revealed that 7% (5/72) of the animals that received two vaccinations were diagnosed positive for the presence of NSP antibodies when tested by the Nb-based 3ABC competitive ELISA, compared with 16.6% (12/72) detected by the PrioCHECK test (Figures 5A,B, respectively).

FIGURE 5 | Sera screening analysis of naïve and vaccinated calves' samples using the Nanobody (Nb94)-based 3ABC competitive ELISA and the PrioCHECK NSP test. A total of 72 calves were serially sampled three times: before the first vaccination (naïve state), after the first vaccination, and after the second vaccination. The prevalence of FMDV NSP antibodies was determined by the Nb94-based 3ABC competitive ELISA (A) and the PrioCHECK NSP test (B). The cutoff for each assay is indicated by a dashed line.

Similar prevalence levels were seen in both assays for samples from infected and randomly collected animals (98% vs. 94%, and 24% vs. 20%, respectively). Agreement, estimated by calculating the kappa coefficient (GraphPad software), demonstrated a strong correlation between the two assays, with a value of 0.875 and an SE of 0.021 (95% CI: 0.834-0.916) for all samples across all categories. Further analysis using the Bland-Altman method (GraphPad software), presented in Figure S5, also revealed high agreement between the two assays, with a bias of 0.92, an SD of 1.36, and 95% limits of agreement of -1.74 to 3.58.
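The two agreement analyses just described (Cohen's kappa on the paired positive/negative calls and Bland-Altman bias with 95% limits of agreement) can be reproduced in outline as follows; the paired results below are simulated, not the study data, and GraphPad's exact computation may differ in detail.

```python
# A minimal sketch of the two inter-assay agreement statistics reported above:
# Cohen's kappa on paired qualitative calls and Bland-Altman bias / limits of
# agreement on paired quantitative readouts. All data are simulated.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)

# Paired positive/negative calls (1 = NSP-antibody positive) from two assays,
# made roughly 90% concordant for illustration.
nb_elisa = rng.integers(0, 2, 200)
priocheck = np.where(rng.random(200) < 0.9, nb_elisa, 1 - nb_elisa)
kappa = cohen_kappa_score(nb_elisa, priocheck)

# Paired quantitative readouts (e.g., PI vs. percent positivity), illustrative.
x = rng.normal(50, 25, 200)
y = x + rng.normal(1.0, 1.4, 200)
diff = y - x
bias, sd = diff.mean(), diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"Cohen's kappa = {kappa:.3f}")
print(f"Bland-Altman bias = {bias:.2f}, SD = {sd:.2f}, "
      f"95% limits of agreement = {loa_low:.2f} to {loa_high:.2f}")
```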
DISCUSSION
FMD is an acute and highly contagious disease of cloven-hoofed animals, which can lead to devastating economic losses across many parts of the world (9,41). Over the years, extensive efforts have been invested in improving the performance of diagnostic tests for FMD, resulting in the development of a wide range of ELISA tests to detect the presence of anti-NSP antibodies (22,23,25,39,42,43). Although several 3ABC commercial tests (kits) are available, these tests are not ideal, since they are extremely expensive and have raised concerns regarding their sensitivity and specificity, especially in endemically complex surroundings (15,(44)(45)(46). To address these needs and to overcome the challenges of settings where multiple serotypes are endemic, a novel Nb-based FMD 3ABC competitive ELISA was developed for the detection of antibodies against NSP in cattle sera in Uganda. The design of this in-house assay included the use of a high-affinity Nb (Nb94), which targets the FMDV 3ABC protein and, more specifically, a conserved region located within the FMDV 3Cpro protein. The selection of an Nb with an immunoreactivity profile covering both the complete protein and a specific conserved region proved to be highly important, enabling the detection of multiple FMD serotype strains in a single assay configuration. The inherent characteristics of Nbs, including their structural stability, solubility, scalable and straightforward production, elevated thermostability, and long shelf-life (47), were the key reasons for using them as the competitive component in the development of the assay. The use of Nbs also provides a significant economic benefit. They potentially enable the development of detection tools with a longer shelf-life and without the need for a temperature-controlled storage and supply chain, which is extremely important in changing surroundings such as those found in Africa. In addition, compared with mAbs, the distinctive structural properties of Nbs (devoid of a light chain and comprising only a single variable heavy-chain domain) facilitate their construction, which results in lower production costs.
They have been shown to be efficiently expressed in economical production systems, such as bacteria and yeast, with high batch-to-batch consistency (48), which allows their production at large scale within a short period. Considering the above, the use of Nbs in our competitive ELISA format gives the assay, if needed, the capacity to be easily adjusted for the specific detection of other FMDV serotypes based on 3ABC or any other selected protein. The new Nb-based competitive ELISA was designed for the detection of antibodies against FMD NSP. This was done because NSP antibodies have been widely accepted as a reliable marker for diagnosing the infection status of animal herds, regardless of vaccination status (17,(49)(50)(51)(52). Furthermore, the presence of these antibodies provides critical input for risk analysis in the assessment of FMD control management (44,53), and it is currently the most sensitive tool to distinguish present from past infection with FMDV after a single time-point sampling (17). The Nb-based 3ABC competitive ELISA was evaluated by screening a wide range of serum samples representing different FMD serotype infections and statuses. These serum samples were obtained from cattle herds in Uganda and Israel considered FMD free (naïve and clinically non-infected), FMD infected (serotype O, SAT1, and SAT2 infections), and FMD vaccinated (first or second vaccine administration). The analytical performance of the assay was assessed using ROC analysis, which demonstrated the assay's high predictive strength, as well as its high diagnostic sensitivity and specificity. Strong repeatability and clear discrimination between infected, naïve/non-infected, and vaccinated animals were also observed. The screening results demonstrated that the Nb-based 3ABC competitive ELISA could successfully differentiate between infected and vaccinated animals (DIVA). This capability was highlighted by the high number of positive samples detected by the assay in the infected animal group (141 of 150), compared with the low to zero number of positive samples in the vaccinated (1 of 72 after the first vaccination, and 5 of 72 after the second vaccination) and naïve (0 of 72) groups. The assay also showed the capacity to detect NSP antibodies in sera collected from cattle infected with three different FMD serotypes, with an overall sensitivity of 94% (95% CI: 88.9-97.2) and specificity of 97.67% (95% CI: 94.15-99.36), as determined by testing a set of naïve and non-infected samples. The performance of the Nb-based 3ABC competitive ELISA was compared with the commercial PrioCHECK NSP ELISA test that is widely used in Uganda (17,49). This comparison demonstrated high correlation and agreement between the assays for all serum samples, regardless of their FMD status. Interestingly, although both assays showed that calves exhibited an increase in their NSP antibody response after two vaccine doses, the PrioCHECK NSP ELISA identified twice as many positive samples as the Nb-based 3ABC ELISA (12 and 5, respectively). Although limited by sample numbers, this result is consistent with previous reports showing that the specificity of the PrioCHECK NSP test drops significantly after multiple doses of vaccine (17,43,54). In theory, the detection of antibodies against NSPs indicates infection rather than vaccination; in practice, however, antibodies against NSPs may also be provoked by trace amounts of NSPs present in commercial vaccines and by multiple vaccinations (18,20,50,55).
Since, under ideal conditions, vaccinated animals should not elicit NSP antibodies, the lower number of NSP-antibody-positive samples detected by the Nb-based 3ABC competitive ELISA could suggest a higher specificity of our assay compared with the PrioCHECK NSP test. To further validate assay performance, a small randomized set of serum samples collected from individual cattle field herds in different districts in Uganda was also analyzed. The results revealed a total prevalence of 20% for NSP antibodies by the Nb-based 3ABC competitive ELISA. Although this cohort comprises a limited number of samples, the integrity of the assay performance is supported by the PrioCHECK NSP test analysis, which demonstrated a similar NSP antibody prevalence of 25%. Today, most African countries are still poorly equipped to control FMD due to a lack of infrastructure and financial resources (15,45). FMD diagnosis in countries such as Uganda is mainly based on molecular diagnostic tests and serological assays such as NSP ELISAs (39). Although molecular diagnostic tests have shown higher analytical sensitivity than serological assays, these systems require sophisticated equipment and highly trained laboratory staff. Such limitations make them impractical for routine screening and confine their use to research institutions (45,56,57). As a result, focal testing for FMD is carried out in regional and national reference laboratories and relies mainly on commercial NSP ELISA kits (39,56). Considering the unique environmental and economic challenges, new tailored serological assays with high diagnostic performance, low production cost, and no need for expensive laboratory equipment are clearly still needed for large-scale application in FMD control and surveillance. In this study, we successfully developed and validated a DIVA Nb-based 3ABC competitive ELISA for the detection of NSP antibodies in cattle serum. Since every assay development has its own set of merits and demerits (58), various parameters that differ extensively between non-endemic and endemic surroundings, such as those seen in Uganda, were taken into consideration during assay design. Further studies with larger sample cohorts and across different animal species must still be carried out to validate the assay performance before regulatory authorities can adopt it for routine use. However, the tailor-made, highly sensitive and specific NSP ELISA presented herein clearly demonstrates the potential to serve as an alternative or supplemental simple, low-cost, and effective method for the detection of FMD NSP antibodies, and as a critical component in regional FMD control management and surveillance.

AUTHOR CONTRIBUTIONS
Study experiments were designed by SG, AS, ElR, SM, VY, and LL. SG designed the FMDV peptides and constructed the FMDV 3ABC protein. SG, CV, and EmR selected, expressed, purified, and characterized the anti-FMD 3ABC Nanobodies. FM and SO collected, analyzed, and provided the FMD-infected sera samples. Naïve and vaccinated samples were collected, analyzed, and provided by NS. Randomized field samples were collected and processed by IA and JL. SG, AS, PB, and SF-M developed and constructed the Nb-based 3ABC competitive ELISA. ELISA screening analysis was performed by SG, AS, and PB. PrioCHECK NSP test analysis was done by IA, SG, AS, and PB. The manuscript was written by AS and SG and edited by RM, LV-S, ElR, SM, CV, JL, VY, and LL.

FUNDING
The study was funded by the Cooperative Biological Engagement Program of the U.S.
Department of Defense, Defense Threat Reduction Agency (DTRA # 8802), and the EPSRC IRC in Early-Warning Sensing Systems for Infectious Diseases (i-sense), EP/K031953/1.
Age-Related Differences in the Luminal and Mucosa-Associated Gut Microbiome of Broiler Chickens and Shifts Associated with Campylobacter jejuni Infection

Despite the importance of the gut microbiota for broiler performance and health, little is known about the composition of this ecosystem, its development, and its response to bacterial infections. Therefore, the current study was conducted to address the composition and structure of the microbial community in broiler chickens, in a longitudinal study from day 1 to day 28 of age, in the gut content and on the mucosa. Additionally, the consequences of a Campylobacter (C.) jejuni infection on the microbial community were assessed. The composition of the gut microbiota was analyzed with 16S rRNA gene-targeted Illumina MiSeq sequencing. Sequencing of 130 samples yielded 51,825,306 quality-controlled sequences, which clustered into 8285 operational taxonomic units (OTUs; 0.03 distance level) representing 24 phyla. Firmicutes, Proteobacteria, Bacteroidetes, Actinobacteria, and Tenericutes were the main components of the gut microbiota, with Proteobacteria and Firmicutes being the most abundant phyla (between 95.0 and 99.7% of all sequences) at all gut sites. Microbial communities changed in an age-dependent manner: whereas young birds had more Proteobacteria, Firmicutes and Tenericutes dominated in older birds (>14 days old). In addition, 28-day-old birds had more diverse bacterial communities than young birds. Furthermore, numerous significant differences in microbial profiles between the mucosa and the luminal content of the small and large intestine were detected, with some species being strongly associated with the mucosa whereas others remained within the luminal content of the gut. Following oral infection of 14-day-old broiler chickens with 1 × 10^8 CFU of C. jejuni NCTC 12744, it was found that C. jejuni heavily colonized the small and large intestine. Moreover, C. jejuni colonization was associated with an alteration of the gut microbiota, with infected birds having a significantly lower abundance of Escherichia (E.) coli at different gut sites. On the contrary, the level of Clostridium spp. was higher in infected birds compared with birds from the negative controls. In conclusion, the obtained results demonstrate how the bacterial microbiome composition changed within the early life of broiler chickens in the gut lumen and on the mucosal surface. Furthermore, our findings confirmed that the Campylobacter carrier state in chickens is characterized by multiple changes in the intestinal ecology within the host.
INTRODUCTION
A diverse microbiota is found throughout the gastrointestinal tract (GIT) of chickens, most predominantly in the cecum (Mead, 1997; Videnska et al., 2014). The gut microbiota plays an essential role in nutrition, detoxification of certain compounds, growth performance, and protection against pathogenic bacteria. The microbiota is crucial for strengthening the immune system, thereby affecting the growth, health, and wellbeing of chickens. Generally, the gut microbiota modulates host responses to limit the colonization of pathogens (Rehman et al., 2007). There is little information about the diversity and function of the gut microbiota in chickens, its impact on the host, and the impact of certain pathogens. Development of the gut microbiota in chickens occurs immediately after hatching and is influenced by both genetic and external factors such as diet and environment (Apajalahti et al., 2004). It has been reported that disturbances in the intestinal microbiota lead to a delay in growth, weaken host resistance, and increase susceptibility to various infectious diseases (Lan et al., 2004). Gong et al. (2002) demonstrated that the cecal microbiota protects chickens against bacterial infections, while the microbiota in the small intestine contributes significantly to its function, including digestion and nutrient absorption, which substantially determines the growth rate of the bird. Studies on the gut microbiota have mostly been performed with chickens older than 1 week of age, due to the various influences acting on day-old birds. However, the composition of the gut microbiota on the first day of life in newly hatched chickens is of particular interest within a longitudinal study. Therefore, the focus of the present study was to determine the diversity and community structure of the microbiota within the small and large intestine from hatch until 4 weeks of age.
Furthermore, differences among the mucosaassociated and luminal content microbiota were determined for the first time. Campylobacter (C.) jejuni is the most common cause of food-borne bacterial enteritis worldwide (EFSA, 2011). C. jejuni infection of chickens had previously not been considered to influence bird health and it was thought that C. jejuni is part of the normal microbiota of birds (EFSA, 2011). Understanding how Campylobacter species, especially C. jejuni, establishes successful colonization in chickens remains a foremost research priority as this gastrointestinal pathogen not only overcomes the host's defense system, but also competes with the microbial community for space and nutrients. It has been shown that Campylobacter requires numerous factors to successfully colonize the host, to translocate and to avoid clearance (Awad et al., 2014(Awad et al., , 2015a(Awad et al., ,b, 2016Humphrey et al., 2014). In addition, Awad et al. (2016) showed that Campylobacter had the ability to reduce butyrate, isobutyrate, valerate, and isovalerate which might be due to the utilization of short-chain fatty acids (SCFAs) as a carbon source (Masanta et al., 2013) or due to the reduction of butyric acid producing bacteria amongst the microbiota. In general, there is a complex interplay between microbiota composition and SCFAs concentration and it was found that the type and level of SCFAs in the gut can affect different members of the microbial community in various ways (Mon et al., 2015). It is still unknown how C. jejuni affects the ecology of the chicken gut, a feature of high importance considering a possible detrimental effect on the health of birds associated with C. jejuni colonization. Haag et al. (2012) demonstrated that C. jejuni colonization in mice depends on the microbiota of the host and vice versa and Campylobacter colonization induces a shift of the intestinal microbiota. Thus, it can be hypothesized that Campylobacter colonization is associated with an alteration in the intestinal microbiota of chickens as well. Therefore, the second aim of the actual study was to investigate the dynamics of an experimental Campylobacter jejuni NCTC 12744 infection in 14-28 days old chickens and the consequences on the alteration of the gut microbiome. Ethics Statement The animal experiment was approved by the institutional ethics committee of the University of Veterinary Medicine and the Ministry of Research and Science under the license number GZ 68.205/0011-11/3b/2013. All husbandry practices were performed with full consideration of animal welfare. Experimental Design In this study, a total of 45 1-day-old broiler chickens (males and females) were obtained from a commercial hatchery (Ross-308, Geflügelhof Schulz, Graz, Austria). Five day-old birds were immediately sacrificed for determining the gut microbiota of the jejunal and cecal mucosa. At 7 and 14 days of age, five birds were randomly selected for measuring the development of gut microbiota from gut content and mucosa. The birds were kept as non-infected for the first 2 weeks and were housed on wood shavings with feed and water supplied ad libitum. The birds were fed a standard commercial diet for the whole experimental period in order to avoid an influence of the change of diet on the microbial composition. 
At the first and 14 days of age birds were confirmed as Campylobacter-free by taking cloacal swabs which were streaked onto modified charcoal-cefaprazone-deoxycholate agar (CM0739, OXOID, Hampshire, UK) and grown for 48 h under microaerophilic conditions at 42 • C. At 14 days of age, 15 birds were infected with Campylobacter jejuni (C. jejuni) reference strain NCTC 12744 and kept separately from 15 non-infected control birds which were inoculated with PBS only. C. jejuni was routinely grown in Lennox L Base broth (LB broth) (Invitrogen, California, USA) at 42 • C for 48 h in a shaking incubator. Campylobacter colony-forming unit (CFU) was determined from each suspension by serial dilutions in duplicate using LB agar. Campylobacter suspensions were stored at −80 • C by adding 2 mL of 40% glycerol/10 mL LB broth. For infection, Campylobacter suspensions were centrifuged for 5 min at 10,000 × rpm. The pellet was washed 3 times with phosphate-buffered saline (PBS) each time centrifuged at the same conditions as mentioned above. Finally, the pellets were resuspended in PBS and the necessary concentration was adjusted for birds' infection. The infection was performed orally via feeding tube (gavage) with a dose of 1 × 10 8 CFU/bird at 14 days of age as described previously (Awad et al., 2015a). At 7 days post infection 5 birds from each group were anesthetized by injection of a single dose of thiopental (20 mg/kg) into the wing vein and slaughtered by bleeding of the jugular vein. The final 10 birds/group were killed at 14 days post infection. At each time point the gastrointestinal content from the jejunum and ceca, as well as jejunal and cecal mucosa from 5 birds/group were taken to determine the gut microbiota. Intestinal segments were disclosed at the mesentery with sterile instruments and the digesta was removed. The luminal site of the intestinal segments was washed with sterile ice-cold PBS until the mucosa was completely cleaned from the digesta. The mucosa was rinsed several times with sterile ice cold PBS, after which the mucosa was collected aseptically by scraping off the mucosa using scalpel blades. All samples were stored at −80 • C until further processing. DNA Extraction, PCR Amplification of the 16S rRNA Gene, and Illumina MISeq Sequencing DNA from luminal content and gut mucosa samples was extracted using the PowerSoil R DNA Isolation Kit (MoBio Laboratories, Carlsbad, CA, USA) as described previously (Mann et al., 2014;Yasuda et al., 2015). The same protocol of DNA extraction was applied to luminal content and gut mucosa. From each of the 130 samples a total of 250 mg of gut content or mucosa was used for DNA isolation according to manufacturer's instructions. DNA concentration was determined by a Qubit fluorometer (Invitrogen, Carlsbad, CA, USA). The V345 hypervariable region of the 16S rRNA genes was amplified with the primers F341 (5 ′ -GTGYCAGCMGCCGCGGTAA-3 ′ ) (Zakrzewski et al., 2012) and R909 (5 ′ -CCGYCAATTYMTTTRAGTTT-3 ′ ) (Tamaki et al., 2011). An amplicon size of approximately 568 bp was produced. 16S rRNA gene PCRs, library preparation and sequencing were performed by Microsynth (Microsynth AG, Balgach Switzerland). Libraries were constructed by ligating sequencing adapters and indices onto purified PCR products using the Nextera XT Sample Preparation Kit (Illumina) and equimolar amounts of each of the libraries were pooled and sequenced on an Illumina MiSeq Sequencing Platform. Sequence data were analyzed with the software package QIIME (Caporaso et al., 2010). 
Low quality sequences (q < 20) were filtered, chimeric sequences were excluded by using the USEARCH 6.1 database (Edgar, 2010) and sequences were clustered into operational taxonomic units (OTUs; 97% similarity) with the QIIME script "pick_open_reference_otus.py." OTUs with less than 10 sequences were removed, resulting in 8285 OTUs, which were used for all downstream analysis. The representative sequences of the 50 most abundant OTUs over all sampling time points were classified against type strains using the Greengenes database (http://greengenes.lbl.gov) (DeSantis et al., 2006). Microbial Diversity Analysis Both alpha and beta diversity indices were used to estimate the microbiome diversity within-and between microbial communities. Calculations were done with the "summary.single" command in the software package mothur (http://www.mothur. org/; Kozich et al., 2013). Alpha diversity indices analysis included Chao1 index (richness estimate), abundance-based coverage estimator (ACE, richness estimate), Shannon's diversity, and Simpson's diversity index. For the Bray-Curtis similarity, the dataset was rarefied to the minimum number of sequences per sample. Rarefaction curve was constructed based on the observed number of OTUs and nearly reached asymptotes for all samples (data not shown). Statistical Analysis Statistically significant differences in relative abundance with regard to sampling sites and time were calculated using "metastats" in mothur, which is based on the homonymous bioinformatics program (White et al., 2009;Paulson et al., 2011). "Metastats" uses repeated t statistics and Fisher's tests on random permutations to handle sparsely-sampled features (White et al., 2009). Results were reported as a mean and standard deviation (SD). The significance level was set to P < 0.05. The P-values were adjusted with the Benjamini and Hochberg false discovery rate correction (FDR, q-value), and a q < 0.25 was considered significant (Lim et al., 2016). Furthermore, significant differences between the diversity estimators of the two groups were performed using the non-parametric Kruskal-Wallis-test followed by Mann-Whitney-test. PASW statistics 20, SPSS software (Chicago, Il., USA) was used for statistical analyses of diversity estimators. Accession Numbers Sequencing data are available in the European Nucleotide Archive (ENA) database under the accession number PRJEB14860. Sequence Analysis, Phylum and OTU Classification Sequencing of 130 samples yielded 51,825,306 quality-controlled sequences, clustering into 8285 operational taxonomic units (OTUs; 0.03 distance level). Throughout all gut sites 24 phyla were identified with Firmicutes, Proteobacteria, and Tenericutes being the most abundant ones. In Figure S1A, Tables S1A-D, relative abundances of all phyla are delineated with respect to age and groups. The results showed that in the jejunum and cecum, Firmicutes and Proteobacteria were the dominating luminal and mucosal-associated phyla in all birds investigated (Tables S2, S3). At the first day of life Proteobacteria were significantly higher in the jejunal (P = 0.0000, q = 0.0000) and cecal (P = 0.016; q = 0.059) mucosa of the birds and decreased thereafter, as no significant differences were found between samples from day 14 to day 28 of age (P = 0.140; q = 0.438 and P = 0.519; q = 0.955). On the contrary, Firmicutes were significantly lower at day 1 and increased thereafter (P = 0.001; q = 0.016 and P = 0.006; q = 0.055 in the jejunal and cecal mucosa, respectively). 
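The per-taxon comparisons with FDR correction reported above (P and q values per phylum) could be approximated along the following lines. This is a deliberately simplified stand-in, assuming a plain Mann-Whitney test per phylum rather than the metastats procedure run in mothur by the authors, and the abundance tables are simulated for illustration only (scipy and statsmodels assumed available).

```python
# Simplified stand-in for per-taxon significance testing with Benjamini-Hochberg
# FDR correction, as reported above. Not the metastats algorithm; abundances
# are simulated.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
phyla = ["Firmicutes", "Proteobacteria", "Bacteroidetes", "Actinobacteria", "Tenericutes"]

# Relative abundances (%) for 5 control vs. 5 infected birds, illustrative only.
control = {p: rng.uniform(0, 40, 5) for p in phyla}
infected = {p: rng.uniform(0, 40, 5) for p in phyla}

p_values = []
for p in phyla:
    _, pval = mannwhitneyu(control[p], infected[p], alternative="two-sided")
    p_values.append(pval)

# Benjamini-Hochberg correction; q < 0.25 considered significant as in the text.
reject, q_values, _, _ = multipletests(p_values, alpha=0.25, method="fdr_bh")

for p, pv, qv, sig in zip(phyla, p_values, q_values, reject):
    print(f"{p:15s} P = {pv:.3f}  q = {qv:.3f}  {'significant' if sig else 'n.s.'}")
```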
For infected birds, relative abundances of bacterial phyla at the two sampling time points carried out post infection are represented in Figure S1B, Tables S4A-D. Figure 1 shows that the phylum Proteobacteria decreased while Firmicutes increased at either 21 (7 dpi) or 28 days of age (14 dpi). There was a significant decrease in Actinobacteria and Proteobacteria in the jejunal mucosa at 14 dpi (P = 0.006; q = 0.100 and P = 0.005; q = 0.100), while Firmicutes and Bacteroidetes were more abundant in the infected birds compared to the controls (P = 0.005; q = 0.100 and P = 0.023; q = 0.217, Table S4A). However, in the cecal content and cecal mucosa, Bacteroidetes (P = 0.001; q =0.019) increased at 7 dpi, but decreased (P = 0.002; q = 0.026 and P = 0.005; q = 0.048) at 14 dpi in the infected birds compared with the controls, indicating that the Campylobacter infection modulates the jejunal and cecal phylum abundances in different ways. In Table 1, the 50 most abundant OTUs from all birds are listed including the internal OTU number, relative abundance together with the reference strain and similarity (compared with strains of the Greengenes database). Relative OTUs abundances at different ages in all birds are shown in Tables S5A-D, S6A-D. The OTUs and species abundances sorted by age at the four gut sites of the birds are shown in the heatmaps of Figure S2. In total, the 50 most abundant OTUs accounted for 73.9% of all sequences and of those 42 OTUs differed significantly in their relative abundances over all gut sites independent of the age (Tables 2, 3). At the first day of age, a notable high relative abundance of OTU 1, 25, 27, and 35 (best type strain hits: Escherichia coli, Enterococcus faecalis, Clostridium paraputrificum, and Clostridium sartagoforme) were found in both jejunal and cecal mucosa (Tables S5A,C), whereas OTU 38 (best type strain hit: Acinetobacter johnsonii) was only abundant in the jejunal mucosa and OTU 42 (best type strain hit: C. paraputrificum) was only abundant in the cecal mucosa. All these abundant OTUs decreased by age. In the jejunal mucosa, OTU 1 was the most abundant (57.9%), followed by the other four OTUs which ranged between 2.6 and 7.9%. Similarly, in the mucosa of the cecum, OTU 1 was highly abundant (65.9%), followed by OTUs 27, 25, 35, and 42 which ranged between 7.8 and 3.3%. The OTUs and species abundances sorted by gut sites of the infected birds compared with the control birds are shown in the heatmaps (Figure 2). Interestingly, in the infected birds, the abundance of E. coli and Eubacterium desmolans (best type strain hits) were lower at different gut sites ( Figure 3A). On the contrary, Clostridium spp. abundance was higher in the infected birds compared with the negative controls ( Figure 3B). Assessment of the Microbial Community Diversity Diversity indices estimating species richness and evenness for birds are shown in Figure 4. Diversity indices indicated that microbial richness and diversity increased with age. Interestingly, diversity indices were not different comparing samples from days 1 and 7. However, older chickens (14-28 days of age) had a significantly more diverse microbial community structure as indicated by the number of OTUs observed (Sobs), Chao1, ACE, Shannon's index, and Simpson index (P < 0.01). In addition, the microbial diversity in older chickens is more consistent, as there was no difference in diversity indices comparing samples from days 14 to 28. 
The results also revealed significant differences in the microbial diversity among jejunum and cecum as the chicken aged, supported by Sobs (P < 0.001), Chao1 (P < 0.001), ACE (P < 0.001), Shannon's index (P < 0.001), and Simpson index (P = 0.060) with a more complex diversity in the cecum compared with the jejunum. Furthermore, a difference in species richness among the luminal and mucosa-associated gut microbiota, independent of the age, was detected in all birds as supported by Sobs (P = 0.017), Chao1 (P = 0.015), ACE (P = 0.022), respectively. In the infected birds, significant differences in the microbial diversity among jejunum and cecum supported by Sobs (P < 0.001), Chao1 (P < 0.001), ACE (P < 0.001), Shannon's index (P < 0.001), and Simpson index (P = 0.011) were found. Additionally, an increase in the species richness among luminal and mucosa-associated gut microbiota of the infected birds at 14 dpi compared with those from 7 dpi was obtained. Diversity indices were not significantly different among the gut sites of infected and control birds. Exceptional to this, a higher species richness was noticed in the cecum content of infected birds at 14 dpi, supported by Sobs, Chao1, and ACE (P = 0.047, Figure 4C), indicating that the Campylobacter infection increased the microbiota complexity. Similarity and Stability of the Gut Microbiota Composition Over Time The microbial community similarity among all samples over time was assessed by calculating a Bray-Curtis similarity matrix. Community similarity analysis based on the Bray-Curtis index showed clear differences between gut sites and age, indicating strong shifts in microbial community structures (Figure 5). In addition, the Bray-Curtis index suggested that the birds at the first day of age displayed a high degree of dissimilarity compared with the other ages. It was also apparent that microbiota compositions of older birds were more similar compared with young birds. The Bray-Curtis index revealed clear differences between jejunum and cecum from infected birds at the two sampling time points post infection. Furthermore, the comparison of the microbiota between control and infected birds showed that community structures were more dissimilar at the OTUs level, demonstrating that the gut microbial communities changed as a result of infection. To measure the similarity between microbial communities in all birds at different ages, principal component analysis (PCA) was performed (Figure 6). PCA analysis showed that there was a clear clustering of the birds at days 1 and 7 of age in the jejunum and cecum compared with the other days. In addition, the microbial community of the older chickens clustered with less variation compared to young birds. PCA plots also demonstrate that, the microbial community was more separated in the ceca than in the jejunum. To delineate the shared species among the groups, a Venn diagram displaying the overlaps between gut sites at different ages and groups was performed ( Figure S3). The proportions of shared OTUs appear to be low at each gut site from day 1 to day 28 of age. These shared species, however, varied from one site to another. Furthermore, the analysis showed that only 399 OTUs (n = 1847 OTUs) were shared among the jejunal mucosa in the control and infected birds, while 745 OTUs (n = 2401 OTUs) were shared between the jejunal content in the control and infected birds at the two time points post infection ( Figure 7A). 
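The following is a minimal sketch, not the authors' mothur workflow, of how Bray-Curtis dissimilarities and a principal component ordination such as those described above can be computed from an OTU table; the small count matrix and sample labels are hypothetical placeholders.

```python
# Minimal sketch: Bray-Curtis dissimilarities between samples and a PCA of the
# community table. Counts and labels are hypothetical, not the study data.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.decomposition import PCA

otu = np.array([[120,  3,  0, 55,  1],   # jejunum, day 1  (hypothetical counts)
                [ 10, 40,  7,  2, 90],   # jejunum, day 28
                [ 80, 60, 30, 10,  5],   # cecum,   day 1
                [  5, 70, 65, 40, 20]])  # cecum,   day 28
labels = ["jej_d1", "jej_d28", "cec_d1", "cec_d28"]

# convert to relative abundances so samples with different depths are comparable
rel = otu / otu.sum(axis=1, keepdims=True)

# pairwise Bray-Curtis dissimilarity (0 = identical, 1 = no shared taxa)
bc = squareform(pdist(rel, metric="braycurtis"))
print(np.round(bc, 3))

# ordination: first two principal components of the relative-abundance table
scores = PCA(n_components=2).fit_transform(rel)
for name, (pc1, pc2) in zip(labels, scores):
    print(f"{name}: PC1={pc1:+.3f}, PC2={pc2:+.3f}")
```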
In the cecal mucosa and the cecal content, the comparison revealed that only 2218 OTUs (n = 6736 OTUs) and 2617 OTUs (n = 6860 OTUs) were shared, among control and infected birds combining the two time points post infection ( Figure 7B). These data demonstrated that 25-36% of the observed OTUs in the jejunum and cecum were shared between the control and infected birds, respectively. DISCUSSION The intestinal microbiota acts as a physical barrier for incoming pathogens and plays an important role in the host resistance against infections by both direct interactions with pathogenic bacteria via competitive exclusion, such as occupation of attachment sites or consumption of nutrient sources, and indirectly by influencing the immune system via production of antimicrobial substances (Sekirov et al., 2010). Development of the gut microbiota in chickens occurs immediately after hatching and by getting older, this microbiome becomes very diverse until it reaches a relatively stable dynamic state (Pan and Yu, 2014). Interactions of the intestinal microbiome with the host and certain microorganisms have profound effects on bird health, and are therefore of great importance for poultry production. Consequently, in the present study, the composition of the gut microbiota of chickens in a longitudinal study from day 1 to day 28 of age was analyzed and the differences between content and mucosa-associated gut microbiota were investigated. In order to extend the range of analyses comparisons were performed between control chickens and chickens infected at 14 days of age with C. jejuni. In this study, a high diversity of phyla (15 in the jejunal and 4 in the cecal mucosal samples) was found at day 1 of life, indicating a rapid intake of environmental organisms after birth. In addition, the composition of the gut microbiota differed substantially between young and older birds, with Proteobacteria being significantly more present at the first day of life and decreasing thereafter, whereas the Firmicutes were the predominant phylum in older birds. This is in agreement with Lu et al. (2003) who found that the gut is firstly colonized by the phylum Proteobacteria, particularly by the family Enterobacteriaceae. In older birds, the phylum Firmicutes mainly represented by Lachnospiraceae, Ruminococcaceae, Clostridiaceae, and Lactobacillaceae dominated. As a consequence, the chicken gut is firstly colonized by facultative aerobes which are substituted later on by anaerobes. Obviously, oxygen consumption by the aerobic bacteria alters the gut ecosystem toward more reducing conditions, which facilitates subsequent growth and colonization of the obligate anaerobes (Wise and Siragusa, 2007). Besides Proteobacteria and Firmicutes, also lower abundant phyla (e.g., Actinobacteria and Tenericutes) changed significantly with time, indicating high dynamics in the re-organization of the whole microbiome through time. Taken together, the present study revealed that the chicken gut is largely dominated by the phyla Proteobacteria and Firmicutes, with lower proportions of Actinobacteria, Bacteroidetes, and Tenericutes. Similarly, previous studies have also shown that Firmicutes, Bacterioidetes, and Proteobacteria are the most common phyla in the chicken ceca (Wei et al., 2013;Oakley et al., 2014;Sergeant et al., 2014). Interestingly, jejunal, and cecal microbiota were found to be distinct and certain acid-tolerant bacteria, mostly Acidobacteria, were present in the jejunum only. 
Altogether, the results demonstrated that the abundance of bacteria varied between the jejunum and the cecum, with some species more present in the jejunum (e.g., Acinetobacter and Acidobacteria) and others (e.g., Bacteroides and Clostridium) being predominant in the cecum of chickens. This and other variations can be explained by the fact that feed passes quickly through the foregut and is retained for hours in the hindgut. In addition, the small intestine is mainly responsible for food digestion and absorption, while the large intestine, especially the cecum, is responsible for microbial fermentation, further nutrient absorption and detoxification of substances that are harmful to the host (Gong et al., 2002). Chickens investigated in the current study had a high abundance of E. coli and E. faecalis (best type strain hits) in the first week of life which might potentially increase their resistance to other bacterial infections. E. coli, a facultative anaerobe bacterium, was the dominant species in the early life of chickens. Thus, a depletion of E. coli during the second week of life could potentially affect the host susceptibility to enteric pathogen infections, representing a key role for these gut microbiota in host resistance. This decrease in E. coli abundance has been attributed with a beginning dominance of anaerobes (Zhu and Joerger, 2003). It may be possible that such disturbances in the community structure allow a pathogen to colonize and proliferate. Anyhow, it remains hypothetical whether these diversity changes influence the susceptibility to pathogens and the outcome of infection. The current results revealed that E. coli, E. faecalis, C. paraputrificum, and C. sartagoforme (best type strain hits) were more predominant in the mucosa than in the lumen, suggesting significant implications for birds' health, considering that the mucosa-associated bacteria are of great importance in the host mucosal responses with consequences for the mucosal barrier (Ott et al., 2004). Despite the high prevalence of Campylobacter in chickens the mechanism of colonization in the gut is still poorly understood. The high bacterial load in the gut and the establishment of a latent infection characterized by continuous shedding indicates that Campylobacter in chickens can modify the microbiota composition. In the current study it could be shown that Campylobacter colonization shifted the two major phyla towards an enrichment of Firmicutes with concomitant reduction of Proteobacteria. Interestingly, a reverse correlation between Firmicutes and Proteobacteria was observed, suggesting a possible antagonistic interaction between these two phyla. According to Pan and Yu (2014) alterations in one phyla or species may not only affect the host directly, but can also disrupt the entire microbial community. Notably, bacterial taxa belonging to the phyla Firmicutes are known to be involved in the degradation of complex carbohydrates (not absorbed by the host) and in the production of SCFAs (Thibodeau et al., 2015). Thus, the SCFAs production by Firmicutes might, at least partially, explain their dominance in the infected birds, which have a high SCFAs requirement as a source of energy for C. jejuni to colonize the chicken gut. Furthermore, Brown et al. (2012) reported that members of the phylum Firmicutes can inhibit the growth of opportunistic pathogens, such as E. coli, which has also been shown in the present study. 
Besides these major shifts, low-abundance phyla (e.g., Actinobacteria and Tenericutes) were also affected by the Campylobacter infection, which could further disequilibrate the microbiome composition. Similarly, Johansen et al. (2006) found in a denaturing gradient gel electrophoresis (DGGE)-based experiment that C. jejuni colonization affected the development and complexity of the microbial communities of the ceca over 17 days of age. Furthermore, Qu et al. (2008) noted that the community structure of the cecal microbiome from C. jejuni-challenged chickens had greater diversity and evenness, with a higher abundance of Firmicutes at the expense of the Bacteroidetes and other taxa. Sofka et al. (2015) also reported that Campylobacter carriage, assessed in samples from slaughterhouses, was associated with moderate modulations of the cecal microbiome, as revealed by an increase in Streptococcus and Blautia relative abundance in birds of 56 days of age originating from different farms and production types. Recently, Thibodeau et al. (2015) also found that C. jejuni colonization induced a moderate alteration of the chicken cecal microbiome beta-diversity at 35 days of age. This study's results strongly suggest that the Campylobacter-associated alterations of the gut microbiota were either a direct effect of the interaction of C. jejuni with the microbiota, a consequence of the host responses, or a combination of both (Barman et al., 2008; Mon et al., 2015). The obtained results indicate that the influence of a Campylobacter infection on microbial communities was more pronounced at 14 dpi than at 7 dpi. This could be explained by an increased load of Campylobacter at the later time point, as demonstrated in recent studies using the same C. jejuni strain (Awad et al., 2014, 2015a,b, 2016). We also found significant differences in the abundance of certain bacterial species in the infected birds compared with the controls. C. jejuni caused a significant decrease in E. coli (best type strain hit) in the microbiota of infected birds in both the jejunum and cecum. This is in agreement with our previous study, which showed that Campylobacter colonization decreased E. coli loads in the jejunum and cecum at 7 dpi and at 14 dpi, but increased E. coli translocation to the liver and spleen of the infected birds as determined by conventional bacteriology (Awad et al., 2016). Thus, the current results indicate that the relative abundance of E. coli could be an important determinant of susceptibility to a Campylobacter infection in particular and to Gram-negative pathogens in general. In contrast to the Campylobacter-E. coli interaction, it was found that the relative abundance of Clostridium spp. was higher in the infected birds compared with the negative controls, indicating a link between C. jejuni and Clostridium. This confirms data from earlier studies in which a positive correlation between high levels of Clostridium perfringens (>6 log) and the colonization of C. jejuni was found by real-time quantitative PCR (Skånseng et al., 2006; Thibodeau et al., 2015). This might be due to the fact that C. jejuni acts as a hydrogen sink, leading to improved growth conditions for some Clostridia through increased fermentation (Kaakoush et al., 2014). This link can also be explained by the fact that organic acids produced by Clostridium could be used by C. jejuni as an energy source.
Furthermore, it was found that a Campylobacter infection induces excess mucus production in the intestine (Molnár et al., 2015), which may consequently enhance Clostridium proliferation, as increased mucin secretion in the gut provides an opportunity for Clostridium spp. to proliferate (M'Sadeq et al., 2015). Overall, the higher abundance of Campylobacter and Clostridium spp. might result in higher endotoxin production with a subsequent increase in intestinal permeability that facilitates colonization and enhances bacterial translocation from the intestine to the internal organs, which is well in agreement with our previous results (Awad et al., 2015a, 2016). Finally, the strong shifts in the bacterial microbiome in the current study might help to explain why a Campylobacter infection is age dependent and why chickens in the field mainly become colonized at an age of 2 to 4 weeks (Newell and Fearnley, 2003; Conlan et al., 2007). In agreement with this, Bereswill et al. (2011) demonstrated that a shift of the intestinal microbiota in humans was linked with an increased susceptibility to C. jejuni. In addition, Haag et al. (2012) demonstrated that C. jejuni colonization in mice depends on the microbiota of the host and, vice versa, that Campylobacter colonization induces a shift of the intestinal microbiota. This was also observed in the present study, as community structures were more dissimilar at the OTU level in the infected birds compared with the controls. Moreover, in the infected birds, the populations of beneficial microbes, such as E. coli and E. desmolans, were comparatively lower than those of potentially pathogenic bacteria, such as Clostridium spp., underlining the need for modulation of the gut microbiota to improve the gut health of the infected birds. CONCLUSION In the current study a substantial change in the composition of the luminal and mucosa-associated gut microbiota in broiler chickens from day 1 to day 28 was observed. It could also be demonstrated that a C. jejuni infection in chickens was associated with significant changes in the composition of the intestinal ecosystem. Furthermore, these changes of the gut microbiota could lead to intestinal dysfunction, as evidenced in our previous studies. In this context, the results provide new insights into the microecological divergence of the intestinal microbiota with and without a Campylobacter infection and illustrate the C. jejuni-host crosstalk within the gut of broiler chickens. Understanding the relationship between disruption of the normal gut microbiota and Campylobacter infection may lead to improved control strategies in order to minimize the consequences for the chicken host and the risk of bacterial spread to humans. AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct and intellectual contribution to the work and approved it for publication. ACKNOWLEDGMENTS Part of this work was performed within the CEPO (Centre of Excellence for Poultry) project, which was funded by the European Regional Development Fund. Figure S2 | Heatmaps of the most-abundant OTUs sorted by gut sites and age in the control birds; the heat map shows the relative abundance of a given phylotype, with colour scaling from 0 to higher than 25% (n.d., not detected; n.a., not analyzed). Figure S3 | Venn diagrams showing the shared OTUs for the control birds at different gut sites from day 1 to 28: (A) jejunum; (B) cecum. jm, jejunal mucosa; jc, jejunal content; cm, cecum mucosa; cc, cecum content; (c), control; (i), infected; d, day.
Results of Callisto Eye System in Toric Intraocular Lens Alignment Objectives: This study was an evaluation of the effectiveness of the Callisto eye image-guided, markerless system (Carl Zeiss Meditec AG, Jena, Germany) in toric intraocular lens (IOL) positioning. Methods: The results of a novel, markerless, alignment system used for IOL positioning were analyzed in this retrospective study. Preoperatively, reference image registration was performed with the IOLMaster 700 biometer (Carl Zeiss Meditec AG, Jena, Germany) and transferred to the Callisto eye system, which was used in conjunction with an Opmi Lumera 700 microscope (Carl Zeiss Meditec AG, Jena, Germany). Using the Callisto Z Align technology, a toric IOL was aligned precisely with the steep axis. One day after surgery, the pupil was fully dilated and a thin slit was placed on the marker of the toric IOL and the angle was measured using an axis calculator smartphone application. The degree of the measured angle and the preoperatively determined angle were compared. Results: Sixty eyes of 46 patients were included. The difference in the absolute angle between the intended and the postoperative (at day 1) axes was a mean of 2.71±1.64°. Conclusion: The Callisto eye image-guided, markerless system successfully provided assistance in precisely positioning the toric IOL. Introduction Many studies have demonstrated the effectiveness of astigmatism correction with a toric intraocular lens (IOL) (1)(2)(3). Implantation of a toric IOL at the correct axis is of the utmost value to correct astigmatism in cataract surgery. The manual marking technique is a popular method, although it has several disadvantages, such as potential human error by both the surgeon and the patient during marking, as well as fading of the corneal marks. The markerless Callisto eye system (Carl Zeiss Meditec AG, Jena, Germany) is a tool introduced to address these problems. This study is an examination of results achieved at a single center with this intraoperative IOL positioning system. Methods This study was approved by the institutional review board of the Istanbul Yeni Yuzyil University Faculty of Medicine and the ethical standards of the Declaration of Helsinki were observed throughout. In this retrospective study, the medical records of patients who underwent phacoemulsification with implantation of a toric monofocal IOL between January 1, 2018 and December 31, 2018 were reviewed. The inclu-sion criteria were the presence of corneal astigmatism of 1.25 diopter (D) or more, uncomplicated phaco surgery, use of Callisto eye ( Fig. 1) during surgery for toric IOL positioning, available first-month medical records, and a pupil dilatation that enabled observation of marking on the IOL. During this period, 96 toric IOLs were implanted and 60 eyes of 46 patients met the study criteria. There were 20 males and 26 females, aged 32 to 72 years (mean: 56±16.4 years). Preoperative Assessment All of the patients provided written, informed consent before the surgery. A detailed slit-lamp examination was performed preoperatively. Best spectacle-corrected visual acuity, non-contact intraocular pressure, cornea astigmatism, and anterior chamber evaluation with a Pentacam HR system (Oculus Optikgeräte, Wetzlar, Germany) were recorded. The diopter of the toric IOL was determined using an online calculator. 
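As a small illustration of the postoperative check reported above (the absolute difference between the intended and the measured toric IOL axes, reported as a mean of 2.71±1.64°), the following Python sketch computes the acute angle between two meridians on the 0-180° scale; the example axis pairs are hypothetical and do not come from the study data.

```python
# Minimal sketch: acute angle between the intended axis and the axis measured
# on the IOL marker, computed on the 0-180 degree meridian scale.
import statistics

def axis_difference(intended_deg, measured_deg):
    """Acute difference between two meridians (degrees, 0-180 scale)."""
    d = abs(intended_deg - measured_deg) % 180
    return min(d, 180 - d)

# hypothetical intended vs. day-1 measured axes for a few eyes
pairs = [(95, 97), (2, 178), (48, 45), (130, 133)]
diffs = [axis_difference(i, m) for i, m in pairs]
print(diffs)                                     # [2, 4, 3, 3]
print(f"mean = {statistics.mean(diffs):.2f}, sd = {statistics.stdev(diffs):.2f}")
```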
Preoperatively, high quality reference infrared images were registered with an IOL Master 700 biometer (Carl Zeiss Meditec AG, Jena, Germany) while the patient was in a seated position and transferred to the Callisto eye system, which was connected to an Opmi Lumera 700 microscope (Carl Zeiss Meditec AG, Jena, Germany). During toric IOL positioning, these reference images were matched with live stream images from the Opmi Lumera 700 and horizontal and steep axes were determined with the Z Align function. Surgical Technique All of the surgeries were performed by a single experienced phaco surgeon (K.B.) under topical anesthesia using a 2.2 mm temporal clear corneal incision. Following injection of an ophthalmic viscosurgical device (OVD), capsulorhexis, phacoemulsification, and irrigation/aspiration of cortical material were performed. The OVD was injected into the capsular bag and a monofocal hydrophobic toric IOL (AcrySof IQ SN6AT3-T9; Alcon Laboratories, Ft. Worth, TX, USA) was implanted in the capsular bag and positioned at the determined axis using the Callisto eye system. The OVD was then removed from the anterior chamber and behind the IOL. Postoperative topical medications applied were dexamethasone 0.1% 4x, nepafenac 0.1% 4x, and moxifloxacin 0.5% 4x for 1 month. Postoperative Assessment At postoperative day 1, in order to analyze the accuracy of the Callisto eye system, the pupil was fully dilated with topical tropicamide and epinephrine. A slit-lamp examination was conducted while the patients were asked to look straight ahead. A thin slit was centered and rotated until it overlapped with the marker on the toric IOL. It was ensured that the operated and non-operated eyes were on the same level. The angle of this thin slit was measured accurately with a smartphone axis calculator application. Discussion Positioning of the toric IOL on the correct axis is very important since it has been established that a deviation of 3° from the intended axis could cause a 9.05% decrease in astigmatic correction (4). Several techniques have been developed to ensure the best possible positioning, such as manual marking, iris registration, wavefront aberrometry, and imageguided systems (5)(6)(7). This study was an evaluation of the effectiveness of the Callisto eye image-guided system. Preoperative image registration was performed with an IOLMaster 700 biometer while the patient was in a seated position. This device approved the images automatically only if they were of high quality. Therefore, the registration of images was not prone to human error. In several studies, the effectiveness of automatic computer-based systems both for toric IOL alignment and postoperative toric IOL alignment has been investigated. Solomon et al. (8) assessed the Callisto eye system in comparison with intraoperative aberrometry. They found that Callisto eye yielded less remaining refractive astigmatism. In another study, Carey et al. (9) compared the effectiveness of slitlamp slit (3.1±1.6°) with a refractive power/corneal analyzer system (2.7±2.0°) for postoperative toric IOL angle measurement and found that the 2 methods were equally effective. Elhofi and Helaly (10) compared the effectiveness of the manual marking technique (4.33±2.72°) with the Verion Image-Guided System (Alcon Laboratories, Ft. Worth, TX, USA) (2.4±1.96°) and found the 2 techniques to be equally effective. In a similar article, Montes de Oca et al. 
(11) evaluated and compared manual marking (2.88±2.18°) with the computer-guided TrueVision 3D Visualization and Guidance System (Truevision Systems, Goleta, CA, USA) (2.96±2.54°) and found similar accuracy in toric IOL positioning. Webers et al. (12) found that both the Verion Image-Guided System (Alcon Laboratories, Ft. Worth, TX, USA) (1.3±1.6°) and the manual marking technique (2.8±1.8°) were similarly effective. Woodcock et al. (13) compared intraoperative aberrometry with the manual technique and found less refractive astigmatism in the intraoperative aberrometry group. The results of our study were similar. We found that there was a 2.71±1.64° absolute angle difference between the toric IOL axis and the target axis at postoperative day 1. Based on our findings and previous reports (8)(9)(10)(11)(12)(13), it can be concluded that computer-based systems are trustworthy. There are some limitations to our study. First, this was a retrospective study with a limited number of patients. Prospective research with a larger number of patients may provide useful data. Second, because this was designed as an evaluation study, we did not have a control group. In a future study, we plan to compare the manual technique with the Callisto eye system. Third, we evaluated the position of the toric IOL at postoperative day 1. Long-term assessment of the toric IOL may give us valuable information about IOL rotation. Conclusion In conclusion, our results demonstrated that the Callisto eye system enabled precise positioning of the toric IOL on the intended axis. This system is easy to use but has a steep learning curve. Further prospective studies of larger patient groups are needed to support this study. Disclosures Ethics Committee Approval: The Ethics Committee of Institutional review board of the Istanbul Yeni Yuzyil University Faculty of Medicine provided the ethics committee approval for this study (16.01.2020/058).
Design and Build of the Social Security Equity Crowdfunding Application as Funding Optimization for MSMEs. The existence of MSMEs is strategically important for the Indonesian economy because of their contribution to gross domestic product (GDP) and high employment. In addition, the MSME sector has proven resilient in the face of national and international economic crises. A high contribution, however, does not mean that MSMEs face no problems. One of the problems faced by MSMEs is a limited ability to develop their business due to restricted access to capital. Therefore, this study aims to design and build a funding application (social security equity crowdfunding) for MSMEs. The study uses a research and development approach with four main stages: (1) carry out a preliminary study through theoretical and field studies in order to develop an application framework; (2) compile the application according to the design produced in the preliminary stage; (3) conduct expert validation tests and limited-scale trials; and (4) conduct a large-scale trial involving 30 MSMEs in Tulungagung, Kediri and Blitar Regencies. The research produced a funding application with a social security equity crowdfunding scheme that can assist MSMEs in developing alternative funding outside financial institutions, thereby increasing the business capacity and competitiveness of MSMEs at the national and international levels. Introduction The existence of Micro, Small and Medium Enterprises (MSMEs) in the national economy is very strategic because they are business units that absorb a large number of workers and play a role in distributing the results of development to the community [1]. The economic recovery, which is currently experiencing a contraction, cannot be separated from the role of MSMEs, whose number is currently estimated at more than 40 million, more than 99 percent of which are micro-scale businesses. The large number of MSMEs contributes to labor absorption, with 97% of the entire national workforce working in the micro, small and medium enterprise sector [2]. These conditions indicate that the position of MSMEs is very strategic and can be a stimulator of national economic growth [3]. It is therefore imperative to strengthen the MSME group, which involves many parties outside the government. In addition, MSMEs are the largest contributors to national gross domestic product (GDP). Over the past year, the Ministry of Cooperatives and Small and Medium Enterprises (Kemenkop UMKM) recorded that the number of MSMEs reached 64.2 million, with a contribution to GDP of 61.07 percent, or Rp 8,573.89 trillion. Although they have a strategic role in the national economy, MSMEs are not free of business problems. One of the problems faced by MSMEs is limited capital for developing a business [4]. This problem is inseparable from the fact that many MSMEs are still unable to access capital from financial institutions [3]. Departing from the above problems, an innovative funding model is needed in order to strengthen businesses while supporting the performance of MSMEs in Indonesia.
The funding innovation for MSMEs developed in this study is a funding platform that brings MSMEs together with investors. This concept has since evolved into the equity crowdfunding model. Equity crowdfunding is a method of raising funds to finance projects being developed by MSMEs through joint venture schemes carried out by investors. In its development, the concept of equity crowdfunding has grown into social security equity crowdfunding, which emphasizes business ownership by investors based on plasma-core business partnerships. This equity crowdfunding model is intended to act as a funding medium for small and medium-sized start-ups and to increase the capital and investment capacity of MSME players [5]. This convenience further elevates the role of the fintech sector in running the national economy and in the transformation from the conventional to the digital era [6]. The presence of fintech platforms is very helpful because fintech is a combination of financial services and technology that makes it easier for people to save, borrow and invest online [7]. This statement is reinforced by data from the Financial Services Authority (OJK), which show an increase in all related elements, from the number of fintech companies to lending, which reached IDR 146.25 trillion from January to November 2020 [2]. Equity crowdfunding-based fintech already has a legal umbrella from the Financial Services Authority (OJK). Various previous studies describe a relationship between people's motivation and business funding [8], [9], [10], [11], between the use of platforms and business funding [12], [13], [11], and the use of crowdfunding applications to support funding for businesses [14], [15], [16]. Based on this previous research, there is a role for funding development using the security crowdfunding model and for the use of fintech in supporting business funding. Changes in business models today are inseparable from financial technology (fintech), which makes business processes faster [17]. One form of development and implementation of financial technology (fintech) is equity crowdfunding [6]. In terms of theoretical development and implementation, the crowdfunding concept is broader than crowdsourcing [18]. The various definitions of crowdfunding essentially describe a form of cooperation by people who raise funds to support other people's efforts [19]. Through the results of this research, an application is produced that answers the need for easy and fair access to funding with a family-based concept called social security crowdfunding (SSC) for MSMEs in Indonesia. Many studies have addressed the concept of crowdfunding, but few have developed the concept of social security crowdfunding. Therefore, the problem raised in this study is how to design, build and implement a social security equity crowdfunding application as funding optimization for Micro, Small and Medium Enterprises (MSMEs). Research Method The design of this study uses a research and development approach to answer the research objectives set out at the beginning. To produce research products that meet eligibility standards, a systematic guide to the steps taken by the researchers must be prepared [20]. To produce the research products, the following four stages of research were implemented.
Preliminary study stage At this stage, a preliminary study of 30 MSMEs in Tulungagung, Kediri and Blitar Regencies was carried out. The results of the preliminary study produce a formulation of the design of the social security equity crowdfunding application. Application stage The next stage is to design the initial form of the social security equity crowdfunding application, up to the construction of an application that is ready for expert validation and limited-scale testing. Expert trial and limited-scale stage The activities carried out at this stage are trials of the materials and programs by experts. Before conducting the expert trials, validity and reliability tests were first carried out. After the validity and reliability testing process, the researchers continue with limited-scale trials involving 10 MSME managers. Large-scale trial and repair stage At this stage, a large-scale trial of the model was conducted with 30 MSME managers and the Blitar Regency Cooperative and MSME Office, which were the objects of research. This data analysis technique is used to process data from respondents using the application. The results of the large-scale trial form the basis for the improvement process before the application is officially used by MSME actors. This study involved 30 MSMEs in Tulungagung, Kediri and Blitar Regencies. The collected data were then analyzed descriptively, covering the data obtained from the validation results of material and media experts and the questionnaires completed by users. Several data analysis techniques were used in this study: 1. Validity test; a construct validity test is used in this study, with the product moment (Pearson) formula. Preliminary study stage There are two main activities in the preliminary stage, namely a literature study followed by a field study. The literature study was conducted by looking for references on the concept of empowering MSMEs in the aspects of funding and investment. Meanwhile, the field study activities begin with observations in the field, identifying the character of financing and investment in MSMEs. One of the objectives of this field observation activity is to identify the needs for the preparation of the social security equity crowdfunding application.
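As a hedged sketch of how the construct validity test with the product moment formula mentioned above can be carried out, the following Python snippet correlates each questionnaire item with the total score; this is an assumed interpretation of the procedure, and the respondent data are hypothetical.

```python
# Minimal sketch (assumed interpretation): construct validity checked with the
# Pearson product-moment correlation between each item and the total score.
import numpy as np
from scipy.stats import pearsonr

# rows = respondents, columns = Likert-scale items (hypothetical data)
answers = np.array([[4, 5, 4, 3],
                    [3, 3, 4, 2],
                    [5, 5, 5, 4],
                    [2, 3, 2, 2],
                    [4, 4, 5, 3]])
total = answers.sum(axis=1)

for j in range(answers.shape[1]):
    r, p = pearsonr(answers[:, j], total)
    print(f"item {j + 1}: r = {r:.3f}, p = {p:.3f}")  # an item is 'valid' if r exceeds the critical r
```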
The localization scheme with the social security equity crowdfunding model described in the picture above shows that there are three parties in the process of financing social security equity crowdfunding for MSMEs. The first party is the MSME actors, as the parties who need capital; the second party is the potential investors who will place funds for investment activities; and the third party is the investment operator or manager, who plays an administrative and substantive role. The concept of localization with the social security equity crowdfunding model is expected to encourage MSMEs to upgrade into professional and accountable business entities, so that the social security equity crowdfunding application to be built can become a partner for MSMEs in increasing business capacity and an alternative investment for investors in Indonesia. Application development stage After the literature study and field study, the next step is to analyze and formulate the content of the application. With branding in mind, so that the application is more easily recognized by the public, it was decided to use the name "urunUMKM". The concept of "urunUKM" means that the community has the opportunity to participate in national economic empowerment by increasing the business capacity of MSMEs. The following is an overview of the "urunUKM" application. The main page of the application is a menu of options for investors regarding project offers from MSMEs. On this page, investors are given information on alternative businesses that could become their partners when investing their capital. To make it easier for investors to select investment partners, various types of information relevant to investment decisions, including business profiles and the business development potential offered by the MSMEs, are explained in detail in the application. When entering the menu feature of the application, the information displayed includes capital status, daily data, balance, forums, news and reports. Furthermore, when users choose to act as a party who needs capital, they must upload a proposal for the division of business ownership based on the project to be run. The proposal submitted by the MSME is then presented to investors. Investors can choose an attractive and profitable business. In addition, in order to increase investor confidence in fund managers, the application provides reports to investors periodically. The report displayed in the application is a financial statement; through this report, the company's performance in a given period is expected to be known.
The concept of funding with the social security equity crowdfunding model attracts investor involvement and supports more informed and profitable business decisions. A further impact is that, as the performance of MSMEs increases and they become more trusted by investors, their productivity rises so that they can absorb more workers. This "urunUKM" application is expected to become one of the incubators that will create a healthy MSME business ecosystem and make the continuity of MSME businesses better guaranteed, so as to reduce unemployment and become one of the accelerators of the Indonesian economy. Application trial phase The next step in preparing the "urunUKM" application is a limited review and testing by a team of experts. This test should check not only the reliability of the application process, but also the reliability of the application created. In this study, effectiveness and reliability tests were conducted: effectiveness tests were carried out by a team of experts, and application acceptance rate tests were carried out by users. The results of each trial are described below. Reliability Test The results of the reliability test show that the questionnaire variables have a Cronbach's alpha coefficient greater than 0.6, so the question instruments used in the user and expert questionnaires can be considered reliable. Expert team validation Model evaluation was carried out by experts or practitioners using theory-based evaluation tools as indicators of expert evaluation. Expert validation of the designed application resulted in an acceptance rate of more than 60%. This means that the programming, content and display aspects were rated well. As a result of this peer review process, the experts agree that the application can serve as an alternative source of funding for MSMEs. Application acceptance rate test In order to assess the level of user satisfaction with the "urunUKM" application, an evaluation in the form of an application acceptance level test is needed. The results of the evaluation of the application model show that the acceptance value of the "urunUKM" application is more than 80%. This means that the application has been rated very good based on appearance, material aspects and benefits. Conclusion Based on the results and discussion described through the stages of this study, the conclusions are as follows. 1. The social security equity crowdfunding application is a funding application for MSME players whose function is not only to provide an alternative source of funding, but also to create a healthy MSME business ecosystem, so that it can become one of the accelerators of the Indonesian economy. Table 1: Reliability test, carried out with the Cronbach's alpha formula; the results were declared reliable if the resulting alpha exceeded the critical value for reliability [18]. 3. Descriptive data analysis: descriptive data analysis presents the results of the study based on the data obtained; percentages and qualitative criteria for development and revision decision making can be established from the corresponding table.
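The following minimal sketch illustrates the Cronbach's alpha reliability criterion (alpha > 0.6) referred to above; the formula implementation and the Likert-scale answer matrix are illustrative assumptions, not the study's actual data or software.

```python
# Minimal sketch (assumed formula): Cronbach's alpha for a questionnaire,
# declared reliable in the study when alpha > 0.6. Data are hypothetical.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1.0 - item_var / total_var)

answers = np.array([[4, 5, 4, 3],
                    [3, 3, 4, 2],
                    [5, 5, 5, 4],
                    [2, 3, 2, 2],
                    [4, 4, 5, 3]])
alpha = cronbach_alpha(answers)
print(f"Cronbach's alpha = {alpha:.3f}", "-> reliable" if alpha > 0.6 else "-> not reliable")
```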
Serial FLT PET imaging to discriminate between true progression and pseudoprogression in patients with newly diagnosed glioblastoma: a long-term follow-up study Purpose Response evaluation in patients with glioblastoma after chemoradiotherapy is challenging due to progressive, contrast-enhancing lesions on MRI that do not reflect true tumour progression. In this study, we prospectively evaluated the ability of the PET tracer 18F-fluorothymidine (FLT), a tracer reflecting proliferative activity, to discriminate between true progression and pseudoprogression in newly diagnosed glioblastoma patients treated with chemoradiotherapy. Methods FLT PET and MRI scans were performed before and 4 weeks after chemoradiotherapy. MRI scans were also performed after three cycles of adjuvant temozolomide. Pseudoprogression was defined as progressive disease on MRI after chemoradiotherapy with stabilisation or reduction of contrast-enhanced lesions after three cycles of temozolomide, and was compared with the disease course during long-term follow-up. Changes in maximum standardized uptake value (SUVmax) and tumour-to-normal uptake ratios were calculated for FLT and are presented as the mean SUVmax for multiple lesions. Results Between 2009 and 2012, 30 patients were included. Of 24 evaluable patients, 7 showed pseudoprogression and 7 had true progression as defined by MRI response. FLT PET parameters did not significantly differ between patients with true progression and pseudoprogression defined by MRI. The correlation between change in SUVmax and survival (p = 0.059) almost reached the standard level of statistical significance. Lower baseline FLT PET uptake was significantly correlated with improved survival (p = 0.022). Conclusion Baseline FLT uptake appears to be predictive of overall survival. Furthermore, changes in SUVmax over time showed a tendency to be associated with improved survival. However, further studies are necessary to investigate the ability of FLT PET imaging to discriminate between true progression and pseudoprogression in patients with glioblastoma. Introduction Glioblastoma (GBM) is the most common and most aggressive primary brain tumour. It accounts for more than 50% of all gliomas and has an incidence rate of 3.19 per 100,000 in the United States [1]. Current first-line treatment, consisting of maximal surgical resection followed by postoperative radiation with concomitant and adjuvant temozolomide (TMZ) therapy, has improved 2-year survival from 11% to 27% and 5-year survival from 2% to 10% [2]. However, response evaluation of this treatment in these patients is problematic because of the difficulty in distinguishing recurrent tumour (i.e. true progression) from pseudoprogression. Pseudoprogression is defined as progressive gadolinium-enhanced lesions on MRI immediately after the end of concurrent chemoradiotherapy, followed by stabilisation or spontaneous improvement in the contrast-enhanced lesions without further treatment other than adjuvant TMZ [3,4]. This is observed in 28-66% of all GBM patients undergoing chemoradiotherapy, and primarily occurs within the first 3 months after completion of chemoradiotherapy [5]. The difficulty in distinguishing true progression from pseudoprogression impedes clinical decision making in these patients. In patients with pseudoprogression, standard treatment with adjuvant TMZ should be continued, whereas in patients with true tumour progression, other treatment modalities, although scarce, or palliative supportive care are more appropriate.
Interestingly, 18 F-fluorothymidine (FLT) is an 18 F-labelled thymidine analogue that is taken up preferentially by proliferating cells. FLT tracer uptake reflects thymidine kinase 1 activity, which is involved in DNA synthesis, and can be used as a measure of cell proliferation. In several tumour types, FLT uptake measured with PET corresponds to the Ki67 proliferation index, and its change is correlated with response to therapy [13,14]. In glioma patients, FLT uptake has been used for tumour grading and is correlated with Ki67 proliferation index [15,16]. Moreover, FLT PET has been found to perform better in predicting survival and recurrence in glioma patients than FDG PET and MRI [17,18]. However, to date, no prospective study has been conducted to determine the ability of FLT PET to discriminate between pseudoprogression and true progression. Therefore, the aim of this prospective study in patients with newly diagnosed GBM was to determine whether FLT PET scans, performed before and after chemoradiotherapy, can discriminate between true progression and pseudoprogression as measured by MRI after three courses of adjuvant TMZ. In addition, MRI responses were compared and verified in relation to the disease course during long-term follow-up. Patients and treatment Patients with newly diagnosed GBM or gliosarcoma (WHO grade IV, hereafter referred to as GBM) who were eligible for standard treatment with radiotherapy and TMZ were prospectively included. After surgical resection or biopsy, patients were treated with radiotherapy consisting of 2 Gy irradiation 5 out of 7 days per week for 6 weeks, for a total dose of 60 Gy. Patients received concomitant TMZ orally at a dose of 75 mg/m 2 daily for 6 weeks. After a treatment break of 4 weeks, patients received up to six cycles of adjuvant TMZ (150-200 mg/m 2 ) for 5 days every 28 days. The use of corticosteroids during treatment was recorded. No changes in treatment were introduced based on the results of the FLT PET scan. Overall survival was calculated from the date of informed consent to the date of death or last known date alive, censored at the time of analysis (end of December 2017). Written informed consent was obtained from all individual participants included in the study. The protocol was approved by the local medical ethics committee and registered with the Dutch trial register (NTR3680). MRI imaging Patients underwent standard radiological follow-up with MRI (1.5 T using T1, T2 and contrast-enhanced 3D T1 gradient echo sequences) within 72 h of surgery (baseline), 10 weeks after the start of treatment (4 weeks after completing chemoradiotherapy), 22 weeks after the start of treatment (after the third cycle of adjuvant TMZ or earlier as clinically indicated), and every 3 months thereafter. MRI data for this study were assessed by an independent neuroradiologist and a radiologistin-training using the Macdonald criteria for tumour response evaluation [19]. Pseudoprogression was defined as progressive disease on MRI at 10 weeks, with stabilisation or reduction in enhancing lesions on MRI at 22 weeks. True progression was defined as progressive disease on MRI at both 10 weeks and 22 weeks. The MRI responses were confirmed in relation to the disease course during long-term survival follow-up of these patients. FLT PET imaging FLT was synthesized as described by Been et al. [20]. 
FLT PET scans were performed after surgery, but before the start of radiotherapy (baseline) and 10 weeks after the start of treatment (4 weeks after completing chemoradiotherapy). Patients were instructed to fast for a minimum of 4 h before intravenous injection of tracer. For FLT, 200 MBq was administered 30 min before the baseline PET scan (mean ± SD 201.22 ± 14.16 MBq) and follow-up scan (mean ± SD 196.60 ± 26.70 MBq). A 60-min dynamic protocol was used in the first three patients to determine the optimal timing, followed by an abbreviated, static protocol of 30 min in the remaining patients. PET scans were performed on either an HR+ ECAT Exact or an mCT PET scanner (Siemens, Knoxville, TN). Baseline and follow-up PET scans were performed on the same scanner in almost all patients. Both the ECAT Exact and mCT PET scanners were standardized according to The Netherlands protocol for standardization and quantification of FDG whole-body PET studies in multicentre trials, which ultimately formed the foundation for the European Association of Nuclear Medicine (EANM) procedure guidelines for tumour PET imaging [21][22][23]. The maximum standardized uptake value (SUV max ) was assessed according to EANM procedure guidelines by drawing a region of interest (ROI) around every lesion on a separate reconstruction [22]. For multiple lesions, the mean SUV max was calculated. FLT PET scans were fused with the most recent MRI scan to differentiate actual tumour from postsurgery effects outside the cerebrum if needed. The SUV mean for normal brain tissue was assessed by drawing a ROI in the contralateral brain tissue. Tumour and nontumour ROIs were drawn by the same clinical researcher and were confirmed by a nuclear medicine physician. Tumour-tonormal (T/N) ratios were determined by dividing the SUV max of the tumour by the SUV mean of the normal brain tissue. Threshold values for SUV max and T/N ratio, and a FLT PET response, defined as a 25% decrease in SUV max between the first and second FLT PET scan, were based on corresponding FLT studies in the literature [18,24,25]. Ki67 immunohistochemical staining Deparaffinized GBM tissue from primary surgery or biopsy was used to evaluate the proliferation fraction of tumour cells (tissue slices of thickness 4 μm). Antigen retrieval was performed using 10 mM Tris/1 mM EDTA (pH 9) in a microwave at 700 W. Endogenous peroxidase and biotin were blocked using routine techniques. The slides were incubated with the primary antibody, Ki67 (clone MIB-1; Dako, Glostrup, Denmark) at room temperature for 1 h, followed by application of the secondary antibody, peroxidase-conjugated rabbit anti-mouse serum (Dako), and the tertiary antibody, peroxidase-conjugated goat anti-rabbit serum (Dako), for 30 min each. The first antibody was diluted 1/100 in 1% bovine serum albumin (BSA)/phosphate-buffered saline (PBS). The secondary and tertiary antibodies were diluted 1/ 100 in 1% BSA/PBS with 1% AB serum. Colour was developed with 3,3′-diaminobenzidine (Sigma, Zwijndrecht, The Netherlands) for 10 min. The slides were scanned for hot spots of proliferative activity. In one high-power field (×400 magnification) the fraction of Ki67-positive nuclei/ total number of nuclei was determined. Statistics Taal et al. 
found that 18 of 85 patients (20%) had discordant MRI scans showing disease progression on the first follow-up scan 4 weeks after the end of radiotherapy followed by stabilisation or a reduction in the contrast-enhanced lesions on MRI at 22 weeks, indicating pseudoprogression [3]. McNemar's test showed that five discordant MRI scans in the absence of discordant FLT PET scans would be sufficient to prove the superiority of FLT PET over MRI for discriminating between true progression and pseudoprogression. Based on these assumptions, at least 25 patients were needed for this study. An independent samples t test and the Mann-Whitney U test were used to compare FLT uptake and T/N ratios, respectively, between patients with and without pseudoprogression. To discriminate between true progression and pseudoprogression, receiver operating characteristic curves were used to find an optimal cut-off value for FLT uptake and changes in uptake. Fisher's exact test was used to determine if FLT PET could accurately identify patients with pseudoprogression, based on optimal cutoff values. Kaplan-Meier curves with the log-rank test were used to analyse survival in our long-term survival follow-up. An additional multiple Cox regression analysis was performed on survival data to correct for clinical variables (i.e. tumour extent and size, steroid use and Ki67 proliferation index). Furthermore, hazard ratios (HRs) for clinical variables were calculated and are reported with 95% confidence intervals (CIs). Lastly, a Pearson correlation test was used to calculate correlations between FLT uptake and proliferation index. A two-sided p value of <0.05 was considered significant. Statistics were calculated using IBM SPSS Statistics 22. Graphs were generated using GraphPad Prism version 7.02 for Windows. Patients Of 30 patients (28 with GBM and 2 with gliosarcoma, WHO grade IV) included between November 2009 and November 2012 (Table 1), five were not evaluable for pseudoprogression due to early death, salvage surgery or clinical deterioration that prevented further participation in the study, and one was excluded from the pseudoprogression analysis as only a baseline MRI scan before tumour resection was available. The CONSORT diagram is shown in Fig. 1. Baseline FLT PET scans were performed 4.9 ± 3.8 days before the start of radiotherapy, except in two patients who had their baseline FLT PET scan 2 and 4 days after the start of radiotherapy for logistic reasons. Follow-up FLT PET scans were performed 27.0 ± 8.0 days after completion of radiotherapy. Three patients had their follow-up FLT PET scan 1 day after the start of adjuvant TMZ. Finally, for logistic reasons two patients had their FLT PET scan 6 and 22 days after the start of adjuvant TMZ, respectively. Pseudoprogression as defined by MRI response A total of 24 patients were analysed for pseudoprogression (Fig. 1). The mean SUV max values at baseline and at 10 weeks in these 24 patients were 1.96 ± 1.00 and 1.28 ± 0.53, respectively. Pseudoprogression was observed in seven patients, and true progression in seven other patients (Fig. 2). Ten patients had either stable disease or a complete response on MRI after 10 weeks (Table 2). Six patients, of whom one had pseudoprogression and another had true progression, initially showed no baseline FLT uptake due to a macroscopic gross total resection of their tumour. Therefore, some of the pseudoprogression analyses had to be performed in the remaining patients. 
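As a minimal sketch of the SUV-based quantities defined in the FLT PET imaging section above (the tumour-to-normal ratio and the 25% SUVmax decrease used as a PET response threshold), the following Python snippet computes them for a hypothetical patient; none of the values come from the study, and this is not the clinical software that was used.

```python
# Minimal sketch: T/N ratio and the >=25% SUVmax decrease response threshold
# described in the Methods. All values are hypothetical.
def tn_ratio(suv_max_tumour, suv_mean_normal_brain):
    """Tumour-to-normal ratio: lesion SUVmax divided by contralateral SUVmean."""
    return suv_max_tumour / suv_mean_normal_brain

def pet_response(suv_max_baseline, suv_max_followup, threshold=0.25):
    """Return (is_responder, fractional change) based on the SUVmax decrease."""
    change = (suv_max_followup - suv_max_baseline) / suv_max_baseline
    return change <= -threshold, change

# hypothetical patient with two lesions: the mean SUVmax is used, as in the study
baseline_mean_suvmax = (2.4 + 1.6) / 2          # = 2.0
followup_mean_suvmax = (1.5 + 1.1) / 2          # = 1.3
responder, change = pet_response(baseline_mean_suvmax, followup_mean_suvmax)
print(f"T/N at follow-up: {tn_ratio(followup_mean_suvmax, 0.4):.2f}")
print(f"SUVmax change: {change:+.0%} -> {'PET response' if responder else 'no PET response'}")
```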
Patients with pseudoprogression had mean SUV max values of 2.01 ± 1.08 at baseline and 1.41 ± 0.65 at 10 weeks, compared with 2.07 ± 1.11 at baseline and 1.28 ± 0.62 at 10 weeks in patients with true progression. There was no significant difference between patients with pseudoprogression and those with true progression in SUV max at baseline (p = 0.928), SUV max at 10 weeks (p = 0.699), change in SUV max (p = 0.567) and T/N ratio (p = 0.699) on FLT PET scans. Furthermore, FLT parameters in patients with pseudoprogression and those with true progression did not significantly differ from the FLT parameters in patients with stable disease or complete response. Two of the patients with pseudoprogression were identified based on FLT uptake reduction, while three patients with true progression also showed a decrease in SUV max of more than 25% (sensitivity 29%, specificity 43%). Furthermore, cut-off values identified as optimal by others for identifying recurrent tumour with a SUV of ≥1.34 and a T/N ratio of ≥4.94 were applied to FLT PET scans at 10 weeks [24,25]. However, this approach did not provide an accurate prediction in all patients. Long-term follow-up In all 30 patients, a baseline FLT PET scan was available. However, five patients showed no FLT uptake on baseline FLT PET. Therefore, survival analyses with SUV max at baseline were based on 25 patients. At the end of December 2017, 27 patients had died and three were censored at the date last known to be alive. The median overall survival in all patients was 14.1 months (95% CI 3.4-24.8 months). SUV max at baseline and 10 weeks were both significantly correlated with survival (HR = 3.03, 95% CI 1.72-5.33. p < 0.001, and HR = 5.16, 95% CI 1.83-14.55, p = 0.002, respectively). The correlation between change in SUV max (ΔSUV max ) and survival almost reached the standard level of statistical significance (HR = 0.44, 95% CI 0.19-1.03, p = 0.059). When compared to the response defined by MRI after three cycles of adjuvant TMZ, MRI response was more significantly associated with survival (p = 0.028) than SUV max at baseline (p = 0.048) and at follow-up (p = 0.044). Furthermore, use of steroids, tumour size and extent of disease were significantly associated with survival (p = 0.007, p = 0.001 and p = 0.047, respectively). After correction for these clinical variables, SUV max at baseline remained significantly correlated with survival (HR = 6.82, 95% CI 1.31-35.42, p = 0.022; Table 3). Furthermore, the results of the subgroup analysis, excluding six patients who were scanned during radiotherapy or TMZ treatment, were comparable to those of the main analysis. Proliferation index In the 28 patients with specimens available for Ki67 staining, the mean SUV max at baseline and at 10 weeks, and ΔSUV max did not correlate with the Ki67 index of the tumour tissue before treatment (r = 0.233, p = 0.285; r = −0.321, p = 0.145; and r = −0.191, p = 0.420, respectively). Discussion In this small, prospective trial, we defined pseudoprogression and true progression based on both MRI scans, and compared MRI responses with the disease course during long-term follow-up. Changes in SUV max (ΔSUV max ) between the FLT PET scan at baseline and 10 weeks did not discriminate between true progression and pseudoprogression as defined by MRI. Interestingly, during long-term follow-up, ΔSUV max between baseline and 10 weeks showed a tendency to be associated with improved survival. 
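The hazard ratios quoted above come from Kaplan-Meier and multivariable Cox analyses adjusted for clinical variables. A minimal sketch of such a survival analysis, using the Python lifelines package and entirely illustrative patient data and column names rather than the study's actual dataset, could look as follows.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Illustrative data only: survival in months, a death indicator and two clinical covariates
df = pd.DataFrame({
    "months":      [14.1, 8.0, 30.2, 11.5, 22.3, 6.4, 18.9, 9.7],
    "death":       [1,    1,   0,    1,    1,    1,   0,    1],
    "suvmax_base": [1.9,  3.1, 0.9,  2.4,  1.5,  3.4, 1.1,  2.8],
    "steroid_use": [1,    1,   0,    1,    0,    1,   0,    1],
})

# Kaplan-Meier estimate of overall survival
km = KaplanMeierFitter().fit(df["months"], event_observed=df["death"])
median_os = km.median_survival_time_

# Multivariable Cox regression: hazard ratios (exp(coef)) with 95% confidence intervals
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```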
Furthermore, in the 24 patients included in our analysis, a lower baseline FLT uptake did not correlate with Ki67 index, but was predictive of a longer survival. Despite the urgent need to distinguish between true progression and pseudoprogression in GBM patients, this is one of the few prospective studies that have assessed the ability of FLT PET imaging to distinguish pseudoprogression from true progression with long-term follow-up [26]. To date, mainly retrospective studies have been performed in patients with radiological suspicion of recurrent brain tumour at different time points, and these have shown variable results. In one study, FLT PET had a low specificity for distinguishing recurrent tumour from treatment-related changes. Three other studies were able to discriminate between true progression and radionecrosis in 15, 19 and 21 glioma patients, respectively, using FLT kinetic values and the T/N ratio [24,27,28]. MRI is still considered the optimal modality for the assessment of treatment response and effects [3]. Consequently, changes on MRI at 10 and 22 weeks were used in our study to define pseudoprogression and true progression. Unfortunately, at the time of this study, the RANO criteria for glioma response evaluation on MRI were still under development, and therefore, the Macdonald criteria were used instead. As well as the imaging characteristics on conventional contrast-enhanced T1-weighted MRI images, the RANO response criteria also include characteristics on T2-weighted and fluid-attenuated inversion recovery (FLAIR) images [29]. However, due to the difficulty in identifying tumour lesions without contrast enhancement and the quantitative evaluation of the degree of T2/FLAIR changes to define tumour progression, an adequate assessment of treatment response or tumour recurrence with the help of the RANO criteria remains problematic [30,31]. A key limitation of FLT, in contrast to MET, FET and F-DOPA amino acid tracers, is that FLT uptake is primarily restricted to contrast-enhancing tumour lesions due to its dependence on the permeability of the blood-brain barrier and its disruption by tumour [31,32]. Therefore, the inability to accurately discriminate between true progression and pseudoprogression in our prospective study with FLT PET may well have been due to the fact that FLT uptake in high-grade gliomas reflects not only trapping of FLT in proliferating tumour cells, but also disruption of the blood-brain barrier [33]. As a result, areas showing true progression as well as pseudoprogression would show an increased FLT uptake. An important limitation of this study is that only SUV max was used for quantification of FLT uptake. The use of SUV max does not take into account the heterogeneity in FLT uptake. Therefore, kinetic analysis might be of interest to distinguish between FLT uptake due to proliferation and FLT leakage that results from disruption of the blood-brain barrier, as shown in previous studies [33][34][35][36]. In addition, kinetic analysis would support the correct interpretation of the static FLT data. Unfortunately, kinetic analysis could not be performed in the present study, as FLT PET scans were performed 30 min after tracer injection. However, SUV max is easy to obtain, is mostly used in clinical practice with FDG PET imaging, and has been proven to be robust. In glioma, SUV max quantification of FLT uptake has a repeatability coefficient of 23%, which seems to be better than corresponding values for FDG PET [37,38].
Furthermore, in other studies FLT kinetic values have been found to be well correlated with SUV parameters [39,40]. Several studies have suggested other parameters for quantification of FLT PET, such as proliferative volume and parametric response maps [12,41]. Due to the small numbers of patients and the different approaches used for quantification, direct comparison of the results is difficult. Lastly, it is difficult to determine the optimal timing of serial FLT PET imaging before and during GBM treatment. Since the aim of this study was to differentiate between true progression and pseudoprogression after chemoradiotherapy, the baseline FLT PET scan was performed after surgery. Imaging before surgery would have revealed tumour uptake, but most patients undergo a gross total resection of tumour tissue. However, imaging after surgery can also lead to increased FLT uptake due to increased blood flow and proliferation as part of the wound healing process. This might also explain the lack of correlation between FLT uptake and the Ki67 index in our study, in contrast to the results of previous studies, in which the FLT PET scans were often performed before surgery [15][16][17]. Interestingly, FLT PET uptake at baseline and at 10 weeks was significantly correlated with survival. Furthermore, a decrease in FLT uptake over time also showed a tendency to be associated with improved survival (p = 0.059). After correction for clinical variables, only baseline FLT uptake remained significantly associated with survival. However, previous studies have also confirmed that (change in) FLT uptake is a strong independent predictor of survival [12,18,42]. This is in line with the results of imaging studies using FET and F-DOPA amino acid PET tracers [11,12]. Therefore, FLT uptake may still provide useful prognostic information in patients with GBM. Conclusion Our study suggests that further evaluation of FLT PET imaging is warranted to define its ability to discriminate between pseudoprogression and true progression in GBM patients treated with chemoradiotherapy, as this remains an urgent unmet need.
The metabolic equivalent of task score Aims This study investigates the use of the metabolic equivalent of task (MET) score in a young hip arthroplasty population, and its ability to capture additional benefit beyond the ceiling effect of conventional patient-reported outcome measures. Methods From our electronic database of 751 hip arthroplasty procedures, 221 patients were included. Patients were excluded if they had revision surgery, an alternative hip procedure, or incomplete data either preoperatively or at one-year follow-up. Included patients had a mean age of 59.4 years (SD 11.3) and 54.3% were male, incorporating 117 primary total hip and 104 hip resurfacing arthroplasty operations. Oxford Hip Score (OHS), EuroQol five-dimension questionnaire (EQ-5D), and the MET were recorded preoperatively and at one-year follow-up. The distribution was examined reporting the presence of ceiling and floor effects. Validity was assessed correlating the MET with the other scores using Spearman’s rank correlation coefficient and determining responsiveness. A subgroup of 93 patients scoring 48/48 on the OHS were analyzed by age, sex, BMI, and preoperative MET using the other metrics to determine if differences could be established despite scoring identically on the OHS. Results Postoperatively the OHS and EQ-5D demonstrate considerable negatively skewed distributions with ceiling effects of 41.6% and 53.8%, respectively. The MET was normally distributed postoperatively with no relevant ceiling effect. Weak-to-moderate significant correlations were found between the MET and the other two metrics. In the 48/48 subgroup, no differences were found comparing groups with the EQ-5D, however significantly higher mean MET scores were demonstrated for patients aged < 60 years (12.7 (SD 4.7) vs 10.6 (SD 2.4), p = 0.008), male patients (12.5 (SD 4.5) vs 10.8 (SD 2.8), p = 0.024), and those with preoperative MET scores > 6 (12.6 (SD 4.2) vs 11.0 (SD 3.3), p = 0.040). Conclusion The MET is normally distributed in patients following hip arthroplasty, recording levels of activity which are undetectable using the OHS. Cite this article: Bone Joint Res 2022;11(5):317–326. Introduction Primary hip arthroplasty is an effective intervention for improving pain and restoring function. 1 Patient-reported outcome measures (PROMs) such as the Oxford Hip Score (OHS), 2,3 which is routinely collected before and after hip arthroplasty in the UK, reliably and predictably report considerable, cost-effective improvements in pain and function. 4,5 However, as a consequence of the efficacy of hip arthroplasty, the distribution of postoperative scores is highly negatively skewed. In two national registries, the modal score on the postoperative OHS was 100% (48/48 points) with up to 20% of patients recording this score. 6,7 The population of patients presenting for hip arthroplasty has evolved and their expectations are different from those undergoing surgery when the OHS was introduced. 8 In a study by Scott et al, 9 40% of hip arthroplasty patients considered returning to sporting activity a 'very important' preoperative expectation, but such activities are not captured by the OHS. By relying solely on this skewed metric, potential health gain in return to sporting activity from innovative techniques will remain undetected, while other patient groups may be inappropriately told that their clinical results are as good as they can get, despite dissatisfaction with the level of activity that they have achieved. 
7,10,11 In the UK, the pre-to postoperative change in OHS is used by government-backed initiatives including 'getting it right first time' (GIRFT) and the NHS best practice tariff (BPT), 12,13 to measure success at the institutional level. With the skewed OHS as the metric, the only way to improve the health gain obtained is by refusing care until the preoperative scores are low enough. This approach may have a role in healthcare rationing but should not impede the scientific desire to measure higher-level function. As degree of health gain is so closely related to preoperative score, and as preoperative score may vary, there is a need for a metric which measures outcome equally independently of preoperative health state. Alternative scores have been developed with the aim of being more discriminative in high-functioning patients. Several constructs have been suggested, such as joint perception as measured by the Forgotten Joint Score (FJS) and physical activity scores. The FJS assesses patients' awareness of their joint arthroplasty performing different tasks, with the optimal outcome being a 'forgotten' artificial joint. In a study comparing the outcomes of robotic and manual THA, the authors found no clinically relevant difference using the OHS, however the robotic group did substantially better using the FJS. 14 The authors support the idea that the ceiling effect of the OHS limits its use for comparing high-functioning postoperative patients, with their results indicating that the FJS may be more discriminative. While this sounds encouraging, other authors have reported ceiling effects of 20% to 30% using the FJS in a postoperative hip arthroplasty cohort, and so problems may still exist with skewed distributions using this score. 15,16 Physical activity metrics may be another solution, with a number of valid and reliable metrics such as the University of California, Los Angeles (UCLA) activity scale available. 17 This score appears to have no ceiling effect and is simple to use, however it only includes a small number of activities and does not account for the individual activity intensity. 17 One potential solution to this problem is the use of metabolic equivalent of task (MET) values, which numerically quantify the energy expenditure of over 800 activities comparing them to energy expenditure at rest. 18 This sophisticated, personalized approach to quantifying activity energy expenditure has been validated as a surrogate for general cardiovascular fitness, correlating well with both objective activity measures, such as pedometers, as well as the development of cardiovascular disease and mortality. 19,20 Exercise at an intensity that raises the heart rate is now well established as being an effective health maintenance intervention. MET values have been used to confirm this beneficial effect: in a twin study, 'conditioning exercise' offers substantial protection against risk of death when compared with sedentary or occasional exercising. 20 Although not yet commonly used following arthroplasty surgery, with a simplification to measure activity intensity without the performance time or frequency, the MET may be a robust way of comparing activity in postoperative hip arthroplasty patients, demonstrating activity levels that have real relevance to health and life expectancy. 
This study therefore aims to answer two important questions: 1) does the MET have a postoperative ceiling effect that may limit its ability to discriminate between high-performing postoperative patients; and 2) can the MET demonstrate continued improvement and health gains beyond the maximal OHS, establishing differences between postoperative patients who score 48/48? Methods Study design. This study was a retrospective analysis of anonymous data, collected prospectively from consenting primary hip arthroplasty patients as part of an ongoing, longitudinal study of gait analysis in lower limb arthroplasty (REC reference: 14/SC/1243). Patients from this study were eligible for inclusion if they underwent a primary hip arthroplasty under one of 13 surgeons at 12 sites between 2014 and 2018. Patients were excluded if they had revision surgery, an alternative hip procedure, or incomplete data either preoperatively or at one-year follow-up. Demographic data and the patient-reported answers to three PROMs questionnaires (EuroQol five-dimension questionnaire (EQ-5D), OHS, and the MET) were recorded preoperatively and at one year postoperatively. Demographic details. A total of 751 patients were initially identified on our electronic database; 73 were excluded having had an alternative or revision procedure, and a further 457 patients were excluded due to lack of preoperative or one-year PROM scores. Overall 221 patients, including 117 THAs (53%) and 104 HRAs (47%), with a mean age of 59 years (SD 11), were analyzed in this study. Demographic data are detailed in Table I. The 221 responding patients with full datasets were a mean four years younger than the 457 non-responders (59 years (SD 11) vs 63 years (SD 12), p < 0.001, independent-samples t-test). MET score. Using a similar methodology to Amstutz and Le Duff, 21 the MET asks patients to choose three physical activities that are important to them, and that are affected by their joint problem. These initially selected activities remain the same at all follow-up timepoints. Patients then rate the intensity at which they currently perform the activity on a visual scale from 0 to 100. METs are numerical values assigned to demonstrate the energy expenditure used performing different tasks. One MET is equivalent to energy expenditure during rest and is approximately equal to 3.5 ml O2 kg-1 min-1 in adults. 19 Using Arizona State University's compendium of activities, 18 the MET values associated with each activity are recorded. An example is running, which has a range of values between 4.5 (jogging on a mini-tramp) and 23 METs (running a 4.3 minute mile). Based on this reference range, the patient's self-reported intensity score is then used to work out a value for the METs they are currently doing. This is done by subtracting the lower value of the MET reference range from the higher value, multiplying this by the percentage intensity expressed as a decimal, and then adding back on the lower reference value. Using the above as an example, if a patient rated their intensity as 50% in running (range 4.5 to 23 METs) their MET score would be worked out as ((23 - 4.5) × 0.5) + 4.5 = 13.75 METs. The MET is the maximum value scored from the three chosen activities.
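The interpolation just described maps a patient's 0 to 100 intensity rating onto the activity's MET reference range; a short sketch is shown below, with made-up activity ranges beyond the running example given in the text.

```python
def met_score(low_met, high_met, intensity_pct):
    """Interpolate a MET value within an activity's reference range.

    low_met, high_met : MET reference range for the chosen activity
                        (e.g. running spans 4.5 to 23 METs in the compendium)
    intensity_pct     : patient-reported intensity on a 0-100 visual scale
    """
    return (high_met - low_met) * (intensity_pct / 100.0) + low_met

# Worked example from the text: running rated at 50% intensity
print(met_score(4.5, 23.0, 50))   # 13.75 METs

# The patient's MET is the maximum over their three chosen activities
activities = [(4.5, 23.0, 50), (2.0, 9.5, 80), (3.0, 12.5, 40)]   # (low, high, intensity %)
patient_met = max(met_score(low, high, pct) for low, high, pct in activities)
```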
In the full MET, the frequency and duration of physical activity are recorded; we omitted these aspects from the score in favour of intensity, to avoid measuring cardiorespiratory fitness and also to avoid under-representing performance through measuring high-intensity but infrequently performed activities, such as skiing. 21,22 Distribution of scores. Data were analyzed to demonstrate the distribution, presence of ceiling or floor effects, concurrent validity of the MET in terms of its responsiveness, and correlations between the MET and the two conventional PROMs. Other authors have suggested when validating physical activity metrics that a weak-to-moderate correlation would be expected between the activity metric and conventional PROMs. 8 Health gains. Recent literature has established that as preoperative OHS increases, the improvement in score decreases. 23 To investigate whether health gains in the MET and EQ-5D are also limited by the level of preoperative joint symptoms, the relationship between preoperative OHS and patient improvement at one year using the three metrics was plotted. Fractional polynomial regression plots were used to demonstrate the likely increase in each metric for a given preoperative OHS score. 48/48 sub-cohort analysis. A subgroup analysis was performed on a cohort of patients with the maximum postoperative OHS. Previous studies have highlighted that postoperative physical activity (as measured on the UCLA activity score) in hip arthroplasty patients may be higher in younger patients, male patients, those with higher preoperative activity levels, and those with lower BMI. 24,25 Based on this, the 48/48 scoring patient cohort was divided into categories (age < or > 60 years, male or female, preoperative MET < or > 6, and BMI < or > 25 kg/m2) and compared using the MET and EQ-5D at one year postoperatively. Previous literature has classified activities as light (< 3), moderate (3 to 6), or vigorous (> 6) according to their MET values. 26,27 Therefore, in this analysis, the threshold for high preoperative activity was set at 6 METs. Statistical analysis. Statistical analysis was performed using Stata/IC 10.1 (StataCorp, USA). Data were first tested for normality visually using histograms and normal Q-Q plots. To quantify the shape and symmetry of the distribution about the mean, kurtosis and skewness values were calculated. A standard normal distribution is generally considered to have a kurtosis value of 3 and a skew value of 0. 28 For independent data, parametric variables were compared using the independent-samples t-test, and non-parametric data were compared using the Mann-Whitney U test. Paired data were compared using the paired t-test. Ceiling and floor effects were calculated as the percentage of patients scoring the maximum or minimum scores, respectively. As previously indicated in the literature, ceiling or floor effects of > 15% were considered relevant. 7
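A minimal sketch of the distribution summary described in this statistical analysis is given below; the score vector is invented, and SciPy's Pearson kurtosis (fisher=False) is used so that a normal distribution scores approximately 3, matching the convention above.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def distribution_summary(scores, best_score, worst_score):
    scores = np.asarray(scores, dtype=float)
    return {
        "skewness": skew(scores),
        "kurtosis": kurtosis(scores, fisher=False),        # Pearson: normal distribution ~ 3
        # ceiling/floor effect = percentage of patients at the best/worst possible score
        "ceiling_pct": 100.0 * np.mean(scores == best_score),
        "floor_pct": 100.0 * np.mean(scores == worst_score),
    }

ohs_post = [48, 48, 45, 48, 40, 47, 48, 36]                 # illustrative postoperative OHS values
summary = distribution_summary(ohs_post, best_score=48, worst_score=0)
relevant_ceiling = summary["ceiling_pct"] > 15.0            # >15% is considered a relevant ceiling effect
```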
Construct validity of the MET was assessed by examining the responsiveness of the score to change using the standardized response mean (SRM; calculated by dividing the mean change in score over the one-year time period by the standard deviation (SD) of that change), and concurrent validity was assessed by measuring correlations between scores as calculated using Spearman's rank correlation coefficient. Rs values of 0.3 to 0.5 were considered as weak-to-moderate correlations. 8 Statistical significance was set at p < 0.05. Results Distribution. Preoperatively, the distribution of the OHS was normal (skew -0.14, kurtosis 2.57) and that of the EQ-5D was bimodal with a skewness value of -0.99 and a kurtosis of 4.03 (Figures 1 and 2). The MET demonstrates a slight positive skew of 0.59 representing a floor effect, with the commonest score being zero, on account of hip pain and near-normal kurtosis of 3.18 (Figure 3). Postoperatively, both the OHS and EQ-5D scores demonstrate a substantial negative skew (Figures 1 and 2). The kurtosis value of the OHS was 13.07 with a skewness value of -3.12, while the EQ-5D demonstrated a kurtosis value of 6.23 and a skewness value of -1.96. The MET on the other hand exhibited a normal distribution postoperatively, centred around a mean of 10.7 (SD 3.8) with very little skew (0.40) and near normal kurtosis (4.46; Figure 3 and Table II). (Figure 3: histograms with kernel (Epanechnikov) density plots demonstrating the distribution of MET scores preoperatively and at one-year follow-up; solid vertical lines represent mean values, dashed vertical lines the median.) Ceiling and floor effects. No floor effects were seen for the OHS or EQ-5D, but substantial ceiling effects of 41.6% and 53.8% were seen at one year follow-up in the OHS and EQ-5D, respectively. Preoperatively the MET had a moderate floor effect of 25.3%, while no relevant ceiling effect was noted postoperatively (Table II). Validity. Spearman's rank correlation coefficient was weak-to-moderate, but there were statistically significant correlations between the MET and EQ-5D both preoperatively (rs = 0.46, p < 0.001) and to a lesser extent at one-year follow-up (rs = 0.32, p < 0.001, Spearman's rank correlation). Similarly, weak-to-moderate correlations were demonstrated between the MET and the OHS both preoperatively (rs = 0.46, p < 0.001, Spearman's rank correlation) and at one-year follow-up (rs = 0.30, p < 0.001, Spearman's rank correlation). Improvement in score and responsiveness. All three metrics demonstrated excellent responsiveness with effect sizes as determined by the SRMs of > 1 (Table III). The fractional polynomial predictive plots in Figure 4 demonstrate a strong negative relationship between preoperative OHS and improvement in score for both EQ-5D (Figure 4a) and OHS (Figure 4b). For the MET score, this relationship is far less clear, with an initial decrease in improvement seen in patients who have lower preoperative OHS, while in patients with higher preoperative OHS the MET is progressively more responsive (Figure 4c). 48/48 OHS. A total of 92 postoperative patients scored 48/48 on the OHS following surgery. The histograms in Figure 5 demonstrate the distribution of EQ-5D in this group, which has a strong negative skew, with the majority of patients scoring the maximal score of 1 (Figure 5a). The MET on the other hand exhibits a near normal distribution of scores, despite all patients scoring the same on the OHS (Figure 5b).
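The responsiveness (SRM) and Spearman correlations reported in these results can be computed along the following lines; the paired score vectors below are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

def standardized_response_mean(pre, post):
    """SRM: mean pre-to-post change divided by the standard deviation of that change."""
    change = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return change.mean() / change.std(ddof=1)

met_pre  = [2.0, 0.0, 4.5, 3.0, 1.5]      # illustrative preoperative MET scores
met_post = [10.5, 8.0, 13.8, 12.5, 9.0]   # illustrative one-year MET scores
ohs_post = [44, 40, 48, 48, 42]           # illustrative one-year OHS scores

srm = standardized_response_mean(met_pre, met_post)   # SRM > 0.8 is a large effect size
rho, p_value = spearmanr(met_post, ohs_post)           # concurrent validity vs the OHS
```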
When subdivided into groups, patients aged under 60 years scored significantly higher on the MET than patients over 60 years of age (mean 12.7 (SD 4.7) vs 10.6 (SD 2.4), p = 0.008, independentsamples t-test), as did male patients (mean 12.5 (SD 4.5) vs 10.8 (SD 2.8), p = 0.024, independent-samples t-test) and patients with higher activity levels on their preoperative MET scores (mean 12.6 (SD 4.2) vs 11.0 (SD 3.3), p = 0.040, independent-samples t-test) (Figure 6a). No significant differences were found comparing patients by BMI using the MET or comparing any of the groups using the EQ-5D (Figure 6b, Table III). Discussion This retrospective study set out to determine whether the MET score could capture differences in function that were not detectable by the OHS or EQ-5D in an active hip arthroplasty population. This question was answered: the MET score does deliver a symmetrical metric with a normal distribution in a postoperative population, capturing differences in activity levels that were not detectable using the OHS. This question is relevant for health economists, policy makers, and those designing clinical trials. Health benefits that can be captured simply, without the need for expensive equipment or licences, should help drive commissioning choices. By demonstrating that patients can improve past the OHS maximum score, we have revealed an opportunity that was otherwise denied: using this metric, surgeons who are currently penalized for failing to deliver adequate health gains may now be able to justify their offering of arthroplasty in younger and more active patients. By restricting health gains to the OHS, commissioners may unfairly restrict access to arthroplasty surgery, or unfairly penalize hospitals for not achieving satisfactory results if these decisions are based solely on health gains as measured by the OHS. Although there is more work to be done in this area, the aspects of validity measured in the present study support its use as a metric for the outcome of hip arthroplasty surgery. The MET demonstrated evidence of concurrent validity with weak-to-moderate correlations found with both the OHS and EQ-5D. Naal et al 17 used a similar approach, establishing weak-to-moderate correlations with three different physical activity scores and the OHS. One potential limitation was that the present study did not validate the MET against another validated physical activity metric or objective physical activity measures such as a pedometer or exercise log. However, the authors note that the face validity of using MET values has already been well established by other similar MET-based scores. 18,19 Although not validated specifically for use in arthroplasty, the International Physical Activity Questionnaire (IPAQ) score is a MET-based score, shown to be valid and reliable for use in the general population measuring activity levels. 19 It differs from the MET, being a better measure of cardiorespiratory fitness, whereas the MET is personalized to patients' sporting aspirations. The major advantage of the MET is that no matter what activities patients choose, the scores are comparable and relevant to their joint disease. Furthermore, the numeric MET values assigned by the University of Arizona are objective, being based upon oxygen consumption. 
18,19 Therefore, the authors considered concurrent validity with other hip-specific and generic PROMs, alongside responsiveness, encouraging Fractional polynomial regression plots demonstrating predicted improvement in score at one-year follow-up for a given preoperative Oxford Hip Score (OHS) for a) OHS, b) EuroQol five-dimension questionnaire (EQ-5D), and c) metabolic equivalent of task (MET). Fractional polynomial fit line with 95% confidence intervals (CIs) demonstrated in grey. Fig. 5 Histograms with kernel (Epanechnikov) density plots demonstrating distribution of a) metabolic equivalent of task score (MET) and b) EuroQol five-dimension questionnaire (EQ-5D), at one-year follow-up for the subgroup of patients who all scored 48/48 on the Oxford Hip Score (n = 92). validation data for using the score in this cohort, however further work in this area would be an interesting avenue for future research. Responsiveness is considered another aspect of construct validity. 29 The greater the responsiveness, the more accurate a metric is in detecting change when it has occurred. The MET had a SRM of 1.17, which indicates a large effect size or an excellent response to change over time. 8 The calculated SRMs for EQ-5D and OHS in this cohort were found to be similar to previously published literature, further validating our findings. 30 Unlike the OHS and EQ-5D, the postoperative MET had a normal distribution and exhibited no ceiling effect. Substantial postoperative ceiling effects were found for the OHS (41.6%) and EQ-5D (53.8%). In general, ceiling effects or floor effects are considered problematic when 15% or more of the cohort score the best or worst scores. 7,10 By having large numbers of patients scoring the best or worst scores, the metric is rendered insensitive to detecting differences at the extremes of the scale. 7,10 Other studies have demonstrated strong ceiling effects in the OHS of 19.9%, 6 and even more pronounced ceiling effects for the EQ-5D of 39.8%. 30 While the pattern of these findings support our results, our population demonstrated a much higher percentage ceiling effect for both metrics. This may be related to the studied population which included a younger, more active cohort than that used in other studies. While other scores have been developed with the aim of reducing the impact of ceiling effect, unfortunately problematic ceiling effects may still Column scatter for the subgroup of 92 patients scoring 48/48 on the Oxford Hip Score, compared by age group, BMI, preoperative metabolic equivalent of task (MET) score, and sex using: a) MET scores; and b) EuroQol five-dimension questionnaire (EQ-5D) scores. The solid horizontal line represents the median and the whiskers represent the interquartile range. Statistically significant p-values have been indicated. exist. In a recent study, the FJS was reported to have a ceiling effect of 31.9%, similar to those reported for more conventional PROMs. 15 In addition, the FJS has reported a substantial floor effect of 22.4%, suggesting that there may be problems discriminating at both ends of the score. 31 While the MET showed no postoperative ceiling effect, it did show a preoperative floor effect, similarly to the FJS. This is not surprising given that the formulation of the question specifies the selection of tasks that have been negatively affected by the respondent's hip pain. 
A similar preoperative floor effect has been observed in validation studies looking at other physical activity-based outcome measures such as the Tegner score. 17 When using MET solely as an assessment of postoperative outcome rather than of preoperative disease state, this floor effect is unimportant. If it were to be used for the former, the question may have to be re-formulated. Both the OHS and EQ-5D demonstrated very little predicted improvement towards the upper end of the preoperative OHS scale. The MET on the other hand shows continued predicted improvements, with a 6 MET improvement predicted for patients who score 48/48 on the OHS. A large registry study by Price et al 23 demonstrated a similar effect using the OHS, with the likelihood of seeing a meaningful clinical improvement decreasing with higher preoperative scores. The authors conclude that at a preoperative score of 40 or above, there was a 0% chance of meaningful improvement, suggesting this as a threshold for referral. 23 The present study suggests that even though these higher-scoring preoperative patients do not show improvement using the OHS, they do show considerable improvement using the MET. Setting a referral threshold at 40 may restrict access to high-functioning patients who may want to return to a preferred sporting activity. While it is certainly important to use conventional PROMs to record health gains, the assumption that no further benefit can be achieved past the maximal score may mean that these overall health gains are under-represented. In doing so, one may unfairly restrict access to our highly effective surgical interventions for higher-functioning patients who are unable to perform their desired sporting activity. Without an additional activity metric, the considerable improvement in quality of life delivered by returning them to their preferred sporting activity may be reported as a failure, since the improvement in function captured by change in OHS may be smaller than average. The subgroup analysis further emphasizes the point that the patients who score 48/48 are not necessarily performing at a similar level to one another. Despite identical OHS scores, patients > 60 years old had a mean MET of 10.6 METs compared to the 12.7 METs scored by the under 60s. A similar effect was noted for the male sex and those with higher preoperative MET scores. To put those scores into perspective, an activity such as Nordic walking at a fast pace scores 9.5 METs. 18 A fast run at 9 mph scores 12.8 METs, 18 so a difference of 2 to 3 METs translates into the difference between patients performing a fast walk or a fast run. Clinically this would likely be a noticeable benefit. Other studies have shown the effect of age, sex, and preoperative activity levels on postoperative physical activity. Williams et al, 24 in a study of 736 primary joint arthroplasty operations, found male sex, younger age, preoperative UCLA scores, and lower BMI to be overall predictors for achieving higher postoperative activity levels. The authors report that males are nearly five times more likely to achieve a UCLA activity score > 7 post-hip arthroplasty when compared to females (odds ratio 4.84, 95% confidence interval 2.93 to 7.99). 24 These findings have been corroborated by a number of other studies, concurring with the findings of the present study. 25,32,33 There are a number of limitations to this study. 
First, a large proportion of patients (61%) did not have preoperative or one-year postoperative scores, and the included patients were younger than those with missing data. It is possible that this younger cohort who completed the online questionnaire were more physically active and motivated than those who did not respond. Furthermore, our studied population was considerably younger than the national average for hip arthroplasty. While the authors believe this young population to be ideal for investigating the MET, it is worth noting that our findings may not be generalizable to the wider population of hip arthroplasty patients. Second, the MET does not factor in frequency of the activity, only intensity, so it cannot be used as a metric of fitness. Additionally, a high MET value may not correlate with impact on the hip joint, nor on the number of hip cycles. For instance, canoeing with vigorous effort scores a MET of 12.5. 18 This scores similarly to running at 9 mph (12.8 METs), 18 however running has greater impact on the hip joint and may not be attempted following hip arthroplasty in an effort to protect the longevity of the implant. Although our score did not take this into account, patients were asked to pick activities that were of importance to them and that their joint trouble affected, thus directing them to choose activities specific to the hip. Finally, as data in this study were retrospectively analyzed, there remains a risk of selection bias. In conclusion, this study demonstrates that a simple, patient-centred activity metric (MET) can pick up important health gains in return to higher-level sporting activity, which are missed by the OHS in a younger, active population. The MET showed evidence of construct validity, good responsiveness to change, and no postoperative ceiling effect, with health gains not limited by preoperative OHS. A patient-centred physical activity metric may have a useful role in addition to conventional function-based PROMs scores where the functional outcome of hip arthroplasty is relevant.
Secondary Tricuspid Regurgitation: Pathophysiology, Incidence and Prognosis Tricuspid regurgitation (TR) can be divided into primary and secondary origins. Primary TR is mostly caused by infective endocarditis, leaflet perforation, entrapment after device placement and congenital abnormalities. The natural cause of secondary (functional) TR is not well-understood and underdiagnoses is likely. Because symptoms such as ascites, edema and hepatomegaly usually manifest at a late state, assessment of TR is challenging requiring a multiparametric approach. Secondary TR can be subdivided into four morphologic types according to the underlying mechanism: Left-heart related TR, precapillary pulmonary hypertension related TR, right ventricular disease related TR and isolated TR. INTRODUCTION Tricuspid regurgitation (TR) has long been the most neglected valvular disease. This is mainly due to so far limited treatment options. On the one hand, conservative therapy results in resistance to diuretic treatment while surgical therapy on the other hand is associated with high in-hospital mortality (8.8%) (1). With the introduction of transcatheter tricuspid valve treatment options, which have shown promising results, the forgotten valve has finally emerged from the shadows. This review aims to provide insight into the pathophysiology, incidence and prognosis of secondary tricuspid regurgitation in particular. PATHOPHYSIOLOGY AND PREVALENCE OF PRIMARY TRICUSPID REGURGITATION Akin to mitral regurgitation, TR may be of primary (degenerative) or secondary (functional) origin. Primary tricuspid regurgitation (PTR) occurs less frequently (8-10% of all-cause TR) (2). In PTR abnormalities of the tricuspid valve apparatus may be of congenital or acquired origin. Apical displacement of the tricuspid leaflets that arise directly from the right ventricle without being linked to chordae is the most common congenital cause of primary TR (Ebstein's disease) (3). Acquired primary tricuspid regurgitation is mostly caused by leaflet perforation and entrapment following device placement (4). Considering a continuously aging population with an increased need for cardiac pacemaker-implantation, the prevalence of pacemaker/lead-induced TR may increase and should be considered in future device selection as novel techniques such as his bundle pacing and leadless pacemakers are quickly becoming available (5). Another important entity of PTR is endocarditis (Figure 1A), which makes up 17% of all endocarditis cases, predominantly occurs in males and is very often a consequence of intravenous drug abuse or is also related to implantable devices. It affects the anterior leaflet and manifests with large vegetations in the majority of cases (6,7). Rarer causes of PTR are chordae rupture following right-ventricular biopsies often seen after cardiac transplantation or hepatically metastased neuro-endocrine tumors (Hedinger syndrome), which involve the heart and particularly the tricuspid valve in 60% resulting in fibrotic stiffening of the leaflets (8). PATHOPHYSIOLOGY AND INCIDENCE OF SECONDARY (FUNCTIONAL) TRICUSPID REGURGITATION The natural cause of secondary (functional) tricuspid regurgitation (FTR) is not yet well-understood and in general four types of secondary tricuspid regurgitation are described (9) (Table 1; Figure 1): • Left heart related tricuspid regurgitation (LH-TR). To understand the pathophysiology of FTR it is crucial to understand the anatomy of the right heart and the tricuspid valve. 
Generally, the tricuspid valve consists of three leaflets but anatomical variants with a variable number of leaflets commonly occur (10). The posterior leaflet is usually smaller than the anterior and septal ones and functional TR often results in an ellipsoid regurgitant orifice along the anteroposterior edge with a shorter septolateral dimension (11). The leafleats are linked via chordae to papillary muscles and therefore tethering following right ventricular remodeling can occur, which in turn FIGURE 1 | Pathophysiologic Subdivision of Tricuspid Regurgitation. Primary tricuspid regurgitation is caused by abnormalities/damage on the tricuspid valve apparatus (A) (e.g., prolapse of the leaflet or endocarditis). Left heart related tricuspid regurgitation and precapillary pulmonary tricuspid regurgitation (B) are caused by dilation of the right ventricle, papillary muscle displacement and tethering of the tricuspid valve leaflets with malcoaptation. In isolated tricuspid regurgitation (C) the tricuspid annulus is pronouncedly dilated due to dilation of the right atrium in the presence of atrial fibrillation or diastolic dysfunction. may cause TR even in the absence of tricuspid annulus (TA) dilation ( Figure 1B) (12). Functionally however, the tricuspid annulus belongs to the right ventricle (RV) as it is septally fixed, partly consists of a fibrous tissue and is sensitive to pre-and afterload as well as right ventricular and/or atrial dilation (12,13). Therefore, enlargement of the non-planar and elliptical annulus during tricuspid regurgitation results in a more circular and planar tricuspid architecture (9). Topilsky et al. described a prevalence of significant TR of 0.55% in 21,020 examined patients. However, 1 in 25 patients older than 75 years presented with a moderate or severe TR whereas all-cause TR was more often diagnosed in women (14). Characterization of the mechanism of TR is difficult and hampered by the limitations of 2D-echocardiography. Resulting from a lack of standardization and its volume dependence the assessment of TR can be challenging and thus requires a multi-parametric approach including quantitative [e.g., effective regurgitant orifice area (EROA), regurgitant volume], semiquantitative [e.g., annulus dilation, vena cava width, proximal isovelocity surface area (PISA), jet area, hepatic flow, tricuspid valve inflow], and qualitative (e.g., inferior vena cava size, right atrium size, right ventricle size, ventricular septum motion, tricuspid valve morphology, color flow jet, jet contour, flow convergence zone) evaluation in transthoracic and transesophageal echocardiography (15,16). In this context it is important to mention, that when weighing up treatment options, right ventricular function and geometry is considered an important factor and thus should also be assessed using parameters such as tricuspid annular plane systolic excursion (TAPSE), right ventricular fractional area change (RV-FAC) and right ventricular longitudinal strain (RV-GLS). With the introduction and establishment of 3-D realtime echocardiography the ability to understand and differentiate between the types of TR improved tremendously (11). However, novel transcatheter therapeutic approaches may require additional cardiac computed tomographic assessments for procedural planning which allow to precisely determine the tricuspid annular size and the neighboring anatomical structures such as the right coronary artery or the coronary sinus thus estimate potential periinterventional risks (17). 
In some cases, the use of magnetic resonance tomography can be helpful to evaluate right heart function and volume. LEFT-HEART RELATED TRICUSPID REGURGITATION Left-heart related tricuspid regurgitation (LH-TR) is the most common form of FTR caused by left-sided valvular and myocardial disease associated with increased left atrial pressure, pulmonary hypertension and increased RV afterload which leads to RV dilation in particular at the basal level, tricuspid leaflet tethering, tricuspid annulus dilation and leaflet malcoaptation ( Figure 1B) (14). Compared to healthy valves the annulus becomes more planar, circular and dilated. Due to RV dilation, TR regurgitation results in a larger RV eccentricity index. Depending on the presence of pulmonary hypertension the RV becomes more elliptical, emphasizing valvular tethering (9, 10). Topilsky et al. described that in 62.5% of patients who were diagnosed with significant TR left-heart related diseases were the culprit (14). One third of patients with severe mitral regurgitation and one quarter of patients with severe aortic stenosis presented with significant tricuspid regurgitation (at least moderate) (9,18). Furthermore, heart failure patients with reduced ejection fraction frequently presented with significant tricuspid regurgitation which can progress even despite optimal medical therapy (19,20). Benfari et al. demonstrated that 88% of 13,026 heart failure patients with reduced ejection fraction showed functional tricuspid regurgitation. Among these 26% were classified as at least moderate (19). PRECAPILLARY PULMONARY HYPERTENSION RELATED TRICUSPID REGURGITATION Precapillary pulmonary hypertension related tricuspid regurgitation (PH-TR) is usually observed in patients with chronic lung disease, pulmonary thromboembolism, left-to-right shunting and a doppler estimated systolic pulmonary artery pressure of > 50 mmHg (10). The RV shows a midventricular dilation, a dilation of the tricuspid annulus and the right atrium ( Figure 1B). However, leaflet tethering due to lateral and apical papillary muscle displacement seems to be the dominant mechanism in pulmonary hypertension (21,22). Functional parameters of the right ventricle such as fractional area change (FAC) and tricuspid annular plane systolic excursion (TAPSE) do not generally show impairment whereas the RVsphericity index (diameter/length) increases with progression of TR. Right atrial (RA) dilation is also frequently associated with PAH and severe TR (23). Tricuspid valve tenting height and area is also significantly increased. Additionally, along progression of TR leaflet length increases in PAH, whereas tricuspid valve coverage (leaflet length/tenting area) decreases significantly due to RV dilation (24). Increases in systolic pulmonary artery pressure (PASP) and RV size as well as reduced tricuspid valve coverage are associated with TR progression which leads to progressive right heart failure (24). However, predicting TR progression with baseline RV size, PASP or TA diameter is not yet possible which limits selection of at-risk patients who may benefit from a more aggressive therapeutic approach (24). In general, when PAH is confirmed, TR is common. Tricuspid regurgitation was present in 96.5% of a PAH population while 60% them were at least of moderate severity (25). The estimated incidence of PAH ranges between 7 and 26 cases per 1 million adults (26). 
RIGHT VENTRICULAR DISEASE RELATED TRICUSPID REGURGITATION Intrinsic right ventricular dysfunction in the absence of pulmonary hypertension and diseases such as arrhythmogenic right ventricular cardiomyopathy and inferior cardiac infarction can lead to tricuspid regurgitation due to papillary muscle displacement and malfunction resulting in increased tricuspid leaflet tethering and insufficient leaflet coaptation (27,28). In arrhythmogenic right ventricular cardiomyopathy in particular, fibro-fatty tissue replacement due to progressive myocyte loss leads to right atrial and ventricular dilation lacking a specific pattern which favors the development of tricuspid regurgitation (27,29). An incidence value of RVD-TR has not been reported in the literature so far. However, 15% of a cohort of ARVC patients showed a significant TR which contributes to worsening HF by increasing RV filling pressure and decreasing RV forward stroke volume (27). ISOLATED TRICUSPID REGURGITATION Isolated (idiopathic) tricuspid regurgitation (ITR) is described as a morphologic type of TR in the absence of left-heart related causes, pulmonary hypertension or primary right ventricular diseases. Hence, it is recognized as a separate entity (other than secondary TR) (9,20,30). In the absence of pronounced pulmonary hypertension, the right ventricle shows less pronounced elongation and rather a dilation of the basal segments (11). Additionally, 3D-echocardiography with tricuspid valve analysis using 3D-quantification software revealed that the tricuspid annulus in patients with ITR was pronouncedly more dilated, planar, circular and dysfunctional than in patients with LH-TR, with less leaflet tethering and tenting volume (Figure 1C) (11). Patient characteristics are also different to LH-TR. Isolated TR mostly appears in female patients of advanced age with smaller body surface area, lower likelihood of coronary artery disease and higher rate of arterial hypertension and in particular atrial fibrillation (14). The RA shows a higher degree of enlargement. Rotational and helical blood flow within the RA is hence disrupted, particularly in atrial fibrillation, which may contribute to TR progression (10,31). Utsunomiya et al. described a prevalence of 9.2% of isolated tricuspid regurgitation in patients diagnosed with at least moderate TR (11). Topilsky et al. could show that 8.1% of significant TRs diagnosed in American community residents were of isolated origin (14). Interestingly, diastolic dysfunction (heart failure with preserved ejection fraction; HFpEF) seems to be another key mechanism in isolated TR in patients without atrial fibrillation (20). In this regard, Mascherbauer et al. found that 51% of routinely followed HFpEF patients had at least moderate secondary TR. Patients with TR had a higher pulmonary vascular resistance, reduced pulmonary compliance, and elevated left ventricular filling pressure compared to those presenting without TR (20). Therefore, tricuspid regurgitation, once diagnosed, should entail further assessment of the left ventricle, in particular with regard to diastolic dysfunction (20), and vice versa, patients with diagnosed HFpEF should be monitored for worsening RV function and TR. CLINICAL OUTCOME AND PROGNOSIS OF FUNCTIONAL TRICUSPID REGURGITATION The prognostic relevance of tricuspid regurgitation has long been recognized. Thus, as early as 2004, Nath et al.
showed that severe TR was associated with a reduced 1-year survival of around 64% in over 5,000 patients who were followed over a period of 4 years (2). More recently, the prognostic relevance of TR has been demonstrated for nearly every underlying etiology. Thus, the presence of at least moderate TR was associated with a significantly increased mortality risk (HR: 2.17; 95% CI: 1.30-3.63) in patients with prior surgical mitral valve replacement (32). Data from the German MitraClip registry revealed that patients with concomitant severe tricuspid regurgitation who underwent edge-to-edge mitral valve intervention had a higher one-year mortality (HR 2.01; 95% CI 1.25-3.23; p = 0.004) as well as MACCE rate (33). Two-year data from the COAPT trial also showed that concomitant severe tricuspid regurgitation worsened the clinical outcome of patients (composite rate of death and hospitalization for heart failure 83.0 vs. 64.3%; HR: 1.74; 95% CI: 1.24-2.45; p = 0.001) (34). However, interventional mitral valve treatment improved outcome in patients with and without significant tricuspid regurgitation. Using data from the TriValve and TRAMI registries (n overall = 228), Mehr et al. could show that simultaneous mitral and tricuspid interventional therapy in patients with both severe mitral and tricuspid regurgitation was associated with a higher 1-year survival than isolated transcatheter mitral repair (HR 0.52; p = 0.02) (35). Following transcatheter aortic valve replacement for aortic stenosis, a large registry study with 34,576 patients revealed that TR severity also correlated with mortality (HR 1.29; 95% CI 1.11-1.50; p < 0.001) and readmission (HR 1.27; 95% CI 1.04-1.54; p < 0.001) (36). More than mild TR was also found to be associated with increased mortality in patients undergoing surgical aortic valve replacement (37). Benfari et al. could highlight that among 13,026 heart failure patients with reduced ejection fraction, increased severity of TR was associated with a lower 5-year survival despite optimal medical treatment (HR 1.57; 95% CI 1.39-1.78) (19). Bartko et al. emphasized that even moderate TR may be a relevant prognostic factor in patients with HFrEF, which is why it should be taken into account when determining the most suitable therapeutic approach (38). With regard to right ventricular diseases as underlying pathology, tricuspid regurgitation has been identified as a prognostic parameter of death or need for heart transplantation during a 10-year follow-up of ARVC patients (HR 7.6; 95% CI 2.6-22.0; p < 0.001) (27). Isolated tricuspid regurgitation is also independently associated with excess mortality and morbidity. Topilsky et al. could demonstrate in a retrospective study of 353 mostly female patients that isolated TR was associated with a lower 10-year survival rate (38 ± 7% vs. 70 ± 6%; p < 0.001), in particular in the presence of atrial fibrillation, which may be explained by progressive RA remodeling in atrial fibrillation with a higher risk for right heart failure (30). These findings were confirmed by a 15-year survival analysis using data from a large American registry study. The survival rate after 15 years was significantly lower in patients with relevant ITR compared to patients with no identifiable heart disease (25.8 ± 5%; p < 0.001) (14).
TREATMENT STRATEGIES FOR TRICUSPID REGURGITATION Depending on the etiology and the morphology as well as the severity of TR and patient's risk factors individualized therapeutic regimes should be chosen. In the presence of risk factors for the development of TR echocardiographic assessments at least once per year should be considered (39). There are two main points to consider when treating TR conservatively: Symptomatic treatment of patients, e.g., with diuretics and medication for heart failure and treatment of the underlying diseases (e.g., left-heart related pathologies) (39). In the presence of relevant primary tricuspid regurgitation surgery is the treatment of choice (39,40). In patients with relevant secondary tricuspid regurgitation, tricuspid surgery is recommended in combination with a surgical left heart treatment or when patients have already undergone prior cardiac surgery, yet suffer from symptomatic tricuspid regurgitation (39,40). In patients with singular, but at least severe secondary tricuspid regurgitation, surgery represents a suitable treatment option despite high in-hospital mortality which is probably caused by too late admission with a remarkably end-organ damage (1). In general, surgical treatment of TR aims to reduce the annulus size and to restore the valve geometry. Thereby, annuloplasty with rigid rings seems to have a lower rate of recurrent TR than flexible devices or tricuspid valve reconstruction using the DeVega technique (1). Taken together, the treatment of relevant tricuspid regurgitation remains challenging. Conservative therapy over longer periods usually results in refractoriness in diuretic treatment and surgery is unsuitable for patients with high operative risk (1,14). However, in recent years, transcatheter tricuspid valve interventions for TR have been evolved and show promising results so far. Similar to the mitral valve, edge-to-edge valve repair, direct annuloplasty and valve replacement are the most commonly used treatment strategies (41,42). Yet, device selection for transcatheter tricuspid valve intervention is still based on limited experiences. Generally, treatment of very severe TR is challenging for any reconstructive system. In the presence of large coaptation gaps annuloplasty should be favored whereas in the setting of leaflet tethering edge-to-edge could be more suitable (16). CONCLUSION AND CLINICAL PERSPECTIVE Tricuspid regurgitation can develop as a result of multiple underlying disease processes, which in turn lead to different morphologic phenotypes of TR. Irrespective of its cause the presence of TR adversely affects clinical outcome. It is therefore high time that TR is not only recognized as an important treatment target but also as a prognostic factor. TR and also risk factors for the development of TR should therefore be assessed as a routine part in the work-up of most cardiologic disorders in order to better understand its natural course and thus to create appropriate individually tailored treatment strategies. AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. FUNDING We acknowledge support by the Open Access Publication Funds of the Ruhr-Universität Bochum.
Robust features of future climate change impacts on sorghum yields in West Africa West Africa is highly vulnerable to climate hazards and better quantification and understanding of the impact of climate change on crop yields are urgently needed. Here we provide an assessment of near-term climate change impacts on sorghum yields in West Africa and account for uncertainties both in future climate scenarios and in crop models. Towards this goal, we use simulations of nine bias-corrected CMIP5 climate models and two crop models (SARRA-H and APSIM) to evaluate the robustness of projected crop yield impacts in this area. In broad agreement with the full CMIP5 ensemble, our subset of bias-corrected climate models projects a mean warming of +2.8 °C in the decades of 2031–2060 compared to a baseline of 1961–1990 and a robust change in rainfall in West Africa with less rain in the Western part of the Sahel (Senegal, South-West Mali) and more rain in Central Sahel (Burkina Faso, South-West Niger). Projected rainfall deficits are concentrated in early monsoon season in the Western part of the Sahel while positive rainfall changes are found in late monsoon season all over the Sahel, suggesting a shift in the seasonality of the monsoon. In response to such climate change, but without accounting for direct crop responses to CO2, mean crop yield decreases by about 16–20% and year-to-year variability increases in the Western part of the Sahel, while the eastern domain sees much milder impacts. Such differences in climate and impacts projections between the Western and Eastern parts of the Sahel are highly consistent across the climate and crop models used in this study. We investigate the robustness of impacts for different choices of cultivars, nutrient treatments, and crop responses to CO2. Adverse impacts on mean yield and yield variability are lowest for modern cultivars, as their short and nearly fixed growth cycle appears to be more resilient to the seasonality shift of the monsoon, thus suggesting shorter season varieties could be considered a potential adaptation to ongoing climate changes. Easing nitrogen stress via increasing fertilizer inputs would increase absolute yields, but also make the crops more responsive to climate stresses, thus enhancing the negative impacts of climate change in a relative sense. Finally, CO2 fertilization would significantly offset the negative climate impacts on sorghum yields by about 10%, with drier regions experiencing the largest benefits, though the net impacts of climate change remain negative even after accounting for CO2. Introduction The Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC 2014) has warned, with higher confidence than in previous reports, that climate change is likely to adversely affect food security in many regions of the world. This is especially true in developing countries where a large fraction of the population is already facing chronic hunger and malnutrition (FAO 1999, Schmidhuber andTubiello 2007) and where widespread poverty limits the capacity to cope with climate variability and natural disasters. In such countries, progress on food security will depend partly on the effective adaptations of agriculture to climate change. Adaptation planning-such as breeding more resilient crop varieties or promoting existing varieties and practices that are more resistant to climateinduced stress (Barnabás et al 2008)-requires reliable scenarios of future regional agricultural production. 
However, producing such scenarios remains challenging because of large uncertainties in regional climate change projections, in the response of crops to environmental changes (e.g. rainfall, temperature, CO2 concentration) and in the adaptation of agricultural management to climate change (Challinor et al 2007). For example, a meta-analysis of the literature (Knox et al 2012, Roudier et al 2011) shows that projected impacts on yield in several African countries are most frequently slightly negative (−10% to −8%), but there are large variations among crops and regions as well as large modeling uncertainties, which make it difficult to provide a consistent assessment of future yield changes at the regional scale. This study performs such an assessment for West Africa. Although climate uncertainties, particularly those associated with rainfall changes, can be an important impediment to adaptation planning, there are also situations where robust changes can be identified and may allow proactive planning. Indeed, despite the widely acknowledged spread in current climate model projections of regional rainfall changes over West Africa, especially with respect to summertime rainfall totals (Druyan 2011), there is mounting evidence in climate models from the Coupled Model Intercomparison Projects that some features of future Sahel rainfall change are robust. How such robust changes affect crops needs to be investigated as a first step towards identifying the crop varieties (e.g. late or early sorghum) and practices (e.g. delayed or early sowing) most suitable to withstand climate change (Dingkuhn et al 2006). Here we assess the impacts of climate change on the yield of sorghum, one of the main staple crops in the Sudanian and Sahelian savannas of West Africa. The study extends the work of Sultan et al (2013). Using the same SARRA-H crop model forced by idealized climate forcings, that study demonstrated that higher temperatures act to increase potential evapotranspiration and crop maintenance respiration and to reduce the crop-cycle length. Warming, therefore, is simulated to cause millet and sorghum yield losses in West Africa, even in the case of increased precipitation. In this study, we do not use idealized climatic changes, and instead investigate the response to a set of complete climate projections, in which temperature and rainfall vary across the season and in an internally consistent manner. We also investigate the robustness of the climate impacts by taking into account the diversity of local cultivars, the uncertainties in future climate projections and in the response of crop models, and the crop response to CO2 increase. In the next section we introduce the climate data (from nine CMIP5 climate models that were bias-corrected and downscaled), the two crop models (SARRA-H and APSIM), and the simulation protocols. In section 3, we analyze present and future yields to identify the areas and crop cultivars most vulnerable to climate change. Since APSIM is the only crop model including a CO2 fertilization scheme, the intercomparison between the two crop models will mainly assess the robustness of the response of the crop to temperature and rainfall changes, while the direct effect of CO2 will be examined separately using solely APSIM. Finally, in section 4, we discuss our conclusions. Weather data Our main meteorological dataset comprises daily data from 35 stations in nine countries across West Africa (figure 1), compiled by the AGRHYMET Regional Center and National Meteorological Agencies for the 1961-1990 period.
This is a dry period compared to the recent decades (Panthou et al 2014), but it is the only period for which a sufficient array of daily station data has been made publicly available. These stations record rainfall and several meteorological parameters at 2 m above ground level, such as solar radiation, surface wind speed, humidity and temperature. The 35 weather stations are used to perform historical crop growth simulations for validation purpose against crop yield data and to estimate the bias-correction functions. For the crop future simulations we select 13 out of the 35 stations (figure 1); these 13 stations are more evenly distributed across the study area and the aggregated results are thus representative of the whole region, avoiding over-representing any specific area. Climate scenarios and bias correction technique We use historical simulations and the RCP8.5 projections from 9 CMIP5 (Taylor et al 2012) models 8 , the choice of the models was based solely on the availability of daily values of precipitation and of mean, maximum, and minimum surface temperature at the time of the study. Output from more than 20 such GCMs is available today, however the 9-model subset is a reasonable representation of the larger ensemble over Western Africa. Other variables (i.e. wind, humidity and radiation) necessary for forcing the crop models were obtained from the historical records of weather stations, based on a conditional resampling to preserve the covariance between these variables and precipitation. The historical time series of precipitation and temperature were first bias-corrected and downscaled to the necessary field scale, following a method adapted from Piani et al (2010). The basic idea of the method is to (i) sort by increasing accumulation the values of daily rainfall observed at a station and produced by a climate model for the same period and (ii) use a parametric function to fit the emerging transfer function (TF) that will map the model data to observations. The preponderance of low-intensity accumulation ensures that the fit is most accurate for the most frequent rainfall rates, distinguishing this method from other approaches that directly match the rainfall probability density function (PDF) of the model to the observed. The use of a parsimonious parametric fit prevents overfitting of the data, especially for intense events. The fit is either linear for all rainfall values, or linear for high rainfall intensity, but curving at low intensity. This functional form corrects the common bias of too many drizzle days and not enough dry days and large-accumulation days. The dry-day correction is determined internally by the fitting process and does not constitute a separate pre-processing step as in other bias-correction methodologies. The temperature terms are corrected with a similar method, but here the fit to the transfer function is always linear. Following Piani et al (2010), the three temperature characteristics are corrected together (as mean daily temperature, daily range, and temperature skewness) to avoid large relative errors in the daily temperature range. Data for a singular calendar month is used to fit a set of twelve TFs, which are then interpolated to obtain a diurnally resolved TF. The TFs are derived from the historical runs for the period for each individual model, and then are applied to the scenario simulations to obtain an ensemble of forcings for the 2031-2060 crop simulations. 
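The sketch below illustrates the idea of the parametric transfer-function correction described above: model and station daily rainfall are rank-paired, a parsimonious curve (linear at high intensity, curving at low intensity) is fitted, and the fitted function is then applied to scenario output. The functional form, helper names and synthetic data are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' code) of the parametric transfer-function
# bias correction described above, adapted from the idea in Piani et al (2010).
# The functional form, names and synthetic data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def transfer_function(x, a, b, tau):
    # Linear at high intensities; curving toward zero at low intensities, which
    # corrects the common excess of drizzle days in the model.
    return (a + b * x) * (1.0 - np.exp(-x / tau))

def fit_rainfall_tf(model_daily, obs_daily):
    """Fit the TF on rank-paired (sorted) model and observed daily rainfall."""
    x, y = np.sort(np.asarray(model_daily)), np.sort(np.asarray(obs_daily))
    n = min(x.size, y.size)
    q = np.linspace(0.0, 1.0, n)
    xq = np.interp(q, np.linspace(0.0, 1.0, x.size), x)
    yq = np.interp(q, np.linspace(0.0, 1.0, y.size), y)
    popt, _ = curve_fit(transfer_function, xq, yq, p0=[0.0, 1.0, 1.0])
    return popt

def apply_rainfall_tf(model_daily, popt):
    corrected = transfer_function(np.asarray(model_daily, dtype=float), *popt)
    return np.clip(corrected, 0.0, None)            # no negative rainfall

# Example with synthetic data: a "model" with too many light-rain days.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.4, scale=12.0, size=3000)   # station rainfall (mm/day)
mod = rng.gamma(shape=0.8, scale=5.0, size=3000)    # biased GCM rainfall (mm/day)
params = fit_rainfall_tf(mod, obs)
future_corrected = apply_rainfall_tf(mod, params)   # would be applied to RCP8.5 output
```

In the study itself, separate TFs are fitted per calendar month and per GCM from the historical runs before being applied to the scenario simulations.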
It was shown by Chen et al (2011) that the large interdecadal variability in Sahel rainfall characteristics implies that a TF based on data from one epoch does not fully remove model bias in a different epoch. We have attempted to ameliorate this problem by including, when possible, longer records in our observational datasets (additional rainfall data was provided by Adrian Tompkins in personal communication) and by pooling nearby stations together, so that the observed rainfall characteristics targeted by the TF are as broadly representative as possible. The flipside of this choice is that the forcing we produced should be considered representative of broader regions, rather than of the exact location of the meteorological stations. We argue that this loss of specificity is not problematic. In light of the large uncertainties in projections even at the regional scale, it would be unwise to give downscaled projections the status of bona fide local forecasts. In this study, we interpret them as regional scenarios. Panels a and b in figure 2 show the sorted daily precipitation for southern Burkina Faso and northern Benin in the CSIRO and MIROC models against observations for the rainy season months. The two models have similar biases in the weak precipitation range-i.e. an overestimation of drizzle days-but very different behaviors in the high intensity range, with the MIROC model producing exuberant precipitation (well below the 1:1 line), while the CSIRO model displays the most typical bias of muted precipitation in intense events (above the 1:1 line). The functional fits of the TFs are shown as solid lines, and are capable of capturing both of these divergent behaviors, so that the end result of the bias correction and downscaling is to produce similar daily values for both models (as shown in the insets; see also figure S1 in the supplementary material, available at stacks.iop.org/ERL/9/104006/mmedia). The sorghum cultivars simulated in this study were calibrated against field trials in Mali, at a site with a mean maximal (minimal) daily temperature of 34.4°C (21.9°C) and a mean annual rainfall of 900 mm per year. The three cultivars (described in several previous studies, e.g. Kouressy et al 2008a, b) fall into two different categories: (i) the 'traditional' varieties (GuineaLo and GuineaAm), which have moderate to strong photoperiod sensitivity and a flexible crop cycle length but smaller average yields; and (ii) a 'modern' variety (Caudat), which is an early-maturing, short-duration and photoperiod-insensitive crop selected to maximize the mean yield under optimal fertility conditions. On-farm surveys, mainly in Mali but also in Senegal, Burkina Faso and Niger (Traoré et al 2011), have shown that 'modern' varieties have so far been adopted by only a minority of farmers. Kouressy et al (2008a) demonstrated that this limited adoption might be explained by the weak adaptation of short, fixed-duration crops to semi-arid environments. Indeed, they must be sown at a specific date in order to synchronize the flowering stage with the end of the rainy season to avoid drought, pest and disease problems, while photoperiod-sensitive cultivars have the advantage of permitting flexible sowing dates. Furthermore, Sultan et al (2013) have shown that traditional photoperiod-sensitive cultivars are less affected by temperature increases, because the photoperiod limits the heat-induced reduction of the crop duration. 2.3.2. Crop modeling tool.
In order to span some of the uncertainty in crop modeling, which has been shown to be an important contributor to overall uncertainty in climate change impacts (e.g., Asseng et al 2013), we use two different crop models calibrated against the same trials data: SARRA-H and APSIM. These models differ in their treatment of nutrients, CO2, sowing date and more (see table S1 in the supplementary material), and agreement in their simulations might indicate a robust response of sorghum to climate change in West Africa. The SARRA-H model (version v.32) simulates the yield attainable under water-limited conditions by simulating the soil water balance, potential and actual evapotranspiration, phenology, potential and water-limited carbon assimilation, and biomass partitioning (see Kouressy et al 2008a, b). The SARRA-H model does not explicitly simulate the effects of fertilizer, manure application, or residue on crop yields. However, the impact of soil fertility was taken into account by tuning the biomass conversion ratio to an optimal level for the modern variety and to a lower level for traditional varieties that are usually cropped with low to no inputs. The APSIM model explicitly simulates the nitrogen cycle. Fertilizers, manure, and residues on the surface can be removed or added, be incorporated into soil during tillage operations, and decompose. Crop nitrogen uptake is the minimum of the demand for crop growth and the potential supply of nitrogen from soil and senescing leaves, and it is capped by a maximum nitrogen uptake rate (van Oosterom et al 2010). Nitrogen stress impacts photosynthesis, phenology and grain filling processes (Hammer et al 2010). For this study, two levels of urea fertilizer with nitrogen content of 10 kg ha−1 and 50 kg ha−1 are applied every year at the time of sowing. These two levels represent low and medium fertilizer inputs, with the low level close to actual practice over the historical period 1961-1990 (Heisey and Mwangi 1996). We chose not to reset the soil fertility parameters (including soil organic matter and soil nitrogen contents) in the APSIM simulation, as this approach represents a more realistic transient evolution of soil fertility in West Africa. It is well known that crop growth continues to withdraw soil nutrients when there are not enough inputs, which further endangers regional food production (Sheldrick and Lingard 2004, Roy et al 2003). We treat the historical and future simulations in APSIM with the same initial soil fertility condition, so that the two sets of simulations can be compared fairly to assess the climate impacts. Sowing dates were generated by the two crop models following two different rules. In SARRA-H, sowing starts when plant-available soil moisture is greater than 8 mm at the end of the day, followed by a 20 d period during which crop establishment is monitored. If the simulated daily total biomass decreases during 11 out of 20 d, the juvenile crop is considered to have failed, triggering automatic re-sowing. Such agronomic criteria have been shown to be close to the farmers' planting rules in Niger (Marteau et al 2011). For the APSIM model, we first define a possible temporal window for sowing centered at the rainy season onset following the AGRHYMET definition (Brown et al 2010). The sowing date is the last day of the first 10 continuous days (in the sowing window) with a rainfall accumulation of 20 mm, provided that at this date plant-available soil water is above 10 mm.
If the above criteria are never satisfied, the last day of the sowing window is defined as sowing day. Crops can be killed by a variety of stresses, usually at the late stage of the phenology phases, and re-sowing is not implemented in the APSIM. The plant response to CO 2 concentration is not included in the present version of the SARRA-H model. However, we used an APSIM version that has incorporated the CO 2 fertilization scheme. The CO 2 fertilization is achieved through linearly increasing the transpiration efficiency by 37% at 700 ppm compared with at 350 ppm (Harrison et al 2014) with no direct effect of CO 2 on radiation use efficiency. The calibration of the SARRAH model against trials data in Mali has been detailed by Kouressy et al (2008a, b). For this study, we calibrate the APSIM model against the same trials data. We use the DEVEL software (Clerget et al 2008, Kumar et al 2009 to optimize parameters of thermal time requirements for phenological stage, photoperiod sensitivity, leaf appearance rate and leaf area function. All the APSIM calibrated parameters and cultivar-specific parameters are listed in table S2 in the supplementary material. 2.3.3. Crop models evaluation protocols. For validation purpose, the two crop models were run for all 35 meteorological stations in West Africa over 1961-90 and for all three sorghum cultivars. Since there are no existing data giving the proportion of each cultivar in the whole cropped area of sorghum in West Africa, we assumed the same proportion of each cultivar in each site and average across cultivars. We also made the assumption that soil and management practices were the same in the 35 locations. Although local variations of soils and management can have a major effect on crop yields, this assumption is possible because of the relative uniformity of the soil (over 95% of soils in this region are sandy with low levels of organic matter, total nitrogen, and effective cation exchange capacity: see Bationo et al 2005) and management practices (little or no agricultural inputs, no irrigation, sowing after the first major rain event: see Marteau et al 2011). Simulations were performed without any irrigation since most crop systems are rainfed (93% of all agricultural land in Sub-Saharan Africa) and, to our knowledge, irrigation is never used for sorghum in West Africa. For validation purpose, following the work done by Sultan et al (2013), we scaled-up the crop yield simulations by simply averaging the crop yields of each of the 35 locations. Simulated crop yields were validated against Food and Agriculture Organization of the United Nations (FAO) annual data submitted by its member nations. We extracted national sorghum yields from the FAO on-line database (http://faostat. fao.org/) and computed an average of countries' national yield (Senegal, Mali, Burkina Faso, Niger, Guinea, Gambia, Guinea Bissau, Togo, Benin) over the 1961-1990 period, weighted by the national cultivated area for sorghum. Both mean and variability of simulated yields were validated against FAO observations. We assess model fidelity from the correlation between observed and predicted crop yield time series, after removal of any linear trends. FAO sorghum yields in some countries show increasing yields over 1961-1990, while others face decreasing values (Chad, Niger). 
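The two sowing rules described above can be expressed as a small decision procedure. The sketch below is a minimal illustration under assumed inputs (daily rainfall and plant-available soil water in mm); it is not the SARRA-H or APSIM source code, and the biomass-based re-sowing check in SARRA-H is left to the crop model itself.

```python
# Minimal sketch of the two sowing-date rules described above (assumed inputs:
# daily rainfall and plant-available soil water, both in mm, day index from 0).
import numpy as np

def sarrah_sowing_day(plant_available_water):
    """SARRA-H: first day with plant-available soil moisture > 8 mm at day's end.
    The subsequent 20-day establishment check and automatic re-sowing are
    handled inside the crop model and are not reproduced here."""
    candidates = np.flatnonzero(np.asarray(plant_available_water) > 8.0)
    return int(candidates[0]) if candidates.size else None

def apsim_sowing_day(rainfall, plant_available_water, window_start, window_end):
    """APSIM: last day of the first 10 consecutive days (inside the sowing
    window) accumulating 20 mm of rain, provided plant-available soil water
    exceeds 10 mm on that day; otherwise the last day of the window."""
    rain = np.asarray(rainfall)
    paw = np.asarray(plant_available_water)
    for start in range(window_start, window_end - 8):
        day = start + 9                      # last day of the 10-day run
        if day > window_end:
            break
        if rain[start:day + 1].sum() >= 20.0 and paw[day] > 10.0:
            return day
    return window_end

# Example with synthetic daily series (365 values, day 0 = 1 January).
rng = np.random.default_rng(1)
rain = rng.gamma(0.3, 10.0, size=365)
paw = np.clip(np.cumsum(rain - 4.0), 0.0, 60.0)   # crude soil-water proxy
print(sarrah_sowing_day(paw), apsim_sowing_day(rain, paw, 150, 210))
```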
Local climate fluctuations may play a role in these trends, but non-climatic factors are likely to be the dominant drivers (land-degradation, intensification, intra or extra-national migrations, economic crisis). Because these potential nonclimatic effects will not be simulated by any climate-driven crop model, we detrend observations and only analyze interannual variability. The linear trend equation for FAO data is yield = 1.3*year + 568. Since sorting out climatic and non-climatic trends in yield is not possible, detrending might also remove potential climate effects. In such a case, climate trends would force the simulated yields to show a trend as well. Therefore, the only way to make a comparison between observations and simulations fair is to detrend both. Thus we also remove the linear trend from simulated yield time series. The linear trend equation for crop models outputs are (i) yield = −38.2*year + 2839; yield = −28.7*year + 3323 for the APSIM simulations with a fertilization rate of respectively 10 kg ha −1 and 50 kg ha −1 ; and (ii) yield = −15.8*year + 2026 for the SARRA-H crop model. The strong declining trends in the APSIM crop yields are due to the soil fertility loss since there was no resetting of mineral inputs after the start of the simulation. 2.3.4. Crop models scenarios protocols. The calibrated APSIM and SARRA-H models were then used to simulate the response to climate change of the three cultivars of sorghum in West Africa. We thus forced the two crop models with bias-corrected outputs at each of the 13 selected locations from the nine GCMs over both the historical period 1961-1990 and the future period 2031-2060 under the RCP8.5 scenario. The difference between the two sets of simulations indicates the yield response to climatic changes over the intervening decades. To investigate the crop response to the elevated CO 2 concentration in the atmosphere in the RCP8.5 scenario, we performed two 2031-2060 simulations with the APSIM model: one where CO 2 concentration is at 520 ppm, and one where CO 2 concentration is kept at the historical value of 350 ppm. The latter simulation is comparable with the SARRA-H crop simulation. Comparison of the two APSIM simulations provides an estimate of future CO 2 fertilization. Evaluation of the simulated yields under the historical period When looking at the deviation from the trend line with standardized yield values, the average sorghum yield simulated for our 35 locations agrees well with the observed yield variability derived from country statistics ( figure 3). Indeed, when removing trends, crop yield variability is a response to climatic fluctuations in the FAO observations with a correlation coefficient between FAO yields and annual rainfall of R = 0.62 (see rainfall time series in figure 3). The effect of heat stress is masked by the strong relationship of drought and heat in the historical record (global warming has only recently decoupled high temperature from drought, see Funk et al 2012). Although overestimated, this relationship between rainfall and crop yield is well represented in the crop models (table 1). As a consequence, both APSIM and SARRA-H capture the low yields of the drought years in the early 70 s and early 80 s, and they both fail to capture yields variations uncorrelated with rainfall (such as in 1967 and 1986). 
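The detrending step described above amounts to removing an ordinary least-squares linear trend from both the FAO and the simulated yield series and then correlating the residual interannual anomalies. The sketch below illustrates this on synthetic series; only the trend slopes and intercepts echo the equations quoted above, and the year indexing and noise terms are assumed for illustration.

```python
# Sketch of the validation procedure described above: detrend both yield series
# with a linear fit, then correlate the interannual anomalies. Synthetic data.
import numpy as np

def detrend(series):
    t = np.arange(series.size, dtype=float)
    slope, intercept = np.polyfit(t, series, deg=1)
    return series - (slope * t + intercept)

rng = np.random.default_rng(2)
years = np.arange(1961, 1991)
t = years - years[0]
climate = rng.normal(0.0, 1.0, years.size)               # shared interannual signal
fao_yield = 568.0 + 1.3 * t + 60.0 * climate + rng.normal(0, 30, years.size)
sarrah_yield = 2026.0 - 15.8 * t + 250.0 * climate + rng.normal(0, 100, years.size)

r = np.corrcoef(detrend(fao_yield), detrend(sarrah_yield))[0, 1]
print(f"inter-annual correlation of detrended yields: r = {r:.2f}")
```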
The inter-annual correlation coefficient between simulated and observed detrended yields is R = 0.70 for the SARRA-H model and R = 0.52 for the APSIM simulations run with 50 kg ha −1 fertilizer rate. The correlation is lower (R = 0.44) when using the 10 kg ha −1 fertilizer rate in the APSIM model, as nutrient-deficient soils create highly nitrogen-stressed environments in which plants are not able to take advantage of increased water availability because of nitrogen stress (which exists in both high and low rainfall years). Although our models capture quite well the variability of crop yields, the mean yield and the yield variance (table 1) are overestimated, especially with the APSIM simulations with the highest fertilization rates. Such an overestimation is a common shortfall of crop simulations for Sub-Saharan Africa: crop models are usually calibrated against data collected in controlled environments and thus do not account for nonclimatic factors like pests, weeds and soil-related constraints (Challinor et al 2004, Challinor et al 2005, Bondeau et al 2007. Furthermore, spatial heterogeneity in management, planting dates, cultivars, soils and other factors likely also reduce interannual variability in FAO yields compared to more homogeneous simulations performed in this study. The assumption we make in this study is that this positive mean bias in crop production is relatively constant in time, and thus that the simulated (climate-driven) yield mean and variability can still be compared between historical and future periods. Finally, in order to assess whether the downscaled GCMs adequately represent observed climate conditions for crop model applications, we compared mean yields simulated under the historical period by using observations or downscaled GCMs outputs to force the two crop models (figure S1). We found that although there is some dispersion from one GCM to another, the multi-model ensemble mean yield under historical conditions is very close to the mean yield simulated with observed weather data (figure S1). Figure 4 shows the annual rainfall (figure 4 (a)) and the mean surface temperature (figure 4(b)) changes under the scenario RCP8.5 for the 2031-2060 period. The changes are computed as averages across the nine GCM simulations for each of the 13 stations. Future changes in rainfall clearly depict a West-East dipole with annual rainfall increasing in eight stations located in Central Sahel while rainfall is stagnating or decreasing in stations located in the Western part of the Sahel. The rainfall increase can reach +100% in two Southern stations located in Burkina Faso and Ghana. Regional mean rainfall changes for the full set of stations and subsets in the Western and Central Sahel are shown for each individual GCM in figure 4(c). The dipole pattern is a very consistent feature across GCM simulations with 7 out of 9 GCM simulating less rainfall in the future in the Western Sahel and 8 out of 9 GCM showing a rainfall increase in Central Sahel. Although the precise location of the separation line in the dipole varies, a similar rainfall response in the Sahel is found in several studies using different subsets of CMIP5 models Figure 5 shows that the rainfall deficit is essentially concentrated in early monsoon season in the Western part of the Sahel in June-July (figure 5(a)) while positive rainfall changes are found in late monsoon season all over the Sahel in September-October ( figure 5(b)). 
This suggests a shift in the seasonality of the monsoon, which may start later in the Western part of the Sahel and become more active in fall, especially in Central Sahel. This change in seasonality is also very consistent with previous studies using completely different sets of GCM simulations from the CMIP3 (Biasutti and Sobel 2009) and the CMIP5 (Biasutti 2013). Climate change scenarios In contrast with the rainfall changes, the pattern of temperature change (figures 4(b) and (d)) is quite homogeneous in longitude while presenting a latitudinal gradient: the warming is more intense in the Northern Sahel, where temperature changes exceed +3°C in some stations (figure 4(b)). The mean warming is about +2.4°C, but the spread between GCMs is large, ranging from +2.0°C (MIROC5 and MPI-ESM-MR) to +3.9°C (IPSL-CM5A-LR) for the 13-station average. The spread is due to differences across models both in climate sensitivity and in rainfall changes (more rain is associated with cooler surface temperatures). Impacts on sorghum yields In response to climate change, mean sorghum yields decrease in 12 out of 13 stations when we average simulations from the two crop models and the three varieties of sorghum (figure 6(a)). The mean yield loss over West Africa is −13%, consistent with the meta-analysis of Roudier et al (2011), which found a mean yield loss in West Africa of about −10%. Here we find that this negative impact follows the West-East dipole of rainfall changes, with a larger yield loss in Western stations (14-29%) than in the Central Sahel, where the mean yield change ranges from −13% to +7%. Consistent with a dominant role for heat stress, simulated crop yields tend to decrease in the future even where rainfall increases, as in the Central part of the Sahel. The decrease of mean yield in 2031-2060 is a robust feature in our simulations, since there is good agreement across climate and crop models in the yield decrease (figure 6(b)). More than 90% of all simulations lead to a yield decrease in 4 out of 6 stations located in the Western Sahel. The agreement between simulations in a projected yield decrease is lower in the Central Sahel. This is to be expected given that in the Western Sahel both decreasing rain and warming temperatures tend to suppress yields, while in Central Sahel rain and temperature changes act in opposite directions. In addition to the reduction of mean yield, future projections also show an increase of the year-to-year variability of sorghum yields (figure 6(c)), especially in the Western Sahel, where some stations experience an increase of relative yield variability of more than 20%. Changes in yield variability are not consistent across all simulations, but are robust in the Western Sahel, where yield is reduced and thus relative variability is projected to increase from 53 to 85% (figure 6(d)). The impact of climate change on the mean crop yield is remarkably similar between SARRA-H and APSIM with the 10 kg ha−1 fertilization rate (figure 7(a)). Averaged across West Africa, the two models simulate a yield loss of the same magnitude (−10.0% and −10.8% respectively for SARRA-H and APSIM) as well as a more pronounced impact in the Western Sahel (−16.5% and −19.6% respectively). The increase of crop yield variability is also very consistent between the crop models, both in magnitude and in its spatial pattern (figure 7(b)). In the Western Sahel the increase of yield variability is +20.2% and +30.8% respectively for SARRA-H and APSIM.
Such consistency between two completely different crop models affirms the robustness of the projections of crop yields under climate change scenarios. While a higher fertilization rate (50 kg ha−1 vs 10 kg ha−1) increased absolute yield in the APSIM simulations, the climate-induced yield reductions were larger for the high nitrogen fertilization case. A higher fertilization rate leads to a more detrimental impact of climate change everywhere in the Sahel, with a decrease of mean crop yield of −17.8% and an increase of variance of almost 25% on average across the 13 stations in West Africa. The different APSIM results from the two fertilizer rates demonstrate that when nitrogen stress is decreased or minimized, sorghum yields become more responsive to the water and heat stresses brought forth by climate change (i.e. robust rainfall decreases and temperature increases). The short-duration, modern variety of sorghum tends to be more resilient to the adverse effects of climate change than the two traditional varieties, in that both the yield loss and the variability increase are weaker (figure 8). This is especially true in the APSIM model, which simulates large differences between cultivars and even positive impacts in Central Sahel with the modern short-duration variety (figure S2 in the supplementary material). The advantage of the short-duration, photoperiod-insensitive cultivar might be a consequence of the seasonality shift of the monsoon, with less rainfall at the beginning of the rainy season and more rainfall in the late monsoon season. Indeed, in both crop models, sowing is delayed (by 7.3 and 4.1 d respectively in SARRA-H and APSIM; table S3 in the supplementary material). In the SARRA-H model, this delays maturity of the photoperiod-insensitive cultivar by 3.3 days. Warming still acts to shorten the crop cycle length, but the delay in maturity date ameliorates this effect. Such a delay in maturity is not simulated for the two photoperiod-sensitive varieties (table S3), for which the flowering date is relatively independent of sowing date (Kouressy et al 2008a). The photoperiod-sensitive varieties thus have a larger reduction of their growing season compared to the modern photoperiod-insensitive variety and are not able to take advantage of more rainfall in the late monsoon season. [Figure caption: Left panel: relative changes (%) between 2031-2060 and 1961-1990 in mean yield of sorghum (a) and in coefficient of variation of yields (c). Simulated yields are mean yields from the two crop models and the three varieties of sorghum. Right panel: agreement across all model simulations (in %) in the decrease of mean yield (b) and the increase of the coefficient of variation (d) in the 2031-2060 period. This agreement is computed using the nine GCMs, the two crop models and the three varieties of sorghum. In both panels, a weight of ¼ has been given to the two APSIM runs (10 kg ha−1 and 50 kg ha−1) and a weight of ½ to the SARRA-H runs to avoid oversampling of the APSIM simulations.] [Figure caption: relative changes (%) between 2031-2060 and 1961-1990 in mean yield of sorghum averaged across the 9 GCM simulations and the three varieties of sorghum. The relative changes are computed as averages across the 13 stations (all stations), across the 6 western stations (Western Sahel) and across the 6 eastern stations (Central Sahel). Bottom panel: same but for the relative change in the coefficient of variation.] Finally, we show the effect of CO2 in the APSIM model (figure 9).
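The ¼/¼/½ run weighting noted in the figure caption above can be applied as a simple weighted average across the crop-model runs. The sketch below shows this pooling on a synthetic array of yield changes; the array layout and values are assumptions for illustration.

```python
# Sketch of the ensemble pooling described in the figure caption above: each
# GCM x cultivar combination is averaged over the crop-model runs with weights
# 1/2 (SARRA-H), 1/4 (APSIM 10 kg N/ha) and 1/4 (APSIM 50 kg N/ha) so that the
# two APSIM runs are not over-sampled. Array layout and values are assumptions.
import numpy as np

rng = np.random.default_rng(3)
# yield_change[gcm, cultivar, run]; run order: SARRA-H, APSIM-10, APSIM-50 (%)
yield_change = rng.normal(-10.0, 8.0, size=(9, 3, 3))
run_weights = np.array([0.5, 0.25, 0.25])

pooled = np.tensordot(yield_change, run_weights, axes=([2], [0]))   # shape (9, 3)
mean_change = pooled.mean()
# Weighted fraction of simulations projecting a yield decrease ("agreement").
agreement = np.tensordot((yield_change < 0).astype(float), run_weights,
                         axes=([2], [0])).mean()
print(f"pooled mean change: {mean_change:.1f}%, agreement on decrease: {agreement:.0%}")
```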
CO2 fertilization increases the crop yields by 6-10% across the whole region, though the net impact of climate change on crop yields is still negative after accounting for CO2 effects, except in Central Sahel. Different fertilizer inputs have a slight impact on the positive benefits of CO2 fertilization (10 kg ha−1: +7.5%; 50 kg ha−1: +9.6%), and the three cultivars show little difference in the benefits (figure S3 in the supplementary material). However, the impacts of CO2 fertilization vary clearly with mean annual rainfall, with dry areas having the largest benefits (figure 9(b)). APSIM simulates the CO2 fertilization effects through an increase in transpiration efficiency, thus drier areas benefit more than wetter ones from the increased water use efficiency. Summary and discussion We assess the impacts of near-term climate change on the mean and variability of yields for traditional and modern sorghum varieties in West Africa, accounting for uncertainties both in future climate scenarios and in crop models. We constructed regional bias-corrected forcings from nine GCMs extracted from the CMIP5 archive and used two crop models (SARRA-H and APSIM) with different treatments for nutrients and other key variables to obtain the most robust projections to date of future crop yields in this region. This approach emphasizes the range of possible outcomes for the region, at the expense of trying to determine the most likely outcome for any given locality. Thus, we pool together results from climate and crop models with diverse skills and sensitivities, and we make no attempt to determine the exact soil and treatment conditions for each station in the analysis. In viewing our results, one should be mindful of both the qualitative robustness of the climate change signal and the quantitative range of possible outcomes. In West Africa, future climate projections from our subset of bias-corrected GCMs show a mean warming of +2.8°C in 2031-2060 compared to our 1961-1990 baseline period. This warming is accompanied by robust changes in rainfall showing a West-East dipole, with less rain in the Western part of the Sahel (Senegal, South-West Mali) and more rain in Central Sahel (Burkina Faso, South-West Niger). The rainfall deficit is essentially concentrated in the early monsoon season in the Western Sahel, while positive rainfall changes are found in the late monsoon season all over the Sahel. Both the West-East dipole in mean rainfall changes and the late start of the monsoon are consistent with previous studies using raw output from larger GCM ensembles. In our simulations, climate change leads to a decrease in sorghum yields everywhere in West Africa-even in the Central Sahel where rainfall is increasing. In addition, the coefficient of variation of yields increases, which might indicate a greater risk of crop failures under a warmer climate. These findings are robust across the two crop models used in this study and are consistent with previous findings for C4 crops, e.g. maize (Jones and Thornton 2003, and Schlenker and Lobell 2010), millet, and sorghum (Sultan et al 2013). All these studies confirm that temperature increase is the main driver of adverse yield changes in the future. To define adaptation strategies for agriculture in Africa, we must be able to identify the most vulnerable areas and to specify crop varieties with the most robust characteristics for withstanding climate change.
Here we find that the impacts of climate change are greatest in the Western part of the Sahel (mean yield losses of some 16-20% and increased interannual variability), where the projected warming is associated with a decrease of rainfall, especially during the early monsoon season. East-West differences in climate and impacts projections are a highly consistent feature across the climate and crop models used in the study. [Figure caption: relative changes (%) between 2031-2060 and 1961-1990 in mean yield of sorghum on average across the 9 GCM simulations for each of the three varieties of sorghum. Simulated yields are mean yields from the two crop models; a weight of ¼ has been given to the two APSIM runs (10 kg ha−1 and 50 kg ha−1) and a weight of ½ to the SARRA-H runs to avoid oversampling of the APSIM simulations. Results per crop model are shown in figure S1. The relative changes are computed on average across the 13 stations (all stations), across the 6 western stations (Western Sahel) and across the 6 eastern stations (Central Sahel). Bottom panel: same but for the relative change in the coefficient of variation.] Our simulations also show that the effect of climate change is not identical for all cultivars of sorghum: adverse impacts on mean yield and yield variability were found to be lowest for modern cultivars with a short and nearly fixed growth cycle. This finding is in contrast with the conclusions of Sultan et al (2013): using the same SARRA-H model, they found that modern cultivars were most susceptible to climate change. That study only considered uniform changes in rainfall patterns, but we suggest that changes in the seasonality of the monsoon-with less rainfall at the beginning of the rainy season-can greatly affect crop growth. Indeed, in our simulations, the seasonality shift leads to delayed sowing in both crop models, which shortens the portion of the rainy season available to the crop and makes short-duration varieties better adapted in the future. This result is consistent with the study of Kouressy et al (2008a), which demonstrated that potentially high-yielding and photoperiod-insensitive modern cultivars display an advantage where the rainy season is short. In our case modern varieties offer a double benefit of higher yields and more resilience to climate change. In future scenarios, with low nitrogen stress (50 kg ha−1), the APSIM model simulates yields that are 68% higher with modern varieties compared to the two other varieties. This yield benefit can be up to 128% with the SARRA-H model, which has no nitrogen stress. The interaction between water stress and nitrogen stress in the nutrient-deficient Sahel is another interesting emerging pattern. When the soil receives few inputs and is over-exploited, as it often is in the Sahel (Sheldrick and Lingard 2004, Roy et al 2003), the crop system is much less responsive to changes in other environmental variables (e.g. temperature and rainfall). Particularly in the Sahel, the benefits of reduced water stress from increased rainfall can be largely offset by the increased nitrogen stress induced by leaching. This is why increasing fertilizer inputs can make the Sahel agricultural system more responsive to climatic stresses and produce more negative impacts (in a relative sense, %) on crop yields under climate change, though the absolute yield would increase by 30% from 10 kg ha−1 to 50 kg ha−1 (table S3).
Our results are consistent with another modeling study (Turner and Rao 2013), which shows that the impact of warming is minimal for low-input, small-holder sorghum farmers in some African regions, as these systems are strongly nutrient-stressed. Thus, while increasing fertilizer inputs and correcting nutrient imbalances increase overall food production and have fundamental benefits for the agricultural development of Africa (Vitousek et al 2009), the trade-off is that the improved agro-systems would be more sensitive to climate change. The impact of higher atmospheric CO2 concentration is a major source of uncertainty in crop yield projections (Soussana et al 2010, Roudier et al 2011). There is an ongoing debate about the extent of the impacts of CO2 fertilization on crop yields in observations and models (Long et al 2006, Ainsworth and Long 2005). In our simulations, CO2 fertilization would significantly reduce the negative climate impacts, increasing sorghum yields on average by 10%, and drier regions would have the largest benefits. This estimate, based on the APSIM model, is much higher than in a previous study for C4 crops (Berg et al 2013), though both studies agree that the largest impacts happen in arid regions. The only effect of CO2 in APSIM-sorghum is to increase the transpiration efficiency (by 37% when the CO2 concentration rises from 350 ppm to 700 ppm), which increases the water use efficiency and as a result has more benefits for dry regions or drought years. The differences among various crop models (Tubiello and Ewert 2002) as well as between model simulations and field experiments (Ainsworth et al 2008) are still large, and these differences highlight the large uncertainties in this critical issue. Future research based on observations is urgently needed to clarify how best to model the impacts of CO2 fertilization. However, CO2 fertilization effects are unlikely to modify the main conclusions of this study. Indeed, even after accounting for CO2, yield losses remain larger in the Western part of the Sahel. [Figure 9 caption: Effects of CO2 on climate change impacts on crop yields. (a) Effects of CO2 on relative changes (%) in mean crop yields. The solid and dashed lines refer to 10 kg ha−1 and 50 kg ha−1 fertilizer inputs. The results are averaged over all GCM model ensembles and three crop cultivars. (b) Station-level relative yield changes (%) averaged over both nutrient levels with or without CO2 effects, and also the benefits of CO2 fertilization in terms of relative changes in crop yields, as a function of mean annual precipitation (mm/year).]
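As a worked illustration of the CO2 scheme just described (a 37% transpiration-efficiency increase between 350 and 700 ppm, interpolated linearly), the snippet below evaluates the multiplier at the 520 ppm level used for the 2031-2060 APSIM runs; the function name is an assumption for illustration.

```python
# Linear transpiration-efficiency scaling implied by the scheme described above:
# +37% at 700 ppm relative to 350 ppm, interpolated linearly in CO2. Only the
# 37% gain and the 350/700 ppm anchor points come from the text.
def te_co2_multiplier(co2_ppm, base=350.0, ref=700.0, gain=0.37):
    return 1.0 + gain * (co2_ppm - base) / (ref - base)

# At the 520 ppm used for the 2031-2060 APSIM runs this is roughly a 1.18x
# transpiration efficiency relative to the 350 ppm control simulation.
print(round(te_co2_multiplier(520.0), 3))   # 1.18
```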
Feasibility of a New Granular Rapid Release Elemental S Fertilizer in Preventing S Deficiency of Canola on a S-Deficient Soil Our previous research has indicated that granular elemental S (ES) fertilizers are not effective in the year of application and also are not consistently as effective as sulphate-S in increasing seed yield of canola in subsequent years, especially when applied at seeding in spring, because of slow dispersion of elemental S particles from granules for subsequent oxidation of ES to sulphate-S. A field experiment was established in autumn 2010 to determine the relative effectiveness of a new rapid release elemental S (RRES, now called Vitasul) fertilizer, in comparison to sulphate-S fertilizer, with various combinations of application times and placement methods (applied at 20 kg∙S∙ha−1) on seed yield, straw yield, oil and protein concentration in seed, N and S uptake, partial factor productivity (PFP—kg∙seed∙kg−1 applied N∙ha−1—blanket application of 120 kg∙N∙ha−1), S use efficiency (SUE—increase in kg∙seed∙kg−1 applied S∙ha−1) and percent recovery of applied S in seed + straw (%) of canola in the 2011, 2012 and 2013 growing seasons on a S-deficient Gray Luvisol loam soil at Star City, Saskatchewan. The 11 treatments included two granular S sources (RRES and potassium sulphate) and five application time/placement method combinations (broadcast in autumn and incorporated in spring, broadcast in spring pre-tillage [broadcast and incorporated], broadcast in spring pre-emergence, sideband in spring and seedrow-placed in spring), plus a zero-S control. There was a significant response of seed yield of canola to applied S in all 3 years, but the responses varied with S source and with application time-placement combinations in different years. Seed yield increased considerably with all sulphate-S treatments compared to the zero-S control, although seed yield tended to be slightly lower in some spring and/or autumn broadcast treatments than the other sulphate-S treatments. Compared to the zero-S control, seed yield also increased significantly with all RRES treatments, but the increase was greater with autumn applied RRES than with spring applied RRES in many cases. Autumn applied RRES produced only slightly lower seed yield, but spring applied RRES produced much lower seed yield, than the highest yielding spring applied sulphate-S treatments. Introduction In the Prairie Provinces of Canada, canola is a major cash crop in the Parkland region, where many Gray and Dark Gray soils are deficient or potentially deficient in available S for high crop yields [1] [2]. Because canola has high S requirements, deficiency of S at any growth stage in the growing season can result in a considerable reduction in seed yield. A constant supply of available S to canola plants is thus required throughout the growing season in order to prevent any seed yield loss due to S deficiency. Sulphate is the only form of S that plants can use, and previous research has shown that deficiency of S in rapeseed or canola can be readily prevented or corrected by applying sulphate-S fertilizers [3]- [5].
A number of elemental S fertilizers are now available in the market for commercial use, and may cost less than sulphate-S.However, the elemental S in these fertilizers must be oxidized in soil to plant-available sulphate-S for effective crop use.In previous research studies, granular elemental S fertilizers were found much less effective than sulphate-S fertilizers in improving seed yield of canola on S-deficient soils, especially when applied in spring [6]- [8].Dispersion of elemental S particles from granular elemental S fertilizers to enhance microbial oxidation of elemental S particles to sulphate-S in soil was considered as the major problem for poor performance of granular elemental S fertilizers [9].Because of most likely increase in dispersion of elemental S particles in soil over the winter and their subsequent oxidation to sulphate-S in the growing season, autumn-applied elemental S usually produced greater seed yield than spring-applied elemental S but seed yields were still lower and inconsistent than the sulphate-S fertilizers [7]. In other field experiments with spring applied S on S-deficient soils, broadcast/spread surface-application of elemental S fertilizers that contain S particles in suspension or powder formulation prevented S deficiency in canola and produced seed yield comparable to sulphate-S fertilizers [10] [11].However, suspension or powder formulations of elemental S are not convenient to apply and may not be practical to use on a commercial scale.Now, there is a new granular elemental S fertilizer (rapid release elemental S [RRES], called Vitasul-manufactured by Sulvaris Inc., Calgary, Alberta, Canada), which is expected to oxidize/release adequate amounts of sulphate-S in preventing S deficiency in canola in the growing season.The objective of this study was to determine the relative effectiveness of RRES and sulphate-S fertilizers with various combinations of application times and placement methods on seed yield, straw yield, oil and protein concentration in seed, N and S uptake, partial factor productivity (PFP-kg•seed•kg −1 applied N•ha −1 ), S use efficiency (SUE-increase in kg•seed•kg −1 applied S•ha −1 ) and percent recovery of applied S in seed + straw (%) of canola in 2011, 2012 and 2013 growing seasons, and residual sulphate-S and nitrate-N in the 0 -60 cm soil depth in autumn 2013 (after three annual applications of fertilizers) on a S-deficient Gray Luvisol loam soil near Star City, Saskatchewan. Materials and Methods A field experiment was established in autumn 2010 on a Gray Luvisol (Typic Haplocryalf) loam soil near Star City, Saskatchewan, Canada.Soil at this site has shown severe S deficiency in canola in all previous years [7] [12], and significant increase in forage yield of timothy from S application [13].Some characteristics of soils used in these experiments are presented in Table 1.Precipitation in the growing season (May, June, July and August) at the nearest Environment Canada Meteorological Station (AAFC Melfort Research Farm) is given in Table 2.In 2011, the growing season precipitation was below long-term average (especially in May during seeding season and in August during seed formation/filling).In 2012, the growing season precipitation was much above average (with very wet conditions in June and July).In 2013, the growing season precipitation was slightly below-average, but it was well distributed and above-average in June and July during peak growing season, resulting in excellent crop growth and seed yield. 
In this study, a randomized complete block design was used to lay out the treatments in four replications. Each plot was 7.5 m long and 1.8 m wide. The 11 treatments included two granular S sources (rapid release elemental S [RRES] and potassium sulphate, applied at 20 kg•S•ha−1) and five application time/placement method combinations [broadcast autumn (surface-broadcast in autumn and then incorporated into soil in spring prior to seeding), broadcast spring pre-tillage (surface-broadcast and then incorporated into soil in spring prior to seeding), broadcast in spring pre-emergence (surface-broadcast soon after seeding), sidebanded in spring at seeding and seedrow-placed in spring at seeding], plus a zero-S control. In autumn 2010, 2011 or 2012, all plots were tilled to about 10 cm soil depth, and then granular RRES and potassium sulphate were surface broadcast in about mid-October. In spring 2011, 2012 or 2013, the S fertilizers were broadcast on the surface prior to tillage in the spring pre-till treatments, and all plots received a blanket application of N (34-0-0 at 120 kg•N•ha−1), P (TSP at 30 kg•P•ha−1) and K (KCl at 20 kg•K•ha−1). All plots were tilled to incorporate the fertilizers prior to seeding. Spring sideband and seedrow treatments received S fertilizers at seeding time. Plots were seeded with a double-disc press drill at 17.8 cm row spacing. Treatments were repeated on the same plots for the duration of this study. In each plot, data were recorded on seed and straw yield, oil and protein concentration in seed, concentration and uptake of total S and total N in seed and straw every year, and residual sulphate-S and nitrate-N in soil at termination in autumn 2013. Seed yield was determined by harvesting 1.25 m wide and 7.0 m long strips with a plot combine, and straw yield was calculated from hand-harvested samples collected from two 1-m long rows in each plot at maturity. The oven-dry (60˚C) samples were analyzed for oil, total N and total S in seed, and for total S and total N in straw. Oil concentration in canola seed was determined using the crude fat method [14]. Total S in seed and straw was determined by digestion of samples in nitric acid-hydrogen peroxide and measuring its concentration in the digest by ICP-AES [15]. Total N in seed samples was determined by sample digestion and detection of N by thermal conductivity using a CNS combustion analyzer [16]. Protein concentration was calculated by multiplying total N by a factor of 6.25 [17]. Partial factor productivity (PFP), S use efficiency (SUE) and percent recovery of applied S in seed + straw were calculated for each treatment from the seed yield and S uptake data. Soil samples from the experimental area were obtained from the 0-15, 15-30 and 30-60 cm depths in October 2010 (prior to initiation of the field experiment). Each sample was a composite of four cores (4-cm diameter). The soil samples were air dried at room temperature, ground to pass through a 2-mm sieve, and then analyzed for sulphate-S and nitrate-N. Sulphate-S in soil was determined by extraction with CaCl2 and measuring its concentration in the extract by ICP-AES [19]. For nitrate-N, the ground soil samples were extracted using a 1:5 soil:2M KCl solution [20], and the concentration of nitrate-N in the extract was determined with a Technicon Autoanalyzer II [21]. The data were subjected to analysis of variance (ANOVA) using the GLM procedure [22]. For each ANOVA, the standard error of the mean (SEM) and significance are reported. Least significant difference (LSD0.05) was used to determine significant differences between treatment means.
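A minimal sketch of the three agronomic indices named above, following the definitions given in the abstract rather than the full calculation details of the original methods: PFP as seed yield per kg of applied N, SUE as the yield gain over the zero-S control per kg of applied S, and apparent recovery as the gain in (seed + straw) S uptake per kg of applied S. The example numbers are hypothetical.

```python
# Minimal sketch of the agronomic indices named above, following the definitions
# in the abstract; the exact formulas used in the paper may differ slightly.
# All example values are hypothetical.
N_RATE_KG_HA = 120.0   # blanket N application (kg N ha-1)
S_RATE_KG_HA = 20.0    # S application in the S treatments (kg S ha-1)

def pfp(seed_yield, n_rate=N_RATE_KG_HA):
    """Partial factor productivity: kg seed per kg applied N."""
    return seed_yield / n_rate

def sue(seed_yield, control_yield, s_rate=S_RATE_KG_HA):
    """S use efficiency: increase in kg seed per kg applied S."""
    return (seed_yield - control_yield) / s_rate

def s_recovery_pct(s_uptake, control_s_uptake, s_rate=S_RATE_KG_HA):
    """Apparent recovery of applied S in seed + straw (%)."""
    return 100.0 * (s_uptake - control_s_uptake) / s_rate

# Hypothetical example: 2400 vs 1600 kg seed ha-1, and 14 vs 6 kg S ha-1 uptake.
print(pfp(2400.0), sue(2400.0, 1600.0), s_recovery_pct(14.0, 6.0))
```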
Results and Discussion There was a significant seed yield response of canola to applied S in all 3 years (2011, 2012 and 2013), but seed yields varied with S source and application time-placement method combination treatments in different years (Table 3). In 2011, seed yield increased considerably with all sulphate-S treatments compared to the zero-S control, although seed yield tended to be slightly lower in autumn broadcast and spring sideband treatments than other sulphate-S treatments.Compared to the zero-S control, seed yield also increased significantly with all RRES treatments, but the increase was much greater with autumn broadcast RRES than many spring applied RRES treatments.That is, autumn broadcast RRES produced only slightly lower seed yield and most spring applied RRES treatments produced much lower seed yield of canola than the highest yielding spring applied sulphate-S treatments (e.g., broadcast pre-till or seedrow-placed S).This suggests the potential of autumn broadcast RRES in preventing S deficiency by increasing availability of S to hybrid canola plants in the first growing season, although slightly less effective than spring broadcast/incorporated sulphate-S in increasing seed yield of canola.In contrast to the findings of the present study, our previous research in Saskatchewan has shown that autumn or In 2012, seed yield was much lower (63% of the three year mean) than in 2011 and 2013, probably due to higher precipitation (43% higher than normal) and poor soil drainage.This year there was a significant (but moderate) seed yield response of canola to all spring applied sulphate-S treatments, and seed yield tended to be slightly lower in the autumn broadcast sulphate-S treatment and significantly lower in the spring seedrow-placed sulphate-S treatment than the other spring sulphate-S treatments (Table 3).Among sulphate-S treatments, spring seedrow-placed S produced the lowest seed yield, which was probably due to toxic effects of potassium sulphate on emergence or early growth of seedlings in the seedrow but we did not make any plant counts on seedling emergence.Seed yield also increased significantly (but moderately) with all RRES treatments compared to the zero-S control, but the increases were significantly greater with autumn broadcast RRES and more so with spring pre-emergence broadcast RRES than other spring applied RRES treatments.Autumn broadcast RRES produced only slightly lower, spring pre-emergence broadcast RRES produced similar, and spring applied pretill, sideband and seedrow-placed RRES produced much lower seed yield than the highest yielding spring applied sulphate-S broadcast pre-till or sideband S treatments.Earlier research on canola in Saskatchewan and Alberta has shown that autumn applied granular ES fertilizers was more effective than spring applied granular ES fertilizers, but autumn applied ES was not consistently as effective as spring applied sulphate-S fertilizer, even after multi-year annual applications [7] [8] [10].Our previous research on canola in Saskatchewan has also shown substantial increase in seed yield of canola when liquid ES fertilizer was spray-broadcast on soil surface after seeding [7].This suggests that physical dispersion of S particles from the ES granules, which is the major limitation for exposure of ES particles to oxidation to sulphate-S under Parkland region climatic conditions, can be overcome by surface-broadcast of granular ES fertilizer immediately after seeding.Similarly, our present findings also suggest the 
potential of spring broadcast pre-emergence RRES in preventing S deficiency in hybrid canola at least after the second annual application, when seed yield was similar to spring broadcast/incorporated sulphate-S. In summary, our findings suggested the potential of spring broadcast pre-emergence RRES or autumn broadcast RRES in preventing S deficiency in hybrid canola after the second annual application, although seed yield was slightly lower (not significantly) than the spring broadcast/incorporated sulphate-S.

In 2013, there was a marked seed yield response of canola to spring applied sulphate-S, and seed yield tended to be slightly greater in the spring broadcast pre-till sulphate-S treatment than the other spring sulphate-S treatments (Table 3). Among sulphate-S treatments, spring pre-emergence broadcast tended to produce the lowest seed yield. Compared to the zero-S control, seed yield also increased considerably with all RRES treatments, but the increases tended to be lower with autumn broadcast RRES and spring sideband or seedrow-placed RRES treatments than spring pre-emergence RRES or spring pre-till RRES treatments. Autumn broadcast RRES, and spring sideband and seedrow-placed RRES, produced much lower seed yield than the highest yielding spring applied sulphate-S broadcast pre-till S treatment. The 3-year findings suggest the potential of spring pre-emergence RRES in preventing S deficiency in hybrid canola consistently after the second and third annual applications, although seed yields were slightly lower (not significantly) than the spring broadcast/incorporated sulphate-S.

On the average of 3 years, there was a significant increase in canola seed yield from applied S compared to the zero-S control in all cases, but seed yields varied with S source and/or application time-method combination (Table 3). Among the sulphate-S treatments, the spring broadcast pre-till sulphate-S treatment produced the greatest seed yield, which was closely followed by the spring sideband and spring pre-emergence broadcast treatments, and then the autumn broadcast and spring seedrow-placed S treatments producing similar and lowest seed yield. The poor performance (although not significant) of autumn applied/broadcast sulphate-S compared to spring pre-emergence broadcast sulphate-S was probably due to over-winter loss of S in early spring by leaching from the soil sulphate-S pool. In the case of spring pre-emergence broadcast sulphate-S, it is possible that a small portion of applied S may have been stranded in the surface soil and did not become available to the crop in the early growing season on this S-deficient soil. Similarly, it is possible that a small portion of applied S for the sidebanded sulphate-S may also have not become available to the crop in the early growing season. The relatively poor performance of seedrow-placed sulphate-S could be due to the toxic effect of sulphate-S on seedling emergence and/or their early growth (but we did not make any plant counts). Among the RRES treatments, spring pre-emergence produced the highest seed yield, and it was closely followed by autumn broadcast, with the lowest seed yields from spring pre-till broadcast, spring sideband and spring seedrow-placed treatments (Table 3). Overall, the findings suggest the potential of spring pre-emergence broadcast RRES or autumn broadcast RRES in preventing S deficiency in hybrid canola, although seed yield was still slightly lower than the spring pre-till broadcast/incorporated sulphate-S treatment which produced the greatest seed yield from applied S.
In the case of RRES, autumn broadcast was considered the ideal time/method for diffusion of ES particles from granules and then oxidation of ES to sulphate-S over the winter, and was expected to be more effective than spring pre-emergence RRES and/or as effective as spring applied sulphate-S fertilizer, but it was still slightly less effective than those treatments, especially spring-applied sulphate-S. It is possible that a small portion of ES that was oxidized to sulphate-S over the winter from autumn broadcast RRES may have been lost by runoff and/or leaching in early spring after snow melting. The relatively poor performance of spring pre-till broadcast RRES compared to spring pre-emergence broadcast RRES could be due to the fact that ES granules after incorporation in the pre-till treatment may have stayed intact in the soil, resulting in less diffusion and subsequently poor oxidation of ES particles to sulphate-S compared to ES granules that were deposited on the soil surface and exposed to wetting/drying and temperature, ideal for diffusion and oxidation, in the case of spring pre-emergence broadcast RRES, as also suggested by Solberg et al. [9].

Unlike seed yield, there was no significant effect of applied S source and/or timing of application on straw yield in any of the three years in the present study (data not shown). In contrast, previous research has shown a significant positive response of both straw and seed yield to applied sulphate-S fertilizer at some sites/site-years [4] [7] [8] [23]. In the present study, straw yields ranged from 4745 to 6879 kg•ha−1 in 2011, from 3701 to 5059 kg•ha−1 in 2012 and from 7587 to 10,446 kg•ha−1 in 2013. In 2012, straw yield was highest with spring pre-till RRES and lowest with spring sideband RRES. In 2013, straw yield was highest with the zero-S treatment and lowest with autumn broadcast RRES. These straw yield results suggest that RRES in these treatments increased the availability of S to plants in the later growing season and increased straw yield, but this did not translate into seed yield.
Oil concentration in canola seed increased with sulphate-S application in most cases in all 3 years, but RRES showed a significant beneficial effect on oil concentration in canola seed only in 2012 and 2013 (Table 4). In 2011, oil concentration in canola seed increased with almost all sulphate-S fertilizer treatments and tended to increase with only autumn broadcast RRES and spring pre-emergence RRES treatments. In 2012, oil concentration in canola seed increased in almost all sulphate-S (except seedrow-placed sulphate-S) and in all RRES treatments. Spring pre-emergence RRES gave the highest oil concentration in canola seed. In 2013, oil concentration in canola seed increased in all sulphate-S and RRES treatments. On the average of 3 years, oil concentration in canola seed increased in all S treatments, regardless of S source, suggesting that S fertilization can increase oil concentration in canola seed when S deficiency exists in canola. Earlier research studies in Canada (Saskatchewan, Alberta and Manitoba) and USA (Montana) have also shown increase of oil concentration in canola seed with sulphate-S fertilizer application [7] [8] [11] [23]-[27], but there was little beneficial effect of ES fertilizers on oil concentration in canola seed [8] [26] [27].

There was no effect of any S fertilizer treatment on protein concentration in canola seed in 2011 (Table 5). In 2012, protein concentration in canola seed increased significantly only in the seedrow-placed sulphate-S treatment. This increase in protein concentration of canola seed was most likely due to the decrease in seed yield because of the detrimental effect of seedrow-placed S in this treatment. There was no significant effect of S fertilizer on protein concentration in canola seed in 2013, and also on the 3-yr average protein concentration. Similarly, our other research has also reported no effect of S application on protein concentration in canola seed [7] [8] [11] [23] [25].

The response trends of total N uptake in seed + straw to applied S were significant in all 3 years and were generally similar to seed yield, and also varied with S source and application time-placement treatments in different years (Table 6). In 2011, total N uptake in seed + straw increased significantly in all treatments, and it was highest for the spring sideband RRES treatment, followed very closely by the spring seedrow-placed sulphate-S treatment. In 2012, there was a significant increase in total N uptake in seed + straw in almost all treatments except sideband RRES, although spring sideband or seedrow-placed RRES and spring seedrow-placed sulphate-S gave the lowest total N uptake in seed + straw. Spring applied pre-till and sideband sulphate-S had the highest total N uptake in seed + straw. In 2013, total N uptake in seed + straw increased significantly with applied S in most treatments except autumn broadcast RRES and sulphate-S. On the 3-yr average, total N uptake in seed + straw increased significantly with all S treatments, regardless of S source and time-method combination. Other earlier research has also shown increase of total N uptake in canola seed and/or straw with S fertilizer application, mainly due to increase in yield [11] [23].
Total S uptake in seed + straw increased significantly in all sulphate-S treatments in all years, but with RRES it increased significantly in only a few treatments (e.g., RRES broadcast autumn, RRES broadcast spring pre-till and RRES broadcast spring pre-emergence) in 2012 and 2013, and also total S uptake was generally lower with RRES than sulphate-S (Table 7). The highest total S uptake in seed + straw was with spring seedrow-placed sulphate-S in 2011 and with spring pre-till sulphate-S in 2012 and 2013. Among the RRES treatments, total S uptake in seed + straw was highest with broadcast spring pre-emergence in 2011, and with autumn broadcast in 2012 and 2013, but it was still less than the spring broadcast/incorporated sulphate-S treatment, even after three annual applications. Previous research studies in Canada (Saskatchewan and Manitoba) and USA (Montana) have shown increase of total S uptake in canola seed and/or straw with S fertilizer application, due to increase in both yield and concentration of total S in canola plants [7] [8] [11] [23]-[27]. In our previous research in Saskatchewan, total S uptake in canola was usually much lower with ES fertilizers than sulphate-S fertilizers [7] [8] [11].

There was a significant increase in PFP with applied S for both S sources compared to the zero-S control in all 3 years (Table 8). The response trends of PFP to applied S fertilizers were essentially similar to seed yield, although the magnitude of response varied in different years. In 2011, the PFP values ranged from 23.6 to 24.9 kg seed kg−1 applied N for sulphate-S treatments, with small differences among treatments. For the RRES treatments, the PFP values ranged from 20.4 to 23.6 kg seed kg−1 applied N, with the highest PFP with the autumn broadcast treatment. In 2012, the PFP values ranged from 13.8 to 16.3 kg seed kg−1 applied N for sulphate-S treatments, with the lowest PFP with spring seedrow-placed S. For the RRES treatments, the PFP values ranged from 13.2 to 16.1 kg seed kg−1 applied N, with the highest PFP with the autumn broadcast or spring pre-emergence S treatment. In 2013, the PFP values ranged from 32.8 (spring broadcast pre-emergence) to 35.1 (spring broadcast pre-till) kg seed kg−1 applied N for sulphate-S treatments. For the RRES treatments, the PFP values ranged from 32.1 (spring sideband or spring seedrow-placed) to 34.2 (spring broadcast pre-emergence) kg seed kg−1 applied N. On the average of 3 years, the PFP values ranged from 24.1 (autumn broadcast or spring seedrow-placed) to 25.4 (spring broadcast pre-till) kg seed kg−1 applied N for sulphate-S treatments and from 22.0 (spring seedrow-placed) to 24.2 (spring broadcast pre-emergence) kg seed kg−1 applied N for RRES treatments.

The SUE for seed yield (kg seed kg−1 applied S•ha−1) varied with S source and application time-placement treatments in different years (Table 9). In 2011, the SUE for sulphate-S was highest for spring seedrow-placed (48.6) and spring pre-till (48.2), followed by spring pre-emergence (45.9), with the lowest SUE with autumn broadcast (41.9) or spring sideband (40.5). For RRES, autumn broadcast gave the highest SUE (40.8) but it was significantly lower than the best sulphate-S treatments. In 2012, the SUE for sulphate-S ranged from 15.0 (spring seedrow-placed) to 29.5 (spring pre-till). For RRES, the SUE ranged from 11.2 (spring sideband) to 28.4 (spring pre-emergence). In 2013, the SUE for sulphate-S ranged from 46.7 (broadcast spring pre-emergence) to 60.8 (spring pre-till). For RRES, the SUE ranged from 42.3 (spring seedrow-placed) to 55.0 (spring pre-emergence). On the average of 3 years, the SUE ranged from 40.0 (spring pre-emergence) to 46.2 (spring pre-till) for sulphate-S, and from 25.5 (spring seedrow-placed) to 39.0 (spring pre-emergence) for RRES.

Like SUE, % recovery of applied S also varied with S source and application time-placement treatments in different years (Table 10). In 2011, recovery of applied sulphate-S in seed + straw ranged from 31.6% to 49.6%, with the highest S recovery with spring seedrow-placed S and lowest with spring sideband S. For RRES, recovery of applied S ranged from 1.7% to 21.3%, and was highest for the spring pre-emergence treatment and lowest for spring seedrow-placed S. In 2012, recovery of applied sulphate-S in seed + straw ranged from 35.9% to 54.6%, with the highest S recovery with spring pre-till S.
For RRES, recovery of applied S ranged from 8.9% to 41.7%, with the highest recovery for the autumn broadcast treatment and lowest for spring sideband S. In 2013, recovery of applied S ranged from 50.2% (spring sideband) to 87.1% (spring pre-till) for sulphate-S and from 11.9% (spring sideband) to 28.4% (broadcast autumn) for RRES. On the average of 3 years, recovery of applied S ranged from 41.1% (spring sideband) to 61.7% (spring pre-till) for sulphate-S and from 11.0% (spring sideband) to 27.4% (broadcast autumn) for RRES.

Overall, the response trends of total N uptake and PFP were usually similar to seed yield for both S sources, but total S uptake, SUE and % recovery of applied S in seed + straw were lower with RRES than sulphate-S in many/most cases. It is possible that RRES may have supplied near sufficient/adequate amounts of available S to canola plants during the growing season for seed yield but not for total S uptake in seed + straw, resulting in lower total S uptake, SUE and % recovery of applied S in seed + straw than the highest yielding spring applied sulphate-S fertilizer (broadcast/incorporated) treatment.

Conclusion

There was a significant seed yield response of hybrid canola to applied S from both sulphate-S and RRES sources in all 3 years. Oil concentration in canola seed increased with both S sources in 2012 and 2013, but it increased only with sulphate-S in 2011. There was no effect of any S treatment on the protein concentration in canola seed. The response trends of total N uptake and PFP were usually similar to seed yield for both S sources, but total S uptake, SUE and % recovery of applied S were lower with RRES than sulphate-S in many/most cases.

The findings of our study on a S-deficient soil suggest that the ideal S application is sulphate-S broadcast and incorporated into the soil prior to seeding in spring. Our findings also suggest the potential of autumn broadcast RRES and spring pre-emergence broadcast RRES in preventing S deficiency in hybrid canola, although seed yields are slightly lower than the ideal highest yielding spring broadcast/incorporated sulphate-S treatment. Our findings are based on one site/soil, so there is a need for additional future research to verify our findings and improve the effectiveness of Vitasul (the commercial name of RRES) further under varied soil types, climatic and crop growing conditions. Producers who are planning to use Vitasul on their farms should try it on a small scale (for their own satisfaction) and find out if this S fertilizer is working/effective in preventing S deficiency in their crop, especially canola, under their particular soil, crop and farm/climatic situations/conditions.

Table 1. Some characteristics of soil in autumn 2010 at initiation of the field experiment at Star City, Saskatchewan.

Table 2. Growing season monthly and total precipitation for the three site-years, and 30-yr average precipitation and temperature at Star City, Saskatchewan.

Table 3. Seed yield of canola with rapid release elemental S (RRES) and sulphate-S fertilizers applied at 20 kg•S•ha−1 with various combinations of application time and placement method in 2011, 2012 and 2013 on a S-deficient soil at Star City, Saskatchewan.

Table 4.
Concentration of oil in seed of canola with rapid release elemental S (RRES) and sulphate-S fertilizers applied at 20 kg•S•ha−1 with various combinations of application time and placement method in 2011, 2012 and 2013 on a S-deficient soil at Star City, Saskatchewan.

Table 5. Concentration of protein in seed of canola with rapid release elemental S (RRES) and sulphate-S fertilizers applied at 20 kg•S•ha−1 with various combinations of application time and placement method in 2011, 2012 and 2013 on a S-deficient soil at Star City, Saskatchewan.

Table 6. Total N uptake in seed + straw of canola with rapid release elemental S (RRES) and sulphate-S fertilizers applied at 20 kg•S•ha−1 with various combinations of application time and placement method in 2011, 2012 and 2013 on a S-deficient soil at Star City, Saskatchewan.

Table 7. Total S uptake in seed + straw of canola with rapid release elemental S (RRES) and sulphate-S fertilizers applied at 20 kg•S•ha−1 with various combinations of application time and placement method in 2011, 2012 and 2013 on a S-deficient soil at Star City, Saskatchewan.

Table 8. Partial factor productivity (PFP, for N applied at 120 kg•N•ha−1 as a blanket application) for seed yield of canola with rapid release elemental S (RRES) and sulphate-S fertilizers applied at 20 kg•S•ha−1 with various combinations of application time and placement method in 2011, 2012 and 2013 on a S-deficient soil at Star City, Saskatchewan.

Table 9. Sulphur use efficiency (SUE) for seed yield of canola with rapid release elemental S (RRES) and sulphate-S fertilizers applied at 20 kg•S•ha−1 with various combinations of application time and placement method in 2011, 2012 and 2013 on a S-deficient soil at Star City, Saskatchewan.

Table 10.
Percent recovery of applied S in seed + straw of canola with rapid release elemental S (RRES) and sulphate-S fertilizers applied at 20 kg•S•ha−1 with various combinations of application time and placement method in 2011, 2012 and 2013 on a S-deficient soil at Star City, Saskatchewan.
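For readers who want to recompute the derived quantities in Tables 8-10 from plot-level data, the sketch below is a minimal illustration assuming the standard agronomic definitions: PFP as seed yield divided by the blanket N rate of 120 kg N ha−1, SUE as the yield increase over the zero-S control divided by the 20 kg S ha−1 rate, and apparent S recovery as the increase in S uptake divided by the S rate. The exact formulas are those given in the authors' Methods (not reproduced here), and the numeric inputs below are hypothetical placeholders, not values from the tables.

```python
# Hedged sketch: standard agronomic efficiency indices assumed for PFP, SUE and
# % S recovery; the input values below are hypothetical, not data from Tables 3-10.

N_RATE = 120.0   # kg N/ha, blanket application
S_RATE = 20.0    # kg S/ha, all S treatments

def pfp(seed_yield_kg_ha, n_rate=N_RATE):
    """Partial factor productivity: kg seed per kg applied N."""
    return seed_yield_kg_ha / n_rate

def sue(seed_yield_s, seed_yield_control, s_rate=S_RATE):
    """Sulphur use efficiency: extra kg seed per kg applied S."""
    return (seed_yield_s - seed_yield_control) / s_rate

def s_recovery_pct(s_uptake_s, s_uptake_control, s_rate=S_RATE):
    """Apparent recovery of applied S in seed + straw, percent."""
    return 100.0 * (s_uptake_s - s_uptake_control) / s_rate

if __name__ == "__main__":
    # hypothetical plot-level values
    yield_s, yield_0 = 2900.0, 2000.0      # kg seed/ha, fertilized vs zero-S control
    s_uptake_s, s_uptake_0 = 14.0, 6.0     # kg S/ha in seed + straw
    print(f"PFP      = {pfp(yield_s):.1f} kg seed/kg N")
    print(f"SUE      = {sue(yield_s, yield_0):.1f} kg seed/kg S")
    print(f"Recovery = {s_recovery_pct(s_uptake_s, s_uptake_0):.1f} % of applied S")
```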
Isolation and Characterization of Microsatellite Markers Useful for Exploring Introgression Among Species in the Diverse New Zealand Cicada Genus Kikihia

The New Zealand cicada genus Kikihia Dugdale 1971 exhibits more than 20 contact zones between species pairs that vary widely in their divergence times (between 20,000 and 2 million years) in which some level of hybridization is evident. Mitochondrial phylogenies suggest some movement of genes across species boundaries. Biparentally inherited and quickly evolving molecular markers like microsatellites are useful for assessing gene flow levels. Here, we present six polymorphic microsatellite loci that amplify DNA from seven species across the genus Kikihia; Kikihia "northwestlandica," Kikihia "southwestlandica," Kikihia muta, Kikihia angusta, Kikihia "tuta," Kikihia "nelsonensis," and Kikihia "murihikua." The markers were developed using whole-genome shotgun sequencing on the 454 pyrosequencing platform. Moderate to high levels of polymorphisms were observed with 14–47 alleles for 213 individuals from 15 populations. Observed and expected heterozygosity range from 0 to 1 and 0.129 to 0.945, respectively. These new markers will be instrumental for the assessment of gene flow across multiple contact zones in Kikihia.

Kikihia (Dugdale 1971) is a monophyletic endemic New Zealand genus of cicadas (Buckley et al. 2002, Arensburger et al. 2004a). Currently, there are 14 described species and 16 yet-to-be-described species and subspecies (Dugdale 1971; Fleming 1984; Marshall et al. 2008, 2011; Larivière et al. 2010). Species of this cicada genus are found on the three main islands of New Zealand including North, South, and Stewart islands as well as smaller near-by islands and the more distantly located Kermadec and Norfolk islands. Kikihia is widespread with species found in most terrestrial habitats including grass, shrub, and forest ecosystems from lowland to subalpine regions of New Zealand. Phylogenetic and phylogeographic studies using mitochondrial and nuclear gene sequences have provided insight into the evolutionary history of the genus (Arensburger et al. 2004a,b; Marshall et al. 2008, 2009, 2011). Extensive field study and molecular investigation into this genus identified 20 contact zones between different species that are possible sites of hybridization (Marshall et al. 2011). Individuals found in these proposed hybrid zones possess morphological and acoustic traits that are intermediate between parental species found away from the hybrid zones. Although species in the genus are estimated to have diverged more than 6 million years ago, hybridization seems confined to species that diverged 2 million years ago or less (Marshall et al. 2008). Here, we present six newly discovered microsatellite markers to investigate the population dynamics, levels of gene flow, and impact of introgression on species boundaries throughout this diverse cicada genus. Because processes related to speciation are difficult to observe after the fact, the study of the interaction at contact zones between incipient species pairs or between recently diverged species provides insight into the speciation process (Barton and Hewitt 1989, Harrison 1993, Mallet 2007, Hewitt 2011).
We have developed six novel highly polymorphic microsatellites that amplify in seven species of Kikihia; Kikihia "northwestlandica," Kikihia "southwestlandica," Kikihia muta (Fabricius, 1775), Kikihia angusta (Walker, 1850), Kikihia "tuta," Kikihia "nelsonensis," and Kikihia "murihikua." Data for pure populations (as far from hybrid zones as possible) are used to test the microsatellite markers' ability to differentiate these species.

Materials and Methods

Specimen Collection. Cicada specimens were identified in the field based on morphology and song traits. Taxonomic descriptions are pending for many Kikihia species as denoted by informal names in quotes. Populations located as far from putative hybrid zones as possible were chosen as species-typical or "parental" populations. Subsequently, 24-36 individuals per species from 1 to 5 largely species-typical populations were investigated to test these new microsatellite markers (Table 1). Whole body specimens were placed in 95% ethanol or three legs were removed and stored in 95% ethanol and the bodies were pinned. All ethanol specimens are stored at −20°C.

Microsatellite Development. Nine species of Kikihia from throughout the genus were used to develop microsatellites: K. "southwestlandica," K. muta, K. angusta, K. "nelsonensis," Kikihia cutora cutora, Kikihia scutellaris, Kikihia "peninsularis," Kikihia horologium, and Kikihia "aotea western." Genomic DNA was extracted from cicada legs using the Qiagen DNeasy Blood & Tissue kit (Qiagen, Boston, MA). RNA was removed with 20 mg RNAase A (New England Biolabs, Ipswich, MA) per sample during DNA purification. The 454 GS FLX Rapid Library Preparation kit (454 Life Sciences, Branford, CT) with Rapid Library multiplex identifiers (MIDs) was used for one specimen from each of the nine species. MIDs are unique sequences incorporated in the 454 primers for the purpose of identification. Sequencing was done unidirectionally. Sample preparation and sequencing were performed according to the manufacturer's instructions (454 Life Sciences, Branford, CT). One library (K. muta) was sequenced on 1/16th of a GS FLX Titanium PicoTiter Plate. The eight remaining libraries were pooled and sequenced on one region of a two-region GS FLX Titanium PicoTiter Plate. GS De Novo Assembler v2.3 was used to align reads with the heterogeneous and large genome options. This resulted in 3,683 contigs with an average size of 743 bp and a range of 100-5,909 bp. MsatCommander (Faircloth 2008, Abdelkrim et al. 2009), a program that searches for di-, tri-, and tetranucleotide repeats with sufficient flanking sequence for primer design, was used to scan contigs. Primer3 (Untergasser et al. 2012) was used to design primers. In total, 30 primer pairs were screened. Six primer pairs were identified representing variable microsatellite loci that amplified in all species. These primer pairs were tested in multiple Kikihia populations.

Fragment Analysis. Polymerase chain reaction (PCR) primer pairs were tested with the forward primer tagged with an M13 sequence on the 5'-end and the reverse primers for each locus plus a 6-FAM-labeled M13 primer (Schuelke 2000). PCR amplifications were performed in 15 µl reactions containing 0.4 U Ex Taq DNA polymerase (TaKaRa Biomedical Inc., Otsu, Shiga, Japan), 1x PCR buffer, 0.3 mM dNTP mix, 3 pmol forward primer, 12 pmol reverse primer, 12 pmol M13 primer with a 5' 6-FAM fluorescent label, and up to 100 ng template DNA.
PCR conditions were as follows: an initial 94°C for 5 min followed by 30 cycles of 94°C for 30 s, 57°C for 45 s, 72°C for 45 s, followed by 8 cycles of 94°C for 30 s, 53°C for 45 s, 72°C for 45 s, followed by a final extension of 72°C for 10 min. PCR product was diluted 1:9 in HiDi Formamide (Invitrogen Life Technologies, Grand Island, NY) and run on an ABI 3130xl DNA sequencer (Applied Biosystems, Grand Island, NY). Alleles were designated according to amplicon size relative to the LIZ 600 size standard (Invitrogen Life Technologies, Grand Island, NY). PCR forward primers that amplified specimens from across the Kikihia phylogeny and revealed variable microsatellites were then fluorescently labeled with the G5 (Invitrogen Life Technologies, Grand Island, NY) labels (Table 2). For testing of Kikihia populations, the six described primer pairs were multiplex amplified using the Qiagen Type-it microsatellite PCR kit according to the manufacturer's instructions (Qiagen, Boston, MA), including an annealing temperature of 57°C for all multiplexed PCRs. Files were converted between different formats using CONVERT v1.31 (Glaubitz 2004). Loci potentially under selection were identified with LOSITAN (Beaumont and Nichols 1996, Antao et al. 2008). LOSITAN was run for 100 simulations using the neutral mean Fst and force mean Fst settings for a 0.99 confidence interval and the infinite alleles model.

Results

Primer sequences, microsatellite repeat motifs, PCR amplicon sizes, and the number of alleles are provided in Table 2. Only one marker, A553, showed stutter patterns that made scoring of some individuals difficult. The largest peak was used, and individuals that were ambiguous because of stutter were marked as missing data. The other five markers had minimal stuttering that did not influence peak calling. The number of alleles per locus varied from 14 to 47 in a total of 213 individuals from seven species. Table 3 summarizes the genetic diversity estimated for each locus in each population for seven species. Observed heterozygosity ranged from 0 to 1 and the expected heterozygosity ranged from 0.129 to 0.945. There was a total of 62 private alleles, with all but one population (OL.NLW) having at least one private allele. The HWE test, which indicates deviations between observed and expected heterozygosity, showed significant deviations in three loci for one to two populations each after a Bonferroni correction (α = 0.00055). Three loci were in HWE in all populations. Null alleles were detected in two of the six loci, A1267 and M2333, which had overlapping amplicon ranges and were multiplexed in the same reaction. Null alleles in these two markers are likely due to some interference between them in the electropherograms. Using the Fst-outlier method implemented in LOSITAN, one locus, M2333, was identified as being a candidate of balancing selection. This may be due to the presence of null alleles, which would likely reduce heterozygosity and make this locus appear as an outlier. The A1267 locus was unusual because over 50% of the specimens tested were homozygous for one very common allele, whereas the other loci were much more heterogeneous. Some populations failed to amplify one of the multiplex reactions.
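As a small worked check on the figures above: dividing a nominal α of 0.05 by the 90 tests implied by six loci across 15 populations gives roughly 0.00056, in line with the reported Bonferroni-corrected α of 0.00055. The sketch below uses hypothetical genotype counts, not data from Table 3, to show this calculation alongside observed and expected heterozygosity for a single locus.

```python
# Hedged sketch: observed/expected heterozygosity at one locus and the
# Bonferroni-corrected alpha for 6 loci x 15 populations (0.05/90 ~ 0.00056,
# consistent with the reported 0.00055). Genotype data below are hypothetical.
from collections import Counter

genotypes = [(1, 1), (1, 2), (2, 2), (1, 3), (2, 3), (1, 1), (3, 3), (1, 2)]

n = len(genotypes)
h_obs = sum(a != b for a, b in genotypes) / n          # observed heterozygosity

allele_counts = Counter(a for g in genotypes for a in g)
total_alleles = 2 * n
freqs = {a: c / total_alleles for a, c in allele_counts.items()}
h_exp = 1.0 - sum(p ** 2 for p in freqs.values())      # expected heterozygosity (gene diversity)

alpha_bonferroni = 0.05 / (6 * 15)                     # 6 loci x 15 populations

print(f"Ho = {h_obs:.3f}, He = {h_exp:.3f}, Bonferroni alpha = {alpha_bonferroni:.5f}")
```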
Table 1 footnote (abbreviations): Species: NW, K. "northwestlandica"; SW, K. "southwestlandica"; Mur, K. "murihikua"; Muta, K. muta; also K. "nelsonensis," K. "tuta," and K. angusta. N, number of specimens per population; site code, the first two letters represent New Zealand district codes and the last three letters are unique collecting location codes; Lat, latitude; Long, longitude; E, elevation in meters.

Table 2. Characterization of the microsatellite markers.

Discussion

Next-generation sequencing using 454 pyrosequencing was employed in the development of novel microsatellite markers for the New Zealand cicada genus Kikihia. This technology is becoming more commonly used in microsatellite discovery from a range of taxa (Abdelkrim et al. 2009, Ekblom and Galindo 2010, Gardner et al. 2011). This is the first published discovery of microsatellite markers for any New Zealand cicadas. Unlike most studies that are focused on intraspecific diversity, we are focused on intra- and interspecific diversity in a genus that is approximately 10-12 million years old with the earliest extant species split more than 6 million years ago (Marshall et al. 2008). Microsatellite markers that were variable and cross-amplified in all seven species of Kikihia are presented here. Three of the six loci were found to violate the HWE in no more than 2 of the 15 populations tested. Heterozygote deficiency increases due to factors such as inbreeding, population stratification, null alleles, and genotyping errors. It is not surprising that species of Kikihia display high levels of inbreeding or population stratification since field observations suggest low dispersal rates. However, these were not explicitly tested here. The large number of potential hybrid zones between multiple species of Kikihia makes this genus uniquely suited for the study of hybrid zones. Many of these potential hybrid zones are between nonsister species that vary in their divergence times from less than 1 million years to more than 3 million years (Marshall et al. 2008, 2011). The markers developed in this study will be useful for investigations of the evolutionary past and future of this interesting species radiation.

Acknowledgments

We thank Kathryn Theiss, Department of Biology, Willamette University, Salem, OR, for advice and assistance in microsatellite development. We also thank Kent Holsinger, Elizabeth Jockusch, and
Automated Analysis Using a Bayesian Functional Mixed-Effects Model With Gaussian Process Responses for Wavelet Spectra of Spatiotemporal Colonic Manometry Signals Manual analysis of human high-resolution colonic manometry data is time consuming, non-standardized and subject to laboratory bias. In this article we present a technique for spectral analysis and statistical inference of quasiperiodic spatiotemporal signals recorded during colonic manometry procedures. Spectral analysis is achieved by computing the continuous wavelet transform and cross-wavelet transform of these signals. Statistical inference is achieved by modeling the resulting time-averaged amplitudes in the frequency and frequency-phase domains as Gaussian processes over a regular grid, under the influence of categorical and numerical predictors specified by the experimental design as a functional mixed-effects model. Parameters of the model are inferred with Hamiltonian Monte Carlo. Using this method, we re-analyzed our previously published colonic manometry data, comparing healthy controls and patients with slow transit constipation. The output from our automated method, supports and adds to our previous manual analysis. To obtain these results took less than two days. In comparison the manual analysis took 5 weeks. The proposed mixed-effects model approach described here can also be used to gain an appreciation of cyclical activity in individual subjects during control periods and in response to any form of intervention. INTRODUCTION Colonic manometry is a procedure involving the placement of a flexible catheter incorporating pressure sensors into the colon to record contractile activity. It has been used to distinguish normal colonic contractions in healthy adult subjects (Bassotti et al., 1987;Soffer et al., 1989;Bampton et al., 2001;Rao et al., 2001b) from the abnormal contractility that may exist in patients with functional colonic disorders (Narducci et al., 1986;Chey et al., 2001;Dinning et al., 2010). More recently, several research groups have published findings from high-resolution colonic manometry. These catheters utilize a greater number of more closely spaced recording sensors, that provide a clearer picture of propagating contractile activity (Dinning, 2018;Pervez et al., 2020). Despite the improvements in catheter design, analysis of manometric recordings still relies upon either visual identification of propagating motor patterns or a generalized approach using area under the pressure curve (AUC) or motility index (MI) measurement. Visual identification of colonic motor patterns has identified differences in the count, velocity and amplitude of propagating pressure waves between health and patient groups, however, this approach is also subject to some fundamental problems. In some manometry traces the large number of pressure events can make identifying individual motor patterns very difficult. This is highlighted in Figure 1 which shows manometry traces recorded in 3 of our subjects. Not only is it time consuming to find each individual propagating event, but determining where they start, end and their direction of propagation can also be difficult. Composite measures such as AUC or MI avoid visual identification of motor patterns, however, their non-specific nature makes useful interpretation of the data very limited. For example, an increase or decrease in AUC or MI, within or between subjects, tells us little about the altered characteristics of specific motor patterns. 
Automated approaches that identify and quantify changes in motor patterns, standardize analysis between laboratories and remove potential personal bias, all within a workable time frame, would be very beneficial to the international community. There have been attempts to achieve this previously (De Schryver et al., 2002; Pan et al., 2010; Wiklendt et al., 2013) but those developed techniques have not been adopted by any other groups. Part of the problem is the ability to determine the clinical worth of the findings from these automated approaches. For example, in two approaches, the findings suggest disjointed or poorly coordinated pressure waves in patients with slow transit constipation when compared to healthy adults (Pan et al., 2010; Wiklendt et al., 2013). While of potential interest, the analysis does not allow us to determine which pressure waves are poorly coordinated. In this current article, we developed a computerized approach for the analysis of high-resolution colonic data. The technique is based upon a wavelet transform method, currently used in analyzing time-series in fields such as neuroscience, geophysics, meteorology and oceanography (Torrence and Compo, 1998; Grinsted et al., 2004; Veleda et al., 2012). The wavelet transform is a signal processing technique that can be used to transform signals from the time domain to the time-frequency domain, effectively decomposing them into constituent frequencies. Using this approach, we are able to see in a single image changes in colonic pressure waves, at all frequencies, in response to any given stimulus (a meal in this instance). The images also contain information on propagation direction and speed of propagation and the statistical comparisons to determine if any stimulus effects differ between subject groups. We have applied this analytical method to data that we had previously analyzed manually in healthy adults (Dinning et al., 2014) and patients diagnosed with slow transit constipation (Dinning et al., 2015). The findings in that original article are compared to the findings from our developed automated approach in the section "Discussion."

The structure of the article is as follows. Section "Spectral Decomposition" describes spectral decomposition with the wavelet and cross-wavelet transforms. Section "Statistical Framework" details the statistical framework that is used to compare the spectra between groups of subjects. We present an application of this technique to colonic manometry data described in Section "Data," with results shown in Section "Results." The article concludes with a discussion in Section "Discussion."

Wavelet Transform

The continuous wavelet transform (Torrence and Compo, 1998; Mallat, 2008) is a useful tool for analyzing non-stationary quasiperiodic signals. It decomposes a time domain signal x(t) ∈ R into the time-scale domain w(t, s) ∈ C with equation (2.1): where ψ(t) ∈ C is an admissible wavelet function, and the * superscript represents the complex conjugate. An admissible wavelet function is one which has zero mean and its Fourier transform is continuously differentiable (Farge, 1992), with an extra desirable property that it be localized in both time and frequency. Intuitively, w(t, s) measures the variation of x(t) within a neighborhood at t of size proportional to s. In practice, we choose s from a finite set of logarithmically spaced scales S = {s 1 , . . .
, s L }, specify the wavelet basis function in the frequency domain, and perform the convolution in equation (2.1) via fast Fourier transform (FFT) utilizing the convolution theorem with: is the frequency-domain signal, F and F −1 are the Fourier and inverse Fourier transforms, and ω represents the frequency-domain locations in radians per second. The wavelet transform is susceptible to harmonic artifacts, and we solve this problem by applying the "MesaClip" algorithm as described in our recent article (Wiklendt et al., 2020). To map from scales (seconds) to frequencies (Hz) we use "Synchrosqueezing" (Daubechies et al., 2011). Synchrosqueezing redistributes the wavelet coefficients based on the first time-derivative of the phase (also known as "instantaneous frequency"). For a given set of K equally and logarithmically spaced frequency bins with centers F = {f 1 , . . . , f K }, synchrosqueezing can be described as: where φ(t, s) = unwrap(∠w(t, s)) represents the time-differentiable "unwrapped-in-time" phase in radians with the complex argument (or angle) denoted by the parentheses-less function ∠ : C → (−π, π]. The function bin f (x) returns 1 if x and f are in the same bin, and 0 otherwise. Switching to discrete-time representation with samples recorded at times T = {t 1 , . . . , t N } we can view the wavelet spectrum as v(t, f ) : T × F → C. The time-average of the squared amplitudes produces the global wavelet power spectrum:

Cross-Wavelet Transform

The cross-wavelet transform combines two wavelet spectra with the complex-conjugated product: where v a and v b are the synchrosqueezed wavelet transforms of the two signals labeled a and b. The combined subscript v ab denotes the cross-wavelet transform between the two signals. A global wavelet power cross-spectrum could be computed in the same way for v ab as shown for v in equation (2.4). However, this discards the useful phase information contained in v ab . The effect of the complex-conjugated product is that the resulting phase represents the difference in phase between the two signals. For each frequency, computing a squared-amplitude-weighted histogram of the phase-differences yields a 2D histogram in the frequency-phase domain, analogous to the global wavelet power spectrum but stratified by phase-differences. Since phase-differences are actually phases, in the rest of this section we will refer to them simply as "phases," keeping in mind that they represent the phase-difference between two signals, rather than the phase of one or the other. Given a set of M equally and linearly spaced phase bins with centers H = {ϕ 1 , . . . , ϕ M }, we define the 2D histogram of frequencies and phase-differences by: where bin ϕ (x) returns 1 if x and ϕ are in the same bin, and 0 otherwise. T ϕ (f ) is the set of all time samples such that v ab (t, f ) is in the bin containing ϕ. If pairs of sensors are spaced sufficiently close together in the environment being recorded, then the cross-wavelet transform between sensors in such a pair allows us to measure propagating quasiperiodic activity. The sign of the phase-difference determines the direction of propagation. The value of the phase-difference ϕ (rad) at the frequency of interest f (Hz) and the separation between the pair of sensors d (cm) can be used to determine the apparent velocity of propagation u (cm/s) with the simple formula:

u = d·2πf/ϕ (2.8)

Figure 2 caption (fragment): Retrograde propagation is displayed to the left of the midline, and antegrade to the right. The curved dotted lines indicate the speed of propagation, from 1 to 100 cm/min, and the brightness of green pixels represents an increase in power. In this healthy adult prior to the meal, multiple frequencies were recorded, with no single frequency dominating (C,D); propagating activity at ∼1.5 cpm (white oval) and 1/2 cpm (red oval) exists, but its power is so low that it is barely visible.
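As a quick worked illustration of equation (2.8): for adjacent sensors 1 cm apart (the spacing of the catheter described later in Section "Data"), a 2 cpm component with a phase-difference of 0.2 rad corresponds to an apparent velocity of roughly 1 cm/s, or about 63 cm/min, with the sign of the phase-difference giving the direction. The frequency and phase values in this sketch are hypothetical.

```python
# Hedged sketch of equation (2.8): apparent propagation velocity from the
# cross-wavelet phase-difference. The 1 cm spacing matches the catheter used
# in this study; the frequency and phase values below are hypothetical.
import math

def propagation_velocity(d_cm: float, f_hz: float, phi_rad: float) -> float:
    """u = d * 2*pi*f / phi, in cm/s; the sign follows the sign of phi."""
    return d_cm * 2.0 * math.pi * f_hz / phi_rad

d = 1.0                 # cm between adjacent sensors
f = 2.0 / 60.0          # 2 cycles per minute expressed in Hz
phi = 0.2               # rad phase-difference (hypothetical)

u = propagation_velocity(d, f, phi)
print(f"u = {u:.2f} cm/s = {u * 60:.0f} cm/min")   # ~1.05 cm/s, ~63 cm/min
```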
For quasiperiodic pressure signals in these data, a more appropriate measure of propagation may be "pace", which is the inverse velocity u −1 (s/cm), where synchronous events (or phase-locking) between the two signals may have a more robust-for-modeling pace of 0, rather than a velocity at ±∞.

STATISTICAL FRAMEWORK

For each unit of statistical data, we obtain from the wavelet analysis a 1D curve v(f), or a 2D surface v(f, ϕ). Such a curve or surface is considered to be a response under the influence of a set of predictors, which can be any number of categorical or numerical variables specified by the experimental design. We want to measure and compare the effects of the given predictors. An independent regression model could be fit for each location x in either the frequency x ∈ F or frequency-phase x ∈ F × H domains. However, performing an independent fit at each location would require a multiple-comparison adjustment, and would fail to account for correlations between locations, effectively weakening the power of the analysis. Instead, we capture correlations between locations by treating the response curves and surfaces as individual functions rather than simply collections of independent points. We model these functions as samples from Gaussian processes, which allow us to specify a formula for correlation between locations, without needing to specify a formula for the shape of the functions themselves. A Gaussian process (GP) is a probability distribution with an infinite number of random variables, such that any finite set of variables form a multivariate Gaussian distribution. This is achieved by specifying a covariance kernel function k(x, x′), which when given a finite set of locations x ∈ {x 1 , . . . , x N } allows us to build an N × N covariance matrix with elements Σ ij = k(x i , x j ). We have only finite data, and so the kernel function is evaluated only at the available data locations when fitting the GP. However, we can inspect the GP at any number of arbitrary locations in the kernel's domain, hence the infinite nature of the model as a step beyond a multivariate Gaussian. An analogy is fitting a simple regression line. The line is fit only to a finite set of data, but once we have an intercept b and slope a we can define a function y(x) = ax + b, where y-locations can be calculated for any choice of x-locations, not just those for which we have data.

Model

The latent GP function-on-scalar mixed-effect model we use can be written in the form: where GP represents the Gaussian process distribution, k σ is a kernel function describing the structured ω-standardized noise covariance, σ 2 represents unstructured ω-standardized noise variance, and y i is the response function for observation i ∈ {1, . . . , N}. The responses are based on the transformed power log(v).
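To make the idea of a distribution over functions concrete, the following minimal numpy sketch (not the authors' implementation; the kernel choice and parameter values are arbitrary) evaluates a squared-exponential kernel on a finite grid of log-spaced frequencies, forms the covariance matrix, and draws a few random functions from the resulting multivariate Gaussian.

```python
# Minimal GP-prior sketch (not the authors' code): a kernel evaluated on a
# finite grid gives a covariance matrix, and function values at those grid
# locations are jointly multivariate Gaussian. Parameter values are arbitrary.
import numpy as np

def sq_exp_kernel(x, xp, tau=1.0, lam=0.5):
    """Squared-exponential kernel k(x, x') = tau^2 * exp(-(x - x')^2 / (2*lam^2))."""
    diff = x[:, None] - xp[None, :]
    return tau**2 * np.exp(-0.5 * (diff / lam) ** 2)

log_f = np.log(np.geomspace(1.0 / 16.0, 16.0, 33))   # log-spaced frequency grid (cpm)
K = sq_exp_kernel(log_f, log_f)
K += 1e-8 * np.eye(len(log_f))                        # jitter for numerical stability

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(len(log_f)), K, size=3)
print(samples.shape)   # (3, 33): three random functions evaluated on the grid
```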
The intuition behind the ω-standardized noise co/variance can be seen by rearranging the terms in equations (3.1) and (3.2) to: which facilitates efficient inference by not requiring the structured residuals on the right-hand-side of equation (3.5) to be sampled, nor requiring a matrix inversion per observation. When evaluating the likelihood specified by equation (3.5), for the 1D case a simple Cholesky decomposition is sufficient, but for the 2D case an eigen decomposition is needed to separate the kernel functions from the unstructured noise σ 2 (see 1 for a Stan model source code example). In the mean specified by equation (3.3), X ∈ R N×P is a design matrix of P population-level predictors (a.k.a. fixed-effects) with X i ∈ R 1×P representing the row vector of predictors pertaining to observation i. β = (β 1 , . . . , β P ) is a P × 1 vector of iid latent GPs representing the P population-level effects. Z ∈ R N×J is a design matrix of J group-level predictors (a.k.a. randomeffects). b = (b 1 , . . . , b J ) is a J × 1 vector of potentially correlated latent GPs representing the group-level effects. Depending on the experimental design, an optional offset term o η is included in equation (3.3) which may be either set to the mean of all y as a way of centring the data, inferred to include a measure of variability in the centering, or given a different value per observation if some measure of exposure needs to be incorporated that would not otherwise fit as its own predictor in X or Z. Analogous to the predictors X and Z for the mean, the matrices W ∈ R N×Q and U ∈ R N×R are respectively, the population-level and group-level predictors for the log standard deviation equation (3.4), with corresponding effects γ and u. An 1 https://github.com/lwiklendt/gp_kron_stan explicit offset term is missing here since such an offset is implicitly handled by the scale of k σ . Each GP function in each vector of population-effects is given an iid prior: However, for the vectors of group-effects functions we include correlations between functions via multivariate or multi-output where b and u are covariance matrices dependent on the structure of the Z and U design matrices. These matrices will generally be block-sparse, facilitating efficient computation. The kernel functions k {σ,β,γ,b,u} (x, x ) and their parameters, also known as hyperparameters of the GPs, will be covered in the next subsection "Kernel Functions." The response functions y i , and the design matrices X, Z, W, and U are the supplied "input" data. The vectors of functions β, b, γ, u, and hyperparameters, are to be estimated and correspond to "outputs" of the inference. The structure of the design matrices depends on the experimental design, and we find it easiest to derive the design matrices (also known as "model matrices") based on formula notation as specified in section 2 of Bates et al. (2015). We provide an application in section "Results" using the formulae (3.15) and (3.16). We are interested in modeling power that was calculated using the wavelet transform as described in sections "Wavelet Transform" and "Cross-Wavelet Transform." To fit the power over frequencies, x = f is a scalar that represents frequencies. To fit over frequencies and phase-differences, x = (f , ϕ) is a 2D point that represents frequencies in one dimension and phasedifferences in the other. Kernel Functions The form and parameters of the kernel functions k depend on whether the response functions are 1D or 2D. 
There are many potential kernels to choose from, and they can even be built up from smaller kernels (Duvenaud, 2014), but for the sake of brevity we will limit our exposition to one concrete kernel function for each type of domain. For the case of 1D curves over frequencies we use a log-space squared-exponential kernel: with λ specifying the lengthscale of the correlation based on the distance |log(f ) − log(f )| between any two frequencies f and f . At a distance of 0 we have equal frequencies f = f , where the correlation is 1 and covariance is τ 2 . As the distance approaches ∞ the correlation and covariance approach 0. For the case of 2D surfaces over frequencies and phasedifferences we use a product of the log-space squared-exponential kernel and a periodic kernel: where λ f and λ ϕ specify the log-frequency and phase-difference lengthscales. When the difference in phase-differences ϕ and ϕ is either 0 or 2π, or any integer multiple of 2π, then the phasedifference component of the kernel will be 1, identifying the locations ϕ and ϕ . For equations (3.6) and (3.7), k represents the kernel function used in constructing a covariance matrix, but for equations (3.8) and (3.9) k is a kernel function used in constructing a correlation matrix by setting τ = 1, since including a free parameter for variance in k would make the model non-identifiable due to the variance parameters already defined in b and u . The kernel in equation (3.11) is separable, such that we can write it as: where with abuse of notation we are identifying kernel functions based on their argument symbols, such that k(f , f ) and k(ϕ, ϕ ) are different functions, with k(f , f ) defined in equation (3.10) and k(ϕ, ϕ ) defined in equation (3.13). Since we can factorize the 2D kernel equation (3.12), we can create a covariance matrix using the Kronecker product of the individual covariance matrices built from kernels equations (3.10) and (3.13): The Kronecker factorization of the kernel matrices also allows for a substantial speed up in the numerical calculation of the Cholesky and eigen decompositions of the covariance matrices (Saatçi, 2012) used in inference. Prior distributions for hyperparameters λ and τ are experiment dependent, and will in general depend on the scale of the data. For the application presented in section "Results, " each σ, β, γ, b, u (subscript omitted for brevity) is treated independently, unless otherwise specified. We used λ ∼ Lognormal(0, 1) with the exception λ σ ∼ Lognormal(−0.7, 1) while ensuring λ σ < λ {f ,ϕ} . For the correlation between λ f and λ ϕ we used ρ f ϕ ∼ Beta(2, 2), and τ ∼ (2, 1). Implementation A coarse grid was chosen for the functional domain so that posterior sampling could complete within a reasonable time. The grid can be refined relatively quickly after the expensive sampling step. Rather than the naïve linear or cubic interpolation, we can use GP prediction such that the covariance between locations is faithfully preserved in the refinement. Given a vector of N grid coordinates x, a vector of M refined coordinates x * , a vector of N function values y corresponding to x, and the kernel function k, then we can produce a vector of M refined function values y * at x * with: where (x, x) is the N × N covariance matrix obtained by applying k to the coordinates in x, and (x * , x) is the M × N matrix given by the covariances obtained by applying k to x * and x. 
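The following sketch illustrates, under assumed and purely illustrative lengthscales and grid sizes (not the inferred hyperparameters), a log-frequency squared-exponential kernel in the spirit of equation (3.10), a 2π-periodic kernel over phase-differences in the spirit of equation (3.13), their Kronecker-product combination, and the GP refinement of equation (3.14) applied to a 1D curve.

```python
# Hedged sketch (illustrative parameters, not inferred values): log-frequency
# squared-exponential kernel, periodic phase kernel, Kronecker-product 2D
# covariance, and GP posterior-mean refinement y* = K(x*, x) K(x, x)^-1 y.
import numpy as np

def k_logfreq(f, fp, lam=0.5):
    d = np.log(f)[:, None] - np.log(fp)[None, :]
    return np.exp(-0.5 * (d / lam) ** 2)

def k_phase(p, pp, lam=1.0):
    # periodic in the phase-difference with period 2*pi
    d = p[:, None] - pp[None, :]
    return np.exp(-2.0 * np.sin(d / 2.0) ** 2 / lam**2)

freqs = np.geomspace(1.0 / 16.0, 16.0, 17)             # cpm, coarse grid
phases = np.linspace(-np.pi, np.pi, 18, endpoint=False)

K_f, K_p = k_logfreq(freqs, freqs), k_phase(phases, phases)
K_2d = np.kron(K_f, K_p)                                # covariance over the 2D grid

# 1D refinement in the spirit of equation (3.14): interpolate a noisy coarse curve
y = np.sin(np.log(freqs)) + 0.05 * np.random.default_rng(1).normal(size=freqs.size)
freqs_fine = np.geomspace(1.0 / 16.0, 16.0, 129)
K_xx = K_f + 1e-6 * np.eye(freqs.size)                  # jitter for stability
K_sx = k_logfreq(freqs_fine, freqs)
y_fine = K_sx @ np.linalg.solve(K_xx, y)                # GP posterior mean on fine grid
print(K_2d.shape, y_fine.shape)                         # (306, 306) (129,)
```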
Note, for the 2D case we can take advantage of: We use the Hamiltonian Monte Carlo sampler from the Stan (Carpenter et al., 2017) package to obtain a posterior distribution of GPs which can be inspected to detect where and how locations may differ between various categorical predictors. We apply the method to data (described in Section "Data") recorded from the descending and sigmoid colon of 11 healthy volunteers and 12 patients with slow-transit constipation, during 1 h preprandial and postprandial periods. Using formula notation: the design matrices X and Z are constructed from formula equation (3.15), and W and U from equation (3.16), according to the construction process described by Bates et al. (2015), where the log in equation (3.16) is a transformation of ω. The group predictor is a categorical variable indicating the group each subject belongs to: healthy or slow-transit constipation. The region predictor is a categorical variable indicating from which region of the colon the unit of data was recorded: descending or sigmoid. The meal predictor is a categorical variable indicating whether a recording was obtained during the preprandial or postprandial state, corresponding to a meal effect. The categorical variable subject identifies the subject. The nchan predictor in equation (3.16) is a real-valued standardized count of the number of sensors (or channels) in the recording, which varies per subject and per region. When computing weighted-averages over time as specified in sections "Wavelet Transform" and "Cross-Wavelet Transform, " we average not only over time but over both time and channels by effectively flattening the wavelet results into a single channel of length c|T|, where c is the number of channels and |T| is the number of time samples. Fewer channels are expected to result in a greater variation in the global averages, which is why we included it as a confounding factor of the signal variance. We set U = 0 with the formula equation (3.16) since we don't have repeated measurements, and so a within-subject variation is poorly identified. Two types of responses were analyzed, given by the 1D and 2D power from equations (2.4) and (2.6). The power was logtransformed to obtain the y s in model equations (3.1-3.4). For the 1D responses 33 frequency-bins were used, and for the 2D responses 17 frequency-bins and 18 phase-bins were used. After sampling from the posterior, the 1D responses were subdivided by a factor of 4 from 33 to 129 frequency 2 bins, and the 2D responses were subdivided by a factor of 6 from 17 to 97 frequency-bins and 18 to 108 phase-bins via GP interpolation equation (3.14). For each response type, the Hamiltonian Monte Carlo run consisted of 500 warm-up iterations and 500 sampling iterations over 8 (0-initialised) chains resulting in 4000 samples from the posterior distribution. We used an adapt-delta of 0.9. Diagnostics showed no divergent transitions, a top tree depth of 10, and visual inspection of trace plots showed good convergence that was validated by anR ≈ 1. Data appeared consistent with the posterior predictive distribution. On an i9-9900K processor running Windows 10 with 32GB RAM using PyStan v2.19.1.1 (Stan Development Team, 2019) with 8 parallel CPU cores (1 per chain), the 1D response type completed sampling in 75 min, and the 2D response type completed in 44 h. The computation process from raw pressure recording to time-averaged wavelet spectra for the 88 individual observations took approximately 30 s each. 
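For reference, the run configuration described above (500 warm-up plus 500 sampling iterations over 8 zero-initialised chains, adapt-delta 0.9, tree depth capped at 10) maps onto the PyStan 2.x interface roughly as sketched below. The Stan program here is a trivial stand-in, not the authors' GP model; a Stan source example for the actual model is linked earlier in the text.

```python
# Hedged sketch of the sampler configuration only (PyStan 2.x API).
import pystan

# A trivial stand-in model (NOT the authors' GP model) so the call below runs;
# it simply estimates a mean and standard deviation from y.
model_code = """
data { int<lower=1> N; vector[N] y; }
parameters { real mu; real<lower=0> sigma; }
model { y ~ normal(mu, sigma); }
"""

sm = pystan.StanModel(model_code=model_code)

fit = sm.sampling(
    data={"N": 5, "y": [0.1, -0.3, 0.2, 0.0, 0.4]},
    chains=8, n_jobs=8,               # 8 chains, one CPU core per chain
    iter=1000, warmup=500,            # 500 warm-up + 500 sampling per chain
    init=0,                           # 0-initialised chains, as in the text
    control={"adapt_delta": 0.9, "max_treedepth": 10},
    seed=1,
)
print(fit)                            # summary includes R-hat convergence diagnostics
```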
DATA We apply the aforementioned spectral decompositions and associated statistical analysis to colonic manometry data obtained to compare healthy volunteers and patients with slow-transit constipation. Pre-processing of the data was done to remove baseline drift and synchronous pressure increase removal in the same manner as detailed in Wiklendt et al. (2013). A synchronous pressure increase was defined as a synchronous increase in pressure waves that occurred across all manometry channels. Synchronous pressure waves that did not span all recording channels were not affected by this filtering. Pressures below 1mmHg were then clamped to 1mmHg, and log-transformed so that high-amplitude events would not overpower potentially interesting low-amplitude oscillations. The details of the healthy subjects, constipated patients, catheter types, placement, protocols and data collection have been described in a previous publication (Dinning et al., 2015). These are summarized briefly below. Subjects Colonic manometry was performed in 14 patients with scintigraphically confirmed slow transit constipation (2 male; median age 52 years; range 24-76 years). Colonic scintigraphy studies indicated that 13 of the 14 patients had >90% retention of isotope at 72 h. The remaining patient had no reading at 72 h but had >50% retention at 96 h. These data were compared to the colonic manometry recordings from 12 healthy adults (5 men; median age 51 years; range 27-69 years). Abdominal x-rays, taken at the end of each study, confirmed that the catheter tip was clipped to the ascending or hepatic flexure in 8 patients and to the transverse or splenic flexure in 6. In healthy subjects the catheter tip was located distal to or at the hepatic flexure in 11 and at the splenic flexure in 1. As all subjects had pressures sensors located in the descending and sigmoid colon, we used data from these regions for the analysis and results described in this article. All participants in the study had given written, informed consent and the studies were approved by the Human Ethics Colonic Manometry Colonic manometry was recorded with a fiber optic catheter containing 72 sensors spaced at one-centimeter intervals. On the day prior to the manometric recording, the bowel was cleared using sodium picosulphate and polyethylene glycol (Pharmatel Fresenius Kabi Pty Ltd., Hornsby Australia). All subjects drank clear fluids overnight. Lying in the left lateral position, with conscious sedation using midazolam and fentanyl, the manometry catheter was introduced with a colonoscope and clipped to the mucosa using Endoclips (Resolution Clip R Boston Scientific, MA, United States). Study Protocol Recordings were commenced within 60 min of the subject waking after the catheter placement. After a 2-h basal recording period, all subjects were given a 700Cal meal (24% protein, 43% fat, 33% carbohydrate). The meal consisted of 300ml of TwoCal R HN Vanilla (Abbott Nutrition, Columbus, OH, United States) and a chicken sandwich. Colonic pressures were then recorded for a further 2 h. RESULTS An example of the analysis applied to a recording from the sigmoid colon in a healthy adult is shown in Figure 2. 
RESULTS

An example of the analysis applied to a recording from the sigmoid colon in a healthy adult is shown in Figure 2. The images contain the manometric traces constructed as PMaps (Figures 2A,E), the wavelet power spectrum of pressure waves at each moment (Figures 2B,F), the global wavelet power spectrum showing the dominant frequencies for the period (Figures 2C,G) and the global wavelet power cross-spectrum showing the dominant frequencies and their directions of propagation (Figures 2D,H). In this example, the meal induced a large increase in power at 2-4 cpm (Figures 2F,G), which propagated mostly in a retrograde direction at 30-100 cm/min (Figure 2H; magenta oval). A second major frequency (∼1 every 3 min) also occurred 30-50 min after the meal (Figures 2E,F,H; aqua oval) and consisted of individual clusters, each containing pressure waves occurring at 2-4 cpm; the clusters are visible in Figure 2E.

1D Group Analysis

In this section we construct power vs. frequency plots of motor events and compare them between healthy adults and patients in the descending and sigmoid colon.

Healthy Adults vs. Patients With Slow Transit Constipation; Descending Colon (Figure 3)

The 1D analysis provides an indication of the power of pressure waves of different frequencies over a 1 h period. Preprandial activity is compared to the 1 h postprandial period. Furthermore, the difference between the periods can then be plotted to reveal significant changes caused by the meal, or significant differences between groups prior to or after the meal. During the preprandial recordings for healthy adults (Figure 3A) and patients (Figure 3D), a peak in power occurs at 2-4 cpm. A comparison in preprandial power between healthy subjects and patients is shown in Figure 3G. As can be seen, there are no significant differences, indicated by the ratio between the two power densities not lying outside of the 95% credible band (dotted curves). In the postprandial period, the peak at 2-4 cpm becomes more prominent (Figures 3B,E). The difference in post-meal activity between healthy subjects and patients is plotted in Figure 3H, which shows that in patients, activity from 3 to 6 cpm is of lower power than in healthy subjects, as indicated by the ratios within the 95% credible band being all below 1 (green shaded region). The effect of the meal (relative to preprandial activity) is shown in Figure 3C (healthy subjects) and Figure 3F (patients). In both groups the meal induced a significant increase in power across almost the full spectrum of frequencies tested (blue shaded regions in Figures 3C,F). In Figure 3I, comparison of the meal effect between the patients and healthy adults indicated no significant differences, as indicated by a ratio of 1 remaining within the 95% credible band. This analysis clearly shows that patients displayed a reduced power in the frequencies between 3 and 7 cpm after the meal compared to healthy adults. However, a meal proportionally induces a similar increase in power across the range of frequencies in both groups.

Healthy Adults vs. Patients With Slow Transit Constipation; Sigmoid Colon (Figure 4)

As with the descending colon, peak frequencies in the sigmoid colon were between 2 and 4 cpm, in both healthy adults and patients, in the pre- and postprandial periods (Figures 4A,B,D,E). The post-meal 3-5 cpm power is significantly reduced in patients when compared to healthy adults (green shaded region in Figure 4H). In both groups the meal induced a significant increase in power across almost the full spectrum of frequencies tested (blue shaded regions in Figures 4C,F).
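The comparisons above are read off ratios of posterior power densities and whether their 95% credible envelopes exclude a ratio of 1. A minimal sketch of that check is shown below, assuming the posterior power samples are already available as arrays; the variable names and shapes are illustrative, not the authors' code.

```python
import numpy as np

def ratio_credible_band(power_a, power_b, level=0.95):
    """Credible band of the element-wise ratio of two sets of posterior samples.

    power_a, power_b: arrays of shape (n_samples, n_frequency_bins).
    Returns per-bin lower and upper bounds of power_a / power_b.
    """
    ratio = power_a / power_b
    lo, hi = np.percentile(ratio,
                           [(1 - level) / 2 * 100, (1 + level) / 2 * 100],
                           axis=0)
    return lo, hi

# A frequency bin differs "significantly" when the whole band lies on one
# side of 1 (the red vertical line in the figures):
# lo, hi = ratio_credible_band(patient_samples, healthy_samples)
# significant = (hi < 1.0) | (lo > 1.0)
```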
Healthy Adults vs. Patients With Slow Transit Constipation; Synchronous Pressure Increase Included

To determine if the automated removal of the synchronous pressure increases had any impact upon these results we re-ran the analysis without any removal of data. The results remained unchanged (see Supplementary Figure 1), indicating that in these data removal of the synchronous pressure increases has no impact upon our findings in either the descending or sigmoid colon.

2D Group Analysis

The data used in the 1D analysis can be re-analyzed using a 2D group analysis which illustrates the direction of propagation of pressure waves across the range of frequencies (1/16th to 16 cpm).

Healthy Adults vs. Patients With Slow Transit Constipation; Descending Colon (Figure 5)

Comparison of the preprandial recordings between the two groups (Figures 5A,D) indicates a significant reduction in retrograde and antegrade propagation across a wide range of frequencies (4-16 cpm) in the patient group (ratios shown in Figure 5G). During the postprandial period in healthy adults, the retrograde cyclic activity between 2 and 8 cpm is of significantly greater power than antegrade cyclic activity at the same frequency (black and white hatched outline in Figure 5B). The propagated frequencies between 1 and 16 cpm were significantly reduced in patients compared to healthy adults during the meal period (pale blue area at top of Figure 5H). The effect of consuming a meal on propagation is shown in Figures 5C,F. In both groups, the meal caused a significant increase in all propagated frequencies, which did not differ between healthy adults and patients (Figure 5I). Therefore, while there were significant differences shown in the post-prandial motility between healthy adults and patients (Figure 5H), the proportional meal effect size was similar between the groups (Figure 5I).

Healthy Adults vs. Patients With Slow Transit Constipation; Sigmoid Colon (Figure 6)

A comparison of the preprandial recordings (Figures 6A,D) between the two groups indicates a significant reduction in both retrograde and antegrade propagation across the full range of frequencies in the patient group (Figure 6G; pale blue area). During the postprandial period in healthy adults, the retrograde cyclic activity between 2 and 8 cpm was of significantly greater power than the antegrade cyclic activity in the same frequency range (black and white hatched outline in Figure 6B). Comparison of the post-prandial period indicates a significant reduction in both retrograde and antegrade propagation across the full range of frequencies in the patient group (Figure 6H; pale blue area). The meal effect on propagation within each group is summarized in Figures 6C,F. In both groups, the meal caused a significant increase in the power of propagating activity at all frequencies, with a peak effect at 2-6 cpm in healthy adults (Figure 6C; bright orange region). The meal also caused a significant increase in the power of propagating activity with a frequency between 2 and 6 cpm in patients (Figure 6F), but this increase was not as marked as in healthy adults (Figure 6I; blue region within the white circle).

FIGURE 3 | In each image, frequency is shown on the Y-axis. In panels (A,B,D,E) power is shown on the X-axis. 2000 overlapping gray lines in each panel represent posterior samples, and the dotted black lines form envelopes of 95% credible intervals. Panels (G,H) represent the power ratio across the frequency range, between patients and healthy adults. When the entire envelope lies to one side of the vertical red line (which represents a ratio of 1), this shows a significant deviation. Thus, in the period after a meal, if we compare patients (E) with healthy adults in panel (B), a significant reduction in power of the 3-6 cpm activity can be seen in the patients [shown by the green area in panel (H) for the frequencies where the entire envelope lies to the left of the red vertical ratio line]. Panels (C,F) depict the ratio of power of postprandial activity to preprandial activity for healthy adults and patients, revealing that both groups show a significant increase in power at frequencies ranging from 1/16th cpm to 9 cpm (the envelope lies to the right side of the red vertical ratio line). Panel (I) shows that the pan-frequency increase in power did not differ significantly between patients and healthy adults.

Comparison Against Manual Analysis

Our original publication of these data used manual analysis to identify propagating motor patterns in healthy adults (Dinning et al., 2014). That article was the first to describe in detail the propagating motor pattern which consisted of pressure waves with a frequency of 2-6/min. This motor pattern was labeled the cyclic motor pattern and the key findings in that article, centered upon this motor pattern, included: (i) the cyclic motor pattern made up 69% of all propagating activity; (ii) it propagated in a predominantly retrograde direction; (iii) a meal was shown to increase the count of all motor patterns, however, the major effect of a meal upon colonic motility was a significant (P < 0.001) increase in the retrograde cyclic motor pattern. With our novel, automated technique, we have also shown that after a meal the retrograde cyclic activity between 2 and 8 cpm is of significantly greater power than antegrade cyclic activity at the same frequency [see sections "Healthy Adults vs. Patients With Slow Transit Constipation; Descending Colon (Figure 5)" and "Healthy Adults vs. Patients With Slow Transit Constipation; Sigmoid Colon (Figure 6)"]. The meal also resulted in a significant increase in the power of all propagating activity, with a peak effect at 2-6 cpm in healthy adults. In our follow-up article, comparing the data from healthy controls to patients with slow transit constipation, our manual analysis showed that a meal induced a significant increase in the cyclic motor pattern in patients, but the increase was significantly reduced in comparison to the increase observed in healthy adults (Dinning et al., 2015). These findings are confirmed in this current article (see Figure 6I).

FIGURE 4 | As in Figure 3, apart from the region of bowel studied. In the postprandial period, the power of 3-6 cpm contractions was increased compared to preprandial, but this effect was smaller in the patient group compared to healthy adults [see green area in panel (H) for the frequencies where the entire envelope lies to the left of the red vertical ratio line]. The meal caused a significant increase in power at frequencies ranging from 1/16th cpm to 9 cpm [see blue areas (C,F)]. This overall effect of the meal did not differ between the groups (I).

DISCUSSION

In this article, we have presented a method for analyzing high-resolution, spatiotemporal colonic manometry data by computing various time-averaged spectra and using them as responses in a functional mixed-effects model, inferred via Hamiltonian Monte Carlo.
This approach has allowed us to identify the frequencies of colonic pressure waves and compare differences in their characteristics between healthy adults and patients with slow transit constipation. Our main findings indicate that: (i) in both groups, prior to and after a meal, the dominant frequency of pressure waves in the descending and sigmoid colon is between 2 and 6 cpm and a meal results in a significant increase in the power of pressure waves across a wide range of frequencies (1/16-8 cpm); (ii) in healthy adults only, the retrograde cyclic activity between 2 and 8 cpm is of significantly greater power than antegrade cyclic activity at the same frequency; (iii) in the sigmoid colon, the meal induced an increase in the power of antegrade, synchronous, and retrograde propagating activity with frequencies between 2 and 6 cpm, which was of significantly greater power in healthy adults than in patients.

Previously we had presented our first step in the computerized development of software for the analysis of colonic pressure waves. That work allowed us to separate patients with slow transit constipation from healthy adults on the basis of a single "indicator value" calculated from the colonic manometry data. However, that indicator value provided no information on the frequency of pressure waves, or their direction and speed of propagation; all features provided by our current automated approach. In addition, we have also previously used fast Fourier transform (FFT) and wavelets to demonstrate a postprandial increase in the power of colonic activity (Dinning et al., 2015, 2016), but those publications lacked the rigorous statistical analysis of the current article.

FIGURE 5 | Panels (C,F) compare power of propagating waves across the frequency range between preprandial and postprandial periods, for healthy adults and patients. The extensive red-shaded region in panels (C,F) indicates that propagating activity increased in power after the meal at all measured frequencies. The area marked by the solid white lines indicates a significant increase. Panel (I) compares the meal effect between patients and healthy adults, confirming that the comparative meal effect between the two groups was similar.

The advantages of the wavelet transform over the Fourier transform for our application are twofold. Firstly, the wavelet transform provides an instantaneous spectrum at each time point, allowing us to remove harmonic artifacts with the MesaClip algorithm (Wiklendt et al., 2020), and also facilitating the comparison of spectra between adjacent sensors for each time point to obtain the cross-wavelet transform which can reveal propagation delays via phase differences. Secondly, although the short-time Fourier transform (STFT) could be used to compute near-instantaneous spectra, it requires one to choose (a) a window width, (b) a window function, and (c) an overlap amount between adjacent-in-time windows. The wavelet transform only needs the equivalent of (b), whereas the window width adjusts naturally to each frequency being analyzed, with the equivalent of maximum possible overlap without the prohibitively high computational burden as would be the case for the STFT.
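As an illustration of the first point, a bare-bones Morlet wavelet power computation is sketched below. It is deliberately simplified (crude normalisation, direct convolution, no MesaClip or cross-wavelet step) and is not the implementation used in this article; the sampling rate and frequency grid in the usage comment are assumptions.

```python
import numpy as np

def morlet_power(signal, fs, freqs, w0=6.0):
    """Wavelet power |W(f, t)|^2 of a 1-D signal using a Morlet wavelet.

    signal : 1-D array of (log-transformed) pressures from one sensor
    fs     : sampling rate in Hz
    freqs  : analysis frequencies in Hz (e.g. cycles/min divided by 60)
    w0     : Morlet centre-frequency parameter
    """
    n = signal.size
    power = np.empty((freqs.size, n))
    for i, f in enumerate(freqs):
        scale = w0 / (2 * np.pi * f)            # scale with centre frequency f
        half = int(np.ceil(4 * scale * fs))     # truncate wavelet at ~4 scales
        t = np.arange(-half, half + 1) / fs
        wavelet = np.exp(1j * w0 * t / scale) * np.exp(-(t / scale) ** 2 / 2)
        wavelet /= np.sqrt(scale * fs)          # crude normalisation
        coef = np.convolve(signal, np.conj(wavelet)[::-1], mode="same")
        power[i] = np.abs(coef) ** 2
    return power

# Example: power at 1-16 cycles/min for an assumed 10 Hz recording
# freqs_hz = np.linspace(1, 16, 33) / 60.0
# p = morlet_power(log_pressures[0], fs=10.0, freqs=freqs_hz)
# global_spectrum = p.mean(axis=1)              # time-averaged (1D) spectrum
```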
Importantly, the outcomes of the colonic manometry analysis provided in this article do not contradict our manual analysis of these same data published previously (Dinning et al., 2014, 2015). To obtain the results in this article, less than two days were required. In comparison, the manual analysis of the control and patient data took five weeks to perform. Thus, the detailed analysis provided by our automated technique is orders of magnitude beyond the methods currently available in detail, speed of analysis, and manual labor saved.

The cyclic nature of colonic pressure waves shown in this analysis is not a new finding. Indeed, regular human rectal pressure waves at approximately 2-3/min were reported in Welch and Plant (1926). Nearly all colonic manometry studies since then show figures or report findings of pressure waves with similar frequencies. The physiological role of such motor patterns remains undetermined, but it is likely to play a role in mixing or retarding colonic flow (Spriggs et al., 1951; Rao and Welcher, 1996; Rao et al., 2001a; Lin et al., 2017a,b; Pervez et al., 2020). The frequency of 2-6 cpm is approximately the same as the frequency of human colonic slow waves (Rae et al., 1998; Carbone et al., 2013) which are generated by the interstitial cells of Cajal (ICC) (Huizinga et al., 2011; Costa et al., 2013). The rapid increase in this motor pattern (within 60 s of a meal being commenced) suggests that the slow wave activity can be modulated by extrinsic neural pathways. Therefore, the ability to accurately identify this motor pattern and determine the influence of physiological stimuli upon it may help to unravel both the normal physiology of healthy adult colonic motility and provide insight into abnormalities that exist in patients with functional colonic disorders.

FIGURE 6 | As in Figure 5. In panel (B), the black and white hatched outline indicates that the power of retrograde propagating motor activity at 2-8 cpm was significantly greater than the power of antegrade propagating motor activity at the same frequency. Comparing the preprandial and postprandial periods for healthy adults and patients, the analysis shows that the power of propagating waves at all frequencies was reduced in patients (G,H). The meal resulted in a significant increase in the power of waves at all frequencies in both groups [region demarcated by the solid white lines in panels (C,F)]. Panel (I) shows that the 2-6 cpm post-prandial increase in patients was significantly reduced compared to the increase in this frequency shown in healthy adults [region within the solid white outline in panel (I)].

In addition to the 2-6 cpm activity, a recent publication by Pervez et al. (2020) showed a cyclic motor pattern consisting of clusters of pressure waves at a frequency of 11-13 cycles/min. This motor pattern was identified throughout the colon and it occurred in isolation from other motor patterns or following high-amplitude propagating contractions. In our grouped data, a motor pattern of this frequency was not prominent either before or after a meal. However, that does not mean this higher frequency did not exist. Examples can be found in some of the individual subjects. Figure 7 shows post-meal sigmoid colon data from an individual patient in which a peak in the global wavelet spectrum can be seen at ∼11/min (Figure 7C; hatched box). The overall diminished prominence of this frequency in our data, compared to the study by Pervez et al. (2020), may reflect the different protocols used to record colonic motor patterns. In our data, manometry was recorded in a prepared colon (faeces removed) with a fiberoptic catheter and, apart from a meal, no other stimulation was provided.
The protocol used by Pervez et al. also recorded from a prepared colon; however, they used water-perfused manometry and colonic balloon distension, and gave the subjects a meal and the laxatives prucalopride and bisacodyl. This combination of colonic stimulation may have initiated prominent 11-13 cycles/min motor activity. This highlights that differences in protocols should always be considered when comparing data between colonic manometry studies.

FIGURE 7 | Representation of a manometry recording from the sigmoid colon in a single patient after a meal. (A) shows the color maps depicting raw pressure data from the sensors within the sigmoid colon. (B) shows the power across the frequency range of 1/16th to 16 cycles per minute (cpm). (C) shows graphs summarizing the power at each frequency; and (D) shows a summary of the 2D cross-wavelet analysis with retrograde and antegrade propagation. Synchronous activity is shown at 0 on the x-axis. Note that in panel (C) a peak can be seen at ∼11 cpm (hatched box). This higher frequency was seen in some of the subjects in this study, but the power of this frequency is very low.

A common feature of many colonic manometry recordings is the high amplitude propagating contraction (HAPC). These events are associated with movement of content, defecation (Herbst et al., 1997; Bampton et al., 2000) and have been shown to be diminished or absent in patients with constipation (Rao et al., 2001b; Dinning et al., 2010). Therefore, their presence or absence in a manometry recording is always noted. The approach described in this article does not specifically identify these motor patterns, however, if several occur sequentially within the 1/16-16 cpm range used in this analysis they will form part of the calculated result. This should not be seen as a problem: in short-duration recordings within the prepared colon, HAPCs make up <2% of the propagating activity, with many healthy adults not having any (Dinning et al., 2014). For specific characteristics of the HAPC we would still recommend manual analysis. In addition to HAPCs, articles on colonic manometry provide counts of all other propagating contractions and their extent of propagation. Such data is not available with this automated approach. However, software to both count the number of individual propagating contractions and calculate their propagation length has been developed and validated by our colleagues in New Zealand (Paskaranandavadivel et al., 2018). This work is currently submitted for publication elsewhere. It is likely that a combination of both approaches will be used in the future to provide a full description of colonic motor patterns.

It is also important to note that we have based our findings upon these data after we removed synchronous pressure increases that occurred across all recording channels. Recently there have been publications in which these synchronous pressure increases have been included in the analysis (Corsetti et al., 2017; Chen et al., 2018; Pervez et al., 2020). However, synchronous pressure increases can also be caused by abdominal strain, diaphragmatic movement (laughing), coughs, sneezes or by body movement. Corsetti et al. (2017) discriminated between synchronous pressure waves caused by colonic motor activity and abdominal wall muscle activity using abdominal wall electromyography (EMG).
In our study, EMG was not used and therefore we had no way of discriminating between artifact and a genuine colonic motor pattern. As such, our pre-processing of the data prior to analysis involved the identification and removal of this activity. However, as shown in Supplementary Figure 1, such removal had no impact upon our findings. Synchronous pressure waves that did not span the entire recording length were always part of our data and can be seen in the 2-D images as the activity recorded at phase 0 (Figures 2D,H, 5, 6, 7D).

Whether or not our automated approach improves the diagnostic potential of colonic manometry remains to be determined; as yet it has only been performed on a small number of studies. As shown in Figure 2, our new approach does allow for a rapid appreciation of the colonic contractile activity in any given recording. Within 30 seconds, figures can be produced which show the dominant frequencies of pressure waves, their propagation direction and speed, and whether or not a meal (or any other stimulus) changes these characteristics. We are currently in the process of applying this analysis to larger data sets, with different colonic stimulation techniques and differing types of constipation. Such analysis may then allow us to determine whether defined categories of constipation (slow transit, normal transit, constipation predominant irritable bowel syndrome) display characteristic differences compared to healthy adults. Importantly, this analytical approach will also allow us to determine the effects of treatment upon colonic motility.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Human Ethics Committees of the South Eastern Area Health Service, Sydney and the University of New South Wales (05/122; May 2010), and The Southern Adelaide Health Service/Flinders University Human Research Ethics Committee (419.10; March 2011). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

LW devised the methodology and wrote the software and technical parts of the manuscript. PD wrote the manuscript. MC and PD established the concepts. MC, SB, and SS made critical revision of the manuscript for important intellectual content. All authors contributed to the article and approved the submitted version.

FUNDING

The work in this manuscript was supported by the National Health & Medical Research Council (Project Grant APP1162223).

ACKNOWLEDGMENTS

We thank Paul Heitmann for proofreading and improvements to readability.

SUPPLEMENTARY MATERIAL

605066/full#supplementary-material

Supplementary Figure 1 | The one-dimensional (1D) analysis of pressure waves across a range of frequencies in the descending and sigmoid colon for healthy adults and patients with slow transit constipation during the preprandial and postprandial periods. The left hand images (pan-recording synchronous pressure waves removed) are the same as shown in Figures 3, 4 in the manuscript. The right hand images show the results of the analysis without the pan-recording synchronous pressure waves removed. Removal of the synchronous pressure waves that span all recording sites has no impact upon the final results.
2019-12-05T05:10:51.000Z
2019-12-05T00:00:00.000
{ "year": 2020, "sha1": "0cc01cc0a61a81742b71c5dad84aafc7355547b6", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2020.605066/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "43563d2edcd4213440a53906edc68b6e0ed67996", "s2fieldsofstudy": [ "Medicine", "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Mathematics", "Biology", "Computer Science" ] }
213679185
pes2o/s2orc
v3-fos-license
Characterization of a scintillator tile equipped with SiPMs for future cosmic-ray space experiments

Current gamma-ray and cosmic-ray satellite experiments employ plastic scintillators to discriminate charged and neutral particles and to identify nuclei. Scintillators are commonly read out using the classical photomultiplier tubes (PMTs). Recent measurements and R&D projects are demonstrating that Silicon Photomultipliers (SiPMs) are suitable for the detection of fast light signals with resolution up to the single photoelectron, with a lower power consumption. For these reasons, next generation missions are planning to replace PMTs with SiPMs. We tested a prototype plastic scintillator tile, equipped with a set of SiPMs, and studied its response to a beam of electrons and pions at CERN. We used Near Ultraviolet (NUV) SiPMs of 1×1 mm² and 4×4 mm² area, placed along the edges of the tile. The tile was irradiated in different positions in order to study the dependence of the collected light on the impact point of the beam particles. We also varied the energy of the beam in order to study how this parameter affects the amount of collected light.

Introduction

Plastic scintillators are widely used as particle detectors and discriminators in satellite experiments. Gamma-ray telescopes, such as the Fermi-LAT and DAMPE, employ these systems as anti-coincidence detectors in order to reject the charged cosmic-ray background [1,2]. In other cases, plastic scintillators can be used to discriminate the charge of the incoming particle by measuring its energy loss in the scintillator. The detector is often segmented into small tiles to enhance gamma-ray selection efficiency, which would otherwise be limited by the back-splash of secondary particles produced in the electromagnetic showers initiated by gamma-rays. This effect is particularly important for gamma-ray energies above 10 GeV. Usually, in most satellite experiments, scintillators are read out with photomultiplier tubes (PMTs). The high operation voltage required by PMTs (of the order of kV) makes this solution impractical on satellites. Recent developments in the field of Silicon Photomultipliers (SiPMs) have opened up the possibility of replacing PMTs. SiPMs are operated at much lower voltages (of the order of tens of V) and show a very good sensitivity to low light yields. For these reasons, plastic scintillators coupled to SiPMs are being tested for future missions such as e-Astrogam [3], AMEGO and HERD [4]. In recent years some tests of scintillators coupled to SiPMs have already been performed, exploring this possibility also for different applications [5,6,7]. In this work we present the measurements carried out on a plastic scintillator tile equipped with SiPMs produced by FBK and provided by AdvanSiD, sensitive to Near Ultraviolet (NUV) photons.

Scintillator tile preparation

We used the plastic scintillator BC-404, which has a light yield of 68% of Anthracene and peak emission at 408 nm [8]. The tile used has a square shape with a side of 15 cm and a thickness of 1 cm. Two of the corners were cut at 2.5 cm from the vertex. The geometry of the tile is shown in figure 1. The scintillator was polished and wrapped with white paper as a reflector and black paper as coverage. Small windows were cut in order to place SiPMs directly on the scintillator. The optical connection between the scintillator and the SiPM was achieved using optical grease. We used NUV SiPMs produced by FBK of 1×1 mm² and 4×4 mm² area, with a micro-cell pitch of 40 µm.
The photon detection efficiency (PDE) peaks at 400 nm, matching the BC-404 emission, with a maximum value of 43%, which is reached at 5 V of over-voltage [9]. We equipped the tile with 12 SiPMs, 6 of each size, placed at different positions along the tile perimeter, as shown in figure 1. We will refer to the 4×4 mm² SiPMs as Large SiPMs and to the 1×1 mm² as Small SiPMs. Each SiPM was read out using a transimpedance amplifier with an RC filter for tail cancellation. The 12 analog signals were integrated and acquired with a Caen V792 QDC [10].

Beam test setup

The tile was tested at the CERN PS T10 beam line with 5 GeV/c particles and at the CERN SPS H8 beam line with 20 GeV/c particles. In both cases the beams were composed mainly of pions and electrons. A trigger system consisting of two plastic scintillators placed along the beam line was implemented. At PS-T10, a plastic scintillator with a hole was used as a halo veto in order to select a circular beam spot of 3 cm diameter. In this case, we moved the tile in 2 cm steps in order to irradiate the scintillator in different positions and to study the dependence of the light collected by the SiPMs on the beam position. At SPS-H8 the tile was irradiated in the central position only.

Position scan

As already mentioned in the previous section, we first tested the tile at PS-T10. We irradiated the tile in 33 different positions with beam spots of 3 cm diameter. For each run we obtained the average number of photons detected by studying the measured charge distributions of each SiPM. Figure 2 shows two sample spectra measured for a small and a large SiPM. The pedestal distributions obtained without particles are superimposed on the same plots. The small-amplitude peaks visible are due to dark counts of the SiPMs which occur in the integration gate of the QDC. Similar spectra were obtained for all SiPMs and for all runs in different configurations. In the case of small SiPMs the number of collected photons was very low (of the order of a few photons on average). We fitted the charge distributions with multi-Gaussian functions and then fitted the peak areas with a Poisson distribution, obtaining the average number of photons detected. In the case of large SiPMs the individual photon peaks could not be fitted individually, due to the higher intrinsic noise of the larger SiPMs and to the relatively low statistics collected for each peak. We decided to reduce the number of bins and to fit the resulting histogram with a Landau distribution folded with a Gaussian distribution. The ADC charge corresponding to the peak of the Landau function was then converted into photons using the conversion factor from ADC counts to photons, which was obtained by fitting the pedestal distributions for each SiPM. Figure 3 shows two histograms and their fitting curves for a small and a large SiPM.
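As an illustration of the small-SiPM procedure, the sketch below fits a Poisson distribution to the areas of the photoelectron peaks to recover the mean number of detected photons. The numerical values and function names are invented for the example and are not the analysis code used for this work.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import poisson

# peak_counts[k]: fitted area of the k-photoelectron peak obtained from the
# multi-Gaussian fit of one small-SiPM charge spectrum (illustrative values).
peak_counts = np.array([1200.0, 2150.0, 1980.0, 1180.0, 540.0, 190.0])
k = np.arange(peak_counts.size)

def poisson_model(k, mu, norm):
    """Expected counts in each photoelectron peak for a mean of `mu` photons."""
    return norm * poisson.pmf(k, mu)

(mu_hat, norm_hat), _ = curve_fit(poisson_model, k, peak_counts,
                                  p0=[1.5, peak_counts.sum()])
print(f"average detected photons ~= {mu_hat:.2f}")
```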
Plots in figure 4 summarize the results obtained when changing the position in which we irradiated the tile for all large SiPMs. Each plot shows the number of photons detected by one SiPM in all the positions tested, which is indicated by the numbers inside the circles and by the color scale. The red boxes on the edges represent the position of the SiPM along the tile, with the numbers inside these boxes indicating the arbitrary index we assigned to each SiPM. The results show that the number of photons detected is almost constant (∼30-40 ph.) in all positions and for all SiPMs, with peaks in positions close to the SiPMs. This effect is probably due to a higher contribution of direct light. Similar results were obtained for small SiPMs. However, in this case the number of detected photons (less than ∼3) was not sufficient to separate the particle signal from the pedestal.

Efficiency

As shown in figure 2, the large SiPMs provide a very good separation of pedestal and signal distributions and could be well suited to detect the passage of a charged particle. At SPS-H8 the tile was irradiated in the central position only and we collected enough statistics to evaluate the detection efficiency. The left plot in figure 5 shows the pedestal and signal distributions measured for one of the large SiPMs. Individual photon peaks are visible up to more than 50 photons. We evaluated the integral of the signal histogram with a varying lower threshold in order to estimate the detection efficiency for a minimum ionizing particle. The result is shown in the right plot in figure 5. The visible steps are due to the individual peaks in the distribution. The result shows that a very high efficiency can be reached with this simple configuration, fulfilling the requirements of anti-coincidence systems in cosmic ray satellites.

Conclusions

The measurements performed show that SiPMs can be coupled to scintillators to detect the passage of charged particles. The 4×4 mm² SiPMs proved to be appropriate to detect the passage of a minimum ionizing particle above the background. On the other hand, the 1×1 mm² SiPMs detect too few photons to separate the signal from the pedestal and to efficiently detect particles. However, they could be used to extend the dynamic range and to detect or reject heavier ions. The beam position scan shows that the response is almost uniform across the tile, with the exception of the impact points close to the SiPM positions, for which the contribution of direct light is higher. This aspect must be taken into account when measuring the energy deposition in the scintillator. Finally, the detection efficiency achieved with this configuration is close to the requirements of ACD detectors for satellites. Improvements can be obtained by summing or implementing coincidence of multiple SiPMs. More tests are planned in order to study different tile and SiPM configurations and to fully explore the potential of these detectors for cosmic ray experiments.
2019-09-16T20:13:34.053Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "f3081e0f02a2032f519a239dc4c6720e26ba0696", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1390/1/012119", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "fd5e5f0299e7157d58561c3806d0c1f4948c19a2", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
7080267
pes2o/s2orc
v3-fos-license
Dysarthria and Quality of Life in neurologically healthy elderly and patients with Parkinson's disease

Purpose: To compare the speech and voice of Parkinson's disease (PD) patients and neurologically healthy elderly adults (control group, CG), to find out whether these features are related to the disease or the normal aging process, and to investigate the impact that dysarthria has on the Quality of Life (QoL) of these individuals. Methods: This is a cross-sectional study involving 25 individuals, 13 patients with PD and 12 CG. All the participants underwent vocal assessment, perceptual and acoustic analysis based on the "Dysarthria Assessment Protocol", and analysis of QoL using a questionnaire, "Living with Dysarthria". The data underwent statistical analysis to compare the groups in each parameter. Results: In the assessment of dysarthria, patients with PD showed differences in the prosody parameter (p=0.012), in the habitual frequency for females (p=0.025) and males (p=0.028), and in the intensity range (p=0.039) when compared to the CG. In the QoL questionnaire, it was observed that patients with PD showed a more negative impact on QoL compared to the CG, as indicated by the total score (p=0.005), with various aspects influencing this result. Conclusion: The degree of modification of speech and voice of patients with PD resembles that seen in the normal aging process, with the exception of prosody and the habitual frequency, which are related to the greatest negative impact on the QoL of patients with PD.

INTRODUCTION

The number of elderly people has been growing all over the world as a result of the increased life expectancy and the reduced mortality rate along with a drop in pregnancy rates, resulting, therefore, in a slower rhythm of population growth and an accelerated aging process (1). In Brazil, it is estimated that by 2020 the number of elderly will reach 32 million (2). The greatest challenge of this century will be to take care of this population, who present, among many characteristics, an elevated prevalence of chronic and incapacitating diseases (2). Parkinson's disease (PD) is an example, as its incidence increases with age and it affects 1-2% of the population over 65 years of age (3). PD is a chronic neurodegenerative disease characterized by the death of the dopaminergic neurons of the substantia nigra pars compacta of the midbrain. The dopaminergic neurons are responsible, among other functions, for the control of motor activities. The main clinical characteristics of the disease are tremor at rest, bradykinesia (slowness of movement), muscle rigidity (decreased range of movement), and alteration of posture maintenance reflexes, resulting in postural instability (4,5). In addition, it is estimated that 70-90% of patients with PD have speech and voice alterations, known as hypokinetic dysarthria. The most common symptoms of dysarthria found in PD are reduced vocal intensity, restricted modulation, monotone voice, changes of intonation, altered speed of speech, reduced frequency range, hoarse and breathy vocal quality, and articulatory imprecision (6,7).
The PD is the second most common neurodegenerative disease among elderly, with its prevalence estimated to be 3.3% in Brazil.Considering that it is a disease that mainly affects elderly, it is necessary to have studies that identify and differentiate the PD characteristics from the normal aging process, once it is known that speech and voice alterations also occur during this process (8) .Moreover, the disability caused by the disease limits the patients' activities and their social participation, compromising their quality of life (QoL) (9) .The study of the impact such alterations bring upon the QoL of patients with PD is necessary due to the subjective consequences of living with a speech disease due to a progressive neurological condition (10) . Therefore, this study aims to analyze and compare the voice and speech characteristics of PD patients with those of neurologically healthy individuals, in an attempt to differentiate the changes related to the disease from the ones related to the average aging process.Moreover, the impact dysarthria has on the QoL of these subjects was aimed to be investigated. METHODS This is a cross-sectional study, inserted in a broader study called "Disartria e qualidade de vida nas doenças dos gânglios da base," approved by the Research Ethics Committee of the Faculdade de Ciências Médicas of the Universidade Estadual de Campinas, endorsement no.710/2011. Selection of the subjects The study consisted of 25 subjects, 13 patients in the PD group (PG) and 12 neurologically healthy individuals in the control group (CG).The patients involved were followed up by the Physical Activity Program for PD patients (PROPARKI), in the Posture and Locomotion Studies Laboratory (Laboratório de Estudos da Postura e da Locomoção -LEPLO), in the Physical Education Department -Bioscience Institute -in the Universidade Estadual Paulista (UNESP).The participants in the CG were selected according to the age range and schooling level similar to the ones in the PG. The inclusion criteria were as follows: patients previously diagnosed with PD were classified in stages 1; 1.5; 2; 2.5 and 3 of the disease by the Hoehn and Yahr scale (11) ; they would be in the on-phase of the medication during evaluation. The exclusion criteria were as follows: patients diagnosed in phases 4 and 5 of the disease (10) ; submitted to surgical treatment; had been through or were under speech language treatment; and were with dementia or cognitive alteration conditions.For the CG, individuals with neurodegenerative diseases were not recruited. All the participants in the research were volunteers in the participation and, after agreement, signed the informed consent. Procedures Data collection occurred in the Undergraduate course of the Bioscience Institute of UNESP in a room with low level of environmental noise. The evaluation of dysarthria was performed according to the "Dysarthria Evaluation Report", adapting to the phonetic and linguistic characteristics of the Brazilian Portuguese, and formulated with the most altered components of the speech (12) .The data collected were recorded through audio and video, using an external sound board (by M-audio), microphone (SM-58), PRAAT (Programa de Análise Acústica) software, and Sony Cyber-shot camera of 7.2 megapixels. The patients, during data collection, were sitting down, with the microphone placed approximately 15 cm from the mouth.Before carrying out each test, the patients were oriented on the correct way to conduce it. 
According to the description of the protocol, the tests carried out by the participants were the filming of 1-minute breathing, to verify the respiratory cycles during this time; the emission of the number of words per exhalation, through the counting of numbers; the emission by vowels /a/ and /i/ and the consonants /s/ and /z/ in maximum phonation time (MPT); reproduction of phonemes, syllables, words, and affirmative, interrogative, and exclamatory phrases (12) .These tests were always performed with the previous explanation of the evaluator. The speech samples of the subjects were evaluated through an auditory-perceptual and acoustic analysis of the voice.In the auditory-perceptual analysis, the parameters assessed were breathing, phonation, articulation, and prosody, through attentive hearing and observation by the evaluator.At the end of each parameter, the evaluator gave scores from zero to six -zero, the absence of alteration and six, a severe alteration. In the acoustic analysis, we collected the habitual frequency, habitual intensity, MPT, and sub-harmonic presence or absence, taken from the production of the sustained vowel /a/.In addition, we collected the frequency and intensity, and speech rate (syllables per second) of the phrase "É proibido fumar aqui" ("Smoking is prohibited here").Both tasks are included in the "Dysarthria Evaluation Report". The acoustic measures were carried out by measuring the voice using the PRAAT software.The data used were selected from the most stable segments of the sustained vowel /a/ of the subjects, and the calculations were done manually, without the use of the software, to increase the reliability of data.Moreover, the acoustic measures of the extension of intensity in the phrase "É proibido fumar aqui" were analyzed in the vowels, for data standardization, considering that the vowel "u" after the fricative phoneme /f/ was excluded from the research, due to the interference of this phoneme.For the verification of the extension, we performed the calculation of the difference between higher and lower frequencies and intensities. The auditory-perceptual and acoustic evaluations of the speech samples of the individuals in the research were analyzed randomly, and the analysis was carried out by two evaluators with experience with dysarthria, by consensus.This way, the evaluations of all the participants in the research were recorded on video, so that both evaluators could, together, observe the parameters that needed the image of the participant to be observed, such as cycles per minute, words per exhalation, resonance, and articulation.It is noteworthy that for the auditory-perceptual and acoustic analysis of the voice, the evaluators were blind, i.e., they did not make use of video-recorded images. For the evaluation of the impact of dysarthria on the QoL of patients and elderly, the "Living with Dysarthria" ("Vivendo com Disartria") questionnaire was used, developed by the Vardal Institute, translated into Brazilian Portuguese, and culturally adapted by Behlau and Padovani (10,13) .This instrument aims at evaluating the perception of difficulties in speech of individuals with dysarthria, i.e., the way subjects perceive themselves and their difficulties in speech (10,13) . 
This questionnaire consists of ten sections, each containing five statements, to which the subjects must answer from one to six, the lowest number being "totally disagree" and the highest number being "fully agree". For the analysis of this questionnaire, we calculated the median of the answers for the five statements in each section (1-10) and also the sum of the scores of all 50 statements. The total score may reach a minimum value of 50 and a maximum of 300 points. Only in section 1, relevant to speech and voice aspects (breathing, phonation, articulation, and prosody), were the statements analyzed individually, in addition to comparing the medians between groups.

Data analysis

The data obtained through the "Dysarthria Evaluation Report" and the questionnaire "Living with Dysarthria" were analyzed to compare the PG and the CG. For the statistical analysis, we used the Statistical Package for the Social Sciences software, version 13.0 for Windows, applying the χ² test for categorical variables and the Mann-Whitney test for numerical variables, adopting a significance level of p<0.05.

RESULTS

The clinical characteristics were similar between groups, which indicated that the groups were clinically compatible. Moreover, the group with PD presented, according to the Hoehn and Yahr scale (11), mild to moderate stages of the disease, with two individuals in stage 1; five in stage 1.5; five in stage 2, and one in stage 2.5 (Table 1). In the evaluation of dysarthria (Table 2), it was observed that the PG showed a significant difference in relation to the CG in the acoustic analysis for the habitual frequency among females (p=0.025) and males (p=0.028), with both showing a higher frequency than the CG. Moreover, the PG showed a higher mean intensity range than the CG (p=0.039). In the auditory-perceptual evaluation, only the prosody parameter showed a significant difference in the comparison between the groups (p=0.012), demonstrating greater compromise of the patterns of emphasis, intonation, and speed of speech in the PG. The remaining parameters studied (respiration, phonation, resonance, and articulation) presented no statistical difference between the groups. In the questionnaire "Living with Dysarthria" (Table 3), patients with PD had, in general, a more negative impact on QoL, as indicated by the total score of the questionnaire (p=0.005). The patients had a worse subjective perception of the impact of speech and voice modifications when compared to the group of neurologically healthy individuals. Among the most significant aspects, we highlight: the aspects related to speech, in which patients noticed that they spoke in a slower and more imprecise way and needed to repeat themselves to be understood by other people; the aspects of language and cognition; the way people see and communicate with patients with PD; what patients believe contributes to their difficulties in speaking and how they think their speech is altered; and the perception of, and possibilities for, changes in the voice.

DISCUSSION

Most evaluation parameters for dysarthria revealed similar results between the studied groups. The main parameters differentiating the groups were the fundamental frequency and the prosody.
By identifying that most of the studied parameters were similar between the group of patients with PD and the CG, the study indicates that such modifications may be more related to the normal aging process of speech and voice (presbyphonia) than to dysarthria due to PD itself. These results are relevant because they suggest that speech and voice modifications may result from aging, especially in patients with PD in the initial clinical stages of the disease (1-3) according to the Hoehn and Yahr scale (11). However, this study showed that the fundamental frequency and the prosody are altered in the group of patients with PD. These two aspects may be considered the main speech and voice parameters differentiating the groups, as well as the first to be affected by PD. Studies correlating the speech and voice of patients with PD with the stage of the Hoehn and Yahr scale (11) report differing results, ranging from the absence of significant differences between patients in the initial and advanced stages to studies reporting deviations in the fundamental frequency of vowels and modifications in the speed of speech and in intensity (14).

Thus, it is important to highlight that speech and voice alterations do not occur only due to PD. During the aging process, the laryngeal structures are also affected, i.e., anatomic and physiological alterations inherent to aging will compromise the structure and function of the larynx and are related to voice alterations, called presbyphonia. Changes in voice quality are relatively common among the elderly. Studies point out that approximately 29% of people aged over 66 years report having different kinds of voice problems (14). Changes due to presbyphonia include alterations in the mucosa of the vocal fold, with reduced amounts of elastic fibers, resulting in less elasticity and a diminished vibration wave in the vocal fold; calcification of the cartilage; muscle atrophy; reduced muscle control transmission, resulting in instability and voice tremor; voice fatigue; hoarseness; and reduced intensity (15-17). Modifications in the auditory-perceptual and acoustic analysis of speech and voice also occur in the process of aging, in which a decrease in harmonics, alteration in pneumophonoarticulatory coordination, reduction in MPT, and nasal resonance are observed (15,16,18).

The alterations of the aging process justify the similarities between groups in the parameters of breathing, phonation, resonance and articulation (auditory-perceptual analysis), habitual intensity, speech rate, frequency range, MPT, and subharmonic presence. However, studies carried out among the elderly identify that the presbyphonic voice tends to show modifications in the habitual frequency, with men having higher-pitched voices and women lower-pitched ones (14). In this study, the PG, for both females and males, presented a higher habitual frequency when compared to the group of neurologically healthy subjects. The habitual frequency reflects the number of vibrations per second of the vocal folds at a given moment. It has a direct relation with the length, tension, rigidity, and mass of the vocal folds (19).
A higher frequency is produced by the elongation of the vocal folds associated with a rapid vibration of the mucosal wave. Due to rigidity, one of the clinical aspects of PD, there may be constant activation of the vocal fold muscles and, consequently, an elongation of the interarytenoid muscles, resulting in the emission of sound at a higher frequency (20). Moreover, studies report that the elevation of the habitual frequency in patients with PD may be due to the "on" period of levodopa-derived medication, which produces a discrete improvement in acoustic measures (19).

In relation to prosody, which consists of the rhythm and speed of speech, articulation, pauses in speech, and intensity variations (4), patients with PD may have a variable speed of speech, being too fast at some moments of the emission and occasionally alternating with slower ones (20). Alterations in the speed of speech associated with PD have been attributed to the presence of abnormal patterns of muscle activity, reduced articulatory range of movement, deficient strength, and tremor of the orofacial structures (20,21). Moreover, other studies have attributed the prosodic alterations not only to muscle rigidity but also to dysautonomia. Dopamine is important in the brain stem for autonomic regulation; considering that this is the neurotransmitter affected in PD, patients may present alterations in both central and peripheral pathways of the autonomic nervous system. In addition, studies report that levodopa, the usual medication of patients with PD, does not have relevant benefits for prosody (22,23).

Thus, the alterations in habitual frequency and in prosody found in the present study proved to be different between the group with PD and the group of neurologically healthy elderly, and such alterations may be the first findings of dysarthria in patients with PD in the initial stages. This study therefore highlights the importance of evaluating these parameters in both groups, perhaps aiding the early differentiation of healthy elderly and patients with PD.

When analyzing prosody through the evaluation of QoL related to dysarthria, it was verified that the impressions related to dysarthria suffered a great impact, especially evidenced in section 1, statements "C" and "D," with repercussions for the communication of patients with PD and, consequently, the need for patients to repeat what they say in order to be understood (statement "1E"). These findings show that, even though most of the studied parameters are similar between the groups, the altered parameters (fundamental frequency and prosody) have a negative impact on the QoL of these patients. However, the fact that patients with PD suffer from the diagnosis of a chronic, neurodegenerative, incurable, and progressive disease cannot be ruled out; this may result in fear and despair, reflecting directly on the QoL of these subjects (9). It may also cause some patients to interpret typical signs of aging as possible alterations caused by the progression of PD.
It is known that QoL is multidimensional and involves several aspects, such as the degree of satisfaction with family, love, social, and environmental life, and that health has an unquestionable influence on how individuals evaluate their QoL (24). Specifically, when it comes to communication, not only are the aspects of speech and voice important, but other aspects may also be associated with the negative impact on the communication-related QoL of patients with PD.

The questions related to language and cognition also revealed a negative impact on QoL for the group with PD, as observed in section 2 of the instrument. Patients with PD may have cognitive alterations already in the initial stages of the disease, even with mild motor symptoms (25). Among the cognitive deficits most emphasized by studies are alterations in memory, attention, executive functions, visuospatial capacity, language, and reduced abstraction capacity. Other studies highlight that depression in PD is pointed out as one of the aspects with the greatest impact on cognition and a great influence on the QoL of these patients (26). During senescence, however, alterations in language and cognition may also occur, such as difficulties in organizing the thematic information of narratives, alterations in the fast retrieval of the lexicon in naming situations, difficulties in accessing conceptual and perceptual information systems (linguistic and non-linguistic), and alterations in working memory, among many others (27). Therefore, it is not possible to state in this study whether the alterations in language and cognition are due to PD or to the aging process; this would require a more detailed investigation of the groups. However, it is evident that such aspects have a negative influence on the QoL of patients with PD.

Moreover, other aspects are involved. With the progression of the disease, alterations in posture and gait contribute to the elevated risk of falls in these patients. This leads patients to reduce their activity level and increase their dependency on other people. From this moment, patients tend to isolate themselves socially, due to their fear of leaving home, spending most of their time in the household environment and alone, thus compromising their social support, their family relationships, and their relationship with society (9), keeping them away from others and causing different reactions in people. Thus, the possible contributors to the changes in communication-related QoL of patients with PD may be attributed to the social domain (such as problems in relationships and with people in their social environment), difficulties in mobility and daily life activities, the increased risk of falls, emotional well-being, cognition, and speech and voice alterations (24).

As addressed in the last two sections of the QoL questionnaire, patients with PD are dissatisfied with the manner and quality of their communication, needing help from other people to maintain their communicative function. Moreover, they show little hope of improvement in communication, voice, and speech since, from the moment they are diagnosed and learn about their chronic neurodegenerative disease, they become frustrated knowing that the medical treatment is only palliative and that there is no treatment available to interrupt the course of the disease or prevent it (28,29).
Thus, this study suggests that the changes in prosody and in the habitual frequency in subjects with PD, together with physical and cognitive problems, social isolation, and the perception of change and dissatisfaction with communication, are determining factors for a negative view of the QoL. Specifically in relation to communication, the prosodic alterations and the habitual frequency proved relevant because, even with the remaining parameters being similar between the groups (Parkinson and control), the PG showed a more negative impact on the QoL focused on communication; that is, the compromising of prosody and habitual frequency has an impact on the QoL of the subjects with PD.

Another significant finding in the voice evaluation was the intensity range, for which the PG had a higher mean. According to previous studies, such a finding could be explained by the fact that the patients are conscious of their difficulties and, because of that, perform more variations in intensity to compensate for the reduced vocal range (30).

These results emphasize the importance of speech therapy in the rehabilitation and the QoL of patients with PD. It is also important to emphasize the value of interdisciplinary therapeutic planning with this population, since it is not only the aspects of speech and voice that interfere with the QoL focused on communication.

This study provides important information on the differences between the speech and voice aspects of patients with PD and the average aging process, and on their impact on the QoL. However, further research is necessary, with broader samples and longitudinal designs, since PD is progressive.

CONCLUSION

The acoustic and auditory-perceptual analysis of individuals with PD showed parameters similar to those present in the average aging process. However, prosody and the habitual frequency may be some of the first changes of dysarthria among patients with PD. Besides, it is evident that issues of communication and speech have a negative effect on the QoL of the subjects with PD, with prosody and habitual frequency being the most related, along with other factors such as language, cognition, and socialization problems.

Table 1. Characterization of the sample according to age, gender, schooling, and time of disease in the group with Parkinson's disease and the control group. Caption: PG = Parkinson's disease group; CG = control group.

Table 2. Comparison of the mean values and standard deviation of the auditory-perceptual and acoustic analysis and the percentage of individuals with the presence of the sub-harmonic between the PG and CG. *Significant p-value; **percentage of individuals with sub-harmonic presence (Mann-Whitney and χ2 tests). Caption: PG = Parkinson's disease group; CG = control group; X = mean; SD = standard deviation; F = female; M = male; Hz = hertz; dB = decibel; syl/sec = syllables per second; MPT = maximum phonation time; sec = seconds.

Table 3. Comparison of the medians of the sections, statements in theme 1, and total score of the questionnaire "Living with Dysarthria" in the Parkinson's and the control group. *Significant p-value (Mann-Whitney test).
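The between-group comparisons summarized in Tables 2 and 3 (Mann-Whitney for continuous acoustic measures, chi-square for the presence of sub-harmonics) can be reproduced with standard statistical libraries. The sketch below is illustrative only: the variable names and example values are hypothetical placeholders and do not correspond to the study data.

```python
# Illustrative sketch of the statistical comparisons used in this kind of study:
# Mann-Whitney U for a continuous acoustic measure (e.g., habitual frequency)
# and a chi-square test for a categorical finding (presence of sub-harmonics).
# All values below are hypothetical placeholders, not the study data.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

# Hypothetical habitual fundamental frequency (Hz) per participant in each group
f0_parkinson = np.array([182.4, 175.9, 190.2, 168.7, 201.3, 188.0])
f0_control   = np.array([160.1, 158.7, 171.4, 149.9, 166.2, 155.8])

u_stat, p_value = mannwhitneyu(f0_parkinson, f0_control, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# Hypothetical counts of participants with / without sub-harmonics per group
contingency = np.array([[5, 1],   # Parkinson's group: present, absent
                        [1, 5]])  # control group: present, absent
chi2, p_chi, dof, _ = chi2_contingency(contingency)
print(f"Chi-square = {chi2:.2f} (dof = {dof}), p = {p_chi:.3f}")
```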
The Cultural Construction of Migrant Women in the Italian Press

This contribution focuses on the portraits of migrant women that emerge in the Italian press. This discursive arena is examined by paying attention to what is taken for granted in the discourses about migrant women and their reproductive rights and behaviours. The analysis is based on a dataset of 634 newspaper articles, published between June 2005 and July 2012, and includes partisan, non-partisan, and religious press. It highlights the culturalization of migrant women, mainly portrayed as victims, and points to the high risk of xenophobic manipulation and political instrumentalization of migrant women's rights.

INTRODUCTION

Italy is a relevant case study for the analysis of the possible instrumentalization and culturalization of migrant women's rights in political discourses, for three main reasons.

Firstly, migration has been a politicized issue at least since the 1990s (Sciortino and Colombo, 2004). The regulation of migration flows is a crucial topic in recent political campaigns, and some political parties build their political identity on anti-immigration agendas (Cousin and Vitale, 2006). The radical populist Northern League, in particular, focuses on undocumented migration as the main issue in its political discourses (Biorcio, 1997 and 2010; Diamanti, 1996). Thus, the defence of migrant women's rights is intertwined with the political discourse on migrants.

Secondly, the public discourse on migration has been widely studied in Italy (Binotto and Martino, 2005; Dal Lago, 1999; Calvanese, 2011; Corte, 2002). However, much of this research deals especially with media racism, paying little attention to its impact (see Sciortino and Colombo, 2004). This wide attention led to the "Charter of Rome", a code of conduct for journalists in the media coverage of migrants, asylum seekers, refugees, and victims of trafficking, signed in 2008 by the National Council of the Journalists' Association and the Italian National Press Federation.

Third, the issue of women's rights (a well-established topic in the Italian political sphere, where the feminist movement has a long history; Lussana, 2012) has recently been revived in the public sphere by a number of events. In 2011 (February 13th), a large demonstration was launched after the sex scandals involving Prime Minister Berlusconi. The demonstration, protesting against the discrimination of women in politics and the labour market, gathered an impressive number of people and was widely covered by the media. About the same time, a documentary and a book on the use of the female body in the media gained a wide echo. As a consequence, the political and public debates over the role of women in society gained new momentum.

In this political climate, the present contribution addresses the media representation of migrant women, focusing specifically on their reproductive and sexual rights. The sources are newspaper articles from the Italian daily and weekly press (2005-2012). The analysis of the media shows a systematic lack of attention to issues of concern to migrant women: feminine migration appears to be almost completely invisible. In the rare cases where they do become visible, migrant women are predominantly represented through cultural lenses, portrayed as 'the others'. As will be shown, this 'otherization' is often related to a political instrumentalization of migrant women's rights, whose defence is turned into an argument against migration.
The next section gives a brief overview of female migration in Italy, while the third section addresses aims and methodology. The fourth section is devoted to the analysis of the results, while the last section discusses the outcomes of the research project.

MIGRANT WOMEN IN ITALY

The category of 'migrant' includes a large variety of situations: unskilled job-seekers as well as professional elites, EU as well as non-EU citizens, single migrants as well as families, seasonal workers as well as long-term residents, undocumented persons, and asylum seekers (see Bonizzoni, 2011). Official databases vary, depending on the classification criteria they adopt (Busso, 2007). According to public common sense, 'migrant' is often a synonym for 'foreigner', thus extending the displacement process far beyond the actual migration experience, which, instead of being an event, is turned into a status and a public identity (Bordignon and Diamanti, 2002).

Until the late 80s, the regulation of immigration was based on temporary measures and occasional regularization programs (a sort of 'amnesty'; see Triandafyllidou, 1999 for a historical reconstruction). Scholars agree that there was often a lack of a long-term institutional perspective: until the framework law of 1998, the laws on migration could be defined as 'emergency laws', without a scheme of policies to support migrants (Caponio, 2005; Kosic and Triandafyllidou, 2005; Ambrosini, 2001). The framework law of 1998, reinforced by the 2002 migration law, introduced a mechanism that connected residence permits with job contracts for economic migrants.

According to official data (ISTAT, 2012), foreign citizens living in Italy represent 7.5% of the population (4,570,317), an increase of 8% over the previous year. The migrant population is sex-balanced; however, huge differences emerge when considering the country of origin. Ukrainians, Moldavians, Poles, Peruvians, and Ecuadorians have higher percentages of women, while Indians, Tunisians, Egyptians, and Bangladeshis are mostly men. Moreover, migrants' presence shows a huge regional variety, with higher rates in northern regions, even though the sex distribution is quite balanced (ISMU data, 1995-2011).
The number of foreigners living in Italy started rising in the 70s, and it increased especially in the late 80s (Triandafyllidou, 1999), when women began to assume the first-migrant role. According to Tognetti Bordogna, women's migration in Italy includes three phases. During the first one, in the 70s, migrant women mostly came from Latin and Central America, the Philippines, Cape Verde, and Eritrea (mainly Catholic countries), and middle-class families employed them as domestic workers. The second phase took place from the 80s onwards: countries of origin differentiated and there was a decrease in job segregation. Nevertheless, women's migration remained an invisible process, for both scholars and the public sphere. In the 90s, migrant women became more visible, for different reasons, such as family reunification (which also involved non-working women) and sex trafficking (Tognetti Bordogna, 2004). Nowadays, as Italian families become increasingly dependent on migrant women's work, they are mostly employed as caregivers in reproductive work. The increasing presence of migrant women, characterized by a high degree of internal difference in terms of migratory experience and legal status (Bonizzoni, 2011), triggered an increasing interest from the media and, slowly, feminine migration began to be visible even in the press, modifying the predominant representation of migration as an essentially male process.

AIMS AND METHODOLOGY: MIGRANT WOMEN IN THE PUBLIC DISCOURSE

This contribution focuses on the representation of migrant women in the media. In the last decades, the public sphere has undergone a process of "mediatization" (Mazzoleni and Schultz, 1999): scholars consider contemporary societies to be "Democracies of the Public" (Manin, 1992; see also Rosanvallon, 2008).

Therefore, the analysis of discourse in the mass media is crucial in order to understand how migrant women are constructed as a political subject in Italy, and how their rights are open to political and/or xenophobic manipulation. A long and well-established tradition of studies has explored the close interconnections between discourse and power (Foucault, 1975) from a number of perspectives, such as political and/or critical discourse analysis (Fairclough, 1989; Laclau and Mouffe, 1985; Van Dijk, 1997), policy frame analysis (Yanow, 1996), and media frame analysis (Gamson, 1992). In this perspective, the analysis of the representation of migrant women in the media includes the analysis of the narratives and frames within which they are located in the public sphere.

The relationships between migration and media have been widely studied. Many scholars have focused on racism, stereotypization and/or criminalisation of migrants, and Islamophobia (see, for instance, Said, 1997; Van Dijk, 1991). Nevertheless, significantly less attention has been paid to the specific representations of migrant women (Campani, 2001; Nash, 2006; Navarro, 2010). This has to do with the almost complete lack of media coverage (Van Dijk, 1991).
In order to explore the Italian media narratives about migrant women, I analysed 634 articles from Italian newspapers, published between June 2005 and July 2012. I chose 2005 as a starting point because the referendum over the regulation of medically assisted procreation, which took place on 12/13 June, had triggered a resurgence of public interest in reproductive rights, which constitute the focus of this project. I selected the articles dealing with migrant women and reproduction, and I coded them by topic (abortion, fertility, maternity, sexuality, other). Then, I used a text-driven coding scheme in order to identify the ways in which migrant women are connected to the topics taken into consideration (see Tables I to IV), and to explore the extent of women's 'victimization' within the Italian press, as well as the role attributed to religion (Table V). Finally, I reconstructed the 'figures' of migrant women as they emerge in the Italian press.

I divided the press into four main categories: Non-partisan Mainstream Newspapers (La Repubblica, Il Corriere della Sera, La Stampa, Il Sole 24 ore, Il Messaggero, l'Espresso); Right-wing Newspapers (Il Giornale, Libero, Il Secolo d'Italia, Il Foglio, La Padania); Left-wing Newspapers (L'Unità, Liberazione, Il Manifesto); and Catholic Newspapers (Famiglia Cristiana, l'Avvenire, Osservatore Romano). The Italian media sphere is inherently intertwined with politics: the national mainstream newspapers are connected to powerful economic groups, while political newspapers are financed by political parties or groups, addressing different political audiences. Specifically, La Repubblica and L'Espresso are connected to the De Benedetti group, a slightly centre-left group also active in energy and healthcare; La Stampa is associated with the Agnelli group (which owns the powerful FIAT empire); RCS mediagroup owns Il Corriere della Sera (and El Mundo); Il Messaggero is owned by the Caltagirone group (centre-right oriented), while Il Sole 24 Ore is the voice of Confindustria (the Italian employers' federation). Famiglia Cristiana is the most read Catholic weekly magazine, as well as one of the three most diffused weekly magazines in Italy; l'Avvenire is considered the daily newspaper of the Italian Bishops' Conference, while l'Osservatore Romano is close to the Vatican hierarchies. As for the openly political newspapers, Il Giornale is owned by Berlusconi's group, La Padania is the Northern League newspaper, while Libero, Il Secolo d'Italia and Il Foglio address the larger right-wing audience. L'Unità was the Communist Party newspaper and now voices the Democratic Party; Liberazione is the voice of the Communist Refoundation party, while Il Manifesto mainly addresses the left-wing audience and grass-roots movements. I decided to sample newspapers aimed at different audiences in order to highlight the possible differences in framing and narrating migrant women's issues.

Migration is a key theme in the Italian public sphere. The attention towards migrants in the Italian press is hardly new. A careful reconstruction shows different phases in the migration discourse of the mainstream press (Sciortino and Colombo, 2004). The first phase (70s) describes two immigrant figures: the elite, rich foreigner and the foreign worker, destined to low-skilled jobs and the focus of a slightly negative narrative about unfair labour market competition. The second phase took place in the 80s, and was related to the dramatic increase in the immigration flux. The discursive field changes: it is far less centred on the labour market and much more preoccupied with the impact of immigration. The "migration issue" undergoes a process of politicization (Balbo and Manconi, 1992; Maneri, 1998; Mansoubi, 1990). Finally, in the 1990s the interest in these issues diminishes, and the term immigrant is acknowledged as part of the common language, indicating a political problem. In media coverage a strict relation is built between immigrants and crime, while references to the labour market virtually disappear (Cotesta and De Angelis, 1999; Dal Lago, 1999; Maneri, 1998; Triandafyllidou, 1999). In the 2000s, the press begins to include references to Islam-related migration. Recent studies focus on racism in the media, including Islamophobia, and underline the wide media coverage of crimes related to migrants. In general, scholars' analyses of media and migration in Italy focus on media racism, by analysing the press as well as television channels with both local and national audiences. In broad terms, one could say that the Italian media sphere is characterized by a negative image of migrants.
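Returning to the coding procedure described at the beginning of this section, the descriptive tabulations reported in Tables I to V can be produced with a few lines of code once each article has been hand-coded. The snippet below is only a sketch: the file name and column labels are hypothetical, and the qualitative coding itself was of course done manually.

```python
# Minimal sketch of tabulating a hand-coded press corpus: share of articles per
# topic within each press category. File name and column names are hypothetical.
import pandas as pd

# Expected columns: 'newspaper', 'press_category' (non-partisan, right-wing,
# left-wing, catholic) and 'topic' (abortion, fertility, maternity, sexuality, other)
articles = pd.read_csv("coded_articles.csv")

# Absolute counts of articles per press category and topic
counts = pd.crosstab(articles["press_category"], articles["topic"])

# Row-normalised percentages (each press category sums to 100%)
percentages = pd.crosstab(articles["press_category"], articles["topic"],
                          normalize="index") * 100

print(counts)
print(percentages.round(1))
```

The same cross-tabulation can be repeated with a 'frame' column (victimization, culturalization, material conditions, and so on) to obtain the frame distributions discussed in the following sections.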
Moreover, migrants in Italy can be described as largely absent from their own narratives, since they rarely have a voice (Sibhatu, 2004).

Very few studies specifically focus on migrant women in the Italian press (Campani, 2001; Censis, 2002). Campani, for instance, underlines their virtual absence compared to men. When they become visible, migrant women are mainly depicted as maids, as reassuring figures (especially in the 70s and 80s). In the late 80s and in the 90s, other figures emerge. First, the image of the migrant prostitute, presented with a high degree of emotiveness, submissive, completely dependent on men, and often labelled as a "slave". This victimized figure has been widely important in criminalizing migrants, and the press completely ignores the cases of women who managed to regain control over their lives. Second, the Islamic woman, arriving through family reunification, is presented as "the other", submissive toward Islamic men and completely embedded in the culture of her country of origin. In the next section, I present the results of a first exploration of the Italian media sphere concerning its representation of migrant women.

MIGRANT WOMEN IN THE ITALIAN PRESS

The research focuses on the media coverage of migrant women between 2005 and 2012. Table I shows the topics related to the images of migrant women, and focuses especially on reproductive rights.

As can be seen, the articles in the sample mostly focus on fertility/abortion: migrant women are mentioned either for their fertility or for abortion rates. A large percentage is also related to sexuality: this category includes topics such as prostitution, gender relationships, and related issues (forced marriages and 'female genital mutilation'). Finally, 15% of articles focus on migrant women and maternity and, more broadly, on the relationship between migrant mothers and their children (in Italy and abroad). The last category (various and mixed) collects articles that mention migrant women in relation to other topics, such as their presence in the labour market. Three types of articles are mapped: interviews with experts (doctors, sociologists, volunteers working with migrants) or with politicians; crime news; and life stories. In what follows, I will present how migrant women are described in relation to the listed topics and the predominant narratives that emerge. Naturally, I pay attention to the differences in media representation according to political orientation.

7 A brief bibliography on media and migration in Italy includes studies on racism in the national media sphere: Balbo and Manconi, 1992; Belluati, Grossi and Viglongo, 1995; Binotto and Martino, 2005; Calvanese, 2011; Campani, 2001; Censis, 2002; Corte, 2002; Cospe, 2003 and 2008; Cotesta and De Angelis, 1999; Dal Lago, 1999; Etnequal, 2003; Guadagnucci, 2010; Lunaria, 2011; Mai, 2002; Maneri, 1998; Mansoubi, 1990; Marletti, 1991; Medici senza frontiere, 2012; Naletto, 2009; Osservatorio di Pavia, 2001; Riccio, 2001; Sciortino and Colombo, 2004; Sibhatu, 2004; Triandafyllidou, 1999; Villa, 2008. For analyses of the local media sphere see Bonerba and Mazzoni, 2013; Iris, 1991; Lippi and Tirotta, 2013; Lodigiani, 1996; Macciò, 2012; Riccio, 1997. On photojournalism, see Gariglio, Pogliano, and Zanini, 2010. In recent years, online media monitoring has also increased. See, for instance, http://www.cronachediordinariorazzismo.org/; and http://www.mmc2000.net/.
FERTILITY AND ABORTION

Most articles mentioning migrant women focus on fertility and abortion (42%). Specifically, the media underline the strict correlation between migration and abortion (45% of entries on fertility/abortion) and, to a lower extent, between migration and fertility (18%).

In all newspapers, migrants' abortion rate is connected to the difficulties that come with migration, especially the economic conditions of migrants (18%): job insecurity and hard working conditions, low incomes, and the absence of family support affect the possibility of having a child. The predominant narrative reads migrant women as characterized by a low socioeconomic status and, for this reason, forced to interrupt pregnancies, to abandon or even murder their children.

A number of articles also highlight migrant women's emotional distress in relation to abortion (8%): the experience of dislocation and loneliness in host countries is, in the words of some articles, overwhelming.

Thirdly, some voices in the media portray migrant women as 'victims of ignorance' (11%). Specifically, they allegedly ignore contraceptive practices. Thus, their high abortion rate is related to their failure to prevent unwanted pregnancies.

In the sample, 56 per cent do not use any kind of contraception for these reasons: 'The pill causes cancer' (Peru, in a relationship, without children). 'I thought that I had used the pill so much that I had become sterile, so I stopped using it' (Peru,

The article above reports the outcomes of a project analyzing the reasons why migrant women abort, and it clearly shows how the press characterizes migrant women: abortion is the ultimate contraception practice, connected to migrant women's alleged ignorance of, or cultural refusal of, birth-control practices.

Moreover, it is said that migrant women often resort to illegal abortion, either because illegal immigrants are afraid of being denounced or because they do not know the legal terms of abortion. Thus, migrant women's agency and individuality disappear (as is well underlined in the international literature on this subject: see Phillips, 2007). On the contrary, their behaviour is put in connection with their countries of origin.

'In fact, the choices of foreign women are strongly affected by cultural elements', says Graziella Sacchetti, gynaecologist of the Italian Society of Medicine of Migration. 'Among the Arabs, for example, male involvement in contraception is unthinkable, and therefore the condom is excluded. The Moroccans have fewer problems, while the Egyptians reject the pill.' [...] Women from Eastern Europe, especially from the former Soviet Union, traditionally use the pill, or the spiral. For Chinese women, also, contraception is not a taboo. However, they typically refer to the doctors of the big local communities, for example using a spiral made in China.
(Ruggiero Corcella, "Aborto clandestino un dramma dell'immigrazione", 24.02.2008)

The midwife is in charge of up to 20 patients. Many foreigners. "I am in charge of many Roma women", says Fusco, "for them, the birth-control pill is unthinkable, they do not like rules." The Roma girls call her 'my sister'. (Cristina Zagaria, "La trincea dei consultori 'Ma non siamo abortifici'", La Repubblica, 23.11.2005)

These citations from non-partisan newspapers clearly show how the connection between migrant women and their countries of origin is used to build a cultural understanding of their behaviours. In other words, migrant women's reproductive behaviours are explained as determined by their cultural belonging, rather than as an individual choice. This is consistent with what occurs in other national contexts (cfr. Lonergan, 2012, for the UK; see also Phillips, 2007). When speaking of migrant women and abortion practices, partisan and religious newspapers also refer to a cultural frame, with some significant differences.

In our case, the vast majority of foreign women master the methods of contraception: they do not use them either because of a lack of responsibility, or because of an 'elementary' forma mentis, so to speak: for example, the belief that if you have just had a baby you will not get pregnant right away. (V.G., Sempre più donne (straniere) nei CAV: "Da soli non possiamo aiutarle tutte", Avvenire, 17.04.2012)

This extract, from the Catholic newspaper l'Avvenire, reports the words of a doctor who volunteers in the Centri per la Vita (Centres for Life, Catholic organizations that try to prevent abortion). In a patronizing tone, he describes migrant women as irresponsible and, in fact, ignorant about reproduction and pregnancy matters.

On the contrary, left-wing newspapers sometimes bring up the 'culturalization' issue for the purpose of criticizing it.

The whole discourse on the 'Health of Migrants' and the protection of the human body is strictly connected to this issue. The possibility of integration is directly proportional to the capacity of self-determination, in particular for the female gender. (01.08.2009)

In this perspective, the recognition and the empowerment of migrant women's agency regarding their reproductive choices are connected with the migration issue.

Also, the strict relation between fertility and migration is pointed out in a variety of articles (18%). According to the Italian press, migrant women show a higher birth rate either because of cultural reasons or because their migratory experience was successful: thus their fertility is specifically connected to a faith in the future.

In commenting on migrant women's fertility rates, right-wing newspapers flag the threat of invasion. The right-wing daily newspaper Libero, for example, frames the difference in birth rates by fomenting fears over the threat of a possible de-Italianization of Italy:

[...] we would have no chance of winning the devastating sperm war. [...] Our people must be free to choose their demographic rates. [...] Immigration is not a corrective to the declining Italian birth rate, but a real replacing process. (Gilberto Oneto, "Le balle sugli immigrati. Alzano la natalità? No, ci invadono", Libero, 09.10.2011)

To sum up, in relation to fertility and abortion issues, migrant women's representations underline their economic difficulties and their cultural embeddedness.

Therefore, migrant women seem, first of all, to be categorized as belonging to a disadvantaged class: thus, they suffer from demanding job conditions and they cannot afford contraception. Second, they are portrayed as ignorant, either for cultural reasons or because of a lack of education. Finally, migrant women are depicted as interrupting pregnancy only because of their difficult situation; under different circumstances, they would have several children. Nevertheless, there are some differences in reporting that need to be taken into account: the argument from 'ignorance', for example, is more present in right-wing and non-partisan newspapers, while Catholic and leftist newspapers pay more attention to the material life conditions of migrant women.
MATERNITY

Maternity, for migrant women, becomes a theme related to fertility: migrant women have the 'merit' of increasing the low Italian fertility rate (13% of entries on maternity). Especially in religious newspapers (and also in non-partisan ones), this merit is connected to the supposedly different vision of maternity held by migrant women (39%).

In the culture of her country, being a mother is the highest expression of being a woman. And her desire pushes her to risk anything to keep her child. With that child, the pride of being a woman, and an African woman, is reborn in her. [...] The life of the African woman is based on three pillars, as three are the firestones on which she cooks: God, the community, and the family. For African women, therefore, motherhood is something essential to femininity; in the end it is what characterizes their womanhood. (Suor Eugenia Bonetti, "Becky, da prostituta a mamma", Famiglia Cristiana, 09.05.2011)

This extract, which tells the story of a prostitute who changed her life through motherhood, shows the type of language related to maternity and migration: the migrant woman is framed as someone who has deep roots and connections with maternity, in implicit contrast with the medicalized and rationalized Western way of life. Thus, motherhood seems to be culturally framed, in strict connection with the higher migrant fertility rates. This description of 'otherness' is consistent with the otherization that Yegenoglu narrates (1998): a reconstruction of Western women's identity mirrored by the construction of an 'other' identity.

Some articles tackle the issue of transnational maternity (16%). Migrant women are forced to leave their children in their country of origin, so they suffer from a 'mutilated' motherhood. Again, especially religious (and, to some extent, non-partisan) newspapers pay attention to this topic, underlining the difficulties of being a mother abroad, taking care of the children of their employers while their own live far away.

There are immigrants who do not see their children and their parents for years. Not only because the journey is too expensive, but because, being illegally present, they cannot afford to leave Italy for fear of not being able to come back. To those who help us take care of our families we often deny the right to their own family. The result is a permanent situation of uncertainty, which results in easy exploitation, but also in blackmail. (Chiara Saraceno, "Quei bisogni ignorati", La Repubblica, 07.07.2009)

Migration processes, when involving families, heavily affect intimate relations (Bonizzoni, 2009). This extract highlights what the literature refers to as 'international care chains': families in rich countries are increasingly dependent on migrant women as care-givers; migrant women, in turn, leave their dependent relatives in someone else's charge (Bonizzoni, 2011: 316).

The Ukrainians have left their children at home, and so have the Romanians, the Moldovans and all the other women, especially those from Eastern Europe, who came to seek their fortune in Italy (there are 416,311 immigrant women who work in Italians' homes): they know about their children by phone or by pictures. According to the latest data from the Romanian Ministry of Family, there are 200,000 children with at least one parent abroad. And it is often the mother who leaves, because in the Ukrainian matriarchal family the woman is the one who bears the greatest responsibilities. [...] The other side of emigration is the destabilization of the family, which especially affects the youth and the elderly, the most vulnerable. (Giovanni Ruggiero, "Mamme e badanti: 'Noi, così lontane dai nostri bambini'", Avvenire, 13.11.2010)

This extract from the Catholic Avvenire frames migrant women as victims and heroines, who sacrifice themselves for the sake of their families. Nevertheless, their choices also carry the heavy weight of destabilizing the family and its traditional roles.
Finally, migrant women are portrayed as mothers of children born in Italy. On the one hand, newspaper articles address the issue of the nationality of those children born to foreign parents. In Italy, second-generation children lose their residence right on turning 18; at that moment, they can apply for citizenship, but they can be expelled in the meanwhile. This paradoxical situation, of children born and raised in Italy becoming foreigners on reaching adult age, is the object of a fierce debate in the Italian political sphere.

On the other hand, more attention is paid to mother-children relationships when crime events defined as 'cultural clashes' occur. Both partisan and non-partisan newspapers widely cover the stories of crimes related to clashes between first and second-generation migrants (cfr. infra).

Maternity is not a central issue in the Italian press concerning migrant women. Nevertheless, it helps to show different elements of culturalization. Again, some differences emerge in the press sub-spheres. Of course, right-wing newspapers do not affirm the merit of migrant women by referring to their high fertility rate. The issue of a different (better) perception of motherhood is largely diffused in Catholic newspapers, but almost absent in the leftist ones. The question of the nationality of second-generation migrants is a highly politicized issue, and is therefore dealt with mainly by partisan newspapers, as the percentages below show.

SEXUALITY

Sexuality and sex relations are quite well accounted for in my sample. A first sub-topic of migrant women's sexuality is related to the prostitution/sex-trafficking theme (11%). Migrant women were often mentioned when dealing with sex trafficking or positive experiences of emancipation from prostitution, especially in the 90s (Campani, 2001; Dal Lago, 1999; Sciortino and Colombo, 2004). Nevertheless, this topic now seems to be less important in the media, and only Catholic newspapers pay some attention to it.

Secondly, there is a quite important focus on the 'cultural clash' concerning second-generation migrant women, which includes their difficulties in simultaneously adhering to their family's traditions and culture and to the pressures of the Italian (Western) culture (40%).

The laceration of the migrant adolescents, divided between the tradition of the family of origin and modernity, the daily relationships with the Italian peers, so different, so free. (Nunzia Vallini, "'A noi giudici diceva: voglio essere bresciana' Mandato di cattura europeo per il cognato", Corriere della Sera, 18.08.2006)

Several newspapers report cases of crimes and murders, women killed or hurt by relatives, allegedly because of cultural or religious reasons. Articles frame these cases as 'cultural crimes', depicting migrant women as the victims of their tradition, which is often connected to Islam. The culture of the 'others' is behind these crimes and is implicitly described as traditionalist and primitive in comparison with Italian culture.
The danger of the next decade is likely to be the 'latent conflict', embodied by the girls who study and integrate but who live in traditionalist families. 'Many parents do not have a high level of education', says Fihan Elbataa, from the Brescia section of the Young Muslims of Italy, 'and then, faced with situations where they see a danger, they do not know how to react. They become severe and impose rules through aggressiveness. We try to encourage them to enter a dialogue, to leave a space of freedom'. (Gianni Santucci, "In Italia 2000 spose bambine ogni anno. E molte sono costrette a rimpatriare", Corriere della Sera, 20.01.2010)

Right-wing and mainstream newspapers pay particular attention to the second generation's double identity, and describe the youth as being divided between opposite loyalties. Catholic newspapers are less attentive to this issue, preferring to focus on cultural differences tout court. In this perspective, as mentioned earlier, mother-children relationships emerge as another issue.

Sister Claudia Biondi, of Caritas Ambrosiana, has seen dozens of girls who go out dressed as daddy wants them to and then change clothes in the elevator, to match their peers. 'There is always a greater attention and protection towards the daughters, especially by fathers and brothers, a protection bordering on possessiveness'. This often leads to a rupture. Do mothers mediate? 'Not always. In the case of a teenage runaway, for example, we had an encounter with a group of women who were divided'. Some linked to the origins, others allied to their daughters. (Alessandra Coppola, "Lo scontro di civiltà in casa e le donne in prima linea", Corriere della Sera, 05.10.2010)

The differences between the Italian culture and the tradition of 'the others' are assumed as a datum by most articles, especially by Catholic newspapers. Most articles list a series of supposedly homogeneous cultural or religious practices that state a difference between the "Italians" (more often the Westerners) and the "others": forced marriages, female circumcision, and restrictions on girls' liberties, for instance.

Female circumcision, for all African religions, becomes such an essential component of girls' life as to make them forget the torture of having their genitals cut. (Carla Massi, "Infibulazione, carcere fino a 12 anni", Il Messaggero, 07.07.2005)

Specifically, what emerges is that migrant women's culture is always framed as traditionalist and detrimental for women. The role of religion is always underlined as negative for migrant women, the specific target being Islam. Right-wing newspapers, especially, seem to consider only the migrants who arrive from Muslim countries, and to focus on Muslim patriarchy.

There is a parallel city in our cities, an underground city that lives in harassment and abuse. But also in solitude and silence. Surrounded by family members, relatives, neighbours who observe, judge and monitor, for Muslim women who do not want to lower their heads and try to rebel, there are not many ways out. (Daniela Santanché, "Storie di donne violate in nome della sharia", Libero, 14.03.2008)

Even religious newspapers underline the differences between the Italian culture, associated with Christianity, and Islam, though in a more subtle way.

The centrality of cultural heritage in foreigners' and their children's lives (especially for those coming from Arabic countries) is confirmed, and there are changes and contradictions generated by the encounter with the new context. In particular, Egyptians and Pakistanis focus on maintaining the role of wives or daughters, who are in charge of preserving and passing on traditional values. There are values which are considered non-negotiable, next to a grey area where there is a greater openness to change. (Giorgio Paolucci, interview with Giovanna Rossi, Sociologist of Migration, "La doppia attrattiva delle seconde generazioni", Avvenire, 17.01.2010)

In this perspective, consistent with many scholars' observations, migrant women are culturally embedded. Moreover, they embody their culture because they are mothers: they are responsible for transmitting a sense of identity to their children (Lonergan, 2012).

There are some differences, though, especially considering left-wing newspapers, where the distinction between Islam and patriarchy is usually underlined.

In her investigation of the Pakistani women in Val Trompia, last Thursday, Manuela Cartosio put a new light on how the condition of young Pakistani immigrants, even in extreme cases such as Hina's, is plagued by poor relationships between mothers and daughters, and by a lack of socialization and communication between women. And certainly this is the first node to address in order to change the situation. But the second step must be the opening of a struggle by men against other men's violent behaviours, within the immigrant communities as well as within Italian society, and transversely between the ones and the others. (Ida Dominijanni, "Transversal Patriarchies" [the title of the newspaper article], Il Manifesto, 22.08.2006)
But patriarchy is not an inevitable, timeless, a-historical event: it is a socio-symbolic structure that engages other social and cultural structures (including Islam) and whose fate depends on the relationships and conflicts between women, between women and men, and between men.

The sexuality of migrant women also emerges as a radical otherness: migrant women appear to be sexualized (see Yegenoglu, 1998). Sexually active migrant women are described either as victims of sex trafficking or as torn between their loyalty to family and tradition, on the one side, and the freedom of modernity, on the other.

VICTIMS AND ISLAM

Newspaper analysis shows, on the whole, a wide victimizing frame. Concerning reproductive rights, migrant women seem to be either driven by culture or at men's mercy. Specifically, a connection between culture, tradition, and religion, especially Islam, emerges. Left-wing newspapers, for instance, wonder about the invisibility of migrant women for left-wing activists, especially considering women coming from Muslim countries. Again, the implicit assumption is the victimization of (Muslim) migrant women. Thus, left-wing newspaper articles underline the supposed tension between women's rights (feminism) and minority rights (multiculturalism), cfr. infra. Nevertheless, there is no criticism towards Islam as a whole (0%). On the contrary, an effort to differentiate religion from patriarchy is obvious (19%), as is the wide denunciation of migrant women's precarious situation (59%). While scholars underline that migrants' life conditions are less of a concern for the Italian press (Sciortino and Colombo, 2004), they prove to be key themes when reporting about migrant women.

Religious newspapers also pay a great deal of attention to migrant women as victims. Even though there is virtually no complaint against Islam (3%), a subtle suggestion of the necessity of teaching and guiding the 'others' unfolds:

A girl who lives and studies here acquires the self-esteem necessary to oppose, for example, arranged marriages in, for example, India... In some societies, male domination is still undisputed [...] But I remember that even in Italy, in order to stop honour killing, we had to convince, and compel, otherwise the moral conscience does not develop. (Lucia Bellaspiga, "Cardia: niente alibi 'culturali', sentenze severe", Avvenire, 29.05.2012)

Migrant women emerge, again, as the vulnerable subjects, who have to be trained according to modern values by the supposedly emancipated Italians. In this perspective, there is a differentiation between 'Western women', driven by self-esteem, autonomy, and moral values, and 'non-Western women', whose behaviours are culturally driven (Phillips, 2007; see also Yegenoglu, 1998).
IT'S CALLED the Chrysalis Project, a title that says it all: to get out of the 'cocoon' the many immigrant women who come to Italy for family reunification, dependent on husbands and without economic autonomy, and therefore more likely than the others (those, that is, who come here to work) to take refuge in the house and to suffer alone from the problems affecting all the immigrants. (Maria Cristina Carratù, "Crisalide, contro la solitudine delle immigrate", La Repubblica, 26.05.2010)

This extract, again, clearly shows the image of migrant women as dependent on men and culture. Some articles from both leftist (22%) and non-partisan newspapers (27%) focus on positive examples of migrant women's stories of emancipation and empowerment, especially in relation to women's roles in the job market and in representative bodies (such as workers' unions). In other words, successful stories are related to a westernization of migrant women.

8 A strategy also apparent in other countries' media; see, for instance, Navarro's study on the media representation of Islamic women in Spain (Navarro, 2010).

Nevertheless, in a number of stories, women's strength is attributed to a supposedly fixed character of femininity, as in the extract below:

Both Italians and 'new Italians' must break into the public sphere, and become the authoritative manufacturers of our civil society. We have one more resource to do this, that is the alphabet of feelings. [...] And it is in everyday life that women build the mixing of cultures and civilizations. Women are the leaders of a chain of coexistence (just think about the caregivers, the teachers, the child carers) and have the ability to create moments of celebration in their neighbourhoods and in

Migrant women, thus, have a heavy weight to carry: they are women, sharing an ontological feminine character with other women; they are mothers, and they embody the culture of their country of origin, mysterious and radically 'other'; they live in Western countries, where they can learn about autonomy and emancipation. Thus, they are often portrayed as possible mediators between supposedly radically different cultures.

It has often been said that immigrant women are a key element of growth, development and integration [...] And the immigrant women who carry with them the richness of their cultures of origin, lovers of life and motherhood, offer us this gift. (31.10.2011)

In particular, it is the daughters who can take on this bridging role, as the following extract suggests:

From the interviews a dual identity comes out: Italians in all respects, but also proudly connected to their roots, the culture of the country of origin, their parents and their religiosity. A pride reinforced by the Arab Spring. (Jolanda Bufalini, "Generazione due: orgoglio musulmano e voglia di votare", l'Unità, 30.09.2011)

Right-wing newspapers focus on migrant women especially in order to stand against migration. First, migrant women in right-wing newspapers seem to originate only from Arab and Muslim-majority countries: there is no room for other women. Second, there is a strict correlation between religion (Islam) and the image of migrant women as victims. In this perspective, the protection of migrant women's rights becomes an argument against migration and Islam. [...]
we are striving for a real integration in our country, which will never exist as long as you do not give up these incivilities; until you get it into your head that women are citizens, and as such they have the same rights and the same dignity as men.

For this reason, the extracts show a high degree of culturalization and a homogenizing attitude towards the culture (and religion) of the others.

In the Muslim culture a woman must obey her husband or father; if he decides that she must wear the veil, according to the Shariah, the wife or daughter cannot resist. The Muslim woman is not allowed to have male friends. The woman cannot contradict her husband and father and cannot leave the home, nor can she work or study without his permission. She is forbidden to have sexual relations outside marriage and to frequent non-Muslim men. These precepts are so deeply ingrained in the Islamic culture that even converts adapt to these rules. (Patrizia Marin, "Dobbiamo difendere le donne musulmane dalla loro cultura", Libero, 08.09.2006)

This extract is titled, meaningfully, "We have to defend Muslim women from their own culture". It suggests that Islam is characterized by a cultural homogeneity, and that Italy must help Muslim women even when they do not want to, because they are portrayed as being victims of their own culture.

The other aspect that must be strongly emphasized is that, following this ideology of multiculturalism, women and the most vulnerable subjects are likely to remain victims of male domination, and of the strongest. (Souad Sbai, "Voglio tolleranza zero", Libero, 17.04.2008)

Souad Sbai is the president of the Italian Association of Women from Morocco. For this reason, her declarations are reported as 'authority arguments' on Islam by right-wing newspapers. Multiculturalism is highly criticized as "not good for women": in other words, the only reference is to the possible perverse effects of multicultural policies on women (the argument of vulnerable minorities within minorities; see Phillips, 2007; Ponzanesi, 2007).

Multiculturalism has brought to the fore forms of family organization different from those of our tradition, and the canons of cultural relativism prevent us from clearly stating that they, from the point of view of individual freedom, are worse. (Gaetano Quagliarello, "Quando i diritti sono incivili", Il Giornale, 30.01.2007)

As the following extract shows, critics of multiculturalism resist the very idea of the equal treatment of minorities and different cultures.

Lesbian or infibulated. Congratulations, women! The winning models in the autumn-winter trends of the leftist Italy, which has the ass-face of Romano Prodi and the Islamic-Zapaterist head of post-communism, oscillate between the two opposite extremes. At the expense of the common woman, the normal female, the girlfriend, the wife, the mother, home-and-work. And this is the last glorious stage of women's emancipation. (Marcello Veneziani, "Lesbica o col burqa, così oggi si dice donna", Libero, 19.11.2006)

Right-wing MP Santanchè addresses the connection between feminism and multiculturalism in order to compete on the traditionally leftist field of women's rights and to present the political right as the new feminism.

I think the feminist issue is crucial to the process of integration. It is impossible to think of living together with Muslims in our country without reaffirming women's dignity. We cannot let them feel abandoned in Italy too. That's why I wrote my law proposal to remove the veil. And I must say that there has been only a deafening silence around me. Where are the feminists? And where were they when Hina was buried, the Muslim girl killed by her father because she had rebelled against the Islamic culture? (Daniela Schiazzano, "Santanchè: l'integrazione deve partire dalla questione femminile", Il Messaggero, 22.03.2007)

In this extract, MP Santanchè refers to the 'foulard issue', which had a wide echo in the Italian press, even though almost no debate about the veil took place in Italy.
To sum up, migrant women are portrayed as "the others". They are victims, either of their culture and religion or of the pre-modern traditions of patriarchy that characterize their countries of origin. The degree of the patronizing attitude towards migrant women varies slightly: their empowerment is referred to in different ways within the press sub-spheres. Leftist newspapers focus on migrant women's individual agency and education, as well as on their role within the job market; right-wing newspapers, on the other hand, affirm that they have to be helped to emancipate themselves, even against their will. Thus, positive stories are the tales of migrant (and especially second-generation) women who fight their families and challenge their traditions. Religious newspapers instead tell stories of migrant women's empowerment that focus on women's role within families, underlining their efforts to care for family ties even under difficult circumstances.

ITALIAN PRESS

This contribution analysed the representation of migrant women in the Italian press, with a special focus on reproductive rights.

First, it is worth noticing that, consistently with the literature on media and migrants, migrant women are rarely a topic of debate per se (see Campani, 2001; Navarro, 2010). They become an object (and rarely a subject) of discourse. Second, five figures of migrant women emerge. The first three figures have been present in the Italian media sphere since the 80s and the 90s (see Campani, 2001): the maid, a reassuring image of migrant women working in Italian homes; the prostitute, dependent and subordinate; and the Muslim woman, dependent and subordinate as well, and embedded in her culture. Also, the figure of the emancipated (and westernized) migrant woman comes to the fore. Finally, the migrant mother, mainly ignorant and poor, subordinate either to her life circumstances, her family, or her culture, plays a prominent role. On the whole, migrant women are widely represented as being victims of their own cultures and traditions. Within a patronizing frame, migrant women are mainly described as ignorant, poorly educated, culturally driven, and subjected to patriarchy; this is particularly the case of women coming from Arab or Muslim-majority countries. Even when reporting "positive" examples of migrant women's empowerment, there is an implicit contrast with their initial disadvantages.

Third, the representation of migrant women has two main characteristics: it expresses a radical 'otherness' and a process of 'culturalization'. Migrant women represent a radical otherness, internally homogeneous. This otherness constructs the Western sameness in a dialectical perspective, since the practice of identity construction 'constitutes not only the objects but also the subjects' (Yegenoglu, 1998: 22). At the same time, an essentialized femininity is described as a common character beyond women's differences.
The correlation between tumor size, lymph node status, distant metastases and mortality in rectal cancer patients without neoadjuvant therapy

Tumor size has an effect on decision making for the treatment of rectal cancer. Transanal local excision can be selected to remove rectal cancers with favorable histopathological features. It is generally recognized that the risk of lymph node involvement and distant metastases increases as the tumor enlarges. However, the majority of studies classified patients into two groups using a single concrete value as a cutoff point. This coarse classification was not sufficient to reveal the correlation between tumor size and lymph node status or distant metastases across the full range of sizes examined. Between 1988 and 2015, a total of 77,746 patients were diagnosed with first primary rectal cancer who had not received neoadjuvant therapy. These subjects were identified using the Surveillance, Epidemiology and End Results (SEER) database. The association between tumor size, lymph node status, distant metastases and cancer-specific mortality was investigated. Tumor size was examined as a continuous (1-30 mm) and categorical variable (11 size groups; 10-mm intervals). A non-linear correlation between increasing tumor size and the prevalence of lymph node involvement was observed, while a near-positive correlation between tumor size and distant metastases was present. In addition, the 5-year and 10-year rates of rectal cancer-specific mortality increased as the tumor enlarged. For small tumors (under 30 mm), a positive correlation was noted between tumor size and lymph node involvement. The clinical value of tumor size should be re-evaluated using a finer classification.

Introduction

Rectal cancer (RC) was the 8th most frequently diagnosed cancer and the 10th leading cause of cancer-related deaths worldwide in 2018 [1]. Lymph node involvement and distant metastases indicate a poor prognosis in RC. It is believed that the risk of developing lymph node or distant metastases depends on intrinsic biological characteristics and on tumor size, since larger tumors can more readily metastasize [2,3]. Based on this theory, clinical guidelines recommend that transanal local excision can be adopted to remove lesions with favorable histopathological features, such as <3 cm size, T1, grade I or II, absence of lymphatic or venous invasion, or negative margins [4,5]. It is reasonable to assume that a <3 cm tumor with favorable histopathological features will be associated with a low risk of lymph node involvement and distant metastases.

The tumor-node-metastasis (TNM) staging system is widely applied for prognostic prediction of colorectal cancer (CRC). However, tumor size has not been included in the TNM staging system, and previous studies did not reach a consensus regarding the prognostic value of tumor size in CRC [6-10]. Notably, these studies classified patients into two groups using a single concrete value (3 cm, 4 cm, or 5 cm) as a cutoff point. This coarse classification obscured the detailed effects of tumor size on lymph node status and distant metastases across the full range of sizes. In the present study, we aimed to reveal the associations between tumor size and the risk of metastases (both lymph node and distant) in rectal cancer patients who did not receive neoadjuvant therapy, across the size range of 1-100 mm, using the Surveillance, Epidemiology and End Results (SEER) database.
In addition, the association between tumor size and rectal cancer-specific mortality was evaluated.

Material and methods

A total of 77,746 patients diagnosed with first primary rectal cancer who had not received neoadjuvant therapy were identified using the SEER database. In general, the inclusion criteria were as follows: RC was the sole type of primary cancer; patients had a definite tumor size; no neoadjuvant radiotherapy was administered; surgery was performed; and detailed information regarding cancer-specific survival (CSS) and survival duration was available. The following variables were included: age, gender, marital status, race, year of diagnosis, tumor size, grade, histology codes, T stage, N stage, M stage and survival information. The patients were classified into 11 categories according to primary tumor size (10-mm intervals, 1-100 mm, and >100 mm). In addition, tumor size was evaluated as a continuous variable (1-30 mm). CSS was defined as the time from diagnosis to death resulting from RC. The Kaplan-Meier method was used to estimate the actual rates of rectal cancer-specific mortality at 5 and 10 years. All statistical analyses were performed with SPSS 25.0 and the data were presented using GraphPad Prism 8.

Results

The baseline characteristics of the RC patients are summarized in Table 1. A total of 57,356 (73.8%) patients exhibited tumors smaller than 50 mm, whereas 19,415 (25.0%) patients exhibited tumors between 50 and 100 mm and 975 (1.2%) patients exhibited tumors larger than 100 mm. A total of 19,543 (25.1%) patients had lymph node involvement and 46,580 (58.6%) patients were lymph node-negative. A total of 9,315 (12.0%) patients were classified as stage IV disease and 67,755 (87.1%) patients exhibited no evidence of distant metastases. By the end of the follow-up period, 25,813 (33.2%) patients had died of RC.

The correlation between tumor size (in 10-mm intervals) and the probability of lymph node involvement in patients with definite lymph node status is presented in Figure 1A. A non-linear correlation between increasing tumor size and the prevalence of lymph node involvement was observed. The proportion of lymph node involvement increased stepwise with tumor size between group 1 (1-10 mm) and group 5 (41-50 mm), while the increasing trend flattened between group 6 (51-60 mm) and group 8 (71-80 mm). It is interesting to note that the proportion of lymph node involvement decreased stepwise with increasing tumor size between group 8 (71-80 mm) and group 11 (>100 mm). Subsequently, the association between tumor size (in 10-mm intervals) and the probability of distant metastases was investigated in patients with a definite disease stage. As shown in Figure 1B, a near-positive correlation between tumor size and distant metastases was found. The proportion of distant metastases increased continuously from 1.1% for tumors of 1-10 mm to 26.0% for tumors of 91-100 mm. Furthermore, the absolute growth in the prevalence of lymph node involvement and distant metastases as the tumor enlarged (per 20 mm) was also plotted (Figure 1C, 1D). To highlight the variation in the association between tumor size and lymph node status, and between tumor size and distant metastases, in patients with tumors smaller than 30 mm, tumor size was examined as a continuous variable (1-30 mm).
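The grouping described above (10-mm intervals up to 100 mm, plus a >100 mm category) and the per-group proportion of node-positive cases can be reproduced from a SEER case listing along the following lines. This is only a sketch under stated assumptions: the file name and column names are hypothetical placeholders for however the SEER export is organised, and the original analysis was performed in SPSS.

```python
# Sketch of binning tumor size into the 11 categories used in this study and
# computing the proportion of lymph node-positive cases per size group.
# Column names ('tumor_size_mm', 'node_positive') are hypothetical.
import pandas as pd

cases = pd.read_csv("seer_rectal_cases.csv")  # hypothetical SEER export

# 10-mm intervals from 1-10 mm up to 91-100 mm, plus a final >100 mm group
edges = list(range(0, 101, 10)) + [float("inf")]
labels = [f"{lo + 1}-{lo + 10} mm" for lo in range(0, 100, 10)] + [">100 mm"]
cases["size_group"] = pd.cut(cases["tumor_size_mm"], bins=edges, labels=labels)

# Proportion of node-positive cases (%) within each size group
node_rate = (cases.groupby("size_group", observed=True)["node_positive"]
                  .mean() * 100)
print(node_rate.round(1))
```

The same groupby can be repeated with a distant-metastasis indicator (and restricted to tumors of 1-30 mm at 1-mm resolution) to reproduce the patterns summarized in Figures 1 and 2.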
A near-positive correlation was noted between tumor size and lymph node involvement ( Figure 2A). However, a small correlation between tumor size and distant metastases was noted ( Figure 2B). The overall trend was increasing ( Figure 2B). Subsequently, the association between primary tumor size, the prevalence of lymph node involvement and distant metastases was examined for rectal patients stratified according to histological type, differentiation and T stage. For patients with adenocarcinoma, the association between tumor size (10-mm intervals or 1-mm intervals between 1-30 mm) and the prevalence of lymph node metastases was similar for the entire cohort. However, a positive correlation between tumor size (10-mm intervals) and distant metastases was more profound compared with that noted in the entire cohort ( Figure 3A, 3B). As tumor size was examined as a continuous variable (1-30 mm), the overall trend was irregular ( Figure 3C, 3D). A non-linear correlation between increasing tumor size, the prevalence of lymph node involvement and distant metastases was observed for patients with mucinous adenocarcinoma (Figure 3C, 3D). The proportion of lymph node involvement was increased as the tumor size was enlarged between group 1 (1-10 mm) and group 8 (71-80 mm), whereas the proportion was decreased between group 8 (71-80 mm) and group 10 (91-100 mm). Due to the limited sample size of patients with mucinous adenocarcinoma, tumor size was not examined as a continuous variable for this subgroup. The proportion of lymph node involvement was increased stepwise as the tumor size was increased between group 1 (1-10 mm) and group 8 (71-80 mm) for patients with well differentiated tumors, while this proportion was decreased sharply between group 8 (71-80 mm) and group 11 (>100 mm). The trend of lymph node involvement was similar to that noted for the entire cohort for patients with moderate differentiation. The proportion of lymph node involvement was increased stepwise as the tumor size was enlarged for patients with poor differentiation between group 1 (1-10 mm) and group 11 (>100 mm). However, between group 5 (41-50 mm) and group 11 (>100 mm), the trend tended to be horizontal. The association between tumor size and distant metastases was also examined and only patients with moderate differentiation presented a significantly positive correlation. Between group 5 (41-50 mm) and group 11 (>100 mm), the prevalence of distant metastases was fluctuated in patients with well or poor differentiation ( Figure 4). Generally, tumor size represented horizontal growth index, while T stage reflected vertical infiltration index. Subsequently, we evaluated the association between tumor size and lymph node status as well as that between tumor size and distant metastases according to the different T stage of the tumors. A minimal correlation was evident between tumor size and lymph node involvement or distant metastases. However, the overall trend was indicative of an association between T1, T2 and lymph node involvement and between T1, T2, T3 and distant metastases ( Figure 5). The increase noted in the association trend was relative to the higher tumor stage. Finally, the correlation between tumor size and risk of rectal cancer-specific mortality was investigated. The 5-year mortality increased stepwise from 7.3% for tumors that were 1-10 mm in size to 53.6% for tumors that were >100 mm in size. 
The 10-year mortality increased stepwise from 12.0% for tumors that were 1-10 mm in size to 61.1% for tumors that were >100 mm in size ( Figure 6). Discussion The tumor, lymph node, metastasis (TNM) staging system has been established as the most important prognostic factor in rectal cancer. In addition, tumor deposits, serum CEA levels, tumor regression score, circumferential resection margins, lymph vascular invasion, perineural invasion, microsatellite instability and RAS and BRAF mutations should also be considered in the prognostic prediction and treatment decision making [11]. However, tumor size was excluded from the prognostic factors. In general, the T stage represented vertical tumor penetration across the bowel wall, whereas the tumor size reflected the horizontal growth index. Evidence regarding the prognostic value of the tumor size is limited and fails to reach a definitive conclusion. Several studies have shown that tumor size did not present any prognostic impact on colorectal cancer patients [12][13][14]. However, the results have been contradictory over the last years. Tayyab et al. [15] demonstrated a direct association between tumor volume and overall survival in rectal cancer. Kornprat et al. established tumor size as an independent prognostic parameter for patients with colorectal cancer. The authors of this study found that the optimal cut-off values were dependent on different parts of the large bowel [10]. Brunner et al. [16] demonstrated that tumor size was a predictor for regional lymph node metastasis in T1 rectal cancer using the SEER database. It is interesting to note that Takahashi et al. highlighted that tumor size was associated with tumor recurrence in colon cancer instead of rectal cancer [17]. Our previous study demonstrated that the mortality risk of node positivity increased as tumor enlarged until a threshold tumor size (tumor size of 7-8 cm) was reached in colon cancer. The value of tumor size in rectal cancer should not been neglected. In the present study, we examined the correlation between tumor size, lymph node status, distant metastases and mortality in a cohort of 77,746 rectal cancer patients without neoadjuvant therapy. Tumor size was examined as a continuous (1-30 mm) and categorical variable (11 size groups; 10-mm intervals) instead of previous coarse classification. A linear correlation was found between tumor size and the risk of lymph node involvement for tumors of group 1 (1-10 mm) and group 5 (41-50 mm). For relatively large tumors (higher than 50 mm), a notable departure was observed. The probability of a lesion being lymph node positive was 42.1% and reached the highest level for a tumor size of 71-80 mm. When the association between tumor size and lymph node involvement was examined for patients with small tumors (less than 30 mm) stratified in 1-mm intervals, an upward trend was noted as tumor size increased from 1 to 29 mm (from 0.5 to 25.8%). The indications of local excision for rectal cancer should be applied with caution. Chen et al. identified a tumor size of <5 cm as a strong negative prognostic factor for local recurrence in rectal adenocarcinoma [9]. However, the authors of that study failed to identify tumor size as an independent predictor of lymph node involvement. In the present study, a near-positive correlation between tumor size and distant metastases was found. However, the overall trend was increasing. 
The prevalence of distant metastases at diagnosis increased gradually from 1.1% for tumors 1-10 mm in size to 25.3% for tumors larger than 100 mm in size. This phenomenon may have occurred due to the small sample size in each subgroup. It has also been shown that distant metastasis can be an early event during tumor progression, and tumors acquire a higher potential to metastasize as they grow [18]. Similarly, the 5-year mortality increased stepwise from 7.3% for tumors 1-10 mm in size to 53.6% for tumors >100 mm in size. The majority of the previous studies grouped all tumors into two sets using a concrete value as a cutoff point. Refining the tumor size spectrum allows the association between tumor size and lymph node status, distant metastases and mortality to be understood more fully. In the present study, the data indicated that the probability of group 6 (tumor size of 51-60 mm in diameter at diagnosis) being node-positive was equal to the probability of group 10 (tumor size of 91-100 mm in diameter at diagnosis) being node-positive. These data suggested that the probability of developing additional lymph node metastases was extremely low during the period in which a tumor grew from 51-60 mm to 91-100 mm. In contrast to these observations, when a lesion had grown from 1-10 mm to 11-20 mm, the probability of developing new lymph node metastases was 10.5%. Subsequently, it was found that the proportion of distant metastases and the 5-year mortality increased stepwise as the tumor size increased between groups 6 and 10. We speculated that the increase in the 5-year mortality resulted from the increasing risk of distant metastases rather than lymph node metastases between groups 6 and 10. Subgroup analysis revealed a positive correlation between tumor size and distant metastases, which appeared stronger in adenocarcinoma cases than in the entire cohort, while a non-linear correlation between increased tumor size, the prevalence of lymph node involvement and distant metastases was observed in mucinous adenocarcinoma cases. One explanation for the lack of a tumor size effect on lymph node and distant metastases in mucinous adenocarcinoma is its high malignant potential and heterogeneity. In summary, we observed a non-linear correlation between tumor size and the prevalence of lymph node involvement and a near-positive correlation between tumor size and distant metastases using a large sample of rectal cancer patients. The shapes of the curves presented slight variation for the different subgroups. The clinical value of tumor size should be reevaluated by exact classification.
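For readers who want to reproduce the kind of survival figures quoted above (for example, the 5-year cancer-specific mortality of 7.3% for 1-10 mm tumors), a minimal Kaplan-Meier sketch is given below. It assumes the lifelines package and hypothetical column names for the SEER-derived fields, and it treats deaths from other causes as censored, which is the usual 1 minus KM simplification and ignores competing risks.

```python
# Kaplan-Meier estimate of 5- and 10-year cancer-specific mortality for one
# tumor-size group. Assumes the `lifelines` package; column names are
# hypothetical placeholders, not actual SEER field names.
import pandas as pd
from lifelines import KaplanMeierFitter

def cancer_specific_mortality(df: pd.DataFrame, years: int) -> float:
    kmf = KaplanMeierFitter()
    # durations in months; event = death attributed to rectal cancer
    kmf.fit(durations=df["survival_months"], event_observed=df["died_of_rc"])
    survival = float(kmf.predict(12 * years))
    return 1.0 - survival   # mortality = 1 - estimated survival probability

# Example usage on a single size group (toy data):
group = pd.DataFrame({"survival_months": [14, 60, 72, 130, 9],
                      "died_of_rc":      [1,  0,  1,  0,   1]})
print(cancer_specific_mortality(group, years=5))
```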
2021-02-20T05:03:52.735Z
2021-01-15T00:00:00.000
{ "year": 2021, "sha1": "5705ca015c7bbb7b6f84efeab3b8925d8bee1127", "oa_license": "CCBY", "oa_url": "https://www.jcancer.org/v12p1616.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5705ca015c7bbb7b6f84efeab3b8925d8bee1127", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4437254
pes2o/s2orc
v3-fos-license
Effectiveness and Safety of a Novel Approach for Management of Patients with Potential Difficult Mask Ventilation and Tracheal Intubation: A Multi-center Randomized Trial Background: Patients with potential difficult mask ventilation (DV) and difficult intubation (DI) are often managed with awake intubation, which can be stressful for patients and anesthesiologists. This prospective randomized study evaluated a new approach, fast difficult airway evaluation (FDAE). We hypothesized that the FDAE approach would reduce the need for awake intubation. Methods: After obtaining informed consent, 302 patients with potential DV/DI undergoing elective surgeries were randomly assigned to the FDAE group (Group E) and the control group (Group C). In Group E, patients were gradually sedated, and adequacy of manual mask ventilation during spontaneous breathing was assessed at various sedation levels. Awake intubation was applied in those with inadequate mask ventilation. In Group C, DI was evaluated under local anesthesia. However, the care team could intubate under general anesthesia if the vocal cords were visible. The primary outcome was the rate of awake intubations in both groups and the induction efficiency assessed by the induction time. The secondary outcome was the incidence of serious complications. Results: The rate of awake intubation was significantly lower in Group E than that in Group C (5.81% vs. 36.05%, χ2 = 42.3, P < 0.001). The induction time was much shorter in Group E than in Group C (11.85 ± 4.82 min vs. 18.71 ± 7.85 min, t = 5.39, P < 0.001). There was no significant difference in the incidence of intubation related complications between the two groups. Patients in Group E had a much lower incidence of recall (9.68% vs. 44.90%, χ2 = 47.68, P < 0.001) of the induction process and higher satisfaction levels than patients in Group C (t = 15.36, P < 0.001). Conclusions: The FDAE significantly reduces the need for awake intubation and improves the efficiency of the intubation process without comprising safety in patients with potential difficult mask ventilation and DI. Trial Registration: No. ChiCTR-TRC-11001418; http://www.gctr.org/cn/proj/show.aspx?proj=1562. predicted to be difficult to ventilate, 78% were easy to ventilate. Patients with anticipated difficult mask ventilation and/or difficult tracheal intubation often undergo "awake" tracheal intubation. [5,6] Awake intubation can be technically challenging for the anesthesiologist and psychologically stressful for patients. [7][8][9] Furthermore, awake intubation is not risk-free, as patients might experience adverse events such as obstruction or regurgitation during the process of securing the airway. Awake intubation usually requires fiberoptic bronchoscopy (FOB) guidance which is costly, not always readily available and requires kinesthetic skill and training to use. [10] Better prediction of patients who are difficult to mask ventilate or intubate would reduce the need for awake intubation. Inhalation induction with the maintenance of spontaneous ventilation has been used as an alternative to awake intubation. [11][12][13][14][15] However, airway obstruction might occur with this technique after induction of general anesthesia. We developed a novel technique in which patients received a gradual induction of inhaled anesthesia with sevoflurane with simultaneous testing of airway patency through mask ventilation during the induction process. 
If at any stage mask ventilation became difficult, patients were awoken, and an awake tracheal intubation was performed. In this multicenter randomized trial, we compared our novel approach with the traditional approach of tracheal intubation in patients with the suspected difficult airway. We also assessed the safety of our technique. We hypothesized that our novel approach would reduce the need for awake tracheal intubation without compromising patient safety. Exclusion criteria were as follows: (1) patients with severe airway obstruction who require awake intubation, for example, patients with a luminal transverse area of the trachea less than 1/3 its original size due to an intratracheal neoplasm, or external compression from tumor or mass around trachea; (2) patients unable to breathe in the supine position; (3) patients with complicated respiratory diseases including pneumonia, asthma, chronic bronchitis, pulmonary emphysema; (4) patients with a high risk of aspiration, including intestinal obstruction, full stomach, esophageal reflux; (5) patients with history or family history of malignant hyperthermia; (6) pregnancy. Eligible patients with high risk of difficult ventilation and DI were screened and enrolled. The demographic information and detailed airway assessments of enrolled patients were documented preoperatively. Eligible patients were randomly assigned into two groups: the FDAE group (Group E, n = 155) and the control group (Group C, n = 147). The randomization was generated using SPSS statistical software, sub-stratified by center. The group assignment of each patient was concealed in a nontransparent envelope and opened according to the patient enrolled ID. Routine monitoring was established including electrocardiography, blood pressure, pulse oximetry (SpO 2 ), and capnography. For each enrolled patient, atropine was administered intravenously to keep the airway dry, and ephedrine was used to prepare the patient's nostrils in case there was a need for a nasal intubation. In Group C, patients received awake evaluation as per routine practice of the four medical centers. Vocal cord exposure was evaluated under topical local anesthesia initially in the awake state with light sedation. In brief, the airway was topically anesthetized with 2% lignocaine or 1% tetracaine. Midazolam (0.5 mg incrementally to a maximum of 2.0 mg) and fentanyl (20 µg incrementally to a maximum of 100 µg) were titrated based on the assessment of the care team [ Figure 1]. In Group E, patients underwent FDAE process. After pre-oxygenation with 100% oxygen for 3 min, they were gradually sedated with sevoflurane inhalation while maintaining spontaneous breathing. The fresh gas flow was set at 6 L/min with initial inhaled sevoflurane of 1% and then raised at a rate about 1% in 2 min intervals up to 3%. Sedation levels were estimated by using Ramsay scoring and the Bispectral Index. [17] The degree of airway obstruction was assessed using the airway obstruction score (AOS), [18,19] a test of the adequacy of positive pressure ventilation through facemask (difficult ventilation test) was done between spontaneous breaths and measured using Han's Mask Ventilation Score. [20,21] If AOS <2 or the Han score <3, sevoflurane inhalation was kept at 3% until the loss of consciousness. When patients were asleep but AOS >2 or Han's score ≥3, an oropharyngeal airway was placed immediately. 
If the placement of the oropharyngeal airway did not improve the adequacy of positive pressure ventilation, sevoflurane was terminated and washed out using high fresh gas flow rates within a couple of min. These patients would then be awoken and intubated awake. After evaluating ventilation in Group E, the attending anesthesiologist in charge of the case assessed (recorded by the investigator) the direct laryngoscopy (DL) grade using the Cormack and Lehane (C&L) classification. [22] If the C&L grade was ≥3, video-assisted laryngoscopy with the Airtraq was used. For patients who had pathology causing airway obstruction superior to the vocal cords, muscle relaxants were administered before intubation if the C&L Grade was I or II with DL or video laryngoscopy with the Airtraq. However, in Group C, the care team decided (as their routine practice) whether to proceed with awake intubation when the C&L Grade was I or II. For those who had pathology inferior to vocal cords, the intubation was performed while maintaining spontaneous ventilation without using muscle relaxants. If the C&L grade was ≥III, intubation was conducted with the FOB as an adjunct [ Figure 2]. The primary outcomes of this study were the rate of awake intubation in the two groups and the induction efficiency assessed by the induction time. The most important secondary outcome was the incidence of serious complications associated with the intubation process including cardiac arrests, "Cannot intubate, cannot ventilate" (defined as both mask ventilation and intubation were impossible followed by severe hypoxia), laryngospasm and pulmonary edema (defined according to standard definition), the need for an invasive surgical airway; The secondary outcomes included: (1) the rate of successful intubation by Airtraq or DL; (2) satisfaction of the patients. A case report form was used to collect each participant's information. Apart from demographic data and preoperative airway assessments, the following were recorded: the induction time, namely, the period from starting the local anesthetic spray in Group C or the initiation of sevoflurane inhalation in Group E to the establishment of endotracheal intubation confirmed by end-tidal carbon dioxide. Patients were followed up until postoperative day 2. Patients' satisfaction score (0 = dissatisfaction; 10 = very satisfactory) and recall for the intubation process were documented. [23] Postoperative adverse outcomes such as postoperative throat discomfort and hoarseness were documented in the follow-up visits. Statistical analysis All comparisons were two-sided, and a value of P < 0.05 was considered statistically significant. Data were presented as the mean ± standard deviation (SD) or median with 95% confidence interval for non-Gaussian variables. Statistical analysis was performed with the SPSS 17.0 software (version 17.0, SPSS Inc., Chicago, IL, USA). Nonparametric data from the two groups were compared using rank sum tests. Comparison of percentages was performed using either a Chi-squared or Fisher's exact test. Parametric data between the two groups were compared using the Student's t-test. The sample size of this study was calculated based on our historic data and a pilot study. We assumed that application of FDAE would reduce the rate of awake intubation by 20%, with 5% being considered significant (one-side test) 242 patients would need to be enrolled. We estimated a dropout rate of enrolled patients of 20% giving a total number of patients of 291. 
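To make the branching logic of the FDAE evaluation described above easier to follow, the sketch below condenses the stated thresholds (AOS, Han's mask ventilation score, C&L grade) into two small functions. This is an illustrative simplification for readers, not a clinical decision tool and not the authors' software; the return labels and parameter names are invented for this sketch, and borderline cases not spelled out in the text (for example AOS exactly 2) are not handled.

```python
# Simplified sketch of the FDAE decision flow described in the Methods.
# Thresholds follow the text; labels are invented for illustration only.

def fdae_next_step(aos: int, han_score: int, asleep: bool,
                   oral_airway_placed: bool = False) -> str:
    if aos < 2 or han_score < 3:
        # Airway patent and mask ventilation adequate: keep sevoflurane at 3%
        # until loss of consciousness, then assess the laryngoscopy grade.
        return "continue induction; assess C&L grade after loss of consciousness"
    if asleep and not oral_airway_placed:
        return "place oropharyngeal airway and re-test mask ventilation"
    # Oral airway already placed but ventilation still inadequate:
    return "wash out sevoflurane, awaken patient, proceed to awake intubation"

def intubation_route(cl_grade: int, obstruction_above_cords: bool) -> str:
    if cl_grade <= 2:
        return ("direct laryngoscopy; muscle relaxant may be given"
                if obstruction_above_cords
                else "intubate while maintaining spontaneous ventilation")
    # C&L grade III-IV: Airtraq video laryngoscopy, FOB as an adjunct if needed
    return "Airtraq video laryngoscopy, FOB-assisted if required"
```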
Results Three hundred and fifteen patients underwent eligibility screening; 13 of them were excluded based on our exclusion criteria [Figure 3], and 302 patients were randomized, 155 patients in Group E and 147 patients in Group C. There were no differences between the groups in age, height, weight, BMI, gender, and ASA classification (all P > 0.05; Table 1). The majority (83.77%) of patients had OSA, with AHI 61.19 ± 19.60 in Group E and AHI 57.14 ± 19.80 in Group C (t = 1.63, P = 0.11; Figure 3). The Mallampati classification was ≥III in 81.29% and 87.07% of patients in Group E and Group C, respectively (Z = 1.43, P = 0.15; Table 1). In Group E, 94.19% of patients did not show signs of obvious airway obstruction and had AOS <2 or Han score <3 after the loss of consciousness. They were all intubated after general anesthesia induction. Only 5.81% of patients developed obvious airway obstruction after the loss of consciousness (AOS >2) with or without an oral airway. They failed to pass the difficult manual mask ventilation test with Han's score ≥3. Since they were under spontaneous respiration, the SpO2 was maintained over 93%. They were awoken by discontinuing the inhaled sevoflurane. These patients were later intubated awake. In Group C, awake intubation was performed in 36.05% of patients, which was much higher than the 5.81% in Group E (χ2 = 42.30, P < 0.001; Table 2). The anesthesia induction time was 11.85 ± 4.82 min in Group E versus 18.71 ± 7.85 min in Group C (t = 5.39, P < 0.001; Table 2). There were no significant differences in the incidence of serious complications associated with intubation between the two groups, 1/150 in Group E versus 0/147 in Group C. No cardiac arrest, "cannot ventilate, cannot intubate" (CVCI) event, pulmonary edema, or emergency invasive surgical airway occurred in either group. One patient developed laryngospasm during the FDAE process; the SpO2 briefly dropped to 50% but recovered within 1 min after an intravenous bolus of propofol (30 mg). The patient was then intubated after the loss of consciousness. No differences in minor injuries, including injury to teeth, lips, pharynx, and nose bleeding, were found between the two groups. In the follow-up visit, only 9.68% of patients could clearly recall the induction process in Group E, versus 44.90% in Group C (χ2 = 47.68, P < 0.001; Table 3). The patient satisfaction scores for the induction experience were much higher in Group E (averaging 9 out of 10) than in Group C (averaging 5 out of 10) (t = 15.36, P < 0.001; Table 3). The rate of successful intubation after induction by DL was 18.71% in Group E versus 7.48% in Group C; the Airtraq was used in 81.29% of patients in Group E and 89.12% in Group C. Discussion It is well known that awake intubation is extremely stressful to patients, and avoiding unnecessary awake intubation without compromising safety would improve the quality of patient care. [24] This study is the first to demonstrate that a new approach of incrementally increasing sedation level using inhaled sevoflurane while testing airway patency reduced the need for awake intubation without compromising patient safety.
The results clearly showed that the majority of patients with anticipated difficult airways (94.19%) were able to maintain adequate spontaneous breathing and oxygenation even after loss of consciousness. Only 5.81% of patients, those who did not pass the positive pressure ventilation test, required awake intubation according to our FDAE protocol. These patients were able to maintain SpO2 above 93% during the evaluation process because spontaneous respiration was preserved. For anticipated difficult airways, awake exposure of the vocal cords is routine practice. When the exposure is good, the patient may be intubated under anesthesia at the discretion of the supervising anesthesiologist. Therefore, the awake intubation rate in our control group was 36.05% instead of 100%. The study clearly showed that the new approach significantly reduced the need for awake intubation in patients anticipated to have difficult mask ventilation and/or intubation. The main advantage of this new approach is that awake intubation is reserved only for those who really need it, sparing the majority of patients with potential DV/DI the associated stress and discomfort. If anesthesiologists become familiar with the FDAE method, they may apply it frequently, which might reduce the non-standard practice of inducing general anesthesia regardless of the presence of DV/DI. Therefore, the FDAE approach to difficult airway management should enhance clinical safety. The efficiency of intubation is another concern regardless of the approach, asleep or awake. Awake intubation is known to be time-consuming, and it was uncertain whether the new approach could improve intubation efficiency. In the control group, the average induction time was 18 min with large individual variation. In the FDAE protocol group, the induction time was 36% shorter with less variation. This indicates that the FDAE approach is not only effective but also reduces the anesthesia induction time for patients with anticipated difficult mask ventilation and/or tracheal intubation. The results also showed that the risks of the new approach are not higher than those of our routine practice. The only event was a single episode of brief desaturation in one patient in the protocol group (1/150 vs. 0/147 in Group C). Based on this result, the sample size would have to exceed 100,000 to detect a statistically significant difference in hypoxemia between our new protocol group and the routine practice group. Although the risk of developing laryngospasm and failure of mask ventilation is present during the FDAE process, there is no guarantee that this would not occur under routine practice. In the clinical setting, some patients with potential DV/DI, such as those included in our study, are induced under general anesthesia without proper testing; these patients are at risk of developing CVCI, resulting in severe hypoxia or death. Because of ethical considerations, it is not feasible to compare the safety of our FDAE approach with direct induction of general anesthesia. We also assessed whether the new approach was superior to routine practice in terms of discomfort, mental stress, anxiety, fear, and unpleasant memories for patients, as described previously. [25,26] Patients undergoing the FDAE approach had much lower recall rates of the induction experience. They were much more satisfied with the intubation process when compared to routine practice.
According to our study, the majority of patients with a high risk of difficult ventilation and DI do not have to suffer the discomfort of awake intubation if the FDAE approach is applied. The FDAE approach not only helped the clinicians decide whether to secure the airway awake or asleep but also helped specify the intubation technique. [27,28] For anticipated difficult airway patients, FOB-guided intubation used to be the classic choice. However, with the advancement of video-assisted laryngoscopy, FOB is not necessary in most cases. [29] In our study, vocal cord exposure was initially assessed under traditional DL in both groups. Although there were no differences in preoperative difficult airway assessments, such as the Mallampati classification, the FDAE approach yielded higher rates of intubation under DL, which could reduce the need for video-assisted laryngoscopy. Patients in the FDAE group with C&L Grade ≥3 were all successfully intubated using the Airtraq; unlike the control group, none needed FOB guidance. Therefore, the FDAE approach reduces the rate of DI and is likely to save medical resources. Most clinical studies cited by difficult airway management guidelines are not randomized clinical trials. This is because of the difficulty of conducting clinical trials in this type of patient, who is at high risk during anesthesia induction. Anesthesiologists are usually under stress during the induction process, particularly when patients present potential difficult ventilation, and recruiting such patients into a study requires additional effort. Our study therefore has a number of limitations. First, due to the nature of the study, blinding was not feasible. Anesthesiologists who performed the airway assessment and intubation and recorded the procedures were aware of the intervention; therefore, potential bias could occur due to lack of blinding. Second, our power analysis was based on the calculation of effectiveness, not on safety. Our sample size may be too small to assess the safety of the FDAE approach. Third, the intubation process in the control group may be considered old-fashioned. However, it is still common practice for expected difficult airways in most hospitals in China. In conclusion, the FDAE approach to managing the airway of patients with potential DV/DI significantly reduces the need for awake intubation without compromising patient safety. [Table footnotes: data are expressed as mean ± SD or n (%); one patient in the control group was excluded because the induction time was about 219 min due to poor local anesthesia; χ2 and t values as indicated; DL: direct laryngoscope; FOB: fiberoptic bronchoscopy; SD: standard deviation.]
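For readers who want to verify the headline comparison, the chi-squared statistic for the primary outcome can be reproduced from the counts implied by the reported group sizes and percentages (9/155 and 53/147 awake intubations correspond to 5.81% and 36.05%). A minimal sketch using scipy is shown below; note that the uncorrected test is needed to match the reported χ2 of about 42.3.

```python
# Reproduce the chi-squared test for the primary outcome (awake intubation
# rate) from counts implied by the reported percentages:
# Group E: 9/155 awake (5.81%), Group C: 53/147 awake (36.05%).
from scipy.stats import chi2_contingency

table = [[9, 155 - 9],      # Group E: awake, not awake
         [53, 147 - 53]]    # Group C: awake, not awake

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # approximately chi2 = 42.3
```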
2018-04-03T00:53:45.650Z
2018-03-20T00:00:00.000
{ "year": 2018, "sha1": "3361e4e41f82c5d891129c26af94d93509cb9567", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0366-6999.226897", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3361e4e41f82c5d891129c26af94d93509cb9567", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
25289694
pes2o/s2orc
v3-fos-license
COLORECTAL CARCINOMA USING TMA ( TISSUE MICROARRAY ) : association with metastases and survival Context NM23, a metastasis suppressor gene, may be associated with prognosis in patients with colorectal carcinoma. Objective To analyze NM23 expression and its association with the presence of lymph node and liver metastases and survival in patients operated on for colorectal carcinoma. Methods One hundred thirty patients operated on for colorectal carcinoma were investigated. Tissue microarray blocks containing neoplastic tissue and tumor-adjacent non-neoplastic mucosa were obtained and analyzed by immunohistochemical staining using a monoclonal anti-NM23 antibody. Immunohistochemical expression was assessed using a semiquantitative scoring method, counting the percentage of stained cells. The results were compared regarding morphological and histological characteristics of the colorectal carcinoma, presence of lymph node and liver metastases, tumor staging, and patient survival. Statistical analysis was performed using the Mann-Whitney test, the Kruskal-Wallis test and Fisher’s exact test. Survival analysis was performed using the Kaplan-Meier method and the log-rank test. Results NM23 expression was higher in colorectal carcinoma tissue than in adjacent non-neoplastic mucosa (P<0.0001). NM23 protein expression did not correlate with degree of cell differentiation (P = 0.57), vascular invasion (P = 0.85), lymphatic invasion (P = 0.41), perineural infiltration (P = 0.46), staging (P = 0.19), lymph node metastases (P = 0.08), or liver metastases (P = 0.59). Disease-free survival showed significant association (P = 0.01) with the intensity of NM23 protein immunohistochemical expression in colorectal carcinoma tissue, whereas overall survival showed no association with NM23 protein expression (P = 0.13). Conclusions NM23 protein expression was higher in neoplastic colorectal carcinoma tissue than in adjacent non-neoplastic mucosa, showing no correlation with morphological aspects, presence of lymph node or liver metastases, colorectal carcinoma staging, or overall survival. Disease-free survival was higher in patients with increased NM23 expression. HEADINGS Colorectal neoplasms. Carcinoma. Tumor markers, biological. Antigens, CD. NM23 nucleoside diphosphate kinases. INTRODUCTION Colorectal carcinoma is one of the most common cancers in the Western world and is becoming increasingly prevalent (5,9,15) .Despite advances in surgical management and complementary treatment of these tumors, overall mortality has not decreased significantly over recent years (1,4,5,29) . The most significant prognostic factor in colorectal carcinoma is tumor staging at initial diagnosis.Depth of tumor penetration into the intestinal wall, lymph node involvement and presence of metastases are the most reliable indicators of survival in colorectal carcinoma (3,9,15) . Although subjected to different cancer screening procedures, several patients show a more advanced stage at surgery, and the overall 5-year survival rate reaches only 50% of colorectal carcinoma patients despite having resectable disease (9,15) .Metastasis is the main cause of death in this group, leading to locoregional or distant recurrence in late-stage tumors. 
Studies (3,21,29) have investigated factors that may reduce morbidity and mortality from colorectal cancer, with a special emphasis on tumor markers.Despite the relatively large number of studies analyzing tumor markers, only a few of them are currently used in clinical practice.However, high costs and low sensitivity and specificity limit the routine use of these markers in a clinical setting (5,11,19) . Most prognostic parameters based on the immunohistochemical expression of tumor markers require neoplastic tissue samples and, therefore, can only be assessed postoperatively or after obtaining tissue for biopsy.Moreover, reproducibility of results may vary (13,17,23,29) .Nevertheless, the study of tumor markers in neoplastic tissue is particularly interesting, because it allows analysis of tumor cells and intraindividual biological variability, providing a high degree of biological specificity concerning the cancer under study.Within this context, one may speculate that determining the immunohistochemical expression of markers in colorectal carcinoma tissue has the potential to provide prognostic information, even in non-advanced stages (2) . The NM23 gene is located on chromosome 17 and produces two proteins, NM23-H1 and NM23-H2 (11,14,18,33,36) .Identified as a metastasis suppressor gene, NM23 was first isolated in murine melanoma cell lines (11) .Campo et al. (11) associated low tissue immunohistochemical expression of these proteins with poor prognosis in colorectal carcinoma patients.On the other hand, Bazan et al. (6) found no correlation between the reduced tissue expression of this marker and prognosis in colorectal cancer.Other studies have shown conflicting results concerning the association between NM23 expression in colorectal carcinoma tissue and tumor prognosis (6,8,10,12,30) . Due to the high incidence of colorectal cancer and the difficulty in establishing a prognosis for these patients, especially in tumor-node-metastasis (TNM) stages II and III, the study of tumor markers that may predict prognosis is of utmost importance.The objective of this study was to analyze NM23 expression and its association with anatomicopathologic aspects of tumors, presence of lymph node and liver metastases, and survival in patients operated on for colorectal carcinoma, as well as to contribute to a better understanding of the biological dynamics of the NM23 protein in colorectal carcinoma by measuring its tissue immunohistochemical expression. METHODS The study was approved by the Research Ethics Committee of Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil, under protocol no.1958/07.Inclusion criteria were adult patients with colorectal carcinoma confirmed by anatomicopathologic examination.Exclusion criteria were the presence of hereditary colorectal cancer, Crohn's disease, ulcerative colitis, metachronous colorectal cancer, or any other previously treated neoplasm. Biodemographic variables included age at diagnosis and gender.With regard to neoplastic lesions, we analyzed tumor site, macroscopic and microscopic characteristics, and stage of primary lesion.The following patient-related events were also analyzed: type of surgical intervention, presence of synchronous metastases, follow-up period, disease-free period, tumor relapse, death, mortality rate, and overall survival. 
Overall survival time was calculated from the date of surgery to the last follow-up visit or date of death.Diseasefree survival was defined as the interval after curative surgery during which there was no evidence of tumor relapse. NM23 expression was assessed on paraffin-embedded tissue sections by immunohistochemical analysis in colorectal carcinoma tissue and adjacent non-neoplastic colorectal tissue.The relationship of the intensity of NM23 tissue expression with anatomicopathologic characteristics of the tumor and outcome of patients was then obtained. The study population consisted of 130 patients (65 men and 65 women) operated on for colorectal carcinoma between October 2001 and March 2005 at the Proctology Service of UNIFESP.Patients' mean age was 64.2 years (29 to 90 years).The TNM classification system (UICC, 2002) was used for tumor staging. Prognostic criteria were represented by the following parameters: tumor relapse, overall mortality, overall survival, and disease-free period.The occurrence of relapse was confirmed by complementary tests or laparotomy. Patient follow-up ranged from 1 to 47 months (mean of 25.4 months).The mean follow-up period for patients treated with curative intent was 36.4 months. For the tissue microarray (TMA), all histological hematoxylin and eosin (H-E)-stained sections of colorectal carcinoma were examined by two pathologists.Specimens were assessed for diagnostic confirmation, histological grade and selection of sites for TMA core removal.A Beecher TM tissue arrayer device (Beecher Instruments, Silver Spring, USA) was used to construct the TMA, according to the standard technique (1) . Using an adhesive-coated tape system (Instrumedics Inc, Hackensak, USA), 4-µm sections were cut and transferred to adhesive-coated slides.A small roller was used to press the section flat against the tape, which was then placed on a resin-coated slide and pressed using the same roller for better adhesion of the section to the surface.These slides were then exposed to ultraviolet light for 20 minutes.The slides were dried and the adhesive tapes were removed. To ensure the representative of each area of the donor block, at least two samples were collected, each of them being represented in two different sites in the same recipient block, resulting in a mirror-image representation of the samples.Whenever the samples, even with a mirror-image representation, were not representative examples of the tissue in question, new samples were collected from the donor block and an additional recipient block was constructed.The slides were then submitted to immunohistochemistry. Immunohistochemical analysis was performed using the streptavidin-biotin-peroxidase staining technique with a monoclonal anti-NM23 antibody (Neomarkers, USA), at 1:1000 dilution.This antibody has affinity for both H1 and H2 components of the NM23 protein. Positive results were visible as brown cytoplasmic or nuclear staining for the antibody under study.Slides containing histological sections of NM23 positive colorectal tissue were used as positive controls (26) .The same slides were used as negative controls by removing the primary antibody from the staining reaction. 
The sections were examined using a slide scanner (ScanScope CS System, Aperio Technologies, UK), assessing the percentage of cells with a positive reaction in 10 microscopic fields at 400× magnification. In the assessment of immunohistochemical markers, cells showing dubious staining and non-tumor cells were excluded. Assessment was performed independently and in a blinded fashion by two experienced pathologists. In cases of conflicting observations, which corresponded to less than 10% of the total sample, a consensus was reached between the two pathologists at a joint reevaluation of the data each had obtained independently. The criteria used to assess NM23 expression (26) were based on the number of stained cells, and scores were assigned as follows: score 0 = 10% or less, score 1 = 11% to 25%, score 2 = 26% to 50%, and score 3 = 51% or more stained neoplastic cells. The assessment of NM23 immunohistochemical expression was performed in tumor tissue and in non-neoplastic mucosa obtained from the region adjacent to the tumor. Scores 0 to 2 were considered NM23 negative (protein downexpression), and score 3 was considered NM23 positive (protein overexpression). The slides were scanned and images were captured by a camera (Samsung, South Korea), coupled with a microscope (Olympus Bx 40, Japan), using Win TV 32 software. Positive and negative staining was identified in tumor tissue and non-neoplastic mucosa (Figures 1, 2, 3). Comparisons between groups were performed using the Mann-Whitney test, Fisher's exact test, and the Kruskal-Wallis test in the analysis of variance. Survival analysis was performed using the Kaplan-Meier method, and survival curves were compared using the log-rank test. Statistical analysis was performed using the Prism 4.0 statistical software (GraphPad Software Inc., USA), and the level of significance was set at P≤0.05 (5%). Table 1 shows immunohistochemical expression of the NM23 protein in relation to the main clinicopathological parameters. In the present study, overall and disease-free survival rates were higher in patients with increased NM23 expression. This finding is consistent with the prognostic role of a marker considered protective against the occurrence of metastases. Campo et al. (11) evaluated the loss of heterozygosity of NM23 in relation to patient survival, but found no significant differences. Along the same lines, Royds et al. (34) and Berney et al. (7) identified longer survival among patients with increased NM23 expression. In contrast, Cheah et al. (12), Lee et al. (26) and Lindmark (27) found no relationship between NM23-H1 expression and tumor recurrence or patient survival. The present study showed a significant association between tissue expression of the NM23 protein and disease-free interval in patients who underwent curative surgery for colorectal carcinoma. NM23 expression was also higher, albeit not statistically significantly, in patients with lower overall mortality and in those with fewer lymph node metastases, findings which corroborate the potential protective effect of the NM23 protein. Nonetheless, further studies are warranted to confirm the relevance of the NM23 protein in the prognosis of patients operated on for colorectal carcinoma. TABLE 1. Relationship between the main clinicopathological parameters of patients operated on for colorectal carcinoma and tissue immunohistochemical expression of the NM23 protein (P value)
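The scoring rule described in the Methods maps the percentage of stained neoplastic cells to a 0-3 score and then dichotomizes it. A small sketch of that rule is given below; it simply restates the published cut-offs and is not part of the original analysis pipeline.

```python
# Scoring rule for NM23 immunohistochemical expression as described above:
# score 0 = <=10%, 1 = 11-25%, 2 = 26-50%, 3 = >=51% stained neoplastic cells;
# scores 0-2 are read as NM23-negative and score 3 as NM23-positive.

def nm23_score(percent_stained: float) -> int:
    if percent_stained <= 10:
        return 0
    if percent_stained <= 25:
        return 1
    if percent_stained <= 50:
        return 2
    return 3

def nm23_positive(percent_stained: float) -> bool:
    return nm23_score(percent_stained) == 3

# Example: a core with 60% stained cells scores 3 and is NM23-positive.
assert nm23_score(60) == 3 and nm23_positive(60)
```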
2017-06-23T20:02:26.297Z
2010-10-01T00:00:00.000
{ "year": 2010, "sha1": "3cf30af18524642df1233c1a02bc651d6f884fd9", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.br/pdf/ag/v47n4/v47n4a08.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3cf30af18524642df1233c1a02bc651d6f884fd9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
160010799
pes2o/s2orc
v3-fos-license
Adjacent intact nociceptive neurons drive the acute outburst of pain following peripheral axotomy Injury of peripheral nerves may quickly induce severe pain, but the mechanism remains obscure. We observed a rapid onset of spontaneous pain and evoked pain hypersensitivity after acute transection of the L5 spinal nerve (SNT) in awake rats. The outburst of pain was associated with a rapid development of spontaneous activities and hyperexcitability of nociceptive neurons in the adjacent uninjured L4 dorsal root ganglion (DRG), as revealed by both in vivo electrophysiological recording and high-throughput calcium imaging in vivo. Transection of the L4 dorsal root or intrathecal infusion of aminobutyrate aminotransferase inhibitor attenuated the spontaneous activity, suggesting that retrograde signals from the spinal cord may contribute to the sensitization of L4 DRG neurons after L5 SNT. Electrical stimulation of low-threshold afferents proximal to the axotomized L5 spinal nerve attenuated the spontaneous activities in L4 DRG and pain behavior. These findings suggest that peripheral axotomy may quickly induce hyperexcitability of uninjured nociceptors in the adjacent DRG that drives an outburst of pain. Results Rapid onset of neuropathic pain-related behavior after acute L5 spinal nerve transection. We transected the L5 spinal nerve by quickly withdrawing a pre-implanted loop suture surrounding the L5 spinal nerve in awake rats and conducted behavioral tests immediately after injury. In L5 SNT rats, the duration of spontaneous foot lifting (SFL), an indicator of spontaneous pain, was sharply increased in the ipsilateral hind paw at 10 min after injury, as compared to that in sham-operated rats (Fig. 1a). SFL in L5 SNT rats remained significantly higher than that of the sham-operated group at postoperative day (POD) 1 and POD4. The paw withdrawal threshold (PWT) to mechanical stimulation (von Frey hair) applied to the ipsilateral hind paw was significantly decreased from 10 min to POD14, as compared to that in sham-operated rats (Fig. 1b). In the hot plate test, the paw withdrawal latency (PWL) to thermal stimulation was significantly decreased from 10 min to POD7 after injury (Fig. 1c). The frequency of licking or biting of the ipsilateral hind paw after cold stimulation (acetone) was significantly increased from 10 min after injury to POD14 (Fig. 1d). In sham-operated group, none of these outcome measures was significantly changed from pre-injury baseline or the naive control group (Fig. 1a-d). These findings suggest that pain after axotomy may develop much sooner in awake animals than previously known [13][14][15][16] . The rat hind paw is innervated by L3-L5 spinal nerves from the medial to the lateral side. Further examination showed that the decrease in PWTs measured in L3 and L4 territories was significantly greater than that in L5 territory (Fig. 1e). This finding suggests that the mechanical hypersensitivity after acute L5 SNT was more prominent in the skin territories of uninjured L3-4 spinal nerves than in those of L5 spinal nerve. Evans blue extravasation is a common measure of neurogenic inflammation [17][18][19][20] . Compared to pre-injury level (3.44 ± 0.29 μg/g), the concentration of Evans blue was significantly increased at 10 min after SNT (14.04 ± 1.33 μg/g), peaked at 4 h (24 ± 1.38 μg/g), and gradually decreased from POD1 (18.6 ± 1.35 μg/g; 10 min-POD14, Fig. 1f). 
Evans blue extravasation was also greater in L3 and L4 territories, as compared to that in L5 territory ( Supplementary Fig. 1). Evans blue extravasation did not change significantly after sham operation. Local inflammation is often associated with an increase in skin temperature [21][22][23] . In line with the Evans blue extravasation findings, skin temperature of the ipsilateral hind paw was significantly elevated from 10 min to 240 min after L5 SNT, as compared to temperature on the contralateral side (Fig. 1g). Blocking the afferent inputs from L5 spinal nerve to the spinal cord by local application of lidocaine (2%) also increased skin temperature of the hind paw for approximately an hour after treatment (Fig. 1h). The core body temperature and skin temperature of the contralateral hind paw after L5 SNT were not changed from pre-injury level. These findings suggest that L5 SNT may quickly trigger neurogenic inflammation in the peripheral tissue, and more so in the territory of neighboring intact spinal nerves. Spontaneous activity quickly developed in nociceptive neurons of uninjured L4 dorsal root ganglion (DRG) after L5 spinal nerve transection. We next examined the neurophysiologic mechanisms that may underlie the quick onset of pain after L5 SNT (dissecting scissors cut). Recent microneurography studies in patients suggested that development of SA and ectopic discharge in a subpopulation of DRG neurons after nerve injury might underlie spontaneous pain 1,2 . By conducting highly sensitive ex vivo electrophysiology recordings of DRG neurons, we found that 14 nociceptive neurons (C neurons) out of 17 neurons recorded from uninjured L4 DRG in 17 rats developed SA within a few minutes after L5 SNT, and remained active for at least 30 min (Fig. 2a,b). The average rates of SA from all neurons (i.e., neurons with and without SA after SNT) after SNT were shown in Fig. 2a. DRG neurons were classified based on axon conduction velocity and response properties. The percentage of C neurons in L4 DRG that showed SA within 4 h after L5 SNT was significantly high-er than that after sham operation (Fig. 2c). In contrast, only a few C neurons in L5 DRG showed SA after transection (Fig. 2c). The injury discharge of C neurons in L5 DRG lasted for no more than a minute in our electrophysiological recording(Supplementary Fig. 2), which is in line with findings in a previous study 24 . Based on the response properties, we separated C neurons into different subgroups: C-mechano-sensitive (CM), C-mechano-heat-sensitive (CMH), C-mechano-heat-cold-sensitive (CMHC), and C-mechano-cold-sensitive neurons (CMC). The percentages of CM (14/39, 36%), CMH (13/31, 42%), and CMC (10/17, 59%) neurons in L4 DRG that showed SA at 0-4 h after L5 SNT were significantly higher than that after sham operation (Fig. 2d). We then examined the source of signals that trigger SA in L4 DRG neurons after L5 SNT. We transected the L4 dorsal root at 10 min before L5 SNT to prevent traveling of retrograde signals from the spinal cord to L4 DRG. Indeed, L4 dorsal root transection significantly reduced the SA in L4 DRG neurons after L5 SNT (Fig. 2c). In a separate experiment, we recorded SA for 5 min after L5 SNT and then intrathecally infused vehicle (artificial cerebrospinal fluid) and aminooxyacetic acid (AOAA, 10 mM, 10 μl), and agar was used to block drug diffusion into DRG bath. AOAA inhibited aminobutyrate aminotransferase activity and increased the level of gamma-aminobutyric acid (GABA) 25 . 
AOAA significantly reduced SA in L4 DRG neurons, as compared to the pre-drug level and that after vehicle treatment (Fig. 2e,f). Together, these findings may suggest that retrograde signals from the spinal cord may elicit SA in L4 DRG neurons after acute L5 SNT. [Displaced figure-legend fragment: skin temperature in the L4 dermatome of the ipsilateral and contralateral paws and core (rectal) temperature after application of 2% lidocaine around the L5 spinal nerve for 1 min (n = 3/group); contralateral side vs lidocaine L5: F(1,4) = 87.57, P = 0.0007; 10 min: P = 0.0002; 20-60 min: P < 0.0001; two-way mixed-model ANOVA with Sidak's multiple comparisons test; data are expressed as mean ± SEM.] Using electrophysiologic recording to identify DRG neurons with SA is challenging because it requires sampling a large population of cells. Therefore, we used pirt-GCaMP6s mice to conduct high-throughput calcium imaging of DRG neurons in vivo [26][27][28]. GCaMP is a genetically encoded calcium indicator, and the intensity of its green fluorescence increases robustly when the cell is active 28,29. In line with electrophysiologic findings, the percentage of small-diameter neurons (<450 μm²) in L4 DRG that developed SA was significantly increased at 2 min (139/291 cells in n = 4 mice) and 30 min (47/271 cells in n = 4 mice) after L5 SNT, as compared to that at baseline (0/288 cells, Fig. 3). Thus, both electrophysiologic recording and GCaMP imaging studies suggested a rapid onset of SA in nociceptive neurons from uninjured L4 DRG after L5 SNT. Sensitization of nociceptive neurons in L4 DRG after acute L5 spinal nerve transection. We further examined whether L5 SNT also increases the evoked response of L4 DRG neurons to mechanical, thermal, and cold stimulation. Stimulus-evoked action potentials (APs) were recorded ex vivo from C neurons in L4 DRG for 4 h after L5 SNT or sham operation in rats (Fig. 4a-d). Graded mechanical and heat stimulation evoked more APs in L4 DRG neurons after L5 SNT, as compared to that before injury (Fig. 4e,f). C neurons were characterized and separated into different subtypes (CM, CMH, CMHC, CMC) based on conduction velocity and response properties to mechanical, thermal, and cold stimulation applied to the skin receptive fields (Fig. 4b-d,g). The activation thresholds to mechanical stimulation were significantly decreased in CM, CMH, and CMHC neurons after SNT, as compared to those after sham operation (Fig. 4h). In addition, the number of APs elicited by mechanical stimulation increased significantly after L5 SNT in each subgroup of C neurons (Supplementary Fig. 3). The heat thresholds of CMH and CMHC neurons were significantly lower after L5 SNT than after sham operation (Fig. 4i). In CMH and CMHC neurons, the numbers of APs elicited by heat (45-53 °C, Supplementary Fig. 4) and cold stimuli (0 °C, 20 s, Fig. 4j) were significantly greater in SNT rats than in sham-operated rats. We also conducted in vivo calcium imaging to examine responses of L4 DRG neurons to mechanical and heat stimulation in pirt-GCaMP6s mice. To recruit mechanically sensitive neurons, we used a rodent pincher analgesia meter to stimulate a large area of the hind paw, instead of using von Frey filaments. More small-diameter neurons in L4 DRG were activated by mechanical stimulation (Supplementary Fig.
5a) and heat stimulation (Supplementary Fig. 5b) at 30-60 min after L5 SNT than before injury. Together, these findings suggest a sensitization of primary nociceptive neurons in uninjured L4 DRG soon after L5 SNT, which may correlate with the development of behavioral mechanical and heat hypersensitivities. Electrical stimulation of low-threshold afferent fibers reduced spontaneous activity in L4 DRG neurons and attenuated pain behavior after L5 spinal nerve transection. We determined whether stimulation of low-threshold afferents inhibits SA of C neurons in L4 DRG after L5 SNT. Electrical stimulation was applied via a suction electrode to the remaining L5 spinal nerve proximal to the transection site (Fig. 5a). The stimulation was applied at 10 min after L5 SNT at a low intensity that primarily activates non-nociceptive afferent fibers [Aα/β-fiber, 40% motor threshold (MoT), 10 Hz, 3 min]. The SA frequency in C neurons of L4 DRG at 0-5 min after electrical stimulation (0.02 ± 0.01 Hz) was significantly lower than the pre-stimulation level (0.3 ± 0.13 Hz, Fig. 5b). Mechanical test stimulation was applied to the skin receptive field of the neurons at the end of the experiment to confirm responsiveness. Lastly, we examined whether electrical stimulation of low-threshold afferent fibers may also attenuate neuropathic pain-related behavior after L5 SNT. A pair of silver electrodes was pre-implanted under the L5 spinal nerve at 2 days before L5 SNT. Low-intensity electrical stimulation (40% MoT, 10 Hz, 3 min) was applied at the L5 spinal nerve proximal to the transection site. The electrical stimulation was delivered at 1, 6, and 12 h after L5 SNT and then twice daily from POD1 to POD7. Behavioral tests were conducted immediately after each stimulation. The duration of SFL in the ipsilateral hind paw at 10 min and 1 day after L5 SNT was significantly shorter in rats that received electrical stimulation than in those without stimulation (Fig. 5c). The decrease of PWT to mechanical stimulation after L5 SNT was also attenuated by electrical stimulation (Fig. 5d). Discussion Studies of neuropathic pain in animal models have been conducted primarily at hours to days after nerve injury 11,12,30. Yet, severe pain after nerve injury has been reported in patients much earlier than that observed in animal models 2,4,5,17,31,32. Because it takes hours for animals to fully recover from general anesthesia after surgery, rapid onset of neuropathic pain-related behavior may not be readily observed 10,12,13,15,16. In order to avoid these potential confounding factors, we tested a rat model of axotomy induced by acute transection of the L5 spinal nerve in awake rats. Using this model, we demonstrated a rapid onset of neuropathic pain-related behavior that included spontaneous pain and hypersensitivity to mechanical, heat, and cold stimuli, and exposed early changes in primary sensory neurons after acute peripheral nerve injury. The mechanisms underlying the pain after axotomy remain unclear. Previous studies suggested that inflammation in nerve tissue is important to the development of neuropathic pain [10][11][12]. Yet, changes in gene expression, inflammation, and Wallerian degeneration often develop slowly after injury 11,33 and may not account for the rapid onset of post-axotomy pain.
Our electrophysiology and GCaMP imaging studies revealed increases in SA and responsiveness of nociceptive neurons in the neighboring uninjured L4 DRG only a few minutes after L5 SNT. Although the firing rate of individual C neurons was relatively low (<10 APs/5 min), this SA may induce pain owing to the large number of neurons involved. Compared to previous studies, which were conducted at later time points after spinal nerve ligation (SNL) 7,9 , we observed a higher incidence of C neurons and a lower incidence of A neurons in L4 DRG that showed SA soon after L5 SNT. However, the average rates of SA in C neurons were lower than those in the previous studies. Differences in animal models (e.g., SNT versus SNL), post-injury time points, and experimental conditions may partially account for the discrepancies between the current observations and previous findings 7,9 . Nevertheless, our finding is in line with the observation that low-frequency activity in C neurons elicits hyperalgesia in humans and rats 18,19 . The hyperexcitability of C neurons in L4 DRG developed quickly after L5 SNT, almost simultaneously with the onset of spontaneous pain and evoked pain hypersensitivity. Both mechanical hypersensitivity and neurogenic inflammation, as indicated by increased Evans blue extravasation, were more prominent in the dermatomes of the uninjured spinal nerve. These findings suggest that hyperexcitability of uninjured C neurons may induce neurogenic inflammation and pain hypersensitivity after axotomy. The spontaneous pain and corresponding electrophysiologic changes witnessed in this study occurred almost immediately after L5 SNT and persisted until pain had transitioned into the chronic phase. The mechanisms that lead to a quick onset of C neuron sensitization in L4 DRG after L5 SNT remain to be determined in a future study. Intriguingly, transection of the L4 dorsal root reduced C neuron hyperexcitability in L4 DRG, suggesting that retrograde signals from the spinal cord are important to the sensitization of uninjured DRG neurons after axotomy. Consistent with this notion, intrathecal infusion of AOAA, which would increase the inhibitory tone from GABAergic neurons, induced a similar inhibitory effect. The gate control theory of pain, proposed by Melzack and Wall 20 , postulates that activity of non-nociceptive afferent neurons drives a feed-forward activation of spinal inhibitory neurons to close the "gate" and inhibit spinal nociceptive transmission. However, it is unknown whether A-fiber inputs exert tonic inhibition of primary nociceptive neurons under physiologic conditions. A recent study showed that abolishing low-threshold afferent inputs by demyelinating A-fibers with cobra venom induced a quick onset of heat pain hypersensitivity and an increase in C neuron excitability 21 . Together, these findings suggest that a loss of tonic A-fiber inputs, such as by axotomy or demyelination, might deactivate certain inhibitory interneurons, "open" the gate in the spinal cord dorsal horn, and quickly induce pain. In contrast, low-intensity electrical stimulation at the venom injection site, which primarily activated A-fibers, inhibited SA in C neurons 21 . Here, electrical stimulation of the L5 spinal nerve at an intensity that activates low-threshold afferent fibers also attenuated SA of uninjured L4 DRG neurons in vivo and alleviated pain in awake rats after L5 SNT.
Our results indicate that disinhibition of C neurons by the loss of A-fiber input might induce nociceptive activation and pain immediately after nerve injury. These effects could be reversed by compensatory A-fiber input via peripheral nerve stimulation. The dorsal root reflex might be involved in triggering the neurogenic inflammation after nerve injury observed in this study. The detailed spinal cord neural circuit mechanisms will be investigated further in future studies. Although some widely adopted preclinical pain models, such as those of severe ongoing pain after intra-plantar injection of formalin or capsaicin 22,23 , also involve testing in awake animals, it remains possible that L5 SNT produces more severe, traumatizing, and lasting pain in awake animals than other models. In addition, the current findings suggest that anesthesia did not impair the rapid increase of neuronal excitability in L4 DRG, a possible neurophysiological correlate of the early neuropathic pain after acute L5 SNT. Accordingly, we suggest that further investigation of behavioral changes and peripheral neuronal mechanisms of pain after acute nerve injury should be conducted with effective anesthesia during surgery. In summary, our findings show for the first time that peripheral axotomy in rats induced a quick onset of neuropathic pain-related behavior and neurogenic inflammation, which may be associated with increased excitability of adjacent intact primary nociceptive neurons. The current findings may provide the biological basis for developing novel therapeutic strategies for neuropathic pain, including, but not limited to, the pain that develops abruptly after nerve injury. Methods Animals. Adult female Sprague-Dawley rats (Beijing, China; 128 rats for behavioral testing, 56 rats for the Evans blue test, 16 rats for the skin temperature test, and 89 rats for DRG recording, with some behavioral rats also used for recording) and adult Pirt-Cre;Rosa26-flox-stop-flox-GCaMP6s heterozygous mice (25-30 g, both sexes) were used in this study. The animal behavior and electrophysiology studies were conducted at the Peking Union Medical College, China, and were approved by the Institutional Animal Care and Use Committees of the Chinese Academy of Medical Sciences and Institute of Basic Medical Sciences (Project #211-2014). The GCaMP imaging study was conducted at the Johns Hopkins University, USA, and was approved by the Institutional Animal Care and Use Committee. We confirm that all experiments were performed in accordance with relevant guidelines and regulations. Behavioral tests. In behavioral experiments, the L5 spinal nerve was separated and encircled loosely with 5-0 silk sutures 1 week before transection. Rats were anesthetized with pentobarbital sodium (50 mg/kg, administered intraperitoneally [IP], Sigma-Aldrich Corp., St. Louis, MO, USA). Under aseptic conditions, the right L5 transverse process was removed, and the L5 and L4 spinal nerves were identified. A suture loop was placed around the L5 spinal nerve and threaded through a plastic tube, with both ends close to the skin incision on the back. For sham-operated rats, the suture loop was placed adjacent to (but not surrounding) the L5 spinal nerve. The suture was located approximately 3-4 mm proximal to the junction with the L4 nerve. The incision was then closed in layers. The suture ends were placed under the skin when the incision was closed.
The L5 spinal nerve was transected in awake rats by quickly pulling out both ends of the pre-implanted suture surrounding the nerve while holding the plastic tube inside the body, so that the suture loop cut through the nerve trunk and was completely pulled out. After the behavioral tests, we confirmed that each L5 spinal nerve had been cleanly transected. To apply electrical stimulation at the L5 spinal nerve in the animal behavioral studies, we implanted a pair of silver electrodes under the L5 spinal nerve proximal to the transection site 1 week before the L5 SNT. The electrodes were custom made with 0.2-mm-diameter silver wires (Cat#782000, A-M Systems, Sequim, WA, USA) and wired underneath the skin to a small electric port sutured onto the back of the rat. The electrical stimuli were generated by a stimulator (SEN-7103, Nihon Kohden, Tokyo, Japan) and isolator (SS-102J, Nihon Kohden), which was connected to the port on the back of each rat before the stimulation. The intensity of electrical stimulation (40% motor threshold (MoT), 10 Hz, 3 min) was pre-tested so that only low-threshold afferent fibers were activated 34,35 . Rats were acclimatized to the behavioral testing box for 3 consecutive days prior to baseline testing and for 30 min before each test. For the acute behavioral tests, acclimatization before the first time point lasted less than 10 min. Each behavioral test was conducted as a separate, independent experiment, with 8 sham and 8 SNT animals per comparison. Mechanical hypersensitivity was examined before surgery and at 10 min, 1 h, 4 h, 12 h, 1 day, 7 days, and 14 days after surgery. Heat and cold hypersensitivity and spontaneous pain were tested before surgery and at 10 min and 1, 7, and 14 days after surgery. The electrical stimuli were delivered at 1, 6, and 12 h after L5 SNT and then twice daily from POD1 to POD7 in the stimulation group, whereas the control group received no stimulation. Spontaneous pain and cold, mechanical, and heat hypersensitivity were tested immediately after the electrical stimulation at 1, 6, and 12 h after L5 SNT and then on POD 1, 3, 5, and 7. Mechanical hypersensitivity. Rats were placed in individual acrylic glass boxes with a wire grid bottom. Then, a calibrated electronic von Frey filament (Electronic von Frey 2390-5 Anesthesiometer; IITC Life Science, Woodland Hills, CA, USA) was applied perpendicularly to the plantar surface of the hind paws and held for approximately 3 s. Abrupt paw withdrawal, licking of the paw, or shaking of the paw indicated positive responses. Three measurements were made per side and averaged to yield the paw withdrawal threshold (PWT) in response to mechanical stimulation. Heat hyperalgesia. Thermal hyperalgesia was measured by methods previously described 13,15 . Rats were placed in the acrylic glass box of a thermal testing apparatus (BME-410C Full-Automatic Plantar Analgesia Tester; Institute of Biomedical Engineering, Tianjin, China) and allowed to acclimatize to the apparatus for another 30 min. A movable radiant heat source located under the glass floor was focused onto the plantar surface of the hind paw (51 °C). The maximum automatic cutoff time was set at 20 s to prevent potential tissue damage. Abrupt paw withdrawal, paw licking, or paw shaking indicated positive responses. Three measurements were made per side, and an average of the readings was calculated to yield the paw withdrawal latency to heat stimulation. Cold allodynia.
After rats were acclimatized, 0.1 ml of acetone was gently applied to the plantar surface of the hind paw. Rapid withdrawal of the hind paw, paw licking, and paw shaking in response to the spread of the acetone over the plantar surface of the hind paw were considered positive responses. Three 3-min tests were conducted on each hind paw at 3-min intervals. An increase in the number of positive responses was interpreted as the development of increased cold sensitivity 36,37 . Spontaneous pain. The duration of spontaneous foot lifting (SFL) was used as an indication of spontaneous pain after nerve injury. It was measured as the cumulative duration (in seconds) per 10 min in which rats lifted their ipsilateral hind paw, often accompanied by shaking or licking. Foot lifting associated with exploratory behavior, locomotion, body repositioning, and grooming was excluded 6,38 . An increase in the duration of SFL compared with the sham-operated group was interpreted as the development of spontaneous pain 13,37 . Electrophysiologic recordings. The surgical procedure was similar to that used in the rat behavioral studies, except that the L5 spinal nerve was separated and transected with scissors while the rats were anesthetized with sodium pentobarbital. Primary sensory neurons innervating the skin of the hind limb were recorded with an ex vivo extracellular electrophysiologic preparation, as previously described 8,9,39,40 . Rats were used for electrophysiologic recording at 0-4 h, 1 day, and 7 days after surgery. A chosen cell body was suctioned into the mouth of a glass micropipette (tip diameter, 20-25 µm) filled with the bath solution; the extracellular artificial spinal fluid solution contained 120 mM NaCl, 3 mM KCl, 1.1 mM CaCl2, 10 mM glucose, 0.6 mM NaH2PO4, 0.8 mM MgSO4, and 18 mM NaHCO3, adjusted to pH 7.4 with NaOH. APs were recorded extracellularly by using a Multiclamp 700B amplifier and Digidata 1440A. A peripheral receptive field was identified by exploration of the hind limb and application of various handheld stimuli. Applicators included a cotton-tipped swab and camel-hair brush (for innocuous mechanical stimuli); gentle pinching or indentation with a glass probe and von Frey filaments with a fixed tip diameter (200 μm) to deliver different bending forces (for mechanical stimuli); a temperature-controlled chip-resistor heating probe for a series of heat stimuli; and ice water. Mechanical stimuli were delivered by a series of von Frey filaments applied to the receptive field for 1 s in ascending order (5, 10, 30, and 50 mN). A series of heat stimuli was applied with a baseline of 38 °C, a rapid temperature ramp, a 5-s plateau of 41, 45, 49, 51, or 53 °C, and then a return to baseline. Ice water (0 °C), used as the cold stimulus, was applied to the receptive field as previously described [39][40][41] . Classification of DRG neurons. Each DRG neuron was classified as C, Aδ, or Aβ by its axonal conduction velocity (<1.3, 1.3~12, or >12 m/s, respectively) 39,[41][42][43] . DRG neurons with hairy and glabrous receptive fields were included. C neurons were further classified into CM, CMH, CMHC, or CMC subgroups if they were responsive to the following stimulation applied to the receptive field: mechanical stimulation (pinch pressure) only; mechanical and noxious thermal stimulation (51 °C, 5 s); mechanical, noxious thermal, and cold stimulation (ice water, 0 °C, 20 s); or mechanical and noxious cold stimulation, respectively. Criteria for defining a spontaneously active neuron.
For each neuron identified, a continuous recording was obtained for 3 min without any external stimulation. If spontaneous ongoing discharge occurred during this period, the neuron was classified as spontaneously active. Any "injury discharge" that appeared on occasion immediately after electrode contact and lasted less than 30 s was ignored. Evans blue extravasation. The surgical procedure was similar to that used in the rat behavioral studies. Rats were anesthetized with sodium pentobarbital, injected with Evans blue (50 mg/kg, dissolved in 1 ml saline, administered intravenously; BDH, UK), and perfused transcardially with 0.1 M phosphate-buffered saline 20 min later. The skin of the hind paw was photographed for qualitative assessment and then removed for quantitative measurement of Evans blue extravasation according to the method described 44 . Briefly, the removed skin was incubated in N,N-dimethylformamide overnight at 55 °C to allow Evans blue to dissolve completely in the solvent. The following day, the Evans blue absorbance value was measured with a microplate spectrophotometer at 630 nm. The concentration of Evans blue was determined by normalization to reference samples of Evans blue (0-9.6 μg/ml). Skin temperature measurement. The procedures for the SNT and lidocaine (2% in saline applied around the L5 spinal nerve for 1 min) groups were similar to those used in the rat behavioral studies, except that rats were anesthetized with sodium pentobarbital. The skin temperature of the hind paw was measured at room temperature (22 ± 0.2 °C) by inserting an electric thermometer (diameter: 1 mm) between two toes. The value was recorded on a double-channel chart recorder. GCaMP imaging. In the calcium imaging studies, a procedure similar to that used in the rat electrophysiologic studies was applied to mice, but with no electrode implantation. Pirt-GCaMP6s mice were anesthetized with 2% isoflurane, and the lumbar L4 DRG ipsilateral to the nerve injury was exposed as described in our previous studies 26,28,45 . During surgery, mice were kept on a heating pad to maintain body temperature at 37 ± 0.5 °C, as monitored by a rectal probe. To obtain a clean image of the sensory neuron cell bodies, the dura mater and the arachnoid membranes were carefully opened and removed using microdissection forceps. In vivo calcium imaging of the whole L4 DRG was performed immediately after SNT for 1-6 h as previously described 28 . Mice were laid in the prone position on a custom-designed microscope stage. The spinal column was stabilized with custom-designed clamps to minimize movements caused by breathing and heartbeat. All in vivo imaging experiments were performed using a Leica SP8 confocal microscope (Leica). For GCaMP excitation, a laser wavelength of 488 nm (2% laser power) was used, and the images were acquired at a bidirectional scan speed of 600 Hz. Raw image stacks were imported into ImageJ (National Institutes of Health, Bethesda, MD, USA) for further analysis. The stimulus train burst width was 8 s. After 16 s of baseline imaging, we applied the test stimulus to the hind paw. For spontaneous activity recording, the isoflurane level was lowered to 1% and the DRG was imaged continuously for 40 min with no stimulation. Stimulus delivery. We applied a rodent pincher analgesia meter instead of a von Frey filament as a mechanical stimulus to the ipsilateral hind paw to ensure that most mechanically sensitive neurons were investigated.
Because of the large contact surface area of the pincher, the force needed to evoke pain responses was much higher than that normally applied with von Frey filaments. We determined in advance and in our previous study that the PWT for the pincher was 500 g in normal mice 28 . A series of mechanical stimuli was sequentially applied to the hind paw, and the responses of all DRG neurons were recorded. The duration of the pressure was 8 s after 16 s of baseline imaging. Similarly, we applied thermal stimuli (45-51 °C, 5 s) to study responses to thermal stimulation after 16 s of baseline imaging 28 . Imaging data analysis. Raw TIFFs were exported and analyzed with ImageJ as previously described 28 . An experimenter manually traced the activated cells and determined cell size and relative fluorescence intensity off-line after completing the study. Briefly, small-, medium-, and large-diameter neurons were defined as having somal areas of <450 μm², 450-700 μm², and >700 μm², respectively. The average fluorescence intensity in the baseline period was taken as F0 and measured as the average pixel intensity during the first two frames of each imaging experiment. The maximum fluorescence intensity, Ft, was measured by calculating the average (peak − background) pixel values in a given region of interest for each image frame recorded during a time interval before and during the stimulation period. The Ft was then used to calculate ΔF/F using the formula ΔF/F = (Ft − F0)/F0. We used ImageJ or Fiji (National Institutes of Health) and LIF (Leica Microsystems GmbH) to analyze calcium imaging data using standard functions and a custom macro. A neuron was considered activated by the stimulation if Ft/F0 > 1.2, as in previous studies 26-28 . Statistical analysis. Data analysis was performed by using the Prism 6.0 statistical program (GraphPad Software, Inc.). Raw data were first evaluated for Gaussian distribution by using the D'Agostino & Pearson test (n > 8) or the KS normality test (n < 8). Normally distributed data were analyzed using parametric statistics (two-way analysis of variance (ANOVA), one-way ANOVA, unpaired two-tailed Student t test, and paired two-tailed Student t test). Data (or data after log transform) that did not meet the basic assumptions for parametric testing were analyzed using nonparametric statistics (Mann-Whitney U and Kruskal-Wallis tests). The chi-square test was used to compare differences in the percentage of C neurons that showed SA between different experimental conditions. Data were expressed as mean ± standard error of the mean, or as percentages where appropriate. A value of P < 0.05 was considered significant.
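For readers who want to follow the thresholding logic of the imaging analysis described above (cell-size binning, ΔF/F, and the Ft/F0 > 1.2 activation criterion), the following is a minimal, hypothetical sketch. The function names, array shapes, and the simplification of Ft to the peak of a single ROI trace are our own assumptions; the study itself used ImageJ/Fiji with a custom macro, not this code.

```python
# Minimal sketch of the per-ROI calcium imaging criteria (assumptions noted above).
import numpy as np

def classify_soma_size(area_um2):
    """Bin a neuron by somal area (um^2) into small / medium / large."""
    if area_um2 < 450:
        return "small"
    elif area_um2 <= 700:
        return "medium"
    return "large"

def delta_f_over_f(trace, n_baseline_frames=2):
    """Compute dF/F for one ROI trace (1-D array of mean pixel intensity per frame)."""
    f0 = trace[:n_baseline_frames].mean()   # baseline fluorescence F0 (first two frames)
    ft = trace.max()                        # simplified stand-in for the peak fluorescence Ft
    return (ft - f0) / f0

def is_activated(trace, n_baseline_frames=2, threshold=1.2):
    """Apply the Ft/F0 > 1.2 activation criterion."""
    f0 = trace[:n_baseline_frames].mean()
    return trace.max() / f0 > threshold

# Example with a synthetic ROI trace (arbitrary intensity units)
trace = np.array([100.0, 102.0, 105.0, 150.0, 140.0, 120.0])
print(classify_soma_size(300.0), delta_f_over_f(trace), is_activated(trace))
```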
2019-05-22T14:27:20.683Z
2019-05-21T00:00:00.000
{ "year": 2019, "sha1": "e63e083ff15b399057d84a156cf8b19bb8dd016c", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-44172-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e63e083ff15b399057d84a156cf8b19bb8dd016c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1933116
pes2o/s2orc
v3-fos-license
A comparison of nicotine dose estimates in smokers between filter analysis, salivary cotinine, and urinary excretion of nicotine metabolites Rationale Nicotine uptake during smoking was estimated by either analyzing the metabolites of nicotine in various body fluids or by analyzing filters from smoked cigarettes. However, no comparison of the filter analysis method with body fluid analysis methods has been published. Objectives Correlate nicotine uptake estimates between filter analysis, salivary cotinine, and urinary excretion of selected nicotine metabolites to determine the suitability of these methods in estimating nicotine absorption in smokers of filtered cigarettes. Materials and methods A 5-day clinical study was conducted with 74 smokers who smoked 1–19 mg Federal Trade Commission tar cigarettes, using their own brands ad libitum. Filters were analyzed to estimate the daily mouth exposure of nicotine. Twenty-four-hour urine samples were collected and analyzed for nicotine, cotinine, and 3′-hydroxycotinine plus their glucuronide conjugates. Saliva samples were collected daily for cotinine analysis. Results Each method correlated significantly (p < 0.01) with the other two. The best correlation was between the mouth exposure of nicotine, as estimated by filter analysis, and urinary nicotine plus metabolites. Multiple regression analysis implies that saliva cotinine and urinary output are dependent on nicotine mouth exposure for multiple days. Creatinine normalization of the urinary metabolites degrades the correlation with mouth exposure. Conclusions The filter analysis method was shown to correlate with more traditional methods of estimating nicotine uptake. However, because filter analysis is less complicated and intrusive, subjects can collect samples easily and unsupervised. This should enable improvements in study compliance and future study designs. Introduction A number of studies were published over the last 30 years that attempted to determine the amount of tar and/or nicotine that smokers receive from their cigarettes (reviewed in Stephen et al. 1989;Pritchard and Robinson 1996;Scherer 1999). The methodology used falls into three broad categories: (1) the analysis of biomarkers in human body fluids or expired breath; (2) the measurement of smoking behavior (puff volume, duration, and frequency) followed by a smoking machine set to duplicate human puffing conditions; and (3) the analysis of spent cigarette filters and the calculation of smoke yields from the filter efficiency. Typically, biomarker measurements in blood/plasma and smoking behavior measurements require that sampling or measurements be made in a laboratory environment. There is a possibility that smoking behavior becomes atypical in this type of environment (Comer and Creighton 1978;Ossip-Klein et al. 1983). Urine, saliva, and spent cigarette filters can be collected in a smoker's everyday environment. The analysis of 24-h urine samples for nicotine and major metabolites can provide quantitative data regarding uptake of smoke constituents as the product is used in a smoker's everyday environment (see Byrd et al. 1998 and references therein for examples). Subject compliance can be an issue when trying to determine actual cigarette yields because in an unmonitored environment the subject must be relied upon to smoke only a given brand, not use any other nicotine-containing products, provide an exact accounting of every cigarette smoked during the collection period, and collect all urinary output. 
The filter analysis method is one of the least invasive of the methods mentioned above. The smokers can use their product in their normal environment and the only deviation from normal behavior is to save the filters. Compliance is not an issue when trying to determine the subject's cigarette yield because the filters from the actual cigarettes smoked are analyzed. In most cases, the returned filter can be compared to those of the subject's stated brand to assure brand compliance. The primary issue with this method has been that smoking behavior can produce changes in the filtration efficiency of the filter. The filtration efficiency can vary according to the velocity of the smoke passing through the filter and, to some extent, the length of the tobacco rod smoked (Overton 1973;Dwyer and Abel 1986;Norman et al. 1984). The method used for this study was developed to minimize the effects of smoking behavior on filter efficiency by measuring only the portion of the filter downstream of the ventilation holes (i.e., the mouth end) (St.Charles 2001;Shepperd et al. 2006). This results in relatively constant filtration efficiency over a wide range of smoking behavior. The objective of this study was to compare the nicotine yield of human-smoked cigarettes (mouth exposure) as measured by the filter analysis method and human smoke uptake as measured by biomonitoring under strictly controlled conditions. The biomonitoring measurements included salivary cotinine and 24-h urinary nicotine, cotinine, 3′-hydroxycotinine (3-HC), and their respective glucuronide conjugates. With a good correlation between the methods, future studies on smoker exposure can use the simpler filter analysis method rather than resorting to human biomonitoring techniques. Study design The clinical portion of this study was conducted by an independent contract research organization in 2003 (Covance Clinical Research Unit, Madison, WI, USA). The analysis of salivary cotinine and urinary nicotine metabolites was performed at Covance Laboratories (Harrogate, North Yorkshire, UK). Filter analysis was performed by the study sponsor (Research and Development, Brown & Williamson Tobacco Company, Macon, GA, USA). The study was approved by Covance's Institutional Review Board and performed in accordance with applicable federal regulations. Subjects who participated in the study gave their informed consent, were told of the purpose of the study, and could withdraw at any time. Subject selection Habitual smokers were recruited by Covance. Enrollment criteria included males or nonpregnant, nonlactating females, between 21 and 65 years of age, within −20% to +30% of their ideal body weight, who smoked at least 15 cigarettes a day of the same cigarette brand during the previous year. Subjects were excluded if they were under 21 years of age, were pregnant or lactating, participated in any other clinical study within 30 days before study entry, had a history or showed signs of a significant medical or psychiatric condition, used prescription medications within 14 days before study entry, had a history of alcoholism or drug addiction within a year of study entry, or used alcohol or any nonprescription preparations within 72 h of study entry. 
A few subjects deviated from the enrollment criteria: underweight (1), overweight (5), medication or alcohol usage before study entry (4), low cigarette consumption (2), shortened brand loyalty duration (4), elevated clinical chemistry (2), abdominal/hernia surgery (7), and positive drug screen before study entry (2). Because these deviations were considered minor and not expected to interfere with the study objectives, the subjects were allowed to participate in the study. Subjects were assigned into one of four tar yield groups, which span the range of Federal Trade Commission (FTC) tar yields found in commercially available USA filtered cigarettes: 1-3 mg (ULL or ultralights/low), 5-6 mg (ULH or ultralights/high), 9-12 mg (LTS or lights), and 13-19 mg (FF or full flavor). The purpose was to cover a wide range of human nicotine exposure to allow robust correlations between the methodologies. The goal was to enroll 20 smokers per group; however, even with additional recruitment attempts, only 15 smokers enrolled in the ULL group (market share <2%). One subject in the FF group withdrew from the study due to illness. Table 1 summarizes the subjects' demographics and their respective brand characteristics by tar band. Subjects were confined to the clinic for six calendar days to give five consecutive 24-h periods. Nine confinement periods were staggered and limited to ten subjects or less (generally of the same tar range group). During confinement period number 8, one LTS smoker and two FF smokers were allowed to participate with six ULL smokers to complete the LTS and FF cells. Subjects were fed a standardized bulk diet that excluded grilled, smoked, or barbecued food items. Consumption of water and other nonalcoholic beverages was unrestricted. Subjects refrained from strenuous exercise. During confinement, subjects smoked their usual cigarette brand ad libitum in a dedicated smoking room equipped with ventilation and air filtration. Use of any form of nicotine other than the subject's declared own brand was prohibited. Cigarettes were purchased locally. Urine and saliva collection and analysis Twenty-four-hour urine samples were collected from each subject for five consecutive days. Collections started at approximately 0800 hours (first void excluded) and ended at approximately 0800 hours the following day (first void included). Urine was collected in 3-l amber plastic containers and kept refrigerated throughout the collection period. No chemical preservatives were used. After each 24-h sample collection, volume and pH measurements were recorded, a sample was taken for creatinine analysis, and 2×5-ml and 4×500-ml aliquots were taken and stored frozen at −70°C until shipped. Aliquots were shipped under dry ice to the analytical laboratory and stored at −70°C until analysis. Saliva samples were collected in sterile Salivette tubes (Sarstedt, Newton, NC, USA) from each subject for five consecutive days at approximately 1830 hours. On day 4, two additional saliva samples were collected at approximately 0830 and 1330 hours. Clinical staff supervised and timed the process to assure compliance. Collected saliva samples were immediately stored at −20°C, shipped under dry ice to the analytical laboratory, and stored at −20°C until analysis. Salivary cotinine was analyzed by a method developed and validated at the analytical laboratory (Analytical Procedure Covance no. HB-02-061) based on a previously reported method (Bentley et al. 1999). 
Filter collection and analysis To control cigarette brand accessibility and document cigarette consumption, cigarettes were issued by clinical staff individually. The used filter had to be returned before a subject received their next cigarette. Used filters were processed under the supervision of clinical staff by having the subject remove any tobacco particles from the used filter before depositing it in an individually labeled glass container. Used filters were collected for five consecutive days with each day starting/ending at the same time as urine samples. The filters generated daily by each subject were shipped by overnight carrier at ambient temperature to the analytical laboratory. [Displaced table fragment — mean (SD) values by group and overall: 1.9 (0.9), 6.8 (0.6), 11.6 (1.4), 13.9 (2.0), 8.9 (4.6); Puffs/cig: 7.6 (0.7), 8.0 (1.1), 7.9 (1.0), 8.2 (0.9), 7.9 (1.0).] Immediately upon receipt, a 10.0-mm portion of the mouth end (tip) was cut from the filter using a specially designed jig, which kept the cut length constant and kept a single-edge razor blade perpendicular to the filter. Tip-to-tip length variation using the jig was within 3%. The tips were stored in labeled 30-ml glass jars with Teflon-lined lids at −20°C until analyzed. Nicotine yields from the human-smoked cigarettes were estimated by analyzing the tips for nicotine. Five separate machine smoking regimes were used to provide calibration curves for the filter tips of each brand style tested. Cigarettes from the same batches that the subjects smoked were used for calibration smoking. The smoking regimes were chosen to give a wide range of cigarette yields of approximately equal spacing. Table 2 shows the calibration puffing conditions for each tar band. These machine puffing regimes proved to cover the spread of human smoking results for all but 7% of the subject-days tested (5 ULH, 1 LTS, and 6 FF out of 370 total). Six of the 12 involved extrapolating by less than 0.1 mg of nicotine/cigarette and the maximum extrapolation was from 2.3 to 2.6 mg/cigarette. Calibration equations were calculated using a linear regression of nicotine yield as a function of tip nicotine. These equations were then used to estimate human-smoked cigarette yield from the measured tip nicotine. This method was validated in a separate study using duplicated human puffing profiles (Shepperd et al. 2006). Aging tests by the study sponsor have shown that tip nicotine values were constant when whole filters were stored in glass jars at ambient temperature for up to 31 days. For this study, tips were cut from the whole filters within 5 days. One to nine tips per sample were extracted using either 20 or 40 ml of methanol containing 0.038 mg/ml decanol internal standard. Tips were extracted for 40 min using a flatbed orbital shaker at 200 rpm. All available tips were divided into at least three separate batches, which were extracted and analyzed on different days to average analytical variation. For samples from days 2, 4, and 5, extract absorbance at 310 nm was also measured to audit the nicotine analysis using an independent method. Absorbance gives a measure of the tar deposited on the filter (Sloan and Curran 1981;Shepperd et al. 2006) and we have found that absorbance per tip correlates linearly with nicotine per tip. For days 1 and 3, replicate vials of the extract were stored at −20°C and subsequently analyzed for nicotine to audit the nicotine analysis further. The root mean square difference between the nicotine replicates was 5.5 μg/ml (7.3%).
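As an aside, the calibration step described above can be illustrated with the following sketch: a per-brand linear fit of nicotine yield against tip nicotine from machine-smoked cigarettes, which is then used to convert a subject-day's mean tip nicotine into an estimated daily mouth exposure. The numbers, variable names, and use of numpy are illustrative assumptions, not data or code from the study.

```python
# Hypothetical per-brand calibration and yield estimation (illustrative values only).
import numpy as np

# Calibration points for one brand style: tip nicotine (mg/tip) vs. machine-smoked yield (mg/cig)
tip_nicotine  = np.array([0.05, 0.10, 0.18, 0.27, 0.35])
machine_yield = np.array([0.4, 0.8, 1.3, 1.9, 2.4])

# Least-squares linear calibration: yield = slope * tip_nicotine + intercept
slope, intercept = np.polyfit(tip_nicotine, machine_yield, deg=1)

def estimate_daily_mouth_exposure(mean_tip_nicotine, cigarettes_per_day):
    """Estimated nicotine mouth exposure (mg/day) for one subject-day."""
    nicotine_per_cig = slope * mean_tip_nicotine + intercept
    return nicotine_per_cig * cigarettes_per_day

print(estimate_daily_mouth_exposure(mean_tip_nicotine=0.22, cigarettes_per_day=20))
```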
A backup set of replicate vials was kept for use when outlying data points were identified by either of the auditing methods. Samples were retested when they were outside of the 95% confidence interval of the absorbance per tip vs nicotine per tip correlation (days 2, 4, and 5) or when nicotine replicates differed by more than 20% (days 1 and 3). The extract was analyzed for nicotine by gas chromatography using an Agilent 5890 Series II with a flame ionization detector and a J&W Scientific 30 m Megabore ® 0.53 mm ID, DB-Wax (1.0 μm film) fused silica capillary column (Agilent, Palo Alto, CA, USA). The UV absorbance of the extract was measured using an Ocean Optics (Dunedin, FL, USA) PC2000 spectrometer equipped with a fiber optic dip probe with a 2-mm path length. This gave absorbance values within the linear range of less than 2 without further dilution. Absorbance was measured on the same day the tips were extracted because it was found that extract absorbance declines with even overnight storage. Results Measured urinary concentrations for each metabolite were multiplied by their respective daily urine volumes and converted based on molecular weights to yield recovery results in nicotine equivalents. The sum of the nicotine equivalents for nicotine, cotinine, and 3-HC were calculated for each subject per day to give total daily urinary nicotine equivalents (UNE) in milligrams per day. Nicotine yield per cigarette was calculated from the mean nicotine per tip values for each subject-day and the appropriate calibration equation for the brand smoked. This was multiplied by the number of cigarettes smoked by the subject that day to give nicotine yield in milligrams per day. This study design allowed us to achieve a wide range of nicotine exposure suitable for the correlation of the three methods of nicotine uptake estimation. Nicotine yield from the human-smoked cigarettes (mouth exposure) as measured by filter analysis ranged from 3.7 to 67.1 mg/day, UNE ranged from 3.1 to 48.4 mg/day, and saliva cotinine ranged from 70 to 866 ng/ml. The mean (SD) proportions of urinary metabolites, including the glucuronides, were 22% (9%) for nicotine, 35% (6%) for cotinine, and 43% (13%) for 3-HC of the total UNE measured. When expressed as a percentage of nicotine entering the mouth, the proportions were 19% (9%) for nicotine, 31% (11%) for cotinine, 39% (17%) for 3-HC, and 89% (25%) for total nicotine equivalents. Statistical correlations Observed values from the three methodologies were correlated with each other. Figure 1 shows the three plots and correlations for all data points. Figure 1a is a graph of the UNE vs daily nicotine yield estimated from filter analysis; Fig. 1b shows UNE vs saliva cotinine; and Fig. 1c shows daily nicotine yield from filter analysis vs saliva cotinine. Linear regressions were significant for both the slope and intercept (p<0.001) for all three correlations. The best correlation was obtained with UNE vs daily nicotine yield (Fig. 1a, R 2 =0.66). The slope indicates that 67% of the variation in nicotine mouth exposure calculated from filter analysis appeared as variation in the six urinary compounds measured. The standard error of the regression (SER) was 4.9 mg nicotine/day. Saliva cotinine did not correlate as well with either UNE (Fig. 1b, R 2 =0.49, SER= 5.9 mg/day) or nicotine yield (Fig. 1c, R 2 =0.45, SER= 7.4 mg/day). 
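A minimal sketch of the nicotine-equivalents bookkeeping described at the start of the Results above (analyte concentration multiplied by the 24-h urine volume, rescaled by molecular weight to nicotine equivalents, and summed over nicotine, cotinine, and 3-HC) is given below. The concentrations, urine volume, and the exact handling of the glucuronide conjugates are our own illustrative assumptions, not study data; the molar masses are approximate values for the free bases.

```python
# Hypothetical sketch of the daily urinary nicotine equivalents (UNE) calculation.
MW_NICOTINE = 162.23  # g/mol (approximate)
MOLAR_MASS = {"nicotine": 162.23, "cotinine": 176.22, "hydroxycotinine": 192.21}

def daily_une_mg(concentrations_ug_per_ml, urine_volume_ml):
    """Sum analyte outputs for one 24-h sample, expressed as mg of nicotine equivalents."""
    total_mg = 0.0
    for analyte, conc in concentrations_ug_per_ml.items():
        amount_mg = conc * urine_volume_ml / 1000.0                  # ug/ml * ml -> ug -> mg
        total_mg += amount_mg * MW_NICOTINE / MOLAR_MASS[analyte]    # rescale to nicotine equivalents
    return total_mg

# Example: illustrative free-plus-glucuronide totals for one subject-day (ug/ml)
sample = {"nicotine": 1.8, "cotinine": 2.5, "hydroxycotinine": 3.9}
print(daily_une_mg(sample, urine_volume_ml=2400))
```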
Because the amount of nicotine entering the mouth was greater than the sum of the UNE, the standard error of the nicotine yield would be greater than that of the UNE due to scaling even if the correlations were equivalent. To allow an equal comparison, the standard error of the percent of the difference between calculated (using the regression equation) and measured values was calculated for the three correlations. The standard error calculated in this manner was 32 and 42% for UNE vs nicotine yield and saliva cotinine, respectively, and 41% for nicotine yield vs saliva cotinine. Summary statistics for all correlations are in Table 3. The means of the five daily data values per subject are shown in Fig. 2 using the same format as Fig. 1. Again, for all correlations, the slopes and intercepts were significant (p<0.01). The correlation for UNE vs nicotine yield (Fig. 2a) improved significantly as shown in Table 3. The slope increased to 0.74 while the intercept moved closer to zero. The saliva cotinine correlations (Fig. 2b,c) also improved, but not as dramatically as the urinary nicotine. The correlation for UNE vs saliva cotinine (Fig. 2b) had an R 2 value of 0.55 with a standard error of 5.3 mg/day (36%). The correlation for nicotine yield vs saliva cotinine (Fig. 2c) had an R 2 value of 0.54 with a standard error of 6.5 mg/day (33%). Other findings Although the primary purpose of this study was to determine the statistical correlations between the three methods of estimating nicotine uptake, data collected during the experiment allowed for additional observations. Creatinine normalization Normalization by urinary creatinine is useful when only a single or partial daily urine collection is taken. To test the effect of creatinine normalization, observed daily UNE were divided by their respective millimole creatinine. A correlation was calculated with nicotine yield as the dependent variable and UNE/ mmol creatinine as the independent variable. The results are summarized in Table 3. Both the intercept and slope are significant (p<0.001) but normalization with creatinine clearly degrades the correlation compared to the correlation without normalization. The findings suggest that normalization with creatinine adds another factor of variability, which has a degrading effect on R 2 , and therefore indicates that the use of 24-h urine samples without normalization is expected to provide more accurate results than the analysis of partial (<24 h) urine samples with normalization. This is in agreement with Heavner et al. (2006), who showed mechanistically that creatinine normalization may be appropriate for some, but not all, of the urinary metabolites of nicotine and that other methods of normalization may be more appropriate for spot urine samples. FTC smoke yields Because this was a confined clinical study, subjects would not necessarily be expected to behave as if they were in their normal environment. However, because a wide range of FTC cigarette yields were tested, it was considered worthwhile to test the correlation between the measured FTC yields and the three methods of nicotine estimation. Correlation results are summarized in Table 3 for the 5-day mean values for the three methods vs the FTC values on a per subject basis. All correlations gave significant slopes and intercepts (p<0.001). 
Differences between tar band groupings The results for mouth exposure to nicotine (from filter analysis), UNE, saliva cotinine, and cigarettes smoked per day were grouped by tar band and the mean, standard deviation, and one-way analysis of variance (ANOVA) were calculated on a per subject basis (Altman and Bland 1997). The results are shown in Table 4. Group means increased in rank order with increasing group yield except for saliva cotinine and cigarettes smoked per day. All measurements except cigarettes smoked per day gave a significant (p<0.05) effect of group by ANOVA. For each measurement, results with different letter assignments are significantly different at the 95% confidence level (Fisher test). Normalizing urinary nicotine metabolites with creatinine resulted in an increase in p value for effect of group and no significant differences between the FF, LTS, and ULH groups. All three groups were significantly higher than the ULL group. This again shows that creatinine normalization degrades the discriminating power. Table 4 also shows the difference in self-reported and measured cigarette usage. The accuracy of self-reported cigarette usage has been questioned, and compliance with respect to truthfulness can be a concern (Byrd et al. 1998). In this study, subjects were asked to estimate their cigarette usage as part of the recruiting process. This gave a basis for comparison with the measured usage during the study (overall mean=22 cigarettes/day, SD=4.2, range 15.8 to 32.0). The mean value for the (5-day average − self-reported) usage per subject was −0.2 cigarettes/day (SD=5.9, range −19.4 to +11.7). Even when broken down to the smaller tar band groups of 15 to 20 subjects, the mean difference in measured and reported usage was less than 2 cigarettes/day. Within-subject variation Because the trial took place over five consecutive days, within-subject, day-to-day variation was calculated for each of the measured variables. In addition, on day 4, saliva samples were taken at approximately 0830, 1330, and 1830 hours for measurement of saliva cotinine to estimate within-day variation. Results are summarized in Table 5 and are expressed as a pooled (root-mean-squared) coefficient of variation. The pooled day-to-day variations for each variable were similar, ranging from 15.2% for cigarettes smoked per day to 18.1% for UNE per day. The within-day variation of 8.1% for saliva cotinine was approximately half the day-to-day variation of 15.7%. When a single-factor ANOVA was calculated using saliva cotinine for the three time periods sampled on day 4, the variation between time periods was insignificant (p=0.58). However, this could have been overwhelmed by the subject-to-subject variation in nicotine uptake. This factor was removed by dividing the individual measurement by the daily average per subject for all three time periods. A single-factor ANOVA was significant for time of day (p<0.001) with the mean (95% confidence interval) saliva cotinine values being 1.008 (0.015), 0.96 (0.012), and 1.03 (0.013) times the average daily value per subject for the 0830, 1330, and 1830 hour samples, respectively. Even though the time of day had a statistically significant effect, the practical differences were small. Discussion All three of the estimation methods correlated significantly with each other, but the best overall correlation was between the filter analysis method and UNE.
The slopes for the correlations of UNE as a function of mouth exposure to nicotine were 0.67 and 0.74 for the individual and 5-day average regressions, respectively, implying that about 70% of the difference in the UNE measured in this study was due to a difference in mouth exposure to nicotine. However, the mean of the total UNE expressed as a percentage of the nicotine entering the mouth was 89%. The mean value falls within the reported range of 80% (Benowitz et al. 1994) to 90% (Curvall et al. 1991) but the slopes fall below this range. The difference between the two methods is due to the significant intercept calculated using the linear regression. The intercept using the 5-day average results was lower than the intercept using the individual results and the slope was greater. The intercept could represent compartmental nicotine and/or metabolite storage with subsequent carryover from storage before entering the study. This influence should diminish with a 5-day average compared to using single-day results. An example of this is demonstrated in Fig. 1a where there is a single circled data point that appears to be an outlier. Urinary output of nicotine equivalents was approximately two times the mouth exposure of nicotine for that day. However, this was a day 1 measurement for a single subject. For subsequent days, the data points for this subject are buried within all the other data points. One possibility is that the subject had a much larger exposure to nicotine before participating in the study and the unusually high urinary output was due to clearance of the prior exposure. Averaging the results over 5 days would take out much of the metabolic influence and result in a much better correlation as demonstrated with these results. For the saliva cotinine correlations, the R 2 values only improved slightly with averaging. Part of the reason for the improvement with all correlations can be attributed to averaging out measurement variation. This should be similar for all correlations. However, the additional improvement in the filter vs urine correlation can be explained by metabolism. Nicotine has a relatively short serum elimination half-life of 2.2-2.9 h (Scherer et al. 1988;Benowitz and Jacob 1993;Benowitz et al. 1999, 2002, 2004), which means that the urinary nicotine should be from the nicotine taken in that day. Cotinine has a serum elimination half-life typically reported at 16-18 h (Scherer et al. 1988;Benowitz et al. 1999, 2002, 2004;De Schepper et al. 1987) and a similar urine elimination half-life (Benowitz and Jacob 1993;De Schepper et al. 1987). 3-HC, which is downstream metabolically from cotinine, has a serum elimination half-life of 5.9-6.6 h with a similar urine elimination half-life (Scherer et al. 1988;Benowitz and Jacob 2001). Given the pharmacokinetic information, these urinary metabolites must have originated from nicotine exposure on multiple days as has been demonstrated directly using nicotine infusion (Scherer et al. 1988). Because measurements were taken over sequential days of input (filter analysis), the temporal source of urinary and salivary metabolites could be estimated by multiple regression of the metabolites vs the daily nicotine exposure.
This analysis was performed using the nicotine exposure for the current day and the two previous days according to the equation: metabolite output = intercept + b_N × (N) + b_(N-1) × (N-1) + b_(N-2) × (N-2), where (N), (N-1), and (N-2) equal the nicotine exposure for the current day, 1 day before, and 2 days before the urine or saliva sample, respectively. Metabolite measurements from days 3, 4, and 5 were used. None of the coefficients for nicotine exposure from 2 days before the urine collection were significant (p>0.15). For saliva cotinine, the coefficient for 2 days before was not significant at the 95% confidence level, but the p value of 0.07 was small enough to warrant consideration. Given this, the regressions were recalculated using (N) and (N-1) and including the urinary metabolite measurements from day 2 as well. The results of this regression are shown in Table 6. For the sum of all urinary metabolites, the coefficients were significant for the current day and prior day's exposure. This implies that the total urinary metabolites originated from nicotine exposure over 2 days. The only coefficient that was significant for the urinary nicotine regression was for the current day's exposure, as would be expected. The coefficient of 0.16 implies that 16% of the nicotine entering the mouth appeared as urinary nicotine plus the glucuronide. This agrees closely with the estimates of 13.8% (Benowitz et al. 1994) and 10-15% (Curvall et al. 1991). For the urinary cotinine regression, coefficients for the current day and previous day were significant with the current day being about twice that of the previous day. The two coefficients imply that 26% of the nicotine entering the mouth appears as urinary cotinine plus the glucuronide. This also agrees closely with estimates of 25.6% (Benowitz et al. 1994) and 20-25% (Curvall et al. 1991). For the 3-HC regression, the coefficients for the current day and previous day were significant with the current day being slightly smaller than the previous day. The sum of the coefficients (0.28) was smaller than the reported 41 to 60% of the total urinary metabolites (Benowitz et al. 1994;Curvall et al. 1991). It is possible that because 3-HC is the third step in the metabolism of nicotine, there was too much of a smoothing effect for changes in daily mouth exposure to be fully captured. For saliva cotinine, all coefficients were significant, suggesting that saliva cotinine results are an amalgam of at least 2 days of nicotine exposure. Saliva cotinine correlations suffer because results are expressed as concentration rather than an absolute value, and as such can be influenced by body size. In addition, other variables come into play that are not easily explained. An example of this is shown in Figs. 1 and 2 by the data points with boxes around them. These were from one subject who appeared to have unusually low saliva cotinine values for both UNE (Figs. 1b and 2b) and mouth exposure of nicotine (Figs. 1c and 2c). This subject had the highest mouth exposure of nicotine of all the subjects studied, yet the saliva cotinine values were only slightly above midrange. There was nothing unusual about this subject, and the average urinary output was within 1 SD (1,100 ml) of the average for all subjects (2,400 ml). It is unlikely that nicotine yields were overestimated because Fig. 1a shows that the data points are scattered about the regression line for UNE vs nicotine yield. It is also unlikely that the saliva cotinine concentrations were in error because the saliva cotinine for each day was analyzed in separate batches along with other samples.
The subject did not rapidly metabolize cotinine to 3-HC, because cotinine accounted for 47% of the urinary metabolites measured for this subject compared to an average of 35% for all subjects, indicating the converse. Therefore, it must be concluded that this is an anomaly characteristic of this individual. As shown in Table 3, with many of the estimation methods, the correlations are only slightly better than simply correlating with the FTC nicotine yield of the cigarette. The average nicotine mouth exposure, as measured by the filter method, correlated with the FTC nicotine yield with a standard error of 38% and an R 2 value of 0.41. The correlation of nicotine mouth exposure with saliva cotinine gave a standard error of 33% and creatinine-normalized urinary metabolites gave a standard error of 32%, with R 2 values approximately 0.1 higher. Correlations of FTC nicotine with biomarkers in this study were stronger than those reported in other studies (Byrd et al. 1998;Jarvis et al. 2001;Ueda et al. 2002;Hecht et al. 2005;Bernert et al. 2005). We believe that there are valid reasons for this. One is that the correlations were performed using the 5-day averages per subject instead of a single sample per subject. In addition, exact compliance with brand and cigarette consumption was assured in the current study, whereas all but one (Bernert et al. 2005) of the referenced studies used self-reported brand identification and cigarette consumption. Self-reported brand information was reported to have about a 25% error rate when compared with packs returned (Peach et al. 1986) or in a test-retest comparison (Eisenhower et al. 1993). This and the potential for use of alternate brands during a study can further confound a correlation with biomarkers. Other confounding factors are the use of creatinine-normalized spot urine samples instead of 24-h urine samples and analyzing for a subset of the nicotine metabolites used in the current study. Simple creatinine normalization is only biologically valid for xenobiotics that have the same excretion mechanism and urinary flow rate dependence as creatinine. Heavner et al. (2006) have shown that 3-HC (free and glucuronide) and cotinine glucuronide have urinary flow rate dependence similar to creatinine, while nicotine (free and glucuronide), free cotinine, 1-hydroxypyrene, and the free and glucuronide forms of 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol do not. In conclusion, two methods stand out as superior. One is the filter method, which estimates mouth-level exposure directly on a per-cigarette basis. Filter collection need not be quantitative; the returned filter can be compared to the subject's stated brand, and it can readily be determined whether or not it was smoked. Thus, brand compliance and smoking status can be assured even if the subject happened to occasionally use a different brand or other form of nicotine during the study. With self-reported daily cigarette use from groups of at least 15-20 subjects, the exposure per cigarette can be converted accurately to daily exposure. In addition, mouth exposure to tar can also be estimated using the filter method (Shepperd et al. 2006). The other measurement, which is considered by many to be the "gold standard," is the measurement of urinary nicotine and metabolites from 24-h urine samples without creatinine normalization. This method appears to reflect the mean daily nicotine uptake of the last 2 days.
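To make the two-day dependence just mentioned concrete, here is a hypothetical sketch of the lagged multiple regression described in the Discussion, in which a daily metabolite measure is regressed on nicotine mouth exposure for the current and previous day. The data are invented for illustration, and the ordinary-least-squares implementation is our own; the study fit this model to per-subject daily values from days 2-5.

```python
# Hypothetical two-lag regression: metabolite = b0 + b1*exposure(N) + b2*exposure(N-1)
import numpy as np

# Illustrative per subject-day values (mg/day)
exposure_n   = np.array([30.0, 28.0, 33.0, 25.0, 31.0, 27.0])  # current day
exposure_nm1 = np.array([29.0, 30.0, 28.0, 33.0, 25.0, 31.0])  # previous day
metabolite   = np.array([24.0, 23.5, 25.8, 23.0, 24.6, 23.2])  # e.g., UNE

# Design matrix with intercept column, solved by ordinary least squares
X = np.column_stack([np.ones_like(exposure_n), exposure_n, exposure_nm1])
coef, *_ = np.linalg.lstsq(X, metabolite, rcond=None)
print(dict(zip(["intercept", "b_current_day", "b_previous_day"], coef)))
```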
2017-08-02T18:25:17.774Z
2006-10-07T00:00:00.000
{ "year": 2006, "sha1": "003007bc7aa1917ddd4be561c7f2dbffb28977ed", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00213-006-0586-x.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2a985d72dd229a8f83658f6f3dc6f2be06e7b2e4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119147995
pes2o/s2orc
v3-fos-license
A simplicial foundation for differential and sector forms in tangent categories Tangent categories provide an axiomatic framework for understanding various tangent bundles and differential operations that occur in differential geometry, algebraic geometry, abstract homotopy theory, and computer science. Previous work has shown that one can formulate and prove a wide variety of definitions and results from differential geometry in an arbitrary tangent category, including generalizations of vector fields and their Lie bracket, vector bundles, and connections. In this paper we investigate differential and sector forms in tangent categories. We show that sector forms in any tangent category have a rich structure: they form a symmetric cosimplicial object. This appears to be a new result in differential geometry, even for smooth manifolds. In the category of smooth manifolds, the resulting complex of sector forms has a subcomplex isomorphic to the de Rham complex of differential forms, which may be identified with alternating sector forms. Further, the symmetric cosimplicial structure on sector forms arises naturally through a new equational presentation of symmetric cosimplicial objects, which we develop herein.

Contents
1 Introduction
2 Tangent categories and differential objects
3 Overview of main results, with examples of sector forms
4 Symmetric monoids, semigroups, and finite sets
5 Symmetric cosimplicial objects
6 Presenting symmetric cosimplicial objects by fundamental cofaces
7 The symmetric cosimplicial set of sector forms
8 Complexes of forms and the exterior derivative
9 Relationship to de Rham in synthetic and classical differential geometry
10 Conclusions and future work

First author supported by an NSERC Discovery Grant and second author by an AARMS Postdoctoral Fellowship. The second author gratefully acknowledges further financial support in the form of a Mount Allison University Research Stipend. Thanks to Robin Cockett for useful discussions. Introduction Tangent categories [34,5] provide an axiomatization of one of the key structures in differential geometry: the tangent bundle. Tangent categories are useful for a number of reasons. First, constructions of objects like the tangent bundle appear in a variety of categories, some related to the category of smooth manifolds, others to categories in algebraic geometry, and others to categories in homotopy theory and computer science. Thus, it is helpful to have a single axiomatization which can deal with all these examples simultaneously. Secondly, a variety of definitions and constructions in differential geometry are closely linked to the tangent bundle. For example, vector fields, the Lie bracket, connections, and differential forms can all be viewed as certain maps in the category of smooth manifolds which take as domain or codomain the tangent bundle (or bundles related to it). Thus, one can hope to give definitions and prove results about these objects in an arbitrary tangent category. This paper is a contribution to the second aspect of this program; in particular, in this paper we are interested in determining how to define differential forms, their exterior derivative, and the resulting cochain complex of de Rham in an arbitrary tangent category. However, to do so requires a close inspection of the nature of differential forms. This inspection reveals an interesting structure, a simplicial object of sector forms, of which de Rham cohomology can be seen as a simple consequence.
There is a relatively straightforward analog of the notion of differential form in any tangent category. Classical differential n-forms on a smooth manifold M can be viewed as multilinear, alternating maps T n M → R where T n M is the object of consisting of all "n-tuples of tangent vectors at a common point on M". (That is, T n M is the fibre product of n copies of the tangent bundle T M → M over M.) These objects exist in any tangent category, and thus one can define (classical) differential forms in any tangent category as above, with R replaced by a suitable coefficient object. However, a difficulty arises when attempting to define a direct analog of the exterior derivative of such forms in an arbitrary tangent category. In the category of smooth manifolds, the exterior derivative of an n-form ω : T n M → R is an (n + 1)-form ∂ω, which can be defined locally on open subsets U ∼ = R n . In the case where M = R n one can define ∂ω as an alternating sum of certain maps T n+1 M → R [20, 7.8], each expressed in terms of the Jacobian derivative T (ω) : T (T n M) → T R ∼ = R × R by pre-composing with a certain canonical map κ : T n+1 M → T (T n M). A similar definition applies in any Cartesian differential category [9]. However, in an arbitrary tangent category, the objects need not be manifolds, and the local definition cannot be mimicked globally for want of a suitable map κ to mediate between the intended domain of ∂ω (namely T n+1 M) and the domain of T (ω) (namely T (T n M)). One solution to this problem can be found by considering how synthetic differential geometry (SDG) handles differential forms. In SDG one finds categories which have representable tangent structure; these are categories with an object D for which there is a tangent functor T defined by T M := M D . Various definitions and results have been transplanted from classical differential geometry to models of SDG; see, e.g., [19,25,32]. In a typical model of SDG, as in a tangent category, the objects need not be locally isomorphic to some R n . Thus, for a general object in such categories, the exterior derivative also cannot be defined by mimicking the classical definition directly. In SDG, the solution to this problem is to look at a different type of map: instead of considering multilinear alternating maps from T n M → R, one instead considers multilinear alternating maps where T n M is the nth iterate of the tangent bundle of M. Such maps were first considered in [17,15] and referred to as singular forms in [25,Definition 4.1]; we shall use that name here to distinguish them from other notions of form we shall consider. In contrast to the classical case, the Jacobian derivative T (ω) of a singular n-form ω does have the expected domain, namely T n+1 M. The references above show how to define an exterior derivative for such forms; the definition involves an alternating sum of permutations of the Jacobian derivative. Moreover, it has been shown that for a particular model of SDG which contains the category of smooth manifolds, if M is a smooth manifold then singular forms are in bijective correspondence with classical differential forms, and their exterior derivatives agree [32,IV,Proposition 3.7]. This shows that models of SDG have a notion of de Rham complex which generalizes the classical notion. Thus, for tangent categories, a natural point of investigation is to look at multilinear alternating maps from T n M to some coefficient object E. 
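For orientation, one way to write the local definition recalled above, on M = R^m and for constant tangent vectors v_0, ..., v_n in R^m, is the following standard formula (this is not the definition used later in the paper, and the sign convention is the usual classical one):
\[
(\partial\omega)(x)(v_0,\dots,v_n) \;=\; \sum_{i=0}^{n} (-1)^i \left.\tfrac{d}{dt}\right|_{t=0}\, \omega(x + t v_i)(v_0,\dots,\widehat{v_i},\dots,v_n),
\]
an alternating sum of directional derivatives of ω; each summand is essentially a composite of the Jacobian derivative T(ω) with a map of the kind that the transformation κ is meant to supply in the general setting.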
We show in this paper that such maps indeed have an exterior derivative that generalizes the definition from SDG, and has the required properties (Proposition 8. 13). Thus, this shows that tangent categories have a notion of de Rham cohomology (namely, the cohomology of the resulting complex) which generalizes the SDG notion, and hence also generalizes the classical notion. However, there is much more to say about maps from T n M to E in a tangent category. In particular, none of the results that need to be proved to show that such maps have an exterior derivative require that the maps from T n M be alternating; multilinearity suffices. Thus, one is led to consider maps ω : T n M → E (for a suitable coefficient object E) which are multilinear but not necessarily alternating. Such maps do not appear in published accounts of SDG, but have appeared in differential geometry [38,33]. They are known as sector forms (for some basic examples of sector forms, see Section 3). The exterior derivative of singular forms works for sector forms, and so in addition to the complex of singular forms, tangent categories have complexes of sector forms (8.2). However, there is much more structure to these sector forms than a cochain complex. We show that for each n, there are n + 1 'derivative' or co-face operations which take sector n-forms to sector (n+ 1)-forms (Theorem 7.7), there are n−1 symmetry operations which take sector n-forms to sector n-forms, and there are n − 1 co-degeneracy operations which take sector n-forms to sector (n − 1)-forms (Proposition 7.3) 1 . Taken together, these operations constitute the structure of an (augmented) symmetric cosimplicial object [1,11] of sector forms (Theorem 7.7); that is, there is a functor on the category of finite cardinals. This is a remarkably rich structure, and has not previously appeared in either ordinary differential geometry or synthetic differential geometry 2 . Thus, we view the symmetric cosimplicial object of sector forms as the primary object of interest in relation to the various notions of differential forms considered above. In particular, from this cosimplicial object one can obtain as a simple corollary the complex of sector forms and the complex of singular forms 3 . Moreover, by generalizing to maps which are not necessarily alternating, one also generalizes covariant tensors (multilinear maps with domain T n ) which have numerous uses throughout differential geometry [38,Section 3.1]. In other words, sector forms generalize three important ideas in differential geometry: differential forms, covariant tensors, and singular forms. Thus, it is important to understand the structure of sector forms, and this paper represents a substantial advance in the study of these objects in the general setting of tangent categories. Alternating Not alternating Domain T n differential form covariant tensor Domain T n singular form sector form It is also worth noting that this paper contains two other points of independent interest. First, to establish the symmetric cosimplicial structure of sector forms, it becomes natural to give an alternative presentation of symmetric cosimplicial objects, and in particular to give an alternative presentation of the category of finite cardinals. The standard presentation [11] involves co-face maps, symmetry maps, and co-degeneracy maps. 
However, for each n, the n + 1 co-face maps from n to n + 1 can all be obtained by applying symmetries to a single co-face map, and thus one can show (Theorem 6.4) that the category of finite cardinals can be presented by symmetries, co-degeneracies, and a single co-face map for each n. The second point of interest relates to methodology in tangent categories in general. The definition of the symmetry and co-degeneracy maps of sector forms involves various combinations of the lift natural transformation ℓ : T / / T T and the canonical flip transformation c : T T / / T T (which are part of the definition of a tangent category). To establish the various identities that are required of the symmetries and co-degeneracies, one then must perform various complicated calculations with these maps. One way to handle the complexity of such calculations is to use string diagrams, as was done in previous work on tangent categories [6]. Another way to handle the complexity is to use a recently discovered embedding theorem for tangent categories [10] (for more on this approach, see the discussion after 2.4). However, here we use a different approach. Diagrams involving the maps ℓ and c in a tangent category X can be viewed as the application of a certain functor from the category of finite cardinals and surjections (written as finCard s ) to the category of endofunctors on X (Example 5.14). Thus, to establish the commutativity of such a diagram of natural transformations, it suffices to establish the commutativity of a certain diagram in finCard s , and this is typically straightforward. For examples of this proof technique, see Proposition 7.3 and Theorem 7.6. The paper is laid out as follows. In section 2, we review the definitions of tangent categories and differential objects, which are the coefficient objects in which the forms will take their values. Before going into the various details required of many of the proofs, in section 3 we give an overview of the key definitions and results of the paper, providing more detail than in the discussion above, and also providing some examples of sector forms. In sections 4, 5, and 6, we study symmetric cosimplicial objects and related notions, emphasizing their relations to categories of finite cardinals and establishing equational presentations of some of these key categories. Throughout these sections, we show how some of the structure of these categories is present in the category of endofunctors on a tangent category. In section 7, we look at sector forms, their fundamental derivative, and how they have the structure of a symmetric cosimplicial object. In section 8, we obtain the complexes of sector forms and singular forms as simple consequences of the symmetric cosimplicial structure on sector forms. In section 9, we study forms in the presence of representable tangent structure, and we show how our definitions of sector forms and singular forms in a tangent category relate to existing definitions in classical and synthetic differential geometry. Finally, in section 10, we look at various ways to extend or add to the results we have presented. 2. Tangent categories and differential objects 2.1. Notation Throughout this paper, composition in diagrammatic order is indicated with a semicolon, so that f , followed by g, is written as f ; g. When F and G are functors, we will sometimes denote the composite F ; G instead by GF , so that juxtaposition of functors denotes classical right-to-left composition. 
Given an object C of a category C , we denote by Aut C (C) the group of automorphisms of C in C . Rather than straying from convention by defining multiplication in Aut C (C) in terms of the diagrammatic composition order, we instead take the view that groups are certain one-object categories, and we define composition in Aut C (C) as in C . • (universality of vertical lift) defining v : T 2 M / / T 2 M by v := π 1 ; ℓ, π 2 ; 0 T T (+), the following diagram is a pullback that is preserved by each T n : A category with tangent structure, (X , T), is a tangent category. Example. Here are several important examples of tangent categories, the first four of which are drawn from [5] and [7]. (i) Finite dimensional smooth manifolds with the usual tangent bundle structure. (iii) The infinitesimally and vertically linear objects in any model of synthetic differential geometry [19] form a tangent category: if D is the object of square-zero infinitesimals, then we take T M := M D . (iv) The opposite of the category of finitely presented commutative rings (or more generally commutative rigs 4 ) is another example of a category with representable tangent structure: here D is the 'rig of infinitesimals', N[ε] := N[x]/(x 2 = 0) and again T A := A D . (v) A very different example of a tangent category arises from abstract homotopy theory, in particular in work on Abelian functor calculus [13]. In [2], the authors show that a certain operation in abelian functor calculus gives rise to a Cartesian differential category [3]. As every Cartesian differential category is a tangent category [5,Section 4.2], this example is also a tangent category; this insight was useful in providing a straightforward proof of the existence of certain higher-order chain rules for abelian functor calculus (see the discussion at the top of page 5 in [2]). (vi) Other examples of tangent categories that arise as Cartesian differential categories include the models of the differential λ-calculus that appear in computer science (for example, see [30] and [8]) . More examples can be found in [5] and [7]. In addition to these examples, recent work of Leung [27] and Garner [10] establishes certain equivalent formulations of tangent categories that provide new perspectives on the axioms. Leung's work shows that tangent categories are closely related to categories of Weil algebras, while Garner's work builds on this result to show not only that tangent categories can be seen as certain types of enriched categories, but also that every tangent category can be embedded in a tangent category that is representable, in the sense that the functor T is representable (for more on representable tangent categories, see [5,Section 5] and also 9.1 below). This last point allows one to work in an arbitrary tangent category as if T was representable, allowing for calculations in a tangent category which closely resemble those in SDG. One may ask, for example, whether this could simplify the proofs of some of the results in this paper. However, we have not found that this was the case. Certain of our initial attempts at proofs of the main results of this paper indeed used representable tangent categories, but the resulting calculations were no less lengthy than those recorded herein; indeed, by observing that the transformations ℓ and c generate a model of a certain PROP, in Mac Lane's sense [28], we have reduced many of these calculations to showing that certain diagrams of finite sets commute. 
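As an aside on examples (i) and (iv) above, a reader unfamiliar with the "dual numbers" point of view may find the following small computational sketch helpful. It is not part of the paper: the class name Dual and the sample polynomial f are ours, and the sketch only illustrates the arithmetic of A[ε] = A[x]/(x² = 0), namely that applying a polynomial map to x + 1·ε computes the tangent map (f(x), f'(x)).

```python
# Minimal sketch (not from the paper): the dual-numbers arithmetic A[eps]/(eps^2 = 0),
# applied to ordinary Python floats.  An element a + b*eps is stored as Dual(a, b);
# applying a polynomial map f to Dual(x, v) returns Dual(f(x), f'(x) * v),
# i.e. the effect of the tangent map on the tangent vector (x, v).

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b                      # represents a + b*eps, eps^2 = 0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

    def __repr__(self):
        return f"{self.a} + {self.b}*eps"


def f(x):
    # an ordinary polynomial map R -> R
    return 3 * x * x + 2 * x + 1


# The tangent map applied to the tangent vector (x, v) = (2, 1):
print(f(Dual(2.0, 1.0)))   # expected: 17.0 + 14.0*eps, i.e. (f(2), f'(2))
```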
More importantly still, it was only by working with the relatively restrictive stuctures of tangent categories and associated PROPs that we discovered the main results of this paper, including the result that sector forms carry cosimplicial structure. Commutative monoids in a Cartesian tangent category The coefficient objects of our forms will in particular be commutative monoids, so it is useful to first make some remarks about commutative monoids in a tangent category. A tangent category (X , T) is said to be Cartesian if X has finite products that are preserved by the tangent functor T : X → X . In this case we denote by cmon(X ) the category of commutative monoid objects in X . For each object X of X , the functor X (X, −) : X → set preserves limits and so sends each commutative monoid E in X to a commutative monoid X (X, E) in set. When X itself is a commutative monoid in X , the hom-set cmon(X )(X, E) is a submonoid of X (X, E). Composition in cmon(X ) preserves this monoid structure in each variable separately, so we say that cmon(X ) is an additive category, which, following previous papers on tangent categories, we take to mean a category enriched in commutative monoids. Moreover, we have the following: 2.6. Proposition. Let (X , T) be a Cartesian tangent category. Then the tangent endofunctor T : X → X lifts to an endofunctor cmon(T ) : cmon(X ) → cmon(X ). Moreover, the endofunctor cmon(T ) is additive in the sense that it preserves the commutative monoid structure on the hom-sets of cmon(X ). Furthermore, if φ : T i ⇒ T j is a natural transformation between iterates of T (i, j ∈ N), then for each commutative monoid E in X the morphism φ E : Proof. There is a 2-category Cart whose objects are categories with finite products, wherein the 1-cells are functors preserving finite products and the 2-cells are arbitrary natural transformations. Letting C denote the Lawvere theory of commutative monoids, there is an equivalence of categories Cart(C , D) ≃ cmon(D) for every object D of Cart. But Cart(C , −) : Cart → Cat is a 2-functor valued in the 2-category of categories, and it follows that the assignment D → cmon(D) underlies a 2-functor cmon : Cart → Cat. We can apply this 2-functor to the 1-cell T and to the 2-cell φ, thus proving two of the above claims. Lastly, cmon(X ) is an additive category with finite products, which are therefore finite biproducts, and since T preserves finite products it follows that cmon(T ) preserves finite biproducts and hence is additive. 2.7. Differential objects Before we define sector forms and singular forms, we need to consider the objects in which these forms will take their values. These will be differential objects, which are certain objects E whose tangent bundle T E is simply a product E × E. This is formulated more precisely as follows. Example. Here are some important examples of differential objects from [7]. (i) In the category of smooth manifolds, each Cartesian space R n is a differential object, where λ : R n → T R n = R n × R n sends x to (x, 0), construed as the tangent vector x at the point 0. (ii) Similarly, in the category of convenient manifolds, each convenient vector space is a differential object. (iv) Differential objects in a tangent category associated to a model of SDG are precisely the Euclidean R-modules [25, 1.1.4] (see [7,Theorem 3.9] for a proof of this). All of the above examples are subtractive. 2.10. Remark. By definition, if E is a differential object, then T E ∼ = E × E. 
Through the isomorphism ν, one can show that the projection from the second component is p E : T E / / E. We will writep : T E → E for the projection to the first component, and refer to it as the principal projection. Differential objects can be alternatively axiomatized in terms of the principal projectionp. For example, this was how differential objects were originally presented [5,Definition 4.8]. It is a relatively straightforward exercise to show the equivalence of the two definitions [7,Proposition 3.4]. We will make use of both the lift λ : E / / T E and the principal projectionp : T E / / E when investigating differential forms with values in E. In particular, the following results about these maps will be useful. 2.11. Proposition. If E is a differential object with lift λ and principal projectionp, then (i)p is a homomorphism of commutative monoids, where T E has the commutative monoid structure as discussed in 2.6. Proof. The first five parts are established in [7, Propositions 3.4 and 3.6, Definition 3.1]. For (vi), we regard T E as a product E × E with projections (p, p E ). Then λ is the morphism 1 E , 0 induced by the identity morphism on E and the zero element 0 of the commutative monoid X (E, E) (2.5). Hencê p ; λ ;p =p ; 1 E =p = ℓ E ; Tp ;p by (iv). Also, using the naturality of p, the equation ℓ ; c = ℓ, and the fact that (ℓ E , 0 E ) and (c E , 1 T E ) are bundle morphisms (2.3) we compute that where each unadorned 0 denotes the zero element of the relevant hom-set (2.5) while 0 E = 0, 1 E : E → T E ∼ = E × E denotes the zero section, i.e. the component of 0 : I ⇒ T at E. Hencep ; λ = ℓ E ; Tp as needed. For (vii), we again use the fact that T E is a product E × E with projections (p, p E ). Appending the first projectionp to the equation in question, we compute that T λ ; c E ; Tp ;p = T λ ; Tp ;p = T (λ ;p) ;p =p =p ; λ ;p using (ii) and (v). Appending the second projection p E , we compute that using the naturality of p, the additivity of T (2.6), the fact that λ ; p E = 0, the fact that p is a homomorphism of commutative monoids, and the fact that Overview of main results, with examples of sector forms To prove the main results of this paper we have used a variety of techniques and definitions. However, we feel it is important to present many of the main results in a single place so as to be easily locatable and so as to feature them prominently. In this section we also look at some examples of sector forms, as they are perhaps much less familiar to the general reader than differential forms. Throughout this section, we work in a Cartesian tangent category (X , T) with a fixed object M and a fixed differential object (E, σ, ζ, λ). We first define a number of natural transformations between powers of T which will appear in the definitions of certain types of forms and their derivatives. Much of the work of sections 4 and 5 deals with how to interpret these natural transformations and handle them more efficiently. Definition. Define The main object of study of this paper is the following notion of form, originally due to White [38]. Definition. A sector n-form on M with values in E is a morphism ω : T n M → E such that for each i ∈ {1, ..., n}, ω is linear in the ith variable; that is, the following diagram commutes 5 : The set of sector n-forms on M with values in E will be denoted by Ψ n (M; E) (often abbreviated to Ψ n (M)). 
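In equational terms (this is how the condition is recorded again in 7.1 below), ω : T^n M → E is a sector n-form precisely when
\[
a^n_j M \;;\; T\omega \;=\; \omega \;;\; \lambda \;:\; T^n M \longrightarrow TE \qquad (j = 1,\dots,n),
\]
where a^n_j : T^n ⇒ T^{n+1} is the transformation defined at the start of this section and λ : E → TE is the lift carried by E. For n = 1 the single condition reads ℓ_M ; Tω = ω ; λ, which is the equation analysed in Example 3.3 below.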
To help explore the similarities and differences between ordinary differential forms and sector forms, we will briefly look at sector 1-and 2-forms on R in the category of smooth manifolds. Example. Let us first consider what sector 1-forms on R with values in R consist of. By definition, such a form consists of a map ω : T R / / R that satisfies the single linearity equation Recall that T R is simply R×R, via the two projections p R : T R / / R andp : T R / / R. Hence the commutativity of the above diagram is equivalent to its commutation when post-composed with p R andp, respectively. But ω ; λ ; p R = ω ; ! R ; ζ = ! T R ; ζ by 2.8, and ℓ R ; T ω ; p R = ℓ R ; p T R ; ω = ℓ R ; c R ; p T R ; ω = ℓ R ; T p R ; ω = p R ; 0 R ; ω by the axioms for tangent categories (2.3). Hence ω ; λ ; p R = ℓ R ; T ω ; p R ⇔ ! T R ; ζ = p R ; 0 R ; ω ⇔ p R ; ! R ; ζ = p R ; 0 R ; ω ⇔ ! R ; ζ = 0 R ; ω since p R is a retraction (of 0 R ). On the other hand, ℓ R ; T ω ;p = ℓ R ; Dω where Dω = T ω ;p : T 2 R → R is the directional derivative of ω, so since λ ;p = 1 by 2.11 we find that the above linearity equation holds if and only if the following equations hold: But the first equation entails the second, since if the first holds then 0 R ; ω = 0 R ; ℓ R ; Dω = 0 R ; ℓ R ; T ω ;p = 0 R ; 0 T R ; T ω ;p = 0 R ; ω ; 0 R ;p = 0 R ; ω ; ! R ; ζ = ! R ; ζ by 2.3 and 2.11. Hence the linearity equation is equivalent to the equation ℓ R ; Dω = ω. In order to reformulate this equation more concretely, let us write ω : R×R → R as a function ω(x, v) of two variables x, v, so that we may write its first and second partial derivatives briefly as ∂ω ∂x and ∂ω ∂v . We can write T 2 R = T (R × R) as a product T 2 R = (R × R) × (R × R), whereupon Dω : T 2 R / / R is given by and ℓ : T R / / T 2 R is given by So the linearity equation says Thus, if we set f (x) = ∂ω ∂v (x, 0), then It is easy to see that any map of this form is indeed a sector 1-form. So, in this case, sector 1-forms are precisely the same as ordinary differential 1-forms (more generally, this is true for any smooth manifold). 3.4. Example. Despite 3.3, sector n-forms for n ≥ 2 are in general quite different from ordinary differential n-forms, even for a simple smooth manifold such as R. For example, for n = 2, a sector 2-form on R consists of a map ω : Using similar reasoning to the previous example, it is straightforward to show that any map of the form (where f (x) and g(x) are smooth functions from R to itself) is an example of a sector 2-form 6 on R. This is very different from the general description of ordinary differential 2-forms on R: there is only one, namely the zero form. One of the key points of this paper is that the sector n-forms have a rich variety of operations that can be performed on them. In particular, there are n + 1 different derivative or co-face operations δ n i which take sector n-forms to sector (n + 1)-forms: The operations δ, ε, σ together endow the set of sector forms on M with the structure of an (augmented) symmetric cosimplicial commutative monoid, i.e. a functor from the category of finite cardinals to the category of commutative monoids. In particular, this means that for every function between finite cardinals f : n → m there is an associated monoid homomorphism Ψ f : Ψ n (M) / / Ψ m (M) (given as some composite of the above co-face, co-degeneracy, and symmetry operations), and this entire assignment is functorial. Moreover, if E is subtractive, then this structure forms a symmetric cosimplicial abelian group. 
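In coordinates, the computations in Examples 3.3 and 3.4 come down to the following. We identify T²R with R⁴ via coordinates (x, v₁, v₂, d) (base point, two first-order tangents, one second-order component); this choice, and the displayed general form of a sector 2-form, should be read as our reconstruction of the omitted formulas rather than as quotations:
\[
\ell_{\mathbb R}(x,v) = (x,0,0,v), \qquad
D\omega(x,v_1,v_2,d) = \tfrac{\partial\omega}{\partial x}(x,v_1)\,v_2 + \tfrac{\partial\omega}{\partial v}(x,v_1)\,d ,
\]
so the linearity equation ℓ_R ; Dω = ω for a sector 1-form becomes ω(x,v) = (∂ω/∂v)(x,0)·v = f(x)·v, as stated; and a sector 2-form on R is (plausibly) exactly a map of the form
\[
\omega(x, v_1, v_2, d) \;=\; f(x)\,v_1 v_2 + g(x)\,d
\]
for smooth functions f, g : R → R, which is consistent with all the conclusions drawn from it below.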
In general, any cosimplicial abelian group has an associated cochain complex whose differential ∂ is given by taking an alternating sum of the co-face maps: And so there is a complex of sector forms (8.2), whose differential we call the exterior derivative. The fact that the sector forms constitute a cochain complex appears to be a new result in differential geometry. 3.5. Example. Let us consider the first several groups of this complex for M = E = R in the category of smooth manifolds. By definition, a sector 0-form on R is simply a smooth map ω : R / / R. By Example 3.3, a sector 1-form on R is the same as a differential 1-form on R; that is, The exterior derivative of a 0-form ω is the same as for ordinary differential forms: For a sector 1-form ω : x, v → f (x)·v we have T ω ;p = Dω, the directional derivative of ω, which in this case is So then the exterior derivative of ω is ∂(ω) = Dω − c R ; Dω and hence is given by since the effect of c R is simply to switch the middle two co-ordinates (v 1 and v 2 ). So, in this case, every sector 1-form has exterior derivative 0. (Note that this is also automatic since every 1-form is the exterior derivative of a 0-form and the sector forms constitute a complex. However, it is useful to see how this works explicitly). However, the exterior derivative of a sector 2-form is not typically zero. For a sector 2-form its directional derivative takes as input an 8-tuple Note that if this is identically zero, then by setting all variables except t to 0, we get h(x) = 0, and then by setting all variables but v 2 and d 2 to 0, we also get g(x) = 0. Thus, a sector 2-form on R has exterior derivative 0 if and only it is identically zero.. Taken together, these results tell us the first three sector cohomology groups of R. The 0th cohomology is the same as ordinary de Rham cohomology (namely, R, since the constant functions are those with derivative 0). Similarly, the 1st cohomology is the same (namely, 0), since every 1-form (sector or differential) is the image of a 0-form. Finally, the second sector form cohomology group is also zero, but for a different reason than for de Rham cohomology. In de Rham cohomology, it is zero since there are no non-trivial 2-forms on R. For sector forms, it is zero since by the above, the only closed sector 2-form is the zero form. It is an open question whether sector form cohomology is always the same as de Rham cohomology; the basic examples given above, however, at least show that the complexes they form are quite different. We hope to explore the relationship between sector form and de Rham cohomology in a future paper. It is also important to note that the individual 'derivative' operations δ n i on sector forms appear to have geometric significance: for more on this, see [38,Chapter 4]. Returning to our general setting, we shall consider the following further property possessed by some sector forms: The exterior derivative operation ∂ defined above restricts to such singular forms, and so there is also a complex of singular forms (Proposition 8.13). 3.7. Example. Let us consider which sector 2-forms on R are alternating. By the above, a sector 2-form on R takes the form For 2-forms, the condition of being alternating amounts to a single equation Since c R swaps v 1 and v 2 , ω is alternating if and only if for all x, v 1 , v 2 , d, But this implies that f (x) = g(x) = 0, so ω is constantly zero. Hence the only singular 2-form on R is the zero form. 
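Continuing in the same coordinates, the omitted displays in Example 3.5 for the 1-form case can be filled in as follows (again a reconstruction consistent with the stated conclusions). For ω(x,v) = f(x)·v one computes Dω(x, v₁, v₂, d) = f'(x)·v₁·v₂ + f(x)·d, and since c_R only interchanges v₁ and v₂,
\[
\partial(\omega) \;=\; D\omega - c_{\mathbb R}\,;\,D\omega \;=\; \big(f'(x)v_1v_2 + f(x)d\big) - \big(f'(x)v_2v_1 + f(x)d\big) \;=\; 0 ,
\]
in agreement with the assertion that every sector 1-form on R has exterior derivative 0.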
In fact, we can show much more generally that the complex of singular forms on any smooth manifold (with values in R) is isomorphic to its de Rham complex. We shall prove this in 9.25 after first comparing the above singular forms to those studied in synthetic differential geometry [17,15,19]. Indeed, we shall show that in the tangent category determined by a model of SDG, the above complex of singular forms is isomorphic to its SDG counterpart (9.22), and in certain models of SDG the latter complex is known to be isomorphic to the ordinary de Rham complex of differential forms when M is a smooth manifold [32, IV, Proposition 3.7]. Symmetric monoids, semigroups, and finite sets In working towards the symmetric cosimplicial structure on sector forms, we will make use of an algebraic structure carried by the tangent endofunctor T , namely the structure of a symmetric semigroup (4.7). Many of the results and ideas in this section are due to previous authors [4,1,11,22,23], but the applications to tangent categories are new. Monoids and semigroups Given a strict monoidal category V , let us denote the unit object of V by I and write the monoidal product in V as juxtaposition. By definition, a semigroup (S, m) in V is an object S of V equipped with a morphism m : SS → S (called a multiplication) that satisfies the following associative law Sm ; m = mS ; m . Monoidal categories of finite cardinals Writing set for the category of sets, let us denote by finCard the full subcategory of set whose objects are the finite cardinals, which we identify with their corresponding ordinals and also with the natural numbers n ∈ N. The sum n + m of a pair of finite cardinals carries the structure of a coproduct in finCard, where the associated mappings n → n + m and m → n + m are order preserving and injective and send n and m, respectively, onto initial and final segments of the ordinal n + m. In general, if a category C is equipped with designated binary coproducts and a designated initial object, then C carries an associated structure of symmetric monoidal category. In particular, finCard is therefore symmetric monoidal, with monoidal product + and unit object 0. Further, (finCard, +, 0) is a strict monoidal category, but note that although n + m = m + n as objects of finCard, the symmetry isomorphism σ nm : n + m → m + n is not the identity map. We shall consider several non-full subcategories of finCard with the same objects as finCard itself: (i) finCard s , whose morphisms are surjections; (ii) finCard b , whose morphisms are bijections, all of which are automorphisms; (iii) finOrd, whose morphisms are order preserving maps; (iv) finOrd s , whose morphisms are order preserving surjections. Each of these subcategories is closed under the monoidal product in finCard and hence inherits the structure of a strict monoidal category. Note that finCard s and finCard b contain the symmetries σ mn and so are symmetric strict monoidal categories, whereas the other subcategories are merely strict monoidal categories. Universal monoids and semigroups The cardinal 1 carries the structure of a commutative monoid (1, µ, η) in the symmetric monoidal category finCard, where the associated multiplication µ and unit η are the unique maps Since these maps are order preserving, (1, µ, η) is also a monoid in finOrd. These monoids and their underlying semigroups have the following universal properties: 4.5. Theorem. Let V be a strict monoidal category. 
(i) Given a monoid (S, m, e) in V , there is a unique strict monoidal functor S ♯ : Proof. (i) and (ii) are well-known, e.g. see [29, VII.5, Proposition 1 and Exercise 3]. We will defer the proofs of (iii) and (iv) until 4.11 below, where we will see that they follow from more general results on the basis of the cited work of Burroni, Grandis, and Lafont. Hence, up to a bijection, monoids (resp. semigroups) in strict monoidal categories are the same as strict monoidal functors on finOrd (resp. finOrd s ), and analogous statements hold for the commutative variants of these notions. In the terminology of [28], finOrd is therefore the PRO that defines the notion of monoid, and finCard is the PROP that defines the notion of commutative monoid. Symmetric monoids and semigroups One of the ramifications of 4.5(iii) is that it provides a way to generalize the notion of commutative monoid to the context of nonsymmetric monoidal categories. Indeed, work of Burroni [4, 2.2] and of Grandis [11, §2] shows that a strict monoidal functor finCard → V valued in a mere strict monoidal category V is equivalently given by a monoid in V equipped with a compatible symmetry isomorphism, per the following definition: 4.7. Definition. Let V be a strict monoidal category. (i) A symmetry on an object S of V is a morphism s : SS → SS satisfying the following equations: gether with a symmetry s on the object S such that the following equation is satisfied and the commutativity law (4.1.iii) is also satisfied. (iii) (Grandis [11, §2]) A symmetric monoid (S, m, e, s) in V consists of a monoid (S, m, e) in V with a symmetry s on S such that (S, m, s) is a symmetric semigroup and the following equation is satisfied: One can generalize each of the above notions to the setting of an arbitrary monoidal category V by inserting associativity and unit isomorphisms as needed. 4.9. Remark. Any commutative semigroup (S, m) (resp. commutative monoid (S, m, e)) in a symmetric strict monoidal category V carries the structure of a symmetric semigroup (S, m, s) (resp. symmetric monoid (S, m, e, s)) in V when we take s to be the relevant component of the symmetry isomorphism carried by V . In particular, the monoid (1, µ, η) in finCard carries the structure of a symmetric monoid (1, µ, η, σ) in finCard, and its underlying symmetric semigroup (1, µ, σ) is also a symmetric semigroup in finCard s . The object 1 carries a symmetry σ in finCard b that is universal in the sense that if S is an object of V and s is a symmetry on S, then there is a unique strict monoidal functor S ♯ : Proof. (i) is explicitly proved in the cited work of Grandis and also follows immediately from the cited earlier result of Burroni. (ii) follows immediately from the cited result of Lafont, which gives a presentation of the strict monoidal category finCard s in terms of the generators µ, σ and the relations for a symmetric semigroup (4.7, 4.8). Similarly, (iii) follows from the cited result of Lafont, which presents the strict monoidal category finCard b in terms of the generator σ and the relations for a symmetry on an object (4.7). 4.11. Remark. We may apply the preceding theorem to commutative monoids in symmetric strict monoidal categories V by way of 4.9, yielding a proof of 4.5(iii). Similarly we obtain a proof of 4.5(iv). 
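For the reader's convenience, the equations omitted from Definition 4.7(i) are, in the form in which they are usually stated (this is a reconstruction in our notation from the cited presentations of Burroni, Grandis, and Lafont, not a quotation):
\[
s \,;\, s \;=\; 1_{SS}, \qquad
(sS)\,;\,(Ss)\,;\,(sS) \;=\; (Ss)\,;\,(sS)\,;\,(Ss) \;:\; SSS \to SSS ,
\]
that is, s is an involution satisfying the braid (Yang-Baxter) relation. For the tangent functor these are among the familiar identities satisfied by the canonical flip c.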
Symmetric cosimplicial objects The category ∆ of positive finite ordinals and order preserving maps admits a geometric interpretation that can be illustrated by way of a well-known functor from ∆ to the category of topological spaces, sending n to the standard geometric (n−1)-simplex ∆ n−1 ⊆ R n , i.e. the convex hull of the standard basis vectors in R n . Consequently, presheaves on ∆ abound in topology and are called simplicial sets. A similar geometric interpretation applies to each of the categories of finite cardinals that we have considered in 4.3, leading to several corresponding variants of the notion of simplicial set: For brevity, we will omit the modifier "augmented" when employing these terms within the present paper. The category of degenerative objects in C is defined as the functor category [finOrd op s , C ]. Similarly, each of the listed notions determines an associated category in which the morphisms are arbitrary natural transformations. Remark. Given a category C , any functor between the associated categories of C -valued functors. In particular, the inclusions induce functors between the various functor categories defined in 5.1. For example, every symmetric degenerative object carries the structure of a permutative object. Remark. By definition, a graded object C in a category C is a sequence of objects C n in C indexed by the finite cardinals n. Observe that a copermutative object C in C is equivalently described as a graded object C in C equipped with a sequence of group homomorphisms S n → Aut C (C n ) from the symmetric groups S n = Aut finCard (n) into the automorphism groups Aut C (C n ) of the objects C n of C ( §2.1). Dually, a permutative object C in C is a graded object equipped with group homomorphisms S op n → Aut C (C n ) where S op n is the opposite of the symmetric group. But every group G is isomorphic to its opposite G op via the map (−) −1 : G → G op , so copermutative objects are in bijective correspondence with permutative objects. From another perspective, this bijective correspondence is induced by an identity-on-objects isomorphism of categories given on arrows by ξ → ξ −1 . Example: The degenerative object of iterated tangent functors Given a tangent category (X , T), we saw in 4.12 that the tangent endofunctor T : X → X carries the structure of a symmetric semigroup (T, ℓ, c) in the opposite [X , X ] op of the category of endofunctors on X . Hence by 4.10, this symmetric semigroup determines a corresponding strict monoidal functor T ♯ : finCard s → [X , X ] op sending each finite cardinal n to the n-th iterate T n of T . This functor is an example of a symmetric codegenerative object in [X , X ] op , equivalently, a symmetric degenerative object finCard op s → [X , X ] in the category of endofunctors on X . Symmetric simplicial objects by generators and relations It is wellknown that the category of finite ordinals has a convenient presentation by generators and relations, leading to a familiar equivalent way of defining simplicial sets in terms of face and degeneracy maps; see, e.g. [29,VII.5]. Barr [1] and Grandis [11] gave an analogous presentation of the larger category of finite cardinals finCard in terms of the following larger collection of generators: We shall omit the superscripts n when they are clear from the context. The following theorem is well-known; for example, a proof is given in [29, VII.5]. 5.7. Theorem. 
The category finOrd of finite ordinals and order preserving maps can be presented by generators and relations (in the sense of [29, II.8]) as follows: (i) Generators: The maps ε n i , δ n i of 5.6. (ii) Relations: The following pure codegeneracy relations: together with the following pure coface relations: as well as the following coface-codegeneracy relations: By adding the symmetry maps as additional generators, together with further relations, Barr and Grandis established the following variation on the preceding theorem: 5.9. Theorem (Barr [1], Grandis [11, 4.2]) The category finCard of finite cardinals and arbitrary maps can be presented by generators and relations as follows: (i) Generators: The maps ε n i , δ n i , σ n i of 5.6. (ii) Relations: The relations (5.7.i), (5.7.ii), (5.7.iii) together with the following Moore relations as well as the following codegeneracy-symmetry relations: and the following coface-symmetry relations: 10. Remark. Grandis [11, §3] notes that for a fixed finite cardinal n, the maps σ n i generate the symmetric group S n , and the Moore relations (5.9.i) constitute a classical presentation of this group by generators and relations. By discarding the coface maps and all the relations involving them, we shall now establish an analogous presentation of finCard s in terms of the codegeneracy and symmetry maps: 5.11. Theorem. The category finCard s of finite cardinals and surjections can be presented by generators and relations as follows: (i) Generators: The maps ε n i , σ n i of 5.6. Proof. Let C denote the category presented by the given (formal) generators and relations (per [29, II.8]), with objects all finite cardinals. We will not distinguish notationally between the morphisms ε n i , σ n i in finCard s and the generators in C that bear the same names. It suffices to consider the cases where α, β, γ are generators, and then the equations are immediate from the definition of +. Verification of the unit laws for (C , +, 0) reduces to showing that the functors 0 + (−), (−) + 0 : C → C are merely the identity functor, but this is trivially verified on generators. Hence by 4.10(ii) there is a unique strict monoidal functor S ♯ : finCard s → C with S ♯ (1) = 1, S ♯ (µ) =μ, and S ♯ (σ) =σ. Note that S ♯ is identity-on-objects and sends the morphisms ε n i , σ n i in finCard s to the similarly named generators in C . Indeed, the definition (5.6) of the morphisms ε n i , σ n i in finCard s entails that the strict monoidal functor S ♯ sends them to respectively (using the definitions ofμ,σ, and +). Next we define an identity-on-objects functor M : C → finCard s by sending the generators ε n i , σ n i in C to the similarly named morphisms ε n i , σ n i in finCard s . This assignment respects the relations defining C , simply because the morphisms ε n i , σ n i in finCard s ֒→ finCard satisfy these relations (by 5.9). The composite functor M ; S ♯ : C → C preserves the generators ε n i and σ n i and so (by the universal property of C ) must be the identity functor. Hence M is faithful. We claim that M is also full (and hence is an isomorphism). Firstly, every morphism in finCard s can be expressed as a composite τ ; α : n → m where τ ∈ S n is a permutation and α : n → m is order preserving [11, §3], and then α is necessarily surjective. But by 5.10 we can express τ as a composite of symmetry maps σ n i , and by 5.8 we can express α as a composite of codegeneracy maps. 
Therefore the symmetries and codegeneracies σ n i , ε n i generate finCard s , so since they lie in the image of M it follows that M is full. As corollaries to the above theorems, we obtain not only the classical description of cosimplicial objects in terms of coface and codegeneracy morphisms but also analogous descriptions of symmetric cosimplicial objects and symmetric codegenerative objects, as follows: 5.12. Corollary. Let C be a category. We call the structural morphisms ε n i , δ n i , σ n i in 5.12 codegeneracies, cofaces, and symmetries, respectively, just like their similarly notated counterparts in finCard. Dually, a symmetric simplicial object C carries degeneracy morphisms ε n i : C n → C n+1 , face morphisms δ n i : C n+1 → C n , and symmetries σ n i : C n → C n . 5.13. The codegenerative object determined by a symmetric semigroup Given any symmetric semigroup (S, m, s) in a strict monoidal category V , the corresponding strict monoidal functor S ♯ : finCard s → V (4.10) is an example of a symmetric codegenerative object in V . Its underlying graded object consists of the n-fold monoidal powers S n of S. Since S ♯ is strict monoidal, the definitions of the generators of finCard s in 5.6 entail that the codegeneracies and symmetries carried by S ♯ can be expressed as ε n i = S i−1 mS n−i : S n+1 → S n σ n i = S i−1 sS n−i−1 : S n → S n . 5.14. Example: The symmetric degenerative iterated tangent functor As a special case of 5.13, we saw in 5.4 that the tangent functor T : X → X on a tangent category (X , T) carries the structure of a symmetric semigroup (T, ℓ, c) in [X , X ] op and so determines a symmetric codegenerative object T ♯ : finCard s → [X , X ] op , or equivalently, a symmetric degenerative object Note that if X is a Cartesian tangent catgory and E carries the structure of a commutative monoid (resp. abelian group) object in X , then the representable presheaf X (−, E) : X op → set lifts to a presheaf valued in the category cmon of commutative monoids (resp. the category ab of abelian groups). Hence the functor (5.15.ii) lifts to a functor valued in the category of symmetric codegenerative objects in cmon (resp. ab). Presenting symmetric cosimplicial objects by fundamental cofaces The coface morphisms δ n i : C n → C n+1 carried by a symmetric cosimplicial object C can be expressed in terms of the fundamental cofaces δ n 1 by repeated application of the equation δ i+1 = δ i ; σ i of (5.9.iii). This leads to the following new succinct equational presentation of symmetric cosimplicial objects, which will be useful in establishing the symmetric cosimplicial structure that engenders the de Rham complex: 6.1. Theorem. A symmetric cosimplicial object C : finCard → C in a category C is equivalently given by a graded object C in C equipped with morphisms ε n i , σ n i as in (5.12.i), (5.12.iii) together with a sequence of morphisms such that the equations (5.7.i), (5.9.i), (5.9.ii) are satisfied along with the following further equations: Therefore, in view of 5.12(iii), a symmetric cosimplicial object C in C is equivalently given by a symmetric codegenerative object C equipped with a sequence of morphisms (6.1.i) satisfying the equations (6.1.ii), (6.1.iii). Before proving this, let us adopt the following notational conventions. Hence it suffices to show that finCard has the relevant universal property [29, §8]. But in view of 5.12(iii) this universal property is equivalent to the extension property established in Lemma 6.3. 
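As an illustration of the preceding presentations, the following small computational sketch (not part of the paper) checks the relation δ_{i+1} = δ_i ; σ_i, together with a couple of the other relations, for small n. It assumes one common 1-based indexing convention for the generators of 5.6 (δ^n_i the order-preserving injection missing i, ε^n_i the order-preserving surjection repeating i, σ^n_i the transposition of i and i+1); the function names are of course ours.

```python
# Small sketch (not from the paper): generators of finCard with 1-based indexing.
# A map f : n -> m is a tuple of length n whose k-th entry is f(k) in {1,...,m}.

def compose(f, g):
    """Diagrammatic composite f ; g  (first f, then g)."""
    return tuple(g[f[k] - 1] for k in range(len(f)))

def delta(n, i):
    """Coface delta^n_i : n -> n+1, the order-preserving injection missing i."""
    return tuple(k if k < i else k + 1 for k in range(1, n + 1))

def sigma(n, i):
    """Symmetry sigma^n_i : n -> n, the transposition of i and i+1."""
    return tuple(i + 1 if k == i else i if k == i + 1 else k for k in range(1, n + 1))

def epsilon(n, i):
    """Codegeneracy epsilon^n_i : n+1 -> n, the order-preserving surjection hitting i twice."""
    return tuple(k if k <= i else k - 1 for k in range(1, n + 2))

# The relation delta_{i+1} = delta_i ; sigma_i  (equation (5.9.iii), as used in section 6):
for n in range(1, 6):
    for i in range(1, n + 1):
        assert delta(n, i + 1) == compose(delta(n, i), sigma(n + 1, i))

# Symmetries are involutions, and delta_i ; epsilon_i is the identity:
for n in range(1, 6):
    for i in range(1, n):
        assert compose(sigma(n, i), sigma(n, i)) == tuple(range(1, n + 1))
    for i in range(1, n + 1):
        assert compose(delta(n, i), epsilon(n, i)) == tuple(range(1, n + 1))

print("checked the chosen relations for n <= 5")
```

Verifications of this elementary finite-set kind are exactly what the proofs of Proposition 7.3 and Theorem 7.6 below are reduced to.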
The symmetric cosimplicial set of sector forms Let E be a differential object in a Cartesian tangent category (X , T), and let M be an object of X . Recall that Ψ n (M) (n ∈ N) denotes the set of all sector n-forms on M with values in E. In the present section we show that the graded set (Ψ n (M)) n∈N carries the structure of a symmetric cosimplicial commutative monoid. 7.1. Remarks on the definition of sector form Recall from Definition 3.2 that a sector n-form on M is a morphism ω : T n M → E such that for each j ∈ {1, ..., n} the equation a n j M ; T ω = ω ; λ holds, where λ : E → T E is the lift morphism carried by E. Using the results of the previous sections, we can get a better understanding of this equation. In particular, a n j is the composite transformation Concretely, one can readily verify that α n j is the mapping given by Proof. Given ω, τ ∈ Ψ n (M), the sum ω + τ in X (T n M, E) is a sector n-form, since for each j = 1, ..., n we can compute as follows, using the fact that cmon(T ) : cmon(X ) → cmon(X ) is an additive functor (2.6) and the fact that λ : E → T E is a homomorphism of commutative monoids (2.8): a j M ; T (ω + τ ) = a j M ;(T (ω) + T (τ )) = (a j M ; T (ω)) + (a j M ; T (τ )) = (ω ; λ) + (τ ; λ) = (ω + τ ) ; λ . Also, the zero element 0 of the commutative monoid X (T n M, E) is a sector n-form since we compute that a j M ; T (0) = a j M ; 0 = 0 = 0 ; λ, where each occurrence of 0 denotes the zero element of the relevant hom-set, again using the additivity of T and the fact that λ is a monoid homomorphism. The remaining claim is verified similarly. Recall that the graded commutative monoid (X (T n M, E)) n∈N carries the structure of a symmetric codegenerative commutative monoid (5.15). We now show that this structure restricts to sector forms: between the commutative monoids of sector forms. But the rightmost square commutes since ω is a sector (n+1)-form, so it suffices to obtain the commutativity of the following diagram. In view of 7.1, 5.4, and 5.14, this diagram is obtained by applying the strict monoidal functor T (−) : finCard op s → [X , X ] to the following diagram in finCard s n n + 1 so it suffices to find k such that this diagram commutes. In the case where i < j we can take k = j + 1, whereas in the case where i j we can take k = j, for in each case it is straightforward to verify that both composites in (7.3.i) are then equal to the map φ given by Next we prove that if ω is a sector n-form on M then σ n i (ω) = c n i M ; ω is a sector n-form on M. Letting j ∈ {1, ..., n}, it suffices to show that the following diagram commutes: Again since ω is a sector n-form it suffices to show that there is some k such that the following diagram commutes: But as above, we reason that this diagram is obtained by applying T (−) : finCard op s → [X , X ] to the following diagram in finCard s n n and so it suffices to show that this diagram commutes for some k. In the case where j / ∈ {i, i + 1} we can take k = j, whereas in the case where j = i we can take k = i + 1, while in the case where j = i + 1 we can take k = i, for in each of these three cases it is straightforward to verify that both composites in (7.3.ii) are then equal to the mapping φ given by Corollary. There is a symmetric codegenerative commutative monoid where Ψ n (M) is the commutative monoid of sector n-forms on M. Proof. By 7.3, the graded commutative monoid (Ψ n (M)) is equipped with codegeneracy and symmetry homomorphisms ε n i and σ n i . 
These are restrictions of the codegeneracy and symmetry maps carried by the symmetric codegenerative set X (T (−) M, E), so they satisfy the equations listed in 5.12(iii). Hence an application of 5.12(iii) yields the needed result. By the results of section 6, in order to show that the symmetric codegenerative structure on sector forms is part of a symmetric cosimplicial structure, it suffices to define the fundamental coface maps δ n 1 : Ψ n (M) → Ψ n+1 (M) and check that they satisfy certain equations (6.3). We now proceed to define these maps. 7.5. The fundamental derivative of a sector form Given a sector n-form ω : T n M → E on M, we define the fundamental derivative δ 1 (ω) of ω as the following composite morphism The upper-right cell commutes since ω is a sector n-form, and the lower-right cell commutes by the naturality of c. Hence it suffices to show that the following diagram commutes. Proof. By 7.6, the maps δ n 1 are well-defined, and they are homomorphisms of commutative monoids since T is additive (2.6) andp : T E → E is a homomorphism of commutative monoids (2.11). By 7.4 we have already defined a symmetric codegenerative object Ψ(M) : finCard s → cmon, so by 6.3 it suffices to verify the equations (6.1.ii) and (6.1.iii), which govern the interaction of the fundamental coface maps δ n 1 with the codegeneracies and symmetries. Corollary. If E is a subtractive differential object in X , then there is a symmetric cosimplicial abelian group where Ψ n (M) is the abelian group of sector n-forms on M with values in E. Proof. This follows from the preceding theorem and 7.2. 7.10. Corollary. Let E be a differential object in a Cartesian tangent category (X , T). (ii) If E is a subtractive differential object, then the functor Ψ lifts to a functor valued in the category [finCard, ab] of symmetric cosimplicial abelian groups. Proof. Recall from 5.15 that we have a functor X op → [finCard s , cmon] that sends each morphism f : M → N in X to the natural transformation X (T (−) f, E) : X (T (−) N, E) ⇒ X (T (−) M, E) whose components X (T n f, E) : X (T n N, E) → X (T n M, E) are given by precomposition with T n f : T n M → T n N. It follows immediately from the naturality of the transformations a n j : T n ⇒ T n+1 that X (T n f, E) restricts to yield a homomorphism Ψ n (f ) : Ψ n (N) → Ψ n (M) between the submonoids consisting of sector n-forms. We claim that the homomorphisms Ψ n (f ) constitute a natural transformation Ψ(f ) : Ψ(N) ⇒ Ψ(M). It suffices to verify the naturality condition on the generators ε n i , σ n i , δ n 1 of finCard (6.4). But for the generators ε n i , σ n i this naturality condition follows from the naturality of X (T (−) f, E), so it suffices to show that commutes, but this follows immediately from the definitions. Complexes of forms and the exterior derivative Given an (augmented) cosimplicial abelian group C : finOrd → ab, it is well-known [37, Definition 8.2.1] that the underlying graded abelian group (C n ) n∈N carries the structure of a (non-negatively graded) cochain complex C • when we define the differential ∂ n : C n → C n+1 by In particular, we therefore have that ∂ n ; ∂ n+1 = 0. We call C • the cochain complex associated to C. In the present section, we show that when C is a symmetric cosimplicial abelian group one also obtains a subcomplex C alt • ֒→ C • consisting of the alternating elements of C. 
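For reference, the displays omitted above can plausibly be restated as follows; the formula for δ^n_i is the one recalled in 8.2(ii) below, while the sign convention in the differential is our reconstruction, fixed so that ∂(ω) = δ_1(ω) − δ_2(ω) for sector 1-forms, as in Example 3.5:
\[
\partial^n \;=\; \sum_{i=1}^{n+1} (-1)^{i+1}\,\delta^n_i \;:\; C_n \to C_{n+1},
\qquad
\delta^n_i(\omega) \;=\; c^{n+1}_{(i)} \,;\, T\omega \,;\, \hat p \;:\; T^{n+1}M \to E ,
\]
so that in particular the fundamental derivative of 7.5 is δ^n_1(ω) = Tω ; p̂.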
Applied to the symmetric cosimpicial object of sector forms (7.9), we obtain complexes of sector forms (C • ) and singular forms (C alt • ) in tangent categories. 8.1. Sector forms and the exterior derivative Let E be a subtractive differential object in a Cartesian tangent category (X , T), and let M be an object of X . Definition. (i) The complex of sector forms on M is defined as the cochain complex Ψ • (M) associated to the cosimplicial abelian group Ψ(M) of sector forms on M with values in E (7.9). (ii) Given a sector n-form ω : T n M → E on M, the exterior derivative of ω is defined as the sector (n + 1)-form where ∂ n : Ψ n (M) → Ψ n+1 (M) is the differential carried by the complex Ψ • (M), recalling that the sector (n + 1)-form δ n i (ω) = c n+1 (i) ; T ω ;p is the derivative of ω in position i (7.8). The following theorem is now immediate, but it would be difficult to prove if we had just defined the exterior derivative directly without first proving Theorem 7.7: Remark. It is well known that the the assignment C → C • extends to a functor (−) • from the category of cosimplicial abelian groups to the category cochain + of nonnegatively graded cochain complexes 10 . Hence by 7.10 we can form the composite functor whose middle factor is the evident forgetful functor. This functor Ψ • sends each object M of X to the complex of sector forms on M. 8.5. The complex of alternating elements Given a symmetric cosimplicial abelian group C, we now define a certain subcomplex C alt • of C • . (i) We say that an element c of C n is an alternating element of C if σ n i (c) = −c for all i ∈ {1, ..., n − 1}, recalling that σ n i : C n → C n is the symmetry map carried by C. (ii) We denote by C alt n ⊆ C n the subset consisting of all alternating elements. 8.7. Theorem. Given a symmetric cosimplicial abelian group C, the alternating elements of C constitute a subcomplex C alt • of the cochain complex C • associated to C. Proof. C alt n ֒→ C n is an intersection of equalizers in ab and hence is a subgroup inclusion. Letting c ∈ C alt n ⊆ C n , it suffices to show that the associated element ∂(c) = ∂ n (c) of C n+1 is alternating. Letting i ∈ {1, ..., n} we must show that σ n+1 i (∂(c)) = −∂(c). Since σ n+1 i is a homomorphism of abelian groups we compute that Using the coface-symmetry relations (5.9.iii) and the fact that σ n+1 i is self-inverse, we compute that since i and i + 1 are of opposite parity. 8.8. Definition. Given a symmetric cosimplicial abelian group C, we call the subcomplex C alt • of C • the complex of alternating elements of C. 8.9. Proposition. There is a functor from the category of symmetric cosimplicial abelian groups to the category of (non-negatively graded) cochain complexes, sending a symmetric cosimplicial abelian group C to the complex of alternating elements C alt • of C. Proof. Given a morphism of symmetric cosimplicial abelian groups f : C → D, we claim that the associated morphism of chain complexes f • : C • → D • restricts to a morphism f alt • between the subcomplexes of alternating elements. Indeed, given c ∈ C alt n ⊆ C n , the associated element f n (c) of D n is alternating, since for each i ∈ {1, ..., n − 1}, we compute that σ n i (f n (c)) = f n (σ n i (c)) = f n (−c) = −f n (c) since f is natural and c is alternating. The result now follows from 8.4. 8.10. The complex of singular forms Again let us fix a subtractive differential object E in a Cartesian tangent category (X , T). 
Letting M be an object of X , recall that Ψ(M) denotes the symmetric cosimplicial abelian group of sector forms on M (7.9). 8.11. Definition. Since Ω • (M) is a subcomplex of Ψ • (M), the following theorem is now immediate: 8.12. Theorem. The exterior derivative ∂ω of a singular n-form on M is a singular (n + 1)-form. Proposition. There is a functor sending each object M of X to the complex of singular forms on M with values in E. Proof. This follows from 7.10 and 8.9. 9. Relationship to de Rham in synthetic and classical differential geometry Synthetic differential geometry (SDG) is an approach to differential geometry in terms of infinitesimals that was initiated in a lecture of Lawvere in 1967 and developed by several authors, starting with work of Wraith and of Kock [16] in the 1970s. The reader is referred to the books [19] and [25] for a comprehensive introduction to SDG. An approach to differential forms in SDG was developed in [17,15,31] (see [19], [25]), and in the present section we compare this work to the development of differential forms given above, recovering the classical de Rham complex of a smooth manifold as a corollary (9.25). This comparison involves specializing our treatment of sector forms to the case in which the tangent structure is representable (9.1), an exercise that is illuminating in its own right. In the most prevalent formulation of SDG, one begins with a topos E and a commutative ring object R in E , and then one defines D to be the subobject of R described by the equation x 2 = 0, so that D is the part of R that consists of square-zero 'infinitesimal elements'. Writing [M, N] for the internal hom between objects M and N of E , one construes the object [D, M] as the space T M of all tangent vectors on M. One postulates that R should satisfy the Kock-Lawvere axiom (see section 9.15), and sometimes further axioms, on the basis of which one can develop much differential geometry in E . One can define specific toposes E into which the category mf of smooth manifolds embeds, via an embedding mf ֒→ E that sends the real numbers R to R; see [32] and §9.23 below. The approach of defining D as the square-zero part of R was put forward in Kock's 1977 paper [16], wherein it is indicated that Lawvere's 1967 lecture did not define D in this way but rather postulated that an object D of infinitesimals should exist and that [D, M] for each object M should have properties expected of the tangent bundle of M. Evidently tangent categories provide an axiomatics of such properties, and indeed Rosický's 1984 paper [34] considers in particular those tangent categories (X , T) for which there is an exponentiable object D with T ∼ = [D, −] : X → X . This leads to an axiomatics for structure and properties that should be possessed by an object of infinitesimals D [34, §4], [5, 5.6]. The following definition was given in [5] and is a variation on a similar definition given in [34]. Herein, we say that an endofunctor F on a category X with finite products is representable if it is isomorphic to an endofunctor of the form [X, −] : X → X for some exponentiable object X of X , and we then say that F is represented by X. Definition. A category X carries representable tangent structure if X has finite products and carries a tangent structure T in which the endofunctors T n and T n (n ∈ N) are representable. It is proved in [5,Prop. 
5.7] that a category X with finite products carries representable tangent structure if and only if there is an exponentiable object D of X that carries the structure of an infinitesimal object in the sense of [5,Def. 5.6]. In this case the tangent endofunctor T is represented by D, and its iterates T n are represented by the n-th powers D n of D. In particular, representable tangent structure is necessarily Cartesian in the sense of 2.5. Let us now fix a category X with representable tangent structure T, represented by an infinitesimal object D in X . Throughout, we shall assume without loss of generality that T = [D, −] on the nose. We shall not assume however that T n = [D n , −] for n > 1, but rather we now define specific isomorphisms T n ∼ = [D n , −] for use in the sequel. Definition. Let us define isomorphisms by recursion on n, as follows. Firstly, ψ 0 is defined as the canonical isomorphism from [1, −] to the identity functor on X . Next, the components of ψ n+1 are defined as the composites Note that ψ 1 is therefore the identity transformation on [D, −] = T . 9.3. Simple type theory and lambda calculus In synthetic differential geometry one often makes use of the internal language of a given topos E in order to define morphisms in E by means of 'elementwise' formulae, to show that diagrams commute just by chasing elements, and so on; see, e.g., [19,Part II]. After all, the internal language of E is a restricted form of set theory, or rather higher-order intuitionistic type theory [24]. Even though our given tangent category X is not assumed to be a topos, it still possesses an internal language, albeit a rather restricted one, namely the simple type theory of X [12, Chapter 2], considered as a category with finite products. We now informally review some basic elements of this language and one of its extensions, the simply typed lambda calculus; readers who are familiar with the latter may safely skip this section. Given any morphism f : X 1 × ... × X n → Y in X , we can form a typing judgment or term-in-context in which each expression x i : X i indicates that x i is a formal variable of type X i . The typing judgment asserts that the expression f (x 1 , ..., x n ) is a term of type Y . The part of the typing judgement to the left of the turnstile ⊢ is called the context. The simple type theory of X includes various term formation rules which allow us to construct new termsin-context from others [12, 2.1]. For example given terms-in-context x : X ⊢ f (x) : Y and y : Y ⊢ g(y) : Z associated to morphisms f : X → Y and g : Y → Z in X , we can form a term-in-context x : X ⊢ g(f (x)) : Z. Every term-in-context denotes an associated morphism in X , and in particular, the latter term-in-context denotes the composite morphism f ; g : X → Z. The simple type theory of X carries also a calculus of equations x 1 : X 1 , ..., x n : X n ⊢ t 1 = t 2 : Y where t 1 , t 2 are terms in the same context Γ, namely x 1 : X 1 , ..., x n : X n , and we say that such an equation holds in X if the morphisms in X denoted by Γ ⊢ t 1 : Y and Γ ⊢ t 2 : Y are equal. We shall often omit typing indications "y : Y " within terms-in-context and equations when the intended typing is clear. Having assumed that X has an infinitesimal object D, which is exponentiable, we would also like to employ an internal language in reasoning about exponential transposition of morphisms. 
In the case where X is Cartesian closed, we can employ the simply typed lambda calculus of X [12, 2.3] [24], which extends the simply type theory of X by adding term-formation rules corresponding to exponential transposition, together with corresponding rules governing equality. For example given a morphism f : X × Y → Z in X , the associated transpose X → [Y, Z] is denoted by the term-in-context where the construct "λy : Y." serves to bind the variable y within the scope of the expression λy : Y.f (x, y). We will often write just λy.f (x, y). Although the given tangent category X is not assumed Cartesian closed, we can clearly 11 still employ simply typed lambda calculus and its interpretation in X as long as the instances of exponential transposition and evaluation employed are those permitted by the exponentiable objects D n . Given objects X, Y, Z of X with X, Y exponentiable, As a first application of this type-theoretic notation, we record the following: Whereas X is not a strict monoidal category, we can construe (D, ⊙, s) as a symmetric semigroup in a Cartesian strict monoidal category X D that is defined as follows. Define ob X D = N and X D (n, m) = X (D n , D m ), with composition as in X . Informally, we will write D n for the object n of X D , noting that X D is equivalent to the full subcategory of X on the objects D n . One encounters no complication in defining a Cartesian strict monoidal structure on X D , and it is for this reason that we work with X D rather than the latter full subcategory of X . By 4.10, the symmetric semigroup (D, ⊙, s) in X D induces a strict monoidal functor and we will also write D ♯ to denote the functor D ♯ : finCard s → X obtained by composing with the canonical fully faithful functor X D → X . (i) The codegeneracies and symmetries carried by the symmetric codegenerative object D ♯ in X are the following morphisms, respectively: (ii) Writing D f : D m → D n for the morphism in X induced by a mapping f : n → m between finite cardinals n, m, the functor D ♯ : finCard s → X sends each permutation ξ : n ∼ − → n to the automorphism D ξ −1 : D n → D n in X . Proof. (i) is immediate from 5.13. For (ii) it suffices to show that the copermutative object finCard b → X D underlying D ♯ is equal to the composite whose first factor is the identity-on-objects isomorphism given in 5.3. But these two functors finCard b → X D are both strict monoidal, and they both send 1 to D and send the symmetry σ : 2 → 2 to the symmetry s : D 2 → D 2 in X D , so by 4.10(iii) they are equal. 9.8. Some infinitesimal left actions For each n ∈ N and each j = 1, ..., n we have a morphism α n j = σ n+1 (j) ; ε n j : n + 1 → n in finCard s (7.1). Using 9.6 and the definition of σ n+1 (j) = (j(j − 1)...321) (6.2), we compute that the morphism D ♯ (σ n+1 (j) ) : D n+1 → D n+1 carried by the symmetric codegenerative object D ♯ is characterized by Again applying 9.6 we therefore compute that the morphism is characterized as follows: In effect, D ♯ (α n j ) is the left action of D on the j-th factor of D n . Now fixing a differential object E in X , the third axiom in 2.8 entails that E carries an associative action of D, namely the transpose • : D × E → E of the lift morphism λ : E → [D, E] = T E carried by E. As with the multiplication carried by D, we will denote this left action on E by juxtaposition within the lambda calculus. where φ n+1 is the isomorphism defined in 9.2. 
Writing • n j in infix notation in the lambda calculus, this morphism • n j is characterized by the following equation: Given a pair of morphisms ν, ω that correspond under this bijection, so that ν = ψ n ; ω, we claim that ν is a synthetic sector n-form if and only if ω is a sector n-form. To prove this, first observe that the following diagram commutes, by the inductive definition of the isomorphisms ψ n (9.2). ( ( P P P P P P P P P P P P Next recall that ω is a sector n-form if and only if a n j ; T ω = ω ; λ for all j = 1, ..., n (3.2). Here a n But the two composites in this diagram are the exponential transposes of the two composites in the diagram (9.9.ii) whose commutativity characterizes synthetic sector n-forms. Proof. By 6.3 we know that δ n i (ν) = σ n+1 is the automorphism induced by the permutation σ n+1 (i) : n + 1 → n + 1 in finCard. Given any synthetic sector (n + 1)-form γ : [D n+1 , M] → E, we deduce by 9.10 that σ n+1 so by using (9.8.i) we compute as follows: .., d n )) (9.11.i) Let ω : T n M → E be the sector n-form corresponding to ν, so that In view of the proof of 9.10, δ n 1 (ν) : where δ n 1 (ω) = T ω ;p is the fundamental derivative of ω (7.5, 7.7). Hence since ω = ψ −1 n ; ν we compute that by the inductive definition of ψ (9.2). Hence we compute as follows: Applying this together with (9.11.i) in the case where γ = δ n 1 (ν), we obtain the needed result. Now let us assume that E is a subtractive differential object in X , and again let M be an object of X . 9.14. Theorem. There is an isomorphism of cochain complexes between the complex of singular forms and the complex of synthetic singular forms. Proof. This follows immediately from 9.10, 8.9, and 8.11. [17,15,31]. We now recall the definition of the notion of differential form employed in the cited sections of these books. We shall soon show that these are exactly the same as the synthetic singular forms defined in 9.12 above. where we have written * n j in infix notation. We say that a morphism ν : 9.17. Remark. Observe that the axioms for an SDG singular form are almost exactly the same as those for a synthetic singular form as defined in 9.12 above, except that for SDG singular forms the axiom (9.16.i) applies to arbitrary scalars r : R rather than just d : D. We shall show that these notions of form are identical in a suitable setting. The key idea is as follows. 9.19. The tangent category of microlinear objects In order to be able to use certain results given in [25], we shall now assume that R satisfies the Kock-Weil axiom (K-W) of [25, 2.1.3]. This axiom entails the above Kock-Lawvere axiom, and it also entails that the object R of E is microlinear [25, 2.3.1] (that is, R perceives finite quasicolimits of infinitesimal objects as colimits). Let X ֒→ E denote the full subcategory of E consisting of the microlinear objects, and let E iv ֒→ E denote the full subcategory consisting of those objects that are both infinitesimally linear and vertically linear in the sense used in [5, 5.2, 5.3]. Note that X is contained in E iv . By [5, 5.4] E iv carries representable tangent structure, with representing object D. Both X and E iv are closed under finite limits and exponentials in E ([25, 2.3.1], [5, 5.4]). Hence the tangent structure on E iv restricts to X , and X is a Cartesian closed category with representable tangent structure, represented by D. Now let us fix an object M of X and a Kock-Lawvere R-module E that lies in X . 
By [7, 3.9], E carries the structure of a subtractive differential object in X , where the associated lift morphism λ : E → T E = [D, E] is the transpose of the restricted action D × E → E, so that the latter is the morphism written as • in 9.8. Given any object X of E , let X * : E = E /1 → E /X denote the functor given by pullback along ! : X → 1. Since X * is a logical functor between toposes [14, 1.42], X * sends R to a ring of line type X * (R) = (π 2 : R × X → X) in E /X, and X * sends E to a Kock-Lawvere X * (R)-module X * (E) in E /X. Proposition. (i) Given any object X of X , the tangent bundle (T X, p X ) carries the structure of a Kock-Lawvere X * (R)-module in E /X. shall consider an embedding of mf into a topos E modelling SDG, and then we shall invoke 9.22 and a result on differential forms within this specific topos [32,IV.3.7]. In particular, we shall take E to be the Dubuc topos, i.e., the topos denoted by G in [32] and by B op in [19]. Explicitly, E is the topos of sheaves on the opposite of the category of germ-determined, finitely generated C ∞ -rings, with respect to the open cover topology. But more to the point, E is a topos equipped with an embedding ι : mf ֒→ E such that R = ι(R) is a ring of line type satisfying the Kock-Weil axiom [25, 8.3.3], and ι has several further pleasant properties making (E , ι) a well-adapted model (see [19,III.3,III.4,III.8.4]). We now show that any well-adapted model gives rise to an embedding of tangent categories: 9.24. Proposition. Let ι : mf ֒→ E be a well-adapted model of SDG. (i) The embedding ι factors as mf ι ′ ֒→ X ֒→ E , where X is the full subcategory of E consisting of microlinear objects. (ii) The embedding ι ′ : mf ֒→ X carries the structure of a strong morphism of Cartesian tangent categories (in the sense of [5, 2.7, 2.8]). ι ′′ preserves the Jacobian derivative [19,III.3.3] and therefore preserves Cartesian differential structure. Therefore ι ′′ is a strong morphism of Cartesian tangent structure, so the composite cart ֒→ Diff (X ) ֒→ X is a strong morphism of Cartesian tangent structure whose structural isomorphisms are the isomorphisms α M with M = R n as constructed in [19,III.4.1]. Hence the restriction of ι ′ : mf ֒→ X to cart is a strong morphism of Cartesian tangent structure. Now for an arbitrary manifold M ∈ ob mf, we can choose a covering by open embeddings e j : U j ֒→ M (j ∈ J) where the U j are Cartesian spaces. It follows that the families (T e j ) j∈J , (T 2 e j ) j∈J , and (T 2 e i ) j∈J (in the notation of 2.3) are coverings by open embeddings. For each of the structural transformations t ∈ {p, 0, +, ℓ, c} carried by mf , with corresponding transformation t ′ in X , we can now use the fact that α U j commutes with t U j , t ′ U j for each j ∈ J to show that α M commutes with t M , t ′ M . Proof. Taking E to be the Dubuc topos, we deduce by 9.24 that the associated embedding ι ′ : mf ֒→ X is a strong morphism of Cartesian tangent structure, and we can invoke 9. Conclusions and future work In this paper, we have shown that not only do tangent categories support a generalization of de Rham cohomology, but that they support a second cohomology, the cohomology of sector forms; furthermore, sector forms have a rich algebraic structure that goes beyond this cohomology. There are many possible extensions of this work. • We have shown that tangent categories possess a cohomology of sector forms. 
Even in the canonical case of smooth manifolds, this may be distinct from the ordinary de Rham cohomology of classical differential forms; further investigation is required to compare these cohomologies. • The relationship between classical differential forms and singular forms in an arbitrary tangent category needs to be better understood. In general, one would expect that any object M which is "locally a differential object" would have the property that classical differential forms on M and singular forms on M would be in bijective correspondence, but this requires detailed work to check. Another possibility is that differential forms and singular forms may correspond if M possesses a "symmetric n-connection" [26], suitably defined in a tangent category. • An important operation on differential forms is the wedge product. Since this involves multiplication in R, in the setting of tangent categories, one would need the coefficient object E to have ring structure. Once such a generalized wedge product is defined, one could consider how such an operation interacts with the co-face, symmetry, and co-degeneracy maps. • It is a well-known result that the exterior derivative is the unique map from n forms to n + 1 forms satisfying certain algebraic properties [35,Proposition 7.11]. It would be interesting to determine for which tangent categories this uniqueness result holds. • It is not clear to what cohomology theories the cohomologies found here correspond in algebraic geometry (for example, in the category of schemes). The cohomologies may recover an existing cohomology theory or represent a new one; further investigation is required. Finally, as mentioned in the introduction, sector forms generalize covariant tensors, and because of this, White writes that "the calculus of [sector forms] can serve as a unified framework for the presentation of classical local Riemannian geometry, and that it can lead to new methods of analysis in modern differential geometry" [38, pg. x]. The results presented here on sector forms contribute to this calculus by means of a methodology which is applicable more generally.
Serum zinc and calcium level in patients with psoriasis The purpose of this study was to measure the serum zinc and calcium levels in psoriatic individuals. The Biochemistry Department of Dhaka Medical College conducted a cross-sectional study from 2021 to 2022 involving 110 participants aged 20-55 years. Group A comprised 55 patients with diagnosed psoriasis and group B comprised 55 healthy individuals. Serum zinc and calcium were measured using a colorimetric technique. Statistical analysis was performed using the unpaired Student's 't' test for continuous variables, the Chi-square test for categorical variables, and Spearman's rank correlation coefficient for correlation, with p < 0.05 considered significant. The mean ± SD serum zinc and calcium levels of psoriatic patients (57.48 ± 8.86 µg/dl and 7.60 ± 0.58 mg/dl, respectively) were significantly lower (p < 0.001) than those of healthy subjects (79.42 ± 7.37 µg/dl and 8.75 ± 0.45 mg/dl, respectively). Serum zinc and calcium showed a significant inverse relationship with psoriasis (r = -0.769, p < 0.001 and r = -0.736, p < 0.001, respectively). In patients with long-standing psoriasis (>5 years), serum calcium was significantly lower (p = 0.006), whereas the reduction in serum zinc did not reach significance (p = 0.066). It can be inferred from this study that psoriasis patients have lower serum levels of calcium and zinc. Thus, regular evaluation of these biomarkers may help prevent the adverse outcomes associated with hypozincemia and hypocalcemia. Introduction The skin condition known as psoriasis is characterized histologically by cutaneous inflammation, increased epidermal proliferation, hyperkeratosis, angiogenesis, abnormal keratinization, shortened maturation time, and parakeratosis (Gisondi et al., 2012; Hagforsen et al., 2012). Around 2-3% of the world's population is affected, and around 125 million people worldwide are estimated to have psoriasis, according to the World Psoriasis Day Consortium (Gelfand et al., 2005). Despite evidence of a hereditary susceptibility, the cause of psoriasis is still unknown (Harden et al., 2015). Another important area of study is the contribution of the immune system to the development of psoriasis. The aetiology of psoriatic characteristics is still not fully understood, according to numerous research groups. Systemic infections, metabolic abnormalities, medicines, and stress are significant environmental influences (Sazzad et al., 2023), and some Human Leukocyte Antigens (HLA) are also associated with the disease (Duweb et al., 2005). There are abnormalities in the serum zinc and calcium levels in psoriasis. Zinc is an important trace element that is necessary for regular cell division and apoptosis (Sunny et al., 2021). It is essential for many metabolic processes, including cell division, transcription, and translation (Kuddus et al., 2002). More than 300 enzymes require zinc for their catalytic functions. On the other hand, Jansen et al.
(2009) found that the removal of zinc from catalytic sites results in the decrease of enzyme activity.Numerous studies show that people with psoriasis consistently have low serum zinc levels.Guenther reported in 2009 that plasma zinc levels were low in psoriasis.Asymmetry in the distribution of zinc between serum and psoriatic lesions has been suggested by certain investigations that found psoriatic lesions retain a higher zinc concentration than unaffected skin.Large-scale skin exfoliation can lower the serum level of zinc in psoriasis (Nigam, 2005;Kim et al., 2010).Stress depletes zinc levels, and stress has been shown to cause or worsen psoriasis (Remrod et al., 2015).Low blood albumin levels, which are brought on by removing a lot of scales from the body's surface, also contribute to falling zinc levels (Mohamad, 2013).Today, it is believed that oxidative stress plays a significant role in the etiology of psoriasis.Because the extracellular enzyme superoxide dismutase depends on zinc, zinc is regarded as an antioxidant (Alwasti et al., 2011).Superoxide dismutase is essential for the defense against free radical damage.According to Ghosh et al. (2008), a zinc deficiency can enhance oxidative stress-induced cell damage and decrease antioxidant enzyme activity. Calcium is another important macro mineral that controls a variety of cellular processes, including insulin secretion, muscular contraction, and mast cell degranulation.Because of an imbalance in calcium homeostasis, psoriasis may become worse.Since cadherins are calcium-dependent cell adhesion molecules, hypocalcaemia might harm them and make illnesses worse (Islam et al., 2018).The strong correlation between serum calcium levels and psoriasis has been confirmed by numerous investigations.It has been shown that pustular psoriasis of von Zumbush, a very severe form of psoriasis, is associated with modest hypocalcemia (Sunny, 2017).Low serum calcium levels have been shown to cause lesions to enlarge and become more intense in the majority of patients.According to Puri et al. (2014), calcium depletion from the horny layer may contribute to the development of psoriatic skin lesions. Keratinocyte proliferation and differentiation are tightly controlled by intracellular calcium.One of the major participants in the psoriasis pathogeneses is assumed to be keratinocytes.They make up the majority of the cells of the epidermis.They usually serve as a defense against outside invaders like infectious pathogens.According to certain accounts, aberrant keratinocyte differentiation and proliferation are brought on by a deficit in calcium intake.According to some transient receptor potential cation channels (TRPC), calcium ion entrance into cells is regulated (Birnbaumer et al., 2009).It was also stated that downregulation of TRPC 1, 4, and 6 was related with significant abnormalities in calcium ion influx in psoriatic keratinocytes.For the topical therapy of psoriasis, they recommended using TRPC channel activators (Birnbaumer et al., 2009). 
Psoriasis is becoming more common, as it is in other developing nations, in Bangladesh.This study's objectives include determining the serum zinc and calcium levels in psoriatic patients as well as correlating these levels with the development of the disease.However, our nation lacked studies to back up the aforementioned association.Since there is a knowledge and information gap surrounding the medical evaluation of psoriasis patients, this study has taken the effort to close that gap.So, the purpose of this study is to measure the serum zinc and calcium levels in psoriatic patients and to connect these levels with the course of the disease. Study design and population The Department of Biochemistry at Dhaka Medical College in Dhaka, Bangladesh, conducted the current cross-sectional analytical investigation from July 2021 to June 2022.In order to conduct this study, 55 psoriasis patients with confirmed diagnoses (group A) and 55 healthy individuals (group B) were chosen based on predetermined selection criteria from the outpatient dermatology and venereology department at the Dhaka Medical College Hospital as well as from the hospital grounds through direct contact with staff members such as nurses and doctors or patients present.Age and sex were balanced between the two groups.The history and distinctive appearance of erythematous papules and plaques with a silver scale in common areas including the scalp, elbows, knees, and back were used to make the clinical diagnosis of psoriasis.A comprehensive history, physical examination, and standard laboratory tests were used to evaluate each patient.To determine the clinical kind of the disease and its surface area, patients underwent examinations.The following patients were not included in the study because they met each requirement: 1. diabetic coma, 2. hypertension, 3. parathyroid conditions, 4. a serious systemic disease (malignancy, cardiovascular disease, bone disease, hepatic disease, or renal failure), 5. people using supplements or medications that modify the metabolism of zinc or calcium, such as rifampicin, phenytoin, or phenobarbital, 6. an acute or persistent infection, 7. pregnancy and breastfeeding, 8. a recent history of malnutrition and diarrhea. Data collection Each patient filled out a questionnaire that asked about their demographics, medical history, drug use, the extent and duration of their psoriasis, and whether they had any psoriasis in their families.Then, laboratory tests including liver function, renal function, thyroid function, complete blood count, fasting blood sugar, zinc, calcium, albumin, and 25-hydroxy vitamin D3 levels were advised for all subjects. 
Sample gathering and preservation Following every aseptic precaution, a disposable syringe was used to draw 6 ml of fasting venous blood from each study participant.To estimate fasting plasma glucose and resting blood sugar, 2 ml of blood was put into a test tube containing NaF after the needle was removed from the nozzle.To prevent hemolysis, 4 ml of blood was gently pushed into a dry, clean, deionized, graduated, screw-capped plastic test tube.The test tube was then kept in a slanting position until a clot formed before being centrifuged at 3000 rpm for 10 minutes to separate the serum and collect it in a labelled deionized, eppendrof.Early as practicable, all biochemical tests were conducted.In the event that the analysis is put off, the serum was kept at -200C.For the indices of this investigation, adequate cleaning of plastic and glassware was crucial.Pipettes, glassware, and plastic items were properly washed with tap water after being cleaned with detergent.For 24 hours, all the equipment was submerged in 20% nitric acid (HNO3) in deionized water.They were then dried outside after being rinsed three times with deionized water. Determining the serum zinc level The serum zinc level of each subject-both psoriasis sufferers and healthy controls-was measured.Each eligible case and control was given a 5 ml intravenous blood sample.Until the time of analysis, supernatant serums were separated by centrifugation for 10 min at 4000 rpm and kept at 40°C.Zinc-free polypropylene syringes were used to take blood samples, which were then put in zinc-free centrifuge tubes.A commercial kit (Zinc Assay Kit, Elitech, France) was used to measure the serum zinc concentration using atomic absorption spectrophotometry (Spectra AA 10 plus, Varian, Dickinson, Texas, USA).Adults with serum zinc levels between 60 to 120 micrograms per deciliter were considered to be normal. Determining the serum calcium level Serum calcium was estimated using a colorimetric method, using a color complex between calcium and o-cresolphtalein in alkaline medium.The procedure involved a cuvette, ethanolamine, chromogen, and calcium cal.The absorbance was measured against the blank, and the reference value was 8.4-10.2mg/dl.The conversion factor was 0.25. Determining fasting plasma glucose The fasting plasma glucose was enzymatically estimated using the "Glucose Oxidase" (GOD-PAP) technique.Using the semi-automatic Evolution-3000 analyzer, readings were recorded.The typical range for fasting plasma glucose is 3.9 to 5.6 mmol/l, according to the American Diabetes Association. Data analysis A predesigned data collecting sheet was used to record all of the data.The unpaired Student's 't' test was used to compare continuous variables between research subject groups.Continuous variables were represented as mean SD.Chi-square test was used to compare categorical variables, and absolute frequencies and percentages were provided.To compare the association between various factors and psoriasis, the Spearman's rank correlation coefficient (r) test was used.With significance set at p 0.05 or higher at the level of the 95% confidence interval, all p values were two-tailed.The Windows SPSS version 22 program was used to conduct the analyses. 
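As an illustration of the analysis pipeline described above, the sketch below shows how the same group comparisons and correlation could be reproduced outside SPSS. It is a minimal example in Python using scipy, with made-up serum values standing in for the actual study data; the per-subject arrays, the 60 µg/dl cut-off used for the contingency table, and all variable names are assumptions introduced for illustration only.

```python
# Minimal sketch of the statistical analysis described above (hypothetical data).
# Assumes Python with numpy and scipy installed; the serum values below are
# illustrative placeholders, not the study's actual measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical serum zinc (µg/dl) for 55 psoriatic patients (group A) and 55 controls (group B)
zinc_a = rng.normal(57.5, 8.9, 55)
zinc_b = rng.normal(79.4, 7.4, 55)

# Unpaired Student's t test for a continuous variable
t_stat, p_val = stats.ttest_ind(zinc_a, zinc_b)
print(f"zinc: mean A = {zinc_a.mean():.2f}, mean B = {zinc_b.mean():.2f}, t = {t_stat:.2f}, p = {p_val:.4f}")

# Chi-square test for a categorical variable, e.g. hypozincemia (< 60 µg/dl) by group
table = np.array([
    [(zinc_a < 60).sum(), (zinc_a >= 60).sum()],   # group A: low / normal
    [(zinc_b < 60).sum(), (zinc_b >= 60).sum()],   # group B: low / normal
])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"hypozincemia vs group: chi2 = {chi2:.2f}, p = {p_chi:.4f}")

# Spearman's rank correlation between disease status (1 = psoriasis, 0 = control) and serum zinc
status = np.concatenate([np.ones(55), np.zeros(55)])
zinc_all = np.concatenate([zinc_a, zinc_b])
rho, p_rho = stats.spearmanr(status, zinc_all)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.4f}")  # expected negative, as reported in the study
```

With significance taken at p < 0.05 as in the study, the printed p-values play the same role as the SPSS output used here for hypothesis testing.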
Aspects of ethics and the process for maintaining confidentiality The ethical review committee of Dhaka Medical College granted ethical approval for the study. The risk of physical, psychological, social, and legal harm during blood collection was minimal. A unique code was assigned to each patient and followed at every step of the procedure, and the name and address were recorded on a separate sheet to protect anonymity. The study's nature, procedure, goal, risks, and benefits were thoroughly explained to the study subjects before obtaining their written informed consent. Neither a placebo nor an experimental new medicine was used, and the participants' rights, health, and interests were not jeopardized. Results Regarding baseline characteristics, there was no discernible difference between the groups in terms of marital status. The mean age of the psoriasis patients was 35.7 ± 4.3 years, whereas the mean age of the controls was 35.1 ± 4.1 years; age and gender variations between the two groups were negligible. Of the psoriasis patients, 96% had the chronic plaque form and 4% the pustular form. A confirmed family history of psoriasis in first-degree relatives was present in 44 (80%) of the patients. Evaluation of blood pressure and body mass index There were no significant differences in blood pressure, body mass index, or fasting plasma glucose between psoriatic patients and healthy subjects. The average blood pressure of psoriatic patients was 117.40 ± 6.94 mm Hg versus 116.00 ± 7.21 mm Hg for healthy individuals. The average BMI was 20.36 ± 1.81 kg/m² in psoriatic patients and 19.73 ± 3.04 kg/m² in healthy people. Comparison of serum zinc and calcium Serum calcium and zinc were significantly lower in psoriatic patients than in healthy subjects (Table 3). The average serum zinc of psoriatic patients was 57.48 ± 8.86 µg/dl versus 79.42 ± 7.37 µg/dl for healthy individuals, and the average serum calcium was 7.60 ± 0.58 mg/dl in psoriatic patients versus 8.75 ± 0.45 mg/dl in healthy people. Correlation of serum zinc level with psoriasis The study identified hypozincemia in 83.6% and hypocalcemia in 89% of individuals with psoriasis. There was a significant inverse correlation of serum zinc and calcium with psoriasis. In patients with longer disease duration, serum calcium (8.00 ± 0.70 mg/dl) was significantly decreased, whereas serum zinc (61.58 ± 10.8 µg/dl) was not significantly decreased. The correlation of serum zinc level with psoriasis was r = -0.769, p < 0.001, and the correlation of serum calcium level with psoriasis was r = -0.736, p < 0.001, as shown in figure 1. Discussion The goal of the current study was to measure the serum levels of calcium and zinc in psoriasis patients. For comparison, a control group of 55 apparently healthy, age- and gender-matched persons was included alongside the 55 psoriasis patients who constituted the research group. In this investigation, several baseline, clinical, anthropometric, and laboratory variables of the study subjects were compared; there were no appreciable differences across the groups for these characteristics, reflecting the homogeneity of the groups. In the current study, psoriatic patients had a considerably lower mean serum zinc level than controls (57.48 ± 8.86 µg/dl vs. 79.42 ± 7.37 µg/dl, p < 0.001). This finding was in line with a case-control study conducted by Younes et al.
(2010), where the mean serum zinc of psoriatic patients (60.22 ± 15.39 µg/dl) was significantly lower (p < 0.05) than that of the control group. Similar studies were conducted by Kumar et al. (2012), in which psoriatic patients' serum zinc levels were found to be considerably lower than those of the control group (p < 0.001). Afridi et al. (2010) conducted another investigation and found that psoriatic patients had significantly lower zinc levels than the control group (p < 0.001). Lower zinc levels were also found to be significantly associated (p < 0.001) with psoriasis by Nigam (2005) and Basavaraj et al. (2010). In this study, 83.6% of the psoriatic patients had hypozincemia, compared to 16.4% of the control group. Al-Jebory (2012) carried out a similar type of investigation and found that 98% of psoriatic patients had lower serum zinc levels than the controls (p < 0.001). In contrast, research by Ala et al. (2013) found no statistically significant difference in serum zinc concentration between the two groups (p = 0.57); they hypothesized that failure to take into account the degree of skin involvement and its relationship with surface area involvement may be the cause of these contradictory findings for serum zinc levels in psoriasis. In the current investigation, psoriasis patients' mean serum calcium level was significantly lower (7.60 ± 0.58 mg/dl) than that of the control group (8.75 ± 0.45 mg/dl) (p < 0.001). This finding was in line with another study by Mohamad (2013), in which the serum calcium levels of psoriatic patients were significantly lower (p < 0.05) than those of the control group (9.84 ± 0.81 mg/dl), especially in cases of severe psoriasis (6.5 ± 0.33 mg/dl). Similar research by Lee et al. (2005) found that generalized pustular psoriasis significantly reduced serum calcium levels (p < 0.05). In other research, Shahriari et al. (2010) found a significant correlation between hypocalcemia and psoriasis (p < 0.001), while Sreekantha et al. (2010) found that the serum calcium level was considerably lower in psoriasis patients than in controls (p < 0.001). In this study, we found that 11% of psoriatic individuals had normal serum calcium levels while 89% had hypocalcaemia. This finding was in line with a case-control study conducted by Qadim et al. (2013) on 98 patients with psoriasis and 100 controls, in which 42.9% of patients had normal serum calcium levels while 57.1% had hypocalcaemia. A study by Elhaddad et al. (2017) reported no significant difference in serum calcium level between psoriasis and control subjects (p > 0.05), which is contrary to our findings. When the Spearman's rank correlation coefficient (r) test was applied in the current investigation, serum zinc and calcium levels were found to be significantly inversely correlated with psoriasis (r = -0.736, p < 0.001 and r = -0.769, p < 0.001, respectively). No pertinent studies were found in this regard. In the current study, blood calcium and zinc levels were also compared according to the duration of psoriasis; the results show that while the serum calcium level decreased considerably (p = 0.006) with increasing duration of the disease, the serum zinc level did not change significantly (p = 0.066). Al-Jebory (2012) conducted a study of a similar nature, finding that the mean serum level of zinc did not substantially decrease with increasing psoriasis duration (p > 0.05). No pertinent studies were found for calcium.
Conclusion It is established that low serum zinc and calcium levels are observed in psoriasis patients and are associated with the development of the condition. Accordingly, testing for these minerals (zinc and calcium) in psoriasis may help to prevent complications and disease aggravation. Figure 1: Correlation of serum zinc level with psoriasis (A); correlation of serum calcium level with psoriasis (B). Table 1: Distribution of study subjects (N=110) in both groups according to age and gender. Table 2: Baseline parameters of study subjects (N=110) in both groups. Unpaired Student's t test was done to measure the level of significance. Group A: psoriatic patients; Group B: healthy individuals. Table 3: Comparison of serum zinc and calcium of study subjects (N=110) in both groups. Unpaired Student's t test was done to measure the level of significance. Group A: psoriatic patients; Group B: healthy individuals.
Smart Pigeonhole Alert System with SMS Notification Smart Pigeonhole Alert System INTRODUCTION The world's economic trends in business require organizations to respond quickly to demand and opportunities through competition and continuous expansion of domestic and international markets and by being innovative as well. This requires organizational members to move beyond and achieve higher frontiers which are achievable only by having the right information (Opoku, 2015). Information plays very important roles in the life of every organization and its identification involves realization of the pivotal roles of information in achieving various organizational goals and strategically plan for it (Opoku, 2015). Therefore, its benefits and management has attracted the attention of industrial practitioners and academics as well. Most organizations manage this information through the implementation of effective information systems across different levels of management. On the other hand, the purpose of an information system is processing, memorizing, and transmitting appropriate information in place, and the best information system is one that performs these functions effectively within little computational cost and time. 269 All systems including computer systems consist of accepting raw data as inputs, using stored programs for processing the data and producing outputs as timely information. The process of transforming inputs into outputs is known as information management (Shaqiri, 2015) which can be seen in both technical and management perspective (Robertson, 2005). From the technical perspective, information management is seen as managing web content document, records, digital asset, learning systems and enterprise search to improve information need of an organization (Reddy et al., 2009). From management perspective, information management is referred to as managing the organizational, social, cultural and strategic factors to improve information in organizations (Robertson, 2005). There are various roles of different information systems across several domains of applications in an organization. One of them is Management Information Systems (MIS). MIS are a kind of computer information systems that collect and process information from different sources to assist better decision making across all levels of management. MIS provide information in the form of pre-specified reports and displays to support business decision making (Heidarkhani et al., 2013). One approach which organizations can utilize computing capability is through the development of efficient and effective management information systems. MIS is a system using formalized procedures to provide management at all levels in all functions with appropriate information based on data from both internal and external sources, to enable them to make timely and effective decisions for planning, directing and controlling the activities for which they are responsible. The emphasis of MIS is on the uses to which the information is put. Planning, directing and controlling are the essential ingredients for management (Adeoti-Adekeye, 1997). In essence, the processing of data into information and communicating the resulting information to the user is the key function of MIS. It should therefore be noted that MIS exist in organizations in order to help them achieve objectives, to plan and control their processes and operations, to help deal with uncertainty, and to help in adapting to change or, indeed, initiating change. 
Management Information System (MIS) is one of the most important tools in any organization, which aims to provide reliable, complete, accessible, and understandable information in a timely manner to the users of the system (Al-Mamary, Shamsuddin, and Aziati, 2014). MIS is a flow of procedures for data processing based on the computer, and integrated with other procedures in order to provide information in a timely and effective manner to support decision making and other management functions (Bin Haji Sidek, 2010).The authors in (Ensour and Alinizi, 2014) opined that in order for organizations to advance into the future, they must adopt the technology utilization approach, which is a mandatory requirement for such organizations which seek excellence per performance. Moreover, the importance of MIS comes from the benefits that are generated by that system such as providing useful information in a timely manner, improved labor productivity, cost savings, providing information without any delays and mistakes, and improve the management of work (Al-Mamary et al., 2014). One of the effective ways of managing information dissemination and retrieval in an organization is the introduction 270 of Pigeonhole concept where information in the form of mails is delivered for the perusal and usage of different staff in the organization. Pigeonhole is an internal information exchange system used for communication in an organization. It is a creative, informal and traditional way of dropping and picking messages in a set of small open-fronted compartments usually used in a workplace or other organizations where letters or messages may be left for individuals. The chain of exchange is a kind of 'give-and-return'. It is a bi-directional 'pick-and-respond' information transmission mode (Ajayi et al., 2018). It can exist in wooden box or metal shelf or take any other form with several rectangular holes in it which is usually been used in offices or large organizations. Pigeonhole works like a letterbox where letter or memo for specific person will be placed in their letterbox in a typical organization or post office. The authors in Ajayi et al. (2018) stated that each staff in most big organizations has his/her own pigeonhole to receive any important letter or memo related to the official duty. Each staff has been allocated a pigeonhole for any letter or memo from within or outside the department, unit or faculty. Unfortunately, the current conventional pigeonhole system is unable to inform the staff on any urgent letter and this leads to significant delay in responding to such message. The main weakness of the current system is that staff needs to check their respective pigeonhole every day but due to the routine commitment or unforeseen circumstances, the pigeonhole cannot be possibly checked every day. The system proposed by the authors in Ajayi et al. (2018) intends to computerize the traditional way of dropping and picking messages in a set of small open fronted compartments usually used in a workplace or other organization with a two-way communication between the office clerk and the staff via a mobile application. The clerk sends messages via his/her mobile app to the staff notifying them of the availability of memos, letters and other documents in their pigeonhole and the intended staff sends feedback concurrently. 
Several problems have been identified in this method as untimely delivery and responses to messages, poor feedback, time and effort are wasted in checking up pigeonholes on daily-basis and sometimes clerk's unavailability. Therefore, this paper proposes a pigeonhole alert system using sensor device as an improvement to the stated problems in the work done by Ajayi et al. (2018) to detect the presence of a mail in the pigeonhole with the use of a sensor device and a short messaging system to alert the staff on their mobile phone of the need to pick up their mails. To back up the submission of this paper, a germane question was raised in form of Research Question. 1.1 Research Question i. What association exists between the departments and the idea of using mobile pigeonhole system? From this research question, the following hypotheses were formulated: 271 Research Hypothesis Hypothesis 1 H 0 : There is no significant relationship between the usability perception of the lecturers in the departments and the system performance. H 1 : There is a significant relationship between the usability perception of the lecturers in the department and the system performance. Hypothesis 2 H 0 : There is no significant impact of the designed system (performance) on the department's mail management. H 1 : There is a significant impact of the designed system (performance) on the department's mail management. LITERATURE REVIEW Mobile computing has been applied in several pigeonhole platforms in which several researchers have also proposed different techniques and technology to enable timely delivery and access of information across these platforms. A web-based announcement system was developed in Curran and Craig (2001) to provide timely information to students. The system was developed using Java programming language that can deliver a message from web-based interface (electronic form) and sent to a group of students. The system was closely related with Mobile Notice Board project for the delivery of urgent information to students but could not ascertain a feedback module in the deployment. Mohammad and Norhayati (2003) proposed an SMS service system for student collaboration on campus with focus on quick message communication and delivery among students on campus. Simplewire wireless message protocol and ActiveX SMS software development kit serve as the development tool for visual basic. The system was able to send messages but without mobility. The authors in Al-Ali, Rousan and Al-Shaikh (2003) proposed a system to monitor and control patient body temperature and blood pressure. It was achieved using temperature sensor and signal conditioning circuit, microcontroller, LCD display and GSM modem. The systems contributed immensely to the use of SMS technology for message delivery but were limited by high implementation cost. Furthermore, Al-Ali et al. (2004) used the same technology to develop a house monitoring system to ease the ordering and delivering of house equipment using SMS technology via mobile phone. The system was developed using C programming language and the hardware and software implementations consist of 8-bit microcontroller interface and driver circuit for connection between device and microcontroller, LCD display and GSM modem. Kadirire (2005) presented a platform where teachers in schools 272 or presenters at conferences can interact with audience via SMS. 
The system uses java servlets, a Tomcat Servlet container, an oracle database, HTML and an open source SMS gateway which runs on a UNIX platform. When an SMS message is sent to the SMS gateway using an SMS centre number provided, the servlet running on the Tomcat server receives and creates small frames called stickies that housed the SMS text. Each stickie has a thread that initiates the SMS between the teachers or presenters and the participants. An SMS technology that supports classroom interaction between students and lecturers is proposed in Markett et al. (2006). The aim was to bridge the communication between lecturers and students in a classroom environment. Students send SMSes via their mobile phone which are viewed, replied and addressed by the lecturers through a developed software connected with a modem. A class website developed was used as the interactive platform between the students and the lecturer after which the SMSes have been published on the website. The system was developed using Java programming language to provide a GUI that models the users' real mobile phone. However, cases of failure in remote connection between the students and lecturers were recorded. An SMS tool to exchange information in medical area is proposed in Obea et al. (2006). The work was developed as a Radiological Information System (RIS) where physician can send messages to their patients. The idea was to configure RIS system to send SMS when the examination is scheduled and to send another SMS later to remind the patient of the appointment. The system offered an easy medium for timely information delivery In the work of Shahjahan et al. (2008), a vision-based on-line traffic information system is presented to monitor and detect levels of traffic congestion on certain roads in Dhaka City and to make this information available to the travelers. Multiple Web Cams were installed on designated roads. The system captured digital images of the traffic, analyze these images and reach a clear decision about number of cars. Users were able to reach this data by using the short messaging service in their mobile phones. Basically, the system is divided into three independents and interacting modules: the image capturing module which automates the capture of images, the digital image processing module which processes the images and the short message service (SMS) server module which receives SMSs from a user and reply back to him by an SMS. A GSM-based notification speed detection system for monitoring purposes is presented by Sabudin et al. (2008). The motivation was to improve on the existing black box system that only notifies drivers through alarm systems and information is recorded in the black box in order to detect speed effectively. The system includes both hardware and software designs. The hardware design was carried out using microcontroller, PIC16F873, LCD DISPLAY and GSM Mobile Phone. On the other hand, the speed detector is the programmable equipment designed using the JAL (Just Another Language) software. The design integrates a new black-box system with the GSM notification system to send alert information to traffic authorized personnel or Transport Department through Short Message Services (SMS). An m-banking system using the m-commerce technology to provide various banking services to the customers by sending SMS in a two-way communication for the banking sector is presented by Jamil and Mousumi (2008). 
The system consists of five modules: Interfacing Module, SMS Technology Adoption Module, SMS Banking Registration Module, Service Generation Module and Data Failover Module. Four major services such as balance enquiry, balance transfer between authenticated customers, deposit payment and bill payment were provided without physically going to bank thereby saving customers' time. Wahab et al. (2009) developed an integrated e-parcel management system using GSM network. The system notifies user of the upcoming parcel reach in a university via SMS. Their work offered robust platform that enhanced quick message delivery and retrieval that is useful for the day-to-day activities of the university. In the work of Katankar and Thakare (2010), a Short Message Service using SMS Gateway is proposed. The system suggested a multi-level local authentication to the SMS gateway service. The application and mobile carriers are connected via TCP/IP. SMPP was used as the SMS protocol that is secure and sustain greater message volumes 10,000/min. The software was designed using Visual Basic 2005 while the database connection was written using Query based on SQL. SQL was used to store the record and to retrieve the record. The security of the system was done via a web interface for authentication and an encryption method for securing the data. However, there were inadequacies in messaging functionalities and old encryption algorithm was adopted which is easily vulnerable to brute force attack. In Ensour and Alinizi (2014), an SMS application system along with its corresponding server is developed. The system was developed to avoid the reliance of content delivery SMS application of student examination results to SMS Gateway Provider and the commercial SMS application developer which can be managed totally by the users. The Rational Unified Process (RUP) was used to iteratively do the system development during each phase. The system promoted SMS technology in school. However, cases of network traffic on the server affect system performance at peak times. A mobile interface for community health information tracking system (CHITS) was developed by Manguni et al. (2010). The system provides cheap and effective remote connection to the CHITS server for synchronizing real time data. The system was developed using JAVA2ME. Data was collected and compressed using the jgz java library with deflate based compression algorithm to minimize the amount of SMS messages to be sent. SMSLib serves as a platform through which a server phone was connected to the server to receive messages through designed protocol that assures availability and reliability. A cost effective way of transferring data remotely with the use of SMS was established but there were limitations in the amount of data that an SMS can carry and manual copying of database file when not using the remote mode. A framework for the design of a mobile-based alert system for outpatient adherence in Nigeria is proposed in Okuboyejo, Ikhu-Omoregbe and Mbarika (2012). The aim was to ensure adherence to long-term therapy in outpatient condition for effective treatment 274 and reduce or curb the prevalence of diseases. A system for mobile technology that will provide an easy way of complying with drug regimen was developed. The system utilizes Short Messaging Service (SMS) via mobile phones to provide reminders at dosing times. However, the system was limited by the inability to deploy and evaluate a prototype application within its scope, so the system was not tested. 
In Jamaruppin (2014), a pigeonhole notification system using telegram messenger is proposed to help lecturers get notification about the presence of mails in their pigeonholes and to give warning notification when the volumes of their pigeonhole reaches certain level of fullness. The system is a combination of hardware and software that operates together on giving the notification to the pigeonhole owner. The main hardware used was Raspberry Pi while Infra-Red sensor and Ultrasonic sensor were used to detect mails. The main software was telegram messenger while Linux was the operating system to the Raspberry Pi and Python was the language used to command the system. Interview was carried out to get the system requirements and questionnaires were used to collect data in evaluating the performance of the pigeonhole system. Thirty (30) respondents who are lecturers of the institution were randomly sampled. The results from data analysis showed that majority of the respondents chose that the best alternative for mail notification is via SMS. (2014) proposed a short messaging service as an alternative for pushing information to build efficient information passing systems in academic institutions. The system was targeted at improving existing levels of communication between teachers and students of an academic institution. A total solution architecture was proposed. The architecture consists of a central database server to store and forward requests, a networking interface to send SMS successfully and a client end application to read and acknowledge the SMS. The system was implemented using Open Source API and a middleware where one can build a service wherein students do not have to pay for student information services. The system provides a high degree of security and confidentiality and generates timely information needed in decision support system of the institution. However, there are possibilities for network failure and response of the entire architecture therein. In Norhairi (2015), an intelligent pigeonhole with e-mail notification using wireless system was proposed. The work focuses on incorporating electronics technology into conventional mailboxes as a solution for providing convenient mail notification and retrieval. Arduino UNO and Infrared Sensor were incorporated by linking the user's pigeonhole with e-mail facilities and this enables the users to be notified whenever a new mail is delivered, and pigeonhole is full. The system provides an easy and effective platform for sending e-mail to notify the users about important new mails reaching their pigeonhole or mailbox. A smart pigeonhole system was proposed through sending notification by short messaging system in Abdullah (2015). The aim was to develop a time-saving system for resident to get a notification about arriving mail via short messaging system on their phone. The system was implemented using several hardware components including 275 infrared sensors, ultrasonic sensors and arduinoyun board. Infrared sensors were used to detect the presence of the mail into the box while the ultrasonic sensor was used to detect the level of fullness of the mailbox. The system achieved SMS notification for pigeonhole mail arrivals, but the sensor could only be powered by electricity. Cases of power outage followed by subsequent mail arrival would limit the system functionality. 
Asmida (2015) developed a pigeonhole smart box for university application to assist lecturers who use pigeonholes, so that they receive an SMS notification once students drop an assignment in their pigeonhole. An Arduino microcontroller, an IR sensor and a GSM module were used in building the system; the Arduino and GSM module handled the wireless transmission. The sensor installed in the pigeonhole smart box starts functioning once it detects a received document or assignment and automatically sends an alert signal via SMS notification to the lecturer's cell phone.

Pramanik et al. (2016) also developed a GSM-based smart home and digital notice board using a GSM SIM900 module that provides users with a simple, fast and reliable way to put up important notices on an LCD, where users can send a message to be displayed on the LCD. The system consists of a 32-bit ARM-based LPC2148 microcontroller, a GSM SIM900 module, an LCD, a motor and an Android application for user interaction with the hardware. SMS messages are sent through the Android application to the GSM SIM900 module, which has a SIM card inside it. However, network failures may significantly affect system performance.

The development of an intelligent pigeonhole using GSM is proposed by Binti Wahab (2016) to notify users when any document arrives inside their pigeonhole, using Short Messaging Service (SMS). The basic idea was monitoring software, and the main hardware components were a SIM900A GSM modem, a PIC16F877A microcontroller and an infrared (IR) sensor system. The implementation involves programming the control system, with the PIC16F877A microcontroller acting as the main processor that controls the system from the moment the input gives a signal until an output is produced. The system provides two-way transmission between users and the PIC16F877A microcontroller.

Wahab et al. (2016) also developed an electronic pigeonhole system integrated with the GSM network to send a notification of any incoming item. The system was needed to send a short message service (SMS) notification to designated users when a new letter is placed in their pigeonhole. A detection circuit containing a voltage regulator, infrared sensors and microcontrollers to acknowledge the existence of new post items was integrated with a GSM modem to transmit the SMS to the specific user. The system was attached to a metal pigeonhole and tested, and notifications were immediately sent to the intended users for further action. However, the implementation was costly, and there was no possibility of ascertaining good performance of the system on a wooden pigeonhole platform.

Duru, Ochonu and Okoronkwo (2017) proposed a mobile-phone-controlled wireless electronic notice board that can be used to circulate information in places such as schools, offices, homes and other establishments. The system includes a reliable and authentic wireless display of SMS on an LCD with a mobile phone and microcontroller using GSM technology. The microcontroller is interfaced to a GSM modem via a MAX233 level converter to convert RS232 voltage levels to TTL voltage levels and vice versa. A 20x4 LCD display is attached to the microcontroller for display. The microcontroller coding was done in embedded C with the help of the MikroC integrated development environment (IDE). The system offers flexibility and remote control of information to its users, as information is transmitted over a wireless network.

In a similar study, Ajayi et al. (2018) presented a mobile pigeonhole alert system.
The authors were motivated to develop a system that allows easy and enhanced communication between the administrator in charge of messages (the office clerk) and the recipients (academic staff), to ensure proper dissemination of information. The aim was to ensure quick notification, delivery of and response to mail in an academic institution. The system implementation was accomplished using Java and the XML components of the Android Software Development Kit (SDK). The authors' contribution bridged the gap between the administrator and the user of the pigeonhole by providing a feedback mechanism to alert the user of a mail when it has not been attended to. However, the system involves human intervention, in which the administrator in charge of the mail has to send notifications to staff. Also, training on how to use the system is required whenever there is a change of administrator.

METHODOLOGY

The mathematical model for the proposed system was adopted from the model proposed in Ajayi et al. (2018). The general idea of the pigeonhole principle states that when there are k pigeonholes and k+1 mails, then there will be one pigeonhole with at least two mails (Sengothai, 2016). The idea sounds trivial, but its uses are numerous; it is explored here to construct a model that supports the implementation of this research. A conventional pigeonhole system consists of one or more pigeonholes for each staff member in an institution. Let P denote the pigeonholes and S a variable denoting staff:

P = {p_1, p_2, ..., p_k},  S = {s_1, s_2, ..., s_k}    (Equation 1)

where Equation 1 represents one or more pigeonholes for one or more staff, and k is the count number for the upper bound value.

A(m) = {a_1, a_2, ..., a_j}, j ≥ 0, for each message m ∈ M    (Equation 2)

where a and m represent an alert and a message respectively. Equation 2 indicates zero or more alerts for every message available to the staff in his or her pigeonhole. The map functions for the pigeonhole process are presented as follows:

f : S → P, with each s ∈ S mapped to exactly one p ∈ P    (Equation 3)

indicating that for each staff member there exists exactly one pigeonhole for message delivery.

g : S → M, with |g(s)| ≥ 1    (Equation 4)

Equation 4 denotes that any staff member can have one or many messages.

h : M → A, with |h(m)| ≥ 1    (Equation 5)

Equation 5 denotes that there exist one or many alerts for each message received in the staff pigeonhole.

m is received ⟺ at least one alert for m is confirmed on the phone    (Equation 6)

In Equation 6, when an alert is not received on the staff member's mobile phone, then no message has been received; a message is only received when one or many alerts are confirmed on the phone.

r(s, m) ∈ {0, 1}    (Equation 7)

In Equation 7, r indicates the staff response to the message available in the pigeonhole. When the staff member responds to the received message (r = 1), no new message is found in the pigeonhole; otherwise, when no response is received (r = 0), one or more messages are left in the pigeonhole unanswered.

Circuit Diagram

The circuit diagram of the proposed system is presented in Figure 1. The pigeonhole circuit sends a message to the user about the arrival of a mail. The circuit is powered by a rechargeable 6-volt battery. Once the system is powered, a high-brightness white light comes up from the light-emitting diode (LED), which is used to trigger the light-dependent resistor (LDR) to detect the arrival of a mail in the box. The sensor then sends a signal to the microcontroller, an ATmega8. The microcontroller in turn signals the GSM module on the arrival of a mail. The GSM module uses a SIM800 module designed as data communication equipment. It comprises the ground (GND), the receive data pin (Rx), which is an input, the transmit data pin (Tx), which is an output, and an antenna to detect the network signal.
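To make the detection-to-notification behaviour concrete, the following is a minimal, hypothetical Python sketch of the logic this circuit implements. It abstracts away the hardware: the threshold value, the placeholder phone number and the send_sms stub are illustrative assumptions and do not correspond to the actual ATmega8/SIM800 firmware, which is not given in the paper.

```python
# Minimal, hypothetical sketch of the detection-to-notification logic.
# The LDR reading, threshold, and SMS sending are mocked; the real system
# runs on an ATmega8 with a SIM800 module, whose firmware is not shown here.
import random
import time

LIGHT_THRESHOLD = 300                  # assumed ADC threshold separating "beam blocked" from "beam clear"
REGISTERED_NUMBER = "+2348000000000"   # placeholder recipient number

def read_ldr() -> int:
    """Mocked LDR reading: low values mean the LED beam is blocked by a mail item."""
    return random.choice([120, 650])   # simulate "mail present" vs "empty pigeonhole"

def send_sms(number: str, text: str) -> None:
    """Stand-in for the SIM800 Tx path (in firmware this would issue AT commands)."""
    print(f"SMS to {number}: {text}")

def monitor_pigeonhole(cycles: int = 5) -> None:
    mail_already_reported = False
    for _ in range(cycles):
        beam_blocked = read_ldr() < LIGHT_THRESHOLD
        if beam_blocked and not mail_already_reported:
            send_sms(REGISTERED_NUMBER, "A new mail has arrived in your pigeonhole.")
            mail_already_reported = True      # avoid repeated alerts for the same item
        elif not beam_blocked:
            mail_already_reported = False     # pigeonhole emptied; re-arm the alert
        time.sleep(0.1)

if __name__ == "__main__":
    monitor_pigeonhole()
```

The re-arming flag mirrors the LDR behaviour described here: an alert is raised once per mail item, and the detector only becomes ready again when the light path is restored.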
Once the microcontroller sends a signal to the GSM module, the Tx pin sends a message about the mail arrival in the pigeonhole to the user whose number is registered on the SIM module. The capacitors are used to regulate the frequency of the circuit. When the light from the LED fails to come up, the LDR becomes non-conducting and the presence of a mail will not be detected.

Sensor Module

The sensor module consists of several hardware components that work together to ensure the smooth running of the system. Figure 2 shows the sensor module. The components housed in the sensor module include a switch, a battery, a light-emitting diode (LED), a light-dependent resistor (LDR), a GSM module, a microcontroller, a capacitor, and a transistor.

Pigeonhole Interface

The actual pigeonhole implementation was carried out on both wooden and metal platforms for testing.

a. Wooden Pigeonhole Implementation: Figure 3 shows the implementation of the sensor module using a wooden pigeonhole.

b. Metal Pigeonhole Implementation: Figure 4 shows the implementation of the sensor module using a metal pigeonhole.

Statistical Treatment of Data

In this research, data were collected by administering a questionnaire to respondents in order to infer a significant relationship between the independent and dependent variables. These data were further subjected to statistical analysis using correlation and regression analyses respectively.

Correlation Analysis

Correlation analysis assesses the linear relationship between two variables, providing a measure of both the strength and direction of the relationship. Correlation makes no assumption of causality in the relationship. It assumes only a linear relationship, and variables with a strong nonlinear relationship may show poor or absent correlation. Correlation can be performed on both parametric and nonparametric variables. The most commonly used parametric method is the Pearson product-moment correlation, which is adopted in this research, while the Spearman rank-order correlation and Kendall methods are the commonly used nonparametric methods. In certain situations, the correlation relationship can be linear up to a certain extent, beyond which it may disappear or remain linear but to a different degree (Shi and Conrad, 2009).

Regression Analysis

Regression analysis assesses the relationship between one dependent (observed) variable and one or more independent (explanatory) variables, with an implied causal relationship. Regression goes beyond correlation by inferring relationships between variables, allowing modelling of causal relationships and predicting the value of the dependent variable from a given value of the independent variable(s). Unlike correlation analysis, which makes few assumptions, regression analysis is based on several underlying assumptions. Regression analysis includes both linear and nonlinear regression. Linear regression involves a linear model, which is linear with respect to its parameters. Linear regression models may be simple (a single independent variable) or multiple (two or more independent variables). Nonlinear regression deals with exponential, power, or more complex relationships (Shi and Conrad, 2009). Thus, the independent variables are the perceptions of the lecturers across various departments; these perceptions are the factors that constitute their views about the designed system, while the system performance serves as the dependent variable. In this case, the dependent variable is the computed value for the performance of the system.
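As an illustration of the correlation and regression treatment described above, the following is a minimal sketch using hypothetical per-department scores, not the study's data; the values and the use of scipy (rather than the study's statistical package) are assumptions for demonstration only.

```python
# Minimal sketch of the correlation and regression treatment described above,
# run on hypothetical per-department perception scores (not the study's data).
import numpy as np
from scipy import stats

# Hypothetical mean usability-perception scores for nine departments (independent variable)
perception = np.array([3.8, 4.1, 3.5, 4.4, 3.9, 4.2, 3.7, 4.0, 4.3])
# Hypothetical system-performance scores for the same departments (dependent variable)
performance = np.array([3.6, 4.0, 3.4, 4.5, 3.8, 4.1, 3.6, 3.9, 4.2])

# Pearson product-moment correlation (strength and direction of the linear relationship)
r, p_corr = stats.pearsonr(perception, performance)
print(f"Pearson r = {r:.3f}, p = {p_corr:.4f}")

# Simple linear regression: predict performance from perception
result = stats.linregress(perception, performance)
print(f"performance ~ {result.slope:.3f} * perception + {result.intercept:.3f}")
print(f"R-squared = {result.rvalue ** 2:.3f}, regression p = {result.pvalue:.4f}")
```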
The computed value denotes the perception or view of the respondents (lecturers) who answered the questions about the system's performance during system testing. Thus, the average success rate (ASR) and average failure rate (AFR) of the designed system are presented in Equations 8 and 9 respectively:

ASR = (number of successful notifications / n) × 100    (Equation 8)

AFR = (number of failed notifications / n) × 100    (Equation 9)

where n = number of lecturers.

RESULT AND DISCUSSION

The system was tested on both wooden and metal pigeonhole platforms, and a structured questionnaire was developed and administered to two hundred (200) lecturers across nine (9) departments of the Faculty of Science of a tertiary institution to elicit information about respondents' perceptions and system performance during testing. The data collected are presented in Table 1 and the graphical representation is shown in Figure 5. The average success rate of the system under testing was 92.5%, while the system failed with an average failure rate of 7.5%. This was attributed to cases of network failure in which messages were not delivered to the respondents' mobile phones in a timely manner. The analysis of the results was carried out using regression analysis of respondents' perceptions of system performance, as presented in section 4.1.

Table 2 shows the descriptive statistics of the given data. It shows the relationship between the central tendency (mean) and the variability (standard deviation). The result shows that over half of the selected respondents agree with the idea of the designed system. Table 3 shows the correlation between the departments. The Pearson correlation coefficient shows a positive correlation between the usability perception of the lecturers across the various departments and the performance of the designed system, because there was no negative value in the result.

For Research Hypothesis 2, Table 4a shows the regression coefficients that describe the significant relationship between the independent variable and the dependent variable. The p-value of 0.013 was used to determine the significance of the system testing. Since the p-value is less than 0.05, there is a significant association between the lecturers' usability perception and the system performance; hence, the objectives of the system were achieved. If the value had been greater than 0.05, it would have meant that the system did not meet the expected objectives. Table 4b shows the model summary. The R square is 89%, which indicates that the great majority of the lecturers from each of the departments are in support of the designed system. Table 4c shows the significance of the model. Since the significance value in the table is 0.026, the model is significant, as the value is less than 0.05. Table 5 shows a comparative analysis of this work with other existing works. The results show that the proposed system was effectively implemented and compares favourably with, and improves upon, the existing results in the reported studies.
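To make Equations 8 and 9 concrete, the following is a small, hypothetical worked example; the delivery-outcome vector is invented purely to reproduce a 92.5% / 7.5% split over 200 lecturers, matching the figures reported above, and is not the study's data.

```python
# Hypothetical illustration of Equations 8 and 9: average success and failure
# rates computed from per-lecturer delivery outcomes (1 = SMS delivered, 0 = failed).
delivered = [1] * 185 + [0] * 15   # 185 successes, 15 failures, n = 200 lecturers
n = len(delivered)

asr = sum(delivered) / n * 100            # Equation 8
afr = (n - sum(delivered)) / n * 100      # Equation 9 (equivalently 100 - ASR)

print(f"ASR = {asr:.1f}%")   # 92.5%
print(f"AFR = {afr:.1f}%")   # 7.5%
```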
Antibacterial activity tests of endophytic bacterial isolates from the tea plant (Camellia sinensis) against Staphylococcus aureus and Staphylococcus epidermidis

Staphylococcus is one of the most common types of bacteria in Asia causing local infectious diseases of the skin, nose, urethra, vagina and digestive tract, as well as pneumonia, endocarditis, septic arthritis, and septicemia. Staphylococcus aureus and Staphylococcus epidermidis are the most common Staphylococcus species in Asia. Tea plants contain bioactive compounds and harbour endophytic bacteria, which are widely used as antimicrobial agents. Endophytic bacteria are bacteria that live in plant tissues, are not pathogenic, and can provide benefits to the host plant. The purpose of this study was to determine the antibacterial activity of endophytic bacterial isolates of the tea plant (Camellia sinensis) against the growth of Staphylococcus aureus and Staphylococcus epidermidis. The antibacterial activity test of the tea plant endophytic bacteria involved a series of steps: sample selection, surface sterilization of samples, isolation of endophytic bacteria on agar medium, screening, suspension of endophytic bacteria in 0.9% NaCl standardized to 0.5 McFarland, culture of endophytic bacteria in nutrient broth medium, preparation of endophytic bacterial supernatant, and antibacterial activity testing by the paper disc diffusion method. The results show antibacterial activity of the endophytic bacterial supernatants of isolates B14, B23, and A2 against the growth of Staphylococcus aureus and Staphylococcus epidermidis. The best antibacterial activity was found in endophytic bacterial isolate B14, with inhibition zones of 7.75 mm and 12.5 mm, followed by isolate B23 with inhibition zones of 7.5 mm and 8.25 mm and isolate A2 with inhibition zones of 7.42 mm and 8.16 mm. The endophytic bacteria of the tea plant showed antibacterial activity against the growth of Staphylococcus aureus and Staphylococcus epidermidis.

Introduction

Staphylococcus is a gram-positive, coccus-shaped, white bacterium [16]. At present, the genus Staphylococcus has been divided into 45 species and eight sub-species [2]. Staphylococcus is commonly found in the surrounding environment, on medical equipment and on the human body [2][1]. Staphylococcus is part of the normal flora and is estimated to be found in around 30% of the human body, especially in the anterior nares [1]. However, due to invasion, mutation and an increase in the number of Staphylococcus bacteria, several cases have been linked to pathogenic activity.

Control of sample sterility was carried out using 100 μl of the sterile distilled water from the last rinse, plated on Nutrient Agar medium, which was then incubated together with the isolates of the tea plant samples at 37 °C for 24-48 hours.

Purification and culture of endophytic bacteria

One of the endophytic bacterial isolates closest to the sample was taken and inoculated on nutrient agar using the quadrant streak plate method, then incubated at 37 °C for 24-48 hours to obtain separate bacterial colonies. Purification was repeated, without a fixed number of passages, to ensure that the isolates obtained were truly pure.

Preparation of test bacterial and endophytic bacterial suspensions

One ose of Staphylococcus aureus and of Staphylococcus epidermidis from 24-hour rejuvenated cultures was taken and suspended in a tube containing 10 ml of sterile 0.9% NaCl solution.
The endophytic bacterial suspension was made aseptically by taking one ose of each endophytic bacterial isolate from a 24-hour rejuvenated culture on nutrient agar and suspending it in a tube containing 10 ml of sterile 0.9% NaCl solution. The turbidity obtained was then compared with the 0.5 McFarland standard, which is equivalent to a bacterial cell density of (1-2) × 10^8 CFU/ml [8].

Making positive and negative controls

The positive control was 500 IU chloramphenicol powder at a concentration of 30 ppm dissolved in 10 ml of sterile 0.9% NaCl. The negative controls were nutrient agar medium and a sterile paper disc.

2.7. Antagonistic test of endophytic bacterial isolates

The selection of the most potential endophytic bacteria from the 12 endophytic bacterial isolates was carried out through screening tests of the isolates against Staphylococcus aureus and Staphylococcus epidermidis using the paper disc diffusion method on nutrient agar [26]. One ose of each endophytic bacterial isolate and of the indicator bacteria Staphylococcus aureus and Staphylococcus epidermidis from 24-hour rejuvenated cultures was suspended in a tube containing 10 ml of sterile 0.9% NaCl solution, and the turbidity obtained was compared with the 0.5 McFarland standard. Next, 100 µl of the indicator bacterial suspension was spread evenly on NA media in separate Petri dishes using sterile cotton swabs. Then, 5 μl of endophytic bacterial suspension was dropped onto a sterile paper disc. Paper discs loaded with the endophytic bacterial suspensions were then placed and arranged on the nutrient agar that had been inoculated with the test bacterial suspension. When finished, the Petri dishes were sealed with plastic wrap, labelled and incubated at 37 °C for 24-48 hours.

Making endophytic bacterial culture in nutrient broth medium

One ose of a potential endophytic bacterium from a 24-hour rejuvenated culture on nutrient agar was placed in 5 ml of nutrient broth medium to make a starter and agitated on a rotary shaker at 120 rpm and 37 °C for 24 hours. The starter was then added to 100 ml of nutrient broth medium and mixed until homogeneous. The potential endophytic bacterial culture in nutrient broth was then agitated on a rotary shaker at 120 rpm and 37 °C for 52 hours, with sampling for measurement of the growth curve and determination of the optimum time for the production of secondary metabolites by the potential endophytic bacteria. Bacterial growth curves were obtained by measuring the optical density (OD) using a UV-VIS spectrophotometer (λ = 620 nm) [12]. Sampling was carried out over 52 hours at 4-hour intervals, i.e. at 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, and 52 hours.

Making endophytic bacterial supernatant

A 10 ml starter, consisting of nutrient broth medium inoculated with one ose of a potential endophytic bacterial isolate and shaken and agitated for 24 hours, was placed into an Erlenmeyer flask containing 100 ml of sterile nutrient broth medium and agitated on a rotary shaker at 120 rpm and 37 °C. Then, 10 ml of the potential endophytic bacterial culture was centrifuged at 4000 rpm and 4 °C for 15 minutes to separate the supernatant at the top from the debris at the bottom. The endophytic bacterial supernatant was then separated from the debris by transferring the supernatant with a sterile micro tip to another sterile container/bottle, and it was stored in a cooler.
Antibacterial test with endophytic bacterial supernatant

The antibacterial activity test was carried out by dividing a sterile Petri dish into 5 sectors at angles of 72°, marked on the outside of the bottom of the dish. Each sector was labelled with the code of the potential endophytic bacterial supernatant (Sp) to be used. Twenty millilitres of nutrient agar medium was poured into the sterile Petri dish. After the medium solidified, Staphylococcus epidermidis or Staphylococcus aureus was inoculated evenly over the surface of the medium with a sterile cotton swab. The antibacterial test with the potential endophytic bacterial supernatants was carried out by dropping 5 µl of each supernatant onto a paper disc using a micropipette. The paper discs were transferred onto the nutrient agar in the Petri dish, to which 100 µl of the test bacterial suspension had been added. Each paper disc with endophytic bacterial supernatant was placed in the sector labelled with the corresponding supernatant (Sp) code 1, 2, or 3. One sector of the nutrient agar received a sterile paper disc as a negative control, and one paper disc containing 5 μl of chloramphenicol at 30 μg/ml was used as a positive control. All plates were incubated for 24-48 hours at 37 °C. Each treatment was repeated 3 times to obtain accurate antibacterial test results. The diameter of the inhibition zone formed was measured using calipers.

Measurement of endophytic bacterial inhibition zones

Inhibition zones were measured using calipers. Two lines perpendicular to each other were drawn through the centre of the paper disc, while the third and fourth lines were drawn between these two lines at an angle of 45°. Measurements were made along the four lines at different places: the inhibition zone diameter along line AB, plus the inhibition zone diameter along line CD, plus the inhibition zone diameter along line EF, plus the inhibition zone diameter along line GH. The sum of the four measurements was then divided by four to obtain the average. The lines ab, cd, ef and gh only indicate the diameter of the paper disc used.

Characterization of bacteria

Bacterial characterization was carried out macroscopically and microscopically. Macroscopic observations covered the morphology of single colonies of the endophytic bacterial isolates, namely shape, edge, elevation, and colour [5]. Microscopic observations were made through Gram staining to determine whether the bacteria were Gram-positive or Gram-negative. Gram staining was carried out by smearing potential endophytic bacterial isolates from 24-hour rejuvenated cultures onto a slide and heat-fixing them over a spirit burner. Crystal violet stain was dropped onto the smear and left for 45 seconds. The remaining stain was washed off with running water, then iodine solution was dropped on and left for 1.5 minutes. The smear was washed with running water, decolourized with 70% alcohol for 30 seconds, washed again with running water and dried. A dilute safranin solution was then applied and left for 20 seconds, after which the smear was washed with running water and dried. The preparation was observed under a microscope from low to high magnification, up to 1000×. Gram-positive bacteria appear purple and Gram-negative bacteria appear red in Gram staining.
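The four-line measurement rule described above reduces to a simple average. The following is a small sketch of that calculation; the caliper readings shown are hypothetical and are given in millimetres.

```python
# Small sketch of the four-line measurement rule described above: the inhibition
# zone diameter is the mean of the four caliper readings taken along lines AB,
# CD, EF and GH. The readings below are hypothetical, in millimetres.
def mean_inhibition_zone(ab: float, cd: float, ef: float, gh: float) -> float:
    """Average of the four diameter measurements through the disc centre."""
    return (ab + cd + ef + gh) / 4.0

# Example: four readings for one replicate of an isolate
print(mean_inhibition_zone(7.8, 7.6, 7.9, 7.7))   # 7.75 mm
```

In the study each treatment is replicated three times, so the per-isolate value reported would be the mean of three such replicate averages.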
Data analysis

Quantitative data, in the form of inhibition zone diameters around the paper discs on the nutrient agar, were analysed statistically, with data tabulation following a completely randomized design (RAL) factorial pattern and One-Way ANOVA (Analysis of Variance) at the 5% level. Where significant differences were found, further testing was performed using the LSD (Least Significant Difference) test to determine the best effect, using the SPSS 16.0 application.

Result and discussion

Tea plant samples (Camellia sinensis) were taken from the Medini Tea Plantation, Ungaran Mountain, Kendal District, Semarang, Central Java. The study was conducted at the Integrated Laboratory, Diponegoro University, Semarang. Figure 2 shows the results of the isolation of endophytic bacteria of the tea plant (Camellia sinensis): from the three samples, colonies of endophytic bacterial isolates were obtained from the roots, stems, and leaves of the tea plant, together with the surface sterilization controls of each sample, which remained sterile. The colonies of the endophytic bacterial isolates were of medium size and white to yellowish. Most bacterial colonies appeared in the root samples, followed by the stem samples and finally the leaf samples. Endophytic bacteria are generally more common in the roots and stems, and decrease in number in the tubers and leaves [21][22]. Figure 3 shows the results of the purification of the endophytic bacterial isolates, which yielded 12 pure endophytic bacterial isolates: 2 from leaf samples (D1 and D2), 7 from stem samples (B11, B12, B13, B14, B21, B22, and B23), and 3 from root samples (A1, A2, and A3). The 12 endophytic bacterial isolates were then characterized macroscopically and microscopically, as shown in Table 1.

Table 1 shows the macroscopic characteristics of the 12 endophytic bacterial isolates, including colony size, of which there were 2 categories, small and moderate. Colony shapes were of 3 types: circular, irregular, and filamentous. Elevation was of 2 types: flat and undulate. Margins were of 3 types: entire, lobate, and filamentous. Microscopically, all 12 endophytic bacterial isolates are Gram-positive, as their cells stained purple in Gram staining [32]. The bacterial cells are coccus-shaped, and the endophytic bacterial isolates B12 and B23 form endospores. Endospores are a form of bacterial defence against unfavourable environments; they are thermostable and resistant to extreme environmental conditions [30]. Gram-positive bacteria have thick cell walls, like a thick net made of peptidoglycan (50-90% of the weight of the cell envelope), a cell membrane layer, and no outer membrane, whereas Gram-negative bacteria have a thin cell wall (10% of the weight of the cell envelope) that lies between two layers of cell membrane [24].

Figure 4 shows an overview of the screening tests of the 12 endophytic bacterial isolates using the disc diffusion method [23]. Clear zones formed around the paper discs that had been loaded with suspensions of endophytic bacterial isolates B14, B23, and A2 and around the positive control (+). The clear zone is an indicator of the inhibition zone of the endophytic bacterial isolates and reflects their antibacterial activity against Staphylococcus aureus and Staphylococcus epidermidis. The symbol (-) indicates the negative control, around which no inhibition zone was formed.
Tables 2 and 3 show the average inhibition zone diameters of the 12 endophytic bacterial isolates from the screening test that showed antibacterial activity against the growth of Staphylococcus aureus and Staphylococcus epidermidis. The largest average inhibition zone diameters were found for isolate B14, with average inhibition zone diameters of 7.3 mm and 8.7 mm, followed by endophytic isolate B23 with inhibition zone diameters of 9.42 mm and 10.7 mm and A2 with inhibition zone diameters of 7 mm and 6.41 mm. These are the most potential endophytic bacteria, because the three isolates showed antibacterial activity that consistently inhibited both test bacteria, with inhibition zone diameters falling into the medium to strong categories. Based on the diameter of the inhibition zone formed, antibacterial activity is classified into four categories: weak if the inhibition zone diameter is 1-5 mm, medium if it is 6-10 mm, strong if it is 11-20 mm, and very strong if it is >20 mm [29].

Figure 6 shows the growth curves of endophytic bacterial isolates B14, B23, and A2. From the curves it appears that the growth of the three endophytic bacterial isolates follows the typical pattern of 4 growth phases, namely the lag phase, the logarithmic/exponential phase, the stationary phase, and the death phase [12]. The lag phase is the phase of bacterial adaptation to the growth environment, in which bacterial cells grow but with very little cell division, even though cell metabolism continues [12]. The log phase is the phase in which bacteria divide constantly with a very fast growth rate, so that the increase in bacterial cells is very high [12]. The stationary phase is a phase of bacterial growth that is very slow or stationary [12], while the death phase is the phase in which bacteria die due to reduced nutrients in the medium and the presence of toxic compounds in the growth medium. In the death phase, many cells die, whereas those that are still alive are no longer able to divide. The depletion of nutrients and the accumulation of inhibitory products such as acids are some of the factors that influence cell death [12]. In the endophytic isolates B14 and A2 there was no lag phase, or bacterial adaptation phase to the environment; this may be due to the use of a starter, which facilitates the adaptation of the endophytic bacteria to the growth environment, whereas the endophytic bacterial isolate B23 did show a lag phase. This endophytic bacterial isolate may require a longer time to adapt to the growth environment. This is also consistent with the observation that the cells of endophytic bacterial isolate B23 produce endospores, a form used by bacteria to survive less favourable and extreme environments. The initial stationary phase of the three isolates began at the 30th hour and the final stationary phase was reached at the 32nd hour. The final stationary phase is the phase in which bacteria most optimally produce secondary metabolites, one of which is antibiotics [12].

Figure 7 shows an overview of the results of the antibacterial activity tests against Staphylococcus aureus and Staphylococcus epidermidis using the paper disc diffusion method. The figure shows the inhibition zones around the paper discs loaded with supernatant from endophytic bacterial isolates B14, B23, and A2, and around the positive control with a suspension of the antibiotic chloramphenicol.
In the negative control, no inhibition zone was formed around the sterile paper disc. Table 4 shows the average inhibition zone diameters for the antibacterial activity test using the supernatants of endophytic bacterial isolates B14, B23 and A2 against the test bacterium Staphylococcus aureus. The largest inhibition zone diameter was formed by isolate B14, with an inhibition zone of 7.75 mm, followed by isolate B23 with an inhibition zone diameter of 7.5 mm and A2 with an inhibition zone diameter of 7.42 mm. All three isolates showed antibacterial activity in the medium category. Based on the diameter of the inhibition zone formed, antibacterial activity is classified into four categories: weak if the inhibition zone diameter is 1-5 mm, medium if it is 6-10 mm, strong if it is 11-20 mm, and very strong if it is >20 mm [29].

Table 5 shows the inhibition zone diameters from the antibacterial activity test of endophytic bacterial isolates B14, B23, and A2 against Staphylococcus epidermidis. The biggest inhibition zone diameter was found for isolate B14, with an inhibition zone diameter of 12.4 mm, followed by isolate B23 with an inhibition zone diameter of 8.25 mm and A2 with an inhibition zone diameter of 8.16 mm. The antibacterial activity of endophytic bacterial isolate B14 falls into the strong category, because its inhibition zone diameter lies in the 11-20 mm range, while endophytic bacterial isolates B23 and A2 fall into the medium category, because their inhibition zone diameters lie in the 6-10 mm range [29].

Data analysis

The results of the data analysis using the One-Way ANOVA test showed that the inhibition zone diameter data of the three endophytic bacterial isolates B14, B23, and A2, for their antibacterial activity against the growth of Staphylococcus aureus and Staphylococcus epidermidis, differed significantly at the 0.05 level. Tables 6 and 7 show the results of this analysis, in which the inhibition zone diameters of the three isolates B14, B23, and A2, for their antibacterial activity on the growth of Staphylococcus aureus and Staphylococcus epidermidis, differed significantly at the 0.05 level. This is because the significance values of the three isolates against both Staphylococcus aureus and Staphylococcus epidermidis are below 0.05 (p < 0.05). The significance value of the three endophytic bacterial isolates B14, B23, and A2 against the test bacterium Staphylococcus aureus was 0.000, and against the test bacterium Staphylococcus epidermidis it was 0.001. Tables 4.9 and 4.10 show the LSD test results: there were no significant differences between isolates B14, B23, and A2 against Staphylococcus aureus (p > 0.05), whereas against Staphylococcus epidermidis the effect of B14 was significantly different from those of the endophytic bacterial isolates B23 and A2 (p < 0.05), while there was no difference between isolates B23 and A2 (p > 0.05).
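The following is a minimal sketch of the analysis workflow described above: a one-way ANOVA across the three isolates followed by pairwise comparisons as an LSD-style follow-up. The triplicate inhibition-zone values are hypothetical, not the study's data, and scipy is used here in place of SPSS 16.0; the unadjusted pairwise t-tests are only an approximation of the LSD procedure.

```python
# Sketch of the analysis described above: one-way ANOVA across the three
# isolates followed by pairwise comparisons (an LSD-style follow-up).
from itertools import combinations
from scipy import stats

zones = {                      # hypothetical inhibition zone diameters (mm), 3 replicates each
    "B14": [12.3, 12.5, 12.4],
    "B23": [8.1, 8.4, 8.25],
    "A2":  [8.0, 8.3, 8.2],
}

f_stat, p_value = stats.f_oneway(*zones.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:             # only run post-hoc comparisons when the ANOVA is significant
    for a, b in combinations(zones, 2):
        t, p = stats.ttest_ind(zones[a], zones[b])
        verdict = "significant" if p < 0.05 else "not significant"
        print(f"{a} vs {b}: p = {p:.4f} ({verdict})")
```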
Conclusion

The tea plant (Camellia sinensis) endophytic bacterial isolates B14, B23, and A2 have antibacterial activity against Staphylococcus aureus and Staphylococcus epidermidis. The best antibacterial activity was shown by endophytic bacterial isolate B14, with antibacterial activity in the medium category against Staphylococcus aureus and in the strong category against Staphylococcus epidermidis, followed by B23 and A2 with antibacterial activity in the medium category against both Staphylococcus aureus and Staphylococcus epidermidis. The tea plant (Camellia sinensis) endophytic bacterial isolates B14, B23, and A2 showed a stronger antibacterial effect against Staphylococcus epidermidis than against Staphylococcus aureus.
Financial Inclusion in Rural and Urban Nigeria: A Quantitative and Qualitative Approach

Study Background

Marketing financial inclusion services plays an important role in the essential functioning of individuals and businesses and in the macro-economic stability of countries (Mogaji et al., 2021; Soetan, 2014; Soetan et al., 2021). The success or failure of banks in marketing financial inclusion services therefore significantly affects people in their daily activities. There are still many people who are financially excluded because they are not financially literate and, as a result, are not able to access financial products and services (Asuming et al., 2018; Mogaji et al., 2021). Financial inclusion is defined as the ability of the financially excluded in society to own an account, have access to credit and be able to conduct transactions in a formal financial institution (Dev, 2006; Ozili, 2018; Zins & Weill, 2016). The economic and financial well-being of financially vulnerable customers is linked to their financial inclusion, which is defined as a state in which all people of working age have access to a full suite of quality financial services, provided at affordable prices, in a convenient manner, and with dignity for the clients (Accion International, 2009).

Financial inclusion as a result of financial literacy offers huge benefits to financially vulnerable customers at both the individual and societal levels. At the individual level, financially included customers are able to save, invest in education, and absorb financial shocks, while at the societal level, financially included customers are able to contribute to economic growth and poverty reduction (Demirguc-Kunt et al., 2018; Mende et al., 2019; Mogaji et al., 2021; Soetan, 2019; 2020). Financial inclusion implies access to mainstream financial service providers (Mende et al., 2019), but many financially vulnerable customers, who mostly reside in rural areas in Nigeria, do not have that kind of access. For example, customers who live in urban areas are more likely to have access to mainstream financial services providers than those who live in rural areas, who are often excluded because of their "status" and issues such as bank charges (Baradaran, 2012; 2015; Morgan et al., 2016; Hegerty, 2016).
Overview of Financial Inclusion in Nigeria

Nigeria prides itself on being the largest and most populous country in Africa, with over 200 million people (World Bank, 2021). Nationally, 40% of Nigerians (83 million) live below the poverty line while 25% (53 million) are vulnerable. These vulnerable citizens can easily fall into poverty and live below the poverty line following the Covid-19 outbreak (World Bank, 2021). The financial services sector in Nigeria has been making significant contributions to the enhancement of financial inclusion in the country. The country boasts 22 commercial banks, over 400 microfinance banks, several licensed mobile money operators, and over 110,000 mobile money agents (MMAs), according to Adesanya (2017). Nigeria's financial inclusion landscape continues to grow and offers great hope, even though there are several challenges, such as insecurity and economic instability, to be contended with (Financial Inclusion Insights, 2021). In Nigeria, 60% of the population live below the poverty line. These people, who are mostly rural dwellers, are largely uneducated and not financially literate. 29% of the adult population in the country are digitally included through their banks because their banks offer digital services, 59% of the adult population have savings with their financial institutions, and 63% of the adult population are gainfully employed (Financial Inclusion Insights, 2021).

Table 1 (excerpt): registered bank account ownership by income level — over $2/day: 45%; less than $2/day: 18%. Source: Financial Inclusion Insights (2021).

From Table 1, the percentage of adult Nigerians who own a bank account shows that women are still in the minority, with more men owning an account. This may be an offshoot of the patriarchal nature of the society, which is reflected in cultural practices and religious beliefs that confine women and girls to certain roles and responsibilities in society based on their gender and age (Aidis et al., 2007; Drinkwater, 2017). This gender bias limits their ability to make contributions to socio-economic development (Umukoro & Okurame, 2017; Bako & Syed, 2018) and reduces female participation in bank account ownership compared to men. This pattern, as expected, also obtains for the location of the people who own an account: they are mostly urban dwellers rather than rural dwellers. This may be attributed to the insufficient spread of financial institutions, particularly in rural areas (MP & Pavithran, 2014), and the low adoption and use of mobile banking and MMAs in rural areas (Yawe & Prabhu, 2017). Also, regarding registered account owners, more people who make over $2/day own an account with a financial institution in Nigeria than those who make less than $2/day, as seen in Table 1. Sahoo et al. (2017) argued that there is a significant correlation between household income and bank account ownership. Indeed, the fact that those with higher incomes in Table 1 own accounts at a higher rate than those on lower incomes further confirms the position of Sarma and Pais (2011), who averred that individuals with higher incomes tend to have formal bank accounts and are better able to take advantage of the benefits of financial inclusion than individuals with low or irregular incomes.
Financial Inclusion

In recent times, financial inclusion has been a major policy initiative and objective of governments in both developing and emerging economies due to the benefits it provides to their economic development. It provides the opportunity to bring the excluded and vulnerable population into the formal financial sector, such that they can be empowered through the provision of access to formal financial products and services (Allen et al., 2016). As a result of the tremendous attention being paid to financial inclusion, several emerging economies are already experiencing high levels of financial inclusion, which is aiding the economic acceleration of these economies.

In spite of these recorded successes, there is still a long way to go for developing and emerging economies to achieve the effective form of financial inclusion obtainable in developed economies. This disparity highlights the huge gap between those in the financial inclusion bracket and the excluded groups in developing and emerging economies. The literature reveals that a low level of financial inclusion does not augur well for economic emancipation in emerging and developing economies. Dawar and Chattopadhyay (2002) suggest that most consumers in emerging and developing economies are daily wage earners who do not have the capacity to engage in significant savings but rather depend on a limited cash flow of income for daily survival. Consequently, such consumers are focused on survival (food, clothing, and shelter) and are not invested in owning or using financial services. In addition, there are fewer financial services operators across the vast rural areas of these emerging and developing economies. As a result, most consumers in these communities do not have access to financial services (Dupas et al., 2012; Soetan, 2019).

The absence of a high level of financial inclusion in any society implies financial vulnerability. Financial vulnerability can be conceptualised from both a personal perspective and a market structure perspective (Mogaji et al., 2020). On the one hand, the personal perspective applies when an individual does not have access to financial services and, as a result, cannot manage their transactions and bills effectively; such personal circumstances can force an individual to become financially vulnerable (Coppack et al., 2015). On the other hand, the market structure perspective relates to market contexts that limit access to financial services (CMA, 2019). In emerging and developing economies, especially in Africa, there are high levels of institutional adversity (Parente et al., 2019). This is due to the absence of market-supporting institutions, the lack of infrastructure and specialised intermediaries, weak government regulations, the non-implementation of policies (Centre for Global Development, 2018), high levels of market imperfections, low levels of financial literacy and education (Bongomin et al., 2016; Sashi, 2010; Shah & Dubhashi, 2015; Zins & Weill, 2016), and poor communication and transportation services (Bayero, 2015).
Vulnerable Group Theory of Financial Inclusion

The theoretical framework on which this study is hinged draws from research on financially vulnerable customers and financial inclusion. The literature reveals an increasing number of studies and findings on financial inclusion involving both policy research and academic research (Demirguc-Kunt et al., 2018), although Prabhakar (2019) argued that there is still an absence of synergy between the policy and academic literatures of financial inclusion. A number of theories related to financial inclusion have been identified, even though there are presently no elaborate theories of financial inclusion in the policy or academic literature (Ozili, 2018). The vulnerable group theory of financial inclusion argues that empowerment programs that enhance financial inclusion should be targeted at the vulnerable members of society, such as the poor, young people, the unemployed/underemployed, women, the physically challenged and elderly people, who mostly suffer from economic hardship and crises (Mogaji et al., 2020; Ozili, 2020).

One way of extending the financial inclusion bracket to vulnerable people, according to this theory, is by extending Government-to-Person (G2P) social cash transfers to vulnerable people. The success of G2P social cash transfers to this category of people encourages them to open a formal bank account in order to better take advantage of the benefits of the government's G2P initiative, which also helps to widen the financial inclusion bracket. The other benefits of this theory include its focus on vulnerable people, bringing them into the formal financial system as a way of enhancing financial inclusion. Furthermore, the theory identifies vulnerable people based on certain attributes and/or characteristics, such as income level, gender, age, and other demographic characteristics. These make it easier to enhance financial inclusion by targeting the vulnerable people in the society, who live mostly in rural areas, rather than the entire population (Ozili, 2020).

Figure 1. Interplay of participants involved in financial inclusion.

Figure 1 shows the interplay of the participants who are involved in the attainment or enhancement of financial inclusion. According to the figure, financial institutions are expected to promote a proximodistal pattern of increasing financial inclusion in the society. Financial institutions, which include both banks and non-banks, have marketing strategies, services, and programs to attract vulnerable customers, who are mostly the poor, low-income earners, women and youth, into the financial inclusion bracket. These financial institutions also ensure that they continue to come up with different initiatives, services, and programs that continue to engage non-vulnerable customers (Mogaji et al., 2021), who include gainfully employed workers, salary earners, and entrepreneurs, so that they continue to enjoy the benefits that accrue to customers who are gainfully employed. That is in addition to ensuring that vulnerable customers who have been drawn into the financial inclusion bracket remain there, through programs such as financial literacy, education, and other training seminars that sustain their interest in financial inclusion services.

1.5 Hypotheses

Mia et al. (2021) found that lending in rural areas is more cost-efficient than in urban areas, even after considering various proxies and endogeneity issues. In another study, Liu et al.
(2021) found that the industrial economy and governmental intervention are common determinants of urban and rural digital financial inclusion development. In that regard, Song (2017) argued that digital inclusive finance could serve more rural customers by extending reach, reducing costs and enhancing risk controls. Based on the outcomes of these related studies, the direction of this study is guided by the following hypotheses.

H1: There will be a significant main and interaction effect of residential status and perceived cost of financial inclusion on access to financial inclusion services in an emerging economy.

H2: There will be a significant main and interaction effect of residential status and perceived cost of financial inclusion on quality of financial inclusion services in an emerging economy.

H3: There will be a significant main and interaction effect of residential status and perceived cost of financial inclusion on usage of financial inclusion services in an emerging economy.

In addition, this study aims to provide intervention-based insights for policymakers on narrowing the rural-urban gap in financial inclusion while encouraging financial services managers towards product development through appropriate services research. Therefore, further analysis is guided by the following research question: What form of digital inclusive finance measure can be adopted to reduce the rural-urban financial inclusion gap in an emerging economy?

Method

The study adopted both quantitative and qualitative approaches across two phases. Quantitative methods were adopted for Phase One of the study, while qualitative methods were adopted for Phase Two. The first phase of the study was dedicated to obtaining information describing perceptions of financial inclusion services among consumers, while the second phase focused on obtaining further insights about financial inclusion from key informants in the Nigerian banking industry.

Design

A cross-sectional design was adopted as the quantitative approach to obtain data from a target population at a specific point in time. The study was conducted in one of the largest cities in Africa, with specific emphasis on an urban and a rural population. The variables of interest included financial inclusion and cost of financial inclusion. Financial inclusion was defined as the availability and equality of opportunities to access financial services among a populace. Cost of financial inclusion describes the financial implication of utilizing financial inclusion resources. Relevant data were collected between November 18 and December 8, 2021. The urban sample was obtained from a major urban setting in the city, while the rural sample was obtained from the outskirts of the city. The study sample was made up of 453 participants, 46.8% of whom were rural residents and 53.2% urban residents. The sample comprised more male (60.9%) participants than female, with ages ranging from 18 to 60 years and a mean age of 37.9. Non-probability sampling methods, a combination of purposive and convenience sampling, were used to derive the study sample. The justification for adopting non-probability sampling methods stemmed from the limited control over a sampling frame for members of the general public.
A structured questionnaire was developed and used to elicit relevant information from the study participants. The questionnaire was made up of three sections: the first section contained items that described the socio-demographic characteristics of the respondents, while the second and third sections contained items that measured financial inclusion and cost of financial inclusion respectively. The scales for measuring these variables of interest were developed by the researchers through an in-depth review of the literature on financial inclusion from a culturally relevant perspective. Financial inclusion was measured along three dimensions:
- Access to financial inclusion services;
- Quality of financial inclusion services; and
- Usage of financial inclusion services.

Five items were used to rate the first dimension based on a "yes" or "no" response format. Sample items included "I own a functional bank account" and "I know my bank's USSD code for financial transactions". The second dimension was measured using a 5-item scale with a 5-point response format ranging from "1 = strongly disagree" to "5 = strongly agree". Sample items included "there are several POS units in my area of residence" and "my bank branch is proximally available in my home environ". The third dimension was measured using a 7-item scale with a 5-point response format ranging from "1 = Never" to "5 = Always". Sample items included "How often do you use POS units for transactions?" and "How often do you use bank USSD codes for transactions?" Perceived cost of financial inclusion was measured based on a 3-level rating (Low, Moderate and High) of the service charges incurred for utilizing financial inclusion platforms such as ATMs, POS, bank apps, etc. A list of relevant financial platforms was provided for the respondents.

Data Collection Procedure

Ethical approval to conduct the research in the study area was duly obtained from the research ethics board of the university where the research was conducted. The researchers then liaised with 10 willing research assistants to administer the questionnaire. The research assistants are university graduates in the country. They were briefed about the study and given adequate training on the questionnaire administration and data collection process, and their services were incentivized. The research assistants were given adequate introductory letters and identification credentials for the purpose of the study. The administration of the questionnaire was done on a house-to-house basis, in which one questionnaire was to be administered to one member of each of the households visited.
The research assistants were paired up such that each pair was tasked with visiting 50 households in their designated survey areas. Household heads (if available) or any adult member of the household were to complete the instrument. After brief introductions, the research assistants explained what the study was about and provided opportunities for prospective respondents to ask clarification questions related to the study. Participation was completely voluntary. Respondents who gave verbal consent to participate in the study were then given a copy of the questionnaire for immediate completion. Participants were assured of the confidentiality of their participation and responses. They were also implored to provide sincere responses in order to enhance the external validity of the study outcomes. Completed questionnaires were retrieved on site by the research assistants. Out of the 500 copies of the administered questionnaire, 453 retrieved copies were deemed adequate for further data analysis. The 453 copies of the questionnaire were coded and input into a current version of the SPSS software. Descriptive statistics such as frequency distributions and percentages were used to analyze the demographic characteristics of the respondents, while inferential statistics, namely the factorial ANOVA, were used for the hypothesis testing at the 0.05 level of significance.

Design

A phenomenological design was adopted as the qualitative approach for this phase of the study. The objective of this phase was to gain further insight into the outcomes obtained in Phase One of the study. Data were obtained from experienced professional bankers (as key informants) within the Nigerian banking and financial services sector. A combination of purposive sampling and the snowballing technique was used to identify four key informants from branches of reputable banking institutions in the study area. The interview sessions were guided by an objective template which contained items in line with the study objectives. The interview guide also included items that identified socio-demographic characteristics of the participants, such as work experience, designation, etc. Face validity of the interview guide was achieved through an evaluation by a committee of experts. Items that received unanimous acceptance were retained, while items that seemed vague, complex or invalid for the study were modified accordingly. The interview sessions were conveniently scheduled and conducted via Zoom.

Data Collection

A trained research assistant was recruited to cater to the initial groundwork of interview scheduling for the study.
The research assistant was provided with the necessary documentation for identification and authorization to assist the researchers in the field. Four banks were purposively sampled based on their reputation in financial inclusion efforts. Initial contact was established with experienced personnel in managerial capacities within the banks through the researchers' contacts. The purpose of this initial contact was to inform the contact person about the study and, via snowballing, to identify a potential participant with expertise and knowledge about financial inclusion services in Nigeria. Upon briefing and gaining consent from the potential participant to take part in the study, a convenient date and time were scheduled for a phone call interview. Having obtained the interview schedules, the researchers contacted the participants via Zoom for the interview as arranged. During the interview sessions, the researchers reassured the participants that all ethical considerations, as previously explained during the initial briefing, would be upheld. Additionally, they were informed that their participation in the interview was based on their expertise as bank managers in the Nigerian financial services system, and not as representatives of their banking institutions. The interview sessions were recorded for transcription and further analysis. Data obtained from the interview sessions were subjected to content analysis and clustered into emergent themes. This procedure provided a contextual lens used to obtain valid answers to the research questions of the study.

Hypothesis One

There will be a significant main and interaction effect of residential status and perceived cost of financial inclusion on access to financial inclusion services. This hypothesis was tested using a 2x2 factorial ANOVA. Results are presented in Table 2.
Results from Table 2a show that there is a significant main effect of residential status on access to financial inclusion services [F(1, 315) = 42.215; p < .05], while perceived cost of financial inclusion had no significant main effect on accessibility to financial inclusion services [F(1, 315) = 1.021; p > .05]. Furthermore, the interaction effect of residential status and perceived cost of financial inclusion on accessibility to financial inclusion services was not significant [F(1, 315) = .483; p > .05]. The stated hypothesis is therefore partially accepted due to the main effect of residential status. The direction of effect is presented in the summary of estimated marginal means below. Results from Table 2b show that residents in rural settings reported lower (x̄ = 3.160) accessibility to financial inclusion services, while residents in urban settings reported higher (x̄ = 4.560) accessibility to financial inclusion services. The results suggest that financial inclusion among the rural populace may be significantly hampered by the lack of accessibility to financial inclusion services. In simple terms, the results suggest that accessibility to financial inclusion hubs such as banks, ATM points, and POS units was limited in rural areas due to their relative unavailability when compared to urban areas. It may be implied that rural communities are highly under-served as regards financial inclusion services, irrespective of the numerous financial institutions that are invested in the economy. This divide in rural-urban differentials of financial inclusion services has left a major lacuna in the financial inclusion status of the country and has created an avenue for informal players to take advantage of this deficit in rural areas. Thus, informal and non-skilled financial entities have begun to provide financial services in many rural locations (Oluwoyo & Audu, 2019). For instance, the culture of money lending by individuals as a business venture still thrives in many rural areas in both emerging and developing markets.

While some positives may be identified from the activities of such informal financial institutions in rural areas, their weak institutional and managerial capacities, as well as their isolation from the financial system, pose significant limitations with huge ramifications. It is therefore suggested that financial institutions should be proactive in identifying the benefits of extending financial inclusion services to rural areas in Nigeria. Many rural economies in Nigeria are characterized by needs to purchase agricultural inputs, obtain veterinary services, maintain infrastructure, contract labour for planting/harvesting, transport goods to markets, make and receive payments, manage peak-season incomes to cover expenses in low seasons, invest in education, shelter, and health, or deal with emergencies. Therefore, investing in financial inclusion services in rural areas will not only be a viable long-run investment for financial institutions but will also expand the horizon for development in such areas.

Hypothesis Two
There will be a significant main and interaction effect of residential status and perceived cost of financial inclusion on quality of financial inclusion services. This hypothesis was tested using a 2x2 factorial ANOVA. Results are presented in Table 3.
Results from Table 3 show that there is no significant main effect of residential status [F(1, 315) = .010; p > .05] or perceived cost of financial inclusion [F(1, 315) = 2.968; p > .05] on the quality of financial inclusion services. Similarly, the interaction effect of residential status and perceived cost of financial inclusion on the quality of financial inclusion services was not significant [F(1, 315) = .304; p > .05]. The stated hypothesis is therefore rejected. The results imply that participants' reports of the quality of financial inclusion services did not vary with urban-rural location or perceived cost of financial inclusion. The results suggest that the quality of the financial inclusion services that could be obtained in rural areas was not different from that obtainable in urban settings. This may be explained by the fact that the same models of financial inclusion services used by service providers function at par irrespective of location or perceived cost. For instance, POS agents provide similar services and functions in both urban and rural areas.

Hypothesis Three
There will be a significant main and interaction effect of residential status and perceived cost of financial inclusion on usage of financial inclusion services. This hypothesis was tested using a 2x2 factorial ANOVA. Results are presented in Table 4. Results from Table 4a show that there is a significant main effect of perceived cost of financial inclusion on usage of financial inclusion services [F(1, 315) = 96.857; p < .05], while residential status had no significant main effect on usage of financial inclusion services [F(1, 315) = .180; p > .05]. Furthermore, the interaction effect of residential status and perceived cost of financial services on usage of financial inclusion services was not significant [F(1, 315) = .013; p > .05]. The stated hypothesis is therefore partially accepted due to the main effect of perceived cost of financial inclusion. The direction of effect is presented in the summary of estimated marginal means below. Results from Table 4b show that participants who perceived the cost of financial inclusion to be high reported lower usage (x̄ = 18.044) of financial inclusion services, while participants who perceived the cost to be low reported higher usage (x̄ = 27.032). The results suggest that usage of financial inclusion services among the populace may be significantly hampered by the cost of financial inclusion services. Thus, the charges incurred for utilizing financial inclusion services may seem negligible to an urban populace but are deemed expensive by a rural populace. This is largely due to income disparities between rural and urban populaces, with the more affluent clustered in urban settings and low-income earners in rural settings. It may therefore prove worthwhile for financial institutions to consider locational differences in ascribing service charges for financial inclusion services. This is a possibility which can be exploited with technological advancements in identifying mapped coordinates of financial inclusion platforms and usage locations.
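For readers who wish to reproduce this style of analysis, the sketch below shows how a 2x2 factorial ANOVA of the form reported above could be run in Python with pandas and statsmodels. The file name and column names (residential_status, perceived_cost, usage) are illustrative assumptions, not the study's actual variable labels or data.

```python
# Minimal sketch of a 2x2 factorial ANOVA (residential status x perceived cost),
# assuming a CSV with one row per respondent. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("survey_responses.csv")  # hypothetical file

# Two-way ANOVA with interaction term; 'usage' stands in for any of the three
# outcome scores (access, quality, or usage of financial inclusion services).
model = ols("usage ~ C(residential_status) * C(perceived_cost)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II sums of squares

# Estimated marginal (cell) means, analogous to Tables 2b and 4b.
print(df.groupby(["residential_status", "perceived_cost"])["usage"].mean())
```

Type II sums of squares are shown here; SPSS's default full-factorial GLM reports Type III, and the two decompositions coincide when the 2x2 design is balanced.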
National Financial Inclusion Strategy
As obtained from the data transcripts, efforts to drive financial inclusion in rural Nigeria prompted the National Financial Inclusion Strategy (FIS) in 2012, which was set up to pursue an agenda for expanding financial inclusion in the rural interiors of the country such that, by the year 2020 and beyond, the rural populace would have significant access to inclusion services at affordable costs that meet their financial needs. The FIS was premised on four strategic areas: mobile banking, agency banking, linkage models, and empowerment of clients. The major issues articulated around these strategic areas revolved around demand, supply, regulations, barriers, targets, key performance indicators (KPIs), implementation, structure, etc. Since the inception of the FIS, several positive strides have been made towards expanding financial inclusion across rural Nigeria. Some of these efforts were further highlighted in the responses provided by the study participants.

I think sometime in 2013 or thereabout, I'm not certain, CBN started what we call National Financial Inclusion Strategy because they realized then that we have a lot of people in rural areas that don't have access to financial services. KII-2

Basically, the purpose of the strategy is for any Nigerian to be able to have access to financial products at an affordable cost and very easy to access. Either you want to deposit or withdraw money or make a transfer. KII-1

In line with the objectives and importance of the financial inclusion strategy as highlighted by the study respondents, financial inclusion strategies have been known to improve commerce and have a wide scope covering both the public and private sectors (Aduda & Kalunda, 2012). A financial inclusion strategy has several broad focal points, as highlighted by Reyes (2011). However, the basic objective of a financial inclusion strategy or action plan is to harmonize all efforts from public and private stakeholders geared towards scaling up and improving financial inclusion while at the same time improving financial stability and integrity within the economy (Sarma & Pais, 2011). Financial inclusion strategies can be wide-ranging, including both public and private sector interventions. They could be implemented as stand-alone policies or made part of a holistic strategy for developing the financial services sector. Such strategies may also be context specific in areas where the need for action is highlighted, such as the provision of a capital base for SMEs, funding or financial literacy, or broader measures to address various barriers to financial inclusion (Sarapaivanich & Patterson, 2015).

Licensing of Payment Service Banks
More than half of the study respondents made reference to the introduction of licensed Payment Service Banks (PSBs) by the Central Bank of Nigeria (CBN) as a financial inclusion strategy. Given the holistic challenge of effectively reaching the rural interiors of Nigeria, the CBN decided to license the operation of PSBs in Nigeria. The PSBs are geared towards the use of mobile and digital channels to improve and expand financial inclusion in rural areas while stimulating grassroots economic activity through the accessibility of financial inclusion services.
The introduction of PSBs is expected to facilitate high-volume, low-value transactions in remittance services, micro-savings, and withdrawal services in a secure, technology-driven environment, which would further deepen rural financial inclusion. The main objective of establishing the PSBs is to enable small businesses, low-income households, and other financially vulnerable individuals to improve financial inclusion by enhancing access to payment services and remittances within the confines of a secure, technology-driven structure. The licensing of PSBs empowered other non-traditional banking corporations to deploy technology-based services in providing and facilitating basic banking operations such as deposits, transfers, savings, and other transactions among banked and unbanked persons in rural interiors or places devoid of traditional banking activities.

CBN gave approval to TELCOS (Telecommunications companies) to drive FI and that is the main reason for Payment Service Banks (PSBs). So, the goal of PSBs is to take FI to the rural area especially the far North of the country. KII-4

One of the first ones which is the most recent is the Central Bank of Nigeria (CBN) roll out of license for the Payment Service Banks (PSBs) with the requirement that the PSBs must have the existence of their branches in the rural service area of Nigeria. So, that is a major effort that the CBN has done to try to promote FI in the rural areas. KII-1

In line with the financial inclusion efforts of the CBN through the licensing of PSBs, Wormald et al. (2021) anticipated that Nigeria is gradually following approaches similar to those adopted by some African countries, such as Kenya, to integrate financial services through well-established telecommunications companies. This approach allows customers with phone numbers to access financial services through their mobile phones (Olaleye et al., 2018). While the Telcos have led this drive for financial inclusion in many countries, the approach in Nigeria is different, with the Telcos following the banks' lead (Lepoutre & Oguntoye, 2018).

Agency Banking and Fintech Operations
As further highlighted by the study participants, the licensing of PSBs paved the way for the influx of agency banking and fintech operations. Formal banking services have remained out of reach for most Nigerians, as most traditional bank branches in Nigeria are located in big cities. This is not an ideal structure in a country with over 100 million people dwelling in rural areas. Results from an initial pilot study conducted by the researchers indicated that distance from rural residents to banking structures was a key factor limiting financial inclusion in rural Nigeria. However, with the rise of agency banking and the mobile money model in the country, more Nigerians in rural areas are now involved in financial transactions via technology-based services than a few years ago. Agency banking and fintech startups have been key to expanding access to financial services in a largely impoverished mass market. Agency banking, in particular, has seen significant growth, driven by clusters of fintech startups such as OPay, Paga, Carbon, and Traction, as well as major telecom providers such as MTN and Glo Mobile. Additionally, traditional banking institutions have also taken advantage of the PSB license to participate and compete favorably with other fintechs by introducing mobile money applications and USSD channels to access banking services.
The other one is just also part of what the CBN has done way back when the agency banking guideline was released. The agency banking guideline facilitated operational processes which enable banks to offer banking and basic financial services at remote locations. The service is provided by agent operators who are individuals within their locality. KII-1

One way or the other, we encourage them to have POS (Point of Sale) machines which they use to make payments such that people don't have to come to the bank. They can pay the customers through the POS, which is what the agency banking is all about. I know there are so many agents all over the place. KII-3

In addition, the CBN also created a framework called agency banking because they realized that all the banks in Nigeria cannot be everywhere. No bank may be present in all the 774 Local Government Areas (LGAs) in Nigeria, and even if a bank is in all the LGAs, there is still a huge distance within the LGAs that people have to travel to access a bank office. KII-2

License for PSB has enabled Fintechs to help deepen financial inclusion. It's an easy way of spreading financial channels to rural areas and that's why people are leveraging on agency banking.

Agency banking as an expansion strategy borrows its concept from the branchless banking model used for delivering financial services without reliance on bank branches, as depicted by Afande and Mbugua (2015). The authors averred that agency banking represents a lower-cost alternative to traditional banking through the use of common delivery channels such as retail outlets, mobile phones, the internet, and ATMs. In agency banking, third parties are involved in doing all banking activities usually performed by the banks' officers. The authors further show that agency banking is beneficial to clients because it lowers transaction costs by bringing services closer to homes, saving the transport cost of reaching bank branches. Lotto (2016) averred that agency banking allows customers to enjoy longer opening hours, since this business operates for longer hours than banks, and reduces long queues. Idoko and Chukwu (2022) show that the operations of banking agents relieve commercial banks from attending to long queues in their branches and, therefore, increase the convenience of serving their customers. In other developing countries' financial institutions, agency banking is used to reach business segments which are geographically located away from their usual business centers.
Introduction of KYC Framework Accounts
Another effort to drive financial inclusion in the rural interiors of Nigeria was identified in the introduction of KYC framework accounts. KYC is an acronym for "Know Your Customer/Client," which refers to the process of customer/client identification at the point of opening an account. Simply put, it entails ensuring that the bank is able to identify and verify the authenticity of the account holder and account operations. The modification of the KYC framework into different tiers was motivated by the need to make opening and owning a bank account less stressful, with different categories of documentation. As such, the stringent provision of specific documentation for opening an account has been made more flexible based on the availability of the different tiers of KYC frameworks. For instance, a Tier-1 KYC account may be opened with basic information (e.g., name and phone number) and limited documentation (e.g., utility bill or passport). Such a Tier-1 KYC account may, however, be limited to basic financial operations (deposits and transfers) with a specific capacity for these transactions. This tier of KYC account is especially appreciated by many rural dwellers who would not have possessed some of the documentation needed for opening regular bank accounts, and whose financial capacity and operations are relatively limited. Interestingly, customers can update and upgrade their KYC accounts to higher tiers when such necessities arise.

In times past, we used to have some requirements from customers before an account can be opened. They have to have valid ID, utility bill for address verification but now, to drive financial inclusion, the government has approved the opening of Tier-1 accounts called remote areas accounts which does not actually need any form of ID except your passport photograph. KII-3

KYC is the due diligence that financial institutions must perform to identify their customers and establish applicable information relevant to doing financial business with them. The compliance function of financial institutions has an increasingly important role to play in protecting the corporate values and reputation of the institution. Hopton (2009) stated that KYC covers from "cradle to the grave"; it means knowing the customer throughout the relationship and keeping this knowledge updated over the entire period. This underscores the fact that KYC is not a one-time activity but a continuous process. Lilley (2003) describes KYC as a bank's first line of defense against criminals. In banking, the KYC model is a structured framework which any prospective bank customer goes through before establishing a contract with the bank, and compliance is continuously monitored throughout the relationship. The rules are reviewed from time to time in line with industry dynamics. These reviews, among others, are meant to provide an environment conducive to a healthy financial system in line with the best banking practices worldwide (Muller et al., 2007). Notably, the introduction of a tier system in the KYC by the CBN in 2013 was aimed at extending financial inclusion to financially vulnerable populations in the country (CBN, 2013).
Introduction of Bank Verification Number
The Bank Verification Number, commonly called BVN, is a biometric identification system implemented by the CBN to curb or reduce illegal banking transactions and protect customers' banking transactions in Nigeria. However, in recent times, the BVN has begun to serve another purpose in driving financial inclusion in rural Nigeria. The ownership of a BVN by account holders has promoted financial inclusion among rural dwellers, who are now able to access and obtain small-scale loans from fintech agencies and microfinance banks. Such persons may only need to provide their BVN as documentation for small-scale loans, which may be awarded after an evaluation of the applicant's financial transactions. This is a major achievement of the BVN policy in providing rural dwellers with an opportunity to access loans as capital for the startup or expansion of their existing small-scale businesses.

Between Tier 2 and 3 accounts, a Bank Verification Number is required to operate both accounts. The BVN provides security for high-profile transactions that can be carried out using Tier 2 and 3 accounts. KII-2

Using the BVN framework, fintechs now offer small-scale and short-term loans to account holders with BVN and active bank accounts. This enables persons and entities without collateral to access such loans. KII-4

The need for the BVN is necessitated by the increasing incidence of compromise of conventional security systems (password and PIN); hence, there is a high demand for greater security for access to sensitive or personal information in the banking system (NIBSS, 2019). In recent times, biometric technologies have been used to analyse human characteristics as an enhanced form of authentication for real-time security processes. The BVN involves the capture of all ten fingerprints and a facial image, together with other personal details that facilitate the identification and location of each bank account holder (Ernest & Amanda, 2018). For authentication purposes, individuals performing banking transactions such as applying for loans are required to identify themselves using their biometric features, which are matched against information in the central database. Access to a central identification database for all bank account holders is a valid platform which fintechs and other mobile money merchants take advantage of in providing loans to entities without collateral or specific documentation (Esoimeme, 2015). This is because the owner of a flagged BVN account is easily traced via the central database.
Public Sensitization via Market Storms
One of the participants identified the use of public sensitization via market storms to create and raise the level of public awareness about the availability of financial inclusion products and services within rural areas of Nigeria. Some of the traditional banks have begun to conduct various forms of awareness campaigns on financial inclusion. For instance, between the 24th of November and the 16th of December, 2018, First Bank of Nigeria engaged in financial inclusion awareness through the extension of its agent banking campaign to various markets and business hubs across Nigeria. The campaign involved an intensive education and engagement period with the rural populace on financial inclusion products from both entrepreneurial and consumer perspectives. Generally, the involvement of agency banking and fintech operators in rural areas of Nigeria has been instrumental in raising the awareness levels and knowledge base of various financial inclusion services among the rural populace.

We do what we call "Market Storm." We move to villages and communities that lack access to financial services. We identify businesses situated in good locations and encourage them to open an agent banking service with us by providing POS for them. In some places, we will brand their shops so that people will know that it is an agent of Access Bank. We call it "Access Closer." KII-2

We still have people that keep money in their house despite all the channels, despite all the options available - for people to still use their bedroom to keep money - but I think with regular awareness campaigns carried out by financial institutions, people are gradually getting to know about financial inclusion services in their areas. KII-3

There are a series of sensitization and mobilization that the CBN is doing to encourage people by providing enough education on financial systems and services in Nigeria. I think it would be worthwhile to include financial system literacy as a civic education subject in the secondary school curriculum. KII-4

Banking consciousness and activities have been on the rise in rural Nigeria as a result of financial inclusion campaigns. This is also corroborated by Munoru (2016), who attested to the fact that financial inclusion products and services can now be easily accessed in rural areas through various delivery channels as a result of significant knowledge expansion among rural inhabitants. Similarly, Barasa and Mwirigi (2013) provided evidence of the growing impact of agency banking in creating awareness of financial inclusion in developing economies.
Evaluating the Success and Barriers of the Financial Inclusion Strategy
The objective of the Financial Inclusion Strategy (FIS) in Nigeria was basically to ensure that financial inclusion products and services are available and can be accessed by 80% of the adult populace by the year 2020. The researchers were therefore compelled to gain insight into the achievement of this objective as perceived by the study participants. While all the participants provided positive evaluations of the success rate of efforts in extending and driving financial inclusion across rural Nigeria, the specific objective of the FIS had not been adequately achieved due to various challenges that were identified. For instance, through various policy provisions, every adult (above 18 years) now has the capacity to objectively own a bank account, and the rate of bank accounts being opened has significantly increased; however, the willingness to own a bank account is still a subjective decision. This lends credence to the proverbial illustration that providing a stream for the horse does not guarantee that it will drink from it. Similarly, based on a regional evaluation of the FIS in rural Nigeria, one of the respondents stated that its success rate in Southern Nigeria had hit 70% inclusion, but admitted that Northern Nigeria could not boast of such figures. The challenges highlighted as barriers to the success of the FIS in rural Nigeria included the economic downturn, low-income earnings, the security challenges in the northern parts of the country, poor literacy rates, low trust in financial service providers, challenges in resolving transaction disputes, epileptic power supply, and limited network coverage.

The literacy level for financial inclusion is low in many rural interiors of Nigeria. So, they may have a phone, but lack the knowledge of how to use it for financial activities. KII-1

Some rural inhabitants don't trust banks. They believe that banks are overcharging them through unnecessary deductions from their account. So, they don't trust banks to sometimes save their money in the bank. KII-1

The issue of fraud also discourages rural inhabitants from using bank accounts. Stories of defrauded victims, whose accounts were accessed via social engineering or phone theft, create fear of opening bank accounts. KII-2

Also, many rural inhabitants don't have any economic activity to get a bankable income. They go to the farm, they do subsistence farming, and they sell or maybe trade by barter within their community, and they live on less than N200 or maybe N300 (less than $0.70 or $0.90) every day. They don't have an income to put in a bank account. KII-3

I'm not conversant with the northern part of the country but in the Southwest where I have travelled to, including rural communities that border Nigeria with other surrounding countries, I can say there has been about 70% success rate; but there is still a lot to be done because we still have a lot of challenges that people still encounter. KII-3

Nigeria is yet to achieve 100% coverage for GSM networks in all rural interiors within its boundaries. The major channels for financial inclusion services adopted in Nigeria are technology-driven and reliant on telecommunication networks. So, areas devoid of GSM networks are often excluded from novel services in the financial system. KII-4
There have been some success stories on financial inclusion in a few emerging and developing economies in Sub-Saharan Africa (SSA) (Adeniji & Awe, 2018; Hove & Dubus, 2019; Lichtenstein, 2018; Mogaji et al., 2021; Ndung'u, 2018; Okoroafor et al., 2019; Ozili, 2020; Soetan et al., 2021). In spite of these recorded successes, there is still a long way to go for developing and emerging economies to achieve the effective form of financial inclusion obtainable in developed economies. In emerging and developing economies, especially in Africa, there are high levels of institutional adversity (Parente et al., 2019). This is due to the absence of market-supporting institutions, lack of infrastructure and specialised intermediaries, weak government regulations, non-implementation of policies (Centre for Global Development, 2018), high levels of market imperfections, low levels of financial literacy and education (Bongomin et al., 2016; Sashi, 2010; Shah & Dubhashi, 2015; Zins & Weill, 2016), and poor communication and transportation services (Bayero, 2015).

Discussion
Financial inclusion products and services come with attendant costs for accessing and utilizing them. Customers have to pay varying amounts of service charges depending on the type and channel of financial inclusion service being used. As obtained from the outcomes of an initial pilot study conducted before the data collection for this study, the cost implication of accessing and utilizing financial inclusion services is a major limitation which dissuades price-sensitive customers in rural areas from contemplating their use. It was found that many rural-based inhabitants cited the cost of financial inclusion services as a reason for not utilizing available channels within their vicinity. In some cases, service charges had to be paid directly by the customer to access financial inclusion services (e.g., via POS channels), while in other cases, service charges were deducted automatically from the customer's funds (via USSD channels). Other insights obtained from the pilot study highlighted the lack of transparency in product pricing, which created trust issues between customers and service providers.

Consequently, the CBN spearheaded a review of the price guidelines for utilizing financial inclusion services in 2019, which saw a lowering of price caps for e-banking transactions taking effect from January 2020. Additionally, financial service providers were encouraged by the CBN to restructure their fees and limits for financial transactions to be more customer-friendly. This was in response to the increased use of digital channels for financial inclusion during the lockdown and restrictions occasioned by the COVID-19 pandemic in Nigeria, which limited physical access to banking halls. However, the consideration of price differentials in financial inclusion services between rural and urban areas in Nigeria has not been given much attention in the literature. The researchers were therefore prompted to obtain stakeholder views on the possibility of rural-urban price differentials for financial inclusion services. In their responses, none of the respondents was aware of any current or proposed policies that catered to rural-urban price differentials in financial inclusion services.
KII-1: Well, currently, I'm not aware that there is an intervention program aimed at making provisions for rural-urban differentials in the cost of FI services. But some pricing adjustments have been made in recent times. For instance, the SMS charge for notification was reduced to N4 from N10.

KII-2: Now, you made a valid point by suggesting rural-urban price differentials. I'm 100% in support of that because people in the metropolis have many alternative channels for financial transactions while people in the rural areas have limited channels. So, in that regard, those customers in the metropolis should pay more for financial services.

KII-3: In theory, that's how it should be because the people in rural areas are those we are trying to poach to have access to financial services. Those are the set of people that are financially excluded and for you to win somebody over, you have to give them something that is financially attractive to them. That is the theory. But let us also look at the practical side of it. Is it sustainable?

The seeming non-existence of a proposed policy or empirical discourse on rural-urban price differentials in the cost of financial inclusion services was therefore considered a major gap by the authors which, if exploited, may yield positive effects in driving financial inclusion in rural Nigeria. Wayne et al. (2020) stated that the use of geospatial technological approaches in exploiting this line of thought is a novel financial intervention which is achievable. For instance, Fibaek et al. (2021) used geodata to measure areas with financial access and financial exclusion as a means to improve financial inclusion in Ghana. They relied on a spatial decision support system which enabled geospatial analysis of financial inclusion services being used via mobile devices in different locations. This provided information on the coverage of financial inclusion across the country. Relying on a similar technology to facilitate the adoption of rural-urban price differentials for financial inclusion services seems realistic. Further consultation with an IT specialist yielded the proposed framework in Figure 2. The framework provides a theoretical approach towards achieving rural-urban pricing differentials in financial inclusion services. The framework is based on GIS mapping of transactions carried out via GSM-based mobile devices. All the telecommunication masts within the country would be designated as being situated in rural or urban areas. The location of the transaction-initiating mobile device would then be obtainable via its inbuilt GPS locator or its proximity to the nearest telecommunication mast. This implies that smartphone users relying on mobile app channels for financial transactions would need their inbuilt GPS locator to be enabled in order to benefit from pricing differentials, while non-smartphone users relying on USSD channels for financial transactions would be identified by their proximity to the nearest telecommunication mast. This proposed framework is an offshoot of related interests in the geospatial aspects of financial inclusion (UK Space Agency, 2020; Wayne et al., 2020).
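As a rough illustration of the decision logic in the proposed framework of Figure 2, the sketch below classifies the origin of a transaction as rural or urban (from the handset's GPS fix where available, otherwise from the zone of the serving telecommunication mast) and applies the corresponding fee. All mast coordinates, fee values, and function names are hypothetical placeholders, not any operator's actual API or tariff.

```python
# Illustrative sketch of rural-urban fee differentiation for a single transaction.
# Mast coordinates, zone labels, and fee values are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from typing import Optional, Tuple

@dataclass
class Mast:
    lat: float
    lon: float
    zone: str  # "rural" or "urban", assigned when the mast register is built

MASTS = [Mast(9.0820, 8.6753, "rural"), Mast(6.5244, 3.3792, "urban")]
FEES = {"rural": 10.0, "urban": 25.0}  # naira per transaction, placeholder values

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def classify_zone(gps_fix: Optional[Tuple[float, float]], serving_mast: Mast) -> str:
    """Use the handset GPS fix if present (smartphone app channel);
    otherwise fall back to the zone of the serving mast (USSD channel)."""
    if gps_fix is not None:
        nearest = min(MASTS, key=lambda m: haversine_km(gps_fix, (m.lat, m.lon)))
        return nearest.zone
    return serving_mast.zone

def transaction_fee(gps_fix, serving_mast) -> float:
    return FEES[classify_zone(gps_fix, serving_mast)]

# Example: a USSD transaction routed through a rural mast attracts the rural fee.
print(transaction_fee(None, MASTS[0]))  # -> 10.0
```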
A major finding from related financial inclusion studies is the clear division of financial services provided to urban, semi-urban, and rural residents. This is known as the "proximity gap" or "proximity challenge." The availability and accessibility of channels for financial inclusion services decrease rapidly with increasing distance from urban centers (Peachy & Mutiso, 2019). In Wamba et al.'s (2021) study, it was found that important considerations are needed to measure social and physical distance when evaluating proximity gaps in financial inclusion. It is highly unlikely that customers would go beyond existing social barriers in order to utilize financial inclusion services. Therefore, contextual information, such as the socio-economic profile of the area where the access points are located, should be used to further qualify the socio-physical distance for a more accurate capture of the level of financial inclusion. Furthermore, encouraging the target populace to utilize the available financial inclusion channels can be realized through cost-benefit approaches in the provision of rural-urban price differentials for financial inclusion services.

Conclusion
The rural populace in Nigeria is largely excluded from financial products and services due to a variety of factors. They include individuals with limited access to banks. Due to their geographical locations, they seldom have access to bank branches. Due to their personal and economic situations, they may choose not to have a bank account and not integrate into the financial system. These are some of the issues that have been explored under the context of financial inclusion. However, the facts support the widespread availability and ownership of mobile phones among a large percentage of rural inhabitants. Therefore, it is imperative to understand how financial inclusion can be driven through telecommunication companies for these unbanked customers, albeit without the physical structure of a bank. If adequately motivated to access financial inclusion services through their mobile phones, it will become easier to integrate them into the financial system, allowing them to receive money through their mobile phones and mobile numbers. Undoubtedly, the financial and economic characteristics of these individuals vary, and their levels of education and attitudes to technology also vary; but this proposed framework for rural-urban geospatial pricing differentials for financial inclusion services via mobile phones offers an entry-level route for these customers to access financial services, unlike ever before.
This study makes a significant contribution to the existing body of work on financial services provision, financial literacy, and financially vulnerable individuals towards enhancing financial inclusion in today's global market. Specifically, the study highlights the inherent challenges of financially excluded consumers and the huge benefits of financial services design and innovation that can alleviate these challenges (Mogaji et al., 2021; Soetan et al., 2021). In addition, with a focus on financial services and inclusion in Nigeria, the study has provided theoretical and empirical insights into financial services provision and inclusion from a developing and emerging economies' perspective. While it is acknowledged that there are people in the developed world who are still excluded in spite of the abundance of services around them, customers in Nigeria and possibly other developing and emerging economies have unique characteristics that shape their experience of financial inclusion. Importantly, banks and other financial service providers have a role to play in engaging with financially vulnerable individuals and providing services that meet their needs. Findings of the study confirm the notion that if consumers are financially included, they are empowered to make financial decisions which can enhance their lives and those of their immediate families (Beck et al., 2015; Mogaji et al., 2020; MP & Pavithran, 2014; Ozili, 2018; Salampasis & Mention, 2018; Soetan et al., 2021).

There are key managerial implications for stakeholders, especially the banks and financial services providers, policy makers, social enterprises, and charity organizations working on improving the financial literacy and financial wellbeing of people. Bank managers are expected to intensify their efforts in ensuring consumers are integrated into the financial system. This involves creating banking products that are targeted toward individuals and prospective customers who are financially excluded. It is also necessary to educate prospective customers about different products and services, to streamline account opening processes, and to use technology to ease their business operations. The effort of the CBN in reducing the number of financially excluded persons is recognized, but there is more to be done with regard to policies that will align the efforts of the banks, fintech developers, and other stakeholders within the industry. Specifically, consideration should be given to bank charges that discourage people from using banking services, to ideas for open banking which allow customers to explore service offerings from other banks, and to a streamlined database which can be the basis of credit files and records. Social enterprises and charity organizations also have a role to play in creating awareness about the inherent challenges around financial inclusion. By working in partnership with banks, fintech developers, and policymakers, they can educate people about financial inclusion, financial education, and financial management. Consumers need to be educated about different bank accounts, different forms of borrowing, and the credit facilities that are available. This education can start from secondary schools in both rural and urban areas, which would allow consumers to make informed decisions about their finances. This will further highlight the transformative role of financial literacy in improving lives.
This study provides some tentative support for the argument that financial inclusion has a positive impact on individuals who reside in both the rural and urban areas of an emerging economy. This positive impact also contributes to the economic development of the country, since financial inclusion makes a significant contribution to the financial empowerment of individuals. Furthermore, this study provides tentative support for the assertion that residents of urban areas experience a higher level of financial literacy, and ultimately financial inclusion, than residents of rural areas due to the low or limited presence of financial services providers in rural areas. While there are several other factors apart from the presence of financial services providers that may influence financial inclusion, the evidence in this study lends support to a significant relationship between the presence of financial services providers, financial literacy, and financial inclusion (Soetan, 2014). Since this study took place in one of the South-Western states of an emerging economy, Nigeria, it will be important to see whether other studies extended to other parts of the country come up with similar findings.

Figure 2. Proposed framework for geospatial pricing differentials.
Table 1. Data showing financial inclusion in Nigeria.
Table 2a. Summary of 2x2 factorial ANOVA showing main and interaction effects of residential status and perceived cost of financial inclusion on access to financial inclusion services.
Table 2b. Summary of estimated marginal means.
Table 3. Summary of 2x2 factorial ANOVA showing main and interaction effects of residential status and perceived cost of financial inclusion on quality of financial inclusion services. DV: Quality of financial inclusion services.
Table 4a. Summary of 2x2 factorial ANOVA showing main and interaction effects of residential status and perceived cost of financial inclusion on usage of financial inclusion services.
Table 4b. Summary of estimated marginal means. DV: Usage of financial inclusion services.
2023-11-10T16:21:29.966Z
2023-10-25T00:00:00.000
{ "year": 2023, "sha1": "005a1a984c1aed43216082dd91923c13ed020316", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/ijef/article/download/0/0/49432/53375", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4464e4d0f6c2ec2c42032d77a7fb771c5e1046d4", "s2fieldsofstudy": [ "Economics", "Sociology" ], "extfieldsofstudy": [] }
13024907
pes2o/s2orc
v3-fos-license
Exogenous l-Valine Promotes Phagocytosis to Kill Multidrug-Resistant Bacterial Pathogens The emergence of multidrug-resistant bacteria presents a severe threat to public health and causes extensive losses in livestock husbandry and aquaculture. Effective strategies to control such infections are in high demand. Enhancing host immunity is an ideal strategy with fewer side effects than antibiotics. To explore metabolite candidates, we applied a metabolomics approach to investigate the metabolic profiles of mice after Klebsiella pneumoniae infection. Compared with the mice that died from K. pneumoniae infection, mice that survived the infection displayed elevated levels of l-valine. Our analysis showed that l-valine increased macrophage phagocytosis, thereby reducing the load of pathogens; this effect was not only limited to K. pneumoniae but also included Escherichia coli clinical isolates in infected tissues. Two mechanisms are involved in this process: l-valine activating the PI3K/Akt1 pathway and promoting NO production through the inhibition of arginase activity. The NO precursor l-arginine is necessary for l-valine-stimulated macrophage phagocytosis. The valine-arginine combination therapy effectively killed K. pneumoniae and exerted similar effects in other Gram-negative (E. coli and Pseudomonas aeruginosa) and Gram-positive (Staphylococcus aureus) bacteria. Our study extends the role of metabolism in innate immunity and develops the possibility of employing the metabolic modulator-mediated innate immunity as a therapy for bacterial infections. Several lines of evidence have demonstrated that bacterial infections cause host metabolic changes, including central carbon metabolism, amino acid metabolism, and fatty acid metabolism (6)(7)(8)(9)(10). Pathogens also shift their metabolic programs to adapt to their new environment. More importantly, it has been demonstrated that several metabolites can be immunoregulators that modulate the function of immune cells (11)(12)(13)(14)(15)(16)(17)(18)(19)(20)(21)(22)(23). Examples of such metabolites include l-valine, which regulates the maturation and function of monocyte-derived dendritic cells (DCs) through a nutrient-sensitive signaling pathway (16). These results indicate that modulation of host innate immunity by metabolites may be a new valuable solution against bacterial pathogens. Metabolomics is a powerful tool for studying metabolic processes, identifying crucial biomarkers responsible for metabolic characteristics, and revealing metabolic mechanisms. Furthermore, crucial biomarkers can be used to reprogram a metabolome, leading to a specific metabolome to cope with changes in internal and external environments (23). Using this approach, we have identified crucial biomarkers that contribute to metabolic mechanisms in bacteria and hosts in response to antibiotics and pathogen invasion. The use of these key biomarkers reprograms the bacterial and host metabolomes to eliminate bacterial resistance to antibiotics and enhances host immunity against bacterial infections, respectively (24)(25)(26)(27)(28)(29)(30)(31)(32). Here, we report the use of gas chromatography-mass spectrometry (GC-MS) combined with multivariate statistical tools to characterize the blood metabolome from BALB/c mice infected by sublethal doses of K. pneumoniae. Furthermore, we identified a potential immunomodulatory metabolite, l-valine, which is capable of enhancing host immunity against K. pneumoniae infection. 
We were specifically interested in understanding the metabolic mechanism by which this potential compound modulates the survival-related metabolome to enhance cellular anti-infective abilities. The results are reported as follows.

MATERIALS AND METHODS

Ethics Statement
All work was conducted in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Institutional Animal Care and Use Committee of Sun Yat-sen University (Animal Welfare Assurance Number: I6).

Chemicals
Fluorescein isothiocyanate (FITC, F7250), l-valine (V0513), l-arginine (A8094), lipopolysaccharide (LPS, L4524), and S-(2-boronoethyl)-l-cysteine (SML184, arginase inhibitor) were purchased from Sigma-Aldrich. LY294002 was purchased from KeyGen Biotech, China. Two antibodies, phospho-Akt1-S473 (AP0140) and β-actin (AC004), were from Abclonal, USA. Four nitric oxide (NO) inhibitors, carboxy-PTIO (PTIO), l-NMMA, SMT, and l-NAME, were purchased from Beyotime Biotechnology, China. The urea assay kit and nitric oxide assay kit were purchased from BioVision (Mountain View, CA, USA) and Beyotime (Beijing, China), respectively. LY294002 was dissolved in DMSO; an equal volume of DMSO was added to the other groups in this experiment as a solvent control to exclude the effects of DMSO on phagocytosis.

Bacterial Strains, Culture Conditions, and Experimental Animals
All bacterial species in this study were obtained from the collection maintained at our laboratory. The bacterial strains used in the present study consisted of the following clinical isolates: K. pneumoniae (No. 0367 and No. 1924), Escherichia coli, MRSA, and Pseudomonas aeruginosa. The E. coli strains MCC-5 and HCC-13 were isolated from chickens, and the other bacteria were isolated from humans. The bacterial strains were cultured from frozen stocks in LB medium in a shaker bath at 37°C. Bacterial cells from overnight cultures were diluted 1:100 into 100 mL of LB medium. The cultures were harvested at an absorbance of 1.0 (OD600) by centrifugation at 7,000 rpm for 15 min at 4°C. The cells were washed in 40 mL of sterile saline (0.85% NaCl) and then resuspended in 0.85% NaCl. Male mice (BALB/c, pathogen-free) weighing 24 ± 2 g, from the same litters and obtained from the Animal Center of Sun Yat-sen University, were reared in cages and fed sterile water and dry pellet diets. Between 50 and 100 μL of blood was obtained from the orbital vein of each mouse as the non-infection group. Then, each mouse was intraperitoneally or intravenously infected by inoculation with the indicated colony-forming units (CFUs) of bacteria. Equal amounts of blood were collected from each mouse in the experimental group at 6 h post-infection using the same approach as before infection. The experimental group was further divided into the dead and survival groups at 15 days, depending upon whether the mice succumbed to or survived the infection.

Metabolite Extraction from Mouse Plasma
Total metabolites were extracted from plasma as described previously (27). Briefly, 50 μL of plasma was quenched with 50 μL of cold methanol and collected by centrifugation at 8,000 rpm for 3 min. This step was repeated two times.
The two supernatants were mixed, and an aliquot of each sample was transferred to a GC sampling vial containing 5 μL of 0.1 mg mL−1 ribitol (Sigma) as an analytical internal standard and then dried in a vacuum centrifuge concentrator before the subsequent derivatization. Two technical replicates were prepared for each sample.

Derivatization and GC-MS Analysis
Samples were derivatized before GC-MS analysis. Therefore, 80 μL of methoxyamine/pyridine hydrochloride (20 mg mL−1) was added to the dried samples to induce oximation for 1.5 h at 37°C, and then 80 μL of the derivatization reagent MSTFA (Sigma) was mixed and reacted with the sample for 0.5 h at 37°C. A 1 μL aliquot of the derivatized supernatant was added to a tube and analyzed using GC-MS (Trace DSQ II, Thermo Scientific). The GC-MS separation conditions consisted of an initial temperature of 70°C (5 min) with a uniform increase to 270°C at a rate of 2°C min−1 (5 min); 0.5 μL sample volume, splitless injection; injection temperature, 270°C; interface temperature, 270°C; ion source (EI) temperature, 30°C; ionization voltage, 70 eV; quadrupole temperature, 150°C; carrier gas, highly pure helium; velocity, 1.0 mL min−1; and full-scan range, 60-600 m/z. In data processing, spectral deconvolution and calibration were performed using AMDIS and internal standards. A retention time (RT) correction was performed for all the samples, and then the RT was used as the reference against which the remaining spectra were queried, and a file containing the abundance information for each metabolite in all the samples was assembled. Metabolites in the GC-MS spectra were identified by searching the National Institute of Standards and Technology (NIST) library using NIST MS Search 2.0. The resulting data matrix was normalized using the concentrations of the added internal standards, which were subsequently removed so that the data used for modeling consisted only of the extracted compounds. The resulting normalized peak intensities formed a single matrix with RT-m/z pairs for each file in the dataset. To reduce between-sample variation, we centered the imputed metabolic measures for each tissue sample on its median value and scaled them by the interquartile range (28). The z-score analysis scaled metabolites according to a reference distribution. The control samples were designated as the reference distribution. Thus, the mean and SD of the control samples were determined for each metabolite. Then, each sample was centered by the control mean and scaled by the control SD, per molecule. In this way, we could determine how metabolite levels deviated from the control state. In addition, independent component analysis (ICA) was selected as the pattern recognition method (25).

Western Blotting
RAW264.7 cells were lysed in 4× loading buffer [250 mM Tris pH 6.8, 8% (w/v) SDS, 40% glycerol, 20% β-mercaptoethanol, and 0.01% bromophenol blue] and boiled for 10 min. After centrifugation, 50 μg of total protein extract was separated by 12% SDS-PAGE and then transferred to nitrocellulose membranes for Western blotting. After blocking with 3% bovine serum albumin dissolved in Tris-buffered saline (TBS) containing 0.05% Tween-20 for 1 h at room temperature, the membranes were incubated with anti-phospho-Akt1-S473 or anti-β-actin primary antibodies at appropriate dilutions, followed by goat anti-rabbit or anti-mouse secondary antibodies conjugated with horseradish peroxidase, respectively.
Positive band intensities were detected using a gel documentation system (LAS-3000, Fujifilm Medical Systems, Stamford, CT, USA).

Therapeutic Effect of l-Valine and/or l-Arginine on Bacterial Eradication
Mice were acclimatized for 1 week and then randomly divided into groups for the investigation of the therapeutic effects of l-valine, l-arginine, or both. Mice were intraperitoneally challenged with bacterial pathogens (K. pneumoniae, MRSA, P. aeruginosa, or E. coli strains isolated from humans and chickens). l-Valine (0.5 g kg−1), l-arginine (0.25 g kg−1), l-valine (0.5 g kg−1) plus l-arginine (0.25 g kg−1), or an equal volume of sterile saline was intravenously administered to the bacteria-challenged mice through the tail vein at 1, 4, 7, 10, and 20 h; at 24 h, the mice were killed by decapitation and the spleen, liver, and kidney tissues were extracted. Replacement of l-valine or l-valine/l-arginine with d-valine or d-valine/l-arginine, respectively, was used as a control. The tissues were ground in sterile saline under aseptic conditions. Plate counting was used to quantify bacterial eradication in the tissues. The homogenates were diluted by appropriate factors, and aliquots of the diluted homogenates were plated on LB solid medium. Bacteria were counted when single colonies appeared on the medium after growth at 37°C. Differences between the groups were tested for significance at two significance levels (0.05 and 0.01) using the Statistical Package for the Social Sciences (SPSS) software.

Cell Culture and Quantitative Phagocytosis Assay
The murine macrophage cell line RAW264.7 was cultured at 37°C in a 5% CO2 incubator in DMEM (HyClone) supplemented with 10% (v/v) cosmic calf serum (HyClone), 100 U mL−1 penicillin G, and 100 U mL−1 streptomycin. Macrophage phagocytosis was examined as described previously (26). Briefly, RAW264.7 cells were harvested using CaCl2- and MgCl2-free PBS containing 5 mM EDTA and plated at 5 × 10^6 macrophages/well in 6-well plates. For experiments with administration of the indicated concentrations of l-valine, LPS, l-arginine, arginase inhibitor, or NO inhibitor, the cells were deprived of serum overnight and then incubated alone or additively with the abovementioned molecules for the indicated times in serum-starved media, including DMEM, l-valine-free medium (DMEM without l-valine), l-arginine-free medium (DMEM without l-arginine), and l-valine- and l-arginine-free medium (DMEM without l-valine and l-arginine). After pretreating for 6 h, E. coli-GFP or FITC-conjugated K. pneumoniae cells were centrifuged onto the macrophages at a multiplicity of infection of 100 in the indicated medium without serum or antibiotics. Then, the plates were placed at either 37°C or 4°C for the indicated times. After infection, the macrophages were vigorously washed with cold PBS to stop additional bacterial uptake or to destroy the bacteria in the phagosomes. Cells were washed at least four times in cold PBS and then fixed in 4% paraformaldehyde before being harvested in cold PBS containing 5 mM EDTA and subjected to FACS analysis.

Ultra-Performance Liquid Chromatography (UPLC)-MS Analysis of Extra- and Intracellular l-Valine and l-Arginine
l-Valine, or l-valine plus NO inhibitors, was added to l-valine-free DMEM and then incubated with RAW264.7 cells. After 3 h of incubation, 100 μL aliquots of medium were mixed with 400 μL of acetonitrile. The mixture was mixed by vortex for 2 min, followed by centrifugation at 14,000 rpm for 10 min at 4°C.
The cells were lysed by sonication in 400 μL of extraction solution (50% acetonitrile in double-distilled water). After centrifugation, the supernatants from the medium or cell samples were transferred and diluted 1:1 with acetonitrile for the subsequent UPLC-MS/MS analysis. Approximately 90 μL of acetonitrile was added to 30 μL of mouse serum. The mixture was mixed by vortex for 1 min, followed by centrifugation at 12,000 rpm for 10 min at 4°C. Murine tissues were homogenized in a laboratory homogenizer for 3 min. Then, 1 mL of acetonitrile was added to the sample. The samples were disrupted by sonication for 2 min, followed by centrifugation at 12,000 rpm for 10 min at 4°C. The supernatant was collected and analyzed by UPLC-MS/MS. Ultra-performance liquid chromatography analysis was performed on a Waters ACQUITY UPLC system equipped with an Acquity BEH C18 column (50 mm × 2.1 mm i.d., 1.7 μm; Waters Corp.). The sample was injected onto the column during the loading step by the loading pump and auto-sampler. Separation used linear gradient elution with mobile phase A (acetonitrile) and mobile phase B (10 mM ammonium acetate with 0.1% formic acid in ultrapure water) at a flow rate of 0.25 mL min−1. The gradient elution was as follows: 0-0.5 min, 95% A; 0.5-2 min, 20% A; 2-2.5 min, 20% A; 2.5-4 min, 95% A. The injection volume was 10 μL, and the column temperature was kept at 35°C. Mass spectrometry detection was carried out on a Quattro Premier XE equipped with an electrospray ionization source operating in positive ionization mode (ESI+). The capillary voltage was set to 3,000 V, and the cone voltage was set to 20 V. The extractor voltage and RF lens were set at 1 and 0.5 V, respectively. The desolvation gas flow was set to 650 L h−1 at a temperature of 450°C, the cone gas flow rate was set at 50 L h−1, and the source temperature was set at 120°C. Quantification was performed in MRM mode; the precursor > quantifier ion transitions for l-arginine and l-valine were 175 > 70 and 118 > 72, respectively.

NO, Urea Concentration, and Arginase Activity Measurements
The total NO concentration in culture medium and cells was calculated by measuring the nitrate and nitrite concentrations with a Total Nitric Oxide Assay Kit (Beyotime, China) according to the manufacturer's instructions. The optical densities at 540 nm were recorded using a microplate reader (Thermo Multiskan MK3; Thermo Fisher Scientific, Waltham, MA, USA). The concentration of NO output was calculated from the standard curve. Urea production was determined using a Urea Colorimetric Assay Kit (BioVision). Five million macrophages were harvested and lysed for 30 min in 100 μL of 10 mM Tris-HCl, pH 7.4, containing 0.4% (w/v) Triton X-100. Then, the cells were centrifuged at 13,000 rpm for 10 min. The supernatant was collected for the Arginase Activity Assay Kit (Sigma, MAK112). The quantitative data are expressed as the mean ± SD. One-way ANOVA was used to determine statistical significance.

Effect of Mouse Serum on Killing K. pneumoniae in the Absence and Presence of l-Valine
l-Valine (0.5 g kg−1) or an equal volume of sterile saline was intravenously administered to mice through the tail vein. Two hours later, serum was collected from these mice. K. pneumoniae grown to an absorbance of 1.0 (OD600) in LB medium was harvested by centrifugation at 7,000 rpm for 15 min at 4°C. The cultures were washed and then resuspended in 0.85% NaCl. Equal amounts of K. pneumoniae (~10^4 CFU) were added to the serum of both groups in a final volume of 150 μL.
Effect of Mouse Serum on Killing K. pneumoniae in the Absence and Presence of l-Valine

l-Valine (0.5 g kg−1) or an equal volume of sterile saline was administered intravenously to mice through the tail vein. Two hours later, serum was collected from these mice. K. pneumoniae grown to an absorbance of 1.0 (OD600) in LB medium was harvested by centrifugation at 7,000 rpm for 15 min at 4°C, washed, and resuspended in 0.85% NaCl. Equal amounts of K. pneumoniae (~10^4 CFU) were added to the serum of both groups in a final volume of 150 μL. Samples were incubated at 37°C for 24 h with slow rotation, and K. pneumoniae dilutions were plated on LB agar for colony formation.

Effect of Valine and Arginine on Bacterial Growth

Klebsiella pneumoniae 0367 and E. coli Y17 were cultured in LB medium for 16 h at 37°C. Cells were harvested by centrifugation at 7,000 rpm for 5 min, washed three times with 30 mL of sterile saline, and resuspended in sterile saline to an OD600 of 0.6. Different concentrations of valine or arginine were added to the samples to reach a final volume of 5 mL; a sample with no added metabolite was used as a control. Samples were incubated at 37°C for 6 h. During the incubation, 100 μL aliquots were removed periodically, serially diluted, and plated on LB agar. The plates were incubated at 37°C for 8-10 h, and only dilutions that yielded 20-200 colonies were counted to calculate CFUs. Percent survival was determined by dividing the CFU obtained from a treated sample by the CFU obtained for the control.

Results

Metabolomic Profiling of Plasma from Surviving and Dead Mice following K. pneumoniae Infection

Mice infected with an LD50 dose of K. pneumoniae (No. 0367) producing TEM-type ESBLs (Figure S1 in Supplementary Material) had one of two outcomes: they either succumbed to or survived the infection (Figure 1A). Plasma samples were drawn from the mice 6 h post-infection; serum drawn before infection served as the control group (Figure 1B). Metabolic profiling of the plasma samples was performed using a GC-MS-based approach followed by multivariate analysis to identify crucial biomarkers. The reliability of the GC-MS measurements was assessed through correlation coefficients between two technical repeats (Figure S2A in Supplementary Material). A total of 68 metabolites were detected in each sample; internal standard and solvent peaks were excluded. The metabolites are displayed as a heat map and Z-score plot (Figures S2B,C in Supplementary Material). ICA identified two independent components, IC01 and IC02, whereby IC01 differentiated the three groups without any significant outliers (Figure S2D in Supplementary Material), indicating the reproducibility of the samples.

Pattern Recognition Identifies l-Valine as a Potential Anti-infection Metabolite

A two-sided Wilcoxon rank-sum test coupled with a permutation test was used to identify crucial biomarkers that differentiated these three groups (27). Compared with the control group, the abundances of 56 and 50 metabolites were significantly altered in the dead and survival groups (p < 0.05), respectively (Figures 1C,D), among which 42 metabolites were shared. Among these 42 metabolites, 20 were increased, 20 were decreased, and 2 were increased in the survival group but lower in the dead group. In addition to the shared metabolites, seven metabolites were increased and seven were decreased in the dead group, whereas two metabolites were increased and six were decreased in the survival group (Figure 1E).

Figure 1 | Differential analysis coupled with pathway enrichment analysis identifies l-valine as a potential anti-infective metabolite. (A) Survival percentage for mice infected with sublethal doses of Klebsiella pneumoniae; based on the pretest, the sublethal dose of K. pneumoniae was determined to be 1 × 10^8 CFU. (B) Experimental design for sample acquisition for gas chromatography-mass spectrometry (GC-MS)-based metabolomics. Prior to infection, 100 μL of blood was drawn from the orbital vein of each of 20 mice as the control group. Eighteen hours later, a half-lethal dose of K. pneumoniae was intraperitoneally injected into each mouse. Six hours after infection, 100 μL of blood was collected. Survival of all the mice was observed for 15 days. (C) Heat map showing the relative abundance of the 56 and 50 significantly differential metabolites in the dead and survival groups, respectively. (D) Z-score plots corresponding to the data in panel (C); the upper panel is the dead group and the lower panel is the survival group. (E) Venn diagram showing the overlap of differential metabolites between the dead and survival groups; decreased and increased metabolites are indicated with green and red arrows, respectively. (F) Pathway enrichment of differential metabolites in the dead and survival groups; a horizontal histogram shows the enriched pathways with impact values >0.1. (G,H) Abundance of l-leucine, l-valine, and l-threonic acid in the control, dead, and survival groups. Error bars ± SEM, *p < 0.05 and **p < 0.01.

In the pathway analysis, the differentially abundant 50 metabolites in the survival group and 56 metabolites in the dead group were enriched in four and three pathways (p < 0.05 and impact > 0.1), respectively (Figure 1F). The shared pathways were valine, leucine, and isoleucine metabolism, and galactose metabolism. All of the detected metabolites from galactose metabolism, including d-glucose, mannose, fructose, galactose, and myo-inositol, were decreased in both the survival and dead groups (Figure S2E in Supplementary Material). Two metabolites, l-valine and l-leucine, were enriched in valine, leucine, and isoleucine metabolism. Although the abundance of l-leucine was increased in both the survival and dead groups, no significant differences were detected between the two groups (Figure 1G). By contrast, l-valine and l-threonic acid were differentially expressed between the two groups (Figures 1D,H), and the abundance of l-valine was higher than that of l-threonic acid in the survival group. Therefore, l-valine could be a prognostic biomarker for K. pneumoniae infection and could act as a modulator protecting the host against infection.

Exogenous l-Valine Displays an Anti-infective Effect on Bacterial Infection

The concentrations of l-valine in the control, dead, and survival mice were 9, 5, and 17 μM, respectively, as normalized to the internal standard ribitol (0.1 mg mL−1, 5 μL). Thus, l-valine levels should increase at least two-fold to promote survival during bacterial infection. To examine the potential anti-infective role of l-valine in vivo, two groups of mice were injected i.p. with K. pneumoniae, followed by i.v. injection of l-valine (0.5 g kg−1) or sterile saline five times within 20 h. The levels of l-valine, but not l-arginine, were significantly elevated in plasma at the different time points (Figures S3A,B in Supplementary Material). All of the mice were sacrificed at 24 h after infection, and the liver, spleen, and kidney were removed surgically for analysis of bacterial load and l-valine. Bacterial counts were significantly lower, and l-valine levels significantly higher, in the liver, spleen, and kidney of mice injected with l-valine than in the saline control group (Figure 2A; Figure S3C in Supplementary Material). Similar results were obtained when the mice were exposed to a clinical strain of antibiotic-resistant E. coli Y17 (Figure 2B).
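The per-metabolite significance testing described above (a two-sided Wilcoxon rank-sum test between groups) can be sketched as follows; the data are simulated, the group sizes and effect sizes are arbitrary, and the accompanying permutation test used in the study is omitted for brevity.

```python
# Sketch: screening metabolites for differential abundance between two groups
# with a two-sided Wilcoxon rank-sum test. Simulated data, illustrative only.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
n_metabolites, n_control, n_survival = 68, 20, 10
control  = rng.lognormal(mean=1.0, sigma=0.4, size=(n_control, n_metabolites))
survival = rng.lognormal(mean=1.1, sigma=0.4, size=(n_survival, n_metabolites))

results = []
for j in range(n_metabolites):
    stat, p = ranksums(control[:, j], survival[:, j])
    fold_change = survival[:, j].mean() / control[:, j].mean()
    results.append((j, fold_change, p))

significant = [r for r in results if r[2] < 0.05]
print(f"{len(significant)} of {n_metabolites} metabolites with p < 0.05")
```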
Replacing l-valine with d-valine resulted in a bacterial load similar to that of the saline control (Figures S4A,B in Supplementary Material), thus excluding a non-specific effect of l-valine in mice. These data strongly suggest that l-valine decreases the bacterial load in K. pneumoniae- and E. coli-infected mice. To exclude the possibility that l-valine exerts its effects through the complement system or immunoglobulins, we investigated the effect of l-valine on bacterial growth in the presence of freshly drawn murine serum. Bacterial growth was unaffected even in the presence of l-valine (Figure S4C in Supplementary Material), implying that l-valine-mediated bacterial elimination was unlikely to be attributable to the complement system or immunoglobulins. To further explain the anti-infective effects of l-valine in infected mice, we therefore surmised that l-valine might stimulate phagocytosis and thereby increase the rate at which pathogens are eliminated from the host. This hypothesis was tested in vitro by examining the cytoplasmic mean fluorescence intensity (MFI) of murine macrophages (RAW264.7). l-Valine-pretreated RAW264.7 cells were incubated with FITC-conjugated K. pneumoniae or green fluorescent protein (GFP)-expressing E. coli. l-Valine markedly stimulated the phagocytosis of fluorescence-tagged bacterial pathogens at 0.4-10 mM in a dose-dependent manner (Figure 2C). However, phagocytosis was not changed in d-valine-pretreated RAW264.7 cells (Figure S4D in Supplementary Material), and other amino acids, such as glycine, were unable to enhance phagocytosis (Figure S4E in Supplementary Material). These results support an immunomodulatory function of l-valine in macrophages. Notably, high doses of l-valine (20-40 mM) had no significant impact on normal phagocytosis levels, although their effects were weaker than those of lower doses (Figure 2C). Additionally, l-valine alone did not affect bacterial growth (Figure S4G in Supplementary Material). Plate counting revealed that incubation with l-valine reduced the bacterial load in the extracellular environment and increased the bacterial load within cells (Figure S4H in Supplementary Material). During infection with Gram-negative bacteria, including K. pneumoniae and E. coli, LPS is abundantly produced and therefore contributes to the immune response. The effect of l-valine on phagocytosis was therefore examined upon LPS stimulation. Macrophages treated with l-valine displayed higher phagocytosis than untreated macrophages, even in the presence of LPS (Figure 2D). Phagocytosis was also further boosted when cells were pretreated with LPS for 1 h followed by 2 h of LPS plus l-valine, or pretreated with l-valine for 1 h followed by 2 h of l-valine plus LPS (Figure 2E). These data support that l-valine enhances macrophage-mediated innate immunity to Gram-negative pathogens.
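To make the dose-response readout concrete, the short sketch below expresses hypothetical MFI values measured at several l-valine doses as fold changes over the untreated control, which is the way such phagocytosis data are typically summarised; the numbers are invented and do not reproduce Figure 2C.

```python
# Sketch: summarising phagocytosis as MFI fold change over an untreated control.
# Dose levels and MFI readouts are illustrative placeholders.
doses_mm = [0, 0.4, 2, 10, 20, 40]
mfi      = [1000, 1250, 1600, 2100, 1150, 1050]   # hypothetical FACS readouts

control_mfi = mfi[0]
for dose, value in zip(doses_mm, mfi):
    print(f"{dose:>5} mM l-valine: MFI fold change = {value / control_mfi:.2f}")
```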
l-Valine-Induced PI3K/Akt1 Activation and NO Production Contribute to the Enhanced Phagocytosis

A previous study showed that depletion of extracellular l-valine in DCs decreased signalling through the mTORC1/S6K pathway (16), which is activated by PI3K/Akt (33). Consistently, l-valine-treated macrophages displayed increased phospho-Akt1 (p-Akt1) (Figure 3A). When macrophages were treated with the PI3K inhibitor LY294002 (10 μM), the macrophage MFI was lower than in untreated cells, regardless of the presence of LPS (Figure 3B). In particular, upon LPS stimulation, LY294002 reduced the l-valine-enhanced phagocytosis by almost half (Figure 3B). Thus, l-valine-induced activation of PI3K/Akt1 is partly responsible for the improved phagocytosis. It has also been reported that the addition of l-valine leads to a concentration-dependent decrease in urea and NO production in tissues and endothelial cells when arginase is inhibited (34,35). We therefore hypothesized that NO production was involved in l-valine-enhanced phagocytosis. First, exogenous l-valine increased intracellular levels of l-valine, as quantified by UPLC-MS (Figure 4A), indicating that l-valine functions inside macrophages. Consistent with a previous finding (34), exogenous l-valine at concentrations ranging from 0.4 to 10 mM prompted NO production but reduced extra- and intracellular urea levels in a dose-dependent manner; higher concentrations of l-valine had weaker effects (Figures 4B,C). LPS stimulation further enhanced l-valine-induced NO production (Figure 4B). Meanwhile, l-valine inhibited arginase activity in RAW264.7 cells (Figure 4D), and inhibition of arginase activity with S-(2-boronoethyl)-l-cysteine, a potent and specific inhibitor, significantly boosted macrophage phagocytosis (Figure 4E). These data point to a strong interrelationship between l-valine-induced phagocytosis and NO production. To further test this idea, we investigated the effects of four NO-related inhibitors, carboxy-PTIO (PTIO, an NO scavenger), l-NMMA (an inhibitor of total NO synthase), SMT [an inhibitor of inducible NO synthase (iNOS)], and l-NAME (an inhibitor of endothelial NO synthase), on l-valine-induced phagocytosis. As expected, these inhibitors significantly suppressed l-valine-induced macrophage phagocytosis with or without LPS treatment (Figure 4F), and the MFI decreased in a time-dependent manner (Figure 4G). Together, these data reveal that PI3K/Akt1 activation and NO production both contribute to l-valine-enhanced phagocytosis.

l-Arginine Is Involved in l-Valine-Mediated Phagocytosis

Because l-arginine is the exclusive source of NO in cellular metabolism, the role of l-arginine in l-valine-mediated phagocytosis was investigated. Removal of l-arginine from the culture medium reduced basal phagocytosis levels (Figure 5A). l-Arginine itself does not affect bacterial survival (Figure S4D in Supplementary Material), and replacement of l-arginine with d-arginine had no effect on macrophage phagocytosis (Figure S4G in Supplementary Material), indicating a specific effect of l-arginine on phagocytosis. When different concentrations of l-arginine or l-valine were added to medium deficient in both amino acids, exogenous l-arginine was not effective and even countered phagocytosis (Figures 5B,C). However, the addition of 10 mM l-valine increased phagocytosis in the absence or presence of LPS (Figures 5B,C). We therefore reasoned that the increased phagocytosis observed at 10 mM in valine-incubated cells was specific and might be caused by eNOS, which is constitutively expressed in RAW264.7 mouse macrophages (36). We propose that 10 mM valine was able to inhibit arginase activity, thereby elevating NO production through eNOS metabolism and thus enhancing phagocytosis. However, the MFI increase was very slight because of the limited l-arginine within the cells (Figure 5B).
Lower concentrations of valine probably did not fully inhibit arginase activity, whereas higher concentrations, such as 40 mM valine, might promote cellular side effects (see the Discussion), thereby resulting in the decrease in MFI. Macrophages displayed the highest levels of phagocytosis when 10 mM l-valine and l-arginine were used together (Figure 5D). Regardless of the presence of LPS, l-arginine was decreased in l-valine-treated macrophages and in the macrophage culture medium (Figure 5E), and this decrease could be restored by additional treatment with NO inhibitors, except for PTIO (Figure 5F). A likely explanation is that PTIO, as a specific NO scavenger, has limited effects on NO synthase activity, which directly metabolizes l-arginine to produce NO, thereby maintaining or even increasing the consumption rate of l-arginine. Together, these data indicate that l-arginine is essential for l-valine-induced phagocytosis and potentially has a synergistic effect on bacterial elimination in vivo.

l-Valine and l-Arginine Synergistically Protect Mice against Clinically Relevant Multidrug-Resistant Bacteria

Clinically relevant multidrug-resistant bacteria are associated with therapy failures and public health crises that are difficult, or no longer possible, to control with antibiotics and that must be addressed with new treatments (37,38). It would therefore be clinically helpful if the combined administration of l-valine and l-arginine promoted innate immunity-dependent killing of these multidrug-resistant bacteria. To test this idea, K. pneumoniae-challenged mice were treated with l-valine, l-arginine, or both via intravenous injection. l-Valine alone enhanced the elimination of K. pneumoniae by the host, whereas injection of both l-valine and l-arginine had stronger effects (Figure 6A), which was supported by elevated l-valine and l-arginine levels in the plasma and kidney (Figures S3A,B,D in Supplementary Material). However, neither d-valine nor d-arginine alone could eliminate K. pneumoniae or E. coli Y17 (Figures S4A,B in Supplementary Material), indicating the specific effects of l-valine and l-arginine. Valine-arginine combination therapy was also tested against other clinical multidrug-resistant pathogens, including Gram-positive and Gram-negative bacteria isolated from humans or chickens, and displayed therapeutic effects against these pathogens as well (Figure 6B). These findings indicate that valine-arginine combination therapy could be a useful intervention in infectious diseases caused by multidrug-resistant bacteria.

Discussion

Intensive and inappropriate use of antibiotics drives the development of drug resistance in bacterial pathogens, increasing the risk of severe disease or death after exposure to multidrug-resistant bacteria (5,39). Although antibiotics are still the first choice for treating such infections, severe consequences can be expected if more multidrug-resistant bacteria are generated and spread, and a novel strategy for controlling this situation is urgently needed. Our previous studies strongly suggested that harnessing alanine and glucose, metabolites that are suppressed in antibiotic-resistant bacteria, could revert such a phenotype (24). The idea of using metabolites to reprogram existing metabolic pathways towards a desired state has been tested in many other species (23). Here, we show that metabolite-mediated reprogramming is not limited to bacteria but is possible in the host as well.
We identified l-valine as a key metabolite for promoting mouse survival under K. pneumoniae challenge; this effect was also observed with other pathogens such as E. coli, P. aeruginosa, and MRSA, implying that l-valine may be a metabolite that regulates immune functions. Indeed, we found that l-valine increases macrophage phagocytosis in an l-arginine-dependent manner, and the enhanced phagocytosis was attributed to increased PI3K/Akt1 activation and NO production (Figure 7). Thus, our study not only proposes the use of metabolites in managing bacterial infection but also presents a well-established platform for identifying metabolites for bio-reprogramming. l-Valine, an essential amino acid, has functions beyond nutrition. In a previous study, mice fed synthetic diets limited in l-valine exhibited markedly increased susceptibility to bacterial infection (40), indicating the anti-infective potential of l-valine; however, valine supplementation is not currently used in the clinic, and the mechanism underlying this anti-infective property has remained unknown. A recent paper revealed the immunological function of l-valine in DCs and demonstrated that l-valine deficiency inhibits the differentiation of monocytes into mature DCs as well as IL-12 production, likely by downregulating the mTORC1/S6K signaling pathway, which may be the cause of enhanced sensitivity to bacterial infection in cirrhotic patients (16). Normally, PI3K/Akt is the upstream activator of mTORC1 (33,41). Our study found that l-valine is capable of boosting PI3K/Akt1 activation in macrophages, and PI3K inhibition partially reduced the l-valine-induced phagocytosis of bacteria, which is consistent with previous studies showing that PI3K/Akt activation mediates macrophage phagocytosis (42,43). We also addressed whether the mechanism of l-valine-enhanced phagocytosis of bacteria in macrophages is associated with NO production. Although phagocytosis and NO production are both believed to clear pathogens, their interrelationship had not been firmly established. Our study revealed that l-valine induces macrophages to engulf bacteria through NO and that NO plays a major role in the direct intracellular mechanism that kills pathogens (44). Meanwhile, NO produced by iNOS functions as a signaling molecule that can strengthen phagocytosis by LPS-stimulated or IFN-γ-primed macrophages (45,46). Furthermore, engagement of Fcγ receptors triggers neuronal and endothelial NOS activity (nNOS and eNOS), both of which produce low levels of NO that promote macrophage phagocytosis (47). The present study showed that iNOS and eNOS are the two major metabolic enzymes producing NO in l-valine-treated macrophages, and inhibitors of iNOS and eNOS reduced l-valine-enhanced phagocytosis. In fact, promotion of NO production by l-valine has been documented in the literature (34,35); however, the ability of l-valine to enhance NO production and in turn increase bacterial phagocytosis by macrophages was previously unrecognized. It would nevertheless be helpful to use primary murine macrophages, NOS-knockout macrophages, and human macrophages to further confirm our conclusions. Another finding of this study is that l-arginine is essential for l-valine-enhanced phagocytosis and NO production. l-Arginine is the sole metabolic source for NO production, and factors that limit the availability of l-arginine can reduce NO production, thereby increasing host susceptibility to invading pathogens (48).
In mammalian cells, arginase and NOS compete with one another for l-arginine as an enzyme substrate, and deprivation of l-arginine reduces the translational efficiency of iNOS mRNA and the stability of the iNOS protein (49,50). Although the enzymatic activities of arginase and NOS are co-induced in macrophages in response to bacterial infection, the role of arginase is considerably stronger than that of NOS (51,52), indicating that arginase is the predominant regulator of arginine availability in activated macrophages. Furthermore, inhibition of arginase activity in macrophages increases host survival during Toxoplasma gondii infection and reduces the bacterial burden in the lung during tuberculosis infection (52). These data are consistent with our finding that arginase inhibition improves survival during infection with clinically relevant multidrug-resistant isolates. Addition of l-arginine reinforces phagocytosis in l-valine-treated macrophages, and valine-arginine combination therapy further reduces the bacterial load in tissues, underscoring the significance of l-arginine availability in eliminating bacterial infection. Although the findings of exogenous metabolite-induced cellular l-valine/arginine elevation and l-valine-enhanced phagocytosis in vitro correlate with elevated host l-valine/arginine and bacterial control in vivo, further investigation is required to determine whether the in vitro mechanism is relevant to bacterial control in vivo, including examining whether macrophages are really the cells targeted in vivo by l-valine and l-arginine administration and whether other immune cells are involved (53). Additionally, two unexplained observations from the present study will need to be addressed in future work. The first is why increasing concentrations of l-valine have progressively weaker effects on urea/NO synthesis (Figures 4B,C). In mammals, degradation of l-valine generates propionyl-CoA, which can be metabolized to succinyl-CoA by sequential catalytic reactions involving propionyl-CoA carboxylase and methylmalonyl-CoA racemase. Succinyl-CoA feeds into the Krebs cycle and produces NADH, which results in ATP production through mitochondrial respiration. During this respiration, reactive oxygen species (ROS), a by-product of the pathway, are generated through electron transfer to O2 (54). Low concentrations of mitochondrial ROS augment macrophage bactericidal activity (55), whereas high concentrations of mitochondrial ROS may impair macrophage survival (54,56). We therefore propose that increasing concentrations of l-valine induce mitochondrial ROS overload, which leads to macrophage dysfunction, weakens NOS activity, and eventually blunts the effects on urea/NO synthesis. The second observation is that phagocytosis is inhibited by 20 mM l-arginine supplementation in the absence of l-valine (Figure 5B). As mentioned above, the products of arginase catalysis are urea and l-ornithine, and l-ornithine may be used by macrophages for the synthesis of polyamines (57). High concentrations of polyamines can inhibit NOS activity (58,59). After l-valine deprivation, excess l-arginine is metabolized by arginase to produce abundant polyamines, which potentially suppress the activity of eNOS, a constitutively expressed enzyme in RAW264.7 cells (36). This process would reduce eNOS-mediated NO production and eventually decrease NO-mediated phagocytosis by macrophages.
In summary, we demonstrated that mouse resistance to bacterial infection is strongly associated with metabolic state, and that high levels of l-valine are vital to the survival of pathogen-infected hosts. Although how the host accumulates high levels of l-valine in vivo upon bacterial infection is unknown, our study provides a new clue for identifying metabolic modulators and highlights the possibility of employing this metabolic modulator-mediated innate immunity as a therapy for bacterial infections.

Author Contributions

BP, X-hC, and X-xP wrote the manuscript. X-xP, T-cZ, and BP conceptualized and designed the project. X-xP, BP, HL, and X-hC interpreted the data. X-hC, BP, J-xZ, and HL performed data analysis. X-hC performed experiments. All the authors reviewed the manuscript.
Interactive comment on "Summer and winter variations of dicarboxylic acids, fatty acids and benzoic acid in PM2.5 in the Pearl River Delta Region, China": The summer-to-winter differences in concentrations of the various organic compounds seem to be affected by both photochemistry and air mass transport patterns (especially the location of each site with respect to major source areas). There are a few issues that should be clarified here. First, how typical are the observed air mass transport patterns for this region? Are air flows in summer and winter always like those presented in Figure 2, or was this major difference between the two seasons just a coincidence?

... vehicle exhaust, or formed from photochemical degradation of aromatic hydrocarbons. Seasonal variations of the organic species concentrations were found in the four sampling cities. Higher concentrations of TQWOC were observed in winter (598 ± 321 ng m−3) than in summer (372 ± 215 ng m−3). However, the abundances of TQWOC in the OC mass were higher in summer (0.9-12.4%, 4.5 ± 2.7% on average) than in winter (1.1-5.7%, 2.5 ± 1.2% on average), consistent with enhanced secondary production of dicarboxylic acids in warmer weather. Spatial variations of water-soluble dicarboxylic acids were characterized by higher concentrations in Hong Kong and lower concentrations in Guangzhou (GZ)/Zhaoqing (ZQ) during winter, whereas the highest concentrations were observed in GZ/ZQ during summer. These spatial and seasonal distributions are consistent with photochemical production and subsequent accumulation under different meteorological conditions.

Distributions and concentrations of the organic acids in aerosols are important for understanding their photochemical reactions and long-range transport. These species are emitted to the atmosphere directly from natural and anthropogenic primary sources (Kawamura and Kaplan, 1987; Rogge et al., 1991, 1993a; Fang et al., 1999; Schauer et al., 1999; Simoneit et al., 2002). They are also produced by secondary atmospheric chemical reactions. Total dicarboxylic acids account for ∼1-3% of the total particulate carbon in urban areas and >10% in remote marine environments (Kawamura and Ikushima, 1993; Kawamura et al., 1996a, b; Kawamura and Sakaguchi, 1999; Kerminen et al., 2000). Fatty acids are one of the most abundant compound classes in the polar organic fraction of aerosols from urban atmospheres (Oliveira et al., 2007). They were found to contribute 6-53% of identified organic compounds from emission sources such as biomass burning (Rogge et al., 1998; Nolte et al., 2001; Schauer et al., 2001; Fine et al., 2002), cooking (Schauer et al., 1999; Rogge et al., 1991; He et al., 2004), paved road dust (Nolte et al., 2002) and automobiles (Rogge et al., 1993a; He et al., 2006). Benzoic acid is a secondary product of the photochemical degradation of aromatic hydrocarbons emitted by automobiles (Suh et al., 2003); it has also been measured as a primary pollutant in the exhaust of motor vehicles (Kawamura et al., 1985; Rogge et al., 1993b). The Pearl River Delta (PRD) region in China covers nine prefectures of the province of Guangdong, namely Guangzhou, Shenzhen, Zhuhai, Dongguan, Zhongshan, Foshan, Huizhou, Jiangmen and Zhaoqing, together with the Hong Kong Special Administrative Region (HKSAR) and the Macau Special Administrative Region. It has a population of approximately 40 million people.
The climate of the PRD is dominated by the Asian monsoon, with northerly winds prevailing in winter and southerly winds in summer. The PRD is one of the fastest-growing economic regions in China. With urbanization and industrialization, air pollution has become increasingly severe in the PRD, which is one of the four most haze-affected regions in China, together with the Yangtze River Delta, Beijing-Tianjin-Tangshan and Chongqing. Particulate matter with a diameter of less than 2.5 micrometers (PM2.5) has recently received much attention (Yang et al., 2005; Feng et al., 2007; Li et al., 2008). These fine particles can penetrate deeply into the human lung and also affect visibility, the environment, and radiative forcing (Penner and Novakov, 1996; Menon et al., 2002; Wilkening et al., 2000; Nel, 2005). The adverse health, environmental, and climate effects of fine particles derive fundamentally from their chemical components and properties. Previous studies have determined the organic acids in Guangzhou (Feng et al., 2006; Wang et al., 2006); however, their seasonal and spatial variations were not studied (e.g., Feng et al., 2006). To better understand the organic composition of aerosols in the PRD region, PM2.5 samples were acquired at four sampling sites simultaneously during winter and summer. The main objectives of this study are to determine the spatial and seasonal variations of selected organic species and to explore their implications for sources and photochemical reactions.

Sample collection

Four sampling sites were selected in the PRD region: Sun Yat-sen University in Guangzhou (GZ) and Zhaoqing University in Zhaoqing (ZQ) in mainland China, as well as The Hong Kong Polytechnic University (PU) and Hok Tsui (HT) in Hong Kong. Their locations are shown in Fig. 1. These four sites represent different site types (urban: GZ; semi-rural: ZQ; urban/roadside: PU; rural: HT). Twenty-four-hour sampling of PM2.5 was conducted simultaneously at the four sites from 14 December 2006 to 28 January 2007 in winter and from 4 July to 9 August 2007 in summer. Fifteen samples (8 in winter and 7 in summer) were collected at each site for subsequent organic analyses. PM2.5 was collected on pre-heated (800 °C, 3 h) quartz fiber filters (102 mm) using medium-volume samplers at a flow rate of 113 L min−1. The sampling flow rates were checked before and after sampling with a TSI mass flow meter (model 4040, Shoreview, MN, USA). After sampling, the aerosol-loaded filters were stored in a refrigerator at 4 °C to prevent loss of volatile components; however, this temperature does not completely prevent the loss of very volatile components and does not avoid some microbial processing. One field blank was collected at each site to subtract positive artifacts resulting from passive adsorption of gas-phase organic compounds onto the filter during and/or after sampling. Meteorological data (from the Hong Kong Observatory and the China Meteorological Administration) show that southerly air mass flow dominated during summer; therefore, GZ and ZQ were the downwind sampling locations in summer. On the contrary, northerly air mass flow dominated during winter, so PU and HT were the downwind sites in winter. Molecular compositions of low-molecular-weight diacids (C2-C12), ketocarboxylic acids (ωC2-ωC9, pyruvic acid), α-dicarbonyls (C2-C3), benzoic acid and fatty acids (C12-C25) were determined by gas chromatography/mass spectrometry (GC/MS).
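For orientation, the conversion from a filter loading to an ambient concentration follows directly from the flow rate and sampling duration given above; the sketch below assumes a hypothetical blank-corrected analyte mass on the filter and is not based on the study's measurements.

```python
# Sketch: converting a blank-corrected mass on a filter into an ambient
# concentration for a 24-h sample collected at 113 L min-1.
flow_rate_l_min = 113
duration_min = 24 * 60
air_volume_m3 = flow_rate_l_min * duration_min / 1000.0   # litres -> m3 (~162.7 m3)

analyte_mass_ng = 40000.0   # hypothetical blank-corrected analyte mass on the filter
concentration_ng_m3 = analyte_mass_ng / air_volume_m3
print(f"Sampled volume: {air_volume_m3:.1f} m3; concentration: {concentration_ng_m3:.1f} ng m-3")
```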
Samples were also analyzed for organic carbon (OC), elemental carbon (EC) and water-soluble organic carbon (WSOC).

OC, EC and WSOC analysis

OC and EC were measured on a 0.526 cm2 punch from each filter by thermal optical reflectance (TOR) following the IMPROVE protocol on a DRI Model 2001 Thermal/Optical Carbon Analyzer (Atmoslytic Inc., Calabasas, CA, USA) (Chow et al., 2004; Cao et al., 2003a). This produced four OC fractions (OC1, OC2, OC3, and OC4 at 120, 250, 450, and 550 °C, respectively, in a helium (He) atmosphere), a pyrolyzed carbon fraction (OP, determined when reflected laser light attained its original intensity after oxygen (O2) was added to the combustion atmosphere), and three EC fractions (EC1, EC2, and EC3 at 550, 700, and 800 °C, respectively, in a 2% O2/98% He atmosphere). IMPROVE OC is operationally defined as OC1 + OC2 + OC3 + OC4 + OP, whereas EC is defined as EC1 + EC2 + EC3 − OP. The minimum detection limits for the carbon analysis are 0.30 and 0.15 µgC m−3 for OC and EC, respectively, with a precision better than 10% for total carbon (TC). For the determination of WSOC, five punches (a total area of 2.63 cm2) were taken from each filter and placed into a 15 mL screw-capped vial to which 5 mL of distilled de-ionized water (DDW) was added. Samples were extracted in an ultrasonic water bath for 1 h. Filter debris and suspended insoluble particles were removed from the water extracts using a syringe filter (0.2 µm PTFE membrane), and each filtered extract was transferred into a clean autosampler vial. The filtered extract was analyzed for total organic carbon (TOC) using a Shimadzu TOC-VCPH high-sensitivity Total Carbon Analyzer (Columbia, MD, USA). The minimum detection limit is 0.01 µgC m−3, with a precision of ±5%. Negligible amounts of OC, EC, and WSOC were observed in the field blanks; the data reported here are all corrected for the blanks.
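The IMPROVE operational definitions quoted above map directly onto a short calculation; the sketch below computes OC, EC and the OC/EC ratio from illustrative fraction values (µgC m-3) that are not measurements from this study.

```python
# Sketch: IMPROVE OC/EC from thermal fractions.
# OC = OC1 + OC2 + OC3 + OC4 + OP;  EC = EC1 + EC2 + EC3 - OP.
fractions = {"OC1": 0.8, "OC2": 2.1, "OC3": 1.9, "OC4": 1.2,
             "EC1": 3.5, "EC2": 0.6, "EC3": 0.1, "OP": 0.9}   # illustrative values

oc = sum(fractions[k] for k in ("OC1", "OC2", "OC3", "OC4")) + fractions["OP"]
ec = sum(fractions[k] for k in ("EC1", "EC2", "EC3")) - fractions["OP"]
print(f"OC = {oc:.1f} µgC m-3, EC = {ec:.1f} µgC m-3, OC/EC = {oc / ec:.2f}")
```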
Inorganic compounds analysis

Inorganic ions were determined using an ion chromatograph (IC) gradient pump (LC40) with a conductivity detector (CD25) (Dionex, Sunnyvale, CA, USA). An analytical column (AS11, 4 mm) with a guard column (AG11, 4 mm) and an anion trap column was used for inorganic anion detection, with gradient elution from 0.2 to 5 mM NaOH. A cation analytical column (CS12, 4 mm) with a guard column (CG12, 4 mm) was used to analyze inorganic cations, with an eluent of 20 mM methanesulfonic acid.

Organic acid analysis

The details of sample extraction and derivatization are documented elsewhere (Kawamura, 1993; Kawamura and Yasui, 2005; Ho et al., 2010). An aliquot of the sample was extracted with pure water (10 mL × 3) to isolate low-molecular-weight dicarboxylic acids, ketoacids and α-dicarbonyls as well as free fatty acids. After concentration, the extracts were reacted with 14% BF3/n-butanol at 100 °C to convert the carboxyl groups to butyl esters and the aldehyde groups to dibutoxy acetals. The derivatized total extracts were analyzed with a Hewlett-Packard 6890 GC equipped with an HP-5 fused silica capillary column (25 m × 0.20 mm i.d., 0.5 µm film thickness) and a flame ionization detector. Authentic standards were used for peak identification based on GC retention times. Homologous series of fatty acids were determined as butyl esters (Mochida et al., 2007). Mass spectral confirmation of the compounds was achieved using a ThermoQuest Trace GC/MS (Austin, TX, USA) under similar GC conditions. Recoveries of the dicarboxylic acids, ketocarboxylic acids, α-dicarbonyls, and fatty acids were >70%. The reproducibility errors of the methods for the determination of the organic species were <15% (Kawamura and Yasui, 2005; Mochida et al., 2007). Levels in the field blanks were within 15% of those in actual samples, except for phthalic acid (up to 30%). The data reported here were all corrected against the blanks. The term total quantified water-soluble organic compounds (TQWOC, as carbon) is defined as the sum of diacids, ketoacids and α-dicarbonyls.

Concentrations of OC, EC and WSOC

The spatial and seasonal distributions of OC, EC and WSOC are shown in Table 1. Average OC concentrations ranged from 1.8 ± 0.8 (HT in summer) to 13.9 ± 4.4 µgC m−3 (PU in winter), while average EC concentrations ranged from 0.7 ± 0.2 (HT in summer) to 14.7 ± 4.4 µgC m−3 (PU in summer). Among the four sampling sites, the average WSOC at ZQ had the highest concentration (4.7 ± 2.5 µgC m−3 in winter), while the lowest was found at PU and HT (0.4 ± 0.1 and 0.4 ± 0.2 µgC m−3, respectively, in summer). The OC to EC ratio has been used to infer the origin of carbonaceous particles (Cao et al., 2003b; Novakov et al., 2005). The average OC/EC ratio (0.7 ± 0.4) at the PU (roadside) site was significantly lower than those found at the urban/rural sites, primarily because of the high EC emissions from automobiles. The higher OC/EC ratios (2.5 ± 0.7) at the HT site suggest that the transport of aged aerosol as well as secondary organic aerosol (SOA) was significant. The ratio of WSOC to OC ranged from 0.04 to 0.64, with an average of 0.29 ± 0.16, and TQWOC (as carbon) accounted for 3.4 ± 2.2% of OC and 14.3 ± 10.3% of WSOC. In general, large variations of these species were observed among the four sampling cities in the PRD. The concentrations of total dicarboxylic acids ranged from 99 to 1340 ng m−3, with an average of 438 ± 267 ng m−3 in the PRD. Oxalic acid (C2) was the most abundant dicarboxylic acid, followed by phthalic acid (Ph); these two species accounted for ∼60% of TQWOC on average. The concentrations of oxalic acid ranged from 31 to 1035 ng m−3 (260 ± 213 ng m−3 on average), which is within the range of values we reported previously for the same sampling sites in Hong Kong (Ho et al., 2007). The predominance of oxalic acid was also recognized in previous studies (Ho et al., 2007, 2010). The abundant presence of the cis-configuration diacids (maleic acid and methylmaleic acid) in the urban atmosphere supports the oxidation of aromatic hydrocarbons (benzene and toluene) as a precursor pathway for oxalic acid. Three phthalic acid isomers (o-, m- and p-) were detected. The isomer distribution was characterized by a predominance of phthalic acid, followed by terephthalic acid and isophthalic acid, consistent with distributions reported for aerosols at Mt. Tai, China (Fu et al., 2008) and over the East China Sea (Simoneit et al., 2004). The average concentration of phthalic acid in the PRD (81 ± 74 ng m−3) is ∼2 times higher than that observed in the urban area of Tokyo in summer (29 ng m−3 on average) (Kawamura and Yasui, 2005), but is close to values reported for other Chinese cities (Ho et al., 2007). Phthalic acid can be formed by photodegradation of naphthalene (NAP) and other polycyclic aromatic hydrocarbons (PAHs) in atmospheric aerosols (Bunce et al., 1997; Jang and McDow, 1997). NAP is a ubiquitous pollutant in the atmosphere, and its concentrations in urban areas such as Hong Kong have been reported to be as high as 3.5 µg m−3 (Lee et al., 2001).
The products generated in the reaction of gas-phase NAP with the OH radical have lower vapor pressures than NAP, thus promoting the formation of SOA. Besides the C2-C4 dicarboxylic acids, the concentrations of azelaic acid (C9) were the highest among the straight-chain saturated carboxylic acids in the PRD. Azelaic acid is an oxidation product of unsaturated fatty acids (Kawamura and Gagosian, 1987). The average abundance of azelaic acid was found to be 13.8 ± 9.1 ng m−3 in the PRD, indicating that aerosols of biological origin are exposed to significant atmospheric processing. The total concentrations of ketocarboxylic acids ranged from 0.6 to 207 ng m−3 (43 ± 48 ng m−3 on average). Glyoxylic acid (ωC2) is the dominant ketocarboxylic acid, followed by pyruvic acid (Pyr) and 4-oxobutanoic acid (ωC4). Their concentrations are close to those reported in Tokyo, Japan (Kawamura and Yasui, 2005) and at other urban sites in China (Ho et al., 2007, 2010). The total concentrations of α-dicarbonyls, including glyoxal and methylglyoxal, ranged from 0.2 to 89 ng m−3, with an average of 11 ± 18 ng m−3 in the PRD. These concentrations are consistent with those reported for Hong Kong (Li and Yu, 2005), which did not exceed 100 ng m−3. α-Dicarbonyls have been demonstrated to be precursors of SOA formation via heterogeneous processes (Kroll et al., 2005; Liggio et al., 2005), and photooxidation of glyoxal can lead to the formation of oxalic acid. The higher concentrations of glyoxal and methylglyoxal may therefore indicate a greater potential for subsequent SOA formation in the PRD.

Molecular compositions of fatty acids and benzoic acid

Concentrations of a homologous series of straight-chain saturated fatty acids (C12:0-C25:0), an unsaturated fatty acid (oleic acid, C18:1) and benzoic acid are also shown in Table 1. The total quantified fatty acid concentrations ranged from below the method detection limit (MDL) to 103 ng m−3, with an average of 43.4 ± 27.3 ng m−3. Hexadecanoic acid (C16:0), octadecanoic acid (C18:0) and oleic acid (C18:1) are the three most abundant fatty acids in the PRD, consistent with the data reported by Zheng et al. (2000). The odd-carbon-number fatty acids with C ≥ 19 were below the MDL at the sites, demonstrating a strong even-to-odd predominance for the fatty acids. Both biogenic and anthropogenic sources are essential inputs of fatty acids. Microbial activity is one of the important biogenic sources (Simoneit and Mazurek, 1982). Among anthropogenic sources, C16:0, C18:0 and C18:1 are predominantly emitted from meat cooking, while C16:0 can also be formed directly in fossil fuel combustion (Rogge et al., 1991; Schauer et al., 1999, 2002; Zhao et al., 2007a, b). The high concentration of total fatty acids suggests that both cooking and vehicular emissions, as well as vegetation, are important pollution sources in the PRD. The molecular distributions of the fatty acids are characterized by a strong even-carbon-number predominance with a maximum (Cmax) at hexadecanoic acid (C16:0). Similar distribution patterns of fatty acids have been reported in other urban and rural areas in Hong Kong and China (Hou et al., 2006; Fu et al., 2008).
The dominance of even-carbon-number over odd-carbon-number fatty acids is quantified by the Carbon Preference Index (CPI), calculated as:

CPI (fatty acids) = (sum of even-carbon-number fatty acids) / (sum of odd-carbon-number fatty acids)

The predominance of even-carbon-numbered fatty acids indicates a significant influence from biological sources of aerosols, such as microbial activity and the epicuticular waxes of vascular plants (Simoneit and Mazurek, 1982; Simoneit, 1984). Here the CPI was calculated over the homologous series of fatty acids (C12:0 to C25:0). The CPI values of the fatty acids are 19.4, 13.8, 40.4 and 3.26 in GZ, ZQ, PU and HT, respectively. The CPI values were higher in summer than in winter, indicating that biogenic sources make a larger contribution in hot weather. Octadecenoic acid (oleic acid, C18:1) was detected in most of the urban samples, with concentrations ranging from below the MDL to 26 ng m−3 (4.1 ± 4.7 ng m−3 on average) in the PRD. Automobile engine exhaust is one of the pollution sources of C18:1 (Rogge et al., 1993b). The ratio of C18:1 to C18:0 can be used as an indicator of aerosol aging: a lower ratio is observed in aged aerosols because unsaturated fatty acids can be photochemically degraded, whereas saturated fatty acids are more stable in the atmosphere (Kawamura and Gagosian, 1987; Wang et al., 2006). In the PRD, the average C18:1/C18:0 ratio was 0.53 ± 0.39, suggesting enhanced photochemical degradation of unsaturated fatty acids. Benzoic acid was detected in most of the samples in the PRD, with an average concentration of 165 ± 48 ng m−3. Benzoic acid has been proposed to be a primary pollutant in motor vehicle exhaust (Kawamura et al., 1985; Rogge et al., 1993b) and a secondary product of the photochemical degradation of aromatic hydrocarbons such as toluene emitted by automobiles (Suh et al., 2003). Guo et al. (2004) reported high daily concentrations of toluene in Hong Kong, with a maximum of 53 µg m−3. This suggests that a major portion of the benzoic acid in the PRD aerosols is probably produced by the oxidation of toluene in the atmosphere.

Summer/winter variations and spatial distribution

Summer/winter variations of the organic species were found at the four sampling sites. TQWOC concentrations ranged from 145 to 1340 ng m−3 (544 ng m−3 on average) in winter and from 99 to 665 ng m−3 (318 ng m−3 on average) in summer. These values are similar to those (90-1370 ng m−3, 480 ng m−3 on average) reported for urban Tokyo (Kawamura and Ikushima, 1993), but lower than those reported for other urban cities in China (Ho et al., 2007). Total ketocarboxylic acid concentrations ranged from 4.5 to 178 ng m−3 (43.9 ng m−3 on average) in winter and from 0.59 to 207 ng m−3 (42.3 ng m−3 on average) in summer, while total dicarbonyl concentrations ranged from 1.3 to 88.6 ng m−3 (11.0 ng m−3 on average) in winter and from 0.15 to 68.2 ng m−3 (11.6 ng m−3 on average) in summer. These concentrations are similar to those reported (ketocarboxylic acids = 53 ng m−3; dicarbonyls = 12 ng m−3) at the Gosan site on Jeju Island, South Korea. Total quantified fatty acids ranged from 2.9 to 103 ng m−3 (45.3 ng m−3 on average) in winter and from below the MDL to 96.1 ng m−3 (41.3 ng m−3 on average) in summer, while benzoic acid concentrations ranged from 101 to 256 ng m−3 (157 ng m−3 on average) in winter and from 83.9 to 306 ng m−3 (175 ng m−3 on average) in summer.
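The CPI and the C18:1/C18:0 ageing ratio described earlier in this section reduce to simple sums and quotients; the sketch below illustrates the calculation with placeholder fatty-acid concentrations rather than the values in Table 1.

```python
# Sketch: Carbon Preference Index (even/odd carbon numbers, C12:0-C25:0) and
# the C18:1/C18:0 aging ratio. Concentrations (ng m-3) are illustrative only.
saturated = {12: 3.1, 13: 0.1, 14: 4.8, 15: 0.4, 16: 18.2, 17: 0.3,
             18: 9.5, 19: 0.0, 20: 1.2, 21: 0.0, 22: 0.9, 23: 0.0,
             24: 0.7, 25: 0.0}            # saturated fatty acids keyed by carbon number
oleic_c18_1, stearic_c18_0 = 4.1, 9.5     # unsaturated C18:1 and saturated C18:0

even = sum(v for c, v in saturated.items() if c % 2 == 0)
odd  = sum(v for c, v in saturated.items() if c % 2 == 1)
cpi = even / odd if odd > 0 else float("inf")
print(f"CPI = {cpi:.1f}, C18:1/C18:0 = {oleic_c18_1 / stearic_c18_0:.2f}")
```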
In order to investigate the transport and source regions of air pollutants, 2-day air mass back trajectory analyses were conducted using the NOAA HYSPLIT model (HYbrid Single-Particle Lagrangian Integrated Trajectory, NOAA/ARL) with a starting elevation of 100 m. In winter, the prevailing northeasterly winds travel across South China before reaching Hong Kong (Fig. 2a). Total dicarboxylic acids were most abundant at the downwind locations in Hong Kong (i.e., the PU and HT sites). The poor air quality in Hong Kong in winter is due to the influence of local sources and polluted air masses transported from South China. In contrast, the highest dicarboxylic acid concentrations were found at the downwind locations GZ and ZQ during summer, when the prevailing southwesterly winds bring warm and damp air masses from the South China Sea through Hong Kong to the PRD region (Fig. 2b). Comparisons of winter and summer concentrations for the individual quantified compounds are shown in Table 1, while comparisons of winter and summer concentrations for TQWOC and the sum of fatty acids are shown in Fig. 3. The highest average concentration of TQWOC was found at PU in winter, which is attributable to the mixed contribution of local and regional sources. The concentrations of the organic species in winter were statistically higher at PU than at GZ. The high abundances of organic aerosols at the downwind urban location (PU) are due to emissions from local urban sources and regional long-range transport from the PRD when the air masses came from the north. Conversely, statistically lower concentrations of TQWOC were found at PU and HT in summer because of their upwind locations: the local emission sources were diluted by marine air masses transported from the South China Sea. In contrast, the concentrations of TQWOC at the downwind locations (GZ and ZQ) were 2-3 times higher than those in Hong Kong during summer. Concentrations of total quantified fatty acids at the urban sites (GZ, ZQ and PU) were found to be 3-23 times higher than those at the background site (HT). TQWOC was normalized by OC and WSOC to better illustrate the summer/winter variations (Fig. 3). The relative abundances of TQWOC in WSOC were higher in summer (5.9-50.7%, 20.2 ± 10.3% on average) than in winter (2.2-40.8%, 9.6 ± 7.6% on average), and the relative abundances of TQWOC in OC were also higher in summer (0.9-12.4%, 4.5 ± 2.7% on average) than in winter (1.1-5.7%, 2.5 ± 1.2% on average) except at the PU site, consistent with enhanced secondary production of dicarboxylic acids under warmer weather conditions.

Correlation analysis and the ratios of selected species

Low-molecular-weight dicarboxylic acids can be produced primarily from anthropogenic emissions, but photochemical reactions in the atmosphere also play an important role in their formation. The dicarboxylic acids are generated secondarily in the atmosphere by photochemical chain reactions of unsaturated hydrocarbons or fatty acids as well as their oxidation products (Kawamura and Sakaguchi, 1999; Kawamura et al., 1996b), even though their formation mechanisms are poorly understood. The correlation coefficients of selected species were examined at the different sites in both seasons. Table 2 shows the correlation coefficients of selected dicarboxylic acids, ketocarboxylic acids and α-dicarbonyls. Strong correlations were observed for C2, C3, C4, C9, ωC2 and ωC9 at the downwind locations, that is, GZ and ZQ in summer and PU and HT in winter, respectively.
Other than direct vehicular emission, photochemical processes control the atmospheric concentrations of these species. For instance, ωC2, the most abundant ketocarboxylic acid, can be further oxidized to the C2 dicarboxylic acid; thus, a good correlation was found between ωC2 and C2 (r = 0.93, P < 0.01 in GZ and r = 0.98, P < 0.01 in ZQ in summer; r = 0.42, P < 0.5 in HK and r = 0.89, P < 0.01 in HT in winter). Furthermore, positive correlations of ωC2 with Gly (r = 0.85-0.99, P < 0.01) were observed at the downwind sites, consistent with the atmospheric oxidation process proposed for Gly to ωC2 (Kawamura et al., 1996c). Malonic (C3) and succinic (C4) acids can be oxidized to C2 via the breakdown of intermediates such as ketomalonic acid (kC3) (Kawamura and Ikushima, 1993); thus, strong correlations were observed among C2, C3 and C4 in this study.

[Table 2: Correlation coefficients of selected dicarboxylic acids, ketocarboxylic acids and dicarbonyls at the four sampling sites in the PRD in the winter and summer seasons.]

Other acids such as fumaric (F), maleic (M) and methylmaleic (mM) acids are fairly well correlated with each other (r = 0.61-0.70, P < 0.1). These three dicarboxylic acids are known to be photooxidation products of toluene, benzene, and xylene, and maleic acid (M) can isomerize to trans-fumaric acid (F) by photochemical transformation (Kawamura and Ikushima, 1993). C2, sometimes regarded as a secondary organic aerosol tracer, shows a fair correlation with sulfate (r = 0.66, P < 0.1), which is consistent with previous studies; Yu et al. (2005) argue that in-cloud processing has been established as the dominant formation pathway for oxalate. Kawamura and Ikushima (1993) suggested that the ratio of C3 to C4 can be used as an indicator of enhanced photochemical production of dicarboxylic acids, since C4 can serve as a precursor of C3. In this study, C3/C4 ratios ranged between 0.24 and 5.42 with an average of 1.29, which is higher than the values reported for vehicular emissions (0.3-0.5) (Kawamura and Kaplan, 1987) and for aerosols in Northern China in summer (0.61) and winter (1.12) (Ho et al., 2007). Our findings therefore suggest that, in addition to primary exhaust, secondary formation of particulate dicarboxylic acids by photooxidation reactions is also important in the PRD. The (F + M + mM)/EC ratios at the downwind sampling locations were much higher than those at the upwind sampling sites; the elevated abundance of M, F and mM in aged aerosols indicates that the photooxidation of aromatic compounds to F, M, and mM is important during long-range transport. Good correlations (r = 0.69, P < 0.1 at downwind sites; r = 0.85, P < 0.01 at upwind sites) were observed between TQWOC and WSOC (Fig. 4). These results suggest that dicarboxylic acids, ketocarboxylic acids and dicarbonyls are major water-soluble organic species in the PRD and are linked to photochemical chain reactions. TQWOC contributed more than 15% of WSOC at the downwind sites (except HT), suggesting that these water-soluble organic species are among the major contributors to WSOC in the PRD. This is reasonable because there is sufficient time for the precursors to form secondary organic carbon during long-distance transport to the downwind sampling locations.
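The correlation and ratio diagnostics discussed in this section amount to straightforward per-sample arithmetic; the sketch below uses small simulated concentration arrays purely to illustrate the computations and does not reproduce Table 2 or the reported r values.

```python
# Sketch: Pearson correlation between a precursor (glyoxylic acid, wC2) and
# oxalic acid (C2), plus the C3/C4 photochemical-ageing ratio. Simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
c2  = rng.uniform(50, 800, 8)              # oxalic acid, ng m-3
wc2 = 0.15 * c2 + rng.normal(0, 10, 8)     # glyoxylic acid, made to correlate with C2
c3  = rng.uniform(10, 120, 8)              # malonic acid
c4  = rng.uniform(10, 120, 8)              # succinic acid

r, p = pearsonr(wc2, c2)
print(f"wC2 vs C2: r = {r:.2f}, p = {p:.3f}")
print("C3/C4 per sample:", np.round(c3 / c4, 2), "| mean =", round(float(np.mean(c3 / c4)), 2))
```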
Summary and conclusions

Molecular compositions of low-molecular-weight dicarboxylic acids (C2-C12), ketocarboxylic acids (ωC2-ωC9, pyruvic acid), α-dicarbonyls (C2-C3), fatty acids (C12-C25) and benzoic acid were studied in PM2.5 samples collected from four sampling locations in the PRD during winter and summer to better understand the spatial and seasonal variations of water-soluble organic species. Oxalic acid (C2) was found to be the most abundant diacid, followed by phthalic acid (Ph), similar to other urban cities in China (Ho et al., 2007). TQWOC contributed a significant fraction of WSOC (14.3 ± 10.3%). The fatty acids had an average total concentration of 43.4 ± 27.3 ng m−3 and are derived from both biogenic and anthropogenic sources. The strong even-carbon-number predominance in the fatty acid distributions indicates significant influences from biological sources such as microbial activity and the epicuticular waxes of vascular plants in the PRD region. Octadecenoic acid (oleic acid, C18:1) was detected in most of the urban samples, with concentrations ranging from below the MDL to 26 ng m−3 (4.1 ± 4.7 ng m−3 on average) in the PRD; automobile engine exhaust may be one of the pollution sources of C18:1. The concentrations of the organic species in winter were generally higher at PU and HT than at GZ and ZQ. The high abundances of organic aerosols at the downwind locations (PU and HT) are due to emissions from local urban sources and long-range transport from the PRD when the air masses came from the north. In contrast, lower concentrations of TQWOC were found at the upwind locations PU and HT in summer, when the local emission sources were diluted by marine air masses transported from the South China Sea. However, the relative abundances of TQWOC in OC were higher in summer (0.9-12.4%, 4.5 ± 2.7% on average) than in winter (1.1-5.7%, 2.5 ± 1.2% on average) except at the PU site, consistent with enhanced secondary production of dicarboxylic acids under warmer weather conditions. These spatial and seasonal variations are consistent with photochemical production and subsequent accumulation under different meteorological conditions. Relatively high C3/C4 ratios (0.24-5.42, with an average of 1.29) were found for the molecular distributions of dicarboxylic acids in this study, further suggesting that, in addition to primary vehicular emissions, secondary formation of particulate dicarboxylic acids via photooxidation reactions is important in the PRD. Good correlations (r = 0.69 at downwind sites; r = 0.85 at upwind sites) were observed between TQWOC and WSOC, suggesting that dicarboxylic acids, ketocarboxylic acids and dicarbonyls are major water-soluble organic species in the PRD. TQWOC contributed more than 15% of WSOC at the downwind sites (except HT), suggesting that these water-soluble organic species are among the major contributors to WSOC in the PRD. Given that WSOC is the most abundant component of PM2.5, future work should further speciate and quantify this fraction (e.g., humic-like substances, HULIS) in the PRD and other megacities in China.
Extracting value from total-body PET/CT image data - the emerging role of artificial intelligence

The evolution of Positron Emission Tomography (PET), culminating in the Total-Body PET (TB-PET) system, represents a paradigm shift in medical imaging. This paper explores the transformative role of Artificial Intelligence (AI) in enhancing clinical and research applications of TB-PET imaging. Clinically, TB-PET's superior sensitivity facilitates rapid imaging, low-dose imaging protocols, improved diagnostic capabilities and higher patient comfort. In research, TB-PET shows promise in studying systemic interactions and enhancing our understanding of human physiology and pathophysiology. In parallel, AI's integration into PET imaging workflows, spanning from image acquisition to data analysis, marks a significant development in nuclear medicine. This review delves into the current and potential roles of AI in augmenting TB-PET/CT's functionality and utility. We explore how AI can streamline current PET imaging processes and pioneer new applications, thereby maximising the technology's capabilities. The discussion also addresses necessary steps and considerations for effectively integrating AI into TB-PET/CT research and clinical practice. The paper highlights AI's role in enhancing TB-PET's efficiency and addresses the challenges posed by TB-PET's increased complexity. In conclusion, this exploration emphasises the need for a collaborative approach in the field of medical imaging. We advocate for shared resources and open-source initiatives as crucial steps towards harnessing the full potential of the AI/TB-PET synergy. This collaborative effort is essential for revolutionising medical imaging, ultimately leading to significant advancements in patient care and medical research.

Introduction

Positron Emission Tomography (PET) has evolved from its initial role as a specialised research tool into an indispensable element in clinical diagnostics, thereby significantly enhancing our understanding of physiological and molecular activities within the human body. The integration of PET with computed tomography (PET/CT) ...

The 'value' of TB-PET extends well beyond its technological advancements. Its true value is encapsulated in the flexibility of imaging protocols as well as in novel applications in both clinical settings and research domains. Clinically, TB-PET is notable for its enhanced sensitivity and efficiency, enabling rapid imaging and low-dose protocols, as well as facilitating delayed and same-day dual-tracer imaging [10]. These attributes have already markedly improved diagnostic capabilities and patient experiences. While still in its early stages in clinical research, TB-PET has shown promise in exploring systemic interactions across organ systems and in fostering a more holistic understanding of the human body [11,12].
To date, artificial Intelligence (AI) has already established a significant presence in the realm of radiology, and its impact is increasingly evident in nuclear medicine as well [13].Over the years, AI has successfully integrated into the entire imaging workflow of PET, including aspects such as image acquisition, image reconstruction, data corrections, and data mining.In the context of TB-PET, the application of AI in augmenting TB-PET's value is still in its nascent stages but is showing steady growth.Given that TB-PET generates dense and rich datasets, AI is expected to play a central role in transforming this data into meaningful insights.Moving beyond its previous role as a supplementary technology, AI is emerging as a fundamental component in the future of TB-PET research. This manuscript aims to explore both the current and potential roles of AI in enhancing the functionality and utility of TB-PET.Central to this inquiry is an examination of how AI can not only streamline existing TB-PET procedures but also pioneer previously unexplored applications, fully capitalising on the technology's advanced capabilities.This paper will also discuss the critical steps and factors necessary for the effective integration of AI into TB-PET research. Current applications of total-body PET: enhancing efficiency with AI The clinical community has shown ardent interest in TB-PET, largely because of its greater sensitivity compared to traditional short-axial field-of-view PET/CT systems.This enhanced volume sensitivity facilitates two key imaging options: rapid acquisitions with conventional dose injection and low-dose imaging over standard acquisition times.The first approach allows for swift, comprehensive imaging, essential for a detailed assessment of disease in a single bed position.Conversely, the latter option allows for the distribution of radiation dose over time, enabling longitudinal studies for more detailed disease observation and characterization.Furthermore, TB-PET's heightened sensitivity also permits single-day, dual-tracer imaging [10].This approach entails sequential scanning utilising disparate tracers, thereby substantially optimising patient throughput and scanning logistics.Additionally, the advent of dynamic imaging in a single bed position, supplemented by vendors integrating direct parametric reconstructions into TB-PET systems, provides the opportunity for more nuanced characterization of oncological cases in clinical routine [14,15].But despite the advancements brought forth by TB-PET, it also introduces new challenges in the domain of clinical imaging. Revealing more, demanding greater quantification Initial investigations using the uEXPLORER (United Imaging) with healthy subjects have demonstrated remarkable detail in PET images from extended scan durations (up to 20 min), showcasing clear delineation of vessel walls, spinal cord, and brain structures [5].Subsequent clinical studies employing either the uEXPLORER or Siemens Quadra TB-PET/CT system have further demonstrated improvements in both image quality and lesion quantification [16][17][18][19][20]. Notably, delayed imaging techniques have been observed to enhance the contrast between lesions and their background while simultaneously reducing image noise [19,21].Beyond oncology, the efficacy of ultra-low-dose TB-PET in imaging cardiovascular conditions and autoimmune inflammatory diseases underlines its broad clinical utility [22,23]. 
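Returning to the protocol flexibility outlined at the start of this section, the underlying trade-off can be sketched with simple Poisson counting statistics: image SNR scales roughly with the square root of detected counts, which in turn scale with injected activity, acquisition time and system sensitivity. The gain factor and reference values below are placeholders for illustration, not vendor specifications.

```python
# Back-of-the-envelope count statistics behind 'faster or lower-dose' TB-PET protocols.
# The sensitivity gain G and the reference dose/time are placeholder numbers.
import math

G = 20.0              # hypothetical effective sensitivity gain over a short-axial-FOV system
ref_dose_mbq = 300.0  # hypothetical reference injected activity
ref_time_min = 15.0   # hypothetical reference acquisition time

# Detected counts scale roughly with dose x time x sensitivity, and image SNR ~ sqrt(counts).
ref_counts = ref_dose_mbq * ref_time_min            # arbitrary units (sensitivity gain = 1)
for dose_scale, time_scale in [(1.0, 1.0 / G), (1.0 / G, 1.0), (0.25, 0.5)]:
    counts = G * (ref_dose_mbq * dose_scale) * (ref_time_min * time_scale)
    rel_snr = math.sqrt(counts / ref_counts)
    print(f"dose x{dose_scale:>5.2f}, time x{time_scale:>5.2f} -> SNR ~ {rel_snr:.2f} x reference")
```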
The comprehensive diagnostic capabilities offered by TB-PET, though invaluable, also come with risks of information overload for clinicians tasked with interpreting these complex scans.Traditionally, results have been derived through either visual assessment or labourintensive manual segmentation, approaches that are increasingly inadequate given the breadth and volume of data provided by TB-PET.This is where AI-driven segmentation and detection tools become crucial, offering streamlined processing and interpretation of diverse biomarkers, from tumour loads (Fig. 1) and aortic wall uptake to systemic inflammations. Currently, no single algorithm exists that can match the multi-label classification skills of a clinician across a variety of clinical scenarios.Nonetheless, significant progress has been made in tumour segmentation within 18 F-FDG PET/CT imaging [24][25][26], driven by deep learning frameworks like nnU-Net [27] and MONAI Auto3DSeg [28], and supported by open-source datasets from initiatives such as AUTOPET [29] and HEKTOR [30].Despite these advances, the challenge of algorithmic generalisation beyond specific training datasets persists.This limitation becomes particularly pronounced in total-body PET imaging, which encompasses a diverse range of clinical findings, from various tumour types to pathologies like inflammation and infection, particularly since these may coexist in individual patients.Consequently, developing individual algorithms for each distinct aspect within this domain is impractical, considering the vast diversity of data involved. In response to these challenges, the concept of foundational models offers a promising path forward.The success of vision models such as Meta's 'Segment Anything Model' (SAM) [31] in general applications has inspired similar innovations in medical imaging.The Medical SAM (MedSAM), for instance, demonstrates the potential of these models to segment any specified area in medical imaging based on varied inputs like bounding boxes or points [32].Interestingly, the native SAM is already capable of performing semantic segmentation on 2D PET images (Fig. 2), without any modification.A similar approach for 3D, tailored for PET imaging, could accelerate the analysis of complex TB-PET datasets.A foundational model that is agnostic to tracer or disease, would allow clinicians to efficiently segment and analyse diverse data, greatly facilitating the diagnostic process.The clinical impact of such a model could be profound, potentially automating the detection of key biomarkers such as Total Lesion Glycolysis (TLG) and Metabolic Tumour Volume (MTV), and efficiently quantifying systemic inflammation.This innovation holds the promise of becoming an essential tool in routine clinical practice, enabling more effective and efficient data mining from TB-PET imaging studies. 
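As a concrete illustration of the prompt-based segmentation idea, the sketch below applies the publicly released Segment Anything model to a single 2-D PET slice with one point prompt and then derives MTV- and TLG-style numbers from the returned mask. The checkpoint path, SUV slice and prompt coordinates are hypothetical, the exact predictor API may vary between segment-anything releases, and clinical MTV/TLG are normally defined in 3-D with threshold- or model-based rules, so this shows the interaction pattern rather than a validated tool.

```python
# Minimal sketch: point-prompt segmentation of a 2-D PET slice with Segment Anything,
# followed by MTV/TLG-style numbers from the mask. Paths and coordinates are placeholders.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

suv_slice = np.load("suv_axial_slice.npy")           # hypothetical 2-D SUV array (H x W)
voxel_vol_ml = (4.0 * 4.0 * 3.0) / 1000.0            # 4 x 4 x 3 mm voxels -> 0.048 mL each

# SAM expects an 8-bit RGB image, so rescale the SUV slice into [0, 255].
img = np.clip(suv_slice / suv_slice.max(), 0, 1)
rgb = np.repeat((img * 255).astype(np.uint8)[..., None], 3, axis=2)

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical weights file
predictor = SamPredictor(sam)
predictor.set_image(rgb)

# One foreground click on a suspected lesion (hypothetical pixel coordinates).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[212, 147]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
mask = masks[np.argmax(scores)]                       # keep the highest-scoring proposal

# Illustrative read-outs from the 2-D mask (a real MTV/TLG would be computed in 3-D).
mtv_ml = mask.sum() * voxel_vol_ml
tlg = suv_slice[mask].mean() * mtv_ml
print(f"mask voxels: {mask.sum()}, 'MTV': {mtv_ml:.1f} mL, 'TLG': {tlg:.1f} SUV*mL")
```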
In the radiation drama of total-body PET/CT: CT plays the lead

The substantial sensitivity increase in total-body PET/CT imaging has led to the advent of ultra-low-dose PET techniques, using as little as 1/20th of the standard dose while maintaining clinical image quality [33]. This advancement has broadened the scope for dose-optimised longitudinal imaging, with applications spanning various clinical areas [21,34]. These include the development of new radiopharmaceuticals, monitoring of treatment responses, early detection of malignancy-related vascular complications, immune response imaging in infectious diseases, and paediatric imaging [21,34-38]. The core concept involves splitting the total injected activity across multiple scans so that longitudinal imaging adds no additional radiation exposure. However, it is important to note that in TB-PET/CT imaging, the primary source of radiation exposure is often not the PET component but rather the CT component. This aspect becomes particularly relevant in dual-time-point and dual-tracer studies, where patients undergo two CT scans [39,40]. While CT is indispensable in providing essential anatomical details and enabling attenuation correction in clinical PET/CT studies, in the context of longitudinal imaging either reducing the CT dose or omitting repeated CT scans could be beneficial. Such an approach aligns with the 'As Low As Reasonably Achievable' (ALARA) principle [41], supporting radiation safety initiatives like the 'Image Gently' campaign [42], which advocates for minimising radiation exposure, particularly in paediatric imaging.

Fig. 1 Multifaceted 18F-FDG PET Imaging Analysis of Follicular Lymphoma with AI-Assisted Tumor Detection. Presented here is a comprehensive visualization of follicular lymphoma characterized by diverse 18F-FDG uptake patterns across nodal and extranodal sites. The illustration captures the Metabolic Tumour Volume (MTV) on 18F-FDG PET, delineated using the LION (Lesion Segmentation) algorithm, a native AI tool that identifies lymphoma lesions without the pre-setting of SUV thresholds. This intelligent segmentation excludes physiological uptake in the kidneys, bladder, and brain for enhanced specificity in oncological imaging. Complementing this, the Multi-Organ Objective Segmentation (MOOSE) tool automatically defines organ contours, with a focus on the spleen in this instance. MOOSE enables the precise determination of the fraction of the spleen infiltrated by lymphoma, computed here to be 56% of the total organ volume. The deployment of these AI algorithms for tumour and tissue segmentation provides a robust and reproducible quantitative assessment, offering novel prognostic insights into the extent and aggressiveness of follicular lymphoma.

Researchers have already used AI in tackling this challenge, particularly in the context of attenuation correction. Sari et al.
developed a deep learning-based method to create attenuation maps for PET scans without needing CT scans for correction [43].Specifically, a convolutional neural network (CNN) was used to enhance initial µ-maps generated using a joint activity and attenuation reconstruction algorithm, showing promising results in enabling CT-free attenuation and scatter correction.This approach could be particularly useful in longitudinal imaging studies, where reducing or omitting CT scans can significantly lower patient radiation exposure while maintaining imaging quality.Likewise, Guo, Xue et al [44].address a key challenge in CT-free PET imaging using deep learning (DL): the heterogeneity of tracers and scanners.They simplify this complex issue through domain decomposition, separating the learning process into low-frequency, anatomy-dependent attenuation correction and preserving high-frequency, anatomy-independent textures.This approach, trained with just one tracer on one scanner, showed robustness and effectiveness across various tracers and scanners, enhancing the potential for clinical translation of DL methods in PET imaging. In another study by Hu et al., [45] an ultra-low-dose CT (ULDCT) reconstructed with an artificial intelligence iterative reconstruction algorithm (AIIR) was evaluated for use in 18 F-FDG total-body PET/CT examinations.The study, including both phantom and clinical components, explored the feasibility of ULDCT (10 mAs) reconstructed with AIIR in comparison to standard-dose CT (SDCT) (120 mAs) using hybrid iterative reconstruction (HIR).The results indicated that while ULDCT-AIIR did not completely match the image quality of SDCT-HIR, it significantly reduced image noise and improved the signal-to-noise ratio (SNR), suggesting its potential application under specific circumstances in PET/CT examinations. These advancements in AI for PET imaging not only enhance attenuation correction but also significantly increase the value of total-body PET by facilitating low-dose longitudinal imaging.This progression marks a pivotal step in maximising the clinical utility of PET imaging, offering more frequent and safer imaging options for patient monitoring and disease progression assessment, in line with minimising radiation exposure. Advancing disease characterization amidst growing data complexities The utilisation of dual-tracer PET/CT imaging with 18 F-FDG and 68 Ga-PSMA has been instrumental in enhancing our understanding of tumour biology, specifically in terms of aggressiveness and differentiation.This approach, which combines 18 F-FDG and 68 Ga-PSMA tracers, has been implemented in preliminary studies using conventional PET/CT systems [39].These studies have primarily focused on patient prognostic stratification.However, the integration of this dual tracer method into clinical routine has been limited.The primary challenges include increased radiation exposure and logistical complexities, such as organising scans on two separate days. Recent advancements in TB-PET/CT have shown promising developments in addressing these challenges.Clinically viable protocols have been developed that allow for the sequential imaging of 68 Ga-PSMA and 18 F-FDG on the same day [40].These protocols typically involve administering a standard dose of 68 Ga-PSMA, followed by a low-dose 18 F-FDG scan.Additionally, TB-PET/CT has been explored for dual-tracer PET/CT scans using 18 F-FDG and FAPI tracers, offering insights into the tumour-associated microenvironment [10]. 
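The scheduling constraints of such same-day dual-tracer protocols are largely a matter of physical decay. The sketch below works through the arithmetic for a hypothetical 68Ga-PSMA followed by low-dose 18F-FDG session; the injected activities and the delay are invented numbers, and biological clearance is ignored, so it only illustrates why tracer order and timing matter.

```python
# Illustrative decay arithmetic for same-day dual-tracer scheduling (not a clinical
# protocol): how much of the first tracer's activity remains when the second tracer
# is injected, using A(t) = A0 * 2**(-t / T_half).
T_HALF_GA68 = 67.7    # minutes, 68Ga physical half-life
T_HALF_F18 = 109.8    # minutes, 18F physical half-life

def remaining_fraction(t_min: float, t_half_min: float) -> float:
    """Fraction of activity left after t_min minutes of physical decay."""
    return 2.0 ** (-t_min / t_half_min)

# Hypothetical schedule: 68Ga-PSMA injected at t = 0, low-dose 18F-FDG injected 180 min later.
delay_min = 180.0
ga_injected_mbq = 150.0   # hypothetical injected activities
fdg_injected_mbq = 75.0   # "low-dose" second tracer

ga_residual = ga_injected_mbq * remaining_fraction(delay_min, T_HALF_GA68)
print(f"68Ga-PSMA remaining after {delay_min:.0f} min: {ga_residual:.1f} MBq "
      f"({100 * ga_residual / ga_injected_mbq:.1f}% of injected)")

# Crude estimate of signal mixing at the start of the second acquisition
# (physical decay only; biology, uptake kinetics and positron fraction are ignored).
mix = ga_residual / (ga_residual + fdg_injected_mbq)
print(f"Residual 68Ga contribution to total activity at FDG injection: {100 * mix:.1f}%")
```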
However, it is important to distinguish these practices from multiplexed PET imaging.Multiplexed PET imaging involves administering a mixture of tracers to the patient and employing advanced reconstruction techniques to isolate individual signals.This method offers two significant advantages: firstly, it facilitates a single imaging session without the need for a second CT or subsequent scan.Secondly, it enables voxelwise alignment, providing true spatial multiplexing.This capability is crucial for understanding the spatial heterogeneity of tumours, as the multiplexed image can simultaneously highlight various attributes of the tumour under investigation. The advent of reconstruction-based multiplexing in PET/CT imaging represents a significant advancement in the field, offering a sophisticated approach to capturing complex biological processes in a single imaging session [46].While this technique holds great promise, its implementation in clinical practice is not yet widespread, primarily due to its implementation complexity.However, an equally effective alternative can be achieved through the precise spatial alignment of dual tracer PET/CT images, a method that can be readily applied in current clinical settings with the aid of artificial intelligence (AI). The spatial alignment of two distinct tracers in PET/ CT imaging presents a notable challenge, as these tracers often exhibit varying activity distributions.A promising solution to this problem is aligning the corresponding CT images first and then transferring the derived motion fields to their PET counterparts.This technique, especially relevant in sequential dual-tracer scans performed on the same day, could effectively mimic the outcomes of reconstruction-based multiplexing, thus offering a 'pseudo-multiplexing' effect (Fig. 3). Nevertheless, aligning the CT images is a complex task.Conventional diffeomorphic algorithms, despite their capability to handle large deformations, may not provide the necessary precision.Research has shown that augmenting these algorithms with dense segmentation maps can greatly enhance the accuracy of the motion fields, leading to more accurate alignment [47].In this context, the use of advanced open-source CT organ segmentation tools such as MOOSE [48] and TotalSegmentator [49], which are based on the robust nnU-Net [27] AI framework, becomes crucial.These tools facilitate detailed whole-body segmentations, which, when integrated into the registration process, significantly improve alignment accuracy. For the registration process, one can choose between classical diffeomorphic algorithms [50,51], known for their effectiveness in computational neuroanatomy, or adopt contemporary learning-based algorithms like VoxelMorph [47].Learning-based algorithms (e.g., VoxelMorph) offer a substantial advantage in terms of computational speed, as they eliminate the need for optimization during the inference process, unlike classical diffeomorphic algorithms, which are more computationally demanding.By leveraging AI, particularly in the alignment of dual-tracer TB-PET/CT images, we can approach the intricacies of tumour heterogeneity with a level of precision and detail akin to that achieved in multiplexing techniques used in immunohistochemistry. 
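A skeleton of the 'align the CTs, then warp the PET' workflow is given below using SimpleITK. The demons filter stands in for whichever diffeomorphic or learning-based method (e.g., VoxelMorph) would be used in practice, no segmentation guidance is included, and all file names are placeholders, so this is an outline of the data flow rather than a validated registration pipeline.

```python
# Skeleton of CT-driven 'pseudo-multiplexing': register CT2 -> CT1, then apply the
# resulting deformation to PET2 so both tracers share a common spatial frame.
# File names are placeholders; the demons filter is a stand-in for any diffeomorphic
# or learning-based registration and is used here without segmentation guidance.
import SimpleITK as sitk

ct1 = sitk.ReadImage("ct_psma.nii.gz", sitk.sitkFloat32)   # reference CT (scan 1)
ct2 = sitk.ReadImage("ct_fdg.nii.gz", sitk.sitkFloat32)    # moving CT (scan 2)
pet2 = sitk.ReadImage("pet_fdg.nii.gz", sitk.sitkFloat32)  # PET acquired with CT2

# Resample the moving CT onto the reference grid before deformable registration.
ct2_rs = sitk.Resample(ct2, ct1, sitk.Transform(), sitk.sitkLinear, -1000.0)

demons = sitk.DiffeomorphicDemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(2.0)                  # smoothing of the displacement field
displacement = demons.Execute(ct1, ct2_rs)         # dense field mapping CT1 space -> CT2 space

# Transfer the CT-derived deformation to the PET volume of the second tracer.
transform = sitk.DisplacementFieldTransform(sitk.Cast(displacement, sitk.sitkVectorFloat64))
pet2_aligned = sitk.Resample(pet2, ct1, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(pet2_aligned, "pet_fdg_in_psma_space.nii.gz")

# pet1 (68Ga-PSMA) and pet2_aligned (18F-FDG) can now be fused voxel by voxel,
# e.g. assigned to separate colour channels for a composite 'multiplexed' view.
```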
Another emerging area of interest in TB-PET imaging is dynamic imaging, adding a temporal domain to the rich 3-D information intrinsic to PET.Recent research indicates that by analysing the raw time activity curves (TACs) of tumour regions, it is possible to assess the spatial heterogeneity within tumours [20,52].At the same time, kinetic modelling has garnered considerable attention as well.Research groups are exploring its applicability in oncology, particularly in ways to abbreviate the scan duration required for kinetic analysis [53,54].The objective is to extract kinetic parameters that provide a more nuanced understanding of the tumour under investigation.Nonetheless, these dynamic PET imaging techniques present several challenges.Characterising TACs requires precise tumour segmentation, and kinetic modelling depends on segmenting specific regions to determine the input function derived from imaging data (IDIF).Both tasks are labour-intensive and demand high precision.In this context, whole-body AI-based organ segmentation tools like MOOSE [48] and TotalSegmentator [49] prove invaluable.They facilitate the segmentation process for IDIF as both cover major input function regions, thereby streamlining kinetic modelling (Fig. 4).Employing a foundational AI model for tumour segmentation, as previously discussed, can significantly ease the extraction and analysis of tumour TACs.Integrating AI into TB-PET imaging workflows is essential to fully leverage dynamic imaging's potential.Automating these processes reduces the manual and cognitive burden on clinicians, allowing them to concentrate more on interpretation and clinical decision-making. Comprehensive health assessment with total-body PET: a unified diagnostic approach The capability of TB-PET/CT to simultaneously image the entire human body, combined with its high spatial and temporal resolution, presents certain unique opportunities.Recent research has demonstrated the potential of conducting sub-second image reconstructions with TB-PET/CT, closely mirroring the temporal resolution achieved by functional Magnetic Resonance Imaging (fMRI) [55].These advanced capabilities in TB-PET/CT could herald a paradigm shift from traditional imaging (often colloquially referred to as "lumpology [56]") to a renewed focus on PET's fundamental strengths in assessing physiological and pathophysiological functions and processes.This capacity for high-temporal dynamic imaging across all organs promises to deliver a wealth of clinically relevant data, surpassing mere identification of pathologies and encompassing a comprehensive 68 Ga-PSMA-positive/ 18 F-FDG-negative prostate cancer (refer to PET/CT axial slice) highlighted in green and 18 F-FDG-avid metastatic melanoma (refer to coronal slice with prominent 18 F-FDG uptake and mild 68 Ga-PSMA uptake), highlighted with red arrows.Separate scans using 18 F-FDG and 68 Ga-PSMA tracers reveal distinct metabolic and molecular patterns corresponding to melanoma and prostate cancer, respectively.The composite image results from diffeomorphic algorithmic synthesis, assigning discrete chromatic channels-red for 18 F-FDG and green for 68 Ga-PSMA-to each radiotracer, thereby creating a composite image.This multiplexed image merges the two separate datasets into a single, integrated visual field.Manifestations of sole 18 F-FDG uptake are visualized in red, 68 Ga-PSMA uptake in green, and concomitant tracer accumulation is rendered in shades of yellow, indicating co-expression. 
18F-FDG dominant malignancies in the multiplexed images are highlighted with red arrows, while 68 Ga-PSMA dominant malignancies are highlighted with green arrows.This technique of image multiplexing, akin to multiplex histopathology, allows for a nuanced characterization of tumoural heterogeneity, providing an intuitive and single-image synopsis of the distinct pathophysiological processes at play understanding of bodily functions.Measurements including first-pass cardiac ejection fractions, as well as pulmonary and renal perfusion assessments, may be derived through the analysis of finely sampled PET frames followed by voxel-level data evaluation, thereby providing an extensive assessment of health [37,57].In such studies, the definition of volume and motion correction is going to be crucial post-processing steps, essential for the generation of data that is both quantitative and useful.Furthermore, as previously discussed, the availability of various AI-based organ segmentation algorithms could prove to be indispensable in the facilitation of such research endeavours.Addressing motion correction in total-body PET presents a complex challenge, given the multifaceted nature of motion encountered in such settings.This includes gross body motion, respiratory and cardiac movements, as well as abdominal motion, with the motion profile varying from rigid structures like the brain to more deformable ones like the gut and bladder.Developing a motion compensation tool that effectively manages this range of motion profiles across various tracers poses significant difficulty. Recent research has explored the application of diffeomorphic registration for total-body motion correction.For instance, Sun et al. [58] utilised Symmetric Normalisation [50] for whole-body motion correction in 18 F-FDG PET/CT scans.In a similar vein, we introduced FALCON [59], a diffeomorphic algorithm optimised for speed and applied across various tracers to correct for total-body motion, albeit compromising the symmetric property of the algorithm for enhanced computational efficiency [51].Notably, both these algorithms demonstrate limitations in correcting early frames (less than 2 min postinjection), where tracer dynamics undergo rapid changes critical for clinical perfusion parameters.The primary challenge here lies in the disparity of image content in these early frames, attributable to the swiftly changing tracer kinetics, which complicates the task of any correction algorithm.To address this specific issue, the use of conditional Generative Adversarial Networks (GANs) has been proposed and effectively implemented in both brain [60] and total-body studies [61].The objective of these networks is to create synthetic images resembling those of later frames from the early imaging data.However, a hurdle in this approach is the limited generalizability across different tracers, necessitating specific training for each type of tracer used. With the emergence of generative AI models, such as diffusion models [62], there is potential to develop a more universal model capable of generalising across multiple tracers.Such a model could theoretically create a pseudo-late-frame image from early-frame data or transform all images into an intermediate synthetic form to facilitate motion correction, potentially overcoming the current limitations in early frame motion correction. 
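Tying together the segmentation-driven TAC extraction and kinetic-modelling steps discussed above, the sketch below pulls organ and lesion time-activity curves out of a 4-D dynamic series using pre-computed masks (for instance from MOOSE or TotalSegmentator) and fits a simple Patlak slope against an image-derived input function. Array names, frame timing and the choice of Patlak analysis are illustrative assumptions rather than a recommended protocol.

```python
# Minimal sketch: organ TACs from a segmented dynamic PET series plus a Patlak fit
# against an image-derived input function (IDIF). Shapes, frame times and the choice
# of Patlak analysis are illustrative assumptions, not a validated pipeline.
import numpy as np

dyn = np.load("dynamic_pet.npy")          # hypothetical 4-D array: (frames, z, y, x), kBq/mL
frame_mid_min = np.load("frame_mid.npy")  # mid-frame times in minutes, shape (frames,)
aorta_mask = np.load("aorta_mask.npy")    # e.g. from MOOSE/TotalSegmentator, bool (z, y, x)
tumour_mask = np.load("tumour_mask.npy")  # lesion mask, bool (z, y, x)

def tac(dyn4d: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mean activity concentration inside a mask for every frame."""
    return np.array([frame[mask].mean() for frame in dyn4d])

c_plasma = tac(dyn, aorta_mask)           # image-derived input function C_p(t)
c_tissue = tac(dyn, tumour_mask)          # tissue curve C_t(t)

# Patlak linearisation for irreversible uptake: C_t/C_p = Ki * (int C_p dt)/C_p + V0,
# fitted only after pseudo-equilibrium (t* ~ 20 min used here as an example).
cum_cp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(frame_mid_min))]
)
x = cum_cp / c_plasma                     # "Patlak time"
y = c_tissue / c_plasma
late = frame_mid_min >= 20.0
ki, v0 = np.polyfit(x[late], y[late], 1)
print(f"Patlak Ki = {ki:.4f} mL/min/mL, intercept V0 = {v0:.3f}")
```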
Total-body PET with AI: a window into understanding normal physiology and health Originally, PET imaging predominantly served as a tool for exploring physiological processes prior to its evolution into a clinical diagnostic instrument [63].Concerns regarding radiation exposure have steered the medical community towards alternative modalities, notably MRI.However, the advent of TB-PET, coupled with advancements in minimising CT radiation exposure, has paved the way for ultra-low dose imaging.This innovation holds the promise of safely extending PET imaging applications to healthy populations, thereby broadening its utility in understanding normal physiology and non-malignant disease processes. Comprehending normal physiology is paramount for the accurate interpretation of disease-related anomalies.Within the field of oncology, PET imaging has predominantly concentrated on tumours and their immediate surroundings.Nevertheless, the wider scientific consensus views cancer as a systemic condition, thus underscoring the need to extend focus beyond just the tumour's locale.Observing the macroenvironment, particularly organ systems not directly compromised by tumour invasion, is crucial for a holistic understanding of cancer's and therapies systemic and toxic effects [64,65].This approach is not only pertinent in oncology but may also hold significant relevance in elucidating musculoskeletal disorders and metabolic diseases, where systemic factors play a key role [66,67]. The creation of a 'normative database' derived from healthy individuals is instrumental in facilitating the rapid systemic analysis of pathological cases.The notion of a normative database is well-established in medicine, providing clinicians with a benchmark of 'normalcy' for various parameters.This concept has been extensively applied in the realm of neuroimaging, where it has become a cornerstone in the identification of pathological conditions [68][69][70][71].Extending this approach to totalbody PET would allow for a similar utility in detecting systemic anomalies, offering a comprehensive reference point for distinguishing between normal and abnormal physiological states across the entire body. Initial research in the realm of whole-body MRI, particularly under the scope of Imiomics [72], has laid the groundwork for establishing a proof-of-concept normative database.This database focused on quantifying average distributions of adipose and lean tissue within an asymptomatic population.Participants for this study were randomly selected from the general population, which meant that not all individuals were in perfect health.In this sample, 2% had diagnosed diabetes, 8% were known to have hypertension, and 4% were undergoing statin therapy.However, none of the participants suffered from severe diseases, such as cancer, myocardial infarction, stroke, heart failure, or chronic obstructive lung disease.Though not representative of a completely healthy cohort, this initial effort has laid the groundwork for developing a comprehensive total-body normative database, a crucial step in expanding the potential of PET imaging in systemic health assessment. 
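The practical payoff of such a normative database is the ability to score an individual scan voxel by voxel against the cohort. A minimal sketch of that comparison follows; it assumes the patient volume has already been deformably mapped into the atlas space (the alignment step discussed next) and that the atlas stores voxelwise mean and standard deviation maps for the matching demographic stratum. All file names and the z-threshold are placeholders.

```python
# Minimal sketch: voxelwise z-scoring of a patient PET against a normative atlas.
# Assumes the patient volume is already warped into atlas space and that the atlas
# provides mean and standard deviation maps for the matching demographic stratum.
import numpy as np

patient = np.load("patient_suv_in_atlas_space.npy")   # hypothetical 3-D SUV volume
atlas_mean = np.load("atlas_mean_suv.npy")            # voxelwise normative mean
atlas_std = np.load("atlas_std_suv.npy")              # voxelwise normative std

eps = 1e-6                                             # guard against zero-variance voxels
zmap = (patient - atlas_mean) / (atlas_std + eps)

# Flag voxels that deviate strongly from normalcy (threshold is an arbitrary example).
hot = zmap > 3.0
print(f"voxels with z > 3: {hot.sum()} ({100.0 * hot.sum() / hot.size:.2f}% of the volume)")

# A simple per-organ summary, given an organ label map in the same space (0 = background).
labels = np.load("organ_labels.npy")
for organ_id in np.unique(labels)[1:]:
    region = labels == organ_id
    print(f"organ {organ_id}: mean z = {zmap[region].mean():.2f}, "
          f"max z = {zmap[region].max():.2f}")
```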
Aligning total-body PET images across individuals presents a significant challenge, particularly when compared to MRI.This difficulty arises from PET's relatively lower resolution and variable tracer uptake characteristics.Nevertheless, it is feasible to utilise the accompanying CT images to facilitate alignment, subsequently transferring the deformable fields to their PET counterparts.In the process of constructing a normative database, the deformable alignment of healthy control images is a key step in creating a standard atlas of healthy individuals. During this alignment process, two elements are of paramount importance: firstly, the alignment between subjects, and secondly, the segmentation that supports and enhances this alignment (Fig. 5).Recent advancements in AI, as discussed in the context of multiplexing, can greatly expedite this process.Tools producing dense segmentation maps, along with learning-based diffeomorphic methods like VoxelMorph [47], have the potential to significantly streamline the creation of normative databases.However, it is crucial to consider various confounding factors, such as age, body mass index (BMI), and gender, when developing these databases.Careful accounting for these variables is essential to ensure that the normative database accurately reflects the diversity and range of the healthy population [73].This careful consideration is vital for the database to be a reliable and representative tool in clinical and research settings. The creation of a normative database via TB-PET not only paves the way for high-throughput screening in at-risk populations like lung cancer (Fig. 6) or breast cancer but also presents the opportunity to explore comprehensive assessments of physiological health and ageing effects throughout the body.Notably, achieving a crucial milestone in this endeavour is the reduction of the effective radiation dose to patients to levels below 1 mSv per scan.While 18 FDG remains the clinical tracerof-choice for many clinical applications, generating similar normative databases for additional tracers that are now routinely used in clinical practice, including PSMA and somatostatin receptor ligands, and emerging tracers, such as FAPI agents, will also be beneficial. Making sense of systemic information provided by totalbody PET: AI In previous sections, we have established that TB-PET/CT generates a comprehensive array of multidimensional systemic data.The extraction of meaningful insights from such data necessitates the adoption of robust analytical techniques, among which AI stands out as particularly suited for this task.Recent research initiatives have focused on delving into this multidimensional data to understand systemic effects across both healthy and pathological cohorts.These studies primarily utilise classical correlation analysis methods, which involve extracting organ-specific Standardised Uptake Values (SUVs) and generating correlation heatmaps within the cohorts under study.The fundamental aim is to identify variations in the resulting correlation maps [8,11,74,75]. A notable advancement in this field was introduced by Sun et al. 
[11]., who proposed a novel methodology centred on the identification of individual deviations from normative patterns.This is achieved through a perturbation-based approach, where the baseline healthy correlation network is disrupted by integrating pathological cases, thereby facilitating the detection of individual anomalies.However, it is crucial to recognize that these studies typically involve relatively small sample sizes.Moreover, it is imperative to understand that these are correlation-focused studies that do not inherently imply causality.In the context of analysing comprehensive datasets derived from TB-PET/CT scans, a multitude of methodological approaches are available to researchers.Key among these is the utilisation of robust computational frameworks such as scikit-learn [76], which facilitate the compilation of an extensive array of parameters from total-body datasets.These parameters include SUVs, kinetic parameters, and additional clinical data, such as volumetric measurements obtained from CT scans.Subsequent to parameter extraction, various machine learning algorithms can be employed to effectively differentiate between distinct groups, thus framing this analysis as a classification problem. Alongside these conventional methodologies, the emergence of Automated Machine Learning (AutoML) represents a significant advancement in the field of medical image analysis.AutoML particularly enhances the automatic analysis of tabulated data from TB-PET scans.By automating critical tasks like model selection, hyperparameter tuning, and validation, AutoML renders advanced analytical techniques more accessible and efficient.Prominent frameworks in this domain include Google's AutoML [77], H2O AutoML [78], and TPOT (Tree-based Pipeline Optimization Tool) [79].Google's AutoML is notable for its user-friendly interface and powerful algorithms that adeptly handle complex data structures, making them suitable for researchers with varying levels of programming expertise.H2O AutoML is acclaimed for its efficiency in rapidly producing highquality models.Conversely, TPOT leverages a genetic programming approach to optimise machine learning pipelines, ensuring optimal model adaptation for specific datasets. The incorporation of these AutoML frameworks into the analysis of total-body PET data substantially streamlines the identification of relevant features and patterns.By automating the more labour-intensive aspects of model building, researchers can devote greater attention to interpreting results and extracting clinically relevant insights.Additionally, the iterative model refinement and adaptability to new data inherent in AutoML, ensure that analyses remain at the forefront of medical dataset evolution. To further enhance the transparency and interpretability of these algorithms, the application of explainable AI methods is advantageous.Techniques such as SHAP [80] (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) [81] elucidate how individual features contribute to specific algorithmic decisions.This clarity is instrumental in elevating the interpretability of the results. 
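In practice, much of this analysis can be prototyped with a few lines of scikit-learn. The sketch below builds an organ-level SUV feature table, inspects the organ-to-organ correlation structure, frames the healthy-versus-pathological comparison as a cross-validated classification problem, and uses permutation importance as a simple, model-agnostic stand-in for SHAP- or LIME-style explanations. The table, column names and the random-forest choice are illustrative assumptions.

```python
# Minimal sketch: organ-level SUV features -> correlation structure and a simple
# cross-validated classifier. The feature table and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score

df = pd.read_csv("organ_suv_table.csv")                      # one row per subject (hypothetical)
features = [c for c in df.columns if c.startswith("suv_")]   # e.g. suv_liver, suv_spleen, ...
X, y = df[features].values, df["group"].values               # group: healthy vs pathological

# Organ-to-organ correlation structure within the cohort (basis of correlation heatmaps).
corr = np.corrcoef(X, rowvar=False)
print("organ-organ correlation matrix shape:", corr.shape)

# Frame the group comparison as a classification problem with cross-validation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# A model-agnostic look at which organs drive the separation (a cheap stand-in for
# SHAP/LIME style explanations).
clf.fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, val in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])[:5]:
    print(f"{name}: {val:.3f}")
```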
However, when employing these machine learning techniques, it is crucial to exercise caution to circumvent issues like overfitting and underfitting.A commonly overlooked yet critical aspect is the sample-to-feature ratio [82,83].Maintaining a minimum ratio of 10:1 is widely recommended, serving as a reasonable benchmark to ensure the robustness and reliability of the model's performance. The recent advancements in deep learning open promising avenues for mining TB-PET datasets, especially through the creation of embeddings [84].Utilising deep learning architectures like convolutional neural networks (CNNs) [85] or Vision Transformers [86], three-dimensional PET images can be transformed into high-dimensional vector embeddings.These embeddings have the potential to concisely capture the comprehensive physiological and metabolic profiles of patients, offering a distilled yet information-rich representation of the original dataset. The role of vector databases [87] in this context is crucial and deserves emphasis.Traditional relational databases are not optimised for handling the highdimensional data typical of deep learning outputs.Vector databases, on the other hand, are specifically designed to store, index, and retrieve high-dimensional vectors efficiently.This makes them uniquely suited for dealing with the kind of complex, feature-rich data produced by deep learning models applied to TB-PET datasets.Their ability to perform similarity searches and clustering at scale adds significant value, allowing researchers to quickly and accurately group patients into meaningful categories, such as responders and non-responders to treatments like radioligand therapy and immunotherapy. Incorporating vector databases into this process facilitates the handling and analysis of these complex embeddings, enhancing the potential of deep learning techniques to discern subtle patterns and correlations within the data.This synergy between deep learning and vector databases can significantly augment the precision and effectiveness of treatments, leading to more personalised therapeutic strategies. Charting the future of total-body PET with AI -a call for collaborative innovation As one reflects on the advancements in TB-PET and its clinical applications, it becomes evident that AI stands at the forefront of enhancing this field.TB-PET's increased sensitivity and comprehensive diagnostic capabilities, though invaluable, introduce the challenge of managing and interpreting vast amounts of complex data.Here, AI emerges not just as a tool but as a pivotal catalyst in transforming TB-PET from a diagnostic modality to a comprehensive solution for personalised medicine. The clinical community's growing interest in TB-PET is primarily driven by its capability for rapid and low-dose imaging.This advancement, however, brings to the fore the need for sophisticated analytical methods capable of managing the resultant data deluge.In this regard, AIdriven tools for segmentation and detection are becoming increasingly crucial.These tools not only streamline the processing of complex datasets but also enable the nuanced interpretation of diverse biomarkers, thus enhancing the overall utility of TB-PET. 
Yet, as we advance in integrating AI into TB-PET, challenges persist, notably in ensuring the broad applicability of algorithms across varied clinical scenarios.The development of foundational models, inspired by their success in general vision tasks, is a promising avenue for overcoming these challenges.Such models, adept at segmenting any specified area within medical imaging datasets, hold the potential to revolutionise TB-PET analysis by automating essential processes and improving diagnostic precision.However, the realisation of these foundational models, like MedSAM in radiology, is contingent on the availability of large-scale, diverse datasets and considerable computational resources.The PET imaging field currently faces a gap in available data volumes, with significant initiatives like AUTOPET [29] and HEKTOR [88] providing only a limited number of images.This situation underscores an urgent need within the PET community for a collective effort in data pooling.The prevailing concerns about data protection hindering the sharing of PET images must be re-examined.Given that high-resolution modalities such as CT have been successfully open-sourced, PET imaging should also venture down this path.It is imperative for the community to not only advocate for, but also actively pursue the open source availability of PET data subject to patient privacy regulations that operate in certain jurisdictions. The creation of a comprehensive normative database from TB-PET scans further exemplifies the need for extensive data pooling.Given the vast variability in human physiology, constructing such a database requires data from diverse and large population samples, something that single sites cannot achieve alone.A normative database, crucial for distinguishing between normal and pathological states, would benefit immensely from a collaborative approach to data collection and sharing.Emulating the open-source successes of radiology could significantly accelerate advancements in TB-PET analysis, paving the way for more personalised and effective patient care. Building on the momentum of integrating AI into TB-PET and addressing the challenges of data availability and algorithmic applicability, it is essential to also consider the role of AI in enhancing the safety and efficiency of PET/CT imaging.This is particularly pertinent in scenarios like low-dose longitudinal studies, paediatric imaging, or screening, where optimising the CT component of PET/CT imaging becomes crucial.While TB-PET's increased sensitivity enables inherently low-dose imaging, the radiation dose primarily stems from the CT component, necessitating careful consideration in repeated imaging scenarios.Advancements in AI offer potential solutions to reduce, or in some cases, eliminate the need for CT scans even though CT itself already has a role in some screening approaches and likely itself to provide complementary diagnostic information.Consequently, this approach requires a balanced perspective.CT scans provide essential anatomical details vital for various TB-PET data mining applications, including organ segmentation, multiplexing, and creating normative databases.These tasks depend heavily on CT as it is challenging to work on PET data due to image variability introduced by the tracers.It is crucial to reduce the dose while preserving the critical diagnostic and analytical value that CT imaging brings to TB-PET. 
In advancing TB-PET data-mining, the role of Automated Machine Learning (AutoML) is pivotal.AutoML streamlines the process of applying ML algorithms, making it more accessible and efficient.It automates crucial tasks like model selection, hyperparameter tuning, and validation, which are often barriers to effective data analysis in medical imaging.This automation is particularly beneficial in TB-PET, where the data's multidimensionality can be overwhelming and nuanced.With AutoML and explainable AI paradigms, researchers and clinicians can more readily analyse and interpret complex datasets.Importantly, the progress and acceleration of AI and ML in TB-PET should take cues from the broader AI community, especially regarding the open-source movement.The rapid advancement in AI fields is partly attributed to the community's commitment to open-sourcing and collaborative development, avoiding the pitfalls of redundant efforts.A prime example is nnU-Net, an open-source framework that has standardised neural network applications in medical imaging.Before nnU-Net, numerous variations of U-Net architectures proliferated, but its introduction streamlined the development process, demonstrating that open-source collaboration can lead to more efficient and effective solutions. Building on our previous discussions, it is evident there is a pressing need for a unified community initiative to consolidate resources, software, and data in TB-PET and AI.Currently, these elements are fragmented, impeding the pace of progress.Platforms like enhance.pet(https://enhance.pet)serve as a promising model, offering a centralised web hub for data, software, and educational resources.Similarly, the National PET Imaging Platform (NPIP, https://npip.org.uk/)represents another step in the right direction, aiming to create a cohesive framework for advancing PET imaging through shared resources and collective expertise. Conclusion In conclusion, this manuscript has comprehensively explored the transformative role of AI in elevating the capabilities of TB-PET/CT imaging.As we have elucidated, the integration of AI not only augments the efficiency of TB-PET but also unlocks novel applications in both clinical and research settings.However, the journey towards fully realising the potential of AI in TB-PET is not just a technological challenge but a collaborative endeavour.It calls for the dismantling of data silos, the creation of open-source tools, and the establishment of platforms for knowledge and resource exchange. Fig. 2 Fig. 2 Comparative Visualization of 'Segment Anything Model' Performance on an Unseen 2D PET Image.Panel A displays the original PET image slice.Panel B illustrates the 'Segment Anything Model' executing point-based segmentation, pinpointing a singular region of interest (ROI).Panel C demonstrates the model applying a bounding box approach to encapsulate the ROI within a minimal rectangular boundary.Panel D presents the multi-mask segmentation capability of the model, initiated from a point-based prompt to discern multiple areas with varying intensities.Panel E showcases the fully autonomous segmentation proficiency of the 'Segment Anything Model, ' delineating multiple ROIs without any manual prompts Fig. 3 Fig. 
3 Characterising Oncological Heterogeneity through Multiplexed PET Imaging.The figure demonstrates the potential of multiplexed PET imaging technique on a patient with coexistent malignancies:68 Ga-PSMA-positive/ 18 F-FDG-negative prostate cancer (refer to PET/CT axial slice) highlighted in green and18 F-FDG-avid metastatic melanoma (refer to coronal slice with prominent 18 F-FDG uptake and mild68 Ga-PSMA uptake), highlighted with red arrows.Separate scans using18 F-FDG and68 Ga-PSMA tracers reveal distinct metabolic and molecular patterns corresponding to melanoma and prostate cancer, respectively.The composite image results from diffeomorphic algorithmic synthesis, assigning discrete chromatic channels-red for18 F-FDG and green for68 Ga-PSMA-to each radiotracer, thereby creating a composite image.This multiplexed image merges the two separate datasets into a single, integrated visual field.Manifestations of sole18 F-FDG uptake are visualized in red,68 Ga-PSMA uptake in green, and concomitant tracer accumulation is rendered in shades of yellow, indicating co-expression.18F-FDG dominant malignancies in the multiplexed images are highlighted with red arrows, while Fig. 4 Fig. 4 AI-Driven Multi-Organ Segmentation (MOOSE) for kinetic analysis in dynamic PET.This figure demonstrates an AI-assisted segmentation approach applied to dynamic PET/CT imaging for the extraction of time-activity curves (TACs) across multiple organs.Central is a PET image overlaid with segmented organs; surrounding it are graphs depicting TACs for the brain, left ventricle, aorta, lung, liver, pancreas, spleen, and skeleton.These curves are derived from dynamic PET scans post-segmentation and are instrumental in streamlining kinetic modeling and facilitating absolute quantification of tracer uptake, thus enhancing the precision of metabolic studies Fig. 5 Fig. 5 Methodology for Normative Database Construction from PET/CT Data.Panel [A] depicts the sequential process for establishing a normative database derived from PET/CT data.The protocol initiates with a TB-PET examination, followed by patient stratification according to BMI, age, and gender.The subsequent phase involves deriving detailed organ segmentations from CT scans.These segmentations then guide diffeomorphic registrations to align subjects across diverse cohorts.The derived deformation fields from the CT alignments are applied to the corresponding PET data, culminating in a comprehensive normative database.Panels [B], [C], and [D] display representative maximum intensity projections PET images from the database, segmented by cohort characteristics.Panel [B] exemplifies a Japanese male cohort with a BMI range of 20.0-24.9, while Panels [C] and [D] represent European cohorts, male and female, respectively, both within the same BMI range.The noticeable radiotracer uptake observed in the right arm of the normative template image in panel [C] is an artifact attributable to the initial administration site of the radiopharmaceutical.Such localized hyperactivity represents a procedural remnant rather than pathological significance.Each panel provides the cohort's demographic and sample size data, reflecting the database's population diversity Fig. 6 Fig. 
6 Metabolic aberration Analysis Using PET Normative Database.This figure presents a method for evaluating patient PET scans against a PET normative database.The first panel shows the averaged data from a healthy cohort forming the normative PET database, therefore providing a reference for typical tracer distribution.The second panel displays a lung cancer patient's PET image, where abnormalities are indicated with arrows.The patient's PET image is diffeomorphically aligned with the Normative database, and the deviations from the normalcy are calculated as z-maps.In the third panel, the patient's PET data is overlaid with a z-map, highlighting deviations from the normative model.The fourth panel further overlays the z-map onto the patient's CT image, offering anatomical context to the functional PET data.The colour scale on the far right indicates standard deviations from the normative mean, with warmer colours denoting higher deviations
Improving Efficacy of Beauveria bassiana against Stored Grain Beetles with a Synergistic Co-Formulant The potential of a dry powder co-formulant, kaolin, to improve the control of storage beetles by the entomopathogenic fungus Beauveria bassiana, isolate IMI389521, was investigated. The response of Oryzaephilus surinamensis adults to the fungus when applied to wheat at 1 × 1010 conidia per kg with and without kaolin at 1.74 g per kg wheat was assessed. Addition of kaolin increased control from 46% to 88% at day 7 and from 81% to 99% at day 14 post-treatment. Following this the dose response of O. surinamensis and Tribolium confusum to both kaolin and the fungus was investigated. Synergistic effects were evident against O. surinamensis at ≥0.96 g of kaolin per kg of wheat when combined with the fungus at all concentrations tested. For T. confusum, adult mortality did not exceed 55%, however, the larvae were extremely susceptible with almost complete suppression of adult emergence at the lowest fungal rate tested even without the addition of kaolin. Finally, the dose response of Sitophilus granarius to the fungus at 15 and 25 °C, with and without kaolin at 1 g per kg of wheat, was examined. Improvements in efficacy were achieved by including kaolin at every fungal rate tested and by increasing the temperature. Kaolin by itself was not effective, only when combined with the fungus was an effect observed, indicating that kaolin was having a synergistic effect on the fungus. Introduction Stored-product insects and mites cause serious post-harvest losses, estimated to range from 9% in developed countries to 20% or more in developing countries [1]. Pest infestations reduce the value of the commodity by contaminating it with insect fragments, faeces, webbing, and metabolic by-products [2]. The majority of grain stores and exporters have zero tolerance for pests and pesticide residues. Concerns over insecticide resistance, residues, and environmental impacts together with changes in legislation have led to a decline in available pesticides to protect stored food. Alternative pest control products are required to maintain levels of food production and anticipate future demand. The use of entomopathogenic fungi such as Beauveria bassiana (Balsamo) Vuillemin (Hypocreales: Cordycipitaceae) and Metarhizium anisopliae (Metschinkoff) Sorokin (Hypocreales: Clavicipitaceae) to control stored product insects and mites has been explored extensively in laboratory and field scale trials [3][4][5][6][7][8]. Despite positive results, no microbial control agents based on entomopathogenic fungi are currently commercially available for grain storage. Perceptions that such products deliver lower efficacy than conventional chemical control measures may be one barrier to their commercialisation. These perceptions may be counteracted by improving the formulation and delivery of microorganisms. Inert dusts, such as mineral clays and silica powders, kill arthropods by removing the epicuticular lipid layers causing excessive water loss through the cuticle [9,10]. These dusts have been widely used for stored product pest control around the globe [11][12][13]. Of the inert dusts Diatomaceous earth (DE), a silica-based sedimentary rock, is the most widely used for storage protection and has been used commercially as a standalone preventative and curative treatment against stored product pests. 
DE demonstrates efficacy against a wide range of pests [14][15][16][17], but its widespread adoption has been hampered based on a number of perceived drawbacks: (1) its effectiveness varies widely depending on target species and life stage, humidity, and temperature [14,15,18], (2) problems with processing machinery maintenance & damage [10], and (3) reductions in grain quality parameters such as bulk density and flowability [19,20]. Kaolin, an alternative inert dust, is a common, silica-based clay mineral used in industrial manufacturing [21]. It is a siliceous mineral similar to, but softer than, DE; it scores relatively low (2.0-2.5) on Mohs hardness index, making it less likely to abrade milling equipment (Campden BRI, pers. comm). Kaolin has demonstrated effectiveness on stored product insects [22,23] but requires very high application rates in admixture with grain (5-10 g per kg of grain) when used as a standalone product [10,24]. Such inclusion rates would have an unacceptable impact on grain quality parameters such as appearance, bulk density and flowability. However, as with DE, there may be the potential to reduce application rates if it can be coupled with another active ingredient. In this study, we evaluated the ability of kaolin to enhance the activity of B. bassiana, isolate IMI389521, against stored grain beetles in a series of bioassays. Our hypothesis was that the presence of kaolin would increase the pathogenicity of the fungus to the target insects. A variety of adult beetles were tested to determine if any effect of kaolin was species-dependent. As test insects we used three major stored product beetle species with varying susceptibility to the fungus: the saw-toothed grain beetle O. surinamensis, the granary weevil S. granarius, and the relatively tolerant [32] confused flour beetle Tribolium confusum Jacquelin du Val (Coleoptera: Tenebrionidae). Sitophilus granarius is a primary pest, infesting intact kernels, while T. confusum and O. surinamensis are secondary pests, infesting only damaged or broken kernels [33] and processed grain such as flour. Additionally, due to the relatively low levels of mortality observed on adult T. confusum, the response of the larvae, which have been shown to be more susceptible to entomopathogenic fungi than the adults [26,30], were also tested. Test Insects All adult insects were taken from laboratory maintained cultures. The O. surinamensis individuals were reared on rolled oats and wheat germ in a ratio of 3:1 w/w at 27 ± 2 • C and 40% ± 5% relative humidity (RH). The T. confusum adults were maintained on a diet of rolled porridge oats and brewer's yeast in a ratio of 9:1 w/w at 30 ± 2 • C and 40% ± 5% RH. Adult S. granarius were reared on kibbled wheat grain and wheat germ in a ratio of 19:1 w/w at 25 ± 2 • C and 65% ± 5% RH. All insects were reared in continuous darkness. Adult beetles of mixed age and sex were used for testing. For the larval bioassay (bioassay 3) all of the early instar T. confusum larvae required were supplied in diet from i2L Research (Cardiff, UK). Test Items The isolate of B. bassiana IMI389521 was originally sourced from an infected adult coleopteran S. oryzae in a UK grain store [5] and had previously demonstrated efficacy against a variety of storage insects, good stability in storage, and high levels of viability following mass-production [34,35]. The isolate was manufactured by Agrauxine (Loches, France). The dry conidia were combined with co-formulants at Exosect Ltd. 
(Winchester, UK) to make the various formulations. In bioassay 1 one the B. bassiana quantities are expressed as total conidia per kg of wheat but in subsequent bioassays, due to a change in quality control procedures by the manufacturer, the quantities are expressed as colony forming units (CFU) per kg wheat. Kaolin clay (AgriBind™) was supplied by Imerys (Par, Cornwall, UK). In the S. granarius experiment, bioassay 4, the conidia were also combined with Entostat ® to aid dispersion and adhesion to grain and insects, and silica to aid flow. The Entostat variant used was micronized carnauba wax supplied by Exosect Ltd. and Sipernat d17 was supplied by Lawrence Industries (Tamworth, UK). Commodity Residue-free wheat was supplied from Street End Farm (Bishops Waltham, UK) and kibbled for 30 s in a coffee grinder, except for the S. granarius bioassay (bioassay 4), where residue-free wheat grain var. Alderon was supplied by KWS UK (Royston, UK) and was not kibbled. Grain used in bioassay three and four was stored in paper bags in 500 g samples and placed in a humidity-controlled incubator (TK 252, Nüve, Ankara, Turkey)) for one week before the bioassays in order to equilibrate the grain to the test conditions. Bioassay 1: Initial Testing of O. surinamensis The response of O. surinamensis adults to B. bassiana IMI389521 at 1 × 10 10 conidia per kg of wheat was tested with and without kaolin admixed at a rate of 1.74 g per kg of wheat. Kaolin without B. bassiana and an untreated control were included for comparison. Fifty grams of kibbled wheat were weighed into 125 mL glass jars and treatments were added on top. The jars were covered and placed on a vortex mixer for 10 s to disperse the treatments. Five replicate jars were created for each treatment. Twenty adult O. surinamensis were added to each jar and the jars were covered with a piece of gauze held in place with an elastic band. Jars were arranged on a shelf in a bioassay room set to constant darkness (except during mortality checks) and 25 ± 3 • C. Humidity was monitored throughout the trial with a Lascar data logger and found to range from 47% to 62% RH. At days 7 and 14 the contents of each pot were individually tipped onto a white plastic tray and the insects were checked for mortality. Dead beetles were removed. Live beetles, grain, and powder were tipped back into the pots, except at the final time point. Trays were sterilized with 70% ethanol between treatments. Bioassay 2: Dose Response of O. surinamensis The response of O. surinamensis adults to different rates of B. bassiana IMI389521 and kaolin was tested. The fungus was applied at five rates: 0, 5.4 × 10 8 , 1.7 × 10 9 , 5.4 × 10 9 , and 1.7 × 10 10 CFU per kg of wheat, and for each rate of fungus the kaolin was tested at one of four rates: 0, 0.096, 0.963, and 1.925 g per kg of wheat, resulting in 20 treatment groups in total. For each bioassay and treatment group 250 g of kibbled wheat was weighed into a 1 L glass Kilner jar and treatments were added on top. The jars were rolled by hand for 30 s to disperse the treatments. The grain was then subdivided into five replicate 125 mL glass jars. Twenty adult O. surinamensis were added to each jar and the jars were covered with a piece of gauze held in place with an elastic band. Jars were arranged in a humidity controlled incubator set to 25 ± 2 • C and 65% ± 5% RH in constant darkness. At day 14, the insects were checked for mortality. Bioassay 3: Dose Response of T. confusum Adults and Larvae The response of T. 
Bioassay 3: Dose Response of T. confusum Adults and Larvae
The response of T. confusum adults and larvae to different rates of B. bassiana IMI389521 and kaolin was tested. The response of T. confusum was assessed at higher rates of B. bassiana than that of O. surinamensis, as the adults are less susceptible to the fungus than the other beetles tested [34]. In addition, lower concentrations of kaolin were assessed in the T. confusum bioassays, as the highest rate (1.925 g per kg of wheat) was subsequently considered not to be commercially viable. The fungus was applied at five rates: 0, 1.78 × 10^10, 3.16 × 10^10, 5.62 × 10^10, and 1 × 10^11 CFU per kg of wheat in both bioassays. In the adult bioassay, for each rate of fungus the kaolin was applied at one of five rates: 0, 0.178, 0.316, 0.562, and 1 g per kg of wheat, resulting in 25 treatment groups in total. In the larval bioassay, kaolin was tested at either 0 or 0.562 g per kg of wheat, resulting in ten treatment groups in total. In the adult bioassay, the grain was treated, sub-divided, and insects added as per Section 2.5. In the larval bioassay, it was not possible to count out larvae for each replicate jar, so the container of larvae in diet, with an estimated content of 2500 larvae, was homogenized by gently rolling for 30 s and then subdivided by weight into ten samples, one for each treatment group. The treatments were applied to the grain as per Section 2.5 and then the larvae were added before sub-dividing the mixture into five replicate jars. Assuming a homogeneous mixture of larvae in diet, this should have resulted in approximately 250 larvae per treatment group and 50 larvae per replicate jar. The jars in both bioassays were stored as per Section 2.5. At day 14, the adult insects were checked for mortality as detailed in Section 2.4. At day 42, the jars from the larval bioassay were checked for adult emergence.

Bioassay 4: Dose Response of S. granarius and the Effect of Temperature
The response of S. granarius adults to different rates of B. bassiana IMI389521, with and without kaolin at 1 g per kg of wheat, was assessed. The fungus was applied at four rates: 0, 1.78 × 10^9, 5.62 × 10^10, and 1.78 × 10^10 CFU per kg of wheat. For each of the formulations containing the fungus, a flow agent (Sipernat d17) and Entostat were also added. Sipernat d17 was added at 0.5% w/w of the total quantity of formulation to be applied. The balance of the formulation was made up of Entostat, which was added to the dry conidia before mixing with the kaolin. Details of the treatments, application rates, and quantities of components are given in Table 1. An untreated control and a kaolin-only treatment were also assessed. Mortality was examined 14 and 28 days after incubation at either 15 or 25 °C, resulting in 32 treatment combinations in total. For each treatment combination, 500 g of equilibrated wheat was treated in a 1 L glass Duran bottle by adding the formulation on top of the grain and then rolling the bottle on a bottle roller for 3 min to ensure even distribution of the treatment. The wheat was then subdivided into ten 125 mL glass jars, each containing 50 g of grain. Fifteen S. granarius adults were added to each jar. To prevent the insects from escaping, the tops of the jars were covered with a square of gauze retained in position with an elastic band. Jars were arranged in one of two humidity-controlled incubators set to constant darkness and 65% ± 5% RH. One incubator was set to 15 ± 2 °C and the other to 25 ± 2 °C.
At day 14, ten of the jars from each temperature × kaolin × fungal rate combination were randomly selected and the insects assessed as detailed in Section 2.4. The remaining jars were checked at day 28.

Statistical Analysis
For bioassay 1, the mortality data were found to deviate significantly from a normal distribution even after data transformation. As a result, the proportions data for each time point were analysed with a Kruskal-Wallis test followed by pairwise comparisons with a two-way Dunn's procedure. Analyses were performed using XLSTAT (version 2014.1.01.; Addinsoft, Paris, France, 2014). For bioassays 2 and 3, the mortality data were found to deviate significantly from a normal distribution, even after data transformations. For each bioassay, logistic regression (probit) was used to model the impact of fungal rate on mortality for each rate of kaolin that was tested. The dose variable was converted to a logarithmic scale before analysis. Each regression profile was then used to find the concentration of spores required to achieve the LC50 at each level of kaolin tested. The LC50 was calculated for doses that adequately fit the probit model, assessed using a χ² goodness-of-fit test (p > 0.1 indicates a good fit). The analyses were done using the "Dose" add-in for XLSTAT (version 2014.1.01). For bioassay 4, the S. granarius mortality data were found to deviate significantly from a normal distribution even after data transformations. As a result, the proportions data were used to construct a fully factorial Generalised Linear Model (GLM). An ANOVA was applied to the results of the GLM to look for significant effects. Due to overdispersion of the residuals, quasibinomial errors were used. Analysis was carried out using R software (version 3.1.0.; The R Foundation for Statistical Computing, Vienna, Austria, 2014). As for bioassays 2 and 3, logistic regression (probit) was used to model the impact of fungal rate on mortality with or without kaolin, at each temperature, using the "Dose" add-in for XLSTAT (version 2014.1.01.).
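As a rough illustration of the probit/LC50 procedure described above, the sketch below fits a dose-response model with a probit link on log10-transformed dose and back-calculates the dose expected to give 50% mortality. It is a minimal sketch in Python (statsmodels) rather than the XLSTAT "Dose" add-in actually used in the study, and the mortality counts are invented purely for illustration.

```python
# Minimal sketch (not the original XLSTAT analysis): probit regression of mortality
# on log10(dose) and back-calculation of the LC50. The counts below are invented.
import numpy as np
import statsmodels.api as sm

doses = np.array([5.4e8, 1.7e9, 5.4e9, 1.7e10])   # CFU per kg of wheat
dead = np.array([12, 28, 55, 78])                  # illustrative mortality counts
total = np.full_like(dead, 100)                    # insects exposed per dose

X = sm.add_constant(np.log10(doses))               # intercept + log10(dose)
y = np.column_stack([dead, total - dead])          # (successes, failures)

probit = sm.families.links.Probit()
fit = sm.GLM(y, X, family=sm.families.Binomial(link=probit)).fit()

b0, b1 = fit.params
lc50 = 10 ** (-b0 / b1)                            # probit predictor = 0 at 50% mortality
print(fit.summary())
print(f"Estimated LC50: {lc50:.3e} CFU per kg of wheat")
```

The Pearson χ² statistic reported by the fitted model can then play the role of the goodness-of-fit check mentioned above.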
Bioassay 1: Initial Testing of O. surinamensis
The effect of treatment was significant for both the day 7 and day 14 mortality data (p = 0.001 for both). After 7 d, most of the beetles in the combined treatment were already dead (>88%) (Figure 1), with significantly higher mortality than in the untreated control and kaolin-only treatments (p < 0.001 and p = 0.004, respectively), where mortality did not exceed 22%. At day 14, mortality reached 99% in the combined treatment and was again significantly different from that of the untreated control and kaolin-only groups (p < 0.001 and p = 0.003, respectively), where mortality did not exceed 28%. Although mortality in the fungus-only treatment (day 7: 46%; day 14: 81%) was lower than in the combined treatment, the difference between these groups was not significant (p = 0.139 and p = 0.174, respectively), possibly owing to the non-parametric analysis used and the low number of replicates (n = 5).

Figure 1. Bioassay 1: Percentage mortality of Oryzaephilus surinamensis adults exposed to Beauveria bassiana IMI389521 at 1 × 10^10 conidia per kg of wheat, kaolin at 1.74 g per kg of wheat, and a combination of the two, after 7 and 14 days of exposure (mean ± SE). Different letters between groups within time points denote statistical differences among treatments (p ≤ 0.004).

Bioassay 2: Dose Response of O. surinamensis
The treatments with the most kaolin (1.9253 g per kg of wheat) consistently achieved the highest level of mortality (>90%) when combined with fungus at a rate of at least 1.7 × 10^9 CFU per kg of wheat (Figure 2). The fungus alone never killed more than 50% of the insects, even at the highest rate (1.7 × 10^10 CFU per kg of wheat). Kaolin alone may have exerted an effect at a rate of 1.9253 g per kg of wheat, with mortality at 13% compared to 0% in the untreated control, but this was not tested statistically. Treatments of the fungus with the lowest level of kaolin (0.0963 g per kg of wheat) exhibited similar levels of mortality to the fungus alone. However, when the kaolin was combined with the fungus at 0.9626 or 1.9253 g per kg of wheat, the level of mortality compared to fungus alone was more than doubled in most treatment groups. At the lowest rate of fungus, 5.4 × 10^8 CFU per kg of wheat, the increase when combined with the highest rate of kaolin was more than seven-fold. Logistic regression was carried out on the mortality data for each rate of kaolin and was found to adequately fit the probit model (Table 2). The LC50 for fungal rate decreased as the rate of kaolin increased: it was >12 times lower when kaolin was included at a rate of 0.9629 g per kg of wheat and >44 times lower when it was included at the highest rate of 1.9253 g per kg of wheat.
Bioassay 3: Dose Response of T. confusum Adults and Larvae
Mortality did not exceed 55% in any of the treatment groups, and was above 27% in all groups which contained fungus (Figure 3). In the treatments with no fungus, mortality did not exceed 5.2%, even at the highest level of kaolin (1 g per kg of wheat). Kaolin appeared to have very little effect, if any, on mortality, even when combined with fungus. The fungal isolate seemed to exert some efficacy, as mortality was higher than in the untreated groups, but there was no clear dose response over the range tested (Table 3). The data were a poor fit for the probit model in each case. The LC50 for fungal rate was found to decrease as the rate of kaolin increased, but there was little difference between the top and bottom rates, with overlapping 95% confidence intervals (CI). Neither kaolin rate nor spore rate in the range tested made a large difference to the level of mortality.

No statistical analysis was carried out on the larval mortality data, since adult emergence was zero, or close to zero (0.2% ± 0.18% at 1.78 × 10^10 CFU with 0.562 g kaolin per kg of wheat), in all the treatment groups that contained fungus. In contrast, adult emergence from the treatment groups that contained no fungus was 53% (no kaolin) and 47% (kaolin at 0.562 g per kg of wheat). From these data it is not clear whether kaolin has any effect, either on its own or combined with the fungus. However, the data clearly indicate that the fungus is effective at preventing adult emergence when the larvae are treated with ≥1.78 × 10^10 CFU per kg of wheat.
Bioassay 4: Dose Response of S. granarius and the Effect of Temperature
The data clearly indicate that at each time point the level of mortality increased with increasing concentration of B. bassiana (Figure 4), and that for each rate of fungus mortality was increased in the presence of kaolin. The concentration of B. bassiana (F(3,312) = 212.46, p < 0.0001) and the presence of kaolin (F(1,311) = 251.68, p < 0.0001) were significant effects, as were temperature (F(1,316) = 61.53, p < 0.0001) and exposure period (F(1,315) = 121.35, p < 0.0001). Moreover, significant interactions were found between time and kaolin (F(1,302) = 11.93, p = 0.0006), fungus and kaolin (F(3,299) = 3.77, p = 0.011), and time × fungus × kaolin (F(3,293) = 2.77, p = 0.042). In addition to mortality increasing with fungal rate and kaolin inclusion, mortality was significantly higher at the second time point, and significantly higher at 25 °C than at 15 °C. The significant interaction between time, fungal rate, and kaolin indicates that the degree of synergy between kaolin and fungus was dependent on time: there were bigger differences between formulations with and without kaolin at the second time point than at the first.

Not all of the data were a good fit for the probit model (Table 4), particularly if kaolin had been included in the treatment. At 14 days, including kaolin lowered the LC50 at both temperatures. At 28 days and 15 °C, the LC50 was also lowered when kaolin was included, but no LC50 was calculated for the 25 °C data since all dose responses exceeded 50% mortality when kaolin was included. The LC50 at both time points was lower at 25 °C than at 15 °C.
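To make the factorial analysis behind these F statistics concrete, the sketch below sets up a fully factorial binomial GLM on per-jar mortality counts. It is a minimal, hypothetical example: the data are invented, and it uses Python (statsmodels/patsy) with a Pearson-χ² scale estimate as a stand-in for the quasibinomial GLM fitted in R in the original study.

```python
# Minimal sketch (invented data; not the original R analysis): a fully factorial
# binomial GLM for per-jar mortality counts, with dispersion estimated from the
# Pearson chi-square statistic as a stand-in for quasibinomial errors.
import numpy as np
import pandas as pd
import patsy
import statsmodels.api as sm

rng = np.random.default_rng(0)
rows = []
for fungus in (0.0, 1.78e9, 5.62e9, 1.78e10):   # illustrative fungal rates (CFU/kg)
    for kaolin in (0, 1):                        # kaolin absent / present
        for temp in (15, 25):                    # incubation temperature (deg C)
            for day in (14, 28):                 # exposure period (days)
                for _ in range(5):               # replicate jars (invented count)
                    dead = int(rng.integers(0, 16))   # out of 15 insects per jar
                    rows.append((fungus, kaolin, temp, day, dead, 15 - dead))
df = pd.DataFrame(rows, columns=["fungus", "kaolin", "temp", "day", "dead", "alive"])

# Full factorial design matrix: fungus x kaolin x temp x day, all treated as factors.
X = patsy.dmatrix("C(fungus) * C(kaolin) * C(temp) * C(day)", df, return_type="dataframe")
y = df[["dead", "alive"]].to_numpy()             # (successes, failures) per jar

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit(scale="X2")
print(fit.summary())
```

In the original analysis an ANOVA was applied to the fitted GLM; in this sketch the Wald tests reported by summary() play a similar role.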
Discussion
Our studies clearly demonstrate that the addition of even relatively low doses of kaolin clay powder can enhance the efficacy of B. bassiana against some adult stored grain beetle pests. Synergistic effects were demonstrated for both O. surinamensis and S. granarius, though not for T. confusum. Despite the low susceptibility of T. confusum adults, the larvae were highly susceptible to the fungus, with close to zero emergence of adult T. confusum when larvae were treated with a range of fungal rates, with or without kaolin. In none of the assays was kaolin by itself effective, which is consistent with the published literature [10] indicating that, as a standalone product, efficacy is only observed at rates ≥5 g per kg of wheat; the highest rate tested in the present study was 1.9253 g per kg of wheat.

Synergistic effects are observed when the effect of two actives applied together is greater than the sum of their separate effects at the same doses. When O. surinamensis were treated with a range of fungal and kaolin rates, the mortality in the combined treatment was greater than the sum of the mortality in the corresponding kaolin-alone and fungus-alone treatment groups when kaolin was ≥0.96 g per kg of wheat. Similarly, in most treatment groups, the mortality of S. granarius was greater in the combined formulation (kaolin at 1 g per kg of wheat) than the sum of the mortality caused by kaolin alone or the corresponding fungal rate applied alone. The results indicate synergistic rather than additive effects for these two insect species and are in accordance with results reported by Lord (2001) [29], Akbar et al. (2004) [30], and Sabbour et al. (2012) [6] when the inert dust DE was combined with entomopathogenic fungal spores for control of stored grain beetles and moths. However, as far as we are aware, the current study represents the first demonstration that such a synergistic effect can occur when kaolin is combined with an entomopathogenic fungus.

It is not clear why inert dusts may synergise the effect of entomopathogenic fungi. Akbar et al. (2004) [30] found that the presence of DE increased the conidial attachment of B. bassiana on the cuticle of T. castaneum larvae, but Lord (2001) [29] reported no significant increase in the case of R.
dominica larvae. Samodra & Ibraham [32] observed that the waxy layer of the insect's integument was abraded and removed by the kaolin, which allowed greater conidial attachment and fungal penetration through the insect exoskeleton. The increased water loss through the insect cuticle as a result of abrasion and absorption of cuticular waxes by inert dusts may create favourable conditions for spore germination or increase the stress levels of the insect, making it more susceptible to fungal infection. A combination of factors may account for their suitability as co-formulants.

The degree of synergism may be highly dependent on temperature and relative humidity. At higher temperatures, water loss is increased and insects are more mobile, and so take up more particles on their cuticles [6,15]. Inert dusts are, therefore, generally more effective at higher temperatures. However, there is a threshold above which the activity of the fungus begins to decline. Vassilakos et al. (2006) [36] found that B. bassiana (Naturalis® SC, Troy Biosciences, Phoenix, AZ, USA) was more effective against R. dominica and S. oryzae at 26 °C than at 30 °C. Athanassiou and Steenberg (2007) [25] reported that the efficacy of B. bassiana decreased when temperature increased from 25 to 30 °C. The optimum temperature for B. bassiana conidial germination and vegetative growth is reported to be around 25 °C [37,38]. In the present study, temperature was found to be a significant factor affecting mortality of S. granarius, with greater mortality at the higher temperature of 25 °C. This is to be expected, as fungal germination increases with increasing temperature (up to a threshold) and, like DE, the effect of kaolin may be increased at higher temperatures. There was no significant interaction between temperature and the presence or absence of kaolin; the degree of synergism was not greater at 25 °C than at 15 °C. Alterations in relative humidity were not investigated in the present study, but there may be complex interactions, since water stress would increase under lower relative humidity. Inert dusts are more efficacious at lower moisture levels; however, it is not clear how this relationship may change if they are combined with an entomopathogenic fungus. The effectiveness of B. bassiana under different relative humidities varies widely, with some studies indicating better efficacy at reduced moisture levels [39,40] and others reporting that the fungus may be less active in drier conditions [41,42]. In the present study, the efficacy of the fungus alone was actually greater at 1 × 10^10 conidia per kg of wheat in bioassay 1, with around 80% mortality, than at the higher concentration of 1.7 × 10^10 CFU per kg of wheat in bioassay 2, where mortality was <50%. Although the units of concentration are different, there are usually fewer CFU than conidia per kg of wheat, since conidia per kg is the total number of spores present and CFU is the total number of viable spores present. The main difference between the two bioassays was that the relative humidity was uncontrolled, and consequently lower, in bioassay 1, supporting the findings by Lord [39,40] that the activity of B. bassiana, at least in the case of this isolate, may be better at reduced moisture levels.

The degree of synergism may also depend on the concentrations of each active. Vassilakos et al. (2006) [36] found some additive and some negative effects between B. bassiana and the DE product SilicoSec® (Biofa GmbH, Münsingen, Germany) against adults of S. oryzae and R.
dominica, which appeared to be concentration-dependent. At the lowest fungal rates, the addition of DE did not increase the fungal efficacy, and in some cases caused a detrimental effect, while at the highest fungal rates an additive effect was more often recorded. In the present study, the lowest kaolin rate of 0.0963 g per kg of wheat had neither an additive nor a detrimental impact on the mortality caused by the fungus against O. surinamensis; both higher rates of kaolin had synergistic effects. In the Sitophilus bioassay, no detrimental effects of including kaolin were observed; all effects were either additive or synergistic, and these effects were observed at both temperatures and time points.

The susceptibility of the insects to the fungus, and the degree of synergism between kaolin and the fungus, is clearly dependent on the species and life stage tested. In the present study, among the adult insects, we found O. surinamensis to be the most susceptible, followed by S. granarius, with T. confusum the most resistant. This order of susceptibility is in agreement with previous work on this isolate [33]. Resistance of adult Tribolium sp. to entomopathogenic fungal infection has been previously reported [30,43,44]. Wakefield (2006) [33] demonstrated, using scanning electron microscopy, that quantitative and qualitative differences in adherence and germination of fungal conidia could be observed between a susceptible species, O. surinamensis, and the resistant T. confusum. At each of the post-treatment periods, O. surinamensis had a greater number of fungal conidia adhering to the cuticle, and the greater adherence appeared to be related to the greater number of setae, particularly on the ventral abdomen, of O. surinamensis compared to T. confusum. Lord (2007c) [45] showed that T. castaneum was more susceptible under desiccation stress. Desiccation stress may be achieved by including an inert dust such as kaolin. However, in the present study, the kaolin had no synergistic effect on the fungus against T. confusum adults at the levels tested, and it was not possible to determine any effect on the larvae, since all rates tested resulted in close to 0% emergence of adult beetles; consequently, no LC50 was calculated for the larvae. Lower rates were not tested on larvae.

At day 14 post-treatment, an acceptable level of efficacy (>70%) against S. granarius was only achieved at the highest rate of 1.78 × 10^10 CFU per kg of wheat with kaolin at 1 g per kg of wheat, although lower rates were effective on O. surinamensis. As the formulation must contain a dose that will be efficacious against all the target species, a rate of 1.78 × 10^10 CFU per kg of wheat with kaolin at around 1 g per kg of wheat would ensure acceptable efficacy against adult O. surinamensis, adult S. granarius, and the larval stage of T. confusum. It has previously been observed that Tribolium sp. larvae are more susceptible to entomopathogenic fungi than the adult life stage [26,30], an effect which has been shown to synergise with the presence of DE [30]. Kavallieratos et al. (2006) [27] recovered very few progeny of T. confusum in wheat treated with M. anisopliae, and no progeny production was recorded when DE was included. Tribolium sp. larvae feed and develop on the external part of the kernels; thus, their chance of encountering fungal conidia is increased compared to that of the adults. In terms of susceptibility to inert dusts, as with the fungi, the larvae are much more susceptible to DE than the adults [18,46].
In order to better understand the relationship between kaolin and B. bassiana IMI389521 and the potential for any synergy, lower rates of fungus would need to be tested. It has been demonstrated that the inert dust kaolin can have synergistic effects when combined with an entomopathogenic fungus, resulting in much higher levels of efficacy against stored grain beetles than would otherwise be possible with the isolate alone. This has previously been demonstrated for an alternative mineral earth powder, DE. Kaolin may convey certain advantages over the use of DE since it is softer and may be less likely to negatively impact processing machinery. While the kaolin demonstrated little effectiveness on T. confusum adults, the isolate tested, IMI389521, already demonstrated high levels of efficacy on the larvae, which may in fact be a better target. By targeting the larvae rather than the adults, there is a reduced possibility that adults will emerge, mate, and lay eggs before the fungal infection has been able to take effect. It would be worthwhile to screen the isolate and the co-formulant kaolin on the life stages of other grain beetles and, perhaps, other important grain pests such as moths and psocids.

Conclusions
Kaolin is a widely available and relatively inexpensive material. It is already known as a non-synthetic control product with insect-repellent properties, and it is currently used in particle film technology for the protection of a wide variety of agricultural crops [47,48]. This research has now demonstrated the potential for kaolin as a co-formulant for entomopathogenic fungal control, in this case for use in stored grain. By synergizing the effect of the fungus, the co-formulant kaolin may overcome the lower levels of efficacy that are perceived as a barrier to the widespread adoption of microbial control agents in storage systems.
2016-09-01T08:36:37.373Z
2016-08-26T00:00:00.000
{ "year": 2016, "sha1": "4ce78d13f28904d95821ca12849917bc4d045072", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4450/7/3/42/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0d3cd4b5f1f95d08c176d238dae9b752e816440c", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
261162810
pes2o/s2orc
v3-fos-license
Deconstructive and Divergent Synthesis of Bioactive Natural Products

Natural products play a key role in innovative drug discovery. To explore the potential application of natural products and their analogues in pharmacology, total synthesis is a key tool that provides natural product candidates and synthetic analogues for drug development and potential clinical trials. Deconstructive synthesis, namely building new, challenging structures through bond cleavage of easily accessible moieties, has emerged as a useful design principle in synthesizing bioactive natural products. Divergent synthesis, namely synthesizing many natural products from a common intermediate, can improve the efficiency of chemical synthesis and generate libraries of molecules with unprecedented structural diversity. In this review, we will firstly introduce five recent and excellent examples of deconstructive and divergent syntheses of natural products (2021–2023). Then, we will summarize our previous work on the deconstructive and divergent synthesis of natural products to demonstrate the high efficiency and simplicity of these two strategies in the field of total synthesis.

Introduction
Since the synthesis of urea in 1828, humans have opened the door to organic synthesis. With the development of isolation and identification techniques and of organic synthesis methods, chemists keep pushing the limits of complex molecule synthesis, especially of complex natural products such as vitamin B12 and anemone toxin. Natural products are the main source of innovative medicines, such as artemisinin and paclitaxel. From 1981 to 2020, over 50% of the small-molecule drugs approved by the FDA were derived from natural products or from the parent molecular structures of natural products [1]. It can be seen that innovative drug development based on active natural products has played a crucial role in the field of original drug research and development. However, active natural products are often very scarce in nature and have low direct druggability. Therefore, research on the total synthesis of active natural products can not only solve the problem of limited natural resources and provide large amounts of active natural products, but also enable the rapid synthesis of natural product analogues with diverse skeletons and functional groups, building natural-product-like compound libraries for extensive biological activity screening and drug development research. It therefore has important scientific significance and potential application value in the field of natural product synthesis and new drug development. Thus, the development of practical synthetic methods and efficient synthetic strategies in natural product total synthesis has always been an active field in organic synthesis and has attracted great attention from synthetic chemists.
Deconstructive synthesis, namely building new (and often more challenging) structures through bond cleavage and the formation of easily accessible moieties, has emerged as a useful design principle in preparing complex bioactive natural products and other target molecules [2]. The basic logic of deconstructive synthesis is to construct a "hardly accessible" skeleton from an "easily accessible" skeleton through skeletal reorganization. Meanwhile, the rapid construction of polycyclic molecular frameworks, the precise installation of multiple stereocenters bearing crowded quaternary carbon centers, and the efficient introduction of versatile functional groups should also be highlighted. It also facilitates the continuous modification of key advanced intermediates to improve the overall efficiency of the synthetic route. The subtlety of deconstructive synthesis often lies in its creative approach to building intricate molecular frameworks, resulting in the design of easily accessible intermediates that can significantly reduce the challenge of total synthesis. The development of deconstructive synthesis strategies is often inspired by the molecular structural characteristics of natural products, which also tests the creativity and personalized perspective of synthetic chemists.

Divergent synthesis, in short, involves working from one common intermediate to many natural products (Figure 1). In general, a class of natural products often shares a similar chemical structure with various functional groups or different molecular frameworks that can be constructed from a common intermediate in biosynthesis. The original definition of divergent synthesis was proposed and demonstrated by Boger in 1984 (Scheme 1) [3]. "Divergent" is defined as a common intermediate (preferably an advanced intermediate) being converted, separately, to at least two natural products. Applications of divergent synthesis include not only preparing molecules in the same family but also accessing natural products that have the same molecular skeletons from different families. Divergent total synthesis, also defined as "collective total synthesis" by MacMillan, is when multiple skeletons of natural products are prepared from a versatile common intermediate [4].

Scheme 1. Boger's divergent total syntheses of rufescine and imetuteine.

Compared with linear synthesis, which can only achieve one natural product at a time, the divergent strategy provides a more efficient approach to access many natural products with similar or different molecular skeletons from a versatile molecule through a number of routine chemical operations. In addition, the divergent synthesis of natural products is also more conducive to building a natural-product-like compound library to conduct more biological activity screening and innovative drug development based on bioactive natural products.
The application of both strategies requires organic chemists to be familiar with the skeleton, functional groups, oxidation state, and other characteristics of the target molecules. One of the most challenging steps is to design a suitable advanced intermediate that can be easily prepared and quickly converted to as many target molecules as possible. Meanwhile, the common intermediate should also be close to the target molecule to reduce the total number of synthesis steps.

During the past decade, many research groups have developed diverse, novel synthetic methodologies to realize a lot of impressive approaches for the divergent and deconstructive synthesis of natural products. Previous literature on the divergent strategy in natural product total syntheses from 2013 to June 2017 has been summarized in Chemical Reviews [5]. Excellent reviews focusing on deconstructive synthesis through rearrangement reactions already exist, providing more comprehensive perspectives for readers who have interests in this field [6][7][8][9][10][11][12].

In the last three years, many research groups have accomplished a lot of creative synthetic routes to syntheses of natural products with complex structures and broad biological activities. In this review, we will first introduce five excellent examples to demonstrate the synthetic utilities of the deconstructive and divergent strategies in natural product total synthesis. Then, we will summarize our previous work on the deconstructive and divergent synthesis of bioactive natural products to demonstrate the high efficiency and simplicity of these two strategies in natural product total synthesis. After a brief introduction of the respective bioactive natural products, we will discuss the details of the synthetic routes and how to combine the deconstructive and divergent strategies in target molecules.

Selected Deconstructive and Divergent Syntheses of Natural Products (2021-2023)
In 2021, the Reisman group from the California Institute of Technology reported the divergent total syntheses of three C19 diterpenoid alkaloids, (−)-talatisamine, (−)-liljestrandisine, and (−)-liljestrandinine, from phenol (Scheme 2a) [13]. The highlights of this work include (1) a 1,2-addition/semipinacol rearrangement sequence to efficiently couple two complex fragments and construct the quaternary carbon center and (2) an intramolecular aziridination and radical cyclization to assemble the pentacyclic skeleton of the target alkaloids.

In 2022, the Ma group from the Shanghai Institute of Organic Chemistry reported an asymmetric divergent approach to the total synthesis of six napelline-type C20-diterpenoid alkaloids in a convergent manner (Scheme 2b) [14]. The highlights of this work include (1) a diastereoselective intermolecular Cu-mediated conjugate addition to couple the two fragments; (2) an intramolecular Michael addition reaction to construct the tetracyclic skeleton; and (3) an intramolecular Mannich cyclization to rapidly construct the azabicyclo[3.2.1]octane motif of the target molecules.

In 2023, the Fan group from Lanzhou University published a deconstructive and divergent synthesis of nine C8-ethano-bridged diterpenoids based on late-stage skeletal diversification in the longest linear sequence of 8 to 11 steps starting from readily available chiral materials (Scheme 2c) [15]. Crucial advanced intermediates with different structural skeletons were rapidly constructed through regioselective and diastereoselective metal-hydride hydrogen atom transfer (MHAT) cyclization, exploiting the multi-reactivity of densely functionalized polycyclic substrates.

In 2022, the Ding group from Zhejiang University developed deconstructive and divergent syntheses of eight tetraquinane diterpenoids through a HAT-initiated Dowd-Beckwith rearrangement reaction for the efficient assembly of diversely functionalized polyquinane frameworks (Scheme 2d) [16].

In 2023, the Ding group finished the deconstructive and divergent syntheses of nine grayanane diterpenoids that belong to five distinct subtypes from a common advanced tetracyclic intermediate, which was prepared through a tandem intramolecular oxidative dearomatization-induced (ODI) [5 + 2] cycloaddition/pinacol rearrangement to construct the [3.2.1]-bicyclic skeleton and a photosantonin rearrangement to assemble the 5/7 bicyclic framework (Scheme 2e) [17].

Divergent Syntheses of Fawcettimine-Class Lycopodium Alkaloids
Lycopodium alkaloids are structurally complex natural products with quinolizine, pyridine, or α-pyridone moieties, originally identified in the Lycopodium genus. Lycopodium alkaloids exhibit important biological activities. For example, Huperzine A is a potent inhibitor of acetylcholinesterase (AChE) and shows promise in the treatment of Alzheimer's disease (AD). The fawcettimine-class Lycopodium alkaloids are a class of structurally unique alkaloids with complex skeletons bearing quaternary carbon centers, such as fawcettimine (1), fawcettidine (2), lycojaponicumins C (3), and 8-deoxyserratinine (4) (Scheme 3) [18]. In particular, lycojaponicumins C (3), isolated by Yu and co-workers in 2012 from the traditional Chinese medicine Lycopodium japonicum, inhibits lipopolysaccharide (LPS)-induced pro-inflammatory factors in BV2 macrophages [19]. The fawcettimine-class Lycopodium alkaloids feature fused tetracyclic skeletons, including two common cis-hydroindene (6/5 bicyclic) motifs and two diverse ring systems bearing quaternary carbon centers. The Lycopodium alkaloids have attracted great attention from synthetic chemists and medicinal chemists for their unique chemical structures and broad biological activities. During the past decade, several elegant approaches for the total synthesis of Lycopodium alkaloids have been reported [20].
In 2013, Tu's group reported a creative approach to the divergent synthesis of four fawcettimine-class Lycopodium alkaloids, namely, fawcettimine (1), fawcettidine (2), lycojaponicumins C (3), and 8-deoxyserratinine (4) [21]. In this work, the authors designed a common intermediate which can be transformed into alkaloids 1-4 in a divergent manner. Meanwhile, a practical methodology to quickly construct the cis-hydroindene skeleton was designed and developed, demonstrating that natural product total synthesis can promote the development of novel synthetic methodologies and synthetic strategies.
After optimization of the Mukaiyama-Michael addition conditions, coupling of the two simple building blocks 5 and 6 was realized in the presence of triflimide to afford silyl ether 7 with excellent yield and high diastereoselectivity (Scheme 4). Then, the Cu(tbs)2-catalyzed carbene cyclization designed by our group was successfully conducted, and decarboxylation of the resulting cyclization product provided the desired ketone 9, with a cis-6/5 bicyclic skeleton and a quaternary carbon center, in 55% yield [22]. The bicycle 9 could also be prepared on scale in good yield using a one-pot operation. Notably, this novel methodology offers an efficient way to construct the cis-6/5 bicyclic skeleton from simple building blocks, which can be applied to the total synthesis of other complex natural products. Then, Dieckmann condensation of the keto ester 9 gave an enolate of the tricyclic trione 10, which reacted in situ with a π-allylpalladium complex to deliver the tricyclic compound 11, with an angular 6/5/5 tricyclic molecular framework bearing two quaternary carbon centers. Selective protection of the ketone on the six-membered ring and hydroboration of the terminal double bond of 11 gave the desired alcohol 12, which could readily be converted into the common intermediate, the tricyclic azide 13, through the Mitsunobu reaction.

The tricyclic azide 13 possesses an angular 6/5/5 tricyclic framework, two contiguous quaternary carbon centers, two carbonyl groups, and an azide side chain, and can serve as the common intermediate for the total syntheses of four fawcettimine-class Lycopodium alkaloids, namely, fawcettimine (1, 10 steps), fawcettidine (2, 12 steps), lycojaponicumins C (3, 12 steps), and 8-deoxyserratinine (4, 12 steps), from the simple building blocks 5 and 6. A major innovation of this strategy involved the design of the versatile common intermediate 13, which can be used to synthesize not only the fawcettimine-class Lycopodium alkaloids but also other complex natural products. Other highlights of this work include (i) two consecutive one-pot procedures to rapidly assemble the angular 6/5/5 tricyclic framework bearing two contiguous quaternary carbon centers at an early stage and (ii) a highly regioselective aza-Wittig reaction, Schmidt rearrangement, and selective C4-N cleavage to construct the differently sized rings of the target natural products for late-stage skeletal diversification.
Deconstructive Syntheses of Cyclopiane-Class Tetracyclic Diterpenes
The Cyclopiane-class tetracyclic diterpenes are a class of unique bioactive natural products isolated from fermentation of marine-derived endophytic fungi of the Penicillium genus [23][24][25]. Conidiogenone (19) and conidiogenol (20) exhibit potent conidiation-inducing activity, while conidiogenone B (21) shows high antibacterial activity (Figure 2). The structures of the Cyclopiane-class tetracyclic diterpenes feature a highly fused and strained tetracyclic (6/5/5/5) skeleton, 6-8 consecutive chiral centers, and four sterically hindered quaternary carbon centers. The Cyclopiane-class tetracyclic diterpenes have attracted great attention for their unprecedented chemical structures and important biological activities. During the past decade, several elegant approaches for the total syntheses of Cyclopiane-class tetracyclic diterpenes have been reported [26][27][28][29]. The first total syntheses of three Cyclopiane-class tetracyclic diterpenes, namely, conidiogenone (19), conidiogenol (20), and conidiogenone B (21), were accomplished by the Tu group in 2016 through the use of a well-designed semipinacol rearrangement as a key step to construct the requisite spirocyclic (6/5) skeleton and sterically hindered quaternary carbon center of the target molecules [26].

This work started with the preparation of the enantioenriched cyclobutanone 25, which was obtained via chiral resolution of racemic 25, prepared in two steps from the simple building blocks 22 and 23 via 1,4-addition and [2 + 2] cyclization (Scheme 6). The well-designed vinyl cyclobutanol silyl ether precursor 27 was successfully synthesized by coupling the known vinyl bromide A and the phenylthio-cyclobutanone 26, which was obtained from enantioenriched 25 and phenyl disulphide in the presence of LDA. The phenylthioether group was introduced to adjust the electron density of the expected migrating carbon in the semipinacol rearrangement, aiming to enhance dominant migration of the expected carbon of the rearrangement precursor 27. Fortunately, under the BF3·OEt2/DCM conditions, the expected semipinacol rearrangement occurred and provided the desired spirocyclic product 28, bearing the correct quaternary carbon center, in 80% yield and with a ratio of 1.2:1 at the C9 position. The structure and stereocenters of 28 were confirmed via X-ray crystallographic analysis of its derivative 32 (vide infra). After several chemical operations, the last five-membered ring of the target molecule was successfully constructed via acid-promoted aldol cyclization of an aldehyde, which was obtained from the terminal olefin 31 through ozonolysis. After removal of the unnecessary functional group on the triquinane (5/5/5 tricyclic) motif, the tetracycle 35 was obtained in high yield. Then, stereoselective installation of a methyl group in the presence of LDA and LiAlH4 reduction of the resulting ketone, followed by quenching with aqueous HCl to install the enone motif, provided conidiogenone B (21) in 70% yield. However, the positive optical rotation and CD spectrum data of synthetic conidiogenone B were opposite to those of the naturally occurring one. Thus, the correct absolute configuration of naturally occurring conidiogenone B (21) is, in fact, the enantiomer of the originally assigned absolute configuration. Meanwhile, stereoselective epoxidation of conidiogenone B, followed by reduction of the resulting epoxide with NaSePh, afforded conidiogenone (19) in 54% yield over two steps. Finally, diastereoselective reduction of conidiogenone (19) with L-selectride gave another diterpene, conidiogenol (20), in 77% yield.

Scheme 6. Deconstructive syntheses of the Cyclopiane-class tetracyclic diterpenes (19-21).
In this work, the Tu group achieved the first total synthesis of the cyclopiane class tetracyclic diterpene conidiogenone B and its transformation into conidiogenone and conidiogenol by installing a SPh "directing" group on the expected migratory carbon of the precursor to perform regio-and diastereo-selective semipinacol-type rearrangement.The absolute configuration of naturally occurring conidiogenone B ( 21) is also corrected through this synthesis.More importantly, the 6/5/5 tricyclic ring system of the target molecule was rapidly constructed via well-designed semipinacol rearrangement in one step, which not only constructed a crowded ring system bearing a quaternary carbon center but also reserved versatile functional groups for the introduction of the last ring and vicinal quaternary carbon center (C9), making the synthetic route more pretrial and efficient. In this work, the use of catalytic C-C/C-H activation of 3-arylcyclopentanones as a key step has been illustrated in the enantioselective total synthesis of a range of diterpenoids, namely, (−)-microthecaline A (37, five steps), (−)-leubethanol (38, six steps), (+)-seco-pseudopteroxazole (39, seven steps), (+)-pseudopteroxazole (40, eight steps), (+)pseudopterosin G-J aglycone (41, eight steps), and (−)-pseudopterosin A-F aglycone (42, eight steps).This strategy can accelerate asymmetric construction of the poly-substituted tetrahydronaphthalene cores, therefore significantly simplifying the overall synthesis.This is a nice example of the design of common advanced intermediates for divergent synthesis of two classes of bioactive natural products through a deconstruction strategy.With the power of the new synthetic methodology and strategy, the synthetic approach to these diterpenoids is significantly shorter than that in previous work, which could remarkably accelerate the investigation of their potential as drug candidates in drug discovery and development.Using the synthetic route of preparation of compound 52, the common advanced intermediate 58 was obtained from aryl boronic acid 53, 2-cyclopentenone, and chiral organoborane 51 in five steps with a 30% overall yield (Scheme 8).Then, a direct crossdehydrogenative-coupling (CDC) of 58 was realized under o-chloranil/MeCN conditions to afford the desired tricyclic cyclization products 59 and 60 with a 40% combined yield with a ratio of 1.4:1.Finally, (+)-seco-pseudopteroxazole (39) and (+)-pseudopteroxazole (40) were synthesized from 58 and 59 via deprotection and one-pot oxidative oxazole formation, respectively.Pseudopterosin G-J aglycone (41) and (−)-pseudopterosin A-F aglycone (42) were prepared from 59 and 60 through deprotection, respectively.Using the synthetic route of preparation of compound 52, the common advanced intermediate 58 was obtained from aryl boronic acid 53, 2-cyclopentenone, and chiral organoborane 51 in five steps with a 30% overall yield (Scheme 8).Then, a direct crossdehydrogenative-coupling (CDC) of 58 was realized under o-chloranil/MeCN conditions to afford the desired tricyclic cyclization products 59 and 60 with a 40% combined yield with a ratio of 1.4:1.Finally, (+)-seco-pseudopteroxazole (39) and (+)-pseudopteroxazole (40) were synthesized from 58 and 59 via deprotection and one-pot oxidative oxazole formation, respectively.Pseudopterosin G-J aglycone (41) and (−)-pseudopterosin A-F aglycone (42) were prepared from 59 and 60 through deprotection, respectively. 
Deconstructive Synthesis of Morphine Alkaloid (−)-Thebainone A Morphine and codeine have attracted great attention for their powerful biological activity and medical applications (Figure 4).Modification of the morphine alkaloids is still an active field in drug discovery.Therefore, developing a new asymmetric synthetic route of morphine alkaloids and their analogues is highly desirable for exploring their potential utilities in drug discovery and development.In 2021, the Dong group reported a concise enantioselective deconstructive synthesis of the morphine alkaloid thebainone A for the first time, as well as formal synthesis of codeine and morphine from commercially available materials [2,33].The high efficiency of the synthetic strategy is enabled by an asymmetric Rh-catalyzed C-C activation reaction (cut-and-sew) to access the all-carbon fused-rings structure and quaternary carbon center. tetrahydronaphthalene cores, therefore significantly simplifying the overall synthesis.This is a nice example of the design of common advanced intermediates for divergent synthesis of two classes of bioactive natural products through a deconstruction strategy.With the power of the new synthetic methodology and strategy, the synthetic approach to these diterpenoids is significantly shorter than that in previous work, which could remarkably accelerate the investigation of their potential as drug candidates in drug discovery and development. Deconstructive Synthesis of Morphine Alkaloid (−)-Thebainone A Morphine and codeine have attracted great attention for their powerful biological activity and medical applications (Figure 4).Modification of the morphine alkaloids is still an active field in drug discovery.Therefore, developing a new asymmetric synthetic route of morphine alkaloids and their analogues is highly desirable for exploring their potential utilities in drug discovery and development.In 2021, the Dong group reported a concise enantioselective deconstructive synthesis of the morphine alkaloid thebainone A for the first time, as well as formal synthesis of codeine and morphine from commercially available materials [2,33].The high efficiency of the synthetic strategy is enabled by an asymmetric Rh-catalyzed C-C activation reaction (cut-and-sew) to access the all-carbon fused-rings structure and quaternary carbon center.According to the previously synthetic route, benzocyclobutenone 67 was prepared from commercially available compound 65 in three steps on a decagram scale with a 70% overall yield, while alcohol 69 was accessed through Birch reduction and ketal formation of commercially available anisole 68 (Scheme 9).Mitsunobu coupling of phenol 67 and alcohol 69 delivered the desired precursor 70 with a 93% yield.Because of the acidsensitive ketal, sterically hindered trisubstituted olefin, and relatively long linker of compound 70, the efficiency of C-C activation is a challenge.After screening a series of C-C activation reaction conditions, we successfully realized Rh-catalyzed enantioselective C-C activation and obtained the desired tetracyclic compound 71 with a 76% yield with 97:3 er on a gram scale.Notably, compound 71 contains the all-carbon fused 6/6/6 rings bearing the quaternary carbon centers of the target molecules.This step not only sets the requested stereochemistry at the C13 and C14 positions but also forms all the C−C bonds present in the natural products.Compound 71 transformed into styrene 72 with a 78% yield via LAH-reduction, elimination, and deprotection.Cleavage of the C−O 
bond with BBr3, followed by methylation of the resulting diphenol provided the desired alkyl bromide 73.Then, a one-pot sequence of ketal installation and SN2 amination of alkyl bromide smoothly delivered sulfonamide 78, which could provide the common intermediate 75 through formal hydroamination under a sodium naphthalenide condition to construct the piperidine and selectively deprotect the more sterically hindered methyl ether in the presence of NaSEt.The common intermediate 75 could not only transform According to the previously synthetic route, benzocyclobutenone 67 was prepared from commercially available compound 65 in three steps on a decagram scale with a 70% overall yield, while alcohol 69 was accessed through Birch reduction and ketal formation of commercially available anisole 68 (Scheme 9).Mitsunobu coupling of phenol 67 and alcohol 69 delivered the desired precursor 70 with a 93% yield.Because of the acid-sensitive ketal, sterically hindered trisubstituted olefin, and relatively long linker of compound 70, the efficiency of C-C activation is a challenge.After screening a series of C-C activation reaction conditions, we successfully realized Rh-catalyzed enantioselective C-C activation and obtained the desired tetracyclic compound 71 with a 76% yield with 97:3 er on a gram scale.Notably, compound 71 contains the all-carbon fused 6/6/6 rings bearing the quaternary carbon centers of the target molecules.This step not only sets the requested stereochemistry at the C13 and C14 positions but also forms all the C−C bonds present in the natural products.Compound 71 transformed into styrene 72 with a 78% yield via LAH-reduction, elimination, and deprotection.Cleavage of the C−O bond with BBr 3 , followed by methylation of the resulting diphenol provided the desired alkyl bromide 73.Then, a one-pot sequence of ketal installation and SN2 amination of alkyl bromide smoothly delivered sulfonamide 74, which could provide the common intermediate 75 through formal hydroamination under a sodium naphthalenide condition to construct the piperidine and selectively deprotect the more sterically hindered methyl ether in the presence of NaSEt.The common intermediate 75 could not only transform into morphine alkaloid (−)-thebainone A (64) through desaturation under Stahl's condition but also serve as a known precursor for the syntheses of codeine (61) and morphine (62) [13]. In this work, the Dong group developed a novel strategy for the synthesis of the morphine alkaloid (−)-thebainone A (67).Key steps included (i) construction of the allcarbon 6/6/6 tricyclic skeleton bearing a quaternary carbon center through an asymmetric Rh-catalyzed C-C bond activation reaction from easily accessible benzocyclobutenone; (ii) construction of a piperidine ring from dihydropyran through C-O bond cleavage with BBr 3 and C-N bond formation in the presence of sodium naphthalenide.This creative approach is an excellent example of the application of the deconstructive strategy in the total synthesis of natural products.Furthermore, a reoptimized catalytic C-C bond activation condition was also discovered, with good substrate scope and potential application in the synthesis of other polycyclic natural products. 
into morphine alkaloid (−)-thebainone A (64) through desaturation under Stahl's condition but also serve as a known precursor for the syntheses of codeine (61) and morphine (62) [13].In this work, the Dong group developed a novel strategy for the synthesis of the morphine alkaloid (−)-thebainone A (67).Key steps included (i) construction of the allcarbon 6/6/6 tricyclic skeleton bearing a quaternary carbon center through an asymmetric Rh-catalyzed C-C bond activation reaction from easily accessible benzocyclobutenone; (ii) construction of a piperidine ring from dihydropyran through C-O bond cleavage with BBr3 and C-N bond formation in the presence of sodium naphthalenide.This creative approach is an excellent example of the application of the deconstructive strategy in the total synthesis of natural products.Furthermore, a reoptimized catalytic C-C bond activation condition was also discovered, with good substrate scope and potential application in the synthesis of other polycyclic natural products. Conclusions and Future perspective Divergent and deconstructive synthesis are becoming common and important strategies in the total synthesis of natural products.How to effectively combine the two strategies has attracted the attention of synthetic chemists, which often results in the two strategies playing a role of "1 + 1 > 2", such as bringing greater convenience in the innovative total synthesis of natural products. The synthesis of complex natural products has always been a challenging objective in the Organic Chemistry field.It is exciting to see that more and more complex products, such as complex alkaloids and terpenoids, are being synthesized in the laboratory.Efficient and simple synthesis can provide reliable access to natural products thus facilitating innovative drug discovery and development.We hope that this review will attract more synthetic chemists to pay more attention to deconstructive and divergent synthesis, and to design elegant approaches to synthesize more natural products with complex polycyclic structures and broad biological activities. Conclusions and Future perspective Divergent and deconstructive synthesis are becoming common and important strategies in the total synthesis of natural products.How to effectively combine the two strategies has attracted the attention of synthetic chemists, which often results in the two strategies playing a role of "1 + 1 > 2", such as bringing greater convenience in the innovative total synthesis of natural products. The synthesis of complex natural products has always been a challenging objective in the Organic Chemistry field.It is exciting to see that more and more complex products, such as complex alkaloids and terpenoids, are being synthesized in the laboratory.Efficient and simple synthesis can provide reliable access to natural products thus facilitating innovative drug discovery and development.We hope that this review will attract more synthetic chemists to pay more attention to deconstructive and divergent synthesis, and to design elegant approaches to synthesize more natural products with complex polycyclic structures and broad biological activities. Figure 1 . Figure 1.Schematic explanation of linear and divergent synthesis.Figure 1.Schematic explanation of linear and divergent synthesis. Figure 1 . Figure 1.Schematic explanation of linear and divergent synthesis.Figure 1.Schematic explanation of linear and divergent synthesis. Scheme 2 . 
Scheme 2. Selected examples of deconstructive and divergent syntheses of natural products.
2023-08-26T15:14:58.439Z
2023-08-22T00:00:00.000
{ "year": 2023, "sha1": "2e4f2fd95bba8bcca149cd11b122323e532ff25f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/17/6193/pdf?version=1692844753", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9205bf04e97d37eedd900a59045c195da54a44eb", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [] }
231392245
pes2o/s2orc
v3-fos-license
Evidence for decreased parasympathetic response to a novel peer interaction in older children with autism spectrum disorder: a case-control study Background Individuals with autism spectrum disorder (ASD) often experience elevated stress during social interactions and may have difficulty forming and maintaining peer relationships. The autonomic nervous system (ANS) directs physiological changes in the body in response to a number of environmental stimuli, including social encounters. Evidence suggests the flexibility of the ANS response is an important driving factor in shaping social behavior. For youth with ASD, increased stress response and/or atypical ANS regulation to benign social encounters may therefore influence social behaviors, and, along with developmental and experiential factors, shape psychological outcomes. Methods The current study measured ANS response to a peer-based social interaction paradigm in 50 typically developing (TD) children and 50 children with ASD (ages 10–13). Respiratory sinus arrhythmia (RSA), a cardiac measure of parasympathetic influence on the heart, and pre-ejection period (PEP), a sympathetic indicator, were collected. Participants engaged in a friendly, face-to-face conversation with a novel, same-aged peer, and physiological data were collected continuously before and during the interaction. Participants also reported on state anxiety following the interaction, while parents reported on the child’s social functioning and number of social difficulties. Results Linear mixed models revealed that, while there were no diagnostic effects for RSA or PEP, older youth with ASD appeared to demonstrate a blunted parasympathetic (RSA) response. Further, increased severity of parent-reported social symptoms was associated with lower RSA. Youth with ASD reported more anxiety following the interaction; however, symptoms were not related to RSA or PEP response based on linear mixed modeling. Conclusions Physiological regulation, age, and social functioning likely influence stress responses to peer interactions for youth with ASD. Parasympathetic functioning, as opposed to sympathetic arousal, may be especially important in behavioral regulation, as older youth with ASD demonstrated atypical regulation and response to the social interaction paradigm. Future studies should help to further elucidate the developmental factors contributing to stress responses in ASD, the impact of physiological response on observable social behavior, and potential long-term consequences of chronic social stress in youth with ASD. Supplementary Information The online version contains supplementary material available at 10.1186/s11689-020-09354-x. Background Autism spectrum disorder (ASD) is a neurodevelopmental disorder now estimated to affect 1 in 54 children in the USA [1]. Symptoms of ASD are defined across two core diagnostic domains-impairments in social interaction and communication, and restrictive and repetitive patterns and behaviors [2]. Thus, individuals often have significant difficulty engaging with others, responding to novel social situations, and often find peer interactions to be stressful [3][4][5][6]. Nevertheless, humans are inherently social creatures, and despite social challenges, children must interact with peers nearly every day-in the classroom, on the playground, and in the community. The autonomic nervous system (ANS) is separated into two branches with primarily opposing functions, the parasympathetic (PNS) and sympathetic nervous systems (SNS). 
The PNS is described as the "rest and digest" branch by conserving energy as it slows heart rate (bradycardia), lowers blood pressure, decreases respiration, and increases intestinal activity, among other regulatory actions [7]. In contrast, the metabolically demanding SNS supports "fight or flight" responses for mobilization to threat, including, but not limited to, increased heart rate and respiration. The sinoatrial (SA) node, or pacemaker, of the heart is dually innervated by the PNS and SNS [8]. Non-invasive measures of cardiac function can identify the individual contributions of each branch, serving as useful markers of change in PNS and SNS activity (e.g., ) [9,10]. For example, changes in beatto-beat heart rate (heart rate variability; HRV) in conjunction with high-frequency range respiration, or respiratory sinus arrhythmia (RSA), indexes PNS influences. Further, the pre-ejection period (PEP), derived by impedance cardiography to detect volumetric changes, is a metric of time from electrical stimulation to mechanical opening of the aortic value and is a validated measure of pure SNS function (e.g. ) [10]. The ANS includes a neural network in which efferent signals originating from medullary brainstem regions [11,12] affect functioning of peripheral visceral organs, including the heart. The dually innervated SA node is said to be under tonic parasympathetic inhibition via the myelinated vagal nerve [11,13]. This "vagal brake" regulates behavior through maintenance and balance of PNS influence to the heart, thus allowing for changes in heart rate as parasympathetic regulation changes in response to changing environmental stimuli [11,13,14]. Therefore, in the presence of a stressor, removal of the "vagal brake" can allow for increases in heart rate and respiration without engaging the metabolically demanding SNS [15]. Vagal flexibility [16] is believed to play an important role in determining social behavior. According to the Polyvagal Theory [11], the parasympathetically-mediated Social Engagement System [13,14,17] is active during calm visceral states, thus promoting activation of the interconnected craniofacial nerves and their associated motor behaviors. These somatomotor components of the system control a number of actions relevant for social behavior, including but not limited to, eye movement (eye contact), vocalization (language), and head turning (social orienting) [18]. For example, individuals who demonstrate more cooperativity and sociability tend to have higher PNS regulation [19][20][21]. Additionally, young adults with higher vagal tone are more socially engaged than their peers with lower PNS regulation [22]. However, in cases of more severe threat, the SNS will activate, presumably inhibiting parasympathetic systems and blocking the Social Engagement System while initiating the fight or flight response to the stressor. It has been noted that many of the behaviors associated with the Social Engagement System, including eye gaze, language and vocalization production, and emotional expression [13,14,17], are often impaired in a number of neurological conditions, most notably, autism spectrum disorder [18,23]. The autonomic system may function atypically in ASD, evidenced by reductions in resting PNS regulation relative to TD peers [24][25][26]. Several studies additionally cite atypical PNS and SNS reactivity in response to stress (e.g.), [27][28][29][30]. 
In a study of school-aged children, those with ASD demonstrated lower RSA during interactions with unfamiliar peers; moreover, the reduction in RSA was associated with more social problems and problem behaviors [26]. A similar reduction in parasympathetic regulation, along with sympathetic hyperarousal, was seen in ASD children, compared to TD controls, when interacting with a familiar partner [30]. In the context of social play, higher resting PNS regulation has been associated with more gestures and sharing behavior in young children with ASD during play with an adult actor [31]. Therefore, social difficulties in ASD may be, in part, explained by failures of the parasympathetically-mediated vagal nerve to efficiently regulate the Social Engagement System, where PNS withdrawal and/or SNS hyperarousal inhibits the facial nerves and associated motor neurons responsible for many social behaviors [18]. Stress reactivity is dynamic, with physiological responsivity influenced by a number of factors including age or development (e.g., [32]) and social variables (i.e., peer support) (e.g.), [33]. During a naturalistic play protocol, the Peer Interaction Paradigm (PIP) (3), many youth with ASD demonstrate elevated stress reactivity of a neuroendocrine system, the hypothalamic-pituitaryadrenal (HPA) axis, relative to TD peers. Moreover, these effects are further modified by age. In two studies of 8 to 12 year olds with ASD or TD, older youth with ASD showed a greater stress response to the social interaction from baseline relative to their typically developing (TD) peers and younger youth with ASD [3], suggesting the age effects may be related to other social factors such as increased insight and more exposure to negative social experiences in the older youth. Similar developmental patterns have been noted in the autonomic system, where school-aged TD children exhibited differing RSA suppression responses to stress according to their age [34]. Specifically, younger children (8-11 years) demonstrated greater suppression to a cognitive stressor compared to older youth (12-15 years). However, no differences were noted between age groups in the social task, involving hearing an argument [34]. Collectively, these findings provide evidence that age may influence physiological stress responses, with further research needed to elucidate these possible relationships between autonomic stress reactivity, social functioning, and development in youth with ASD. The current study sought to extend previous studies by examining physiological stress response to a friendly social encounter (TSST-F) in youth with and without ASD. Specifically, we measured PNS and SNS responses over time-from baseline, throughout the interaction, until recovery-to examine whether youth with ASD showed a unique pattern of response relative to TD peers. Given previous research in similar physiological systems (HPA axis) using peer interaction paradigms (e.g.), [6], as well as noted developmental effects on ANS regulation and responsivity (e.g.), [35,36], we hypothesized youth with ASD, especially older children, would show heightened stress and arousal to the current social interaction paradigm. Additionally, we expected findings would be consistent with the Polyvagal Theory and Social Engagement System (e.g.), [14], such that social difficulties in ASD would be associated with a dysregulated physiological state. 
Specifically, we predicted that children with ASD would show: (1a) autonomic hyperarousal demonstrated by lower PNS and elevated SNS, representing a chronic mobilization state, (1b) atypical autonomic flexibility in response to the social interaction with less change in PNS response and heightened SNS reactivity; (2) stress response across the interaction would be modified by age, with older age associated with elevated physiological arousal; and (3) ANS hyperarousal would be associated with more severe social symptoms and state anxiety. Participants Participants included 100 children 10 to 13 years of age, with ASD (n = 50, mean age = 11.48) or typical development (n = 50, mean age = 11.35). Gender was matched between groups, with 14 females in each group. As part of a longitudinal study of pubertal development [37], families were enrolled from the community within a 200-mile radius through research registries, universitywide announcements, autism-and child-development clinics, and social media. The sample consisted of 85.0% Caucasian, 6.0% Black/African American, 1.0% Asian, and 8.0% Mixed Race. Moreover, 7.0% of the sample was Hispanic. Parental education served as a proxy for socioeconomic status; 50% of parents had a bachelor's or master's, 30% associate's or high school, and 20% doctorate or professional. An estimated 42% of children with ASD have been reported to take at least one psychotropic medication [38]; thus, study criteria did not require participants to be medication-naïve in order to be more representative of the overall ASD population. However, participants prescribed medications that may directly affect the ANS (e.g., stimulants) [39] were not enrolled. In total, 18 children with ASD were on medications at the time of the study, including primarily antihistamines, melatonin, or selective-serotonin reuptake inhibitors (SSRIs). Three TD participants were taking over-the-counter antihistamines at the time of enrollment for management of seasonal allergies. Diagnostic criteria All participants were required to have an estimated IQ ≥ 70, as measured by the Wechsler Scale of Abbreviate Intelligence (WASI-II), [40] in order to ensure adequate language to meet the social demands of the interaction task and to ensure the ability to complete self-report forms associated with the larger longitudinal study [37]. Parents completed the Social Communication Questionnaire-Lifetime (SCQ-L) [41], a screening questionnaire for identifying symptoms of ASD showing good sensitivity (0.88) and specificity (0.72) [42]. In order to be included in the study, TD youth required a score < 10 on the SCQ-L based on parent-report. Additionally, TD participants could not have a biological sibling with ASD. Diagnosis of ASD was based on DSM-5 criteria [2] and established by (1) previous diagnosis by psychiatrist, psychologist, or clinician with autism expertise; (2) current clinical judgment; and (3) corroborated by the Autism Diagnostic Observation Schedule, 2 nd edition [43], administered by research-reliable personnel. Procedures The study was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). The Vanderbilt Institutional Review Board approved all study procedures. In compliance with the Institutional Review Board, informed written consent and verbal assent was obtained from all parent/guardians and children, respectively, prior to inclusion in the study. The study was completed across two visits to a university research lab. 
Diagnostic and cognitive measures were administered at visit 1. Parents also completed the Child Behavior Checklist (CBCL) [44] and Social Responsiveness Scale (SRS-2) [45]. At visit 2, participants were exposed to the social interaction protocol, the Trier Social Stress Test-Friendly (TSST-F) [46], and completed all physiological data collection Trier Social Stress Test-Friendly The Trier Social Stress Test-Friendly [46] is an alternative form of the original TSST [47], which has been shown to elicit a physiological stress response from social evaluative threat. The TSST-F, however, consists of a more "friendly" protocol, in which participants describe him or herself and/or a favorite book, movie, hobbies, or other interest in front of a novel peer of the same sex who shows encouragement (smiles, nods, shows interest, maintains eye contact) and asks followup questions. The "friendly" TSST, unlike the original TSST, produces no physiological stress response in typically developing individuals [46,48] and parallels other peer interaction paradigms [3]. After a 5-min resting baseline period when participants were asked to sit quietly, the instructions were read aloud and youth given the opportunity to prepare what they would like to share during a 5-min preparation period. During this preparation period, research personnel are not to engage with the participant, and if the child asks questions, personnel simply repeated the instructions, that they are "to prepare what they would like to say to the other child." Following the prep period was the 10-min social interaction with the novel peer. Lastly, a recovery period measured return to baseline, and participants were again asked to sit quietly and calmly for 5 minutes. Physiological data was collected continuously throughout the paradigm, including at preparation, through the social interaction (divided into two, 5-min segments-part 1 and part 2), and recovery (see Fig. 1). The 20-min TSST-F paradigm requires reciprocal social interaction with a novel trained peer, conceptualized to be a more potent stressor for children with ASD. Peers were thoroughly trained using a manualized protocol, review of videotaped TSST-F sessions, and practice with senior research personnel prior to working with a participant. Furthermore, the peers were monitored to maintain consistent implementation of the protocols. Each administration of the TSST-F was recorded for behavioral coding purposes, and videos were routinely checked to ensure peers maintained social interest without talking for > 50% of the conversation. If deviations in the protocol were noted, booster training sessions were promptly provided. Dependent measures Social symptoms and perceived anxiety The Child Behavior Checklist (CBCL) [44] is a parent-report measure of behavioral and emotional problems in children ages 6-18 years. Scores are rated on a Likert scale from 0 ("not true") to 2 ("very often true"). The CBCL has demonstrated good-to-excellent reliability in ASD, with individual scale reliabilities ranging from 0.69 to 0.94, including a reliability of 0.84 for the Social Problems domain [49]. Due to the a priori hypotheses regarding social symptoms and physiology, we specifically examined the social problems subscale. Previously, youth with ASD demonstrated significantly elevated scores on the social problems subscale relative to controls [50]. Raw scores were used in analyses, as recommended in the CBCL manual [44]. 
The State-Trait Anxiety Inventory for Children (STAI C) [51] is a self-report measure of anxiety, completed by participants, in which an individual describes how he/ she is currently feeling (state) and how he/she usually feels (trait). Previous studies have found youth with ASD are able to identify anxiety following stressors [52][53][54], including reporting elevated state anxiety following a social interaction [55]. The Social Responsiveness Scale (SRS-2) [45] is a parent-report questionnaire developed to identify the severity of ASD symptoms across several domains. Domain and total scores are presented as standardized T scores. The SRS shows high sensitivities (0.74 to 0.80) and specificities (0.69 to 1.00) for ASD [56]. Analyses included SRS total scores in order to examine total range of ASD-related symptoms. Heart rate variability Cardiac autonomic measures were collected using Mind-Ware Mobile Impedance Cardiograph units (MindWare Technologies LTD, Gahanna, OH) for synchronized electrocardiography (ECG) and respiration data collection using a seven-electrode configuration. Participants were told they would be wearing "stickers" throughout the protocol, and a color cartoon was provided to illustrate the location of the electrodes. Participants were given the opportunity to place an electrode on their hand prior to placement, and a five-minute acclimation period followed electrode placement to allow children time to become comfortable with the sensory aspects of the protocol. All 100 participants agreed to complete the heart rate collection and were able to comfortably tolerate the electrode placement. Resting ANS regulation was acquired using a 5-min baseline collection period in which participants were instructed to sit quietly without engaging in other tasks. During the social interaction, cardiac measures were collected continuously, calculated on a minute-by-minute basis, and averaged into 5-min epochs for each major period of the paradigm-baseline, prep, social interaction (two, 5-min segments-parts 1 and 2), and recovery (see Fig. 1). Parasympathetic regulation was indexed using respiratory sinus arrhythmia (RSA) and derived in accordance with the guidelines set forth by the Society for Psychophysiological Research committee on heart rate variability [9,57]. ECG signal was sampled at 500 Hz and analyzed using the Heart Rate Variability Software Suite provided by MindWare Technologies (MindWare Technologies LTD, Gahanna, OH). RSA was quantified as the integral power within the respiratory frequency band (0.12 to 0.40 Hz), and respiration was monitored by impedance cardiography [58]. The respiration signal was displayed to ensure that the values were within the designated frequency band. Respiratory frequency was confirmed to lie within the high frequency/RSA band (0.12-0.40 Hz) for all participants. Of the total collected data, 1.0% were excluded due to excessive motion artifact or cardiac arrhythmias. RSA was measured in ln (ms 2 ). Pre-ejection period (PEP) was collected using impedance cardiography and represents the interval from electrical stimulation to the mechanical opening of the aorta. PEP was processed with MindWare Technologies Impedance Cardiography Analysis Software (MindWare Technologies, LTD, Gahanna, OH) and calculated as the distance (in ms) from the ECG Q-point of the QRS complex to the B point of the impedance waveform, which corresponds with the time from ventricular depolarization to aortic valve opening [10]. 
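To make the RSA quantification described above concrete, the following is a minimal illustrative sketch (not the MindWare pipeline actually used in the study) of how RSA can be estimated from detected R-peak times: the inter-beat-interval series is resampled onto an even time grid, its power spectral density is computed, the power is integrated over the 0.12-0.40 Hz respiratory band, and the natural log is taken. Function and variable names are assumptions introduced for illustration only.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def rsa_from_rpeaks(r_peak_times_s, fs_resample=4.0, band=(0.12, 0.40)):
    """Estimate RSA as ln(high-frequency power, ms^2) from R-peak times in seconds."""
    # Inter-beat intervals (ms) and the times at which they occur
    ibi_ms = np.diff(r_peak_times_s) * 1000.0
    ibi_times = r_peak_times_s[1:]

    # Resample the irregularly sampled IBI series onto an even time grid
    grid = np.arange(ibi_times[0], ibi_times[-1], 1.0 / fs_resample)
    ibi_even = interp1d(ibi_times, ibi_ms, kind="cubic")(grid)

    # Power spectral density of the de-meaned IBI series (Welch's method)
    freqs, psd = welch(ibi_even - ibi_even.mean(), fs=fs_resample,
                       nperseg=min(256, len(ibi_even)))

    # Integrate the PSD over the respiratory (high-frequency) band and take ln
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    hf_power = np.trapz(psd[in_band], freqs[in_band])  # ms^2
    return np.log(hf_power)                            # ln(ms^2), matching the units reported
```

In practice this would be applied to each one-minute epoch and then averaged into the 5-min periods of the paradigm, as described above.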
PEP was ensemble-averaged for each one-minute epoch by the MindWare software, and B-point was calculated at 55% of the R-Z interval (time to dZ/dt peak) [59]. The QRS complex and dZ/dt signal were confirmed by visual inspection (RAM). Due to equipment malfunction or excessive artifact in the impedance signal, 14 participants had incomplete PEP data (TD, n = 6, ASD, n = 8,χ 2 (1) = 0.33, p = 0.56). An additional 2.0% of total data was excluded due to values less than 70 ms, which falls below physiological norms [60] and is suggestive of equipment or measurement error. Statistical analysis Demographic, diagnostic, and inclusion variables were compared between ASD and TD groups using independent sample t tests. The Welch degree of freedom approximation was used to correct for violations of homogeneity of variance. RSA and PEP values were normally distributed and free of extreme outliers. Hypotheses were tested using linear mixed models. Time was modeled continuously with linear and quadratic terms, calculated from five time points-baseline, prep, social interaction part 1, social interaction part 2, and recovery-with baseline centered as time zero. To examine whether ASD diagnosis was associated with autonomic hyperarousal, we tested the main effect of diagnosis, followed by the diagnosis*time interaction to determine if change in RSA or PEP from baseline differed by diagnosis. Additionally, we tested the main effect of age to examine the hypothesis that older age would be associated with increased stress (lower RSA and PEP). Subsequently, an age*diagnosis interaction tested whether diagnostic groups showed differences in age effects on RSA/PEP. Finally, exploratory models were investigated with CBCL social problems, SRS total problems, and STAIC state anxiety as potential covariates or effect modifiers. All statistical analyses were conducted using IBM SPSS Version 26 [61]. Demographics Age did not differ between ASD and TD groups (see Table 1). While there was a significant difference based on IQ, the ASD group was well within the average range of functioning. As both groups fell within the average range for IQ, we did not expect significant effects of IQ on autonomic response. Nevertheless, all models were also run while controlling for IQ, and the results were largely unchanged with no differences in the significance level of the findings (see Supplemental Tables). Children with ASD were rated by their parents as having significantly more social symptoms on the CBCL and SRS. The ASD group also self-reported greater anxiety after the TSST-F relative to TD youth. Within the ASD group, medication status (taking medication vs. no medications) was not associated with any of the demographic or outcome variables. RSA regulation and responsivity The initial model with diagnosis, time, and nonlinear age to model RSA was significantly improved relative to a trivial model with constant RSA level (χ 2 (4) = 46.55, p < 0.001). Wald tests using type 3 sum of squares showed little evidence for a main effect of diagnosis (F(1,99) = 0.39, p = 0.53). Further, addition of the diagnosis*time interactions to the model was not significant (χ 2 (2) = 4.41, p = 0.11; see Table 2 for parameter estimates), suggesting the rate of change in RSA with respect to time did not differ in the ASD group relative to the TDs (Fig. 2). 
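As a rough illustration of the modelling approach described in the statistical analysis section (linear and quadratic time terms with baseline centred at zero, diagnosis, age, a diagnosis-by-time interaction, and repeated measures nested within participants), a comparable model can be specified in Python with statsmodels. This is a hedged sketch, not the SPSS analysis actually run in the study; the file name, data-frame layout, and column names are assumptions, and the covariance structure may differ from the published models.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per participant per epoch, with columns
# subject_id, rsa, time (0 = baseline, 1 = prep, 2 = interaction part 1,
# 3 = interaction part 2, 4 = recovery), diagnosis ("ASD"/"TD"), and age in years.
df = pd.read_csv("tsstf_rsa_long.csv")  # hypothetical file name

# Random intercept per participant; fixed linear + quadratic time, diagnosis,
# age, and the diagnosis-by-time interaction.
model = smf.mixedlm(
    "rsa ~ time + I(time ** 2) + C(diagnosis) + C(diagnosis):time + age",
    data=df,
    groups=df["subject_id"],
)
result = model.fit(reml=True)
print(result.summary())
```

An analogous specification with `pep` as the outcome would mirror the PEP models reported below.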
A second model including diagnosis, time, age, and an interaction term for diagnosis by age showed a main effect for age (p = 0.04), indicating increased age is associated with higher RSA in the TD group. Further, there was a significant diagnosis*age interaction (p = 0.02; see Table 3 for model parameter estimates); thus, the rate of change in RSA with respect to age is slower in ASD relative to TD (Fig. 3). The effect of age on RSA for the ASD group was equal to a change of − 0.13 ms 2 per year, while in the TD group the RSA change equaled 0.22 ms 2 per year. Exploratory models adding the social problems domain of the CBCL to the base model with diagnosis, age, and time significantly improved fit (χ 2 (1) = 8.63, p = 0.003); however, CBCL social problems*time effects were not significant predictors of RSA (χ 2 (2) = 3.61, p = 0.16). Thus, mean RSA differed based on the severity of scores on the social problems domain (Fig. 4), but the change in RSA over time (slope) did not. Models for STAIC state anxiety (χ 2 (1) = 0.84, p = 0.36) or SRS total score (χ 2 (1) = 1.184, p = 0.28) were not a significant improvement over the base model with diagnosis, time, and age. Finally, exploratory ad hoc models investigated possible three-way interactions between diagnosis, age, and time. There was not sufficient evidence for a significant three-way, nonlinear interaction for diagnosis, age, and time (χ 2 (2) = 4.26, p = 0.12); however, the current sample may have been underpowered to test these higherorder interactions. Therefore, an estimate of effect size was calculated using a recently proposed effect size index (S [62];). This index is equal to ½ Cohen's d. The effect size for the interaction was S = 0.160, which falls in Cohen's small to medium effect range [63]. PEP regulation and responsivity The hypothesized model of diagnosis, time, and age was significant relative to a trivial model with constant PEP (χ 2 (4) = 9.501, p = 0.05). The addition of diagnosis*time interaction terms were not significant (χ 2 (2) = 1.345, p = 0.51). A second model including an interaction term for diagnosis by age was not significant (χ 2 (1) = 0.591, p = 0.44). See Tables 2 and 3 for detailed model results. Further models with social symptoms and anxiety were non-significant in predicting PEP (all p > 0.05). Discussion The primary objective of the current study was to determine whether youth with ASD showed differential physiological responses to a naturalistic social interaction task. Results revealed a profile of stress and arousal in youth with ASD in which physiological system, age, and social symptoms may all influence peer interactions. The PNS appeared to be sensitive to developmental effects, with older ASD youth evidencing lower RSA. In contrast, we did not find evidence that the SNS was sensitive to the TSST-F or differences associated with ASD symptoms. Youth with ASD may be in a state of autonomic hyperarousal from PNS withdrawal, which may not only influence social behavior, but can also increase risk for stress-related conditions (e.g., [64][65][66][67][68]), further emphasizing the important implications for clearly defining ANS functioning in ASD youth. RSA and PEP stress response across the paradigm did not differ between youth with ASD and TD. In regard to PEP, the SNS is often considered a second line of defense, only activated during more severe conditions of stress [15]. 
The PNS is more flexible, facilitating autonomic responses to dynamic conditions via changes in vagal tone, or suppression and activation of the "vagal brake" (e.g., [14]). Therefore, the PNS as measured by RSA would be expected to change in response to a wider variety of stimuli. The lack of diagnostic effects on RSA regulation and response did not support our hypotheses and conflicts with previous studies of similar-aged youth with ASD. For example, Van Hecke and colleagues (2009) reported that 8-12-year-old youth with ASD demonstrated lower RSA overall and, in particular, a decrease in RSA to a video of an unfamiliar adult. Similarly, a relatively small sample of school-aged youth with ASD was reported to show reduced RSA across baseline, cognitive, and social tasks [69]. However, others have found no differences in heart rate variability within a similar age range, while also noting that age had a significant effect on many physiological stress variables [70]. It may be important to consider age when examining stress responses, as it has been posited that children of certain ages may find certain tasks more or less stressful [34]. Previous research in other arousal systems, namely, the HPA axis, suggests individuals with ASD do experience elevated stress to social engagement with peers [3,6,71,72]. While there were no diagnostic effects for RSA response to the TSST-F across preparation, social interaction, or recovery contexts, there were notable interactions with age suggesting developmental factors may be contributing to PNS function. Specifically, the increase in RSA with older age was blunted for the ASD group relative to the TD group, despite expected positive developmental trajectories of the PNS (e.g., [36,73]). The lack of change in RSA as a function of age in the ASD group suggests a reduced PNS response to social engagement in the older ASD youth. Such an interaction would be consistent with previous studies of HPA axis responsivity in school-aged children (8-12 years) with ASD, which show that older children with ASD have significantly elevated stress responses to social play relative to younger children with ASD and same-aged TD peers [3,4]. Figure 5 suggests similar trends are observed in RSA, with older ASD youth demonstrating less PNS regulation to social interaction compared to younger children with ASD and same-aged TD peers. While there was not sufficient evidence for a significant three-way, nonlinear interaction, the effect size fell in the small to medium range, and follow-up studies with larger samples are necessary. While it is possible that differences may arise from an inherent atypicality in the physical development of the ANS as individuals with ASD age, there is more likely an alternative explanation, such as previous social experiences (e.g., bullying) [74,75] or increased insight into social difficulties [76], which shape future social anxiety (e.g., [77]) and contribute to these age effects in ASD. The TSST-F was designed to be a relatively benign engagement protocol, meant to emulate a naturalistic face-to-face conversation with another peer. In the context of the Polyvagal Theory and Social Engagement System [13,14,17], this non-stressful situation should be associated with calming physiological responses and inhibition of mobilization behaviors, which in turn would promote behaviors associated with social engagement. Those who do not demonstrate the expected increase in vagal tone may be in a more mobilized state favoring hyperarousal, which inhibits social engagement (Fig. 4).
Fig. 4. Predicted RSA by social symptom severity. The figure represents predicted RSA according to the number of reported social problems on the CBCL while controlling for age and time. Both groups demonstrate a negative association between RSA and social symptoms (solid and dashed lines), such that the lowest RSA was associated with more severe social problems. Markers represent average RSA over the entire task, and slopes (lines) represent the projected linear change in RSA by social symptom severity as estimated from the linear mixed model.

Regarding the PNS, the severity of social problems was related to parasympathetic regulation, regardless of diagnostic status. Specifically, an increased number of parent-reported social problems was associated with lower RSA. These findings are consistent with previous literature, such that RSA has frequently been associated with impairments in social functioning (e.g., [78]). In youth with ASD especially, lower baseline RSA and blunted parasympathetic increase to social interaction have been related to the severity of social symptoms [24,26,29,30,79]. Variability and flexibility of these arousal systems are necessary for maintaining dynamic, adaptive relationships with the environment [16,80,81]. Therefore, decreased variability, which is reflective of limited adaptability, is often associated with pathological conditions and may represent a state of persistent vigilance or preparation for threat mobilization [81]. While youth with ASD reported more anxiety following the interaction, self-reported state anxiety did not predict any of the physiological responses to the task. These findings are consistent with other recent work investigating perceived anxiety to social interaction [55]. It is important to note that the lack of an association between physiological arousal and perceived anxiety suggests distinct systems. Despite the lack of an association, it must be underscored that anxiety symptoms are prevalent in ASD, estimated to affect between 20 and 80% [82][83][84]. Moreover, chronic, atypical physiological arousal has been frequently cited in a number of anxiety conditions (e.g., [64,85,86]). Therefore, heightened responsivity to benign stimuli, though perhaps not immediately associated with perceived anxiety, may contribute to persistent anxious tendencies (e.g., trait anxiety) and the development of anxiety conditions, especially as youth with ASD age.

Limitations and future directions

Despite the rigorous approach and compelling findings across both branches of the ANS, the current study has limitations. First, although the sample was comparable to many other studies in ASD, we lacked sufficient power to examine higher order interactions, such as three-way interactions with diagnosis, social functioning, and physiology, which may have further elucidated biobehavioral profiles in youth with and without ASD. Second, social symptoms were solely measured via a parent-report questionnaire reflecting general functioning, whereas previous studies in other arousal systems (HPA axis) have examined observable social behavior during the interaction [3,4,6]. Expanded studies should similarly integrate behavioral observation in order to more precisely identify whether ANS functioning is directly associated with social engagement behaviors. Additionally, the age range in the current study was limited to school-aged, preadolescent, or early-adolescent youth and only assessed age effects at a single time point.
Future studies across a wider age range, which follow youth longitudinally through developmental transitions, may further demonstrate the effects of age and related factors (i.e., insight, peer experiences) on stress responsivity. Finally, the PNS and SNS systems do not operate in isolation but are interconnected. Thus, considering their interactions within individuals will likely increase insight into unique physiological responses in ASD and their relationships with social behavior beyond studying a single system examined in isolation. Conclusion The current study supports a growing literature linking atypical physiological reactivity in ASD during relatively benign social situations. The results uniquely demonstrate evidence for reduced parasympathetic functioning, especially in older youth with ASD, during a naturalistic interaction with a same-aged peer. As children are confronted with frequent social encounters with peers, the implications for atypical physiological arousal to these daily occurrences are numerous. Chronic stress might increase susceptibility to a number of conditions, including gastrointestinal problems (e.g.), [87] or internalizing disorders (e.g., [88,89]), and impaired social engagement behaviors may increase social isolation and loneliness [90,91], thereby increasing the risk for depression or suicidality (e.g., [92]). Future research should aim to further explain the relationships between physiology and social functioning, especially through the course of development, in order to define physiological reactivity as a potential predictive marker of physical and behavioral health risk in children and adolescents with ASD.
2021-01-10T14:38:47.082Z
2021-01-09T00:00:00.000
{ "year": 2021, "sha1": "f48735bfba00cf8d0e02444be190fdb6d9a86f27", "oa_license": "CCBY", "oa_url": "https://jneurodevdisorders.biomedcentral.com/track/pdf/10.1186/s11689-020-09354-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c30d539cc8c0613bb784ddf21edc2807c0049b6a", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
254920959
pes2o/s2orc
v3-fos-license
Exploring the Role of Language Families for Building Indic Speech Synthesisers Building end-to-end speech synthesisers for Indian languages is challenging, given the lack of adequate clean training data and multiple grapheme representations across languages. This work explores the importance of training multilingual and multi-speaker text-to-speech (TTS) systems based on language families. The objective is to exploit the phonotactic properties of language families, where small amounts of accurately transcribed data across languages can be pooled together to train TTS systems. These systems can then be adapted to new languages belonging to the same family in extremely low-resource scenarios. TTS systems are trained separately for Indo-Aryan and Dravidian language families, and their performance is compared to that of a combined Indo-Aryan+Dravidian voice. We also investigate the amount of training data required for a language in a multilingual setting. Same-family and cross-family synthesis and adaptation to unseen languages are analysed. The analyses show that language family-wise training of Indic systems is the way forward for the Indian subcontinent, where a large number of languages are spoken. using accurate <text, audio> pairs aligned at the sentence level. However, building E2E synthesisers for Indian languages is still a challenge due to the following reasons: 1) India has a wide linguistic diversity, with about 1369 rationalised languages and dialects [3]. Of these, 121 languages are spoken by more than 10,000 people in each language. There are 23 official languages, including English. Building a TTS synthesiser for each language from scratch is difficult, given so many languages. 2) There is a lack of accurately transcribed data for training, which is crucial for a TTS system. This is a bottleneck, especially in the E2E framework, which requires tens of hours of training data to produce high-quality speech [4]. 3) There are about 13 scripts that are used for Indian languages. This leads to a significant increase in vocabulary size in a multilingual context. Most Indian languages can be broadly classified into two language families-Indo-Aryan and Dravidian. In [5], datasets of languages belonging to the same language family were combined for training. This kind of pooling collectively increases the amount of data, with the added advantage of capturing a wide variety of contexts. A multi-language character map (MLCM) [6] and a common label set (CLS) [7] for Indian languages were developed to reduce the vocabulary size. Speaker embedding in terms of x-vector [8], [9] was also included during training, primarily for speaker selection during synthesis. Native and cross-lingual syntheses were performed. These systems were then adapted to limited data of a new speaker to synthesise the audio in that speaker's voice. The current work extensively studies the role of language families in training Indic systems, especially when resources are scarce. Most studies in the literature use a large amount of training data (ranging from 25-1250 hours for monolingual speech synthesis), or at least a pre-trained model that has been trained with a large amount of data. In the current work, we train an initial TTS system in a low-resource scenario (in terms of the amount of clean data and speaker coverage). We use a maximum of 20 hours of multispeaker multilingual data (one speaker for each language). 
The experiments performed in this work attempt to answer the following questions:
- Is it possible to achieve a good-quality TTS system in such a scenario?
- Can we reduce the compute power as a consequence of using less data without significant degradation in speech quality?
- Suppose we have to train a TTS system for a new language with limited data; what are the best strategies that can be adopted for system building?
The studies in this work attempt to answer these questions by focusing on the role of language families in training TTS systems. We build upon the work in [5] and systematically analyse different scenarios. The novelty of this work is highlighted here:
- This work is one of the first attempts to study the importance of language families in the context of speech synthesis.
- We compare language family-specific Indo-Aryan (IA) and Dravidian (Dr) models with a combined Indo-Aryan+Dravidian (IA+Dr) system.
- We also assess the performance of models trained in data-stressed situations. We reduce the training data used per language in the multilingual voice.
- Zero-shot synthesis: We study the effect of same language family and cross-family synthesis for an unseen language.
- Given a limited amount of data for an unseen language, we study different scenarios of adaptation-same family, cross-family and IA+Dr adaptation.
- We highlight the differences between Indo-Aryan and Dravidian languages and quantify the differences by phonotactic analysis using byte-pair encoding (BPE) [10], [11] and language modelling.
In this work, multilingual and multispeaker voices (referred to as generic voices) are trained using single-speaker data per language. Most of the experiments do not use speaker embeddings as we do not aim to synthesise speech in a particular speaker's voice. The primary objective is to preserve language characteristics in terms of phonotactics rather than speaker characteristics. For completeness, the results of systems trained with speaker embeddings are presented in the supplementary material (Sections S1, S2 and S4). The performance of systems is quantified using objective and subjective measures and supported by qualitative analysis in terms of informal listening tests. The rest of the paper is organised as follows. Section II highlights the motivation to train systems based on language families. The literature on related work is discussed in Section III. Sections IV and V present the experiments and analysis of the various TTS systems. Observations are summarised in Section VI, and the phonotactic analyses of languages are presented. The work is concluded in Section VII. II. MOTIVATION The written scripts of Indian languages can be traced to the Brahmi script. Although different Indian languages may have different grapheme representations, they share a common set of sounds. Most languages have about 11-15 vowels and 33-35 consonants, except Tamil, which has representations for only 23 consonants. Despite a common set of phones, phonotactics across languages varies. Indian languages have simple phone clusters and are akshara-based [12]. In comparison, English has complex phone clusters such as twelfth and strength. Phone clusters such as sr and ph (aspirated p) rarely occur in English. Similarly, phone clusters such as ion and ous are quite rare in Indian languages [13]. Unlike English, Indian languages are replete with geminates [14]. Phonotactic differences are especially evident across language families.
Most Indo-Aryan languages are characterised by schwa deletion, which is the absence of the inherent short vowel a [15]. Agglutination, which refers to the phenomenon of combining multiple words, is very common in Dravidian languages [16]. Language-specific phones also contribute to these phonotactic differences. Dravidian languages have many liquids and distinguish between alveolar, dental and retroflex places of articulation. Dravidian languages are also characterised by their lack of distinction between aspirated and unaspirated stop consonants. Telugu, Malayalam and Kannada scripts have representations for aspirated stop consonants primarily to accommodate the use of borrowed words from Sanskrit. Motivated by these differences, this work explores the effectiveness of training systems based on language families. An effective solution to multilingual training is to collect data of a single person speaking multiple languages, as performed in [17], [22], [29], [31], [37], [45], [46], [47], [48]. This is especially essential in the unit selection synthesis (USS) paradigm [17], [29], where waveforms are directly concatenated. Collecting single-speaker multilingual speech data may not always be feasible, and extension to new languages becomes restrictive. Most studies address this bottleneck by combining several monolingual databases recorded by different speakers. One study attempts to generate a polyglot database in a target voice by cross-lingual voice conversion [23]. Similarly, in [36], bilingual data is generated using voice conversion and the training data is augmented to build a TTS system capable of code-mixing. [39] trains a USS synthesiser utilising a mixture of monolingual corpora and then transforms the synthesised utterances to a target voice. A popular approach in the hidden Markov model (HMM) based TTS paradigm is to map/share attributes across languages, such as phone mapping [18], [19], [20], [31], and sharing of HMM states [30]. In [21], speaker and language-specific characteristics are modelled using separate transforms. In the domain of neural networks, speaker-independent and language-independent layers of a deep neural network (DNN) are shared. These layers serve as a bridge between speakerspecific and language-specific layers [22]. Similarly, [40] trains a multilingual bidirectional long short-term memory (BLSTM) neural network in which the hidden layers across different languages are shared. In contrast, the input and output layers are considered to be language-dependent. [41] trains an LSTM-RNN (recurrent neural network) based system, wherein language and speaker variations are modelled using cluster adaptive training and speaker-dependent layers, respectively. [43] proposes a multilingual phoneme inventory and trains a multilingual and multispeaker LSTM-RNN model. Recent literature on multilingual training is mainly in the E2E framework. The text processing module or the text encoder is modified to enable multilingual training. [32] explores two types of text encoders-(a) a single multilingual encoder with language embedding and (b) a separate encoder for each language. In [49], the text is represented as a sequence of bytes, thus rendering the text encoder language-independent. In a few experiments, international phonetic alphabet (IPA) based features, more widely known as phonological features, are used in multilingual training [27], [35], [50]. 
As TTS systems are trained in multispeaker and multilingual settings, additional embeddings such as speaker and language embeddings are included during training [24], [28], [33]. This enables the synthesis of any language in any speaker's voice. To improve cross-lingual synthesis, a popular technique is to disentangle speaker information and linguistic content [26], [51], [52], [53]. In [26] and [51], an adversarial loss is included to suppress speaker-dependent information. [52] uses domain adaptation objective to obtain language-independent speaker embedding and inter-speaker perceptual similarity to train a speaker encoder. [53] attempts to disentangle speaker and spoken content by minimizing the mutual information between them. In this context, [38] summarises various techniques used for the cross-lingual voice conversion task in the voice conversion challenge 2020. Experiments in [51] and [59] aim to remove foreign accents in cross-lingual synthesis. However, a non-native accent need not be an undesirable entity [19] and can be considered to mimic the real-world scenario. Hence, no attempts have been made to remove the accent in the synthesised speech in the current work. In the context of multilingual E2E training for Indian languages, [54] trains convolutional attention-based TTS with language, speaker and gender embeddings. In [56], pre-training strategies are explored between source and target languages, which enable the training of multilingual voices with a reduced amount of data. In [58], byte inputs are mapped to spectrograms and experiments are performed with 40+ languages, including Hindi, Tamil and Telugu. Most of the above literature uses a huge amount of data (ranging from tens to hundreds of hours) for training generic voices. For example, [58] uses close to 900 hours for training generic systems. In the current work, we use a maximum of 20 hours of accurately transcribed data for training an initial generic voice. Most importantly, in contrast to the above-presented literature, we explore the role of language families in system training. Although the TTS systems in [5] are trained based on language families, the relevance of training them in this manner is not explored. A recent study [60] observes that language family classification may not be an effective basis for choosing (source) languages for training a generic TTS. However, our own experiences with multilingual TTS systems and observations in [18] find that the intelligibility of synthesised speech depends on the similarity between any target language and source language(s). During the review process of this paper, a recent and parallel work has performed extensive studies on how various factors in training affect polyglot synthesis quality [61]. Specifically, the focus is on factors such as gender, speaker composition, and language family affiliation. Analysis of unseen language synthesis is performed by adding language variants belonging to the same and different language families in the training data. The paper concludes that adding languages to the training data closer to a target language is better than adding a dissimilar language (similar to observations drawn in [18]). Another motivation for undertaking this study is the lack of comprehensive literature on building E2E TTS systems for Indian languages. Although there is an impetus in building E2E synthesis systems, with new architectures and techniques being developed, there is very little work on addressing the challenges specific to the Indian context. 
There is a need to evaluate the problem from a low-resource scenario and a multilingual perspective. The authors hope this study will bridge the gap and help future researchers. IV. TRAINING INDIC TTS SYNTHESISERS This section describes the modules in building generic TTS systems and their adaptation to new speakers (and languages). Details about the datasets, the representation of multiple scripts and the E2E training of systems are presented. A. Datasets The datasets used in this work are part of the Indic TTS database [62]. 3 Each dataset consists of speech waveforms and the corresponding text in UTF-8. Details of the datasets are given Table I. Totally 9 languages are considered-5 belonging to the Indo-Aryan language family (Bengali, Gujarati, Hindi, Odia, Rajasthani), and 4 belonging to the Dravidian language family (Kannada, Malayalam. Tamil, Telugu). Each language has its own unique script, except Hindi and Rajasthani, which share the Devanagari script. It is to be noted that only 2 speakers (1 female and 1 male) are considered in each language. Gujarati and Tamil are considered unseen languages as they are not used in training the generic voices. Since attention in encoder-decoder architecture is not learnt effectively on long training utterances, utterances whose duration is less than or equal to 15 seconds are considered. The amount of training data used from each dataset is given in Table I. B. Text Representation Average voice models are trained in a multilingual context by combining data across languages and training a single network. Since each language has its own unique script, the combined vocabulary size becomes large, leading to poor training of the TTS system. Hence, the texts in different native scripts are mapped to a common representation. The multi-language character map (MLCM) [6] and the common label set (CLS) representations [6], [7] are used. Both MLCM and CLS operate on the principle that acoustically similar subword units across languages are given a common representation. While MLCM is designed for character-based representation, CLS is used for phone-based representation. In the MLCM representation, the text is first split into its constituent characters and then mapped to a set of token numbers using MLCM. To obtain the phone-based CLS representation, the unified parser for Indian languages [63] is used. An example of the UTF-8 word in the corresponding language and its character and phone-based representations is given in Table I. Both MLCM and CLS aid in training systems with a compact representation, with 68-72 tokens rather than 250+ tokens when 4 languages are pooled, or 500+ tokens when 8 languages are pooled. C. Training E2E Systems An illustration of the training process is given in Fig. 1. Data from different languages are combined, and a single neural network is trained. The training phase is divided into two parts-(1) text-to-mel spectrogram mapping based on the Tacotron2 architecture [2]. (2) mel-spectrogram to speech waveform generation using a WaveGlow vocoder [64]. The likelihood P (mel-spectrogram frames|text) is parameterised using an encoder and a decoder with attention. The text is processed into a set of tokens and embedded into a continuous vector. The embeddings are passed through a series of convolution layers to capture the long-term context in the input. The output is then presented to a BLSTM layer to generate the encoded features. The attention network summarises the encoded features into a fixed-length context vector. 
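To make the common-representation idea concrete, the following minimal Python sketch maps a handful of Devanagari and Tamil characters onto shared sound classes and integer token ids, in the spirit of MLCM/CLS; the toy table and function names are illustrative assumptions and are far smaller than the actual MLCM and CLS inventories produced by the unified parser.

```python
# Toy illustration of a shared character representation across Indic scripts.
# The real MLCM/CLS tables cover the aksharas of 8+ languages (~68-72 classes);
# the mapping below lists only a handful of characters and is purely illustrative.
SHARED_CLASSES = {
    "अ": "a",  "அ": "a",    # short vowel 'a' in Devanagari and Tamil
    "क": "ka", "க": "ka",   # consonant 'ka'
    "म": "ma", "ம": "ma",   # consonant 'ma'
}

# One integer token id per shared class, as the TTS text encoder expects ids.
TOKEN_IDS = {cls: idx for idx, cls in enumerate(sorted(set(SHARED_CLASSES.values())))}


def to_common_tokens(text: str) -> list[int]:
    """Map native-script characters to ids in the common inventory."""
    return [TOKEN_IDS[SHARED_CLASSES[ch]] for ch in text if ch in SHARED_CLASSES]


if __name__ == "__main__":
    hindi = "कम"   # Devanagari ka + ma
    tamil = "கம"   # Tamil ka + ma
    # Both strings collapse to the same token sequence, so the encoder sees a
    # single compact vocabulary regardless of the source script.
    print(to_common_tokens(hindi))  # [1, 2]
    print(to_common_tokens(tamil))  # [1, 2]
```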
An auto-regressive decoder predicts the mel-spectrogram frame at each time step. The WaveGlow vocoder is used to generate the time-domain speech signal by conditioning on the mel-spectrogram [64]. D. Adaptation To improve the synthesis quality of new (unseen) languages, generic voices are adapted using limited amounts of accurately transcribed data from the target language. Specifically, the network parameters of the generic network are fine-tuned on the adaptation data. In [5], different amounts of adaptation data were considered-30 minutes, 15 minutes and 7 minutes. In the current work, we further reduce the amount of adaptation data (3 minutes, 1 minute) and test the limits of transferability in extreme resource-scarce scenarios. The adaptation process is shown in Fig. 1. We also investigate the effect of adapting from different generic systems-(1) of the same language family, (2) of a different language family, and (3) combined Indo-Aryan+Dravidian voice. V. EXPLORING THE RELEVANCE OF LANGUAGE FAMILIES FOR SYSTEM BUILDING This section gives an overview of the language family-based analysis carried out. Different evaluation metrics are considered to assess the various systems' performance. A. Experiments TTS systems are built using ESPNet's implementation [65] of Tacotron2. Training and validation sets are in the ratio 9:1. The validation set is a representative mixture of all the languages considered for training. Different generic voices are trained for male and female datasets to avoid the issue of gender in synthesis. The configuration of the encoder-decoder network used in the experiments is given in Table II. Nvidia's Wave-Glow implementation is used for speech reconstruction [64]. WaveGlow models are trained for Indo-Aryan and Dravidian data by fine-tuning a pre-trained ljspeech [66] WaveGlow model for 10,000 steps. Speaker-dependent models are also trained for each target speaker, but there is no observable difference in synthesis quality across speaker-independent and speaker-specific WaveGlow models. B. Systems Built Various combinations of voices are trained by considering the entities mentioned in Table III. To train the Indo-Aryan (IA) voice, 5 hours each of Bengali, Hindi, Odia and Rajasthani data are combined. The Dravidian (Dr) voice is trained using 5 hours each of Kannada, Malayalam and Telugu data. IA and Dr voices trained collectively with 20 hours and 15 hours of data, respectively, are called "full" voices. Monolingual voices built using individual language datasets (5 hours in duration- Table I) are considered baseline systems. Single-family voices are also trained with only 5 hours of collective data, wherein each constituent language has an equal contribution. This is a datastressed situation, and these voices are referred to with a "5hrs" tag. The idea is to compare the performance of single-family and monolingual TTS systems trained with the same duration. Further, Indo-Aryan+Dravidian (IA+Dr) voices are also trained for comparison. These voices are trained with a total of 35 hours of data. The above systems are also compared based on their text representation-character-based (MLCM) and phone-based (CLS). Overall, 16 single language family voices and 4 IA+Dr combined voices are built, considering male and female datasets. These multilingual systems are adapted to unseen (new) languages. 
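The adaptation described above amounts to continuing training of the generic network on the small target-language set with a reduced learning rate. A minimal PyTorch-style sketch is given below; the helpers build_tacotron2, Tacotron2Loss and adaptation_loader, as well as the hyperparameters, are illustrative assumptions and do not reproduce the ESPnet recipe used in this work.

```python
# Minimal fine-tuning loop for adapting a generic (multilingual) acoustic model
# to a few minutes of data in a new language. build_tacotron2, Tacotron2Loss and
# adaptation_loader are assumed helpers standing in for the actual recipe.
import torch

model = build_tacotron2()                                # same architecture as the generic voice
model.load_state_dict(torch.load("generic_ia_mlcm.pt"))  # hypothetical Indo-Aryan checkpoint
criterion = Tacotron2Loss()

# All parameters are updated, but with a small learning rate so the model does
# not drift too far from the generic voice on only 1-30 minutes of speech.
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
n_epochs = 50  # illustrative; in practice chosen by monitoring validation loss

model.train()
for epoch in range(n_epochs):
    for text_tokens, mel_target in adaptation_loader:    # e.g. 5-150 utterances
        optimiser.zero_grad()
        mel_pred, stop_pred, alignments = model(text_tokens, mel_target)
        loss = criterion(mel_pred, stop_pred, mel_target)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # keep updates stable
        optimiser.step()

torch.save(model.state_dict(), "adapted_to_new_language.pt")
```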
As mentioned in Section IV-D, three types of adaptation are carried out:
- Same language family adaptation: the IA voice is adapted to Gujarati and the Dr voice to Tamil.
- Cross-language family adaptation: the Dr voice is adapted to Gujarati and the IA voice to Tamil.
- Combined IA+Dr voice adaptation: IA+Dr voice is adapted to Gujarati and Tamil (4 adapted systems in total).
In addition to the systems mentioned in Table III, the following systems are also trained:
- Multilingual and adapted systems with x-vectors as speaker embedding.
- A single IA+Dr voice with x-vector, combining male and female datasets.
x-vectors are extracted from audio files using a pre-trained time-delay neural network (TDNN) [8] and then appended to each encoder state of the Tacotron2 network. Compared to the systems without speaker embedding, the systems with speaker embedding have better speaker stability and improved quality in a few cases. Nevertheless, from a language family-based perspective, results are similar with/without speaker embedding. Hence, the results of these additional systems are presented in the supplementary material (Sections S1, S2 and S4). C. Test Set and Evaluation Metrics Held-out sentences not used for training are considered for evaluations. This set does not overlap with the data mentioned in Table I. The test set for each dataset has at least 100 sentences and covers at least 10 occurrences of each phone. The length of the test set ranges from 114 to 178 sentences. Table IV summarises the languages and the number of test sentences considered in each dataset. The following evaluation metrics are used to analyse the synthesised audio of different TTS systems: 1) Mel-Cepstral Distortion (MCD): MCD is an objective evaluation metric used to measure the distortion in mel-cepstral features of synthesised speech compared to that of the corresponding recorded speech [67]. Dynamic time warping (DTW) is first performed to align the speech signals. A lower average MCD indicates that the TTS system produces less distorted speech. 2) MUSHRA Test: MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) is a subjective evaluation metric used to assess the perceptual quality of the synthesised speech [68]. Synthesised utterances (of the same sentences) generated by various TTS systems are presented to the listeners on a single panel. For each panel, the order of systems is randomised. Listeners are asked to rate the quality of the synthesised speech with respect to a reference. The scoring is on a scale of 1-100; a score of "100" indicates that the quality of the synthesised utterance is the same as that of the reference audio. 3) Additional Subjective Evaluations: Customised subjective evaluations such as intelligibility tests and language verification (LVF) tests are conducted. The tests are detailed in the relevant sections. 4) Additional Qualitative Observations: In addition to the above formal evaluation methods, informal analysis is also conducted to verify the observations. This includes manual verification, informal listening tests, and feedback on the synthesised audio. Attention plots of the sequence-to-sequence models are also studied. D. Analysis The performance of various voices is analysed using the metrics mentioned above. Synthesis using multilingual voices is divided into two categories-(1) synthesis of seen languages and (2) synthesis of unseen languages. Only languages seen during training are synthesised and evaluated in the first scenario. In the second scenario, the text of unseen languages is synthesised. It is to be noted that generic voices have not been fine-tuned for any in-training speakers. Fig. 2 shows the MCD scores corresponding to male TTS systems.
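Because MCD figures are discussed extensively in the following analysis, a minimal sketch of how such a score can be computed is given below: the mel-cepstral features of the recorded and synthesised utterances are DTW-aligned and the standard distortion formula is applied to the aligned frames. The use of librosa MFCCs as mel-cepstra and the chosen number of coefficients are assumptions, not necessarily the feature extraction of [67] used by the authors.

```python
# Sketch of a mel-cepstral distortion (MCD) computation: DTW-align the
# mel-cepstral features of synthesised and recorded speech, then apply the
# standard MCD scaling to the per-frame Euclidean distances.
import numpy as np
import librosa

MCD_CONST = 10.0 / np.log(10.0) * np.sqrt(2.0)  # (10 / ln 10) * sqrt(2)


def mcd(ref_wav: str, syn_wav: str, n_mfcc: int = 25) -> float:
    y_ref, sr = librosa.load(ref_wav, sr=None)
    y_syn, _ = librosa.load(syn_wav, sr=sr)

    # Mel-cepstral features, shape (n_mfcc, n_frames); drop c0 (overall energy).
    c_ref = librosa.feature.mfcc(y=y_ref, sr=sr, n_mfcc=n_mfcc)[1:]
    c_syn = librosa.feature.mfcc(y=y_syn, sr=sr, n_mfcc=n_mfcc)[1:]

    # Dynamic time warping on the feature sequences to align frames.
    _, wp = librosa.sequence.dtw(X=c_ref, Y=c_syn)

    # Average distortion over the warping path.
    diffs = c_ref[:, wp[:, 0]] - c_syn[:, wp[:, 1]]
    return float(MCD_CONST * np.mean(np.linalg.norm(diffs, axis=0)))
```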
The x-axis in the plot refers to the language of the native text. Each text is passed through 7 different voices-baseline monolingual TTS voice of that language, four single-family (IA/Dr) and two combined IA+Dr systems built using different text representations. 1) Analysis of Generic Systems for Seen Languages: MCD scores: It is seen from Fig. 2 that baseline monolingual systems perform better than generic systems in most cases. Considering only single family (full) systems, character-based representation performs better than phone-based (CLS) representation for Indo-Aryan languages. The reverse is true for Dr (full) voices. The phone-based representation performs best for all languages for single-family (5hrs) systems. The degradation in performance of the phone-based IA (full) system could be a consequence of incorrect grapheme-to-phoneme conversion (mainly schwa deletion), which has become more prominent in the full voice than in the 5hrs voice. There is no significant difference between the performances of both IA+Dr systems. Only for Kannada and Malayalam, MCD scores are better for IA+Dr voices compared to monolingual and single-family voices. However, this difference is not very significant, given that IA+Dr voices are trained on a considerable amount of data (35 hours compared to 5/15/20 hours of monolingual or single-family voices). Comparing MCD scores of monolingual TTS systems (with 5 hours of training data) and the best single-family (5hrs) systems, the average relative degradation with respect to the former is only 3.82%. A similar comparison of the best single-family (5hrs) system with the best IA+Dr system indicates an average relative degradation of 3.57% with respect to the latter. This is an encouraging result, given that the IA (5hrs) and Dr (5hrs) voices are trained with only 1.25 hours and 1.67 hours of data per language, respectively. Similar results are observed for systems trained on female data and systems trained with speaker embedding, as presented in the supplementary material (Sections S1 and S2). The average relative degradation in MCD score of the best single-family (5hrs) voice compared to that of monolingual systems is only 4.5%, and with respect to the best IA+Dr system is 3.14%. Subjective intelligibility tests: A subjective word error rate (WER) test is carried out to measure the intelligibility of each system. This test is performed only for Hindi and Kannada systems trained using male data. Evaluators are presented with sentences and corresponding audio files synthesised by each system. The evaluators were asked to enter the number of words wrongly pronounced in the synthesised utterances. Although providing the text does bias the participant, this bias is uniform across all systems. For the evaluation, synthesised utterances corresponding to 10 randomly selected sentences were considered. Details of the evaluation and WER across all systems are presented in Table V. The results are more or less similar to the patterns observed for MCD scores. Monolingual TTS and IA+Dr (MLCM) perform the best for Hindi and Kannada, respectively. Single-family (MLCM, 5hrs) systems have the highest WER. MUSHRA tests: MUSHRA tests are conducted to assess the quality of synthesised utterances across various systems. Based on MCD scores and informal listening tests, IA (MLCM, full) and Dr (CLS, full) voices are the best single-family voices. IA+Dr (CLS) combined voice performs better than IA+Dr (MLCM) voice in most cases. 
Hence, MUSHRA tests are conducted for these systems, along with corresponding singlefamily voices in data-stressed situations (IA (MLCM, 5hrs), Dr (CLS, 5hrs)). Monolingual TTS synthesisers are also included for comparison. Each listener evaluated a set of 20 audio files in each test, 5 from each system. Fig. 3 presents the MUSHRA scores for male voices. In most cases, the synthesis quality of single-family (5hrs) voices is the least, followed by the IA+Dr voice. The performance of single-family (full) and monolingual systems is similar in most cases, except for Hindi and Kannada. For Kannada, multilingual training (particularly IA+Dr) seems to improve synthesis quality compared to monolingual training. Similar results are observed for systems trained on female data, as presented in the supplementary material (Section S3). On average, the relative degradation in MUSHRA score of single-family (5hrs) voices is 12.93% compared to the best system. 2) Analysis of Generic Systems for Unseen Languages (Zero-Shot Scenario): The scenario of synthesising text from unseen languages (Gujarati and Tamil) is analysed. Only single-family voices are considered here. We aim to study the extent to which language families can affect the synthesis in unseen languages. Two types of cases are explored: 1) Same language family synthesis-Gujarati and Tamil texts are synthesised by IA and Dr voices, respectively. 2) Cross-language family synthesis-Tamil and Gujarati texts are synthesised by IA and Dr voices, respectively. IA (MLCM, full) and Dr (CLS, full) TTS systems are considered as these are the best systems in the respective language families. The unseen language text is passed directly to the single-family voice during synthesis. It is to be noted that although these languages are not used during training, the CLS and MLCM representations can handle them. Attention plots of IA and Dr male TTS systems for sample Gujarati and Tamil texts are shown in Fig. 4. The monotonic nature of the plots indicates that the synthesised utterances are reasonably intelligible. However, informal listening tests present a different story. Both cases of synthesis have a non-native accent, which is expected. Same language family synthesis is relatively more intelligible compared to cross-family synthesis. However, the accent in cross-language family synthesis is quite pronounced and impedes intelligibility. Clearly, languages are not only different due to phonotactics but also prosody. Differences in the phone sets between the unseen language and the single-family voice further contribute to this degradation. Evaluating non-native synthesised utterances is not trivial. We have designed a subjective language verification (LVF) test to assess both cases of unseen language synthesis. Details of the LVF test are presented in Section V-D3 along with a comparison with adapted systems. A note on speaker identity and stability: Since there is only one speaker per language in the training data, the problem of stability of speaker identity is largely avoided. To quantify this, a speaker identification (SID) system is built by combining the training data mentioned in Table I. The idea is to see to what extent the speaker identity of synthesised seen language is in the corresponding seen speaker's voice. On average, this value is 90.5% and 91.8% for language family-specific and IA+Dr voices, respectively. It is also observed that speaker similarity does not necessarily influence speaker identity in synthesis. 
Details on this are presented in the supplementary material (Section S6). With x-vectors, we can explicitly specify a voice for synthesis. For unseen languages, the synthesised speech is in the voice of a seen speaker, which varies with the test sentence. As seen in [5], even if we specify the speaker embedding of the unseen speaker during synthesis, this is not reflected in the speaker identity of the output audio. It is still in the voice of a seen speaker. 3) Analysis of Adapted Systems (Same Language Family): A TTS system built using only a limited amount of data (say, 30 minutes) in an unseen language does not train well. Hence, generic TTS systems are fine-tuned on this limited data to improve unseen language synthesis. Adaptation is performed within the same language family, and the best single-family systems are considered. IA (MLCM, full) and Dr (CLS, full) TTS systems are adapted to Gujarati and Tamil, respectively. We study adaptation with varying amounts of data-30, 15, 7, 3 and 1 minute. Table III shows the combinations of adapted systems from the same language family. Adaptation is performed separately for male and female data. The test set corresponding to these languages in Table IV is used for synthesis. For example, in the case of Gujarati female, we use only 5 utterances (1 minute) for adaptation, but test the system on 140 sentences. MCD scores: Fig. 5 presents the MCD scores of different adapted systems. MCD scores are plotted against the amount of adaptation data used. As the amount of adaptation data reduces, it is seen that the MCD score increases. The system's robustness reduces, resulting in high variance and more edge cases and outliers (as indicated by the + symbol in the plots). The performance of the Gujarati male voice drops significantly with 1 minute of adaptation data (Fig. 5(a)). The performance degrades gracefully with reduced adaptation data for the remaining scenarios. Language verification (LVF): A novel language verification subjective evaluation metric is developed, where subjects are asked to verify the language of the synthesised output. This is performed only for the unseen languages, Gujarati and Tamil. Evaluators are presented with a set of 24 audio files randomly ordered-(a) 4 original recordings of the language, and (b) 5 audio files each synthesised by IA (MLCM, full), Dr (CLS, full), adapted (1 minute) and adapted (30 minutes) voices. The adapted models belong to the same family adaptation. The adapted models are included to assess to what extent evaluators can verify the language with 1 minute and 30 minutes of clean data. Evaluators are asked to rate if the audio clip is of the unseen language, disregarding any foreign accent. A 5-point rating scale is used, with the following indications:
- Score 5: sure that the audio clip is of that language.
- Score 3: the audio clip could be of that language.
- Score 1: the audio clip is not at all of that language.
7 and 11 native listeners participated in the Gujarati and Tamil LVF tests, respectively. Table VI presents the results of the LVF test. It is seen that with cross-family synthesis, the LVF score is lower than that of the same family synthesis. This is especially evident with the synthesis of Tamil text using the IA voice, for which evaluators have rated that the language is not Tamil. Evaluators have mostly indicated that the language is Gujarati for Gujarati text synthesised using the IA voice.
With 1 minute of adaptation data, the LVF score improves over the same family synthesis for Tamil. However, for Gujarati, the score degrades considerably. This is because the quality of the system adapted with 1 minute of Gujarati data is poor (as seen in Fig. 5(a)), and this impacts its language verification. For systems trained with 30 minutes of adaptation data, evaluators are fairly confident that the language of the text is indeed the same. The evaluations are conducted for systems built using male data, and informal evaluations also indicate similar results for female data. These evaluations indicate the importance of same language family synthesis, even when no training data is available for unseen languages. 4) Analysis of Adapted Systems (All Scenarios): To get a better understanding of how important language families are in the context of adaptation, different generic voices are adapted to Tamil and Gujarati as given in Table III. Three scenarios of adaptation-same language family, cross-language family and IA+Dr adaptation are explored, as elaborated in Section V-B. Only 7 minutes of data from each language is considered for adaptation. MCD scores: Fig. 6 presents the MCD scores of various adapted TTS voices corresponding to Gujarati and Tamil male and female datasets. The x-axis indicates the generic system used for adaptation. It is seen that the MCD scores are slightly higher in cross-language family adaptation compared to the same family adaptation. Informal listening tests indicate that languagespecific phones, especially in Tamil, are not pronounced correctly with cross-family adaptation. The performance of IA+Dr voice adaptation and the same family adaptation is almost on par. The difference in performance of same family and cross family adaptation is statistically significant (p < 0.05) and that between same family and IA+Dr adaptation is not very significant (p > 0.05). This indicates that the same language family voice, trained on a small amount of data, is sufficient for effective adaptation. A combined IA+Dr voice can also be adapted if substantial data is available across language families. In the adaptation experiments presented here, only the best generic systems are adapted-IA (MLCM, full), Dr (CLS, full), IA+Dr (CLS). The observations on language families still hold even when other generic (MLCM/CLS) models are adapted (Section S4 in the supplementary material). We see that the better the generic models, the better the performance of adapted models. MUSHRA test: A MUSHRA test is conducted to evaluate the various adapted systems. 8 native Gujarati and 13 native Tamil speakers participated in the study. Each listener assessed a set of 21 audio files (7 each from each system). Results of the MUSHRA test are presented in Fig. 7. It is seen that the synthesis quality of cross family adaptation is poor compared to that of the other two adapted voices. The quality of same family adaptation and combined IA+Dr adaptation is similar in most cases. For Tamil female, Dravidian voice adaptation is better than combined IA+Dr adaptation. On closer inspection, we observe that this lower rating for IA+Dr adaptation is mainly due to unnatural pauses in the synthesised audio, which does not show up in the MCD scores ( Fig. 6(d)). This needs further investigation. VI. DISCUSSION In this work, we train and analyse TTS systems for Indian languages in a low-resource setting from a language family perspective. 
Our observations based on this work are summarised here:
- Monolingual systems built using 5 hours of studio-recorded data and accurate transcriptions produce better synthesis than the generic multilingual voices in most cases.
- Single-family voices work reasonably well for unseen languages belonging to the same language family.
- Cross-language family synthesis performs poorly. A contributing factor is a mismatch between the phonesets of a single-family voice and an unseen language of another language family. The phoneset of an unseen language is largely covered by other languages in the same language family. The phoneset of Gujarati is covered in the Indo-Aryan voice, while Tamil has additional phones such as "e" (short E), "o" (short O) and "zh" (retroflex continuant), which are not covered. Table VII provides details of phone coverage for each unseen language with respect to single-family data. Accent is also an important factor that contributes to poor intelligibility.
- Single-family voices of reasonable quality can be trained in data-stressed situations. With an average duration of 1.5 hours per language (33.33% of monolingual data) for training, the MCD score of the single-family (5hrs) voice has an average relative degradation of only 4.16% in comparison to a monolingual voice built with the same amount of collective data (i.e., 5 hours). The synthesis quality of these systems has an average relative degradation of 12.93% compared to the best TTS system.
- In Tamil, the same character represents both voiced and unvoiced stop consonants. For example, the bilabial unvoiced stop consonant "p" and its voiced counterpart "b" are represented by a single character. This distinction is made in the phone-based representation. Hence, Dravidian (CLS) voice is better suited for adaptation to Tamil.
- In Malayalam and Tamil, when "u" occurs at the end of a word, it is not rounded but uttered as an unrounded back vowel. In most cases of Tamil synthesis using Dr and IA+Dr voices, the vowel "u" remains rounded. With more adaptation data, the network synthesises these Tamil words correctly.
- Generic and adapted systems trained with x-vectors as speaker embedding have lower MCD scores than counterparts without speaker embedding (Sections S1, S2, and S4 of supplementary material). Nonetheless, language family-based analysis still holds for voices with x-vectors.
- Training a single female+male voice with speaker embedding also does not seem to improve the performance of generic systems (Sections S1 and S2 of supplementary material).
A. Analysis of Phonotactics Across Languages To better understand the outcome of the experiments described above from a theoretical perspective, we analyse the phonotactics of languages and compare them in a multilingual setting. As mentioned in Section II, phonotactics play a vital role in a language. Here we quantify phonotactics using two approaches-byte-pair encoding (BPE) [11] and phone-based language modelling. This is a text-based analysis. The text is first parsed into its phone-based representation. Along with the data used earlier for training and testing, additional text material is used for this analysis-(a) 150 test sentences in Bengali and Telugu and (b) training text corresponding to a 5-hour duration in Gujarati and Tamil. Corresponding language data are combined for Indo-Aryan (IA), Dravidian (Dr) and IA+Dr text. Multilingual text data exclude Gujarati and Tamil, which are still considered unseen languages.
1) Analysis of Byte-Pair Encoding (BPE): BPE is a technique originally used for data compression [10], and now adopted for subword tokenization in machine translation [11] and speech-related tasks [69]. BPE tokens can represent the most common sub-strings of a language. These tokens are extracted for every language using their corresponding training text. We consider the top 500 BPE tokens. It is to be noted that this analysis does not include any test data and is performed only on the training text. The percentage of the same BPE tokens is calculated for every combination pair of IA/Dr/IA+Dr data and individual language data. Table VIII presents the results of BPE analysis. We see a higher match for languages to their corresponding language family data compared to the other language family data. For IA+Dr data, this percentage is between individual IA and Dr data. Similar results are also observed for Gujarati, which is not seen in the IA text. The only exception is Tamil, in which the matching percentage is slightly improved for IA+Dr data compared to individual Dr data. 2) Analysis of Language Models: A phone-level language model (LM) is trained using the corresponding training text for each multilingual data (IA/Dr/IA+Dr). Average sentence-level log-likelihood scores are calculated on the test data using these models. Language models are also trained on monolingual text to understand the maximum achievable likelihood scores. The models are trained and tested using the Stanford Research Institute language modelling (SRILM) toolkit [70] with the maximum order being 3 (i.e., up to trigrams). Table IX presents the log-likelihood scores of different test sets using various models. As expected, the likelihood scores of purely monolingual models are the best. IA models have better scores for Indo-Aryan (seen) languages than Dr models. IA+Dr model has slightly lower likelihood scores compared to the IA language model. A similar trend is observed for Dravidian (seen) languages. Even for Gujarati, whose text is not seen in any multilingual language model, the IA model has the best score among the multilingual models. For Tamil, the best multilingual model is IA+Dr. Also, the difference between the likelihood values of the IA+Dr/Dr model and the monolingual Tamil model is relatively high, even for unseen languages. This indicates that the phonotactics of Tamil could perhaps be quite different compared to other Dravidian languages. Overall, the above phonotactic analysis provides a basis for multilingual system training based on language families. This work shows that language families are important for system building, especially in resource-scarce scenarios. A suitable starting point to build a TTS synthesiser for a new language with limited data would be to use a generic voice trained for the same language family. This would ensure that similar phonotactics are largely covered (Sections II and VI-A), with the added advantage of reducing the overall training data requirement. Going ahead, the training data per language can be further reduced to assess extreme data-stressed situations. To improve the synthesis quality of seen languages, generic voices can be further fine-tuned on seen languages, as explored in [28], [56]. Additional embeddings, such as language embeddings, can be included during training. The code-mixing ability of generic voices can also be explored. Given data in more Indian languages, the study can be extended to include more language combinations. 
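As a concrete illustration of the BPE-overlap analysis described in this section, the sketch below learns a 500-token BPE vocabulary on phone-transcribed training text and reports the percentage of a language's tokens that also occur in a pooled family-level vocabulary. The Hugging Face tokenizers package and the file names are assumptions; the paper does not specify its BPE implementation.

```python
# Learn a 500-token BPE vocabulary per data pool and measure token overlap.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace


def bpe_vocab(text_file: str, size: int = 500) -> set[str]:
    tok = Tokenizer(BPE(unk_token="[UNK]"))
    tok.pre_tokenizer = Whitespace()
    tok.train([text_file], BpeTrainer(vocab_size=size, special_tokens=["[UNK]"]))
    return set(tok.get_vocab().keys())


def overlap(lang_file: str, pool_file: str) -> float:
    """% of the language's BPE tokens that also appear in the pooled-data vocabulary."""
    lang, pool = bpe_vocab(lang_file), bpe_vocab(pool_file)
    return 100.0 * len(lang & pool) / len(lang)


# Hypothetical file names: phone-level transcriptions of the training text.
print(overlap("hindi_phones.txt", "indo_aryan_phones.txt"))  # expected higher match
print(overlap("hindi_phones.txt", "dravidian_phones.txt"))   # expected lower match
```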
Even with recent approaches using transformer [71] and conformer [72] networks with FastSpeech [73] and FastSpeech2 [74], these findings are still relevant. Experiments and results of zero-shot synthesis with transformer-based FastSpeech2 architecture are presented in Section S7 of the supplementary material. Since FastSpeech2 uses explicit phoneme boundaries obtained from Montreal forced aligner [75], systems are trained using phone-based representations. VII. CONCLUSION This work highlights the importance of training multilingual and multispeaker voices for low-resource Indian languages based on language families. It is observed that single-family voices, which are trained on less data, perform comparatively to IA+Dr systems trained on a lot of data. Same language family synthesis and adaptation are better than the cross-family approach. The observations of this work are encouraging as they pave the way to training TTS systems in resource-scarce scenarios, with additional complexities of different scripts and language-specific differences. Given a large number of speakers in each language, with many that cannot read or write in India 5 , this work provides an avenue to disseminate knowledge and information. Hence, the relevance of this work and similar attempts cannot be underestimated.
Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction Purpose Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality with a few tens of thousands fibres, each acting as the equivalent of a single-pixel detector, assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. Methods In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by the models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. Performance of three different state-of-the-art DNNs techniques were analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results Results indicate that the proposed solution produces an effective improvement in the quality of the obtained reconstructed image. Conclusion The proposed training strategy and associated DNNs allows us to perform convincing super-resolution of pCLE images. Electronic supplementary material The online version of this article (10.1007/s11548-018-1764-0) contains supplementary material, which is available to authorized users. Introduction Probe-based confocal laser endomicroscopy (pCLE) is a state-of-the-art imaging system used in clinical practice for in situ and real time in vivo optical biopsy. In particular, recent works using Cellvizio (Mauna Kea Technologies, France) have demonstrated the impact of introducing pCLE as a new imaging modality for the diagnostics procedures of conditions such as pancreatic cystic tumours and the surveillance of Barrett's oesophagus [4]. pCLE is a recent imaging modality in gastrointestinal and pancreaticobiliary diseases [4]. The authors of [4] have shown that despite clear clinical benefits of pCLE, improving its specificity and sensitivity would help it become a routine diagnostic tool. Specificity and sensitivity are directly dependent on the quality of the pCLE images. Therefore, increasing the resolution of these images might bring a more reliable source of information and improve pCLE diagnosis. Certainly, the key point of pCLE is its suitability for realtime and intraoperative usage. Having high-quality images in real time potentially allows for better pCLE interpretability. Thus, offline processing would not fit in the standard clinical work-flow required in this context. The trend for image sensor manufacturers is to increase the resolution, as apparent in the current move to high-definition endoscopic detectors. Recently introduced 4K endoscopes provide 8M pixels, a difference to pCLE of 2-to-3 orders of magnitude. In pCLE, reliance on an imaging guide-an optical fibre bundle, composed of a few tens of thousands of optical fibres, each acting as the equivalent of a single-pixel detector-fundamentally limits the image quality. 
These fibres are irregularly positioned in the bundle which implies that tissue signal is a collection of pixels sampled on an irregular grid. Hence, a reconstruction procedure is needed for mapping the irregular samples to a Cartesian image. Other factors that reduce pCLE image quality are cross-talk among neighbouring fibres and limited signal-to-noise ratio. All these factors lead to the generation of images with artefacts, noise, relatively low contrast and resolution. This work proposes a software-based resolution augmentation method which is more agile and simpler to implement than hardware engineering solutions. Building on from the idea that high-resolution (HR) images are desired, this study explores advanced singleimage super-resolution (SISR) techniques which can contribute to effective improvement in image quality. Although SISR for natural images is a relatively mature field, this work is the first attempt to translate these solutions into the pCLE context. Beyond SISR, video registration technique [13] have been proposed to increase the resolution of pCLE. Such methods provide a baseline super-resolution technique, but suffers from artefact and are computationally too expensive to be applied in real time. Because of the recent success of deep learning for SISR on natural images [1], this work focuses on exemplar-based super-resolution (EBSR) deep learning techniques. However, the translation of these methods to the pCLE domain is not straightforward, notably due to the lack of ground-truth HR images required for the training. There is indeed no equivalent imaging device capable of producing higher-resolution endomicroscopic imaging, nor any robust and highly accurate means of spatially matching microscopic images acquired across scales with different devices. Furthermore, in comparison with natural images, currently available pCLE images suffer from specific artefacts introduced by the reconstruction procedure that maps the tissue signal from the irregular fibre grid to the Cartesian grid. The contribution of this work is threefold. First, three deep learning models for SISR are examined on the pCLE data. Second, to overcome the problem of the lack of ground-truth low-resolution (LR)/HR image pairs for training purposes, a novel pipeline to generate pseudo-ground-truth data by leveraging an existing video registration technique [13] is proposed. Third, in the absence of a reference HR ground truth, to assess the clinical validity of our approach, a Mean Opinion Score (MOS) study was conducted with nine experts (1-10 years of experience) each assessing 46 images according to three different criteria. To our knowledge, this is the first research work to address the challenge of SISR reconstruction for pCLE images based on deep learning, generate pCLE pseudo-ground-truth data for training of EBSR models and demonstrate that pseudo-ground-truth trained models provide convincing SR reconstruction. The rest of the paper is organised as follows. "Related work" section presents the state of the art for SISR with natural images. "Materials and methods" section presents the proposed training methodology based on realistic pseudoground-truth generation and detail the implementation of the SISR models. "Results" section gives information on the evaluation of our approach using a quantitative image quality assessment (IQA) and a MOS study. "Discussion and conclusions" section summarises the contribution of this research to pCLE SISR. 
Related work Super-resolution (SR) has received a lot of interest from the computer vision community in the recent decades [10]. Initial SR approaches were based on single-image super-resolution (SISR) and exploited signal processing techniques applied to the input image. An alternative to SISR is multi-frame image super-resolution based on the idea that HR image can be reconstructed by fusing many LR images together. Ideally, the combination of several LR image sources enriches the information content of the reconstructed HR image and contributes to improving its quality. Registration can be used to merge LR images acquired at slightly shifted field-of-views into a unified HR image. In the specific context of pCLE, the work proposed by Vercauteren et al. [13] presents a video registration algorithm that, in some cases, can improve spatial information of the reconstructed pCLE image, and reveals details which were not visible initially. The quality of the registration is a key step to the success of the SR reconstruction, but the alignment of images captured at different times is not trivial. Misalignment leads to incorrect fusion and generates artefacts such as ghosting. Moreover, registration is a computationally expen-sive technique, making this approach unsuitable for real-time purposes. Another interesting approach to SISR is exemplar-based super-resolution (EBSR), which learns the correspondence between low-and the high-resolution images. Thanks to the recent success of deep learning and Convolutional Neural Networks (CNNs), EBSR methods currently represent the state-of-the-art for the SR task [1]. Although many research groups have worked on deep-learning-based SR for natural images, and although CNNs are currently widely used in various medical imaging problems [11], only recently have CNNs been used for SR in medical imaging. Noteworthy is the work proposed in [12] that attempt to improve the quality of magnetic resonance images. The behaviour of CNNs, especially in the context of SR, is strongly driven by the choice of a loss function, and the most popular one is mean squared error (MSE) [16]. Although MSE as a loss function steers the SR models towards the reconstitution of HR images with high peak signal-to-noise ratios, this does not necessarily mean that the final images will provide a good quality perception. A model trained with a selective loss function involving a Generative Adversarial Network for Image Super-Resolution (SRGAN) was proposed by Ledig et al. [7]. The authors designed an adversarial loss to classify HR images into SR images and ground-truth HR images. Based on a MOS study, the authors showed that the participants perceived the quality of the restored HR images as higher compared to the image quality measured only by a PSNR. Another critical issue with deep CNNs is the convergence speed. Several solutions, such as using a very high learning rate for network training [5], and removing batchnormalisation modules [8] were proposed to tackle this issue. Materials and methods The Smart Atlas database [2], a collection of 238 anonymised pCLE video sequences of the colon and oesophagus, is used in this study. The database was split into three subsets: train-ing set (70%), validation set (15%), and test set (15%). Each subset was created ensuring that colon and oesophagus tissue were equally represented. Data were acquired with 23 unique probes of the same pCLE probe type. The SR models are specific to the type of the probe but generic to the exact probe being used. 
Thus, the models do not need to be retrained for probes of the same type. Another type of probe, such as needle-CLE (nCLE), would require a specifically trained model. nCLE and pCLE differ by the number of optical fibres and the design of the distal optics. "Pseudo-ground-truth image estimation based on video registration" section explains how the pseudo-ground-truth HR images were generated. "Generation of realistic synthetic pCLE data" section describes our proposed simulation framework to generate synthetic LR (LR syn ) images from original LR (LR org ) images. "Implementation details" section presents the pre-processing steps needed for standardising the input images and details the implementation of the super-resolution CNNs used in this study. Pseudo-ground-truth image estimation based on video registration To compensate for the lack of ground-truth HR pCLE data, a registration-based mosaicking technique [13] was used to estimate HR images. Mosaicking acts as a classical SR technique and fuses several registered input frames by averaging the temporal information. The mosaics were generated for the entire Smart Atlas database and used as a source of HR frames. Since mosaicking generates a single large field-of-view mosaic image from a collection of input LR images, it does not directly provide a matched HR image for each LR input. To circumvent this, we used the mosaic-to-image diffeomorphic spatial transformation resulting from the mosaicking process to propagate and crop the fused information from the mosaic back into each input LR image space. The image sequences resulting from this method are regarded as esti- mates of HR frames. These estimates will be referred to as HR in the text. The image quality of the mosaic image heavily depends on the accuracy of the underpinning registration which is a difficult task. The corresponding pairs of LR and HR images generated by the proposed registration-based method suffer from artefacts, which can hinder the training of the EBSR models (Fig. 1). Specifically, it can be observed that alignment inaccuracies occurring during mosaicking were a source of ghosting artefacts which in combination with residual misalignments between the LR and HR images, creates unsuitable data for the training. Sequences with obvious artefacts were manually discarded. However, even on this selected dataset, training issues were observed. To address these, we simulated LR-HR image pairs for training EBSR algorithms while leveraging the registration-based HR images as realistic HR images. Generation of realistic synthetic pCLE data Currently available pCLE images are reconstructed from scattered fibre signal. Every fibre in the bundle acts as a single-pixel detector. To reconstruct pCLE images on a Cartesian grid, Delaunay triangulation and piecewise linear interpolation are used. The simulation framework developed in this study mimics the standard pCLE reconstruction algorithm and starts by assigning to each fibre the average of the signal from seven neighbouring pixels [6]. In the standard reconstruction algorithm, the fibre signal, which includes noise, is then interpolated. Similarly, noise was added to the simulated data to produce realistic images and avoid creating a wide domain gap between real and simulated pCLE images. Despite some misalignment artefacts, the registrationbased generation of HR presented in "Pseudo-ground-truth image estimation based on video registration" section produces images with fine details and a high signal-to-noise ratio. 
Our simulation framework uses these HR and produces simulated LR images with a perfect alignment. The proposed simulation framework relies on observed irregular fibre arrangements and corresponding Voronoi diagrams. Each fibre signal was extracted from an HR image, by averaging the HR pixel values within the corresponding Voronoi cell. To replicate realistic noise patterns on the simulated LR images, additive and multiplicative Gaussian noise (a and m respectively) is added to the extracted fibre signals f s to obtain a noisy fibre signal nfs as: nfs = (1 + m). * fs + a. The standard deviation of the noise distributions was tuned based on visual similarity between LR org and LR syn and between their histograms. Sigma values were 0.05 and 0.01 * (max fs − min fs) for multiplicative and additive Gaussian distribution, respectively. In the last step, Delaunay-based linear interpolation was performed thereby leading to our final simulated LR images. LR and HR images were combined into two datasets: 1. Original pCLE (pCLE org ) built with pairs of LR org taken from sequences of Smart Atlas database and HR images, and 2. synthetic pCLE (pCLE syn ) built by replacing the LR org images with LR syn images. Implementation details The datasets were pre-processed in three steps. First, intensity values were normalised: LR = LR − mean lr /std lr and HR = HR−mean lr /std lr . Second, pixels values were scaled of every frame individually in the range [0-1]. Third, non-overlapping patches of 64×64 pixels were extracted for the training phase, considering only pixels in the pCLE Field of View (FoV). A stochastic patch-based training was used for training the networks, with a minibatch of size 54 patches to fit into the GPU memory (12 GB). Models were trained with patches from the training set. The patches from the validation set were used to monitor the loss during training with the purpose to avoid overfitting. Since all the considered networks are fully convolutional, the test images were processed full size and no patch processing is required during the inference phase. MSE is the most commonly used loss function for SR. Zhao et al. [16] showed that MSE has two limitations: it does not converge to the global minimum and produces blocky artefacts. In addition to demonstrating that L1 loss outperforms L2, the authors also introduced a new loss function SSIM + L1 by incorporating the Structural Similarity (SSIM) [15]. FSRCNN and EDSR were trained considering independently both L1 and SSIM + L1 to investigate their applicability for our data based on a quantitative comparison. Results Acknowledging the lack of proper ground truth for superresolution of pCLE and the ambiguous nature of established IQA metrics, a three-stage approach was designed for the evaluation of the proposed method using the three SR architectures considered in "Materials and methods" section. The first stage, presented in "Experiments on synthetic data" section and relying on the quantitative assessment, The best results for each section are highlighted in bold demonstrates the applicability of EBSR for pCLE in the ideal synthetic case where ground-truth is available. In this quantitative stage, the inadequacy of the existing videoregistration-based high-resolution images as a ground truth for EBSR training purpose is demonstrated. 
The second stage, presented in "Experiments on original data" section, focuses on the quantitative assessment of our methods in the context of real input images and on the evaluation of our best model against other state-of-the-art SISR methods. In the third stage, performed to overcome the limitations of the quantitative assessment, a MOS study was carried out by recruiting nine independent experts, having 1-10 years of experience working with pCLE images. Quantitative analysis For the quantitative analysis, the SR images were examined exploiting two complementary metrics: (i) SSIM to evaluate the similarity between the SR image and the HR, and (ii) Global Contrast Factor (GCF) [9] as a reference-free metric for measuring image contrast which is one of the key characteristic of image quality in our context. Analysing both SSIM and GCF in combination leads to a more robust evaluation. SSIM alone cannot be depended on when the reference image is unreliable, while improvements in GCF alone can be achieved deceitfully for example by adding a large amount of noise. Using these metrics, six scores for each SR method were extracted: mean and standard deviation of (i) SSIM between SR and HR, (ii) GCF differences between SR and LR and (iii) GCF differences between SR and the HR. Finally, to determine which approach performs better, a composite score Tot cs obtained by averaging the normalised value of SSIM with the normalised GCF difference between SR and LR was defined. Both factors are re-scaled to the range [0-1]. In our quantitative assessment, the score obtained by the initial LR org was considered as baseline reference. Experiments on synthetic data In the first experiment, synthetic data are used to demonstrate that our models work in the ideal situation where ground truth is available. The first section of Table 1 shows the scores obtained when the SR models are trained on pCLE syn and tested on LR syn . Here, it is evident that the EDSR and FSRCNN trained with SSIM + L1 obtain a substantial improvement on the different quality factors with respect to the LR image. More specifically, in comparison with the initial LR image, the SSIM was increased by + 0.06 when EDSR is used and by + 0.05 when FSRCNN is used. These approaches also yield a GCF value that is very close to the GCF in HR and an improvement of + 0.32 and + 0.36 in the GCF with respect to LR images. Statistical significance of these improvements was assessed with a paired t test (p value less than 0.0001). From this experiment, it is possible to conclude that the proposed solution is capable of performing SR reconstruction when the models are trained on synthetic data with no domain gap at test time. Experiments on original data When real images are considered, the same conclusions cannot be reached. The results obtained by training on pCLE org and testing on LR org are reported in the second section of Table 1, and here it is evident that all the different quality factors decrease. The best approach is the FSRCNN trained using SSIM + L1 as loss function. With respect to the previous case this approach loses 0.04 on the SSIM, and 0.12 on the GCF with LR. This leads to a final reduction of 0.14 for the Tot cs score. In this scenario, the deterioration of SSIM and GCF compared to the previous synthetic case can be due to the use of inadequate HR images during the training (i.e. misalignment during the fusion, lack of compensation for motion deformations, etc.). 
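For reproducibility, the composite score Tot_cs defined above can be computed from per-image SSIM and GCF values roughly as follows. The authors state only that both factors are rescaled to [0, 1] before averaging, so the min–max rescaling over the evaluated image set used here is an assumption, and the function names are ours.

```python
import numpy as np

def rescale01(x):
    """Min-max rescale an array of per-image scores to the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def composite_score(ssim_sr_hr, gcf_sr_minus_lr):
    """Tot_cs sketch: average of rescaled SSIM(SR, HR) values and rescaled
    GCF(SR) - GCF(LR) differences, following the definition given above."""
    return 0.5 * (rescale01(ssim_sr_hr) + rescale01(gcf_sr_minus_lr))

# Usage: per-method mean, e.g. tot_cs = composite_score(ssim_vals, gcf_diffs).mean()
```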
Better results are instead obtained when the SR models performed on LR org images are trained using the pCLE syn (last section of Table 1). Here, the quality factors increased when compared to the previous case, although they do not overcome the results obtained when the approach is trained and tested on synthetic data. EDSR, in particular, has a Tot cs score of 0.65 that is 0.08 better than the best approach trained on pCLE org (the second section of Table 1) and 0.06 worse than the best approach trained and tested on pCLE syn (first section of Table 1). The GCF obtained here are in general much better when compared to the previous two cases. An example of the visual results from the different training modalities is shown in Fig. 2. In conclusion, our findings suggest that existing video-registration-based approaches are inadequate to serve as a ground truth for HR images, while EBSR approaches, such as the EDSR and FSR-CNN, when trained on synthetic data, can produce SR images that enhance the quality of the LR images. Due to our conclusions, the MOS study was performed using images obtained from the models trained only with synthetic data. To further validate our methodology, in Table 2, the results obtained by the best model of our approach (EDSR trained on synthetic data with SSIM + L1 as loss function) were compared against other state-of-the-art SISR methodologies. Specifically, in this experiment a Wiener deconvolution, a variational Bayesian inference approach with sparse and non-sparse priors [14], the SRGAN and EDSR networks pretrained on natural images were considered. The Wiener deconvolution was assumed to have a Gaussian point-spread function with the parameter σ = 2 estimated experimentally from the training set. Finally, the last column of Table 2 includes the results of a contrast-enhancement approach obtained by sharpening the input with parameters similarly tuned on the trained set. Although our approach is not consistently outperforming the other on each individual quality score, when the combined score Tot cs is considered, our method outperforms the others by a large margin. Semi-quantitative analysis (MOS) To perform the MOS, nine independent experts were asked to evaluate 46 images each. Full-size LR org were selected randomly from test set of pCLE org , and used to generate SR reconstructions. At each step, the SR images obtained by the three different methods (SRGAN, FSRCNN and EDSR) trained on synthetic data and a contrast-enhancement obtained by sharpening the input (used as a baseline) are shown to the user, in a randomly shuffled order. The input and the HR are also displayed on the screen as references for the participants. For each of the four images, the user assigns a score between 1 (strongly disagree) to 5 (strongly agree) on three different questions: To make sure that the questions were correctly interpreted, each participant received a short training before starting the study. The results on the MOS are shown in Fig. 3. EDSR is the approach that achieves the best performance on Q2 and Q3. Instead based on Q1, both FRSCNN and EDSR do not introduce a significant amount of artefact or noise. The results of the MOS give us one more indication, which our training methodology allows improvements on the quality of the pCLE images. In Fig. 4 is shown a few examples of the obtained SR images using our proposed methodology. Fig. 3 Results of the MOS using a contrast-enhancement approach, FSRCNN, EDSR and SRGAN. 
The plots report the results for the three questions. Discussion and conclusions This work addresses the challenge of super-resolution for pCLE images and is, to our knowledge, the first to evaluate the potential of deep learning and exemplar-based super-resolution in the pCLE context. Its main contribution is to overcome the lack of ground-truth data: a novel methodology is proposed that produces pseudo-ground-truth images by exploiting an existing video registration method and simulates realistic LR images based on a physical model of pCLE acquisition. We conclude that synthetic pCLE data can be used to train CNNs that are then applied to real data, because the physically inspired simulation process reduces the domain gap between real and simulated images. A robust IQA evaluation based on the Structural Similarity (SSIM) and Global Contrast Factor (GCF) scores confirmed the improvement of the obtained results with respect to the input image. An analysis of perceptual image quality with a Mean Opinion Score (MOS) study, recruiting nine independent pCLE experts, showed that the SR models give clinically interesting results: experts perceived an improvement in the quality of the reconstructed images with respect to the input image without noting a significant increase in the amount of noise and artefacts. The quantitative and semi-quantitative user-perception analyses provided consistent conclusions. Providing better-quality pCLE images might improve the decision process during endoscopic examination. Further evaluation will focus on the temporal consistency of the super-resolution and will rely on histopathological confirmation to validate the authenticity of the generated details.
The Financial Cost of Managing Menstrual Hygiene in Schools in the Health District of Bla in Mali Introduction: Many women and girls face financial challenges in meeting their menstrual hygiene management needs. The main objective of this study is to estimate the financial cost of menstrual hygiene management among school-going girls. Methodology : This is a prospective study carried out in a semi-urban school environment at the Public High School in the health district of Bla in 2020. Results : This study was conducted among 125 high school girls with no income. The average age was 18 years old. Parents lived in rural areas with no fixed monthly income in 52% of cases. Multipurpose pieces of cloth were the protective material used in 67% of cases. The girls explained this choice in 100% of cases by the high cost of single pads and tampons. The average monthly cost of menstrual hygiene management was 0, 56 $ or 6, 67 $ per year with extremes of 0, 16 and 2, 45 $ per month. This amount was used to buy either single-use pads or soap for body care and multiple-use pads. This financial cost was covered at 92% by female support (mother, aunt, sister); male support (father, spouse) accounted for only 8%. Conclusion : This study made it possible to estimate the average annual financial cost of menstrual hygiene management among young school girls in Bla. These findings call for further studies to better understand the financial implications of menstrual hygiene management in low-resource settings. INTRODUCTION Menstruation marks the beginning of reproductive life in young girls. Throughout the world and specifically in Mali, many young girls encounter difficulties in managing their periods. Several studies have made this observation [1][2][3][4]. These difficulties are related either to the lack of information or to the insufficiency of adequate infrastructures or material means available for the management of menstrual hygiene. They are mainly related to the financial cost [5] linked to menstrual hygiene management needs and the lack of income to cover this cost. The issue of menstrual hygiene has not yet received the desired attention in low-income countries like Mali. The lack of response to the needs of girls and of appropriate policies in terms of individual menstrual hygiene management can have consequences not only on the reproductive life of young girls but also on hygiene and public sanitation. This management has a financial cost which concerns the purchase of the material necessary for the absorption of the menstrual flow, the inputs for the body maintenance and the elimination of the worn material. Girls in school naturally face this cost regardless of their parents' income. Literature reviews [2,6] note above all the abundance of studies on knowledge and practices on the subject [7][8][9], the impact of insufficient management on the academic performance [10] availability of types of menstrual flow 253 absorbent materials and menstrual hygiene in humanitarian emergencies [3]. Studies on the financial cost and economic impact [5] are rare. The lack of data does not favor the development of effective policies in this area. We therefore initiated this study to evaluate the financial efforts made by girls and their families in schools in Bla, Mali for the management of menstrual hygiene. OBJECTIVE Estimate the direct financing of the management of menstrual hygiene among young girls in the Public High School of Bla in the region of Ségou in Mali. 
METHODOLOGY This is a survey of 125 young girls attending the public high school in Bla. This high school receives students from the urban environment of the city of Bla and from the surrounding villages. Data were collected through semi-structured questionnaires completed by the girls themselves. The direct financial cost in this survey covered the cost of purchasing single-use sanitary napkins, soap and detergents for body care, and multiple-use materials. The survey was carried out after obtaining the consent of the young girls and the authorization of parents or guardians for those who were not yet of age. DISCUSSION This survey concerned 125 young school-going girls with an average age of 18 years. This is comparable to the average ages of girls in the urban and rural groups of the comparative study by Shibeshi [7] in Ethiopia, which were 17.2 and 17.5 years, respectively. In our study, the age of the participants was between 14 and 21 years. The majority of studies have involved young girls of similar age groups. Thus, Babagoli [5] conducted a cost-effectiveness and cost-benefit study in Kenya with girls aged 14 to 16, Nnennaya and collaborators [4] in Nigeria reported an age group of 10-19 years, and Ha and Alam [11] carried out a comparative study in Bangladesh among young girls aged 14 to 19 in urban and rural areas. This can be explained by the fact that all these studies were carried out in school settings. Bushathoki [3], who studied a general population of women in post-earthquake Nepal, reported an age range of 15-49 years. The choice of this age group was to target a population without financial autonomy that was nevertheless capable of providing sufficient information and of making the minimum expenses necessary for the management of menstrual hygiene. In our study, 52% of the girls' parents lived in rural areas, 54% were farmers, and 64% had at most a primary education level. Regarding financial resources, 82% of the parents who financed the cost of menstrual hygiene management had no fixed monthly income. This probably partly explains the high rate of use of reusable pieces of fabric, 67% (48% reusable fabric only and 19% mixed), in a situation of financial inability to buy sanitary pads or cotton. Indeed, the girls themselves cited the high cost of single-use protection as the reason for not using it in 100% of cases. The average monthly cost in this study was $0.56, i.e., an average annual expenditure of $6.67. The highest cost was recorded among girls who used single-use protection, $2.45 per month or $29.45 per year. The lowest monthly expenditure, $0.16, was recorded among girls using multiple-use materials; this amount covered the purchase of soap for washing the reusable pieces of fabric and for body care. In contrast to the study carried out by Babagoli and collaborators [5], our study did not aim to analyze the cost-effectiveness and cost-benefit of the different types of protection, but rather to estimate the financial effort made by parents to ensure a minimum level of menstrual hygiene for the girls. In this study, mothers, aunts, and sisters provided this effort in 92% of cases; fathers intervened in only 6% of cases.
The results of this study call for further studies to better understand the equity and human rights implications of financing menstrual hygiene management in low-resource settings.
Mathematical models for intra- and inter-cellular Ca2+ wave propagations Intraand inter-cellular Ca2+ waves play key roles in cellular functions. Focal stimulation triggers Ca2+ wave propagation from the stimulation point to neighboring cells through the cytoplasm, which involves localized metabolism reactions and specific diffusion processes. Briefly, inositol 1,4,5-trisphosphate (IP3) is produced at membranes and diffuses into the cytoplasm, resulting in Ca2+ release from the endoplasmic reticulum (ER). Particularly, Ca2+ released from the ER is mediated by two principles, the IP3-induced Ca2+ and Ca2+-induced Ca2+ releases. Ca2+ is diffused through the cytoplasm and, furthermore, transported into neighboring cells through gap junctions. These intraand inter-cellular Ca2+ waves have been widely investigated using theoretical and experimental methods in various cell types. In this review we summarize the mathematical models used for the numerical simulation of intraand inter-cellular Ca2+ wave propagations. Introduction Ca 2+ is one of the most important messengers in cells. Ca 2+ signal is a result of a variety of stimuli and is a transient increase of the intracellular concentrations. Ca 2+ relays the information arriving at the cell surface to intracellular targets or coordinate groups of cells through inter-cellular communications. Information is generally transmitted as a Ca 2+ wave, which increased Ca 2+ travels from a stimulus through the cytoplasm of a cell and group of cells. Currently, intra-and inter-cellular Ca 2+ waves are recognized as important for intra-and inter-cellular communications. Therefore, Ca 2+ waves have been widely investigated in various cell types using theoretical and experimental methods. Ca 2+ influx occurs via two pathways ( Fig. 1): (i) inflow from the extracellular medium through Ca 2+ channels in the plasma membrane and (ii) release from internal stores. There are several types of Ca 2+ channels, such as voltagecontrolled, receptor-operated, and mechanically operated channels. Ca 2+ release from internal stores, such as the endoplasmic reticulum (ER), is mediated principally by two types of Ca 2+ receptors ( Fig. 1): the inositol 1,4,5-triphosphate (IP 3 ) receptor (IP 3 R) and the ryanodine receptor (RyR). In general, membrane receptors, such as G-protein-coupled receptors, are activated by an extracellular agonist. This results in the phospholipase Cβ (PLCβ) mediated hydrolysis of membrane phospholipid phosphatidylinositol 4,5-bisphosphate (PIP 2 ) into IP 3 and diacylglycerol (DAG). IP 3 is then released and diffused into the cytosol, leading to the opening of the IP 3 R and subsequent release of Ca 2+ (IP 3 -induced Ca 2+ release, *E-mail: sera@mech.kyushu-u.ac.jp Fig. 1 Diagram of the major fluxes of Ca 2+ in the cytoplasm. Ca 2+ influx occurs via two pathways: (i) inflow from the extracellular medium through Ca 2+ channels and (ii) release from the endoplasmic reticulum (ER). There are two types of Ca 2+ receptors on ER membranes, IP 3 , and ryanodine receptors, which are involved in the IP 3 -induced Ca 2+ release (IICR) and Ca 2+ -induced Ca 2+ release (CICR), respectively. IICR). Furthermore, RyR is involved in the Ca 2+ -induced Ca 2+ release (CICR). Additionally, Ca 2+ is removed from the cytoplasm in two principal ways: pumped out of the cell through the plasma membrane and reuptake from the cytosol to internal stores. 
On the other hand, inter-cellular Ca 2+ waves were reportedly dominated by IP 3 and Ca 2+ signaling through gap junctions and paracrine adenosine triphosphate (ATP) messaging [1]. The IICR/CICR models and one/two dimensional (1D and 2D) mathematical models of the intraand inter-cellular Ca 2+ waves in various cells have been proposed to study the metabolism and diffusion processes of Ca 2+ [5][6][7][8][9][10][11][12]. In this review, we summarize the mathematical models for the numerical simulation of intra-and intercellular Ca 2+ wave propagation. IICR model In the IICR model, IP 3 R is modulated by the binding of IP 3 and Ca 2+ . Here, Ca 2+ plays a dual role by activating and inactivating IP 3 R. Lebeau et al. [13] assumed that the IICR model is composed of four, functionally identical, independent subunits: (i) the shunt state S; (ii) open state O; and (iii, iv) two inactive states, I 1 and I 2 ( Fig. 2A). Binding of IP 3 causes IP 3 R to be converted to the O state from the S state. The O state is relatively unstable and the subunits progress through to the more stable I 1 state, in which IP 3 is still bound but the channels do not conduct. IP 3 R recovers from the I 1 state to the S state, where IP 3 dissociates from its binding site. The receptor can then rebind IP 3 and repeat the cycle. In addition, a transition from the I 1 to I 2 state is included in this process. In this case, the I 2 represents a second inactive site where IP 3 is no longer bound. The pathway is agonist specific and involves phosphorylation of the IP 3 R. The open probability of IP 3 R (P IP3 ) is given by the following: 3 ] are intracellular Ca 2+ and IP 3 concentration; and k -1 , k 2 , k 3 , and k 5 are constant; α 1 and α 4 are the maximum rates of the S to O and the I 1 to the I 2 transitions, respectively; and β 1 and β 4 are Ca 2+ and IP 3 concentrations at which the rate is half maximum, respectively. The model dose not explicitly incorporate the binding sites of Ca 2+ , but instead incorporates the effect of Ca 2+ through modulating the forward rate. Therefore, each transition between the states is also modulated by Ca 2+ to improve the IICR model [14]. CICR model Friel [15] proposed the simple CICR model to simulate Ca 2+ oscillation. Many cell types respond to various stimulations with oscillations in the Ca 2+ concentration. The periodic fluctuations of cell membrane potentials, and the associated periodic Ca 2+ entry through the Ca 2+ membrane channels, are involved in this complicated mechanism of Ca 2+ oscillation [3,15]. In other words, Ca 2+ oscillations are also dominated by the balance between the uptake and release by the ER. This model is assumed to be a single ER in a cell, and the ER exchanges Ca 2+ with the cytoplasm (J L2 :Ca 2+ release, J P2 : Ca 2+ uptake), which in turn exchanges Ca 2+ with external medium (J L1 : Ca 2+ entry, J P1 : Ca 2+ extrusion). The equations for this phenomenon are as follows: and κ 1 , κ 2 , K d , and n are based on the experimental data. On the other hand, the RyR activation occurs within milliseconds, whereas inactivation occurs on a timescale of a few seconds [16]. This adaptation occurs during the slow, spontaneous decrease in the open probability of a channel after it has been rapidly activated by Ca 2+ . 
Keizer and Levine [17] proposed a simplified model to mimics RyR adaptation using an open probability state (P), as follows: 1D and 2D model of intra-and inter-cellular Ca 2+ wave propagation As mentioned above, Ca 2+ diffuses through the cytoplasm and propagates into neighboring cells via gap junctions. Höfer et al. [7] considered the linear array of cells coupled by gap junctions and proposed a simple mathematical model for intra-and inter-cellular Ca 2+ wave propagation through gap-junctional Ca 2+ diffusion, as follows: fluxes are assumed to be proportional to the concentration differences across the gap junctions and permeability. As the results, they reported that the effective gap-junction Ca 2+ permeability in this model agreed with experimental data. Edwards and Gibson [5] proposed a 2D model for intra-and inter-cellular Ca 2+ waves in a network of glial cells that incorporates a simplified IICR and inter-and extracellular pathways. ATP binds to membrane receptor, initiating G-protein cascade that results in the IP 3 production involving PLCβ. IP 3 diffuses within the cell, leading to the Ca 2+ release from ER and to the ATP release in the extracellular space. This Ca 2+ release subsequently produces further IP 3 . The 2D space is discretized by 1 μm squares grid for extracellular space with the cells superimposed over this grid, and the model consists of 4 components: the extracellular space, cell somata, protruding processes of cells and gap junctions. These cells are individually connected through gap junctions in addition to extracellular communication, and IP 3 and ATP subsequently diffuse through gap junctions and extracellular spaces, respectively. Finally, the concentrations of ATP, IP 3 , Ca 2+ , and G-protein in each component are calculated by solving the reaction-diffusion equations. The model showed that extracellular pathway increased the extend and duration of Ca 2+ wave but did not change the propagation speed, and that the speed was alternatively increased by the amount of gap junctions. Kobayashi et al. [8] proposed a 2D model for inter-cellular Ca 2+ waves in keratinocytes, which included IICR, CICR, diffusion of Ca 2+ , and IP 3 through gap junctions and ATPmediated paracrine communications (Fig. 3), and modeled the ATP concentration in culture medium, the [IP 3 ] i and [Ca 2+ ] i in ith cell, the gap-junction activity, and influx from extracellular Ca 2+ . They considered a 2D space, where the circles were randomly distributed and any two cells were connected through gap junctions, and reported the consistency of Ca 2+ wave between experiment and simulation (Fig. 4). In addition, they demonstrated the utility of the mathematical model in various skin diseases by blocking the paracrine and gap junction communications. Warren et al. [18] also developed the mathematical model for IICR [13], CICR [17], and extracellular paracrine pathways for the intra-and inter-cellular Ca 2+ wave propagations in airway epithelium. In this model, mechanical stimulus triggers the release of ATP from the simulated cell and opens of the membrane Ca 2+ channels of the stimulated cell (autocrine) and non-simulated cells (paracrine) to allow the Ca 2+ influx from the extracellular space. The release of ATP results in the activation of the G-protein at the membrane and IP 3 production. IP 3 can diffuse through the cytoplasm to initiate the release of Ca 2+ from the ER or diffuse through gap junction to initiate Ca 2+ release from the ER in adjacent cell. 
The strength of mechanical stimulus was described by the relationship fitted to match the dose-response curve for airway epithelium to ATP, and the effective conductance of the membrane Ca 2+ channel was parameterized based on the experimental data. The flux of IP 3 between cells was assumed to be proportional to the concentration difference across the boundary. The reaction/diffusion equations were solved on a 2D mesh consisting of rectangular cells measuring 25 μm × 25 μm, and each cell consists of 81 nodes with an accompanying 64 elements. They considered the various cases in Ca 2+ wave propagation, such as a Ca 2+ -free extracellular space, physical gap in culture (Fig. 5), and different amounts of released ATP for extracellular communication. The comparison with experimental study showed that the decay in the magnitude of Ca 2+ with increasing radius was much flatter in experiment than numerical simulation, suggesting that an additional mechanism might exist for regenerative release of ATP from cells downstream of the stimulated cell (Fig. 5). Long et al. [9] assumed the intercellular Ca 2+ wave is transported between cells by diffusion through gap junctions and established the onedimensional chains of endothelial cells. The fundamental equation of Ca 2+ is expressed as follows: where D is a coefficient representing diffusion between cells through gap junctions, and f is the rate of change of intracellular Ca 2+ . x and t are the position and time variables, respectively. In addition, to model IICR and CICR, f is simply expressed by a simple Ca 2+ dependent non-linear reaction function: with k([Ca 2+ ]) representing a Ca 2+ dependent calcium release/intake rate constant. The reaction/diffusion equation above is discretized in space and time using finite differences as followed: [Ca ] x t x t i n i n is Ca 2+ concentration in cell "i" at time t n+1 . D* and k*([Ca 2+ ]) are a dimensionless diffusion coefficient and calcium release/intake rate constant, respectively. They examined multicellular structures composed of a single chain of cells and demonstrated inter-cellular Ca 2+ wave propagated from the stimulated cell in experimental and theoretical studies (Figs. 6 and 7). In addition, they demonstrated inter-cellular Ca 2+ wave propagations in the different chain of cells with a side branch, "T" structure cells, and suggested the importance of the architecture of multicellular structures. Additional models for Ca 2+ oscillation have been proposed but are not covered in our review [19,20]. Three-dimensional (3D) model of intra-and inter-cellular Ca 2+ wave propagation In almost all previous models for intra-and inter-cellular Ca 2+ wave propagation, cells are assumed to be a single computational domain. In other words, all metabolic reactions occur homogenously in cells. However, in living cells metabolic components, such as proteins and enzymes, are not homogenously present in subcellular compartments. In addition, the cytosol, membrane localizations, diffusion within cell spaces, and their related reactions must also be localized in cells. Moreover, certain ions and proteins can diffuse in 3D cell spaces. Ca 2+ wave propagation is mainly dominated by Ca 2+ diffusion, and so, to some extent, the simulation may be sufficient in 1D and 2D model. 
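To illustrate how such 1D chain models behave in practice, a minimal explicit finite-difference sketch is given below. The published rate function k([Ca2+]) is only described qualitatively in the text, so a generic Hill-type release term with linear removal is used purely as a placeholder, and all parameter values are illustrative assumptions rather than those of Long et al.

```python
import numpy as np

def simulate_chain(n_cells=30, n_steps=5000, dt=1e-3, d_star=0.2,
                   v_max=1.0, k_half=0.4, n_hill=2, k_rem=0.5, ca_stim=1.0):
    """Explicit finite-difference simulation of Ca2+ spreading along a 1D
    chain of cells coupled by gap-junctional diffusion (dimensionless units).
    The reaction term is a placeholder, not the published rate function."""
    ca = np.full(n_cells, 0.05)          # resting Ca2+ level (arbitrary units)
    ca[0] = ca_stim                      # focal stimulation of the first cell
    history = [ca.copy()]
    for _ in range(n_steps):
        # Gap-junctional coupling: discrete Laplacian with no-flux ends.
        lap = np.zeros_like(ca)
        lap[1:-1] = ca[:-2] - 2 * ca[1:-1] + ca[2:]
        lap[0] = ca[1] - ca[0]
        lap[-1] = ca[-2] - ca[-1]
        # Placeholder CICR-like reaction: Hill-type release minus removal.
        release = v_max * ca**n_hill / (k_half**n_hill + ca**n_hill)
        ca = ca + dt * (d_star * lap + release - k_rem * ca)
        history.append(ca.copy())
    return np.array(history)
```

Plotting successive rows of the returned array shows the elevated Ca2+ front moving cell by cell away from the stimulated end of the chain, which is the qualitative behaviour the 1D models above are designed to capture.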
However, strictly, metabolic components, such as proteins and enzymes, are not homogenously present in subcellular compartments, resulting that the cytosol, membrane localizations, diffusion within cell spaces, and their related reactions must also be localized in cells. Intra-and inter-cellular Ca 2+ waves are recognized as important for intra-and intercellular communications and are involved in various signal transductions, such as protein kinase Cα (PKCα) and Ca 2+ / calmodulin-dependent protein kinase (CaMK) signals. For example, in PKCα signal, DAG diffuses through the plasma membrane and PKCα is activated by both Ca 2+ and DAG, as indicated by translocation from the cytosol to the membrane [21], and PKCα is translocated to different areas of the cell membrane depending on the Ca 2+ influx from extracellular space [22]. Additionally, recent studies show that PKCα was translocated toward wounded cells following destruction of cell-cell connections by single cell wounding [23,24] and that PKCα accumulated at points of mechanical stimulus [25]. 1D and 2D homogenous numerical simulations of Ca 2+ wave propagation make it impossible to investigate these heterogeneous signals. Therefore, 3D heterogeneous metabolic reaction/diffusion framework of Ca 2+ wave propagation could inform investigations of interand intracellular communications and functions. Regarding intracellular Ca 2+ dynamics, if the waves are initialized in a small region near the cell surface, they will spread as a roughly spherical wave through the cell and will provide the distribution of the ER uniformly in a cell. However, concave waves are found experimentally in some species, such as sea urchins [26], which may be due to enhanced distribution of the peripheral ER structures [27]. Hunding and Ipsen [28] distributed the small spheres containing ER randomly inside the cell and simulated a Ca 2+ wave in a simple 3D sphere based on the CICR from homogenous and heterogeneous ERs through channels activated by IP 3 . On the other hand, we derived computational domains for membrane and cytoplasmic processes to achieve the heterogeneous metabolic reactions (Fig. 8) based on the diagram of the major fluxes involved in the cytoplasmic Ca 2+ (Fig. 1) [29]. In this framework, we divided metabolic reactions into each domain according to a previous study of endothelial cells [30]. Finally, the intraand inter-cellular Ca 2+ wave propagations were induced using microscopic stimulation and were compared between numerical simulations and experiments. In simulation, we assume that Ca 2+ , IP 3 , PKC, and Ca 2+ bound PKC (PKC. Ca 2+ ) diffuse across the cell. For example, Ca 2+ dynamics is modeled according to the following equation: where V rel is the Ca 2+ release rate from the ER, V ER is the Ca 2+ influx rate into the ER, V com1 is the Ca 2+ consumption rate in the cytoplasm, V out is the Ca 2+ release rate into extracellular spaces, and D Ca is a diffusion coefficient for Ca 2+ . Similarly, the IP 3 dynamic, PKC, and PKC.Ca 2+ are also modeled based on reaction/diffusion equations. In intercellular Ca 2+ wave propagations, we excluded extracellular communication and modeled the diffusion of IP 3 and Ca 2+ into neighboring cells via gap junctions. 
The resulting metabolic reactions and Ca 2+ flux into neighboring cells is simply described by the following equation: where f Ca is Ca 2+ flux through gap junctions, K Ca is the permeability rate, and [Ca 2+ ] sti and [Ca 2+ ] nei are Ca 2+ concentrations in stimulated and neighboring cells, respectively. Similarly, the flux of IP 3 is also defined. Geometries of the cells in which Ca 2+ waves were observed experimentally were segmented from confocal microscopic images. Cartesian computational grids with 0.5-μm pitch were then generated using segmented surface data and membrane was assumed to be 0.01 μm height of the surface, and gap junctions were set at the surfaces of connecting voxels between stimulated and neighboring cells. The experiment (Fig. 9) and simulations (Fig. 10) of intra-and inter-cellular Ca 2+ waves show that Ca 2+ waves propagate from a focal stimulated point to neighboring cells, indicating the utility of our 3D model for investigations of intra-and inter-cellular messaging in endothelial cells. However, we modeled the IICR mechanism only, which simply depended on the IP 3 concentration and not the open probability of IP 3 R. Hence, although our 3D heterogeneous metabolic reaction/diffusion model is the first to simulate intra-and inter-cellular Ca 2+ waves, further simulations need to be designed to improve Ca 2+ release from internal store. In this section of 3D Ca 2+ wave propagation, we focused Schematic of Ca 2+ wave signaling in heterogeneous metabolic reaction/diffusion model [29]. The computational domain is divided into the membrane and cytoplasm. The divided metabolic reactions were also further divided into each domain according to a previous study [30]. Membrane receptors are activated by caged ATP (A), resulting in PLC mediated hydrolysis of the membrane phospholipid PIP 2 into IP 3 and DAG (B&C). IP 3 is then released into the cytosol and stimulates Ca 2+ release from the ER (F) during degradation (E). In this study, we assumed that ER distributed homogenously in cells. Ca 2+ then activates membrane PLC to induce further Ca 2+ release from the ER [34]. At this point, Ca 2+ returns to the ER (G), is bound by CaM (H) and PKCα (I), and is released into extracellular spaces (J). Finally, PKC. Ca 2+ is bound by DAG at membrane (D). Membrane thickness is assumed to be 0.01 μm and that IP 3 and Ca 2+ are assumed to be diffused through the cytoplasm in the stimulated cell and into neighboring cells via gap junctions. Extracellular communication is excluded. Fig. 9 Microscopic images of intra-and inter-cellular Ca 2+ waves in endothelial cells based on experimental analysis [29]. In this experiment, caged ATP was used to stimulate a single cell microscopically and trigger a Ca 2+ wave. Here, the stimulus point is indicated by a white circle in A. The EGTA (Dojindo, JAPAN) and Apyrase (Sigma-Aldrich Corp., MO, USA) were loaded to ignore the Ca 2+ influx from the extracellular space and paracrine pathway, respectively. Rhod-4-AM (AAT Bioquest, CA, USA) was used as a Ca 2+ indicator. The dotted line and white arrow indicate the cell geometry and gap junction, respectively. The white line in G demonstrates the Ca 2+ wave direction, and inter-cellular Ca 2+ wave propagates from the stimulus point. on the heterogeneous metabolic reaction models. On the other hand, not heterogeneous reactions, additional 3D frameworks of Ca 2+ waves are proposed, such as for cardiomyocytes [31] and coupled of endothelial cells and smooth muscle cells [32]. 
In particular, in myocytes, not only Ca2+ wave propagation but also active contraction can be simulated with the finite element method [31]. Summary Ca2+ is one of the most important messengers in cells, and intra- and inter-cellular Ca2+ waves are widely recognized as an important factor in intra- and inter-cellular communication. These waves have been investigated using theoretical and experimental methods in various cell types. In this review, we summarize the mathematical models used for numerical simulation of intra- and inter-cellular Ca2+ wave propagation. Various 1D and 2D mathematical models have been proposed and validated by experimental studies. Because Ca2+ wave propagation is mainly driven by Ca2+ diffusion, 1D and 2D models may, to some extent, be sufficient for numerical simulation. Strictly speaking, however, metabolic components are not homogeneously distributed among subcellular compartments, and their associated reactions are therefore also localized within cells. Intra- and inter-cellular Ca2+ waves are recognized as important for intra- and inter-cellular communication and are involved in various signal transduction pathways. Therefore, a 3D heterogeneous metabolic reaction/diffusion framework will be useful for investigating these communications and functions.
Origin of the largest South American transcontinental water divide Interbasin arches between hydrographic systems have a heterogeneous geological origin, forming under the influence of several different geomorphological processes. Independent of the underlying processes, these arches compartmentalize present-day river basins, encompassing different water chemistries, habitat types, soil domains, potential energy and, on a geological/evolutionary time scale, aquatic life varieties in the ecosystem. Through most of its length, the water divide between the Amazonian, Paraná-Paraguay, and São Francisco river basins in central South America coincides with an Upper Cretaceous intracontinental igneous alkaline province. This magmatism, independent of its nature, caused intense crustal uplift and influenced hydrological networks at different scales: from continental-scale crustal doming to continental break-up, and finally to local-scale phenomena. The available ages for alkaline rocks indicate a well-defined time-interval between 72.4 to 91 Ma (concentrated between 76 and 88 Ma) period of uplift that contributed to large-scale drainage compartmentalization in the region. Here we show that uplift associated with intrusive magmatism explains the origin and maintenance of the divide between the Amazonian, Paraná-Paraguay, and São Francisco river basins. to the origin of inland river basins are not well known 15 , and this holds true for the water divide comprising the Amazonian, Paraná-Paraguay, and São Francisco river basins. Geologically, the headwater streams of these continental-scale river systems are located at the margins of major South American cratons, namely the Amazonian, São Francisco, Rio de la Plata, São Luiz, and Luiz Alves cratons, which are surrounded by large ancient orogenic belts (Mantiqueira and Tocantins provinces) formed during the amalgamation of the Western Gondwana supercontinent in the Neoproterozoic 16 . Despite their orogenic origins, such ancestral mountains are too old to be directly associated with present-day landscapes or divides. However, a remarkable fact about Brazilian relief is the presence of Mesozoic summit surfaces at high altitudes 17 . Such flat tops on several high-relief relict topographic structures along the abovementioned water divide provide evidence of the long denudation history of an ancient Gondwanaland plateau 17 . The development of this Cretaceous mega-plateau of about 2000 m of topographic elevation 18 was coeval with local-scale volcanism, rifting, and uplifts 19 . Installation of the present-day observed drainages occurred alongside such mega-geomorphological dynamism. Results Current Amazonian-Paraná-Paraguay-São Francisco water divide. One of the most conspicuous characteristics of the current configuration of the Amazonian, Paraná-Paraguay, and São Francisco river basins is the ~2300-km-long, NW-SE-oriented water divide. This long trajectory over the Brazilian shield coincides with a remarkable geological feature -the Azimuth 125° lineament (Figs 1 and 2). This feature was first described as a succession of diamond deposits, located in Brazil, aligned from Abaeté (state of Minas Gerais) to Rio Machado Major geological features associated with the Azimuth 125° lineament: topographic high-relief consisting of water divides between the Amazonian, São Francisco (draining northward) and Paraná-Paraguay systems (draining southward). Intrusive alkaline complexes, paleo volcanoes, paleocurrents, and ages of alkaline intrusions are also plotted. 
Geological units correspond to sedimentary rocks deposited before uplift, (occurring both northern and southern to Azimuth 125° lineament, in black), and those deposited after uplift (restricted to northern or southern sides, in white). Area illustrated in Fig. 2 Age of the central South American river basins. Previous contributions dealing with the origins of the modern river system in the South American interior 1,2,23,24 agree with some basic points: (1) the present main water divides and basin architecture are Mesozoic in age; (2) major Jurassic-Cretaceous events, such as the break-up of the Gondwanaland, have a significant tectonic influence on the compartmentalization of present-day sedimentary and fluvial systems; and (3) the Andean chain significantly contributes to major hydrological changes; for example, Cenozoic deformations of the ancient post-Cretaceous paleo-plateaus were influenced by the geotectonic evolution of Andean foreland systems. Sedimentary records of intracratonic basins (and associated paleocurrent data) provide insights on the timing of spliting between adjacent fluvial systems. Along the Azimuth 125° lineament, the youngest shared sedimentary sequence between northern (Parecis) and southern (Paraná) intracratonic sedimentary basins is the Lower Cretaceous sandstone of the Botucatu Formation, located at the western limit of the azimuth between the upper Tapajós-Xingú and Paraguay river basins 25 . There is also no evidence of Mesozoic sediments of the Paraná basin (or Bauru basin) crossing the Canastra range 26 , the divide between the upper Paraná and São Francisco rivers. A compilation of available, mainly unpublished paleocurrent and provenance data [27][28][29][30][31] , for the late Cretaceous sedimentary units north and south of Azimuth 125° lineament shows a clear dispersion from this lineament, which behaved as a topographic high during sedimentation. Relatively well-known major geological events, such as the opening of the South Atlantic Ocean (Jurassic to early Cretaceous) or the rise of the Andean chain (late Cretaceous to Cenozoic), can explain several aspects of the South American drainage evolution, particularly along the eastern, passive, rifted margin of the continent 23 as well as on the opposite convergent Andean margin 32 . However, the origin of the present-day N-S compartmentalization of the drainage network requires further explanation with respect to the underlying combination of mechanisms involved. Heat source for intracontinental magmatic province formation. Heat source and faulting are important factors affecting the formation of intracontinental magmatic provinces, as here proposed to cause the formation of the long, South American transcontinental water divide. In this section, two proposed general alternative heat-source models are addressed: mantle plumes and tectonic reactivations. Geologic, geomorphologic, and geochronologic evidence has been used to postulate that the alkaline rocks between Poços de Caldas (continental interior; Minas Gerais) and the Cabo Frio coast (Rio de Janeiro) have a WNW-ESE alignment and were emplaced during the displacement of the South American plate over the Trindade hot spot currently located at ~18°40′S in the Mid-Atlantic Ridge (mantle plume hypothesis) 33 . According to this view, during the Eocene, this supposedly existing hot spot probably moved to the eastern boundary (coast of Rio de Janeiro) of South America, causing important tectonic and magmatic events. 
This relative hot spot displacement has been considered to have caused the formation of the volcanic Vitória-Trindade chain, located off the eastern coast of Brazil, corresponding to the oceanic extension of the Azimuth 125° magmatic lineament. Furthermore, the genesis of the Poxoréu Igneous Province (Mato Grosso, western Brazil) has been also proposed to possibly be associated with a more intense lithospheric extension above the western margin of the postulated impact zone of the Trindade plume, permitting greater upwelling and melting farther to the west at ~84 Ma 34 . Therefore, according to this view, the Trindade plume was considered to possibly represent a super-plume with a diameter of ~1000 km, and the plume were thought to serve as heat sources for continental-interior igneous province formation. It is important to note, however, that the western end of the Vitoria-Trindade Chain is more than 280 km north of the southeastern end of the Azimuth 125° magmatic lineament. Moreover, the plume hypothesis has been criticized recently because geochemical data do not support that tholeiites from the Paraná Magmatic Province resulted from the Trindade plume 35 , and the oceanic crust was recently reactivated as well as subject to alternating compressive and extensional stresses associated with normal faulting and volcanism 36,37 . Several supposedly existing "hotspot tracks", such as the Vitória-Trindade chain, might reflect that the heat is derived from the accommodation of stresses in the lithosphere during rifting rather than continuous magmatic activity induced by mantle plumes beneath the moving lithospheric plates. Considering this view, regional thermal anomalies in the deep mantle, mapped using geoid and seismic tomography data, offer an alternative, non-plume-related heat source for the generation of intracontinental magmatic provinces 35 . The distribution of alkaline occurrences along NW-SE-trending crustal discontinuities extending over 800 km and the nature of the magmatism as described above clearly indicate that deep lithospheric faults significantly controlled the tectonics of the alkaline provinces in the Azimuth 125° lineament 38 . Alkaline bodies were emplaced between 91 and 72.4 Ma (97 and 71.1 Ma including uncertainty), with a higher concentration between 76 and 88 Ma (Fig. 3). The distribution of age-dates of the alkaline rocks along the Azimuth 125° does not show any eastward-decreasing trend. Instead, the available ages indicate a relatively long magmatic activity (~12 Ma) that weakens the hypothesis of the action of a mantle plume. In fact, available age data indicate the occurrence of different phases of alkaline magmatism from Late Cretaceous to Paleogene 38 . Thus, the supposed "impact of the Trindade starting mantle plume head" 34 that developed at about 250 km west of the Poxoréu Igneous Province on intracontinental magmatic province formation has been perceived as "very improbable" 39 . Discussion Is there a link between drainage compartmentalization and uplift controlled by intrusive magmatism? The magnetic signature of the Azimuth 125° lineament indicates a set of linear features with regional continuity in the subsurface, characterized by a higher magnetic susceptibility compared with surrounding host rocks 40 . The importance of this lineament as a system of deep crustal discontinuities serving as the main conduit for several alkaline intrusions along the azimuth axis has been confirmed recently 40 . 
The injection of dike-forming magma into the faults of the lineament occurred during two or three tectonic events: (i) between 950 and 520 Ma at two Brasiliano orogeny cycles, older (950-650 Ma) and younger (ca. 700-520 Ma); (ii) at approximately 180 Ma, during the fragmentation of Gondwana; and (iii) at circa 90 Ma 40 . A compilation of the available ages of intrusions along Azimuth 125° indicates periods of intrusions, and consequently, uplifts and large-scale drainage compartmentalization between 91-72.4 Ma (Fig. 3, Table 1). Low temperature thermochronology, including apatite fission track analysis (AFT) and a minor set of apatite U-Th/He dating (AHe), indicate that the onshore coastal region of SE Brazil experienced cooling, uplift and exhumation between 100 and 70 Ma 41 . Up to 3 km of denudation was inferred 42 , but this is significantly attenuated to the continental interior. Some alkaline rocks along the Azimuth 125° have deep sources (up to 100 and 150 km for kamafugites and kimberlites, respectively) 43 . The 3D inversion of magnetic data demonstrated that alkaline intrusions along Azimuth 125° are shallow 44 . A large number of occurrences have associated hypabyssal and/or volcanic (lavas) equivalents, or even rocks subject to phreatomagmatic interactions, indicating shallow or near surface emplacement and a very low, long-term denudation rate since the Late Cretaceous. Emplacement of intrusive bodies causes surface uplift, as observed in other regions of the world as forced folds with amplitudes related to intrusion thickness and length 45 . Some intrusions (Araxá, Catalão 1, Poços de Caldas, Serra Negra, Tapira) (Table 1) dragged the surrounding rocks, causing uplift. A conspicuous feature in the Araxá (see map in 46 ) and Serra Negra intrusions 47 is the presence of a ring of Precambrian schists and quartzites that surround the alkaline rock body. In Poços de Caldas 48 part of the roof (Early Cretaceous eolian sandstone) is preserved. Outcropping alkaline bodies show a maximum depth/major axis of 4.5/4.5 km for Araxá, 17/9 km for Tapira, 12-15/10 km for Serra Negra-Salitre and 5/5 km for Catalão 1, and alkaline bodies without surface manifestation show a minimum depth/major axis of 0.3-2/6 km for Pratinha and <2/14 km for Registro do Araguaia 44 . At the southwestern border of the Parecis Basin, along Azimuth 125°, a set of currently shallow intrusive bodies were identified from magnetic anomalies, having maximum length and thickness of approximately 11 and 3.6 km, respectively 49 . These dimensions suggest that, at the time of its placement, the surface of the terrain experienced a probable uplift of 0.1 to 1 km 45 . Although the minimum value was 100 m, this uplift is considered to be appreciable and is likely to have caused a change in the drainage network. Here we show that uplift associated with late Cretaceous (91-72.4 Ma) intrusive magmatism explains the origin and maintenance of the present-day 2,300 km long, NW-SE-oriented water divide between the Amazonian, Paraná-Paraguay, and São Francisco river basins. Independent of the underlying mechanism (mantle plumes or tectonic reactivations), high cratonic topography arose from intracontinental magmatic activities in South America 19 . 
This scenario, along with several other completely different mechanisms (such as the Andean orogeny, large-scale foreland basin subsidence, marine incursions, the rise and disappearance of mega-wetlands, and erosive and tectonic headwater captures), illustrates the splendid South American geodiversity that has acted on river basins throughout history. Methods Geological data were collected from the literature. Intrusive alkaline complexes (carbonatite, kimberlite, and syenite) were also mapped using data from the CPRM (Brazilian Geological Survey) available at http://geosgb.cprm.gov.br/. Mapping was performed using QGIS v2.18 (http://www.qgis.org). The ages of the alkaline rocks were obtained from different sources (listed in Table 1) and mainly comprise U-Pb and Ar-Ar data, with a few K-Ar and Rb/Sr determinations.
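A minimal sketch of the age-compilation step is shown below. The input file name and column names are hypothetical, and the table is assumed to hold the published ages listed in Table 1; no age values are invented here.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical compilation file with one row per dated alkaline body,
# e.g. columns: name, age_ma, error_ma, method (U-Pb, Ar-Ar, K-Ar, Rb/Sr).
ages = pd.read_csv("alkaline_ages_azimuth125.csv")

print("Age range (Ma):", ages["age_ma"].min(), "-", ages["age_ma"].max())
print("Fraction of ages between 76 and 88 Ma:",
      ages["age_ma"].between(76, 88).mean())

# Histogram of emplacement ages to visualise the main magmatic pulse.
ages["age_ma"].plot(kind="hist", bins=15)
plt.xlabel("Age (Ma)")
plt.ylabel("Number of dated intrusions")
plt.title("Alkaline intrusion ages along the Azimuth 125 lineament")
plt.savefig("alkaline_age_histogram.png", dpi=200)
```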
TRIXcell+, a new long-term boar semen extender containing whey protein with higher preservation capacity and litter size.

It was the aim of the present study to test whey as a protective protein for the sperm cell in the long-term boar semen preservation medium TRIXcell. Analyses of sperm cell motility using computer-assisted semen analysis (CASA) indicated that the whey protein Porex has a protective effect similar to that of bovine serum albumin (BSA) in maintaining the viability of stored boar sperm. Boar sperm diluted in TRIXcell+ maintains commercially acceptable motility (>60%) for 10 days, while swine sperm diluted in the semen preservation medium Beltsville Thawing Solution (BTS) maintains commercially acceptable motility (>60%) for 3-5 days for most boars. To test the on-farm fertility performance of TRIXcell+ compared to BTS, inseminations were started on 35 commercial pig production farms in the summer of 2006. During the period of July 2006 until July 2012, for each farm and each calendar year, the mean farrowing rate and litter size for semen diluted in TRIXcell+ and stored for 3-5 days were found to be higher than those of semen stored for 1-2 days in BTS. Based on data gained from a total of 583,749 sows inseminated through the years 2006-2012, the mean farrowing rate for semen diluted in TRIXcell+ and BTS was 90.4 ± 4.0% and 87.9 ± 3.6%, respectively, which is not significantly different. Based on the same data, the mean total number of piglets born alive for semen diluted in TRIXcell+ and BTS was 14.2 ± 0.7 and 13.6 ± 0.6, respectively, which is significantly different. We conclude that whey protein can effectively be used in the long-term preservation medium TRIXcell, resulting in a higher litter size.

Introduction
The widespread use of artificial insemination (AI) in pig production has led to the development of highly specialized and professional AI centers that supply high-quality diluted semen to their customers (Gerrits et al., 2005; Feitsma, 2009). In addition to dilution of the semen, the use of semen preservation media is aimed at improving preservation capabilities by adding protective compounds such as bovine serum albumin (BSA), antioxidants and antibiotics (Johnson et al., 2000; Levis, 2000; Gadea, 2003). The most widely used preservation medium for swine semen dilution worldwide is the Beltsville Thawing Solution (BTS) (Pursel and Johnson, 1975). This is a so-called short-term preservation medium, which keeps the sperm viable for most boars when stored at 16-18 °C for 1-3 days. In most cases, insemination is done on the day of production or the day after production of the seminal dose. In contrast to BTS, a long-term preservation medium keeps the sperm viable for over 3 days, the number of days depending on the type of long-term preservation medium (Gadea, 2003). Several new long-term preservation media have been introduced in recent years (Weitze, 1990; Gadea, 2003). These new preservation media have been tested using different in vitro methods (Dubé et al., 2004; Vyt et al., 2004) and by on-farm trials (Anil et al., 2004; Haugan et al., 2007). However, widespread use of long-term preservation media is limited by their price when compared to BTS (Waterhouse et al., 2004; Haugan et al., 2007). Further, a lack of large-scale comparative application of different commercial preservation media by independent research institutes makes it difficult to compare and evaluate the real value of long-term preservation media.
In particular, over the past decade there has been a trend to replace BTS with long-term preservation media, because the latter have practical advantages such as a reduction in delivery days to customers and improved management of production and delivery of diluted semen (Kuster and Althouse, 1999; Haugan et al., 2005). A potential further improvement of boar semen preservation media lies in the replacement of bovine serum albumin (BSA), which is the most commonly used protective protein in boar semen preservation media (Gadea, 2003). Since BSA is derived from cow's blood, which may be related to the occurrence of bovine spongiform encephalopathy (Colchester and Colchester, 2006), it would be better to have an alternative for application in preservation media. This paper describes the results of motility studies using computer-assisted semen analysis (CASA) and of large-scale inseminations with a new preservation medium that is based on the long-term extender TRIXcell supplemented with whey protein to replace BSA.

Animals and semen collection
Semen was collected on a routine basis at the AI Stations of Varkens KI Service (Staphorst and Punthorst, The Netherlands) using a standardized protocol. Sexually mature boars, mostly between 1 and 3 years old, of the breeds Duroc, Pietrain, York, Primeur, and Hampshire were used to collect semen for research purposes and for distribution of the diluted semen to customers. Boars were housed in individual pens (± 9 sq. m.) in environmentally controlled farm buildings. They were given ad libitum access to water and were fed commercial diets according to the nutritional requirements for adult boars (Brown, 1994). Semen was collected in the boar pens using the gloved-hand technique (Hancock and Hovel, 1959) and was filtered through four layers of sterile gauze into a prewarmed beaker to remove gel particles during collection. The semen was immediately diluted with approximately the same volume of the appropriate preservation medium, which was kept at 30 °C. To compare the use of BTS and TRIXcell+ in on-farm trials, care was taken to use ejaculates of the same boars for both preservation media, to exclude a boar effect. The pre-diluted semen was then transferred to the laboratory in insulated beakers for further processing.

Boar semen preservation media
The commercial semen preservation medium TRIXcell with BSA was purchased from IMV Technologies (L'Aigle, France). The TRIXcell semen preservation medium without BSA but with 0.1% (w/v) whey protein was named TRIXcell+. Both preservation media have a similar composition, but besides the protein difference, another difference is the composition of the antibiotics: we used gentamycin (0.06% w/v) and amoxycillin (0.06% w/v). The chemical composition of TRIXcell is proprietary. Beltsville Thawing Solution (BTS), and the chemicals and antibiotics used to prepare the TRIXcell preservation media, were purchased from Sinus Biochemistry and Electrophoresis (Heidelberg, Germany). The exact chemical composition of DiluPorc BTS from Sinus Biochemistry and Electrophoresis is unknown, but the recipe for BTS is 37.0 g glucose, 1.25 g EDTA, 6.0 g sodium citrate, 1.25 g sodium bicarbonate and 0.75 g potassium chloride per 1 L of preservation medium, as reported by Johnson et al. (2000).
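As a small worked illustration of scaling the quoted BTS recipe to an arbitrary batch volume, the Python fragment below encodes the gram amounts listed above; the helper function and the 5 L example are hypothetical and are given only for orientation, not as part of the original protocol.

    # Gram amounts per litre of BTS, as quoted above from Johnson et al. (2000)
    BTS_GRAMS_PER_LITRE = {
        "glucose": 37.0,
        "EDTA": 1.25,
        "sodium citrate": 6.0,
        "sodium bicarbonate": 1.25,
        "potassium chloride": 0.75,
    }

    def bts_batch(volume_litres):
        """Grams of each component needed for the requested batch volume."""
        return {name: grams * volume_litres for name, grams in BTS_GRAMS_PER_LITRE.items()}

    print(bts_batch(5.0))  # e.g. a hypothetical 5 L batch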
Whey protein with the brand name Porex was purchased from Phenolix (Enkhuizen, The Netherlands). Phenolix has stated that Porex has been produced from milk sourced from dairy cows that have been kept and managed according to European Community Regulation 999/2001, which has been set in place to enforce the rules for the prevention, control and eradication of certain Transmissible Spongiform Encephalopathies (TSE). Ultra-pure water was used to prepare the preservation media. The motility analyses as well as the on-farm inseminations were done using semen diluted in freshly prepared preservation media, based on mixing the individual chemical components in the lab and solubilising the mixture on the day of semen collection.

Comparative in vitro analysis of swine semen stored and diluted in BTS, TRIXcell, and TRIXcell+
Many experiments were carried out to test the motility of semen diluted in TRIXcell supplemented with Porex. For the illustrative motility experiment presented here, semen of 3 Pietrain boars that were known to give average quality sperm was collected and pooled, separated into 3 fractions and diluted using the appropriate preservation media to a concentration of approximately 30 × 10^6 sperm cells per mL. The analysis of sperm motility to compare the storage capability of BTS, TRIXcell, and TRIXcell+ was done using the Ultimate Sperm Analyzer from Hamilton Thorne (Beverly, MA, USA). The software settings recommended by Hamilton Thorne were used. Sperm cell samples were stored in 1 mL propylene tubes at 17 ± 1 °C and were only used once for analysis, to prevent any effect of shaking the sedimented sperm or opening the tube. Motility determinations were done in triplicate. For assessment of motility and progressive motility, 3.5 µL of the diluted semen was pipetted onto a prewarmed LEJA slide with 4 counting chambers and a chamber depth of 20 µm, incubated for 10 min at 37 ± 1 °C and then immediately analysed using the CASA system. At least 5 microscope fields or 1,000 sperm cells were analyzed within 2 min after mounting the slides. LEJA counting chambers were purchased from NIFA Technologies (Leeuwarden, The Netherlands).

Semen processing and insemination for large-scale on-farm insemination trials
The concentration and motility of all pre-diluted semen samples were determined immediately after arrival in the laboratory using the CASA system of Microptics (Barcelona, Spain). For assessment of motility, 3.5 µL of the diluted semen was pipetted onto a pre-warmed LEJA slide with 4 counting chambers and a chamber depth of 20 µm and immediately analysed using the CASA system. The pre-diluted semen was further diluted, based on the CASA results, with the appropriate preservation medium to a concentration of approximately 25 × 10^6 sperm cells per mL. The diluted semen was packed in Easy-Pack bags from Veltkamp (Lochem, The Netherlands) at a volume of 100 mL. The bags were sealed airtight with no air trapped inside and stored at 17 ± 1 °C. For semen diluted in BTS, collection and delivery of the diluted semen was primarily on Mondays and Wednesdays, while inseminations were done within 2 days after delivery.
For semen diluted in long-term preservation medium, collection and delivery were primarily on Fridays, while inseminations were done on day 3, 4 or 5 after delivery. Transport of diluted semen to the farms was done in climatized boxes at a temperature of 17 ± 1 °C. At the farms, the bags were stored at 17 ± 1 °C until use. Large-scale on-farm trials of TRIXcell+ along with BTS were done on 35 commercial pig production farms in The Netherlands. The number of gilts and sows per farm ranged between 300 and 1,500. Gilts and sows that were inseminated were of different genetic backgrounds: TOPIGS 20, TOPIGS 50, HYPOR, Danbred, and PIC lines. Data on farrowing rate and litter size (total number of piglets born alive), as the main parameters of fertility, were collected on-site and stored in a sow management program. For practical reasons we present the number of piglets born alive instead of the total number of piglets born, since mummified foetuses are not always counted by farmers.

Statistical analysis
Excel software (Microsoft, Redmond, WA, USA) and GraphPad Instat (GraphPad Software, San Diego, California, USA) were used for calculation of means, standard deviations, and significance of differences of the motility data. Means were considered significantly different with a p-value < 0.05. The fertility data of the on-farm trials were retrieved from the data management program of the farms and imported into Excel. Excel and GraphPad Instat were used for statistical analysis (two-way analysis of variance; ANOVA) of the insemination data. The data on the farrowing rate and total number of piglets born alive for both extenders were subjected to pair-wise comparison based on the Tukey method in conjunction with ANOVA to compare the differences. Means were considered significantly different with p < 0.05.

Table 1 shows the results of a typical example of the analysis of the motility of sperm diluted and stored in BTS, TRIXcell, and TRIXcell+ during 10 days at 17 ± 1 °C. The percentage of motile sperm cells decreased with storage time for all three preservation media, but the percentage of motile sperm cells in TRIXcell and TRIXcell+ remained higher when compared to BTS. In the case of BTS, the motility dropped below the commercial threshold of 60% at day 4, while both TRIXcell and TRIXcell+ still had at day 10 a motility percentage that indicated good quality for commercial application. The data show that there is no significant difference between the storage capacities of TRIXcell and TRIXcell+. Also in many other experiments (data not shown), TRIXcell and TRIXcell+ showed a similar storage capacity as indicated by motility.

Inseminations with semen diluted in BTS and TRIXcell+
To confirm the indication of the motility studies, large-scale on-farm inseminations were started in July 2006. A total of 35 swine production farms participated in using both BTS- and TRIXcell+-diluted sperm. The number of inseminations using TRIXcell+ versus BTS varied per farm, while the number of gilts and sows per farm ranged between 300 and 1,500. Table 2 shows the fertility results of the large-scale inseminations separated per calendar year, as well as the total over 6 years. The results show that for each calendar year both the farrowing rate and the total number of piglets born alive were higher for TRIXcell+ when compared to BTS.
The data also show that, for a total of 583,749 sows inseminated through the years 2006-2012, the mean farrowing rate was 2.5% higher and the total number of piglets born alive was 0.6 higher for semen diluted in TRIXcell+ when compared to BTS. The higher farrowing rate is not statistically significant, whereas the 0.6 higher litter size is significant (p < 0.05).

Discussion
Our study was started to find an alternative to BSA for application in semen preservation media. We focused our studies on whey protein, since whey is an attractive alternative to BSA from cow's blood. First, whey contains only low amounts of BSA. Second, whey protein is not related to the occurrence of BSE, and third, whey is much cheaper than BSA. To test replacement of BSA by the whey product Porex we used various long-term preservation media such as TRIXcell, Androhep, Zorlesco and Modeno (Gadea, 2003). However, for extensive on-farm testing we chose TRIXcell (earlier also named Tri-X-cell or X-cell), since various studies on the use of TRIXcell have been reported.

Table 1. Percentage (mean ± standard deviation) of motile and progressively motile sperm cells from day 0 to day 10 after collection and dilution of the sperm in the preservation media BTS, TRIXcell, and TRIXcell+ at a concentration of approximately 30 × 10^6 sperm cells per mL. The group p-value gives the result of the comparison of means between BTS and TRIXcell+, as well as TRIXcell and TRIXcell+.

The results of these studies, both in vitro studies (Waterhouse et al., 2004; De Ambrogi et al., 2006; Estienne et al., 2007; Lange-Consiglio et al., 2013) and on-farm insemination trials (Kuster and Althouse, 1999; Haugan et al., 2007), showed that TRIXcell is an efficient long-term preservation medium. Motility of boar sperm cells, both fresh and stored, depends on many factors such as the genetic makeup, health and age of the boar, and the season, but it was not the aim of the current paper to report on the variables involved. Our motility studies showed that Porex, when added to TRIXcell to make TRIXcell+, gave similar results with regard to storage capacity when compared to TRIXcell, which contained BSA. Our studies indicated that Porex can replace BSA in TRIXcell without any significant change in storage capability as determined by CASA. Results of CASA do not provide definite proof of fertility, since there is no clear relationship between sperm cell motility and fertility. Several studies have appeared on this subject, but with conflicting results (Liu et al., 1991; Holt et al., 1997; Gadea, 2005; Broekhuijse et al., 2012). Although motility results may serve as an indication of the storage capacity of preservation media, definite proof of performance with respect to fertility can only be gained from on-farm insemination trials. Therefore, we started on-farm insemination trials in the summer of 2006 aimed at studying the performance of TRIXcell+ along with BTS with respect to farrowing rate and number of piglets born alive as the main parameters of fertility or success of AI. During the six-year period of insemination trials, all 35 farms that participated had each year a higher farrowing rate and a higher number of piglets born alive when TRIXcell+ was used instead of BTS.
However, the consistently higher farrowing rate was not significantly different between the two preservation media. The difference in the number of piglets born alive, in contrast, is significant: TRIXcell+ showed an average increase in the number of piglets born alive of 0.6. Earlier studies indicated that TRIXcell had a performance similar to that of BTS with respect to farrowing rate and litter size (Haugan et al., 2007). Therefore, we conclude that it may be the protective effect of the whey protein Porex that causes the higher number of piglets born alive, and that Porex is an effective additive in the long-term semen extender TRIXcell.
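The pair-wise Tukey comparison described under Statistical analysis can be sketched in a few lines of Python; the per-farm litter-size values below are synthetic numbers drawn around the reported group means and are used only to illustrate the procedure, not to reproduce the study data.

    import numpy as np
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    # Hypothetical per-farm mean litter sizes for the two extenders (35 farms each)
    litter_bts = rng.normal(13.6, 0.6, size=35)
    litter_trx = rng.normal(14.2, 0.7, size=35)

    values = np.concatenate([litter_bts, litter_trx])
    groups = np.array(["BTS"] * 35 + ["TRIXcell+"] * 35)

    # Tukey HSD pairwise comparison in conjunction with ANOVA (two groups here)
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))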
HTS384 NCI60: The Next Phase of the NCI60 Screen

The new NCI60 cell line screen HTS384 shows robust patterns of response to oncology agents and substantial overlap with the classic screen, providing an updated tool for studying therapeutic agents.

Introduction
The NCI's 60 human tumor cell line screen (classic NCI60) has been in operation for more than 30 years and has proven to be a very useful drug discovery tool for the cancer research community (1-3). Initially, establishing the feasibility of the NCI60 screen focused on the development of the cell line panel, evaluation of various assays for in vitro drug sensitivity, as well as data analysis and presentation (4-6). The 60 human tumor cell lines were derived from nine cancer types including lung, colon, breast, prostate, melanoma, renal, ovarian, brain, and leukemia. In addition to the criterion that the cells span broad classes of human cancer, the specific cell lines were selected to grow in a single culture medium and demonstrate excellent reproducibility in their growth and response to test agents. Cryopreserved batches of each line were prepared and stored in the NCI Developmental Therapeutics Program Tumor Repository. At the time the NCI60 screen was developed, practical assays for cell growth in 96-well plates were limited. The protein-staining dye sulforhodamine B (SRB) was selected for use in the screen because the chemical fixation step enabled batch processing of plates in a time-independent manner. Additionally, SRB gave the best combination of stain intensity, signal-to-noise ratio, and linearity with cell number (7, 8). The standard range of concentrations tested in the screen was set from 100 μmol/L to 10 nmol/L based on empirical determinations that, for a majority of compounds used during the development of the screen, the growth response data across that range captured both the concentration causing 50% inhibition of growth (GI50) and often the concentration causing total growth inhibition (TGI).

In April 1990, the fully operational NCI60 screen was rolled out. Initially, approximately 20,000 samples/year, including pure small molecules, pure natural products, and partial isolate fractions of natural products, were tested in concentration response (1, 9). The mean graph display visualizing differential cell line response was developed, as was the COMPARE pattern recognition methodology that could identify common response patterns across the cell lines, independent of the absolute potencies of the tested compounds (4, 5, 10). The mean graph patterns of cell line responses and COMPARE correlations were highly reproducible and enhanced understanding of potential cellular processes targeted by new agents. These advances facilitated studies of structure-activity relationships to direct chemical analog synthesis. For each of the last 5 years, approximately 6,000 compounds were tested in the NCI60 screen in a single-concentration prescreen, of which ∼20% met the criteria for further testing in the five-concentration screen (internal statistics from the Screening and Drug Prep Labs). In that same timeframe, more than 500 peer-reviewed publications have used NCI60 screen data and NCI60 cell line characteristics as a critical component of the study.
The NCI60 screen is moving into a new era. The cell lines remain the same; however, the format has been updated to a fully automated 384-well assay with a 3-day test agent exposure period and a luminescent endpoint for cell viability. This report enumerates some of the similarities and differences between the classic NCI60 screen and the new HTS384 NCI60 screen using a library of 1,003 FDA-approved anticancer drugs and investigational agents tested in both assays. In addition to generating a core set, or baseline, of data for the HTS384 screen, the patterns of cell line response from the HTS384 screen are compared with the cell line response data from the classic NCI60 screen. The library of 1,003 FDA-approved anticancer drugs and investigational agents was tested at the standard five concentrations used in the classic NCI60 screen to allow comparison of concentration response between the screens and so that targeted compound groups with well-established response patterns in both screens could be analyzed by COMPARE.

Submission of compounds for NCI60 HTS384 screening
The standard screening assay is performed across the concentration range of 100 μmol/L to 10 nmol/L in one-log increments. If in vitro data are available when investigators submit new compounds to the HTS384 NCI60 screen, they can request a concentration range starting lower than the standard 100 μmol/L, though the dilutions will still be a series of four serial log dilutions. In addition, if the results from an initial screen of a new compound suggest that the GI50 occurs at a concentration lower than the standard range, then the compound can be retested across an adjusted concentration range if sufficient compound is available (https://dtp.cancer.gov/dscb/compoundSubmission/submissionProcedures.htm).

Compounds
Compounds from the investigational oncology agents (IOA) and approved oncology drugs library were acquired by internal/external synthesis or acquisition from external vendors (3). FDA-approved oncology drugs are available from the NCI at https://dtp.cancer.gov/organization/dscb/obtaining/available_plates.htm. All agents were demonstrated to be >95% pure by proton nuclear magnetic resonance (1H NMR) and LC/MS. In some cases, additional analytical techniques were employed to ensure the integrity of the library (e.g., chiral HPLC, optical rotation, X-ray crystallography, and 13C NMR). Compound stock solutions were prepared in DMSO (Sigma-Aldrich, cat. D2650), except for the platinum compounds, which were prepared in saline (Quality Biological Inc., cat. 114-055-101), at 400-fold the tested concentration and stored at −70 °C prior to their use. All agents were prepared for testing as a 5-point concentration series from a high concentration of 100 μmol/L (final), decreasing in one-log increments. The DMSO stock solutions were prepared manually. For the classic NCI60 screen, the assay plates were prepared manually with inspection and annotation of solubility problems when diluting from the stock. For the HTS384 screen, compound-only plates are included to optically identify solubility problems. Additionally, in both screens, solubility problems at the highest concentration can manifest as a reversal in the progression of growth response values between the two highest concentrations, which is flagged during data processing. Endpoint values will be interpolated as long as they are reached before the reversal.
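To make the dilution scheme concrete, the short fragment below lists the five final test concentrations and the corresponding 400-fold stock concentrations implied by the description above; it is only an arithmetic illustration, not part of the screening software.

    # Five-point series from 100 umol/L (final) in one-log decrements, with 400x stocks
    finals_umol_per_L = [100.0 / 10 ** i for i in range(5)]      # 100, 10, 1, 0.1, 0.01
    for final in finals_umol_per_L:
        stock_mmol_per_L = final * 400 / 1000                    # 40, 4, 0.4, 0.04, 0.004
        print(f"final {final:g} umol/L  <-  400x stock {stock_mmol_per_L:g} mmol/L")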
Classic NCI60 screen
Prior to inoculation into 96-well microplates, suspension cell lines were collected from T flasks and adherent cell lines were removed using TrypLE Express (Gibco, Thermo Fisher Scientific, cat. 12605010). The cell lines were harvested by centrifugation for 5 minutes at 212 × g. Following removal of the supernatant, the cells were resuspended in fresh medium and quantified using a Cellometer K2 fluorescent viability cell counter (Nexcelom). Live cells were quantified from dual fluorescence measurements of acridine orange to quantify all cells and propidium iodide to quantify dead cells. Monolayer cell suspensions of 100 μL were dispensed into the wells of 96-well clear flat-bottom polystyrene TC-treated microplates (Corning Inc., cat. 3598) using a BioTek MultiFlo FX peristaltic pump with a cassette head (Agilent Technologies, Inc.). The inoculation density for each cell line is shown in Table 1. Following inoculation, the microplates were transferred to an incubator and maintained at 37 °C and 5% CO2 with 95% humidity. Twenty-four hours after inoculation, test agents or controls were delivered to the wells of the microplates. Test agents and controls prepared as 400× stock solutions were dispensed into 12-channel deep-well v-bottom reservoirs (Thermo Fisher Scientific, cat. 1149Q22), and complete media containing 0.1% (50 μg/mL) gentamicin (Gibco, Thermo Fisher Scientific, cat. 15750078) were added to achieve a 1:200 dilution. Final concentrations of the test agents and controls were achieved by transferring 100 μL of solution from the 12-channel reservoirs into the wells of 96-well test microplates (1:2 dilution) using a 12-channel pipette. All test agents were evaluated in technical duplicate, and doxorubicin (NSC123127) was included as a standard in each experiment for quality control. Time zero microplates were also prepared 24 hours after inoculation. For this, 100 μL of complete media containing 0.1% gentamicin was transferred to all wells of the time zero microplates, which were subsequently fixed with cold trichloroacetic acid (TCA; Thermo Fisher Scientific, cat. C987X91). Adherent cell lines were fixed with 50 μL of 50% (w/v) TCA, whereas suspension cell lines were fixed with 50 μL of 80% (w/v) TCA, which was added very slowly to push the suspension cells to the bottom of the plates. After the time zero microplates were refrigerated at 4 °C for 1 to 3 hours, the TCA solutions were decanted, the wells were washed multiple times with tap water, and the microplates were pounded against paper towels to remove residual water. The microplates were then placed on racks to air dry prior to staining (see below for the staining procedure). After 48 hours of cell line exposure to controls or test agents, the test microplates were removed from the incubators and the cells were fixed with the appropriate TCA concentrations, rinsed with water, and dried as described above for the time zero microplates. Next, the remaining cellular proteins were stained with sulforhodamine B (SRB, Pylam Products Co.
Inc., cat. 74072). Microplates were removed from the drying racks and 100 μL of 0.4% (w/v) SRB in 1% acetic acid were added to the wells of the microplates using a Zoom HT LB 920 plate washer (Berthold Technologies). After 10 minutes to 1 hour, the stain solution was washed from the microplate wells three times with 350 μL of a 1% (v/v) acetic acid solution (Macron Fine Chemicals, VWR International Holdings, Inc., cat. MK-V193-05) using a BioTek 405 LS washer (Agilent Technologies, Inc.). The microplates were subsequently pounded against paper towels to remove residual waste and placed on racks to air dry. To solubilize the SRB stain bound to cellular proteins, 100 μL of 10 mmol/L Trizma base (Sigma-Aldrich, cat. T1503) was added to the wells of the microplates using a Zoom HT LB 920 plate washer. Next, the microplates were placed on orbital shakers at room temperature. Microplates containing adherent cell lines were agitated for a minimum of 2 hours, whereas those containing suspension cell lines were agitated for a minimum of 5 hours. Finally, absorbance was measured from each well at 515 nm using a BioTek Synergy Neo2 hybrid multimode reader (Agilent Technologies, Inc.; ref. 8).

HTS384 NCI60 screen
The NCI60 cell lines were harvested, and live cells were quantified, as described above. Mixed cell suspensions of 40 μL were dispensed into the wells of 384-well white flat-bottom polystyrene TC-treated microplates (Greiner Bio-One, cat. 781080) using a Microlab NIMBUS 96 workstation (Hamilton Company). The inoculation density for each cell line is shown in Table 1. Following inoculation, the microplates were transferred to an incubator (LiCONiC, STX500-ICSA) and maintained at 37 °C and 5% CO2 with 95% humidity. Twenty-four hours after inoculation, test agents or controls were delivered to the wells of the microplates. The test agents and controls were prepared as 400× stock solutions in Echo-qualified polypropylene 384-well microtiter plates (Beckman Coulter Life Sciences, cat. 001-14615), and 100 nL was transferred by acoustic dispensing with an Echo 655 Liquid Handler (Beckman Coulter Life Sciences) into the time zero and assay microplates to achieve a 1:400 final dilution. All test agents were evaluated in technical triplicate. Controls in each assay microplate included the DMSO vehicle [0.25% (v/v), final; n = 14], 100% cytotoxicity [1 μmol/L staurosporine (NSC755774) and 3 μmol/L gemcitabine (NSC613327); n = 7], and five concentrations of doxorubicin (NSC123127): 25 μmol/L (n = 2), 2.5 μmol/L (n = 1), 250 nmol/L (n = 2), 25 nmol/L (n = 1), and 2.5 nmol/L (n = 1). After delivery of 100 nL DMSO into the wells of the time zero microplates, 40 μL of CellTiter-Glo 2.0 (Promega Corporation, cat. G9243) were dispensed into the wells using an LGR Precise Drop II dispenser (Let's Go Robotics, Inc.), and luminescence was measured using a PHERAstar FSX (BMG LABTECH), according to Promega's protocol, to assess cell viability. Following the delivery of controls and test agents to the assay microplates, they were transferred back to the incubator for 72 hours at 37 °C with 5% CO2 and 95% relative humidity. After 72 hours of exposure to test agents and controls, 40 μL of CellTiter-Glo 2.0 were dispensed into the wells of the assay microplates and luminescence was measured, according to the manufacturer's protocol, to assess cell viability. The SK-MEL-2 cell line exhibited inconsistent growth during many of the HTS384 NCI60 assays and was not included in the current data analysis.
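The delivery arithmetic implied above (100 nL of a 400× stock dispensed into a 40 μL well) can be checked with a few lines; this is only a consistency check against the stated 1:400 dilution and 0.25% (v/v) DMSO, not part of the screening workflow.

    stock_volume_uL = 0.1    # 100 nL of 400x stock
    well_volume_uL = 40.0    # assay well volume before addition
    total_uL = well_volume_uL + stock_volume_uL
    dilution_fold = total_uL / stock_volume_uL          # ~401, i.e. approximately 1:400
    dmso_percent = 100 * stock_volume_uL / total_uL     # ~0.25% (v/v)
    print(f"dilution ~1:{dilution_fold:.0f}, DMSO ~{dmso_percent:.2f}% (v/v)")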
The percent treated over control (PTC) was calculated at each of the test agent concentrations using the mean of the vehicle control signal values (μVehicle) and the mean of the test agent treated signal values (μTreated):

PTC = 100 × (μTreated / μVehicle)

The percentage growth (%G) was calculated at each of the test agent concentrations using various measurements from the NCI60 screen [mean of time zero signal values (μTzero), mean of vehicle control signal values (μVehicle), and mean of test agent treated signal values (μTreated)]. If μTreated ≥ μTzero, the following equation was used:

%G = 100 × (μTreated − μTzero) / (μVehicle − μTzero)

If μTzero > μTreated, the following equation was used:

%G = 100 × (μTreated − μTzero) / μTzero

In addition to %G, three response parameters were calculated for each test agent, namely the GI50, TGI, and LC50 (50% lethal concentration). GI50 was calculated as the concentration at which 100 × (μTreated − μTzero) / (μVehicle − μTzero) = 50, TGI as the concentration at which μTreated = μTzero (i.e., %G = 0), and LC50 as the concentration at which 100 × (μTreated − μTzero) / μTzero = −50. Values were calculated for each of these three parameters if the level of activity was reached in the concentration response and was bracketed by two data points; however, if the effect was not reached or was exceeded, the value for that parameter was expressed as greater or less than the maximum or minimum concentration tested. In both the classic NCI60 screen and the new HTS384 screening system, the success of the evaluation of a new compound is based on reaching the GI50 endpoint in at least 40 cell lines, if the compound exhibited concentration-dependent growth. A web site (https://ioa.cancer.gov) was established to make available up-to-date data related to the collection of NCI investigational oncology agents and FDA-approved oncology drugs (3). These data are a subset of all public data from the classic NCI60 screen, which are also available (https://wiki.nci.nih.gov/display/NCIDTPdata/NCI-60+Growth+Inhibition+Data).

Study data
The GI50 values from the classic NCI60 and from the HTS384 NCI60 screens were analyzed in several ways. The GI50 values from the classic NCI60 screen were the average by cell line of all GI50 values from replicate experiments in the screen that used the same concentration range as was used in the HTS384 NCI60 screen. In cases in which there was no direct equivalent, the closest concentration range was used. Concentrations are represented as the log10 of the molar concentration, whether referencing a concentration response graph or the response endpoint for a GI50 value. Some charts also show concentrations as micromolar. The concentration response data and associated endpoint values from both screens are available for download. Screen endpoint values are determined based on linear interpolation between two concentrations that generate growth above and below the growth response of interest. The processing of the growth response data is illustrated in Supplementary Fig. S1A-S1C (also, see "Data availability").
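As a minimal sketch of the endpoint interpolation described above, the following Python fragment interpolates GI50, TGI, and LC50 on the log10 concentration scale from a five-point %G response; it assumes only the standard NCI60 definitions summarized in this section, the example values are hypothetical, and it is not the NCI's actual processing pipeline.

    import numpy as np

    def interpolate_endpoint(log_conc, growth, level):
        """Log10 concentration at which %G crosses 'level' (50 for GI50, 0 for TGI,
        -50 for LC50), by linear interpolation; None if the level is not bracketed."""
        for i in range(len(log_conc) - 1):
            x0, x1 = log_conc[i], log_conc[i + 1]
            y0, y1 = growth[i], growth[i + 1]
            if (y0 - level) * (y1 - level) <= 0 and y0 != y1:
                return x0 + (level - y0) * (x1 - x0) / (y1 - y0)
        return None

    # Hypothetical five-point response: 10 nmol/L up to 100 umol/L in one-log steps
    log_conc = np.array([-8.0, -7.0, -6.0, -5.0, -4.0])   # log10(mol/L)
    growth = np.array([95.0, 80.0, 40.0, -10.0, -60.0])   # %G at each concentration

    for name, level in (("GI50", 50.0), ("TGI", 0.0), ("LC50", -50.0)):
        print(name, interpolate_endpoint(log_conc, growth, level))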
COMPARE correlations
COMPARE correlations between the GI50 values for two agents are a modification of a Pearson correlation. This unitless value is a measure of the strength of the linear relationship between two sets of variables (cell line GI50 values), is independent of the order in which the cell lines are considered in the calculation, and factors out differences in overall potency in the sets of variables, focusing instead on the relative patterns of sensitivities across the cell lines. Cell lines with missing data for one or both agents are ignored in the calculation (5). After the calculation is run, there is a filter that requires at least 40 cell lines to have reported values for both agents and a filter that requires a minimum coefficient of variation of the endpoint values across the cell lines of 0.01. The latter is a default for the COMPARE application, which eliminates from consideration compounds in which the response across all cell lines is the same, because there will be no SD. In the IOA compound set, there are 64 compounds with no cell line reaching GI50 at the highest concentration tested (10^−4 mol/L, 100 μmol/L). In the classic NCI60 screen, the discrimination among the cell line responses resides between 100 and 10 μmol/L for a subset of compounds and is important for identifying compounds that affect target classes which have been much studied (1).

Data visualization
To summarize the range of cell line responses for a compound, bar charts of the mean of the GI50s across all the cell lines were used, and the SD is a surrogate for the range of values across the cell lines (see Supplementary Fig. S1C). To facilitate interpretation of a collection of Pearson correlations, a correlation map tool was developed that visualizes pairwise correlations among all members of a set of compounds (3). The endpoint data for individual compounds are represented by nodes (circles) on the map, and links between the nodes represent the correlations between the endpoint data sets. Links between the nodes are only rendered if the correlation meets user-configurable minimum correlation criteria, and the length of the rendered links is proportional to the correlation, with more similar compounds being closer together (3). If the compounds represented by two linked nodes share a common target, then the link carries the color assigned to that target, whereas links between nodes representing different targets render as black. Empirical evaluation of COMPARE correlations indicates that correlations <0.6 are probably not significant indicators of agents acting via similar mechanisms, as reflected by their cell line response patterns; correlations in the range of 0.65 to 0.75 are worth consideration; and correlations >0.75 are indicators of likely similarity (5). Correlation maps in the supplementary figures were configured to render links only for correlations at or above 0.75. The correlation map was used to focus on the most significant correlations within a set of correlations; however, it is not suitable for displaying all nonnegative correlations within a set of compounds with a common target (Fig. 1). A box and whisker plot was used to summarize the entire population of positive correlations for all possible pairwise correlations among compounds within a target set. The box represents the range of the second and third quartiles within the distribution of all pairwise correlations, whereas the "whiskers" encompass the first to fourth quartiles (Fig. 2).
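A minimal sketch of the pattern comparison described above, assuming only the published description (a Pearson correlation over cell lines with values for both agents, the ≥40-cell-line requirement, and the 0.01 minimum coefficient of variation); the random GI50 patterns are hypothetical and this is not the NCI COMPARE code.

    import numpy as np

    def compare_correlation(gi50_a, gi50_b, min_lines=40, min_cv=0.01):
        """Pearson correlation between two log10(GI50) patterns, ignoring cell lines
        with a missing value for either agent; returns None if a filter is not met."""
        a, b = np.asarray(gi50_a, float), np.asarray(gi50_b, float)
        mask = ~np.isnan(a) & ~np.isnan(b)
        if mask.sum() < min_lines:
            return None
        a, b = a[mask], b[mask]
        # flat response patterns carry no usable signal
        if abs(np.std(a) / np.mean(a)) < min_cv or abs(np.std(b) / np.mean(b)) < min_cv:
            return None
        return float(np.corrcoef(a, b)[0, 1])

    # Hypothetical GI50 patterns (log10 mol/L) for two agents across 59 cell lines
    rng = np.random.default_rng(1)
    shared = rng.uniform(-8, -5, 59)
    agent_a = shared + rng.normal(0, 0.2, 59)
    agent_b = shared + rng.normal(0, 0.2, 59)
    print(compare_correlation(agent_a, agent_b))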
Data availability
The data generated in this study are publicly available at the "Download HTS384 data" link at https://dtp.cancer.gov/databases_tools/bulk_data.htm. All other raw data are available upon request from the corresponding author. The data have also been integrated into the existing IOA COMPARE website at https://ioa.cancer.gov. Guidance for using the website is in the "Approved and Investigational Oncology Agents COMPARE" section of the page at https://dtp.cancer.gov/databases_tools/compare.htm.

Results
A higher capacity screen designated HTS384 NCI60 was developed to continue the evaluation of investigator-submitted compounds and biologics as an NCI service to the cancer research community. Development of the NCI60 HTS384 screen required several years, from optimizing the inoculation cell number and growth of the cell lines in the 384-well format to assembling and enabling an automated screening system and building a quality control and data processing structure. A 384-well screen using the NCI60 cell lines had already been developed at the University of Pittsburgh (11) as part of the NCI ALMANAC project to identify combinations of approved drugs with therapeutic potential (https://dtp.cancer.gov/ncialmanac), and studies exploring the time dependence of compound effects on the NCI60 cell lines with a luminescence readout were performed (12). The new data handling system for the HTS384 NCI60 leverages automation, modern software, and processing power to allow for complete flexibility in plate layouts, the number of concentrations in a dilution series, as well as the number of replicate wells. Before raw data are processed, substantial quality control is applied to look for problematic replicate wells, dilution members, replicates, or trends in the behavior of individual cell lines within each screen run and across multiple screen runs. All parameters are accessible through new graphical interfaces. The raw luminescence data are converted to percent viability by normalizing to the DMSO (vehicle treated) control. Once the screening data pass lab-level QC, the processing pipeline generates the outcome: a series of cell growth measurements across a range of concentrations and the GI50, TGI, and LC50 values interpolated from those growth measurements.
The classic NCI60 screen and the new HTS384 NCI60 screen both provide concentration responses for test agents at 100, 10, 1, 0.1, and 0.01 μmol/L, as well as the endpoints GI50, TGI, and LC50, which are interpolated from the concentration response data. In this study, a library of 1,003 FDA-approved and investigational small-molecule anticancer agents was screened by the two NCI60 assays. Data for all 1,003 agents were available from the classic NCI60 screen, and for 1,003 agents from the HTS384 NCI60 screen. As a basis for assessing the comparability of the screens, we evaluated COMPARE analyses of the mean GI50 values for the entire set of individual agents as well as several subgroups of agents with common targets.

On a compound-by-compound basis, in Fig. 3A the mean GI50 values across all cell lines in the classic screen are plotted against the mean GI50 values in the HTS384 screen for the same compound. There is an overall shift of the HTS384 NCI60 data toward apparent greater potency (74% of the compounds report a lower mean GI50 value), as would be expected due to the longer compound exposure time before the determination is made. On a compound-by-compound basis, in Fig. 3B, the correlation between the HTS384 and classic GI50 values is plotted against the difference in the mean GI50 values for each compound (HTS384 mean GI50 minus the classic mean GI50; negative values indicate that the mean GI50 from the HTS384 screen results was less than the mean GI50 from the classic screen results). For many compounds, even when there are substantial differences in the overall mean GI50 values, there are high correlations between the patterns of GI50 values for the cell lines. Figure 3C shows the difference in the mean GI50 values when data were grouped by cell line across all compounds and suggests that the aggregate differences between the mean GI50 values for the compounds are not being driven by substantial changes in individual cell line behaviors between the two screens but rather reflect overall changes in the measured responses to the compounds.

Figure 1 is a graphical summary of what was observed when considering subsets of the collection of compounds grouped by target, shown as a correlation map of the GI50 values for the two screens (see Supplementary Fig. S2A-S2D for an illustration of the derivation of the correlation map from sets of COMPARE correlations). Compounds with highly correlated patterns of GI50 values cluster together when links are rendered for Pearson correlations greater than 0.75. Higher correlations between the response patterns bring the nodes closer together (3). With this stringent requirement, many agents form clustered groups based on their assignments to cellular targets or mechanisms of action, and results from the classic NCI60 screen and from the HTS384 screen comingle within many, but not all, of these groups. For some agents, there were cases in which the HTS384 NCI60 demonstrated a lower mean GI50 for a targeted group than the classic NCI60. Examples included some inhibitors of the BET bromodomain, HSP90, IAP, and NAMPRTase.
Figure 2 shows the distributions of positive correlations for all possible pairwise combinations of mean GI50 values, assessed for targeted groups with at least six agents. Supplementary Table S2 lists the aggregate GI50 Pearson correlations for 91 targets with more than a single representative within the compound set. Doxorubicin was used as the experimental quality control compound for both screens. Eighteen runs of the HTS384 NCI60 screen were undertaken to gather the data presented here. The concentration-response growth curves for doxorubicin in each of the runs are shown in Supplementary Fig. S3A, and the associated cell line legends are shown in Supplementary Fig. S3B. The correlations of the doxorubicin GI50 patterns among these independent replicates of the HTS384 NCI60 screen indicated a high level of quality and reproducibility, comparable to that observed across nearly 3,000 classic NCI60 screens (Supplementary Fig. S3C).

The grouping of targeted agents indicates that, overall, most correlations from the classic NCI60 and HTS384 NCI60 screens are similar. All positive correlations for pairwise combinations of mean GI50 values were assessed for targeted groups with at least six agents and no cutoff correlation (Fig. 2). It is important to note that the whisker plots of all positive correlations show that, within the sets of compounds assigned to common targets, not all pairwise combinations meet even the reduced stringency of a correlation of 0.6 within the classic NCI60 assay and within the HTS384 assay. A contributing factor to this is that a single target assignment was made for each agent based on literature references, without capturing any off-target effects, at least some of which may be known. For some agents, there were cases in which the HTS384 NCI60 demonstrated a lower mean GI50 for a targeted group than the classic NCI60. Examples included some inhibitors of the BET bromodomain, HSP90, IAP, and NAMPRTase. The mean GI50 Pearson correlations for 91 targets with more than a single representative are shown in Supplementary Table S2.

The mean GI50 values for EGFR-targeted agents were similar between the classic and HTS384 NCI60 screens (Fig. 4A). Mean GI50 values for the agents ranged from about −6.3 (0.5 μmol/L) to −4.88 (13 μmol/L) in log10 molar units. The distributions of all positive correlations for EGFR-targeted agents are shown in Fig. 4B. When these data were rendered as a COMPARE correlation map at a correlation of >0.75, the cluster shown in Supplementary Fig. S4A resulted. When the data were restricted to cluster as the classic versus the HTS384 NCI60 data, the two compact clusters shown in Supplementary Fig. S4B were generated.

The mean GI50 values for a large group of inhibitors targeting BRAF, MEK, and ERK indicated more heterogeneity between the classic and HTS384 NCI60 screens (Fig. 4C). The most potent compounds in this group had mean GI50 values of −6.5 (0.32 μmol/L) and −6.86 (0.14 μmol/L) in the classic NCI60 screen and the HTS384 NCI60 screen, respectively. The least potent compounds in this group had mean GI50 values of −4.35 (45 μmol/L) and −4.5 (31.7 μmol/L) in the classic NCI60 screen and the HTS384 NCI60 screen, respectively. The distributions of all positive correlations for these agents are shown in Fig. 4D. COMPARE correlation map presentations of these data are shown in Supplementary Fig. S5A and S5B. With comingling of data from the classic and HTS384 NCI60 screens (Supplementary Fig.
S5A), there was extensive clustering across the targets for several agents, as evidenced by the links shown as black lines. When data from the two screens were prevented from comingling, two similar complex clusters resulted (Supplementary Fig. S5B). Extensive cross-target linking is evident (black lines) in both the classic NCI60 cluster and the HTS384 NCI60 cluster. This indicates that the overall cellular responses (GI50 values) reflect the perturbation of targets involved in common pathways.

The PI3K-targeted group includes 32 compounds, the largest group in the IOA library. The mean GI50 values for agents in the PI3K-targeted group were generally similar between the classic and HTS384 NCI60 datasets (Fig. 5A). The most potent compound in this group was BTG-226, with mean GI50 values of −7.96 (0.0109 μmol/L) and −7.97 (0.0107 μmol/L) in the classic and HTS384 NCI60 datasets, respectively. The least potent compound in this group was IOA-244, with mean GI50 values of −4.25 (56.23 μmol/L) and −4.03 (93.32 μmol/L) in the classic and HTS384 NCI60 datasets, respectively. The distributions of all positive correlations for PI3K-targeted agents are shown in Fig. 5B. The PI3K-targeted group included several nonclustered and tightly clustered agents when analyzed as a COMPARE correlation map at a correlation of >0.75 and the classic and HTS384 NCI60 datasets were allowed to comingle (Supplementary Fig. S6A). When the data from the two screens were prevented from comingling, two complex clusters resulted, with many unlinked compounds from both assays (Supplementary Fig. S6B). This result indicates that some of the compounds may be nonselective.

DNA interacting agents form another large heterogeneous group of drugs and compounds (Fig. 5C), with 12 different specific targets represented. These compounds span a large range of potency, from cyclophosphamide, ifosfamide, and temozolomide, which are prodrugs with little or no activity in cell culture, to exquisitely potent compounds such as trabectedin and lurbinectedin. The distributions of all positive correlations for the DNA interacting agents are shown in Fig. 5D.

The first-generation allosteric rapalog mTOR inhibitors sirolimus (rapamycin), temsirolimus, and everolimus were about two logs less potent in the HTS384 NCI60 screen than in the classic NCI60 screen (Fig. 6A). The pairwise distributions of correlations in the HTS384 and classic screens were similar (Fig. 6B); however, the allosteric rapalogs did not cluster with the other compounds in the HTS384 NCI60 dataset at a correlation of 0.75 or higher (Supplementary Fig. S7A and S7B). As with other kinase inhibitor groups, the more recent mTOR competitive kinase inhibitors sapanisertib, onatasertib, and apitolisib produced similar mean GI50 values (Fig. 6A) and formed compact clusters both in the comingled classic and HTS384 NCI60 datasets (Supplementary Fig. S7A) as well as when the data from each screen were examined individually (Supplementary Fig. S7B). When the stringency of the Pearson correlation clustering was decreased to 0.65, more connections were made between the two datasets; however, the allosteric rapalogs remained independent of the main cluster in the classic NCI60 dataset (not shown). Growth-response curves in the representative cell lines are shown for the rapalogs (Supplementary Fig. S8A) and for the competitive inhibitors (Supplementary Fig.
S8B). The mean GI50 values for BET bromodomain-targeted agents were uniformly lower in the HTS384 NCI60 screen than in the classic NCI60 screen (Fig. 6C). The largest difference observed was nearly three logs, for the compound PFI-1. Unlike several of the kinase-targeted groups, the BET bromodomain inhibitors did not link when allowed to form a comingled cluster (Supplementary Fig. S9A). The clusters remained the same when comingling was prevented (Supplementary Fig. S9B). This disconnect is clear from inspection of the pairwise combinations of mean GI50 values for the BET bromodomain-targeted agents (Fig. 6D). The concentration response of five bromodomain inhibitors in five representative NCI60 cell lines (MDA-MB-468 breast adenocarcinoma, SK-MEL-5 cutaneous melanoma, LOX-IMVI amelanotic melanoma, HOP-92 non-small cell lung carcinoma, and HCC-2998 colon adenocarcinoma) showed increased cytotoxicity in the HTS384 NCI60 screen (Supplementary Fig. S10A).

The NAMPRTase-targeted agents uniformly demonstrated lower GI50 values, by as much as two logs, in the HTS384 NCI60 screen compared with the classic NCI60 screen (Fig. 7A). The NAMPRTase-targeted agents were more tightly correlated within the set of classic NCI60 data (Fig. 7B) and did not form a comingled cluster between the classic and HTS384 NCI60 datasets (Fig. 7C). The two NAMPRTase clusters remained independent even at reduced Pearson stringencies (Fig. 7B). Concentration responses for four NAMPRTase inhibitors in five representative NCI60 cell lines (SW620 colon adenocarcinoma, HCC-2998 colon adenocarcinoma, M14 amelanotic melanoma, SN12C renal cell carcinoma, and OVCAR-8 high-grade ovarian serous adenocarcinoma) are shallow in both datasets, with 1 to 2 logs more cytotoxicity in the HTS384 NCI60 dataset than in the classic NCI60 dataset (Supplementary Fig. S10B).

cell compared with the other eight cancers represented in the screening panel. Over time, as the genetics of cancer were elucidated, it became evident that patterns of cytotoxic activity across the NCI60 set of human tumor cell lines could serve as fingerprints for the mechanism of action of compounds being screened. The selection of cell lines was based on their availability, growth rates in the selected growth media, formation of a tightly bound monolayer, disease, and responses to clinically active drugs (4, 5, 7, 13). The NCI60 cell line panel includes nine cancer types (leukemia, non-small cell lung, colon, CNS, melanoma, ovarian, renal, prostate, and breast) and has been used to profile potential oncology small-molecule therapeutic agents for more than 30 years. Extensive genomic and proteomic profiling of the NCI60 cell lines makes this among the best characterized collections of human cancer cell lines, including studies of gene mutations, amplifications and deletions, proteomics, the methylome, microRNA, exosomes, and more (2, 3). Data from NCI60 cell lines are available from multiple websites, including https://dtp.cancer.gov/databases_tools/bulk_data.htm; https://discover.nci.nih.gov/rsconnect/cellminercdb/; https://web.expasy.org/cellosaurus/; https://www.cbioportal.org/; and https://cancer.sanger.ac.uk/cosmic. The NCI60 screen has proved to be a useful tool for drug discovery by the cancer research community and has facilitated the elucidation of molecular targets for potential new oncology agents.
Qualitatively, the data from the new screen are the same as for the classic NCI60 screen: cell growth measurements across a set of five log dilutions of the test agent. IC50 values are interpolated from the growth measurements considered as a percentage of treated growth relative to control, whereas GI50, TGI, and LC50 values are interpolated from the growth measurements corrected for cell number at the beginning of the assay (GIPRCNT). When the classic NCI60 was being developed, it was thought that the GI50 and TGI endpoints might be more responsive indicators for different categories of agents; so the standard assay concentration range was picked as one that most often captured those two endpoints.

The new HTS384 NCI60 screen was developed to maintain the free screening service for the cancer research community. The updated screen makes extensive use of laboratory automation to test cells in a 384-well format with a 3-day exposure period to test agents and a luminescent readout for cell viability. The data presented provide an initial characterization of the HTS384 NCI60 screen and a comparison with data from the classic NCI60 screen as the NCI transitions to the new service. Of interest, and concern, was whether the large collection of public data from the classic NCI60 screen could be used in conjunction with data from the new HTS384 screen. Accordingly, the concentrations for test agents were maintained, with the highest concentration in the HTS384 NCI60 screen set at 10^−4 mol/L, effectively asking whether the results for compounds in an initial uninformed assay in the new HTS384 system would be able to identify the same compound in the historic, classic NCI60 data. This was done even for compounds with an optimum concentration range that started at less than 10^−4 mol/L. The COMPARE program (https://ioa.cancer.gov) allowed the direct comparison of data between the two screens for 1,003 FDA-approved and investigational agents and showed good comparability in patterns of GI50 values for more than 45 molecular target groups with at least six compounds. Although mean GI50 values from the two screens were similar for several kinase-targeted agents, non-kinase targeted agents, such as inhibitors of the BET bromodomain, HSP90, and NAMPRTase, generally had lower mean GI50 values in the HTS384 NCI60 screen. It is possible that the longer compound exposure period in the HTS384 NCI60 screen resulted in some agents showing strong correlations within both the classic and the HTS384 NCI60 screens, but low or no correlations between the two screens. The increased compound exposure period might have allowed cell line differences to more fully manifest or allowed other off-target effects to play a greater role in affecting the response of the cells.

Summary and conclusions
Figures 1 and 2 provide graphical summaries of our findings: there is substantial overlap between the results from the classic NCI60 screen and from the HTS384 NCI60 screen for many, but not all, of the compounds grouped by target; however, even for targets that do not substantially overlap between the two screens, there is clustering within the new screen comparable with that of the classic NCI60 screen.
Similarities and differences were identified in the responses to targeted agents from the classic and HTS384 NCI60 screens.In addition to the focused set of FDA-approved and investigational agents discussed here, there are data for approximately 60,000 public compounds that have been tested in the classic NCI60 screen, most of which have defined chemical structures.That rich dataset can also be used to evaluate results from the new HTS384 NCI60 screen.Given that there is substantial overlap between the two screens for many targeted agents and mechanisms of action, it would not be unreasonable to use results from the new HTS384 NCI60 screen as a reference for COMPARE correlations against classic NCI60 datasets.As a corollary, it would not be unreasonable to run COMPARE against the classic NCI60 data using data from the HTS384 NCI60 screen for new agents with unknown targets or mechanisms of action.As is the case with data from the classic NCI60 screen being probed with the data from the same screen, strong correlations would suggest compounds or mechanisms of action for further consideration, whereas a lack of correlation could indicate a compound that acts via a mechanism not already represented in the NCI/DTP public data or could indicate a mechanism that cannot manifest during the time course of the screen or could indicate that the compound has a new target or mechanism not yet documented in the NCI60 database.For researchers who have built a series of compounds and associated screening data using the classic NCI60 screen for mechanistic or structural categories, focused assays with representative compounds in the HTS384 NCI60 screen could quantify the link between the old and the new screen. Table 1 . NCI60 cell lines showing the NCI/DTP cell line name, disease panel tumor type of origin, the cell line name from Cellosaurus (https://www.cellosaurus.org/),the cell line designation from Cellosaurus based upon historical data classic NCI60 cell line population doubling time in the 96-well format, classic NCI60 cells plated per well in the 96-well format, NCI60 cell line population doubling time in the 384-well format, and NCI60 cells plated per well in the HTS384 NCI60 assay.(Cont'd) Figure 1 . Figure 1.A COMPARE correlation map, which comingles 1,003 FDA-approved and investigational oncology agents, run in the classic NCI60 screen and the HTS384 NCI60 screen (2003 GI 50 patterns).Nodes represent GI 50 determinations across the cell lines of the classic NCI60 screen (small symbols) and the NCI60 HTS384 screen (large symbols).The line length between nodes indicates the magnitude of the Pearson correlation between the GI 50 patterns, with lines only rendered for correlations >0.75.Compact clusters are apparent from compounds sharing a molecular target (some are labeled). Figure 2 . Figure 2.Box and whisker plot distributions (see Supplementary Fig.S2C) of positive correlations from pairwise combinations of mean GI 50 values for compounds grouped by target for targets with at least six assigned agents.Blue, correlations within the classic NCI60 GI 50 dataset; orange, correlations within the HTS384 NCI60 GI 50 dataset; gray, correlations for the comingled mean GI 50 values from the classic and HTS384 NCI60 datasets. Figure 3 . Figure 3. A, Scatter plot of the mean GI 50 values from the HTS384 NCI60 screen plotted vs. 
the mean GI 50 values from the classic NCI60 screen for individual compounds run in both assays (r ¼ 0.93).B, A scatter plot of the Pearson correlations between the HTS384 NCI60 dataset and classic NCI60 dataset for GI 50 data for individual compounds run in both assays plotted against the difference in the mean GI 50 values.A total of 228 compounds correlated at 0.75 or greater and 470 compounds correlated at 0.6 or better.C, Mean GI 50 for 1003 compounds by cell line across the compounds. Figure 4 . Figure 4. A, Mean GI 50 values for compounds targeting the EGFR.Dark blue, data from the classic NCI60 screen; orange, data from the HTS384 NCI60 screen.Error bars indicate SDs serving as surrogates for the range of individual cell line GI 50 values relative to the mean of responses across the 60 cell lines (see Supplementary Fig. S1C).B, Box and whisker representation (see Supplementary Fig. S2C) of all possible pairwise combinations of EGFR-targeted agents.The box encompasses the second and third quartiles.Associated correlation maps for correlations >0.75 are shown in Supplementary Fig. S4A and S4B.C, Mean GI 50 values for compounds targeting BRAF, MEK, and ERK.Dark blue, data from the classic NCI60 screen; orange, data from the HTS384 NCI60 screen.Error bars indicate SDs serving as surrogates for the range of individual cell line GI 50 values relative to the mean of responses across the 60 cell lines (see Supplementary Fig. S1C).D, Box and whisker representation (see Supplementary Fig. S2C) of all possible pairwise combinations of the agents that target BRAF, MEK, and ERK.The box encompasses the second and third quartiles.Associated correlation maps for correlations >0.75 are shown in Supplementary Fig. S5A and S5B. Figure 5 . Figure 5. A, Mean GI 50 values for the PI3K-targeted agents.Dark blue, mean GI 50 values from the classic NCI60 screen; orange, those from the HTS384 NCI60 screen.Error bars indicate SDs serving as surrogates for the range of individual cell line GI 50 values relative to the mean of responses across the 60 cell lines (see Supplementary Fig. S1C).B, Box and whisker representation (see Supplementary Fig. S2C) of all possible pairwise combinations of PI3K-targeted agents.The box encompasses the second and third quartiles.Associated correlation maps for correlations >0.75 are shown in Supplementary Fig. S6A and S6B.C, Mean GI 50 values for the DNA interacting group of drugs and compounds.Dark blue, mean GI 50 values from the classic NCI60 screen; orange, mean GI 50 values from the HTS384 NCI60 screen.Error bars indicate SDs serving as surrogates for the range of individual cell line GI 50 values relative to the mean of responses across the 60 cell lines (see Supplementary Fig.S1C).D, Box and whisker representation (see Supplementary Fig.S2C) of all possible pairwise combinations of PI3Ktargeted agents.The box encompasses the second and third quartiles. Figure 7 . Figure 7. A, Mean GI 50 values for the NAMPRTase-targeted agents.Dark blue, data from the classic NCI60 screen; orange, data from the HTS384 NCI60 screen.Error bars are SDs serving as surrogates for the range of individual cell line GI 50 values relative to the mean of responses across the 60 cell lines (see Supplementary Fig. S1C).B, Box and whisker representation (see Supplementary Fig. 
S2C) of all possible pairwise combinations of NAMPRTase-targeted agents. The box encompasses the second and third quartiles. C, At a Pearson correlation threshold of >0.75, NAMPRTase-targeted agents form two independent clusters even when mean GI 50 values from the classic and HTS384 NCI60 datasets are allowed to comingle.
2024-06-13T06:16:08.261Z
2024-06-11T00:00:00.000
{ "year": 2024, "sha1": "d4d5afa9193b58aa78e6060de618266f867005fe", "oa_license": "CCBYNCND", "oa_url": "https://aacrjournals.org/cancerres/article-pdf/doi/10.1158/0008-5472.CAN-23-3031/3461715/can-23-3031.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b4fd3c073a62f5a8ec190c081d1a32375727c094", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235283718
pes2o/s2orc
v3-fos-license
Simulation research on miniature planar induction heater with cavity This paper studies a new type of miniature planar induction heater through simulation. The miniature planar induction heater consists of a metal heating plate, a glass slide and a planar coil. The metal micro heating plate is designed with cavities of a specific size. Since the cavities contain residual gas, hot bubbles are generated in the cavities better and faster when the micro heating plate is heated. Through the analysis of the energy density, eddy current density, and temperature distribution of the metal micro heating plate, it is found that the cavities play a very effective role in concentrating the eddy current and increasing the energy. This kind of micro planar induction heater can be used in various hot-bubble-driven micro actuation devices, such as micro ejectors, micro mixers and micro pumps. Introduction In recent years, bubble-driven micro-injectors [1], bubble-driven micro-mixers [2], bubble-driven nozzle diffuser pumps [3] and thermal bubble-type micro acceleration sensors [4] have been developed. Bubble-driven actuators have the advantages of simple structure and low working voltage, and have broad application prospects in the field of microfluidic systems [5]. The hot bubbles on a micro heater are usually generated by resistance heating [6][7][8]. Peigang Deng [6] studied micro thermal bubbles as the drive of a microbiological analysis system and designed a micro heater with a non-uniform width. Under pulse heating conditions, single bubbles can accurately appear in the narrow part of the micro heater. Xu Jinliang [7] studied the influence of pulse heating parameters on the microbubble behavior of platinum microheaters. When the platinum microheater is immersed in a methanol pool, microbubbles are generated, and under different heat fluxes the growth of the microbubbles can be divided into different types. Rebecca Braff Maxwell [8] designed a micro heater with a micro-mechanical nucleation cavity that can accurately locate hot bubbles. This micro heater can achieve a controllable bubble formation temperature and bubble collapse. However, due to their small size, the above-mentioned resistance-heating micro heaters can only generate small single bubbles and therefore cannot provide strong power for mechanical movement. This paper studies a new type of miniature planar induction heater through simulation. There are cavities on the metal heating plate, which facilitate the storage of gas, increase the surrounding resistance, and promote bubble generation. In this paper, the corresponding heating parameters, the size of the cavities, and the type of the cavities are adjusted through simulation. It is found that the energy density, eddy current density and temperature around the cavities increase, which allows better control of the growth of bubbles. 2.1. Structure The structure of the micro induction heater is shown in Figure 1. This kind of micro heater consists of a coil, a glass plate and a metal heating plate. A circular chamber is designed on the glass plate, the disc-type metal heating plate is placed on the glass plate, and the coil is designed as a flat spiral, placed under the glass slide at the position corresponding to the heating plate. 2.2. Working principle A high-frequency AC power supply is connected to the coil of the micro heater.
When high-frequency AC power is applied, an alternating magnetic field is generated around the coil, and eddy currents are induced in the heating plate. Due to the eddy current heating effect of the plate interacting with the alternating magnetic field, the heating plate temperature rises rapidly. When it reaches a few hundred degrees Celsius, the liquid that enters the micropump cavity is heated, causing some of the liquid near the heating plate to become steam. Therefore, bubbles are generated on the metal heating plate. 3. Simulation The simplified geometry of the miniature induction heater is shown in Figure 2. The geometric model is an axisymmetric two-dimensional model. On the RZ plane, the A-D section of 20×20 mm is the air around the micro heater. The micro heater is placed in the middle of the air section. The radius and thickness of the micro heating plate are 4 mm and 20 μm, respectively. The bottom of the micro heating plate is provided with a glass slide with a radius of 4 mm and a thickness of 100 μm. The actual number of turns of the planar spiral coil is 14, and the coil section radius is 80 μm. In order to facilitate the simulation calculation, the coil model is shown in Figure 3. Simulation results The temperature of the heating plate changes with the eddy current density and energy density. In this paper, heating plates with different cavities are designed and the changes are analyzed. In the simulation, a high-frequency square wave current (I 0 ) of 1 A is applied to the coil with a power supply frequency (f) of 70 kHz. The heating time starts from 0 s and ends at 1 s. The simulation model of a single row of holes is shown in Figure 4. A row of cavities arranged in an array is designed along the central axis of the metal heating plate, the cavity radius and cavity spacing (the distance between the centers of adjacent cavities) are varied, and the energy density, eddy current density, and temperature distribution of the heating plate are analyzed. Figure 6~Figure 8 are simulation cloud diagrams of metal heating plates with the same hole diameter and the same hole distance: the energy density distribution cloud diagram, the eddy current distribution cloud diagram, and the temperature distribution cloud diagram. Taking the section shown in Figure 4, the heating plate energy distribution curve and eddy current distribution curve are extracted, as shown in Figure 9~Figure 10. When the cavity radius and the hole distance remain unchanged, the energy and the eddy current around the cavities of the heating plate are obviously higher than at positions without cavities. Because the heating plate is very small, it heats up quickly and the temperature differences are not obvious. 4.2. Double row hole heating plate The double-row hole simulation model is shown in Figure 5. A row of cavities arranged in an array is designed above and below the central axis of the metal heating plate. The cavity radius R and the cavity distance d are varied, and the energy density, eddy current density and heating plate temperature distribution are analyzed. The cross section of the double-row hole heating plate is analyzed, as shown in Fig. 11. From Fig. 12 and Fig. 13, it is known that the double-row hole array has little effect on the distribution of eddy current density, but has a great influence on the energy density. The center points of the double row of holes in the vertical direction are selected as the research objects, as shown in Fig.
11: position 1, position 2, position 3, and position 4. As shown in Fig. 14, when the distance d is constant, the energy density gradually increases as the aperture increases; however, as the aperture increases, the horizontal distance between cavities gradually decreases, which causes the energy density at position 2 to be higher than that at position 1. It can be seen from Fig. 15 that when the aperture is constant, the change in energy density becomes smaller and smaller as the distance increases. When the distance is constant, the cavity has a very small effect on the energy density. Conclusion This paper discusses the simulation of micro heaters using ANSYS software. Two heating plate models are designed, namely a single-row hole model and a double-row hole model. The simulation analysis shows that as the diameter of the cavities increases and the distance between adjacent cavities decreases, the energy density and eddy current density around the cavities increase, which produces higher temperatures around the cavities, facilitates the generation of bubbles, and gives the bubble micropump a better effect. The holes make the eddy current more concentrated, and the double-row holes have more cavities, which increases the surrounding resistance and yields higher energy. When the cavity radius is 0.05 mm and the distance is 0.02 mm, the distribution of energy density and eddy current is more pronounced, which makes it convenient to control the growth of bubbles.
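The eddy-current concentration discussed above is governed in part by the skin depth of the induced currents at the 70 kHz excitation frequency. The short sketch below estimates that skin depth for assumed plate materials; the resistivity and permeability values are textbook placeholders, since the paper does not specify the metal used.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

def skin_depth(resistivity, freq_hz, mu_r=1.0):
    """Classical skin depth: delta = sqrt(rho / (pi * f * mu_r * mu0))."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu_r * MU0))

f = 70e3  # excitation frequency used in the simulations, Hz

# Assumed materials (order-of-magnitude values, not taken from the paper).
for name, rho, mu_r in [("copper", 1.7e-8, 1.0), ("mild steel", 1.4e-7, 100.0)]:
    d = skin_depth(rho, f, mu_r)
    print(f"{name}: skin depth ~ {d * 1e6:.0f} um at 70 kHz "
          f"(plate thickness in the model is 20 um)")
```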
2021-06-03T00:32:21.622Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "1db7db45380c152afd1f14afa85665cf6f3a3c62", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1846/1/012059", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1db7db45380c152afd1f14afa85665cf6f3a3c62", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
220970227
pes2o/s2orc
v3-fos-license
Predictors of Mortality in Patients with Chronic Heart Failure: Is Hyponatremia a Useful Clinical Biomarker? Background Chronic heart failure (CHF) is a global health burden. Despite advances in treatment, there remain well-recognised morbidity and mortality. Risk stratification requires the identification and validation of biomarkers, old and new. Hyponatremia has re-emerged as a prognostic marker in CHF patients. Methods This is a retrospective cohort study on 241 CHF patients recruited from King Fahd Hospital of the University, Al-Khobar, Saudi Arabia (January 2005–December 2016). Their serum sodium and biochemical parameters were measured at baseline, along with 2-D echocardiographic assessments of left ventricular mass and ejection fraction. The primary endpoint was the association between hyponatremia and all-cause mortality (ACM) after a follow-up period of 24 months. Results Mean age of patients was 60.61 ± 12.63 (SD) years; 65.1% were males, and type 2 diabetes mellitus (DM) was present in 71%. Baseline serum sodium was 138.00 (136, 140) (median and interquartile range). Hyponatremia (<135 meq/L) was present in 14.1%. After follow-up, 46 deaths had occurred. Multivariate Cox-proportional hazard model showed that type 2 DM, New York Heart Association (NYHA) class (III–IV vs I–II), age, and left ventricular mass index (LVMI) were significant and independent predictors of ACM, with HR 3.03 (95% CI; 1.13, 8.16) (P=0.028), HR 2.31 (95% CI; 1.11, 4.82) (P=0.026), HR 1.06 (95% CI; 1.03, 1.09) (P<0.001), and HR 1.01 (95% CI; 1.00, 1.02) (P=0.039), respectively. Estimated glomerular filtration rate (eGFR) was not a significant predictor. Kaplan–Meier survival analysis was used for the analysis of NYHA class and hyponatremia interactions and showed that hyponatremia had an association with poorer survival in patients with NYHA class III–IV rather than I–II (Log-rank test, P= 0.0009). Conclusion Hyponatremia was a feature in CHF patients, and ACM was predicted by type 2 DM, NYHA class, age, and LVMI. Hyponatremia impact on survival was in patients with more advanced disease. Introduction Chronic heart failure (CHF) is an established public health problem with significant morbidity and mortality, particularly among patients aged over 65 years. 1 It has a complex pathogenesis involving lots of genetic and environmental factors. 2 Despite established diagnostic criteria and up-to-date management guidelines, survival is enormously compromised and falls behind other serious conditions. 3 Appropriately, the persistent search for biomarkers with an adequate cost/benefit ratio that can guide the management plans and predict future outcomes remains a necessity. Older age and disease severity quantified by the New York Heart Association (NYHA) class are important predictors of mortality. 4 Reduced ejection fraction is an established and strong predictor of cardiovascular outcomes in patients with CHF. 4,5 Anemia, which is a relatively common pathology, has been found to be an independent prognostic factor for survival in CHF patients, 6 although the contribution of expanded plasma volume and pseudoanemia complicates the usefulness of serum hemoglobin as a prognostic marker. 7 More than 3 decades ago, a study reported serum sodium as a powerful predictor of survival in a cohort of 203 patients with severe CHF (defined as ejection fraction EF<30%) during a follow-up duration of 6-94 months. 
8 This finding was supported by later studies that involved CHF outpatients, 9 ambulatory patients with heart failure and preserved/ reduced EF, 10 and heart failure patients with NYHA class III-IV, 11 and consolidated further by a meta-analysis that included 14,766 patients and reinforced the prognostic value of hyponatremia in CHF patients with reduced or preserved ejection fraction. 12 Genetic factors, on the other hand, have demonstrated an impact on vulnerability to CHF, on disease progression, and on the response to pharmacological agents. 2 Therefore, hyponatremia in CHF patients, that is known to be attributed by volume status fluctuation, neurohormonal factors, and diuretics use, 13 might exhibit different behavior in other patient populations, and this compromises its prognostic value. Therefore, we sought, via this observational study, to assess the prognostic value of hyponatremia in a sample of CHF patients in Saudi Arabia, especially after a recently developed regional registry began to give essential information about this population to be of younger age, with higher rates of diabetes mellitus (DM), and with predominant left ventricular systolic dysfunction. 14 Study Design and Setting This is an observational study with a retrospective cohort design to assess the association between hyponatremia and ACM, during a 24-month follow-up period (from the time serum electrolyte measurements were taken), in a sample of CHF patients treated with standard anti-failure drugs. The diagnosis of heart failure was in accordance with the American College of Cardiology/American Heart Association. 15 Patients were on sodium restriction (defined as <3 grams/day) and fluid restriction (<1.5 liter/day). The reporting system of this study was in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement. 16 Participants Patients were recruited from King Fahd Hospital of the University (KFHU), Al-Khobar, Saudi Arabia-a public health care system affiliated to Imam Abdulrahman Bin Faisal University. The hospital uses QuadraMed, a computerized database that identifies each patient's medical records by a unique number (unit numbering) which is preserved even after the patient's death. Ethical Approval The protocol was approved by the Institutional Review Board (IRB-2020-306-Pharm), Deanship of Scientific Research, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia. The study was conducted in accordance with the Declaration of Helsinki (2013), the ICH Harmonized Tripartite Good Clinical Practice Guidelines, and the laws of Saudi Arabia. Verbal informed consent was obtained from the patients included in the study or their next of kin by telephone conversation. The consent process was approved by the Institutional Review Board, Imam Abdulrahman Bin Faisal University. In addition, electronic medical records were reviewed for study endpoints. Inclusion and Exclusion Criteria The following inclusion and exclusion criteria were applied at the time of baseline data collection or study entry. Inclusion Criteria • Age > 18 years • Chronic heart failure with New York Heart Association NYHA functional class I-IV, due to any etiology, and maintained on anti-failure medication for a minimum of 3-month duration. • Active medical records, and updated contact information. 
Exclusion Criteria • Acute heart failure • Serum Hb ˂ 9 gr/dl and hematocrit ˂ 30% due to acquired anemia (with documented serum ferritin, transferrin saturation, or macrocytosis on peripheral Search Strategy The search for eligible patients was done via QuadraMed. Search term via the diagnosis of "Heart/cardiac failure", was used for outpatient and inpatient records from Jan 1st, 2005 till Dec 31st, 2016. This search yielded 876 patients. For the reasons mentioned above, 635 CHF patients were excluded leaving 241 patients for the final analysis ( Figure 1). Measurement of Serum Electrolytes, Biochemical, and Hematological Parameters Serum electrolytes, serum creatinine, and blood urea nitrogen were measured using Dimension EXL with LM integrated chemistry system (Siemens Ltd Saudi Arabia-Healthcare, Riyadh, Saudi Arabia). Glomerular filtration rate (GFR) was calculated using the Modification of Diet in Renal Disease equation. 17 Serum glycated hemoglobin (HbA1C) was measured using Tosoh automated glycohemoglobin analyzer (G8)(Griesheim, Germany). Serum uric acid was measured with Dimension EXL 200 integrated chemistry system (Siemens Ltd Saudi Arabia-Healthcare, Riyadh, Saudi Arabia). Complete blood count (CBC) parameters, including white blood cell count (WBCs), red blood cell count (RBCs), hemoglobin level (Hb), hematocrit (packed cell volume), mean corpuscular volume (MCV), and platelet count, were analyzed with an automated hematology analyzer, DxH 800 (Beckman Coulter (UK) Ltd, High Wycombe, UK). Measurement of Left Ventricular Mass (LVM) and Ejection Fraction (EF) Left ventricular mass (LVM) was calculated according to the guidelines of the American Society of Echocardiography and the European Association of Cardiovascular Imaging, using linear measurements derived from 2-D images. 18 LVM was indexed to body surface area (BSA) (g/m 2 ) and the result referred to as the left ventricular mass index (LVMI). 19 Ejection fraction (EF) was also assessed in study patients by 2D echocardiography using Modified Simpson method (biplane method of disks), which requires tracing the LV endocardial border in the apical 4-and 2-chamber views in both end-diastole and end-systole. 20 Outcome Assessments Electronic medical records of the patients were checked, and ACM was documented along with the corresponding diagnosis and date. Patients or their next of kin were contacted for other mortality incidents that might have occurred and documented in other hospitals. Deaths were considered cardiovascular unless a specific non-cardiovascular cause was identified. Cardiovascular deaths were classified according to the Candesartan in Heart failure: Assessment of Reduction in Mortality and Morbidity (CHARM) program. 5 Statistical Analysis Baseline data are reported as mean ± SD for continuous variables and as number and percentages for categorical variables. Median and ranges of values were used for nonnormally distributed data. Comparison between patient groups was done using unpaired Student's t-test, Mann-Whitney U-test, and chi-square test, depending on the type of data. To assess the association between biochemical and clinical variables with ACM, univariate and multivariate Cox-proportional hazards models, with chi-squared statistic test to assess the relationship between all covariates in the model were used. Baseline Characteristics A total of 241 patients were included in the final analysis, with a mean age of 60.61 ± 12.63 (95% CI; 59.01, 62.22) years. 
65.1% of the patients were males, and 75.5% of them were Saudi. The population sample was overweight-obese, along with a high prevalence of type 2 DM (70.9%) ( Table 1). The etiology of CHF was predominantly ischemic and approximately half of the patients had mild-to-moderate disease (NHYA class I-II). Other significant comorbid conditions included systemic hypertension (in 77.2% of the patients), and dyslipidaemia (in 56.0%). The patients were on optimal anti-failure medication, as summarised in Table 1. Measurement of patient biochemical parameters, as summarized in Table 2, showed values for serum sodium which fell within the normal lab range of 135-145 mEq/L. Likewise, serum potassium and chloride were within normal limits; however, patients' CO 2 concentration and blood urea nitrogen were elevated (normal ranges; 23-29 mEq/L and 5-20 mg/dL, respectively). Measurement of hematological parameters showed normal serum hemoglobin (normal range; 12-14 g/dL), as well as normal hematocrit and MCV levels. The main echocardiographic features, LVMI and EF, are listed in Table 2. Study Population as per Serum Sodium Concentration Classifying the patients according to serum sodium levels falling below 135mEq/L revealed that 14.1% of the population had hyponatremia. When matched for age, sex, BMI, severity, and etiology of CHF, patients with hyponatremia had a significantly higher percentage of type 2 DM and COPD (Table 1). They had a trend for a higher percentage of systemic hypertension, higher use of clopidogrel, furosemide, digoxin, and insulin, but these did not reach statistical significance Notes: Values are expressed as mean ± SD or number (percentage of patients). Median with interquartile ranges are used for non-normally distributed data. Missing data from medical records: *15 patients did not have their height, **8 patients did not have NYHA class, ***Other 6 arrhythmias (premature ventricular contractions, atrial flutter, 2 supraventricular tachycardia, and 2 ventricular tachycardia). Abbreviations: BMI, body mass index; NYHA, New York Heart Association; IHD, ischemic heart disease; AF, atrial fibrillation (paroxysmal and permanent); COPD, chronic obstructive pulmonary disease; ACEIs, angiotensin=converting enzyme inhibitors; ARBs, angiotensin receptor blockers; CCBs, calcium channel blockers. ( Table 1). Patients with hyponatremia also had significantly lower chloride concentration, a smaller anion gap, and a higher WBC count (Table 2). Study Endpoints Within 24 months from the serum sodium measurements, 46 fatalities occurred, 41 of which were considered cardiovascular deaths. The causes of deaths and the number that died of each cause were as follows: 14 patients died of cardiorespiratory arrest complicating CHF, 7 from coronary events, 5 from stroke, 5 from cardiogenic shock, 1 from acute pulmonary edema, 1 from coronary artery bypass grafting complications, 1 from constrictive pericarditis; 4 deaths with histories suggestive of cardiovascular death occurred out-of-hospital; and there were 3 deaths in the hospital with missing death certificates. The five other deaths were attributed to sepsis/septic shock. Univariate analysis for ACM showed that age, NYHA class (III-IV vs I-II), BUN, eGFR, RBCs, Hb, hematocrit, LVMI, and type 2 DM had significant associations with ACM (Table 3). Sex, BMI, serum creatinine, sodium, potassium, chloride, CO 2 , AG, HbA1c, and EF did not show any significant associations with ACM. 
Multivariate analysis for ACM included age, NYHA class (III-IV vs I- Subgroup Analysis In an attempt to focus more on the significance of hyponatremia in patients with moderate-severe heart failure, and based on an interaction that was found between NYHA (III-IV vs I-II) with serum sodium (≥135 vs<135 mEq/L) (interaction plot not shown), the multivariate analysis for ACM was repeated with the same variables; however, NYHA class and serum sodium concentration were sub-classified into four groups, in an attempt to test the interaction between them. Group 1 (reference group); NYHA I-II with normal sodium concentration (108 patients), group 2; NYHA III-IV with normal sodium concentration (92 patients), group 3; NYHA I-II with hyponatremia (14 patients), and group 4; NYHA III-IV with hyponatremia (19 patients). The risk of ACM was; HR 2.10 (95% CI; 0.95, 4.64) (P=0.067) in group 2 vs 1, HR 0.93 (95% CI; 0.11, 7.46) (P=0.942) in group 3 vs 1, and HR 3.39 (95% CI; 1.21, 9.49) (P=0.020) in group 4 vs 1. The four groups survival curves were plotted and compared by Log-rank test and showed a significant difference in Figure 2 (P<0.001). Discussion We sought via this observational study to explore the association of hyponatremia with ACM in a population with a different genetic makeup and co-morbidities, and the results can be summarized by the following points. Firstly, the prevalence of hyponatremia in this CHF population (14%) is broadly similar to some reports (13.8%, and 15.2%) 10,22 but lower than other reports in the literature (57%-using a different cut-off value of ≤137, 17%, and 51%). 8,9,11 The later observation may be due to the fact that two of the referenced studies included patients with more advanced CHF than ours. 8,11 Secondly, significant predictors of ACM in the study population were: type 2 DM, NYHA (III-IV vs I-II), age, and LVMI in order of significance. However, the subgroup analysis revealed that hyponatremia can be a biomarker of poorer prognosis among patients with moderate-to-severe CHF. This observation is in agreement with some of the aforementioned studies that exclusively selected patients with ejection fraction <30% 8 or NYHA class III-IV. 11 Our results can be explained by the fact that disease severity correlates significantly with the plasma concentrations of certain neurohormones involved directly or indirectly in modulating serum sodium, including; norepinephrine, arginine vasopressin, atrial natriuretic peptide, and renin activity. 23 Thirdly, hyponatremia in this study population might reflect a number of underlying mechanisms. Normal uric acid concentration and normal anion gap might suggest a relative volume depletion as a contributory factor. 24 An observation of note potentially relating to this finding is the percentage of patients maintained on furosemide therapy-a well-known agent that predisposes to hyponatremia, 25 and hypochloremia, 26 interpreted cautiously in relation to the percentage of patients with moderate-to-severe CHF. Hyperglycemia could be another contributory factor in our cohort (as judged with suboptimal glycated hemoglobin). Finally, the association of hyponatremia with type 2 DM is a significant finding and is well supported in the literature. It is worth noting that the diabetic state is associated with hyponatremia that is attributable to numerous underlying pathological mechanisms. Dilutional hyponatremia results from hyperglycemia due to osmotic diuresis. 
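The subgroup comparison above (four NYHA-by-sodium groups compared with a log-rank test, as in Figure 2) can be reproduced on any suitably formatted dataset with a few lines of Python using the lifelines package. The data frame below is synthetic and only illustrates the call pattern; it is not the study data.

```python
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Synthetic example: follow-up time (months), death indicator, and group label
# (1: NYHA I-II / normal Na, 2: NYHA III-IV / normal Na,
#  3: NYHA I-II / hyponatremia, 4: NYHA III-IV / hyponatremia).
df = pd.DataFrame({
    "time":  [24, 18, 24, 6, 24, 12, 24, 9, 24, 3],
    "death": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "group": [1, 2, 1, 4, 3, 2, 1, 4, 2, 4],
})

ax = plt.subplot(111)
for g, sub in df.groupby("group"):
    KaplanMeierFitter().fit(sub["time"], sub["death"], label=f"group {g}") \
                       .plot_survival_function(ax=ax)

# Log-rank test across the four groups (analogue of the comparison in Figure 2).
res = multivariate_logrank_test(df["time"], df["group"], df["death"])
print("log-rank p-value:", res.p_value)
```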
27,28 Pseudohyponatremia occurs with marked hypertriglyceridemia and hyperproteinemia. 28 Moreover, drug-induced hyponatremia occurs in patients with DM and can be attributed to more than one pharmacological class. Insulin potentiates vasopressin-induced aquaporin 2 expression in renal collecting ducts. 29 Metformin, via adenosine monophosphate kinase (AMPK) activation, stimulates the phosphorylation of aquaporin 2 and urea transporter A1 and potentiates the peripheral action of vasopressin. 30,31 This is in addition to hyponatremia resulting from first-generation sulphonylureas that are rarely used nowadays, and diuretics that might be prescribed at later stages of diabetic nephropathy. DM associated hyponatremia is yet another mechanism that has been described and attributed to an elevated plasma vasopressin concentration, independent of hyperglycemia. 32 DM in our cohort was the strongest predictor of mortality, and this is consistent with other studies in the literature. 4,33 However, a significantly higher prevalence of type 2 DM in our population (71%) was found in comparison to other CHF populations (28%, 17.6%, 42%, 23.7%, and 26.7%). 4,6,9,10,12 Such alarming prevalence, and negative prognostic value in CHF patients, along with BMI and fasting blood glucose as the top two risk factors in Saudi population, 34 dictates more attention and efficient preventive strategies in our patient population. In this study, we excluded patients with other confounding co-morbidities, such as hereditary anemia and nutritional anemia because of the effect of these conditions on survival and clinical outcomes in heart failure. 35 We also excluded patients with cancer/malignancy due to its direct influence on survival and the possibility of tumorassociated SIADH secretion. 36 Patients with advanced kidney disease who were likely to have hyponatremia and other electrolyte imbalances were also excluded. 37 Limitations This study has a number of limitations. Proper assessment of hyponatremia involves the correction of serum sodium, as per blood glucose concentration. 38 Such correction was not feasible as glycemic control is routinely assessed in our hospital by glycated hemoglobin, and not be random blood glucose concentration. Another limitation is the lack of other important biomarkers such as brain natriuretic peptide 39 (which is not measured routinely in our hospital) to allow for prognostic comparisons. The duration of CHF diagnosis is not documented in patient's electronic records, so it is unclear if this missing variable could have any prognostic implications. Finally, the subgroup analysis done resulted in a smaller number of patients per group, which would have resulted in underpowered results. Conclusion CHF is a growing public health problem worldwide, with significant morbidity and mortality; therefore, the search for simple and cost-effective serum biomarkers to guide proper management is warranted. Hyponatremia is a common electrolyte abnormality that has a prognostic value for clinical outcomes, including ACM. This study has shown that the prevalence of hyponatremia in this sample of Saudi patients with CHF is almost similar to, or less than that of other populations. ACM was independently predicted by type 2 DM, NYHA class, age, and LVMI. Despite that hyponatremia was not found to be an independent predictor of ACM in our population, it was associated with a reduced survival among patients with moderate-to-severe disease. 
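One limitation noted above is that serum sodium was not corrected for blood glucose. The conventional correction (often attributed to Katz, with a steeper factor later proposed by Hillier et al.) is easy to apply when a concurrent glucose value is available; the sketch below shows the formula, with the correction factor left as an explicit assumption.

```python
def corrected_sodium(measured_na_meq_l, glucose_mg_dl, factor=1.6):
    """Glucose-corrected serum sodium.

    Adds `factor` mEq/L of sodium for every 100 mg/dL of glucose above
    100 mg/dL. factor=1.6 is the classical Katz correction; 2.4 is the
    steeper correction suggested by Hillier et al.
    """
    return measured_na_meq_l + factor * max(glucose_mg_dl - 100.0, 0.0) / 100.0

# Example: measured Na 132 mEq/L with glucose 300 mg/dL.
print(corrected_sodium(132, 300))        # 135.2 with the 1.6 factor
print(corrected_sodium(132, 300, 2.4))   # 136.8 with the 2.4 factor
```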
Availability of Supporting Data The data that support the findings of this study are available on request from the corresponding author (MM. Alem). Ethical Approval and Consent to Participate The study protocol was approved by the institutional review board (IRB-2020-306-Pharm), Deanship of Scientific Research, Imam Abdulrahman bin Faisal University, Dammam, Saudi Arabia. Verbal consents were obtained from the patients included or their next of kin by telephone conversation.
2020-07-30T02:06:45.369Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "a2dd942f10e7cfc3960177f7728fed6397d55703", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=59815", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "861b33a2a6a6edbb13b5de829d203809cc67b384", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
139311538
pes2o/s2orc
v3-fos-license
Performance Research on a Twin Screw Expander Applied to a Power Generation System for Recovering Waste Pressure Energy of Natural Gas The twin screw expander, applied to a power generation system for recovering waste pressure energy of natural gas (NG), is reliable. It has lower capital costs and better maintenance performance compared with the alternatives. In this paper, a computational procedure based on the Benedict-Webb-Rubin (BWR) equation is established. Meanwhile, an experiment is carried out to study the performance of a twin screw expander prototype. The results show that the twin screw expander has an optimum rotational speed at which the specific shaft work and efficiency reach their peaks when delivering a given gas, and that the optimum rotational speed decreases as the inlet pressure decreases. Introduction Natural gas (NG) is mainly transported over long distances through pipeline network systems (corresponding to 67.5% of the global gas trade [1]). It is necessary to transport NG at high pressures (about 10-12 MPa [2]) to cut the cost and energy demand. However, the pressure of NG must be significantly reduced to lower levels at pressure reduction stations (PRSs) before it is supplied to utilization sites [3]. Currently, most PRSs employ pressure reducing valves (PRVs) to accomplish pressure reduction and control the outlet pressure [4]. This dissipates a considerable amount of the available exergy of the pressurized gas, which could be harnessed to generate work in the expansion process. Many researchers have begun to pay attention to energy recovery from NG PRSs. It is possible to convert the available decrease of physical exergy into mechanical work by means of an expander substituting for a PRV [5]. The generated work may be converted to electricity or provided directly for industrial applications. Although numerous theoretical analyses and evaluation methodologies for various NG expansion systems have been developed, there is little industrial application and experimental research. In this paper, a power generation system for recovering waste pressure energy of NG is presented and a computational procedure based on the Benedict-Webb-Rubin (BWR) equation is established. Meanwhile, an experiment is carried out to study the performance of a twin screw expander prototype. The experiment used compressed air as a substitute for NG as the working fluid, considering that NG is inflammable, explosive, toxic and otherwise dangerous. The schematic diagram of the twin screw expander performance test is shown in figure 1. It primarily includes three parts: the inlet and exhaust system, the oil lubrication system, and the data measurement and acquisition system. Computational procedure The BWR equation of state, which has high accuracy for hydrocarbon gases, was selected for calculating the thermodynamic parameters of NG. The fundamental structure of the BWR equation for a real gas is as follows: p = ρRT + (B0·RT − A0 − C0/T^2)ρ^2 + (bRT − a)ρ^3 + aαρ^6 + (cρ^3/T^2)(1 + γρ^2)exp(−γρ^2), where ρ is the molar density and A0, B0, C0, a, b, c, α, γ are all constants related to the kind of matter [6,7]. For the mixture, the constants in the BWR equation are obtained from the pure-component constants through mixing rules weighted by x_i, the mole fraction of component i in the mixture. It was assumed that the whole expansion process was gas single-phase flow. A computational procedure was established to realize the thermodynamic calculation. The flow chart of the computational procedure is illustrated in figure 2. Flow chart of the computational procedure.
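A minimal numerical sketch of the BWR evaluation used in the computational procedure is given below: it computes pressure from molar density and temperature, and inverts the relation for density with a simple Newton iteration. The constants shown are placeholders to be replaced with tabulated values for each component (or with the mixture constants obtained from the mixing rules), in a consistent unit system.

```python
import math

R = 8.314  # J/(mol*K)

def bwr_pressure(rho, T, k):
    """BWR equation of state: pressure (Pa) from molar density rho (mol/m^3)
    and temperature T (K). k holds the eight substance constants."""
    A0, B0, C0, a, b, c, alpha, gamma = (k[s] for s in
        ("A0", "B0", "C0", "a", "b", "c", "alpha", "gamma"))
    return (R * T * rho
            + (B0 * R * T - A0 - C0 / T**2) * rho**2
            + (b * R * T - a) * rho**3
            + a * alpha * rho**6
            + c * rho**3 / T**2 * (1.0 + gamma * rho**2) * math.exp(-gamma * rho**2))

def bwr_density(P, T, k, tol=1e-8, max_iter=100):
    """Solve bwr_pressure(rho, T) = P for rho by Newton iteration,
    starting from the ideal-gas estimate."""
    rho = P / (R * T)
    for _ in range(max_iter):
        f = bwr_pressure(rho, T, k) - P
        h = 1e-6 * rho
        dfdrho = (bwr_pressure(rho + h, T, k) - bwr_pressure(rho - h, T, k)) / (2 * h)
        step = f / dfdrho
        rho -= step
        if abs(step) < tol * rho:
            break
    return rho

# Placeholder constants (NOT real values; look up A0, B0, ... for each gas).
k_demo = dict(A0=0.0, B0=0.0, C0=0.0, a=0.0, b=0.0, c=0.0, alpha=0.0, gamma=0.0)
print(bwr_density(6.7e5, 315.0, k_demo))  # with zero constants this reduces to P/(R*T)
```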
Results and discussions The twin screw expander prototype, whose male rotor has 5 teeth and female rotor has 6, is based on the "W" rotor profile. The diameters of the male rotor and female rotor were optimized to 106.8 mm and 84.2 mm, respectively. The axial distance between the rotors is 75 mm. The length-to-diameter ratio of the male rotor is 1.6. A series of experiments was carried out to investigate the performance of the twin screw expander prototype. Throughout the trial process, the expander ran smoothly with low noise and little vibration. The main performance results are analyzed as follows. Experimental conditions Taking one set of experimental data as an example, the performance of the twin screw expander at different rotational speeds was analyzed. The air compressor outlet pressure was set between 0.55 MPa and 0.65 MPa, which provided a satisfactory experimental condition in which the expander inlet pressure was basically stable at 0.57 MPa and the inlet temperature was approximately 42℃. Meanwhile, the inlet and outlet cut-off valves of the experimental system were fully opened, which implied that the expander back pressure was equivalent to barometric pressure, and the bypass was in a closed state. Moreover, a condition with an expander inlet pressure of 0.24 MPa and an inlet temperature of 36℃ was used for comparison. The specific values of pressure mentioned in this paper refer to gauge pressure. Figure 3(a) shows the variation of the shaft power and specific shaft work with the rotational speed. It can be seen that the shaft power increases with a gradually decreasing rate of change, while the specific shaft work follows an approximately parabolic law as the rotational speed increases. The specific shaft work reaches a maximum value of 52.38 kJ/kg at about 2500 r/min. However, the shaft power does not reach its maximum value at the rotational speed at which the specific shaft work is maximal, and continues to increase. That is because the shaft power depends on two aspects: the specific shaft work and the mass flow rate of the working gas. Figure 3(b) shows the variation of the isentropic efficiency and exergy efficiency with the rotational speed. It can be seen that both the isentropic efficiency and the exergy efficiency follow an approximately parabolic law as the rotational speed increases, and respectively reach maximum values of 57% and 51% at about 2500 r/min. Figure 4 shows the effect of the inlet pressure on the performance of the twin screw expander. It can be seen that, in contrast to the 0.57 MPa condition, the shaft power for 0.24 MPa shows a roughly parabolic change with increasing rotational speed, and reaches its maximum value at about 2000 r/min. The isentropic efficiency for 0.24 MPa also shows a parabolic change as the rotational speed increases, the difference being that its optimum rotational speed is 1750 r/min. Shaft power for NG Under the first test condition and with the same isentropic efficiency, the shaft power for NG was calculated by the proposed computational procedure. The components of the NG used are shown in table 1.
Compositions: CH4, C2H6, C3H8, CO2, N2
Mole fraction (%): 88.48, 6.68, 0.35, 3.52, 0.97
Figure 5 shows the differences between the shaft power for NG and that for air; the specific shaft work for NG and air is also compared in like manner. It can be seen that the shaft power for NG is slightly larger than that for air and the difference between them becomes larger as the rotational speed increases.
Therefore compressed air can substitute for NG as working fluid to study the performance of the twin screw expander prototype. The specific shaft work for NG is higher than that for air and the difference between them changes with an approximate downward parabola as the rotational speed increases. Conclusions The main conclusions drawn from the present research are summarized as follows: (1) The specific shaft work, efficiency change with approximate parabola law as the rotational speed increases. For a certain twin screw expander, there is an optimum rotational speed where the specific shaft work, efficiency reach their peaks when delivers a certain gas. However, the shaft power doesn't necessarily reach the maximum value at that optimum rotational speed. (2) The optimum rotational speed of the twin screw expander, where the shaft power reached its peak, is not less than the rotational speed where the efficiency is at its maximum. (3) For the twin screw expander with a certain structure volume ratio, the optimum rotational speed where the isentropic efficiency reaches its peak reduces with the decrease of the inlet pressure. (4) In reality, compressed air can substitute for NG as working fluid to study the performance of the twin screw expander prototype, which can be applied to a power generation system for recovering waste pressure energy of NG.
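For reference, the quantities reported above (shaft power, specific shaft work, isentropic efficiency) can be obtained from the measured torque, rotational speed, mass flow rate and inlet/outlet states. The sketch below does this for air treated as an ideal gas; the formulas are standard textbook definitions and the numbers are placeholders, not the actual test data.

```python
import math

def shaft_power(torque_nm, speed_rpm):
    """Shaft power (W) from measured torque and rotational speed."""
    return torque_nm * 2.0 * math.pi * speed_rpm / 60.0

def isentropic_specific_work_air(T1_K, p1_abs, p2_abs, cp=1005.0, gamma=1.4):
    """Ideal (isentropic) specific enthalpy drop for air, J/kg."""
    return cp * T1_K * (1.0 - (p2_abs / p1_abs) ** ((gamma - 1.0) / gamma))

# Placeholder measurements (inlet 0.57 MPa gauge, ~42 C; outlet at atmosphere).
p_atm = 101_325.0
p1, p2 = 0.57e6 + p_atm, p_atm           # absolute pressures, Pa
T1 = 42.0 + 273.15                       # inlet temperature, K
m_dot = 0.10                             # kg/s, assumed
P_shaft = shaft_power(torque_nm=30.0, speed_rpm=2500)   # assumed torque

w_actual = P_shaft / m_dot               # specific shaft work, J/kg
w_ideal = isentropic_specific_work_air(T1, p1, p2)
print("specific shaft work  :", w_actual / 1e3, "kJ/kg")
print("isentropic efficiency:", w_actual / w_ideal)
```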
2019-04-30T13:08:12.388Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "78cd343aef343ed0b43054dc92bc3962457d5348", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1072/1/012016", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ce5f8a45b6737607721d9c6df916913f36f91609", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
235732105
pes2o/s2orc
v3-fos-license
Deep calibration of the quadratic rough Heston model The quadratic rough Heston model provides a natural way to encode the Zumbach effect in the rough volatility paradigm. We apply a multi-factor approximation and use deep learning methods to build an efficient calibration procedure for this model. We show that the model is able to reproduce very well both SPX and VIX implied volatilities. We typically obtain VIX option prices within the bid-ask spread and an excellent fit of the SPX at-the-money skew. Moreover, we also explain how to use the trained neural networks for hedging with instantaneous computation of hedging quantities. Introduction The rough volatility paradigm introduced in [13] is now widely accepted, both by practitioners and academics. On the macroscopic side, rough volatility models can fit with remarkable accuracy the shape of implied volatility smiles and at-the-money skew curves. They also reproduce amazingly well stylized facts of realized volatilities, see for example [3,5,8,13,23]. On the microstructural side, it is shown in [6,7,22] that the rough Heston model introduced and developed in [9,10] naturally emerges from agents' behaviors at the microstructural scale. Nevertheless, one stylized fact of financial time series that is not reflected in the rough Heston model is the feedback effect of past price trends on future volatility, which is discussed by Zumbach in [24]. Super-Heston rough volatility models introduced in [6] fill this gap by considering quadratic Hawkes processes at the microstructural level, and showing that the Zumbach effect remains explicit in the limiting models. As a particular example of super-Heston rough volatility models, the authors in [14] propose the quadratic rough Heston model, and show its promising ability to calibrate jointly SPX smiles and VIX smiles, where other continuous-time models have been struggling for a long time [17]. The VIX index is in fact by definition a derivative of the SPX index S, and can be represented as VIX_t = 100 × sqrt(−(2/∆) E[log(S_{t+∆}/S_t)|F_t]), (1.1) where ∆ = 30 days and E is the risk-neutral expectation. Consequently, VIX options are also derivatives of SPX. Finding a model which jointly calibrates the prices of SPX and VIX options is known to be very challenging, especially for short maturities. As indicated in [17], "the very negative skew of short-term SPX options, which in continuous models implies a very large volatility of volatility, seems inconsistent with the comparatively low levels of VIX implied volatilities". Through numerical examples, the authors in [14] show the relevance of the quadratic rough Heston model for pricing simultaneously SPX and VIX options. In this paper, in the spirit of [2], we propose a multi-factor approximated version of this model and an associated efficient calibration procedure. Under the rough Heston model, the characteristic function of the log-price has a semi-closed-form formula, and thus fast numerical pricing methods can be designed, see [10]. However, pricing in the quadratic rough Heston model is more intricate. The multi-factor approximation method for the rough kernel function developed in [2] makes rough volatility models Markovian in high dimension. Thus Monte-Carlo simulations become more feasible in practice. Still, in our case, model calibration remains a difficult task. Inspired by recent works on applications of deep learning in quantitative finance, see for example [4,19,20], we use deep neural networks to speed up model calibration.
The effectiveness of the calibrated model for fitting jointly SPX and VIX smiles is illustrated through numerical experiments. Interestingly, under our model, the trained networks also allow us to hedge options with instantaneous computation of hedging quantities. The paper is organized as follows. In Section 2, we give the definition of our model and introduce the approximation method. In Section 3, we develop the model calibration with deep neural networks. Validity of the methods is tested both on simulated data and market data. Finally, in Section 4, we show how to perform hedging in the model with neural networks through some toy examples. The quadratic rough Heston model and its multi-factor approximation The quadratic rough Heston model, proposed in [14], for the price of an asset S (here the SPX) and its spot variance V under the risk-neutral measure is dS_t = S_t sqrt(V_t) dW_t, V_t = a(Z_t − b)^2 + c, (2.1) where W is a Brownian motion, a, b, c are all positive constants and Z_t is defined as Z_t = ∫_0^t (t−s)^{α−1}/Γ(α) · λ(θ_0(s) − Z_s) ds + ∫_0^t (t−s)^{α−1}/Γ(α) · η sqrt(V_s) dW_s, (2.2) for t ∈ [0, T]. Here T is a positive time horizon, α ∈ (1/2, 1), λ > 0, η > 0 and θ_0(·) is a deterministic function. Z_t is driven by the returns through sqrt(V_t) dW_t = dS_t/S_t. Then the square in V_t can be understood as a natural way to encode the so-called strong Zumbach effect, which means that the conditional law of future volatility depends not only on the past volatility trajectory but also on past returns. (To ensure the martingale property of S_t, we can in fact replace the square by a function that coincides with it up to a threshold x* and is modified beyond x*, with x* sufficiently large. In this paper, for ease of notation, we keep writing the square function.) Note that in this case we have a pure-feedback model, as S_t and V_t are driven by the same Brownian motion; see [6] for more details on the derivation of this type of model. We will see in Section 4 that this setting enables us to hedge perfectly European options with SPX only. We recall the parameter interpretation given in [14]: • a stands for the strength of the feedback effect on volatility. • b encodes the asymmetry of the feedback effect. It reflects the empirical fact that negative price returns can lead to volatility spikes, while this is less pronounced for positive returns. • c is the base level of variance, independent of past price information. It is shown in [10] that under the rough Heston model the volatility trajectories have almost surely Hölder regularity α − 1/2 − ε, for any ε > 0. This actually recalls the observation in [13] that the dynamics of log-volatility are similar to those of a fractional Brownian motion with Hurst parameter of order 0.1. Similarly, the fractional kernel K(t) = t^{α−1}/Γ(α) in (2.2) enables us to generate rough volatility dynamics, which is highly desirable as explained in the introduction. However, it makes the quadratic rough Heston model non-Markovian and non-semimartingale, and thus difficult to simulate efficiently. In this paper, we apply the multi-factor approximation proposed in [2] to do so. The key idea is to write the fractional kernel K(t) as the Laplace transform of a positive measure µ, K(t) = ∫_0^∞ e^{−γt} µ(dγ), with µ(dγ) = γ^{−α}/(Γ(α)Γ(1−α)) dγ. Then we approximate µ by a finite sum of Dirac measures µ_n = Σ_{i=1}^n c_i^n δ_{γ_i^n} with positive weights (c_i^n)_{i=1,...,n} and discount coefficients (γ_i^n)_{i=1,...,n}, with n ≥ 1. This gives us the approximated kernel function K_n(t) = Σ_{i=1}^n c_i^n e^{−γ_i^n t}, n ≥ 1.
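A minimal numerical sketch of this kernel approximation is given below: the measure µ(dγ) = γ^{−α}/(Γ(α)Γ(1−α)) dγ is split over n intervals, and each interval contributes a weight c_i^n (its mass) and a discount coefficient γ_i^n (its barycentre). The geometric grid used to cut the intervals is one simple choice made here for illustration; the specific parametrization of [1] differs in its details.

```python
import numpy as np
from math import gamma as Gamma

def multifactor_kernel(alpha, n, gamma_min=1e-3, gamma_max=1e3):
    """Approximate K(t) = t^(alpha-1)/Gamma(alpha) by sum_i c_i * exp(-g_i * t).

    mu(dg) = g^(-alpha) / (Gamma(alpha) * Gamma(1 - alpha)) dg is integrated
    over n geometrically spaced intervals; c_i is the mass of mu on interval i
    and g_i its barycentre. The grid bounds are illustrative choices.
    """
    edges = np.geomspace(gamma_min, gamma_max, n + 1)
    norm = 1.0 / (Gamma(alpha) * Gamma(1.0 - alpha))
    # closed-form primitives of g^(-alpha) and g^(1-alpha) on each interval
    m0 = norm * (edges[1:] ** (1 - alpha) - edges[:-1] ** (1 - alpha)) / (1 - alpha)
    m1 = norm * (edges[1:] ** (2 - alpha) - edges[:-1] ** (2 - alpha)) / (2 - alpha)
    return m0, m1 / m0   # weights c_i, discount coefficients gamma_i

alpha, n = 0.51, 10
c, g = multifactor_kernel(alpha, n)

t = np.linspace(0.01, 1.0, 5)
K = t ** (alpha - 1) / Gamma(alpha)                               # fractional kernel
K_n = (c[None, :] * np.exp(-g[None, :] * t[:, None])).sum(axis=1) # its approximation
print(np.round(K, 3))
print(np.round(K_n, 3))   # the approximation improves as n grows
```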
A well-chosen parametrization of the 2n parameters (c n i , γ n i ) i=1,··· ,n in terms of α can make K n (t) converge to K(t) in the L 2 sense as n goes to infinity, and the multi-factor approximation models behave closely to their counterparts in the rough volatility paradigm, see [1,2] for more details. We recall in Appendix A the parametrization method proposed in [1]. Then given the time horizon T and n, (c n i ) i=1,··· ,n and (γ n i ) i=1,··· ,n are just deterministic functions of α, and therefore not free parameters to calibrate. We can give the following multi-factor approximation of the quadratic rough Heston model: with (z i 0 ) i=1,··· ,n some constants. Contrary to the case of the rough Heston model, θ(t) cannot be easily written as a functional of the forward variance curve in the quadratic rough Heston model. Then instead of making the factors (Z n,i ) i=1,··· ,n starting from 0 and taking Z n as the authors do in [1,2], here we discard θ(·) in Equation (2.4) and consider the starting values of factors (z i 0 ) i=1,··· ,n as free parameters to calibrate from market data. This setting allows Z 0 and V 0 to adapt to market conditions and also encodes various possibilities for the "term-structure" of Z n t . To see this, given a solution for (2.3)-(2.5), (2.5) can be rewritten as Then from (2.4) we can get By taking expectation on both sides of (2.7), we get Thus we can see that the (z i 0 ) i=1,··· ,n allows us to encode initial "term-structure" of Z n t . Therefore, it can be understood as an analogy of θ(t) for the variance process in the rough Heston model. Besides, we will see in Section 4 that this setting allows us to hedge options perfectly with only SPX. By virtue of Proposition B.3 in [2], for given n, Equations (2.7) and equivalently (2.6) admit a unique strong solution, since g n (t) is Hölder continuous, b(·) and σ(·) have linear growth, and K n is continuously differentiable admitting a resolvent of the first kind. We stress again the fact that Model (2.3-2.5) does not bring new parameters to calibrate compared to the quadratic rough Heston model defined in (2.1-2.2), with the idea of the correspondance between θ(t) and (z i 0 ) i=1,··· ,n . The fractional kernel in the rough volatility paradigm helps us to build the factors (Z n,i ) i=1,··· ,n . Factors with large discount coefficient γ n i can mimic roughness and account for short timescales, while ones with small γ n i capture information from longer timescales. The quantity Z n aggregates these factors and therefore encodes the multi-timescales nature of volatility processes, which is discussed for example in [11,13]. Remark 2.1. With multi-factor approximation, Model (2.3-2.5) is Markovian with a state vector of dimension n + 1 given by X n t := (S n t , Z n,1 t , · · · , Z n,n t ). Hence the price of SPX options at time t is fully determined by X n t . Model calibration with deep learning The Markovian nature of Model (2.3-2.5) makes the calibration with Monte-Carlo simulations more feasible. However, besides parameters ω, the initial state of factors z 0 is also supposed to be calibrated from market data. In this case pricing with Monte-Carlo is not well adapted to classical optimal parameter search methods for model calibration, as it leads to heavy computation and the results are not always satisfactory. To bypass this "curse of dimensionality", we apply deep learning to speed up further model calibration. 
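Since the approximated model is Markovian in (S, Z^{n,1}, ..., Z^{n,n}), a plain Euler Monte-Carlo scheme is enough for pricing. The sketch below assumes factor dynamics of the mean-reverting form dZ^{n,i}_t = (−γ_i^n Z^{n,i}_t − λ Z^n_t) dt + η sqrt(V^n_t) dW_t with Z^{n,i}_0 = z_0^i and θ(·) discarded, in the spirit of the multi-factor construction of [2]; this precise drift is an assumption made here for illustration and should be read against Equations (2.3)-(2.5). All numerical parameters below are arbitrary toy values.

```python
import numpy as np

def simulate_paths(a, b, c_base, lam, eta, z0, c_w, g, S0=100.0,
                   T=0.5, n_steps=250, n_paths=20_000, seed=1):
    """Euler scheme for the multi-factor quadratic rough Heston sketch.

    z0, c_w, g are the factor initial values (z_0^i), kernel weights (c_i^n)
    and discount coefficients (gamma_i^n). V_t = a (Z_t - b)^2 + c_base and
    dS_t = S_t sqrt(V_t) dW_t; the factor drift below is an assumed form.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Zi = np.tile(np.asarray(z0, dtype=float), (n_paths, 1))  # factor states
    logS = np.full(n_paths, np.log(S0))
    for _ in range(n_steps):
        Z = Zi @ c_w                                     # aggregated Z_t^n
        V = np.maximum(a * (Z - b) ** 2 + c_base, 0.0)
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        vol = np.sqrt(V)
        logS += -0.5 * V * dt + vol * dW                 # zero rates assumed
        drift = -Zi * g[None, :] - lam * Z[:, None]      # assumed factor drift
        Zi = Zi + drift * dt + (eta * vol * dW)[:, None]
    return np.exp(logS)

# Toy run with arbitrary parameters (not calibrated values).
c_w = np.array([0.6, 0.3, 0.1])
g = np.array([0.5, 5.0, 50.0])
S_T = simulate_paths(a=0.3, b=0.1, c_base=0.02, lam=1.0, eta=1.0,
                     z0=[0.1, 0.0, 0.0], c_w=c_w, g=g)
K = 100.0
print("ATM call estimate (zero rates):", np.maximum(S_T - K, 0.0).mean())
```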
Deep learning has already achieved remarkable success with high-dimensional data like images and audio. Recently its potentials for model calibration in quantitative finance has been investigated for example in [4,19,20]. Two types of methods are proposed in the literature: • From prices to model parameters (PtM) [19]: deep neural networks are trained to approximate the mapping from prices of some financial contracts, e.g. options, to model parameters. With this method, we can get directly the calibrated parameters from market data, without use of some numerical methods searching for optimal parameters. • From model parameters to prices (MtP) [4,20]: in the first step, deep neural networks are trained to approximate the pricing function, that is the mapping from model parameters to prices of financial contracts. Then in the second step, traditional optimization algorithms can be used to find optimal parameters, to minimize the discrepancy between market data and model outputs. It is hard to say that one method is always better than the other. PtM method is faster and avoids computational errors caused by optimization algorithms, while MtP method is more robust to the varying nature of options data (strike, maturity, . . .). In the following, the two methods are applied with implied volatility surfaces (IVS), represented by certain points with respect to some predetermined strikes and maturities. During model calibration, all these points need to be built from market quotes for PtM method, while we could focus on some points of interest for MtP method, for example those near-the-money. For comparison, we will test both methods in the following with simulated data. Methodology The neural networks used in our tests are all multilayer perceptrons. They are trained with synthetic dataset generated from the model. Our methodology is mainly based on two steps: data generation and model training. -Synthetic data generation The objective is to generate data samples ω 2 , z 0 3 , IV S SP X , IV S V IX , where IV S SP X and IV S V IX stand for implied volatility surface of SPX and VIX options respectively. We randomly sample ω and z 0 with the following distribution: 2 In our tests we do not calibrate α and fix it to be 0.51. In fact c n i , γ n i , i = 1, · · · , n in K n only depend on α, so making α constant fixes also c n i , γ n i , i = 1, · · · , n. Through experiments with market data, we find α = 0.51 is a consistently relevant choice. Results in this paper are not sensitive to the choice of α, and in practice we can generate few samples with other α and train neural networks with "transfer learning". One example is given in Appendix B. 3 We do not include S0 since we look at prices of options with respect to log-moneyness strikes. Then IV S M C SP X is represented by a vector of size m = #k SP X × #T , where #A is the cardinality of set A. We use flattened vectors instead of matrices as the former is more adapted to multilayer perceptrons, and analogously for IV S M C V IX . We generate in total 150,000 data pairs as training set, 20,000 data pairs as validation set, which is used for early stopping to avoid overfitting neural networks, and 10,000 pairs as test set for evaluating the performance of trained neural networks. We denote the neural network of PtM method by N N P tM (SP X,V IX) : R 2m + → Ω × Z. It takes IV S SP X and IV S V IX as input, and outputs estimation of parameters ω and z 0 . 
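As an illustration, the PtM network can be a plain multilayer perceptron. The sketch below (PyTorch) follows the dimensions and training choices reported in Table 3.1 — input 2m = 120 (two flattened surfaces), output 15 (the five entries of ω plus the ten z_0^i), seven SiLU layers of width 25, Adam with the stated learning-rate schedule — but omits early stopping and data standardization, and the data loader and its contents are placeholders.

```python
import torch
import torch.nn as nn

class PtMNet(nn.Module):
    """Maps a flattened (IVS_SPX, IVS_VIX) pair to the parameter vector (omega, z0)."""

    def __init__(self, m: int = 60, n_params: int = 15, width: int = 25, depth: int = 7):
        super().__init__()
        layers, d_in = [], 2 * m
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.SiLU()]
            d_in = width
        layers.append(nn.Linear(d_in, n_params))
        self.net = nn.Sequential(*layers)

    def forward(self, ivs_spx: torch.Tensor, ivs_vix: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([ivs_spx, ivs_vix], dim=-1))

def train_ptm(model: PtMNet, loader, epochs: int = 150, lr: float = 1e-3):
    """Minimal training loop; `loader` yields (ivs_spx, ivs_vix, params) batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for ivs_spx, ivs_vix, params in loader:
            opt.zero_grad()
            loss = loss_fn(model(ivs_spx, ivs_vix), params)
            loss.backward()
            opt.step()
        sched.step()
    return model
```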
As for MtP method, the network consists of two sub-networks, denoted with N N M tP SP X : Ω × Z → R m + and N N M tP V IX : Ω × Z → R m + . They aim at approximating the mappings from model parameters to IV S SP X and IV S V IX respectively. The methodology is illustrated in Figure 3.1. Table 3.1 summarizes some key characteristics of these networks and the training process 4 . Note that for model training, each element of ω and z 0 is standardized to be in [−1, 1], every point of the IV S M C is subtracted by the sample mean, and divided by the sample standard deviation across training set. Pricing In this part we first check the ability of neural networks to approximate the pricing function of the model, i.e. the mapping from model parameters to IV S SP X and IV S V IX . To see this, we compare the estimations IV S SP X and IV S V IX , given by N N M tP SP X and N N M tP V IX , with the "true" counterparts given by Monte-Carlo method. Input dimension 120 15 15 Output dimension 15 60 60 Hidden layers 7 with 25 hidden nodes for each, followed by SiLU activation function, see [18] Training epochs 150 epochs with early stopping if not improved on validation set for 5 epochs Others Adam optimizer, initial learning rate 0.001, reduced by a factor of 2 every 10 epochs, mini-batch size 128 At this stage, it is reasonable to conclude that N N M tP SP X and N N M tP V IX are able to learn the pricing functions from data of Monte-Carlo simulations. With these networks we can generate implied volatility surfaces for SPX and VIX for any model parameters. Calibration In this part, we use N N P tM (SP X,V IX) , N N M tP SP X and N N M tP V IX to perform model calibration. For PtM method, the output of network gives directly calibration results. For MtP method, the result is given aŝ 10 . We use the same weight for all points of IVS in (3.1). In practice we could consider different weights to adapt to the varying nature of market quotes in terms of liquidity, bid-ask spread, etc. We apply L-BFGS-B algorithm for this optimization problem. Note that it is a gradient-based algorithm, and the gradients needed are calculated directly with N N M tP SP X and N N M tP V IX via automatic adjoint differentiation (AAD), which is already implemented in popular deep learning frameworks like TensorFlow and PyTorch. One can refer to Section 4.1 for calculation principles. Calibration on simulated data We apply the two methods on test set generated by Monte-Carlo simulations, and we evaluate the calibration by Normalized Absolute Errors (NAE) of parameters and the reconstruction Root Mean Square Errors (RMSE) of IVS: where θ is one element of ω or z 0 , and θ up , θ low stand for the upper bound and lower bound of the uniform distribution for sampling θ. Note that we could alternatively use Monte-Carlo to reconstruct IVS with the calibrated parameters instead of N N M tP SP X and N N M tP V IX . However it would be much slower and we have seen in the above that the outputs given by N N M tP SP X and N N M tP V IX are very close to those of Monte-Carlo. Figure C.3 in Appendix shows the empirical cumulative distribution function of NAE for all the calibrated parameters. Figure 3.4 gives the empirical distribution of RMSE of IVS with the calibrated parameters from the two methods. We can make the following remarks: • From Figure C.3 in Appendix, we see that PtM method can usually get smaller discrepancy for calibrated parameters than MtP method. 
This is not surprising since the latter may end with locally optimal solutions when we use gradient-based optimization algorithms. • MtP method performs better in terms of reconstruction error, which is expected since it is exactly what the algorithm tries to minimize. • Our results are of course not as accurate as those reported in [20] for some other rough volatility models, especially in terms of calibration errors for model parameters. This is because our model is more complex and has more parameters. Consequently more data and more complicated network architecture are demanded for training. The algorithms are also more likely to end with locally optimal solutions. Calibration on market data We use MtP approach on market data since there are no "true" reference model parameter values in this case, and the objective is to minimize the discrepancy between model outputs and market observations. The arbitrage-free implied volatility interpolation method presented in [12] is used to generate IVS with the same log-moneyness strikes and maturities as before. Taking the data of 19 May 2017 tested in [14] as example, we get the following calibration results: We then use Monte-Carlo to get whole IVS of SPX and VIX with these parameters. The slices corresponding to several existing maturities in the market are shown in fall essentially between bid and ask quotes for VIX options. In addition, excellent fits are obtained in terms of at-the-money skew of SPX options 5 . Note that we do not require the market quotes of SPX options and VIX options to have same maturities. Another example of joint calibration is given in Appendix C.2. Figure C.6 in Appendix presents the historical dynamics of parameters from daily calibration on market data 6 . It is interesting to remark that during the beginning of COVID-19 crisis, a, b, c all increased, which means stronger feedback effect, more distinct feedback asymmetry and larger base variance level. The quantity Z 0 := 10 i=1 c i z i 0 became negative. All these changes contributed to larger volatility. For certain dates with extremely large market moves, we observe some spikes of errors. In fact for these dates market liquidity is usually very concentrated on one type of contract (Call or Put) with specific strikes and maturities. In this case, market IVS is not smooth enough so that the model may fail to output satisfying fit. Alternative more robust IVS interpolation methods could be tested in practice to generate smoother IVS. We could also focus only on some points of interest on IVS by excluding other points from the calibration objective (3.1). Note that all the empirical results presented here are with α = 0.51 and n = 10. Interested readers can try other α or bigger n to improve globally historical RMSE, while new synthetic data need to be generated for each new setting. 5 For quotes far from the money, we can remark a discrepancy between the market and the output with Monte-Carlo method. In fact, the mismatch between the interpolated implied volatility surface and the market on these points, and the deviation of Monte-Carlo means from neural networks' outputs can both induce this discrepancy. 6 In this paper we limited the parameters in some restricted intervals to illustrate the methodology with reasonable size of random sampling. The observation that the predetermined bounds are reached for some parameters indicates that these intervals cannot cover all market situations. 
Interested readers can choose wider ones or use unbounded distribution like Gaussian for random sampling to relax this issue, although it may demand more synthetic data for network training. Toy examples for hedging We have seen in the previous section that the proposed model can jointly fit IV S SP X and IV S V IX with small errors. Then it is important to know how to hedge options with this model. In this section, we give toy examples on synthetic data and market data as well to show how to use the neural networks to perform hedging for vanilla SPX calls. We will see that in our model perfect hedging for these products is possible with only SPX. Hedging portfolio computation with neural networks Let Z t := (Z 1 t , · · · , Z 10 t ) and X t := (S t , Z t ). As indicated in Remark (2.1), Model (2.3-2.5) being Markovian, given strike K, maturity T and model parameters ω, the price of vanilla SPX call at time t is then a function of X t . Let P t (X t ; K, T, ω) denote this quantity. With the dynamics of (Z i t ) i=1,··· ,10 in Equation (2.5), we then have dP t (X t ; K, T, ω) = δ t dS t , Note that the factors (Z i ) i=1,··· ,10 can be fully traced because they are assumed to be driven by the same Brownian motion as S, which is observable from market data. With the neural networks approximating the pricing function of the model, we will see that we can then obtain approximation of δ t for any t. Of course continuous hedging is impossible in practice. Here we perform discrete hedging with time step ∆t. The Profit and Loss (P&L) of hedging at t ∈ (0, T ] is given by where δ t k is the hedging ratio given by neural networks at k-th hedging time t k , and P t is the price of the SPX option to hedge at time t. As we can see, J δ stands for the P&L coming from holding the underlying and J P reflects the price evolution of the option. We show in the following thatδ t k can be given directly from N N M tP SP X in our model. Note that N N M tP SP X behaves like a "global" pricer that is reusable under any model parameters ω. Given ω, we could actually train a finer network as a "local" pricer taking only z 0 as input. Of course the same methodology as before can be used with fixed ω. Here we apply alternatively Differential Machine Learning as a fast method to obtain approximation of pricing function from simulated paths, under a given calibration of the model, see [21]. With N N M tP SP X outputing implied volatilities with respect to log-moneyness strikes, we have where P BS (S, K, T, σ) is the price of European call under Black-Scholes model and σ k,T N N (ω, Z t ) is the implied volatility corresponding to log-moneyness strike k and maturity T , calculated directly by N N M tP SP X with (ω, Z t ) as input. Then the partial derivatives in (4.2) are given by (4.5) where δ BS (S, K, T, σ) and ν(S, K, T, σ) stand for the Delta and Vega respectively under Black-Scholes model. The quantity ∂σ k,T N N ∂Z i t corresponds actually to the derivative of the outputs of N N M tP SP X with respect to its inputs. Thus it can be obtained instantaneously with built-in AAD. (ω, Z t ). Note that some interpolation methods need to be applied for arbitrary pair (K, T − t) since N N M tP SP X has fixed log-moneyness strikes and maturities. Method 2: with Differential Machine Learning Given parameters ω, we can simulate a path of model state (X t ) 0≤t≤T starting from the initial state X 0 . The pathwise payoff (S T − K) + is in fact an unbiased estimation of P 0 (X 0 ; K, T, ω). 
Under some regularity conditions, the pathwise derivative ∂(S T −K)+ ∂X0 . We show in the following how to calculate this quantity with the simulation scheme proposed in (A.1). The basic idea of Differential Machine Learning [15,21] is to concatenate pathwise payoff and pathwise derivatives as targets to train a neural network, denoted by N N DM L , to approximate the pricing mapping from X 0 to P 0 under some fixed ω. Thus, the training samples are like X 0 , (S T − K) + , ∂(S T −K)+ ∂X0 , and the loss function for training is like with L 1 and L 2 some suitably chosen loss functions. Similarly to the case with N N M tP SP X , ∂N N DM L (X0) ∂X0 can be calculated efficiently with AAD. In this way, N N DM L aims at learning both the pricing function and its derivatives during training. This can help the networks converge with few samples, see [21]. ∂X0 can be calculated with AAD, which is based on the chain rule of derivatives computation, see [15] for more details. In our case, with the Euler scheme in (A.1), let ∆t the simulation step with N ∆t = T . We have ∂Ŝ k+1 where X j 0 is the j-th element of X 0 . LetX k := (Ŝ k ,Ẑ 1 k , · · · ,Ẑ 10 k ). This can be rewritten in matrix form ∆(k + 1) = D(k)∆(k), with ∆ i,j (k) = ∂X i k ∂X j 0 , i, j = 1, · · · , 11, and Then we have where ∆(0) is simply the identity matrix by definition, and V(0) can be calculated recursively: (4.8) Note that for each simulated path, D(k) k=0,··· ,N −1 can be readily obtained, so the quantity ∂(Ŝ T −K)+ Since we use the same trained network for hedging at any t ∈ (0, T ], to accommodate the varying time to maturity (T − t), we consider multiple outputs corresponding to different maturities T 1 , T 2 , · · · , T m for N N DM L , with T 1 < T < T m . The quantities ( ,··· ,m can be computed following (4.7, 4.8). For training, we use the average of m derivatives in (4.6) for simplification, see [21] for more details on designs with multi-dimensional output. After training N N DM L with respect to strike K and parameter ω, we have P t (X t ; K, T, ω) N N T −t DM L (X t ) , with N N T −t DM L (·) the output corresponding to maturity (T − t). Thus we get Then the formula in (4.2) is used to obtain hedging ratio. As in the case with N N M tP SP X , some interpolation methods are needed for arbitrary (T − t). In our tests, we choose the following characteristics for N N DM L and its training: • 4 hidden layers, with 20 hidden nodes for each, SiLU as activation function, • input dimension is 11, output dimension is 5 corresponding to maturities 0.02, 0.04, 0.06, 0.08, 0.1, • mini-batch gradient descent with batch size 128, initial learning rate 0.001, divided by 2 every 5 epochs, • sample uniformly X 0 and simulate 50,000 paths, train N N DM L with 20 epochs. Hedging on synthetic data Without loss of generality, we take the following parameters in the experiments: • ω = (1, 1.2, 0.35, 0.2, 0.0025) • S 0 = 100, Z i 0 = 0, i = 1, · · · , 10 • K = 98, T = 0.08 First we generate 50,000 paths of (S t ) 0≤t≤T by following the scheme A.1 with time step ∆t = 0.0012. The price P 0 is estimated by the average of pathwise payoffs. Then we evaluate J T for 5000 paths among them, with hedging time step ∆t ∈ {0.0012, 0.0036}. From Figure 4.1 we can see that both methods lead to hedging payoffs around 0. Hedging less frequently brings slightly larger variance of payoffs. We also remark that the method with N N M tP SP X generates smaller variance than the other with N N DM L . 
It is expected as the latter is trained with pathwise labels while the former is trained with "true" labels given by Monte-Carlo means, which have certainly smaller variance. Hedging on market data We perform daily hedging on two SPX monthly calls: On first day of hedging, we take market data for model calibration and get ω and z 0 . N N DM L is then trained on paths generated under ω. Then for each following day, we update the value of factors (Z i ) i=1,··· ,10 with respect to the evolution of SPX, and we compute the hedging portfolio with N N M tP SP X and N N DM L as explained in the above. Besides, we also test with Black-Scholes model where the implied volatility of the first day is used to compute Delta as hedging ratio. N N DM L can follow very well the market price of options, with smaller |J T | than Black-Scholes approach. Of course, in practice we need to consider more elements like hedging cost, slippage, and to do more tests for systematic comparison, but this is out of the scope of our current work. Conclusion We have seen that the deep neural networks can be used in calibrating the quadratic rough Heston model with reliable results. The training of network demands indeed lots of simulated data, especially when the dimension of model parameters is high. However, it is done off-line only once and the network will be reusable in many situations. Under the particular setting of our model, we can also use the network for risk hedging. Certainly, we can still improve the results presented in the above, for example fixing finer grids of strikes and maturities, or using more factors for the approximation. We emphasize that the methodologies presented in our work are of course not limited to the model introduced here, and can be adapted to other models and other financial products. A Kernel function approximation and simulation scheme Here we recall the geometric partition of (c n i , γ n i ) i=1,··· ,n proposed in [1]: . Given n, α and T , we can determine the "optimal" x n as x * n (α, T ) = arg min We fix T = 0.1 as we are more interested in short maturities and we get the optimal x * n for different n and α as shown in Figure A.1. It is consistent with the analysis in [1] that given α, we need to increase x n to mimic roughness with less factors. Given n, x * n does not change a lot with α, which indicates that in practice we can actually fix x * n independently of α. Choosing a good n is a trade-off between simulation efficiency and good approximation of rough volatility models. In our test, we take n = 10 and x * 10 = 3.92. On one hand, we can see from Figure A.2 that the approximation of K 10 to K is not far away from other K n with larger n. On the other hand, it means a margin for improvement with larger n. K n (t) K(t) K n with n=5 K n with n=10 K n with n=20 K n with n=50 K n with n=100 by taking x * 10 = 3.92. We discard the notation n and use the following modified explicit-implicit Euler scheme for simulating Model for a time step ∆t = T /N , k = 1, · · · , N and (W k+1 − W k ) ∼ N (0, ∆t). One could also use the explicit sheme for Z i given byẐ . However, with above x * 10 , we get γ 10 = 542.32. One then need ∆t to be necessarily small to ensure the scheme's stability. Instead we could usê which leads to Scheme A.1 and avoids this issue. B Network training with transfer learning When we switch to models with α not equal to 0.51, we can apply the idea of transfer learning to accelerate network training. 
More precisely, we use the parameters of the network corresponding to the case α = 0.51 to initialise the network for cases with different α.
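In PyTorch, such a warm start can be sketched as follows; this is illustrative only and assumes the new network shares exactly the same architecture as the one trained for α = 0.51.

```python
import copy
import torch.nn as nn

def warm_start(pretrained: nn.Module, fresh: nn.Module,
               fine_tune_last_only: bool = False) -> nn.Module:
    """Initialise `fresh` (a pricer for a new alpha) from `pretrained`.

    Optionally freeze everything but the last Linear layer, so that only a
    small head is re-fitted on the few samples generated under the new alpha.
    """
    fresh.load_state_dict(copy.deepcopy(pretrained.state_dict()))
    if fine_tune_last_only:
        last_linear = [m for m in fresh.modules() if isinstance(m, nn.Linear)][-1]
        for p in fresh.parameters():
            p.requires_grad = False
        for p in last_linear.parameters():
            p.requires_grad = True
    return fresh
```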
2021-07-06T01:15:48.786Z
2021-07-04T00:00:00.000
{ "year": 2021, "sha1": "c3934d37200f475a35a5da6e320cef06282aec76", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "79222856dd805fa214356050a1c35fa0f9f8d507", "s2fieldsofstudy": [ "Computer Science", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
233845716
pes2o/s2orc
v3-fos-license
Effect of Pads and Thickness of Paddy on Moisture Removal under Sun Drying Background: Sun drying is a popular post-harvest operation to maintain rice quality during the storage period. Farmers use different pads and thicknesses for sun drying of paddy in Ampara district, Sri Lanka. A study was conducted to evaluate the suitability and effectiveness of the drying pad and thickness as practiced by local paddy farmers during the sun drying process.Methods: The grain with an initial moisture content of 28% (dry basis) was sun dried with four types of drying pads and five levels of thickness of grain. This experiment was conducted between 8.30 am and 4.30 pm at the South Eastern University of Sri Lanka in August 2020. The moisture contents of the grain were measured at regular time intervals.Result: It was found that the duration of drying of paddy from 28% to 13% moisture content on a dry basis was 300 to 540 minutes depending upon the drying pad and thickness. The tarpaulin is reasonable at shallow thickness with less time to reach the necessary moisture level than other drying pads. Black polythene and fertilizer bag can be utilized for sun drying of paddy at 4 cm thickness with 130 minutes. It was found that with an increase in the thickness of paddy from 0.5 cm to 4 cm, the drying time increases. A statistically significant interaction was obtained between drying pads and thickness level on moisture removal of paddy. Therefore, the moisture removal rate differs with the drying pad and thickness of the paddy under open sun drying. INTRODUCTION Open sun drying is one of the cost-effective and popular methods to remove paddy moisture content compared to mechanical drying in South Asian countries. Paddy drying is a highly energy consuming process and significantly affects the milled rice quality, such as head rice yield (Prahayawakon et al., 2005). Furthermore, drying is a critical step during post-harvest practices and inadequate drying contributes to increased post-harvest loss (Kumar and Kalita, 2017). Moreover, poor drying practices may cause 3-5% post-harvest losses of paddy (FAO, 2013). It is also essential to dry the paddy as soon as possible after harvesting, ideally within 24 hours (Wilfred, 2006). Acceptable drying practices are crucial that directly influence safe storage to minimize post-harvest losses (Masood et al., 2018). Therefore, an understanding of the different parameters that farmers can control on drying performance and on the final quality of the dried grain is fundamental, as it can optimize the drying process and maximize the quality and hence the value of the grain (Imoudu and Olufayo, 2000). The effectiveness of drying varies due to several factors such as variety, harvesting methods, initial and final moisture content and drying methods (Iguaz et al., 2003 andTorki-Harchegani et al., 2014). Sun drying increases the broken rice rate in milling if the grain temperature gets excessively high (Truong et al., 2012). Among these factors, the final moisture content is a critical factor determining rice's selflife during storage and other post-harvest practices. Because respiration in the grain at high grain wetness causes deterioration. High moisture content promotes the pest and disease attack in the grain. In contrast, if the paddy's moisture content is too low, the grains are so fragile when being milled. This can lead to a higher fraction of broken kernels. 
Keeping the paddy at acceptable moisture content can prolong storage time and prevent mould growth (Cheenkachorn, 2007). Therefore, the required moisture content is 13 to 14% for storage and 10 to 13% for milling (Babamiri et al., 2013). The drying pad is one of the critical factors determining the drying rate of paddy. Drying pads have been used for sun drying by farmers for easy unloading, loading and least losses. Imoudu and Olufayo (2000) found that sun drying on a concrete floor took a high drawn-out time than on a mat surface even though it produced higher head rice recovery. A similar sun drying experiment was conducted between 8 am to 4 pm in Cambodia to assess the effects of different drying pads (tarpaulin, nylon net, nylon net on husk layer and mat made of sugar palm leaves) and two levels of . Therefore, the drying pad influences the moisture removal rate from the paddy during the sun drying operation. The thickness of paddy on the drying surface is another crucial factor determining the moisture removal of paddy. Most of the farmers are practicing different thickness levels according to the quantity of paddy, condition of weather and labor availability without an understanding of the drying performances. Too thin layers tend to heat up very quickly, negatively affecting the head rice recovery. On the other hand, deep layers create dry grains on the top and wet grains on the base, which re-adsorbs moisture on subsequent stirring leads to high broken grains (IRRI, 2013). Thus, the paddy has to be dried in optimum thickness during the sun drying operation. Paddy is dried in an open environment with different conditions in the Ampara district, one of the areas with higher paddy production in Sri Lanka. Different drying pads and thicknesses have been used traditionally depending on the quantity, labor availability and surface area. However, the performance of drying under these conditions has not been studied. Therefore, the objectives of this study were to determine the suitable drying pad and the optimum drying depth during sun drying methods practiced by local paddy farmers in the Ampara District. Sample collection and experimental site Freshly harvested long grain paddy variety (AT 362) that is commonly grown in the region was used in this study. Paddy harvested by combine harvester was procured from the paddy field at Nithavur, Ampara District, Sri Lanka, during the Yala season. The grain sample was transported immediately from the paddy field to the experimental site. The sun drying experiment was conducted between 8.30 am to 4.30 pm in August 2020 at the South Eastern University of Sri Lanka (718'00.3"N and 8151'41.8"E). Experimental design The different drying treatments were identified based on the traditional methods used by local farmers. This experiment was designed as a Factorial Randomized Complete Block Design with two factors, different drying pad (4 types) and different bed depth (5 levels) with three (03) replications (Fig 1). Table 1 shows the characteristics of the four drying pads used in this study. Five levels of grain thicknesses, 0.5 cm, 1 cm, 2 cm, 3 cm and 4 cm, were prepared within one-meter square (1m 2 ) wooden frames. Measurement The grain's moisture content was measured hourly using a digital grain moisture meter (Model: LDS-1H) and the paddy was stirred hourly by hand raking. The moisture meter was calibrated by standard hot air oven method using collected paddy sample (ASAE Standards, 1998). 
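For reference, the dry-basis moisture content used throughout (e.g. the 28% initial and 13-14% target values) relates the mass of water to the oven-dry matter mass; a small helper with purely illustrative weights:

```python
def moisture_content_dry_basis(wet_weight_g: float, oven_dry_weight_g: float) -> float:
    """Dry-basis moisture content (%): water mass / dry-matter mass * 100."""
    return 100.0 * (wet_weight_g - oven_dry_weight_g) / oven_dry_weight_g

# e.g. a 100 g paddy sample weighing 78.1 g after oven drying:
print(round(moisture_content_dry_basis(100.0, 78.1), 1))  # ~28.0 % (dry basis)
```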
The atmospheric temperature and relative humidity were recorded at one hour interval during the drying. All the drying experiments were carried out in triplicate. Data analysis All data were subjected to analyze the variance and significant differences among the treatments using SAS, SPSS Version 26 for Windows and OriginLab (2019b). Effect of different drying pads on moisture removal The changes in paddy moisture content under the tarpaulin, black polythene, fertilizer bag and hemp sack with day time are shown in Fig 2. Moisture content reduces during the drying period in all drying pads with different thickness levels. Effect of Pads and Thickness of Paddy on Moisture Removal under Sun Drying Around 12% of moisture content was removed from the paddy between 8.30 am to 12.30 pm in all the treatments and there are no significant differences in moisture removal were observed in all four drying pads after 1.30 pm. Candia et al. (2013) also conducted a similar experiment at the initial moisture content of 22 -28%. The required moisture content of 14% (dry basis) was obtained in 5 to 9 hours from 28% initial moisture content depending on the type of drying pads used. Kumoro et al. (2019) conducted another similar experiment from 7.00 am to 12.00 noon by evenly spreading rough paddy onto a concrete floor and tarpaulin (white and black) with the rice layer thickness 2 to 5 cm. Hellevang (2004) reported that plastic sheets could result in condensed water and tend to hold them in low places; thus, not suitable for drying pads. In this study, tarpaulin and hemp sack were found suitable at shallow thickness with less time to attain the required moisture level than other drying pads. Black polythene and fertilizer bag are ideal for sun drying with an increased thickness level of paddy. thickness of paddy. Since all paddy depths received the same quantity of solar radiation per unit area and at the same time, the deeper depths needed much more time to reach the recommended milling moisture content. Candia et al. (2013) reported that 7 to 8 days of drying is required for 7 cm thickness. A similar study conducted in the Philippines reported that the recommended paddy drying depth using the open sun drying method is 2 to 4 cm (IRRI, 2009). Therefore, a suitable thickness of paddy and efficient drying rate in open sun drying depends on the drying pad used. Table 2 illustrate the time required to obtain the recommended moisture content (14%) of paddy grains with four types of drying pads and five thickness levels. Overall, the time requirement is gradually increasing with the increased level of thickness from 0.5 cm to 4.0 cm in every treatment. The least time required to reach 14% moisture level is at 0.5 cm thickness using four different drying pads, whereas the highest time requirement is at 4 cm thickness. At 0.5 cm thickness, tarpaulin recorded the lowest time (98 minutes), while fertilizer bag required the highest time (156 minutes). Tarpaulin and hemp sack are not suitable with 4 cm thickness in terms of drying time. Fig 4 and Moreover, there was no significant difference observed among the four different drying pads using 1 cm thickness. Black polythene and fertilizer bag showed the lowest time at 2 cm, 3 cm and 4 cm of drying thickness. However, the Effect of Pads and Thickness of Paddy on Moisture Removal under Sun Drying tarpaulin showed less time to reach the target moisture level than the hemp sack drying pad. 
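A sketch of the factorial analysis described in the Data analysis subsection above, using statsmodels in place of SAS/SPSS; the data frame and column names are placeholders rather than the study's actual records:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def factorial_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Two-factor RCBD ANOVA: drying pad x thickness with replication blocks.

    Expected columns (illustrative): pad (4 levels), thickness_cm (5 levels),
    block (1-3) and dry_time_min, the time to reach 14% moisture (dry basis).
    """
    model = ols("dry_time_min ~ C(block) + C(pad) * C(thickness_cm)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)   # main effects and pad x thickness interaction
```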
International Rice Research Institute (2013) reported that the optimum paddy layer thickness is between 2 to 4 cm for open sun drying. Black polythene and fertilizer bag can be used in paddy drying when the farmers need to dry the paddy with a high thickness level. In contrast, the increased thickness level of paddy under the tarpaulin is not suitable for sun drying. But tarpaulin is the right drying pad when the farmers want to dry the paddy in a shallow thickness level because it will take less time to attain the required moisture level than other drying pads. An increasing trend was observed in tarpaulin and hemp sack from a low level to a high level of paddy thickness, but no significant trend has been kept in black polythene and fertilizer bag. Interaction effect of drying pad, thickness and time on moisture removal Interaction between drying pad and thickness on moisture removal was found to be significant (p=0.001) and the relationship between thickness and moisture content also showed a significant (p=0.001) interaction with moisture removal. Similarly, the relationship between drying pad and moisture content also showed a significant (p=0.001) interaction with paddy moisture removal in this experiment. The posthoc test using Duncan's Multiple Range (= 0.05) results indicated no significant variation in the moisture removal by using a hemp sack and tarpaulin. Similarly, there is no significant variation in paddy moisture content by using fertilizer bag and black polythene in all the thickness level. Therefore, different drying pads and thickness levels showed moisture removal's influence under open sun drying. Effect of atmospheric temperature and relative humidity Fig 5 shows the variation of atmospheric temperature and relative humidity in the experimental site during the sun drying operation. Accordingly, weather and high relative humidity range between 29C -30C and 72% -78%, respectively. Around 12% of moisture content was removed from the paddy between 8.30 am to 12.30 pm in all the treatments. However, there are no significant differences in moisture removal observed in all four drying pads after 1.30 pm. The main reason for fast initial moisture removal before noon is due to high atmospheric temperature and low relative humidity in the experimental site (Fig 5). This is supported by Candia et al. (2013) as external wetness will readily evaporate when the paddy is open to hot air. Still, interior moisture evaporates gently as it has to transfer away from the kernel to the exterior due to surface forces. According to Mujumdar (2004), the mechanism of water evaporation in the material occurs through heat and mass processes simultaneously. The time taken to reach the required moisture content of paddy is ranged from 5 to 9 hours, depending on the air temperature and relative humidity in the experimental site. CONCLUSION Drying performance significantly varies with the drying pad and thickness of the paddy. The time requirements to reach the required moisture content with black polythene and fertilizer bag were 120 to 156 minutes, respectively, from 28% initial moisture content (dry basis). Tarpaulin was found suitable at shallow thickness with a less amount of time compared to other drying pads. In contrast, the tarpaulin drying pad is not ideal for sun drying for paddy's high thickness. The time required to reach the required moisture content has increased with the increasing thickness level in tarpaulin and hemp sack. 
No significant trend with thickness was observed for black polythene and the fertilizer bag. Black polythene and fertilizer bags can be used for sun drying of paddy at 4 cm thickness, reaching the required moisture content in about 130 minutes on a sunny day. A statistically significant interaction between drying pad and thickness level was obtained for moisture removal of paddy.
2021-05-07T00:03:08.354Z
2021-03-05T00:00:00.000
{ "year": 2021, "sha1": "6dff3ca80e03aa22167469e6d1fa83b53e589b5d", "oa_license": null, "oa_url": "https://doi.org/10.18805/ag.d-327", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "6e8e11a7e1e7d09a13d1a06a120559aece8d3da7", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
80287878
pes2o/s2orc
v3-fos-license
Efficacy and safety of intralesional triamcinolone acetonide in the treatment of chronic hand eczema This randomized controlled clinical trial was conducted to assess the efficacy and safety of intralesional triamcinolone acetonide in the treatment of chronic hand eczema comparing with topical clobetasol propionate. A total 60 patients of chronic hand eczema were recruited in the study. Thirty patients (Group A) were treated with intralesional triamcinolone acetonide and the rest 30 (Group B) with topical clobetasol propionate. Severity and improvement were assessed using Hand Eczema Severity Index (HECSI) score. The patients of both groups were followed up at 4th week and 12th week. In Group A, median HECSI score at baseline, 4th week and 12th week were 3, 20 and 20 respectively; whereas these scores were 54, 10 and 8 in Group B. In both groups, HECSI score was decreased gradually but the rate was higher in Group B than Group A (p 0.05). The result of this study demonstrates that intralesional triamcinolone acetonide is effective and safe in treating chronic hand eczema but less effective than the topical clobetasol. Introduction Hand eczema or hand dermatitis is defined as an inflammation of the skin of the hands. 1 This itchy skin condition of hands usually presented with erythema, edema and occasionally vesicles in acute stage; whereas chronic hand eczema lichenification, scaling and fissures are prominent. 2The world-wide prevalence of hand dermatitis among different racial and occupational groups ranges from 4 to 10%. 2 The most common causes are allergic contact dermatitis (19%), irritant contact dermatitis (35%), atopic dermatitis (22%) and others such as nummular hand eczema, hyperkeratotic hand eczema, or pompholyx. 35][6] Long-term potent topical steroids are effective but the skin atrophy, infection, hypertrichosis, tachyphylaxis and adrenal suppression are among many cutaneous and systemic adverse effects. 7Triamcinolone acetonide is a synthetic corticosteroid and effective in the treatment various skin conditions.It is a more potent derivative of triamcinolone and about eight times stronger than prednisolone. 8ntralesional application of corticosteroid is used to treat a dermal inflammatory process directly deliver and maintain a high concentration of the drug at the pathologic site, with less systemic absorption. 9It avoids the thickened stratum corneum, minimize epidermal atrophy and deliver higher concentration to the site of pathology. 8-9Therefore, the purpose of the present study was to assess the efficacy and safety of intralesional triamcinolone acetonide in patients with chronic hand eczema comparing with topical clobetasol propionate. Materials and Methods It was a randomized controlled clinical trial, carried out from April to September 2014.It was conducted as a dissertation after approval of the protocol by Bangladesh College of Physicians and Surgeons (No. 
CPS-7/2014/DSN 2015-010008). Sixty clinically and histopathologically diagnosed cases of chronic hand dermatitis of more than three months' duration were selected after taking informed written consent and enrolled randomly into two groups: Group A (case) and Group B (control). In Group A, patients were given a 0.1 mL injection of 40 mg/mL triamcinolone acetonide per square cm of involved area intralesionally, repeated at 28-day intervals at the 4th and 8th week. Patients of the control group were given topical clobetasol propionate twice daily for 2 weeks, then every alternate day for up to three months. Clobetasol propionate ointment (Dermovate ointment; GlaxoSmithKline) and triamcinolone acetonide injection (Injection Trialan; Ziska Pharma BD. Ltd.) were purchased from the local drug store.

The effect of treatment was evaluated using the Hand Eczema Severity Index (HECSI) score 10 at baseline and after treatment at the 4th week and 12th week, and the scores were recorded. For each location (total of both hands), the affected area was given a score from 0 to 4 (0 = 0%; 1 = 1-25%; 2 = 26-50%; 3 = 51-75%; 4 = 76-100%) for the extent of clinical symptoms. The intensity of each clinical feature was given a score from 0 to 3 (0 = no feature; 1 = mild; 2 = moderate; 3 = severe). Finally, the extent score for each location was multiplied by the total sum of the intensities of the clinical features (each contributing equally to the final score), and the total, called the HECSI score, was calculated, varying from 0 to a maximum severity score of 360 points. In addition, adverse reactions such as striae, telangiectasia, thinning of the skin and hypertrichosis were also recorded. There was no case of drop out in either treatment group.

Results
The mean age of Group A (32.4 ± 10.2 years) was higher than that of Group B (29.6 ± 10.0 years), but the difference was not statistically significant (p>0.050) (Table I). The male-female difference between the two groups was not statistically significant (p>0.050). In Group A, 33.3% of patients were housewives, followed by 33.3% service holders, 20.0% students and 13.3% business persons. In Group B, 33.3% of patients were business persons, followed by 26.7% students, 20.0% service holders and 20.0% housewives. The difference between the two groups was not statistically significant (p>0.050).
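The HECSI computation described in the Methods can be written compactly as below. This is a sketch: the five hand locations and six clinical signs follow the published HECSI instrument rather than this paper's wording, and the example scores are invented.

```python
from typing import Dict, Sequence

LOCATIONS = ["fingertips", "fingers", "palms", "back_of_hands", "wrists"]
SIGNS = ["erythema", "induration", "vesicles", "fissures", "scaling", "oedema"]

def hecsi(extent: Dict[str, int], intensity: Dict[str, Sequence[int]]) -> int:
    """Hand Eczema Severity Index.

    extent[loc]    : 0-4 area score for each location (both hands combined)
    intensity[loc] : one 0-3 score per clinical sign at that location
    Per location the score is extent * sum(intensities); HECSI is the total,
    ranging from 0 to 5 * 4 * 6 * 3 = 360.
    """
    total = 0
    for loc in LOCATIONS:
        assert 0 <= extent[loc] <= 4 and all(0 <= s <= 3 for s in intensity[loc])
        total += extent[loc] * sum(intensity[loc])
    return total

# e.g. moderate involvement limited to the palms:
extent = {loc: 0 for loc in LOCATIONS}
intensity = {loc: [0] * len(SIGNS) for loc in LOCATIONS}
extent["palms"], intensity["palms"] = 2, [2, 1, 0, 1, 2, 0]
print(hecsi(extent, intensity))  # 2 * (2 + 1 + 0 + 1 + 2 + 0) = 12
```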
The initial HECSI score in Group A was 33 (12-148), at the 4th week 20 (0-105) and at the 12th week 20 (0-80) (Table II). In Group B, it was 54 (30-152) initially, then 10 (0-34) and 8 (0-49) at the 4th and 12th week respectively. In both groups, the HECSI score decreased gradually, but the rate was higher in Group B than in Group A. At each follow-up, the difference between the two groups was statistically significant (p<0.050). Thinning of skin was seen in 10.0% of patients in Group A and 16.7% of patients in Group B; the difference between the two groups was not statistically significant (p>0.050).

Discussion
The current study was conducted to compare the efficacy of intralesional triamcinolone acetonide with topical clobetasol propionate in the treatment of chronic hand eczema. In both groups, the HECSI score decreased gradually, but the rate was higher in Group B than in Group A. At each follow-up, the difference between the two groups was statistically significant (p<0.050) (Table III; the Mann-Whitney U-test was used to measure the level of significance). It has been well established that topical steroids are the mainstay of the pharmacological treatment of hand eczema. 5,11 Long-term potent steroids are very effective; however, adverse effects such as skin atrophy, tachyphylaxis and adrenal suppression are well known, and after a few weeks of twice-daily use it is recommended to use them only on alternate days or twice a week. 7 Similar to the present study, Möller et al. (1983) compared long-term, intermittent maintenance treatment of chronic hand eczema with clobetasol propionate against other topical steroids and found that the dermatitis of 90% of these patients had cleared. 12 No previous study has been found comparing the efficacy of topical clobetasol with intralesional triamcinolone in the treatment of hand eczema. Here, the severity of hand eczema (HECSI score) reduced gradually in both the intralesional triamcinolone treated group and the topical clobetasol treated group, but the rate of reduction was significantly higher in the clobetasol treated group (p<0.050). Regarding adverse reactions, thinning of skin was seen in 10.0% and 16.7% of patients in Group A and Group B respectively (p>0.050). Alcers (1980) has shown that the common risk of long-term use of potent topical steroids is thinning of the skin. 7 Here, thinning of the skin with topical clobetasol propionate was comparable with that of intralesional triamcinolone acetonide.

Conclusion
Intralesional triamcinolone acetonide is effective in the treatment of hand eczema, but its efficacy is significantly less than that of topical clobetasol propionate, with comparable safety.
2018-12-16T11:37:18.124Z
2017-09-03T00:00:00.000
{ "year": 2017, "sha1": "3e63be09dd37b62ae954de06cf734ab3b47adf1a", "oa_license": "CCBY", "oa_url": "https://www.banglajol.info/index.php/BSMMUJ/article/download/33458/22855", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3e63be09dd37b62ae954de06cf734ab3b47adf1a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17386873
pes2o/s2orc
v3-fos-license
Impact of Chronic Simulated Snoring on Carotid Atherosclerosis in Rabbits Background and Purpose Chronic simulated snoring was induced in rabbits to determine the impact of snoring on the development of atherosclerosis. Methods The pressure wave of induced snoring at the carotid bifurcation of rabbits was acquired by gently pressing the airway. This wave was then simulated using custom-made mechanical devices. Twelve rabbits were used in this study, seven of which were assigned to the experimental group and the remaining five formed the control group. All of the rabbits were raised on a 1% high-cholesterol diet. Either working or sham devices were positioned at the ventral center of the neck in each rabbit. At the end of a 2-month observation period, all of the rabbits were sacrificed by perfusion fixation, the carotid arteries harvested, and the carotid atherosclerosis histology reviewed. Results All of the rabbits survived to the end of the experimental period. Blood sampling revealed the presence of hypercholesterolemia in both groups, with no significant difference between them. The presence and degree of atherosclerosis did not differ significantly between the groups. Conclusions The findings of this study show the feasibility of making a chronic simulated snoring rabbit model. However, the causative role of snoring in carotid atherosclerosis was not detected in this animal study. Introduction Atherosclerosis and its consequences greatly affect human health and longevity. 1 The factors regarded as contributors or causes of vascular diseases include hyperlipidemia, hyperhomocysteinemia, metabolic derangements such as diabetes mellitus, turbulent flow, hypertension, mechanical injuries, immunological injuries, and obstructive sleep apnea syndrome. [2][3][4] Snoring is a highly prevalent condition that occurs in 7-50% of people, [5][6][7][8][9][10][11] depending on their age, gender, ethnic group, and other relevant criteria. However, its significance has yet to be well defined. A recent epidemiological study found that the vibration associated with snoring is related to atherosclerosis of the carotid arteries. 12 That study found that the prevalence of atherosclerosis increased with the severity of snoring. The aim of this study was to obtain direct evidence for a causal relationship between atherosclerosis and snoring. A chronic rabbit snoring model was devised and used under controlled conditions. Subjects This study was performed on female New Zealand White rabbits (2.5-3.0 kg). The study protocol was approved by the Institutional Animal Care and Use Committee of SMG-SNU Boramae Medical Center. Modeling of snoring Anesthesia was induced in the rabbits by the intramuscular injection of Zoletil (3 mg/kg) and xylazine (5 mg/kg). Rabbits were positioned in a supine position and breathed spontaneously. After making a skin incision on the ventral side of the neck, the left carotid artery was exposed by blunt dissection up to the level of its bifurcation. A custom-designed balloon-tipped catheter (Department of Biomedical Engineering, Seoul National University Hospital) filled with water was placed beneath the carotid artery bifurcation. The catheter was connected to a pressure sensor (NovaSensor NPC-100, General Electric, Bently, NV, USA). The signal was amplified and filtered using a custom-built signal conditioning circuit and then digitized using a commercial data acquisition device (USB6009, National Instruments, Austin, TX, USA). 
The monitoring and analysis program was implemented in a visual programming language (LabVIEW, National Instruments). Having obtained a baseline wave, a 20-g sandbag was placed on the trachea to induce snoring. The waveform of induced snoring was recorded. This procedure was conducted according to Amatoury et al. 13 The effect of snoring was simulated using a small DC vibrator motor with an eccentric rotor. A custom-built control circuit turned it on and off every 2 s (0.25 Hz with a 50% duty cycle) to mimic the normal respiratory pattern. The induced pressure waveform was recorded through the balloon-tipped catheter under the carotid bifurcation while applying the vibrator to the skin of the ventral neck. Visually, the obtained waveform was similar to that of induced snoring (Fig. 1). Four rabbits were used for this proce-dure. They were euthanized by the intravenous injection of urethane (1 g/kg; Sigma, St. Louis, MO, USA) and potassium chloride (2 mmol/kg). Simulation of long-term snoring The rabbits wore custom-made vests in order to simulate longterm snoring; the vest supported the vibrator so that it was in contact with the neck, and also provided pockets for the controller and batteries. The controller was programmed to operate for 12 h during the daytime and then stop for the following 12 h. The vibrating stimulus was applied from 6.00 a.m. to 6.00 p.m. to cover rabbit's usual sleeping time with some extension. Although it was possible to provide maximal stimulation (i.e., for the entire day), the animals were allowed some rest. Four AA batteries were used as a power source (Fig. 2). A total of 12 rabbits were assigned to two groups: experimental and control. The motor operated during the experimental period in the experimental group (n=7), while a sham device with a dummy motor and circuit board was installed in the vest in the control group (n=5), with no vibratory stimulation. Rabbits were fed ad libitum with a high-cholesterol diet (1% cholesterol, DYET# 620007, Dyets, Bethlehem, PA, USA) and had free access to water. They were examined daily to ensure the correct positioning and proper operation of the apparatus. The batteries were changed weekly. Harvesting the carotid arteries The carotid arteries were harvested for histological examination at the end of the 2-month experimental period. After the induction of anesthesia by the intramuscular injection of Zoletil and xylazine, as described above, urethane (1 g/kg) was injected intravenously for a deeper anesthesia. Peripheral blood was sampled from an ear vessel. A midline sternotomy was performed with preservation of the pericardium. All four cardiac chambers and great vessels were identified after incision of the pericardium, and cardiac puncture was performed at the apex of the left ventricle using an 18-G needle. Prewarmed Hartman solution was infused through the ventricular needle, and the right auricular appendage was opened with scissors for the simultaneous drainage of blood. After infusion of 500 mL of the solution, 4% cooled paraformaldehyde (about 250 mL) was infused through the ventricular needle. The right and left carotid arteries from 3 cm below to 1 cm after the bifurcation were harvested after neck dissection. Each harvested specimen was immersed separately and immediately in 4% paraformaldehyde. Histological examination The harvested carotid arteries were fixed for 1 day and then embedded in paraffin. 
The specimens were subsequently sectioned at a thickness of 4 mm, mounted onto glass slides, and then stained with hematoxylin-eosin. Atherosclerotic changes of the carotid artery were assessed by a pathologist according to the modified American Heart Association (AHA) classification suggested by Stary et al. 14,15 Statistical analysis Statistical analysis was performed using SPSS software (version 14.0, SPSS Inc., Chicago, IL, USA). The independentsamples t-test and chi-square test were applied. Probability values of p<0.05 were considered statistically significant. Results All of the rabbits survived to the end of the experimental period. The gross appearance of the harvested carotid arteries revealed no significant soft-tissue injuries or abnormal findings. The blood chemistry results revealed a similar level of hypercholesterolemia in all animals, irrespective of their grouping (Table 1). Upon histological examination, each specimen exhibited a normal contour with a preserved lumen. Atherosclerotic changes were present in the carotid arteries from both groups, with no significant differences in the occurrence rates or degree of severity between them (p=0.364, chi-square test) (Table 2, Fig. 3). Discussion Snoring is a highly prevalent condition 5-11 whose significance has not yet been well defined. Lee et al. 12 recently conducted an epidemiologic, observational cohort study following 110 subjects with or without snoring, and found that the prevalence of atherosclerosis increased with the severity of snoring. The authors suggested that the regional vibration may have been responsible for the carotid atherosclerosis. There have been no other similar studies. However, while the study of Lee et al. represents a novel and impressive interpretation of the epidemiological data, it has some shortcomings. For example, some of their subjects had accompanying obstructive sleep apnea syndrome, which is a well-known risk factor for atherosclerosis. Furthermore, their study provides only Modified American Heart Association classification. 14 indirect evidence of the relationship between snoring and atherosclerosis due to the inherent limitations of observational studies. The present study devised a chronic snoring rabbit model in an attempt to elucidate the significance of snoring without obstructive sleep apnea in the genesis of atheromas. The custom-built mechanical stimulating device could be applied to the rabbits in a noninvasive manner, such that it did not hurt the rabbit and enabled long-term maintenance. The use of a neck collar prevented the device from being influenced by the behavior of the rabbit. The prevalence and severity of atherosclerosis did not differ significantly between the experimental and control groups in the present study. Although there were no positive findings, it would probably be impetuous to state categorically that snoring has no impact on the initiation or progression of carotid atherosclerosis. Several factors need to be considered when carrying out an experiment of this kind. Those factors are also the limitations of our study. This study was subject to several limitations. First, the sample size may not have been sufficient to elucidate a cause-andeffect relationship. However, since there are no documented data on the prevalence of atherosclerosis in this condition, it is difficult to establish the ideal sample size. 
In addition, the optimal period of observation in this type of study has yet to be determined; for example, the impact of snoring on the initiation of atherosclerosis may require a shortening of the observation period. Furthermore, it was found that the aorta in rabbits consuming a 0.2% cholesterol diet exhibited foam cells and fatty streaks within 3-5 weeks. 16 This suggests that the experimental period in the present study was too long and that the cholesterol concentration in the rabbits' diet was too high to discern any difference between the groups. The implementation of a longer observation period could also have dif-ferentiated between them, since none of the present carotid specimens exhibited atherosclerosis beyond AHA stage 3. The short experimental period also prevented elucidation of the effect of the vibration other than intimal injury. Other authors have applied experimental periods of 6-60 months to identify advanced atherosclerosis. [17][18][19][20][21][22] Finally, although an attempt was made in this study to simulate real snoring, this is difficult to achieve precisely. Further refinement of the device will enhance the similarity between the model and real snoring. To the best of our knowledge, this is the first report of the effect of long-term simulation of snoring for the development of atherosclerosis. Loudspeakers have been used by Puig et al. 23 to induce cell vibration and identify airway inflammation, by Almendros et al. 24 to trigger upper-airway inflammation in the rat, and by Cho et al. 25 to vibrate the carotid arteries of rabbits. In addition, Amatoury et al. 11 and Narayan et al. 26 induced snoring by placing a sandbag over the trachea. These examples were all in vitro or short-term in vivo models. The present study applied a noninvasive, simple, and reliable technique in rabbits, enabling the development of a new, longterm rabbit snoring model. Although the stimulus applied to the rabbit may not produce a precise simulation of snoring with respect to frequency, duration, and strength, it was an approximation of snoring with respect to the vibration transmitted to the carotid artery. It would be better if the parameters of the stimulus applied to each rabbit had been measured; however, this would have required an invasive procedure, which was not feasible in a long-term observation study. Conclusion A long-term rabbit model of snoring was designed in this study. The obtained results did not reveal a causative effect of snoring on carotid atherosclerosis. Further investigation is A B C necessary to elucidate the possible causes of the negative findings of the present study. Changing the cholesterol content of the rabbits' diet and adjusting the study period may reveal the effect of snoring on the development of atherosclerosis. Conflicts of Interest The authors have no financial conflicts of interest.
2018-04-03T05:16:46.068Z
2013-10-01T00:00:00.000
{ "year": 2013, "sha1": "f12e817587cddd49cc2309d907d60fdb47bb6aab", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc3840138?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "f12e817587cddd49cc2309d907d60fdb47bb6aab", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
52958947
pes2o/s2orc
v3-fos-license
Condom use at last sex by young men in Ethiopia: the effect of descriptive and injunctive norms Background Condoms are an important prevention method in the transmission of HIV and sexually transmitted infections as well as unintended pregnancy. Individual-level factors associated with condom use include family support and connection, strong relationships with teachers and other students, discussions about sexuality with friends and peers, higher perceived economic status, and higher levels of education. Little, however, is known about the influence of social norms on condom use among young men in Ethiopia. This study examines the effect of descriptive and injunctive norms on condoms use at last sex using the theory of normative social behavior. Methods A cross-sectional survey was implemented with 15-24 year old male youth in five Ethiopian regions in 2016. The analytic sample was limited to sexually active single young men (n = 260). Descriptive statistics, bivariate and multivariate logistic regressions were conducted. An interaction term was included in the multivariate model to assess whether injunctive norms moderate the relationship between descriptive norms and condom use. Results The descriptive norm of knowing a friend who had ever used condoms significantly increased respondents’ likelihood of using condoms at last sex. The injunctive norm of being worried about what people would think if they learned that the respondent needed condoms significantly decreased their likelihood to use condoms. The injunctive norm did not moderate the relationship between descriptive norms and condom use. Young men who lived closer to a youth friendly service (YFS) site were significantly more likely to have used condoms at last sex compared to those who lived further away from a YFS site. Conclusions Social norms play an important role in decision-making to use condoms among single young men in Ethiopia. The interplay between injunctive and descriptive norms is less straightforward and likely varies by individual. Interventions need to focus on shifting community-level norms to be more accepting of sexually active, single young men’s use of condoms and need to be a part of a larger effort to delay sexual debut, decrease sexual violence, and increase gender equity in relationships. Plain English summary Condoms are an important tool to prevent the spread of HIV, sexually transmitted infections, and unwanted pregnancies. Research conducted in Ethiopia and elsewhere has shown various factors that influence young men's use of condoms. One important but understudied factor is the effect of social norms on condom use. This study assessed how social norms influenced whether young men in Ethiopia used condoms the last time they had sex by applying the theory of social normative behavior. Results showed that social norms do affect condom use: young men who knew of friends who had used condoms were more likely to have used condoms at last sex, and young men who were not worried about what people would think of them if they found out they needed condoms were more likely to have used condoms as well. Interventions should look to change community norms by increasing the acceptance of condom use among young men who are sexually active and not married, and this should be a part of a larger effort to delay the age of first sex, reduce sexual violence, and promote health masculinities and gender equity. 
Background Eastern and Southern Africa have the highest HIV prevalence rates among young people aged 15-24 in the world with 3.4% of young women and 1.6% of young men living with HIV in 2016 [1]. Though Ethiopia has a relatively low HIV prevalence rate for this region, at 0.4% of young women and 0.5% of young men (UNAIDS 2017) [1], it has the second-highest population in Africa (after Nigeria) at 105 million people in mid-2017 [2]. This large population translates to a high burden of people living with HIV across Ethiopia: in 2016, there were 87,000 young people living with HIV and 8700 new cases among young people across the country [1]. In 2015, the World Health Organization, UNAIDS, and UNFPA issued a position statement encouraging the promotion of condoms to young people, among other populations, as a critical intervention for preventing the spread of HIV, sexually transmitted infections (STIs), and unintended pregnancies [3]. Condom use is also an important indicator for assessing never-married young men's access to family planning and reproductive health (FP/RH) services. Among Ethiopian men aged 25-49, the median age at first marriage is 23.7 and the median age at first sex is 21.2, 2.5 years before marriage [4]. Furthermore, of the 13.8% of never-married men have who had sexual intercourse in the past 12 months with a person who was neither their wife nor a partner who lived with them, only slightly more than half (53.9%) reported using a condom during last sex with such a partner [4]. Several studies have assessed individual-level factors associated with condom use among young men in Ethiopia. Factors that have shown to support the use of condoms include family support and connection, strong relationships with teachers and other students, discussions about sexuality with friends and peers, higher perceived economic status, and higher levels of education [5][6][7]. Risk factors associated with non-use of condoms or other risky sexual behaviors include low involvement in religious activities, high levels of alcohol consumption, and poor knowledge of HIV/AIDS [5,8]. Thus, there is a need to focus on increasing use of condoms among young men, for the prevention of HIV, STIs, and unintended pregnancy. There is also a need for interventions to focus on the contextual factors surrounding pre-marital sex by addressing violence in early sexual encounters, and gender equity. Evidence on the role of social norms in young men's decision making to use condoms is mixed. A study of young adults in rural Ethiopia found that social norms influenced intention to use condoms but not reported use of condoms [9]. In four studies in South Africa, Tanzania, Uganda and Swaziland, where norms were measured by different statements, social norms were shown to influence male and female adolescents' use of condoms [10][11][12][13]. However, a study of adolescents in rural Tanzania that examined the role of social norms through statements including the following: "I agree with the opinion of my friends that I should use condoms when having sex" found that norms did not directly predict condom use [14]. These results are inconclusive and do not capture the full range of social norms that can influence behavior. Theory of normative social behavior The theory of normative social behavior (TNSB) [15] is a framework used to explain how social norms influence behavior. 
TNSB distinguishes between two types of social norms: descriptive norms, which are individuals' perceptions about the prevalence of a behavior [16], and injunctive norms, which are perceived social pressures to conform [17,18]. Injunctive norms influence behavior because failure to conform carries the threat of social sanctions [15]. TNSB holds that both descriptive and injunctive norms directly affect behavior, but that the relationship between descriptive norms and behavior is moderated by injunctive norms, among other factors [15]. This theory has been tested for various health behaviors, including contraceptive use [19], alcohol consumption [20], handwashing [21], and physical activity [22]. The purpose of this study is to explore the role of social norms, both descriptive and injunctive, on condoms use at last sex, and to determine whether injunctive norms moderate the relationship between descriptive norms and condom use among young men in Ethiopia. Data A cross-sectional household survey of 15-24-year-old males living in rural and peri-urban Ethiopia was conducted in Amhara, Benishangul-Gumuz, Oromia, Southern Nations, Nationalities, and Peoples' Region (SNNP), and Tigray regions from January to July 2016. The sampling strategy was designed to measure the effect of distance to a youth friendly service (YFS) health center on utilization of a range of health services. Of 247 eligible YFS sites identified in these regions by the Regional Health Bureaus, 5% were randomly selected for inclusion and the number of sites in each region was determined by probability proportional to size. A total of 14 YFS sites were selected from five regions. One non-YFS health center was randomly selected from each region for comparison. A stratified, two-stage cluster design was employed where enumeration areas (EAs) were the sampling unit for stage one and selected within 5 km of YFS site, within 5 km of a non-YFS site, and within 5-10 km of a YFS site. Households comprised the second stage and approximately 37 households with eligible respondents were randomly selected per EA, and one respondent per household was selected using a Kish grid [23]. A more detailed description of the sampling strategy is available in the study report [24]. The total number of males interviewed was 1244. The questionnaires included modules on background characteristics; household characteristics; social cohesion and autonomy; puberty, family planning, and sexual activity; and facility visits for condoms, sexually transmitted infections, HIV, and basic health services. The questionnaires were translated into Amharic, Afan Oromo, and Tigrigna. Sample population The sample was limited to 15-24 year old never-married young men who had ever had sexual intercourse. Young men who were married/in-union were excluded from the analysis because they reported low condom use at last sex (1.3%) because they were trying to get pregnant, were in a steady/committed relationship, or because of religious prohibition. Measures The dependent variable is condom use at last sex measured by responses to the question "Did you use a condom the last time you had sexual intercourse?". Though this measure does not capture consistent condom use, it is advantageous as it minimizes recall bias by only asking about one recent incidence of sex [25]. Respondents who used a condom at last sex were coded as 1 and those who did not use a condom were coded as 0. 
Descriptive norms were measured by asking respondents whether they knew of any friend who had ever used a condom. Respondents who knew of a friend who had ever used condoms were coded as 1 and those who did not know of a friend or were unsure were coded as 0. Injunctive norms were measured by the following attitudinal statement: "I would be worried about what people in my community would say about me if they found out I needed condoms." This statement was measured on a four-point Likert scale ranging from strongly agree to strongly disagree. The responses were combined to form a dichotomized variable of agree or worry coded as 0 and disagree or not worried coded as 1. Additional independent variables included in the model were: respondent's age; education; religion; wealth quintile; ownership of personal savings to assess financial autonomy; living with both parents; chewing khat, drinking alcohol, or smoking cigarettes in the past month; distance living away from a YFS facility to assess physical access and presence of age-appropriate services; age at first sex; and whether the respondent had gone for HIV testing or counseling in the last 6 months. Analysis Descriptive statistics were calculated for respondent characteristics, dependent and independent variables. Bivariate analyses of condom use at last sex were conducted using Pearson's chi-squared tests and t-tests for significance. Multivariate logistic regression models were used and were adjusted by variables that were statistically significant in the bivariate analysis or were theoretically important. Akaike's Information Criterion (AIC) was employed to compare relative quality and fit of several models, and the model with the lowest AIC was chosen. The final model was run with and without an interaction term to assess the moderating effect of injunctive norms on descriptive norms. The likelihood-ratio test was used to determine if inclusion of the interaction term improved model fit. Lastly, the Hosmer-Lemeshow test was applied to the final model to assess model fit to the data. All analyses were conducted using Stata v15. Table 1 shows characteristics of sexually active single young men aged 15-24 (n = 260). Three out of five (60%) respondents were aged 20-24 years and the median age at first sex was 17 years. Most respondents were out of school at the time of the survey (70%), were living with both parents (64%), and did not have their own savings (72%). Approximately half were Orthodox Christian (49%) and the remaining half were Muslim (41%) or Protestant (10%). Over one-third (37%) of respondents lived within 5 km of a YFS site while 38% lived within 5-10 km, and the remaining quarter (25%) lived within 5 km of a non-YFS site. Respondents were of all wealth quintiles, though the greatest proportion was in the lowest wealth quintile (27%). Results Slightly more than half of the sample had either chewed khat, drank alcohol, and/or smoked cigarettes (51%) in the past month. Most respondents had not gone for HIV testing or counseling in the last 6 months (90%), while virtually all respondents knew of HIV (99.6%) and knew that HIV can be transmitted by unprotected sex (96%). Fifty-seven percent of respondents knew of a friend who had ever used a condom (descriptive norm) and 63% agreed that they would be worried about what people in their community would say if they found out the respondent needed condoms (injunctive norm). Two-thirds (66%) of the respondents had ever used condoms and 56% of the sample used a condom at last sex. 
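Before turning to the bivariate and multivariate results, the sketch below makes the modeling workflow described in the Analysis subsection concrete. The study itself was run in Stata v15; here the same steps are shown in R on a simulated data frame `d`, so every variable value is hypothetical and only the structure of the analysis is meant to carry over.

```r
# Simulated stand-in for the analytic sample of sexually active single young men (n = 260).
set.seed(1)
n <- 260
d <- data.frame(
  condom_last_sex  = rbinom(n, 1, 0.56),   # outcome: used a condom at last sex
  descriptive_norm = rbinom(n, 1, 0.57),   # knows a friend who has ever used a condom
  injunctive_norm  = rbinom(n, 1, 0.37),   # not worried about what the community would say
  age              = sample(15:24, n, replace = TRUE),
  wealth           = factor(sample(1:5, n, replace = TRUE)),
  dist_yfs         = factor(sample(c("<5km YFS", "5-10km YFS", "<5km non-YFS"), n, replace = TRUE))
)

# Bivariate screen (Pearson chi-square), as used to select model covariates
chisq.test(table(d$descriptive_norm, d$condom_last_sex))

# Multivariate logistic regression, with and without the norms interaction term
m_main <- glm(condom_last_sex ~ descriptive_norm + injunctive_norm + age + wealth + dist_yfs,
              family = binomial, data = d)
m_int  <- update(m_main, . ~ . + descriptive_norm:injunctive_norm)

AIC(m_main, m_int)                    # compare relative model quality and fit
anova(m_main, m_int, test = "LRT")    # likelihood-ratio test for the interaction term
exp(cbind(AOR = coef(m_int), confint(m_int)))   # adjusted odds ratios with 95% CIs

# Hosmer-Lemeshow goodness-of-fit test (requires the ResourceSelection package)
# ResourceSelection::hoslem.test(d$condom_last_sex, fitted(m_int))
```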
Table 2 presents the results of the bivariate analyses for condom use at last sex. Young men who had attended secondary education or higher were significantly more likely to report condom use at last sex (65%) than those had never attended or only attended primary school (49%). Young men who lived < 5 km from a YFS site were significantly more likely to have used a condom at last sex (65%) compared to those who lived < 5 km from a non-YFS (55%) and those who lived 5-10 km away from a YFS site (47%). Young men living in higher wealth quintiles were more likely than those in lower wealth quintiles to report condom use at last sex, with a range from 67% in the highest wealth quintile to 41% in the lowest. In terms of norms, condom use at last sex was significantly associated with both descriptive and injunctive norms. Among those who knew of a friend who had ever used a condom (descriptive norm), 67% used a condom at last sex, compared to 41% of those who did not know of a friend who had used condoms. Condom use at last sex was significantly higher among young men who disagreed with the measure of injunctive norm: those who disagreed (that is, would not worried about what people would say if they found out the respondent needed condoms) were significantly more likely to have used condoms at last sex (65%) than those who agreed (50%). A t-test was conducted to determine if mean age at first sex differed for those who used condoms at last sex and those who did not. The mean age was 17.2 for both populations, and the difference was not significant (data not shown). Since nearly all respondents reported HIV awareness and knowledge that unprotected sex can transmit HIV, these two measures were excluded from bivariate and multivariate analysis. Table 3 presents the adjusted odds ratios (AORs) of the effect of descriptive and injunctive norms on condom use at last sex, adjusting for respondent characteristics, distance to a YFS facility, and use of khat, alcohol, or cigarettes. Though theoretically relevant and shown elsewhere as significant predictors to condom use at last sex, we removed five variables from the multivariate model because these measures were not significant in the bivariate analysis, and to ensure a stronger fit of the data: age at first sex, school status, living with mother and father, has own savings, and used HIV testing or counseling in the past 6 months. The likelihood-ratio test was used to test the difference between the model with the interaction term of injunctive and descriptive norms and the model without the interaction. The test was borderline significant with a p-value of 0.0952. The final model presented in Table 3 includes the interaction term. Among respondents who agreed with the injunctive norm statement (would be worried about what people in their community would say if they found out they needed condoms), those who knew of a friend who had ever used condoms were 4.7 times (95% CI: 2.26-9.95) more likely to use condoms at last sex than those who did not have a friend who had used condoms. Furthermore, among respondents who did not know of a friend who had ever used a condom, those who disagreed with the injunctive norm statement (or would not be worried) were 3.4 times (95% CI: 1.38-8.35) more likely to have used a condom at last sex compared to those who agreed with the statement. The interaction term of descriptive norm × injunctive norm was not statistically significant, and no synergistic effect of condom use at last sex was observed. 
Respondents who lived 5-10 km away from a YFS facility were significantly less likely to have used a condom at last sex compared to those who lived within 5 km of a YFS site (AOR = 0.46, 95% CI: 0.23-0.92). There was no significant difference in the odds of condom use at last sex for those who lived within 5 km from a non-YFS site compared to those who lived within 5 km from a YFS site. No significant difference was observed among those who lived 5-10 km from YFS to those who lived < 5 km from non-YFS in condom use at last sex (data not shown). Those who lived in the higher wealth quintile were significantly more likely to have used a condom at last sex than those in the lowest quintile (AOR = 3.10, 95% CI: 1.24-7.73). The Hosmer-Lemeshow goodness-of-fit test demonstrated that the model fits the data reasonably well (p = 0.601). Discussion The purpose of this study was to assess the role of social norms in condom use at last sex among sexually active single young men in Ethiopia. Using the TNSB, the study looked at how perceptions of friends' use of condoms (descriptive norm) and worry of what community members would say if they learned that the respondent needed condoms (injunctive norm) influenced condom use at last sex. This study also explored whether the injunctive norm moderated the relationship between the descriptive norm and condom use. The results show that, in the context of never-married, sexually active young men in Ethiopia, while descriptive and injunctive norms individually influence condom use at last sex, injunctive norms do not moderate the relationship between descriptive norms and condom use. A study of contraceptive use using the TNSB framework in India also found that injunctive norms do not have a strong moderating effect on the relationship between descriptive norms and behavior [19]. Condom use, like contraceptive use, is a semi-private behavior in that it generally occurs between partners with limited public knowledge of the behavior. The lack of moderation may have occurred because private behaviors are not observed by others and are thus not subjected to the same levels of public scrutiny or stigma as public behaviors. In addition, the lack of moderation on condom use suggests that descriptive and injunctive norms may function differently for individuals and their interaction is more complex. The study results are important for the Ethiopian government to reach its ambitious goals related to HIV knowledge, condom use and HIV testing and counseling by 2020 [26]. Different programmatic strategies with young men need to be tested and evaluated. For instance, the Ethiopian government is exploring the inclusion of FP/RH and HIV education in school-based programming [26]. In our sample, the vast majority (95%) had attended at least primary school, though only 40% went on to secondary school. Using primary schools as a space where age-appropriate information can be shared about FP/RH and HIV at earlier ages may provide young men with the knowledge and tools that they need to make better and safer decisions around condom use in the future. The study results also showed that respondents who are worried and concerned about how they would be perceived if others in their community learned that they needed condoms suggest that even in a place like Ethiopia, where condoms are relatively ubiquitous, fear of how one who uses condoms is perceived can weigh heavily on adopting protective behaviors. 
As has been shown with programs to delay girls' marriage [27,28], the Ethiopian government may consider holding community conversations to engage young men and their parents to begin addressing social norms that restrict adolescents from using condoms, especially in rural areas where traditional notions forbidding pre-marital sex exist. The study showed that unmarried young men who know of a friend who has used condoms are more likely to use condoms themselves suggests that communication and sharing of information within social and peer networks is important in changing behaviors. Condom use is by and large a private behavior that is not explicitly known or seen by others. Knowledge of a friend's use of condoms would likely occur in discussions and so if young men brag to their friends about sex [29,30], then perhaps the narrative around sexual discourse has changed to also include condom use. A study in Ethiopia showed that discussions about sexuality with friends had a positive association with condom use, though "discussions of sexuality" was not well defined [6]. Condom use may be also considered a sign of autonomy or increased status, where young men can obtain condoms without shame. The government may consider creating young mens' groups to engage unmarried young men in a range of health and relationship issues, including the importance of condoms use, and to provide referrals to facilities for HIV counseling and testing. The study also showed that respondents who lived closer to a YFS facility were significantly more likely to use condoms at last sex compared to those who lived further away from a YFS facility, and marginally more likely than those who lived close to a non-YFS facility. This finding suggests that proximity to a facility, especially one that has received programmatic interventions to increase its youth friendliness, is important to young men's use of condoms at last sex. Ethiopia is scaling YFS sites across the country, and this may additionally contribute to increased condom use among young men. The results of this study suggest that increasing YFS, however, is not sufficient to increase condom use -social norms also must be addressed. Interventions aimed at increasing condom use and addressing social norms should also focus on greater contextual factors in Ethiopia. Because early sexual encounters in Ethiopia are often in the context of force or coercion [6,31,32], interventions should consider the role of gender-based violence and inequitable gender norms in condom use. For young women and girls in Ethiopia, the formation of girls' groups and community conversations about child marriage were shown to be effective not only in raising the age at marriage [27,28], but also in increasing FP/RH knowledge and voluntary contraceptive use [27]. Adaptations of these interventions may include forming young men's groups and convening community conversations to engage young men, their parents, and community leaders to begin shifting social norms that inhibit condom use and address gender norms, raise the age of sexual debut, and decrease gender-based violence among young people in Ethiopia. Further research is necessary especially on positive deviants, that is the young men who did use condoms at last sex, to understand the pathways that lead to them to this decision including where they first heard about condoms, how they learned to use them or where to get them, negotiating with their partner, among many other issues. 
Examining the factors associated with condom use at last sex among young males is paramount, as there is evidence to suggest that decision-making on condom use rests predominantly with males [6]. Efforts to examine and increase condom use must therefore include and target boys and young men, as the present study has, and empower them to access and use condoms. However, social change is also critical to enable inclusion of girls and young women in the discourse around condoms, so that condom decision-making becomes more equitable between partners. Study limitation A limitation of this study is the lack of temporality. Because this is a cross-sectional survey, it is not certain that knowledge of friend's use of condoms influenced condom use at last sex or whether use of condoms at last sex influenced the respondent to discuss condoms with their friends. Either way, communication around condom use with friends and peers appears to be important. Conclusions This study examined condom use at last sex among single young men in rural Ethiopia through the theory of normative social behavior to assess the relationship of descriptive and injunctive norms on behavior. More than half of single young men used condoms at last sex. Those who knew of a friend who had used condoms (descriptive norm), and who were not worried about what members of their community would say if they found out they needed condoms (injunctive norms) were more likely to use condoms at last sex, though there was no moderating effect of injunctive norms. Young men who lived close to a YFS site were also more likely to use condoms at last sex. The results of this study suggest that social change is needed to improve access to and use of condoms at last sex in Ethiopia.
2018-10-17T11:57:28.473Z
2018-10-03T00:00:00.000
{ "year": 2018, "sha1": "b6ededd9d7898cfd75b4debd194e7c46a6849810", "oa_license": "CCBY", "oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/s12978-018-0607-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b6ededd9d7898cfd75b4debd194e7c46a6849810", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
254020238
pes2o/s2orc
v3-fos-license
ADAR1-dependent editing regulates human β cell transcriptome diversity during inflammation Introduction Enterovirus infection has long been suspected as a possible trigger for type 1 diabetes. Upon infection, viral double-stranded RNA (dsRNA) is recognized by membrane and cytosolic sensors that orchestrate type I interferon signaling and the recruitment of innate immune cells to the pancreatic islets. In this context, adenosine deaminase acting on RNA 1 (ADAR1) editing plays an important role in dampening the immune response by inducing adenosine mispairing, destabilizing the RNA duplexes and thus preventing excessive immune activation. Methods Using high-throughput RNA sequencing data from human islets and EndoC-βH1 cells exposed to IFNα or IFNγ/IL1β, we evaluated the role of ADAR1 in human pancreatic β cells and determined the impact of the type 1 diabetes pathophysiological environment on ADAR1-dependent RNA editing. Results We show that both IFNα and IFNγ/IL1β stimulation promote ADAR1 expression and increase the A-to-I RNA editing of Alu-Containing mRNAs in EndoC-βH1 cells as well as in primary human islets. Discussion We demonstrate that ADAR1 overexpression inhibits type I interferon response signaling, while ADAR1 silencing potentiates IFNα effects. In addition, ADAR1 overexpression triggers the generation of alternatively spliced mRNAs, highlighting a novel role for ADAR1 as a regulator of the β cell transcriptome under inflammatory conditions. Introduction The type I interferon (IFN) response has recently been identified as a common signature for the development of autoimmunity (1). Induction of type I IFN (IFNa/b) following viral infection or endogenous release of mitochondrial genetic material is a highly regulated process in which pattern recognition receptors (PRR), such as MDA5, RIG-I and TLR3, act in concert to control inflammasome activation and the production of IFNa and IFNb (2). In addition to this welldescribed sensing machinery, the adenosine-to-inosine conversion (A-to-I) catalyzed by the adenosine deaminase acting on RNA 1 (ADAR1) plays an important role in fine-tuning the innate immune response by destabilizing double-stranded RNA (dsRNA) duplexes and therefore reducing PRR substrate to limit further and potentially excessive inflammation (3). ADAR1 exists as two isoforms that contain a central dsRNA binding domain and an enzymatic deaminase domain located in the C-terminal region. Both isoforms differ in localization (p110 remains mainly nuclear while p150 is expressed in the nucleus and cytosol) and by the presence of a nuclear export signal located in the N-terminus (4). ADAR1 has an essential role in modifying self-dsRNA formed by repetitive inverted elements, such as Alu short interspersed nuclear elements (SINE) elements, which inhibits the immune response triggered by the recognition of self-dsRNA by PRR. Dysregulation of ADAR1 has been implicated in several interferonopathies, autoimmune diseases and tumor progression. Mutations within the RNA binding domain of ADAR1 alter both substrate affinity and specificity which affect RNA deamination and trigger the constitutive type I IFN response in Aicardi-Goutières syndrome (5). 
In contrast, high ADAR1 expression level has also been correlated with high tumor T-cell infiltrating lymphocytes (TIL) in breast cancer, and an increased amino acid substitution in the recognized antigens (a consequence of cytosine-to-uracil or adenosine-to -inosine editing at the RNA level) (6), demonstrating for the first time a role for RNA editing enzymes in the generation of tumorspecific neoantigens. Similarly, such processes have been proposed as a potential source of neoantigens involved in the development of autoimmune systemic lupus erythematosus (7). In type 1 diabetes (T1D), increasing evidence indicate that local inflammation or other forms of stress combined with genetic predisposition leads to the generation and accumulation of aberrant or modified proteins to which central tolerance is lacking (8,9). Examples of enzymatic deamidation or citrullination of self-antigens (e.g., proinsulin, C-peptide, GAD65, IA-2, GRP78, IAPP), as a consequence of activation of peptidyl arginine deiminase (PAD) or tissue transglutaminase (tTG) detected in pancreatic b cells in response to stress or primary islets from T1D patients, illustrate how the islet microenvironment can drive autoimmunity (10,11). RNA editing is a post-translational modification mediated by adenosine and cytosine deaminases which catalyzes the edition of a nucleotide into another in the context of an "editosome" (12). In addition to an amino acid change, RNA editing may enhance transcriptome complexity/diversity by directly changing splicing acceptor site motifs or altering splicing enhancer sequences with possible consequences for b cell immunogenicity (13,14). In T1D, circulating T cells directed against alternative splice variants of GAD65, secretogranin V, CCNI-008, IAPP and Phogrin have been recently detected in patient blood samples and in the pancreatic islets (15). To investigate the effect of the T1D pathophysiological inflammatory milieu on ADAR1 and the b cell transcriptome, we have analyzed high-throughput RNA sequencing data from human islets and EndoC-bH1 cells exposed to IFNa or IFNg/ IL1b, cytokines that contribute to the pathogenesis of T1D (16,17). We demonstrate herein that inflammatory-mediated changes characteristic of early and late T1D development can trigger an increased A-to-I Alu editing rate. In addition, we demonstrate that ADAR1 not only dampens the innate immune response in b cells but also contributes to the transcriptome complexity with possible consequences for b cell function. Alu editing index RNA Editing Index (21) version 1.0 was used to assess the overall editing in Alu elements, respectively. This measure calculates the average editing level across all adenosines in repetitive elements weighted by their expression, thereby quantifying the ratio of A-to-G mismatches over the total number of nucleotides aligned to repeats and comprising a global, robust measure of A-to-I RNA editing. Quantification of expression Abundance quantification was done using the quasimapping-based mode Salmon (21) (version 0.11.2) for human genome assembly hg38 with GENCODE version 24 and mouse genome assembly mm10 with GENCODE version 20. Gene expression analysis was later completed by using the tximport R package (version 1.12.3) to transfer Salmon's isoform-level abundances to gene-level abundances (22,23). RNA-sequencing and differential expression analysis Total RNA was purified from EndoC-bH1 and EndoC-bH1/ ADAR1 cells using the Nucleospin miRNA Kit (Bioke) according to the manufacturer's guidelines. 
RNA quality was determined by Experion RNA StdSens 1K Analysis Kit (Bio-Rad, product number 7007103) on a Experion Automated Electrophoresis System (Bio-Rad) following the manufacturer's protocol. Strand-specific bulk RNA sequencing was performed on a NovaSeq 6000 (2x150 paired-end with a depth of >150 million reads) by Eurofins Genomics Europe Sequencing GmbH (Konstanz, Germany). Reads were quality checked with fastp (24) to exclude reads of poor quality and remove remaining adapters. We used Salmon v1.3 (25) with additional parameters "-seqBias -gcBias -validateMappings" to quantify the gene and transcript expression. GENCODE was used as the reference genome and was indexed with default parameters. Differential Gene Expression (DGE) was performed with DESEq2 v.1.30.1 (26) with paired experiments included in the general linear model (i.e.,~pairing + overexpression). For each gene, we obtained a log 2 fold-change (log 2 FC), associated to an adjusted P value, which highlights the difference in gene expression between ADAR1-overexpressing cells and control cells. Gene Set Enrichment Analysis (GSEA) was performed using the above-generated DGE data with fGSEA (27). We pre-ranked genes according to the value of the Waldtest statistics provided by the DESeq2 output. Up-(enrichment) and down-regulated (depletion) pathways were considered significant when adj. P value < 0.05, regardless of their normalized enrichment score (Supplementary Data S1). Gene-sets affected by alternative splicing were evaluated with clusterProfiler (28) with Gene Ontology as the reference (Supplementary Data S2). RT-PCR Total RNA was extracted from EndoC-bH1 using NucleoSpin Kit (#740609.50S, Bioke). Approximately 0.5ug of RNA was used for reverse transcription. Oligo (dT) primers were used in the reaction. For siRNA experiments, RNA was isolated using Dynabeads mRNA DIRECT purification kit (Invitrogen, Carlsbad, California, USA) and reverse transcribed using Reverse Transcriptase Core kit (Eurogentec, Liège, Belgium). Real-time PCR amplification was done with SsoAdvanced Universal SYBR Green Supermix (BIO-RAD, Hercules, California, USA) and amplicons were quantified using a standard curve. Expression of the transcript of interest was detected using the following primers: ADAR1-p150 Western blot analyses Cells were lysed in RIPA buffer supplemented with protease inhibitor cocktail (Roche). Protein quantification was performed with the BCA protein assay kit (Thermo Fisher Scientific). 25 mg protein extracts were loaded on 10% acrylamide/bis acrylamide SDS page gel. After electrophoresis, protein transfer was performed onto a nitrocellulose membrane (GE Healthcare). Membranes were stained with primary antibodies overnight at 4°C and secondary HRP conjugated antibodies (Santa Cruz Biotechnology) for 1 hour RT. Antibodies used were: anti-ADAR1 [#ab168809, Abcam (Figures 1, 2) and #14175, Cell Signaling Technology (Figure 3)], anti-STAT1 and anti-STAT2 (#14995 and #72604, Cell Signaling Technology) and as a loading control anti-actin (#0869100, MP Biomedicals) or anti-tubulin. Lentiviruses production and transduction The vectors were produced as described previously (29). Viral supernatants (MOI=2) were added to fresh medium supplemented with 8 µg/mL Polybrene (Sigma-Aldrich), and the cells were incubated overnight. The next day, the medium was replaced with fresh medium. Transduction efficiency was analyzed 3-6 days after transduction. 
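As a compact illustration of the paired DESeq2 design and the Wald-statistic ranking for fGSEA described in the differential expression and GSEA methods above, the R sketch below assumes a tximport object `txi`, a sample table `coldata` containing `pairing` and `overexpression` columns (with levels labelled here as "control" and "ADAR1"), and a named list of gene sets `pathways_list`; these object and level names are placeholders rather than the study's actual inputs.

```r
library(DESeq2)
library(fgsea)

# Paired design as described above: ~ pairing + overexpression
dds <- DESeqDataSetFromTximport(txi, colData = coldata,
                                design = ~ pairing + overexpression)
dds <- DESeq(dds)
res <- results(dds, contrast = c("overexpression", "ADAR1", "control"))

# Genes passing the |log2FC| > 0.58 and adjusted P < 0.05 thresholds
deg <- subset(as.data.frame(res), abs(log2FoldChange) > 0.58 & padj < 0.05)

# Pre-rank genes by the Wald-test statistic and run gene set enrichment analysis
ranks <- sort(setNames(res$stat, rownames(res)), decreasing = TRUE)
gsea  <- fgsea(pathways = pathways_list, stats = ranks)
sig_pathways <- gsea[!is.na(gsea$padj) & gsea$padj < 0.05, ]
```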
Assessment of cell apoptosis Cells were stained with the DNA-binding dyes propidium iodide (PI) and Hoechst 33342 (10 µg/ml, Sigma-Aldrich) to count apoptotic cells under a fluorescent microscope. In each experimental condition, a minimum of 500 cells were counted by two independent observers (one of them unaware of sample identity). Data and materials availability Bulk RNA-seq data that were generated by this study is available on the Gene Expression Omnibus (GEO) database under the accession number GSE214851. Other datasets mentioned are available on GEO using accession numbers GSE133218, GSE137136, GSE148058 and GSE108413. Cytokines trigger increased expression of ADAR1 in b cells IFNa and IFNg play important roles in T1D pathogenesis, from initiation of autoimmunity (IFNa) to the more advanced b cell destruction process (IFNɣ) (30,31). To identify key pathways involved during T1D development, we used RNAseq datasets from human islets and EndoC-bH1 cells exposed to IFNa (24h) or IFNg/IL1b (48h), and searched for common differentially regulated genes (32,33). This resulted in the identification of 623 common genes in EndoC-bH1 cells and 577 common genes in primary human islets. As expected, gene ontology pathway analysis identified IFN signaling and genes involved in HLA class I antigen peptide processing and presentation, highlighting the importance of the islet microenvironment in triggering cytotoxic T lymphocyte (CTL)-mediated b cell destruction ( Figure 1A). In addition to immune-related genes, we observed that both IFNa and IFNg/ IL1b stimulation led to a significant increase in expression of ADAR1 in EndoC-bH1 cells and primary human islets suggesting an increased RNA deamination rate in b cells during inflammation ( Figure 1B). We confirmed the effect of the different cytokines on ADAR1 mRNA and protein expression in EndoC-bH1 cells using STAT1 expression as a control for treatment effectiveness (Figures 1C, D). Of note, the expression of the isoform p150 of ADAR1 protein was undetectable in the absence of cytokine stimulation. Enhanced A-to-I editing in b cells following IFNa and proinflammatory cytokine stimulation To determine the consequences of the observed high ADAR1 expression following proinflammatory cytokine stimulation, and to decipher the RNA editome, we screened for A-to-G RNA mismatches (i.e., inosines present in the RNA are reverse transcribed into guanosines in cDNA and A-to-I editing is detected as A-to-G mismatches) by comparing reads in IFNa and IFNg/IL1b-treated samples and non-treated samples against genomic reference (34,35). Using the RNA editing index to measure the global rate of editing in Alu regions, we observed that IFNa and IFNg/IL1b specifically triggered A-to-I RNA editing in b cells and primary human islet samples (Figures 2A, B). ADAR overexpression inhibits the antiviral response while ADAR silencing exacerbates the effects of IFNa in b cells To model the effect of ADAR1-p150 independently of the pleiotropic effects of cytokines, we generated a stable ADAR1overexpressing human b cell line, by lentivirus transduction. In these cells, we detected an over 20-fold increase in ADAR1-p150 gene expression, and confirmed the corresponding increase in protein level by western blot analysis ( Figure 3A). 
While ADAR1 overexpression had no major impact on endogenous ADAR1, PDX1 and MAFA gene expression, we observed a slight but significant increase in NKX6.1 and a 50% decrease in insulin gene expression, suggesting that ADAR1 may interfere with b cell function (Supplementary Figure 1A). Differential gene expression analysis performed on high-depth RNA-seq revealed profound transcriptome changes following ADAR1 overexpression. In total, 2,851 genes were differentially expressed (1,477 up-regulated, 1,374 down-regulated; |log2FC| > 0.58; adj. P value < 0.05) (Figure 3B and Supplementary Data S1). Among them, we observed regulation of genes involved in immune system processes and defense to bacterium, confirming a role for ADAR1 in immune response (Figure 3C). To validate this observation, we triggered the type I IFN response in b cells by mimicking viral infection via poly-I:C transfection (36). Poly-I:C transfection led to an increase in IFNb expression and downstream IFN-stimulated genes (ISG) such as MDA5, IFIT1, CXCL10 and STAT1, but ADAR1 overexpression completely abolished this antiviral response (Figure 3D). To confirm these data using a reverse approach, we used an siRNA targeting ADAR1 leading to a 40-70% reduction in gene and protein expression (Figures 4A-D). Of note, ADAR1 silencing had no effect on b cell identity genes and insulin expression (Supplementary Figure 1B). As expected, while IFNa treatment induced the expression of several ISGs [e.g., STAT1 (Figure 4E), MDA5 (Figure 4F) and MX1 (Figure 4G)], ADAR1 silencing potentiated the effect of IFNa on the expression of these antiviral genes and sensitized EndoC-bH1 cells to IFNa-mediated cell death (Figure 4H). Altogether, these data unveil a role for ADAR1 in dampening the type I IFN response to prevent an excessive inflammatory response potentially leading to b cell death.

FIGURE 1 | The proinflammatory cytokines IFNa and IFNɣ+IL-1b induce a partially shared gene signature in EndoC-bH1 and human islets. (A) Venn diagrams of up-regulated genes (log2FC > 0.58 and adj. P value < 0.05) in EndoC-bH1 and human islets after exposure to IFNɣ+IL-1b (top) and IFNa (bottom); common genes were tested for enrichment using REACTOME as the reference, and significantly enriched pathways are represented as a dot plot (x-axis, gene ratio; y-axis, enriched pathways). (B) Heatmap of the log2 fold changes of the 128 genes up-regulated in all four datasets (|log2FC| > 0.58 and adj. P value < 0.05). (C) ADAR1 p150 and STAT1 gene expression in EndoC-bH1 assessed by qPCR after cytokine treatment (n=3 independent experiments). (D) ADAR1 (detected using Anti-ADAR1 #ab168809, Abcam), STAT1 and STAT2 protein expression determined by western blot, with b-actin as loading control.

FIGURE 2 | Cytokine treatment leads to A-to-I mutation in EndoC-bH1 and primary human islets. (A) Global A-to-I RNA editing index across Alu elements (short interspersed nuclear elements) in RNA-seq data, demonstrating a higher A-to-I editing signal in IFNa- or IFNɣ/IL1b-stimulated samples after 8 hours and 48 hours, respectively (Student's paired two-tailed t-test; *P<0.05, **P<0.005, ****P<0.0001). (B) Noise levels (non-A-to-G mismatches) are notably lower than the biological signal (A-to-G mismatches) of the global editing index.

FIGURE 3 | ADAR1 overexpression inhibits the type I IFN response. (A) ADAR1 p150 expression in EndoC-bH1 and EndoC-bH1 (ADAR1) cells determined by qPCR (upper panel) and western blot analysis using Anti-ADAR1 #ab168809, Abcam (lower panel). (B) Volcano plot of differentially expressed genes after ADAR1 overexpression; dashed lines show log2FC ≤ -2 or ≥ 2 and adj. P value < 0.05 (plot generated with EnhancedVolcano). (C) Pathway analysis of downregulated (log2FC ≤ -2, adj. P value < 0.05) and upregulated (log2FC ≥ 2, adj. P value < 0.05) genes; plots generated with Revigo. (D) Gene expression of ADAR1 p150, MDA5, IFNb, IFIT1, CXCL10 and STAT1 after ADAR1 overexpression in the presence or absence of polyI:C (n=3 independent experiments; means ± SEM; one-way ANOVA, or linear mixed model in case of missing values, followed by Bonferroni post-hoc test; **P<0.01 and ****P<0.0001).

ADAR1 overexpression triggers alternative splicing events in b cells
Besides a role in immunity, gene ontology pathway analysis presented in Figure 2C revealed a possible role of ADAR1 in regulating alternative splicing. Considering the trend for an increased A-to-I Alu editing rate in ADAR1 overexpressing cells (Figure 5A), we studied the impact of adenosine deamination on the b cell coding transcriptome and searched for the presence of ADAR1-induced alternative splice variants. Of importance, in these cells, we observed an increased ADAR3 expression following ADAR1 transduction (Figure 5A). After aligning the RNA sequencing reads to the reference genome, we identified a total of 323 alternatively spliced events (both known and de novo) modified by ADAR1 overexpression (Figures 5B, C). These events derived mainly from spliced exons (SE, 70%), but also mutually exclusive exons (ME, 10%) and alternative 3' spliced sites (A3SS, 10%). Retained introns (RI, 7%) and alternative 5' spliced sites (A5SS, 3%) were less abundant. Genes affected by alternative splicing were subjected to pathway enrichment analysis using the REACTOME platform and were found to be mainly related to b cell function (e.g., pre-synapse, regulation of neurotransmitter levels), vesicle location (e.g., synaptic vesicle, vesicle-mediated transport to the plasma membrane) and protein transport (Figure 5D).

Discussion
Our report positions ADAR1 as both an important player in dampening innate immunity in b cells and as a key editor of the b cell transcriptome. While exposure to inflammation, characteristic of the early or later stages of T1D development, is usually associated with deleterious effects, the data presented here recall earlier work on the enhanced expression of programmed death-ligand 1 (PD-L1) detected in b cells from long-standing T1D individuals (37), suggesting that ADAR1, like PD-L1, is involved in the positive adaptive mechanisms that protect b cells from further destruction. During T1D, the induction of the unfolded protein response, following exposure to virus or inflammatory cytokines, participates in this adaptive phase to restore cellular homeostasis or to initiate apoptosis in the case of unresolved stress (38). At this decision point, the A-to-I editing induced by ADAR1 has been implicated in PERK activation and apoptosis induction via the EIF2a/CHOP pathway (39). Other reports describe additional RNA-editing-independent effects of ADAR1 via direct interaction with RIG-I, PKR or NF90 that could regulate cellular stress and the type I interferon response (3,40,41).
Supporting the concept that b cells are not passive victims in their destruction (8), our results show that the increased RNA editing rate also correlates with the emergence of novel transcript variants, demonstrating that ADAR1 activity is not only limited to Alu sequences but also affects coding regions with possible consequences for gene regulation and cell function (42). Surprisingly, ADAR1 p150 overexpression in EndoC-bH1 cells led to a concomitant increase in ADAR3 expression, which has been reported to act as a negative regulator of ADAR1-mediated editing (43). The competition between the different ADAR proteins may explain the relatively low editing rate measured in our samples and add to the complexity of gene editing regulation. Despite the higher editing rate observed in inflammatory conditions or following ADAR1 overexpression, it is unlikely that all of the detected alternative splicing results solely from a direct A-to-I RNA editing of the target genes. As described here, ADAR1 overexpression led to extensive regulation of the RNA processing machinery or of spliceosome formation, suggesting that ADAR1 may affect the transcriptome by modulating the expression of trans-regulatory elements. Among them, the splicing regulators ELAVL4 and NOVA2, previously reported as important splicing-regulatory RNA binding proteins involved in modulating b cell survival (44), were upregulated in response to ADAR1 transduction. The increased cell death observed after ADAR1-specific inhibition (Figure 4) is in line with this observation. Another report describes that the loss of RNA editing activity may lead to non-apoptotic cell death induction directly mediated by MDA5 (45), indicating that ADAR1 inhibition may lead to different forms of cell death. Of note, ADAR1 expression in our dataset led to decreased expression of the pseudokinase mixed lineage kinase domain-like protein (MLKL) that serves as an effector in necroptosis. The present results illustrate a central role of ADAR1 in b cells during inflammation and shed light on a novel regulatory mechanism potentially used by b cells to cope with environmental changes after viral infection but also during the different phases of inflammation. Although ADAR1-dependent effects are mostly protective, the functional and immunological consequences of mutations induced by RNA editing, including the potential generation of neoantigens, remain to be investigated.

Data availability statement
The data presented in the study are deposited in the Gene Expression Omnibus (GEO) database under the accession number GSE214851. Other datasets mentioned are available on GEO using accession numbers GSE133218, GSE137136, GSE148058 and GSE108413.

SUPPLEMENTARY FIGURE 1 | INS, PDX1, MAFA and NKX6.1 gene expression levels upon ADAR1-specific inhibition by siRNA. Data are expressed as means of independent experiments (n=3) ± SD. Differences between groups were evaluated using an unpaired t-test. **p<0.01 and ****p<0.0001.

SUPPLEMENTARY DATA SHEET 1 Differential gene expression detected in EndoC-bH1 following ADAR1 overexpression.

SUPPLEMENTARY DATA SHEET 2 Gene-sets affected by alternative splicing in ADAR1 overexpressing cells.
2022-11-28T14:12:42.937Z
2022-11-28T00:00:00.000
{ "year": 2022, "sha1": "130d285e8994ed068a6bd6d3083d92724eb9ebcf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "130d285e8994ed068a6bd6d3083d92724eb9ebcf", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233947350
pes2o/s2orc
v3-fos-license
Economic Impact of COVID-19 on the Vulnerable Population of Bangladesh This study aims to find out the Economic condition of the vulnerable population of Bangladesh in the COVID-19 situation. This study uses the primary data of 203 respondents who were collected from all around Bangladesh using convenience and purposive sampling during the first wave of the COVID-19 pandemic. In this study, a comparison of expected income in the COVID-19 situation is made with the pre-COVID-19 situation using Welch two-sample t-test, and the possible impact on Expenditure and Savings is discovered using OLS estimation. Descriptive analysis is also included considering the key economic variable of the respondents. This study finds a significant drop in income due to the COVID-19 pandemic and infers that it will cause both drop-in expenditure and monthly savings. Moreover, this study gives an overall idea of the situation of vulnerable people in the COVID-19 situation. This paper intends to help the stakeholders and policymakers to understand the situation of the vulnerable population of Bangladesh during the COVID-19 pandemic. In this crisis, the global economy could lose $3.7 billion worth of output. COVID-19 can push half a billion people into poverty, the informal sectors of the developing country will face a major hit (K. . The president of the World Bank David Malpass said that the poorest country will be hit the most in this economic crisis, especially those countries which have huge debts. In the developing countries of Asia, COVID-19 is causing a decline in domestic demand, tourism business as well it is disrupting trade and supply linkage (Abiad et al., 2020). Bangladesh is the largest economy among the least developed countries and by 2024 expected to leave the LDCs (Nations 2018). This country has a vast population of 168.1 million with a percapital income of 1,905.7 US$ (UNDP Report about Bangladesh 2018) as of 2019. According to the Bangladesh Bureau of Statistics (BBS) in 2018 the poverty rate in Bangladesh is 21.8% and people living below the extreme poverty line are 11.3% (Daily Star Desk Report 2019). According to the BBS report in the year 2017-18, the economically active population and the employed population were 62.1 million and 59.5 million respectively (Statistics 2018). According to an article by ILO (International Labor Organization) about 87% of the total labor force was employed in the informal sector in 2010 and these informal works included -wage laborers, self-employed persons, and unpaid family labors. Due to the effect of the CoronaVirus, almost 20 million workers in the informal sector lost their jobs in Bangladesh. A distinguished fellow at CPD (Centre for Policy Dialogue) Dr. Mustafizur Rahman said that both formal and informal sectors out of 60.8 million people working, 14 million people get their monthly salary from employers, 10 million workers are day laborers and the remaining 27 million are self-employed with mainly small businesses; out of them in a current pandemic situation, the day laborers and self-employed are temporarily jobless with zero earning (Staff Reporter_Financial_Express 2020). Bangladesh is the second-largest exporter of garments, this sector is injured by the Corona Virus crisis due to the order cancellation, and worldwide decreasing demand of the consumers (Zobaer Ahmed 2020). Coronavirus has had a negative impact on food supply by creating logistical challenges. Covid-19 has also affected the food sector by creating instability in market demand. 
We have noticed a decline in food items including vegetables, fruits, eggs (Independent Online 2020). Milk producers were seen selling their milk at a very low price which caused a great loss to them (Atik, 2020). To mitigate these negative effects of Covid-19, the government of Bangladesh, like the governments of other countries, announced a large stimulus package. However, the management and distribution of this package have raised several questions about the transparency of the system and the accountability of all involved. The various sectors most affected by the pandemic did not receive the packages properly. According to the research organization SANEM, where 30% of medium and 46% of large enterprises receive stimulus packages, only 9% of micro-small businesses among the survey respondents have received any kind of packages or incentives from the government. (Raihan, Uddin, and Ahmed 2021) According to the World Bank, the extreme poverty threshold is $1.9 per day, and for the lowermiddle-income country, the poverty threshold is $3.2 per day (World Bank Report 2018). According to the World Bank, In Bangladesh, only 15% of the population earns more than $5.90 a day and most of the employment is generated in the informal sector out of which a significant portion depends on daily wage to eat (Saleh, 2020). In the context of the economic turmoil due to COVID-19, this paper will consider the vulnerable people who have a personal monthly income below $177, and maximum income was considered $176.95 (15,000 Taka); as most of them will lose job for the time being (Hossain, 2021). In this paper, an income comparison of the vulnerable people is made, and the relationship between expenditure and savings is expressed using the Least Squares Regression. Moreover, it will give the descriptive statistics analysis of key economic indicators of the respondents. Research Objective This research will try to understand the current economic condition of the vulnerable population of Bangladesh. It will try to compare the monthly Income in the COVID-19 scenario with the pre-COVID-19 situation and will try to establish a relation of respondents' monthly savings and expenditure with respondent monthly income. Research Question 1) What is the condition of the vulnerable population of Bangladesh? 2) How COVID-19 will impact income, expenditure, and savings? This paper is composed of 6 sections, including section 1 as Introductions, & the remaining are Literature Review, Methodology, Results and Discussion, Conclusion, and References. Literature Review The response during the COVID-19 outbreak in China indicates our experience from the 2003 SARS (Severe Acute Respiratory Syndrome) epidemic outbreak played a significant role (McCloskey & Heymann, 2020). SARS (Severe Acute Respiratory Syndrome), a type of Coronavirus also caused massive havoc to the economy in 2003 with a global loss of 59 billion USD. The collapse of the US, the European, and Asian markets; also weakened the trade, and travel all around the world (Baric, 2008). The COVID-19 has impeded the goal of the United Nations of eradicating poverty by 2030, in the most extreme study by Sumner, Hoy, and Ortiz-Juarez (2020) shows that there might be a 20% drop in consumption/income, increasing the number of people living below the poverty line by 420-580 million. UNU-WIDER. (2020) predicted the fall of half a billion population of the developing countries into the poverty line due to this pandemic. Altig et al. 
(2020) found, using economic uncertainty indicators, that in the USA and UK COVID-19 is causing great economic uncertainty and economic fallout. Ludvigson, Ma, and Ng (2020) used a VAR model for the US to estimate the loss due to COVID-19 and found that, for 12 months from the start of the epidemic in the US (February/March 2020), the multi-period shock would cause about a 12%/month loss of industrial production and a service-sector job loss of about 55%. Empirical evidence suggests that an epidemic outbreak reduces consumer expenditure, but not across all categories, and that by using e-commerce as an alternative, manufacturers and retailers can lessen the negative impact of an epidemic outbreak (Jung, Park, Hong, & Hyun, 2016). Ruiz Estrada and Koutronas (2020) suggested, based on the 2019-nCOVGEI-Simulator, that for the corresponding region both the likelihood and the magnitude of epidemics are related to the economic dynamics. An African perspective on the COVID-19 impact on the economy of Nigeria was given by Kanu (2020); this paper says that economic disruption in Nigeria appears as loss of jobs, disruption of financial markets and the corporate sector, loss of income, and gradual recession. In an unemployment crisis, a rise in government spending could potentially reduce unemployment and could increase output both in that crisis and in the future (Rendahl, 2016). In the 2007-08 recession, workers in small firms with more external financial dependence were comparatively more vulnerable to job loss than workers in big firms (Duygan-Bump, Levkov, & Montoriol-Garriga, 2015). Nigeria, a developing country, used public money to stimulate the economy in this pandemic crisis to limit business failure, but some responses were inefficient (P. Ozili, 2020). In a country like Bangladesh, which has made great strides in all socio-economic indicators in the recent past, all these achievements may be lost under the negative impacts of COVID-19. In the last few years, Bangladesh has been able to achieve strong economic growth through political stability and policy support. As a result, the country was in a better position than ever to reduce poverty. However, the scenario has changed drastically due to the recent abnormal situation. According to the research organization SANEM, due to COVID-19 the number of poor and vulnerable people may double (from 20% to 40%) in the coming times; they consider the tendency of poor and vulnerable people to save less as a factor behind this opinion (Hossen et al. 2020). Moreover, a study by PPRC notes that in Bangladesh most financially weak people in urban areas live in slums. The pandemic has reduced the income of 75% of urban slum dwellers and 62% of the rural poor, and many of them have also become economically inactive: 70% of urban slum dwellers and 60% of the rural poor have become economically inactive due to the COVID-19 lockdown (PPRC 2020). To reduce the socioeconomic impact of COVID-19, a proactive management approach should be followed, health policies should consider the social determinants of health, education and health literacy among the population need to be increased, national and international shifts in investment need to be encouraged, a strong private-public partnership should be built, and a unified world council should be established (Evans, 2020).
This situation is prompting many countries to develop their public health sector and to repair the economy with financial stimulus (P. K. Ozili & Arun, 2020). Sands, El Turabi, Saynisch, and Dzau (2016) also suggest that infectious disease as a global health issue should not be neglected, and that we need to strengthen our public health capabilities to fight those threats. From the African perspective, Ataguba (2020) suggested that governments increase public health spending to tackle the virus. After reviewing the literature, it was found that, except for some reports from research organizations, very little literature is available that comprehensively tries to show the economic turmoil caused by COVID-19 among the vulnerable population. Methodology The study uses primary data collected during the COVID-19 lockdown in Bangladesh between 4 April 2020 and 6 June 2020. Both purposive and convenience sampling were used in the collection of data; non-random sampling was used because it was very difficult to collect data in the lockdown scenario. We collected data from a total of 203 respondents from all around the country with a monthly income below 15,000 Taka, whom we generalized as the vulnerable population in Bangladesh. We used Ordinary Least Squares (OLS) linear regression, bar graphs, descriptive statistics, and the Welch two-sample t-test in the analysis of our sample. For statistical analysis, we used SPSS version 25 and R-Studio version 1.3.95. Analysis First, we describe the demographics of our total sample, using bar graphs and pie charts for that purpose. We also made a descriptive statistical analysis of the respondents' economic variables: monthly income, monthly expenditure, monthly savings, people working in each family, total savings, days that can be covered with those savings, and days living on relief. This descriptive analysis also included some categorical variables: alternative earnings, earnings in the COVID-19 situation, having savings, having received relief, health insurance, and mobile banking account. We compared respondents' income in the pre-COVID-19 situation with income in the COVID-19 situation using Welch's two-sample t-test. We used two OLS models to relate respondents' monthly income to monthly expenditure and monthly savings, respectively. Notably, 75% of our respondents are male, and the remaining 23.19% and 0.99% are female and other genders respectively. The majority of the respondents, about 74.38%, are married, and the remaining 25.62% are single. We tried to collect data from all around Bangladesh; the largest amount of data was collected from Mymensingh, 41 (20.2%), followed by Rajshahi, 21 (10.3%), and Sylhet, 19 (9.4%), and other districts are also notable, as shown in Figure 01. Our total sample is composed of respondents with low-paid and vulnerable professions in the context of Bangladesh, as shown in Table 1. As can be seen in Tables 03 & 04, for the 203 examined respondents the means of monthly income, monthly expenditure, and monthly savings were 5,748.28 Taka, 6,976.85 Taka, and 879.31 Taka, respectively. About 61.60% of the 203 respondents have earnings in the COVID-19 situation, and about 77.3% of the respondents have some sort of savings. The COVID-19 monthly income and total savings were on average 1,451.72 Taka and 5,131.43 Taka, respectively. Descriptive Statistics Moreover, 156 respondents reported that on average they can live about 36 days on their savings.
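Before continuing with the descriptive results, a minimal sketch of the two inferential procedures named in the Methodology above (Welch's two-sample t-test and the two OLS models) is given below; the file and column names are hypothetical, and the study itself ran these analyses in SPSS 25 and R-Studio rather than Python.

```python
# Sketch of the Welch t-test and the two OLS models described in the Methodology.
# Column names and the CSV file are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("vulnerable_survey.csv")  # one row per respondent, amounts in Taka

# Welch's two-sample t-test: pre-COVID income vs. expected income during COVID
# (equal_var=False gives the unequal-variance Welch form).
t_stat, p_value = stats.ttest_ind(
    df["income_pre_covid"], df["income_covid"], equal_var=False
)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")

# OLS Model 1: monthly expenditure regressed on pre-COVID monthly income
model_exp = smf.ols("expenditure ~ income_pre_covid", data=df).fit()
# OLS Model 2: monthly savings regressed on pre-COVID monthly income
model_sav = smf.ols("savings ~ income_pre_covid", data=df).fit()
print(model_exp.params, model_exp.rsquared)  # slope = change in expenditure per Taka of income
print(model_sav.params, model_sav.rsquared)  # slope = change in savings per Taka of income
```

The slope of each fitted model is what the Results interpret as the per-Taka change in expenditure or savings when income rises or falls.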
Furthermore, out of the 203 people, a majority of 70.4% of the respondents said they received food relief, with which 145 respondents said they can run their households for about 14 days on average. Additionally, out of our 203 respondents, 62% said they have a mobile banking account, and 93.23% of 192 respondents said they do not have health insurance. We conducted an unequal-variance, independent two-sample Welch t-test between pre-COVID-19 earnings and expected earnings in the COVID-19 situation; the result showed a significant drop in income, with a very low p-value at the 95% confidence level (Table 05). Fig 02 Linear Regression line (OLS) between Monthly Expenditure & Monthly Income in Normal Situation. This linear model between monthly income and monthly expenditure in the pre-COVID-19 case showed a significant linear relationship with a very low p-value, and the R² value of this model is also satisfactory at about 0.497, which means the model explains about 49% of the variation. The coefficient of the variable pre-COVID-19 monthly income is 0.636, which means there is a positive relationship between monthly income and monthly expenditure in normal times. This coefficient can be interpreted as follows: for every 1 Taka increase in monthly income, monthly expenditure increases by about 0.636 Taka; the impact is similar for decreasing income, where expenditure also decreases. Fig 03 Linear Regression line (OLS) between Monthly Income & Monthly Savings (Pre-COVID-19 Situation). This linear model between monthly income and monthly savings in the pre-COVID-19 condition showed a significant linear relationship with a very low p-value, although the R² value of this model is not very satisfactory, at about 0.033. The coefficient of the variable pre-COVID-19 monthly income is 0.048, and its relationship with the dependent variable monthly savings is significant with a p-value of 0.009 at the 95% level, which means there is a positive relationship between monthly income and monthly savings in normal times. This coefficient can be interpreted as follows: for every 1 Taka increase in monthly income, monthly savings increase by about 0.048 Taka; the impact is similar for decreasing income, where savings also decrease. As seen in the Welch two-sample t-test, income in the COVID-19 situation has dropped significantly, and the positive and significant relationships of OLS Models 01 and 02 indicate that both expenditure and savings will drop in the COVID-19 situation due to the fall in income. In our study, we observed a huge drop in the expected monthly income of the respondents during this pandemic. The collaborative research of the organizations PPRC (Power and Participation Research Centre) and BIGD (BRAC Institute of Governance and Development) has reflected that point (PPRC 2020). Another research organization, SANEM, has also found a decrease in the monthly income level of the poor and vulnerable population in its research, along with a low-saving tendency among these poor and vulnerable people (Hossen et al. 2020). However, according to our analysis, a decent number of people (77.3% of the respondents) have some sort of savings. Conclusion COVID-19 is disrupting the whole global economy, and so it is in Bangladesh. Standing on the verge of LDC graduation, Bangladesh faces a more severe impact of COVID-19 on its overall economy.
Our research has analyzed different continuous and categorical economic variables of our sampled vulnerable population, which can help in understanding the overall situation, and newer studies can compare the situation in an updated manner. It found a significant drop in income due to COVID-19 and indicates that both monthly expenditure and savings will drop, as a significant relationship was found between monthly income and both monthly expenditure and monthly savings separately. Due to the lockdown scenario, a sample of 203 was collected using non-random sampling, which cannot give a fully representative picture of the vulnerable population, but it can be used to roughly understand the situation. Additionally, the lack of literature on this topic impeded comparison of our results.
2021-05-08T00:02:44.075Z
2020-12-03T00:00:00.000
{ "year": 2021, "sha1": "a1b00981c776ce0da903588451b0d5b2e874667e", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-279226/latest.pdf", "oa_status": "GREEN", "pdf_src": "ElsevierPush", "pdf_hash": "4de3fa105556f506ed25b451b71ff7086bf90243", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Geography" ] }
256963880
pes2o/s2orc
v3-fos-license
Origin of macroscopic adhesion in organic light-emitting diodes analyzed at different length scales Organic light-emitting diodes (OLEDs) have been widely studied because of their various advantages. OLEDs are multi-layered structures consisting of organic and inorganic materials arranged in a heterojunction; the nature of adhesion at their heterogeneous interfaces has a significant effect on their properties. In this study, the origin of macroscopic adhesion was explored in OLEDs using a combination of microscopy techniques applied at different length scales. The different techniques allowed the identification of layers exposed by a peel test, which aided direct characterization of their macroscopic adhesion. Further, the contribution of each exposed layer to macroscopic adhesion could be determined through an analysis of photographic images. Finally, analysis of the local roughness and adhesion confirmed that the interface between an anode and emission layer could play a predominant role in determining the nature of macroscopic adhesion in OLEDs. These results may provide guidelines for exploring the origin of macroscopic adhesion properties through a combination of various microscopy techniques. In this study, we investigated the origin of local contributions to macroscopic adhesion in OLEDs by a combination of various microscopy techniques at different length scales. A peel test was performed to study macroscopic adhesion in OLEDs. Then, the exposed layers were identified using a common digital camera, optical microscopy (OM), and cross-sectional scanning electron microscope (SEM) imaging. Further, each area portion of exposed layers of the OLEDs was calculated using image analysis. The results on the calculation of the area portion showed that an exposed anode may be responsible for macroscopic adhesion. Additionally, the surface roughness and adhesion force on the exposed layers were analyzed using AFM. Through the AFM measurements, we confirmed that the exposed anode, i.e., the interface between the anode and emission layer (EL), had a predominant effect on the macroscopic adhesion properties of OLEDs. These results may provide useful information about the macroscopic adhesion of OLEDs, while providing guidelines for the exploration of the origin of macroscopic adhesion properties through a combination of various microscopy techniques. Figure 1 shows schematics of the OLED structure and the experimental procedures. Generally, the OLEDs are multi-layered structures, consisting of an encapsulation layer (SiN x )/monomer/SiN x /capping layer(CPL)/cathode/EL/anode/Via/thin film transistor/substrate. In order to explore the adhesion properties of the OLEDs, the widely used peel test was employed as shown in Fig. 1(b) 1,13,[24][25][26] . The peel test is a well-known technique for measuring macroscopically the adhesion properties of materials through the analysis of the maximum load because this maximum load is defined as the force measured when the layers completely detached from the fixed OLED. The details on the measurements can be found in methods section. In this way, the magnitude of the macroscopic adhesion force of each OLED was obtained. However, this method did not provide adhesion properties of each individual interface between layers, making it difficult to probe which layers demonstrated low or high adhesion. 
Results and Discussion Thus, to explore the adhesion properties of the individual interfaces, we performed the following analysis procedure: (1) The peel test was performed to obtain the macroscopic adhesion force of the OLED (Fig. 1(b)). (2) Photographic and OM images were obtained to observe any exposed layers and their area portions over the OLED (Fig. 1(c)). (3) Cross-sectional SEM imaging was performed to identify each layer (Fig. 1(c)). (4) After specifying each layer, topography and adhesion force images were measured using contact and PinPoint TM AFMs to probe surface roughness and the adhesion force, respectively, of each exposed layer, which are expected to be major factors contributing to macroscopic adhesion under fixed environmental conditions (Fig. 1(d)). Figure 1. Schematics of (a) a representative structure of OLED and (b-d) experimental procedures for (b) peel tests, (c) photographic, OM, and SEM measurements on the exposed surface after the peel test, and (d) AFM measurements on the exposed surfaces. As described above, to investigate the exposed layers, we acquired photographic and OM images by digital camera and OM, respectively, following the peel test. In the photographic image shown in Fig. 2(a), three layers were exposed after the peel test, a behavior that occurred for all of the OLEDs (see Fig. S1). These three layers are better distinguished in the OM image of Fig. 2(b). We note that the circle- and diamond-like shapes are pixels, which are the illuminating parts of the OLED. The exposed layers are differentiated by their colors (blue, dark brown, and light brown) throughout the OM image. We designate the blue layer as Layer [1], the dark brown layer as Layer [2], and the light brown layer as Layer [3]. After the classification of each layer with the aid of the OM image, SEM measurements were carried out to further identify each exposed layer. We first acquired a plane-view SEM image as shown in Fig. 2(c), in which the region is also indicated as a blue box in Fig. 2(b). Similar to the OM image, the three exposed layers were clearly distinguished by contrast. To identify each layer in the OLED structure of Fig. 1(a), cross-sectional SEM images were acquired as shown in Fig. 2, which reveal the interfaces between Layers [3] and [2] and between Layers [2] and [1], respectively. This implies that the exposed layers are stacked in the order of Layers [1], [2], and [3]. To better observe interfacial regions, we observed the black box of Fig. 2(e) at a higher magnification, as shown in Fig. 2(f,g). Considering the OLED structure in Fig. 1(a), these SEM images indicated that the anode was Layer [3], the cathode was Layer [2], stacked on the EL/anode (Layer [3]), and the CPL was Layer [1], stacked thereupon. This also indicates that the interfaces between the anode and the EL, between the cathode and the CPL, and between the SiNx and the CPL are weaker than the other interfaces. We note that there is a structural difference between the pixel part (diamond- and circle-like structures in the OM and SEM images) and the non-pixel part. In particular, the anode and the EL layers are directly in contact with each other in the pixel part. Therefore, each exposed region can be represented by the schematic diagram shown in Fig. 2(h). We further note that the slightly darker contrast in the upper part of the EL layer (Fig. 2(g)) might be related to an artifact from the shadowing of the cathode layer because the cross-section was not perfectly smooth.
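One way to picture the area-portion step in procedure (2) is a simple pixel count over a photograph that has already been segmented into the three exposed layers; the study performed this step manually in ImageJ, so the NumPy sketch and label values below are only an assumed equivalent.

```python
# Rough sketch of an area-portion calculation on a segmented photograph.
# Label values (1, 2, 3 for Layers [1]-[3]) are hypothetical; the study used ImageJ.
import numpy as np

def area_portions(label_image: np.ndarray, labels=(1, 2, 3)) -> dict:
    """Return the fraction of the exposed area occupied by each labelled layer."""
    exposed = np.isin(label_image, labels)
    total = exposed.sum()
    return {lab: float((label_image == lab).sum()) / total for lab in labels}

# Toy 4x4 segmentation map: Layer 3 dominates, as reported for both OLEDs.
toy = np.array([[3, 3, 3, 2],
                [3, 3, 2, 2],
                [3, 3, 3, 1],
                [3, 3, 3, 3]])
print(area_portions(toy))
```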
In addition, we further confirmed each layer using X-ray photoelectron spectroscopy (XPS) (see Fig. S2). After confirming the type of each layer from procedures (1) to (3), we chose two different OLEDs, which show the two extreme (smallest and largest) maximum loads in the peel test among 10 different OLEDs (see Table S1), and performed AFM measurements on these OLEDs with procedure (4). Figure 3(a) shows the peel load depending on the peel extension for the two OLEDs. The maximum load of OLED #1 was higher than that of OLED #2, as shown in Fig. 3(a), meaning that the macroscopic adhesion force of OLED #1 over the measured region was relatively higher than that of OLED #2. Even though peel load values are widely used to evaluate differences in macroscopic adhesion, such values cannot distinguish which interfaces are weaker than others or determine the contribution of a certain interface to the macroscopic adhesion. The weakest interface can be primarily responsible for the macroscopic adhesion measured by the peel test and, furthermore, can exhibit the largest area portion among the exposed layers or interfaces. Thus, after identifying each exposed layer in a manner similar to Fig. 2, the area portion of each exposed layer was graphically calculated in each OLED using the photographic images. As mentioned above, all of the OLEDs showed the same exposed layers. The yellow and light blue regions in Fig. 3(b,c) indicate Layers [2] and [1], respectively, which were also confirmed from the OM images (Fig. 3(d,e)) taken at the dotted green boxes in Fig. 3(b,c). Figure 3(f) shows the area portion of each layer obtained from Fig. 3(b,c) by using the image analysis tool ImageJ, showing that the area portion of Layer [1] was nearly the same in both OLEDs. This suggests that the area portion of Layer [1] had no significant effect on the peel test results. However, while the area portion of Layer [2] in OLED #1 was larger than that of OLED #2, the area portion of Layer [3] in OLED #1 was smaller than that of OLED #2. This implies that the difference in the ratio of the remaining portions of Layers [2] and [3] resulted in the difference in the maximum load in the peel test. Since the area portion of Layer [3] was much larger than that of Layer [2] for both OLEDs, the primary layer for determining the macroscopic adhesion is likely Layer [3], indicating that the interface between the anode (Layer [3]) and the EL might significantly affect macroscopic adhesion. In other words, the relatively small area portion of Layer [3] in OLED #1 is related to its higher macroscopic adhesion. After examining macroscopic approaches, topographic imaging was performed on each layer, as presented in Fig. 4, to visualize local differences in topographic features. All the layers for both OLEDs appeared uniform over the measured area, but the detailed surface microstructures differed slightly. Layer [1] is a uniformly flat surface with a few particle-like features (see Fig. 4(a,d)). However, Layer [2] has a completely different topography, with a surface appearing to be very rough. Layers [1] and [2] show similar topography for both OLEDs. However, the topographic features of Layer [3] were completely different for OLEDs #1 and #2, with OLED #1 being rough with many particle-like features, and OLED #2 being rather flat. These topographic features can be confirmed by the roughness values shown in Fig. 4(g), which represent the root mean square (RMS) roughness for each topographic image.
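The RMS roughness values quoted in Fig. 4(g) are the root-mean-square deviation of each AFM height map about its mean level; a minimal sketch, assuming a plane-levelled height array in nanometres (the array below is synthetic), is:

```python
# Sketch of the RMS roughness Rq = sqrt(mean((z - <z>)^2)) over a scanned area.
# The height map here is a dummy 256x256 array; real data would come from the AFM.
import numpy as np

def rms_roughness(height_map: np.ndarray) -> float:
    """Root-mean-square roughness of a plane-levelled height map."""
    z = height_map - height_map.mean()
    return float(np.sqrt(np.mean(z ** 2)))

heights = np.random.normal(loc=0.0, scale=1.5, size=(256, 256))  # synthetic scan (nm)
print(f"Rq = {rms_roughness(heights):.3f} nm")
```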
Note that, in general, macroscopic adhesion is correlated with the degree of surface roughness 27,28. Therefore, considering a single OLED, Layer [2] may not contribute significantly to the observed macroscopic adhesion because its roughness is relatively higher than that of the other layers, so the weaker interfaces eventually determine the observed macroscopic adhesion. Furthermore, Layer [2] might not contribute to the relatively different macroscopic adhesion observed in the present study, because the surface roughness of Layer [2] is similar for both OLEDs. Instead, since the difference in roughness between the OLEDs is the highest for Layer [3], the difference due to Layer [3] may be responsible for the relative difference in macroscopic adhesion. The relative (absolute) roughness differences compared to the higher value for Layers [1] and [3] are 43.8% (0.658) and 69.7% (1.548), respectively. In other words, the interface between the anode (Layer [3]) and the EL might strongly affect the macroscopic adhesion properties of the OLEDs. In addition to the surface roughness, local adhesion can contribute to macroscopic adhesion. Thus, we measured the local adhesion of each layer using PinPoint TM measurements with the AFM. Figure 5 shows the adhesion force images for each confirmed layer on both OLEDs. The normalized adhesion values are shown in Fig. 5(g); the average value and deviation of the adhesion are normalized by the maximum average adhesion force measured among the layers. The different materials composing each layer can be clearly distinguished through the adhesion force images. Both OLEDs exhibited the same tendency, with Layer [1] having the highest adhesion force, followed by Layers [3] and [2], in that order. Moreover, when comparing the two OLEDs, the adhesion forces of Layer [2] in OLEDs #1 and #2 are nearly identical, but those for Layers [1] and [3] are different. Similar to the surface roughness discussed above and shown in Fig. 4, while Layer [2] demonstrates nearly the same adhesion, Layers [1] and [3] show a difference in adhesion, confirming that Layer [2] did not contribute to the macroscopic adhesion. The relative adhesion differences comparing the higher values for Layers [1] and [3] were 36.0% and 32.1%, respectively. Even though Layer [1] showed a slightly higher adhesion difference, the local adhesion in this study might not be a dominant factor in determining macroscopic adhesion because the relative adhesion difference for both OLEDs was small or negligible, considering that the deviation and the relative roughness difference were relatively large. This can also be observed from the macroscopic adhesion shown in Fig. 3. In addition, since the measured local adhesion is relative to the AFM tip, the actual adhesion may be slightly different. Nonetheless, if the relative adhesion difference is large, local adhesion may contribute to the macroscopic properties. Overall, the difference in macroscopic adhesion between the OLEDs, as measured by the peel test, might originate primarily from the roughness of Layer [3]. Thus, the macroscopic adhesion properties of the OLEDs were primarily influenced by the interface between the anode and the EL. This phenomenon may be caused by the OLED preparation process. Up to the deposition of the anode, the preparation of each layer proceeded continuously.
However, there was some delay before the deposition of the EL layer because of the change from inorganic to organic materials. Therefore, since this delay is the longest among all the processes, the anode surface might be affected by the environment, causing slightly different adhesion properties. Conclusion In conclusion, we investigated the origin of local contributions to macroscopic adhesion in OLEDs using a combination of various microscopy techniques at different length scales. After performing a peel test, we identified two weak interfaces in the OLEDs that primarily contributed to macroscopic adhesion using a combination of photographic, OM, and SEM images. The three exposed layers remaining on the surfaces were the anode (Layer [3]), cathode (Layer [2]), and CPL (Layer [1]). Further, based on the analyses of the photographic images, the relative area portions of the exposed layers indicated that the interface between the anode (Layer [3]) and the EL primarily affected macroscopic adhesion. Finally, after identifying each layer, the topography was characterized and the adhesion force images were acquired using AFM to probe local physical properties such as surface roughness and adhesion force. Even though no significant difference in local adhesion was found, visible differences in roughness were observed in Layer [3]. This confirmed that Layer [3], the interface between the anode and the EL, was predominant in affecting the macroscopic adhesion properties of the OLEDs. As a result, by using a combination of various microscopy techniques at different length scales, weak interfaces in the OLEDs and the main contributors to macroscopic adhesion were revealed. These results could provide useful information on the macroscopic adhesion of OLEDs as well as other devices composed of multiple layers. Furthermore, these observations provide guidelines for improving adhesion between interfacial layers in OLEDs. Methods OLED preparation and peel test. OLEDs were obtained from Samsung Display Co. Ltd. A subsequent peel test (INSTRON 5967) was performed on each OLED in order to peel the layers demonstrating weak adhesion. The peel test was performed as follows: (1) Each OLED was attached to an Al plate with adhesive tape (to keep it from falling). (2) A mount jig was attached to the region where the peel test was to be performed (2 cm × 2 cm of surface area at the edge of the OLED). (3) With the Al plate fixed to the peel test machine, the mount jig was lifted and the macroscopic adhesion was measured. The maximum load is defined as the force measured when the layers are completely detached from the fixed OLED. The peel test was performed on 10 different OLEDs, and the other experiments were conducted on the OLEDs with the largest and smallest maximum loads. The results measured by the peel test are attached in Table S1. Optical microscopy and SEM measurements. OM measurements were performed using a commercial instrument (DM2700, Leica). The OM images were acquired using the bright field mode at a magnification of 100× under ambient conditions. Plane-view and cross-sectional SEM images were obtained with an FEI Helios Nanolab 460F1 focused ion beam. Image analysis. Photographic images were taken with a common digital single lens reflex camera. Image analysis was performed using the open-source scientific image analysis tool ImageJ. Each layer in the photographic image was manually differentiated and the area portion was calculated. AFM measurements.
A commercial AFM (NX-10, Park Systems) was used for adhesion force measurements. The PinPoint TM measurement method was performed using a Si tip (CONTAl-G, BudgetSensors) for obtaining adhesion force images 29 . Typically, the local adhesion force is measured via a force-distance (F-D) curve 30 . In the F-D curve, the AFM tip initially contacts the sample surface, followed by separation. Here, the pull-off force is considered equivalent to the adhesion force. Note that the PinPoint TM measurement technique acquires images for adhesion properties. It is considerably faster in acquiring the adhesion forces compared to the conventional F-D curve measurement techniques. During AFM measurements, the relative humidity and temperature were maintained at ~12% and 28.5 °C, respectively.
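As a rough illustration of how the pull-off (adhesion) force is read from one F-D retract curve, the free-level baseline far from the surface is subtracted and the depth of the attractive minimum is taken; the synthetic curve and the 50-point baseline window below are assumptions, since PinPoint mode performs this per pixel internally.

```python
# Sketch of pull-off force extraction from a single force-distance retract curve.
# The curve below is synthetic (nN vs. nm); real data would come from the AFM.
import numpy as np

def pull_off_force(force_retract: np.ndarray) -> float:
    """Adhesion force = far-from-surface baseline minus the minimum of the retract curve."""
    baseline = force_retract[-50:].mean()   # last 50 points taken as the free level
    return float(baseline - force_retract.min())

z = np.linspace(0, 200, 1000)                                      # tip-sample distance (nm)
f = -8.0 * np.exp(-z / 10.0) + np.random.normal(0, 0.05, z.size)   # toy attractive well (nN)
print(f"adhesion ≈ {pull_off_force(f):.2f} nN")
```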
2023-02-18T14:55:06.678Z
2018-04-23T00:00:00.000
{ "year": 2018, "sha1": "d09a97a0c71c8e81444e05f80ab284fe382f820c", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-24889-9.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "d09a97a0c71c8e81444e05f80ab284fe382f820c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
245345806
pes2o/s2orc
v3-fos-license
Online Exercise Program for Elderly During Coronavirus (Covid-19) Pandemic This study was conducted to determine the benefits of exercise for elderly people who were confined by curfew due to the coronavirus (Covid-19) epidemic, and the technological competence of older individuals in accessing exercise. The study was conducted with the students of 60+ Tazelenme University between March and May 2020. The sample of the study is 80 people. The elderly participated in the online exercise for eight weeks, during the three months when there was a curfew for the elderly. The Information Form, which was prepared by the researchers by reviewing the literature, was used to collect the data, evaluating the benefits of online exercise and the ability of the elderly to access online exercise. The data were collected online via "Google Forms". 66.0% of the individuals participating in the present study were between the ages of 60-69. It was observed that the online exercise program affects the physical and psychological health of the participants. 96.7% of the seniors believed that online exercise during the quarantine improved their health. 95.6% of the elderly noted that exercise improves the quality of life, and 98.9% reported that it helps daily life activities. Also, 94.6% of the participants think that they have sufficient technological equipment. Although the elderly experienced some difficulties following the exercises online, the majority were able to follow and apply the program. It is observed that the online exercise program has a positive effect on the physical and mental well-being of the elderly. The necessity of online exercise programs that can help elderly people stay active is noteworthy during the pandemic. Introduction The World Health Organization (WHO) announced on January 30, 2020, that the Coronavirus disease (Covid-19) pandemic had become a global health problem in the world. WHO warned people to stay at home during the ongoing Covid-19 pandemic to reduce exposure and infection rates. Taking precautions for infection control is imperative due to the rapid spread of Covid-19. Staying home is one of the most critical measures that can limit the spread of the coronavirus (Chen et al., 2020). Staying in quarantine at home to control the Covid-19 disease, preventing the virus from being transmitted from person to person, may cause a decline in physical activity. Staying at home for a long time can increase passive behaviors, such as watching television, playing computer games, and using mobile devices. The decrease in the level of physical activity has a negative effect on the health status of people (Owen et al., 2010). Thus, it is necessary to continue physical activity at home to remain healthy during the quarantine period. A home exercise program with simple, safe, and easy exercises, including stretching, strengthening, and balance (Chen et al., 2020), was suggested to be suitable for increasing and maintaining the level of physical activity (Halabchi, Ahmadinejad and Selk-Ghaffari, 2020).
A certain level of physical activity and exercise is required to ensure active aging in the quarantine period.It is known that the physical activity of the older adults positively affects the prevention of sarcopenia, falling risk, frailty, and cognitive impairment (Cadore, Asteasu and Izquierdo, 2019).Studies have revealed that physical activity and exercises directly affect mental and physical health in combating chronic diseases ( Liu et al., 2019, Anderson andDurstine, 2019) and increase the quality of life (Buffart et al., 2017, Sagar et al., 2015).Therefore, exercise remains vital during quarantine to ensure that seniors maintain an active lifestyle and their general health at home (Jiménez-Pavón, Carbonell-Baeza and Lavie, 2020).During the quarantine days, the importance of exercise in the protection of the physical and mental health of the elderly has increased more.In this period we spend a long time at home, online exercise for the elderly has created an alternative in protecting health.The present study aimed to determine the benefit of online exercise on older adults during the quarantine period and the technological competence of older people accessing online exercise. Method The sample population of this prospective study was quarantined adults over 55 years of age who were students of 60+ Tazelenme University.The 60+ Tazelenme University is a novel implementation of University of the Third Age (U3A) in Turkey ( 11).The study sample consists of 100 people who participated in online exercises for eight weeks between March 2020 and May 2020 during the pandemic period.Of these, statistically, at a 95% confidence interval, the inclusion of 80 participants' data in the study could be considered sufficient (Yazıcıoğlu and Erdoğan, 2007), while 91 participants' data were included in our study.9 people were not included in the study because they did not fill out the questionnaire.This study was approved by the Institutional Ethics Review Board . Recent studies have shown that basic exercise program components in older adults require stretching, strengthening, balance, and resistance exercises (Sherrington et al, 2008).In our study, the online exercise program included: (a) static stretching (stretching the upper extremities, lower extremities, and both sides of the body), (b) muscle strengthening (extension, stretching, abduction of the shoulders), and hips, elbows, wrists, knees, ankle lengthening and stretching, and (c) balance (heel and toe lifting) and cool-down (deep breathing) exercises were performed.The necessary resistance for muscle strengthening exercises was provided with body weight.Participants aged 55 and over participated in the online exercise for 45 minutes, three days a week for eight weeks.The exercises started with 8-10 repetitions and increased to 15-20 repetitions in the following period. The Information Form (Appendix.1)generated by the researchers in line with the relevant literature review was used in collecting the data.A combination of open and closed-ended questions were included to evaluate the socio-demographic characteristics of the participants as well as the benefits of online exercise and the competence of older individuals in accessing online exercise.For closed questions answers used the format of a typical five-level Likert item: Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree. 
The data were collected online via "Google Forms".All participants ticked the box that they agreed to participate in the study before answering the questions online.The forms were delivered to older people via the WhatsApp program, which is used by many elderly people today. Quantitative data were evaluated using descriptive statistics (number and percentage) in SPSS 23.0 software.The Cronbach-Alpha method was used to determine the internal consistency of the form.It was reported that the alpha value of 0.80 and above indicates high consistency (Uzunsakal and Yıldız, 2018) Results Considering the age distribution of the individuals participating in this study, 66.0% of them are between the ages of 60-69.The majority of the participants were women (84.6%), married (63.7%), and undergraduate (35.2%) individuals.63.7% of the older individuals stated that their income is equivalent to their expenses.Hypertension diagnosis (15.4%) takes the first place among those with chronic diseases.The demographic characteristics of the participants are shown in Table 1. Investigation of internal consistency The Cronbach-Alpha method was used to determine the internal consistency of the data form.The Cronbach-alpha value of all 23 items of the form was determined to be 0.878 that indicates high reliability .96.7% of the elderly believed that online exercise in quarantine improves their health.In evaluating the elderly people's thoughts about the effect of exercise on their physical health, 93.4% of the elderly reported that exercise helps reduce their pain, 97.8% of them found that exercise prevents the decrease in physical movement ability.Furthermore, 97.8% of the elderly stated that it protects muscle strength, 97,8% of them stated that it is suitable for bone and joint health.89.0% of the elderly believed that exercise helps keep their weight under control, 89.0% of them reported that it protects from other diseases (Table 2).In evaluating the elderly's thoughts about the effect of exercise on their psychological health, 93.4% of them stated that it is suitable for fatigue, 89.1% of the elderly found that exercise positively affects the quality of sleep, 92.4% of them believed that exercise makes them feel good.Moreover, 98.9% of the elderly remain active with exercise, 95.6% of them are thinking that the exercise makes them optimistic.95.6% of the elderly noted that exercise improves the quality of life, and 98.9% reported that it helps daily life activities (Table 3).Considering the difficulties experienced by the elderly in following online exercises, 74.8% of the elderly reported that they have difficulty in performing the movements since they are alone.Furthermore, 66.0% of the elderly have a visual impairment, 68.2% reported that hearing problems prevent them from exercise, and 95.7% of them stated that they have difficulty understanding movements.Due to fatigue, 58.3% of the elderly; due to pain, 73.7% stated that they have difficulty exercising (Table 4).Considering the benefits of online exercise for the elderly; 88.1% of the elderly stated that they felt less lonely by participating in online exercises during the quarantine.93.5% of the participants stated that they realized that they could exercise at home without going outside by participating in online exercises during the quarantine.Also, 94.6% of the participants think that they have sufficient technological equipment (Table 5).According to the elderly's open-ended questions, one of the most important benefits of exercise is that it 
prevents them from being inactive and makes them more active.It was stated by the majority of the elderly that exercise is beneficial in reducing muscle and joint pain by enabling the muscles to work.It was emphasized by the elderly that exercise is also effective on psychological well-being.It was observed that exercise had a positive effect on the psychological health of elderly people in terms of being emotionally cheerful, relaxing, doing something useful for oneself, feeling peaceful and happy, relaxing the mind, thinking positively, and feeling well. Although some of the elderly people stated reasons that prevent them from participating in exercise such as forgetting the starting time of the exercise, caring for their mothers, home and gardening, getting bored with repetitive activities, participating in other online activities, and experiencing some technical difficulties, it was observed that most of them had no problem in participating and following the exercises.Although there are elderly people who believed that continuing with online exercise programs may be beneficial after the quarantine, some of them stated that it is more beneficial to do the exercises face-to-face. Discussion and Conclusion The level of physical activity is reduced in older adults (Milanović et al., 2013).There are physical activity guidelines specifically developed for older people (Piercy and Troiano, 2018).While the elderly did not comply with the physical activity guidelines recommended for them even before the pandemic (Sun, Norman and While, 2013), quarantine resulted in a more decrease in the physical activity levels of the elderly (Ekelund et al., 2019).Although quarantine is the best and recommended option to stop the rapid spread of Covid-19 disease, other aspects of the health of the elderly who remain isolated at home in quarantine should also be considered (Jiménez-Pavón, Carbonell-Baeza and Lavie, 2020). Exercise plays an essential role in maintaining the health of elderly individuals (Lavie et al., 2019), active aging (Fletcher et al., 2018), and combating diseases such as diabetes, hypertension, cardiovascular diseases, respiratory diseases (Ozemek, Lavie and Rognmo, 2019).Some studies have shown that exercise positively affects the immune system (Nieman and Wentz, 2019).In a review by Polero et al., aerobic, strength, flexibility, and balance exercises were recommended during the quarantine caused by Covid-19 (Polero et al., 2021).However, it is not always possible for all elderly people to receive a personally planned exercise program from a healthcare professional (Said and Batchelor, 2020).Since we think that different studies are needed to support physical activity and exercise in the home environment of the elderly, we implemented a program consisting of stretching, strength, flexibility, and balance exercises for the elderly during quarantine. 
The elderly in the risk group for Covid-19 should be prevented from adopting a sedentary lifestyle during the quarantine process; thus, exercise opportunities should be offered to the elderly.The necessity of online exercise applications drew attention in this period when activities performed together threaten health.However, there are not enough studies on the effect of online exercise on the elderly and the technological competence of the elderly in accessing online exercise.In the present study, it is seen that one of the most important benefits of online exercise in quarantine allows the elderly to be more active by preventing them from being inactive. It is also mentioned that technology-based interventions are needed to overcome psychosocial difficulties during Covid-19 quarantine (Ammar et al., 2021).Especially, it is necessary to improve telehealth services for vulnerable groups (Said and Batchelor, 2020).Our study should emphasize that technology-based online exercises are practical on the psychological well-being of the elderly.Although they had some difficulties following the exercises online, most of them could follow the online exercise program and do their exercises.We think that it may be beneficial to continue online exercise programs after the quarantine period is over.Gamberini et al. (2006) state that various factors are connecting older people to technology (Gamberini et al., 2006).One of these elements is telehealth, which is used to diagnose and treat chronic diseases remotely and electronically.During the pandemic period, remote access to health services has become important.Contrary to the negative perception of the elderly in the use of technology in society (Baran, Kurt and Tekeli, 2017), it is impressive that in this study we conducted with elderly individuals at 60+ Tazelenme University, the majority of elderly individuals think that their technological devices are sufficient to benefit from online services.We think that the ability of elderly individuals to adapt to the changing daily life process makes a great contribution to the quarantine caused by the pandemic. The limitation of our study is that the online exercise program can not be prepared specifically for the individual and that a general exercise program is applied to the participants at a level that will not harm individuals with chronic diseases.There is a need for new studies on the evaluation and treatment of the individual online in the literature. In addition, in the digitized world, elderly individuals have problems in accessing and using technological devices to use the telehealth system.For this reason, in our study, online exercises were performed on platforms that the elderly can easily access.Despite this, in our study, the elderly experienced problems resulting from technological products such as the problem of connecting to the Internet and the small screen.
2021-12-21T16:03:35.909Z
2021-12-13T00:00:00.000
{ "year": 2021, "sha1": "11ff149016c885ceb7be8ce3ad1f790ada94a61a", "oa_license": null, "oa_url": "https://dergipark.org.tr/en/download/article-file/1759290", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4ff5cdc3ba8c97781a6a1123177b7fc3d36dd4ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
236666418
pes2o/s2orc
v3-fos-license
Effect of Tea Leaves Powder Supplementation on Fermented Oil Palm Fronds on Fermentation Characteristics, Rumen Microbial Profile, and Methane Production In vitro | Tea leaves are a type of plant leaf with the potential to be used as an antiprotozoal agent to increase livestock production. Therefore, this research aimed to evaluate the effect of tea leaves powder supplementation on fermented oil palm fronds (FOPF) on fermentation characteristics, rumen microbial profile, and methane production in vitro. A randomized block design with four treatments was used in this research. Treatments were tea leaves powder supplementation at 0, 2, 4, and 6%, with four groups as replications. The observed variables were fermentation characteristics, rumen microbial profile, and methane production in vitro. The results showed that at 2% supplementation the substrate had the lowest pH value. Meanwhile, at 4-6%, the supplement decreased NH3, total VFA, the proportion of propionate, DMD, OMD, and the protozoa population, and increased the A:P ratio. However, there was no effect on the bacteria population or methane production. In conclusion, supplementation of tea leaves powder at 2% on fermented oil palm fronds did not influence fermentation characteristics, rumen microbial profile, or methane production in vitro. The tea can be used in the ration to optimize the utilization of oil palm fronds. INTRODUCTION Indonesia is the biggest producer of palm oil worldwide, and in 2019/2020 the production volume was recorded at 42.50 million metric tons (Shahbandeh, 2020). The Directorate General of Estate Crops (2019) reported that in 2018 there were 14.33 million hectares of oil palm planted area, with private companies controlling 55.09%, smallholders 40.62%, and state-owned companies 4.29%. Moreover, Poh et al. (2020) stated that 10 tonnes of oil palm fronds (OPF) are produced per hectare of oil palm planted area; therefore, in Indonesia there should be 140.33 million tons of OPF. In tropical countries, these fronds are used as ruminant feed in the livestock industry (Ng et al., 2011; Ebrahimi et al., 2015). This statement is supported by Azevêdo et al. (2012), who reported that ruminants could convert renewable agro-industrial by-products to high-quality feed. Nurhaita et al. (2014) reported that oil palm fronds (OPF) are a by-product feed with low quality and nutritional value, high fiber content, and low palatability and digestibility. Furthermore, the major constraint to the use of OPF as livestock feed is its high crude fiber (CF) content. The low nutritional and digestibility value of OPF as a by-product could be improved using fermentation technology. This claim is supported by Wizna et al. (2009), who reported that the fermentation process was able to reduce crude fiber content by 32% and increase crude protein content by 360%. Fermented products have better nutritional quality and digestibility than unfermented products (Astuti et al., 2017; Wajiyah et al., 2015). Meanwhile, to improve the quality and modify rumen microbes, fermented OPF (FOPF) products need to be supplemented with bioactive compounds. A meta-analysis study in vitro by Klevenhusen et al. (2012) and in vivo by Khiaosa and Zebeli (2013) reported that bioactive compound supplementation is used as a rumen microbial modifying agent, and reducing methane gas production has a positive effect on livestock productivity.
Tea leaves are among the plant leaves that have the potential to be used as antimethanogenic and antiprotozoal agents (Hu et al., 2005; Ramírez-Restrepo et al., 2016) to increase the growth performance of small ruminants (Mao et al., 2010; Zhou et al., 2011). However, there is not much information on the effect of the supplementation of these leaves on FOPF on fermentation characteristics, rumen microbial profile, and methane production. Therefore, this research aimed to evaluate the effect of tea leaves powder supplementation in FOPF on fermentation characteristics, rumen microbial profile, and methane gas production in vitro. Preparation of Tea Leaves Powder Tea leaves were obtained from the plantation area of Kepahiang, Bengkulu Province, and sun-dried for 4-5 days. The dry leaves were then powdered using a hammer mill with 80-100 mesh size (Nurhaita et al., 2016). Preparation of Local Microorganisms The formula for preparing the local microorganism (MOL) was 2 kg of goat's rumen fluid, 15 L of coconut water, and 4 kg of molasses. Furthermore, these ingredients were put into a jerry can, closed tightly, and mixed evenly. Then a smaller jerry can was prepared with water and closed tightly. The caps of both jerry cans were perforated and connected using a small tube. Finally, the small jerry can that contained water functioned to accommodate the gas formed during the MOL production process (10 days of incubation). Fermented Oil Palm Fronds (Nurhaita et al., 2019) Oil palm fronds were chopped and sun-dried until the water content was approximately 60%. Then 10 ml MOL/kg FOPF, 1% urea, and 5% molasses were added to the chopped OPF. Subsequently, the materials were mixed properly, put in a plastic bag, compacted, tied, and fermented for 7 days. The fermented oil palm fronds were then evaluated for their physical quality and powdered using a hammer mill with 80-100 mesh size. Statistical Analyses A randomized block design with four treatments was used in this experiment. Treatments were supplementation of tea leaves powder at 0, 2, 4, and 6%, with four groups as replications. Data were analyzed using ANOVA, and the differences among the treatment means were examined using DMRT (Steel & Torrie, 1980). Nutrient Profiles of the Fermented Oil Palm Fronds, Tea Leaves, and Substrate The nutrient profiles of the OPF, FOPF, tea leaves powder, and substrate are shown in Table 1. Fermentation Characteristics In Vitro Data on fermentation characteristics in vitro are shown in Table 2. The rumen pH value ranged from 6.65 to 6.73, with the lowest value being due to the supplementation of 2% tea leaves powder. Furthermore, the supplementation of this powder on FOPF decreased the NH3 concentration (7.01-8.99 mM) and total VFA (67.18-97.26 mM), and their lowest values were in the treatment with 6% supplementation (NH3 at 7.01 mM; total VFA at 67.18 mM). The proportions of acetate (61.66-66.73%) and butyrate (16.71-19.05%) in the rumen were not affected by the dietary treatments. However, supplementation of the powder at 4% caused the lowest proportion of propionate (15.91%) and the highest A/P ratio (4.24). A similar result was obtained for the DMD (43.92%) and OMD (42.71%), where the lowest values were in the treatment with supplementation at 4%. Finally, the ranges of DMD and OMD were Rumen Microbial Profile and Methane Production In Vitro Data for the rumen microbial profile and methane production in vitro are presented in Table 3.
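A minimal sketch of the randomized block analysis described under Statistical Analyses, with tea leaves powder level as the treatment factor and group as the block, is given below; the data file and column names are hypothetical, and Tukey's HSD is shown only as a stand-in for the DMRT mean-separation test used in the study (following Steel & Torrie, 1980), which is not available in statsmodels.

```python
# Sketch of a randomized complete block ANOVA (treatment + block) for one response
# variable, e.g. total VFA. File and column names are placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("invitro_fermentation.csv")  # columns: level (0/2/4/6), block, total_vfa, ...

# Two-way ANOVA with tea-leaf level as treatment and group as block.
model = smf.ols("total_vfa ~ C(level) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc comparison of treatment means (Tukey's HSD here, DMRT in the original study).
print(pairwise_tukeyhsd(df["total_vfa"], df["level"]))
```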
It was observed that tea leaves powder supplementation up to 6% on FOPF did not affect the total bacteria (3.46-4.11 × 10^9), cellulolytic bacteria (6.87-8.80 × 10^7), or methane production (7.57-7.82%). However, the supplementation decreased the protozoa population, with the lowest value being caused by the supplementation at 6% (2.57 × 10^5). Nutrient Profile of the Fermented Oil Palm Fronds, Tea Leaves Powder, and Substrate The MOL, which consisted of microbes, contributed to the increase in the CP of FOPF because these microscopic organisms mostly consist of protein. This was supported by Crueger and Crueger (1984), who stated that different kinds of microbes affect protein content, including bacteria, as they contain 70 to 78% protein (Crueger and Crueger, 1984). Based on the results of microbiological identification at the Indonesian Institute of Sciences (2019), the MOL in this study consisted of 5 species of Bacillus bacteria, namely Bacillus sp., B. aureus, B. altitudinis, B. cereus, and B. megatereum. Oboh (2006) stated that microbial proliferation or multiplication could increase protein content. Also, microorganisms may increase CP due to their ability to synthesize amino acids (Jokotagha & Amoo, 2012). A similar result was reported by Yao et al. (2018), who showed that the CP in the fermented by-product from the beverage industry with Candida utilis and Bacillus subtilis supplementation was higher than in the unfermented product. The cell membrane of MOL bacteria contains inorganic compounds, which contributed to an increase in ash content (Ahaotu et al., 2013). Moreover, Jokotagha & Amoo (2012) reported that several factors contributed to an increase in fat during the fermentation process, such as an increase in the microbial population, which contains a lipid component, extracellular enzymes (lipase) from the activities of lipolytic microorganisms, and microbial oil substances in the fermentation medium. Fermentation technology could reduce CF content because the bacteria from MOL could degrade fibers. Obueh et al. (2014) reported that enzymes of hydrolysis and oxidation could convert the recalcitrant compounds from waste into utilizable compounds. Furthermore, it was discovered by Ojewumi et al. (2018) that CF content could be reduced through both aerobic and anaerobic fermentation. This was supported by Adeleke et al. (2017), who showed that the fiber content of fermented cassava peels decreased by 33.77 and 23.46%. Bacteria from MOL also degraded tannin and saponin in OPF. The same result was reported by Ojewumi et al. (2018): after fermentation with Saccharomyces cerevisiae, tannin and saponin from cassava waste were reduced by 44 and 27.78%, respectively. The nutrient content of tea leaves powder is similar to that of the shrub legume Desmodium velutinum, which has CP at 16.00%, ash at 6.59%, and tannin at 7.70% (Heinritz et al., 2012). Moreover, supplementation of this powder at 2-6% on FOPF brought the CP to 13.27-13.94% and TDN to 65.64-68.78%, which conforms with the beef cattle requirement for fattening in Indonesia (BSN, 2016). Fermentation Characteristics In Vitro The rumen pH value in this research was normal, as Dehority (2005) reported that the normal rumen pH value is between 5.4 and 7.8. The lowest pH, which occurred with supplementation at 2%, might be due to higher total VFA production than in the 4-6% supplementation groups. Roca-Fernandez et al. (2020) reported that 50% orchardgrass + 50% birdsfoot trefoil had the lowest pH and highest total VFA production.
With increasing supplementation of tea leaves powder on FOPF, the NH3 concentration decreased progressively. This is probably due to the increasing tannin content of the substrate as more tea leaves powder was added. Similar research was reported by Mohammadabadi & Chaji (2012): the addition of 30 g/kg DM of tannin-containing oak leaves and fruit, and pistachio hull and leaves, to sunflower meal decreased the ruminal NH3 concentration compared to the control. Tannin from tea leaves helps to protect dietary protein from degradation by rumen microbes. This condition has a beneficial effect with respect to ruminant production because the absorption of amino acids in the small intestine could be increased. Waghorn (2008) reported that animals that grazed forage with high condensed tannin (CT) had a higher flow of metabolizable protein and essential amino acids in their small intestine compared to those that grazed forage with low CT. There was a decrease in total VFA, DMD, and OMD with increasing levels of supplementation of tea leaves powder, probably due to the increasing formation of tannin-macromolecule complexes, which hinder microbial enzymes (McSweeney et al., 2001), thereby reducing fermentative activities and depressing intestinal digestion (Makkar, 2003). Alipour & Rouzbehan (2010) stated that this effect of tannin addition might be due to a reduction of microbes on feed particles. Also, Gemeda & Hassen (2014) reported that tannin from different species of tropical browse plants decreased total VFA and IVOMD. Jayanegara et al. (2019) stated that with higher levels of the compound in Indigofera and Moringa silages, there was a greater reduction in total VFA concentration, DMD, and OMD in the rumen in vitro. With increasing levels of supplementation of tea leaves powder, there was a continuous decrease in the proportion of propionate and an increase in the A:P ratio; meanwhile, there was no effect on the proportions of acetate and butyrate. A similar result was obtained by Patra et al. (2010): the addition of 0.50 mL of clove extract (a plant extract rich in tannins) increased the A:P ratio and decreased the production of propionate by direct inhibition of the Selenomonas ruminantium population. Also, Cieslak et al. (2016) found that the addition of tannin extract from Sanguisorba officinalis caused a significant decrease in propionate (linear P<0.01). Castro-Montoya et al. (2011), in research with dietary quebracho, mimosa, and chestnut tannins, demonstrated the opposite effect: there was a significant linear decrease in the proportion of acetate (P=0.01) and the A:P ratio (P<0.001) but a descriptive increase in the propionate proportion. Rumen Microbial Profile and Methane Production In Vitro Supplementation of tea leaves powder did not affect the total and cellulolytic bacteria populations or methane production. However, the protozoa population decreased with increasing levels of supplementation. This condition occurred because the leaves contained saponin, which has a defaunation effect that enables the lysis of protozoa cells. The study by Ramírez-Restrepo et al. (2016) supports this finding, as it showed that tea leaves are one of the plant leaves with the potential to be used as an antiprotozoal agent. Saponin reacts with the cholesterol of the protozoa membrane, leading to an increase in the permeability of the cell walls, thus killing the protozoa (Wallace et al., 2002). Hidayah et al.
(2020) also confirmed these findings: increasing substitution of native grass with saponin-containing jengkol (A. jiringa) peel powder decreased the protozoa population. Finally, according to Cieslak et al. (2014), the use of S. officinalis root, which is a source of saponin, could reduce the protozoa population. Methane production was estimated from the VFA profiles, as this method gives accurate results with a very low RMSPE and a high coefficient of determination (Jayanegara et al., 2015). The results showed that supplementation of tea leaves powder on FOPF up to 6% did not decrease methane gas production. This is supported by Ramírez-Restrepo et al. (2016), who reported that the addition of tea seed (Camellia sinensis) saponin (TSS) did not affect methane gas emissions. This condition might also be due to the levels of tannin and saponin in the leaves, which were not sufficient to decrease gas production. Based on the research of Krueger et al. (2010), low levels of tannins (hydrolysable tannins) have little or no effect on methane production or on the proportions of acetate, propionate, or butyrate. However, Hassanat and Benchaar (2013) found a contrasting result: methane gas production decreased compared to the control with the addition of up to 40% of acacia, chestnut or valonea (containing tannins ≥ 50 g/kg), or quebracho (containing tannins ≥ 100 g/kg). The varying impacts of tannin on methane production might be due to its chemical structure and its concentration in the plant from which it was obtained (Ramírez-Restrepo et al., 2016). CONCLUSION Supplementation of tea leaves powder at 2% on fermented oil palm fronds did not influence fermentation characteristics, the rumen microbial profile, or methane production in vitro. Tea leaves powder can therefore be used in the ration to optimize the utilization of oil palm fronds.
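As a concrete illustration of the VFA-based methane estimate mentioned above, the sketch below applies a commonly cited stoichiometric relation (Moss et al., 2000) to molar VFA data; the coefficients and the example values are assumptions for illustration, not the exact equation or data used in this study.

```python
# Sketch: estimate methane from a rumen VFA profile.
# Coefficients follow the widely used Moss et al. (2000) stoichiometry, which is an
# assumption here; this study cites Jayanegara et al. (2015) without giving the equation.
def estimate_methane_mmol(acetate, propionate, butyrate):
    """Estimated CH4 (mmol) from molar VFA amounts (mmol)."""
    return 0.45 * acetate - 0.275 * propionate + 0.40 * butyrate

def acetate_propionate_ratio(acetate, propionate):
    """A:P ratio, used in the discussion to characterise the fermentation pattern."""
    return acetate / propionate

if __name__ == "__main__":
    # Hypothetical molar VFA values (mmol), for illustration only.
    ac, pr, bu = 65.0, 20.0, 12.0
    print(f"Estimated CH4: {estimate_methane_mmol(ac, pr, bu):.2f} mmol")
    print(f"A:P ratio: {acetate_propionate_ratio(ac, pr):.2f}")
    # Expressing CH4 as a percentage (as reported in the study) requires a basis
    # (e.g. of total gas or of total VFA) that is not specified here.
```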
2021-08-03T00:05:56.039Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "eb8d16327c8428c47c442e3b5cb32794c2ad08ad", "oa_license": "CCBY", "oa_url": "http://nexusacademicpublishers.com/uploads/files/AAVS_9_7_971-977.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "49fb86e5c6d80bd6bdb9d317210dd469e94caeac", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
139216343
pes2o/s2orc
v3-fos-license
Comparative spectral analysis of the extra-cell matrixes surface of heart valves before and during the process of their decullularization There are presented the results of the application of Raman-scattering spectroscopy method (RS) for the qualitative analysis of rams' heart valves surfaces before and during their decellularization. While analyzing RS spectra, it was found that basic differences appear at wave numbers 812 cm-1, 1062 cm-1, 1340 cm-1 and 1440 cm-1, corresponding to phosphodiester linkage of RNA; OSO-3 corresponds to symmetrical stretching of glycosaminoglycans and chondroitin-6-sulfate; corresponds to deformation mode of proteins and nucleic acids (DNA); proteins, lipids. Optical analysis has shown that while analyzing decellularization on the valves surfaces, the content of glycosaminoglycans, proteins and lipids decreases; retains a high DNA content. It was found that with the aid of entered optical numbers it is possible to control the process efficiency of decellularization of the heart valves. Introduction The issue of treating people's heart valve diseases is one of the priorities of modern medicine. One of the most radical methods of treatment is the replacement of valves [1,2]. However, the quality, design and properties of the prosthetic cardiac valve are constantly improving; they can not be compared with native valves in their properties. Thuswise, clinical cardiosurgery is in need of creating new types of implants and improving the technology of their production. [3,4]. Because of generous amount of complications while using valve bioimplants there is a need of high-quality processing of biomaterials. Decellularization is one of the supplementary methods of tissue engineering of heart valves. This process is aimed at removing cells from the tissue with preservation of extra-cellular matrix and three-dimensional material structure [5,6]. A number of authors consider that in order to reduce the tissue antigenicity during the process of decellularization there is a necessity for absolute elimination of cellular components, in particular: membranes and connected membrane proteins with it, bioplasts, nuclei and nucleic acids that contained in them [7,8]. Currently there is no commonly used methodology for decellularization of heart valves. Furthermore, there is no universally received ways to control its effectiveness. With this object in mind, histological, histochemical, biochemical and immunological methods are currently used. Their main disadvantage is the destruction of the analyzed samples along with the labor-consuming nature and high cost [5][6][7][8]. Consequently, the search for appropriate way of analyzing the qualitative composition of heart valves in the process of decellularization is the priority. The Raman-scattering spectroscopy method can be effective in assessing the decellularization effectiveness of the surface of heart valve samples, since it is able to determine the content of the main matrix components and does not require destruction of the biomaterial during the study [9][10][11]. The goal of this research is to analyze the qualitative composition of the heart valves surface by using the Raman-scattering spectroscopy method before and after their decellularization. Materials and methods of research Aortic valves of sexually mature rams are used as a research material. Valves decellularization was carried out in the modification according to the protocols [11,12,13] in Samara State Medical University (SamGMU). 
There is a phase 1 of decellularization before fermentative treatment and a phase 2 after fermentative treatment. Biomaterial samples were stored in phosphate-buffered saline with antibiotics at a temperature of 4 °C before the study. The spectral characteristics of the samples were studied using an experimental stand including a high-resolution digital spectrometer Shamrock sr-303i with a built-in cooling chamber DV420A-OE and a fiber-optic probe RPB-785 for Raman spectroscopy, combined with a laser module LuxxMaster LML-785.0RB-04 (with controlled output up to 500 mW and a wavelength of 785 nm). A radiation power of 500 mW with an exposure time of up to 300 seconds does not cause destructive changes in the samples. The RPB-785 probe focused the laser radiation on the object at a distance of 7.5 mm from the output window, with a focal spot diameter of less than 0.2 mm, and collected the scattered emission. Results and discussion Figure 2 shows the average RS spectra for the surfaces of aortic valve samples before decellularization (control), and the RS spectra of valve surfaces in the process of decellularization before (step 1) and after (step 2) fermentative treatment of the samples. While carrying out RS spectroscopy of the valve surfaces before and during their decellularization, we obtained qualitatively identical Raman bands corresponding to certain oscillation modes (Table 1). From Fig. 2 it can be seen that there was a reduction of intensity during the first phase of decellularization at the wave numbers 812 cm-1, 1062 cm-1 and 1440 cm-1, corresponding to the RNA phosphodiester linkage, the OSO3- symmetric stretching of glycosaminoglycans (chondroitin-6-sulfate), and proteins and lipids, respectively. After the second phase of decellularization was completed, a slight decrease in intensity was also noted at the wave number 1340 cm-1, corresponding to the deformation mode of proteins and nucleic acids (DNA). The relatively constant component of the sample surfaces before and during decellularization was amide III, corresponding to the line intensity at the wave number 1246 cm-1 [14]. Therefore, this intensity was used as the denominator in calculating the introduced optical number (coefficient) f = I_i / I_1246, where I_i is the intensity at the wave number of the analyzed component and I_1246 is the intensity at the wave number of amide III. Figure 3 presents two-dimensional diagrams of the optical numbers (coefficients) reflecting the change in the composition of the main surface components of aortic heart valves before and during the process of decellularization at different phases. The two-dimensional dependence analysis showed that over the phases of decellularization there was a gradual decrease of the optical numbers (coefficients) I_812/I_1246, I_1062/I_1246 and I_1440/I_1246 in comparison with the values obtained for the surfaces of intact aortic valves. The optical number (coefficient) I_1340/I_1246 decreases only slightly even after the second phase of decellularization. Conclusions During the study of the surfaces of aortic valves before and during their decellularization by RS spectroscopy, it was found that even after the first phase of decellularization there was a decrease in intensity at the wave numbers 812 cm-1, 1062 cm-1 and 1440 cm-1, corresponding to the RNA phosphodiester linkage, the OSO3- symmetric stretching of glycosaminoglycans (chondroitin-6-sulfate), and proteins and lipids.
After the second phase of decellularization was completed, only a slight decrease in intensity was noted at the wave number 1340 cm-1, corresponding to the deformation mode of proteins and nucleic acids (DNA). By introducing optical numbers (coefficients) and applying two-dimensional analysis, we assessed the effectiveness of the decellularization process of the aortic valves. This effectiveness was indirectly manifested as a decrease in the content of lipids, proteins and glycosaminoglycans on the surface. Nevertheless, the DNA content of the valve samples had decreased only insignificantly even by the completion of the second phase of decellularization. With the help of the introduced optical numbers (coefficients), it is possible to control the effectiveness of the decellularization process of heart valves.
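As a worked illustration of the coefficient defined above (f = I_i / I_1246, with amide III as the internal reference), the sketch below computes the four reported ratios from a measured spectrum. Function and variable names are illustrative; the arrays would come from the spectrometer export, assumed here to be baseline-corrected.

```python
import numpy as np

# Sketch: compute the optical coefficients I_i / I_1246 described in the text.
# `wavenumbers` and `intensities` are assumed to be parallel arrays exported
# from the Raman spectrometer after baseline correction.
def band_intensity(wavenumbers, intensities, target_cm1):
    """Intensity at the point closest to a target Raman shift (cm-1)."""
    idx = int(np.argmin(np.abs(np.asarray(wavenumbers) - target_cm1)))
    return float(intensities[idx])

def optical_coefficients(wavenumbers, intensities, bands=(812, 1062, 1340, 1440), ref=1246):
    """Return {band: I_band / I_ref} with amide III (1246 cm-1) as reference."""
    i_ref = band_intensity(wavenumbers, intensities, ref)
    return {b: band_intensity(wavenumbers, intensities, b) / i_ref for b in bands}

# Example with synthetic data (illustration only):
wn = np.linspace(600, 1800, 1200)
spec = np.random.default_rng(0).random(wn.size) + 1.0
print(optical_coefficients(wn, spec))
```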
2019-04-30T13:07:41.582Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "e1732d66da59fcb366845c36cf06e6cb494d3a8d", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1038/1/012079", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4dd9cd89e456e4530af2e872076620ec1795162b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
214753685
pes2o/s2orc
v3-fos-license
Novel T Cell Epitope Designing from PPRV HN Protein for Peptide based Subunit Vaccine: An Immune Informatics Approach Peste-des-petits ruminants (PPR) is a disease of small ruminants, especially goats, and its control and eradication by 2030 requires extensive research to develop a potent vaccine. The role of the surface HN protein in the attachment of the virus to cellular receptors makes it an appropriate target for developing theranostics against the virus. In this study, cytotoxic T cell epitopes that may bind to MHC class I alleles were predicted using bioinformatic tools. Ten immunogenic peptides were predicted using the IEDB web server based on their binding with cow (BoLA) alleles. Among these predicted peptides, five immunogenic epitopes, i.e. 429SVFGPLIPHL438, 86HQTKDVLTPL95, 261RDLGLGPPVF270, 432GPLIPHLSGM441 and 555VRLNFKGNPL564, were selected on the basis of their high percentile score. Predicted three-dimensional (3D) models of the PPRV HN protein and the SLAM receptor were built and used to dock the immunogenic epitopes and to predict the docking site in the structure. Furthermore, the use of these predicted epitopes in experiments may lead to the creation of novel potent vaccines and diagnostic tools against PPR. Introduction Peste des petits ruminants (PPR) is an acute, highly contagious and morbid viral disease of goats and other small ruminants caused by the PPR virus, which belongs to the genus Morbillivirus and affects livestock in more than 70 countries (Kumar et al., 2014; Prajapati et al., 2019). PPR outbreaks occur mainly during winter (Singh et al., 2014), and seroprevalence has been reported throughout the country (Balamurugan et al., 2014a; Hota et al., 2018; Pal et al., 2014; Saritha et al., 2014). The economic impact of PPR has already been reported, and its control may help poor farmers in their growth (Staal et al., 2009; Kamel and El-Sayed, 2019). The surface protein haemagglutinin-neuraminidase (HN) of the PPR virus is involved in virus attachment and induces acquired immunity in the host (Yu et al., 2017). The inhibition of HN, resulting in restriction of its attachment, may help to control the disease. Using conserved epitopes to develop a potent vaccine is a concept that has been applied in the control of various harmful diseases (Gershoni et al., 2007; Iurescia et al., 2012; Abu haraz et al., 2017; Tahir et al., 2019). Therefore, prediction and analysis of novel epitopes of the PPRV HN protein is a crucial step towards developing a peptide subunit-based vaccine, antiviral peptides and diagnostic tools.
In this study, cytotoxic T cell epitopes of the PPRV HN protein that may bind to MHC class I alleles were predicted using immunoinformatics tools, and docking was performed to find their predicted docking site. The main motive of this study is to design novel antiviral peptides or a multi-epitope vaccine that restricts virus attachment and helps in the control and eradication of PPR. Prediction of T cell epitopes The Immune Epitope Database (IEDB) prediction tools (http://tools.iedb.org/mhci/) were used to predict cytotoxic T lymphocyte (CTL) epitopes of the PPRV HN protein from the retrieved sequence (609 AA residues) that may interact with MHC (major histocompatibility complex) class I alleles (Lundegaard et al., 2008). Eight different cow alleles, i.e. BoLA-T2a, BoLA-T5, BoLA-T2b, BoLA-D18.4, BoLA-T2c, BoLA-JSP.1, BoLA-T7 and BoLA-HD6, were used for the analysis with the netmhcpan_el4.0 method to predict binding affinity. The peptide length was fixed at 10 amino acids, and the cut-off percentile rank was set in the range of 1-4 during the prediction of T cell epitopes. Three-dimensional (3D) modeling and docking Due to the non-availability of a crystal structure of the PPRV HN protein in PDB format, a predicted 3D model of the PPRV HN protein was built from the retrieved sequence using the SWISS-MODEL online server (https://swissmodel.expasy.org/interactive) (Biasini et al., 2014). Similarly, a predicted 3D model of the SLAM receptor was built from the retrieved sequence (338 amino acids) in intensive mode using the Phyre 2.0 web server (http://www.sbg.bio.ic.ac.uk/phyre2/html/page.cgi?id=index) (Kelley et al., 2015). Furthermore, docking was performed between the predicted peptide sequences of the PPRV HN protein and the predicted 3D model of the SLAM receptor using the HPEPDOCK online web server (http://huanglab.phys.hust.edu.cn/hpepdock/) (Zhou et al., 2018). Predicted cytotoxic T cell epitopes In this study, using the MHC-I binding prediction method of the IEDB web server, the surface immunogenic epitopes of the PPRV HN protein were predicted. A total of 10 fixed-length T-cell epitopes predicted to interact with various cow (BoLA) alleles were selected, along with their percentile rank and position. Among them, one epitope was predicted to interact with three BoLA alleles and 520MDLRYITATY529 with two. The peptide epitopes 429SVFGPLIPHL438 and 555VRLNFKGNPL564 obtained the highest percentile ranks of 3.8 and 3.3, respectively, in comparison to the other predicted T cell epitopes, indicating the most probable potent immunogenic T cell epitopes of the PPRV HN protein. 3D modeling of PPRV HN protein and SLAM receptor The 3D model of the PPRV HN protein created by SWISS-MODEL revealed a sequence identity of 39.48% with the measles virus (MV) haemagglutinin (H) protein using template 2zb5.1 (crystal structure of the measles virus hemagglutinin) (Fig. 1A). Furthermore, the final model possesses an overall coverage of 0.76 and a sequence similarity of 0.40. The QMEAN, Cβ, solvation and torsion values of the model were recorded as -4.89, -1.06, -1.84 and -3.97, respectively, under global quality estimation. In addition, the 3D model of the SLAM receptor built in intensive mode using Phyre 2.0 had 72% of amino acids modeled with greater than 90% confidence using template c2druA (crystal structure and binding properties of the cd2 and cd244 (2b4) binding protein, cd48) (Fig. 1B). Docking of predicted T cell epitopes on SLAM receptor From the list of predicted immunogenic epitopes, five epitopes, i.e. 429SVFGPLIPHL438, 86HQTKDVLTPL95, 261RDLGLGPPVF270, 432GPLIPHLSGM441 and 555VRLNFKGNPL564, were docked onto the predicted model of the SLAM receptor (Fig. 2).
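A minimal sketch of the selection step described above (length-10 peptides, a percentile-rank cut-off, and a preference for epitopes predicted to bind several BoLA alleles) is given below. The example records are hypothetical placeholders rather than rows from the study's table, and the assumption that lower percentile ranks indicate stronger predicted binding follows the usual IEDB convention rather than anything stated in the text.

```python
# Sketch: apply the epitope selection criterion to IEDB MHC-I prediction output.
# Each record: (peptide, start position, percentile rank, set of binding BoLA alleles).
# The rows below are hypothetical placeholders, not values from the study's table.
predictions = [
    ("AAAAAAAAAL", 12, 0.8, {"BoLA-T2a"}),
    ("CCCCCCCCCL", 57, 2.4, {"BoLA-T2b", "BoLA-HD6"}),
    ("DDDDDDDDDL", 93, 3.9, {"BoLA-T5"}),
]

def select_epitopes(rows, max_percentile=4.0, length=10, top_n=5):
    """Keep length-10 peptides within the percentile cut-off, then rank by the
    number of predicted binding alleles and (assumed) lower-is-better percentile."""
    kept = [r for r in rows if len(r[0]) == length and r[2] <= max_percentile]
    return sorted(kept, key=lambda r: (-len(r[3]), r[2]))[:top_n]

for pep, start, pct, alleles in select_epitopes(predictions):
    end = start + len(pep) - 1
    print(f"{start}{pep}{end}: percentile {pct}, alleles {sorted(alleles)}")
```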
Discussion Due to huge economic consequences and morbidity rate, the control and global eradication of the PPR has been initiated. The most effective approach to control and eradicate the PPR is vaccination of livestock. To develop a potent multiepitopic vaccine, accurate prediction of the surface epitopes is a crucial step. Most of the morbilliviruses infection leads to the immunosuppression which may be protected by cell-mediated and humoral immune response against specific surface protein (Naik et al., 1997). In the study, the cytotoxic T cell epitopes were predicted out using cow (BoLA) alleles in order to bind with MHC-I alleles and their docking with SLAM receptor was performed using bioinformatics tools. Earlier studies reported the use of various animal and human alleles to predict the immunogenic T cell epitopes against multiple diseases using immunoinformatics (Patronov and Doytchinova, 2013;Liu et al., 2017;Idris et al., 2018;Abd Albagi et al., 2017;Prabdial-Sing et al., 2012;Ahmad et al., 2019;Prasasty et al., 2019). Previously, a docking between the MHC 1 and T cell peptide of chimeric protein of colorectal cancer using HPEPDOCK server was performed and results indicate a successful interaction with a docking score of −209.839 (Hassan et al., 2020). Furthermore, the H protein of the MV is highly homologous to the PPR HN protein and the interaction of the head domain of MV H protein with the SLAM receptor was reported earlier (Hashiguchi et al., 2011). As per reports, due to absence of adequate bioinformatics tools, only 10% of the predicted T cell epitopes were found immunogenic (Zhong et al., 2003). These findings indicated that the predicted epitopes might be able to bind and restrict the virus attachment and may work as potent antiviral agents. However, in-vitro and in-vivo experiments must be needed to validate and confirm the immunogenic epitopes that potentially binds to the MHC molecules. In this study, ten immunogenic peptides were predicted as T cell epitopes using IEDB web tool. Out of these predicted peptides, five potent epitopes i.e. 429 SVFGPLIPHL 438 , 86 HQTKDVLTPL 95 , 261 RDLGLGPPVF 270 , 432 GPLIPHLSGM 441 and 555 VRLNFKGNPL 564 were identified on the basis of their high percentile score and their probable binding affinity with multiple BoLA alleles. 3D model of the PPRV HN protein and SLAM receptor was built and docking of the predicted immunogenic peptides was done using these predicted models. Furthermore, experimentation using these predicted epitopes will lead to designing of specific theranostics tools which helps in the control and global eradication of the PPR. Research Institute, Izatnagar Bareilly U.P. for their support and cooperation.
2020-04-02T08:28:45.892Z
2020-03-20T00:00:00.000
{ "year": 2020, "sha1": "77bcd600b86be44dddb5e3e0348637bdc3039dda", "oa_license": null, "oa_url": "https://www.ijcmas.com/9-3-2020/Aditya%20Agrawal,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "77bcd600b86be44dddb5e3e0348637bdc3039dda", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
56030548
pes2o/s2orc
v3-fos-license
THE STUDY OF PARTICIPANTS' VALUES CONVERGENCE ON THE EXAMPLE OF INTERNATIONAL SCIENTIFIC PROJECT ON CYBER SECURITY 4 this infrastructure can be a project office on the basis of major enterprises and organizations [2]. Since such projects are mainly aimed at the development of new, innovative methods and models, while scientific organizations play a crucial role, such projects can be referred to as scientific. Rapid changes that take place in Ukraine and in the world today lead to a reasessment of values of all social and economic groups, as well as at the personal level. This results in significant changes in the strategies of behavior of economic entities in all areas of economic, social and political life, which causes the need for improvement of existing and creation of new methods and approaches to making management decisions to ensure effective interaction between project participants. The difference in the values of participants is especially vivd in the course of implementation of international projects. It is significantly influenced by different conditions of development of countries as a whole, as well as social groups and individuals in these countries. There is an extensive practice of remote control of international projects through modern means of communication, which, on the one hand, greatly simplifies and accelerates the processes of interaction Introduction Efficient activity of organizations under conditions of today's information society requires the formation of developed business environment, open to international cooperation.It is necessary to take into account rapid development of new technologies of information wars, operating around the world.Thus, we face the task of permanent development of new models and methods of cyber security at the international level, the implementation of which will allow enhancing a culture of information security of project participants.A solution to this problem may be found only under conditions of international cooperation and joined efforts in development and implementation of new methods and models in this area [1]. Establishing effective international cooperation is impossible without the use of project management, the main task of which in this case will be to ensure efficient interaction between project participants for its successful implementation.A determining factor of solving this problem is the formation of distributed information infrastructure as a catalyst for the application of methods of project management.The center of Thus, the relevance of this study is explained by rapid development of international cooperation on cyber security, which takes place in the project environment.The key to successful implementation of such projects is establishing effective interaction between all participants, which requires development of new methods and models in this direction. Literature review and problem statement Today there are a lot of articles in the field of cyber security relating to the technical part of this issue.The number of studies that highlight scientific and organizational issues of implementing projects on cyber security is much smaller.The main organizations-participants of scientific projects on cyber security are the state and scientific institutions, as well as business organizations. 
An analysis of the regulation of information security at the state level in economically developed countries, performed in [3], revealed that the safety rules may be comprehensive or partial, strategic or tactical, reactive or proactive.Some countries define security goals only; others have efficient mechanisms of risk management in this area.There are different approaches to determining the protection of personal data and privacy.These national differences are influenced by cultural norms of the society and have different advantages and disadvantages.This causes complexity in the implementation of international projects in this field. The formation of systems of information security in business organizations is described in many papers, for example, [4] proposed methodology to assist companies in evaluating their compliance with the international standards of ISO, as well as in planning and implementation of actions necessary to ensure certification by these standards.It may be a prerequisite for participation in the projects of cyber security. Scientific institutions are also important participants in certain projects as the sources of creation of new information and development of new products and services through effective use of information.Article [5] deals with the problems of infrastructure, operation, use of information, in accordance with the standards of information security of research institutions.The implemented method of risk analysis allows comparing security systems of different universities, which may determine the degree of their readiness to participate in the project activities of cyber security. In [6], it is noted that the implementation of measures on cyber security must be supported by the scientific approach to the standardization of these processes, both in the field of information technology and in the field of project management. The use of value-oriented approach to the projects implementation is considered in [7], where models of development of an organization and its corporate systems of project management, based on the value approach under conditions of turbulence, are presented.Paper [8] is also of interest since it offers a value-homeostatic approach to the assessment of project decisions, taking into account priority of expected values and the values that are contributed by participants, as well as compliance of the values, formed by a project, with participants' expectations.Article [9] in this field contains a value-oriented analysis of decision-making in managing projects and makes it possible to take into account the level of prevailing value memes in projects at certain points of their implementation and to perform calculations of value assessment of the project product more accurately. 
In addition, there are a large number of scientific papers, in which researchers put material sense to the concept of project value.This is proved by article [10], in which, based on the conducted analysis, the modern researchers' concentration on issues of maximization of commercial value was defined and further directions of business development were revealed.In [11], the difference between a value of the project and the value of project management was determined, taking into account changes in values of stakeholders in the course of project implementation.The study of value of project management itself, presented in [12], proves the existence of high-efficiency project management, but reveals the problem of different values in a project team, which corresponds to the set goal. The overview we performed indicates that the studies, carried out in the chosen direction, are at their initial stage and are relevant for further development.There is no mechanism for managing the values of project participants, which would take into account intangible values that play a key role in international projects. The aim and tasks of the study The aim of this study is to define methods of forming the participants' values system on the example of training for the participation in the NATO grant program "Science for Peace and Security Programme (SPS), topic: Cyber security" at Chernyhiv National Technological University.The aim also includes development of the apparatus of effective cooperation of major project participants on the basis of convergence of values.This will allow ensuring compliance of the values, obtained as a result of the project implementation, with the personal planned values of each project participant, which is the key measure of its success. To achieve the set goal, it is necessary to solve the following tasks: -to identify main participants of scientific projects on cyber security and to define their values; -to propose a method for forming a universal system of values of the project with the definition of the core of such system; -to propose a model for determining the degree of convergence of values of project participants and to create the mechanism for making project decisions to ensure their effective interaction; -to develop recommendations concerning a response of the project manager to the changes in the values of participants, taking into account the degree of their convergence. Materials and methods of examining the problem of the formation of the system of values of participants on the example of international scientific project of cyber security Since the concept of "value" is associated with the axiological category "evaluation" -measurement (appreciation or rejection), the value of a project product is usually determined by the ouput it produces and passes onto a product while meeting the requirements contained in the project mission.In the practice of project management, the value of a project product and the value of project management are distinguished.Both of these assets can be used to gain certain benefits. There are two necessary conditions that warrant creation of the project value.The first one is the practical ability of the project manager to complete the project in accordance with the plan; the second one is finding a way to harmonize (balance) the project value for all stakeholders through the properties of a project product.The first condition is mandatory, whereas the second one is a sufficient condition for creating the project value. 
Taking into account the individuality of perception and life experience, it is possible to argue that each person has its own unique model of perception and information processing depending on what this person finds valuable in this world. Therefore, at the present stage of development of project activity, the main factor of the project success is active participation of stakeholders in approving and making key decisions in the course of project implementation.Each stakeholder, as well as the project team, expresses different values that define different objectives and outcomes of the project.In addition, over the life cycle of the project there appear turbulence and migration of values of stakeholders, and in the project situations it is necessary to make decisions on the basis of project indicators, which should reflect harmonized value of all stakeholders. In this case, it is determined in [13] that the provision of information security must be implemented not only with the help of technical and technological means, but also through the formation of culture of information security of participants of information processes, based on their values. Paper [9] contains a formulated scientific fact that in project situations, stakeholders form a vision of a variant of project continuation without necessary regard to strategically-service values of the project, taking practically only personal values as a basis.This fact puts forward new problems to the project team, which should contribute to more grounded scientific approach to decision making by stakeholders of the project, otherwise there is a real threat to its successful implementation. Practical experience also proves that the majority of stakeholders cannot establish their own system of values independently, so there is a need for active participation of the project team that shapes and directs such a system in accordance with the overall project strategy, which requires special methods and models. It is known that success of a project depends on making project decisions by a project team and approving them by key participants.This can be achieved through ensuring the convergence of values of all participants of the project. The lack of convergence of values in a project could lead to such consequences as delays and overexpenditures, connected with delays in the process of making constructive decisions.The manager and the project team should develop an ongoing process of tracking the convergence of values and focus on timely response to any deviations. In international projects, it is possible to form and examine values in three categories: -the values contributed to project by its participants (competence, experience, investments); -the values of project itself, which are formed from the totality of values of participants of the project, taking into account synergic effect; -the values that the project participants and consumers gain from its implementation. 
Results of studying and forming a model for the system of values of participants on the example of international scientific project of cyber security In the process of preparation for the participation in the NATO grant program "Science for Peace and Security Programme (SPS), topic: Cyber security", the key potential participants of the project were defined.They were selected on the basis of the main goals of cooperation between NATO and partner countries in the field of cyber security, namely: -improving the efficiency of NATO and partner countries in the field of protection of crucial communication and information infrastructures against cyber attacks; -laying the foundations for support measures in cases of cyber attacks; -provivion of assistance in restoring normal functioning of a correspondent infrastructure after cyber attacks. The formed system of values of participants of the international scientific project on cyber security is shown in Fig. 1.Taking into account scientific orientation of the given project, it is proposed to create a management system of this project on the basis of leading universities, which have experience and capabilities to carry out scientific research in the field of cybersecurity. Fig. 1. Model of creating the system of international scientific project participants' values system The values of the key stakeholders are given in Table 1.It is clear that it is difficult to consider all values of participants, so only those, which provide for the possibility to execute projects within the framework of international programs of cyber security, were selected in the table. Taking the example listed in V is formed at the intersection of sets of values of project's stakeholders, who define the unity in pursuit of the project implementation. where This system forms the core of the project.The formation of such core can take place only with the active participation of the project manager, since not all project participants have a clearly formed list of values of the organization, and participation in the project can lead to the creation of new values that the heads of organizations and institutions may not be aware of. So, the formed core of values is necessary to evaluate in terms of its completeness to ensure implementation of international projects on cyber security. Discussion of results of modeling. Determination of the degree of convergence of project participants and methods for ensuring their effective interaction It should be noted that similar elements that are at the intersection of the systems can have the same name but be completely different by the semantic content.For example, the value of "security" is located at the intersection of all sets, but has a different degree of value to participants.The approximation of these concepts in the project is not simple averaging of their point-scoring assessment, but can take place through evaluation of the degree of convergence of these concepts for various participants [14].In this case, the degree of convergence can be defined using Euclidean distance.To do this, it is possible to perform the normalization of indices of the system elements, or to carry out point-scoring assessment of a value in terms of significance for each participant.It is also necessary to take into account the value of each element for a project, and then assessment of degree of convergence (Con) can be performed through Euclidean distance to the core of the project. 
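One plausible way to write this Euclidean distance between two participants' assessments of the core values (the exact formula is an assumption here, reconstructed from the definitions given below) is:

$\mathrm{Con} = \sqrt{\sum_{i}\left(A_i - B_i\right)^{2}}$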
where A i and B i are the degree of value of the i-th element for two different project participants; N is the number of project participants. Table 1 Values of the international scientific project on cyber protection's (security) stakeholders V g 1 -increasing resources of all entities on the territory of the state; V g 2 -territory and resources safety from all external claims; V g 3 -strengthening instruments capable of providing safety ; V g 4 -increasing life quality of all entities on the territory of the state; V g 5 -protection of living environment on the territory of the state. V e 1 -values, for the sake of which organization was created; V e 2 -increasing over time of all kinds of resources that are under control of organization; V e 3 -defense of own development under conditions of competitive economic environment; V e 4 -desire to find loyalty of consumers, state and humanity as a whole. V io 1 -consistent and harmonious development; V io 2 -providing global product safety; V io 3 -contribution to solving problems connected with global economic crisis; V io 4 -providing political dialogue on international economic and financial issues; V io 5 -saving system of stability of international trade; V io 6 -contribution to development of economically poor regions and countries. V if 1 -support of international cooperation and connections in area of scientific and technical development, socio-economic policy. V if 2 -financing and support of international projects and developments; V if 3 -strengthening development of democracy and civil society. V s 1 -uniformity in assessment authenticity of scientific research; V s 2 -scientific research should become common gains; V s 3 -responsibility for scientific research and developments to society; V s 4 -regulation of activity that accompanies knowledge production, complying with ethical norms of conducting research; V s 5 -formation of own image as social institution; V s 6 -orientation to consumer needs V is 1 -organization of communication process among scientific organizations in various countries; V is 2 -cooperation of scientists of various countries; V is 3 -pursuit of integration of international information streams into united system of knowledge. Then, after forming the core of the project, the degree of convergence can take into account the weight of each factor for the project (balanced convergence Con bal ). where k j is the value of the i-th element for the project.The value of certain element k j for the project can be determined by using such parameters as degree of influence on the outcome of the project and elasticity.The project team may conduct such assessment using the method of paired comparisons. Experimental evaluation of degree of convergence of participants of the project of preparation for the international program of cyber safety is shown in Fig. 2. Fig. 2. Cyber security project participants' values convergence We can see from the diagram given in Fig. 
2 that even at the initial stage of the project of preparation for the implementation of programme on cyber security, there is a high degree of discrepancy between the values of business enterprises and organizations and the values of scientific organizations.It is clear that this is caused by essential differences in such values as gaining profit and replenishing resources.In this case, convergence in the core of values shows more convergence between the values of the state and scientific institutions.Here the project manager must work better to ensure interaction between commercial organizations and state authorities and scientific institutions. Resulting indicators of degree of convergence consequently will require constant monitoring and adjustments with the aim of convergence of values of the project participants to ensure a high level of motivation and successful project implementation.In this case, the value of the index of convergence degree can be characterized as it is listed in Table 2.The table also contains recommendations for the actions of the project manager in the identified situations.The limits for these indices are established considering the specifics of projects of cyber security. Further formation of the values' core can take place with the use of the method of structural matrices [15], which is based on the statement that any system is a set of input, core and output.In addition, useful factors and obstacles are distinguished at the input.The matrix of influences in Fig. 3 is plotted on the basis of the values, defined for the examined project taking into account the overall system (Table 1) and the cores of values of the project (1).Other factors are distributed between the input and the output.If the distribution into these groups is possible, this is a systems organization, if not -these are indirect links.Only direct links are specified in the matrix.We will divide all selected values into three groups: 1 st group includes the values that influence other values of the examined process, but are not exposed to influences (3,5,7). Then we relocate elements of the matrix so that the diagonal elements, which belong to the first group, are located in the first diagonal minor of the matrix, those belonging to the second group -in the second minor, those belonging to the third group -in the third minor. In our case, the matrix will take the form, presented in Fig. 4. It is clear that elements of links of the second minor create closed contours on the matrix, and the values corresponding to these elements will constitute the "core of the system".The values that belong to the first minor constitute "inputs" of the system, and those belonging to the third minor constitute the "outputs" of the system.In turn, values of the inputs and outputs may belong to the cores of other subwhich, together with the examined system, create a system of the higher level. Thus, we obtained the core of the system, which consists of the values of augmenting resources, security, enhancing life quality, gaining profit, sustainable development and responsibility.Input values х 3 , х 5 , х 7 can be considered not only as resources, but also as obstacles, which, in fact, are limitations of the project.For the examined project, these factors are limitations. 
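The input/core/output partition described above can also be computed programmatically: values that lie on closed contours (cycles) of the influence matrix form the core, values with only outgoing influence are inputs, and the rest are outputs. The sketch below uses networkx for the cycle detection; the influence data shown are a small illustrative example, not the exact matrix of Fig. 3 and Fig. 4.

```python
import networkx as nx

# Sketch: classify values into inputs, core and outputs from a binary influence matrix.
# influence[i] is the set of values directly influenced by value i (illustrative data;
# here 3, 5 and 7 only influence others, mirroring the inputs named in the text).
influence = {
    1: {2}, 2: {4}, 4: {6}, 6: {1, 8}, 8: {6, 10},
    3: {2}, 5: {4}, 7: {6},
    10: set(),
}

G = nx.DiGraph()
for src, dsts in influence.items():
    G.add_node(src)
    for dst in dsts:
        G.add_edge(src, dst)

# Core: nodes lying on at least one closed contour, i.e. in a non-trivial strongly
# connected component of the influence graph.
core = {n for scc in nx.strongly_connected_components(G) if len(scc) > 1 for n in scc}
inputs = {n for n in G if n not in core and G.in_degree(n) == 0}
outputs = set(G) - core - inputs
print("inputs:", sorted(inputs), "core:", sorted(core), "outputs:", sorted(outputs))
```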
Taking into account the fact that informational or organizational and technical systems are under consideration, project managers may, in the defined order, change links independently, on the basis of their own experience and results of discussion of these factors with stakeholders.In our case, we see that other 3 subsystems that make up the values 2 and 4; 4 and 6; 6 and 8, 6 and 1, 2 and 14 are distinguished inside the core.In this case, there is a question of importance of the links between factors 1 and 6, because for the subsystem of factors 6 and 8, one input for entering of this subsystem to the core is enough. The defined core will require further comparison with the demands and capabilities of all participants of the project on cyber security. After initial harmonization of the system of values of the project, in order to ensure its further development, new necessary components of the systems of values of participants are added to the defined set.As a result, a complementary system of project values CV p is formed: In the process of project implementation, there is a transformation (correction) of the values of project participants into the outcome.A system of values p V that contains those elements of the overall system of values of participants, for gaining of which the project was implemented: As a result of project implementation, there are also new project values V ps that appear due to synergy.Therefore, the outcome of project implementation will be the system of values V pr , which will be effective only inder condition of constant monitoring of convergence of values over entire life cycle: The effectiveness of project implementation in the system of values can be determined by ratio: The given model of formation and transformation of the system of values of the project participants makes it possible to support decision making process on the project, taking into account the values of all project participants, and, due to its constant alignment, to maintain a high degree of interest of participants in successful project implementation. To do this, it is necessary to implement the following logical steps: -to determine the degree of project participants' values convergence; -to form requirements to the level of convergence of project values and to make changes in the documentation for the project; -to carry out regular monitoring of changes in the degree of convergence of the project values over its life cycle; -to implement timely actions in case of deviations. Thus, the research we conducted demonstrated the possibilities of forming a universal system of values of international projects on cyber security for establishing effective cooperation of all stakeholders.Comprehensible mathematical apparatus of the proposed method for determining convergence of values of the project provides for the opportunity for its application by the project manager of any level.Results of the study might be used in other projects of varying complexity. Further research in this direction might be aimed at exploring changes in the convergence of values during project implementation and at developing measures to control the overall system of values of the project in accordance with these changes. Conclusions 1. Defining the key participants of international scientific projects on cyber security and their values is the basis for further modeling of the universal system of values of the project, ensuring effective interaction between them and enhancing a general culture of information security. 2. 
The proposed method for the formation of values of the project requires active participation of the manager to ensure effective interaction between all participants. This is especially true for international projects that have maximum number of differences. Selecting the core of the system using the methods of structural matrices provides for the possibility to ensure completeness of the overall system of values that will actually allow implementing the project on cyber security. 3. The model for determining a degree of convergence of values of participants of the project on cyber security allows us to monitor the process of definition, transformation and change of the values during project implementation and is the basis for building up an effective system of monitoring the convergence of values of the project participants. 4. The given recommendations that address a response of the project manager to changes in the values of participants with regard to a degree of their convergence should significantly enhance efficiency of the processes of project planning and make it possible to take timely and necessary decisions on establishing (renewing) efficient interaction between all project participants.
Fig. 3. Interconnections in the system of cyber security project participants' values.
Table 2. Determining a degree of cyber security project participants' values convergence.
2018-12-11T15:10:22.410Z
2016-12-19T00:00:00.000
{ "year": 2016, "sha1": "05b5f18d65da5a425f466a9750a337166a6f8bc8", "oa_license": "CCBY", "oa_url": "http://journals.uran.ua/eejet/article/download/85215/82318", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "05b5f18d65da5a425f466a9750a337166a6f8bc8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
252816022
pes2o/s2orc
v3-fos-license
Graph Neural Network Policies and Imitation Learning for Multi-Domain Task-Oriented Dialogues Task-oriented dialogue systems are designed to achieve specific goals while conversing with humans. In practice, they may have to handle several domains and tasks simultaneously. The dialogue manager must therefore be able to take into account domain changes and plan over different domains/tasks in order to deal with multi-domain dialogues. However, reinforcement learning in such a context becomes difficult because the state-action dimension is larger while the reward signal remains scarce. Our experimental results suggest that structured policies based on graph neural networks combined with different degrees of imitation learning can effectively handle multi-domain dialogues. The reported experiments underline the benefit of structured policies over standard policies. Introduction Task-oriented dialogue systems are designed to achieve specific goals while conversing with humans. They can help with various tasks in different domains, such as seeking and booking a restaurant or a hotel (Zhu et al., 2020). The conversation's goal is usually modelled as a slot-filling problem. The dialogue manager (DM) is the core component of these systems that chooses the dialogue actions according to the context. Reinforcement learning (RL) can be used to model the DM, in which case the policy is trained to maximize the probability of satisfying the goal (Gao et al., 2018). We focus here on the multi-domain multi-task dialogue problem. In practice, real applications like personal assistants or chatbots must deal with multiple tasks: the user may first want to find a hotel (first task), then book it (second task). Moreover, the tasks may cover several domains: the user may want to find a hotel (first task, first domain), book it (second task, first domain), and then find a restaurant nearby (first task, second domain). One way of handling this complexity is to rely on a domain hierarchy which decomposes the decision-making process; another way is to switch easily from one domain to another by scaling up the policy. Although structured dialogue policies can adapt quickly from one domain to another (Chen et al., 2020b), covering multiple domains remains a hard task because it increases the dimensions of the state and action spaces while the reward signal remains sparse. A common technique to circumvent this reward scarcity is to guide the learning by injecting some knowledge through a teacher policy. Our main contribution is to study how structured policies like graph neural networks (GNN) combined with some degree of imitation learning (IL) can be effective to handle multi-domain dialogues. We provide large scale experiments in a dedicated framework (Zhu et al., 2020) in which we analyze the performance of different types of policies, from multi-domain policy to generic policy, with different levels of imitation learning. The remainder of this paper is structured as follows. We present the related work in Section 1. Section 2 presents our structured policies combined with imitation learning. The experiments and evaluation are described in Sections 3 and 4, respectively. Finally, we conclude in Section 5. Figure 1b: Domain-specific decision module. These works adopted the Domain Independent Parametrisation (DIP) that standardizes the slots representation into a common feature space to eliminate the domain dependence. It allows policies to deal with different slots in the same way.
It is therefore possible to build policies that handle a variable number of slots and that transfer to different domains on similar tasks (Wang et al., 2015). Our contribution differs from Chen et al. (2020b) on three points: first, we perform our experiments on CONVLAB (Zhu et al., 2020), which is a dedicated multi-domain framework; second, the dialogue state tracker (DST) output is not discarded when activating the domain; third, we adapt the GNN structure to each domain by keeping the relevant nodes while sharing the edge weights. The reward sparsity can be bypassed by guiding the learning through the injection of some knowledge via a teacher policy. This approach, called imitation learning (IL) (Hussein et al., 2017), ranges from pure behaviour cloning (BC), where the agent only learns to mimic its teacher, to pure reinforcement learning (RL), where no hint is provided (Shah et al., 2016; Hester et al., 2018; Gordon-Hall et al., 2020; Cordier et al., 2020). Extended GNN Policies with Imitation We adopt the multi-task setting as presented in CONVLAB, in which a single dialogue can have the following tasks: (i) find, in which the system requests information in order to query a database and make an offer; (ii) book, in which the system requests information in order to book the item. A single dialogue can also contain multiple domains such as hotel, restaurant, attraction, train, etc. Our method, illustrated in Figure 1, is designed to adapt: (i) at the domain level (i.e. be scalable to changes in the number of slots), and (ii) at the multi-domain level (i.e. be scalable to changes of domain). For each dialogue turn, it works as follows: first, the DST module chooses which domain to activate. Then, the multi-domain belief state (and action space) is projected into the active domain (i.e. only the DIP nodes corresponding to the active domain are kept) as shown in Figure 1a. Afterwards, we apply the GNN message passing as in Chen et al. (2020b), but only among the domain-specific DIP nodes in the decision-making module (Figure 1b). GNN Policies The GNN structure we consider is a fully connected graph in which the nodes are extracted from the DIP. We distinguish two types of nodes: the slot nodes representing the parametrisation of each slot (denoted as S-NODE) and the general node representing the parametrisation of the domain (denoted as I-NODE, for slot-Independent node). This yields three types of edges: I2S (for I-NODE to S-NODE), S2I and S2S. This abstract structure is a way of modelling the relations between slots as well as exploiting symmetries based on weight sharing (Figure 1b). Imitation Learning In addition to the structured architecture, we use some level of IL to guide the agent's exploration. In our experiments, we used CONVLAB's handcrafted policy as a teacher (or oracle), but other policies could be used as well. Behaviour cloning (BC) is a pure supervised learning method that tries to mimic the teacher policy. Its loss function is the cross-entropy loss, as in a classification problem. Experiments We performed an ablation study: (i) by progressively extending the baseline to our proposed GNNs and (ii) by guiding the exploration with IL. All the experiments were restarted 10 times with random initialisations and the results evaluated on 500 dialogues were averaged. Each learning trajectory was kept up to 10,000 dialogues with a step of 1,000 dialogues in order to analyse the variability and stability of the methods.
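A minimal PyTorch-style sketch of the message passing described above is given below: one I-NODE plus a variable number of S-NODEs, with a single shared weight matrix per edge type (S2S, S2I, I2S). It is an illustration of the idea, not the authors' implementation; the layer size, mean/sum aggregation and residual ReLU updates are assumptions.

```python
import torch
import torch.nn as nn

class DIPGraphLayer(nn.Module):
    """One round of message passing over DIP nodes with shared edge-type weights."""
    def __init__(self, dim):
        super().__init__()
        self.s2s = nn.Linear(dim, dim)  # slot -> slot messages (shared across all slot pairs)
        self.s2i = nn.Linear(dim, dim)  # slot -> slot-independent node
        self.i2s = nn.Linear(dim, dim)  # slot-independent node -> slot

    def forward(self, i_node, s_nodes):
        # i_node: (batch, dim); s_nodes: (batch, n_slots, dim); n_slots may vary per domain.
        msg_to_i = self.s2i(s_nodes).mean(dim=1)                       # aggregate slot messages
        peer_sum = self.s2s(s_nodes).sum(dim=1, keepdim=True)          # sum of all slot messages
        msg_to_s = (peer_sum - self.s2s(s_nodes)) / max(s_nodes.size(1) - 1, 1)  # exclude self
        new_i = torch.relu(i_node + msg_to_i)
        new_s = torch.relu(s_nodes + msg_to_s + self.i2s(i_node).unsqueeze(1))
        return new_i, new_s

# Example: a domain with 4 slots and a feature dimension of 32 (both hypothetical).
layer = DIPGraphLayer(dim=32)
i0, s0 = torch.zeros(1, 32), torch.zeros(1, 4, 32)
i1, s1 = layer(i0, s0)
print(i1.shape, s1.shape)  # torch.Size([1, 32]) torch.Size([1, 4, 32])
```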
Models The baseline is ACER which is a sophisticated actor-critic method (Wang et al., 2016). After an ablation study, we progressively added Metrics We evaluate the performance of the policies for all tasks. For the find task, we use the precision, the recall and the F-score metrics: the inform rates. For the book task, we use the accuracy metric namely the book rate. The dialogue is marked as successful if and only if both inform's recall and book rate are 1. The dialogue is considered completed if it is successful from the user's point of view (i.e a dialogue can be completed without being successful if the information provided is not the one objectively expected by the simulator). Evaluation We evaluate the dialogue manager and the dialogue system both with simulated users. Dialogue Manager We performed an ablation study based on ACER as reported in Figure 2. First, all RL variants of ACER ( Figure 2a) have difficulties to learn without supervision in contrast to BC variants (Figure 2b). In particular, we see that hierarchical decision making networks (HFNN in green), graph neural network (HGNN in red) and generic policy (UHGNN in purple) drastically improve the performance compared to FNNs. Similarly, using IL like ILFOD ( Figure 2c) and IL-FOS ( Figure 2d) notably improves the performance. Therefore, learning generic GNNs allows collaborative gradient update and efficient learning on multi-domain dialogues. Conversely, we observe that hierarchical decision making with HFNNs does not systematically guarantee any improvement. These results suggest that GNNS are useful for learning dialogue policies on multi-domain which can be transferred during learning across domains on-the-fly to improve performance. Finally, regarding ILFOD variants (Figure 2c) Conclusion We studied structured policies like GNN combined with some imitation learning that effectively handle multi-domain dialogues. The results of our largescale experiments on CONVLAB confirm that an actor-critic based policy with a GNN structure can solve multi-domain multi-task dialogue problems. Finally, we evaluated our best policy (ACGOS) in a complete dialogue system with simulated users. It overcomes the baselines and it is comparable to the handcrafted policy. A limitation of current policies in CONVLAB, including ours, is that the robustness to noisy inputs is not specifically addressed as it had been done in PyDial (Ultes et al., 2017). It could be also interesting to study the impact of incorporating real human feed-backs and demonstrations instead of a handcrafted teacher. The GNN structured policies combined with imitation learning avoid sparsity, while being data efficient, stable and adaptable. They are relevant for covering multi-domain task dialogue problems. Belief State The belief state representation is deterministic. As shown in Figure 3, there is no uncertainty (all values are either 0's or 1's). State Space The input to the dialogue manager is the belief state which is a dictionary of all tractable information (slot-value pairs, history, dialogue actions of system and user, etc.). This is called the master state space. And, due to its large size, the representation is projected into the summary state space by a process called value abstraction (Wang et al., 2015). Finally, it must be vectorised in order to be interpretable by neural networks. Action Space The dialogue manager's output is a probabilistic distribution over all possible actions. 
To reduce the complexity of the learning problem, master actions, which are valued dialogue acts such as INFORM(date = '2022-01-15'), are abstracted into summary actions like INFORM(date), the value abstraction module being in charge of restoring the relevant values in the context. On CONVLAB the policy may activate several actions simultaneously (called multiple-actions). Domain Independent Parametrisation (or DIP) (Wang et al., 2015) standardises the slots representation into a common feature space to eliminate the domain dependence. In particular, the DIP state and action representations are not reduced to a flat vector but to a set of sub-vectors: one corresponding to the domain parametrisation (called slot-independent representation), the others to the slots parametrisation (called slot-dependent representations). Component / Description Beliefs constraint slot beliefs: {b inf d,s ∈ V s , ∀s ∈ S inf d , ∀d ∈ D} The goal constraints belief for each informable slot. This is either an assignment of a value from the ontology which the user has specified as a constraint, or has a special value -either dontcare which means the user has no preference, or none which means the user is yet to specify a valid goal for this slot. To be exact, for each domain, the constraint slot dictionary separates slots with respect to the task i.e we distinguish the find slot dictionary and the book slot dictionary. request slot beliefs: {b req d,s ∈ B, ∀s ∈ S req d , ∀d ∈ D}: A set of requested slots, i.e. those slots whose values have been requested by the user, and should be informed by the system. Features terminated: f 1 ∈ B: A boolean showing that the user wants to end the call. booked: f 2 ∈ V DB(d) : The name of the last venue offered by the system to the user with respect to the constraint slots with additional information like reference. To be exact, this feature is located in the book slot dictionary. degree pointer: f 3 ∈ B 6 : The vector counting the number of entities count matching with constraint slots in acceptance list: [count==0, count==1, count==2, count==3, count==4, count>=5]. System Acts system acts: a sys ∈ list(A sys ): The list of the last system actions. User Acts user acts: a user ∈ list(A user ): The list of the last user actions. A.2 State and Action Representations We propose to formally present the state representations used in our experiments. For details about our notations, see Table 3. Flat state representation in CONVLAB where x is the initial state, ϕ(x) is the full state parametrisation, S inf is the set of informable slots, b inf s is the one-encoding vector of the informable slot s, a user and a sys are the one-encoding vectors of previous user and system actions, f 1 is the boolean "terminated dialogue", f 2 is the boolean "booked offer" with respect to each domain, f 3 is the one-encoding vector of the matching entities count with respect to each domain and ⊕ is the vector concatenation operator. DIP state representation Slot independent parametrisation: where x is the initial state, ϕ d (x) is the active domain state parametrisation, a user | g and a sys | g are the one-encoding vectors of previous general user and system actions, f 1 is the boolean "terminated dialogue", f 2 | d is the boolean "booked offer" with respect to the active domain, f 3 | d is the oneencoding vector of the matching entities count with respect to the active domain and ⊕ is the vector concatenation operator. 
Slot dependent parametrisation: where x is the initial state, ϕ s i (x) is the slot parametrisation of the i th slot, S d is the set of slots of the active domain, a user | s i and a sys | s i are the one-encoding vectors of previous user and system actions of the i th slot, (2a) is the indicator of known value, (2b) is the indicator of informable slot and (2c) is the indicator of requestable slot and ⊕ is the vector concatenation operator. A.3 Implementation Details Imitation learning The used oracle is the handcrafted agent proposed by each framework. When we use ILFOD or ILFOS methods, 50% of the time the oracle trajectories is used. When we use ILFOS, we call also in 100% of the time the oracle which gives us the best expert action as supervision and a margin penalty µ = log(2) (Hester et al., 2018). Reinforcement learning Our policy algorithm is an off-policy learning that uses experience replay (all data are stored in buffers) without priority i.e without importance sampling. The exploitationexploration procedure is achieved by Boltzmann sampling with a fixed temperature τ = 1. Metrics and Rewards Inform recall evaluates whether all the requested information has been informed when inform precision evaluates whether only the requested information has been informed. Book rate assesses whether the offered entity meets all the constraints specified in the user goal. The system is guided by the rewards as follows. If all domains are solved (a domain is solved if all related tasks are solved), it gains 40 points. If the current active domain is solved, it gains 5 points. Otherwise, it is penalised by 1 point. Model setup for neural network architectures Our FNN models have two hidden layers, both with 128 neurons. Our GNN models have one first hidden layer with 32 neurons for each node (two in all: S-NODE and I-NODE). Then the second hidden layer is composed of 32 neurons for each relation (three in all: S2S, S2I and I2S). The size of the tested networks are of the order of magnitude of 10 000 to more than 100 000 parameters. For learning stage, we use a learning rate lr = 10 −3 , a dropout rate dr = 0.1 and a batch size bs = 64. Each loss function has a weight of λ Q = 0.5, λ π = 1., λ IL = 1. and λ ent = 0.01 respectively. The learning frequency is one iteration after each episode (finished dialogue) with only one gradient iteration. Used packages for the experiment We used the dialogue system frameworks named CON-VLAB (Zhu et al., 2020). For the implementation of neural networks, we used PYTORCH (Paszke et al., 2019) in our dialogue systems. We also used another toolkit for reinforcement learning research named OPENAI GYM (Brockman et al., 2016). A.4 Supplementary Results We propose to present supplementary results of our ablation study. We show the distribution (via boxplot) of different measures with 10 different initialisations and without pre-training. In particular, Figure 4 presents the distribution of inform recall, Figure 5 the distribution of book rate, Figure 6 the distribution of success rate and Figure 7 the distribution of cumulative rewards. We precise that the coloured area represents the interquartile Q1-Q3 of the distribution, the middle line represents its median (Q2) and the points are outliers.
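To complement the implementation details in A.3, the following minimal sketch (illustrative names only, not CONVLAB's API) spells out the evaluation quantities and training settings in code: inform precision/recall and book rate, the turn-level reward scheme, Boltzmann exploration at τ = 1, and the weighted combination of critic, actor, imitation and entropy losses with the λ values given above; the sign convention on the entropy term (a bonus subtracted from the loss) is our assumption.

```python
import torch

def inform_metrics(requested: set, informed: set):
    # Recall: were all requested slots informed? Precision: were only requested slots informed?
    tp = len(requested & informed)
    recall = tp / len(requested) if requested else 1.0
    precision = tp / len(informed) if informed else 1.0
    return precision, recall

def book_rate(constraints: dict, offered: dict) -> float:
    # Fraction of the user's goal constraints met by the offered/booked entity.
    if not constraints:
        return 1.0
    return sum(offered.get(s) == v for s, v in constraints.items()) / len(constraints)

def turn_reward(all_domains_solved: bool, active_domain_solved: bool) -> float:
    # +40 when every domain of the dialogue is solved, +5 when the active domain
    # is solved, otherwise a -1 penalty for the turn.
    if all_domains_solved:
        return 40.0
    if active_domain_solved:
        return 5.0
    return -1.0

def boltzmann_sample(action_logits: torch.Tensor, tau: float = 1.0) -> int:
    # Exploration-exploitation by Boltzmann sampling at a fixed temperature.
    probs = torch.softmax(action_logits / tau, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

def total_loss(loss_q, loss_pi, loss_il, entropy,
               lambda_q=0.5, lambda_pi=1.0, lambda_il=1.0, lambda_ent=0.01):
    # Weighted combination of critic, actor and imitation losses; the entropy
    # term is treated as a regularisation bonus (sign assumed).
    return lambda_q * loss_q + lambda_pi * loss_pi + lambda_il * loss_il - lambda_ent * entropy
```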
2022-10-12T01:16:38.322Z
2022-10-11T00:00:00.000
{ "year": 2022, "sha1": "1a082424ce54699b8fa3ca3637832d8adaee9540", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "e74aa648818c06c9a94e44b41c57525f079e7e00", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
233389894
pes2o/s2orc
v3-fos-license
Effect of Quercetin on ABCC6 Transporter: Implication in HepG2 Migration Quercetin is a member of the flavonoid group of compounds, which is abundantly present in various dietary sources. It has excellent antioxidant properties and anti-inflammatory activity and is very effective as an anti-cancer agent against various types of tumors, both in vivo and in vitro. Quercetin has been also reported to modulate the activity of some members of the multidrug-resistance transporters family, such as P-gp, ABCC1, ABCC2, and ABCG2, and the activity of ecto-5′-nucleotidase (NT5E/CD73), a key regulator in some tumor processes such as invasion, migration, and metastasis. In this study, we investigated the effect of Quercetin on ABCC6 expression in HepG2 cells. ABCC6 is a member of the superfamily of ATP-binding cassette (ABC) transporters, poorly involved in drug resistance, whose mutations cause pseudoxanthoma elasticum, an inherited disease characterized by ectopic calcification of soft connective tissues. Recently, it has been reported that ABCC6 contributes to cytoskeleton rearrangements and HepG2 cell motility through purinergic signaling. Gene and protein expression were evaluated by quantitative Reverse-Transcription PCR (RT-qPCR) and western blot, respectively. Actin cytoskeleton dynamics was evaluated by laser confocal microscopy using fluorophore-conjugated phalloidin. Cell motility was analyzed by an in vitro wound-healing migration assay. We propose that ABCC6 expression may be controlled by the AKT pathway as part of an adaptative response to oxidative stress, which can be mitigated by the use of Quercetin-like flavonoids. Introduction Quercetin is a member of the flavonoid group of compounds, which is abundantly present in various dietary sources like vegetables, fruits, spices, tea, and wine.Quercetin is usually present as a glycoside, though the aglycone moiety is absorbed after deglycosylation in the gut.Its daily intake has been estimated to be about 6-18 mg in the United States, China, and Netherlands.Its bioavailability is considered low and varies from 2% to 44% in different studies, due to its poor solubility in water and to intestinal and biliary excretion, which limit its absorption [1]. Thanks to its ability to bind transition metal ions [2] and to scavenge free radicals, Quercetin has excellent antioxidant properties attributed to the catechol group in the B ring and the OH group at position 3 of the A ring [3].As a radical scavenger, Quercetin is effective against O 2 − and ONOO − and can terminate lipid peroxidation, which has deleterious effects especially on the cardiovascular and nervous systems [4,5].Quercetin also shows good anti-inflammatory activity by reducing Lipopolysaccharide (LPS)-induced expression of Tumor Necrosis Factor (TNF) TNF-α and Interleukin (IL)-1α [6] and by preventing the production of the inflammatory enzymes lipoxygenases (LOX) and cyclooxygenases (COX) [7]. 
Moreover, Quercetin has been found to be useful in cancer prevention, and very effective as an anti-cancer agent against various types of tumors, both in vivo and in vitro, such as breast, colon, liver, pancreas, lung, prostate, bladder, bone, and blood cancers [8].Multiple mechanisms and pathways are involved, from cell cycle arrest to induction of apoptosis and autophagy [9].The progression and metastatic potential of solid tumors are also affected through the inhibition of angiogenesis in a Vascular endothelial growth factor receptor 2 (VEGFR 2)-dependent manner [10,11] and by targeting Epithelial-to-Mesenchymal transition (EMT) [12,13] and Metalloproteinases (MMP)-mediated remodeling of the Extracellular Matrix (ECM) [14,15].Furthermore, Quercetin has been reported to inhibit the activity of ecto-5 -nucleotidase (NT5E) [16,17], a key enzyme in the purinergic signaling pathway [18] and in the remodeling of ECM components in cancer invasion [19].It has been known for a long time that flavonoids can modulate both expression and activity of many ABC transporters, with some relevant impact on drug resistance [20][21][22].Quercetin has been also reported to modulate the activity of some members of the multidrug-resistance transporters family such as P-gp [23][24][25][26][27][28], ABCC1 [27,29], ABCC2, and ABCG2 [27].ABCC6 is a member of the C subfamily of ATP-binding cassette transporters mainly expressed in liver and kidney, poorly involved in drug resistance [30][31][32].We recently demonstrated that ABCC6 silencing or its inhibition by the uricosuric drug probenecid, through the modulation of the purinergic system, leads to cytoskeleton rearrangement and reduced motility of HepG2 cells, thus identifying it as a potential target for anti-metastatic treatment [33].The aim of this study was to study the effect of Quercetin on ABCC6 expression.Quercetin decreased the expression of ABCC6 through the regulation of the AKT signaling pathway, thus also contributing to cytoskeleton rearrangement and reduced cells motility. Effect of Quercetin on Cells Viability and Reactive Oxygen Species Accumulation In order to assess the effect of Quercetin on cells viability, an MTT assay was performed on HepG2 cells treated with different concentrations of Quercetin, ranging from 660 to 82.5 µM, for 24 and 48 h.We found that Quercetin affected cell viability in a concentrationand time-dependent manner.However, a concentration of 165 µM was chosen for further experiments, as it did not cause significant cell toxicity at both 24 and 48 h of treatment (Figure 1).In order to assess the antioxidant activity of Quercetin on HepG2 cells, a 2 ,7 -Dichlorofluorescein (DCF) assay was performed to evaluate its effect on intracellular Reactive Oxygen Species (ROS) accumulation.As shown in Figure 2, Quercetin at a concentration of 165 µM significantly lowered the oxidative stress, with a reduction of intracellular ROS level greater than 40% as compared to the control cells.A two-hour pre-incubation with 500 µM H 2 O 2 was used as a positive control.Data are expressed as a percentage of the control group ± the standard error of the mean of three replicates of three independent experiments.Comparisons between treatments and control groups were performed by one-way ANOVA followed by Dunnett post-hoc correction; * p < 0.05; *** p < 0.001. 
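Both readouts above are expressed as a percentage of the vehicle (DMSO) control. The short sketch below uses hypothetical replicate values, not the study's data, and assumes background-corrected absorbance (570 nm minus 630 nm) for the MTT assay and raw fluorescence for the DCF assay:

```python
import statistics

def percent_of_control(treated_values, control_values):
    # Express each treated replicate as a percentage of the mean vehicle control.
    control_mean = statistics.mean(control_values)
    return [100.0 * v / control_mean for v in treated_values]

# MTT viability: background-corrected absorbance (A570 - A630), hypothetical triplicates.
viability = percent_of_control([0.82, 0.79, 0.85], [0.84, 0.88, 0.86])
# DCF assay: values around 60% of control correspond to the >40% drop in ROS reported above.
ros_level = percent_of_control([0.55, 0.60, 0.58], [1.00, 0.97, 1.03])
print(viability, ros_level)
```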
Effect of Quercetin on Gene Expression In order to verify whether Quercetin was able to affect the expression of some relevant ABC transporters in HepG2 cells, an RT-PCR experiment was carried out.As shown in Figure 3a, no significant variation was found in such gene expression, with the only exception of ABCC6 and ABCC5.Fold change in mRNA expression was, respectively, 0.26 (95% C.I.: 0.40; 0.12) and 2.08 (95% C.I.: 2.66; 1.49).Previously, we demonstrated that NT5E expression is regulated by ABCC6, which in turn supplies ATP to feed the purinergic system [34,35].In panel b of Figure 3, the effect of Quercetin on genes involved in the purinergic system is shown.An increase in tissue-nonspecific alkaline phosphatase (TNAP) was found (fold change 1.81; 95% C.I.: 2.13; 1.53), while NT5E expression was significantly decreased (fold change 0.59; 95% C.I.: 0.72; 0.45).Western blot analysis showed that Quercetin decreased ABCC6 protein levels but not the expression of NT5E (Figure 3c,d). Quercetin Induces a Rearrangement of the Actin Cytoskeleton Cell motility is a key factor in a variety of pathophysiological processes, such as tumor invasion and metastasis.Cell movement is widely considered a very complex phenomenon driven by a finely coordinate rearrangement of actin filaments in the cytoskeleton.This is a highly dynamic process in which protrusive structures form at the leading hedge of motile cells, while at the opposite extremity, the body of the cells is retracting and loosening adhesion to adjacent cells and to the matrix scaffold.These typical structures, named lamellipodia and filopodia, are generated by the polymerization of filamentous actin and further organization in tight bundles (filopodia) or cross-woven webs (lamellipodia), in which filaments are oriented at a certain angle to the direction of movement [36,37].In order to evaluate if any changes occurred in the organization of the cytoskeleton following the treatment with Quercetin, immunofluorescence experiments coupled to laser confocal microscopy were carried out.HepG2 cells were grown on a coverslip and stained with a derivate of phalloidin, which binds in a specific manner to the actin filaments, conjugated with a fluorescent dye.Many filopodia were observed in HepG2 control cells (Figure 4a, arrows), while these structures were almost completely absent in Quercetin-treated cells (Figure 4b, stars), suggesting an inhibition of cell motility in Quercetin-treated cells. Quercetin Affects HepG2 Cells' Migration Rate Cytoskeletal rearrangement is closely related to cell migration.To study the effect of Quercetin on cell migration in HepG2 cells, an in vitro scratch test was performed.The scratch test is an easy, fast, accurate, and highly reproducible method to assess cell collective migration [38].As shown in Figure 5, Quercetin significantly reduced the migration rate in HepG2 cells.Unlike what was previously observed in Probenecid-treated or ABCC6silenced cells, neither ATP (Figure 5a) nor Adenosine (Figure 5b) restored cells motility.In addition, when ATP was used in combination with Quercetin, a further decrease in migration was observed.No effect on cell migration was detected when Probenecid was used in combination with Quercetin (Figure 5c). 
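The migration rate in these experiments is derived from the scratch-assay images. As a minimal sketch (hypothetical helper and numbers, not the authors' analysis script), the wound closure at each time point can be quantified relative to the initial wound area:

```python
def wound_closure(areas_by_time):
    """areas_by_time: {hours: wound area}; must include the initial (time 0) area."""
    initial = areas_by_time[0]
    return {t: round(100.0 * (initial - area) / initial, 1)
            for t, area in sorted(areas_by_time.items())}

# Hypothetical areas (arbitrary units): the treated monolayer closes the wound more slowly.
control = wound_closure({0: 1.00, 12: 0.55, 24: 0.20})
quercetin = wound_closure({0: 1.00, 12: 0.80, 24: 0.60})
print(control)    # {0: 0.0, 12: 45.0, 24: 80.0}
print(quercetin)  # {0: 0.0, 12: 20.0, 24: 40.0}
```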
Effect of Quercetin on MAPK/ERK and PI3K/AKT Pathways In order to shed light on the molecular mechanism involved in Quercetin-mediated impairing of HepG2 cell motility, we investigated two major signaling pathways, known to be among the main regulators of cell motility, namely, phosphoinositide 3 kinase (PI3K)/AKT (also known as Protein Kinase B, PKB) and extracellular signal-regulated kinase (ERK) pathways [39,40].In Quercetin-treated cells, no changes in p-ERK/ERK ratio were detected (Figure 6a); on the contrary, the level of p-AKT was reduced (Figure 6b).Interestingly, in ABCC6-silenced (Sh-ABCC6) cells, a decrease of both phosphorylated kinases was observed (Figure 6c,d).c,d) on phosphorylated AKT and ERK.The ratios between phosphorylated and total ERK (a,c) and between phosphorylated and total AKT (b,d) were determined by comparing the intensities of the immunoreactive bands obtained by using specific antibodies.Cells treated with DMSO 0.25% or scrambled sh-RNA were used as controls for HepG2 cells treated with Quercetin and subjected to ABCC6 knockdown, respectively.α-tubulin or β-actin were used as a loading control.Data are presented as the mean and the standard error of the mean of three independent experiments.Results were analyzed by Student's t test; * p < 0.05; ** p < 0.01; *** p < 0.001. Discussion Flavonoids modulate both the activity and the expression of several ABC transporters [20][21][22].Most studies have been focused on P-gp, BCRP, and MRP1, which have a major role in drug resistance, but very limited information is available on different transporters and, specifically, on ABCC6, to our knowledge [41]. Quercetin is one of the most abundant flavonoids derived from plants, whose antitumor activity in hepatocarcinoma was recently systematically reviewed [42].In the present study, interestingly, among the considered ABCs, we found the most significant decrease in ABCC6 expression (Figure 3a), which is only marginally involved in drug resistance. In previous studies, we found that pharmacological inhibition or silencing of ABCC6 in HepG2 cells could contribute to cytoskeleton rearrangement and cell motility by reducing the availability of ATP to feed the extracellular purinergic purine pool [33].Therefore, we designed experiments to test the hypothesis that Quercetin, inhibiting ABCC6 expression, could impair cell motility by acting on the purinergic system.Although the treatment with Quercetin significantly reduced the migration rate of HepG2 cells, neither ATP nor Adenosine restored motility.Moreover, the further decrease of the migration rate when ATP was used in combination with Quercetin could be explained by the inhibition of NT5E enzymatic activity, previously assessed [16,17]: most likely, in the presence of Quercetin, AMP accumulates and exerts an additional inhibitory effect on cell motility by acting on different purine receptors.The addition of Probenecid in combination with Quercetin did not modify cell migration, probably because the effect of Quercetin includes the effect of Probenecid on the purinergic system. 
All together, these results suggest that the effect of Quercetin on cell motility may be due to the involvement of other targets and pathways, as indicated by its pleiotropic activity.It is widely accepted that MAPK and AKT are among major pathways involved in the control of tumor cells' proliferation and motility [39,40].It is known that Quercetin suppresses the migration of some Hepato Cellular Carcinoma (HCC)-derived cells by inhibiting the signaling pathway of AKT [42]; we also confirmed this effect on HepG2 cells.Interestingly, we observed the shutdown of both pathways in ABCC6-silenced HepG2 cells, probably as a consequence of a reduction of adenosine signaling, mediated by the purinergic pathway.In conclusion, both silenced cells and cells treated with Quercetin exhibited downregulation of the AKT pathway, thus suggesting that Quercetin could cause the reduction of ABCC6 expression through the downregulation of phosphorylated AKT. Activation of AKT plays a key role in the cell response to oxidative stress, by inducing the expression of target genes, which support tumor cells' survival, as well as resistance to chemotherapy [43,44].In the present study, we used a concentration of Quercetin associated with strong antioxidant activity.Therefore, we propose that ABCC6 expression may be controlled by AKT activity as part of cells' adaptative response to oxidative stress, which can be mitigated by the use of Quercetin-like flavonoids.Indeed, the downregulation of phosphorylated AKT and the removal of oxidative stress lead to decreased expression of ABCC6 transport activity and may produce a senescence-like phenotype in cancer cells, which we previously observed in ABCC6-silenced cells [45], thus through a cell pathway independent of the purinergic system. In any case, treatment of HepG2 cells with Quercetin showed once more a clear relationship among ABCC6 downregulation, cytoskeleton rearrangement, and motility impairment.This effect on ABCC6 expression could be part of Quercetin antitumor and anti-metastatic potential, especially in those tumors with a high expression of ABCC6.However, since the lack of ABCC6 transport activity is the cause of ectopic mineralization in Pseudoxanthoma elasticum (PXE), the potential of harm deriving from reducing its activity should be carefully evaluated.This is unlikely to happen when food is the only source of Quercetin, since the daily intake is limited, but it can represent a not-so-far-fromreal eventuality when Quercetin is used as a dietary supplement, a use that is acquiring a growing popularity. Cell Culture and Treatments Human hepatoblastoma cells (HepG2) were grown in Dulbecco's modified Eagle's medium (DMEM) with a high glucose concentration (4.5 g/L), to which 10% fetal bovine serum (FBS), 2 mM L-glutamine, 100 U/mL penicillin, and 100 µg/mL streptomycin were added.Cells were cultured at 37 • C, in a water-saturated atmosphere with 5% CO 2 .Quercetin was dissolved in DMSO at a concentration of 20 mg/mL as a stock solution.The final concentration of DMSO in cell treatments did not exceed 0.25% v/v.Control cells were treated at the same final concentration of DMSO (vehicle).All compounds were purchased from Sigma (Sigma, Saint Louis, MO, USA). 
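As a worked example of the treatment conditions above (a sketch assuming anhydrous quercetin, MW ≈ 302.2 g/mol), the dilution needed to reach 165 µM from the 20 mg/mL DMSO stock also shows why the vehicle control is set at about 0.25% DMSO:

```python
# Quercetin molecular weight (anhydrous aglycone), g/mol -- an assumed constant.
MW_QUERCETIN = 302.24

stock_mg_per_ml = 20.0          # DMSO stock solution
target_uM = 165.0               # working concentration in the culture medium

target_ug_per_ml = target_uM * MW_QUERCETIN / 1000.0               # ~49.9 ug/mL
dilution_factor = (stock_mg_per_ml * 1000.0) / target_ug_per_ml    # ~400-fold
dmso_percent = 100.0 / dilution_factor                             # ~0.25% v/v

print(f"{target_ug_per_ml:.1f} ug/mL, 1:{dilution_factor:.0f} dilution, "
      f"{dmso_percent:.2f}% DMSO in the final medium")
```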
Generation of STABLE ABCC6 Knockdown HepG2 Cells In order to silence ABCC6 expression in HepG2 cells, the shRNA technology was used by infecting cells with lentiviral particles, (vector purchased from Cyagen Biosciences (Santa Clara, CA, USA).EGFP fluorescence was used as a control for successful infection.In 12-well plates, cells were seeded at a density of 1.5 × 10 5 /well.After 24 h, a suspension of lentiviral particles at a suitable multiplicity of infection (MOI) of 10 packed with plasmid vectors containing ABCC6-shRNA or scrambled-shRNA for negative control was added.A preliminary 12-day selection with puromycin 2 µg/mL to remove non-infected cells was followed by clone selection with cloning cylinders in 200 mm plates.Clones with at least a 75% knockdown expression were used for further analysis. Viability Assay Cell viability was assessed by the MTT (3-(4, 5-dimethyl thiazol-2yl)-2, 5-diphenyl tetrazolium bromide) assay.In this experiment, 2 × 10 4 cells were seeded in each well of a 96-well plate.After 24 h, the cells were treated with progressive dilutions of Quercetin, ranging from 660 to 82.5 µM for 24 or 48 h, then incubated with fresh medium containing 0.75 mg/mL MTT for 4 h at 37 • C. The formazan crystals were finally dissolved for 1 h at room temperature on agitation in a mixture 1:1 of DMSO and isopropanol with 1% of Triton X-100.The viability of cells was assessed by comparing the light absorbance at 570 nm, after subtraction of the background at 630 nm, of treated and control cells (treated only with vehicle DMSO), defined as 100% cell viability.Spectrophotometric assays were performed using a microplate reader (Multiskan TM GO Microplate Spectrophotometer, Thermo Scientific, Waltham, MA, USA).Experiments were conducted in triplicate. Intracellular ROS Assay The antioxidant activity of Quercetin in HepG2 cells was assessed by the 2 ,7dichlorofluorescein assay.In this experiment, 1 × 10 4 cells were seeded in each well of a 96-well polystyrene black plate with clear bottom.After 24 h, the cells were treated with progressive dilutions of Quercetin, ranging from 330 to 41.25 µM for 24 h.Then, the cells were incubated with dichlorofluoresceindiacetate (DCFH-DA) at a final concentration of 10 µM in PBS for 30 at 37 • C, and fluorescence was measured by using a GloMaxMultiDetection System (Promega, Madison, WI, USA) equipped with a blue filter (ex.:490 nm; em.: 510-570 nm).As a positive control of ROS presence, a 2 h treatment prior to DCFH-DA addition was used.Results are presented as a percentage of the negative control (cells treated with the vehicle DMSO only).Each treatment was performed in triplicate. Real-Time PCR RNA extraction was performed by using the Quick-RNA MiniPrep kit (Zymo Research, Irvine, CA, USA).RNA was then retrotranscripted to cDNA using random primers and the High-Capacity cDNA Reverse Transcription kit (Applied Biosystem, Waltham, MA, USA).Total cDNA was amplified using iTaqTM Universal SYBR Green Supermix (Bio-Rad, Waltham, MA, USA) with the 7500 Fast Real-Time PCR System (Applied Biosystems).Each primer used here was designed so to span exon-exon junctions in order to prevent any unwanted genomic DNA amplification (Table 1). The comparative threshold cycle method (∆∆Ct) was used to quantify the relative amounts of product transcripts, with β-actin as endogenous reference control.The specificity of amplicons was confirmed by melting curve analysis.Each test was performed at least in triplicate. 
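A minimal sketch of the comparative threshold cycle (2^-ΔΔCt) calculation described above, with β-actin as the endogenous reference; the Ct values are illustrative, not the study's data:

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # delta-Ct normalises the target to the reference gene (beta-actin);
    # delta-delta-Ct compares treated with control; fold change = 2**(-ddCt).
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Illustrative Ct values: a target rising ~2 cycles relative to beta-actin after
# treatment corresponds to roughly a 4-fold decrease in expression (~0.25).
print(fold_change(26.0, 18.0, 24.0, 18.0))  # 0.25
```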
Confocal Fluorescence Microscopy HepG2 cells (1.5 × 10 5 ) were grown on coverslips in the presence of 165 µM Quercetin for 24 h.Staining of nuclei and F-actin was performed with propidium iodide and Phalloidin Alexa Fluor 488, as previously described [33]. Migration Assay Cell migration rate was evaluated by an in vitro wound-healing assay.HepG2 cells (1 × 10 6 ) were seeded in a 6-well plate and cultured in DMEM containing 10% FBS to obtain a nearly confluent cell monolayer.Cells were there treated with either 165 µM Quercetin or DMSO as control in the presence or absence of 500 µM ATP or 400 µM adenosine for 12 h in DMEM containing 10% FBS.Then, a linear wound was generated in the cellular monolayer with a sterile 10 µL plastic pipette tip.Any cellular debris was removed by washing with PBS, and the medium replaced with 2 mL of DMEM with 1% FBS still containing 165 µM Quercetin or 0.25% DMSO in the presence or absence of ATP or adenosine.The cells were incubated at 37 • C, and pictures of the scratch were taken every 12 h by using a Nikon Figure 1 . Figure 1.Effect of quercetin on HepG2 cells viability.Cells were treated with Quercetin at concentrations of 82.5, 165, 330, and 660 µM for 24 and 48 h.Data are expressed as a percentage of the control group ± the standard error of the mean of three replicates from three independent experiments.Statistical significance was assessed by multiple t test followed by Holm-Sidak correction for multiple comparisons; * p < 0.05, ** p < 0.01, *** p < 0.001. Figure 2 . Figure 2. Effect of Quercetin on the intracellular level of Reactive Oxygen Species (ROS) in HepG2 cells.Cells were treated with Quercetin at concentration of 41.25, 82.5, 165, and 330 mM for 24 h.A two-hour pre-incubation with 500 µM H 2 O 2 was used as a positive control.Data are expressed as a percentage of the control group ± the standard error of the mean of three replicates of three independent experiments.Comparisons between treatments and control groups were performed by one-way ANOVA followed by Dunnett post-hoc correction; * p < 0.05; *** p < 0.001. Figure 3 . Figure 3.Effect of Quercetin on mRNA expression of some relevant ABC transporters (a) and on genes involved in the purinergic pathway (b).HepG2 cells were treated with Quercetin (165 µM) for 24 h.Cells treated with 0.25% DMSO were used as a control.Results are expressed as the mean and 95% confidence interval of three different experiments.Statistical analysis was performed on ∆Ct values by using multiple T-test followed by Holm-Sidak correction for multiple comparisons; * p < 0.05; *** p < 0.001.Effect of Quercetin on ABCC6 (c) and NT5E (d) protein expression.Results are expressed as the mean ± the standard error of three independent experiments.Results were analyzed by Student's t test; * p < 0.05. Figure 5 . 
Figure 5. Effect of Quercetin on HepG2 cells' migration rate. Cells were treated with Quercetin (165 µM) for 12 h. Then, a scratch was made in the cell monolayer, and pictures were taken every 12 h, while the cells were still kept in contact with Quercetin. DMSO-treated cells were used as a control. ATP 500 µM (a), Adenosine 400 µM (b), or Probenecid 250 µM (c) was added to both control and Quercetin-treated cells. Data are expressed as the mean and standard error of three different experiments. Statistical significance was assessed by multiple t test followed by Holm-Sidak correction for multiple comparisons; * p < 0.05; ** p < 0.01; *** p < 0.001. Representative pictures of the scratches taken at different times are shown in Supplementary Figure S1.
Figure 6. Effect of Quercetin treatment (a,b) or ABCC6 silencing (c,d) on phosphorylated AKT and ERK. The ratios between phosphorylated and total ERK (a,c) and between phosphorylated and total AKT (b,d) were determined by comparing the intensities of the immunoreactive bands obtained by using specific antibodies. Cells treated with DMSO 0.25% or scrambled sh-RNA were used as controls for HepG2 cells treated with Quercetin and subjected to ABCC6 knockdown, respectively. α-tubulin or β-actin were used as a loading control. Data are presented as the mean and the standard error of the mean of three independent experiments. Results were analyzed by Student's t test; * p < 0.05; ** p < 0.01; *** p < 0.001.
Table 1. List of primers used in this study.
2021-04-27T05:14:04.283Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "8aed02328201582dd29e323720279cbe52150571", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/8/3871/pdf?version=1624370686", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8aed02328201582dd29e323720279cbe52150571", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
221364489
pes2o/s2orc
v3-fos-license
“Luiz de Queiroz” College of Agriculture
An integrated approach to add value to acerola fruit (Malphigia emarginata): antioxidant extraction and its performance in food emulsion
ABSTRACT
Antioxidants have the ability to protect, prevent, or reduce the damage caused by the phenomena of oxidation due to free radicals' activity. They play a key role in the defense mechanisms of plants and other biological systems, acting directly in the early stages of generation and propagation of free radicals in the oxidative process, in both food and biological systems. In foods, they act by preventing and/or delaying the processes of lipid autoxidation through the neutralization of free radicals. Autoxidation leads to the development of undesirable flavors and chemical compounds in food, besides deterioration of quality and shortening the shelf life of food products. The use of antioxidant compounds is a necessity in the food industry, which normally uses synthetic products.
The questioning of the innocuity of synthetic antioxidants and the recent demand for natural ingredients by consumers have motivated the study of alternative sources and their processing parameters for food application. Attention has been given to ascorbic acid, tocopherols, and carotenoids that present known potential for food application. Acerola fruits (Malphigia emarginata DC.) are an important source of natural antioxidants, due to the high levels of ascorbic acid and phenolic compounds. In this work natural extracts obtained from acerola pulp and seeds were applied in lipid matrices. The study consisted of three stages: extraction conditions determination using experimental design and optimization methods; oxidative stability study in lipid system models comparing natural extracts to synthetic ones; and sensorial evaluation of emulsions added off natural antioxidants. The results evidenced a high antioxidant capacity of extracts obtained under optimized conditions and a significant antioxidant action in the oxidative stability in lipid systems. In addition, it was observed high sensorial acceptance and low sensorial profile differentiation of emulsions added of natural extracts in comparison to the synthetic antioxidants. Therefore, acerola is an efficient substitute for synthetic antioxidants in lipid-based foods without causing sensorial alteration. INTRODUCTION An antioxidant compound is generally defined as a class of heterogeneous molecules that, present in low concentrations, are able to reduce or protect a system against the damage caused by oxidative stress, which is caused by free radicals (HALLIWELL, 1990;HALLIWELL;GUTTERIDGE, 2007). Natural antioxidants act in the defense mechanisms of animal and plants, playing a direct role in the oxidative process, basically in the stages of generation and propagation of free radicals, both in food and in biological systems (ESPIN et al., 2000;SUJA;JAYALEKSHMY;ARUMUGHAN, 2004). In living beings (animals or plants), free radicals attack essential biological molecules, leading to many degenerative diseases such as cancer and atherosclerosis. In food, free radicals are responsible for the autoxidation process, lipid peroxidation and rancidity development, reactions that lead to the development of strange and undesirable flavors and chemical compounds in foods (LABUZA, 1971;ANGELO, 1996). Nowadays there is a strong scientific questioning about the innocuity and food safety of the use of additives and synthetic preservatives by the food industry, evidenced by the researches that have not comproved the safety of these substances for human health and by the recent demand for healthier and natural products by the consumers, has motivated studies in the food technology area. Research on substances extracted from natural sources that have significant antioxidant action have become of importance and the current needs are: identification of new additives and technology coadjuvants that have effective and significant antioxidant potential, which can be proven by the methods of evaluation of the antioxidant capacity; development of new technologies and extraction processes of these compounds; assessment of industrial viability for food application; the use of raw vegetable materials little explored commercially and industrially. 
The major challenge for the industry consists, therefore, in finding naturals alternatives and substitutes to the known and widely used synthetic antioxidants, taking into account that the main purpose of using these substances is to conserve and preserve the nutritional and sensorial quality of the products, aiming extend their shelf life. Synthetic antioxidants are widely used in lipid-based food products in order to avoid and delay the lipid oxidation of the fatty acids present. Butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), propyl gallate, tert-butyl hydroquinone (TBHQ) and sorbates (2,4-hexadienoates) are the most commonly used. In order to attend the current trends, great attention has been given to substances such as ascorbic acid, tocopherols, tocotrienols, and carotenoids (betacarotene), since these compounds show potential for industrial application as they improve the stability and shelf life of food products (NOGUCHI;NIKKI, 2000). In addition, there is an advantage from the economic point of view, because if the antioxidant extraction source is a non-value-added raw material, such as peels and seeds generated in the fruit processing, it represents socio-environmental advantage compared to the synthetic compounds. In this research area for the substitution of synthetic substances in the food industry, acerola fruits, also known as Antilles cherry (Malphigia emarginata DC.), represent a satisfactory and potential alternative due to its composition in natural antio xidants. Native from Antilles, acerola was propagated over whole South America, including in Brazil, due to the good adaptation to the soil and climate . Acerola fruits present a higher content of ascorbic acid (vitamin C) and other nutrients such as carotenoids, thiamine, riboflavin, niacin, and minerals, such as calcium and phosphorus (DE ASSIS et al., 2001;. This work presents a review on antioxidants, their mechanisms of action, industrial uses, focusing on natural antioxidants extracted from fruits and vegetables (Chapter 1). The experimental study was developed on the application acerola fruits extracts (from pulp and seeds) in a based-lipid matrix (mayonnaise), as a substitute to synthetic antioxidants. The processes conditions to produce natural extracts from acerola pulp and seeds were studied by means of experimental design and optimization methodology. The antioxidant activity optimized extracts and their performance on oxidative stability essays (in different emulsified lipid systems model) compared to synthetic BHA and BHT were evaluated (Chapter 2). Finally, Chapter 3 presents a sensorial evaluation of mayonnaise type emulsion added off natural extracts and synthetic antioxidants was carried out, through a sensory acceptance test and a flash profile sensory description test, which promotes a description of the samples by means of attrib ute surveys. Introduction The main cause of deterioration and loss of nutritional quality in lipid-based foods (emulsions for example), oils and fats is the lipid oxidation of their mono and polyunsaturated fatty acids, caused mainly by free radicals. The main catalytic agents are light, oxygen, and the presence of transition metals. The industry has been using synthetic antioxidants to reduce and/or delay the oxidative process (a set of oxidation reactions that occur in fatty acids). 
However, the use of synthetic substances has been questioned and better studied, since toxicological studies with animals have shown that known and widely used products (such as TBHQ, BHA, and BHT) can present carcinogenic and other undesirable effects to human health, which explains the prohibition of the use of these substances by several countries [1][2]. For example, Europe and Canada have banned the use of TBHQ [3]. In Brazil, the use of these additives in food is controlled by Health Ministery, which limits the use to 200 mg kg -1 of oil for BHA and TBHQ, and 100 mg kg -1 of oil for BHT [4] Among the synthetic compounds with significant antioxidant activity, TBHQ is considered the most effective, since it presents high stability to heating, and is widely used in bulk oils [5]. Considering these questions raised about the innocuity of application of synthetic antioxidants in foods, it was necessary to identify and isolate natural antioxidants that have the same functional properties and act as substitutes for synthetics in prevention of oxidative deterioration of foods. Among the innumerable sources of natural antioxidants, there are cereals, seeds, and peels of fruits and vegetables (fruit and vegetable industrial residues in general), mushrooms, herbs and spices and medicinal plants [6][7][8][9][10]. In this context, there is a great potential in acerola fruit as antioxidant source for industrial due to its high antioxidant capacity. The objective of this work presents a review about lipid oxidation, the main cause of undesirable lipid alteration, as well as the methods for shelf life evaluation. It is also presented the main natural antioxidant compounds, their sources, and their use as an alternative to synthetics, with an emphasis on the acerola fruit antioxidants compounds (Malphigia emarginata DC). Antioxidant compounds Antioxidants represent a broad class of compounds that step in oxidative processes, including in degradation of food nutrients and essential biological molecules [11][12]. Recently, with growing concern about determining which antioxidants are safe for human health, the choice is usually for natural antioxidants, especially plant-derived antioxidants. The main class of antioxidant compounds found in natural sources are phenolic (flavonoids or non-flavonoids), which have as one of health benefits, for example, the inhibition/reduction of oxidation of low-density lipoprotein (LDL cholesterol) [13][14][15][16][17][18]. To accomplish its protective function, antioxidants present some mechanisms of action, such as free radical scavengers, metal chelating agents and singlet oxygen sequestrants [19]. It is important to emphasize that the low consumption of fruits and vegetables, natural sources of these substances with antioxidant activity, is associated with the development of stress-related disorders and other diseases [13; 20-22] They can be classified as primary antioxidants, which have the ability to interrupt chain reactions of free radicals, acting primarily as radical scavengers and/or electron or hydrogen donors to free radicals, and for this one of the mechanisms of action involves yielding electrons/hydrogen to a free lipid radical so that it takes on a thermodynamically more stable form. For example, some phenolic compounds have electron donor groups in the orto and para positions of their cyclic chain. 
Antioxidants can also be classified as secondary (also called processing antioxidants), which have organophosphite groups and thioesters in their structure and are known mainly for reducing the initiation of autoxidation process, and for that they present some mechanisms of action such as decomposition of peroxide radicals (hydroperoxides), metal complexation, absorption of ultraviolet radiation and scavenging or deactivation of singlet oxygen. The main examples are such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), gallates and others. It is recognized that vegetables, fruits, grains and beverages such as tea, juice and wine are significant sources of natural exogenous antioxidants [18; 28-29]. Phenolic compounds The phenolic compounds represent a wide variety of phytochemicals, which are derived from phenylalanine and tyrosine and are chemically characterized by the presence of one or more aromatic rings linked to at least one hydroxyl radical and/or other substitutes [27; 30]. They can be divided according to the number of phenolic rings and with the structures to which they are linked [31][32]. The groups of phenolic compounds more abundant in foods are the flavonoids, phenolic acids, simple phenols, tannins, lignins, coumarins and tocopherols [32][33][34][35]. It is known that dietary polyphenols are beneficial to human health, exercising a range of biological effects, as elimination and/or scavenging of free radicals, metals chelators, enzyme activity modulator and change pathways of signal transduction [36]. Not always the most common in foods are the most biologically active, and this occurs for different reasons as intrinsic low activity, lower intestinal absorption or fast metabolization and excretion [37]. The main sources of phenolic compounds are the citrus fruits such as lemon, orange and mandarin, as well as other fruits such as cherry, grape, plum, pear, apple and papaya, being found in greater quantities in pulp compared with fruit juice. Green pepper, broccoli, purple cabbage, onions, garlic and tomatoes are also excellent sources of these compounds [38]. In vegetables are essential in the plant growth and reproduction, as well as act as antipathogenic agent and contribute to the color or pigmentation [35]. In food are responsible for the color, astringency, flavor [39] and oxidative stability [40]. Bioactive compounds that include phenolics are found in vegetables in free form or linked/complexed to sugars (glycosylated) and/or proteins [41]. Phenolics compounds are since simple to high degree of polymerization molecules with variable structure. Ribéreau-Gayon [42] proposal consists in a classification of them in three categories: little distributed in nature, polymers and widely distributed in nature. Within this classification in the category of phenolic little distributed in nature, it is possible found a reduced number, although with a certain frequency, simple phenols such as pyrocatechol, hydroquinone, resorcinol, and aldehydes derived from benzoic acids, which are constituents of essential oils, such as vanillin [43]. In the category of polymers some free phenolic are found (tannins and the lignans). The last category of compounds widely distributed in nature includes the most commonly phenolic of the vegetables: the flavonoids (anthocyanins, flavonols, and their derivatives), the phenolic acids (cinnamic and benzoic acids and their derivatives), and the coumarins [44]. 
According to the mode of action, phenolic compounds are included in the category of primary antioxidants as eliminators of free radicals, being very effective in autoxidation prevention [45]. Interact preferentially with the peroxyl radical for being this most prevalent in the autoxidation phenomenon and have less energy than other radicals, which favors your hydrogen abstraction [46]. The resultant fenoxil radical, although relatively stable, can interfere in propagation reaction when react with a peroxyl radical, via interaction between radicals. The compound formed, by the ultraviolet light action and high temperatures, may lead to new radicals, compromising antioxidant efficiency, which is determined by the functional groups, and position that they occupy in the aromatic ring, as well as, by the size chain of these groups [45; 47]. Phenolic acids The polyphenols or phenolic acids commonly found in plants are hydroxy derived from cinnamic and benzoic acid, that presented carboxyl functional group in your structure [30; 48-49]. They are classified into two groups: hydroxycinnamic acid derivatives and hydroxybenzoic acid derivatives. Hydroxycinnamic acid derivatives have an aromatic ring with a carbonic chain constituted of three carbons bonded to the aromatic ring. The p-coumaric, ferulic, caffeic, and synaptic acids are common examples typically presented in the form of ester. The most common example is the chlorogenic acid, which is the quinic acid esterified to caffeic acid. They are also found in glycosylated form (linked to glycosides/sugars or complex carbohydrates), in protein-bound, in other cell wall polymers, and rarely as free phenolic acids form [50][51][52]. Chlorogenic acid (5-cafeolquinic acid) is the main biologically active dietary phenol of foods. When hydrolyzed by the intestinal microflora it produces various aromatic acids as metabolites, including caffeic and quinic acids [58]. As well as caffeic acid, chlorogenic acid presented adjacent/free hydroxyl groups linked in the carbon of the aromatic ring, and this is related to the benefits (antimutagenic, anticarcinogenic, and mainly antioxidant in vitro activity) [59]. Coffee and legumes (beans) are the main sources of chlorogenic acid in the diet [60]. Ferulic acid is another phenolic acid widely found in vegetables, fruits and beverages, such as coffee and beer. It is derived from caffeic acid and belongs to the hydroxycinnamic acids class [32; 61]. The interest in this and other phenolic acid derivatives from the caffeic acid began in late 1950, when Preziosi, Loscalzo and Bianchi [62] and Preziosi and Loscalzo [63][64][65] elucidated and developed the mechanisms of action of this substance in the human body and verified action related to bile secretion by the liver (coleretic) and decrease of serum lipid levels (hypolipidemic) in addition to diuretic functions. Even with the discovery of these and other beneficial functions to human body, only recently this and other phenolics gained attention as potential coadjuvants in therapies and treatments of various diseases induced by free radicals. Ferulic acid, in particular, presented as the new antioxidant compound with a strong cytoprotector activity, both the ability of eliminating free radicals as activating the cellular stress response. 
Some unfavorable points are related to pharmacokinetics and metabolic mechanisms/pathways of this, due its low bioavailability of this compound after ingestion and/or oral administration and the limited number of clinical studies conducted in order to verify and demonstrate its efficacy and pharmacologic, toxicology safety, which limited the evidence regarding the potential interest of phenolic acid in relation to your action on the human organism [66]. P-coumaric acid other phenolic acids are widely distributed in fruits (pear, apple, grape), cereals, legumes (beans), and other vegetables (potato, tomato and teas). P-coumaric acid is a metabolite present in plants, being intermediate product in the synthesis of other phenols [60] and participating in the metabolic pathways of the phenylpropanoid. These substances are plants metabolites, components of plant essential oils that have as basic chemical structure a group ring phenyl (benzene ring) attached to a side chain with three carbon atoms [67]. Studies with animals (in vivo) suggest that p-coumaric acid presents antioxidant and anti-inflammatory properties in the mucosa of intestinal cells in rats [68][69][70][71][72][73][74]. It has also been attributed to this phenolic acid the ability to prevent the LDL cholesterol oxidation (a low-density lipoprotein) avoiding disruption of the chain caused by reactive species (free radicals) [70], preventing lipid peroxidation on the basis in its synergistic antioxidant activity with other natural antioxidant compound named tocopherol (vitamin E) [72]. Its mechanism of action is not fully known; However, it has been proposed that its antioxidant action is based on the capture/scavenger of reactive oxygen species (ROS) [75]. As well as the phenolic acids mentioned above, we have representatives of the class of flavonoids (another class of phenolic compounds) such as quercetin, rutin, catechins and epicatechin, which also are widely distributed in plants in nature and has great importance as antioxidants [13; 58; 60; 76-79]. Flavonoids Flavones and related compounds occur widely in vegetables. Catechins, quercetin, and pyrocatechol derivatives flavonoids have high antioxidant activity and have been used to stabilize lard. Polyhydroxychalcon are powerful antioxidants found in cabbage and other leafy vegetables, peppers, soy, peas, peanuts, cocoa, cottonseed and many other vegetables. Some flavonoids derivatives of pyrogallol, as the tannins in tea, also have antioxidant activity. The flavonoids of the tea are of practical interest, because tea presents sediment and leaves that are not suitable for drying and processing, and therefore these "waste" would be available in significant quantities for the preparation of antioxidants extracts [80][81]. Flavonoids are formed in plants from aromatic aminoacids phenylalanine and tyrosine, and malonate [82]. The basic structure of flavonoids is the flavan ring, which consists of 15 carbon atoms arranged in three rings (C6-C3-C6), A, B and C ( Figure 1). The flavonoids classes differ in the level of oxidation and substitution pattern of the C ring, while the individual compounds within the class differ in the substitution pattern of rings A and B [83]. They are compounds that usually occur in plants in the form of glycosylated derivatives, which contribute to the intensity color of the different shades of blue, red and orange in leaves, flowers and fruits [84]. 
In addition to vegetables and fruits, the flavonoids are found in seeds, nuts, grains, spices, medicinal plants, and also in beverages such as wine (particularly red wine), teas, and, at lower levels, in beer [85]. They play different roles in the ecology of plants, being one of them like a visual pigment for pollinating insects, due to the fact the flavones, flavonols and anthocyanidins exhibit attractive color and being considerate natural pigments of plants and therefore helpers in pollination process. Another important role played by flavonoids is in defense mechanisms of the plant, since the catechins and other flavanols presented certain astringency, which can represent a system of defense against insect's attack [86]. In addition to these functions, the class of flavonoids, they still act in the photosynthesis process as catalysts of the light phase, as regulators of iron chains involved in phosphorylation [87], as protectors of stress in plant cells for elimination of reactive oxygen species (ROS) produced by electron transport in photosynthetic system [88], and finally, due to its properties of absorption of UV radiation, they protect plants from solar UV radiation and reactive species (ROS) generated by UV radiation [89]. More specifically, the flavones apygenin and luteolin are common in cereal grains and herbs (parsley, rosemary, thyme), while its analogues hydrogenated hesperidin (glycosylated form of hesperitin) and naringenin are predominantly in citrus [90]. The flavonols quercetin and kaempferol are prevalent found in vegetable and fruit peels, with exception of the onions. Isoflavones are found most often in legumes, including soy, beans (black and green) and peas, alfalfa and sunflower seeds [91]. The flavan-3-ols (flavanols) such as catechins, epicatechin, epigallocatechin and their gallate esters are widely distributed in plants, although they are present in significant quantities in tea leaves. The flavans polymerized as proanthocyanidins (condensed tannins), that are dimers or oligomers of catechins and epicatechin, are present in apples, grapes, red berries (berries in general too), persimmon, black currant, sorghum and barley grains [92]. The anthocyanidins and its glycosylated forms (anthocyanins) are abundant in red fruits (berries) and red grapes [93]. Anthocyanins are substances that accumulate in the vacuoles of a large range of vegetative cells and tissues in reproductive organs of the plants [94]. They are also part of the plant pigments, as well as the carotenoids, being responsible for the coloration of plants (flowers, leaves and fruit), ranging from red, pink, purple, until blue [95]. Structurally are formed by di or trihydroxy β-rings substituted containing a flavilium cation which, due to conjugated double bond absorbs light in the range of visible light, with the peak wavelength is around 500-550 nm. The range of derived anthocyanins are, as well as the colors that display in plants depends on the degree of hydroxylation and the number and/or type of replaced groups. Aglicons forms of anthocyanin, i.e., anthocyanidins, are usually penta or hexa-hydroxyl replaced and is not linked with organic sugars, proteins and phospholipids [96]. 
The functionality and mechanisms of action of these compounds have always been, and remain, open questions that require larger studies, because the benefits derived from anthocyanins in vegetables differ according to the species of compound in question and the form in which it occurs in the plant, so that a universal explanation of the mechanisms of action of anthocyanins is not possible [94]. Even so, the current literature presents at least four recent lines of evidence regarding the functionality of anthocyanins: solar protection agents with antioxidant action, mediators of the oxidative chain reactions induced by reactive oxygen species (ROS) [94; 97], metal chelators [98][99][100][101][102][103][104] and retarders of leaf senescence, especially in plants growing under nutrient deficiency [105][106][107].

Acerola and its antioxidant compounds

Acerola (Malpighia emarginata DC), a tropical fruit, is a small cherry with juicy pulp and sweet flavor, known primarily for its high content of vitamin C. The fruit originates from species native to the West Indies that adapted to Central and South America [108]. In the past, the plant was known by two scientific names considered synonyms, Malpighia glabra L. and Malpighia punicifolia L., which were later standardized under the scientific name Malpighia emarginata DC. [109]. It is found from southern Texas (USA), Mexico and Central America to parts of subtropical Asia, India and South America, Brazil being the greatest exponent and one of the world's largest producers [110][111]. From the nutritional and botanical standpoints through to the industrial-technological (processing) one, the fruit has a short post-harvest life (2-3 days) at ambient temperature, and its ripening involves a number of chemical reactions, the main one being the conversion of chloroplasts to chromoplasts with concomitant production of carotenoids, anthocyanins and phenolics, besides volatile compounds. The whole ripening process leads to the characteristic flavor of the ripe fruit [112]. On the market, the most common forms of commercialization are the fresh (in natura) fruit and the frozen pulp and juice [113]. Consumption of the fruit is associated with health benefits, such as reduced risk of cancers, arterial hypertension and other heart diseases [111; 114], attributed to its high nutritional value and to the presence of antioxidant compounds, mainly its high content of ascorbic acid (vitamin C), whose levels vary between 300 and 4600 mg 100 g -1 of fruit, making it one of the most important natural sources of this vitamin [115][116][117]. Acerola fruits also offer other important nutrients, such as anthocyanins (phenolic compounds of the flavonoid class which, together with the carotenoids, are responsible for the fruit's color), carotenoids, minerals (phosphorus, iron and calcium) and B vitamins (thiamine, riboflavin, niacin) [115; 117-120]. Carotenoids are present at levels ranging from 371 to 1881 mg 100 g -1, beta-carotene being the major carotenoid (40-60% of total carotenoids) [121][122]. Among the flavonoids, the main class is represented by the anthocyanins (37.9-597.4 mg kg -1) and flavonols (70-185 mg kg -1) [123][124][125]. The high antioxidant content of acerola motivates the development of technologies and studies aimed at obtaining these compounds and applying them in food matrices as an alternative to the synthetic antioxidants commonly used by the industry.
This is mainly due to the antioxidant action of its components, whose mechanism is based on inhibiting the oxidation of cells and biomolecules such as DNA, proteins and lipids by eliminating the free radicals formed, which are responsible for the oxidation of biological systems [126][127]. This action of antioxidant compounds is related to the health benefits reported previously, namely the prevention of hypertension, arteriosclerosis and myocardial infarction [128-130].

Oxidative stress and oxidation in biological systems

Oxidative stress is defined as the state of the organism that involves cellular damage through the release of free radicals or non-radical oxygen species in the absence of scavenging and/or neutralizing factors, constituting a disproportion in redox reactions (oxidation), i.e., an imbalance between oxidizing and reducing factors in biological systems and foods [131][132][133]. It can be regarded as an interruption of the redox circuits that are part of signal transduction pathways, such as the cysteine moieties regulated by glutathione or thioredoxins. This definition led to the creation of methods to distinguish such redox-signaling disruptions and thus control them, providing simple and viable markers for research on, and treatment of, diseases whose culprits are the reactive oxygen and nitrogen species [132][133]. These reactive species may be unstable radicals containing at least one unpaired electron or oxidized radical species, which can promote the lipid peroxidation of membranes with accumulation of lipid peroxides [134][135][136]. During cellular respiration, unpaired electrons are transferred to molecular oxygen, generating free radicals and reactive oxygen molecules [137]. These reactive species are present at physiological levels during the normal operation of cells and can react with some immune system cells (immune receptors) [138]. In excessive amounts, however, they are able to attack vital biological molecules such as nucleic acids, lipids (mainly polyunsaturated fatty acids), proteins and carbohydrates. Once DNA is damaged, it becomes susceptible to mutations, from which arises the physiopathology of several diseases, such as heart disease (atherosclerosis, cardiorespiratory insufficiency), type 2 diabetes, neurodegenerative diseases (Alzheimer's, Parkinson's), infections/inflammations and cancer [136; 139]. During normal biological processes, ROS are formed in small quantities and the antioxidant systems of the organism can neutralize them. However, under stress conditions such as drug ingestion, metabolic disorders or UV radiation exposure, ROS can be generated in quantities that exceed the normal defense capability of the antioxidants, causing the oxidation of biomolecules and initiating oxidation in the tissues [141][142]. In these circumstances, antioxidants present in the diet slow the chain reactions of oxidation, acting in the initiation and/or propagation stages of the oxidative process [45; 135; 143].

Lipid oxidation and antioxidant use in foods

During storage, edible oils and fats, as well as lipid-based foods, undergo oxidation, and the products formed cause oxidative rancidity and reduced sensory properties in foods. Rancid off-flavor products are produced, as well as potentially toxic compounds [144].
It is known that autoxidation is a set of chain reactions of lipid oxidation in the presence of catalytic oxidizers (light, oxygen, temperature, metals), driven by the tendency to stabilize the unstable molecules, such as free radicals, formed during the chain reactions. In foods, the main oxidizer is usually the molecular oxygen present in the air, and the main catalysts are light and certain metal ions [145]. The set of chain reactions that occur during lipid oxidation is divided into three phases, known as initiation, propagation and termination. Lipid autoxidation starts through the reaction of an unstable radical (an oxygen radical, for example) with an oil containing a significant amount of unsaturated fatty acids (double bonds between carbons of the chain), whether mono- or polyunsaturated; the more unsaturated the lipid, the more unstable and susceptible it is. Initiation therefore occurs when a fatty acid of the lipid has a hydrogen removed from the methyl end (CH3) of its chain by an unstable radical, triggering a series of reactions aimed at stabilizing the radicals formed. There are several initiators of this process in foods, cited previously, but the main ones are light (UV), transition metal ions and certain enzymes. The products formed during lipid oxidation are responsible for the loss of food and nutritional quality and give lipid-based foods their rancid, oxidized character [144]. In order to avoid or reduce the reactions of the oxidative process, it is necessary to stabilize food lipids through the use of antioxidants (natural or synthetic), classified into groups according to their mechanism of action (Table 1) [49].

Table 1. Classification and mechanism of action of lipid oxidation inhibitors. Source: POKORNÝ (1991)

Studies on the use of antioxidants to prevent or reduce oxidative rancidity in lipid-based foods began about 60 years ago. Several natural substances have been studied since then; however, synthetic compounds presented advantages for the food industry, such as availability, consistent quality, reduced price and high antioxidant potential, often greater than that of natural substances. Gradually, the legislation in this area was adapted to allow the use of animal-tested products with assured safety. As a result, increasingly complex and time-consuming tests have been required to ensure the toxicological safety of synthetic substances used as antioxidants in foods [146]. Since the 1970s, consumers and regulatory agencies have questioned the safety of synthetic substances, even those considered safe in the past. The industry has made efforts to replace synthetic antioxidants with natural alternatives, since synthetics require expensive tests to prove their harmlessness and their use is contested and not accepted by a large share of consumers [147]. The antioxidant agents of greatest acceptance are common food ingredients whose use is not restricted or prohibited by law. Many foods contain compounds with antioxidant activity (Table 2); however, they can promote sensory changes in the final product, altering, for example, taste, aroma and color, and for this reason some of them have limited use. Moreover, they often present low antioxidant activity or low solubility in lipids, which restricts their use in the stabilization of oils and fats, although they can be used in lipid-based foods [49].
Table 2. Natural sources of antioxidants. Source: POKORNÝ (1991)

"Identical to natural" substances or purified extracts obtained from edible raw materials can be used as natural antioxidants. They can also be prepared from raw materials not generally used as food. An example is NDGA (nordihydroguaiaretic acid), a pyrocatechol derivative isolated from a plant of the genus Larrea, which was one of the first known natural antioxidants. The use of natural and synthetic products as additives is allowed; however, questions about food safety have resulted in bans on certain products in certain countries, which once again demonstrates the need for toxicological evaluation and for assuring the safety and harmlessness of a compound when choosing an additive for use in food (Figure 2) [49]. Effective ways to prevent and reduce lipid oxidation in food include minimizing the loss of the natural tocopherols of foods, eliminating contamination by metals and adding (natural) antioxidants. Much research has been conducted to better understand the basic processes of lipid oxidation of polyunsaturated fatty acids, the action of antioxidants and the effects of the decomposition products formed during the oxidative process. To better understand the action of antioxidants and their effects, it is important to obtain specific information on their chemical structure and on the oxidation products whose formation they inhibit. Several tests are necessary to elucidate the mechanisms by which oxidation products act on the oxidative deterioration of food lipids [148][149]. The tocopherols are the most important class of natural antioxidants found in foods derived from vegetable oils. They are able to disrupt lipid autoxidation through their role in the chain of oxidative reactions, in the propagation and decomposition steps. α-Tocopherol at high concentrations inhibits hydroperoxide decomposition. Ascorbic acid, another important natural antioxidant, acts synergistically with the tocopherols (regenerating them, for example). Ascorbic acid also shows complex multifunctional effects and may act as an antioxidant, a pro-oxidant, a metal chelator (inactivating metal initiators), a reducing agent (for example, reducing hydroperoxides) or an oxygen scavenger [149].

Lipid emulsions and oxidation

An emulsified lipid system has three components that determine its behavior: the lipid component (oil phase), the interfacial material or emulsifier (which mediates the interaction between the lipid and aqueous phases) and the aqueous component. Each of these can have a complex chemical composition. The lipid phase, for example, can be partially or fully crystalline and is subject to chemical changes such as oxidation or lipolysis, while the aqueous phase may contain ions that destabilize the emulsion. To understand the functional properties of these systems it is necessary to understand the properties of each component, individually and together [150]. An emulsion consists of two immiscible liquids (usually water and oil), one dispersed in the other in the form of small spherical droplets. In most foods, the diameter of these droplets ranges from 0.1 to 50 µm. A system consisting of oil droplets in an aqueous phase is known as an oil-in-water (o/w) emulsion, examples being mayonnaise, milk and cream soups. A system consisting of water droplets dispersed in an oily phase is known as a water-in-oil (w/o) emulsion, examples being margarine and butter.
An emulsion is a thermodynamically unstable system because of the free energy required to increase the surface area between the oil and water phases. Over time, emulsions tend to separate into a layer of oil (less dense) above a layer of water (more dense) [144; 151]. To obtain emulsions that are kinetically stable for a significant period (days, weeks or even months), substances known as emulsifiers must be added prior to homogenization. Emulsifiers are needed to make the phases miscible with each other. They are molecules that act at the surface of the droplets: they adsorb on the surface of the droplets formed during homogenization, forming a protective "membrane" that prevents the droplets from coming close enough to aggregate and separate [151]. The emulsifiers most commonly used in the food industry are amphiphilic proteins such as casein and whey proteins, soy or egg phospholipids (lecithin), and small-molecule surfactants such as spans, tweens or fatty acids [144]. Because of the instability of food emulsions, some factors are important for the interaction among fat particles and for the stability of these systems, such as the concentration of protein (emulsifiers), the temperature, and the concentration of fat particles and of non-adsorbed particles [152]. Koczo and collaborators [153] reported a stabilization mechanism for emulsions that involves the layering of sodium caseinate micelles in the form of thin films between the fat particles. The existence of this thin protective film between the fat droplets prevents them from approaching each other and increases the stability of the emulsion formed. The oxidative stability of food emulsions is also important for the food industry. The oxidation of unsaturated fatty acid chains proceeds differently in an emulsified system compared with a bulk vegetable oil. The techniques used to monitor the development of the oxidative process in oils and fats can be applied to emulsions if the lipid fraction is recovered from the system before the analytical determination is carried out. These techniques measure changes in the concentration of molecules that are indicative of the oxidative process: some measure the loss of reactive starting molecules (oxygen, lipid), while others measure the formation of primary and intermediate oxidation products (conjugated dienes and hydroperoxides) and of secondary oxidation products (alcohols, aldehydes, ketones and hydrocarbons) [144].

Accelerated shelf-life tests for lipid oxidation determination

Rancidity in edible oils and lipid-based foods occurs through oxidation reactions and is a common problem in the food industry. Three main factors are related to it: the presence of polyunsaturated fatty acids (such as omega-3 and omega-6, or w-3 and w-6) in food formulations, the decrease or elimination of synthetic antioxidants, and the iron fortification of certain foods (such as wheat flour). Lipid oxidation not only produces rancid odors and flavors but also affects quality and nutritional safety through the production of secondary oxidation compounds formed after baking or processing [148-149; 154]. Because of the negative effects of lipid oxidation in food, it is essential that the industry study the oxidative stability of the lipids in its products, especially those most susceptible to this problem, before selling them. For that purpose, the industry seeks more accurate methods of estimating stability and shelf life that can be carried out in a relatively short period.
Some accelerated tests (ASLT, accelerated shelf-life tests) are available to determine the shelf life and oxidative stability of lipids. It is known that oxidation reactions accelerate exponentially with temperature, and therefore this parameter is typically used to speed up the oxidative process in most lipid stability tests [154][155][156]. Accelerated shelf-life and stability methods have been studied comprehensively by Ragnarsson and Labuza [154] and Rossell [156]. However, these authors recognized that the conclusions reached in many oxidation studies may not be valid because of the choice of inappropriate methods for evaluating oxidative stability. Thus, interpretations of lipid oxidation data must take into account the limitations of the method used [157][158][159]. Traditional stability methods are listed below (Table 3) in increasing order of severity of the oxidizing conditions used [148][149]. Originally, these methods were developed for homogeneous lipids, such as animal fats and bulk vegetable oils. Unfortunately, they present disadvantages, and only a few older studies, such as those by Pohle and collaborators [160] and Paul and Roylance [161], evaluated them critically. Pohle and collaborators (1964) concluded from their results that, for accelerated methods to yield any satisfactory information, each test must be adapted to the product under study and must take into account, above all, the lipid profile of the product. Another drawback is that these methods, with the exception of the active oxygen method (AOM), are not standardized; analysts therefore need to adapt them to the formulation and type of product in the study [154]. Although stability tests under normal conditions come closer to the actual conditions of food storage, the procedure is too slow to have practical value. Besides this negative aspect, the reproducibility of the results is compromised by many variables that are difficult to control over extended storage periods. Another factor to consider is that, given the current interest in the use of natural antioxidants in food and the development of new oils and vegetable oil blends, it is appropriate to re-evaluate the current methods for assessing the oxidative stability of foods and edible oils and for measuring the effectiveness of antioxidants [148]. There are four parameters to control in an accelerated shelf-life test: temperature, oxygen pressure, addition of metals (pro-oxidants), and contact with accelerating reagents. The reaction rate increases exponentially with increasing temperature and, therefore, the shelf life should decrease logarithmically with increasing temperature [148; 154]. The lipid oxidation of single-component systems can be represented by the following reaction scheme:

RH (lipid) + O2 → ROOH → secondary products (rancidity)   (Equation 1)

where RH = polyunsaturated fatty acid (PUFA), O2 = oxygen and ROOH = hydroperoxides (primary products). To estimate the stability or susceptibility to oxidation, the sample is subjected to an accelerated oxidation test under standardized conditions and an appropriate end point must be chosen to detect signs of oxidative deterioration. These tests should therefore be appropriate to the formulation, and the conditions used should be kept as close as possible to the storage conditions of the product.
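The exponential temperature dependence mentioned above is often summarized by a Q10 factor (the fold change in oxidation rate per 10 °C rise). As a minimal sketch of how an induction time measured in an accelerated test could be extrapolated to a storage temperature, the Python snippet below assumes a hypothetical Q10 value, test temperature and induction time; none of these numbers come from this work, and in practice the Q10 itself must be determined for each product, in line with the caveat that accelerated methods have to be adapted to the product's lipid profile.

```python
# Minimal sketch: extrapolating an accelerated shelf-life result to storage
# temperature with a Q10 model. The Q10 value and the induction time are
# hypothetical placeholders, not data from this work.

def shelf_life_at_storage(t_accel_days, temp_accel_c, temp_storage_c, q10):
    """Estimate shelf life at the storage temperature from an accelerated test.

    Assumes the oxidation rate changes by a factor of `q10` for every 10 degC,
    so shelf life scales by q10 ** ((T_accel - T_storage) / 10).
    """
    return t_accel_days * q10 ** ((temp_accel_c - temp_storage_c) / 10.0)


if __name__ == "__main__":
    # Hypothetical example: induction time of 9 days in a 40 degC oven test,
    # extrapolated to 20 degC storage with an assumed Q10 of 2.
    estimate = shelf_life_at_storage(t_accel_days=9, temp_accel_c=40,
                                     temp_storage_c=20, q10=2.0)
    print(f"Estimated shelf life at 20 degC: {estimate:.0f} days")
```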
With a view to the practical application of these tests, the results and predictions regarding the oxidative stability of lipid foods and bulk oils should correlate with the shelf life of the product and with the induction period (IP), defined as the time necessary to reach the oxidation end point. The first step in a typical shelf-life study is to select a method appropriate to the food product. A sample is placed under the time and temperature conditions of the test and the resulting induction/incubation time is translated into a shelf life for the product, expressed as months of storage; this conversion is usually done arbitrarily, based on the experience of the person performing the analysis. As lipid-based foods are highly susceptible to oxidation of their lipids, such methods are routinely applied to evaluate the effectiveness of antioxidants in preventing or reducing the reactions of the oxidative process [148; 154].

Table 3. Source: FRANKEL (1993, 1996)

The methods that use oxidation catalyzed by light or metals provide quick results. However, the photo-oxidation mechanism is different from the free-radical autoxidation that usually occurs in food; photo-oxidation results in a different flavor profile, with different volatile products being formed [162]. Similarly, metal-catalyzed oxidation can result in a higher proportion of carbonyl products relative to the level of primary hydroperoxides [148]. The weight-gain method, based on the increase in weight due to oxygen absorption, is not very sensitive: its end point requires a level of oxidation beyond the point at which flavor deterioration is detectable in polyunsaturated oils. The Schaal oven test, conducted at 60-70 °C, has fewer limitations associated with hydroperoxide content, because its end point may represent a lower degree of oxidation, better correlated with real storage conditions; at 60 °C, several secondary reactions are minimized [154]. Stability tests using high temperatures, including the oxygen uptake method, the oxygen bomb, the active oxygen method (AOM) and the Rancimat, are unreliable, since the mechanism of lipid oxidation changes significantly at high temperatures. The content of volatile acids is measured automatically in the AOM and Rancimat tests [163]. These types of tests present limitations, which include [148]:
1. Oxidation rates become dependent on the oxygen concentration, because the solubility of oxygen decreases at high temperatures;
2. Oxidation occurs so quickly that it causes drastic changes in the availability of oxygen.

Conclusion

The questions raised about the safety for human health of synthetic substances used as antioxidants by the food industry have motivated many studies aimed at finding natural substitutes obtained from plant sources. Natural sources present a considerable range of bioactive compounds in their composition, the main representatives in acerola being ascorbic acid (vitamin C) and phenolic compounds (phenolic acids and anthocyanins). This review made it possible to verify the potential of vegetable matrices as antioxidants for the food industry. However, further studies are required, given their diverse and complex chemical composition, which is strongly influenced by several factors, in order to offer a feasible alternative for replacing synthetic antioxidants in the food industry.
Introduction

Lipid oxidation occurs both in biological systems and in lipid-based foods such as oils and emulsions, and consists of a chain of oxidation-reduction reactions. It is a slow process that develops in three stages: induction (catalyzed by light, oxygen and metal ions), propagation, and termination. In foods, it is responsible for the formation of undesirable chemical compounds, which often cause off-flavors and off-tastes and may be harmful to health. In order to avoid or reduce oxidation reactions in lipid-based food products, e.g. emulsions, the industry uses antioxidant compounds, in most cases synthetic substances with a preservative function, which stabilize or neutralize free radicals and consequently increase the shelf life of food products. Many methods have been developed to study oxidative stability and to follow the evolution of the process in the presence of antioxidant substances in the system. These methods assist in evaluating the antioxidant capacity of isolated compounds and plant extracts, measured through oxidation products such as hydroperoxides and other products of lipid deterioration, such as conjugated dienes (Brand-Williams, Cuvelier, & Berset, 1995). The methods for evaluating the antioxidant potential of biological samples are based on two main mechanisms of action: transfer or donation of a hydrogen atom (HAT) and electron transfer (ET). Most HAT assays rely on competition between the antioxidant and the substrate for the free radical formed, which in the case of the ORAC assay is the peroxyl radical generated by the thermal decomposition of an azo compound. The ET assays are based on colorimetric reactions and evaluate the ability of an antioxidant compound to reduce an oxidant which, when reduced, changes color; the degree of discoloration is then correlated with the concentration of antioxidants. Methods that express antioxidant capacity as Trolox equivalents (TEAC), such as the DPPH, ABTS and FRAP assays, are the main known ET assays, while among the assays that measure antioxidant capacity against oxygen radicals, typically of the HAT type, ORAC is the best known and most used (Zulueta, Esteve, & Frigula, 2009). Fruits and vegetables are highly perishable and are often submitted to different processing technologies in order to preserve quality and extend shelf life. One of the most used is dehydration and/or drying (Zotareli, Porciuncula, & Laurindo, 2002; Lüle & Koyuncu, 2015). However, drying methods that use high temperatures can cause decay and loss of thermo-sensitive compounds, including those with antioxidant action, which are known to reduce and/or eliminate the oxidative processes initiated by free radicals and other reactive species in biological systems and in foods (Fujita et al., 2013). The expansion of the industrial use of acerola, a fruit from tropical America with extensive cultivation in Brazil and a high ascorbic acid content, is related to its processing into juice, jelly and jam. A large amount of residue is generated after processing and is normally discarded, causing losses of raw material and energy and generating environmental impacts; this residue may represent up to 70% of the total volume produced (Da Cunha et al., 2016).
Studies on fruit residues have shown a high antioxidant potential in acerola waste, associated with the presence of phenolics, flavonoids and ascorbic acid (Barrozo, Santos, & Cunha, 2013; Bortolotti et al., 2013; Barrozol et al., 1996; Duzzioni et al., 2013). Another major component of acerola waste is water, which can represent up to 80% of its composition, a fact that limits the shelf life, transport and storage of these residues. Accordingly, drying methods such as spray drying (atomization or micro-encapsulation) or freeze drying (lyophilization) are needed for better industrial application. This set of factors demonstrates the need for statistical tools in experimental planning (factorial or univariate) and in the optimization of experimental parameters, which are essential for the proper understanding and analysis of the results and information generated (Cunico et al., 2008; Pereira-Filho, Poppi, & Arruda, 2002). The main objective of this study was to apply optimization techniques to the extraction of natural antioxidants from the pulp and seeds of acerola fruits, studying the variables temperature, ethanol concentration and sample:solvent ratio, with the reducing power and the antioxidant activity by DPPH and ABTS as responses. In addition, a chemical characterization was carried out. Finally, the oxidative stability study used the Schaal oven test, evaluating two oxidation products (hydroperoxides and conjugated dienes) in a model lipid system to which natural extracts obtained from acerola (pulp and whole fruit) and synthetic antioxidants (BHA and BHT) were added.

Sample preparation and characterization

The acerola fruits and commercial pulps were provided by the agricultural cooperative of [...].

Experimental design

A central composite rotatable design (CCRD) was applied, involving three independent variables (2³) and five levels for each exploratory variable. The range studied for the variable temperature was 30 °C to 60 °C, the degree of ethanol hydration (ethanol concentration) ranged from 0% to 95%, and the sample:solvent ratio (in mass/volume, m/v) from 1:10 to 1:30. In total, 18 runs were carried out, including four repetitions of the central point (Table 1). All experiments were conducted in random order using the same equipment. The effects were analyzed by response surface methodology and multiple regression analysis in the Statistica software (Statsoft, 2001). The mathematical models were fitted, including linear and quadratic terms and interactions of the exploratory variables, and evaluated by means of the coefficient of determination (R²) and analysis of variance (ANOVA and F-test). The responses were the reducing power and the in vitro antioxidant activity (DPPH and ABTS). The reducing power of the extracts was determined by the method of Singleton, Orthofer, and Lamuela-Raventos (1999), which uses the Folin-Ciocalteau reagent, with gallic acid as the standard and readings in a spectrophotometer at 765 nm. The results were expressed in mg of gallic acid equivalent (GAE) per gram of sample (mg GAE g -1). The antioxidant activity was evaluated by the DPPH and ABTS methods, with Trolox (6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid) as the standard. The DPPH assay was conducted according to the methodology of Brand-Williams, Cuvelier, and Berset (1995), adapted by Kim et al. (2002). The method is based on the ability of the antioxidant to reduce the oxidized DPPH radical (2,2-diphenyl-1-picryl-hydrazyl-hydrate) by means of hydrogen donation.
This reaction is followed by the discoloration of the radical, with the absorbance read in a spectrophotometer at a wavelength of 515 nm, 45 minutes after the start of the reaction. The ABTS method used was that described by Re et al. (1999) and modified by Kuskoski et al. (2004). The method has the same principle as the DPPH assay, relying on hydrogen donation to reduce the ABTS radical (2,2′-azinobis(3-ethylbenzothiazoline-6-sulphonic acid)), promoting a discoloration that is quantified in a spectrophotometer at 734 nm, 6 minutes after the reaction. The results of the DPPH and ABTS methods were expressed as Trolox equivalents (TEAC), in μmol Trolox g -1 of sample.

Extract preparation and characterization

After the response surface analysis, the ideal condition was selected and the mathematical model was validated by determining the reducing power, expressed in milligrams of gallic acid equivalent per mL of extract (mg GAE mL -1), and the antioxidant activity by DPPH and ABTS. Additionally, the following determinations were performed: antioxidant activity by the ORAC method, quantification of phenolics and ascorbic acid, and quantification of anthocyanins. The FRAP method was performed according to Benzie and Strain (1996) and Rufino et al. (2006). The FRAP reagent was prepared by mixing 0.3 M acetate buffer (pH 3.6) and 10 mM TPTZ solution (2,4,6-tri-2-pyridyl-1,3,5-triazine) [...]. The identification and quantification (in triplicate) of the substances was carried out by comparing their retention times and their absorption spectra in the ultraviolet region.

Emulsion preparation

The model lipid systems were prepared in accordance with Huang et al. (1996). Three [...].

Oxidative stability

To evaluate the efficiency and feasibility of applying the natural antioxidant extracts obtained from acerola in comparison with synthetic antioxidants, oxidative stability studies were conducted by analyzing lipid oxidation products (hydroperoxide content and UV absorptivity) in model emulsified lipid systems without addition of antioxidants (control), with synthetic antioxidants (BHA and BHT) and with natural antioxidants (derived from acerola pulp and whole fruit). The emulsions were stored in an oven at 40 °C and samples of each treatment were collected and frozen at -20 °C every three days during the nine days of storage. The hydroperoxide content was determined according to Shantha and Decker (1994): to prepare and dilute the sample, 0.3 mL of the oil separated from the emulsion was added to 1.5 mL of an isooctane/iso-propanol mixture [...]. The absorbance reading for conjugated dienes was taken in a spectrophotometer at a wavelength of 232 nm, using isooctane as the blank. The results were calculated by means of the following equation:

E = Abs/C   (Equation 2)

where E = % conjugated dienes, Abs = absorbance at the specified wavelength (232 nm) and C = concentration of the sample solution in g 100 mL -1.

Statistical Analysis

The physicochemical parameters and the extract characterization data were analyzed in a completely randomized design, evaluated using the SAS software (SAS Institute, 2011) and submitted to analysis of variance (ANOVA and F-test). A Tukey test at the 5% significance level (p<0.05) was performed to determine statistically significant differences between samples. The analyses were carried out in triplicate, and the mean and standard deviation were calculated for each sample or treatment.
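Both this statistical treatment and the randomized-block analysis described next reduce, in essence, to an analysis of variance followed by Tukey's test at α = 0.05. A minimal sketch of that calculation is given below, written in Python with scipy and statsmodels rather than the SAS procedure used in the study; the triplicate values and group names are hypothetical placeholders, not the measurements of this work.

```python
# Minimal sketch: one-way ANOVA followed by Tukey's HSD test at alpha = 0.05,
# using hypothetical triplicate data for three illustrative samples.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = {
    "pulp":            [61.2, 60.8, 61.7],
    "commercial_pulp": [64.3, 63.9, 64.8],
    "mature_fruit":    [58.1, 58.9, 57.6],
}

# ANOVA F-test across the three groups.
f_stat, p_value = f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's test identifies which pairs of means differ at the 5% level.
values = np.concatenate(list(data.values()))
groups = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```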
For the oxidative stability analyses, a randomized block experimental design was used to analyze the results, with three oils, eight treatments per oil and four storage times, all in triplicate. The data obtained (means and standard deviations of the three repetitions) were submitted to analysis of variance (ANOVA and F-test), and the means were compared by the Tukey test at a 5% significance level (α = 0.05).

Experimental design

Factorial planning assists in determining which factors have relevant effects on the desired responses and how the effect of one factor varies with the levels of the other factors. In addition, it allows the correlations between the different factors to be established and quantified. Without factorial planning, important interactions between factors may not be detected and maximum optimization of the system may take longer to achieve. Beyond these advantages, factorial planning makes the determination of the optimal condition feasible, or at least indicates it (Cunico et al., 2008). The results are presented in the following tables (Tables 2-5).

Table 2. Reducing power (mg GAE g -1), DPPH and ABTS (µmol Trolox g -1) of acerola pulp extracts

For the reducing power, only the quadratic effects of the independent variables (ethanol concentration, temperature and sample:solvent ratio) were significant, all with negative coefficients (p≤0.05). Ethanol concentration had the largest effect on the reducing power of the acerola pulp extracts. For DPPH, only the linear effect of the sample:solvent ratio was significant, with a positive coefficient (p≤0.05). For ABTS, the quadratic effects of the studied variables were significant and presented negative coefficients (p≤0.05); the linear effect of the sample:solvent ratio was also significant, with a positive coefficient (p≤0.05). For both ABTS and DPPH, the sample:solvent ratio was the independent variable with the largest effect for the acerola pulp extracts. The interactions between the factors studied were not significant for any of the response variables. The analysis of variance (ANOVA) and the F-test showed that the calculated F for all variables studied (reducing power, DPPH and ABTS) was higher than the tabulated F value (Table 3). The mathematical models for the three response variables are presented below; they are complete second-order models in which no term was excluded, so that the R² value was not decreased and no effect was erroneously ignored.

Table 3. Analysis of variance (ANOVA) of the reducing power, DPPH and ABTS variables of acerola pulp extracts
* Significant at α = 5% (p ≤ 0.05); F0.95,9,8 = 3.23; F0.95,5,8 = 4.82

where: Y1 is the total reducing power (mg GAE g -1), Y2 is the DPPH antioxidant activity (µmol Trolox g -1), Y3 is the ABTS antioxidant activity (µmol Trolox g -1), x₁ is the ethanol concentration (%), x₂ is the sample:solvent ratio (m/v) and x₃ is the temperature (ºC).

The response surfaces generated for the reducing power, DPPH and ABTS confirm the statistical results obtained by ANOVA, F-test and effect analysis, and allow the conclusion that, for the reducing power, any temperature and sample:solvent ratio can be used provided that the ethanol concentration is between 25-75%. For the antioxidant activity by DPPH, no closed optimal region was observed, the highest responses being associated with the upper sample:solvent ratios (1:20-1:30). For the ABTS variable, the sample:solvent ratio was again the most significant exploratory variable (Figure 1).
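To make the fitting of these complete second-order models concrete, the sketch below builds the coded design matrix of a three-factor central composite rotatable design (8 factorial, 6 axial and 4 centre points, 18 runs, matching the experimental design described above) and fits the full quadratic model by least squares, reporting the coefficients and R². The response values are simulated placeholders, not the data behind Tables 2-5; in practice the coefficients estimated from the measured responses are the ones used to draw the response surfaces.

```python
# Minimal sketch of fitting a complete second-order (quadratic) response-surface
# model for three coded factors, as used in a central composite rotatable design.
# The design is generic; the response values are hypothetical placeholders.
import numpy as np

def quadratic_design_matrix(X):
    """Columns: intercept, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

# Coded CCRD: 8 factorial points, 6 axial points (alpha = 1.682), 4 centre points.
alpha = 1.682
factorial = np.array([[i, j, k] for i in (-1, 1) for j in (-1, 1) for k in (-1, 1)])
axial = np.array([[ a, 0, 0] for a in (-alpha, alpha)] +
                 [[0,  a, 0] for a in (-alpha, alpha)] +
                 [[0, 0,  a] for a in (-alpha, alpha)])
centre = np.zeros((4, 3))
X = np.vstack([factorial, axial, centre])          # 18 runs in total

# Hypothetical response (e.g. reducing power, mg GAE/g), for illustration only.
rng = np.random.default_rng(0)
y = 20 - 3*X[:, 0]**2 - 1.5*X[:, 1]**2 + 0.8*X[:, 1] + rng.normal(0, 0.3, len(X))

D = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)       # least-squares fit
y_hat = D @ coef
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - np.mean(y))**2)
print("Model coefficients:", np.round(coef, 3))
print(f"R^2 = {r2:.3f}")
```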
Acerola seed

For the acerola seeds, lower reducing power was observed in comparison with the acerola pulp; however, the seed extracts did not demonstrate lower antioxidant activity. Rezende, Nogueira, and Narain (2017), studying the optimum antioxidant extraction conditions in acerola pulp processing residues (skin and seeds), found total phenolic contents of 1034 mg GAE 100 g -1 and antioxidant capacities by the DPPH and ABTS methods of 155 µmol Trolox g -1 and 179.8 µmol Trolox g -1, respectively, and also observed that the linear effect of the solvent:sample ratio was the most significant variable, influencing both the phenolic extraction and the antioxidant capacity (DPPH and ABTS) (Table 4). Silva, Duarte, and Barrozo (2016) obtained average phenolic contents of around 726.6-913.6 mg GAE 100 g -1 in residues of acerola juice and pulp processing, while the residues of acerola juice processing analysed by Oliveira et al. (2009) presented a total phenolic content of 681 mg GAE 100 g -1. For the DPPH response, the intermediate condition of ethanol and temperature and intermediate to highest sample:solvent ratios (47%, 45 ºC and 1:20-1:30) resulted in higher activity (Table 4). For this response, however, [...] and, therefore, the model is not predictive; that is, with 95% confidence (α = 5%), the data are not explained by the proposed model. The regression coefficient R² of the model, which explains the variation of the data, was 77.50%. In view of this observation, the DPPH variable was excluded from the optimization (Table 5). Analysis of the effects, together with the ANOVA, showed that only the linear effects of the ethanol concentration and the sample:solvent ratio were significant for the reducing power, with positive and negative coefficients (p≤0.05), respectively; the quadratic effect of the ethanol concentration was also significant, with a positive coefficient (p≤0.05). For the ABTS variable, the quadratic effects of the ethanol concentration and sample:solvent ratio were significant, with negative coefficients (p≤0.05), and the linear effect of the sample:solvent ratio was also significant, with a positive coefficient (p≤0.05). For the seed, as for the pulp, the interactions between the variables were not significant for the reducing power, DPPH or ABTS. The ethanol concentration was the variable with the largest effect. The mathematical models for the reducing power, DPPH and ABTS of the acerola seed extracts, as for the acerola pulp, are complete second-order models in which no term was excluded. F0.95,9,8 = 3.23; F0.95,5,8 = 4.82, where: Y1 is the total reducing power (mg GAE g -1), Y2 is the DPPH antioxidant activity (µmol Trolox g -1), Y3 is the ABTS antioxidant activity (µmol Trolox g -1), x₁ is the ethanol concentration (%), x₂ is the sample:solvent ratio (m/v) and x₃ is the temperature (ºC). The response surface graphs, in agreement with the ANOVA and effect analyses, allowed the conclusion that for the three variables there was an optimal extraction range and that, again, the interactions were not significant. For the reducing power, the optimum extraction range was close to the central point (47%, 45 ºC and 1:20): an ethanol concentration of 25-75%, a sample:solvent ratio of [...] and temperatures of 36-54 °C can be used (Figure 2).
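As a complement to reading the optimum ranges off the response surfaces, the sketch below shows one simple way of locating the predicted optimum of a fitted full quadratic model: evaluate it over a grid of coded levels and convert the best point back to the real factor ranges of this design (ethanol 0-95%, sample:solvent ratio 1:10-1:30, temperature 30-60 °C). The coefficient vector is a hypothetical placeholder, not the fitted model of this study, and it reuses the column order of the previous sketch.

```python
# Minimal sketch of how an "optimal extraction range" can be read off a fitted
# second-order model: evaluate the model over a grid of coded levels and convert
# the best point back to real units. The coefficient vector below is hypothetical.
import numpy as np

# Hypothetical coefficients in the order used in the previous sketch:
# b0, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3
coef = np.array([20.0, 0.5, 0.8, 0.1, -3.0, -1.5, -0.4, 0.0, 0.0, 0.0])

def predict(x1, x2, x3, b=coef):
    return (b[0] + b[1]*x1 + b[2]*x2 + b[3]*x3 + b[4]*x1**2 + b[5]*x2**2
            + b[6]*x3**2 + b[7]*x1*x2 + b[8]*x1*x3 + b[9]*x2*x3)

# Coded levels between -1.682 and +1.682 map linearly onto the real ranges of the design.
def decode(code, low, high):
    centre, half = (low + high) / 2.0, (high - low) / 2.0
    return centre + code * half / 1.682

grid = np.linspace(-1.682, 1.682, 41)
best, best_y = None, -np.inf
for x1 in grid:
    for x2 in grid:
        for x3 in grid:
            y = predict(x1, x2, x3)
            if y > best_y:
                best, best_y = (x1, x2, x3), y

print("Predicted optimum (coded):", np.round(best, 2), "response:", round(best_y, 2))
print("Ethanol (%):", round(decode(best[0], 0, 95), 1))
print("Ratio (1:x):", round(decode(best[1], 10, 30), 1))
print("Temperature (degC):", round(decode(best[2], 30, 60), 1))
```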
Sample characterization

The chemical composition of acerola varies as a result of factors such as the cultivar (genetic differences), the environmental conditions of the growing region (rainfall, temperature, altitude, fertilization, irrigation, occurrence of pests and diseases) and the maturation process. The vitamin C content, pH, soluble solids content (SST), color, weight and size of the fruits are characteristics attributed to fruit quality, which are also influenced by these factors and therefore differ between evaluations of the fruit (Kawaguchi, Tanabe, & Nagamine, 2007; Nogueira et al., 2002; Souza et al., 2006). During maturation, a small alteration in pH, an increase in titratable acidity (TA), sugars and soluble solids, and a decrease in vitamin C content have been verified. Accordingly, the physicochemical characterization of the pulps and ripe fruits showed that the samples differed with respect to the color parameters, indicating differences in the maturation stage of the raw materials and in the quality and quantity of the compounds responsible for fruit color, such as carotenoids, anthocyanins and flavonoids. In the instrumental color evaluation, the commercial pulp samples presented the highest values of lightness (L*) (55.44), b* (45.64), Hue (59.21) and Chroma (53.13), being statistically different from the other samples. Only for the a* parameter did the mature fruit have the highest result (30.32), differing statistically from the others. This is mainly due to differences in crop location and genetic variety, but also to the different maturation stages, harvesting season, and climatic and soil conditions (Table 1). For the color parameters, studies have found L* values between [...] and Chroma values between 33.2-48.23 (Adriano, Leonel, & Evangelista, 2011; Canuto et al., 2010; Lima et al., 2014; Jaeschke, Marczak, & Mercali, 2016). Also with regard to the instrumental color of fruits and vegetables, studies have shown a high correlation between color parameters and bioactive compounds; for example, correlations have been observed between color parameters and the presence of carotenoids, anthocyanins and other polyphenols, and chlorophyll (Sant'anna et al., 2013; Meléndez-Martínez et al., 2007; Spada et al., 2012; Jiménez-Aguilar et al., 2011; Larrauri, Rupérez, & Saura-Calixto, 1997; Koca, Karadeniz, & Burdulu, 2006). Color is also an important quality indicator when evaluating the effects of processing on a fruit; for acerola, the L* parameter is associated with non-enzymatic browning phenomena and the decrease of ascorbic acid, the a* parameter with the anthocyanin content and, finally, the b* parameter with the yellow carotenoid contents.
Triplicate mean ± standard deviation (SD). Means followed by the same letter in a column do not differ significantly (p≤0.05, α = 5%).
Regarding the other physicochemical parameters evaluated, the pH and soluble solids content (SST) were significantly higher in the pulp, differing statistically from the other samples. A similar situation was observed for titratable acidity (TA), although the pulp sample did not differ from the mature fruit. Higher ratio (SST/TA) values were observed in the commercial pulp, which was statistically higher than the other samples (Table 2).
Investigations of the physicochemical characterization of acerolas have found pH values around 2.8-3.76, soluble solids contents ranging from 3.5-11.3 °Brix, titratable acidity between 0.53-1.52 g citric acid 100 g -1 (%), and ratio (relation between the soluble solids content and the titratable acidity, SST/TA) ranging from 2.41-8.31 (Adriano, Leonel, & Evangelista, 2011; Aquino, Móes, & Castro, 2011; Canuto et al., 2010; Lima et al., 2014; Moura et al., 2007; Santos et al., 2012).

Table 6. Physicochemical parameters pH, titratable acidity (TA), total soluble solids content (SST) and ratio (SST/TA) of lyophilized pulps and mature fruit of acerola
Triplicate mean ± standard deviation (SD). Means followed by the same letter in a column do not differ significantly (p≤0.05, α = 5%).

Extract characterization

In the antioxidant characterization, evaluating the reducing power and antioxidant capacity of the extracts, only the ascorbic acid content did not differ between the two acerola pulp samples studied. For the reducing power, ABTS, FRAP and total anthocyanins by the two methods (differential pH and HPLC), the commercial pulp presented the higher results, whereas for DPPH and ORAC (total, hydrophilic and lipophilic) the pulp presented values significantly higher than those of the commercial pulp. With regard to the class of anthocyanin phenolic pigments, the results obtained for the pulp and the commercial pulp were 0.06 and 0.08 mg cyanidin-3-glycoside g -1, respectively. Published studies have reported averages between 2.16-59.74 mg cyanidin-3-glycoside 100 g -1 and have also observed a relationship between this class of compounds and the color developed in the fruit, samples with a more intense red color being those with the highest concentrations of these compounds (De Rosso et al., 2008; De Rosso & Mercadante, 2007; Düsman et al., 2014; Kuskoski et al., 2005; 2006a; Lima et al., 2003; Mercali et al., 2013; Mezadri, 2005; Mezadri et al., 2008; Vendramini & Trugo, 2004). De Rosso et al. (2008) also carried out a study to identify phenolic compounds and, using the HPLC-MS/MS analytical technique, identified the anthocyanins cyanidin 3-rhamnoside and pelargonidin 3-rhamnoside and their free aglycone forms cyanidin and pelargonidin, which accounted for 76-78%, 13-16%, 6-8% and 2-3% of the total anthocyanin content of acerola, respectively. When analyzing and comparing the antioxidant capacity and the presence of bioactive compounds, there is a consensus among the studies on the significant contribution of these substances to the high antioxidant capacity found in fruits and vegetables in general; accordingly, a strong correlation between the presence and contents of phytochemicals such as phenolics, vitamin C and carotenoids and the antioxidant capacity measured by the DPPH, ABTS, ORAC and FRAP methods has been reported (Mezadri, Pérez-Gálvez, & Hornero-Méndez, 2005; Rufino et al., 2010). Regarding the presence of vitamin C in acerola, it is of great importance and relevance in the chemical composition of the fruit, being one of its main bioactive compounds. Studies have indicated that the vitamin C content of acerola can vary from 0.8 to 3.5% in the fresh fruit (Alves, Chitarra, & Chitarra, 1995; Alves et al., 1999). In the present study, the vitamin C contents obtained were 61.23 and 64.31 mg g -1 for the pulp and the commercial pulp, respectively.
The literature indicates a wide range of results, varying around 470-4827 mg ascorbic acid 100 g -1 (Gomes et al., 2000; Mezadri et al., 2008; Oliveira et al., 1999; Rufino et al., 2010; Santos et al., 1999). The differences observed in the vitamin C content of acerolas, and of fruits rich in this phytochemical in general, are due to its great instability and mainly to cultivation factors (soil, climate, variety), degree of maturation and physicochemical characteristics such as weight, shape and size of the fruit (Cardoso et al., 2011; Lima et al., 2005; Matsuura et al., 2001; Nogueira et al., 2002; Soares et al., 2001).
Results expressed per mL of extract or per g of sample on a dry weight (DW) basis. Triplicate mean ± standard deviation (SD). Means followed by the same letter in a line do not differ significantly (p≤0.05, α = 5%). GAE - gallic acid equivalent; TEAC - Trolox equivalent antioxidant capacity; AA - ascorbic acid; ORAC-FL-H - hydrophilic ORAC; ORAC-FL-L - lipophilic ORAC.

Oxidative stability

Analytical methods developed to evaluate the evolution of lipid oxidation in food emulsions are usually based on quantifying the different lipid oxidation products that develop during the steps of the oxidative process (Jacobsen, 1999). Studies suggest that, for margarines, mayonnaise, salad dressings, spreads and dairy products, the degree of rancidity of the oils present in these products is evaluated mainly by the peroxide value, conjugated dienes, the anisidine or p-anisidine value, and the total carbonyl content, the peroxide value being the most recommended for evaluating the initial stage of oxidation and the total carbonyl content and the anisidine value the most recommended for the final stage (Jacobsen, 1999; Li Hsieh & Regenstein, 1992). The different methodologies quantify different oxidation products, such as peroxides and hydroperoxides, dienes and dienoic acids, among others, and the recommendation of one methodology over another reflects the fact that oils have different fatty acid compositions, so that different types and levels of oxidation products are formed; this explains why some methods are more appropriate than others depending on the oil predominant in the food or matrix under study (Bosset & Fluckiger, 1989; Frankel, 1998; Li Hsieh & Regenstein, 1991; 1992). Based on these literature reports, the Schaal oven test and the evaluation of oxidative stability by quantitative analysis of the oxidation products formed in oils and emulsified lipid products, using the peroxide value and UV absorptivity methods, were chosen as the analytical procedures most adequate for evaluating lipid oxidation in emulsified lipid systems. The use of antioxidants (natural or synthetic), as well as the amounts and form of application, the choice of oil type (sunflower, canola, corn) and the storage time, were also studied, since they are factors of great importance with regard to the quality and shelf life of oils, fats and lipid-based food products. The literature also reports that, in emulsified lipid systems, one of the main triggers of lipid oxidation is the decomposition of hydroperoxides into free radicals, promoted by factors such as oxygen, temperature, light and, mainly, metal ions such as iron (Mcclements & Decker, 2000).
However, as these unstable substances and the main pro-oxidant agents (metals) are located at the surface of the droplets/micelles (the emulsion interface) and in the aqueous phase, it has been suggested that oxidation reactions occur preferentially in these regions of the system (Nuchi, Mcclements, & Decker, 2001; Mancuso, Mcclements, & Decker, 1999; Decker et al., 2005; Frankel, 1996; Frankel & Meyer, 2000; Heins et al., 2007). In addition, several factors influence lipid oxidation. Studies have reported that the storage temperature is a significant factor for the oxidative stability of lipid systems (oils and emulsions), as an increase in oxidative deterioration is observed with increasing storage temperature (Dimakou et al., 2007). Another important factor is the concentration of the oil phase of an emulsion: an increase in the oxidative reactions has been observed with a decrease in the oil phase concentration (Kiokias & Oreopoulou, 2006; Osborn & Akoh, 2004). The droplet size of an emulsion is also relevant to lipid oxidation, since studies have shown that the rate of lipid oxidation increases as the droplet size decreases; this is because a larger surface area of the oil droplet is exposed to the aqueous phase and to the oxidizing agents, which are concentrated on the micelle surface (the interface between the phases of the emulsion) and in the aqueous phase of the system. It can therefore be seen that these regions of the emulsion are the most important in terms of oxidative stability, being where the oxidation phenomena occur most quickly and significantly (Mcclements & Decker, 2000; Osborn & Akoh, 2004; Roozen, Frankel, & Kinsella, 1994). The presence of metal ions in emulsions is another parameter that has been studied, and it has been verified that salts containing ions such as sodium and potassium chloride (NaCl, KCl) and iron ions show pro-oxidant behavior (Chen et al., 2012; Mei et al., 1998). It can thus be concluded that lipid-based foods such as mayonnaise and butter, products with significant salt concentrations in their formulations, have lipid oxidation as their main factor of deterioration and reduction of shelf life, as they are highly susceptible to lipid oxidation phenomena, confirming the pro-oxidant potential of metal ions in general, such as sodium, potassium and iron (Cui et al., 2016; Heshmati, Vahidinia, & Salehi, 2014; Khaniki et al., 2007). Accordingly, when carrying out the analyses to follow the evolution of lipid oxidation, it was expected that the start of storage (day 0) would present the lowest level of oxidation and consequently the lowest levels of oxidation products (peroxides and conjugated dienes), and that the ninth day, the last period evaluated, would present the highest levels of peroxides and conjugated dienes, since the storage of emulsified oils and lipid products (for example, mayonnaises and margarines) exposes the lipids to the various pro-oxidant agents mentioned previously, such as oxygen, light and metal ions, combined with the fact that the antioxidants, both added and naturally present, have already been consumed. These expectations were confirmed by the analyses, the only exception being the peroxide level in the treatments with corn oil, for which the sixth day of storage presented the highest peroxide levels (Table 8).
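The hydroperoxide contents (mmol CHP L -1) and conjugated-diene absorptivities reported in Table 8 are obtained from the spectrophotometric readings described in the methods. The sketch below illustrates those two conversions: Equation 2 for conjugated dienes, and a cumene hydroperoxide (CHP) standard curve for the hydroperoxide content. All numerical inputs (absorbances, sample concentration, curve slope and intercept, dilution factor) are hypothetical, and the 510 nm reading wavelength is the one commonly used in the ferric thiocyanate procedure of Shantha and Decker, assumed here because the method details are truncated in the text above.

```python
# Minimal sketch of the two oxidation-product calculations used in this work.
# All numeric inputs below (absorbances, concentration, standard-curve
# coefficients, dilution) are hypothetical examples, not measured values.

def conjugated_dienes_percent(abs_232nm: float, conc_g_per_100ml: float) -> float:
    """Equation 2: E = Abs / C, with C in g of sample per 100 mL of solution."""
    return abs_232nm / conc_g_per_100ml


def hydroperoxides_mmol_chp_per_l(abs_510nm: float, slope: float,
                                  intercept: float, dilution_factor: float) -> float:
    """Convert a ferric-thiocyanate absorbance reading to CHP equivalents using
    a cumene hydroperoxide standard curve: Abs = slope * [CHP] + intercept."""
    return (abs_510nm - intercept) / slope * dilution_factor


if __name__ == "__main__":
    # Hypothetical reading of 0.45 at 232 nm for a solution of 0.9 g/100 mL.
    print(f"Conjugated dienes: {conjugated_dienes_percent(0.45, 0.9):.2f} %")
    # Hypothetical standard curve (slope 0.85 L/mmol, intercept 0.02) and 6x dilution.
    print(f"Hydroperoxides: {hydroperoxides_mmol_chp_per_l(0.30, 0.85, 0.02, 6.0):.2f} mmol CHP/L")
```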
Kiokias and Varzakas (2014) reported that, over the storage time, the emulsions prepared with cottonseed and sunflower oil showed the largest increase in the oxidation parameters, while the corn and olive oil emulsions showed lower oxidation levels at the end of storage. Those results were explained mainly by the fatty acid composition of the vegetable oils, since there is a consensus that oils with higher concentrations of polyunsaturated fatty acids of the linoleic type (C18:2, a fatty acid with two unsaturations or double bonds) are more susceptible to oxidation, and an even higher rate of oxidative reactions is observed in these oils.

Table 8. Hydroperoxide content (mmol CHP L -1) and UV absorptivity (conjugated dienes) during storage at 40 ºC (0, 3, 6, 9 days) of lipid emulsions formulated with sunflower, canola and corn oil, with addition of natural antioxidant extracts from acerola pulp and fruit and of the synthetic antioxidants BHA/BHT.
Mean of all treatments ± standard deviation (SD) (each treatment in triplicate). Means followed by the same letter in a column do not differ significantly (p≤0.05, α = 5%). CHP - cumene hydroperoxide.

Regarding the performance of the antioxidants, it was verified that one of the main influencing factors is their physicochemical behavior and, consequently, the partitioning of these compounds in the system/micelle, which depends mainly on their physical structure and physicochemical characteristics (polarity, reactivity). For example, antioxidants considered more hydrophilic/polar are less effective as antioxidants in emulsions (complex lipid systems), while more lipophilic/apolar compounds are less effective in bulk oils (simple lipid systems) (Frankel, 1996). This difference in the efficiency of the compounds was explained and described by Frankel (1996) and Porter (1993) in the so-called "polar antioxidant paradox", according to which an antioxidant is only effective and efficient in reducing the chain reactions that lead to oxidation, and to the decrease in quality and shelf life of lipid food products, when it is concentrated in the more reactive and unstable sites, that is, where there is a higher concentration of lipid hydroperoxides and of oxidation initiators (oxygen, metals, free radicals) and therefore where the hydroperoxides are more easily decomposed into free radicals by oxidative factors. In this sense, the surface of the micelle (oil-water interface) is the most unstable and reactive region of a lipid system (Jacobsen, Meyer, & Adler-Nissen, 1998; Jacobsen et al., 1999; Sasaki et al., 2010). In view of the above, higher levels of lipid oxidation products, and consequently higher levels of oxidation, were expected in the control treatments, since in these treatments the lipids are not protected by antioxidant substances and lipid oxidation therefore proceeds to a greater degree and at a higher speed. Only for the conjugated diene content in the treatments with canola oil was an exception observed: for this oil, the conjugated diene content was higher in the treatment with addition of 200 ppm of the synthetic antioxidant mixture (BHA + BHT) (Table 9).

Table 9. Hydroperoxide content (mmol CHP L -1) and UV absorptivity (% conjugated dienes) in lipid emulsions formulated with sunflower, canola and corn oils, with addition of extracts from acerola pulp and fruit and of the synthetic antioxidants BHA/BHT, during nine days of Schaal oven storage at 40 ºC.
Mean of all periods ± standard deviation (SD) (each period in triplicate). Same letters in columns do not differ significantly at p ≤ 0.05 (α = 5%). CHP: cumene hydroperoxide.
The evaluation of oxidative stability showed that, with storage time, there was an increase in the hydroperoxide content and in the absorptivity (conjugated dienes). Therefore, lower levels of these oxidation products were obtained at the beginning of storage (0 days) and higher levels in the last storage period (9 days), as also reported in the literature (Anwar et al., 2007; Chatha et al., 2006; Iqbal & Bhanger, 2007; Samotyja & Malecka, 2007; Suja et al., 2004).
Figure 3. Hydroperoxide content and UV absorptivity of lipid emulsions formulated with sunflower (a) (b), canola (c) (d) and corn (e) (f) oils, added with extract from acerola pulp and fruit and with the synthetic antioxidant BHA/BHT, during nine days (0, 3, 6, 9 days) of Schaal oven storage at 40 ºC.
The analyses of the emulsions with sunflower oil showed that the hydroperoxide content was statistically different for all the evaluated periods, with a significant increase being observed and, therefore, lower peroxide levels at the beginning of storage (day 0) and higher levels in the last period (day 9) (Table 4). The absorptivity results also showed an increase in conjugated dienes over time. At the beginning of storage (day 0) the conjugated diene result was lower, being statistically different from the other periods; the highest result for this oxidation product was obtained at the end of storage (day 9), which did not differ only from the third period evaluated (day 6) (Table 4). The results obtained confirm, and are in agreement with, what was expected and reported in the literature. Regarding the treatments, that is, the application of antioxidants, the control treatment was the one that presented the highest peroxide levels, differing statistically from the others. The treatment with the addition of 400 ppm of lyophilized acerola extract showed the lowest peroxide levels, thus demonstrating the great antioxidant potential of natural sources (acerola) with respect to lipid oxidation. This treatment did not differ statistically from the other treatment with added acerola extract (200 ppm) or from the treatment with 200 ppm of the synthetic antioxidant mixture (BHA+BHT) (Table 5). The control treatment was also the one that presented the highest levels of conjugated dienes, being statistically different from the other treatments evaluated. In addition, the treatment with 200 ppm of the synthetic antioxidant mixture (BHA+BHT) showed the lowest levels of conjugated dienes, not differing only from the treatment with 400 ppm of lyophilized acerola pulp extract (Table 5). A study on the application of vegetable extracts in oils and lipid emulsions conducted by Abdalla and Roozen (1999) showed that the formation of primary oxidation products, whose main exponents are hydroperoxides, was high in both products (bulk oil and emulsion); however, it occurred faster in bulk oil than in emulsions, since the increase of these compounds was significant from the 1st and 4th day of storage for bulk oils and emulsions, respectively. A lower rate of hydroperoxide formation was also observed in the samples with higher concentrations of vegetable extracts, for both bulk oils and emulsions.
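For reference, the two indices reported in Tables 8 and 9 are commonly obtained as follows; this is only a hedged summary of standard practice, since the exact calibration and dilution conventions used in the thesis are not reproduced here. The hydroperoxide content is read off a cumene hydroperoxide (CHP) calibration line built from absorbance readings of CHP standards, and the conjugated diene content is expressed through the specific UV absorptivity measured at around 232-234 nm:

\[
[\mathrm{CHP}] = \frac{A_{\mathrm{sample}} - b}{m}, \qquad K = \frac{A_{232\text{--}234}}{c\, d},
\]

where m and b are the slope and intercept of the CHP standard curve, A is the measured absorbance, c is the concentration of the diluted sample in the measuring solvent, and d is the optical path length of the cell.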
With respect to sunflower oil and the emulsion prepared with this oil, it was verified that the sage extract was the most effective in inhibiting and reducing the oxidation phenomena, as shown by the low levels of primary and secondary oxidation compounds quantified, significantly lower than those of the other extracts tested. For the emulsions with canola oil, a statistical difference was observed in the peroxide index for all the evaluated periods, that is, a significant increase of this oxidation product with storage time, with lower levels obtained at the beginning of storage (day 0) and higher levels in the last period (day 9) (Table 9). Regarding the treatments studied, the control treatment (without antioxidants) was the one that presented the highest peroxide levels, differing statistically from all treatments except the one with addition of 200 ppm of lyophilized acerola pulp extract. The treatment with the addition of 400 ppm of lyophilized acerola pulp extract showed the lowest peroxide levels, but did not differ statistically from the treatment with 400 ppm of lyophilized acerola extract. These results show the great antioxidant potential of the natural source (acerola) with respect to lipid oxidation, regardless of the way the raw material is used (pulp or whole fruit). The absorptivity results demonstrated, once again, an increase in conjugated dienes over the storage time. The start of storage (day 0) showed the lowest conjugated diene result, which did not differ statistically from the third storage period (day 6); at the end of storage (day 9) the highest result for this oxidation product was obtained, which differed statistically from the other periods. The intermediate storage periods (days 3 and 6) did not differ statistically from each other (Table 4). The results of the analyses of the corn oil emulsions showed a statistical difference in the peroxide index for all evaluated periods, that is, a significant increase of this oxidation product was observed during storage, with lower levels obtained at the beginning of storage (day 0) and higher levels in the last period (day 9). Regarding the treatments studied, the control treatment (without addition of antioxidants) was the one that presented the highest peroxide levels, differing statistically from the other treatments studied. The treatment with the addition of 400 ppm of lyophilized acerola extract showed the lowest peroxide levels but did not differ statistically from the treatment with 200 ppm of lyophilized acerola pulp extract. The absorptivity results again showed an increase over storage time. The start of storage (day 0) showed the lowest conjugated diene result, which did not differ statistically from the third storage period (day 6); at the end of storage (day 9) the highest result for this oxidation product was obtained, which differed statistically from the other periods. The intermediate storage periods (days 3 and 6) did not differ statistically from each other (Table 4). The control treatment showed the highest levels of conjugated dienes, not differing statistically from some of the treatments with antioxidant addition.
Conclusions
The experimental design was a relevant tool in the evaluation, as well as in the determination, of the variables that influenced the extraction process, which for both pulp and seed were the ethanol concentration and the sample:solvent ratio.
By employing the experimental design for the optimization, a clear reduction in variation and time was achieved, while also showing the possibility of industrial application of natural antioxidant extracts from acerola pulp, which proved to be a potential alternative natural antioxidant due to its high content of antioxidant compounds, mainly ascorbic acid and phenolic compounds.
Abstract
Descriptive techniques develop a sensory profile of food products by raising sensory attributes. They aim to create an identity for the product, explain consumer perceptions and preferences, and inform about the perception and sensory impact of a new product on consumers when changes are made to the formulation, process or packaging. The flash profile is a technique developed as a combination of the free-choice profile with a sorting/ranking task. Given the importance of sensory analysis in product development, the objective of this work was to perform a sensory evaluation (flash profile and acceptance) of mayonnaises with added acerola and synthetic antioxidants. The results showed that the sample distribution revealed a close approximation between the sample profiles, with the sample containing the synthetic antioxidant having the most distinct sensory description; the assessors' distribution was close, demonstrating relative agreement. The assessors' and samples' variances were low, reflecting consensus among assessors and little differentiation between the samples. When acceptance was evaluated, the addition of 400 ppm of acerola showed the greatest acceptance, differing statistically from the others. When correlating the profile and acceptance, there was high consensus, low discriminative power and similar acceptance, confirmed by the low variances and the proximity of the distributions, which allows the conclusion that the substitution of synthetic antioxidants by acerola does not harm the sensory quality, being barely noticeable and presenting significant sensory acceptance.
Keywords: Sensory analysis; Natural antioxidants; Acceptance; Flash profile; Consumers
Introduction
Sensory analysis, with its different techniques, has been developed since the 1950s and has become a strong method for the food industry to learn about consumers' impressions and preferences. This is because sensory evaluation methods are tools through which the industry can assess quality (acceptance/preference), identify and describe attributes, and develop descriptive profiles of foods; therefore, some methodologies are used to investigate the sensory properties and attributes of both traditional market products and new products. Sensory evaluation informs decisions on changes in products, such as formulation, process or packaging, or even ingredient substitutions, as well as assisting in the determination of the shelf life of a food product. The sensory profile has become an important tool in new product development, as it allows information to be obtained about consumers' preferences and needs through the survey of sensory attributes and the description and quantification of sensory differences between food products, allowing products to be positioned and described in a sensory space of multiple sensory attributes, which should be relevant to the product group (Delarue & Sieffermann, 2004). The descriptive technique, through descriptive information (sensory attributes), creates identities and profiles for foods; it therefore uses tasters free of affective (hedonic) judgments as a tool for the evaluation of a product.
However, because humans are not equally discriminating in sensory attributes, some descriptive techniques require a trained panel of assessors in order to better approximate consumers' impressions (Delarue & Sieffermann, 2004; Liu et al., 2016). In order to improve sensory analysis techniques, Sieffermann (2000, 2002) developed a methodology described and presented as a combination of free-choice profiling and an ordination (ranking) technique, called Flash Profile. As a descriptive and comparative method, it makes it possible to compare products and to evaluate each product as a whole, by ranking them on discriminant attributes, which should preferably emphasize sensory differences and describe both the products and the differences between them. It was developed to be a more flexible method offering fast identification of the position and sensory profile of products. It also has the advantage of saving time compared with the conventional descriptive profile, given its foundations in the free-choice profile (Williams & Langron, 1984): it allows the assessors to use their own list of attributes, which makes it faster and imposes fewer constraints of consensus and concept alignment; it reduces the need for training the panel of tasters; and it allows the familiarization and attribute-survey phases to be accomplished in a single step, through simultaneous access by the judges to all the samples (O'Mahony, 1991). This work aimed to evaluate the performance of extracts of lyophilized pulp and whole fruit of acerola in a lipid-based product (a mayonnaise) as a substitute for synthetic antioxidants. The sensory evaluation was carried out by means of two methodologies, one for description and sensory profile identification, a Flash Profile, and the other for acceptance, a global acceptance test.
Emulsion preparation
Six formulations for sensory evaluation were prepared according to Di Matia and collaborators (2015).
Sensory panels characterization
Sensory analysis, profile (Flash Profile) and acceptance, was performed with panels of assessors familiar with sensory evaluation tests. The sensory profile was performed with 14 assessors, aged between 24 and 47 years, 6 men and 8 women. The acceptance was evaluated by 60 assessors, 43 women and 17 men, aged between 21 and 54 years (Table 1). In the Flash Profile, the assessors had access to all the samples at the same time. In the first session, the attribute survey stage was carried out, in which the assessors were asked to establish discriminatory appearance, odor, texture, and taste attributes for the sample group. At the end of the session, the attributes raised were organized: for each assessor a list of their attributes was made and, along with the individual lists, a global list containing all the attributes raised by the panel was prepared. In a second session, the two lists were presented, and the assessors were asked to compare their individual lists with the global list and to define the final list of attributes to be evaluated. The third session consisted in ordering the samples on a scale of 1 to 9, with 1 being the lowest intensity and 9 the highest intensity for the attributes defined in the previous steps.
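The per-assessor ranking matrices produced in this third session are then brought to a common consensus configuration by Generalized Procrustes Analysis (GPA), as described in the next paragraph; the study itself used XLSTAT for this step. Purely as an illustration of the alignment idea, and not the routine used in the thesis, a minimal Python sketch (omitting the isotropic scaling refinement of full GPA) could look like this:

import numpy as np
from scipy.linalg import orthogonal_procrustes

def flash_profile_consensus(configs, n_iter=20):
    """configs: one (n_samples x n_attributes) score matrix per assessor.
    Attribute lists may differ between assessors (free choice), so matrices
    are zero-padded to a common width before alignment."""
    width = max(c.shape[1] for c in configs)
    aligned = []
    for c in configs:
        c = np.asarray(c, dtype=float)
        c = np.hstack([c, np.zeros((c.shape[0], width - c.shape[1]))])
        c = c - c.mean(axis=0)            # remove translation
        c = c / np.linalg.norm(c)         # remove overall size differences
        aligned.append(c)
    consensus = np.mean(aligned, axis=0)
    for _ in range(n_iter):
        for i, c in enumerate(aligned):
            rotation, _ = orthogonal_procrustes(c, consensus)
            aligned[i] = c @ rotation      # best rotation of assessor i onto the consensus
        consensus = np.mean(aligned, axis=0)
    return consensus, aligned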
The results were analyzed with Generalized Procrustes Analysis (GPA) (Gower, 1975) in the XLSTAT software (XLSTAT, New York, NY, USA, 2013), as a way to obtain consensus between the assessors' sensory maps and to harmonize and normalize the data of each assessor's attribute matrix.
Global Acceptance test
The acceptance test was performed with sequential (monadic) presentation of the samples, and the assessors were asked to quantify the magnitude or degree of acceptance of each of the products using a structured nine-point hedonic scale (1 = dislike extremely, 5 = neither like nor dislike, 9 = like extremely) (Stone & Sidel, 1998). The results were analyzed by the Wilcoxon test with a 95% confidence interval in the XLSTAT software (XLSTAT, New York, NY, USA, 2013).
Acerola antioxidant characterization
The reducing power of the acerola antioxidant was determined by the Folin-Ciocalteu method according to Singleton, Orthofer and Lamuela (1999), which uses gallic acid as the standard, with spectrophotometric reading at 765 nm. The results were expressed in milligrams of gallic acid equivalent per mL of extract (mg GAE mL-1). The antioxidant activity was evaluated by the DPPH and ABTS methods, with a Trolox standard (6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid) and results expressed as Trolox equivalent antioxidant capacity (TEAC) per mL of sample. The evaluation of antioxidant activity by DPPH was conducted according to the methodology of Brand-Williams, Cuvelier and Berset (1995) adapted by Kim and collaborators (2002), which is based on the ability of the antioxidant to reduce the oxidized DPPH radical (2,2-diphenyl-1-picrylhydrazyl) by hydrogen donation; this reaction is evaluated by discoloration, with the radical absorbance read in a spectrophotometer at a wavelength of 515 nm after 45 minutes of reaction. The ABTS method used was described by Re and collaborators (1999) and modified by Kuskoski and collaborators (2004). In this assay the antioxidant donates hydrogen atoms to the ABTS radical (2,2'-azinobis(3-ethylbenzothiazoline-6-sulphonic acid)), promoting its discoloration, which is quantified in a spectrophotometer by reading the absorbance at 734 nm, 6 minutes after the reaction.
Antioxidant characterization
The results of the characterization of the acerola antioxidant samples are described below (Table 2). When the antioxidant activity was evaluated by total reducing power and antioxidant capacity (DPPH and ABTS), the results demonstrated a relationship between antioxidant capacity and the presence of bioactive compounds (results presented elsewhere). A strong correlation between the presence and contents of these phytochemicals, such as phenolics and vitamin C, and the antioxidant capacity (DPPH, ABTS) has been reported in the literature (Mezadri et al., 2008; Rufino et al., 2010). In a study with dairy products, the authors observed that for strawberry yoghurt very similar results were obtained for the descriptive profiles produced by different descriptive sensory methods. On the other hand, the sensory positioning of the fresh apricot cheeses showed little differentiation between the methods. For both sets of products, Flash Profile was slightly more discriminating than the conventional profile.
The authors attributed these results to the context and objective of applying each method, and also to the application of GPA as a statistical tool for evaluating the results in the conventional profile: according to them, when this treatment is applied to the panel data it ends up masking the differences in understanding of the attributes that exist between the assessors, which therefore leads to a reduction in the discriminating power of the panel as a whole. In the study conducted by Albert and collaborators (2011), the authors concluded that there was a good correlation between the sensory maps obtained by the different methodologies studied. Given the results and discussions of the other studies cited above, it was possible to verify that the present study reached satisfactory and similar conclusions. The present study revealed little-differentiated sensory profiles among the evaluated products; that is, the new formulations tested showed low differentiation, with respect to perceptions and sensory preferences and also to the survey of attributes by the panel of assessors, in relation to the control samples, with which, a priori, the panel has greater sensory contact. Also, in relation to the results of the profiles, the assessors evaluated on average between 8 and 9 attributes each, which were separated into the categories of appearance, odor, texture and taste (Figure 1). From the total of 55 attributes surveyed, 15 terms of appearance, 11 of odor, 14 of texture and 15 of flavor were obtained (Table 3). The most frequently cited attribute was "taste intensity" (8 citations), and twenty-eight attributes received only one citation. Still regarding the cited attributes, there was a great similarity of terms, such as "creaminess" and "creamy" or "vinegar odor" and "vinegary odor", and a high frequency of citation of the term "intensity". Another observation refers to the citation of terms with, at first sight, opposite meanings, such as "salty taste" and "sweet taste". The results were analyzed in the first three dimensions, which together explained 59.22% of the results (Figures 1 and 2).
Table 3. Frequency of attributes cited (appearance, odor, texture and taste).
Regarding the distribution of samples and assessors in the three dimensions, it can be observed that the control sample showed greater proximity of profile and sensory description to the samples with 200 ppm of acerola pulp and 400 ppm of acerola fruit, while the sample with the mixture of synthetic antioxidants had the most distinct and distant sensory description from the others. With this, it can be verified that the samples, regarding the descriptive profile, did not present sensory differences for the evaluated attributes. Regarding the distribution of the assessors in the three dimensions, it was verified that they are very close to each other, thus demonstrating that there was significant consensus regarding the descriptive profile as well as the differences and similarities of the samples for the evaluated attributes (Figure 2). Observing the variance per assessor in the three dimensions, it was concluded that smaller variations occurred in dimensions F2 and F3, ranging from 11.4% to 54.3% in F1, 8.9% to 29.5% in F2 and 3.7% to 20.0% in F3. The lowest variances in each dimension were verified for assessors P13 (11.4%), P4 (8.9%) and P6 (3.7%), respectively (Figure 3).
In general, the variance was shown to be low for both assessors and samples, thus reflecting a consensus among the assessors regarding the sensory scores attributed to the different attributes evaluated for the samples. Regarding the residuals, responsible for explaining and normalizing the data, it was observed that the largest residual was that of assessor P1 (79.3). The selection of the most relevant attributes in the sensory descriptions of the samples was determined by evaluating, in addition to the frequency-of-citation criterion, the variances, distributions and residuals of samples and assessors and the correlation of the terms in the three dimensions (Table 4). Regarding this parameter, the terms that presented a correlation greater than or equal to 0.70 were selected as relevant and significant for the descriptive profile. Therefore, the terms "creaminess" (texture), "cream" (texture), "thick" and "thickness" were considered the main descriptors of the samples, with negative correlations above 0.70 in the F1 dimension and significant citation frequency. The low correlation obtained for the other terms reinforces the difficulty in describing and applying the ordering of the samples, due mainly to the low differentiation between the samples, which can be explained by the lack of training in the sensory attributes most important to the product and by the diversity/divergence of the generated terms.
Table 4. Assessors' attribute correlations in the three dimensions (F1, F2 and F3). Assessor 1: F1: creaminess (appearance) (-0.70), spoiled odor (0.74), creaminess (texture) (-0.70), dense (texture) (-0.70); F2: greasy (appearance) (-0.59), odor intensity (0.77), taste intensity (0.14), acid (0.74); F3: vinegar odor (-0.62), spoiled taste (-0.67).
When analyzing the distribution of the attributes in the dimensions, it was observed that the terms "consistency", "creaminess" and "creamy" are distributed very closely, thus demonstrating a consensus among the assessors regarding these attributes. Another relevant observation relates to the terms and attributes that contain the word "vinegar" or "vinegary" ("vinegar odor", "vinegary odor", "vinegar taste", "vinegary taste"); these terms were closely distributed among each other. Attributes related to color and taste also showed some closeness. However, when assessing the distribution of the attributes in a global way, there is a greater consensus on the appearance and texture attributes ("consistency", "creaminess" and "creamy"), a fact observed in the attribute correlation table (Table 3 and Figure 5). Considering the results of the description and sensory profile, of the variances, distributions and residuals of samples and assessors, and of the correlations between the attributes, all analyzed in the three main dimensions that explain most of the data, a significant and consensual descriptive approximation between the samples was verified, given the low differentiation between the samples for the evaluated attributes, which can be explained by the low variances and the proximity of distribution of both samples and assessors in the three dimensions studied.
It was also possible to conclude that the descriptive terms "creaminess" (texture), "creamy" (texture), "thick" and "thickness" were the ones that best described the samples, with negative correlations above 0.70 in the F1 dimension and significant citation frequency; however, the low correlation obtained for the other terms reinforces the shared difficulty in describing and ordering the samples, due to the small differences between them. As a general conclusion, when comparing the results of the present descriptive study with other studies, it can be seen that more modern descriptive methods like Flash Profile (FP) can be used as fast alternatives to the classical method (QDA) and in research studies with consumers, because they do not demand trained panels of assessors. Flash Profile, as a general conclusion, can be used as a preliminary sensory mapping tool in the context of broader or complete descriptive sensory studies and in mapping studies of consumer preferences, and may even be used to assist the language and term/attribute development stage of a conventional profile. In addition, it can be applied in the sensory evaluation of products with shorter shelf lives or in situations where it is not possible to perform more than one sensory test session in a study, that is, whenever it is necessary to obtain a rapid sensory positioning of a group of products (Delarue & Sieffermann, 2004; Moniez, Truchot & Sieffermann, 2001). As a general rule, Flash Profile can be used satisfactorily in three main sensory situations: when rapid sensory responses (profiles and descriptions) are desired, as an initial screening tool for the sensory perception of a new product or a new category of products, and to study specific markets (Dairou & Sieffermann, 2002; Delarue & Sieffermann, 2004; Tarea, Cuvelier & Sieffermann, 2007; Varela & Ares, 2012).
3.2 Global Acceptance test
The results obtained in the acceptance analysis reflected and agreed with the results of the sensory profile of the samples. In the sensory profile, a high consensus was found and a low discriminative power between the assessors was verified, confirmed by the low variances of both the assessors and the samples, due mainly to the small differences between the samples (low differentiation between the formulations), thus demonstrating that the substitution of synthetic antioxidants by acerola pulp or whole fruit does not impair the organoleptic and sensory quality of the product, being barely perceptible and of significant sensory acceptance. In addition, it is possible to observe a consensus among the assessors in the survey and description of attributes (profile) and in the ordering of the samples in relation to the selected attributes. The samples with the higher acerola additions (400 ppm of acerola pulp and 400 ppm of acerola fruit) also presented similar sensory acceptance scores; these obtained the highest acceptance scores, 7.6 for the sample with 400 ppm of acerola fruit and 7.2 for the sample with 400 ppm of acerola pulp (Tables 5 and 6). The sample with the synthetic antioxidant blend presented a median sensory score of 7.1.
* Samples with no significant correlation between them (α = 5%).
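These pairwise acceptance comparisons were run with the Wilcoxon test in XLSTAT, as stated in the methods. Purely as an illustration of that kind of paired comparison, and with hypothetical stand-in scores rather than the panel data, a minimal Python sketch is:

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Hypothetical 9-point hedonic scores from the same 60 assessors for two samples.
acerola_fruit_400 = rng.integers(6, 10, size=60)   # scores between 6 and 9
synthetic_blend = rng.integers(5, 9, size=60)      # scores between 5 and 8

# Paired (signed-rank) comparison of the two samples; p < 0.05 would indicate
# a significant difference in acceptance between them.
stat, p_value = wilcoxon(acerola_fruit_400, synthetic_blend)
print(f"Wilcoxon statistic = {stat}, p-value = {p_value:.3f}")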
Given that the sensory acceptance scores were very close, with only the sample with 400 ppm of acerola fruit statistically different from the others, with the highest score, it can be concluded that the substitution of synthetic antioxidants in emulsions by antioxidants from natural sources such as acerola is satisfactory and does not promote significant sensory differences in acceptance, and can be carried out without impairment of the sensory quality.
Conclusions and Final Considerations
Both descriptive (qualitative) and hedonic/affective (quantitative) sensory evaluation methods are of great importance in guiding the industry in the area of product development; they are complementary and auxiliary, as they allow the development and construction of identity and quality parameters for food products. By aligning and correlating the results and information from these two techniques, it is possible to describe and position a product within a class of products, and also to gauge the sensory impact, more linked to affectivity, that new products may have on consumers. For the mayonnaise with added acerola antioxidants, the sensory evaluation showed a high consensus of the assessors in the description of attributes as well as in the ordering of the samples for the sensory attributes listed as the best descriptors of the product. Even with a high consensus, it was also possible to verify the low discriminant power of the assessors, confirmed by the low variances and the proximity of both assessors and samples, because the samples presented few differences in formulation (only the antioxidant, as well as its concentration, differed). Reinforcing and complementing the results and conclusions obtained with the profile, there was also a relative proximity of sensory acceptance scores, with only one sample (that with 400 ppm of acerola) differing from the others. The combination of the two techniques allows the conclusion that the substitution of synthetic antioxidants by acerola pulp or fruit does not harm the sensory quality of the product, being barely perceptible and showing significant sensory acceptance.
GENERAL CONCLUSION
This work revealed that acerola has potential application as an antioxidant source for lipid-based emulsions, representing a satisfactory alternative to synthetic antioxidants due to its high content of bioactive compounds. It was possible to identify the ideal extraction condition to recover antioxidant compounds from whole acerola fruits, seeds and pulp. Finally, the results of the sensory evaluation study allowed the conclusion that acerola, as an ingredient and source of antioxidants, has great potential for application in the food industry.
2020-08-20T10:05:39.228Z
2020-05-15T00:00:00.000
{ "year": 2020, "sha1": "6e5b2927d8b5888acd3a15db085cbfb699c848fe", "oa_license": "CCBYNCSA", "oa_url": "http://www.teses.usp.br/teses/disponiveis/11/11141/tde-14082020-090213/publico/Ana_Carolina_Loro.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2b6feff4bab1f16368219455ebfa7d1392100f2f", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
265913054
pes2o/s2orc
v3-fos-license
UNDERSTANDING COLLABORATIVE DOCUMENTARY FILMMAKING PRACTICE AS A METHODOLOGY FOR EXPLORING PARTICIPANT PERSPECTIVES OF PLACE IN AREAS OF LOW SOCIAL MOBILITY
ABSTRACT
Documentary filmmaking practitioners have long engaged with socio-political narratives within a given society, and whether as a tool for creative exploration, aesthetic engagement, an ethnographic methodology or to greater understand the form of the medium itself, documentary filmmaking practice has demanded increased recognition from within the academy during the 21st century. It is, however, only in more recent developments of thought that serious consideration has been given to documentary's potential to engage collaboratively with participants in an active and meaningful manner. This paper aims to frame emerging trends in documentary filmmaking within the context of collaborative practice methods, establishing how such methods can be used to engage participants with creative explorations of the relationship between people, place and the socio-political. Taking the phenomena of low levels of social mobility in the North Midlands (Social Mobility Commission, 2017) as a case study, I have been engaging with my own practice as a documentary maker to produce original filmic work with the aim of contributing towards debates currently taking place within the academic study of the medium. My research over recent years has attempted to interrogate the creative, theoretical, practical, and ethical challenges faced by the socially engaged documentarian producing work within the contemporary context of the field. Through examining my own ongoing research alongside that of others in the field, I propose that documentary filmmaking practice could, and perhaps should, re-align the focus and consideration of its impact to include participants, not just audiences, by engaging with methods of co-production and active collaboration. In doing so, practitioners can begin to engage with, and challenge, the established notion of an inherent imbalance of power within the participant/practitioner relationship (Nash, 2012), with the aim of moving towards a more meaningful collaborative engagement.
INTRODUCTION
Within the context of documentary filmmaking practice, 'collaboration', or two parties working together towards some form of creative output, has in some sense featured in the production of documentaries since the medium's inception in the early 20th century, be this in the form of the working relationship which exists between practitioners during a production (found in both fiction and non-fiction filmmaking practice), or in the interactions which take place between the documentarian and the participant (or subject) of the documentary. Collaborative interactions have always been instrumental to the filmmaking process, and as such much attention has been given to the types of collaboration which can take place within the context of film production; in particular, those which exist in 'fiction' filmmaking production environments. Both the collaborative interactions which take place between 'professional' filmmaking practitioners (i.e. directors, cinematographers, editors and other 'crew' working on a production), and those which involve 'directors' and 'actors', are considered a norm of the filmmaking production process and have been explored, explained and evaluated extensively in print and other forms of media. With regards to non-fiction, or documentary, filmmaking practice, there exists considerable (if less expansive) evaluation of the collaborations which take place between filmmaking practitioners engaged in a production context, as in the fiction production model above (Chapman, 2007: 113-114; Reid and Sanders, 2021: 55-61; Anderson and Lucas, 2016; Trump, 2018), and some exploration of working relationships between documentarians and topical experts (for example, social anthropologists or community activists) off-camera, often in an advisory capacity (Aufderheide, 2007; MacDougall, 2022). These interactions are often considered to be collaborative ones, and are analysed, reflected upon, and ultimately understood as such. However, by comparison, when the relationship between the documentarian and a participant, or more commonly the subject, who appears in front of the camera is considered, it is rarely within the context of a form of 'collaboration'.
A key element of the work of many socially engaged documentarians is a thematic engagement with the socio-political through practice; documentary filmmaking often engages in a creative exploration of a socio-political event, or moment. This exploration is usually concerned with the impact or aftermath of a socio-political moment, as Chanan theorises in the concept of 'filming the invisible' (2008: 121-132); documentarians are often inherently unable to film the socio-political 'moment' as it happens, for a variety of practical reasons, so instead they must engage with its aftermath, the residual atmosphere of the socio-political framed in moving image. The socio-political 'moment' (which could be as brief as a few moments or a day, or which could last months or even years), an event or decision which has triggered a wave of impact within a community, has taken place, and the documentarian then frames the wake. In order to do this, the documentarian relies on willing participants who have experience of the socio-political event, scenario or 'moment' not only to contribute to their understanding of the subject, but to help them produce a filmic exploration of it. This, inherently, requires collaboration to take place between the participant and the documentary practitioner. This article aims to examine how collaboration is contextualised within the field of documentary filmmaking and seeks to re-frame the participant/practitioner relationship as one with the potential for meaningful and active collaborative interactions, exploring methodologies I am currently developing as part of an on-going research project. The project explores how creative, collaborative interactions which take place between participant and practitioner can be used to explore senses of place within the context of a case study focussed on areas of low social mobility in the post-industrial North Midlands of England. It is my hope that some of these methods can be applied by documentary practitioners more widely when seeking to explore localised socio-political issues with participants.
CONTEXTUALISING COLLABORATION IN DOCUMENTARY
PARTICIPANT OR SUBJECT?
There has been much consideration given by scholars to the inherent ethical complexities which exist within the relationship between documentarian and participant (Gross et al., 1991; Nash, 2011; Nash, 2012; Thomas, 2012; Hongisto, 2015; MacDougall, 2019), a relationship which has its foundation in what is now considered by many to be an exploitative approach to documentary processes and methods prevalent in the medium from its earliest incarnations. Interactions with participants in early documentary films were approached by documentarians in a manner informed by colonial perspectives on anthropology (Martinez, 2016), and though the most extreme examples of exploitative practices had been phased out by the second half of the 20th century, for a large portion of the medium's short history an unethical, and either non-existent or at least flawed, collaborative approach to engaging with those who featured in documentaries was the norm. In recent years, however, this has been challenged by scholars and practitioners alike; contemporary documentary makers often favour the practice of engaging with a participant, rather than capturing a subject, the shift in language indicative of a more respectful, collaborative view of the relationship, and one which this article will adhere to.
WHAT IS 'COLLABORATION' WHEN CONSIDERED WITHIN THE CONTEXT OF DOCUMENTARY FILMMAKING PRACTICE?
The author, filmed by a participant during an active collaborative exercise, 2022.
When surveying foundational text-based sources concerned with documentary methodology, one is struck by how often there is a complete absence of any mention of collaboration, or collaborators, in any form, a surprising omission considering how fundamental collaboration is to all documentary practice. When collaboration is mentioned, it is usually in relation to collaborative relationships which exist between filmmaking production crews working on documentaries (Chapman, 2007: 113-114; Reid and Sanders, 2021: 55-61; Anderson and Lucas, 2016; Trump, 2018), or between documentarians and an 'expert' in the subject matter with which the film is concerned, for example social anthropologists (Aufderheide, 2007; MacDougall, 2022) or an external professional whose role is essentially that of a co-director, co-producer or a 'fixer' of sorts (Smaill, 2015: 90). Where texts do mention collaborative interactions between the participant (usually still referred to as the subject) and the documentary practitioner, it is only in so far as to suggest that their actual appearance on screen constitutes a collaboration in and of itself, with no detail on the nature of the collaboration explored critically. Texts which do explore more creative, or active (as I will define in the next section), forms of subject collaboration will often reference participants 'performing' on screen, and occasionally mention collaboration in the form of co-production taking place behind the camera (de Jong et al., 2012; Waldron, 2018). Overwhelmingly, however, detailed exploration, evaluation or critical analysis of collaborative methodologies is hardly, if ever, present; such description proves elusive in both introductory and specialist texts.
Despite the lack of detailed exploration of collaborative methodologies in documentary practice, it is usually the case that when the collaboration involves interactions between practitioners and/or other 'professionals' or 'experts', it is considered instrumental in the creative direction of the filmmaking process, constituting a kind of 'co-production'. This, in most cases, does not appear to apply to collaboration which takes place between practitioner and participant. Though it is clearly recognised that the willing engagement of participants on-screen (in, for example, a talking-head interview, or by leading the camera through a particular location of significance) can constitute a form of collaboration (Spence and Navarro, 2007: 213), there appears to be significantly less recognition of how a participant's collaboration could be more active in its engagement with the creative decision-making process, rather than an engagement where the participant's contribution is confined to what they choose to do or say in front of the camera.
MOVING TOWARDS ACTIVE COLLABORATION
I would suggest that there are clear differentiations in the nature of the collaborative interactions taking place between participant and practitioner, and that, as such, it is useful when discussing participant/practitioner collaboration to assign separate prefixes to the different approaches. Therefore, when referring to collaborations in which the participant simply appears in front of the camera, either in an interview or acting on instructions from the practitioner such as 'show me around the space' or 'walk me through what happened, and where', but is not invited to influence the filmmaking process in any way beyond this, I propose the term basic collaboration. Basic here implies that the collaboration is simple, perhaps limited in some way, whilst acknowledging there is still an interaction taking place which could be defined as collaborative. When considering collaborative activities which involve the participant not only appearing in front of the camera, but also being invited to directly influence the aesthetic and/or creative decision making of the practitioner off-camera, whether in pre-production, production, or post-production, I propose the use of active collaboration. Though these are broad categorisations, I have found them to be useful in differentiating between collaborative interactions which are incidental and those which are more direct in their impact on the documentary production itself. The methodologies I am seeking to develop through my current research fall into the latter category; they are active collaborations.
Despite the positive contributions active collaborations can bring to a documentary production, some scholars suggest that endeavouring to engage with such methods of participant collaboration, which I would consider to be active, can actually have a detrimental impact on the quality of the filmic outcome of the documentary project itself. MacDougall suggests that collaboration between participant and practitioner inevitably requires compromise, and that this: 'may result in a kind of double negation, so that the interests of neither are properly expressed, or else remain blurred. It may be impossible to know whose perspective the film finally represents. My experience of collaborating with film subjects, which initially I embraced, has convinced me that the resulting ambiguity often constrains both parties.' (2022: 27) Similarly, Chapman writes that efforts to 'empower the people that were to be featured in the film, by creating a democracy of production' constitute a 'gamble with creative vision, especially if the production team moves dangerously near to a total abandonment of authorship and power' (2007: 15).
There is often an air of caution present when discussing participant collaboration, a concern that the intentions of the practitioner will be compromised by the desire to facilitate the influence of participants. I would suggest that this concern is not a binary condemnation of the outcomes of such methodologies, but rather a critique which fails to acknowledge variations in the aims and objectives of different practitioners. If the practitioner's objective is to produce a film that, while inclusive of others' perspectives, primarily conveys their own ideas and understanding, then this concern is valid, though it is important to note that this is not the aim or the objective of all practitioners working in the documentary field. While I would acknowledge the inevitable creative compromise which arises from participant/practitioner collaborations, and even the constraints which may result from this, I would argue that introducing active collaboration also presents the opportunity both to understand (and present) the participant's perspectives in different ways, and to enrich the experience the participant has of the filmmaking process itself. Other forms of socially engaged arts practice, often rooted in community participation, have long engaged with processes which would align with my definition of active collaboration (Purcell, 2007; Gilchrist et al., 2015), recognising the act of collaboration as holding equal, or even more, importance than the final output itself. Applying this idea to documentary practice simply requires a re-evaluation of the aims and intended outcomes of a particular documentary film project, recognising that the reduction in the overall influence the practitioner has on the creative direction and aesthetic outcome of the project is balanced by the increased influence of the participant.
WORKING WITH INDIVIDUAL PARTICIPANTS
The purpose of the first research film was to engage individual participants, one-to-one, using four separate active collaborative methodologies, with the aim of developing a further understanding of how approaching this collaborative relationship in a considered and creative manner can inform a documentary filmmaking project concerned with a socio-political theme. Through the development of both process-based exercises and creative activities involving both filmmaker and participant simultaneously, I intended to respond to some of the aforementioned challenges often associated with participant, or 'subject', collaboration (MacDougall, 2022), and to explore whether these methods can lead to a resolved collaborative output. The three participants were members of my own family, my sister, mother, and grandmother; three generations of women from the same family invited to explore their experiences growing up, living, and working in an area of low social mobility, and to collaborate in the production of a short, experimental documentary film exploring these themes. As the State of the Nation Report (Social Mobility Commission, 2017), which informed the development of the case study, gives such weight to variations in levels of social mobility as related to geographical area, the collaborative methodologies would aim to explore what impact social mobility has on a participant's relationship to the 'place' where they live and work. The first active collaborative method developed was based upon the concept of skill-sharing; discussions between participant and practitioner would be prompted by activities based around some form of interest, hobby or vocation the participant has experience in; in this instance, these were furniture upholstery, maintaining an allotment and rock climbing, respectively. This process developed from the desire to address some of the inherent imbalances of power which are usually present in the participant/practitioner relationship; while it is not uncommon for the practitioner to attempt to explain the basics of their filmmaking process to participants as a means of demystifying the technology being used, making the participant more comfortable in a filmmaking environment, this still risks creating a situation whereby the practitioner is a gatekeeper of specialist knowledge, in control of technical equipment and practical experience. In constructing the first filmed
interactions of the project around both me explaining my practice (inviting participants to adjust basic settings and decide compositions) and the participants introducing to me a skill in which they have experience (and I do not), I hoped to build the foundation of a balanced participant/practitioner relationship informed by the positive experience of skill-sharing, with myself as the practitioner sharing filmmaking skills with participants, and the participants sharing their specialism or experience with me. This also served to create an environment in which to conduct conversations relating to the case study removed from the constraints of a formal interview environment, where participants could be introduced to socio-political topics in a less direct manner. The second method involved asking participants to lead the practitioner on a guided walking and/or driving exercise through locations which held personal significance to them within the area of the case study. This was initially prompted by the act of drawing these routes on a map; both the act of mapping and then walking or driving between the mapped locations was intended to prompt conversations with participants relating to memory, identity, and recollection (Stehlíková, 2012). This also created a dynamic within the documentary filmmaking process whereby the participant decided the locations which would feature in the final film, thus constituting active collaboration, as the creative and aesthetic direction of the work has been directly informed by the participant. The third method of active collaboration invited each participant to use a simple Super 8mm camera to film elements of their own participation, from their point of view. The only inputs I as the practitioner had in this process were providing the film cartridge and camera and ensuring the participant understood how to use them (each participant was gifted one 50 ft roll of Super 8mm film, which records just under three minutes of footage). I would also process and scan the footage after the participant had completed the roll, with the footage created by the participant ultimately being integrated into the final edit. The decision to use Super 8mm for the participants' own documentation was partially determined by the practicalities of the medium; it is a simple process for participants to quickly engage with, and the physical nature of the analogue process involves giving each participant a film cartridge as both a gift and a task, a sort of diary to complete during the process. Though participants can be involved in some areas of creative decision making during the production of a documentary film, it is often the case that the technology used requires specialist knowledge to operate, and as such participants are often limited with regards to 'hands-on' practical input. With a simple 'point and shoot' Super 8mm camera, participants could operate the equipment with relative autonomy.
Finally, I planned for periods of footage and editing review with the participants, where they would be invited to reflect on the footage created collaboratively, advise on some of the editing decisions and inform the development of the final output. This final mode of active collaboration is arguably the least 'active' of the four methodologies explored in this project; the practitioner still exercises considerable control over the editing process, as the complexity of the editing software, not to mention the time it takes to complete an edit (participants cannot be expected to give up days or weeks of their time to supervise an entire edit), means participants are often limited to comments on what they do or do not like, or on shots which they would or would not like to be included. However, though potentially the form of active collaboration with the least direct impact on the filmic output of the four, I would still argue that creating an environment for the participant and practitioner to reflect on the filmmaking process does constitute a form of active collaboration, albeit a less creatively ambitious one.
REFLECTIONS, AND LOOKING AHEAD TO MORE POSSIBILITIES IN ACTIVE COLLABORATION
After undertaking the methodologies outlined above over the past 18 months, I feel able to reflect upon both the positive ethical dimensions and the exciting creative possibilities afforded by active collaborative methodologies, as well as some of the logistical and creative limitations of such methods when used within the context of documentary filmmaking practice. With regards to participant ethics, I would suggest active collaborative methods such as those outlined above constitute a more rigorous approach than that which is usually present in documentary filmmaking practice. Moving beyond (what should be) the standard ethical practice of ensuring consent is given and that the intentions of the work are made clear to the participant, active collaboration itself involves ceding much more control to participants with regards to the perspectives they choose to share and how these are represented creatively, tonally, and aesthetically. Therefore, as a means of amplifying marginalised voices and, in the case of my research project, of exploring the perspectives of those who live in an area of low social mobility, the fact that active collaboration seeks to transfer some of the decision-making power from practitioner to participant renders it a potentially effective tool for doing so. I would also suggest these methods of working are particularly well suited to participants who are unfamiliar with appearing, and speaking, in front of the camera (as might often be the case with participants from groups marginalised in society). None of the participants I collaborated with had previously been filmed for a documentary project (or any other type of creative project); two of the participants I worked with spoke of an initial nervousness about being filmed for the project that quickly subsided during the first 'skill-sharing' session. Both reflected that for the rest of the collaboration they often 'forgot' they were being filmed and found it easy to speak openly. These methods not only remove some of the potentially intimidating elements of a more formal 'talking-head' interview setting, but also seek to create an environment in which the participant/practitioner relationship is more balanced and informal.
There are, of course, practical and creative limitations to the methodologies outlined above, some of which may render such methods inappropriate for certain documentary filmmaking settings. Firstly, these methods do not suit brief interactions; they require access to a participant for at least two full days of collaboration. Clearly, for some participants (and practitioners) this would not be possible logistically. The methods here are also designed to explore participants' perspectives on their own interests and life experience; thematically, this would not be appropriate for all projects, or indeed all participants, and therefore I would not suggest such methods to be universally applicable within the field of documentary filmmaking. However, I would propose that within the right project context (such as the case study and research film I have outlined above), engaging with active collaboration not only leads to methodologies which are more ethically rigorous than is often the case in documentary filmmaking, but also offers an exciting and creative way of engaging participants more directly in the presentation of their own voices and perspectives.
Still from short film 'The Chair' (2022).
The focus of my practice-based research since 2021 has been to attempt to develop a variety of active collaborative documentary filmmaking methodologies within the context of a case study I have chosen to engage with for this purpose: 'participant's relationship with place in an area of low social mobility'. Many documentary practitioners, including myself, find themselves engaging with issues of the socio-political in their work, and so I intended to develop a further understanding of how active collaboration with participants who have a close connection to a given socio-political issue might be used to explore their perspectives on such issues. In 2017, the majority of local authority areas in the North Midlands were identified as having amongst the lowest levels of social mobility in the U.K. (Social Mobility Commission, 2017). I lived in the region until I moved away to study at 18, and still have close family connections to the area; as such, I have long been interested in exploring its socio-political landscape through my practice. Through the ongoing (2021-2024) production of a series of short documentary films working with participants in the area, I am developing active collaborative methodologies which I hope could be adapted for use in a variety of documentary filmmaking projects. Below I will outline and reflect upon the methods developed for the first of these research films over the past two years.
Still from short film 'The Chair' (2022), showing the author and participant working together to reupholster a small chair during an active collaboration.
2023-12-07T16:04:22.109Z
2023-12-05T00:00:00.000
{ "year": 2023, "sha1": "ec16364ee1c86aa8c5dfb7bb4f3c6e99e2840fa6", "oa_license": "CCBY", "oa_url": "https://periodicos.feevale.br/seer/index.php/braziliancreativeindustries/article/download/3543/3232", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "edba42c4199532793c370d01dae783cab95e91df", "s2fieldsofstudy": [ "Art", "Sociology" ], "extfieldsofstudy": [] }
15152448
pes2o/s2orc
v3-fos-license
Monodromy approach to the scaling limits in the isomonodromy systems The isomonodromy deformation method is applied to the scaling limits in the linear NxN matrix equations with rational coefficients to obtain the deformation equations for the algebraic curves which describe the local behavior of the reduced versions for the relevant isomonodromy deformation equations. The approach is illustrated by the study of the algebraic curve associated to the n-large asymptotics in the sequence of the bi-orthogonal polynomials with cubic potentials. Introduction It is well known that, in certain asymptotic limits, the classical Painlevé equations [1] reduce to elliptic ones. For instance, the Painlevé sixth equation, x − 1 (y − 1) 2 + c 4 x(x − 1) (y − x) 2 , (PVI) with the large parameters c j , j = 1, . . . , 4, after the changes x = t 0 +δτ , c j = δ −2 a j + δ −1 b j , where t 0 , a j , b j = const, turns at δ = 0 into an autonomous equation which has the first integral The asymptotics of the classical Painlevé transcendents w.r.t. parameters or initial data were studied in numerous works, see [2] for extended but not exhaustive bibliography. The limit transitions of such kind are called below the scaling limits. The most effective to this date approach to the scaling limits in the Painlevé transcendents is based on the monodromy representation for the latter [3,4]. In [2], the isomonodromy deformation technique of [5] was adapted to the study of the scaling limits in the equations of the isomonodromy deformations for the 2×2 matrix linear first order ODEs 1 with rational coefficients. In particular, the results of [2] imply that the modular parameters determining the limiting (hyper)elliptic curve for the asymptotic solution of the isomonodromy deformation equation like D 0 in (1.1) are not arbitrary constants but certain functions of all the deformation parameters. These functions are uniquely determined by the system of transcendent modulation equations Re ℓ µ(λ) dλ = const, (1.2) where ℓ is an arbitrary closed path on the Riemann surface of the relevant spectral curve. Below, this result will be extended to the scaling limits in the isomonodromy deformation systems for N ×N matrix linear ODEs with rational coefficients. The work is motivated by recent developments in the theory of coupled random matrices. Indeed, while the statistic properties of the ensembles of the single random matrices can be given in terms of the asymptotics of the semi-classical orthogonal polynomials, see [6,7], which give rise to the linear first order 2 × 2 matrix ODEs with rational coefficients [8], the ensembles of the coupled random matrices in the very similar way give rise to the bi-orthogonal polynomials [9,10,11] and the linear N × N matrix ODEs [10]. The paper is organized as follows. In Section 2, we recall the basic facts in the isomonodromy deformations of the linear matrix equations with rational coefficients, introduce the notion of the scaling limits in such systems and describe the WKB approach to their asymptotic solutions. In Section 3, we find the modulation equations for the nonsingular asymptotic spectral curve and prove their unique solvability. In Section 4, we illustrate our approach using a particular 3 × 3 matrix equation satisfied by the bi-orthogonal polynomials for the cubic potentials. The isomonodromy systems with a large parameter In this section, following [12,13,3], we recall the basic notions of the theory of the linear matrix differential equations. 
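For reference, the Painlevé sixth equation mentioned in the Introduction reads, in its commonly used standard normalization (a sketch only; the labelling of the parameters c_1, ..., c_4 follows the fragment quoted above, and the authors' normalization may differ by constant factors):

\[
\frac{d^{2}y}{dx^{2}}
 = \frac{1}{2}\left(\frac{1}{y}+\frac{1}{y-1}+\frac{1}{y-x}\right)\left(\frac{dy}{dx}\right)^{2}
 -\left(\frac{1}{x}+\frac{1}{x-1}+\frac{1}{y-x}\right)\frac{dy}{dx}
 +\frac{y(y-1)(y-x)}{x^{2}(x-1)^{2}}
  \left(c_{1}+c_{2}\,\frac{x}{y^{2}}
        +c_{3}\,\frac{x-1}{(y-1)^{2}}
        +c_{4}\,\frac{x(x-1)}{(y-x)^{2}}\right),
\]

with the scaling substitution used in the text,

\[
x = t_{0}+\delta\tau,\qquad
c_{j} = \delta^{-2}a_{j}+\delta^{-1}b_{j},\qquad
t_{0},\,a_{j},\,b_{j}=\mathrm{const},\quad j=1,\dots,4,
\]

under which the equation becomes autonomous in \tau at \delta = 0.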
Consider an N × N matrix first order ODE, We call equation (2.1) generic if the eigenvalues of A ν,−rν are distinct for r ν = 0 and if they are distinct modulo integers for r ν = 0. Without loss of generality, λ = ∞ is the singular point of the highest Poincaré rank, i.e. r ∞ ≥ r ν , ν = 1, . . . , n. Assume that (2.1) is generic and A ∞,−r∞ is diagonal (for the non-generic situations, see [13]). Then, near the singularity a (ν) , equation (2.1) has the formal solution where ξ = λ − a (ν) for a finite singularity a (ν) and ξ = 1/λ for infinity. The matrix coefficients ψ (ν) j and the diagonal matrix coefficients x (ν) −j of the formal expansion (2.2) are determined uniquely by the eigenvector matrix W (ν) of A ν,−rν . The ratioΨ −1 (λ)Ψ(λ) of any two solutions Ψ andΨ of (2.1) does not depend on λ. The ratios of the fundamental solutions normalized by (2.2) are called the monodromy data. The set of deformation parameters is specified for generic equation (2.1) in [3] (for non-generic equations in [14]). These parameters include the positions a (ν) of the singular points and the entries of the diagonal matrices x (ν) −j , j = 0, for θ (ν) (λ) in (2.2). All these quantities form together the vector x of the deformation parameters. Remaining parameters x is the completely integrable differential system whose fixed singularities are the planes a (ν) = a (ρ) , ν = ρ, (x The generic 2 × 2 system (2.5) admitting the only one deformation parameter is equivalent to one of the classical Painlevé equations. Remark 2.4. The scaling parameter δ is defined up to a positive factor which gives rise to a scaling freedom in the set of the "slow" variables. Below, we assume that this scaling freedom is eliminated by a normalization of one of the non-trivial "slow" deformation parameters. Complex WKB method. Here we recall the idea of the complex WKB method following in principal [12,13]. Consider (2.8) as δ → 0 assuming that the coefficients of B(ζ) remain bounded. Let T and Λ 0 be the eigenvector and eigenvalue matrices for B(ζ), satisfies (2.8) provided the diagonal matrices Λ n and the off-diagonal matrices T n , n ≥ 1, solve the recursion Let Γ be the Riemann surface of the algebraic curve 14) The branch points of (2.14) are called the turning points for (2.8). Let where T n are defined by (2.11)-(2.13), and ζ ∈ L. The matrix function where R (m) (ζ) is holomorphic and bounded for ζ ∈ L provided δ is small enough. Define the WKB approximation to (2.16), (2.17) and introduce the correction function χ (m) , and notations The contour γ ij (ζ) ⊂ L connecting the finite or infinite point ζ ij with ζ is called the (i, j)-canonical path if Using the arguments of [12], we thus obtain the following then there exist such positive constants C and δ 0 that In particular, To construct a canonical domain containing a given point ζ 0 ∈ L, consider a pair of i, j ∈ {1, . . . , N}, i = j, and introduce the segment ℓ ij ⊂ L of the (i, j)-anti-Stokes level curve-line passing through ζ 0 : Choose two points ζ ij , ζ ji ∈ ℓ ij in such a way that: a) ζ 0 ∈ [ζ ij , ζ ji ]; b) for any ζ ∈ ℓ ij separating ζ ij from ζ ji , the curve-line segment By construction, the union C ij of all the curve-line segments ℓ * ij , is the (i, j)-and (j, i)-canonical domain. The boundary of the constructed (i, j)-canonical domain C ij is formed by the (i, j)-Stokes level curve-lines passing through the points ζ ij and ζ ji and partially by the boundary of L. 
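The objects entering the complex WKB construction recalled above can be summarized as follows (a sketch under the assumption that the scaled equation (2.8) has the standard small-parameter form; the notation F(\zeta,\mu) for the spectral curve follows Section 3):

\[
\delta\,\frac{d\Psi}{d\zeta} = B(\zeta)\,\Psi,
\qquad
B(\zeta)\,T(\zeta) = T(\zeta)\,\Lambda_{0}(\zeta),
\]

with the formal WKB solution

\[
\Psi_{\mathrm{WKB}}(\zeta)
 = \Bigl(T(\zeta)+\sum_{n\ge 1}\delta^{n}T_{n}(\zeta)\Bigr)
   \exp\!\Bigl(\delta^{-1}\!\int^{\zeta}\bigl(\Lambda_{0}(s)+\sum_{n\ge 1}\delta^{n}\Lambda_{n}(s)\bigr)\,ds\Bigr),
\]

and the spectral (algebraic) curve of (2.14) given by

\[
F(\zeta,\mu) \equiv \det\bigl(\mu\,I - B(\zeta)\bigr) = 0,
\]

whose branch points are the turning points of (2.8).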
It is worth to note that, near the irregular singularities of (2.8), the (i, j)-canonical domain C ij can be extended beyond the boundary of L to fill out certain sector in the complex ζ-plane called the (i, j)-Stokes sector. The canonical domain C ∋ ζ 0 of validity of Theorem 2.1 is the intersection of the above (i, j)-canonical domains C ij , (2.25) The above construction implies that any closed simply connected domain L ⊂ L 0 can be covered by a finite number of the overlapping canonical domains C (k) , k = 1, . . . , K, since the opposite assumption can be easily brought to a contradiction. Modulation of the spectral curve Solutions of the Lax equation dB = [ω, B] are routinely interpreted as the approximate solutions for (2.9) as δ → 0 [16,17]. Supplemented by the eigenvalue problem (2.11), the Lax equation constitutes the basis for the algebro-geometric integration of the "soliton" equations. However, in the theory of the "soliton" PDEs, the spectral curve (2.14) is determined by the initial data, while in the isomonodromy deformation context it is determined by the original λ-equation (2.1). Moreover the spectral curve for the typical "soliton" equation is an exact integral of motion while, in the isomonodromy case, the curve varies, In what follows, we precisely describe the dependence of the algebraic curve (2.14) on the "slow" variables at δ = 0. Below, the subscript as denotes the relevant object at δ = 0. We call the curve (2.14) singular if its topological properties for all small enough δ = 0 differ from those at δ = 0. Given a parameterization of the curve, we define the discriminant set S in the total parameter space P as the set determining the singular curve. Also, let F be the union of the hyper-planes b −rν ) kk = 0 corresponding to the fixed singularities of (2.9) at δ = 0. Below, we always assume that our deformation parameters are apart from the fixed singularities F. Definition 3.1. The parameter D j is called modular iff the differential ∂ ∂D j µ as (ζ) dζ is holomorphic on the Riemann surface Γ as . Thus P = T ⊗ D, where T is the subspace of the deformation parameters t = (t, Re α) (see Remark 2.3 and constraint (2.3)) while D is the subspace of the remaining parameters D = (D, Im α). where h ℓ = J ℓ (t 0 , D 0 ) = const. Proof. Let ℓ ζ = π(ℓ) be a projection of the closed path ℓ ⊂ Γ with the base point (ζ 0 , µ 0 ) on the punctured complex ζ-plane. Let the integer m 0 be chosen in such a way that the liftl of m 0 ℓ ζ on Γ be closed for all branches of µ(ζ 0 ). Consider the analytic continuation of the WKB approximation (2.12) alongl. The projection π(l) = m 0 ℓ ζ is covered by a finite number of the overlapping canonical domains C k , k = 1, . . . , s, C s+1 = C 1 , in each of which (2.12) approximates uniformly in ζ an exact solution Ψ k (ζ) of (2.8). Because Ψ k+1 (ζ) = Ψ k (ζ)G k where G k is independent from both ζ and t ∈ T \ F, we obtain where Ml is the operator of analytic continuation along m 0 ℓ ζ , and Ml is the monodromy matrix for Ψ 1 (ζ) along m 0 ℓ ζ . Using for Ψ 1 (ζ) and Ψ s+1 (ζ) our WKB approximation, we find Since the curve is non-singular, the r.h.s. of (3.2) preserves while t remains in a neighborhood of t 0 . Equating the leading orders of the l.h.s. for (3.2) at t 0 and nearby points t, we arrive at (3.1). Theorem 3.1 immediately provides us with the following assertion: For the subsequent discussion, the following assertion is useful: For any closed path ℓ on Γ as punctured over b (ν) , ν = 1, . . . 
, n, ∞, the integral J ℓ (t, D) is continuous in (t, D) outside the fixed singularities F of (2.9) and is differentiable in (t, D) outside the discriminant set S. Proof. Since the contour ℓ is finite, the continuity of J ℓ (t, D) follows from the continuity of µ as (ζ). If the point (t 0 , D 0 ) is located apart from the discriminant set S, then there exists an open neighborhood V of (t 0 , D 0 ) such that the spectral curve (2.14) does not degenerate ∀(t, D) ∈ V. Then the differentiability of J ℓ (t, D) at (t 0 , D 0 ) follows from the continuous differentiability of µ as (ζ). In accord with Corollaries 3.2 and 3.4, if the spectral curve (2.14) is non-singular at the initial point (t 0 , D 0 ), then the modulation equation (3.1) is valid in a domain U bounded by the points where the spectral curve becomes singular. Applicability of (3.1) to a particular solution of (2.9) beyond this boundary depends on some subtle details in the initial data or, equivalently, in the relevant monodromy data of the isomonodromy system (2.8), see [2] and Section 4 below. Theorem 3.1 and Proposition 3.3 imply Let us discuss now the existence of the function D(t) such that J ℓ (t, D(t)) ≡ const. Varying the contour ℓ in (3.1), we obtain the system of equations J ℓ j = h j , where the set of contours ℓ j form a homology basis of the Riemann surface of Γ as punctured over b (ν) . For instance, taking for ℓ a small circle c ν around b (ν) , µ j (b (ν) ) , we find which contains (2.10) as the particular case h (j) cν = 0. To discuss (3.1) further, it is convenient to impose the conditions (3.3) and to remove (N −1)(n+1) small circles from the "sufficient" set of contours. The remaining cycles ℓ j , j = 1, . . . , 2g, form a homology basis of Γ as . Also, using (3.3), we exclude the constant parameters Im t (ν) 0 from the set of unknowns and assume below that the space D is g-dimensional complex space of the modular parameters. Proof. Theorem 3.1 and Proposition 3.3 imply that J ℓ j (t, D) are the first integrals of the completely integrable Pfaffian system dJ = 0, where ω and Ω are the matrices of the partial derivatives of J ℓ j (t, D) w.r.t. the entries of the vectors D,D and t,t, respectively. Here, the bar means the complex conjugation. Let constant c 1 be the determinant of the transformation of the natural basis { ∂ ∂D j µ as dζ} g j=1 into the basis of the normalized holomorphic differentials, and letB be the matrix of the B-periods of the normalized holomorphic differentials. Since ω = AĀ BB is the matrix of A-and B-periods of the holomorphic differentials and their complex conjugate, det ω = (−2i) g |c 1 | 2 det ImB = 0. Thus the matrix ω is invertible until F (ζ, µ) = 0 remains non-singular, and therefore the integral manifold for (3.4) is well parameterized by the deformation parameters (t,t, Re α). Remark 3.1. If h ℓ j = J ℓ j (t 0 , D 0 ) = 0 then the cycle ℓ j can not collapse, and the encircled by ℓ j branch points can not coalesce. Thus, along the integral manifold for (3.4), the spectral curve remains non-singular provided h ℓ j = 0 ∀j = 1, . . . , 2g. This observation ensures the applicability of (3.1), (3.4) and the existence of D(t,t) in any connected domain U ⊂ T \ F containing the initial point t 0 . Let J ℓ (t, D) in (3.1) vanish for all closed paths, Re ℓ µ as (ζ) dζ = 0, ∀ℓ ⊂ Γ as . (3.5) This system does make sense regardless the choice of the initial point since there is no need to fix a homology basis. Traditionally, it is called the Boutroux system. 
We recall that (3.5) may be not applicable to a particular solution of (2.9) in certain sectors of P \ S in spite of the Boutroux system itself does make sense in the whole parameter space P (using Proposition 3.3, the system (3.5) is interpreted at the points of the discriminant set S as a continuation from P \ S). Remark 3.2. As the integral manifold for (3.5) meets the discriminant set S, at least one of the cycles ℓ j collapses, and the corresponding real equation in (3.5) becomes trivial as being replaced by the complex condition of coalescence of two branch points. Thus, generically, the intersection of the integral manifold for (3.5) with S has codim R = 1 in the space of the deformation parameters. Theorem 3.6. There exists the unique solution D(t,t) of the Boutroux system (3.5). Proof. Here, we give the sketch proof. Uniqueness. Given t, two solutions D and D ′ determine two differentials µ as dζ and µ ′ as dζ meromorphic on the respective Riemann surfaces Γ as and Γ ′ as . The difference φ = (µ as − µ ′ as ) dζ is holomorphic on the covering Riemann surface G as , and Re ℓ φ = 0 for all closed paths ℓ ⊂ G as . However, there is no differential φ with such properties. Existence. Choose a point (t 0 , D 0 ) ∈ P\ S in such a way that h Applying the arguments used in the proof of Theorem 3.5 and taking into account Remark 3.1, we establish the existence of the function D(t,t, h) ∀t ∈ U ⊂ T \ F and ∀h : h j sgn (h The assumption that D(t,t, h) is unbounded as h → 0 leads to a contradiction. From a bounded sequence D (k) = D(t,t, h (k) ), h (k) → 0, we extract a convergent subsequence, lim m→∞ D (km) = D * . Then the continuity of the integrals J ℓ j (t, D) w.r.t. D yields J ℓ j (t, D * ) = 0. Remark 3.3. Assuming that the monodromy data of the system (2.8) are generic and do not depend on the scaling parameter δ −1 , or this dependence is weak enough, it is possible to prove that the relevant spectral curve satisfies the Boutroux system (3.5). Here, we do not prove this assertion (look for more details and for the proof of this statement in the case N = 2 in [2]). More details can be found in [19] where the Riemann-Hilbert problem for Ψ n (λ) is formulated. The fixed singularities of the relevant completely integrable system correspond to the infinite values of the deformation parameters x, y, t as well as to t = 0. In the case we are interesting here, the formal monodromy exponents at λ = ∞, which is the only singular point for the λ-equation in (4.1), are equal to n, −n/2, −n/2. The asymptotics of Ψ n (λ) as n → ∞ is of particular importance for the theory of coupled random matrices. The scaling changes (2.6) with κ = −1/r ∞ = −1/3 and Remark 2.4 imply and yield the system (2.8) with the spectral curve Generically, this curve has 10 first order branch points and therefore, via the Riemann-Hurwitz formula, has genus g = 3. Because the monodromy data for Ψ n (λ) are independent from n, x, y, t, the curve (4.3) for a generic solution of (4.1) satisfies (3.5), see Remark 3.3. By Theorem 3.6, given x 0 , y 0 , t 0 , system (3.5) uniquely determines the modular parameters D j , j = 1, 2, 3. The analysis of (3.5) is significantly more involved then the similar analysis of the elliptic curves associated to the classical Painlevé equations, see [18]. Here, we present the results in the numeric study of the integral sub-manifold for (3.5) parameterized by x 0 ∈ C as y 0 =x 0 and t 0 = 1 based on the use of MATLAB 6.1 package. 
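The genus quoted for the curve (4.3) can be checked with the Riemann-Hurwitz formula (a short computation, assuming the ten first-order branch points of the three-sheeted covering of the \zeta-sphere defined by (4.3) are all simple):

\[
2g-2 \;=\; N\,(2\cdot 0-2) \;+\; \sum_{P}(e_{P}-1)
      \;=\; 3\cdot(-2) + 10\cdot 1 \;=\; 4
\quad\Longrightarrow\quad g = 3 .
\]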
The graph on the complex x 0 -plane shown in Figure 1 separates the regions with different topological properties of the relevant Stokes graphs. Namely, some of the cycles ℓ j existing in the neighboring regions collapse at the points of their common boundary. Our numeric study suggests that, at the points of the very central triangular domain in Figure 1, the curve (4.3) subject to (3.5) has genus g = 0, and the relevant Stokes graph is consistent with the Riemann-Hilbert problem data of [19]. The latter observation implies the applicability of (3.5) to the asymptotic study of the Ψ-function for the bi-orthogonal polynomials. In particular, the typical configuration of the branch points implied by (3.5) suggests that, for x 0 = 0, the asymptotics of Ψ n (λ) involves, in certain domains of the complex λ-plane, the exponential, Airy and parabolic cylinder functions. For x 0 at the boundary of the central triangular domain in Figure 1, the asymptotics involves also the Figure 1. The projection of the integral sub-manifold y 0 =x 0 , t 0 = 1 for (3.5) on the x 0 -plane Ψ-function associated to the Painlevé first transcendent. For x 0 = 0, besides exponential and Airy functions, the asymptotic description requires also a third order special function. For the values of x 0 beyond this triangular domain, the relevant Stokes graphs seem not consistent with the Riemann-Hilbert problem data of [19]. Therefore it is unlikely that, for x 0 beyond the central triangular domain, the system (3.5) can be applied to the asymptotics of the bi-orthogonal polynomials. The detailed description of the asymptotics of the bi-orthogonal polynomials, however, is out of the scope of the present paper and will be published later elsewhere.
2014-10-01T00:00:00.000Z
2002-11-15T00:00:00.000
{ "year": 2002, "sha1": "fbc225aa149303e6ffad70e3445618cbcc8d9799", "oa_license": null, "oa_url": "http://arxiv.org/pdf/nlin/0211022", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ba8a42cf389aa492aa9dfb0b5f83e5184db3ae8f", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
267373207
pes2o/s2orc
v3-fos-license
Association between TPO gene polymorphisms with susceptibility to hyperthyroidism Introduction and Aim: Hyperthyroidism is a disorder characterized by excessively high amounts of tri-iodothyronine and thyroxine in the bloodstream. TPO is a thyroid-specific antigen. TPO gene mutations affect thyroid hormone synthesis, causing full or partial iodide organification abnormalities. In this study, we aimed to look at the single nucleotide polymorphisms (SNPs) in the TPO gene and find possible relation of these SNPs to the development or prognosis of hyperthyroidism. Materials and Methods : Blood samples were collected from hyperthyroid patients (n=75) and healthy individuals (n=25). The concentration of triiodothyronine (T3), Tetra-iodothyronine (T4), and thyroid stimulating hormone (TSH) were estimated in all samples. The autoantibodies TPOAbs and TR-Ab levels were measured using the ELISA assay. The extracted genomic DNA from blood of participants was investigated for polymorphism in the SNPs rs1126799, rs1126797, and rs732609 within the TPO gene. Statistical methods were used to evaluate the association between the TPO SNPs studied to hyperthyroidism susceptibility. Results : No significant correlation was observed between the SNPs studied and prognosis of hyperthyroidism. However, it was found that hyperthyroid patients carrying the TPO rs1126799 T allele (CT + CC genotypes) had significantly higher serum levels of TPOAb compared to those with the TT genotype (P<0.01). Similarly, hyperthyroid patients with the TPO rs1126797 C allele (CT + TT genotypes) and TPO rs732609 A allele (CA + AA genotypes) exhibited significantly elevated levels of TPOAbs compared to individuals with the CC genotype and AA genotype, respectively (P<0.01). Conclusion: There was no correlation between the TPO rs1126799, rs1126797, and rs732609 polymorphisms with the occurrence of hyperthyroidism in the Iraqi population. However, TPO rs2071400 and rs2048722 polymorphisms were found to be correlated to serum levels of TPOAb. INTRODUCTION hyroid peroxidase (TPO) is a heme-binding protein primarily found on the apical membrane of thyrocytes.The activity of TPO is crucial for the process of thyroid hormonogenesis (1).Autoimmune thyroid diseases (AITDs) are a group of autoimmune disorders that affect approximately 5% of the global population.However, the prevalence of AITDs can vary among the populations associated with the levels of iodine intake (1)(2)(3)(4).Thyroid peroxidase gene mutations are inherited as autosomal recessive traits and this alteration in the structure of the enzyme leads to the production of anti-TPO autoantibodies, which can potentially contribute to the destruction of the thyroid gland (5,6) and the impairment of thyroid hormone production can occur as a result of total or partial defects in iodide organification (7,8).During the last 20 years, different loci, such as MHC, CTLA-4, PTPN22, and TSHR genes have been linked to autoimmune thyroid disease (AITD) (9,10).Several mutations have been documented in the TPO gene, which include 103 deleterious mutations, 66 missense mutations in addition to nonsense mutations, deletions, insertions and duplications (11).Human TPO gene exhibits alternative splicing, resulting in the generation of multiple TPO isoforms that may lack one or more exons.Alternative splicing of TPO gene has also been reported in other species as well (12,13). 
Multiple autoimmune diseases, including but not limited to CTLA4 and PTPN22, share common susceptibility genes, suggesting the existence of overlapping genetic and molecular pathways among these conditions.This evidence indicates that autoimmune diseases may exhibit similarities in their underlying genetic and molecular mechanisms (14,15).Previous studies have revealed the existence of numerous susceptibility genes associated with autoimmune thyroid diseases (AITD).Broadly speaking, these susceptibility genes can be categorized into two main types.The first type comprises thyroidassociated genes which include: TSHR (thyroidstimulating hormone receptor) and TG (thyroglobulin).The second type consists of immunity-related genes, such as FOXP3, which play a role in immune system regulation (16), HLA-DR genes (17), and CD40 (15).This study aimed to examine the single nucleotide polymorphisms (SNPs) in the TPO gene to elucidate the potential relation between TPO gene T polymorphisms and the onset as well as the prognosis of hyperthyroidism. MATERIALS AND METHODS This study included 75 patients diagnosed with hyperthyroidism at the National Diabetes Center, AL-Mustansiriya University, Iraq between July and November 2021.In addition, the study also included 25 healthy volunteers who served as controls.Both genders and various age groups were represented among the patients and healthy controls.A specific questionnaire form was used to record the medical history of the patients. Inclusion and exclusion criteria Hyperthyroid patients were diagnosed based on early hormonal tests, specifically focusing on thyroid function, without considering the presence of other symptoms.However, hyperthyroid patients with comorbidities such as diabetes mellitus, hypertension, cardiovascular diseases, as well as pregnant women, were excluded from the study. Measurement of thyroid hormones and autoantibodies Blood samples were collected from all participants.Serum levels of thyroid hormones (total triiodothyronine (T3) and tetra-iodothyronine (T4) in addition to the level of thyroid stimulating hormone (TSH) were quantified using the Enzyme Linked Fluorescent Assay (ELFA) kit (Biomerx, France).Additionally, the levels of autoantibodies (TPOAbs and TR-Ab) were determined by using the Enzyme Linked ImmunoSorbent Assay (ELISA) kit (Demiditec, Germany).The assays were performed as per the manufacturer's instructions. Extraction and genomic DNA sequence Genomic DNA was extracted from the peripheral blood samples obtained from recruited patients by using Zymo Research Quick-DNA™ Miniprep Kit.DNA concentrations were examined by using the Nanodrop spectrophotometer.The promoter regions spanning the SNPs rs1126799, rs1126797, and rs732609 SNPs of the TPO gene were subjected to PCR amplification using precise primers as shown in Table 1.PCR cycles performed in a Thermocycler (Applied Biosystems, USA). 
The volume of PCR mixture (25 μL) was composed of several components which includes: extracted DNA, MgCl2, primer dNTP and Taq DNA polymerase.The amplification process involves several serial steps including denaturation step at the temperature of 95°C for 30 seconds, annealing step at the temperature of 57-62 °C for 30 seconds, and finally the extension step at temperature of 72°C for 30 seconds, for 25 cycles.The products of PCR cycle with amplicon sizes 406, 423 and 210 bp respectively (Table 1) were subjected to direct sequencing (Macrogen Company, Korea) utilizing the Sanger sequencing method.The obtained sequences were aligned online with the corresponding TPO gene sequences as found in the National Center for Biotechnology Information (NCBI) database, using the Bioedit software. Statistical methods To evaluate the allele and genotype associations for each SNP, χ2 and Fisher exact tests were employed.Additionally, a χ2 test utilized each SNP in both case and control individuals. Clinical characteristics Hyperthyroidism is characterized by hyper metabolism and elevated serum level of T4 and T3 and low TSH level.In this study relation between hyperthyroidism and serum levels of thyroid hormones and TSH were determined.The results presented in Table 2 demonstrated that hyperthyroid patients had high significant differences in levels of free T3 and T4 compared to healthy controls (p<0.01).However, these levels still remained within the normal range.The current study also revealed a significant decrease in serum TSH levels among hyperthyroid patients (p<0.001) as described in Table 2. Furthermore, this study investigated the levels of TPOAbs among hyperthyroid patients.The results indicated a significant increase (p<0.01) in TPOAbs levels in the serum samples of hyperthyroid patients compared to healthy controls. SNP rs1126799 T>C The occurrence of homozygous non-polymorphic genotype (TT), heterozygous polymorphic genotype (TC) and homozygous polymorphic genotype (CC) in hyperthyroid patients were observed to be 24%, 28 % and 48% respectively (Table 3).While the frequency of these genotypes in healthy control subjects were 12%, 20% and 68 % respectively.Furthermore, the frequencies of T and C alleles among hyperthyroid patients were 38% and 62% respectively, while in healthy control allele frequencies of T and C alleles were 22% and 78% respectively.Considering T allele as a reference, the allele distribution among hyperthyroid patients and healthy was found to be non-significant. In addition to that C and T allelic frequency in patients was 64% and 36% respectively, while in healthy control it was 76% and 24% respectively.No significant distribution was observed for alleles between hyperthyroid patients and healthy controls, with C allele as a reference allele (Table 4). 
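As an illustration of the allele-level comparison just described, the reported rs1126799 frequencies (T/C = 38%/62% in the 75 patients and 22%/78% in the 25 controls) can be turned into approximate allele counts and tested with the chi-square and Fisher exact tests named in the Methods; a minimal sketch, with counts back-calculated from the published percentages rather than taken from the raw data:

# Sketch: allele-association test for TPO rs1126799.
# Counts are back-calculated from the reported percentages (2 alleles per
# person), so they are approximate and for illustration only.
from scipy.stats import chi2_contingency, fisher_exact

#            T alleles, C alleles
table = [[57, 93],   # hyperthyroid patients: ~38% / ~62% of 150 alleles
         [11, 39]]   # healthy controls:      ~22% / ~78% of  50 alleles

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square = {chi2:.2f} (dof = {dof}), p = {p_chi2:.3f}")
print(f"odds ratio (T vs C, patients vs controls) = {odds_ratio:.2f}, Fisher p = {p_fisher:.3f}")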
SNP rs732609 A>C Table 5 shows results for the occurrence of homozygous non-polymorphic genotype (AA), heterozygous polymorphic genotype (AC) and homozygous polymorphic genotype (CC) in hyperthyroidism patients which is 76 %, 16% and 8% respectively.Similarly in the control group the frequency of these genotypes was observed to be 80%, 12% and 8% respectively.On the other hand, allele frequencies for A and C among hyperthyroid patients were 86% and 14% respectively, while in healthy controls the allele frequency of A and C was 84% and 16% respectively.These results showed that the distribution of A and C alleles among both hyperthyroid patients and healthy control was nonsignificant with A allele taken as a reference allele (Table 5). Serum levels of TPOAbs among polymorphic genotypes Results showed that the concentration of TPOAbs among the hyperthyroid, were T allele carriers (TC + CC genotypes) of SNP rs1126799 T>C (Fig. 1A), C allele carriers (CT + TT genotypes) of SNP rs1126797 C>T (Fig. 1B), and A allele carriers (AC + CC genotypes) of SNP rs732609 A>C (Fig. 1C) were significantly higher than those of wild-type genotypes (TT for rs1126799 T>C, CC for rs1126797, and AA for rs732609 respectively (p<0.01).However, it was observed that the serum levels of T4, T3, TSH were significantly different (p<0.01) in heterozygous and homozygous polymorphic genotypes than their levels in homozygous non polymorphic genotypes for the studied SNPs. DISCUSSION The thyroid gland plays a vital role in the synthesis of thyroid hormones (T3 and T4) by regulating the levels of TSH hormone produced by the pituitary gland.In patients with hyperthyroidism, there is often an elevation in the levels of thyroid hormones (T4 and T3) accompanied by a decrease in TSH hormone levels (18)(19)(20).Although T4 is the primary product of the thyroid gland, it is the biologically active T3 form that exerts physiological effects (21).Normally, the thyroid gland produces T4, which is then converted to T3 with the help of the TPO enzyme.However, when there is a defect in the TPO enzyme, the conversion of T4 to T3 is hindered, leading to an increase in T4 levels.The findings of this study revealed an excessive production of both T4 and T3 thyroid hormones in hyperthyroid patients which is in line with the findings by Yasaman et al., (22), who reported that TSH levels tend to decrease in hyperthyroid patients due to the negative feedback inhibition exerted by T3 and T4 on the anterior pituitary gland. Thyroid antibodies are proteins that are produced in the blood as a response to the presence of foreign proteins (antigens).These antibodies develop when the immune system mistakenly target thyroid gland cells and tissues causing damage in the thyroid organ.The presence of these antibodies is associated with autoimmune thyroid disorders, including Graves' disease (characterized by hyperthyroidism) and Hashimoto's thyroiditis (characterized by hypothyroidism).In this study, the relation between thyroid autoantibodies was studied, results of which showed that the TPOAbs are more often raised in hyperthyroid patients which is like findings reported earlier (10,23). 
TPO polymorphisms In this study, we investigated the relation among the frequency of TPO polymorphisms and incidence of hyperthyroidism.Results showed that the frequency of the TPO rs1126799 T>C, rs1126797C>T, and rs732609 A>C were not associated with the incidence of the disease, due to non-significant difference in the dominant polymorphic alleles among both hyperthyroid patients and healthy controls, which refers that these SNPs are not a risk factor for the incidence of hyperthyroidism. rs1126799 T>C rs1126799 SNP in exon 15 of TPO gene was also investigated.Results showed there is no significantly association between rs1126799 TPO gene polymorphism and the incidence of hyperthyroidism due to non-significant difference in the dominant polymorphic C allele among both hyperthyroid patients and healthy controls with odd ratio 1.19 and 0.65, which mentioned that the dominant polymorphic C allele is not a risk factor for the incidence of hyperthyroidism.These findings are similar to those obtained in the Japanese population (9), but not in the Iranian population (24). rs1126797C>T Results of this study showed that there is no significant relation with rs1126797 TPO gene polymorphism and the development of hyperthyroidism due to nonsignificant differences in dominant polymorphic T allele among hyperthyroidism patients and healthy controls (odd ratio of 0.71).This result indicates that the dominant polymorphic T allele is not a risk factor for hyperthyroidism.However, the results of the recessive model (odd ratio=6.25) of inherited T allele were seen as a risk factor for hyperthyroidism incidence.These findings are in line with a similar work by Tomari et al., (10) who reported a lack of association between rs1126797 TPO gene polymorphism and hyperthyroidism in the Japanese population.However, this SNP was reported to be associated with the incidence of hyperthyroidism in the Egyptian population (25). rs732609A>C Our study found no significant relationship between rs732609 TPO gene polymorphism and the incidence of hyperthyroidism in the studied group of Iraqi patients.A non-significant difference was observed for the dominant polymorphic C allele among hyperthyroid patients and healthy controls which indicates that the dominant polymorphic allele C is not a risk factor for the incidence of hyperthyroidism.On the other hand, results indicated that the recessive model of the inherited C allele was also not a risk factor for developing hyperthyroidism, as the odd ratio was 1.3.The results of this study exhibit resemblance to the outcomes observed in the Japanese population (10), while demonstrating dissimilarity to findings in other study populations (24)(25)(26).However, our study shows the possibility of establishing an association between the rs1126799 T>C, rs1126797C>T, and rs732609A/C polymorphisms and autoimmune hyperthyroidism by examining the correlation between these genetic variations and the concentration of TPOAbs in hyperthyroid individuals of various genotypes.This also assumes significance, as previous studies have shown SNP mutations within genes to be associated with various genetic diseases (27,28). CONCLUSION There is no correlation between the TPO rs1126799, rs1126797, and rs732609 polymorphisms with the occurrence of hyperthyroidism in the Iraqi population.TPO rs2071400 and rs2048722 polymorphisms, on the other hand, were shown to be associated with serum TPOAb levels. 
Table 1: Primers used for amplification of TPO regions
Table 3: Genotypic and allelic frequencies of the TPO rs1126799 polymorphism among hyperthyroid and healthy individuals
Table 4: Genotypic and allelic frequencies of the TPO rs1126797 polymorphism among hyperthyroid and healthy individuals
Table 5: Genotypic and allelic frequencies of the TPO rs732609 polymorphism among hyperthyroid and healthy individuals

2024-02-02T16:02:46.100Z
2024-01-21T00:00:00.000
{ "year": 2024, "sha1": "242d386b6efd509c34e65c0e51e2797396f4705e", "oa_license": "CCBY", "oa_url": "https://biomedicineonline.org/index.php/home/article/download/3499/1120", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "4985ababd292ccb0790ae4f634408897ee4d7b3e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
181706946
pes2o/s2orc
v3-fos-license
Mapping Ostrom’s common-pool resource systems coding handbook to the coupled infrastructure systems framework to enable comparative research : In the study of common-pool resource (CPR) governance, frameworks provide a metatheoretical language to describe system states, dynamics, elements, and relationships. The coding manuals which accompany CPR frameworks–in addition to providing guidelines for connecting empirical case work to conceptual variables–define a vocabulary of coding questions. For empirical work, connecting variables and coding questions with framework elements contributes to conceptual advance. In the process of analysis and publication, it is tempting to offer a novel framework without also developing, applying, or modifying the foundational questions and variables of coding manuals buttressing said frameworks. However, if the scholarly community is to generate robust knowledge for the study of CPR dilemmas, we must provide the underlying work of comparing across frameworks. In this paper, we report on one way the community might conduct such comparisons. We present results and challenges of using a group consensus process to link the more than 450 coding questions derived from the original Institutional Analysis and Development Framework (IADF) to the recently proposed Coupled Infrastructure Systems Framework (CISF). Despite overlap, discrepancies in the conceptual positions of the IADF and CISF suggest a need to modify or create new coding variables related to concepts of system boundaries, externalities, cross-scale interactions, multi-functionality, and technological change. We suggest that such work needs provisioning if commons scholars are to navigate the continued challenges of tailoring frameworks and coding manuals to evolving CPR governance dilemmas. Introduction The study of common-pool resource (CPR) governance-attending to complex social acts of providing for or taking possession of ecological and man-made systems-is necessarily interdisciplinary. Differing terminologies and diverse research frames among disciplines present communication, knowledge development, and theory building challenges to CPR scholars. In response to these challenges, scholars have developed a range of metatheoretical CPR frameworks. These frameworks, constituted by specific worldviews, help interdisciplinary scholars build common vocabularies and shared understandings and thus work together toward CPR goverance theory building and model development (Binder et al. 2013;McGinnis and Ostrom 2014;Pulver et al. 2018). Over time, numerous frameworks have been developed specifically to guide shared understanding of decision-making, collective action, and related interactions and outcomes associated with CPR governance. The Institutional Analysis and Development Framework (IADF) (Kiser and Ostrom 1982) was conceived to explain collective action in complex public economies of U.S. metropolitan areas. Since its initial application, scholars have enrolled the IADF in the systematic study of a diverse range of social dilemmas within a wide variety of CPR contexts (Ostrom 2005;Poteete et al. 2010). The IADF provides a language for comparative analysis through a vocabulary of coding questions associated with specific framework elements. This vocabulary was developed by Elinor Ostrom and colleagues through the Common-Pool Resource Research Project, a comprehensive effort to identify and evaluate coding questions of interest from more than one thousand unpublished case studies (Poteete et al. 
2010) before being set down in the Common-Pool Resource (CPR) Coding Manual (Ostrom et al. 1989) (referred to below as "the manual" 1). From the large number of cases that went into the creation of the manual, Ostrom and colleagues selected a smaller number for detailed analyses that would ultimately form the body of Governing the Commons (Ostrom 1990), earning Elinor Ostrom the Nobel Prize in Economics in 2009. Over time, scholarship building on the IADF has generated understandings not only of social dilemmas related to human use of biophysical resources, but also of successful 2 CPR governance systems (Ostrom 2009a). These deeper understandings have, in turn, fueled the creation of additional frameworks with which to study CPRs. The Social-Ecological Systems Framework (SESF), for example, arose as an effort to improve on the IADF by giving more equal attention to biophysical and ecological dimensions of systems and facilitate interdisciplinary research in this vein (Ostrom 2007, 2009b). However, as we discuss in more detail below, while the SESF accounts for an array of social and ecological variables likely to influence collective action processes, it provides limited guidance on how to understand broader social-ecological system dynamics, interactions, or robustness beyond how variables may theoretically interact within action situations (Anderies et al., 2018). Binder et al. (2013) compared 10 frameworks 3 widely used among CPR scholars and found no single framework sufficient to address all CPR-related research questions.
Footnotes: 1 The coding manual includes a set of forms, instructions and coding questions. 2 Ostrom defines successful case studies as those governed by institutions (i.e., rules, norms, and shared strategies) "...that enable individuals to achieve productive outcomes in situations where temptations to free-ride and shirk are ever present" (1990, 15). In her analysis, she uses the notion of "long-enduring systems" as well, meaning "resource systems, as well as the institutions, [that] have survived for long periods of time" (1999, 58). 3 Driver, pressure, state, impact, response; earth systems analysis; ecosystem services; human environment systems; material and energy flow analysis; management and transition framework; socio-ecological systems framework; sustainable livelihood approach; the natural step; vulnerability framework.
Further, this proliferation confuses efforts to understand the relative importance of certain framework elements over others, as well as the identification of causal mechanisms related to theory building (Agrawal 2002). To address this challenge, Agrawal suggested a) greater attention be paid to comparative analyses using the same methods and b) a core set of variables be gleaned from the literature. Research on application of frameworks to the study of CPR governance suggests a need for mechanisms-such as guidelines for operationalizing research-to improve communication and comparability across CPR frameworks (Poteete and Ostrom 2004;Thiel et al. 2015;Partelow and Winkler 2016). Binder et al. (2013) elaborated that the generality of the SESF means data collected "within its structure" could in theory be used by other CPR frameworks. For our purposes, such theoretical applicability became an empirical question. In a nod to the need to overcome this challenge, Binder et al. (2013) suggested a database for common coding questions (to help collect and share data) be developed for use across multiple frameworks. Three such data collection instruments (e.g. coding manuals), attempt to do this and are, in fact, derived from the original CPR Coding Manual: the International Forestry Resources and Institutions (IFRI) database (Poteete and Ostrom 2004), the Nepal Irrigation Institutions Systems (NIIS) database (Benjamin et al. 1994), and the Social-Ecological Systems Meta-Analysis Database (SESMAD) project (Cox 2014). Despite the connection of coding questions in IFRI and NIIS to the IADF, and of SESMAD to the SESF, only the original manual's coding questions are designed to be applied to studies across topics. The pace of development of new research frameworks thus outstrips attempts at rigorous linking of frameworks to data collection instruments or coding manuals. Coding manuals, or "codebooks," represent collections of questions that query for conditions important to a particular research context. When used deductively, coding questions in a codebook probe for core themes relevant to analysis of CPR governance systems. Such analyses may then be used in comparison across sectors (e.g. fisheries, forestry, etc.) and scales (e.g. local, regional, etc.) of governance within or across frameworks. One original intent of the CPR Coding Manual was to aid CPR scholars in identifying core concepts and measures when applying the IADF for multiple sectors. This original purpose makes it an ideal data collection instrument to employ for the task of comparison-subsequently -across frameworks (as urged by Binder et al. 2013). Using an established coding manual in this way-working to identify alignments and lacunae in coverage when coding variables are mapped to frameworks derivative from the IADF-can better support comparison of results from empirical research of CPR governance. This act of mapping can also enhance identification of questions common to and left unanswered by frameworks, potentially making identification of core aspects of complex issues more efficient. Greater integration of empirical data should allow for more extensive analysis and hypothesis testing for theory development and so inform CPR governance. 
Doing the work of making such connections can, as McGinnis and Ostrom (2014) noted, enhance frameworks and coding manuals to "provide an essential scientific dictionary for core concepts and their subconcepts so that multidisciplinary teams of researchers can work together more effectively" (30). We have attempted a means of demonstrating the benefits possible from using a core set of variables in this manner by mapping the original CPR manual to the CISF. In the remainder of this paper, we present a means of provisioning comparison across data collection instruments of CPR frameworks. We share results of a "mapping" 5 of the original CPR Coding Manual questions (Ostrom et al. 1989) (associated with the IADF) to a pared-down version of the Coupled Infrastructure Systems Framework (CISF) 6 (proposed by Anderies et al. (2016)). By linking coding questions in the CPR Coding Manual to the CISF, we aspired to enhance the accessibility of each of these assets to other CPR scholars. In addition, we aspired to spark a larger conversation about how we, as a community of CPR scholars, can update existing and foster development of common languages to compare CPR governance systems. The process of mapping the 455 coding questions of the manual to the ten links and four nodes of the CISF helped identify areas for further research and conceptual renewal in the field. We discuss ambiguities we encountered in the mapping process, as well as implications for provisioning future work to map across other CPR frameworks, such as the SESF. Mapping Ostrom's CPR coding handbook to the CIS framework 533 Methods We selected the CISF (see Figure 1) for substantive and pragmatic reasons. The CISF was first conceptualized as the Robustness Framework in 2004 as a way to examine interactions among four core components of CPR systems: the resource, resource users, public infrastructure providers, and public infrastructure; as well as the impact of exogenous drivers/shocks on those elements (Anderies et al. 2004). Other diagnostic frameworks (c.f., Ostrom 2007Ostrom , 2009bBinder et al. 2013;Thiel et al. 2015) also have these emphases, however, we selected the CISF in part for how it makes explicit differences of hard, social, and human infrastructures as they pertain to complex resource systems (Anderies 2015) associated with sustainability challenges (Kates et al. 2001;Matson 2009). Attention to resource systems in this general way allowed our group, with a shared interest in research on governance of complex and novel CPR systems, to also accommodate our distributed foci across traditional and social-ecological and non-traditional social-technical systems (see section, Sorting Process). In addition, we selected the CISF for its usefulness when studying the interactions of multiple action situations within a complex CPR system in an integrated way. The CISF incorporates the exogenous elements of the IADF (biophysical context, rules, and attributes of the community of users and public infrastructure providers), allowing for integrated analysis of interactions and processes among those elements-as well as exogenous drivers and shocks to the system (Anderies et al. , 2018. Thus, the CISF builds on the foundation of the IADF (and SESF), which support the analysis of decision-making, outcomes and feedbacks of a CPR system (Kiser and Ostrom 1982;Ostrom 2005;Cox et al. 2010), to further support analysis of emergent properties and co-evolution of interdependent infrastructures across multiple action situations. 
Where other diagnostic frameworks emphasize categories useful for framing empirical research questions, the CISF emphasizes the dynamics, resilience, and robustness of CPR governance. In contrast to the IADF and the SESF, the CISF re-conceptualizes governance of resource-resource user interactions as an emergent feature of a system (Anderies 2015). However, as Anderies continues: The notion that "governance" is not something we do but, rather, something that emerges as a system feature may seem strange at first glance. Upon closer inspection, however, it becomes evident that most outputs of human activities are "emergent" in the sense that they involve inputs that are taken for granted, not a design consideration, or may even be unrecognized in the production process (270). This is particularly important given the complex and often unpredictable nature of contemporary coupled infrastructure systems CIS being studied. Finally, pragmatically, until now the CISF has lacked a specific, structured set of coding questions. Working to map IADF-sourced CPR Coding Manual questions to the CISF presented an opportunity to develop such a coding manual. Doing so in this way further enabled one of our core aspirations to explore a means of identifying alignments and lacunae in complementary frameworks used in the study of CPR governance. Ostrom et al. developed the original Common-Pool Resource Systems Coding Handbook Based on the IAD Framework of Elinor Ostrom and the original CPR Project to clarify terms used in the study of collective action dilemmas (1989). 7 The 358-page manual contains a standardized list with definitions of coding questions associated with the IADF. The manual contains an introduction to the CPR project and the IADF, as well as 11 specific coding forms (listed in Table S1 of supplementary material). These 11 coding forms contain descriptions of the overarching themes of a section; instructions for use; general notes relevant to questions within the form; a list of coding questions; and sets of response options for the analyst. The forms of the original manual contain 455 coding questions. These questions constituted the source material for our mapping project. We counted individual coding questions as single units of observation, noting the coding form from which they were drawn. We then sorted each individual coding question into the various components of the CISF. Figure 1 offers a representation of the CISF, with elements expanding on the 2004 Robustness framework covered in grey. 8 Detailed descriptions of CISF components may be found in Table S2 of the supplementary material, as well as in Anderies et al. 2004 and Anderies 2015. Sorting process The mapping process was conducted by the authors (see Table S3 in supplementary material for a presentation of the departments and fields of study of coding group members). Group membership did not change during the process. The group was formed as an extension of a large-N coding project that re-examined 69 small-scale CPR case studies to determine the link between design principle co-occurrence and social/ecological success of the CPR governance system Barnett et al. 2016;Ratajczyk et al. 2016). The authors, from five different countries and three different doctoral programs, held varied academic backgrounds and research foci but all utilized a variant of CPR methods and theories to inform their research and examine complex and novel CPR systems (Table S3). 
Since all members of the group knew each other, had mentors in common, and pursued research questions through a CPR lens, the potential for bias cannot be ruled out. However, we worked to minimize this potential for bias through the diversity of our backgrounds and research perspectives. That the placement of many coding questions resulted in spirited discussions and required consultation with the creators of the CISF offered some indication that bias from group composition was contained. Our group met to conduct mapping exercises in monthly 4-hour working sessions over the course of three university semesters, beginning in Fall 2015. We employed a consensus method to sort coding questions among CISF themes. Each group member led at least one sorting session for a single coding form; no group member led for more than two coding forms so as to further minimize the potential bias from any one individual in a sorting conversation. We printed the manual on paper and cut it into strips with a single coding question on each strip (marking the back of the paper with source location for tracking) to facilitate physical pile sorts. At a sorting, the rotating lead group member would facilitate discussion of coding questions in their respective coding form until all questions had been discussed. When we could reach consensus on mapping a coding question to a CISF component, we taped the coding question to a large whiteboard drawing of the CISF. At any given time, two group members took digital notes: one recorded mapping relative to the CISF in a spreadsheet and into a newly created Wikisite for further development in service of dissemination and research; the other recorded conversations surrounding placement in the mapping. Entering data into the spreadsheet enabled rapid quantitative analyses of coding question distribution among CISF components. During the sorting process, if even one person within our group withheld consent, the code was set aside as "unresolved." We subsequently brought "unresolved" coding questions to further discussion in a second round of sorting with additional input from John M. Anderies and Marco A. Janssen, co-developers of the CISF. We complemented our notes from these sessions with recordings of our meetings with Anderies and Janssen. As we resolved each remaining issue, we summarized the rationale for each decision and recorded mapping placement in our spreadsheet and the Wiki-site. Because the process of sorting stretched over two years, these detailed meeting records served a vital function as our group's collective memory. Results Upon completing the mapping process, a majority of coding questions could be closely aligned to core sections of the CISF without extensive deliberation. In a separate methodological discussion (below), we cover those questions that required extensive deliberation. Table 1 presents an overview of the final results of our mapping effort from the 11 sections of the coding manual to the 12 components of the CISF (see Table S4 in supplementary material for the specific location of each coding question in the CISF). The large number of coding questions mapped from the Operational Level and the Subgroup coding forms to the CISF Resource Users section makes sense, given the original coding forms related to "attributes of community" and "action situations." 
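The distribution presented in Table 1 is, in effect, a cross-tabulation of the mapping spreadsheet; a minimal sketch of how such a tally, together with the color bins used for shading Table 1, could be reproduced (the file name and column names here are hypothetical, not the project's actual spreadsheet layout):

# Sketch: tallying mapped coding questions by coding form and CISF component.
# The CSV layout (file name and columns) is hypothetical.
import csv
from collections import Counter

def shade(n):
    """Color bins used for shading counts in Table 1."""
    if n <= 3:  return "light blue (1-3)"
    if n <= 6:  return "medium blue (4-6)"
    if n <= 12: return "dark blue (7-12)"
    if n <= 20: return "light purple (13-20)"
    if n <= 50: return "dark purple (21-50)"
    return "red (over 51)"

counts = Counter()
with open("cpr_to_cisf_mapping.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[(row["coding_form"], row["cisf_component"])] += 1

for (form, component), n in sorted(counts.items()):
    print(f"{form:<30} -> {component:<30} {n:>3}  {shade(n)}")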
We found it sensible to see coding questions from "operational rules," "operational level," and "subgroup" forms distributed largely to various Links (particularly 1 and 6), Resource Users, and Public Infrastructure components of the CISF. We also found unsurprising the seamless sorting of "collective and constitutional-choice levels of analysis" information to the CISF components Public Infrastructure Providers and surrounding Links (i.e. based on Public Infrastructure Providers generally assuming or being delegated authority to alter or create constitutional and collective choice rules). The placement of a large number of coding questions related to the physical and material conditions of a CPR to the Resource component of the CISF reflects the focus in the original coding forms on physical and material conditions of a resource. Distributions to Link 1 and Resource Users suggest the Location and Appropriation Resource coding forms describe not only the state of a resource, but also the interactions between the Resource Users and a Resource. An example coding question from this Appropriation form, WATERORI, asks: "What are the main sources of water used for irrigation?", which refers to a characteristic of the Resource component within the CISF (i.e. water). As an additional example, the coding question MAINTRES asks "Are there specialized staff or workers to undertake maintenance?", which refers to a Public Infrastructure Provider (i.e. in charge of provisioning of maintenance infrastructure). As reflected in the absence of codes distributed to Link 2, little attention was paid to interactions between Resource Users and Public Infrastructure Providers in the original manual.
(Table 1 legend: the colors represent the ranges of the number of coding questions within each mapping category. Light blue = 1-3; Medium blue = 4-6; Dark blue = 7-12; Light purple = 13-20; Dark purple = 21-50; Red = over 51 coding questions.)
Discussion
Through mapping the coding vocabulary of the IADF to the language of the CISF, we found that the majority of original CPR coding questions translated seamlessly. In the process, we identified aspects of CISF components for which no CPR coding vocabulary existed, specifically related to exogenous shocks (Links 7 and 8). We also identified aspects of CISF components for which coding vocabulary was significantly diminished, as in the case of interactions between Resource Users and Public Infrastructure Providers, Public Infrastructure Providers and Public Infrastructure, and Public Infrastructure and Resource. These gaps may be traceable to the explicit attention of the CISF to dynamics and feedbacks among heterogeneous sets of infrastructure, a feature missing from the more static focus of the IADF (Anderies et al. 2018). A final type of finding consisted of coding questions about which we were unable to reach initial consensus when mapping. These related to four primary topics: 1) demarcations between physical (natural) and institutional (human-made) boundaries; 2) externalities resulting from interactions among interconnected resource systems; 3) ambiguities among organizational actors and institutions across levels; and 4) complications arising from conceptualizations of technology in the CISF. We found these discrepancies to be a direct reflection of the challenge of applying a CPR governance perspective to coupled infrastructure systems.
Further discussion and analysis of these discrepancies-recounted below-enabled us to identify conceptual and methodological gaps in the CPR coding question vocabulary as it relates to CPR governance systems. Natural and human-made boundaries The Location and Appropriation Resource coding forms in the original coding manual addressed issues related to physical and institutional characteristics of a resource system, including location, boundaries, and biophysical conditions. The first issue we encountered here related to a difficulty separating "natural" boundaries from "institutional" (human-made) boundaries using only the original CPR manual coding question "vocabulary." The CISF offers clear distinctions between Natural Infrastructure (i.e. a particular resource such as a forest or fishery) and Public Infrastructure (hard and soft human-made infrastructure such as a public road and a fishing regulation) (Anderies 2015;Anderies et al. 2016). Such demarcation allows for a clearer distinction between boundary creation and manipulation within the study system. In the manual, however, such distinctions are not as easily made. For example, the coding question RAINDIST asks, "What is the distribution of rainfall in this location?" Potential answers in the manual refer to rainfall spreading evenly throughout the year or being concentrated over rainy seasons. Other coding questions cover a range of "location dependent" biophysical components (e.g. temperature, dominant soil type, rainfall distribution, elevation, and size) (see supplementary Table S5). Yet use of the word "location" in these questions does not differentiate between natural or human-made locations, making the mapping from coding manual to CISF problematic. Related, the original coding questions vocabulary is limited when trying to analyze more complex resources systems where, for example, "locations" and "boundaries" cross spatial scales. For example, the coding question "BOUNDAR2" asks the analyst to identify whether the boundary of a resource is a result of natural/constructed and/or institutional arrangements; the coding question "LOCBOUND" asks for description of how the boundaries of the location were determined (see Table S6 in supplementary material for the full details of each abbreviated coding question related to this set of our deliberations). Neither offers a way to address potential location/boundary overlap or cross-scale interaction. Complex resource externalities Although the study of externalities is the subject of entire journals and professions, we found the subject of externalities captured only by a single question, RESCONF, in the original coding manual. This question, originally in the Location coding form, asks the analyst to characterize the majority of the effects between the appropriation of multiple resources as adverse, conflicting, complementary, or nested. That only a single coding question covers what today is an entire field of analysis represents a logical extension of the type of cases for which Ostrom et al. (1989) selected to further study: small-scale systems largely focused on a single primary resource, thereby inviting minimal complicating impact on other resources. For study of more complex coupled-infrastructure systems, the question arose of how to better address externalities when mapping from coding manual to framework. 
Entertaining externalities in the context of the CISF gave rise to several immediate issues: 1) How to bound the analysis of a system vs its externalities; 2) How to take into account a multitude of potential inter-resource effects; and 3) How to resolve issues of scale that result from having a diversity of nested, interacting infrastructures included. Ambiguities with organizations and institutions We identified ambiguities with classifying organizations and institutions when we attempted to sort coding questions about organizations-or individuals within an organization-serving as Public Infrastructure or Public Infrastructure Providers in different circumstances. We observed three general types of ambiguities. The first related to specifying Public Infrastructure or Public Infrastructure Provider organizations in analysis (i). A second related to distinguishing between interacting operational-and collective-choice level infrastructures (ii). The third ambiguity pertained to bounding and specifying sets of infrastructures implicated by appropriation and/or provisioning (iii). Ultimately, each challenge relates to a core observation: infrastructures entail legacies of operational and collective-choice decisions. Specifying organizations We traced issues with specifying organizations in analysis to the Organizational Structure and Process Form in the manual 9 (a complete list of coding questions for which this issue arose can be found in supplementary Table S8). The coding question MEMBAPPR exemplifies this type of ambiguity. MEMBAPPR asks, "What is the relationship of the size of this organization (or group) to the number of appropriators" (Ostrom et al. 1989, 133). In the context of the CISF the question may seem to be about the description of an organization, which, by nature is underlain by social infrastructure and potentially classifiable as a Public Infrastructure Provider; yet, the question also asks for description of the Resource User community. Further, the word "relationship" seems to imply the involvement of a Link, but then the request for information about number of appropriators seems generally about an organization. Operational-and collective-choice level ambiguities The issue of operational-and collective-choice levels of ambiguities arose in cases where a coding question plausibly referenced the execution of a rule by an individual (or organization) or inquired after the individual (or organization) charged with said execution. For example, consider the case of a water appropriator who is a member of a water appropriation association and serves formally as a water monitor. If a coding question asks for the association charged with provisioning monitoring rules, then said organization is serving to set operational level rules and operates at the collective choice level as a Public Infrastructure Provider. In this example, however, any given individual member of the association serving as a monitor might also be said to carry out enforcement at the operational level, and thus be considered Public Infrastructure. Bounding and specifying appropriation and provisioning infrastructures We also observed a difficulty sorting four coding questions that referenced appropriation, production, and provisioning resources. In the original manual's glossary, Ostrom et al. (1989) defined these actions as follows: • "Appropriation Resource: One of four stages of the delivery of a resource: production, distribution, appropriation, and use" (354). 
• "Production Resource: The production of water for irrigation involves making water available at locations and times when it does not naturally occur in the form of precipitation and immediate runoff" (357). • "Provision: Provision has a distinct and separate meaning from production. The following quotation provides a definition for provision: The organization of provision relates primarily to consuming, financing, arranging for production, and monitoring the production of a set of goods and services" (357). We found that any of the coding questions related to appropriation, production, or provisioning by design entailed a diverse array of infrastructures, thus complicating our mapping to the CISF. This observation aligns with the underlying rationale for the development of the CISF: social infrastructures are necessarily leveraged with natural infrastructures across CPR governance arrangements. Complications from technology A final general class of issue we encountered emerged as difficulties related to analyzing technology and technology systems. By and large, reference to technology in the coding manual pertains to whether "technology or technologies employed were the same throughout the period" of inquiry (Ostrom et al. 1989, 143) (see supplementary Table S9). In the original coding manual, specific questions related to rules governing the use of technology (USETECH, RULTECHC, BEGTECHX, ENDTECHX) while limited in number, were unambiguous when mapping to the CISF. Coding questions TECHEXTR, BEGNTFER, and ENDNTFER referenced the overall CPR case of interest to the analyst and were thus placed in our proposed "META" category. Given contemporary reliance on technologies in resource governance, we noted an overall lack of attention to technology in the coding manual. For example, in the case of NEWTECH, we found the phrasing, "Is there new technology introduced?" (Ostrom et al. 1989, 167) largely underspecified key details needed for rigorous analysis with the CISF (i.e. vital analytical distinctions would result from whether and how a new technology were public or private in use or provisioning). Second, and related, we noted a difficulty in even attributing public-ness or private-ness to technologies when thinking about them as interconnected infrastructures (as the CISF encourages). Public technologies may be captured for private use and benefit. Similarly, private technologies may impinge on or be used for public benefit. Consider, for example, an unsecured home Wi-Fi-network (owner's private infrastructure, available for external public use). As a counter example, consider public road infrastructure: if a private company builds a remote facility, then public infrastructure must be built to the facility, despite de facto use of the "public road" for private purpose (similar for cases of water infrastructure). This complexity with demarcating technology-related externalities alerted us to the need for expanding some of the vocabulary of the CPR manual to more complex CISF case language. Methodological discussion: proposed modifications Our experience and results demonstrate the value of revisiting foundational methodological work to better understand various aspects of CPR governance frameworks. Doing so has helped us better understand where the field has been and-by placing the CPR Coding Manual in conversation with the contemporary CISFidentify strengths, limitations, and opportunities with original (IADF) and derivative (CISF) approaches to studying CPR governance. 
Below, we offer a series of recommendations for modifying existing or adding new coding questions to the CPR manual to cover areas of particular interest to the CISF (and, concurrently, of hitherto less prominence in the IADF).
Demarcating natural and human-made boundaries
As our collective understanding of infrastructure expands with the CISF's conceptual perspective, the IADF metatheoretical distinction of "location" from "boundary" becomes more difficult. Consider, for example, the case of researchers and practitioners working on marine conservation of bluefin tuna, a species migrating thousands of miles every year. Such migration makes the idea of identifying a single study location highly problematic. This, in turn, complicates the identification of salient user groups and communities to analyze. Further, not only do different user groups need to be identified, but the multitude of potentially relevant rules, strategies, and norms also increases in complexity with scale. Accordingly, when mapping coding questions to the CISF, we found a need to specify certain coding questions in the CPR Coding Manual to reflect a coupled infrastructure perspective on boundaries in more complex and interconnected systems; a perspective that enables a differentiation between human-mediated (e.g. demarcation of nation states) and natural (e.g. the presence of the ocean) separations. We did so by adding wording to distinguish whether a coding question refers to "natural infrastructure" (e.g. replacing "location" with "natural infrastructure" in variables COUNTRY and SOILTYPE) or "institutional infrastructure" (e.g. replacing "boundary" with "institutional infrastructure" in variables BOUNDAR3 and DISTAPPR). Table 2 presents several examples of this rewording process. Further examples of variables that are expanded to better illustrate the distinction between location/natural infrastructure and boundary/institutional infrastructure are listed in Supplementary Table S5. Sometimes it was necessary to lump or split coding questions in order to capture the variety of elements and interactions they represented in a more complex CIS. For instance, the original description of variables LOCBOUND and BOUNDAR2 failed to address issues of scale mismatch and overlap, which made it difficult to map them to the CISF. However, by lumping the content of the two coding questions and then splitting this consolidated content into three alternative coding variables (see Table 3), we were better able to connect to the appropriate CISF components. We suggest this proposed revision will help researchers better parse their research question and identify details about a research location and system boundaries.
Resolving complex resource externality issues
In mapping to the CISF, we found a need for additional coding questions reflecting how "externalities" are internalized in coupled infrastructure systems. The ability to detect, manage, and engage with externalities changes depending on the scale of observation and the relevance of boundaries to what is considered a relevant location. In the case of fisheries, a common example of this is pollution in, or damming of, waterways traversed for spawning, as happened in the case of the Kali Gandaki "A" Hydroelectric Dam (Nepal) (Larinier 2001). Fishermen who face overfishing dilemmas often have no knowledge about or leverage over "upstream" decision points. This may greatly affect their ability to predict future conditions or engage in successful collective action.
(Table 3 excerpt - LOCBOUND: How have the boundaries of this location been determined? E.g. is this primarily a natural or constructed "ecosystem" boundary, such as a harbor, or is the location defined institutionally, as when a village is the location?)
Several original coding questions (NUMBERES, GRESNAME) allow the analyst to clarify which resources will be included in the system of study. To pay greater attention to externalities, we suggest creating a new coding question, 2_RESNAMES, and reformulating several follow-up coding questions (RESNAME1, RESNAME2, RESNAME3, RESNAME4), to more specifically delineate resources being considered part of a coupled infrastructure system. Then, for any of the resources in the system that generate negative externalities or spillovers, we recommend moving the original coding question RESCONF to a "Meta Category" section of the coding manual and creating a disambiguating coding question, "2_RESCONF_M" (Table 4). Nationalization and privatization have been seen as the principal means for solving problems of externalities in CPRs from the top down, but within successful CPRs, bottom-up solutions can include quality standards, technological prescriptions, location/temporal constraints, or any number of other rules (Arrow 2000). To help determine what human-made infrastructures (soft or hard) are created to mitigate, manage, or promote externalities, and how these infrastructures alter the dynamics of resource appropriation/production, we proposed two additional coding questions: 2_RESCONF_PI and 2_RESCONF_L5 (Table 4). 2_RESCONF_PI addresses whether public infrastructure is created to address an externality. 2_RESCONF_L5 would, in turn, capture the dynamics by which such Public Infrastructure may impact Resource use by Resource Users. We fully recognize that additional coding questions may be needed to capture a range of other aspects of public infrastructure, such as: Do Resource Users have a seat at the table in designing 2_RESCONF_PI (constitutional/collective choice level institutions)? In what form? For what scales are 2_RESCONF_PI institutions created? How is 2_RESCONF_PI enforced? How does the physical scope of Resource 1 relate to Resource 2? What conflict resolution mechanisms are available to mitigate resource conflicts? Creation of more nuanced cross-scale coding questions would benefit from a comprehensive literature review, in-depth case study analysis, and provisioning by the community of commons scholars to further update the coding manual.
(Table 4 excerpt: characterization of all between-resource interactions (spillovers and externalities) to be considered in the analysis; 2_RESCONF_PI: Is there public infrastructure created specifically to mitigate/promote externalities/spillovers?; 2_RESCONF_L5: How does 2_RESCONF_PI alter the Resource?)
Clarifying ambiguities with organizations and institutions
The third area we found warranting attention was how, in larger-scale, interconnected, dynamic CPR governance arrangements, communities and organizations may have multiple functions, making them difficult to disentangle as Resource Users or Public Infrastructure Providers.
Related to complexity in specifying organizations
To resolve the issue of complexity in specifying organizations, we recommend creating a "meta Public Infrastructure Provider" theme within the CISF. This "meta-PIPs", then, is inspired by the "attributes of community" element of the IADF.
This "meta" portion of the IADF creates a space for analysts investigating CISs to qualitatively describe Public Infrastructure Providers involved in an overarching manner. To answer the question, "What type of organization ought to be described?", we turned to the CPR coding manual itself. In the Organizational Structure and Process Coding Form, Ostrom et al. (1989) specify focusing on "organizations that are related to the appropriation process of the resource" (128). We recommend that organizations of focus be specified based on the nature of the social dilemma being investigated. As such, we suggest creating a coding question 2_SOCDIL to ask about the nature of the social dilemma. Our consensus was that an analyst ought to tailor his or her study to the organizations implicated by or involved in managing said social dilemma. In amending the CISF to include a "meta-PIPs" theme, we found it useful to relocate several coding questions to this group. ORGPARAG, which requests a thick, qualitative summary description in the original question, was thus placed in meta-PIPs. For MEMBSUB, the challenge was less about describing the organization than about describing Resource Users and subgroups. Therefore, one possibility we have also considered is the addition of a meta-RU section related to resource user subgroup characterization. Establishing a meta-RU could make more straightforward the description of membership of an organization relative to subgroups where Public Infrastructure Providers are concerned. Related to operational-and collective-choice level ambiguities We identified a need to capture the effects created when rule development at a collective choice level may be far removed from operational level action, a phenomenon of increasing concern as the more immediate connections between governance action and resource users of original IADF cases become the exception, rather than the rule. The need to delineate between when an agent is acting at either an operational or collective choice level capacity is prominent in situations in which the agents who are charged with implementing the rules of an organization are also engaged in collective choice decision making about the rules they are charged with enforcing. This can lead to corruption, unsustainable decision making and regulatory capture, such as has happened in fisheries and civil forfeitures. Fishery licensors have the potential to gain benefits and large rents by preferring willingness to pay over other attributes such as knowledge of the resource or responsible fishery practices (Hanich and Tsamenyi 2009). Civil forfeiture by police demonstrates a similar challenge in which police may have the opportunity to enrich their departments through actions at the operational level, i.e. seizure of individual items of worth from individuals who are arrested (Piety 1991) based on favorable procedures police themselves craft at the collective-choice level. The case of original coding questions FUNDS and FISOURCE offers an illustration of the way we proposed to resolve the ambiguity of operational and collective choice levels in our analysis. Each of these coding questions refer to the sourcing of funds for an organization. FUNDS, as written in the coding manual, appears to be about an attribute of the general purpose local government, and thus Public Infrastructure Providers. However, the answer choices for FUNDS imply underlying rules about taxation (e.g. 
"More than 80% from local taxes and related sources" 68), and thus a relationship between Public Infrastructure Providers and Public Infrastructure (Link 3). FISOURCE appears to be an attribute of an appropriation management organization, and thus also related to Public Infrastructure Providers, however, answer choices in the manual imply underlying rules about the ways that funds are permitted to be sourced, thus implicating Public Infrastructure (e.g. "Membership fee", 140). To remain true to the original CPR manual, we determined FUNDS and FISOURCE each connect to Public Infrastructure Providers. Yet we agreed there was also good reason to have coding questions explicitly dig into rules regarding the source of funding/financing of general purpose local governments and appropriation management organizations. Therefore, we propose that in the future the community of scholars studying the commons create new coding questions related to rules governing organizational financial sources for general purpose local governments and appropriation management organizations (e.g. 2_ORGFISRULG (enumerating the actual rules that enable FUNDS); 2_ORGFISRULA (enumerating the actual rules that enable FISOURCE)). Bounding and specifying appropriation and provisioning infrastructures The challenge of managing coding question assignment in this case became how to word a sufficiently generalizable text with respect to changes in the state of shared infrastructures. Our determination was that an alternative wording of a single question, with references to a beginning and end state, be developed and placed in a meta category for public infrastructure (PI_META). Although we hope the community will come together to develop actual wording and response options at a later date, we offer a potential re-characterization of coding questions (2_SHRDINF; 2_BEGCONDI; 2 ENDCONDI) with the text, "What are the hard-physical structures maintained by the community that are used to access, withdraw, and distribute the resource." Such a question may sufficiently capture the diversity of shared infrastructures accounted for in the original manual. Addressing complications from technology Our study revealed the need to expand attention to issues of technology in CPR systems. As CPR research frameworks like the SESF and CISF attempt to grapple with what are increasingly recognized as complex interdependent social-technical-ecological systems (Miller et al. 2014), greater inclusion of advances in scholarship related to the ways in which values and cultures shape and are shaped by technology (c.f., Callon 1987;Law 1987;Pinch and Bijker 1987, etc.) may be of increasing importance to empirical and theoretical work on CPR governance. In the CISF, we find that it is the de facto public-or private-ness of a technology, rather than the de jure deploying owner of a technology, which is most important in the ontology of the CISF. As such, we recommend splitting the NEWTECH coding question into two separate, new questions -one each about public and private infrastructures, respectively, allowing for more straightforward linking of these new coding questions to the CISF. Questions of de jure vs de facto use of technology bring to the fore a potential opportunity for future scholarship by the community of commons scholars. Knowledge of and rights to exclusive rents of technologies confer political power to organizations, enabling them to reshape collective choice arrangements to their advantage (c.f., Schelling 1978;Joskow and Rose 1989). 
Technologies privilege communities of certain abilities and disadvantage others (c.f., Noble 1978;Wajcman 1991). Social groups involved in technology development have specific attributes that re-inscribe themselves on physical artifacts, and thus impose additional norms to a new user community-especially if that community has been marginalized (intentionally or unintentionally) from a development process. And of course, excluded social groups find ways to "hack" technologies designed for one context to realize benefits in a completely different one; often resulting in unintended spillover effects on natural and social infrastructures (Ika 2012). Each of the above illustrations implicates resource use; rules on the rights of parties involved in technology development; cultures of business, research, policy, user, and public communities; and rules governing the use and flow of information about such technologies. Whereas the original coding manual was not developed with such questions in mind due to its focus on small-scale CPR systems, the CISF is well suited to investigate these questions, marking an opportunity to augment the set of coding questions used by commons researchers generally, and for a better understanding of shared infrastructure systems in contemporary, "advanced, technology-dependent societies," in particular. Conclusion "The words we use and the ideas with which we work are the most fundamental part of human reality." -V. Ostrom 1997 Codebooks are collections of thematic codes querying conditions important to a particular research context. Codebooks require regular review and updating when used over longer periods of time (Bernard et al. 2017). The codes within them represent building blocks of theory development (Guest and MacQueen 2008), while frameworks, like the CISF, represent means of organizing such diagnostic inquiry to support theory building and model development (Ostrom 2005;McGinnis and Ostrom 2014;Anderies 2015). CPR frameworks necessarily prioritize "what matters" or "what counts" when it comes to resource governance. Differences across frameworks, we have shown, make it vital to be clear about the "words we use and the ideas with which we work" (Ostrom 1997)-namely the coding questions we ask and the variables and relationships we study. Analysis is nothing if not purposeful selection and exclusion; it may also entail accidental omission. Comparing across frameworks offers analysts an important opportunity to reflect on how our acts of selection, exclusion, or omission color the lenses through which we study CPR governance systems. Ostrom's CPR coding manual represents a selection of vetted thematic codes which identify key dimensions of contemporary coupled infrastructure systems and, we find, remains generally useful for analysis of CPR governance systems. Coming from a generation of commons scholars who did not originally work with the CPR Coding Manual, we found value in tracing the history of ideas set forth by this data collection instrument to better understand two CPR frameworks separated in time and by focus. Concomitant implications of globalization and interconnected sustainability challenges make review and reorganization of several CPR variables necessary to enhance coding manual utility and framework relevance. 
By mapping coding manual questions to the CISF, we not only provided a common data structure for the framework but also contributed to reviewing and updating the manual's organization and content-a process relevant immediately to the IADF and adaptable for use with complementary frameworks. In completing this work, we have enhanced communication between early and contemporary scholarship on commons governance: an homage to Ostrom's original vision of employing a, "Consistent, nested set of concepts that can be used in our analysis, research, and policy advice in a cumulative manner" (2005). Our mapping an established vocabulary of the IADF to the CISF now affords the commons research community an additional data collection instrument with which to compare cases, identify open questions, and advance theoretical inquiry. Such inquiry can be enriched as additional CPR framework languages like the SESF are also connected to the vocabulary of the original CPR coding manual. The practice of sustaining and expanding the coding manual positions it as something of a boundary object (Star 2010). Increasing scholarly exchange around and structuring information to advance alteration and addition of coding questions in this way could further enrich interdisciplinary research on complexities of CPR governance including, among other issues: multiple, nested physical scales; human organizational scales; issues of multifunctional entities; and externalities among and heterogeneity of resource systems. Capturing the results of such exchange to a database of codebook variables could enhance transferability and knowledge building across CPR frameworks. CPR scholars will need additional infrastructures to manage the additional complexities of expanding or re-specifying complementary frameworks and underlying coding manuals as boundary objects. Such an infrastructure for community scholarship would need to catalogue new coding questions and new processes and to establish revisions to text and definitions in coding vocabularies of the CISF, SESF, and other frameworks. Most immediately, and particularly for the CISF, this boundary object could be useful for elaborating considerations of private, public, soft-human, human-made, and social infrastructures related to Resource Users and Public Infrastructure Providers components (greyed out areas of Figure 1). To this end, we have developed a wiki (https://ciscodebook.seslibrary.asu.edu/wiki/ Coding_the_Commons_Wiki) in the process of our analysis to start provisioning this function. More generally, the community might benefit from a set of additional formal social infrastructures to update existing and develop new elements of the CPR coding manual (in addition to provisioning for linking to other frameworks). Going back to Ostrom's Common-Pool Resource Systems Coding Handbook helped us better understand areas of overlap, divergence, and general gaps between the IAD and CIS CPR frameworks. Understanding such relationships among CPR frameworks can support more robust synthesis of empirical work and, in turn, drive theory-building on governance of open-access resource systems. Our effort, however, demonstrates that comparison of data collection coding manuals requires the investment of a range of resources (person hours, web infrastructures, print materials, meeting space, mentoring, etc.): it requires provisioning. 
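One way to picture the cataloguing function such an infrastructure would provide is sketched below (Python). The field names and the example entry are ours, not the wiki's actual schema; the sketch only illustrates how a proposed question, its provenance, and its CISF placement could be tracked as a structured record.

```python
from dataclasses import dataclass

@dataclass
class CodebookEntry:
    """One proposed or revised coding question, as it might be catalogued."""
    question_id: str           # e.g. "2_RESCONF_PI"
    prompt: str                # the coding question text
    cisf_component: str        # where the question maps in the CISF
    source: str                # provenance: original form or proposal round
    status: str = "proposed"   # proposed / accepted / superseded

# Illustrative entry based on one of the externality questions proposed above.
entry = CodebookEntry(
    question_id="2_RESCONF_PI",
    prompt="Is there public infrastructure created specifically to "
           "mitigate/promote externalities/spillovers?",
    cisf_component="Public Infrastructure",
    source="mapping project, second-round proposal",
)
print(entry)
```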
We hope our case of mapping coding questions to the CISF will inspire future efforts to connect CPR frameworks, such as the SESF, and spark a larger conversation about how we, as a community of CPR scholars, can take action to ensure the vocabularies, languages, and lessons of commons governance research remain vibrant and relevant far into the future.
Accuracy of pedicle screw placement using neuronavigation based on intraoperative 3D rotational fluoroscopy in the thoracic and lumbar spine Introduction In spinal surgery, precise instrumentation is essential. This study aims to evaluate the accuracy of navigated, O-arm-controlled screw positioning in thoracic and lumbar spine instabilities. Materials and methods Posterior instrumentation procedures between 2010 and 2015 were retrospectively analyzed. Pedicle screws were placed using 3D rotational fluoroscopy and neuronavigation. Accuracy of screw placement was assessed using a 6-grade scoring system. In addition, screw length was analyzed in relation to the vertebral body diameter. Intra- and postoperative revision rates were recorded. Results Thoracic and lumbar spine surgery was performed in 285 patients. Of 1704 pedicle screws, 1621 (95.1%) showed excellent positioning in 3D rotational fluoroscopy imaging. The lateral rim of either pedicle or vertebral body was protruded in 25 (1.5%) and 28 screws (1.6%), while the midline of the vertebral body was crossed in 8 screws (0.5%). Furthermore, 11 screws each (0.6%) fulfilled the criteria of full lateral and medial displacement. The median relative screw length was 92.6%. Intraoperative revision resulted in excellent positioning in 58 of 71 screws. Follow-up surgery due to missed primary malposition had to be performed for two screws in the same patient. Postsurgical symptom relief was reported in 82.1% of patients, whereas neurological deterioration occurred in 8.9% of cases with neurological follow-up. Conclusions Combination of neuronavigation and 3D rotational fluoroscopy control ensures excellent accuracy in pedicle screw positioning. As misplaced screws can be detected reliably and revised intraoperatively, repeated surgery for screw malposition is rarely required. Introduction The lifetime prevalence of back pain in Germany is 85.5%, with men and women over the age of 50 being particularly affected [1]. Due to the increasing proportion of older people and the longer life expectancy in our population, it must be assumed that the medical and socioeconomic relevance of back pain and degenerative diseases of the spine will further increase [2]. In addition to debilitating degeneration, various underlying conditions such as trauma, inflammation, or neoplasms can be causative agents of spinal instability and require surgical treatment. The main goal of spinal surgery is to restore the spinal column's weight-bearing capabilities and motion range of the spine in order to improve patients' quality of life. To ensure this, a high degree of intraoperative precision is required for spinal instrumentation in patients with instabilities as screw misplacement can lead to neurological and vascular complications [3,4]. Conventional X-ray diagnostics (2D radiographs or biplanar fluoroscopy) are widely regarded as the reference standard for intraoperative imaging in general, and for spinal instrumentation in particular [5]. However, while spinal alignment and vertebral body shape are sufficiently assessable in the far majority of cases, exact screw placement may occasionally be difficult to evaluate [6]. The question of whether modern navigation techniques can improve the precision of spinal instrumentation compared with conventional methods has not yet been clearly answered. 
Particularly, the application of 3D rotational fluoroscopy in combination with neuronavigation appears promising for intraoperative screw position analysis, as it provides multiplanar image information comparable to multidetector CT imaging. The purpose of this retrospective study was to evaluate the accuracy of pedicle screw positioning in navigated, O-arm-controlled posterior instrumentation for the thoracic und lumbar spine. Material and methods Retrospective data analysis was approved, and informed consent was waived by the local ethics committee. Information on patient history and surgical procedure was obtained from the clinical information system (SAP SE, Walldorf, Germany) and anonymized for further analysis. The evaluation of intraoperative imaging in terms of screw position and screw length was performed using an open-source DICOM viewer program (OsiriX Lite 8.0.1). For this study, we retrospectively analyzed data from patients who underwent dorsal spinal instrumentation with 3D fluoroscopic navigation (O-arm, Medtronic, Dublin, Ireland) at the local neurosurgical clinic between June 2010 to June 2015. Treatment indication was based on spinal instability due to degenerative, traumatic, inflammatory or tumor-related conditions. Inclusion criteria included surgical treatment via a dorsal approach (± additional fusion), surgery performed with an open or percutaneous technique and at least one rotational 3D fluoroscopy scan after dorsal instrumentation to evaluate the position of the screws. Patients who did not receive a rotational 3D scan were excluded from this study. Furthermore, screws that were not included in the field of view of the initial 3D fluoroscopy scan (n = 34) were also left out of the analysis. The O-arm represents a 3D rotational fluoroscopy device designed for intraoperative application. In addition to the rotor, the gantry-based scanner architecture contains the X-ray tube (B100, Varian Medical Systems, Palo Alto, USA) opposite a large flat-panel detector (PaxScan 4030D, Varex, Palo Alto, USA). In 3D mode, the O-arm creates a series of projection images during a complete 360° rotation. Gantry rotation speed can be set to 30° per second in standard mode or 15° per second in high-definition mode with up to 400 or 750 images generated during a full 360° rotation. By integrating the navigation system (StealthStation S7, Medtronic) into the scanner setup, intraoperative imaging can be used directly for neuronavigation. This approach enables periprocedural display of entry points as well as identification of important neighboring structures. Qualitative evaluation of screw positioning for the thoracic and lumbar spine was performed using the 6-grade scoring system described by Zdichavsky et al., in which grade Ia represents an excellent position, whereas grades IIIa and IIIb are supposed to be surgically revised [7,8]. The classification system is based on the relative position of the inserted screw to the pedicle and vertebral body ( Fig. 1, Table 1). In addition, the length ratio between screw and vertebral body diameter was calculated, with any relative screw length between 85 and 100% classified as good [9]. All data were transferred to a standard spreadsheet (Microsoft Excel for Mac, version 15.22, Redmond, USA) for further processing. Statistical analysis was performed using dedicated software (IBM SPSS Statistics, version 24.0.0.1 for Mac, Armonk, USA). Normal distribution was assessed with Kolmogorov-Smirnov tests. 
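The relative-length criterion described above is simple enough to state as code. A minimal sketch (Python, with made-up measurements) of the calculation and the 85-100% "good" window:

```python
def relative_screw_length(screw_length_mm, vertebral_body_diameter_mm):
    """Screw length as a percentage of the maximum vertebral body diameter."""
    return 100.0 * screw_length_mm / vertebral_body_diameter_mm

def is_good_length(relative_length_pct):
    """'Good' relative screw length per the 85-100% criterion used here."""
    return 85.0 <= relative_length_pct <= 100.0

# Example with illustrative measurements: a 45 mm screw in a vertebral body
# measuring 50 mm in its maximum diameter.
rel = relative_screw_length(45.0, 50.0)   # 90.0
print(f"relative length = {rel:.1f}% -> good: {is_good_length(rel)}")
```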
For normally distributed continuous variables, we report mean values and standard deviation, whereas absolute values and percentage distribution are displayed otherwise. Chi-square tests were applied to compare categorical data. To measure the effect size of the chi-square test, Cramer's V was computed. Student's t-tests were conducted to determine whether two normally distributed samples differ significantly. P values ≤ 0.05 were considered to indicate statistical significance.
Results
Between June 2, 2010, and June 29, 2015, 285 patients (134 women, 47.0%) underwent 295 dorsal screw-rod instrumentations, with 62 procedures performed on the thoracic and 233 on the lumbar spine. Lumbar vertebra 4 was stabilized most often (185 times) and thoracic vertebra 9 least often (14 times). Mean patient age at the time of surgery was 64.1 ± 12.6 years, with almost 69% of patients over 60 years of age. The indication for surgical treatment was most frequently based on tumor-induced instability in the thoracic spine (46.8%) and degeneration-induced instability in the lumbar spine (76.8%). Of 1704 included screws, 1621 (95.1%) showed an excellent position (Ia) in the initial intraoperative imaging. Of the remaining pedicle screws, 25 (1.5%) protruded beyond the lateral rim of the pedicle (Ib), 28 (1.6%) protruded beyond the lateral margin of the vertebral body (IIa), and 8 (0.5%) crossed the vertebra's midline (IIb). Furthermore, 11 screws each (0.6%) fulfilled the criteria of full lateral (IIIa) or medial displacement (IIIb). Median screw length was 92.6% of the maximum diameter of the vertebral body, with 1224 screws (71.8%) displaying "good" length in the first scan. After intraoperative revision, 58 of 71 screw positions were classified as Ia and one screw was classified as Ib. The other 12 revised screws were either not included in the field of view of the post-revision 3D scan, or no further imaging was performed after intraoperative repositioning. Good relative length was ascertained in 1238 screws (72.7%). Screw position grading and relative length before and after intraoperative revision are summarized in Figs. 2 and 3, respectively. Repeated surgery was necessary in 11 patients (12 operations), with a total of 40 screws (2.3%) being repositioned. However, only two screws (0.1%) in one patient had to be revised due to primary malposition. The remaining screws had to be revised due to progressive loosening. At follow-up, 82.1% of patients declared total pain release or at least significant improvement of back and/or leg pain after surgery. No patient reported aggravating or new pain after surgery at the control examination. Neurological follow-up showed significant improvement or complete remission of symptoms in 70.8% of patients with neurological deficits. In contrast, 8.9% of patients with follow-up had new neurological symptoms that were not reported preoperatively.
Fig. 1 Visual representation of pedicle screw placement grading system. Schematic display and exemplary intraoperative 3D rotational fluoroscopy images of the classification system proposed by Zdichavsky et al. [7,8]. Grading criteria are summarized in Table 1.
Table 1 Classification of pedicle screw placement. Graduation of screw positions in accordance with the classification system proposed by Zdichavsky et al. [7,8].
Fig. 3 Relative screw length. Evaluation of relative screw length before and after intraoperative revision of 71 screws.
Screw lengths between 100 and 85% are considered as "good" Discussion In this study, the accuracy of pedicle screw placement in the thoracic and lumbar spine was investigated using a combined approach of neuronavigation and intraoperative 3D rotational fluoroscopy. High precision of implant positioning was achieved in all spinal sections. With a total of 1738 screws placed, intraoperative revision was performed for 78 screws, whereas repeated surgery due to a missed malposition was necessary in just one patient. The classification system of Zdichavsky et al. represents a validated concept for determining the accuracy of pedicle screw placement in the thoracic and lumbar spine [7,8]. Our results for placement accuracy are superior compared to the literature on screw positioning with conventional fluoroscopy-guidance [9,10]. However, many earlier studies with similar designs use other forms of graduation, e.g., deviation from the ideal position in millimeter, dichotomous assessment of pedicle wall penetration [11] or screw placement < 50% or > 50% outside the pedicle [12]. Other studies only report accurate screw positioning when the thread is entirely intraosseous [13][14][15][16][17], or state misplacement solely in patients with postoperative neurological deficit or screws that require postoperative revision [18]. A meta-analysis by Gelalis et al. evaluated screw positioning with 3D fluoroscopy-guided neuronavigation, reporting accurate positioning for completely transpedicular screws in 81-92% of patients [19]. Assumedly, the inferior performance in individual studies within the meta-analysis compared to the present work may be attributed to substantially smaller patient samples with different sociodemographic characteristics. Besides, various definitions of screw misalignment yielded different shares of "correct" positioning. Different surgical indications, the experience of the surgeon, as well as the complexity of the surgery and height of the instrumented spinal segment also contributed to the heterogeneity of the results. Revision surgery frequencies of up to 5.2% have been described in various studies on neuronavigated spinal surgery [20][21][22], which is considerably higher than in the present work. In the series presented here, only one of 285 patients required repeated procedures because of screw misplacement that was not detected intraoperatively. We assume that the far lower frequency of repetitive surgery can be attributed to the superior screw assessability provided by the O-arm-navigated approach, which is in line with the findings of Beck et al. [6]. Perdomo-Pantoja et al. showed in a recent meta-analysis on the accuracy of pedicle screw placement with different techniques that the highest accuracy results from CT navigation [23]. Nevertheless, it must be stated that high precision can also be achieved with free-hand or fluoroscopy-assisted screw insertion, even in patients with pronounced spinal deformities such as degenerative scoliosis [24]. Although Chan et al. demonstrated that screw breach rates are lower with CT navigation compared to free-hand methods, complication rates remained low with either technique [25]. Several limitations have to be addressed for this study. Since we performed a retrospective analysis, data quality regarding long-term outcome, neurological status and pain relief was inconsistent. Intraoperative revision rates of 4.2% were slightly higher than in comparable studies [6,26]. 
However, we believe that this finding can be attributed to the inclusion of data from the introductory phase of the 3D fluoroscopy system. As degenerative diseases were predominantly responsible for spinal surgery in this study, decompression of spinal stenosis and/or cage insertion was frequently performed in addition to dorsal stabilization with a screw-rod system, hence affecting the clinical outcome. While 8.9% of patients with adequate follow-up reported new neurological symptoms, no association could be ascertained with misplaced screws that were revised intraoperatively.
Conclusion
Combination of neuronavigation and 3D rotational fluoroscopy control ensures excellent accuracy in pedicle screw positioning. As misplaced screws can be detected reliably and revised intraoperatively, repeated surgery for screw malposition is rarely required.
Author contributions: NC analyzed all data and prepared the manuscript. JPG and KSL supported the draft of the manuscript and revised it for style and language. KG and HH supported figure preparation and data analysis. PF provided quality control. SK and TW designed and supervised the study. All authors read and approved the final manuscript.
Funding: Open Access funding enabled and organized by Projekt DEAL. The authors did not receive support from any organization for the submitted work.
Availability of data and materials: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Code availability: Not applicable.
Conflict of interest: The authors declare that they have no competing interests.
Ethical approval: The local institutional review board approved this retrospective study and waived the need for additional written informed consent (reference number 20160925 01). This work was carried out in accordance with the ethical standards of the institutional and national research committee and with the 1975 Declaration of Helsinki.
Consent for publication: Not applicable.
This article is published under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Diagnosis reliability of combined flexible sigmoidoscopy and fecal-immunochemical test in colorectal neoplasia screening
Background: Employing colonoscopy, the gold standard in colorectal cancer (CRC) diagnostic testing, for CRC screening presents a significant risk of complications. Less invasive alternative methods with fewer risks have been proposed in combination, though each has lower diagnostic performance when applied separately. The main objective of this cross-sectional pilot study was to evaluate the feasibility of a CRC screening program using combined flexible sigmoidoscopy and fecal-immunochemical test (FIT).
Methods: The patient population consisted of 2,201 consecutive-case symptomatic patients attending the gastroenterology outpatient clinic with mild complaints between 2012 and 2014. They were referred for FIT. A sample of 252 individuals underwent a subsequent colonoscopy, blind to FIT results, and theoretical sigmoidoscopy was simulated. On a subsample of 57 patients, real sigmoidoscopy was additionally performed. Prior probabilities in terms of patients' compliance and CRC prevalence were estimated, together with the predictive ability of FIT and sigmoidoscopy in the screening population. We assessed the merit of a screening strategy employing two-stage serial multiple testing: a) a first stage combining two parallel tests, that is, flexible sigmoidoscopy and FIT, and b) colonoscopy as the second diagnostic test. The scheme was validated using the actual predictive values derived from the study population.
Results: Colonoscopy found 75 (29.76%) individuals with advanced neoplasia. FIT was positive in 30.3% of advanced neoplasia cases, while between 23.73% and 28.28% met the theoretical sigmoidoscopy simulation criteria, with good concordance between real and theoretical sigmoidoscopy. The colonoscopy referral compliance rate was 52% among FIT-positives. Sensitivity and specificity of the first-stage test combination were better than sigmoidoscopy alone (McNemar test: P<0.001). Negative predictive values for low prevalence levels were between 81.5% and 90.12%.
Conclusion: Combining less resource-demanding and less invasive testing procedures is worthwhile in colorectal neoplasia detection, improving on the sensitivity and specificity of either test alone and leading to better posterior probabilities in usual screening scenarios.
Introduction
Colorectal cancer (CRC) is a major health issue worldwide, 1 so any form of CRC screening is effective and cost saving even in an average-risk population. 2 The US Preventive Services Task Force recommended screening for CRC using high-sensitivity fecal occult blood testing, flexible sigmoidoscopy (SIG) with interval fecal occult blood testing, or colonoscopy (COL). 3,4 COL is the gold standard for colon examination, although it is also a complex and invasive procedure with a small, but not insignificant, risk of major complications. When deciding which test to use, several factors should be considered, for example, the availability of endoscopy units and the burden on their capacity, and the available human resources (specialist medical doctors, nurses, and laboratory staff). Though flexible SIG can only examine the distal part of the colon, the procedure is less time consuming, no sedation is needed, and the bowel preparation is simpler.
On the other hand, when a polyp or an adenoma (depending on the chosen medical criteria) is detected during the examination, a subsequent COL might nevertheless be required. Fecal-immunochemical tests (FITs) have gradually replaced chemical-based tests, showing an improved sensitivity for advanced neoplasia. Mixed criteria were proposed to classify the SIG results, namely: a) US PLCO criteria, proposed by the USA Prostate, Lung, Colorectal and Ovarian (PLCO) trial; 5 b) UK flexible SIG criteria; 6 c) Italian SCORE trial criteria; 7 and d) Norwegian NORCCAP trial criteria. 8 There has been an upward trend in CRC mortality in Europe, for example, by a median of 60% for men between 1989 and 2011. Though it decreased by a median of 14.7% for women over the same period, the trend in CRC mortality is still upward in the Central-Eastern European countries. 9 So combining less invasive tests has been suggested. 10 This paper originates from a feasibility study aiming to evaluate the best screening strategy for colorectal neoplasia in western Romania. The primary objective was to assess a screening strategy employing multiple testing in a two-stage serial scenario: a) a first stage combining two parallel tests, that is, FIT and flexible SIG, both insufficiently effective when used alone, and b) a second stage using COL as the subsequent diagnostic test in cases of positive first-stage findings. In addition, we explored the patients' compliance and the CRC prevalence in the study population, providing an estimate for the prior probabilities. Subsequently, we evaluated the predictive ability of FIT and SIG in the study population, and compared their performance with the values found in the literature. Finally, the proposed scheme was validated by comparing the theoretical values with the actual predictive values derived from the study population.
Methods
The study protocol was approved by the Ethics Committee for Scientific Research of the "Vasile Goldis" Western University of Arad. The County Hospital of Arad is a teaching hospital for the Faculty of Medicine, Pharmacy and Dental Medicine. The costs of medical investigations were covered by the medical insurance packages and the hospital (in part), or by the patients themselves. Each patient signed an informed consent to participate.
Patient population
The patient population consisted of consecutive-case symptomatic patients attending the Gastroenterology outpatient clinic in the County Hospital of Arad for mild abdominal complaints between January 2012 and December 2014. The inclusion criteria in this cross-sectional study were the following: absence of any suggestive CRC symptoms, and being aged between 40 and 79 years. Subjects were excluded if they reported recent explorations (e.g., SIG or COL in the last 5 years) or if one of the following was present: they had a personal history of CRC; they had a family history of hereditary or familial CRC (defined as ≥2 first-degree relatives with CRC or one relative CRC-diagnosed before the age of 60 years); they had a terminal medical condition. All the 2,201 patients who complied with the study criteria were referred for FIT (Figure 1 shows the study flowchart). A sample of 252 individuals, that is, arm (A), underwent a subsequent COL (blind to the FIT results), and a theoretical SIG was simulated. On a subsample of 57 patients, that is, arm (B), a real SIG was additionally performed in order to explore the simulated SIG bias.
The sensitivities and specificities were determined both for the single tests and their combinations, followed by a reliability analysis.

FIT investigation
All individuals collected one stool sample without specific diet or medication restrictions. The rapid immunochemical test Hem-Check 1 (VedaLab, France) was used. Hem-Check is an immunochromatographic analysis with a sensitivity range between 0.04 and 120 µg Hb/g of feces. The standard FIT cutoff (≥20 µg Hb/g) was considered for the fecal hemoglobin concentration.

Colonoscopy
Before COL, all patients were given written instructions regarding the required diet (a low-residue diet during the entire day before the investigation), and the colonic cleansing protocol. A split-dose bowel cleansing regimen of 4 L polyethylene glycol-electrolyte lavage solution was used, and propofol deep sedation. The patients were examined with a videocolonoscope Olympus CF-HQ190 Evis Exera III. All polyps detected during the COL were endoscopically removed and retrieved for subsequent histological examination. If a colon cancer was detected, biopsies were taken. The examination exclusion criteria consisted of incomplete COL, poor bowel preparation, or lack of histology.

Simulated SIG
SIG was simulated considering COL findings in the rectum, the sigmoid, and the descending colon (distal to the splenic flexure), in a similar manner to Castro et al. 11 The diagnostic yield and the theoretical post-SIG referral for COL were defined according to four already established sets of criteria: a) USA PLCO trial, where any polyp (either adenoma or not) would be referred; 5 b) UK flexible SIG, where any distal polyp ≥10 mm, tubulovillous or villous histology, high-grade dysplasia, CRC, or ≥20 polyps above the distal rectum would be referred; 6 c) SCORE trial, where any distal polyp ≥5 mm, tubulovillous or villous histology, high-grade dysplasia, ≥3 adenomas, or CRC would be referred; 7 and d) NORCCAP trial, where any distal polyp ≥10 mm, any adenoma, or CRC would be referred. 8 In addition, we simulated the diagnostic performance of FIT (actual results) in combination with each simulated flexible SIG strategy.

Real flexible SIG
Before the SIG, a bowel preparation was performed, consisting of a single enema self-administered either at home or at the endoscopy unit.

Advanced neoplasia
Advanced neoplasia was defined as cancer or adenomas >10 mm, presenting villous architecture, or manifesting high-grade dysplasia.

Data analysis
Descriptive statistics were presented as mean ± standard deviation for continuous variables, or frequency counts with the percentage for the categorical variables. Exploratory statistical testing was applied to check whether the positive and negative subjects' groups were comparable regarding their age, sex, or urban or rural living (Mann-Whitney U-test for continuous variables and chi-square or Fisher exact test for categorical ones). Sensitivity (SN) and specificity (SP) were calculated for the individual diagnosis tests, that is, FIT and flexible SIG, based on the COL result as the gold-standard reference. For comparing the binary diagnostic tests' performance, the exact McNemar statistical test for correlated proportions was applied. To assess the performance of test combination, two scenarios were considered, as follows.
Parallel design
The tests are administered independently and at the same time, and the results are logically combined, result C being positive if at least one test is positive, as in Equation (1); the effect is a higher sensitivity (ie, decreased false negative rate) at the cost of a lower specificity:

C = T1 ∪ T2 ∪ … ∪ TS   (1)
SNC = 1 − (1 − SN1)(1 − SN2) … (1 − SNS)   (2)
SPC = SP1 × SP2 × … × SPS   (3)

When the number of combined tests is S = 2, the equations for the parallel design become:

SNC = SN1 + SN2 − SN1 × SN2   (2′)
SPC = SP1 × SP2   (3′)

Serial design
Each test is administered taking into consideration the previous testing results; if the first result is negative, the final result is declared negative, and no other test is applied; otherwise, the procedure is repeated up to the number of available tests, as in Equation (4); this improves specificity (ie, decreased false positive rate) at the cost of a lower sensitivity:

C = T1 ∩ T2 ∩ … ∩ TS   (4)
SNC = SN1 × SN2 × … × SNS   (5)
SPC = 1 − (1 − SP1)(1 − SP2) … (1 − SPS)   (6)

When the number of combined tests is S = 2, the equations for the serial design become:

SNC = SN1 × SN2   (5′)
SPC = SP1 + SP2 − SP1 × SP2   (6′)

Figure 2 shows the two-stage screening scheme proposed and analyzed. The statistical analysis was performed using SPSS version 17.0 and R project packages for statistical computing. A confidence level of 0.95 was considered for the estimating intervals, and a P-value of 0.05 was the threshold for statistical significance.

Results
The initial sample (N=2,201) included individuals aged from 40 to 79 years (61.33±9.805), 39.3% females, and 75.1% from urban areas. We found no statistically significant differences in proportion between subjects with positive and negative FIT results regarding either sex (chi-square test, P=0.18) or urban/rural living (chi-square test, P=0.961). The sample in arm (A) included N=252 individuals aged 40-79 years (63.65±8.781). The main characteristics are presented in Table 1. We found no statistically significant differences in proportion between subjects with and without lesions regarding sex (chi-square test, P=0.716), age (Mann-Whitney U-test, P=0.51), or urban/rural living (chi-square test, P=0.083). The proportion of advanced neoplasia was 3.4% of the total of 2,201 patients, with 2% CRC. The compliance with the COL referral after a positive FIT was rather low, that is, 108 out of 206 total FIT-positives (52%). Out of the 252 subjects who actually underwent a colonoscopic investigation, 75 (29.76%) individuals tested positive for advanced neoplasia (Table 1).

Advanced neoplasia
Diagnostic performance of the testing approaches is presented in Table 2. FIT detected 30.3% of the advanced neoplasia cases (Table 2A). The simulated SIG was rather poor as well, especially in terms of sensitivity, that is, leading to a large proportion of false negatives (Table 2B). Table 2C synthesizes the validation results in arm (B), as in Figure 1. Although point estimates for sensitivity and specificity varied, the 95% confidence intervals overlapped to a large degree and the exact McNemar statistical test for correlated proportions resulted in a P-value asymptotically reaching 1, so we considered arm (B) to be a valid proof for the screening evaluation conducted by using the theoretical SIG simulation. As the approach employing the UK criteria 6 was the most conservative in terms of diagnosis performance, all further analyses for test combination employed their sensitivity and specificity values.

Proximal advanced neoplasia
For advanced proximal neoplasia in the sample of arm (A), the FIT sensitivity was 75%, while the specificity was 69.17% (Table 2A). The corresponding results for real SIG in arm (B) are shown in Table 2C.
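For concreteness, these combination rules can be checked numerically. The sketch below (Python, purely illustrative) applies Equations (2′)-(3′) to a first-stage FIT/SIG pair and then Equations (5′)-(6′) for a serial second stage; the FIT and SIG sensitivity/specificity values used here are hypothetical placeholders rather than the study estimates, while the colonoscopy values (91.2% and 95.1%) are those cited for COL later in the text.

```python
def parallel_combination(sn1, sp1, sn2, sp2):
    """Two tests read in parallel: positive if at least one is positive (Eqs. 2'-3')."""
    sn_c = sn1 + sn2 - sn1 * sn2   # sensitivity increases
    sp_c = sp1 * sp2               # specificity decreases
    return sn_c, sp_c

def serial_combination(sn1, sp1, sn2, sp2):
    """Second test applied only to positives of the first (Eqs. 5'-6')."""
    sn_c = sn1 * sn2               # sensitivity decreases
    sp_c = sp1 + sp2 - sp1 * sp2   # specificity increases
    return sn_c, sp_c

# Hypothetical first-stage values (illustration only, not the study estimates).
sn_fit, sp_fit = 0.30, 0.90
sn_sig, sp_sig = 0.28, 0.92
sn_stage1, sp_stage1 = parallel_combination(sn_fit, sp_fit, sn_sig, sp_sig)

# Second stage: colonoscopy applied serially to first-stage positives (91.2%/95.1% as cited in the text).
sn_col, sp_col = 0.912, 0.951
sn_total, sp_total = serial_combination(sn_stage1, sp_stage1, sn_col, sp_col)

print(f"Stage 1 (FIT#SIG):        SN={sn_stage1:.3f}, SP={sp_stage1:.3f}")
print(f"Two-stage (FIT#SIG~COL):  SN={sn_total:.3f}, SP={sp_total:.3f}")
```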
Although in real flexible SIG the splenic flexure could not always be reached, or the colon preparation had not always been done correctly, the sensitivity was nevertheless reasonably good and comparable to the simulated approach by USPLCO and NORCCAP criteria.

Test combination
The pilot scheme proposed for CRC screening is presented in Figure 2. It consists of a two-stage serial design with a combination of two parallel tests, that is, SIG and FIT, in the first stage and a single gold-standard test, that is, COL, in the second stage. The parallel combination of the first stage increases the sensitivity, resulting in a lower false negative rate. The first stage takes advantage of the complementary capacity of SIG and FIT to detect advanced distal and proximal neoplasia, respectively. In the second stage, the specificity increases while maintaining a reasonably high sensitivity. Overall, for such a serial testing scheme, when both FIT and SIG are negative in the first stage, that is, FIT#SIG(-), the final screening result FIT#SIG~COL(-) is negative, thus reducing the number of unnecessary COL tests. The performance issue lies in the false negative rate, that is, whether or not we can actually afford to apply such a screening scheme. Table 3 synthesizes the performance of the first-stage FIT#SIG combination, as in Figure 2, with simulated SIG. The interpretation of any diagnostic test depends not only on sensitivity and specificity, but also on the baseline prior probabilities in the actual population. A critical point is that, no matter how good the diagnostic tests are, prevalence and patient compliance affect the predictive value of any screening result. Therefore, the curves of predicted values versus prevalence (in a larger sense, including compliance) were drawn. Figure 3 shows predictive values versus prevalence ranging from 0% to 100% for FIT, simulated SIG following the UK criteria, 6 and their first-stage combination FIT#SIG compared to SIG. As previously mentioned, the UK criteria were chosen for SIG simulation as being more conservative. The sensitivity and specificity were computed using Equations (2′) and (3′). Improved sensitivity and predictive value negative (PVN) can be observed. Figure 4 shows predictive values versus prevalence ranging from 0% to 100% for the first-stage combination FIT#SIG, COL with sensitivity and specificity of 91.2% and 95.1%, respectively, 3 followed by their second-stage combination FIT#SIG~COL compared to COL. Sensitivity and specificity were computed using Equations (5′) and (6′). Specificity and predictive value positive (PVP) improved, while the PVN did not fall dramatically within the first third of the prevalence range. We can see that the intersection of the two curves (ie, PVP and PVN) moved to lower prevalence values, with an improved PVP curve. For the actual values employed in this simulation, the intersection occurred at 22% prevalence with equal PVP and PVN of approximately 95%, similar to those of COL alone at a higher prevalence level. Table 4 presents the predictive values of the two-stage FIT#SIG~COL combination, with actual results of FIT and real SIG in arm (B) and calculated predictive values for the results with UK criteria in arm (A), as described in Figure 1.

Discussion
The results of this study demonstrate that combining flexible SIG and FIT in CRC screening would lead to better results than screening with either of them alone.
Adding FIT to SIG increased sensitivity for advanced neoplasia in all the four simulating strategies evaluated. Other studies claimed similar findings, for example, using SIG in addition to FIT led to increased sensitivity for advanced proximal neoplasia by nearly 10%, 10 and adding FIT to SIG-based strategies produced a 10%-30% improvement in advanced right-sided neoplasia detection rate, although at the cost of a significant reduction in specificity, 11 which are similar to our results. FIT has already proved to be a cheap and harm-free procedure, which can reduce cancer mortality if used as an annual screening tool. There is conflicting data in the literature, as some authors obtained very low FIT sensitivity in detecting proximal lesions, for example, only 17% in Castro et al. 11 On the other hand, all reported specificity values for FIT were reasonably acceptable, that is, a good characteristic as false positive test results lead to anxiety and unnecessary follow-up COL. It is important to note though that, in some studies, FIT proved similar sensitivity for both proximal and distal advanced neoplasia. 12,13 FIT may have false negative results, but usually the missed lesions are advanced polyps, rather than cancers. Furthermore, it was observed that regular fecal-based diagnostic tests reduced the relative risk of CRC mortality by 15%-25%. 14 In addition, compared to COL, SIG is a cheaper and less time-consuming procedure, it does not need sedation, and bowel preparation is simpler. However, the major disadvantage is its inability to inspect the transverse and right colon. Combining SIG screening with regular fecal occult blood testing has been proposed as a viable option for improving the weak results of SIG in detecting proximal neoplasia. It has been shown that single SIG screening would only lead to distal CRC mortality reduction. 15 In addition, when comparing the detection rates of FIT versus COL, the latter found more neoplasia and advanced adenoma, but there were no statistically significant differences between the two methods for CRC detection. 16 A clear benefit of the present proposed scheme is the integration of COL as a second stage in a serial design. Hence, based on the good rate of PVP, when both tests from the first stage are negative, the result is negative, and further COL tests are avoided. Therefore, bearing in mind the low compliance rate for COL, 17 the proposed design would be clearly advantageous. Moreover, the increased demand for COL units usually leads to delays in CRC diagnosis, so a high accuracy level in the detection of true negative cases in the first stage is of considerable importance. 18 Test replication and standardization should improve precision; on the other hand, if the values of the diagnostic test correlated with the severity of the disease, a well-performing test for advanced illness would be less useful for identifying patients in early stages, when treatment might be most effective. 19 A surveying methodology for the systematic investigation of gastrointestinal disorders has also been proposed. 20 Taking all this context into account, the proposed pilot scheme is also advantageous for multiple testing, since it helps in the making of unequivocal diagnosis by altering the posterior probabilities in a predictable fashion, that is, parallel testing results with decreased false negative rate, while serial testing leads to decreased false positive rate. 
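Since the argument above rests on how prevalence (and, in a larger sense, compliance) shifts posterior probabilities, it may help to make the Bayes' rule step explicit. The sketch below traces PVP and PVN across a range of prevalences, in the spirit of the curves in Figures 3 and 4; the sensitivity and specificity assigned to the combined scheme are hypothetical placeholders, not the study estimates.

```python
import numpy as np

def predictive_values(sensitivity, specificity, prevalence):
    """Bayes' rule: positive (PVP) and negative (PVN) predictive values at a given prevalence."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    tn = specificity * (1.0 - prevalence)
    pvp = tp / (tp + fp) if (tp + fp) > 0 else float("nan")
    pvn = tn / (tn + fn) if (tn + fn) > 0 else float("nan")
    return pvp, pvn

# Hypothetical sensitivity/specificity for the two-stage FIT#SIG~COL scheme (illustration only).
sn_scheme, sp_scheme = 0.45, 0.995
for prev in np.linspace(0.01, 0.30, 8):
    pvp, pvn = predictive_values(sn_scheme, sp_scheme, prev)
    print(f"prevalence={prev:5.2f}  PVP={pvp:.3f}  PVN={pvn:.3f}")
```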
Moreover, there is less unnecessary anticipated discomfort, lowering the delays and increasing compliance. In addition, as the information is clearer, available, and explicable to all the stakeholders, it is easier to assess and revise assumptions when deciding on each practice approach. Moreover, patients' compliance increases when a concrete investigation scheme is provided. A limitation of this study is that all the estimates, except for COL sensitivity and specificity, were not independent, but derived from the study population data and were dependent on the existing knowledge and technology. Therefore, more research should be conducted to get independent estimates of the prior probabilities. Our study was also limited to a CRC screening single round. In addition, there was a rather small number of patients with a high mean age (63.65 years) and female predominance 6827 Diagnosis reliability of combined flexible sigmoidoscopy and FIT (57.1%). Nevertheless, this might not be a hindrance, as it is known that the prevalence of proximal colonic neoplasia increases with age and is higher in females than in males. 21 Also, patients were recruited in a Gastroenterology outpatient clinic, all of them being symptomatic (ie, not a general screening population), although none with cancer referral symptoms (eg, unexplained rectal bleeding, iron deficiency anemia, or changes in their bowel habit). FIT was performed in clinical conditions, therefore good performance results were obtained, which might not be entirely reproducible in less controlled conditions. Employing the UK flexible SIG criteria 9 in simulation was aimed at compensating for this potential difficulty. Although the compliance rate was rather poor (ie, only 108 out of the 206 FIT-positives agreed to undergo a subsequent COL), the data were similar to those reported elsewhere, [17][18][19] and justified the search for a friendlier and less inconvenient approach toward the patient. Combined with lower costs, this could achieve better screening compliance. 17 Another aspect to consider is that different strategies to identify COL referrals have been used in order to limit and better target its employment as an investigation procedure. When choosing a SIG strategy, one must bear in mind the need to balance between detecting many lesions and increasing the COL burden. The values obtained in our study for simulated SIG sensitivity and specificity were similar with those obtained in other studies. While the criteria proposed in the UK Flexible Sigmoidoscopy trial seemed to be the most appropriate in terms of saving resources, the USPLCO trial was the only one to demonstrate a statistically significant reduction in colon cancer incidence. 17,22 The mean values to increase the prior probability in the screening population include explicit referral criteria to be followed by the primary care physicians, risk-scoring based on symptoms and medical best practice recommendations. 18,23 With the proposed screening scheme, at a prevalence of ~3%-4%, the percent of negative subjects who would avoid the unnecessary invasive procedure of COL is between 95% and 99%, so better screening compliance should be expected. Conclusion Flexible SIG and FIT can play an important role in CRC screening, since they are less challenging in terms of the demands placed on resources on the one hand, and less invasive on the other, and so are more affordable when applied on a large scale. 
In this context, test combination significantly improves neoplasia detection, increasing sensitivity and specificity, as well as predictive values in usual screening scenarios. Moreover, on the patient side, having a clear scheme for the screening stages and actual predictive values, there is a greater likelihood of both informed consent and compliance with the physician's recommendations.
Microstructural brain abnormalities in HIV+ individuals with or without chronic marijuana use Objective Cognitive deficits and microstructural brain abnormalities are well documented in HIV-positive individuals (HIV+). This study evaluated whether chronic marijuana (MJ) use contributes to additional cognitive deficits or brain microstructural abnormalities that may reflect neuroinflammation or neuronal injury in HIV+. Method Using a 2 × 2 design, 44 HIV+ participants [23 minimal/no MJ users (HIV+), 21 chronic active MJ users (HIV + MJ)] were compared to 46 seronegative participants [24 minimal/no MJ users (SN) and 22 chronic MJ users (SN + MJ)] on neuropsychological performance (7 cognitive domains) and diffusion tensor imaging metrics, using an automated atlas to assess fractional anisotropy (FA), axial (AD), radial (RD), and mean (MD) diffusivities, in 18 cortical and 4 subcortical brain regions. Results Compared to SN and regardless of MJ use, the HIV+ group had lower FA and higher diffusivities in multiple white matter and subcortical structures (p < 0.001–0.050), as well as poorer cognition in Fluency (p = 0.039), Attention/Working Memory (p = 0.009), Learning (p = 0.014), and Memory (p = 0.028). Regardless of HIV serostatus, MJ users had lower AD in uncinate fasciculus (p = 0.024) but similar cognition as nonusers. HIV serostatus and MJ use showed an interactive effect on mean diffusivity in the right globus pallidus but not on cognitive function. Furthermore, lower FA in left anterior internal capsule predicted poorer Fluency across all participants and worse Attention/Working Memory in all except SN subjects, while higher diffusivities in several white matter tracts also predicted lower cognitive domain Z-scores. Lastly, MJ users with or without HIV infection showed greater than normal age-dependent FA declines in superior longitudinal fasciculus, external capsule, and globus pallidus. Conclusions Our findings suggest that, except in the globus pallidus, chronic MJ use had no additional negative influence on brain microstructure or neurocognitive deficits in HIV+ individuals. However, lower AD in the uncinate fasciculus of MJ users suggests axonal loss in this white matter tract that connects to cannabinoid receptor rich brain regions that are involved in verbal memory and emotion. Furthermore, the greater than normal age-dependent FA declines in the white matter tracts and globus pallidus in MJ users suggest that older chronic MJ users may eventually have lesser neuronal integrity in these brain regions. Introduction HIV infection is associated with chronic neuroinflammation, which contributes to cognitive dysfunction [1] and various brain structural and functional abnormalities in people living with HIV (PLWH) [2]. Cannabis, or marijuana (MJ), is the most commonly abused illicit drug worldwide and in the USA [3] and is used much more often in PLWH than in the general population (26.4% vs.16%) [4,5]. With the legalization of cannabis for recreational MJ use in many states in the USA and in other countries, the prevalence of MJ use among PLWH has continued to increase [4]. Despite the highly prevalent MJ use by PLWH, data regarding whether HIV infection and MJ use may lead to additive or interactive effects on brain function or pathology remain scant and controversial [6]. Several studies found no independent or additive effects on cognitive deficits with chronic MJ users with or without HIV infection [7][8][9]. 
However, MJ use in PLWH was also found to have worse motor learning deficits [10], as well as better verbal fluency and learning [11]. Similar inconsistent findings were reported in MJ users without HIV infection. Chronic MJ users showed poorer learning, memory, attention, executive planning, and lower intelligence quotient (IQ) [12][13][14]. However, deficits in learning and memory normalized within a month after abstinence from MJ use [12,13]. Further, MJ users did not show greater cognitive decline compared with nonusers [15,16], except for those with adolescent onset of MJ use [12,14]. In addition, chronic MJ use may lead to apathy and lack of motivation [17]. Although chronic MJ use may suppress the immune system [18], MJ use in PLWH did not influence viral suppression by cART [19], adherence to cART [20], or mortality [21]. Few neuroimaging studies evaluated the combined effects of HIV infection and MJ use, but the findings were variable. For instance, chronic MJ use in PLWH did not show additional effects on brain atrophy [8] but had interactive effects on brain glutamate levels on proton MR spectroscopy [7]. HIV + MJ smokers also showed greater brain activation in frontal-insular regions compared to HIV+ individuals or MJ users [22]. Several diffusion tensor imaging (DTI) studies evaluated MJ users and found disrupted microstructural integrity in corpus callosum (CC), superior longitudinal fasciculus (SLF), thalamic radiation and uncinate fasciculus (UNC), as well as abnormal structural connectivity to the orbitofrontal cortex (OFC) [23][24][25][26]. Conversely, no group difference between recreational MJ users and nonusers on white matter integrity was also reported, except for an association between lesser white matter coherence in those with earlier age of first use [27]. Whether chronic MJ use influences brain microstructure in PLWH is unknown. Therefore, the current study evaluated whether brain microstructure differs between HIV+ individuals with and without chronic active MJ use (≥ 3 times/week for past 2 years or longer). All HIV+ participants in the current study were maintained on combined antiretroviral therapy (cART) regimens. We hypothesize that: (1) Consistent with prior reports [7][8][9], HIV+ individuals but not MJ users would have poorer cognitive function compared to seronegative non-MJ users (SN), with no interactive or synergistic effects between HIV+ and MJ use on cognitive performance. (2) Based on aforementioned DTI studies, HIV+ subjects would show lower FA and higher diffusivities (AD, RD, and MD) compared to SN, while MJ users would show minimal or no abnormalities, on DTI in the major fiber tracts and subcortical gray matter. Hence, we expected HIV + MJ users to show minimal or no additional effects on DTI metrics in these brain regions compared to either HIV+ subjects without MJ use or SN-MJ users. Participants Using a 2 × 2 design, 90 participants (ages 18-70 years), including 24 SN participants with no MJ use, 22 SN with chronic MJ use (SN + MJ), 23 HIV+ participants with minimal or no MJ use (HIV+), and 21 HIV+ with chronic active MJ use (HIV + MJ), were included in this study. All participants were recruited from the local community, by referrals, on-line advertisements, or flyer postings and were screened initially by telephone. Three hundred thirty individuals were screened initially; 182 (55%) potentially eligible participants were invited for further in-person screening. 
Each signed a written consent form after being verbally informed of the study aims and requirements. The protocol and the consent form were approved by the Cooperative Institutional Review Board of the University of Hawaii and The Queen's Medical Center and were Health Insurance Portability and Accountability Act (HIPAA) compliant. Each participant was additionally screened with detailed medical and drug use histories, medical records reviews, and underwent physical and neuropsychiatric examinations by trained research staff and physicians to ensure they fulfilled the study criteria. 127/182 (70%) participants fulfilled all study criteria, but only 90/127 (71%) of those completed the study. They were men or women of any ethnicity, aged 18-70 years, and able to provide informed consent. SN participants were negative on the ClearView® COMPLETE HIV-1/2 test. HIV+ participants fulfilled these inclusion criteria: (1) HIV seropositive (with documentation from medical records) and (2) maintained on a stable combined antiretroviral therapy regimen for 6 months or longer (by self-report and verified by medical records whenever possible). MJ participants fulfilled these inclusion criteria: (1) chronic MJ use (> 3 times/week for > 2 years) and (2) negative urine toxicology screen for other drugs of abuse (methamphetamine, amphetamine, cocaine, benzodiazepine, barbiturates, and opiates, except for false positive tests from prescribed medications). Exclusion criteria for all participants were similar to those reported previously [7]: (1) history of co-morbid major psychiatric illness; (2) any confounding neurological disorder; (3) significantly abnormal laboratory tests (> 2 standard deviations); (4) moderate to severe substance use disorders (SUD) within the previous 2 years (Diagnostic Statistical Manual-5 SUD criteria, other than marijuana and/or tobacco use disorders); (5) positive urine toxicology screen on the day of visit, except for Δ9-tetrahydrocannabinol (Δ9-THC) in the MJ users; (6) pregnancy; (7) inability to read at the 8th grade level (on Wechsler Test of Adult Reading); and (8) contraindications for MRI studies. Image acquisition and processing All participants were scanned on a 3 Tesla Siemens TIM Trio scanner (Siemens Medical Solutions, Erlangen, Germany). After a localizer, a sagittal 3-D magnetizationprepared rapid gradient-echo scan (TR/TE/TI = 2200/ 4.47/1000 ms; 1 mm isotropic resolution) and an axial fluid attenuated inversion recovery scan (FLAIR, TR/TE = 9100/84 ms, 3-mm slice thickness, 44 slices) were performed. All structural MR scans were reviewed by an experienced Neurologist (L.C.) to evaluate for any confounding gross structural abnormalities. Five of the 90 total participants had minor structural abnormalities: One had a small area of encephalomalacia in the right posterior parietal region; the second had several small gliotic lesions from old toxoplasma lesions in the anterior frontal lobes, the anterior cingulate cortex and thalamus; the third had a small old lacunar infarct in the ponto-cerebellar junction; the fourth had some small areas of white matter hyperintensities in the U-fibers in the parieto-occipital region; and the fifth had a small area of hyperintense signal at the right frontal and temporal lobe juncture along the Sylvian fissure. These abnormalities did not significantly impact the selected regions of interest (ROIs), based on comparisons of our findings with or without these 5 participants' data. 
DTI scans were performed with b = 0 and 12 directions at 1000 s/mm 2 , TR/TE = 3700/88 ms, resolution 1.7 × 1.7mm 2 , 4 mm axial slices with 1-mm gap, and 4 repetitions. Following motion correction [28], the tensor field for each individual brain was calculated using DTIStudio (www.MriStudio.org) and automatically fit to JHU-MNI atlas space using Large Deformation Diffeomorphic Metric Mapping [28,29]. Fractional anisotropy (FA), and axial (AD, first eigenvalue), and radial diffusivity (RD, mean of second and third eigenvalues) were measured in anatomical regions defined in the JHU-MNI atlas [30]. Based on prior DTI studies that demonstrated regional white matter abnormalities in MJ users [23][24][25][26] and in HIV-infected individuals [2], FA, AD, and RD were assessed in the following 11 major white matter structures (18 including the subsections of each structure): corona radiata (anterior, superior, and posterior; or ACR, SCR, and PCR), corpus callosum (genu, body, and splenium; or GCC, BCC, and SCC), sagittal stratum (SS), SLF, superior fronto-occipital fasciculus (SFO), inferior fronto-occipital fasciculus (IFO), internal capsule (anterior and posterior limbs and retrolenticular part; or ALIC, PLIC, and RLIC), external capsule (EC), posterior thalamic radiation (PTR), UNC, and cingulum (connecting to the cingulate gyrus and to the hippocampus; or CGC and CGH). Due to excessively high proportion of crossing fibers, only FA and mean diffusivity (MD, average of the three eigenvalues) were assessed in the four subcortical regions [caudate, putamen, globus pallidus (GP), and thalamus]. Neuropsychological testing Cognitive function was assessed in 7 domains: (1) Learning was assessed with the Rey Auditory Verbal Learning Test (RAVLT, immediate recall) and the Rey-Osterreith Complex Figure Test Test. (7) Fine motor skill was assessed with the Grooved Pegboard Test. The average duration to complete all the tests was 4 hours, with a break after the first half of the tests. Z-scores were generated for each domain, adjusted for age and education, based on a normative database from 547 SN healthy participants who were administered the same tests in a standardized manner in the same laboratory. HIV-associated neurocognitive disorder (HAND) or HANDequivalent status Following the guidelines from the Frascati criteria [31], we use the Z-scores generated for the 7 cognitive domains above, along with our clinical assessments, to evaluate all HIV+ individuals to determine whether they had HAND, and all SN controls whether they had HAND-equivalent cognitive status. Each HAND or HAND-equivalent participant was further subcategorized into asymptomatic neurocognitive impairment (ANI), mild neurocognitive disorder (MND), or HIVassociated dementia (HAD) or HAD-equivalent, based on the Frascati criteria that considered whether the cognitive impairment affected the subject's self-reported mental acuity and daily functioning at work or at home [31]. The impact on the subject's daily functioning was determined from the clinical assessment performed by a physician that included a detailed neuropsychiatric evaluation, along with the HIV dementia scale [32] and the Functional Activities Questionnaire [33]. Statistics All analyses were performed using R (version 3.5.2 https://www.R-project.org/). 
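Before turning to the group statistics, note that the scalar DTI metrics used throughout (FA, MD, AD, RD) follow directly from the three tensor eigenvalues as defined above. A minimal sketch is given below in Python for illustration (the study's processing used DTIStudio and its statistics were run in R); the example eigenvalues are arbitrary, not study data.

```python
import numpy as np

def dti_metrics(eigenvalues):
    """Scalar DTI metrics from the three tensor eigenvalues, sorted so that l1 >= l2 >= l3."""
    l1, l2, l3 = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    md = (l1 + l2 + l3) / 3.0            # mean diffusivity (average of the three eigenvalues)
    ad = l1                              # axial diffusivity (first eigenvalue)
    rd = (l2 + l3) / 2.0                 # radial diffusivity (mean of second and third eigenvalues)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / den) if den > 0 else 0.0   # fractional anisotropy
    return {"FA": fa, "MD": md, "AD": ad, "RD": rd}

# Example: eigenvalues in units of 10^-3 mm^2/s (illustrative values only).
print(dti_metrics([1.7, 0.4, 0.3]))
```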
One-way analysis of variance (ANOVA), Chi-square, Mann-Whitney Test, Kruskal-Wallis Test, and Fisher's Exact Test were used to compare the demographic measures and clinical variables depending on the variable types and distributions. Two-way analyses of co-variance (ANCOVAs) were performed to evaluate the independent and interactive effects of HIV serostatus and chronic active MJ use across the four groups, on the cognitive domain Z-scores and on DTI metrics in the 22 ROIs. Comparisons of the cognitive domain Z-scores were performed with 2-way ANOVA, without co-variates, since age and education level were already adjusted in the Z-scores. However, 2way ANCOVA models for DTI metrics included covariates from variables that are known to or might have an influence on the DTI metrics, such as age, as well as the substance use variables that showed group differences across our subject groups, including percentage of lifetime tobacco users and percentage of regular (> once/ week) alcohol users within the past month. A p value < 0.05 was considered significant for cognitive domain Zscores. ROI-based analyses on DTI were also adjusted for multiple comparisons using the Benjamini-Hochberg procedure. Exploratory correlations were performed between DTI metrics and cognitive domain Z-scores that showed group differences, using the following general linear models: cognitive domain Z-scores as dependent variables; DTI metrics, HIV-status, MJ use-status, and their 2-way and 3-way interactions as independent variables; and age as a covariate. Similar methods were used to explore the correlations between DTI metrics and age, HIV-related clinical variables, or MJ use patterns. Results Participant characteristics (Table 1) All four groups had similar age, sex and racial distributions, socioeconomic status, years of education, and predicted verbal intelligence quotient (IQ). SN had the lowest depressive symptom scores on the CES-D across the four groups (p = 0.015). The two HIV+ groups had similar duration of HIV infection, current plasma RNA levels, nadir and recent CD4 cell counts, percentage of participants on stable combination antiretroviral therapy (cART) regimens, HIV dementia scores, and Karnofsky scales. The two MJ user groups also had similar age of first MJ use, duration of MJ use, daily MJ use, and total lifetime MJ usage. Although more MJ users reported lifetime tobacco use (p = 0.016) and regular recent alcohol use (> once/week within past month, p = 0.004) than non-MJ users, the four groups had similar total lifetime amount and duration of alcohol or tobacco use. Nevertheless, we included the percentage of lifetime tobacco use and percentage of regular recent alcohol use as covariates in the final model that evaluated the DTI metrics. HIV and chronic MJ use on neuropsychological test performance ( Fig. 1) Regardless of MJ use, HIV+ individuals had lower Zscores than SN controls in the domains of Design and Verbal Fluency (0.039), Attention/Working Memory (p = 0.009), Learning (p = 0.014), Memory (p = 0.028), and Global Function (p = 0.012). Trends for similar HIV effects were found in the Executive function (p = 0.055) and Speed of Information Processing (p = 0.064) (Fig. 1). Although incident depression does not appear to affect neuropsychological functioning in HIV-infected men [34,35], due to the group difference in CES-D, we also 1). 
HIV infection and chronic marijuana use on DTI metrics
Independent of MJ use, HIV+ participants had lower FA than SN controls in the bilateral anterior limb of the internal capsule (ALIC), the left cingulum (CGC_L), the left superior fronto-occipital fasciculus (SFO_L), and the right sagittal stratum (SS_R) (p values between 0.001 and 0.038); only ALIC_L remained significant after correction for multiple comparisons (Table 2; Fig. 2A, B). The HIV+ group also had higher diffusivities (AD, RD, or MD) in multiple white matter and subcortical structures (Table 3, Fig. 3). For example, compared to SN subjects and regardless of MJ use, HIV+ had higher AD in the right BCC, left SCC, left SLF, left SCR, and right SFO (Fig. 3B), as well as higher RD in bilateral ALIC, bilateral PCR, left SCR, and left SLF (Fig. 3C), in addition to 6 other brain regions (Table 3). In the subcortical regions, HIV+ had higher MD in the right caudate, left and right GP, and a trend for higher MD in the thalamus (Table 3; Fig. 3D). The only brain region that showed an MJ use effect, regardless of HIV serostatus, was the right uncinate fasciculus, which showed lower AD in MJ users than nonusers (p = 0.024, Fig. 3B). In addition, an HIV-by-MJ interaction (p = 0.029) was observed for MD in the right GP; while SN + MJ had lower MD than SN nonusers, HIV + MJ had higher MD than HIV+ (Table 3, Fig. 3D). In addition to age, we also included the percentage of lifetime tobacco users and the percentage of regular alcohol use within the past month as covariates in the final model, since these two variables showed group differences, but all significant findings remained, and the interaction effect on the MD in the right GP became more significant (Tables 2 and 3). Our exploratory analyses found no correlations between the DTI metrics that showed group differences and HIV-related clinical features (e.g., nadir CD4 and current CD4 counts and duration of HIV infection) or MJ usage patterns (age of onset, daily average use, duration and lifetime amount of MJ use).

Abnormal DTI metrics predicted abnormal cognitive domain Z-scores
We evaluated whether the DTI metrics that showed group or interactive effects also predicted cognitive performance. Lower left ALIC_FA predicted lower Fluency across all participants (r = 0.44; p < 0.001, Fig. 4A) and poorer Attention/Working Memory in all participants except the SN group, while higher diffusivities in several white matter tracts also predicted lower cognitive domain Z-scores.

Age-related changes in DTI
Although HIV+ showed lower FA and higher diffusivities than SN in multiple white matter and subcortical regions, regardless of MJ use, HIV+ and SN showed similar age-dependent decreases in FA in 5 brain regions (left and right ACR, left and right GCC, and right SS) and age-dependent increases in diffusivities in 16 of the 44 regional measures (Fig. 5A, B). In addition, independent of MJ use, HIV+ showed greater age-related decline in the right BCC_FA than SN (HIV × age interaction p = 0.037; Fig. 5C). Furthermore, regardless of HIV serostatus, MJ users showed greater age-related decline than nonusers in left SLF_FA (MJ × age interaction p = 0.044; Fig. 5D) and in left EC_FA (MJ × age interaction p = 0.007; Fig. 5E). Lastly, age-related decline in the right GP_FA was found only in MJ users (HIV × MJ × age interaction p = 0.02, Fig. 5F).

Discussion
The main findings of this study are as follows: (1) regardless of MJ use, the HIV+ participants performed more poorly than SN controls in several cognitive domains; (2) regardless of HIV serostatus, MJ users showed cognitive performance similar to that of nonusers; (3) on DTI measures, the HIV+ group, with or without MJ use, had lower FA and higher diffusivities than SN controls in multiple white matter and subcortical brain regions, indicating greater neurodegeneration and neuroinflammation.
(4) However, regardless of HIV serostatus, MJ users had lower AD than nonusers only in the right UNC, suggesting lesser fiber integrity in this tract. Furthermore, we observed an HIV-by-MJ interaction in the right GP_MD, indicating differential MJ effects on neuroinflammation in this brain region of HIV patients compared to SN controls.

Cognitive performance in chronic MJ users with and without HIV infection
The poorer performance in HIV+ compared to SN controls, regardless of MJ use, in the domains of Design and Verbal Fluency, Attention/Working Memory, Learning, Memory, and Global Functioning is consistent with prior studies in HIV+ individuals [1]. These persistent cognitive abnormalities despite cART were attributed primarily to ongoing neuroinflammation [23][24][25][26]. Also similar to prior reports [7][8][9][10], regardless of HIV status, MJ users had performance similar to nonusers across all cognitive domains. The lack of cognitive deficits in our adult MJ users suggests little or no neurotoxic effects associated with chronic MJ use, which is supported by the lack of decline in IQ in adult-onset MJ users [14]. In contrast, the developing brain of adolescents may be more vulnerable to the neurotoxic effects of MJ. Earlier onset or regular (weekly) MJ use was associated with lower cognition [15,16] and decline in IQ and cognitive function (between ages 13 and 38 years) [14], while earlier age of first recreational MJ use was associated with lower FA and higher RD in the SLF, ILF, and forceps major and minor [27]. Furthermore, greater gray matter volumes in bilateral posterior cingulate, lingual gyri, and cerebellum were found in 14-year-old adolescents who had only one or two instances of cannabis use compared to matched cannabis-naïve controls [27]. In the current study, although no significant HIV-by-MJ interaction was found in any of the cognitive domains, SN + MJ tended to have poorer performance than SN nonusers in Learning, Memory, and motor domains, while HIV + MJ tended to perform better than HIV+ nonusers in Design and Verbal Fluency, Executive Function, and Speed of Information Processing.

[Fig. 3 legend, panel D: Regardless of MJ use status, the HIV+ groups had higher mean diffusivity (MD) in the right caudate, left and right globus pallidus (GP), and left thalamus than the SN groups. In addition, HIV-by-marijuana use showed an interaction effect in the right GP (p = 0.029); while HIV + MJ users had higher MD than HIV+ nonusers, SN + MJ users had lower MD than SN nonusers. SN, HIV-seronegative nonuser group; SN + MJ, HIV-seronegative marijuana user group; HIV, HIV-seropositive nonuser group; HIV + MJ, HIV-seropositive marijuana user group. Red star: p value remained significant after Holm-Bonferroni correction for multiple comparisons.]

These trends are consistent with a recent large study that found MJ use was associated with lower odds of neurocognitive impairment and higher verbal fluency and learning performance in PLWH, but not in the SN participants [11]. This paradoxical effect of MJ use in SN and HIV+ individuals might be related to the anti-inflammatory effects from some of the MJ constituents on the neuroinflammation in PLWH [36,37]. For example, Δ9-THC suppresses cytokine-induced T-cell activation [36,37] and lowers the monocyte-derived proinflammatory factor IP-10 in vitro [37]. Furthermore, MJ-using HIV+ participants showed faster decline of cellular HIV DNA levels during the first 4 months of cART, compared with those who did not use MJ or used other substances [38].
In addition, HIV+ light MJ users had better verbal fluency than SN light users [9], but this advantage was not found in HIV+ heavy MJ users [8], how the dosage and the potency of Δ9-THC in MJ, which has quadrupled in the past two decades [39], may impact cognition in PLWH will need to be evaluated in future studies. DTI metrics and neuroinflammation in chronic MJ users with and without HIV infection Consistent with prior DTI studies [2,[40][41][42], our HIV+ participants, regardless of MJ use, had lower FA and/or higher diffusivities in the corpus callosum, coronal radiata, internal capsule, the cingulum, SLF, SFO, and other white matter tracts. The lower FA and higher diffusivities in HIV+ individuals most likely reflect disrupted white matter microstructure, perhaps due to neurodegeneration and chronic neuroinflammation induced by ongoing HIV+ infection. Our HIV+ participants also showed higher MD in subcortical gray matter (caudate, globus pallidus, and thalamus), which suggests possible demyelination in these subcortical regions. The elevated caudate_MD was also reported in patients with early HIV infection [43], while the elevated GP_MD in our HIV subjects would be consistent with the 18 kDa translocator protein (TSPO) binding, indicating microglial activation, in this brain region of virally suppressed HIV patients [44]. Furthermore, relatively lower FA in the globus pallidus, along with poorer motor skills, was also found in HIV+ women but not HIV+ men [45]. In contrast, the right uncinate fasciculus (UNC) was the only brain region that showed abnormally lower AD in MJ users, regardless of HIV status. The lower AD in the UNC indicates reduced water movement along the axonal fibers, which might results from lesser axonal fiber density, accumulated cellular debris from damaged axonal, or extracellular space tortuosity [46]. In preclinical studies, reduced AD was consistently found at early stages of brain injury from models of multiple sclerosis and correlated with the axonal damage or loss [46,47]. Furthermore, the UNC in MJ users was found to have reduced FA and elevated MD [25], as well as shorter than normal fiber bundle [48]. Lower FA in the UNC was also associated with higher apathy scores in MJ users [25]. The UNC fasciculi are long-range projecting fibers that connect the orbitofrontal cortex with entorhinal and fusiform cortices, which have densely localized CB1 receptors that are target receptors for Δ9-THC [48]. Lower AD in the UNC might indicate lesser connectivity among these regions, which were found to be abnormally thinner and associated with poorer verbal memory in cannabis users [48]. The HIV-by-MJ interaction effect found in the right globus pallidus, with relatively lower MD in SN + MJ users but relatively higher MD in the HIV + MJ users, parallel the interactive effects observed in a proton MR spectroscopy ( 1 H-MRS) study, with relatively lower levels of myoinositol, a glial marker, in the basal ganglia of SN + MJ users but relatively higher myoinositol levels in HIV + MJ users [7]. In HIV+ patients, higher diffusivity was associated with higher myoinositol level, indicating greater neuroinflammation, which in turn correlated with poorer cognitive performance [49]. Therefore, this interactive effect suggests that while chronic MJ use suppressed glial activation in the GP of SN subjects, chronic MJ use promoted glial activation in HIV+ users. 
Although DTI or 1 H-MRS cannot determine the glial cell types or cellular processes involved with such glial activation, multiple in vitro or in vivo rodent or macaque HIV models demonstrated modulations of the microglial response by endocannabinoids as well as exogenous cannabinoids, such as Δ9-THC, see review [6]. For instance, in most rodent HIV models, endocannabinoid CB1/CB2 or CB2 receptor agonists [50,51] or inhibitors of the degradation enzymes for endocannabinoids [52] led to decreased gp-120 induced inflammatory interleukin-1β and/ or activation and upregulation of CB2 receptors, which are located on microglia, and ultimately suppressed the inflammatory processes associated with microglial activation and attenuated or mitigated HIV-associated neurotoxicity [6]. Another study that used a GFAP/GP120/FAAH-/mouse model also demonstrated decreased astrogliosis with improved neurogenesis due to the decreased endocannabinoid degradation by fatty acide amide hydrolase (FAAH) [53]. The only study that administered Δ9-THC to HIV-infected SCID mice found higher viral load, upregulated CCR5 expression, and greater HIV+ cells with longer THC exposure [54]. Furthermore, in simian immunodeficiency virus (SIV)-infected macaques, Δ9-THC administered prior to SIV infection produced dosedependent cognitive slowing, while chronic Δ9-THC after the SIV infection produced tolerance to the behavioral effects [55]. Another SIV model found that Δ9-THC before and after SIV infection slowed disease progression and decreased inflammation, as well as increased BDNF and decreased proinflammatory cytokines in the striatum [56]. Hence, although the majority of the cell culture and rodent models of HIV found endocannabinoids to have neuroprotective and anti-inflammatory effects, studies that used Δ9-THC in rodents and non-human primates were less clear. Future studies with more specific glial markers to assess these possible differential effects of MJ use on neuroinflammation in the GP between PLWH and SN are needed. DTI metrics predicted cognitive performance in HIV+ individuals and MJ users Microstructural abnormalities predicted poorer performance on some cognitive function in our participants. Specifically, lower FA in ALIC, suggesting lesser fiber coherence in this tract, predicted lower performance on Design and Verbal Fluency across all participants, and lower performance on Attention/Working Memory among our HIV+ and MJ user subjects. In addition, we found higher diffusivities in several white matter tracts (ALIC, SLF, PCR, SCR) in our MJ users that predicted poorer memory and Global Z-scores, which is similar to the findings that higher corona radiata diffusivity was associated with slower processing speed in the aging population [57] or with poorer learning in HIV+ participants [41]. Lower fiber coherence and higher diffusivities were often reported in brain disorders with neuroinflammation and correlated with microglial activation and cognitive deficits. For instance, microglial activation, as shown by greater TSPO tracer [(11)C]PBR28 binding correlated with higher diffusion on DTI and greater cognitive deficits in HIV patients [44], while greater ionized calciumbinding adaptor (iba-1) and lower synaptophysin staining in brain tissues also correlated with greater diffusion on DTI and cognitive impairments in a mouse model of HIV [58]. Even in HIV patients who were virally suppressed, [11C]DPA-713-TSPO binding also predicted poorer cognitive performance in multiple cognitive domains [59]. 
Lastly, 18F-DPA714-TSPO-binding in SIVsm804E-infected rhesus macaques also correlated with microglial activation assessed from iba1 staining in brain tissue, along with alterations in CSF viral load, CSF levels of monocyte chemoattractant protein 1 (MCP-1), tumor necrosis factor alpha (TNF-α), and various inflammatory cytokines [60]. Taken together, these human and preclinical imaging studies demonstrated strong relationships between microglial activation, ongoing neuroinflammation, and cognitive dysfunction. Microglial activation, increased proinflammatory cytokine production, and a reduction in synaptic density are key pathological features associated with HIV-associated neurocognitive disorders (HAND). Although the exact mechanisms for how microglial activation may lead to cognitive disorders remain unknown, one mechanism involves excitotoxic neuronal injury from increased extracellular glutamate concentration. The increased glutamate may result from upregulation of glutamate-generating enzyme glutaminase [61], which are found HIV-infected microglia and macrophages, and are potentiated by interferons from the innate immune responses [62], as shown in postmortem brain tissues of patients with HIV dementia [61,63]. Furthermore, the proinflammatory cytokine tumor necrosis factor inhibits the reuptake of glutamate by activated astrocytes [64], leading to incomplete recycling of glutamate back to glutamine. This incomplete recycling would led to reduced intraneuronal glutamate levels, as observed on 1 H-MRS studies, especially in HIV patients with cognitive deficits [65]. These imaging studies, along with preclinical and postmortem studies in HIV patients, documented that microglial and astroglial activation are both involved with the neuroinflammatory cascades that are amplified by toxic HIV-viral proteins, contributing to glutamate-mediated excitotoxic neuronal injury and cognitive dysfunction [66]. The relationships between marijuana use, microglial activation, and cognitive dysfunction are less clear. Two cannabinoids, Δ9-THC and cannabidiol (CBD), are present in marijuana, and both may decrease the production and release of proinflammatory cytokines, including interleukin-1beta (IL-1β), interleukin-6, and interferon (IFN)beta, from LPS-activated microglial cells [67], which would support the anti-inflammatory effects of marijuana. However, in mice, subchronic administration of THC activated cerebellar microglia and increased the expression of neuroinflammatory markers, including IL-1β, which in turn correlated with deficits in cerebellar conditioned learning and fine motor coordination [68]. Collectively, all of these studies indicate that greater diffusivity observed on DTI likely reflect ongoing microglial activation and neuroinflammation, which may ultimately lead to poorer cognitive performance. Age-related and MJ-related changes in DTI metrics in HIV+ individuals and MJ users Our HIV+ participants had greater than normal agedependent declines in FA, suggesting accelerated agerelated loss of fiber integrity, in the corpus callosum regardless of MJ usage; this finding is consistent with those in prior DTI studies [40,69]. In addition, we observed greater than normal age-related FA decline in the SLF and EC of MJ users regardless of HIV serostatus, which is also consistent with the greater than normal agerelated FA decline in multiple white matter regions in MJ users [24]. 
Lastly, age-dependent decline was also observed in the right globus pallidus FA only in SN-MJ users, which indicates lesser microstructural integrity in these older MJ users. However, due to the limited sample size in each of the subgroups, these exploratory observations will need to be confirmed in future studies. Limitations Our study has several limitations. (1) Our cohort included primarily men; therefore, we were not able to assess sex-specific differences on brain microstructure in relation to the possible additive or interactive effects of HIV infection and chronic MJ use. (2) Since this is a cross-sectional study, we could not determine the causality of chronic MJ use on altered DTI metrics or cognitive deficits in HIV+ individuals. Future longitudinal studies are necessary to further delineate the independent and combined effects of chronic MJ use and HIV infection on brain microstructure. (3) Self-report of MJ use or other substances used may be inaccurate or under-reported and might have confounded our results. (4) Despite cART in all HIV patients, a few patients still had detectable viral loads, which may be due to drug resistant mutations of the virus in these individuals. Therefore, the cognitive and DTI results may vary in those with persistent viral replication, which should be evaluated in future larger studies. (5) This higher than expected number of SN with HAND-equivalent cognitive status (21-27%) may be due to a selection bias, since individuals with subject cognitive complaints might have sought out research studies that provided free brain MRI scans and cognitive assessments. Having more HAND-equivalent SN control subjects might have minimized the cognitive group differences with the other subject groups; however, our SN and SN + MJ groups showed relatively normal Z-scores across the domains, except for slightly slower motor performance in the SN + MJ group. Conclusions Adding to previous studies that did not find additional adverse effects of chronic MJ use on cognition [7][8][9], brain morphometry [8], and clinical outcomes, such as viral load, CD4 cell count, and total mortality [19,21] in HIV+ participants, our findings suggest that chronic MJ use has no additional negative influence on neurocognitive deficits in PLWH. However, the lower AD in the UNC of MJ users suggests axonal loss in this white matter tract that connects to CB1 receptor rich brain regions that are involved in verbal memory [48] and emotion. Furthermore, the interactive effect on MD in the globus pallidus suggests that MJ use may have an anti-inflammatory effect in SN subjects but might exacerbate the neuroinflammation in this brain region in HIV patients. Furthermore, the greater than normal agedependent FA declines in several white matter tracts, and in the GP in SN-MJ users, suggests that older MJ users may eventually have lesser neuronal integrity in these brain regions.
Phase Coherence in Chaotic Oscillatory Media

Collective oscillations of lattices of locally-coupled chaotic Rössler oscillators are studied with regard to the dynamical scaling of their phase interfaces. Using analogies with the complex Ginzburg-Landau and the Kardar-Parisi-Zhang equations, we argue that phase coherence should be lost in the infinite-size limit. Our numerical results, however, indicate possible discrepancies with a Langevin-like description using an effective white-noise term.

Motivation
A lattice of Rössler systems coupled to their nearest neighbors by diffusion can be schematically written as Equation (1), where C_i is the three-component vector sitting at site i, ε is the coupling strength, τ is the interval between coupling times, N is the number of nearest neighbors, V_i is the neighborhood of site i, and F_τ(C_i^t) represents the state of C_i^t after evolution under the Rössler flow R during a time τ. In other words, dropping the subscript i, one obtains Equation (2). The Rössler model possesses remarkable properties. It is usually written as a three-variable, first-order, ordinary differential system:

dc1/dt = −c2 − c3,   dc2/dt = c1 + a c2,   dc3/dt = b + c3 (c1 − c),   (3)

where a, b, and c are real parameters. Increasing c while keeping a and b fixed (for example a = b = 0.2 as in [2]), system (3) undergoes a Hopf bifurcation followed by a cascade of subharmonic bifurcations eventually leading to chaos. For these parameter values, the chaotic attractor is characterized by c3 peaks of irregular amplitude but almost perfectly-defined frequency (Fig. 1a). Similarly, trajectories in the (c1, c2) plane are cycles of irregular amplitude but with a well-defined period (Fig. 1b). This allows the definition of "phase" and "amplitude" variables, either simply by using the (c1, c2) plane with an origin set in the middle of the attractor (Fig. 1b), or by more sophisticated methods. One can thus speak of a "chaotic oscillator" [5]. Picturing the Rössler system as an oscillator translates the problem of the LRRO observed in [2] into a phase coherence, or synchronization, problem for a chaotic oscillatory medium (note that the remarkable phase coherence of the Rössler model was recently studied and quantified in [5], where the possibility of (exact) synchronization of these systems was evidenced).

In [2], the chaotic oscillatory behavior of lattices of Rössler systems of the type (1-3) was also used to draw an analogy with the complex Ginzburg-Landau equation (CGLE), the generic nonlinear partial differential equation describing an oscillatory medium near a Hopf bifurcation [6]. The CGLE reads:

∂A/∂t = A + (1 + iα) ∇²A − (1 + iβ) |A|²A,   (4)

where A is a complex field, and α and β are real parameters. It was argued that the LRRO observed can be well accounted for by a CGLE with parameters corresponding to a regime where homogeneous oscillations are (linearly) stable, with some residual "effective" noise. In that analogy, the complex plane roughly corresponds to the (c1, c2) plane of the Rössler variables. On general grounds, one expects the soft phase modes of the noisy stable CGLE to be described at large scales by the Kardar-Parisi-Zhang equation (KPZE), a stochastic model for the kinetic roughening of fluctuating interfaces [3], which reads:

∂h/∂t = ν ∇²h + (λ/2) (∇h)² + η(x, t),   (5)

where h is a real field, ν and λ are real parameters, and η(x, t) is an uncorrelated white noise with zero mean and correlators

⟨η(x, t) η(x′, t′)⟩ = 2D δ^d(x − x′) δ(t − t′).   (6)
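As a concrete illustration of the lattice dynamics (1)-(3), the sketch below integrates a one-dimensional chain of Rössler oscillators with pulsed diffusive coupling and extracts a phase from the (c1, c2) plane. The coupling step is written here in the usual "democratic" form C_i ← (1 − ε) F_τ(C_i) + (ε/N) Σ_{j∈V_i} F_τ(C_j), which is an assumption rather than a statement of the exact rule used in [2]; the parameter value c = 5.7, the integrator, and the time step are likewise illustrative choices.

```python
import numpy as np

def rossler_rhs(C, a=0.2, b=0.2, c=5.7):
    """Rossler vector field (Eq. 3) for an array of states C with shape (L, 3)."""
    c1, c2, c3 = C[:, 0], C[:, 1], C[:, 2]
    return np.stack([-c2 - c3, c1 + a * c2, b + c3 * (c1 - c)], axis=1)

def evolve_flow(C, tau, dt=0.01, **params):
    """Integrate each site independently under the Rossler flow for a time tau (RK4 steps)."""
    for _ in range(int(round(tau / dt))):
        k1 = rossler_rhs(C, **params)
        k2 = rossler_rhs(C + 0.5 * dt * k1, **params)
        k3 = rossler_rhs(C + 0.5 * dt * k2, **params)
        k4 = rossler_rhs(C + dt * k3, **params)
        C = C + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return C

def couple(C, eps):
    """Assumed pulsed diffusive coupling with the two nearest neighbours on a periodic chain."""
    neigh_mean = 0.5 * (np.roll(C, 1, axis=0) + np.roll(C, -1, axis=0))
    return (1.0 - eps) * C + eps * neigh_mean

def unwrapped_phase(C):
    """Phase interface from the (c1, c2) plane, unwrapped along the chain for spatial continuity."""
    return np.unwrap(np.arctan2(C[:, 1], C[:, 0]))

# Illustrative run: L sites, coupling strength eps, coupling interval tau.
L, eps, tau = 256, 0.3, 1.0
rng = np.random.default_rng(0)
C = rng.uniform(-1.0, 1.0, size=(L, 3))
for step in range(2000):
    C = couple(evolve_flow(C, tau), eps)
phi = unwrapped_phase(C)
w2 = np.mean((phi - phi.mean()) ** 2)   # mean square width of the phase interface
print(f"w^2 = {w2:.4f}")
```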
In the present context, h represents the angular (or phase) argument of the complex field A (or the phase φ of the Rössler oscillators) followed by continuity in space and time from some arbitrary initial value. In this representation, the phase coherence problem described above can be restated in terms of the roughness of the phase interface: if the interface is rough (its mean square width diverges in the infinite-size infinite-time limit), then no phase coherence exists. The KPZE has gained considerable importance because many "microscopic" models share its non-trivial scaling properties, and also because some analytical results have been obtained [3]. Among those, one is crucial here: interfaces governed by the KPZE are always rough for space dimensions d ≤ 2. Assuming the validity of the KPZE to describe the large-scale properties of lattices of coupled Rössler oscillators, this result is at odds with the conclusion reached in [2]. Thus, either the numerical results of [2] were too limited to reveal the loss of phase coherence in the infinite-size, infinite-time limit, or the KPZE is not the correct stochastic equation. In the latter case, the discrepancy probably lies in the properties of the "effective noise" 3 . As a matter of fact, there is no a priori reason for the KPZE to be the relevant large-scale description of the phase dynamics of coupled Rössler oscillator lattices. But the "universality class" of the KPZE has been shown to be remarkably large. In particular, recent findings show it to include the phase interface dynamics generated by the CGLE in its so-called "phase turbulence" regime [9]. In this spatiotemporally chaotic regime, there are no zeroes of the complex field A, and a fluctuating continuous phase interface can always be defined 4 . To some extent, this case might appear very similar to the chaotic oscillatory media formed by lattices of Rössler systems; thus, one would expect the KPZE to be relevant. On the other hand, as recalled above, chaos in the Rössler system possesses some very specific features that might possibly produce an "effective noise" with peculiar properties. Given the space-time evolution of some interface, there exists a rather wellestablished procedure to determine whether the KPZE is a relevant large-scale description [7]. In the following, we briefly recall this procedure and follow it to investigate the coherent oscillations in lattices of coupled Rössler systems from the angle of the phase interface dynamics they produce. We have performed numerical experiments of system (1-3) and studied the scale-invariance properties of the interface constructed from the "phase" of each of the Rössler oscillators. The calculations were done in one space dimension, mostly for numerical convenience -if phase coherence is to be broken, it should be easier to observe for d = 1 -but also because many exact results are known in this case for the KPZE. Scale-invariance relies on the scaling assumption h(ℓx, where ℓ is a similarity factor, ζ and z are (respectively) the roughness and dynamical exponents. For interfaces governed by the KPZE, exponents are exactly known for d = 1: z = 3 2 and ζ = 1 2 . All scaling laws given in the following are for these values. In practice, one usually considers global quantities such as the mean square width of the interface: where . . . x denotes space average. 
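The mean square width defined above is straightforward to compute from snapshots of the unwrapped phase field. The sketch below (an assumed implementation, not taken from the paper) evaluates w²(t, L) and the local log-log slope of w² versus t; slopes of 1/2 (linear regime) and 2/3 (KPZE regime in d = 1) are the benchmark values quoted in the figure description below. The ballistic-deposition loop at the end is only there to exercise the functions on a lattice model known to belong to the KPZ class.

```python
# Minimal sketch (assumed implementation): interface width and local scaling slope.
import numpy as np

def mean_square_width(h):
    """h: array of shape (n_times, L); w^2(t) = spatial variance of h."""
    return np.var(h, axis=1)

def local_slope(t, w2, window=200):
    """Sliding log-log slope of w^2 versus t (1/2: linear regime, 2/3: KPZ, d=1)."""
    logt, logw2 = np.log(t), np.log(w2)
    out = np.full(len(t), np.nan)
    for i in range(window, len(t)):
        out[i] = np.polyfit(logt[i - window:i], logw2[i - window:i], 1)[0]
    return out

# Self-test on ballistic deposition (KPZ class): the late-time slope should
# drift towards 2/3, finite-size and finite-time corrections permitting.
rng = np.random.default_rng(0)
L, n_layers = 512, 2000
h = np.zeros(L)
snapshots, times = [], []
for step in range(1, n_layers * L + 1):
    i = rng.integers(L)
    h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
    if step % L == 0:                     # record once per deposited monolayer
        snapshots.append(h.copy())
        times.append(step // L)
w2 = mean_square_width(np.array(snapshots))
slope = local_slope(np.array(times, dtype=float), w2, window=200)
print("late-time local slope:", round(float(np.nanmean(slope[-200:])), 2))
```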
For a system of finite-size L, the width of an initially flat interface grows and saturates to a size-dependent mean value: For an infinitely large system, w 2 grows indefinitely: For the KPZE, this growth phase actually takes place only beyond certain crossover scales [8]: before which another scaling is observed, because the nonlinearities are not yet effective. The "linear" growth phase is characterized by a growth exponent β = (2 − d)/4 = 1/4 for d = 1. One expects: : in the growth regime, a system of 2 18 oscillators seems to remain in the linear regime (single run). Lines of slope 1/2 (linear regime) and 2/3 (KPZE nonlinear regime) are shown. The insert shows the time evolution of the "local growth exponent" β loc (t) calculated over a time-window ∆t = 15000. We cannot rule out the beginning of a crossover from the value 1/2 to some larger value, but the data is too noisy to conclude. Relations (8), (9), and (11) allow one to check dynamical scaling and to estimate whether the measured exponents are consistent with those of the KPZE. In addition, the measure of the numerical prefactors of the scaling laws can lead to a determination of the effective parameters ν, D, and λ of the corresponding KPZE and of the crossover scales L c and t c , provided that λ is determined independently. This is usually achieved by measuring the changes in the velocity of the interface v = d φ x /dt when it is submitted to a tilt q = 2πn/L, where n is an integer "winding number", using the relation [7]: 3 Chain of Rössler systems with purely diffusive coupling from random initial conditions far from the center of the Rössler attractor, all oscillators quickly reach nearby values, yielding an initially quasi-flat interface. To estimate λ, we performed tilt experiments, preparing initial conditions with a prescribed winding number n, and measuring the velocity of the phase interface. Following Eq. (12), only the small q behavior is of interest. However, for too small tilts, the variations of v are not numerically measurable. Consequently, a reliable measure of λ is very difficult. Our results, obtained for moderate tilts, give λ ≃ −2.7. This is in rough agreement with [2], since, in the stable CGLE context, λ = 2(β − α) with the parameters estimated at α ≃ 0.66 and β ≃ −1.06 Gathering these results together, we obtain the following estimates: g ∼ 0.4, L c ∼ 350, and t c ∼ 2 × 10 −4 . Clearly, there is a contradiction between these estimates and the recorded behavior, which remained in the linear regime well beyond these scales. We will come back to this point in the discussion. There is one remarkable fact in the above results: the largest widths reached during our calculations are always very small (at most of the order of 2π). "Roughening" is thus extremely weak in this system, even though the natural extrapolation of our numerical results is that, indeed, phase coherence should be lost in the infinite-size, infinite-time limit. In the next section, we consider the same system but with a modified coupling designed to increase the roughening of the phase interface. Chain of Rössler systems with diffusive-dispersive coupling For the two-dimensional lattice of [2], the effective CGLE was found to be in a regime where the spatially-homogeneous solution A = exp(−iβt) is linearly stable. One way of bringing the effective CGLE into an intrinsically chaotic regime -and thus, hopefully, to strengthen roughening-is to increase α, so as to be in a phase turbulence regime (which is reached when 1 + αβ < 0). 
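The tilt measurement of λ described above reduces to a simple quadratic fit. Assuming the standard form of Eq. (12), v(q) ≃ v(0) + (λ/2) q² with q = 2πn/L, the sketch below recovers λ from a least-squares fit of v against q²; the velocity values used here are entirely hypothetical, and, as noted above, the small-q region that actually matters is hard to resolve numerically.

```python
# Minimal sketch (assumed form of Eq. (12)): lambda from tilted-interface velocities.
import numpy as np

def estimate_lambda(q, v):
    """Least-squares fit of v = v0 + 0.5 * lam * q**2; returns (lam, v0)."""
    slope, v0 = np.polyfit(q**2, v, 1)
    return 2.0 * slope, v0

L = 1024
n = np.array([2, 4, 6, 8, 10])                 # winding numbers (moderate tilts)
q = 2.0 * np.pi * n / L
v = np.array([1.0801, 1.0805, 1.0811, 1.0819, 1.0830])   # hypothetical v(q)
lam, v0 = estimate_lambda(q, v)
print(f"lambda ~ {lam:.2f}, untilted velocity ~ {v0:.4f}")
```

The dispersive modification of the coupling considered next is one way of pushing the chain toward such a phase-turbulent regime.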
For the Rössler lattice, this can be naively achieved by introducing a dispersive-like coupling between the c 1 and c 2 variables, replacing (1) by: where δ is a new parameter controlling the dispersive part of the coupling. We studied the dynamic scaling of the phase interface with δ = 0.4, and all other parameters as in the previous section. There is a clear increase in the phase fluctuations, though without the appearance of defects, as might be expected from a "phase turbulence-like" behavior. As before, the mean saturated square width scales with L (Fig. 3a), yielding D/24ν ≃ 2.85 × 10 −5 . The growth regime of the phase interface of a large system quickly reaches the scaling regime (9) characteristic of the KPZE (Fig. 3b). Even though the roughening remains weak in absolute terms, the 2/3 exponent indicates that the particular choice of coupling made here achieved its goal. The insert of Fig. 3b shows that the skewness of the interface, a universal ratio of amplitudes, takes the value expected for the one-dimensional KPZE. The scaling laws for w shown in Fig. 3 do not allow the independent determination of ν and D. From Fig. 3b, using (9), one gets λD 2 /4ν 2 ≃ 1.2 × 10 −7 . This, together with the value D/24ν ≃ 2.85 × 10 −5 measured from Fig. 3a, actually provides the following estimate: |λ| ≃ 1.0. There exist several ways of completing the estimation of the KPZE parameters. Here, as we merely wanted to check the consistency of the KPZE picture, we limited ourselves to a rough fit of the early-time (t < 300)) growth with the linear regime (11) (not shown). This gives D/ √ 2πν ≃ 2 × 10 −4 . Finally, we find ν ≃ 0.5 and D ≃ 3.5 × 10 −4 , leading to g ≃ 2.4 × 10 −3 , L c ≃ 6 × 10 4 , and t c ≃ 8 × 10 7 . While the value of L c is reasonable in view of our numerical results, that of t c seems too large. One must keep in mind, of course, that these values are only rough estimates, especially for t c , given their large variation with parameters ν, D, and λ (cf. Eq. (10)). An additional quantitative agreement with the KPZE is provided by the value of the skewness of the interface in the growth regime, which is very close to the "universal" value for the KPZE [7]. Discussion Extrapolating the results of the numerical experiments reported in this work to the infinite-size, infinite-time limit, one may first conclude that phase interfaces of chains of coupled Rössler systems roughen, even if quantitative agreement with the KPZE is debatable. This implies the loss of the phase coherence observed in finite systems. But this roughening is extremely weak, especially in the case of pure diffusive coupling 5 . Using properties of the KPZE, one can only expect an even weaker roughening in two space dimensions. In particular, a very slow logarithmic variation of the saturated width with L during an extremely extended linear regime should be observed (the crossover scales can easily be huge, given their variation with parameters for d = 2) [8,9]. It is not surprising, then, that no loss of coherence could be detected within the size/time range investigated in [2]. As mentioned, the validity of the KPZE as the relevant large-scale stochastic description has not been firmly established. While the situation is satisfactory in the case of diffusive-dispersive coupling, there are discrepancies for the purely diffusive case: notably the estimates for L c and t c are inconsistent with the fact that the system was observed to remain in the linear growth regime for L = 2 18 and t > 10 5 . 
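The skewness quoted above is a dimensionless ratio of centred spatial moments of the height field and is easy to measure from snapshots. The sketch below (an assumed implementation) computes the run-averaged ratio m3/m2^(3/2); the synthetic Gaussian interfaces used for illustration are statistically symmetric and give a skewness near zero, whereas snapshots of the dispersively coupled chain in the KPZ growth regime should approach the universal value, with a sign set by the sign of λ.

```python
# Minimal sketch (assumed definition): skewness of interface height fluctuations.
import numpy as np

def interface_skewness(h):
    """h: (n_runs, L) snapshots taken at equal times; returns run-averaged m3/m2^1.5."""
    dh = h - h.mean(axis=1, keepdims=True)    # remove the spatial mean, run by run
    m2 = (dh**2).mean(axis=1)
    m3 = (dh**3).mean(axis=1)
    return float((m3 / m2**1.5).mean())

# Synthetic, statistically symmetric interfaces: skewness should be near zero.
rng = np.random.default_rng(1)
h = np.cumsum(rng.normal(size=(64, 1024)), axis=1)
print("skewness of synthetic interfaces:", round(interface_skewness(h), 3))
```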
There are, in our view, two possible reasons for this. First, our estimates of the parameters of the effective KPZE might be inaccurate, leading to estimates for the crossover scales that are orders of magnitude away from their actual values. Indeed, given expressions (10), the values of t c and L c can change dramatically even with moderate changes of λ, D, and ν. Moreover, λ is given by the variation of the interface velocity near zero tilt (Eq. (12), a region difficult to probe numerically. Thus, the "true" value of λ could be extremely small, and consequently, L c and t c much larger than the estimates found here. Second, and this is probably related to the first point, the "effective noise" could well be very different from (6) [10]. We stress again that the chaotic regime of the Rössler system used here is characterized by strong amplitude fluctuations (in the (c 1 , c 2 ) plane) and quasi-nil phase fluctuations. Thus, the very strong coherence of the phase interface in the case of diffusive coupling should not be surprising. On the other hand, the cross-coupling term added to introduce dispersion (Sec. 4) does provide a way of obtaining large phase fluctuations directly coupled to the local amplitude chaos of the Rössler system. Finally, we would like to come back to the status and role of the CGLE in the problem studied here. Even though lattices of coupled Rössler systems do exhibit many of the qualitative features of the CGLE, their equivalence with a CGLE submitted to some noise cannot be a strict one. Specifically, there are no phase soft modes in the Rössler case (the "gauge invariance" of the CGLE is broken). The approximate correspondence between the (c 1 , c 2 ) coordinates of the Rössler system and the complex field A of the CGLE overlooks the role of the c 3 variable. A rough interface must, at every moment, include points where c 3 experiences a sharp peak (Fig.1b). The effect of such localized structures might well be the cause of peculiar properties of the "effective noise" in a Langevin-like description. Since an initially flat interface probably resists the appearance of such structures, one can imagine a particularly strong rigidity of the interface yielding small widths, and, ultimately, very small values of |λ|. For the diffusive-dispersive coupling case, on the other hand, the equivalent CGLE is expected to be in a phase turbulent regime 6 , which was shown in [9] to be itself well described by the KPZE at large scales. Any additional perturbations, such as those introduced by the c 3 variable, are not expected to alter significantly this picture, in agreement with our findings. Even though finer numerical investigations are needed to resolve the difficulties encountered above, our work once more points at the subtleties involved when one tries to build a Langevin description of chaotic extended systems.
PREVALENCE OF INTESTINAL PARASITIC INFECTIONS IN HIV INFECTED PATIENTS PRESENTING WITH DIARRHEA AND THEIR ASSOCIATION WITH CD4+ COUNTS Introduction: Intestinal parasitic infection is an important problem in HIV-infected patients worldwide. This study was therefore undertaken to establish the prevalence of intestinal parasitic infection among people with and without HIV infection and its association with diarrhoea and CD4 T-cell count; we aimed to measure the prevalence and identify the factors associated with intestinal parasitic infection in people infected with HIV. Methodology: An analytical cross-sectional study was conducted in 1490 HIV-infected people attending for CD4 T-cell counting. Results: The prevalence of intestinal parasitic infection was 22.4% (95% CI 19.5 to 25.5). In univariate analysis, age, sex, longer time since diagnosis of HIV, a CD4 T-cell count of <200/μL, diarrhoea, marital status, and being under tuberculosis (TB) treatment were significantly associated with increased odds of intestinal parasitic infection. However, in the logistic regression model, only a CD4 T-cell count of <200/μL (adjusted OR=6.3, 95% CI 3.75 to 10.5), diarrhoea (adjusted OR=4.2, 95% CI 2.7 to 6.45), and being under TB treatment (adjusted OR=4.35, 95% CI 2.7 to 6.45) remained significant predictors. On stratification, a CD4 T-cell count of <200/μL was independently associated with higher odds of both protozoal and helminth infection. The parasites Cryptosporidium and Cyclospora were observed only in participants with CD4 T-cell counts <200/μL. Conclusions: HIV infection increased the risk of having intestinal parasites and diarrhoea, and immunodeficiency in particular increased the risk of opportunistic parasites. Therefore, raising the immune status of HIV-positive patients and screening at least for treatable intestinal parasites is important. Introduction Intestinal parasites cause major morbidity and mortality throughout the world, particularly in developing countries and in persons with comorbidities (Wiwanitkit, 2006). The intestinal mucosa becomes a site of significant HIV replication and destruction of CD4+ cells (Assefa et al., 2009). Infections of the gastro-intestinal tract play a critical role in HIV pathogenesis, reaching a rate of up to 50% in developed countries and 95% in developing countries (Akinbo et al., 2010). The progressive decline in immunological and mucosal defence mechanisms predisposes HIV-positive individuals to gastro-intestinal infections, thus increasing susceptibility to a number of opportunistic intestinal pathogens. Intestinal parasites are endemic in many regions of the world where HIV is widespread, such as sub-Saharan Africa (Kassu et al., 2007). Factors such as poverty and malnutrition can promote the spread of both infections, and efforts to improve these underlying conditions may improve the situation (Tuli et al., 2010). Ascaris lumbricoides, Trichuris trichiura, hookworms, Hymenolepis nana, and Giardia duodenalis/Giardia intestinalis have been identified as common opportunistic pathogens affecting HIV-infected patients (Kurniawan et al., 2007). Intestinal parasites remain a major cause of diarrhoea and other gastro-intestinal symptoms, with subsequent weight loss.
However their prevalence in HIV-infected patients has dramatically decreased in countries where antiretroviral treatment is widely available (National Center for AIDS and STD Control., 2010). Few studies have addressed the issue of intestinal parasites among HIV-infected persons in India (Alfonso and Monzote, 2011). We studied the prevalence of intestinal parasites in HIV-infected patients, taking into account their CD4+ count status and treatment course (Adamu, and Petros, 2009 ). Methodology Patients who were confirmed as HIV positive cases and whose CD4 count was being evaluated were taken as study subjects. The people intestinal parasitic infected with HIV were enrolled for this study, the national policy for eligibility to start HAART on the basis of CD4 T-cell count was a count of < 200/ μL. The subjects were selected from different hospital of India, while the study wascarried out at the Department of Microbiology, Barkatullah University Bhopal (MP) India. Irrespective of their signs and symptoms of gastrointestinal tract infection, each participant was provided with three standard stool collection containers labeled with the participant's code. Instructions were given for the collection of stool sample. Short questionnaire was maintained which included participant's present medical history: any complaints of diarrhoea, sociodemographic data: age, sex and types of drinking water whether or not on antiretroviral therapy. Stool from adults who were HIV negative were taken as controls. Study design and data collection We studied to determine the prevalence of intestinal parasitic infection in HIV infected patients. A total of 1490 participants were included in this study which took place from June 2010 to June 2013. The study was briefly explained to the participants and they were assured of the confidentiality as well as anonymity of the collected information. An informed verbal consent was obtained from all the volunteers. Participants were requested to collect and submit a stool specimen by themselves. A case was defined as intestinal parasite positive if the stool specimen was positive for at least one of either a pathogenic protozoal or a helminth in microscopic examination. Similarly, a participant was categorized as intestinal parasite negative if the stool specimen on microscopic examination was not positive for pathogenic intestinal parasites. The status of diarrhoea was established by patients self history of enrollment having loose stools three or more times a day. Information about other medical conditions and demographic details was collected from a patient register maintained at Lab. Every fecal sample was examined by three methods. First, a direct wet mount in normal saline Blood samples were analyzed for CD4+ T-lymphocyte cell counts, using a flow cytometer . Briefly, 20 µL of phycoerythrineconjugated monoclonal antibody to human CD4 were gently mixed with 20 µL of whole blood into a test tube and incubated for 15 minutes at room temperature, protected from light. Next 800 µL of no-lyse buffer were added to the mixture. After homogenizing its content, the tube was plugged into the CyFlow Counter for automatic counting (Ibrahim et al., 2007). Statistical analyses CD4+ counts were compared based on the former treatment threshold fix at CD4+ ≤ 200 cells/µL and the current treatment threshold fixed at ≤ 350 cells/µL (Evering et al., 2006 ) . All statistical analyses were conducted using XLSTAT 2012 (Addinsoft SARL, Paris, France, 2012). 
Chi-2 test or Fisher exact test was used to investigate the association among prevalence of intestinal parasites, CD4+ counts, antiretroviral treatment, use of Co-trimoxazole, and symptoms of diarrhoea. Odds ratio was calculated to estimate the risk attributable to different factors with confidence intervals calculated using the Woolf's method. The level of significance was set at p-value = 0.05. Results A total of 1490 subjects were observed for intestinal parasites. About 42% of the participants were included in the study during the rainy season (July-September). regarding 31 % of the case patients had previously been positive tested for Tuberculosis (TB) and were under treatment. More than 80% of the participants were married, 11.7% of the case patients were under first-line HAART (highly active antiretroviral therapy). of the total participants had a CD4 T-cell count of < 200/μL in 43.89%, 25.77% had a CD4 T-cell count of 200-300/μL, and 30.33% had a CD4 T-cell count of >300/μL. The distinctiveness of case patients with intestinal parasitic infection was compared with those not infected and is shown in Table 2. Out of the total of 1490 stool samples analyzed from the same number of subjects, intestinal parasites were detected in 22.4% (95% CI 19.5 to 25.5) (334/1490). Among the total 334 volunteers harboring intestinal parasites, 83.9% (280/334) of the participants had a CD4 T-cell count of < 200/μL, whereas only 5.9% (20/334) of the participants had a CD4 T-cell count of > 300/μL (Table 1). The probability of being infected with an intestinal parasite was extensively higher in participants with a CD4 T-cell count of < 200/μL contrast to case patients with a CD4 T-cell count of > 300 (reference level) (unadjusted OR = 24.32, 95% CI 12.75 to 46.9). likewise, the prevalence of diarrhoea was 33.3% (95% CI, 44.85 to 55.05). The probability of having diarrhoea was considerably higher in case patients with a CD4 T-cell count of < 200/μL compared to case patients with a CD4 count of > 300 (OR = 34.35, 95% CI 19.2 to 61.8). (Table 1). CD4 T-cell count of < 200/μL, age, sex, marital status , diarrhoea, being under TB treatment, and a longer time in weeks in view of the fact that the first diagnosis of HIV status were significantly linked with more risk of intestinal parasitic infection ( Table 2). All of these variables were integrated in concluding backward stepwise logistic regression model to adjust for confounders. nevertheless, in the backward stepwise logistic regression model, only the CD4 T-cell count of < (Table 4). There was no evidence of statistical interface separately associated variables as indicated by the test of homogeneity. Altogether 10 different species of intestinal parasites were detected. Among the intestinal parasites, Trichuris trichuria (21%) was the most frequently detected, followed by Giardia lablia, and Cryptosporidium parvum, respectively. The opportunistic parasites Cryptosporidium parvum and Cyclospora cayetanensis were observed only when the participants had CD4 T-cell counts of < 200/μL. The distribution of different parasites in different categories of CD4 T-cell counts is shown in Table 3. Discussion The prevalence of intestinal parasitic infection and diarrhoea is most prevalent in HIV-infected people presence for CD4 T-cell count. CD4 T-cell count of < 200/µL, diarrhoea, and being under treatment for TB were the independent predictors of intestinal parasitic infection. 
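For illustration, the unadjusted odds ratios reported above can be computed from a 2×2 table with a Woolf (log-based) confidence interval, the approach named in the statistical methods. The sketch below is not the study's own analysis script, and the counts it uses are hypothetical.

```python
# Minimal sketch (assumed, with hypothetical counts): unadjusted odds ratio
# and Woolf 95% confidence interval from a 2x2 table.
import math

def odds_ratio_woolf(a, b, c, d, z=1.96):
    """a, b: exposed with/without outcome; c, d: unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical table: parasite-positive vs -negative by CD4 stratum.
or_, lo, hi = odds_ratio_woolf(a=40, b=60, c=10, d=90)
print(f"OR = {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```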
Lower CD4 T-cell count was associated with increased risk of both protozoal as well as helminthes infection (Evering et al., 2006). Likely, the less CD4 T-cell count was concerned with increased threat of diarrhoea. Cryptosporidium parvum and Cyclospora cayetanensis were the most recurrent opportunistic parasites observed only in case patients with lower CD4 T-cell counts (Faye et al., 2010). We observed a high prevalence of intestinal parasitic infection rather Slightly higher than prevalence of intestinal parasitic infection (30.0%-35.7%) has been reported from HIV-infected individuals from other study (Akinbo et al., 2010). conversely, these studies were of lesser sample size. The prevalence of parasitic infections among HIV subjects ranged from 18.4% to 81.8% in different parts of the world . Such a huge difference in the prevalence of intestinal parasitic infection may be associated with the different levels of endemicity of such parasites. Diarrhoea (33.3%) was frequent among all participants and it was more frequent (80%) in participants with lower CD4 T-cell counts (Prasad et al., 2000). Higher prevalence of diarrhoea in association with lower CD4 T-cell counts has been reported by several studies (Mukhopadhya et al.,2005 ). The interrelationship between diarrhoea, lower CD4 T-cell count, and presence of intestinal parasites is complex and yet to be fully understood. We studied that that lower CD4 T-cell count, presence of diarrhoea, and being under TB treatment as independent predictors of intestinal parasitic infection, with lower CD4 T-cell count being the strongest predictor (Mohandas., 2002). There was a large difference in the unadjusted and adjusted values of odds ratios, indicating the confounding effect of variables included in the logistic regression model; however, there was no interaction among the three independently associated variables. This finding has important implications for improvement in HIV treatment programs. Screening, treatment, and measures for prevention of parasitic infection should be a part of HIV treatment programs for better outcomes in patients (Ramakrishnan et al., 2007). HIV-infected people with lower CD4 T-cell counts are not only at increased risk for protozoal infection but also for helminthes infection (Anand et al.,). This finding contrasts with those of some other studies which have reported an increased risk of being infected with protozoal parasite but not with helminthes parasites (Institutes National de la Statistique 2012) ). In addition, our study did not show any association between the rainy season and risk of parasitic infection, unlike a study from India which showed a higher prevalence in the rainy season (Cello J P and Day L W 2009 ). Trichuris trichuria was the most common parasite followed by Giarida lamblia and Cryptosporidium parvum . The occurrence of Cryptosporidium parvum and Cyclospora cayet anensis only below the CD4 T-cell count of < 200/µl indicates the typical opportunistic nature of these parasites. Other studies have also reported similar findings (Nitya et al., 2012). This is an surveillance study in which HIVinfected people diagnosed with intestinal parasitic infection were evaluated with HIV-infected people diagnosed not to have intestinal parasitic infection. Some HIV-infected people did not submit the stool specimen for analysis; therefore they were not included in the observational study, and we do not know if these people differ systematically from the participants or not. 
We did not collect data on participants' personal hygiene, drinking water, dietary status, or use of antiparasitic medicines, all of which could also influence the results. In addition, we did not record the duration of diarrhoea and were therefore unable to classify it as acute or chronic, even though patients generally reported having had diarrhoea for several weeks. Conclusion Intestinal parasitic infection and diarrhoea are common in HIV-infected people in India. The prevalence of intestinal parasites was higher among HIV-infected individuals with diarrhoea, low CD4 counts, and those who were ART-naive. These findings underline the need for early detection and treatment of intestinal parasites in HIV-infected patients in order to reduce their morbidity, and they deserve close attention from clinical service providers working in ART units. Adherence counselling for ART and health education on environmental and personal hygiene should also be provided to HIV/AIDS patients. In addition, larger studies using different diagnostic techniques and HIV-negative controls are recommended to assess the predisposing factors for intestinal parasitic infection.
Mutation of residue 33 of human equilibrative nucleoside transporters 1 and 2 alters sensitivity to inhibition of transport by dilazep and dipyridamole. Human equilibrative nucleoside transporters (hENT) 1 and 2 differ in that hENT1 is inhibited by nanomolar concentrations of dipyridamole and dilazep, whereas hENT2 is 2 and 3 orders of magnitude less sensitive, respectively. When a yeast expression plasmid containing the hENT1 cDNA was randomly mutated and screened by phenotypic complementation in Saccharomyces cerevisiae to identify mutants with reduced sensitivity to dilazep, clones with a point mutation that converted Met33 to Ile (hENT1-M33I) were obtained. Characterization of the mutant protein in S. cerevisiae and Xenopus laevis oocytes revealed that the mutant had less than one-tenth the sensitivity to dilazep and dipyridamole than wild type hENT1, with no change in nitrobenzylmercaptopurine ribonucleoside (NBMPR) sensitivity or apparent uridine affinity. To determine whether the reciprocal mutation in hENT2 (Ile33 to Met) also altered sensitivity to dilazep and dipyridamole, hENT2-I33M was created by site-directed mutagenesis. Although the resulting mutant (hENT2-I33M) displayed >10-fold higher dilazep and dipyridamole sensitivity and >8-fold higher uridine affinity compared with wild type hENT2, it retained insensitivity to NBMPR. These data established that mutation of residue 33 (Met versus Ile) of hENT1 and hENT2 altered the dilazep and dipyridamole sensitivities in both proteins, suggesting that a common region of inhibitor interaction has been identified. Human equilibrative nucleoside transporters (hENT) 1 and 2 differ in that hENT1 is inhibited by nanomolar concentrations of dipyridamole and dilazep, whereas hENT2 is 2 and 3 orders of magnitude less sensitive, respectively. When a yeast expression plasmid containing the hENT1 cDNA was randomly mutated and screened by phenotypic complementation in Saccharomyces cerevisiae to identify mutants with reduced sensitivity to dilazep, clones with a point mutation that converted Met 33 to Ile (hENT1-M33I) were obtained. Characterization of the mutant protein in S. cerevisiae and Xenopus laevis oocytes revealed that the mutant had less than one-tenth the sensitivity to dilazep and dipyridamole than wild type hENT1, with no change in nitrobenzylmercaptopurine ribonucleoside (NBMPR) sensitivity or apparent uridine affinity. To determine whether the reciprocal mutation in hENT2 (Ile 33 to Met) also altered sensitivity to dilazep and dipyridamole, hENT2-I33M was created by site-directed mutagenesis. Although the resulting mutant (hENT2-I33M) displayed >10-fold higher dilazep and dipyridamole sensitivity and >8-fold higher uridine affinity compared with wild type hENT2, it retained insensitivity to NBMPR. These data established that mutation of residue 33 (Met versus Ile) of hENT1 and hENT2 altered the dilazep and dipyridamole sensitivities in both proteins, suggesting that a common region of inhibitor interaction has been identified. Cellular uptake and release of nucleosides and nucleoside analog drugs is mediated by integral membrane nucleoside transporter proteins (1)(2)(3)(4). These proteins are involved in salvage of extracellular nucleosides for nucleotide biosynthesis in mammalian cells, especially those that lack de novo synthesis pathways such as enterocytes and hemopoietic cells. 
They are critical for the cellular uptake of cytotoxic nucleoside analogs used in the treatment of human hematologic malignancies, solid tumors, and viral diseases (5,6). Nucleoside transporters also affect the cell surface concentration of adenosine, which is a signaling molecule that binds to G protein-coupled cell surface adenosine receptors, affecting physiological processes such as coronary vasodilation, renal vasoconstriction, neuromodulation, platelet aggregation, and lipolysis (7,8). Mammalian nucleoside transporters are classified into two structurally and functionally distinct families: the concentrative nucleoside transporters (CNTs) 1 and the equilibrative nucleoside transporters (ENTs). CNTs mediate Na ϩ -dependent transport against the nucleoside concentration gradient and are found primarily in specialized cells such as intestinal and renal epithelia. Three CNT isoforms, a pyrimidine-nucleoside preferring (CNT1), a purine-nucleoside and uridine preferring (CNT2), and a broadly selective (CNT3) protein, have been identified by molecular cloning from mammalian tissues (9 -14). Mammalian ENTs are responsible for facilitated diffusion of nucleosides across cell membranes and have a broad tissue distribution. Two ENT isoforms have been identified by molecular cloning and functional expression from mammalian tissues and mediate nucleoside transport processes that are functionally distinguished by their differential sensitivity to inhibition by NBMPR (1)(2)(3)(4). NBMPR-sensitive nucleoside transport processes that bind NBMPR with high affinity, (K d ϭ 0.1-1 nM), have been assigned the functional designation es (equilibrative sensitive) and are mediated by ENT1 proteins. NBMPR-insensitive nucleoside transport processes are resistant to inhibition by micromolar concentrations of NBMPR, are functionally designated as ei (equilibrative insensitive), and are mediated by ENT2 proteins. ENTs are pharmacological targets for the coronary vasodilators dilazep, dipyridamole, and draflazine, which have been shown to inhibit transport and NBMPR binding (3,(15)(16)(17). Adenosine interacts with G protein-coupled cell surface receptors of endothelial and smooth muscle cells to induce vasodilation. Transporter-mediated adenosine uptake is the major means by which this interaction is terminated, a mechanism that is blocked by coronary vaso-dilator binding to the human ENT isoforms hENT1 and hENT2 (7). hENT2 shares 50% amino acid identity with hENT1 and is 2 and 3 orders of magnitude less sensitive, respectively, to inhibition by dipyridamole and dilazep than hENT1, whereas both rat isoforms (rENT1 and rENT2) are completely insensitive to these inhibitors (18,19). Human and rat ENT1 and ENT2 proteins share a common membrane architecture, recently confirmed by hydropathy analysis and glycosylation-scanning mutagenesis (20), with 11 transmembrane (TM) segments, a large glycosylated loop between TM segments 1 and 2, and a large intracellular loop between TM segments 6 and 7. In a previous study, chimeric recombinant proteins were created between hENT1 and rENT1 to identify the structural domains of hENT1 that are responsible for interaction with dilazep and dipyridamole (21). The inhibitor sensitivities of the chimeras suggested that TM segments 3-6 contain the major site(s) of interaction with secondary contributions from TM segments 1-2, providing the first insight into the regions of hENT1 that are important for interaction with dilazep and dipyridamole. 
The individual amino acid residues responsible for interaction with dilazep and dipyridamole have not yet been identified. The goal of the current study was to identify amino acid residues involved in dilazep and dipyridamole interaction by using a phenotypic complementation assay to screen a library of randomly mutated yeast expression plasmids containing the hENT1 cDNA (pYPhENT1) for functional thymidine transportcompetent mutants with reduced sensitivity to dilazep. The complementation assay is based on the ability of recombinant hENT1 produced in Saccharomyces cerevisiae to import thymidine under conditions of dTMP starvation, thereby allowing growth, which is inhibited by the addition of dilazep to the assay medium (22)(23)(24). hENT1 cDNAs were isolated from the resulting mutant clones and sequenced, revealing a mutation in codon 33 that converted Met 33 to Ile (M33I). When mutant and wild type recombinant hENT1 were produced in S. cerevisiae and Xenopus laevis oocytes to quantitate dilazep and dipyridamole sensitivity, a significant decrease in sensitivity was observed for the mutated protein. The corresponding residue in hENT2 (Ile 33 ) was therefore converted to a Met by site-directed mutagenesis, and the sensitivity of the resulting mutant to dilazep and dipyridamole was assessed. The results suggested that residue 33 in the first TM segment (Met versus Ile) contributes importantly to the ability of dilazep and dipyridamole to interact with hENT1 and hENT2. The hENT2 open reading frame was amplified by PCR using the primers (restriction sites underlined) 5Ј-XbaIei (5Ј-CCC TCT AGA ATG GCC CGA GGA GAC GCC-3Ј) and 3Ј-KpnIei (5Ј-CCC GGT ACC TCA GAG CAG CGC CTT GAA G-3Ј) and inserted into pYPGE15 to generate pYPhENT2. The hENT2 point mutant resulting in the I33M change in amino acid sequence was generated using megaprimer PCR methodology (29). All reactions were performed using Pwo polymerase (Roche Molecular Biochemicals), and the resulting PCR products were verified by DNA sequencing using an ABI PRISM 310 sequence detection system (PerkinElmer Life Sciences). Random Mutagenesis of pYPhENT1-Double-stranded plasmid DNA (10 g) was precipitated with ethanol/sodium acetate and resuspended in 500 l of freshly prepared hydroxylamine solution (90 mg of NaOH, 350 mg of hydroxylamine HCl, pH ϳ6.5, in 5 ml of H 2 O). The DNA was incubated for 16 h at 37°C, and the reactions were terminated by the addition of 15 l of 4 M NaCl, 50 l of 1 mg/ml bovine serum albumin followed by precipitation with 1 ml of 95% ethanol. The DNA was resuspended in 100 l of TE buffer (10 mM Tris, 1 mM EDTA, pH 8.0) and precipitated again with 15 l of 4.0 M NaCl, 250 l of 95% ethanol. The resuspension-precipitation procedure was repeated three times in total with a final resuspension in 20 l of TE buffer. Phenotypic Complementation and Screening of Mutants-The complementation assay was based on the ability of recombinant hENT1 produced in yeast to salvage exogenously supplied thymidine under conditions of dTMP starvation (23). In brief, KTK cells transformed with pYPhENT1 using a lithium acetate procedure (27) were plated directly onto CMM/GLU plates containing methotrexate (MTX) at 50 g/ml and sulfanilamide (SAA) at 6 mg/ml (CMM/GLU/MTX/SAA). Colonies formed with an efficiency of ϳ10 5 transformants/g of DNA after incubation at 30°C for 3.5 days in the presence of 10 M thymidine, and complementation was prevented when 10 M dilazep was also present. 
Hydroxylamine-treated pYPhENT1 (20 g) was transformed into KTK cells, which were then plated onto CMM/GLU/MTX/SAA with 10 M thymidine and 10 M dilazep and incubated at 30°C for 3.5 days. Colonies with apparent resistance to dilazep inhibition of complementation were isolated, grown in 5 ml of liquid CMM/GLU for 2 days, and restreaked onto CMM/GLU/MTX/SAA plates with 10 M thymidine and 10 M dilazep. The mutant hENT1 cDNAs were amplified from the yeast colonies by PCR, subcloned back into nonmutated pYPGE15, and sequenced. Uridine Transport in S. cerevisiae-The plasmids pYPhENT1, pY-PhENT1-M33I, pYPhENT2, and pYPhENT2-I33M were transformed into fui1::TRP1 yeast, a strain that lacks the endogenous uridine permease FUI1 (25). The transport of [ 3 H]uridine (Moravek Biochemicals, Brea, CA) by logarithmically proliferating yeast was measured as described previously using the "oil stop" method (30,31) with the following modifications. Yeast were grown in CMM/GLU to an A 600 of 0.7-1.5, washed once with fresh medium, and resuspended to an A 600 of 2.0 in fresh medium. All transport assays were performed at room temperature and pH 7.0. 1-ml portions of yeast culture were distributed into 15-ml plastic centrifuge tubes to which 5-10-l portions of stock dilazep, dipyridamole, or NBMPR (Sigma) solution or solvent alone (H 2 O, ethanol, or dimethyl sulfoxide) were added to achieve the desired final concentration. To allow for steady-state equilibration, the yeast were incubated in the presence of inhibitor for 30 min before addition of radiolabeled permeant (32)(33)(34)(35). Transport reactions were initiated by the rapid addition of a small volume of [ 3 H]uridine to a final concentration of 2 M. Transport reactions were terminated at graded time intervals by pipetting triplicate 200-l portions of yeast suspension into 1.5-ml microcentrifuge tubes containing 200 l of transport oil; the tubes were immediately centrifuged at 12,000 ϫ g for 2 min. The supernatants were removed by aspiration, the resulting pellets were solubilized with 5% Triton X-100 for 24 h, and the radioactive content was determined by liquid scintillation counting. Uridine Transport by Recombinant hENT1 and hENT2 in Yeast-Time courses for influx of [ 3 H]uridine were measured into fui1::TRP1, a uridine transport-defective strain of yeast (25), that contained pYPhENT1 or pYPhENT2 to determine incubation times that provided significant signal-to-noise ratios while also maintaining the initial rates of uptake (Fig. 1). The time course for nonmediated uridine influx was obtained by assessing uridine uptake into pYPGE15-containing yeast and yielded a rate of 0.11 Ϯ 0.01 pmol/mg protein/s. Time courses for uridine uptake into pYPhENT1-and pYPhENT2containing yeast for the first 10 s (Fig. 1, inset) gave rates of 1.03 Ϯ 0.40 and 1.63 Ϯ 0.45 pmol/mg protein/s, respectively. Uptake time courses over 40 min were linear for both pY-PhENT1-and pYPhENT2-containing yeast and yielded rates, respectively, of 0.93 Ϯ 0.02 and 1.4 Ϯ 0.02 pmol/mg protein/s. Uptake rates over the first 10 s were not different from the rates calculated from 40-min time courses, indicating that initial rates representing uridine transport were maintained over long periods of time. The extended linear time courses were likely due to efficient substrate "trapping" by conversion of uridine to UMP by uridine kinase, thereby minimizing backflow of [ 3 H]uridine from the small intracellular compartment to the much larger extracellular volume. 
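Initial transport rates of the kind quoted above are typically obtained as the slope of a least-squares line through the early, linear part of an uptake time course, corrected for nonmediated uptake. The sketch below is an assumed version of such an analysis with hypothetical data points; it is not the authors' script.

```python
# Minimal sketch (assumed analysis, hypothetical data): initial uptake rate
# from a short time course, with the nonmediated rate subtracted.
import numpy as np

def initial_rate(t, uptake, background_rate=0.0):
    """t in s, uptake in pmol/mg protein; returns mediated rate in pmol/mg protein/s."""
    slope, _intercept = np.polyfit(t, uptake, 1)
    return slope - background_rate

t = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # seconds
uptake = np.array([2.1, 4.0, 6.3, 8.1, 10.4])     # pmol/mg protein
print(f"mediated rate ~ {initial_rate(t, uptake, background_rate=0.11):.2f} "
      "pmol/mg protein/s")
```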
Uridine transport rates were determined for all subsequent experiments using incubation times of 10 or 20 min. Random Mutagenesis and Screening-MTX and SAA prevent the conversion of dUMP to dTMP by yeast thymidylate synthase and thus cause depletion of intracellular dTMP pools and inhibition of growth (22). KTK yeast producing recombinant hENT1 and H. simplex thymidine kinase can salvage thymidine via transporter-mediated uptake when low concentrations (e.g. 10 M) are present in the growth medium, thereby allowing yeast to circumvent MTX/SAA-imposed growth arrest. Because thymidine salvage can be blocked by the inclusion of 10 M dilazep in the complementation growth medium (23), this inhibition of thymidine rescue was used to screen a hENT1 random mutant library for functional proteins with reduced affinity for dilazep. pYPhENT1 was treated in vitro with the mutagen hydroxylamine, transformed into KTK yeast, and screened for dilazep resistance. Dilazep-resistant yeast colonies were isolated, and the hENT1 cDNA was amplified and subcloned into nonmutated pYPGE15. Twenty-one resistant mutant cDNA clones were sequenced and shown to be identical, with a point mutation in codon 33 that converted Met to Ile. A Comparison of Sequences of Inhibitor-sensitive and -insensitive Mammalian ENTs-Recombinant human and mouse ENT1 proteins are highly sensitive to transport inhibition by dipyridamole, whereas recombinant human and mouse ENT2 proteins are much less sensitive (18,36). For example, the reported IC 50 values for mENT1 and mENT2 produced in X. laevis oocytes were 75 and 2204 nM, respectively, which corresponds to a 29.4-fold difference (36). A transport-deficient cultured cell line stably transfected with recombinant hENT1 or hENT2 exhibited a 70-fold difference between the two proteins in sensitivity to dipyridamole with IC 50 values of 5 and 356 nM, respectively (19). The rat ENT isoforms (rENT1 and rENT2) are completely insensitive to dipyridamole and dilazep transport inhibition when produced in X. laevis oocytes (18). Multiple sequence alignment of the predicted amino acid sequences for the human, mouse, and rat ENT1 and ENT2 proteins revealed that the identity of the amino acid at residue 33 was consistent with the dilazep and dipyridamole sensitivity of the recombinant transporters (Fig. 2). Residue 33 is a Met in human and mouse ENT1, the most inhibitor-sensitive transporters, whereas it is an Ile in rat ENT1 and human, mouse, and rat ENT2 proteins, all of which exhibit transport activity that is insensitive to inhibition by dilazep and dipyridamole (18,28,36,37). The predicted topology model for hENT1 suggests that position 33 is the last residue in the first TM segment and may therefore be solvent-accessible and/or in the plane of the extracellular bilayer/solvent interface (20,28). Effect of Met-Ile Interconversion at Residue 33 of hENT1 and hENT2 on Uridine Transport Inhibition by Dilazep, Dipyridamole, and NBMPR-Uridine transport was measured in fui1::TRP1 yeast containing pYPhENT1 or pYPhENT1-M33I in the presence or absence of a single high concentration of dilazep, dipyridamole, or NBMPR (Fig. 3A). hENT1-mediated uridine transport was inhibited Ն80% by 0.1 M dilazep and 0.3 M dipyridamole, whereas hENT1-M33I was capable of transport at 60% of the maximal rate in the presence of both inhibitors. These results suggested that hENT1-M33I was substantially less sensitive to dilazep and dipyridamole than wild type hENT1. 
In contrast, uridine transport was completely inhibited by 0.1 M NBMPR in yeast with either recombinant protein, suggesting that residue 33 was not involved in the binding of NBMPR. Although hENT2 can be inhibited by high concentrations of dilazep and dipyridamole, it is 2 and 3 orders of magnitude less sensitive, respectively, to these compounds than hENT1 (19). To investigate the role of residue 33 in inhibitor sensitivity of hENT2, Ile 33 was converted to Met using site-directed mutagenesis, and the effects of dilazep, dipyridamole and NBMPR on uridine transport were determined in fui1::TRP1 yeast containing either pYPhENT2 or pYPhENT2-I33M (Fig. 3B). Dilazep (10 M) and dipyridamole (1 M) had no effect on hENT2- (36), and AAB88050 (rENT2) (18). Multiple sequence alignment was performed with DNAMAN version 4.03 software using the BLOSUM 62 substitution matrix. mediated uridine transport, whereas both strongly inhibited hENT2-I33M-mediated transport. In contrast, uridine transport in yeast with either mutant or wild type hENT2 remained insensitive to NBMPR, a result that was consistent with the lack of an effect of the opposite conversion on NBMPR sensitivity of hENT1. These data, together with the data from Fig. 3A, indicated that residue 33 plays a key role in dilazep and dipyridamole inhibition of transport of both hENT1 and hENT2 and is not involved in NBMPR inhibition of transport. Kinetic Properties of Uridine Transport for hENT1, hENT1-M33I, hENT2, and hENT2-I33M-The effect of mutating residue 33 (Met versus Ile) of hENT1 and hENT2 on the kinetics of uridine transport was assessed by determining the concentration dependence of initial rates of uridine uptake (Table I). hENT1 and hENT1-M33I showed similar kinetic parameters for uridine transport with K m values of 110 Ϯ 12 and 110 Ϯ 28 M, respectively, and V max values of 5893 Ϯ 1399 and 5215 Ϯ 562 pmol/mg protein/min, respectively, suggesting that uridine interaction with hENT1 was unaffected by the mutation. In contrast, K m values were 729 Ϯ 53 and 87.2 Ϯ 13.8 M, respectively, for hENT2 and hENT2-I33M, indicating an 8.4-fold increase in the apparent affinity for uridine. V max values of 8370 Ϯ 1091 and 6555 Ϯ 1616 pmol/mg protein/min were obtained, respectively, for wild type and mutant hENT2. The V max values for the mutant and wild type hENT1 and hENT2 proteins were not significantly different (p Ͼ 0.05) based on an unpaired two-tailed t test, suggesting that expression of the recombinant proteins in yeast was not affected by mutation of residue 33. The V max :K m ratios for mutant and wild type hENT1 were similar (47 and 53 pmol/mg protein/min/M, respectively), whereas the ratios for mutant hENT2 were much larger than those for wild type hENT2 (75 and 12 pmol/mg protein/min/M, respectively). Concentration-Effect Relationships for Dilazep, Dipyridamole, and NBMPR-The relative changes in inhibitor sensitivities of mutant and wild type hENT1 and hENT2 were determined by assessing the concentration dependence of uridine transport inhibition for the recombinant proteins produced in fui1::TRP1 yeast. The yeast were incubated with graded concentrations of inhibitors and then assayed for [ 3 H]uridine transport (Fig. 4). The Hill coefficients determined from these relationships were not significantly different from unity based on a t test against the theoretical value of 1.00 resulting in p Ͼ 0.05, which was consistent with (i) the presence of a single class of binding sites and (ii) the findings of previous studies (21,23). 
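The kinetic and inhibition parameters discussed above and below can be obtained by simple nonlinear fits. The following sketch (an assumed workflow with hypothetical data, not the GraphPad analysis used in the paper) fits K_m and V_max to a saturation curve, fits an IC50 and Hill coefficient to a concentration-effect curve, and converts the IC50 into an apparent K_i with the Cheng-Prusoff relation K_i = IC50/(1 + [S]/K_m) for strictly competitive inhibition, as used for Table II.

```python
# Minimal sketch (assumed workflow, hypothetical data): Km/Vmax, IC50/Hill, and Ki.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

def hill_inhibition(i, ic50, n):
    return 100.0 / (1.0 + (i / ic50) ** n)        # percent of uninhibited transport

def cheng_prusoff(ic50, s, km):
    return ic50 / (1.0 + s / km)                  # competitive inhibition only

# (i) hypothetical uridine saturation data: concentration (uM) vs initial rate
conc = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)
rate = np.array([480, 1290, 2850, 4310, 5280, 5660], dtype=float)   # pmol/mg/min
(vmax, km), _ = curve_fit(michaelis_menten, conc, rate, p0=(6000.0, 100.0))

# (ii) hypothetical inhibitor concentration-effect data at 2 uM uridine
inhibitor = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)   # nM
percent = np.array([97, 91, 72, 43, 18, 7, 3], dtype=float)         # % of control
(ic50, n_hill), _ = curve_fit(hill_inhibition, inhibitor, percent, p0=(20.0, 1.0))

# (iii) apparent Ki assuming reversible, strictly competitive inhibition
ki = cheng_prusoff(ic50, s=2.0, km=km)            # uridine at 2 uM, Km in uM
print(f"Km ~ {km:.0f} uM, Vmax ~ {vmax:.0f} pmol/mg protein/min")
print(f"IC50 ~ {ic50:.0f} nM (Hill n ~ {n_hill:.2f}), apparent Ki ~ {ki:.0f} nM")
```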
The IC 50 values obtained from the data of Fig. 4 and the kinetic constants of Table I were used to compute apparent K i values, assuming that dilazep, dipyridamole, and NBMPR inhibit uridine transport in a reversible and strictly competitive manner at the concentration equal to the IC 50 value (Table II) (17, 33, 38 -40). The transport of uridine by wild type hENT1 was potently inhibited by dilazep (K i , 18.7 Ϯ 2.0 nM), whereas hENT1-M33I-mediated transport was an order of magnitude less sensitive to dilazep inhibition (K i , 195 Ϯ 51 nM). In contrast, hENT2-I33M was 46-fold more sensitive to dilazep inhibition than wild type hENT2 with K i values of 2.91 Ϯ 0.79 and 134 Ϯ 40 M, respectively. Thus, the mutations at residue 33 decreased the differences in dilazep sensitivity between hENT1 and hENT2. The mutant proteins displayed a 15-fold difference (hENT1-M33I Ͼ hENT2-I33M), whereas the wild type proteins displayed a 7000-fold difference (hENT1 Ͼ hENT2) in sensitivity to inhibition by dilazep. For both hENT1 and hENT2, the relative differences between the mutant and wild type proteins in sensitivity to dipyridamole were similar to those observed for dilazep (Table II). K i values of 47.9 Ϯ 8.9 and 528 Ϯ 165 nM were obtained for dipyridamole inhibition of transport for wild type and mutant hENT1, respectively, translating into an 11-fold decrease in sensitivity. The dipyridamole sensitivities of hENT2 (K i , 6230 Ϯ 900 nM) and hENT2-I33M (K i , 461 Ϯ 74 nM) differed by 13.5-fold. Wild type hENT2 was 128-fold less sensitive to dipyridamole than hENT1, which is consistent with the results of previous studies (19), whereas the mutant proteins displayed approximately equal sensitivities to dipyridamole. The results of Fig. 3 suggested that mutant and wild type hENT1 were highly sensitive to NBMPR because complete inhibition of transport was observed for both at 0.1 M NBMPR. In the experiments of 3. Inhibition of uridine transport mediated by recombinant hENT1, hENT1-M33I, hENT2, and hENT2-I33M 1.08 and 3.34 Ϯ 0.97 nM were obtained for hENT1 and hENT1-M33I, respectively, demonstrating that both were potently inhibited by NBMPR, with no statistically significant difference in K i values. The NBMPR sensitivities of hENT2 and hENT2-I33M were not determined because the experiments of Fig. 3B had established that neither protein was inhibited by NBMPR. In a previous study (21), recombinant chimeric proteins were constructed by domain substitutions between hENT1, which is sensitive to inhibition by dilazep and dipyridamole, and its rat isoform, rENT1, which is insensitive to both compounds and functionally characterized in X. laevis oocytes. The results suggested that TM segments 1-6 of hENT1 are required for interaction with dilazep and dipyridamole, with TM segments 3-6 being the major site of interaction and TM segments 1-2 making a secondary contribution. Because residue 33 is predicted to be the last residue in TM segment 1, recombinant hENT1-M33I was produced in X. laevis oocytes (Fig. 5) to assess the functional characteristics of the mutated protein in the same recombinant expression system as the chimera study. When oocytes producing mutant and wild type hENT1 were assayed for uridine uptake in the presence of graded concentrations of dipyridamole, IC 50 values were 3640 Ϯ 1410 and 300 Ϯ 79 nM, respectively, corresponding to a 12.1-fold lower sensitivity for the mutant protein. 
This relative decrease in sensitivity was similar to that observed when the recombinant proteins were produced in yeast. DISCUSSION The results of molecular cloning and functional expression studies on recombinant ENTs are consistent with the findings of studies on es-and ei-type transport processes in cultured cell lines and erythrocytes. The human and mouse es-type transporters, which correspond to the hENT1 and mENT1 proteins, are highly sensitive to dilazep and dipyridamole (3,16,41,42). In contrast, rat es and human, mouse, and rat ei transporters are relatively insensitive to transport inhibition by dilazep and dipyridamole, and these observed effects have been correlated with the transport-inhibition phenotypes of recombinant rENT1, hENT2, mENT2, and rENT2 (3,41,42). The current study provides evidence that mutation of residue 33 of the hENT1 and hENT2 proteins affects interaction with dilazep and dipyridamole significantly. The identity of this residue (Met versus Ile) corresponds with the relative dilazep and dipyridamole sensitivities of the known mammalian ENTs, being a Met in human and mouse ENT1 and an Ile in rat ENT1 and human, mouse, and rat ENT2 proteins (Fig. 2) (18,19,21,28,36,37). Mutation of Met 33 to Ile in hENT1 decreased the sensitivity of uridine transport to inhibition by dilazep and dipyridamole (as seen by the Ͼ10-fold increase in K i values) but did not alter the affinity for uridine (similar K m values) or the sensitivity to inhibition of uridine transport by NBMPR (similar K i values). In contrast, the sensitivity of hENT2 to dilazep and dipyridamole was increased Ͼ10-fold when Ile 33 was converted to Met, the affinity for uridine was increased 8.4-fold, and NBMPR sensitivity was not affected. These results, which implicated residue 33 in uridine interaction with hENT2 but not hENT1, suggested a difference in the permeant binding pockets of the two proteins. hENT1 and hENT2 are known to have different permeant binding properties because hENT2 is capable of transporting nucleobases and antiviral dideoxynucleoside analogs, whereas hENT1 is not (43,44). The apparent K m value for uridine transport obtained for recombinant hENT1 in yeast (Table I) laevis oocytes) and for the native protein in human erythrocytes (19,28,45). The basis for this discrepancy is uncertain but may have been due to the human protein being inserted into the yeast plasma membrane environment and/or an altered state of glycosylation, resulting in subtle changes in the conformation of the uridine-binding pocket. Previous work in which chimeric recombinant proteins were created by substituting domains between inhibitor-sensitive hENT1 and inhibitor-insensitive rENT1 suggested that the region including residues 100 -231 (which includes TM seg- Table II were calculated using the equation of Cheng and Prusoff (38) with the experimentally determined IC 50 values for each inhibitor and the uridine K m values for each recombinant protein (Table I). ments 3-6) is the major site of interaction with dilazep and dipyridamole and that residues 1-99 (TM segments 1-2) play a secondary role (21). TM segments 3-6 were also implicated in the interaction of rENT1 with NBMPR (46). The chimera studies demonstrated that the N-terminal half of hENT1 is critical for interaction with the inhibitors. In this work, when recombinant hENT1-M33I was characterized in the same expression system (X. 
laevis oocytes) as was utilized in the chimera study, the relative effect of the mutation on dipyridamole sensitivity was comparable with that observed in yeast. These oocyte results confirmed participation of Met33, which is predicted to be the last residue in TM segment 1, in binding of dilazep and dipyridamole. That the M33I mutation reduced but did not abolish inhibitor sensitivity in hENT1 (compared with rENT1 and rENT2, which are totally resistant to inhibition) suggests that binding of dipyridamole and dilazep is likely to be complex, involving contributions from several amino acid residues from different regions of hENT1. The results of equilibrium binding studies in cells with the es transport process, for which ENT1 proteins are believed to be responsible, have led to the conclusion that dilazep and dipyridamole are competitive inhibitors for a single or overlapping exofacial NBMPR and permeant binding site (17,34,39,40,47). However, results from other studies have suggested that dilazep and dipyridamole display characteristics of allosteric ligands when present at high concentrations (33,35,39,48). A unifying model that has been suggested for permeant and inhibitor binding to hENT1 describes two binding sites in which permeants, NBMPR, and other inhibitors such as dilazep and dipyridamole compete for a single high affinity site, which is subject to allosteric modulation by a distinct broad-specificity low affinity site that binds nucleosides, nucleobases, and inhibitors when present at very high concentrations (3). The contribution of the potential allosteric binding site of hENT1 was likely to be negligible in the experiments of the current study because the Hill coefficients indicated the presence of a single class of binding sites. These results suggested that mutation of residue 33 affected dilazep and dipyridamole binding to the competitive binding site. The current study established that residue 33 of hENT1 and hENT2 is important for dilazep and dipyridamole interaction. It is not clear whether residue 33 of hENT1 and hENT2 is directly involved in permeant or inhibitor binding or whether the effects observed when it was mutated were due to changes in the tertiary structure of these proteins. The alternatives are difficult to resolve in the absence of detailed structural data. Future studies will include using different random mutagenesis and screening approaches to identify other residues that may be important for interaction with nucleoside transport inhibitors. Fig. 5. The concentration dependence of inhibition of recombinant hENT1 and hENT1-M33I by dipyridamole in X. laevis oocytes. Initial rates of [14C]uridine uptake were determined in the presence of graded concentrations of dipyridamole and were corrected for endogenous uridine transport activity by subtracting uptake values obtained in water-injected oocytes. The oocytes were pretreated with dipyridamole for 1 h to allow for complete binding-site equilibration. Uridine transport rates (mean ± S.E., n = 10-12) in the presence of inhibitor are represented as percentages of the rates observed in the absence of inhibitor (control), and the S.E. values are not presented where the size of the point is larger than the S.E. Mean values (± S.E.) for control uridine transport rates were 2.13 ± 0.10 and 2.15 ± 0.11 pmol/oocyte/5 min, respectively, for hENT1 and hENT1-M33I.
IC50 values were determined using GraphPad Prism version 3.0 software by nonlinear regression analysis and were 300 ± 79 and 3640 ± 1400 nM, respectively, for hENT1 and hENT1-M33I.
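The IC50 values and Hill coefficients referred to above were obtained by nonlinear regression of dose-response data (GraphPad Prism in the original work). The sketch below shows how an equivalent fit could be performed with SciPy; the data points, starting values, and function name are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, hill):
    """Fractional transport activity (% of control) remaining at inhibitor concentration `conc` (nM)."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical % control uptake measured at graded dipyridamole concentrations (nM).
conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)
uptake = np.array([98, 95, 88, 72, 48, 25, 10, 4], dtype=float)

# Fit IC50 and Hill coefficient; p0 gives rough starting guesses.
(ic50_fit, hill_fit), cov = curve_fit(dose_response, conc, uptake, p0=(100.0, 1.0))
ic50_err, hill_err = np.sqrt(np.diag(cov))

print(f"IC50 = {ic50_fit:.0f} +/- {ic50_err:.0f} nM, Hill = {hill_fit:.2f} +/- {hill_err:.2f}")
```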
2018-04-03T02:59:09.313Z
2002-01-04T00:00:00.000
{ "year": 2002, "sha1": "d8a20e2f4a853f048f0d2490598bc3cf3864aaba", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/277/1/395.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "9942bdc83c8ca0b5af667c260be5b4914973e3bb", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
233329251
pes2o/s2orc
v3-fos-license
Deep Learning Classification of Cheatgrass Invasion in the Western United States Using Biophysical and Remote Sensing Data: Cheatgrass (Bromus tectorum) invasion is driving an emerging cycle of increased fire frequency and irreversible loss of wildlife habitat in the western US. Yet, detailed spatial information about its occurrence is still lacking for much of its presumably invaded range. Deep learning (DL) has demonstrated success for remote sensing applications but is less tested on more challenging tasks like identifying biological invasions using sub-pixel phenomena. We compare two DL architectures and the more conventional Random Forest and Logistic Regression methods to improve upon a previous effort to map cheatgrass occurrence at >2% canopy cover. High-dimensional sets of biophysical, MODIS, and Landsat-7 ETM+ predictor variables are also compared to evaluate different multi-modal data strategies. All model configurations improved results relative to the case study and accuracy generally improved by combining data from both sensors with biophysical data. Cheatgrass occurrence is mapped at 30 m ground sample distance (GSD) with an estimated 78.1% accuracy, compared to 250-m GSD and 71% map accuracy in the case study. Furthermore, DL is shown to be competitive with well-established machine learning methods in a limited data regime, suggesting it can be an effective tool for mapping biological invasions and more broadly for multi-modal remote sensing applications. Introduction Cheatgrass (Bromus tectorum) was unintentionally introduced into North America in the late 19th century from Eurasia and is now found in every state in the contiguous US [1]. In the western US, it has become a dominant component in many shrubland and grassland ecosystems [2,3], resulting in an increase in fine fuels that can lead to a cycle of increased fire frequency, fire severity, and irreversible loss of native vegetation and wildlife habitat [4][5][6]. Detailed spatial information on the presence and abundance of cheatgrass is needed to better understand factors affecting its spread, assess fire risk, and be able to identify and prioritize areas for invasion treatment and fuels management. However, such information is lacking for much of the ostensibly invaded area in the western US, with exceptions for parts of the Great Basin ecoregion [7][8][9][10][11][12][13]. Cheatgrass invasion has been especially devastating in sagebrush (Artemisia spp.) ecosystems, which are home to a variety of sagebrush-obligate species such as greater sage-grouse (Centrocercus urophasianus). Sage-grouse historically occurred throughout a vast (~125,000 km²) area of the western US and portions of southern Alberta and British Columbia, Canada, but now occupies approximately half that area [14]. Remote sensing approaches to mapping cheatgrass for such a large area represent a significant challenge due to diverse environmental conditions and difficulties obtaining enough ground-truth data to train predictive models. Downs et al. [15], which we revisit later in this section, mapped cheatgrass for the sage-grouse range with moderate success (71% accuracy). To establish context for the current work, we broadly review the various approaches previously taken to map cheatgrass.
Remote sensing approaches to mapping cheatgrass distribution, percent cover, and dynamics (e.g., die-off, potential habitat, phenological metrics) generally fall into three categories: those focusing on spectral signatures or phenological indicators in overhead imagery [10,12,13,[16][17][18][19][20][21]; those based on modeling the ecological niche of cheatgrass using known ranges of biophysical conditions where cheatgrass is known to occur [22,23]; and those combining elements of those two approaches [7,8,11,15,24,25]. Much attention has been given to deriving phenological indicators of cheatgrass presence from spectral indices, such as the Normalized Difference Vegetation Index (NDVI), because its life cycle differs from many of the native plant species in its North American range. Cheatgrass is a winter annual that may begin growth in the late fall and senesce in late spring, whereas many native dryland ecosystem plants begin growing in mid to late spring and continue growth through summer under favorable precipitation conditions [2,26]. Thus, cheatgrass can be identified indirectly by assessing pixel-level chronologies of NDVI [7,10,12,13,19]. Phenological differences between cheatgrass and non-target vegetation can be difficult to detect in drier or cooler years, which is why some have focused on using imagery from years when cheatgrass is more likely to show an amplified NDVI response to above-normal winter or early spring precipitation [10,12]. Furthermore, the strength of this response varies among different landscapes because the timing of cheatgrass growth and senescence varies across ecological gradients such as elevation [12], soils [27], and climatic conditions [10,12,28]. The separability of cheatgrass from other vegetation in overhead imagery (either by phenology or spectral characteristics) is also affected by its relative abundance within a pixel. Some have elected to use sensor platforms with a high revisit rate and coarse spatial resolution, such as MODIS [7,8,21,24,29] or AVHRR [10], which offer better potential for capturing within-season variation of cheatgrass growth but contain more spectral heterogeneity due to their coarse spatial resolution. Others have chosen platforms such as Landsat-7 or -8 in favor of their finer spatial resolution, but at the expense of less-frequent return cycles and risk of missing peak NDVI [10,12,13,[17][18][19]21]. Some have used multiple sensors to make independent predictions of cheatgrass at different geographic scales (e.g., [10]) or temporal periods (e.g., [12,21]). Recently, Harmonized Landsat and Sentinel-2 (HLS) data [30] were used to map invasive annual exotic grass percent cover, the dominant component of which the authors assumed to be cheatgrass [31]. To our knowledge, combining concurrent data from multiple sensors with complementary return cycles, spatial resolution, and spectral information to map cheatgrass has not been attempted. We revisit Downs et al. [15], which became an important source of data and motivation for this study. Their approach utilized cheatgrass observations compiled from unrelated field campaigns throughout the western US to train a Generalized Additive Model that considered both remotely sensed phenological data (NDVI) and a broad suite of biophysical factors such as soil moisture-temperature regimes, vegetation type, potential relative radiation, growing degree days, and climatic factors (see Section 2 for data descriptions).
However, 37 of 48 potentially useful biophysical factors were excluded from their model due to high correlation. While their model achieved reasonable (71%) test accuracy, they expressed uncertainty about map accuracy east of the Continental Divide due to substantially fewer observations in that region. We hypothesize that using a larger volume of remote sensing data and more robust machine learning approaches, including those in the deep learning domain, might benefit the problem. Deep learning (DL) algorithms have received broad attention for environmental remote sensing applications such as land cover mapping, environmental parameter retrieval, data fusion and downscaling [32], as well as other remote sensing tasks such as image preprocessing, classification, target recognition, and scene understanding [33][34][35][36]. Reasons for the rise in popularity include well-demonstrated improvements in performance, ability to derive highly discriminative features from complex data, scalability to a diverse range of Big Earth Data applications, and improved accessibility to the broader scientific community [32,[35][36][37]. DL is also seen as a potentially powerful tool for extracting information more effectively from the rapidly increasing volumes of heterogeneous Earth Observation (EO) data [38][39][40]. Despite the advances with DL in remote sensing, its application to the field remains challenging due in part to a comparative lack of volume and diversity of labeled data as seen in other domains, and limited transferability of pre-trained models to remote sensing applications [34,37]. Zhang et al. [35] propose four key research topics for DL in remote sensing that remain largely unanswered: (1) maintaining the learning performance of DL methods with fewer adequate training samples; (2) dealing with the greater complexity of information and structure of remote sensing images; (3) transferring feature detectors learned by deep networks from one dataset to another; and (4) determining the proper depth of DL models for a given dataset. The availability of training data is a relevant concern in this study as the number of field observations is less than what many DL practitioners prefer and is typically seen in the literature. This concern applies broadly to the use of DL in spatial modeling of native and nonnative species, where field data are often time-consuming and expensive to collect or may not be readily accessible from other sources [12]. Our goal is to derive more discriminative, higher-resolution models of cheatgrass occurrence using Downs et al. [15] as a starting point. We expand from there by using DL and more traditional machine learning approaches to combine concurrent time series of Landsat-7 ETM+ and MODIS data. Our first objective is to compare the performance of all model types and configurations to identify a single high-performing model configuration. The second objective is to construct a consensus-based ensemble of the preferred model to generate a 30-m ground sample distance (GSD) map of cheatgrass occurrence for the historic range of sage-grouse. The results of this study are intended to provide more detailed information than previously available on the extent of cheatgrass invasion in the western US to support multiple land management agencies' efforts to mitigate impacts of cheatgrass invasion and facilitate further scientific investigation of factors affecting its spread. 
Datasets Three categories of data are used in this study: field observations of cheatgrass cover, time series earth observation satellite data, and biophysical spatial data. We utilize the same labeled field observations and 48 biophysical factors as Downs et al. [15], with the addition of a larger volume of multi-modal satellite data and three ancillary variables. Details about these data and their preparation are described in the following sections. Field Observations Downs et al. [15] compiled over 24,000 field vegetation measurements in the historic range of sage-grouse that were collected on multiple unrelated field campaigns between 2001 and 2014 (Figure 1). Of these observations, 6418 are deemed useful based on geographic accuracy and overlap with the study area, completeness, and rigor of collection methods. This was further reduced to 5973 after removing observations that had incomplete satellite data and were less than 60 m apart (i.e., the distance of at least two pixels). Nearest-neighbor spacing of field observations ranged from 61 to 79,421 m with a mean distance of 1837 m. Most of the excluded observations are from the U.S. Department of Agriculture (USDA) Forest Inventory and Analysis program, which does not provide the true geographic location in the publicly available version of its data. All field data were collected from transects ranging from 25 m to 100 m in length using point intercept or standardized plot frame techniques. Satellite Imagery Time series spectral data from both annual and seasonal composite MODIS Terra [41] and Landsat-7 ETM+ satellites [42] are used in this study (Table 1). Both sets of imagery are composited on a pixel-wise basis to reduce residual cloud and aerosol contamination and reflect the time of peak vegetation vigor as determined by maximum NDVI within the composite period. We use seasonal composite data for the approximate Northern Hemisphere spring (1 March-31 May) and summer (1 June-31 August) periods, which correspond to periods of peak productivity (spring) and senescence (summer) in the life cycle of cheatgrass. The Landsat-7 annual and seasonal composite data span most (2003-2012) of the period of selected field observations. We initially used MODIS data for the same period as Landsat-7, but later added annual and seasonal composite data for years 2001-2002 and 2015-2016 because this was found to improve results. In summary, we compiled annual and seasonal composite satellite data corresponding to all years that field observations were collected, except for 2001-2002 and 2014, for which we did not have Landsat-7 data. It is appropriate to use different time phases of satellite data in our models because these data are treated either as independent non-sequential variables or as independent time series in our models. Furthermore, the full temporal stack of satellite data at each sample location is considered in the analysis, not just the year corresponding to when the sample was collected (see Section 2.2). Google Earth Engine [43] is used to create annual and seasonal maximum NDVI composites from MODIS 16-day composite data and resample them to the spatial resolution of Landsat-7 (30 m GSD). For each MODIS and Landsat-7 composite time series, we also derive a delta-NDVI grid for each year that represents the pixel-wise difference between NDVI and the long-term median NDVI based on all data in the time series (a minimal sketch of this computation is given below).
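To make the compositing and delta-NDVI steps concrete, the following is a minimal NumPy sketch of how a per-pixel maximum-NDVI composite and its departure from the long-term median could be computed from a stack of red and near-infrared reflectance grids. The array names, shapes, and random stand-in data are illustrative only and do not represent the Google Earth Engine implementation used in the study.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index for reflectance arrays."""
    return (nir - red) / (nir + red + eps)

# Hypothetical stack: (n_scenes, rows, cols) reflectances for one composite period.
rng = np.random.default_rng(0)
nir_stack = rng.uniform(0.2, 0.5, size=(8, 100, 100))
red_stack = rng.uniform(0.05, 0.2, size=(8, 100, 100))

# Pixel-wise maximum-NDVI composite for the period (one value per pixel).
ndvi_stack = ndvi(nir_stack, red_stack)
max_ndvi_composite = ndvi_stack.max(axis=0)

# Given composites for many years, delta-NDVI is the departure from the long-term median.
yearly_composites = np.stack([max_ndvi_composite + rng.normal(0, 0.02, (100, 100))
                              for _ in range(10)])        # stand-in for 10 years of composites
long_term_median = np.median(yearly_composites, axis=0)
delta_ndvi = yearly_composites - long_term_median          # one delta-NDVI grid per year
```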
This metric is similar to that used by Bradley and Mustard [10], who found the delta-NDVI between dry and wet years to be a useful indicator of cheatgrass presence, as invaded areas tend to exhibit greater inter-annual NDVI variability than native vegetation. Biophysical and Ancillary Data We include the six types of biophysical spatial data used by Downs et al. [15] as well as the ancillary datasets Level-III ecoregions [44], elevation, and gridded latitude and longitude (Table 2). The first biophysical dataset, soil moisture-temperature regimes, is considered because these regimes are known to influence ecosystem resilience and resistance to invasive grasses, including cheatgrass [27,45,46]. These data were derived by Chambers et al. [47] from the SSURGO and STATSGO2 national soils databases and, where necessary, we replicated their method to expand its geographic extent to provide complete coverage for our study area. The second category of biophysical data used is vegetation data derived from the national-scale LANDFIRE Existing Vegetation Type dataset [48]. LANDFIRE vegetation types were generalized into broader plant community associations that are more appropriate for the scale of our analysis and unlikely to change over the time phase of our field observations and satellite data. Potential relative radiation (PRR) is a unitless index of available solar radiation for photosynthetic activity that is based on solar geometry and terrain and calculated for a specified seasonal period [49]; thus, it is assumed static across years and the time phase of our satellite imagery. PRR is calculated for the same approximate growing season of cheatgrass used in the previous study (i.e., 1 October-30 June). Growing degree days (GDD) is an index that represents the relative amount of time during a specified period that temperatures are above a given threshold considered suitable for growth of the target species [50]. GDD is calculated using the same data and parameters as Downs et al. [15]; i.e., 1-km gridded DAYMET daily minimum and maximum temperature data [51] for 1 October 2014 to 30 April 2015, and a minimum temperature threshold of 0 °C (32 °F). While GDD varies across years due to interannual climate variation, we use a single year to be consistent with the previous study, which noted that its authors were primarily interested in representing general geographic patterns of GDD. The largest, and final, category of biophysical data is a collection of 4-km gridded climatic datasets depicting monthly and annual 30-year (1981-2010) norms for minimum and maximum temperature, and precipitation [52,53] (Table 2). From these data we also derive five seasonal climatic datasets that correspond to important periods during the growing season of cheatgrass: cumulative winter (December-February) precipitation, cumulative spring (April-May) precipitation, cumulative summer (July-August) precipitation, and winter (November-February) minimum and maximum temperature. Seasonal groupings were determined based on expert knowledge and exploratory analysis of cheatgrass occurrence from field data and climatic variables. Variable Selection We select four combinations of predictor variables that represent two generic approaches for mapping cheatgrass. The first approach is based on biophysical factors that affect ecological niche and is identical to that used by Downs et al. [15], with the addition of ecoregions and gridded latitude/longitude.
The second approach combines ecological niche factors and spectral-spatial remote sensing. The purpose of comparing model performance with these sets is twofold: to evaluate potential gains and losses in classification accuracy by combining ecological and spectral data, and to evaluate gains and losses in classification accuracy by combining data from multiple sensors. For defining variable sets, let x be the complete set of predictor variables for a location in the study area, and let D(x) be a function that selects a defined set of variables from x. The four sets of variables tested are further described in Table 3. All continuous predictor variables are standardized by subtracting their respective mean and dividing by the standard deviation. Analysis Methods Four machine learning methods for predicting cheatgrass occurrence are compared: Random Forest (RF), Logistic Regression (LR), Deep Neural Networks (DNNs), and Joint Recurrent Neural Networks (JRNNs). The classification objective for all models is two classes of cheatgrass occurrence above and below a strong distribution break at 2% cheatgrass canopy cover (Figure 2). This break is selected because it provides a good balance of sample sizes in each class and allows us to avoid the assumption of true absence for sites where cheatgrass was not observed. The LR, DNN, and JRNN models are implemented in Python using the Tensorflow library (version 1.2.1), and RF models are implemented in Python using the Scikit-Learn library (version 0.18.2). Sampling Model training and selection are performed using k-fold cross-validation with 90% of the dataset. The other 10% of the data is randomly withdrawn for independent verification. As the data is assimilated from multiple field campaigns that are unevenly distributed throughout time and the study area, there is potential for spatial and temporal autocorrelation. To reduce these effects, samples are randomly shuffled and cross-validation splits are stratified using equal joint distributions based on ecoregion and generalized land cover. Spatial and temporal autocorrelation are further mitigated by ensembling k-fold models as described in Section 2.3. (Figure 2: the red line illustrates a strong break at 2% canopy cover and is used to define cover classes for this study.) Categorical Variables Three categorical variables are included in the models: soil temperature and moisture regime, generalized vegetation cover type, and EPA Level III ecoregions (Table 2). As the other inputs to our algorithms are real-valued vectors, we employ two simple strategies for incorporating vector representations of the categorical variables: one-hot vectors in RF models, and embedding vectors for LR, DNN, and JRNN models. One-hot vectors are necessary for incorporating categorical data in RF classifiers. This method can become a limitation when the cardinality of categorical variables is high, causing the dimensionality of the transformed vector to become unmanageable. However, this limitation does not exist in our case because cardinality is low for our categorical variables (i.e., 7 soil regimes, 15 generalized vegetation types, and 23 ecoregions). For a categorical variable with c possible classes, we define for each class a one-hot vector of length c containing a 1 in the position corresponding to that class and 0 elsewhere. One such vector is defined for each of the three categorical values at a location. Embedding vectors are a parametrized vector representation of categorical values. The values of embedding vectors are learned jointly with the other parameters of the LR, DNN, and JRNN models (both encodings are sketched below).
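As a minimal illustration of the two categorical encodings just described, the sketch below builds a one-hot vector (as used for the RF models) and a learned embedding lookup (as used for the LR, DNN, and JRNN models). The class count matches the 23 ecoregions reported above, but the variable names, embedding size, and the use of the modern Keras API (rather than the TensorFlow 1.2.1 code of the original study) are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

# Example categorical variable: EPA Level III ecoregion with 23 possible classes.
n_ecoregions = 23
ecoregion_id = 5                                   # hypothetical class index for one sample

# One-hot encoding (used for the Random Forest models).
one_hot = np.zeros(n_ecoregions, dtype=np.float32)
one_hot[ecoregion_id] = 1.0

# Embedding encoding (used for LR/DNN/JRNN): a q x 23 matrix learned with the model.
q = 4                                              # assumed embedding size
embedding = tf.keras.layers.Embedding(input_dim=n_ecoregions, output_dim=q)
embedded_vector = embedding(tf.constant([ecoregion_id]))   # shape (1, q), trainable with the model
```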
We define three embedding matrices for the categorical variables, each of the form W ∈ ℝ^(q×c), where q is the size of the embedding vector and the second dimension c corresponds to the number of classes for the given categorical variable (7, 15, and 23 as above). The embedding vectors associated with the categorical values for a location are the corresponding columns of these matrices. Random Forest The RF method has been shown to perform well for predictive vegetation mapping, is resilient to overfitting, and provides competitive results compared to other machine learning approaches in low-resource data regimes [54][55][56][57]. To optimize performance of RF models, we performed a search over 200 random RF hyperparameter configurations. Key hyperparameters that we sampled (and their range of values) include: (1) sampling method (with and without replacement); (2) criterion for splitting nodes (GINI index); (3) maximum number of features (square root of the total number of features); (4) minimum number of samples in a leaf node (1, 2, 4); (5) minimum number of samples in a split node (2, 5, 10); (6) maximum depth of a tree (10, 110); and (7) the number of decision trees in the forest (10-200). Logistic Regression We include LR as a baseline model in our study because it is used as the classification function in the DNN and JRNN models. With n as the number of continuous variables in the data subset and q the size of an embedding vector, the input for a given sample is a real-valued vector composed of the standardized values of the continuous predictor variables and the vector representations of the categorical variables described in Section 2.2.1 (the continuous variables are selected by one of the variable selection functions defined in Table 3). The output of the LR model is a two-element vector containing a probability distribution over the two classes of cheatgrass canopy cover for a given location, p = softmax(Wx + b), where the weight matrix W and bias vector b are learned parameters. The softmax is defined as the elementwise vector-valued function that normalizes its elements into a probability distribution, softmax(z)_i = exp(z_i) / Σ_j exp(z_j). We fit the parameters of the logistic regression model by minimizing the cross-entropy between the ground truth and predictions with the ADAM optimization algorithm [58]. ADAM is a variant of stochastic gradient descent that adjusts the model parameters adaptively according to estimates of the second-order moment of the gradient. We performed 200 training runs with random initializations and a randomly (uniformly) sampled L2 regularization weight (range 0.001-0.1) to select the best LR model.
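A minimal sketch of the logistic-regression classifier described above (softmax over two classes, cross-entropy loss, ADAM optimizer, L2 regularization) is shown below. It uses the Keras functional API for brevity, whereas the original models were written against TensorFlow 1.2.1, so this illustrates the formulation rather than the authors' code; the input dimension and the regularization weight are placeholders.

```python
import tensorflow as tf

n_features = 64        # assumed length of the concatenated continuous + embedded input vector
l2_weight = 0.01       # example value from the 0.001-0.1 range described in the text

# p = softmax(W x + b): a single dense layer with softmax activation over two classes.
inputs = tf.keras.Input(shape=(n_features,))
outputs = tf.keras.layers.Dense(
    2, activation="softmax",
    kernel_regularizer=tf.keras.regularizers.l2(l2_weight))(inputs)
lr_model = tf.keras.Model(inputs, outputs)

lr_model.compile(optimizer=tf.keras.optimizers.Adam(),
                 loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])
# lr_model.fit(X_train, y_train, validation_data=(X_val, y_val)) would then be called
# with the standardized predictors and 0/1 cheatgrass-occurrence labels.
```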
The architecture of the DNN model (Figure 3) consists of L hidden layers, recursively defined as h_l = ReLU(W_l h_{l-1} + b_l) for l = 1, ..., L, with h_0 taken as the input vector, where the ReLU (Rectified Linear Unit) [59] operation is the elementwise vector-valued operation ReLU(z) = max(0, z). The output of the DNN is directed to a linear LR function (Equation (3)). Dropout normalization is also used to avoid overfitting and reduce generalization error [60], and batch normalization is used to better condition gradient updates [61]. Together, these techniques help to stabilize learning during training. As an extension to the DNN, we employ a joint composition of bidirectional RNNs, or JRNN, with Long Short Term Memory (LSTM) to predict cheatgrass occurrence (Figure 3). Bidirectional RNNs have been found effective for modeling difficult time-series problems and operate by processing sequence data in both directions, thus allowing output nodes to receive signals from both previous and future time steps [62]. The use of LSTM in an RNN can reduce training difficulties and improve the RNN's ability to model long-term dependencies [63]. LSTM's capability to track discriminative values over arbitrary time intervals is especially useful if there are response lags of unknown duration between dependent events in a time-series. Such is the case in this study as the timing and magnitude of cheatgrass growth (and subsequently its detectable presence in time-series spectral-spatial data) may be accelerated or lagged depending on climatic conditions across and within years. Let the MODIS and Landsat-7 time series each be a sequence of vectors containing the annual-, spring-, and summer-composite pixel values of each year for all spectral bands in Table 1. We define two bidirectional LSTM networks, one for each imagery time series. The LSTMs provide condensed vector representations of the platform time series for a given location. These vectors are concatenated with the categorical embeddings and the continuous biophysical variables [D1(x)], and the concatenated vector is used as the input to a DNN as previously described. It is important to note the JRNN cannot be applied with the D1 variable set alone due to the inherent time-series structure of the model. The parameters of the JRNN and DNN models are fit with the same optimization algorithm, objective function, and regularization strategies as described for the LR model. We perform 200 training runs with random initializations over randomly sampled hyperparameters to identify the best performing DL model configurations. Early stopping based on overall accuracy is used to mitigate overfitting model parameters for all models trained with stochastic gradient descent (LR, DNN, JRNN). The hyperparameters (and their ranges) that we sampled included: number of nodes per hidden layer (32-512 incremented by powers of 2); dropout rate (0-0.5); learning rate (0.001-0.03); and mini-batch size (16-128 incremented by powers of 2). Figure 3 caption (fragment): ...(Equation (2)) are provided as input. In the JRNN configuration, the condensed vector representations of imagery time series created by the joint LSTM networks are concatenated with the D1(x) set of continuous biophysical variables and categorical predictors (Equation (7)) to create an input to the DNN. Model Ensembling The resultant probability distributions of cheatgrass occurrence (i.e., ≥2% cheatgrass canopy cover) are strongly bimodal with peaks above and below 0.5; therefore, we use a simple decision rule of p ≥ 0.5 to classify cheatgrass occurrence.
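The following sketch outlines, again with the Keras API rather than the original TensorFlow 1.2.1 code, how a DNN head (ReLU hidden layers with batch normalization and dropout feeding a softmax output) and the JRNN front end (two bidirectional LSTMs over the MODIS and Landsat-7 time series whose outputs are concatenated with the biophysical/categorical inputs) could be assembled. Layer sizes, sequence lengths, and feature counts are placeholders, not the values used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers

n_static = 64                           # assumed biophysical + categorical feature length
n_years, n_modis, n_ls7 = 10, 21, 24    # assumed time steps and per-year band counts

# Inputs: static features plus two per-year satellite time series.
static_in = tf.keras.Input(shape=(n_static,))
modis_in = tf.keras.Input(shape=(n_years, n_modis))
ls7_in = tf.keras.Input(shape=(n_years, n_ls7))

# Joint bidirectional LSTMs condense each sensor's time series to a single vector.
modis_vec = layers.Bidirectional(layers.LSTM(32))(modis_in)
ls7_vec = layers.Bidirectional(layers.LSTM(32))(ls7_in)

# DNN head: ReLU layers with batch normalization and dropout, softmax output over two classes.
x = layers.Concatenate()([static_in, modis_vec, ls7_vec])
for units in (128, 64):
    x = layers.Dense(units, activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.3)(x)
output = layers.Dense(2, activation="softmax")(x)

jrnn = tf.keras.Model([static_in, modis_in, ls7_in], output)
jrnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

Dropping the two LSTM branches and feeding only the static input into the same head would give the plain DNN configuration described above.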
Maps of cheatgrass occurrence are produced for each k-fold model from the cross-validation process and ensembled using a simple consensus (or class frequency) method. The method is used to create two types of ensemble maps of cheatgrass occurrence. The first type of ensemble map is based on the consensus value (0 to 5) and is intended to display spatial agreement among folds. The second type of ensemble map is a binary representation of the first using thresholds ranging from k ≥ 1 to k = 5, which represent less to more restrictive ensemble predictions of cheatgrass occurrence, respectively. This ensemble method is devised to provide insight into the spatial agreement of the individual folds and to identify an optimal ensemble of the folds, as described in the next section. All predictions are mapped at a 30-m GSD equivalent to that of Landsat-7. Certain portions of the study area corresponding to non-target land cover types (i.e., cultivated lands, pasture and hay, closed canopy deciduous and evergreen forest, urban/developed lands, water) as depicted in the national Cropland Data Layer [64] and National Land Cover Dataset [65] are masked from mapped predictions. Model Performance We use common performance metrics for binary classifiers, including overall accuracy, precision, recall, and F1 score (harmonic mean of precision and recall), to evaluate the models and resultant ensemble maps. Acceptable accuracy is defined as >71%, the level observed in our motivating study by Downs et al. [15]. Cross-validation performance is assessed by averaging performance metrics across the k test folds based on 90% of the entire dataset. Verification performance is assessed differently, as shown in Figure 4, due to the use of the consensus method to ensemble the k-fold predictions. Instead, k predictions are made for each verification sample and the consensus level that yields the best overall accuracy or F1 is then reported. In addition, we qualitatively investigate the effect of the consensus method on the performance metrics by plotting overall accuracy and F1 for all five consensus ensembles (i.e., k ≥ 1, k ≥ 2, k ≥ 3, k ≥ 4, k = 5) to determine which consensus level provides the best balance between overall accuracy and F1 (see Section 3.2). We chose to balance performance this way because overall accuracy can be misleading for imbalanced datasets and it helps to balance Type I and Type II error. Conversely, verification performance is assessed by ensembling the predictions from each of the k-fold models using the consensus method, which yields k estimates of performance. This same process is applied to each map pixel. Model Selection In cases where data are limited, reducing variance in estimators of generalization performance, such as k-fold cross-validation, may be as important as unbiased estimates from independent validation for estimating true generalization performance [66]. Given this consideration, our methods for model selection, and ultimately choosing a high-performing model to map cheatgrass distribution, are intended to accommodate the relatively limited size of our dataset and to independently verify generalization performance of the various methods and resultant map. We tested 10- and 5-fold cross-validation and found 5-fold cross-validation to be the most effective in reducing variance across folds while maintaining acceptable accuracy. Hyperparameters for each model class are chosen according to the best average cross-validation accuracy from 200 random hyperparameter sets for each model.
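The consensus ensembling of the k-fold predictions described in the Model Ensembling subsection can be expressed compactly. The sketch below assumes five per-fold binary occurrence maps and shows both the 0-5 agreement map and a thresholded binary map; the threshold of k ≥ 3 and the random stand-in maps are purely illustrative.

```python
import numpy as np

def consensus_maps(fold_predictions, threshold=3):
    """fold_predictions: array of shape (k, rows, cols) holding 0/1 occurrence maps."""
    agreement = fold_predictions.sum(axis=0)           # 0..k votes per pixel
    binary = (agreement >= threshold).astype(np.uint8)  # restrictiveness grows with threshold
    return agreement, binary

# Hypothetical stack of k = 5 fold maps for a small tile.
rng = np.random.default_rng(1)
folds = (rng.random((5, 200, 200)) > 0.5).astype(np.uint8)

agreement_map, occurrence_map = consensus_maps(folds, threshold=3)
```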
Final selection of a model for mapping cheatgrass is subsequently based on cross-validation performance to avoid potential model selection bias. Performance on the 10% of the dataset reserved for verification is provided as a theoretically unbiased comparison of performance. Results The following results relate to our primary objectives as follows: (1) compare the performance of the LR, RF, DNN, and JRNN models tested with four combinations of biophysical and spectral-spatial variables to identify a high-performing model configuration (Section 3.1); and (2) construct a consensus-based ensemble of the preferred model to generate a 30-m GSD map of cheatgrass occurrence (Section 3.2). Comparison of Model Types and Variable Sets All model configurations that we tested demonstrated acceptable (>71%) overall accuracy in cross-validation and verification, except for LR with the biophysical variable set (D1). With this variable set, cross-validation accuracies improved slightly (2-4%) but verification accuracies did not improve relative to the previous study. The best performing models are those that combined Landsat-7 and MODIS with biophysical and ancillary data (D4), achieving cross-validation accuracies that are approximately 7-8% greater than the previous study (Table 4). Among these, the DNN-D4 configuration demonstrated the best cross-validation performance when considering overall accuracy (79.6%), variation in overall accuracy (2.47%; Table 5), and F1 (0.812; Table 6). Verification accuracy of this configuration is very similar (79.1%), suggesting the model is relatively stable when generalizing on unseen data. Table 4. Overall accuracy (%) of cross-validation (CV) and verification (V) across model types and variable sets. The k-fold consensus threshold that yields the best verification accuracy is denoted in superscripts. Table 6. Cross-validation (CV) and test F1 scores across model types and variable sets. The k-fold consensus threshold that yields the best verification F1 score is denoted in superscripts. While the DL architectures we implemented do enhance prediction of cheatgrass occurrence, we found no apparent trends in overall accuracy (Table 4) or F1 (Table 6) across variable sets that suggest they are superior or inferior to their LR and RF counterparts. Within each variable set, the performance advantages of one model type over another are also not strongly distinctive. LR appears to be the slightly stronger model with set D3, although the JRNN achieves a slightly better F1 score with the verification dataset. Similarly, the DNN has slightly better cross-validation performance with D4 than the other models, but RF appears to generalize better with the verification dataset. However, note the verification accuracy of the RF model may be overly optimistic in this case as its cross-validation accuracy is 4.4% lower. Variable Set When we evaluate the effects of different variable sets on model performance, we find that adding satellite data (D2, D3, and D4) improves model performance compared to just using biophysical and ancillary data (D1). The differences in overall accuracy (Table 4) and F1 scores (Table 6) between using only MODIS (D2) or only Landsat-7 (D3) are mixed, suggesting there is no obvious advantage of using one sensor in lieu of the other for our application. However, three of the four model types tested (RF, DNN, and JRNN) achieve their best performance when using both sensors (D4). LR achieves its best performance using only Landsat-7 for satellite data (D3).
We later hypothesized that the size of our training dataset (N = 5973) may not have been large enough to provide the DNN and JRNN models a competitive advantage over LR and RF. Recall, we had to discard more than 18,000 field observations due to imprecise geographic accuracy or other quality issues. We qualitatively tested this hypothesis by running the experiments with all the available data (N = 6418). Overall accuracy and F1 from cross-validation was boosted for all model configurations (Table 7), except for LR with set D1 where the F1 score decreased slightly. The DNN and JRNN models became more competitive with RF and LR across variable sets, although this trend is subtle. These findings support our suspicion, although more observation data and rigorous testing is needed to confirm. Ensemble Mapping of Cheatgrass Occurrence As described in Section 2.4, final selection of a model for mapping cheatgrass is based on cross-validation performance due to the limited amount of available data and steps taken to reduce risk of overfitting, model selection bias, and generalization error. Based on this selection criterion, the DNN-D4 configuration is selected to map cheatgrass. We plot trends in DNN-D4 test overall accuracy, F1 score, precision, and recall across all five consensus levels ( Figure 5) to examine tradeoffs of our ensembling approach and to identify an appropriate level for post hoc analysis and interpretation. Recall that k ≥ 1 is the least restrictive ensemble and k = 5 is the most restrictive in terms of the predicted area. As we expect, overall accuracy and precision increase as the consensus level becomes more restrictive and false-positives are reduced. Note that overall accuracy becomes unacceptable (<71%) at k ≥ 1 due to poor precision. Overall accuracy peaks (79.1%) at k ≥ 4 due to declining recall and increasing false-negatives, while the balance between precision and recall (F1) peaks at k ≥ 2 (F1 = 0.676). Therefore, we use the midpoint between peak accuracy and peak F1 of k ≥ 3 to produce an accuracy-F1-balanced map of cheatgrass occurrence (Acc. = 78.1%, F1 = 0.673, Prec. = 0.644, Rec. = 0.704; Figure 6). Figure 6. Predicted distribution of cheatgrass occurrence (>2% canopy cover) in the historic range of sage-grouse (excluding areas classified as non-target land cover types) depicted as: (a) the spatial agreement (i.e., consensus of k-fold predictions), and (b) accuracy-F1-balanced consensus of k ≥ 3 overlaid by EPA Level-III ecoregion boundaries (numbering corresponds to ecoregion names in Table 8). With this mapped prediction we estimate 253,727-km 2 (or 22%) of the historic range of sage-grouse (excluding non-target land cover types as described in Section 2.4) to be invaded by cheatgrass. The effect of balancing accuracy and F1 score on predicted areal extent is evident by comparing Figures 6 and 7, which reveals that even minor differences in ensemble performance can have significant impacts on the estimate of invaded area. Visual assessment of the cheatgrass maps and zonal analysis by ecoregion (Table 8) confirms that cheatgrass invasion is extensive in the Northern and Central Basin and Range, Snake River Plain, and Columbia Plateau where it has been studied more extensively and is known to be pervasive. 
Other notable areas of apparent invasion that are less-studied include a region overlapping a southern portion of the Wyoming Basin and the northern portion of the Colorado Plateaus ecoregions, a region in the western portion of the Northwestern Great Plains ecoregion, and a region overlapping the southern portion of the Northwestern Great Plains and northern portion of the High Plains ecoregions. Table 8. Predicted areal extents of cheatgrass occurrence by ecoregion based on k ≥ 3 spatial consensus. For reference, the total and masked areas of the historic range of sage-grouse are provided. The proportion of cheatgrass occupied area is relative to the masked area of the sage-grouse range. The numbering of ecoregions corresponds to map labels in Figure 6b. Discussion This study focused on developing more discriminative, higher-resolution models of cheatgrass occurrence for the historic range of the greater sage-grouse, using Downs et al. [15] as a baseline. In doing so, we were able to improve overall accuracy by approximately 7% and increase spatial resolution from 250-to 30-m GSD, relative to the previous study. We consider these improvements biologically significant because even minor differences in accuracy can result in large differences in predicted areal extents, especially for species that are widespread over large geographic areas like cheatgrass [67,68]. The accuracy of our accuracy-F1-balanced cheatgrass map (78.1%) is comparable to other studies that focused on much smaller regions in the Great Basin, Snake River Plain, and Colorado Plateau. For example, Bradley and Mustard [10] achieved 64% and 72% accuracy, respectively, using AVHRR and Landsat-7. In a related study [12], accuracies ranged from 54% to 74% using Landsat MSS, TM, and ETM+. It is worth noting, however, that these studies predicted more monotypic areas heavily infested with cheatgrass, whereas our study focused on identifying areas with at least 2% canopy cover of cheatgrass. Singh and Glenn [17] achieved 77% accuracy in southern Idaho using Landsat. Bishop et al. [19] reports higher model accuracies (85-92%) for seven national parks in the Colorado Plateau, although it is worth noting these estimates are based on the combined area of low and high probability of occurrence classes where cheatgrass was considered present if it occurred at >10% canopy cover; thus, making interpretation of accuracy difficult. In summary, we find our results encouraging compared to previous studies given the difference in geographic scale and ecological diversity of our study area, as well as lower threshold for detection of cheatgrass occurrence. Combining biophysical, ancillary, and satellite data generally improved the performance of the four model classes that we tested, lending further credence to approaches for mapping cheatgrass that incorporate ecological niche factors and remote sensing [7,8,11,15,24,25]. Looking more closely at the satellite data, we found that combining concurrent MODIS and Landsat-7 data generally improves model performance compared to using either sensor alone. We attribute this result to choosing sensors with spectral-spatial characteristics that are complimentary to mapping cheatgrass and selecting robust machine learning techniques that are well-suited for deriving discriminative features from multi-modal data. This approach is simpler than fusing satellite data and provides greater flexibility for choosing and testing sensors for a given application. 
However, we do not discourage data fusion or use of fused satellite data such as HLS [30], which has shown to be useful for mapping exotic annual grass cover [31]. In fact, DL algorithms have shown promise for performing pixel-level image fusion [69]. As such, the combined modeling and data fusion capabilities of DL make it an intriguing tool for leveraging the rapidly increasing volume of EO imagery [38,39]. The similar performance among model architectures in this study underscores the importance of evaluating multiple analysis methods and variable combinations. In a meta-analysis of land cover mapping literature, accuracy differences due to algorithm choice were not found to be as large as those due to the type of data used [70]. While DL algorithms have been proven superior in many remote sensing applications [35][36][37], their performance also hinges on having sufficiently large datasets to learn highly discriminative features in the data. However, what defines "sufficiently large" is not common knowledge and depends on the complexity of the problem and learning algorithm [71]. This topic is considered by some to be one of the major research topics for DL in remote sensing that remains largely unanswered [35]. We did observe a benefit to all models from adding 10% more data, suggesting that sample size may be a limiting factor in our cross-model comparison. The performance of DL models in this study is still encouraging, however, given the circumstances and comparable performance to LR and RF under a limited data regime. This is consistent with others who have shown good performance using DL for similar land use/land-cover applications [71][72][73][74][75]. Acquiring more field data was beyond the scope of this study but should be a priority for future research given that more data has likely become available since the previous study. We chose relatively simple DL methods as a logical first step to assess whether DL was appropriate for our application and might warrant investigating more computationally intensive methods such as convolutional neural networks (CNNs). CNNs are commonly used in overhead imagery remote sensing due to their ability to take advantage of information in neighboring pixels [36]. However, CNNs may not perform well in cases when the phenomena of interest occur in mixed pixels or exists in the sub-pixel space [74], such as is the case with cheatgrass. Furthermore, the problem can be exacerbated if higher resolution imagery is not available or there is significant cloud cover present. These considerations and greater ease of use of the DNN and JRNN methods factored into our decision to exclude CNN from this study. However, we suggest the relative success of the DNN and JRNN methods does warrant future testing of CNN approaches, and a logical next step might be developing joint DNN-CNN or JRNN-CNN architectures for a semisupervised classification. Conclusions In this paper, we propose two straightforward DL approaches (DNN and JRNN) using large predictive variable sets of biophysical and multi-modal remote sensing data (MODIS and Landsat-7 ETM+) to improve prediction (accuracy and spatial resolution) of the invasive exotic annual grass, cheatgrass. We benchmark DL models to two conventional machine learning algorithms (LR and RF) and compare results to a prior study that was an inspiration and data source for this study. Both DL approaches were found to improve prediction, although there was only a slight advantage over LR or RF with our dataset. 
We surmise that more labeled data is needed to achieve better performance with the DL methods but note the preferred DNN model provided a 7-8% accuracy improvement over the comparison study. The model's ability to predict cheatgrass occurrence over the historic range of sage-grouse (i.e., much of the western US) with accuracy comparable to, or better than, previous smaller-scale studies is also noteworthy. Combining biophysical and multi-modal satellite data was also found to improve the prediction of cheatgrass in all models. A 30-m GSD map of cheatgrass occurrence is produced for the historic range of sage-grouse to help land managers and researchers better understand factors affecting its spread, assess fire risk, and identify and prioritize areas for treatment. We suggest that future work explore the existing models with additional observation data collected in subsequent years, along with an expansion of the remote sensing time-series data. In addition, data augmentation techniques should be explored to increase the total population of training data, and other DL architectures should be evaluated for performance improvements. Data Availability Statement: The data, results, code, and figures presented in this paper are openly available at www.github.com/pnnl/fieryfuture. Biophysical and satellite raster datasets are available on request from the corresponding author.
2021-04-22T13:14:24.048Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "223311018c4b46bf5db1978573c1fdba3be62372", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/13/7/1246/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7dffbb2ee83810daf8d74c27b1aed7f54d1cd884", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
233388105
pes2o/s2orc
v3-fos-license
High-speed hyperspectral four-wave-mixing microscopy with frequency combs A four-wave-mixing, frequency-comb-based, hyperspectral imaging technique that is spectrally precise, potentially rapid, and can in principle be applied to any material, is demonstrated in a near-diffraction-limited microscopy application. Introduction As the cost, robustness, and capability, of optical technologies continue to improve, their applications become increasingly widespread. In particular, the combination of spectroscopy and imaging, called hyperspectral imaging, is being applied to a diverse and growing number of fields including, but not limited to, agriculture [1], food inspection [2], biology [3], and astronomy [4]. There are even commercial, hand-held hyperspectral imagers available now [5] and detailed open-access instructions to build your own imager using 3D-printed and off-the-shelf parts [6]. All of these examples exploit linear optical phenomena (namely absorption and refraction), which make them simple but also limit their capabilities compared to nonlinear techniques. In the context of imaging, nonlinear methods have intrinsically higher spatial resolution, higher sensitivity to the environment, and the ability to probe richer information such as coupling between energy levels. For example, coherent anti-Stokes Raman spectroscopy (CARS) imaging [7] probes the Raman response of the sample. There have also been hyperspectral images formed using multi-dimensional coherent spectroscopy (MDCS) [8], which revealed coherent coupling between distant excitons, but the technique requires delay stages thereby limiting its throughput rate. A consistent goal in all of these cases is to increase the speed, signal-to-noise ratio (SNR), and resolution (both spectral and spatial). All of these can be achieved by harnessing the bandwidth compression (i.e. the conversion of optical data with a large bandwidth -commonly many THz -to rf data with a significantly smaller bandwidth -commonly tens to hundreds of MHz) of frequency combs in the experimental design [9]. An experiment similar to the CARS example mentioned above was able to do just this (Ref. [10]) and successfully achieved the same benefits of using combs as dual-comb spectroscopy did with respect to Fourier-transform infrared spectroscopy [9]. However, it also had the fundamental limitation of any CARS technique -it relied on Raman shifts, which greatly limits the number of samples to which it can be applied. Here, a four-wave mixing (FWM) based hyperspectral imaging technique is presented that can in principle be applied universally to any material while still retaining the potential speed, precision, and SNR advantages achievable with frequency combs. Experiment One of the fundamental challenges for nonlinear optical spectroscopy lies in distinguishing a nonlinear signal from a linear one. CARS uses the anti-Stokes Raman shift to spectrally shift the FWM signal away from the pumps enabling the use of a simple optical filter to isolate the FWM signal for high sensitivity detection. In this work, we combine a comb-based version of "frequency tagging" [11] with the standard dual-comb read-out technique [9]. The technique is similar to that in Ref. [12], but with the several modifications. In Ref. [12] two home-built Kerr-lens mode-locked Ti:Sapphire lasers with slightly different repetition rates were used. The repetition frequencies of the combs were phase locked to a direct digital synthesizer using feedback loops. 
The comb offset frequencies were not stabilized. The output of one of the combs was split into two parts using a half wave plate and a polarizing beam splitter (PBS). The offset frequency of the first part was shifted by an Acousto-Optical Modulator (AOM) and recombined with the second part on a PBS. The optical path lengths for the two arms were adjusted to overlap the two pulse trains in time. Before interacting with the sample, the beams were projected to the same linear polarization state using a polarizer. The combined beams were then focused on a sample consisting of 10 layers of 10 nm GaAs quantum wells (QW) separated by 10 nm thick Al0.3Ga0.7As barriers. The FWM signal that was emitted in the forward (phase-matched) direction and the incident beams were combined with another comb and interfered on a photodetector. The down-converted linear and FWM signals were then spectrally separated in the RF domain (Fig. 1b). In the experiment, the relative phase noise between the two combs, caused by fluctuations of the relative offset and repetition frequencies, was measured and corrected using a single continuous wave (CW) laser. In the experiment described here, we made several modifications (see Fig. 1a). First, a second CW laser (Toptica DL100 external cavity diode laser) was used to greatly improve the usable spectral bandwidth through post-processing [13]. Second, steering mirrors with piezoelectric actuators combined with two 4-f imaging systems enabled fast raster-scanning of the laser spot across the sample. Third, the FWM signal was detected in the backward direction, making it compatible with standard imaging modalities (e.g. microscopy). Note that although the FWM signal is phase-matched in the forward direction for this collinear excitation scheme, the strong absorption of the sample limits the effective excitation depth to an amount comparable to the wavelength of the pump light within the medium (approximately 200 nm). Thus, appreciable FWM can be emitted in the backward direction [14]. To validate the experimental technique, hyperspectral images were taken of a stack of 10 GaAs/AlGaAs quantum wells with 10 nm separation mounted on a sapphire substrate. The sample was cooled to below 10 K in a high-vacuum, flow cryostat to enhance the spectral features of the n = 1 light hole and heavy hole excitons [15]. To obtain more interesting images, 6 different high-strain areas of the sample were selected. The pump and probe combs used were home-built Ti:sapphire oscillators centered around 800 nm with bandwidths of greater than 50 nm, output powers of about 100 mW, and loosely-locked repetition rates of about 93.5 MHz (differing by about 65 Hz). Two 24 nm optical band-pass filters centered around 800 nm were used: one filtered the pump beam to avoid exciting unwanted transitions, and the other filtered the FWM signal and local oscillator (LO) comb to reduce the amount of light sent to the detector. A wavemeter (Bristol) measured the optical frequencies of the CW lasers to provide coarse spectral calibration for the acquired data (center frequencies of 370.404(76) THz and 377.115(38) THz). A Labview program raster scanned the laser spot, waited several hundred ms after the movement command to allow settling of the physical parts, then acquired data from the data acquisition (DAQ) board, and repeated this sequence until 400 pixels of data were recorded.
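The bandwidth compression that makes this read-out possible follows directly from the comb parameters quoted above: optical structure spaced by the repetition rate f_rep maps to RF beat notes spaced by the repetition-rate difference Δf_rep, so spectra are compressed by a factor of roughly f_rep/Δf_rep. The short calculation below is a back-of-the-envelope sketch using the nominal values in the text; the chosen optical bandwidth (roughly what a 24 nm filter near 800 nm passes) is an illustrative assumption.

```python
# Nominal comb parameters from the text.
f_rep = 93.5e6        # repetition rate, Hz
delta_f_rep = 65.0    # repetition-rate difference between the two combs, Hz

compression = f_rep / delta_f_rep      # on the order of a million
optical_bandwidth = 11e12              # assumed ~11 THz (24 nm band-pass near 800 nm)

rf_bandwidth = optical_bandwidth / compression
print(f"compression factor ~ {compression:.2e}")
print(f"{optical_bandwidth/1e12:.0f} THz of optical bandwidth maps to "
      f"~{rf_bandwidth/1e6:.1f} MHz of RF bandwidth")
```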
Due to the large size of the files that had to be saved after digitization for each pixel, each hyperspectral image took about 30 minutes of laboratory time to record even though only about 45 seconds of data was collected. This excessive acquisition time could be reduced using real-time techniques [16,17,18] that allow for significantly smaller file sizes. Further acquisition speed gains could come via faster raster scan control enabled for example by resonant galvanometer mirrors. The resulting data was post-processed similarly to Ref. [12], with the added technique from Ref. [13]. Furthermore, due to computer memory limitations and processing speed considerations, much of the data between bursts in each FWM-LO heterodyne RF comb was ignored by splitting it up into 30 ps slices centered around each temporal burst (typically 6-10 slices total). Each slice was corrected separately, then they were all coherently combined by lining up the phases at each signal's peak. Lastly, a Gaussian window with a full width at half maximum (FWHM) of 10 ps was applied to reject noise. This limited the spectral point spacing of the acquired hyperspectral images, but was still sufficient to resolve the light hole and heavy hole exciton features. The resulting 6 hyperspectral images corresponding to 6 different sample locations are available as attached Visualizations 1-6 in video format, where time in the video maps to optical wavelength. A select spectral slice from one hyperspectral image corresponding to Visualization 1 is shown here in Fig. 2. In addition, white-light images of the sample were recorded at both 6 K and room temperature (see Fig. 2). Often in the analysis of hyperspectral images, it is useful to be able to choose specific spatial locations and examine the corresponding spectrum. To demonstrate this capability, we selected 5 adjacent pixels, labeled in Fig. 2, and plot the corresponding FWM spectra in Fig. 3. Discussion The heavy-hole and light-hole exciton spectral features (see Ref. [15], especially Fig. 2) are also present in the FWM hyperspectral images (see videos in Supplemental Material); however, they show up at different wavelengths/energies, particularly with respect to position (see Fig. 3). It is well known that the spectral locations of these features depend on confinement [15], temperature, electric field via the Stark effect (or in this case the quantum-confined Stark effect (QCSE) [19]), and strain via direct changes to the band structure [20]. Note that both GaAs and AlGaAs are non-centrosymmetric, resulting in coupling between strain and electric field via the piezoelectric and inverse piezoelectric effects. Interestingly, neither Ref. [20] nor Ref. [19] mentions the complementary effect despite this coupling and despite each effect being capable of similar-magnitude shifts of the exciton spectral features. The source of the spatial dependence of the exciton FWM spectral features seen in Fig. 3 and in the videos in the Supplemental Material can be assessed as follows. It is unlikely to be spatial temperature variation because there is no reason to suspect the sample is not uniformly cooled. Likewise, spatial confinement variation could explain the results, but since the sample was grown using molecular beam epitaxy, this is also unlikely. In contrast, either a spatially varying QCSE (possibly resulting from strain via the piezoelectric coupling) or a spatially varying strain acting directly on the band structure could produce the range of shifts seen in the results.
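The post-processing chain just described (slicing the FWM-LO interferogram into 30 ps windows around each burst, phase-aligning the slices at their peaks and summing them coherently, then applying a 10 ps FWHM Gaussian window before the Fourier transform) can be sketched as follows. This is a NumPy illustration on synthetic data, not the analysis code used for the paper: the sampling rate, burst spacing, decay time, and noise level are invented, while the slice width, window width, and alignment rule follow the description above.

```python
import numpy as np

# Synthetic stand-in for one pixel's FWM-LO heterodyne record: a few bursts,
# each an exponentially decaying oscillation (all numbers here are illustrative).
fs = 2e12                      # effective sampling rate in "optical time", Hz (assumed)
burst_period = 200e-12         # spacing between bursts, s (assumed)
n_bursts = 6
t = np.arange(0, n_bursts * burst_period, 1 / fs)
signal = np.zeros_like(t, dtype=complex)
for k in range(n_bursts):
    local = t - k * burst_period
    mask = local >= 0
    signal[mask] += np.exp(-local[mask] / 5e-12) * np.exp(2j * np.pi * 1.0e12 * local[mask])
signal += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# 1) cut 30 ps slices around each burst (the synthetic burst is one-sided,
#    so each slice starts at the burst rather than being centred on it)
half = int(15e-12 * fs)
slices = []
for k in range(n_bursts):
    start = int(k * burst_period * fs)
    if start + 2 * half <= signal.size:
        slices.append(signal[start:start + 2 * half])

# 2) phase-align each slice at its peak, then sum coherently
aligned = [s * np.exp(-1j * np.angle(s[np.argmax(np.abs(s))])) for s in slices]
combined = np.sum(aligned, axis=0)

# 3) apply a Gaussian window with 10 ps FWHM and Fourier transform
tau = np.arange(combined.size) / fs
sigma = 10e-12 / (2 * np.sqrt(2 * np.log(2)))
window = np.exp(-0.5 * ((tau - tau[np.argmax(np.abs(combined))]) / sigma) ** 2)
spectrum = np.fft.fftshift(np.fft.fft(combined * window))
freqs = np.fft.fftshift(np.fft.fftfreq(combined.size, 1 / fs))
print(f"spectral point spacing ~ {freqs[1] - freqs[0]:.3e} Hz")
```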
Visual inspection of the room-temperature and cryogenic white-light images in the videos in the Supplemental Material appears to reveal a temperature-dependent straining of the quantum wells. The cryogenic images clearly show many spatial features that match with the hyperspectral images. The room-temperature images, however, show far fewer similarities, essentially only cracks within the sample. A temperature-dependent strain could be generated by a thermal expansion coefficient mismatch with the sapphire substrate. Without an independent control or measurement of either the strain or electric field, it is impossible to isolate the contributions from the strain-based and QCSE shifts. As for the technique itself, the observed heavy-hole and light-hole features qualitatively agree with Ref. [15] up to the shifts described above. Furthermore, the spatial features of the hyperspectral images qualitatively agree with the cryogenic white-light images. The primary limitation of this demonstration was the slow rate of data transfer between the DAQ board and the computer. Because of thermal drift of the sample, this limited the number of bursts that could be acquired for each pixel to fewer than 10, which ultimately limited the SNR. Furthermore, this low SNR and the short decay times of the excitons relative to the pulse repetition period made it unnecessary to process all of the data between bursts. We point out that these limitations are not fundamental to the technique and could be mitigated with several real-time correction schemes [16,21,18]. The SNR could also be improved by preventing the pump beams from reaching the FWM-LO detector. This could be achieved, for example, by using the boxcars geometry [22]. Even more spectral information could be obtained by performing MDCS [23,24,25] instead of just spectrally resolved FWM. The most logical way to extend the technique presented here would be to utilize tri-comb spectroscopy [26]. Such a technique would generate enormous amounts of data and would almost certainly require real-time processing techniques. Lastly, we point out that this technique can simultaneously acquire linear data as well if desired. Conclusion The technique presented here is spectrally precise and potentially rapid. It is capable of generating near-diffraction-limited FWM hyperspectral images. Furthermore, it can be applied to any material in principle. To the best of our knowledge there is no other technique with all of these features. Figure 1: (a) Simplified schematic diagram of experiment. C1 = comb 1, C2 = comb 2, PBS = polarizing beam splitter, AOM = acousto-optic modulator, P = polarizer, λ/4 = quarter-wave plate, and λ/2 = half-wave plate. Written above each detector are the combinations of optical signals of interest. A 10× objective with a numerical aperture of 0.25 was used. (b) Mapping between optical comb teeth (red: comb 1, yellow: AOM-shifted comb 1, blue: relevant FWM comb, black: comb 2) and the RF beat notes observed on a detector. Not shown is a large "time-zero" beat-note at 13 MHz (93 MHz repetition rate less the 80 MHz AOM frequency) corresponding to beating between the AOM-shifted and non-AOM-shifted pump comb lines. Figure 2: Top: 400-pixel hyperspectral image from Visualization 1 showing the spectrally integrated amplitude of the fully corrected FWM-LO heterodyne signal (6 K, in high-vacuum cryostat). 75 ms of data (6 bursts) were collected per pixel.
Prior to integration, the spectrum was multiplied with a Gaussian whose full width at half maximum is represented by the black box on the upper colorbar. Normalized FWM spectra for 5 neighboring pixels (indicated by 1-5 in the image) are shown in Fig. 3. Bottom: Corresponding white-light images during the experiment (6 K, in high-vacuum cryostat) and afterwards at room temperature (296 K, 1 atm pressure). The black square represents the hyperspectral scan area.
2021-04-26T01:15:50.440Z
2021-04-23T00:00:00.000
{ "year": 2021, "sha1": "ce9639e2c756f6b9e2513575df10e993345b796a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2104.11614", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ce9639e2c756f6b9e2513575df10e993345b796a", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
268664271
pes2o/s2orc
v3-fos-license
Substantiating the rational parameters for a complicated non-transport system when mining low-thickness fireclay deposits The paper examines a complicated non-transport system for mining a gently sloping fireclay deposit using ESH-10/70 dragline excavators. The research purpose is to substantiate the technological scheme of stripping operations and determine their parameters to reduce the strip-mining costs. Theoretical research is performed using the following methods: methods of scientific analysis of theoretical research, as well as practices of project and production organizations; mining-geometric calculations; and the variant method for comparing and selecting a mining system. As a result, the dependence of the excavator block mining velocity on the entry way width has been determined, which makes it possible to study the relationship between mining and stripping equipment in time. The change in the re-excavation coefficient depending on the width of the dragline excavator entry way has been studied and its rational value has been determined. The practical value of the research results is in the substantiation of an effective system for mining a gently sloping fireclay deposit. Introduction Mining and export of mineral resources are today the driving force in the development of the economies of many countries around the world [1][2][3]. In modern Ukraine, various types of mineral deposits are also mined. The vast majority of minerals are extracted by the open-pit mining method. Open-pit mining is considered to be the most efficient and cost-effective method due to its high level of productivity and lower initial capital and operating costs [4][5][6]. Along with large iron-ore deposits, Ukraine has well-developed mining of fireclays, which are widely used for the production of refractory products used in construction, metallurgy, mechanical engineering and other industries [7,8]. Ukraine owns huge fireclay reserves, deposits of which have been explored within the Donetsk folded structure, the Ukrainian Shield and the Dnipro-Donets depression. The main area of fireclay distribution is the Donetsk folded structure, where more than half of Ukraine's reserves are concentrated [9]. One of the leading enterprises producing high-quality fireclay is VESCO PJSC, which mines the Andriyivsky field. Open-pit mining of fireclays in the Andriyivsky field involves a high stripping ratio, which is due to the low mineral deposit thickness of 1.6 m against an average overburden thickness of 28 m. The high thickness of the overburden rocks dictates combined systems for mining the field. For example, in the Zakhidnyi Quarry No.1, the front bench is mined with hydraulic excavators using a transport mining system, and the next bench with a dragline using a complicated non-transport mining system. The main share of the stripping costs is related to the mining of the front bench, which is due to the costs of transporting the rock mass and of renting hydraulic excavators [10]. Therefore, the issue of improving the quarry mining system and determining its parameters to reduce the cost of stripping operations is relevant.
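As a rough check of the figures quoted in this introduction, the average stripping ratio implied by a 1.6 m fireclay seam under 28 m of overburden can be computed directly. The short Python sketch below only restates that arithmetic; it is not taken from the paper's own calculations.

```python
# Back-of-the-envelope stripping ratio from the thicknesses quoted in the text.
ore_thickness_m = 1.6          # fireclay seam thickness
overburden_thickness_m = 28.0  # average overburden thickness

# Volumetric stripping ratio per unit plan area (m^3 of waste per m^3 of ore).
stripping_ratio = overburden_thickness_m / ore_thickness_m
print(f"average stripping ratio ~ {stripping_ratio:.1f} m3 of overburden per m3 of fireclay")
```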
Experience in using a non-transport system shows that with one piece of equipment, the efficiency of the technological scheme varies significantly depending on the location of the dragline excavators and directly on the mining system parameters. In this case, determining the rational mining system parameters is a difficult task [11]. To do this, many factors should be taken into account, namely: the mining-technical conditions of the field, the parameters and performance of the excavators, as well as the interrelation of the mining system elements. One of the methods for selecting rational non-transport mining system parameters is the graphic-analytical method. It consists in the repeated selection of the dimensions of the technological scheme elements, with the efficiency criterion being a minimum re-excavation coefficient. This method is laborious because the mining system must be designed many times with different parameters, but with computer modeling it is significantly simplified. Therefore, this research uses the graphic-analytical method to determine the rational mining system parameters. Determining a link between the mining rate of the stoping bench and the overburden bench One of the most important factors when choosing a mining system is the rate of deposit stripping, that is, ensuring the front of mining operations for the extraction of mineral resources. To do this, it is necessary to coordinate the operation of stripping and mining equipment [12]. The velocity of mining overburden benches in a complicated system directly depends on the dragline excavator performance and its face parameters. With a complicated non-transport mining system with upward and downward diggings using one dragline excavator, its performance drops by 15-20% compared with operation with only downward digging [13]. This is due, first of all, to an increase in the angle of dragline rotation and a more complex digging process during the upper mining of the bench. It is possible to increase the dragline performance only with proper organization of the excavator operation. The excavator face parameters have a direct impact on the volume of the block mined by the dragline [14]. As a block expands, its mining time also increases. However, when considering the velocity of advance along the front of mining operations, the movement of the dragline to a new block should be taken into account. Therefore, it is necessary to consider the excavator face parameters in order to ensure a sufficient velocity of the stripping operations. The main dragline excavator face parameters are the height and width of the entry way [15]. The height of the bench in the case of the complicated mining system of the Zakhidnyi Quarry No.1 is determined by the geological structure of the field and the excavator parameters. Thus, the average height of the upper bench will be 8 m on loams, and of the lower bench 20 m on sands. Therefore, this research examines the influence of the excavator entry way width on the advance velocity of the overburden and stoping benches. To determine the influence of the entry way width on the velocity of mining the overburden bench, it is necessary to calculate its maximum and minimum values according to safety conditions.
When using a non-transport mining system, the dragline excavator is located at a safe distance B1 from the upper edge of the bench, which is ensured by the angle of the stable structure of the overburden rocks. With a complicated mining system, the safe distance from the upper edge is also taken into account [16,17]. To determine the maximum entry way width of the dragline excavator, it is necessary to take into account the parameters of the upper bench and the transport berm. Thus, given the above, the maximum entry way width of the lower bench of a dragline excavator with a complicated mining system with upward and downward diggings is determined by the formula, where Rd.max is the maximum digging radius, m; Hup.b is the height of the upper overburden bench, m; and α is the slope angle of the upper overburden bench, deg. The maximum entry way width of the upper bench is calculated by a similar formula. After performing the calculations, it is possible to determine the maximum excavator entry way width for the lower and upper benches, respectively: Amax.l = 59 m and Amax.up = 52 m. Based on the safety conditions, the minimum entry way width for the lower bench of a dragline excavator is determined by the formula, where Rb is the excavator body rotation radius, m. For the calculation, the minimum entry way width of the upper bench is taken to be 7 m less than the minimum entry way width of the lower bench, Amin.up = 9.5 m. The total time for mining one excavator block consists of the time for direct mining of the block, tmining, and the time for moving the excavator to a new block, tmov. Thus, the time for mining one block is determined from the expression tbl = tmining + tmov. The time it takes for excavators to move to a new working block is influenced by many factors, such as the dragline movement velocity, the driver's qualifications, and the block length. Also important are the time for planning the site on the bench and the time for transferring, installing and connecting to the power transmission line network. The time it takes to move the excavator to a new working block, tmov, is calculated by the corresponding expression. The block length on the upper bench is limited only by the excavator parameters, and the block length during the downward digging is limited by the height of the bench and the physical-mechanical properties of the rocks. Based on the above, the block length, when mining the lower bench, is calculated using an expression in which αз is the slope angle of the dragline face (αз = 45°), rb is the radius of the dragline excavator body, m, together with the safe distance from the stable bench upper edge to the excavator body, m. The block length is 44.5 m. The bulldozer performs the planning of the track for moving the excavator; the planning time is then determined by the expression, where Spl is the planned site area, m²; hlay is the thickness of the planned rock layer, m (hlay = 0.3 m); and Qbul.per is the bulldozer performance, m³/hour. The Zakhidnyi Quarry No.1 uses Cat D8R bulldozers. Their performance according to the technical specifications is Qbul.per = 300 m³/hour. The duration of moving the dragline excavator to a new block is 0.96 hours. With a complicated mining system with upward and downward digging, the dragline excavator mines the upper and lower bench from one installation place. Therefore, when calculating the duration of mining a block, the parameters of the upper and lower blocks are taken into account. From here, the duration of mining the blocks is calculated by the formula, where Qe.hour is the hourly performance of the dragline excavator.
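The block-time bookkeeping just described, tbl = tmining + tmov with the move time built up from planning, walking, and cable switching, can be laid out numerically. The sketch below is only an illustration of the shape of the calculation: the bench heights, block length, planned-layer thickness, bulldozer output, and walking speed are the values quoted in this paper, and the 350 m³/hour dragline output is the value the paper adopts for the complicated scheme; the prismatic block-volume approximation, the assumed track width, and the cable-switching time are my own placeholders (the paper itself arrives at a move time of about 0.96 hours with its full geometry).

```python
# Sketch of the excavator block time t_bl = t_mining + t_mov for a given entry way width.
# The prismatic block-volume approximation, track width, and cable-switching time
# are assumptions for illustration only.

H_UP, H_LOW = 8.0, 20.0       # bench heights, m (from the text)
L_BLOCK = 44.5                # block length, m (from the text)
Q_EXC = 350.0                 # dragline hourly output with upward/downward digging, m3/h
V_WALK = 200.0                # dragline walking speed, m/h
Q_BULL = 300.0                # Cat D8R bulldozer output, m3/h
H_LAYER = 0.3                 # planned rock layer thickness, m
T_SWITCH_CABLE = 0.5          # cable-network switching time, h (assumed placeholder)

def move_time(track_width_m: float = 10.0) -> float:
    """Time to walk to the next block: planning + walking + cable switching (sketch).
    The paper quotes ~0.96 h for this move with its own (unstated here) track geometry."""
    planned_area = track_width_m * L_BLOCK            # planned track area, m2 (assumed geometry)
    t_planning = planned_area * H_LAYER / Q_BULL
    t_walking = L_BLOCK / V_WALK
    return t_planning + t_walking + T_SWITCH_CABLE

def block_time(entry_width_low: float, entry_width_up: float) -> float:
    """Total time to mine one block, h, with a simple prismatic volume approximation."""
    volume = (entry_width_low * H_LOW + entry_width_up * H_UP) * L_BLOCK
    return volume / Q_EXC + move_time()

def advance_velocity(entry_width_low: float, entry_width_up: float) -> float:
    """Advance velocity of the overburden bench along the mining front, m/h."""
    return L_BLOCK / block_time(entry_width_low, entry_width_up)

for a_low, a_up in [(16.5, 9.5), (30.0, 30.0), (59.0, 52.0)]:
    print(f"A_low={a_low:5.1f} m  A_up={a_up:5.1f} m  "
          f"t_bl={block_time(a_low, a_up):6.1f} h  "
          f"v_overb={advance_velocity(a_low, a_up):5.2f} m/h")
```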
The theoretical performance of the ESH-10/70 is 680 m³/hour. However, when the excavator operates with upward digging, the cycle duration increases due to a more complicated digging process and an increase in the excavator rotation angle. Therefore, with a complicated mining system, the hourly dragline performance is 350-450 m³/hour, with the lowest value taken for the calculation. Knowing the maximum and minimum values of the ESH-10/70 entry way width according to safety conditions, the total time for mining the blocks can be calculated. Based on the calculation results, the dependence of the block mining time on the entry way width of the lower bench, tbl = f(Al) (Fig. 1), can be determined, as it is the lower bench that governs the entry way width of the mining excavator. From the data shown in the graph (Fig. 1), the time for mining the blocks with a dragline excavator increases with an increase in the entry way width, which is caused by an increase in block volumes. Having determined the time for mining the blocks, it is possible to determine the velocity of mining the overburden benches along the front of mining operations using the corresponding formula. Knowing the parameters of the mining excavator and the required annual mineral output Aminer = 100 thousand m³, it is possible to determine the advance velocity of the stoping bench along the front of mining operations, given that the entry way width of the stoping bench with a complicated mining system will be equal to the entry way width on the lower bench. For this purpose, the necessary hourly demand for mineral extraction is determined, taking into account that the number of working days per year at the enterprise is nwork = 241 days. By determining the velocity of mining the overburden and stoping benches at different values of the excavator entry way width, it is possible to find the dependence of the advance velocity on the entry way width for the overburden and stoping benches, and also to construct a dependency graph of voverb = f(A) and vstop = f(A) (Fig. 2). Figure 2. Dependence of the velocity of mining the benches on a variable entry way width. From the data shown in the graph (Fig. 2) it is clear that the velocity of mining the overburden benches is greater than the velocity of mining the stoping bench at different values of the entry way width. This makes it possible to assert that the ESH-10/70 excavator strips overburden fast enough over time to expose the required volume of minerals at different entry way width values. Determining the rational parameters for a complicated non-transport mining system For the conditions of the Zakhidnyi Quarry No.1, we explore the possibility of using a complicated non-transport mining system with one ESH-10/70 excavator operating on the overburden bench. The dragline will work with both upward and downward diggings with direct placement of overburden into the internal dump. The possibility of using this technological scheme for stripping operations will be tested using the graphic-analytical method by constructing a mining system with the specified bench height parameters, provided that the slope angles are stable. The entry way width is assumed to be the minimum allowed by safety conditions. When constructing the scheme, the volume of the internal dump should be greater than the volume of overburden rocks, taking into account the loosening coefficient.
The flowsheet is presented in Figure 3. As can be seen from the data presented in Figure 3, the volume of overburden that can be placed into the internal dump is less than the volume contained in the overburden bench entry ways. Therefore, the use of this technological scheme is impossible. Next, consider a complicated non-transport mining system using an excavator in stripping operations and in a dump. This technological scheme involves additional re-excavation of rock by a dump excavator located in the pre-dump area for the additional movement of rock. The mining system parameters depend on the mining-geological conditions of the field. Thus, the height of the upper overburden bench is Hup = 8 m and the height of the lower overburden bench is Hl = 20 m. Therefore, it is necessary to determine the rational entry way width of the dragline excavator. To do this, using the graphic-analytical method, the volume of rocks to be re-excavated and the re-excavation coefficient [18] can be determined. To determine the volume of mining operations, it is possible to construct a flowsheet for a complicated mining system with various entry way width parameters (Fig. 4). The obtained data are presented in Table 1. To determine the rational entry way, the dependency graph of the re-excavation coefficient on the changing width is plotted (Fig. 5). As can be seen from the dependency graph, the minimum re-excavation coefficient is achieved with the minimum entry way width; however, the value of the re-excavation coefficient increases only insignificantly, by 6%, when the entry way width increases to the maximum value. Therefore, the rational value of the entry way width is A = 30 m, since with this value of the entry way width the greatest use of the excavator in time is achieved and the maximum visibility of the face for the excavator operator is provided. Economic assessment of the proposed mining system The criterion for assessing the effectiveness of the proposed mining system is the specific cost of strip-mining 1 m³ [19,20,21,22]. Therefore, we calculate all cost items for this stripping site and, based on the obtained data, construct a histogram of the unit costs for mining the Zakhidnyi Quarry No.1 site (Fig. 6). As can be seen from the data shown in Figure 6, the main share of costs with the existing combined mining system is spent on fuel and electricity, whereas with the proposed complicated non-transport mining system the main costs are only for electricity. This is due to the refusal to use dump trucks and diesel excavators. By eliminating equipment rental costs and reducing fuel costs, the cost of 1 m³ of stripping operations is reduced by 17.51 UAH. Consequently, the proposed mining system can reduce the cost of 1 m³ of stripping operations and gain additional profit. Conclusions The paper examines the improvement of stripping operations in the conditions of the Zakhidnyi Quarry No.1 of the Andriyivsky fireclay field, mined by VESCO PJSC. The research performed makes it possible to determine the maximum and minimum possible values of the entry way width for a dragline excavator under safety conditions and during upward and downward diggings. It has been revealed that with a complicated non-transport mining system, the maximum value of the entry way width is Amax = 59 m, and the minimum is Amin = 16.5 m.
The dependence of the velocity of mining the overburden and stoping benches on the entry way width has been determined for a complicated non-transport mining system, which allows us to state that, with the required mineral output of 100 thousand m³/year, the velocity of mining the overburden benches is greater than the velocity of mining the stoping bench for the different values of the entry way width. After determining the re-excavation coefficient for different values of the excavator entry way width, it has been revealed that the re-excavation coefficient increases by 6% when the entry way width increases. Taking this into account, the rational ESH-10/70 entry way width is 30 m, since with this value of the entry way width the greatest use of the excavator in time is achieved and the maximum visibility of the face for the excavator operator is provided. The economic effect of implementing a complicated non-transport mining system using ESH-10/70 draglines in the Zakhidnyi Quarry No.1 is a reduction of the cost of stripping operations by 37.1% compared with the combined mining system, which means additional profit in the amount of 30.81 million UAH per year. In the expression for tmov, lbl is the length of the block mined by the dragline excavator, m; tsw.c is the time spent on switching the cable network, hour; tpl is the time spent on planning the track for moving the dragline, hour; and υe is the dragline excavator movement velocity, m/hour. The theoretical movement velocity of the ESH-10/70 is υe = 200 m/hour. Figure 3. Complicated mining system using a single stripping excavator. Figure 6. Structure of unit costs per 1 m³ of stripping operations. Table 1. Parameters for a complicated non-transport mining system with different entry way widths.
2024-03-24T15:07:53.004Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "16f72ca52ba69e4d3c0fb670639d84a1a61e412b", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/1319/1/012001/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "90ad336bfe52ebb874f5f1554557b40511060e66", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
20748370
pes2o/s2orc
v3-fos-license
Molecular Clock Regulates Daily α1–2-Fucosylation of the Neural Cell Adhesion Molecule (NCAM) within Mouse Secondary Olfactory Neurons* Background: Mammalian olfaction has circadian rhythm, and glycosylation plays critical roles in the olfactory system. Results: α1–2-Fucosylation increases during the nighttime in axons of secondary olfactory neurons in WT but not in Clock mutant mice. Conclusion: Rhythmic α1–2-fucosylation governed by clock genes is a potential mechanism of circadian olfaction. Significance: Glycosylation in the central nervous system is circadian. The circadian clock regulates various behavioral and physiological rhythms in mammals. Circadian changes in olfactory functions such as neuronal firing in the olfactory bulb (OB) and olfactory sensitivity have recently been identified, although the underlying molecular mechanisms remain unknown. We analyzed the temporal profiles of glycan structures in the mouse OB using a high-density microarray that includes 96 lectins, because glycoconjugates play important roles in the nervous system such as neurite outgrowth and synaptogenesis. Sixteen lectin signals significantly fluctuated in the OB, and the intensity of all three that had high affinity for α1–2-fucose (α1–2Fuc) glycan in the microarray was higher during the nighttime. Histochemical analysis revealed that α1–2Fuc glycan is located in a diurnal manner in the lateral olfactory tract that comprises axon bundles of secondary olfactory neurons. The amount of α1–2Fuc glycan associated with the major target glycoprotein neural cell adhesion molecule (NCAM) varied in a diurnal fashion, although the mRNA and protein expression of Ncam1 did not. The mRNA and protein expression of Fut1, a α1–2-specific fucosyltransferase gene, was diurnal in the OB. Daily fluctuation of the α1–2Fuc glycan was obviously damped in homozygous Clock mutant mice with disrupted diurnal Fut1 expression, suggesting that the molecular clock governs rhythmic α1–2-fucosylation in secondary olfactory neurons. These findings suggest the possibility that the molecular clock is involved in the diurnal regulation of olfaction via α1–2-fucosylation in the olfactory system. The circadian clock regulates various behavioral and physiological rhythms in mammals. Circadian changes in olfactory functions such as neuronal firing in the olfactory bulb (OB) and olfactory sensitivity have recently been identified, although the underlying molecular mechanisms remain unknown. We analyzed the temporal profiles of glycan structures in the mouse OB using a high-density microarray that includes 96 lectins, because glycoconjugates play important roles in the nervous system such as neurite outgrowth and synaptogenesis. Sixteen lectin signals significantly fluctuated in the OB, and the intensity of all three that had high affinity for ␣1-2-fucose (␣1-2Fuc) glycan in the microarray was higher during the nighttime. Histochemical analysis revealed that ␣1-2Fuc glycan is located in a diurnal manner in the lateral olfactory tract that comprises axon bundles of secondary olfactory neurons. The amount of ␣1-2Fuc glycan associated with the major target glycoprotein neural cell adhesion molecule (NCAM) varied in a diurnal fashion, although the mRNA and protein expression of Ncam1 did not. The mRNA and protein expression of Fut1, a ␣1-2-specific fucosyltransferase gene, was diurnal in the OB. 
Daily fluctuation of the ␣1-2Fuc glycan was obviously damped in homozygous Clock mutant mice with disrupted diurnal Fut1 expression, suggesting that the molecular clock governs rhythmic ␣1-2-fucosylation in secondary olfactory neurons. These findings suggest the possibility that the molecular clock is involved in the diurnal regulation of olfaction via ␣1-2-fucosylation in the olfactory system. Endogenous oscillators control various behavioral and physiological circadian rhythms in most living organisms ranging from bacteria to humans. The olfactory bulb (OB) 2 has recently been identified as a circadian oscillator that mediates daily changes in mammalian olfaction, and levels of olfactory sensitivity in rodents are far higher during the early nighttime than the daytime (1,2). The firing of secondary olfactory neurons of the OB (3), as well as the suprachiasmatic nucleus (4), habenular nucleus (5), cerebellum (6), and hippocampus (7) is circadian. The molecular mechanisms generating circadian olfaction remain obscure, although diurnal fluctuations have been found among connexins, AMPA receptors, and monoamines in the OB (8,9). Several molecular findings suggest that the periodic expression of clock genes drives the circadian oscillator in various tissues. Basic helix-loop-helix/Per-Arnt-Sim (PAS) transcription factors such as CLOCK, NPAS2, and BMAL1 are positive regulators of an autoregulatory transcription-translation feedback loop of the molecular circadian clock (10). Clock was the first clock gene to be identified in vertebrates by forward mutagenesis using N-ethyl-N-nitrosourea in a behavioral screening. The Clock allele is truncated and causes a deletion of 51 amino acids, but the mutation does not have a significant effect on the N-terminal basic helix-loop-helix and PAS domains, leaving CLOCK dimerization and DNA binding intact (10). Hundreds of circadian clock-controlled genes that regulate an impressive diversity of biological processes in peripheral tissues have been identified (11,12). Granados-Fuentes et al. (2) showed that canonical clock genes are involved in the regulation of circadian olfactory sensitivity in mice, but the underlying mechanisms remain unknown. Glycosylation affects the functional properties of proteins as well as lipids and regulates several cellular functions in various tissues. Glycosylation plays critical roles in neuronal formation such as neurite outgrowth and synaptogenesis in the OB (13), and the abundance and location of glycoconjugate moieties vary among developmental stages in the main olfactory bulb (14) and in the accessory olfactory bulb (AOB), which is the primary center of the vomeronasal system (15). We postulated that glycoconjugates in the OB regulate circadian rhythms in olfaction, because several recent studies have found that glycosylation is involved in regulation of the circadian clock (16,17). We evaluated temporal changes in glycan structures in the OB using a high-density lectin microarray, which is a useful platform for glycan analysis (18). Lectins are proteins that bind with high affinity to specific glycan structures (19). We then investigated diurnal variations of the ␣1-2-fucose (␣1-2Fuc) glycan, because all three lectins that had high affinity for this glycan in our microarray significantly fluctuated in a diurnal manner. Histochemical analysis using Ulex europaeus agglutinin-I (UEA-I), which detects ␣1-2Fuc glycan, confirmed circadian variation of ␣1-2Fuc glycan in axon bundles of secondary olfactory neurons. 
Real-time PCR and Western blotting revealed diurnal mRNA and protein expression of Fut1, respectively, in the OB. Daily fluctuations in ␣1-2Fuc glycan and Fut1 expression were severely dampened in Clock mutant mice, suggesting the possibility that the molecular clock is involved in diurnal regulation of olfaction via ␣1-2-fucosylation in the OB. EXPERIMENTAL PROCEDURES Animals-Male C57BL/6NCrSlc and ICR mice (Japan SLC), as well as Clock mutant mice on an ICR background (20) were maintained under a 12-h light:12-h dark cycle (lights on at 08:00 as zeitgeber time (ZT) 0) at a controlled ambient temperature of 24 Ϯ 1°C. This study proceeded in accordance with the guidelines for the Care and Use of Laboratory Animals at the National Institute of Advanced Industrial Science and Technology (AIST), and all procedures were approved by the Animal Care and Use Committee at AIST (approval number 2013-020). Lectin Microarray Production-Lectin microarrays were produced as described (21) with a minor modification. Briefly, 96 lectins were dissolved in spotting solution (0.5 mg/ml each) and spotted onto epoxysilane-coated glass slides in triplicate using a MicroSys4000 non-contact microarray printing robot (Genomic Solution). Lectins immobilized on the slides were incubated with blocking reagent N102 (NOF Co.) and stored at 4°C. Spot quality and the reproducibility of the microarrays were confirmed before use as described (21). Lectin Microarray Analysis-Male C57BL/6NCrSlc mice (age, 26 weeks; n ϭ 3 per group) were killed by cervical dislocation at 10:00 (ZT2) and 22:00 (ZT14), and the OB and liver (as control) were homogenized. Hydrophobic fractions isolated from these sources using the CelLytic MEM Protein Extraction kit (Sigma) as described by the manufacturer were labeled with fluorescent Cy3 monoreactive dye (GE Healthcare), and excess Cy3 was removed by desalting through columns containing Sephadex G-25 (GE Healthcare). The protein concentration was adjusted to 2 g/ml with PBS-T (10 mM PBS, pH 7.4, 140 mM NaCl, 2.7 mM KCl, and 1% Triton X-100), then the hydrophobic fractions were labeled with Cy3 NHS ester (GE Healthcare), diluted with probing buffer (25 mM Tris-HCl, pH 7.5, 140 mM NaCl, 2.7 mM KCl, 1 mM CaCl 2 , 1 mM MnCl 2 , and 1% Triton X-100) to 0.5 g/ml, applied to the lectin microarray, and left overnight. After washing with probing buffer, images were acquired using an evanescent field-activated GlycoStation Reader 1200 fluorescence scanner (GP BioSciences). Fluorescence signals emitted by each spot were quantified using Array Pro Analyzer version 4.5 (Media Cybernetics), and the background value was subtracted. Lectin signals from triplicate spots were averaged and normalized to the mean value of the 96 lectins immobilized on the microarray. The membranes were incubated with 3% powdered skim milk in PBS followed by biotinylated UEA-I (1 g/ml) or anti-NCAM antibody (1:200) in PBS for 1 h. Bound anti-NCAM antibody was reacted with biotinylated secondary antibody (1:5000) in PBS for 1 h. The biotinylated substances were reacted with the avidin-biotin complex reagent (Vector Laboratories) for 30 min. These complexes were stained using 0.02% 3,3Ј-diaminobenzidine tetrahydrochloride (DAB) dissolved in 50 mM Tris-HCl containing 0.006% H 2 O 2 for 1 min (rapid DAB staining) or detected using ImmunoStar LD (Wako Pure Chemicals). The negative control comprised UEA-I that had been pre-absorbed with 0.5 M L-fucose. 
Other membranes were incubated with Block Ace (Dainippon Pharmaceutical) in PBS followed by anti-FUT1 antibody (0.25 g/ml) in PBS for 1 h. The bound antibody was reacted with secondary antibody (1:5000) in PBS for 1 h and detected using ImmunoStar LD. The amounts of ␣1-2Fuc glycan, NCAM, and FUT1 were normalized relative to the amount of ␤-actin. Histochemical Analysis-Male ICR and Clock mutant mice (age, 12 weeks; n ϭ 3 per group) were anesthetized at 10:00 (ZT2) and 22:00 (ZT14) with an intraperitoneal injection of pentobarbital (0.20 mg/g body weight), sacrificed by cardiac perfusion with 4% paraformaldehyde fixative, and histochemically processed as described (22) with a minor modification. Briefly, the OB was routinely embedded in paraffin and cut sagittally into 5-m thick sections, which were deparaffinized, rehydrated, and incubated with 0.3% H 2 O 2 in methanol followed by 3% normal goat serum. The sections were incubated with biotinylated UEA-I (20 g/ml) or anti-NCAM antibody (1:50) overnight in PBS at 4°C. The sections that had been incubated with anti-NCAM antibody were reacted with biotinylated secondary antibody (1:200) in PBS for 1 h. Those with biotinylated complexes were reacted with the avidin-biotin complex reagent and colored with DAB. The negative control comprised UEA-I that had been pre-absorbed with 0.05-0.5 M L-fucose. Quantitation of Histochemical Staining Intensity-Gray scale images were inversed and measured using ImageJ (National Institutes of Health) software. A lower intensity threshold was adopted for negative regions on sections. Mean signal intensity was quantified within the glomerular layer of the AOB and the lateral olfactory tract, which contains most axons of secondary olfactory neurons. Real-time RT-PCR-Male ICR and Clock mutant mice (age, 12 weeks; n ϭ 4 -5 per group) were killed by cervical dislocation at 10:00 (ZT2), 16:00 (ZT8), 22:00 (ZT14), and 4:00 (ZT20). Total RNA was extracted from the OB using RNAiso Plus (Takara Bio), and cDNA was synthesized using PrimeScript RT reagent kits (Takara Bio). Real-time RT-PCR proceeded using SYBR Premix ExTaq II (Takara Bio) and a LightCycler (Roche Diagnostics). The reaction conditions were 95°C for 10 s followed by 45-55 cycles of 95°C for 5 s, 57°C for 10 s, and 72°C for 10 s. Table 1 shows the sequences of the primer pairs. The amount of target mRNA was normalized relative to that of Gapdh mRNA. Diurnal Variation of Glycan Structures Determined Using a High-density Lectin Microarray-Signals emitted by 16 of the 96 lectins differed between day and night (Student's t test, p Ͻ 0.05, n ϭ 3; see supplemental Table S1). Signals from 1 and 15 of the lectins increased at ZT2 and ZT14, respectively (Fig. 1A). These results suggested that the abundance of several glycan structures in the mouse OB varies in a diurnal manner. Among 96 lectins, UEA-I, Trichosanthes japonica agglutinin-II and Momordica charantia agglutinin specifically reacted with ␣1-2Fuc glycan, and all three reacted more intensely at ZT14 than at ZT2 (Fig. 1B) although the signal intensity was identical between ZT14 and ZT2 in liver extracts (Fig. 1C). Western blotting using UEA-I also showed significant diurnal variation in the mouse OB (one-way ANOVA, p Ͻ 0.05, n ϭ 3; Fig. 1D). These results suggest that ␣1-2Fuc glycan is expressed in a diurnal and tissue-specific manner in the OB, and that more is expressed during the nighttime. 
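The microarray signal handling summarized in these results and in the procedures (subtracting the background from each spot, averaging the triplicate spots, normalizing each lectin to the mean of all 96 lectins on the array, and then comparing ZT2 with ZT14 by Student's t test) can be sketched as follows. This is an illustrative reimplementation on made-up arrays, not the authors' analysis code; the synthetic data generator and the artificially elevated lectin are inventions used only to exercise the pipeline.

```python
import numpy as np
from scipy import stats

def normalized_lectin_signals(raw: np.ndarray, background: np.ndarray) -> np.ndarray:
    """raw: (96 lectins, 3 replicate spots) fluorescence values for one animal.
    Returns one background-subtracted, array-normalized value per lectin."""
    net = np.clip(raw - background, 0, None)     # subtract the background value
    per_lectin = net.mean(axis=1)                # average the triplicate spots
    return per_lectin / per_lectin.mean()        # normalize to the mean of all 96 lectins

rng = np.random.default_rng(0)

def fake_animal(day: bool) -> np.ndarray:
    """Synthetic array data for one animal; lectin 0 is made 'higher at night'."""
    raw = rng.gamma(shape=4.0, scale=500.0, size=(96, 3))
    if not day:
        raw[0] *= 1.8                            # pretend one alpha1-2Fuc binder rises at ZT14
    bg = rng.normal(100.0, 10.0, size=(96, 3))
    return normalized_lectin_signals(raw, bg)

zt2 = np.stack([fake_animal(day=True) for _ in range(3)])    # n = 3 mice per time point
zt14 = np.stack([fake_animal(day=False) for _ in range(3)])

# Two-sample t test per lectin, with p < 0.05 called a day/night difference, as in the text.
t, p = stats.ttest_ind(zt2, zt14, axis=0)
print("lectins with p < 0.05:", np.where(p < 0.05)[0])
```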
Diurnal Variation of ␣1-2Fuc Glycan in Axon Bundles of Secondary Olfactory Neurons-Histochemical analysis showed that UEA-I reacted with glomeruli in the AOB, the lateral olfactory tract that comprises axon bundles of secondary olfactory neurons, and cell bodies of these neurons ( Fig. 2A). The UEA-I reaction with the lateral olfactory tract ( Fig. 2A, arrows and arrowheads) was significant at ZT14, but weak at ZT2 (Student's t test, p Ͻ 0.05, n ϭ 3; Fig. 2D), whereas this reaction with glomeruli in the AOB ( Fig. 2A, asterisks) was identical between these time points (Student's t test, p ϭ 0.92, n ϭ 3; Fig. 2C). Pre-absorption with L-fucose dose dependently inhibited the UEA-I reaction (Fig. 2E). These findings agree with the results of the lectin microarray and suggest that ␣1-2Fuc glycan is more abundant in axons of secondary olfactory neurons during the early nighttime than in the morning. Diurnal Variation of ␣1-2Fuc Glycan Does Not Depend on the Amount of Main Core Protein NCAM-Murrey et al. (25) found using UEA-I affinity chromatography that 32 proteins including NCAM are ␣1-2-fucosylated in the OB, and Pestean et al. (26) reported that NCAM is the main glycoprotein with ␣1-2Fuc glycan in the OB. We again confirmed using Western blotting, immunoprecipitation, and histochemical means that the main core protein of ␣1-2Fuc glycan in the OB of WT mice is NCAM. Western blotting using UEA-I and rapid DAB staining detected only two bands with a molecular mass range that was similar to that of NCAM (about 180 and 110 kDa; Fig. 3A), although several UEA-I-positive bands were detected using luminescence (Fig. 3A). NCAM protein mainly consists of different proportions of three isoforms that are defined according to their molecular weight as NCAM-180, -140 and -120, at various developmental stages (27), and NCAM-180 and -120 seemed to be mainly ␣1-2-fucosylated in the adult OB. Immunoprecipitated NCAM reacted with UEA-I, with an intensity that varied in a diurnal manner (one-way ANOVA, p Ͻ 0.05, n ϭ 3-4), peaking at ZT14 (Fig. 3B). At both ZT2 and ZT14, anti-NCAM antibody significantly reacted with the lateral olfactory tract, but not with glomeruli in the AOB (Fig. 3C). The mRNA expression of Ncam1 did not vary (one-way ANOVA, p ϭ 0.40, n ϭ 4 -5; Fig. 3D), and the amount of NCAM protein was identical between ZT2 and ZT14 (Student's t test, p ϭ 0.46 n ϭ 3; Fig. 3E). These results suggested that ␣1-2-fucosylation associated with the major target protein NCAM is diurnal, and that diurnal variation of the ␣1-2Fuc glycan in the OB does not depend on the amount of NCAM. Diurnal Expression of Fut1 mRNA and FUT1 Protein-We used RT-PCR to assess the mRNA expression of eight enzymes that are involved in ␣1-2Fuc glycan metabolism. We confirmed that all applied primer sets could amplify each target gene, because we detected the PCR products of all of these enzymes in liver extracts with the predicted base pairs (Fig. 4A, Table 1). Although these eight enzymes play important roles in mammalian cells (28), only Fut1, Fut2, and Fuca1 transcripts were detected in the OB (Fig. 4A). Both Fut1 and Fut2 are ␣1-2specific fucosyltransferase genes and Fuca1 is a lysosomal ␣-fucosidase gene that generates fucose residues as a substrate for fucosylation by salvage pathways (28). These three enzymes are apparently critical for ␣1-2-fucosylation in the OB, and thus we used real-time RT-PCR to evaluate temporal changes in these three genes. 
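The staining quantitation used for the intensity comparisons above (grayscale images inverted, a lower threshold applied to exclude negative regions, and the mean signal taken within a region of interest such as the lateral olfactory tract or the AOB glomerular layer) has a direct numerical analogue. The NumPy sketch below mirrors those steps on a synthetic image; it is not the authors' ImageJ workflow, and the threshold value, image, and ROI are arbitrary placeholders.

```python
import numpy as np

def mean_staining_intensity(gray: np.ndarray, roi_mask: np.ndarray, threshold: float) -> float:
    """Mean DAB staining intensity inside an ROI.

    gray: 8-bit grayscale image (0 = black, 255 = white), as scanned.
    roi_mask: boolean mask of the structure of interest (e.g. lateral olfactory tract).
    threshold: inverted-intensity value below which pixels count as unstained background.
    """
    inverted = 255.0 - gray.astype(float)        # darker staining -> larger value
    stained = inverted >= threshold              # drop negative (unstained) regions
    keep = roi_mask & stained
    return float(inverted[keep].mean()) if keep.any() else 0.0

# Synthetic example: a faintly stained field with a darker band standing in for the tract.
rng = np.random.default_rng(1)
image = np.full((200, 200), 230.0) + rng.normal(0, 5, (200, 200))
image[80:120, :] -= 60.0                         # the "tract" band is darker (more stained)
image = np.clip(image, 0, 255)

roi = np.zeros_like(image, dtype=bool)
roi[80:120, :] = True

print(f"mean intensity in ROI: {mean_staining_intensity(image, roi, threshold=40.0):.1f}")
```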
The mRNA expression of Fut1 varied in a diurnal manner (one-way ANOVA, p Ͻ 0.05, n ϭ 4 -5), peaking late in the day Table S1. B and C, relative signal levels of three lectins with high affinity for ␣1-2Fuc glycan in OB (B) and liver (C). Values are shown as mean Ϯ S.E. *, p Ͻ 0.05; †, p Ͻ 0.01, Student's t test, n ϭ 3. D, Western blotting with UEA-I in WT and Clock mutant mice. *, p Ͻ 0.05, one-way ANOVA, n ϭ 4 -5. Different characters indicate significant differences, p Ͻ 0.05, Tukey-Kramer test. (ZT8) and reaching the nadir late in the night (ZT20) (Fig. 4B), whereas that of Fut2 did not (Fig. 4C). The mRNA expression of Fuca1 varied in a diurnal fashion (one-way ANOVA, p Ͻ 0.01, n ϭ 4 -5), peaking late at night (ZT20) and reaching the nadir in the morning (ZT2) (Fig. 4D). The amount of FUT1 protein also varied in a diurnal manner (one-way ANOVA, p Ͻ 0.05, n ϭ 3), peaking at ZT20 and reaching the nadir at ZT8 (Fig. 4E). These findings suggest that FUT1 expression that fluctuates according to the time of day results in diurnal variation in the abundance of ␣1-2Fuc glycan. Diurnal Variation of ␣1-2Fuc Glycan Depends on the Molecular Clock-Homozygous Clock mutant mice were examined using Western blotting, histochemistry, and real-time RT-PCR to determine whether the molecular clock is involved in diurnal variation of ␣1-2Fuc glycan in axons of secondary olfactory neurons. Diurnal variation of the UEA-I reaction with OB extracts was abolished in Clock mutant mice (one-way ANOVA, p ϭ 0.52, n ϭ 3; Fig. 1D). The UEA-I reaction with the lateral olfactory tract in Clock mutant mice was obviously weak at both ZT2 and ZT14 (Fig. 2B) and identical between at ZT2 and ZT14 (Student's t test, p ϭ 0.29, n ϭ 3; Fig. 2D), whereas that with glomeruli in the AOB was significant at both ZT2 and ZT14 as in WT mice (Fig. 2C). Real-time RT-PCR revealed that diurnal mRNA expression of Fut1 was completely abolished (one-way ANOVA, p ϭ 0.63, n ϭ 4 -5) at a low level in the OB of Clock mutant mice (Fig. 4B), whereas significant diurnal expression of Fuca1 disappeared (one-way ANOVA, p ϭ 0.56, n ϭ 4 -5) and continued at a high level in the OB of Clock mutant mice (Fig. 4D). High levels of Fuca1 expression appear to increase the number of fucose residues, which might result from a deficit in ␣1-2Fuc due to the low level of Fut1 expression in Clock mutant mice. These findings suggest that the molecular clock controls diurnal variation of the abundance of ␣1-2Fuc glycan by regulating Fut1 expression. DISCUSSION Olfaction is more sensitive during the period of active onset in the early night, than during the day in mice (2), and secondary olfactory neurons generate circadian rhythm in the activity (1,3). Glycosylation affects the functional properties of many proteins, and glycoconjugates in the OB play several important roles in neuronal formation including neurite outgrowth and synaptogenesis (13). Therefore, we postulated that glycan structures fluctuate from day to night in the mouse OB. Here, we showed that ␣1-2-fucosylation of NCAM fluctuates in a diurnal manner in axons of secondary olfactory neurons, and that such fluctuation is apparently governed by the molecular clock via rhythmic expression of the Fut1 gene. Lectin microarrays provide a useful platform for the exhaustive and precise analysis of minuscule differences in glycan structures between two specimens (18). 
The present results of such microarray indicated that the abundance of several glycan structures, including ␣1-2Fuc glycan, has diurnal variation in the mouse OB. The findings of several studies suggest that ␣1-2Fuc glycan mediates neuronal functions, such as learning and memory, as well as neuronal morphology, including neurite outgrowth and synaptic plasticity (29 -35). The inhibition of ␣1-2-fucosylation caused by 2-deoxy-D-galactose incorporation delays neurite outgrowth and synaptic plasticity in the rat hippocampus (29,30). Mice that are deficient in FUT1 exhibit developmental defects in neurons that express NCAM in the OB (25). We showed that the abundance of ␣1-2Fuc glycan fluctuates in the lateral olfactory tract, which mostly contains axons of secondary olfactory neurons. This finding suggests that the efficiency of transmission between secondary and higher neurons might be subject to diurnal variations in the olfactory system via changes in the degree of synaptic plasticity. Many fucosylated glycoproteins are transported in axons, and those that are synthesized arrive at neuronal endings within a few hours via rapid axonal transport (36). These properties of the axonal transport of fucosylated proteins apparently permit diurnal variation of ␣1-2Fuc glycan in axon bundles. Murrey et al. (25) found using UEA-I affinity chromatography that 32 proteins including NCAM are ␣1-2-fucosylated in the OB, and Pestean et al. (26) reported that NCAM is the main glycoprotein with ␣1-2Fuc glycan in the OB. We showed that the abundance of ␣1-2Fuc glycan associated with NCAM is diurnal, although the expression of Ncam1 mRNA and NCAM protein does not vary. NCAM functions in neurite outgrowth and synaptic formation, especially in the OB and hippocampus (37), and it is a highly glycosylated protein with multiple types of glycan (38,39). NCAM-180 contributes to the maintenance of synaptic formation (39), and OB development is defective in NCAM-180 knock-out (40), as in FUT1 knock-out (25) mice. Therefore, the part of ␣1-2Fuc glycan function described above might reflect the regulation of NCAM proteins. Müller et al. (41) showed that fucosylated glycoproteins are abundant and synthesized de novo in the OB, and that the OB absorbs exogenous fucose residues more frequently than other areas, indicating rapid metabolic turnover of fucosylated glycoproteins in the OB. We found that the expression of Fut1 mRNA and FUT1 protein significantly fluctuated according to the time of day, and that the molecular clock apparently con-trols the diurnal expression of Fut1 in the OB. Granados-Fuentes et al. (2) reported that clock molecules regulate circadian rhythm of olfactory sensitivity, although the molecular mechanism remains unknown. Because hundreds of circadian clockcontrolled genes regulate an impressive diversity of biological processes (11,12), the molecular clock-regulated diurnal expression of Fut1 might play an important role in olfactory sensitivity. Although diurnal variations of ␣1-2Fuc glycan regulated by the molecular clock is a potential mechanism for circadian rhythms in the activity of secondary olfactory neurons, further studies are needed to elucidate whether it is involved directly in circadian rhythm of olfactory sensitivity.
2018-04-03T02:58:22.646Z
2014-11-10T00:00:00.000
{ "year": 2014, "sha1": "6770d489bd349857aa209992d832e5e7dd37bf1d", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/289/52/36158.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "791317cb9c70b36dad7a5e42bc6aba7f1c14ab56", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine", "Chemistry" ] }
8304910
pes2o/s2orc
v3-fos-license
The Lincoln Continuous Tied-Mixture HMM Speech Recognizer The Lincoln robust HMM recognizer has been converted from a single Gaussian or Gaussian mixture pdf per state to tied mixtures in which a single set of Gaussians is shared between all states. There were some initial difficulties caused by the use of mixture pruning [12] but these were cured by using observation pruning. Fixed-weight smoothing of the mixture weights allowed the use of word-boundary-context-dependent triphone models for both speaker-dependent (SD) and speaker-independent (SI) recognition. A second-differential observation stream further improved SI performance but not SD performance. The overall recognition performance for both SI and SD training is equivalent to the best reported on the October 89 Resource Management test set. A new form of phonetic context model, the semiphone, is also introduced. This new model significantly reduces the number of states required to model a vocabulary. Introduction Tied mixture (TM) HMM systems [3, 6] use a Gaussian mixture pdf per state in which a single set of Gaussians is shared among all states: the observation pdf for state i is Σ_j c_ij G_j(o), with c_ij ≥ 0 and Σ_j c_ij = 1 (1), where i is the arc or state, G_j is the jth Gaussian, and o is the observation vector. This form of continuous observation pdf shares the generality of discrete observation pdfs (histograms) with the absence of quantization error found in continuous density pdfs. Unlike the non-TM continuous pdfs, TM pdfs are easily smoothed with other pdfs by combining the mixture weights. Unlike discrete observation HMM systems, the Gaussians (analogous to the vector quantizer codebook of a discrete observation system) can be optimized simultaneously with the mixture weights. The training algorithms are identical to the algorithms for training a Gaussian mixture system except the Gaussians are tied across all arcs. (This work was sponsored by the Defense Advanced Research Projects Agency.) Mixture and Observation Pruning Computing the full sum of equation 1 is expensive during training and prohibitively expensive during recognition since it must be computed for each active state at each time step. (Because the word sequence is unknown, recognition has many more active states than does training.) Ideally, one would only compute the terms which dominate the sum. However, it requires more computation to find these terms than it does to simply sum them. Two faster approximate methods for reducing the computation exist: mixture and observation pruning. Mixture pruning simply drops terms that fall below a threshold during training. The weights may then be stored as a sparse array, which also saves space. The computational savings are limited during the early iterations of training since only a few terms have been dropped. The final SD distributions are quite sharp (i.e. have only a few terms), but the final SI distributions are quite broad (i.e. have many terms). Thus the savings are limited for SI systems. When the distributions are smoothed with less specific models, they become quite broad again. These difficulties are just computational; there is an even greater difficulty. During training, the parameters of the Gaussians are also optimized, which causes them to "move" in the observation space. With mixture pruning, a "lost" Gaussian cannot be recovered. (This was the fundamental difficulty with the earlier version of the system reported in Reference [12].)
Instead of reducing the mixture order, observation pruning reduces the computation by computing the sums for all Gaussians whose output probability is above a threshold times the probability of the most probable Gaussian. (Some other sites have used the "top-N" Gaussians [3,7]. In our system, it gives inferior recognition performance compared to the threshold method.) All of the Gaussians must now be computed, but this is a significant proportion of the computation only in training. (Some pruning is possible. Our exploration of tree-structured search methods showed them to be ineffective because the number of Gaussians is too small and the observation order is too large.) The amount of computation is now dependent upon the separations of the Gaussian means relative to their covariances and the statistics of the observations. The computational savings were very significant except for the SI second-differential observation stream (discussed later). Observation pruning does not save space for several reasons. The observation-pruned TM systems suffer from the same "missing observation" problem as do the discrete observation systems and therefore no mixture weight can be allowed to become zero. Similarly, recruitment of "new" Gaussians (due to their movement) during training also requires that no mixture weight be allowed to become zero. Both can be accomplished by using full-size weight arrays and lower-bounding all entries by a small value. Smoothing now causes no organizational difficulty or increase in computation since all mixture weight arrays are full order. The TM CSR Development The following development tests were performed using the entire (12 speakers x 100 sentences, 10242 words) SD development-test portion of the Resource Management-1 (RM1) database. Three training conditions were used: speaker-dependent with 600 sentences per speaker (SD), speaker-independent with 2880 sentences from 72 speakers (SI-72), and speaker-independent with 3990 sentences from 109 speakers (SI-109). All tests were performed with the perplexity 60 word-pair grammar (WPG). The word error rate was used to evaluate the systems: word error rate = (substitutions + insertions + deletions) / (number of words in the correct transcription). (2) Line 1 of Table 1 gives the best results obtained from the non-TM Gaussian (SD) and Gaussian mixture (SI) systems [10]. The SD system used word-boundary-context-dependent (WBCD or cross-word) triphone models and the SI systems used word-boundary-context-free (WBCF) triphone models. The TM HMM systems were trained by a modification of the unsupervised bootstrapping procedure used in the non-TM systems: 1. Train an initial set of Gaussians using a binary-splitting EM to form a Gaussian mixture model for all of the speech data. 2. Train monophone models from a flat start (all mixture weights equal). 3. Initialize the triphone models with the corresponding monophone models. All of the systems described here use centisecond mel-cepstral first observation and time-differential mel-cepstral second observation streams. The Gaussians use a tied (grand) variance (vector) per stream. Each observation stream is assumed to be statistically independent of the other streams. Each phone model is a three-state linear HMM. The triphone dictionary also included word-dependent phones for some common function words. All stages of training use the Baum-Welch reestimation algorithm to optimize all parameters (the transition probabilities, mixture weights, Gaussian means, and tied variances) simultaneously.
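To make the tied-mixture computation and the observation-pruning idea concrete, here is a small NumPy sketch: all shared Gaussians are evaluated once per frame, only those within a threshold of the most probable Gaussian are kept, and each state's output probability is the weighted sum over the surviving components. It is a schematic reconstruction of the scheme described in the text, not the Lincoln system's code; the threshold value, dimensions, weight floor, and diagonal tied-variance details are illustrative.

```python
import numpy as np

def tied_mixture_frame_probs(obs, means, tied_var, weights, prune_ratio=1e-3):
    """Per-state output probabilities for one observation frame (equation 1 with pruning).

    obs:      (D,) observation vector
    means:    (J, D) shared Gaussian means (the tied codebook)
    tied_var: (D,) single tied (grand) diagonal variance shared by all Gaussians
    weights:  (I, J) mixture weights c[i, j] per state i (rows sum to 1, floored
              by a small positive value so no weight is exactly zero)
    prune_ratio: keep Gaussians whose likelihood is at least this fraction of the
              best Gaussian's likelihood (threshold-based observation pruning)
    """
    diff = obs[None, :] - means                               # (J, D)
    log_g = -0.5 * np.sum(diff * diff / tied_var, axis=1)     # (J,)
    log_g -= 0.5 * np.sum(np.log(2.0 * np.pi * tied_var))

    keep = log_g >= (log_g.max() + np.log(prune_ratio))       # observation pruning
    g = np.zeros_like(log_g)
    g[keep] = np.exp(log_g[keep])

    return weights @ g                                        # (I,) state output probabilities

# Tiny demo with made-up numbers: 4 states sharing 8 Gaussians in 2 dimensions.
rng = np.random.default_rng(3)
means = rng.normal(size=(8, 2))
tied_var = np.full(2, 0.5)
weights = rng.dirichlet(np.ones(8), size=4)
weights = np.maximum(weights, 1e-4)                           # lower-bound the mixture weights
weights /= weights.sum(axis=1, keepdims=True)

obs = rng.normal(size=2)
print(tied_mixture_frame_probs(obs, means, tied_var, weights))
```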
The lower bound on the mixture weights was chosen empirically.

The initial observation pruned TM system was derived from the mixture pruned systems described in [12] and gave the performance shown in line 2 of Table 1. It used WBCF triphone models because there was insufficient training data to adequately train WBCD models. Fixed-weight smoothing [15] and deleted interpolation [2] of the mixture weights were tested and the fixed-weight smoothing was found to be equal to or better than the deleted interpolation. (Bugs have been found in both implementations and the smoothing algorithms will require more investigation.) The fixed smoothing weights were computed as a function of the state (left, center, or right), the context (triphone, left-diphone, right-diphone, or monophone) and the number of instances of each phone. The TM system with smoothed WBCF triphone models showed a performance improvement for both the SD and SI trained systems. An additional improvement for both SD and SI systems was obtained by adding WBCD models (Table 1, line 3). Until the smoothing was added, we had been able to obtain only slight improvements in the SD systems and no improvement in the SI systems by adding the WBCD models.

Finally, a third observation stream was tested. This stream is a second-differential mel-cepstrum obtained by fitting a parabola to the data within ±30 msec. of the current frame. It produced no improvement for the SD system, but improved all of the SI systems (Table 1, line 4). However, there was a significant computational cost to this stream. Unlike the other observation streams, the number of Gaussians which pass the observation pruning threshold is quite large, which slowed the system significantly due to the cost of computing the mixture sums. Increasing the number of iterations of the EM Gaussian initialization algorithm reduced the number of active Gaussians and simultaneously improved results slightly. The computational cost of this stream is still quite large and methods to reduce the cost without damaging performance are still under investigation.

The best systems (starred in Table 1) were also tested on the Resource Management-2 (RM2) database. (This database is similar to the SD portion of RM1, except that it contains only four speakers. However, there are 2400 training sentences available for each speaker.) The two training conditions are SD-600 (600 sentences) and SD-2400 (2400 sentences). The development tests used 120 sentences per speaker for a total of 4114 words. The RM2 tests (Table 2) showed the SD systems to perform better when trained on more data. One of the speakers (bjw) and possibly a second (lpn) obtained performance which, in this author's opinion, is adequate for operational use. This is the first time we have observed this level of performance on an RM task. There is still, however, wide performance variation across speakers.

Semiphones

The above best systems all use WBCD triphones. A scan of the 20,000 word Merriam-Webster pocket dictionary yields the following numbers of phones: (All stress and syllable markings were removed and all possible word combinations were allowed for the cross-word numbers.) This suggests that a large vocabulary system using WBCD triphone models will require on the order of 60K phone models. (Even if the triphones are clustered to reduce the final number [8,13], all triphones must be trained before the clustering process.) These numbers assume no function word or stress dependencies.
(A variety of other context factors have also been found to affect the acoustic realization of phones [4].) While this number is not impossible--the Lincoln SI-109 WBCD system has about 10K triphones and CMU used up to 38K triphones in their vocabulary independent training experiments [5]--it is rather unwieldy and would require large amounts of data to train the models effectively. (60K triphones would require about 280M mixture weights and accumulation variables in the Lincoln SI system.)

One possible method of reducing the number of models is the semiphone, a class of phone model which includes classic diphones and triphones as special cases. (A classic diphone extends from the center of one phone to the center of the next phone. In a triphone based system, a diphone is a left or right phone-context sensitive phone model.) The center phone of a three section semiphone model of a word with the phonetic transcription /abc/ would be:

a_r-b_l-b_m    b_l-b_m-b_r    b_m-b_r-c_l

where l denotes the left part, m the middle part, and r the right part. As shown here, each section is written as a left and right context dependent section (i.e. a "tri-section"). Thus the middle part always has the same contexts and is therefore only monophone dependent. The left (and right) sections are dependent upon the middle part, which is always the same, and a section of the adjacent phone. Thus the left part is similar to the second half of a classic diphone, the center part is monophone dependent, and the right part is similar to the first half of a classic diphone. (In fact, we implemented the scheme using the current triphone based systems simply by manipulating the dictionary.) If the middle part is dropped, this scheme implements a classic diphone system and if the left and right parts are eliminated it reverts to the standard triphone scheme.

One of the advantages of this scheme is a great reduction in the number of models. For the above dictionary, the three section model has 5695 phones. (This number was derived from the above table and is therefore not quite correct since the single phone words were not treated properly. However, the number is sufficiently accurate to support the following conclusions.) If the semiphone system has one state per phone and the triphone system has three states per phone, each word model will have the same number of states (for a given left and right word context), but the semiphone system will have 5695 unique states to train and the triphone system will have 180K unique states to train.

Semiphones avoid one of the difficult aspects of cross-word triphones--the single phone word. A single phone word requires a full crossbar of triphones in the recognition network [11]. The semiphone approach splits the single phone into a sequence of two or more semiphones and simply joins the apexes of a left fan and a right fan for a two semiphone model or places the middle semiphone between the fans for a three semiphone model [11].

A final advantage of the semiphone approach over the classic diphone approach is the organization. The units are organized by the phone. This is a more convenient organization for smoothing and also makes the word endpoints explicitly available for word endpointing or any word based organization of the recognizer. Our current implementation of this scheme has not yet addressed smoothing the mixture weights of the semiphones, so the results-to-date can only compare unsmoothed semiphone systems with smoothed triphone systems.
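Since the semiphone scheme was implemented simply by manipulating the dictionary, a small sketch can illustrate the idea. The function below is a hypothetical reconstruction, not the Lincoln implementation; the unit-naming convention and the '#' word-boundary placeholder are assumptions made for illustration.

```python
def to_semiphones(pronunciation):
    """Expand a phone string into three-section semiphone units, each written
    as a context-dependent tri-section as described above. Word-boundary
    contexts are left abstract as '#'.

    pronunciation -- list of phones, e.g. ['a', 'b', 'c']
    returns       -- list of 'left-middle-right' section names, three per phone
    """
    units = []
    padded = ['#'] + pronunciation + ['#']
    for i, p in enumerate(pronunciation, start=1):
        left_ctx = f"{padded[i - 1]}_r"      # right section of the previous phone
        right_ctx = f"{padded[i + 1]}_l"     # left section of the next phone
        units += [f"{left_ctx}-{p}_l-{p}_m",
                  f"{p}_l-{p}_m-{p}_r",
                  f"{p}_m-{p}_r-{right_ctx}"]
    return units

# For example, to_semiphones(['a', 'b', 'c']) produces, for the centre phone /b/,
# the three sections a_r-b_l-b_m, b_l-b_m-b_r and b_m-b_r-c_l.
```

Rewriting the dictionary this way leaves the recognizer itself untouched: the existing triphone machinery simply sees a larger phone-like unit inventory with far fewer distinct context-dependent states.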
Line 1 of Table 3 repeats the corresponding entries for two smoothed triphone systems from Table 1 for comparison with the semiphone systems. Line 2 is an unsmoothed three-section semiphone system with one state per semiphone. For both training conditions, the number of unique states was reduced by about a factor of five. The difference in performance between the systems is commensurate with the difference between smoothed and unsmoothed triphone systems. Line 3 is equivalent to a classic diphone system with two states per semiphone and thus four states per phone rather than three states per phone as in the preceding systems. This system has twice as many states as the other semiphone system and yields equivalent performance. While the semiphone systems do not currently outperform the triphone systems, they bear further investigation.

The October 89 Evaluation Test Set

At the time of the October 89 meeting, the mixture pruned systems were not showing improved performance over the best non-TM systems and therefore non-TM systems were used in the evaluation tests. The best observation pruned systems (starred in Table 1) were tested using the October 89 test set in order to compare them to the results obtained at the other DARPA sites. The results are shown in Table 4. These results are not statistically distinguishable from the best results reported by any site at the October 89 meeting [14].

The June 90 Evaluation Tests

The best TM triphone systems (starred in Table 1) were used to perform the evaluation tests. Both systems used WBCD triphones with fixed weight smoothing. The SD systems used two observation streams and the SI-109 system used three observation streams. The results are shown in Table 5.

Conclusion

The change from mixture pruning to observation pruning has eliminated the Gaussian recruitment problem. The change increased the data space requirements, but provided a better environment for mixture weight smoothing and reduced the computational requirements for both training and recognition. Including fixed-smoothing-weight mixture-weight smoothing improved performance on both SD and SI trained systems and allowed the use of WBCD (cross-word) triphone models. Testing on the RM2 database showed that our systems developed on the RM1 database transferred without difficulty to another database of the same form. It also showed that our SD systems will provide better performance when given more training data (2400 sentences) than is available in the RM1 database (600 sentences). Operational performance levels were obtained on one or two of the (four) speakers.

We found a simpler context-sensitive model--the semiphone--to produce similar recognition performance to the (by now) traditional triphone systems. These models, which include the classical diphone as a special case, significantly reduce the number of states (or observation pdfs) which must be trained. The semiphone model will require further development and verification but it may be one way of simplifying our systems. Since the number of semiphones required to cover a 20,000 word dictionary is significantly less than the number of triphones required to cover the same dictionary, they may be a more practical route to vocabulary independent phone modeling than one based upon triphones.
FROM INTERWAR CROATIA (YUGOSLAVIA) AND MAKING WESTERN MEDICINE IN THE 1930s CHINA

To gain control and domination over a particular territory, medicine was often used as a tool for promoting different interests. Using the activities of the League of Nations Health Organization and the Rockefeller Foundation on the territory of China in the 1930s, this paper analyses the interconnection of international and local factors in the transformation of the traditional Chinese milieu to suit the new and trendy public health projects. These activities were conducted not only to improve the public health conditions in the country, but also to introduce Chinese public health to the processes of internationalization and standardization towards the Western-oriented type of medicine and medical education. The processes thus initiated necessarily interfered with political influences, economic interests and the cultural environment, as well as with military actions, in this very turbulent time of Chinese history. Public health activities were carried out by a group of international experts. Among them, the main positions were taken by two Croatian physicians: Andrija Štampar (later one of the founders of the World Health Organisation) and Berislav Borčić (a director of the School of Public Health in Zagreb). On the basis of correspondence between these two physicians, as well as

Original scientific article / Izvorni znanstveni članak. Acta Med Hist Adriat 2018; 16(1): 75-106. https://doi.org/10.31952/amha.16.1.3

Introduction

The promotion of political and economic interests, as well as cultural and scientific unification, was often expressed through medicine and especially public health. The interwar period presents an excellent opportunity to study this phenomenon. With the establishment of the League of Nations and its Health Organization (LNHO), health and, especially, public health issues attained international, indeed global, importance. Public health problems, as defined by scientific Western medicine, expanded out of the boundaries of Europe and Northern America to spread worldwide and to attempt to unify other health systems into its idiosyncratic format.
This essay discusses the experiences of two Croats from the Kingdom of Serbs, Croats and Slovenes (from 1929 the Kingdom of Yugoslavia), Dr Berislav Borčić (1891-1977) and Dr Andrija Štampar (1888-1958), who had leading roles among the international public health experts working in 1930s China towards the introduction of Western standards of public health and medicine. Berislav Borčić was charged with organizing a central hygiene institute in the Chinese capital, Nanking (Nanjing), while Andrija Štampar was appointed counsellor to the Chinese president Chiang Kai-shek. They came to China with the experience of having set up broadly ranging public health projects on the territory of today's Croatia, as well as the whole of the Kingdom of Serbs, Croats and Slovenes. That Kingdom was an agricultural country with an unevenly, mostly poorly, developed public health system. Štampar, a head of department in the Ministry of Public Health, and Borčić, as the director of the School of Public Health, had conducted large campaigns to establish public health institutions, educational programmes for the public health staff, as well as comprehensive systems of permanent and organized popular health education targeted at nearly all social and age groups, in urban and in rural communities. Additionally, Štampar and Borčić were instrumental in advancing international collaboration in the training of the medical staff: along with the specialized education in the newly founded social-medical institutions, it also included study trips overseas. Studying in international centres, Yugoslav medical staff gained new knowledge and exchanged ideas and experiences. Štampar, Borčić and their collaborators focused on increasing access to medical services, securing hygiene infrastructure, organizing energetic anti-epidemic campaigns, as well as conducting continuous health education and working to improve economic factors affecting the health of the population. Their activities influenced vital indicators and improved hygiene and health conditions, as well as the general culture and education of the population. The comprehensive project to improve the public health of the country did not remain without notice. Experts from the League of Nations Health Organization as well as the Rockefeller Foundation praised this innovative work, and the Rockefeller Foundation in particular financially supported Yugoslav efforts.

[Figure: Prof. dr. Andrija Štampar (1888-1958)]

Croatia and the whole of the Kingdom became popular destinations for public health study trips. The case of a rural, underdeveloped country that within a short period of time managed to significantly improve its public health became a model for similar activities in comparable countries. China was the first one to try it. (After the Second World War, countries, mainly former European colonies, that came together in the Non-Aligned Movement drew on Croatian/Yugoslav experiences to build their own public health systems.)

The interwar period was a turbulent time in Chinese history: a time of dramatic ideological, military and political conflict as well as external threat, followed by the Japanese occupation. It was the time when this large country, troubled by its lack of unity, war, and conflicts of ideas and interests, tried to take part in a project that, in many ways, was new and alien: Western medicine.
Scholars of the public health history of the 1920s and 1930s, the era of Chiang Kai-shek's government in China, and especially Iris Borowy, emphasize that it was the LNHO that pioneered the first involvement of Western medicine in China, and that it arrived at the direct Chinese invitation. Earlier entrances of Western medicine into China were through missionaries, who established their hospitals, or through private practitioners who in most cases came to practice in treaty ports or other regions in which Europeans and Americans resided. Whether Western medicine arrived in China invited or not is not the topic of this essay. Yet it is important to stress that public health and medical elements that arose within the West came to China in a legitimate manner and at the invitation of the highest governmental ranks, to form local health systems on the Western model. China thus began to build a Western health system. Of course, building of this system could not take place overnight and without the participation of locals educated at medical schools that followed the principles of Western medicine. Many Chinese had attended European and American medical schools as well as postgraduate and professional educational courses. With the establishment of the collaboration with the LNHO, such educational and training opportunities multiplied.

When in 1930 the LNHO charged the Danish professor Knud Faber with a study of medical education in China, it came to their knowledge that China had no more than three medical schools that could, to some extent, satisfy Western standards. These included the Sun Yat-sen University Medical School in Canton (Guangzhou), the Central University Medical College in Shanghai, and the Peking Union Medical College. Yet only the latter, established with Rockefeller Foundation funds, met Western standards. In the same report Faber stated that in China there were no more than 700 hospitals, the bulk of which were small and inadequately equipped, besides being outside the control of either central or local national authorities, and no more than 5000 doctors trained in the theory and practice of scientific medicine. Together with the LNHO, the contemporary Chinese government launched a daring project to create a new health system that would allow the education of physicians, the establishment of institutions (including laboratories) and the organization of an administration, the task of which was to prepare and monitor health regulations. Such a comprehensive health system was a novelty. Scholars, such as AnElissa, Borowy, Brown Bullock, Litsios and Yip, have written extensively about the establishment of the system, ministry, legislation, schools, problems and successes.
Probably because of linguistic barriers, the inadequate state of research of archival documents dating from the Kingdom of Serbs, Croats and Slovenes, and the long inaccessibility of Andrija Štampar's diaries, researchers paid little attention to the impact that these two public health experts had on international public health. Similarly, the innovative model of public health developed in Croatia and in the Kingdom of Serbs, Croats and Slovenes/Yugoslavia between the wars, while mentioned in the literature, was not adequately assessed, nor was its import for both interwar and postwar international health analysed. For this reason, this model was never placed in a broader context of modern public health history, and the two protagonists remained little known to the international scholarly community.

This essay uses archival documents kept in the Croatian State Archives in Zagreb and the Archives of the LNHO in Geneva to illuminate certain hitherto little explored details of this comprehensive project of Nationalist China, supported by the international community. It focuses on the work of two key experts whose role was to put the League of Nations' Chinese plans into action: Berislav Borčić and Andrija Štampar. An especially valuable source is Andrija Štampar's diary written during his travels in China. These two Croatian physicians, in their capacity as LNHO experts and consultants to the Chinese government, took key roles in the reform of the health system in China. They worked in areas of high priority such as protection from contagious diseases, immunization campaigns, the establishment of bacteriological and other laboratories, as well as a network of health and preventative institutions. They also promoted further development of modern health education and quarantine services in maritime transport. The latter area was of particular importance because the League's experts observed that the maritime quarantine and sanitary supervision in Chinese ports were ineffective and so presented a potential threat not only to China and the Far East, but also to European and American ports. The medical director of the Health Organisation of the League of Nations, the Polish bacteriologist Ludwik Rajchman, focused on these problems as early as his first visits to China.

In China, Borčić and Štampar followed prescriptions issued by Rajchman and complemented the work of Selskar Gunn, who represented the Rockefeller Foundation in the country. It is interesting to observe that it was precisely the group that had produced excellent results in the Kingdom of Serbs, Croats and Slovenes in the 1920s that gathered in China to work on the same problems in the 1930s. The group comprised not just Štampar and Borčić but also Gunn, who at the time was the Rockefeller Foundation's representative in the Kingdom and who decided on all projects that took place in the country. Gunn had shown remarkable sensitivity to local needs, understanding of the political situation and enviable diplomatic skills.
Štampar's ideas, experience and power, as well as Borčić's expertise and tenacity, formed the foundations of Rajchman's plans for China. It was a dream team: Borčić and Štampar for the League and Gunn for the Rockefeller Foundation: a line-up that had produced a miracle in the Kingdom of the Serbs, Croats and Slovenes that awed the entire international community in the 1920s. It all seemed ideal: all they had to do was to apply experiences collected while working in a small, underdeveloped, largely rural country, to a large, underdeveloped, largely rural country. Yet problems appeared.

(Far East as a term was popularized during the period of the British Empire for lands east of British India; in its usual sense it is comparable to East and Southeast Asia, and the Russian Far East might to some extent be included. In his diary, Štampar referred to Gunn as 'our old friend Gunn'. Štampar furthermore wrote: 'When Gunn was appointed to the position of the Foundation's vice-president for the Far East, he wrote to me several times. At that time we had not thought we would meet again at this end of the world and when we saw each other again we remembered our memories from Yugoslavia and the times when we travelled together and made plans. If it weren't for Gunn, who knows if the Foundation would have taken so much interest in our work, and if it would have helped us so much. In the past few months, Gunn visited a large proportion of Chinese institutions, and it is especially useful that, with the help of an expert, he examined the current state of all higher education institutions in China. The results of this study were dismal; the number of institutions that may be treated as "higher education" is incredibly small and most of them are in poor condition, inadequately equipped to teach curricula with which they are charged. At the same time, the study examined missionary institutions and they were shown to be no better' (Dugac Ž, Pećina M. (eds.), Andrija Štampar: Dnevnik s putovanja 1931-1938, Zagreb, HAZU, Srednja Europa, ŠNZ Andrija Štampar, 2008).)

Andrija Štampar and Berislav Borčić in the LNHO

China was not the first time that Štampar and Borčić collaborated with the LNHO. Their collaboration went back to the early 1920s, when the newly founded LNHO threw itself with great enthusiasm into the fight against epidemics and the establishment of international public health standards. The LNHO soon developed into an innovative institution that approached seriously the problem of the organization of public health services at the international level. It encouraged standardization, research and education, and built a comprehensive international infrastructure. The processes that the LNHO initiated promoted the establishment of an intellectual community that could take on these new tasks. It all started with an informal meeting in London in 1919, when British, French and American representatives as well as the delegates of the Paris Office (previously the International Office for Public Health) and the Red Cross League discussed plans for international health after the war. In subsequent years, many international public health activities developed. So, headed by Ludwik Rajchman, the LNHO organized numerous programmes and meetings of international experts who studied various areas of public health and provided national governments with guidelines for future activities.
Andrija Štampar and Berislav Borčić joined the organization in its early days. In March 1922 they represented the Kingdom of Serbs, Croats and Slovenes at the European Health Conference in Warsaw. The conference was of particular importance as it was the most important meeting of the newly founded LNHO since its London gathering. The conference was supposed to provide solutions for many problems of great importance to post-war Europe, such as protection from epidemics and especially from epidemic typhus, which was spreading from Central Asia, Russia, Belarus and Ukraine towards Europe.

The conference elected three committees. One of them had the task of analysing the epidemiological status based on reports provided by national representatives of countries that were affected by epidemics. The second committee was charged with suggesting measures to countries bordering Russia to ensure the exchange of information concerning the prevention of the spread of epidemics. The third committee was supposed to produce a detailed programme for the LNHO's activities. Štampar was elected vice-president of the second committee. That committee presented to the assembly its resolution concerning an international sanitary convention, and suggested using the extant Paris International Sanitary Convention as the foundation for a new one, to which additional points could be added. So, they proposed adding typhus and recurrent fever to the list of diseases that are subject to international regulations. They defined the need to sign bilateral agreements between states and to regulate deadlines for reporting epidemics. Of particular significance, and a proof of Štampar's influence, was the committee's emphasis on expanding the scope to include other health problems, such as social hygiene, diseases such as tuberculosis, sexually transmitted, workplace-related and other diseases, as well as the exchange of experts. The committee furthermore stressed that the efforts to improve public health would go nowhere without public support. So they suggested setting systematic health promotion and education as absolute priorities. All of these projects were realized by Štampar within the public health system of the Kingdom of Serbs, Croats and Slovenes, especially those related to popular health education. Under the auspices of the Zagreb School of Public Health, the Kingdom saw the emergence of comprehensive programmes based on popular health education using a variety of methods: lectures, courses, film screenings, public health events, as well as the involvement of local communities in various preventative programmes targeted towards the improvement of public and private hygiene and the interruption of the chain of infectious disease transmission.

These two experts were, thus, active participants in one of the foundational conferences of the League of Nations Health Organization, which set the course of development of the international health system not only between the wars but throughout the twentieth century. This important conference produced a strategy for international health, which no longer restricted itself to the old defensive principles of protection from epidemics using barriers and cordons, but expanded its activities to produce a joint attack on epidemics and, of particular significance, their sources.

Štampar continued his active work in the LNHO in subsequent years. In 1926 he was a member of the Committee for Hygiene Education, and from 1929 of the Committee for Social Insurance.
24The 1930 annual report of the Health Organization named Štampar as a member of the Health (Hygiene) Board of the organization.In the same year, he was also mentioned as a 22 More in Ž. Dugac, Kako biti čist i zdrav: zdravstveno prosvjećivanje u međuratnoj Hrvatskoj, Zagreb, Srednja Europa, 2010.member of the Sub-committee for Preventive Medicine and the Committees of Public Health Experts and for Social Insurance. 25gether with Berislav Borčić, Štampar took an active part in the organization and work of some of the LNHO conferences, such as the meetings organized on the occasions of the opening of Schools of Public Health in Zagreb and in Budapest in 1927.These meetings brought together the leading public health experts of this period. 26They took active part in the meetings organized by the LNHO for directors of European Schools of Public Health such as the Paris meeting of 1930 that focused on the education of medical personnel and health promotion. 27In the same year, Štampar participated in a meeting in Dresden that also focused on health promotion. 28Also in 1930, Štampar undertook a study trip through the Netherlands and Scandinavian countries to investigate the organization of their national health systems. 29t the European conference about rural hygiene in 1931, Štampar lectured on the most effective methods to organize health service in rural areas. 30e continued his League of Nation service in 1931 by helping the German Hygiene Museum in Dresden organize an exhibition on rural health. 31rislav Borčić, a veterinarian and a physician, had worked on the protection from contagious diseases since the establishment of the public health system in the Kingdom of Serbs, Croats and Slovenes in 1919.His first appointment was as director of the Pasteur Institute in Belgrade right after the First World War, which was followed by directorship of the Bacteriological (Epidemiological) Institute in Zagreb.With the establishment of the Hygiene Institute and School of Public Health in Zagreb in 1926, Borčić became the director of this important establishment that formed part of a network of institutions in numerous cities of Europe and North America, and the key local institution for international cooperation in this area.From its very beginnings, Borčić focused on the fight against contagious diseases.During the First World War and the time when the south of Serbia was ravaged by epidemic typhus, he worked at the Pasteur Institute in Niš.After the war he put his efforts into measures against the introduction of plague-which in this period appeared in neighbouring countries--into the Kingdom of Serbs, Croats and Slovenes.Borčić organized stringent quarantine, for example in then-important port of Martinšćica in the northern part of the Croatian Adriatic coast.He also participated in the efforts to combat epidemic typhus in the Kingdom in the early 1920s.In agreement with Frederic Russel and Simon Flexner from the Rockefeller Foundation, the LNHO paid for Borčić's study trip to the US in 1924. 32Borčić used this opportunity to visit various medical institutions on the West Coast. 33A letter that he sent to Rajchman upon his return to Europe in August 1924 reveals the significance of this study trip to Borčić's career. 34Borčić's collaboration with the LNHO continued in subsequent years so in 1926 the same organization supported his study trip to the London School of Tropical Medicine in the UK, and in 1928 he studied rural hygiene in Germany, Denmark, Netherlands and Belgium on behalf of the LNHO. 
35 I mentioned, 1920s were the golden years of public health in the Kingdom of Serbs, Croats and Slovenes.Both Borčić and Štampar completed internationally recognized and much applauded projects.By the end of this decade they had both earned international recognition.So in 1929 Borčić visited Greece with a LNHO committee to study local health conditions, institutions and personnel, and to suggest guidelines for further development.In 1930, Borčić organized a course for future collaborators of the new Greek public health service.The course was organized because the LNHO requested the Zagreb's School of Public Health to set up annual courses in public health for international travel grant recipients. 36In the same year, the LNHO charged Borčić with another comprehensive and demanding task: to go to China and to organize a central hygiene institute in the Chinese capital, Nanking (Nanjing).The institute was to become the core of a future 32 Letter hygiene institution for the entire country. 37That mission would keep Borčić busy until 1938, when China fell victim to the colossal Japanese invasion. 38 Project China The League of Nation's Health Organization began its negotiations about collaboration with China as early as 1925, when the League opened its Far Eastern Bureau in Singapore.The role of this office was to collect information about the spread of cholera, plague, smallpox and yellow fever in the Far East and to introduce epidemiological intelligence in numerous parts in this part of the world, as well as a quarantine system that would meet European requirements.This collaboration in the area of sanitary control in maritime ports was the beginning of a much larger collaboration between the LNHO and Chinese government, which eventually resulted in the introduction of Western medicine and the Western model of public health into China. 39In 1928, Rajchman visited China and opened negotiations on the collaboration in the area of maritime quarantine, medical and sanitary institutions, as well as medical training and anti-epidemic work.Because of the unrest in the country, it was only in 1930 that Rajchman succeeded in negotiating direct action. 40Accepting the invitation of the Chinese president, Rajchman left for China in 1930 to discuss which public health campaigns would take place, and which staff should be hired for them. 41As a result of these discussions, in the same year Berislav Borčić arrived in the country to help set up the Central Field Station at Nanking (Nanjing).Borčić thus became the chief health advisor to the Government lead by Chiang Kai-shek.The collaboration of the LNHO with the Chinese government was furthermore expressed in the appointment of Dr J. Hen Liu, Chinese health minister, to the post of 37 Belicza The LNHO was not the only organization that introduced Western medical and public health standards into China, a country with its own medical tradition.Missionary associations, individual doctors and European colonies (at treaty ports) all had been promoting and using Western medicine for decades.Also in the interwar period the Rockefeller Foundation launched its Chinese programmes of which the largest and most significant was the establishment of the Peking (Beijing) Medical School.The Foundation invested heavily into setting up Western medical training.a vice-chairman of the Health Committee of the LNHO in 1931.Dr J. Hen Liu also became a member of the Sub-Committee on the Budget of the Far-East Bureau of the LNHO, as well as the member of the Opium Commission. 
42he Nanjing Central Field Station, a new public health centre and a counterpart of the European schools of public health, became operational as early as 1931. 43Its employees were a mixture of local Chinese public health workers and international experts. 44nutes of the Health Committee of the LNHO meeting in May 1931 show that the Chinese minister of health through his representative Dr Wu Lien-teh, director of the National Quarantine Service, informed the committee members that the Chinese Minister of Finance T.V. Soong approved the budget for the three-year plan for the Chinese National Health Service.Wu Lien-teh also requested the Health Committee of the LNHO to send to China experts, and "particularly Dr. Borčić, Dr. Park 45 and Dr. Faber 46 have rendered us invaluable service." 47In the same year, Borčić in China was joined by Štampar, first in his capacity of a LNHO expert and then as an advisor to the Chinese government. The Borčić-Štampar duo in China As mentioned previously, Borčić arrived in China first, in 1930, and Štampar followed him in 1931.These two experts would stay in China, with few interruptions, until the end of the 1930s.As early as 1930 Štampar knew that Rajchman would send him to China, yet his first journey across the ocean was a westward one.In September 1931 and at the invitation of the Rockefeller Foundation (as well as in his capacity of the LNHO advisor), Štampar left for the US and China.The Foundation invited him to take a tour of the best known American universities and public health projects, to overview activities taking place in the States and to provide his feedback.The LNHO requested him to review health administration and costs of health services, as well as the means of production of hygienically acceptable milk. 49During Štampar's stay in the US, a telegram from Borčić, stationed in Nanjing, arrived, informing Štampar that funds for his trip to China had been approved, and that he should plan a trip via Honolulu and Japan. 50fortunately, Borčić did not write a travel diary during his time in China, so sources for a study of his career are scant.In contrast, Štampar wrote a diary from 1931 to 1938 in which he described in great detail his experiences during his travels.This diary is an excellent source for the study of public health-as well as national politics-of China in this period. 51It tells us that Štampar disembarked in Shanghai on 20 January 1930.It was his first visit to that country and it took place in a particularly inauspicious moment, while Japanese bombs were falling on Shanghai, river Yangtze flooded and its tributaries caused massive damages in the area inhabited by 25 In his rather personal diary Štampar recorded some information about the life and career of Berislav Borčić in China.For instance, he wrote about the life of the family Borčić, thus providing a more complex perspective on the conditions in which his colleague worked.So, during his second trip to China Štampar wrote: "Mara (Borčić's wife) is sickly and she has lost much weight, she is feeling poorly both physically and mentally… Their apartment is lovely, in a peaceful neighbourhood outside the town.They live in a two-storey villa in a spacious walled courtyard.There is grass in the courtyard so the children can play.They do not go out at all; Berica (Borčić) goes to the office and comes back home late.Mara says they live in a golden cage."(Ž.Dugac, M. Pećina (ed.) 
Andrija Štampar, Dnevnik s putovanja 1931-1938.Zagreb, HAZU, Srednja Europa, ŠNZ «Andrija Štampar», 2008, 194).material goods, sought shelter.This situation further increased threat from dysentery, cholera and malaria. 52At that time, Borčić and his new European reinforcement, headed by the Romanian hygiene professor Mihai Ciuca, organized field units in ravaged, critical areas.The tasks of these units included organizing immunizations and various health campaigns. 53on after his arrival in China, Štampar got in touch with John B. Grant, professor of hygiene at Peking Union Medical College, established, as previously mentioned, by the Rockefeller Foundation. 54Grant was a remarkable connoisseur of China, and an expert in the areas of hygiene and social medicine.Štampar and Grant soon found a common language and started a collaboration that would last for the entire duration of their stays in China. 55rant introduced Štampar with one of the most interesting public health projects of contemporary China not launched by the LNHO but by local experts, oriented towards modern public health, yet well acquainted with local conditions and Chinese rural traditions.This was Ting Hsien project, a model-village that accommodated a unique project on hygienic and educational improvement of the village.The project was based on ideas of the legend of Chinese public health and education, Dr James Yen. 56 James Yen was an educator and an organizer of the cultural, health and economic revival of the Chinese village.His project took place from 1926 to 1937 in Ting Hsien.The project (also known as the experiment) in Ting Hsien used education through so-called people's schools to promote various innovations, from the breeding of hybrid pigs and agricultural co-operatives to cultural performances.All of these were very similar to the projects that Štampar and his colleagues conducted in the Kingdom of Yugoslavia.Public health work was of particular importance and the project educated an original kind of rural medical assistants whose role was to assist physicians and other medical staff in those regions where impressed by this project; he returned to Ting Hsien on multiple occasions, because it was based on similar principles to the programme that Štampar launched and managed in the Kingdom of Yugoslavia in 1920s. 57A man who at the time was a junior project collaborator, Dr C.C. Chen (Ch'en Chihch'ien), later undertook further education in public health in the Kingdom of Yugoslavia and especially in the School of Public Health.Upon his return to China, he became a leading architect of the Chinese public health experts and he continued his work after the Second World War. 58 early as his first visit to this large and populous country in which the LNHO hoped to establish a modern healthcare system, Štampar observed the difficult social situation of the Chinese population.For him, this was the main obstacle to any improvement in the public health.In his later reports, and in his more general considerations about public health in the contemporary world, Štampar used his Chinese experience to argue that the work on the improvement of public health is doomed to fail if the standard of living falls below the existential minimum.His study of economic and social conditions in this country led him to believe that eradicating huge social differences-and especially exploitation by large landowners-and the establishment of a just social order were of essential importance. 
59ile the first visit of China was a training trip of a sort, Štampar's next trip, in 1933, was in the capacity of a League of Nations expert, to help the Chinese government organize their healthcare service.Štampar placed himself fully at disposal of the Chinese authorities and travelled to the parts of China rarely (or never) visited by Europeans.He travelled to the Chinese Far professional medical staff was scarce.The Ting Hsien project attracted the attention of the entire world, and especially of Štampar, for whom the new methods of rural development independent from the control of the central government, revolution or foreign investment were of particular interest.The control of the central government and political turbulences played a key role in the slowing down of the public health programmes in the Kingdom of Yugoslavia following the establishment of the King Alexander's dictatorship in 1929 (Ž.Dugac, M. Pećina (ed.) Andrija Štampar, Dnevnik s putovanja 1931-1938.Zagreb, HAZU, Srednja Europa, ŠNZ «Andrija Štampar», 2008). 57 The village Mraclin near a Zagreb (Croatia) was also a model village, an experimental station for trying out new methods of rural hygiene.It was also the training site for students of the Zagreb School of Medicine, as well as international visitors sent to the Kingdom with the support of the RF and LNHO.West, where he would return several times in the following years.This was the part of China that would become the site of Štampar's most intense social medical institutional activities.In those provinces forgotten by the Chinese administration, Štampar would repeat the feats he had once accomplished in the Kingdom of Yugoslavia: establish new social-medical institutions and schools for training medical personnel. His journey began with a trip across the Yellow River and river Wei down the route towards the old imperial city of Xi'an, the capital of Shaanxi province.He toured the district of Chang'an and the surroundings of Xi'an, before taking an air trip to the capital of Gansu, Lanzhou.He then returned to Nanjing and Shanghai to meet one of the most influential people of the period, Tse-Ven Soong, who in the Nationalist Government served at various positions: vice-president of the government, minister of finance and the governor of the Central Bank of China.Štampar established close collaboration with him. 60Having read Štampar's reports, Soong decided to accompany him to western provinces, first to Xi'an, then Lanzhou and then further, to the province of Qinghai (historical name of Kokonor, created out of northern Tibet) and its capital Xining.Štampar then flew along the course of the Yellow River and then crossed the Gobi desert at Alxa/Alashan, a mountain range in northern China between Ningxia and Inner Mongolia, arriving into the province Ningxia/Hui.The journey ended with their return to Xi'an and then Beijing. Soon after this trip, Štampar left for his third journey to the China's Far West.Soong granted him sufficient financial support to establish new medical institutions in the four provinces.He was going to start with the provincial health centres and schools for lower medical personnel.He found himself in Xi'an and the Yellow River area again.Then he left for Lanzhou and Xining, and again visited Ningxia. 6160 Tse-ven (Tzu-wen) Soong, a prominent businessman and politician.Soong's brothers-inlaw were Sun Yat-sen, Chiang Kai-shek and the powerful financier H. H. 
Kung.During his stays in China, Štampar established close collaboration with Soong and managed to get this powerful man interested in health programmes in the Chinese Far East.Soong provided an adequate financial support for these initiatives.In addition to the intense professional collaboration, Štampar also developed a personal attachment to Soong and devoted beautiful and touching parts of his diary to his friend.(More in: Ž. Dugac, M. Pećina (ed.) Andrija Štampar, Dnevnik s putovanja 1931-1938.Zagreb, HAZU, Srednja Europa, ŠNZ «Andrija Štampar», 2008). 61 Following his tour of the Far North-West, the site of the most important programmes, Štampar decided to see other parts of the country too.He travelled to the South-East: Hong Kong, then Canton (Guangzhou).In the Macau outskirts he hoped to see the 'model district' Chungshan but that project had failed, due to political reasons.From Guangdong Štampar's trip through China was exhaustive.Along with learning the geography of this enormous country, he was also getting himself acquainted with the people and their work.For Štampar meeting common people, coolies, and peasants was of particular importance.By observing the rural life, Chinese mothers, children, peasants working in fields, he established the foundations of his knowledge and tried to understand their needs.By developing close relationships with all the enthusiasts in the field of public health, local medical professionals and teachers, Štampar constructed a network of collaborators that, on one hand would be sensitive to local needs, and on the other accept the principles of new public health work.Štampar admired both the common people of China, whom he described as having unusually strong life power, but also Chinese physicians who in such a short time managed to achieve so much.These included Dr James Yen, C. C. Chen, and Marion Yang, the pioneer of women and child's health and the organizer of the modern midwifery in China.Štampar's teachers included a range of people, from peasants to the philosopher Hu Shih, with whom he socialized and exchanged observations. 66Obviously, Štampar built relationships not only with the common people and local medical professionals and intellectuals, but also with persons of great power.Along with T. V. Soong, Štampar also kept in touch with Mei-Ling Soong, also known as Madame Chiang Kai-shek.In his diary he wrote about her: "In this respect I admire this interesting woman: she is neither shy nor conceited, she is honest and recognizes her own mistake and ignorance.She was educated overseas in very good institutions but, of course, her education is not for China.She knows that some things are wrong but she does not know what.She lacks the basic understanding of rural issuesand these are of course of outmost importance for the future of the country-so she constantly falls under this or that new influence.There is no doubt that she is under my influence at the moment, but who knows for how long.She certainly has influence upon her husband, but I do not think she has the strength to make her ideas a reality." 
67These observations reveal the problem.A woman who, with her husband, created many initiatives in her own country, could not understand the real state of affairs-the real problems-of the Chinese village; she was educated overseas, prone to external influences and finally too weak to face the problems and establish a clear vision of action.In their conversation Štampar stressed the example of Dr Yen's Ting Hsien project, which was very good, but was never transformed from a private initiative into part 66 Ibid.67 Ibid.620. of public administration.Štampar warned Madame Chiang Kai-Shek that China hosted many programmes the goal of which was to improve rural conditions, how these cost a great deal of money, yet they lacked unity, a shared plan that would lead to a common goal, and, most importantly, most of these programmes were based on similar examples tested in other countries, yet these had different traditions, different population, and different social economic conditions from China.Here again Štampar stressed the poor fit of foreign principles to Chinese circumstances, and the need to turn to local specificities. Štampar's obsession with rural China and the need to do something constructive about it is witnessed by the following observation in his diary: "That day I visited the secretary of the People's Economic Council, who received me to hear about my impressions from the trip through northwestern regions.He was exceedingly polite and listened to me with great attention.He called me a 'peasant advocate' and asked how much money I wanted on that day.I told him what I wanted and he immediately agreed to pay $2000 to enlarge a midwifery school in Sian (Xi'an) and $800 for a disinsection station in the health centre in Lancau (Lanzhou).As we parted he said he cared a great deal for me because I fought for peasants, and there were few like me in his country." 68ut wishes and enthusiasms of public health workers were one thing, and politicians' interests and ideologies another.The rift between these two camps is described in the following observation that Štampar made, which is important for the analysis of the political surroundings in which Rajchman hoped to establish collaboration with the Chinese government, and in which Borčić and Štampar were supposed to work: "I spent the afternoon with Rajchman who is in terrible trouble.He told me about the four personalities at the centre of the Chinese world: the Marshal Chiang Kai-shek, who has the army and supports the government; Wang Ching-wei, the minister president and the political leader; the marshal cannot work without him but Wang Ching-wei knows he depends on the marshal's mercy.T. V. 
Soong is the finance minister and the most capable person in the government, but he was overseas for half a year so the Nanjing elite decided that China could function without him.He also made a huge mistake when he argued that dictatorship is an excellent tool for solving problems.His argument appealed to the marshal so much that the latter decided to put it into action, but the marshal also thinks he should be the dictator and no one else.This caused the crisis and the finance minister has been in trouble ever since, because the marshal is trying it kick him out of the government, not so much because of the dictatorship, but because the finance minister will not give enough money for the anti-Communist campaign that the marshal has been trying to launch for months, but he has not been able to do so.The Communist uprisings in China are nothing but the rebellion of peasants who want land and want to be free from their rural and urban masters from whom they lease the land and pay the privilege not just in money but also in blood.It is obvious that this kind of Communist uprisings will be settled not with guns, airplanes and cannons, but with land distribution.Sun Fo is the fourth factor in the government.He has no talents except being the son of Sun Yat-sen, so the son of the great father, and because of this family connection he obtained the post of the president of the legislative committee.No law can be enacted without his approval; when he wants to stop a law, he simply leaves for Shanghai where he cannot be found.These are the circumstances in which Rajchman undertook the difficult task of helping this giant country get out of its troubles and make a step forward.His vision is sharp and he is able to find his way in tough situations.He says that the first problem is to solve the political crisis, because no work can be undertaken in such circumstances.I agree with him.So the poor man is travelling between Shanghai and Nanjing, then to Nanchang to hear the marshal's latest opinions and to settle the wars between the camps." 69Following Štampar's departure, Berislav Borčić remained in China until 1938. 
70In these last years of his stay in China Borčić increasingly came into contact with the problems of political conflicts and the growing threat of the war.The difficult conditions in which he worked are depicted in the letter that Dr Ivo Kuhn from Belgrade sent to Andrija Štampar in November 1937.In this letter we can see the politics manipulating a medical report written by Berislav Borčić, on the examination of the Chinese who died during the 69 Ibid.192-193.70 During his long stay in China, Berislav Borčić inevitably neglected his work for the School of Public Health in Zagreb.His Zagreb colleagues hoped he would return to his home institutions.During this period, this institution was in an extremely difficult situation because the state failed to support it financially, and the grants provided by the Rockefeller Foundation were insufficient to pay for all the work that it did.For instance, in this period the school began to produce Neosalvarsan, which not only required additional funds, but also great diplomatic skills from the school management, because the central administration in Belgrade was trying to hinder the production in the Zagreb laboratory in order to launch such a business in Belgrade.In July 1938, Borčić took over the management of the School of Public Health, which was of great importance not just for the school but also for the Rockefeller Foundation.The latter was involved in encouraging the Neosalvarsan production as well as planned investments to enlarge the nursing school in Zagreb.More in: Dugac, Željko.Protiv bolesti i neznanja: Rockefellerova fondacija u međuratnoj Jugoslaviji.Zagreb, Srednja Europa, 2005. Japanese attacks: "On this occasion he gave me the second document 71 in which the minister of foreign affairs presents the submission by the Chinese delegate to the League of Nations, in which the latter refers to Berica's statement that some Chinese were injured by the Japanese poisonous gases, and is asking the secretary to distribute this document to all members of the League of Nations.Berica statement is a testimonial signed by the head of the Nanjing hospital, in which it is confirmed without doubt the findings of poisonous gas and dum-dum bullets.Our Ministry of Foreign Affairs added that, if we were in touch with Berica, we should let him know that such statements could be used in propaganda and for accusatory purposes, and because Yugoslavia had a good relationship with Japan, he should refrain from issuing such statements in future." 72 Reports and opinions Upon his return to Europe, Štampar reported to the LNHO about his 1934 enterprise carried out with the approval and support of the National Economic Council in four provinces: Shaanxi, Gansu, Qinghai and Ningxia.Provincial health centres were established in capitals of these provinces, while a chain of smaller institutions (hsien-centres) was planned to be built in rural areas.So in Shaanxi a provincial health centre, a school health centre and a midwifery school were established.The new Gansu provincial health centre consisted of an 80-bed hospital, midwifery school, department of maternal and child health, department of school medicine, outpatient units, and the department of public health and industrial hygiene.Provincial health centre in Qinghai and Ningxia were established on the same model. 
In the following year, 1935, the Yunnan government requested the people's health administration to establish a provincial health centre. Following a study trip in that province, Štampar recommended that other provinces establish health organizations using a similar model, and he also suggested a special health organization for the tin-mining regions, where miners suffered poor health and adults as well as children were subjected to near-slavery conditions at work. In the same year, 1935, much of the effort concentrated upon the Sichuan province, the battleground of various military factions and communists. Sichuan had one hospital and the medical school run by the West China Missionary Union, which included the only school of dentistry in China. Štampar was surprised by the scope of the work of the school supported by five Protestant missionary associations, especially in popular health education. So he suggested to the government to prepare a plan of collaboration with the extant Christian mission, which would avoid duplicating medical institutions. Finally, before his departure from China, Štampar was charged with advising the government of the province of Fujian, the province with the strongest Communist movement, on the establishment of a health system. In his Fujian report, Štampar emphasised the problem of the drug (heroin and morphine) trade, especially from the island of Formosa (Taiwan), because the islanders enjoyed extraterritorial privileges. Štampar stressed the danger of infection with non-sterile needles used by drug addicts. Fujian too had some health infrastructure, such as a midwifery school and a health centre. The hospital was under construction, so Štampar suggested a standardized programme as well as the integration of the extant institution into the provincial health system. He also recommended establishing a medical school in the University of Amoy (Xiamen). Fujian was affected by plague, so Štampar advised collaboration with the national quarantine service to protect the province from further infection. He emphasized that the plague arrived from inland foci rather than from the sea, so he suggested intensifying international collaboration in this area. 73 In the mentioned report submitted to the LNHO in 1937, Štampar paid special attention to the largest problems that appeared in the course of the formation of the new public health system in China. Štampar stressed that success in most areas of public health, and especially rural hygiene, depended on the economy. The problems were linked to agriculture, predominantly small farms, high rents, debts and high interest, all leading to further pauperization of Chinese farmers. Their lives were further aggravated by periodical natural catastrophes, such as floods and droughts, and by loss of income when attempting any additional production, such as weaving, because the state imported cheaper ready-made cloth. One way out was provided by mining, but this branch of industry was plagued by slavery-like relationships between business owners and workers. Solving health and hygiene problems was not possible without tackling economic problems. So, improving public health could not be achieved if the standard of living fell under the existential minimum.
Štampar further reported that he had advised provincial governments in the provinces he had visited to pay attention to the rural population. In his view, rural health centres were the most important cogs of the health system; the purpose of all other institutions was to serve their needs. Provincial health centres in provincial towns were supposed to serve their urban communities but also to supervise and support rural health centres. 74 Štampar furthermore stated that in the Chinese administration the health service was within the remit of provincial authorities, while the central health administration outlined broad principles of health politics and supported local governments. Chinese provinces were thus meant to organize health services in accordance with the plan drafted by the People's Health Administration. According to that plan, the health centre was meant to be the central provincial medical institution and the supervisory body for all other health institutions in the province. Provincial health centres were supposed to include diagnostic laboratories, a hospital, a midwifery school, a department of maternal and child health, a department of school health, and a school and general outpatient clinic. They were set up using the model launched by Štampar in the Kingdom of Serbs, Croats and Slovenes in the 1920s, with one improvement and exception that for a long time could not be implemented in Yugoslavia because of the continuous tendencies of Belgrade to centralize administration: the administrative independence of provinces, which followed nothing more than general instructions issued from the centre. 75 All local institutions were permitted to use the central government laboratories (such as those in Nanjing and Beijing) and were supplied with sera and vaccines from these institutions. In this way vaccination programmes against smallpox, cholera and abdominal typhus could be carried out. The production of vaccines was launched and supervised by Berislav Borčić. Štampar paid significant attention to medical education. He wondered if modern, and by necessity expensive, medical schools were suitable for Chinese circumstances, and whether they produced physicians able to serve the people. His answer was that modern medical schools in China were expensive; they produced few physicians; and, to pay off the costs of the education, most graduates opened private practices in cities or took positions in larger medical institutions. In this way, the majority of these graduates did not help Chinese peasants. Yet at the same time, Štampar argued, cheaper schools were no solution either, because their inadequate equipment and opportunities for clinical training could not provide education at a satisfactory level. Štampar objected to the establishment of two kinds of medical schools in China, one kind for the elite and the other, cheaper and of lower quality, for peasants. As early as his student days, in his 1911 article on social medicine, he argued that "While before and to a large extent today too, successes and benefits of modern medicine were enjoyed by the rich, at the present time the goal is for science and its practical outcomes to become a good shared by all members of the society". 76 So, in his view, two separate medicines, one for the poor and one for the rich, should not exist.
Štampar's critique of medical education in China was supported by the Rockefeller representative Selskar Gunn. In his diary, Štampar wrote: "Gunn is very critical of the work of the Foundation, which so far had invested around 40 million gold dollars in various institutions and aid programmes. Yet results are so poor that this may be considered the worst investment in the world… we talked a lot about the educational system creating intellectuals whose upbringing alienates them from the needs of their own country. He can see that institutions need help so that they can train students into good and capable workers in rural areas and outside cities, because the fate of China depends on the advancement of villages rather than towns. So he is planning to suggest to the Foundation an entirely new programme of activity, which would involve systematic assistance of Chinese institutions in the new direction and training new people to fulfil the real needs of Chinese peasants. In this way, intelligentsia could be used for the most pressing needs." 77 In the practical sense, Štampar agreed with the views of progressive Chinese physicians such as R. Lim from the Peking Union Medical College, C. C. Chen from Ting Hsien and others, who believed that rationalization could reduce costs. For instance, education at nursing colleges could be merged with the university; the duration of medical school training could be cut to 4 years, while making sure secondary school students received better education in natural sciences. Štampar also supported medical education that would provide students with an insight into the socio-economic problems that affected health, and include field training in rural areas. Peking Union Medical College, as the best equipped medical school, the level of which was not attained by any other medical school in the Far East, was to train teachers for all other medical schools, extant or in preparation. Štampar also suggested a method tested by the School of Public Health in Zagreb: organizing courses for licensed physicians to train them specifically to solve problems they were likely to encounter frequently in their work. These courses were to be organized by the health administration in Nanjing. Along with physicians, Štampar paid much attention to the education of nurses, which he saw as equally important to the education of physicians. In this period in China, wrote Štampar, medical personnel were mostly educated at missionary hospitals. Štampar demanded the same standards for all of these schools. He also stressed that nurses and midwives should be able to perform the same duties. 78
Shadow of the war - conclusion
As the 1930s approached their end, the political scene grew increasingly turbulent and brought unrest to public health projects. The mood that came to occupy the LNHO and RF staff is described in the following quote from Štampar's diary: "We (presumably Štampar and Borčić) talked to Gunn for a long time; our conversation turned intimate when others left us, as we could not discuss everything openly in their presence; Gunn was once again depressed; the constant changes in this country, the uncertainties, were taking their toll. His mood changed every day in response to the news; he wanted to do something good and permanent in China, he had put great efforts in this direction in New York and now he was worried that he would not be able to do it because of the most recent events. I consoled him by telling him about the positive sides of our stay; our moods improved when we relived memories from our trip through Yugoslavia 12 years ago. He remembered everything, which was a good indication of how much he had enjoyed that journey. He showed me his photographs taken in Dalmatia, prominently displayed; he remembered all the details and asked about the people he had met there. He enjoyed talking about past times, about memories of his successful work in Europe. These memories soothed him in these difficult and uncertain times; I could observe in him a certain type of fatigue so frequently found in people disappointed with the present. He belongs to those people whose best days had happened ten years ago, when the future seemed so rosy; few could foresee the events that happened afterwards." 79 By the end of the 1930s, the LNHO lost its position and authority. 80 Optimism waned and the League gradually withdrew all of its experts from the country. After Štampar, Berislav Borčić too left China, in 1938. The Chiang Kai-shek government of the time was sinking. Although the new public health projects shook the country and animated the people, and although Štampar's diplomacy was successful in obtaining funds to establish institutions and programmes, politics and war could not be overcome. When analysing Andrija Štampar's diary, the reports he sent to the LNHO, letters and reviews, about the enthusiasm of Chinese doctors and the huge population that desperately needed health care, we gain the impression that the project of modernization and 'Westernization' of Chinese public health, launched by the LNHO in cooperation with the Chinese government, had a good chance and strong local support. Yet the enthusiasm, new ideas and new projects encountered problems and growing resistance caused by political instability and lack of finances. Although the new institutions had a good chance to survive and develop, the political situation in the country threatened them. The times did not allow the programme to survive; the political and ideological disunity of China was too large, the Japanese invasion too close. Many projects were launched, many were planned, yet at the dawn of the Second World War everything that had been built through the 1930s began to crumble. At the dawn of the Second World War, Štampar and Borčić left China, yet they could not avoid the war: it awaited them in Europe. As the threat of the armed conflict loomed, public health projects were all abandoned. The last international conference before the war took place in Geneva in 1938 and it focused on the standardization of hormones. At the end of the war, in 1944, an international conference on the standardization of penicillin took place in London
(Conferences of the League of Nations Health Organisation, LNHA, Geneva). After the Second World War, in changed socio-political circumstances, the work of the LNHO was inherited first by the Interim Commission (1945-1948), and then by the World Health Organisation (WHO). Both Andrija Štampar and Berislav Borčić made significant contributions to the work of these institutions. See: Belicza B., Rastija M. Prilog poznavanju života i rada dra Berislava Borčića (1891-1977), eksperta Lige Naroda i Svjetske zdravstvene organizacije. Saopćenja 30, 1984, 129-144. The end of the Second World War brought profound changes in China. The old regime was replaced by the new, communist one. Yugoslavia too emerged from the war profoundly changed. Here communist ideology assumed leadership as well. But while Tito's regime broke connections with the USSR, China chose to stay close to the USSR. Štampar and Borčić never returned to China to work on public health. An indirect connection was maintained through the work of their disciple, Dr C. C. Chen, who remained active in the new circumstances. Yet the postwar world brought new exchanges of knowledge and experiences, from Croatia and Yugoslavia to the countries that came together in the Non-alignment movement. It is, however, beyond doubt that the interwar period and the Chinese experience made a contribution to these exchanges. 6 For more on health and Chinese medicine in this period, see: Hiller S. M., Jewell J. A. Health Care and Traditional Medicine in China 1800-1982, London, Routledge, 1983; Yip K. Health and National Reconstruction in Nationalist China, Michigan, Association for Asian Studies, 1995. A book of potential interest to the readers of this essay, on the history of public health in China between the 1930s and the end of the twentieth century, was recently published: Borowy I. (ed.), Uneasy Encounters: The Politics of Medicine and Health in China 1900-1937, Frankfurt am Main, Peter Lang, 2009. 7 Numerous Chinese health administration employees specialized abroad. Dr Tsai-Hong, in charge of the Division of Quarantine and Epidemiology in the National Health Department of China, studied health practices in several European countries and the USA. He also investigated the modern application of epidemiology in the Kingdom of Yugoslavia. Dr L. C. Yen, chief of the Division of Medical Administration in the Central Health Department, studied public health administration and the licensing of doctors and midwives, also in Yugoslavia. Dr P. Z. King studied the organisation and work of several institutes of hygiene in Europe, among them in Yugoslavia, which had a similar role to the Central Field Health Station in Nanking (Nanjing), the direction of which he was supposed to take over. Dr Chen Wan Li, health commissioner of the Province of Chekiang (abolished after 1955), studied general public health work, as well as sanitary and medical institutions, in Yugoslavia and in some other European countries (Annual Report of the Health Organisation for 1930, p. 22. Archives of the LNHO, Geneva). 16 Rajchman's early critiques of the maritime quarantine may be found in: Borowy I. Thinking Big: League of Nations Efforts towards a Reformed National Health System in China. In: Borowy I. (ed.), Uneasy Encounters: The Politics of Medicine and Health in China 1900-1937, Frankfurt am Main, Peter Lang, 2009, 205-228. 17 Borowy I. Coming to Terms with World Health: The League of Nations Health Organisation 1921-1946, Frankfurt am Main, Peter Lang, 2009. 20 Dubin M. D.
The League of Nations Health Organisation. In: Weindling P. (ed.), International Health Organisations and Movements 1918-1939, Cambridge, Cambridge University Press, 1995; Borowy I. Coming to Terms with World Health: The League of Nations Health Organisation 1921-1946, Frankfurt am Main, Peter Lang, 2009. 21 Goodman N. M. International Health Organizations and Their Work, Edinburgh, Churchill Livingstone, 1971; Borowy I. Coming to Terms with World Health: The League of Nations Health Organisation 1921-1946, Frankfurt am Main, Peter Lang, 2009. 40 Borowy I. Thinking Big: League of Nations Efforts towards a Reformed National Health System in China. In: Borowy I. (ed.), Uneasy Encounters: The Politics of Medicine and Health in China 1900-1937, Frankfurt am Main, Peter Lang, 2009, 205-228. 42 Annual Report of the Health Organisation for 1930, pp. 60-63. Archive of the LNHO, Geneva. 43 The Minutes of the Seventeenth Session of the Health Committee in 1931 show that the Chinese government slated the Central Field Station at Nanjing as one of the technical services under the new National Economic Council. The renovation of the old building of the former Ministry of Health, connected with the Ministry of Interior and which had been used for the Station, would be completed in 1932. (Minutes of the Seventeenth Session of the Health Committee, 4-8.5.1931, LNHO Archive, Geneva.) 44 Borowy I. Thinking Big: League of Nations Efforts towards a Reformed National Health System in China. In: Borowy I. (ed.), Uneasy Encounters: The Politics of Medicine and Health in China 1900-1937, Frankfurt am Main, Peter Lang, 2009, 205-228. 45 Under the guidance of Dr Charles Park from the Far Eastern Bureau, a campaign was launched between May and September 1930 to vaccinate 500,000 people as well as to set up measures to improve the supply of drinking water by means of artesian wells; destroy flies; protect food supplies; and improve laboratory investigations, arrangements for prompt notification and hospitalization, and popular education. (Borowy I. Thinking Big: League of Nations Efforts towards a Reformed National Health System in China. In: Borowy I. (ed.), Uneasy Encounters: The Politics of Medicine and Health in China 1900-1937, Frankfurt am Main, Peter Lang, 2009, 205-228.) 46 The mentioned Knud Faber, professor of medicine at Copenhagen University, spent the period between September and December 1930 writing a report on medical schools in China for the LNHO. 47 Minutes of the Seventeenth Session of the Health Committee, 4-8.5.1931, LNHO Archive, Geneva. 48 Dugac Ž. Andrija Štampar (1888-1958): Resolute Fighter for Health and Social Justice. In: Borowy I., Hardy A. (eds.), Of Medicine and Men: Biographies and Ideas in European Social Medicine between the World Wars. Frankfurt am Main, Peter Lang, 2008. Štampar was highly 52 Štampar Andrija. Zdravstvene i socijalne prilike u Kini. Liječnički vjesnik, 59, 1937, 323-327, a translation of a report in English originally published in the Bulletin of the Health Organisation of the League of Nations, 5/1936, 1090-1126; Ž. Dugac, M. Pećina (eds.), Andrija Štampar, Dnevnik s putovanja 1931-1938. Zagreb, HAZU, Srednja Europa, ŠNZ "Andrija Štampar", 2008. 53 More in Borowy I. Thinking Big: League of Nations Efforts towards a Reformed National Health System in China. In: Borowy I. (ed.), Uneasy Encounters: The Politics of Medicine and Health in China 1900-1937, Frankfurt am Main, Peter Lang, 2009, 205-228. 54 John B.
Grant lived in China between 1921 and 1938 and accumulated enormous experience in public health in this country. He collaborated with the Ting-Hsien programme, where his students were sent for their field practice. His relationship with Štampar was very close: they collaborated and travelled through China together. He was unique among the professorial staff of the Peking Medical School, as well as within the Rockefeller Foundation, for his passionate advocacy of social medicine and social programmes. He was called a medical Bolshevik. (See more in Brown Bullock M., An American Transplant: The Rockefeller Foundation and Peking Union Medical College, University of California Press, Berkeley, 1989.) 58 Štampar recommended Chen for further education overseas. In 1935 the League of Nations supported Chen's trips to the Soviet Union, the Kingdom of Yugoslavia and India. In his later book, this highly active Chinese physician wrote how impressed he had been with the achievements in the field of public health in rural Croatia. See Chen C. C. Medicine in Rural China. Berkeley, University of California Press, 1989. 59 See Ž. Dugac, M. Pećina (eds.), Andrija Štampar, Dnevnik s putovanja 1931-1938. Zagreb, HAZU, Srednja Europa, ŠNZ "Andrija Štampar", 2008. 74 Ibid.; League of Nations, Minutes of the 96th Sitting of the Council, 25 January 1937, LNHO Archive, Geneva. 75 It was only with the formation of provinces (Banovinas) in the late 1930s that health administration moved to the local level. 79 Ž. Dugac, M. Pećina (eds.), Andrija Štampar, Dnevnik s putovanja 1931-1938. Zagreb, HAZU, Srednja Europa, ŠNZ "Andrija Štampar", 2008, 491-492. 80 Dugac Ž., Fatović-Ferenčić S., Kovačić L., Kovačević T. Care for Health Cannot Be Limited to One Country or One Town Only, It Must Extend to Entire World: Role of Andrija Štampar in Building the World Health Organization. Croatian Medical Journal 49, 2008, 697-708. As the president of the Interim Commission, Štampar established the foundations for the Constitution of the WHO and thus steered the activities of the WHO in subsequent decades. He did this using the same principles he implemented in his work in the Kingdom of Yugoslavia in the 1920s and in China in the 1930s.
2018-09-16T05:47:12.670Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "cb23a9088f1a4b87585de76378ec7c67e5338ca3", "oa_license": "CCBY", "oa_url": "http://www.amha-journal.com/index.php/AMHA/article/download/33/483", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "cb23a9088f1a4b87585de76378ec7c67e5338ca3", "s2fieldsofstudy": [ "History", "Medicine" ], "extfieldsofstudy": [ "Political Science", "Medicine" ] }
144005044
pes2o/s2orc
v3-fos-license
Jadidism Phenomenon In Central Asia
Jadidism as a new socio-political reform movement in Central Asia and Kazakhstan emerged at the beginning of the 20th century under the strong influence of the revolution of 1905 in Russia and the revolutions in Turkey, Iran and India of 1908-1913, although some ground for the emergence of Jadidism had already been prepared in the second half of the 19th century as a result of the activity of such educators as Danish Ahmad (1827-1897), Furkat (1858-1909), Mukimi (1850-1903), Abay (1845-1904) and others. From the very beginning the Jadids aimed at the reform of the traditional system of education of the Muslim religious school, establishing new-method schools, publishing, theatre, and social, political and cultural institutions, which, under their influence, were increasingly turning into a powerful ideological weapon in the struggle against the economic, moral and political backwardness of the peoples of Turkestan. They were eager to study the experience and progress of the other peoples of the world, especially the best practices of the Muslim reformers of the Crimea, the Volga Region, the Transcaucasia, Turkey, and Iran, who had already become aware of the progressive social, political, spiritual and cultural life of Europe. The main merit of the Jadids of Turkestan consists in the fact that they were the first to ground the political arguments of the national liberation movement against Russian colonialism. The national elite of Turkestan discredited the 'legitimacy' of the colonial form of government, and later this served as the basis of a powerful political movement. © 2013 The Authors. Published by Elsevier Ltd. Selection and/or peer-review under responsibility of Assoc. Prof. Dr. Zehra Özçınar, Ataturk Teacher Training Academy, North Cyprus.
Introduction
The Arabic word 'jadid' (literally 'new') was initially used to refer to those who, under the influence of the didactic ideas of the prominent Crimean Tatar enlightener Ismail Gasprinski (1851-1914), opened new-method schools where not only religious but also secular sciences were taught (Gankevich, 2000). The emergence of Jadidism is associated precisely with these schools ("usul-i jadid"). During his study at the Sorbonne, the founder of new-method schools Ismail Gasprinski familiarized himself with the new analytical and phonetic method of teaching the alphabet and was eager to reform the obsolete system of Islamic education. On returning home in 1884, he opened a "usul-i jadid" school, where he taught 12 students to read and write in 40 days. Later he wrote: "The result exceeded all my expectations, and then this method was implemented in a few more schools. Visitors from the regions familiarized themselves with these schools and also accepted the new method in more than 200 schools" (Ibid., p. 240). I. Gasprinski advocated his ideas from the pages of the newspaper "Tarjimon" ('Translator'), published by him, which opened a new world for its readers, the world of advanced, forward-looking ideas. Among the first subscribers of the newspaper were readers from the cities of Marghelan, Tashkent, Bukhara, Samarkand, Turkestan, and Akmechet. *** Thus, the Jadid movement was formed on the educational ideas of the national-progressive intellectuals of Central Asia and addressed a wide spectrum of problems related to the development of society.
It is possible to consider Jadidism as one of the branches and variants of the reformist movement of the national-progressive intellectuals in Central Asia, which had arisen and was developing in many countries of the East in the 19th and early 20th centuries. The term "Jadid" came into existence from the concept "usul-i-dzhadid" ("a new method"), which also included new methods of training on European models. Later, as the tasks of the Jadid movement expanded, the content of the term was also extended. Along with enlightenment, the Jadids aspired to change the old system of social and political statuses towards progressive forms of development, but this was the second stage of the movement. In Turkestan the greatest representatives of this movement were Makhmud-Khodja Bekhbudi, Ubaidullah Assadullahodjaev, Munavvar kary Abdurashidhanov, Abdulla Avloni, Tashpulatbek Norbutayev, Khodja Muin, Abdukadyr Shakuri, Nasyrkhantura Komolkhonturayev, Obidzhon Mahmudov, Ashurali Zakhiri, Ishankhodja Khanhodjayev, and Iskhakhan Tura Ibrat; in Bukhara, Sadriddin Aini, Faizulla Khodjayev, Abdurauf Fitrat, Musa Saidzhanov, Abdulvahid Burhanov, Usman Khodjayev, Mirkomil Burkhanov, Mukhitdin Rafoat, Mukhitdin Mansurov, Mukhtor Saidzhanov, Abdukadyr Mukhitdinov and others; and in Khiva, Bobookhun Salimov, Palvanniyaz Khodja Yusupov, Avaz Utar, Khusain Matmurodov, Nazar Sholikorov, Otazhon Abdalov, Khudoibergan Divanov, Muhammad Rasul Mirzo, Matyakub Pozachi, Otazhon Sadayev, Bekzhon Rakhimov, Muhammad Devanzade and others. All of them made a huge contribution not only to the enlightenment of the broad masses, but also to the development of emancipating ideas. Jadidism developed step by step. Starting with the idea of enlightenment, which the progressionists considered a universal panacea, it changed the vector of its direction and found a wider range of action. After two decades, the Jadids realized that political changes were required to overcome economic and cultural stagnation. However, at the enlightenment stage the progressionists saw their primary goal in the creation of a new education system, and all forces were devoted to realizing this reform. The leaders of the Jadid movement were Munawwar Qari Abdurashidkhanov, Abdullah Awlani, Ubaidullah Khodjaev in Tashkent, Mahmudkhoja Behbudi, Abdukadir Shakuri, Saidakhmad Siddiki-Ajzi in Samarkand, Fitrat, Faizulla Khojaev, Sadriddin Aini in Bukhara, Hamza, Ibrat, Chulpan in the Ferghana Valley, Palvanniyaz Khoji Yunusov and Baba Akhun Salimov in Khiva, and Konurkhoja Khodjikov in Turkestan city. They were the pioneers of Jadid beginnings. Not limiting themselves to the opening of new-method schools in Turkestan, they helped young people to be sent to prestigious educational institutions of Russia, Turkey, Egypt, and Western Europe for study. The Jadids encouraged young people to receive education, master secular science, and faithfully serve the people and the motherland as doctors, engineers, lawyers, agronomists, religious leaders, and statesmen. Among the young people who were sent for study from Turkestan to Turkey was Fitrat, the future ideologist and prominent representative of Jadidism, who had received his education in an old school and a madrasa. Once he proved himself as a talented and progressive-minded young man, the Jadids sent him to Turkey for study. Between 1908 and 1913, Fitrat witnessed the first steps of the Turkish Revolution and its victory over the feudal system.
Impressed by what he saw and read, and critically reflecting on the events taking place in Bukhara, he was convinced that the main impediment to progress was the religious fanaticism of the masses. He came to the conclusion that in order to change and improve the lives of working people, it was first of all necessary to fight against the darkness of ignorance and its "leaders", to discredit them, and to tear off their hypocritical masks. The books "Spor" ('Dispute') and "Indiyski puteshestvennik" ('Indian Traveller'), written and published by him in Turkey, ruthlessly exposed the reactionary clergy and the state foundations of the Emirate, and, being secretly distributed in Bukhara, stirred the young people. In 1905-1917 the Turkestan Jadids Mahmudhodzha Behbudi (1875-1919), Munavvar kary Abdurashidhanov (1878-1931), Fitrat (1886-1938), Abdullah Avloni (1878-1934), Miryakub Dawlat (1885-1935), and Ahmad Baitursunov, developing the ideas of their predecessors, began to participate more actively in the socio-political and cultural life of the region. They actively formed public opinion in favour of reform in the political, economic and cultural spheres of Bukhara and Turkestan, and fought against the backwardness and stagnation of the feudal Emirate and the Russian colonial administration (Ayni, 1926). The Jadids from the beginning aimed at reforming the Muslim religious schools, creating new-method schools, publishing, theatre, and social, political and cultural institutions, and under their influence these became a powerful ideological weapon in the struggle against the economic, spiritual and political backwardness of the people of Bukhara and Turkestan. Before engaging in enlightenment, the future Jadids M. Behbudi, A. Shakuri, M. Abdurashidhanov, Hamza and A. Avloni visited Kazan, Orenburg and Istanbul, and got acquainted with the life and the best practices of Jadid schools, press, literature and theatre. M. Behbudi, G. Yunusov and Fitrat went to Istanbul and Cairo University, U. Hodzhaev to Saratov, and M. Shermuhamedov, M. Muhamedzhanov, L. Olimi and Sh. Suleyman to the Ufa and Orenburg madrassas. They had the opportunity to get acquainted with the activities of such well-known educators as Ismailbek Gasprinsky, Gabdurashit Ibragimov and Fatih Karimi, and published articles on the economic, political and cultural problems of the people of Bukhara and Turkestan in the pages of Tatar, Azerbaijani and Turkish periodicals: "Waqt" (Time) and "Shuro" (Council) in Orenburg, "Ulfat" in St. Petersburg, "Tardzhimon" (Translator) in Bakhchisarai, "Molla Nasreddin" in Tbilisi, and "Sirozha ul-Mustaqim" (The Right Way) in Istanbul. These journals spread in Central Asia and Bukhara, and had a strong influence on the development of the peoples' consciousness in the region. At the same time, the Emir of Bukhara and the authorities of the Russian colonial rule in Turkestan intensified the fight against the Turkestan Jadids. Secret surveillance and control by the Russian political agency in Bukhara and Turkestan was actively conducted; its reports portrayed the Jadids in black colours, and their activities as harmful, nationalist, pan-Islamic and pan-Turkic, awakening among the people a national-patriotic spirit directed against the interests of imperial Russia. Among the passionate supporters of reform were the well-known educator Ahmad Donish and the poets Mukimi, Furkat, Hamza, Berdakh, Zavki, Baeni and Avaz Utar, who in their poetry appealed to the people for knowledge and education.
Democratic ideas also possessed the minds of such prominent figures as Abay and Shokhan Valikhanov, who saw a way out not only in enlightenment, but also in the unity of the Turkic peoples. From this sprang the Jadids, who had gone a long way from their predecessors, from enlightenment to politics. Enlightenment in Turkestan had deep genetic roots. The strengthening of educational philosophy in the 19th century was associated with the objective reasons for the backwardness of the Central Asian khanates: the weakening of the role of the Great Silk Road from the 16th century, the opening of shipping routes, and the rapid technological progress of European countries in the 18th-19th centuries. The subjective reasons were internal wars doing harm to the national economy and general development. After the Russian Empire's conquest of Turkestan, opportunities opened up for the penetration of democratic ideas from Europe and Asia, and dialogue with the world promoted the development of emancipation ideas in the region. These were the first steps in forming a national freedom philosophy whose main postulate was enlightenment. Its supporters were far from heading, organizing or supporting the popular uprisings that broke out in different corners of the region, but they were well aware of their causes, and their critical attitude towards the existing system and understanding of the necessity of reforms was a big achievement of that time. Jadidism developed step by step. Starting with the idea of enlightenment, which the progressionists considered a universal panacea, it changed the vector of its direction and found a wider range of action. After two decades, the Jadids realized that political changes were required to overcome economic and cultural stagnation. However, at the enlightenment stage the progressionists saw their primary goal in the creation of a new education system, and all forces were devoted to realizing this reform. Renovationist processes in the education system also spread in the Bukhara Emirate and the Khiva khanate. In 1908 Abdulvakhid Munzim opened in Bukhara the first new-method school with teaching in the Tadjik language. However, the opposition of the conservatively minded clergy was strong and effective, and quite often led to the destruction of schools by crowds of mullahs. The school of A. Munzim was also destroyed. Being afraid of severe reprisal, he left Karshi. S. Aini, a passionate initiator of Jadid schools, hid at friends' houses for three weeks (Gafarov, 2000, p. 73). As a result, in 1909 A. Munzim's school was closed and the people of Bukhara were not allowed to send their children even to Tatar new-method schools. However, the new school had already "made much noise" and people kept sending their children to it. According to S. Aini, when the number of children reached 50, they were put into a Tatar school near the Gavkushan madrasah. Early in December 1910, the Jadids of Bukhara organized a secret society, "Tarbiyai-atfol" ("Education of Children"), which dealt with the opening of illegal primary new-method schools. In 1911-1912 about 57 schools operated in the Bukhara Emirate (Bendrikov 1960, p. 260). The best among them were the schools of Mukomil Burkhanov, Usmankhodja Pulatkhodjayev, and Khalidkhodji Mekhri (1913). At Mullah Vafo's school in Bukhara much attention was paid to the study of the Russian language. New-method education also spread in other cities of the emirate, such as Karshi, Shahrisyabz, Karakul and Gizhduvan.
But in July 1914, under the influence of the upper clergy of Bukhara, with the political agency's approval and by the order of Emir Alimkhan, they were closed. In the mind of the population, and especially of the intellectuals, the notion of the efficiency of new-method schools had already taken root; therefore, in spite of interdictions, their number kept increasing. "Views of liberals and a new method of teaching took roots among high classes of Bukhara" (Zenkovsky, 1967, p. 88). Therefore their children kept taking lessons from Jadid teachers. The anonymous appeal of the merchants to the Political Agency in July 1914, requesting support for the reopening of schools, is demonstrative. It runs as follows: "... new method schools, where our children in a short space of time learnt reading and writing, by the order of His Majesty the Emir under the complaint of 2-3 mullahs, were closed. About one month has passed since our children stopped going to school, and they roam about the streets. It is well known to you that we, Bukhara nationals, are mostly merchants and handicraftsmen and there are not so many literate people among us, and owing to this fact it is rather desirable for us if our children would be able to read and write quickly and maintain our trading records and accounts. We used to go to old method schools for 7-8 years but remained illiterate and got no benefit from them. Therefore we kindly request you to reopen the closed schools" (Klimovich, 1936, pp. 214-215).
The Educational Activities of the Jadids
An extensive system of traditional educational establishments and the Jadid schools, opened at the turn of the century, influenced the level of education and awareness of the local population. For example, there were 5892 schools and 353 madrasas in the country in the early 20th century (TSGARUz. F. 47, Opis' 1, delo 979, list 81 (F. 47, Inventory 1, File 979, Sheet 81)). Even the tsarist government recognized the fact that the local population had a high level of literacy. On March 14th, 1909 the governor-general of Turkestan P. I. Mischenko wrote to the Minister of Public Education of Russia: "The literacy of the natives of Turkestan, especially in its main regions such as Syrdarya, Ferghana and Samarkand, is at a very high level, which is much higher than that of European Russia. A well-developed system of primary schools (schools), secondary and higher education institutions (madrasas) tightly covered most of the territory". This traditional educational system, possessing its own ideological influence, was from the very beginning tightly controlled by the colonial administration. The decree of the Russian emperor, issued on May 17, 1875, was the ground for the foundation of the Turkestan department of educational establishments, which was granted the power to exercise control over the activity of Russian educational establishments, as well as over the national ones. And on March 14, 1894, governor-general Vrevsky approved the post of the third inspector of Turkestan public schools, who had direct oversight of the traditional institutions of the settled and nomadic population (TSGARUz. F. 47, Opis' 1, delo 149, List 10 (F. 47, Inventory 1, File 979, Sheet 60)). This inspector served as the governing body for madrasas and schools, so Muslim schools completely passed under the control of the department of public education.
Under those circumstances, the educational activities of the Jadids of Turkestan, becoming an alternative to the activity of the colonial administration in the renewal of education in the territory, acquired a reformative character and had a direct influence on the process of the education of the people. In place of the traditional education, inseparable in its essence from medieval scholasticism, they proposed a new system of schools and new methods of teaching ("Usul-i sovtiya"). Soon the schools became new-method ones ("Usul-i jadid maktablari"). Besides religion, they taught science and the Russian, Arabic and Persian languages, and there the genesis of the future intelligentsia was formed. At first, the Jadid schools used textbooks written by Tatar and Azerbaijani enlighteners, but later the Turkestan Jadids started to publish textbooks and tutorials themselves. In 1903 and 1904, the books "Kitabul-atfol" ("A book for children") and "Muhtasiri zhugrofiya Rusiya" ("Short geography of Russia") by Mahmudkhoja Behbudi were published; in the following years the books "Muallem Awwal" ("Teacher"), "Muallem soniy" ("Second teacher") and others appeared, and by 1905 the number of the Jadid schools in the country reached (Tabyshalieva, 1993). The "Turkestan Native Newspaper", which was not satisfied with the Jadids' educational activities, organized attacks on them and harassment on behalf of some fanatical and ignorant people. As a result of this persecution the newspaper "Taraki" was closed, and the editor Ismail Gabitov was arrested (Kosimov, 1979). Control over the Jadids' activities and their organizations was further enhanced. A secret letter of Lieutenant Colonel Syzyh to the Samarkand police officer, dated 25 October 1913, ordered him to "find out all about the Muslim newspaper" Samarkand. In response, the Samarkand police station chief of the Turkestan District Police Department notified on January 9, 1913 that "the editor of the Muslim newspaper "Samarkand", Mahmud Khoja Behbudi, is 41 years old, an Arab, a local-born of Samarkand city, and lives in the Russian part. He is married to the native Sharofat, who is 35 years old. He has children (Maksut, 12, Suraya, 8, Matluba, 2 years old), and he has real estate in the Russian and the native parts of the city ... He is currently engaged in trading. He previously served as mufti in a Sharia court. He has a perpetual passport booklet. He was not convicted...." (TSGARUz. F. 461, Opis 1, delo 1312, List 665). In another report, to Collegiate Secretary Naryshkin and to Lieutenant Colonel Rozalion Soshalski, more detailed data were given on Behbudi, his colleague Said Khoja Ahmad Siddiqui and their like-minded Tashkent colleague Ubaidullah Khodjaev: "Three or four years ago, the editor and publisher of the Samarkand magazine "Aina" ("Mirror"), Behbudi, wrote in the Sart dialect a play "Padarkush", which means "parricide". Behbudi in his views is a progressive nationalist, educated, as it seems, in Turkey ... The play was staged for the first time in Tashkent by an amateur troupe led by Ubaidulla Khodjaev (one of the young Sarts of progressive ideas), probably in 1914, before the current war. Then Khodzhayev went to Kokand, Andijan, Namangan and other cities and, with local amateur forces of progressive elements, staged the same play" (TSGARUz. F. 461, Opis 1, delo 1919, List 69).
Conclusion
The Soviet government did not accept the Jadids' ideas about the social and cultural modernization of the society, but it had to give a class nature to some of their provisions and use them in its programme of reforms.
This related to the granting of autonomy to the local population and the renewal of education and culture. The social and cultural development of the newly independent Central Asian republics has formed in the same direction, which demonstrates the vitality of the Jadids' ideas, raised in the early twentieth century. The political colouring of the Jadid movement strengthened the ideology of the national liberation movement in the territory. Their arguments negated the "legitimacy" of the colonial power and increased the opposition to the colonial oppression of Russia. In addition, in its initial stage, the Soviet government was forced to reckon with the ideas of Jadidism. Thus, the political and ethnic processes in the first quarter of the twentieth century in Turkistan determined the directions of the socio-political development of the region. If the Jadids had not had the ideas to substantiate the nature and content of this development, and the political movement of the masses to bring the ideas to fruition, the totalitarian power would not have met with strong resistance, it would not have reckoned with the national interests of the local people, and, as a result, today's independent Central Asian states would have a completely different look.
2019-05-04T13:07:41.893Z
2013-10-01T00:00:00.000
{ "year": 2013, "sha1": "a6fb25add69c4d9a0426a26dc3b54e1a00efe1ba", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.sbspro.2013.08.948", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5a9556138819374c8cc79c4f39be3494e1182d9c", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
231909296
pes2o/s2orc
v3-fos-license
Experimental Study on the Coating Removing Characteristics of High-Pressure Water Jet by Micro Jet Flow
In this paper, the coating removal characteristics of a water jet by micro jet flow, as affected by cleaning parameters, are analyzed. Numerical simulation of the fluid field calculates the velocity and pressure distribution of a water jet impinging on a rigid wall, which is used to design experiments on coating removal as affected by jet pressure, traversal speed, and repeated impacting times. The removal width is used as a measure of water jet coating removal capability. Experimental results show that the coating removal width is constant, independent of traversal speed or repetition count, when the total exposure time of waterjet impingement is fixed. Based on the results of coating removal by a linearly moving water jet, this study also analyzes the characteristics of coating removal by a rotating jet disc, especially the residual coating as affected by the rotational and moving speed of the cleaning disc. The research is helpful for improving the coating removal efficiency of cleaning disc devices.
Introduction
Remanufacturing cleaning technology is a method of cleaning and removing complex dirt adhering to the surface of waste material using devices or chemical solvents. The cleaned product units can then meet the cleanliness requirements of analysis, detection, repair, processing, and so on. The techniques for coating and rust removal on steel structure surfaces are mainly peening, chemical corrosion, and water jet cleaning. High pressure water flowing through the micro flow channel of the nozzle generates a high-velocity water jet. When the water jet impacts the painted plate, the kinetic energy of the micro jet flow converts into pressure energy, causing coating failure. Compared with the other two cleaning methods, water jet cleaning has the characteristics of good safety, non-pollution, recovery of impurities, high adaptability, and low cost. The surface cleaned by the water jet has a high degree of smoothness and little residual salinity; it is fast-drying and shows hardly any rusting. Water jet cleaning technology is widely used in the cleaning of vessel outer walls, aircraft skins, oil storage tanks, heat transfer equipment, and so on [1]. The parameters and applications of existing water jet equipment are shown in Table 1. Studies on water jet cleaning mainly concern cleaning efficiency, the removal mechanisms of different materials under water jet impingement, and the structural design of cleaning equipment. In the field of numerical simulation and experiments on surface treatment by water jet, Kawale and Chandramohan [2] used CFD simulation to discuss the static pressure and shear stress distribution of jet impingement on a plane at different Reynolds numbers. The simulation results showed that static pressure and wall shear stress increase with Reynolds number, and that the jet velocity at the micro flow channel exit of the nozzle must be at least 3 m/s for effective cleaning of a flat plane. The simulation results were verified by experiments. Chillman et al. [3] used an ultra-high pressure water jet to remove the α case of superplastic titanium alloy without damaging the substrate. The effectiveness of jet removal of the titanium alloy case was evaluated by experimental methods and surface micro-morphology analysis. Liu et al. [4] analyzed the jet velocity, turbulent kinetic energy, and void fraction for jet pressures ranging from 80 to 120 MPa. Simulation results showed the pressure and velocity cross-sectional distributions at different target distances.
This study compared samples impinged by submerged and non-submerged water jets and assessed surface morphology, micro hardness, and surface roughness by experimental methods. The study showed that the impact force of a submerged jet comes from jet kinetic energy and cavitation, while the impact force of a non-submerged jet is converted from jet kinetic energy alone. Guha et al. [5] carried out an experimental and numerical study of the influence of target distance on stagnation pressure attenuation and pressure distribution on the substrate, and concluded that there is a linear relationship between stagnation pressure attenuation and target distance on the jet axis. The optimal stand-off distance is 5 times the diameter of the nozzle and the jet diffusion radius is 1.68 times that diameter. Zhang et al. [6] analyzed the motion characteristics of abrasive particles as affected by jet parameters. The simulation results confirmed that there is a stagnation pressure zone during jet impingement. An abrasive particle with a large diameter and density has a higher speed and its moving direction is hard to change; these properties help to improve abrasive waterjet efficiency. Zhang and Chen [7] analyzed coating removal from a passenger vehicle by waterjet and designed a high-pressure water jet cleaning device for a coating removal rate test. This study showed the effects of jet pressure, moving speed, target distance and injection angle on the coating removal rate, and a fitting formula for the bumper surface micro coating removal rate was established according to the experimental results. Che et al. [8] investigated the process of polishing superhard material surfaces by abrasive water jet, considering the influence of micro jet impact angle and substrate hardness on surface roughness, and established a mathematical model of the surface roughness of superhard materials polished by an abrasive jet. Zhang et al. [9] compared the impact erosion performance of air sandblasting and an abrasive water jet in the surface micromachining of quartz board. Experimental results showed that the abrasive water jet produces both brittle and plastic erosion during surface treatment, and the surface impinged by the abrasive water jet is smoother than that polished by air sandblasting. Terimourian et al. [10] investigated the process of high pressure water jet de-painting of organic coatings from steel plate surfaces and the surface micro roughness of the steel plate after cleaning by water jets with different target distances and moving velocities. The experimental results showed that coating mass loss increases with jet kinetic energy, and the maximum mass loss occurs when the jet de-paints at the optimal target distance. The substrate topography is not compromised after secondary water jet de-painting. Mabrouki et al. [11] established an LS-DYNA 3D finite element model to simulate the process of removing polyurethane coating from aluminum alloy by water jet, based on momentum conservation and the Euler-Lagrangian coupling method. The study analyzed the surface stress-strain values under water jet impingement and designed experiments to verify the simulation results of coating surface morphology changing with jet exposure time. Xie and Rittel [12] studied the pure water jet peening process and established a 2D finite element model to simulate a micro jet flow impinging on the target plate. This study calculated the jet velocity, pressure, and stress distribution of the plate at a fixed target distance.
In the field of theoretical analysis of jet impingement and the coating removal mechanism, Kunaporn et al. [13] established a mathematical model to analyze and calculate the pressure distribution on the surface of aluminum alloy impacted by a waterjet at different target distances. The calculation result was used to predict the contact pressure and effective peening range. The results of high cycle fatigue tests illustrated that surface treatment by micro jet peening can improve the fatigue life of aluminum alloy at the optimal target distance. Meng et al. [14] studied epoxy-based paint removal by a moving water jet. A semi-empirical model was established to calculate the coating removal mass due to erosion by micro jet droplets, based on the Springer erosion formula. In order to study the mechanism of material removal by jets, Chillman et al. [15] established a mathematical model of jet energy density distribution by analyzing the relevant parameters such as the micro channel diameter of the jet, jet pressure, and traversal velocity. This model was verified by experimental results on the trench depth after the jet impacted titanium alloy. Mieszala et al. [16] analyzed the erosion mechanism in the abrasive waterjet surface machining process, and carried out impact tests of micro particles at various moving speeds. The material removal mechanisms of the abrasive jet differ with various crystal structures and surface microstructures. Weiß et al. [17] used a jet to separate plastic fiber from textile, and designed a jet cleaning device to clean waste carpet and recover plastic components, so as to improve the recovery and utilization of plastic fiber. Hou et al. [18] used theoretical equations to calculate the erosion depth of submerged jets at various target distances and injection angles, and verified the reliability of the model by experimental results of scouring clay. Glover et al. [19] established a mathematical model to calculate the removal width of a viscoplastic impurity layer on a smooth surface impacted by fixed and moving jets, and used high-speed photography to measure the jet cleaning width at different exposure times to verify the calculation results. Azhari et al. [20] analyzed the three-dimensional surface micro morphology of 304 stainless steel treated by combinations of jets at various pressures, and found that repeated cleaning with jets of various pressures can improve surface smoothness and hardness, as well as prolong the fatigue life of the workpiece. As for research on the cleaning efficiency of the rotating jet and its devices, Peng et al. [21] established a three-dimensional model based on the structure of the cleaning plate of an airport runway jet degumming vehicle to simulate the vacuum suction process and the fluid field of the internal rotating jet, and optimized the operating parameters of the cleaning plate on the basis of the simulation results. Borkowski [22] evaluated the cleaning effect of a multi-nozzle device. A theoretical model of surface trace distribution was established, which was used to analyze cleaning efficiency as affected by the parameters of trace width, rotation speed, number of nozzles, and traversal speed. Chomka and Chudy [23] carried out a mathematical analysis of the cleaning trajectory distribution of double-nozzle rotary heads, and confirmed the range of the optimal moving speed and angular velocity that makes the cleaning trajectory evenly distributed.
Momber [24] used an image processing method to evaluate the influence of parameters such as the micro channel diameter of the nozzle, number of nozzles, injection angle, target distance, and flow rate on the paint removal effect, and optimized the jet paint removal parameters. There are relatively few studies on the width of coating removal by micro jet flow as affected by exposure time, especially the characteristics of jet paint removal on a rough plate. This paper analyzes the process of a micro jet flow impinging on the wall at a fixed target distance and obtains the theoretical width of the impact pressure area. The width of coating removal is investigated under conditions of changing traversal velocity, repeated cleaning frequency, and total exposure time. Combined with the application of the jet cleaning rotating disc, this study analyzes the removal rate as affected by the distribution of the jet trajectory, which helps to optimize parameters for improving the coating cleanout rate of the rotating cleaning device.
Water Jet Structure
The structure of the water jet is shown in Figure 1, which can be divided into three regions: the initial section, the main section, and the dissipation section with velocity attenuation. When the high-pressure water flows through the micro fluid channel of the nozzle, water pressure energy converts into kinetic energy to form a high-speed water jet. Due to the dynamic viscosity of the fluid and the friction of air, the velocity of the micro water jet at the front end gradually attenuates. The micro flow at the edge of the beam slows down because of air friction and entrainment; the micro flow velocity in this field decreases to zero and the jet beam breaks into micro droplets at the same time. The radius of the jet increases with the target distance. Therefore, the flow velocity distribution at the cross section has the feature that the velocity is high beside the axial line and decreases to zero along the diffusion radius. In industrial operation, the initial section is used for cutting because of its smaller diffusion radius and high fluid velocity. The main section has a larger diffusion radius and the jet velocity in this region still maintains a high value, so this zone is typically used for cleaning and coating removal operations [12].
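The velocity distribution just described is often summarized with a simple empirical free-jet model. The relations below are a sketch of that standard model (piecewise axial decay plus a Gaussian-type radial profile with an empirical spreading coefficient k, which is an assumption of this sketch); they are given for orientation and are not necessarily the exact correlation adopted in this study. The variables are those defined in the sentence that follows.

```latex
% Standard free-jet approximation (illustrative; k is an assumed empirical constant)
v_{\max}(x) =
\begin{cases}
v_0, & x \le x_c,\\[4pt]
v_0 \, x_c / x, & x > x_c,
\end{cases}
\qquad
\frac{v(x, y)}{v_{\max}(x)} = \exp\!\left[ -k \left( \frac{y}{x} \right)^{2} \right].
```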
Coating Damage Model
In the process of jet impingement on the coating, previous research shows that erosion and surface shearing lead to failure of a coating on an ideally smooth surface. When the micro jet impacts the coating surface, the kinetic energy of the jet converts into pressure energy. When the impact pressure reaches the failure strength of the coating, a crack grows from the coating surface to the contact plane of the substrate within a short time. The water jet impinging on the workpiece breaks the coating and scours a trench. The micro jet then flows along the contact surface and continuously impacts the joint between the coating and the steel plate. Surface shear stress induced by the diffusing micro jet strips the coating from the smooth surface; this phenomenon is typically called the "water edge" [19]. The failure principle of a coating adhering to a smooth surface is shown in Figure 2.
In the actual coating removal operation, a substrate with surface roughness is used to improve the adhesion force between the coating and the plate. A surface with micro bulges and concaves changes the direction of jet reflection, so the velocity direction of a jet micro element becomes stochastic. Diffusion jets with various directions therefore hardly form shear stress along the plate, and failure of the coating on a rough surface occurs mainly through jet impingement. The coating removal principle on a rough surface is shown in Figure 3.
The destructive effect of the jet on the coating is mainly generated by the pressure energy converted from the kinetic energy of the high-speed micro jet, which must overcome the self-strength or adhesion strength of the coating to damage it [15]. The jet velocity at the nozzle exit and the jet diffusion are related to the inlet pressure and the nozzle micro flow channel. In the flow field simulation of the jet nozzle used in the removal experiments of this paper, the diameter of the nozzle micro fluid channel is 1 mm and the inlet pressure is 32 MPa. The flow field is calculated with a k-ε turbulence model with wall functions, and a two-phase flow model is used to calculate the water jet impinging on a rigid wall through the air. The target distance is 10 mm. The simulated velocity and surface pressure distributions are shown in Figure 4. The results show that the micro jet velocity is highest at the axis and attenuates to zero along the radial direction at the cross section. When high-pressure water flows through the micro flow channel, the pressure energy of the water converts into kinetic energy and generates the water jet. This energy conversion causes the pressure drop through the nozzle shown in Figure 4c. There is an energy-converting region called the stagnant zone around the intersection of the jet axis and the wall, and the reflected jet flows along the wall beyond the stagnant zone [2]. After the jet impacts the wall, the kinetic energy of the jet near the axis is converted into pressure energy, so there is a circular area with radius r near the jet axis where the workpiece is mainly subjected to the impact pressure. The target distance mainly affects the size of the stagnation pressure area and the maximum impact pressure. In order to study the effect of energy accumulation on micro jet impingement in the process of jet coating removal, further analysis is needed through experiments.
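As a rough consistency check on the simulated exit velocity and stagnation pressure, the ideal conversion of inlet pressure into kinetic energy can be estimated from Bernoulli's relation. The sketch below is a back-of-envelope estimate, not the k-ε CFD calculation, and the discharge coefficient is an assumed value.

```python
import math

RHO_WATER = 1000.0  # kg/m^3

def exit_velocity(inlet_pressure_pa, cd=0.95):
    """Ideal jet exit velocity from Bernoulli, scaled by an assumed discharge coefficient."""
    return cd * math.sqrt(2.0 * inlet_pressure_pa / RHO_WATER)

def stagnation_pressure(velocity):
    """Dynamic pressure recovered where the jet stagnates against the wall."""
    return 0.5 * RHO_WATER * velocity ** 2

p_inlet = 32e6                                   # 32 MPa inlet pressure used in the simulation
v_exit = exit_velocity(p_inlet)
print(f"estimated exit velocity       : {v_exit:.0f} m/s")
print(f"estimated stagnation pressure : {stagnation_pressure(v_exit)/1e6:.1f} MPa")
```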
Experimental Setup
The single jet coating removal test system is mainly composed of three parts: a high-pressure water generating device, a jet gun, and a ball screw driven by a motor. High-pressure water pressurized by the plunger pump is transported through the high-pressure pipeline to the jet gun to form a high-speed jet from the micro flow channel of the nozzle. At the same time, the ball screw drives the jet gun to reciprocate in a straight line to carry out the paint removal test. The schematic and physical diagrams of the device and the microstructure dimensions of the nozzle are shown in Figure 5. The traversal velocity regulation range of the ball screw is 0.05 mm/s to 39 mm/s, the maximum stroke is 180 mm, the rated pressure of the jet gun is 38 MPa, and the inner micro fluid channel diameter of the nozzle outlet is 1 mm.
Figure 5. High-pressure water jet paint removal experimental system: (a) the principle of water jet generation; (b) structural dimensions of the nozzle; (c) coating removal test bench. High-pressure water pressurized by the piston pump is sent to the jet gun through the pipeline; when the jet gun starts straight reciprocating motion driven by the slipway, the system begins the paint removal test.
The application background of this paper is green coating and rust removal on ship walls. Therefore, ship steel plate and the fluorocarbon paint used in hull construction are selected, and the average thickness of the coating, measured by a thickness gauge, is 100 μm.
Coating Removal by One Time Jet Impingement
In order to study the effect of traversal velocity on the width of jet paint removal, a coating removal experiment with the water jet at different traversal velocities is designed. In this experiment, the fixed target distance is 10 mm, the jet inlet pressures are 35 MPa, 32 MPa and 30 MPa, the traversal velocity ranges from 0.05 mm/s to 1 mm/s, and the inner diameter of the jet nozzle is 1 mm. The width of paint removal, from the area where the jet destroys the coating down to the surface of the steel plate, is measured after a single traversal of the jet. The variation of the coating removal width measured in the experiments is shown in Figure 6. Using a fixed target distance and the same type of nozzle eliminates the influence of target distance and nozzle structure on the paint removal width, so that the jet impact force is a fixed value at a given inlet pressure; the local impingement time on each micro section of the removal zone is then changed only by the traversal velocity. In this paper, the coating removal width is the width of the removal area where the coating is completely removed and the interface between the coating and the steel plate is exposed. When the inlet pressure of the jet is 30 MPa and the traversal velocity exceeds 0.7 mm/s, a single pass of the jet can damage the coating but does not reach the contact surface between the coating and the steel plate, which means the coating is not completely removed; thus there is no removal width value. When the jet inlet pressure is 30 MPa, the critical traverse speed for single-traversal coating removal is 0.7 mm/s; when the jet inlet pressure is 28 MPa, the critical traverse speed is 0.4 mm/s. The experimental results show that the coating removal width increases with jet pressure. When the traversal speed increases, the removal width becomes smaller, because the local exposure time of jet impingement decreases with increasing traversal speed. It is therefore necessary to analyze the effect of local exposure time on removal width.
Figure 6. Effect of the moving speed on the width of coating removal; the width gradually decreases as the velocity increases.
Coating removal by water jet is a complex physical process, which is mainly affected by jet pressure, target distance, traversal speed, coating thickness, and the surface roughness of the steel plate. It is difficult to describe the correlation of these parameters with the coating removal rate by a single formula; therefore, most studies carry out numerical analysis on experimental results and obtain empirical formulas under specific experimental conditions. Based on the numerical analysis of the experimental results, a fitting formula for the effect of inlet pressure P and traversal speed v on the coating removal width w is obtained. The R-square of this fitting model is 0.973. According to this fitting equation, when the jet pressure is 28 MPa and the traversal speeds are 0.2 mm/s and 0.25 mm/s, the calculated depainting widths are 1.09 mm and 1.01 mm, while the experimental measurements are 1.15 mm and 1.05 mm; the errors are 5.2% and 3.8%. The fitting equation therefore reproduces the experimental data well.
Coating Removal by Repeated Jet Impingement
In the actual water jet coating removal operation, repeated cleaning is used to achieve a higher removal rate.
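Since the fitted expression itself is not reproduced here, the sketch below only illustrates the fitting procedure: an assumed power-law form w = a·P^b·v^c is fitted by least squares to hypothetical width measurements (the numbers are placeholders, not the data of Figure 6), and the coefficient of determination is computed in the same way as the R-square reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical single-pass removal widths (mm) at three inlet pressures (MPa)
# and three traversal speeds (mm/s); these numbers are illustrative only and
# are NOT the measured data of Figure 6.
P = np.array([30, 30, 30, 32, 32, 32, 35, 35, 35], dtype=float)
v = np.array([0.1, 0.3, 0.5, 0.1, 0.3, 0.5, 0.1, 0.3, 0.5])
w = np.array([1.60, 1.20, 1.00, 1.80, 1.35, 1.10, 2.00, 1.50, 1.25])

def width_model(Pv, a, b, c):
    """Assumed power-law form w = a * P**b * v**c (not the paper's fitted expression)."""
    P, v = Pv
    return a * P**b * v**c

popt, _ = curve_fit(width_model, (P, v), w, p0=[0.01, 1.0, -0.3])
w_pred = width_model((P, v), *popt)
ss_res = np.sum((w - w_pred) ** 2)
ss_tot = np.sum((w - np.mean(w)) ** 2)
print("fitted a, b, c :", np.round(popt, 4))
print("R-square       :", round(1.0 - ss_res / ss_tot, 3))
```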
In order to study the influence of the number of repeated water jet impacts on the paint removal width, a multiple impingement test with fixed traversal speed is designed. The water jet pressures are 32 MPa and 28 MPa, the traversal speed is 1 mm/s, and the number of traversals varies from 1 to 20. The water jet repeatedly impacts the same area of the painted plate at the fixed traversal velocity, and the width of coating removal is then measured. The measured coating removal widths are shown in Figure 7a, and a partially enlarged view is shown in Figure 7b; the length of the scale is 2 mm and the division value is 0.05 mm. When the inlet pressure is 28 MPa, the impact of the jet at a traversal speed of 1 mm/s cannot completely destroy the coating and expose the contact surface of the steel plate, so there is no measured value. It can be seen from Figure 8 that, as the number of water jet impingements increases, the width of paint removal increases at first and then tends to a fixed value. From the point of view of an energy-based model, the failure of a coating adhering to a rough surface under jet impingement occurs when the kinetic energy transferred from the micro jet to the coating, which accumulates with exposure time, reaches the threshold of the coating failure energy. From a microscopic point of view, the surface roughness reduces the shearing area of the diffusion jet along the wall, so the energy accumulation in the coating mainly comes from the kinetic energy of the velocity component perpendicular to the painted plate. From a macroscopic point of view, the coating is removed from the rough surface by water jet impingement when the impact force of the jet exceeds the failure strength. Therefore, the maximum width of coating removal after repeated cleaning should be the diameter of the jet velocity distribution area; from the simulation results, the maximum width of paint removal on the rough surface is 2r.
Coating Removal by Jet Impact with Fixed Total Exposure Time
The single-pass variable-speed paint removal test and the constant-speed multiple cleaning test show that the width of coating removal increases with the local exposure time of water jet impingement on the micro segments of the removal area. It is necessary to further analyze the effect of total exposure time, traversal speed and number of repeated cleanings on the width of coating removal at constant water jet pressure. An experiment of repeated water jet impingement with constant total exposure time is designed as follows. The single moving displacement of the jet gun is L = 180 mm, so the total exposure time t_c can be expressed as t_c = c·L/v, where c is the number of repeated cleaning passes and v is the traversal speed of the jet. In test 1, the traversal speed varies from 1 mm/s to 5 mm/s and the total exposure time is 15 min and 30 min; the repeated cleaning times at different speeds for a constant total exposure time are shown in Table 2, and the measured removal widths are shown in Figure 9a. In test 2, the traversal speed is 1 mm/s and 2 mm/s and the total exposure time ranges from 3 min to 30 min; Table 3 lists the cleaning times for the different total exposure times, and the result of test 2 is shown in Figure 9b.
Table 2. Repeated cleaning times at different traversal speeds for a constant total exposure time.
Traversal speed (mm/s) | Total exposure time (min) | Cleaning times
1 | 15 | 5
2 | 15 | 10
3 | 15 | 15
4 | 15 | 20
5 | 15 | 25
1 | 30 | 10
2 | 30 | 20
3 | 30 | 30
4 | 30 | 40
5 | 30 | 50
Table 3. Cleaning times for different total exposure times.
Traversal speed (mm/s) | Total exposure time (min) | Cleaning times
1 | 24 | 8
1 | 30 | 10
2 | 3 | 2
2 | 9 | 6
2 | 15 | 10
2 | 24 | 16
Figure 9 confirms that the width of coating removal increases with the total exposure time. When the total exposure time is constant, the coating removal width takes a fixed value, independent of the traversal speed and the number of passes. That is, the removal width d can be written as d = f(v, n), where f(v, n) is a function of the traversal speed v and the number of repeated impingements n; when t1 = t2, that is L·n1/v1 = L·n2/v2, d1 is equal to d2.
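Because the single traversal length is fixed at L = 180 mm, the relation t_c = c·L/v directly links the number of passes, the traversal speed, and the total exposure time; the short sketch below reproduces the pass counts of Table 2 and illustrates the equal-exposure-time argument.

```python
# Bookkeeping for the fixed-total-exposure-time tests: with a single traversal
# length L = 180 mm, the total exposure time is t_c = c * L / v.
L_MM = 180.0

def passes_for(total_time_min, speed_mm_s):
    """Number of repeated traversals c giving the requested total exposure time."""
    return total_time_min * 60.0 * speed_mm_s / L_MM

def total_exposure_min(passes, speed_mm_s):
    """Total exposure time t_c in minutes for c passes at traversal speed v."""
    return passes * L_MM / speed_mm_s / 60.0

# Reproduce the Table 2 settings: 15 min and 30 min total exposure at 1-5 mm/s.
for t in (15, 30):
    counts = [f"{passes_for(t, v):.0f}" for v in (1, 2, 3, 4, 5)]
    print(f"t_c = {t:2d} min -> passes at 1..5 mm/s: {', '.join(counts)}")

# Equal total exposure times imply equal removal widths (d1 = d2 in the text):
print(total_exposure_min(5, 1), "min ==", total_exposure_min(25, 5), "min")
```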
Characteristics of Coating Removal by Water Jet Rotating Cleaning Disc
In actual jet paint removal and cleaning operations, the water jet rotating cleaning disc is used for cleaning large plates. The physical device and its structure are shown in Figure 10. The nozzle is installed in a rotating rod inside the disc. During the paint removal operation, the pneumatic motor in the cleaning plate drives the rotating rod at high rotational speed while the cleaning plate moves, achieving coating removal. The nozzle in the disc chamber therefore follows a resultant motion consisting of a circular motion with radius R and angular velocity ω and a linear motion with moving speed v. The trajectory can be described as a cycloid, with parametric equations x(t) = v·t + R·cos(ωt) and y(t) = R·sin(ωt). The trajectory of the nozzle axis is shown in Figure 11, and its shape is a cycloid. The trajectory has self-intersection points when v·(2π/ω) ≤ 2R. The height interval between successive intersections is not uniform, resulting in a sparse distribution of trajectory lines near the x-axis and a dense distribution near the upper and lower boundaries of the trajectory. The paint removal trajectory coverage diagram is shown in Figure 12. Too high a moving speed v causes the trajectory interval H to become larger than the coating removal width w of a single jet pass, so that the coating removal zones cannot completely cover the traversed area. The height difference between intersection points near the upper and lower boundaries of the trajectory is small and the paint removal trajectories there are densely distributed, so the region near the boundary of the trajectory is completely cleaned because of the overlapping coverage of the coating removal areas. If the width of the residual coating is d, the width of the clean area near the edge of the cleaning plate is R − d/2, and the maximum interval of the trajectory at the intersection points on the x-axis is Hmax. If the rotating cleaning disc is to completely remove the coating in this area in one pass, that is, for the residual coating width d = 0, it is necessary to reduce the moving speed v so that the maximum trajectory interval satisfies Hmax ≤ w. When Hmax is equal to w, the maximum moving speed for removing the coating in one pass is vmax.
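The trajectory spacing argument can be made concrete with a short sketch. The parametric form x(t) = v·t + R·cos(ωt), y(t) = R·sin(ωt) is reconstructed from the description of the resultant motion rather than quoted from the paper, and the rotating radius R is an assumed value; the sketch reports the advance of the disc per revolution, which sets the spacing of adjacent removal strips that must stay below the single-pass removal width w for full coverage.

```python
import numpy as np

# Assumed nozzle-axis trajectory of the rotating cleaning disc: circular motion of
# radius R at angular velocity omega superposed on linear travel at speed v. The
# parametric form below is reconstructed from the description, not quoted verbatim.
def trajectory(t, R, omega, v):
    return v * t + R * np.cos(omega * t), R * np.sin(omega * t)

R = 0.10                                   # nozzle rotating radius, m (assumed value)
omega = 2.0 * np.pi * 3000.0 / 60.0        # 3000 r/min average rotational speed
for v_m_min in (1.2, 0.9, 0.6):            # moving speeds used in the experiment
    v = v_m_min / 60.0                     # convert m/min to m/s
    t = np.linspace(0.0, 5.0 * 2.0 * np.pi / omega, 5001)   # five revolutions
    x, y = trajectory(t, R, omega, v)
    advance = v * 2.0 * np.pi / omega      # forward travel of the disc per revolution
    loops = advance <= 2.0 * R             # self-intersection condition from the text
    print(f"v = {v_m_min} m/min: advance per revolution = {advance*1e3:.2f} mm, "
          f"x-span over 5 revolutions = {(x.max() - x.min())*1e3:.1f} mm, "
          f"self-intersecting: {loops}")
```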
According to the trajectory analysis results, a coating removal experiment with the rotating cleaning disc at different moving speeds is carried out. The average rotational speed is 3000 r/min, and the moving speeds are 1.2 m/min, 0.9 m/min, and 0.6 m/min. The coating residue on the steel plate is shown in Figure 13. The results show that there is a clear removal zone near the boundary of the trajectory when the cleaning disc performs the jet coating removal operation at high moving speed. There are some bright spots in the coating residual region; these spots are bulges on the rough steel plate where the jet target distance is smaller than the nominal value, so the kinetic energy at these spots is higher and the removal effect is better than in other regions. The test result is consistent with the results of the simulation and theoretical analysis.
Conclusions
Based on experimental and simulation methods, this paper analyzes the area of water jet impingement and the effect of exposure time on the width of micro jet coating removal on a rough surface. According to the experimental results, the efficiency and parameter optimization of coating removal by the rotating disc are analyzed. The influence of the cleaning disc moving speed on the coating cleaning rate is studied by analyzing the movement of the micro jet and the distribution of the paint removal trajectory. The experiments and calculations show that:
(1) The simulation results show that the inlet water pressure and the micro flow channel of the nozzle mainly affect the jet outlet velocity. Increasing the target distance results in jet velocity attenuation and an increase of the impingement area. The micro flow velocity is maximal in the axial region and attenuates along the radial direction.
(2) Reducing the traverse speed or increasing the number of repeated impingements increases the local exposure time of the jet on each micro segment of the trajectory, which increases the coating removal width of the jet.
(3) The results of the single-beam moving jet coating removal tests show that, at constant jet pressure, the paint removal width is positively correlated with the total exposure time. When the total exposure time is constant, the coating removal width has a fixed value that is not affected by the traversal speed or the number of traversals.
(4) The moving speed of the rotating cleaning disc mainly affects the width of the residual coating on the surface after coating removal. As the moving speed v increases, the trajectory becomes sparser and the removal strips cannot fully cover the traversed area, which causes coating residue. It is necessary to reduce the moving speed v so that the coating can be removed completely in a single traversal.
This study is helpful for optimizing jet cleaning parameters.
Conflicts of Interest: The authors declare no conflict of interest.
Cellular signaling modulated by miRNA-3652 in ovarian cancer: unveiling mechanistic pathways for future therapeutic strategies
MicroRNAs (miRNAs) are small non-coding RNA molecules that play pivotal roles in regulating gene expression and have been implicated in the pathogenesis of numerous cancers. miRNA-3652, though relatively less explored, has recently emerged as a potential key player in ovarian cancer's molecular landscape. This review aims to delineate the functional significance and tumor-progression role of miRNA-3652 in ovarian cancer, shedding light on its potential as both a diagnostic biomarker and a therapeutic target. A comprehensive literature search was carried out using established databases; the focus was on articles that reported the role of miRNA-3652 in ovarian cancer, encompassing mechanistic insights, functional studies, and its association with clinical outcomes. This updated review highlights that miRNA-3652 is intricately involved in ovarian cancer cell proliferation, migration, and invasion; its dysregulation is linked to altered expression of critical genes involved in tumor growth and metastasis. Furthermore, miRNA-3652 expression levels were found to correlate with clinical stage, prognosis, and response to therapy in ovarian cancer patients. miRNA-3652 therefore holds significant promise as a vital molecular player in ovarian cancer's pathophysiology; its functional role and impact on tumor progression make it a potential candidate for diagnostic and therapeutic applications in ovarian cancer. Given the pivotal role of miRNA-3652 in ovarian cancer, future studies should emphasize in-depth mechanistic explorations, utilizing advanced genomic and proteomic tools. Collaboration between basic scientists and clinicians will be vital to translating these findings into innovative diagnostic and therapeutic strategies, ultimately benefiting ovarian cancer patients.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12964-023-01330-x.
Introduction
Ovarian cancer (OC) constitutes one of the most lethal gynecological malignancies threatening women's health worldwide [72]. Ovarian malignant tumours have the highest mortality rate among all gynecological tumours in developed countries. According to estimates from the Centers for Disease Control and Prevention (CDC) and the American Cancer Society, ovarian cancer ranks as the second most common gynecological cancer in the United States. In 2021 alone, it was estimated that more than 21,410 women would be diagnosed with this disease [65]. Furthermore, the mortality rate is alarmingly high, exceeding 50%, with a substantial proportion of affected women expected to die from ovarian cancer in the same year [2,11]. One significant challenge in managing ovarian cancer is the limited rate of early diagnosis, largely due to the absence of effective, sensitive diagnostic methods [13]. Additionally, the high mortality rate is exacerbated by the frequent occurrence of drug resistance and high rates of recurrence [11,39]. Several studies have highlighted the prognostic importance of miRNA-3652 in ovarian cancer [20]; the existing literature shows that high and low levels of miRNA-3652 are associated with poorer and better survival outcomes, respectively [62]. These findings suggest that miRNA-3652 could serve as a potential prognostic biomarker and highlight the urgent need for further studies aimed at understanding the mechanistic pathways modulated by this microRNA [20,62]. A recent study aimed to enhance the prediction of survival rates in ovarian cancer patients through the identification of specific miRNA markers [62]. Utilizing an algorithm known as OV-SURV, which combines support vector regression with a dual-objective genetic algorithm for feature extraction, the team analyzed miRNA expression and survival statistics from 209 patients sourced from The Cancer Genome Atlas [62]. The algorithm exhibited strong predictive capability, reflected by a mean correlation coefficient of 0.77 and a mean absolute error of only 0.69 years during tenfold cross-validation. Key miRNAs such as hsa-let-7f, hsa-miR-1237, and hsa-miR-98 were found to have a substantial impact on survival outcomes. Furthermore, pathway analysis indicated that specific groups of these miRNAs are predominantly involved in fatty acid production and breakdown processes [62]. The precise cause of OC is still uncertain, but specific risk and contributing factors, including increased age, ovulation, hormonal imbalance, cytokines, environmental factors, and genetic predisposition, have been identified. Aberrant transcriptional regulation is already known to play a role in the onset of different types of cancers, including OC [53]. Despite advancements in generating cancer genome data, the biology and mechanisms of ovarian cancers (OCs) are still not fully understood. miRNAs, which are small non-coding RNAs of approximately 22 nucleotides in length, are involved in post-transcriptional regulation. These RNAs regulate various cellular processes, such as cell growth, tissue differentiation, and apoptosis. The well-established role of several miRNAs in carcinogenesis lies in their ability to target specific mRNAs [25]. Alterations of miRNA expression patterns are associated with several diseases, including cancers. Different miRNAs are involved in OC pathogenesis, such as miR-200, miR-506, miR-183, miR-20 and many more [29,38,78]. Additionally, alterations in the abundance of miRNAs have been noted in patients with OC in
many previous studies compared to healthy individuals [8,26]. Novel targets are required to drive the diagnosis, treatment, and prognosis of OC. To this end, expression analysis of miRNAs can emerge as a new hope for the management of OC. Therefore, the correlation between aberrant expression of miRNAs and the pathways regulated by OC-promoting genes can provide new insights into the clinical manifestation, diagnosis and treatment of OC [49,79]. Expression analysis has shown that the plasma and serum of ovarian cancer (OC) patients display downregulation of miRNA-3625 [28,40]. This downregulation is associated with dysregulation of tumor-suppressive genes and overexpression of oncogenes. Intriguingly, OC patients that are resistant to front-line chemotherapeutic agents also show downregulation of this miRNA [28], and epigenetic restoration of miRNA-3625 in resistant OC cells has been demonstrated to enhance the potency of therapeutic agents [14]. Overall, most of the reported studies on miRNAs have mainly focused on evaluating miRNA abundance, together with speculation about their role in causing OC. To the best of our knowledge, no study has yet reported a concise mechanism and/or role of miRNA-3652 in causing OC. This study addresses that issue, as it offers thorough evaluations and potentially insightful information on the genes targeted by miRNA-3652 and their function in controlling signalling pathways that influence malignant cell behaviour. For the recognition and verification of miRNA-3652 target sites in ovarian cancer, we have used a systematic and rigorous approach. The study adopts a pathway-oriented method to pinpoint the relevant factors involved in cancer growth and describes the signalling cascades associated with ovarian cancer, thus paving the way for in-depth mechanistic studies and the design of future therapeutic strategies.
Methodology
miRNA selection
miRNA-3625 was chosen because it is downregulated in the serum and plasma of ovarian cancer patients; additionally, its downregulation in resistant ovarian cancer cell lines due to epigenetic changes made it a crucial candidate [43]. An exhaustive literature search in databases (Scopus and Web of Science) revealed that there is no mechanistic study highlighting the role and/or mechanism of miRNA-3652 in generating the tumour microenvironment. The correlation between miRNA-3652 and OC had already been highlighted, with altered expression of genes in neighbouring cells producing the tumour microenvironment. The methodology is represented by a flow sheet diagram (Fig. 1).
Fig. 1 Flow diagram outlining the methodology of the study
DIANA tools
The speculative identification of miRNA-3652 target genes was carried out using DIANA tools [57] (http://diana.imis.athena-innovation.gr/). DIANA-microT-CDS (http://www.microrna.gr/microT-CDS), an ongoing project of the DIANA-lab group for miRNA target prediction, was chosen owing to its specificity for human beings and its machine learning approach, which trains on data to make predictions and decisions. This programme provides separate scores for the positive and negative sets of miRNA recognition elements (MREs), merging the two values into a single standard score known as the miTG score [36]. The microT-CDS threshold value was set to 0.7 to prevent misleading positive or negative results. This threshold was employed as the foundation for choosing the truly targeted genes in the sensitivity analysis because of its stringent accuracy: hits with a miTG score greater than 0.7 have an increased likelihood of being accurate predictions [4].
Assortment of targeted genes
The genes recognized by the microT-CDS tool were further screened for their contribution to cancer. Two approaches were used: an extensive literature review and the online database "The Human Protein Atlas" (http://www.proteinatlas.org/) [73]. The literature analysis was executed to disregard genes not concerned with cancer progression and metastasis. The selected genes highlighted by the microT-CDS tool were critically evaluated against specific guidelines during the literature analysis; these guidelines included the gene's documented regulation in ovarian cancer, its role in cancer progression signaling pathways, and publication within the time window from 2010 to 2020. By classifying genes described in research publications as playing a dynamic function in the metabolic pathways of cancerous cells, genes directly engaged in the growth of cancerous cells and metastasis were shortlisted for definitive selection. The Human Protein Atlas v.14 is an open-access database containing a large amount of data about all human protein-coding genes [74]. The Human Protein Atlas portal is divided into four sections, including images and information on antibody-based proteomics and transcriptomics: normal tissue, cancer tissue, sub-cellular, and cell lines [55]. Of these, the Cancer Atlas (http://www.proteinatlas.org/cancer) covers a large set of human cancer specimens representing the 20 most relevant types of cancer and provides protein expression profiles obtained by immunohistochemistry, which is why it was of interest here [7]. Using this database, the gene list was further categorised based on the expression level of human genes in cancer cells; only genes expressed in ovarian cancer were kept in the gene list, and the others were discarded.
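A minimal sketch of the miTG-score filtering step described in the DIANA tools subsection is given below; the table layout, gene list, and scores are placeholders for illustration and do not reproduce the actual microT-CDS export.

```python
import pandas as pd

# Hypothetical export of DIANA-microT-CDS predictions for the miRNA of interest;
# the column names and scores below are placeholders, not real tool output.
predictions = pd.DataFrame({
    "gene_symbol": ["CCNL1", "GLI2", "NFIB", "SIK2", "GENE_X"],
    "mitg_score":  [0.92,    0.88,   0.81,   0.76,   0.55],
})

THRESHOLD = 0.7  # stringency cut-off used in this study

# Keep only predicted targets whose miTG score exceeds the threshold.
retained = predictions[predictions["mitg_score"] > THRESHOLD]
print(retained.sort_values("mitg_score", ascending=False).to_string(index=False))
```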
Pathway designing
The association between these genes and certain types of cancer was elucidated by investigating the involvement of specific genes in ovarian cell biological processes. To illustrate the influence of the miRNA on its targeted genes in ovarian cancer, an extensive literature analysis was conducted in conjunction with two separate pathway databases, KEGG and Reactome. The results of pharmacological studies were employed as the source of data from the KEGG and Reactome databases. Additionally, each pathway in the KEGG and Reactome databases has been assigned a unique reaction ID, which can also be used for data regeneration.
KEGG
KEGG (Kyoto Encyclopedia of Genes and Genomes) is a publicly available resource for high-throughput data analysis (http://www.genome.jp/kegg/) [34]. KEGG was the first resource to offer complete pathways, manually curated using information from the genome projects of many organisms [35]. The KEGG pathway database (http://www.kegg.jp/kegg/pathway.html) describes how genes and molecules are interconnected by providing graphical information on biochemical and regulatory pathways. Each map is dynamically created using the Kegg-Sketch programme to give insight into molecular interaction and reaction networks [21]. The KEGG Atlas (http://www.kegg.jp/kegg/atlas.html) is a graphical interface with zooming and navigation features for the KEGG global maps. To filter organism-specific pathways connected to specific genes, we retrieved information using the organism's name [34,35] and used the KEGG DISEASE domain to explore the involvement of each gene in pathways associated with cancer, to narrow down our understanding. Additionally, we selected related pathway domains to identify a comprehensive list of pathways in which a particular gene is implicated, serving as entry identifiers to visualise other signalling pathways where the gene plays a crucial role. This approach strengthened and streamlined our understanding of each gene's significance and involvement in various signalling pathways.
Reactome
Reactome is a free, open-access, thoroughly reviewed, and well-organized database (http://www.reactome.org/). The Reactome database systematically links human proteins and their functions [19] to a wide range of biological processes, each expressed as a single, stable reaction pathway. Because of these networks and pathways, researchers can understand a wide range of significant disease processes at the molecular level. Information was gathered using the simple text search tool to look up processes, proteins, and pathways linked to specific genes [48]. Search results were filtered based on key parameters such as molecular entity, event type, species, cellular compartment and type of reaction. Afterwards, we expanded the search by selecting locations in the pathway browser to visualise the organisation of events in Reactome, such as the immune system, signal transduction, and apoptosis, among others. Ultimately, we selected event names related to our gene that play a role in cancer metastasis to open the corresponding pathway diagram.
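Pathway retrieval of the kind described above can also be scripted against the public KEGG REST interface; the sketch below assumes that interface is available at rest.kegg.jp and uses an example gene identifier that should be verified against the KEGG GENES database before use.

```python
import urllib.request

KEGG = "https://rest.kegg.jp"  # public KEGG REST interface (assumed available)

def pathways_for_gene(kegg_gene_id):
    """Return KEGG pathway identifiers linked to a gene entry, e.g. 'hsa:2736'.

    Uses the /link operation of the KEGG REST API; the gene identifier passed in
    is an example value and should be checked against the KEGG GENES database.
    """
    with urllib.request.urlopen(f"{KEGG}/link/pathway/{kegg_gene_id}") as resp:
        lines = resp.read().decode().strip().splitlines()
    return [line.split("\t")[1] for line in lines if "\t" in line]

# Example: Hedgehog-pathway transcription factor GLI2 (KEGG/Entrez id assumed to be hsa:2736)
for pathway in pathways_for_gene("hsa:2736"):
    print(pathway)
```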
Literature review
Not all of the targeted genes were directly represented in the KEGG and Reactome databases. To address this, pathways were built dynamically by analysing the published literature in order to obtain a thorough understanding of the actions of these genes in ovarian tissues.
Results
Initially, the DIANA-microT-CDS tool suggested that thirty-four genes were connected to miRNA-3652. Based on the literature, extensive searching of biology-integrated databases (KEGG, Reactome), and the finding that this microRNA's expression is downregulated in ovarian cancer cells, the list was eventually reduced to eleven genes. The rationale is that, with miRNA-3652 downregulated, the overexpression or up-regulation of these eleven genes would enhance the tumour microenvironment and metastasis (Table 1). The following molecular pathways describe and demonstrate how the selected eleven genes correlate with the growth and spread of ovarian cancer.
CCNL1
The Cyclin L1 (CCNL1) gene encodes a protein that regulates cyclin-dependent serine/threonine kinase activity as well as the phosphorylation of the C-terminal domain, which is crucial for the splicing of pre-mRNA [17]. This gene has been noted to be overexpressed in ovarian cancer cells. Due to this protein's high expression, ovarian cells can pass the G0 restriction point; accordingly, the cancerous cells produce more copies, and genomic instability contributes to the tumour microenvironment [58]. By activating cyclin-dependent kinases (CDKs), specifically CDK4/6, Cyclin L1 positively regulates serine/threonine kinase activity. Cell cycle restriction to the G0 phase [50] is related to CDK4/6 activity: when changes in the cellular environment drive the conversion of cells to abnormal or immortalized cells, normal cells restrict themselves permanently to the G1/G0 phase. By inhibiting the tumour suppressor proteins p21 and p53, CDK4 upregulation promotes a cellular microenvironment that is more conducive to oncogenesis, allowing damaged cells or cellular DNA to segregate and produce more tumour cells that escape cell cycle checkpoints [1]. By forming cyclin-CDK complexes, CDK4 also phosphorylates Rb, a tumour suppressor, causing it to become inactive and release its bound proteins, advancing tumour cells into the S phase. Phosphorylation of Rb stimulates several transcription factors involved in the S phase of the cell cycle, including the crucial target protein E2F, driving tumour cell proliferation and genomic instability [52]. CDK4/6-cyclin D complexes can also phosphorylate and regulate the expression of transcription factors, including FoxM1, Myc, and ME50, promoting an environment for cancer progression (Fig. 2).
GLI2
The zinc finger protein GLI2 functions as a dynamic oncogene in ovarian cancer cells, causing cellular differentiation and proliferation, and acts as a moderator in the Hedgehog (HH) signalling pathway. The cytoplasmic GLI2 protein is related to the PTCH1 (Patched transmembrane) receptor of the Hedgehog signalling pathway [54]. Upon HH ligand binding to the PTCH1 receptor, repression of SMO is relieved, allowing dissociation of GLI2 from SUFU; this leads to the translocation of full-length GLI2 into the nucleus and the activation of transcription of HH target genes, promoting ovarian cancer cell proliferation, survival, and invasiveness [60] (Fig. 3).
High expression and cellular trafficking of GLI2 therefore support these tumour-promoting effects.
NFIB
Nuclear Factor IB (NFIB) encodes a transcriptional activator capable of binding to other factors, including YBX1, thereby helping attachment to the estrogen receptor (ESR1) and developing an estrogen-independent phenotype in ovarian cells [56]. NFIB behaves as an oncogene, as it appears over-expressed in ovarian cells as a consequence of the down-regulation of miRNA-3652. The complex formed by NFIB, YBX1, and ESR1 suppresses ESR1 while also promoting angiogenesis, proliferation, aggressiveness, and metastatic potential. Lower survival and poor prognosis are connected with patients having repressed ESR1 and high expression of YBX1: estrogen-dependent and estrogen-independent routes provoke and amplify tumorigenesis at the cellular level [9]. The development of reactive metabolites and the formation of mutagenic DNA adducts are crucial in transforming ovarian cells into cancerous tumours independently of estrogen. Additionally, the repression of ESR1, which is responsible for estrogen signalling, leads to the generation of resistance against anti-estrogen therapy. This phenomenon, known as endocrine resistance, significantly amplifies the progression of ovarian cancer and hampers the effectiveness of anti-estrogen treatments. Several factors, including EZH2, SFTPC, IGFBP5, ELN and EDN2, are regulated by NFIB and have considerable influence on the tumour microenvironment, advancing cancer aggressiveness, angiogenesis, migration and spread (Fig. 4).
SIK2
Salt inducible kinase 2 (SIK2) is one of the key players in metastasis to the omental and adipocyte tissues of the ovary that leads to high-grade serous ovarian cancer (HGSO) [32]. At the site of metastasis, p85 and ACC are phosphorylated, causing cell proliferation, fatty acid oxidation, and activation of the PI3K-AKT pathway inside adipocytes. SIK2 is activated by the adipocyte's feedback of free fatty acids and by the intracellular calcium pathway. SIK2 contributes to the phosphorylation of p85α and ACC, which eventually upregulates fatty acid oxidation, and the PI3K-AKT pathway stimulates the growth of ovarian cancer cells by releasing free radicals into the cell's microenvironment, making the cells even more cancerous [69] (Fig. 5).
ADAR1
Adenosine deaminase acting on RNA (ADAR) is an enzyme involved in A-to-I (adenosine-to-inosine) dsRNA editing. The AZIN1 (S367G) alteration is the most prevalent one among ADAR1 substrates; it is associated with cancer progression by increasing AZIN1 stability through interaction with the enzyme antizyme, raising AZIN1 levels [23]. Antizyme modulates the breakdown of two important factors, ODC (ornithine decarboxylase) and CCND1 (Cyclin D1), through strong interactions with them. Activation of ADAR1 causes an increased amount of polyamine through overexpression of ODC, enhancing polyamine transport activity in tumour cells and promoting tumour cell growth. This increased cell growth and the lowering of G1/S checkpoint control clear the way for cancerous cells to enter the cell cycle, escalating the lethal effect of the cancer cells [23]. ADAR1, in conjunction with Glioma-associated oncogene-1 (GLI1), facilitates the R/G substitution at position 701, which has been strongly associated with multiple tumours. GLI1 is well known for its role in promoting the activation of the Hedgehog pathway, which ultimately contributes to tumour growth and progression [82]. Endonuclease 8-like 1 (NEIL1) is another substrate that undergoes hyper-editing by ADAR1, and its clinical significance in tumour development has been established. An increase in edited NEIL1 levels can significantly promote a tumour-inducing environment by impairing oxidative DNA damage repair capability, leading to an accumulation of DNA damage within the cells. The compromised DNA repair mechanism contributes to the progression of tumours and the development of a favourable environment for tumour growth. ADAR1 also controls A-to-I editing at many positions of MDA5, PKR, and OAS, resulting in the inhibition of these proteins and inactivation of the immune system, promoting tumorigenesis in ovarian cells [77] (Fig. 6).
TRPC3
Short transient receptor potential channel 3 (TRPC3) belongs to a class of diacylglycerol-sensitive cation channels and controls intracellular calcium ions by activating the phospholipase C (PLC) pathway, which is essential for preserving a balance between cell proliferation, growth, and anti-apoptosis [27]. TRPC3 acts as an anti-apoptotic regulator through the RAS-AT-MAP kinase pathway and is overexpressed in the plasma membrane of ovarian cancer cells. This elevated expression of TRPC3 in the plasma membrane promotes cell survival and inhibits apoptosis (programmed cell death) in ovarian cancer cells. Another crucial function of calcium inflow into the cytoplasm is driving the cell cycle transitions at G1/S and G2/M beyond the checkpoint boundaries. This function is accomplished by blocking p21 and p27, the inhibitors of cyclin D-CDK4/6. The cyclin B-CDK complex is successively activated by calcium influx, allowing the cell cycle to pass across checkpoint boundaries [71]. Calcium signalling also phosphorylates the PI3K-AKT pathway, which affects transcription factor expression. This leads to enhanced production of MMP2, MMP9, and ERK1/2, which contributes to a more metastatic microenvironment in ovarian cancer by facilitating tumor growth and stability [10] (Fig. 7).
SRF
Serum response factor (SRF) is a ubiquitous nuclear protein that binds the serum response element (SRE) in the promoters of its target genes. Upregulation of the MRTF-SRF pathway promotes cancer spread by inducing cell growth and proliferation [47]. RhoGTPase activates MRTF by releasing it from G-actin in the cytoplasm, and RhoGTPase activation in turn depends on Wnt/β-catenin signalling. The POU2F1 gene accompanies stimulation of the PI3K/AKT pathway, resulting in phosphorylation of the β-catenin-E-cadherin complex in the ovarian cellular environment; this complex then activates the RhoGTPase pathway through Wnt signalling to create a malignant environment. Once released from G-actin, MRTF becomes activated and translocates to the nucleus, where SRF binds the CArG box [33]. The MRTF-SRF complex ultimately activates MMPs and the c-Fos gene, advancing migration, growth, and metastasis of ovarian cancer cells (Fig. 8).

VCAN
The Versican gene (VCAN) encodes a chondroitin sulfate proteoglycan that participates in cell adhesion, proliferation, migration, and angiogenesis, and thereby in essential functions such as maintenance of the extracellular matrix and tissue morphogenesis. VCAN also modulates Wnt-mediated β-catenin signalling [76]. VCAN is overexpressed in ovarian cancer, and extracellular matrix proteins play a significant role in cancer development. Increased expression of cancer-associated VCAN in ovarian cancer cells, triggered within the tumour microenvironment, positively controls TGF-β signalling, which further assists ECM remodelling for the advancement and metastasis of the cancer [16] (Fig. 9).

POU2F1
The POU domain of the class 2 transcription factor 1 (OCT1/POU2F1), which comprises about 160 amino acids, is required for binding the octamer sequence (ATGCAAAT) [81]. The transcription factor OCT1/POU2F1 stimulates and advances initially silent genes involved in proliferation and immune modulation. When miRNA-3652 is down-regulated, ovarian cancer cells express more POU2F1, which leads to growth, invasion, migration, and metastasis. HDAC2 (Histone deacetylase 2) plays a key role in gene regulation by removing acetyl groups from lysine residues on core histones. This enzymatic activity permits the expression of various factors, including Twist1, Snai1, Snai2, ZEB1, and other EMT genes. These factors are interconnected with the AKT pathways, which are regulated during transcription of the EMT genes, promoting cellular activities such as growth, EMT-driven migration, and invasion. Notably, POU2F1 acts as a cancer-promoting factor in this context, and recent studies have shown that OCT1 is upregulated in this pathway [75] (Fig. 10).
CRTC2
CRTC2 (CREB-Regulated Transcription Coactivator 2) is a transcriptional co-activator that interacts with the transcription factor CREB (cAMP response element-binding protein). CRTC2 participates in the SIK/TORC pathway and in the regulation of glucose production via the LKB1/AMPK/TORC2 pathway, and elevated expression of the CRTC2 gene may contribute to tumour development [59]. In the human body, binding of CRTC2 to CREB within the nucleus is a normal physiological process involved in various cellular activities. However, oncogenesis (cell transformation) can occur under specific conditions, such as irregular activity of the CREB transcription factor, which has been linked to tumour cell proliferation and anti-apoptotic activity [59]. When the tumour suppressor LKB1 is depleted, the salt-inducible kinases become dephosphorylated and deactivated. CRTC2 then translocates into the nucleus and binds CREB, upregulating CREB-dependent gene expression; this in turn promotes overexpression of the ID1 oncogene and contributes to tumour development (Fig. 11).

ERBB2
Erythroblastosis oncogene B (ERBB2), also known as HER2 (human epidermal growth factor receptor 2), encodes a receptor tyrosine-protein kinase and is located on the long arm of human chromosome 17. The ERBB2 gene produces p185ErbB2, a 185 kDa transmembrane glycoprotein that interacts with the epidermal growth factor receptor. Increased expression of ERBB2 is crucial for the development of tumour cells: its upregulation may produce metastasis-related characteristics such as angiogenesis and invasion, or promote resistance to therapy, ultimately aggravating cancer cell metastasis and impairing responses to treatment, in part through MMP-9 and MMP-2 protein activities [80]. Upregulated ERBB2 also enhances VEGF expression in ovarian cancer cells, which in turn triggers a potent angiogenic response (Fig. 12).
MicroRNAs: a brief overview of their normal functions
MicroRNAs (miRNAs) are a class of small, non-coding RNA molecules, usually about 20-25 nucleotides in length [3,66]. Unlike messenger RNAs (mRNAs), which serve as templates for protein synthesis, miRNAs primarily regulate gene expression at the post-transcriptional level [66]. The biogenesis of miRNAs is a multi-step process [15]: i) Transcription: miRNAs are first transcribed by RNA polymerase II in the nucleus as primary miRNAs (pri-miRNAs), long stem-loop structures; ii) Processing: the pri-miRNAs are processed by the enzyme Drosha into precursor miRNAs (pre-miRNAs) of about 70 nucleotides; iii) Export: pre-miRNAs are exported to the cytoplasm via Exportin-5; iv) Dicer cleavage: in the cytoplasm, the enzyme Dicer cleaves the pre-miRNAs into mature miRNA duplexes; v) RISC incorporation: one strand of the duplex is incorporated into the RNA-induced silencing complex (RISC), becoming an active miRNA, while the other strand is generally degraded [15]. The function of an individual miRNA can be context-dependent: a single miRNA can have multiple target mRNAs and, likewise, a single mRNA can be targeted by multiple miRNAs. This creates a complex regulatory network in which miRNAs fine-tune gene expression to ensure cellular homeostasis and proper responses to environmental cues [31]. The normal functions of miRNAs include the following: i) the primary function of miRNAs is to regulate gene expression; they do this by binding to complementary sequences on target messenger RNA (mRNA) transcripts, which usually results in gene silencing; depending on the degree of complementarity between the miRNA and its target mRNA, this leads either to mRNA degradation or to inhibition of translation [67] (a schematic seed-match example is given after this list). ii) miRNAs play a crucial role during development and growth, helping in the precise temporal and spatial regulation of genes that drive processes such as cell differentiation, growth, and the timely death of cells that have completed their roles; by repressing non-essential or inappropriate genes in specific cell types, miRNAs help maintain the unique gene expression profiles and functions of those cells [51]. iii) some miRNAs control the cell cycle and thus play roles in cell proliferation [24]; miRNAs also regulate programmed cell death (apoptosis), while others influence cell survival; miRNAs can participate in cellular responses to various stresses, including DNA damage and oxidative stress, helping cells adapt and survive or commit to apoptosis [31]. iv) miRNAs take part in various metabolic processes, including fat metabolism, insulin secretion, and more [45]. v) some miRNAs shape the development and responses of immune cells, influencing processes such as inflammation and the body's response to pathogens [12]. vi) in the nervous system, miRNAs contribute to neuronal development, plasticity, and function, and they are crucial in processes such as neurogenesis and synaptic plasticity [30]. vii) several miRNAs influence the differentiation of hematopoietic stem cells into the various blood cell lineages. Some viruses encode their own miRNAs, which can interfere with host cellular functions to the advantage of the virus [64].
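As a concrete illustration of the targeting principle in point i) above, the short Python sketch below scans a 3'UTR for matches to a miRNA "seed" (nucleotides 2-8 of the mature miRNA), which is the minimal complementarity most target-prediction approaches look for. The miRNA and UTR sequences shown are invented placeholders, not real miRNA-3652 targets, so this is only a schematic of the matching logic.

```python
# Schematic miRNA seed matching: find positions in a 3'UTR that are complementary
# to nucleotides 2-8 (the "seed") of a mature miRNA. Sequences below are hypothetical.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based positions in the UTR where the miRNA seed pairs perfectly."""
    seed = mirna[1:8]                                   # nucleotides 2-8 of the miRNA
    # The mRNA site that pairs with the seed is its reverse complement.
    target_site = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(target_site) + 1)
            if utr[i:i + len(target_site)] == target_site]

mirna = "UAGCUGAUCCGAAUCGGUAAGC"      # hypothetical 22-nt mature miRNA
utr = "AAAUCGGAUCAGCUUUUGAUCAGCUAUU"  # hypothetical 3'UTR fragment
print(seed_match_sites(mirna, utr))   # positions of candidate seed-match sites
```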
In physiological conditions, miRNA-3652 has been implicated in the regulation of cell cycle progression and apoptosis [42]. It is known to target several mRNAs involved in the G1/S phase transition, thus maintaining cellular homeostasis [18]. Furthermore, miRNA-3652 plays a role in immune modulation by regulating T-cell activation and cytokine production [18]. Understanding these normal functions provides a baseline against which pathological changes, such as those observed in ovarian cancer, can be compared.

Rationale behind miRNA selection in ovarian cancer studies
The selection of specific miRNAs for study in ovarian cancer is not arbitrary but is grounded in their potential biological and clinical significance [5]. By understanding the rationale behind these selections, researchers can prioritize which miRNAs to study, accelerating discoveries that might eventually translate into clinical benefits [5,66]. In the context of ovarian cancer, certain miRNAs have emerged as central players, either as oncogenes or as tumor suppressors [28,61]. The rationale for selecting a specific miRNA for study within this malignancy often arises from a combination of factors [15,22,66]: i) Differential expression in ovarian tumors: many studies rely on high-throughput sequencing or microarray analyses to determine which miRNAs are upregulated or downregulated in tumor tissue compared with normal ovarian tissue; such differential expression can hint at a miRNA's involvement in tumorigenesis [22] (a minimal fold-change sketch is given after this list). ii) Links to clinical outcomes: some miRNAs are associated with particular clinical outcomes in ovarian cancer patients, such as overall survival, disease recurrence, or response to therapy; studying these miRNAs can provide insights into disease progression and prognosis [5,22,66]. iii) Role in key signaling pathways: ovarian cancer progression involves several critical signaling pathways, and miRNAs known to modulate these pathways, such as the PI3K/Akt pathway or the p53 signaling pathway, become key candidates for research [22,66]. iv) Evidence from other cancer types: a miRNA's established role in other malignancies may suggest a potential role in ovarian cancer, warranting investigation [22,66]. v) Functional impact on cellular processes: miRNAs that influence essential cellular processes such as cell proliferation, apoptosis, angiogenesis, or metastasis are of inherent interest, and their dysregulation can shed light on the mechanisms behind ovarian cancer development and progression [22,66]. vi) Potential for therapeutic modulation: miRNAs whose activity can feasibly be targeted, either to inhibit (in the case of oncomiRs) or to enhance (for tumor suppressor miRNAs), are particularly attractive for research, given the potential translational applications [22,66].
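The sketch below illustrates point i): a minimal log2 fold-change comparison of miRNA expression between tumor and normal samples, of the kind used to flag candidate miRNAs for follow-up. The expression values and the miRNA names are made-up placeholders, and a real analysis would use normalized counts and a proper statistical test (for example, as implemented in DESeq2 or limma) rather than a bare mean ratio.

```python
# Toy differential-expression screen: rank miRNAs by log2 fold change (tumor vs. normal).
# Values are hypothetical; real pipelines normalize counts and test significance.
import math

expression = {
    # miRNA          tumor samples        normal samples
    "miR-A": ([12.0, 10.5, 14.0], [55.0, 60.0, 48.0]),   # looks downregulated in tumor
    "miR-B": ([80.0, 95.0, 70.0], [20.0, 18.0, 25.0]),   # looks upregulated in tumor
}

def log2_fold_change(tumor, normal, pseudocount=1.0):
    """log2 of mean tumor expression over mean normal expression."""
    mean_t = sum(tumor) / len(tumor) + pseudocount
    mean_n = sum(normal) / len(normal) + pseudocount
    return math.log2(mean_t / mean_n)

for name, (tumor, normal) in expression.items():
    lfc = log2_fold_change(tumor, normal)
    direction = "up in tumor" if lfc > 0 else "down in tumor"
    print(f"{name}: log2FC = {lfc:+.2f} ({direction})")
```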
Several studies have demonstrated a correlation between dysregulation of miRNA-3652 and ovarian cancer. miRNA-3652 has been observed to be downregulated in the serum, plasma, and urine of ovarian cancer patients. However, the exact mechanisms and processes triggered by this dysregulation remain unknown, and further research is required to unravel the specific consequences and underlying pathways associated with the dysregulation of miRNA-3652 in ovarian cancer patients [28]. In this study, the researchers investigated the network of likely target genes of miRNA-3652 and its potential to promote cancer in OC cells. These mutagenic effects may create a tumour microenvironment with a variety of carcinogenic outcomes [70]. CCNL and ADAR1 have been reported to participate in modulation of the cell cycle. Numerous studies have confirmed that cell cycle checkpoints and cell cycle arrest are effective cellular defence mechanisms that stop cell cycle progression when a cell encounters genomic instability or mutagenic conditions [50,77]. By driving cells past the G1/G0 restriction point and into the S phase, positive regulation of CCNL and ADAR1 allows the cell cycle to progress, leading to the proliferation of tumour cells and to genetic instability in the tumour-producing environment. Our findings suggest that in OC, POU2F1 promotes tumour cell proliferation, EMT-driven migration, and invasion. When POU2F1 is upregulated, the expression of EMT genes aids EMT-driven migration in ovarian cells and initiates proliferation and immune modulation, triggering tumour-promoting factors that impair the immune system [75]. Because miRNA-3652 is downregulated in ovarian cancer cells, several signalling cascades, including the Wnt/β-catenin pathway, the MRTF-SRF pathway, and the Hedgehog signalling system, are stimulated, leading to the overexpression of the GLI2, SRF, and VCAN genes. Numerous studies have demonstrated that GLI2 influences other cellular cascades by positively regulating the Hedgehog signalling pathway (e.g., cellular differentiation, proliferation, cell survival, and invasiveness); as a result, GLI2 overexpression may affect how OC develops. We show that increased MRTF-SRF and Wnt/β-catenin signalling strengthens the carcinogenic environment by enhancing cell adhesion, proliferation, migration, and angiogenesis in ovarian cancer cells [16,47,54]. Activation of the Wnt signalling pathway therefore supports a role for miRNA-3652 in the progression of ovarian cancer. Because miRNA-3652 is downregulated in ovarian cancer cells, NFIB and SIK2 are increased, which causes ovarian cells to produce reactive metabolites, mutagenic DNA adducts, and free radicals [56,69]. The dysregulated expression of these genes disrupts normal cellular function, transforming ovarian cells into tumour cells characterised by enhanced proliferation, aggressive behaviour, and progression within the tumour microenvironment, and ultimately resulting in more malignant cells. These findings provide additional evidence that reactive metabolites, mutagenic DNA adducts, and free radicals play a critical role in the persistence and metastasis of cancer cells. In OC cells, TRPC3 enhances the persistence and carcinogenic potential of malignant cells. Overexpression of TRPC3 modulates the calcium signalling pathway associated with cell growth, multiplication, and anti-apoptosis in ovarian cells,
advancing the ovarian microenvironment toward a cancerous state [71]. Calcium signalling sequentially activates the cyclin B-CDK complex, facilitating passage of the cell cycle through checkpoint boundaries. In addition, upregulation of TRPC3 promotes activation of the PI3K-AKT pathway, which contributes to ovarian cancer progression by enhancing tumour invasion, survival, and advancement. Our data further suggest that CRTC2 and ERBB2 behave as oncogenes in ovarian cancer. CRTC2 production helps upregulate glucose levels in the body through stimulation of the LKB1/AMPK/TORC2 signalling pathway, and overexpression of CRTC2 triggers cancer cell proliferation and metastasis. At the same time, over-activation of the ERBB2 gene triggers cancer cell metastasis and supports the anti-apoptotic behaviour of cancerous cells. ERBB2 activates two other pathways simultaneously, PI3K and RAS: PI3K activates AKT and blocks the expression of PTEN, while RAS activates MAPK and ERK1/2 and triggers angiogenic and oncogenic programmes in the cells [59,80]. Investigating the pathway involving these eleven targeted genes provides valuable insight into the cancer-promoting potential of miRNA-3652 in ovarian cancer cells. The approach taken in this analysis is well regulated and reliable, and it offers a comprehensive understanding of the fundamental cellular mechanisms and signalling pathways involved in the development and malignancy of ovarian cancer. The findings presented above will contribute to the future design of novel medical interventions and assessments for ovarian cancer, with enhanced treatment efficacy.

Downregulation of miRNA in ovarian cancer
The aberrant expression of microRNAs (miRNAs), both upregulation and downregulation, is a hallmark of many cancers, including ovarian cancer; the downregulation of certain miRNAs can promote oncogenesis and cancer progression through several mechanisms [3,5,37,44]. Cellular signalling pathways often have feedback mechanisms: a downregulated miRNA might target a protein that, when upregulated, further suppresses the miRNA's expression, creating a feedback loop that reduces the miRNA's levels even further [44]. The downregulation of miRNAs in ovarian cancer can result from various genetic, epigenetic, and cellular mechanisms. Understanding these intricate pathways and regulatory mechanisms can provide insights for therapeutic interventions aimed at restoring the expression of tumor suppressor miRNAs or inhibiting the pathways causing their downregulation.
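The feedback-loop idea above can be made concrete with a toy mutual-repression model: the miRNA represses its target, and the elevated target in turn represses the miRNA, so an initial drop in the miRNA can lock itself in. The sketch below is a deliberately simplified discrete-time simulation with arbitrary parameters, not a model of any measured miRNA-3652 circuit.

```python
# Toy simulation of the mutual-repression feedback loop described above:
# the miRNA represses its target, and the target (once elevated) represses the miRNA.
# Parameters and the Hill-type repression form are arbitrary illustrative choices.

def repression(x, k=0.5, n=2):
    """Hill-type repression: production falls as the repressor x rises (n > 1 gives a switch)."""
    return k**n / (k**n + x**n)

def simulate(mirna0, steps=50, decay=0.3):
    mirna, target = mirna0, 0.5
    for _ in range(steps):
        # each species is produced under repression by the other and decays
        mirna += repression(target) - decay * mirna
        target += repression(mirna) - decay * target
    return round(mirna, 2), round(target, 2)

# Starting from high vs. low miRNA levels settles toward opposite states,
# illustrating how an initial drop in the miRNA can become self-reinforcing.
print("high initial miRNA ->", simulate(mirna0=2.0))
print("low  initial miRNA ->", simulate(mirna0=0.05))
```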
Translational studies related to miRNA in cancer therapies: focus on ovarian cancer
MicroRNAs (miRNAs) have emerged as essential regulators of gene expression, influencing a plethora of cellular processes, including proliferation, differentiation, and apoptosis [3]. In the context of cancer, aberrant miRNA expression can drive tumorigenesis, progression, and resistance to therapies; as such, miRNAs offer a promising avenue for therapeutic intervention, especially in cancers like ovarian cancer, where more effective treatments are sorely needed [3,15]. This section sheds light on translational studies focused on leveraging miRNAs in ovarian cancer therapies: i) diagnostic and prognostic biomarkers: several miRNAs have been identified as potential biomarkers for the early detection of ovarian cancer or for predicting disease outcome; for example, miR-200 family members are typically downregulated in epithelial ovarian cancer and have been linked to disease progression and chemotherapy response [15]. ii) therapeutic targets (oncomiRs): some miRNAs are overexpressed in ovarian cancer and contribute to tumorigenesis; these "oncomiRs" can be targeted for therapeutic silencing [15]; for example, miR-21 is upregulated in many cancers, including ovarian cancer, and has been associated with decreased apoptosis and increased chemotherapy resistance, and antagomiRs or locked nucleic acids (LNAs) targeting miR-21 have been explored to inhibit its function [15] (a minimal antagomiR-design sketch is given at the end of this section). iii) tumor suppressor miRNAs: some miRNAs act as tumor suppressors and are downregulated in cancers, and restoring their expression can inhibit tumor growth; for example, Let-7 is often downregulated in ovarian cancer, and synthetic Let-7 mimics have been developed to restore its function and thereby inhibit tumor growth. iv) miRNA replacement therapy: this involves introducing synthetic miRNA mimics into cells to restore the function of downregulated tumor suppressor miRNAs; clinical trials have been initiated for some of these mimics in various cancers, though the challenge remains to deliver them effectively to tumor cells without off-target effects. v) miRNA inhibition therapy: for oncomiRs that are overexpressed in cancer cells, antagomiRs or other miRNA inhibitors can be introduced to inhibit their function; challenges here also include specificity, delivery, and potential off-target effects [15]. vi) chemotherapeutic drug resistance: miRNAs have been implicated in chemotherapy resistance in ovarian cancer; for instance, miR-214 induces cisplatin resistance by targeting the PTEN/Akt pathway [41], and therapeutically targeting such miRNAs can potentially resensitize tumors to chemotherapy. vii) delivery: a significant challenge in miRNA-based therapy is effective delivery to tumor cells; diverse nanoparticles loaded with miRNAs are being explored for targeted delivery with minimal side effects [46].

The translation of miRNA research into tangible therapies for ovarian cancer is an exciting frontier in oncology; while challenges persist, particularly concerning delivery and specificity, the potential benefits are substantial. Continued research and innovation promise to usher in a new era of targeted, effective treatments for ovarian cancer based on miRNA modulation.
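As a schematic of the antagomiR idea in point ii) above, the sketch below derives an antisense inhibitor sequence as the reverse complement of a mature miRNA. The 22-nt sequence used here is an invented placeholder, not the actual miR-21 or miRNA-3652 sequence, and real antagomiR design additionally involves chemical modifications (e.g., 2'-O-methyl groups, cholesterol conjugation) that are outside the scope of this illustration.

```python
# Schematic antagomiR design: at the sequence level, an antagomiR is the reverse
# complement of the mature miRNA it is meant to silence. The miRNA below is hypothetical.

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement_rna(seq: str) -> str:
    """Return the reverse complement of an RNA sequence (5'->3')."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(seq))

mature_mirna = "UAGCUGAUCCGAAUCGGUAAGC"   # hypothetical 22-nt mature miRNA
antagomir = reverse_complement_rna(mature_mirna)
print(f"miRNA     5'-{mature_mirna}-3'")
print(f"antagomiR 5'-{antagomir}-3'")
```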
Conclusion and future perspectives
This study sheds significant light on the functional roles of miRNA-3652 in the progression of ovarian cancer. Our findings suggest that miRNA-3652 has noteworthy effects on cellular processes relevant to the malignancy, which makes it a potential biomarker and therapeutic target. Specifically, alterations in the expression of miRNA-3652 are closely linked to various hallmarks of ovarian tumor progression, including cell proliferation, migration, and apoptosis. The implications of these findings are considerable, especially given the current gaps in understanding and treating ovarian cancer effectively. The present study underscores the functional relevance of miRNA-3652 in ovarian cancer progression; further in-depth mechanistic studies and a comprehensive understanding of the molecular mechanisms by which miRNA-3652 exerts its effects will elucidate the broader role of microRNAs in cancer biology. Given the potential significance of miRNA-3652 in tumor progression, its utility as a therapeutic target should be rigorously examined, including the exploration of targeted delivery mechanisms and measures to ensure the specificity of interventions and minimize off-target effects. The potential of miRNA-3652 as a prognostic marker should also be evaluated in clinical settings: studies assessing the correlation between miRNA-3652 levels and clinical outcomes, including treatment response and patient survival, will be of immense value. It is also essential to explore whether the roles of miRNA-3652 observed in ovarian cancer are replicated in other cancer types; this would allow the exploration of common microRNA-mediated pathways in cancer and the development of broader-spectrum therapeutic interventions. As our understanding of miRNA-3652 grows, its potential synergy with existing therapeutic agents or strategies should be assessed; combining miRNA-based interventions with current therapies might offer enhanced therapeutic outcomes for ovarian cancer patients. In conclusion, this study has laid the groundwork for a deeper understanding of the role of miRNAs in ovarian cancer; a multidisciplinary approach that combines molecular biology, clinical research, and therapeutic design will be vital in harnessing the potential of miRNA-3652 in the battle against this devastating disease.

Figure captions
Fig. 2 Illustration of the CCNL1 signaling pathway. Positive upregulation of CCNL1 leads to surpassing of cell cycle checkpoints, tumour cell proliferation, and genomic instability, stimulating the tumour microenvironment.
Fig. 4 Illustration of the NFIB signalling pathway. Ovarian cancer cells exposed to an estrogen-independent environment are more likely to proliferate, behave aggressively, and have the potential to propagate.
Fig. 5 Illustration of the SIK2 signalling pathway. One of the key players in metastasis, SIK2, upregulates the PI3K-AKT pathway and FA oxidation, boosting the proliferation of ovarian cancer cells by releasing free radicals into the cellular microenvironment and turning it into a more malignant one. SIK2: Salt-Inducible Kinase 2; PI3-AKT: Phosphoinositide 3-Kinase-Protein Kinase B; FA: Fatty Acid.
Fig. 6 Illustrative scheme of the ADAR1 signalling pathway. ADAR1 is an enzyme that edits adenosine to inosine in dsRNA. ADAR1 is linked to cancer growth through editing of specific substrates, of which the AZIN1 (S367G) substitution is the most common; GLI1 editing leads to an R/G replacement; NEIL1 is hyper-edited; and editing at many positions of MDA5, PKR, and OAS inhibits these proteins, triggering inactivation of the immune system and promoting tumorigenesis in ovarian cells.
Fig. 7 Illustration of the TRPC3 signalling pathway. TRPC3 regulates calcium influx into the cytoplasm, allowing the cell cycle to surpass checkpoint boundaries and promoting a metastatic ovarian microenvironment by driving tumour proliferation and increasing stability rates.
Fig. 8 Illustration of the SRF signalling pathway. Upregulation of SRF elicits cell growth and proliferation through upregulation of the MRTF-SRF pathway, resulting in cancer metastasis through activation of MMPs and the c-Fos gene.
Fig. 9 Illustration of the VCAN signalling pathway. The tumour microenvironment is positively controlled by the increased expression of cancer-associated VCAN in malignant ovarian cells, which also stimulates TGF-β signalling, aiding ECM remodelling for the growth and propagation of cancer.
Fig. 12 Illustration of the ERBB2 signalling pathway. Upregulation of ErbB2 can either produce metastasis-related traits such as angiogenesis and invasion or upregulate resistance to therapy, enhancing cancer cell metastasis.
Table 1 Genes up-regulated by miRNA-3652 in Ovarian Cancer
2023-10-16T13:54:20.610Z
2023-10-16T00:00:00.000
{ "year": 2023, "sha1": "74f5a18f37e4b067720cf3c376c32ffe8fcfbd2b", "oa_license": "CCBY", "oa_url": "https://biosignaling.biomedcentral.com/counter/pdf/10.1186/s12964-023-01330-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9cd816ac48e53a77464b3fddc831f3b220986b70", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
169140176
pes2o/s2orc
v3-fos-license
Evaluation of Traditional Medicine Programs in Public Health Centre Mengwi, Bali
The Public Health Centre (PHC) in Mengwi is one of the PHCs that have implemented a traditional medicine program since 2012. However, the results of the program remain below the expected target. This study aims to evaluate the traditional medicine program at the PHC in Mengwi. It is a descriptive evaluation study using a qualitative method. Data were collected through in-depth interviews with 11 informants and analysed thematically using a program evaluation approach (program input and process). The results revealed a lack of staff knowledge and showed that the program has not been supported by dedicated funding or facilities. Guidance activities and visits by the PHC in Mengwi have not been carried out routinely. Traditional healers have little information about the requirements for obtaining a traditional healer's permit (SIPT) or registered letter (STPT). The availability of inputs for the traditional medicine program at the PHC in Mengwi is not yet optimal, and the implementation process has not run maximally. It is therefore recommended that the local government use the results of this program evaluation as a basis for providing further guidance to health workers and traditional healers.

INTRODUCTION
Treatment is divided into two major classes: modern medicine and traditional medicine. Modern medicine is scientific treatment 1, whereas traditional medicine is treatment using medicines and healers that relies on experience, skills passed down through generations, or training, and is applied in accordance with the norms prevailing in society 2. Traditional medicine involves a traditional healer, a person recognized and utilized by the community as being able to perform traditional treatment, and traditional medicines, which are materials or ingredients in the form of plant material, animal material, mineral materials, extract preparations, or mixtures of these substances, handed down from generation to generation 3. Internationally, the development of traditional health services has also received attention from various countries. The WHO Congress on Traditional Medicine held in Beijing in November 2008 stated that safe and beneficial traditional health services can be integrated into health care systems, and WHO encourages its member countries to develop traditional health services according to local conditions 4. In Indonesia, the percentage of traditional medicine use increased over the last seven years from 15.2% to 38.30% 5, and between 2010 and 2011 it rose very rapidly, reaching 49.53% in 2011 6. This increased public interest requires the government to supervise and foster traditional healers in order to protect the community and anticipate malpractice 7. As a means of such fostering, the government issued Decree of the Minister of Health of the Republic of Indonesia No. 1076/MENKES/SK/VII/2003 on the Implementation of Traditional Medicine. Under this regulation, traditional healers must register with the District Health Office to obtain a traditional healer's permit (SIPT) or a registered letter of traditional healer (STPT); in this way, the practice of traditional medicine can be continuously monitored by the local District or City Health Office, providing a security guarantee for the community that uses it 7.
The Public Health Centre in Mengwi is one of the PHCs in Bali that has implemented traditional health treatment efforts since 2012. This PHC has 65 traditional healers, but only 7.7% of them hold a SIPT or STPT. This low achievement falls short of the government's expectation that traditional healers hold a SIPT or STPT, and it raises the question of whether the program has been properly implemented. Because the program had not previously been evaluated by the PHC, this low achievement motivated the researchers to evaluate the traditional treatment program, starting from the inputs and processes carried out by the PHC.

RESEARCH METHOD
This research is descriptive research using a qualitative method. Sampling was done by purposive sampling, while data were collected from informants through in-depth interviews with the Head of the PHC in Mengwi, the holder of the traditional medicine program at the PHC in Mengwi, the traditional medicine program holder at the Badung District Health Office, four traditional healers in the working area of the PHC in Mengwi, and four community members who access traditional medicine services. Interview results were analysed throughout the research process using thematic analysis.

RESULT AND DISCUSSION
Based on the interview results, in terms of the availability of traditional treatment program inputs, the PHC has its own officer managing all activities, but in terms of quality, the officer has no educational background or training in this program; the officer is also responsible for other programs. In terms of operational financing, the traditional treatment program has no dedicated funds, and current funding is still drawn from other programs. As for the supporting facilities and infrastructure, the PHC only has books and stickers for traditional healers, obtained through personal donations from the Head of the PHC. The PHC has set a target that all traditional healers in its area hold a permit and a registered letter of practice for traditional healers (SIPT and STPT). Regarding the program implementation process, planning has been made well and in accordance with the targets set. Organizational development has taken shape through team formation in the PHC mini workshop. Implementation is in accordance with the planning, and supervision, monitoring, and assessment have also gone well, with traditional treatment program activities recorded and reported to the Health Office regularly every month. Based on interviews with traditional healers regarding SIPT or STPT letters, all of them revealed that they did not know the benefits of the letters, and some did not know how to obtain them. From the community's perspective, the practice of traditional medicine needs supervision from the PHC: all informants stated that PHC supervision is needed to protect the public from, for example, fake or abusive healers and criminal acts, and to ensure that traditional healers do not commit malpractice.

Input of the traditional medicine program
The input component of the traditional medicine program at Mengwi Health Center includes human resources, operational costs, facilities and infrastructure, and targets.
Human resources
Human resources in traditional medicine programs are health workers who have been trained in health and traditional medicine services and are appointed or assigned to manage traditional medicine programs in the PHC area 10. The availability of personnel is seen from two aspects: quantity and quality. In terms of quantity, the PHC already has one program holder who manages all traditional medicine program activities, but this person also holds other programs in the PHC. In terms of quality, the program holder's educational background is not appropriate, and the lack of guidance and training from the Health Service has led to limited knowledge of the traditional medicine program. Resources that are insufficient in both quantity and quality can make the implementation of the traditional medicine program less effective.

Cost
One program input that is very important for a company or organization is cost 11. Operational costs are the costs needed to carry out or utilize public health services whose main purpose is to maintain and improve health and to prevent disease 12. According to Ministry of Health Regulation No. 128/MENKES/SK/III/2004, the implementation of the various individual and public health efforts that are the responsibility of the PHC needs to be supported by sufficient financing. The main source of funding for the PHC is the Regency or City Government, in the form of APBD (Regional Revenue and Expenditure Budget) funds. Based on the evaluation, staff at the PHC said that the traditional treatment program currently receives no special fees or funding from the District Health Office. The continuity of activities still depends on other programs; for example, for transportation costs to reach traditional treatment centres, the PHC still uses funds from other programs. Regarding transportation costs, the Badung District Health Office said that the PHC could use BOK (Activity Operational Assistance) funds, which could cover transportation costs for fostering traditional healers in each village. The provision of BOK funds by the government is based on the consideration that the operational costs of PHCs are relatively small, because local government budget allocations are directed more toward curative and rehabilitative health efforts. BOK funds can be used to optimize the performance of health workers in the PHC in providing promotive and preventive services 13. BOK funds are prioritized for high-leverage activities to achieve the health indicators of the Millennium Development Goals (MDGs). The allocation of BOK funds to a PHC is determined based on priority issues, program coverage, geographical conditions, population, and the number of health workers. The use of BOK funds for PHC activities must be based on the planning agreed upon at the Mini PHC Workshop, held regularly according to local conditions 14. According to the Head of the PHC, for now BOK funds have not been able to support traditional treatment program activities because they have already been budgeted for other promotive and preventive programs in the PHC.

Facilities
To achieve a program's goals, a PHC must be supported by the availability of facilities and infrastructure. Without specific work facilities and infrastructure, a program cannot be completed as it should and may even face obstacles.
Even when a program has clear goals and objectives, its activities will not proceed as expected without adequate facilities and infrastructure resources 15. The facilities and infrastructure needed to support the implementation of traditional medicine programs include monitoring books for the local area of traditional healers, stickers for traditional healers, referral cards for assisted traditional healers, and cover letters for administering STPT or SIPT 10. Based on the evaluation, the Mengwi Health Center currently only has a local-area monitoring book for traditional healers and traditional healer stickers, both obtained through personal donations from the Head of the PHC. These conditions naturally affect the performance of the traditional medicine program, so the results of its activities are less than optimal.

Target
Every program carried out at the PHC has program objectives that cover the target population and targets 16. The targets of the traditional medicine program at the Mengwi Health Center ideally include a primary target, the traditional healers providing traditional medicine services in the working area of the Mengwi Health Center, and a secondary target, the community receiving those services. Based on the interviews with the PHC, the target of the traditional treatment program in the working area of the Mengwi Health Center is 65 traditional healers, and the goal is for each of them to hold a SIPT or STPT. Success in the traditional medicine program depends heavily on the availability of the various inputs needed. Much attention and effort are therefore required from the PHC and the District Health Office to fulfil all the input needs that are still lacking, so that all traditional treatment program activities can be carried out properly.

Process of the traditional medicine program
The management of PHC services, namely the implementation of various individual and public health efforts in accordance with PHC principles, needs to be supported by good management of PHC services. PHC service management is a series of activities that work systematically to produce effective and efficient PHC outcomes, and these systematic activities form the management functions. There are several management models used in PHCs: the PIE model (planning, implementation, evaluation), the POAC model (planning, organizing, actuating, controlling), the P1-P2-P3 model (planning; movement-implementation; supervision-monitoring-assessment), the ARRIF model (analysis, formulation, plan, implementation, and communication forum), and the ARRIME model (analysis, formulation, plan, implementation, monitoring, evaluation). These models share essentially the same management functions, and each PHC is free to determine the model it wants to implement 17. In implementing the traditional medicine program, the Mengwi Health Center carries out PHC management using the P1 (Planning), P2 (Movement-Implementation), and P3 (Supervision-Monitoring-Assessment) model.

Planning
Planning is one of the health management functions that must be carried out by the PHC in an effort to achieve the objectives of a program 18.
In general, planning can be described as a systematic process of preparing the activities needed to overcome the problems faced in order to achieve the stated goals. Planning is also a process that begins with formulating the goals of the PHC and ends with setting out alternative activities for achieving them. Without a planning function, there is no clarity about the activities the staff must carry out to achieve the PHC's goals 19. PHC Level Planning (PTP) is a systematic process for developing activities for the following year to increase the coverage and quality of health services to the community in an effort to overcome local health problems 20. Planning in the PHC is divided into two parts: the Proposed Activity Plan (RUK), which is prepared to request a budget, and the Activity Implementation Plan (RPK), which is prepared as the Plan of Action (POA) for the PHC. The first step in the PHC-level planning mechanism is to develop the RUK 21. The formulation of the PHC's RUK must take into account the various policies that apply globally, nationally, and regionally, in accordance with the data and information available at the PHC. The activity plan must also be accompanied by financing proposals for routine needs, facilities, infrastructure, and operations. The drafted RUK is discussed at the District Health Office; the RUK, summarized in the District Health Office proposal, is then submitted to the DPRD (regional parliament) to obtain financing approval and political support. After approval by the DPRD, it is handed back to the PHC through the District Health Office, and on that basis the PHC prepares the RPK. The preparation of the RPK is held in January of the year concerned in the first Mini Workshop forum 22. Based on the interviews with the PHC, the proposed plan of traditional treatment program activities was made at the beginning of the year in accordance with the targets that had been set. At the planning stage, the Head of the PHC, together with the staff, prepares the proposed activity plan and the proposed budget for the traditional treatment program; the proposed activities, in the form of the RUK, are then submitted to the Health Office of Badung Regency. The Badung Health Office stated that the proposed activities and budget from the PHC would be re-selected and later proposed to the regional government agency (BAPEDA). Once an activity is approved by BAPEDA, the Health Office hands it back to the PHC. Based on the approval of the proposed activities, PHC Mengwi formed the RPK through a Mini Workshop. In the Mini PHC Workshop, a team-building and mobilization activity can be carried out to obtain cooperation agreements within teams, determine the division of tasks and responsibilities, and designate activity plans; the work plan agreed upon in this forum then becomes a work guideline 23. Regarding budget planning for the costs of the traditional medicine program, interviews with the Head of the PHC showed that the budget plan for the traditional treatment program has not yet been approved by the local government. For budget planning, the PHC revealed that it had submitted a proposed budget to the Badung District Health Office amounting to Rp
58,000,000. This funding was budgeted for making traditional healer stickers, cards, survey sheets, and stationery, and for transportation and socialization costs, covering socialization for health workers and for traditional healers. Interviews with the holders of the traditional treatment program at the Badung District Health Office confirmed this: the proposed cost plan has been submitted by the PHC every year, but the local government has not yet approved it because it is still focused on other preventive and promotive programs.

Movement and Implementation
Movement and implementation form the second stage of the management function. Mobilization in the implementation of the PHC is a process of mentoring staff so that they are able and willing to work optimally, carrying out their duties in accordance with their abilities and skills 19, while implementation means carrying out activities that have been planned, executed by the organization or team that has been formed 24. The movement and implementation of traditional medicine at the PHC are carried out in accordance with the planning prepared on the basis of a priority scale, which includes the guidance and supervision of traditional healers 10. The evaluation of the implementation of traditional treatment program activities at PHC Mengwi showed that field implementation has been carried out every month, but the activities carried out are sometimes not in accordance with the plans made in the RPK. This discrepancy is caused by a lack of financial support and transportation equipment at the PHC, so activities such as coaching traditional healers must be adjusted to the visit schedules of other programs at the PHC. Traditional program visits are currently carried out together with other programs, namely the health promotion and environmental health programs, and in collaboration with PHC assistants and local village officials, so the schedules have to be coordinated.

Supervision, Monitoring, Assessment
Supervision and monitoring constitute a controlling process, namely continuously observing the implementation of activities against the plan that has been prepared and making improvements if there are deviations. Monitoring of the implementation of PHC activities includes direct observation, reviewing the results of the activities, reports, and mini workshop meetings 19. The benefit of supervision and monitoring is to find out whether implementation is in accordance with the plan made in the RPK, whether there are constraints or obstacles to implementation, the extent of staff and cross-sectoral involvement, and the use of the budget and facilities in the implementation of the program 24. Assessment, or evaluation, is a process to determine the value or level of success of a program's implementation in achieving a predetermined goal, that is, an orderly and systematic process of comparing the results achieved with predetermined benchmarks or criteria 19. Supervision, monitoring, and assessment are carried out periodically by the PHC for the management of traditional healing activities, including recording and reporting 10. At the supervision, monitoring, and assessment stage of the traditional medicine program, PHC Mengwi has recorded and reported all traditional medicine program activities to the Health Office regularly every month.
However, the report results show that outcomes are not in accordance with the targets set: coverage remains low, and most traditional healers practising in the PHC Mengwi area do not have a practice permit, that is, they hold neither a SIPT nor an STPT letter. Despite these reports, the Health Office of Badung Regency has not yet responded or followed up. In interviews, the Badung District Health Office stated that it has not yet been able to follow up because it has not received support from the central government on this issue, whereas, in accordance with Government Regulation No. 103 of 2014 concerning Traditional Health Services, the provincial and district governments are responsible for the implementation of traditional medicine. The provincial and district governments are obliged to provide guidance and supervision and to guarantee traditional health services that are safe for the community by facilitating traditional healers to obtain SIPT and STPT 26.

Traditional healers
A traditional healer is someone who is recognized and used by the community as a person able to perform traditional treatment, whose expertise is acquired from generation to generation, through study, or by attending education and training 3. Traditional medicine is divided into 16 types of treatment (Satria, 2013), namely: (1) acupuncture, the stimulation of acupuncture points by inserting needles, electric current (electro-acupuncture), heat (moxibustion), lasers (laser acupuncture), or pressure (acupressure); (2) the Alexander Technique, a psychophysical re-education to improve posture and coordination; (3) aromatherapy, the application of essential oils from plants, often accompanied by massage; (4) autogenic training, autosuggestion or self-hypnosis techniques for relaxation; (5) chelation therapy, treatment with intravenous EDTA for arteriosclerotic disease; (6) chiropractic, a health care system based on the belief that the nervous system plays an important role in health and that most diseases are caused by spinal subluxation and can be cured by spinal manipulation; (7) enzyme therapy, the administration of oral proteolytic enzymes for health purposes; (8) flower remedies, treatment with infusions of flower extracts for physical and emotional balance; (9) herbalism, treatment with medicinal plants; (10) homeopathy, treatment using the reflection effect of substances that produce symptoms of illness in healthy people; (11) massage, treatment by massaging certain locations; (12) osteopathy, therapy through massage, mobilization, and manipulation; (13) reflexology, treatment using manual pressure on specific areas (especially the soles of the feet) associated with internal organs; (14) spiritual healing, the channelling of healing energy from a therapist to the patient's body; (15) tai chi, physical and mental enhancement using a system of movement and body positions; and (16) yoga, treatment through stretching, breath control, and meditation. According to the Republic of Indonesia Minister of Health Regulation concerning the Implementation of Traditional Medicine, the classification and types of traditional medicine are divided into four categories:
1) traditional healers using skills, including massage healers, bone setters, circumcision practitioners, dukun, and reflexology practitioners; 2) traditional healers using remedies, including practitioners of traditional Indonesian herbal medicine (jamu), gurah, and physicians; 3) traditional healers using religious approaches, including those based on Islam, Christianity, Catholicism, Hinduism, or Buddhism; and 4) traditional healers using supernatural approaches, including inner-power healers, paranormals, and reiki masters 25. Based on these guidelines, PHC Mengwi groups traditional healers into two categories according to their expertise and skills: traditional herbal healers and traditional healers with skills. Interviews with the traditional herbal healers revealed that, in providing traditional medicine services, they usually treat patients by giving medicines derived from ingredients such as "tirta" (holy water), oil, and spices from TOGA (the Family Medicine Garden), while the traditional healers with skills provide traditional treatment services through massage. The role of the PHC in fostering and supervising traditional practitioners includes collecting data on traditional health services in its region, directly guiding and supervising traditional healers, giving traditional healers cover letters for obtaining an STPT or SIPT, and sending periodic reports to the District Health Office 26. Based on the reports, PHC Mengwi has low coverage on the indicator of traditional healers holding a SIPT or STPT, only 7.7% against a target of 30%. Of the four traditional healer informants interviewed, only one revealed that they already had a SIPT and STPT, obtained with the help of a local PHC assistant, while the other three revealed that they did not have a SIPT or STPT and had no desire to obtain them: they did not know about, and had not received information from the PHC on, the requirements for making a SIPT or STPT. In general, the requirements for obtaining a SIPT or STPT are simple: the traditional healer's personal data, a photocopy of the identity card (KTP), a certificate from the Village Head of the place where the healer works, a recommendation from the relevant association or professional organization in the field of traditional medicine, a photocopy of a certificate or diploma in traditional medicine, an introductory letter from the local PHC, two 4x6 cm photographs, and, for traditional healers applying for a SIPT, a map of the business location and a floor plan 25. The evaluation thus shows that traditional healers still have little interest in holding a SIPT or STPT and that the PHC still provides too little information about the requirements for administering these letters. Yet according to Minister of Health Decree No. 1076/MENKES/SK/VII/2003 concerning the Implementation of Traditional Medicine, all traditional healers must register with the Head of the District/City Health Office to obtain a permit or registration as a traditional healer (SIPT/STPT). In this way, the practice of traditional medicine can continue to be monitored by the local District/City Health Office, so that it can ultimately provide security guarantees for the community that uses these services (ITBI, 2012).
The evaluation also found that the PHC had visited about eight months earlier and provided guidance to keep developing themselves, handed out books, and attached stickers for the traditional healers; the PHC also said it would come every month, but there was no follow-up. Regular visits, coaching, and training from the PHC and the Health Office are felt to be very important and motivating for the informants. Based on these interviews, it appears that traditional medicine program activities, especially the coaching conducted by PHC Mengwi, have not been carried out routinely or optimally.

The community using traditional medicine
Traditional medicine is one of the forms of treatment and care outside of medical and nursing science that is widely used by the community to overcome health problems 3. Traditional medicine is still in demand in Indonesia: even though modern health services have developed, the number of people who use traditional medicine remains high 27. According to the 2001 National Socio-Economic Survey, 57.7% of Indonesians treated themselves, 31.7% used traditional medicines, and 9.8% chose traditional treatment methods 5. Among Indonesians who report illness, 65.01% choose self-treatment using traditional medicines or remedies 6. Based on the evaluation of traditional treatment users at PHC Mengwi, the reasons for using traditional medical services are that the costs are cheaper, the community is accustomed to these services, they are used to identify non-medical or 'niskala' illnesses, and they are seen as more natural, without chemicals and side effects. To maintain safety for users of traditional medicine, PHC Mengwi is obliged to keep fostering the practice of traditional medicine. Such coaching can be carried out through cross-program collaboration, namely with the health promotion and environmental health programs; one such collaboration is guidance on the hygiene behaviour of traditional healers and the cleanliness of the environment where traditional medicine is practised. Regarding hygiene and the skills observed in traditional medicine practice, the informants stated that the practices they accessed were clean and the skills were good, but one informant noted that the equipment used needs improvement, because the oil used for massage sometimes smells rancid and is sometimes even recommended for drinking. Based on these interviews, the traditional medicine program holders at the PHC need to improve cross-program collaboration, not only in terms of hygiene behaviour and a clean treatment environment but also in developing the facilities and equipment used by traditional healers. The informants also emphasized the importance of guidance and supervision from the PHC, not only to maintain cleanliness and skills but also to protect traditional healers from criminal acts and from errors in providing treatment. The need for traditional healers to hold registered letters and permits is also felt to be important by the informants, to generate trust so that people feel safer accessing traditional treatment services.
These evaluation results are similar to those of the study entitled "The Role of Battra in Traditional Medicine in the Agabag Dayak Community in Lumbis District, Nunukan District". The in-depth interviews in that study showed that traditional medicine still has a place alongside modern medicine; in principle, the profession of traditional healing is considered helpful and is still very much needed. The informants in that study hoped that traditional medicine could continue to provide treatment; they also hoped that the government would provide assistance to the profession in the form of funding, so that practitioners could focus more on their work as traditional healers, and that guidance for traditional healers would improve their knowledge and skills 26.

CONCLUSION
Success in this traditional medicine program depends heavily on the availability of the various inputs that support its implementation. The lack of local government support for the program is the major problem in terms of financing. In addition, the program's unmet objectives stem from facilities prepared by the PHC that still do not meet the standards of traditional medicine programs. The low interest of traditional healers in obtaining an STPT or SIPT is due to the lack of socialization about the benefits of these letters and the lack of training on traditional medicine provided by the relevant health authorities, both for traditional healers and for personnel at the PHC in Mengwi. The study also found that some of the equipment used by traditional healers can endanger health; the PHC in Mengwi is therefore expected to coach the practice of traditional medicine in terms of hygiene behaviour, the environment, and the equipment used. The need for traditional healers to hold registered letters and permits is also felt to be important by users of traditional medicine, to generate trust so that people feel safer accessing traditional medicine services. The researchers thank all the informants who contributed to this research, as well as fellow lecturers, family, and friends for their support.
THE ROLE OF EXCHANGE TRADED FUNDS TO DRIVE THE GROWTH OF THE MUTUAL FUND INDUSTRY IN THE CAPITAL MARKET ACCORDING TO A REVIEW OF ISLAMIC LAW

As a product that has only recently been listed on the stock exchange, the Exchange Traded Fund (ETF) has attracted the attention of various lines of capital market investors. The ETF is known as one of the mutual fund product innovations, introduced as a follow-up to POJK No. 49/POJK.04/2015. This study is a descriptive study of this product, with all the features, advantages, and uniqueness it has as a form of diversification of existing mutual fund products. This study also seeks to explore the strengths and weaknesses of ETFs, both internal and external. Interestingly, this descriptive study presents data on investor enthusiasm for ETF products. One of the main attractions of this product is that, apart from being traded on the stock exchange, the price is also very cheap. Beyond describing the product and its uniqueness, this study also discusses the review of Islamic law on ETF transactions on the stock exchange; when compared with the practice of buying and selling, there are many similarities. There are also several other interesting factors to observe regarding the considerations of investors in choosing mutual fund instruments, especially ETFs, as an alternative new investment in the capital market.

A. INTRODUCTION
As an alternative source of funding (Bakhri, 2018), the capital market is seen as effective in accelerating a country's development (Setiawan, 2015). Through the capital market, governments that need funds can issue bonds or debt securities and sell them to the public (Nasution, 2015). It is therefore not surprising that the progress of a country's economic development cannot be separated from the development of its capital market: the more developed and growing the capital market, the more the economy will also grow. This means that the dynamics of the capital market are a very good indicator of the progress or regression of a country. In a global economy that is already very open, the stock price index in each country can be known and serves as an indication of the dynamics of the global economy. One indicator of the level of progress of a country's capital market lies in the variety of available instruments: the more advanced the market, the more varied the instruments traded in it. The development of the capital market in Indonesia has shown rapid development over almost the last decade. This is because the data show an increasing demand for key instruments in the capital market such as stocks, bonds, and foreign exchange (Fardiansyah, 2002). The capital market does not only offer stocks but also other products such as mutual funds (Kutra, Azhara & Gulo, 2019). Mutual funds are known and liked by the people of Indonesia and showed significant development during the period 2000 to 2004; total assets under management at that time reached Rp 110 trillion. However, since March 2005 the NAV curve for mutual funds began to flatten, eventually falling by almost 70% (seventy percent); this is referred to as the saturation point, and it created a need for innovation in developing mutual funds (Pratomo, 2009). One such innovation is to develop mutual funds so that they can be traded on an exchange, known as an Exchange Traded Fund (ETF).
In summary, ETFs are mutual funds whose performance tracks an index and which are traded on the stock exchange floor like stocks. Globally, ETFs are reported to be continuing to experience significant development (Azmiana & Muhammad, 2021). Likewise in Indonesia, ETF-type mutual funds have continued to grow rapidly over the last three years, with more than 40 products available by February 2021 (Mahardika, 2020). This phenomenon is in line with the guidance of Islam, which indirectly orders Muslims to prepare for a better tomorrow, one way of which is by investing (Azis, 2010). That recommendation is stated in QS. An-Nisa': 9, QS. An-Nisa': 29 and QS. Az-Zalzalah: 1-3: wealth should be circulated rather than hoarded, thus opening up new jobs in the investment sector. ETF mutual funds, as a currently developing investment alternative, therefore need to be discussed further, so that their role in the mutual fund industry and the view of Islamic law on this investment method can be understood.

B. METHODOLOGY
This study uses a descriptive qualitative research method, that is, research that describes phenomena from the point of view of the informants. The study also uses a triangulation approach to relate a phenomenon as seen from different perspectives. Data collection techniques used are interviews, observations, field notes, documentation study, and the identification of reports on ETFs, which can be accessed through the official websites of the Indonesia Stock Exchange and the Financial Services Authority (www.IDX.co.id and www.ojk.go.id). Eighteen informants were involved in this study, grouped into four categories: practitioner-academics (4 people), students (12 people), Self Regulatory Organizations (1 person), and academics and Islamic leaders (1 person). Data processing in this study used the NVivo version 12 application.

Capital Market
According to Kasmir, a capital market is a meeting place for sellers and buyers to conduct transactions to obtain capital. Sellers in the capital market are companies that need capital (issuers), so they sell securities in the capital market, while the buyers (investors) are parties who want to buy shares in a company that they believe is profitable (Kasmir, 2014). The purpose of the capital market is basically to bridge the flow of funds from investors to companies that need funds, both for business expansion and for improvement of the company's capital structure. The relatively large capital requirements of companies, as well as the high public interest in investing, prompted the government to establish the Indonesia Stock Exchange (IDX). Experts divide the capital market into several types. Tandelilin divides the capital market into two: the primary market and the secondary market. The primary market is the market in which the issuer sells its securities to investors for the first time, and the secondary market is the market in which trading occurs by and between investors after the securities have been sold in the primary market (Tandelilin, 2017). The presence of the capital market facilitates investors and companies that need additional capital. The relationship is a mutualistic symbiosis, and both parties view the capital market as a medium for achieving investment (financial) goals under the rules that have been determined by the institutions and professions related to securities (Fauzan & Suhendro, 2018).
Indonesia Stock Exchange (IDX)
The Indonesia Stock Exchange (IDX) was established to conduct regular, fair, and efficient securities trading. To achieve this objective, the stock exchange is required to provide supporting facilities and to supervise the activities of its members. The vision of the Indonesia Stock Exchange is to become a competitive stock exchange with world-class credibility, with a mission to provide infrastructure that encourages orderly, fair, and efficient securities trading that is easily accessible to all stakeholders. As an example of such infrastructure, since March 2, 2009, IDX has used a trading system called the Jakarta Automated Trading System (JATS-NextG), replacing the previous JATS system and the manual system. Stock exchanges should also establish regulations relating to membership, listing, trading, securities equivalence, clearing and settlement of stock exchange transactions, and other matters relating to the stock exchange.

Mutual Fund
Linguistically, the Indonesian term for mutual fund, reksa dana, is composed of two words: reksa, which means to guard or maintain, and dana, which means (a collection of) money; thus, a mutual fund means money that is maintained (Soemantri, 2009). According to Law No. …, mutual fund types include … Funds (RDS), and many more. Understanding the types of mutual funds is very important for potential investors, because each type of mutual fund has different characteristics, returns, and levels of risk. This needs to be understood so that investors can match their investment choices to their desired financial goals. Prospective investors can also determine their tolerance for the level of risk that will be faced and adjust it to their respective financial conditions (Simatupang, 2010).

Exchange-Traded Funds (ETFs)
According to Hong Kong Exchanges and Clearing Ltd, an ETF is an investment product that represents a portfolio of securities designed to track the performance of an index, offering investors an innovative way to gain cost-effective exposure to a specific market or sector (HKEX, 2020), and it is traded on the exchange (Taunay, 2013).

Legal Basis and Terms of Sale
Buying and selling in fiqh terms is referred to as al-bai', which means selling, replacing, and exchanging something for something else. The pronunciation al-bai' in Arabic is also used for the word ash-syira (buy); thus, the word al-bai' means to sell, but at the same time it also means to buy (Haroen, 2000). QS. Al-Baqarah: 275 states: "And Allah has permitted trading and forbidden usury." Based on this verse, Allah has permitted His servants to buy and sell properly and has forbidden the practice of buying and selling that contains usury. It is also explained in QS. An-Nisa': 29, which means: "O you who believe, do not consume one another's wealth by vanity, except through commerce by mutual consent, and do not kill yourselves; verily Allah is Most Merciful to you." This verse clearly emphasizes that the law of buying and selling is jaiz (permissible) and prohibits buying and selling carried out in a way that is not justified by Allah, requiring that it be carried out on the basis of mutual consent and mutual benefit. That means the law of jaiz (permissible) in buying and selling does not rule out the possibility of the law of a particular sale changing: it may be permitted or not, depending on whether or not the terms and pillars of buying and selling are fulfilled (Susiawati, 2017).
According to the majority of scholars, as cited in (Susiawati, 2017), the pillars of buying and selling are divided into four, among others:
a. Akad (ijab-qabul). According to the Hanafiyah scholars, ijab-qabul is the expression of particular acts showing willingness (consent), spoken by the first party, whether the giver or the recipient.
b. Muta'aqidain (the two contracting parties). According to the fiqh scholars cited in (Syaifullah, 2014), a person transacting a sale must be aqil baligh (of sound mind and of age) and act of his own will, and the parties to the transaction must be different persons, in the sense that one cannot be a seller and a buyer at the same time.
c. Ma'qud 'alaih (the object) of the sale must be clean, usable, owned by the person making the contract, and known in substance, form, nature, and price.

The development of the capital market and the growing number of investors in Indonesia illustrate explicitly that investment is important. Jogiyanto defines investment as delaying current consumption so that it can be used in efficient production for a certain period (Jogiyanto, 2015). One of the newer investment instruments listed on the Indonesia Stock Exchange is the Exchange Traded Fund (ETF). ETFs are mutual funds in the form of collective investment contracts whose participation units are traded on the stock exchange; an ETF is therefore a combination of a stock, in terms of how it is transacted, and a mutual fund, in terms of how it is managed. The differences between stocks, mutual funds, and ETFs are summarised in a comparison figure (Source: Indonesia Stock Exchange, 2020), which illustrates that ETFs have several advantages compared to stocks and mutual funds. In addition to the advantages of ETFs discussed previously, the authors found several other advantages in the results of observations with informants (Source: NVivo analysis version 12, 2021; data processed). Informants said that ETFs were cheap. A further finding concerned capital gains and dividends: ETF investors still receive capital gains and dividends from the ETF units they buy. There are two types of ETF in Indonesia, namely active and passive ETFs. Passive ETFs predominantly follow the performance of a reference index, with the aim of replicating the index's performance; the performance of a passive ETF is usually "apple to apple", or close, when compared to the performance chart of the benchmark index. Active ETFs, by contrast, are actively managed, with the aim of beating market performance. ETF transactions are also divided into two markets, namely the primary and the secondary market. Trading proceeds as follows (Source: Indonesia Stock Exchange, 2021): open a securities account with a securities company; select an ETF; and buy the ETF by entering the desired ETF ticker code in the buy order menu of the securities trading application. The ETF ticker code generally starts with the letter "X", except for the two products that appeared earliest in Indonesia (RE-LQ45X and R-ABFII). The minimum purchase in the primary market is made through the unit creation process, so the minimum purchase is much greater, namely 1 unit creation, equal to 1,000 lots.
The price offered there is also cheaper than in the secondary market, but still competitive. The minimum purchase in the secondary market is smaller, namely 1 lot, equal to 100 Participation Units, and the transaction method is also easy, like stock transactions in general. The good growth shown by ETFs in Indonesia illustrates their potential for the future. With the advantages ETFs possess, it is not impossible that they will create a new investment trend alongside stocks and crypto, which are currently much discussed and hyped. When compared to several countries in Southeast Asia such as Singapore, Malaysia, the Philippines, and Thailand, Indonesia shows a significant increase in its growth chart and ranks first for ETF growth. The continuing upward trend of the JCI and the positive performance of the Indonesian economy in 2020 to 2021 present an opportunity for capital market growth and are a breath of fresh air for capital market investors. The involvement of ETFs in capital market development also needs to be taken into account: as an investment instrument that is relatively new to the stock exchange, ETFs have shown their role in encouraging capital market growth in Indonesia.

The Role of Exchange Traded Funds (ETFs) in Encouraging the Growth of the Mutual Fund Industry
In the development of ETFs, the Indonesia Stock Exchange also plays its role (Source: Indonesia Stock Exchange). The choice of ETF products for investing is very diverse, and there is even a link between the investment objective and the type of ETF product: for goals such as obtaining high dividends, caring for the environment and society, value investing, following the JCI, or even investing according to Islamic law, there is an ETF option. The choice of ETF can also be adjusted to the investor's budget or minimum capital; because of its low cost, an ETF can be purchased for less than IDR 10,000. Mutual funds as a whole still have the opportunity to continue to grow. Meanwhile, the amount of ETF assets under management and the number of products have continued to increase from 2016 to April 2021; the last recorded amount of assets was Rp 15.18 trillion across a total of 49 products. Like mutual funds, which remain in demand by the public, ETFs are starting to become known and liked by investors. The 49 ETF products listed on the exchange as of April 2021 represent only about 2% of mutual fund products as a whole, and the IDR 15.18 trillion in assets managed by ETFs as of April 2021 is about 2% of the total assets managed by mutual funds. These figures are expected to continue to move significantly in the future if ETFs are managed seriously, consistently, and professionally. Based on the data presented, we can see that ETFs are starting to become mutual fund products that are known and favored by investors because of their nature as mutual funds combined with the trading flexibility of stocks. In addition, the role of ETFs in the mutual fund industry is as a new investment alternative for investors, especially novice investors; ETFs can also be a first step before novice investors move into stocks directly.
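To make the purchase arithmetic above concrete, the following small sketch compares the minimum outlay in the two markets. The price per participation unit is hypothetical (chosen to be consistent with the "less than IDR 10,000 per lot" example above), while the lot and unit-creation sizes are those stated in the text (1 lot = 100 Participation Units; 1 unit creation = 1,000 lots).

```r
# Minimum-purchase sketch for an ETF. The unit price is hypothetical; the lot
# and unit-creation sizes follow the figures stated in the text above.
price_per_unit    <- 95      # hypothetical indicative price (IDR) per participation unit
units_per_lot     <- 100     # 1 lot = 100 participation units (secondary market minimum)
lots_per_creation <- 1000    # 1 unit creation = 1,000 lots (primary market minimum)

secondary_minimum <- price_per_unit * units_per_lot                       # cost of 1 lot
primary_minimum   <- price_per_unit * units_per_lot * lots_per_creation   # cost of 1 unit creation

format(c(secondary_market_IDR = secondary_minimum,
         primary_market_IDR   = primary_minimum), big.mark = ",")
```

With these hypothetical numbers, one lot in the secondary market costs IDR 9,500, while the primary-market minimum of one unit creation costs IDR 9,500,000, which illustrates why the secondary market is the accessible entry point for retail investors.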
Islamic Law's View on ETF Mutual Fund Transactions in Indonesia
Based on the Sharia Financial Development Report (LPKS) last issued by the OJK in 2019, total sharia capital market products amounted to 920 products, with total assets under management of IDR 4,569 trillion. The largest proportion was the category of sharia shares, worth IDR 3,744.82 trillion, with a market share of securities value of 51.55% (Source: Financial Services Authority, 2019). In addition to the Islamic capital market as a whole, Islamic mutual funds, with their various products, have also contributed to its development. Among sharia mutual fund products, the largest proportion is protected sharia mutual funds, with a share of 45.67%, while sharia ETFs are in last place with a share of 0.06%. These figures continue to improve, given the potential present in Indonesia. Investing in sharia instruments can be seen as an obligation for every Muslim; however, when investing, a Muslim must pay attention to the principles of sharia. Adiwarman Karim, in his writing "Implementation of Islamic Sharia in the Economy", puts forward the principles of Islamic economics (Santoso, 2004), including:
a. The principle of aqidah; QS Al-Maidah: 17, which means, "To Allah belongs all that is in the heavens and the earth and what is between them".
b. The principle of al-'adl (justice); QS Al-Hujurat: 9, which means, "Verily Allah loves those who act justly".
c. The principle of prophethood; QS Maryam: 56-57, which means, "everything from Allah and His Messenger must be true and only the truth".
d. The principle of the caliphate; QS Al-Hajj: 41, which means, "a leader who is guided will always encourage good and prevent evil".
e. The principle of ma'ad; QS Al-Qashash: 77, which means, "Seek your afterlife and do not forget your world".

In terms of the pillars and conditions of sale and purchase: first, the parties to a contract in Islam must be of sound mind, and the contract must be carried out by different persons. This requirement is fulfilled because, in an ETF transaction, the contract is executed by a broker who is an expert in the field and is an adult, and the transacting parties are distinct, since there are sellers/buyers as well as brokers. The second condition relates to the ijab qabul, which is reflected in the signing of the agreement on the transaction in the terms set out on the form, which explains that all transactions related to securities trading activities are fully submitted to the broker, and the transaction takes place in one assembly, namely at the dealing place. Third is the condition that goods can be delivered directly when a transaction occurs. When the practice of trading ETFs is compared with buying and selling activities according to Islam, especially regarding these conditions, many points of conformity are found, which can then be used as a reference in establishing the law for trading carried out online. For non-physical goods that cannot be delivered directly, delivery is represented by the transaction form, which functions as a sign of the handover of goods. As discussed earlier, the majority of scholars have agreed on the legal status of such matters, namely that they are permissible (mubah).
E. CONCLUSION
Based on the description above, the following conclusions can be drawn. First, the rising trend of the JCI and other capital market opportunities have become a breath of fresh air for capital market investors. As an investment instrument that is relatively new to the market, ETFs have been able to show their role: with a total of 49 ETF products and total managed funds of Rp 15.18 trillion, there is momentum for ETFs to continue to contribute to the development of the capital market. Second, ETFs are starting to become mutual fund products that are known and favored by investors because of their nature as mutual funds combined with the trading flexibility of stocks; this is reflected in the increase in the number of products listed on the exchange, as well as in the amount of funds under management, up to April 2021. Third, when the practice of ETF trading is compared with buying and selling activities according to Islam, especially regarding the conditions of sale, considerable conformity is found, which can then be used as a reference in establishing the law for trading carried out online. The transaction process, from the broker to trading through to settlement, does not deviate from Islamic rules, so the status of ETF transactions is permissible (mubah).
Does habitat complexity influence fish recruitment?

Human activities facilitate coastal habitat transformation and homogenization; the spread of marine invasive species is one example. This in turn may influence fish recruitment and the subsequent replenishment of adult assemblages. We tested the effect of habitat complexity on fish (Teleostei) recruitment by experimentally manipulating meadows of the habitat-forming invasive macroalga Caulerpa taxifolia (Chlorophyta). Among the fourteen fish species recorded during the experiment, only two labrids (Coris julis and Symphodus ocellatus) settled in abundance among these meadows. Patterns in the abundance of these juveniles suggested that reduced three-dimensional meadow complexity may reduce habitat quality and result in altered habitat choices and/or differential mortality of juveniles, thereby reducing fish recruitment and likely the abundance of adults.

Introduction
Habitat complexity (i.e. the three-dimensional arrangement of structures that form habitat, sensu August (1983)) exerts a strong influence on species diversity, abundance (Harborne et al., 2011a) and behavior (Harborne et al., 2011b). Atrill et al. (2000) and Horinouchi & Sano (1999) described that habitats with greater complexity typically support more species and individuals. For a given species, at a given life stage, differences in complexity between two habitats may result in differences in habitat quality in terms of the trade-off between food availability and predation risk (Dahlgren & Eggleston, 2000). This may lead to active habitat selection aimed at minimizing this trade-off and maximizing survival, or to differential mortality between habitats (Thiriet et al., 2014). Consequently, many species have very specific microhabitat requirements, which vary among species and life history stages (i.e. ontogenetic shifts in habitat use) (Vigliola & Harmelin-Vivien, 2001).
As a consequence, altering habitat complexity can have cascading effects on species composition and abundance. Many shallow subtidal habitats in the Mediterranean Sea have been modified by anthropogenic impacts (Sala et al., 1998; Francour et al., 1999; Milazzo et al., 2004; Mangialajo et al., 2008; Rovere et al., 2009; Coll et al., 2010; Montefalcone et al., 2010). These modifications include alteration of habitat complexity, by changing the composition of biotic and abiotic structural components. One Mediterranean example of such changes is the fragmentation and/or shoot density reduction of Posidonia oceanica (Linnaeus) Delile seagrass meadows due to repeated anchoring (Francour et al., 1999; Montefalcone et al., 2010); conversely, other examples illustrate the homogenization of seascapes through anthropogenic stressors. For example, artificial beach nourishment is known to homogenize mixed heterogeneous bottoms of pebbles, boulders and rocks, thereby reducing their habitat quality for Sparidae fish juveniles (Cheminée et al., 2014). Among macrophytes, seascape homogenization has also been reported through the introduction and dominance of invasive habitat-forming species, such as Caulerpa taxifolia (Vahl) C. Agardh and C.
cylindracea Sonder, two benthic macroalgae (Chlorophyta) that have been introduced into the Mediterranean (Levi & Francour, 2004; Longepierre et al., 2005; Klein & Verlaque, 2008; Francour et al., 2009; Molenaar et al., 2009; Box et al., 2010; Tomas et al., 2011). In many coastal sites, heterogeneous habitats such as rocky reefs (Cebrian et al., 2012), sandy bottoms, or seagrass meadows consequently tend to be replaced by homogeneous Caulerpa spp. meadows. In such areas, it has been suggested that the simple structure, i.e. low complexity, of Caulerpa spp. meadows reduces the three-dimensional complexity of habitats relative to natural heterogeneous rocky reef habitats (Harmelin-Vivien et al., 2001). This habitat simplification should be detrimental to fish assemblages (Francour et al., 1995) because of the associated loss of diversity and amount of shelter and food (Levi & Francour, 2004), which in turn reduces habitat quality (Dahlgren & Eggleston, 2000; Hindell et al., 2000). Similarly, in the case of Mediterranean Cymodocea nodosa seagrass meadows, another study (Cuadros, 2015) revealed that heterogeneous sectors of these meadows (i.e. scattered with boulders) supported more diversified and abundant juvenile fish assemblages. The author suggested that this is probably related to the diversified food and/or shelter resources obtained through complementarity and/or synergy between patch-types among the more complex sectors of the meadow.
In this context, it is crucial to understand the effect of habitat complexity on fish settlement and recruitment, because these are key events in the life history of individuals and therefore determine the replenishment of fish assemblages. In our study, settlement is defined as the arrival of early juvenile (post-larval) fishes (referred to as "settlers") within benthic habitats after their pelagic larval phase; recruitment corresponds to the subsequent incorporation of these juvenile fish into adult populations after their survival in nurseries and migration towards adult habitats (referred to as "recruits") (Levin, 1994; MacPherson, 1998; Beck et al., 2001). We use the term "juvenile" to encompass individuals present in the nursery habitats after settlement and until their dispersal (Cheminée et al., 2011). The maximum density of settlers is the best metric for the intensity of settlement events, i.e. the number of new individuals joining the benthic habitat in a given area (Macpherson et al., 1997). However, it does not necessarily reflect the final abundance of juveniles that recruit into the adult population: indeed, the initial number of settlers might be highly depleted through mortality (Macpherson et al., 1997; Arceo et al., 2012). Macpherson (1998) defined recruitment level as the number of juveniles remaining at the end of the post-settlement period. However, this does not take into account mortality of juveniles during their transition from nurseries toward adult habitats (Beck et al., 2001). A proxy of recruitment level is the number of juveniles surviving arbitrary periods of time after settlement (Macpherson & Zika, 1999). These variables can be assessed by monitoring the abundance of juveniles over the post-settlement period in the nursery until their dispersal towards adult habitats (Macpherson et al., 1997; Arceo et al., 2012).
In this paper, the three-dimensional structural complexity of a given habitat (here a Caulerpa taxifolia meadow) was manipulated in order to test the effect of the degree of complexity of this habitat on fish settlement and recruitment. We hypothesized that any increase in complexity within a homogeneous meadow should result in an increase in juvenile survival and therefore densities (Connell & Jones, 1991). We experimentally manipulated the degree of habitat complexity in a Caulerpa meadow, using arrangements of concrete blocks. In order to test our hypothesis, we studied temporal trends in the densities of fish juveniles in these manipulated habitats of different complexities.

Ethics statement
The observational protocol was submitted to the regional authority 'Direction interrégionnale de la mer Méditerranée' (the French administration in charge of maritime affairs), which did not require a special permit since no extractive sampling or animal manipulations were performed (only visual censuses in natural habitats), the study did not involve endangered or protected species, no work within any marine protected area was performed, and the accessed field site was not privately owned.

Study site, treatments and experimental design
The study was carried out along the coast of Cap Martin, near Menton, France (north-western Mediterranean; 43.75073° N, 7.48010° E). The study site was composed of flat, gently sloping sandy bottoms, covered by a dense and continuous Caulerpa taxifolia meadow, at 10 m depth. After its first appearance in the Mediterranean in 1984 in Monaco (Meinesz & Hesse, 1991), C. taxifolia invaded the study site in the 1990s (Francour et al., 1995; Meinesz et al., 1998) and formed large homogeneous meadows (more than 90% cover) from 5 to 15 m depth. We used concrete blocks (20 x 20 x 50 cm) to manipulate habitat complexity within the Caulerpa meadow. Blocks were arranged on the bottom, in the meadow, and we manipulated the density of blocks to create treatments of four complexities (Fig. 1). Each treatment was built by randomly spreading the blocks over a 2 x 20 m area parallel to the coast. Treatments were arranged in two parallel lines separated by 10 meters; each line contained one replicate for each treatment, and each replicate was separated by 3 meters. In one line the treatments were arranged from the highest to the lowest complexity; in the other line, the order was reversed. Overgrowing Caulerpa taxifolia fronds were regularly removed by SCUBA divers.

Fish counts
Fish counts were performed weekly from August 2000 to February 2001 (N = 18), when weather and diving conditions permitted. Counts were done by means of underwater visual census (UVC) (Harmelin-Vivien et al., 1985), by SCUBA divers at 0.5 m above the substrate; each replicate was censused in less than 5 minutes. All counts were made when visibility exceeded 3 m, and between 9 am and 11 am, a timeframe within which the studied species were active. Fish abundance was recorded in units of 1 up to 10 individuals; when more than 10 individuals were observed, abundance was recorded in classes: 10-20, 20-50, 50-100 individuals (Francour, 1999).
A total of 14 species belonging to the families Labridae, Serranidae and Sparidae were recorded (Table 1). Two species of labrids, Coris julis and Symphodus ocellatus, were the only two species to settle in high abundances on the treatments (see next section); we therefore focused analyses on these two species. We categorized individuals into three size classes (small, medium and large; each class encompasses 1/3 of the total maximum length), and further subdivided the "small" size class into "settlers", "post-settlers" and "recruits", as defined in the previous section.

Data analysis
Relative densities of each species among treatments (habitat complexity: H > M > L > V; see Fig. 1) and through time, for the period following the abundance peak of a given size class, were analyzed. To standardize for differences in fish abundance between treatments (n = 2 per complexity level), we expressed abundances as the percentage of the maximum abundance per treatment, to avoid density-dependent effects (Macpherson et al., 1997). Because assumptions of data normality were not met, the Scheirer-Ray-Hare test (SRH), a non-parametric alternative to two-way ANOVA (Sokal & Rohlf, 1995), was used to test the null hypothesis H0 of no difference in relative abundances between the four treatments and between sampling dates after the abundance peak. Sampling dates were considered independent because, given the mobility of the species, abundances at time t did not influence abundances at t+1. If H0 was rejected, i.e. at least one treatment or date differed from another, the SRH test was followed by a non-parametric post-hoc test for pairwise comparisons (Siegel & Castellan, 1988) in order to determine which treatment(s) and date(s) differed from each other. Separate analyses were conducted for each size class of each species. All statistical analyses were performed using the R 2.12.2 statistical software (R_Development_Core_Team, 2013).

Results
For both Symphodus ocellatus and Coris julis, peaks in the abundance of settlers, post-settlers and recruits succeeded each other sequentially from the start of the study (Fig. 2). The peak of C. julis abundance was recorded on the 27th of September (day 56) for settlers, the 3rd of October (day 62) for post-settlers and the 24th of October (day 83) for recruits. For S. ocellatus, these maxima were recorded on the 19th of September (day 48), the 3rd of October (day 62) and the 24th of October (day 83), respectively. Individuals belonging to the medium and large size classes were recorded during the entire survey period (August to February) and their mean abundances did not show significant differences between treatments or dates (Scheirer-Ray-Hare test; p > 0.05). The maximum densities recorded for the medium and large size classes were 1.62 and 0.37 ind./10 m² for C. julis and 3.50 and 0.50 ind./10 m² for S. ocellatus. Following the peak abundance for each size class of each species, significant differences in relative abundance between complexity treatments were revealed only for recruits of Coris julis (Scheirer-Ray-Hare test, H = 11.06, df = 3, p = 0.011, Table 2); besides, at peak abundance for this size class (recruits, day 83), initial recruit densities did not differ between treatments (Kruskal-Wallis test, Table 2). Pairwise comparisons of the density of recruits of C. julis revealed that relative abundances in habitats H and M (the most complex) were significantly higher than in habitats L and V (the least complex) (post hoc test, p < 0.0001, Fig.
3); they did not differ significantly between habitats H and M, or between habitats L and V, respectively (post hoc test, p > 0.05).

Discussion
In our study, only two species (Coris julis and Symphodus ocellatus), among the fourteen species we observed, settled in C. taxifolia meadows in substantial abundance. Comparing our treatments, for S. ocellatus the absence of significant differences between levels of complexity might be due to the low initial densities of individuals. In contrast, recruits of C. julis varied significantly with habitat complexity. We infer that this pattern is due to a lower habitat quality in the lowest complexity treatments, i.e. a higher mortality risk due to increased predation rate and/or reduced food availability (Dahlgren & Eggleston, 2000; Hindell et al., 2000), which in turn results in active habitat selection and/or differential mortality of juveniles (Thiriet, 2014; Thiriet et al., 2014). In our experiment, treatments differed in the number of refuges available (related to the number of concrete blocks) but probably did not differ in food availability, because blocks were regularly cleaned of any epibiota. We therefore hypothesize that the lower C. julis juvenile densities we observed in the less complex habitats are due to higher predation risk in less complex habitat, resulting in higher mortality or active movement towards more suitable habitats. Previous studies in the Mediterranean on the deployment of anti-trawling reefs among P. oceanica meadows showed an increase in species richness and in the abundance of species already present (Ramos-Espla et al., 2000). Similarly, the presence of scattered boulders among Cymodocea nodosa meadows (resulting in more complex meadow sectors, versus homogeneous ones) resulted in more diversified and abundant juvenile fish (Cuadros, 2015). This was attributed to the more diversified food and/or shelter resources (diversified ecological niches), obtained through complementarity and/or synergy (e.g. edge effects) between patch-types of the more complex sectors of the meadow. Furthermore, in our study, the absence of significant differences between complexity treatments for the smallest (settlers and post-settlers) size classes suggests that complexity did not equally affect fishes of all size classes, as previously shown for other species (Fisher et al., 2007). This might be because the smallest size classes (e.g. settlers, about 10-15 mm TL) may still find sufficient shelter despite the lower complexity, while larger individuals (e.g. recruits, >40 mm TL) cannot. Larger recruits of C. julis may not find sufficient space between thalli of C. taxifolia meadows; as a result, they may not be able to use them as a shelter habitat in the same way that they use Posidonia oceanica meadows (Garcia-Rubies & Macpherson, 1995), although we did not test this directly; in addition, they may not be able to use the C. taxifolia meadow understory as a foraging habitat as they do in habitats dominated by Dictyotales and Sphacelariales (Guidetti, 2004; Cheminée, 2012; Cheminée et al., 2013). Therefore, they may be more exposed to mortality by predation and/or starvation. For the "recruits" size class of C. julis, these abundance patterns were consistent through time during our study. Altogether, our results for C. julis are consistent with our initial hypothesis: the least complex habitat may have a lower habitat quality and therefore lower juvenile fish survivorship, resulting in increased mortality and/or active movements toward more complex habitats. Abundances of C.
julis were consistently higher than those of S. ocellatus in all treatments. In other macrophyte-formed habitats, e.g. Cystoseira spp. forests or P. oceanica meadows, the reverse has been observed: juveniles of S. ocellatus were consistently more abundant than those of C. julis (Francour & Le Direac'h, 2001; Cheminée, 2012; Cheminée et al., 2013). We hypothesize that the thicker body shape of S. ocellatus impairs its ability to hide between Caulerpa thalli; this restriction might not apply to the thinner C. julis individuals. The inter-thalli void (spaces between and under thalli) may indeed be larger below a Cystoseira or Posidonia canopy than below a C. taxifolia canopy, although this has not been measured. If this hypothesis is correct, C. taxifolia habitat may offer suitable refuges only for slim-bodied individuals such as C. julis. Studies are needed to quantify the habitat complexity differences, and their putative impact on juvenile assemblages, between Caulerpa-invaded and non-invaded Mediterranean substrates. Although our design did not allow us to test it, we hypothesize that our experimental set-up may reflect the natural complexity differences between totally invaded sites (i.e. substrate homogeneously covered by C. taxifolia, corresponding to our low complexity treatment) and non-invaded (or partially invaded) ones (substrate with heterogeneous habitat characteristics, corresponding to our complex treatment). Consequently, we hypothesize that in sites totally invaded and covered by C. taxifolia, the low habitat complexity (sensu August, 1983) resulting from habitat homogenization at both the micro-habitat scale (inter-thalli void) and the seascape scale (loss of habitat diversity) is detrimental for at least some species and might be detrimental to the nursery role of coastal habitats, notably because of decreased habitat quality in terms of shelter and/or food availability. If this hypothesis were validated, active habitat selection and/or higher mortality of recruits could explain the lower densities of adults previously observed in C. taxifolia meadows in comparison with un-invaded (and more complex) habitats (Francour et al., 1995; Harmelin-Vivien et al., 2001). Consequently, as proposed by Harmelin (1996), artificial habitats superimposed on large homogeneous C. taxifolia meadows could allow mitigation of these invasions by increasing the survival of fish recruits.
As a conclusion, we argue that Mediterranean fish assemblages rely on a complex mosaic of habitats and microhabitats suitable as nurseries for juveniles of different species (Cheminée et al., 2013; Thiriet et al., 2014). Alien species introduction is one of the main anthropogenic stressors acting on Mediterranean marine seascapes. In the case of Caulerpa spp., the invading species act as exotic engineers and tend to homogenize the seascape and reduce the diversity of habitats and microhabitats available (Harmelin-Vivien et al., 2001; Molenaar et al., 2009). Our study suggests that it is relevant to gather new information about the fish recruitment patterns that operate in these transformed systems, as well as about their causes. Additional manipulative studies, including more species and comparing sites before and after invasion or invaded versus un-invaded natural sites, are required to assess whether the relatively low diversity of fishes that settled in our Caulerpa meadows (compared to references in native habitats) is an artifact of the studied site and year or is truly an impact of habitat transformation. This can help us understand the role that the expansion of Caulerpa (and other structurally similar exotic species) can play in the recruitment of littoral fishes.

Fig. 2: Mean densities of Coris julis (a) and Symphodus ocellatus (b) settlers, post-settlers and recruits. Mean densities are given for various survey dates in each habitat complexity treatment. Error bars indicate standard error (n = 2). Dashed-line rectangle: see the detailed view for C. julis recruits in Figure 3.
Fig. 3: Relative abundance of Coris julis recruits from the date of peak abundance for each treatment. The y-axis is expressed as the mean of the ln-transformed proportion of the initial density at the peak (day 83); error bars indicate standard error (n = 2).
Table 1: Frequency of occurrence of the species recorded on each habitat complexity treatment between August 2000 and February 2001 (n = 18 censuses). Habitat treatments (replicates n = 2): High complexity (H), Medium complexity (M), Low complexity (L), Very low complexity (V).
Table 2: Results of Scheirer-Ray-Hare tests analyzing the effects of habitat complexity treatment and date on abundances of Coris julis and Symphodus ocellatus for the settler, post-settler and recruit size classes. Significant effects (p < 0.05) are marked with an asterisk. Df refers to degrees of freedom.
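For readers wishing to reproduce the kind of analysis described in the Data analysis section, the sketch below (in R, the software used in this study) illustrates the two steps on hypothetical count data: expressing abundances as a percentage of the maximum per treatment, and a Scheirer-Ray-Hare test computed as a two-way ANOVA on ranks (no tie correction). It is an illustrative implementation, not the authors' original code, and the data frame is invented.

```r
# Hypothetical juvenile counts for 4 complexity treatments x 6 dates x 2 replicates
set.seed(1)
juv <- expand.grid(
  treatment = c("H", "M", "L", "V"),    # habitat complexity levels
  date      = paste0("d", 1:6),          # census dates after the abundance peak
  replicate = 1:2                        # n = 2 replicates per treatment
)
juv$count <- rpois(nrow(juv), lambda = 5)

# Step 1: relative abundance as the percentage of the maximum count per treatment
juv$rel_abund <- with(juv, ave(count, treatment,
                               FUN = function(x) 100 * x / max(x)))

# Step 2: Scheirer-Ray-Hare test, i.e. a two-way ANOVA computed on ranks, where
# each factor's H statistic = SS(factor) / MS(total ranks), compared to chi-square
scheirer_ray_hare <- function(y, a, b) {
  a <- factor(a); b <- factor(b)
  r <- rank(y)                                     # mid-ranks used for ties
  tab <- anova(lm(r ~ a * b))                      # SS for a, b, a:b, residuals
  ms_total <- sum(tab[["Sum Sq"]]) / (length(r) - 1)
  H  <- tab[["Sum Sq"]][1:3] / ms_total
  df <- tab[["Df"]][1:3]
  data.frame(term = rownames(tab)[1:3], H = H, df = df,
             p.value = pchisq(H, df, lower.tail = FALSE))
}

scheirer_ray_hare(juv$rel_abund, juv$treatment, juv$date)
```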
Health Equity Assessment Toolkit (HEAT): software for exploring and comparing health inequalities in countries

Background: It is widely recognised that the pursuit of sustainable development cannot be accomplished without addressing inequality, or observed differences between subgroups of a population. Monitoring health inequalities allows for the identification of health topics where major group differences exist, dimensions of inequality that must be prioritised to effect improvements in multiple health domains, and also population subgroups that are multiply disadvantaged. While availability of data to monitor health inequalities is gradually improving, there is a commensurate need to increase, within countries, the technical capacity for analysis of these data and interpretation of results for decision-making. Prior efforts to build capacity have yielded demand for a toolkit with the computational ability to display disaggregated data and summary measures of inequality in an interactive and customisable fashion that would facilitate interpretation and reporting of health inequality in a given country.
Methods: To answer this demand, the Health Equity Assessment Toolkit (HEAT) was developed between 2014 and 2016. The software, which contains the World Health Organization's Health Equity Monitor database, allows the assessment of inequalities within a country using over 30 reproductive, maternal, newborn and child health indicators and five dimensions of inequality (economic status, education, place of residence, subnational region and child's sex, where applicable).
Results/Conclusion: HEAT was beta-tested in 2015 as part of ongoing capacity building workshops on health inequality monitoring. This is the first and only application of its kind; further developments are proposed to introduce an upload data feature, translate it into different languages and increase interactivity of the software. This article will present the main features and functionalities of HEAT and discuss its relevance and use for health inequality monitoring.

Background
The 2015 launch of the Sustainable Development Goals (SDGs) marks a paradigm shift in the global discourse on poverty and development. It is widely recognised that the pursuit of sustainable development cannot be accomplished without addressing inequality. Goals 5 (gender equality) and 10 (reducing inequality within and among countries) reflect this shift [1-3], aptly conveyed in the Secretary General's simple appeal to "leave no-one behind" [1, 4]. A key step towards achieving the goal of leaving no-one behind is ensuring the availability of data for all countries, including the least developed, on key development indicators disaggregated by dimensions of inequality like income, sex, age, and geographic location [5, 6]. The disaggregated data can then be used to track and benchmark progress within and between countries. More specifically, Goal 3 calls for ensuring healthy lives and promoting well-being for all at all ages and universal health coverage, making explicit a commitment to health equity [2, 5, 7-10]. Monitoring health inequalities can identify progress over time, highlighting the impact of health policies, programs, and interventions on the most-disadvantaged subgroups. It can also serve as a warning system when health differences between population subgroups widen [11, 12]. Health inequality monitoring entails collecting, analysing, interpreting, and reporting disaggregated health data.
While data availability is gradually improving, there is a commensurate need to increase, within countries, the technical capacity for analysis of these data and interpretation of results for decision-making [13]. It has been found that routine reviews of health sector performance (such as annual health sector reviews) tend to report national averages and occasionally averages for sub-populations (e.g. urban and rural residents). This level of data disaggregation does not allow for a more critical analysis of inequality, including trends or benchmarking, that could assist with the policy/programme design or refinement [14]. Indeed, in the absence of appropriately disaggregated data, there is a danger that national health averages could improve without any improvement in health inequality [11,15]. The World Health Organization (WHO) has been working closely with national governments to build capacity for health inequality monitoring [13]. Training workshops on measuring and monitoring health inequalities have covered a large number of countries, giving an opportunity for government decision-makers to interpret data on health inequalities and to carry out priority-setting based on this appraisal. Throughout this process, it has been recognised that interpretation and priority-setting is greatly enhanced if analysts are equipped with a user-friendly tool that can be used to synthesize and visualize disaggregated data as well as summary measures of inequality (like differences and ratios [11]). It is also helpful if analysts can see the latest data on the status of health inequality and see change over time (i.e. whether inequality has been increasing or decreasing as distinct from what average trends are showing). In the initial training workshops, the pivot function in Microsoft Excel was used to show disaggregated data in tables and charts. Summary measures were calculated using the publicly available Health Disparity Calculator software (HD*Calc) (seer.cancer.gov/hdcalc) [16,17]. While this software combination could be used to perform the necessary analyses, it was inflexible. Excel had only limited interactivity features and did not allow for quick comparisons of multiple dimensions of inequality. HD*Calc required the creation/importation of specific file formats for analysis, and could only provide results for one indicator and one dimension at a time while importing data from an Excel file. This precluded the comparison of one indicator against another or the comparison of the same indicator with multiple inequality dimensions. These elements of interactivity were identified as crucial for the interpretation of the data and subsequent priority-setting. Over multiple workshops it became clear that there was a demand for a toolkit with the computational ability to display disaggregated data and summary measures in an interactive and customisable fashion that would facilitate interpretation and reporting of health inequality in a given country. To answer this demand, the Health Equity Assessment Toolkit (HEAT) was developed. During its development, HEAT was tested in capacity-building workshops on health inequality monitoring in the WHO Eastern Mediterranean Region (February 2015) and the WHO Region of the Americas (December 2015). In these workshops, there was strong endorsement of and appreciation for the software from participants representing health ministries and statistical agencies as well as trainers from reputed academic institutions. 
Feedback was received on the interface, technical aspects, aspects related to training and facilitation of the use of the software, as well as confirmation of functionalities to be included in HEAT going forward. HEAT is intended to be used primarily by those who are familiar with health information systems and have basic skills in interpreting health-related data. This may include technical staff (for example, in ministries of health and statistical offices), public health professionals, policy-makers, researchers, and students. This article will present the main features and functionalities of HEAT and discuss its relevance and use for health inequality monitoring.

Implementation
HEAT was developed using the free and open source statistical software R (https://www.r-project.org) and the R package shiny (https://cran.r-project.org/web/packages/shiny). R is a free and open source software environment for statistical computing and graphics that operates in a Windows, Mac OS X, and Linux environment. Key R packages used in the tool implementation include dplyr for data analysis and management, as well as the packages ggplot2, RColorBrewer, grid and gridExtra for graphing of the multi-dimensional data [18-22]. Shiny is a free, open source, extensible web applications framework for R that allows the creation of a rich, interactive web interface for querying and summarising data as tables, free text, or graphs. A shiny application can either operate on a local machine, using a standard web browser to manage the interaction with a local instance of R, or it can operate on an internet-connected server (http://www.shinyapps.io). The HEAT source code was published under the GNU General Public License Version 2 (https://www.gnu.org/licenses/gpl-2.0) and is freely available through GitHub (https://github.com/WHOequity/HEAT-1.0). Summary features of HEAT are provided in Table 1. HEAT is available as an online application and as a standalone version for use offline. Both versions can be accessed through the WHO website (http://www.who.int/gho/health_equity/assessment_toolkit/). The online version can be accessed using any web browser on all desktop or laptop computers and mobile devices (a minimum screen size of 7.9 inches is recommended). The standalone version can be used on computers with Windows or Macintosh operating systems (separate packages are available for Windows and Macintosh). The standalone packages can be downloaded as .zip files that include portable versions of R and Mozilla Firefox, and do not require any additional software or installation. HEAT comes pre-installed with the WHO Health Equity Monitor database [23]. The 2015 update of the database draws on Demographic and Health Survey (DHS) as well as Multiple Indicator Cluster Survey (MICS) data from 94 countries, mostly low- or middle-income, collected between 1993 and 2013. For almost three quarters of the countries, data are available for at least two time points. The database includes more than 30 Reproductive, Maternal, Newborn and Child Health (RMNCH) indicators covering both health interventions and health outcomes. Data have been disaggregated by five dimensions of inequality: economic status, education, place of residence, subnational region and child's sex (where applicable). The database is updated regularly.
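As a minimal illustration of the shiny pattern just described (selection inputs driving a reactive ggplot2 graph), the sketch below uses a small hypothetical data frame; it is not HEAT's actual source code, which is available in the GitHub repository cited above.

```r
library(shiny)
library(ggplot2)

# Hypothetical disaggregated estimates for two indicators across wealth quintiles
demo_data <- data.frame(
  indicator = rep(c("DTP3 immunisation", "Skilled birth attendance"), each = 4),
  subgroup  = rep(c("Quintile 1", "Quintile 2", "Quintile 3", "Quintile 4"), 2),
  estimate  = c(62, 70, 78, 88, 48, 61, 75, 90)
)

ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      # Selection panel: the user chooses which indicator to display
      selectInput("indicator", "Health indicator",
                  choices = unique(demo_data$indicator))
    ),
    mainPanel(plotOutput("bar"))
  )
)

server <- function(input, output) {
  # The bar graph re-renders whenever the selected indicator changes
  output$bar <- renderPlot({
    d <- demo_data[demo_data$indicator == input$indicator, ]
    ggplot(d, aes(x = subgroup, y = estimate)) +
      geom_col() +
      labs(x = NULL, y = "Estimate (%)", title = input$indicator)
  })
}

shinyApp(ui, server)
```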
In addition, fifteen widely used summary measures of inequality have been calculated in HEAT: seven absolute measures and eight relative measures relevant for health inequality monitoring [11, 14]. An overview of all summary measures, including their definitions, formulas and application to the dimensions of inequality, is provided in Table 2.

Table 1 Summary features of HEAT
Availability: HEAT is available as an online application and as a standalone version for use offline.
Compatibility: The online version can be accessed using any web browser on all desktop or laptop computers and mobile devices (a minimum screen size of 7.9" is recommended). The standalone version can be accessed on all computers with a Windows or Macintosh operating system.
Installation: The online version requires no installation. The standalone version is available in a zip folder and needs to be extracted and saved to the computer's hard drive. The extracted HEAT folder contains portable versions of the R statistical software and the web browser Mozilla Firefox, which are required to run HEAT but do not themselves require any installation. The standalone version can simply be launched by double-clicking the start file.
Graphs: Data can be visualised in bar graphs, line graphs and scatterplots. Users can adjust the height and width of graphs, specify axes ranges and add titles and axis labels. In addition, users can display 95 % confidence intervals. Graphs can be exported as pdf, jpg or png files.
Supporting material: The user manual provides detailed information on how to set up and work with HEAT; each feature of the toolkit is explained in detail and recommendations are made on how best to assess and interpret the data. The technical notes provide detailed information about the data displayed in HEAT, including the disaggregated data from the WHO Health Equity Monitor database and the 15 summary measures of inequality that were calculated based on the disaggregated data. The indicator compendium includes a comprehensive definition of each indicator included in the WHO Health Equity Monitor database.

Table 2 Overview of summary measures and dimensions
Absolute measures:
- Absolute concentration index (ACI): a complex, weighted measure of inequality that indicates the extent to which a health indicator is concentrated among the disadvantaged or advantaged, on an absolute scale.
- Between-group variance (BGV): a complex, weighted measure of inequality that shows the squared difference between each subgroup and the national level, on average. The BGV is sensitive to large deviations from the national level (by use of squaring).
- Difference (D): a simple measure of inequality that shows the absolute inequality between two subgroups.
- Mean difference from the best performing subgroup (MDB): a complex, weighted measure of inequality that shows the difference between each subgroup and the best performing subgroup, on average.
- Mean difference from the mean (MDM): a complex, weighted measure of inequality that shows the absolute difference between each subgroup and the national level, on average.
- Population attributable risk (PAR): a complex, weighted measure of inequality that shows the potential for improvement in the national level of a health indicator that could be achieved if all subgroups had the same level of health as a reference subgroup.
- Slope index of inequality (SII): a complex, weighted measure of inequality that represents the absolute difference in predicted values of a health indicator between the most-advantaged and most-disadvantaged (or vice versa for adverse health outcome indicators), while taking into consideration all the other subgroups, using an appropriate regression model.
Relative measures:
- Index of disparity (IDIS): a complex measure of inequality that shows the proportional difference between each subgroup and the national level, on average.
- Kunst Mackenbach index (KMI): a complex, weighted measure of inequality that represents the ratio of predicted values of a health indicator of the most-advantaged to the most-disadvantaged (or vice versa for adverse health outcome indicators), while taking into consideration all the other subgroups, using an appropriate regression model.
- Mean log deviation (MLD): a complex measure of inequality that takes into account the population share of each subgroup. The MLD is sensitive to large deviations from the national level (by use of logarithm).
- Population attributable fraction (PAF): a complex, weighted measure of inequality that shows the potential for improvement in the national level of a health indicator, in relative terms, that could be achieved if all subgroups had the same level of health as a reference subgroup.
- Ratio (R): a simple measure of inequality that shows the relative inequality between two subgroups.
- Relative concentration index (RCI): a complex, weighted measure of inequality that indicates the extent to which a health indicator is concentrated among the disadvantaged or the advantaged, on a relative scale.
- Relative index of inequality (RII): a complex, weighted measure of inequality that represents the relative difference (proportional to the national level) in predicted values of a health indicator between the most-advantaged and most-disadvantaged, while taking into consideration all the other subgroups, using an appropriate regression model.
- Theil index (TI): a complex measure of inequality that takes into account the population share of each subgroup. The TI is sensitive to large deviations from the national level (by use of logarithm).
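As a minimal sketch of how a few of the simpler measures in Table 2 can be computed from disaggregated data, the following R snippet uses hypothetical wealth-quintile estimates of a favourable (coverage-type) indicator; it follows the verbal definitions above and is not HEAT's implementation.

```r
# Hypothetical subgroup estimates (%) for a favourable indicator, ordered from
# most disadvantaged (Q1) to most advantaged (Q5), with population shares
estimate <- c(Q1 = 45, Q2 = 58, Q3 = 66, Q4 = 74, Q5 = 85)
pop      <- c(0.22, 0.21, 0.20, 0.19, 0.18)

national_average <- sum(pop * estimate)      # population-weighted national level

# Difference (D) and Ratio (R): simple pairwise measures comparing the
# most-advantaged with the most-disadvantaged subgroup
D <- unname(estimate["Q5"] - estimate["Q1"])
R <- unname(estimate["Q5"] / estimate["Q1"])

# Between-group variance (BGV): weighted squared deviations from the national level
BGV <- sum(pop * (estimate - national_average)^2)

# Population attributable risk (PAR) and fraction (PAF), taking the
# best-performing subgroup as the reference for a favourable indicator
PAR <- max(estimate) - national_average
PAF <- 100 * PAR / national_average

round(c(D = D, R = R, BGV = BGV, PAR = PAR, PAF = PAF), 2)
```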
The KMI is a complex, weighted measure of inequality that represents the ratio of predicted values of a health indicator of the most-advantaged to the mostdisadvantaged (or vice versa for adverse health outcome indicators), while taking into consideration all the other subgroupsusing an appropriate regression model. The MLD is a complex measure of inequality that takes into account the population share of each subgroup. The MLD is sensitive to large deviations from the national level (by use of logarithm). Population attributable fraction (PAF) The PAF is a complex, weighted measure of inequality that shows the potential for improvement in the national level of a health indicator, in relative terms, that could be achieved if all subgroups had the same level of health as a reference subgroup. eight relative measures relevant for health inequality monitoring [11,14]. An overview of all summary measures, including their definitions, formulas and application to the dimensions of inequality is provided in Table 2. The software displays disaggregated data and summary measures of inequality in tabular and graphical format, allowing for interactivity (e.g. multiple indicators or inequality dimensions) may be viewed at the same time, views can alternate between latest status and a chosen number of years for a single country. Results & discussion The features of this software were conceptualized and developed between 2014 and 2016 in conjunction with capacity-building activities on health inequality monitoring involving multiple workshops with participants from a large number of countries. Since HEAT automates the computational tasks of calculating summary measures of inequality and visually depicts disaggregated data and summary measures of inequality for the user, the advantage is that data is ready for interpretation, allowing users to focus on the assessment of inequalities to move from data to action. To enable this, the HEAT interface has four main tabs: Home (which is the starting/homepage view), Explore Inequality, Compare Inequality, and About. The homepage provides an introduction and citation information. The About page includes tabs displaying the User manual, Technical notes, Indicator compendium, Software information, License information, Feedback information and Acknowledgements. The structure of the tookit is indicated in Fig. 1; more details on the remaining two key tabs, Explore Inequality and Compare Inequality, are provided below. Explore inequality Explore Inequality comprises four tabs, displayed in a horizontal panel at the top, to view the data in tabular and graphic format: Disaggregated data (tables), Disaggregated data (graphs), Summary measures (tables), and Summary measures (graphs). Disaggregated data (tables) presents a table with data on chosen health indicators by population subgroups (classified by dimension of inequality) in a selected country of interest for a given year, or multiple years. Disaggregated data (graphs) presents horizontal line graphs (equiplots [24]) or bar graphs with health data for population subgroups for one or more survey years in a selected country of interest. Summary measures (tables) presents a table with chosen summary measures of inequality for a selected country of interest for a given year, or multiple years. Summary measures (graphs) presents line graphs or bar graphs for a chosen summary measure of inequality for one or more survey years in a selected country of interest. 
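The measure definitions above translate into short computations on disaggregated data. Below is a minimal Python sketch of a few of the simpler measures (HEAT itself is implemented in R); it follows the plain textbook forms of the formulas, while the exact formulas, weighting and scaling used by HEAT are documented in its technical notes, and the column names and example numbers here are illustrative assumptions.

```python
# A minimal sketch of some of the summary measures described above, computed
# from disaggregated subgroup data (illustrative, not the HEAT implementation).
import numpy as np
import pandas as pd

def summary_measures(df, estimate="estimate", share="pop_share"):
    """One row per subgroup: a subgroup estimate (e.g. coverage in %) and the
    subgroup's population share (shares summing to 1). Assumes a favourable
    indicator, with the best-performing subgroup taken as the reference."""
    y = df[estimate].to_numpy(dtype=float)
    w = df[share].to_numpy(dtype=float)
    mu = float(np.sum(w * y))                        # weighted national average
    return {
        "national_average": mu,
        "D": y.max() - y.min(),                      # difference between the extreme subgroups
        "R": y.max() / y.min(),                      # ratio between the extreme subgroups
        "BGV": float(np.sum(w * (y - mu) ** 2)),     # between-group variance
        "MDM": float(np.sum(w * np.abs(y - mu))),    # mean difference from the national level
        "MDB": float(np.sum(w * (y.max() - y))),     # mean difference from the best-performing subgroup
        "MLD": float(np.sum(w * np.log(mu / y))),    # mean log deviation
        "PAR": y.max() - mu,                         # population attributable risk
        "PAF": 100 * (y.max() - mu) / mu,            # population attributable fraction (%)
    }

# Illustrative example: coverage of a health intervention by wealth quintile.
quintiles = pd.DataFrame({
    "estimate":  [42.0, 55.0, 63.0, 71.0, 88.0],     # poorest ... richest
    "pop_share": [0.2, 0.2, 0.2, 0.2, 0.2],
})
for name, value in summary_measures(quintiles).items():
    print(f"{name:>16}: {value:.2f}")
```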
In all tabs, there is a panel on the left that allows users to toggle and interact with the views in the screen, including choosing the country, data source(s), year(s), health indicator(s) and inequality dimension(s). In Summary measures tabs, the summary measure(s) may additionally be chosen. Multiple selections are allowed, where relevant, and default selections afford the user a view that can then be toggled and manipulated using these selection options.
Table 2 Overview of summary measures and dimensions (continued):
Ratio (R): The R is a simple measure of inequality that shows the relative inequality between two subgroups.
RCI: The RCI is a complex, weighted measure of inequality that indicates the extent to which a health indicator is concentrated among the disadvantaged or the advantaged, on a relative scale.
RII: The RII is a complex, weighted measure of inequality that represents the relative difference (proportional to the national level) in predicted values of a health indicator between the most-advantaged and most-disadvantaged, while taking into consideration all the other subgroups, using an appropriate regression model.
TI: The TI is a complex measure of inequality that takes into account the population share of each subgroup. The TI is sensitive to large deviations from the national level (by use of the logarithm).
Guide to graphs under the Explore Inequality tab:
• Explore Inequality • Disaggregated data • Horizontal line graph: Coloured shapes indicate population subgroups; each health indicator for each survey year is represented on the graph by multiple coloured shapes (one for each subgroup representing a dimension of inequality). Black horizontal lines indicate the difference between minimum and maximum subgroup estimates.
• Explore Inequality • Disaggregated data • Bar graph: The bar graph shows subgroup estimates (on the y-axis) for each survey year (on the x-axis). Coloured bars indicate population subgroups; each health indicator for each survey year is represented on the graph by multiple coloured bars (one for each subgroup representing a dimension of inequality). Numbers above bars indicate the respective subgroup estimates. Instead of numbers, 95% confidence intervals can be displayed in the form of vertical lines (or whiskers). When confidence intervals are not selected for display, value labels appear on top of each bar.
• Explore Inequality • Summary measures • Bar graph: The bar graph shows summary measure estimates (on the y-axis) for each survey year (on the x-axis). Numbers above bars indicate the respective summary measure estimates. Instead of numbers, 95% confidence intervals (analytic or bootstrap) can be displayed in the form of vertical lines (or whiskers). When confidence intervals are not selected for display, value labels appear on top of each bar.
• Explore Inequality • Summary measures • Line graph: The line graph shows summary measure estimates (on the y-axis) for each survey year (on the x-axis). 95% confidence intervals (analytic or bootstrap) can be displayed in the form of vertical lines (or whiskers).
Compare inequality
In Compare Inequality, users can compare the situation of their chosen country with that of comparators using disaggregated data or summary measure graphs. Following a similar logic to Explore Inequality, the panel on the left allows one to toggle and interact with the views in the screen, including choosing the country, data source(s), year, health indicator, and the inequality dimension the user wants to view. In Summary measures (graphs), the summary measure may also be chosen.
In addition to this, there are benchmarking options allowing users to choose comparator countries by World Bank income group and WHO region, which creates a shortlist of countries that may further be added/removed from the graph. There is also an option to modify the range of years (0 to 5 years) that will be considered in the comparison. For example, if a range of 0 years is selected for a country of interest for the year 2007, only those countries that also have data from 2007 will be presented. If a range of 5 years is selected for the same country and year, any country that has data for the indicator in question between 2002 and 2012 will be included. Graph options similar to other graph tabs are provided to change graph height, width, axis range, as well as graph and axis titles. As with Explore Inequality, default selections afford the user a view that can then be toggled and manipulated.
Fig. 4 Guide to graphs under the Compare Inequality tab (options selected for the visual, snapshot and interpretation):
• Compare Inequality • Disaggregated data • Horizontal line graph: The line graph presents health data for population subgroups in a selected country of interest (displayed at the top of the graph) and selected benchmark countries. Coloured shapes indicate population subgroups within countries; each study country is represented on the graph by multiple coloured shapes (one for each subgroup). Black horizontal lines indicate the difference between minimum and maximum subgroup estimates.
• Compare Inequality • Summary measures • Scatterplot: The scatterplot presents the national average (on the x-axis) and the level of within-country inequality, as measured by the selected summary measure (on the y-axis), for the selected health indicator and dimension of inequality for selected countries. Coloured shapes indicate countries; each country is represented on the graph by one shape. Benchmark countries are shown in blue, while the country of interest is highlighted in red. Countries may also be represented by coloured ISO3 codes.
Case study
In April 2016, HEAT was used in a capacity building workshop on health inequality monitoring in Indonesia consisting of 30 participants from the Ministry of Health, the National Statistical Office, academia, and other UN agencies. Using HEAT, participants first explored the current and past state of inequality in RMNCH in Indonesia using the Explore Inequality tab. By visualising disaggregated data and summary measures in tables and graphs, participants could identify different patterns and levels of inequality for different indicators and dimensions. For example, it was found that economic-related inequality in coverage of births attended by skilled health personnel greatly decreased between 1997 and 2012, but large absolute differences in coverage between the richest and poorest population subgroups remained in 2012. On the contrary, existing gaps in demand for family planning satisfied were closed completely in the same period. Participants also observed that for certain indicators, the situation varied between different dimensions of inequality. As an example, while economic-related inequality in measles immunization coverage remained unchanged over time, differences between urban and rural residents were eliminated. Participants then went on to compare the situation in Indonesia with that of other middle-income countries from the South-East Asia and Western Pacific Regions using the Compare Inequality tab.
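The benchmarking year-range rule described above is easy to state as code. The sketch below is a hedged Python illustration (HEAT implements this logic in R); the data frame layout and the numbers are assumptions for illustration, not the Health Equity Monitor data.

```python
# Minimal sketch of the benchmark year-range filter described above.
import pandas as pd

def benchmark_countries(df, country, year, indicator, year_range=0):
    """Return comparator countries with data for `indicator` within
    `year_range` years of the selected survey year."""
    lo, hi = year - year_range, year + year_range
    mask = (
        df["indicator"].eq(indicator)
        & df["year"].between(lo, hi)
        & df["country"].ne(country)
        & df["estimate"].notna()
    )
    return sorted(df.loc[mask, "country"].unique())

data = pd.DataFrame({
    "country":   ["Indonesia", "Maldives", "Mongolia", "Viet Nam"],
    "year":      [2007,        2009,       2010,       2002],
    "indicator": ["sba"] * 4,   # births attended by skilled health personnel
    "estimate":  [73.0, 95.0, 99.0, 88.0],
})

print(benchmark_countries(data, "Indonesia", 2007, "sba", year_range=0))
# [] -- no comparator has data from exactly 2007 in this toy table
print(benchmark_countries(data, "Indonesia", 2007, "sba", year_range=5))
# ['Maldives', 'Mongolia', 'Viet Nam'] -- any data between 2002 and 2012
```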
Looking at births attended by skilled health personnel, for example, there were countries that had achieved almost complete coverage, such as the Maldives and Mongolia, while other countries reported even larger inequalities than Indonesia. Again, the situation varied between different health indicators and dimensions of inequality. HEAT facilitated an initial assessment of inequalities in RMNCH in Indonesia and showed where inequalities existed [25]. In this sense, it can be considered a priority-setting tool to identify the largest within-country health inequalities. This being said, HEAT is a toolkit to assist health inequality monitoring and serves as a warning system. It does not depict multivariate analyses of inequality or explain why inequalities exist, for which further in-depth quantitative and qualitative studies are required [11].
Future developments
Going forward, an upload data feature will be incorporated into the software so that, instead of the Health Equity Monitor database, data meeting pre-defined specifications may be uploaded for health inequality analyses. It is also proposed to add additional interactive features to this software, like pop-up features to display or expand information shown in a data point, or to rank and annotate data based on interpretation. There are several R packages that enable the creation of interactive graphics using html widgets or other approaches; these include Plotly, Highcharter, rCharts and others. Interactive maps using the Leaflet or Highcharter packages may also be considered. In addition, there are R packages that provide bindings to the Google Translate API that may allow the translation of the tool into other languages.
Conclusion
HEAT comes at a critical point of renewal and re-invigoration in global health cooperation and national priority-setting in health. In the post-2015 era, the importance of disaggregated data for sustainable development is acknowledged [5,6], and HEAT is one major attempt to provide software that facilitates exploring and comparing health inequality data in a user-friendly, interactive, and fully flexible format. There is no similar application in the market of which we are aware.
2018-04-03T03:59:02.165Z
2016-10-19T00:00:00.000
{ "year": 2016, "sha1": "9121cef78df0370456b86ec6810dab1e63f1e811", "oa_license": "CCBY", "oa_url": "https://bmcmedresmethodol.biomedcentral.com/track/pdf/10.1186/s12874-016-0229-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9121cef78df0370456b86ec6810dab1e63f1e811", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
254328352
pes2o/s2orc
v3-fos-license
Theranostic roles of machine learning in clinical management of kidney stone disease Graphical abstract Introduction In routine clinical practice, kidney stone disease (KSD) can be detected by laboratory tests such as urinalysis, X-ray, ultrasonog-raphy, and/or computerized tomography (CT) scan [1]. Disease management depends on type and size of the stones. Most of KSD patients (or stone formers) are asymptomatic and may require no specific treatment [2,3]. In complicated KSD, extracorporeal shock wave lithotripsy (ESWL), percutaneous nephrolithotomy (PNL), ureteroscopy (URS) and other surgical procedures are the common therapeutic procedures to remove kidney stones [4][5][6]. Nevertheless, there is a high recurrence rate following the stone removal [7][8][9]. Machine learning has been used in medicine for diagnostics and therapeutics for quite some time. The use of artificial intelligence (AI) has been increasing in several aspects of biomedical areas. Using training dataset, machine learning algorithms can create models, identify underlying patterns, and then make predictions based on the best-suited model [10,11]. Development of image and speech recognition is one of the significant advancements in this field. The use of machine learning in medical imaging, such as ultrasound elastography (UE), CT scan and magnetic resonance imaging (MRI), improves diagnostic accuracy and reduces the possibility of human errors across a wide range of medical areas [12]. This approach has been also used in urology to diagnose urological disorders, to design appropriate treatment modality, and to predict therapeutic outcome [13,14]. Deep learning, a branch of machine learning, has a potential to be used as an innovative method for diagnosis of chronic kidney disease (CKD) [15] and predicting the decline of renal function [16], renal dysfunction [17], and diabetic nephropathy [18]. In KSD, machine learning has been employed for over two decades [19]. Recently, it has been widely used for stone detection [20], stone type prediction [21], determination of appropriate management option, and prediction of therapeutic outcome [22]. This review provides a brief overview of KSD and discusses how machine learning can be applied to diagnostics, therapeutics and prognostics in clinical management of KSD for better therapeutic outcome. Epidemiology and risks KSD, also known as urolithiasis, nephrolithiasis and renal calculi, is a common illness caused by deposition of solid minerals formed inside the kidney [23]. It is one of the oldest diseases that has caused human suffering for over millennia with evidence in Egyptian mummies [24,25]. The worldwide disease prevalence and incidence vary based on sociodemographic, lifestyle, dietary, genetic, gender, age, environmental and climatic factors [26,27]. The prevalence of KSD is greater in the Western hemisphere as compared with the Eastern (7-13 %, 5-9 % and 1-5 % in North America, Europe and Asia, respectively) [26]. KSD is a highly recurrent disease, of which recurrence rate is approximately 11 %, 20 % and 31 % within two, five and ten years, respectively, after the stone removal [28]. The evidence also indicates the continuously increasing prevalence and incidence around the globe [26,[29][30][31]. In addition to genetic and geographical backgrounds, which are environmental risk factors [26], some systemic diseases, including obesity, diabetes mellitus, hypertension, metabolic syndrome and gout, are also considered as the risks for KSD development [26]. 
Types of kidney stones and mechanisms of the stone formation
Kidney stones can be classified into five major types based on the stone composition: calcium oxalate (CaOx), carbonated apatite or carbapatite (CA), urate, struvite or magnesium ammonium phosphate, and cystine or drug-induced stones [32][33][34]. Kidney stone formation is a multistep process initiated by urinary supersaturation of the ions of the stone composition, leading to their transformation from the liquid phase to the solid phase, a mechanism called crystallization or crystal nucleation [35,36]. Thereafter, the loosely formed stone crystals can enlarge by adding free ions from the supersaturated urine, resulting in crystal growth [37]. Additionally, individual crystals can form crystal aggregates that further enlarge the crystalline particles [37,38]. Moreover, the formed crystals can adhere onto apical surfaces of renal tubular cells via the affinity between crystals and their receptors on the cell surfaces [39]. Crystal growth, aggregation and adhesion altogether slow down the elimination rate of the formed crystals through intratubular luminal segments with small size, resulting in crystal retention [35,36]. These processes are known as the "free-particle model" of kidney stone formation (the stone forms inside the renal tubule) [35,40]. In another model of kidney stone formation, namely the "fixed-particle model" [35,41], the stone develops on a preformed plaque first described by Alexander Randall in 1937 [42]. Randall's plaque comprises mainly calcium phosphate that forms in the interstitial compartment of the renal papilla and then serves as an anchor for stone formation [43,44]. Several studies have shown histopathological evidence indicating that the majority of idiopathic CaOx stones are associated with Randall's plaque [43,44]. The basement membrane of the thin loop of Henle is the main locale where the plaque arises, from which it expands to the nearby interstitial space under the urothelium [43,44]. After the integrity of the urothelium is compromised, the plaque is unmasked and exposed to the urine rich in calcium and oxalate ions. Thereafter, the supersaturated urine reacts with the emerging plaque to form layers of CaOx crystals on the Randall's plaque by repeated coating, crystallization and growth [43,44].
Diagnosis and management in current clinical practice
Although most stone formers are asymptomatic and do not require specific treatment or surgical intervention, they are advised to attend a follow-up program annually or at least every 2-3 years to evaluate the disease progression [2,3]. Symptomatic stone formers typically have acute renal colic or flank pain (originating over the costovertebral angle and extending towards the inguinal area), nausea and/or vomiting [2,45]. Clinical presentations may also include hematuria, low urinary flow, hydronephrosis, and secondary urinary tract infection (UTI) [3,23]. Diagnosis and disease management usually start with confirmation of the presence of the stone [3]. The gold standard method for stone detection, size measurement and localization is a non-contrast CT (NCCT) scan of the kidneys, ureters, and bladder [2,3,46]. The NCCT scan is a highly sensitive and highly accurate method for stone imaging, which is very helpful for further selecting appropriate disease management [46]. Ultrasonography has lower sensitivity as compared with CT scan.
However, it is more suitable for some stone formers, e.g., children, pregnant women and patients with frequent episodes of KSD [46]. MRI is used as a second-line modality for pregnant stone formers, who do not meet the criteria for ultrasonography [46]. Besides imaging modality, history taking, physical examination and laboratory tests (e.g., urinalysis and blood chemistry) are also required [2,3]. Based on the guidelines for management of KSD by the European Association of Urology (EAU), non-steroidal antiinflammatory drugs (NSAIDs) are recommended as the first-line analgesics for renal colic management [3,47]. Spontaneous passage is recommended for the cases with stones <5 mm, whereas medical expulsive therapy (MET) using a-blockers is recommended for those with stones >5 mm in the distal ureter [47]. In the cases with stones >20 mm, PNL is recommended as the first-line treatment [47]. Note that when the patients do not meet the criteria for PNL, retrograde intrarenal surgery (RIRS) or ESWL is recommended [47]. More details and the updated version of the guidelines for disease management are available on the EAU Guidelines Office website (https://uroweb.org/guidelines/urolithiasis). Roles of machine learning in KSD diagnostics Imaging is a crucial diagnostic tool and the first step for selecting the most appropriate treatment modality in KSD management. De Perrot et al. [48] have reported how well radiomics features and a machine learning classifier can distinguish KSD from phleboliths using low-dose CT. Li et al. [49] have employed the unenhanced abdominopelvic CT scans and deep learning segmentation networks to exclude false positive areas from kidney stones. Parakh et al. [20] have shown the efficacy of cascading convolutional neural network (CNN) for detecting urinary stones. Using this approach, the urinary tract is detected by the first CNN model, whereas the stones are detected by the second CNN model [20]. Additionally, a total of six models have been designed and deployed using CT image datasets of kidney stones, cysts and tumors [50]. Both deep learning techniques (VGG16, Inceptionv3 and Resnet50) and Visual Transformer variants (EANet, CCT and Swin transformer algorithms) can be applied to differentiate KSD from renal cysts and tumors with 99.30 % accuracy achieved by Swin transformer-based model [50]. Caglayan et al. [51] have examined the efficacy of a deep learning model for identifying kidney stones in unenhanced CT images in various planes based on stone size. The sagittal plane has provided the best sensitivity and specificity as compared with other planes [51]. Längkvist et al. [52] have created a computer-aided detection (CAD) algorithm that can detect a ureteral stone in a CT scan. Similarly, Sudharson et al. [53] have developed a CAD algorithm using support vector machine (SVM)-based machine learning classifier to identify kidney abnormalities of multiple classes, such as kidney stones, cysts and tumors, by ultrasonography. Clinicians would take great benefit from a deep learning system that is automated and can segment data automatically. Several previous studies have tried using automated machine learning to detect kidney stones. For example, Yildirim et al. [54] have applied a deep learning model to automatically detect and localize kidney stones from coronal CT scans. Cui et al. [55] have also reported automated detection of kidney stones in NCCT images using deep learning and S.T.O.N.E. nephrolithometry scoring method. To deal with noisy CT, Elton et al. 
[56] have employed a CNN (U-Net model) for automated detection and volume quantification of small stones in coronal CT images. Babajide et al. [57] have analyzed the efficacy of a machine learning method to detect and characterize kidney stones automatically compared with manual diagnosis. The data have shown that the machine learning algorithm more accurately approximates the stone boundary, with both sensitivity and specificity of 100 % [57]. Most kidney stone studies on diagnostics use various medical imaging methods, including X-ray, CT scan and MRI. Nevertheless, only a few studies have used clinical characteristics to assist KSD diagnostics. Using clinical and gut microbiota traits, one can predict the development of CaOx KSD [58]. Recently, Kavoussi et al. [59] have used 24-h urine and clinical data to predict urinary abnormalities. Age, gender and body mass index are the three variables that have the most impact on training the prediction models [59]. All the information obtained from the aforementioned studies (also summarized in Table 1) indicates the important roles of machine learning in KSD diagnostics.
Roles of machine learning for stone type prediction
Specifying the type of kidney stones is an important step for management of KSD to achieve satisfactory therapeutic outcome. There is wide interest in predicting the type of kidney stones using clinical and imaging data. As such, machine learning-based text classification has been extensively used for this purpose. For example, data mining techniques have been used to extract useful information, such as stone types and compositions, from electronic health records [60]. In a study by Kazemi et al. [61], 42 features extracted from the medical records of patients have been used to build a model for predicting the type of kidney stones. Similarly, Abraham et al. [62] have predicted stone composition by using XGBoost machine learning on 24-h urine data and clinical information. Interestingly, performance of the predictive model is improved by using 24-h urine data [62]. In another study, the microwave dielectric properties, which differ in various stone types, have been used to predict three types of kidney stones [63]. Moreover, eight simple clinical parameters (gender, age, body mass index, estimated glomerular filtration rate, urine pH, the presence of bacteriuria, the presence of gout, and the presence of diabetes mellitus) can improve uric acid stone prediction, with an area under the curve (AUC) of 0.936 [64]. Additionally, the stone type can be predicted from the appearance, texture and section of the stones shown in digital images, CT scans and digital videography. Grosse Hokamp et al. [65] have used dual-energy CT scans and machine learning to predict various compositions of the stones, including whewellite (CaOx monohydrate; COM), weddellite (CaOx dihydrate; COD), calcium phosphate, cystine, struvite, uric acid, and xanthine. Zheng et al. [66] have created a predictive model with a radiomics signature based on NCCT images and independent clinical predictors for detecting infection stones, with an AUC of 0.825. Recently, machine learning has been used to analyze high-quality digital images of a kidney stone, resulting in successful prediction of the stone type with high specificity [21]. El Beze et al. [67] have developed an automated stone detection technique to discriminate six types of stones from endoscopy by using the surface and section of urinary calculi.
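To make the tabular-data side of this concrete, the sketch below mimics, in spirit only, the kind of model used in the stone-composition studies cited above: a gradient-boosted classifier (scikit-learn's implementation rather than XGBoost) trained on the simple clinical parameters listed in the uric acid stone example. The data are synthetic and the coefficients arbitrary; it illustrates the workflow, not any published model.

```python
# Illustrative sketch: predicting uric acid stones from simple clinical
# parameters (synthetic data; not the model or data of the cited studies).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 800
X = np.column_stack([
    rng.integers(0, 2, n),          # gender (0 = female, 1 = male)
    rng.normal(55, 14, n),          # age (years)
    rng.normal(28, 5, n),           # body mass index
    rng.normal(80, 20, n),          # estimated glomerular filtration rate
    rng.normal(6.0, 0.6, n),        # urine pH
    rng.integers(0, 2, n),          # bacteriuria present
    rng.integers(0, 2, n),          # gout present
    rng.integers(0, 2, n),          # diabetes mellitus present
])
# Synthetic label: low urine pH, gout and diabetes made more "uric acid-like".
score = -1.2 * X[:, 4] + 0.8 * X[:, 6] + 0.6 * X[:, 7] + 0.02 * X[:, 2]
y = (score + rng.normal(0, 0.5, n) > np.median(score)).astype(int)

model = GradientBoostingClassifier(random_state=0)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC on synthetic data: {auc:.3f}")
```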
Using a dataset of smartphone-based microscopic images, Onal et al. [68] have evaluated an image recognition system for categorizing four types of kidney stones in a rapid and precise manner. Likewise, Estrade et al. [69] have applied a deep learning method to digital endoscopic video sequences to automatically detect stone morphology during the stone fragmentation process. All the aforementioned studies, including their goals, AI methods used and results, are summarized in Table 2.
Roles of machine learning for determination of appropriate treatment modality and prediction of therapeutic outcome
Significant technological advancements have been made for management of KSD. Parekattil et al. [70] have used information from 384 stone formers who had spontaneous stone passage or underwent intervention (stent, ureteroscopy or ESWL) to develop the model. The findings have shown that a cutoff of 6 mm in stone dimension can accurately identify patients who may require intervention. To prevent or minimize the problematic stone recurrence, many studies have employed machine learning to predict the therapeutic outcome of KSD. For this kind of research, most of the studies have applied an artificial neural network (ANN) to predict the ESWL outcome. The clinical data and urine samples of patients who underwent ESWL are used as the parameters to predict stone recurrence after ESWL [19,71]. In addition, radiographic images categorized by radiographic morphological patterns are used for prediction of stone clearance after ESWL with an accuracy of 92 % [72]. In addition, the most influential factors for prediction of the ESWL outcome are the size and position of the stones, the usage of stents, and the stone width [73]. Moreover, combining three-dimensional texture analysis (3D-TA) features derived from CT images with clinical variables can improve prediction of ESWL success [74]. NCCT image analysis of stone formers who underwent ESWL can create a model to predict fragmentation of stones and outcome of treatment [75]. Choo et al. [76] have utilized stone features from X-ray and CT scans to construct a decision support system (DSS) to forecast treatment success following ESWL with high accuracy, especially using the 15-factor model. Recently, Yang et al. [77] have also determined the ability of a DSS to predict the ESWL success rate with accuracy up to 88 % [77]. A more recent study has built a machine learning model that can predict the ESWL outcome to aid practitioners in decision making, with a sensitivity of 87.5 % [78]. Machine learning has also been applied to predict the therapeutic outcome after nephrolithotomy. Aminsharifi et al. [79] have predicted the postoperative outcome of PNL from preoperative and postoperative variables using an ANN. The model can predict stone-free status or ancillary procedures with sensitivity and accuracy from 81.0 % to 98.2 % [79]. Moreover, machine learning classification software seems to provide better results compared with the Guy's Stone Score (GSS) and the Clinical Research Office of Endourological Society (CROES) nomogram [22]. Machine learning has also been used to create DSSs for forecasting therapeutic success. In a study by Shabaniyan et al. [80] using four different classification methods to develop a DSS, the PNL outcome could be predicted with a high degree of accuracy (94.8 %). Hameed et al. [81] have used Random Forest (RF)-based machine learning to develop a decision support system to predict stone-free status after PNL for staghorn calculi with an accuracy of 81 %. All the aforementioned studies, including their goals, AI methods used and results, are summarized in Table 3.
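The outcome-prediction models above mostly map a handful of pre-treatment variables (stone size, position and width, stent use, and similar) to a binary treatment-success label. The sketch below shows that shape of problem with a small neural network on synthetic data; the feature set and numbers are assumptions for illustration, not those of the cited ESWL or PNL studies.

```python
# Illustrative sketch: predicting ESWL success from pre-treatment variables
# (synthetic data; a stand-in for the ANN/DSS models cited above).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 600
stone_size = rng.normal(9, 4, n).clip(2, 30)     # mm
stone_width = rng.normal(7, 3, n).clip(1, 25)    # mm
lower_pole = rng.integers(0, 2, n)               # stone position: lower pole
stent_used = rng.integers(0, 2, n)
X = np.column_stack([stone_size, stone_width, lower_pole, stent_used])

# Synthetic outcome: larger and lower-pole stones are harder to clear.
logit = 3.0 - 0.25 * stone_size - 0.10 * stone_width - 0.8 * lower_pole
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = stone-free

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```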
Summary and outlook
As evidenced by several studies, it becomes clear that machine learning plays essential theranostic roles in clinical management of KSD. Various machine learning algorithms, including XGBoost, CNN, ensemble-based methods, k-nearest neighbors, ANN, SVM, RF and several other methods, have improved performance of the systems by increasing the accuracy and sensitivity of KSD diagnostics, prediction of stone type, prediction of therapeutic outcome and prognostics. The advantages of such computational approaches therefore serve as another means for clinical management of KSD. These approaches may also lead to the discovery of new therapeutic strategies, better therapeutic outcome, and more successful prevention of KSD. The amount of available information on KSD has been growing exponentially as new generations of biotechnology have continuously emerged. The recently emerging medical imaging technologies, like high-resolution 3D imaging and other new methods, have offered higher quality of imaging in terms of resolution and signal-to-noise ratio. These technologies, together with improved machine learning algorithms, have paved the way for more precise clinical diagnostics of KSD. Additionally, the well-developed texture analysis of stone images has dramatically improved the accuracy for prediction of kidney stone type. Such advances in these medical imaging technologies and machine learning are likely to be more extensively used in routine clinical management of KSD in the near future. However, there is still room for further improvement of machine learning algorithms to increase the sensitivity and specificity of automated classification methods, particularly for ureteroscopic kidney stone images. Furthermore, blood and urine chemistry laboratory tests should also be combined with clinical information and medical imaging to enhance the accuracy of machine learning in KSD theranostics. Finally, establishment of an international network to construct a centralized kidney stone database for each type of the stones, comprising patients' demographic and background information, urine/blood parameters and chemical analyses, imaging, all other laboratory tests, treatment modalities, therapeutic outcome, etc., should be considered. Such an ideal database would definitely pave the way for development of more robust machine learning algorithms towards precision medicine for KSD.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2022-12-07T16:53:28.800Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "b0355b4cc6c6e2d73ee50c0be58fd3dc1cd234e8", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.csbj.2022.12.004", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "782ebfd96a68a1917774944c6a5166b8b3000cf6", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
14435460
pes2o/s2orc
v3-fos-license
Representations of the Weyl Algebra in Quantum Geometry The Weyl algebra A of continuous functions and exponentiated fluxes, introduced by Ashtekar, Lewandowski and others, in quantum geometry is studied. It is shown that, in the piecewise analytic category, every regular representation of A having a cyclic and diffeomorphism invariant vector, is already unitarily equivalent to the fundamental representation. Additional assumptions concern the dimension of the underlying analytic manifold (at least three), the finite wide triangulizability of surfaces in it to be used for the fluxes and the naturality of the action of diffeomorphisms -- but neither any domain properties of the represented Weyl operators nor the requirement that the diffeomorphisms act by pull-backs. For this, the general behaviour of C*-algebras generated by continuous functions and pull-backs of homeomorphisms, as well as the properties of stratified analytic diffeomorphisms are studied. Additionally, the paper includes also a short and direct proof of the irreducibility of A. Introduction Every physical theory requires fundamental mathematical assumptions at the very beginning. It is highly desirable to justify them by even more fundamental axioms that are both mathematically and physically as plausible as possible. In loop quantum gravity, there are a few of such technical prerequisites. First of all, of course, one assumes that all objects are constructed out of parallel transports along graphs in a base manifold of an SU (2) principal fibre bundle (or maybe also using higher dimensional objects like in spin foam theory). This is reasonable by the fact that classical (canonical) gravity is an SU (2) gauge field theory with constraints as discovered by Ashtekar in the mid-80s [1]. Secondly, one needs inputs about the quantization of this classical system. For this, at least the structure of the configuration space C of all those parallel transports (modulo gauge transforms) has to be fixed. If one wants to use functional integrals for quantization, one is forced to study measures on that space. The usage of parallel transports corresponding to smooth connections only, however, has lead to enormous mathematical problems. These could be widely avoided only by including distributional connections as well [2]. Namely, by the assumption that the reductions of the full theory to finitely many degrees of freedom (i.e. parallel transports on a finite graph) are continuous, one finds that the topology of C is a projective limit topology 1 making C a compact space. Here, the compactness is induced by that of the underlying structure group SU (2) comprising the values of the parallel transports. This strategy can be reused to find natural measures on C -one simply uses the assumption that the restrictions of the theory to finite graphs push forward the measure on C to the Haar measures on the finite powers of SU (2). This leads to the Ashtekar-Lewandowski measure µ 0 [6]. Of course, this measure is "natural", since the Haar measure on a Lie group is "natural" as well. However, this is at most a mathematical statement or a statement of beauty. The deeper question behind is how one can justify this choice by mathematical physics arguments. Early Attempts For the first time, this problem has been raised by Sahlmann [32]. He considered the class of measures on C that are absolutely continuous w.r.t. 
µ 0 , and realized [32,33] that (up to some additional technical assumptions) only µ 0 allows for a diffeomorphism invariant measure such that the flux variables are represented as operators on the corresponding L 2 space. Although these results were proven for the case of a U (1) gauge theory, they have been expected to hold also for the case of a general compact structure Lie group G. Moreover, it suggests that the diffeomorphism invariance of gravity together with its full phase space description could be responsible for the uniqueness of µ 0 . The situation is similar to ordinary quantum mechanics. There, the Stone-von Neumann theorem [11] tells us that there is (up to equivalence) precisely one irreducible regular representation of the Weyl algebra generated by the exponentiated position and momentum operators together with their Poisson relations. In the standard Schrödinger representation on L 2 (R, dx), these unitary operators are given by [e iπb x ψ](x) = e iπx ψ(x) and [e iξ b p ψ](x) = ψ(x + ξ). In loop quantum gravity, on the other hand, the connections are the generalized positions and the densitized dreibein fields are the generalized momenta. Exponentiation here includes also smearing: Connections are smeared along one-dimensional objects (i.e. paths) and exponentiated to give parallel transports -dreibeine along one-codimensional objects (i.e. hypersurfaces) to give flux variables. Now, one possible (even irreducible and regular) representation for the corresponding Weyl algebra A is given by multiplication and translation operators, respectively, on L 2 functions on C w.r.t. the Ashtekar-Lewandowski measure. All that suggests that maybe this representation π 0 is even uniquely determined as well by certain reasonable assumptions. Sahlmann and Thiemann [35,34], supported by results of Lewandowski and Oko lów [30] (see also [25] for further discussion), had argued that π 0 may be the only irreducible, regular and diffeomorphism invariant representation of A. Despite the progress given by these papers, there had remained many points open, both technically and conceptually. A conceptual one concerned the domain properties of the represented operators. In fact, all results for non-abelian structure groups in [35] relied crucially on the fact that the self-adjoint generators of both the represented and the non-represented unitary operators share a certain, but not naturally given common dense domain. Another issue regarding the smoothness properties of the diffeomorphisms will be discussed below. Achievements of the Present Paper The situation above has described the status some five years ago. The goal of our present paper is now to give a complete and rigorous proof of a Stone-von Neumann-like theorem in quantum geometry avoiding most of these problems. More precisely, we will show that every regular representation of A that has a cyclic and diffeomorphism invariant vector, is unitarily equivalent to the fundamental representation π 0 , provided the action of diffeomorphisms satisfies some rather mild condition. The main conceptual achievements of our theorem, in comparison to [35], are the following: • There are no longer any requirements concerning the domains of the operators in the game. This will be possible, since we consequently, from the very beginning, work with the exponentiated fluxes only. At no point, will we use their self-adjoint generators. There is only one issue, where we use the relation between operators and their generators. 
This will concern one-parameter subgroups in a compact Lie group in order to get some estimate for certain products in it. However, we will completely leave this infinitesimal arena before going back to the Weyl algebra level. • The requirements concerning the representations of the diffeomorphisms are drastically weakened. In [35], it had to be assumed that these are represented via pull-backs and respect the decomposition of the representation restricted to C(C) into cyclic generators. In particular, one had to assume that each of these components contains a diffeomorphism invariant cyclic vector. As to be discussed at the end of the paper, a priori these requirements drastically reduce the measures allowed in these decompositions. We will now be able to show that this assumption can be replaced by a weaker one. We only require that coinciding addends in the decomposition share the same representation of diffeomorphisms if at least one addend is diffeomorphism invariant. • Moreover, we will be able to clarify the particular class of diffeomorphisms to be used. Analytic diffeomorphisms are unsatisfactory from two points of view: Physically, they contradict the notion of locality, i.e., if we transform some set in the space(-time) manifold locally, then we transform this manifold even globally. Mathematically, they are not flexible enough as well, i.e., it will often be very difficult, if not impossible, to locally map objects onto each other under very rigid conditions, as we will see below. Therefore, we are forced to extend the class of isomorphisms. In fact, it will be manageable to use stratified analytic diffeomorphisms, slightly modifying the similar structures in, e.g., [29,21,10]. This, at the same time, leads to a natural extension of the surfaces used to define the Weyl operators, from analytic submanifolds to semianalytic sets. However, this is not a severe extension, since every semianalytic set can be stratified into a locally finite set of analytic submanifolds being mutually disjoint, i.e., having commuting Weyl operators. Idea of the Proof Let us very shortly outline the proof of the uniqueness theorem. As usual (see, e.g., [35]), the restriction of any representation π of a Weyl-like algebra to the continuous functions, can be decomposed into (w.r.t. C(C)) cyclic ones. These are always the canonical representations on some L 2 (C, µ ν ) with appropriate measures µ ν on C. Assuming that π contained a cyclic vector having some invariance property, we may find such a decomposition, such that one of the constant vectors 1 ν ∈ L 2 (C, µ ν ) has these properties as well. Then, being the first step where we use the particular structures of quantum geometry, regularity and diffeomorphism invariance imply that this µ ν is the Ashtekar-Lewandowski measure. Now, being the second step relying on quantum geometry, we may show that certain Weyl operators are diffeomorphism conjugate to their adjoints. By general arguments, using the two properties above and adding invariance and cyclicity of 1 ν , we prove that π equals (up to unitary equivalence) the fundamental representation of A. Comparison with LOST Paper While this paper was prepared, Lewandowski, Oko lów, Sahlmann and Thiemann (LOST) were working on a similar problem for the holonomy-flux * -algebra. This algebra is given if the fluxes themselves are considered together with the continuous functions on C. 
Some time after the present article had been sent to the arxiv, the four-men paper [26] has been finished and appeared there as well. In this subsection, we are going to compare the corresponding results. As already mentioned, the most striking difference between the two approaches lies in the algebra: We use both exponentiated positions and momenta, but LOST exponentiate positions only and keep the fluxes non-exponentiated. Consequently, LOST investigate the holonomy-flux algebra, a * -algebra, but we consider the Weyl algebra -a C * -algebra. Here the exponentiated fluxes are implemented as unitaries, whereas LOST study implicitly their self-adjoint generators being, of course, unbounded. The price to pay is that, in contrast to our case, LOST have to get rid of the persistent domain problems. This is done very directly using a state, since that -via GNS-guarantees the existence of a common dense domain for all the operators. By construction, this domain is spanned by the cylindrical functions on C. On the other hand, we only assume that the Weyl operators are continuously represented w.r.t. their smearing. This means that each corresponding one-parameter subgroup has some self-adjoint generator. If this was not the case, it is expected that then there exist other diffeoinvariant representations of the Weyl algebra. Nevertheless, note that our regularity assumption for each single one-parameter subgroup is much weaker than that of the existence of a certain common dense domain for all generators as in the LOST case. Indeed, our assumption follows from the LOST requirements: The GNS construction implies that, given a state, the * -invariant fluxes become symmetric operators. As it turns out, they are even self-adjoint. Hence they generate weakly continuous one-parameter subgroups. All that seems to show that our result is much stronger than that of LOST. However, there will be an additional assumption made in our paper only: the diffeomorphisms are implemented naturally. Until now, by no means, neither the relevance of this requirement nor its possible counterpart in the LOST paper is clear. However, while, as a matter of principle, it cannot be expected that the domain assumptions above can be dropped by LOST, we do hope that the naturality condition can be shown obsolete sometime. The remaining differences are, from our point of view, secondary. Let us only sketch a few of them. The technical advantage of the * -algebra case is the linearity of the fluxes w.r.t. the smearing, which enables LOST to use the scalar-product trick by Oko lów. At the same time, LOST have to use compactly supported smearing functions. We, on the other hand, are confined to (up-to-gauge) constant smearings, although there is some hope to relax that. Since compactly supported smearings mean that one can restrict oneself to "nice" parts of the surfaces and forget about near-boundary regions, LOST -in contrast to us-did not have to assume that the surfaces are (widely) triangulizable. Rather similar are the general assumptions concerning smoothness. The striking idea that underlies both investigations is that stratified analytic objects comprise both the advantages of analyticity and those of locality. Only the implementation somewhat differs. Both are influenced by the notion of semianalyticity introduced mainly by Lojasiewicz, but -for simplicity -we mostly study these structures on a given analytic manifold, whereas LOST define semianalytic structures in a more categorical way. 
Nevertheless, essentially all of our considerations should be directly transferable to the LOST framework and vice versa. There should also be no significant changes if we required semianalyticity to include not only continuity at the boundaries, but also C k as in the LOST regime. Only in the C ∞ case, this is not completely clear. Finally, we summarize our comparison in Table 1 on page 6. Note that there we slightly modify the notions used in the respective article to better explain coincidences and differences. Further Developments Both the LOST and the present paper originate from the quest for a quantum gravity theory. Therefore, as said above, its main application concerns an SU (2)-gauge field theory over a three-dimensional manifold M (i.e., some Cauchy surface) with diffeomorphism invariance as a fundamental symmetry. All the results contain, of course, this case, but go much beyond. Nevertheless, some related questions are still unsolved. For instance, what about theories with other symmetries or another field content? First results have been obtained for homeomorphism invariant scalar field theories [23,24]. Here, it turned out, that there are indeed other states, labelled by the Euler characteristics, i.e. algebraic-topological properties of the hypersurfaces. Another approach currently under investigation, has been taken by Bahr and Thiemann [9] extending the diffeomorphism group symmetry to general automorphisms of the path groupoid. Structure of the Article To finish the introduction, let us briefly outline the present paper. In Section 2 we start with a general investigation of C * -algebras that are generated by the continuous functions on a compact Hausdorff space X and by pull-backs of homeomorphisms of X. Afterwards, we switch over to quantum geometry. Since we would like to make the theory applicable to weaker smoothness classes the paths are required to belong to, we generalize the notion of oriented surfaces introducing quasi-surfaces and intersection functions in Section 3. Then, in Section 4, the Weyl algebra of quantum geometry is defined and the assumed structures regarding paths, hypersurfaces, diffeomorphisms etc. are fixed. After presenting a pretty short and direct proof for the irreducibility of the Weyl algebra in Section 5, we study the theory of stratified diffeomorphisms in detail in Section 6. The main result on the uniqueness of representations is then contained in Section 7, including a discussion of the assumptions made and the extensions possible. General Setting Let X be a compact Hausdorff space and Homeo(X) be the set of all homeomorphisms of X. Given some ξ ∈ Homeo(X), its pull-back to C(X) is denoted by w ξ or, as usual, ξ * . Correspondingly, for every H ⊆ Homeo(X), the set W H ≡ H * ⊆ Homeo * (X) contains precisely the pull-backs of all elements in H. The other way round, given some pull-back w ∈ Homeo * (X), the corresponding homeomorphism is denoted by ξ w , i.e., we have ξ * w = w. Analogously, H W ⊆ Homeo(X) is defined for all W ⊆ Homeo * (X). Moreover, we denote by W the (abstract) subgroup of Homeo * (X) generated by W and define, analogously, H . Obviously, H W = H W and W H = W H . Next, for every measure 2 µ on X, we denote by H(µ) the set of all homeomorphisms on X leaving µ invariant. Clearly, H(µ) = H(µ). Moreover, every w ∈ W H(µ) extends naturally to a unitary operator on L 2 (X, µ), again denoted by w. 
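To make the construction just described concrete, the extension of a measure-preserving homeomorphism to a unitary operator can be written out as follows; this is the standard computation implied by the definitions above, stated here for the reader's convenience rather than quoted from the paper's displayed formulas.

```latex
% Pull-back unitaries for measure-preserving homeomorphisms (standard computation).
\[
  (w\,\psi)(x) \;=\; (\xi_w^*\psi)(x) \;=\; \psi\bigl(\xi_w(x)\bigr),
  \qquad \psi \in L^2(X,\mu),
\]
\[
  \|w\,\psi\|_\mu^2
  \;=\; \int_X \bigl|\psi(\xi_w(x))\bigr|^2 \,\mathrm{d}\mu(x)
  \;=\; \int_X |\psi|^2 \,\mathrm{d}\bigl((\xi_w)_*\mu\bigr)
  \;=\; \|\psi\|_\mu^2 ,
\]
since $\xi_w \in H(\mu)$ leaves $\mu$ invariant. Moreover, for every $f \in C(X)$,
\[
  \bigl(w \circ f \circ w^{-1}\bigr)\psi
  \;=\; \bigl[f\cdot(\psi\circ\xi_w^{-1})\bigr]\circ\xi_w
  \;=\; (f\circ\xi_w)\cdot\psi
  \;=\; w(f)\,\psi ,
\]
% which is the covariance relation used immediately below.
```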
By w(f ψ) = w(f )w(ψ) for all f ∈ C(X), ψ ∈ L 2 (X, µ) and w ∈ W H(µ) , we have w • f • w −1 = w(f ) as operators in B(L 2 (X, µ)). Sometimes, we will extend the notion LOST Fleischhack theory gauge field theory gauge field theory geometric ingredients principal fibre bundle P principal fibre bundle P · structure group G · structure group G · base manifold M · base manifold M smoothness stratified analytic stratified analytic for w 1 , w 2 ∈ W H(µ) . Finally, let A(W, µ) denote the C * -subalgebra in B(L 2 (X, µ)) generated by C(X) and W ⊆ W H(µ) , and let π 0 be the identical (or fundamental) representation of A(W, µ) on L 2 (X, µ). Lemma 2.1 For every W ⊆ W H(µ) , the subalgebra spanned by all products f • w with f ∈ C(X) and w ∈ W is dense in A(W, µ). Proof Since w • f = w(f ) • w for all w ∈ W and f ∈ C(X), , and with w, also w * = w −1 is in W . Therefore, the span of C(X) • W equals the * -subalgebra of B(L 2 (X, µ)) generated by C(X) and W. qed Throughout the whole section, let µ be some arbitrary, but fixed measure on X. First-Step Decomposition Since every representation of a C * -algebra is the direct sum of a zero representation and a non-degenerate one, we may restrict ourselves to non-degenerate representations in the following. Lemma 2.2 Fix some W ⊆ W H(µ) and let π be a non-degenerate representation of A(W, µ) on some Hilbert space H. Then there are measures µ ν on X with ν running over some (not necessarily countable) index set N, such that π| C(X) is unitarily equivalent to the directsum representation ν π µν , where π µν denotes the canonical representation of C(X) on L 2 (X, µ ν ) by multiplication operators. Moreover, these measures may be chosen, such that two of them are equal if they are equivalent (w.r.t. absolute continuity). Proof Every non-degenerate representation of a C * -algebra is (up to unitary equivalence) the direct sum of cyclic representations [12]. The first assertion now follows, because every cyclic representation of C(X) is equivalent to the canonical representation on L 2 (X, µ ν ) by multiplication operators for some regular Borel measure µ ν [36]. Note that π| C(X) is non-degenerate by 1 ∈ C(X). Since measures on X are equivalent w.r.t. absolute continuity iff the corresponding canonical representations are equivalent [36], we get the proof. qed Definition 2.1 A decomposition ν π µν as given in Lemma 2.2 is called first-step decomposition of π. Sometimes we write (µ ν ) ν∈N or shortly µ to characterize such a decomposition. Moreover, if the particular W is not important, we will consider first-step decompositions without any reference to some π. Definition 2.2 A first-step decomposition is called short iff N consists of a single element. Remark First-step decompositions are not at all unique. In fact, consider a short one with µ ν = µ and choose U ⊆ X with 0 < µ(U ) < 1. Decomposing any ψ ∈ H into ψ = 1 U ψ + 1 X\U ψ with 1 U being the characteristic function on U , we get a first-step decomposition π µ U ⊕ π µ X\U . Here, µ U is the normalization of 1 U ⊙ µ. In the following, given some representation π of A(W, µ) on H, we will usually assume that π| C(X) equals (one of) its first-step decomposition(s). Moreover, we usually write shortly π ν instead of π µν . By · µν we denote the norm on L 2 (X, µ ν ) =: H ν and by P ν the respective orthogonal projector mapping H to H ν . In particular, we have π(f )ψ 2 H = ν f · P ν ψ 2 µν for all f ∈ C(X) and ψ ∈ H. 
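In display form, the first-step decomposition and the norm identity just stated read as follows; this is only a restatement in standard notation for readability, not additional material.

```latex
% First-step decomposition of a non-degenerate representation and the
% associated norm identity.
\[
  \pi\big|_{C(X)} \;\cong\; \bigoplus_{\nu\in N} \pi_{\mu_\nu},
  \qquad
  \mathcal H \;=\; \bigoplus_{\nu\in N} \mathcal H_\nu
           \;=\; \bigoplus_{\nu\in N} L^2(X,\mu_\nu),
\]
\[
  \bigl\|\pi(f)\,\psi\bigr\|_{\mathcal H}^2
  \;=\; \sum_{\nu\in N} \bigl\|\,f\cdot P_\nu\psi\,\bigr\|_{\mu_\nu}^2
  \qquad\text{for all } f\in C(X),\ \psi\in\mathcal H ,
\]
% where P_nu denotes the orthogonal projector from H onto H_nu mentioned above.
```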
Next, let I ν : H ν −→ H denote the (norm-preserving) canonical embedding of H ν into H and set 1 ν := I ν (1), where 1 is seen not only as an element in C(X), but in H ν as well. Anyway, often we will simply drop I ν . Analogously, we do not explicitly mark the transition from continuous functions to their classes in L 2 , when calculating scalar products. Note, however, that C(X) is, in general, not embedded into L 2 (X, µ ν ). Let, e.g., µ ν be the Dirac measure at some point in X, then the image of C(X) is isomorphic to C. Therefore, one has to be careful when operating with pull-backs of homeomorphisms that do not leave µ ν invariant. Finally, for µ ν 1 = µ ν 2 we denote the canonical isomorphism mapping Definition 2.3 Let W be a subset of W H(µ) and let π be some representation of A(W, µ) on some Hilbert space H. Note that we tacitly assume some information about π to be given when we speak on invariance w.r.t. some W. This will avoid some cumbersome notation when we study equivalent representations. Lemma 2.3 Let W and W ′ be subsets of W H(µ) , let π ′ be a representation of A(W ∪ W ′ , µ) on some Hilbert space H, and let ψ ∈ H be a W ′ -invariant vector. Then there is a first-step decomposition ν∈N π µν of π ′ and some ν ∈ N, such that 1 ν is a W ′ -invariant vector. If, moreover, ψ is cyclic for π ′ | A(W,µ) , then 1 ν may be chosen cyclic as well. Proof Define H ν := π ′ (C(X))ψ ⊆ H. Then both H ν and H ⊥ ν are invariant w.r.t. π ′ (C(X)). Since H ⊥ ν is non-degenerate (if not zero), the projection of π ′ | C(X) to H ⊥ ν is (up to equivalence) some direct sum ν ′ ∈N ′ π µ ν ′ of cyclic representations of C(X). Since, on the other hand, π ′ | C(X) is cyclic on H ν , it is equivalent to the canonical representation π µν of C(X) on some L 2 (X, µ ν ), whereas the corresponding intertwiner maps ψ to 1 ν . Now, by construction, π µν ⊕ ν ′ ∈N ′ π µ ν ′ is a first-step decomposition of π ′ . Moreover, the W ′ -invariance of ψ translates into that of 1 ν and the cyclicity, if given, as well. qed Now, throughout the whole Section 2, we let W and W ′ be some arbitrary subsets of W H(µ) , whereas w ′ (W) ⊆ W for all w ′ ∈ W ′ . Note that we do not assume that they are fixed once and for all, i.e., they may be changed from one statement to the other. Next, π and π ′ are always non-degenerate representations of A(W, µ) and A(W ∪ W ′ , µ), respectively, on some Hilbert space H, where π is the restriction of π ′ to A(W, µ). 3 We let ν π µν be a fixed first-step decomposition of π on H = ν H ν = ν L 2 (X, µ ν ) and usually set π ν := π µν for simplicity. Note that every first-step decomposition of π is also some for π ′ and vice versa, since π and π ′ coincide on A(W, µ) containing C(X). Moreover, if there is some W ′ -invariant (and π-cyclic) vector, then we assume that there is some ν ∈ N, such that 1 ν is W ′ -invariant (and π-cyclic). Note that this does not contradict the assumption above that measures in a first-step decomposition are equal if they are equivalent. Finally, in order to fix a home for the one-parameter subgroups in W introduced later, we fix some subset R in the set Hom(R, W) of homomorphisms from R to W. Analogously, we define these properties for w ′ ∈ W ′ . Since w is unitary, we have Corollary 2.5 Any finite product of π ν -units is a π ν -unit. Proof Fix some ψ ν ∈ H ν and recall that 1 ν is cyclic for π ν , i.e., for every ε > 0 there is Since w is a π ν -unit, we have π(w)π(f )1 ν = π(w(f ))1 ν ∈ H ν . 
By unitarity of w, The invariance of H ⊥ ν follows from the unitarity of π(w). qed Corollary 2.7 If each w ∈ W is a π ν -unit, then the restriction of π to H ν is cyclic. Continuous µ 0 -Generating Systems Until the end of this subsection, let µ 0 be some measure on X. Definition 2.6 A subset E of C(X) is called continuous µ 0 -generating system iff • 1 ∈ E is orthogonal in L 2 (X, µ 0 ) to each other element in E and • span C E is dense both in C(X) and in L 2 (X, µ 0 ). is a continuous generating system w.r.t. two measures µ 1 and µ 2 , then µ 1 equals µ 2 . Proof We have , the assertion follows from the regularity of the measures. qed Lemma 2.16 Continuous µ 0 -generating systems always exist. Proof C(X) always spans a dense subset in L 2 (X, µ 0 ). Let now E contain 1 and all f − 1, f µ 0 1 with f in C(X). qed Lemma 2.17 Let w ∈ W be some element. Assume that π ′ is W ′ -natural and that 1 ν is W ′ -invariant. Moreover, let E 0 ⊆ C(X) be some subset, such that for every non-constant f ∈ E 0 there are infinitely many elements {w ′ ι } in W ′ commuting with w, such that {w ′ ι (f )} ⊆ C(X) forms an orthonormal system in L 2 (X, µ ν ). Then P ν π(w)1 ν ′ is orthogonal to the span of E 0 for all ν ′ ∈ N with µ ν ′ = µ ν . Here, E 0 is seen as a subset in H ν . Regularity Definition 2.7 Precisely the elements of Hom(R, W) are called one-parameter subgroups in W, those in R ⊆ Hom(R, W) one-parameter R-subgroups in W. Definition 2.8 A one-parameter subgroup is called regular iff it is weakly continuous. Definition 2.9 A representation π of A(W, µ) is called regular w.r.t. R iff π maps regular one-parameter R-subgroups in W to weakly continuous one-parameter subgroups in π(W). If R is clear from the context, we will simply speak about regular representations. Definition 2.10 • Two one-parameter subgroups t −→ w 1,t and t −→ w 2,t in W are called commuting iff w 1,t 1 and w 2,t 2 commute for all t 1 , t 2 ∈ R. • The set given by all finite (pointwise) products of mutually commuting one-parameter R-subgroups in W is denoted by R . Lemma 2.18 The product of finitely many, mutually commuting one-parameter R-subgroups in W is a one-parameter R -subgroup in W . Moreover, if π is regular w.r.t. R, then π is regular w.r.t. R . Proof The first part is clear. For the second one use π(w t ) B(H) ≤ w t A(W,µ) = 1 for all t to show Therefore, in what follows, we will often assume that R is replaced tacitly by R . Here, the equality holds if π is faithful. with Lemma 2.19. qed Definition 2.11 Let ψ ∈ H be some vector. • Let f ∈ C(X). We say W ′ splits W at ψ for f iff there is a one-parameter R-subgroup w t in W, some ε > 0 and some t 0 > 0, such that In other words, w t is not uniformly weakly continuous on the W ′ -span of f . Moreover, note that the splitting property actually refers to the choice of R. Since, in general, we will have fixed R, we drop this notion here. Proof Choose a continuous µ-generating system E, such that W ′ splits W at 1 ν 0 for every Choose a one-parameter R-subgroup w t in W, some sufficiently small ε > 0 and some t 0 > 0, such that for all non-zero |t| < t 0 . Hence, using Lemma 2.20, for all non-zero |t| < t 0 . This, however, is a contradiction to our assumption that π is regular, i.e., t −→ π(w t ) is weakly continuous. Hence, 1, f ν 0 = 0 for all f in E. By Lemma 2.15, we have µ = µ ν 0 . qed Λ-Regularity Definition 2.12 Let A be any set. 
• A set Λ is called set of A-functions iff its elements are A-valued functions (i.e., there is no restriction for the domains of these functions). • A set Λ of A-functions is called topological (sequential) iff the domain of each λ ∈ Λ is a topological (sequential topological) space. Definition 2.13 Let A be some subset of a C * -algebra A, and let π be a representation of A on some Hilbert space H. Moreover, let Λ be a set of topological A-functions. Remark The ordinary regularity uses dom λ = R, where λ : t −→ w t runs over all oneparameter R-subgroups. Let us return to the case that π is a representation of A(W, µ) on H. Proposition 2.22 Let π be Λ-regular for some set Λ of W-functions. Fix for each λ ∈ Λ some subset Y λ in dom λ, such that λ(Y λ ) consists of π ν -units only and λ∈Λ λ(Y λ ) generates W. Then every w ∈ W is a π ν -unit. Quantum Geometric Hilbert Space In the remaining sections we will apply the general framework of Section 2 to quantum geometry. First, however, let us briefly recall in this subsection the basic facts and notations needed in the following. General expositions can be found in [6,4,3] for the analytic framework. The smooth case is dealt with in [8,7,27]. The facts on hyphs and the conventions are due to [14,16,19]. Let G be some arbitrary connected compact Lie group and M be some manifold. We let M be equipped with an arbitrary, but fixed differential structure. Later, we will restrict ourselves to analytic (or, if so desired, semianalytic) manifolds. A path is a piecewise differentiable map from [0, 1] to M , whereas differentiability is always understood in the chosen smoothness class. Moreover, we may restrict ourselves to use piecewise embedded paths only. A path is trivial iff its image is a single point. Two paths γ 1 and γ 2 are composable iff the end point γ 1 (1) of the first one coincides with the starting point γ 2 (0) of the second one. If they are composable, their product is given by An edge e is a path having no self-intersections, i.e., e(t 1 ) = e(t 2 ) implies that |t 1 − t 2 | either equals 0 or 1. Two paths γ 1 and γ 2 coincide up to the parametrization iff there is some orientation preserving piecewise diffeomorphism φ : A path is called finite iff it equals up to the parametrization a finite product of edges and trivial paths. In what follows, every path will be assumed to be finite. Next, two paths are equivalent iff there is a finite sequence of paths, such that two subsequent paths coincide up to the parametrization or up to insertion or deletion of retracings δδ −1 . Finally, we denote the set of all paths by P gen , that of all equivalence classes of paths by P. The multiplication of paths naturally turns P into a groupoid. Usually (but not in Subsections 3.2 and 3.3), paths are understood to be equivalence classes of paths. Initial and final segments of paths are naturally defined. We will write γ 1 ↑↑ γ 2 iff there is some path γ being (possibly up to the parametrization) an initial path of both γ 1 and γ 2 . A hyph υ is some finite collection (γ 1 , . . . , γ n ) of edges each having a "free" point. This means, for at least one direction none of the segments of γ i starting in that point in this direction, is a full segment of some of the γ j with j < i. Graphs and webs are special hyphs. The subgroupoid generated (freely) by the paths in a hyph υ will be denoted by P υ . Hyphs are ordered in the natural way. In particular, υ ′ ≤ υ ′′ implies P υ ′ ⊆ P υ ′′ . 
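The product of composable paths used above can be written down explicitly; with the usual half-speed parametrization (any other choice yields the same path up to the parametrization), it reads
\[
  (\gamma_1\gamma_2)(t) \;=\;
  \begin{cases}
    \gamma_1(2t), & t \in [0,\tfrac12],\\[2pt]
    \gamma_2(2t-1), & t \in [\tfrac12,1].
  \end{cases}
\]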
The set A of generalized connections A is now defined by with A γ := Hom(P γ , G) ⊆ G #γ given the topology which is induced by that of G, for all finite tuples γ of paths. Moreover, we define the (always continuous) map π γ : A −→ G #γ by π γ (A) := A(γ) ≡ h A (γ). Note, that π γ is surjective, if γ is a hyph. Finally, for compact G, the Ashtekar-Lewandowski measure µ 0 is the unique regular Borel measure on A whose push-forward (π υ ) * µ 0 to A υ ∼ = G #υ coincides with the Haar measure there for every hyph υ. It is used to span the auxiliary Hilbert space H aux := L 2 (A, µ 0 ) of quantum geometry with scalar product ·, · . If we included (generalized) gauge transforms into our considerations and studied the analytic category only, we could use the spin-network states to get a basis of H aux,inv = L 2 (A/G, µ 0 ) with G being the group of generalized gauge transforms. Here, however, we want to include gauge-variant functions as well and, moreover, do not want to restrict the smoothness class at the beginning. Therefore, we will consider now generating systems for H aux . For this, first of all, let us fix a representative in each equivalence class of irreducible representations of G, which we will refer to below. When considering matrix indices for matrices on some Euclidean space V , we assume that the underlying vectors are normalized. This means that for all A ∈ End V we have |A i j | ≤ A , where · denotes the standard operator norm. • More compactly, we set (T φ,γ ) m n : Observe that we get the same gauge-variant spin network state again if we simultaneously revert the orientations of an arbitrary number of edges and dualize the corresponding representations. This trivial overcompleteness will be ignored in the following, i.e., we will always identify graphs and hyphs differing in the ordering or the orientation of the edges only. Let us now recall Note, that (even after admitting only one edge orientation per hyph) υ M υ is a generating system for, but not an orthonormal set in L 2 (A, µ 0 ). This would still be the case, if we were in the (semi)analytic category and use graphs only (see below). In particular, we have Nevertheless, we will be looking for orthogonal decompositions of L 2 (A, µ 0 ). For that purpose, we will have to single out orthogonal subsets of gauge-variant spin network functions: Until the end of this subsection we will now consider piecewise analytic paths only. In contrast to the standard, i.e. gauge-invariant spin network states, the gauge-variant ones do not form an orthonormal basis for L 2 (A, µ 0 ) even after dropping some subset of them. The problem are the states arising in the decomposition of an edge into a product of subedges, i.e. having two-valent vertices. In the gauge-invariant case they can be dropped since, by invariance, they reproduce the original state. Here, however, in the gauge-variant case, we get a sum like where the (dim φ) gauge-variant spin network states together with that at the left-hand side span a (dim φ)-dimensional subspace of L 2 (A, µ 0 ). We might simply drop the one at the lefthand side, but this would lead to consistency troubles since we could want to decompose those at the right-hand side again. A possible solution for this dilemma is given by the extended spin network states as defined by Ashtekar and Lewandowski in [5]. We do not want to introduce that notion here, but only study the "most dangerous" cases in our frameworknamely, those gSN with "matching" indices 4 at each two-valent vertex. 
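For orientation, let us record the explicit form of the gauge-variant spin network functions used here; up to the conventions for ordering and orientation of the edges, and with the normalization of one factor √(dim φ_i) per edge (the same normalization reappears in the proof of Proposition 3.28 below), they read
\[
  (T_{\phi,\gamma})^{m}_{\;\;n}(A) \;=\; \prod_{i=1}^{\#\gamma} \sqrt{\dim\phi_i}\;\bigl(\phi_i(h_A(\gamma_i))\bigr)^{m_i}_{\;\;n_i},
\]
where γ = (γ₁, …, γ_{#γ}) is a hyph, the φ_i are non-trivial irreducible representations of G, and the multi-indices m, n collect the matrix indices of the single edges.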
In the decomposition of the γ 1 γ 2 -state above, this concerns the vector at γ 1 (1) = γ 2 (0). with graphs γ and γ ′ be given. • there is a point m ∈ int γ ∩ int γ ′ , such that the representations for the edges in γ and γ ′ running through m do not coincide; • there is some m ∈ M being a two-valent vertex with non-matching indices for one and being interior for the other graph; or • there is some m ∈ M being a two-valent vertex for both graphs, whereas both "incoming" or both "outgoing" indices are different. Note that matrix indices are regarded as different if they belong to different representations. Proof The first two cases are obvious. The third one is clear observing our example above. Namely, decompose one of the graphs, say γ ′ , by inserting m as a vertex. In the decomposition of T ′ into a sum of gauge-variant spin network states of the enlarged graph, the indices of every addend are matching. By the orthogonality properties of matrix functions w.r.t. the Haar measure, we get the assertion. The last case is now clear as well. qed Definition 3.2 Let γ be an edge and φ be a non-trivial irreducible representation of G and let T := (T φ,γ ) m n be a gauge-variant spin network state. • all indices at two-valent vertices are matching, i.e., m k+1 = n k for all k. • all indices at two-valent vertices are matching, i.e., m k+1 = n k for all k and m 1 = n #γ . The set of all (γ, φ)-based gauge-variant spin network states will be denoted by B γ,φ . Moreover, we set B γ := {1} ∪ φ B γ,φ , where the union runs over all non-trivial irreducible representations of G. It contains precisely the γ-based gauge-variant spin network states. Note again that T is (γ, φ)-based if for some orientation and some ordering of γ, the conditions above are met. Lemma 3.5 B γ,φ is orthogonal to its complement in the set of all gauge-variant spin network states, for every edge γ and every irreducible representation φ of G. Proof Let T = (T φ,γ ) m n be a gSN not contained in B γ,φ . If im γ = im γ, the situation is clear. The same is true for φ k = φ for some k. Let now im γ = im γ and φ k = φ for all k. Then, possibly after modifying ordering or orientations, we have γ = γ 1 · · · γ n . Moreover, every vertex of γ is at most two-valent. Thus, the proof follows from Lemma 3.4. qed Corollary 3.6 For every edge γ, the Hilbert space Decomposition of Paths In the following we will study the intersection behaviour between paths and (generalized) surfaces. For this, we first consider how paths can be decomposed. Most of the relevant definitions and assertions are given in [13]. We will quote where appropriate and will simplify some assumptions and, therefore, proofs. Note that in this subsection we will often distinguish between P and P gen ; paths here are genuine maps from [0, 1] to M , not equivalence classes. It can easily be shown [13] that the set of all decompositions of a path γ is directed w.r.t. ≥. Definition 3.5 • A subset Q of P gen is called hereditary iff for each γ ∈ Q 1. the inverse of γ is in Q again, and 2. every decomposition of γ consists of paths in Q. • A subset Q of P gen is called complete iff it is hereditary and every path in P gen has a decomposition into paths in Q. A decomposition consisting of paths in Q only, will be called Q-decomposition. Lemma 3.7 Let Q ⊆ P gen be complete. Then for every hyph υ there is a hyph υ ′ ≥ υ with υ ′ ⊆ Q. Proof First decompose each γ ∈ υ into paths in Q. Collect all these paths in a set γ ′ ≥ υ. 
Since γ ′ may be not a hyph again, refine, if necessary, the paths in γ ′ further to get a hyph υ ′ ≥ γ ′ ≥ υ [14]. By completeness, υ ′ contains only paths in Q. qed Lemma 3.8 The set of all edges and trivial paths in P gen is complete. Main Construction Definition 3.6 Let Q be some hereditary subset of P gen . Then a map ρ : The set of all Q-germs from Q to G is denoted by Germ(Q, G). Observe that ρ(γ) and ρ(δ) coincide if γ and δ coincide up to the parametrization. In fact, since every decomposition γ 1 γ 2 of γ is also some for δ, we may apply property 2. above. Note that we will shortly speak about germs instead of Q-germs, provided the domain Q is clear from the context. Proposition 3.9 Let Q be some complete subset of P gen , and let ρ : Q −→ G be a germ. Then we have: • There is a unique germ ρ : P gen −→ G extending ρ. • The map ρ is given by Proof Let us first define the desired map ρ as given in the proposition above and now check its properties. 1. ρ does not depend on the choice of the Q-decomposition. Let γ and δ be two Q-decompositions of γ. Since, by assumption, every path in Q has Q-decompositions only, and since the set of decompositions of a path is directed w.r.t. ≥, we may assume γ ≥ δ. But, in this case the well-definedness follows directly from the definitions and germ property 2. of ρ. 2. ρ is constant on equivalence classes in P gen . Let γ and δ in P gen be equivalent. By definition, it is sufficient to check the following two cases: • γ and δ coincide up to the parametrization. This case is trivial, since every Q-decomposition of γ is also one of δ. Hence, ρ(γ) = ρ(δ). • There is some ε in P gen and some decomposition γ 1 γ 2 of γ, such that δ equals the product of γ 1 , ε, ε −1 and γ 2 . Now, in this case, choose some Q-decompositions ε 1 · · · ε K of ε and γ s1 · · · γ sIs of γ s with s = 1, 2. Then 3. ρ is a germ extending ρ, and [ ρ] is a homomorphism. This is proven as the statements above. 4. ρ is the only germ extending ρ. If ρ ′ is some other germ extending ρ different from ρ, then there is some γ ∈ P gen with ρ ′ (γ) = ρ(γ). Now, choose a Q-decomposition γ 1 · · · γ I of γ. By the properties of a germ, there is some i with ρ ′ (γ i ) = ρ(γ i ). However, since both ρ ′ and ρ extend ρ, both sides are equal to ρ(γ i ). Contradiction. qed Proposition 3.10 Let Q be some complete subset of P gen . Let X be some topological space, and let λ : X −→ Germ(Q, G) be some map. Finally, assume that the map λ(·) (γ) : X −→ G is continuous for all γ ∈ Q. is continuous, where · is given as in Proposition 3.9. Proof It is sufficient [19] to prove that π γ • Θ λ : X −→ G is continuous for all edges γ. Since the multiplication in G is continuous and Q is complete, we even may restrict ourselves to the cases of γ ∈ Q. Here, however, the assertion follows immediately from i.e., π γ • Θ λ = λ(·) (γ) for all γ ∈ Q. qed Lemma 3.11 Two generalized connections coincide iff they coincide for all (equivalence classes of) paths of a complete subset of P gen . Most relevant for the well-definedness of the Weyl operators to be introduced below, will be Theorem 3.12 Let Q be a complete subset of P gen and κ : Q −→ G an admissible map. Then there is a unique map Θ : A −→ A, such that, for all γ ∈ Q, Moreover, Θ is a homeomorphism preserving the Ashtekar-Lewandowski measure µ 0 . Hence, the pull-back Θ * : is an isometry and the induced operator on B(L 2 (A, µ 0 )) is well defined and unitary. An more general version is proven in [13]. We replay the corresponding proof. 
In fact, for all γ ∈ Q and all decompositions γ 1 γ 2 of γ, we have Here, we used the admissibility of κ with γ 1 ↑↑ γ 1 γ 2 and γ −1 depends continuously on A, by definition of the projective-limit topology on A. • Now, by Proposition 3.10, Θ : The uniqueness of Θ follows from the completeness of Q. • To prove that Θ is a homeomorphism, we explicitly describe the inverse of Θ. Define It is easy to check that κ ′ is admissible. As already proven above, there is a unique continuous map Θ ′ : for all γ ∈ Q. Altogether, this gives for all γ ∈ Q. The completeness of Q and Lemma 3.11 prove Θ ′ • Θ = id A . Analogously, one shows Θ • Θ ′ = id A . • Θ even preserves the Ashtekar-Lewandowski measure. In fact, let υ be an arbitrary, but fixed hyph. By completeness, there is some hyph υ ′ ≥ υ with Y ′ edges and υ ′ ⊆ Q. By construction, we have In other words, each Θ γ consists of a left and a right translation, whence the Haar measure on G is Θ γ -invariant. Since finite regular Borel measures on A coincide iff their push-forwards w.r.t. all π υ coincide, we get the assertion. qed We get immediately Corollary 3.13 Let Q be some complete subset of P gen . Moreover, let Y be some topological space and let κ : for all γ ∈ Q. Moreover, Θ is continuous. Surfaces and Fluxes Originally (see, e.g., [32]), the action of flux operators on cylindrical functions has been given by self-adjoint differential operators. Since these operators are unbounded, one has to study their domains very carefully. To avoid this problem, one usually considers them as generators of unitary, i.e. bounded operators. Now, the flux operators turn into some sort of translation operators. In this section, we are going to shift this action to a still deeper level. We will see that it can be regarded as the pull-back of some continuous action of translations on A itself. Quasi-Surfaces Before we can define this action we study how paths are decomposed by surfaces. Observe that the end points of an S-external path may be contained in S. It is only required for the "interior part" of the path, i.e., for all γ(t) with 0 < t < 1 to be outside of S. If S is clear from the context, we simply speak about external and internal edges. Definition 3.9 Let S be some subset of M . Then Q S denotes the set of all paths that are S-external or S-internal. Definition 3.10 Let S be a subset of M and γ ∈ P gen be an edge. Then a decomposition γ of γ is called S-admissible iff γ ⊆ Q S . Lemma 3.14 Let S be a subset of M . Then Q S is complete, if every edge has an S-admissible decomposition. Proof Heredity is clear. The completeness follows since any (finite) path can be decomposed into a product of edges and trivial paths, hence, by assumption, into a product of S-external or S-internal paths. qed Definition 3.11 A subset S of M is called quasi-surface iff every edge γ ∈ P gen has an S-admissible decomposition. Examples for quasi-surfaces, in case we are in the (semi)analytic category for the paths, are embedded analytic submanifolds that are even semianalytic. 8 Note that these submanifolds may have any dimension. Therefore, any collection of points having no accumulation point is a quasi-surface. This even remains true in the category of piecewise smooth paths. On the other hand, there are indeed non-semianalytic submanifolds that are quasisurfaces. Consider, e.g., the smooth function f on R with f (x) := e −1/x 2 for x = 0 and f (0) := 0. Of course, it is analytic everywhere except for x = 0. 
But, its graph S does not form a semianalytic submanifold in, for simplicity, R 2 . Nevertheless, it is a quasi-surface. In fact, let γ be a piecewise analytic path in R 2 . If it does not run through the origin, the statement is trivial. Assume now that γ runs through the origin. Decomposing γ appropriately, if necessary, we may restrict ourselves to the case of an analytic γ starting at the origin without returning there at any other parameter time. Assume next that γ has infinitely many intersection points with S, and let the origin 0 be an accumulation point for int γ ∩ S. W.l.o.g., 9 we may consider, finally, γ to be the graph of an analytic function on R, again denoted by γ. Use now the fact that two C ∞ functions f 1 , f 2 have identical Taylor coefficients at 0 if 0 is an accumulation point of f 1 = f 2 , to derive that γ has only zero Taylor coefficients, just because f does. Now, analyticity implies that γ is a straight edge along the x-axis never intersecting S again. Using this contradiction, the statement is now trivial. If we would like to take even more quasi-surfaces into account, we may reduce the set of paths under consideration. This might be relevant, e.g., in the case of piecewise linear paths, although there usually also the set of manifolds is restricted to that of piecewise linear submanifolds a priori. The punctures leading to an S-admissible decomposition will be relevant for the definition of Weyl operators. In particular, these operators depend on the transversality properties between the path and the (oriented) hypersurface. Therefore, we need to introduce a general notion for the properties an orientation should encode. 8 This, however, is no longer true if we drop the semianalyticity (for its definition see Subsection 6.3). In fact, consider R 2 and a smooth path γ in the closed half-plane y ≤ 0, such that γ connects (−1, 0) and (+1, 0) and intersects the straight line δ between these two points infinitely often without sharing a full segment. (See similar constructions, e.g., in [7,17].). Now define S to be the upper one of the two open sets in R 2 bounded by γ, by x = −1 and by x = +1. Of course, S is an embedded analytic manifold, although it is not semianalytic in R 2 . Nevertheless, δ leaves S and returns into it infinitely often. Therefore, there is no S-admissible decomposition of δ, whence S is not a quasi-surface. 9 Otherwise, restrict the domain of γ, such that the x-component ofγ is non-zero everywhere. If this is not possible, the x-component ofγ vanishes at t = 0. But, then γ is S-external anyway, at least locally. Definition 3.12 Let S be a quasi-surface of M . • A function σ S : P gen −→ Z is called − outgoing intersection function for S iff we have for all γ, γ ′ ∈ P gen with γ ↓↓ γ ′ . • An outgoing intersection function σ − S and an incoming intersection For brevity, we will denote a compatible pair (σ − S , σ + S ) of an outgoing and an incoming intersection function by σ S and call it intersection function for S. Even more, we use σ S and σ − S synonymously. Sometimes, we write σ(S, γ) instead of σ S (γ) to emphasize that the intersection function may depend on quasi-surface and path as well. Definition 3.13 Let S be a quasi-surface of M , and let σ S : P gen −→ Z be some intersection function for S. Then the intersection function −σ S is called inverse to σ S . Definition 3.14 Let S be a quasi-surface with intersection function σ S , and let γ ∈ P gen be some path. 
Assume, moreover, that there are only finitely many In our applications, we will, e.g., define σ S (γ) for an S-external path γ to be ±1 (depending on the direction of γ), if its initial path intersects S transversally, and equal to 0, otherwise: (0) is not tangent to S and some initial path of γ lies (except γ(0)) above (below) S. 2. The topological intersection function σ top S : P gen −→ Z is defined as follows: and no initial path of γ is contained in S and some initial path of γ lies (except γ(0)) above (below) S. Here, "above" and "below" refer to the orientation of S. Moreover, initial paths w.r.t. a trivial interval are not taken into consideration. It is easy to check that this definition is well defined. Moreover, obviously, for every orientable S there are precisely two natural (and two topological) intersection functions corresponding to the two choices of orientations. They coincide up to the sign. If S is a submanifold of codimension larger than 1, there is no longer just a pair of natural orientations. Nevertheless, in view of the applications we aim at, we may define "natural" orientations: Definition 3.16 Let S be some embedded submanifold of M being a quasi-surface of M and having codimension 2 or higher. Then an intersection function σ S : P gen −→ Z is called natural (topological) iff there is some oriented embedded hypersurface S ′ in M being a quasi-surface and having σ S as its natural (topological) intersection function. One sees immediately, that the number of natural intersection functions of such quasi-surfaces with higher codimension may be rather large. For instance, let S be (a bounded part of) a line in R 3 . Then we may take all the full circles in R 3 having S as its diameter. Of course, there is a continuum of such circles each having another pair of natural or topological intersection functions. Definition 3.16 gives an example for the induction of intersection functions. Lemma 3.15 The complement of a quasi-surface is a quasi-surface. Proof An S-admissible decomposition of an edge is also (M \ S)-admissible. qed Lemma 3.16 If S 1 and S 2 are quasi-surfaces, then S 1 ∪ S 2 and S 1 ∩ S 2 are quasi-surfaces. Proof If γ is some edge, decompose each path of some S 1 -admissible decomposition w.r.t. S 2 . It is easy to check that this leads to an S-admissible decomposition of γ with S being S 1 ∪ S 2 or S 1 ∩ S 2 . qed Corollary 3.17 Let S 1 and S 2 be quasi-surfaces with intersection functions σ S 1 and σ S 2 , respectively. Then σ S 1 + σ S 2 is an intersection function for S := S 1 ∪ S 2 . If, additionally, σ S 1 and σ S 2 coincide for all paths starting at S 1 ∩ S 2 , then the function σ S 1 S 2 defined by is an intersection function for S. It is called joint intersection function. Obviously, the joint intersection function equals σ S 1 + σ S 2 if S 1 and S 2 are disjoint. Sometimes, it is convenient to use some sort of standard decomposition of edges. Indeed, there is a minimal decomposition. Definition 3.18 Let S be a subset of M and γ ∈ P gen be an edge. An S-admissible decomposition γ of γ is called minimal iff γ ′ ≥ γ for any other S-admissible decomposition γ ′ of γ. Lemma 3.18 If an edge γ has any S-admissible decomposition, it has also a minimal Sadmissible decomposition. Moreover, this minimal decomposition is unique up to the parametrization of its components. Proof Let δ be an S-admissible decomposition of γ. 
Since γ equals δ 1 · · · δ K up to the parametrization, the parameter domain [0, 1] of γ may be decomposed into non- This implies τ i ∈ T , in contradiction to the minimality of γ. • Let γ(τ i ) ∈ S. Then, analogously, we get a contradiction. Consequently, Q j can overlap nontrivially only either P i or P i+1 . qed Definition 3.19 Let S be a quasi-surface with intersection function σ S , let γ be an edge and let γ = {γ i } n i=0 be its minimal S-admissible decomposition. (1) and σ + S (γ i ) = 0. We say that γ intersects S completely transversally iff there are no S-internal edges in the minimal S-admissible decomposition of γ and each γ-half-puncture is also a γ-puncture. Quasi-Flux Action In this subsection, S is some quasi-surface and σ S some intersection function for S. If Maps(M, G) ∼ = G M is given the product topology, then Θ is continuous. Moreover, the map , is a homeomorphism and preserves the Ashtekar-Lewandowski measure for each d ∈ Maps(M, G). Finally, the inverse of Θ S,σ S d is given by Θ S,σ S d −1 . Proof • Θ S,σ S exists uniquely and is continuous for the product topology on Maps(M, G). First note that Q S is complete by Lemma 3.14. Let now Y := Maps(M, G) and define The only nontrivial property of κ in Corollary 3.13 to be checked is κ(γ −1 is a homeomorphism and leaves µ 0 invariant. This now follows from Theorem 3.12. qed Remark Note that Θ S,σ S is, in general, not a group action of Maps(M, G). But, we have Lemma 3.20 Let S 1 and S 2 be two quasi-surfaces, and let d 1 , If σ S 1 and σ S 2 coincide for all paths starting in S 1 ∩ S 2 and vanish both for S 1 -and S 2 -internal paths, then Proof By direct calculation. qed Proof Straightforward. qed Weyl Operators Recall that every continuous map ψ : X −→ X on a topological space X defines a continuous pull-back map ψ * : C(X) −→ C(X). This map is an isometry if ψ is surjective. If X is even a compact Hausdorff space, ψ is surjective and µ a (finite) regular Borel measure on X with ψ * µ = µ, then ψ * is a unitary operator on L 2 (X, µ). This motivates Note that each Weyl operator is both a map on C(A) and L 2 (A, µ 0 ). In fact, Proposition 3.19 gives Proposition 3.22 • Every Weyl operator is an isometry on C(A). • Every Weyl operator is a unitary operator on L 2 (A, µ 0 ). Note, however, that measures, in general, lead to Weyl operators that are ill defined on the L 2 -functions: For instance, let us work in the analytic category, fix some hypersurface S and some intersection function σ S . Assume now that, g running over G, we have all Weyl operators at our disposal that are given by To make all these Weyl operators well defined as operators on L 2 (A, µ) for some µ, we have at least to demand that, for each S-external edge γ (having only one end attached to S), the support of the push-forward measure (π γ ) * µ equals G. Of course, there are many measures without this property. Let us now collect some additional properties of Weyl operators, again following directly from the properties of Θ and the definition of Weyl operators by pull-backs. Then we have (dropping always the upper indices S, σ S in w S,σ S d ): The preceding corollary implies that the inversion of the orientation of a quasi-surface leads to the adjoint Weyl operator. The uniqueness proof in Section 7 will heavily use this fact. for all T i ∈ M γ i and all functions d : M −→ G. Corollary 3.17 implies Lemma 3.26 Let S 1 and S 2 be disjoint quasi-surfaces with intersection functions σ S 1 and σ S 2 , respectively. 
Let, moreover, d 1 , d 2 : M −→ G be some functions. Then we have Lemma 3.27 Let υ be a hyph and w be a Weyl operator for some quasi-surface S. Then there is a hyph υ ′ ≥ υ with w(M υ ) ⊆ span M υ ′ . If, moreover, υ contains S-external and S-internal edges only, then w(M υ ) ⊆ span M υ . Regularity Proposition 3.28 Fix some quasi-surface S and some intersection function for σ S . Next, let Λ 0 be a set of sequential Maps(M, G)-functions, such that 10 pr x • λ 0 : dom λ 0 −→ G is sequentially continuous for every x ∈ M and each λ 0 ∈ Λ 0 . Finally, assign to each λ 0 some λ : W being the set of Weyl operators, and collect all such λ into Λ. Then λ( · )ψ : dom λ 0 −→ H aux ≡ L 2 (A, µ 0 ) is continuous for all ψ ∈ H aux and each λ ∈ Λ. Proof Fix some λ ∈ Λ with corresponding λ 0 ∈ Λ 0 and recall that sequential continuity equals continuity, if the domain is sequential. To avoid cumbersome notation, we write shortly w y instead of w S,σ S λ 0 (y) . • Of course, w y (1) = 1 for all y. • Let γ be an edge and T ∈ M γ some gauge-variant spin network state over γ. − If γ is internal, then w y (T ) = T for all y, hence y −→ w y (T ) is continuous. − If γ is external, then with T = √ dim φ φ k l and after a straightforward calculation, we have (There is no summation over k and l.) Since, by assumption each pr x • λ 0 is a continuous mapping from dom λ 0 to G, we get w y (T ) − w y ′ (T ) Haux → 0 for y → y ′ , implying the desired continuity of y −→ w y (T ). • Let υ contain external and internal edges only. Let, moreover, T = T 1 ⊗ · · · ⊗ T Y be in M υ . Then we have for y → y ′ . The factorization of the scalar products was possible, because w y leaves the span of (non-trivial) matrix functions over γ i invariant and because such spans are orthogonal w.r.t. µ 0 for paths in a hyph. • Let now T ∈ M SN be an arbitrary gauge-variant spin network function, i.e., there is a hyph υ with T ∈ M υ . Then there is some hyph υ ′ ≥ υ containing external and internal edges only. Since M υ ⊆ span M υ ′ by Lemma 3.2, w y (T ) → w y ′ T for y → y ′ . • Now, Lemma A.1 gives the proof: The span of M SN = υ M υ is dense in L 2 (A, µ 0 ), and w y = 1 for all y by unitarity. qed A typical example is given by the continuous (or differentiable) functions w.r.t. the supremum norm: Definition 3.22 Let S be some quasi-surface and σ S some intersection function for S. Now let Λ p,S,σ S for p ∈ N ∪ {∞, ω} contain precisely all mappings is equipped with the supremum norm on S. We now may transfer this result to one-parameter subgroups. Using the one-parameter subgroups on G induced by the elements of the Lie algebra g, we have t −→ (e td(x) ) x∈M Then we have: is strongly continuous w.r.t. to L 2 (A, µ 0 ) for each quasi-surface S with intersection function σ S . Graphomorphisms One of the particular features of quantum geometry is its invariance w.r.t. diffeomorphisms of M . More precisely, diffeomorphisms act naturally on the paths inducing a µ 0 -invariant action on A and, consequently, a unitary action on H aux . It remains the question, what kind of diffeomorphisms are to be admitted: analytic, piecewise analytic, smooth or something else? Anyway, we will postpone this discussion to Section 4 and consider here only some sort of minimal requirements. For this, let us again fix some smoothness class for the manifold and the paths in it. The map ϕ −→ α ϕ is even a representation of the group of graphomorphisms on L 2 (A, µ 0 ), because α ϕ 1 •ϕ 2 = α ϕ 1 • α ϕ 2 and α ϕ −1 = α −1 ϕ . 
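As a minimal illustration of the regularity statements above (with G = U(1) chosen only for concreteness, and with the integer exponent depending on the sign conventions fixed for σ_S), consider a single S-external edge γ, the constant functions d_t ≡ e^{it} and the character φ_n(z) = z^n. The pull-back action of the corresponding Weyl operators then reduces to a phase,
\[
  w^{S,\sigma_S}_{d_t}\bigl(T_{\phi_n,\gamma}\bigr) \;=\; e^{\,i n m t}\, T_{\phi_n,\gamma}
  \qquad\text{for some } m \in \mathbb{Z} \text{ fixed by the intersection behaviour of } \gamma \text{ and } S,
\]
so t ↦ w_{d_t}(T_{φ_n,γ}) is norm-continuous; since the gauge-variant spin network functions span a dense subspace and all Weyl operators are unitary, strong continuity on all of L²(A, µ₀) follows.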
11 Graphomorphisms do not only act on graphs, but also on quasi-surfaces, intersection and other functions. We, therefore, will have to guarantee that admissible homeomorphisms do not only preserve the set of paths under consideration, but also that of quasi-surfaces, and have to avoid ill-defined intersection functions -in particular, if we aim at an "intrinsic" assignment of intersection functions to quasi-surfaces. All that will be provided by using stratified analytic isomorphisms as to be discussed below. Directly from the definitions, we get finally Generalized Gauge Transforms Any gauge theory incorporates gauge invariance. Therefore, we close this section with a few remarks on gauge transformations and, more general, bundle automorphisms. (1)) for all γ ∈ P . 11 Note that we did not care about the corresponding covariance property for the Weyl operators. In fact, there w is given by the pull-back of Θ, not of Θ −1 . Since, however, the Θ-transforms do not form a group, that does not matter. 12 Starting from Section 4, we will usually drop the word "generalized" for simplicity. Observe that β g 1 •g 2 = β g 1 • β g 2 and β g −1 = β −1 g . Proposition 3.37 • g −→ β g is a representation of G on C(A) by isometries. • g −→ β g is a representation of G on L 2 (A, µ 0 ) by unitaries. Generalized gauge transforms do also act on the G-valued functions labelling the quasisurfaces. Definition 3.28 Let g be a generalized gauge transform. Then we set: • g(d) := g · d · g −1 for every function d : M −→ G. Again, directly from the definitions, we get Bundle Automorphisms Up to now, we have widely ignored the bundle structure of the gauge theory. Without a real need, we tacitly assumed to deal with a trivialized bundle, as we focused on the manifold M and the structure group G only. Of course, it made the notations simpler and can, moreover, be justified a posteriori: A contains the C p connections of any G-principal bundle over M , independently from the bundle we started from. Similarly, G contains all C p gauge transforms in any such bundle. But, conceptually, it is much more desirable to include the full bundle structure. Then we would also like to include the full group of bundle automorphisms. Note, here, that given any bundle automorphism θ : P −→ P of the G-bundle P over M , we may extract from it a diffeomorphism ϕ θ : where pr M denotes the canonical projection pr M : P −→ M . Moreover, the (smooth) gauge transforms correspond to vertical automorphisms; these are the bundle automorphisms with ϕ θ = id M . Nevertheless, the full information on any (possibly stratified) C p bundle automorphism can be encoded in a (again, possibly stratified) C p diffeomorphism and a generalized gauge transform (even of any other bundle). The only danger arising from taking all the generalized gauge transforms of Subsection 3.7 is to take too many gauge transforms. However, observe that, at least for the piecewise analytic category, the set of gauge orbits A/G is densely embedded into A/G and no two piecewise analytic connections fall into the same equivalence class by moding out the group of gauge transforms [18]. Finally, as it will turn out, the diffeomorphisms and the gauge transforms will play different rôles in the following proofs. Therefore, to make the basic ideas clearer and to sometimes allow for relaxed assumptions in the assertions, we will refrain from considering the fully automorphism invariant treatment of the Weyl algebra. 
Thus, w.l.o.g., we may pragmatically consider the bundle-automorphism invariance given by implementing both diffeomorphism and gauge invariance. The translation into the fully invariant language has to be left to the interested reader. Structure Data In what follows, we are going to apply the above definitions and results to quantum geometry. Usually, this means to use piecewise analytic paths γ and oriented hypersurfaces S in M , whereas the intersection functions encode whether γ intersects S transversally or not and how its direction is related to the orientation of S. Moreover, (piecewise) analytic diffeomorphisms act on these objects. However, is it obvious that we should consider precisely these ingredients? Before we discuss this question, let us collect these assumptions to avoid cumbersome notation. Indeed, at the first glance, there seems to be an enormous freedom in choosing structure data of a theory. However, there are several antagonists in the game. For instance, if we would enlarge P, we might have to reduce S, simply because we have to guarantee that there are at most finitely many (genuine) intersections of paths and quasi-surfaces. In fact, this practically excludes the choice of the smooth category for the paths: There are even analytic submanifolds having an infinite number of isolated transversal intersections with smooth paths. Therefore, we are -from the mathematical, technical point of view -quite forced to admit at most (piecewise) analytic paths. This however reduces the number of graphomorphisms in ϕ. Namely, they have to map analytic paths to (piecewise) analytic ones. This would lead directly into conflicts, if general smooth diffeomorphisms were allowed. They have to be "analyticity preserving" -at least for one-dimensional submanifolds. There are indeed classes of homeomorphisms having this property: At first, of course, analytic diffeomorphisms fulfill this requirement. However, this will not be sufficient for two reasons: On the one hand, analyticity usually implies high non-locality -a feature not desired in gravity for physical reasons. On the other hand, in the sequel, the proofs will, in general, crucially depend on the locality for technical reasons as we will see later. Thus, some sort of piecewise analytic diffeomorphisms are to be admitted. In a natural way, this leads to stratified diffeomorphisms, because they map semianalytic sets (disjoint unions of analytic submanifolds forming stratifications) into semianalytic sets. Next, we have to take care of the intersection functions. Given some oriented submanifold, say, a hypersurface, we would like to use this orientation to define such a function. However, this might lead to problems again: Using piecewise analytic diffeomorphisms, it may happen that a surface (including its orientation) is kept invariant, but an originally transversally intersecting path may now be mapped to a tangential one. 13 This would contradict the 13 Let M be R 2 and divide M by the two lines x = ±1 into three open parts and the two lines. Now define ϕ on the open strip between these two lines by ϕ(x, y) := (x, y + √ 1 − x 2 ) and let ϕ be the identity otherwise. Of course, ϕ is continuous everywhere and an analytic diffeomorphism on each of these five parts. Nevertheless, the path γ with γ(t) = (t, 0) is transversal w.r.t. x = 1, but ϕ(γ) is tangent to it. 
concept that the intersection function encodes the transversality properties of a surface and its orientation, i.e., is assigned naturally and uniquely to an oriented surface. Of course, in contrast to the previous arguments, this rather is a conceptual demand and not a technical one. Moreover, it can be overcome using a slightly more special kind of piecewise analytic diffeomorphisms, as we will see later. Third, the selection of functions is to be discussed. Since we have argued that mostly analytic (or piecewise analytic) objects are to be used, we could restrict ourselves again to (piecewise) analytic functions (at least for the restrictions to the respective surface). However, although this is possible, we may consider more general classes. In particular, after decomposing a surface into several submanifolds, we may admit functions that are analytic only on these submanifolds, but do not satisfy any continuity condition at their "boundaries". In fact, assume, e.g., that we are given a 2-surface S and divide it by a line S 0 into two pieces S 1 and S 2 plus S 0 (like the interior of a circle is divided by a diameter). We now want to label S on each S i by some analytic function d i . We may take the Weyl operator w 0 for S and d 0 , then w j0 for S j with (d 0 ) −1 , and, finally, w j for S j and d j (j = 1, 2). Now, w•w 10 •w 20 •w 1 •w 2 is the Weyl operator for S with a function whose restriction on each S i is d i . We should remark, that this way one may even define submanifolds with codimension 2 or larger to be (quasi-)surfaces. This, however, brings back the problem that the intersection function is not necessarily given directly by the orientation of the submanifold itself: the transversality between paths and such lower-dimensional submanifolds would, in general, be destroyed already by analytic diffeomorphisms. Thus, one should restrict oneself to hypersurfaces (or at least semianalytic sets of pure codimension 1) and control lower-dimensional surfaces by including labellings of hypersurfaces with functions d that are nontrivial only on these "sub"-surfaces. Or, equivalently, one may give lower-dimensional surfaces orientations that are induced by hypersurfaces containing them. We will exploit this idea. Anyway, after all, it does not seem necessary to impose very strong smoothness restrictions on ∆(S) from the conceptual point of view. Nevertheless, as we will see, there will be some technical difficulties that lead to restrictions. To summarize, in what follows we will always assume to work with "nice" structure data having the following minimal properties: The requirements regarding regularity will be discussed in Subsection 4.3. The precise definitions of stratified objects will be given in Section 6. Note, that whether we consider closed manifolds only or include open ones, is not decided here. The remaining "fine-tuning" will be made if needed. 14 In contrast to Definition 3.16, we consider an intersection function on S with codimM S ≥ 2 to be natural iff it is induced by an embedded hypersurface S ′ that is contained in S, not just in M . Moreover, one can directly extend the definition of natural intersection functions to stratified sets, e.g., using triangulations. However, since, at the end, we are interested mostly in the orientation of genuine submanifolds (possibly with boundary) only, we do not consider this issue in this paper in detail. 
Thus, at the moment, the statement "Σ(S) contains at least the natural intersection functions of S" only refers to such submanifolds S. Weyl Algebra Assume we are working with some arbitrary, but fixed "consistent" structure data. We define invariant vector. Often we write "D-invariant" instead of "diffeomorphism invariant". Analogously, we speak about D-natural representations meaning W ′ -natural representations. Definition 4.6 Let π ′′ be a representation of A Auto on some Hilbert space H. Since 1 ∈ L 2 (A, µ 0 ) is already cyclic for C(X) ⊆ A, and α ϕ (1) equals 1 for all ϕ ∈ D as well as β g (1) does for all g ∈ E, we have The irreducibility of π 0 will be proven separately in Section 5. Regularity One of our goals in this paper is a uniqueness proof for certain representations of A. However, we will only be able to do this for certain regularity conditions. It is now reasonable to presuppose as little of them as possible. In other words, R which encodes the one-parameter subgroups to be mapped to weakly continuous ones, should be chosen as small as possible. As we will see, it will be sufficient to include that all t −→ w t = w S,σ S d(t) with d(t) := e td ∈ ∆(S) for constant d : M −→ g. Of course, more regularity, hence larger R, will not reduce uniqueness, but may even lead to the case that there is no such regular representation at all. Therefore, we are faced with some maximality conditions as well. First of all, we may at most allow for those one-parameter subgroups that map to the Weyl operators given by the structure data. Typically such restrictions are induced by the functions d at our disposal. For instance, let G, M and S be not simply connected, allow ∆(S) to contain continuous functions only, and let d : M −→ G have nontrivial mapping degree. Then, in general, it is not possible to deform d in ∆(S) continuously into the trivial function on G. This shows that it need not be possible to connect any Weyl operator to the identity within the limits of the structure data. Of course, using non-continuous d, it is always possible: Choose at every point x in M some d(x) ∈ g with e d(x) = d(x) and define w t := w S,σ S Ed (t) for all t. But, moreover, even if we might find for each t some allowed d(t) with d(t 1 + t 2 ) = d(t 1 )d(t 2 ), the corresponding maps t −→ (d(t))(x) need not be continuous at all. The reason behind is that the functional equation f (x + y) = f (x) + f (y) has non-continuous, "cloudy" solutions. Then the corresponding one-parameter subgroups of Weyl operators are no longer weakly continuous, as one immediately checks. Therefore, we should restrict ourselves indeed to the functions generated by the Lie algebra functions. We summarize these considerations in After all, we enlarge the structure data above by some subset R of the set of oneparameter subgroups in W. Using Corollary 3.29 and Proposition 3.28, we have for nice enlarged structure data Proposition 4.2 1. π 0 is regular w.r.t. R. 2. π 0 is Λ-regular with Λ given in Proposition 3.28. Irreducibility In this section we are going to prove the irreducibility of A for nice structure data. [15] Additionally, we assume that S contains at least the closed, oriented hypersurfaces of M . Since we do not need diffeomorphisms, there will be no restrictions for D. Note that given the irreducibility of the Weyl algebra of quantum geometry for these structure data, we get it immediately for all larger structure data. 
In fact, since the Weyl algebra cannot shrink if the structure data get larger, the commutant of the Weyl algebra cannot get larger in this case. Since, however, we will see it is already trivial for the assumptions above, the enlarged Weyl algebra is again irreducible. Nice Intersections In this subsection, properties of intersections between graphs and surfaces, together with their implications for certain scalar products are studied. Definition 5.1 Let γ be an edge and let γ be a (possibly trivial) graph. A surface S is called (γ, γ)-nice iff 1. S is naturally oriented; 2. S and (the image of) γ are disjoint; and 3. γ intersects S in precisely one interior point x of γ transversally, such that the orientation of S coincides with the direction of γ. In this case, x is called puncture of S and (γ, γ). Note that it does not matter whether we restrict ourselves to the case of closed surfaces or to that of open ones. Proof If we admit open surfaces S, then the assertion is trivial, since we may always find some neighbourhood of x disjoint to γ, where γ is a straight line. Take for S some sufficiently small hyperplane "orthogonal" to γ and that contains x. Let us, therefore, consider the case of closed surfaces. Roughly speaking, the problem here is that if γ "enters" S at some point, it has to "leave" it somewhere else. Thus, we have to ensure that at only one point this intersection is transversal. For that purpose, we consider some (real) analytic curve c in R 2 that has an inflection point, such that the corresponding tangent t intersects c in precisely one other point y transversally. Such curves exist -take, e.g., an appropriate Cassini curve [39]. As in the case of open surfaces, consider now some neighbourhood of x isomorphic to R n ⊇ R 2 and disjoint with γ, such that x is mapped to y and such that (the image of) γ coincides with t in some sufficiently large neighbourhood of y. Let now S be the rotational surface given by c and, e.g., the x 1 -axis in R 2 ⊆ R n . By construction, S has the required properties. (If the direction of γ and the orientation of S at the puncture do not coincide, simply mirror S at the hyperplane "orthogonal" to γ.) qed Lemma 5.2 Let γ be an edge and let γ be some (possibly trivial) graph, such that γ and the edges in γ intersect each other at most at their end points. Moreover, let S, S 1 and S 2 be (γ, γ)-nice surfaces, such that the corresponding punctures are different. Finally, let T be a gauge-variant spin network function of the form T = (T φ,γ ) m n ⊗ T ′ with T ′ ∈ M γ . Then we have for all g 1 , g 2 ∈ G. Moreover, if φ is abelian 15 , we have w S g (T ) = φ(g 2 ) T = χ φ (g 2 ) T for all g ∈ G. Here, w S g is a shorter notation for w S,σ S dg with σ S given by the natural orientation of S and with d g being the function on M constantly equal g ∈ G. Proof First of all, note that w S g (T ) = w S g ((T φ,γ ) m n ) ⊗ T ′ for all g ∈ G and for all (γ, γ)nice S. Assume now, that t 1 < t 2 , where γ(t j ) is the intersection point of S j and γ. Decompose γ into the three segments γ 1 , γ 0 and γ 2 according to the parameter intervals [0, t 1 ], [t 1 , t 2 ] and [t 2 , 1], respectively. Then we have Consequently, and, analogously, Since γ 1 , γ 0 , γ 2 and γ are independent, we get If t 1 > t 2 , the calculation is completely analogous. The assertion w S g (T ) = φ(g 2 ) T for abelian φ follows directly from the definition of w S g . Recall that every abelian representation is one-dimensional and maps G to U (1) 1. 
qed Irreducibility Proof Theorem 5.3 The Weyl algebra A of quantum geometry is irreducible on L 2 (A, µ 0 ). Proof We are now going to prove the irreducibility of A by verifying that the commutant of A consists of scalars only [12]. Since C(A) ⊆ A, we have A ′ ⊆ C(A) ′ = L ∞ for the commutants [36]. Next, one checks immediately, that w(f )w(ψ) = w(f ψ) for all w ∈ W, f ∈ L ∞ and ψ ∈ L 2 . In other words, Consider additionally some non-trivial gauge-variant spin network function T . It can be written as T = (T φ,γ ) m n ⊗ T ′ with nontrivial φ, where T ′ ∈ M γ is a (possibly trivial) spin network function, such that γ and the edges in γ intersect at most at their end points. By w(f ) = f and w * ∈ W for all w ∈ W, we have T, f = T, w * (f ) = w(T ), f and, therefore, Choose some (γ, γ)-nice surface S by Lemma 5.1. Then we have w S g (T ) = φ(g 2 )T for all g ∈ G, by Lemma 5.2. Consequently, Since G is compact and connected, there is a square root for each element of G. Moreover, by [20], each nonabelian irreducible character has a zero. Hence, there is a g ∈ G with χ φ (g 2 ) = 0. Choose now, by Lemma 5.1, infinitely many (γ, γ)-nice surfaces S i , whose punctures with γ are mutually different. Then, by Lemma 5.2, we have for i = j, due to the choice of g. Since w S i g is unitary, {w S i g (T )} is an orthonormal system. Using , f for all i, j, this implies w S i g (T ), f = 0 and thus T, f = 0. Altogether, we have proven T, f = 0 for all nontrivial gauge-variant spin network functions T . Therefore, f ∈ C 1, hence A ′ = C 1. qed Corollary 5.4 A Diff and A Auto are irreducible. Stratified Diffeomorphisms As we have mentioned in Section 4 and we will see in the proofs, analytic graphomorphisms will not always be sufficient for studying representations of A. A natural extension is stratified analytic isomorphisms. The theory of stratifications we will use here is motivated by [21]. The first definition will be quoted almost literally, however, that of stratified maps is slightly sharpened. Although we will later apply the whole framework to the analytic category, we assume at this point only that we have fixed some smoothness category C p with p ∈ N or p = ∞ or p = ω. Let M and N be C p manifolds. Definition 6.1 Let A be some subset in M . • A stratification M of M is a locally finite, disjoint decomposition of M into connected embedded C p submanifolds M i of M (the so-called strata), such that Definition 6.5 Two stratified sets S 1 and S 2 in M are called (weakly) strata equivalent iff there is a product of localized (weakly) stratified isomorphisms mapping S 1 onto S 2 . They are called oriented-strata equivalent iff there is such a product mapping additionally the orientation of S 1 to that of S 2 . Localized Stratified Diffeomorphisms in Linear Spaces In the sections below, we will have to study the local transformation behaviour of geometric objects in manifolds. To get prepared for this, we will now investigate first the corresponding problems in linear spaces. In particular, we will be able to rotate, scale and translate these objects locally, i.e. by transformations that are the identity outside some bounded region. This guarantees that we may lift the corresponding operations to manifolds. We recall that a q-simplex S in R k with q ≤ k is the closed convex hull of q + 1 points in general position. The corresponding interior of S is called open simplex. Moreover, the (open) faces of S are the (open) simplices spanned by subsets of these q + 1 points. 
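A concrete low-dimensional instance, included only for orientation: in R², the 2-simplex spanned by v₀ = (0,0), v₁ = (1,0) and v₂ = (0,1) is
\[
  S \;=\; \operatorname{conv}\{v_0, v_1, v_2\} \;=\; \{(x,y) \in \mathbb{R}^2 : x \ge 0,\ y \ge 0,\ x + y \le 1\};
\]
its faces are the three vertices, the three 1-simplices conv{v_i, v_j} and S itself, and the corresponding open faces are the associated open simplices. Decomposed into its open faces, S becomes a locally finite, disjoint union of connected embedded submanifolds of R².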
Additionally, we denote by B q r (x), or shortly B r (x), some closed q-dimensional ball in R k with radius r around x. If x is the origin, we simply write B r . We remark that, in this subsection, nice orientations of some simplex or ball S will always mean an orientation induced by that of some hyperplane (i.e. not by some more general hypersurface as for natural orientations) containing S. This implies, e.g., that the nice orientation of a q-simplex S is always induced by some (k − 1)-simplex having S as one of its faces. Finally, let us remark that in most of the statements of this subsection we will use 0 as a base point. It should be clear that all these statements hold analogously if 0 is replaced by any point in R k . Strata Equivalence of Star-Shaped Regions Lemma 6.1 Let k be a positive integer and let U be an open subset of R k not containing 0. Next, let a, b, p : U −→ R be C p -functions, such that both a, p and pa + b are positive on U . Moreover, for every λ > 0, let whenever both λx and x are contained in U . Finally define ρ, ρ inv : U −→ R by Then ρ : U −→ R k defined by is a C p diffeomorphism between U and ρ(U ) and maps (subintervals of) each half-ray R + x into (subintervals of) the same half-ray. Moreover, its inverse is given by Proof ρ is indeed C p , since p never vanishes. Since ρ at a single x is just a positive scalar multiplication, it maps (subintervals of) each half-ray R + x into (subintervals of) the same half-ray. Moreover, ρ is injective and the image of ρ is an open subset of R k . Finally, one checks immediately, that ρ −1 is C p and that it is the inverse of ρ by pa + b > 0. qed Lemma 6.2 Let k be a positive integer. Let S 0 and S 1 be the boundaries of two bounded open regions R 0 and R 1 in R k both containing 0. Assume, moreover, that each R i is star-shaped, that the corresponding Minkowski functional p i for R i is C p and that each S i is an embedded C p submanifold of R k . Then, for all real λ ± and λ 0,± with there are C p mappings ρ + and ρ − with the following properties: 1. ρ ± is a C p diffeomorphism from some open neighbourhood of V ± onto some neighbourhood of W ± . Here, are compact sets with nonempty interior. 2. ρ ± maps S 0 to λ 0,± S 1 ; 3. ρ + and ρ − coincide on S 0 if λ 0,− = λ 0,+ ; 4. ρ ± is the identity on λ ± S 1 ; 5. ρ ± maps subintervals of half-rays to subintervals of the same half-ray. 6. The restrictions of ρ ± to (an appropriate open subset of) any linear subspace of R k are diffeomorphisms into that linear subspace. Corollary 6.3 Given the assumptions of Lemma 6.2, there is a stratified C p diffeomorphism ϕ mapping S 0 to λ 0 S 1 and R 0 to λ 0 R 1 for some λ − ≤ λ 0 ≤ λ + , such that ϕ is the identity inside λ − R 1 and outside λ + R 1 . Moreover, ϕ can be chosen, such that it preserves half-rays and its restrictions to linear subspaces of R k are stratified C p diffeomorphisms again. Proof Simply define ϕ to equal ρ ± on V ± and to be the identity otherwise. Since these mappings coincide on the corresponding overlaps λ − S 1 , S 0 and λ + S 1 , we get the assertion. qed Note that λ ± does only depend on the relative shape of S 0 and S 1 . In particular, λ ± need not be changed if both S 0 and S 1 are scaled by the same factor. • Let x ∈ V − , i.e. p 0 (x) ≤ 1 and p 1 (x) ≥ λ − . From the lines above, we see that this implies (p 1 a − + b − )(x) ≥ 0, the equality holding iff p 0 (x) = 1 and p 1 (x) = λ − . This, however, is impossible, since q(x) would be equal λ − < inf V q. 
Therefore, Lemma 6.1 now shows that ρ ± is a C p diffeomorphism on that neighbourhood. By the previous items we see that • The corresponding properties of ρ + are proven completely analogously. • By intersecting R i and S i with linear subspaces we get C p boundaries with C p Minkowski functionals again. The remaining statements are now clear. qed Scaling To study geometric objects in charts, it may be necessary to first shrink them to have enough "space". That this is (almost) always possible using stratified diffeomorphisms, guarantees the following Lemma 6.4 Let k be a positive integer and let R be a bounded star-shaped open region in R k containing 0 and having a C p differentiable Minkowski functional p. Moreover, assume that the boundary S of R is an embedded C p submanifold of R k . Then for all λ > 0 and all ε > 0, there is a stratified C p isomorphism ϕ preserving half-rays, such that ϕ = λ id on R and ϕ = id outside (1 + ε) max(λ, 1)R. One now immediately checks that gives the desired map. 18 For the homotopy property define ϕ t as above with tX instead of X. Then ϕ 1 = ϕ and ϕ 0 = id. qed Immediately from the proof, we get with the above assumptions Corollary 6.6 Let k be a nonnegative integer and let ε > 0. Moreover, let γ α be the straight line in R 2 connecting (− cos α, − sin α) and (cos α, sin α). Translation Lemma 6.7 Let k be some positive integer. Let γ be some edge in R k and U be some neighbourhood of γ. Choose r > 0, such that the balls with radius r centered at γ(0) and γ(1), respectively, are contained in U . Then there is a finite product of stratified C p diffeomorphisms of R k being the identity outside U and the translation by γ(1) − γ(0) on B r (γ(0)). Proof We only give the idea of the proof. The technical details are similar to that for the preceding statements. Moreover, in Lemma C.1 we will give a proof for a more specific type of translation. Here, we cover γ by (non-trivial) balls. By compactness, there is some r ′ , such that finitely many balls with radius r ′ centered at points of γ will cover γ and such that the convex hull of "neighbouring" balls is contained in U . The idea now is to first shrink B r (γ(0)) to B r ′ (γ(0)), then move parallelly this ball through the convex hulls of neighbouring balls and finally blow it up to its original size. All these operations are possible by the statements above without moving any point outside U . qed Strata Equivalence of Simplices and Balls Let us now show that all q-simplices are not only isomorphic as simplices themselves, but can also be mapped into each other by localized stratified C p diffeomorphisms. Moreover, they are equivalent to q-dimensional balls. Proposition 6.8 Let q ≤ k be two positive integers. Then all q-simplices and all q-dimensional balls in R k are strata equivalent. For this, we first show that each q-simplex can be mapped to a q-dimensional ball. Lemma 6.9 Let q ≤ k be two positive integers. Moreover, let V := {v 0 , . . . , v q } ⊆ R k contain q + 1 points in general position, such that 0 is contained in the interior of the q-simplex R V spanned by V . Finally, fix some ε > 0 and some r > 0, such that R V is contained completely in the interior of B r . Then there is a stratified C p diffeomorphism ϕ, being the identity outside of Proof Choose some set V ′ = {v ′ 0 , . . . , v ′ k−q } ⊆ R k of k − q + 1 points in general position, such that its span is complementary to that of V and such that the (k − q)-simplex spanned by W contains 0 in its interior and is contained in int B r . 
Define for every 0 ≤ i ≤ q and 0 ≤ j ≤ k − q the set }) now containing k + 1 points in general position, hence each defining a k-simplex R ij . These simplices form a complex, i.e., in particular, they share at most lowerdimensional faces. Let R 0 be the union of all these (k − q + 1)(q + 1) simplices. Its boundary is the union of the simplices V 0 ij spanned by V ij \ {0}. Let us now invoke Corollary 6.3. First of all, observe that the statement there can be extended directly to the case that R 0 is formed by a finite number of cones each having tip at 0 and each defined by k-simplices, such that these cones fill R k completely and share at most the boundaries with each other. Of course, the requirements for S 0 have to be relaxed accordingly. We refrained from explicitly giving this form of Corollary 6.3 (and Lemma 6.2, respectively), since it would have made the proof even more technical without introducing new ideas. One simply has to construct the stratified diffeomorphism in the more general case for every cone (more precisely, some open appropriate neighbourhood of it) and then use that these mappings fit together at the boundaries. This however, follows from the coincidence of the Minkowski functionals at these boundaries, the construction of the maps in the proofs above and the invariance of half-rays. Coming back to the present proof, define R 1 to be B r . Then, by R 0 ⊆ B r , the corresponding Minkowski functionals fulfill p 1 ≤ p 0 , and we may choose λ + = 1+ε > 1. This means that, by Corollary 6.3, there is a stratified C p diffeomorphism ϕ being the identity outside λ + B r , mapping R 0 to R 1 and ∂R 0 to ∂R 1 . Now the assertion follows, since ϕ preserves linear subspaces. Therefore, R V (being the intersection of R 0 with span R V ) is mapped to B r ∩ span R V being a q-dimensional ball. qed Proof Proposition 6.8 Let two q-simplices be given. Using Lemma 6.7, translate both, such that they contain 0 in their interior. Then each of them is strata equivalent to some q-dimensional sphere in R k , by Lemma 6.9. Shrinking these balls, if necessary, we make them of identical radius. Finally, by Lemma 6.5, we may find some localized stratified C p diffeomorphism rotating one ball into the other. Hence, these two q-simplices are strata equivalent to a (hence, any) q-dimensional ball. qed Now we are going to mirror simplices and balls into each other. Proposition 6.10 Let q < k be two non-negative integers. Then every q-simplex and every q-dimensional ball in R k having a nice orientation, is strata equivalent to itself having inverse orientation. Proof First assume that q = k − 1 and consider some q-dimensional ball B around the origin. Choose X ∈ so(k), such that X is zero on some (k − 2)-dimensional linear subspace V of span R B and generates a rotation in the two-dimensional complement in R k spanned by the normal of V in span R B and the normal of span R B in R k . In particular, it generates some map A := e tX ∈ SO(k), being minus the identity on this two-dimensional space for some t. Since only one of its "dimensions" belongs to B, the rotation A inverts the orientation of B. Now, Lemma 6.5 guarantees the existence of some stratified diffeomorphism inverting the orientation of B. To prove the statement for q = k − 1 and a given q-simplex S, we map it to some q-dimensional ball B, invert its rotation and take the inverse of the first mapping to get S back. Of course, the orientation of S has been flipped. Next, let q be arbitrary and consider a q-simplex S. 
Since we work with nice orientations only, there is some (k − 1)-simplex S ′ in M having S as one of its faces and inducing its orientation. Since we may invert the orientation of S ′ , we also may invert that of S by localized stratified diffeomorphisms. To prove the remaining case of q-balls for arbitrary q, reuse the argumentation above for q = k − 1 and reduce to the case of q-simplices. qed Corollary 6.11 Let q < k be two non-negative integers. Then all q-simplices and all q-balls in R k are oriented-strata equivalent, provided they have nice orientations. Proof Assume, first of all, that S is a q-simplex or a q-ball containing the origin, and let S be given two nice orientations. This means, there are linear subspaces T 1 and T 2 inducing these orientations by their own nice ones. There is now some A ∈ SO(k) leaving the q-plane spanned by S invariant and mapping T 1 onto T 2 . Hence A maps the one orientation of S to either the other one or the inverse of it. Hence, by Lemma 6.5, there is some localized stratified isomorphism mapping S onto itself and transforming the orientations by A. By adding, if necessary, some localized stratified isomorphism inverting the orientation as given by Proposition 6.10, we get such a transformation mapping the two orientations of S onto each other. Let now S i be a q-simplex or a q-ball for i = 1, 2. Then we may map them by localized stratified diffeomorphisms to some q-simplex S containing the origin. Since, as one checks immediately, these mappings can be chosen, such that the corresponding orientations of S are nice 20 , there is a localized stratified diffeomorphism mapping one orientation of S to the other, by the arguments above. qed Without explicitly stating the proof, we have by arguments as in the proposition above: Corollary 6.12 For every nicely oriented 1-dimensional ball S in R k with k ≥ 3, there are finitely many localized stratified isomorphisms, whose product is the identity on S, but inverts the orientation of S. Finally, we are looking for objects that can be divided into two parts, such that the original one is, on the one hand, strata equivalent to both of them and, on the other hand, is the disjoint union of them. Moreover, the orientation should be preserved. For example, consider an open 2-simplex, i.e. a full open triangle. Intersecting it by a line through one corner and some point of the opposite edge, we get two triangles. If we take their interiors, then they are strata equivalent to the original triangle, however not a decomposition of it -simply the border line is missing. One the other hand, if we were taking it to just one of the subtriangles, then they are no longer strata equivalent. The solution of this problem is to consider at the beginning an open triangle plus one of its open edges. Then, as above, we may divide the triangle by a line, now through some boundary point of the added edge. Now it is clear that the triangle plus edge is divided into twice a triangle plus edge and all three objects are strata equivalent. The generalization to higher dimensions is straightforward, but more technical: . . , v k−1 } into two parts R 0 and R 1 , whereas the intersection of this plane with R is added to R 0 , and R 1 is that "half" whose closure contains v 1 . We now may decompose each Let now ϕ i be products of localized stratified isomorphisms that leave v j with j ≥ 2 and v i invariant, map v 1−i to v and map the simplex spanned by {v 0 , . . . 
, v k−1 } onto that spanned by It is easy to check that ϕ i (S ∪ F ) = S i ∪F i and that ϕ i may be chosen to have the desired orientation properties. qed Localized Stratified Diffeomorphisms in Manifolds We are now going to transfer the results of the previous subsection to the case of general C p manifolds M . Definition 6.6 A subset S in M is called (nicely oriented) q-simplex in the chart (U, κ) iff S ⊆ U is mapped by κ to a q-simplex in R dim M (and the orientation of S is induced by one of the natural orientations of some hyperplane in κ(U )). Analogously, we may define q-balls. The definition of faces of q-simplices should be clear as well. We will speak about q-simplices and q-balls in general iff there is a chart of M , which they are q-simplices or q-balls in. Note that, at least locally, every simplex or ball S having a natural orientation is nicely oriented, i.e., it is induced by some hypersurface being (an open set of) a hyperplane in some chart. In fact, let N be some embedded submanifold in M containing S as an embedded submanifold and inducing its orientation. Then we may find some chart mapping N locally into some hyperplane in the local chart image of M and mapping S locally into some plane in the local image of N . Proposition 6.14 The statements of Propositions 6.8, 6.10 and 6.13 as well as of Corollaries 6.11 and 6.12 remain valid if we replace R k by M and assume all q-simplices and q-balls to be in one and the same connected chart and, moreover, nicely oriented there. Proof The only point to be shown is the case that the localized stratified isomorphism ϕ needs more space in R dim M than provided by the chart denoted by (U, κ). If this is the case, first shrink any occurring object S (being a ball or a simplex) to a sufficiently small size. Indeed, since simplices and balls are assumed to be closed and the chart is open, S -magnified (in the chart) by 1+ ε w.r.t. some interior point -is again in U for small ε. Therefore, the scaling lemma (Lemma 6.4) is applicable in order to shrink S by any factor λ ≤ 1. Now it may be necessary to move S to some other place in U inside this chart. To do this, we choose some path that S is moved along. By compactness and continuity reasons, there is a finite number of open k-balls in U covering this path. We now assume that λ is chosen small enough that the accordingly shrunk S can be transferred between any non-disjoint two of these balls by means of Lemma 6.7. This way it can be (parallelly) shifted between any two points in the chart. Using these ingredients of shrinking and shifting, it is now easy to generate the desired localized stratified isomorphisms by means of their counterparts in R dim M . qed Application to the Analytic Category Let us now come back to the analytic case, i.e. p = ω. Recall [21] that a subset A of an analytic manifold M is called semianalytic iff M can be covered by some open sets U ι , such that each U ι ∩ A is a union of connected components of a set f −1 1 (0) \ f −1 2 (0), for f 1 and f 2 belonging to some finite family of real-valued functions analytic in U ι . Complements, finite intersections and finite unions of semianalytic sets are semianalytic again [10]. Moreover, it can be shown [21,28] that every semianalytic set admits a semianalytic stratification, i.e. a stratification consisting of semianalytic strata only. 
21 Proof For every semianalytic, hence stratifiable set A in M and every nonnegative integer k, we choose some semianalytic stratification N (A) of A and let N k (A) contain precisely the k-dimensional strata in N (A) contained in A. Moreover, let n be the dimension of M . Since the intersection of any two semianalytic sets is semianalytic, we may define N n,k := , N ′ n := k<n N n,k , N n := N n,n . This means, N n,k contains the k-dimensional strata given by all the intersections of elements in M 1 and M 2 . Since the boundary of every semianalytic set is semianalytic again [29], hence stratifiable, we may define successively for decreasing i Finally, we set M := n i=0 N i . One immediately checks that M is a stratification. Moreover, by construction, it is finer than M 1 and M 2 . qed Corollary 6.16 Every (weakly) stratified isomorphism is a graphomorphism. Proof Let ϕ be a weakly stratified isomorphism on M mapping M 1 to M 2 . Moreover, let γ be some analytic edge. Since im γ is a semianalytic set, there is some stratification M 3 of im γ. Choosing some stratification M finer than M 1 and M 3 , the image of γ is a union of strata in M; even a finite one, since im γ is compact. Refining M 2 w.r.t. ϕ and w.r.t. the refinement of M 1 to M, we see that ϕ maps im γ into a finite union of strata. This means that ϕ(γ) is piecewise analytic. The assertion now follows from Lemma 3.30. The deeper reason behind the investigation of simplices above is the fact that every manifold can be triangulized, this means, roughly speaking, it is isomorphic to some union of (open) simplices. Originally known for nonanalytic manifolds (see, e.g., [37,38]), this result has been extended later to semianalytic sets in analytic manifolds (see, e.g., [29]). Here, however, we need a notion somewhat stronger than the usual one. In fact, recall that all the results above on (closed) simplices require that they are contained in some chart in M . Therefore we first quote the definition of a triangulation from [29] (dropping, however, some condition) and then extend this notion to the case we need. Definition 6.7 Let {M i } be a locally finite collection of semianalytic subsets of M . • A triangulation of {M i } is a simplicial complex 22 K together with a homeomorphism f : |K| −→ M , such that for every σ ∈ K 1. f (σ) is an embedded analytic submanifold of M ; is some open chart in M containing the closure of f (σ) and mapping it to a simplex in that chart. If each M i is given a natural orientation, then we additionally require f to map this orientation to a nice one on each of these simplices. Proposition 6.19 Every semianalytic set is triangulizable. [29] One immediately checks that (nicely oriented) q-balls and q-simplices are widely triangulizable. What remains unsolved is Question 1 Is every semianalytic set widely triangulizable? Until now, we did not find any proof for either answer in the literature 23 nor are we able to decide it ourselves. There may be some hints for this answer to be affirmative. In fact, as proven by Ferrarotti (cf. [31]), there is a so-called strong triangulation (K, f ) of any analytic submanifold M of R n . This means that, firstly, for any σ ∈ K there is some neighbourhood U of σ in the Euclidean space containing K, and some analytic F σ : U −→ R n with F σ = f σ and, secondly, for every vertex v in K, the derivative df v : St(v, K) −→ R n is injective. But, nevertheless, our case remains open. 
Two Types of Localized Stratified Diffeomorphisms In this subsection 24 we will investigate in detail the types of stratified diffeomorphisms to be used for quantum geometry. Firstly, we present a more elaborate version of winding diffeomorphisms introduced originally by Sahlmann [33]. The aim is to produce stratified diffeomorphisms that wind an edge such that it has a certain number of punctures at some given surface. This would be possible even in the analytic category if only the pure number of punctures would count and the precise parameter values of the edge at the punctures would not matter. In fact, then one can use the approximation theorems for smooth mappings by analytic ones [22]. If, however, the precise location of the punctures becomes relevant, then probably this is no longer sufficient. Therefore, -nevertheless reusing the main idea by Sahlmann -we present here a more general statement in the stratified analytic category. Secondly, we study how one can transform a given graph into a very large set of independent graphs, but minimally modifying other geometric objects. There will be two cases 22 A simplicial complex is a locally finite collection K of disjoint open simplices in some finite-dimensional linear space, such that each face of any simplex belongs to K again. Moreover, |K| denotes the union of all these simplices. 23 Nor were we able to find our definition in the literature. 24 Note that all results of this subsection remain true in arbitrary smoothness categories, provided one enlarges the definition of intersection functions a little bit. Actually, they are defined only for quasi-surfaces; but only in the analytic category, hypersurfaces are always quasi-surfaces. The reason for that was that not every γ can be S-admissibly decomposed. Here, however, we might study the intersection behaviour of certain paths with S. This, of course, is possible in the general case of smoothness as well. depending whether a graph is contained in (the closure of) some surface or not. If not, we may leave the surface invariant pointwise. If, on the other hand, the graph is (partially) contained in the interior of the surface, then we may, at least, slightly transform the surface into itself getting an infinite number of different graphs. This, of course, is possible, only if this surface provided enough space, i.e. is at least two-dimensional. Winding Diffeomorphisms Proposition 6.20 Let dim M ≥ 3. Let γ be a graph, let γ ∈ γ be one of its edges and set γ ′ := γ \ {γ}. Moreover, let G be a finite subset in G. Then there is • some nicely oriented, open, embedded, analytic hypersurface S, disjoint to im γ, such that S and ∂S have a finite wide triangulation; • some analytic function d : S −→ G and • some s ∈ Z, such that for any sequences (g j ) ⊆ G and (τ j ) ⊆ I (with τ j < τ j ′ for j < j ′ ) having even length, there is a stratified (analytic) diffeomorphism ϕ with the following properties: 1. S and im ϕ(γ ′ ) are disjoint; 2. d(ϕ(γ(τ j ))) = g j for all j; 3. ϕ(γ) intersects S completely transversally; 4. {ϕ(γ(τ j ))} j is the set of ϕ(γ)-punctures of S; Proof First of all, since γ is embedded into M , there is some neighbourhood of some subpath γ| I of γ and some cubic chart, such that γ(I) is exactly the part of im γ in that chart. Any diffeomorphism constructed below will be constant outside this chart. 
Therefore, to simplify notation, we may restrict ourselves to the case that M is R n coordinatized by x ∈ R, y ∈ R and z ∈ R n−2 (where z is the first coordinate of z) and that γ(I) is the intersection of the chart and the x-axis. Next, for the surface S we choose some hypersurface y = a parallel to the x-axis for some a > 0. Here, a is selected under the assumption that y = 2a+2ε 0 and y = −2ε 0 are still hypersurfaces in the chart for some ε 0 > 0 (of course, only that part of the hypersurfaces whose xand z-values are admitted in the chosen cubic chart). Finally, choose some analytic d : S −→ G depending on z only, such that every element of G occurs somewhere (in a sufficiently close) neighbourhood of z = 0. 25 Let now finitely many (mutually different) points τ j ∈ I be given. The γ-images of these points will turn into the intersection points of the transformed γ with S. Fix, additionally, some small ε < ε 0 , such that the distance of any two of the marked points in I is greater than 2ε (and that, if necessary, each of the ε-neighbourhoods of the τ k are both in I and in the fixed chart). Now, in the first step we move γ the following way inside the x-y-plane: On the one hand, each segment of γ outside the ε-neighbourhoods of the marked points is again parallel to the x-axis, now, however, alternately with y = 0 and y = 2a. The ε-neighbourhoods, on the other hand, are the straight lines connecting these alternatingly lifted and unlifted segments. This way, the center of these neighbourhoods, i.e. the marked points themselves are mapped to half-way between the levels y = 0 and y = 2a. In other words, precisely the marked points are the intersection points of the transformed γ with S. Note, in particular, that this transformation of γ can be done by a stratified isomorphism that does only change the y-coordinates of any point in M , but neither the x-nor the z-values (see Lemma C.1). Moreover, note that we have tacitly used that there is an even number of τ k to end up at level y = 0 again after moving the largest τ k . This finishes the first step. We are now left with the problem to find the intersection points having the correct values of d in the second step. Nicely, the idea of the first step can be used again. To see this, assume that n = 3 and look at the scene from above the (x, z)-plane. Since we only changed the y-values of γ, no change can be seen from this perspective. Using our assignment of d to S, we move the ε-neighbourhoods of the marked points of (the transformed) γ. Slightly more generally than in the first step, however, we let the "bumps" that these neighbourhoods are mapped to, return to the original line before this neighbourhood ends. More precisely, the segments outside these neighbourhoods are not shifted again, and the "bumps" map each τ k to the correct "level" (i.e. z-coordinate) in order to get mapped to the point with the correct value g k of d. Note, that here we only need to change the z-coordinates, but leave, in particular, the y-coordinates unchanged. This implies that the parameter values where γ intersects S after having been transformed by both steps, are precisely those of the γ after the first transformation. If n > 3, this step is completely analogous. To summarize: It is clear that the constructed isomorphism has all the desired properties and that s can be chosen obviously. 
Originally, we looked for a stratified isomorphism mapping γ, such that its transform intersects S precisely for the parameter values τ k and at points having the desired values of d. By the arguments above, we reduced this problem to the existence of a diffeomorphism in R n as indicated in Figure 1 that, in particular, does not move any point outside the given square (times some ε-ball in the remaining n − 2 dimensions not drawn there). The existence of such a diffeomorphism, however, is guaranteed by Lemma C.1. This furnishes the present proof. qed The crucial idea in the proof of Proposition 6.20 was to define for each element in G some domain on the surface S, such that for a given sequence in G, the transformed graph punctures S at the correct points and in the correct ordering, i.e., leading to the correct sequence of values for d. We constructed above a single surface with an analytic d on it. However, we even might use constant d, if we admit S to consist of more than one connected component. In other words, for any finite number we may find such a number of hypersurfaces S i , such that γ may always be transformed to puncture these different surfaces in an arbitrarily given ordering. More precisely, choose for S i some open (cubic) subspace in S, and let the only restriction to S i be that its z-coordinate is in some sufficiently small interval I i . We may assume that the closures of these intervals are disjoint. Moreover, each S i is a hypersurface of M . Reusing the argumentation of the proof of Proposition 6.20, we have shown Proposition 6.21 Let dim M ≥ 3. Let γ be a graph, let γ ∈ γ be one of its edges and set γ ′ := γ \ {γ}. Moreover, let K be a positive integer. Then there is • some nicely oriented, open, embedded analytic hypersurfaces S i with i = 1, . . . , K, such that each S i and each ∂S i has a finite wide triangulation, each S i is disjoint to im γ, and all S i are mutually disjoint; and • some s ∈ Z, such that for any even integer J > 0, any function l : [1, J] −→ [1, K] and any sequence (τ j ) ⊆ I (with τ j > τ j ′ for j > j ′ ) having length J, there is a stratified analytic isomorphism ϕ with the following properties: 1. Generation of Independent Paths Transferred to the case of manifolds, Corollary 6.6 yields Proposition 6.22 Let M be some n-dimensional manifold with n ≥ 2 and let S ⊆ M . Assume that S and ∂S are connected embedded submanifolds in M (without boundary) and that S is an embedded submanifold in M having boundary ∂S. Moreover, let γ be some nontrivial graph in M , such that the image of γ is neither equal to S, ∂S nor S. Then there is a nontrivial path γ, a neighbourhood U of some m ∈ im γ in M and infinitely many stratified diffeomorphisms ϕ i of M , such that • γ is the only edge in γ not disjoint to U ; • ϕ i is the identity outside U ; • ϕ i leaves the set S invariant; If we additionally assume, that S has one of its natural orientations, then each ϕ i may be chosen such that, additionally, it leaves the orientation of S invariant. Proof 1. im γ is not contained in S. Let γ be an edge of γ not contained in S. Choose some interior point m of γ outside S, and let U be some open neighbourhood of m disjoint to S and disjoint to all other edges in γ except for γ. Choose some chart whose closure is contained in U and whose intersection with (the image of) γ is mapped to a straight line with m mapped to the origin. 
Corollary 6.6 now gives a collection ϕ α of stratified diffeomorphisms being the identity outside the chart that, therefore, may be extended to stratified diffeomorphisms of M that are the identity outside, at least, U . Since each ϕ α (γ) with α ∈ [0, π) has some interior point not passed by any other ϕ α ′ (γ), these paths are independent. The invariance of S is trivial as well as the fact that the orientation of S is preserved and that {ϕ α (γ)} α is a hyph. 2. im γ is contained in ∂S. In particular, this implies that ∂S is at least one-dimensional. In fact, otherwise ∂S would be a point and γ trivial. In the case that dim S < n − 1 and that we consider orientations, let, moreover, S ′ ⊇ S be some (n − 1)-dimensional embedded submanifold of M inducing the orientation of S. a) dim ∂S ≥ 2. Choose some interior point m of some edge γ in γ and some open neighbourhood U of m whose closure is disjoint to all edges in γ except for γ. By assumption, there is some chart whose closure is contained in U , such that the intersection of the chart • with m is mapped to the origin. Since dim S > dim ∂S ≥ 2, Corollary 6.6 provides us, analogously to the first case, with stratified diffeomorphisms having the desired properties. In particular, observe that, although they are not the identity neither on S nor on ∂S, they leave both S and ∂S (and, if applicable, S ′ ) invariant. The orientation of S is obviously preserved for dim S = n. For dim S = n−1, use the fact that t −→ ϕ tα is a homotopy over diffeomorphisms having the properties above, whence the natural orientation of S is preserved by each diffeomorphism. If dim S < n − 1, then that the natural orientations of S ′ are preserved as above, whence the induced orientations on S are so as well. b) dim ∂S = 1. Since ∂S is one-dimensional, it is isomorphic to either a line or a circle. Moreover, ∂∂S = ∅. Since the compact set im γ does not equal ∂S, there is some point m ∈ ∂(im γ) ⊆ ∂S. Moreover, there is a (unique) edge, say γ, having m as one of its endpoints. We may assume γ(0) = m and choose some open neighbourhood U of m whose closure is disjoint to all edges in γ except for γ. Now, we select some chart whose closure is contained in U , such that the intersection of the chart • with im γ equals [0, τ ) ⊆ R, and • with m is mapped to the origin, and such that B τ ⊆ U for some τ > 0. By Lemma 6.4, for α ∈ [0, 1 3 τ ), there are now stratified diffeomorphisms ϕ α , taking − 1 3 τ ∈ R ⊆ R n as the origin, such that (−τ, +τ ) is mapped onto itself, such that ϕ α [0, τ ]) = [α, τ ] and such that ϕ α is the identity outside U . In particular, each ϕ α leaves both S, ∂S and S ′ invariant. Choosing some monotonously decreasing, infinite sequence α i → 0, we get a hyph {ϕ α i (γ)} i∈N , since (α i , α i−1 ) is passed by no ϕ α j (γ) with j < i. The preservation of orientation by ϕ α i is shown analogously to the case above. 3. im γ is contained in S, but not in ∂S. As above, this implies that S is at least one-dimensional. Again, for dim S < n−1 and if we consider orientations, we let S ′ ⊇ S be some (n − 1)-dimensional embedded submanifold of M inducing the orientation of S. a) dim S ≥ 2. Choose in γ some edge γ not fully contained in ∂S. We now may find some interior point m of γ being in the interior of S and fix some open neighbourhood U of m, whose closure is disjoint to ∂S and disjoint to all edges in γ except for γ. 
By assumption, there is some chart whose closure is contained in U , such that the intersection of the chart • with m is mapped to the origin. As above, we may find stratified diffeomorphisms of the desired type, by Corollary 6.6. b) dim S = 1. Since S is one-dimensional, it is isomorphic to either a line or a circle. Hence ∂S consists of at most two points. Consequently, S is isomorphic either to a circle, a line, a ray or a closed interval. Since im γ ⊂ S is compact, there is some point m ∈ ∂(im γ) ∩ S. Picking, as above, the (unique) edge γ having m as one of its endpoints, we now may find some open neighbourhood U of m whose closure is disjoint to ∂S and to all edges in γ except for γ. Again, we select some chart whose closure is contained in U , such that the intersection of the chart • with S ′ is (if applicable) some open subset of R n−1 , • with S is some open subset of R (if applicable, in R n−1 ), • with im γ equals [0, τ ) ⊆ R, and • with m is mapped to the origin, and such that B τ ⊆ U for some τ > 0. Again, as in the case im γ ⊆ ∂S and dim ∂S = 1, we find the desired stratified diffeomorphisms by Lemma 6.4. qed Proposition 6.23 Let M be some n-dimensional manifold with n ≥ 2 and let S ⊆ M . Assume that S and ∂S are connected embedded submanifolds in M (without boundary) and that S is an embedded submanifold in M having boundary ∂S. Moreover, let either S or ∂S be an embedded 1-circle S 1 . Finally, let γ be a graph whose image is this S 1 and let m be some vertex of γ. Then there is a neighbourhood U of m in M , infinitely many different m i in S 1 ∩ U and for each i a stratified diffeomorphism ϕ i of M with the following properties: • ϕ i is the identity outside U ; • ϕ i leaves the set S invariant; • ϕ i maps m to m i ; • ϕ i is the identity on all edges of γ not adjacent to m. If we additionally assume, that S has one of its natural orientations, then each ϕ m ′ may be chosen such that, additionally, it leaves the orientation of S invariant. Proof This proof is very analogous to that of Proposition 6.23. Therefore, we only present its main idea. First choose some U , small enough to intersect im γ only at its edges adjacent to m and such that U ∩ S is a domain of a straight line (if im γ = S) or of a half plane (if im γ = ∂S). Now choose some point near m as the origin for a local scaling as in Lemma 6.4. This way, we may move m to every other sufficiently near-by point, leaving ∂S or S, respectively, invariant, without moving any point outside U . qed Representations of the Weyl Algebra Now we are prepared to give a rigorous proof of (a stronger version of the) uniqueness theorem claimed by Sahlmann and Thiemann [35]. As well, we will proceed in two steps: First we use regularity and diffeomorphism invariance to show that the first-step decomposition contains the Ashtekar-Lewandowski measure. This will follow from the fact that the diffeomorphisms split the Weyl operators, i.e., the weak convergence of Weyl operators is not uniformly on states related by diffeomorphisms. Second, using diffeomorphism invariance again, we show that each Weyl operator is a scalar at this component. This enables us to use the naturality of the action of diffeomorphisms in order to prove that each Weyl operator is even a unit there. Cyclicity will give the proof. At the end, we discuss the technical assumptions made in the proofs. Splitting Property As before, we assume to be given some nice enlarged structure data. 
Moreover, we restrict ourselves to the case that S and D contain at least those hypersurfaces and stratified isomorphisms, respectively, that are necessary to keep Proposition 6.21 valid. In other words, one possibility is to choose S to contain at least all of these "cubic" hypersurfaces and D to contain at least the stratified isomorphisms described in that Subsubsection 6.4.1. Throughout the whole subsection, let π ′ be some representation of A Diff on H and denote by π := π ′ | A the corresponding representation of A. Additionally, we require M to have at least dimension 3. Recall that regularity always means regularity w.r.t. R, whereas R is taken from the nice enlarged structure data. Corollary 7.2 Let π be regular, and let there exist a (cyclic) D-invariant vector in H. Then there is a first-step decomposition of π ′ , such that µ ν 0 is the Ashtekar-Lewandowski measure µ 0 for some ν 0 and 1 ν 0 is (cyclic and) D-invariant. Proof According to Lemma 2.3 and the agreements thereafter, we may find some ν 0 , such that 1 ν 0 is D-invariant (and cyclic). Now use the proposition above. qed Note that, as mentioned earlier, we do not distinguish between the D-invariances on equivalent representations. More precisely, we should say in the corollary above: There is an isomorphism U : H −→ H ′ with H ′ = ν∈N L 2 (A, µ ν ) for certain measures µ ν on A, such that U • π ′ | C(X) • U −1 is cyclic on each L 2 (A, µ ν ) with cyclic vector 1 ν ; moreover, µ ν 0 equals µ 0 for some ν 0 ∈ N and U π ′ (α ϕ )U −1 1 ν 0 = 1 ν 0 for all ϕ ∈ D. Before we will be able to prove the proposition above, we have to provide two estimates. Lemma 7.3 Let T ∈ M γ be a gauge-variant spin network state, and let φ be some representation occurring in T . Denote the Casimir eigenvalue w.r.t. φ by λ φ and set n := dim g. Finally, define η : R + −→ R + according to Lemma B.2. Then there is a one-parameter group w t of Weyl operators, such that, for each t 0 > 0 and each even J ∈ N + , there are (2n) J diffeomorphisms ϕ ̺ , such that for all |t| < t 0 . Proof Fix some edge γ ∈ γ, such that φ is the representation carried by γ in T . Let, according to Lemma B.2, {X i } n i=1 be a basis of the Lie algebra g of G, such that − 1 n i φ(X i )φ(X i ) is (up to the prefactor) the Casimir operator φ. Define with X i+n := −X i . According to Proposition 6.21, choose some interval I ⊆ [0, 1], for each i = 1, . . . , 2n some appropriate surface S i disjoint to im γ and some s ∈ Z. Moreover, let d i : M −→ g be the constant function of value 1 2 X i , and fix some strictly increasing sequence (τ j ) j∈N + ⊆ I. We are now going to consider the one-parameter group In fact, this is a one-parameter group: All the S i are disjoint, whence w i,t and w i ′ ,t ′ commute by Lemma 3.26. 27 Fix now some positive even integer J and some positive t 0 . By the choice of S i and of (τ j ), for each ̺ : [1, J] −→ [1, 2n] there is a diffeomorphism ϕ ̺ ∈ D with the properties described in Proposition 6.21. In particular, since ϕ ̺ (γ) intersects S := i S i completely transversally, the minimal S-admissible decomposition of ϕ ̺ (γ) contains S-external edges only. More explicitly, it equals ϕ ̺ (γ 0 ) · · · ϕ ̺ (γ J ) with γ 0 = γ| [0,τ 1 ] , γ j = γ| [τ j ,τ j+1 ] and γ J = γ| [τ J ,1] . By ϕ ̺ (γ(τ j )) ∈ S ̺(j) for all j, we see that ϕ ̺ (γ j ) starts in S ̺(j) and ends in S ̺(j+1) (with the obvious exceptions for j = 0 and j = J). Since, moreover, by construction, σ + (S ̺(j) , ϕ ̺ (γ j−1 )) = (−1) j+s = σ − (S ̺(j) , ϕ ̺ (γ j )) for j = 1, . . . 
, J, we get 27 If we choose some E(t) : M −→ G with E(t) = Ed i (t) on each Si and define σS to be the joint intersection function of S1, . . . , S2n, we get wt = w S,σ S E(t) . Recall that we assumed that R contains not only the "genuine" subgroups in W, but also the finite products of such subgroups, provided they mutually commute. Therefore, it is not important that E(t) is possibly not included in ∆(S). we have The proof of the proposition will use several steps we are now going to write down in separate lemmata. For this, throughout the whole section, we will assume that π ′ is a D-natural representation of A Diff having some 1 ν 0 as a D-invariant vector. Moreover, µ ν 0 equal µ 0 . Finally, as usual, we set π := π ′ | A . Lemma 7.6 Let S 1 and S 2 be elements in S having orientations σ S 1 and σ S 2 . Assume that they are oriented-strata equivalent. Finally, let g ∈ G be some element and d i : S i −→ G for i = 1, 2 be the constant function with value g. Then w is a π ν 0 -unit (π ν 0 -scalar). Proof Let ϕ be a product of localized stratified isomorphisms mapping S 1 onto S 2 as well as their orientations. Then Now, the assertion follows from Corollary 2.10. qed Lemma 7.7 Let S some subset of M , such that S and ∂S are connected embedded submanifolds in M (without boundary) and that S is an embedded submanifold in M having boundary ∂S. Moreover, assume that S has one of its natural orientations. Finally, let w = w S,σ S d for some constant d ∈ ∆(S). Then P ν 0 π(w)1 ν 0 ∈ H ν 0 ≡ L 2 (A, µ 0 ) is orthogonal to all non-trivial gaugevariant spin network states that are not based on an edge γ whose image equals S, ∂S or S. Recall that no edge of a gauge-variant spin network is labelled with the trivial representation. Proof Let T be a gauge-variant spin network state in M γ . There are two main cases: • im γ neither equals S, ∂S nor S. By Proposition 6.22, there is an infinite number of localized stratified diffeomorphisms ϕ i , leaving S (including its orientation and d) and each edge of γ except for some γ invariant, and forming a hyph {ϕ i (γ)} i . Consequently, α ϕ i T and α ϕ j T are orthogonal for i = j. Moreover, each α ϕ i commutes with w. Therefore, by Lemma 2.17, P ν 0 π(w)1 ν 0 is orthogonal to T . • im γ equals S, ∂S or S. Assume first im γ = ∂S. Since im γ is compact, ∂S has to be compact as well. Hence, it is isomorphic to S 1 . After a possibly necessary re-orientation of γ, the product of all paths in γ is a closed edge γ with image ∂S. Assume that T is not (γ, φ)-based for some φ. Then there is some vertex m in γ, where the adjacent edges at m are either labelled with different representations or carry non-matching indices. Now, as in the previous case, but this time by Proposition 6.23, there are infinitely many localized stratified diffeomorphisms ϕ i , leaving the sets S (including orientation and d) and ∂S invariant; they simply move m along ∂S stretching ∂S a bit. By Lemma 3.4, any two α ϕ i T and α ϕ j T with i = j are orthogonal. Since each α ϕ i commutes with w, Lemma 2.17 proves the orthogonality of P ν 0 π(w)1 ν 0 and T . The case of im γ = S is completely analogous. For im γ = S we may additionally get the case of an embedded interval. However, this is analogous as well. qed Immediately from the proof of the lemma above and that of Proposition 6.22, we get Corollary 7.8 Let S be some subset of M and w S,σ S d ∈ W be any Weyl operator. Moreover, let γ be a graph not contained in the closure of S. 
Then P ν 0 π(w)1 ν 0 ∈ H ν 0 ≡ L 2 (A, µ 0 ) is orthogonal to all non-trivial gaugevariant spin network states in M γ . We are now going to prove that the Weyl operators to open balls given some constant "labelling" d, are π ν 0 -units. We start with the dimensions 0 and 3+, but smaller than dim M , proceed with dimension 1 and end up with dimension 2. Corollary 7.9 Let s < dim M be some non-negative integer with s = 1, 2, and let S be an open or closed s-dimensional ball in M given a nice orientation. Then w := w S,σ S d is a π ν 0 -unit for every constant d ∈ ∆(S). To prove that w is even a π ν 0 -unit observe first that, by Propositions 6.10 and 6.14, there is a stratified isomorphism ϕ mapping S onto itself, but reverting its orientation. Thus, α ϕ (w) = w * , whence w 2 is a π ν 0 -unit by Corollary 2.11. Since G is compact, there is a square root for any element. Re-doing the proof for d 1 ∈ ∆(S) with d 1 d 1 = d gives the assertion. qed Lemma 7.10 Let w ∈ W be a Weyl operator for some quasi-surface S and some constant d ∈ ∆(S), and let γ be an analytic edge, such that P ν 0 π(w)1 ν 0 is contained in the closure of span B γ . If the image of γ is not completely contained in S, then w is a π ν 0 -scalar. Proof Let m ∈ im γ \ S. If γ is closed, we may assume that m is not the base point of γ. Consider now for each g ∈ G the Weyl operator w g,m given by the quasisurface S m := {m}, whereas the orientation of S m is chosen, such that the direction of γ coincides with the orientation of S m . Since S m and S are disjoint, w g,m and w commute. Moreover, by Corollary 7.9, w g,m is a π ν 0 -unit. Consequently, by Corollary 2.13, w g,m leaves P ν 0 π(w)1 ν 0 invariant. • Let m be not an endpoint of γ. Altogether, this shows that P ν 0 π(w)1 ν 0 is orthogonal to all non-trivial gaugevariant spin network states, i.e., w is a π ν 0 -scalar. • Let m be an endpoint of γ. We argue analogously, using Proof Let γ be the edge whose interior is S and choose one of its orientations. By Lemma 7.7, P ν 0 π(w)1 ν 0 is orthogonal to all non-trivial gauge-variant spin network states that are not based on the edge γ. By Corollary 3.6, P ν 0 π(w)1 ν 0 is contained in the closure of the span of γ-based gSNs. Since, however, the endpoints of γ are not contained in S, Lemma 7.10 implies that w is a π ν 0 -scalar. Now, by Proposition 6.10, there is some ϕ ∈ D being the identity on S, but inverting the orientation of S, i.e., α ϕ (w) = w * . Corollary 2.11 implies that w 2 is a π ν 0 -unit. As above, the assertion follows since square roots exist in G. qed Proof The image of an edge γ equals S, ∂S or S iff γ is a closed loop along ∂S ∼ = S 1 . By Lemma 7.7, P ν 0 π(w)1 ν 0 is orthogonal to all non-trivial gauge-variant spin network states not based on such a γ. Hence, we have P ν 0 π(w)1 ν 0 ∈ span B γ by Corollary 3.6. Observe that im γ ∩ S = ∅. Now argue as in Corollary 7.11. qed Proposition 7.13 Let S be a finitely widely triangulizable subset in M having a natural orientation with dim S < dim M . Then w := w S,σ S d is a π ν 0 -unit for every d ∈ ∆(S) being constant on S. Proof • S is an open q-simplex having a nice orientation. By Corollary 6.11 and Proposition 6.14, S is oriented-strata equivalent to a nicely oriented q-ball. So we get the assertion, since q-balls lead to Weyl operators that are π ν 0 -units (Corollaries 7.9, 7.11 and 7.12) and since this property is inherited to all oriented-strata equivalent objects according to Lemma 7.6. • S is finitely widely triangulizable. 
This means, by definition, S is the finite disjoint union of nicely oriented simplices. Since disjoint unions lead to products of (commuting) Weyl operators (see Lemma 3.26), we get the assertion as well in the general case. qed Proof Proposition 7.5 Use Proposition 7.13 and Lemma 2.12, observing that each w ′ ∈ W ′ is a π ′ ν 0 -unit and that A Diff is generated by W, W ′ and C(X). From our point of view (see also the discussion in Subsection 4.1), all ingredients are natural up to the restrictions on S and, maybe, on M and ∆(S). The inclusion of semianalytic sets is reasonable, since the stratified diffeomorphisms map analytic hypersurfaces to semianalytic sets anyway. At the same time, the inclusion of lower-dimensional surfaces becomes natural. But, it would be desirable to at least replace the condition of wide triangulizability by the "standard" triangulizability, since in this case it is known that any semianalytic set is triangulizable. The requirement that each simplex in the triangulation is nicely oriented, is not too restrictive, since every naturally oriented, embedded surface is at least locally nicely oriented. The finiteness, on the other hand, cannot simply be dropped. This may at most be possible for compact M . In fact, then every semianalytic set has a compact closure and compact boundary. Then we may triangulize them finitely, by local finiteness. Redoing the procedure with the (lower-dimensional) semianalytic set given by the intersection of the original one with its boundary, we may successively get a finite decomposition of the original set into simplices. For non-compact M , this is no longer true. Simply take a hyperplane in R 3 being triangulizable, of course, but not finitely. Well, although our proofs above have aimed at the finite case, we may extend the uniqueness result immediately to this example. Simply use that a hyperplane can be rotated onto itself inverting its orientation, and argue as in Corollary 7.9. In other words, it may be, as already mentioned above, that every analytic manifold is widely triangulizable; but even if not, there seems to be still some leeway in our argumentation above to keep the uniqueness given in the more general context. However, to explore this, several technical investigations in the field of semianalytic sets are necessary that go much beyond the scope of this paper. We mentioned also the restriction that M has to be at least three-dimensional. Well, for quantum gravity this is no problem at all, since the space-like hypersurfaces are threedimensional. The space-time is even four-dimensional, although this does not seem relevant here, since we work with compact structure groups excluding the full covariant formulation of general relativity in four dimensions using structure group SO(3, 1) or Sl(2, C). Nevertheless, we expect our result to be true in dimension 2 as well. In dimension 1, one should check it by hand -M can only be a line or a circle. Another issue concerns the choice of functions d ∈ ∆(S) to label the stratified sets. Constant functions mark some minimal condition. On the other hand, our proofs in Subsection 7.2 only go through for constant labellings. In fact, only these guarantee that diffeomorphisms mapping some S onto itself preserve even its labelling. The most obvious way out might be to add some stronger notion of regularity. In particular, we might reuse the idea of step functions for the definition of integrals. 
This means, we should approximate an arbitrary (sufficiently "smooth") function by simple functions, i.e. by sums of step functions, having sufficiently nice, disjoint supports. These sums now correspond to products of Weyl operators with constant labellings. Since these are represented identically, we would get the desired uniqueness for representations that are in this sense regular and if each d can be approximated this way. However, this approximation again may be in conflict with the triangulation problem above. Therefore, at this point, we state only the directly given Lemma 7.16 Besides nice enlarged structure data, assume that each ∆(S) consists of some subset of continuous functions d : M −→ G. Equip ∆(S) with the supremum norm on S induced by some fixed norm on G. Assume there is some sequence (d i ) i∈N with d i → d in ∆(S), such that for all i there are finitely many S i,k i forming a decomposition of S and each having a finite wide triangulation, whereas d i is constant on each S i,k i . Then, given the assumptions of Theorem 7.14, π ′ is equivalent to π 0 , provided π ′ is Λ 0,S,σ S -regular for all S ∈ S and σ S ∈ Σ(S). Recall that π 0 itself is always Λ 0,S,σ S -regular, i.e., if d i converges pointwise on S to d, then the corresponding Weyl operators converge weakly. Proof Let d, S and σ S be fixed. The Weyl operators corresponding to S i,k i and d i | S i,k i are even π ν 0 -units according to the proof of Theorem 7.14, hence each w S,σ S d i as well, by Lemma 3.26. Proposition 2.22 and the Λ 0,S,σ S -regularity imply that w S,σ S d is a π ν 0 -unit as well. Corollary 2.7 gives the proof. qed Further Assumptions Let us now say a few words about the other assumptions of Theorem 7.14. That we restrict ourselves to cyclic representations, is no restriction at all, since any (non-degenerate) representation can be decomposed into cyclic ones. Rather, the assumption that there is a cyclic vector being at the same time diffeomorphism invariant, is a restriction. This means that we only consider theories having a diffeomorphism invariant "vacuum". Well, this may be justified by the corresponding invariance of general relativity leading to some special kind of quantum geometry. Next, we assumed at least the "standard" regularity mapping weakly continuous one-parameter subgroups into weakly continuous ones. It may be desirable to drop this assumption; however, even in the classical theory of quantum mechanics, the Stone-von Neumann theorem relies on the regularity assumption. Indeed, it is very difficult to prove results without referring to it. However, in our case, there may be some hope, since the diffeomorphism group is that large and may thence identify so many objects in order to, possibly, replace some or all of the regularity assumptions. The naturality of the action of diffeomorphisms is discussed below. Improvements Finally, we would like to emphasize that we were able to drop a crucial assumption and to weaken another made in the paper [35] by Sahlmann and Thiemann: First of all, we did not need any assumptions about the domains of the operators. This was possible, since we are working with the exponentiated Weyl operators from the very beginning. The only point, where we went down to the non-exponentiated regime, was in Subsection 7.1 (and Appendix B). But even there, we did not do this for generators of the represented Weyl operators. In fact, we did only use results for the convergence of the genuine Weyl operators w.r.t. the supremum norm. 
This way, we get some "analytic" convergence at the exponentiated level that, afterwards, leads to the emergence of the Ashtekar-Lewandowski measure by splitting and regularity. Acknowledgements The author thanks Abhay Ashtekar, Jerzy Lewandowski, Stefan Müller, Andrzej Oko lów, Hans-Bert Rademacher, Hanno Sahlmann, Konrad Schmüdgen, Matthias Schwarz, Thomas Thiemann, Rainer Verch and Elmar Wagner for fruitful discussions. Moreover, the author is very grateful to Garth Warner for his remarks and, in particular, for pointing out a mistake in an earlier version of this article. Fortunately, all the results kept valid. Additionally, the author thanks the three anonymous referees for their very valuable comments and suggestions helping him to improve the article. The author has been supported in part by the Emmy-Noether-Programm (grant FL 622/1-1) of the Deutsche Forschungsgemeinschaft. A Continuity Criterion Lemma A.1 Let Y be some sequential topological space. Let X be a Banach space and let λ : Y −→ B(X) be some map. Moreover, let λ(·) : Y −→ R be locally bounded. Assume, finally, that there is some subset E ⊆ X, such that y −→ λ(y)e is continuous for all e ∈ E and that span E is dense in X. Then y −→ λ(y)x is continuous for all x ∈ X. B Two Estimates Lemma B.1 Let H be some Hilbert space and N ∈ N. Moreover, let A, A i and B i be linear continuous operators on H, such that A ≤ 1 and B i ≤ 1 for all i = 1, . . . , N . Then i.e. ϕ 120 (−τ, 0, 0) = (−τ, a, 0), • The maps ϕ 13 * : R n −→ R n are defined analogously to the case of ϕ 11 * . • The remaining maps ϕ 3i * are defined using the reflection symmetry w.r.t. x = 0. One immediately checks that ϕ : R n −→ R n defined by ϕ| G ij * := ϕ ij * and ϕ| R n \C := id is a well-defined stratified analytic isomorphism with the desired properties. qed
Numerical modeling of Coanda effect in a novel propulsive system

The Coanda effect (the adhesion of a jet flow to a curved surface) is a fundamental characteristic of jet flows. In the present paper, we carried out numerical simulations to investigate Coanda flow over a curved surface and its application in a newly proposed propulsive system, "A.C.H.E.O.N" (Aerial Coanda High Efficiency Orienting jet Nozzle), which supports thrust vectoring. The ACHEON system is presently being proposed for propelling a new V/STOL airplane in the European Union. The system is based on the cumulative action of three physical effects: (1) high-speed jet mixing, (2) control of the Coanda effect by electrostatic fields, and (3) Coanda adhesion of a high-speed jet to a convex surface. The performance of this nozzle can be enhanced by increasing the deflection angle of the synthetic jet over the Coanda surface. The newly proposed nozzle has a wide range of applications; in the industrial sector it can be used, for example, in plasma spray guns and for direct injection in combustion chambers to enhance combustion efficiency. We also studied the effect of dielectric barrier discharge (DBD) plasma actuators on the A.C.H.E.O.N system. DBD plasma actuators are active flow control devices used to control the boundary layer and to delay flow separation over convex surfaces. Computations were performed under subsonic conditions. Two-dimensional CFD calculations were carried out using the Reynolds-averaged Navier-Stokes (RANS) equations, discretized with a finite volume method (FVM). The SST k-ω model was used to model the turbulent flow inside the nozzle, and a DBD model was used to model the plasma. Moreover, a body-force treatment was devised to model the effect of the plasma and its coupling with the fluid. These preliminary results show that the presence of plasma near the Coanda surface accelerates the flow, delays separation and enhances the efficiency of the nozzle.
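As background for the body-force treatment mentioned above, the sketch below shows one common way such a coupling is implemented in practice: a time-averaged plasma force is evaluated per cell and added as a source term to the RANS momentum equations. This is only an illustrative sketch based on a linearized, exponentially decaying (Shyy-type) force field; the decay lengths, force magnitude, actuator position and function names are assumptions for illustration, not the actual model or values used in this work.

```python
import numpy as np

def dbd_body_force(x, y, x_act=0.0, y_act=0.0,
                   f_max=2.6e3, lx=0.02, ly=0.003):
    """Illustrative time-averaged DBD plasma body force per unit volume [N/m^3].

    A linearized force field decaying exponentially downstream of the exposed
    electrode edge at (x_act, y_act); lx, ly are decay lengths. All constants
    here are placeholders, not calibrated values.
    """
    dx = x - x_act
    dy = y - y_act
    # Force acts only over the covered electrode, close to the wall.
    mask = (dx >= 0.0) & (dy >= 0.0)
    mag = f_max * np.exp(-dx / lx - dy / ly) * mask
    fx = mag           # mainly wall-parallel (jet-inducing) component
    fy = -0.1 * mag    # weak wall-normal component
    return fx, fy

def add_plasma_source(residual_mom, cell_centers, cell_volumes):
    """Add the plasma force as a momentum source term, cell by cell."""
    fx, fy = dbd_body_force(cell_centers[:, 0], cell_centers[:, 1])
    residual_mom[:, 0] += fx * cell_volumes
    residual_mom[:, 1] += fy * cell_volumes
    return residual_mom
```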
INTRODUCTION

Air transport has completely transformed our society in the last 100 years. The growing use of air transport has driven the development of Vertical/Short Take-Off and Landing (V/STOL) air vehicles in the modern world. These types of air vehicles have enormous operational advantages in military, humanitarian and rescue operations. V/STOL aircraft rely on thrust vectoring. Thrust vectoring is the ability of an aircraft or other vehicle to deflect the angle of its thrust away from the vehicle's longitudinal axis [41]. Further, the use of thrust vectoring enables the aircraft to be controlled more effectively. In military operations it can extend aircraft controllability to high post-stall angles of attack, a regime where classic lifting surfaces lose their ability to control the aircraft, creating what is called supermaneuverability [45]. This technology will improve take-off and landing performance. It will allow the exploration of radically new aerial vehicle designs, realizing advanced concepts which have been postulated throughout the history of aviation but could not be realized because of the lack of an effective and affordable jet vectoring system. ACHEON (Aerial Coanda High Efficiency Orienting jet Nozzle) explores the feasibility of a novel propulsive system for aircraft which is expected to overcome the main limitations of traditional jet deflection systems [1,41]. In particular, Project ACHEON comprises a thrust vectoring propulsive nozzle named HOMER (High Speed Orientating Momentum with Enhanced Reversibility), which is supported by a patent developed at the University of Modena & Reggio Emilia [41]. The system also comprises plasma actuators (boundary layer control devices) that extend the vectoring range of operation of the nozzle [45]. In the past, the Germans used graphite control vanes in the exhaust stream of their V-2 ballistic missiles in World War 2 to obtain some directional control of the jet [45]. Thrust vectoring in aircraft is a relatively new practice, and the concept came into use during the Cold War [45]. Several methods have been employed to generate thrust vectoring in aircraft. Two common approaches have been taken, namely mechanical and fluidic thrust vectoring. Mechanical thrust vectoring uses turbofan engines with rotating nozzles or turning vanes to deflect the exhaust stream. This method can deflect the thrust by as much as 90 degrees, providing a vertical take-off and landing capability. However, to provide vertical thrust the aircraft requires a bigger, heavier engine. As a result, the overall weight of the aircraft increases and, hence, maneuverability is reduced during normal flight.
Alternatively, a fluidic thrust vectoring approach has been proposed by several researchers [45]. In contrast to mechanical thrust vectoring, fluidic thrust vectoring uses a fixed geometry. In this approach, a secondary flow is used to control the main exhaust flow stream and in this way redirect the flow at or near the exit plane. A wide range of concepts have been proposed, such as shock vector control [70], sonic throat skewing [69], synthetic jet actuators [70][71] and co-flow or counter-flow [69] nozzles. Experimental tests concluded that the main flow can be deflected by up to 15°. Fluidic approaches are beneficial because they have around 50% lower mass and cost. Further, their inertia is lower, making them faster to actuate, which results in a stronger control response. Also, the complexity is reduced because they require a mechanically simpler system with no moving parts, and they also have a reduced radar cross-section for stealth properties [45]. An excellent review of thrust vectoring approaches and their use in VTOL aircraft has been presented by Pascoa et al. [45]. The throat shifting method is the most efficient fluidic approach, but it requires the ability to control the throat area under various working conditions, which can be difficult. A large deflection angle is obtained with the shock vector control method, but it can present problems associated with shock impingement and reflection. The co-flow thrust vectoring method is the least efficient and has problems related to control reversal when low injection mass flows are applied. In the counter-flow method, problems may arise due to the need for a suction system for the secondary flow. In view of this, a novel thrust vectoring approach, ACHEON, has been introduced [41,44,47]. It bears some resemblance to the co-flow concept, but it is not fluidic, and in this sense it is less prone to problems associated with the injection of low mass flows. In ACHEON there is no secondary fluidic stream, but only two co-flowing streams whose differential mass flow rates allow control of the thrust vector angle [41]. Plasma actuators are implemented in the HOMER nozzle to enhance and control the attachment of the synthetic jet.

In the past, significant research on boundary layer control technologies, including active and passive flow control, has been carried out. The main idea is to generate microvortices inside the boundary layer thickness, to add flow momentum close to the wall and to reduce separation. Flow control devices can be categorized into two groups. Passive flow control devices do not add energy to the air flow. These passive devices may be rendered mobile, thus increasing the range of aerodynamic optimization, but they are quite complex mechanisms which are demanding both in terms of weight and power consumption. An alternative is to use active devices, which add energy to the air flow. These devices have been researched and have shown their ability to control air flow for boundary layer separation, wing tip vortices, shock/boundary layer interaction, and also for engine exhaust jet, landing gear and cavity noise reduction. Many active control devices have been proposed over the years [2]. A detailed review of active and passive flow control techniques can be found in [2][3].
Although the traditional methods of boundary layer control are effective from an aerodynamic point of view, their associated manufacturing and maintenance costs may limit their implementation by introducing a significant increase in the mechanical complexity and weight of the aircraft. Therefore, the replacement of these conventional systems by a system utilizing active flow control technology is a logical alternative. Plasma actuators are a relatively new flow control technology. A plasma actuator consists of two offset thin electrodes that are separated by a layer of dielectric insulator material. One of the electrodes is typically exposed to the air. The other electrode is fully covered by the dielectric material. The electrode exposed to air is loaded with a high voltage, whereas the electrode buried under the dielectric is grounded. Comprehensive reviews on plasma actuators for aerodynamic flow control have been published recently [3,5]. Corke et al. [4] provide an overview of the physics and modeling of SDBD (single dielectric barrier discharge) plasma actuators, highlighting some of the capabilities of plasma actuators through examples from experiments and simulations. Caruana [5] has given a survey of methods of air flow control for aircraft performance improvement, presenting a short overview of non-plasma devices and studied ways for flow control. Touchard [6] made a detailed review of the designs and associated setups of different aerodynamic plasma actuators developed over the last twenty years; he further discussed the limits and prospects of plasma actuators for airflow control.

In this work, a numerical investigation of the ACHEON system has been carried out, and the effect of the plasma actuator on the system has been studied. Thrust vectoring characteristics such as the thrust angle (θ_T) and thrust have been computed for different flow conditions, and the performance of the nozzle has been obtained.

HOMER GEOMETRY AND OPERATION

In this section we present the novel nozzle, describing its operation and the numerical modeling approach employed. The HOMER nozzle is depicted in Figure 1. The geometry comprises a duct (1) which is divided into two channels by a central septum. The two channels converge into the nozzle outlet, connected to Coanda surfaces (3) and (3´). This nozzle permits the stabilization of a synthetic jet with an arbitrary predefined direction and the dynamic modification of this direction without any moving mechanical part. It generates a vectored and controllable jet by the combined action of two different physical phenomena: the mixing of two primitive jets (2) and (2´) and the angular deviation of the resulting synthetic jet by adhesion to the Coanda surfaces (3) and (3´).

Referring to Figure 1, the following conditions can be identified:
1. If the momentum of the primitive jet (2) is greater than that of (2´), the synthetic jet (4) adheres to the Coanda surface designated as (3).
2. If the momentum of the primitive jet (2´) is greater than that of (2), the synthetic jet (4) adheres to the Coanda surface designated as (3´).
3. If the momenta are equal, the synthetic jet is aligned with the nozzle axis.

The angle formed by the synthetic jet (4) and the geometrical axis of the nozzle can be controlled by the momenta of the primitive jets (2) and (2´): it increases when the difference between the momenta of the two primitive jets increases, decreases when this difference decreases, and becomes null when the difference is zero.
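The attachment conditions listed above can be expressed as a short piece of control logic. The sketch below is purely illustrative and is not part of the original study; the momentum of each primitive jet is approximated as the product of its mass flow rate and mean velocity, and the function and variable names are assumptions made for illustration.

```python
def jet_attachment(m_dot_2, v_2, m_dot_2p, v_2p):
    """Return which Coanda surface the synthetic jet (4) adheres to.

    The momentum flux of each primitive jet is approximated as m_dot * v.
    Returns '3' (surface adjacent to jet 2), "3'" (surface adjacent to jet 2'),
    or 'axis' when the two momenta are equal.
    """
    p_2 = m_dot_2 * v_2      # momentum flux of primitive jet (2)
    p_2p = m_dot_2p * v_2p   # momentum flux of primitive jet (2')
    if p_2 > p_2p:
        return "3"
    if p_2p > p_2:
        return "3'"
    return "axis"


# Example: jet (2) carries more momentum, so the synthetic jet bends towards surface (3).
print(jet_attachment(m_dot_2=0.6, v_2=40.0, m_dot_2p=0.5, v_2p=40.0))  # -> "3"
```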
PLASMA ACTUATORS

Plasma-based devices exploit the momentum coupling between the surrounding gas and the plasma to manipulate the flow. Unlike other flow control techniques, such as suction and mechanical actuators, plasma actuators require low power consumption, involve no moving mechanical parts, and have a very fast frequency response that allows real-time control. For these reasons, the plasma actuator has become a very promising and attractive device in the flow control community. Plasma actuators can be sub-categorized into two major families: the corona discharge and the dielectric barrier discharge (classification according to the class of discharge may include corona discharge, dielectric barrier discharge (DBD), glow discharge and arc discharge actuators; according to the operating conditions, the classification can include thermal and non-thermal plasma actuators). Different plasma actuators can be operated in various modes, depending on their geometrical configuration and the kind of high voltage applied (e.g., nanosecond pulsed DBD, plasma synthetic jet, sliding DBD and pulsed DBD actuators). Very promising results for the application of plasma actuators have been observed in a wide range of aeronautic applications (boundary layer transition control [7], separation control [8,10], control of a subsonic rotor blade wake [11], increasing the lift on a UAV [12], noise reduction [13], pressure sensing [14], elimination of low Reynolds number separation in low-pressure turbine flows [15] and reduction of the effects of turbine tip leakage [16]).

The specific plasma actuator considered for this study is the single-dielectric barrier discharge (SDBD). In this configuration, two electrodes are typically separated by a dielectric barrier usually made of glass, Kapton or Teflon, as depicted in Figure 2. When a high AC voltage signal of sufficient amplitude (5-40 kVpp) and frequency (1-20 kHz) is applied between the electrodes, the intense electric field partially ionizes the surrounding air, producing a non-thermal plasma on the dielectric surface. The collisions between the neutral particles and the accelerated ions generate a net body force on the surrounding fluid. This body force is the mechanism for active aerodynamic control.

The advantages of DBD actuators include being surface-mounted (the ability to apply the actuators onto surfaces without the addition of cavities or holes), fully electronic, low power, high frequency-band devices, having a fast time response for unsteady applications, a very low mass and no moving parts. Moreover, flexible operation is possible by controlling the input voltage and waveforms. As DBD plasma actuators are thin, surface mounted, and do not require internal volumes or passages, they are particularly attractive for gas turbine and turbomachinery applications. Furthermore, plasma actuators, like other active flow control devices, can be driven either by open loop (not regulated by the output) or closed loop with feedback control [17].
It has been observed that plasma actuators are sensitive to a variety of atmospheric conditions, including air velocity, humidity and air pressure, to which they are exposed in many potential practical applications. There have recently been a number of investigations into the effect of pressure and temperature (in other words, gas density) on the body force and velocity profiles produced by DBD plasma actuators [18][19][20][21][22][23][24][25][26][27]. Although air pressure affects the current drawn by dielectric barrier discharge (DBD) plasma actuators and the voltage limits for plasma production, as a first attempt this effect is not considered in this study.

NUMERICAL METHOD

Initially, we performed two dimensional CFD calculations on the nozzle geometry for various flow conditions (low and higher speeds). The effect of the plasma actuator on the nozzle has been studied. To characterize the thrust vectoring of the nozzle, a performance parameter of the nozzle has been defined and studied under various flow conditions.

SOLUTION METHODOLOGY

2.1.1 Governing Equations

For the present case, we have considered a steady, two dimensional and incompressible flow in the nozzle. The governing equations can be written in vector form as

$\nabla \cdot \vec{u} = 0$ (1)

$\rho\,(\vec{u} \cdot \nabla)\vec{u} = -\nabla p + \nabla \cdot \bar{\bar{\tau}} + \rho\vec{g} + \vec{F}_b$ (2)

where ρ, $\vec{u}$, p, $\bar{\bar{\tau}}$, $\vec{g}$ and $\vec{F}_b$ are the density, velocity, pressure, viscous shear stress tensor, acceleration due to gravity and body force, respectively. For the discretization of these equations, the finite volume method has been used. A second-order upwind scheme has been considered for the modeling of the convective terms in the transport equations, and the standard pressure interpolation scheme has been used for the pressure term. Moreover, for turbulent flow we have considered the SST k-ω model [31].

DBD Plasma Modeling

There have been several numerical studies on DBD plasma actuators. Two main modeling approaches are commonly employed to describe plasma actuators. The first consists of chemistry-based models [32,35] that attempt to spatially resolve the plasma phenomena directly. The second are algebraic models that are based on the solution of a Poisson equation [36,39], [10][11]. These algebraic models generally require assumptions regarding either the charge density or the electric field produced by the actuator. The chemistry-based family typically consists of drift-diffusion-type models. These models track the chemical species present in the plasma, such as electrons and ions, using a set of transport equations; the essential plasma physics such as ionization, recombination and streamer propagation are all modeled. In this study, the algebraic model of Suzen [39] is used to describe the effect of plasma actuation.
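As a roadmap for the split potential field model described in the next subsection, the sketch below shows how such an algebraic model is typically coupled to the flow solver: the electrostatic fields are computed, the electrohydrodynamic body force is assembled from them, and that force is added as a momentum source to Eqn. (2). This is an illustrative outline only; in the present work the corresponding step is implemented through user-defined functions inside the flow solver, and the function names below are assumptions.

```python
import numpy as np

def dbd_body_force(phi, rho_c, dx, dy):
    """Assemble the EHD body force per unit volume, F = rho_c * E, on a 2D grid.

    phi   : external-field electric potential, array indexed as [y, x] (V)
    rho_c : net charge density, same shape as phi (C/m^3)
    dx,dy : grid spacings (m)
    Returns the x and y body-force components (N/m^3) to be added as
    momentum source terms in the flow solver.
    """
    dphi_dy, dphi_dx = np.gradient(phi, dy, dx)  # derivatives along y (axis 0) and x (axis 1)
    E_x, E_y = -dphi_dx, -dphi_dy                # E = -grad(phi)
    return rho_c * E_x, rho_c * E_y
```

For unsteady actuation, the assembled force would additionally be scaled by the applied waveform f(t) discussed below.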
Split potential field model

The electrostatic formulation is based on the assumption that the plasma formation and the fluid flow response can be decoupled due to the disparities in the characteristic velocities associated with each process. This is a reasonable assumption since the characteristic velocities of the transport fluid under consideration are between 10 m/s and 100 m/s and, for electron temperatures between 1000 K and 10000 K, the electron velocities, which represent the characteristic velocities of the plasma, are of the order of 10^5-10^6 m/s. The plasma actuator is formed by a pair of electrodes separated by a dielectric material. The actuator is placed on the surface with one electrode exposed to the surroundings and the other one embedded in the surface below the dielectric material (Figure 2). When a high AC voltage is supplied to the electrodes, this arrangement causes the air in their vicinity to weakly ionize. The ionized air, in the presence of the electric field gradient produced by the electrodes, results in a body force vector acting on the external flow that can induce steady or unsteady velocity components. This body force can be expressed in terms of the applied voltage and incorporated into the Navier-Stokes equations. Neglecting magnetic forces, the electrohydrodynamic (EHD) force can be expressed as

$\vec{F}_b = \rho_c \vec{E}$ (3)

where $\vec{F}_b$ is the body force per unit volume, ρ_c is the net charge density and $\vec{E}$ is the electric field. This is a force per unit volume of plasma, which is the basis of the plasma actuator effect on the neutral air. Consider the Maxwell equations (respectively Gauss's law, Gauss's law for magnetism, Faraday's law of induction and Ampere's circuital law):

$\nabla \cdot \vec{D} = \rho_c, \quad \nabla \cdot \vec{B} = 0, \quad \nabla \times \vec{E} = -\partial \vec{B}/\partial t, \quad \nabla \times \vec{H} = \vec{J} + \partial \vec{D}/\partial t$ (4)

Here $\vec{H}$ is the magnetic field strength, $\vec{B}$ is the magnetic induction, $\vec{E}$ is the electric field strength, $\vec{D}$ is the electric induction and $\vec{J}$ is the electric current. We assume that the charges in the plasma have a sufficient amount of time for the redistribution process to occur, that the whole system is quasi-steady, and that the time variation of the magnetic field is negligible, as is often the case in plasma. These assumptions imply that the electric current $\vec{J}$, the magnetic field $\vec{H}$ and the magnetic induction $\vec{B}$ are equal to zero, as are the time derivatives of the electric and the magnetic induction, $\partial \vec{D}/\partial t$ and $\partial \vec{B}/\partial t$. Therefore, Maxwell's equations reduce to

$\nabla \times \vec{E} = 0, \quad \nabla \cdot \vec{D} = \rho_c$ (5)

The relation between the electric induction and the electric field strength is given by

$\vec{D} = \varepsilon \vec{E}$ (6)

where ε is the permittivity. The permittivity can be expressed as ε = ε_0 ε_r, where ε_r is the relative permittivity of the medium and ε_0 is the permittivity of free space.
Using Eqn. (5), Eqn. (6) can be rewritten as

$\nabla \cdot (\varepsilon \vec{E}) = \rho_c$ (7)

The irrotationality of the electric field implies that it can be derived from the gradient of a scalar potential,

$\vec{E} = -\nabla \Phi$ (8)

Therefore,

$\nabla \cdot (\varepsilon_r \nabla \Phi) = -\rho_c/\varepsilon_0$ (9)

If we use the Boltzmann relation we have

$n = n_0 \exp(\pm e\phi/k_B T)$ (10)

with φ being the local electric potential, n_0 the background plasma density, T the temperature of the species, e the elementary charge and k_B the Boltzmann constant. In the above equation, the positive sign applies to electrons and the minus sign applies to the ions. The net charge density at any point in a plasma is defined as the difference between the net positive charge produced by ions and the net negative charge of electrons. The difference can be related to the local electric potential φ by the Boltzmann relation (10). Assuming a quasi-steady state with a time scale long enough for the charges to redistribute themselves, the following relation can be written:

$\rho_c = e\,(n_i - n_e) = e\,n_0\left[\exp(-e\phi/k_B T) - \exp(e\phi/k_B T)\right]$ (11)

where n_i and n_e are the ion and electron densities in the plasma. Expanding the exponential functions in a Taylor series for eφ << k_B T, Equation (11) becomes, to the lowest order in φ/T,

$\rho_c = -\dfrac{2 n_0 e^2}{k_B T}\,\phi$ (12)

The Debye length, which is the characteristic length for electrostatic shielding in a plasma, is defined as

$\lambda_d = \left(\dfrac{\varepsilon_0 k_B T}{2 n_0 e^2}\right)^{1/2}$ (13)

The free charges in the plasma are shielded out over a distance given by the Debye length. The Debye shielding is valid if there are enough particles in the charge cloud. The criterion for this is the dimensionless plasma parameter D, which characterizes unmagnetized plasma systems and is defined as

$D = \dfrac{4}{3}\pi\, n_0\, \lambda_d^3$ (14)

If the plasma parameter satisfies D >> 1, the plasma is weakly coupled and the Debye shielding is valid. For a plasma with a Debye length of approximately 0.00017 m and a charged-particle density on the order of 10^16 particles/m^3, the criterion gives D = 3.5 × 10^5, indicating that the assumption of Debye shielding holds. With the present definition of the Debye length we have

$\rho_c = -\dfrac{\varepsilon_0}{\lambda_d^2}\,\phi$ (15)

Experiments indicate that, independently of which electrode the voltage is applied to and independently of the polarity of the applied voltage, the resultant body force and the induced flow are in the direction towards the embedded electrode. The exposed surface of the dielectric plays a critical role. Even before the air ionizes, the dielectric surface communicates the potential charge from the covered electrode. When the voltage potential is large enough to ionize the air, the surface of the dielectric collects or discharges additional charge. As a result, the dielectric surface is referred to as a virtual electrode. Therefore there is a need for a model that can account for these effects. According to [38] and [39] (split potential field model), since the gas particles are weakly ionized, we can assume that the potential Φ can be decoupled into two parts: one being the potential due to the external electric field, φ, and the other being the potential due to the net charge density in the plasma, ϕ,

$\Phi = \phi + \varphi$ (16)

Assuming that the Debye length is small, and that the charge on the wall above the encapsulated electrode is small, the distribution of charged particles in the domain is governed by the potential due to the electric charge on the wall and is unaffected by the external electric field. Note that the grid spacing should not be larger than the Debye length: the smaller the Debye length, the narrower the plasma region located near the electrode and dielectric surface becomes.
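The Debye length of Eqn. (13) also sets the grid-spacing requirement noted above, so it is worth evaluating numerically. The short sketch below does this for illustrative values of the density and temperature; it is meant only to show how λ_d, the charge-density prefactor of Eqn. (15) and the plasma parameter of Eqn. (14) scale with these inputs, not to reproduce the specific figures quoted in the text.

```python
import math

eps0 = 8.854e-12   # permittivity of free space (F/m)
k_B = 1.381e-23    # Boltzmann constant (J/K)
e = 1.602e-19      # elementary charge (C)

n0 = 1e16          # background plasma density (1/m^3), illustrative value
T = 1.0e4          # species temperature (K), illustrative value

lambda_d = math.sqrt(eps0 * k_B * T / (2.0 * n0 * e**2))   # Debye length, Eqn. (13)
prefactor = eps0 / lambda_d**2                             # rho_c = -prefactor * phi, Eqn. (15)
D = (4.0 / 3.0) * math.pi * n0 * lambda_d**3               # plasma parameter, Eqn. (14)

print(f"lambda_d = {lambda_d:.2e} m (grid spacing should not exceed this)")
print(f"eps0/lambda_d^2 = {prefactor:.2e} C/(V m^3), D = {D:.2e}")
```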
For the potential due to the external electric field we have

$\nabla \cdot (\varepsilon_r \nabla \phi) = 0$ (17)

and for the potential due to the net charge density we have

$\nabla \cdot (\varepsilon_r \nabla \varphi) = -\rho_c/\varepsilon_0$ (18)

Using Eqn. (15), with the potential now identified as the charge-induced part ϕ, Eqn. (18) can be rewritten as

$\nabla \cdot (\varepsilon_r \nabla \rho_c) = \dfrac{\rho_c}{\lambda_d^2}$ (19)

Equation (17) is solved for the electric potential φ using the applied voltage on the electrodes as a boundary condition. The applied AC voltage is imposed at the exposed (upper) electrode as a boundary condition,

$\phi = \phi_{max}\, f(t)$ (20)

The waveform function f(t) can be either a sine wave or a square wave, given by

$f(t) = \sin(2\pi\omega t)$ (sine wave), or $f(t) = +1$ if $\sin(2\pi\omega t) \geq 0$ and $f(t) = -1$ if $\sin(2\pi\omega t) < 0$ (square wave) (21)

where ω is the frequency and φ_max is the amplitude. The embedded electrode is prescribed as ground by setting the electric potential to zero on that electrode. At the outer boundaries a zero normal gradient of φ is assumed. The waveform function f(t) is a time-dependent boundary condition and can be used to model both steady and unsteady actuator arrangements. For the steady case, f(t) can be set to be a square wave. For unsteady cases, different frequencies and waveforms can be used to simulate actuation with different duty cycles. Eqn. (19) is solved for the net charge density ρ_c only on the air side of the domain. A zero normal gradient for the net charge density is imposed on the solid walls except in the region covering the lower electrode. The charge density is set to zero on the outer boundaries. On the wall downstream of the exposed electrode, where the embedded electrode is located (virtual electrode), the charge density is prescribed in such a way that it is matched with the time variation of the applied voltage f(t) on the exposed electrode,

$\rho_{c,w}(x,t) = \rho_c^{max}\, f(t)\, G(x)$ (22)

where ρ_c^max is the maximum value of the charge density allowed in the domain (C/m^3). The variation of the charge density on the wall, ρ_c,w, in the streamwise direction x is prescribed by a function G(x) chosen to resemble the plasma distribution over the embedded electrode. Experimental results [40] suggest that this distribution is similar to a half-Gaussian distribution given by

$G(x) = \exp\left[-\dfrac{(x-\mu)^2}{2\sigma^2}\right]$ (23)

In Eqn. (23), μ is the location parameter indicating the maximum x location and σ is a scale parameter determining the rate of decay. The location parameter μ is chosen such that the peak corresponds to the left edge of the embedded electrode. Moreover, it is assumed that σ takes a value of 0.3 to allow a gradual decay of the charge density distribution from the left edge to the right edge. It should be noted that, in order to solve the above equations, it is necessary to specify two parameters, namely ρ_c^max and λ_d. These parameters control the strength of the plasma actuator's effects on the flow field and the extent of these effects into the flow field, and they should be calibrated using available experimental data. The values of ρ_c^max and λ_d were empirically defined by Suzen et al. (2005) [39] as 8 × 10^−4 C/m^3 and 0.001 m.
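The two prescribed inputs of the model, the waveform f(t) of Eqn. (21) and the wall charge-density distribution of Eqns. (22)-(23), are straightforward to construct; a small illustrative sketch is given below. The treatment of σ = 0.3 as a length in the same units as x, and the restriction of G(x) to the region over the embedded electrode, are assumptions made here for illustration.

```python
import numpy as np

def waveform(t, freq, kind="sine"):
    """Waveform f(t) of Eqn. (21): unit-amplitude sine or square wave."""
    s = np.sin(2.0 * np.pi * freq * t)
    return s if kind == "sine" else np.where(s >= 0.0, 1.0, -1.0)

def G(x, mu, sigma=0.3):
    """Half-Gaussian shape of Eqn. (23), applied over the embedded electrode (x >= mu)."""
    g = np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    return np.where(x >= mu, g, 0.0)

def wall_charge_density(x, t, mu, freq, rho_c_max=8e-4):
    """Wall boundary condition of Eqn. (22), using the calibrated rho_c_max of Suzen et al. [39]."""
    return rho_c_max * waveform(t, freq) * G(x, mu)
```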
Since Eqns. (17) and (19) do not contain a time derivative term, Eqn. (17) can be normalized using the value of the voltage at the exposed electrode, φ_max f(t), and solved by imposing a constant boundary condition equal to unity at the upper electrode. Once the dimensionless distribution is determined, the dimensional values at any given time can be obtained by multiplying this distribution by the corresponding value of φ_max f(t). Similarly, Eqn. (19) can be solved by normalizing with ρ_c^max f(t). This implies that the boundary condition for the dimensionless charge density on the wall region covering the embedded electrode is G(x). The non-dimensional form of Eqns. (17) and (19) is as follows:

$\nabla \cdot (\varepsilon_r \nabla \phi^*) = 0$ (24)

$\nabla \cdot (\varepsilon_r \nabla \rho_c^*) = \dfrac{\rho_c^*}{\lambda_d^2}$ (25)

where φ* = φ/(φ_max f(t)) and ρ_c* = ρ_c/(ρ_c^max f(t)).

Computational Grid

Figure 3 shows the computational grid for the numerical simulation. In order to capture the boundary layer effects near the Coanda surface, y+ < 1 was considered. A mesh of 218,155 hexahedral cells was created using the commercial mesh generation tool Pointwise. For the modeling of the plasma near the Coanda surface (Figure 3), an electrode 10 mm in length and 1 mm in thickness is mounted on the Coanda surface. Another electrode with the same thickness and 1 mm in length is grounded and separated from the mounted electrode by a 3 mm thick layer of Kapton as the dielectric material. For the present simulation, three plasma actuators have been used.

The commercial package FLUENT [50] has been used for the simulation. User-defined functions (UDFs) have been coded and used for the calculation of the electric field and the body force. Various types of boundary conditions have been used for the computation. Velocity inlet conditions have been imposed at both inlet sections of the nozzle, and pressure outlet boundary conditions have been used at the outlet. The convergence of the numerical solution has been checked through the convergence history of the lift and drag coefficients inside the nozzle.

3. RESULTS AND DISCUSSION

First, CFD calculations have been performed for different velocity ranges without plasma actuators. Table 1 and Table 2 show the different test cases for the numerical computation of the nozzle without plasma, where VR is the velocity ratio between the two inlet (Figure 1) velocities, T_x and T_y are the magnitudes of thrust in the x and y directions respectively, θ_T and T_max are the thrust vectoring angle and maximum thrust, and α is the jet vectoring angle. Simulations have been performed for maximum velocities (V_max) in the range 14 m/s to 80 m/s. The velocity contour for velocity ratio (VR) = 1.2 is shown in Figure 4. Subsequently, CFD calculations were performed for both low and high speed flows in the presence of the plasma actuators inside the nozzle. The distribution of electric potential and charge density around the electrodes near the Coanda surface is shown in Figure 5. As shown in Figure 6, in the presence of a potential difference between the electrodes the air ionizes and forms an ionic wind. This ionic wind overcomes the adverse pressure gradient near the Coanda surface and delays the flow separation. The presence of the plasma actuator near the Coanda surface significantly increases the jet deflection angle (Figure 7). Table 3 and Table 4 show the different test cases for the numerical computation of the nozzle with plasma.
For low speed flows, the jet deflection angle (α) increases with increasing velocity ratio (VR) (Table 1). The variation of the thrust vectoring angle (θ_T) and the maximum thrust (T_max) with VR is presented in Figure 8 and Figure 9. As is clear from Figure 8, θ_T increases with increasing VR. It is interesting to note that θ_T improves significantly for the low speed flows with the plasma on (Figure 8). It is clear from Figure 9 that T_max decreases as the velocity ratio (VR) increases from 0 to 1.2 and then increases towards higher VR. In the presence of plasma, the thrust magnitude of the nozzle is raised (Figure 9). This means that, with low power consumption, the efficiency of the nozzle is enhanced by the use of plasma actuators. For high speed flows, it was observed that α increases with increasing VR (Table 2). It is clear from Figure 10 that θ_T decreases with increasing VR. It is interesting to note that the effect of plasma on the variation of θ_T with VR is less significant for high speed flows. It is clear from Figure 11 that, for high speed flows, T_max increases with increasing VR. The effect of plasma on the thrust vectoring of the nozzle is smaller at higher velocity ratios in the case of high speed flows. From the above study, it may be concluded that there is an upper limit on the thrust vectoring angle for high speed flows, with and without plasma actuators.

The thrust vectoring of the nozzle can be characterized by defining a performance parameter (PP) of the nozzle,

$PP = \dfrac{\theta_T}{\dot{m}_2 / (\dot{m}_2 + \dot{m}_{2'})}$ (26)

where ṁ_2 and ṁ_2' are the mass flow rates at inlet 2 and inlet 2´ of the nozzle (Figure 1). The performance parameter (PP) of the nozzle for both low and high speed flows is shown in Figure 12 A and B. It has been observed that, for low speed flows, the performance of the nozzle improves significantly in the presence of the plasma actuator. In contrast, the effect of plasma is less significant for high speed flows.

CONCLUSIONS

A numerical study of the newly proposed propulsion system ACHEON has been conducted. The two dimensional, incompressible, Reynolds-averaged Navier-Stokes equations have been solved with a simple electrostatic model to simulate the flow inside the nozzle. The innovative active flow control of plasma actuators has been considered near the Coanda surface. It has been observed from the above study that plasma actuators are very effective in the case of low speed flows. From the numerical results it can be concluded that the plasma actuators have the potential to enhance the performance of the nozzle for low speed flows (with low power consumption).

Figure 1: Geometry of the HOMER nozzle. It comprises two flow streams, fed by two electric turbofans, and an exit Coanda surface. Operation with different mass flows allows control of the exit flow angle.
Figure 2: Schematic illustration of a single-dielectric barrier discharge plasma actuator.
Figure 8: Low speed flows: variation of thrust angle (θ_T) with velocity ratio (VR), with and without plasma.
Figure 10: High speed flows: variation of thrust angle (θ_T) with velocity ratio (VR), with and without plasma.
Figure 11: High speed flows: variation of maximum thrust (T_max) with velocity ratio (VR), with and without plasma.
Table 1: For low speed flows (without plasma).
Table 2: For high speed flows (without plasma).
Table 3: For low speed flows (with plasma).
Table 4: For high speed flows (with plasma).
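The thrust-vectoring quantities reported in Tables 1-4 and the performance parameter of Eqn. (26) follow directly from the computed thrust components and inlet mass flow rates. The short sketch below shows the arithmetic; it is illustrative only, and the definition of θ_T as the arctangent of the thrust components, like the sample values, is an assumption rather than a quantity taken from the study.

```python
import math

def thrust_vectoring_metrics(T_x, T_y, m_dot_2, m_dot_2p):
    """Thrust angle, maximum thrust and performance parameter of Eqn. (26).

    T_x, T_y          : thrust components in the x and y directions (N)
    m_dot_2, m_dot_2p : mass flow rates at inlet 2 and inlet 2' (kg/s)
    """
    theta_T = math.degrees(math.atan2(T_y, T_x))       # thrust vectoring angle (deg)
    T_max = math.hypot(T_x, T_y)                        # magnitude of the thrust vector (N)
    PP = theta_T / (m_dot_2 / (m_dot_2 + m_dot_2p))     # performance parameter, Eqn. (26)
    return theta_T, T_max, PP

# Example with made-up values: 40 N axial and 8 N transverse thrust,
# inlet mass flow rates of 0.6 and 0.5 kg/s.
print(thrust_vectoring_metrics(40.0, 8.0, 0.6, 0.5))
```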
2018-12-11T07:23:17.348Z
2014-06-17T00:00:00.000
{ "year": 2014, "sha1": "c40e241c75d8288039fd406f083e443e4320618f", "oa_license": "CCBY", "oa_url": "https://www.journal.multiphysics.org/index.php/IJM/article/download/8-2-181/219", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c40e241c75d8288039fd406f083e443e4320618f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
263106885
pes2o/s2orc
v3-fos-license
Associations, unions and everything in between: contextualising the role of representative health worker organisations in policy Associations, unions and other organised groups representing health workers play a significant role in the development, adoption and implementation of health policy. These representative health worker organisations (RHWOs) are a key interface between employers, governments and their members (both actual and claimed), with varying degrees of influence and authority within and across countries. Existing research in global health often assumes—rather than investigates—the roles played by RHWOs in policy processes and lacks analytical specificity regarding the definitional characteristics of RHWOs. In this article, we seek to expand and complicate conceptualisations of RHWOs as key actors in global health by unpacking the heterogeneity of RHWOs and their roles in policy processes and by situating RHWOs in context. First, we define RHWOs, present a typology of RHWO dimensions and discuss perceived legitimacy of RHWOs as policy actors. Next, we unpack the roles of RHWOs in policy processes and distinguish RHWO roles in regulation from those of regulatory agencies. The final sections situate RHWOs in political and labour relations contexts, and in sociohistorical contexts, with attention to institutional frameworks, professional hierarchies and intersectional factors such as race, gender, sexuality, class, caste and religion. We conclude by outlining research gaps in the study of RHWOs and policy, and by encouraging global health researchers and practitioners to incorporate an expanded focus on these actors. Taking this approach will generate a wider range of strategies to better engage these organisations in policy processes and will ensure stronger health workforce policies globally. INTRODUCTION Associations, unions and other organised groups representing health workers play a significant role in the development, adoption and implementation of health policy. 1 These representative organisations are a key interface between employers, governments and the members they represent.In this article, we refer to such organisations as representative health worker organisations (RHWOs) and present an overview of their role in health policy processes.We define RHWOs as organisations of health workers that identify and act on the collective interests of members (both actual and claimed) regarding labour conditions, articulate collective positions on policy issues (pertaining to health systems and beyond) and engage in other activities as determined by their organisational mandates. RHWOs often focus advocacy efforts on policy that directly impacts the health workforce, such as around training, remuneration, pricing negotiations, job security and working conditions. 2 RHWOs may also advocate for broader aspects of health reform (ie, availability, financing, quality of care, access) 3 and partner with governments to expand access to particular health services and health as a human right. 
4 5 Such advocacy aligns with the interests of RHWOs both insofar as it advances a vision of health reform that protects the professional status and scope of practice of members and because such actions justify or reinforce the status of RHWOs as important policy actors. 1 6

SUMMARY BOX
⇒ Representative health worker organisations-associations, unions and other organised groups representing various groupings of health workers-are key actors at all stages of health policy processes.
⇒ There remains a paucity of research on representative health worker organisations and the ways in which these groups engage in health policy processes, particularly in low and middle-income countries.
⇒ Researchers must engage with the heterogeneity of representative health worker organisations, across dimensions such as public or private sectors, levels of specialisation, career stage or systems of medicine.
⇒ Power dynamics within and between representative health worker organisations, and the ways in which these organisations are situated in governance processes, are critical to understand.
⇒ Further clarity and distinction is needed between the interest-oriented roles and the regulatory roles of organisations in health workforce governance.

Examples of interest-oriented action on the part of RHWOs include well-documented actions on the part of physicians' associations, such as the consequential, longstanding opposition of the American Medical Association (AMA) to single-payer financing of healthcare in the USA, 7 and the influence of the British Medical Association in expanding access to abortion in the United Kingdom on the grounds of protecting clinical autonomy for physicians. 8 In recent years, a more complex picture regarding RHWOs has begun to emerge, with researchers attending to the multifarious policy goals of RHWOs and in a wider array of contexts, particularly low and middle-income countries (LMICs). Examples of actions by RHWOs to actualise principled commitments regarding health systems include RHWOs' roles in shaping access to health services or health rights, 3 influencing regulatory policy pertaining to the health sector to protect their financial or professional interests 9 or advocating for improved remuneration, safety and working conditions. 10 11 COVID-19 further amplified the role of RHWOs in health policy processes as platforms and channels for health worker discontent and frustrations during the pandemic. 12 Beyond these policy objectives, RHWOs also work to define, build and strengthen capacities within their membership, such as through standard and norm setting and continuous education, as well as to provide service to the community and to their membership. 13 14 Despite their major role, there remains limited global health research about how RHWOs engage in health policy processes, within countries and transnationally. Much of the existing scholarship on these actors, particularly those organisations representing physicians, has emerged from high-income countries. 7 15 16
In the context of LMICs, attention to the political economy of health policy processes is growing, 17 with scholars highlighting context-specific dynamics and interactions between governments, domestic actors (eg, business, civil society, labour), international actors (eg, aid agencies, multinational corporations, philanthropy) and other stakeholders in shaping policy processes and outcomes. With some key exceptions, 3 9 11 18 there is very limited research on the groups representing health workers-associations, unions and other organised groups-serving as policy actors in LMIC contexts, as well as on the interactions of RHWOs globally and their influence on national and subnational policy. Existing research in global health often assumes-rather than investigates-the role variously played by RHWOs in policy processes and lacks analytical specificity regarding the definitional characteristics of RHWOs. 19 The high degree of self-regulation involved in health occupations-particularly for doctors-has the potential to conflate organisations representing health workers, non-profit regulatory agencies and statutory bodies enacted through legislation but led and/or run by representatives of the professions. 13 In this article, we seek to expand and complicate conceptualisations of RHWOs as key actors in global health by unpacking the heterogeneity of RHWOs and their roles and responsibilities in policy processes, and by situating RHWOs in political, historical, social and labour relations contexts. Our analysis is drawn from completed and ongoing research on RHWOs by the authors 1 9 16 20 21 as well as a review of existing research in LMICs and HICs. Our goal is to contribute to a deeper understanding of these organisations in global health scholarship and to stimulate further research on a largely neglected, yet important, set of actors in health policy analysis regarding LMICs and transnational contexts.

DEFINING AND CATEGORISING RHWOS

RHWOs-which encompass a wide variety of organisational forms-are civil society organisations, rather than official organs of the state apparatus. 14 In the literature, health workers' associations and unions are often discussed using homogeneous terms that do not capture the heterogeneity found in most contexts. Research in global health often makes reference to different types of 'advocacy organisations' for health occupations (ie, labour unions, professional associations, etc), but in practice this grouping lacks precision regarding the distinct interests, membership, constituencies, histories, modes of functioning (ie, organisations professionally managed compared with those managed by members) and sources of power of these organisations.
RHWOs exist on multiple (sometimes overlapping) dimensions-as captured below and in figure 1-and geographic scales (local, subnational, national and international). Building on a typology of the health workforce in 'mixed health systems', 19 figure 1 presents a non-exhaustive set of dimensions on which RHWOs exist, which includes the following:
► Occupational group (eg, nurses, physicians, allied health professionals, etc).
► Sectoral (public sector employment vs private sector employment vs both) and subsectoral (eg, long-term care, acute care, etc) employment.
► Level of seniority (students, residents, attendings or consultants).
► Systems of medicine (eg, forms of traditional medicine, biomedical, etc).
► Level of specialisation (umbrella vs specialist associations).
► Qualification-specific (eg, medical students trained outside of the country).
► Type of employment (eg, contractual employee).
► Demographic characteristics (race, gender, religion, ethnicity, sexuality, country of origin, etc).
► Geographic scale (international, national and/or local).

Engaging with a diverse set of representative groups that cut across some of the dimensions addressed in figure 1 will provide a richer, more nuanced set of perspectives that are often missed in current research and analysis. Many research projects currently engage 'umbrella' representative groups in order to provide perspectives of health workers (eg, national medical associations or nursing unions). These groups undoubtedly provide important perspectives given that their large constituencies often include smaller organisations (such as state and local chapters) or overlapping memberships with other organisations (such as specialist associations). However, our experience suggests that primarily relying on umbrella organisations as respondents results in a limited understanding of health worker challenges and policy priorities. For example, the national umbrella association in a mixed health system context might have greater representation of members working in the private sector rather than the public sector; engaging with a public sector doctors' association might, therefore, provide an additional and arguably more nuanced understanding of issues experienced by those doctors. 22 In another example, umbrella organisations might not speak to the demands of cadres organised around particular employment terms, such as contract work, and organisations founded to address these concerns (eg, the National Health Mission In-source Employees' Association in India) would provide specific insights into the grievances and demands of those cadres.

Taking a critical eye to RHWOs is also important because organisation titles may not accurately convey the scope of the organisation's membership, mandate or advocacy strategies. For example, National Nurses United, the largest representative organisation of registered nurses in the USA, refers to itself as both a union and a professional association. Many of its affiliates (eg, California Nurses Association, Minnesota Nurses Association and the New York State Nurses Association) use the word 'association' in their titles but describe themselves as labour unions. Similarly, the Indian Medical Association occasionally makes use of tactics typically associated with labour unions, calling for nationwide strikes or days of protest. 23
It is also useful to recognise the diversity of organisations beyond those with the titles of associations or unions that engage in collective action; for example, numerous health worker organisations in India use the title sangathan, a term referring to organised groups, but often carrying the connotation of grassroots organisations or networks. It is, therefore, more pertinent to look at the type of collective action that the organisation engages in to achieve its policy goals.

Finally, analyses of RHWOs require a deeper understanding of the legitimacy of RHWOs (both with the groups that they claim to represent as well as with policy audiences) and of the engagement of 'members' with RHWOs. Junk makes a distinction between two types of representation in membership-based organisations: individual membership and a broader 'claimed constituency', wherein organisations speak on behalf of constituents who may not have volunteered to be members and who may not otherwise participate in the organisation (eg, by paying dues or voting). 24 This is often the case with umbrella organisations, which claim to speak on behalf of all professionals of a cadre regardless of whether those individuals have chosen to participate. In doing so, umbrella organisations are often able to play an outsized role in health policy processes. Laugesen has shown the ways in which the AMA has secured its position through its role advising the US government on pricing and through its ability to act as a coalition leader; this has enabled the association to retain power in spite of a drastic decline in membership. 26 However, reliance on claimed constituencies rather than active members can also hinder certain types of action. For instance, RHWOs that call for strikes rely on individual members' buy-in to their agenda in order to get them to join picket lines. RHWOs are typically quite reliant on volunteer labour and also need members to serve on boards and committees, to petition lawmakers and to mentor younger professionals. 27 Health workers may also hold multiple memberships, further complicating RHWO claims of representation for particular health occupations. 9

ROLES OF RHWOS IN HEALTH POLICY PROCESSES

It is important to recognise the linkages between health workforce policy and health sector reform and to note that RHWOs' support for or opposition to health reform packages is shaped by their impact on health workers (ie, opposing an expansion of health services due to potential negative impacts on remuneration). RHWOs for the same occupational group might also hold diverse viewpoints on health system structure and reform, and health workers have formed cause-focused organisations in response to specific aspects of health reform (eg, Indian Doctors for Ethical Practice, Doctors for America). Finally, RHWOs might engage in matters of public policy, such as the climate crisis, gender-based violence and prejudicial actions and behaviours from state authorities. The role of RHWOs in these policy domains is highly dependent on the political context and labour relations in a particular jurisdiction.

Depending on the context and occupation, RHWOs are also involved in the regulation of health occupations. 28 Diverse regulatory mechanisms for the health workforce exist across the world, involving combinations of self-regulation, co-regulation, direct government regulation or voluntary regulation, and variously engaging with RHWOs in these models. 28
The roles of regulatory bodies, such as professional councils, and of RHWOs are conceptually distinct and governed by different institutional frameworks. 13 29 30 That said, in practice, the membership of regulatory organisations and representative organisations might overlap and result in considerable collaboration between the two, for example, by working together to establish professional regulation and standards 31 as well as in the form of regulatory capture. 32 Professional councils or commissions, particularly in many LMICs, are statutory organisations that exist via legislation but involve self-regulation, as their decision-making bodies are comprised of members from the professional group. 33 The National Medical Commission of India exists via an Act of Parliament, the National Medical Commission Act 2019, but consists primarily of doctors in leadership and decision-making positions. 34 These councils are mandated with setting standards of 'entry' into the profession (ie, education, licensing), maintaining quality and standards and overseeing ethical practices. In some contexts, such as the USA, the regulatory bodies overseeing health worker education are non-governmental (eg, the Accreditation Council for Graduate Medical Education is an independent, non-profit organisation).

While councils, commissions and other regulatory bodies are often distinct from representative organisations on paper, there are numerous ways in which representative organisations play a role in regulation, both formally and informally. Formally, representative organisations might have a 'seat at the table' in committees or policy processes, such as price setting or grievance redressal. For example, the Indian Medical Association has a formal seat on grievance redressal committees in Karnataka, India, 35 while in the USA, the AMA is the dominant player in the Specialty Society Relative Value Update Committee, a price setting committee that recommends health service prices for the Centers for Medicare and Medicaid, a standard-setter in the country for prices more broadly. 15 Similarly, the Accreditation Council for Graduate Medical Education (ACGME) mentioned above is made up of member organisations including the AMA and other umbrella RHWOs and has been critiqued for its entanglements with these organisations. 36 There are also myriad informal ways in which representative organisations shape policy, often through networks. For example, the head of the erstwhile Medical Council of India was also the leader of the Indian Medical Association for a period of time 37 and was accused of numerous acts of corruption while serving in both those positions.

Situating RHWOs in political and labour relations contexts

Reich has argued that research on the role of actors in health policy processes should be contextualised within the historical, political and social context in which they operate. 38 Building on this scholarship, and on recent calls to apply political economy analyses to health labour markets, 39 we extend this argument to research on RHWOs. The role of RHWOs varies considerably across political systems (democracy, monarchy, authoritarianism, etc) and labour markets (decentralised liberal, coordinated or dualist), 40 as the ability of health workers to influence policy is mediated by political institutions, labour or industrial relations and the extent to which civil society organisations are able to exercise power in formal policy processes. 14 40-42
The institutional framework guiding interactions between the state and interest groups within a particular jurisdiction in turn impacts the organisation, functioning and influence of RHWOs. The importance of these factors is illustrated by a study of the Chinese Medical Doctors Association (CMDA). 43 The study highlights how the CMDA's ties to the Chinese party-state limit its ability to represent its members' interests and influence their working conditions.

Depending on the political, societal and labour relations context of a jurisdiction, RHWOs may draw on a range of strategies to pursue their policy goals. For example, in some contexts, the use of strike action might be sanctioned under institutional frameworks around lawful and unlawful strike action, while in other contexts, strike action occurs without any formal sanction. 39 44 Such strategies are also strongly mediated by the positioning of a particular occupational group within the landscape and hierarchy of health occupations and the overall societal context (eg, the positioning of doctors as elites in a particular society might facilitate greater informal access to policymakers). Approaches and strategies that might be undertaken include, for example:
► Print, television, radio or other media campaigns.
► Illegal and/or corrupt financial contributions (ie, bribes, favours, etc).

Situating RHWOs in historical and social context

Historical trajectories also help explain the ideas espoused by certain representative organisations and, as such, are useful in understanding their role in policy processes. For example, the history of the Kenyan Medical Association, from its origins as an East African chapter of the British Medical Association to its postcolonial role, provides important context for understanding its leadership, membership base and policy stances. 11 Historical context may also provide a sense of shared 'traits' across associations, such as the evolution of profession-specific associations in former colonies. Citing case studies of professions in the British Commonwealth, Johnson 45 argues that professionals (and professional associations) in former colonies operated under frameworks that emerged during colonialism, where colonial administrations dictated and regulated the terms under which professionals (including health professionals) operated. Notably, colonial occupational control resisted movements towards the independent self-regulation of professions in favour of a system of corporate patronage in which the client (the empire) defines how its needs for professional services will be met. This form of centralised control differs from the norm of colleague authority that is observed in many colonising states, and that we tend to associate with medical regulatory bodies in high-income countries today.

RHWOs do not operate in isolation; they tend to be members of networks that shape their priorities and approaches to the policy process in complex ways. Health worker organisations collaborate domestically and internationally to advance shared priorities. For example, Public Services International (a global union of workers in public services) develops and organises transnational campaigns that address issues facing health workers and systems globally, such as healthcare privatisation, and represents health workers' interests when interfacing with multilateral organisations, such as the WHO. Similarly, Mattison and Bourret have described how collaborative transnational relationships involving midwifery associations have enabled these associations to advance sexual and reproductive health and rights. 5
For instance, the Canadian Association of Midwives (with the support of development funders) partners with midwifery associations in the global south to engage in shared learning that advances the practice of and policy regarding midwifery in both nations. Another example comes from the International Federation of Gynecology and Obstetrics Leadership in Obstetrics and Gynecology for Impact and Change Initiative in Maternal and Newborn Health. This initiative provided a capacity building network that enabled professional associations in eight LMICs to better influence policy and practice regarding maternal and newborn health. 46 These examples of health worker organisations collaborating as part of networks show how networks influence both the policy outcomes that associations advocate for and their methods of advocacy.

An intersectional approach to the politics of health worker representative organisations is crucial to a nuanced analysis of their role in health policy processes. The health workforce has long struggled with a lack of representation in terms of gender in policy spaces, despite women comprising 70% of the workforce. 47 In addition to gender imbalances, race, class, caste, religion, sexuality and other characteristics-combined with professional hierarchies-also shape health worker voice and representation. One of the clearest examples of this is the organising undertaken by community health workers in India, Pakistan and other contexts. 10 48 49 These cadres are largely made up of women of low socioeconomic status, living on low salaries and unstable employment, and providing an ever-expanding set of services. 20

CONCLUSION

In this article, we have provided an overview of a key set of actors in health policy processes, RHWOs, and highlighted the reasons for further investigation into their role in these processes. We conclude with a brief overview of knowledge gaps regarding the role of RHWOs as policy actors. Some of the areas requiring further research and analysis include:
► The internal dynamics of RHWOs' organisation and representation;
► The impact of political systems and health labour markets on RHWO policy influence;
► The usage of advocacy strategies (such as strikes, lobbying and informal networking) to influence policy processes;
► How power dynamics and hierarchies between occupations shape RHWO policy engagement (eg, how community health worker associations' participation compares with that of physician associations);
► The relationship of RHWOs across different levels (local, subnational, national and international) and how these dynamics shape policy preferences;
► How policy engagement processes are shaped by intersectional factors such as race, gender, sexuality, class, caste and religion.

Health workers are essential to ensuring improvements in access to and quality of health services and to achieving targets in health-related global goals. Research on the role of RHWOs in health policy processes is an emerging domain within health policy and systems research. Critical analyses of these organisations will deepen the knowledge base and also stimulate a wider range of strategies to better engage these organisations in policy processes globally.
Twitter: Veena Sriram @veena_sriram and Sorcha A Brophy @sorchabrophy
Contributors: VS conceptualised the article and wrote the first draft. VS, SAB and KS synthesised evidence for inclusion in the draft. VS, SAB, KS, MAE and AM revised the draft critically for important intellectual content. All authors approved of the final version that was submitted.
Figure 1: Types of representative health worker organisations.
2023-09-29T06:18:18.269Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "67e5ba68c69ef661aa03463fcc492d2732c2c918", "oa_license": "CCBYNC", "oa_url": "https://gh.bmj.com/content/bmjgh/8/9/e012661.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd48f68e49dd686f106b2fc15ba604682569f40f", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
209335183
pes2o/s2orc
v3-fos-license
PCB-Based Magnetometer as a Platform for Quantification of Lateral-Flow Assays

This work presents a proof-of-concept demonstration of a novel inductive transducer, the femtoMag, that can be integrated with a lateral-flow assay (LFA) to provide detection and quantification of molecular biomarkers. The femtoMag transducer is manufactured using a low-cost printed circuit board (PCB) technology and can be controlled by relatively inexpensive electronics. It allows rapid high-precision quantification of the number (or amount) of superparamagnetic nanoparticle reporters along the length of an LFA test strip. It has a detection limit of 10^−10 emu, which is equivalent to detecting 4 ng of superparamagnetic iron oxide (Fe3O4) nanoparticles. The femtoMag was used to quantify the hCG pregnancy hormone by quantifying the number of 200 nm magnetic reporters (superparamagnetic Fe3O4 nanoparticles embedded into a polymer matrix) immuno-captured within the test line of the LFA strip. A sensitivity of 100 pg/mL has been demonstrated. Upon further design and control electronics improvements, the sensitivity is projected to be better than 10 pg/mL. Analysis suggests that an average of 10^9 hCG molecules are needed to specifically bind 10^7 nanoparticles in the test line. The ratio of the number of hCG molecules in the sample to the number of reporters in the test line increases monotonically from 20 to 500 as the hCG concentration increases from 0.1 ng/mL to 10 ng/mL. The low-cost easy-to-use femtoMag platform offers high-sensitivity/high-precision target analyte quantification and promises to bring state-of-the-art medical diagnostic tests to the point of care.

Introduction

The ever-growing role of molecular biomarkers in disease diagnosis and prognosis, as well as the prediction and assessment of treatment outcomes, calls for effective tools for clinically actionable biomarker detection and measurement. Many such tools capable of highly quantitative biomarker detection have been developed and are readily available in state-of-the-art centralized clinical laboratories. However, there remains a critical need for low-cost, easy-to-use, point-of-care diagnostic assays that can help facilitate healthcare in resource-limited environments and lead to improvements in diagnostics, treatment, and patient outcomes.

The lateral-flow assay (LFA) platform-the technology underlying the home pregnancy test-has been extensively explored for applications in rapid and inexpensive point-of-care medical diagnostics due to its low cost, ease of use, and sensitivity sufficient for many applications. A typical LFA readout is based on observing a color change of the test line due to the selective accumulation of reporters in the test line volume. Visual information is limited to the reporters near the surface of the test strip, with limited or no ability to capture the signal from the bulk of the test strip. Analyte quantification is challenging as a result of variations in lighting, sample volume, and the operator's visual acuity. Despite numerous advancements in colorimetric labels, including gold nanocages [1], nanotubes [2], carbon nanoparticles [3], and cellulose nanobeads [4], among others, the sensitivity, signal range, and quantifiability of colorimetric-label-based LFAs remain a challenge. The use of other optical labels such as fluorescent dyes [5,6] or luminescent nanoparticles [7,8] faces various challenges due to reagent complexity, cost, and reader complexity [9].
In this study, an ultrahigh-sensitivity inductive transducer, the femtoMag, designed for the detection and quantification of superparamagnetic nanoparticle reporters immuno-captured in the volume of the test line of an LFA membrane, is demonstrated. Magnetic reporters provide a number of unique advantages over their optical counterparts: (1) magnetic fields do not interact with biological materials, so magnetic field-based detection is immune to signal degradation and distortion inherent to optical detection; (2) magnetic fields are not affected by LFA media, so every magnetic reporter within the test line volume contributes to detection; and (3) the properties of magnetic reporters can be tuned to match the biomarkers to optimize trapping efficiency and detection [10][11][12]. The femtoMag also provides a number of technological advancements over the current state-of-the-art magnetic biosensor technologies, including (1) volumetric detection, (2) high sensitivity, (3) direct quantitation, (4) easy integration with LFA technology, (5) portable electronic controls, and (6) low-cost manufacturability using conventional high-throughput PCB manufacturing technology. The femtoMag platform presented here enables LFA readout quantification using superparamagnetic nanoparticles as reporters. A typical LFA comprises a sample pad, a conjugate release pad, a test line, a control line, and an absorbent pad as illustrated in Figure 1. A sample containing the analyte of interest is applied to the sample pad. Capillary forces drive the sample through the nitrocellulose membrane towards the absorbent pad. As the sample flows through the conjugate pad, it interacts with and releases predeposited reporters, which are functionalized to bind to the target analyte. Superparamagnetic reporters will be captured by capture antibodies at the test line if the target analyte is present (sandwich assay). The control line is functionalized to capture magnetic reporters regardless of the presence of the target analyte. Any excess fluid is wicked by the absorbent pad. Magnetic reporters trapped in the test and control lines are detected by the femtoMag. The signal at the control line validates the test and the strength of the signal at the test line reflects the concentration of the targeted analyte.
In this work, we demonstrate the quantitative measurement of superparamagnetic Adembead reporters trapped in the test line of an LFA test strip, using the femtoMag at room temperature. The femtoMag design achieves sensitivities comparable to a recently reported induction-based biosensor that exploits the nonlinear magnetic behavior of magnetic nanoparticle reporters by exciting the reporters with a combination of two frequencies with magnetic fields strong enough to saturate the material [13,14]. With the exception of an exceedingly complex detection modality (not suitable for routine field applications) applied in conjunction with a conventional ac susceptometer [15], there are no published reports of magnetic nanoparticle inductive detection sensitivities approaching the femtoMag within 1000-fold.
A benchtop susceptometer can detect ~60 billion 50 nm superparamagnetic Fe3O4 nanoparticles (20 µg) [16], while microfabricated planar coil sensors can detect 2 million 400 nm superparamagnetic Fe3O4 nanoparticles (0.3 µg) [17,18]. On the other hand, inductive nanofabricated transformers can detect the presence of a single 1 µm magnetic bead but require the bead to settle precisely inside a 1 µm ring [19,20]. The femtoMag platform may boost LFA sensitivity well in excess of what is practically achievable with ELISA (enzyme-linked immunosorbent assay), the gold standard for immunoassay diagnostics, or with any other technology of comparable cost (~$3 per disposable chip cartridge and a ~$300 cartridge reader). Furthermore, compared to other point-of-care assays, our low-complexity LFA approach is expected to be among the easiest to use, far more sensitive than most (standard colorimetric LFA [21] and light scattering), and significantly less expensive than other assays of potentially sufficient sensitivity (upconverting phosphor LFAs [22], rotor/CD devices [23], immuno-PCR on emerging moderate-complexity PCR devices [24], or microfluidics [25,26]). PCB-Based Magnetometer The femtoMag is a differential inductive transducer designed to seamlessly integrate with an LFA. A schematic depicting the operating principles of the femtoMag is shown in Figure 2a. An excitation coil is used to generate an alternating magnetic field across the detector. The detector comprises a reference coil and a sensing coil connected in series but wound in opposite directions. In the absence of magnetic material in the sensing coil, the induced voltage across the detector is equal to zero. Magnetic materials such as superparamagnetic nanoparticle reporters in the sensing coil disrupt the balance, resulting in a non-zero readout voltage that is proportional to the amount of magnetic material in the sensing coil. This differential measurement minimizes environmental noise and enables precise voltage measurements on the order of 1 µV.
The femtoMag transducers used in this work are manufactured using conventional printed circuit board (PCB) technology. A low-cost ($5/in^2 for three chips) and fast-turnaround (seven days to ship) prototyping service provider, OSHPark, was used [27]. Figure 2b is a schematic depicting the femtoMag transducer. The transducer is constructed using a pair of two-layer PCB boards (1.6 mm thick FR4 substrate). The excitation coil is a copper trace on one side of a PCB. The copper trace is 10 mm long × 1 mm wide × 0.035 mm thick with an electroless nickel immersion gold (ENIG) finish. The excitation coil is insulated using solder mask over bare copper (SMOBC). This part is fabricated by OSHPark. The sensing coil and the reference coil are microfabricated copper traces above the excitation coil. Each microfabricated copper trace is 10 mm long × 0.25 mm wide × 0.001 mm thick and is deposited via shadow mask electron beam evaporation directly above the excitation coil. In future implementations, the femtoMag transducers will be manufactured using dual-layer PCB manufacturing technology to eliminate the need for shadow mask evaporation of the sensing and reference coils. The pair of PCBs are aligned using four pins that pass through the four alignment through-holes in the PCBs. The gap between the PCBs is set using precision spacers (stainless steel thickness gauges) before soldering the alignment pins to fix the assembly.
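As a rough cross-check of the trace geometry given above, the DC resistance of the excitation and sensing traces can be estimated from their dimensions. The short Python sketch below does this; the copper resistivity and the skin-depth estimate at 90 MHz are assumed textbook values, not figures reported here.

```python
import math

RHO_CU = 1.68e-8   # ohm*m, room-temperature copper resistivity (assumed textbook value)
MU0 = 4 * math.pi * 1e-7

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """DC resistance of a rectangular copper trace."""
    return rho * length_m / (width_m * thickness_m)

# Excitation coil trace: 10 mm x 1 mm x 0.035 mm (dimensions from the text)
r_excitation = trace_resistance(10e-3, 1e-3, 0.035e-3)

# Sensing/reference trace: 10 mm x 0.25 mm x 0.001 mm (dimensions from the text)
r_sense = trace_resistance(10e-3, 0.25e-3, 0.001e-3)

# Skin depth at the 90 MHz drive frequency; if it is smaller than the trace
# thickness, the effective AC resistance will exceed the DC estimate.
skin_depth = math.sqrt(RHO_CU / (math.pi * 90e6 * MU0))

print(f"excitation trace DC resistance: {r_excitation * 1e3:.1f} mOhm")
print(f"sensing trace DC resistance:    {r_sense * 1e3:.0f} mOhm")
print(f"skin depth at 90 MHz:           {skin_depth * 1e6:.1f} um")
```

Because the 35 µm excitation trace is several skin depths thick at 90 MHz, the effective drive current at the operating frequency is expected to differ from a simple DC estimate, which is consistent with the later observation that the analytical model overestimates the measured signal.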
The sensing coil induced voltage, due to the presence of superparamagnetic nanoparticle reporters, can be determined by calculating the flux change generated by a magnetic dipole under the influence of an alternating magnetic field. The flux density generated by a magnetic dipole is given by Equation (1) [28,29], where χ is the susceptibility of the magnetic reporter; H_a is the external magnetic field; V is the volume of the particle; µ_0 is the permeability of free space; r is the radial vector from the center of the magnetic particle; and x, y, and z are the Cartesian distances from the center of the nanoparticle. From Faraday's law, only the normal (z) component of the magnetic flux contributes to the induced voltage. The normal component of the magnetic flux density from a magnetic dipole located at the center of the xy plane of a rectangular induction coil with dimensions x = 2t and y = 2h, at a distance z = 2p from the xy plane, is given by Equation (2), whose geometry-dependent factor is the induction coil shape factor. The time-varying flux results in an induced voltage given by Equation (3) [30], where f is the frequency of the excitation field. Equation (3) represents an approximate mathematical representation of the femtoMag readout voltage. The analytical model provides an intuitive understanding of the basic parameters that contribute to the induced voltage. To understand how the device might perform before committing resources to fabrication and experiments, we performed numerical simulations of the device using the boundary element magnetic field modeling package Amperes 3D by Integrated Engineering Software [31]. Simulating the femtoMag provides more accurate details on the magnetic field distribution in the sensing area and the reference area, as shown in Figure 3. The red and blue regions represent the maximum and the minimum values of the magnetic field, respectively. The excitation coil is illustrated in transparent brown and the test line is outlined in transparent green. The magnetic reporters are assumed to be distributed uniformly within the test line and are modeled as a homogeneous magnetic material with a magnetic permeability corresponding to the number of trapped reporters. The difference between the flux in the sensing and reference areas is used to calculate the induced voltage. LFA Membrane and Test Sample Preparation A model system based on the hCG pregnancy hormone was used to evaluate the performance of the femtoMag integrated with LFA nitrocellulose membranes. Whatman FF80HP nitrocellulose membranes (GE Healthcare) were used, with a length of 46 mm and a width of 3 mm. The membranes were functionalized with polyclonal anti-α hCG antibody (#ABACG-0500, Arista Biologicals, Allentown, PA, USA) at the test line and anti-mouse antibody (#ABGAM-0500, Arista Biologicals) at the control line. A BioDot XYZ3060 was used to dispense the antibodies (at 1 µg/cm) to form the test and control lines. Test samples were prepared using hCG model protein diluted in LFA buffer (1% Tween-20, 0.5% BSA, in PBS, pH 7.4), and 10 µL of each sample was mixed with 1 µL of 200 nm Adembead reporters functionalized with mouse monoclonal anti-β hCG antibodies (#ABBCG-0402, Arista Biologicals) [12]. Blank samples with no hCG were used as controls. System Integration and Calibration The integration of the LFA membrane with the femtoMag transducer is illustrated in Figure 4. First, the LFA membrane is aligned with the bottom PCB board. The femtoMag chip has two identical detectors.
While one of them acts as a sensing detector, which is used to quantify the concentration of superparamagnetic reporters such as Adembeads in the test line, the second acts as a reference detector located away from the test line. Thus, the test line signal reflects the difference between the amount of Adembead reporters specifically trapped in the test line and the amount of Adembead reporters nonspecifically trapped in the LFA membrane. Figure 4c is a photograph of the prototype femtoMag chip. The gap on the chip is set at 0.3 mm to enable the 0.2 mm thick test strips to be dragged through. This allows a single detector on the femtoMag to measure both the test line and the control line by sliding the test strip through the transducer. The Adembead concentration along the strip was profiled as the LFA membrane was fed through the femtoMag transducer at a rate of 1 mm/s while taking one measurement every 10 ms. The excitation coil of the femtoMag is driven by an RF generator, a Rigol DG830, set to 5 Vp-p at 90 MHz. The spectrum analyzer, a Rigol DSA832E with an average noise level of 100 nV, was used to measure the induced voltage generated by the femtoMag. A linear actuator is used to drag a test sample through the femtoMag chip. A LabVIEW application was developed to control the data acquisition.
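The scan just described (1 mm/s drag speed, one reading every 10 ms) yields a voltage-versus-position profile of the strip. A minimal sketch of how such a profile could be reduced to test-line and control-line readings is shown below; the signal array, baseline handling, and peak selection are illustrative assumptions, not part of the published acquisition software.

```python
import numpy as np

DRAG_SPEED_MM_S = 1.0      # strip feed rate through the transducer (from the text)
SAMPLE_PERIOD_S = 0.010    # one measurement every 10 ms (from the text)

def profile_positions(n_samples):
    """Map sample index to position along the strip in mm."""
    return np.arange(n_samples) * DRAG_SPEED_MM_S * SAMPLE_PERIOD_S

def line_signals(voltage_uv, min_separation_mm=5.0):
    """Return (position, peak voltage) for the two largest peaks,
    e.g. the test line and the control line, after crude baseline removal."""
    voltage_uv = np.asarray(voltage_uv, dtype=float)
    pos = profile_positions(voltage_uv.size)
    signal = voltage_uv - np.median(voltage_uv)   # median as baseline (assumption)
    peaks = []
    for idx in np.argsort(signal)[::-1]:           # indices by descending signal
        if all(abs(pos[idx] - p) > min_separation_mm for p, _ in peaks):
            peaks.append((pos[idx], signal[idx]))
        if len(peaks) == 2:
            break
    return sorted(peaks)

# Synthetic example: a 46 mm strip with two narrow peaks at 20 mm and 30 mm.
pos = profile_positions(4600)
demo = 5 + 80 * np.exp(-((pos - 20) ** 2) / 0.5) + 150 * np.exp(-((pos - 30) ** 2) / 0.5)
print(line_signals(demo))
```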
Results and Discussion The sample strips were prepared by dispensing 50 µL test samples with various hCG concentrations (from 0.1 to 10 ng/mL) onto the sample pad of each test strip. The sample strips were then washed with 200 µL LFA buffer, dried at ambient conditions, and measured by dragging them through the sensor at a rate of 1 mm/s. Each sample strip was measured three times. Finally, we used a razor to extract the test line and control line to independently measure the amount of magnetic material using an alternating gradient field magnetometer (AGFM). An image of the 2.5 ng/mL hCG test strip (with the test line not detectable by the naked eye) is shown in Figure 5a. A scanning electron microscope (SEM) image of the test line revealing sparse loading with Adembead reporters on the nitrocellulose is shown in Figure 5c. The Adembead reporters (Carboxyl-Adembeads, #02122, Ademtech, Pessac, France) are monodisperse, superparamagnetic iron oxide (Fe3O4) nanoparticles encapsulated by a highly cross-linked hydrophilic polymer shell [32]. The M-H loop for the Adembead reporter was measured with an AGFM to determine a magnetic susceptibility of 0.17 (Figure 5e). The peak signal from the femtoMag is compared with the results of the AGFM in Figure 6a.
The femtoMag peak signal and AGFM measurements show a close correlation that increases with hCG concentration. The signal for the control sample (hCG concentration of 0 ng/mL) is not zero because some of the Adembead reporters are trapped nonspecifically as they flow through the LFA membrane. Significantly, both the femtoMag and the AGFM can detect the minute amount of Adembead reporters nonspecifically trapped in the test line of the control sample. The amount of nonspecifically bound reporters that may be trapped in the nitrocellulose membrane outside the lines is below the detection limit of both the femtoMag and the AGFM. The femtoMag can reliably detect an hCG concentration below 100 pg/mL. The variability in the femtoMag signal (~12 µV) is approximately 12 times larger than the femtoMag noise floor (~1 µV), suggesting an opportunity to improve the sensitivity of detecting hCG to 0.01 ng/mL or below. From the AGFM measurements, the number of Adembead reporters trapped inside the test line is given by N = M_T/(M_s ρ V), where M_T is the saturation magnetic moment of the test line measured by the AGFM, M_s is the saturation magnetization of the Adembead reporters (40 emu/g), ρ is the Adembead reporter density (2.0 g/cm^3), and V is the Adembead reporter volume (4.2 × 10^-12 mm^3). A graph comparing the number of hCG molecules present in the test sample versus the number of Adembead reporters trapped in the test line is shown in Figure 6b. On average, this test requires approximately 100 hCG molecules for each Adembead reporter in the test line. The number of hCG molecules in the sample per Adembead reporter found in the test line varies with hCG concentration as shown in Figure 7. The ratio of hCG molecules to the Adembead reporters in the test line increases monotonically from approximately 20 at a concentration of 0.1 ng/mL to 500 at 10 ng/mL.
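The reporter-count relation above, N = M_T/(M_s ρ V), and the molecules-per-reporter ratio can be reproduced with a few lines of arithmetic. The sketch below is illustrative only: the hCG molar mass and the example M_T value are assumptions, while the reporter properties and the applied sample volume are taken from the text.

```python
M_S = 40.0                      # emu/g, saturation magnetization of Adembeads (from the text)
RHO = 2.0                       # g/cm^3, reporter density (from the text)
V_BEAD_CM3 = 4.2e-12 * 1e-3     # reporter volume: 4.2e-12 mm^3 converted to cm^3
MOMENT_PER_BEAD = M_S * RHO * V_BEAD_CM3   # emu per reporter

AVOGADRO = 6.022e23
HCG_MOLAR_MASS = 37_000.0       # g/mol, approximate hCG molar mass (assumption)
SAMPLE_VOLUME_ML = 0.050        # 50 uL applied to the sample pad (from the Results section)

def reporters_in_test_line(m_t_emu):
    """Number of Adembeads from the test-line saturation moment M_T (AGFM)."""
    return m_t_emu / MOMENT_PER_BEAD

def hcg_molecules(conc_ng_per_ml):
    """Number of hCG molecules applied to the strip."""
    mass_g = conc_ng_per_ml * 1e-9 * SAMPLE_VOLUME_ML
    return mass_g / HCG_MOLAR_MASS * AVOGADRO

# Example: a hypothetical test-line moment of 3.4e-6 emu corresponds to ~1e7 beads,
# and a 1 ng/mL sample supplies roughly 8e8 hCG molecules (~80 molecules per bead),
# in line with the ~100 molecules per reporter quoted above.
n_beads = reporters_in_test_line(3.4e-6)
n_hcg = hcg_molecules(1.0)
print(f"{n_beads:.2e} reporters, {n_hcg:.2e} hCG molecules, ratio {n_hcg / n_beads:.0f}")
```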
The hCG analyte is mixed with the Adembead reporters before applying them to the LFA, allowing its binding to the antibodies on the surface of the Adembeads to go to equilibrium. Therefore, the reaction kinetics of the reporters decorated with hCG as they flow through the available binding sites on the test line may be the limiting factor for the test sensitivity. Studying this phenomenon is beyond the scope of this report, but with the femtoMag, it is possible to methodically investigate this observation to understand the various factors (e.g., incubation time, the Adembead amount, and flow rate) that can improve the performance of the LFA. A comparison of the femtoMag signal to the predictions of the analytical model and the simulation is shown in Figure 8. The analytical model and simulation results approximately agree, and both overestimate the actual femtoMag signal, likely due to the overestimation of the excitation coil current. The femtoMag is driven by an ac voltage and the actual current flowing through the excitation coil depends on the coil's impedance and the impedance of various components in the circuit. The femtoMag's performance can be improved by optimizing the electronics to deliver greater power and by reducing parasitic resistances in the chip. Furthermore, since the femtoMag signal is linearly proportional to the drive current, the drive current can be used to increase the dynamic range of the measurement.
Conclusions and Future Work The quantification of hCG in test samples using the femtoMag was demonstrated and verified using AGFM measurement. It was found that for every Adembead reporter found in the test line, there are approximately 100 hCG molecules in the sample. The ratio of target hCG molecules in the sample to the number of Adembead reporters measured at the test line increases monotonically from 20 to 500 as hCG concentration increases from 0.1 ng/mL to 10 ng/mL. The total number of Adembead reporters found in the test and control lines amounts to no more than 25% of the total number of Adembead reporters loaded into the sample. The femtoMag can make measurements every 1 ms as the sample strip is fed through the transducer to profile the number of magnetic reporters trapped along the LFA membrane. The ability to quickly and reliably quantify the number of magnetic reporters along the entire LFA strip provides a powerful tool to study the transport of magnetic reporters through an LFA membrane and their binding kinetics. The femtoMag detector is an inductive sensor with a well-established principle of operation. An analytical model backed by simulations was used to design the current femtoMag transducer. There are many opportunities to improve LFA technology and the femtoMag can provide real-time, high throughput, and quantitative analysis of LFAs. Additional design optimization will help to further improve femtoMag sensitivity. Future work will also focus on improving the signal-to-noise ratio (SNR). SNR improvements are expected to help improve the limit of detection by at least a factor of 10 to enable the detection of 1,000,000 Adembead reporters. In the current work, research-grade digital electronics were used to control the femtoMag. A linear actuator was also used to drag the sample strip through the femtoMag for measurements. A practical field-deployable biosensor can be developed with minor modifications to the system. There are many low-cost and small footprint options available to replace the waveform generator and voltmeter used in this work. It is possible that a femtoMag reader controlled and powered by a smartphone can be produced for less than $300. The femtoMag chip is currently manufactured at a cost of $3, which can be reduced by a factor of 10 when ordering 10,000 or more units. The assembly of the femtoMag with an LFA can readily be automated to improve quality, reliability, and throughput. For consumer use, the LFA will be fixed in the femtoMag assembly. Multiple detectors can be added without increasing the complexity of design or fabrication. The first detector quantifies the test line, the second detector validates the control line, and a third detector can be used to quantify a second test line. Testing multiple target analytes (multiplexing) can help improve the overall sensitivity and specificity of the assay.
2019-12-12T10:21:25.025Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "f45dca96004732ec4f33c708ead32aed018394ab", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/19/24/5433/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9b28821a028074c5b4aec6038bc25f4028ec3ea0", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Computer Science", "Medicine" ] }
269676400
pes2o/s2orc
v3-fos-license
Strain-Based Assessment to Evaluate Damage Caused by Deep Rolling The positive effects of deep rolling on fatigue strength (reduced surface roughness, work hardening and compressive residual stress in the near-surface region) are achieved by controlled high plasticisation of the treated material. However, excessive and/or repeated plasticising poses a risk of damage to the machined component. This paper investigates the damage caused by deep rolling of a railway axle. Two sections of the axle are experimentally deep rolled repeatedly at different feed rates until damage is detected. For comparative analysis, these experiments are numerically analysed and the damage is assessed using the strain-based damage calculation. The results are compared and a damage sum of ~120% is evaluated for both tests, thus developing a reliable and conservative assessment method. The single deep rolling treatment at a feed rate of 0.25 mm causes damage of 6.1%, and at a feed rate of 0.5 mm, damage of 4.7%. The developed and experimentally validated evaluation method allows for investigating the limits of applicability of different deep rolling parameters. The influence of the deep rolling force and feed rate and a proposed optimisation with multiple deep rolling with reduced deep rolling forces are investigated. Introduction Deep rolling is used to improve fatigue strength and inhibit crack initiation and propagation. The post-treatment process is most commonly used on rotationally symmetric components. During this process, a deep rolling tool is pressed against the rotating surface of the component to be machined and simultaneously moved in a longitudinal direction at a constant feed rate, very similar to the turning process. However, no material is removed in the contact between the tool and the component; instead, plastic deformation is induced. This reduces surface roughness, work-hardens the area near the surface and introduces residual compressive stresses [1,2]. Three effects that increase fatigue strength are thus achieved in one operation through controlled plasticising. The process can be easily integrated into the manufacturing process as a final operation and is therefore comparably cost effective. As a result, deep rolling is widely used in a variety of industries, for example, in the automotive industry for the post-treatment of crankshafts, in the aerospace industry for the post-treatment of turbine components, and in the rail vehicle industry for the post-treatment of railway axles [1,[3][4][5][6][7]. Railway axles, which are the primary focus of this study, are, together with the mounted wheels, the connecting part between the superstructure and the railway vehicle. The railway axles are designed to transmit the track guiding forces, the tractive and braking forces and the bending load caused by the vehicle weight. The main load is the rotating bending load. These components are highly relevant to safety, and their failure can lead to train derailment and the associated risk to passengers. However, research and engineering work is underway to further develop railway axles. The aim is to reduce axle mass and thus unsprung mass. This will reduce the dynamic forces in wheel-rail contact, thereby reducing wear and optimising energy consumption during operation. This unsprung mass of the rolling stock is also decisive for the calculation of track access charges [8,9].
Currently, the benefits of deep rolling are not explicitly emphasised in the European standards EN 13103-1 [10] and EN 13261 [11] for the design of railway axles. However, the integration of these benefits remains critical to achieving a long-term mass reduction in newly manufactured components. In high-strength steels, such as the 34CrNiMo6 steel investigated in this study, it is the compressive residual stresses introduced that make the greatest contribution to improving fatigue and crack propagation behaviour [2,12,13]. These residual stresses and near-surface work hardening are achieved by plasticising the material. The increased work hardening is achieved by increasing the dislocation density in the material. These changing properties are analysed in terms of residual stress measurements and FWHM evaluation on the same railway axles used for the tests here, and the results are presented in [14]. In addition to the measurements, a numerical deep rolling simulation model is presented and validated with the residual stress measurements. The simulation model is used to investigate the influence of deep rolling parameters and optimisations on the residual stresses applied, and the results are presented in [15]. The influence of these on cracking behaviour [16] and fatigue strength [17] is also investigated. The application of the highest deep rolling force shows the most significant depth effect on the residual stresses applied, the best crack propagation behaviour, and also the best result in terms of fatigue strength. This would lead to the idea that better deep rolling results can be achieved by further increasing the deep rolling force. However, this is not the case; as mentioned above, plasticising changes the material and increases the dislocation density. This is further increased by increasing forces or repeated deep rolling, and the dislocations can coalesce and cause microcracking [18][19][20][21]. This, in turn, reduces the service life of the component in service. This phenomenon is often reported for the shot peening process, referred to as "over peening" [22][23][24][25][26][27]. With increasing Almen intensity [28][29][30], fatigue strength increases, reaches an optimum, and then decreases again, leading to suboptimal results. Similar behaviour is assumed for deep rolling, but no literature on this could be found. This is precisely where the present study comes in, as it investigates this behaviour. It is assumed that each deep rolling overrun with the appropriate deep rolling parameters causes "partial damage". However, up to a certain point this has a positive effect on fatigue strength, which has been mentioned several times in the literature [31][32][33][34], and then the achievable benefit diminishes.
Damage evaluation is performed using a strain-based damage assessment approach.On the one hand, this approach is based on allowable material values, in this case a strain curve estimated according to the Uniform Material Law (UML) [35,36] and checked with test results.On the other hand, the strain caused by deep rolling has to be determined.The strain loads are determined using the numerical deep rolling simulation model.The time history of strain and stress over time is analysed and subdivided for each processing step to determine the strain amplitudes that occur.Partial damage is determined according to the strain-based damage approach considering the influence of mean stress after [37].The "partial damage" is summarised to obtain the damage of a deep rolling overrun according to the linear damage accumulation, the Miner rule [38]. The deep rolling application is also repeated in the simulation, the damage is determined and compared with the test result.The comparison is promising, a valid damage sum is determined, and thus a conservative calculation method can be established to determine the damage caused by deep rolling. This approach allows an evaluation of the influence of deep rolling parameters and optimisations on the applied damage.The influence of the most influential deep rolling parameters, the deep rolling force and the feed rate, and the optimisation with multiple deep rolling with reduced deep rolling forces are investigated. The scientific contribution of the article can be summarised as follows: • Experimental determination of the number of deep rolling overruns until a certain surface damage is detected. • Development of a calculation method to determine the damage caused by deep rolling based on strain-based damage assessment and the linear damage accumulation.• Validation of the calculation method using the results of the experiments. • Determination of the impact of the deep rolling force, feed rate and deep rolling optimisation using multiple deep rolling with reducing forces on the damage introduced. Materials and Methods Deep rolling is used to increase the fatigue strength of railway axles.The increase in service life is achieved by controlled plasticisation of the near surface area.This process involves the reduction of surface roughness, work hardening of the near surface area and the introduction of compressive residual stresses.The increase in fatigue strength achievable depends on the correct choice of parameters.Similar to over-peening in shot peening, it is assumed that over-deep-rolling, and thus a deterioration in component properties, can occur due to excessive plasticising.This means that the material is subjected to high loads, resulting in high local strains well into the plastic range.In deep rolling, such undesirable effects may arise from excessive forces, too low feed rates, or repeated deep rolling of the same area. 
The aim of this investigation is to develop a reliable calculation method that can accurately assess the damage caused by deep rolling. The calculation is based on a numerical simulation model of the deep rolling process combined with a strain-based damage assessment. The result of the calculation will be compared with the result of the experimental investigation. Two experiments are performed for validation. The parameters used in the experiment and the simulation are chosen identically and are summarised in Table 1. The difference between the two experiments is the feed rate. With different feed rates, the same area is processed a different number of times and therefore plasticised differently. The feed rate FR_1 of 0.25 mm is used in Experiment 1, and the feed rate FR_2 of 0.5 mm is used in Experiment 2; all other parameters remain the same. When the term "feed rate" is used in this publication, it always refers to the actual distance between the tool tracks on the component. The experiments are carried out using a double roller tool with opposed discs. Due to the helical machining, half the feed rate of the lathe tool carriage feed is applied to the component surface. Therefore, for a feed rate FR_1 = 0.25 mm on the component, a feed rate of 0.5 mm is set on the machine, and for a feed rate FR_2 = 0.5 mm on the component, a feed rate of 1.0 mm is set on the lathe. Experimental Work Experiments 1 and 2 are carried out on two cylindrical sections of a railway axle, which has been used in previous studies [14]. As mentioned above, all the deep rolling parameters remain identical, except for the feed rate. The deep rolling tool is mounted on a lathe on the tool carriage, and the railway axle parts are clamped. The test set-up is shown in Figure 1. A twin tool with opposing discs is used for deep rolling. The dimensions of the discs are given in Table 1. The deep rolling force F_DR of 20 kN is generated hydraulically. The force is increased with a ramp at the beginning of the machining process, then held constant for 96 mm in the test area and reduced again at the end. A rotational speed of 180 rpm is used. During machining, the process is cooled and lubricated with coolant.
For the investigation, the deep rolling process described for Experiments 1 and 2 is repeated in the specified areas with identical parameters until damage is found on the railway axle. Magnetic particle inspection is used to detect the damage. The surface is repeatedly inspected for cracks between overruns. The purpose of the test is to determine the number of deep rolling overruns until damage occurs. For validation, this result is compared with the results of the strain-based damage assessment. The calculation methodology is presented in the following section.
Strain-Based Damage Assessment In order to optimise the application of the deep rolling process, it is important to calculate the damage caused by the treatment itself. An approach based on strain-based damage assessment is used. The permissible strength of the material is determined using the strain-life curve according to the Uniform Material Law (UML) [35] and extended by the influence of mean strain/stress using the Smith, Watson & Topper (SWT) [37] approach. The strain amplitudes occurring during deep rolling are analysed by a simulation model of the deep rolling process, and the partial damage introduced by each individual tool pass is determined. The total damage that occurs in a deep rolling overrun can be calculated from the partial damage using the linear damage accumulation approach. The calculation methodology is explained in detail in the following subsections. Strain-Life Curve The cyclic strain strength of the railway axle material is determined using the UML. The UML determines the strain-life curve based on the Young's modulus and the ultimate tensile strength of the material. Its equation is based on the approach of [35], is also presented in [36] and is given by Equation (1). The relationship between the strain amplitude ε_a and the tolerable number of load cycles N is given. The equation is composed of an elastic ε_a,e and a plastic ε_a,p part. The other parameters used are defined in Table 2, which lists each parameter with its symbol, formula, unit and value. The Young's modulus and the tensile strength for the 34CrNiMo6 steel investigated are taken from the tensile test results in [14]. The tests are carried out on the same railway axle as used for the experiments presented in this study. Table 2 summarises the numerical values used and the parameters calculated for the UML definition. Figure 2 shows the strain-life curve according to the UML using the parameters specified above in Table 2. The total strain-life curve and also its elastic and plastic components are shown. The validity of the UML for the material analysed is verified by the test results. In addition to the tensile tests, numerous cyclic tests were carried out on specimens taken from the analysed railway axle, some of which are presented in [14]. The results, which include different applied strain amplitudes and a constant load strain ratio of R_ε = −1, are also plotted in the same diagram. Again, the total strain amplitude and the elastic and plastic components are plotted. The estimated strain-life curve shows good agreement with the test results.
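Equation (1) has the usual elastic-plus-plastic (Basquin/Coffin-Manson) form with UML-estimated coefficients. The short sketch below evaluates such a curve; the coefficient set (σ'_f = 1.5 R_m, b = -0.087, ε'_f = 0.59 ψ, c = -0.58) is the commonly quoted UML estimate for low-alloy steels and the numerical E and R_m values are placeholders, since the measured values belong to Table 2 and [14].

```python
E_MPA = 206_000.0   # Young's modulus, placeholder value for steel (actual value in Table 2)
RM_MPA = 900.0      # ultimate tensile strength, placeholder (actual value in Table 2)

def uml_coefficients(E=E_MPA, Rm=RM_MPA):
    """UML coefficient estimates for low-alloy steels (assumed standard form)."""
    psi = 1.0 if Rm / E <= 3e-3 else 1.375 - 125.0 * Rm / E
    return {
        "sigma_f": 1.50 * Rm,   # fatigue strength coefficient
        "b": -0.087,            # fatigue strength exponent
        "eps_f": 0.59 * psi,    # fatigue ductility coefficient
        "c": -0.58,             # fatigue ductility exponent
    }

def strain_amplitude(N, E=E_MPA, Rm=RM_MPA):
    """Total strain amplitude eps_a = eps_a,e + eps_a,p for N load cycles (Equation (1))."""
    k = uml_coefficients(E, Rm)
    eps_el = k["sigma_f"] / E * (2.0 * N) ** k["b"]
    eps_pl = k["eps_f"] * (2.0 * N) ** k["c"]
    return eps_el + eps_pl

# A few points of the strain-life curve, analogous to Figure 2
for N in [1e2, 1e4, 1e6]:
    print(f"N = {N:10.0f}: eps_a = {strain_amplitude(N):.4f}")
```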
Mean Stress/Strain Consideration-SWT Approach The UML and the resulting strain-life curve are valid for an R-ratio of R = −1, and therefore, no mean stress/strain is considered. The repeated side-by-side rolling of the deep rolling disc tools, offset by the feed rate during deep rolling, causes mean stress/strain, and therefore, different conditions prevail for each rolling pass. A common approach to considering mean stresses/strains is the SWT approach [36,37]. The damage parameter P_SWT is calculated according to Equations (2) and (3). The same parameters are used as in the UML and are given in Table 2. The additional parameter σ_max considers the influence of the mean stress/strain. Equations (2) and (3) can be set equal, resulting in a mean stress/strain-dependent relationship between the strain amplitude ε_a,i and the tolerable number of load cycles N. As the mean stress/strain changes for each pass of each deep rolling tool, the damage parameters are recalculated for each pass. This is described by the indices i in the equation for the dependent equation parameters. All other parameters in the equation are constant.
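Combining the UML curve with the SWT parameter gives the mean-stress-corrected allowable cycle count used later in Equation (5). A minimal sketch is given below; it uses the standard SWT formulation P_SWT^2 = σ_max ε_a E together with the UML coefficients sketched above, which is an assumption about the exact form of Equations (2) and (3), not a transcription of them.

```python
from scipy.optimize import brentq

E_MPA = 206_000.0                      # placeholder Young's modulus (actual value in Table 2)
SIGMA_F, B = 1350.0, -0.087            # UML estimates used for illustration (see previous sketch)
EPS_F, C = 0.49, -0.58

def allowable_cycles(eps_a, sigma_max, E=E_MPA):
    """Solve the SWT/UML relation for the allowable number of cycles N_i."""
    target = sigma_max * eps_a * E                 # load side: P_SWT squared
    def residual(log10_2N):
        two_n = 10.0 ** log10_2N
        material = SIGMA_F**2 * two_n**(2 * B) + SIGMA_F * EPS_F * E * two_n**(B + C)
        return material - target
    log10_2N = brentq(residual, -2.0, 12.0)        # bracket 2N between 1e-2 and 1e12
    return 0.5 * 10.0 ** log10_2N

# Example: one disc pass with eps_a,i = 2 % and sigma_max,i = 900 MPa (illustrative values)
print(f"N_i ~ {allowable_cycles(0.02, 900.0):.1f} cycles")
```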
Numerical Simulation Model of the Deep Rolling Process The deep rolling simulation model presented in [14] and set up in MSC Marc is used to determine the strain amplitudes. To determine the damage caused by deep rolling, the allowable number of load cycles N is determined for each deep rolling disc tool based on the strain amplitude that occurs. The simulation model, validated with residual stress measurements, is used to analyse the influence of deep rolling parameters on the introduced residual stresses [15]. The simulation model basically consists of a simplified cuboid section of the railway axle and deep rolling disc tools. An elastic-plastic Chaboche material model is applied to the railway axle section. The material model is parameterised on the basis of the uniaxial cyclic tests mentioned in Section 2.2.1 and presented again in [14]. The numerical model is capable of simulating the same parameters as used in the experiment. The difference from real deep rolling is that, within the simulation, the discs move rather than the railway axle. The cuboid section of the railway axle is constrained with symmetry constraints on all sides except the surface to represent the surrounding material. Several discs are modelled in parallel, aligning in dimensions with those used in the experimental setup. These roll over the surface one after the other. First, the deep rolling force of 20 kN is applied, and frictional contact (µ = 0.1) causes the disc to roll over the surface while the force is kept constant. Finally, the disc stops and the force is reduced again. This process is repeated for each disc in turn to reduce simulation time. The distance between the discs is chosen so that they do not influence each other and do not affect the simulation result. Figure 3a demonstrates the railway axle section during deep rolling. The von Mises stresses are presented. Small areas of high local stresses, further along the Y-axis, represent the contact between the disc tool and the simulation model. The larger area of high stress, closer to the Y-axis, shows the residual stresses already introduced. To determine the stresses and strains that occur during deep rolling, a history plot is evaluated at the evaluation node shown in the figure, on the surface in the centre of the simulation model. The evolution of the von Mises stress and the equivalent of total strain over the duration of the simulation is shown in Figure 3b. For better visualisation, the result of a deep rolling pass from Experiment 2 with a higher feed rate is shown as an example.
Strain Amplitude Definition

From the stress and strain over time curves shown in Figure 3b, the required strain changes and maximum stresses for each disc are derived. This is explained using Figure 4, which shows a section of Figure 3b. First, the time sequence is subdivided for each disc. This is indicated by the red lines. In between lie the strain and stress changes for each passing disc. The time intervals with the largest strain changes, discs 15 to 19, are shown. The stress and strain state during deep rolling is multiaxial and changes with each disc passage. Initially, deep rolling takes place in front of the evaluation point, and then the discs approach the evaluation point at a constant feed rate. This is followed by several deep rolling passes with the discs in direct contact with the evaluation point, and finally the discs move farther and farther away. In order to be able to compare the strains and stresses with the uniaxial strain-life curve, the equivalent total strain and the equivalent stress, the von Mises stress, are used for the damage calculation.
To define the strain range for each disc, the maximum strain value ε max and the minimum strain value ε min of the equivalent strain occurring in the observed time period are determined and the difference is calculated. The corresponding strain amplitude ε a,i is determined by halving the strain range ∆ε i according to Equation (4). Finally, the maximum stress σ max,i remains to be determined. For this, the maximum value of the von Mises stress is taken for the time period of the respective disc pass.

Damage Assessment and Linear Damage Accumulation

Now that all parameters are defined, the allowable number of load cycles N i can be determined for each deep rolling disc passage i. This determination is achieved by solving Equations (2) and (3). Conversion to the strain amplitude ε a,i gives Equation (5). This equation is solved using a numerical non-linear optimisation solver. The allowable number of load cycles N i for each disc passage i is then obtained, which is calculated as a function of the strain amplitude ε a,i and taking into account the mean stress/strain by σ max,i.

The allowable number of load cycles N i is used to calculate the partial damage D i by Equation (6). This is calculated by dividing the occurring number of load cycles N by the allowable number of cycles N i. However, as the allowable number of cycles varies for each disc pass, no cycle classification is used and therefore the number of load cycles is always set to N = 1. In order to determine the damage of a deep rolling overrun D j from the partial damage D i caused by the individual discs, linear damage accumulation is applied, so that the partial damages are summed from disc 1 (i = 1) up to the total number of discs i = N D required in the simulation.

Investigated Parameters and Optimisation

The validated calculation method is designed to evaluate the influence of deep rolling parameters and optimisations on the damage caused. This is to ensure that even with repeated deep rolling or lower feed rates, for example, no damage and thus component failure is caused by the process application itself. In [15], the influence of deep rolling parameters and an optimisation proposal regarding the introduced residual stresses is investigated. The most influential parameters, specifically the deep rolling force and feed rate, and the proposed optimisation are selected and investigated in this study.

The damage calculation is used for the deep rolling forces 20 kN, 15 kN, 10 kN, 5 kN and 2 kN, and the results are presented in Section 3.3.1. The feed rates 0.25 mm and 0.5 mm are already used for validation in Experiment 1 and Experiment 2. In addition, the investigation is extended to the feed rates 1.0 mm and 2.0 mm. The results are presented in Section 3.3.2. The optimisation proposal is based on repeated deep rolling with reduced deep rolling forces. The combinations 20 kN followed by 5 kN, 20 kN followed by 10 kN, and 20 kN followed by 10 kN and finally 5 kN are analysed. The results are presented in Section 3.3.3.
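As an illustration of the evaluation chain described above (Equations (4) to (6)), the following Python sketch extracts the strain amplitude and maximum von Mises stress for each disc passage from simulated histories, solves the SWT/UML relation numerically for the allowable number of cycles, and accumulates the damage of one overrun by Miner's rule. The material parameters are placeholders (the actual values belong in Table 2), the histories and disc time windows would come from the finite element model, and the standard SWT formulation assumed earlier is used; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder UML parameters (illustrative only; take the real values from Table 2)
E = 206000.0     # Young's modulus [MPa]
sig_f = 900.0    # fatigue strength coefficient sigma'_f [MPa]
b = -0.087       # fatigue strength exponent
eps_f = 0.59     # fatigue ductility coefficient epsilon'_f
c = -0.58        # fatigue ductility exponent

def allowable_cycles(eps_a, sig_max):
    """Solve P_SWT(load) = P_SWT(curve) for the allowable number of cycles N."""
    p_load = np.sqrt(max(sig_max, 0.0) * eps_a * E)

    def residual(log_2n):
        two_n = 10.0 ** log_2n
        p_curve = np.sqrt(sig_f**2 * two_n**(2 * b) + sig_f * eps_f * E * two_n**(b + c))
        return p_curve - p_load

    lo, hi = 0.0, 12.0                 # search 2N between 1 and 1e12 on a log scale
    if residual(hi) > 0.0:             # amplitude still below the curve at 1e12 cycles
        return np.inf
    if residual(lo) < 0.0:             # amplitude above the curve already at 2N = 1
        return 0.5
    return 0.5 * 10.0 ** brentq(residual, lo, hi)

def overrun_damage(time, eps_eq, sig_vm, disc_windows):
    """Damage of one deep rolling overrun from per-disc partial damages (Miner's rule)."""
    damage = 0.0
    for t_start, t_end in disc_windows:
        mask = (time >= t_start) & (time <= t_end)
        eps_a = 0.5 * (eps_eq[mask].max() - eps_eq[mask].min())   # strain amplitude, Eq. (4)
        sig_max = sig_vm[mask].max()                              # max. von Mises stress
        if eps_a > 0.0 and sig_max > 0.0:
            damage += 1.0 / allowable_cycles(eps_a, sig_max)      # N = 1 cycle per disc pass
    return damage
```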
Results and Discussion

This section presents and interprets the results: firstly, the results of the experiments carried out, followed by the results derived from calculations and their comparison with the experiments. Finally, the calculation method is applied to assess the deep rolling parameters, the deep rolling force and the feed rate, and to the optimisation presented, and the influence of these on damage is presented.

Experimental Work

This section presents the results of the experiments described in Section 2.1. Two experiments are carried out on two cylindrical parts of the railway axle. The deep rolling overruns are repeated until damage is detected by magnetic particle inspection. In Experiment 1, with a feed rate FR 1 of 0.25 mm, the first damage is detected after j = 23 deep rolling overruns and in Experiment 2, with a feed rate FR 2 of 0.5 mm, after j = 48 deep rolling overruns. In Experiments 1 and 2, an additional two deep rolling overruns were conducted before completion of the tests to further investigate the crack formation after crack initiation is detected.

Figure 5a shows the result of the magnetic particle inspection of Experiment 1 after 25 deep rolling overruns. Fluorescent liquid and ultraviolet light are used to make the cracks visible. In particular, it demonstrates the presence of elongated cracks that are oriented circumferentially around the railway axle. These occur irregularly and cannot be attributed to the feed rate used.

The result of the magnetic particle inspection test of Experiment 2 after 50 deep rolling overruns is shown in Figure 5b. The surface is covered with fine cracks. These occur regularly at the feed distance of the disc tools. In both experiments with different feed rates it was possible to induce clear damage to the surface, so that the result can be compared with the result of the calculation and the corresponding damage sum can be determined.

Strain-Based Damage Assessment and Comparison with the Experimental Results

The calculation method presented in Section 2.2 is now applied to the first deep rolling overrun of both Experiments 1 and 2.
This involves simulating the process at the appropriate feed rate, analysing the strains and stresses, determining the partial damage for each disc passage and calculating the damage for a deep rolling overrun using linear damage accumulation. The damage determined from a deep rolling overrun of Experiment 1 is 6.1%, and the damage for Experiment 2 is 4.7%. The double feed of Experiment 2 causes less damage because the evaluation point, and any point on the surface, is treated less frequently and therefore fewer strain and stress cycles occur.

To validate the assessment method, it must be compared with the experimental results. In order to achieve this, the deep rolling overruns j must also be repeated in the simulation. For Experiment 1, six deep rolling overruns can be simulated, and for Experiment 2, 14 overruns can be simulated due to the higher feed rate and therefore shorter simulation time. To simulate this repeated deep rolling with the specified number of overruns, both simulations require a simulation time of over three months and a disc space requirement of over 3 TB.

The strain and stress curves over time are analysed for the deep rolling overruns, and the damage D j is determined for each deep rolling overrun. These are accumulated, and the total damage D is calculated from the first overrun j = 1 to the analysed number of deep rolling overruns j = N O.

In Figure 6, the total damage D is plotted over the number of deep rolling overruns j for Experiment 1. A linear increase in damage is found for both experiments, and the total damage is, therefore, linearly approximated. The parameters of the equation are given for Experiment 1 in the diagram. A similar trend is observed for Experiment 2. The parameters for the linear approximation of Experiment 2 are a = 1.261 and b = 2.466.
Using the linear equations, the damage can be extrapolated linearly until the damage sum is greater than 100%. Table 3 shows these results compared with the results of the experiments. For Experiment 1, damage exceeding D = 100% is calculated after 20 deep rolling overruns, compared with the experimental result of 23 overruns. The calculated damage sum for 23 deep rolling overruns is 119.9%. The damage exceeding D = 100% for Experiment 2 is calculated after 41 repeated deep rolling overruns. In comparison, the number of overruns in the experiment is 48. The calculated damage sum for 48 deep rolling overruns is 119.6%.

The calculation method presented provides a reliable means of determining the damage caused by the deep rolling process. The calculation gives conservative results for the damage sum D = 100%. The almost identical size of the damage sum where damage was found in the experiments of ~120% proves that the presented assessment method works reliably for different deep rolling parameters.

The evaluation method is applied in the same way to evaluation points below the surface. This is to ensure that the worst damage caused by deep rolling does not occur below the surface and that cracking does not start there and then grow to the surface. In both Experiments 1 and 2, the greatest damage is found on the surface after one deep rolling overrun and also after several overruns.
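The linear extrapolation used above can be reproduced with a few lines of code. The sketch below fits a straight line D(j) = a·j + b to cumulative damage values of the simulated overruns and solves for the first overrun at which the damage sum exceeds 100%. The data array is purely illustrative (only the fit parameters of Experiment 2 are quoted in the text), so the numbers printed here are placeholders rather than the published values.

```python
import numpy as np

# Hypothetical cumulative damage values for the first simulated overruns
# (illustrative numbers only, not the published results)
overruns = np.arange(1, 15)                                   # j = 1 ... 14
rng = np.random.default_rng(0)
cumulative_damage = 4.7 * overruns + rng.normal(0.0, 0.2, overruns.size)  # in %

# Linear fit D(j) = a * j + b
a, b_fit = np.polyfit(overruns, cumulative_damage, deg=1)

# First overrun for which the damage sum exceeds 100 %
j_critical = int(np.ceil((100.0 - b_fit) / a))
print(f"D(j) = {a:.3f} * j + {b_fit:.3f}  ->  D > 100 % after {j_critical} overruns")
```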
Investigated Parameters and Optimisation

The valid calculation method is used to assess the damage of different rolling scenarios. As described in Section 2.3, the most influential deep rolling parameters, deep rolling force and feed rate, are analysed. In addition, the damage caused by optimising the deep rolling treatment with multiple deep rolling at reduced forces is investigated.

Deep Rolling Force

Firstly, the influence of the deep rolling force on the damage introduced is investigated for one overrun (j = 1). The deep rolling forces 20 kN, 15 kN, 10 kN, 5 kN and 2 kN are considered and analysed exclusively. The feed rate is kept constant at 0.5 mm. Five separate simulation models are set up and simulated with the appropriate forces. The required strain and stress are then evaluated, and the damage for a deep rolling overrun with the corresponding deep rolling force is determined using the evaluation method presented. The result of the evaluation is shown in Figure 7. It displays the damage with the deep rolling force. The points plotted are the evaluated results. A suitable equation is found to describe the relationship between damage and deep rolling force. The relationship, equation and parameters are shown in the diagram.

For the deep rolling forces 2 kN, 5 kN and 10 kN, the calculated damage for one overrun D j=1 is less than 1%. From 10 kN there is a significant increase in damage up to 15 kN, where 3.9% damage already occurs. The increase flattens out up to 20 kN, where the 4.7% damage already known from the results of Experiment 2 can be seen.

Feed Rate

Subsequent analysis focuses on the influence of the feed rate for one overrun (j = 1). Again, the influence of feed rate is explicitly analysed; the deep rolling force is always 20 kN. In addition to the feed rates FR 1 = 0.25 mm from Experiment 1 and FR 2 = 0.5 mm from Experiment 2, the feed rates 1.0 mm and 2.0 mm are also considered. Again, appropriate simulation models are set up and analysed, and the damage calculation is applied. Feed rate has a direct effect on the number of times the same point is machined by the tools, so it is expected that the damage will decrease as feed rate increases.
Exactly this behaviour is shown in Figure 8, where the results of the evaluation are again shown as points and the relationship between damage and feed rate is described by an equation. A quadratic equation is suitable for describing the behaviour here, and the parameters are listed in the figure.

The damage result at a feed rate of 0.25 mm is already known from Experiment 1 to be 6.1%, and at a feed rate of 0.5 mm from Experiment 2 to be 4.7%. The damage continues to decrease as feed rate increases. For a feed rate of 1.0 mm, it is 3.6%, and for a feed rate of 2.0 mm, damage decreases further to 0.8%.
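The quadratic relationship between feed rate and introduced damage can be illustrated with a short least-squares fit over the four data points quoted above (0.25 mm at 6.1%, 0.5 mm at 4.7%, 1.0 mm at 3.6%, 2.0 mm at 0.8%). The coefficients obtained this way are not taken from the paper, whose fitted parameters are only given in Figure 8, so treat this as an illustrative reconstruction.

```python
import numpy as np

feed_rate = np.array([0.25, 0.5, 1.0, 2.0])      # mm
damage = np.array([6.1, 4.7, 3.6, 0.8])          # % per deep rolling overrun

# Quadratic fit D(f) = p2 * f**2 + p1 * f + p0
p2, p1, p0 = np.polyfit(feed_rate, damage, deg=2)
print(f"D(f) = {p2:.3f} * f^2 + {p1:.3f} * f + {p0:.3f}")

# Example: interpolated damage for an intermediate feed rate of 0.75 mm
f = 0.75
print(f"Estimated damage at {f} mm feed rate: {np.polyval([p2, p1, p0], f):.2f} %")
```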
Process Optimisation

Finally, the influence of optimising the deep rolling application presented in [15] on the induced damage is investigated. It was found that repeated deep rolling with reduced deep rolling forces has a positive effect on the induced residual stress state. Furthermore, the fatigue strength assessment based on the residual stress state presented in [17] shows that the optimisation allows a significant increase in load capacity.

The basis for the assessment is the simulation model for deep rolling at 20 kN and a feed rate of 0.5 mm, Experiment 2, with the calculated introduced damage of 4.7%. This result is again used as a reference. Based on this simulation model, three additional models are set up. The model that has already been deep rolled at 20 kN is once subjected to an additional overrun at 5 kN (20 kN/5 kN), once to an additional overrun at 10 kN (20 kN/10 kN) and once to additional overruns at 10 kN and, finally, at 5 kN (20 kN/10 kN/5 kN).

Multiple deep rolling is simulated, the simulation models are evaluated, and the damage calculation is performed. Similar to the validation of the calculation method, Section 3.2, the damage from the additional deep rolling overruns is accumulated. An additional deep rolling overrun of 5 kN increases the damage from 4.7% to 4.8%. If 10 kN is applied in addition to 20 kN, the damage increases to 5.2%, and with triple treatment at 20 kN, 10 kN and 5 kN, damage of 5.3% must be expected.

The results are shown in Figure 9. The damage is plotted against the increasing "degree of optimisation". The relationship can be described in simplified terms by using a linear equation. The parameters are shown again in the diagram. According to [17], triple deep rolling (20 kN/10 kN/5 kN) allows a significant increase in fatigue strength. The increase in damage introduced is small and, due to the reduced forces, it is not expected that the increase in damage will have a negative effect on fatigue strength.

Conclusions

In this paper, the damage caused by deep rolling is investigated experimentally and a valid assessment method is presented. In addition, the main deep rolling parameters and the optimisation by repeated deep rolling with reduced deep rolling forces are investigated. The main results are summarised in the following list:

• A railway axle is repeatedly deep rolled on two cylindrical sections until damage is detected. A feed rate of 0.25 mm is used for Experiment 1 and a feed rate of 0.5 mm for Experiment 2. In Experiment 1, damage occurs after 23 deep rolling overruns, and in Experiment 2, after 48 deep rolling overruns.

• To quantify the damage caused by deep rolling, an appropriate damage assessment method is developed. For Experiment 1, the calculation predicts a damage sum exceeding 100% after 20 deep rolling overruns, compared with 23 overruns until damage in the experiment; for Experiment 2, the corresponding numbers are 41 and 48 overruns.

• The calculated damage sum for the number of overruns where damage was found in the experiments is around 120% for both experiments. The calculation shows strong applicability for different deep rolling parameters and provides conservative results.

• The developed assessment is applied to the parameters of deep rolling force and feed rate for one deep rolling overrun. The highest damage is introduced with the highest deep rolling force of 20 kN at 4.7% and the lowest feed rate of 0.25 mm at 6.1%.
• Finally, the optimisation of deep rolling with repeated deep rolling at reduced forces is analysed. With three overruns at 20 kN, followed by 10 kN and finally 5 kN, the damage increases just by 0.6 percentage points, from 4.7% to 5.3%, compared with a single overrun at 20 kN.

This paper presents a calculation method to compare the damage caused by different deep rolling parameters. Of particular interest is the effect of the applied "damage" on fatigue strength and its optimum value. Therefore, fatigue tests on railway axles with different applied damages are planned. In this way, the permissible "pre-damage" caused by the application of the deep rolling process can be determined without any negative effect on fatigue strength. Damage sums well below 1, or 100%, are assumed.

Figure 1. Application of the deep rolling process during experiments.

Figure 2. Strain-life curve according to UML and cyclic test results.

Figure 3. Simplified railway axle section of the simulation model during simulation procedure of Experiment 2 with marked evaluation node (a) and von Mises stress and equivalent of total strain over simulation time evaluated there (b).

Figure 4. Definition of the strain range and maximum stress for each disc.
Figure 5. Result of the magnetic particle inspection for Experiment 1 after 25 deep rolling overruns (a) and Experiment 2 after 50 deep rolling overruns (b).

Figure 6. Calculated cumulative damage and linear regression.

Figure 7. Influence of deep rolling force on damage introduced.

Figure 8. Influence of feed rate on damage introduced.

Figure 9. Influence of process optimisation on damage introduced.

Table 3. Comparison between experimental and calculated results.
Do Patients with Psoriatic Arthritis Have More Severe Skin Disease than Patients with Psoriasis Only? A Systematic Review and Meta-Analysis

Background: Early identification of patients at risk of psoriatic arthritis (PsA) is essential to facilitate early diagnosis and improve clinical outcomes. Severe cutaneous psoriasis has been proposed to be associated with PsA, but a recent assessment of the evidence is lacking. Therefore, in this systematic review, we address the association of psoriasis skin severity with the presence and development of PsA.

Summary: We included articles from a review published in 2014 and supplemented these with recent literature by performing an additional systematic search to identify studies published between 1 January 2013 and 11 February 2021. A meta-analysis was performed when sufficient comparable evidence was available. Of 2,000 screened articles, we included 29 in the analysis, of which 16 were identified by our updated search. Nineteen studies reported psoriasis severity as psoriasis area and severity index (PASI), ten studies as body surface area (BSA), and two studies as "number of affected sites." Most studies show that more extensive skin disease is associated with the presence of PsA. The quantitative pooled analyses demonstrate higher PASI (mean difference [Δ] 1.59; 95% confidence interval [CI] 0.29–2.89) and higher BSA (Δ 5.31; 95% CI 1.78–8.83) in patients with PsA as compared to psoriasis patients without PsA. Results from prospective studies – that assess the risk of future development of PsA in psoriasis patients – were inconclusive.

Key Messages: In patients with psoriasis, more severe skin involvement is associated with the presence of PsA, underpinning the importance of optimal dermatology-rheumatology collaboration in clinical care. There are insufficient data to support the use of psoriasis skin severity to predict the future development of PsA in psoriasis patients.

Introduction

Psoriatic arthritis (PsA) is a musculoskeletal disorder characterized by inflammation of the skin, nail deformities, arthritis, axial spondyloarthritis, enthesitis, and dactylitis [1]. PsA develops in 6-41% of psoriasis patients, but it is unknown why only a subset of patients transitions to PsA [1][2][3]. Psoriatic skin disease precedes PsA in 85% of the cases (on average, by 10 years), which opens a window of opportunity for early recognition, treatment initiation, and possibly delaying or even prevention of the onset of PsA [1,4]. Early diagnosis and treatment of PsA are essential because irreversible joint damage can develop within 6 months and delayed diagnosis is associated with long-term adverse outcomes [5][6][7][8]. Therefore, defining patients at risk of PsA transition has been a topic of interest [9,10]. Multiple clinical predictors for PsA in psoriasis patients have been suggested, including obesity, trauma, nail dystrophy, and psoriasis localization [9,10]. Moreover, a meta-analysis published in 2014 reported a trend for an association between the extent of psoriasis and the presence of PsA [9]. The extent of cutaneous disease – commonly expressed as psoriasis area and severity index (PASI; range 0-72) or body surface area (BSA; range 0-100) – is a relatively quick and noninvasive clinical outcome and could therefore function as a useful predictor for transition to PsA in psoriasis patients that can readily be applied in clinical practice [11]. However, a meta-analysis investigating this potential predictor for PsA development is lacking [10].
We aimed to update and complement the prior meta-analysis by Rouzaud et al. [9] with current knowledge of the association of psoriatic skin disease severity with PsA by assessing not only the association of psoriasis severity with the presence of PsA but also the association with later development of PsA. Furthermore, we postulate that defining the association between the skin disease severity and the development of PsA may support our understanding of shared pathogenic features within the psoriatic spectrum of disease [12].

Search

We conducted a systematic literature search in PubMed and Embase on 11 February 2021 (PICO question: "Is psoriasis skin severity predictive of transition to PsA in psoriasis patients?"). We used a combination of synonym terms in the title/abstract and MeSH/Emtree terms for "psoriasis," "psoriatic arthritis," "severity," "PASI," and "BSA" (online suppl. Table S1). We screened studies using predefined eligibility criteria in line with Rouzaud et al. [9] (online suppl. Table S2). We included original studies, published after 1 January 2013, that studied human subjects aged >18 years old and compared psoriasis severity between psoriasis patients without PsA (Pso-PsA), patients with PsA (PsA), and/or psoriasis patients that developed PsA. We focused on publications after 2012 to supplement the comprehensive meta-analysis by Rouzaud et al. [9] (search period 1980 to January 2013).

Data Extraction

Eligibility of selected studies for qualitative and quantitative analysis was discussed by two authors (M.E.J. and J.N.P.) and a quality assessment was reported. After study selection, we identified estimators for the association of PsA and psoriasis severity (PASI, BSA, affected sites): mean and standard deviation (±) in (sub)groups, mean difference between groups (Δ), median and interquartile range in (sub)groups, odds ratio (OR), risk ratio, and hazard ratio (HR) for the association of psoriasis severity and PsA (development) with confidence intervals (95% CI). We calculated missing ORs and CIs and requested the corresponding authors to provide additional data if information to perform quantitative analyses was lacking.

Differentiation by Research Question

We differentiated between studies that report the association of cutaneous psoriasis severity with the presence of PsA and studies that report the association with later development of PsA in patients with psoriasis because these studies answer different clinical questions. Articles that report the extent of skin disease at a certain baseline and subsequently study conversion to PsA (prospective design) are important to support the potential use of psoriasis severity as a biomarker to identify psoriasis patients at risk for PsA transition. On the other hand, studies that compare skin disease severity between Pso-PsA and PsA (cross-sectional design) enable us to study the association of psoriasis severity and the present risk of PsA. Although these studies do not address our PICO, we reckon that they do answer a clinically relevant question and therefore we included them in our analyses.

Meta-Analysis

We performed quantitative meta-analyses if ≥3 studies used a homogeneous study design, reported similar psoriasis severity measures, and used the same association measures. For quantitative analyses, we used random effects models and evaluated heterogeneity with the I² statistic.
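For readers who want to see the arithmetic behind this type of pooling, the sketch below implements a DerSimonian-Laird random-effects meta-analysis of mean differences together with the I² heterogeneity statistic in Python. It is a generic illustration with made-up study data, not the dedicated meta-analysis software output reported in this paper.

```python
import numpy as np
from scipy import stats

def random_effects_md(m1, sd1, n1, m2, sd2, n2):
    """DerSimonian-Laird random-effects pooling of mean differences."""
    m1, sd1, n1, m2, sd2, n2 = map(np.asarray, (m1, sd1, n1, m2, sd2, n2))
    md = m1 - m2                                # per-study mean difference
    var = sd1**2 / n1 + sd2**2 / n2             # per-study variance of the MD
    w = 1.0 / var                               # fixed-effect weights
    md_fe = np.sum(w * md) / np.sum(w)
    q = np.sum(w * (md - md_fe) ** 2)           # Cochran's Q
    df = len(md) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_re = 1.0 / (var + tau2)                   # random-effects weights
    pooled = np.sum(w_re * md) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    p_q = stats.chi2.sf(q, df)
    return pooled, ci, i2, p_q

# Made-up example: PASI in PsA vs. Pso-PsA groups from three hypothetical studies
pooled, ci, i2, p_q = random_effects_md(
    m1=[8.1, 6.5, 9.0], sd1=[5.0, 4.2, 6.1], n1=[120, 80, 200],
    m2=[6.4, 5.9, 7.1], sd2=[4.8, 4.0, 5.5], n2=[300, 150, 400],
)
print(f"Pooled MD = {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), I^2 = {i2:.0f}%")
```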
Meta-analyses were performed using Review Manager (Version 5.4) and meta-regression with Comprehensive Meta-Analysis (Version 3). We considered a p value <0.05 to be statistically significant.

Search Results

The search yielded 2,000 unique studies. One author (M.E.J.) performed title/abstract screening and thereafter screening of 90 studies in full-text. Selection of 14 studies was discussed by three authors (M.E.J., J.N.P., and E.F.A.L.) (Fig. 1). Two articles were retrieved via reference and related citations in PubMed and supplemented with 13 studies selected by Rouzaud et al. [9]. Of the 29 articles included in our final analysis, three studies assessed the extent of skin disease in Pso-PsA patients and the later development of PsA (PASI n = 2, affected sites n = 1). The other 26 studies reported psoriasis severity and the presence of PsA in Pso-PsA and PsA patients: either the highest value from repeated measures over a period of time (PASI n = 1, BSA n = 2) or a single measurement (PASI n = 14, BSA n = 6, PASI and BSA n = 2, affected sites n = 1) (Table 1).

Fig. 1 (study selection): A combination of synonym terms in title/abstract and MeSH/Emtree terms for "psoriasis," "psoriatic arthritis," "severity," "PASI," and "BSA" was used (online suppl. Table S1). In total, 3,032 articles were identified. Duplicates were removed and 2,000 articles were screened on title and abstract based on predefined eligibility criteria. Consequently, 90 selected articles were screened full-text for relevancy to be included in the analysis. The search was supplemented with 13 articles by Rouzaud et al. [9] and 2 articles via related citations in PubMed and reference citations of the identified articles in the initial search. In total, 29 studies were included in the qualitative analyses. These studies reported the following outcome measures for skin disease severity: PASI (n = 17), PASI and BSA (n = 2), BSA (n = 8) and number of affected sites (n = 2). We included 13, 4, and 0 of these studies in the quantitative analyses, respectively.

Study Quality

Concerning studies that investigated psoriasis severity and the presence of PsA, the overall quality was low. In seven studies, selection bias could have been introduced by patient selection (Choi et al. [13]; Cinar et al. [14]; Haroon et al. [15]; Henes et al. [16]; Jamshidi et al. [17]; Leijten et al. [18]; Truong et al. [19]), as they assessed previously undiagnosed PsA in cohorts of psoriasis patients (online suppl. material). Whether psoriasis severity was determined by an experienced dermatologist was not described in more than half of the studies. All studies assessed psoriasis severity at a single time point, except for three studies that measured repeatedly over a period of time and reported the highest value during follow-up (Eder et al. [30]; Soltani-Arabshahi et al. [26]; Tey et al. [22]). Most studies reported psoriasis duration. As expected, because psoriasis precedes PsA in most cases, psoriasis duration was longer in PsA patients compared to Pso patients (range 0.2-9.5 years) [1,4]. Details of therapies were not well described in most studies and varied greatly between studies. With regards to confounding, only two studies (Haroon et al. [15] and Eder et al. [31]) corrected for the use of (topical or systemic) psoriasis therapy. After selection based on our criteria of homogeneity, we included 15 studies in two meta-analyses to compare ΔPASI (n = 13) and ΔBSA (n = 4) between Pso-PsA and PsA patients (Fig. 2).
With regards to the three studies that reported psoriasis severity and later development of PsA, the overall risk of bias was low. However, heterogeneity with regards to the reported psoriasis severity measures and estimators impeded pooling of results in quantitative analyses (Eder et al. [31]; Zenke et al. [21]; Wilson et al. [32]).

Psoriasis Severity and Presence of PsA

Sixteen cross-sectional studies reported PASI from one single measurement, of which 12 studies observed a higher mean or median PASI in PsA compared to Pso-PsA. We included 13 studies in our meta-analysis, which showed a significantly higher PASI in PsA (Δ 1.59 [95% CI 0.29-2.89]) with a high level of heterogeneity (I² = 85%) (Fig. 2a). Given the high heterogeneity and possible publication bias (online suppl. Fig. S1), we performed a sensitivity analysis by removing the studies by Dağdelen et al. [33] and Jamshidi et al. [17]. The result of the adjusted meta-analysis showed a smaller but still significant difference (online suppl. Fig. S2). Further, three studies (Choi et al. [13]; Cinar et al. [14]; Henes et al. [16]) compared PASI between Pso-PsA and PsA by stratification into mild, moderate, or severe psoriasis. Although two studies found that moderate-severe psoriasis was more prevalent amongst PsA patients, these results were not statistically significant. One study assessed psoriasis severity repeatedly over time and compared the highest PASI (dichotomized <10 vs. ≥10) during 3 years of follow-up (Eder et al. [30]), but these results too were not significantly different. Only one cross-sectional study compared the number of affected psoriasis sites between psoriasis and PsA patients (Table 1) [29]. The number of patients with generalized psoriasis (>2 affected sites) was higher in PsA (41.4% vs. 38.3%; OR 1.18), but these results were not significant.

Table 1 legend: BSA, body surface area (1% is equivalent to the size of the palm of the patient's hand); CI, confidence interval; FU, follow-up; na, not applicable; NR, not reported; NS, not significant (p value not reported); OR, odds ratio; PASI, psoriasis area and severity index; PsA, psoriatic arthritis; Pso-PsA, psoriasis without psoriatic arthritis; SD, standard deviation; RR, risk ratio. * Significant (p value <0.05). a Differentiation between studies that report the association of cutaneous psoriasis severity with the presence of PsA and studies that report the association with future development of PsA. b Assessment of psoriasis severity in either a cross-sectional (Cross) or prospective (Pro) design.

Psoriasis Severity and Future Development of PsA

We identified three prospective studies that reported psoriasis severity in Pso-PsA patients and assessed later development of PsA (Eder et al. [31]; Zenke et al. [21]; Wilson et al. [32]). One study showed with multivariable logistic regression that severe psoriasis (PASI ≥ 10) at psoriasis onset is not a statistically significant predictor for PsA transition (OR 1.55, p value not reported), after correction for young age, sex, scalp psoriasis, and nail dystrophy [21]. The second study reported that severe psoriasis (PASI ≥ 20) is significantly associated with PsA transition within 8 years (risk ratio 5.39; p = 0.006) [31].
Finally, one study indicated using univariate Cox regression that patients with ≥3 affected sites were significantly more at risk to develop PsA (HR 2.24 [95% CI 1.23-4.08]), but this effect was not sustained in multivariate analysis after correction for age, sex, calendar year, scalp psoriasis, intergluteal psoriasis, and nail dystrophy (HR not reported) [32].

Discussion

To our knowledge, this is the first systematic review and meta-analysis in 8 years to provide both qualitative and quantitative answers as to whether psoriasis severity is associated with the presence and development of PsA. This is a clinically relevant question because skin severity measurement could aid in identifying those psoriasis patients at risk for PsA transition and thus serve as an easily implementable clinical measurement to facilitate early PsA diagnosis and improve clinical outcomes. Our results confirm that in patients with psoriasis, the presence of slightly more extensive skin disease, as measured by higher PASI and BSA, is associated with concurrent PsA. We were unable to draw a definite conclusion about the association of psoriasis severity with later development of PsA.

The majority of the cross-sectional studies found a positive association between severe psoriasis and the presence of PsA. Moreover, our meta-analyses revealed a statistically significant mean difference of both PASI and BSA between Pso-PsA and PsA patients, although the differences were relatively small. We speculate these results may be an underestimation because most studies included psoriasis patients that were treated in a hospital and patients with only mild psoriasis are typically less prone to visit a dermatologist. Unfortunately, we were unable to accurately assess the association of psoriasis severity with transition to PsA, as prospective studies were limited and heterogeneous. Although all point estimates were in the direction of a higher risk of developing PsA, the results were not always significant. Therefore, there is currently insufficient evidence to recommend that dermatologists use psoriasis severity as a reliable biomarker for PsA development.

In the past, specific psoriasis localizations have been suggested to associate with PsA, including scalp and intergluteal psoriasis [9]. PASI and BSA capture all anatomically affected sites of psoriasis and therefore may not be the most suitable outcome measures to assess risk for PsA transition. Moreover, a PASI score of severe scalp psoriasis can be numerically comparable with that of only moderate psoriasis on the knees. Therefore, we recommend that future studies include an in-depth topographic assessment of psoriasis localization and report individual PASI components.

The difference in psoriasis severity between PsA and psoriasis patients could improve our understanding of the pathogenic link between skin and joint disease. From a pathophysiologic perspective, the association between severe psoriasis and PsA may be explained by the important role of the interleukin (IL)-23, IL-17, and tumor necrosis factor alpha (TNF) pathways in inflammation of both the skin and musculoskeletal apparatus [1]. Overlapping cytokines – including IL-17, IL-22, IL-23, and TNF – play a role in immune-mediated inflammation of skin and synovium that involves infiltration of pathogenic CD8+ T cells, macrophages, dendritic cells, monocytes, and B cells [35].
It is hypothesized that local proinflammatory cytokine production and activated immune cells in psoriatic skin create a self-perpetuating inflammatory response that results in systemic inflammation and PsA [35]. However, this does not explain why in 15% of the patients, arthritis precedes skin lesions [1]. Moreover, cutaneous psoriasis severity has shown only a modest correlation with joint disease [36]. Thus, the exact relation between inflammation of the skin, joints, and other domains remains incompletely understood [35].

This review has several limitations. First, we have not repeated the systematic search performed by Rouzaud et al. [9], but as they employed validated methodology and even broader search methods, we assume to have included all relevant publications. Second, our meta-analyses were limited by heterogeneity and a relatively small number of included studies. Third, most studies were conducted in dermatology clinics, which may have resulted in an overestimation of psoriasis severity in PsA, since patients with "PsA sine psoriasis" and limited psoriasis – typically seen by rheumatologists – could have been missed. Fourth, it needs to be taken into account that the meta-analysis did not include high-quality studies. Most importantly, the use of therapies could have confounded the results. However, these studies do represent daily clinical practice, as psoriatic patients are frequently treated with topical and/or systemic treatment. Furthermore, we examined the effects of two potential confounders that are associated with PsA in psoriasis patients, i.e., the presence of nail psoriasis and psoriasis disease duration. Meta-regression analysis suggested that our results were not explained by confounding by nail psoriasis or psoriasis duration, although we could only analyze the effects in six and eight studies, respectively (online suppl. Table S4). Additional subgroup analyses to investigate potential confounders – including psoriasis localization, family history of PsA, obesity, history of trauma or fracture, and smoking status – could unfortunately not be performed as a consequence of limited reporting of data [10,37]. Overall, we deem that these results are the currently best available answer to a clinically relevant question.

Concluding Remarks

Our results demonstrate that psoriasis severity is associated with increased likelihood of concurrent PsA. The high extent of psoriasis skin activity in PsA patients reinforces the necessity of multidisciplinary collaboration between rheumatologists and dermatologists in PsA care. Defining psoriasis patients at risk for PsA transition remains an important topic to facilitate early recognition and prevent irreversible joint damage. Long lasting follow-up studies are necessary to study predictors for the development of PsA in psoriasis patients. Given the complexity of PsA pathogenesis, we deem that prediction models that combine genotypic and phenotypic predictors are the most promising to identify psoriasis patients at risk for PsA transition [38][39][40][41][42].

Key Message

In patients with psoriasis, more severe skin involvement is associated with the presence of psoriatic arthritis.

Statement of Ethics

The paper is exempt from ethical committee approval because data were collected from published trials in which informed consent had been obtained by the trial investigators.
Absolute characterization of high numerical aperture microscope objectives utilizing a dipole scatterer

Measuring the aberrations of optical systems is an essential step in the fabrication of high precision optical components. Such a characterization is usually based on comparing the device under investigation with a calibrated reference object. However, when working at the cutting-edge of technology, it is increasingly difficult to provide an even better or well-known reference device. In this manuscript we present a method for the characterization of high numerical aperture microscope objectives, functioning without the need of calibrated reference optics. The technique constitutes a nanoparticle, acting as a dipole-like scatterer, that is placed in the focal volume of the microscope objective. The light that is scattered by the particle can be measured individually and serves as the reference wave in our system. Utilizing the well-characterized scattered light as nearly perfect reference wave is the main idea behind this manuscript. An absolute characterization technique for microscope objectives is presented, working without a calibrated reference element. To achieve this, a reference wave is created by a sub-wavelength object.

Introduction

Measurements are something we are very familiar with. Not only in science but also in our everyday life they are ubiquitous and we perform many of them without much thought. In fact, the majority of what we designate as measurements could be equivalently called comparison. Probably the most illustrative example for this is a beam balance, where the mass of an object is determined by comparing it to known masses. There are countless other measurements, involving e.g. a ruler, a measuring cup or simply a watch, which all just work because there is a calibrated device acting as a benchmark to gain the desired information. Providing such a calibrated reference can be rather challenging, especially in the realm of modern technologies and methods demanding miniaturization and increasing resolution. In optics, a frequently occurring example is the characterization of optical elements based on the phase front of the transmitted light field. Usually, this is done by interferometry, where optical reference elements are utilized to create a reference wave. Consequently, the quality of these elements and their calibration sets an upper limit for the measurement accuracy, as their imperfections and calibration errors translate directly into the measured wavefront of the device under study. Especially when working with high numerical aperture (NA) optics, such a calibration involves its very own challenges 1,2. Therefore, the development of so-called absolute characterization methods, working without a macroscopic reference object, is highly desirable.

In this work, we present an absolute characterization technique for high-NA microscope objectives. To circumvent the need for the error-prone calibration of the optical reference elements, our reference wave is created by an object smaller than the wavelength, i.e., a nanoparticle. Such a particle only supports a very limited number of optical modes – with the dominating contributions being dipole modes 3 – which can be determined experimentally 4,5. The main concept presented in this work is based on the utilization of the well-characterized scattering as a nearly perfect reference wave.
Experimental scheme For the wavefront characterization of a microscope objective (MO), or rather the experimental determination of its aberrations, it is necessary to measure the transmitted phase front. This can be achieved by either interferometric means or a specialized sensor such as a Shack−Hartmann-Sensor (SHS) 6,7 . In all cases, it is necessary to image the exit pupil (EP) of the MO under investigation (subsequently labeled as MO 1 ) onto the sensor, because wave aberrations of optical elements are defined in their respective EP. We first discuss two exemplary state-of-the-art measurement techniques before presenting our own scheme. The first scheme is depicted in Fig. 1a. There, MO 1 is illuminated with an incoming wavefront, which after transmission carries the aberrations of MO 1 . For the illumination of MO 1 , an additional optical component, i.e., a second microscope objective MO 2 is necessary to adapt the incoming wavefront to MO 1 . Further, a telescope is usually utilized for imaging and also for matching the size of the EP-image to the SHS. The obvious problem here is the calibration of the auxiliary optics. When removing MO 1 , the light beam coming from MO 2 is not collimated anymore. Therefore, an additional precharacterized MO (identical to MO 1 ) acting as a benchmark is necessary to calibrate the measurement setup. Unfortunately, this would bring us back to the initial problem, namely the characterization of a microscope objective. An alternative technique that solves the abovementioned issue is presented in Fig. 1b. There, a spherical concave mirror is used to send the light coming out of MO 1 back, such that the beam passes the objective twice on the exact same paths, doubling the wavefront aberrations. Deviations of the concave mirror from a perfect sphere can be determined by specialized calibration procedures. The aberrations of the incoming wave as well as the beamsplitter and telescope can be determined by placing a plane mirror of known high quality to the right of MO 1 . Nonetheless, there is another source of error in this system. Seen from the SHS, the combination of MO 1 and the reflecting surface creates two images of the EP of MO 1 , which cannot be imaged to the same plane simultaneously. The two image paths are indicated in Fig. 1b by the green and red arrow. This affects the performance of the wavefront characterization, especially if small defects need to be located precisely. In Fig. 1c, we present an alternative and novel experimental scheme, which we describe in detail below. It was used to record all data presented in this manuscript and does not rely on any calibrated reference optics.
Fig. 1 Experimental schemes for the characterization of microscope objectives (MOs). The MO under investigation is always labeled as MO 1 . a, b Conventional methods based on a Shack−Hartmann-Sensor and reference elements. In a MO 1 is measured in a single pass, whereas in b the beam traverses MO 1 twice. c Sketch of the main part of the experimental setup based on scattered light as reference wave. An incoming beam is focused and recollimated by two confocally aligned microscope objectives. A spherical silicon nanoparticle is placed on a glass substrate in the joint focal plane of the system. The back focal plane of MO 2 (immersion type) is imaged onto a camera by an additional lens. Polarization optics, comprising two liquid crystal variable retarders and a linear polarizer, enable a polarization resolved analysis.
The polarization of the incoming laser beam can be chosen arbitrarily as long as it is fixed and not changing or fluctuating with time. The beam is focused by MO 1 and impinges onto a silicon nanoparticle, placed on a glass coverslip 8 that is carried by a 3D piezo stage [Physik Instrumente (PI) GmbH & Co. KG, P-527 and E-710.4CL]. The size of the particle is chosen such that it predominantly supports dipole modes, with higher order modes (quadrupoles, octupoles, etc.) being strongly suppressed at the chosen wavelength 3 . The excited dipole moments are not known prior to the measurements and different microscope objectives (MO 1 ) with different aberrations will lead to different combinations of dipole moments. The actual dipole moments need to be reconstructed during the measurement 5 and their emission can then be calculated analytically 9 . The resulting dipole emission serves the purpose of a well-known, nearly perfect reference wave that can interfere with the remainder of the light, which passed through MO 1 . The transmitted beam as well as the light scattered by the particle in forward direction are collected by a confocally aligned immersion-type microscope objective (MO 2 , Leica Microsystems, HC PL FLUOTAR, ×100/1.32 OIL). The EPs of both MO 1 and MO 2 are simultaneously imaged onto a conventional CMOS camera (The Imaging Source Europe GmbH, DMK 33UX252) by a standard achromatic lens. Using two liquid crystal variable retarders (Thorlabs Inc., LCC1113-A) and a linear polarizer in front of the imaging lens facilitates an angularly resolved full Stokes analysis 10 , which will become important during the evaluation procedure. The key feature of this setup is that it does not require any additional calibrated optical elements. After passing through MO 1 , all subsequent optical elements used for the measurement and retrieval are common-path, i.e., input beam and reference wave propagate along the exact same path. Consequently, it is not necessary to calibrate the phase aberrations of the auxiliary optics, including MO 2 . In addition, the aforementioned issue of multiple image planes for the EP does not occur in this configuration. Measurement To better understand the underlying principle of our measurement strategy, we first discuss the equation for the total intensity that needs to be solved 11 :
I tot,σ = I 1,σ + I 2,σ + 2 √(I 1,σ · I 2,σ) · cos(ϕ 1,σ − ϕ 2,σ) (1)
For a specific polarization state σ, this equation describes the time averaged intensity I tot,σ of two interfering electromagnetic fields labeled by i = 1,2, where I i,σ and ϕ i,σ are the corresponding intensity and phase distributions, respectively. For the sake of brevity, the dependence (k x , k y ) is omitted. Besides, σ refers to the polarization state that is selected by means of the polarization optics. In principle, the polarization state σ can be chosen almost arbitrarily by means of the polarization optics, as long as it is homogeneous throughout the EP. Nonetheless, it is advantageous to choose σ to be equal to the polarization of the input beam in order to maximize the signal-to-noise ratio. In our scenario, i = 1 corresponds to the incoming beam that passed MO 1 and was transmitted into the glass substrate, carrying the aberrations of MO 1 in its phase distribution ϕ 1,σ . Consequently, the ultimate goal is the retrieval of ϕ 1,σ . Strictly speaking, we are interested in the wavefront ϕ 1,σ without the influence of the glass substrate. However, these influences can be removed from the results in a straightforward manner, as explained in the "Methods" section.
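The remaining component (i = 2), described next, is the dipole reference wave; once I tot,σ, I 1,σ, I 2,σ and ϕ 2,σ are all known over the pupil, Eq. (1) can be inverted pixel by pixel. A minimal numpy sketch of that inversion (an illustration, not the authors' code; the sign ambiguity it leaves open is resolved by the physical argument ϕ 1,σ − ϕ 2,σ > 0 discussed further below):

```python
import numpy as np

def retrieve_phase(i_tot, i1, i2, phi2):
    """Invert the two-beam interference law
    I_tot = I1 + I2 + 2*sqrt(I1*I2)*cos(phi1 - phi2)
    for phi1, pixel by pixel (all inputs are 2D maps over the pupil)."""
    # cosine of the phase difference between transmitted beam and dipole wave
    cos_dphi = (i_tot - i1 - i2) / (2.0 * np.sqrt(i1 * i2))
    # clip small numerical overshoots caused by noise before the arccos
    cos_dphi = np.clip(cos_dphi, -1.0, 1.0)
    dphi = np.arccos(cos_dphi)        # two formal solutions: +dphi and -dphi
    # keep the branch with phi1 - phi2 > 0 (physical disambiguation)
    return phi2 + dphi
```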
Last, the components I 2,σ , ϕ 2,σ refer to the intensity and phase distribution of the dipole wave emitted by the nanoparticle, respectively. Without loss of generality, we choose a particle diameter of 168 nm, an input wavelength of λ = 680 nm and a polarization parallel to the horizontal axis (σ = x) for the experiments reported here. We characterized two MOs featuring a high numerical aperture of NA 1 = 0.9. Due to the nature of immersion type MOs, MO 2 can exhibit an NA significantly larger than NA 1 (here NA 2 = 1.32). As indicated in the lower part of Fig. 1c, this results in two distinct regions in the collected angular spectrum of MO 2 , i.e., in the recorded EP images. First, a central region 0 ≤ k ⊥ /k 0 ≤ NA 1 , with k ⊥ = √(k x ² + k y ²), where light scattered by the particle and the transmitted input beam are collected. Second, an annular region NA 1 ≤ k ⊥ /k 0 ≤ NA 2 , containing only light scattered by the nanoparticle without any contribution from the transmitted input beam. The measurement procedure starts by moving the nanoparticle on the optical axis to capture the combined signal I tot,x (Fig. 2a) of the excitation beam and the emission of the dipole. It is necessary to record a complete set of Stokes parameters (x, y, 45, 135, right-handed circularly polarized, left-handed circularly polarized). From this measurement, we use the aforementioned annular region above NA 1 where only the light of the dipole wave is present to identify the underlying dipole moments. To achieve this, we calculate the far-field emission of all electric and magnetic dipoles (6 in total) placed above an interface (substrate) 9 and use a numerical least square optimization to fit a combination of these far-fields to the measured Stokes parameters 5 . During this process, the amplitudes and phases of the dipoles serve as free parameters. The polarization resolved measurement is necessary to avoid ambiguities during this optimization. The knowledge about the induced dipole moments allows us to calculate their far-fields also in the central region of the EP, where interference with the transmitted excitation beam is observed. In other words, we extract the exact information including intensity I 2,x and phase ϕ 2,x distributions (Fig. 2c) of the reference wave, utilized for the characterization of the MO under test. A second measurement is done offside the nanoparticle with the focused beam not overlapping with the particle anymore, but only with the substrate. It enables the measurement of I 1,x (Fig. 2b), corresponding to a k-spectrum transmitted without interaction with the nanoparticle. At this point it is sufficient to record the chosen polarization state of the input beam (x). With the completion of this step, all necessary variables are known to solve Eq. (1) for the desired phase distribution ϕ 1,x . Solving such an equation generally yields two solutions. Considering the underlying physics, with ϕ 1 describing the electric field that caused the excitation of a dipole described by ϕ 2 , we know that ϕ 1,σ − ϕ 2,σ > 0. This rules out one of the two solutions and makes the evaluation unambiguous. The corresponding result is presented in Fig. 2d. Performance analysis To investigate the stability and precision of our system, we perform some further analysis of the recorded data in this chapter.
For this purpose, we use the so-called Zernike polynomials 12,13 , which form a continuous and orthonormal basis over a unit circle that is well suited to describe the aberrations of optical systems featuring a circular pupil. In principle, an arbitrary wavefront ϕ(k x ,k y ) can be expanded into a series of polynomials,
ϕ(k x ,k y ) = Σ j c j Z j (k x ,k y ) (2)
where c j denotes the expansion coefficients and Z j are actual Zernike polynomials in the single index representation 13 . We show the distributions of Z j up to j = 35 in Fig. 3a. We now use Eq. (2) to decompose the experimentally retrieved phase distribution ϕ 1,x into Zernike polynomials. For our further analysis we set the expansion limit to j = 35 (the highest order shown in Fig. 3a). Figure 3b, c shows the expansion coefficients for the two previously mentioned MO 1 s that were tested. The coefficients for j = 0,1,2,4 are removed from the images for the following reasons: Z 0 , called piston or bias, is only a constant phase offset. Z 1 , Z 2 (tip, tilt) describe phase ramps along k x and k y , respectively, that could be compensated by simply tilting MO 1 . Z 4 refers to a first-order defocus that can be corrected by moving MO 1 along the optical axis (z). Being heavily influenced by the physical alignment, these four contributions are not only of less importance, but they also yield results with rather high fluctuations when repeating the characterization. In Fig. 3d, e we present the final experimentally retrieved phase distributions with the aforementioned four contributions removed. The coefficients shown in Fig. 3b, c correspond to the distributions in Fig. 3d, e, respectively. The data presented in Fig. 3b, d correspond to the same MO 1 that was shown already in Fig. 2. In addition, we show a reference measurement at the top right of both reconstructed phase distributions. These measurements were recorded by Optocraft GmbH with their SHSInspect metrology platform in the 2Xpass configuration. This system is based on the principle that was shown in Fig. 1b. As can be seen, our results show excellent agreement with the independently recorded reference dataset. To showcase the performance of our method, several additional checks were done that are explained in more detail in the "Methods" section. The results and error bars shown in Fig. 3 are retrieved by averaging 30 measurements for each of the two microscope objectives and highlight already the outstanding precision of the system. Conclusion and outlook In summary, we have developed and demonstrated an absolute method for the characterization of high numerical aperture microscope objectives by using a dipole scatterer in order to create a well-known reference wave. When performing microscopy of almost any kind, the microscope objective is without doubt the key element to determine both the resolution and the quality of the created images. This renders our presented method highly relevant for the development of cutting-edge microscopy systems but also for all kinds of experimental setups where a microscope objective is involved. Working with a characterized microscope objective and knowing its errors enables the implementation of error correction strategies and allows for quantitative measurements. In general, the method is rather flexible in terms of what microscope objectives can be used, but there are some restrictions that need to be satisfied. First of all, NA 2 > NA 1 , to get access to an outer region in the recorded BFP images that is used to reconstruct the excited dipole moments.
In the current configuration optimized for the characterization of dry MOs, this is not an issue due to NA 1 being theoretically capped at 1. Second, although there is no strict limit for how low NA 1 can be, a lower NA 1 generally results in larger foci and therefore less scattering for a fixed particle size. Consequently, a lower NA 1 will increase the errors of the measurements. However, low-NA optics usually do not require such precise characterization of the transmitted phase front and there are many existing methods that are sufficient for these components. In principle, the scheme could also be used to characterize immersion-type MOs, as long as the scattering particle still behaves like a dipolar scatterer with negligible contributions of higher order multipoles when embedded in oil. In some cases, a different strategy will be required to identify the excited dipole moments as it is not always possible to choose NA 2 > NA 1 anymore. Powerful solutions for this could be cross polarization or structured illumination, but these go beyond the scope of this manuscript. It should be noted here that this technique is not restricted to the chosen wavelength. Although for a fixed nanostructure the potential spectral range is limited, the complete visible and near infrared spectral range can be covered by using particles of other sizes or materials. Furthermore, it is also not necessary to use a perfectly spherical nanostructure, since our procedure is capable of identifying arbitrary combinations of dipoles. As long as they feature a reasonably strong dipole response and simultaneously suppress higher order multipoles, it is actually possible to use almost arbitrarily shaped nanostructures. However, tailoring the size and shape of the particle can also offer a promising route to improve the precision of our technique even further. The main goal here is to minimize the amount of modes supported by the particle to still enable a simple data analysis. Any higher order mode that is not considered in the determination of the scattered light will contribute to a phase error in the reconstructed phase front. But also, the considered modes are measured with a limited accuracy, which leads to measurement errors. Accordingly, promising particle shapes for improving our approach are flat cylindrical particles, supporting mainly three dipole modes 14,15 , or nanorods that predominantly support only a single dipole mode 16 . In particular, metal cylinders (e.g. made from gold etc.) would be the most promising alternative to the spherical nanoparticles used in this work. Using modern lithography or milling techniques, cylindrical nanoparticles can be fabricated easily in arrays including different sizes, hence providing a full range of different probes on a single sample to cover and measure over a wide spectral range. Further, the method can be extended with ease to also detect the sensitivity of a microscope objective to different input polarization states, allowing for a detailed analysis of the birefringence of the MO. All necessary polarization optics for this extended analysis are already present as they are required for the detection of the dipole moments. Our experimental approach offers a powerful, versatile and novel method for the characterization of high-NA optics, which are used in the majority of microscopy, imaging, and sensing devices. 
Materials and methods Positioning of the nanostructure For a part of the measurement, it is necessary to place the nanostructure at the focal spot of the system. To find out where the nanoparticle is roughly located, the two confocally aligned MOs and the camera can be used as a scanning microscope. For this purpose, the sample is raster scanned through the focal volume and the intensity on the camera is integrated. The nanostructure then becomes visible as a dip in the integrated transmitted intensity distribution. Once the nanoparticle is roughly positioned in the beam, there are several ways how the fine positioning can be achieved. One possibility is to perform a finer scan around the position of the scatterer, followed by a center of mass calculation to find out where the minimum in the distribution of the transmitted light is. It is also feasible to use the distribution of the scattered light in the annular region of the EP above NA 1 to retrieve the relative position between the focused beam and the particle 17 . Both procedures easily achieve a precision below 10 nm, which is better than required as will become clear in the error analysis below. Influence of the glass substrate During the measurements, the air−glass boundary is very close to the focal plane, where the beam diameter is below 1 µm. The surface unevenness across such a small area is negligible. Further, traversing from air to glass introduces a defocus and spherical aberrations as additional wave aberrations. In order to calculate these aberrations, it is necessary to know the position of the interface relative to the focused beam. Assuming an aberration-free MO 1 , this position can be determined from the measured data. Since the actual wave aberrations of MO 1 would only give rise to higher order corrections to the positions of both particle and interface, which are negligible, this assumption is justified. Then, the additional spherical aberration introduced by the substrate interface can be determined and subtracted from the measurement results. Error analysis Several tests were performed to examine the reliability of the proposed method. To quantify the similarity between two measurements, we proceed as follows. First, the contributions of the Zernike polynomials Z j for j = 0,1,2,4 are calculated and subtracted from the individual reconstructed phase distributions. Thereafter, we compute the root-mean-square of the difference of the two corrected phase distributions. Last, the results are expressed in units of the wavelength λ. All tests were repeated several times, as noted. Further, no identical test was performed twice in a row. There was always at least one of the other tests performed in between. Repeatability The measurement is performed twice consecutively. The test was done eight times. The average measured deviation is λ/2705. Reproducibility The measurement is performed twice where in between two measurements the experimental setup is realigned. More specifically, after the first measurement, the particle is moved out of the focused beam and MO 1 was moved transversely to a position where no light was collected by MO 2 anymore. Afterwards, MO 1 is realigned, the particle is brought back to the center of the beam and the second measurement is performed. The test was conducted four times. The average measured deviation is λ/456. 
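For illustration, the comparison metric described above (least-squares removal of piston, tip, tilt and first-order defocus, followed by the root-mean-square of the difference of the two corrected phase maps, expressed in units of λ) can be written compactly. This is a sketch under the assumption that the phase maps are given in radians on a pupil grid normalised to the aperture radius; it is not the authors' implementation, and the Zernike normalisation constants are irrelevant for the subtraction step:

```python
import numpy as np

def remove_low_orders(phase, kx, ky, mask):
    """Least-squares removal of piston (Z0), tip/tilt (Z1, Z2) and
    first-order defocus (Z4) from a phase map given in radians.
    kx, ky are pupil coordinates normalised to the aperture radius,
    mask is a boolean array selecting the valid pupil pixels."""
    x, y = kx[mask], ky[mask]
    basis = np.column_stack([np.ones_like(x), x, y, 2 * (x**2 + y**2) - 1])
    coeffs, *_ = np.linalg.lstsq(basis, phase[mask], rcond=None)
    corrected = phase.copy()
    corrected[mask] -= basis @ coeffs
    return corrected

def deviation_in_waves(phase_a, phase_b, kx, ky, mask):
    """RMS difference of two corrected phase maps as a fraction of the
    wavelength, e.g. a return value of 1/2705 corresponds to lambda/2705."""
    a = remove_low_orders(phase_a, kx, ky, mask)
    b = remove_low_orders(phase_b, kx, ky, mask)
    rms_rad = np.sqrt(np.mean((a[mask] - b[mask]) ** 2))
    return rms_rad / (2 * np.pi)   # radians -> waves
```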
Systematic particle movement In order to investigate how critical the positioning of the particle is, the scatterer was intentionally displaced by a distance ±D along the x- or y-axis. The measurements at the two positions are then compared to a third measurement at the center. The test was done four times. For values of D = {20, 40, 60, 80, 100} nm, the resulting deviations are λ/{1515, 773, 539, 377, 277}. The results clearly show that larger misplacements of the nanoparticle away from the optical axis lead to increasing deviations. Most likely, this is due to the phase ramp that is imprinted onto the dipole wave once its origin is not on the optical axis anymore. The associated errors occur dominantly at the edge of the aperture of MO 1 where such a phase ramp can quickly result in a relative phase between the two interfering components that exceeds 2π. Such a large phase difference would require additional care in the evaluation algorithm. However, as the position of the particle can be comfortably aligned with a precision below 10 nm, this problem does not necessarily need to be solved. Displacements up to 60 nm still result in deviations smaller than the reproducibility values of the system.
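To put rough numbers on the phase-ramp argument above: a lateral displacement D of the dipole origin multiplies its far field by exp(i·k_x·D), and k_x spans ±k0·NA1 inside the aperture of MO 1, so the peak-to-valley ramp grows linearly with D. A back-of-the-envelope sketch (an illustration of the scaling only, not the authors' error model):

```python
import numpy as np

wavelength = 680e-9   # m, as used in the experiment
na1 = 0.9             # NA of the objective under test
k0 = 2 * np.pi / wavelength

for d in np.array([20, 40, 60, 80, 100]) * 1e-9:   # displacements in metres
    ramp_pv = 2 * k0 * na1 * d                      # peak-to-valley ramp, radians
    print(f"D = {d * 1e9:3.0f} nm  ->  ramp ~ {ramp_pv / (2 * np.pi):.2f} waves")
```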
2023-02-04T14:40:51.150Z
2021-11-02T00:00:00.000
{ "year": 2021, "sha1": "0f74d861fc2fff503e8df9df96f7da25bdc05b5e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41377-021-00663-x.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "0f74d861fc2fff503e8df9df96f7da25bdc05b5e", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [] }
29168965
pes2o/s2orc
v3-fos-license
Long-term Culture of Human iPS Cell-derived Telencephalic Neuron Aggregates on Collagen Gel . It takes several months to form the 3-dimensional morphology of the human embryonic brain. Therefore, establishing a long-term culture method for neuronal tissues derived from human induced pluripotent stem (iPS) cells is very important for studying human brain development. However, it is difficult to keep primary neurons alive for more than 3 weeks in culture. Moreover, long-term adherent culture to maintain the morphology of telencephalic neuron aggregates induced from human iPS cells is also difficult. Although collagen gel has been widely used to support long-term culture of cells, it is not clear whether human iPS cell-derived neuron aggregates can be cultured for long periods on this substrate. In the present study, we differentiated human iPS cells to telencephalic neuron aggregates and examined long-term culture of these aggregates on collagen gel. The results indicated that these aggregates could be cultured for over 3 months by adhering tightly onto collagen gel. Furthermore, telencephalic neuronal precursors within these aggregates matured over time and formed layered structures. Thus, long-term culture of telencephalic neuron aggregates derived from human iPS cells on collagen gel would be useful for studying human cerebral cortex development. The molecular mechanisms underlying human brain development are not well understood, which is mainly due to the ethical difficulties of using human embryonic brains and the lack of a culture method for long-term neuronal tissue culture in vitro. It takes several months to form the 3dimentional morphology of the human brain (Kim et al., 2008). Therefore, it is important for developmental research regarding the human brain to develop a long-term culture system using human iPS cell-derived neuronal tissue. The cerebral cortex is the outer layer of the cerebrum, which consists of six layers. The cerebral cortex functions in memory, attention, perception, awareness, thought, language and consciousness (Shipp, 2007). An efficient culture method for telencephalic selective neural differentiation of human ES/iPS cells has been established (Eiraku et al., 2008;Vaccarino et al., 2011). It has also been reported that a layered structure was observed inside these neuron aggregates at 46 days (Eiraku et al., 2008). Thus, the establishment of a long-term culture method is useful for analysing human cerebral cortex development. However, it is difficult to maintain the morphology of these neural aggregates for more than a month because these aggregates attach to each other or to the bottom of the plate. Collagen gel is an extracellular matrix that is known to be useful for long-term culture and maturation of neuronal cells (Bellamkonda et al., 1995;Krewson et al., 1994;O'Connor et al., 2001). Collagen is the main component of connective tissues in vertebrates and is the most abundant mammalian protein, accounting for about 20-30% of total body proteins (Lee et al., 2001). It has been suggested that neuronal cells adhere strongly and receive important signals from collagen gel, which play critical roles in cell development, function and survival (O'Connor et al., 2001). The use of collagen gel is expected to keep the morphology of these aggregates for a long time because it is less stiff than collagen-coated culture dishes. 
In the present study, we showed that human iPS cellderived telencephalic neuron aggregates could be cultured for more than 3 months on collagen gel. Furthermore, telencephalic neuronal precursors within the aggregates matured over time and formed layered structures. Human iPS cell culture The human iPS line 253G1 was obtained from CiRA (Center for iPS Cell Research and Application). These human iPS cells were maintained on a feeder layer of mitomycin-C-treated SNL cells (a mouse fibroblast STO cell line transformed with neomycin resistance and LIF genes; ReproCELL, Yokohama, Japan) in primate ES medium supplemented with 4 ng/mL basic fibroblast growth factor (ReproCELL) in a humidified atmosphere of 5% CO 2 and 95% air at 37°C. These iPS cells were passaged with Dissociation Solution for human ES/iPS cells (ReproCELL) every 4-5 days. Preparation of collagen-coated plates One volume of type I collagen solution (Cell-matrix, Type I-C; Nitta Gelatin Inc., Osaka, Japan) was mixed with nine volumes of 1 N HCl (pH 3.0) and kept on ice. This Type I-C collagen solution is used for collagen coating (Nagai et al., 2013;Iino et al., 2014). Aliquots (1.0 mL) of this collagen solution were placed in the wells of 6-well culture plates (Corning, Corning, NY, USA), and warmed to 37°C overnight. The coated plates were rinsed twice with sterile PBS and an appropriate volume of pre-warmed Neurobasal Medium was added. On day 25, cell aggregates were replaced onto collagen-coated 6-well culture plates. Percentage of number of human iPS cell-derived telencephalic neuron aggregates adhering to collagen-coated plates was calculated from 23 aggregates. Preparation of collagen gel Eight volumes of type I collagen solution (Cell-matrix, Type I-P; Nitta Gelatin Inc.) were mixed with one volume each of 10× concentrated DMEM (Invitrogen), reconstitution buffer (260 mM NaHCO 3 in 100 mL of 50 mM NaOH and 200 mM HEPES; Nitta Gelatin Inc.) and kept on ice. This Type I-P collagen solution is used for making collagen gel (Haga et al., 2005;Nagai et al., 2007). Aliquots (1.5 mL) of this reconstituted collagen solution were placed in the wells of 6-well culture plates (Corning), and immediately warmed to 37°C to allow gel formation. The gelformed plates were rinsed twice with sterile PBS and an appropriate volume of pre-warmed Neurobasal Medium was added. On day 25, cell aggregates were replaced onto collagen gel in 6-well culture plates. Percentage of number of human iPS cell-derived telencephalic neuron aggregates adhering to collagen gel was calculated from 25 aggregates. Ethics statement All experiments were approved by the Ethics Committee of Shionogi & Co., Ltd and conducted in accordance with the Declaration of Helsinki. Differentiation of telencephalic neurons from human iPS cells 253G1 An efficient culture method for telencephalic selective neural differentiation of human ES/iPS cells has been established using suspension culture in serum-free medium on low cell-adhesion 96-well plates (serum-free culture of embryoid body-like aggregates quickly [SFEBq]) (Eiraku et al., 2008;Vaccarino et al., 2011). To investigate whether telencephalic neurons were differentiated from human iPS 253G1 cells, we used a partially modified SFEBq culture method with small molecules (TGF-β inhibitor SB431542 and Wnt pathway inhibitor CKI-7) instead of recombinant Lefty-A protein and Dkk1 protein respectively as shown in Fig. 1A. 
When neural stem cells were induced from 253G1 cells, embryoid body-like aggregates cultured for 25 days became larger than those cultured for 18 days (Fig. 1B), and immunocytochemical staining showed that cell aggregates contained large numbers of TUJ1 (class-III β-tubulin)-positive neurons and NESTIN-positive neural precursors (Wada et al., 2009) (Fig. 1C). qPCR analysis also confirmed induction of TUJ1 expression (Fig. 1D). In contrast, there were few cells positive for the undifferentiated state marker OCT4 (Scholer et al., 1990) on day 18, and they disappeared completely on day 25 (Fig. 1E). The results of qPCR analysis indicated that the expression levels of OCT4 and another undifferentiated state marker, NANOG (Chambers et al., 2003), were decreased during this culture period (Fig. 1F, G). Furthermore, expression levels of the definitive endoderm marker SOX17 (Kanai-Azuma et al., 2002), the early mesoderm marker BRACHYURY (Wilkinson et al., 1990) and the basal epithelial marker cytokeratin 17 (CK17) (Troyanovsky et al., 1992) were also decreased (Fig. 1H, I, J). After culturing the cells for 18-25 days, the expression level of the telencephalic marker Forkhead box g1 (FOXG1) (Shimamura and Rubenstein, 1997) was increased, and strongly FOXG1-positive cells also increased in spite of the decrease in cell density within the aggregates (Fig. 1K, L). qPCR analysis indicated that the level of FOXG1 expression was increased by approximately sixfold compared to the human fetal cerebral cortex (Fig. 1K). (From the legend of Fig. 1: Expression levels were determined by qPCR analysis and normalised relative to that of GAPDH. "Fold expression" is shown as the ratio of day 18/day 0 or day 25/day 0. "Cortex" indicates human fetal cerebral cortex cDNA. (L) Cryosections of SFEBq-cultured human iPS cell aggregates on days 18 and 25 were stained for FOXG1 (purple). Scale bar, 100 μm.) Therefore, human iPS 253G1 cells could be differentiated efficiently into telencephalic neural cells when cultured under the partially modified SFEBq culture conditions. Long-term culture of human iPS cell-derived telencephalic neuron aggregates on collagen gel Next, we examined whether human iPS cell-derived telencephalic neuron aggregates could be cultured on the extracellular matrix for a long time. We seeded these aggregates on plates coated with type I collagen (collagen-coated plates), type I collagen gel (collagen gel) or poly-d-lysine/laminin/fibronectin (PDL/laminin/fibronectin-coated plates), and then counted the number of aggregates adhering on each plate. Human iPS cell-derived telencephalic neuron aggregates that had been seeded on PDL/laminin/fibronectin-coated plates or collagen-coated plates initially adhered to the plates, but they gradually came off the plates over time (Fig. 2A). Aggregates adhering to these plates on day 35 were collapsing with distorted neurites, as demonstrated by expression of the neuronal marker TUJ1 (Fig. 2B), and all of the aggregates came off the plate by day 45 (Fig. 2A). On the other hand, human iPS cell-derived telencephalic neuron aggregates that had been seeded on collagen gel could be cultured for more than 35 days and significantly extended neurites into the gel (Fig. 2C, D). In addition, these aggregates became larger in a time-dependent manner and could be cultured for more than 95 days (Fig. 2E, F). These results demonstrated that human iPS cell-derived telencephalic neuron aggregates could be cultured on the collagen gel for more than 3 months.
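The qPCR data above (and the maturation markers discussed next) are reported as GAPDH-normalised fold expression relative to day 0. The paper does not spell out the arithmetic, so the short sketch below assumes the widely used 2^-ΔΔCt method, which is only one possible way such ratios are computed; the Ct values shown are purely illustrative:

```python
def fold_expression(ct_target_dayx, ct_gapdh_dayx, ct_target_day0, ct_gapdh_day0):
    """GAPDH-normalised fold change of a target gene at day X relative to day 0,
    assuming the 2^-ddCt method (an assumption, not stated in the paper)."""
    d_ct_dayx = ct_target_dayx - ct_gapdh_dayx   # normalise to GAPDH at day X
    d_ct_day0 = ct_target_day0 - ct_gapdh_day0   # normalise to GAPDH at day 0
    return 2.0 ** -(d_ct_dayx - d_ct_day0)

# illustrative numbers: a ddCt of about -2.6 cycles corresponds to a ~6-fold increase
print(fold_expression(22.0, 18.0, 26.6, 20.0))   # ~6.1
```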
Human iPS cell-derived telencephalic neuron aggregates on collagen gel mature over time and form layered structures inside The cerebral cortex layer contains approximately 80% glutamatergic neurons, and this layered structure is formed by the precise migration and differentiation of neuronal progenitor cells (Wonders andAnderson, 2006, Molyneaux et al., 2007). As described above, human iPS cell-derived telencephalic neuron aggregates on the collagen gel became larger in a time-dependent manner (Fig. 2F) and extended neurites into the gel (Fig. 2C, D). These results suggested that human iPS cell-derived telencephalic neuron aggregates matured over time. To examine the maturation of human iPS cell-derived telencephalic neuron aggregates on the collagen gel, we cultured these aggregates for 3 months and analysed the expression of mature neuron markers. The results of qPCR analysis indicated that the expression levels of the mature neuron marker MAP2 (microtubuleassociated protein 2), glutamate receptor marker GLUR1 and vesicular glutamate transporter marker VGLUT1 were elevated until day 95. A gradual increase in expression of the presynaptic marker SYNAPSIN1 mRNA was also observed during the differentiation process. On the other hand, the expression level of the neural stem cell marker NESTIN was decreased over this schedule (Fig. 3A). These results indicated that the differentiation process proceeded from telencephalic precursors to differentiated glutamatergic neurons in human iPS cell-derived telencephalic neuron aggregates by culture on collagen gel over a long period. Previous studies have shown that a cerebral cortex-like layered structure is formed in human ES cell-derived tele-ncephalic neuron aggregates on day 46 in culture on PDL/ laminin/fibronectin-coated plates (Eiraku et al., 2008). As described above, human iPS cell-derived telencephalic neuron aggregates that had been seeded on collagen gel could be cultured for more than 35 days (Fig. 2C, D, E, F). We examined whether the cerebral cortex-like layered structure was formed in human iPS cell-derived telencephalic neuron aggregates cultured on collagen gel. Immunocytochemical analysis of these aggregates on day 35 revealed a mushroom-shaped structure formed by NESTIN-positive neural precursors (Fig. 3B). After culturing these aggregates on collagen gel, there were many NESTIN-positive layer-like structures and increased numbers of MAP2positive mature neurons around these layer-like structures on day 95 (Fig. 3C). These results indicated that layer-like structures were formed inside the human iPS cell-derived telencephalic neuron aggregates in long-term culture on collagen gel. Discussion Our data demonstrated that human iPS 253G1 cells could be differentiated efficiently into telencephalic neuron aggregates when cultured under partially modified SFEBq culture conditions using the low-cost and stable-activity chemical inhibitors instead of recombinant proteins. These aggregates could be cultured for more than 3 months on collagen gel. Furthermore, telencephalic neuronal precursors within the aggregates matured over time and formed layered structures. A previous study indicated that a cerebral cortex-like layered structure is formed in hES cellderived telencephalic neuron aggregates cultured on PDL/ laminin/fibronectin-coated plates (Eiraku et al., 2008). 
However, we found that human iPS cell-derived telencephalic neuron aggregates adhering to PDL/laminin/fibronectin-coated plates or collagen-coated plates gradually came off the substrate. This might be caused by a difference in the favored extracellular matrix for attachment between human iPS cells and ES cells (Lam and Longaker, 2012), but human iPS cell-derived telencephalic neuron aggregates that had been seeded on collagen gel could be cultured for more than 45 days (Fig. 2E, F). We suggest two possible reasons for these results. First, the human iPS cell-derived telencephalic neuron aggregates are fragile and are thus suitable for culture on collagen gel, which is softer than the collagen or PDL/laminin/fibronectin coating (Yunoki et al., 2011; Mizutani et al., 2007). Alternatively, cellular crawling into the 3D pore structure of the collagen gel contributed to the tight adhesion between these aggregates and the collagen gel. To test which hypothesis is correct, the use of collagen-coated acrylamide gel (soft and poreless) is preferable. In conclusion, our data indicated that collagen gel is more suitable for long-term culture of telencephalic neuron aggregates derived from human iPS cells than collagen-coated plates or PDL/laminin/fibronectin-coated plates. (From the legend of Fig. 2: SFEBq-cultured human iPS cell-derived telencephalic neuron aggregates on collagen-coated plates on day 35 were seen to be collapsing with distorted neurites. Scale bar, 500 μm. (B-(b)) These neurites were labelled with anti-TUJ1 antibody and stained using goat anti-mouse Alexa Fluor ® 488 (green) secondary antibody. Scale bar, 250 μm. (C-(a)) On the other hand, human iPS cell-derived telencephalic neuron aggregates cultured on collagen gel significantly extended neurites into the gel on day 35. Scale bar, 500 μm. (C-(b)) Neurites extended from human iPS cell-derived telencephalic neuron aggregates cultured on collagen gel were labelled with anti-TUJ1 antibody and stained using goat anti-mouse Alexa Fluor ® 555 (red) secondary antibody. Scale bar, 250 μm. Human iPS cell-derived telencephalic neuron aggregates cultured on collagen gel increased in size in a time-dependent manner on days (D) 53 and (E) 95. Scale bar, 500 μm. (F) Diameter of SFEBq-cultured human iPS cell aggregates on collagen gel. Diameter at each time point was calculated as the average diameter of 2-4 aggregates. Error bars show standard deviations.) In addition, the spherical morphology of human iPS cell-derived telencephalic neuron aggregates was maintained when cultured on collagen gel. However, it was difficult to keep the spherical morphology of human iPS cell-derived telencephalic neuron aggregates adhering to PDL/laminin/fibronectin-coated or collagen-coated plates (Fig. 2A, B). This is probably because human iPS cell-derived telencephalic neuron aggregates are suitable for culture on the softer collagen gel. In addition, the stiffness of collagen gel is close to that of the brain (0.1-1.0 kPa) (Engler et al., 2006; Janmey and Miller, 2011; Byfield et al., 2009). Thus, it was suggested that human iPS cell-derived telencephalic neuron aggregates on collagen gel could be cultured under conditions of stiffness close to that of the brain.
Furthermore, qPCR analysis indicated that the expression levels of the mature neuron marker MAP2, glutamate receptor marker GLUR1 and vesicular glutamate transporter marker VGLUT1 still increased in human iPS cell-derived telencephalic neuron aggregates cultured on collagen gel over a long period. In addition, this study revealed that telencephalic neuron aggregates derived from human iPS cells increased in size in a time-dependent manner (Fig. 2F), and significantly extended neurites into the collagen gel as observed by monitoring expression of the neuronal marker TUJ1 (Fig. 2C, D). It has been reported that soft fibronectin surfaces cause differentiation and increased neurite extension of mouse hippocampal neurons (Kostic et al., 2007). As described above, collagen gel is soft (0.1-1.0 kPa), similar to the brain. Cell function and differentiation have been reported to be regulated by substrate stiffness corresponding to the mechanical properties of the tissue in which the cells are normally located (Engler et al., 2004). (Fragment of a figure legend: "... were stained for the neural precursor marker NESTIN (red) and mature neuron marker MAP2 (green). Nuclei were counterstained with Hoechst 33342 (blue). A mushroom-shaped structure was enclosed by white lines, and the inner region was delineated by white-dotted lines. Scale bar, 100 μm.") It has also been reported that the majority of human mesenchymal stem cells exhibit a branched, filopodia-rich morphology and high levels of neurogenic transcript expression on soft collagen-coated gel (0.1-1.0 kPa) (Engler et al., 2006). Therefore, it was suggested that NESTIN-positive neural precursors were promoted to differentiate and increase neurite extension by adhering to collagen gel. In addition, cells that were positive for TUJ1, a neuronal marker, were present not only at the periphery of cellular aggregates but also in the inner part of the aggregates on the collagen gel, as shown in Fig. 2C(b). On the other hand, TUJ1-positive cells were present only at the periphery of cellular aggregates under the collagen-coated plate condition, as shown in Fig. 2B(b). These results suggested that the cells in the inner part of the aggregates differentiated more on the collagen gel than on the collagen-coated plate. As described above, soft fibronectin surfaces induced differentiation of mouse hippocampal neurons (Kostic et al., 2007). It was also reported that a cell stiffening response was induced due to substrate properties (Vichare et al., 2014). From these reports, we hypothesized that not only do the outer cells adhere to the collagen gel, but also the inner cells in the cellular aggregates might sense a soft extracellular environment and thus differentiate more efficiently. In this study, we established defined conditions for the long-term culture of human iPS cell-derived telencephalic neuron aggregates with layered structures using collagen gel. The formation of the layered structures inside the human iPS cell-derived telencephalic neuron aggregates is expected to be applicable to research on human brain development and the pathogenesis of neuronal developmental disorders, such as autism, Asperger's disorder and attention deficit hyperactivity disorder. Thus, our method for culturing human iPS cell-derived telencephalic neuron aggregates on collagen gel will be useful for a variety of neurodevelopmental studies.
2018-05-25T21:26:16.707Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "a392059147256765be404b6307d31de41112469f", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/csf/43/1/43_18002/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e827f61526d5cb69558bd5b93a6c35038d6cb658", "s2fieldsofstudy": [ "Biology", "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
254713996
pes2o/s2orc
v3-fos-license
Genome Size in the Arenaria ciliata Species Complex (Caryophyllaceae), with Special Focus on A. ciliata subsp. bernensis, a Narrow Endemic of the Swiss Northern Alps The genus Arenaria (Caryophyllaceae) comprises approximately 300 species worldwide; however, to date, just six of these taxa have been investigated in terms of their genome size. The main subject of the present study is the A. ciliata species complex, with special focus on A. ciliata subsp. bernensis, an endemic plant occurring in the Swiss Northern Alps. Altogether, 16 populations and 77 individuals of the A. ciliata complex have been sampled and their genome sizes were estimated using flow cytometry, including A. ciliata subsp. bernensis, A. ciliata s.str., A. multicaulis, and A. gothica. The Arenaria ciliata subsp. bernensis shows the highest 2c-value of 6.91 pg of DNA, while A. gothica showed 2c = 3.69 pg, A. ciliata s.str. 2c = 1.71 pg, and A. multicaulis 2c = 1.57 pg. These results confirm the very high ploidy level of A. ciliata subsp. bernensis (2n = 20x = 200) compared to other taxa in the complex, as detected by our chromosome counting and previously documented by earlier work. The genome size and, thus, also the ploidy level, is stable across the whole distribution area of this taxon. The present study delivers additional support for the taxonomic distinctiveness of the high alpine endemic A. ciliata subsp. bernensis, which strongly aligns with other differences in morphology, phylogeny, phenology, ecology, and plant communities, described previously. In affirming these differences, further support now exists to re-consider the species status of this taxon. Upgrading to full species rank would significantly improve the conservation prospects for this taxon, as, because of its precise ecological adaptation to alpine summit habitats, the A. ciliata subsp. bernensis faces acute threats from accelerated climate warming. Introduction Nuclear DNA amount and genome size (C-values) are important biodiversity features that have essential biological significance and many practical and predictive uses [1]. Cvalue information is widely used in various domains of biology [2], as the knowledge of genome size in a given taxon is of great importance when framing scientific questions or planning research [1]. Consequently, there is a large demand for C-value estimates for plant species. Flow cytometry has become the method of choice for measuring DNA content, particularly because its sample preparation and analysis protocols are convenient and rapid [3,4]. The estimation of genome size and ploidy using flow-cytometry is a key data source for the investigation of evolutionary and biogeographical processes, as well as taxonomic issues, both of wild and cultivated plant species and varieties (e.g., [5][6][7][8]). A comprehensive database of recorded genome sizes in plants worldwide is published regularly by the Kew Royal Botanic Gardens [9], with the most recent update (Release 7.1., April 2019) providing 2c values for 11,500 vascular plant species. This comprises just 3.7% of the approximatively 308,000 described plant species, globally [10]. Despite progress with other more localized database initiatives (e.g., [11]), as well as national inventories (e.g., [12] for the Czech Republic, and [13] for The Netherlands), the list of species and taxonomic groups awaiting investigation remains very long. This is also the case for the Arenaria ciliata (Caryophyllaceae) species complex. 
This is surprising, as the taxon is widespread in many European countries and has already been the subject of several karyological and taxonomic studies [14][15][16]. Although the genus Arenaria comprises approximatively 300 species worldwide [17], only six taxa have been investigated in terms of genome size [9], none of which belong to the A. ciliata species complex. The A. ciliata species complex comprises a group of poorly-differentiated arctic-alpine herbaceous taxa with overlapping morphological and ploidy identities [16,[18][19][20]. In northern Europe and in the Arctic, there are traditionally two taxa belonging to this group: A. norvegica Gunn. and A. ciliata subsp. pseudofrigida Ostenf. and O. C. Dahl, the latter reaching Svalbard and Franz Joseph Land to the North [20][21][22]. In the European Alpine System (EAS), and thus also in the Swiss Alps and the Jura Mountains, this species complex is represented by four taxa: A. ciliata s.str. L., A. ciliata subsp. bernensis Favarger, A. gothica Fr., and A. multicaulis L. [19,23]. The results of chloroplast DNA analyses suggest that the A. ciliata species complex is a monophyletic group [18,19]. The main focus of the present study is the status of the A. ciliata subsp. bernensis (Figure 1), an endemic taxon occurring exclusively in the Swiss Northern Alps [19,24,25]. It was discovered in 1955 by Swiss botanist Claude Favarger, a professor at the University of Neuchâtel, and described in 1963 [14]. Originally, the taxon was known only from the summit area of Gantrisch and Leiterenpass (Canton of Bern). However, recent studies have shown that it occurs on nearly all summits between Stockhorn in the Canton Bern and Moléson in the Canton of Fribourg (Figure 2), forming an arc of sky-island populations ca. 50 km across [19]. The taxon grows exclusively on shady, cool and steep slopes with northern exposition in the alpine zone above 1900-2000 m a.s.l. (Figure 1). The majority of populations are small (less than 100 individuals), and, globally, the taxon counts no more than ca. 4000 individuals [19]. According to Parisod [26], the A. ciliata subsp. bernensis is a neoendemic taxon, probably of allopolyploid origin, having recently formed close to the so-called Penninic-Savoyic zone of secondary contact in the North-Western Alps. Similarly, Favarger and Contandriopoulos [27] classified the A. ciliata subsp. bernensis as an apoendemic taxon of allopolyploid origin. Such endemics are often the result of a relatively rapid mixture between different floristic elements, for example, due to rapid migration events. Due to the geographic position of the A. ciliata subsp. bernensis, in relation to A. multicaulis and A. ciliata s.str. [23], a post glacial hybrid origin involving these two species is possible. Interestingly, Berthouzoz et al. [19], using chloroplast DNA markers, demonstrated that the A. ciliata subsp. bernensis is genetically closer to A. multicaulis than to A. ciliata s.str. Significantly, despite its very small distribution area, the taxon displays high genetic diversity, and this could also be consistent with the refugial survival of the A. ciliata subsp. bernensis during the Pleistocene glaciations (nunatak survival) [19]. The possibility of a polyploid speciation origin is supported by the fact that the A. ciliata subsp. bernensis presents a very high ploidy level (2n = 200), particularly in comparison with A. multicaulis (2n = 40), but also with A. ciliata s.str. (2n = 40-160) and A. gothica (2n = 100) [16,23,28,29]. 
Its putative hybrid origin and complex polyploidization history is probably one of the reasons researchers have hesitated in attributing it a species status [30], as the traditional species concept is difficult to apply to hybrids and polyploids [31]. The main aim of the present study was to deliver the first evaluation of the genome sizes in the Arenaria ciliata species complex using flow cytometry, with special focus on the narrow endemic A. ciliata subsp. bernensis. The following specific questions have been addressed: (1) What are the differences in genome size among the four taxa belonging to the A. ciliata group, occurring in the Alps and neighboring Jura Mountains (A. ciliata s.str., A. ciliata subsp. bernensis, A. gothica and A. multicaulis)? (2) What is the geographic pattern and stability of the genome size across the whole distribution area of the narrow endemic A. ciliata subsp. bernensis? (3) Do the obtained results corroborate with the ploidy levels of the four studied taxa? Based on the results for these investigations, we also set out to evaluate the implications of our work for the taxonomy and conservation of A. ciliata subsp. bernensis. Results The highest 2c values among all four of the taxa from the A. ciliata complex investigated in this study were recorded for A. ciliata subsp. bernensis, varying between 6.26 pg and 7.75 pg of DNA (Figures 3 and S1, Tables 1 and S1), with a mean 2c value of 6.91 pg. The recorded genome size of A. gothica reached approximately half of these 2c values and varied between 3.62 and 3.76 pg of DNA, with a mean 2c value of 3.69 pg. Finally, A. ciliata s.str. and A. multicaulis both showed similar, but much lower, values, with a mean 2c value of 1.71 for A. ciliata s.str. and a mean 2c value of 1.57 for A. multicaulis. The higher standard deviation value in the A. ciliata subsp. bernensis is due to a much larger sample size, of 57 individuals, analyzed for this taxon in comparison to the other three taxa (between five and ten individuals). Table 1. Genome size (mean ± standard deviation) in Arenaria ciliata subsp. bernensis in comparison with three other taxa from the A. ciliata species complex occurring in Switzerland.
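The 2c values summarised in Table 1 were obtained by flow cytometry. As a hedged illustration only — the internal reference standard, its 2C value and the staining protocol are given in the paper's Methods and are treated as hypothetical numbers here — relative genome size is typically derived from the ratio of the sample and standard G1 peak positions:

```python
def estimate_2c_pg(sample_peak_mean, standard_peak_mean, standard_2c_pg):
    """Flow-cytometric estimate of a sample's 2C DNA content (pg):
    sample 2C = standard 2C * (sample G1 peak mean / standard G1 peak mean).
    The identity and 2C value of the internal standard are assumptions here."""
    return standard_2c_pg * (sample_peak_mean / standard_peak_mean)

# illustrative numbers only: a sample peak at ~1.7x the position of a
# hypothetical 4.0 pg standard yields 2C ~ 6.9 pg, the mean reported for
# A. ciliata subsp. bernensis
print(estimate_2c_pg(sample_peak_mean=345.0, standard_peak_mean=200.0, standard_2c_pg=4.0))
```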
The results show clearly that the genome size of the A. ciliata subsp. bernensis is very stable across the whole distribution area of the taxon, thus indicating an invariant ploidy level for all of the investigated individuals (Figure 3, Tables 1 and S1). The direct counting of chromosome numbers in the selected samples from Dent de Brenleire (Figure S1) resulted in 2n = 20x = 200, and combined with the stable 2c values across all sites, indicate that the A. ciliata subsp. bernensis individuals and populations investigated in this study all show 2n = 200. Discussion Our study delivers the very first genome size (2c values) estimates for the members of the arctic-alpine Arenaria ciliata species complex (Table 1, Figure 3). This new data adds to the observed genome sizes and ploidy evaluations in the highly variable genus Arenaria, whose base chromosome number ranges between x = 9 (as observed in e.g., A. balearica) and x = 15 (A. saxifraga) [14]. The most frequent chromosome numbers observed are x = 11 (ca. 40 Arenaria spp.) and x = 10 (ca. 25 spp.), and the present data affirm that it is to this latter group that the A. ciliata species complex belongs. Genome Size Values in the Genus Arenaria Based on the Kew Plant DNA C-values Database [9] and the published literature, 2c values are available for just six Arenaria taxa (Table 2). The diploid A. leptoclados has a recorded 2c value of 0.79 pg (2n = 2x = 20) [13]; for A. gracilis, 2c = 1.19 pg (2n = 2x = 24) [32]; and the diploid taxa from the A. grandiflora complex possess 2c values ranging between 2.11 and 2.70 pg (2n = 2x = 24) [33]. For the tetraploid A. serpyllifolia, 2c values between 1.41 and 1.60 pg (2n = 4x = 40) were recorded [12,13,34]. The Arenaria tetraquetra subsp. amabilis displays 2c = 1.29 pg (2n = 4x = 40) [35], and tetraploid taxa from the A. grandiflora complex have 2c values between 4.24 and 5.27 pg (2n = 4x = 44) [12,33]. For A. deflexa, with an unrecorded ploidy level, the 2c value = 2.04 pg [36]. To date, no genome size values are available for the Arenaria taxa with a ploidy level higher than 4x. From the available published values for Arenaria, it is evident that higher ploidy levels are generally (but not always) associated with higher observed 2c values; however, the 2c values appear to be consistently higher for taxa with higher base chromosome numbers (e.g., x = 12), compared to other taxa (e.g., x = 10) within the same ploidy level. In this context, the results presented here are of special relevance in relation to other Arenaria taxa with the same basic chromosome numbers as those observed for the A. ciliata complex (x = 10), and also those with the same ploidy level (2x diploid). Arenaria ciliata s.str. and A. multicaulis Among the four A. ciliata taxa investigated in our study (Table 1), the lowest 2c values were obtained for A. ciliata s.str. (mean 2c = 1.71 pg) and A. multicaulis (mean 2c = 1.57 pg). These values are similar to the two tetraploid taxa A. serpyllifolia and A. tetraquetra (both with basic chromosome number x = 10) (Table 2). Therefore, our results corroborate with the published chromosome numbers for A. multicaulis (2n = 4x = 40, Lauber et al. 2018). According to Favarger [14] and Abukrees et al. [16], the chromosomal variability of A. ciliata s.str. is much higher, when analyzing plants from different alpine regions, with 2n-values between 40 and 160. Interestingly, all plants of this taxon analyzed in our study (Swiss Northern Alps) seem to be locally invariant and tetraploids. Since both taxa (A. multicaulis and A. ciliata s.str.) possess a relatively large distribution area in the Alps and neighboring mountain chains [30], wider investigations covering the whole distribution range are needed to capture the full level of variation in the genome size values. Arenaria gothica This taxon is a European boreal-montane plant element, possessing only few and highly disjunct occurrences in Jura Mountains (Lac de Joux, Switzerland) and in Scandinavia (mainly isle of Gotland, Sweden) [28,37]. In Switzerland, the species is extremely difficult to study, as it appears exclusively on the exondated shores of the Lac de Joux, and only during exceptional drought periods [38]. The most recent observed appearance of the population was in 2003, and the plant accessions used in this study are from an ex situ culture of plants collected at this time and maintained at the Botanical Garden of the University of Fribourg (Switzerland).
Arenaria gothica is a high polyploid taxon with 2n = 10x = 100 [23,29,37]. The genome size estimation in the present study indirectly confirms this recorded chromosome number (Table 1), with the mean 2c value of A. gothica at 3.70 pg being approximately half of the 2c-value of the A. ciliata subsp. bernensis (mean 2c = 6.91 pg; 2n = 20x = 200) (Figure 3). The relatively small variation in genome size values for A. gothica compared to the other sampled taxa may be an artefact of the long-time cultivation in an ex situ collection of closely related individuals. Arenaria ciliata subsp. bernensis This narrow endemic taxon, which was the main focus of the present study, shows the highest known 2c values (between 6.26 and 7.75 pg), not only within the A. ciliata species complex (Table 1, Figure 3), but also among all other species investigated thus far in the genus Arenaria (Table 2). Given the trend evident across the genus (Table 2), this result is consistent with expectations, due to the known high ploidy level for this taxon (2n = 20x = 200), as affirmed in our own observations (Figure S1) and the published literature [14,16,39,40]. Interestingly, the genome size varies only slightly, indicating that ploidy levels appear to be stable across the whole distribution area of the A. ciliata subsp. bernensis (Figure 2). This situation presents a contrast to the highly variable chromosome numbers in the closely related A. ciliata s.str., with 2n ranging between 40 and 160 [16,23]. In his exhaustive studies on the A. ciliata species complex [39,40], and also in the publication describing the A. ciliata subsp. bernensis [14], Favarger was not able to count the exact chromosome number, giving either 2n = 200 or 2n = 240, or writing "env. 240". The latter value was then adopted and repeated in all standard works of the Swiss flora (e.g., [23]). In contrast, our study supports the conclusion of Abukrees et al. [16], who reported 2n = 200 for the A. ciliata subsp. bernensis. Berthouzoz et al. [19] highlighted the presence of at least a few irregular flowers with 4-6 styles, 12-16 stamens and 6-9 petals per population of the A. ciliata subsp. bernensis (Figure 1D). One possibility arising from this observation was that the variation in floral regularity might be associated with polyploid status. To address this question in the present study, we collected several plants with six and nine petals and analyzed their genome size (Table S1). The 2c values of these plants are not significantly different in comparison to other regular individuals, indicating no correlation between such morphological changes and the ploidy level. This finding would appear to have importance beyond the A. ciliata subsp. bernensis, as irregular flowers are also occasionally observed in many populations of other taxa from the A. ciliata species complex across Europe and the arctic (C. Meade, personal observation). Implications for Taxonomy and Conservation of A. ciliata subsp. bernensis The present study delivers additional and significant evidence regarding the taxonomic distinctness of the high alpine endemic A. ciliata subsp. bernensis. Our new evidence soundly aligns with other differences in morphology, phylogeny, phenology, ecology and associated plant communities, etc., as described by Favarger [14] and further explored by Berthouzoz et al. [19]. Importantly, in displaying an elevated but stable ploidy level, strongly differentiated compared to adjacent taxa in the A.
ciliata species complex occurring in the western Alps and in the Jura Mountains, while maintaining a distinct restricted distribution and habitat ecology, the taxon would appear to merit a separate species status. The type specimens are conserved in the herbarium of the University of Neuchâtel (Switzerland). Its typification based on the original collection and new results and rising it to the species status is thus long overdue. It is important to note that the current subspecies rank slows down, or even completely hinders, the research and development of targeted protective measures for this taxon, as it is not always accepted and included in regional and national floras (e.g., [16,41]). Growing mainly between 2000 and 2350 m a.s.l., and due to its preferences and ecological adaptation to the high summits of the Northern Alps [19], the taxon belongs in the group of populations whose habitat faces the most acute threats from accelerating climate change [42]. In this context, although human-mediated global warming may be responsible for population declines of many alpine-arctic plants [43], Körner and Hiltbrunner [44] have postulated that high-altitude species are potentially very resistant to the impacts of climate change, particularly in relation to the exploitation of refugial microhabitats. However, the A. ciliata subsp. bernensis grows exclusively at the very top of the summits and in a geographically very small and isolated area. For this reason, these populations appear to have very limited scope to exploit the mosaic of micro-environmental conditions that may assist other high-altitude plants and climate relicts, to escape into neighboring microhabitats. Taxon Identification in the A. ciliata Species Complex The identity of Arenaria ciliata subsp. bernensis individuals were determined in the field, according to the description in Favarger [14,39,40], Lauber et al. [23] and Berthouzoz et al. [19]. The most important characteristics that facilitate the assignment to this taxon, compared to other Arenaria taxa in the Swiss Northern Alps, are: (1) large and solitary flowers, ca. 2 cm in diameter; (2) the loose habit of the whole plant with long shoots; and (3) the presence of (at least a few) irregular flowers per population, with higher numbers of petals, stamens and styles (Figure 1). In comparison, the flowers of A. multicaulis are only 1 cm in diameter and their inflorescence usually possesses 5-7 flowers. Arenaria ciliata s.str. exhibits a very compact habit with short pulvinate shoots. The fourth taxon used in this study, A. gothica, is morphologically similar to the subsp. bernensis (loose habit) but its petals are much smaller (4-4.5 mm) [23]. In addition, as in Central Europe A. gothica occurs exclusively along the well-studied shores of Lac des Joux in Switzerland, where no other taxa from the A. ciliata complex are recorded, the discrimination of this taxon is relatively uncomplicated [38]. Sampling of Plant Material The plant materials of A. ciliata s.str., A. ciliata subsp. bernensis, and A. multicaulis were collected in August 2022. In total, 57 plants of A. ciliata subsp. bernensis were collected from 6 summit areas, covering the whole known distribution of this taxon from Stockhorn (Canton of Bern) to Moléson (Canton of Fribourg) (Figure 2, Table S1). Plants with irregular flowers, with 6 and 9 petals ( Figure 1D), were also collected in order to test the correlation of such morphological anomaly with the genome size and ploidy level (Table S1). 
Additionally, five individuals of A. ciliata s.str. were sampled in the Moléson and Vanil Noir summit areas, and five individuals of A. multicaulis in the Gantrisch and Vanil Noir summit areas (Table S1). The plant material of A. gothica (10 individuals) was collected in October 2022 from ex situ culture, grown from seeds collected in 2003 from Lac de Joux, the only population of the taxon in Switzerland [38]. The plant material (small portion of flowering stem with flowers) was silica dried and kept for ca. 4 weeks in plastic bags prior to flow cytometry analyses. Flow Cytometry Analysis Approximately 1 cm 2 of silica dried leaves of the Arenaria samples were mixed with 1 cm 2 of fresh leaves of the standard plant (Allium schoenoprasum, genome size 2c = 15.03 pg). This was chopped with a sharp razor blade to release the nuclei in 100 µL of Cysrain nuclei extraction buffer (Sysmex, Norderstedt, Germany, https://eu.sysmexflowcytometry.com, accessed on 9 November 2022). The obtained suspension was then sieved through a 40 µm filter, and 1.5 mL of Cystain Pi (propidium iodide) absolute P staining buffer was added. After one hour, the fluorescence of nuclei in the suspension was measured using Sysmex ploidy analyzer (Sysmex, Norderstedt, Germany). The flow cytometry analyses were carried out by Plant Cytometry Services, Didam, The Netherlands, http://www.plantcytometry.nl. Confocal Microscopy Chromosome counting, using confocal microscopy, was performed only for plants of the A. ciliata subsp. bernensis, the main focus of the present study, grown from seeds collected in Dent de Brenleire (Vanil Noir summit area, Canton of Fribourg). Newly developed shoots were cut from living plants and stored in distilled sterile water for 24 h at 4 • C. For pretreatment, the axillary and apical buds were then cut from the tissue using a dissecting razor and placed in 1.5 mL tubes containing 0.002 mol/L 8-hydroxyquinoline solution (Sigma, Arklow, Ireland), for 4 h at 20 • C. Fixation was carried out in a mixture of 98% 3:1 absolute ethanol: glacial acetic acid (Carnoy's solution), for at least 1 h, at 4 • C. The buds were then washed with distilled water for 5 min. Bud hydrolysis was completed with a solution of 1N HCl (Sigma) at 60 • C for 5-10 min. Following a 2 min rinse in distilled water, the buds were incubated in 50% Schiff's reagent (Feulgen stain) (VWR Chemicals, Leuven, Belgium), for 20 min at room temperature, and then washed with 45% acetic acid, three times, for 5 min each time. The buds were then transferred to a clean slide and covered with 45% acetic acid to prevent drying, and from this stock, one or two buds were placed on a new glass slide and covered with a small drop of acetic acid. Under a dissecting microscope, the epidermis cells were carefully removed by using forceps and a scalpel blade. Using a teasing needle and scalpel, the exposed meristem cells were then separated out as much as possible to form a single layer in order to enable the clear identification of individual cells upon squashing; then, a cover slip was applied. A piece of filter paper was placed over the cover slip and then pressed firmly with the thumb to flatten the cells and to remove excess acetic acid. Using an Olympus FV1000 confocal microscope (Olympus Europa GMBH, Hamburg, Germany) under standard PI (propidium iodide) excitation settings, the Fuelgenstained chromosomes were then counted by reviewing the layered three-dimensional cell section images, an approach that minimizes halation-related miscounting. 
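For readers who want to retrace how 2c values are obtained from the flow cytometry runs described above, the short sketch below shows the standard internal-standard ratio calculation. It assumes that the mean fluorescence positions of the sample and Allium schoenoprasum (2c = 15.03 pg) G1 peaks have already been read off the ploidy analyzer output; the peak values used here are hypothetical and only chosen to land near the reported mean for A. ciliata subsp. bernensis.

```python
# Minimal sketch of internal-standard genome size estimation (assumed workflow,
# not the vendor software itself): the sample 2c value scales with the ratio of
# its G1 peak position to that of the co-chopped standard.

ALLIUM_2C_PG = 15.03  # 2c value of the Allium schoenoprasum standard (pg DNA)

def estimate_2c(sample_peak_mean: float, standard_peak_mean: float,
                standard_2c: float = ALLIUM_2C_PG) -> float:
    """Return the sample 2c value (pg) from relative PI fluorescence peak positions."""
    return standard_2c * sample_peak_mean / standard_peak_mean

# Hypothetical peak channel positions for one A. ciliata subsp. bernensis run:
sample_2c = estimate_2c(sample_peak_mean=230.0, standard_peak_mean=500.0)
print(f"Estimated 2c value: {sample_2c:.2f} pg")  # ~6.91 pg, close to the reported mean
```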
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/plants11243489/s1, Table S1: Characterization of all collected taxa and samples from the Arenaria ciliata species complex, with the corresponding genome sizes (2c values in pg of DNA). Irregular floral morphology: * plants with 6 petals, ** plants with 9 petals. Figure S1: A-Confocal micrograph image of Feulgen-stained late metaphase chromosomes in Arenaria ciliata subsp. bernensis from Dent de Brenleire (2n = 200, Fribourg, Switzerland). Image scale 1000×. B-Genome size estimation for A. ciliata subsp. bernensis from Dent de Brenleire using flow cytometry (for more details see Materials and Methods).
2022-12-16T16:22:31.691Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "4c3dc5a23798580bc4871ceddb7a4848d39442ce", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/11/24/3489/pdf?version=1670932271", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ef1848d5f0004988c51d4445969e78574a4c5b11", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233926385
pes2o/s2orc
v3-fos-license
Large Eddy Simulation of Multi-Phase Flow and Slag Entrapment in Molds with Different Widths : Slag entrapment is a critical problem that affects the quality of steel. In this work, a three-dimensional model is established to simulate the slag entrapment phenomenon, mainly focusing on the slag entrapment phenomenon at the interface between slag and steel in molds with different widths. The large eddy simulation (LES) model and discrete particle model (DPM) are used to simulate the movements of bubbles. The interactions between phases involve two-way coupling. The accuracy of our mathematical model is validated by comparing slag–metal interface fluctuations with practical measurements. The results reveal that the average interface velocity and transverse velocity decrease as the mold width increases, however, they cannot represent the severity of slag entrapment at the interface between slag and steel. Due to the influence of bubble motion behavior, the maximum interface velocity increases with mold width and causes slag entrapment readily, which can reflect the severity of slag entrapment. On this basis, by monitoring the change of impact depths in different molds, a new dimensionless number “C” is found to reveal the severity of slag entrapment at the interface between slag and steel. The results show that the criterion number C increases with mold width, which is consistent with the results of flaw detection. Therefore, criterion number C can be used to reflect the severity of slag entrapment in different molds. Introduction The mold is an important part of steel metallurgy and can be called the "heart" of a continuous casting machine. During this process, the effect of powder added into the mold can be summarized by three aspects: (1) preventing oxidation of molten steel; (2) absorbing non-metallic inclusions; (3) filling the gap between the mold and the slab surface to improve heat transfer and lubricating the mold surface. However, due to the instability of level fluctuation in the mold, the liquid slag is often incorporated into the molten steel, damaging mechanical properties of steel products, especially for automobile steel. Therefore, slag entrapment has become a critical problem that requires serious concern. Due to "black box" operation, the slag entrapment cannot be seen during the real casting process. Therefore, visual models have been developed to reproduce the slag entrapment at the interface between slag and steel. Among these studies, Gguta et al. [1] found an apparent asymmetric flow near the nozzle. Afterwards, Li et al. [2] captured the asymmetric vortex distribution of the steel/slag interface by injecting black sesames into water, and developed a mathematical model to predict the asymmetry flow in the mold. The study of Iguchi et al. [3] showed that the slag entrapment caused by shearing strength is the main mechanism for the slag entrapment during high casting speed operation, and that the interfacial tension has a significant influence on it. Watanabe and Yamashita Metals 2021, 11, 374 2 of 13 et al. [4,5] studied slag entrapment by argon blowing, finding that the maximum depth of slag involved would not exceed three times its diameter. In addition, according to Savolainen et al. [6], the effect of the viscosity of slag on the formation and size of slag droplets should be paid attention to take full control of it. Yamada et al. 
[7] suggested that argon bubbles in a mold become the desirable sites where alumina inclusions are gathered and form large alumina clusters. Despite these studies, some researchers focused on the influence of liquid properties on slag entrapment in the mold, such as liquid metal density and interfacial tension within the slag-metal interface [8][9][10], in which the critical slag entrapment velocity can be determined. However, this velocity can only predict the slag entrapment caused by shearing stress. Lei Hong et al. [11] proposed another theoretical equation for calculating shear entrapment, taking the viscosity of slag into account. However, Chung and Cramb [12] and others believe that the interfacial tension coefficient should be reduced to about three percent of the original value due to the existence of interfacial reactions. Harman [13] considered nine factors and obtained another formula for calculating critical velocity through non-linear fitting. The above studies are of great significance for understanding the slag entrapment phenomenon. However, the physical models cannot meet all the similarity criteria at the same time, which leads to some limitations for slag entrapment results. With the rapid development of computer science, computational fluid dynamics (CFD) technology has been an important tool in metallurgical process research, and its advantages are increasingly prominent. Saeedipour et al. [14] established a three-phase mathematical model to study the interface wave problem. Liu et al. [15] established a quasi-four-phase model to study the effect of bubbles on the interface fluctuation of the slag-metal interface. Li et al. [16] analyzed three kinds of slag entrapment mechanisms, and gave the transient process of mold slag entrapment in molten steel. Although these studies are of great significance to the study of slag entrapment, the relationship between mold impact depth and velocity, especially the association with slag entrapment, was not explored. There is still a lack of effective evaluation criteria to predict the effect of slag entrapment. In this work, the movement behavior of mold slag in the mold is studied to reveal the influence of mold structure on slag entrapment and a theoretical foundation for mold width adjustment is laid. The innovations of this paper are composed of three parts: Firstly, the slag entrapment between different molds are elaborated in detail. (2) Secondly, a mathematical model of four-phase (slag-metal-gas-air) flow is established to explain differences in slag entrapment in (1). Thirdly, a new dimensionless value is established to characterize the severity of slag entrapment in molds. The research results can act as a guide for the continuous casting process. Basic Assumptions To simplify the calculation, the mathematical model used in this work is based on the following assumptions: (i) Liquid steel is regarded as a Newtonian fluid, and its basic parameters, such as density and viscosity, are considered as constants. (ii) The heat transfer and solidification process between molten steel and cooling water are not considered, and the thermal characteristics of slag are ignored. (iii) The discrete phase bubble is assumed to be spherical, and its size change is ignored in the process of floating. (iv) The taper of the mold, as well as heat transfer between the mold and slab, are all ignored. Governing Equations There are four phases existing in the mold: molten steel, liquid slag, air, and argon. 
In this work, we use the volume of fluid (VOF) method to describe interactions between steel, slag, and air and adopt a discrete particle model (DPM) to track trajectories of argon bubbles. The interactions between these phases involve two-way coupling, following the law of Newton. The continuity equation for these phases can be written as follows: where ρ k is the density of the phase, the foot mark k representing the phases u m is the velocity of the mixture phase, α k is the volume fraction of each phase, which fulfills the equation α l + α s + α g = 1. Large eddy simulation (LES) is used to solve the Navier-Stokes (N-S) equations of fluid flow: In this equation, fluid motion is solved by a set of equations. The term ρ m represents the density of the mixture phase, P is static pressure. The molecular viscosity µ m and turbulent viscosity µ t in the equation are all weighted average values of the volume fraction of each phase, and the effective viscosity µ effect = µ m + µ t . F is the forces acting on bubbles. The term F γ is the interface tension between these phases. The expression of subgrid-scale stress τ ij is as follows: where the term τ kk is the isotropic part of the subgrid scale, δ ij is the Kronecker symbol, S ij is the strain rate, turbulent viscosity µ t = ρ m L 2 s S . The calculation formula of mixing length is as follows: where κ is the von Karman constant, taken as 0.4, d is the vertical distance from the fluid to the wall, C s is the Smagorinsky constant, taken as 0.2. Discrete Particle Model (DPM) The argon bubbles injected from the nozzle easily float up and escape from the upper surface of the mold. In this process, the momentum exchanges between the argon and steel are treated as two-way coupling, which obeys the second law of Newton: where u p and m p are the velocity and mass of particles, respectively, and F is the resultant force acting on the bubbles, the expression of which can be written as follows: where the terms on the right side of the formula are gravity, buoyancy force, pressure gradient force, drag force, lift force, and virtual mass force, respectively. The calculation equations [17][18][19][20] can be found in Table 1. The bubble size ranges from 0.5-15 mm, following the Rosin-Rammler law: where the variable Y d is the mass fraction of bubbles whose diameters are greater than d. The averaged bubble diameter d m = 5 mm, and the spread parameter n = 2. Source Term Formula Annotations Buoyancy force plus gravity force, The net effect acts on the difference between particle and fluid densities. The variable g is gravity acceleration, d p is particle diameter. Where drag coefficient ,and u p is particle velocity, Re p is particle Reynolds number. Pressure gradient force F p ρ m /ρ p u p ∇u m Pressure gradient force is significant when ρ/ρ p ≥ 0.1. Where the virtual mass force coefficient C v−m = 0.5. Boundary Conditions and Numerical Details As shown in Figure 1, the whole physical model consists of four parts: nozzle, mold, foot roller zone, and secondary cooling zone. In the production process, the thicknesses of argon and the slag layer are 40 mm and 50 mm, respectively. After entering the mold, the argon gas expands rapidly, resulting in a decrease in the density of argon gas. The bubble density in the molten steel can be calculated through the ideal gas law, as shown in Equation (8). The steel velocity from the nozzle is calculated by the casting speed, and a no-slip condition is used to perform the wall treatment of the mold. 
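As an illustration of how the injected bubble population described above can be set up, the sketch below samples diameters from a Rosin-Rammler distribution with the stated mean diameter (5 mm) and spread parameter (n = 2), clamped to the reported 0.5-15 mm range, and rescales the argon density from 1.78 kg/m3 at 20 °C to mold temperature with the ideal gas law at constant pressure. The steel temperature of 1823 K is an assumed value for illustration, and the sampling routine is our own sketch rather than the solver's internal implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MEAN, SPREAD_N = 5.0e-3, 2.0          # Rosin-Rammler parameters from the text
D_MIN, D_MAX = 0.5e-3, 15.0e-3          # reported bubble diameter range (m)

def sample_bubble_diameters(n_bubbles: int) -> np.ndarray:
    """Sample diameters whose mass fraction above d follows Y_d = exp(-(d/d_mean)**n)."""
    # Inverse transform: d = d_mean * (-ln(Y))**(1/n), with Y uniform in (0, 1).
    y = rng.uniform(size=n_bubbles)
    d = D_MEAN * (-np.log(y)) ** (1.0 / SPREAD_N)
    return np.clip(d, D_MIN, D_MAX)      # clamp to the reported size range

def argon_density(T_steel: float = 1823.0, rho_20C: float = 1.78,
                  T_ref: float = 293.15) -> float:
    """Ideal gas scaling of the argon density at (assumed) constant atmospheric pressure."""
    return rho_20C * T_ref / T_steel

print(sample_bubble_diameters(5))
print(f"Argon density near the meniscus: {argon_density():.3f} kg/m3")  # ~0.29 kg/m3
```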
In this work, three typical widths of the mold were taken to study the slag entrapment inside the mold, and the cross sections are 250 mm × 1100 mm, 250 mm × 1400 mm, and 250 mm × 1650 mm, respectively. In Equation (8), the argon density at 20 °C is 1.78 kg/m3 and the pressure P is standard atmospheric pressure, which can be assumed to be equal to the standard atmosphere near the top of the mold. In order to obtain a fine vortex structure, the mesh is refined near the slag-metal interface and the gas-slag interface. The whole region contains 2.1 × 10^6 structural grids with good independence. The calculation step is set to 0.01 s and the total calculation time is 100 s. To save calculation time and cost, the k-ε model is used to calculate the steady-state field, and then switched to the LES model to simulate the transient flow field. The specific model parameters are shown in Table 2. The movements of argon bubbles consist of three key stages, as shown in Figure 2: (1) entering the mold from the nozzle and being carried downward by the nozzle jet; (2) floating up into the molten steel and passing through the slag-metal interface and entering into the slag; (3) collapsing and subsequently disappearing after floating near the gas/slag interface. The movements of bubbles in the continuous phases and the escape near the gas/slag interface are achieved by coding user-defined functions (UDFs). In addition, the impact point is defined as the position where the vertical shear force equals zero on the center plane of the mold. Analysis of Inclusions on Slab Figure 3 shows the surface defects after the rolling process, which are caused by large inclusions. It is clearly seen that many black lines on the surface of the slabs are stretching across the whole surface of the slab, the lengths of which are more than 90 millimeters. This can degrade the quality of steel significantly. The "black line" problem is a significant problem that constrains the production of automobile steel. Figure 4 shows the compositions of inclusions inside the black line characterized by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). The results show that Na, Al, Si, Ca, O, and other elements exist in the black lines, whose compositions are similar to that of the liquid slag in the mold.
Therefore, it is certain that the black line defect of hot-rolled sheets is caused by slag entrapment. Figure 5 shows the slag entrapment ratio in slabs with different widths. The slabs are tested after the rolling process. The ratio is defined as the percentage of slabs with slag entrapment problems against totally tested slabs. It can be seen from Figure 5 that the larger the width is, the higher the slag entrapment ratio will be. There must be some special reasons behind this phenomenon, however, few works have reported this problem and tendency. So, in this work, the reason for this phenomenon will be explained in detail, through our mathematical model. Level Height of Slag-Metal Interface Firstly, the accuracy of the mathematical model is verified through interface-level detections obtained from an eddy current sensor. The current sensor is installed on the top of the mold (Figure 6a), which can reveal the distance between the mold top and slag-metal interface. The installation position is in the middle of the mold, and the distance between the two probes is 0.4 m, as shown in Figure 6a. It can be seen from Figure 6b that the actual distance between the mold top and slag-metal interface is 83-93 mm. By comparison, the simulation results show that the value is 97-100 mm. It is easily seen that the simulated result is a little higher than the experimental result, the reason for which is the fact that the height of slag-metal interface is enhanced by bulging. Generally, the simulated results agree well with the experimental results, implying that the mathematical model established in this work is reliable (Figure 6b).
Transient Flow Spectrum Analysis of Mold Figure 7 shows the transient molten steel flow patterns of different cross sections. It can be seen that the flow field in the mold is asymmetric and unstable, with multiple vortices of different scales. Generally, the liquid steel that flows out of the nozzle and impinges on the narrow surface of the mold may be divided into two streams: upper recirculation flow and lower recirculation flow. Obviously, increasing the mold width leads to smaller impacting velocity and deeper injection of liquid steel. Furthermore, the maximum velocity of the slag-metal interface also increases, as shown in Figure 8.
The phenomenon can be attributed to the distribution of argon and the velocity field, as shown in Figure 9. The argon bubbles easily float up with a relatively high speed, even higher than the injection velocity of molten steel. When a bubble floats to the gas-slag interface, it collapses and pushes slag firmly into the steel, resulting in an increase in the interface velocity of the slag-metal interface. Therefore, the movements of argon should be strictly controlled in the continuous casting process. Figure 10 shows the impact depth and impact velocity in molds with different widths. Here, the impact depth is defined as the distance between the slag-metal interface and impact point; and the impact velocity is defined as the velocity of the impact point. It can be seen that the flowing strand develops more fully and the impact depth becomes larger with the increasing of mold width. However, due to the flow loss in the flow process, the larger the mold width is, the smaller the impact velocity will be.
Analysis of Velocity Characteristics of Slag-Metal Interface The average velocity fluctuation of the slag-metal interface is monitored and the results are shown in Figure 11. It can be seen that the fluctuations are quite different in different molds. On the one hand, the average velocity is 0.0732 m·s−1 with a small width of the mold and decreases to 0.0693 m·s−1 with a medium width of the mold. On the other hand, the fluctuation of the velocity with a large width of the mold is more significant, and the velocity continues to decrease with an average value of 0.0647 m·s−1. This phenomenon is mainly attributed to the loss of molten steel flow. In a word, with the increase in the mold section, the velocity of the slag-metal interface decreases. However, there is little difference in the interface velocity between slag and liquid metal in these three kinds of molds, which cannot explain why the slag entrapment increases with different widths. Therefore, the average velocity of the slag-metal interface cannot be used to evaluate slag entrapment severity. The horizontal velocity fluctuation of the slag-metal interface is shown in Figure 12. The velocity is signed: the negative value of velocity points to the narrow side of the mold, and the positive value points to the direction of the submerged nozzle. In the mold with a small width, most of the velocity values are positive, indicating that the up-flow is strong, corresponding to a flow pattern of double-roll flow (see Figure 7a). However, with the medium width of the mold, the value is mostly negative, indicating that the flow has been transformed into single-roll flow and the flow rate is reduced. When the width of the mold reaches 1650 mm, the flow is completely transformed into single-roll flow, while the flow rate increases. Therefore, the horizontal velocity cannot explain the reasons for the differences of slag entrapment as well.
Figure 13 shows the variation of the maximum interface velocity with time. It can be seen that with the rise in mold width, the maximum velocity of the slag-metal interface also increases, so the probability of slag entrapment increases. This explains the phenomenon that the number of slag droplets entrapped in the mold increases with the increase in the cross section width of the mold, as shown in Figure 3. In addition, it can be found that with the increase in cross section width, the amplitude of velocity fluctuation also increases, that is, a large cross section width easily causes slag entrapment, which is consistent with the changing trend of slag drop number (see Figure 10). Therefore, the maximum interface velocity can reflect the severity of slag entrapment in different molds, and more attention should be paid to it in order to control slag entrapment in steel. Figure 14 reveals the number of slag drops in molds of different widths. It can be seen that the number of slag drops increases with the mold width. The reason for this phenomenon is related to argon blowing, that is, a more considerable amount of argon gas retained in the wide-faced mold and a longer floating time lead to more argon bubbles retained in the slag layer, accompanied by the enhanced emulsification effect. The velocity of bubbles is so high that they may significantly increase the velocity of the steel around them. Therefore, a large mold is more likely to cause slag entrapment.
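To make the distinction between the velocity indicators discussed above concrete, the short sketch below computes the time-averaged and maximum interface speed from a monitored velocity history. The synthetic signal is only a stand-in for the LES interface monitors; it is not taken from the simulations reported here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a monitored slag-metal interface speed history (m/s), sampled
# every 0.01 s as in the simulation; real values would come from the LES monitors.
t = np.arange(0.0, 100.0, 0.01)
speed = 0.07 + 0.01 * np.sin(0.5 * t) + 0.02 * np.abs(rng.normal(size=t.size))

mean_speed = speed.mean()   # the "average interface velocity" of Figure 11
peak_speed = speed.max()    # the "maximum interface velocity" of Figure 13

print(f"mean = {mean_speed:.4f} m/s, max = {peak_speed:.4f} m/s")
# Two molds can share nearly the same mean while differing strongly in the peak,
# which is why the maximum value tracks the slag entrapment severity better.
```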
Slag Entrapment Evaluation Criteria The floatation of bubbles significantly affects the slag-metal interface, and increases the slag entrapment when mold width gets larger. Simultaneously, the floatation of bubbles can also lift the molten steel that is injected from the submerged entry nozzle (SEN). Therefore, a dimensionless criterion number can be defined to reflect the effect of jet rigidity on slag entrapment in steel, as shown in Equation (9): C = α/θ, where α is the angle of the jet and θ is the angle of the nozzle (see Figure 2). Therefore, the physical meaning of Equation (9) can be interpreted as the ratio of the jet angle to the original nozzle angle. Without bubble injection, the jet is rigid and C = 1. Considering the effect of argon, C < 1 when argon floats. Figure 15 shows variations of the C value with different mold widths. It can be seen from Figure 15 that the C value increases with mold width. This trend is similar to that of the slag drop variations shown in Figure 14, indicating that the C value can be used as a guide to reflect the slag entrapment in molds with different widths. Figure 15. Prediction of slag severity through dimensionless value. Conclusions In this work, a three-dimensional model is established to simulate the slag entrapment phenomenon, mainly focusing on the slag entrapment phenomenon in different molds. The large eddy simulation (LES) model is applied for the calculation of the turbulence of molten steel, and the Smagorinsky-Lilly model is used to describe the sub-grid scale vortices. Based on the obtained study results, the following conclusions can be drawn: (1) The amount of slag entrapment increases with mold width, which is mainly due to the bubble pulsion. There exists a transformation of flow pattern in the mold when mold width increases. The double-roll flow pattern produces less slag entrapment than a single-roll flow pattern.
(2) The impact depth of molten steel increases with mold width, while the impact velocity decreases with mold width. The larger the mold width, the weaker the rigidity of the jet. Thus, the bubble injection significantly affects the flow field in the mold. (3) Neither the average velocity nor the horizontal velocity of the slag/metal interface can reflect the severity of slag entrapment in the mold. By comparison, the maximum velocity at the interface shows good advantages in predicting the severity of slag entrapment in the mold. (4) A dimensionless criterion number C, with the physical meaning of the ratio of jet angle to nozzle angle, is successfully established to predict slag entrapment in different molds.
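As a closing illustration of the criterion defined in Equation (9), the sketch below simply evaluates C = α/θ for a few mold widths. The 15-degree downward nozzle port and the jet angles are invented values for illustration; in practice the jet angle would be extracted from the simulated flow field of each mold.

```python
def criterion_c(jet_angle_deg: float, nozzle_angle_deg: float) -> float:
    """Dimensionless criterion C: ratio of the actual jet angle to the nozzle port angle."""
    return jet_angle_deg / nozzle_angle_deg

# Hypothetical angles: a 15 deg downward port, with the jet lifted by rising argon.
for width_mm, jet_deg in [(1100, 10.5), (1400, 12.0), (1650, 13.2)]:
    print(f"width {width_mm} mm: C = {criterion_c(jet_deg, 15.0):.2f}")
```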
2021-05-08T00:03:12.792Z
2021-02-23T00:00:00.000
{ "year": 2021, "sha1": "4a89baae5244ca0812ed8fb7005e7802eb6a6517", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/11/2/374/pdf?version=1614243308", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b530d9514c8b9f3311b5c7745960af3bbfca2fe6", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
139662633
pes2o/s2orc
v3-fos-license
Load and stiffness of a planar ferrofluid pocket bearing A ferrofluid pocket bearings is a type of hydrostatic bearing that uses a ferrofluid seal to encapsulate a pocket of air to carry a load. Their properties, combining a high stiffness with low (viscous) friction and absence of stick-slip, make them interesting for applications that require fast and high precision positioning. Knowledge on the exact performance of these types of bearings is up to now not available. This article presents a method to model the load carrying capacity and normal stiffness characteristics of this type of bearings. Required for this is the geometry of the bearing, the shape of the magnetic field and the magnetization strength of the fluid. This method is experimentally validated and is shown to be correct for describing the load and stiffness characteristics of any fixed shape of ferrofluid pocket bearing. Introduction As man began to explore space, it became relevant to develop efficient techniques to use and store rocket engine propellants under zero gravity conditions. For this reason, the NASA Research Center developed in the 1960s a kerosene-based magnetic fluid that could be collected at a desired location by the use of a magnetic field. 1 This magnetic fluid consisted of a stable colloidal suspension of tiny magnetic particles ($10 nm) providing the fluid with paramagnetic properties. 2 Rosensweig continued the research into these so-called ferrofluids and showed in the early 1970s that these fluids might also be interesting for the usage in seals and bearings. 3,4 Pressure builds up in the fluid because the magnetic particles are attracted by a magnetic field. This pressure can be used to develop a force to carry a load or to seal a volume. Compared to other bearing concepts, the ferrofluid bearings are an easy way to create a low friction movement that is free of stick-slip. [5][6][7][8][9] The bearing is furthermore inherently stable due to the use of permanent magnets. The magnetic field of these magnets can additionally be used for Lorentz actuation. [10][11][12][13][14][15][16] The overall specifications presented in literature show that the bearing is particularly interesting for low load applications that require fast and high precision positioning. Examples of possible applications are microscopy, wafer/chip inspection and pick and place machines. The low vapour pressure ferrofluids are suitable for vacuum conditions and even application in a zero gravity environment is possible since the ferrofluid is kept in place by the magnetic field. Two types of planar ferrofluid bearings can be distinguished. The first type is a ferrofluid pressure bearing that uses solely the magnetic pressure to carry a load. [17][18][19] The second category is a ferrofluid pocket bearing that enhances the load carrying capacity of an air pocket (or any other non-magnetic fluid), which is encapsulated and pressurized by a surrounding ferrofluid seal. 3,20,21 A simple example of this bearing concept is given in Figure 1. The planar ferrofluid bearing can be seen as a sort of hydrostatic bearing meaning that it does not need a relative movement between bearing faces to create a pressure field. Though the working principles are fundamentally different from the hydrostatic bearings of literature [22][23][24][25] that uses the magnetorheological effect and a pressure source to create a pressure field. The tribological performance of the planar ferrofluid bearing is, despite its potential, barely discussed in literature. 
This is completely opposite to the performance of the hydrodynamic journal bearing lubricated with ferrofluid that has received significant attention recently. [26][27][28][29][30][31][32][33] A problem experienced in these types of bearings is that there are no mathematical models available yet that describe the load and stiffness characteristics; the designer interested in using these bearing has limited information available on how to dimension the bearing to achieve certain specifications. All literature describing these bearings lacks the link between the measured performance and a theoretical model. 20 Another problem seen in these bearings is the poor repeatability in fly height. 13,14 The fly height is reduced during translation because of the trail formation that results in a smaller amount of fluid to be available for levitation. In the case of a pocket bearing, this might even cause air to escape from the encapsulated pocket of air resulting in a permanent change in fly height. In Cafe´1 2 and Lampaert et al., 15 the absence of a mathematical model to describe this effect accurately is mitigated in the presented positioning system by adding a control loop that controls the fly height of the bearing. This decision introduces extra actuators, sensors and therefore complexity in the system, which might take away the benefit of being low cost and simple. More knowledge on how the load and stiffness of this type of bearing is created might give more insight in how the trail formation affects the fly height of the bearing. In this article, a method is presented to predict the load and stiffness characteristics of a ferrofluid pocket bearing. A model is derived using this method that is then validated with an experimental setup. The resulting knowledge can be used to understand how a ferrofluid bearing should be designed to meet the desired load and stiffness specifications. Methods In this section, the derivation of the mathematical model of the ferrofluid pocket bearing is explained and validated. The validation is divided into three parts. The maximum load carrying capacity, the load carrying capacity as a function of the fly height and the bearing stiffness are validated. Mathematical model In the following section, the method to calculate the load and stiffness specifications of a ferrofluid pocket bearing is derived. It provides the theoretical basis on how the different parameters contribute to the final specifications. This method is not limited to the examples given in Figures 3 and 4 but is valid for all possible shapes of magnets and magnetic fields. The derivation starts from the Navier-Stokes equations for incompressible, Newtonian magnetic fluids. 34 In this formula, the assumption of Newtonian fluids is reasonable for fluids that do not show any particle chain formation (i.e. fluids with a small dipolar interaction parameter 35 ). @ũ @t In this relation, the density is represented by , the viscosity is represented by and the magnetic permeability of vacuum is represented by 0 . Definitions of other symbols can be found in Figure 3 or in the text. Now it is assumed that the fluid velocityũ of the ferrofluid is small and therefore of negligible influence on the pressure distribution p in the liquid. There are no other body forces except those induced by the Ferrofluid is applied to a disc-shaped magnet with axial magnetization and placed on a ferromagnetic surface. 
The ferrofluid collects at the circumference of the magnet where the magnetic field strength is highest ( Figure 2). A bearing is built by placing a surface on top of this configuration such that a pocket of air is encapsulated ( Figure 3). The ring of ferrofluid functions as a seal that captures the air inside. The magnetic field of this magnet is shown in Figure 2. Figure 1. The calculation is done with a remanent flux density of the magnet of B r ¼ 1T and a relative permeability of the iron base plate of r ¼ 4000. magnetic fieldf ¼ 0 . These assumptions reduce relations (equation (1)) to the following form In general, the magnetization strength M of the ferrofluid is a function of the magnetic field, but can be assumed to be constant and equal to the saturation magnetization of the fluid when the fluid is subjected to a magnetic field larger than that saturation magnetization. Furthermore, when the magnetic field is much larger than the saturation magnetization of the fluid, it can be assumed that the magnetic field is unaffected by the presence of the ferrofluid. The low relative permeability of the fluid ensures furthermore that considering the magnetic behaviour, the fluid does not behave much differently than air. Typical magnetic fluids have a relative permeability r of approximately 2 with a saturation magnetization of approximately M ¼ 32kA=m or 0:04T. For a ferrofluid pocket bearing primarily the pressure difference across the seal p i À p o is of importance for calculating the total load. This pressure difference can be calculated with the assumptions mentioned above, the relation given in (2) and the fundamental theorem of calculus in the following way The magnetic field at the inner fluid interface is equal to H i and the magnetic field at the outer fluid interface is equal to H o . From this relation follows that only the magnetic field strength at the fluid-air interfaces will determine the pressure increase in the pocket. The load capacity F L can be approximated by integrating the pressure over the force carrying surface area of the pocket A p . This is done with relation (equation (4)) in which it is furthermore assumed that the load carrying capacity of the ferrofluid ring itself is negligible (in the given example, its less than 10%). In a subsequent analysis, this effect is taken into account (see equation (11)). A graphical representation of the force relation is given in Figure 4. This only includes the load capacity caused by the ferrofluid seal. The normal stiffness of the bearing k ff is defined by the derivative of the load capacity (equation (4)) with the fly height h. Relation (equation (5)) implies that an increase in force, and the related increase of pressure, causes the ferrofluid interfaces to move outwards causing an increased counteracting pressure across the seal and an increased surface area for the force. The increase in surface area can be assumed to be negligible for a typical bearing design. Applying this assumption and combining relation (equation (5)) with (equation (3)) yields: The change of magnetic field difference H i À H o ð Þ over the displacement h is not directly known but can be found by relating that displacement with that of the inner fluid interface r in . The contour plot on the background presents the magnetic field intensity. Ferrofluid is added and attracted to the corners due to highest field intensity there. 
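The pressure and load relations above lend themselves to a quick numerical illustration. The following minimal Python sketch evaluates the seal pressure difference of relation (3) and the resulting load of relation (4) under the stated assumptions (saturated fluid, negligible contribution of the seal itself); the field intensities and the pocket radius used here are illustrative values chosen for this sketch, not values read from the paper's figures.

```python
import math

MU0 = 4 * math.pi * 1e-7   # magnetic permeability of vacuum [H/m]

def seal_pressure_difference(M_s, H_i, H_o):
    """Pressure rise across the ferrofluid seal, relation (3): dp = mu0 * M_s * (H_i - H_o)."""
    return MU0 * M_s * (H_i - H_o)

def load_capacity(M_s, H_i, H_o, A_p):
    """Load carried by the pressurised pocket, relation (4): F_L = dp * A_p
    (the direct contribution of the ferrofluid ring is neglected here)."""
    return seal_pressure_difference(M_s, H_i, H_o) * A_p

# Illustrative values only (assumed, not taken from the paper's figures):
M_s = 32e3          # saturation magnetisation of the ferrofluid [A/m]
H_i = 2.0e5         # field intensity at the inner fluid interface [A/m]
H_o = 1.0e5         # field intensity at the outer fluid interface [A/m]
r_p = 0.012         # effective pocket radius [m]
A_p = math.pi * r_p**2

print(f"dp  = {seal_pressure_difference(M_s, H_i, H_o):.0f} Pa")
print(f"F_L = {load_capacity(M_s, H_i, H_o, A_p):.2f} N")
```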
The pressure difference across the ferrofluid seal is proportional to the difference in magnetic field intensity across the seal Þ . This difference defines the load capacity of the bearing. The figure furthermore shows that the contour lines of the magnetic field intensity are identical to the contour lines of the pressure distribution. The relation dr in =dh can be seen as a pneumatic leverage meaning that a small change of bearing fly height will result in a large displacement of the inner fluid interface ( Figure 5). The two parameters are coupled via the geometry of the pocket, and the pressure, and therefore density of the air, inside the pocket. In this model, it is assumed that the pressure variation inside the pocket is small, and that therefore the air inside the pocket can be assumed to behave incompressible. This practically means that the stiffness of ferrofluid seals is much smaller than the stiffness of the pocket of air. The pneumatic leverage can, in the case of a cylindrical shaped incompressible pocket, be described with relation (equation (8)). Figure 6 shows the magnitude of the pneumatic leverage for different initial fly heights and fixed bearing radii. The figure shows that the pneumatic leverage can in general be assumed to be constant for small compression ratios. In the case of a ring-shaped pocket bearing, the air stiffness can be modelled as the stiffness of a pneumatic cylinder. That is given with the following relation at which V ini is the initial volume, V h is the compressed volume and is the heat capacity ratio. The stiffness of the encapsulated pocket of air can be seen as a stiffness that is in series with the stiffness of the ferrofluid seal. The total stiffness of the system can then be described to be The effect of the air stiffness can be assumed to be negligible when it is much larger than the seal stiffness. The assumption of an incompressible (cylindrical) air pocket can be checked by making sure that the stiffness of bearing is much smaller than the stiffness of the pocket showing that a small displacement of h results in a large displacement in r in . The relation dr in =dh (equation (8)) can be calculated by assuming a constant air volume V p of the pocket, which is reasonable for small displacements. stiffness of the bearing is dependent on whether there is an adiabatic or isothermal situation. In general, the stiffness at low frequencies will behave isothermally and stiffness at high frequencies will behave adiabatically. Experimental setup for validation Experiments are performed to investigate whether the derived mathematical models describe the load and stiffness characteristics of this bearing correctly. The validation is realised by comparing the performances predicted by the theory with the results of experiments. The required input parameters are the geometrical dimensions of the setup and the shape and strength of the magnetic field. The measurement data are obtained by pressing the bearing onto a surface using a tensile test bench that is able to measure the force over the displacement ( Figure 9). The setup has a relative force accuracy of 0:2 % and a relative force repeatability of 0.3%. The displacement is measured with a repeatability of 0.3 mm and a accuracy of 0.6 mm. The bearing consists of a ferrofluid pocket bearing constructed using a ring-shaped neodymium magnet with the magnetization in axial direction (see Figure 10 for more specifications). 
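To make the stiffness chain above concrete, a small sketch is given below: the seal stiffness from the field-difference gradient and the pneumatic leverage, a gas-spring estimate of the pocket stiffness, and their series combination as in relation (10). The cylindrical constant-volume form of the leverage and the gas-spring expression are simplifying assumptions made here (not necessarily the exact forms of relations (8) and (9)), and the numerical inputs are only of the same order as those reported later for the test bearing.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [H/m]

def pneumatic_leverage(r_in, h):
    """|dr_in/dh| for a constant-volume cylindrical pocket (V = pi*r_in^2*h),
    a simplifying assumption standing in for relation (8)."""
    return r_in / (2 * h)

def seal_stiffness(M_s, A_p, dH_dr, r_in, h):
    """Seal stiffness via the leverage: k_ff = mu0*M_s*A_p * d(H_i-H_o)/dr_in * |dr_in/dh|."""
    return MU0 * M_s * A_p * dH_dr * pneumatic_leverage(r_in, h)

def air_stiffness(p, A_p, V, gamma=1.0):
    """Gas-spring stiffness of the pocket (gamma = 1 isothermal, 1.4 adiabatic);
    one common form, assumed here as a stand-in for relation (9)."""
    return gamma * p * A_p**2 / V

def total_stiffness(k_ff, k_air):
    """Series combination of seal and air stiffness, relation (10)."""
    return 1.0 / (1.0 / k_ff + 1.0 / k_air)

# Numbers of the same order as the measured bearing, used here only for illustration:
M_s, r_in, h = 32e3, 0.0118, 0.27e-3
A_p = math.pi * r_in**2
dH_dr = 0.6e5 / 1e-3            # field-difference gradient over the seal [A/m per m]
k_ff = seal_stiffness(M_s, A_p, dH_dr, r_in, h)
k_air = air_stiffness(p=1.0e5, A_p=A_p, V=A_p * h, gamma=1.0)
print(f"k_ff = {k_ff:.3g} N/m, k_air = {k_air:.3g} N/m, k_tot = {total_stiffness(k_ff, k_air):.3g} N/m")
```

With these inputs the pocket stiffness comes out roughly an order of magnitude above the seal stiffness, which is consistent with the incompressible-pocket assumption used in the derivation.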
The ferrofluid that is used is the APG 513A from Ferrotec with a saturation magnetization of 32 kA/m. The magnetic field is derived using a FE analysis that is shown in Figures 10 and 11. It should be noted here that the ring magnet causes two radially distributed peaks in magnetic field intensity that potentially causes two seals in series. However, the two peaks act as one seal in this configuration due to capillary forces that connect the two seals together. The process for validating the maximum load capacity of this bearing (equation (4)) is divided into four steps. The first step is to apply a specified amount of ferrofluid on the magnet after which the fluid will flow according to the magnetic field and form a uniform ring. The second step is to move the bearing to a point where it is just touching the opposing surface. A pocket of air is now encapsulated by a seal of ferrofluid. The magnetic field intensity at the inner fluid interface is the same as the magnetic field intensity at the outer fluid interface ÁH ¼ 0 ð Þ meaning that no pressure is build up across the seal yet Áp ¼ 0 ð Þ and so the bearing has no load carrying capacity in this configuration (Figure 4). The initial outer magnetic field interface, when the bearing faces were not touching, is measured by comparing the position of the ferrofluid in the situation that the ferrofluid is not touching the opposing surface (Figure 9) to the magnetic field given in Figure 10. The outer magnetic field interface in this situation is measured by comparing the position and shape of the ferrofluid in the real system ( Figure 9) to the simulated shape ( Figure 10). The outer contour of the ferrofluid in the real system should coincide with one of the contour lines of the magnetic field in the simulated system. The contour line where it coincides is the value of the outer field intensity. The third step is to compress the bearing resulting air to leak out of the seal, since there is no ability to develop a counteracting pressure over the seal. The bearing is now at a lower fly height where the magnetic field intensity at the inner fluid interface now differs from the magnetic field intensity at the outer fluid interface ÁH 4 0 ð Þ . The inner fluid interface has moved to a location with a higher magnetic field intensity while the magnetic field at the outer fluid interface remains approximately the same (this stays the same due to the geometry of the bearing, the shape of the magnetic field and the amount of fluid added to the bearing configuration). This causes pressure to build up across the seal that gives the bearing a load carrying capacity Áp 4 0 ð Þ . The inner fluid interface is furthermore at a peak in field intensity ( Figure 11) since it is at the border of leaking air. The fourth step is to decrease the fly height even more. This causes air to escape and causes the magnetic field intensity at the inner fluid interface to increase even further. Now an even larger difference in pressure across the seal has developed and results in an even higher load capacity. Decreasing the fly height of the bearing in this way increases the magnetic field intensity at the inner fluid interface while the magnetic field intensity at the outer interface stays more or less the same. The inner fluid interface is located at a peak of field intensity ( Figure 11) along the whole curve of maximum load capacity of this bearing. 
The maximum load capacity of the bearing is calculated by determining the pressure build-up across the seal that is defined by the relevant magnetic field intensities at the inner and outer fluid interfaces of the seal. These values can be read from Figure 11 that presents the field intensity as a function of the radius for different fly heights. The field intensity at the outer fluid interface when there is no contact between the bearing faces is derived by comparing the location of the outer fluid interface with the isolines of Figure 10. When the load is increased further, the outer fluid interface will move outwards to a location with lower magnetic field intensity during the measurement. This is taken into account by the model, by a linear interpolation of these two values. The direct load contribution of the seal itself also has been taken into account in the model by averaging the magnetic field intensity over the surface area of Figure 9. The ring magnet is placed on a steel adapter and magnetic fluid is added to the configuration. The core of the magnet is filled with an aluminium disc to reduce the volume of the air pocket and so create higher stiffness. A tensile testing machine is used to measure the force-displacement curves by pressing this configuration onto a surface. The stiffness of the setup is about k setup ¼ 3  10 6 N/m. the seal. Relation (equation (4)) is now extended to the following relation: For the stiffness, two different expressions are mentioned in this article (relation (equation (6)) and (equation (7)) that are both validated individually. Relation (equation (6)) is validated by analysing the stiffness of the bearing between two points on the load curve generated by decompressing the bearing. To maintain a constant pocket volume, it is made sure that no air leaks across the ferrofluid seal during this decompression. One point that is easily distinguishable is the point of maximum load capacity given by relation (equation (13)). Another point that is easily distinguishable is the so-called 'knee point', which is a point on the force curve that shows a sudden change in slope. This 'knee point' is caused by a sudden change in the slope of the curve of the magnetic field intensity followed by the inner fluid interface. This occurs when the inner fluid interface is right inbetween the two peaks of magnetic field intensity presented in Figure 11. The inner fluid interface moves inwards for a decreasing compression. For the stiffness validation, it is required to know the difference in magnetic field intensity across the seal for a certain fly height for the two points (the point of maximum load capacity and the knee point). This is no problem for the point of maximum load capacity since the location of the inner fluid interface is known. The fly height of the knee point can be derived from the point of maximum load capacity by analysing how the inner fluid interface moves inwards for increasing fly height. For small displacements and so small change in pressure, the air volume of the pocket can be assumed to be incompressible. The fly height for a corresponding knee point can then be calculated with Relation (equation (7)) is validated in a similar way by predicting the linear stiffness between the point of maximum load capacity and the knee point. This is done by using the pneumatic leverage value of exactly in-between the two points. 
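The knee-point fly height of relation (14) follows from conservation of the pocket air volume. As a hedged illustration, assuming an effectively cylindrical pocket and taking the knee radius to lie roughly midway between the two field-intensity peaks of the ring magnet (an assumption made here), the sketch below gives a fly-height difference of a few tens of micrometres, of the same order as the value used in the stiffness calculation later on.

```python
def knee_fly_height(h_max, r_max, r_knee):
    """Fly height at the knee point from conservation of pocket air volume,
    assuming an effectively cylindrical pocket (sketch of relation (14)):
    pi * r_max**2 * h_max = pi * r_knee**2 * h_knee."""
    return h_max * (r_max / r_knee) ** 2

# r_max and h_max are the reported values at maximum load; r_knee is assumed
# to sit roughly midway between the two magnetic field peaks of the ring magnet.
h_max, r_max, r_knee = 0.235e-3, 11.8e-3, 10.75e-3
h_knee = knee_fly_height(h_max, r_max, r_knee)
print(f"h_knee ~ {h_knee * 1e3:.3f} mm, dh ~ {(h_knee - h_max) * 1e6:.0f} um")
```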
For the whole stiffness validation, it is assumed that the field intensity at the outer fluid interface stays constant since the displacements are only small. The measurements are performed by increasing the force up to a value of F L ¼ 5N or p i ¼ 0.13 bar. Next the force is decreased to a negative value to demonstrate that the bearing also is capable to deliver fly height Figure 11. The modelled magnetic field intensity at different fly heights (in mm) as function of the radius for a ring-shaped magnet with the dimensions of (24.5 mm  18.5 mm  3 mm) and a remanent flux density of B r ¼ 1.17 T. The outer peak of the magnetic field is defining the relevant magnetic field for the load capacity since this has the highest magnitude. The relevant magnetic field intensities at the inner and outer fluid interfaces can be read from this figure. a tension force. This is done three times to show hysteresis present in the system. Results and discussion This chapter validates the theoretical predictions with experimental results and is divided into three parts: first the model of the load capacity is validated followed by the validation of the knee point that is then used to validate the stiffness model. Maximum load capacity The measured curve of maximum load capacity is presented in Figure 12. During the experiments, it is observed that the initial outer magnetic field interface, when the bearing faces were not touching, is measured to be H o ¼ 1:3  10 5 A=m. The outer magnetic field interface when the bearing faces are fully touching is measured to be H o ¼ 1:0  10 5 A=m. Air is escaping from the pocket of air through the seal along the whole path, this means that the inner fluid interface is at a peak in field intensity along the whole path. The location of this peak for the measured fly height is traced back by using Figure 11. These values are used to plot relation (equation (13)) in Figure 12. The small ripple visible in the curve is caused by air popping out of the seal and demonstrates that the inner fluid interface is at a maximum value of magnetic field intensity. The data show that the theoretical model fits the measurements well. This furthermore shows that the load capacity is mainly defined by the pressure across the seal and only partly defined by the contribution of the pressure of the seal itself. Knee point The force curve that is applied over time is presented in Figure 13. The force in function of the fly height is presented in Figure 14. The location of knee point in the load curve is presented in Figure 15. The knee point can be calculated by using formula (equation (14)) and the shape of the magnetic field presented in Figure 13. These graphs show that point of maximum load capacity is located at a radial position of r max ¼ 11.8 mm with a fly height of h max ¼ 0.235 mm. Decompressing the bearing causes the inner fluid interface to move inwards towards the knee point. The air mass inside the pocket stays approximately constant during this process, which means that the location of the inner fluid interface can be calculated from the Compressing the bearing will cause air to escape from the seal, because the pressure in the pocket of air becomes larger than the pressure that can be counteracted by the seal. The first three datasets in the figure are three different measurements that show that the maximum load curve of the bearing has a high repeatability. The fourth dataset is the result from the theoretical model of this process presented in relation (equation (12)). 
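As a quick consistency check of the quoted load and pocket pressure, the gauge pressure corresponding to F_L = 5 N can be estimated from the pocket area; the effective pocket radius of about 11 mm used below is an assumption, since the exact area at that load is not stated.

```python
import math

F_L = 5.0                    # applied load [N]
r_eff = 0.011                # assumed effective pocket radius [m]
p_gauge = F_L / (math.pi * r_eff**2)
print(f"p_i ~ {p_gauge / 1e5:.2f} bar above ambient")   # of the order of 0.13 bar
```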
The figure shows that the model fits the measurements well. Decreasing the compression even more makes the inner fluid interface jump over this peak and continue back down at the other side of the inner peak (Figure 16). The inner fluid interface is able to jump over this peak due to the fluid that sticks behind, as can be seen from Figure 17. Fluid sticks behind due to the attracting force of the inner peak in the magnetic field. This inner peak also causes a ripple in the load curve due to air escaping from the outer chamber into the inner chamber. This introduces a hysteresis-like behaviour that is clearly visible in the shape of the curve presented in Figure 14. The hysteresis decreases for increasing fly height due to the decreasing contribution of the inner peak, as can be seen in Figure 16. Bearing stiffness The stiffness of the bearing can now be validated by comparing the measured stiffness with the stiffness that is described by relation (equation (7)). The location of the knee point is used to calculate the average measured stiffness between the two points of interest in Figure 15; this measured value is approximately 2.3 × 10⁴ N/m (equation (16)). The theoretical stiffness can be calculated from the magnetic field presented in Figure 16:

k = μ₀ M_s A_p Δ(H_i − H_o)/Δh = 4π × 10⁻⁷ × 32 × 10³ × π × 0.0118² × 0.6 × 10⁵ / (45 × 10⁻⁶) ≈ 2.3 × 10⁴ N/m (17)

The theoretical stiffness can also be calculated by using the pneumatic leverage:

k_mod = −μ₀ M_s A_p (d(H_i − H_o)/dr_in)(dr_in/dh) = 4π × 10⁻⁷ × 32 × 10³ × π × 0.0118² × (0.6 × 10⁵ / (1 × 10⁻³)) × 21.9 ≈ 2.3 × 10⁴ N/m (18)

The three calculated stiffnesses all have a value of around 2.3 × 10⁴ N/m, which shows that the theoretical model fits the experimental results well. This furthermore justifies the assumption that the air is incompressible for small displacements. From Figure 7, it can be seen that the stiffness of the air is about 10 times higher than the stiffness of the seal itself. The contribution of the stiffness of the ferrofluid ring itself is low and not taken into account, so the theoretical model is actually slightly overestimating the real system. Discussion The experimental results of this research are in good accordance with the derived model. This shows that the proposed method provides a reasonable way to predict the load and stiffness characteristics of a ferrofluid pocket bearing. It furthermore shows that the load capacity of the bearing is mainly determined by the magnitude of the magnetic field, the magnetization strength of the ferrofluid and the surface area of the pocket. It also shows that the stiffness of the bearing is mainly determined by the gradient of the magnetic field at the fluid interfaces, the magnetization strength of the fluid and the surface area of the pocket. Maximizing the load and stiffness requires maximizing the different parameters they are related to, or placing multiple ferrofluid seals in series. The magnetic field strength at the fluid interfaces can be increased by using stronger magnets or by focusing the magnetic field with the use of, for example, iron. Focusing the magnetic field has the additional effect that the gradient increases, which is beneficial for the stiffness. The compressibility of the pocket of air in this bearing is negligible for the bearing design, because the effective stiffness of the air is much larger than the stiffness of the seal. This might no longer be the case for other designs that, for example, use a larger surface area of the pocket; the stiffness of the bearing will in that case be predominantly determined by the stiffness of the air instead (see equation (11)).
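The arithmetic of relations (17) and (18), as reconstructed above, can be verified in a few lines; the numbers below are exactly those appearing in the two equations.

```python
import math

MU0 = 4 * math.pi * 1e-7
M_s = 32e3                       # saturation magnetisation [A/m]
A_p = math.pi * 0.0118**2        # pocket area at the point of maximum load [m^2]

# Relation (17): stiffness from the field-intensity difference over the fly-height change
k_17 = MU0 * M_s * A_p * 0.6e5 / 45e-6

# Relation (18): the same stiffness via the field gradient and the pneumatic leverage
k_18 = MU0 * M_s * A_p * (0.6e5 / 1e-3) * 21.9

print(f"k_17 ~ {k_17:.3g} N/m, k_18 ~ {k_18:.3g} N/m")   # both of the order of 2.3e4 N/m
```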
Conclusions The theoretical model for the maximum load capacity and the stiffness is in good accordance with the experimental results, which means that the proposed method is valid for describing the load capacity and the stiffness of a ferrofluid pocket bearing. This method shows that the load characteristics can be directly calculated from the shape of the magnetic field and the geometry of the bearing. Comparing the theoretical model with the measurements also shows that the load and stiffness of the bearing are in general mainly determined by the sealing capacity of the seal and only partly determined by the pressure of the ferrofluid itself. The results furthermore show that having two radially distributed peaks in magnetic field intensity introduces some hysteresis in the system that might be undesirable. It has been shown that a bearing with a diameter of 24.5 mm is capable of carrying a load of approximately 8 N with a stiffness of approximately $ 2  10 4 N=m. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Multiple Sclerosis Followed by Neuromyelitis Optica Spectrum Disorder A woman presented at age 18 years with partial myelitis and diplopia and experienced multiple subsequent relapses. Her MRI demonstrated T2 abnormalities characteristic of multiple sclerosis (MS) (white matter ovoid lesions and Dawson fingers), and CSF demonstrated an elevated IgG index and oligoclonal bands restricted to the CSF. Diagnosed with clinically definite relapsing-remitting MS, she was treated with various MS disease-modifying therapies and eventually began experiencing secondary progression. At age 57 years, she developed an acute longitudinally extensive transverse myelitis and was found to have AQP4 antibodies by cell-based assay. Our analysis of the clinical course, radiographic findings, molecular diagnostic methods, and treatment response characteristics support the hypothesis that our patient most likely had 2 CNS inflammatory disorders: MS, which manifested as a teenager, and neuromyelitis optica spectrum disorder, which evolved in her sixth decade of life. This case emphasizes a key principle in neurology practice, which is to reconsider whether the original working diagnosis remains tenable, especially when confronted with evidence (clinical and/or paraclinical) that raises the possibility of a distinctively different disorder. Case Presentation The patient is a 57-year-old woman followed at our comprehensive multiple sclerosis (MS) center at the Cleveland Clinic after decades with the diagnosis of clinically definite, relapsingremitting MS (RRMS), followed by a transition to secondary progressive MS (SPMS), who now presents with longitudinally extensive transverse myelitis. The inception of this patient's neurologic course began in 1981 at age 18 years with partial transverse myelitis, followed by an episode of double vision ( Figure 1). Following corticosteroid therapy, she recovered completely from both syndromes, and a diagnosis of clinically definite RRMS was established. She was later treated with interferon beta-1a and remained neurologically stable on this therapy for nearly 3 decades until 2009 at which time a new clinical relapse prompted the transition of her diseasemodifying therapy (DMT) to glatiramer acetate ( Figure 1). Although she remained stable on glatiramer acetate with no evidence of disease activity for approximately 10 years, a follow-up visit in May 2019 revealed worsening on tests of processing speed, timed 25-foot walk, and fatigue. A surveillance brain MRI at that time showed lesions highly characteristic for MS (ovoids, Dawson fingers, periventricular plaques, and lesions perpendicular to the long axis of the ventricles) and multiple new and/or enlarging T2 hyperintense nonenhancing lesions in the deep white matter when compared with a February 2017 scan ( Figure 2). Her course at that time was most consistent with SPMS. In November of the same year, the patient developed the subacute onset of bilateral lower extremity weakness, rendering her nonambulatory and requiring the use of a wheelchair (Expanded Disability Status Scale score of 7). Examination at that time revealed full strength in the upper extremities, decreased strength in the lower extremities (rated 3 on the Medical Research Council scale), diminished sensation to vibration that was worse on the left vs the right lower extremity, diffusely brisk reflexes without clonus, and finger-to-nose dysmetria. 
Following high-dose oral prednisone treatment, she once again exhibited significant improvement and was able to ambulate with a walker. By February 2020, the patient was transitioned to siponimod, an oral sphingosine-1-phosphate (S-1-P) receptor modulator. In this figure, we detail the condition of the patient over time. The longitudinal axis (left to right) depicts the condition of the disease, whereas the smaller amplitude and lighter color indicates greater stability of the disease. Alternately, the expanded amplitude of the colored heat map (above and below the horizontal linear axis over time) designates increased disease activity (whether on a clinical or paraclinical basis) or complications of the treatment of disease. Other fields of information are added either above or below the heat map and include information about treatments, diagnoses, commentaries adding contextual perspectives, and results from specific test assessments from each most relevant period of clinical decision making. Each field is consistently color coded throughout as defined in the figure legend. 16 Glossary DMT = disease-modifying therapy; MS = multiple sclerosis; NMOSD = neuromyelitis optica spectrum disorder; RRMS = relapsing-remitting MS; S-1-P = sphingosine-1-phosphate; SPMS = secondary progressive MS; UTI = urinary tract infection. One month following the transition to siponimod, she reported difficulty ambulating, generalized weakness, dizziness, dysarthria, worsening spasticity, and confusion. Brain MRI showed greater than multiple new enhancing brain lesions, most prominently in the right centrum semiovale, anterior to the right lateral ventricle, and in the right superior periventricular region ( Figure 3). She was treated with highdose steroids and antibiotics for a concomitant urinary tract infection (UTI). Two months later, she described the new onset of weakness in the left upper extremity, which was confirmed on examination, in conjunction with nonsustained ankle clonus bilaterally. Siponimod was held due to lymphopenia at 200 cells/μL. She then presented to the emergency department with difficulty ambulating and altered mental status. A brain MRI showed no abnormal enhancements and no evidence of acute ischemia. MRI of the cervical and thoracic spine showed discontinuous short-segment (i.e., skip) nonenhancing lesions. She was treated with high-dose corticosteroids and antibiotics for yet another UTI, improved, and was subsequently discharged home. However, 1 week later, the patient was admitted to the hospital for worsening gait, dysphagia, and diffuse weakness. She was transferred to the intensive care unit, where she was intubated for airway protection and required vasopressors for blood pressure support. Imaging of the neuroaxis failed to demonstrate any interval changes. CSF analysis revealed an elevated IgG index and the presence of unmatched oligoclonal bands. Once the patient improved and stabilized, she was discharged to acute rehabilitation. While on the rehabilitation service, she developed severe weakness in the upper and lower extremities bilaterally that progressed to the point of exhibiting only trace movements in the upper extremities, paraplegia, and urinary retention. Given the severity of deterioration, the patient was treated with IV methylprednisolone and a course of plasma exchange. 
Repeat imaging of the spinal cord now demonstrated a longitudinally extensive pattern of confluent hyperintensity with peripheral enhancement and marked edema that spanned from the cervicomedullary junction to the upper thoracic spinal cord ( Figure 4). Serum testing by cell-based assay yielded a positive AQP4 IgG at a titer of 1:2,560, and a diagnosis of neuromyelitis optica spectrum disorder (NMOSD) was confirmed. Siponimod was discontinued while ocrelizumab, a CD20 monoclonal antibody, was initiated. The patient currently has only trace movements in her lower extremities with limited antigravity movements in her upper extremities. She requires an indwelling Foley catheter and has significant spasticity. Differential Diagnostic Considerations This case raises 2 intriguing possibilities: The first is that our patient was originally misdiagnosed with RRMS and followed an atypical NMOSD course until she presented with cervicothoracic myelitis. An alternative hypothesis is that she developed 2 distinctive neuroinflammatory disorders in a temporal sequence, namely the onset of MS at 18 followed by the development of NMOSD at 57. We will first discuss perhaps the more controversial hypothesis: the prospect that our patient's presentation represented manifestations of NMOSD from the very start of her complex clinical course. This would render the original label of RRMS, the initial working diagnosis, as fundamentally incorrect. Based on the disease characteristics associated with the formulation of her working diagnosis, is there any evidence that an early misdiagnosis could have been avoided? First, it is known that a minority of patients with NMOSD can, in fact, exhibit short-segment spinal cord lesions that are indistinguishable from the so-called classic skip lesions associated with MS. 1 Furthermore, such patients may also have brain lesions that appear consistent with MS. 2 Studies have shown that 60% of patients with NMOSD accumulate white matter lesions and that as many as 16% fulfill the Barkhof MRI criteria for MS. 2,3 A misdiagnosis of MS could explain a lack of response to escalation of MS DMTs, while the patient's previous stability on glatiramer may have been due to a protracted NMOSD remission, which can be a characteristic of the disorder. 4 In addition, it is possible that our patient's clinical deterioration was precipitated by the transition in DMT from glatiramer acetate to siponimod, as acute inflammatory activity associated with S-1-P modulator treatment has been documented in NMOSD. 3,5,6 The alternate hypothesis for consideration is that the patient developed 2 distinct neuroinflammatory conditions occurring in a temporal sequence. Although her initial presentation of partial myelitis and a brainstem syndrome can occur in both MS and NMOSD, the near-complete recovery of such syndromes with corticosteroids is highly reminiscent of, albeit not specific for, MS. Our patient was well controlled without evidence of disease activity on interferon beta-1a for nearly 30 years. Subsequently, a new exacerbation prompted a transition in DMT to glatiramer acetate, which provided another decade of disease-free remission. Radiographically, brain imaging studies revealed enhancing and nonenhancing brain lesions, with features highly characteristic for MS including ovoids, periventricular hyperintensities, Dawson fingers, and cerebral atrophy. 
Spinal cord imaging performed early in the disease course exhibited multifocal and discontinuous short-segment skip lesions, in keeping with the diagnostic criteria of definite MS. 5 Furthermore, CSF oligoclonal bands (as present in our patient) are identified in ;85% of patients with MS, but in only ;15% of patients with NMOSD. 3,5 The clinical course in our patient prior to 2019 included the documented transition from relapsing-remitting to SPMS. Development of progressive disability years after diagnostic confirmation and treatment of RRMS, is characteristic of SPMS, and is not an established feature of NMOSD. 3,6 However, by mid-late 2019, our patient began to exhibit a marked escalation in both clinical and radiographic disease activity, including an episode of longitudinally extensive transverse myelitis (more than 30 years after the initial presentation), an observation that is highly atypical for MS. Final Diagnostic Conclusions Taken together with the highly characteristic lesions on brain imaging investigations (e.g., Dawson fingers, ovoids, periventricular lesions, and typical enhancements as demonstrated in both Figures 2 and 3), spinal cord skip lesions, and the absence of antecedent syndromes characteristic for and also part of the rigorous diagnostic criteria for NMOSD (e.g., longitudinally extensive myelitis, optic neuritis, area postrema syndrome, or diencephalic syndrome), we believe that based on the evidence available, our patient's initial disorder of CNS inflammation was more compatible with MS than with NMOSD. Her presentation at our center decades into her disease course, with a longitudinally extensive transverse myelitis and the presence of AQP4 antibodies, supports the development of yet a second neuroinflammatory disorder, NMOSD. A study investigated a large cohort of patients with MS for the presence of aquaporin-4 antibodies in serum samples and found that the rate of misdiagnosis of NMOSD as MS was very rare, less than 1%. 7 Unfortunately, testing for AQP4 antibodies was not available at the time of our patient's initial presentation in 1981 and would not be widely available until some 25 years later. Discussion MS and NMOSD are separate diseases. MS is thought to result from an autoimmune attack targeting proteins expressed by myelin-producing oligodendrocytes. Alternately, NMOSD, a humoral autoimmune disease, was distinguished as a separate disease in 2004, with AQP4 identified as the target for the pathogenic antibody in 2005, long after the inception of our patient's disorder in 1981 ( Figure 1). 8,9 Evidence is also now well established to genetically differentiate the predilection of these 2 disorders. Specifically, MS is associated with the HLA-DR2 (DRB1*1501) and typically presents early in life (from adolescence to middle age), whereas NMOSD has been shown to be associated with HLA-DR17 (DRB1*0301) and can present at any age. 10 There are highly salient and differentiating radiographic characteristics for MS and NMOSD and well-defined and evidencebased diagnostic criteria. 3,6,[11][12][13] However, it is important to note that patients with MS are at higher risk of developing other autoimmune disorders, making the possibility of 2 distinct neuroinflammatory disorders not untenable. 14,15 For instance, it is clear that these 2 disorders can manifest in complex ways that raise diagnostic confusion, including the prospect that the 2 conditions might occur as separate entities in a temporally distinctive sequence. 
Although we cannot be certain, we believe that the analysis of the evidence available supports the contention that our patient did have RRMS, later transitioning into SPMS, and that she developed NMOSD in her sixth decade of life. Our case report is instructive in that it emphasizes a crucial and salient principle in clinical practice. Specifically, the neurologist must remain vigilant and committed to periodically revisit a fundamental tenet in neurologic diagnosis; "does the 'working diagnosis' still work?" In our patient, answering the question with precision as to whether the working diagnosis of MS remained valid in the context of a potentially transformational clinical syndrome, longitudinally extensive transverse myelitis, required a well-codified, objective (now including the utilization of a highly sensitive and specific molecular diagnostic tool; the cell-based assay performed on serum for the identification of the AQP4 autoantibody), and a systematic surveillance plan for disease monitoring in conjunction with the interpretation of treatment response characteristics. At least with respect to the new syndrome, the presence of the AQP4 antibody rendered the original diagnosis of MS no longer tenable, or that a second condition in the form of NMOSD, evolved in conjunction with, and temporally after the first (i.e., MS). Modification of the working diagnosis (es) was indeed tantamount so that an alternate and more appropriate treatment strategy could be formulated, one that provided potential efficacy for one and/or both conditions. Acknowledgment The authors thank their medical illustrators, Mr. Jason Ooi and Dr. Matthew Parsons, for their creation of the chronological heat map (Figure 1). The authors acknowledge funding from the Frohman Foundation: Innovating Precision CARE Through Discovery in Molecular Medicine for Figure 1. Study Funding The authors report no targeted funding. Disclosure C. Goldschmidt has no disclosures. S.L. Galetta has received consultant fees from Genentech. R.P. Lisak, over the past 2 years, has been funded for research support by the NIH, National Multiple Sclerosis Society (USA), Mallinckrodt Pharmaceuticals, Genentech, Teva Pharmaceuticals, Novartis, MedImmune, and Chugai; he has served as a consultant to Gerson Lehrman Group, Syntimmune, Alexion, Alpha Sites, Insights Consulting, Informa Pharma Consulting, and Slingshot Consulting; he has served on the speaker's bureau for Teva Pharmaceuticals (nonbranded talks only). L.J. Balcer is editorin-chief of the Journal of Neuro-Ophthalmology. A. Hellman and M.K. Racke are employed by Quest Diagnostics and may own stock options. A.E. Lovett-Racke has been a consultant for Biogen and Novartis. R. Alejandro Cruz, M.S. Parsons, and N. Sattarnezhad have no disclosures. L. Steinman is on the Editorial Boards of The Proceedings of the National Academy of Sciences and the Journal of Neuroimmunology; he has served on the Editorial Board of the The Journal of Immunology and International Immunology; he has served as a member of grant review committees for the NIH and the National MS Society; he has served or serves as a consultant and received honoraria from Atara Biotherapeutics, Atreca, Biogen Idec, Celgene, Centocor, Coherus, EMD-Serono, Genzyme, Johnson and Johnson, Novartis, Roche/Genentech, Teva Pharmaceuticals, Inc., and TG Therapeutics; he has served on the Data Safety Monitoring Board for TG Therapeutics; he serves on the Board of Directors of Tolerion and Chairs the Scientific Advisory Board for Atreca. 
Currently, L. Steinman receives research grant support from the NIH and Atara Biotherapeutics. S.S. Zamvil is Deputy Editor of Neurology, Neuroimmunology and Neuroinflammation and is a member of the advisory board for
Automated Image Analysis of Offshore Infrastructure Marine Biofouling In the UK, some of the oldest oil and gas installations have been in the water for over 40 years and have considerable colonisation by marine organisms, which may lead to both industry challenges and/or potential biodiversity benefits (e.g., artificial reefs). The project objective was to test the use of an automated image analysis software (CoralNet) on images of marine biofouling from offshore platforms on the UK continental shelf, with the aim of (i) training the software to identify the main marine biofouling organisms on UK platforms; (ii) testing the software performance on 3 platforms under 3 different analysis criteria (methods A–C); (iii) calculating the percentage cover of marine biofouling organisms and (iv) providing recommendations to industry. Following software training with 857 images, and testing of three platforms, results showed that diversity of the three platforms ranged from low (in the central North Sea) to moderate (in the northern North Sea). The two central North Sea platforms were dominated by the plumose anemone Metridium dianthus; and the northern North Sea platform showed less obvious species domination. Three different analysis criteria were created, where the method of selection of points, number of points assessed and confidence level thresholds (CT) varied: (method A) random selection of 20 points with CT 80%, (method B) stratified random of 50 points with CT of 90% and (method C) a grid approach of 100 points with CT of 90%. Performed across the three platforms, the results showed that there were no significant differences across the majority of species and comparison pairs. No significant difference (across all species) was noted between confirmed annotations methods (A, B and C). It was considered that the software performed well for the classification of the main fouling species in the North Sea. Overall, the study showed that the use of automated image analysis software may enable a more efficient and consistent approach to marine biofouling analysis on offshore structures; enabling the collection of environmental data for decommissioning and other operational industries. Introduction Permanent offshore structures may form artificial reefs, which provide attachment and settlement sites for marine organisms (defined herein as marine biofouling).In the UK, some of the oldest oil and gas installations have been in the water for over 40 years and have undergone considerable colonisation by marine biofouling organisms.Marine biofouling organisms in the UK generally include algae, soft corals and mussels in the photic zone as well as anemones, hydroids, tubeworms, barnacles and cold-water corals on the deeper sections of the platforms [1].The location (e.g., distance to coast, proximity to other platforms), sediment type, prevailing water current, depth, water temperature and material of the structure all have an influence on the type, density and zonation pattern of marine fouling.Generally, the same major groups of organisms are responsible for platform biofouling worldwide, but the individual species involved tend to vary [2].The climax stage in both coastal and offshore areas is represented by communities in which the dominant forms are anemones, mussels, barnacles, sea squirts, sponges and algae [3,4]. 
In terms of the oil and gas industry, the North Sea is usually referred to by region: southern, central, and northern North Sea, (and West of Shetland; Figure 1), principally.In these regions, the vertical zonation of marine fouling varies.A series of studies for Oil and Gas UK were undertaken to collate knowledge and experience on the management of marine biofouling during decommissioning [5,6].BMT Cordah [5] and Sell [7] report on the difference in marine fouling zonation in the North Sea.The southern North Sea is shallow (approximately 30 m depth) and generally has a higher abundance of mussels near the surface (compared to the other regions) and a lower abundance of anemones.In the central North Sea, the water is slightly deeper (approximately 90 m depth) and soft corals tend to be present throughout this depth, and there is a dominance by anemone.The northern North Sea has the highest species diversity compared to other regions of the North Sea, although percentage cover of individual species may decrease.The cold-water coral Lophelia pertusa is only present on northern structures from circa 60 m depth to 140 m depth.Although not included in the report to Oil and Gas UK, anecdotal evidence from industry ROV footage, may suggest that zonation might be less pronounced West of Shetland, where water depth exceeds 200 m; and anemones are not the dominant fouling species as seen in the North Sea. Challenges and issues caused by marine biofouling for the oil and gas industry may include: corrosion of structures, impairment of visual inspection, obstruction of equipment and survey access, disruption of anodes and alteration of hydrodynamic loading [8][9][10][11].The addition of substantial biomass due to marine biofouling (e.g., weighing from a few hundred to a few thousand tonnes) also means that its subsequent disposal needs to be carefully considered if brought onshore (for disposal to landfill or composting at licensed facilities [11]).Furthermore, if marine biofouling includes species of conservation importance (SpCI), the Convention on International Trade in Endangered Species (CITES) regulations will need to be considered during platform decommissioning. In areas of the Gulf of Mexico, a "rigs-to-reef programme", (the conversion of offshore platforms into designated artificial reefs [12]) is in progress.However, in Europe, particularly in the North East Atlantic OSPAR 1 region, there is a requirement to remove all offshore oil and gas infrastructure, excluding pipelines, from the seabed (although derogations may be granted; under OSPAR Decision 98/3).As part of a decommissioning plan, an operator may be required to assess the extent of marine biofouling on a platform in order to estimate its potential weight for removal and disposal; and to determine the extent of any SpCI (e.g., Lophelia pertusa and Sabellaria spinulosa). Other areas of related, potentially important, research on marine biofouling include: the potential for the spread of marine invasive species [13]; especially when considering the movement and storage of decommissioned structures between marine and coastal areas; potential "stepping stone" habitats between natural ecosystems [14]; artificial reefs for conservation (e.g., de-facto MPAs [15]); platforms as fish aggregation devices [16,17]; or the use of pipelines by fish [18] and fishermen (S.Rouse, pers.comms.). 
1 The Convention for the Protection of the Marine Environment of the North-East Atlantic (The OSPAR Convention). The OSPAR region refers to the North-East Atlantic and associated contracting countries. Quantification of marine organisms from underwater photography is more challenging than in situ inspection. This is due to limited image resolution, variable lighting conditions, water turbidity and the inability to interact with the organisms [19]. This is particularly true when considering monitoring near-surface offshore infrastructure, due to swell, sea state and light penetration. Analysis of marine growth on offshore platforms is conducted at greater depths and in more limited light conditions [20,21] than similar studies on shallower coral reefs [19]. These inferior conditions may influence the quality of the images collected. However, underwater photography and videography methodologies do provide a number of advantages. These include the creation of a permanent record of species presence at a site; the ability to record more data with limited field deployments; and the possibility of recording data where in situ observations are challenging or dangerous (e.g., where technical diving or submarine access is required) (e.g., [4]). Photographic surveys are now considered the standard within marine ecological field studies, due to the ability to collect substantial datasets efficiently, consistently and safely. Ongoing advancements and improvements in underwater digital photography quality, digital storage and computer vision methods will rapidly accelerate the analysis of underwater photography [19].
Analysis of marine biofouling organisms on offshore oil and gas infrastructure is conducted either by Remotely Operated Vehicle (ROV) operators, that generally identify the main fouling assemblages and percentage cover of "hard" and "soft" growth; or by marine growth analysts (MGAs) if a more scientifically accurate analysis is required.MGAs review ROV survey footage manually, recording the percentage cover and thickness of different species at various points and depth zones on the platform jacket.These results are then extrapolated to estimate the percentage cover, thickness and the mass of total marine growth over the entire platform.As such, only a limited number of images may be analysed within the project timescale. Beijbom et al. [19] discussed the ongoing advances in underwater image collection, image analysis and the advancement of underwater robotics (e.g., Autonomous Underwater Vehicles (AUVs) and ROV technologies).As such, Beijbom et al. [19,22] developed an automated annotation system (CoralNet, https://coralnet.ucsd.edu/)for coral reef survey images, with a publically available, user-friendly interface.The system assesses the texture and colour of a local image patch around randomly allocated annotation points, assigning the point to a predefined list of species or labels, using recent advances in computer vision science [19,22].Full details of the model, including system algorithms are available in Beijbom et al. [19,22]. Although designed for the purpose of coral reef analysis, it is proposed that the automated annotation method described above, could be used to provide overview analysis of species types, levels of biodiversity and depth and degree of the zonation of organisms on offshore structures.If the automated annotation system works well for North Sea species then taking up this approach would enable a far greater number of images to be assessed more efficiently and the automated nature will allow a much higher degree of data and analysis consistency across the oil and gas industry. This project is a scoping study, with the objective to test the use of automated image analysis software on images of marine biofouling collected from offshore platforms on the UK continental shelf (UKCS).The aims of the project are to: (i) train the software to identify the main marine biofouling organisms on UKCS platforms; (ii) test and compare different analysis criteria (methods A, B and C) to determine suitability of this type of analysis methodology on platform ROV footage; (iii) calculate the percentage cover of marine biofouling organisms on three test platforms to assess software performance; and to determine the platform species diversity and zonation patterns; and (iv) provide the reasoning and outline methodologies to industry for the use of this type of analysis software. 
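CoralNet's actual feature extraction and classification pipeline is described in Beijbom et al. [19,22]. Purely to illustrate the patch-based idea, extracting a local image patch around each annotation point before classification, a minimal sketch is given below; the 224-pixel patch size, the placeholder image and the point coordinates are assumptions for illustration only, not CoralNet's internal values.

```python
import numpy as np

def extract_patch(image, x, y, patch_size=224):
    """Crop a square patch centred on an annotation point (x, y).
    image is an H x W x 3 array; patches near the edge are clipped."""
    half = patch_size // 2
    h, w = image.shape[:2]
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    return image[y0:y1, x0:x1]

# Example: patches from one survey frame around pre-generated annotation points.
image = np.zeros((1080, 1920, 3), dtype=np.uint8)   # placeholder frame
points = [(500, 400), (1200, 700)]                   # (x, y) annotation points
patches = [extract_patch(image, x, y) for x, y in points]
# Each patch would then be passed to the trained classifier for a label prediction.
```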
Image Collection and Initial Training of CoralNet Software for North Sea Species The project uses the image annotation method developed by the CoralNet project (part of a National Science Foundation (NSF) funded project, Computer Vision Coral Ecology by University of California, San Diego, in 2012 [19,22]).ROV survey video footage (general visual inspection; GVI; defined as a regular routine structural survey undertaken for the purpose of assessing structure and component integrity) was obtained from a number of North Sea operators for a selection of North Sea platforms across the northern, central and southern North Sea regions (Figure 1), for software training.Platform location, names and operator names are anonymous.Footage was viewed and images were randomly collected via screen grab, where possible, across the full depth range of the platform/footage.Images were selected based on image quality, as judged by the analyst (e.g., Figure 2 showing an example of "good" quality) and containing a scale bar, where possible.This allowed for distance from camera to be estimated (allowing for known image pixel size to be entered into CoralNet). Training of the CoralNet software was required, in order to "train" the software to identify the species that are present on North Sea platforms.In order to train the software, a number of criteria needed to be set within CoralNet for this project: 1. Image Boundary • A boundary was set for each image, to prevent annotation points being placed too close to the image edge.Image X and Y boundaries were set at 10% and 95% respectively (meaning bottom and left 10% of image; and upper and right 5% of image will not contain annotation points), defining the rectangle within which annotation points would be generated (annotation area; the suggested default boundary within CoralNet; Figure 2). Annotation point generation (number and pattern) • Annotation point generation was set to "simple random (random within the defined annotation area)" for 20 points (Figure 2). • The aim was to train the software on an estimated 500-1000 annotation points per species (across the 21 species listed in Table 1).This was not possible for all species as some were too infrequent.Therefore, on average, 800 annotation points per species were allocated (Table 1). • Where possible, image distance was estimated (in cm), based on scale bar presence (e.g., Figure 2, estimated distance 10 cm) or based on approximation.All images used were taken within 100 cm of the infrastructure surface. • The confidence threshold for the automated annotation was set to 100% (e.g., all points required confirmation by a human analyst, following the computer classification/analysis). Each time the software analyses or classifies a group of images, this is subsequently called a classifier. 
Identified species list • A set of species/labels was determined from CoralNet's species list (defined within CoralNet as the labelset) (Table 1). These labels were used by the computer and the human analyst to classify the annotation points. The selected training images were uploaded to CoralNet for processing. These training images were "anonymously" uploaded (i.e., no image/platform metadata was included) to the defined CoralNet project. The computer analysed the images and made "best guess" identifications of the annotation points. The analyst then confirmed any correctly identified annotation points and corrected any incorrectly identified annotation points (classifier trained and confirmed/corrected). The remaining images were uploaded in groups, and classifiers were trained and confirmed, with a total of 857 images used for software training. Images used for training form the basis from which the software analyses any subsequent "test" images (in this study, the 3 "test" platforms). Analysis results do not include any training images. Images used in testing, once uploaded, will help improve the "training" of the software. Within the project, the accuracy of the automated classifier increases as the number of training images for each species/label increases, as expected [18]. The training of a new classifier (e.g., computer learning) is triggered when the number of images within the project increases by 10%. In addition, within the CoralNet project a "classifier-validation-step" is also used. A new classifier is only accepted into the system if it increases the accuracy of the classifier by at least 1% over the previous classifier. An example of training annotation point distribution is shown in Figure 2. Following initial project training, three North Sea platforms PlatMX (central North Sea), PlatEP (central North Sea) and PlatMS (northern North Sea; see Figure 1) were designated as test platforms, with their images excluded from initial software training (no southern North Sea platforms were included in the software testing as the project partner does not own any southern North Sea assets). Images were collected from each platform and collated by 10 m depth ranges from 10 m depth to the seabed. Images from 0 m to 10 m depth were not of suitable quality due to swell and were subsequently disregarded. In addition, images were not available for all depth ranges on PlatMS.
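Two of the project settings described above, the simple random placement of annotation points inside the 10%/95% annotation area and the classifier-validation step (retraining triggered by a 10% growth in images, acceptance only on a gain of at least 1% in accuracy), can be sketched as follows. The function names and the exact reading of the boundary percentages are assumptions made for this sketch.

```python
import random

def random_annotation_points(width, height, n_points=20,
                             x_bounds=(0.10, 0.95), y_bounds=(0.10, 0.95)):
    """Simple random annotation points inside the annotation area
    (one reading of the 10% / 95% image boundary used in this project)."""
    return [(random.uniform(x_bounds[0] * width,  x_bounds[1] * width),
             random.uniform(y_bounds[0] * height, y_bounds[1] * height))
            for _ in range(n_points)]

def accept_new_classifier(old_accuracy, new_accuracy, min_gain=0.01):
    """Classifier-validation step: keep the new classifier only if it improves
    accuracy by at least 1% over the previous classifier."""
    return (new_accuracy - old_accuracy) >= min_gain

points = random_annotation_points(1920, 1080, n_points=20)
print(points[:3])
print(accept_new_classifier(0.72, 0.74))   # True: a gain of 2%
```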
Testing Accuracy of Different Methods of Image Analysis
Three criteria/methods of image analysis were carried out in the CoralNet software, as per Table 2. Method A was set at 20 random points and a confidence threshold of 80% (the software would automatically confirm annotation points it was 80% or more certain were identified correctly). Twenty random points were selected to mirror the training criteria. Method B was set at 50 points, stratified random (as defined by CoralNet) within a grid of 5 rows by 5 columns (2 points per cell); the confidence threshold was set at 90%. The confidence threshold was increased to test software improvement. Finally, for method C, 100 annotation points were set as a uniform grid of 10 rows by 10 columns (1 point per cell) with a confidence threshold of 90%. This method was applied to PlatMS only, following initial analysis of methods A and B. (Table 2 notes: * image classifier ran before completion of full confirmation/correction, as per the CoralNet classifier-validation-step; 1 as defined as "stratified random" within CoralNet.)

For each platform and testing method, images were uploaded (by platform) and the images analysed by CoralNet. The automated annotation (classifier results) image percentage covers were exported and the average percentage cover of each species was calculated for all depth ranges (unconfirmed % cover).

The automated annotation points for each image were then confirmed or corrected where necessary by the analyst. Where one or more annotation points had been automatically confirmed by the software (i.e., the software was confident (80% or 90%, depending on version) that it was identifying the species/label correctly), the number and species were recorded, along with any annotation errors (i.e., how often the classifier incorrectly identified a species with confidence, that is, annotation points confirmed by the classifier above the confidence threshold). Following full confirmation or correction, the image percentage covers were exported and the average percentage cover calculations were repeated (confirmed % cover). Finally, the percentage cover of species at each depth range for all three platforms was normalised to allow for the removal of the "no data" dataset; that is, the percentage cover was expanded to remove any areas of "no data". This was done in order to confirm the similarity between the manual analyst and the confirming analyst.

In parallel to the annotation classification, a separate manual assessment of the percentage cover of marine organisms was carried out on the same three test platforms (using the same set of images and depth ranges), by an independent analyst with no prior image bias. The average percentage cover was calculated (manual % cover). The manual assessment was undertaken on images without annotation points, and represents the current methodology applied by one of the leading industry consultants undertaking oil and gas marine growth assessments (e.g., [23]).

Biodiversity Analysis and Comparison of Methods
A Shannon-Wiener diversity index (H) was calculated in Excel to determine species diversity at each depth range for each platform. Finally, a comparison of each testing method (A, B, C) and type (unconfirmed, confirmed and manual) for each platform was undertaken, and a paired, two-tailed Student's t-test (p < 0.05; PlatMX n = 9; PlatEP n = 8; PlatMS n = 10) was calculated in Excel for each comparison pair (unconfirmed, confirmed, manual and normalised; per species across each platform), followed by Bonferroni correction applied to the t-test results.
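Although these calculations were performed in Excel, the same statistics are straightforward to reproduce programmatically. The snippet below is an illustrative sketch only; the numeric values are invented placeholders (not project data), and it assumes Python with numpy and scipy, which were not used in the original study.

```python
import numpy as np
from scipy import stats

def shannon_wiener(cover):
    """Shannon-Wiener index H and evenness EH from % cover values
    (non-species labels such as "no data" should be removed first)."""
    p = np.asarray(cover, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    h = -np.sum(p * np.log(p))
    eh = h / np.log(len(p)) if len(p) > 1 else 0.0
    return h, eh

# Placeholder % cover values for one depth range
H, EH = shannon_wiener([55.0, 10.0, 8.0, 5.0, 2.0])

# Paired, two-tailed t-test for one species between two analysis types,
# with pairs being the depth ranges, followed by a Bonferroni-corrected alpha
confirmed = np.array([55.0, 48.0, 40.0, 35.0, 30.0, 28.0, 25.0, 22.0, 20.0])
manual    = np.array([53.0, 50.0, 41.0, 33.0, 31.0, 27.0, 24.0, 23.0, 19.0])
t, p = stats.ttest_rel(confirmed, manual)
n_comparisons = 10                       # e.g., number of species compared per platform
alpha_bonferroni = 0.05 / n_comparisons  # significance threshold after correction
significant = p < alpha_bonferroni
```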
Initial Training and Testing of CoralNet Software for North Sea Species
Following software training, the range of species that were identified confidently by the classifier, that is, above the confidence threshold, and subsequently confirmed correct, was limited across all three test platforms (Figure 3). The classifier identified six species/labels correctly/confidently across all three platforms: Metridium dianthus, Alcyonium digitatum, Mytilus edulis, Lophelia pertusa, "no data" and brittlestars (Figure 3). The species correctly identified most often on PlatEP and PlatMX was M. dianthus; and the label "no data" on PlatMS.

The PlatMX platform was the most confidently classified platform (A and B). The results showed 625 (33%) and 1750 (37%) correctly identified species/labels from a total of 1900 and 4750 annotation points respectively (Figure 4). The error rate (number of errors observed in the annotation points the software identified, i.e., above the confidence threshold) for PlatMX was 3.1% (method A) and 0.6% (method B). All other remaining annotation points for PlatMX (A = 1255, 66%; B = 2989, 63%) were not confidently identified by the computer (i.e., were below the confidence threshold). The error rate (percentage of the total number of annotation points, correct and incorrect, above the confidence threshold that were incorrect) was below 4% for all platforms. The percentage of correctly identified species/labels (of the total number of annotation points) ranged from 3% (PlatEP) to 37% (PlatMX).

Across all three platforms, the most common error was the identification of M. dianthus as Lophelia pertusa, representing 16 erroneous annotation points (Table 3).
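To make the three reported quantities concrete, the arithmetic for PlatMX method A can be reconstructed from the figures given above (625 correct, 1255 below threshold, 1900 total points); this is a worked illustration only, not additional project data.

```python
total_points = 1900       # all annotation points, PlatMX method A
correct_above = 625       # confirmed correct above the confidence threshold
below_threshold = 1255    # left for the analyst to classify

above_threshold = total_points - below_threshold       # 645 points auto-confirmed
errors_above = above_threshold - correct_above          # 20 incorrect auto-confirmations

pct_correct_of_total = 100 * correct_above / total_points    # ~32.9%, reported as 33%
pct_below_threshold = 100 * below_threshold / total_points   # ~66.1%, reported as 66%
error_rate_above = 100 * errors_above / above_threshold      # ~3.1%, matching the text
```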
No significant difference (across all species) was noted between confirmed annotation methods (A, B and C) on all three platforms. On PlatEP, there was only a significant difference between unconfirmed methods A and B, for M. dianthus only (Table 4).

Finally, no significant difference was noted on PlatMX and PlatEP for the confirmed-normalised vs. manual comparison (for both A and B methods; PlatMX p = 0.18-0.80; PlatEP p = 0.07-0.91). For PlatMS, a significant difference (p < 0.003) was noted between the confirmed-normalised vs. manual comparisons (A and C) for M. dianthus and other anemones (p = 0.00). If other anemones and M. dianthus are grouped to form "all anemones", this reduces the significant difference to none (Table 4).

CoralNet Software Ability to Assess Percentage Cover
Tables and graphs presenting the percentage cover of species (species/labels) for each platform and analysis type (unconfirmed, confirmed and manual) are presented in the supplementary material (Tables S1-S17; Figures S1-S17).

The plumose anemone Metridium dianthus was the most dominant organism by percentage cover on the PlatMX platform, again followed by the label "no data". Also present on the PlatMX platform were the soft coral Alcyonium digitatum, the barnacle Chirona hameri, the blue mussel Mytilus edulis, the hydroids Obelia sp. and Tubularia sp., sponge spp., tubeworms and the label infrastructure surface (Tables S1-S5; Figures S1-S5).

The most dominant organism by percentage cover on the PlatEP platform across all depth ranges was M. dianthus, followed by the label "no data" (Tables S6-S10; Figures S6-S10). Also present on the PlatEP platform were A. digitatum, C. hameri, M. edulis, Obelia sp., sponge spp. and the label infrastructure surface.

Identifying Biodiversity Differences between Platforms
Shannon-Wiener (H) index values (Table 5) showed that PlatMX has the lowest species diversity (H = 0.28 and 0.36) of the three test platforms, representing overall low species diversity (H < 1; for this study the H index has been interpreted as moderate diversity = H > 1; high diversity = H > 3), with diversity at its highest at 30 to 40 m depth (H = 0.59 and 0.83). PlatEP also shows overall low diversity (H = 0.79 and 0.80), with highest diversity recorded (moderate diversity; H > 1) at 50 to 60 m depth (H = 1.00 and 1.03). PlatMS has the highest diversity (H = 1.89-2.06), representing overall moderate species diversity (H > 1), with highest diversity recorded at 150 to 160 m depth (1.43 and 1.59). Lowest diversity (low diversity; H < 1) was recorded at 10 to 20 m depth (H = 0.81 and 0.95). Evenness (EH) was greatest on PlatMS (EH = 0.78 and 0.79), showing that individuals on this platform are distributed more equitably among the recorded species. Lowest evenness was recorded on PlatMX (EH = 0.14 and 0.16), which is represented by the dominance of the anemone M. dianthus on this platform.

Discussion
This study presents a method for the automated analysis of marine biofouling on offshore platforms; and is, as far as the authors are aware, the first attempt at using such analysis methods on these types of offshore structures and using industry collected data.
Overall, this study has shown that the use of automated image analysis software may enable a more efficient and consistent approach to marine biofouling analysis on offshore structures, enabling the collection of environmental data for decommissioning and for other operational industries. It was considered that the software performed well for the classification of the main fouling species in the North Sea.

In relation to time-saving and efficiency, it was estimated that the manual analysis of the selected images (excluding time to collect the images) took approximately five to six hours per platform. In comparison, the annotation of images within CoralNet took roughly two to three hours per platform. These timings are estimates, as timing was not a specific aim of the study, and time was spent recording errors of the annotations, which would not ordinarily be recorded when analysing images on a regular basis.

Training and Testing Software Performance
For some species/labels, the computer will not perform well, as there are not enough examples of the species/label in the training set. In this study, ten species/labels were below the targeted number of annotation points (at least 500): other anemone, Balanus balanus, Chirona hameri, sponge, tubeworms, bryozoan, brittlestars, unknown invertebrates, algae and "unknown". The labels for Scalebar and water were also below the target number; however, these were subsequently recorded as "no data" during testing. It is acknowledged that there are limitations to the use of the software, particularly for less frequently occurring species, which does ultimately lower the accuracy of the results presented here. However, for this study, the accuracy of the software was not our main focus, and we are not looking to rely solely on this particular software for biofouling analysis, but instead to use it as a tool to assist in the analysis of marine biofouling. The purpose of the study was to test the suitability of the software for an industry application, and to make recommendations for its future industry use.

The testing data within this study will now be incorporated into the "training" dataset, for subsequent platform analysis following the completion of this project. Therefore, the training of the software will improve as more analysis is undertaken, and the original species/label list will be amended as required (e.g., by making changes to the species and label list in the project within CoralNet).

As the software relies on size and texture for the identification of the species/labels, the quality of the image is important, and a limitation of this study; a "good" quality image of known pixel size is essential. However, even on images of "good" quality, errors are possible. The most frequent errors identified were the misidentification of M. dianthus as L. pertusa. On North Sea platforms, M. dianthus and L. pertusa both form orange and white colonies [24] and have a similar texture, particularly when L. pertusa polyps form young colonies and their tentacles are extended. In addition, A. digitatum appeared pale orange in most of the images collected, and when their tentacles are extended, they have a "fluffy" appearance. Showing a likeness to M. dianthus, this may confuse the software, which is also the case for Tubularia sp. The second highest error noted was between "no data" and M. edulis.
On the images used, "no data" (training annotation points n = 3112) was primarily attributed to the survey text on the images, or to areas off the structure, which in most images appeared black (or pale blue/green in areas of higher light intensity, representing open water). The M. edulis (training annotation points n = 1077) shells in the images were also predominantly black, which created a challenge for the software. Subtle texture on the M. edulis shells may therefore have had an influence, as "no data" is associated with areas of very "flat" black colour.

"No Data"
Ideally, an image used in the automation software would contain no "no data", therefore showing only the surface of the structure at a set distance from camera, with no areas off-structure. Given that this is not easy to achieve from archived ROV footage, "no data" needed to be taken into account (including areas of on-screen text). Normalising the annotation data (percentage cover) following removal of the "no data" allowed for a more comparable dataset when comparing manual and annotation analysis.

The results from this study showed that there was no significant difference between the "confirmed-normalised" analysis and the manual analysis for the PlatMX and PlatEP platforms; the significant difference on PlatMS was limited to M. dianthus and other anemones, but when these two labels were combined into "all anemones" it was reduced to no significant difference. This is possibly due to the experience of the manual analyst. This suggests that, with the removal of the "no data" from the analysis, the manual analyst and the "confirming" analyst performed consistently, when restricted to a defined species/label. The unconfirmed methods showed some differences between species; however, this would likely improve as more images are trained.

"No data" may also be represented by other criteria, not just "off-structure", such as a scale bar, probe, ROV arm or any other object not attached to the platform. It is recommended that, where possible, images are collected showing "on-structure" areas only, with no additional text on screen or other image-limiting objects. It is suggested that non-inhibiting scale measures be used (e.g., laser scales). Additionally, when other interventions are not possible, it is recommended that the annotation point layout is tailored to minimise areas of "no data", while maximising species coverage.

Annotation Points
Three different testing criteria were used in order to determine the optimal number and layout for the annotation points. It was determined that there was no significant difference between all confirmed methods (A, B and C) for all three platforms following correction. However, analysis prior to correction showed that there was a difference between PlatMS A and B (20 to 50 points), and A and C (20 to 100 points).
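The "no data" normalisation described in the "No Data" subsection above amounts to rescaling each species' percentage cover by the cover remaining once "no data" is removed. The snippet below is an illustrative sketch only; the function name is not from CoralNet or the study, and the dictionary values are invented placeholders rather than project data.

```python
def normalise_cover(cover_by_label, no_data_label="no data"):
    """Rescale % cover so that the remaining labels sum to 100% after
    removing the "no data" label."""
    remaining = sum(v for k, v in cover_by_label.items() if k != no_data_label)
    if remaining == 0:
        return {k: 0.0 for k in cover_by_label if k != no_data_label}
    scale = 100.0 / remaining
    return {k: v * scale for k, v in cover_by_label.items() if k != no_data_label}

# Placeholder values: 30% of annotation points fell on "no data"
raw = {"Metridium dianthus": 40.0, "Mytilus edulis": 20.0,
       "infrastructure surface": 10.0, "no data": 30.0}
normalised = normalise_cover(raw)   # e.g., M. dianthus becomes ~57.1%
```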
From the visual inspections and the analysis results, PlatMS is a more diverse platform than PlatMX or PlatEP. The results of the comparative assessment suggest that the number of annotation points needs to be greater on more diverse platforms. This is due to the necessity of getting enough training examples for "rarer" species/labels. The threshold for the minimum number of points required is the total number of examples of the rarest categories. If these rarer categories are important, then more points per image and more images will need to be used in the manual image training annotation set. In this study, however, there would appear to be a cut-off point, as no difference was noted between 50 and 100 points on PlatMS.

It was assumed that the more annotation points that are analysed, the better the percentage cover predictions; however, this did not appear to be the case. This is likely to be due to the lack of diversity observed on the test platforms. It is likely that where less common species are noted, a larger number of annotation points may increase the chance of the points being placed on these more sporadically occurring species. If an image is being analysed manually, these sporadic species may well be recorded, whereas they may be missed using an automated system. Due to time and resource constraints, it was not possible to repeat the manual analysis using the annotated images as well as the industry standard method (as undertaken here). However, this was not considered an issue, as it was important to understand how this type of software compares to the industry standard method, and how it could improve or support industry analysis in the future.

Hence, a balance is needed between analyst effort and the overall percentage cover accuracy when selecting the annotation criteria. From the results presented here, it is recommended that 50 annotation points per image would suffice, applied to at least 10 images per depth range; however, this will be dependent on the image quality and number, and the overall diversity of species within the collected images. If more images were available, e.g., 500 images per depth range, then statistically you would be able to undertake analysis with fewer annotation points per image (e.g., [10]), to achieve the same outcome. This should therefore be considered on a platform by platform basis.

Limitations
Although the use of automated image analysis software presents a significant opportunity for data collection, there are a number of limitations with the current software that should be taken into account.

The quality of the images uploaded to CoralNet should be carefully considered. The footage available from offshore operators tends to be focused on structural survey requirements, fulfilling the operator's regulatory requirements. Images may not be of sufficient resolution, or contain suitable scales for the assessment of marine growth organisms. Images presented in Figure 5 represent the variation in "good quality" images collected for use in this study. Figure 5a,b are fairly typical images collected from ROV footage, but may be considered to lack clarity and scale and are poorly exposed. In comparison, Figure 5c,d represent the best examples of ROV images, demonstrating high definition, clear, and steady, with a scale. Better quality images are usually only collected during a specialist marine growth survey.
The diversity of species within the images also presents a number of challenges. The testing of the software is reliant on the quality and quantity of the training images, with the classifier improving with increasing numbers of training/confirmed images. Collection of images of certain marine growth organisms such as M. dianthus, Obelia sp. and A. digitatum is relatively easy, given that they are dominant species on North Sea structures. Images of other typical, but rarer, fouling species or groups, such as sponges, bryozoans, tubeworms, other anemone species and some corals, are harder to locate, as they have patchy distribution or are concealed by larger, more dominant species. Collecting images from video footage also presents challenges, particularly with the movement of the ROV. If surveys do not settle to take stationary measurements on the infrastructure surface, the resulting images are blurred.

One particular challenge of using automated software, versus traditional marine growth assessments, is the analysis of multiple layers of species. For example, it may be expected that over 100% marine growth may be recorded from a particular image. As presented in Figure 5d, the infrastructure surface is 100% covered with marine growth, but there is a layer of Obelia sp. over the M. edulis. At present, there is no way to correct this within the software used herein. Consequently, the total percentage cover of species may be under-estimated. One way to address this within the software may be to create species/label lists that are applicable to overlaying species, for example, mussels and seaweed; mussels and hydroids; tubeworms and hydroids, etc. The analyst(s) would need to create a set of rules as to how to label overlaying species. This is not a fault of the software, more an ecological challenge that would need to be adapted to.

The CoralNet software is based only on the annotation points, and does not extrapolate up over the entire image. Therefore, if the annotation points are not assigned to an individual species (in particular, less common species), it will not be recorded within the percentage cover plot. It is, therefore, important to consider the layout and number of annotation points. A high enough number of points and images is needed to statistically represent the percentage cover.
Finally, at present the CoralNet software does not allow for the transfer of training images (privately) between projects; therefore, images will need to be trained if other users wish to use this methodology to analyse their own images. It is hoped that this will be addressed by the CoralNet project team in due course.

Percentage Cover and Zonation
In the North Sea, zonation of fouling organisms is dependent on the location of the platform, as previously outlined. This study has analysed three North Sea platforms from the central and northern regions (Figure 1) and has corroborated earlier studies [5,7] on marine fouling zonation. Of note, the extent of the dominance of anemones on the assigned central North Sea platforms (PlatMX and PlatEP) was perhaps unexpected, and potentially to the exclusion of some other expected species (e.g., soft corals or mussels), which may explain the low diversity scores reported. On the northern North Sea platform (PlatMS), higher diversity was reported, which was to be expected, and species dominance was not as obvious. A southern North Sea platform was not included in this study due to the lack of assets in this region operated by our project partner.

It should be noted that the diversity of the platforms analysed in this study may be underestimated, given that there is a chance that not all species (particularly those less frequently seen, e.g., those with <500 annotation points) have been recorded.

A challenge of assessing percentage cover of biofouling is that the judgement of percentage cover by different analysts varies. One person's interpretation of a percentage may be different to another's. The use of automated software helps address this challenge in part, as only the annotation points are analysed. The use of experienced North Sea MGAs is important. The use of this software does not remove the need for an MGA entirely, but allows for more images to be analysed more consistently, even if multiple MGAs are utilised. The results of this study showed that there was some difference between the MGA confirming the images within the software, and the MGA undertaking the analysis manually. However, this issue was improved following the normalisation of the data to remove "no data".

One aspect of the software, which is not replicable by eye, is the ability to export the annotation points for use on another image. For example, if it were possible to collect the same image from a platform over a defined time-series, the MGA would be able to overlay the initial annotation points within the software. This would enable comparative analysis to examine how marine biofouling may change over time.

Recommendations
The final objective of the study was to make recommendations on the use of this type of automated image analysis software for industry. The results were presented at small industry engagement sessions and a discussion was had with industry ROV operators and survey managers about how they envisaged incorporating the collection of suitable data for this type of analysis.

In summary, the following recommendations are made for the use of the CoralNet software for the analysis of percentage cover of marine biofouling organisms on offshore structures:
1. When collecting new survey footage, the use of a high definition (HD) video or camera is preferred.
2. If using video only, allow time for the ROV to settle at various points on the platform jacket.
3. Settle at different locations within 10 m depth ranges, at different orientations and perpendicular to the structure.
4. Stay within 1 m of the structure and try to fill the frame with the structure in order to limit "off-structure" areas within images.
5. Allow for a minimum of 10 images to be collected from each 10 m depth range.
6. Use scale bars or scale lasers, as accurate pixel size estimation is critical to the accuracy of the automated system. Ensure that the scale bar is not intrusive to the footage/image and ensure ROV arms or cathodic protection (CP) probes are not within the shot.
7. Remove overlay text from survey footage, except for depth; or provide depth details in metadata or the image title.
8. Where text overlay is removed, the image boundary within CoralNet can be set to X: 10-95% / Y: 10-95%. Where text overlay is present, it may be necessary to test the boundary to minimise the chance of points landing on the text.
9. It is recommended that 50 annotation points per image should be used; however, this will be dependent on the image quality, the number of rare species of interest and the total number of images taken per depth. This should be considered on a platform by platform basis.
10. The annotation point distribution should be set within a grid, either uniform or stratified random (as defined by CoralNet), to ensure no overlap of points and equal coverage of the image.
11. Where it is not possible to use images with no "no data", following analysis, normalise the dataset to remove "no data".

Conclusions
Marine biofouling on offshore structures is an important topic due to the extensive interest in the potential for turning offshore platforms into artificial reefs. However, the presence of marine biofouling on offshore structures does not necessarily equate to these structures being classed as a "reef". Artificial reefs are defined by OSPAR as "…a submerged structure placed on the seabed deliberately, to mimic some characteristics of a natural reef…" [25], and natural reefs (e.g., Annex I reefs, stony or biogenic, under the EC Habitats Directive) are defined as "…a habitat that is colonised by many different marine animals and plants…and provides a home to many species…as well as giving shelter to fish and crustaceans such as lobsters and crabs" [26]. Reefs are globally considered to be diverse ecosystems capable of supporting a variety of marine life throughout the food chain.

Therefore, without knowledge of what grows on the offshore platforms, how this varies over a geographic region, or an understanding of what additional species the structures support and how, it is not possible to establish the true benefits of these potential artificial reefs. There are research gaps on how these structures are used by fish or marine mammals, for example (such as a food source or shelter from fishing), and understanding how these structures may contribute to carbon sequestration or to productivity is essential to informing the debate and policy.
One of the challenges facing industry and the research sector is access to industry data. In some circumstances the industry is reluctant to share data (in this case ROV survey footage) too widely. This project has demonstrated the use of automated software for analysing marine biofouling on offshore structures and has identified potential areas of future study. This study provides the initial evidence to show that it is now very possible to undertake further analysis of offshore structures on the UKCS using automated image analysis, and to process and present/map the results in a way that is satisfactory to the industry, ensuring commercial sensitivity is addressed and results are available for research and/or policy use.

Supplementary Materials: The following are available online at www.mdpi.com/2077-1312/6/1/2/s1: Table S1: Average percentage cover of species per depth range PlatMX A Unconfirmed, Table S2: Average percentage cover of species per depth range PlatMX A Confirmed, Table S3: Average percentage cover of species per depth range PlatMX B Unconfirmed, Table S4: Average percentage cover of species per depth range PlatMX B Confirmed, Table S5: Average percentage cover of species per depth range PlatMX Manual, Table S6: Average percentage cover of species per depth range PlatEP A Unconfirmed, Table S7: Average percentage cover of species per depth range PlatEP A Confirmed, Table S8: Average percentage cover of species per depth range PlatEP B Unconfirmed, Table S9: Average percentage cover of species per depth range PlatEP B Confirmed, Table S10: Average percentage cover of species per depth range PlatEP Manual, Table S11: Average percentage cover of species per depth range PlatMS A Unconfirmed, Table S12: Average percentage cover of species per depth range PlatMS A Confirmed, Table S13: Average percentage cover of species per depth range PlatMS B Unconfirmed, Table S14: Average percentage cover of species per depth range PlatMS B Confirmed, Table S15: Average percentage cover of species per depth range PlatMS C Unconfirmed, Table S16: Average percentage cover of species per depth range PlatMS C Confirmed, Table S17: Average percentage cover of species per depth range PlatMS Manual; Figure S1: Percentage cover per species by depth range for PlatMX A Unconfirmed, Figure S2: Percentage cover per species by depth range for PlatMX A Confirmed, Figure S3: Percentage cover per species by depth range for PlatMX B Unconfirmed, Figure S4: Percentage cover per species by depth range for PlatMX B Confirmed, Figure S5: Percentage cover per species by depth range for PlatMX Manual, Figure S6: Percentage cover per species by depth range for PlatEP A Unconfirmed, Figure S7: Percentage cover per species by depth range for PlatEP A Confirmed, Figure S8: Percentage cover per species by depth range for PlatEP B Unconfirmed, Figure S9: Percentage cover per species by depth range for PlatEP B Confirmed, Figure S10: Percentage cover per species by depth range for PlatEP Manual, Figure S11: Percentage cover per species by depth range for PlatMS A Unconfirmed, Figure S12: Percentage cover per species by depth range for PlatMS A Confirmed, Figure S13: Percentage cover per species by depth range for PlatMS B Unconfirmed, Figure S14: Percentage cover per species by depth range for PlatMS B Confirmed, Figure S15: Percentage cover per species by depth range for PlatMS C Unconfirmed, Figure S16: Percentage cover per species by depth range for PlatMS C Confirmed, Figure S17: Percentage cover per species by depth range for PlatMS Manual.

Figure 1. Map of "generally referred to" oil and gas regions in the UK sector of the North Sea (boundaries, dotted line, are approximate and not administratively imposed).

Figure 2. Screen grab representing the training annotation layout: 20 simple random points, annotation area X: 10-95% / Y: 10-95%; and scale bar protrusion used for determining distance from ROV to surface. Platform and location identifiers have been removed from all images. Purple crosses and numbers indicate annotation points.

Figure 3. Percentage of correctly identified (by software) annotation points per platform and method, separated by species/labels. Graph only shows the proportion of correctly identified species and is not calculated in relation to the total number of points.
Figure 4. Percentage of the total number of annotation points per platform correctly identified (pale grey) by the software; percentage of the number of annotation points identified by the software (all points above the confidence threshold) that were incorrect/errors (black); and percentage of the total number of annotation points below the confidence threshold (dark grey).

Figure 5. Representative images collected from offshore structure ROV surveys. (a,b) represent typical quality footage which may not be suitable for use in CoralNet; (c,d) represent the best quality images that can be gathered from offshore ROV surveys.

Table 2. Outline of the three testing methods: confidence threshold, annotation area, number and layout of annotation points, number of images from the platform and the number of images trained in the classifier. [#: number of].
Table 3. Number of annotation points for species/labels identified incorrectly by the classifier, alongside the correct species/label. [#: number of].

Table 4. Difference in means between comparison pairs, two-way paired Student's t-test comparison, with Bonferroni correction per species for all testing methods (A, B and C; Con = confirmed, UCon = unconfirmed, Eye = manual, Con-N = confirmed-normalised) for each platform (PlatMX, PlatEP and PlatMS). Bold ** = significant following Bonferroni correction (corrected p value in final row); PlatMS A Con-N vs. Eye and C Con-N vs. Eye = no significant difference across platform if combining anemone labels; Grey = no comparison available as "no data" not recorded during manual analysis; Blank = no data recorded for label/species.

Table 5. Shannon-Wiener species diversity index (H), total number of species (S) and evenness (EH) per platform and testing method (A, B and C) at 10 m depth intervals. H = Shannon-Wiener index; S = total number of species in the community (richness) (note: non-species labels have been removed: "no data", scale bar, water and infrastructure surface); EH = equitability (evenness); Blank = no data recorded.
2018-02-18T16:07:06.211Z
2018-01-03T00:00:00.000
{ "year": 2018, "sha1": "6a2efc0d598bb8d66848b39de7e8461733a9fd3e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-1312/6/1/2/pdf?version=1514988203", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6a2efc0d598bb8d66848b39de7e8461733a9fd3e", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
212929467
pes2o/s2orc
v3-fos-license
99mTc-radiolabeled Levofloxacin and micelles as infection and inflammation imaging agents. Easy and early detection of infection and inflammation is essential for early and effective treatment. In this study, PEGylated micelles were designed and both micelles and Levofloxacin were radiolabeled with 99mTcO4- to develop potential radiotracers for detection of infection/inflammation. Radiolabeling efficiency, in vitro stability and bacterial binding of 99mTc-Levofloxacin and 99mTc-micelles were compared. The aim of this study is to formulate and compare 99mTc-Levofloxacin and 99mTc-micelles as infection and inflammation agents having different mechanisms of accumulation at the infection and inflammation site. PEGylated micelles were designed with a particle size of 80 ± 0.7 nm and proper characterization properties. High radiolabeling efficiency was achieved for 99mTc-Levofloxacin (96%) and 99mTc-micelles (87%). The radiolabeling efficiency remained stable, with some insignificant alterations, for both radiotracers at 25 °C for 24 h. Although the in vitro bacterial binding of 99mTc-Levofloxacin was higher than that of 99mTc-micelles, 99mTc-micelles may also be evaluated as a potential agent due to long circulation and passive accumulation mechanisms at the infection/inflammation site. Both radiopharmaceutical agents exhibit promising results from the design, characterization, radiolabeling efficiency and in vitro bacterial binding points of view.

Introduction The diagnosis of infections with the help of radiological imaging modalities such as computed tomography (CT), ultrasonography (US) and magnetic resonance imaging (MRI) provides both non-invasive diagnosis and accurate indication of the area of the lesion. Since these anatomical imaging techniques rely on morphology, infections can only be detected after formation of a morphological alteration. Therefore, early-stage detection is not possible with these routine medical imaging modalities, especially for deeply seated infections, for example, osteomyelitis, intraabdominal infections and endocarditis. For this purpose, the use of scintigraphic imaging modalities such as gamma scintigraphy, single photon emission computed tomography (SPECT), positron emission tomography (PET) and hybrid imaging modalities comprising SPECT/CT, PET/CT and PET/MRI can provide physiologic and metabolic information about the lesion. Specific radiopharmaceuticals have been sought for obtaining lesion imaging with physiologic and metabolic information at the early stage of microbial infections. 99mTc-labeled leukocytes were the first radiopharmaceutical, in the 1980s, with the ability to image deeply seated infections and indicate regions of inflammation [1][2][3]. However, the cumbersome labeling method is one of the most commonly encountered disadvantages [4][5][6]. Some other agents for scintigraphic imaging were developed to perform quick and efficient imaging of inflammation and infection with high sensitivity and specificity [45,46]. Radiolabeled liposomes were designed as imaging agents for inflammation and infectious processes. It was previously demonstrated that liposomes with small particle size and a surface coating of a hydrophilic polymer (such as PEG) show enhanced blood circulation time, providing increased accumulation at the site of infection [47][48][49][50][51]. Liposomes which are formulated with a particle size in the nanometer range and surface coated for passive targeting are called stealth liposomes.
A variety of stealth liposomes were prepared and evaluated for this purpose [49,[52][53][54]. Although many previous studies were performed on 99mTc-radiolabeling of antibiotics and liposomes, these techniques are still very valuable for their accumulation at the site of infection and inflammation [25]. Infection-specific radiopharmaceuticals can be successfully used for diagnosis, imaging and therapy monitoring. There has been intensive research for the development of specific infection and inflammation imaging probes because routinely used tracers in clinics still cannot evaluate infection and inflammation efficiently. Although micelles are another efficiently used delivery system for imaging or therapy of many diseases, research on the diagnosis and imaging of infection and inflammation with micelles is very limited. Micelles are formed of lipid monolayers with a fatty acid core and a polar surface. Inverted micelles are defined as the vice-versa of micelles, in which the polar core is in the center with fatty acids on the surface. Therefore, research evaluating the infection and inflammation imaging potential of micelles may be beneficial. 99mTc-Levofloxacin was radiolabeled in two studies [36,55] by different radiolabeling methods, including cysteine·HCl as co-ligand. The in vivo efficacy of 99mTc-Levofloxacin in small-animal infection models was found to be high in both studies. However, the in vitro bacterial binding of 99mTc-Levofloxacin was never investigated and compared with 99mTc-labeled PEGylated, phosphatidylcholine (PC), sodium dodecyl cholate (SDC) and DTPA-PE containing nanosized micelles. In this study, PEGylated, PC, SDC and DTPA-PE containing nanosized micelles were designed, and both micelles and Levofloxacin were radiolabeled with 99mTcO4- by the tin reduction method to develop potential radiotracers for detection of infection and inflammation. The aim of this study is to formulate and compare radiolabelled micelles and the radiolabelled antibacterial agent Levofloxacin as infection and inflammation agents having different mechanisms of accumulation at the infection and inflammation site. Radiolabeling of 99mTc-Levofloxacin was evaluated by changing the antibiotic concentration, reducing agent (SnCl2·2H2O) concentration, pH and incubation time. Among these processes, radioactivity was kept constant and the percent labeling of 99mTc-Levofloxacin was measured using ITLC plates. Characterization studies were performed for 99mTc-radiolabeled micelles and the radiolabeling efficiency was also evaluated. Bacterial binding of 99mTc-labeled Levofloxacin and micelles was compared in Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli) for evaluating and comparing their in vitro bacterial binding and specificity as potential infection and inflammation imaging agents.

Radiolabeling of Levofloxacin This was performed to determine the best conditions for the labeling of Levofloxacin with 99mTcO4-. Labeling efficiency was calculated by changing SnCl2·2H2O amounts from 15 to 150 μg, Levofloxacin amounts from 0.5 to 3 mg, pH from 3 to 7 and incubation time from 15 to 120 min, with the same amount of sodium pertechnetate (5 mCi) freshly eluted from a 99Mo/99mTc generator. The pH was arranged by using 0.1 N HCl and NaOH solutions [36,55].
Radiochemical analysis Radiochemical purity was evaluated by ITLC using miniaturized ITLC-SG plates to determine the percentage of unbound pertechnetate (99mTcO4-) and hydrolyzed/reduced technetium (99mTcO2), using acetone and saline as running solvents, respectively. 99mTc-Levofloxacin was spotted onto ITLC plates. The radiochemical purity of 99mTc-Levofloxacin was measured using Equation (1). For the purpose of obtaining efficient and maximum radiolabeling, the optimum amount of Levofloxacin, reducing agent, pH and incubation time were evaluated. The highest radiolabeling yield was calculated after passing 99mTc-Levofloxacin through a 0.22 μm filter and by ITLC analysis [36,42,44,55].

Synthesis of DTPA-PE DTPA-PE was used as chelating agent for radiolabeling of the micelles. It was synthetized by mixing 0.1 mM of DOPE in 4 mL of chloroform, supplemented with 30 μL of triethylamine. This was then added to 1 mM of DTPA anhydride in 20 mL of DMSO by stirring. This mixture was incubated for 3 h at 25 °C under argon gas. Afterwards, the solution was dialyzed against 6 L of water at 4 °C for 48 h. Purified DTPA-PE was freeze-dried and stored frozen at −80 °C [56,57].

Characterization of micelles The characterization of the PEGylated, PC, SDC and DTPA-PE containing nanosized micelles was determined by measuring mean particle size and zeta potential. Mean particle size and zeta potential Mean particle size, polydispersity index (PDI) and zeta potential of the micelles were measured using the Nano-ZS (Malvern Instruments, Malvern, UK) by the dynamic light scattering method at 25 °C. Quality control of binding was checked by ITLC-SG plates with saline and acetone as the running solvent system. After completion of the development procedure, strips were cut and measured in a well-type gamma counter.

In vitro stability studies In vitro stability of 99mTc-Levofloxacin and the radiolabeled, PEGylated, PC and SDC containing micelles was determined at room temperature in the presence of saline (NaCl 0.9% (w/v)) or serum. For this purpose, 0.1 mL of sample was added to 0.9 mL of saline or an equal volume of cell culture medium (PBS) supplemented with 10% FBS. Afterwards, they were analyzed by ITLC after incubation for 1, 2, 4, 6, 8, and 24 h to estimate the radiochemical stability of 99mTc-Levofloxacin and the radiolabeled micelles [55].

In vitro binding assay with bacteria The in vitro binding efficiency of 99mTc-Levofloxacin and 99mTc-radiolabeled PEGylated PC:SDC micelles was investigated against S. aureus and E. coli. For the evaluation of bacterial binding, 0.9 mL of a bacterial suspension containing approximately 10^8 CFU was taken, and exactly 0.1 mL of PBS containing approximately 1 mCi of radiolabeled Levofloxacin or micelles was added to the test tubes. The mixtures were then incubated for 1 h at 37 °C. Afterwards, they were centrifuged for 5 min at 2000 rpm at 4 °C and the pellets were then resuspended in 1 mL of PBS. The resuspended pellets were centrifuged for 5 min, the supernatants were separated and 1 mL of PBS was added. The supernatants were removed and the radioactivity in the bacterial pellets was determined by a well-type gamma counter. The bacterial binding of the radiopharmaceuticals was calculated according to Equation (2) [60][61][62].
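Equations (1) and (2) themselves did not survive text extraction. For orientation only, the conventional definitions used in comparable 99mTc-labeling studies are sketched below; these are assumptions about the likely form of the equations, not the authors' verbatim formulas.

```latex
% Assumed conventional forms (not the authors' verbatim equations)
% (1) Radiochemical purity from ITLC strip counts
\mathrm{RCP}\,(\%) = 100 - \left(\%\,{}^{99m}\mathrm{TcO_4^-}\ \text{(free)} + \%\,{}^{99m}\mathrm{TcO_2}\ \text{(reduced/hydrolyzed)}\right)

% (2) In vitro bacterial binding from gamma-counter measurements
\mathrm{Bacterial\ binding}\,(\%) = \frac{\text{activity in bacterial pellet}}{\text{total activity added}} \times 100
```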
Statistical analysis All values were expressed as the mean ± SD with n = 6. Nonparametric test methods were used when the number of observations was less than 30. Depending on the number of groups, the Student's t-test was used for the comparison of two groups and the Kruskal-Wallis test was used for the comparison of three or more groups. The significance level was set at p < 0.05.

Results and discussion 99mTc-Levofloxacin and 99mTc-radiolabeled micelles were prepared for the purpose of effective infection and inflammation imaging.

The characterization of 99mTc-radiolabeled, PEGylated micelles PEGylated, PC, SDC and DTPA-PE containing nanosized micelles were prepared and radiolabeled for imaging of infection and inflammation. The characterization of the micelles was performed by measuring particle size and zeta potential. The average particle size of the micelles was 80 ± 0.7 nm, the polydispersity index was 0.14 and the zeta potential was 21.15 ± 1.2 mV.

Radiolabeling process and quality control of 99mTc-Levofloxacin The optimization of radiolabeling provides the best conditions to obtain maximum labeling efficiency. 99mTc radiolabeling of Levofloxacin was tried under varying conditions and the labeling yield was optimized.

Effect of Levofloxacin amount The radiolabeling of Levofloxacin was performed at varying amounts (0.5-3 mg) with the same amount of radioactivity. The best radiolabeling efficiency of Levofloxacin was 96 ± 2.12%, which was achieved by the addition of 1 mg Levofloxacin. It was observed that lesser amounts of Levofloxacin resulted in decreased labeling efficiency. The radiolabeling efficiency of 0.5 mg of Levofloxacin was 83%. Amounts higher than 1 mg resulted in no significant alteration of the labeling efficiency. Our findings were in agreement with the literature [55].

Effect of reducing agent Reducing agent (SnCl2·2H2O) was used in the range of 15-150 μg for the evaluation of the optimum amount of SnCl2·2H2O to achieve the highest labeling efficiency. It was observed that at low amounts of reducing agent, the labeling efficiency was very low (~57%). The highest radiolabeling efficiency was obtained at 50 μg mL−1 of SnCl2·2H2O (~96%). It may be concluded that lower amounts (below 50 μg mL−1) of reducing agent cannot effectively reduce the whole of the 99mTcO4− for the labeling process (Fig. 1). This observation was in agreement with previous studies [36].

Effect of pH Different pH values (pH 3-7) were evaluated for determining the effect of pH on radiolabeling efficiency. No significant difference in labeling yield was observed under acidic conditions. On the other hand, the labeling efficiency decreased at slightly basic pH, which may be due to a possible change in the structure of Levofloxacin related to the carboxylic moiety at basic pH (Fig. 2). Its carboxylic moiety may be neutralized, preventing labeling with 99mTcO4- during the metal exchange reaction. The maximum radiolabeling (96%) was achieved at pH 5. Higher and lower pH values cause a decrease in labeling efficiency. This observation was in parallel with other studies [36,55].

Effect of incubation time Incubation time is an essential parameter in the radiolabeling efficiency of radiopharmaceuticals. The effect of incubation time on the labeling efficiency of 99mTc-Levofloxacin is given in Fig. 3. Maximum radiolabeling efficiency (96%) was obtained after 15 min of incubation of Levofloxacin with sodium pertechnetate. It was observed that longer incubation times did not cause any significant difference in the radiolabeling efficiency, which was in agreement with the literature [36].
After evaluation of the effects of changing the antibiotic concentration, reducing agent concentration and pH, it was concluded that maximum labeling efficiency was achieved when 1 mg Levofloxacin and 50 μg SnCl2·2H2O were dissolved in 1 mL of saline at pH 5 and incubated with sodium pertechnetate (5 mCi) for 15 min. Afterwards, the radiolabeled Levofloxacin was filtered through a 0.22 μm filter at room temperature. The highest radiolabeling yield was observed as 96 ± 2.13%, which was in parallel with previous studies [36,55].

In vitro stability studies According to the stability test performed at room temperature (25 °C), the radiolabeling efficiency remained stable, with some insignificant alterations and degradation, at 25 °C for 24 h (Table 1). No significant difference was observed in the serum and saline stability of 99mTc-Levofloxacin and 99mTc-micelles, which was in agreement with the literature [36,55,63,64].

In vitro bacterial binding The in vitro bacterial binding of radiolabeled Levofloxacin and micelles after S. aureus incubation at 37 °C was observed as 75 ± 1.3% and 45 ± 2.1%, respectively. The in vitro E. coli binding of radiolabeled Levofloxacin and micelles at 37 °C was 63 ± 2.4% and 40 ± 1.9%, respectively (Fig. 4). Both radiolabeled Levofloxacin and micelles show high in vitro bacterial binding. 99mTc-radiolabeled PC:SDC micelles exhibit lower bacterial binding for both S. aureus and E. coli cultures (p < 0.05). This may be due to the fact that 99mTc-radiolabeled PC:SDC micelles are not as specific to bacterial infections as 99mTc-Levofloxacin. However, 99mTc-radiolabeled PC:SDC micelles can also be considered sensitive radiopharmaceutical agents due to their mechanism of imaging of infection and inflammation. 99mTc-radiolabeled, PEGylated, PC, SDC and DTPA-PE containing nanosized micelles can accumulate at infection and inflammation sites owing to the long vascular circulation of the vesicles provided by small particle size and a surface coating of a hydrophilic polymer (such as PEG) [47,[52][53][54]. It was reported in some previous studies that the accumulation of radiolabeled liposomes at infectious sites occurs by leakage of the vesicles through vessels. This leakage depends on the enhanced vascular permeability and subsequent phagocytosis by macrophages of infected tissue. It was reported that the endothelial junctions existing in the blood vessels allow the penetration of particles smaller than 200 nm out of the vasculature [48,54]. Therefore, surface-coated, nanosized drug delivery systems like liposomes can accumulate at the site of infection and inflammation due to long circulation, enhanced vascular permeability and reduced removal by opsonisation [49,52,65]. Therefore, 99mTc-Levofloxacin and 99mTc-radiolabeled, nanosized, PEGylated, PC, SDC and DTPA-PE containing micelles were observed to be potential imaging agents for the diagnosis and imaging of inflammatory and infectious sites [66]. Although radiolabelled antibiotics could represent a promising tool to specifically image infective processes, radiolabelled micelles do not show a similarly specific mechanism of accumulation at infectious/inflammatory sites; their accumulation is mainly due to increased permeability (which is present in both infective and inflammatory processes) and small particle size.
Therefore, this "passive" and non-specific mechanism implies that they could be useful for monitoring the inflammatory burden but, since radiolabelled micelles are not able to discriminate between an infection from a sterile inflammation, their use for imaging of infections is limited in clinical practice. Conclusion Radiolabeled compounds and nanocarriers lead to direct researchers toward easy and quick diagnosis and imaging of infection and inflammation. Although, radiolabeled Levofloxacin is particularly an excellent alternative for the detection of chronic infections caused by gram-positive and gram-negative bacteria, radiolabeled PEGylated, PC, SDC and DTPA-PE containing nanosized micelles were also evaluated as a potential candidate in the detection of infection and inflammation. As it was known, micelles similar to liposomes tend to accumulate at the site of infection based on the long vascular circulation of vesicles by small particle size and surface coating by a hydrophilic polymer (such as PEG). Both radiopharmaceutical agents exhibit potential results in design, characterization, radiolabeling efficiency and in vitro bacterial binding point of view. In vivo potential of radiolabeled Levofloxacin and micelles should also be evaluated in the infection and inflammation animal models in the future. Declaration of competing interest The authors state no decleration of interest.
Prehospital Cardiac Arrest Should be Considered When Evaluating Coronavirus Disease 2019 Mortality in the United States Background  Public health emergencies leave little time to develop novel surveillance efforts. Understanding which preexisting clinical datasets are fit for surveillance use is of high value. Coronavirus disease 2019 (COVID-19) offers a natural applied informatics experiment to understand the fitness of clinical datasets for use in disease surveillance. Objectives  This study evaluates the agreement between legacy surveillance time series data and discovers their relative fitness for use in understanding the severity of the COVID-19 emergency. Here fitness for use means the statistical agreement between events across series. Methods  Thirteen weekly clinical event series from before and during the COVID-19 era for the United States were collected and integrated into a (multi) time series event data model. The Centers for Disease Control and Prevention (CDC) COVID-19 attributable mortality, CDC's excess mortality model, national Emergency Medical Services (EMS) calls, and Medicare encounter level claims were the data sources considered in this study. Cases were indexed by week from January 2015 through June of 2021 and fit to Distributed Random Forest models. Models returned the variable importance when predicting the series of interest from the remaining time series. Results  Model r2 statistics ranged from 0.78 to 0.99 for the share of the volumes predicted correctly. Prehospital data were of high value, and cardiac arrest (CA) prior to EMS arrival was on average the best predictor (tied with study week). COVID-19 Medicare claims volumes can predict COVID-19 death certificates (agreement), while viral respiratory Medicare claim volumes cannot predict Medicare COVID-19 claims (disagreement). Conclusion  Prehospital EMS data should be considered when evaluating the severity of COVID-19 because prehospital CA known to EMS was the strongest predictor on average across indices. Introduction Epidemic preparedness in the United States is generally weak, and the COVID-19 response is largely drawn from preexisting pan-flu emergency plans.[12][13] During a public health emergency, the clinical knowledge needed to respond is developed by case surveillance drawn from preexisting data series. COVID-19 has presented an unusual opportunity to evaluate agreement across surveillance efforts within the United States. The ability to detect clinical findings, in meaningful ways, from surveillance nets and epidemiology methods that were not necessarily designed to detect them is a high priority for the future management of emerging infectious diseases.[16][17][18][19][20] Objectives In this study, public health surveillance data are processed using a machine learning approach to discover the relative agreement of one surveillance event series when predicting another surveillance event series. Toward these objectives, this study seeks to assess the agreement between event series and to contrast the value of traditional surveillance methods (death certificates, influenza, and respiratory infection claims volumes) with nontraditional sources such as national Emergency Medical Services (EMS) call volume data in the COVID-19 era in the United States.
Statistic of Interest Variable importance is the statistic of interest in this study.Variable importance means that when predicting the dependent variable, an independent variable which is of comparatively higher predictive value (association) than another is of higher (predictive) use value.When considering high variable importance with weekly event series data, series which help the machine learning models learn, predict, or guess the correct dependent weekly event series could be cooccurring or mutually observed events.The high variable importance scores from different sources suggest that series observe the same real-world event across surveillance efforts as they support prediction better than noise and other candidate series (other independent variables). Of special interest are "high variable importance and independent variables" from a different data source than the dependent variable.High same-source variables are most likely high in value because they are similarly distributed across study weeks to their parent-sister series and in turn are not necessarily interesting.A series of events can be said to have "agreement value" if it has high statistical agreement with other series from a different source.Low statistical agreement suggests "out of era" events or events which are not driven by the same causes as other series considered here. Toward noise and disagreement, influenza and respiratory infection claims volumes are considered below with COVID-19 claims volumes.Claims volumes are traditionally used in influenza surveillance.As a test of the efficacy of the models described here, COVID-19 volumes should be able to "outperform" influenza volumes as the COVID-19 era is largely understood to be influenza sparse.In this way respiratory and influenza events could be understood as a control arm as well as a model output of independent interest. Medicare Medicare provided three event series to this study.Medicare encounter-level claims through July, 2021 were sourced through the Chronic Conditions Warehouse (CCW).Records from 2015 through July 2021 were considered.Claims that contained influenza, COVID-19, or respiratory infection diagnostic code were enrolled.A series was generated for counts of distinct individuals within a series by calendar week.The Medicare-sourced series do not describe the duration of illness but the frequency of billing over time for distinct individuals.Medicare claims provided three series to this study, specifically "Influenza Diagnostic (DX) Codes," "COVID-19 DX Codes," and "(Viral) Respiratory Infection DX Codes" series.The viral respiratory series includes fever, bronchitis, viral lung infection, acute respiratory distress syndrome (ARDS), and pneumonia ICD10-CM codes.Procedure, HCPS, and CPT-4 codes were not considered. The Centers for Disease Control and Prevention The Centers for Disease Control and Prevention (CDC) provided five series for this study.COVID Deaths: COVID deaths are described as weekly data set which disambiguates the primary cause of death (COD) on Multiple Cause of Death Certificates (MCDC) received by the CDC within the given week.The dataset further describes secondary causes of death when COVID-19 diagnostic codes are present.The COD All Cause, COD COVID Primary, and COD COVID Secondary series in this study were learned from this data set.COVID deaths data were retrieved from: "https://data.cdc.gov/NCHS/Provisional-COVID-19-Deaths-by-HHS-Region-Race-and/tpcp-uiv5." 
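As a rough illustration of the weekly series construction described for the Medicare claims above (counts of distinct individuals per calendar week), the following pandas sketch shows one way such a series could be derived from an encounter-level extract. The file path, column names, and week convention are hypothetical and are not taken from the study.

```python
import pandas as pd

# Hypothetical claim-level extract: one row per qualifying claim with a de-identified
# beneficiary id and a claim date. Path and column names are illustrative only.
claims = pd.read_csv("covid_dx_claims.csv", parse_dates=["claim_date"])

# Index each claim by calendar week and count distinct individuals per week,
# mirroring the "distinct individuals within a series by calendar week" construction.
claims["week_ending"] = (
    claims["claim_date"].dt.to_period("W-SUN").dt.end_time.dt.normalize()
)
weekly_covid_dx = (
    claims.groupby("week_ending")["beneficiary_id"]
          .nunique()
          .rename("covid19_dx_codes")
)
print(weekly_covid_dx.tail())
```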
Excess Mortality: The CDC excess mortality data set describes weekly observed deaths together with model-based estimates of the deaths expected in the absence of the pandemic, the difference being excess deaths.[22][23][24][25][26][27] These deaths are technically preventable because they are being prevented in real time in other states. The interpretation of excess mortality is a complex topic, and individuals who die in excess are not necessarily dying significantly before they would have died barring the excess event. Two study series are learned from this data set, Observed Deaths and Excess Deaths. Excess deaths are produced using Farrington flexible methods.[28][29] Excess mortality data were retrieved from "https://data.cdc. The National Emergency Medical Services Information System The National Emergency Medical Services Information System (NEMSIS) provided five event series to this study. NEMSIS is a complex data center which collects data from state-level supervising EMS authorities.[30][31] NEMSIS is designed to support EMS outcomes research and complex, evidence-based medicine research.[32] NEMSIS has a stable data model of EMS episode values which are collected for every emergency (911) call which is routed to an EMS agency in the United States. A weekly extract was created using NEMSIS OLAP cubes for 2014 to 2016 and 2017 to present. The cardiac arrest (CA) subset, which codes calls for arrests before and after EMS arrived on the scene, was also extracted. "NEMSIS Calls," "NEMSIS Calls CA Yes," "NEMSIS Calls CA No," "NEMSIS CA Prior" to arrival, and "NEMSIS CA After" arrival of the EMS crew were learned from NEMSIS. NEMSIS data were retrieved from: "https://nemsis.org/viewreports/public-reports/ems-data-cube/." Statistical Models The 13 series were integrated into a single "cases per week" data model and processed using machine learning methods in h2o.ai (https://www.h2o.ai). Specifically, models were generated to learn the dependent to independent variable relationships across the series, such that each series' weekly value was learned (predicted) from all other weekly event series values. Each series took a turn being the dependent variable in a Distributed Random Forest (DRF) model.[33] R-squared (r2) values for the models as well as scaled variable importance in decision-making are described below in detail. Models were cross-validated five times each. Note that each series was itself a model (being predicted) from the other series, for a total of 14 models (13 event series and the study week itself). The statistic of interest is the variable importance of an independent variable when attempting to predict the dependent variable within a DRF model. Models considered any volume between January 1st, 2018 and July 1st, 2021. Raw case count values were used; neither log/lag modeling nor relative rates were considered. Note that DRF transforms numeric values to a continuous distribution in preprocessing. The use of the "week" of the event most likely obscures or confounds episode attribution of count data model events, as a case could be transported by EMS, bill Medicare, and populate a CDC death certificate within a calendar week or over several months in the case of advanced life support. The models should not be used to model the epidemic but rather to assess the agreement within the implicit (pseudo-harmonized) time scales of the series.
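The modeling step can be sketched with the h2o Python API, which provides the Distributed Random Forest estimator used in the study. This is only an illustration under assumed settings: the input file, the tree count, and the seed are placeholders, and in the study each of the 14 series takes its turn as the dependent variable.

```python
import h2o
from h2o.estimators import H2ORandomForestEstimator

h2o.init()

# Hypothetical file: one row per week, one column per weekly event series
# (including a week-ending-date column), mirroring the integrated data model.
weekly = h2o.import_file("weekly_event_volumes.csv")

y = "COVID_19_DX_Codes"                      # the series taking its turn as dependent
x = [c for c in weekly.columns if c != y]    # all remaining series as predictors

drf = H2ORandomForestEstimator(ntrees=100, nfolds=5, seed=1)  # settings are illustrative
drf.train(x=x, y=y, training_frame=weekly)

print(drf.r2(xval=True))               # cross-validated r2, as reported per model
print(drf.varimp(use_pandas=True))     # includes the scaled_importance column
```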
Results ►Table 1 describes the event series, its data source, the specific data set name, the series extracted for this study, the time range, and the total events within the series of interest.Note that NEMSIS CA status is a declaration aggregate, and call where CA did not occur is a call with an explicit declaration.In turn, the total calls (sum) do not reflect the sum of CA and non-CA calls.►Fig. 1 shows the weekly volume of events within series described as totals in Table 1.The upper right describes Medicare weekly case events, and the bottom right describes excess mortality series.The upper left describes NEMSIS series, and the bottom left describes COVID-19 death certificates.►Table 2 presents a matrix of dependent and independent variable series relationships, where the scaled variable importance is presented.Each column is a DRF model where the column header is the dependent variable.The independent variables are listed along the left-hand side of the table.In scaled variable importance measures, "1" is the highest value and independent variable can receive; and only one "1" can be awarded within a model.For example, dependent "Influenza DX Codes" weekly values from Medicare were most strongly learned from "Respiratory Codes" (1) from Medicare followed by "All Cause COD" (0.7191) from MCDC, "Observed Deaths" from Excess Deaths (0.6552) and "COVID-19 DX Codes" from Medicare (0.4475).Alternately, "COVID 19 DX Codes" from Medicare shows "Week Ending Date" (1), followed by "COVID Primary COD" (0.4015) and "COVID Secondary COD" from MCDC (0.3455), "Excess Deaths" (0.2451), and strikingly "NEMSIS CA Prior EMS" (0.2445).Note that when predicting "COVID 19 DX Codes," "Respiratory Codes" are of little help (0.0636) but when predicting "Respiratory Codes," "COVID 19 DX Codes" are fairly helpful (0.8722) when making said prediction.r 2 is plotted above the dependent variable. ►Table 3 replots ►Table 2 values as above or below the model run's geometric mean variable importance score (column-wise geometric mean).The regions within the black outlines should be understood as variables from the same series source.While the models did know weekly features from the same data source their importance toward the study objective is minimal.For example, the only "same source series" variable importance below average was the Medicare "COVID 19 DX" model with influenza and viral respiratory variables being low importance (as expected).This should mean that the model did not learn what the weekly "COVID 19 DX Codes" volume was from viral infection and influenza codes; their series are independent in this study.Above variable importance within column models from different series should detail the interrelatedness of the multiseries weekly events.For example, "NEMSIS CA After EMS" shows above the geometric mean of variable importance for "Week Ending Date," "COVID 19 DX Codes," and "COD COVID Primary" series.The "Total Above" ranged 5 to 8, indicating similar importance distributions. In ►Table 4, the geometric mean has been computed for each row and if the raw value exceeds the geometric mean, the raw value is marked "above" as in ►Table 3. 
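The above/below geometric-mean flagging used for Tables 3 and 4 can be reproduced with a few lines of pandas. The sketch below uses illustrative placeholder values rather than the published Table 2 figures, and simply marks entries that exceed the column-wise (Table 3 style) or row-wise (Table 4 style) geometric mean.

```python
import numpy as np
import pandas as pd

# Illustrative placeholder values only (not the published Table 2 figures):
# one column per model (dependent series), one row per independent variable.
vimp = pd.DataFrame(
    {
        "COVID 19 DX Codes": [1.00, 0.40, 0.34, 0.24],
        "COD COVID Primary": [0.80, 1.00, 0.30, 0.55],
    },
    index=["Week Ending Date", "COD COVID Primary", "Excess Deaths", "NEMSIS CA Prior EMS"],
)

def flag_above_geomean(v: pd.DataFrame, axis: int) -> pd.DataFrame:
    """True where an entry exceeds the geometric mean taken column-wise (axis=0,
    Table 3 style) or row-wise (axis=1, Table 4 style); zeros get a small epsilon."""
    safe = v.replace(0, 1e-6)
    gmean = np.exp(np.log(safe).mean(axis=axis))
    return safe.gt(gmean, axis=1 - axis)

table3_flags = flag_above_geomean(vimp, axis=0)
total_above = flag_above_geomean(vimp, axis=1).sum(axis=1)  # "Total Above" per variable
print(table3_flags, total_above, sep="\n\n")
```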
►Table 4 can be used to assess above-average variable importance across models. High variable importance across models indicates that multiple series relied on the independent variable to learn the dependent weekly value. For example, in ►Table 4, the "COD All Cause" independent variable was above the average variable importance (for different sources) in the models "Week Ending Date," "Influenza DX Codes," "Respiratory Codes," "Excess Deaths," and "Observed Deaths" (from the excess deaths source). Total Above ranged from 2 to 10, suggesting that some series had acute agreement (small number) and some had generalized agreement. The Medicare sourced series have a low Total Above, indicating their value is concentrated in the models "COVID All Cause" and "Observed Deaths." Note that NEMSIS CA Prior EMS is tied with Week Ending Date in first place (10). Discussion Prehospital CA has previously been considered as a syndromic surveillance signal.[35][36] However, the potential for prehospital CA to be considered as a syndromic effect is perhaps limited to influenza and local area use cases in the United States.[37] The same cannot be said for Europe.[38][39] There is evidence that COVID-19 is associated with sudden cardiac death, some of which should be prehospital and pre-EMS arrival.[40] As influenza has inspired developments in syndromic surveillance, perhaps COVID-19 will do the same.[38][42][43] Preexisting surveillance methods have proven inadequate, and CDC has proposed a modernization effort to produce novel surveillance efforts within the epidemic response.[44] Ancillary events, such as EMS calls and Medicare bills, could support surveillance tasks like early detection of an outbreak, severity models, and prevention efforts. This paper demonstrates that Medicare and NEMSIS data have value when predicting traditional measures of epidemic modeling like COD and Excess Mortality. Within Medicare sourced series, EMS call volumes were below average variable importance for Influenza and Respiratory Viral claims volumes but were above average for COVID-19 volumes when calls without CA and calls where CA occurred prior to EMS arrival are considered. NEMSIS series benefited from knowing the call volumes which were CA prior to EMS arrival, consistently ranked within NEMSIS series as 1 or the most important. COVID-19 as primary COD on a multiple COD certificate and the volume of Medicare COVID-19 claims were also above average in importance when predicting NEMSIS call volumes. This suggests that COVID-19 is driving EMS call volumes. Within the CDC MCDC series, both the primary and secondary COD models found above-average predictive value from NEMSIS call volumes which involved a CA, suggesting that EMS arrests may not survive the experience. There is also predictive value in the CDC excess mortality model values, but this is to be expected as the excess mortality model was designed to evaluate excess mortality from COVID-19. Within the CDC Excess Mortality series, NEMSIS call volumes for CA as well as COVID-19 being present on a multiple COD certificate were high value when predicting the weekly Farrington Flexible mortality excess estimates.
Variable importance detailed in Tables 2 and 3 demonstrates meaningful model segmentation between series and series events.Influenza and viral respiratory codes are particularly interesting as a "control" case in this COVID-19 era data set.Both influenza and viral respiratory series show interrelatedness in their variable importance and difference or segmentation from COVID-19."CA prior to EMS" arrival was also of note because "CA prior to EMS" arrival most likely results in a decedent without a COVID-19 diagnosis, a decedent who may be ineligible for a primary COD 'COVID-19' declaration.►Table 3 further belabors the point, with "COD Primary COVID" model showing "NEMSIS Calls CA Yes," "NEMSIS CA Before," "NEMSIS CA Prior," "Observed Deaths," and "Excess Deaths" above the geometric mean of variable importance within the "COD Primary COVID" model.Given that DRF does not know what a cardiac arrest is nor Farrington Flexible but is still able to associate the weekly distributions with COVID-19 primary COD on MCDCs from only the weekly counts highlights the strength of this approach. Table 4 demonstrates high general utility for most independent variables in the model series.It also suggests that the Medicare series was not as strongly utilized in decisionmaking with a geometric mean range of 2-3.This could be due to the real-world sampling distribution of Medicare enrollment relative to the total morbidity burden in the United States.How much of the COVID-19 burden should be among Medicare beneficiaries remains unknown.All other series are national, while Medicare is enrollee specific and may not offer as much instruction to prediction.However, despite the difference in real world lag (between claims being processed and a death certificate being populated, or a 911 call being placed), the model produced r 2 > 0.9 in most cases.Note that "NEMSIS CA Prior EMS" had as many "above" the geometric mean in ►Table 4 as the week itself.This means it is tied for the best predictor across models.The implications of these prior arrests are profound, and they may be a sink of underrecognized COVID-19 mortality. The length of the series, and the "isotonic" nature of the data may explain the difficulty of predicting the week of series, as the opportunity for weekly patterns to repeat most likely confused week assignments.As COVID and influenza had multiple "waves" over the observation period, a bad week guess could be a repeat start, peak, or end event.A bad week guess could also be a time point with little data being confused for another low-volume time point.The NEMSIS anomaly in 2017 (low volumes) is not well understood but is most likely due to NEMSIS transitioning OLAP series in 2017 or perhaps there was a national decrease in EMS call volumes in 2017.Most likely the models are not impacted as the models consider records from 2018 onward. 
The analysis would be more robust if series completeness could be achieved, especially in early model years. ►Table 1 shows several data series available in earlier years than others. Medicare data particularly suffer from changes in diagnostic code recall in ICD9-CM versus ICD10-CM years (only ICD10-CM years were considered here). The "stability" of a series is of high importance when evaluating future surveillance value. The model did not weigh variables by series source and did not "know" that variables were from the same data sources. Weighting series completeness may improve model results; however, r2 was high across models. The Medicare series contains diagnostic and pathology codes for influenza and COVID-19. There may be noncase incidence drivers of testing, vaccination, and pathology including nosocomial infections, the "worried well," as well as public health interventions (mass testing and roster vaccinations). Disambiguating the Medicare indexes could increase their utility even further. The viral respiratory code list includes minor codes like fever as well as ARDS and pneumonia. Their disambiguation by severity may improve model utility as well. Conclusion Prehospital data (EMS) are of high value in COVID-19 surveillance and should be considered as a potential data source when attempting to learn COVID-19 severity within jurisdictions. Medicare data fared worse, though individuals providing care to the Medicare population should consider the disambiguation of patients with COVID-19 from individuals seeking COVID-19 prevention services (testing and vaccination). Human Subjects Protections While this study contains identifiable information describing live human subjects, no National Institutes of Health Institutional Review Board (NIH IRB) review was required. Note that Centers for Medicare and Medicaid Services (CMS) data access and use are approved through the CMS IRB, however. Data were further "cleared" for public release by C.C.W., and C.C.W. evaluated our compliance with CMS nonreidentification standards for data describing beneficiary populations. Figure 1 demonstrates a collapse in influenza Medicare claims and spikes in COVID-19 and viral respiratory infection codes toward the end (right) of the series. COVID excess deaths and MCDC indicate similar peaks on the right side of the x-axis as well. All NEMSIS call volumes are elevated as time progresses. Fig. 1 The weekly event volume by event type. The upper right line graphs describe the per member, per week occurrence of qualifying diagnostic codes on identifiable Medicare claims. COVID-19 (red), influenza (green), and respiratory infection codes (blue) are featured. The bottom right figures show the Excess Deaths (red) and Observed Deaths (blue) from which excess deaths are learned in the CDC excess mortality model. The upper left region describes the NEMSIS series with cardiac arrest after EMS arrival (red), cardiac arrest prior (brown), total calls (green), calls without cardiac arrest (blue) and calls with arrests (purple). The lower left shows the all-cause mortality multiple cause of death certificate volumes (red) and volumes where the primary (green) and secondary causes of death (green) were COVID-19. The x-axis is the study week, and the y-axis is the volume for all figures. Abbreviations: COD, cause of death; COVID-19, coronavirus disease 2019; EMS, Emergency Medical Services; NEMSIS, The National Emergency Medical Services Information System.
Table 1 Series ranges and data sources. Abbreviations: CDC, The Centers for Disease Control and Prevention; COVID-19, coronavirus disease 2019; EMS, Emergency Medical Services; MCDC, Multiple Cause of Death Certificates; NEMSIS, The National Emergency Medical Services Information System.
Table 2 Variable importance matrix and original values with dependent variables (column wise). Abbreviations: COD, cause of death; COVID-19, coronavirus disease 2019; EMS, Emergency Medical Services; NEMSIS, The National Emergency Medical Services Information System.
Table 3 Variable importance matrix by dependent value column wise with independent variables above and below the geometric model mean (column wise). Abbreviations: COD, cause of death; COVID-19, coronavirus disease 2019; EMS, Emergency Medical Services; NEMSIS, The National Emergency Medical Services Information System.
Table 4 Scaled variable importance above the geometric mean row wise (independent variable) across models (column wise).
Savings, investment and economic growth in Ethiopia: Evidence from ARDL approach to co-integration and TYDL Granger-causality tests This paper examines the causal relationship among savings, investment and economic growth in Ethiopia using annual time series data from 1969/70-2010/11 in a multivariate framework. Results from the PP unit root test shows that all variables under consideration are I(1). Result from the ARDL Bounds Testing indicates that there exists co-integration among gross domestic savings, gross domestic investment, real gross domestic product, labor force and human capital when RGDP is taken as dependent variable. Labor and investment have significant positive effect on economic growth of Ethiopia both in the short-run and long-run while GDS and human capital are statistically insignificant. Moreover, Toda-Yamamoto and Dolado-Lutkepohl as well as Innovative Accounting Techniques (i.e., IRFs and FEVD) approach to Granger causality analysis shows that there exists bidirectional causality between gross domestic investment and economic growth as well as between gross domestic savings and gross domestic investment. Granger causality running from investment to savings and from investment to growth is stronger as witnessed from impulse responses and variance decompositions. Although there is unidirectional Granger causality running from economic growth to gross domestic savings, it is weak. To attain high and sustained growth in the country, increased savings and especially investment are required due to its dual effect. INTRODUCTION Promoting economic growth through savings and investment has received considerable attention in many countries around the world (Verma, 2007).This is because high investment and saving rates are crucial for growth as a result of their strong positive correlation with GDP growth rates enunciated by endogenous growth theory (Agrawal, 2000). The conventional perception through which investment, savings and economic growth are related is that savings contribute to higher investment and hence higher GDP growth in the short run (Mohan, 2006).However, there are different thoughts regarding linkages among these variables and how they affect one another. The central idea of Lewis's (1955) traditional theory was that increasing savings would accelerate growth, while the early Harrod-Domar models specified investment as the key to promoting economic growth.In contrast, the neoclassical Solow (1956) model argues that increase in the savings rate boosts steady-state output by more than its direct impact on investment because the induced rise in income raises savings, leading to a further rise in investment (Jangili, 2011).This higher investment in turn accelerates economic growth by increasing aggregate demand in the economy.The relationship among economic growth, savings and investment works also in the other way round according to some recent studies which contradict with the conventional axiom that savings stimulate economic growth (Ahmad and Anoruo, 2001).For instance, studies by Jappelli and Pagano (1994), Gavin et al.(1997), Sinha and Sinha (1998), and Carrol andWeil (1994, 2000) argue that it is economic growth that promotes savings and not vice versa. 
In the Keynesian and post-Keynesian traditions, investment plays a critical role both as a component of aggregate demand and as a vehicle for the creation of productive capacity on the supply side. In post-Keynesian demand-driven models, investment still plays a crucial role in determining medium-run growth rates (Wondwesen, 2011). Savings and investment have been considered as two critical macro-economic variables with micro-economic foundations for achieving price stability and promoting employment opportunities, thereby contributing to sustainable economic growth. However, inadequate savings and investment are a common problem in developing countries. For instance, the Ethiopian average gross domestic savings to GDP ratio has been lower than the SSA average in real terms (Dawit, 2004). The average GDS to GDP ratio in real terms for Ethiopia was 9.7% in the 1990s and 6.4% for the period 2000-08, which is lower than the corresponding average GDS to GDP ratio for SSA (Tasew, 2011). Poor performance of the economy, a high unemployment level, engagement of a large proportion of the population in the informal sector, and low wages are factors responsible for low domestic savings in small developing states. Empirical findings about the causal relationship among savings, investment and economic growth are mixed and controversial across countries, data and methodologies. Some empirical studies support the classical growth theory (see Jappelli and Pagano, 1994), while some agree with the Carroll-Weil hypothesis (see Verma, 2007; Sinha and Sinha, 2008) and some do not support either of these (see Sinha, 1996). Development and growth theories are replete with examples of how savings and investment play a critical role in promoting economic growth. However, most studies in Ethiopia look at the relationship between investment, savings and growth by testing for bi-variate Cointegration and Granger causality separately between investment and growth, or between savings and growth. This study therefore investigates the saving/investment-led growth and growth-driven saving/investment hypotheses by testing for Granger causality, under a multivariate framework, between gross domestic savings, gross domestic investment and growth in Ethiopia.
The remainder of the paper proceeds as follows: section two furnishes the literature review.In section three, the data type and source, and methodology are discussed.Section four presents the empirical results and the last section provides the summary and conclusions of the study. LITERATURE REVIEW A lot of empirical researches have been done on savings, investment and economic growth (in a multivariate framework) in recent years.The motivation for these empirical studies is the growing divergence in saving and investment rates between the developing countries, the growing concern over the falling savings rates in the major OECD countries, and the increasing emphasis of the vital role of investment in the more recent economic growth literature (Verma and Wilson, 2005).This section, therefore, tries to present some of these empirical studies.Jangili (2011) examined the direction of the relationship between saving, investment and economic growth in India at both aggregate level and sectoral level for the period 1950/51 to 2007/08 by using Granger causality test through VAR/VECM framework.Besides, cointegration test based on Johansen and Julius (1990) method was used in order to test the long-run relationship among the variables.The cointegration test result suggests that there exists co-integration relationship among all series with GDP except private corporate saving.Study found that the direction of causality runs from saving and investment to economic growth collectively as well as individually and there is no causality from economic growth to saving and (or) investment.However, there exists reciprocal causality from saving and investment of the private sector to economic growth.This reciprocal causality comes from the household sector where saving and investment led growth and growth driven saving and investment were observed.Empirical evidence also reveals that private corporate sector saving does not cause Granger economic growth. 
The study conducted by Verma and Wilson (2005) on savings, investment, foreign inflows and economic growth of the Indian economy using the annual time series data from 1950 ─2001 shows little evidence that sectoral per worker's savings and investment affect GDP in the long run while per worker GDP has significant but small effects on per worker household savings and investment in the short run.The feedbacks of GDP are absent in the long run and only small and not precise in the short run.Whilst savings certainly influence investment, there are only weak links from investment to output.Generally, their findings do not support the Solow and endogenous growth policy prescriptions that it is desirable to increase household savings and investment so as to encourage economic growth in India.Verma (2007) empirically examined the relationship between savings, investment and economic growth in India using annual time series data for the period 1950/51 to 2003/04.The study applied the Autoregressive Distributed Lag (ARDL) Bounds Testing technique to test for Cointegration.The ARDL Cointegration result revealed that GDP, GDS and GDI have long-run relationship except when GDP is the dependent variable.The author also estimated the long-run and short-run elasticities of the correlation between GDS, GDI and GDP growth which exposes three conclusions.Firstly, the econometric evidence corroborates the Carroll-Weil hypothesis that savings do not cause growth, but growth causes savings.Secondly, the results obviously support the view that savings drive investment both in the shortrun and in long-run.Lastly, there is no evidence that investment is the driver of economic growth in India during the sample period.Attanasio et al. (2000) analyzed the short-run and longrun relationship among savings, investment and growth rate for 123 countries over the period 1961-94.By applying techniques such as OLS, Granger causality and impulse response functions, the study found the following results which are vigorous across data sets and estimation methods: i) lags of saving rates are positively related to investment rates; ii) investment rates Granger cause growth rates with a negative sign; iii) growth rates Granger-cause investment with a positive sign.Budha (2012) examines the relationship between the gross domestic savings, investment and growth for Nepal using annual time series data for the period of 1974/75 to 2009/10.The study employed the Autoregressive Distributed Lag (ARDL) approach to test for Cointegration and error correction based Granger causality analysis for exploring the causality between the variables of interest.Empirical results show that Cointegration exists between gross domestic savings, investment and gross domestic product when each of them is taken as dependent variable.Granger causality analysis shows that there is short-run and long-run bidirectional causality between investment and gross domestic product as well as between gross domestic savings and investment.Nevertheless, no short-run causality is found between gross domestic savings and gross domestic product. 
To come to the point, it is evident from the above theoretical and empirical literature review that the direction of causality between savings, investment and economic growth is mixed.Most of these empirical studies are cross section and cross country studies and fail to use long period data.The problem with such studies is the homogeneity assumption throughout the countries, which is unlikely because of differences in social, economic and institutional conditions.This necessitates country specific studies to shed more light on the causality issue of savings and investment and the related policy issues. Moreover, most of the existing country specific empirical studies, including those conducted for the Ethiopian case; look into the relationship between savings, investment and economic growth by normally testing for bi-variate Cointegration and Granger causality separately between investment and growth, or between savings and growth which can result in specification bias.Stern (2011) claimed that multivariate Granger tests are advantageous over bi-variate Granger tests in that they can help avoid spurious correlations and can aid in testing the general validity of the causation test which can be done through adding additional variables that may be responsible for causing y or whose effects might obscure the effect of x on y.There may also be indirect channels of causation from x to y, which VAR modeling could be find out as suggested by Stern (2011).Therefore, this paper tries to fill these gaps by examining the causal relationship between savings, investment and economic growth in Ethiopia through a multivariate Granger causality framework. Model specification To explain the possible association between the savings, investment and growth based on Ethiopian data, this study has postulated the following specification based on Budha (2012) and Verma (2007), with some modifications.Budha (2012) and Verma (2007) suggest that gross domestic product is positively related with the gross domestic savings and gross domestic investment, all other things being equal.Thus, GDP is an increasing function of gross domestic savings and gross domestic investment which can be given as below: (1) Where: GDP, GDS, and GDI are gross domestic product; gross domestic savings as a percentage of GDP and gross domestic investment as a percentage of GDP respectively.Gross domestic investment is proxied by gross capital formation as a percentage of GDP.Here, gross domestic savings and gross domestic investment rather than their net are taken for the analysis.The reason, according to Feldstein and Horioka (1980), is that the accounting definitions of depreciation are very imperfect, especially when there is significant inflation; errors of measurement in the depreciation estimates would cause a bias in the estimated coefficients. 
Human capital plays a special role in a number of models of endogenous economic growth (Barro, 1991). In Romer (1990), human capital is the key input to the research sector, which generates the new products or ideas that underlie technological progress and thereby leads to faster growth. According to Lucas (1988), human capital is an important source of long-term growth because of its positive externalities; policies that enhance public and private investment in human capital therefore promote long-run economic growth. In this setting, increases in the quantity of human capital per person tend to lead to higher rates of investment in human and physical capital, and hence to higher per capita growth. Moreover, Solow (1956)'s growth model suggests that labor plays a crucial role in determining economic growth. Based on these arguments, Equation (1) is augmented by including these two variables. Accordingly, Equation (1) becomes: GDP = f(GDS, GDI, LF, HC) (2). An econometric expression of Equation (2) is: lnRGDPt = β0 + β1 lnGDSt + β2 lnGDIt + β3 lnLFt + β4 lnHCt + εt (3), where LF is the labor force measured by the share of the population aged 15-64, ln stands for the natural logarithmic transformation, and εt is the error term. HC represents human capital, proxied by total capital expenditure on health and education (Adelakun, 2011; Asghar and Aswan, 2012; Adawo, 2011; Eigbiremolen and Anaduaka, 2014; Oluwatobi and Ogunrinola, 2011). The basic premise in this approach is that an increase in workers' quality through improved education improves output. This affirms the human capital theory, which suggests that education and healthcare of workers ensure greater productivity (Adawo, 2011). The variables are transformed to their natural logarithm form to remove, or reduce considerably, any heteroskedasticity in the residuals of the estimated model. Unit root test The first step in building dynamic econometric models entails a thorough investigation of the characteristics of the individual time series variables involved. Such an analysis is essential as the properties of the individual series have to be taken into account in modeling the data generation process of a system of potentially related variables (Lutkepohl and Kratzig, 2004). When discussing stationary and non-stationary time series, the need to test for the presence of unit roots in order to avoid the problem of spurious regression should be stressed. A unit root test should be conducted in order to determine whether the individual variables are stationary or not. To this end, the Phillips-Perron (PP) test was applied since it has greater power than the standard ADF test.
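A minimal sketch of the PP unit root pre-check described above is given below, using the arch package's PhillipsPerron test on a logged series in levels and in first differences. The data file and column name are hypothetical; the trend specifications mirror the "with constant and trend" and "with constant only" options reported later.

```python
import numpy as np
import pandas as pd
from arch.unitroot import PhillipsPerron

# Hypothetical annual series in natural logs (the file and column name are illustrative).
ln_rgdp = np.log(pd.read_csv("ethiopia_macro.csv")["RGDP"])

# PP test on the level (constant and trend) and on the first difference (constant only).
pp_level = PhillipsPerron(ln_rgdp, trend="ct")
pp_diff = PhillipsPerron(ln_rgdp.diff().dropna(), trend="c")

print(pp_level.summary())   # fail to reject H0 of a unit root -> non-stationary in levels
print(pp_diff.summary())    # reject H0 -> stationary after first differencing, i.e. I(1)
```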
Cointegration Test: ARDL Bounds Testing Approach There are various techniques for conducting cointegration analysis among time-series variables. The well-known methods are the residual-based approach proposed by Engle and Granger (1987) and the maximum likelihood-based approach proposed by Johansen and Julius (1990) and Johansen (1992). This paper adopts the so-called autoregressive distributed lag (ARDL) bounds testing approach, which has increasingly been applied in recent empirical investigations. This method has certain econometric advantages compared to other cointegration procedures. First, it is applicable irrespective of the degree of integration of the variables (i.e. whether the underlying variables are purely I(0), purely I(1), or a mixture of both) and thus avoids pre-testing of the order of integration of the variables. Second, the long-run and short-run parameters of the model are estimated simultaneously, since it takes into account the error correction term in its lagged period. Third, the ARDL approach is more robust and performs better in small samples. The ARDL approach requires estimating the conditional error correction version of the ARDL model for the variables under estimation. Arising from the above, the augmented ARDL version of the model specified earlier is expressed as: ΔlnRGDPt = α0 + Σi=1..p α1i ΔlnRGDPt−i + Σi=0..q α2i ΔlnGDSt−i + Σi=0..q α3i ΔlnGDIt−i + Σi=0..q α4i ΔlnLFt−i + Σi=0..q α5i ΔlnHCt−i + θ1 lnRGDPt−1 + θ2 lnGDSt−1 + θ3 lnGDIt−1 + θ4 lnLFt−1 + θ5 lnHCt−1 + εt (4). The parameters θi, where i = 1, 2, 3, 4, 5, are the corresponding long-run multipliers, while the parameters αji are the short-run dynamic coefficients of the underlying ARDL model. From Equation (4), we first test the null hypothesis of no cointegration, H0: θ1 = θ2 = θ3 = θ4 = θ5 = 0, against the alternative using the F-test with upper and lower critical values that are calculated automatically and reported after the ARDL regression estimates. To this end, the order of the lag distribution should be selected using one of the standard information criteria such as the Akaike Information Criterion (AIC) or the Schwartz Bayesian Criterion (SBC). Pesaran and Shin (1995) argue that the SBC is preferable to other model specification criteria because it often yields more parsimonious specifications. Therefore a more parsimonious model is selected using the SBC with a maximum lag order of two (Pesaran and Shin (1997) and Narayan (2004) suggested two as the maximum order of lags in the ARDL approach for annual data series). The Error Correction Model (ECM) Estimating a dynamic equation in the levels of the variables is problematic, and differencing the variables is not a solution either, since it removes any information about the long run. The more suitable approach is to convert the dynamic model into an error correction model (ECM). It is shown that this contains information on both the short-run and long-run properties of the model, with disequilibrium as a process of adjustment to the long-run model (Harris and Sollis, 2003). The error correction (EC) representation of the ARDL model can be expressed as: ΔlnRGDPt = α0 + Σi=1..p α1i ΔlnRGDPt−i + Σi=0..q α2i ΔlnGDSt−i + Σi=0..q α3i ΔlnGDIt−i + Σi=0..q α4i ΔlnLFt−i + Σi=0..q α5i ΔlnHCt−i + λECMt−1 + εt (5), where λ is the speed of adjustment and ECMt−1 is the error correction term lagged by one period. The existence of an error-correction term among a number of cointegrated variables implies that changes in the dependent variable are a function of both the level of disequilibrium in the cointegration relationship (represented by the ECM) and the changes in the other explanatory variables. This tells us that any deviation from the long-run equilibrium will feed back on the changes in the dependent variable in order to force movement towards the long-run equilibrium (Faras and Ghali, 2009).
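For readers who wish to reproduce the bounds-testing step, recent versions of statsmodels (0.13 and later) expose an ARDL/UECM interface with a built-in Pesaran-Shin-Smith bounds test. The sketch below is illustrative only: the data file is hypothetical, the API names are assumed from recent statsmodels releases, and the lag selection mirrors the SBC choice with a maximum lag order of two described above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import UECM, ardl_select_order

# Hypothetical annual DataFrame containing the series used above.
data = pd.read_csv("ethiopia_macro.csv")
y = np.log(data["RGDP"])
X = np.log(data[["GDS", "GDI", "LF", "HC"]])

# Select a parsimonious ARDL specification by BIC/SBC with a maximum lag order of two,
# then re-cast it as an unrestricted error correction model (UECM).
sel = ardl_select_order(y, maxlag=2, exog=X, maxorder=2, trend="c", ic="bic")
uecm_res = UECM.from_ardl(sel.model).fit()

# Bounds test of no level relationship (case 3: unrestricted constant, no trend).
bounds = uecm_res.bounds_test(case=3)
print(bounds.stat)        # F-statistic to compare with the I(0)/I(1) bounds
print(bounds.crit_vals)   # tabulated critical value bounds
```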
Granger Causality Test: The TYDL Approach There are three approaches to implementing the Granger causality test depending on the time-series properties of the variables: a VAR model in the level data (VARL), a VAR model in the first-differenced data (VARD), and a vector error correction model (VECM). However, Toda and Phillips (1993, 1994) argue that VAR estimation often involves nuisance parameters, so that no satisfactory basis for mounting a statistical test of causality applies, as the F-test statistic does not have a standard distribution when the variables are integrated. The VECM approach, which involves pre-testing through unit root and cointegration tests, suffers from size distortions and can often lead to mistaken conclusions about causality, as demonstrated by a number of simulation studies (e.g., Yamada and Toda, 1998; Clarke and Mirza, 2006). As a result, this study adopted the TYDL approach of Toda and Yamamoto (1995) and Dolado and Lutkepohl (1996). This approach has many advantages over other methods of testing Granger non-causality. The TYDL approach is applicable irrespective of the integration and cointegration properties of the model. The TYDL method better controls the type I error probability than other methods based on the VARL, VARD, and VECM. The simulation results of Yamada and Toda (1998) indicate that among the three causality procedures, TYDL is the most stable approach when compared to VAR and VECM. The basic idea behind TYDL is to artificially augment the correct VAR order, k, with dmax extra lags, where dmax is the maximum likely order of integration of the series in the system. The lag-augmented VAR representation of Equation (2) is given below for the lnRGDP equation: lnRGDPt = α0 + Σi=1..(k+dmax) α1i lnRGDPt−i + Σi=1..(k+dmax) α2i lnGDSt−i + Σi=1..(k+dmax) α3i lnGDIt−i + Σi=1..(k+dmax) α4i lnLFt−i + Σi=1..(k+dmax) α5i lnHCt−i + ε1t (6), with analogous equations for lnGDS, lnGDI, lnLF and lnHC as Equations (7)-(10). Equations (6)-(10) will be estimated to determine the direction of causality between the variables under consideration. From Equation (6), Granger non-causality from, say, gross domestic savings to economic growth is examined by testing whether the coefficients on the first k lags of lnGDS are jointly zero. Granger causality is then tested using the modified Wald (MWald) test, which is theoretically very simple, as it involves estimation of a VAR model augmented in a straightforward way. Impulse response function (IRF) and variance decomposition In empirical research, it is often necessary to know the response of one variable to an impulse in another variable in a system that involves a number of further variables as well. Thus, one would like to investigate the impulse response relationship between two variables in a higher dimensional system (Lutkepohl, 2005). To this end, the generalized impulse response, which is invariant to the ordering of the variables in the VAR, has been used. To infer the degree of exogeneity of the variables beyond the sample period, the decomposition of variance, which measures the percentage of a variable's forecast error variance that occurs as the result of a shock from a variable in the system, should be considered (Narayan and Smyth, 2004). As the orthogonalized forecast error variance decompositions are not invariant to the ordering of the variables in the VAR, the generalized forecast error variance decomposition, which is invariant to the ordering of the variables in the VAR (Pesaran and Pesaran, 2009), is used in this study.
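The MWald step of the TYDL procedure described above can be sketched as follows: a level VAR equation augmented with dmax extra lags is estimated, and a Wald test is applied only to the first k lags of the candidate causal variable. For simplicity the sketch estimates the equation by OLS rather than the SUR system used later in the paper, and the column names are illustrative.

```python
import pandas as pd
import statsmodels.api as sm

def tydl_wald(df: pd.DataFrame, caused: str, cause: str, k: int = 1, dmax: int = 1):
    """MWald test that `cause` does not Granger-cause `caused` in a level VAR
    equation augmented with dmax extra lags (Toda-Yamamoto / Dolado-Lutkepohl)."""
    lags = {}
    for col in df.columns:
        for lag in range(1, k + dmax + 1):
            lags[f"{col}_L{lag}"] = df[col].shift(lag)
    X = pd.DataFrame(lags).dropna()
    y = df[caused].loc[X.index]
    res = sm.OLS(y, sm.add_constant(X)).fit()
    # Zero restrictions only on the first k lags of the candidate causal variable.
    restriction = ", ".join(f"{cause}_L{lag} = 0" for lag in range(1, k + 1))
    return res.wald_test(restriction, use_f=False)   # chi-square MWald statistic

# Example (column names are illustrative): does lnGDS Granger-cause lnRGDP?
# print(tydl_wald(log_data, caused="lnRGDP", cause="lnGDS", k=1, dmax=1))
```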
Descriptive statistics Before directly going to the econometric estimation, it is better to have a look at the descriptive statistics of the variables under consideration.This is vital because these statistics summarize the statistical properties of the series in the model such that some explanations about the behavior of the series can be offered at a glance (Table 1). Unit root testing The null hypothesis for the test (in both ADF and PP) depicts that the data series under consideration has unit root while the alternative hypothesis claims that the series is stationary. As can be seen from Table 2, PP test witnessed that GDI in natural log at level is non-stationary under both options (i.e. with constant and trend, and with constant only) since we cannot reject the null hypothesis of unit root at 1 and 5% level of significance.On the other hand, when the first difference of natural log of GDI is considered it becomes stationary at 1 and 5% level of significances (when only constant is included) and at 1% level of significance (when both constant and trend are considered).Coming to the PP test, the result reveals that the first difference of lnGDI is stationary at 1% level of significance under all specifications.However, lnGDI at level is not stationary. The PP test shows that none of the variables is stationary at level.However, taking the first difference of the variables makes them stationary since the null hypothesis of unit root is rejected at 1 and 5% level of significance. In general, the PP test from Table 2 shows that all variables are integrated of order one, I (1).Thus, the determination of cointegration relationships using the ARDL technique does not face a problem from the existence of I(2) or beyond variables in the model specified. Co-integration test and estimation of long-run relationship A two-step procedure is used in estimating the long-run relationship: an initial examination of the existence of a long-run relationship among the variables in Equation ( 2) is followed by an estimation of the short-run and long-run parameters. The results in Table 3 show that lnRGDP, lnGDS, lnGDI, lnLF and lnHC are co-integrated when lnRGDP is taken as dependent variable since F-statistic, also written as F lnRGDP (lnRGDP| lnGDS, lnGDI, lnLF, lnHC) = 9.4448 [with lag order of (1,0,0,1,0) selected by the SBC] is greater than both the 95% Upper Bound critical value of Narayan (2004) and Pesaran et al. (2001) which is 4.000 and 4.4778 respectively.However, taking each of the remaining four variables (i.e.lnGDS, lnGDI, lnLF and lnHC) as a dependent variable never establishes cointegration since the calculated F-statistic is less than the 95% Lower Bound critical value in all cases. 8The existence of single co-integrating equation, according to Pesaran et al. (2001) indicates that there is unique longrun relationship among the variables under consideration. 
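The bounds-test decision itself is simple arithmetic; the snippet below restates it using the F-statistic and the 95% upper-bound (I(1)) critical values reported above for the lnRGDP equation (the corresponding I(0) lower bounds are not restated here).

```python
# Bounds-test decision rule for the lnRGDP equation, using the values reported above.
f_stat = 9.4448
upper_bounds_95 = {"Narayan (2004)": 4.000, "Pesaran et al. (2001)": 4.4778}

for source, i1_bound in upper_bounds_95.items():
    decision = (
        "reject H0 of no cointegration"
        if f_stat > i1_bound
        else "cannot reject / inconclusive"
    )
    print(f"{source}: F = {f_stat:.4f} vs I(1) bound {i1_bound:.4f} -> {decision}")
```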
Before estimating the long-run relationship and the short-run dynamics of the model, it is important to analyze performance of the ARDL estimates through the diagnostic tests.As can be seen from the result, Rsquared is 99 percent and it is statistically significant (with P-value = 0.000) at 1% level of significance implying that the model fits well.Moreover, the model (ARDL estimates) is free from the problem of serial correlation, functional form, heteroskedasticity and normality as revealed in LM version of tests because we cannot reject the null hypothesis of each test statistic.See appendix III: A and B for details.Table 3 presents the estimated coefficients of the longrun relationship along with the diagnostic tests of the model.Based on the results given in Table 3, the longrun growth equation is given as below: (10b) The estimated coefficients show that gross domestic investment and labor force have a statistically significant positive impact on economic growth, which is in line with theoretical argument that investment and labor force positively contributes to economic growth.More specifically, the elasticity of labor indicated that a 1% increase in labor force leads to 4.2666% increase in economic growth on average, keeping other variables constant.Similarly, the long-run elasticity of gross domestic investment is 0.33434 which implies that a 1% rise in gross domestic investment results in about 0.33434 percent increase in economic growth.The result coincides with the findings of Were (2001) for the case of Kenya and Iyoha (1999) for the case of SSA countries. However, human capital (lnHC) has an insignificant effect on economic growth.This result is in line with the findings of Wondwessen (2011), Pritchett (1996), Pritchet (2001) and World Bank (1995).The reason why human capital is insignificant in explaining the Ethiopian economic growth is due to the fact that, firstly, the returns to schooling appear to differ sharply by economic activity. 9Evidences show that the estimated returns to schooling (human capital) are higher in manufactured exports (Ross and Sabot, 1995).But, Ethiopian economy which is dominated by agricultural sector contributes a lion's share both in terms of GDP, and employment does not respond this much to change in human capital, according to this argument.Secondly, technological progress in agricultural sector is low in Ethiopia.Foster and Rosenzweig (1995) argue that the returns to schooling of farmers are very low where technological progress is low.Moreover, the long-run model suggests that gross domestic savings has statistically insignificant effect on economic growth.This result is coherent with the findings of Budha (2012) for Nepal.This could be due to low level of savings which resulted from lack of continuous saving behavior in Ethiopia over time which is in turn primarily attributable to the subsistence nature of the economy where output is barely enough for consumption. The short run dynamic modelling (Error Correction Model) After estimating the long-run coefficients, we obtain the error correction representation (see Equation 4) of the ARDL model. 
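As a small worked example of how the reported long-run elasticities translate into growth effects, the snippet below uses the two statistically significant coefficients quoted above (4.2666 for labor force and 0.33434 for gross domestic investment) and compares the usual linear approximation with the exact log-log calculation.

```python
# Long-run elasticities reported above for the significant regressors.
elasticities = {"lnLF": 4.2666, "lnGDI": 0.33434}

# In a log-log model, a coefficient beta means a 1% rise in the regressor raises RGDP
# by approximately beta percent; exactly, RGDP is multiplied by 1.01**beta.
for name, beta in elasticities.items():
    exact = (1.01 ** beta - 1) * 100
    print(f"{name}: ~{beta:.2f}% (approximation) vs {exact:.2f}% (exact) rise in RGDP per 1% increase")
```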
The results of the short-run dynamic growth model and the various diagnostic tests are presented in Tables 4.About 67 percent of the variation growth is explained by explanatory variables included in the model.R-squared which is 66.9 is statistically significant at 1% level of significance implying that the model fits well since the explanatory variables are jointly significant at 1% level of significance. Based on the results given in Table 4, the short-run dynamics of growth equation is given as: (11) The result reveals that the estimated coefficients of lnLF and lnGDI are statistically significant with the positive sign.In line with the postulates of growth theories, labor and investment have a positive effect on real gross domestic product of Ethiopia in the short-run.However, gross domestic savings (lnGDS) and human capital (lnHC) do not have any impact on the economic growth of Ethiopia in the short-run.The reason is that it can take a long time before benefits from human capital arrive, as it takes time to build human capital. The estimated coefficient of the ECM t-1 is equal to -0.38 which states that departure from the long-term growth path due to a certain shock is adjusted by 38 percent over the next year, significant at the 1% level of significance and complete adjustment will take about three years.The model passes all the diagnostic tests.The diagnostic tests applied to the error correction model point out that there is no evidence of serial correlation and heteroskedasticity.Besides, the RESET test implies the correctly specified ARDL model.Skewness and kurtosis of residuals based normality test shows that the residuals are normally distributed. The stability of the regression coefficients is tested using the cumulative sum (CUSUM) and the cumulative sum of squares (CUSUMSQ) of the recursive residual test for structural stability (Brown et al., 1975).Plots of CUSUM and CUSUMSQ of the growth equation in its short-run version are given in Appendix III: C and D. As can be seen from the graphs, the regression equation seems stable given that neither the CUSUM nor the CUSUMSQ test statistics go beyond the bounds of the 5% level of significance. Granger Causality Test: Toda-Yamamoto and Dolado-Lutkepohl (TYDL) Approach As can be seen from Table 5, the optimal lag length is one.Since all variables become stationary after the first differencing, it implies that d max is also one.We then estimate a system of VAR in levels with a total of (d max +k=1+1) which is 2 lags where k is the lag length selected by information criteria.Using this information, the system of equations (i.e.Equations 5-9) is jointly estimated as a "Seemingly Unrelated Regression Equations" (SURE) 10 model. A range of formal diagnostic tests such as autocorrelation, non-normality, heteroskedasticity and stability tests are conducted for checking the adequacy of VAR model before using the model for Granger causality and related tests.The test results show that the model passed all diagnostic tests except that of non-normality 11 .However, Lutkepohl (2007) argued that normality is not a necessary condition for the validity of many of the statistical procedures related to VAR models.Thus, the VAR model is adequate and can be used for Granger causality test as well as for formulating the impulse response functions and the variance decomposition. 
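Returning briefly to the error-correction term reported above, the quoted adjustment speed of 38 percent per year can be traced out explicitly. The short sketch below shows how much of an initial disequilibrium remains after each year and the rough 1/λ rule of thumb behind the statement that complete adjustment takes about three years.

```python
# Adjustment implied by the error-correction coefficient reported above (ECM_{t-1} = -0.38):
# roughly 38% of any deviation from the long-run path is corrected each year.
speed = 0.38
remaining = 1.0
for year in range(1, 6):
    remaining *= (1 - speed)
    print(f"after year {year}: {remaining:.1%} of the initial disequilibrium remains")

# A common back-of-the-envelope horizon for "complete" adjustment is 1/speed years.
print(f"approximate full-adjustment horizon: {1 / speed:.1f} years")
```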
Following the TYDL approach, the augmented VAR of order 2 is estimated and the Wald test is performed only on the coefficients of the first lag. The five-variable VAR model was estimated using the SUR regression technique; Table 6 shows that the null hypothesis of Granger no-causality from gross domestic savings to economic growth cannot be rejected even at the 10% level of significance. However, there is evidence to support the reverse direction, even though it is weak (significant at the 10% level). That is, growth is found to Granger-cause savings. This result is consistent with the Carroll-Weil (1994) hypothesis, which states that it is growth that causes savings, but savings does not Granger-cause growth. Moreover, the result is in line with the findings of Abu (2004) for the case of Ethiopia, Khan and Shahbaz (2010) for the case of Pakistan, Sinha and Sinha (2007) for the case of Mexico, Attanasio et al. (2000) for the case of 123 countries, Abu (2010) for the case of Nigeria, and Elbadawi and Mwega (1998) for the case of Sub-Saharan Africa.

The result also reveals that the Granger causality between gross domestic savings and gross domestic investment is bi-directional. That is, gross domestic savings Granger-causes gross domestic investment and there is a feedback from gross domestic investment. This result supports the empirical finding of Budha (2012) for the case of Nepal. However, it contradicts the finding of Abu (2004) for the case of Ethiopia.

Similarly, the Granger causality between gross domestic investment and economic growth is bi-directional. The implication is that the data can be viewed either through the Keynesian/neoclassical glasses or with an accelerator model in mind. This result corroborates the empirical findings of Tang et al. (2008) for the case of China, Alfa and Garba (2012) for the case of Nigeria, and Elbadawi and Mwega (1998) for the case of Sub-Saharan Africa.

Labor force precedes and Granger-causes both economic growth and gross domestic investment. Moreover, it Granger-causes gross domestic savings, suggesting that economic growth increases the income of workers relative to that of non-workers (children and retirees); hence, workers' saving could rise. There is no Granger causality between human capital and the remaining variables, except for gross domestic investment, for which the Granger causality runs from gross domestic investment to human capital.

Response Functions and Variance Decompositions

Table 7 illustrates the estimated generalized impulse response functions of the variable lnRGDP for ten years. In response to a one standard deviation disturbance in current economic growth (Table 7), future economic growth increases by 4.8 percent in the first year and by 3.59 percent in the fifth year, and gradually declines to 3 percent in the 10th year.
A one standard deviation disturbance originating from economic growth results in an approximately 4.8 percent increase in gross domestic investment in the first period. It then declines continuously to about 3.65 percent in the third period, starts increasing after the third period and reaches about 6.5 percent in the 10th period, implying that the impact of growth on gross domestic investment is permanent. A one standard deviation disturbance originating from economic growth results in roughly a 7.3 percent increase in gross domestic savings in the first period. However, this figure declines to about 2.7 percent in the third period but starts rising afterwards. Accordingly, it reaches about 4.8 percent in the 10th period, implying that the impact of economic growth on gross domestic savings does not die out. The impact of economic growth on the labor force is very small (about 0.7 percent in the 1st period, declining to 0.4 percent in the 10th period). This shows that the impact of economic growth on the labor force is a short-lived phenomenon.

The generalized impulse response output for lnGDS and lnGDI is not presented here to save space, but it can be obtained upon request. The result shows that a one standard deviation shock arising from gross domestic investment results in about a 12.3 percent rise in gross domestic investment itself in the first period, which decreases to about 9.4 percent in the 6th period and starts increasing afterwards. The response of the natural log of gross domestic savings to a one SE shock in the natural log of gross domestic investment is relatively stronger than that of economic growth: it leads to an approximately 10.5 percent increase in the first period, while economic growth increases only by about 2.5 percent during the same period. The impact of gross domestic investment on economic growth and gross domestic savings never dies out, as the impact increases to 3.9 percent in the 10th period in the case of economic growth, and the impact on gross domestic savings follows a rising pattern from the 4th period onwards. The implication is that the impact (due to a shock) of gross domestic investment on economic growth and gross domestic savings is a permanent one.

The result for the generalized impulse responses to a one SE shock in the equation for lnGDS shows that gross domestic savings shocks have larger and permanent effects on gross domestic savings itself, which fluctuate over the whole period, followed by their impact on gross domestic investment. On the other hand, the impulse responses of economic growth, human capital and the labor force to a one SE shock in gross domestic savings are very small.

While impulse response functions trace the effects of a shock to one endogenous variable on the other variables in the VAR, variance decomposition separates the variation in an endogenous variable into the component shocks to the VAR. However, it must be noted that, unlike the orthogonalized forecast error variance decomposition, the total variance in the generalized forecast error variance decomposition does not sum to 100 percent, since the covariance between the original shocks is non-zero, as suggested by Tang and Lean (2009). Tables 8-10 present the generalized variance decompositions of the variables of interest (i.e., lnRGDP, lnGDS and lnGDI) for a ten-year time horizon.
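The generalized impulse responses of Table 7 and the generalized variance decompositions of Tables 8-10 follow the Pesaran-Shin formulas, which can be computed directly from the moving-average representation and the residual covariance of the estimated VAR. The sketch below, with an assumed data frame `df` holding the five series in levels and a VAR(2) mirroring the augmented lag order, is only meant to illustrate the computation.

```python
# Generalized impulse responses (GIRF) and generalized FEVD (Pesaran-Shin)
# computed from a levels VAR(2), as an illustration of Tables 7-10.
import numpy as np
from statsmodels.tsa.api import VAR

def generalized_irf_fevd(df, lags=2, horizon=10):
    res = VAR(df).fit(lags)
    Sigma = np.asarray(res.sigma_u)        # residual covariance matrix
    Phi = res.ma_rep(maxn=horizon)         # MA coefficient matrices A_0 .. A_h
    k = Sigma.shape[0]
    sd = np.sqrt(np.diag(Sigma))

    # GIRF: response of all variables to a one-s.d. shock in variable j, horizons 0..h
    girf = np.array([[Phi[h] @ Sigma[:, j] / sd[j] for j in range(k)]
                     for h in range(horizon + 1)])   # shape: (horizon+1, shock j, response i)

    # Generalized FEVD: share of the h-step forecast-error variance of i attributed to j
    num = np.zeros((horizon + 1, k, k))
    den = np.zeros((horizon + 1, k))
    for h in range(horizon + 1):
        for i in range(k):
            den[h, i] = (den[h - 1, i] if h > 0 else 0.0) + Phi[h][i] @ Sigma @ Phi[h][i]
            for j in range(k):
                num[h, i, j] = ((num[h - 1, i, j] if h > 0 else 0.0)
                                + (Phi[h][i] @ Sigma[:, j]) ** 2 / Sigma[j, j])
    gfevd = num / den[:, :, None]          # rows need not sum to one: shocks are correlated
    return girf, gfevd, res
```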
The results in Table 8 point out that the disturbance arising from lnRGDP itself imposes the greatest variability on future lnRGDP: it contributes up to 78.26 percent of the variability one year ahead and approximately 50 percent four periods ahead. This result indicates that the current change in economic growth heavily determines future changes in economic growth. lnLF dominates the other three variables (i.e., lnGDS, lnGDI and lnHC) in influencing economic growth. It accounts for approximately 46.3 and 41.8 percent of the total variance in economic growth two years and three years ahead, respectively.

The third largest source of variation in economic growth appears to be lnGDI, which accounts for approximately 15.6 percent of the variance in lnRGDP one year ahead, increasing to 35.3 percent ten years ahead. The remaining two variables (i.e., lnGDS and lnHC) account for a very small percentage of the variation in lnRGDP. This result is in line with the result obtained from the TYDL approach to Granger causality, namely that the natural logarithm of the labor force, lnLF, and the natural logarithm of gross domestic investment, lnGDI, cause economic growth. Table 9 presents the generalized forecast error variance decomposition for the variable lnGDI. The result shows that the largest source of variation in the forecast error of lnGDI is its own innovations. In the second period, for example, about 82% of the variation in lnGDI is explained by the innovations of lnGDI itself, which gradually declines to about 54% in the 10th period.

lnRGDP is the second largest source of variation in lnGDI, followed by lnGDS, suggesting that both gross domestic savings and economic growth Granger-cause gross domestic investment, which corroborates the result obtained from the TYDL approach.

Table 10 shows that the largest variation in the forecast error of gross domestic savings, lnGDS, arises from its own innovations, which account for about 80.6 percent in the first period and 50 percent even in the 10th period; gross domestic investment (i.e., lnGDI), which is the second largest source of variation in lnGDS, contributes 37.9 and 35.4 percent in the second and seventh periods, respectively. The variation in the forecast error of lnGDS due to lnGDI is relatively strong in that it contributes about 35.2 percent of the variation in lnGDS even in the 10th period. lnRGDP is the third largest source of variation in lnGDS, contributing more than 10 percent of the forecast error variance of lnGDS. The results tend to confirm the conclusion of the within-sample TYDL causal analysis, namely that lnRGDP and lnGDI Granger-cause lnGDS, even though the Granger causality from economic growth to gross domestic savings is relatively weak.

CONCLUSION AND POLICY IMPLICATIONS

As determinants of growth, the long-run coefficients of the natural logarithms of gross domestic investment and the labor force are both positive and statistically significant at the 1% level of significance, implying that these two variables have a significant and positive impact on growth in the long run. However, the long-run coefficients of gross domestic savings and human capital are both statistically insignificant.

Besides, the ARDL-based short-run dynamic model (Error Correction Model) for growth shows that labor and investment have a statistically significant positive effect on growth in the short run. Furthermore, the stability of the estimated parameters of both the short-run and long-run relationships is supported by the CUSUM and CUSUMSQ stability tests.
The direction of the causal relationships among gross domestic savings, gross domestic investment and economic growth, based on Granger causality tests in the TYDL framework, suggests that Granger causality runs from savings to investment and then to economic growth, which is in line with the conventional wisdom. Additionally, Granger causality runs from economic growth to investment and then to savings. This implies that there is a two-way causal relationship between gross domestic savings and gross domestic investment, and between gross domestic investment and economic growth. However, the Granger causality running from investment to savings and economic growth is the strongest, as suggested by the impulse responses and variance decompositions. The result also shows that there is a unidirectional Granger causality running from economic growth to gross domestic savings, which is consistent with the Carroll-Weil hypothesis. Labor Granger-causes savings, investment and economic growth. However, human capital does not Granger-cause any of the variables of interest; conversely, only investment Granger-causes human capital.

The most important mechanism for spurring growth is investment, since it supports both savings and economic growth. Thus, the country is required to create an encouraging environment, such as adequate access to credit, in order to stimulate domestic investment. Therefore, the government should reduce the lending rate through monetary policy in order to boost investment and thereby bring about high and sustained economic growth.

Savings should be increased for two main reasons. Firstly, investment has to be financed one way or the other, and therefore savings should be considered. Ensuring an adequate level of gross domestic savings is vital in closing the gap between savings and investment and in reducing an extreme dependence on foreign capital, which can be a risk due to its volatility. Secondly, savings stimulate investment and thereby economic growth, and this higher growth reinforces savings and investment. Therefore, the government is required to create a sound and fertile environment in order to foster domestic saving that is adequate to finance investment and to realize sustainable economic growth. To do this, the government should:

1. Create a stable and predictable economic atmosphere that honors savers for thrift and decreases the fear that inflation or a collapse of the financial system will lead to the confiscation of their savings. Specifically, the government should stabilize inflation, strengthen domestic financial institutions, provide savings incentives such as tax breaks and increase the role of market signals to create a competitive environment in the sector, i.e., eliminate financial repression.
2. Make a strong improvement in the fiscal balance, particularly the revenue balance, to render public savings positive. Moreover, the government should develop long-term savings instruments to mobilize household savings, which in turn enhances public savings.
3. Expand microfinance institutions and banks to far-flung areas of the country to mobilize domestic savings from small depositors.
4. Increase the deposit rate of the commercial banks through the monetary policy at the disposal of the Central Bank.

This paper used annual time series data ranging from 1969/70 to 2010/11, obtained from different publications of the National Bank of Ethiopia (NBE), the Ministry of Finance and Economic Development (MoFED), the statistical database of the Ethiopian Economic Association (EEA), the African Development Indicators (ADI) and the World Bank CD-ROM.
Table notes and captions (tables not reproduced in this extract):
Notes: p is the true lag length; ε1t, ε2t and ε3t are the residuals of the model; ln denotes the natural logarithm; Δ denotes the first difference. Figures in parentheses are p-values. ** and * indicate that the coefficients are significant at the 1% and 5% levels of significance, respectively. A: Lagrange multiplier test of residual serial correlation; B: Ramsey's RESET test using the square of the fitted values; C: normality test based on the skewness and kurtosis of the residuals; D: heteroskedasticity test based on the regression of squared residuals on squared fitted values.
Table 1. Descriptive statistics of the variables in the model (EViews 6 output).
Table 2. Result of the PP unit root test at level (EViews 6 output), reported with intercept and with intercept and trend (test statistic, 1% CV, 5% CV, p-value, decision); CV denotes critical value, and a p-value < 5% indicates that the variable is stationary.
Table 3. Estimated long-run coefficients using the ARDL approach (Microfit 5.2 output); ARDL(1,1,0,0,0) selected based on the Schwarz Bayesian Criterion; dependent variable is lnRGDP.
Table 4. Short-run dynamics result for the selected ARDL model.
Table 5. VAR lag order selection criteria.
Table 6. Estimates of long-run Granger causality based on the TYDL approach.
Table 7. Generalized impulse responses to a one SE shock in the equation for lnRGDP.
Table 8. Generalized forecast error variance decomposition for variable lnRGDP.
Table 9. Generalized forecast error variance decomposition for variable lnGDI.
Table 10. Generalized forecast error variance decomposition for variable lnGDS.
2018-12-06T03:25:04.061Z
2014-10-31T00:00:00.000
{ "year": 2014, "sha1": "55da8dd4c26f97a09916dddb9bf7ed44a1197d60", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/JEIF/article-full-text-pdf/BBABCCB48470.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "55da8dd4c26f97a09916dddb9bf7ed44a1197d60", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
19185608
pes2o/s2orc
v3-fos-license
Mapping nurses’ activities in surgical hospital wards: A time study Background Balancing the number of nursing staff in relation to the number of patients is important for hospitals to remain efficient and optimizing the use of resources. One way to do this is to work with a workload management method. Many workload management methods use a time study to determine how nurses spend their time and to relate this to patient characteristics in order to predict nurse workload. Objective In our study, we aim to determine how nurses spend their working day and we will attempt to explain differences between specialized surgical wards. Setting The research took place in an academic hospital in the Netherlands. Six surgical wards were included, capacity 15 to 30 beds. Method We have used a work sampling methodology where trained observers registered activities of nurses and patient details every ten minutes during the day shift for a time period of three weeks. Results The work sampling showed that nurses spend between 40.1% and 55.8% of their time on direct patient care. In addition to this, nurses spend between 11.0% and 14.1% on collective patient care. In total, between 52.1% and 68% of time spent on tasks is directly patient related. We found significant differences between wards for 10 of the 21 activity groups. We also found that nurses spend on average 31% with the patient (bedside), which is lower than in another study (37%). However, we noticed a difference between departments. For regular surgical departments in our study this was on average 34% and for two departments that have additional responsibilities in training and education of nursing students, this was on average 25%. Conclusions We found a relatively low percentage of time spent on direct plus indirect care, and a lower percentage of time spent with the patient. We suspect that this is due to the academic setting of the study; in our hospital, there are more tasks related to education than in hospitals in other study settings. We also found differences between the wards in our study, which are mostly explained by differences in the patient mix, nurse staffing (proportion of nursing students), type of surgery and region of the body where the surgery was performed. However, we could not explain all differences. We made a first attempt in identifying and explaining differences in nurses’ activities between wards, however this domain needs more research in order to better explain the differences. Introduction Balancing the amount of nursing staff in relation to the amount of patients is important for hospitals to remain efficient [1]. Hospitals intend to deliver good quality of care and also work efficiently. To ensure this, there needs to be a good fit between patient needs and nursing staff on hospital wards. The amount of work that nurses do, their workload, needs to be well balanced, in order to prevent extra costs for overstaffing a ward but also to prevent deteriorating patient outcomes and increased stress or burnout in nurses by understaffing wards. There is a direct relation between nurses' workload and patient outcome [2][3][4][5] and workload is also a predictor for burnout [6,7]. Bakker found a relation between job demands such as workload and performance [8], and Toh's study showed a positive bi-directional relation between the nursing shortage and oncology nurses' job dissatisfaction, stress and burnout [9]. 
Also, in the near future healthcare labor shortages are expected to occur [10], so retaining nursing staff will be a challenge. Workload is related to intention to leave [11,12] and besides this, training of new staff is also costly. Many studies have identified factors that predict workload of nurses. There is evidence that these nurse-patient ratios or nursing hours per patient day (NHPPD) do not accurately predict workload of nurses [13], since these do not take into account the different needs between patients nor the differences in experience and education level of nursing staff. Twigg argues that relying on expert opinion in setting standards for workload, in their study a standard NHPPD per ward, is not optimal and recommends using a standardized patient acuity measurement [14]. In Belgium, hospitals are required to register the Belgium Nursing Minimum Data Set (B-NMDS) in order to benchmark hospitals on several dimensions, among which workload. Van den Heede showed that 70% of variation in nursing staff per unit was predicted by the B-NMDS item hospital type with the covariates nursing intensity and service type [15]. They recommended that instead of working with NHPPD, a NHPPD corrected for nursing intensity is a better measure. However, Sermeus stated in a 2008 study [16] that the B-NMDS nursing intensity did not necessarily give an indication of required nursing time. Another drawback of the B-NMDS is the extensive amount of registration required by the hospitals [17]. The RAFAELA™ patient classification system [18] is an instrument to assess optimum levels of nursing intensity. We consider this a form of workload management. The RAFAELA™ system consists of the Oulu Patient Classification instrument [19], a system that records daily nursing resources, and the Professional Assessment of Optimal Nursing Care Intensity Level questionnaire. The three are combined to measure nursing intensity. RAFAELA™ measures only the patient-related workload of nurses and does not include other tasks [20]. This method is widely used in Finland; while promising, it is not used for prospective workload management but only for assessments of workload in the past. For optimal versatility of nursing staff, prospective insight is of great value. In a previous publication [21], we describe the development of a framework for a new workload management method. The first step in this method is to determine patient characteristics that are relevant to nurses' workload. The second step in this method is to gain insight in what nurses' activities are on a day to day basis. Quite some research has already been done in this area. In 2000, Rasmussen [22] selected examples of work sampling studies of nurses' activities done in the 10 year time span from 1986 until 1996,. This overview showed results of studies in several settings (army hospital, regular hospital, different specialties including pediatrics and critical care), clustering activities in the categories Direct Care, Indirect Care (some studies have one category for the two), Unit-Related tasks and Personal Time (Prescott,[23]). Duffield [24] performed a work sampling study of nurses and also worked with the same four different categories, as also used by Urden [25]. Direct care is defined as patient-related activities performed in the presence of the patient and indirect care is defined as patientrelated activities away from the patient. It is assumed that patient-related activities can always be attributed to a single patient. 
In 2008, Hendrich [26] performed a time-and motion study of nurses' activities in 36 hospitals. Their goals were "to reveal drivers of inefficiency in how nurses spend their time and to identify opportunities to improve efficiency through changes to unit design and/or organization" [26]. With these goals in mind, nurses' time was divided into 4 categories of activities: nursing practice, unit-related functions, nonclinical activities, and waste. These 4 categories were in turn divided in a total of 12 subcategories. Unit-related functions were not divided in sub-categories of activities. However, unit-related functions also included patient related activities, such as transporting patients between wards. The subcategories were not specified, so subcategories of category Waste such as Looking/retrieving, Waiting and Delivering are difficult to interpret. In 2011, Westbrook also performed a time and motion study [27], using the Work Observation Method by Activity Timing method. They focused on ten work tasks, amongst which direct care, indirect care and ward-related activities and social activities. These are partly the same as the 4 categories mentioned by Duffield, however some specific activities were classified under a separate work task, for example the engagement of nurses with other healthcare providers, supervision, documentation and medication activities. Activities within work tasks were not registered separately, the study registered activities on work task level. In 1988, in the Netherlands, the Dutch Hospital Institute (NZi developed a workload management method using an activities list consisting of 23 activity groups which are clustered into categories Direct patient Care, Collective patient Care, Unit-Related tasks and Other time (which includes personal time and official breaks). These activity categories are quite similar to Prescott's, but the clustering of activities under Direct patient care and Collective patient care is different. This was done with the purpose that all activities under Direct patient care can be linked to one specific patient. Activities under Collective patient care are often harder to attribute to one specific patient, for example collective preparation of medication or collective handover. When performing a time study with observation rounds done every ten minutes, each time an observation is done, 10 minutes of care time is attributed to the observed activity and also to the related patient. We believe that for some patient related activities like handover and collective preparation of medication, this would overestimate care time for certain patients and under estimate care time for other patients, because the time spent per patient is usually only a minute or two in these activities. In our study to develop a workload management method, we are interested in relating nurse care time to patient characteristics, so we are not primarily interested in where an activity took place or with whom, but if the activity and the related care time can be accurately related to a specific patient or not. We chose to use the NZi method as a starting point, because it fits this purpose. Also, the NZi list contains 23 activity groups, which is more than in other studies and helps us better evaluate and understand differences in working processes between nursing wards [28]. Lastly, several years ago, a small scale time study using the NZi method was performed in the same wards that are involved in the current study. 
Ward management and most nurses were still familiar with this list. The current article describes a time study on activities of nurses, which is an important step in developing a new workload management method. We will describe how nurses spend their working day and the more detailed level of data collection will help understand differences between wards. Background Performing a time study of nurses' activities is the second of several steps in developing a workload management method for staff nurses. Fig 1 describes these steps. The full study protocol for developing this workload management method is described in our 2016 publication [21]. Scope The research took place in an academic hospital in the Netherlands. Six surgical wards were included, with 2 wards with 15 beds and 4 wards with 30 beds. Ward specialties were orthopedic/trauma surgery, vascular surgery, surgical oncology, otolaryngology, maxillofacial surgery and ophthalmology and urological surgery. The time study was done during the day shift. We chose to focus on the day shift, because this is the shift when the most nursing staff is required and most clinical nursing activities are performed. Weekends were excluded because task mix and staffing is very different in weekends and cannot be compared to day shifts of regular weekdays. We prioritized analyzing day shifts because controlling workload there will affect the most staff. In a later phase we plan to translate results to other shifts. Student nurses were included in the study; teamleaders and ward managers were excluded because they are not involved in direct nor indirect patient care. Activities of other types of ward staff (doctors, assistants, cleaning staff, etcetera) were not considered in this study. In our study, 87% of the nurses were female nurses, 62% were registered nurses and 38% nursing students. As shown in Table 1, 82% of the nurses is under 40 years of age. Of the registered nurses, 29% had less than 5 years' work experience in the study hospital, 13% had 5-10 years' experience and the others more than 10 years. Work sampling To accurately map nurses' activities, a work sampling methodology was used. Work sampling is a useful and efficient methodology to explore work-related activities [29]. In work sampling, activities of subjects are observed or registered every so many minutes, resulting in a sample of the activities of nurses. This way, we gain insight into the way nurses spend their working hours, for example to what extent their work is directly patient-related or which percentage of their working time is spent on administrative duties. Pelletier and Duffield [29] suggest working with trained observers as an alternative to selfreporting, because the latter can be prone to bias. This is only possible when the staff to be observed works in an area that can be surveyed by the observer, and the observer can determine the activities relatively easily. For example, if work sampling is done on staff that is moving great distances or is performing mostly cognitive tasks, then self-reporting can be better. They also advocate the use of handheld computers to make registration faster and more accurate. Sittig [30] also gives important tips for designing a work sampling study in healthcare: involve the nurses and nurse management in the study, determine relevant activities to register and make foolproof definitions, identify the right observers and train them well, and perform pilot samples to test the setup. 
We have followed up on these suggestions; in the next paragraphs we will elaborate on this.

Activities. Nurses perform many different activities in a day shift. Registering this multitude of activities separately is virtually impossible. We therefore first identified groups of activities that we wanted to register during the work sampling. The basis for this was the list of activity groups that is part of the workload management method developed by NZi (Dutch Hospital Institute, [31]). We used the Delphi method to evaluate the activity groups: the Delphi method is a structured form of communication in order to acquire an expert opinion on a certain topic [32]. In two or more rounds questionnaires are answered. In between rounds a facilitator gathers responses and provides an anonymized summary of the experts' opinions, including the motivation of the experts. Experts can revise their opinion, based on the judgments of the other experts, working towards an end result with a good level of consensus. Ward management selected one staff nurse of their ward to be the expert in the Delphi group. All selected nurses were experienced nurses in the specialty of the ward they work in. The group was asked to comment on the NZi list of activity groups and corresponding activities. The activity groups were clustered into 4 categories: direct patient care, collective patient care, general tasks and other tasks. Direct patient care was defined as care time that can be directly related to one specific patient. This includes assistance with bathing or eating, handing out medication, changing bed linen, wound care, communication with the patient or family, etcetera. Collective patient care was defined as tasks that are patient-related, but are difficult to attribute to one specific patient. This includes general preparation of medication, patient handover, bringing a collection of samples to the laboratory, etcetera. General tasks include education/supervision, meetings, organization of work (planning), administrative duties and domestic duties. Other tasks include lunch and coffee breaks and personal time. In one-on-one interviews with the lead researcher, members of the group commented on the list. Based on the group's comments the list was adjusted: activities were added, group labels were adjusted and new groups were defined. Results were shared and the Delphi process was repeated, which resulted in a new, definitive activity group list (see Table 2). Note that this list shows only the activity groups and categories. Details of which activities are placed in the activity groups are not shown (data available on request).

Observer selection and training. Observers were selected and trained uniformly in how to register nurses' activities. Observers were either nurses from involved wards (observing on wards other than their own) or medical students. We preferred to work with nurses as observers where possible, because they are motivated to register activities accurately and are familiar with nurses' activities, and therefore less likely to misinterpret or make mistakes. In addition, nurses learn about working procedures on other wards, which broadens their horizon and will help exchange ideas and increase understanding among wards. However, it was not possible to schedule sufficient nurses to cover all observer duties and we hired medical students where necessary. There were 49 observers in the study, of which 18 were medical students.
Observer training consisted of a theoretical part (purpose of research, explanation of work sampling method, importance of accurate observation and registration) and a practical part (how to use the handheld computer, trial observations, examples of pitfalls). Attendance was mandatory for all observers and the training included a practical test under time pressure, and using all equipment that was to be used at the actual time study. Working instructions with all the work sampling study information were handed out to all observers. Observers were trained to always confirm with the nurses they were observing which activity the nurses were doing and -when relevant-for which patient. This procedure was introduced to prevent observers from making wrong assumptions. Registration of observations. During the work sampling study all nurses were observed approximately every ten minutes. Observers registered their observations with a handheld computer (Symbol PDT-3100 barcode scanner, with SCO software), by scanning predefined barcodes. Time intervals between observations were automatically randomized, with an average of ten minutes. Observers were asked to register three things each time they made an observation: the name of the nurse; the activity the nurse was performing; and, when the activity was patient-related, patient details. There were three barcode sheets available to the observers. Each sheet showed all possible entries for one variable, in a logical order. One sheet showed barcodes for all names of the nurses, another showed all barcodes for the predefined activity groups and the last sheet showed barcodes for all patients on the ward. Barcodes were positioned in such a way on the sheet that chances of accidental miscoding were minimized. By registering which activities the nurses were doing every ten minutes, a random sample of nurses' activities in day shifts was taken. Test work sampling. Before doing the actual work sampling study, a test sampling study was performed. The purpose of this test study was to: • Test the handheld computer and the barcode sheets: do they work properly and are they easy to use? • Test the activities list: is it complete and easy to interpret? • Test the workload of the observers: how many nurses can be observed by one observer and how long can an observer work uninterrupted? We actively looked for flaws in the registration process so we could prevent registration errors in the actual work sampling study. Four observers received a uniform, standardized training and spent a minimum of three hours observing nurses on different wards. After this test study, the observers were interviewed. The equipment worked well and turned out to be reliable. Based on the observers' experiences, choices were made regarding procedures: • We changed the order and position of the barcodes to make them easier to locate and scan properly. • A notepad was added to the observer's equipment to note mistakes that could not be corrected on the spot. • We decided to work with one observer per ward, with a maximum of 4 working hours in one observation shift. Work sampling period. Next, the actual work sampling study was planned. A representative time period was selected, in which workload was expected to be average: outside holiday seasons and periods with especially high or low occupancy rates (for example, due to reduction of operating room capacity) and periods with enhanced or reduced nursing capacity (for example, due to planned education). 
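As a rough illustration of the sampling mechanics described above (observation rounds at randomized intervals with a ten-minute mean, and time use estimated from the resulting counts), the sketch below generates an observation schedule and computes a proportion with a simple binomial confidence interval. The shift length, interval range and example counts are assumptions, not study data.

```python
# Illustrative work-sampling helpers: randomized observation times with a
# ten-minute mean interval, and a proportion estimate with a binomial CI.
import math
import random

def observation_schedule(start_min=0, end_min=510, mean_interval=10.0, seed=1):
    """Observation times in minutes into the day shift (07:30-16:00 is about 510 min)."""
    rng = random.Random(seed)
    t, times = start_min, []
    while t <= end_min:
        times.append(round(t, 1))
        # random interval between 5 and 15 minutes, averaging ten minutes
        t += rng.uniform(0.5 * mean_interval, 1.5 * mean_interval)
    return times

def proportion_with_ci(hits, total, z=1.96):
    """Share of observations coded to one activity group, with a normal-approx. 95% CI."""
    p = hits / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, (p - half_width, p + half_width)

# Example: 2,150 of 5,400 observations coded as direct patient care
# print(proportion_with_ci(2150, 5400))  # -> about 0.398 +/- 0.013
```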
Also, the number of observations that were to be made in the work sampling study needed to be sufficiently large. For practical reasons, there was a limit to the amount of days that we could observe our staff. It is costly to arrange observers, and nurses will get tired of being observed. Ward management was asked to advise on the maximum amount of time study days that they felt was reasonable, and they advised a maximum of fifteen consecutive working days. When sampling nurses' activities every ten minutes, this would generate approximately 54,000 observations (= 15 study days x 6 wards x 12 nurses per ward x 50 observation rounds per day shift) of nurses' activities. The actual number of observations of nurses' activities was 54.663, which were aggregated to 290 observations of percentage of time spent on activities per nurse in a ward. Work sampling study. All nurses on duty in the day shift of six wards were observed during the selected observation period of 15 working days (Monday through Friday). Trained observers registered activities approximately every 10 minutes in the day shift, starting at 07.30 hours and finishing at 16.00 hours. Exact observation moments and start and finish times were dependent on the random time interval generator of the handheld computer. The standard training for observers included a procedure for correction of mistakes. If an observer made a mistake that could not be corrected on the spot, he or she would note details on the time, involved nurse and nature of the mistake on a note sheet on the clipboard. These notes were evaluated by the lead researcher, and checked by another researcher that was not directly involved in performing the work sampling. When corrections were approved by both researchers, they were corrected in the data. All corrections were logged uniformly. Interrater agreement. To test whether two observers registered the same activity in the same way, an interrater agreement was determined. Regular tests for interrater agreement, such as Cohen's Kappa or intraclass correlation cannot be applied here, because these assume that only one variable is observed and also that this variable is classified in a limited number of categories [33]. In our research, we have three variables (nurse/activity/patient), all with many possible categories: up to 15 names of nurses, 25 activities and up to 30 patients. For the study on differences of nurses' activities between wards it is important that at least two variables (nurse name and activity group) were registered correctly. We decided to calculate an exact agreement percentage between the raters on these two variables. To test the reliability of the registrations, an interrater agreement study was planned. The interrater agreement study was planned during the 3-week work sampling period. For this study, a second observer temporarily joined the scheduled observer. Both observers had the same training and both had already done at least one observation shift during the work sampling period. The agreement study was done twice, on two different wards with two different pairs of observers. One study was planned in the morning and one in the afternoon of the day shift. The observers walked their rounds in pairs and were instructed not to speak to each other or share registration results. On every observation round, one of the observers asked the nurses they observed which activity they were doing and, when applicable, for which patient. 
Both observers independently registered results in their own handheld computer. The interrater agreement was 88.4% exact agreement on 242 observations. We consider this an acceptable agreement percentage. The probability of an agreement occurring by chance is low, because there are so many selections possible for registration of nurse (15) and activity (25). Analysis We analyzed our work sampling data in two steps: 1. Descriptive analysis. This analysis gives a general impression of the way nurses spend their time, for the different wards that participated in the study. The descriptive analysis gives the mean percentage of time spent by nurses on the activity groups. However, this analysis does not give any information on variation within a department on the different activity groups, nor does it indicate whether observed differences could have been due to chance. 2. Compositional analysis. We also studied whether there were statistically significant differences between wards on the time their nurses spent on the various activities. The times spent on different activities are correlated: if one increases, another must decrease, since the total always amounts to 100%. Compositional analysis is an appropriate method for such data, since it allows for correlated outcome variables that sum to a fixed total [34]. We first analyzed differences between wards on the activity categories: Direct Patient Care (DPC), Collective Patient Care (CPC), General Tasks (GT) and Other Tasks (OT). Compositional analysis dictates that one variable needs to be chosen as a reference variable, to compare the others against. The category OT was expected to be the most stable category, because the activity groups (duration of coffee and lunch breaks) that fall into this category are mostly standardized; therefore we chose OT as a reference category. The other three variables were compared to OT as follows: we calculated 3 ratios for each nurse in the study: DPC/OT, CPC/ OT and GT/OT. Since ratios are difficult to handle mathematically and statistically [34], we converted the ratios to logratios. For each nurse in the study we defined three correlated logratios. The next 3 steps in the analysis were as follows: 1. MANOVA on activity categories. Since we had three correlated observations per subject (nurse), we used multivariate analysis of variance (MANOVA) to find significant differences on one or more of these variables between wards [34]. We used a significance threshold of 0.05 for the MANOVA. 2. ANOVA on activity categories. If the MANOVA indicated significant differences between wards, we wished to discover for which activity categories these differences materialize. This was done by an analysis of variance (ANOVA) for each of the three logratios separately. Again, a significance threshold of 0.05 was used. 3. Post-hoc between wards on activity categories. If the ANOVA indicated differences between wards on an activity category, then the next step was to make pairwise comparisons on each combination of wards for this activity category using a Tukey correction. This post-hoc test will indicate which wards differ from each other for time spent on a particular activity category. After the analysis on activity categories, we performed a more detailed analysis in which we compared 21 separate activity groups to the reference category OT (the sum of three activity groups in category "Other Tasks"). 
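The analysis steps just described can be sketched as follows: per-nurse time shares are converted to log-ratios against the reference category OT, a MANOVA tests for overall ward differences, follow-up ANOVAs test each log-ratio separately, and Tukey's HSD provides the pairwise ward comparisons. The study itself used the R package "compositions"; the Python sketch below, with a hypothetical data frame `obs`, is only meant to illustrate the logic.

```python
# Sketch of the compositional analysis: log-ratios against the reference
# category OT, MANOVA over wards, per-logratio ANOVA and Tukey post-hoc tests.
# Assumes a DataFrame `obs` with one row per nurse and columns
# 'ward', 'DPC', 'CPC', 'GT', 'OT' holding percentages of observed time.
import numpy as np
from statsmodels.multivariate.manova import MANOVA
from statsmodels.formula.api import ols
import statsmodels.api as sm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compositional_tests(obs):
    d = obs.copy()
    for cat in ("DPC", "CPC", "GT"):
        d[f"lr_{cat}"] = np.log(d[cat] / d["OT"])   # log-ratio vs. reference category OT

    # Step 1: MANOVA on the three correlated log-ratios across wards
    manova = MANOVA.from_formula("lr_DPC + lr_CPC + lr_GT ~ C(ward)", data=d)
    print(manova.mv_test())

    # Steps 2 and 3: per-category ANOVA and Tukey pairwise ward comparisons
    for cat in ("DPC", "CPC", "GT"):
        fit = ols(f"lr_{cat} ~ C(ward)", data=d).fit()
        print(cat, sm.stats.anova_lm(fit, typ=2))
        print(pairwise_tukeyhsd(d[f"lr_{cat}"], d["ward"]))
```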
Again, we first used a MANOVA on all activity groups and, if significant differences were found an ANOVA was performed separately for each activity group. Tukey post-hoc tests were carried out for activity categories for which the ANOVA indicated significant differences between the wards. Due to the large number of comparisons being made, we lowered the significance threshold to 0.01 for this analysis. The descriptive analysis was performed in Excel and the compositional analysis using the package "compositions" in R version 3.3.2 [35]. For help in interpretation, we discussed the results of the compositional analysis with the nurse managers. Ethical considerations The study guaranteed the privacy of involved staff. There was no patient data recorded besides patient registration number. Only the lead researcher (lead author of this manuscript) has access to the master data and coded the data. Data have been processed in such a way that nothing can be traced back to specific persons. The study protocol was submitted to the medical ethical review board of the University Medical Center Utrecht and was approved, protocol number 14-165/C. Descriptive analysis The mean percentage of time nurses spent on the 24 activity groups is shown in Table 3: Compositional analysis We will show results for the activity categories and activity groups in separate paragraphs. Compositional analysis activity categories. The MANOVA on the activity categories indicated significant differences between wards (p <0.001) and the ANOVAs detected significant differences between wards on all three categories. The post-hoc tests showed significant differences for many different combinations of wards, see Table 4. Ward 1 differed from all other wards on time spent on Direct Patient Care (DPC) in proportion to Other Time (OT). Since OT was relatively stable across wards, we can conclude that nurses on ward 1 spent significantly more time on DPC than nurses on other wards. The descriptive analysis suggested that ward 1 spent less time on General Tasks (GT) than the other wards, but only the difference between ward 1 and 6 was significant in the post-hoc comparison. Compositional analysis activity groups. The MANOVA analysis on the activity groups resulted in a p-value of < 0.001, implying differences in time spent on activity groups between wards. The ANOVA per activity group detected significant differences between wards on many activity groups and the post-hoc results indicated which wards differed from one another. These results were added to the descriptive analysis and are displayed in Table 5: Direct patient care: • Fluid/tissue sampling: ward 2 (surgical oncology) spent more time on this than ward 4 (vascular surgery) due to frequent wound samples. • Communication with patient/family: ward 6 (oral maxillofacial surgery) differs from all other wards, except for ward 5 (otolaryngology) and ward 1 (urology/ ophthalmology), and spends more time on communication than the others. This is likely because surgery in the maxillofacial area often leads to problems of speech. Ward 1 also spends more time on communication (differing from all but ward 6) due to the fact that many of the patients have vision problems. Nurses on this ward have to read labels and other information out loud to patients. • Preparing medication: ward 1 (urology/ ophthalmology) spends more time on this than all other wards. Ward 6 (maxillofacial surgery) differs from ward 2 (surgical oncology) as well, it is unclear why. 
• Transport of patient: ward 1 (urology/ophthalmology) spends more time on this than wards 3 (vascular surgery) and 4 (traumatology/orthopedics), which can be explained by the complexity of the patient population and the resulting length of stay. Urology and ophthalmology patients typically have a short length of stay, and therefore more patients are admitted and transported to the operating rooms.
• Personal care of patient: ward 4 spends more time on this than wards 5 and 6, and ward 6 also spends less time on this task than ward 2. We could not explain these differences.
• Assistance with meals and/or excretion: ward 5 (otolaryngology) spends less time on this task than wards 1 (urology/ophthalmology) and 4 (orthopedics, traumatology). Ear, nose and throat surgery patients often cannot eat solid food (and therefore need no help in eating), whereas urology patients often need help with excretion by catheterization. The same goes for immobile patients from orthopedics and traumatology wards.

General tasks:
• Education and guidance: ward 6 spends more time on this task than all other departments but ward 5, and ward 5 spends more time on this than ward 4. This can be explained by the fact that wards 5 and 6 together form a special learning environment, where a relatively high number of young nurses are trained.
• Meetings: ward 5 spends more time on this than wards 1, 2 and 4, and ward 3 in turn spends more time on meetings than ward 2.

(Note to Table 3: measurements were only for nurses that were involved in direct care for patients. Team leaders and care assistants were excluded here, because not every ward has care assistants and on some wards team leaders spend much more time caring for patients than in other wards.)

Findings

The work sampling showed that nurses spent between 40.1% and 55.8% of their time on direct patient care. In addition to this, nurses spent between 11.0% and 14.1% on collective patient care. In total, this is between 52.1% and 68% of time spent on tasks that are directly patient-related. We found significant differences between wards for 10 of the 21 activity groups. The biggest differences can be found for activity groups "Education/guidance" and "Medication preparation", followed by activity groups "Assistance meals/excretion", "Communication patient/family", "Personal care" and "Meetings". The results of the compositional analysis were used in discussions with ward managers on observed discrepancies between wards. The diversity is mostly explained by differences in the patient mix, nurse staffing (proportion of nursing students), type of surgery and region of the body where the surgery was performed.

Comparison to other work sampling studies

The NZi workload management method that we based our study on employs a list of activity groups very similar to Duffield's [24]. There is an important difference, though: NZi distinguishes a category called "Collective patient care (CPC)" which includes activities that are patient-related but cannot easily be attributed to a single patient. For example, NZi classifies "Handover" as CPC because during a handover, each patient is discussed only for a short time. If during a work sampling study a handover meeting was observed and the full 10 minutes attributed to the patient being discussed at that moment, it would be an unfair allocation of time to that patient.
In our study we were not only interested in the way nurses spend their time, but also in the relationship between nurses' activities and patient characteristics (care time per patient group). Because of this, we chose to use a different list of activity groups and categories. We did not distinguish direct and indirect care activity groups on the basis of the location of the nurse (with the patient or away from the patient), but based on whether activities could be related to a single patient or not. Therefore we cannot directly compare our categories "Direct patient care" and "Collective patient care" to the categories "Direct care" and "Indirect care". However, we can compare the sum of these categories. In the studies shown in Rasmussen's overview of work sampling studies of nurses' activities [22], the sum of direct care plus indirect care makes up 59.7% to 67.6% of the activities of nurses. In our study, we found between 52.1% and 68% of nurses' time was spent in these two categories, with an average of 60.9%. Our study was performed in a setting of surgical wards in an academic hospital, which is quite different to the settings in the studies mentioned by Rasmussen: amongst others a military hospital, critical care wards, psychiatric wards and pediatric wards. We suspect that part of the difference can be explained by the educational tasks inherent in the academic setting; as shown in Table 1, a substantial part of our workforce are nursing students: 38%. These students require education by the registered nurses, which explains part of the difference. In our study, wards with the highest percentage of time spent on educational activities also spent the least time on direct patient care. Unit related tasks for the wards take up between 16 and 30% of nurses' time in studies in Rasmussen's overview [22]. In our research we found that General Tasks (which includes the same activity groups as unit-related tasks) comprised between 14.5% and 30.3% of time of the nurses. Personal time seems to vary considerably in Rasmussen's overview: between 4% and as much as 20.7% of nurses' time. Our study indicated less variation: between 13.5% and 17.2%. Personal time is quite standardized in our hospital: all nurses have one coffee break in the morning and a one-hour lunch break in the afternoon. In 2008, Hendrich [26] performed a time-and motion study of nurses' activities in 36 hospitals. We cannot compare our activity categories to this study, because the activity categories were too different from ours. For example, Hendrich defined a "Waste" category, which includes waiting, looking/retrieving and delivering. In our study, these activities were always related to a specific activity group. However, they concluded that nurses spend a smaller part of their time on patient care activities and more time on documentation, coordination of care, medication administration, and movement around the unit. This generally corresponds with our findings. Westbrook also performed a time and motion study [27] in which observers shadowed nurses for blocks of, on average, one hour at a time. Westbrook found that the percentage of time spent on direct and indirect care (according to our definitions) was 76% and 81% in two consecutive measurements, which is much higher than in earlier studies. They also found that nurses spent around 37% of their time with patients. In our study, we found this to be on average 31%. However, we noticed a difference between departments. 
For regular surgical departments this was on average 34% and for the two departments that have additional responsibilities in training and education of nursing students, this was on average 25%. This explains why our average is much lower: the educational responsibilities in our academic setting influences how nurses spend their working day. This is interesting because there is evidence that the more time nurses spend with the patient, the higher the patient satisfaction [36] and the better outcomes [37][38][39]. We did not find any other study that analyzed the differences in time spent by nurses between different wards with different specialties. In our study we made a first attempt in identifying and explaining differences in nurses' activities between wards, however we acknowledge this domain needs more research in order to better explain the differences and that these differences may vary between settings and countries. Study limitations Our study was set in an academic hospital, which potentially limits the generalizability of the study results to different settings, such as general hospitals. Nurses' activities in general hospitals may be different from activities in academic hospitals. Also, the study was set in surgical wards; when applying the results to other specialties, adjustments will need to be made. Nurses on internal medicine wards spend their time on different activities than nurses on surgical wards. For example, wound care is not expected to be a predominant activity on internal medicine wards, but nurses there are likely to spend more time on blood transfusions, dialysis or chemotherapy, for example. Different specialties have different working processes, so our study results can be applied most easily to surgical wards. However, we expect that the method we used can be applied in any hospital, though it would likely result in different activity groups and different work sampling results. One of our goals was to compare the percentage of time spent on activities between wards. For this purpose, we have 290 observations (observations of nurse/ward combination). This number allows for sufficiently detailed analysis of activities and differences between wards. The compositional analysis found significant differences on all levels, which supports this view. Interrater agreement was 88.4%. There is no clear rule of thumb in literature that defines whether this is acceptable or not. Though our measure did not correct for accidental agreement, the chances of accidental agreement are very small due to the large number of categories for all three variables registered (15-25 per variable). Therefore we believe that 88.4% interrater agreement is sufficient. We expected OT to be the most stable category, but there was still some variation between wards on this category. We did not find an explanation. However, we still stand by our choice to use this as a reference category, because it was the smallest category and the least interesting to compare across wards. Further research More in-depth analysis is needed to study differences between wards that could not be readily explained. As said, this work sampling study will also be used for developing a workload management system. The care time that nurses spent on specific patients will be related to patient characteristics that are expected to increase care time, such as isolation, psycho-social care or assistance with bathing. 
This way, we can calculate how much additional care time is needed when one or more of these characteristics apply to patients on a ward, forming the basis for a workload management method.

Clinical implications and conclusion

The data collected in the work sampling study are very interesting from an operational excellence perspective. This study formed a basis for discussing the working processes of different wards and helped to identify and understand differences in processes and operational excellence between wards. The results can be analyzed further and provide a starting point for improvements. The results of this work sampling study will be combined with data on patient characteristics and provide insight into the required resources per patient and per ward. This, in turn, will be used to further develop a workload management method, as described in section 2.1.
Interpolation of spatial data – A stochastic or a deterministic problem? Interpolation of spatial data is a very general mathematical problem with various applications. In geostatistics, it is assumed that the underlying structure of the data is a stochastic process which leads to an interpolation procedure known as kriging. This method is mathematically equivalent to kernel interpolation, a method used in numerical analysis for the same problem, but derived under completely different modelling assumptions. In this paper we present the two approaches and discuss their modelling assumptions, notions of optimality and different concepts to quantify the interpolation accuracy. Their relation is much closer than has been appreciated so far, and even results on convergence rates of kernel interpolants can be translated to the geostatistical framework. We sketch different answers obtained in the two fields concerning the issue of kernel misspecification, present some methods for kernel selection and discuss the scope of these methods with a data example from the computer experiments literature. A survey in mathematics for industry Interpolation of spatial data is a very general mathematical problem with various applications, such as surface reconstruction, the numerical solution of partial differential equations, learning theory, computer experiments and the prediction of environmental variables, to name a few. Specific instances from different fields of application can be found in [9] and [85]. The precise mathematical formulation of the problem is as follows: Reconstruct a function f : T → R, where T is a domain in R d , based on its values at a finite set of data points X := {x 1 , . . . , x n } ⊂ T (usually called 'sampling locations' in geostatistics and 'centres' in kernel interpolation). This situation is illustrated in Figure 1 with topographic data from Mount Eden, New Zealand, 1 with X being a randomly chosen set of 300 sampling locations. In order to derive optimal procedures for reconstruction and to provide a priori estimates of their precision, it is necessary to make assumptions about f. We focus on two different approaches that deal with the above problem in different ways: kernel interpolation and kriging. The former assumes that f belongs to some Hilbert space H of functions of certain smoothness. This allows one to use Taylor approximation techniques to derive bounds for the approximation error in terms of the density of the data points. Smoothness is a comparatively weak and flexible assumption, and the error bounds permit control of the precision whenever it is possible to control the sampling. By construction, the kernel interpolation approach yields minimal approximation errors with respect to the norm · H on H. T f(x) In some applications there is only limited or no control over the sampling and one has to get by with the (sometimes very sparse) data that are available. Typical examples are environmental modelling or mining where sampling involves high costs or is limited by lack of accessibility of the variable of interest. Moreover, in these applications the variable of interest is often a very rough function, and together with the sparsity of data this implies that error bounds obtained on the basis of Taylor approximation are only of limited use. A way out is possible if the stronger modelling assumption that comes with a statistical modelling approach is adequate: the assumption that f is a realization of a random field. 
Then again optimal approximation procedures can be derived, and a satisfactory stochastic description of the approximation error is available. It is quite remarkable that both approaches finally come up with the same type of approximant despite different model assumptions and motivations of its construction. Several authors, including [10,42,51,57], have already pointed out this connection, and a comprehensive overview over both approaches is given by [4]. While the authors of [4] also establish the link between stochastic processes and reproducing kernel Hilbert spaces (RKHS), the equivalence of kernel interpolation and kriging is shown by the usual algebraic arguments. In this paper we introduce kernel interpolation and the underlying RKHS model in a different way than that usually taken in the spline literature. This will make it clear that the derivation of optimal interpolation procedures follows the same principles in the stochastic and deterministic frameworks, and reveal that the connection between these frameworks goes much further than algebraic equivalence of respective interpolants. In Sections 2 and 3 we describe kernel interpolation and kriging respectively along with their modelling assumptions and concepts of optimality. Some problems closely related to spatial interpolation and generalizations are discussed in Section 4. The presentation in Section 2 and 3 will show that the function that characterizes the magnitude of the pointwise approximation error appears -with different interpretations -in both frameworks. This will be used in Section 5 to apply theorems on the convergence rates of kernel interpolants to the stochastic framework, where statements of comparable generality have not been available so far. In Section 6, the issue of kernel misspecification is addressed, and we give an overview of the answers given in both communities to the question about the consequences of using an 'incorrect' kernel for the construction of the interpolant. In Section 7 we turn to the issue of parameter estimation and describe some of the procedures used to select a kernel based on the available data. The scope of these methods is briefly discussed and illustrated with a data example. The main focus of this paper is to point out the interconnections between the two approaches to spatial interpolation. Some topics which receive considerable attention in one of the two frameworks but are (from our current perspective) hardly relevant for the respective other, will be briefly addressed in the final discussion. Positive definite kernels In the kernel interpolation framework, f is assumed to belong to some Hilbert space H of real-valued functions on T with inner product (·, ·) H . It is further assumed that for all where H * denotes the dual of H with dual norm Then by the theory of reproducing kernel Hilbert spaces (see [67] and references therein), a unique symmetric function K : T × T → R ('reproducing kernel') exists with K(·, x) ∈ H and for all x, y ∈ T , g ∈ H. Note that either of the last two equations imply for any choice of points x 1 , . . . , x n ∈ T , n ∈ N and coefficients α = (α 1 , . . . , α n ) ∈ R n \ {0}. If the functionals δ x 1 , . . . , δ x n are linear independent when based on distinct points, we even have strict inequality in (2.3). In this case K is called a positive definite kernel and H = H K is called the native space for K. The important role of positive definiteness for kernel interpolation was pointed out by Micchelli [55]. 
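To make the construction above concrete, the following sketch assembles the interpolant s_{f,X} for a positive definite, translation-invariant kernel. Rather than solving the pointwise system for the coefficient functions anew at every evaluation point, the same interpolant is computed once in the kernel basis, s_{f,X}(x) = Σ_i α_i K(x, x_i), where the coefficients solve the Gram system A α = (f(x_1), …, f(x_n))ᵀ with A_ij = K(x_i, x_j); this is the positive definite special case of the representation (2.14) discussed below. The Matérn-type kernel, its scale, the test function and all names in the code are illustrative assumptions made for this example, not material from the paper.

```python
import numpy as np

def matern32(r, scale=1.0):
    # Matérn-type kernel with smoothness parameter 3/2, a standard positive
    # definite choice (cf. Table 1); 'scale' rescales the argument.
    s = np.sqrt(3.0) * r / scale
    return (1.0 + s) * np.exp(-s)

def fit_kernel_interpolant(X, fvals, kernel):
    """Solve the Gram system A alpha = f with A_ij = K(x_i, x_j)."""
    A = kernel(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    return np.linalg.solve(A, fvals)

def evaluate_interpolant(Xeval, X, alpha, kernel):
    """Evaluate s_{f,X}(x) = sum_i alpha_i K(x, x_i)."""
    B = kernel(np.linalg.norm(Xeval[:, None, :] - X[None, :, :], axis=-1))
    return B @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(30, 2))                    # sampling locations in T = [0, 1]^2
    f = lambda x: np.sin(4 * x[:, 0]) * np.cos(3 * x[:, 1])    # hypothetical smooth test function
    alpha = fit_kernel_interpolant(X, f(X), matern32)
    Xeval = rng.uniform(0.0, 1.0, size=(500, 2))
    err = np.abs(evaluate_interpolant(Xeval, X, alpha, matern32) - f(Xeval))
    print(f"max abs error on 500 test points: {err.max():.3e}")
```

For a conditionally positive definite kernel the same idea applies, but the Gram system has to be augmented by the polynomial block as in (2.15).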
[68,84] for further examples The native space is an abstract concept, but for some important function spaces an explicit link can be made to translation-invariant kernels where K(x, y) = Φ(y − x) for some function Φ : R d → R. Consider, for example, the Sobolev spaces W τ 2 (R d ), which are widely used in numerical analysis, in particular in the context of Partial Differential Equations (PDEs). They not only have a rich mathematical structure but also characterize the degree of smoothness of the functions belonging to them. For τ ∈ N, this can be seen directly from their definition where D α denotes an αth weak partial derivative [22,Section 5.2]. An equivalent definition exists in terms of Fourier transforms, and this definition has a straightforward generalization to non-integer orders, τ > 0. In the remainder of this paper, we always have τ > d 2 , which implies, by the Sobolev embedding theorem, that every equivalence class in W τ 2 (R d ) contains a continuous representer. We will interpret W τ 2 (R d ) as a set of continuous functions in this way. The following theorem [85,Cor. 10.13] shows that it constitutes the native space of certain translation-invariant kernels Φ which have a degree of smoothness that depends on τ. 4) for some τ > d 2 and constants 0 < c 1 6 c 2 . Then the native space of Φ coincides with the Sobolev space W τ 2 (R d ) as a vector space, and the native space norm and the Sobolev norm are equivalent. Table 1 shows examples (see [31,46,50,84] for details) of positive definite functions Φ commonly used in geostatistics and the approximation theory. For Φ ∈ L 2 (R d ) their Fourier transforms are defined as stated in Theorem 2.1. We write O((1 + ξ 2 ) −τ ) to denote that (2.4) is satisfied where closed forms are not available. While Sobolev spaces are intuitively more accessible and allow one to better understand what exactly is assumed for f, the framework of native spaces is useful to derive an optimal approximation of f based on the given data at X in the sense that the worst case approximation error is minimized pointwise. Specifically, we consider approximants of the form x∈ T , (2.5) which are, at each point x ∈ T , a linear combination of the given values of f. The coefficient functions u 1 , . . . , u n : T → R are defined pointwise, and for fixed x 0 ∈ T we consider the norm of the error functional λ err : According to (2.3), its square can be written as a quadratic form and it follows that optimal coefficients u * 1 (x 0 ), . . . , u * n (x 0 ) minimizing Q must satisfy If K is positive definite, then this system has a unique solution, and this in turn implies that the so-called Lagrange conditions, are satisfied. Hence, s f,X interpolates f at X, and it can be shown that it has minimal native space norm among all such interpolants [65]. This property is the usual starting point for the derivation of kernel interpolants in the spline literature. Conditionally positive definite kernels Some important classical interpolation schemes, such as thin-plate splines [18][19][20] or Hardy's multiquadrics [35], are not covered by the above theory, but can still be incorporated into the framework of kernel interpolation. To this end we must allow for kernels that are only conditionally positive definite with respect to some finite dimensional function space P (in applications we usually have P = π m (T ), the space of polynomials on T of order at most m). Let L P (T ) denote the space of all linear functionals of the form a i δ x i , a 1 , . . . 
, a n ∈ R, X := {x 1 , . . . , x n } ⊂ T , n ∈ N Table 2. Some conditionally positive definite functions Φ(h) together with the minimal space with respect to which they are conditionally positive definite that vanish on P, i.e. λ X (p) = 0 for all p ∈ P. This is a vector space over R under usual operations. A kernel K is called conditionally positive definite with respect to P if where the superscripts denote the application of λ X with respect to the first and second argument of K respectively. Note that such K is also conditionally positive definite with respect to any finite dimensional function space P ⊃ P, in particular we can always consider a conditionally positive definite kernel with respect to π m (T ) as conditionally positive definite with respect to π l (T ) if l > m. Assume from now that K : T × T → R is (symmetric and) conditionally positive definite with respect to P. In analogy with (2.3) we let and due to (2.9) this defines an inner product on L P (T ). This can be used to define the native space of K as the largest space on which all functionals from L P (T ) act continuously, i.e. H K,P := {g : T → R : |λ(g)| 6 C g λ K for all λ ∈ L P (T )}, (2.10) where C g < ∞ is a constant depending only on g. A semi-norm on H K,P can be defined via This characterization goes back to the pioneering work of Madych and Nelson [48]. We chose it because of its striking analogy with the theory of intrinsic random fields [52], which will be further discussed in Section 3. In [67] a detailed derivation of the native space of a given conditionally positive definite kernel K is presented, showing how values g(x), x ∈ T can be assigned to the abstract function g which a priori can be evaluated only by functionals from L P (T ) which does not include the point evaluation functionals δ x . Note that the positive definite case discussed above corresponds to P = {0}, and the continuity of any λ ∈ L {0} (T ) is a consequence of assumption (2.1) and the Riesz representation theorem. Examples of conditionally positive definite functions are given in Table 2. For the thin-plate splines, from now denoted by Φ d,l , a counterpart of Theorem 2.1 will provide some intuitive understanding of the corresponding native space. Φ d,l will be considered as conditionally positive definite with respect to π l−1 (R d ). The corresponding function spaces are the Beppo-Levi spaces Beppo-Levi spaces are closely related to Sobolev spaces, and this relation can be used to show that any BL l (R d ) with l > d 2 can be embedded into C(R d ) [19]. Table 2, considered as a conditionally positive definite with respect to π l−1 (R d ). Then the associated native space H K,P is the Beppo Levi space BL l (R d ) of order l, and the semi-norms are the same. When it comes to deriving an optimal approximation of f, minimization of the norm of the error functional λ err = δ x 0 − λ u(x 0 ) for fixed x 0 ∈ T again amounts to the minimization of the quadratic form Q in (2.6). In the general framework of conditionally positive definite kernels, however, λ err is not automatically in L P (T ), and the additional constraint for all p ∈ P (2.11) must be satisfied to ensure that λ err K is defined. Note that this constraint also implies that functions from P are always reproduced exactly by s f,X . Since P was assumed finite dimensional, we can choose a basis p 1 , . . . , p q , and the above condition becomes Minimizing (2.6) subject to (2.12) can be done using Lagrange multipliers η 1 (x 0 ), . . . 
, η q (x 0 ), and it follows that optimal coefficients u * 1 (x 0 ), . . . , u * n (x 0 ) must satisfy (2.12) and ) of s f,X is a good starting point to derive a pointwise optimal approximation of f, but it is quite inefficient from a computational point of view. Some algebraic manipulations of (2.5), (2.12) and (2.13) however yield the alternative representation (2.14) where the coefficients α 1 , . . . , α n and β 1 , . . . , β q are defined by the system which is again uniquely solvable if K is conditionally positive definite and X is Punisolvent. Note that the first set of equations simply forces s f,X to interpolate the data, while the second set is necessary to ensure a unique decomposition into two terms in (2.14). This system needs to be solved only once and then yields an expression for s f,X in closed form, valid on the whole of T . Its solution requires O(n 3 ) floating point operations which may still be too expensive for large spatial data sets. We refer to [85,Chapter 15] for an overview over some algorithms that compute an approximate solution to reduce the computational cost to a practically manageable level. Kriging The statistical counterpart to kernel interpolation is known as kriging, the geostatistical term for optimal linear prediction of spatial processes. Kriging is based on the modelling assumption that f is a realization of a random field Z, which is a collection {Z(x) : x ∈ T } of random variables over the same probability space (Ω, A, P ), indexed over T . The observations f(x 1 ), . . . , f(x n ) are then realizations of the random variables Z(x 1 ), . . . , Z(x n ). To predict Z at some (unobserved) location x 0 ∈ T , one considers all linear predictors of the form which are themselves random variables. The prediction of f(x 0 ) given f(x 1 ), . . . , f(x n ) is then as for kernel interpolation To determine optimal weights u * 1 (x 0 ), . . . , u * n (x 0 ), additional structural assumptions on Z are needed, and depending on these assumptions one distinguishes simple, ordinary, universal and intrinsic kriging. There are still quite other forms (such as complex kriging [45], indicator kriging [41] or disjunctive kriging [53,54]) but we shall only discuss the aforementioned ones due to their close connection to kernel interpolation. Figure 2. (Colour online) Perspective plots of one realization of Gaussian random fields with different Matérn covariance functions Φ ν with ν = 1.0 (left), ν = 1.5 (middle) and ν = 2.0 (right). We use the parametrization of Φ ν proposed by Handcock and Wallis [34], where the argument is rescaled such that the value of ν influences the shape of Φ ν (h) only near h = 0. Simple kriging The additional assumption with simple kriging is that Z(x) is centred, i.e. E(Z(x)) = 0, and that the second moments exist for every x ∈ T . Then the covariance function can be defined and it follows from some basic properties of the (co)variance that K is always symmetric and positive semi-definite. In this framework K describes the probabilistic structure of Z, but certain 'deterministic properties' such as the smoothness of realizations are controlled by K as well (see Figure 2, created with [73]). Note however, that a complete characterization of the probabilistic structure of a random field requires additional assumptions on its distribution (e.g. assuming all finite dimensional distributions to be multivariate Gaussian). The covariance function K can be viewed as an inner product on the vector space . . , a n ∈ R, x 1 , . . . 
, x n ∈ T , n ∈ N of second-order random variables. The closure of V Z under this inner product yields a Hilbert space that is isomorphic to H K from Section 2 (see [4]). When it comes to spatial interpolation, however, the natural counterpart of the Hilbert space generated by Z is the dual H * K rather than H K , as will become clear in the following. The geostatistical notion of optimality is to consider as the 'best' linear predictor Z u * (x 0 ) the random variable with minimal expected squared deviation from Z(x 0 ), i.e. The optimal weights are then obtained by minimizing . This is, however, the same quadratic form Q as in (2.6), and so the simple kriging prediction coincides with the optimal approximation (2.5) with weights u * 1 (x 0 ), . . . , u * n (x 0 ) determined by (2.7). Briefly, in kriging the covariance function takes the role of the interpolation kernel, and Q, originally introduced as the squared norm of the pointwise error functional at x 0 , becomes the expected squared prediction error. We briefly mention another perspective which is in a way even more probabilistic: the Bayesian approach. In the simple kriging framework, it is essentially the additional assumption of Z being Gaussian that needs to be made. As mentioned above, once K is fixed, the distribution of a Gaussian random field is completely determined, and the Bayesian approach uses it as infinite-dimensional prior distribution for the unknown function f. The posterior distribution of f at x 0 given the observations f(x 1 ), . . . , f(x n ) is then a Gaussian distribution with mean s f,X (x 0 ) (with weights u * 1 (x 0 ), . . . , u * n (x 0 ) as above) and variance Q(u * (x 0 )). Unlike in simple kriging and kernel interpolation, this result is not based on a particular loss function. It is an immediate consequence of the complete specification of a prior distribution for f (see also [62,Section 24.]). Ordinary and universal kriging Especially the assumption that Z is centred seems inappropriate in most applications of geostatistics. The first generalization of simple kriging is therefore to allow for a non-zero mean function m(x) := E(Z(x)) while still keeping the assumption that Z has second moments. The mean function is usually unknown in practice, but this problem can be bypassed by requiring the potential predictors Z u of Z to be unbiased, i.e. This has the additional advantage of preventing systematic over-or underestimation of Z(x 0 ). Note that any such predictor is automatically unbiased if Z is centred. Using this unbiasedness constraint to recalculate the target function (3.3) one obtains , which is again the quadratic form Q in (2.6), depending only on K but not on m. Its minimizer, however, is in general not the same as above, since the additional unbiasedness constraint restricts the choice of weights. To ensure that condition (3.4) can be satisfied at all, one cannot let the mean function completely general, but must assume a sufficiently simple, finite dimensional model. The simplest model is a constant (but unknown) mean function, and this assumption leads to what is called ordinary kriging. This is, however, just a special case of universal kriging where the mean function is modelled as with known and linear independent functions p 1 , . . . , p q , and unknown coefficients β 1 , . . . , β q . Such a mean function is also called a trend, and condition (3.4) becomes This condition must hold for any set of coefficients β 1 , . . . 
, β q , and so when predicting at x 0 we are back to condition (2.12) restricting the weights u 1 (x 0 ), . . . , u n (x 0 ). It follows that the universal kriging prediction coincides with the optimal approximation (2.5) from the conditionally positive definite kernel interpolation setup, with optimal weights u * 1 (x 0 ), . . . , u * n (x 0 ) determined by (2.12) and (2.13). Representation (2.14) was already noted by Matheron [51], and the corresponding equation system is known as dual kriging. The universal kriging interpolant can also be derived within a Bayesian framework. As a prior distribution for f, one assumes a Gaussian random field with covariance function K and mean function m as in (3.5). For a complete specification of the prior for f, a distribution assumption needs to be made for β 1 , . . . , β q as well. Then the posterior distribution of f can be worked out, but it will depend both on K and on the prior distribution of the trend coefficients. In the special case of a flat (uninformative) prior, however, Omre and Halvorsen [59] show that the posterior of f at x 0 is a Gaussian distribution with mean s f,X (x 0 ) and variance Q(u * (x 0 )), both calculated with the optimal weights from the universal kriging approach. Universal kriging and kernel interpolation with conditional positive definite kernels are formally equivalent and are derived from the same loss function Q. Nevertheless, the analogy is not yet perfect because the universal kriging assumption that Z(x) has second moments for every x ∈ T automatically entails positive (semi)definiteness of K. We therefore consider a slightly different stochastic model leading to kriging interpolants of the same form, but using a more general dependence structure that permits the use of conditionally positive definite kernels. Intrinsic kriging The idea with intrinsic random fields (introduced in [52]) is that one no longer specifies the full second-order structure of Z, but only the dependence structure of certain increments. More specifically, let P again be a finite dimensional space of functions on T , and let L P (T ) be as in Section 2, i.e. the space of functionals of the form λ X = n i=1 a i δ x i with λ X (p) = 0 for all p ∈ P. For every such λ X ∈ L P (T ) is called an allowable linear combination of Z with respect to P. Assume that all allowable linear combinations of Z have second moments and are centred, i.e. E(Z λ,X ) = 0. The We note that Z and Z + p have the same generalized covariance function for any p ∈ P. Moreover, since the expectation of any squared random variable is non-negative, K must be conditionally positive semi-definite with respect to P. The most important case in practice is the case where P = π m (R d ), the space of polynomials of order at most m, and where Z is intrinsically (weakly) stationary of order m, i.e. K(x, y) = Φ(y − x) for some function Φ : R d → R that is conditionally positive semi-definite with respect to π m (R d ). Assuming K to be translation invariant is often reasonable because general dependence structures are usually too complex for reliable inference. Note that λ X ∈ L π m (R d ) implies λ X+x ∈ L π m (R d ) for all x ∈ R d , owing to the binomial formula, and hence all random variables It follows that the random field {Z λ,X+x : x ∈ R d } of mth-order increments is weakly stationary (i.e. 
centred with second moments and translation-invariant covariance function) with Whenever in practice one faces the situation that Z itself does not appear to be weakly stationary, there is still a chance that this seems plausible for some higher order increments, and this then motivates the modelling with intrinsically stationary random fields. A detailed introduction to this topic is given in [9,Chapter 4], and in particular the differences from a modelling perspective to the model underlying the universal kriging approach are illustrated excellently. To predict Z(x 0 ) we consider the error functional λ err = δ x 0 − λ u(x 0 ) as in Section 2, but in the stochastic framework we are interested in the expected squared prediction error. When K is the generalized covariance function of Z, we have by definition which is again the quadratic form Q in (2.6). The requirement λ err ∈ L P (T ) again entails condition (2.12), and so it follows that the intrinsic kriging prediction is identical to the optimal approximation (2.5) in the conditionally positive definite kernel interpolation setup with optimal weights u * 1 (x 0 ), . . . , u * n (x 0 ) determined by the equation systems (2.13) and (2.12). The difference to universal kriging lies in the interpretation of representation (2.14). While it is legitimate in universal kriging to interpret the second term in (2.14) as an approximate mean function and the first term as an approximate deviation from it, such an interpretation would be wrong in intrinsic kriging where a mean function is not even defined. Comparison with the deterministic framework We sum up what has been pointed out so far concerning the two different modelling approaches and the corresponding notions of optimality. In both frameworks we seek to minimize, at every fixed location x 0 ∈ T , the quadratic form possibly subject to some additional restrictions. The sense in which the resulting approximation of f at x 0 is optimal then differs according to the interpretation of Q: (1) In the deterministic framework, Q(u(x 0 )) indicates how well δ x 0 can be approximated by the linear combination λ u(x 0 ) of the point evaluation functionals for points of X. It measures how big the approximation error can be in the worst case assuming only that f ∈ H K,P . (2) In the stochastic framework, Q(u(x 0 )) indicates how big the error for approximating a random field Z at x 0 by λ u(x 0 ) (Z) will be on average. Its calculation is based on the assumption that Z has generalized covariance function K. Both worst-case and average-case behaviour of numerical algorithms are studied and compared by Ritter [64]. For his average-case analysis, Ritter adopts the stochastic perspective and specifies a probability measure on the space of all functions by making the geostatistical assumption of a random field Z with covariance kernel K. The average-case optimality of K-splines that is stated in this monograph is then a consequence of the equivalence of kriging and kernel interpolation. A short list with terminology used in kernel interpolation and geostatistics is provided in Table 3. We note that in situations where both frameworks are applicable, different answers are obtained as to the question of which K should be used. Indeed, assume that f is a realization of a centred random field Z on R d with translation invariant covariance function Φ τ whose Fourier transform Φ τ satisfies (2.4). For this model the simple kriging framework applies, and states that the optimal interpolant is obtained with Φ τ . 
On the other hand, Scheuerer [71] shows that the realizations of Z-and hence f-are in the Sobolev space W μ 2 (R d ) if and only if μ < τ − d 2 . The space W μ 2 (R d ), however, calls for some reproducing kernel Φ μ satisfying (2.4) with μ instead of τ (see Theorem 2.1), and so Table 3. Some frequently used terms in the language of statistics (left column) and numerical analysis (right column) Covariance function Symmetric and positive definite kernel ∼ of a weakly stationary random field Translation-invariant kernel ∼ of a w. stat. and isotropic random field Radially symmetric, translation-invariant kernel Kriging Kernel interpolation Intrinsic kriging ∼ with a conditionally positive definite kernel Kriging variance P 2 K,X Power function P K,X a numerical analyst would rather use Φ μ with μ ≈ τ − d 2 for interpolation. In other words: if worst case optimality is aspired, then a rougher kernel is considered appropriate for the same function f than when the aim is average-case optimality. Generalizations and related problems The spatial interpolation problem discussed in this paper can be generalized by considering arbitrary functionals λ 0 (f), λ 1 (f), . . . , λ n (f) instead of f(x 0 ), f(x 1 ), . . . , f(x n ). Such a generalization covers, for example, the situation where f is to be reconstructed from both function values and derivatives ('Hermite-Birkhoff interpolation'), or the case where the interest is in approximating integrals of f over certain sub-domains of T . The latter is an important problem in mining, where measurements (which may also effectively be integrals over small areas) are taken at certain locations in some ore deposit, and the interest is in predicting the overall content. In numerical analysis, approximation with general functionals goes back to [87]. In the positive definite setup, the considered functionals must be in the dual space H * K , the analogue requirement with simple kriging is that λ 0 (Z), λ 1 (Z), . . . , λ n (Z) are all welldefined random variables and have second moments. If these conditions are met, the generalization of the quadratic form (2.6) is well defined, and its minimization yields optimal weights u * 1 (λ 0 ), . . . , u * n (λ 0 ) for the approximation of λ 0 (f) by In the framework of conditionally positive definite kernels (intrinsic kriging), one needs to require that (λ 1 i λ 2 j )(K) and λ i (p k ) are well defined for all i, j ∈ {0, . . . , n} and k ∈ {1, . . . , q}, and condition (2.12) becomes The quadratic form Q from above is then again well defined, and its minimization is straightforward. Wendland [85, Section 16.2] discusses the case of Hermite-Birkhoff interpolation in more detail and shows that the resulting system of equations has a unique solution if λ 1 , . . . , λ n are linearly independent, and if λ i (p) = 0 for all p ∈ P and i ∈ {1, . . . , n} implies that p = 0. The close link between approximation and integration of f and its consequences on the error analysis of these problems is discussed in [64]. In the geostatistical framework, both kriging with gradient information and kriging of block averages are described in [9]. The former method has been applied to meteorological problems involving several variables related by physical laws [7,8]. A different type of generalization of the setup in Sections 2 and 3 is required when the set X is infinite. One could think, for example, of data from moving tracking devices which measure f continuously along one-dimensional trajectories. 
λ_{u(x_0)} itself is then no longer a finite linear combination of point evaluation functionals, but lies in the closure F_{P,X,x_0} of such functionals with respect to the norm ‖·‖_K. The definition of Q(u(x_0)) := ‖δ_{x_0} − λ_{u(x_0)}‖²_K remains unchanged, and the functional λ_{u*(x_0)} defining the optimal interpolant is obtained as the projection of δ_{x_0} on F_{P,X,x_0} [69]. In the same way, the kriging approximation Z_{u*}(x_0) of Z(x_0) based on the values of Z at all points of X is obtained as the projection of Z(x_0) on the closure of the space of allowable linear combinations of {Z(x) : x ∈ X}, where the closure is taken under the inner product induced by the generalized covariance function. While this generalization is straightforward in theory, it is not clear how a solution can be obtained in practice. Matheron [52] studies a special case where Z_u(x_0) can be represented by a measure μ_{x_0} with support in X, i.e. Z_u(x_0) = ∫_X Z(x) μ_{x_0}(dx). Such a representation is not always possible, and a counterexample in [52] shows that smooth covariance functions, such as the Gaussian model, may lead to unsolvable systems. If, however, the optimal solution can be represented in that way, then the system of equations (2.12) and (2.13) becomes a system of integral equations that determine μ_{x_0}. At least in special cases, for example when K is such that Z has the Markov property and the geometry of X is sufficiently simple, closed-form solutions for μ_{x_0} can be obtained from these equations.

A mathematical problem that is closely related to the spatial interpolation problem discussed in this paper arises when the function f has to be reconstructed based on data

y_i = f(x_i) + ε_i, i = 1, …, n, (4.1)

i.e. where f is observed with measurement errors ε_1, …, ε_n. In this situation the aim is no longer interpolation of the data, but rather approximation by a function that is close to the noisy observations (4.1) on the one hand but still reasonably smooth on the other hand. This is the standard setup in machine learning, and depending on the loss function that is used to assess the fidelity to the data, different approaches turn out to be optimal. Schaback and Wendland [70] discuss the role of kernel methods in machine learning. The approach that is most closely related to the kernel interpolants ('splines') in Section 2 is the concept of smoothing splines, which corresponds to a quadratic loss function. More specifically, instead of minimizing the native space norm among all functions interpolating the data, smoothing splines minimize

Σ_{i=1}^n (y_i − g(x_i))² + λ |g|²_{H_{K,P}} over g ∈ H_{K,P}, (4.2)

where λ is a regularization parameter that controls the fit/smoothness trade-off mentioned above. The universal kriging counterpart of (4.2) is obtained when f is assumed to be a realization of a random field Z with covariance function K, and ε_1, …, ε_n are assumed to be realizations of independent, centred random variables with variance λ. The optimal solution in that case still has the form (2.5), but in the system of equations (2.13) defining the kriging weights, K(x_i, x_i) is replaced by K(x_i, x_i) + λ for all i ∈ {1, …, n}. For a detailed description of smoothing splines, their connection to RKHS on the one hand and geostatistical methods on the other, we refer to [83] and references therein.

Error estimates

In both frameworks the magnitude of the approximation error at x_0 ∈ T is characterized by Q. In the literature on kernel interpolation, the value of Q^{1/2} for approximation with the optimal weights u*_1(x_0), …, u*_n(x_0) is denoted by P_{K,X}(x_0) and is called the power function.
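As a small numerical illustration of this quantity, the sketch below computes ordinary kriging predictions (constant but unknown mean, i.e. P spanned by p_1 ≡ 1) together with the kriging variance, the square of the power function. It uses the algebraic identity P²_{K,X}(x_0) = K(x_0, x_0) − b(x_0)ᵀ M⁻¹ b(x_0), where M is the augmented matrix of the constrained system (2.12)-(2.13) and b(x_0) stacks the values K(x_i, x_0) and the constant 1; this identity follows from the optimality conditions stated above. The kernel, its scale, the test function and all names in the code are illustrative assumptions rather than material from the paper. Doubling the number of sampling locations (i.e. reducing the fill distance introduced below) should visibly shrink the maximal kriging variance.

```python
import numpy as np

def matern32(r, scale=0.3):
    # Matérn-type kernel with smoothness parameter 3/2 (cf. Table 1).
    s = np.sqrt(3.0) * r / scale
    return (1.0 + s) * np.exp(-s)

def ordinary_kriging(X, fvals, Xeval, kernel):
    """Ordinary kriging (constant unknown mean): returns predictions and
    kriging variances, i.e. the squared power function P^2_{K,X}."""
    n = len(X)
    A = kernel(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    # augmented matrix of the constrained system with the single trend function p_1 = 1
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = 1.0
    M[n, :n] = 1.0
    k = kernel(np.linalg.norm(X[:, None, :] - Xeval[None, :, :], axis=-1))   # (n, m)
    B = np.vstack([k, np.ones((1, len(Xeval)))])                             # right-hand sides b(x_0)
    W = np.linalg.solve(M, B)                                                # weights and Lagrange multiplier
    preds = W[:n].T @ fvals
    variances = kernel(0.0) - np.einsum("ij,ij->j", B, W)                    # K(x0,x0) - b^T M^{-1} b
    return preds, variances

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f = lambda x: np.sin(4 * x[:, 0]) + 0.5 * np.cos(6 * x[:, 1])            # hypothetical test function
    Xeval = rng.uniform(0.0, 1.0, size=(400, 2))
    for n in (25, 100):                                                      # denser X: smaller fill distance
        X = rng.uniform(0.0, 1.0, size=(n, 2))
        preds, var = ordinary_kriging(X, f(X), Xeval, matern32)
        print(f"n = {n:4d}: max kriging variance {var.max():.4f}, "
              f"max abs prediction error {np.abs(preds - f(Xeval)).max():.4f}")
```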
The definition of Q^{1/2} immediately implies the error bound

|f(x_0) − s_{f,X}(x_0)| ≤ P_{K,X}(x_0) |f|_{H_{K,P}}.

The function f is unknown but fixed, and so in order to control the approximation error one is interested in quantifying how P_{K,X} depends on K and X. Conversely, a bound on the absolute approximation error at x_0 normalized by |f|_{H_{K,P}} that holds for any f ∈ H_{K,P} implies a bound on P_{K,X}(x_0). In geostatistics too P_{K,X}(x_0) is a well-known quantity: its square is the so-called kriging variance. Stein's book [77] is the main reference for an in-depth study of the asymptotic behaviour of kriging interpolants. In Section 3.6 he considers a centred, weakly stationary random field Z on R with covariance function Φ_τ satisfying (2.4). The simple kriging interpolant Z_{u*}(0) at x_0 = 0 is calculated based on the values of Z at ±δ, ±2δ, … (interpolation problem) or at −δ, −2δ, … (extrapolation problem). In both cases it turns out that the kriging variance can be bounded by a constant multiple of δ^{2τ−1}. This result is very useful to understand the impact of the smoothness of Z on the precision of kriging approximations, but the geometric setup is rather special. In the kernel interpolation literature similar statements exist for very general alignments of the points where f is observed. As a consequence of the coincidence of P²_{K,X} with the kriging variance, these results can be easily translated to the stochastic framework. To characterize the density of locations in X without restricting to lattice data, one defines the fill distance

h_{X,T} := sup_{x ∈ T} min_{x_j ∈ X} ‖x − x_j‖_2.

Intuitively, h_{X,T} is the radius of the largest ball centred at some x ∈ T that does not contain any of the data points. Results will be given for approximation on a bounded domain T ⊆ R^d with Lipschitz boundary, which means that the boundary can be thought of as locally being the graph of a Lipschitz continuous function. Moreover, T must satisfy an interior cone condition, i.e. there exist an angle θ ∈ (0, π/2) and a radius r > 0 such that for every x ∈ T a unit vector ξ(x) exists such that the cone

C(x, ξ(x), θ, r) := {x + t y : y ∈ R^d, ‖y‖_2 = 1, yᵀξ(x) ≥ cos θ, t ∈ [0, r]}

is contained in T. This simply means that there exists a cone of fixed size that can be placed everywhere inside T, thus excluding the possibility of extremely narrow bulges of the boundary. We state two results from the kernel interpolation literature (see [85, Section 11.6] and [58, Section 4]) and formulate their consequences for kriging:

Theorem 5.1 Suppose that T ⊂ R^d is a bounded domain, has a Lipschitz boundary, and satisfies an interior cone condition. Let X ⊂ T be a given discrete set and s_{f,X} be the kernel interpolant based on a translation-invariant and positive definite kernel Φ_τ satisfying (2.4) with τ = k + s, where k > d/2 is a positive integer and 0 ≤ s < 1. Then the error between f ∈ W^τ_2(T) and its interpolant s_{f,X} can be bounded by

‖f − s_{f,X}‖_{L_∞(T)} ≤ C h_{X,T}^{τ − d/2} ‖f‖_{W^τ_2(T)}

for all sufficiently dense sets X.

Corollary 1 Let Z be a centred, weakly stationary random field on a bounded domain T ⊂ R^d with covariance function Φ_τ as in Theorem 5.1. Assume that T has a Lipschitz boundary and satisfies an interior cone condition. Then the kriging variance can be bounded by

sup_{x_0 ∈ T} P²_{K,X}(x_0) ≤ C h_{X,T}^{2τ − d}

for all sufficiently dense sets X.

Theorem 5.2 Suppose that T ⊂ R^d is a bounded domain that satisfies an interior cone condition. Consider the thin-plate splines Φ_{d,l} from Table 2 as conditionally positive definite with respect to π_{l−1}(T). Then the error between f ∈ W^l_2(T) and its thin-plate spline interpolant s_{f,X} can be bounded by

‖f − s_{f,X}‖_{L_∞(T)} ≤ C h_{X,T}^{l − d/2} ‖f‖_{W^l_2(T)}

for all sufficiently dense sets X.
Corollary 2 Let Z be an intrinsically stationary random field of order l on a bounded domain T ⊂ R d with generalized covariance function Φ d,l as in Table 2. Assume that T satisfies an interior cone condition. Then the kriging variance can be bounded by for all sufficiently dense sets X. The preceding Corollaries give rates for the speed of decline of the kriging variance as the data become denser. Corollary 1 is in agreement with, but more general than the result from [77, Section 3.6] mentioned above. To our knowledge, there is no result on convergence rates of kriging predictions in the statistical literature that covers geometric setups of data points with the generality of Corollaries 1 and 2. We refer to [86] for generalizations of Theorems 5.1 and 5.2 to the situation (4.1) where f is observed with measurement error. Interpolation with misspecified kernels So far it has always been assumed that the correct K is known. In geostatistics this means that the covariance structure of the random field under study is known, in kernel interpolation it amounts to the assumption that one knows the native space in which f is contained. In practice, however, such knowledge is usually not available, and so the question arises whether the interpolation schemes discussed above are still near-optimal if an 'incorrect' kernelK is used instead of K. In kernel interpolation, the main interest is to ensure that the optimal rates in Theorems 5.1 and 5.2 are maintained. If this is the only goal, then rescaling the argument of Φ does not have an effect because in (2.4) rescaling only changes the constants, and thin-plate spline interpolants are invariant to rescaling of the argument of Φ d,l anyway. Misspecifying the smoothness of Φ, however, does have an effect on the rate. If a kernel Φτ withτ < τ is used in the setup of Theorem 5.1, then the statement remains valid for the lower rate ofτ − d 2 . Forτ > τ on the contrary, it cannot be guaranteed that f ∈ Wτ 2 (T ), and so does not decline faster than h X,T , however, it can be shown that the error rate is of the same order as if a kernel with the correct degree of smoothness was used [58]. Hence, for quasi-uniform sets X, i.e. q X 6 h X,T 6 cq X for some fixed constant c > 0, using a very smooth kernel does not degrade the approximation accuracy of s f,X with respect to the error rate. The power function, however, is independent of the true smoothness of f, thus decreases with the faster rate ofτ − d 2 , and consequently yields a false description of the magnitude of approximation errors. The last point is not considered a big deficiency in kernel interpolation, but in geostatistics the exact quantification of the approximation error plays an important role, and a different perspective has been adopted here. A major step towards a theoretically founded answer to the kernel misspecification issue was made in [75]: If K andK are compatible, then the approximation based onK will have the same asymptotic efficiency as the optimal approximation, and the relative deviation of the true expected squared approximation error from the one calculated under the false assumption thatK is correct is asymptotically negligible. A full explanation of the concept of compatibility is beyond the scope of this paper, for details consider [38,77,78]. To compare with the statement above, we shall however give a sufficient condition for compatibility in an important special case where x,y∈ T , σ, a, ν > 0, (6.1) i.e. 
K is translation-invariant, radially symmetric and of the Matérn type (see Table 1). In addition to the parameter ν controlling the smoothness of K, we consider a parameter a rescaling the argument, and a variance parameter σ that does not affect the interpolant s f,X but scales the power function. This choice of K satisfies (2.4) with τ = ν + d 2 for any value of σ and a. When K has the above form and d 6 3, compatibility of K andK is guaranteed [88] This still allows for certain deviations ofK from K, but limits the choice ofK much more than the conditionν > ν that ensures optimal rates of the approximation error. Note the dependence of the above condition on the space dimension. For d > 5, condition (6.2) is no longer sufficient, and K andK are compatible only in the trivial case whereK = K [1]. The case d = 4 is still open. To formulate the precise statement of [75], consider a random field Z on a bounded domain T with mean function of the form (3.5) and covariance function K. Let ZK(x 0 ) be the kriging prediction at x 0 ∈ T based on observations of Z at some set X n ⊂ T , derived under the (false) assumption thatK is the covariance function. Assume further that x 0^Xn , the sequence (X n ) n∈N of point sets is getting dense in T and as n tends to infinity, where E K denotes the expectation under K. Then it holds, for any compatible covariance functionK, that as n tends to infinity. The convergence is even uniform on T [77]. Recall that where the subscripts K andK denote for Q that the quadratic form is calculated using K andK respectively, and for u * the optimal weights were obtained by minimizing Q K and QK respectively. In the language of numerical analysis, (6.3) says that asymptotically the interpolant obtained with a compatible kernelK is still optimal, and that the power function calculated withK tends to the 'true' power function Q K (u * K (x 0 ). This statement is much stronger than that of an optimal convergence rate, but it is based on more restrictive assumptions like (6.2). An extension of this result to some conditionally positive definite covariance functions is proved in [78]. Putter and Young [60] consider the setting whereK is not fixed but may depend on n, which accommodates the situation in practice whereK n can be estimated from the data at X n (see Section 7) with increasing precision as n tends to infinity. This convergence is formalized by introducing the concept of contiguity (which replaces compatibility, see [60] for definition), and it is shown that (6.3) still holds if the stochastic models corresponding to the sequence (K n ) n∈N on the one hand and the true covariance function K on the other hand are contiguous. Kernel selection and parameter estimation An immediate question to follow up the issue of kernel misspecification is how to identify the 'correct' K based on the information and data at hand. We do not intend to give a comprehensive list of all methods available, but focus on two methods that are applicable in both deterministic and stochastic frameworks. The issue of kernel selection has received comparatively little attention in the framework of kernel interpolation. This is not surprising in the light of the preceding section where we noted that working with some smooth kernel would always guarantee optimal convergence rates whatever be the particular form (and scaling) of this kernel, provided that the sampling locations are quasi-uniform. 
Consequently, more emphasis was put on the study of good configurations of sampling locations [14,16,39] on the one hand, and edge correction strategies (see [26] for an overview) on the other hand to avoid undesired oscillations near the boundaries that often come with smooth and flat kernels. Nevertheless, several authors [6,27,28,63] have pointed out the big impact of the choice of, for example, the scaling parameter on the accuracy of interpolant. When ill-conditioning (see Section 8) is not an issue for a relevant range of parameter values, there is usually a value that minimizes interpolation errors. In the earlier literature, the question of suitable scaling of kernel has been typically solved by ad hoc rules [25,28,35]. Rippa [63] was the first to propose an algorithm based on the idea of leave-one-out cross validation (LOOCV) which chooses the scale parameter such that some norm of the LOOCV error vector ε is minimized. In the kernel interpolation setup, the components of ε are formed by leaving out one sampling location x i at a time, calculating the interpolant based on the remaining ones only and taking ε i to be the difference between the true value f(x i ) and the approximation s f,X (x i ). This procedure yields good choices of the scaling parameter and can be implemented such that the calculation of ε for a given kernel can be done with the computational cost of order O(n 3 ), where n is the number of sampling locations. A more recent paper [24] discusses extensions of Rippa's algorithm that have been applied in the context of an iterated approximate moving least-squares approximation of function value data and RBF (radially symmetric kernels) pseudo-spectral methods for the solution of partial differential equations. LOOCV does not make any explicit modelling assumptions and is therefore also applicable in the geostatistical framework. In the geostatistical literature, however, cross validation is mainly used as a diagnostic tool to compare the performances of geostatistical models. Traditionally, variogram-based estimation methods have been used (see e.g. [11, or [9,Chapter 2] for details) since an estimate of the variogram usually constitutes the first step in the exploratory analysis of geostatistical data. Here we focus on maximum likelihood estimation [49], which is applicable in all of the kriging setups presented above, and makes optimal use of the information contained in the data [37, Chapter 2 and Theorem 8.1]. It is usually derived under the additional modelling assumptions that Z is a Gaussian random field, and that K belongs to some parametric class {K θ : θ ∈ Θ} of covariance models. In the simple case where Z has a zero mean, the log likelihood function, i.e. the logarithm of the probability density function of the random vector (Z(x 1 ), . . . , Z(x n )) evaluated with the data vector f := (f(x 1 ), . . . , f(x n )) is then given by with A θ as defined below and |A θ | denoting its determinant. The maximum likelihood estimator then chooses the parameter that maximizes l(θ), reasoning that under the corresponding stochastic model observing the data f becomes most likely. An extension that works for both the case of a non-trivial mean of the form (3.5) and the case of a generalized covariance function was proposed by Kitanidis [43]. The idea is to use the information of n − q allowable linear combinations of f only, rather than the complete data vector. In the universal kriging setup this causes the mean function to be filtered out from the data. 
This procedure is called restricted maximum likelihood (REML) estimation, and it can be shown [36] that the restricted log likelihood function can be written as with A θ and P by An elementary introduction to maximum likelihood methods in spatial statistics is given in [44]. A major drawback seems to be the strong assumption that Z is Gaussian under which the maximum likelihood estimator is derived. In [72], however, an alternative derivation of REML in the framework of kernel interpretation (where much weaker modelling assumptions are made) is given, and a numerical study with several nonstochastic test cases is presented in which REML often yields very good choices of K. Within the Bayesian paradigm, parameter selection and interpolation are not formally distinct. The full specification of a probabilistic model permits, via Bayes' Theorem, to obtain posterior distributions for the unobserved values of f, the trend parameters β 1 , . . . , β q and the covariance parameter θ. One could even step up yet another level and let the Bayesian methodology choose between different parametric model structures [62,Section 5.2]. This unified treatment of model selection and interpolation has the advantage that the additional uncertainty due to the fact that the data-generating model is unknown is reflected in posterior distributions. These distributions can, however, in general not be stated in closed form. For certain choices of the priors, some of the integrals that result from the repeated applications of Bayes' Theorem within the hierarchical model specification can be calculated analytically [17,33], but the final posterior distributions usually require numerical approximations or Markov chain Monte Carlo (MCMC) methods. In the situation (4.1) where only noisy observations of f are available, the main focus is on estimating the regularization parameter λ in (4.2). Wahba [82] discusses a generalized cross-validation (GCV) procedure which has the advantage over standard LOOCV that it achieves certain desirable invariance properties (see [83] for a detailed motivation and asymptotic results for GCV). While Stein [76] proves that REML is asymptotically (as the sampling locations get increasingly dense) superior to GCV when the geostatistical assumptions are true, asymptotic results from Wahba [82] suggest that REML can fail when f is a smooth deterministic function, whereas GCV chooses a good λ in all frameworks. The following example, however, shows that a rather different behaviour may be observed in our interpolation framework and finite settings. We illustrate and compare LOOCV and REML with a test function (the 'borehole model') used by many authors (e.g. [40,56]) to compare different methods in computer experiments. Examples from this field of application are particularly interesting in the context of the present paper because they are typically deterministic in nature but considered as realizations of Gaussian random fields. ranges of interest are summarized in Table 4. We rescale these variables to the range (1, 3) and use the same orthogonal sampling design as Joseph et al. [40] with [27] locations. We now assume f to be a realization of a stationary Gaussian random field with covariance function of the Gaussian type. Its mean function will be considered constant but unknown so that we are in the framework of ordinary kriging (see Section 3). Table 5 shows the estimates for the parameters θ 1 , . . . 
, θ 8 and σ 2 obtained via REML, LOOCV 1 and LOOCV 2 , where the subscript indicates that either the · 1 -norm or the · 2 -norm of the cross-validation errors is minimized. Unlike real-world applications of computer experiments, the borehole function is cheap to evaluate, and this allows us to calculate its values on the grid G := {1, 1.5, 2, 2.5, 3} 8 on the space of the scaled input variables and compare them with the values predicted via ordinary kriging with the covariance functions estimated by different methods. The following error statistics are given in Table 5: In the borehole example, both root mean squared error (RMSE) and mean absolute error (MAE) are the lowest for the interpolant computed with the REML estimates, but the LOOCV estimates too give good results. To judge how well the kriging variance describes the prediction uncertainty, one can look at the mean absolute standardized errors (MAStE). If the kriging variance (which also depends on the estimated parameters) has correct magnitude, the absolute standardized errors should average to 1, and indeed all three parameter choices yield an MAStE quite close to that. Since we have assumed a Gaussian random field, we can go even further and calculate, for every x ∈ G, the probability integral transform (PIT) F x (f(x)), where F x is the cumulative distribution function of a Gaussian distribution with mean s f,X (x) and variance P 2 K,X (x). F x is a probabilistic forecast of f(x) that automatically comes with our stochastic modelling assumptions. If it is correct, then the PIT values have a uniform distribution in [0, 1], and this property can be checked by plotting them in the form of a histogram [2]. The PIT histograms in Figure 3 are quite far from uniformity, suggesting that the assumption of a Gaussian distribution is rather questionable. It is quite remarkable that REML, which is based on this assumption, does an excellent job in selecting good parameters, and we found that this is also true for many other test cases (see [72]). A critical issue about REML estimation is the computational cost of O(n 3 ) floating point operations for each choice of θ, which is prohibitive for large spatial data sets. When all sampling locations are on a (near-)regular lattice, spectral methods to approximate the likelihood can be used and allow to reduce the computational cost to an order of O(n log(n)) [13,29,32]. These techniques cannot be applied to scattered data, but other approaches to approximating likelihoods [5,47,79,81], covariance tapering [30] or simplified Gaussian models of low rank [3,12,21] have been proposed and shown to be quite effective in reducing the computational effort to an order that allows the application of REML in most practical situations. Discussion A variety of practical problems amount to or can be linked to the mathematical problem of data interpolation. In this paper two approaches -kernel interpolation and krigingwere presented and their interconnections were pointed out. In either framework the interpolation procedure is optimal in a certain sense, but optimality is based on the assumption that the 'correct' kernel is used. Answers given by numerical analysts and statisticians to the question about the consequences on approximation accuracy of using an 'incorrect' kernel were discussed. Finally, some methods for choosing a suitable kernel based on the given data were presented. 
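To make the leave-one-out procedure described in Section 7 concrete, the following sketch selects the scale parameter of a Matérn-type kernel by minimizing the ‖·‖_2-norm of the leave-one-out cross-validation error vector. It relies on the standard closed-form identity for kernel interpolants, ε_i = α_i / (A⁻¹)_{ii} with A α = (f(x_1), …, f(x_n))ᵀ, which avoids refitting n reduced interpolants for every candidate parameter. The kernel family, the candidate grid, the test function and all names in the code are illustrative assumptions; the borehole inputs of Table 4 are not reproduced here.

```python
import numpy as np

def matern32(r, scale):
    # Matérn-type kernel with smoothness parameter 3/2 (cf. Table 1).
    s = np.sqrt(3.0) * r / scale
    return (1.0 + s) * np.exp(-s)

def loocv_errors(X, fvals, kernel):
    """Leave-one-out errors of the kernel interpolant via the closed-form
    identity eps_i = alpha_i / (A^{-1})_{ii}, where A alpha = f."""
    A = kernel(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    Ainv = np.linalg.inv(A)
    alpha = Ainv @ fvals
    return alpha / np.diag(Ainv)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.uniform(0.0, 1.0, size=(40, 2))                      # sampling locations
    f = lambda x: np.exp(-x[:, 0]) * np.sin(5 * x[:, 1])         # hypothetical test function
    fvals = f(X)
    scales = np.geomspace(0.05, 1.0, 15)                         # candidate scale parameters
    scores = [np.linalg.norm(loocv_errors(X, fvals, lambda r, s=s: matern32(r, s)))
              for s in scales]
    best = scales[int(np.argmin(scores))]
    print(f"scale selected by LOOCV_2: {best:.3f}")
```

A (restricted) maximum likelihood alternative would replace the LOOCV score by the (restricted) log likelihood l(θ) from Section 7, at a comparable cost of O(n³) floating point operations per candidate θ.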
Discussion
A variety of practical problems amount to, or can be linked to, the mathematical problem of data interpolation. In this paper two approaches, kernel interpolation and kriging, were presented and their interconnections were pointed out. In either framework the interpolation procedure is optimal in a certain sense, but optimality is based on the assumption that the 'correct' kernel is used. Answers given by numerical analysts and statisticians to the question of how the use of an 'incorrect' kernel affects approximation accuracy were discussed. Finally, some methods for choosing a suitable kernel based on the given data were presented. The borehole example analysed in the preceding section poses the interesting challenge that it is not entirely clear whether a stochastic modelling perspective is appropriate. While this does not matter for the interpolation method itself, it is comforting to see that both cross-validation and maximum likelihood yield good choices of an interpolation kernel. At first sight, this seems to contradict the asymptotic results by Wahba [82] mentioned above. It seems, however, that in this and many other examples the sample size is simply too small for asymptotic statements to hold. Moreover, in Wahba's setup the actual interpolation kernel is fixed, and only λ is estimated. Our belief is that REML is mostly competitive even in deterministic settings as long as it can choose from a sufficiently flexible class of kernels that permits, for example, adaptation to the regularity of f. Generally, when a high approximation accuracy is expected, the deterministic perspective seems more appropriate. When the data are sparse and/or f has low regularity, a random field model often yields a good description of f. The transition between the two perspectives and their respective methodologies, however, is rather smooth. We have focused our discussion on topics that are relevant for both numerical analysts and statisticians. An important issue in kernel interpolation not mentioned so far is that of an ill-conditioned equation system (2.15). This problem frequently arises because in the deterministic framework very smooth and flat kernels are often preferred, since they can achieve high convergence rates when f is very smooth (see Sections 5 and 6). Such kernels, however, inevitably lead to ill-conditioned systems which are a big challenge for numerical algorithms, and they call for special techniques such as preconditioning or changes of basis; a small numerical illustration of this effect is given at the end of this section. If the standard basis K(·, x_1), . . . , K(·, x_n) is used, as suggested by representation (2.14), ill-conditioning is tied to the smoothness of the kernel and to small approximation errors in terms of the power function [66]. The interpolant s_{f,X} in function space, however, is not dramatically ill-conditioned [15], so ill-conditioning is a problem of a bad basis, not of the reconstruction process itself. In geostatistics, the variables of interest in typical applications are usually very rough and call for kernels with low smoothness, and so ill-conditioning is usually not a big issue. We shall finally mention a field of research where the methods discussed in this paper are applied in a slightly different context: the field of machine learning. The problem studied there can again be formulated as an interpolation problem, and both stochastic and deterministic modelling approaches can be used for its solution. An outline of connections to Gaussian processes and reproducing kernels is given in [62, 74]. Van der Vaart and van Zanten [80] discuss the Bayesian approach to the machine learning problem and provide, in a slightly different setting and based on a different risk function, results on convergence rates and the role of regularity of the covariance kernel similar to the results that have been discussed in Sections 5 and 6. In machine learning too it is not always obvious whether stochastic modelling assumptions are appropriate, and so understanding the implications of different assumptions and identifying the scope of the corresponding methods seem vital.
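As a brief illustration of the ill-conditioning effect mentioned above, the following snippet computes the condition number of a Gaussian-kernel Gram matrix in the standard basis as the kernel becomes flatter. The point set, length scales and kernel form are illustrative only and are not taken from the paper.

```python
import numpy as np

# Condition number of the Gaussian-kernel Gram matrix on n points in [0, 1]
# as the kernel gets flatter (larger length scale ell). Illustrative only.
x = np.linspace(0.0, 1.0, 20)
for ell in (0.05, 0.2, 1.0, 5.0):
    K = np.exp(-((x[:, None] - x[None, :]) / ell) ** 2)
    print(f"ell = {ell:4.2f}   cond(K) = {np.linalg.cond(K):.3e}")
# The condition number explodes as ell grows, even though the interpolant itself
# remains well behaved; this motivates preconditioning or a change of basis.
```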
MOLECULAR IDENTIFICATION AND GROWTH INHIBITION OF SOME HUMAN PATHOGENIC BACTERIA ISOLATED FROM KING FAHAD GENERAL HOSPITAL, JEDDAH
Antimicrobial properties of bacterial antagonists against human pathogenic bacteria have become a field of increasing importance in the medical sector. The present study was carried out to identify the antimicrobial properties of actinomycetes against five human pathogenic bacteria, viz. Staphylococcus aureus, Streptococcus pyogenes, Pseudomonas aeruginosa, Acinetobacter baumannii and Escherichia coli, isolated from wound swabs collected from different service units of King Fahad General Hospital (KFGH), Jeddah. These bacterial isolates were identified using morphological and physiological characters along with molecular techniques. The antibacterial activities of five actinomycete species, along with ampicillin (5 μg/ml) as a positive control, were determined against the multidrug-resistant S. aureus, S. pyogenes, P. aeruginosa, A. baumannii and E. coli using a disc diffusion assay. The results revealed that, among the five tested actinomycete species, Streptomyces 5 (St 5) was highly effective against P. aeruginosa, S. aureus, S. pyogenes and E. coli, while it showed weak activity against A. baumannii. In comparison, Streptomyces 2, 3 and 4 showed moderate antibacterial activities, while Streptomyces 1 showed the lowest activity. Further, all the tested Streptomyces extracts and ampicillin showed only weak activity against the multidrug-resistant A. baumannii. In conclusion, actinomycetes, especially the genus Streptomyces, can be used as a safe and effective source of agents against multidrug-resistant bacteria.
Introduction
Excess use of antibiotics has not only promoted the development of multidrug resistance in human pathogenic microorganisms; the indiscriminate use of these drugs also selects for bacterial pathogens that are resistant to multiple antibiotics (Grasso et al., 2016). Resistance to multiple drugs is due to the accumulation on plasmids of multiple resistance genes against different drugs and/or increased expression of genes that code for multidrug efflux pumps (Service, 1995; Nikaido, 2009). In addition, excess use of antibiotics is sometimes associated with various adverse effects such as hypersensitivity, immunosuppression and allergic reactions (Ahamed et al., 1998). Further, certain bacterial strains have developed mechanisms or produce substances that block the action of antibiotics or change their target or their ability to penetrate cells (Ali et al., 1995). Therefore, disease-causing microbes that have become resistant to antibiotics are a growing public health problem, and this has forced scientists to search for alternatives to antibiotics with good bactericidal activities. Actinomycete metabolites have strong bactericidal effects and therapeutic value against a wide range of microorganisms. Further, the antimicrobial activities of these actinomycetes depend on the method of extraction. These metabolites can be used as alternative drugs with antibacterial, antifungal or antioxidant activities that protect the host from cellular oxidation (Singh et al., 2014). Zothan et al. (2017) reported that a methanolic extract of S. cyaneofuscatus inhibited the growth of E. coli, P. aeruginosa, Micrococcus luteus, S. aureus, and Candida albicans, with IC50 values ranging from 2.1 to 43.63 µg/ml.
This antimicrobial activity may be due to the interaction of these metabolites with the bacterial cell surface, causing structural changes and damage, disturbing vital cell functions such as permeability, depressing the activity of respiratory chain enzymes, and finally leading to cell death (Demain, 1999; Ueda & Beppu, 2017). The aim of the present study was to evaluate the antimicrobial activities of some actinomycetes against multidrug-resistant bacteria isolated from a local hospital.
Materials and Methods
All the chemicals required for this study were purchased from Merck (Germany) and Sigma-Aldrich (USA).
Collection and isolation of bacterial isolates: Sterile cotton swabs, moistened with sterile saline, were used to obtain samples from different patient wounds at King Fahad General Hospital, Jeddah, Saudi Arabia. Preliminary experiments were carried out to select antibiotic-resistant bacteria that were resistant to at least three of the most commonly used antibiotics. Wound samples were collected with two swabs; one swab was used for Gram staining and the other was inoculated on MacConkey and blood agar plates to isolate the different bacterial pathogens. Isolated Acinetobacter baumannii was identified by culturing this bacterium on Leeds Acinetobacter medium (Hardy Diagnostics, USA, catalog no. G261). The inoculated plates were incubated at 37 °C overnight and examined for growth. Isolated colonies were identified on the basis of morphological characteristics and biochemical tests, including catalase, oxidase, urease, indole, methyl red, Voges-Proskauer and citrate utilization tests, as described by Koneman et al. (2005). Pure cultures of the tested bacteria were maintained on nutrient agar slants at 4 °C and in glycerol broth (16 ml glycerol + 84 ml nutrient broth) at −70 °C.
Molecular identification of the bacterial isolates
For PCR, DNA templates were obtained from overnight bacterial cultures that were collected, re-suspended in 200 ml of sterile distilled water, and boiled for 1 minute (Usein et al., 2009). The species-specific genes uidA for E. coli, ecfX for Pseudomonas aeruginosa and oxa-51 for A. baumannii, and the 16S rRNA gene for Staphylococcus aureus and Streptococcus pyogenes, were used for the detection of each particular bacterium (Clifford et al., 2012). Primers for uidA (E. coli) were prepared according to the method described by Moyo et al. (2007), while oxa-51 (A. baumannii) primers were according to Brown & Amyes (2005), and ecfX primers (P. aeruginosa) were according to Clifford et al. (2012). The universal 16S rRNA primers were prepared according to Tork et al. (2016). PCR was performed in a Takara thermal cycler (Takara, Tokyo, Japan), and the PCR products were separated on 1.5% agarose in Tris-acetate-EDTA buffer at 100 V and visualized. A ready-to-use O'RangeRuler™ 100+500 bp DNA Ladder was included in each run.
Isolation of actinomycetes
The five actinomycete isolates were obtained from the Department of Botany, Faculty of Science, Tanta University, Egypt. These isolates had been identified as Streptomyces exfoliatus (St 1) and S. niveus (St 2) according to the guidelines of Agwa et al. (2000), while S. anulatus SM 21 (St 3) and S. coelicolor SM 1 (St 4) were identified following Aly et al. (2011). Further, S. exfoliatus LP10 (St 5) was identified by the method of Aly et al. (2012).
Estimation of antimicrobial activity by the agar well diffusion method
The standard agar well diffusion method was used to detect the activity of actinomycete filtrates against the selected pathogenic bacterial isolates (Cheesbrough, 2000). Each actinomycete isolate was cultured in 50 ml of starch nitrate broth for 2 days at 25 °C, and healthy cells were collected and used to inoculate 50 ml of production broth medium composed of (g/l): 10 g glucose, 1.0 g K2HPO4, 1.0 g MgSO4·7H2O, 1.0 g NaCl, 1.2 g NH4NO3, and 2.0 g CaCO3 (Agwa et al., 2000). After 7 days, the cell-free culture filtrate of each organism was collected and extracted with an equal volume of methanol (v/v). The methanol extract was dried and dissolved in DMSO, and the antibacterial activities were determined on nutrient agar on which 100 µl of an overnight suspension of each bacterial pathogen had been spread. Using a sterile cork borer (8 mm), agar wells were made and filled with 50 µl of the prepared actinomycete extract in DMSO or 5 µl (5 mg/ml) of the standard antibiotic (positive control) under aseptic conditions. The plates were kept in a refrigerator for 2 hours before incubation to permit diffusion of the extract, then incubated at 37 °C for 24 h, after which antibacterial activity was assessed as the diameter of the inhibition zone (mm). Duplicate plates were used for each tested bacterial pathogen.
Results and Discussion
In wound infections, many aerobic and anaerobic bacteria are found that lead to morbidity or prolonged hospitalization (Bowler et al., 2001). Over the last few decades, the emergence of antibiotic resistance in isolates of human pathogenic bacteria has become a dangerous threat to worldwide public health. This has become more serious in the case of Gram-negative bacteria such as A. baumannii, E. coli and P. aeruginosa, and Gram-positive S. aureus, which are associated with pus and wound infections, due to extensive prescription and inadequate dosing of antibiotics (Rice, 2006; Misic et al., 2014). The rapid spread of multidrug-resistant bacteria poses a serious threat to public health due to limited treatment options and the decline in the discovery of new classes of antibiotics (Iredell et al., 2016). Bacterial identification is very important for reducing morbidity and mortality in patients. Accurate identification of bacterial pathogens improves treatment options and leads to successful therapy (Barenfanger et al., 1999). In this study, five human pathogenic bacteria, viz. Staphylococcus aureus, Streptococcus pyogenes, P. aeruginosa, A. baumannii and E. coli, were isolated from wounds of patients attending KFGH and identified using conventional as well as molecular identification techniques. Among the isolated microorganisms, 16S rRNA primers were used for the identification of S. pyogenes and S. aureus, while ecfX, uidA and bla oxa-51 were used for the identification of P. aeruginosa, E. coli and A. baumannii, respectively (Figures 1 and 2). The results of bacterial identification using 16S rDNA are in agreement with the findings of Weisburg et al. (1991). Similar findings were reported by Zhang et al. (2014), who observed a predominance of E. coli, S. aureus, and P. aeruginosa in pus samples from patients with severe intra-abdominal infection. In another study, S. aureus was identified as the dominant bacterial species from wounds, followed by P. aeruginosa, P. mirabilis and E. coli (Lorrot et al., 2014).
Acinetobacter baumannii is a nosocomial pathogen that affects critically ill patients and is of increasing importance. Further, Aly et al. (2014) recorded the resistance of A. baumannii to various commonly used antibiotics. There is evidence that a carbapenemase gene occurs naturally in A. baumannii (Al Masoudi et al., 2013). Actinomycetes have highly significant roles in drug discovery and have provided bioactive secondary metabolites with interesting antimicrobial, antiviral and anticancer activities. The antimicrobial activity of actinomycete extracts varies with the tested bacterial isolates and the pathogens used (Bruntner et al., 2005; Bhave et al., 2013). In this study, methanolic extracts of the five Streptomyces species were found to be effective against the tested resistant bacteria (Figure 3), and S. exfoliatus (St 5) was the most effective, with the highest inhibition zones against P. aeruginosa (27 mm), E. coli (25 mm), A. baumannii (18 mm), S. pyogenes (25 mm) and S. aureus (24 mm), as shown in Table 1. The growth and morphology of S. exfoliatus are shown in Figure 4. Over the past seven decades, antibiotics obtained from actinomycetes have had many successes. Actinomycetes are a sustained source of new antibiotics with many modes of action that kill pathogens without harming the host. Further, erythromycin, tetracyclines, aminoglycosides, daptomycin and tigecycline are among the most common antibiotics obtained from various actinomycetes. Among the bioactive compounds obtained so far from microbes, 45% are produced by actinomycetes, 38% by fungi and 17% by unicellular eubacteria (Mahajan & Balachandran, 2012). The results revealed that all the tested methanolic extracts of Streptomyces had good antibacterial activity against all tested bacteria except A. baumannii; among the tested microbes, the maximum activity was recorded against P. aeruginosa and S. pyogenes, while for the conventional antibiotic, the maximum activity of ampicillin was recorded against S. pyogenes, with an inhibition zone of 19 mm. The antimicrobial ability of Streptomyces might be attributed to their effect on the bacterial cell wall, ultimately causing destruction of the cell wall and death of the bacteria. The extracts may also interact with the building elements of the outer membrane and cause structural changes, degradation and finally cell death. The antibacterial activities of the extracts could also be due to the susceptibility of the pathogens' cell walls and to toxicity, in addition to changes in membrane potential and inhibition of ATP synthesis and ATP levels, leading to the collapse of all biological processes (Cui et al., 2012). Similar results were reported by Srinivasan et al. (2009), who found stronger antibacterial activity against Gram-positive than Gram-negative bacteria; this may be due to differences in cell wall structure, as Gram-negative bacteria possess an outer membrane that blocks the penetration of antibiotics and plant extracts, making them resistant to various antibiotic substances. The extract of Streptomyces 5 was more effective than the other methanolic actinomycete extracts and was effective against all the tested microbes. Further, moderate antibacterial activities were recorded for the methanolic extracts of Streptomyces 2, Streptomyces 3 and Streptomyces 4, while Streptomyces 1 showed the least antibacterial activity.
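For readers who want to work with the zone-of-inhibition data, the short sketch below tabulates the diameters reported in the text for the most active extract (S. exfoliatus LP10, St 5) and ranks the pathogens by susceptibility; values for the other extracts and the ampicillin control are in Table 1 of the paper and are not reproduced here.

```python
# Inhibition zone diameters (mm) reported in the text for Streptomyces exfoliatus LP10 (St 5).
zones_st5 = {
    "P. aeruginosa": 27,
    "E. coli": 25,
    "S. pyogenes": 25,
    "S. aureus": 24,
    "A. baumannii": 18,
}

# Rank the pathogens by susceptibility to the St 5 extract (largest zone first).
for pathogen, zone in sorted(zones_st5.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{pathogen:<15} {zone} mm")
```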
Antibiotics from actinomycetes may affect essential processes in bacterial cell wall biosynthesis and change bacterial structures and functions. Antibiotics essentially target the cell membrane, protein translation, RNA transcription, and DNA replication and synthesis (Ueda & Beppu, 2017). Glycopeptides are a class of drugs produced by actinomycetes that bind to the dipeptide D-alanyl-D-alanine of the cell wall of Gram-positive bacteria, preventing the addition of new units to the peptidoglycan and inhibiting peptidoglycan synthesis (Demain, 1999; Ueda & Beppu, 2017). In conclusion, antibiotics from actinomycetes must be intensively studied with respect to their sources, structures, activities and modes of action, and further research is required before these safe and effective extracts can be used in alternative medicines.
Conflict of Interest
The authors hereby declare that there is no conflict of interest.
Wetland Functional Responses to Prolonged Inundation in the Active Mississippi River Floodplain
The Mississippi River experienced historic flooding during 2019, inducing >150 days of floodplain wetland inundation. We evaluated flood effects using repeated measures of hydrogeomorphic (HGM) wetland assessment variables prior to the flood (October 2018), immediately post-flood (August 2019) and one year after initial assessment (October 2019). The flood had little or no impact on 11 of 13 assessment variables but altered the abundance of woody debris and forest floor litter. Immediately after the flood, these changes decreased the functional capacity of wetlands to 1) detain floodwater (mean −9.7% reduction); 2) detain precipitation (−17.3%); 3) cycle nutrients (−7.5%); and 4) export organic carbon (−23.8%). Subsequent sampling documented the detain precipitation function returning to pre-flood conditions. The export organic carbon function also improved, yet remained below pre-flood levels. Other functions will likely require additional recovery time due to the persistence of accumulated excess woody debris. Across all sample intervals, floodplain wetlands displayed high wetland functional capacities and appear resilient to surface water inundation. This analysis highlights the utility of the HGM assessment to detect responses to changing environmental conditions over short time intervals. The study also emphasizes the need to incorporate metrics with appropriate impact-response characteristics when developing and implementing ecological assessments.
Introduction
The Mississippi River watershed conveys water from 41% of the conterminous United States, representing the world's fourth-largest river system. The lower portion of the Mississippi River valley historically supported 10 million ha of floodplain forested wetlands, far exceeding the spatial extent of other large wetland systems including the Everglades, Okefenokee, and Great Dismal swamps (Turner et al. 1981; The Nature Conservancy 1992). Forested wetlands in the Mississippi River valley provide a wide variety of ecological functions related to hydrology, habitat, and biogeochemical cycling that in turn benefit society by regulating flooding, providing economic and recreational value, and improving water quality (Smith and Klimas 2002). Landscape-scale alterations in the region resulted in an estimated 70% reduction in forested wetland extent, mostly associated with conversion to agricultural lands, drainage for development, and the construction of flood control infrastructure (Stanturf et al. 2000). In particular, the establishment of more than 3500 km of levees adjacent to the main channel of the Mississippi River provides for navigation, agriculture, and economic development; protects over 4 million people from flooding; and has prevented over one trillion dollars in flood damage since its inception (Camillo 2012). The levee system also induced dramatic ecological changes (DuBowy 2013), decreasing the active Mississippi River floodplain area by 75-90%, which altered the timing and extent of floodplain inundation (Schramm et al. 2015; Remo et al. 2018). The constriction of the river to a narrow floodplain results in more erratic flow regimes, more frequent major floods, and fewer years with stable water levels (Sparks et al. 1998). These altered flood regimes have important implications for faunal populations, vegetation communities and soil characteristics (Jones et al. 2019). For example, De Jager et al.
(2012) reported a decrease in plant diversity and increases in fine soil textures with longer flood durations in the Mississippi River valley. Schramm et al. (2009) modeled nutrient cycling changes in the region under present-day and historic hydroperiods, with results suggesting that the present inundation cycle removes less nitrogen than historic conditions. Despite the size and importance of the Mississippi River system, few studies have investigated the implications of altered flood pulses for wetland functions, and more research is required to evaluate the impacts of long-duration, major flood events in large floodplain wetlands. The Mississippi River watershed experienced historic flooding during 2019, including periods exceeding 150 days above flood stage in many areas (Fig. 1; Table 1), resulting in an estimated $20 billion in economic losses (NOAA 2020). The unusual duration of flooding provided an opportunity to evaluate changes in wetland functions following an extreme sustained flood event, including short-term recovery potential following floodwater recession. In response, we applied the hydrogeomorphic (HGM) wetland functional assessment approach prior to the onset of 2019 flooding (October 2018) and at two sampling intervals following floodwater recession (August 2019 and October 2019). The objectives of the study were to 1) determine whether the HGM approach detected flood effects and identify the associated assessment variables, 2) document the implications for wetland functions, and 3) evaluate potential functional recovery over short (<1 year) time frames.
Methods
The HGM wetland functional assessment developed by Murray and Klimas (2013) was applied at 35 locations within the active floodplain (i.e., batture) of the mainline Mississippi River levee system. The analysis included study sites in Missouri, Arkansas, and Tennessee. Sample locations all occurred within the low-gradient riverine overbank wetland subclass and exhibited hydric soils, mature forests dominated by hydrophytic vegetation, and indicators of wetland hydrology (USACE 2010). Common soils within the study area included Sharkey (Very-fine, smectitic, thermic Chromic Epiaquert), Robinsonville (Coarse-loamy, mixed, superactive, nonacid, thermic Typic Udifluvents), Commerce (Fine-silty, mixed, superactive, nonacid, thermic Fluvaquentic Endoaquepts), and associated series. Generally, Celtis laevigata, Salix nigra, Fraxinus pennsylvanica, and Populus deltoides dominated the tree stratum; Forestiera acuminata and Cephalanthus occidentalis were common shrub species; Saururus cernuus and Toxicodendron radicans were frequently observed herbaceous plants. Sample locations were selected based upon their proximity to proposed levee improvement projects, the presence of forested floodplain wetlands, and existing right-of-entry agreements that provided site access to conduct the wetland assessment. Data collection occurred at the same study locations at three intervals: prior to the flood in October 2018 (i.e., pre-flood); <30 days following the recession of floodwaters in August 2019 (post-flood1); and one year after initial data collection in October 2019 (post-flood2). A combination of 13 onsite and offsite variables was collected at each location during each sampling interval (Table 2). Variable metric data were transformed into variable subindex scores ranging from 0.0 to 1.0, and wetland functional capacity index (FCI) scores were calculated using empirical equations (Table 3).
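The empirical equations in Table 3 are not reproduced here; as a minimal sketch, HGM functional capacity indices are commonly formed by combining the relevant variable subindices through arithmetic or geometric means, and the hypothetical example below uses a geometric mean. The function name and example values are illustrative and are not the guidebook's actual formulas.

```python
import numpy as np

def fci_geometric(subindices):
    """Illustrative functional capacity index: geometric mean of variable subindex
    scores (each in [0, 1]). The actual empirical equations are given in Table 3 of
    Murray and Klimas (2013) and differ by wetland function."""
    s = np.asarray(subindices, dtype=float)
    return float(np.prod(s) ** (1.0 / len(s)))

# Hypothetical example: three subindices feeding one function.
print(fci_geometric([0.9, 1.0, 0.8]))   # approximately 0.90
```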
Statistical analysis compared assessment variable metrics, variable subindex scores and wetland functional capacity indexes at the three sampling intervals using a repeated measures approach in SPSS version 26.0 (IBM, Inc). The nonparametric Friedman's test was applied (α < 0.05) because the data displayed marked deviations from normal distributions. Where differences were detected, post-hoc testing using the Wilcoxon signed-rank test with Bonferroni adjustment (α < 0.017) identified differences between sample intervals (pre-flood vs. post-flood1; pre-flood vs. post-flood2; post-flood1 vs. post-flood2). In contrast, the downed woody debris and snags variable exhibited an increase following the flood, with a higher than normal abundance of woody materials present at 40% of study locations and excessive amounts (>25% of ground coverage) of downed wood at 2 of the 35 locations evaluated during both post-flood sampling intervals (p < 0.001). The changes in observations of woody debris persisted between the post-flood1 and post-flood2 sample intervals (p = 1.0). The leaf litter cover variable also displayed differences following flooding. Prior to the flood, litter covered 92.6 ± 2.3% of the forest floor; flooding decreased surface litter cover to 22.8 ± 4.1% during the post-flood1 sampling interval (p < 0.001). Litter cover subsequently rebounded to 68.4 ± 3.9% over the next 90 days (post-flood2), higher than post-flood1 (p < 0.001) but still depressed compared with pre-flood conditions (p < 0.001). As expected based on the variable metric results, wetland assessment variable subindex values followed similar trends, with limited or no differences detected in 11 of the 13 parameters (Fig. 2). The V_DWD&S variable displayed a change following flooding (p < 0.001) that persisted between post-flood sampling intervals (p = 1.0). The V_LITTER variable exhibited a decrease (p < 0.001) followed by partial recovery (p < 0.001). The wetlands within the active floodplain exhibited high levels of function, with average functional assessment scores of 0.93 ± 0.02 prior to the flood (Fig. 3). The high scores included functions related to hydrology (e.g., detain floodwater and precipitation), biogeochemical cycling (i.e., export organic carbon, cycle nutrients), and habitat for plants and animals. The variable responses induced by flooding resulted in decreases in four of the six wetland functional capacities examined. The detain floodwater (mean reduction = −9.7%; p < 0.001) and cycle nutrients (−7.5%; p < 0.001) functions decreased as a result of excess woody debris accumulation following flooding, and those conditions persisted during both post-flood sample intervals (post-flood1 vs post-flood2; p > 0.520). Changes in litter cover decreased the detain precipitation function (−17.3%; p < 0.001), followed by recovery to pre-flood conditions one year after initial data collection (pre-flood vs post-flood2, p = 0.314). Changes in woody debris and litter compounded to reduce the export organic carbon function (−23.8%; p < 0.001) immediately after the flood, with partial recovery during the post-flood2 sample interval (p < 0.001). The plant communities and fish and wildlife functions were not altered as a result of the flood (data not shown).
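The repeated-measures workflow described in the Methods (Friedman's test followed by Bonferroni-adjusted Wilcoxon signed-rank comparisons) can be reproduced outside SPSS; the sketch below uses scipy with hypothetical litter-cover data whose means match the values reported above but whose spread is invented for illustration.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

def repeated_measures_test(pre, post1, post2, alpha=0.05, alpha_posthoc=0.017):
    """Friedman test across three sampling intervals, with Wilcoxon signed-rank
    post-hoc comparisons at a Bonferroni-adjusted threshold."""
    stat, p = friedmanchisquare(pre, post1, post2)
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
    if p < alpha:
        for name, a, b in [("pre vs post-flood1", pre, post1),
                           ("pre vs post-flood2", pre, post2),
                           ("post-flood1 vs post-flood2", post1, post2)]:
            w, pw = wilcoxon(a, b)
            flag = "different" if pw < alpha_posthoc else "not different"
            print(f"  {name}: p = {pw:.4f} ({flag})")

# Hypothetical litter-cover percentages at the same 35 sites (spread is illustrative).
rng = np.random.default_rng(0)
pre   = np.clip(rng.normal(92.6, 10, 35), 0, 100)
post1 = np.clip(rng.normal(22.8, 15, 35), 0, 100)
post2 = np.clip(rng.normal(68.4, 12, 35), 0, 100)
repeated_measures_test(pre, post1, post2)
```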
Discussion
Wetlands within the active Mississippi River floodplain have undergone substantial alteration as a result of levee construction and other disturbances, yet continue to provide high levels of wetland function (DuBowy 2013). The changes in a subset of wetland assessment variables (i.e., alteration of woody debris and litter distribution) are not unexpected, given the hydrodynamics of the constrained floodplain at flood stage. Flood stage discharges and flow velocities were not available at each study location, but in-channel discharges >4800 m³ s⁻¹ were documented at the upstream New Madrid, MO gauge near the sample locations, providing some insight into the energy associated with the flood. Downstream, out-of-channel measurements available for Vicksburg, MS report floodplain discharges exceeding 1.2 m³ s⁻¹ during the flood. These discharges are sufficient to introduce woody debris into floodplain wetlands and scour or bury litter on the forest floor, inducing the observed changes in wetland assessment variables. Other studies have investigated the implications of woody debris and litter alteration for floodplain ecological processes and functions (Wohl 2020). For example, burial or rafting of leaf litter during flooding alters organic material processing and nutrient cycling within floodplains and rivers (Mayack et al. 1989). The recruitment of woody debris changes sediment capture and floodplain flow velocities (Gurnell et al. 2002). Excessive woody debris transport in floodplains damages vegetation, and prolonged inundation periods may increase tree mortality, providing additional autochthonous sources of snags and woody debris following large flood events (Sparks et al. 1998; Johnson et al. 2000). Our results contribute to the existing literature by examining short-term (<1 year) wetland functional responses to a prolonged flood event. The HGM functional assessment applied here proved valuable for identifying flood effects and documenting initial post-flood functional responses following floodwater recession. The HGM assessment approach has previously been used to evaluate impact-response relationships. For example, Berkowitz and White (2013) categorized HGM variables as 1) rapid response variables (e.g., ground vegetation cover), 2) response variables requiring additional time to display a measurable effect (e.g., tree basal area), and 3) stable variables that remain fixed over time (e.g., tract size) in a wetland restoration context. The current study supports this concept, with the majority of assessment variables displaying no immediate change following the flood, including all stable offsite variables and response variables associated with mid- to long-term forested floodplain evolution (e.g., development of forest strata). Additional studies will be required to determine whether other variables, including tree basal area and composition, will exhibit flood effects at time intervals >1 year. In particular, the 2019 event may result in wetland functional shifts because flood inundation persisting into the growing season has been shown to induce tree mortality, which in turn affects micro-depressional ponding (due to uprooting of flood-damaged trees) and other factors. For example, Cosgriff et al. (1999) reported tree mortality rates >40% and associated shifts in community composition and structure following 195 days of floodplain inundation associated with the 1993 flood on the upper Mississippi River. The HGM functional assessment has also previously provided insight into recovery trajectories following ecological perturbations (Berkowitz 2018). Here, the rapid recruitment of litter resulted in full recovery of the wetland function associated with detention of precipitation and partial recovery of organic carbon export.
This response can be attributed to several factors. Floodwaters receded during the height of the growing season (August), a time of rapid herbaceous growth in the lower Mississippi River valley. Recruitment of propagules and hydrochory during flooding likely provided seed sources for rapid vegetation re-establishment (Moore et al. 2011). Also, immediately after the flood many areas exhibited bare ground, allowing herbaceous colonization with minimal competition for space and light. These conditions, in combination with the dominance of deciduous species shedding leaves into the floodplain during the subsequent sample interval (post-flood2), supported litter recovery following the flood. The excess woody biomass currently accumulated in the floodplain remains entrapped in debris piles at the base of trees and tangled in drift deposits. The abundance of woody materials will decrease over time, but the degradation, burial, or transport of woody materials in the floodplain will take longer than the recovery of leaf litter cover. This will delay the return of the wetland functions associated with floodwater detention, nutrient cycling, and (to a lesser extent) organic carbon export to pre-flood conditions. Notably, the wetlands examined continued to yield high levels of function after the flood event. This, in combination with the mechanisms for ecological recovery described herein, highlights the functional resiliency of the forested wetlands within the active Mississippi River floodplain. The flood did not result in changes to the habitat functions evaluating plant communities, fish, and wildlife. The maintain plant community function does not contain the V_DWD&S or V_LITTER variables, precluding changes in that function based upon the results observed following the 2019 flood. The fish and wildlife habitat function incorporates the V_DWD&S variable, but the function also considers nine other assessment variables, limiting the influence of V_DWD&S on the functional score. Sensitivity analysis indicates that changes in V_DWD&S alone induce only a limited change in this functional score, which evaluates the ability to support fish and wildlife species during some portion of their life cycle. These findings suggest that the flood did not decrease the floodplain wetlands' ability to provide plant, fish, and wildlife habitat within the time period examined. In summary, the 2019 flood provided a unique opportunity to evaluate short-term flood effects on wetland functions within the active floodplain of the Mississippi River. Flooding resulted in declines in several wetland functions, with subsequent full or partial recovery in a subset of metrics. Despite the observed shifts, forested wetlands exhibited high levels of wetland function at all sample intervals and proved resilient to the long-duration flood. These findings, in conjunction with other studies, support the application of the HGM assessment approach to evaluate the effects of environmental gradients, impacts, and recovery trajectories at a variety of time scales. Further studies of long-term functional responses will provide additional insight into the impact of extended flood events, inform restoration efforts, and improve the management of wetlands within the active floodplains of large rivers.
Fig. 3 Flooding decreased the (a) detain floodwater, (b) detain precipitation, (c) cycle nutrients, and (d) export organic carbon wetland functional capacities.
Note that the detain precipitation and export organic carbon functions displayed full and partial recovery, respectively, during subsequent sample intervals. Error bars represent one standard error and lower case letters indicate where differences were detected between sample intervals