Allium sativum@AgNPs and Phyllanthus urinaria@AgNPs: a comparative analysis for antibacterial application
Although medicinal herbs contain many biologically active ingredients that can act as antibiotic agents, most of them are difficult to dissolve in lipids and to absorb through biofilms in the gastrointestinal tract. In addition, silver nanoparticles (AgNPs) have been widely used as a potential antibacterial agent; however, high concentrations are required to achieve a bactericidal effect. In this work, AgNPs were combined with plant-based antibiotic nanoemulsions using biocompatible alginate/carboxymethyl cellulose scaffolds. The silver nanoparticles were prepared by a green method with an aqueous extract of Allium sativum or Phyllanthus urinaria. The botanical antibiotic components in the alcoholic extracts of these plants were encapsulated with the emulsifier poloxamer 407 to reduce the particle size and make the active ingredients both water-soluble and lipid-soluble. Field emission scanning electron microscopy (FESEM) and energy-dispersive X-ray (EDX) analysis showed that the prepared nanosystems were spherical with a size of about 20 nm. Fourier transform infrared spectroscopy (FTIR) confirmed the interaction of the extracts with the alginate/carboxymethyl cellulose carrier. The in vitro release kinetics of allicin and phyllanthin from the nanosystems exhibited retarded release under different biological pH conditions. The antimicrobial activity of the synthesized nanoformulations was tested against Escherichia coli. The results showed that the nanosystem based on Allium sativum possesses significantly higher antimicrobial activity against the tested organisms. Therefore, the combination of AgNPs with active compounds from Allium sativum extract is a good candidate for in vivo infection treatment applications.
Introduction
Respiratory diseases are common in poultry and significantly affect livestock productivity. 1 These diseases often come from bacteria, and Escherichia coli is one of the most common causes. 2 A survey carried out in Ethiopia from 2010 to 2011 indicated that most of the infected yolk sacs in 3- to 5-day-old chicks were caused by E. coli strains, followed by Staphylococcus aureus. 3 Currently, antibiotics play an important role in the prevention and treatment of infectious diseases in livestock. 4 However, overuse of antibiotics in feed as growth promoters, as well as in disease prevention and treatment, poses a huge risk of antibiotic resistance and antibiotic residues in food. 5 In order to reduce the amount of antibiotics used in livestock, great attention has been paid to replacing antibiotics with natural herbs. Recent medical studies have shown that local medicinal plants with antibacterial activities are very rich and effective in treating many different types of infection. Garlic (Allium sativum) and chamber bitter (Phyllanthus urinaria) are known to be among the best natural antibiotics. 6,7 Allium sativum, or garlic, a popular component in traditional Asian medicine for common cold treatment, has several therapeutic characteristics based on its antibacterial, antiviral, and antifungal activity. 8 The three main bioactive ingredients of Allium sativum are alliin, diallyl sulfide and ajoene. 9 Alliin is the strongest and most important active ingredient of garlic. The alliinase enzyme released when garlic cloves are cut or crushed rapidly turns alliin into allicin (C6H10OS2). Allicin has a very strong bactericidal effect against Staphylococcus, typhoid, paratyphoid, dysentery, cholera, and diphtheria bacilli. 10,11 However, allicin is unstable and highly sensitive to temperature, and is thus easily decomposed into other, more stable organosulfur compounds. In addition, garlic also has antioxidant effects by scavenging free radicals and inhibiting the oxidation of low-density lipoproteins. 12 Additionally, Phyllanthus urinaria, a frequently used herbal medication, has numerous biological properties, including antiviral, antibacterial, and anti-tumor properties. 13,14 Lignans, tannins, flavonoids, phenolics, terpenoids, and other secondary metabolites are abundant in P. urinaria. Eldeen et al. reported that P. urinaria inhibited growth of Pseudomonas stutzeri (Gram-negative) with MIC values of 177 mg mL−1. Besides, P. urinaria had a high total phenolic content of 205 mg GAE per g, which is correlated with DPPH radical scavenging activity. 15 The presence of metabolites such as phyllanthin, phyltetralin, rutin, quercetin, trimethyl-3,4-dehydrochebulate, and methyl brevifolincarboxylate is the underlying mechanism of the antibacterial effect of P. urinaria extracts. These chemicals may bind with proteins in the microbial cell membrane, forming persistent water-soluble complexes that cause microbial cell death. 16 Although the in vitro and in vivo bactericidal effects of herbs in disease prevention have been identified, the practical uses of herbal antibiotics are limited because of their in vivo instability, aqueous insolubility, and poor absorbability in the gut. Herbal antibiotics are also less potent than conventional antibiotics, so a higher dose and longer treatment time are required, which affects livestock productivity. Therefore, a suitable method is necessary to improve the therapeutic effect of herbal antibiotics.
One of the effective ways to enhance the antibacterial ability of herbal antibiotics is the application of nanotechnology. Numerous studies have shown that the important active ingredients in plants are enhanced in efficacy and exhibit many advantages in their biological applications when formulated using nanotechnology. 17 Nanoformulation of herbal antibiotics can enhance solubility, stability, bioavailability and pharmacological activity, as well as improve tissue macrophage distribution. 18 Moreover, the nanoform of herbal antibiotics can also reduce their toxicity and decomposition by the physiological environment, and the drug release process can also be controlled. 19,20 In addition, silver nanoparticles (AgNPs) have been shown to have excellent antimicrobial properties, and have therefore long been used as disinfectants in food and water containers and as disease treatment agents. 21,22 The antibacterial properties of AgNPs are mainly attributed to the ability of Ag+ ions to inhibit bacterial cell respiration and prevent bacterial DNA replication. 23,24 To synthesize AgNPs, increasingly efficient green synthesis methods have been developed in order to overcome the constraints of physical and chemical procedures. The biological synthesis of AgNPs using plants has been proven to be cost-efficient and eco-friendly and is a valuable alternative for large-scale production. [25][26][27] Interestingly, green synthesized AgNPs express stronger antibacterial potency than chemically produced ones. 28 Yang et al. also demonstrated that the antimicrobial activity of AgNPs combined with Lonicera japonica Thunb extract was significantly increased when compared with AgNPs or herbal extract alone. 29 Herein, the two herbal extracts of Allium sativum and Phyllanthus urinaria were used to synthesize AgNPs. The AgNPs were then combined with each emulsified herbal extract and encapsulated by an alginate/carboxymethyl cellulose carrier to form the Allium@AgNPs and Phyllanthus@AgNPs systems. Alginate and carboxymethyl cellulose used in drug delivery are usually formed into hydrogels by Ca2+ cross-linking. However, this formulation possesses the notable drawbacks of instability and rapid dissolution. 30,31 In the present study, the alginate/carboxymethyl cellulose carrier was fabricated via a chemical NH-CO bond, thus facilitating the formation of a polymeric micelle-based nanoformulation. The physicochemical characteristics of the nanoformulations were determined to evaluate the effect of the different herbal extracts on the Ag+ reduction efficiency. Thereafter, based on the results of the antimicrobial activity test against E. coli, a potential antibiotic agent could be recommended for further investigation of its applicability in battling infection at the in vivo scale.
Preparation of Allium sativum extracts
Garlic cloves were separated and peeled for the preparation of garlic extracts with different solvents. 0.5 kg of fresh garlic bulbs was peeled, pounded and pressed to obtain a raw water extract, then extracted in 1 L of ethanol at 60 °C for 5 hours. The extraction was performed 3 times and the ethanolic extract was obtained after filtration through a cellulose membrane (pore diameter 0.22 µm). The residue of the ethanolic extraction was further extracted with 1 L of water for 5 hours at 60 °C. The process was repeated 3 times and the aqueous extract was collected after filtration through a cellulose membrane (pore diameter 0.22 µm). Both aqueous and ethanolic extracts were stored at room temperature until further use.
Preparation of Phyllanthus urinaria extracts
The whole chamber bitter (Phyllanthus urinaria, PU) plant was washed, chopped, and dried at 40-50 °C. 300 grams of pulverized chamber bitter were placed in a cold-soaked flask with 1 L of ethanol and sonicated for 5 hours at 60 °C. The subsequent extraction steps were performed in the same way as for garlic and the final products were obtained as two fractions: an ethanolic extract and an aqueous extract.

Fig. 1 shows the synthesis process of the herbal antibiotic@AgNPs. First, APTES (10% acid alcohol solution) was added to sodium alginate (1 mg mL−1) and stirred on a magnetic stirrer at 400 rpm for 2 h (80 °C). The APTES reacts with the alginate's hydroxyl groups, promoting silanol group formation together with some amine groups in the alginate molecules; this product is the so-called activated alginate. Then, CMC was activated in the presence of EDC and NHS: a premixed solution of NHS (25 mg), EDC (100 mg), and carboxymethyl cellulose (CMC, 150 mg) in 20 mL of deionized water was added. The pH of the solution was then adjusted to 8.5 with triethylamine. The mixture was stirred for 4 h in a closed flask at 55 °C and the activated CMC was obtained. Finally, activated alginate (with -NH2 groups in the molecules) was allowed to react with activated CMC for 24 h at room temperature to obtain the alginate/carboxymethyl cellulose carrier.
Synthesis of the herbal antibiotics@AgNPs
Encapsulation of the active compounds in the ethanolic plant extracts was done by an emulsion solvent evaporation method. The ethanolic extract was added dropwise into the alginate/CMC solution at a ratio of 1 : 5 (v/v) under vigorous stirring at 350 rpm. Then, 1.0 mL of 0.1 M silver nitrate (AgNO3) was added slowly to the above mixture, followed by the aqueous extract at a 1 : 5 (v/v) ratio of AgNO3 : aqueous extract, and stirred continuously for 1 h. The solution was stirred overnight at room temperature, and then the ethanol was evaporated under vacuum. The product was centrifuged at 10 000 rpm for 10 min to remove unloaded compounds. Finally, poloxamer 407 solutions (at various w/v ratios from 0 to 10%), predispersed overnight at 0-5 °C, were slowly added to the obtained nanosystem. Thus, we obtained two series of nanosystems based on the alginate/CMC carrier, one with Allium sativum extract and AgNPs (referred to as Allium@AgNPs) and the other with Phyllanthus urinaria extract and AgNPs (referred to as Phyllanthus@AgNPs). A similar procedure, except for the use of AgNO3, was utilized to prepare the non-Ag nanosystems Allium@NPs and Phyllanthus@NPs for comparison. The control AgNPs were synthesized under the same conditions but using sodium borohydride (NaBH4) as the reducing agent instead of the herbal aqueous extract.
Physico-chemical characterization

UV-vis spectroscopic analysis. The reduction of Ag+ ions and the capping of the resulting silver nanoparticles were monitored by UV-vis spectra recorded on a Cary 5000 UV-vis-NIR double beam spectrophotometer (Agilent Technologies, Santa Clara, USA).
X-ray diffraction (XRD). The crystalline structures of the synthesized systems were analyzed with a Bruker D8-Advance instrument operating at 35 kV and 30 mA in reflection mode with the Cu Kα line of 1.5406 Å. Data were collected over a 2θ range of 30° to 70° with a step size of 0.02° and a time per step of 4 s at room temperature. The detailed structural characterization was analyzed with the Rietveld method.
Fourier transform infrared (FTIR) spectroscopy. The molecular structure of materials was characterized by Fourier transform infrared spectroscopy (FTIR, SHIMADZU spectrophotometer) using KBr pellets in the wave number region of 400-4000 cm −1 .
Field emission scanning electron microscopy. The morphology of the samples was analyzed using a field emission scanning electron microscope (FESEM, Hitachi S-4800). Samples were deposited on a Si wafer, dried, and inserted in the instrument without further coating. The measurement was performed at energies between 5 and 10 keV. Energy-dispersive X-ray mapping of the herbal antibiotics@AgNPs was obtained using the FESEM instrument equipped with an energy-dispersive X-ray spectroscopy (EDXS) attachment.
Transmission electron microscopy (TEM). The morphology and dimensions of the materials were also analyzed in a JEOL JEM-1010 system operating at 120 kV.
Dynamic light scattering (DLS). The size distribution of the hydrodynamic diameter of the herbal antibiotics@AgNPs and the stability of the suspension were examined using a Zetasizer (Zetasizer Nano, Malvern Instruments, UK). The particles were diluted in water at 10 mg L−1 and ultrasonicated (Sonics Vibra Cell, 8 kJ, power 70%, pulse on/off 1 s/1 s). Each sample was measured 3 times.
Thermogravimetric analysis (TGA). The thermal stability of the nanoparticles was determined by thermogravimetric analysis, for which the nanoparticles in powder form were analyzed on a Discovery TGA (TA Instruments, New Castle, DE, USA). For the TGA analysis, the nanoparticles were placed in an alumina pan and heated from 25 to 800 °C at a ramping rate of 10 °C min−1.
Quantification of plant extracts in the nanosystems
Allicin and phyllanthin are the standard substances used to quantify the encapsulation of Allium sativum extract and Phyllanthus urinaria extract in the nanosystems, respectively. The amounts of allicin and phyllanthin in the nanosystems were determined by the High Performance Liquid Chromatography/Mass Spectrometry (HPLC/MS) method on an HPLC-MS SCIEX X500R QTOF instrument. Preliminary analysis of the allicin and phyllanthin standards using a UV detector (Thermo Scientific, UK) with a spectral range of 200-600 nm was performed under the following chromatographic conditions: Agilent ZORBAX Eclipse Plus 95 Å C18 column, column size 4.6 × 100 mm, packed particle diameter 5 µm; mobile phase of methanol and distilled water with a gradient of 1-5 minutes (100% H2O), 5-10 minutes (0-50% methanol), 10-12 minutes (70% methanol), held at 70% until the end of 15 minutes; flow rate of 0.5 mL min−1; sample injection volume of 10 µL; DAD UV detection at wavelengths of 254 nm and 365 nm. MS conditions were ESI-HRMS positive; voltage 5000 V; temperature 450 °C; collision energy 10 eV; ion source gas: 50 psi.
The encapsulation efficiency of the nanosystem was calculated using eqn (1).
Encapsulation efficiency (%) = (W_loaded extract / W_total extract used) × 100 (1)

where W_loaded extract is the weight of extract loaded on the nanosystem and W_total extract used is the initial weight of the fed extract. The trials were repeated three times, and average values were obtained.
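To make the arithmetic of eqn (1) concrete, the short sketch below computes the encapsulation efficiency in Python; the weights used are hypothetical, not values from this study.

```python
def encapsulation_efficiency(loaded_extract_mg: float, fed_extract_mg: float) -> float:
    """Eqn (1): EE% = (weight of loaded extract / initial weight of fed extract) x 100."""
    return loaded_extract_mg / fed_extract_mg * 100.0

# Hypothetical example: 8.8 mg of extract recovered on the nanosystem
# out of 10.0 mg initially fed gives EE = 88.0%.
print(f"EE = {encapsulation_efficiency(8.8, 10.0):.2f}%")
```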
In vitro release study of the extracts from the nanocarriers

The pH-dependent release behavior was examined at four distinct pH levels. The herbal antibiotics@AgNPs were inserted into a dialysis tube with a Molecular Weight Cut Off (MWCO) of 8 kDa. To mimic a neutral pH environment, 2 dialysis tubes containing Allium@AgNPs and Phyllanthus@AgNPs were dipped separately into 500 mL of 0.1 M phosphate buffer solution (pH 7.4). Similarly, for the acidic environment, 500 mL of 0.1 M acetate buffer solution (pH 5.5) was used; likewise, 500 mL of 0.1 M phosphate buffer solution (pH 6.8) and a pH 1.2 buffer solution (reference standard, ∼2.0 g per L sodium chloride and ∼2.917 g per L HCl) were used, the latter to simulate gastric fluid. The dialysis tubes were placed into beakers containing these solutions to imitate the different pH environments. The entire setup was stirred at 37 °C. After each time interval, 3 mL of the dialysate was withdrawn from each of the beakers to determine the release profile of the materials. The withdrawn amount of dialysate was replaced with an equal quantity of fresh buffer solution to maintain sink conditions. Measurements were performed in triplicate.
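The paper does not spell out how the withdrawn 3 mL samples were accounted for in the cumulative-release calculation; the following is a minimal sketch of the standard correction for a sample-and-replace protocol like the one above, with hypothetical concentrations and drug load.

```python
def cumulative_release_percent(concentrations_ug_ml, dose_ug,
                               v_total_ml=500.0, v_sample_ml=3.0):
    """Cumulative % released, adding back the drug removed with each
    3 mL sample that was replaced by fresh buffer."""
    released, removed = [], 0.0  # removed = ug of drug withdrawn so far
    for c in concentrations_ug_ml:
        amount = c * v_total_ml + removed
        released.append(100.0 * amount / dose_ug)
        removed += c * v_sample_ml
    return released

# Hypothetical dialysate readings (ug/mL) for a 6250 ug drug load:
print(cumulative_release_percent([1.0, 2.5, 4.0], dose_ug=6250.0))
```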
Minimal inhibition concentration (MIC) of nanosystems. The MIC test was carried out based on the method suggested by Netala VR et al. with minor modifications. 32 Two-fold serial dilutions of the four nanosystems (Allium@NPs, Phyllanthus@NPs, Allium@AgNPs, and Phyllanthus@AgNPs) in a sterile flat-bottomed 96-well plate were prepared by the following steps: first, 100 µL of sterile LB broth (Sigma-Aldrich, UK) was added to all wells. Then, 100 µL of each nanosystem was mixed into its corresponding row in the 1st column. Within a row, 100 µL from the well in the 1st column was transferred to the well in the 2nd column, and the dilution was continued until column 10; 100 µL was then discarded from each well in the last column. This formed a dilution range from 1/2 to 1/1024. Doxycycline antibiotics and AgNPs solution were used as positive controls. The initial concentrations of the tested nanosystems are presented in Table 1.
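The two-fold dilution series described above can be written down directly; this sketch assumes a hypothetical 16 mg mL−1 starting stock just to illustrate the 1/2 to 1/1024 range.

```python
def mic_dilution_series(stock_mg_ml: float, n_wells: int = 10):
    """Two-fold serial dilution across a 96-well row: transferring 100 uL of
    the previous well into 100 uL of broth halves the concentration each
    step, giving dilution factors 1/2, 1/4, ..., 1/1024 over 10 wells."""
    return [stock_mg_ml / 2 ** (i + 1) for i in range(n_wells)]

# Hypothetical 16 mg/mL nanosystem stock:
for well, conc in enumerate(mic_dilution_series(16.0), start=1):
    print(f"well {well}: {conc:.4f} mg/mL")
```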
Cytotoxicity of the nanoantibiotic systems against Vero cells. The cytotoxic effects of the herbal antibiotics@AgNPs were evaluated following previously described methods. The Vero cell line (from African Green monkey (Cercopithecus aethiops) kidney, ATCC-CCL-81) was incubated in DMEM (Dulbecco's Modified Eagle Medium) at 37 °C in 5% CO2 for 24 h, and seeded into 96-well microplates. The herbal antibiotics@AgNPs were diluted two-fold and added to the plate. The concentrations of the tested samples were 16 and 8 mg mL−1. In the control group (untreated), 100 µL of DMEM was added to the wells instead of the herbal nanoformulations. The plate was subsequently incubated at 37 °C in 5% CO2. The cytotoxicity of the samples against Vero cells was estimated based on the difference in cell phenotypes between the treated and untreated groups after 48 h of incubation. The cells were then stained with MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) agent for 4 h and solubilized to estimate the cell viability. The absorbance was measured at 570 nm (Multiskan SkyHigh Microplate Spectrophotometer, Thermo Fisher Scientific, US). The cell viability was calculated as suggested by Mosmann 1983: 33

Cell viability (%) = (OD_sample − OD_blank) / OD_control × 100

Statistical analysis. Each experiment was performed in triplicate and the data are expressed as the mean ± SD. Statistically significant differences were identified at p < 0.05 via Student's t-test. Statistical analysis was performed using the SigmaPlot 14.0 software.
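A minimal sketch of the viability calculation and the mean ± SD reporting described above, assuming the conventional reading of Mosmann's formula (the fraction multiplied by 100); the absorbance values are hypothetical.

```python
import statistics

def cell_viability_percent(od_sample: float, od_blank: float, od_control: float) -> float:
    """Mosmann (1983): viability (%) = (OD_sample - OD_blank) / OD_control x 100."""
    return (od_sample - od_blank) / od_control * 100.0

# Hypothetical triplicate absorbances at 570 nm:
od_samples, od_blank, od_control = [0.82, 0.84, 0.83], 0.05, 0.80
vals = [cell_viability_percent(od, od_blank, od_control) for od in od_samples]
print(f"viability = {statistics.mean(vals):.1f} +/- {statistics.stdev(vals):.1f}%")
```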
Results and discussion
Synthesis of herbal antibiotics and AgNPs-encapsulated polymeric nanoparticles (herbal antibiotics@AgNPs)

In this study, in order to bind the activated CMC to the alginate chains, the alginate polymer was treated with APTES (Fig. 1). In the presence of water, APTES hydrolyzes, and the silanol groups condense with the hydroxyl groups of the alginate polymer to create C-O-Si covalent bonds. 34 This method provided the alginate polymer with the amino groups essential to create a CO-NH bond when reacted with the activated carboxylic groups of the CMC polymer. The final chemical binding system was also confirmed by FTIR spectra (Fig. S1 †). Typical oscillations of functional groups in the alginate and CMC components and Si-O symmetric stretching vibrations also appeared in the FTIR spectrum of the alginate/CMC carrier with some shifts (Table S1 †). The herbal antibiotics@AgNPs were synthesized by the emulsion solvent evaporation method. Poloxamer 407 was used in the nanoformulation as a surfactant to optimize the physicochemical properties of the nanosystems. Table 2 summarizes the synthesis parameters and the results obtained for the Allium@AgNPs and Phyllanthus@AgNPs at emulsifier (poloxamer 407) concentrations of 0%, 1%, 3%, 5%, and 10%, respectively. As outlined in Table 2, an increase in poloxamer 407 concentration produces a reduction of both the particle size and the PDI of the herbal antibiotics@AgNPs. The difference in particle size is comprehensible since, as explained in the literature, more surfactant molecules can cover the larger total surface of smaller nanoparticles. 35 The formed nanoparticles had a mean hydrodynamic size of about 160-300 nm. Nanoparticles smaller than 300 nm are preferred for absorption efficacy via oral administration. 36,37 The decreased PDI might be owing to an interaction between the encapsulating polymer and the drug. Still, it could also be due to the decreased scattering intensity found at smaller particle sizes. 38 Larger particles frequently demonstrate greater scattering intensity. This alters the ratio of the scatter signal of the main particle fraction to the scatter signal of small-particulate impurities, which has an effect on the PDI. 39 With the addition of poloxamer 407, the PDI values are below 0.3, indicating the uniformity and narrow size distribution of the herbal antibiotics@AgNPs. 40 Poloxamer 407 has an amphiphilic nature with both association and adsorption characteristics, and it can improve chemical solubilization and stability due to its remarkable physiological features and low toxicity. In theory, non-ionic surfactants like poloxamer 407 have no effect on the zeta potential. 41 However, in this study, the absolute value of the zeta potential of the herbal antibiotics@AgNPs increased when the poloxamer concentration changed from 1% to 10% w/v, with the least negative zeta potential being −33.1 mV. As the concentration of poloxamer 407 increased, a dense surfactant film was formed at the interface between the nanoparticles and water, increasing the formulation's zeta potential (more negative). 42 Furthermore, because poloxamer 407 is an amphoteric surfactant, it can give further emulsion stability via electrostatic repulsion. 41 Although there was a slight decrease in the zeta potential of Allium@AgNPs at 10% w/v of poloxamer 407, the zeta potential of −38.8 mV indicates the stability of the nanoparticles. A higher zeta potential reduces aggregation by electrostatic repulsion between similarly charged particles, imparting stability to the nanoparticle dispersion. 43
The encapsulation efficiency of the herbal antibiotics@AgNPs was calculated based on the content of allicin and phyllanthin determined by HPLC-MS analysis. Allicin and phyllanthin accounted for 0.025% and 0.2% of the Allium sativum and Phyllanthus urinaria extracts, respectively (Fig. S2 and S3 †). All herbal@AgNPs formulations prepared with poloxamer 407 showed significantly higher entrapment efficiency than those without poloxamer 407. The results also indicate that the poloxamer 407 concentration had a significant effect on entrapment efficiency, with increasing poloxamer concentration also increasing entrapment efficiency. With 10% poloxamer 407, the encapsulation efficiency was 88.24% and 85.17% for allicin and phyllanthin, respectively. This might be due to stronger binding contacts between the drug and the polymeric carrier. The alginate/CMC carrier entrapped the hydrophobic allicin and phyllanthin at the interface while poloxamer 407 stabilized the nanoparticles by diffusing out the water molecules, resulting in the formation of the polymer-rich coacervate during the nanoemulsion process. This is in agreement with other reports. 44,45 Overall, the nanosystem with 10% poloxamer 407 was found to have the smallest particle size and PDI, the most negative zeta potential, and the highest entrapment efficiency, and would thus be the most stable formulation. Therefore, this nanoformulation was chosen for the selected further studies.
Physico-chemical characterization of herbal antibiotics@AgNPs
The formation of silver nanoparticles in the herbal antibiotics@AgNPs systems was elucidated by FESEM and UV-vis spectra. As presented in Fig. 2a, in the wavelength range of 300-800 nm, the AgNPs showed a surface plasmon resonance (SPR) peak at 401 nm. According to previous studies, 25,46,47 green-synthesized silver nanoparticles have a plasmon resonance absorption peak between 400 nm and 460 nm. The SPR peaks of Allium@AgNPs and Phyllanthus@AgNPs were red shifted to different degrees compared with that of the AgNPs, to 456 nm and 412 nm, respectively. According to Mie's theory, these shifts were caused by the local dielectric effect. 48,49 Furthermore, the existence of an SPR band in the UV-vis spectra of the herbal antibiotics@AgNPs nanostructures was indicative of silver crystal formation on the nanosystems, which was completely consistent with the TEM image data (Fig. 3c & d).
XRD patterns of the synthesized herbal antibiotics@AgNPs are shown in Fig. 2b. Diffraction peaks (Bragg reflections) at 38°, 44° and 64°, corresponding to the crystal planes (111), (200) and (220), respectively, were observed in the XRD patterns of both Allium@AgNPs and Phyllanthus@AgNPs, revealing the highly crystalline nature of pure AgNPs with a dominant (111) phase. 50 The particle sizes of the AgNPs calculated using the Scherrer equation 51 were approximately 4.98 nm and 3.25 nm for Allium@AgNPs and Phyllanthus@AgNPs, respectively. On the other hand, there was no sign of Ag2S or Ag2O diffraction peaks. According to the XRD patterns, it appears that only metallic silver was formed in both Allium@AgNPs and Phyllanthus@AgNPs. Similar results were established by Wei and Sun for the biosynthesis of AgNPs using Chinese herbal medicine and tea leaf extract, respectively. 25,52

Fig. 2c shows the FTIR spectra of the alginate/CMC carrier and the two herbal antibiotics@AgNPs in comparison with those of the two extracts (Allium sativum and Phyllanthus urinaria). The alginate/CMC carrier exhibited a broad band around 3303 cm−1 for the O-H stretching vibrations, and a medium peak at 2909 cm−1 belonging to the stretching vibrations of C-H (sp3). Two bands at 1588 cm−1 and 600 cm−1 were assigned to N-H bending. Two bands were observed at 1413 cm−1 and 1323 cm−1 for the -COO− symmetric stretching vibration and -C-O stretching, respectively, while the antisymmetric C-O-C stretch was observed at 1080 cm−1. Also, the Si-O symmetric stretching mode was observed at 659 cm−1. In the FTIR spectrum of the Phyllanthus urinaria ethanolic extract, a reasonably wide band at wavenumber 3351 cm−1 indicates the presence of stretching vibrations of the O-H group, and the peak at 1035 cm−1 was due to the C-O-C stretching vibration. The broad absorption peaks at 1712 cm−1 and 1611 cm−1 corresponded to C=O asymmetric stretching vibration, and the narrow absorption peak at 1453 cm−1 corresponded to the -COO− symmetric stretching vibration. The appearance of these characteristic bands is consistent with the presence of phyllanthin, phenolic, and flavonoid compounds in the Phyllanthus urinaria extract. 14,16,53 Major peaks of Phyllanthus@AgNPs appeared at 3446, 2923, 1627, 1384 and 1113 cm−1, which have been assigned to O-H stretching, -CH stretching, the asymmetric and symmetric -COO stretching vibrations, and C-O-C stretching, respectively. The absorption peak of Phyllanthus@AgNPs at the wavenumber of 638 cm−1 was caused by the Si-O symmetric stretching vibration. The FTIR analysis indicated the involvement of amides, carboxyl, amino groups, and polyphenols in the Phyllanthus@AgNPs. From this, it was inferred that the organic compounds in the ethanolic extract were successfully encapsulated in the Phyllanthus@AgNPs systems by the alginate/CMC carrier.
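As a numerical check on the Scherrer estimates quoted above, here is a minimal sketch; the shape factor K = 0.9 and the peak width are assumptions for illustration, not values reported in the paper.

```python
import math

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Scherrer equation: D = K * lambda / (beta * cos(theta)), with beta the
    peak FWHM in radians and theta half the diffraction angle 2-theta."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical FWHM of 1.7 degrees for the (111) peak at 2-theta = 38 degrees
# gives a crystallite size close to the ~5 nm reported for Allium@AgNPs:
print(f"D = {scherrer_size_nm(1.7, 38.0):.2f} nm")
```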
The FTIR spectrum of the garlic ethanolic extract showed the presence of functional groups such as hydroxyl, carbonyl, carboxylic and organosulfur compounds (Fig. 2c). The broad peak at 3403 cm−1 is due to the O-H stretching of a hydroxyl group. The peak at 1635 cm−1 corresponds to C=O stretching of peptide linkages or C=O stretching of carbonyl and carboxylic groups, while the peak at 1408 cm−1 indicates the O-H bend of carboxylic acids. These characteristic peaks showed the existence of polyhydroxy compounds such as flavonoids, tannins, saponins and glycosides in the garlic ethanolic extract. 54,55 Another peak at 1028 cm−1 was due to the S=O group, indicating the presence of organosulfur compounds including alliin, allicin, and diallyl disulfide. 56 A peak at 933 cm−1 belongs to the C-S single bond. For the spectrum of Allium@AgNPs, additional bands compared with the garlic ethanolic extract and the alginate/CMC carrier were detected at 3248 cm−1 (-OH) and 2920 cm−1 (C-H).

The morphology and particle size of the herbal antibiotics@AgNPs are shown in the FESEM and TEM images. The TEM images (Fig. 3b and d) showed that most of the herbal antibiotics@AgNPs were mono-dispersed with spherical shapes, in agreement with the FESEM images (Fig. 3a and c). The Allium@AgNPs were distributed separately, while some Phyllanthus@AgNPs were trapped in a blurred membrane and were observed as clusters. The particle sizes determined from TEM analysis were 13.5 and 20.45 nm for Allium@AgNPs and Phyllanthus@AgNPs, respectively. Meanwhile, the silver nanoparticles (the black particles present in the polymeric nanocarriers) integrated in the Phyllanthus@AgNPs system are 2-3 nm in size, smaller than those in the Allium@AgNPs system (4-5 nm).
To corroborate the chemical analysis, an EDX analysis of the samples was also carried out. The EDX spectra showed the peaks characteristic of elemental silver at 3 keV and 2.7 keV, which confirms the formation of the AgNPs synthesized with the aqueous extracts of Allium sativum and Phyllanthus urinaria, respectively. Next, we performed elemental mapping of a selected region in the SEM scanned image of the nanoparticles. The mapping results showed a clear map of silver elements in the Allium@AgNPs and Phyllanthus@AgNPs scanned images. The EDXS analysis confirms the presence of carbon (C), oxygen (O), sulfur (S), silicon (Si), and silver (Ag) in Allium@AgNPs, as shown in Fig. 4a. Similarly, Fig. 4b depicts the presence of carbon (C), oxygen (O), sodium (Na), silicon (Si), and silver (Ag) in Phyllanthus@AgNPs. The peaks for C and O reflect the organic ingredients of the herbal antibiotics@AgNPs, such as the polymers alginate and carboxymethyl cellulose and the bioactive compounds in the Allium sativum or Phyllanthus urinaria extracts.
To further confirm the presence of organic compounds in the nanoparticles, the thermal behavior of the obtained herbal antibiotics@AgNPs was investigated (Fig. S4 †). The evaporation of moisture adsorbed on the surface of the herbal antibiotics@AgNPs might be attributed to the first weight loss step of the heating process, which occurred between 80 and 120 °C. The next steps in the TGA curve could be assigned to the degradation of organic compounds present in the nanomaterials and the carbonization of the polymeric carrier (Table S2 †). The TGA results clearly showed that the bioactive organic compounds from the herbal extracts were encapsulated in the produced nanoparticles. 57,58

In vitro drug release of herbal antibiotics@AgNPs

A drug release study of allicin and phyllanthin from the herbal antibiotics@AgNPs was carried out under four different conditions at 37 °C. Different buffer solutions were employed to simulate the gastric medium (pH 1.2), the macrophage environment (pH 5.5), the intestinal condition (pH 6.8), and the physiological condition (pH 7.4) (Fig. 5).
At pH 1.2, the amount of allicin released was very low; only 12.2% and 26.67% of the allicin was released after 8 hours and 60 hours, respectively (Fig. 5a). In the same conditions, the amount of phyllanthin released was 11.33% after 8 hours and 24.8% after 96 hours of testing. The release of allicin and phyllanthin was very low due to the poor aqueous solubility of these substances at strongly acidic pH. 59,60 Therefore, the drug-encapsulated nanoformulations were quite stable and very limited drug release occurred in this acidic environment.
When the pH was raised to 5.5, there was a significant increase in drug release at each time point. From Allium@AgNPs, 58.5% of the allicin was released after 24 hours, and after 60 hours the amount of allicin released reached 71.45% (Fig. 5b). Besides, after 24 hours, 46.8% of the phyllanthin was released from the Phyllanthus@AgNPs nanoparticles, and after 60 hours, 60.5% of the phyllanthin was released. While the extracellular pH in the blood remains stable at 7.35-7.45, inflammatory situations are associated with acidification, with pH levels ranging from 5.5 to 7.0. 61 Acidic microenvironments are also described in the pathological environment of inflammatory exudates as well as in the intracellular environment of infected macrophages. 62 With the characteristic of gradual release at pH 5.5, both Allium@AgNPs and Phyllanthus@AgNPs have the potential to maintain the effect of the drug at the pathological site.
With a further increase of the pH to 6.8 (the characteristic pH of the intestinal environment, the first part of the small intestine of the digestive system, where most of the absorption of the drug into the blood occurs), the nanoparticles showed an even higher rate of drug release. The amount of allicin released in the first 8 hours and after 60 hours was 44.5% and 81.35%, respectively (Fig. 5c). Under the same conditions, the released amount of phyllanthin was 29.3% in the first 8 hours and 74.27% after 60 hours.
Under physiological conditions (pH 7.4), both drugs were almost completely released from the nanosystems after 96 hours, with 93.76% and 97.55% released for Allium@AgNPs and Phyllanthus@AgNPs, respectively. The release profiles of the Allium@AgNPs and Phyllanthus@AgNPs nanosystems showed two-stage release, with an initial burst release within the first 12 hours followed by a slower and continuous release over the next 84 hours (Fig. 5d). Moreover, in all four pH conditions, the released amount of allicin was higher than that of phyllanthin (p < 0.001). This can be attributed to the fact that allicin has better water solubility than phyllanthin, so it diffuses more easily through the alginate/carboxymethyl cellulose carrier. 60,63 It is noted that the allicin and phyllanthin release profiles of the herbal antibiotics@AgNPs are pH dependent because the carboxymethyl cellulose was complexed with sodium alginate to form a polyelectrolyte that provided a pH-sensitive shell layer. 64,65 The substances were released faster at higher pH than at near-neutral and acidic pH (pH 7.4 > pH 6.8 > pH 5.5 > pH 1.2). In a low pH environment, both alginate and CMC are in their acid forms, which are slightly hydrophobic, thus preventing the interaction between water and the active ingredient. In a neutral environment, they are both in their base forms, and the drugs were released more easily due to ionization. 64,66 As expected, drug release was prolonged at pH 5.5, whereas at pH 7.4 drug release was rapid in the first 24 h. Efficient allicin release was achieved at pH 7.4 (90.3%), while only 52% release was observed at pH 5.5 within 24 h. The release results were similar for phyllanthin. Thus, both types of nanosystems are suitable candidates to be used as nano drug delivery systems for oral administration. 67,68

Estimating the MIC of nanoantibiotic systems against E. coli

Table 3 shows that all tested E. coli strains were resistant to conventional doxycycline and to Phyllanthus@NPs at all tested concentrations. However, the integration of AgNPs improved the antibacterial activity of Phyllanthus@AgNPs, which had an MIC value of 4 mg mL−1 against E. coli V19.11.3. On the other hand, Allium@NPs alone could inhibit the growth of V19.11.3, V19.3.6, and V19.3.14 at a concentration of 2 mg mL−1. Compared with the Allium@AgNPs, the antibacterial activity of the AgNPs solution was lower, with MIC values of 8 mg mL−1 for S19.3.6 and 4 mg mL−1 for S19.3.14. The reduction of Ag+ ions to AgNPs by Allium sativum extract to form the Allium@AgNPs further enhanced the bacteriostatic activity of Allium@AgNPs against other E. coli strains, such as V19.3.4 and V19.3.11, which were previously resistant to Allium@NPs. These results are in agreement with previous studies, in which a synergistic effect of AgNPs and herbal extracts against various pathogenic bacteria was proved. [69][70][71][72][73] In terms of particle size, the small size of the AgNPs (4-5 nm) as well as the small size of the Allium@AgNPs (13.5 nm) might contribute to the high antibacterial activity of the nanosystem. 74-76 Patra and Baek et al. suggested that the AgNPs could strongly bind to the bacterial cell wall and then disrupt its structure. 77 Thus, other antibacterial agents such as herbal extracts could easily penetrate the bacterial cells and exert their effects. Besides, Cheow et al. suggested that the lower the initial antibiotic exposure, the higher the survival chance of biofilm cells. 78
In this study, the obtained results demonstrated a faster release rate of allicin from the nanosystem compared with phyllanthin at all tested pH conditions. Thus, apart from the AgNPs incorporation, the burst release of allicin might also play an important role in increasing the overall antibacterial activity of the Allium@AgNPs compared with the Phyllanthus@AgNPs.
Moreover, to confirm the safety of the antibiotics@AgNPs, a cytotoxicity test using Vero cell lines was conducted. Fig. 6 shows the difference in cell appearance upon exposure to a high antibiotics@AgNPs concentration of 16 mg mL−1 after 48 h of incubation. These cells tended to look rounded and started to detach from the microplate surface. However, at an antibiotics@AgNPs concentration of 8 mg mL−1, the antibiotic-treated cells had an almost similar phenotype to the negative control. The cell viability in the 8 mg mL−1 antibiotic treatments was in the range of 98.3 ± 0.6 to 99.2 ± 0.8%, which was not significantly different from the value of 97.5 ± 1.3% in the negative control (Fig. 7). Therefore, the obtained results indicated that both the Allium@AgNPs and Phyllanthus@AgNPs antibiotics could effectively inhibit the growth of pathogenic bacteria, but did not cause side effects to normal cells at the required treatment dose in the in vitro experiment.

Table 3 The MIC values of the prepared nano antibiotic systems against E. coli strains (mg mL−1) (n = 3, mean ± SD)
Conclusions
In the present study, the herbal antibiotics@AgNPs from Allium sativum and Phyllanthus urinaria were successfully synthesized and their properties were assessed by different characterization techniques, including UV-vis, XRD, FESEM, TEM, EDX and HPLC-MS. The optimum condition was 10% (w/v) poloxamer 407 for both Allium@AgNPs and Phyllanthus@AgNPs. The drug-nanocarrier formulations exhibited sustained drug release over a prolonged period of time, especially in an inflammatory environment. The combination of herbal antibiotics and silver nanoparticles showed an in vitro antibacterial effect on E. coli. In comparison, the Allium@AgNPs nanosystem has better drug release ability and higher inhibitory activity against E. coli than the Phyllanthus@AgNPs nanosystem. As a result, we conclude that the synthesized Allium@AgNPs have the potential to be employed as an eco-friendly agent with antibacterial properties in biomedical applications. In vivo studies of the Allium@AgNPs formulation described in this study will be conducted in the future.
Author contributions
Phan
Conflicts of interest
There are no conflicts to declare.
"year": 2022,
"sha1": "9d8cebbdc1d71ef7dc56b4b8c83aa1f29bd6cfc3",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/ra/d2ra06847h",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "903926f06ce5681771fc636e96869c198f7abe97",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
IN VIVO STUDY OF PROMISING FORMULATED OCULAR BIO-ADHESIVE INSERTS OF CIPROFLOXACIN HYDROCHLORIDE COMBINATION WITH XANTHAN GUM AND CARBOPOL
Objective: To study the in vivo behaviour and irritant properties of different ocular bio-adhesive inserts of ciprofloxacin hydrochloride (CFX-HCl) prepared using a spray-dried (SD2) matrix system consisting of xanthan gum, carbopol, and propylene glycol. Methods: CFX-HCl in aqueous humor samples was analysed using an HPLC method, applying a mobile phase of 0.01 M sodium acetate: methanol (70:30 v/v) with pH around 3.00 and using a Purosphere Star 100 RP-18 column (125 mm × 4.6 mm × 5 μm). The in vivo behaviour and irritant properties of the ocular inserts were studied in rabbits. Twelve rabbits were used for the study and were divided into four groups. After placing the insert in the eye, 100 μl of the aqueous humor was withdrawn at different time intervals in order to measure the concentration profile of CFX-HCl. The tested formulae R, F1, F2, and F3 all contained 6.25 mg of CFX-HCl. Results: The method was well validated with respect to linearity, recovery, and precision: the calibration curve was linear over a concentration range of (2.500–7.826) μg/ml, with an average recovery of 99.76%. The presence of the matrix system enhances the absorption of CFX-HCl and sustains its release for up to four days, thereby increasing its bioavailability. Also, the ocular inserts of F2 and F3 have better biocompatibility compared with R and F1. Conclusion: The analysis method was found to be sensitive, accurate and precise and could be used to assess the in vivo behaviour of CFX-HCl. The ratio of the free drug to the matrix system controlled the rate of drug release.
INTRODUCTION
Ciprofloxacin hydrochloride is 1-cyclopropyl-6-fluoro-4-oxo-7-(piperazin-1-yl)-quinoline-3-carboxylic acid, monohydrochloride, monohydrate, with the chemical structure shown in (fig. 1); it is a commercially available antibiotic used to treat bacterial infections in different parts of the body [1]. For example, it is used for the treatment of infectious types of keratitis and conjunctivitis caused by gram-negative bacteria. It affects bacterial DNA gyrase without affecting mammalian DNA activity [2,3]. CFX-HCl has imperative applications in treating various ocular illnesses, such as corneal ulcers and bacterial conjunctivitis, although the regimen is tedious [4].
Ciprofloxacin hydrochloride (CFX-HCl) has a short elimination half-life; therefore, when it is used for treating eye illnesses, it must be given as 3-4 drops at least three times a day in order to maintain a continuous, sustained level of medication, which gives the eye a massive and unpredictable dose. Unfortunately, as the drug concentration of the eye drop solution increases, more of it is lost through the lacrimal-nasal drainage system, and subsequent absorption of this drained drug may result in undesirable systemic side effects [5].
A basic concept shared by most scientists in ophthalmic research and development is that the therapeutic efficacy of an ophthalmic drug can be greatly improved by prolonging its contact with the corneal tissue and/or conjunctival epithelium [6]. For achieving this purpose, viscosity-enhancing agents, such as methyl cellulose, were added to eye drop preparations; however, they did not yield the constant drug bioavailability originally hoped for, and repeated medication was still required throughout the day [6]. An ocular insert, in the form of a solid sterile device, was developed as a substitute for eye drops and designed to be held to the eye and to deliver drugs; some of its advantages are the low dose of the drug, no preservatives, good nurse and patient compliance and instantaneous removal in case of total inactivity or adverse effects [7,8]. Actually, a controlled drug delivery system has an upper edge over eye drops and ointments, because the drug is delivered at the site of action, less of the drug is required, and a constant drug supply is maintained over a predetermined time; thus patient compliance and the efficacy of ciprofloxacin hydrochloride could be improved by the use of a drug delivery system promoting prolonged release of the drug and thus increasing its application intervals [2,3].
Most ocular treatments call for the topical administration of ophthalmically active drugs to the tissues around the ocular cavity, and it has been reported that controlled release gels composed of cellulose and carbopol derivatives controlled the release of CFX-HCl and extended its microbial activity [9]. Diffusion-controlled polymeric delivery systems are of increasing application in the area of controlled release of pharmaceuticals. These systems are characterised by a drug release rate that depends on its diffusion through an inert membrane barrier or polymer matrix. Usually, this barrier is an insoluble polymer. Generally, two types or subclasses of diffusional systems are recognized: reservoir devices and matrix devices [10].
Spray-dried polymeric delivery systems have also been recommended as a possible way to enhance the low bioavailability displayed by standard ophthalmic vehicles [11,12]. A modified-release ocular insert system of CFX-HCl was previously reported [13]. The system released the drug at a prolonged, constant rate, which resulted in promising therapeutic benefits in treating ocular illnesses such as corneal ulcers and bacterial conjunctivitis. The drug was loaded into a suitable matrix-forming agent by spray drying, followed by direct compression and subsequent film coating. Spray-dried powders were prepared by coupling the model drug CFX-HCl with xanthan gum (XG) and Carbopol C-934 in the presence or absence of propylene glycol (PG). Ionic interaction between CFX-HCl, XG and C934 seemed to be the main interaction that determined the physical compatibility and release properties. Formulae with different ratios of (XG:C934:PG:CFX-HCl) were studied, and it was found that the formula containing 0 mg of free CFX-HCl and 25 mg of the spray-dried matrix system SD2, which is composed of a (1:1:1:1) ratio of (XG:C934:PG:CFX-HCl), showed a profile typical of a controlled delivery system where a useful drug concentration was obtained over a long period of time. The subject of the present work is to study the in vivo behaviour of the formulae that yielded controlled release profiles in the previous study [13].
In vivo HPLC analytical method for the determination of CFX-HCl in aqueous humor
The in vivo samples were prepared by protein precipitation method using acetonitrile as a precipitating agent, and the samples were analysed using a validated HPLC method.
Preparation of aqueous humor standards of CFX-HCl
A known amount of CFX-HCl was dissolved in HPLC water to produce a working standard solution of 20 µg/ml. In order to produce CFX-HCl standards in aqueous humor, different volumes of the stock solution (20 µg/ml) were added to 70 µl of blank aqueous humor in disposable polypropylene microcentrifuge tubes (1.5 ml, Eppendorf) and vortexed for 15 s, and then 70 µl of acetonitrile was added in order to precipitate proteins; the resulting mixture was mixed using the vortex mixer for 15 s. A centrifugation step was applied for 2 min at 11500 rpm in a microcentrifuge (Eppendorf). The aqueous layer was decanted into an HPLC glass vial ready for injection. The obtained standards were in the concentration range of (1.333-8.333) µg/ml. The same procedure was applied for the preparation of the in vivo samples but without the addition of CFX-HCl.
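The spiked-standard concentrations follow from simple dilution arithmetic; the sketch below reproduces the endpoints of the stated range, with the stock volumes being assumptions consistent with that range rather than reported values.

```python
def spiked_concentration(stock_ug_ml: float, v_stock_ul: float,
                         v_humor_ul: float = 70.0, v_acn_ul: float = 70.0) -> float:
    """Final CFX-HCl concentration after adding a stock volume to 70 uL of
    aqueous humor and 70 uL of acetonitrile: C = C_stock * V_stock / V_total."""
    v_total = v_stock_ul + v_humor_ul + v_acn_ul
    return stock_ug_ml * v_stock_ul / v_total

# Stock volumes of 10 and 100 uL of the 20 ug/mL working standard reproduce
# the stated range endpoints of 1.333 and 8.333 ug/mL:
print(spiked_concentration(20.0, 10.0), spiked_concentration(20.0, 100.0))
```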
Validation of the in vivo HPLC analytical method
This method was validated through the evaluation of its performance expressed as analytical parameters such as linearity, precision, accuracy, and specificity.
Chromatographic conditions
The HPLC analysis of CFX-HCl was performed at room temperature using 0.01 M sodium acetate: methanol (70:30 v/v) as the mobile phase. Its pH was adjusted to the vicinity of 3.00 using glacial acetic acid. The mobile phase was always clarified by filtration through a nylon filter paper with a pore size of 0.45 µm and degassed in a sonicator, then pumped at a flow rate of 1 ml/min through a Purosphere Star 100 RP-18 column (125 mm × 4.6 mm × 5 µm). The peak response was monitored at a wavelength of 280 nm. A 40 µl sample was injected into the HPLC system, and the data were acquired using Thermo Quest software.
Linearity
The linearity of the proposed method was established from the calibration curve at several concentration levels (2.50-7.83) µg/ml. The calibration curve was constructed for CFX-HCl in aqueous humor by plotting the response area against the respective concentration, and the coefficients of the linear regression equation and the correlation coefficient (R²) were determined.
Specificity

In order to confirm that there is no interference between the endogenous components of the aqueous humor and the CFX-HCl peak, a blank aqueous humor sample was analysed and compared with aqueous humor containing CFX-HCl. The specificity and selectivity of the analytical method were also investigated by confirming the complete separation and resolution of the CFX-HCl peak.
Precision
Method precision was determined in terms of repeatability (i.e., analysis repeatability). In order to determine the repeatability, six samples of both the standard solution and the aqueous humor standard at a concentration of 4.444 µg/ml CFX-HCl were prepared individually, and each sample was injected into the HPLC system. Repeatability of the areas was determined and expressed as the mean±standard deviation (SDEV) and the percent relative standard deviation (%RSD) calculated from the obtained data as the precision of the method.
Accuracy
The accuracy of the method was determined in terms of percent recovery. Spiked aqueous humor samples were prepared and extracted to get three concentration levels of 50%, 100% and 200% of the labeled content of CFX-HCl. Another set of standard solutions at the same concentration levels was also prepared. The samples were injected into the HPLC system. The percent recovery was calculated according to the following equation:

Recovery (%) = ([A]/[B]) × 100

where [A] is the net peak area of the drug in the aqueous humor sample and [B] is the peak area of the drug in the standard solution.
Limit of detection (LOD) and limit of quantification (LOQ)
LOD is the lowest concentration of an analyte that can reliably be differentiated from background levels. LOQ of an individual analytical procedure is the lowest amount of analyte that can be quantitatively determined with suitable precision and accuracy. LOD and LOQ were calculated from the standard deviation of the response and the slope of the three linearity curves using the formulae 3.3 α/S for LOD and 10 α/S for LOQ, where α is the standard deviation of the response and S is the mean of the slopes of the three calibration curves [16].
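The ICH formulas above are easy to apply directly; this is a minimal sketch in which the standard deviation of the response and the mean slope are hypothetical numbers chosen only to illustrate the arithmetic.

```python
def lod_loq(sd_response: float, mean_slope: float):
    """ICH formulas: LOD = 3.3 * alpha / S and LOQ = 10 * alpha / S, where
    alpha is the SD of the response and S the mean calibration slope."""
    return 3.3 * sd_response / mean_slope, 10.0 * sd_response / mean_slope

# Hypothetical inputs for illustration:
lod, loq = lod_loq(sd_response=0.81, mean_slope=100.0)
print(f"LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
```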
Tailing factor
The tailing factor was estimated by analysing six standard solutions of CFX-HCl at a concentration of 7.273 µg/ml, and was expressed as the mean±standard deviation (SDEV) of the tailing factors of the six samples.
Stability of samples
Stability studies of aqueous humor spiked with CFX-HCl (2, 5, 8 µg/ml) were carried out over a period of 30 d at -18°C.
In vivo performance of CFX-HCl ocular inserts in an animal model
The aim of this in vivo study was to determine objectively in rabbits the possible irritant properties and the in vivo drug profile of the ocular insert on single administration in the rabbit's eye [4]. The insert was administered by placing it in the rabbit's conjunctival cul-de-sac. The in vivo experiment was carried out at the animal house of Jordan University of Science and Technology (JUST). An ophthalmologist attended this study to supervise the collection of the aqueous humor samples and to monitor vital signs so as to minimize potential risks.
General procedure for animal's preparation
The rabbit was chosen as a model for this study because its eye simulates an adult human eye with respect to size, shape, physiology, and composition of tears.
Twelve healthy male New Zealand albino rabbits, weighing between 2.5 and 3.0 kg, were accommodated under standard temperature, humidity, and photoperiod light cycles. On the day of the experiment, the rabbits were placed in restraining boxes where they could move their eyes and heads freely. They were identified by tattooing their ears. All experiments were carried out following the European Community Council Directives, and the study protocol was approved by the Ethical Committee of the Higher Research Council at the Faculty of Pharmacy, Jordan University of Science and Technology (Irbid, Jordan).
Evaluation of biocompatibility and residence time of inserts in the precorneal area
In this in vivo study, four formulae were tested. The formulae were chosen according to the in vitro dissolution studies carried out previously [13]. Rabbits were divided into four groups, each consisting of 3 rabbits; for each group two trials were carried out. In the first trial, one insert was placed in the lower conjunctival sac of the left eye and the right eye served as a control, and vice versa in the second trial. After insertion, the eyelids were closed for 10 seconds in order to prevent rejection of the insert. The behaviour of the inserts at 10, 60, and 180 min after insertion was evaluated by direct visual observation using a slit lamp.
Measurement of CFX-HCl trans-corneal penetration
In order to estimate the amount of CFX-HCl in the aqueous humor of the rabbit eye, around 100 µl of the aqueous humor was aspirated from the anterior chamber at different time intervals using a 1 ml insulin syringe fitted with a 26 gauge needle. For the reference formula R, the samples were withdrawn at 0, 1, 3, 5, 7, 9, 24 and 32 h, while for F1, F2, and F3 samples were withdrawn at 0, 1, 5, 9, 24, 28, 32, 48, 56, 72, 80, 96, 98, 100, 104 and 120 h. At the end of the experiment, the ocular inserts were removed and the withdrawn samples were stored at -18 °C until analysis.
Pharmacokinetic analysis
Ocular inserts were removed from each group at 96 h and the amount of drug remaining in each insert was determined. The area under the concentration in aqueous humor vs. time curve was calculated using the linear trapezoidal rule with extrapolation to infinity, using the commercially available software package TOPFIT. The peak aqueous humor concentration of ciprofloxacin (Cmax) and the time to reach Cmax (Tmax) were recorded. Then the statistical analysis of the data was conducted using the software WIN NONLIN (Ver. 3.3).
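The linear trapezoidal rule used above is straightforward to sketch; the profile below is hypothetical and TOPFIT/WIN NONLIN of course implement more than this, but the core AUC arithmetic is as follows.

```python
def auc_trapezoidal(times_h, concs_ug_ml):
    """Linear trapezoidal rule: AUC(0-t) summed over sampling intervals."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times_h, times_h[1:],
                                         concs_ug_ml, concs_ug_ml[1:]))

def auc_to_infinity(auc_0_t, c_last, k_el):
    """Extrapolation to infinity: AUC(0-inf) = AUC(0-t) + C_last / k_el."""
    return auc_0_t + c_last / k_el

# Hypothetical aqueous humor profile (not study data):
times = [0, 1, 5, 9, 24]
concs = [0.0, 1.2, 3.4, 4.0, 2.1]
auc = auc_trapezoidal(times, concs)
cmax = max(concs)
print(f"AUC(0-24) = {auc:.1f} ug*h/mL, Cmax = {cmax} at Tmax = {times[concs.index(cmax)]} h")
```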
RESULTS AND DISCUSSION
In vivo HPLC method validation
The analytical method was validated according to the International Council for Harmonization (ICH) guidelines. The method was found to be accurate, specific, and sensitive for the analysis of CFX-HCl in aqueous humor, with complete separation of the drug.
Linearity
The linearity of the method was evaluated from the calibration curve of spiked aqueous humor samples at several concentration levels of CFX-HCl. The peak area of the drug yielded a linear correlation over the concentration range (2.500-7.826) µg/ml. The calibration curve of the spiked aqueous humor standards with the regression equation and the correlation coefficient (R²) is shown in (fig. 2). The results confirmed the linearity of the standard curve over the studied range.
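For readers reproducing the linearity check, a least-squares fit of peak area against concentration gives the regression coefficients and R²; the calibration data below are hypothetical stand-ins for the values plotted in fig. 2.

```python
import numpy as np

# Hypothetical spiked aqueous humor calibration data (ug/mL vs. peak area):
conc = np.array([2.500, 3.5, 4.5, 5.5, 6.5, 7.826])
area = np.array([251.0, 349.0, 452.0, 548.0, 651.0, 782.0])

slope, intercept = np.polyfit(conc, area, 1)  # linear regression
r = np.corrcoef(conc, area)[0, 1]             # correlation coefficient
print(f"area = {slope:.1f} * conc + {intercept:.1f}, R^2 = {r**2:.4f}")
```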
Specificity
Representative HPLC chromatograms of blank aqueous humor and aqueous humor spiked with CFX-HCl are shown in (fig. 3) and (fig. 4), respectively; they indicate no interference from the endogenous components. It is obvious from (fig. 4) that the peak of CFX-HCl was well resolved and completely separated. Therefore, this HPLC method is considered a sensitive and specific method for CFX-HCl without any interference from the endogenous components that might be present in the sample.
Fig. 3: HPLC chromatogram of blank aqueous humor sample

Fig. 4: HPLC chromatogram of an aqueous humor sample containing CFX-HCl

Precision
The precision representing the repeatability (i.e., analysis repeatability) of standard solutions and spiked aqueous humor standards of CFX-HCl at a concentration of 4.444 µg/ml was studied. The %RSD values were found to be 0.25% and 0.45% for the standard solution and spiked aqueous humor, respectively; since they are less than 2%, the method is considered precise [3].
Accuracy
The accuracy of the method was determined on the basis of the percent recovery at three concentration levels: 50%, 100%, and 200% of the labeled content of CFX-HCl. The recoveries were found to be 99.43%, 100.09% and 99.76%, respectively.
Limit of detection (LOD) and limit of quantitation (LOQ)
The LOD was found to be 0.027 µg/ml, and the LOQ 0.081 µg/ml. These values ensure that the lowest concentration of CFX-HCl determined in aqueous humor samples can reliably be differentiated from background levels and quantified with suitable precision and accuracy, which correlates well with the literature [16].
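The paper does not state how these limits were derived; a common ICH Q2(R1) approach estimates them from the calibration data, as sketched below with hypothetical values of the response standard deviation and calibration slope chosen so the output matches the reported figures:

```python
# ICH Q2(R1)-style estimates (an assumption here; the paper does not state the
# method used): LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where sigma is the
# standard deviation of the response and S is the slope of the calibration line.
sigma = 0.34   # hypothetical SD of the low-level response
S = 41.8       # hypothetical calibration slope (peak area per ug/ml)

lod = 3.3 * sigma / S
loq = 10 * sigma / S
print(f"LOD = {lod:.3f} ug/ml, LOQ = {loq:.3f} ug/ml")
```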
Tailing factor
The tailing factor was measured and found to be 1.32±0.012, which complies with the standard range given in the United States Pharmacopeia (USP) [17].
Stability of samples
Results of the stability study of the spiked aqueous humor samples indicated that the samples were stable for four weeks when stored frozen at -18 °C, and the degradation that occurred was within the limits recommended for biological studies.
Application of the method
The validated HPLC method was then applied to assess the in vivo behaviour and pharmacokinetics of CFX-HCl in aqueous humor.
Bioavailability of CFX-HCl in aqueous humor of rabbit's eye
The bio-adhesive ocular system was studied using four different film-coated inserts containing 6.25 mg CFX-HCl, which were given as a single dose and followed up for five days. This drug delivery system can be described as a circular disc consisting of spray-dried CFX-HCl with xanthan gum, Carbopol 934, and propylene glycol, coated with a transparent lipophilic rate-controlling film of Eudragit RL100 copolymer [13].
The in vivo behaviour of uncoated inserts had been studied previously, but they failed to release all of their drug content inside the rabbit eye, since they were transformed into a gel and expelled from the eye 7-8 h after application.
Based on a previous in vitro release study [13], the coated formulae R, F1, F2, and F3 were chosen as models for this in vivo study, where R is the reference product, F1 represents slow release behaviour, F2 intermediate release behaviour, and F3 fast release behaviour; all contain 6.25 mg CFX-HCl. The compositions of the studied formulae are shown in table 1. The aqueous humor concentration-time profiles of CFX-HCl released from the different formulae are shown in fig. 5.
Table 1: Compositions of the studied formulae

Formula   Free drug (mg)   SD2 (mg)
F1        0.00             6.25
F2        3.00             3.25
F3        5.00             1.25
R         6.25             0.00

This behaviour is in agreement with the results shown in table 2: after 96 h the ocular inserts were removed from each group of rabbit eyes and the drug content remaining in the inserts was determined according to the in vitro analysis method mentioned before [13]. After five days, the total release was 79.33%, 83.64%, 88.39% and 90.96% for F1, F2, F3 and R respectively. The pharmacokinetic parameters of the four CFX-HCl ocular insert formulae R, F1, F2 and F3 are shown in table 3. The results in table 3 show that the area under the curve of the concentration-time profiles of F1, F2, and F3 increases compared with the reference formula R: the AUC of F1 and F2 was around 2.5-fold that of R, while F3 gave an AUC around 2-fold that of R. Thus, as the amount of spray-dried CFX-HCl in SD2 increases, the release of the drug is prolonged further.
On the other hand, the values of Tmax in table 3 show that the presence of SD2 increases the time required to reach the maximum concentration: 24 h for F1 and F2 and 9 h for F3, compared with 7 h for the reference R. All these results are in agreement with the conclusion that the spray-dried matrix system sustains the release of CFX-HCl and thereby increases its bioavailability.

The pharmacokinetic parameters of the test formulae F1, F2, and F3 were also compared with the reference R as Test/Reference ratios. The average relative bioavailability values (F) of F1/R, F2/R, and F3/R were determined from the AUC0-∞, AUC0-t and Cmax calculations. Results are represented as mean±SD and shown in table 4. It is obvious from the table that all bioavailability values (F) of the test formulae are greater than 1, meaning that none of F1, F2, or F3 is bioequivalent to R [18]. Also, as the amount of free drug decreases and the amount of SD2 increases, the bioavailability of CFX-HCl increases. The relationship between AUC0-t and the amount of SD2 in the formula was constructed, and a linear correlation was observed, as shown in fig. 6. This relationship supports the conclusion that increasing the amount of the matrix SD2 in the formula increases the extent of absorption of CFX-HCl, and hence enhances ocular bioavailability, as indicated by the increase in AUC0-t with increasing amount of SD2 matrix.

The results of the in vivo evaluations clearly demonstrated that the CFX-HCl releasing insert system produced significant aqueous humor concentrations throughout the 5-day insertion of one unit. The application of the controlled drug insert system thus significantly minimizes the dose of CFX-HCl required for effective management of corneal ulceration and conjunctivitis. In other words, the therapeutic efficacy of CFX-HCl in corneal ulceration and conjunctivitis treatment has been improved by using the CFX-HCl insert system to control its ocular delivery. These results support the conclusions of previous investigations [20, 21] that various ophthalmic systems such as inserts, ointments, suspensions, and aqueous gels have been developed to increase the residence time of the dose and enhance ophthalmic bioavailability, since conventional ophthalmic delivery systems often result in poor bioavailability and therapeutic response because of the rapid precorneal elimination of the drug.

The bioavailability of ciprofloxacin ocular inserts has been reported previously by other investigators, where the insert was a combination of ciprofloxacin, methylcellulose, hydroxypropyl methylcellulose, hydroxypropyl cellulose and Eudragit RS100. The in vivo studies of those ocular inserts showed that ciprofloxacin hydrochloride had a significant effect on the reduction of induced ocular conjunctivitis, the bacterial load in the insert-treated groups being reduced two-fold compared to control groups [4]. Other researchers developed a novel sustained release delivery system of ciprofloxacin for ocular treatment, based on the use of carbopol gel or hydroxypropyl methylcellulose as a viscosity-enhancing agent, in addition to dodecylmaltoside as a penetration enhancer, to achieve the desired ocular absorption of ciprofloxacin. Their in vivo bioavailability studies showed that the sustained release formulations delivered 10-fold more drug into the aqueous humor than the standard solution formulation [19].
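The relative bioavailability calculation is a simple ratio; a minimal sketch with hypothetical mean AUC values chosen to mirror the roughly 2- to 2.5-fold increases described above:

```python
# Hypothetical mean AUC(0-t) values (ug*h/ml) for the reference and test formulae
auc = {"R": 120.0, "F1": 300.0, "F2": 295.0, "F3": 240.0}

# Relative bioavailability: F = AUC(test) / AUC(reference)
for formula in ("F1", "F2", "F3"):
    print(f"F({formula}/R) = {auc[formula] / auc['R']:.2f}")
```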
Statistical evaluation of the data
The statistical analysis was performed using WIN NONLIN® (ver. 3.3). Two-way repeated measures analysis of variance (ANOVA) was used to test for potential differences among the four coated ocular formulae F1, F2, F3 and R, with the critical level of significance set at P<0.05. To examine the differences between the mean values of the pharmacokinetic parameters of the four formulae, the results were also subjected to pairwise comparisons using the Least-Square-Means Differences Test (LSM). There are significant differences between the in vivo release profiles of the four formulae due to the "Formula" variable (P<0.05) at α = 0.05. In terms of AUC0-t and AUC0-∞, all the formulae are significantly different from each other except F1 and F3, since their P value was >0.05.

The 90% confidence intervals (90% CI) for the AUC0-t, AUC0-∞ and Cmax measures of relative bioavailability and bioequivalence lay within the acceptance interval of 0.80-1.25 for some formulae. None of the formulae F1, F2 and F3 was bioequivalent with R in terms of AUC0-t, AUC0-∞ and Cmax, as the lower and upper 90% CI fall outside the 0.80-1.25 interval. F1 and F2 are bioequivalent in terms of the 90% CI values of Ln AUC0-t and Ln AUC0-∞, and F3 and F2 are bioequivalent in terms of Ln Cmax. This was expected from the in vitro dissolution data [13].
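For illustration, the sketch below computes a 90% CI for a log-transformed Test/Reference ratio and checks it against the 0.80-1.25 acceptance interval (Python, with hypothetical data and a simplified two-sample design; the study itself used WIN NONLIN, and a full bioequivalence analysis would follow the study's actual design):

```python
import numpy as np
from scipy import stats

# Hypothetical ln-transformed AUC(0-t) values for test and reference eyes (n=6 each)
test = np.log([298.0, 310.5, 285.2, 305.9, 292.4, 300.7])
ref = np.log([118.0, 125.3, 115.8, 122.1, 119.6, 121.2])

diff = test.mean() - ref.mean()
se = np.sqrt(test.var(ddof=1) / len(test) + ref.var(ddof=1) / len(ref))
df = len(test) + len(ref) - 2
t90 = stats.t.ppf(0.95, df)  # critical value for a two-sided 90% CI

lo, hi = np.exp(diff - t90 * se), np.exp(diff + t90 * se)
print(f"90% CI of the Test/Reference ratio: ({lo:.2f}, {hi:.2f})")
print("Bioequivalent" if lo >= 0.80 and hi <= 1.25 else "Not bioequivalent")
```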
Evaluation of biocompatibility of inserts in the precorneal area

Treatments F1, F2, and F3 were given in comparison with the reference formula R. The eyes were observed for redness, lacrimal secretion, mucoid discharge, swelling of the eyelid, and response to ocular stimuli on a score basis for up to 5 d. It was observed that ocular inserts F1, F2 and F3 have better biocompatibility than R, which could be due to their smaller size.
Notably, among previous investigators working on ocular inserts of different drugs [22], there has been concern regarding the toxicity, non-biodegradability, and non-biocompatibility of synthetic polymers, so the trend has been to use a combination of synthetic polymers and biopolymers with well-known biocompatibility and biodegradability.
CONCLUSION
A rapid and precise RP-HPLC/UV method was used for the determination of CFX-HCl in aqueous humor. The method was validated according to standard guidelines. The extraction procedure exhibited excellent recovery of CFX-HCl.
The inserts based on the F1 and F2 formulae exhibited a profile typical of a controlled release delivery system, so they have good potential to provide an effective drug concentration in the aqueous humor over several days with a reduced number of applications.
Fair biocompatibility of the matrix system in the rabbit eye was observed, and the insert formulae F1, F2, and F3 showed better biocompatibility than the reference formula R.
In conclusion, the ocular bio-adhesive inserts of CFX-HCl combined with the matrix system SD2 can be considered a promising biocompatible controlled release dosage form for the treatment of corneal ulceration and conjunctivitis.
Fig. 5: Average aqueous humor concentration-time profiles of CFX-HCl for the four different coated ocular insert formulae in rabbits (n=6) | 2017-11-06T20:44:28.300Z | 2016-12-31T00:00:00.000 | {
"year": 2016,
"sha1": "bef6441283279a16cd85a6d69e20cb7748c9736d",
"oa_license": "CCBYNC",
"oa_url": "https://innovareacademics.in/journals/index.php/ijpps/article/download/14967/8722",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "440523a2b6e349cd9d76fcdb17a2791e85ca57af",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
59417789 | pes2o/s2orc | v3-fos-license | Using Historical Methods in Information Systems : A Primer for Researchers
This article describes the use of historical methods in information systems research and provides a practical example of how this technique was used in a recent research project. Though the information systems researcher already has a rich cornucopia of research methods to choose from, historical research has the power to offer insights over and above those provided by other techniques. The researcher is forced to step away from a narrow focus on the research question in order to examine the “big picture”. This big picture approach means that recurring patterns are identified providing a broad set of findings that are applicable in many different settings. However the flip side is that historical research can have a lack of focus and does not always offer immediate answers to specific research questions. This paper provides guidelines for the use of historical methods by information systems researchers by demonstrating how the seven step approach developed by Mason, McKenney and Copeland was applied to an historical research study which explored the relationship between ICTs and regional development in New Zealand between 1985 and 2005. This research reveals the value of historical research for information systems researchers by showing the effects of long term social trends on ICT use. It also highlights some of the pitfalls that potential users of historical research need to be aware of such as gaps in the data trail and the questionable credibility of some historic records.
INTRODUCTION
Information systems have been defined as "a set of interrelated components that collect, manipulate, store and disseminate data and information and provide a feedback mechanism to meet an objective. The feedback mechanism helps organisations achieve their goals, such as increasing profits or improving customer service" (Stair et al. 2011, pg 5). The discipline is relatively young, and the technologies being researched are ever changing. Information systems researchers may see little value in going back to look at technologies that are now obsolete. Why carry out research into COBOL programming, when everyone is now using C#? However, information systems is more than the study of technology alone; the interplay between technology, systems and organizations is multi-faceted and complex, and the availability of a new technology on its own does not guarantee its successful adoption.

This article sets the scene by first discussing the origins of the use of historical methods in information systems and the founding work done in business history and the history of technology. It then goes on to describe a research project which used historical methods to investigate the contribution of ICTs to regional development in New Zealand. In particular, the seven step approach developed by Mason, McKenney & Copeland (1997b) is discussed in detail. This is followed by a reflection on the lessons learned while using this approach. The article concludes by considering what new insights the use of the historical approach can bring to the study of information systems.
THE ORIGINS OF HISTORICAL METHODS IN INFORMATION SYSTEMS
Historical methods consist of a collection of techniques and approaches which draw on both traditional history and social research. The methodology was first developed in the nineteenth century by social thinkers such as Marx, Durkheim, and Weber (Neuman, 2003). There has been a resurgence of interest in historical methods in social science since the 1970s, when researchers began to recognise the limitations of methodologies such as structural functionalism and economic determinism, which take a static view of society. Increasing political conflict between Western nations meant that researchers became interested in exploring social change, and looked for a methodology that took into account historical and cultural contexts. Historical methods provide a powerful set of tools for addressing broad, big picture questions (Neuman, 2003).
The most well-known example of the use of historical research in information systems is the work carried out by Mason, McKenney & Copeland (Mason et al., 1997a; Mason et al., 1997b). However, historical comparative research has long been used by other disciplines. The two areas most closely related to information systems are business history and the history of technology. Business history is concerned with understanding the interplay between economics and individuals, organisations and wider society, while the history of technology is more concerned with the technological artefact itself.
Business History
Business history is generally agreed to have begun as a discipline in the 1920s at Harvard Business School (Hunter et al. 2006), where it grew in tandem with the use of the Harvard Case Method as a teaching tool. Each case would present issues faced by a particular organisation and ask readers to put themselves in the shoes of key decision makers. However, business history has grown beyond focussing on a single organisation and the key individuals within it to encompass the broader perspectives of the industry sector alongside national and global perspectives. Many researchers regard Alfred Chandler's "Strategy and Structure", an account of the emergence of the multi-divisional firm in American corporate history, as the founding stone of the discipline (Chandler, 1962).
The early days of business history focussed on individual entrepreneurs and the organisations they founded (Yates 1997). Since the 1990s the emphasis has shifted from significant individuals to institutions and organisations. Though this approach opened up new areas of research, there was a concern that it could lead to institutional determinism if taken to extremes, meaning that individuals are portrayed as mere pawns at the mercy of the organisations they belong to (Yates, 1997). Researchers (Orlikowski, 2000; Yates, 2005) have used methodologies such as structuration theory to address this unease.
History of Technology
Much of business history is driven by economics, an approach which has worked well for some but has been criticised by others for having a narrow focus on the market (Yates, 1997). An alternative but related field is the history of technology, which focuses on the technological artefact. As far as information technology goes, the majority of material published in this area tends to concentrate on hardware and software development. Misa (2007) identifies three thematic traditions that have emerged in the history of computing in the last twenty-five years. The first was machine centred and concentrated on hardware and software; the second looked at the "information age", characterised by Manuel Castells' Information Age trilogy (Castells, 1997). The third theme was taken up by historians who began to ask the question "How did certain institutions shape computing?"; organisations which received particular attention were the US military services, the National Science Foundation and IBM.
Both business history and the history of technology have been criticised for having a technologically deterministic approach, where technology is seen as an independent variable that changes structures such as society, firms and the organisation of work (Yates, 1997). In reaction to this, many researchers in the history of technology field adopted the social construction of technology approach (Bijker et al., 1989; Hughes, 1994). The use of the social construction approach reflected three different trends: the first was a move away from a concentration on the individual entrepreneur, the second represented a move away from technological determinism, and the third was to study technological development as a whole, rather than making distinctions between technical, social, economic and political aspects. Thomas Parke Hughes used a systems approach to integrate technical, social, economic and political aspects in his studies of the different ways in which electric power networks spread across Western countries (Hughes, 1983).
Historical Methods in Information Systems
The use of historical methods in information systems was pioneered by Mason, McKenney and Copeland in their studies of Bank of America, Lyons Electronic Office (LEO) and American Airlines (Mason, 2004; Mason et al., 1997a; Mason et al., 1997b; McKenney et al., 1995; McKenney et al., 1997). Their approach was also used in a study of the use of IT in Texaco over a forty-year period (Hirschheim et al. 2003; Porra et al. 2005; Porra et al. 2006).
Mason, McKenney & Copeland based their argument for the use of historical methods on the work of Joseph Schumpeter, who saw capitalism as being characterised by "gales of creative destruction", where the economy is radically altered by innovations in products and/or processes, resulting in a fifty-five year cycle of creation, growth, and destruction known as a Kondratieff wave. Schumpeter's theories were based in turn on the work of Nikolai Kondratieff, who argued that the possibilities of any given generation of technologies become exhausted approximately every fifty-five years (Hall, 1998). As the developments in ICTs form the most powerful force of creative destruction in the last fifty-five years, and can be regarded as the fifth Kondratieff wave, this is of direct relevance to researchers in the field of information systems.
Building on Schumpeter's ideas of radical innovation, Mason, McKenney & Copeland advanced the concept of successful entrepreneurs who develop a "dominant design" which will change the market place. Central to this approach is the concept of three characters who must be present in an organisation for successful technological change to occur: the "Maestro", the "Executive" and the "Supertech". The Executive is an inspirational leader with a vision for their business, the Supertech has the technical skills to use IT to put their vision in place, and the Maestro is the bridge between the two, making the Executive aware of the possibilities offered by IT, and translating their vision into a form the Supertech can understand. The approach includes the use of a seven step framework to carry out, analyse and present the research. It is this aspect of their research, the seven step framework, that forms the focus of this article.
When introducing their approach they explain how it differs from histories of technologies, and though they acknowledge the contribution of business historians such as Alfred Chandler and JoAnne Yates (Chandler, 1962; Yates, 1989), they do not situate themselves clearly within the field of business history or acknowledge that there might be alternative approaches to the one they have developed. Their interpretation of historical methods is very dependent on the idea that IT can produce radical change, and though they have produced a large number of case studies to support their theory (McKenney et al., 1995), other researchers, notably JoAnne Yates and James Cortada, have shown that many successful organisations have taken an incremental approach to the adoption of IT (Chandler and Cortada, 2000; Cortada, 2007; Yates, 1989; Yates, 2005).
In any organization, the understanding of the present is facilitated by studying the past and gaining an awareness of the long-term economic, social and political forces that shape events. The benefit of using historical methods is that deep and wide insights are obtained into the area being researched. For information systems researchers, new perceptions can be gained by considering the long term cultural context in which their research is situated. An application of Mason, McKenney & Copeland's historical methods approach was used to research the contribution that ICT made to regional development in New Zealand. Two contrasting regions were studied over a twenty year period in order to understand the relationship between new developments in information and communications technology and changes in social and economic development.
For the study of regional development, the long-term perspective, which considers the development of social capital, cultural values, and the build-up of social networks, made historical methods the most suitable approach. Historical methods highlighted how the two regions had changed over time and the positive and negative consequences of those changes. As a reflective methodology, historical methods enable the researcher to obtain a deeper understanding of these issues than would be obtained by using a case study approach.
LEARNING REGIONS IN NEW ZEALAND
The historical methods approach developed by Mason, McKenney & Copeland was used for research which investigated the contribution that ICTs made to the development of "Learning Regions" in New Zealand. The term "Learning Region" is widely used in the field of economic geography to identify regions that have been economically successful over a period of time, and that have successfully adapted to changed circumstances (Cooke, 1996; Florida, 1995; MacLeod, 2000; Storper, 1995). Such regions are characterised by the following factors: a competitive strategy based on learning; intense intra-regional linkages; capacity for innovation; creativity in both arts and sciences; efficient information flows; and regional norms and values that provide stability. These are all long term processes which can interact in a way that results in certain regions becoming consistently successful over time. ICTs have the potential to make an important contribution to the development of each of these factors.
The term learning region was first coined by academic authors (Florida 1995; Morgan 1997; Storper 1995) working in the fields of innovation studies and economic geography. The concept of the "Learning Region" is ambiguous and found in a variety of different contexts. There is no single definition of a learning region; however, a common strand in the literature is that such regions have an explicit commitment to placing innovation and learning at the core of development (Larsen 1999). A learning region would generally consist of a network of inter-firm relationships, supported by social capital and trust, and kept dynamic by a continuous process of interactive learning.
The concept of the learning region is particularly relevant for New Zealand: as a small country located at the bottom of the South Pacific, it faces particular problems in attempting to integrate the national economy into the global economy. Primary industries dominate, and exports of meat and dairy products make a large contribution to New Zealand's economy. However, industries such as forestry, horticulture, fishing, manufacturing and tourism have become increasingly significant, and over the past decades many new industries have emerged and grown strongly, including software, biotechnology, electronics, marine, education exports, media/film and wine. New Zealand's isolation and physical distance from major trading partners mean that New Zealand's predominantly small firms wanting to move into export markets face big costs. Regions are affected as technological innovation and increased competition lead to business centralisation. Also, industry rationalisation and market deregulation have encouraged skilled people to leave rural regions for broader educational and employment opportunities in major cities (Schollman et al. 2002).
Additionally, enhancements and upgrades of physical infrastructure in rural centres have not kept pace with technological progress (due to small market sizes, lack of critical mass, and no population growth) and have led to further population out-migration. This has caused problems for some of New Zealand's rural regions, and in some cases led to a vicious cycle of decline. In urban regions, all New Zealand cities have low-income areas and pockets of deprivation. In both urban and rural areas, Māori and Pacific Islanders are disproportionately represented in the disadvantaged population.
The New Zealand government has implemented several initiatives to help develop a knowledge society, encourage innovation, build up regional economic development, and improve usage of and access to ICT. Though the concept of the learning region is not explicitly stated, these initiatives are in line with the thinking that lies behind the idea of learning regions. The overarching aim is to return New Zealand's per capita income to the top half of the OECD rankings and maintain that standing. The use of ICT is seen as central to all of these developments. Many of these initiatives are focused at the regional level; for example, Project Probe was a 2002 joint initiative between the Ministry of Education and the Ministry of Economic Development (iStart 2004). The aim was to roll out broadband communications to schools and rural communities, with a particular emphasis on closing the disparity between rural and urban schools. In order to attempt to assess the long term success of these initiatives, this research focussed on regional New Zealand. Two contrasting regions were investigated: one urban region, Wellington, and one rural region, Southland. Data was collected over a twenty year period, from 1985 to 2005. In the regional setting, tacit or soft knowledge is more easily transferred than in a national context, because social interaction and exchange of information is easier and cheaper (Oughton et al. 2002). These soft people-based social networks take time to develop, and are likely to have a significant influence on the use of the ICT networks that are based within a particular region. The focus of the research was on the interplay between these soft social networks and the hard technology-based ICT networks operating within the regional setting. The central research question concerned the contribution that ICTs made to the development of learning regions in New Zealand.

HOW HISTORICAL METHODS WERE USED IN THIS RESEARCH

Mason et al. (1997b) have laid out clear guidelines for the researcher using historical methods. Seven steps are identified which take the researcher through the stages of deciding on the research question, gathering and analysing the data, and writing up the results; these are outlined in Table 1. This paper shows how this seven step model was adapted to research the topic of learning regions in New Zealand. Though the steps are presented in a linear fashion, when carrying out real-world research there will always be overlap and iteration between them.

Step One: Begin with focusing questions

Focusing questions were arrived at by an inductive process of searching the literature relating to the New Zealand context, and using the questions asked in the study of Texaco (Porra et al. 2006) as a guideline. A number of questions were posed, for example: What were the significant changes in the New Zealand economy between 1985 and 2005? How have new developments in ICT been adopted in regional New Zealand? What significant changes have occurred in human and social capital in regional New Zealand between 1985 and 2005?
Table 1: Seven Step Approach to Historical Methods (Mason et al., 1997b)

Step 1) Begin with focusing questions. The questions asked are going to be about change, as history is primarily the story of change. Inductive thinking is generally associated with the interpretive paradigm, and involves the researcher identifying categories, or patterns in data, that seem suitable candidates for further investigation.

Step 2) Specify the domain for the enquiry. In the studies carried out by Mason et al. (1997a) and Porra et al. (2006) the primary unit of analysis is an individual organisation. The researcher needs to make decisions about what will be included in the domain, and what is the appropriate time span for the study.

Step 3) Gather evidence, using both primary and secondary sources. Primary sources are those that came into existence during the time to which they refer, and secondary sources are those written by historians about a period in the past. Primary sources can be public documents such as annual reports, statistics and academic articles, which are organised around a timeline. Secondary sources can be slotted into this timeline and include less public information such as letters, budgets, and data collected from individual interviews.

Step 4) Critique the evidence: is it authentic and credible? It is common to find that evidence is contradictory, irrelevant or incomplete. Many of the best storytellers favour accuracy less than they favour a gripping narrative. Techniques such as counting the number of times an observation was made, determining the credibility of sources, and establishing whether there are meaningful relationships between the different parts of the evidence can be used to assist with this.

Step 5) Determine patterns using inductive reasoning. This is one of the central steps, though one of the most difficult. The task is to explain what happened, and how and why it happened. This can be done using a number of different tools; three of the most popular are conceptual frameworks, causal chain analysis, and establishing empathy with the main participants. A conceptual framework can be used to organise facts, and to concentrate attention on the essential areas to be explained. A causal chain is a type of conceptual framework that shows the sequence of events that produced the effects, results or consequences observed. Conceptual frameworks and causal chains can be developed in advance, independently of the phenomena to be explained, and used as an explanatory framework, or they can be used as ideal types around which historical data can be organised. A third approach is to try to achieve empathy with the characters in the study. This means imagining how events might have appeared to those who actually experienced them.

Step 6) Tell the story. This entails bringing together the results of evidence gathering, empathy, and causal chain analysis to form a narrative.

Step 7) Write the transcript. The historical method is part of the hermeneutic tradition in that it treats the world as a script. Every written account takes its place in the context of a network of other written accounts that attempt to explain the relationships between living generations and their predecessors.

In order to address these research questions it was decided to use the concept of the ideal type as a basis for data collection and analysis. A theoretical framework of an "ideal" learning region was built up and the two actual regions were evaluated using this framework. The framework was developed by reviewing 23 academic articles, mainly from the economic and regional geography literature, that covered the concept of the learning region, in order to identify common terms and themes (Christie et al. 2001; Cornford 2000; Florida 1995; Hudson 1999; Keating et al. 2002; Lagendijk et al. 2000; Larsen 1999; Lever et al. 1999; MacLeod 2000; Malecki 2002; Maskell 1999; Maskell et al. 1999; Morgan 1997; Oinas et al. 1999; Organisation for Economic Co-operation & Development 2001a; Rio 2001; Saxenian 1994a; Schollman et al. 2002; Sokol 2002; Storper 1995; Thompson 2002; Wolfe 2000; Wolfe 2002). Twenty-two common terms were identified; these were ranked according to how often they were mentioned, and then grouped into six categories. The categories are presented as the 6-I framework, shown in Table 2. A more detailed explanation of this process can be found in (ref removed). The framework groups the characteristics that a learning region should possess into six categories: interconnecting, informing, innovating, interacting, infrastructure and income, and was used as a basis for data collection and analysis.
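A minimal sketch of the sort of term counting used to rank the common terms (Python; the article snippets and term list here are hypothetical stand-ins for the 23 reviewed articles and 22 terms):

```python
from collections import Counter

# Hypothetical article abstracts and candidate terms from the learning-region literature
articles = [
    "innovation and interactive learning drive regional networks",
    "social capital and trust support inter-firm networks and innovation",
    "infrastructure investment enables knowledge sharing and learning",
]
terms = ["innovation", "networks", "learning", "social capital", "infrastructure"]

# Count how many articles mention each term, then rank by frequency
counts = Counter({t: sum(t in a for a in articles) for t in terms})
for term, n in counts.most_common():
    print(f"{term}: mentioned in {n} article(s)")
```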
Table 2: The "6-I" framework of an ideal learning region (interconnecting, informing, innovating, interacting, infrastructure, income)

Step Two: Specify the domain for the enquiry

Neuman (2003) distinguishes between micro-level, meso-level, and macro-level theories, with the selection of the appropriate level being based on time-span, numbers of people involved, and geographical area covered. The larger the level of the theory, the more abstract the concepts it deals with; micro-level theories would be used to explain the interactions between small numbers of individuals, whereas macro-level theories explain society-wide issues.
At the meso level, two contrasting regions were selected: one urban region, Wellington, and one rural region, Southland. Southland is a remote rural area located in the south of the South Island. Farming is the mainstay of the local economy; the region also has the country's only aluminium smelter, and Southland Frozen Meat is a major employer. The Greater Wellington Region (hereafter referred to as Wellington) is located at the south of the North Island and includes the capital city, Wellington.

The region includes a wide range of different socio-economic groups; there are some high income areas but also areas of deprivation. As Wellington is the capital city, the public sector is of particular importance, as is the service sector. Wellington is the second most important centre for the IT industry in the country after Auckland. As in Southland, tourism is of growing importance to the region, and is often associated with events such as the Rugby Sevens or the Arts Festival. New Zealand's major exports are primary products, and the rural sector has traditionally been the most important area of the economy. As a rural region, Southland provided data about how the rural economy changed over the twenty year period studied. At the same time the country was attempting to diversify its export base, and IT and biotechnology were viewed as offering great potential. As an important centre for the IT industry, Wellington provided data about these changes during the period studied.
The second reason for selecting these two regions was that they both had a strong reputation throughout the country for being innovative adopters of ICT networks. In 1995, Wellington was one of the first cities in the world to set up a broadband network in its central business district, and in 2003 Southland made a bold decision to implement a wireless broadband network throughout the region.

The intention was that the scope of this research, which is located in a regional context, should be geographically located at the meso-level, which provides a link between the micro (individual) and macro (national) levels, and therefore connects the particular with the general. However, in practice it proved difficult to separate the three levels. Adding to this difficulty is the fact that New Zealand's small population of around four million means that regions are more inter-dependent than in more densely populated countries. The integration of micro and macro level data is traditionally a feature of historical comparative research: issues are considered at both a society-wide and an individual level (Neuman 2003). For this research, the impacts of macro level national policy around issues such as local government restructuring and availability of broadband had to be considered alongside the regional meso level.
Step Three: Gather evidence, using both primary and secondary sources

During the first round of data collection in 2006, twelve in-depth semi-structured interviews were conducted with key figures involved in the adoption of ICT networks. The interview questions addressed a common set of themes including availability of infrastructure, the extent of linkages between local organisations, regional culture, commitment to learning within the region, and the adoption of innovative ideas. The interviewees worked for a range of organisations including local and regional councils, telecommunications providers, schools, and community groups, as shown in Table 3. Some of the interviewees were selected for their knowledge of the local situation in Southland or Wellington; others had a national focus. The second round of data collection focused much more strongly on the regional level and also brought in the historical aspects. The aim was to build up a history of the development of ICT networks in the regions of Wellington and Southland in the twenty years between 1985 and 2005. The strategy adopted was to carry out both macro and micro level analyses of events in both regions over the twenty year period (Rooney, 1996). Primary sources were used to give an overall picture of developments in each region over the whole period, and for three selected years, 1985, 1995 and 2005, a more detailed micro analysis was carried out. The idea was to see how those factors that had been identified as being relevant to the development of learning regions were changing in each region during the period covered by this research.
In order to obtain an even amount of material for each of the three selected years, and to cope with the problem of information overload, it was decided to restrict the search to three regional newspapers (Dominion/Dominion Post, Evening Post, Southland Times) and one national magazine (National Business Review). The advantage of using newspapers is that they provided a breadth of coverage that was not available from other sources. The material from the newspapers was complemented by national and regional reports produced by a range of organisations such as Statistics New Zealand, independent economic consultants, non-governmental organisations, professional societies and voluntary groups. The initial database held 3,033 items, and when coding was completed this was reduced to 2,442 items. The breakdown of the number of articles for each category and region is shown in Table 4 (note that in 2005 the Dominion and Evening Post combined to become the Dominion Post, which partly explains the drop in numbers). What is interesting to note is that even though the two newspapers were regional, more than half of the articles selected had a national rather than a regional focus. The fact that Wellington is the capital of New Zealand is also significant: generally, initiatives by the national government were categorised as national rather than regional even though they were located in Wellington. The numbers of articles collected for each category give a broad indication of the category's importance. Though these numbers are of no hard scientific value, counting the number of times a point is mentioned is one of the techniques used in historical research to establish trustworthiness, and counting is also a technique recommended by Miles and Huberman (1994) as a tactic for generating meaning. The same story was often reported in a number of publications, and there were often multiple articles about the same event.
Note: N = National, S = Southland, W = Wellington.

After the searches had been carried out, the next step was to build up a picture of the situation in each region for each of the three years, using the "6-I" model as a framework. This was done by combining the results of the searches, ordered by category, sub-category and year, for each of the two regions and the national situation. This data was used as a basis to describe developments in each region during 1985 to 2005. As with the first round of data collection, further refinement took place during the writing up process: duplications and overlaps were identified and articles were reassigned to different categories as appropriate.
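A minimal sketch of this kind of coding tally (Python; the coded entries are hypothetical placeholders for the 2,442-item database):

```python
from collections import Counter

# Hypothetical coded database entries: (year, region, category) for each article
coded = [
    (1985, "N", "Infrastructure"), (1985, "W", "Informing"),
    (1995, "S", "Interconnecting"), (1995, "N", "Innovating"),
    (2005, "W", "Interacting"), (2005, "S", "Infrastructure"),
]

# Tally article counts by (category, region), as in Table 4
table = Counter((cat, region) for _, region, cat in coded)
for (cat, region), n in sorted(table.items()):
    print(f"{cat:15s} {region}: {n}")
```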
Step Four: Critique the evidence. Is it authentic and credible?

Historical source material consists of primary and secondary sources. Historians select the events and people that they consider important; by doing so they don't so much recreate the past as rediscover it, and to some extent colour it with their own set of value judgements. The historical researcher's most important role is to choose reliable sources, in order to create reliable narratives about the past (Howell et al. 2001). There needs to be a systematic approach to gathering data, as collecting only the most compelling evidence can result in material that is unrepresentative (Wenger et al. 2000).
Utilising only the authoritative source is not a wise approach: evidence should be collected from a wide range of sources, each of which will have their own strengths and weaknesses (Tosh 2000). Any source material collected should be subjected to both external and internal criticism. The authenticity of the evidence is determined by external criticism, whereas credibility is established by internal criticism (Shafer 1980). The use of external criticism involves establishing whether a document can be traced back to the purported originator, establishing whether it is consistent with known facts, and studying the form of the document (Tosh 2000). Internal criticism consists of trying to establish the author's meaning and making a judgement as to the intentions and prejudices of the writer (Tosh 2000). An overview of the two techniques is shown as Figure 1. Official statistics also tend to have good response rates; for example, Statistics New Zealand surveys generally get a greater than 80% response rate. The statistics were used to cross check and confirm assertions made in the newspaper articles. As previously mentioned, the large number of articles used increases authenticity: the articles can be counted, and if an issue was significant it would be reported on a number of times, both within one newspaper and across different newspapers.
Step Five: Determine patterns using inductive reasoning

Mason et al. (1997b) identify three different methods for determining patterns: conceptual frameworks, causal chain analysis, and establishing empathy. With the conceptual framework and causal chain analysis, a model or analysis would generally be developed before collecting the data, and the material collected would then be compared against the original model. The establishing empathy approach attempts to build an understanding of the motivations of the key historical figures in the study, and is generally carried out after data collection. Mason et al.'s research (Mason 2004; Mason et al. 1997a; Mason et al. 1997b; McKenney et al. 1995; McKenney et al. 1997) was conducted at the organisational level and used the approach of establishing empathy with individuals in those organisations. This approach was also used by Hirschheim et al. (2003) in explaining the history of Texaco through the eyes of its Chief Information Officers.
This research was conducted at the regional level, and the establishing empathy approach was problematic due to the large number of individuals who contributed to regional development in widely different roles. The approach chosen for this research was the conceptual framework, which uses the concept of an ideal type to organise and interpret the data (Mason et al. 1997a; Mason et al. 1997b). The ideal type is presented as the "6-I" model, and the data collected was categorised using this model, within the context of both time and geographical location. The data collected was then evaluated against the definition of the "ideal" for each of the six categories, and the results for each of the two regions and the national situation were compared. The empathy approach does produce a persuasive story centred on the actions of key individuals; however, it can be rather subjective, as the researcher has to put themselves in the decision-makers' shoes. A conceptual framework is more objective, makes the research more transferable, and adds rigour to the research.
Step Six: Tell the story

One of the strengths of the historical approach is the compelling story that is produced. This narrative tells the story of the economic and social development of two regions in New Zealand (rural Southland and urban Wellington), with a particular focus on the role of hard ICT-based and soft people-based networks in regional development. In order to assess the role that ICT was playing in the development of learning regions, the two regions were assessed against the "6-I" framework of an "ideal" learning region shown in Table 2. A discussion of the results in each of the six categories, together with an overview, follows.
Interconnecting
An interconnected region will have active networks operating at international, national and regional levels. Throughout the period studied, New Zealand developed more international connections, and ICT made a major contribution to the build-up of international, national and regional networks. ICT had a major effect on organisational form as businesses and public sector organisations throughout the country became more networked. Research-based networks linking universities, crown research institutes and large organisations were present at the national level. Active networks were found in both regions, though these tended to be based in a particular sector, for example the education sector.

Those regional networks that went across a number of sectors tended to focus on unemployment and retraining, and had often been initiated by local government. Though there was evidence of clusters, for example the effort by a Chamber of Commerce to form a high technology zone in the Wellington region, and the formation of the Southern Wood Council in 2001, no such initiatives had been consistently maintained throughout the period studied.

The results for this category were generally positive: New Zealand did become more connected at global, national and regional levels, and ICTs played a major role in facilitating these connections.
Informing
In an informed region there will be a commitment to learning and evidence of knowledge sharing between different organisations within the region. There was a strong commitment to education that came through at both the national and regional levels, and as with interconnecting, ICT played a major role. At the regional level ICT was used by schools to build networks and to share resources; at the national level the education sector was viewed as the leader for new developments, such as the rollout of broadband into rural areas.

In the education sector, knowledge sharing was enhanced by the use of collaboration software such as videoconferencing and interactive whiteboards. The use of collaboration software demonstrated that tacit as well as explicit knowledge was being exchanged. However, many interviewees made it clear that technologies such as videoconferencing could complement but would never replace face-to-face contact, indicating that there were limitations on the use of ICT networks to exchange tacit knowledge.

Another factor that should be found in a learning region is evidence of a bottom-up approach to knowledge sharing and transfer of best practice. This was definitely evident in the education sector in both regions studied. In the Wellington region, Primary and Intermediate schools under the direction of the Ministry of Education had formed an ICT cluster in order to share ideas and resources. In Southland, a survey carried out prior to broadband adoption showed a strong desire on the part of local schools for collaboration.

Commitment to learning at individual, organisational and regional level is another feature of a learning region. The commitment to learning was found most strongly at the regional level, particularly in Southland, where the regional economic development body, Venture Southland, demonstrated a strong commitment to improving the educational level of regional residents.

The one weak area was skills shortages, at both regional and national levels. Out-migration of skilled workers in sectors such as health and IT was a major issue. This migration was strongly influenced by economic conditions: when times were good people stayed, but in lean times they looked for better opportunities offshore. Though the problem of staff shortages was felt across the country, it was clear that the problem was much worse for the rural region of Southland. However, Southland worked hard to address this issue with a number of initiatives, one of which was the introduction of a zero fees policy at the regional Higher Educational Institute in 2001. These efforts paid off, and by 2005 the population of Southland had stabilised.

In summary, the results for this category were generally positive. Throughout the period skill shortages decreased and investment in education grew, and there was evidence of ICT-enabled regional knowledge sharing in both regions.
Innovating
In an innovative region there will be evidence of new ideas, in terms of both products and processes, and the local culture will encourage competition. There should be evidence of entrepreneurial activity and a strong commitment to research and development. A broad definition of innovation was used, which included adopting innovative ideas from other regions or other countries; this was felt to be most appropriate for the regional context.

Though there was some evidence of innovation, especially in the IT sector, and certainly strong evidence that New Zealanders were world leaders in terms of adopting new ICT technology, neither region demonstrated the density of innovative activity that would be expected in a learning region. The success of individual IT companies was not built on and developed. This was not helped by the low investment in research and development at the national level.

Although the government's financial commitment to research and development could be questioned, there was strong evidence of entrepreneurial activity in New Zealand, though it was noted that there did seem to be a slowdown in the rate of patent applications between 2000 and 2005, despite a growth in the number of researchers. It is difficult to get an accurate measure of innovation levels, as many software companies don't bother to apply for patents.

In pure economic terms, individuals and businesses in the more populated urban regions of New Zealand were much more likely to be adopters of new ideas and technologies such as broadband than those in the less populated rural regions, even taking account of population size. However, social capital also made an impact: if a strong social network was in place with an active local champion, a new idea was much more likely to take off.

In both regions studied there was some evidence of innovation, and also evidence of the adoption of innovations from outside the region. In the opinions of interviewees the two regions were innovative, and both regions had won strong reputations throughout the country for certain projects, e.g. Southland for a wireless broadband scheme. However, there was a lack of hard data to back this up. The evidence does seem to indicate that the capacity for innovation is directly related to population size, with the most populated regions being the most innovative.

The findings for this category were mixed: there was evidence of innovation nationally and in both regions, but it was not present at the high levels that would be expected in a classical learning region.
Interacting
In an interactive region, individuals within the region will share a common culture, social capital will be high and crime rates will be low. There will also be evidence of active social networks through work, sport, voluntary groups and similar. There was strong and consistent evidence of high social capital in New Zealand at both the national and the regional levels. There was a strong cultural identity in Southland; in Wellington, cultural loyalties tended to be to the local authority area rather than to the region.

By the end of the period, community groups had become well aware of the contribution ICT could make to regional development. In 1985 and 1995, ICT networks were mainly used by the government and private business, but by 2005 ICT was widely used in the voluntary and community sectors. This trend was observed in both regions and was reinforced at a national level by the publication of the government's Digital Strategy (Ministry of Economic Development 2005). Community groups were using hard ICT networks to complement and reinforce existing soft networks.

By 2005, information technology was being increasingly used by the not-for-profit sector and Māori groups. However, though ICT networks were identified as playing a role in building interaction within a region, they were seen as complementary to, rather than a replacement for, face-to-face contact.

The results for this category were very strongly positive: at both the national and regional levels there was plenty of evidence of good social capital and active citizen involvement in civic life.
Infrastructure
The ideal learning region will have a well-developed telecommunications and transport infrastructure, together with institutional thickness, as demonstrated by lively interactions between different organisations in the region. In New Zealand the issue of infrastructure seems to be of more importance than in places with more developed infrastructure, such as Europe and the USA. A well-developed infrastructure tends to be taken for granted; it is when there are gaps that it becomes a more pressing issue.

At both national and regional levels there was significant investment in telecommunications infrastructure. Despite complaints that New Zealand was not keeping up in global terms, it was clear that successive governments were committed to developing telecommunications and believed it would strengthen the economy.

In both regions there were frictions between local and regional government, which worked against the development of institutional thickness. There was evidence of interaction between the different organisations within a region. However, frequent changes caused by local politics meant it was difficult for these networks to develop and grow.

The results for this category were also mixed. Though telecommunications were recognised as crucial to the economy, New Zealand's low population means that it is always going to be more expensive to build up infrastructure than in more densely populated countries, and the country is always going to struggle to improve its position in the OECD rankings when it comes to features such as broadband adoption. There was also a tension between the different levels of government that at times seemed to inhibit progress.
Income
The ideal learning region is consistently economically successful, with low unemployment rates.
Though the economy of both regions improved over the period studied, alongside reduced levels of unemployment, neither region exhibited the exceptional economic performance that would be expected of a learning region.
Though the findings for this section are a little disappointing, it should be noted that, as a small country located at some distance from major world markets, New Zealand faces a more difficult task than many other countries in developing its economy. The fact that the country has managed to make progress despite these challenges needs to be recognised.
Overview of findings
The positive areas were interconnecting, informing and interacting, and ICT was found to be making a contribution in all three areas. Between 1985 and 2005, organisations became much more interlinked in terms of their ICT networks, and information technology opened up access to the rest of the world.
ICT was used to increase interconnection at the regional level, particularly in the dairy farming, education and community sectors. These interconnections opened up new opportunities for regional learning and innovation. Both regions were successful in setting up high quality ICT networks, most notably in the education sector in Southland and the community sector in Wellington.
However, though ICT contributed to positive developments in these areas, it could not operate in a vacuum. The existence of good social networks and strong local champions was critical to regional development. ICT could complement these social networks but was no replacement for them. Therefore there was no direct cause and effect relationship between ICT and regional economic development.
Though many examples were found of positive initiatives in both regions, it was difficult for initiatives to gain momentum and achieve lasting change. At various points throughout the twenty-year period, initiatives were set up around establishing clusters, developing a regional strategy, setting up high technology zones or developing partnerships between education and business, but there was no evidence that such initiatives built steadily over the years. Proposed changes at a regional level seemed to be met with infighting and local resistance, which inhibited any steady long-term development. So though the soft networks formed by clusters, joint ventures and networks were present, no clear pattern of development could be observed.
In terms of infrastructure, the general picture that emerged is of a clear linear progression in the development of hard networks, but a more attenuated pattern for soft networks, where the same issues were revisited a number of times over the years. Though there was evidence of a relationship between the soft networks that existed at the regional level and the utilisation of hard ICT networks within a region, it was difficult to quantify.
A learning region is typically characterised by high levels of innovation, which in turn lead to economic success. Though New Zealanders have a reputation for being innovative, and examples were found of successful individual companies, neither region managed to develop anything close to a regional innovation system. The issues previously discussed are part of the reason. The findings of the research show that hard and soft networks evolve differently over time and that the relationship between the two is nuanced. Though good social capital existed in both regions, especially in rural Southland, it was located in different interest groups and was not easy to bring together. This lack of co-ordination meant that the possibilities opened up by ICT infrastructure in terms of increasing innovation were not fully realised. Both regions did demonstrate a strong commitment to learning, but this learning had yet to be translated into economic success.
Step Seven: Write the transcript
The transcript produced needs to be placed within the context of previous work. This research builds on the work on historical methods in information systems carried out by Mason et al. (Mason et al. 1997a; Mason et al. 1997b) by applying it at a regional rather than at an organisational level. It also provides an example of the use of a conceptual framework, the "6-I" model, for data analysis, as well as using regional newspapers as a source of data.
Historical research is often incomplete and provisional; it provides a rich, thick description of events that is particular and descriptive rather than analytical and general (Neuman 2003). A major goal of historical research is organising and giving new meaning to evidence rather than providing an authoritative account. This research fits in with this tradition by providing a detailed examination of the use of ICT in two regions of New Zealand over a fairly limited period of time. The findings demonstrate the important role that soft social networks play in the successful utilisation of ICT networks within a regional setting. This was found to hold true whether the technology being considered was videotex, the internet or ultra-fast broadband.
LESSONS LEARNED
The following section reflects on the lessons learned at each stage of the research.
Step One: Begin with focusing questions
This research asked a broad, big picture question, and historical methods was chosen as the best approach to address it because of the ability to provide deep and wide insights. Learning regions take time to grow, and the development of social capital, cultural values and the build-up of networks are most meaningfully examined using a long time perspective. However, big picture questions need to be made manageable, according to the resources available to the researcher. In this case the research question was broken down by using a framework developed from the literature review. This imposed some order on the research process by facilitating the selection and ordering of relevant data.
Step Two: Specify the domain for the enquiry
The issue of deciding on the appropriate scope for the research is critical. Most historians would consider looking back only twenty years as barely touching the tip of the iceberg. To some extent this can be justified by the fact that ICT is a fairly recent phenomenon. However, in terms of social networks it would have been useful to dig further back into the history of each region.
One of the most difficult aspects of using historical methods for an IT researcher is setting an end date; the rapid rate of new developments in the IT field means that it requires immense self-discipline to put them to one side while concentrating on the past. In the case of this research there seemed to be an almost constant stream of new initiatives around the issue of broadband, which were very difficult to ignore. Alongside this is the concern that the findings of the research will be dismissed as out-of-date and irrelevant.
Step Three: Gather evidence, using both primary and secondary sources
When using historical methods the availability of data is a key issue: if there is no data, there is no story. At an early stage the researcher needs to establish if there is enough information available to answer the research question. One frustration with this research was the difficulty of finding accurate data at the regional level; though Statistics New Zealand now collects regional statistics, they were not available for the earlier parts of the period studied.
Another issue when using historical methods is the large quantity of data that is collected; this is not only time consuming, it also creates the challenge of ordering and categorising the data in order to make it meaningful. Details and individual incidents may be significant, but overall findings have to be reported in a concise fashion. The technique used in this research to organise the data was a conceptual framework; in this case a model of an "ideal" learning region, the "6-I" framework, was used to organise the data into categories. Another strategy was to collect detailed data for only three key years, 1985, 1995 and 2005, during the twenty-year period studied. This meant there were ten-year gaps between each collection point during which a lot of information was missed, meaning that the data collected can only be regarded as a snapshot in time. Strictly, this means the research could be categorised as a longitudinal study rather than true historical research (Bannister 2002). This was ameliorated to some extent by the use of other materials such as statistical reports, but it is still a limitation of the research.
Step Four: Critique the evidence. Is it authentic and credible?
The use of newspapers for historical research raises questions about whether such materials are a good source for historical truth, as reporting can be biased and inaccurate. Some steps were taken to address this, such as cross-checking events across a range of publications and using reports produced by independent bodies, but it does need to be acknowledged that newspapers can be fallible. Contradictions were found. Different articles on the same topic often contained conflicting facts and figures; claims made by politicians were not supported by the statistical evidence. Every effort was made to try and resolve these contradictions by cross-checking data from a number of sources, but in many cases this was not possible and data was presented as found.
The trustworthiness of qualitative research is always open to question; newspapers have an advantage over data collected by techniques such as interviews or focus groups, in that they are in the public eye. Newspapers can face libel actions if they publish inaccurate information, so journalists take some steps to check their facts, and readers have a feedback mechanism in the form of the letters page.
One of the techniques of historical research is to listen for "silences", in other words to work out what is missing from the data. The regional newspapers did not provide good coverage of the industries in their regions, and initiatives such as the formation of business clusters tended to be under-reported.
This issue relates back to Step Three: in this case newspapers were used extensively as little other data was found at the regional level. A researcher not only needs to consider if there is data available to answer the research question, but also to assess the quality of that data before proceeding.
Step Five: Determine patterns using inductive reasoning
Mason, McKenney & Copeland have outlined three approaches that can be used for this: conceptual frameworks, causal chain analysis and establishing empathy with the main participants. It is important for the researcher to decide at an early stage which approach they are going to use, as this will affect both the research question and the approach taken to data gathering. Establishing empathy is the most common approach used to date and is suitable for a study of one organisation; it also has the advantage of producing a compelling story. This research has demonstrated the use of a conceptual framework; causal chain analysis is potentially the most rigorous approach, but also the most challenging.
Step Six: Tell the story
The main goal of historical research is to produce a narrative. However, due to the extensive data collection that the use of historical methods usually involves, that story is often rather long and very detailed. This creates issues for researchers who are under pressure to get their work published. Currently in information systems, publishing in journals is given more weight than writing a book, but it is often difficult to compress the findings of historical research into the word limits set by journals.
Step Seven: Write the transcript
The researcher needs an understanding of where their work fits in with previous studies; they should be aware of previous research in the area, and of what contribution will be made by their study.
CONCLUSION
This article gives an overview of the use of historical research in information systems and provides an example of how historical methods was used in a research project. In particular, the seven-stage method of Mason et al. (1997b) was applied to explore the role of ICT in facilitating the development of two learning regions in New Zealand.
ICT is a maturing discipline. Even though New Zealand was a relatively late adopter of ICT, with the first mainframe computer, an IBM 650 for the Treasury, not arriving in the country until 1960 (Newman, 2008), computers have still been around for over fifty years. Even before this, the precursors to computers in the form of tabulating machines (Yates, 2005) and totalisers (Doran, 2006-2007) were in widespread use. This gives researchers a long enough period to study the use of ICTs within the broader social, cultural and economic context. The reasons for the success of ICT in one setting and its failure within another become clearer. For example, the rapid adoption of the Internet in New Zealand in 1995 can be seen in the context of a strong desire by citizens of a remote country to improve their connections with the rest of the world; this longing for fast and affordable international communications can even be traced back to Henniker Heaton's campaign for a penny post between Australia and the rest of the Commonwealth (De Garis, 1972).
Over time, historical research reveals underlying patterns which enable cause and effect to be established; this provides researchers with greater insights into the reasons behind the differing fortunes of ICT systems in different contexts. While the scope of this research was too limited to definitively establish cause and effect, underlying patterns were revealed, in particular the impact of power struggles between different groups within the two regions, which often worked against the long-term success of new initiatives.
The strength of historical research is that it takes a big picture approach which considers developments in information systems within the context of wider changes at the organisational, regional or national level. However, the big picture approach can prove a weakness as well as a strength; tackling a large-scale problem often means that the contribution of any one piece of research is rather limited. Historical research should be regarded as a transcript which needs to be placed within the context of previous work (Mason et al., 1997b). It is often incomplete and provisional, providing a rich, thick description of events that is particular and descriptive rather than analytical and general (Neuman, 2003). A major goal of historical research is organising and giving new meaning to evidence rather than providing an authoritative account. New historical research in information systems should be regarded as providing only part of the big picture and should not be judged in isolation but evaluated on the contribution it makes towards building up that authoritative account.
Information systems is a discipline that prides itself on being forward looking; there is a tendency for researchers to focus on the latest trends, and out-of-date technology is often dismissed as irrelevant. However, people change more slowly than technology and patterns of behaviour tend to repeat themselves. There are many lessons to be learned from the past, and as the information systems discipline matures, historical methods will form a useful addition to the information systems researcher's toolkit.
Figure 1: Internal and External Criticism (from Neuman, 2003, p. 421)
Regional newspapers are an authentic primary source. The location and time of reporting are recorded, and for many of the later articles the author is also recorded. Statistics from organisations such as Statistics New Zealand and the Organisation for Economic Co-operation and Development (OECD) can also be regarded as authentic as they have official national and international approval. They also tend to have good response rates; for example, Statistics New Zealand surveys generally get a greater than 80% response rate. The statistics were used to cross-check and confirm assertions made in the newspaper articles. As previously mentioned, the large number of articles used increases authenticity; the articles can be counted, and if an issue was significant it would be reported on a number of times, both within one newspaper and across different newspapers.
Table 2: The 6-I Framework
Table 3: Location and role of interviewees
Table 4: Newspaper Statistics for 1985 to 2005
Table 5: Data Collection Rounds
"year": 2013,
"sha1": "506cc3af5d033f8abc0991229cabd775fca94048",
"oa_license": "CCBYNC",
"oa_url": "https://journal.acs.org.au/index.php/ajis/article/download/798/556",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "506cc3af5d033f8abc0991229cabd775fca94048",
"s2fieldsofstudy": [
"History",
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Assessment of facial nerve injury using Deep Subfascial approach to Temporo-mandibular joint
Figure 1: Right (unilateral) TMJ ankylosis
Figure 2: Bilateral TMJ ankylosis
ABSTRACT
Background: Surgical access to the temporomandibular joint (TMJ) and zygomatic arch is a challenge even to the experienced maxillofacial surgeon. The conventional subfascial approach to these structures carries the potential risk of transient paralysis of the frontalis and orbicularis oculi muscles. The deep subfascial approach provides an additional layer of protection (the deep layer of the temporalis fascia and the superficial temporal fat pad) to the temporal and zygomatic branches of the facial nerve and is thus the safest method for avoiding facial nerve injury.
Aims and Objectives: To assess facial nerve (FN) injury following TMJ surgery using a deep subfascial approach, measured on the House and Brackmann facial nerve grading system (HBFNGS).
Materials and Methods: A randomized study was performed from August 2013 to March 2017 on 24 patients with unilateral and bilateral TMJ ankylosis. All patients were evaluated objectively for facial nerve injury with the House and Brackmann facial nerve grading system post-operatively, and subjectively, at various time points: 24 hours, 1 week, 1 month, 3 months and 6 months.
Results: On the House-Brackmann facial nerve grading system at 24 hours post-operatively, in the deep subfascial approach group, 91.7% of patients (23 cases) had Grade 1 injury and 8.3% (1 case) had Grade 3 injury. The condition improved with time, with full recovery of facial nerve function at all surgical sites at 6 months.
Conclusion: The deep subfascial approach has a distinct advantage over the conventional approaches when dissecting the temporal region and is the safest method to avoid facial nerve injury.
INTRODUCTION
The temporomandibular joint (TMJ) is a commonly affected joint due to various ailments like ankylosis of the joint, condylar fractures, internal derangements, TMJ pathologies, zygomatico-maxillary complex (ZMC) fractures, etc., which certainly require exposure of the joint and its nearby structures. However, there is a major limitation in the access to the joint, i.e. the facial nerve and its branches. While performing surgical procedures on the TMJ, the temporal branch is among the facial nerve branches most endangered to injury. Contemporary literature reveals an incidence of facial nerve paresis of 1.5% to 32% of patients. 1,2 Modern publications describe three approaches for TMJ exposure: dissection that is superficial to the superficial temporal fascia, 3 exploitation of the surgical plane between the 2 layers of the temporalis fascia (subfascial), 4 and an approach deep to both layers of the temporalis fascia (deep subfascial). 5,6,7 This nerve lies in a condensation of the superficial fascia. 11 Moreover, depletion in facial nerve function impedes psychic expression, causes functional deficiency, and can lead to an esthetic deformity which might lead to overwhelming loss of quality of life. 8,9 Recently, the deep subfascial approach has been advocated by many researchers as the safest approach in terms of preserving the integrity of the temporal branch of the facial nerve compared with the conventional approaches to the TMJ, because this nerve lies in a condensation of the superficial fascia. [10][11][12][13] The present study aimed to assess facial nerve (FN) injury following TMJ surgery using a deep subfascial approach, measuring it on the House and Brackmann facial nerve grading system. 14,15
Aims and objectives
To assess FN injury following TMJ surgery using a deep subfascial approach, measuring it on the House and Brackmann facial nerve grading system (HBFNGS).
MATERIALS AND METHODS
To concentrate on the study aim, the researchers outlined and executed a randomized controlled clinical trial which was carried out in the Department of Oral and Maxillofacial Surgery. The trial was sanctioned by the institutional review board and local ethics committee. The study followed the benchmarks set by the Declaration of Helsinki. All patients without any systemic complications, who strictly satisfied the inclusion guidelines, were included. The study population was composed of all patients presenting for evaluation and management of unilateral and bilateral TMJ ankylosis from August 2013 to March 2017. To be included in the study sample, patients had to fulfil the following inclusion criterion: an established diagnosis of unilateral or bilateral TMJ ankylosis as proven by clinical and radiological diagnosis (Figures 1 and 2). Patients were excluded as study subjects if they were American Society of Anesthesiologists physical status classification system (ASA) III or IV compromised patients, had previous or current neurological disease that may adversely affect facial nerve function, did not provide written informed consent, or were not willing to attend follow-up appointments.
The diagnostic work-up was done for all patients; it included clinical examination and radiographic presentation. Standard lab investigations were done for all participants. Written informed consent was taken from all the subjects. Regardless of age and sex, the randomization of the patients was done using a slot method. In order to control the bias in the study, a single operator performed all the surgeries under general anaesthesia along with standard aseptic provisions and protocol.
Technique
A preauricular skin incision extending to the temporal region, curving backwards and upwards posterior to the main branches of the temporal vessels, was performed. The temporal component of the skin incision was made at 45° to the zygomatic arch, from the superior auriculocutaneous junction. The incision was carried through the subcutaneous tissue, the superficial temporalis fascia, and the areolar fat tissue. Blunt dissection was carried out downwards, to a point 2 cm above the malar arch where the deep temporalis fascia splits into 2 layers containing fat tissue. Modification of the surgical technique places the incision of the upper and lower layers of the deep temporalis fascia completely through the fat tissue, exposing the fibers of the temporal muscle and producing a new subfascial layer (under the deep temporal fascia) (Figure 3). Once inside this pocket, the periosteum of the malar arch can be safely incised and turned forward. The pocket can be developed anteriorly, allowing a safe and comfortable surgical approach to the articular capsule. This composite fasciocutaneous temporal flap includes: skin, subcutaneous tissue, superficial temporalis fascia, loose areolar tissue, the superficial layer of the deep temporalis fascia, the temporal fat pad, and the deep layer of the deep temporalis fascia. With this method an additional protective fascial layer is produced for the facial nerve. The dissection proceeds with meticulous running on the muscle fibers to the malar arch and capsule of the TMJ (Figure 4). Finally, the fascial layer can be repositioned and sutured, covering the temporal muscle.
The following variables were evaluated: 1. Facial nerve function: the House-Brackmann grading framework was utilized to survey motor function of the facial nerve. 15,16,17 It is a clinical strategy for assessing facial nerve damage that is very extensive and incorporates vital items, for example the appearance of the frontal, periorbital and peribuccal musculature, both at rest and in movement. It was presented in 1983 for clinical use and was changed by Brackmann in 1985. On the proposal of the Facial Nerve Disorders Committee, it was formally received as the general standard for detailing facial nerve work by the American Academy of Otolaryngology-Head and Neck Surgery in 1984. It has an accuracy of 93% among diverse evaluators. Appraisal was done pre- and postoperatively at 24 h, 1 week, 1 month, 3 months, and 6 months utilizing the House-Brackmann Facial Nerve Grading System. The patients were analyzed in the following positions: at rest, raising the eyebrows, shutting the eyes with least exertion and with maximal endeavor, and blowing the mouth.
All twenty-four patients were treated for unilateral and bilateral TMJ ankylosis.
All patients were evaluated objectively with the facial nerve function test before surgery (D0) and post-surgically after 24 hours (D1), 7 days (D2), 30 days (D3), 90 days (D4), and 180 days (D5). Moreover, in order to determine quality of life, all patients were followed post-surgically at 1 month and 6 months. To control bias, the nerve function test and quality of life questionnaire were evaluated by another surgeon. Patients were re-evaluated for any clinical recurrence on all visits. Computed tomography (CT scan) was advised if clinical examination revealed any doubt.
RESULTS
The present study was conducted on 24 patients with unilateral and bilateral TMJ ankylosis. Facial nerve injury was analyzed using the House-Brackmann system at the various time periods, i.e. 24 hours, 1 week, 1 month, 3 months and 6 months. The results were as follows: 1) House-Brackmann facial nerve grading system at 24 hours post-operatively: in the deep subfascial approach group, 91.7% of patients (22 cases) had Grade 1 injury and 8.3% (2 cases) had Grade 3 injury (Table 1). 2) At 1 week post-operatively: in the deep subfascial approach, 100% of patients had grade 1 injury, i.e. nerve function became normal in all patients of the deep subfascial group (Table 2). 3) At 1 month post-operatively: in the deep subfascial approach, all patients remained in the grade 1 category (Table 3). 4) At 3 months post-operatively: in the deep subfascial approach, 100% of patients (24 cases) continued to be in the grade 1 category, i.e. normal functioning of the facial nerve remained for all patients (Table 4). 5) At 6 months post-operatively: in the deep subfascial approach, 100% of patients (24 cases) remained in grade 1.
Complete recovery at all surgical sites proves the point that the deep subfascial approach is the safest among the preauricular approaches as far as facial nerve injury (FN) is concerned.
No sign of infection was observed in any patient in the follow-up appointments. The presence of Frey's syndrome, defined as "perspiration of skin around the preauricular area while eating", was assessed at follow-up at 1 week, 1 month, 3 months, and 6 months postoperatively and was not evident in any of the patients.
In all the surgical sites, at 6 months follow-up, scar was imperceptible and esthetically acceptable.
DISCUSSION
The main purpose of this study was to evaluate the safety of a deep subfascial approach with respect to facial nerve injury following management of unilateral and bilateral TMJ ankylosis. Over the years, a number of surgical approaches to the TMJ have been developed to attain the goal of successful removal of ankylotic mass, treating TMJ pathologies and condylar fractures.
In a study conducted by do Egito Vasconcelos BC et al. on facial nerve function after surgical procedures for the treatment of temporomandibular pathology, using the House-Brackmann facial nerve grading system (HBFNGS) on 32 patients with 50 joint pathologies treated via a subfascial approach, they found that of the 32 patients, 12.5% (8% of the 50 approaches) revealed signs of facial nerve injury.
There was a significant amount of post-operative facial nerve injury in the patients who underwent surgery for TMJ ankylosis (p=0.014) and for gap arthroplasty patients (p=0.014). The study reveals that at 24 hours, none of the patients showed total nerve paralysis or severe dysfunction, only a moderately severe dysfunction (50%) or moderate dysfunction (50%). The forehead area was the most commonly affected. However, at 3 months follow-up, all patients acquired normal facial nerve function. 16 Our study showed similar results; however, there was no significant injury present at the six-month time interval. This high frequency of nerve injury in our study during a subfascial approach, up to a period of 1 month, could have been due either to heavy retraction causing compression and/or stretching of nerve fibres resulting in neuropraxia, or, in a few cases, to inadvertent suture ligation of facial nerve fibres. Politi et al. applied the deep subfascial approach to 21 patients and did not observe any temporary or permanent facial nerve function loss.
They reported that the facial nerve had been safely avoided, and the function of the auriculotemporal nerve was also preserved. 10 Similarly, Kenkere et al. carried out a detailed study on 12 patients and made 15 surgical exposures to access the TMJ and zygomatic arch. They used a deep subfascial approach and found that no functional deficit was observed in either the temporal or zygomatic branches of the facial nerve, as ascertained by clinical examination. 12 Likewise, in our study, of the 12 patients in whom a deep subfascial approach was used, only one patient had grade 3 nerve injury; the remaining 11 showed grade 1 nerve injury at 24 hours post-operatively, reflecting the low frequency of nerve injury during the deep subfascial approach. The evaluation was done in terms of facial nerve injury using House-Brackmann grading. The present study used the Face instrument to analyse patient perception regarding the approaches. The results of the study clearly indicate that no significant difference exists between the two approaches in consideration of facial nerve injury during long-term follow-up. 9 Further studies with a larger number of cases and multicentre involvement can be done for a more definitive conclusion.
Although at the 6-month follow-up both approaches showed 100% recovery, we found that the deep subfascial approach is a better approach during the initial follow-up of the patient in terms of facial nerve injury, since this technique provides an additional layer of protection (the deep layer of the temporalis fascia and the superficial temporal fat pad) to the temporal and zygomatic branches of the facial nerve. The plane of dissection is distinctly identifiable and reliable, and the technique is simple to use with basic knowledge of the anatomy of the region. This technique is indicated for any surgery of the TMJ, including ankylosis and the zygomatic arch, and especially for secondary surgeries in the temporal region and TMJ, as well as correction of post-traumatic deformity of the zygomatic arch and complex.
CONCLUSION
A deep subfascial approach has proven to be a safer surgical procedure with respect to facial nerve injury as compared to a routine subfascial approach, although long-term follow-up renders the differences between the approaches insignificant. Nonetheless, a deep subfascial approach can be considered for both routine and complex TMJ ankylosis cases.
"year": 2021,
"sha1": "5b80e63ff3d01d5a06f2b1e6cc67fc60a875ea46",
"oa_license": "CCBYNC",
"oa_url": "https://www.nepjol.info/index.php/AJMS/article/download/30784/26019",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f5fd82b706bcf766714a7312e57be2edfa22e3a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Missed opportunities for earlier diagnosis of HIV in British Columbia, Canada: A retrospective cohort study
Background: Late HIV diagnosis is associated with increased AIDS-related morbidity and mortality as well as an increased risk of HIV transmission. In this study, we quantified and characterized missed opportunities for earlier HIV diagnosis in British Columbia (BC), Canada.
Design: Retrospective cohort.
Methods: A missed opportunity was defined as a healthcare encounter due to a clinical manifestation which may be caused by HIV infection, or is frequently present among those with HIV infection, but no HIV diagnosis followed within 30 days. We developed an algorithm to identify missed opportunities within one, three, and five years prior to diagnosis. The algorithm was applied to the BC STOP HIV/AIDS population-based cohort. Eligible individuals were ≥18 years old and diagnosed from 2001–2014. Multivariable logistic regression identified factors associated with missed opportunities.
Results: Of 2119 individuals, 7%, 12% and 14% had ≥1 missed opportunity during one, three and five years prior to HIV diagnosis, respectively. In all analyses, individuals aged ≥40 years, heterosexuals or people who ever injected drugs, and those residing in the Northern health authority had increased odds of experiencing ≥1 missed opportunity. In the three- and five-year analyses, individuals with a CD4 count <350 cells/mm3 were at higher odds of experiencing ≥1 missed opportunity. Prominent missed opportunities were related to recurrent pneumonia, herpes zoster/shingles among younger individuals, and anemia related to nutritional deficiencies or unspecified cause.
Conclusions: Based on our newly-developed algorithm, this study demonstrated that HIV-diagnosed individuals in BC have experienced several missed opportunities for earlier diagnosis. Specific clinical indicator conditions and population sub-groups at increased risk of experiencing these missed opportunities were identified. Further work is required in order to validate the utility of this proposed algorithm by establishing the sensitivity, specificity, positive and negative predictive values corresponding to the incidence of the clinical indicator conditions among both HIV-diagnosed and HIV-negative populations.
Introduction
Despite advances in HIV testing programs and improved access to healthcare services, late HIV diagnosis remains a problematic reality in high-resource settings [1][2][3]. Late presentation to HIV testing has been associated with pre-diagnosis encounters with healthcare providers where an HIV test was indicated, but not ordered; thereby resulting in missed opportunities for earlier HIV detection [4]. Consequences associated with late HIV diagnosis extend across various dimensions of the HIV epidemic: at the individual level, an increased disease burden (e.g., high mortality, risk of hospitalization, and AIDS-defining illness) [5][6][7]; at the population level, an exacerbated HIV transmission risk [8]; and at a structural level, an amplified healthcare resource utilization and related expenditures [9].
British Columbia (BC), Canada, is the first jurisdiction to implement Treatment as Prevention (TasP), expanded under the Seek and Treat for Optimal Prevention of HIV/AIDS (STOP HIV/AIDS) initiative, which encompasses widespread HIV testing and immediate initiation of free antiretroviral therapy (ART) [10]. Since 2014, provincial HIV testing guidelines have recommended routine testing (i.e., every five years) for individuals aged 18-70 years and annual testing for populations with a higher burden of HIV [11]. HIV testing (nominal and non-nominal) is free of charge for all BC residents. Despite widespread access to HIV testing and the existence of provincial HIV testing guidelines, in 2017, nearly a quarter of diagnosed people living with HIV (PLWH) presented to care with a CD4 count <350 cells/mm3 [12]. Taken together, this body of evidence strongly indicates existing opportunities to further optimize the TasP strategy by diagnosing PLWH earlier in the course of HIV infection.
Thus, efforts to understand missed opportunities are crucial to minimize the aforementioned adverse outcomes, and to further optimize TasP in BC and other settings. In this context, insights pertaining to characteristics associated with missed opportunities will be instrumental in achieving the Joint United Nations Programme on HIV/AIDS' (UNAIDS) target of diagnosing at least 90% and 95% of all PLWH by 2020 and 2030, respectively [13].
In this population-based study, we proposed a case-finding algorithm, based on administrative data, to identify missed opportunities for earlier HIV diagnosis. This algorithm was fundamental in addressing our study objectives, which were: i) to quantify missed opportunities corresponding to a clinical manifestation which may be caused by HIV infection, or is frequently present among those with HIV infection (i.e., clinical indicator conditions) among diagnosed PLWH in BC; and ii) to identify specific clinical indicator conditions and population sub-groups associated with experiencing these missed opportunities. Findings are expected to inform evidence-based recommendations for future interventions, policies, and guidelines surrounding HIV testing.
Study setting
In BC, ART is provided free of charge (no copayments and deductibles) to all HIV-diagnosed BC residents. ART provision has been under the auspices of the BC Centre for Excellence in HIV/AIDS (BC-CfE) Drug Treatment Program (DTP) since 1992. ART eligibility in BC is based on the BC-CfE's HIV therapeutic guidelines, which have remained generally consistent with those put forward by the International Antiviral Society-USA since 1996 [14].
HIV testing guidelines and practices have evolved in light of sustained advancements in testing technologies resulting in shortened window periods, the availability of point-of-care testing [15], and the emergence of compelling evidence supporting TasP [16]. Conventionally, testing guidelines were based on the presence of HIV symptoms and known HIV risk factors [17]. In 2010, the BC-CfE set new guidelines recommending that healthcare providers routinely offer an HIV test to sexually active individuals who present to care without having been tested in the year prior, individuals with a history of sexually transmitted infections (STI), and those tested for Hepatitis C, STI, or tuberculosis [17]. In 2014, the Office of the Provincial Health Officer further expanded guidelines by recommending annual testing for those in populations with high HIV burden and testing every five years for everyone else [11].
Study data
Data were obtained from the STOP HIV/AIDS population-based cohort, which is composed of individual-level longitudinal data on all diagnosed PLWH in the province, by virtue of linkages between various provincial databases and the DTP [10,[18][19][20][21][22]. These linkages, along with their corresponding data capture, are comprehensively detailed in the S1 File.
Study design
The eligibility criteria for this retrospective cohort study were as follows: i) ART-naïve individuals aged ≥18 years, ii) who were diagnosed between 1 January 2001 and 31 March 2014, and iii) had a CD4 count measurement within six months. Individuals diagnosed in the acute stage of HIV infection were not considered for missed opportunities given that these infections were detected in a timely manner.
Due to the population-based aspect of this study, HIV testing data were not the exclusive source utilized to ascertain HIV-diagnosed individuals in BC. A considerable number of those HIV-diagnosed have simply never been formally diagnosed via HIV antigen/antibody screening tests (i.e., a confirmed HIV-positive test) [10]. Additional data sources of HIV diagnosis play a critical role in supplementing HIV testing data to construct a comprehensive population-based cohort of all HIV-diagnosed individuals.
Thus, HIV diagnosis was ascertained by one of the following validated criterion [23]: a confirmed HIV-positive test, a detectable plasma viral load >50 copies/mL, an HIV-related hospitalization, three HIV-related outpatient care visits, a reported AIDS-defining illness, or ART dispensation. Date of HIV diagnosis was ascertained by the first instance of one of the abovementioned criteria.
The restriction to ART-naïve individuals was applied to ensure that individuals were accurately classified as a new positive HIV case. Acute HIV infection was defined based on meeting the laboratory criteria (i.e., detection of HIV DNA or RNA by nucleic acid amplification test [NAT], or detection of p24 antigen in the absence of confirmed detection of HIV antibody), or a previous negative or indeterminate HIV test within 180 days of the first confirmed positive HIV test [24].
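To illustrate how the diagnosis date follows from these criteria, the minimal Python sketch below (not the study's actual code; the record layout, dates, and function name are hypothetical) takes the earliest qualifying event, with count-based criteria such as the three HIV-related outpatient visits assumed to have already been collapsed to a single qualifying date:

```python
from datetime import date

# Hypothetical qualifying events for one individual, each already reduced to
# a single date (e.g., the date of the third HIV-related outpatient visit).
events = [
    ("confirmed_positive_test", date(2008, 6, 2)),
    ("detectable_viral_load",   date(2008, 5, 20)),   # pVL > 50 copies/mL
    ("hiv_hospitalization",     date(2009, 1, 14)),
    ("art_dispensation",        date(2008, 7, 1)),
]

def diagnosis_date(events):
    """Date of HIV diagnosis: the first instance of any validated criterion."""
    return min(d for _, d in events) if events else None

print(diagnosis_date(events))  # 2008-05-20
```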
Analytical approach and measures
This study consisted of a three-pronged analysis. For all individuals, information pertaining to clinical events identified from healthcare encounters was reviewed for missed opportunities (i.e., the outcome of interest) throughout the following time-frames: i) one year, ii) three years, and iii) five years prior to the HIV diagnosis date. Note that these time-frames were not mutually exclusive, and missed opportunities identified from the five-year analysis also incorporate those identified in the three- and one-year analyses.
This approach strictly served as a sensitivity analysis to account for the considerable individual variability in CD4 counts associated with the natural course of untreated HIV infection. Thus, the inclusion of individuals in each of the analyses was based on the stage of HIV infection at diagnosis. All three analyses comprised individuals with a CD4 count <500 cells/mm3 at HIV diagnosis. Individuals with a CD4 count ≥500 cells/mm3 at HIV diagnosis were only included in the one-year analysis in an effort to minimize potential overestimation of the outcome, as these infections are generally of shorter duration. This decision was supported by a large study which estimated that CD4 count depletion reaching the 500 cells/mm3 threshold occurs on average one year after HIV seroconversion [25]. Other studies opted to examine missed opportunities for HIV diagnosis among individuals diagnosed at ≤350 cells/mm3 [26,27], while others have not adhered to any restrictions on the basis of CD4 counts [4,28].
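A minimal sketch of this inclusion rule, using hypothetical function and variable names, could look as follows:

```python
def analysis_windows(cd4_at_diagnosis):
    """Look-back windows (in years) that an individual contributes to.

    Individuals with CD4 < 500 cells/mm3 at diagnosis enter the 1-, 3-, and
    5-year analyses; those diagnosed at >= 500 cells/mm3 enter only the
    1-year analysis, as such infections are generally of shorter duration.
    """
    return [1] if cd4_at_diagnosis >= 500 else [1, 3, 5]

assert analysis_windows(620) == [1]
assert analysis_windows(340) == [1, 3, 5]
```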
Proposed algorithm to identify a missed opportunity
A missed opportunity was defined as a healthcare encounter due to a clinical manifestation which may be caused by HIV infection, or is frequently present among those with HIV infection (i.e., clinical indicator conditions), where no HIV diagnosis followed within a 30-day period. A healthcare encounter was considered to be due to a clinical indicator condition if at least one of a set of criteria was met. These clinical indicator conditions are comprehensively presented in Table 1 and S1 Table. The presence of a clinical indicator condition was ascertained by means of an algorithm using International Classification of Disease (Ninth and Tenth Revisions) (ICD 9/10) diagnosis codes. This algorithm was applied to the following STOP HIV/AIDS administrative databases: i) the Discharge Abstract Database (DAD), which captures diagnostic information on the circumstances of all in-patient discharges, transfers, and deaths, as well as day surgery patients from acute care hospitals across BC; and ii) the Medical Services Plan (MSP) billing database, which captures diagnostic information related to inpatient and outpatient services provided by physicians and supplementary healthcare practitioners, as well as diagnostic procedures. Given that multiple diagnosis records may correspond to the same unique healthcare encounter, we employed a set of rules to identify distinct encounters. A healthcare encounter in MSP was considered to be unique if one of the following conditions was fulfilled [31]: 1. the date on which the service was provided by a practitioner was different; or 2. the practitioner's specialty was different; or 3. the location in which the service occurred was different.
An encounter in the DAD was considered to be unique from the admission date to the discharge date, including transfers which may have occurred between acute care institutions. In the event that multiple clinical indicator conditions were diagnosed within the same healthcare encounter, only one missed opportunity was counted.
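The following pandas sketch illustrates these de-duplication rules together with the 30-day condition from the definition above; the column names and records are hypothetical stand-ins for the MSP fields:

```python
import pandas as pd

# Hypothetical MSP billing records for one individual diagnosed on 2007-04-25.
msp = pd.DataFrame({
    "person_id":  [1, 1, 1, 1],
    "service_dt": pd.to_datetime(
        ["2007-03-02", "2007-03-02", "2007-03-02", "2007-04-10"]),
    "specialty":  ["GP", "GP", "ID", "GP"],
    "location":   ["office", "office", "hospital", "office"],
})
dx_date = pd.Timestamp("2007-04-25")

# Rules 1-3: a record belongs to a new encounter if the service date, the
# practitioner's specialty, or the location differs. Collapsing duplicates
# also enforces "at most one missed opportunity per encounter".
encounters = msp.drop_duplicates(
    subset=["person_id", "service_dt", "specialty", "location"])

# A missed opportunity additionally requires that no HIV diagnosis followed
# the encounter within 30 days.
missed = encounters[(dx_date - encounters["service_dt"]).dt.days > 30]
print(len(missed))  # 2 of the 3 unique encounters qualify
```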
Several data quality control measures have been implemented to ensure coding accuracy [32,33]. For the DAD, professionally trained coders are responsible for translating the diagnoses from medical charts into ICD-10 codes [34]. The Canadian Institute for Health Information evaluates coding accuracy by conducting reabstraction studies, which involve returning to the original source (e.g., medical charts) and comparing the information with the DAD [32]. These studies have been conducted routinely and have yielded favorable results as well as areas for improvement [35]. All acute care facilities in BC are required to report data to the DAD [36]. For MSP, billing records are submitted electronically by practitioners' offices to MSP in ICD-9 format. Audits and quality checks for select data fields are subsequently conducted by MSP [18]. It has been demonstrated that these codes are valid at the population level [37]. Nearly 100% of BC residents are covered under MSP [38].
To enhance the rigor of this algorithm, specific restrictions were imposed where indicated for specific clinical indicator conditions. Recurring conditions required at least two clinical events diagnosed within a 12-month period, while the first clinical event was not considered a missed opportunity. Chronic clinical indicator conditions (i.e., >1 month duration) were considered a missed opportunity only if diagnosed by a specialist. This rule was imposed since administrative data cannot directly capture the duration of a condition; however, conditions diagnosed by a specialist can serve as a proxy measure. For example, conditions with long-standing symptoms may be referred to specialists and are typically associated with potentially lengthy wait times after the initial consultation with a general practitioner [39].
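As an illustration of the recurrence restriction, the hypothetical helper below counts an event only when an earlier event of the same condition fell within the preceding 12 months, so the first event never counts (a sketch under these stated assumptions; the study's exact implementation may differ):

```python
from datetime import date

def recurrent_missed_opportunities(event_dates, window_days=365):
    """Return the event dates that count as missed opportunities."""
    dates = sorted(event_dates)
    countable = []
    for i, d in enumerate(dates[1:], start=1):
        # Count the event only if some earlier event of the same condition
        # occurred within the prior 12 months; the first event never counts.
        if any((d - earlier).days <= window_days for earlier in dates[:i]):
            countable.append(d)
    return countable

pneumonia = [date(2006, 1, 5), date(2006, 9, 3), date(2009, 2, 1)]
print(recurrent_missed_opportunities(pneumonia))  # only 2006-09-03 counts
```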
Explanatory variables
The explanatory variables measured at baseline (i.e., date of HIV diagnosis) included: gender (female and male), age (<30, 30-39, 40-49 and ≥50 years), risk for HIV acquisition (gay, bisexual, and other men who have sex with men [gbMSM], people who have ever injected drugs [PWID], gbMSM/PWID, heterosexual/other, and unknown), CD4 count (<200, 200-349, 350-499, ≥500 cells/mm3), calendar year of HIV diagnosis (continuous), BC health authority (Fraser, Interior, Northern, Vancouver Coastal, Vancouver Island, and unknown), and rurality (urban, mixed, rural, and unknown). The health authorities are responsible for the management and delivery of health services in geographically defined areas of BC [40]. Of note, Vancouver Coastal is the largest health authority in BC, caring for >50% of PLWH [41], and is where the DTP is located, while Northern is one of the most remote health authorities. Rurality was categorized by classifying local health areas (i.e., 89 provincial health regions aggregated up to the five health authorities) in accordance with their degree of rurality, as detailed in the S1 File.
Statistical analysis
Categorical variables were compared using Fisher's exact test or the Chi-square test, and continuous variables were compared using the Kruskal-Wallis test [42]. For each of the one-year, three-year, and five-year analyses, an explanatory multivariable logistic regression model was used to model the probability of having 0 versus ≥1 missed opportunity. Model selection was conducted using a backward elimination procedure based on the Akaike Information Criterion (AIC) and Type III p-values [43]. All p-values are two-sided, and the level of significance was set at 5%. All analyses were performed using SAS version 9.4 (SAS, Cary, NC, USA).
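The study used SAS; as a language-agnostic illustration of backward elimination guided by the AIC, a minimal Python sketch using statsmodels could look as follows (the data frame and variable names are hypothetical, and the Type III p-value criterion is omitted for brevity):

```python
import statsmodels.formula.api as smf

def backward_aic(df, outcome, candidates):
    """Greedily drop the variable whose removal most lowers the model AIC."""
    current = list(candidates)
    best_aic = smf.logit(f"{outcome} ~ " + " + ".join(current), df).fit(disp=0).aic
    improved = True
    while improved and len(current) > 1:
        improved = False
        for var in list(current):
            trial = [v for v in current if v != var]
            aic = smf.logit(f"{outcome} ~ " + " + ".join(trial), df).fit(disp=0).aic
            if aic < best_aic:  # removal improved the fit-parsimony trade-off
                best_aic, current, improved = aic, trial, True
    return current, best_aic

# Usage, assuming `cohort` holds one row per individual with a 0/1 outcome:
# final_vars, aic = backward_aic(
#     cohort, "missed_opportunity",
#     ["age_group", "gender", "risk_group", "cd4_group", "health_authority"])
```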
Ethics
Linkage and usage of administrative databases were approved and performed by data stewards in each collaborating agency and facilitated by the BC Ministry of Health. The University of British Columbia Ethics Review Committee at the St. Paul's Hospital site provided ethics approval for this study (H18-02208). This study was conducted using strictly anonymized laboratory and administrative databases, and thus informed consent was not required. This study complies with BC's Freedom of Information and Protection of Privacy Act.
Results
In the five-year analysis, 298 individuals (14%) recorded a total of 649 unique missed opportunities. In the one-year and three-year analyses, 142 (7%) and 247 (12%) individuals contributed 287 and 506 missed opportunities, respectively (Fig 1). For all three analyses, recurrent pneumonia was the most prominent clinical indicator condition identified as a missed opportunity (33%, 31%, and 30%, respectively), followed by anemia related to iron and vitamin B12 deficiencies or of unspecified cause (21%, 19%, and 18%, respectively), and herpes zoster/shingles among individuals aged <50 years (8%, 13%, and 13%, respectively). Although not preeminent, sexually transmitted infections, lymphatic disorders and mucosal fungal infections (primarily oral candidiasis) also emerged as important clinical indicator conditions. S2 Table presents the distribution of clinical indicator condition diagnoses categorized by distinct disease groups, corresponding to the missed opportunities identified in all three analyses.
Characteristics associated with missed opportunities
Bivariable analyses. Consistent across all analyses, having ≥1 missed opportunity was associated with having a CD4 count <200 cells/mm3, being older than 50 years, having a heterosexual/other or PWID HIV acquisition risk, and residing in the Northern health authority and in rural areas. An in-depth presentation of these associations, along with their corresponding significance, is reported in Table 3.
Adjusted multivariable analyses. Factors associated with having ≥1 missed opportunity are comprehensively presented in Table 4.
Discussion
Findings from this population-based cohort study underscored missed opportunities for earlier HIV diagnosis in BC using a newly-developed case-finding algorithm based on ICD 9/10 codes for clinical indicator conditions. This study demonstrated that, despite a setting where access to healthcare is unrestricted, opportunities for earlier HIV diagnosis remain; 7%-14% of individuals experienced ≥1 missed opportunity. The most prominent missed opportunities were related to diagnoses of recurrent pneumonia, anemia related to nutritional deficiencies or of unspecified cause, and herpes zoster/shingles among individuals aged <50 years.
Our results complement the growing body of evidence indicating that individuals in high-resource settings continue to be susceptible to missed opportunities [4,28,44,45]. In West Scotland and Italy, 26% and 29% of individuals, respectively, had ≥1 clinical indicator condition at any time prior to their HIV diagnosis [4,28]. In the United States and Switzerland, 22% and 44% of individuals, respectively, were found to have ≥1 HIV indicator condition within five years prior to HIV diagnosis [44,45]. The relatively low proportion of missed opportunities in BC may be reflective of rigorous HIV prevention, testing and educational outreach programs in the province [46]. However, readers should be aware that differences exist in the definitions of missed opportunities among studies.
Current HIV testing guidelines acknowledge that HIV can have a range of non-specific presentations and recommend that healthcare providers include HIV in the differential diagnosis, irrespective of identified risk for HIV acquisition. This includes when an individual exhibits symptoms that warrant laboratory investigation or symptoms associated with HIV infection or immune compromise. Our findings complement the aforementioned recommendation by underscoring the specific indicator conditions in which missed opportunities were most prevalent (i.e., recurrent pneumonia, anemia related to nutritional deficiencies or of unspecified cause, and herpes zoster/shingles among individuals aged <50 years). These conditions constituted 59%-69% of the missed opportunities and should thus be at the forefront of clinical investigation when considering HIV in the differential diagnosis.
Findings from this study also provide healthcare providers and policy makers with important evidence necessary to implement targeted interventions aimed at improving HIV screening among the identified population sub-groups at increased risk of experiencing missed opportunities. The overarching theme appears to put forward a two-tiered issue: i) the need to further emphasize underserved sub-populations, which are disproportionately impacted by stigma and geographical barriers to healthcare access [47,48]; and ii) the need to target individuals from population sub-groups not traditionally associated with a high HIV burden, who may be inaccurately presumed to be at low risk for HIV by healthcare providers [49,50]. From a population perspective, these findings illustrate the need to provide further education for healthcare providers regarding universal testing.
While addressing missed opportunities for earlier diagnosis through increased testing in healthcare settings is a valuable strategy in reducing late diagnosis, individuals who do not present to care (e.g., asymptomatic individuals, those experiencing barriers to care) continue to be overlooked and contribute to late diagnosis in the absence of complementary interventions [50].
There are some limitations to consider. First, although our case-finding algorithm for identifying missed opportunities using ICD 9/10 codes has not yet been validated, potentially limiting the extrapolation of the results, it was developed on the basis of established clinical indicator conditions from the literature and informed by experts in the fields of HIV epidemiology and medical care. Second, although a number of clinical indicator conditions were selected based on having a prevalence of undiagnosed HIV >0.5% in the European context, the overall trends in HIV prevalence are largely homogeneous among high-income countries of North America and Western Europe [51]. Third, due to the requirement of an HIV diagnosis for entry into the STOP HIV/AIDS cohort, we were unable to assess the incidence of these clinical indicator conditions among those not HIV-diagnosed. However, case-control studies demonstrated that several of these clinical indicator conditions had a higher incidence among HIV cases compared to matched HIV-negative controls [52][53][54]. Fourth, HIV negative testing data did not include screening tests from the Vancouver Island health authority (<5% of all screening tests), or those performed on a non-nominal basis [55]. Point-of-care testing data were also not available, though they constituted only a small proportion of all tests performed in BC (<5%) [56]. Fifth, the STOP HIV/AIDS cohort does not include data from the National Ambulatory Care Reporting System, rendering us unable to identify missed opportunities occurring during emergency room visits. Sixth, the STOP HIV/AIDS cohort data capture is restricted to March 2014; future analyses will assess missed opportunities after the revision of the provincial HIV testing guidelines in 2014. Finally, although healthcare administrative data are not collected for the purposes of research and are susceptible to misclassification and coding errors, they represent an important source of information for evidence-based research.
In conclusion, based on our newly-developed algorithm, this study demonstrated that HIV-diagnosed individuals in BC have experienced several missed opportunities for earlier diagnosis. Specific clinical indicator conditions and population sub-groups at increased risk of experiencing these missed opportunities were identified. Further work is required in order to validate the utility of this proposed algorithm by establishing the sensitivity, specificity, positive and negative predictive values corresponding to the incidence of the clinical indicator conditions among both HIV-diagnosed and HIV-negative populations. | 2019-03-23T13:02:58.182Z | 2019-03-21T00:00:00.000 | {
"year": 2019,
"sha1": "f04b845066e76bf020c2693ec26a20dbd686da69",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0214012&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f04b845066e76bf020c2693ec26a20dbd686da69",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
57758095 | pes2o/s2orc | v3-fos-license | Discrepancy between invasive and non-invasive blood pressure readings in extremely preterm infants in the first four weeks of life
Background The agreement between invasive and non-invasive blood pressure (BP) readings in the first days of life of preterm infants is contentiously debated. Objective To compare mean, systolic and diastolic invasive (IBP) and non-invasive BP (NBP) readings obtained during routine care in the first four weeks of life of extremely preterm infants. Methods We extracted pairs of IBP and NBP readings obtained from preterm infants born below 28 weeks of gestation from the local database. After exclusion of erroneous measurements, we investigated the repeated measures correlation and analyzed the agreement (bias) and precision adjusted for multiple measurements per individual. Results Among 335 pairs of IBP and NBP readings obtained from 128 patients, we found correlation coefficients >0.65 for mean, systolic and diastolic BP values. The bias for mean BP readings was -0.4 mmHg (SD 6.1), for systolic BP readings 6.2 mmHg (SD 8.1), and for diastolic BP readings -4.3 mmHg (SD 6.5). Overestimation of systolic IBP and underestimation of diastolic IBP by the non-invasive measurement were found both in the group with gestational age from 23 to 25.9 weeks and in the group with gestational age from 26 to 27.9 weeks. Systolic NBP readings tended to exceed invasive readings in the range <50 mmHg (bias 9.9 mmHg) whereas diastolic NBP readings were lower than invasive values particularly in the range >30 mmHg (bias -5.5 mmHg). Conclusion The disagreement between invasive and non-invasive BP readings in infants extends to the first four weeks of life. Biases differ for mean, systolic and diastolic BP values. Our observation implies that they may depend on the range of the blood pressure. Awareness of these biases and preemptive concomitant use of IBP and NBP readings may contribute to reducing over- or under-treatment.
Introduction
In extremely preterm infants, continuous blood pressure (BP) monitoring via an arterial line immediately after birth remains standard [1]. Arterial lines are also placed when preterm infants are critically ill, not only for BP monitoring but also for repeated blood withdrawal. In this fragile population, the insertion of an arterial catheter is not always feasible, or sometimes an indwelling peripheral catheter has to be removed because of low perfusion of the distal tissue [2,3]. Then BP is determined by the non-invasive oscillometric technique, and neonatologists at the bedside will be concerned by a bias between invasive (IBP) and non-invasive (NBP) readings. The few studies on the agreement between IBP and NBP readings in the early life of preterm infants report partly inconsistent results [4][5][6][7][8]. They only considered mean BP [4][5][6][8] and/or were restricted to the first days of life [4][6][7][8].
Blood pressure measurement guides therapeutic intervention in the neonatal intensive care unit (NICU). The mean BP thresholds used to trigger an intervention affect the achieved BP and inotrope usage [9]. Systolic BP is used to estimate pulmonary pressure in the echocardiographic assessment of early pulmonary hypertension in extremely preterm infants [10]. Diastolic BP is considered to reflect the intravascular blood volume and a drop in the diastolic BP is an alarming sign for loss of volume [11].
The purpose of this comparison study was to analyze correlation, agreement and precision relating to IBP and NBP readings obtained during routine care in the first four weeks of life in preterm infants born below 28 weeks of gestation.
Materials and methods
The local ethics committee (Ethikkommission der Medizinischen Universität Wien) approved the study (EK Nr: 2044/2016). The need for individual consent was waived (data were analyzed anonymously).
Study population
In a retrospective observational study, we included all preterm infants born below 28 weeks of gestation and admitted to our NICU between October 2011 and December 2015. Infants with congenital heart disease were excluded.
Invasive blood pressure measurement
transducer system made sure that the pressure wave was not damped. For calibration, the invasive transducer (TruWave pressure transducer, Edwards Lifesciences, CA) was zeroed at the level of the right atrium. An indwelling arterial line was removed whenever continuous BP monitoring had no further benefit compared to non-invasive measurements and regular blood sampling was no longer required, or if it was not functioning and/or hypo-perfusion of distal tissue was observed. IBP readings were automatically recorded every fifteen minutes in the local information system database ICCA (IntelliSpace Critical Care and Anesthesia, Philips, NL).
Non-invasive blood pressure measurement
Neonatal cuffs (NBP Cuff Neo Size 1-3, Dräger Medical GmbH, Lübeck, Germany) were used for the non-invasive oscillometric BP measurements. The cuff size was chosen according to the manufacturer's recommendations, and the cuff was levelled to the infant's right atrium. The NBP measurements were obtained in the upper arm or in the lower leg, based upon ease of access. BP measurements were performed with the Infinity Delta XL Patient Monitor System (Dräger, Lübeck, Germany), which transferred the data to the local database.
Study protocol
In our NICU, we have no standard procedure that specifies when to take an NBP reading during invasive BP measurements. Whenever an arterial line was in place, the decision to obtain an NBP reading was left to the primary care team. Most often, NBP readings were used to check the reliability of both IBP and NBP readings. Using an electronic query bound to the patient cohort, we extracted invasive and non-invasive BP readings, the corresponding time of measurement, the site of measurement, the insertion and removal time of the arterial line, as well as patient baseline characteristics and administered medications from the information system database. Data were imported into the computing environment Matlab R2015b (The MathWorks, Natick, MA, USA), where we performed subsequent queries, data visualization and statistical computations. We identified all episodes of IBP readings for each patient and searched for NBP readings in the corresponding timeframe. For each NBP sample, we chose the IBP reading that preceded the NBP reading with the shortest time gap. This was done as the non-invasive measurement might alter subsequent IBP readings [12]. We allowed a maximal time gap of fifteen minutes between the NBP and the corresponding IBP.
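This pairing rule (closest preceding IBP within fifteen minutes of each NBP reading) maps naturally onto an as-of merge. The following is a minimal Python/pandas sketch, not the study's actual MATLAB implementation; the column names ("time" plus identically named BP value columns in both tables) are illustrative assumptions.

import pandas as pd

# Minimal sketch of the pairing rule: for each NBP reading, take the
# closest *preceding* IBP reading, discarding pairs with a gap > 15 min.
def pair_readings(ibp: pd.DataFrame, nbp: pd.DataFrame) -> pd.DataFrame:
    pairs = pd.merge_asof(
        nbp.sort_values("time"),
        ibp.sort_values("time"),
        on="time",
        direction="backward",                # latest IBP at/before each NBP
        tolerance=pd.Timedelta(minutes=15),  # maximal allowed time gap
        suffixes=("_nbp", "_ibp"),
    )
    # merge_asof leaves NaN where no IBP fell inside the tolerance window.
    return pairs.dropna(subset=["mean_ibp"])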
The following safety precautions were defined to exclude redundant and erroneous measurements.
1. Incorrect NBP readings: We observed that systolic NBP values of 102 or 107 mmHg (usually occurring in clustered form) resulted from incorrect readings, probably due to an inappropriate application of the cuff. Pairs with such systolic values were excluded.
2. Readings with BP amplitudes (difference between systolic and diastolic value) smaller than five mmHg: These readings were excluded as they very likely indicated damped recordings.
3. IBP readings exhibiting sudden changes and episodes with fluctuations: We screened one-hour episodes prior to and after the NBP reading of each BP pair by visual inspection, and excluded all pairs exhibiting changes in the baseline of the IBP. In detail, we excluded episodes meeting one of the following criteria: at least two changes of approximately more than 10 mmHg of the mean IBP between two adjacent recordings (restless state), or a constant change of approximately more than 10 mmHg of the mean IBP after the IBP under consideration (recalibration of arterial line suspected). It is important to mention that the NBP values of the BP pair did not appear in the graphical presentation of the IBP tracing. For the screening procedure, we built a simple graphical user interface (in MATLAB) that visualized only the IBP tracing and allowed us to deselect episodes with fluctuations in the IBP recordings as described. Examples are presented in Figs 1 and 2.
4. Multiple NBP readings paired with a single IBP reading: We permitted only one NBP reading for each IBP considered for analysis and excluded all multiple NBP readings that were recorded within 15 minutes of the IBP reading under consideration, except for the closest in time.
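Rules 1, 2 and 4 are straightforward programmatic filters, while rule 3 was a visual screen. The Python sketch below is a hypothetical illustration with assumed field names, not the study's code.

# Hypothetical filter for exclusion rules 1, 2 and 4 above (rule 3, the
# baseline-fluctuation screen, was performed by visual inspection and is
# not reproduced here). Dictionary keys are assumed field names.
BAD_SYSTOLIC_NBP = {102, 107}   # clustered, implausible cuff readings

def apply_exclusions(pairs):
    kept, seen_ibp = [], set()
    # Sorting by time gap makes rule 4 keep the closest NBP per IBP.
    for p in sorted(pairs, key=lambda p: p["gap_minutes"]):
        if p["sys_nbp"] in BAD_SYSTOLIC_NBP:
            continue                              # rule 1
        if (p["sys_nbp"] - p["dia_nbp"] < 5 or
                p["sys_ibp"] - p["dia_ibp"] < 5):
            continue                              # rule 2: damped (< 5 mmHg)
        if p["ibp_id"] in seen_ibp:
            continue                              # rule 4: duplicate NBP
        seen_ibp.add(p["ibp_id"])
        kept.append(p)
    return kept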
Data analysis
Correlations between IBP and NBP readings were analyzed using the repeated measures correlation (rmcorr, implemented in R version 3.3.3), which accounts for non-independence among multiple observations per individual. We calculated agreement (bias, mean difference) and precision (1.96 SD of the difference, corresponding to the 95% limits of agreement in the Bland-Altman plots), adjusting for multiple observations per individual [13]. We applied Bland-Altman plots to depict the patterns of discord between IBP and NBP. Data were evaluated for different periods (28 days, week 1 versus weeks 2-4), two gestational groups (group I, from 23+0 to 25+6/7 weeks of gestation, versus group II, from 26+0 to 27+6/7 weeks of gestation) and for local BP intervals. We compared parametric data using the Student t-test, non-parametric data using the Mann-Whitney U test and binary data using the chi-square test. A p-value below 0.05 was considered significant.
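The agreement statistics reduce to a simple formula: bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 SD of the differences. A minimal Python sketch follows; it omits the adjustment for repeated measurements per individual that the study applied.

import numpy as np

def bland_altman(ibp, nbp):
    """Bias and 95% limits of agreement for paired readings."""
    diff = np.asarray(nbp, dtype=float) - np.asarray(ibp, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # precision (limits of agreement)
    return bias, (bias - half_width, bias + half_width)

# Example with four hypothetical mean-BP pairs (mmHg).
bias, (lower, upper) = bland_altman([32, 35, 40, 28], [31, 36, 38, 30])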
Fig 1. Illustration of invasive blood pressure (IBP) readings (approximately one hour before and one hour after the non-invasive blood pressure (NBP) reading considered for analysis) as it was used to screen IBP tracks for potential artifacts or inconsistency.
Since the changes in this example were only moderate (less than 10 mmHg), the IBP marked with black circles was selected for analysis. Note that the NBP reading was not visualized in the figure in order to blind selection/exclusion of IBP readings. https://doi.org/10.1371/journal.pone.0209831.g001
Results
A total of 350 preterm infants born at 23+0 to 27+6/7 weeks of gestation and admitted to our NICU from October 2011 to December 2015 were enrolled in this study. Four infants with congenital heart disease were excluded. Among the remaining 346 infants, 335 (97%) had an indwelling peripheral arterial line (PAL) for various periods within the first four weeks of life.
In total, we could identify 791 pairs of IBP and NBP readings obtained from 181 patients. After excluding pairs with suspected incorrect NBP readings (n = 56), pairs with small BP amplitudes (n = 123), pairs exhibiting changes in the baseline of the IBP (n = 260), and multiple NBP readings matched with the same IBP reading (n = 17), the number of pairs considered for analysis decreased to 335. These pairs were obtained from 128 different patients (Fig 3). In this population, gestational age (median 25.6 weeks) and birth weight (mean 751 g) did not differ significantly from the overall cohort. However, administration of inotropic and sedating agents was more frequent in the examined population. Table 1 reports the characteristics of the overall cohort and the examined population. No arrhythmia was documented. In the examined cohort, 76 infants contributed only one BP pair, and ten infants contributed more than five BP pairs that were considered for analysis. The mean time difference between IBP and NBP reading for the 335 BP pairs amounted to 7.1 min (SD 4.1).
Fig 2. Illustration of invasive blood pressure (IBP) readings (approximately one hour before and one hour after the non-invasive blood pressure (NBP) reading considered for analysis) as it was used to screen IBP tracks for potential artifacts or inconsistency.
Since the changes in this example were more than 10 mmHg, in particular after the IBP under consideration (marked with black circles), the pair corresponding to this IBP reading was excluded from analysis. Note that the NBP reading was not visualized in the figure in order to blind selection/exclusion of IBP readings.

For mean, systolic and diastolic BP values, the repeated measures correlation coefficients were above 0.65 (Table 2). For the systolic BP, the non-invasive method gave higher readings than the invasive measurement by 6.2 mmHg on average. For the diastolic BP, the non-invasive method gave lower readings than the invasive measurement by 4.3 mmHg on average. Similar patterns were found for each gestational group (Table 2). We also plotted the values of NBP against IBP from the first week of life, as this time of examination was used in most previous studies (Fig 5).
Bias based on IBP range
The bias of the non-invasive readings seemed to vary, depending on the range of the IBP (Table 3). For the mean BP, the bias was highest in the upper range (>40 mmHg), with absolute values close to three mmHg. For the systolic BP, the bias was highest in the lower range (<35 mmHg) with values close to 10 mmHg and lowest in the upper range (>50 mmHg) with values close to one mmHg. For the diastolic BP, the bias was highest in the upper range (>30 mmHg) with values close to -6 mmHg. The results were similar for both groups of gestation (S1 Table).
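Computationally, the range-dependent bias is just the mean paired difference within bins of the invasive reading. The sketch below is illustrative Python with example bin edges; the paper reports its own local BP intervals in Table 3.

import numpy as np

def bias_by_range(ibp, nbp, edges=(0.0, 35.0, 50.0, float("inf"))):
    """Mean NBP-IBP difference within bins of the invasive reading."""
    ibp, nbp = np.asarray(ibp, dtype=float), np.asarray(nbp, dtype=float)
    diff = nbp - ibp
    bias = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (ibp >= lo) & (ibp < hi)
        if mask.any():
            bias[f"[{lo}, {hi}) mmHg"] = diff[mask].mean()
    return bias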
Discussion
We performed a comparison study to determine the bias of non-invasive readings of mean, systolic and diastolic BP in the first four weeks of life of 182 extremely preterm infants. The three main findings of this study can be summarized as follows: First, the bias of the mean BP values was small, as reported in other studies [4,7,8], which is reassuring for clinical practice. In our study, the bias also remained small for low mean IBP values (<30 mmHg). This is in contrast to the findings of Takci et al., who reported that the bias of the mean BP increased to 6.5 mmHg for IBP readings below 30 mmHg [8]. Takci et al. obtained their data from multiple measurements in a relatively small group of study participants (n = 27) and did not specify a correction for multiple measurements per individual, which may add a systematic error to the comparison.
Second, our results show that the non-invasive method leads to over-reading of the systolic IBP while it leads to under-reading of the diastolic IBP. This is in line with Lalan et al. [14]. Third, the results indicated that the non-invasive systolic BP was approximately 10 mmHg higher when the invasive measurement was lower than 35 mmHg and only 1.5 mmHg higher when the invasive measurement was higher than 50 mmHg. For the diastolic BP readings, the bias was slightly higher (approximately -5.5 mmHg) in the upper range (diastolic IBP >30 mmHg). These findings, albeit remarkable, have only observational character and need to be confirmed in further studies. At the bedside, this over-reading of systolic BP by non-invasive measurements could lead to under- or over-treatment, for instance, in the management of early pulmonary hypertension in preterm infants [15].

Fig 4. Correlation plots (left; r: Pearson correlation coefficient) and Bland-Altman plots (right; dashed lines: limits of agreement; dotted lines: confidence intervals) for pairs (n = 335) of invasively (IBP) and non-invasively measured blood pressure (NBP) readings in the first four weeks of life obtained from preterm infants born below 28 weeks of gestation. https://doi.org/10.1371/journal.pone.0209831.g004
We found relatively high values of precision (1.96 SD range 9.8-17.5 mmHg, Table 2), reflecting the individual variability between IBP and NBP readings. Such high variability was also described by Koenig et al., who reported a bias of the mean BP from -1.2 mmHg (SD 6.1) to 3.5 mmHg (SD 6.7) in infants weighing less than 1000 g with an umbilical arterial line (UAL) [5]. In a similar cohort, Meyer et al. found a better precision (bias -0.36, 2 SD 6.5 mmHg) of the non-invasive mean BP in the first 24 hours of life [4]. Takci et al. found a small difference between the mean IBP and NBP readings (bias 0.02 mmHg) in the first week of life in 27 newborns including 21 very low birth weight infants and reported a precision as high as 16.7 mmHg (1.96 SD) [8]. Similar precision values were also found in critically ill children [16], which indicates that the variability of the BP measurements is independent of the size of the vessels. The reasons for the reported variations in BP measurements in preterm infants are manifold. As to inaccurate NBP measurement, the cuff size has a large impact, and a small cuff tends to overestimate BP [17]. In our NICU, the nursing staff is trained to use the respective appropriate cuff. However, in clinical practice, the limb circumference often is only estimated. Also, using both upper and lower limbs for NBP measurements might increase the range of variation, as limits of agreement up to 20 mmHg have been reported when comparing the location of non-invasive measurements [6]. Koenig et al. found a bias of 3.5 mmHg comparing the right arm mean BP versus UAL mean BP, and a bias of -1.2 mmHg for the right leg versus UAL measurements in preterm infants with birth weight smaller than 1000 g [5]. They suggested that the lower limb should be preferred for NBP readings in preterm infants. The location of the arterial line may also account for variations in the IBP readings. Recent studies provided contradictory results when comparing IBP readings derived from either PALs or UALs. Meyer et al. reported that the degree of agreement was not affected by the position (UAL versus PAL) of the catheter [4], thereby confirming former results [18], whereas Lalan et al. found a greater bias in the mean BP for invasive measurements from the radial artery (4.8 mmHg) than for measurements from UALs (0.4 mmHg) [14]. Sources of inaccuracy with invasive BP measurements are air bubbles and blood clots in the arterial line causing damping with low systolic and high diastolic readings. In addition, the small diameter of the catheter acts as a low-pass filter, resulting in under-reading of systolic blood pressure [19,20].
In clinical practice it is not only important to know all those potential sources of variation and differing readings with non-invasive and invasive BP measurement, but also to keep in mind that the intra-arterial BP measurement, which is considered the "gold standard", and the oscillometric measurement are based on entirely different principles. In the former, the pressure waveform of the arterial pulse is transmitted via a column of fluid to a pressure transducer, where it is converted into an electrical signal, which is processed, amplified and converted into a visual display by a microprocessor. In the latter, the cuff is automatically inflated to a preset value. As the inflation is gradually reduced, the pressure wave of the arterial pulse causes oscillations in the vessel, which can be detected by the cuff. Mean arterial pressure corresponds to the maximum of the oscillations, and an algorithm applied to the change of the oscillations sets the systolic and diastolic arterial pressure values [21]. These different approaches are the rationale behind the results of this study, as variations between these two methods primarily originate from the principle of operation rather than from inaccuracy.
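For illustration, the oscillometric principle can be sketched in code. Real monitors use proprietary algorithms; the fixed-ratio values used below (0.55 systolic, 0.75 diastolic) are textbook-style assumptions, not the parameters of the device used in this study.

import numpy as np

def oscillometric_bp(cuff_pressure, osc_amplitude,
                     sys_ratio=0.55, dia_ratio=0.75):
    """cuff_pressure: deflation pressures in mmHg, descending.
    osc_amplitude: oscillation envelope amplitude at each pressure."""
    p = np.asarray(cuff_pressure, dtype=float)
    a = np.asarray(osc_amplitude, dtype=float)
    i_max = int(np.argmax(a))
    mean_bp = p[i_max]                # MAP: pressure at maximum oscillation
    # Systolic: envelope crosses sys_ratio * max on the high-pressure side.
    high = slice(0, i_max + 1)
    sys_bp = p[high][np.argmin(np.abs(a[high] - sys_ratio * a[i_max]))]
    # Diastolic: envelope crosses dia_ratio * max on the low-pressure side.
    low = slice(i_max, len(p))
    dia_bp = p[low][np.argmin(np.abs(a[low] - dia_ratio * a[i_max]))]
    return sys_bp, mean_bp, dia_bp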
Finally, similar findings have been obtained in other intensive care scenarios with children and adults [16,[22][23][24]. The decision as to which BP monitoring is used needs to be tailored to the individual patient's risk in the clinical setting [16,25]. The same holds true for the preterm infant in the NICU. In critical situations, when positive inotropic or vasodilator agents are administered, the use of non-invasive BP measurements should supplement the invasive readings to target specific BP goals [24] and might give a better understanding of the discrepancy between the two methods in the individual patient, which is particularly helpful when the invasive measurement needs to be abandoned.
Strengths and limitations
Our study has several limitations. The retrospective study design is prone to bias and confounding errors. However, the relatively large number of individuals might reduce sources of bias and confounding. The individual decisions of the nurses and clinicians in charge to take an NBP reading during continuous IBP measurement might entail a notable risk of bias, in particular when recalibration of the arterial line after discrepant NBP readings resulted in a change in IBP measurements. We tried to reduce this risk of bias by excluding all pairs with changes of approximately more than 10 mmHg in the mean IBP after the NBP reading. An important source of error results from motion artifacts. As we had no information about the individual infant's resting phase, we used the fluctuations in the invasive BP readings as a respective indicator and excluded pairs with highly fluctuating invasive readings. The time difference within each BP pair between IBP and NBP might add another bias. This study was unable to meet the rigorous criteria of a research laboratory setting, but the findings highlight the real-world clinical assessment of BP in a high-volume NICU. We analyzed neither ventilator nor inotropic support. Lalan et al. did not find any effect of ventilator or inotropic support on the agreement between IBP and NBP readings [14]. We did not differentiate between pre-ductal and post-ductal measurements, as we could not find any substantial difference in invasive BP readings obtained from pre- and post-ductal PALs when correcting for gestational age and day of life. The strengths of the study were the careful visual screening for artifacts and manipulation of the invasive BP readings, the study period of four weeks, and the inclusion of critically ill preterm infants.
Conclusion
Non-invasive and invasive BP readings disagree in the first four weeks of life of extremely preterm infants. The bias is least for the mean BP. Our observation that the bias may be range-dependent for the systolic and diastolic BP needs further confirmation. Non-invasive systolic BP is over-read and non-invasive diastolic BP is under-read, which is explained by the underlying principle of the oscillometric method. Our findings can support neonatologists in their correct evaluation of non-invasive BP readings, should they have to abandon arterial lines. The preemptive use of non-invasive BP measurements to supplement invasive BP readings may reduce subsequent inappropriate interventions by improving understanding of the non-invasive BP readings.
Supporting information
S1 Data. Anonymized data of invasive and non-invasive blood pressure pairs (n = 791) that were automatically extracted from the local database, with additional information on site of peripheral arterial line, time gap between invasive and non-invasive measurements, time of measurement, gestational age, and inclusion for analysis. (XLSX) S1 | 2019-01-22T22:23:25.324Z | 2018-12-28T00:00:00.000 | {
"year": 2018,
"sha1": "18ced604a1d33768b5b667ea3e4c92988fa6db7b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0209831&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18ced604a1d33768b5b667ea3e4c92988fa6db7b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213498942 | pes2o/s2orc | v3-fos-license | Developing Reading Materials Through Local Based Needs at SDN 100107 Lobulayan South Tapanuli North Sumatera
— Nowadays, developing and appreciating materials based on students' local based needs is very attractive. The purposes of this study are: 1) to find out the existing English reading materials used for the sixth grade students at SDN 100107 Lobulayan in South Tapanuli regency; 2) to find out the English reading materials needed by the sixth grade students at SDN 100107 Lobulayan in South Tapanuli regency. This study employed the Research and Development design by Borg and Gall (1983), simplified and adapted following Dirgeyasa (2011) into four steps: 1) evaluation and need analysis, 2) developing new material, 3) validating material, 4) final revision. The respondents of this research were 29 persons, consisting of 25 students of SDN 100107 Lobulayan and 1 English teacher, taken by using the total sampling technique, and 2 lecturers and 1 stakeholder, taken by using the random sampling technique. The data were collected by questionnaire, documentation, and interview. The results of the study showed that the existing reading material was less relevant to the needs of the students of SDN 100107 Lobulayan in terms of topics, basic competences, assessment, and learning activities. After gaining the data from the evaluation and need analysis, it was found that the relevant reading materials needed by the students of SDN 100107 Lobulayan are materials related to their local based needs, containing reading texts such as Go to Aek Sijornih, I like 'daun ubi tumbuk', Salacca (local fruits), and Local Vegetables.
I. INTRODUCTION
In the context of Indonesian education, English as a foreign language has been learned by Indonesian learners since elementary school. According to the latest government policy as stated in the 2013 Curriculum, English at elementary schools is now only a local-content subject taught once a week. Indonesia has also started English teaching earlier than before, when it started at grade 7. Since the establishment of the decision letter of the Minister of Education and Culture No. 060/U/1993, which states that English could be taught at primary education starting from grade 4, many schools initiated it from grade 1. As a result, even kindergartens followed it.
There are three reasons why English needs to be learned at elementary school. First, young learners learn languages easily; second, all systems of life use English in this digital period, so it becomes easier to adopt technology; third, young learners pick up English more easily when they continue their studies at junior high school [17]. Therefore, English can be learned and mastered by students across four skills, namely listening, speaking, writing and reading.
Reading is one of the four skills and is viewed as the most important language skill to be developed in the classroom. Reading is defined as a human skill that makes it possible to interact with written text, becoming one of the ways to acquire knowledge receptively. Some previous studies have proven that reading is essential. In Indonesia, learning to read English starts at the fourth grade of elementary school and continues at junior and senior high schools up to higher education [10]. It can enhance people's social skills, improve hand-eye coordination, and provide people with endless hours of fun and entertainment.
Since English in Indonesia is a foreign language, most students at any level of education have difficulty in reading. There are two statements on why reading is important to be learned. According to Pang in Arias [2], learning to read is an important educational goal for children and adults because the ability to read opens up new worlds and opportunities. In line with this statement, Clarke, Truelove, Hulme, and Snowling [4] state that reading becomes more important as children progress through the educational system. Consequently, reading is central to teaching and learning, and it is vital to consider the circumstances in which the developing child is required to extract and apply meaning derived from text material.
Materials play an important part in the teaching and learning process. This is in line with Dirgeyasa [5], who stated that several factors play an important role in the process of teaching and learning, namely learning materials, teaching methods, assessments, the students, and the lecturers. Since materials are among the most influential of these factors, it is vitally important to evaluate the existing teaching materials. With regard to reading, it is very important to provide materials appropriate to the students' condition and needs. Good and appropriate materials will positively influence the students' learning process. In line with this, good material contains interesting texts and enjoyable activities [11].
In January 2019, an interview was conducted with the sixth grade students at SDN 100107 Lobulayan to obtain preliminary data. Four questions were given to the students, for instance: Is the existing holiday text interesting? Is the existing holiday text easy? Is the existing holiday text useful? If you do not know about the holiday text, do you find difficulties in comprehending it? From the results of the interview, some of the students think that the holiday texts in the textbook used are quite interesting but difficult. They do not have background knowledge about the texts. It can be concluded that the existing holiday texts are good to be used in the teaching and learning process, but they are not close to the lives of the sixth grade students of SDN 100107 Lobulayan, South Tapanuli. Here is the interview excerpt with one of the students (Nd): "kadang-kadang, kadang menarik, kadang tidak. Tidak menarik karena saya tidak mengerti dengan isi teks tersebut" (Januari 2019). (Sometimes it is interesting, sometimes not. It is not interesting when I do not understand the content of the text).
To obtain the preliminary data, the researcher also examined the students' English book, Grow with English Book 6, one of a series of English textbooks for the sixth grade of elementary school. In this book, it was found that the reading materials are totally irrelevant to the students' needs. The students often get confused because they do not understand them. Further, the researcher found that most students were not good at comprehending the texts. The students did not understand the content of the reading material given by the teacher. They just kept silent when the teacher read the texts in front of the class. Without being supported by the teacher's explanation in Indonesian, they could not understand the content of the given texts at all. The students also have low motivation in studying and are not active in class. Finally, the teaching and learning process becomes a monotonous activity. As a consequence, the students find it difficult to comprehend the texts. Therefore, good materials that can fulfill the students' needs are required.
The two existing reading texts, about a holiday in Malang and a trip to Florida ("we went to Florida"), do not belong to the real world of students in North Sumatera or South Tapanuli. The students are not really familiar with them, although they can also serve as input texts for receptive skills. However, it would be more meaningful if the reading texts used as input were about local based needs, with tourism destination topics and local culinary in South Tapanuli. Thus, the development of English reading material should be based on the characteristics of South Tapanuli regency.
Moreover, students' difficulties in comprehending the English reading materials affected their achievement in the English subject. For the grade VI students of SDN 100107 Lobulayan, for instance, the average score in their first formative test administered by the teacher was still low, namely 6.6, while the Minimum Mastery Criteria (Kriteria Ketuntasan Minimal: KKM) that should be achieved is 7.5.
Based on the facts stated previously, the researcher assumes that they will become a problem if not overcome soon. By conducting this research, the researcher expected to solve the problems stated previously by developing reading materials for elementary school through local based needs. There are several reasons for focusing on local based needs. First, by developing reading material which involves local based needs, the students are expected to improve their understanding of the given material, since the material will be contextual and close to their culture, and to preserve the Angkola culture amid the modernity of the globalization era. This accords with the statement of Firoz, Maghrabi, and Kim [7] in relation to globalization: "Think globally, manage culturally". Another reason for producing these kinds of materials is to help students become aware of their own cultural identity, because the local based needs of each place are different. Teaching materials for elementary students are therefore designed based on local needs, where students learn about their own local based needs in order to talk about their tourism destinations.
In developing these reading materials, the researcher inserted local based needs, with tourism destination topics and culinary themes, into the English materials to solve those problems. Local based needs means the needs of the students in the Lobulayan area. Two previous studies show why local based needs are essential and really needed in learning English. According to Dirgeyasa and Ansari [6], the local based need promotes and empowers the tourism resources locally to meet the needs of the tourism industry. The local based analysis in the tourism industry has an important and significant contribution to make to the quantities of natural and human resources. In line with this, the study of Aspiandi, Sutapa, and Sudarsono recommends local needs based materials. They also stated that teaching materials developed from local needs have a good impact on learning activities.
Moreover, a study related to the development of materials for elementary school was carried out by Kusuma [15], who developed reading material for fifth grade students at elementary schools in a tourism area by inserting local culture. The results showed that the reading material was developed by involving some local contents, had high validity and practicality, and was proven to be effective.
Based on the background of the study, the problems of the study are stated in the form of questions as below:
A. Teaching English at Elementary School
Elementary school is a unit of formal education under the supervision of the Ministry of Education. Elementary school is what we call primary school, the first stage of education for younger children. According to the Cambridge Dictionary Online, an elementary school is a school that provides the first part of a child's education, usually for children between five and eleven years old.
Since the early 90s, English education has been introduced in some elementary schools in Indonesia. The Indonesian Government, through its Ministry of Culture and Education, issued decree number 060/U/1993, dated February 25th, 1993, stating that English can be taught at elementary school but only as part of the local-content curriculum [23][24]. Also, Triana stated that, concerning the principles of teaching English to young learners, it is highly expected that the Indonesian government pay more attention to the teaching of English at elementary school for the sake of sustainable development of Indonesian human resources toward a better life. English language teaching (ELT), that is, the teaching of English as a second or foreign language, is usually portrayed in the professional literature as being primarily concerned with the mental acquisition of a language [12]. In Indonesia [25], schools were given the freedom to start teaching English earlier than Grade 4 and were asked to implement a competency-based curriculum developed at the Local Education Unit (Kurikulum Tingkat Satuan Pendidikan, henceforth KTSP).
Elementary school students can be categorized as young learners, because they are at early ages, from six to twelve years old [19]. Most Indonesian students enter elementary school when they are six years old.
There are several advantages of learning English as a foreign language for young learners. Brilliant Publications (2014), in Arief [3], stated 10 reasons for teaching foreign languages in primary school, namely as follows:
1. Learning a new language is fun.
2. It's best to start early. Brilliant Publications in Arief [3] explains that primary pupils are very receptive to learning a new language; they are willing and able to mimic pronunciation without the inhibitions and self-consciousness of older students.
3. Develops self-confidence.
4. Enriches and enhances children's mental development. Brilliant Publications in Arief [3] mentions that international studies have shown repeatedly that foreign language learning increases critical thinking skills, creativity, and flexibility of mind in young children.
5. Improves children's understanding of English.
6. Encourages positive attitudes to foreign languages.
7. Broadens children's horizons.
8. The ideal place to start.
9. Helps children in their later career.
10. It's great when you go on holiday.
B. Reading
Basically, reading is one kind of skill in mastering the English language. Reading is an activity that is done deliberately so that we can find out what we want to know. Linse and Nunan [16] stated that reading is a set of skills that involves making sense of and deriving meaning from the printed word. It can be said that we must comprehend what we read. Further, according to Peregoy and Boyle in Linse and Nunan [16], there are three different elements which impact reading for second language learners, namely the child's background knowledge, the child's linguistic knowledge of the target language, and the strategies or techniques the child uses to tackle the text.
Suyanto [21] divides the process of learning to read into six general stages, as follows:
1. Read (pronounce) the alphabet with English pronunciation: a-p-p-l-e.
2. Read words, which can also be accompanied by reciting or spelling, like: apple.
3. Read phrases, continuing to short sentences.
4. Read sentences that are meaningful or contain messages, either in the form of a question or a statement.
5. Read discourses, short writings, or other materials, such as dialogues, poetry, and letters.
6. Read longer discourses, dialogues, stories, or accounts of events.
From the explanation above, reading skills are taught from words and phrases to discourses, from easy vocabulary to more difficult vocabulary, and from short to longer discourses with more varied grammar. The level of difficulty and the length of the reading material are adjusted to the level of children's language development and the level of the class. Moreover, Suyanto [21] stated that teaching materials are what the teacher uses to give to students in order to achieve certain competencies or abilities, as previously planned. Teaching materials can be obtained from various sources, among others in the form of:
1. textbooks (student books),
2. teacher's books,
3. tapes or CDs,
4. picture cards,
5. posters,
6. results of research or study,
7. various recordings of experiences that are relevant to the subjects or courses being fostered,
8. scientific articles and conceptual writing,
9. notes on the experience of teachers and lecturers,
10. brochures, manuals, and other relevant material.
EYL teaching materials have their own characteristics because of the limitations associated with children's language development, language functions, and the state of society. Suyanto [21] includes the following eight characteristics:
1. Grammar is very simple.
2. The type and completeness of the vocabulary need to be given because there is almost no English exposure outside the classroom.
3. Vocabulary is limited to about 500 words.
4. Materials need to be accompanied by pictures.
5. Students hardly hear English around them, so repeated pronunciation exercises are needed.
6. Students do not have time to practice, so teaching materials must be easy to understand and varied.
7. The vocabulary used is everyday language and is simple, for communication.
8. Materials should be easily available, as there may be students who do not have textbooks.
C. The Concept of Local Based Needs
Local refers to a place or local conditions. Salazar [18] defined the local as not only referring to a spatially limited locality; it is above all a space inhabited by people who have a particular sense of place, a specific way of life, and a certain ethos and worldview. A need is something that a person must have and that is needed in order to live, succeed, or be happy. A need can be felt by an individual, a group, or an entire community. Needs can be defined as the gap between what is and what should be.
In this study, "needs" refer to the needs of the learners and "local" refers to the Lobulayan area school. Thus, the local needs mean here the needs of the students in Lobulayan areas. In line with Aspiandi, Sutapa, and Sudarsono defined local needs is the needs of the students in an area.
In addition, Dirgeyasa and Ansari [6] stated that, theoretically and empirically, local needs promote and empower the tourism industry. This is also relevant to the statement "think globally and act locally". Further, they defined local based needs as all resources showing the typical characteristics of a certain region in terms of economy, culture, natural features and its people, which are different from those of other regions.
Furthermore, Nangsari and Dwitagama, in Dirgeyasa and Ansari [6], defined the local based need as a matter of local competitiveness with regard to natural resources, human resources, culture and tradition, and services which are typically unique and different.
Like the term 'local based needs', the term 'keunggulan lokal' (local excellence) is defined as a process and realization of increasing the value of a regional potential so that it becomes a product, service, or other high-value work that can add to the income of the region, which without exception is unique and has a comparative advantage [1]. However, this paper tends to use the term 'local based needs'. Simply, the researcher concluded that local based needs here are the needs of the students in the Lobulayan area.
South Tapanuli is a district in North Sumatra, Indonesia, whose capital is Sipirok. This district was originally a very large district with its capital in Padang Sidempuan. The areas that have been separated from South Tapanuli are Mandailing Natal, Padang Sidempuan, North Padang Lawas, and South Padang Lawas. After the expansion, the district capital moved to Sipirok. The language used is Batak Angkola, and the majority religion is Islam.
D. Material Development
In line with the guidelines for developing materials, Tomlinson [22] provides some principles for language teaching materials, which are presented below:
1. Materials should achieve impact.
2. Materials should help learners to feel at ease.
3. Materials should help the learners to develop confidence.
4. What is being taught should be perceived by learners as relevant and useful.
5. Materials should require and facilitate learner self-investment.
6. Learners must be ready to acquire the points being taught.
7. Materials should expose the learners to language in authentic use.
8. The learners' attention should be drawn to linguistic features of the input.
9. Materials should provide the learners with opportunities to use the target language to achieve communicative purposes.
10. Materials should take into account that the positive effects of instruction are usually delayed.
11. Materials should take into account that learners differ in learning styles.
12. Materials should take into account that learners differ in affective attitudes.
13. Materials should permit a silent period at the beginning of instruction.
14. Materials should maximize learning potential by encouraging intellectual, aesthetic and emotional involvement.
15. Materials should not rely too much on controlled practice.
16. Materials should provide opportunities for outcome feedback.
Designing tasks is not the final step in material design. The materials then need to be evaluated through the process of material evaluation. Hutchinson and Waters [11] state that evaluation is a matter of judging the fitness of something for a particular purpose. They add that in the process of evaluation there is no absolute good or bad, only degrees of fitness for the required purpose. In other words, material evaluation can be defined as an activity to measure whether the material meets learners' needs or not.
Hutchinson and Waters [11] define needs as the ability to comprehend and to produce the linguistic features of the target situation, and divide them into two categories: target needs and learning needs. The target needs are what knowledge and abilities the learner will require in order to be able to perform appropriately in the target situation. The analysis of the target needs is divided into three points, which are necessities, lacks, and wants.
a. Necessities are the type of needs determined by the demands of the target situation.
b. Lacks are the gap between what the learners already know and what they do not know.
c. Wants are what the learners expect about the language areas that they want to master.
III. METHODOLOGY
This was a research and development (R and D) study which aimed to develop an effective product based on the results of a need analysis. It was conducted to design reading materials based on local based needs for the sixth grade students of elementary schools in Lobulayan. Borg and Gall [9] state that R and D is a process to develop and validate educational products by testing them. They suggest that the products should be systematically field-tested, evaluated, and refined until they meet specified criteria of effectiveness, quality, or similar standards. In implementation, the model was simplified and adapted following Dirgeyasa (2011) into four steps: 1) evaluation and need analysis, 2) developing new material, 3) validating material, 4) final revision.
The respondents of the research were the sixth grade students of SDN 100107 Lobulayan. The total number of the sixth grade students was 29. One English teacher was also included, taken by using the total sampling technique, while 2 lecturers and 1 stakeholder were taken by using the random sampling technique.
The instruments of the research were a questionnaire, an interview, and a documentary sheet. Two kinds of questionnaire were used. The first questionnaire was for the students and the teachers, to identify the needs for local based needs reading materials for the sixth grade of elementary school in Lobulayan. The second questionnaire was for expert judgment, to evaluate the developed material.
The data were analyzed quantitatively and qualitatively. The questionnaire consisted of several questions which asked the students and teachers to choose answers based on their experience in teaching and learning reading.
The questionnaire was analyzed by using a descriptive technique. To determine whether the reading materials are feasible or not, a parameter is needed. A detailed description of the categorization is shown in Table 2. This parameter determines whether the existing syllabus and reading materials are relevant or not, as shown in Table 4:
IV. RESEARCH FINDINGS AND DISCUSSION
The study was conducted by carrying out a need analysis to identify the local based needs of the students toward their reading materials. To achieve that aim, this research was carried out using the educational Research and Development model. The research followed four steps: evaluation and need analysis, course design, validation, and revision and final product. The results are presented below:
A. The existing reading material
An evaluation of the existing materials, especially the reading texts in Grow with English (Erlangga press), was carried out in order to assess the existing reading texts provided by the teacher. The checklist was used to determine the relevance and properness of the existing material to the students' needs in learning English, especially in reading.
From the analysis of the existing materials on five main aspects (the purpose, design and arrangement, linguistic features, topics, and methods), the materials scored 0.95 points. Based on the Likert scale shown in Table 3.3, it can be concluded that the existing reading materials were less relevant for elementary school.
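For illustration, the mapping from a mean evaluation score to a relevance category can be expressed as a simple threshold function in Python. The cutoffs below are hypothetical placeholders (the actual bands are those of Table 3.3, which is not reproduced here), chosen only so that the reported score of 0.95 falls in the "less relevant" band.

# Hypothetical threshold mapping from a mean evaluation score to a
# relevance category; cutoff values are illustrative assumptions.
def relevance_category(score):
    bands = ((2.5, "relevant"), (1.5, "fairly relevant"),
             (0.5, "less relevant"))
    for threshold, label in bands:
        if score >= threshold:
            return label
    return "not relevant"

assert relevance_category(0.95) == "less relevant"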
It can be concluded that the existing materials are good to be used in the teaching and learning process, but they are not close to the lives of the sixth grade students of the elementary school in Lobulayan.
B. Need analysis
Need analysis was the important data source after the evaluation process. The data were collected from a stakeholder, an alumnus, an English teacher, and students of the elementary school.
The need analysis carried out with the students, stakeholder, alumnus, and English teacher concluded that the important need for students, so that their knowledge can be put into practice, is material correlated with the topics of natural resources (local tourism destinations such as a holiday in Aek Sijornih baths, Silima-lima waterfall, and Parsariran baths) and social culture (traditional food and drink, traditional clothes).
The evaluation and need analysis covered four main aspects: a) the reading materials, b) linguistic features, c) design and layout, and d) the topics needed.
The interview was conducted with an English teacher and an alumnus. It was conducted to learn about the needs after students graduate from the school. The results of the interview were analyzed by a descriptive technique.
Alumnus
When studying at school, the alumnus's English skills could not support the needs of local knowledge because it had never been studied before. Thus, the alumnus needs reading texts which are correlated with local based needs at school, especially about local vegetables, names of tourist places, traditional clothes, names of the governor or country in English, and local food or culinary.
English teacher
The topics given for the sixth grade are traditional clothes, names of tourist places, local food, local fruits, family, and school objects. Some terms in the elementary students' books use English, so the students need to study English related to local based needs in South Tapanuli.
The need analysis was carried out in order to find out the basic competences which need to be achieved by elementary students and to increase the students' motivation in learning English. The results of the questionnaire given to the students and of the interviews with the stakeholder, alumnus, and English teacher became a reference for developing new reading material through local based needs in South Tapanuli.
The result of this research is similar to that of Kusuma [15].
Kusuma [15] found that the existing materials were not contextual for the fifth grade students of elementary schools in Buleleng regency, Bali. Next, Hakim and Anggraini [] focused on developing an English textbook for fourth grade students in elementary school, developed on the basis of CTL. As a result, the student's book was developed based on the 2006 English curriculum. The student's book was developed into six units which stated the basic competences, the topic, and the learning outcomes. The topics were developed into tasks and were related to the students' context and workplace. In this research, most students thought the existing texts do not always suit the students' background. It is difficult for the students to acquire the language and the text at the same time. They believe that if the students have background knowledge about the text, they will find it easy to comprehend. It is therefore suggested that teachers match the reading material to the students' background knowledge [22].
It has been proved by many researchers that using local-based-needs materials in the language teaching classroom is a great advantage to both students and teachers. First, using local texts in the foreign language class is effective and makes the students motivated, because local content involves topics with which the students are familiar. As stated by Al Mahrooqi and Al Busaidi (2010), "local needs can be met more effectively". In addition, Aspiandi, Sutapa, and Sudarsono also stated that local needs have a good impact on learning activities. Furthermore, according to Dirgeyasa and Ansari [6], local-based needs promote and empower local tourism resources to meet the needs of the tourism industry.
It can be concluded that local-based materials can encourage learners to gain a deeper understanding of their own local needs and to share these insights. Next, using local-based-needs materials makes learning the content easier, because learners are familiar with the topics and have prior background knowledge about them. Moreover, providing local content in the foreign language class can also motivate learners to explore their knowledge further and to be more enthusiastic in class. Related to local culture, Fu [8] stated that the use of local culture content is one of the effective ways to stimulate students' motivation in language class. | 2020-01-09T09:04:19.812Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "9956ea61f990b316dd314601f7e22b16947e89ac",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125928435.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d5dc9db11483fe1e80f087543d0bbe2172a22a8d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Geography"
]
} |
207962825 | pes2o/s2orc | v3-fos-license | Association of Flavonifractor plautii, a Flavonoid-Degrading Bacterium, with the Gut Microbiome of Colorectal Cancer Patients in India
This study provides novel insights into the CRC-associated microbiome of a unique cohort in India, reveals the potential role of a new bacterium in CRC, and identifies cohort-specific biomarkers, which can potentially be used in the noninvasive diagnosis of CRC. The study gains additional significance, as India is among the countries with a very low incidence of CRC, and the diet and lifestyle in India have been associated with a distinct gut microbiome in healthy Indians compared to other global populations. Thus, in this study, we hypothesize a unique relationship between CRC and the gut microbiome in an Indian population.
East Asian countries. Mutations in several tumor suppressor genes, such as APC, MSH2, MLH1, PMS2, DPC4/Smad4, and p53, and activation of oncogenes, such as β-catenin, COX-2, and K-RAS, have been implicated as among the many causes of colorectal cancer (2). The human colon is a unique organ that harbors thousands of bacterial species comprising ~10^12 to 10^14 microbes, which play a prominent role in human health and are likely implicated in the etiology of several human diseases such as inflammatory bowel disease (IBD), obesity, type 2 diabetes, and cardiovascular and other diseases (3-5). Similar associations of an altered gut microbiome with CRC have also been revealed in recent studies in Chinese, Austrian, French, and American populations (6-9). In the majority of the studies, Fusobacterium nucleatum and Bacteroides spp. have been observed to be consistently associated with tumorigenesis (7, 10).
Beyond taxonomic profiling, a few recent metagenomic studies have also focused on the identification of potential fecal biomarkers for the improved detection of CRC (6, 8). In the Chinese population, Yu et al. identified 20 microbial gene markers differentiating the CRC and healthy gut microbiomes (6). Another study, from a European population, also identified potent taxonomic biomarkers, which showed diagnostic accuracy similar to that of the fecal occult blood test (FOBT) for both early- and late-stage CRC (8). When the two approaches were combined, an improvement of >45% in the sensitivity of machine learning models was observed compared to FOBT, while maintaining their specificity for CRC detection, suggesting that microbial biomarkers hold the potential to supplement the existing diagnostic techniques for early-stage and noninvasive detection of CRC.
The previous microbiome studies have mostly emphasized the identification of global CRC markers, as opposed to population-specific microbial biomarkers. However, most of these studies also focus on developed countries and/or populations with high incidences of CRC, which may share environmental or lifestyle factors that influence both CRC and the microbiome. It is, therefore, unclear how universal the reported associations between CRC and the gut microbiome are. Due to the significantly distinct lifestyles and dietary characteristics of different populations worldwide, it is important to identify both country-specific and global markers of CRC.
India is among the few countries in the world where CRC shows the lowest incidence. Low rates of CRC in India are often linked to vegetarianism, use of spices such as curcumin (turmeric), and other food additives having apparent anticancer properties (11). Given the profound role of diet in shaping the gut microbiome, these unique dietary traits are likely to affect the gut microbiome. A cross-population comparison carried out in one of our recent studies also showed that the Indian population forms a distinct cluster from other world populations (China, United States, and Denmark), driven by the predominance of Prevotella spp. (12). If the gut microbiome does mediate CRC disease progression, these unique gut microbial traits may explain the low incidence of CRC in India. However, no study has yet been carried out in the Indian population to examine these relationships. Therefore, to gain novel insights into the role of the gut microbiome in CRC in India, and to identify population-specific bacterial markers of CRC, we performed a comprehensive gut microbiome analysis of CRC patients in India and compared that microbiome with healthy Indian individuals. Specifically, we profiled the fecal metagenome using shotgun metagenomic sequencing along with gas chromatography-mass spectrometry (GC-MS)-based profiling of the fecal metabolome in a cohort of 60 individuals (30 CRC patients and 30 healthy controls) from two distinct locations (north-central and southern India).
RESULTS
Shotgun metagenomic sequencing in n = 60 individuals from both Bhopal and Kerala cohorts (see Table S1 in the supplemental material) yielded a total of 641 million high-quality sequencing reads with an average of 10.7 ± 5.1 million reads/sample (average ± SD). We then constructed a gene catalogue containing a set of 2,364,248 nonredundant genes for the Indian cohort. For maximum quantification of microbial genes, the Integrated Gene Catalogue (IGC) and the India-specific gene catalogue were combined to construct a nonredundant Updated Gene Catalogue (UGC), which comprised 11,118,467 genes (an addition of 12.5% genes to the current IGC), including 9,879,896 genes from the IGC and 1,238,571 genes unique to the Indian population. The UGC was used for mapping of metagenomic reads from the 60 Indian samples, which resulted in 54.47% ± 7.84% (average ± SD) mapping of reads and in the identification of 3,824,855 genes in the Indian cohort.
Variations in the CRC-associated gut microbiome in the Indian cohort. Rarefaction analysis showed that the gene richness approached saturation in both groups (healthy and CRC) and was higher in CRC than in healthy individuals (Fig. 1A). The increased gene richness was further validated by calculating the within-sample diversity (α-diversity) using the Shannon index, which measures within-sample gene diversity. It was observed that the individuals with CRC had a significantly more diverse gene pool than healthy controls (Wilcoxon rank sum test; q value = 0.0052) (Fig. 1B). Interindividual distances in gene composition, as determined by Bray-Curtis distance metrics, showed that CRC individuals are much more dissimilar from one another than healthy controls (Wilcoxon rank sum test; q value = 0.0003) (Fig. 1C). Taken together, these results suggest distinct differences in the diversity of functions carried out by gut microbial communities in the CRC-associated gut compared to the healthy controls.
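As a rough illustration of the α- and β-diversity calculations described above, the following Python sketch (not from the paper; the count matrix and rarefaction depth are hypothetical) computes the Shannon index on rarefied counts and pairwise Bray-Curtis distances:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

def rarefy(counts, depth):
    """Subsample a count vector to a fixed depth without replacement."""
    pool = np.repeat(np.arange(counts.size), counts)
    picked = rng.choice(pool, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

def shannon(counts):
    """Shannon index H = -sum(p * ln p) over nonzero features."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# toy gene-count matrix: 6 samples x 1000 genes (hypothetical data)
X = rng.poisson(5, size=(6, 1000))
depth = int(X.sum(axis=1).min())

rarefied = np.array([rarefy(row, depth) for row in X])
alpha = np.array([shannon(row) for row in rarefied])   # within-sample diversity

# beta-diversity: pairwise Bray-Curtis distances between samples
beta = squareform(pdist(rarefied, metric="braycurtis"))
print(alpha.round(3))
print(beta.round(3))
```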
To compare the gene contents among all 60 samples, a set of genes commonly present in at least 3 samples (5% of the total samples) was constructed, which comprised 1,988,680 genes. Using these 1.9 million genes, gene abundance profiles were generated for each of the 60 samples. The variations in microbial community composition between samples were first scored to examine the effect of each of 8 covariates (health status, location, age, gender, body mass index [BMI], stage, histopathology, and localization) (Table 1) by performing permutational multivariate analysis of variance (PERMANOVA) on the gene abundance profiles. It was observed that health status explained the maximum variation (P value = 0.0009, R^2 = 0.04) compared to the other covariates. The location also showed a significant effect but explained less variation than the health status (P value = 0.009, R^2 = 0.03).
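The PERMANOVA step can be sketched with scikit-bio, testing one categorical covariate at a time; the profiles and labels below are made up, and note that scikit-bio reports a pseudo-F statistic and P value, so the R^2 values quoted in the text would be derived separately from the sums of squares:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

rng = np.random.default_rng(1)

# hypothetical gene-abundance profiles: 20 samples x 500 genes
profiles = rng.dirichlet(np.ones(500), size=20)
ids = [f"S{i}" for i in range(20)]
grouping = ["CRC"] * 10 + ["healthy"] * 10   # one covariate at a time

dm = DistanceMatrix(squareform(pdist(profiles, metric="braycurtis")), ids=ids)

# permutation test of between-group vs. within-group distances
res = permanova(dm, grouping, permutations=999)
print(res["test statistic"], res["p-value"])
```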
To dig further into the covariates explaining variation in the gene profiles across cohorts, principal-component analysis (PCA) based on the gene profiles was performed. The first and the second principal components explained 7.2% and 6.8% of the total variation (see Fig. S1 in the supplemental material) and were significantly associated with health status (polyserial correlation; q value < 10^-15) and location (polyserial correlation; q value = 0.00004), respectively (Table S2). CRC and healthy samples clustered separately along PC1, corroborating significant functional microbiome differences explained mainly by the health status, followed by the location of the samples (Table S2).
Taxonomic variations in the CRC-associated gut microbiome. Taxonomic differences in the gut microbiome of CRC and healthy individuals were examined to identify the microbial taxa associated with the patterns observed in the previous analysis. For this analysis, three different methods were used: (i) reference-based Human Microbiome Project-National Center for Biotechnology Information (HMP-NCBI) species, (ii) de novo clustering-based metagenomic species (MGS), and (iii) clade-specific-marker-based metagenomic OTU (mOTU) species and Metaphlan species (see Materials and Methods). On performing correlation analysis, 158 HMP-NCBI-mapped species, 147 MGS, 61 species-level mOTUs, and 45 Metaphlan species were observed to be significantly associated with CRC or healthy samples (Wilcoxon rank sum test; q value < 0.01; mean abundance > 0.001) (Table S3). To improve the robustness of taxonomic marker identification in CRC, the taxonomic species that were identified by all three strategies simultaneously (HMP-NCBI species, MGS, and either of the clade-marker-based approaches, i.e., mOTUs or Metaphlan) were considered for further analysis. A total of 20 taxonomic markers were identified based on their significant association with the health status using the above methods. Among these 20 marker species, six species, namely, Eubacterium rectale, Prevotella copri, Bifidobacterium adolescentis, Megasphaera elsdenii, Faecalibacterium prausnitzii, and Lactobacillus ruminis, were observed to be highly associated with the gut microbiome of healthy Indian subjects. These species have also been associated with a healthy phenotype in previous studies, and significant reductions in their proportions were observed in CRC in this study (13-18). The remaining 14 species were associated with and enriched in CRC samples. Among these, nine species, including Akkermansia muciniphila (6, 19), Bacteroides fragilis (20), Bacteroides clarus (21), Bacteroides eggerthii (7), Escherichia coli (6, 19), Odoribacter splanchnicus (7), Peptostreptococcus stomatis (6, 8), Parvimonas micra (6, 7), and Parabacteroides distasonis (22), have been shown to be strongly associated with colorectal cancer in previous studies (Fig. 2).
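A minimal sketch of the per-species association test described above, with hypothetical relative-abundance tables; scipy's Mann-Whitney U test stands in for the Wilcoxon rank sum test, and Benjamini-Hochberg correction yields the q values:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)

# hypothetical relative-abundance tables: 30 CRC + 30 healthy samples, 200 species
crc = rng.dirichlet(np.ones(200), size=30)
healthy = rng.dirichlet(np.ones(200), size=30)

pvals, keep = [], []
for j in range(200):
    # two-sided Wilcoxon rank sum (Mann-Whitney U) test per species
    _, p = mannwhitneyu(crc[:, j], healthy[:, j], alternative="two-sided")
    pvals.append(p)
    # mean-abundance filter mirroring the > 0.001 criterion in the text
    keep.append(np.concatenate([crc[:, j], healthy[:, j]]).mean() > 0.001)

# Benjamini-Hochberg correction to obtain q values
_, qvals, _, _ = multipletests(pvals, method="fdr_bh")
hits = [j for j in range(200) if keep[j] and qvals[j] < 0.01]
print(len(hits), "species pass q < 0.01 and mean abundance > 0.001")
```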
Remarkably, a few gut bacteria that had not been associated with colorectal cancer in previous reports were also observed to be significantly associated with Indian CRC samples. Among these, a novel and striking finding was the presence of Flavonifractor plautii, which was significantly associated (Wilcoxon rank sum test, q < 0.00001) (Fig. 2) with CRC samples in this study. Additionally, the predictive power of taxonomic association using Random Forest (RF) analysis on HMP-NCBI species abundance also showed Flavonifractor plautii as the most important species in separating Indian CRC samples from the healthy controls (Table 2). The high abundance of this flavonoid-degrading bacterium in Indian samples is intriguing, as the diet of Indian populations is rich in polyphenols, with flavonoids being the most abundant dietary polyphenol (23). Additionally, a few gut bacteria not associated with colorectal cancer in previous reports were also observed to be highly associated with CRC; these included Bacteroides intestinalis, Methanobrevibacter smithii, Streptococcus parasanguinis, and Veillonella parvula (Fig. 2). Further, PERMANOVA showed that only health status (q value = 0.004) and location (q value = 0.004) significantly explained variation in species abundance based on the four methods used to identify species; all other covariates were not significant (q value > 0.09) (Table S4). (Fig. 2 caption: relative abundances of the species identified by all three methods, compared between CRC [red] and healthy [blue] samples; the y axis shows relative abundance from read mapping against HMP-NCBI reference genomes, with box plots giving medians, interquartile ranges, and 1.5× interquartile-range whiskers.) In order to derive associations between microbial markers, a species cooccurrence network was generated from pairwise correlations using sparCC (Fig. 3). Flavonifractor plautii, Bacteroides fragilis, Bacteroides intestinalis, and Parabacteroides distasonis, which have been previously reported to be associated with CRC, showed higher degrees of association with each other and also with microbes such as Peptostreptococcus stomatis and Parvimonas micra (6). The most influential nodes in the network were determined using a centrality measure, and it was observed that Flavonifractor plautii, Bacteroides intestinalis, Bacteroides fragilis, Bacteroides clarus, and Parabacteroides distasonis showed much higher centrality, demonstrating their influence on the entire network. The high association between these species indicates that CRC-associated microbes tend to cooccur, and form more associations, in contrast with taxa characterizing healthy states. Global comparative metagenome-wide association study (MGWAS) meta-analysis. To demonstrate the utility of CRC-associated taxonomic markers in the CRC-associated gut microbiome between cohorts, we selected a similar group of 75 CRC cases and 53 healthy control samples from a Chinese cohort (19), and a group of 46 CRC cases and 57 healthy control samples from an Austrian cohort (7) for the comparative analysis. Using data sets from multiple countries, a meta-analysis was performed to identify global variations in the CRC microbiome.
In order to control for variations arising due to uneven sequencing depths in the other studies, we used a rarefied table for performing the meta-analysis. We performed multivariate distance-based redundancy analysis (db-RDA) with health status (CRC and healthy) and country (India, China, and Austria) as metainformation. The multivariate analysis was constrained using these two pieces of information, and the most important axes explaining the maximum variation between samples were extracted. The projection shows all the CRC and healthy samples from the three countries, with country-/study-wide differences on the x axis and differences due to CRC status on the y axis (Fig. 4). It was observed that the Indian population differed significantly from the Austrian and Chinese cohorts (P value < 10^-15) (Fig. 4). The Indian CRC samples showed marked differences in microbial composition and were separated from the other countries' samples, thus revealing the unique microbial community composition in Indian gut microbiomes. The MGS/CAGs that showed maximum contributions in driving the separation of Indian CRC samples included Flavonifractor plautii, Veillonella parvula, and Parabacteroides distasonis (Fig. 4).
To look at the global taxonomic patterns, we performed differential analysis unstratified for CRC status (while controlling for the populations) and found 85 MGS/CAGs to be significantly associated with CRC (Table S5). The MGS/CAG belonging to Fusobacterium nucleatum, which has been reported in earlier studies (24), showed the highest association with CRC status, with a P value of <10^-15. Other species that have been associated with CRC in previous studies included Peptostreptococcus stomatis, Bacteroides fragilis, and Porphyromonas asaccharolytica (6, 7, 25). Flavonifractor plautii, which showed a striking association in Indian CRC samples, was also observed in this list, albeit with a weaker significance than the previously mentioned species.
Functional characterization of the microbiome associated with CRC. A metagenome-wide association analysis was performed to gain functional insights into the CRC-associated gut microbiome. Out of the total of 1.9 million genes present in at least 5% of the samples, 228,299 genes were found to be significantly associated with the disease status (Wilcoxon rank sum test, q < 0.01). These CRC-associated genes were functionally annotated using the KEGG database. Using the stringent criteria of a P value of <0.01 and log odds ratio (LOR) of >|2|, 473 KEGG orthologues (KOs) were found to be associated with health status (Table S6). The top-ranked enzymes (KOs) include invasins, multidrug resistance proteins, and enzymes involved in secretion and the transport system, which points toward a pathogenic and invasive environment with high cross-talk between host and microbes. Specifically, a high abundance of invasins has also been associated with the colorectal cancer-associated gut microbiome in the past (26, 27), as they help the bacteria to gain entry into host cells (22, 26).
The pathways associated with CRC were identified using reporter feature analysis, which takes into consideration the significance and enrichment of all the genes present in a pathway. It was observed that, out of the 12 pathways, "ABC transporters" (q value = 0.013) passed the stringent cutoff, being significantly enriched in CRC (Table S7). It was interesting that pathways related to the biosynthesis of six (leucine, isoleucine, lysine, phenylalanine, tryptophan, and valine) of the nine essential amino acids were observed to be significantly higher in healthy controls than in the CRC cases (Table S7), suggesting a depletion of essential amino acids in the gut microbiome of CRC individuals.
Further, to gain additional functional insights, we identified KEGG modules which were significantly associated with health status. For this, only those modules for which at least 90% of the module's enzymes were present in the samples were considered, and these modules were also required to be associated with health status with a q value of <0.001 (Wilcoxon rank sum test). Using these stringent criteria, a total of 46 modules qualified, of which 12 modules were found to be higher in CRC cases than in the healthy controls. A module with the function of "catechol ortho cleavage" was observed to be significantly associated with the CRC cases. This module is involved in the degradation of catechols such as 3,4-dihydroxyphenylacetic acid, which are generated by the degradation of flavonoids by the gut bacterium Flavonifractor plautii (28). On performing the Spearman correlation between these 46 modules and the 20 taxonomic species selected above, the "catechol ortho cleavage" module was observed to correlate significantly (r = 0.63, P = 3.6 × 10^-7) with Flavonifractor plautii (Fig. S2). (FIG 4 caption: Major effects of CRC on the gut microbiome from multivariate meta-analysis; principal-component analysis of the samples from China, Austria, and India using MGS abundance, with db-RDA constrained by studies/populations and health status, marginal box plots showing separation along the constrained axes, and the top three MGS associated with Indian CRC samples interpolated on the plane of maximal separation.)
Insights from comparative metabolomic profiling. The principal-component analysis revealed marked variation in the metabolomic profiles of CRC and healthy individuals. These differences could be attributable to host physiological changes and to microbial metabolism, which depends on the types of microbes inhabiting the gut. The first and the second principal components explained 71.55% and 10.66% of the total variation (Fig. 5A), respectively. The peaks annotated to the metabolites 4-hydroxyphenyl pyruvic acid, butanoic acid, valeric acid, L-valine, and cyclohexene were observed to be significantly enriched (P value < 0.05; log fold change < −2) in CRC individuals compared to the healthy group (Fig. 5B). Among these, the higher level of 4-hydroxyphenyl derivatives can be directly correlated with the flavonoid-metabolizing ability of the CRC microbiome, which is dominated by Flavonifractor plautii (formerly Clostridium orbiscindens), a microbe involved in the degradation of the quercetin flavonoid (28, 29). A higher abundance of compounds such as valerate, isovalerate, and isobutyrate (which are salts or esters of valeric acid and butanoic acid [butyric acid]) in CRC individuals has also been reported in other studies (30). Taken together, these observations indicate significant differences in the metabolic profiles of CRC individuals compared to the healthy group, which correlates with the flavonoid-metabolizing ability of the CRC microbiome in the Indian patient cohort. However, detailed studies to gain insights into the functional contribution of the respective microbes to the production of these metabolites and their impact on host health are needed to confirm these results.
CRC gene biomarker discovery. We divided our 60 samples into two sets, cohort A comprising 48 samples and cohort B comprising 12 samples, by random selection from the two locations and health statuses (see Materials and Methods). To identify potential CRC-associated biomarkers, a robust feature selection method was applied to the 102,168 health status-associated genes from the samples of cohort A. From these genes, we identified subsets that were highly correlated with each other (Pearson correlations > 0.9) and chose the longest gene from each correlated group to construct a statistically nonredundant set of 13,982 genes. Further, we used the "CfsSubsetEval" method from Weka to identify a subset of 36 genes that are highly correlated with the health status while having low intercorrelation with each other. The genes from this subset were further validated using the Boruta algorithm, which uses Random Forest to perform a top-down search for relevant features by comparing original attribute importance with the importance achievable at random, eliminating irrelevant features to stabilize the test. As a result, 33 out of 36 genes were confirmed as markers using this algorithm and 3 were predicted as tentative markers (Fig. S3). The principal-component analysis using these 33 genes showed clear separation between CRC and healthy samples, and the first two principal components explained 40.5% of the variation (Fig. 6A), which is a significant improvement compared to the separation observed using the raw data (11.4% of the variation explained by the first two principal components [Fig. S1]). Most importantly, the first three principal components were observed to be significantly (adjusted P values: PC1 = 7.5 × 10^-10, PC2 = 1.97 × 10^-8, PC3 = 0.0005) associated only with the health status, with a stringent P value cutoff of less than 0.001 (Table 3). PERMANOVA showed that only CRC status significantly explained the variation in the 33 marker gene abundances (P < 0.01, R^2 = 0.19) (Table 4). These results suggest that the 33 gene markers identified using this approach are strongly associated with health status and not with any other covariate. Further, to evaluate the predictive power of these marker genes in predicting the CRC status, the Random Forest method was used, which resulted in perfect classification of the two classes (area under the receiver operating characteristic [ROC] curve [AUC] = 1) using 10-fold cross-validation. On performing the Spearman correlation of these 33 gene markers with the 20 taxonomic markers identified above, it was observed that the 16 genes enriched in CRC cases were highly correlated with Flavonifractor plautii and Bacteroides fragilis (Fig. 6B). These results further validate that these two species could play a role in Indian CRC samples.
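A hedged sketch of the feature-selection pipeline described above, with hypothetical data; the correlation-based pruning and the Boruta step are approximated in Python (the Weka "CfsSubsetEval" stage is not reproduced here, and the boruta package is assumed to be installed as an analogue of the Boruta algorithm the authors used):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy  # pip install Boruta (assumed available)

rng = np.random.default_rng(4)

# hypothetical cohort-A data: 48 samples x 300 health-status-associated genes
X = rng.random((48, 300))
y = np.array([1] * 24 + [0] * 24)
lengths = rng.integers(300, 3000, size=300)   # hypothetical gene lengths

# step 1: collapse groups of highly intercorrelated genes (r > 0.9),
# keeping the longest gene of each group, as described in the text
corr = np.corrcoef(X, rowvar=False)
kept, dropped = [], set()
for j in np.argsort(-lengths):                # visit longest genes first
    if j in dropped:
        continue
    kept.append(j)
    dropped.update(np.flatnonzero(corr[j] > 0.9).tolist())
X_nr = X[:, kept]

# step 2: Boruta wraps Random Forest and compares real-feature importance
# against shadow (permuted) copies to confirm or reject each candidate marker
rf = RandomForestClassifier(n_jobs=-1, max_depth=5)
boruta = BorutaPy(rf, n_estimators="auto", random_state=0)
boruta.fit(X_nr, y)
print("confirmed:", int(boruta.support_.sum()),
      "tentative:", int(boruta.support_weak_.sum()))
```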
Gene marker validation in independent metagenomic cohorts. To test the accuracy and robustness of these gene markers, we evaluated the predictive power of these 33 genes on cohort B (6 CRC samples and 6 healthy samples) from this study and on cohorts with different genetic backgrounds: 75 CRC samples and 53 healthy samples from China and 46 CRC samples and 57 healthy samples from Austria. The relative gene abundances of the China and Austria data sets were constructed by mapping their genes on the Updated Gene Catalogue constructed in this study. On cohort B, the Random Forest (RF) model using the 33 genes resulted in an accuracy of 91.67%, with 11 out of 12 samples being correctly predicted (sensitivity, 100%; specificity, 83.33%). However, using the same gene markers on the Austria and China data sets resulted in lower average accuracies of 65.05% and 51.56%, respectively. A CRC index using the log relative abundances of the 33 gene markers was also calculated as mentioned in the study by Yu et al. (6). The CRC index clearly separated the samples from the Indian population (CRC index of patients = 4.04; CRC index of healthy subjects = −4.65) with a P value of 3 × 10^−1. However, it could not significantly differentiate between the CRC cases and healthy controls for the other two populations (Fig. 6C).
DISCUSSION
Recently, gut microbiome dysbiosis has emerged as a key factor that triggers an inflammatory response in the host and is proposed to lead to the initiation of colorectal cancer (31, 32). However, most of our understanding comes from developed countries with high incidences of CRC. India harbors a unique gut microbiome and also has one of the lowest incidences of CRC. Thus, we expected to find a distinct relationship between the gut microbiome and CRC in an Indian cohort. Our results showed a clear distinction between the healthy and CRC-associated gut microbiomes. We also identified multiple potential microbial taxonomic and gene biomarkers associated with CRC. While some of these biomarkers have been reported in other global populations, others were unique to our cohort. Therefore, our study is one of the first to emphasize the importance of utilizing population-specific microbiome biomarkers in studies of CRC. Interestingly, gut microbiome diversity was higher in Indian CRC samples than in healthy controls. A similar observation was made in Austrian CRC cases, which showed an increased microbiome diversity; however, reduced microbial diversity was observed in a Chinese cohort, pointing toward a contrasting trend, perhaps due to population-specific variations (12). The higher diversity in the Indian CRC microbiome can be explained by the fact that the Indian gut microbiome is highly skewed, with most (30% to 75%) of the community dominated by Prevotella, as observed in our recent study that examined the gut microbiome in a cohort of 110 Indian individuals (12). In this study, we observed a much lower proportion of Prevotella in CRC samples (12.7%) than in the healthy samples (45.31%). Thus, the apparent ~3.5-fold reduction in this most predominant taxon in CRC patients is likely to result in increased diversity due to more opportunities for other bacterial taxa to flourish (33). Another consequence of this dysbiosis appears to be the reduction of pathways related to the biosynthesis of six (leucine, isoleucine, lysine, phenylalanine, tryptophan, and valine) of the nine essential amino acids in CRC cases (see Table S6 in the supplemental material), which makes it tempting to speculate on a dysbiosis-mediated mechanism of CRC in the Indian population. The most interesting key finding of this study was the identification of Flavonifractor plautii as the key bacterium associated with CRC, which also emerged as one of the 20 taxonomic markers identified using three different strategies. Though its presence in the gut microbiome is not unique to India, and it was present in the other CRC data sets used in this study, it showed a differential abundance only in the CRC gut microbiome of the Indian cohort. In addition to being significantly abundant, it also emerged as the most important species in separating CRC samples from healthy samples in the Indian cohort. Also, the high correlation of F. plautii with the 16 CRC-associated gene markers highlights it as a potential key species in the CRC-associated Indian gut microbiome. Among other species that showed a strong association with CRC, Bacteroides intestinalis and Methanobrevibacter smithii were observed to be associated with Indian CRC cases and had not previously been reported in other CRC microbiome studies. B. intestinalis is a gut commensal bacterium known to convert primary bile acids to secondary bile acids via deconjugation and dehydroxylation (34).
These secondary bile acids may have carcinogenic effects (35). M. smithii is a methanogenic archaeon and a dominant methanogen in the distal colon of both healthy and diseased individuals (36). To date, no direct mechanistic link has been established between gut-associated diseases and methanogens; however, colonization by archaea has been suggested to promote a number of gastrointestinal and metabolic diseases such as colorectal cancer, inflammatory bowel disease, and obesity (37).
F. plautii can degrade flavonoids by cleaving the C-ring of the flavonoid molecules (28). Flavonoids are important constituents of the human diet and mainly comprise polyphenolic secondary metabolites with broad-spectrum pharmacological activities. Accumulating evidence from epidemiological, preclinical, and clinical studies supports a role for these polyphenols in the prevention of cancer, cardiovascular disease, type 2 diabetes, and cognitive dysfunction (28, 38). Flavonoids are proposed to affect the composition of the gut microbiota and could therapeutically target the intestinal microbiome by promoting beneficial bacteria and inhibiting potentially pathogenic species (28). Several common Indian foods such as tea, coffee, apple, guava, Terminalia bark, fenugreek seeds, mustard seeds, cinnamon, red chili powder, cloves, turmeric, and pulses contain large amounts of flavonoids (39). Medium levels (50 to 100 mg) are found in Indian gooseberry, omum, cumin, cardamom, betel leaf, and brandy (39). Small but significant amounts are also present in food items of high consumption such as kidney beans, soybeans, grapes, ginger, coriander powder, millets, and brinjal (39). Given the significance of flavonoids, the high consumption of beneficial flavonoids in the Indian diet has been correlated with the low rate of CRC occurrence in India (38).
However, extensive degradation of flavonoids by the gut microflora may result in lower overall bioavailability of intact flavonoids (40). Thus, in the Indian CRC samples it is reasonable to associate the high abundance of Flavonifractor plautii, a key flavonoid-degrading bacterium, with higher rates of flavonoid degradation that minimize the potential beneficial effects and bioavailability of flavonoids in CRC. Further, the high association of F. plautii with the catechol cleavage pathway (catechols are generated by the degradation of flavonoids) also indicates a potential role of this species in flavonoid degradation in the gut. In addition, the enzyme enoate reductase, which performs the first step of flavonoid degradation, was also found to be significantly more abundant in CRC cases than in healthy samples (Wilcoxon rank sum test; P value = 0.045). Taken together, these observations underscore a potential role of this bacterium in the degradation of flavonoids in CRC cases.
Interestingly, Fusobacterium nucleatum has been associated with the CRC microbiome in major previous studies from other populations. The meta-analysis performed in this study also found F. nucleatum to be the top bacterium in the global CRC-associated microbiome studies. However, this bacterium was not present in the list of the 20 taxonomic markers identified in this study. Although its abundance was significantly higher in CRC cases than in healthy controls, its proportional abundance was below the minimum abundance criterion (>0.1%) selected in this study, and hence, it was not included in the list of taxonomic markers. Further, its presence was almost negligible (0.05%) in the Indian samples in comparison to its basal levels in the Austrian and Chinese CRC data sets. Hence, it could not appear as a taxonomic marker for Indian CRC samples.
The results of the study also have translational applications in CRC diagnosis. Survival rates in CRC are reported to increase if the cancer is diagnosed and treated at an early stage (41). The standard colonoscopy method used to diagnose CRC is invasive and expensive, because of which many high-risk individuals are not screened at the initial stages of cancer. The available noninvasive tests, such as the fecal occult blood test, fecal immunochemical test, and DNA-based Cologuard test (42), lack sensitivity for the detection of early-stage disease, may provide false-positive results, and also need confirmation due to nonspecific diagnosis (43). Similarly, the molecular subtyping method commonly used in cancer research, where cancer subclasses are based on clinically relevant gene expression patterns (44), does not show clear results in CRC (45, 46). Thus, the apparent limitations in the diagnosis of CRC prompt the need for the development of alternative diagnostic methods such as the microbial biomarkers identified in this study and similar previous studies. The 33 potential gene markers associated with the Indian microbiome samples and their high accuracy (91.67%) in classifying Indian CRC samples from the healthy samples provide a proof of concept for the development of an affordable diagnostic test using fecal microbial gene markers. However, due to the lack of a sufficient number of samples to represent each of the four stages of CRC, a correlation analysis of the 33 gene markers with the stage of the cancer could not be performed in this study, which would have been helpful to identify early-stage CRC markers. In addition, the robustness of these candidate markers should be further validated on other Indian cohorts with larger numbers of samples and on similar cohorts in other populations, which is presently a limitation of this study and provides impetus for further studies.
MATERIALS AND METHODS
Cohort design and subject enrollment. A considerable sample size, consisting of 60 samples (30 cases and 30 controls), was recruited from two different locations (Bhopal and Kerala) in India. To construct a balanced data set, 15 cases and 15 controls were selected from each of the locations. The two selected locations represent different geographies (2,000 km apart) and lifestyles, in order to remove the confounding effect of diet and to make the observations generalizable for the Indian cohort. Bhopal is a city located in central India and is populated with people from all over the country; hence, samples from here can act as a proxy to represent the diversity of the country. Samples from Kerala were specifically chosen because, among all the other states of India, Kerala has the highest rate of colorectal cancer incidence. The fecal samples were collected only from CRC cases, and those from healthy subjects were taken from a previous study (12). Each fecal sample was collected and immediately transported to the lab at 4°C for further processing. Diagnosis of all the cases was carried out by experienced oncologists at the hospitals through biopsy and colonoscopy. The study exclusion criteria for patients were any previously diagnosed serious medical conditions and recent use of antibiotics, to avoid the effect of confounding factors. Patients with incomplete medical information were also removed from the selected set. Fecal samples were collected prior to colonoscopy in sterile containers.
Fecal metagenomic DNA extraction. Metagenomic DNA was isolated from all the fecal samples using the QIAamp stool minikit (Qiagen, CA, USA) according to the manufacturer's instructions. DNA concentration was estimated by the Qubit HS double-stranded DNA (dsDNA) assay kit (Invitrogen, CA, USA), and quality was estimated by agarose gel electrophoresis. All the DNA samples were stored at −80°C until sequencing.
Shotgun metagenome sequencing. The extracted metagenomic DNA was used to prepare the sequencing libraries using the Illumina Nextera XT sample preparation kit (Illumina Inc., USA) by following the manufacturer's protocol. The sizes of all the libraries were assessed on the Agilent 2100 Bioanalyzer using the Agilent high-sensitivity DNA kit (Agilent Technologies, Santa Clara, CA, USA) and were quantified on a Qubit 2.0 fluorometer using the Qubit HS dsDNA kit (Life Technologies, USA) and by quantitative PCR (qPCR) using Kapa SYBR Fast qPCR master mix and Illumina standards and primer premix (Kapa Biosystems, MA, USA) according to the Illumina suggested protocol. The shotgun metagenomic libraries were loaded on an Illumina NextSeq 500 platform using the NextSeq 500/550 v2 sequencing reagent kit (Illumina Inc., USA), and 150-bp paired-end sequencing was performed at the Next-Generation Sequencing (NGS) Facility, IISER, Bhopal, India.
Preprocessing of the metagenomic reads. A total of 150 Gbp of metagenomic sequence data (mean = 1.36 Gb) was generated from 60 fecal samples. The metagenomic reads were filtered using the NGSQC (v2.3.3) toolkit with a cutoff of q ≥ 20 (47). The high-quality reads were further filtered to remove the host-origin reads (human contamination) from the bacterial metagenomic reads, which resulted in the removal of an average of 1% of reads. The reads from each sample were assembled into contigs at a k-mer size of 63 bp using SOAPdenovo (v2.0) (48). The singletons resulting from each sample were pooled, and de novo assembly was repeated on the combined set of singleton reads from all samples. The open reading frames (ORFs) from each contig (length of ≥300 bp) were predicted using MetaGeneMark (v3.38) (49). Pairwise alignment of genes was performed using BLAT (v2.7.6), and the genes which had an identity of ≥95% and alignment coverage of ≥90% were clustered into a single set of nonredundant genes, from which the longest gene was selected as the representative ORF to construct the nonredundant gene catalogue.
The Integrated Gene Catalogue (IGC), which represents 1,297 human gut metagenomic samples comprising the HMP, MetaHIT, and Chinese data sets, was retrieved (50). The gene catalogue constructed from the Indian samples was combined with the IGC to construct a nonredundant gene catalogue (using ≥95% identity and ≥90% alignment coverage), which is referred to as the "updated IGC" in the subsequent analysis.
Quantification of gene content. The quantification of gene content was carried out using the strategy of Qin et al. (51), in which the high-quality reads were aligned against the updated IGC using SOAP2 (v2.21) in the SOAP aligner with an identity cutoff of ≥90% (52). Two types of alignments were considered for sequence-based profiling: (i) the entire paired-end read mapped to the gene, and (ii) one end of the paired-end read mapped to a gene while the other end remained unmapped. In both cases, the mapped read was counted as one copy. Further, the read count was normalized by the length of the gene as b_i = x_i/L_i.
The relative abundance of a gene within a sample was calculated as

$$a_i = \frac{b_i}{\sum_j b_j}, \qquad b_i = \frac{x_i}{L_i}$$

where a_i is the relative abundance of gene i in sample S, x_i is the number of times gene i was detected in sample S (the number of mapped reads), L_i is the length of gene i, j runs over all genes, and b_i is the copy number of gene i in the sequenced data from sample S. Construction of updated gene catalogue for gut profiling. To construct the gene catalogue for gut microbiome profiling, the high-quality sequencing reads were subjected to a de Bruijn graph-based assembly, which resulted in 2,143,541 contigs of >300 bp in length with a total contig length of 1.52 Gb. To capture low-coverage genomic regions or low-abundance genomes, all unassembled reads were extracted and combined with the singletons from each sample to further assemble an additional 1.2 million contigs (>300 bp) with a total assembled length of 0.76 Gb. Gene prediction on all assembled contigs resulted in 4,591,809 genes, of which 2,364,248 were nonredundant and represent the gene catalogue of the Indian population. We incorporated these genes to update the currently available Integrated Gene Catalogue (IGC), which contained 9.8 million genes from 1,267 gut metagenomes from three continents (Europe [53-55], United States [56], and China [51]), as it lacked information on genes specific to the Indian population. The Updated Gene Catalogue (UGC) comprised 11,118,467 genes (an addition of 12.5% genes to the current IGC), with 1,238,571 genes unique to the Indian population.
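A minimal sketch of the length-normalized relative abundance calculation defined at the start of this section, with hypothetical read counts and gene lengths:

```python
import numpy as np

def relative_abundance(read_counts, gene_lengths):
    """a_i = b_i / sum_j b_j with b_i = x_i / L_i (length-normalized copies)."""
    b = read_counts / gene_lengths
    return b / b.sum()

# hypothetical sample: mapped read counts and lengths (bp) for 5 catalogue genes
x = np.array([120.0, 40.0, 300.0, 0.0, 75.0])
L = np.array([900.0, 300.0, 1500.0, 600.0, 450.0])
print(relative_abundance(x, L).round(4))
```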
On this updated gene catalogue, reads from each sample were mapped and the genes present in the Indian population were identified. On average, 54.47% ± 7.84% (average ± SD) of high-quality reads from each sample mapped to the UGC, resulting in the identification of 3.8 million genes present in the Indian cohort. Taxonomic assignment and functional annotation were performed for these 3.8 million genes using 4,097 reference genomes (HMP and NCBI species) and the KEGG and eggNOG databases. A total of 2.41 million genes (62.9%) could be successfully assigned a taxonomy at the genus level. The remaining genes are expected to be from currently unidentified microbial species. At the functional level, 8,312 KEGG orthologues and 59,303 eggNOG orthologue groups were identified in the updated gene catalogue. Additionally, 24% of the genes which were not mapped to the orthologue groups could be clustered into 649 novel gene families, which did not have any assigned function but were still included in the analysis as novel eggNOG groups.
Diversity and rarefaction analysis. Estimation of total gene richness, α-diversity (within-sample diversity), and β-diversity (between-sample diversity) in the set of 60 metagenomic samples was performed by randomized sampling with replacement, and estimates were compared between groups of samples. Rarefied matrices were obtained by rarefying at 6 million reads per sample. In total, we performed 10 repetitions, and in each of these, we measured richness, α-diversity (using the Shannon index), and β-diversity (using the Bray-Curtis distance) for each sample. The median values were taken as the respective measurement for each sample. Intersample distances were calculated using the Bray-Curtis distance matrices. The significance of the association with health status was assessed using the Wilcoxon rank sum test.
Phylogenetic assignment of reads. A total of 4,097 reference microbial genomes were obtained from the Human Microbiome Project (HMP) and the National Center for Biotechnology Information (NCBI) on 5 December 2015. The databases were independently indexed into two Bowtie indexes using Bowtie 2 (v2.3.4.1) (57). The metagenomic reads were aligned to the reference microbial genomes using Bowtie 2. The mapped reads from the two indexes were merged by selecting the alignment having the higher identity (≥90% identity). The percent identity was calculated using the formula: % identity = 100 × (matches/total aligned length). The normalized abundance of a microbial genome was calculated by summing the total number of reads aligned to its reference genome, normalized by the genome length and the total number of reads in the data set. For reads showing hits to the two indexed databases with equal identity, each genome was assigned a 0.5 read count. The relative abundance of each genome was calculated by dividing the normalized abundance of each genome by the total abundance. The Calinski-Harabasz index (CHI) was used to calculate the variance between clusters compared to the variance within clusters (58). A clade-specific-marker-based taxonomic assignment of reads was also done using the mOTUs (v2) (59) approach and Metaphlan (v2.0) (60).
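The read-level normalization described above can be sketched as follows, with hypothetical counts and genome lengths (the 0.5-read tie-splitting rule appears only as a fractional count in the input):

```python
import numpy as np

def percent_identity(matches, aligned_length):
    # % identity = 100 * (matches / total aligned length), as in the text
    return 100.0 * matches / aligned_length

def genome_relative_abundance(read_counts, genome_lengths, total_reads):
    """Normalize per-genome read counts by genome length and library size,
    then rescale so the abundances across all genomes sum to 1."""
    norm = read_counts / (genome_lengths * total_reads)
    return norm / norm.sum()

counts = np.array([10000.0, 2500.5, 800.0])   # .5 from an equal-identity tie
lengths = np.array([4.2e6, 2.9e6, 5.1e6])     # hypothetical genome lengths (bp)
print(genome_relative_abundance(counts, lengths, total_reads=1.0e7).round(4))
```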
Construction of metagenomic species for MGWAS. The gene cohorts and their abundances from 291 samples belonging to India (60), Austria (103), and China (128) were combined and used for determining MGS/CAGs. A Pearson's correlation coefficient (PCC) cutoff of ≥0.9 was used for considering association between genes, and only genes having an abundance of >0 in at least 30 samples were considered for association analysis. Furthermore, genes for which ≥90% of the abundance was obtained from a single sample were discarded. To determine the taxonomic origin of each MGS/CAG (metagenomic cluster), all the genes were aligned against the 4,097 reference microbial genomes from HMP and NCBI at the nucleotide level using BLASTN. The alignment hits were filtered using an E value of ≤10^-6 and alignment coverage of ≥80% of the gene length, and 2,687,688 genes showed alignments against the reference genomes. The remaining genes were aligned against the UNIREF database (UniRef50) at the protein sequence level (61). Multiple best hits with equal identities and scores were assigned taxonomy based on the lowest common ancestor (LCA) method. The genes were finally assigned to taxa based on comprehensive parameters of sequence similarity across phylogenetic ranks, as described earlier (62). An identity threshold of ≥95% was used for assignment up to the species level, an ≥85% identity threshold was used for assignment up to the genus level, and ≥65% identity was used for phylum-level assignment using BLASTN. The taxonomic assignments of MGS/CAGs were performed with the criterion that ≥50% of the genes in each MGS should map to the same lowest phylogenetic group. So, if a particular species is assigned ≥50% of the genes out of the total, the assignment is carried out at the species level rather than at the level of genus or higher orders. The relative abundance of MGS/CAGs in each sample was estimated by using the relative abundance values of all genes from that MGS/CAG. A Poisson distribution was fitted to the relative abundance values of the data, and the mean estimated from the Poisson distribution was assigned as the relative abundance of that MGS. The profiles of MGS/CAGs were generated and used for further analysis. The MGS/CAGs associated with CRC in the Indian population were scored using the log odds ratio, and P values were calculated using the Wilcoxon rank sum test between CRC and healthy individuals. The Wilcoxon rank sum test was adjusted for multiple comparisons using false-discovery rate (FDR) adjustment. The MGS having P values of <0.05 and log odds ratios of >2 (CRC) or ≤−2 (healthy) were considered enriched in the CRC or healthy groups, respectively.
Fecal metabolomic sample preparation and derivatization. In order to identify the metabolic potential of the microbes, metabolomic profiling of a subset of individuals (n = 18; CRC patients = 9, healthy = 9) was performed. Lyophilized fecal samples were used to achieve better metabolite coverage, as described previously (12). Metabolites were extracted from 80 mg of lyophilized sample in 1 ml of ice-cold methanol-water (8:2) by bead beating for 30 cycles (each cycle included 30 s of beating at 2,500 rpm and 1 min of standing at 4°C). The samples were then sonicated for 30 min in a probe-based sonicator (Branson digital Sonifier, model 102 C with double-step microtip) at 4°C, followed by 2 min of vortexing. The supernatant was extracted by centrifugation at 18,000 × g for 15 min at 4°C and dried at 50°C under a gentle stream of nitrogen gas. To remove residual water molecules from the samples, 100 µl of toluene was added to the dry residue and evaporated completely at 50°C under nitrogen gas. The extracted metabolites were first derivatized with 50 µl of methoxyamine hydrochloride (MOX) in pyridine (20 mg/ml) at 60°C for 2 h, and a second derivatization was performed with 100 µl of N-methyl-N-(trimethylsilyl)trifluoroacetamide (MSTFA) in 1% trimethylchlorosilane (TMCS) at 60°C for 45 min to form trimethylsilyl (TMS) derivatives. Finally, 150 µl of the TMS derivatives was transferred into a GC glass vial insert and subjected to GC-time of flight MS (TOF-MS) analysis.
GC-MS analysis. GC-MS was performed on an Agilent 7890A gas chromatograph with a 5975C MS system. An HP-5 (25-m by 320-µm by 0.25-µm [inside diameter, i.d.]) fused silica capillary column (Agilent J&W Scientific, Folsom, CA) was used with the open split interface. The injector, transfer line, and ion source temperatures were maintained at 220, 220, and 250°C, respectively. The oven temperature was programmed at 70°C for 0.2 min and increased at 10°C/min to 270°C, where it was sustained for 5 min, and further increased at 40°C/min to 310°C, where it was held for 11 min. The MS was operated in the electron impact ionization mode at 70 eV. Mass data were acquired in full scan mode from m/z 40 to 600 with an acquisition rate of 20 spectra per second. To detect retention time shifts and enable Kovats retention index (RI) calculation, a standard alkane series mixture (C10 to C40) was injected periodically during the sample analysis. RIs are relative retention times normalized to adjacently eluting n-alkanes. The injector port temperature was held at 250°C, and the helium gas flow rate was set to 1 ml/min at an initial oven temperature of 50°C. The oven temperature was increased at 10°C/min to 310°C for 11 min, and mass data were acquired in full scan mode from m/z 40 to 600 with an acquisition rate of 20 spectra per second.
features to stabilize the test. To test the accuracy of these markers, a Random Forest model was constructed using these genes and was used to make predictions on the samples from cohort B.
CRC index. To compare the performances of markers, we computed a CRC index, as defined by Yu et al. (6), for each of the individuals on the basis of 33 gene markers identified using the methodology mentioned above.
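One plausible reading of such an index, sketched in Python with hypothetical marker abundances; the exact definition follows Yu et al. (6), so the sum-of-log form below is an assumption rather than the published formula:

```python
import numpy as np

def crc_index(abund, crc_enriched, ctrl_enriched, eps=1e-9):
    """Assumed Yu et al.-style CRC index: the sum of log relative abundances
    of CRC-enriched markers minus the corresponding sum for control-enriched
    markers; see reference (6) for the exact published definition."""
    up = np.log(abund[crc_enriched] + eps).sum()
    down = np.log(abund[ctrl_enriched] + eps).sum()
    return up - down

# hypothetical relative abundances of the 33 gene markers in one sample
abund = np.random.default_rng(5).dirichlet(np.ones(33))
idx_up = np.arange(16)        # the 16 CRC-enriched markers (per the text)
idx_down = np.arange(16, 33)  # the remaining, control-enriched markers
print(round(crc_index(abund, idx_up, idx_down), 3))
```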
Supervised learning. Predictive models were built using the supervised machine learning algorithm Random Forest (RF). The models were optimized using 10,000 trees and default settings of mtry (the number of variables used to build each tree). The mean 3-fold cross-validation error rates were calculated for each of the binary trees and for the ensemble of trees. The mean decrease in accuracy, which is the increase in error rate upon leaving a variable out, was calculated for each prediction and tree and was used to estimate the importance score. The variables showing a higher mean decrease in the accuracy of prediction were considered important for the segregation of the data sets into groups based on the categorical variable.
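A minimal Python analogue of this setup, with a hypothetical marker matrix; scikit-learn's permutation importance stands in for the mean-decrease-in-accuracy score described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.random((60, 33))            # hypothetical 33-marker abundance matrix
y = np.array([1] * 30 + [0] * 30)   # 1 = CRC, 0 = healthy

# the paper used 10,000 trees; fewer are used here only to keep the sketch fast
rf = RandomForestClassifier(n_estimators=1000, random_state=0)

# mean 3-fold cross-validation error rate, as described in the text
err = 1.0 - cross_val_score(rf, X, y, cv=3, scoring="accuracy").mean()

# permutation importance approximates the "mean decrease in accuracy":
# each variable is shuffled in turn and the resulting accuracy drop recorded
rf.fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print(round(err, 3), ranking[:5])
```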
Network plot. In order to derive associations between microbial markers, a species cooccurrence network was generated from pairwise correlations using sparCC, which takes into account the compositional nature of the data when estimating the correlations between species. The species associations with correlation coefficients of >0.3 were considered for the construction of networks and for inferring associations among species.
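A sketch of the network-construction step, assuming a precomputed correlation matrix (a plain Pearson matrix on hypothetical data stands in for the sparCC estimate, which additionally corrects for compositionality), and using degree centrality as one simple centrality measure:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
species = [f"sp{j}" for j in range(20)]

# hypothetical correlation matrix for the 20 marker species across 60 samples
C = np.corrcoef(rng.random((20, 60)))

G = nx.Graph()
G.add_nodes_from(species)
for a in range(20):
    for b in range(a + 1, 20):
        if C[a, b] > 0.3:                 # the > 0.3 edge threshold from the text
            G.add_edge(species[a], species[b], weight=C[a, b])

# degree centrality as one way to flag influential nodes in the network
central = nx.degree_centrality(G)
for node, score in sorted(central.items(), key=lambda kv: -kv[1])[:5]:
    print(node, round(score, 3))
```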
Statistical analysis. All the statistical comparisons between groups were performed using a nonparametric Wilcoxon rank sum test with FDR-adjusted P values to control for multiple comparisons. The correlations between two variables and the correlations within a variable were calculated using Spearman's correlation coefficient with adjusted P values. The correlations between categorical and numeric variables were computed using polyserial/biserial correlations. To identify the enrichment of enzymes/species associated with a host, the odds ratio was used as a measure of the enrichment of an enzyme in a host, calculated as

$$\mathrm{OR}(k) = \frac{\sum_{S \in \mathrm{LOC1}} A_{Sk} \,/\, \sum_{S \in \mathrm{LOC1}} \sum_{i \neq k} A_{Si}}{\sum_{S \in \mathrm{LOC2}} A_{Sk} \,/\, \sum_{S \in \mathrm{LOC2}} \sum_{i \neq k} A_{Si}}$$

where A_Sk denotes the abundance of enzyme k in sample S. Apart from that, the Reporter features algorithm was used for gene-set analysis of significant pathways associated with different groups of samples. The algorithm takes the adjusted P values and fold changes (log odds ratios) as input for each KO. The gene statistic is calculated based on the significant association of a KO and its direction of change, through which the pathway is scored by calculating the global P value. All the graphs and plots were generated using the ggplot2 package in R.
Ethics approval and consent to participate. The recruitment of volunteers, sample collection, and other study-related procedures were carried out by following the guidelines and protocols approved by the Institute Ethics Committee of the Indian Institute of Science Education and Research (IISER), Bhopal, India. A written informed consent was obtained from all the subjects prior to any study-related procedures.
Availability of data and material. The data sets generated and/or analyzed during the current study are available in the NCBI BioProject database under project numbers PRJNA531273 and PRJNA397112. | 2019-11-14T14:17:15.742Z | 2019-11-12T00:00:00.000 | {
"year": 2019,
"sha1": "e5a57501d9ed05fce7593ca51e56fc1ea0e473d7",
"oa_license": "CCBY",
"oa_url": "https://msystems.asm.org/content/msys/4/6/e00438-19.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d891f71977e1bdb8e5288d65b8c1a941e6b503c3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
139618786 | pes2o/s2orc | v3-fos-license | Crystallite size determination of barium hexaferrite nanoparticles using WH-plot and WPPM
The Scherrer formula and the Williamson-Hall plot (WH-plot) are the common methods to determine microstructure information from a diffraction pattern, a technique that comes at the expense of the physical meaning of the result. More recently, Whole Powder Pattern Modeling (WPPM) has been proposed, allowing physical information to be extracted from the diffraction data with a one-step refinement of the experimental pattern. In this paper, we report a comparison between the Williamson-Hall plot and Whole Powder Pattern Modeling for determining the crystallite size of nanoparticle barium hexaferrite prepared by mechanical alloying and direct ultrasonic destruction.
Introduction
Recently, nanostructured magnetic materials have become a hot topic due to their unusual properties arising from the surfaces of neighboring grains. The reason is that, because of the great surface-to-volume ratio in nanostructured systems, any physical phenomena taking place on the grain surfaces contribute significant effects to the bulk properties. The unusual properties of permanent magnets were claimed for the first time by McCallum et al. [1], who developed an isotropic permanent magnet based on nanostructured materials with a remanent magnetization value significantly exceeding the conventional limits based on the Stoner-Wohlfarth theory [2]. The enhancement in remanent magnetization is due to the interaction phenomenon, since the orientation of the interacting spins becomes partly independent of the different crystallographic directions in each crystallite [3]. In fact, the interaction phenomenon that raises remanent enhancement occurs not only in single-phase nanostructured materials [4] but also in nanocomposite magnet systems [5]. The basic principle for gaining the effects of exchange grain interaction is that the crystallite sizes of the magnetic phase in the material must be on the nanometer scale.
Based on this fact, the determination of crystallite size becomes critical. Nanocrystalline barium hexaferrite (BHF) has been a subject of increasing interest due to potential applications in various areas such as permanent magnets, microwave devices and fast magnetic recording devices [6][7][8]. Accurate determination of crystallite size is very important for BHF as a hard magnetic material, since reducing the crystallite size leads to a reduction in coercivity, its defining property [8][9][10][11]. As the latest Scanning Electron Microscopes (SEMs) and Transmission Electron Microscopes (TEMs) easily achieve sub-nanometer resolution, direct observation of nanocrystalline materials has become straightforward. However, some limitations remain, such as the restriction of the analysis to only a few grains [12]. In addition to SEM and TEM, an indirect method such as X-ray diffraction is also suitable for crystallite size analysis from the respective powder diffraction pattern. For years, the Scherrer formula [13,14] and the Williamson-Hall plot [13,15] have been used to determine the microstructure of nanocrystalline materials. Scherrer observed that small crystallite size gives rise to line broadening and derived the well-known equation relating crystallite size to broadening, called the Scherrer formula [14]. Stokes and Wilson also observed that crystal imperfection and distortion can cause line broadening [16]. Further, Williamson and Hall proposed a method for deconvoluting the size and strain contributions by examining the peak width as a function of 2θ, which is written in equation 1 [15].
β_hkl cos θ = kλ/D + 4ε sin θ, (1)

where β_hkl is the full width at half maximum of the line broadening, θ is the Bragg angle, k is the Scherrer constant, which depends on how the line profile width is determined and is usually not known exactly, λ is the source wavelength, D is the crystallite size, and ε is the microstrain, representing displacements of atoms from their ideal positions produced by any lattice imperfection (dislocations, vacancies, interstitials, substitutionals, and similar defects).
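As a numeric illustration of how equation 1 is applied, the following short Python sketch performs the linear Williamson-Hall fit; the peak positions, FWHM values and Scherrer constant below are placeholder assumptions, not the measured BHF data, and this is not the High Score Plus implementation.

import numpy as np

# Hypothetical peak positions (degrees 2-theta) and size+strain-broadened
# FWHM values (already corrected for instrumental broadening), in radians.
two_theta = np.array([30.3, 32.2, 34.1, 37.1, 40.3])
beta = np.radians([0.20, 0.22, 0.21, 0.24, 0.25])
wavelength = 0.17903   # nm, Co-K-alpha as used in the Experimental section
K = 0.9                # Scherrer constant k (shape dependent, assumed here)

theta = np.radians(two_theta / 2.0)
x = 4.0 * np.sin(theta)            # strain term of equation 1
y = beta * np.cos(theta)           # broadening term of equation 1
slope, intercept = np.polyfit(x, y, 1)
D = K * wavelength / intercept     # crystallite size from the y-intercept, nm
epsilon = slope                    # microstrain from the slope
print(f"D = {D:.1f} nm, strain = {epsilon:.2e}")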
It is clearly seen that the size and strain broadening are independent of each other, which allows them to be separated when both occur together: the crystallite size is estimated from the y-intercept and the strain from the slope. However, the Scherrer formula (SF) and the Williamson-Hall (WH) plot are strictly valid for Lorentzian peaks only, a condition seldom met in practice, because some information is also contained in Gaussian peaks. The average crystallite size works well for isotropic domains, but not well enough for anisotropic domains. The microstrain contribution only provides a general measure of defects, but does not identify their source [17]. A further obstacle comes from where the SF and WH-plot results are extracted. Structure (Rietveld) refinement and whole powder pattern fitting (WPPF) are two distinct methods implementing the SF and WH-plot in their algorithms. The common form used in Rietveld and WPPF diffraction pattern analysis is shown in equation 2 [12].
y_ic = y_ib + Σ_k G_ik M_k L_k |F_k|² P_k A_k E_k, (2)

where y_ic is the net intensity calculated at point i in the pattern, y_ib is the background intensity, G_ik is the normalized peak profile function containing the size and strain effects, M_k is the multiplicity factor, L_k is the Lorentz polarization factor, F_k is the structure factor, P_k is the preferred orientation, A_k is the absorption correction and E_k is the extinction correction. The profile functions are introduced by modeling the peak profiles with an arbitrary bell-shaped function, from which the physical information is then deduced. This is a two-step method in which the result cannot be evaluated a priori. The diffraction peak profiles are a complex combination of physical and instrumental effects. Constraints imposed by the chosen analytical function can introduce systematic (model) errors correlated with structural and non-structural parameters. As a consequence, the results can be biased and difficult to assess. A further practical issue is that the refinement is very sensitive to the least-squares minimization between observed and calculated data.
Alternatively, a technique has been proposed to cover all Gauss/Lorentz-type combinations, called Whole Powder Pattern Modeling (WPPM) [17]. This technique directly refines the entire pattern, with individual parameters corresponding to individual sources of broadening. It is able to model such information consistently and agrees remarkably well with microscopy results [17][18][19]. It starts from a Fourier approach and works in a convolutive way, where the diffraction profiles can be written as in equation 3 [17]:

I(s_hkl) = k(d*) Σ_hkl w_hkl ∫ C_hkl(L) exp(2πi L s_hkl) dL, (3)

where d* is the reciprocal space coordinate (d*_hkl in the Bragg condition), k(d*) groups known geometrical and structural terms, w_hkl is the weight function (dependent on the defects present in the material), L is the length in real space (inversely proportional to d*_hkl), s_hkl is the distance from the peak centroid in reciprocal space and C_hkl is the Fourier transform (FT) multiplication, as explained in equation 4 [17].
C_hkl(L) = T^IP · A^S_hkl · A^D_hkl · (A^F_hkl + i B^F_hkl) · A^APB_hkl · (A^SR_hkl + i B^SR_hkl) · (A^C_hkl + i B^C_hkl), (4)

where T^IP is the FT of the instrumental profile, A^S_hkl is the FT for the lognormal or York distribution, depending on grain shape and size, A^D_hkl is the lattice distortion due to dislocations, (A^F_hkl + i B^F_hkl) is the stacking fault broadening, A^APB_hkl is the anti-phase broadening, (A^SR_hkl + i B^SR_hkl) is the grain surface relaxation broadening, (A^C_hkl + i B^C_hkl) is the stoichiometric fluctuation broadening, and additional line broadening sources can be included because the Fourier transforms of the various effects are multiplied. It is a one-step procedure that removes arbitrary hypotheses by directly modeling the whole diffraction pattern in terms of physical models of the microstructure.
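To illustrate the convolutive Fourier logic of equations 3 and 4 in the simplest possible case, size broadening alone from a lognormal distribution of spherical domains, a short numeric sketch is given below; the grids and distribution parameters are placeholders chosen only for illustration, and this is in no way the PM2K implementation.

import numpy as np

def a_sphere(L, D):
    # Common-volume Fourier coefficient of a spherical domain of diameter D.
    r = np.clip(L / D, 0.0, 1.0)
    return 1.0 - 1.5 * r + 0.5 * r**3

def lognormal_pdf(D, mu, sigma):
    return np.exp(-(np.log(D) - mu)**2 / (2 * sigma**2)) / (D * sigma * np.sqrt(2 * np.pi))

L = np.linspace(0.0, 200.0, 2001)          # real-space length, nm
D = np.linspace(1.0, 200.0, 400)           # domain diameters, nm
w = lognormal_pdf(D, np.log(49.1), 0.15)   # distribution roughly like the WPPM result

# Volume-weighted average Fourier coefficient over the size distribution.
A = np.trapz(w[None, :] * D[None, :]**3 * a_sphere(L[:, None], D[None, :]), D, axis=1)
A /= np.trapz(w * D**3, D)

# Cosine Fourier transform of A(L) gives the size-broadened peak profile.
s = np.linspace(-0.2, 0.2, 401)            # distance from the peak centroid, 1/nm
profile = np.trapz(A[None, :] * np.cos(2 * np.pi * L[None, :] * s[:, None]), L, axis=1)

Instrumental, dislocation, faulting and the other terms of equation 4 would enter simply as additional multiplicative factors on A(L) before the transform.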
Experimental
The BHF nanoparticles were prepared through two successive steps. In the first step, the mechanical alloying variant of the solid-state reaction method was employed. In this method, stoichiometric quantities of analytical-grade BaCO3 and Fe2O3 precursors from Sigma-Aldrich with a 99% purity level were milled in a ball mill apparatus with a ball-to-powder mass ratio of 10:1 at a jog speed of 160 rpm for 20 hours. The milled powders were compacted into a cylindrical die of 25 mm diameter under a 10 ton load force. The green compact was then sintered at 1100°C for 2 hours, leading to a fully crystalline bulk sample. The crystalline bulk sample was re-milled for 10 hours and then washed with a dilute HCl solution to eliminate impurities. In the second step, the fully crystalline re-milled powders were further treated with a direct ultrasonic probe at 5% vol/vol in water, at a frequency of 20 kHz and a transducer amplitude of 60 μm. X-ray diffraction was performed on a Philips diffractometer using Co-Kα radiation, owing to the fluorescence effect of iron. The crystallite size from the powder diffraction data was determined using High Score Plus (HSP) software implementing the WH-plot and PM2K software implementing WPPM. Powder XRD data of the sample ultrasonically irradiated for 5 hours were used in the refinements, after determining the instrumental broadening with a Si standard (SRM-640). The average particle size distributions of the powder specimens were determined using a Dynamic Light Scattering (DLS) Particle Size Analyzer and a Scanning Electron Microscope (SEM).
Results and Discussion
Fitted powder XRD patterns of the ultrasonically irradiated samples in figure 1(a) show a typical single-phase M-type hexagonal structure of BHF with space group P63/mmc. Strong preferred orientation on hhl-planes in the bulk pattern is typical of mechanically alloyed BHF sintered between 1000-1250°C [20,21]. As the ultrasonic irradiation time increased, the intensity of the diffracted peaks decreased due to the more random distribution of the destructed crystals. The decrease in intensity was accompanied by line broadening of the diffracted peaks. In figure 1(b), the broadened (114) peaks of the ultrasonically irradiated powders are compared. The Full Width at Half Maximum (FWHM) increased as the ultrasonic irradiation time increased: the longer the irradiation time, the higher the FWHM. It can be concluded that ultrasonic irradiation reduces the crystallite size. In previous work, Karina et al. [22] reported comprehensive studies on the use of a high-power ultrasonicator to further refine mechanically alloyed powders. The crystalline mechanically milled SrO·6(Fe2O3) powders, which initially had mean particle and crystallite sizes of 723 nm and 179 nm respectively, were both progressively reduced in size, though at different reduction rates, after irradiation under an ultrasonic transducer of 55 μm amplitude. After 5 hours of irradiation the polycrystalline particles were reduced towards monocrystalline particles of about 87 nm. Hence, crystallites in the particles can be further refined. Figure 2(a) is a plot of the particle size distribution of the powder sample evaluated by the particle size analyzer after 5 hours of ultrasonic irradiation. It shows a bi-modal distribution with mean sizes of 51.2 nm and 276.3 nm, respectively. The higher mean of the distribution occurred due to the effect of fine particles that tend to agglomerate. It can be seen that the particles of the lower mean particle size occupied the largest volumetric distribution, 64%, and those of the higher mean particle size occupied 36%. An SEM micrograph of such agglomerated particles is shown in figure 2(b). It can be clearly seen that the agglomerated particles present a homogeneous spherical shape with sizes ranging from a few tens of nanometers to a few hundred nanometers. The mean crystallite size and crystallite size distribution of such particles can be evaluated from the respective XRD data. The WH-plot gave an average crystallite size of about 42.4 nm, while in the WPPM analysis the crystallite size distribution was modeled by a lognormal distribution as shown in figure 3(b). The result superimposes on the second peak of the DLS graph. The mean size of 49.1 nm is very close to the DLS result (49.2 nm), which suggests that each particle contained a single crystallite. The width of the distribution, about 3 nm, is narrower than the DLS result (7.1 nm), showing a difference between the techniques.
Conclusions
A comparison between the WH-plot and WPPM methods of crystallite size determination on nanoparticle BHF has been successfully performed. It was found that the average crystallite size from the WH-plot was very close to the mean of the crystallite size distribution from WPPM: about 42.4 nm for the WH-plot and about 49.1 nm for WPPM. Both results agree well with the DLS and SEM analyses. WPPM gives the additional advantage of a size distribution, which is very important information in nanomaterial applications. | 2019-04-30T13:08:14.919Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "71cb8dbf3341f0d9c53a4732f0837ddff9479593",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1080/1/012008",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cf10e6d241ec12997d196783cab35a4787dca550",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
265274561 | pes2o/s2orc | v3-fos-license | Creation of forest stands with the use of water-retaining polymeric nano-substances (hydrogels) on the drained bottom of the Aral sea
The Aral problem is of a planetary nature and its solution is possible only through large-scale forest reclamation. Every year, up to 150 million tons of salt, dust and sand rise into the air from the entire dried bottom located on the territory of Uzbekistan and Kazakhstan, an area of about 6 million hectares; they rise high into the sky, where they mix with clouds and are carried away to distances of up to 1000 km. This is the tragedy of the century, and it can be solved only by creating forest plantations. The studies were carried out on the sandy loamy plain of the former Rybatsky Bay of the dried bottom of the Aral Sea. In order to increase the germination of seeds of desert plants by increasing soil moisture, work was carried out on an area of 2 hectares. On the dry bottom of the Aral Sea, forest cultivation is difficult due to low rainfall and hot weather. This problem can be solved by introducing water-retaining polymeric nanosubstances into the soil. During the seedling emergence period, in the places where nanosubstances were introduced, soil moisture increased by 1.5-2.2% compared to the control, and the survival rate of seedlings increased by 1.6-3.3%. When studying the effect of nanosubstances on a range of desert plants, it was revealed that the number of seedlings per 100 linear meters was greater than in the control by 19.1 plants for saxaul, 17.7 for cherkez, 7.5 for kandym, 8.6 for izen, 12.3 for chogon and 13.6 for teresken. In the experiment, the survival rate of saxaul and teresken was higher by 28%, and by autumn the mortality of saxaul was 3% and of teresken 4%, while in the control the mortality was 20% and 9%, respectively. The advantage is retained by the water-retaining polymeric nanosubstance produced in Perm. Therefore, in order to obtain abundant seedling shoots from seeds and increase the survival rate of seedlings in the extreme conditions of the dried bottom of the Aral Sea, it is advisable to use polymeric water-retaining nanosubstances.
Introduction
The problem of saving water all over the world, including in the Republic of Uzbekistan, is one of the most pressing, because our state is located in the arid zone, where desert territories occupy 70% [7]. An area of 3.2 million hectares on the territory of the Republic of Uzbekistan is occupied by the dried bottom of the Aral Sea. Water saving is especially relevant in the agriculture of the Republic of Karakalpakstan and on the drained bottom of the Aral Sea. The main source of soil moisture loss on the dried bottom is evaporation from the upper horizons due to heating under the influence of solar radiation and drying by dry winds. If we take into account the fact that the annual amount of precipitation on the dried bottom is no more than 90 mm, then it is clear that the above factors lead to the complete drying of the soil [8].
To solve the main environmental problem of Uzbekistan, it was necessary to carry out large-scale forest reclamation work on the dried bottom of the Aral Sea in order to create protective forest plantations to fix the easily deflating surface of the former seabed. However, we encountered difficulties consisting of an insufficient amount of moisture in the soil, and therefore the introduction of polymeric water-retaining nanosubstances (hydrogels) into the soil contributed to a significant improvement in the hydrological properties of the soil. The use of hydrogels reduces the evaporation of moisture, prevents it from draining into the underlying layers of the soil, and contributes to the preservation of productive moisture in the root layer throughout the growing season. The study of the effect of hydrogels from different manufacturers allows us to understand more deeply their effectiveness in maintaining moisture in the soil and their effect on seed germination and the survival rate of seedlings of desert plants [9,11,12].
We set a goal: to develop an agricultural technology for the use of water-retaining polymeric nanosubstances to increase the survival rate of seedlings and to obtain abundant seedling shoots in the extreme arid conditions of the dried bottom of the Aral Sea, which will allow moisture to be retained in the soil for a long period and increase the survival rate of seedlings of desert plants by 10-25%.
Materials and methods
Research area. The studies were carried out on the sandy loamy plain of the former Rybatsky Bay of the dried bottom of the Aral Sea, covered with sea shells. The sandy loam is located at a depth of up to 35 cm; below, it is underlain by loam and then brown viscous clay mixed with loose sand. The natural and climatic conditions of the study area are typical for most of the dried bottom. The main factors determining the habitat of vegetation types on the dried bottom of the Aral Sea are soil salinity, the depth of mineralized groundwater and the composition of bottom sediments. The projective vegetation cover does not exceed 30%.
The dried bottom of the Aral Sea occupies the extreme northern position in the continental subtropical climate zone. The location of the territory in the depths of the mainland, at a great distance from the oceans, determines the aridity and continentality of the climate [1,2]. The main features of the climate are an abundance of heat, moisture deficiency, long hot dry summers and relatively frosty winters, often-repeated strong winds causing salt-dust storms, and significant evapotranspiration [5,10].
The objects of research are the drained bottom of the Aral Sea, where seedlings of saxaul, cherkez, kandym, izen, chogon and teresken were planted, and where seeds of these species were sown. When carrying out the forest reclamation work, polymeric moisture-retaining nanosubstances from four manufacturers were used: BYM 5110, NIPI New Technologies, Perm, Russian Federation; TSRICHT, Tashkent, Uzbekistan; "ZEBA", UPLLIMITED, Gujarat, India; "Polysorb-1" JSC, Kopeysk, Russian Federation. The experiment was laid out on an area of 2 hectares. First, this area was cleared of vegetation, leveled and fenced with tamarix branches to protect it from wild animals.
Research methodology. Before the experiment was started, a geobotanical description of the research area was carried out according to the methodology generally accepted in forest reclamation: the relief, the type of bottom sediments, the species composition of plants and their projective cover, and the degree of natural self-overgrowing were described. The main elements of forest growing conditions that determine the possibility of using water-retaining nanosubstances on the dried bottom of the Aral Sea are not so much the climate as the topography and thickness of sand, the depth of groundwater, moisture reserves in the upper horizons, as well as the nature and degree of salinity along the profile [13]. Some differences in the climate of the plots could only determine the range of plants and the timing of the work, as well as the application rate of the water-retaining nanosubstances. Soil conditions were characterized by describing soil sections with a depth of at least 100 cm. Soil samples were taken from each genetic horizon for subsequent determination of the granulometric and chemical composition. Soil moisture was determined by the air-thermostatic method. Soil samples were taken using a soil drill along genetic horizons. Since the main goal was to determine the effectiveness of a water-retaining polymeric nanosubstance (hydrogel), the hydrogel was applied both in dry powder form and in liquid form, in 4-fold repetition, during the sowing of seeds and the planting of seedlings [15,17,18].
When sowing, the hydrogel was applied in two ways -the seeds were soaked in the hydrogel, mixed and then sown in the prepared soil.The second method was that the seeds were sown dry in the soil, and the hydrogel in liquid form was applied from above in the form of irrigation [6].
When planting seedlings, the hydrogel, both in dry and liquid form, was introduced into the planting hole, thoroughly mixed with the soil, after which the seedlings were planted.
During the entire growing season, biometric measurements of established seedlings were carried out and a record was kept of shoots from sown seeds.
The criteria for the effectiveness of a water-retaining polymer nanosubstance from different manufacturers are an increase in soil moisture throughout the growing season, a high survival rate of seedlings, and a large number of seedlings from sown seeds [19,20].
Results and its discussion
Growing forest plantations on the dried bottom of the Aral Sea is complicated by the arid climate and the lack of moisture in the soil. Therefore, it is important to find substances whose introduction into the soil will increase its moisture content. Such a substance is a polymeric moisture-retaining nanosubstance, which is a network of cross-linked hydrophilic polymer chains. It can also take the form of a colloidal gel, in which water is the dispersion medium [3]. A three-dimensional solid results from the hydrophilic polymer chains being held together by cross-links. The cross-links that bind the polymers of nanosubstances fall into two main categories: physical and chemical. Physical cross-links consist of hydrogen bonds, hydrophobic interactions, and chain entanglements (among other things). Due to their inherent cross-linking, the nanosubstance network retains its structural integrity and does not dissolve despite the high concentration of water. Nanosubstances are highly absorbent (they can contain over 90% water) natural or synthetic polymer networks [14,16].
We introduced water-retaining nanosubstances from different manufacturers during the sowing of seeds and the planting of seedlings. In the first sowing method, the seeds were soaked in the nanosubstance before sowing, thoroughly mixed, and sown together with the water-retaining nanosubstance, i.e., in the liquid state [4].
The second method consisted of sowing the seeds of desert plants into prepared grooves 2-3 cm deep, after which a layer of liquid hydrogel was applied on top. Seedlings were counted twice: in June and September. The obtained material was statistically processed and tabulated; at the same time, soil moisture was studied in order to identify the effect of the nanosubstance on moisture reserves throughout the growing season: on April 6, May 6, June 18, and September 6. When creating forest plantations by sowing seeds, soil moisture in the 0-10 cm horizon, i.e., the seed horizon, is important. It was revealed that the highest soil moisture during the sowing period was obtained when using the nanosubstance of the brand "Polysorb" from Kopeysk, at 3.2%, while in the control soil moisture was significantly lower at 1.3%. In the summer period, the advantage passes to the nanosubstance produced in Perm, at 1.7%, against only 0.15% in the control. The advantage of this nanosubstance is also preserved in September, when its soil moisture content is 1.8%, against 0.05% in the control. During the period of planting saxaul and teresken seedlings, we introduced moistened hydrogel from different manufacturers, and it was important to have high soil moisture in the horizon where the root system of the seedlings is located, i.e., the 10-40 cm horizon. In May, in the 10-40 cm horizon, the soil moisture content when the nanosubstance produced in Perm was introduced was 11.3%, against only 1.9% in the control, while in June this nanosubstance also kept its advantage, with a moisture content of 5.7% against 0.7% in the control (Table 2). The experimental data showed that the largest number of seedlings was detected when the nanosubstance produced in Perm was introduced: compared to the control, the number of seedlings per 1 linear meter was higher by 19.1 plants for saxaul, 17.7 for cherkez, 17.5 for kandym, 8.6 for izen, 12.3 for chogon and 13.6 for teresken. The second place is occupied by the nanosubstance produced in Kopeysk, the third by ZEBA from India, while the nanosubstance produced by TSRICHT in Tashkent, where the number of seedlings was 7.4-10.9, was less effective (Table 3).
A second evaluation of the preserved seedlings was carried out on September 23. It was very important to find out what effect the nanosubstances from different manufacturers have on plants during the hot season: do they contribute to the preservation of plants at high air temperatures, and do plants die during this period of the year?
On September 23, when using the nanosubstance produced in Perm, there were 20.4 saxaul plants per 1 linear meter, which is 4.6 plants fewer than on June 20; cherkez decreased by 4.3 plants, kandym by 5.8, izen by 2.6, chogon by 6.0 and teresken by 5.7 plants. In the control, where the nanosubstance was not used, the number of plants was very small and cannot form the basis of future plantations. Moisture-retaining nanomaterials from the other manufacturers also had a positive effect on seed germination and on the survival and preservation of seedlings. The advantage is retained by the nanosubstance produced in Perm; here the soil moisture was significantly higher than in the control, which contributed to better seed germination and seedling survival.
No one had experience in studying the effect of nanosubstances on the survival rate of seedlings of tree and shrub species in Uzbekistan, especially desert plants in the extreme climatic conditions of the dried bottom of the Aral Sea. Therefore, it was important to study the effect of nanosubstances from different manufacturers on the survival rate of seedlings. 100 saxaul plants and 100 teresken plants were planted in clean rows per 100 running meters. A liquid nanosubstance was introduced into the planting holes at a rate of 5 grams per hole, where it was thoroughly mixed with the soil. The experimental material showed that the survival rate of seedlings is influenced by the polymeric moisture-retaining nanosubstance. However, not all nanosubstances have the same effect on plants, and this depends on the manufacturer. When using the nanosubstance produced in Perm, the survival assessment carried out on June 20 showed that, out of 100 planted plants, 92 saxaul plants and 84 teresken plants took root, and a second assessment conducted on September 23 showed that 89 saxaul and 80 teresken plants remained, i.e., the mortality was 3 and 4 plants (Table 4). When using the nanosubstance "Polysorb-1" from Kopeysk, 86 saxaul and 81 teresken plants took root in June, and by September 82 and 76 plants survived; when using "ZEBA" UPLLIMITED Gujarat India, 81 and 73 plants took root, respectively, and by September 76 and 65 plants survived. In the control, where nanosubstances were not used, 64 saxaul and 56 teresken plants took root, and by September 44 and 47 plants remained, respectively (Table 4). Comparing the survival rates under the nanosubstances from different manufacturers to the control, the survival of saxaul and teresken in the experiment with the Perm nanosubstance was higher by 28 and 28 plants, and by September 23 the advantage was 45 and 33 plants. The obtained experimental material allows us to establish the advantage of the nanosubstance produced in Perm. The experimental studies showed that introducing moisture-retaining nanosubstances from different manufacturers into the planting hole contributed to a better survival rate of the seedlings, and the differences are significant compared to the control (Table 5). Therefore, the introduction of water-retaining nanosubstances when sowing seeds and planting seedlings makes it possible to obtain a significantly larger number of seedlings and significantly increase the survival rate of seedlings.
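Table 5 reports the significance of the survival-rate differences, but the test used is not stated in the text. As a hedged sketch, counts such as those reported above (92 of 100 surviving saxaul with the Perm nanosubstance versus 64 of 100 in the control) could, for example, be compared with Fisher's exact test in Python:

from scipy.stats import fisher_exact

survived_perm, planted = 92, 100   # saxaul, June assessment, Perm nanosubstance
survived_ctrl = 64                 # saxaul, June assessment, control
table = [[survived_perm, planted - survived_perm],
         [survived_ctrl, planted - survived_ctrl]]
odds, p = fisher_exact(table)
print(f"odds ratio = {odds:.2f}, p = {p:.4f}")

A p value well below 0.05 for such counts would support the significance of the differences claimed in Table 5.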
Conclusions
Studies have shown that all water-retaining nanosubstances, regardless of the manufacturer, have a positive effect on the retention of moisture in the soil. It was revealed that the highest soil moisture in the spring was obtained when using the nanosubstance of the brand "Polysorb" from Kopeysk, at 3.2%, while in the control it was 1.3%. In summer, when the temperature on the soil surface exceeded 50°C, the advantage passed to the nanosubstance produced in Perm, where soil moisture was 1.7%, against only 0.15% in the control. The advantage of this nanosubstance was also retained in September, when soil moisture was 1.8%, against only 0.05% in the control.
The increase in soil moisture when the nanosubstance is applied, compared to the control, has a positive effect on the number of seedlings that emerge from the seeds of desert plants. The experiment showed that the largest number of seedlings was revealed when the nanosubstance produced in Perm was introduced: compared to the control, the number of seedlings per 1 linear meter was higher by 19.1 plants for saxaul, 17.7 for cherkez, 17.5 for kandym, 8.6 for izen, 12.3 for chogon and 13.6 for teresken. Nanosubstances from other manufacturers also have a positive effect on the number of emerged shoots, but it is less pronounced.
The survival rate of seedlings depends on the manufacturer of the water-retaining nanosubstance. When using the nanosubstance produced in Perm, the survival assessment carried out on June 20 showed that, out of 100 planted plants, 92 saxaul plants and 84 teresken plants took root, and a second assessment conducted on September 23 showed that 89 saxaul and 80 teresken plants remained, that is, the mortality was 3 and 4 plants. In the control, 64 saxaul and 56 teresken plants took root; by September, 44 and 47 plants survived, respectively, so the mortality was 20 and 9 plants.
Summarizing the above, we can conclude that the use of water-retaining polymeric nanosubstances can significantly increase the number of seedlings emerging from seeds and increase the survival rate of seedlings. In addition, it should be noted that the introduction of a nanosubstance into the soil increases the survival of plants, and they more easily tolerate the summer heat, which reduces the number of plants lost.
The work was financed within the framework of the state task ALM-202210017 "Introduction of agricultural technology for the use of water-retaining polymeric nanosubstances to increase the survival rate of seedlings on different types of bottom sediments of the dried bottom of the Aral Sea", 2022 -2023, Uzbekistan (project leader, Doctor of Agricultural Sciences Novitsky Z.B.).
Table 3 .
The number of seedlings of desert plants that appeared (pieces / 1 linear meter) in experiments using nanosubstances from different manufacturers
Table 4 .
Seedling survival rate (pieces/100 linear meter) when using a water-retaining nanosubstance from different manufacturers
Table 5 .
Significance of differences in the survival rate of seedlings (pieces /100 linear meter) when using water-retaining polymeric nanosubstances from different manufacturers | 2023-11-18T16:17:30.517Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "ad6a51876294315b4c8eab82d1e4f245aa174675",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/86/e3sconf_pdsed2023_06003.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6e41aa27d57b6fb3e3dc7db2b7f81f06baa66d87",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": []
} |
3971804 | pes2o/s2orc | v3-fos-license | Subjective evaluation of the effectiveness of whole-body cryotherapy in patients with osteoarthritis
Objectives One of the treatments for osteoarthritis (OA) is whole-body cryotherapy (WBC). The aim of this study is to assess the effect of whole-body cryotherapy on the clinical status of patients with osteoarthritis (OA), according to their subjective feelings before and after the application of a 10-day cold treatment cycle. The aim is also to assess the reduction of intensity and frequency of pain, the reduction in the amount of painkiller medication used, and the possible impact on physical activity. Material and methods The study involved 50 people, including 30 women (60%) and 20 men (40%). Thirty-one patients had spondyloarthritis (62% of respondents), 10 had knee osteoarthritis (20%), and 9 hip osteoarthritis (18%). The overall average age was 50.1 ±10.9 years; the youngest patient was 29 years old and the oldest 73 years old. The average age of the women was 6 years higher. The study used a questionnaire completed by patients, and consisted of three basic parts. The modified Laitinen pain questionnaire contained questions concerning the intensity and frequency of pain, frequency of painkiller use and the degree of limited mobility. The visual analogue scale (VAS) was used in order to subjectively evaluate the therapy after applying the ten-day treatment cycle. Results According to the subjective assessment of respondents, after the whole-body cryotherapy treatments, a significant improvement occurred in 39 patients (78%), an improvement in 9 patients (18%), and no improvement was declared by only 2 patients (4%). Conclusions Whole-body cryotherapy resulted in a reduction in the frequency and degree of pain perception in patients with osteoarthritis. WBC reduced the number of analgesic medications in these patients. It improved the range of physical activity and had a positive effect on the well-being of patients.
Introduction
One of the efficient approaches to therapeutic recovery that has been practiced for many years and remains in contemporary medical practice is whole-body cryotherapy. Treatments consist of applying brief exposures to cryogenic temperatures (below -100°C) to the whole body of the patient in order to induce a physiological response. Its continued development has resulted in the formation of new types of cryochambers [1][2][3][4][5].
The treatment of patients with low temperatures inside a cryochamber proves to be extremely beneficial in rehabilitating the musculoskeletal system, particularly in osteoarthritis, rheumatoid arthritis, ankylosing spondylitis, psoriatic arthritis, post-traumatic alterations, multiple sclerosis and spastic paresis. Patients with fibromyalgia experience subjective improvement in reported pain; slowed conduction in sensory and motor nerves and reduced muscle spasticity have also been observed. Contraindications include: claustrophobia, Raynaud's disease and phenomenon, cardiovascular diseases (such as cardiac failure), acute respiratory diseases and cancer [6][7][8][9][10][11][12][13][14][15][16].
The best therapeutic effect of cryotherapy is achieved in the treatment of lesions located in the musculoskeletal system. It is beneficial in improving well-being and physical activity and helps relieve fatigue in patients. Treatments last from 1 to 3 minutes -the patients remain for about 15-30 seconds at a temperature of -60°C inside the vestibule and for 1 to 3 minutes at a temperature of -110°C to -160°C inside the actual chamber. Optimal results of systemic cryotherapy are achieved by applying temperatures ranging from -130°C to -150°C. Cryotherapy improves blood circulation. It also helps to neutralize the substances which cause pain and inflammation.
Patients suffering from rheumatoid arthritis noticed reduced joint stiffness and pain intensity, and took fewer painkillers, upon completion of the cryotherapy sessions.
Upon completion of the cryotherapy sessions, physical therapy is a necessary component of the healing process. It is accomplished by a rehabilitation technique known as cryokinetics. The technique is associated with an individually determined rehabilitation program [14][15][16][17][18][19][20].
The aim of this study was to assess the effect of whole-body cryotherapy on the clinical status of patients with osteoarthritis, according to their subjective feelings before and after the application of a 10-day cold treatment cycle. The aim was also to assess the reduction of intensity and frequency of pain, the reduction of the amount of painkiller used, and the possible impact on physical activity.
Material and methods
The study was conducted at the Central Clinical Hospital of the Ministry of the Interior in Warsaw at the turn of February and March 2016, where the cryochamber was used.
The study involved 50 people, including 30 women (60%) and 20 men (40%). Thirty-one patients had spondyloarthritis (62% of respondents), 10 had knee osteoarthritis (20%), and 9 hip osteoarthritis (18%). The overall average age was 50.1 ±10.9 years; the youngest patient was 29 years old and the oldest 73 years old. The average age of the women was 6 years higher.
The study used a questionnaire completed by patients, and consisted of three basic parts: 1. The modified Laitinen pain questionnaire contained questions concerning the intensity and frequency of pain, frequency of painkiller use and the improvement of mobility. The number of points in these four categories ranges from 0 to 16, with a lower number indicating better health of the patient.
2. The visual analogue scale (VAS) was used in order for the patients to subjectively evaluate the therapy after the ten-day treatment cycle. It is a reliable and frequently used method in the evaluation of pain intensity. The patient indicated a point on a 10 cm line to show their pain severity, where 0 represents no pain and 10 represents the strongest possible pain (moderate pain is 1-3, 4-6 means strong pain, 7-9 is very strong pain).
3. Subjective evaluation of the therapy. Patients assessed the state of their health after treatment by choosing one of the available replies: significant improvement, improvement, lack of improvement or deterioration.
The study has been approved by the Bioethics Committee of the Central Clinical Hospital of the Ministry of the Interior in Warsaw (no 42/2015).
Results
After the ten-day cycle of whole-body cryotherapy (according to the subjective assessment of respondents), a significant improvement occurred in 39 patients (78%), an improvement occurred in 9 patients (18%), and no improvement was declared by only 2 patients (4%). The average baseline pain intensity in all patients was 5.1 points (VAS 5.1 ±1.9). Upon completion of the therapy, this value decreased to 2.6 points (2.6 ±1.6). According to the survey, in women this value dropped from 5.1 points (5.1 ±1.8) to 2.7 points (2.7 ±1.6), and in men from 5.2 points (5.2 ±2.0) to 2.5 points (2.5 ±1.7) (Table I).
Patients reported that pain intensity scores decreased from an average of 1.6 points (1.6 ±0.7) before treatment to 0.7 points (0.7 ±0.5) after treatment (Tables II, VI).
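The study reports means and standard deviations but does not name a statistical test; the following is a minimal sketch of one plausible paired comparison of pre- and post-therapy VAS scores, in Python, where both the choice of the Wilcoxon signed-rank test and the synthetic scores (drawn to match the reported baseline of about 5.1 ±1.9) are assumptions.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
vas_before = np.clip(np.round(rng.normal(5.1, 1.9, 50)), 0, 10)  # baseline ~5.1 +/- 1.9
vas_after = np.clip(vas_before - np.round(rng.normal(2.5, 1.0, 50)), 0, 10)
stat, p = wilcoxon(vas_before, vas_after)   # paired, nonparametric
print(f"W = {stat:.1f}, p = {p:.4f}")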
Discussion
Whole-body cryotherapy applied within the framework of comprehensive physiotherapy is an effective method in the treatment of osteoarthritis, contributing significantly to improved mobility. The final outcome is definitely better when cryotherapy is used in a long-term therapeutic process. With proper application it does not cause complications and provides a valuable complementary method of primary treatment [4,9,10,12]. The results are a confirmation of the beneficial therapeutic effects of cryotherapy in patients with degenerative arthritis. Cryotherapy has analgesic, anti-inflammatory and anti-edematous effects, decreases muscle tension and improves microcirculation and systemic reactions (hormonal and immune) [9-11, 13, 14].
In research evaluating the remedial influence of whole-body cryotherapy in patients with chronic neck pain syndrome, Daniszewska et al. [21] reported that a series of 10 sessions, combined with kinesitherapy, greatly reduces the pain-related symptoms and increases the movement range of the cervical spine affected by osteoarthritis. Likewise, Stanek et al. [22] reached similar results, with a significant reduction of pain symptoms in patients with ankylosing spondylitis who had undergone 10 sessions of whole-body cryotherapy treatment. Indeed, the gathered data emphasize that the decrease in pain intensity in the groups of patients who underwent cryotherapy with kinesitherapy is considerably greater than in groups of patients treated with kinesitherapy alone. Patients reported their pain on the VAS scale before beginning and after ending the treatment cycle.
In research comparing the analgesic effectiveness of local and whole-body cryotherapy in patients with chronic pain linked with degenerative changes, Miller [23] claims success of the therapy with both procedures, although the outcome is best achieved with whole-body cryotherapy. Furthermore, a favorable effect of cryotherapy on the mental state of patients has also been observed. The effect manifested itself in fatigue relief and mood improvement. In addition, the success of local and whole-body cryotherapy of knee osteoarthritis was found in the study of Osowska et al. [24], where both procedures are viewed as similarly effective. After the whole-body cryotherapy sessions, the level of pain symptoms rated on the numerical rating scale (NRS), along with a modified Laitinen questionnaire, diminished by 26% and 38%, respectively. However, in the group of patients treated with local cryotherapy, the pain level also dropped, by 28% and 35%, respectively [24].
Further investigations referring to the impact of whole-body cryotherapy in subjects suffering from rheumatoid arthritis confirm a positive remedial response.
Krekora et al. [25] found that a 10-session whole-body cryotherapy cycle combined with exercise greatly reduces the frequency and intensity of pain, morning stiffness and the amount of painkillers taken, and improves motor activity.
The analgesic effect of whole-body cryotherapy was also observed in studies conducted by Cholewka and Drzazga [26], who compared the effectiveness of procedures performed in a two-tier cryochamber and a cryochamber of lingering cold. The results of the two were comparable. The authors highlight the fact that both types of cryochambers contributed to an improvement of the overall clinical status of patients [26].
In my study almost 80% of respondents felt that after the whole-body cryotherapy treatment, a significant improvement occurred. In the subjective assessment, patients focused in particular on the analgesic effect, the ability to undertake various activities in daily life (improvement of physical activity), relaxation and their generally improved well-being. The therapy has been proven effective, as indicated by the results obtained through the Laitinen questionnaire, the VAS and the subjective approach.
Conclusions
1. Cryotherapy resulted in a reduction in the frequency and degree of pain perception in patients with osteoarthritis.
2. A 10-day cycle of cold treatment reduced the number of analgesic medications in these patients.
3. Cryotherapy treatments improved the range of physical activity and had a positive effect on the well-being of the patients.
The author declares no conflict of interest. | 2018-04-03T02:32:06.852Z | 2016-12-30T00:00:00.000 | {
"year": 2016,
"sha1": "2b4972b9d7fa8c57771a032bf6aa2e45c0bb7cdf",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-18/pdf-29024-10?filename=Subjective%20evaluation.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2b4972b9d7fa8c57771a032bf6aa2e45c0bb7cdf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216509755 | pes2o/s2orc | v3-fos-license | The Selective Politicization of Transatlantic Trade Negotiations
European Union (EU) trade policy is in the spotlight. The Transatlantic Trade and Investment Partnership (TTIP) negotiations triggered substantial public mobilization, which was mirrored in a surge of literature on trade politicization. Notwithstanding politicization's topicality and significance, it varies considerably over time, across trade agreement negotiations as well as across EU member states. By picking up on the latter, this article examines why, despite similar economic benefits potentially to be gained from trade liberalization, the TTIP negotiations revealed striking differences in politicization in Germany and the UK. This variation is explained by highlighting the impact of some of TTIP's substantive issues, which mobilized a range of materially and ideationally motivated stakeholders, who in turn shaped the diverging trade positions of the two countries' governments. In explaining this selective politicization across two European countries, the focus is on three explanatory variables, domestic material interests (business associations and trade unions), societal ideas (voters and non-governmental organizations [NGOs]) dominant in these countries' domestic politics, as well as their interaction with national institutions. For this reason, the societal approach to governmental preference formation is employed, which provides a detailed exploration of these three domestic factors, as well as the importance of their interdependence, in shaping the TTIP positions of the UK and German governments.
Introduction
Throughout the last decade European Union (EU) trade policy, the oldest and most integrated policy, was viewed as a rather depoliticized and overlooked field of studies; research and literature were slim (Dür & Zimmermann, 2007, p. 772). Recently a remarkable surge in research has developed, due to alluring developments such as the stronger involvement of the European Parliament in trade policy decision making, the expansion of the trade agenda including content previously detached from free trade-triggering a more active engagement of nontraditional societal actors-and negotiations with new, essentially peer-like trade actors such as the United States (US) and China (van Loon, 2018a). Particularly due to the increase of public interest in and salience of trade agreements negotiations, EU trade policy is in the spotlight, suggesting that it, in contrast to the past, has become a profoundly politicized policy area. Research has picked up on the prominence of trade politicization, specifically revealed by the recent special issues of the Journal of European Integration (2017), the Journal of European Public Policy (2019) and this current Politics and Governance thematic issue. Equally, the European Commission's response to the increased public opposition to trade negotiations highlights a change from 'business as usual'; the time when EU trade policy was still perceived as a technocratic activity and both the public and the media were indifferent. In its Balanced and Progressive Trade Policy, the Commission felt prompted to react to a continually changing environment and increased public salience; 'How we conduct trade policy and trade negotiations matters. If the EU is to deliver effective agreements that benefit all citizens, the crafting of these agreements must be accountable, transparent, and inclusive' (European Commission, 2017, p. 8).
Notwithstanding the trade politicization hype, however, this study acknowledges that, although claims about politicization are contemporary as well as significant, it varies considerably over time, across trade agreement negotiations as well as across EU member states. EU trade policy's depiction as a highly contested issue area, with politicization spilling over to other trade agreements or to the idea of free trade in general, is thus overly exaggerated (Young, 2019, p. 14). During the period from 2005 until 2016, despite the Eurozone debt crisis and the Comprehensive Economic and Trade Agreement (CETA) as well as the Transatlantic Trade and Investment Partnership (TTIP) negotiations, there was no widespread hostility towards trade liberalization, with considerable majorities in all EU member states holding continuously positive views of free trade, ranging between 68% and 77% (Eurobarometer, 2017, pp. 59-60). The fact that attitudes towards free trade remained steady during these various stages suggests that politicization has been more specific than general. Simultaneously, this also mirrors public opposition to TTIP being greater compared to any of the parallel EU trade negotiations.
Understood as 'an increase in the polarization of opinions, interests or values, and the extent to which they are publicly advanced towards the process of policy formulation within the EU' (de Wilde, 2011, p. 566), politicization did not play a decisive role in the majority of EU bilateral trade negotiations. Neither nontraditional societal actors nor the general public or the media paid significant attention to negotiations, thus playing mere spectator roles in (the ongoing, concluded or stalled) trade negotiations with developing countries such as China, India, Malaysia, Thailand and Vietnam or with developed countries such as Australia, Singapore, South Korea and Japan. (Inter-)regional negotiations such as those with the African, Caribbean and Pacific Economic Partnership Agreements (EPAs), the Andean Community, the Central American region and Mercosur did trigger some civil society resistance yet did not evolve into a large-scale European public mobilization. Due to the economic weight of the US and the EU, TTIP is the largest trade and investment agreement ever attempted and a prime example of what constitutes the next generation of a 'deep' trade agreement that led to 'an unprecedented public scrutiny' (Malmström, 2015, p. 2). Politicization's asymmetric dispersion hence illustrates that its occurrence was exceptional. This exceptionality resulted in TTIP being the outlier (Young, 2019).
While literature explaining the topical emergence of public mobilization during EU trade negotiations is abundant (De Bièvre & Poletti, 2017; De Ville & Siles-Brügge, 2016; Eliasson & Garcia-Duran Huet, 2019), it is presently complemented by scholars stimulating a research agenda in explaining the varying degrees of politicization across such trade negotiations (De Bièvre & Poletti, 2020; Meunier & Czesana, 2019). A consensus exists about various explanations of why the TTIP negotiations incited large public resistance. Accounting for the emergence of politicization are exactly those developments which instigated the abundant EU trade policy research mentioned above; institutional changes in the Lisbon Treaty, the content of trade and investment negotiations inducing stronger involvement of nontraditional societal actors, as well as the role and nature of the trading partner. Surprisingly, most efforts to explain the causes as well as the variation of trade politicization stubbornly focus on EU level institutions and actors (business associations, civil society organizations and trade unions). Despite scholars' acknowledgement of European governments' significance in shaping the common EU trade negotiation positions (Dür & Zimmermann, 2007, p. 783; Laursen & Roederer-Rynning, 2017, p. 765), the domestic level, 'where trade policy making actually begins and where member governments have to find negotiation positions that reflect their own domestic constraints' (van Loon, 2018a, p. 166), is-excepting a handful of studies (Adriaensen, 2016; Bauer, 2016; Bollen, 2018; Bouza & Oleart, 2018; De Bièvre, 2018; Meunier & Roederer-Rynning, 2020)-either mistakenly replaced by viewing the EU level as the domestic level, or plainly ignored. This lack of attention on the domestic level is astonishing as it is the level where trade policy making begins and where governments are constrained in finding negotiation positions originating from domestic societal demands. Assessing domestic level influences shaping governments' trade positions is thus a vital preceding component in comprehending how and why certain trade positions are pursued at the EU level. This deficiency, in looking at domestic factors to enrich knowledge, theoretical and empirical, has been criticized by van Loon (in press), who states that explanations for why European governments vary in trade positions and priorities, and how and by whom these are generated in the domestic preference formation process, remain largely unanswered. An accentuation on the origins of governments' trade positions offers a timely and relevant point of view and thus should be taken seriously in future research on EU trade policy (van Loon, 2018b, p. 107).
This study hence aims to illuminate the domestic level and-not by only opening but by explicitly unfolding the black box-its goal is to trace and explain variation of politicization across the TTIP trade positions of the United Kingdom (UK) and Germany. Both countries are traditional advocates of trade liberalization and were expected to be the main TTIP beneficiaries among the EU countries (Felbermayr, Heid, & Lehwald, 2013, p. 43). Yet at the height of trade politicization this did not translate into similar TTIP positions. Whereas the British government was a constant enthusiastic TTIP promoter, the German government, originally a fervent supporter, gradually signaled a more reserved TTIP backing. It is shown that politicization shaped the UK's enthusiastic, and Germany's reserved, position on trade, correlating with differences in interests and ideas prevalent in these countries' domestic politics. The article proceeds in the following three steps. The next section, while touching on several domestic politics approaches, presents the societal approach to governmental preference formation (Schirm, 2011, 2013, 2016, 2020). This includes defining the variables, formulating the core hypotheses and explaining the operationalization. This is followed by the empirical case study which examines whether the TTIP positions of the governments under scrutiny correspond to domestic material interests or societal ideas, and whether these are in line with national institutions in a cross-country comparison. The last section concludes with a brief comparative summary on the theoretical and empirical findings.
The Societal Approach to Governmental Preference Formation
The societal approach to governmental preference formation is employed to account for the selective politicization since its eminent accentuation on endogenous societal considerations, interests, ideas and institutions dominant in countries' domestic politics, prior to international or intergovernmental negotiations (Schirm, 2013, p. 690), allows for not only opening but an explicit unfolding of the black box in explaining variation in governments' positions. While employing and augmenting domestic politics theories such as IR liberalism (Moravcsik, 1997), domestic sources of economic policies (Goldstein & Keohane, 1993; Keohane & Milner, 1996), historical institutionalism (Fioretos, 2011) as well as varieties of capitalism (Hall & Soskice, 2001), the societal approach, 'developed as a complementary approach' (Schirm, 2020, p. 5), engages in a unique advancement and refinement of these. Akin to these theories, its core assumption is that, in democratic political systems, elected governments intend to remain in office; ergo, their positions mirror societal actors' preferences. Yet, contrary to hailing the importance of either domestic interests or ideas or institutions, this analytical instrument embraces all three domestic explanatory variables in explaining governmental preference formation as the dependent variable (Schirm, 2016, p. 68). Goldstein and Keohane (1993, p. 25) and Milner (1997, p. 16) point to domestic factors' interrelationship, yet truly exploring this interdependence requires further theoretical development. Providing a systematic examination of the individual role of domestic interests, ideas and institutions, in supporting or opposing each other, as well as their interplay in shaping governments' positions is a crucial innovative aspect of the societal approach. It is essential in advancing existing domestic politics approaches, both theoretically and empirically, which makes it distinctive. Consequently, a theory-guided empirical investigation is incomplete by solely determining which of these explanatory variables accounts for variation across governments' positions. A further step is necessary to complete the picture, which involves analyzing why domestic interests dominate in shaping governments' trade positions in some cases, whereas ideas and institutions prevail in other situations. Schirm (2020, p. 9) notes that this facet on 'the conditions under which each variable becomes more important and prevails in shaping governmental preferences' is hitherto not included in previous domestic politics approaches. By employing said variables this study addresses the alluring developments such as the expansion of the trade agenda triggering a more active engagement of non-traditional societal actors. With EU trade increasingly impinging on countries' domestic politics, thereby mobilizing a range of materially and ideationally motivated societal stakeholders, who aim to shape their respective governments' trade positions, this approach is both timely and warranted.
Echoing previous scholars' research output, the societal approach assigns certain attributes to domestic actors and structures. Building on and furthering Milner (1997) and Moravcsik (1997), the 'material interest' variable is defined as economic sectors' short-term distributional calculations, which adjust promptly to alterations in the international economy (e.g., the desire for trade protection vs. the demand for trade liberalization). 'Societal ideas' are defined as voters' durable, value-based, shared expectations of apt government behavior in steering the economy (e.g., trust in market forces vs. governmental regulation). The definition of 'national institutions' expands on Fioretos' (2011), as well as Hall and Soskice's (2001), line of thought in identifying these as formal arrangements of socio-economic coordination (e.g., coordinated market economy [CME] vs. liberal market economy [LME]). In order to be able to account for a broader array of domestic stakeholders, and the respective governments' responsiveness to their demands, further domestic actors are added in the analysis; the materially motivated sectoral business associations are complemented by trade unions, considered as sources for domestic interests, and ideationally motivated voters are complemented by NGOs as sources for societal ideas.
The variables' explicit specification supports the articulation of individual hypotheses proposing 'conditions under which each variable becomes more important' (Schirm, 2020, p. 9) in shaping governments' trade positions. These central hypotheses, accounting for the impact of economic sectors (interests) and societal expectations (ideas) as well as domestic structures (institutions), are conceptualized and inserted within the trade context as follows: (1) when economic sectors face meaningful distributional calculations, material interests predominate in shaping the governments' TTIP positions due to intense lobbying; and (2) when fundamental questions on the role of politics in steering the economy are affected and economic sectors face diffuse distributional concerns, societal ideas prevail in shaping the governments' TTIP positions. Accounting for the variables' interplay, a further hypothesis states that when both cost-benefit calculations for economic sectors and fundamental societal expectations about the government's apt role in steering the economy are affected, these either compete and weaken, or reinforce and strengthen, each other in shaping governments' TTIP positions. Additionally, when the issue concerns formal arrangements of socio-economic coordination, the governments' TTIP positions will be consistent with national institutions. The effect of material interests and societal ideas on governmental preference formation is strengthened when these institutional frameworks are present, while the absence of such national arrangements dilutes the influence of domestic factors in shaping governmental positions (Schirm, 2016, p. 69).
In terms of operationalization, this study analyses the rhetorical logic, that is, the discourse between domestic stakeholders and responsible elected politicians in the UK and Germany during the TTIP negotiations, particularly regarding the investor-to-state dispute settlement (ISDS) mechanism and food safety standards, covering the period 2013-2016. The relevance of the three independent variables for the governments' divergent TTIP positions is examined by identifying indicators of the expectations of sectoral business associations, trade unions, voters and NGOs during the TTIP negotiations. Attention centres on these actors' statements in press releases, position papers, public opinion polls, official websites and secondary sources, with the objective of identifying the substantive origins and concerns at the core of politicization. Material interests are demonstrated through statements and position papers of business associations and trade unions in order to examine the directly affected domestic economic sectors and the incentives of sectoral lobbying vis-à-vis the respective governments. Societal ideas are illustrated by public opinion polls revealing voters' durable fundamental expectations, in the form of values, as well as by position papers and statements of NGOs, on apt governmental behavior in steering the economy, which is viewed as more legitimate and acceptable in some TTIP negotiation issues than in others. National institutions are delineated by considering long-term complementarities resulting from two distinct institutional frameworks shaped by the structure of national economies (the CME-LME dichotomy) as well as by different images of government-society relations, in the form of consensus-based vs. majoritarian, competition-oriented decision-making; this shows whether material interests and societal ideas tend to be consistent with these frameworks and thus potentially shaped the governments' positions on TTIP negotiation issues.
Based on the assumption that governments aim to remain in office, thereby inducing responsiveness, the dependent variable is documented mainly through governmental documents and statements. Briefly put, evidence is sought for a correlation between the stated concerns of supporters and opponents and the respective governments' responsiveness to these concerns in their preference formation during the TTIP negotiations. The UK and Germany, under scrutiny due to their variation in TTIP trade stances, were chosen to compare different sets of interests, ideas and institutions: the UK representing an LME shaped by financial services and Germany a CME shaped by manufacturing. They also differ concerning the appropriate role of government, with the British adhering more to trust in market forces and Germans attaching more confidence to governmental regulation (World Values Survey, 2005-2009). This dyad of ideas relates to 'path-dependent ideas and their codified institutional form' concerning the two countries' political systems and their processes of decision making (Schirm, 2011, p. 58). In the case of the UK, this stresses a government that acts as a referee among competing societal groups and a more winner-takes-all 'majoritarian and competitive decision-making,' while in Germany the government is perceived as an intermediator, including all relevant societal groups in a form of 'consensual decision-making' (van Loon, in press). Adopting a most-different setting hence allows for the presumption that, in a cross-country comparison, different domestic interests, ideas and institutions have indeed shaped the two governments' trade positions.
Unfolding the Black Box: Domestic Politics in the UK and Germany
TTIP's prime objective was 'to increase trade and investment' in order to create 'jobs and growth through increased market access and greater regulatory compatibility and setting the path for global standards' through four suggested measures: (1) elimination of tariffs; (2) reducing discriminatory policy measures supporting domestic providers of goods and services; (3) increasing convergence and mutual recognition of regulatory standards, thereby lowering costs for EU and US suppliers; and (4) including investment protection and ISDS (European Council, 2014, p. 4). This illustrates the move away from concentrating primarily on border barriers to the free movement of goods, such as tariffs; from the 1990s onwards, the focus shifted to reducing behind-the-border restrictions on goods and barriers to trade in services. While in 2013 EU Trade Commissioner De Gucht referred to TTIP as 'the cheapest stimulus package that can be imagined' (De Gucht, 2013), in 2015 Trade Commissioner Malmström made the case for TTIP as a 'no-brainer,' with increasing trade having 'two overriding priorities: jobs and growth' (Malmström & Hill, 2015). Following these two goals and the prosperity the agreement was supposed to bring, TTIP corresponded in particular to European public opinion: more than six in ten citizens from 21 of the 27 EU member states believed that international trade should be a vector of domestic job creation (Eurobarometer, 2010, p. 70). Although the majority of 24 EU member governments were in favor of TTIP (Eurobarometer, 2016, p. 19), the TTIP discussion was 'a few degrees hotter in Germany than in other countries' (Tost, 2015). Opposition to an EU FTA with the US was particularly high and rising in Germany, from 41% in 2014 to 52% in 2016, compared to a consistently low share of opposition among UK respondents, at 19% in both 2014 and 2016 (Eurobarometer, 2014, p. 202; 2016, p. 19).
In the following, the argument that both countries' TTIP positions were shaped by material interests, societal ideas and national institutions is examined, first by providing empirical data from British and German business associations and trade unions, followed by public opinion data and statements from NGOs. These data illustrate whether the governments' TTIP positions reflected the domestic factors dominant in these countries. The analysis simultaneously highlights the conditions under which the variables shaped the governments' trade positions.
Material Interests in the UK and Germany
Leading umbrella business associations in both the UK and Germany were in favor of TTIP, whereas both countries' trade unions were rather skeptical. The Confederation of British Industry's report A New Era for Transatlantic Trade stated potential TTIP gains for small and medium-sized businesses from the harmonization of regulatory standards, market access and export opportunities for UK services, a rise in UK jobs due to increased investment, and a larger range of products at cheaper prices for consumers (CBI, 2014, p. 2). With the US being the UK's largest market outside the Eurozone, the CBI believed that TTIP 'was something worth pursuing in the current economic climate' (House of Commons, 2015, p. 6). CBI Brussels Director Sean McGuire referred to EU countries being party to investment treaties with ISDS provisions and stated the necessity to 'uphold basic rules on investor protection [as] the right of states to regulate in the public interest, would help set a precedent for EU investment negotiations with other strategic trading partners like China' (Policy Review, 2015). The British Chamber of Commerce equally supported free trade between the EU and the US, particularly for small and medium-sized companies. Director General John Longworth stressed that 'firms across the UK will cheer a free trade deal that helps them gain new opportunities in US markets' (Longworth, 2015). The Trades Union Congress (TUC) acknowledged the potential economic benefits of TTIP and noted that the reduction of tariffs and economic regulations 'could genuinely lead to greater trade and greater benefits to all' in specific sectors such as the automobile and chemical industries (House of Commons, 2015, p. 6). It was, however, uncertain about potential job creation and considered that the threats to public services, workers' rights, and environmental and food standards would outweigh any potential benefits. The TUC believed that TTIP's primary purpose was to privilege foreign investors by providing transnational corporations with more power and influence, enabling them to sue states whose laws or actions are deemed incompatible with free trade. The TUC's Sally Hunt expressed the union's opposition 'to ISDS in TTIP and indeed any trade deal as it is undemocratic and against the public interest to allow foreign investors to use special secretive courts to sue governments for making public policy they think is bad for business' (TUC, 2014).
In a survey by the Association of German Chambers of Industry and Commerce (DIHK), TTIP was welcomed by an overwhelming majority of German industry: 70% of the German 'Mittelstand' regarded TTIP as positive. The issue most important for facilitating bilateral trade, named by 85% of respondents, was the adaptation or mutual recognition of equivalent norms, standards and certifications, followed by simpler customs clearance, important for 83% of respondents. These figures were even higher in the retail and agri-food branches (91% and 90%, respectively). Tariff elimination was viewed as important by 75% of respondents, especially in the retail and agri-food sectors (both 82%) as well as in the automobile industry and among suppliers (81%). The DIHK and the other leading business associations (the Federation of German Industries, the Confederation of German Employers' Associations and the German Confederation of Skilled Crafts) issued a joint statement calling for an ambitious and fair trade and investment agreement and to 'make use of this opportunity' (Federation of German Industries, 2014, p. 1) in removing barriers in the transatlantic market, thereby 'achieving more growth, more employment, new market opportunities and therefore future prospects for companies and employees' (Federation of German Industries & Confederation of German Employers' Associations, 2014, p. 2). Leading the pro-TTIP campaign, the Federation of German Industries viewed the ISDS as compatible with governments' ability to regulate, as well as an opportunity to reform the international investment system and to introduce higher standards for future trade agreements (Mildner, 2014). In its first position paper of 2013, the Confederation of German Trade Unions (DGB) criticized the US' non-ratification of six out of eight basic core labor standards of the International Labor Organization and called for a suspension of the TTIP trade negotiations. It demanded that 'one of the objectives of the agreement with the US must be an improvement of labor rights everywhere' (DGB, 2013, p. 4). In its position paper one year later, it stated its main concerns, namely the different levels of protection for consumers, the environment and the workforce, and called for TTIP 'to provide greater prosperity for a broader segment of the population, improve economic, social and environmental standards, and create structures for fair competition and good working conditions' (DGB, 2014, p. 4).
Societal Ideas in the UK and Germany
At TTIP's launch in 2013, attitudes among the general public in both the UK (58%) and Germany (56%) were positive towards increased trade and investment between the EU and the US. Attitudes towards free trade in general were similarly positive, with 77% of UK respondents and 74% of German respondents supportive (Eurobarometer, 2014). When asked specifically about TTIP, 39% of German respondents were in favor and 41% against, while in the UK 65% were in favor and 19% against (Eurobarometer, 2014). German TTIP attitudes declined in 2015, with 31% in favor and 51% against the agreement, while the numbers stayed relatively stable in the UK, with 63% in favor and 20% against (Eurobarometer, 2015). Van Loon (2018a, p. 172) points to these diverging German public attitudes towards increased economic relations with the US, towards free trade in general and towards TTIP in particular. German attitudes towards TTIP were thus not related to free trade in general; instead, 'the potential partner and the agreements' content unrelated to trade' is what mattered (Jungherr, Mader, Schoen, & Wuttke, 2018, p. 216). Reflecting the four measures to achieve TTIP's objectives (tariffs, regulations, rules and investment), the issues of a perceived limitation of governments' capacity to regulate the domestic market, potentially leading to a decline in consumer protection, and a supposed loss of democratic accountability due to the agreement's introduction of ISDS were prominent in the German public discourse. As opposed to traditional tariff-cutting trade issues, public TTIP attitudes were less focused on the potential threat of increased international competition than on the agreement's impact on national or European standards and policy processes, as it could be misused by companies as a back door circumventing and undermining consumer protection rights as well as environmental standards: 51% of respondents opposed the harmonization of US and EU standards for products and services, and 53% were against the removal of restrictions on investment between the EU and the US, while a vast majority of Germans showed fundamentally high trust in European standards on issues such as food safety (94%), auto safety (91%) and environmental safety (96%) (Pew Research Center & Bertelsmann Foundation, 2014, pp. 22-23).
Regarding food safety concerns, 56% of Germans believed that chlorinated chicken poses a health risk (Stern, 2014). This highlights the connection between Germans' beliefs in consumer protection and their strong preferences for governmental regulation: German respondents attach greater significance to the role of government in steering the economy, whereas their British counterparts are more supportive of leaving responsibility to market forces, reflecting the two countries' types of capitalism as a CME and an LME, respectively (Schirm, 2011, p. 51). Against this background, a crucial factor in German TTIP attitudes was a skeptical view of transatlantic relations inciting a general distrust in German-American relations (Braml, 2014). In 2014, 73% of respondents thought that the US buying German companies would be negative for the German economy (Pew Research Center & Bertelsmann Foundation, 2014, p. 23), while nearly half (49%) believed that it would hurt the economy if the US were to build new factories in Germany (Pew Research Center & Bertelsmann Foundation, 2014, p. 22). This correlates with results from the Pew Research Center revealing that America's international image has become more negative among German respondents since 2011, falling from 62% with a favorable opinion in 2011 to 50% in 2015; among UK respondents, by contrast, favorable views of the US rose slightly from 61% in 2011 to 65% in 2015 (Pew Research Center, 2015, p. 13).
Much backlash against TTIP came from civil society groups, with UK and German NGOs' criticism particularly focused on the ISDS and TTIP's potential risks to environmental, consumer, health and labor rights. The German NGO sector, an alliance of around 70 members, created the online platform 'TTIP unfairhandelbar' (www.ttipunfairhandelbar.de), providing critical views and informing members about discussion events and demonstrations. The British NGO sector created a similar counterpart, 'NoTTIP' (https://www.nottip.org.uk), with around 50 members. Rejecting ISDS, the German NGO alliance demanded 'legal protection for people-instead of privileged right of action for corporations,' dismissing giving international companies 'their own special rights to take action against governments' (Forum on Environment and Development, 2014, p. 2). The ISDS was severely criticized for favoring investors rather than citizens and for facilitating the protection of foreign rather than national investors' rights. German and UK NGOs stated that the ISDS 'threatens to undermine the most basic principles of democracy' (Hilary, 2015, p. 30). Regarding food safety standards, NGOs in both countries feared that the agreement would result in a so-called race to the bottom on European food safety standards. The 'TTIP unfairhandelbar' alliance stated that the allegedly stricter European standards were non-negotiable and should not be diminished 'nor undermined by a mutual recognition of American and European standards' (Forum on Environment and Development, 2014, p. 2). In the UK, a position paper by War on Want (a member of the 'NoTTIP' alliance) voiced concern about TTIP's potential impact on public services, specifically 'further market opening' and the potential 'to lock-in past privatizations of the NHS,' and demanded 'a full and unequivocal exclusion of all public services from any EU trade agreements and the ongoing trade negotiations' (War on Want, 2015, p. 46).
Domestic Factors Shaping Governments' TTIP Positions
UK Prime Minister David Cameron, strongly in favor of TTIP, said that 'there is no more powerful way to achieve [economic growth] than by boosting trade' (Cameron, 2013). The UK government acknowledged TTIP's large benefits, 'adding as much as £10 billion annually to the UK economy in the long-term,' as well as increasing jobs and lowering prices for goods and services (UK Government, 2014, p. 5). Reassurances softening concerns about the inclusion of the NHS and the challenges of potential ISDS provisions were issued by Lord Livingston of Parkhead, then Minister of State at the Department for Business, Innovation and Skills (BIS), who emphasized that 'TTIP will not change the fact that it is up to the UK to decide how public services, including the NHS, are run' (BIS, 2014). This was supported by Cameron, who deemed these concerns 'nonsense,' as 'there is no threat, I believe, from TTIP to the National Health Service and we should just knock that on the head as an empty threat' (Cameron, 2014). The government, having signed numerous trade agreements including investment protection provisions and thus being in favor of the ISDS, argued that the mechanism would need to strike the right balance between investment protection and the national government's right to regulate. BIS Secretary of State Vince Cable said that 'neither the investment protection provisions nor decisions arising from ISDS cases will affect the ability of the UK government to regulate fairly and in the public interest' (Cable, 2014). This illustrates that, although there was a certain ambivalence among material interests as well as among societal ideas, the government did not include all competing domestic groups; its stance was shaped more by those material interests and societal ideas that favored TTIP.
In Germany, the Christian Democratic (CDU)/Social Democratic (SPD)-led government clearly stated its commitment to a speedy conclusion of TTIP (Christian Democratic Union, 2013, p. 13). It nevertheless adopted a skeptical stance in 2014 on the ISDS issue. Opposition came especially from the governing SPD, which called for the exclusion of the mechanism from TTIP. In March 2014, the SPD party leader, Vice Chancellor and Economics Minister Sigmar Gabriel emphasized the government's position in a letter to then EU Trade Commissioner De Gucht. Gabriel wrote that the US and Germany already offered adequate legal protections to investors, so that ISDS provisions would not be required in a transatlantic agreement (Handelsblatt, 2014). Since then, the SPD had somewhat rowed back on its ISDS position. During the CETA negotiations, Gabriel backed down, stating that 'if the rest of Europe wants this agreement, then Germany must also approve' (Sarmadi, 2014). Chancellor Angela Merkel (2015a), calling for a TTIP 'which has many winners,' stated that the many investment protection agreements Germany had previously negotiated had not been under public scrutiny and that TTIP would induce 'a new international standard for investment protection.' She diluted concerns, saying that such provisions 'were of great importance to many companies in Germany because they were protected from arbitrary situations in certain countries to which they would otherwise have been exposed' (Merkel, 2015a). Businesses, service providers and consumers would garner benefits from TTIP, leading to reduced prices, a larger range of products and a rise in sales generating an increase in jobs, yet 'our standards, for instance on consumer protection, environmental protection and health protection are non-negotiable' (Merkel, 2015b). The government and the DGB issued a joint position paper stressing TTIP's opportunities for intensifying trade relations and making trade fair and sustainable, while both emphasized that workers' rights, consumer protection, and social and environmental standards were not to be jeopardized (BMWi, 2014, p. 1). This joint paper illustrates the German government's responsiveness to material interests in favor of TTIP, but also to interests opposed to it. Its position corresponded equally well to the concerns of societal ideas; its reserved TTIP stance was thus shaped by the ambivalence of both types of domestic factors, interests and ideas, resulting from its inclusion of all domestic groups.
Conclusion
The goal of this article was to trace and explain selective politicization across the TTIP trade positions of the UK and Germany. It has illustrated that a trade agreement's content can fuel politicization when a broad range of materially and ideationally motivated stakeholders are affected by it. In line with the societal approach to governmental preference formation, the TTIP positions of the UK and Germany were strongly shaped by material interests, societal ideas and national institutions. These domestic variables' significance has been theoretically stressed and empirically examined, accounting for the predominance of material interests when the issue at stake concerns distributional consequences for economic sectors, while societal ideas dominate when fundamental concerns about the role of government in steering the economy are at stake. When both are affected, they can either compete and weaken each other or reinforce and strengthen each other, while governments' positions are consistent with national institutions when the issue concerns formal arrangements of socio-economic coordination.
The UK government's position was shaped by the preferences of business associations, which were directly affected by TTIP's distributional impact. Although UK business interests favored TTIP and the TUC represented a skeptical stance, material interests nevertheless shaped an enthusiastic and strong government position. This ambivalence within the variable means that the trade union's concerns did not weaken business interests' preferences in shaping the UK government's position. A position corresponding more to those material interests in favor of TTIP thus also correlates with LME institutions and with societal ideas of trust in market forces. In addition, concerning the trade issues, especially regulation and safety standards, the government showed weak responsiveness to the concerns of NGOs. The ambiguous relationship between NGOs opposed to TTIP and voters in favor of TTIP illustrates that the former did not weaken the latter. In sum, business associations and voters were predominant in shaping the UK's TTIP position.
As with the UK government's stance, its German counterpart corresponded to material interests directly affected by TTIP's potential distributional consequences. Again, these concerns were not identical: business associations were strongly supportive, yet trade unions were against TTIP, thus weakening each other. The government included these ambivalent material interests in its trade policy position. On issues such as the role of government in steering the economy, ISDS and food safety standards, the German TTIP stance correlated with CME institutions and corresponded to societal ideas of trust in governmental regulation, with voters' and NGOs' concerns reinforcing each other. Overall, it should be noted that, with the trade union (DGB), NGOs and voters all opposed to TTIP, the reserved German TTIP position was shaped by both domestic factors, interests and ideas, and was thus in line with national institutions.
The aim of this article was to explain the differences in politicization between the UK and Germany during the TTIP negotiations, thereby illuminating the domestic level of EU trade policy making by unfolding the black box and specifying a comprehensive understanding of the countries' domestic politics. The societal approach to governmental preference formation employed here, distinctive in complementing domestic politics approaches, emphasizes the explicit specification of the domestic variables: interests, ideas and institutions. This supports the conceptualization of hypotheses empirically examining the conditions under which each variable prevails vis-à-vis the others. Since the bulk of the literature on EU trade policy has long marginalized the domestic level, this study has shown the explanatory power of the societal approach in embracing all three domestic factors and explaining their origins, as well as their interdependence, in shaping the varying TTIP positions of the UK and German governments. As EU trade policy will remain in the spotlight for years to come, this contribution has thus made the case for a future accentuation of domestic factors in understanding selective trade politicization across EU member states. | 2020-04-02T09:24:07.744Z | 2020-03-31T00:00:00.000 | {
"year": 2020,
"sha1": "a073fd436b22bb60dca0e6a5e0ad079ab0b7141e",
"oa_license": "CCBY",
"oa_url": "https://www.cogitatiopress.com/politicsandgovernance/article/download/2608/2608",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "18ee1ba387ce7fb28e009ea56b93b1b8afd8525d",
"s2fieldsofstudy": [
"Political Science",
"Economics"
],
"extfieldsofstudy": [
"Political Science"
]
} |
258193887 | pes2o/s2orc | v3-fos-license | A mitochondrial cytopathy presenting with persistent troponin elevation: case report
Abstract Background Mitochondrial diseases represent an important potential cause of cardiomyopathy and should be considered in patients presenting with multisystem manifestations. Timely diagnosis of a mitochondrial disorder is needed, as it can have reproductive implications for the offspring of the proband. Case Summary We describe a case of rising and persistent troponin elevation of initially undifferentiated cause in a 70-year-old female with only mild heart failure symptoms and signs. A diagnosis of a mitochondrial cytopathy was eventually made after genetic testing and striated muscle and endomyocardial biopsy. Multidisciplinary involvement was vital in securing the ultimate diagnosis and is a key lesson from this case. On follow-up, with the institution of heart failure therapy, including cardiac resynchronisation device therapy, there was improvement in exercise tolerance and symptoms. Discussion The discussion addresses the investigation of undifferentiated cardiomyopathies and the importance of considering mitochondrial disorders before settling on a diagnosis of idiopathic cardiomyopathy.
Introduction
The presentation of a mitochondrial cytopathy with persistent troponinaemia is unusual and important to recognise as an underlying diagnosis for patients with 'idiopathic' or unexplained cardiomyopathy.
Timely diagnosis of a mitochondrial disorder is important because of its management implications and its reproductive implications for the offspring of the proband, who may be asymptomatic carriers of significant mitochondrial DNA (mtDNA) mutations. 1 Here, we describe a patient with persistent troponin elevation and heart failure who underwent assessment for cardiac myositis, leading to a final diagnosis of a mitochondrial cytopathy after multidisciplinary review, discussion and investigation.
Case Presentation
A 70-year-old woman of Chilean origin was admitted for investigation of recurrent troponin-positive chest pain and new peripheral oedema. There had been a presentation five years prior with chest pain, a peak troponin-T of 236 ng/L, non-obstructive coronary anatomy on invasive angiography, a reduced ejection fraction of 40% and lateral mid-wall late gadolinium enhancement (LGE) on cardiovascular magnetic resonance imaging (CMR), with corresponding tissue oedema and inflammation on T1- and T2-weighted imaging. A diagnosis of undifferentiated myocarditis was made. The medical background included type 2 diabetes mellitus, diagnosed 15 years previously and managed with metformin and empagliflozin, and bilateral sensorineural hearing impairment of moderate severity with onset at age 50 years. There was no family history of deafness, diabetes, seizures or stroke, and there were no other systemic or neurological signs or symptoms at the time of this presentation. After this initial presentation, the symptoms of heart failure resolved, the troponin normalised, and the patient was lost to follow-up, having been commenced on heart failure therapy with perindopril 2.5 mg daily, bisoprolol 2.5 mg daily and frusemide 40 mg daily. The patient re-presented five years later with dyspnoea on minimal exertion, pre-syncope and angina, and reported several episodes of remitting and relapsing dyspnoea and ankle oedema over the preceding three years. The systems review was relevant for a gradual reduction in mobility over the preceding months. The cardiovascular examination was unremarkable apart from a degree of bipedal oedema. The electrocardiogram (ECG) showed sinus rhythm with a broadened QRS complex in a typical left-bundle-branch-block pattern, with a QRS duration of 130 ms. Severe acute respiratory syndrome coronavirus 2 testing by reverse transcription polymerase chain reaction was negative. The initial echocardiogram showed mild concentric hypertrophy with moderate global impairment of systolic function, without valvulopathy; global longitudinal strain was reduced at −10.5%. With non-obstructive coronary disease on repeat invasive angiography, the patient proceeded to CMR, which corroborated moderate impairment of left-ventricular (LV) systolic function with normal right-ventricular systolic function and an LV ejection fraction of 40%. There were multiple areas of increased oedema on T2-weighted imaging and scar, with LGE uptake in the basal inferolateral wall and septum in a mid-wall pattern (Figure 1). Right heart catheterisation performed after effective diuresis demonstrated a pulmonary capillary wedge pressure of 14 mmHg, a mean pulmonary artery pressure of 18 mmHg, a right atrial pressure of 13 mmHg, a transpulmonary gradient of 6 mmHg and a systemic vascular resistance of 3.5 Wood units. Measurement of the cardiac output was complicated by the patient developing complete heart block with haemodynamic collapse just prior to measurement, and further right heart catheterisation assessment was not pursued. The calculated cardiac output was 1.3 L/min, with a cardiac index of 0.9 L/min/m². The high-degree atrioventricular block did not recur and did not require further treatment. Serum lactate measured at the time of right heart catheterisation was elevated at 4.0 mmol/L, and repeat levels subsequently fluctuated between 2 and 4 mmol/L (normal range <2 mmol/L).
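For clarity, the haemodynamic indices quoted above follow the standard right-heart-catheterisation formulae sketched below. This is an illustrative sketch only; the input values in the snippet are hypothetical placeholders, not the patient's measurements.

```python
# Standard right-heart-catheterisation derivations (illustrative sketch;
# all input values below are hypothetical, not the patient's data).

def cardiac_index(cardiac_output_l_min, bsa_m2):
    """Cardiac index (L/min/m^2) = cardiac output / body surface area."""
    return cardiac_output_l_min / bsa_m2

def transpulmonary_gradient(mpap_mmhg, pcwp_mmhg):
    """TPG (mmHg) = mean pulmonary artery pressure - wedge pressure."""
    return mpap_mmhg - pcwp_mmhg

def resistance_wood_units(pressure_gradient_mmhg, co_l_min):
    """Vascular resistance (Wood units) = pressure gradient / cardiac output."""
    return pressure_gradient_mmhg / co_l_min

co, bsa = 4.8, 1.8        # L/min and m^2 (hypothetical)
mpap, pcwp = 20.0, 12.0   # mmHg (hypothetical)
tpg = transpulmonary_gradient(mpap, pcwp)
print(f"CI  = {cardiac_index(co, bsa):.2f} L/min/m^2")
print(f"TPG = {tpg:.1f} mmHg")
print(f"PVR = {resistance_wood_units(tpg, co):.2f} Wood units")
```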
Troponin-T was 1380 ng/L at day 10 of admission, N-terminal pro-brain natriuretic peptide was 3160 ng/L (normal range <200 ng/L) and creatine kinase was normal. Serum creatinine was 90 µmol/L (normal range <100 µmol/L), and there were no concomitant fluctuations in renal function throughout the admission.
Given the leading differential of inflammatory myocarditis, and the lack of a viral prodrome or raised inflammatory markers to suggest a viral or bacterial aetiology, two courses of 1500 mg pulsed methylprednisolone were administered in divided doses over days 3-5 and days 10-11. Despite this, a progressive rise in troponin-T was observed, to 1380 ng/L by day 10 and 1800 ng/L by day 14 (Figure 2). A technetium-99m pyrophosphate scan was negative for transthyretin cardiac amyloidosis, a combined positron emission tomography and computed tomography scan excluded cardiac sarcoidosis, and serum and urine protein electrophoresis excluded light chain amyloidosis. An endomyocardial biopsy showed non-specific myopathic changes of variable myocyte atrophy, hypertrophy and minimal interstitial fibrosis.
In summary, the patient manifested troponin elevations with active myocardial injury and moderate LV functional impairment without a clear cause, and treatment with high-dose pulsed steroids for presumed myocarditis did not suppress myocardial injury. Throughout the admission, the patient remained hemodynamically stable with clinically mild heart failure.
The patient was noted to have reduced mobility, with a wide-based gait and reduced proximal limb-girdle strength (Medical Research Council grade 4/5), but no other features of frailty or sensory neurological findings. Given the constellation of unexplained cardiomyopathy and proximal limb weakness, consideration was then given to a generalised myopathic process, and neurological assessment was undertaken. Electromyography supported a myopathic process without the characteristic features of myotonia, and no neuropathy was found on nerve conduction studies. Magnetic resonance imaging (MRI) of the brain demonstrated marked periventricular, deep white matter, subcortical and supratentorial T2/FLAIR hyperintensities. Basal ganglia calcification and global cerebral, brain stem and cerebellar volume loss were noted. MRI of the thighs showed bilateral symmetric muscle oedema and diffuse wasting. Genetic testing was negative for myotonic dystrophy types 1 and 2.
Given that the patient was of Chilean descent, the possibility of a Chagas cardiomyopathy due to Trypanosoma cruzi was considered. Chagas cardiomyopathy is characterised by segmental wall motion abnormalities, and CMR abnormalities typically involve the apical and inferolateral walls, with a predilection for apical aneurysm formation and thrombus, probably secondary to microvascular disturbance and chronic myocarditis. 2,3 In contrast to these findings, our patient had relative apical sparing in terms of wall motion, and fibrosis involved the basolateral wall and basal septum. Ultimately, negative T. cruzi serology excluded Chagas cardiomyopathy.
The constellation of cardiomyopathy, skeletal myopathy, extensive white matter changes, basal ganglia calcification, diabetes, sensorineural hearing impairment and elevated serum lactate prompted consideration of a mitochondrial cytopathy. With autoimmune serological testing positive for anti-polymyositis and scleroderma proteins, the differential diagnosis included autoimmune skeletal and cardiac myositis. A left quadriceps muscle biopsy was stained with a panel of routine histochemical and immunohistological stains. The muscle contained numerous angular atrophic esterase-positive (denervated) myofibers and a few scattered COX-deficient myofibers with a disorganised mitochondrial pattern, but fewer than 1% COX-negative myofibers overall (Figure 3). These mitochondrial changes are within normal limits for this age. 4 Of note, muscle histopathology can be normal in genetically proven mitochondrial cytopathies, which should therefore not be excluded on the basis of a negative striated muscle biopsy alone, particularly if denervated myofibers are present. 5,6 The large number of denervated myofibers suggests that this mitochondrial mutation caused an intramuscular neuropathy rather than a clinical striated muscle myopathy.
Genetic testing of 125 genes associated with myopathy, including 88 nuclear genes and 37 mitochondrial genes, demonstrated a pathogenic mutation in the mitochondrial genome, MT-TL1 m.3243A>G, at a heteroplasmic level of 18% in buccal cells and 64% mutational load in the striated muscle cells, confirming the diagnosis of a mitochondrial cytopathy.
Discussion
Mitochondrial disorders are a group of genetic conditions that occur secondary to a mutation affecting mitochondrial respiratory chain function. 1,5,6 Depending on the locus of the pathogenic mutation, they can exhibit either Mendelian inheritance, when the nuclear genome is implicated, or mitochondrial inheritance, when the mitochondrial genome is implicated. In mitochondrial inheritance, disorders are passed to offspring from affected mothers only, via egg cells that carry mutant mitochondria. Mitochondrial heteroplasmy is due to the random segregation of mtDNA at cell division; varying mutational loads can be present in different cells and tissues, generating varying severity of end-organ dysfunction even within the same individual. 1 When the level of mutant mtDNA exceeds a threshold for a particular tissue, cell dysfunction ensues and symptoms manifest. 1 Consistent with this, our patient exhibited a 64% heteroplasmic level of mutant mtDNA in her muscle biopsy sample, accounting for her neuromuscular symptoms.
Clinical presentations and symptom severity can vary depending on the mutant mtDNA heteroplasmic level within each tissue or organ and between different family members, ranging from asymptomatic carriage, to oligo-system manifestations, to severe multi-system disease involving the endocrine, musculoskeletal, neurological and cardiovascular systems. 6 Our patient carried the most common mutation associated with mitochondrial disease, MT-TL1 m.3243A>G, which manifests a wide range of clinical phenotypes, including mitochondrial myopathy, encephalopathy, lactic acidosis and stroke-like episodes (MELAS), maternally inherited diabetes and deafness, as well as (less commonly) myoclonic epilepsy with ragged red fibres, Leigh syndrome and Kearns-Sayre syndrome. 6 Cardiac manifestations of mitochondrial cytopathies include both structural and arrhythmogenic abnormalities as well as tendencies towards atherosclerosis. 7 Abnormal ECGs and echocardiograms are found in up to 35% and 30% of patients, respectively, occurring more commonly in patients with MELAS than in other conditions. 7 LV hypertrophy is the most common abnormality, being present in up to 50% of patients with MELAS. 7 When LV systolic impairment occurs, it typically presents with diffuse rather than focal abnormalities. 8 Elevated troponin levels in mitochondrial disorders are not consistently described in the literature, being present in between 1% and 13% of patients recruited for cardiac characterisation. [8][9][10] Cardiac MRI abnormalities are found in up to 50% of patients with mitochondrial cytopathy, with non-ischaemic LGE being most common. 11 Among these, patients with MELAS tend towards concentric hypertrophy relative to other mitochondrial diseases, with more diffuse, at times patchy, LGE rather than a predilection for specific focal areas. 11 It is postulated that this represents replacement fibrosis secondary to dysfunction of the respiratory chain due to inherited mitochondrial abnormalities. 11 CMR may have advantages over standard echocardiography in patients with mitochondrial cytopathies by identifying these areas of fibrosis, allowing earlier identification of patients at risk of cardiomyopathy or malignant arrhythmia.
The striking troponin elevation seen in our patient has, to our knowledge, not been described previously in the literature in people with MT-TL1 m.3243A>G, while the echocardiographic and CMR findings are in keeping with those described for this mutation.
Follow-up
In view of the diagnosis of a mitochondrial cytopathy, the patient's metformin was ceased, and she was commenced on Coenzyme Q10 supplementation alongside heart failure therapies, including the sodium-glucose cotransporter 2 inhibitor empagliflozin. Her mobility and heart failure symptoms have improved, and her cardiac function remains stable on echocardiography. Given the unchanged echocardiographic findings, the symptomatic improvement may be secondary to up-titrated heart failure therapies, more consistent preservation of euvolaemia, or an improved substrate for mitochondrial function with supplementation, which has been demonstrated to improve exercise capacity. 12 The interval improvement in symptoms between this presentation and that five years previously may be attributed to the commencement of heart failure therapies; this initial improvement may have masked ongoing subclinical disease. Further troponin-T evaluation was not performed after the initial presentation due to unclear clinical utility.
The patient has two adult children, both of whom would be at risk of having inherited the MT-TL1 m.3243A>G mutation from her with variable degrees of heteroplasmy. 6 Predictive genetic testing was taken up by her 31-year-old daughter, who was pregnant at the time with her first child, to assist with her reproductive decision making. The daughter's testing showed a 0% mutational load in her urine sample, and she proceeded with her pregnancy (without prenatal testing) to successfully deliver a healthy baby.
Conclusion
The recurrent elevations of troponin seen in our patient have, to our knowledge, not been previously described in patients with MT-TL1 m.3243A>G mutations. A poor response to steroid therapy and an unexplained cardiomyopathy, together with systemic extra-cardiac manifestations, should prompt consideration of a mitochondrial cytopathy as the underlying diagnosis. Confirming a genetic mitochondrial diagnosis has important management and reproductive implications for patients and their families.
Lead author biography
Anish is a physician trainee at St Vincent's Hospital in Sydney, Australia, with a strong interest in multimodal cardiac imaging and heart failure.
Supplementary material
Supplementary material is available at European Heart Journal -Case Reports.
Slide sets: A fully edited slide set detailing this case and suitable for local presentation is available online as Supplementary data.
Consent:
The authors confirm that consent has been obtained prior to submission of the article enclosed in accordance with COPE guidelines. | 2023-04-19T15:03:15.135Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "b6477d355b06702e42273a016fccf62a07e1fdbb",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ehjcr/advance-article-pdf/doi/10.1093/ehjcr/ytad132/49939298/ytad132.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "855a7cec7c013e910428a641cca62cabcf575793",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
258848002 | pes2o/s2orc | v3-fos-license | Higher Expression Levels of SSX1 and SSX2 in Patients with Colon Cancer: Regulated In Vitro by the Inhibition of Methylation and Histone Deacetylation
Background and Objectives: Colon cancer (CC) has a high mortality rate and is often diagnosed at an advanced stage in Saudi Arabia. Thus, the identification and characterization of potential new cancer-specific biomarkers is imperative for improving the diagnosis of CC by detecting it at an early stage. Cancer-testis (CT) genes, among them those belonging to the SSX family, have been identified as potential biomarkers for the early diagnosis of various cancers. In order to assess the usefulness of SSX family genes as biomarkers for the detection of early-stage CC, the goal of this research was to validate the expression of these genes in CC tissues and matched normal colon (NC) tissues. Materials and Methods: RT-PCR assays were used to analyze SSX1, SSX2 and SSX3 expression levels in 30 pairs of CC and adjacent NC tissue samples from male Saudi patients. Epigenetic regulation was also tested in vitro using qRT-PCR analysis to determine whether inhibition of DNA methyltransferase or histone deacetylase activity could stimulate SSX gene expression, via 5-aza-2′-deoxycytidine and trichostatin A treatments, respectively. Results: The RT-PCR results showed SSX1 and SSX2 expression in 10% and 20% of the CC tissue specimens, respectively, but not in any of the NC tissue specimens. No SSX3 expression was detected in any of the examined CC or NC tissue samples. In addition, the qRT-PCR results showed significantly higher SSX1 and SSX2 expression levels in the CC tissue samples than in the NC tissue samples. The 5-aza-2′-deoxycytidine and trichostatin A treatments significantly induced the mRNA expression of the SSX1, SSX2 and SSX3 genes in CC cells in vitro. Conclusions: These findings suggest that SSX1 and SSX2 are potentially suitable candidate biomarkers for CC. Their expression can be regulated via hypomethylating and histone deacetylase-inhibiting treatments, providing a potential therapeutic target for CC.
Introduction
Colon cancer (CC) is the third and fourth most common cause of cancer-related death worldwide among males and females, respectively [1]. In Saudi Arabia, it is a leading cause of mortality in both sexes and ranks as the first and third most frequently diagnosed malignancy in men and women, respectively [2]. Furthermore, the prevalence of CC is high among Saudi men and women between the ages of 55 and 58 years [3]. However, a recent study indicated that CC has become more prevalent among younger age groups as well. Among the SSX family members, SSX2 was examined because of previous reports of its expression in CC tissues or cell lines [16,17] and its association with other cancers [8,20], while SSX1 and SSX3 were randomly selected from the CTA database (http://www.cta.lncc.br/index.php, accessed 1 October 2022). In order to examine the specificity of possible CC biomarkers, we used RT-PCR assays to analyze the mRNA expression of the SSX genes in CC tissues and matched normal colon (NC) tissues, as well as in breast cancer and leukemia samples.
Ethical Approval and Sample Collection
The institutional review board (approval No. HAPO-01-R-011; project No. 56-2020) of Al-Imam Muhammad Ibn Saud Islamic University approved this research. Participants were recruited from King Khalid University Hospital in Riyadh, Saudi Arabia. None of the participants included in this study had received any treatment, including chemotherapy and/or physiotherapy. Clinical examination, endoscopy, imaging and histological study are the standard methods for diagnosing adenocarcinoma and were used to diagnose and monitor the patients in this study. All participants agreed to participate and signed a written informed consent form, and they were provided with a privacy statement describing the protection of their personal data. Moreover, all participants were asked to fill out a self-completed questionnaire covering age, family history, personal medical history, allergy symptoms or diseases, and social behaviors such as cigarette smoking and alcohol consumption.
A total of 35 matched pairs of CC and NC tissue samples, each pair from the same patient, were collected in this study, comprising 30 pairs from male and 5 pairs from female Saudi patients with CC. Moreover, 15 samples were taken from female Saudi patients with breast cancer (BC), and 12 samples were taken from male Saudi patients with chronic lymphocytic leukemia (CLL) and compared with 12 normal blood (NB) samples from healthy Saudi men. Fresh CC samples, along with the matched NC tissues and the BC samples, were collected in sterile tubes containing RNAlater stabilization solution (76106; Thermo Fisher Scientific, Foster City, CA, USA) to preserve and stabilize the RNA, whereas the CLL and NB samples were collected into blood RNA tubes (4342792; Applied Biosystems, Waltham, MA, USA). All tubes were then kept overnight at 4 °C and transferred to a −80 °C freezer until use.
Sources and Cultures of Human CC Cell Lines and Their Treatments with Epigenetic Drugs (5-aza-CdR or TSA)
In this study, we used the HCT116 and Caco-2 human CC cell lines, obtained from the Genome Research Chair (King Saud University, Riyadh, Saudi Arabia). Both cell types were grown in DMEM (61965026; Thermo Fisher Scientific) supplemented with 10% fetal bovine serum (A3160801; Thermo Fisher Scientific) in a humidified 37 °C incubator with 5% CO2.
Dimethyl sulfoxide (DMSO; D8418; Sigma, Hilden, Germany) was used to dissolve and dilute 5-aza-2′-CdR (A3656; Sigma) or TSA (T1952; Sigma) to the final concentrations required in this study. Each cell line, HCT116 or Caco-2, was subcultured into four sets: the first set was treated with 10 µM 5-aza-CdR for 72 h; the second with DMSO for 72 h (as a negative control for 5-aza-CdR); the third with 100 nM TSA for 48 h; and the fourth with DMSO for 48 h (as a negative control for TSA). The medium containing 5-aza-CdR, TSA or DMSO was changed every 24 h. The treatment times and concentrations were chosen on the basis of the results of our recent publication [7].
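As a quick illustration of the dilution step described above, the sketch below applies the standard C1V1 = C2V2 relation. The stock concentration (10 mM in DMSO) and culture volume (10 mL per flask) are assumptions for illustration only; the text specifies only the final concentrations.

```python
# C1*V1 = C2*V2 dilution sketch for the 5-aza-CdR and TSA treatments.
# Assumed: 10 mM DMSO stocks and 10 mL of culture medium per flask
# (the text states only the final concentrations).

def stock_volume_ul(stock_um, final_um, final_volume_ml):
    """Microlitres of stock to add so that stock_um * V1 = final_um * V2."""
    return final_um * (final_volume_ml * 1000.0) / stock_um

print(stock_volume_ul(10_000, 10.0, 10.0))  # 5-aza-CdR, 10 uM  -> 10.0 uL
print(stock_volume_ul(10_000, 0.1, 10.0))   # TSA, 100 nM       -> 0.1 uL
# 0.1 uL is impractically small to pipette, so a more dilute working
# stock would typically be prepared first.
```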
RNA Isolation from NC, CC, BC, CLL, NB, and Cultured Cells
In accordance with the recommendations of the manufacturer of the AllPrep DNA/RNA Mini Kit (80204; Qiagen, Hilden, Germany), approximately 30 mg of each CC, NC and BC sample was processed separately in clean Eppendorf tubes to isolate and purify total RNA. Total RNA was also obtained from around 5 × 10⁶ cultured cells using the same kit, following the manufacturer-recommended protocol. For the NB and CLL samples, the QIAamp RNA Blood Mini Kit (52304; Qiagen) was used to isolate and purify total RNA from 1.5 mL of whole blood in accordance with the manufacturer's recommendations. The concentrations of the extracted RNA were determined using methods described in our previous studies [7,9].
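The cited quantification method [7,9] is not reproduced in the text; the sketch below assumes standard A260-based spectrophotometry, in which one absorbance unit at 260 nm corresponds to roughly 40 ng/µL of RNA and an A260/A280 ratio near 2.0 indicates protein-free RNA.

```python
# Spectrophotometric RNA quantification sketch (assumes the standard
# conversion factor of 40 ng/uL per A260 unit for RNA).

def rna_conc_ng_per_ul(a260, dilution_factor=1.0):
    return a260 * 40.0 * dilution_factor

def purity_ratio(a260, a280):
    """A260/A280 close to 2.0 suggests RNA free of protein contamination."""
    return a260 / a280

print(rna_conc_ng_per_ul(0.25, dilution_factor=10))  # 100.0 ng/uL
print(purity_ratio(0.25, 0.125))                     # 2.0
```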
Synthesis of cDNA
A High-Capacity cDNA Reverse Transcription Kit (4368814; Applied Biosystems, Waltham, MA, USA) was used to convert 2000 ng of RNA from each sample into complementary DNA (cDNA) in accordance with the manufacturer's instructions. The cDNA was then diluted 1:10 and stored at −20 °C.
Design of RT-PCR Primers, RT-PCR Conditions, and Agarose Gel Electrophoresis of RT-PCR Products
All RT-PCR primers were designed using previously described manual and software-based methods [7,9], and all primers used in this study were supplied by Macrogen Inc. (Seoul, South Korea). Nuclease-free water (129115; Qiagen) was used to dilute the primers to a final concentration of 10 µM (10 pmol/µL). Table 1 lists the primer sequences and the expected sizes of the RT-PCR products generated from them. To compare the quality of the normal, cancer, treated and untreated cDNA samples, we amplified the housekeeping gene ACTB as a positive control. Furthermore, the effectiveness of the primer set for each gene was verified using cDNA from human testis total RNA (AM7972; Thermo Fisher Scientific). For each RT-PCR reaction, 20 µL of reaction mixture was prepared in a clean PCR tube, containing 10 µL of BioMix Red (BIO-25006; BioLine, London, UK), 8.4 µL of nuclease-free water, 0.8 µL of diluted cDNA (200 ng/µL) and 0.8 µL of the combined forward and reverse primers (10 µM) for each gene. The cycling parameters were as follows: 5 min at 96 °C (one cycle); 30 s at 96 °C, 30 s at 58 °C and 30 s at 72 °C (35 cycles); and a final incubation of 5 min at 72 °C (one cycle).
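The primer sequences themselves are given in Table 1 and are not reproduced here. As a minimal sketch of the kind of manual check used during primer design, the snippet below estimates GC content and a rough melting temperature with the Wallace rule (Tm ≈ 2·(A+T) + 4·(G+C)), applied to a hypothetical 20-mer.

```python
# Rough primer sanity check: GC fraction and Wallace-rule Tm estimate.
# The primer sequence below is hypothetical (the study's sequences are
# listed in Table 1, which is not reproduced here).

def gc_fraction(seq):
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

def wallace_tm(seq):
    """Tm ~= 2*(A+T) + 4*(G+C); a crude estimate valid for short primers."""
    s = seq.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

primer = "ATGGACTCAGGCACCTTCAA"  # hypothetical 20-mer
print(f"GC = {gc_fraction(primer):.0%}, Tm ~ {wallace_tm(primer)} C")
```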
For gel electrophoresis, 8 µL of each PCR product was separated on a 1.5% agarose gel (A9539; Sigma-Aldrich, St. Louis, MO, USA) containing ethidium bromide (46067; Sigma) in 1× TBE buffer at 100 V for 1 h. In addition, 3 µL of a 100-bp DNA marker (N0467; New England BioLabs, London, UK) was loaded onto the gels to confirm the sizes of the PCR products.
Design of qRT-PCR Primers and qRT-PCR Setups
Each set of qRT-PCR primers was manually designed using the optimal criteria described in previous studies [7,9], and all primers were commercially synthesized by Macrogen. Stock primers were diluted with nuclease-free water to a final concentration of 10 µM. The sequences of the qRT-PCR primers and their expected amplicon sizes are displayed in Table 2. Reactions were prepared in a 96-well plate in accordance with the iTaq Universal SYBR Green Supermix (1725120; Bio-Rad, Hercules, CA, USA) instructions. To obtain a total of 10 µL per reaction, 5 µL of SYBR Green supermix, 2 µL of diluted cDNA (200 ng/µL), 0.5 µL of the combined forward and reverse primers (10 µM) and 2.5 µL of nuclease-free water were added to each well. Each sample was run in duplicate on the QuantStudio 7 Flex Real-Time PCR System. The qRT-PCR cycling conditions were as follows: initial denaturation at 95 °C for 30 s, followed by 40 cycles of 95 °C for 30 s, 58 °C for 30 s and 72 °C for 30 s. A melting curve analysis was performed upon completion of the 40 cycles. The GAPDH housekeeping gene was used to normalize the qRT-PCR results.
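The per-well volumes above sum to 10 µL if the 0.5 µL of primers is read as the combined forward-plus-reverse volume. Under that assumption, a minimal scaling sketch for preparing the plate (samples in duplicate, with a pipetting overage) could look as follows; the 10% overage is a common convention, not something the text specifies.

```python
# Master-mix scaling sketch for the 10-uL qRT-PCR reactions described
# above. Assumption: the 0.5 uL primer volume is forward plus reverse
# combined, so the per-well volumes sum to exactly 10 uL. In practice,
# cDNA is usually dispensed into each well separately rather than pooled
# into the master mix.

PER_WELL_UL = {
    "SYBR Green supermix": 5.0,
    "diluted cDNA":        2.0,
    "primers (F+R)":       0.5,
    "nuclease-free water": 2.5,
}

def scaled_volumes(n_samples, replicates=2, overage=0.10):
    """Scale per-well volumes for samples run in duplicate, plus overage."""
    factor = n_samples * replicates * (1.0 + overage)
    return {reagent: round(v * factor, 1) for reagent, v in PER_WELL_UL.items()}

print(sum(PER_WELL_UL.values()))    # 10.0 uL per well
print(scaled_volumes(n_samples=9))  # volumes for 9 samples in duplicate
```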
Statistical Analysis
Significant differences between the two categories (before and after 5-aza-CdR or TSA treatment) for each gene were analyzed using SPSS software (ver. 22; SPSS Inc., Chicago, IL, USA). p values were regarded as statistically significant as follows: * p ≤ 0.05, ** p ≤ 0.01 and *** p ≤ 0.001.
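As an illustration of this before/after comparison outside SPSS, the sketch below performs a paired t-test in SciPy and applies the same star thresholds. The paired test is a reasonable assumption (the text does not name the exact test run in SPSS), and the expression values are hypothetical placeholders.

```python
# Before/after treatment comparison sketch (hypothetical values).
from scipy import stats

before = [1.00, 0.95, 1.10, 1.02]  # relative expression, untreated
after  = [3.40, 2.90, 3.80, 3.10]  # relative expression, 5-aza-CdR treated

t_stat, p = stats.ttest_rel(before, after)
for alpha, stars in [(0.001, "***"), (0.01, "**"), (0.05, "*")]:
    if p <= alpha:
        print(f"p = {p:.4g} {stars}")
        break
else:
    print(f"p = {p:.4g} (not significant)")
```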
In Silico Analysis
Using GeneMANIA tools (University of Toronto, Toronto, ON, Canada), a gene-gene interaction network of the SSX genes and their functional associations was created for network analysis of shared genes and the prediction of related genes [21].
The Cancer Genome Atlas (TCGA) Database Analysis
Using the TCGA database, SSX1, SSX2 and SSX3 expression levels were examined in colon adenocarcinoma (COAD) tissue samples and compared with their expression levels in NC tissue samples. The expression patterns of the SSX1, SSX2 and SSX3 genes were validated in the COAD and NC tissue samples from the TCGA using datasets provided in OncoDB, which are drawn primarily from the TCGA and include RNA-seq and clinical data from more than 9000 patients with cancer. For these analyses, RNA-seq data were obtained from the backend database and separated into two groups, COAD and NC tissue samples. Whether a gene was upregulated or downregulated in the tumor samples was determined by calculating the log2 fold change between the two groups, and the Student t-test was used for the differential expression analysis. p values ≤ 0.05 were regarded as statistically significant.
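A minimal sketch of this tumor-versus-normal screen is shown below: a log2 fold change between group means plus a Student t-test, mirroring the analysis described. The expression arrays are hypothetical placeholders standing in for TCGA COAD RNA-seq values, and the pseudocount is an assumption added to avoid division by zero.

```python
# Differential expression sketch: log2 fold change + Student t-test.
import numpy as np
from scipy import stats

def log2_fold_change(tumor, normal, pseudocount=1.0):
    """log2 of (mean tumor expression / mean normal expression)."""
    return float(np.log2((np.mean(tumor) + pseudocount) /
                         (np.mean(normal) + pseudocount)))

tumor  = np.array([12.0, 9.5, 15.2, 11.1, 13.4])  # hypothetical COAD values
normal = np.array([2.1, 1.8, 2.6, 2.0, 2.3])      # hypothetical NC values

lfc = log2_fold_change(tumor, normal)
t_stat, p = stats.ttest_ind(tumor, normal)
print(f"log2FC = {lfc:.2f} ({'up' if lfc > 0 else 'down'}regulated), p = {p:.3g}")
```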
Clinical Parameters of the Study Participants
As CC is more difficult to treat at later stages, late diagnosis is one of the most important causes of increased mortality in Saudi Arabia. Therefore, examining SSX family gene expression (i.e., candidate cancer biomarkers) in a large number of patients with CC should provide insights that will aid the early diagnosis of malignancy and thus increase the likelihood of successful therapy. Table 3 displays the participants' baseline clinical characteristics. A total of 74 participants were evaluated, including 35 patients with CC (providing matched NC tissues), 15 with BC, 12 with leukemia and 12 with NB. The mean ages of the patients with CC and BC were 61 years (range, 24-96 years) and 52 years (range, 32-74 years), respectively. The mean ages of the patients with leukemia and the NB controls were 49 years (range, 39-64 years) and 43 years (range, 33-52 years), respectively. Forty-three percent of the patients with CC were younger than 61 years, while 57% were older. Overall, 60% of those with BC, 50% of those with leukemia and 67% of those with NB were younger than 52, 49 and 43 years, respectively, whereas 40%, 50% and 33% were older. The other clinical parameters of the participants are listed in Table 3.
Expression Profiles of the SSX1, SSX2, and SSX3 Genes in the Matched CC and NC Tissues from the Male and Female Patients
The mRNA expression levels of the SSX family members were analyzed by first identifying the primers and annealing temperatures that would result in specific product amplification for each member of the SSX family. In the male patients, the mRNA levels of the SSX1, SSX2, and SSX3 genes were first validated using RT-PCR analysis with various RNAs isolated from 30 human NC tissue samples from Saudi men for the evaluation of testis specificity (Figure 1). The primer of each gene was verified by testing it on cDNA extracted from human testis RNA. The integrity of the cDNAs from the NC and CC samples was validated on the basis of ACTB gene expression. By using RT-PCR analysis, SSX1 and SSX2 were found to be expressed in 10% and 20% of the CC tissue samples, respectively (Figure 2), but were not detected in any of the NC tissue specimens (Figure 1). However, no detectable SSX3 expression was found in any of the examined CC (Figure 2) or NC tissue samples (Figure 1). For further analysis, the RT-PCR-positive samples were tested using qRT-PCR for SSX1 in three CC samples and SSX2 in six CC samples, compared with their matched normal tissues. The remaining CC and NC samples were not analyzed with qRT-PCR because of the absence of detectable SSX1 and SSX2 expression on agarose gels (Figure 3). The expression level of each gene was validated in the NC and CC tissues from the same sample. The expression level of each gene in the NC tissues was normalized to GAPDH and compared with that in the corresponding CC tissues. Figure 3 presents the qRT-PCR results, demonstrating significantly higher SSX1 and SSX2 expression levels in the CC tissues than in the NC tissues. Thus, the RT-PCR findings matched the qRT-PCR results. The expression level of SSX3 was not analyzed using qRT-PCR because of the absence of detectable expression on the agarose gel in the NC and CC tissues. Moreover, in this study, SSX1 and SSX2 gene expressions were considered positive when a band was found in the NC and CC tissue samples. However, only SSX2 showed statistically significant positive expression in the CC tissue samples relative to the NC tissue samples (p = 0.009; Table 4).
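The GAPDH normalization and tumor-vs-normal comparison described here are consistent with the Livak 2^(-ΔΔCt) method, although the text does not name the exact formula, so the sketch below is an assumption; the Ct values in the example are illustrative only.

```python
def relative_expression(ct_gene_cc: float, ct_gapdh_cc: float,
                        ct_gene_nc: float, ct_gapdh_nc: float) -> float:
    """Fold change of a gene in CC vs. matched NC tissue (2^-ddCt)."""
    d_ct_cc = ct_gene_cc - ct_gapdh_cc   # normalize tumor Ct to GAPDH
    d_ct_nc = ct_gene_nc - ct_gapdh_nc   # normalize normal Ct to GAPDH
    dd_ct = d_ct_cc - d_ct_nc
    return 2.0 ** (-dd_ct)

# A ddCt of -3 corresponds to an 8-fold higher level in the CC tissue:
print(relative_expression(24.0, 18.0, 30.0, 21.0))  # -> 8.0
```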
In the female patients, the mRNA levels of the SSX1, SSX2, and SSX3 genes were validated using a panel of RNAs obtained from the NC tissue samples from five female Saudi patients to determine the testis specificity of the mRNAs. No detectable expressions of the SSX1, SSX2, and SSX3 genes were found in any of the examined NC or CC tissue samples from the female patients (Figure 4).
Figure 4. The agarose gel images display the RT-PCR analysis results for SSX1, SSX2, and SSX3. The cDNAs were synthesized from the total RNA from five NC and CC tissue samples. The cDNA samples were run with ACTB expression as the positive control, and as predicted, a band of 553 bp was obtained. Each set of primers for a given gene was examined using human testis cDNA. The official names and expected product sizes of the individual genes are presented to the left of the agarose gel images. Abbreviations: NC: normal colon; CC: colon cancer; bp: base pair.
Screening of the SSX Genes in CLL and BC Tissue Samples
This screening was conducted to determine the specificity of SSX1, SSX2, and SSX3 in additional cancer tissue samples, including leukemia in males and BC in females. The RT-PCR screening results showed that none of the SSX1, SSX2, and SSX3 genes were expressed in any of the CLL tissue samples (Figure 5, right) when compared to the NB samples (Figure 5, left) or the BC tissue samples (Figure 6).
Effects of 5-aza-2′-CdR and TSA on SSX Gene Expressions in CC Cell Lines
Hypomethylating agents, such as 5-aza-2′-CdR, or histone deacetylase inhibitors, such as TSA, can increase the expression levels of multiple CT genes [7,11]. Most of these genes are X-CT genes, the silencing of which requires the hypermethylation of DNA sequences. Therefore, we questioned whether the expressions of SSX1, SSX2, and SSX3 could be regulated via treatment with 5-aza-2′-CdR or TSA agents and whether the expressions of some SSX genes in CC tissue samples might be affected by altered methylation and histone deacetylation mechanisms. We found no change in the morphology of the tumor cells treated with the 5-aza-2′-CdR or TSA agents. The mRNA level of each gene was measured in cells treated with 5-aza-CdR or TSA as compared to the cells treated with DMSO. DMSO was used as the solvent for both treatment drugs; therefore, 10 µL of DMSO was added to the cells in both groups as a control to determine its effects on gene expression.
In order to examine whether reduced DNA methyltransferase activity can activate SSX1, SSX2, and SSX3 gene expression, the HCT116 and Caco-2 cell lines were treated with 10 µM 5-aza-2′-CdR for 72 h. Then, the cDNA was synthesized, and qRT-PCR was performed, as described in Sections 2.4 and 2.6.
The qRT-PCR results for the HCT116 cells indicated that the mRNA expression levels of the SSX1, SSX2, and SSX3 genes were significantly induced in the cells treated with 5-aza-CdR when compared with those treated with DMSO (p < 0.0001; p < 0.0001; and p = 0.0005, respectively: Figure 7). In addition, the mRNA expressions of the SSX1 and SSX2 genes were more activated than those of the SSX3 gene. The qRT-PCR results showed that SSX2 and SSX3 expression was significantly induced in the Caco-2 cells treated with 5-aza-CdR (p = 0.0002 and p < 0.0001, respectively: Figure 7). However, the SSX1 gene did not exhibit statistically significant changes in the Caco-2 cells treated with 5-aza-CdR when compared to those treated with DMSO, as shown in Figure 7.
Figure 7. The bar graphs show the SSX1, SSX2, and SSX3 expression levels in the HCT116 and Caco-2 cells before and after treatment with 5-aza-2′-CdR. DMSO was also utilized as a solvent for the 5-aza-2′-CdR solution and was applied to the control HCT116 and Caco-2 cells. GAPDH mRNA was used as a reference to normalize the gene expression levels. The standard error of the mean for three independent experiments is represented by the error bars. *** p ≤ 0.001; **** p ≤ 0.0001. Abbreviations: qRT-PCR: quantitative reverse transcription polymerase chain reaction; ns: not significant.
Next, the significance of histone deacetylation in the repression of SSX family genes was investigated by treating HCT116 and Caco-2 cells with 100 nM of TSA for 48 h. The mRNA expressions of the SSX1, SSX2, and SSX3 genes in the HCT116 cells significantly increased when treated with the TSA agent (p = 0.0010; p = 0.0002; p = 0.0006, respectively: Figure 8). The qRT-PCR results showed that SSX2 and SSX3 gene expression significantly increased in the Caco-2 cells treated with TSA (p < 0.0001 and p = 0.0001, respectively: Figure 8). However, the SSX1 gene did not exhibit statistically significant changes in the Caco-2 cells treated with TSA when compared with those treated with DMSO, as shown in Figure 8.
Figure 8. The bar graphs show the SSX1, SSX2, and SSX3 expression levels in the HCT116 and Caco-2 cells before and after treatment with TSA. DMSO was also utilized as a solvent for the TSA solution and was applied to the control HCT116 and Caco-2 cells. GAPDH mRNA was used as a reference to normalize the gene expression levels. The standard error of the mean for three independent experiments is represented by the error bars. *** p ≤ 0.001; **** p ≤ 0.0001. Abbreviations: qRT-PCR: quantitative reverse transcription polymerase chain reaction; ns: not significant.
Gene-Gene Interaction Network
The default setting of GeneMANIA was used to build a gene-gene interaction network for analyzing the SSX gene functions. The core node represented the SSX gene members that were surrounded by 10 nodes, defining the other genes that were strongly connected to the SSX genes in terms of both co-expression and physical interactions (top of Figure 9). The SSX1, SSX2, and SSX3 genes were highlighted as being co-expressed with 10 other genes in the following ranking: SSX2IP, RAB3IP, SSX2B, LHX4, SSX7, SSX5, MAGEA12, MAGEA6, MAGEA1, and KDM2B (colored purple in the bottom left of Figure 9). However, the GeneMANIA program revealed that the interconnected network of the SSX2IP, RAB3IP, SSX2B, LHX4, SSX7, SSX5, and KDM2B genes had real physical interactions (colored pink in the bottom right of Figure 9). Furthermore, the analysis revealed that the co-expressions and physical interactions accounted for 18.53% and 81.47%, respectively.
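For readers who wish to reproduce or extend this analysis offline, the network in Figure 9 can be approximated with networkx. The sketch below is purely illustrative: the node lists are transcribed from the text, but the exact edge topology reported by GeneMANIA is not given, so linking every SSX core gene to every listed partner is an assumption.

```python
import networkx as nx

CO_EXPRESSED = ["SSX2IP", "RAB3IP", "SSX2B", "LHX4", "SSX7",
                "SSX5", "MAGEA12", "MAGEA6", "MAGEA1", "KDM2B"]
PHYSICAL = ["SSX2IP", "RAB3IP", "SSX2B", "LHX4", "SSX7", "SSX5", "KDM2B"]

G = nx.MultiGraph()
for core in ("SSX1", "SSX2", "SSX3"):
    for gene in CO_EXPRESSED:
        G.add_edge(core, gene, interaction="co-expression")
    for gene in PHYSICAL:                  # subset with physical evidence
        G.add_edge(core, gene, interaction="physical")

n_physical = sum(1 for _, _, d in G.edges(data=True)
                 if d["interaction"] == "physical")
print(G.number_of_nodes(), G.number_of_edges(), n_physical)  # 13 51 21
```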
Figure 9. The gene-gene interaction network for SSX1, SSX2, and SSX3 members using the GeneMANIA database. Circular SSX genes are represented by the center nodes. The top 10 genes most commonly found in close proximity to SSX genes are shown. The lines indicate additional genes, and the edges illustrate their interactions with SSX genes.
Discussion
CT antigens are prospective cancer-specific biomarkers with potential diagnostic, prognostic, or therapeutic uses. The current classification approach for CT genes was developed by Hoffman et al. [8]. On the basis of an in silico pipeline, a subgroup of human meiotic genes was described as being composed of CT genes and presented a highly restricted cancer-specific marker [20,22].
Primers that specifically amplify individual SSX family members were identified in this study. These primers were used to validate SSX1, SSX2, and SSX3 expression in CC by using RT-PCR on fresh tissue samples from 35 patients with CC and the corresponding NC tissues. The primers selected from different exons were designed to avoid false-positive outcomes resulting from contaminated genomic DNA (as shown in Table 1). After validation, the RT-PCR screening identified the SSX2 gene as a potential novel CT-restricted gene, possibly representing an optimal candidate CC biomarker because it was expressed in 20% (p = 0.009) of the CC tissue samples but not in the NC tissues. The activation of CT genes in cancer is likely associated with demethylation or histone deacetylation inhibition [7,11]. The RT-PCR results showed SSX1 gene expression in 10% (p = 0.078) of the CC tissue specimens but not in any of the NC tissue specimens. For the SSX1 and SSX2 genes, the qRT-PCR findings were consistent with the RT-PCR results, demonstrating that these genes are expressed only in CC tissues and not in NC tissues. In order to determine the CC specificity of the SSX1 and SSX2 genes, BC, CLL, and NB tissue samples were examined; however, neither gene was expressed in any of these tissue samples.
The expressions of the SSX1 and SSX2 genes were found in the advanced grades of CC tissue samples, according to the clinical data of the study participants (grades II and III). Consistent with previous reports on SSX family expression in a range of human cancers [9,17,18], SSX gene expression has been correlated with more advanced stages of disease [23-25]. Previous studies have identified the same findings for SSX genes in patients with CC [17]. SSX genes were expressed in 32.4% of CC tissue samples but were not detected in NC tissue samples [17]. A previous study showed that SSX2 was expressed in prostate cell lines, but SSX1 was not expressed in the same prostate cell lines [26]. High SSX1 and SSX2 expression levels were observed in patients with hepatocellular carcinoma, which suggests that they might be used as cancer markers [27,28]. These inconsistent results could be due to differences in the primer sets used or in the physiology of the clinical samples. Our study is the first to validate SSX gene expressions in Saudi patients. Therefore, our results should be confirmed in future large-scale investigations involving different cancer types.
Many genes have been identified as potential inducers of epithelial-to-mesenchymal transition (EMT) in the progression of CC [29,30]. SSX2 expression was significantly higher in CC tissue samples with high disease grades than in NC tissue samples, suggesting that its expression is associated with cancer growth and metastasis. The SSX2 gene's role in the EMT in CC cells has not been investigated; however, the presence of SSX2 in CC suggests it could be a therapeutic target.
Moreover, Niemeyer et al. demonstrated no SSX1 and SSX2 expression in patients with acute lymphatic leukemia. However, each gene was expressed in 29% of patients with acute myeloid leukemia [31]. The differences between our findings and those of the aforementioned studies may be related to the different types of leukemia samples used or the relatively small number of patients examined. Thus, additional larger-scale investigations are needed to confirm our findings.
In contrast, the expression pattern of the SSX3 gene in the NC tissue samples was restricted to the testis, and no indication of RT-PCR expression was found in the CC tissue samples. Nonetheless, owing to the possibility of SSX3 gene expression in other cancer types, the gene was not eliminated from the gene screening. Consequently, an RT-PCR study of this gene was performed on several types of BC, CLL, and NB tissues. This gene was found to be expressed only in the testicular tissue sample and was absent in the BC, CLL, and NB tissue samples. The study results were similar to those of a previous work, which found no evidence of SSX3 expression in multiple human malignancies from several histological origins [16].
The study examined the expression levels of the SSX1, SSX2, and SSX3 genes in NC and colon adenocarcinoma (COAD) tissue samples using RNA sequencing data from the TCGA repository (accessed on 20 February 2023). As demonstrated by the TCGA, the expression levels of SSX1 and SSX2 were higher in the COAD tissue samples than in the NC tissue samples, which is consistent with our findings from the RT-PCR results in this study (Figure 10). This confirms previous research results, demonstrating increased SSX1 and SSX2 expression levels in numerous cancers, including CC [8,27,28,32,33]. On the other hand, the TCGA results demonstrated that the SSX3 expression level was higher in the COAD tissue samples than in the NC tissue samples, despite the fact that both cell types expressed the gene. However, earlier research results demonstrated that SSX3 expression was not found in several cancer types [16]. This outcome is consistent with the RT-PCR findings of the present study. Therefore, additional research is required to identify whether the TCGA results of SSX3 are prevalent in CC tissues and the function of the SSX3 gene in the disease.
The treatment of cancer cells with medications that deregulate DNA methylation has been demonstrated to lead to the activation of CT gene expressions in different types of cancer cells [7,11,34,35]. However, the DNA methylation regulatory mechanisms responsible for CT gene silencing have been found in only a small subset of X-CT genes, and these all are triggered by hypomethylating agents. In addition, another epigenetic mechanism that can regulate CT gene expression is the inhibition of histone deacetylation via HDACi drugs, which leads to an increase in the expression levels of different CT genes [7,11].
Epigenetic control was tested to determine whether reduced histone deacetylation or DNA methyltransferase activity can stimulate the expressions of the SSX1, SSX2, and SSX3 genes. Freshly derived early-passage HCT116 and Caco-2 cell lines were treated with 100 nM of TSA or 10 µM of 5-aza-2′-CdR for 48 or 72 h, respectively. The epigenetic results demonstrated that the expression levels of SSX1, SSX2, and SSX3 were activated with the TSA drug in the HCT116 cells but remained unaffected in the Caco-2 cells at a similar dose, which shows that not all cancer cell types react to the same treatment and may display tissue specificity. In addition, this observation suggests that the regulation mechanisms of SSX1, SSX2, and SSX3 expressions may not involve histone deacetylation in Caco-2 cells. This observation is consistent with previous reports that indicated different expression levels of several CT genes in CC cell lines [7,11].
The greatest induction of SSX gene transcription was detected after DNA methyltransferase inhibition using 5-aza-2′-CdR. This treatment increased the expression levels of SSX1, SSX2, and SSX3 in the HCT116 cells and those of SSX2 and SSX3 in the Caco-2 cells. These findings indicate that DNA hypomethylation is essential for regulating the expressions of SSX1, SSX2, and SSX3. These genes are important to study as potential biomarkers and therapeutic targets, and their mechanistic regulatory pathways may identify categories of CT genes that are co-regulated. These results imply that several mechanisms influence the regulation of SSX genes. Multiple CT genes have been shown to be essential for cancer cell growth. Therefore, inactivating these genes may be advantageous for minimizing the effect of cancer and making other treatment methods more successful by reducing the proliferation-mediated burden of malignancies. DNA methylation and histone modifications have been revealed as key modulators of the EMT program. For example, CDH1 promoter methylation has been identified as an important contributor to EMT and occurs frequently in different human malignancies [36]. In addition, histone modifications are usually reversible and play crucial roles in defining the plasticity of EMT [37].
Evidence shows that 5-aza-2′-CdR can regulate the expression of CTCFL (also known as BORIS), a transcriptional regulator that may be responsible for the regulation of numerous CT genes [38-40]. At this time, it is unknown whether the SSX1, SSX2, and SSX3 gene expression alterations induced by 5-aza-2′-CdR are due to alterations in the methylation of their promoters or in the expressions of other transcription factors, such as CTCFL, which may regulate SSX gene expressions. Future research should focus on elucidating the mechanism behind these changes in gene expression. In addition, from the results of this study, we raise the critical question of why induction was highly detected in the CC cell lines treated with 5-aza-2′-CdR but not in the cells treated with DMSO. The expression level of the primary methylation repair enzyme DNMT1 has been reported to be decreased or degraded by 5-aza-2′-CdR treatment [41,42]. The expression level of the DNMT1 gene decreased in HCT116 and Caco-2 cells treated with 5-aza-2′-CdR when compared to cells treated with DMSO (Figure 11). However, the role of 5-aza-2′-CdR treatment in decreasing the expression levels of other DNMT types should also be examined in future investigations.
Figure 11. qRT-PCR analysis of DNMT1 expression in Caco-2 and HCT116 cells after treatment with 10 µM of 5-aza-2′-CdR for 72 h. The bar graphs show the DNMT1 expression levels in the Caco-2 and HCT116 cells before and after treatment with 5-aza-2′-CdR. Because DMSO was used to dissolve the 5-aza-2′-CdR solution, it was the treatment given to the HCT116 and Caco-2 cells used as controls. GAPDH mRNA was used as a reference to normalize the expression levels. The standard error of the mean for three independent experiments is represented by the error bars. * p ≤ 0.05. Abbreviation: qRT-PCR: quantitative reverse transcription polymerase chain reaction.
Lastly, the aim of this research was to identify SSX gene biomarkers that might aid in the screening of possible CC candidates for early detection. However, the present study has a few limitations. First, only 35 surgical samples (30 from male patients and five from female patients) were included in the study; therefore, larger sample sizes are needed to confirm these findings. Second, the protein levels of the candidate SSX genes were not evaluated because of a shortage of samples.
Conclusions
The expression profiles of the three SSX genes were analyzed in CC and matched NC tissue samples. The gene expressions of SSX1 and SSX2 were detected in the CC tissue samples but not in the adjacent NC tissue samples. Therefore, these genes may be used as cancer-specific biomarkers (diagnostic tools) for the early detection of CC. However, additional protein-level investigations are needed to assess this result. This study also shows that the 5-aza-2′-CdR and TSA agents could stimulate the expressions of all SSX genes examined in the CC cell lines. However, on the basis of the findings of this study, we conclude that, owing to its ability to decrease DNMT1 expression levels, 5-aza-2′-CdR is the most important regulator of SSX gene expression. This epigenetic regulator is important for the transcriptional activation of SSX genes and might be used as a therapeutic target in future cancer immunotherapies. In order to assess the effects of 5-aza-2′-CdR treatment at higher doses for longer durations or in conjunction with a TSA agent, additional research is required. | 2023-05-24T15:20:51.035Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "1d6cc932632e43ddd9aef40019f9e28c36044c80",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/medicina59050988",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a4cb5e895d0cdf4cf53d4dd8cd03823a07b2ff1",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248014957 | pes2o/s2orc | v3-fos-license | Novel PANK2 Mutations in Patients With Pantothenate Kinase-Associated Neurodegeneration and the Genotype–Phenotype Correlation
Pantothenate kinase-associated neurodegeneration (PKAN) is a rare genetic disorder caused by mutations in the mitochondrial pantothenate kinase 2 (PANK2) gene and displays an inherited autosomal recessive pattern. In this study, we identified eight PANK2 mutations, including three novel mutations (c.1103A > G/p.D368G, c.1696C > G/p.L566V, and c.1470delC/p.R490fs494X), in seven unrelated families with PKAN. All the patients showed an eye-of-the-tiger sign on the MRI, six of seven patients had dystonia, and two of seven patients had Parkinsonism. Biallelic mutations of PANK2 decreased PANK2 protein expression and reduced mitochondrial membrane potential in human embryonic kidney (HEK) 293T cells. The biallelic mutations from patients with early-onset PKAN, a severity phenotype, showed decreased mitochondrial membrane potential more than that from late-onset patients. We systematically reviewed all the reported patients with PKAN with PANK2 mutations. The results indicated that the early-onset patients carried a significantly higher frequency of biallelic loss-of-function (LoF) mutations compared to late-onset patients. In general, patients with LoF mutations showed more severe phenotypes, including earlier onset age and loss of gait. Although there was no significant difference in the frequency of biallelic missense mutations between the early-onset and late-onset patients, we found that patients with missense mutations in the mitochondrial trafficking domain (transit peptide/mitochondrial domain) of PANK2 exhibited the earliest onset age when compared to patients with mutations in the other two domains. Taken together, this study reports three novel mutations and indicates a correlation between the phenotype and mitochondrial dysfunction. This provides new insight for evaluating the clinical severity of patients based on the degree of mitochondrial dysfunction and suggests genetic counseling not just generalized identification of mutated PANK2 in clinics.
INTRODUCTION
Pantothenate kinase-associated neurodegeneration (PKAN) (OMIM #234200), a subtype of the neurodegeneration with brain iron accumulation (NBIA) disorders, is characterized by the accumulation of iron in the basal ganglia (Gregory et al., 2009). PKAN frequently manifests as severe dystonia, young-onset Parkinsonism, pigmented retinopathy, and loss of movement control (Hayflick et al., 2003). Based on the onset age, it is classified into two groups: early onset (<10 years old when the first symptoms start), otherwise known as classic onset, and late onset (≥10 years old when symptoms start), otherwise known as atypical onset. Patients with early-onset PKAN show rapid disease progression, loss of ambulation approximately 15 years after the first symptoms, and a tendency to develop pigmentary retinopathy. Those with later onset show slower progression, maintain independent ambulation for more than 15 years after the first symptoms, and tend to have speech disorders and psychiatric features (Pellecchia et al., 2005). Most patients have the eye-of-the-tiger sign on brain MRI (McNeill et al., 2008) and display inherited mutations of the pantothenate kinase 2 (PANK2) gene (OMIM *606157) in an autosomal recessive pattern (Zhou et al., 2001).
Pantothenate kinase 2 is located on chromosome 20p13 and encodes the PANK2 protein consisting of 570 amino acids. PANK2 belongs to the pantothenate kinase family (PANK1-4) and is the only pantothenate kinase located in the mitochondria (Prokisch and Meitinger, 2003). It catalyzes the biosynthesis of coenzyme A (CoA) and acts as a rate-controlling enzyme in the first step of the CoA biosynthesis pathway (Begley et al., 2001). CoA is a key molecule for hundreds of metabolic reactions, including the tricarboxylic acid cycle and neurotransmitter synthesis (Leonardi et al., 2005), and its dysfunction is associated with neurodegeneration with brain iron accumulation (Srinivasan et al., 2015). PANK2 comprises a transit peptide/mitochondrial (TPM) domain (1-45 aa) at the N-terminal region, an intermediate/regulatory (I/R) domain (47-211 aa) in the central region, and a PANK catalytic core domain (CCR) (212-570 aa) at the C-terminal region (Zhang et al., 2006). The location of PANK2 in mitochondria is vital for regulating CoA biosynthesis (Leonardi et al., 2007). To date, more than 100 mutations have been reported with different mutation types and locations in PANK2 (Chang et al., 2020). Mutations in PANK2 have been reported to disrupt mitochondrial function, including increased oxidative status, disturbed CoA metabolism, and impaired iron homeostasis, which are associated with PKAN (Brunetti et al., 2012; Campanella et al., 2012; Jeong et al., 2019). Mitochondrial impairment is related to many neurodegenerative disorders, such as Parkinson's disease and Alzheimer's disease (Burté et al., 2015; Que et al., 2021; Wang et al., 2021a). However, the mitochondrial functional alteration caused by biallelic PANK2 mutations and its potential relationship with the severity of the phenotype is still unknown. In this study, we report seven biallelic PANK2 mutation pairs from seven unrelated families, including three novel mutations, c.1103A > G/p.D368G, c.1696C > G/p.L566V, and c.1470delC/p.R490fs494X. All the biallelic mutations derived from patients reduced the mitochondrial membrane potential (MMP) in human embryonic kidney (HEK) 293T cells, which correlated with the severity of the phenotype in the patients. We also systematically reviewed all the reported patients with biallelic PANK2 mutations and found that patients with loss-of-function (LoF) mutations or missense mutations in the TPM domain had more severe phenotypes. These results show a potential relationship between the phenotype and mitochondrial dysfunction, suggesting that genetic counseling should consider the degree of mitochondrial dysfunction caused by the biallelic mutation.
Inclusion of Patients
All the patients were recruited from the genetic outpatient department of the Second Affiliated Hospital of Guangzhou Medical University, including three patients transferred from the Suzhou Hospital of Anhui Medical University, Department of Neurology, and Shanghai Jiao Tong University Affiliated Sixth People's Hospital. Brain MRI scans and detailed clinical data were collected, including age at onset, gait disturbance (GD), general and neurological examination results (dystonia, tremor, chorea, dysarthria, dysphagia, cognitive decline, and pyramidal signs), sex, and age. Genomic DNA of peripheral blood was extracted from the patients and their parents (Qiagen, Hilden, Germany) for sequencing.
Mutation screening of PANK2 was performed using Sanger sequencing. Patients were classified as early-onset (classic) or late-onset (atypical) types (Pellecchia et al., 2005). This study was approved by the Medical Ethics Committee of the Second Affiliated Hospital of Guangzhou Medical University. Written informed consent was obtained from the patients and their parents (for children).
Cell Culture and Transfection
The human embryonic kidney (HEK) 293T cell line was obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China) and was cultured at 37 °C in 5% CO2 in Dulbecco's Modified Eagle's Medium supplemented with 10% fetal bovine serum (Gibco, cat. 10270-106) and 50 U/ml penicillin-streptomycin (Gibco, #15070063). The FUGW-PANK2-EGFP plasmid was transfected into HEK293T cells using the TurboFect Transfection Reagent (Thermo Fisher Scientific, #R0531) according to the manufacturer's instructions. After 48 h of transfection, the cells were subjected to Western blotting or immunofluorescence analysis.
Mutant PANK2 Protein Three-Dimensional Modeling Analysis
Pantothenate kinase 2 structural three-dimensional (3D) modeling was performed based on the Protein Data Bank (PDB) accession 5E26. Analysis of WT and mutant protein models was performed using the Iterative Threading Assembly Refinement (I-TASSER) software. The three-dimensional structural images were visualized using PyMOL 1.7.
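For orientation, inspecting a mutated residue on the deposited structure can be scripted through PyMOL's Python API. The snippet below is a minimal sketch, not the authors' actual workflow; the residue number (368, for the novel p.D368G change) is assumed to match the numbering of the deposited 5E26 chain, which should be verified before use.

```python
from pymol import cmd

cmd.fetch("5e26")                      # PANK2 structure cited in the text
cmd.show("cartoon")
cmd.select("mut_site", "resi 368")     # assumed site of the p.D368G change
cmd.show("sticks", "mut_site")
cmd.color("red", "mut_site")
cmd.zoom("mut_site", 8)                # zoom with an 8 A buffer
cmd.png("pank2_D368.png", dpi=300)     # save the rendered view
```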
Mitochondrial Membrane Potential Assay
Mitochondrial membrane potential is an important indicator of normal mitochondrial function and was detected with the MMP indicator tetramethylrhodamine methyl ester (TMRM) (#I34361, Invitrogen). Briefly, cells were transfected with WT or mutant PANK2 plasmids and, after 48 h, were stained with 50 nM TMRM at 37 °C under normal culture conditions for 30 min. After the cells were washed with PBS, the fluorescence ratio of TMRM/EGFP was measured using the SpectraMax Paradigm Multi-Mode Microplate Reader (Molecular Devices).
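The plate-reader readout lends itself to a simple ratio computation. The sketch below is illustrative, assuming background-subtracted TMRM and EGFP intensities per well; the array values are hypothetical, and EGFP serves to normalize for transfection efficiency, as implied by the ratio described above.

```python
import numpy as np

def mmp_ratio(tmrm: np.ndarray, egfp: np.ndarray) -> np.ndarray:
    """Per-well TMRM/EGFP ratio; EGFP corrects for transfection efficiency."""
    return tmrm / egfp

def relative_to_wt(mut_ratios: np.ndarray, wt_ratios: np.ndarray) -> float:
    """Mutant MMP expressed as a fraction of the wild-type PANK2 control."""
    return mut_ratios.mean() / wt_ratios.mean()

# Example with made-up intensities for 3 replicate wells:
wt = mmp_ratio(np.array([900., 950., 880.]), np.array([1000., 1020., 980.]))
mut = mmp_ratio(np.array([500., 520., 480.]), np.array([990., 1010., 970.]))
print(relative_to_wt(mut, wt))   # < 1.0 indicates reduced MMP
```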
Genotype-Phenotype Analysis
To explore the genotype-phenotype association, publications on PANK2 mutations and related phenotypes were retrieved from PubMed, CNKI, VarCards, and HGMD until January 2022. All the PANK2 variants were annotated based on the transcript NM_153638.4. An LoF mutation is defined as a nonsense, frameshift, canonical splice site, or initiation codon lost mutation. Detailed clinical features of patients with PKAN, including age of onset, GD, dystonia (limbs/oromandibular/generalized), tremor, Parkinsonism, dysarthria, dysphagia, pyramidal signs, MRI, and cognitive decline, were included as described in the literature.
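The LoF definition above can be written as a small classifier. The consequence labels below follow common VEP-style terms and are an assumption; the authors' exact annotation strings may differ.

```python
LOF_CONSEQUENCES = {
    "stop_gained",                # nonsense
    "frameshift_variant",         # frameshift
    "splice_acceptor_variant",    # canonical splice site
    "splice_donor_variant",       # canonical splice site
    "start_lost",                 # initiation codon lost
}

def is_lof(consequence: str) -> bool:
    return consequence in LOF_CONSEQUENCES

def genotype_class(allele1: str, allele2: str) -> str:
    """Classify a biallelic genotype as used in the pooled analysis."""
    lof = (is_lof(allele1), is_lof(allele2))
    if all(lof):
        return "biallelic LoF"
    if any(lof):
        return "missense/LoF"
    return "biallelic missense"

print(genotype_class("frameshift_variant", "missense_variant"))  # missense/LoF
```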
Statistical Analysis
Statistical analyses were performed using SPSS version 18 (SPSS Inc., Chicago, IL, United States). All the quantified data are presented as median (min-max). The t-tests, one-way ANOVA tests, and Kruskal-Wallis tests were used to compare two independent or paired samples, multiple samples, and nonparametric data, respectively. Statistical significance was set at p < 0.05.
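For the nonparametric comparisons of onset ages between genotype groups reported below, an equivalent open-source analysis can be sketched as follows (scipy stands in for SPSS here, and the example arrays are hypothetical):

```python
import numpy as np
from scipy import stats

def summarize(ages) -> str:
    a = np.asarray(ages, dtype=float)
    return f"{np.median(a):.1f} ({a.min():.1f}-{a.max():.1f})"  # Median (Min-Max)

def compare_onset(group1, group2):
    """Two-sided Mann-Whitney U test between two onset-age groups."""
    u, p = stats.mannwhitneyu(group1, group2, alternative="two-sided")
    return u, p   # significant when p < 0.05

early = [3.0, 5.5, 2.0, 8.0, 4.0]     # hypothetical onset ages (years)
late = [12.0, 18.5, 25.0, 14.0]
print(summarize(early), summarize(late), compare_onset(early, late))
```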
All the missense mutations were located at sites that are highly conserved among PANK family proteins and across different species (Figure 1C). These were predicted and evaluated as damaging or pathogenic mutations by 23 in silico predictive algorithms (VarCards; http://varcards.biols.ac.cn/) and by the American College of Medical Genetics and Genomics (ACMG) guidelines (Richards et al., 2015), respectively (Figure 1D and Supplementary Tables 2, 3). The c.1355A > G (p.D452G) mutation, present in 2/7 unrelated patients (28.6%), was identified as one of the hot spot mutations in the Chinese population (Supplementary Figure 1).
Two truncated mutations were located in the I/R and CCR domains and were predicted to produce full or partial loss of the CCR in PANK2 (Figure 2A). The six missense mutations were located in the CCR of PANK2, and three were predicted by 3D structural modeling to decrease the native hydrogen bonds with surrounding amino acids, resulting in varied alterations in the stability of PANK2 (Figure 2B). These results suggest that these mutations would cause PANK2 dysfunction.
Genotype-Phenotype Relationship of PANK2 in PKAN
To further explore the potential relationship between the genotype and phenotype, we analyzed the association between PANK2 mutation types (homozygous vs. compound heterozygous mutations, biallelic missense vs. biallelic LoF), the PANK2 domain where the mutation occurs (TPM, I/R, CCR), and the phenotype (age of onset, age at loss of gait) of patients with PKAN. A total of 255 patients, including 145 (145/255, 56.9%) early onset and 110 (110/255, 43.1%) late onset, were enrolled (seven from this study and 248 from other publications; Supplementary Table 4). The dominant clinical manifestations included the eye-of-the-tiger sign on brain MRI, dysarthria, GD, and dystonia. The median age at onset was 8.0 (0.3-51.0) years and the median age at loss of gait was 11.0 (2.0-60.0) years (Table 2).
Early-onset patients carried a higher frequency of biallelic LoF mutations, consistent with previous reports (Chang et al., 2020). There was no difference in the ratio of biallelic missense or missense/LoF mutations between the two groups of patients (Table 3). Next, we found that the age of onset and loss of gait for patients with homozygous mutations was younger than that of patients with compound heterozygous mutations (Figures 4A,C). Furthermore, we found that patients with homozygous LoF mutations had an earlier onset age and loss of gait than those with homozygous or heterozygous missense/LoF mutations (Figures 4B,D). Although there was no difference in the age of onset and at loss of gait between the patients with homozygous and those with compound heterozygous missense mutations (Figures 4E,F), the patients with missense mutations that occurred in the TPM domain had an earlier age of onset than those with mutations in the other two domains (Figure 4G). Taken together, these results indicate that the degree of mitochondrial damage caused by biallelic mutations in PANK2 could be related to the severity of PKAN.
Table 2 note: A total of 255 patients from the publications and our cohorts were pooled for analysis. Age of disease onset and at the loss of gait is presented as Median (Min-Max) (years).
Figure 4 note: Differences across the groups were analyzed using the Mann-Whitney U test (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; ns, no significance).
DISCUSSION
Pantothenate kinase-associated neurodegeneration is an autosomal recessive disorder caused by biallelic variation in PANK2, which encodes a mitochondrial protein implicated in the biosynthesis of CoA (Hartig et al., 2006; Efthymiou et al., 2020). In this study, we identified eight PANK2 mutations in seven unrelated families, including three novel mutations (c.1103A > G/p.D368G, c.1470delC/p.R490fs494X, and c.1696C > G/p.L566V). Most mutation pairs of PANK2 (6/7, 85.7%) decreased PANK2 protein expression and impaired MMP in HEK 293T cells. Further systematic analysis revealed that patients with an LoF mutation that completely disrupted the mitochondrial function of PANK2, or a missense mutation occurring in its mitochondrial transit peptide domain, had a tendency toward a more severe phenotype. These results indicate that PANK2 mutations contribute to the patient phenotype by disturbing mitochondrial function. Human PANK2 protein, a homodimer, catalyzes the rate-limiting first step of CoA biosynthesis in a feedback-regulated manner (Kotzbauer et al., 2005). PANK2 is highly expressed in the brain and is localized in the mitochondrial inner membrane (Zhou et al., 2001; Kotzbauer et al., 2005).
Silencing of PANK2 expression causes cell growth reduction, cell-specific ferroprotein upregulation, and iron deregulation in human cell lines, characteristics associated with the pathological features of neurodegeneration with brain iron accumulation in the basal ganglia (Kruer et al., 2011). LoF PANK2 mutations perturb mitochondrial function and iron homeostasis in mouse and human cells (Santambrogio et al., 2015; Orellana et al., 2016; Jeong et al., 2019). Similarly, in our cohort, biallelic mutations in PANK2 decreased PANK2 protein levels and damaged mitochondrial function. Furthermore, the disruption of mitochondrial function, but not PANK2 expression, was significantly greater for mutations found in early-onset patients (Figures 3C,F). Previous studies support the notion that variable functional alterations resulting from different mutations in the same gene lead to phenotypes of varied severity (Shi et al., 2019; Wang et al., 2020, 2021b). In our pooled analysis of 255 patients, the patients with LoF mutations exhibited a more severe phenotype. It is worth noting that patients with a missense mutation in the PANK2 TPM domain also displayed a more severe phenotype than those with mutations in the other two domains.
The TPM domain is important for PANK2 function as an acylcarnitine sensor that upregulates CoA biosynthesis (Brunetti et al., 2012), and we propose that missense mutations in the TPM domain lead to the loss of PANK2 function by disturbing its translocation into mitochondria. These results indicate that biallelic mutations that impair mitochondrial function or disturb PANK2 trafficking into the mitochondria may result in an earlier onset of PKAN.
Coenzyme A is a crucial molecule participating in more than 100 metabolic processes (Siudeja et al., 2011; Wang et al., 2019). PANK2 catalyzes the synthesis of CoA from pantothenate (vitamin B5) in the mitochondrial intermembrane space and acts as a sensor of the mitochondrial CoA requirement and a regulator of CoA biosynthesis (Brunetti et al., 2012; Zizioli et al., 2016). Supplementation of CoA in human induced pluripotent stem cell (iPSC)-derived neurons with PANK2 deficiency was sufficient to rescue the majority of the mitochondrial functional defects of PKAN (Orellana et al., 2016). In this study, we found that the patients with mutations located in the mitochondrial targeting domain (TPM) of PANK2 displayed an earlier age of onset, indicating that they could benefit from earlier supplementation of CoA for improved PKAN treatment.
In addition, the frequency of PANK2 mutation sites differed between the Chinese and other populations. The three most frequent mutations in the Chinese population were p.D378G (13/120), p.D452G (7/120), and p.I501T (7/120), while the three most frequent mutations in other populations were p.N404I (20/378), p.G219S/V (15/204), and p.T528M (12/204). Some mutations occurred only in the Chinese population, such as p.D452G and p.D324Y (Supplementary Figure 1). These differences may be attributed to a founder effect (Rump et al., 2005; Nakatsuka et al., 2017).
In conclusion, this study identified three novel PANK2 mutations in seven PKAN families and found a correlation among genotype, mitochondrial function impairment, and phenotype. Our results may help predict the severity of this disease and may provide insights for recognizing the disease mechanism of PKAN.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material.
ETHICS STATEMENT
The study was approved by the Medical Ethics Committee of Second Affiliated Hospital of Guangzhou Medical University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
Y-WS, X-WS, and W-BL defined the research theme and wrote the manuscript. W-BL, N-XS, H-CX, Z-YL, LC, C-XF, and QC designed methods and experiments and carried out most of the experiments. X-WS, N-XS, CZ, and LC collected the clinic data. W-BL and N-XS analyzed the data and interpreted the results. All authors listed have made a substantial, direct, and intellectual contribution to the work, and approved it for publication.
ACKNOWLEDGMENTS
We thank the affected individuals and their families for participating in this study. | 2022-04-08T15:22:50.390Z | 2022-04-06T00:00:00.000 | {
"year": 2022,
"sha1": "76c557b9c22b50829939509981fbc083803bfc1f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2022.848919/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "525a89076ef1e955a49e41f14c3a10bb1f327426",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
231803852 | pes2o/s2orc | v3-fos-license | Norepinephrine affects the interaction of adherent-invasive Escherichia coli with intestinal epithelial cells
ABSTRACT Norepinephrine (NE), the stress hormone, stimulates the growth and virulence of many bacterial species, including Escherichia coli. However, the hormone's impact on the adherent-invasive E. coli (AIEC) implicated in Crohn's disease is poorly understood. In this study, we investigated the effect of NE on the interaction of six AIEC strains, isolated from intestinal biopsies of six children with Crohn's disease, with Caco-2 cells. Our study focused on type 1 fimbriae and the CEACAM6 molecules serving as docking sites for these adhesins. The study results demonstrated that the hormone significantly increased the adherence and invasion of AIEC to Caco-2 cells in vitro. However, the effect was not associated with an impact of NE on an increased proliferation rate of AIEC or on the expression of the fimA gene vital for their interaction with intestinal epithelial cells. Instead, the carcinoembryonic antigen-related cell-adhesion-molecule-6 (CEACAM6) level was increased significantly in NE-treated Caco-2 cells infected with AIEC, in contrast to control uninfected NE-treated cells. These results indicated that NE influenced the interaction of AIEC with the intestinal epithelium by increasing the level of CEACAM6 in epithelial cells, strengthening bacterial adherence and invasion.
Introduction
Adherent-invasive Escherichia coli (AIEC) is a pathobiont implicated in Crohn's disease. AIEC is characterized by the ability to adhere to and invade human intestinal epithelial cells [1]. Another essential characteristic of this group of E. coli strains is their ability to survive and replicate within macrophages without inducing their death [2]. AIEC interacts with the carcinoembryonic antigen-related cell-adhesion-molecule-6 (CEACAM6) at the apical surface of epithelial cells via type 1 pili, which promote their adherence to and invasion of the intestinal epithelium [3,4]. Flagella are another crucial virulence factor promoting the adherence and invasion of AIEC into epithelial cells and stimulating the secretion of interleukin-8 [5]. Other virulence factors described in AIEC include long polar fimbriae, conferring bacterial interaction with Peyer's patches [6], and the outer membrane protein A (OmpA), which interacts with the endoplasmic-reticulum-stress-response glycoprotein Gp96 on intestinal epithelial cells, promoting bacterial invasion [7].
The frequency of AIEC isolation from patients with Crohn's disease ranges from 21% to 63%, depending on the isolation methods and sample size. AIEC strains are isolated from only a small percentage of healthy individuals, which suggests that their involvement in Crohn's disease is associated with individual factors predisposing to the development of the disease [8]. Multiple genetic polymorphisms predisposing to the development of Crohn's disease appear to be an essential host factor promoting AIEC interaction with the intestinal epithelium. Mutations in the cytoplasmic nucleotide-binding oligomerization domain 2 (NOD2) in CD patients result in an increased immune response to bacterial antigens [9].
Overexpression of CEACAM6 receptors at the apical surface of intestinal epithelial cells in patients with CD facilitates AIEC adherence. Additionally, dysfunctional regulation of tight junction proteins associated with CEACAM6 overexpression promotes AIEC invasion into the intestinal mucosa. Decreased levels of protective meprins, proteases that degrade bacterial type 1 pili, observed in CD patients further increase AIEC colonization of the intestinal epithelium. Another significant factor favoring successful colonization by AIEC is gut dysbiosis and inflammation of the intestinal mucosa, which predispose to the overgrowth and expansion of AIEC in these patients [9,10].
Innervation of Peyer's patches of the intestinal mucosa with sympathetic cholinergic nerves leads to the release of catecholamines (norepinephrine, epinephrine, and dopamine) upon stimulation, e.g., via uptake of bacteria [11]. At high concentrations, these neuroendocrine hormones increase bowel peristalsis, modulate immunity, and interfere with human body homeostasis, as well as with the outcome of infections caused by intestinal pathogens [12]. Nearly half of the norepinephrine synthesized in the human body is produced and utilized within the enteric nervous system, influencing both the host organism and the intestinal microbiota [13,14]. Catecholamines influence microorganisms in at least three different ways. They facilitate the acquisition of iron from host iron-binding proteins, i.e., transferrin and lactoferrin, and act as signaling molecules that activate bacterial adrenergic-like QseC receptors [8,11,15]. The effect of NE on the gut microbiome is an increased microbial growth rate and enhanced expression of virulence factors through a quorum sensing mechanism [16,17]. Moreover, these hormones modulate the interaction of the microbiota with the intestinal epithelium via adrenergic signaling. An ex vivo study on porcine colon explants rich in Peyer's patches evidenced that NE increased the uptake of enterohemorrhagic E. coli (EHEC) O157:H7 and Salmonella enterica serovar Typhimurium [18,19]. Brown et al. [18] demonstrated that a neuronal conduction blockade in Peyer's patch explants with saxitoxin, a neuronal toxin, significantly decreased Salmonella Typhimurium uptake. In turn, Green et al. [19] proved that NE, via α2-adrenergic receptors, increases EHEC adherence to the colonic mucosa. These studies confirmed the crucial role of adrenergic signaling in the NE-induced colonization of the intestinal epithelium by pathogens and indicated the complex nature of these interactions.
The effect of NE on the interaction of adherent-invasive E. coli, as a distinct group of pathogenic E. coli, with the intestinal epithelium is poorly understood, laying the groundwork for our research. In this study, we determined the adhesion and invasion of AIEC to Caco-2 cells in the presence of norepinephrine, followed by this hormone's effect on the expression of selected factors, such as type 1 fimbriae and the CEACAM6 molecule, that could influence the interaction of AIEC with intestinal epithelial cells.
E. coli strains and culture media
Six E. coli strains isolated from biopsy specimens of six children (mean age 11.1 years, range 7 to 18 years) with Crohn's disease, diagnosed in the Department and Clinic of Pediatrics and Gastroenterology of the University of Medicine, Wroclaw, Poland, were examined in the study. All strains were confirmed as E. coli by their biochemical characteristics using an Enterotest assay. Based on their ability to adhere to and invade intestinal epithelial cells, as well as to survive within macrophages, all strains were recognized as AIEC (Supplementary Materials; Table 1). The prototype AIEC strain LF82 (O83:H1), kindly provided by Dr. Arlette Darfeuille-Michaud, Université d'Auvergne, France, was included in the study as a positive control. E. coli were routinely cultured overnight in Luria broth (LB) with shaking at 37°C and then transferred into a growth-restricting SAPI-serum medium (6.25 mM NH4NO3, 1.84 mM KH2PO4, 3.35 mM KCl, 1.01 mM MgSO4, and 2.77 mM glucose, pH 7.5) supplemented with 30% (v/v) non-heat-inactivated bovine serum (FBS) and, where indicated, with 50 µM L-norepinephrine bitartrate (SAPI-serum-NE) prepared in phosphate-buffered saline (pH 7.4) and sterilized through a 0.2 µm filter. The NE concentration used in the study was chosen based on a previous report [20].
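For convenience, here is a small sketch converting the SAPI recipe above from molar concentrations into weigh-out masses; the molar masses are our assumptions (anhydrous salts), not taken from the source.

```python
# Convert the SAPI medium recipe from mM to g/L: g/L = (mM / 1000) * MW.
MEDIUM = {            # component: (concentration in mM, molar mass in g/mol)
    "NH4NO3":  (6.25, 80.04),
    "KH2PO4":  (1.84, 136.09),
    "KCl":     (3.35, 74.55),
    "MgSO4":   (1.01, 120.37),   # assumed anhydrous
    "glucose": (2.77, 180.16),
}

for component, (mm, mw) in MEDIUM.items():
    print(f"{component:8s} {mm / 1000 * mw:.3f} g/L")
# NH4NO3 0.500, KH2PO4 0.250, KCl 0.250, MgSO4 0.122, glucose 0.499
```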
PCR and RT-qPCR assays
The presence of the fimA gene in AIEC strains was confirmed by PCR using primers presented in Table S2 (Supplementary Materials). The expression of fimA by AIEC was investigated using quantitative real-time PCR. The qRT-PCR reactions were run with mRNA of AIEC cultured in SAPI-serum medium for 24 h, 24 h cultures in SAPI-serum medium subsequently subcultured in MEM medium for 3 h, and 24 h cultures in SAPI-serum medium subsequently subcultured in MEM medium with 50 µM NE (MEM-NE) for 3 h. RNA was extracted using the RNeasy Mini Kit and transcribed to cDNA with the iScript™ Reverse Transcription Supermix for RT-qPCR. The reaction was carried out in 10 μl volumes using the SsoAdvanced™ Universal SYBR® Green Supermix on a MIC Real-Time PCR System. The specific primers for fimA and the dnaE and rpoS housekeeping genes are presented in Table S2 (Supplementary Materials). RT-qPCR reactions were run in triplicate under the following conditions: polymerase activation at 95°C for 15 min, followed by 45 cycles of denaturation at 95°C for 15 s, annealing at 60°C for 20 s, and elongation at 72°C for 20 s. The expression of CEACAM6 in epithelial cells was determined in Caco-2 cells infected with AIEC strains for 3 hours. RNA was isolated using the Aurum™ Total RNA kit (Bio-Rad) according to the manufacturer's instructions. RNA concentration was quantified using a NanoDrop ND-1000 spectrophotometer, and the 260/280 and 260/230 ratios were examined for protein and solvent contamination. The relative mRNA expression of the genes examined was normalized against the reference genes Rps-11, α-tubulin and β-actin, calculated with the ∆∆Ct method.
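For readers who want to reproduce the normalisation step, the following is a minimal sketch of the 2^(−ΔΔCt) (Livak) calculation described above; the function name and all Ct values are hypothetical illustrations, with dnaE standing in as the reference gene.

```python
# Minimal sketch of the 2^(-ΔΔCt) (Livak) relative-expression calculation,
# normalised to a reference gene (e.g., dnaE). Ct values are hypothetical.

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated  # ΔCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control  # ΔCt, control sample
    dd_ct = d_ct_treated - d_ct_control                # ΔΔCt
    return 2 ** -dd_ct                                 # relative expression

# Hypothetical example: fimA in a MEM-NE culture vs. a MEM culture.
print(fold_change_ddct(22.1, 15.0, 23.4, 15.1))        # ≈ 2.3-fold increase
```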
Cell line
The Caco-2 cell line (ATCC HTB37™) was maintained in minimal essential medium (MEM) supplemented with 10% fetal bovine serum (FBS), 1 M sodium pyruvate, 1 M nonessential amino acids (NEAA), 100 U/ml penicillin, and 100 µg/ml streptomycin at 37°C in a humid atmosphere with 5% CO2. Cells were routinely screened for mycoplasma contamination using Hoechst staining. For experimental analyses, Caco-2 cells were seeded at 5 × 10⁴ cells per well in 24-well culture plates and cultured for ten to eleven days to a confluent monolayer. For in vitro adherence and invasion assays, Caco-2 cells were either untreated or treated for 3 hours with 50 µM NE added to the cell-culture medium 10 min before infection with AIEC.
Adherence and internalization assays
Overnight AIEC cultures in SAPI-serum medium were harvested and suspended in saline to a density of 6 × 10⁸ CFU/ml, established spectrophotometrically at 600 nm, and used to infect Caco-2 cells at a multiplicity of infection (MOI) of 50 bacteria per cell. At 3 hours post-infection, cells were washed three times with PBS and lysed with 0.1% Triton X-100. Serial dilutions of bacterial lysates were plated onto nutrient agar and incubated overnight at 37°C to count bacterial colonies (CFU). The invasion assay was performed in the same manner as the adherence assay, with an additional 1 hour of incubation of Caco-2 cells in MEM medium containing gentamicin (100 µg/ml) to kill extracellular bacteria. Adherence and invasion assays were performed in triplicate in three independent experiments.
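A back-of-the-envelope sketch of the inoculum and CFU arithmetic behind this assay follows; the cell number per well is an assumption (cells were seeded at 5 × 10⁴ per well and grown to confluence, so the true number at infection is higher), and the plate counts are hypothetical.

```python
cells_per_well = 5e4          # assumed from the seeding density above
moi = 50                      # bacteria per epithelial cell
stock_cfu_per_ml = 6e8        # density of the AIEC suspension

bacteria_per_well = cells_per_well * moi                   # 2.5e6 CFU
inoculum_ul = bacteria_per_well / stock_cfu_per_ml * 1e3   # ml -> µl
print(f"inoculum: {inoculum_ul:.1f} µl per well")          # ≈ 4.2 µl

# Recovering CFU/ml of lysate from a plated serial dilution (hypothetical
# counts): CFU/ml = colonies / dilution factor / volume plated (ml).
colonies, dilution, plated_ml = 83, 1e-5, 0.1
print(f"lysate: {colonies / dilution / plated_ml:.1e} CFU/ml")  # 8.3e+07
```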
Growth responsiveness of AIEC to norepinephrine
The impact of NE on the growth of AIEC was investigated using stationary cultures in LB medium that were inoculated into SAPI-serum medium supplemented with 50 µM NE to achieve a density of 1 × 10² CFU (colony-forming units) per ml. Cultures were incubated statically at 37°C in a humidified atmosphere with 5% CO2 for 24 hours. The density of AIEC cultures was measured at two time points, t = 0 and t = 24 h, by enumeration of CFU on nutrient-agar plates with a standard dilution technique. The impact of the cell culture medium MEM without NE and MEM with 50 µM NE on the growth of AIEC at a density (2.5 × 10⁶ CFU/ml) corresponding to the number of AIEC used in the adherence assay was assessed spectrophotometrically at OD600 after a 3 h incubation period.
Fluorometric analysis of CEACAM6 molecule expression
The impact of NE on CEACAM6 expression was assessed in Caco-2 cells pre-treated with NE and infected with AIEC strains as described above. After 3 h of incubation, cells were washed three times in PBS and detached using a non-enzymatic cell dissociation solution, followed by washing three times in ice-cold PBS with 1% bovine serum albumin (PBS-BSA), and kept on ice. Cells were then stained with a mouse anti-human CEACAM6/CD66 allophycocyanin (APC)-conjugated antibody (R&D, FAB3934A). Incubation of Caco-2 cells with the APC-CEACAM6 antibody proceeded for 45 min at 4°C, after which cells were washed three times in ice-cold PBS-BSA and diluted to a final density of 3 × 10⁶ cells per well in a black-walled microtiter plate. Fluorescence was read using a Tecan Infinite M200 plate reader at an excitation wavelength of 630 nm (λex = 630 nm) and an emission wavelength of 665 nm (λem = 665 nm). A mouse IgG1 APC-conjugated antibody (ThermoFisher Scientific) was used as an isotype control, and uninfected cells served as a negative control. Caco-2 cell viability was determined by staining with 10 µg/ml propidium iodide (Sigma Aldrich) for 1 min, and fluorescence was measured at λex = 535 nm and λem = 617 nm. Caco-2 cells killed with 4% formaldehyde for 30 min served as a positive control in the viability assay. The assay was repeated three times in quadruplicate, and the results are presented as the mean fluorescence intensity with standard deviation.
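As a sketch of how the plate-reader output could be summarised into the reported mean fluorescence intensity with standard deviation, the snippet below subtracts the isotype-control background and averages the quadruplicate wells; all intensity values are hypothetical.

```python
import statistics

isotype = [410, 395, 402, 398]       # hypothetical APC-IgG1 isotype wells
stained = [1520, 1478, 1503, 1490]   # hypothetical APC-CEACAM6 wells

background = statistics.mean(isotype)          # mean isotype-control signal
corrected = [s - background for s in stained]  # background-subtracted values
print(f"MFI = {statistics.mean(corrected):.0f} "
      f"± {statistics.stdev(corrected):.0f}")  # mean ± SD across replicates
```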
Yeast agglutination assay
Expression of type 1 fimbriae in AIEC grown under culture conditions corresponding to the adherence assay was assessed by their ability to agglutinate yeast (Saccharomyces cerevisiae) cells. Cultures of AIEC in SAPI-serum, MEM, and MEM-NE media were harvested and diluted with PBS to a density of 9 × 10⁸ CFU/ml (established spectrophotometrically). Next, they were mixed with a 1% yeast cell suspension in PBS in a 96-well microtiter plate. The agglutination reaction was inspected under an inverted microscope and recorded as the highest dilution of AIEC suspension producing yeast cell clumping. The mannose-sensitive nature of yeast agglutination was investigated in the presence of 1% methyl-α-mannopyranoside. The assay was repeated three times with three independent bacterial cultures, and the results are presented as the mean titer with standard deviation.
Norepinephrine enhanced adherence and invasion of AIEC strains to NE-treated Caco-2 cells
The impact of NE on the interaction of AIEC with intestinal epithelial cells was assessed in in vitro adhesion and invasion assays on Caco-2 cell monolayers in the presence of NE (50 µM). All AIEC strains demonstrated a significant increase in adhesion (from 2- to >4-fold; p < 0.05) to NE-treated Caco-2 cells compared to untreated cells. The mean adherence level of all AIEC strains to NE-treated Caco-2 cells was significantly higher than to untreated cells (20.7 × 10⁷ CFU vs. 8.3 × 10⁷ CFU, respectively; p = 0.003). Besides, NE significantly enhanced (from 2- to 3.8-fold; p < 0.05) the internalization of all wild-type AIEC by Caco-2 cells. However, the invasion of the prototype AIEC strain LF82 into NE-treated Caco-2 epithelial cells remained unchanged (p = 0.41) compared to untreated epithelial cells (Figure 1). A meta-analysis was carried out to quantify the effect of NE between MEM and MEM-NE cultures of AIEC. Statistically significant mean differences were observed in the adherence (p < 0.0001) and invasion (p = 0.0001) assays; on average, values were higher in MEM-NE than in MEM medium. The graphical results are presented in forest plots (Supplementary Materials Figure S1).
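The following is a minimal sketch of the per-strain fold-change calculation and a paired comparison of the kind summarised above; the CFU values are hypothetical placeholders for the seven strains (LF82 plus six clinical isolates), and scipy is assumed to be available.

```python
from scipy import stats

# Hypothetical adherence counts (CFU), untreated vs. NE-treated, per strain.
untreated  = [8.1e7, 7.5e7, 9.0e7, 8.8e7, 7.9e7, 8.3e7, 8.5e7]
ne_treated = [2.0e8, 1.7e8, 2.3e8, 2.1e8, 1.9e8, 2.2e8, 2.4e8]

folds = [ne / u for ne, u in zip(ne_treated, untreated)]
print("fold changes:", [round(f, 1) for f in folds])

t_stat, p = stats.ttest_rel(ne_treated, untreated)  # paired, strain-wise
print(f"paired t-test: t = {t_stat:.2f}, p = {p:.4f}")
```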
AIEC growth in the presence of norepinephrine
Supplementation of SAPI medium with bovine serum enhanced the proliferation of all AIEC isolates from 10² CFU/ml to 10⁸ CFU/ml after 24 hours of culture. A 24-hour exposure of AIEC to 50 μM NE significantly increased the growth of three AIEC strains (LF82, EC29, and EC47) compared to the non-supplemented medium. In contrast, NE inhibited the growth of the EC48 isolate and did not affect the EC30, EC38, and EC42 strains (Figure 2). These results indicated that the impact of NE on AIEC growth in SAPI-serum medium was strain-dependent. In contrast, a 3-h incubation in the MEM-NE medium did not significantly enhance the multiplication of any AIEC strain, excluding an impact of the medium and of the incubation period with Caco-2 cells on the increased adherence.
Norepinephrine modified the expression of the fimA gene in AIEC strains
The correlation between type 1 fimbriae and the NE-induced increase in AIEC adherence to and invasion of Caco-2 cells was assessed by measuring fimA expression using qRT-PCR. According to the culture conditions of the adherence assay, RNAs were isolated from AIEC isolates cultured for 24 hours in SAPI-serum medium followed by subculture for 3 hours in MEM and MEM-NE (50 µM) media. The stability of mRNA expression across growth conditions was evaluated for the dnaE and rpoS housekeeping genes in AIEC strains. The dnaE gene had constant expression levels under all test conditions, and the data were normalized to this gene (Supplementary Materials Figure S2).
A 3-h subculture of AIEC strains in nutrient-rich MEM medium significantly increased the fimA mRNA level in the LF82 and EC48 strains compared to cultures in SAPI-serum medium but did not affect the gene expression in four other isolates (EC30, EC38, EC42, and EC47). However, these culture conditions decreased fimA expression in the EC29 isolate, indicating that a switch from a poor to a nutrient-rich environment could modulate fimA gene expression in AIEC strains (Figure 3(a)). When comparing fimA mRNA levels in strains cultured first in SAPI-serum medium and then sub-cultured for 3 h in MEM-NE and MEM media, two isolates (EC30 and EC38) had increased expression of this gene and two strains (EC29 and EC47) had decreased expression. The remaining three isolates (LF82, EC42, and EC48) demonstrated unchanged gene expression. The yeast agglutination assay confirmed the impact of NE on increased FimA synthesis in the EC30 and EC38 strains, although the difference between MEM and MEM-NE cultures varied by only one dilution (Figure 3(b)). This implied that the level of type 1 fimbriae expressed by AIEC strains cultured overnight in the nutrient-poor SAPI-serum medium and subcultured in the MEM-NE medium was sufficient for their increased adhesion to and invasion of NE-treated, but not untreated, Caco-2 cells. On the other hand, a meta-analysis excluded an overall impact of NE on fimA gene expression (Supplementary Materials Figure S4). This result suggested that NE may exert its effect on the Caco-2 cells.
Norepinephrine enhanced CEACAM6 expression in epithelial cells infected with AIEC
The impact of NE on CEACAM6 expression in untreated and NE-treated Caco-2 cells infected with AIEC was determined with an APC-conjugated anti-hCEACAM6 antibody. Caco-2 cell viability estimated with propidium iodide was 99.7% ± 0.4%. Infection of untreated Caco-2 cells with four AIEC strains (LF82, EC29, EC38, and EC47) increased CEACAM6 levels compared to control uninfected Caco-2 cells. On the contrary, the CEACAM6 level in untreated Caco-2 cells infected with the EC42 and EC48 strains decreased significantly (p < 0.05) compared to control cells. Infection of Caco-2 cells with the EC30 isolate did not change the CEACAM6 level compared to control cells. However, the level of CEACAM6 increased significantly in NE-treated Caco-2 cells infected with all AIEC strains examined compared to untreated infected Caco-2 cells and uninfected NE-treated cells (Figure 4(a)). These results indicated that NE had a prominent effect on the level of CEACAM6 in Caco-2 cells infected with AIEC.
To establish whether the changes in CEACAM6 expression were related to the expression of the corresponding gene, the CEACAM6 mRNA level in NE-treated and untreated Caco-2 cells infected with AIEC strains was determined. The stability of mRNA expression across conditions was evaluated for the Rps-11, α-tubulin, and β-actin housekeeping genes in Caco-2 cells. The Rps-11 gene had constant expression levels under all test conditions, and the data were normalized to this gene (Supplementary Materials Figure S3). The results revealed that NE significantly increased CEACAM6 expression in Caco-2 cells infected with all but one AIEC strain compared to untreated cells (Figure 4(b)). The increased level of CEACAM6 mRNA in Caco-2 cells infected with the EC38 strain did not reach statistical significance compared to untreated cells. The CEACAM6 mRNA level in control uninfected NE-treated Caco-2 cells was comparable to untreated cells. A meta-analysis confirmed the increased CEACAM6 expression in NE-treated Caco-2 cells compared to untreated cells infected with AIEC (Supplementary Materials Figure S5). A Pearson correlation coefficient of 0.912 indicated a strong positive correlation between ceacam6 gene expression and CEACAM6 synthesis in NE-treated Caco-2 cells infected with AIEC.
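As a sketch of the correlation reported above, the snippet below computes a Pearson coefficient over paired mRNA and protein readouts; the values per strain are hypothetical, and scipy is assumed to be available.

```python
from scipy.stats import pearsonr

# Hypothetical paired values per strain: CEACAM6 mRNA fold change (RT-qPCR)
# and CEACAM6 protein signal (fluorometry), NE-treated vs. untreated.
mrna_fold    = [1.8, 2.4, 3.1, 2.0, 2.7, 3.5, 2.2]
protein_fold = [1.6, 2.2, 2.9, 2.1, 2.5, 3.3, 2.0]

r, p = pearsonr(mrna_fold, protein_fold)
print(f"Pearson r = {r:.3f} (p = {p:.4f})")
```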
Discussion
Lyte [21] and Freestone [22] demonstrated that catecholamine hormones increase the expression of virulence genes vital to the ability of bacteria to interact with host cells. In this study, we investigated the effect of NE on the adherence to and invasion of Caco-2 cells by six wild-type AIEC strains isolated from children with Crohn's disease. An in vitro assay showed that NE significantly increased the adherence and invasion of AIEC to intestinal epithelial cells. Considering that NE increases the proliferation of many bacterial species, we first explored the effect of NE on the multiplication of AIEC in a nutrient-poor SAPI-serum medium. We found that this hormone increased AIEC proliferation. However, the influence was strain-dependent, which excluded an impact of increased AIEC proliferation on enhanced adherence and invasion into intestinal epithelial cells for most AIEC strains studied. Similarly, although the density of the AIEC population in nutrient-rich MEM medium increased in the presence of NE, the change was not significant, excluding an effect of short-term culture in a nutrient-rich medium on NE-induced increases in AIEC adhesion and invasion. The interaction of AIEC with intestinal epithelial cells requires type 1 fimbriae. According to Boudeau et al. [4], AIEC present a variant of type 1 pili that provides for the adherence and invasion of AIEC into epithelial cells. In this study, the expression of the fimA gene required for type 1 piliation was assessed after three hours of incubation in cell culture medium supplemented with NE and compared to the gene expression in AIEC cultured in SAPI-serum and MEM media without the hormone, according to the adherence assay conditions. The results indicated that the hormone had a strain-dependent effect on fimA expression in AIEC that did not reflect the increased adherence of all strains assessed. Moreover, the meta-analysis excluded a link between the impact of NE on fimA gene expression in AIEC and the enhanced adherence of AIEC to epithelial cells.
In addition to bacterial factors, the receptors of epithelial cells play a crucial role in the interaction of intestinal pathogens with epithelial host cells. CEACAM6 is a cell adhesion receptor of the immunoglobulin-like superfamily anchored to the cell membrane. CEACAM6 regulates cell adhesion, proliferation, signaling in cancer, and immunity [23]. Barnich et al. [4] demonstrated that CEACAM6 acts as a receptor for AIEC adherence and that its expression is enhanced in cultured epithelial cells after infection with AIEC bacteria. Moreover, according to Denizot et al. [24], CEACAM6 is overexpressed in patients with CD, favoring AIEC colonization. The overexpression of CEACAM6 combined with AIEC infection leads to abnormal intestinal permeability and disruption of intestinal epithelial barrier function, accompanied by translocation of AIEC across the intestinal epithelium and cytokine secretion.
Considering the importance of CEACAM6 for the interaction of AIEC with intestinal epithelial cells, we investigated the effect of NE on the expression of CEACAM6 in Caco-2 cells infected with AIEC. The results indicated that NE had no impact on the CEACAM6 level in control, uninfected cells. On the other hand, NE significantly increased the expression of CEACAM6 in AIEC-infected Caco-2 cells compared to untreated but infected cells. Barnich et al. [4] have demonstrated that AIEC strains indirectly upregulate CEACAM6 in epithelial cells via activation of the proinflammatory cytokine tumor necrosis factor-alpha (TNFα). Whether NE increases TNFα synthesis and secretion or modulates CEACAM6 expression via other mechanisms is under investigation in our laboratory. Under normal physiological conditions, NE, as an agonist of α- and β-adrenergic receptors, reduced TNFα expression in monocytes challenged with LPS, implying an anti-inflammatory role in bacterial infections [25]. The study of Schmidt et al. [26] confirmed the impact of NE as an agonist of adrenergic receptors on intestinal mucosal defense via rapid secretory IgA secretion in mucosal explants from the porcine distal colon. In turn, Spencer et al. [27] demonstrated that in S. Typhimurium, NE downregulated the expression of virulence genes that affect survival and long-term persistence in the host. Moreover, they also showed that NE increased the sensitivity of S. Typhimurium to the LL-37 antibacterial peptide. These data imply that catecholamine hormones may exert a dual effect on pathogenic bacteria, promoting their virulence and growth in the host on the one hand and increasing their sensitivity to the innate immune response on the other. Hence, the outcome of infection with intestinal pathogens may depend on the NE-amendable delicate balance between the innate immune response and bacterial virulence.
The study results have shown that NE, similarly to TNFα, increased CEACAM6 expression in intestinal epithelial cells infected with AIEC. Considering the role of CEACAM6 in many different cellular processes and the ability of NE to increase its level in AIEC-infected epithelial cells, the hormone may be a relevant factor in the pathomechanism of CD.
Conclusions
The results of the study indicated that NE increased the adhesion and invasion of AIEC to Caco-2 cells and modulated the fimA gene expression essential for the interaction of AIEC with intestinal epithelial cells. Notably, norepinephrine enhanced CEACAM6 expression in intestinal epithelial cells infected with AIEC, implying that the hormone can facilitate AIEC colonization of the intestinal epithelium by affecting the expression of cellular receptors.
"year": 2021,
"sha1": "b90f1492d07fe9c15e841ffc0da255d575ed1ef5",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21505594.2021.1882780?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7437000e23995675f900282af37fb967dbdec292",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Probing enzymatic activity – a radical approach†
Deubiquitinating enzymes (DUBs) are known to have numerous important interactions with the ubiquitin cascade and their dysregulation is associated with several diseases, including cancer and neurodegeneration. They are an important class of enzyme, and activity-based probes have been developed as an effective strategy to study them. Existing activity-based probes that target the active site of these enzymes work via nucleophilic mechanisms. We present the development of latent ubiquitin-based probes that target DUBs via a site-selective, photoinitiated radical mechanism. This approach differs from existing photocrosslinking probes as it requires a free active-site cysteine. In contrast to existing cysteine-reactive probes, control over the timing of the enzyme–probe reaction is possible as the alkene warhead is completely inert under ambient conditions, even upon probe binding. The probe's reactivity has been demonstrated against recombinant DUBs and to capture endogenous DUB activity in cell lysate. This allows more finely resolved investigations of DUBs.
Optimisation of labelling conditions
Supplementary figure 4: (A) Optimisation of initiator concentration. The probe was incubated with HEK293T lysate for 90 min before the radical initiator 2,2-dimethoxy-2-phenylacetophenone (DPAP) was added at varying concentrations along with the radical stabiliser 4′-methoxyacetophenone (MAP). Samples were degassed and exposed to UV light (365 nm) for 2 min. (B) Time under UV was investigated using DPAP and MAP (0.25 µM) to improve compatibility with LC-MS/MS. The initiators were added to the samples after a 90 min incubation and the samples were exposed to UV light for the time indicated.
Inputs and eluates from immunoprecipitation experiment
Supplementary figure 7: Three labelling reactions and immunoprecipitations were performed in parallel, in triplicate, using the terminal alkene probe 1, the phenyl-substituted probe 2, and lysate alone. Samples of the inputs and eluates from each immunoprecipitation were separated by SDS-PAGE and visualised by western blot.
Silver staining
Gels were treated with fixative (40% EtOH, 10% AcOH) at rt for 1 h or at 4 °C for 16 h. Gels were washed in 20% EtOH (2 x 10 min), then in dH2O (2 x 10 min). Gels were sensitised in aq. Na2S2O3 (0.02%) for 45 s and then immediately washed with dH2O (2 x 1 min). The gel was incubated in a solution of AgNO3 (12 mM) with formaldehyde (0.02%) at 4 °C for a minimum of 20 min and up to 2 h. Following this, the gel was washed in dH2O (2 x 30 s) and transferred to developer solution (3% K2CO3, 0.05% formaldehyde). Development was stopped using 5% AcOH.
Western Blotting
Proteins were transferred onto nitrocellulose membranes (GE Healthcare, Illinois USA) in blotting transfer buffer (25 mM Tris, 190 mM glycine, 20% MeOH) overnight at 15 V and 4 °C. The membrane was incubated in blocking solution (5% skimmed milk powder in PBST: 8 mM Na2HPO4, 150 mM NaCl, 2 mM KH2PO4, 3 mM KCl, 0.1% Tween 20, pH 7.4) for 1 h at rt or 16 h at 4 °C prior to immunoblotting. The primary mouse monoclonal anti-HA antibody (Biolegend, California USA) was diluted 1:2000 in blocking buffer and incubated with the membrane for 1 h at rt with gentle shaking. The membrane was washed with PBST (2 x 5 min) and PBS (2 x 5 min). The secondary antibody (Jackson ImmunoResearch, Cambridgeshire UK) was diluted in blocking buffer 1:4000, added to the membrane and incubated for 1 h at rt with gentle shaking. The membrane was washed with PBST (3 x 5 min), PBS (2 x 5 min) and dH2O (1 x 5 min). Pierce ECL western blotting substrate (Thermofisher, Massachusetts USA) was used to visualise the chemiluminescence.
Expression and purification of HA-Ub75-MeSNa
The expression and purification of HA-Ub75-MeSNa were carried out according to literature procedures. [5,6] BL21 (DE3) cells transformed with a pTYB2 plasmid encoding an HA-tagged ubiquitin75 fusion protein containing an intein domain and a chitin-binding domain (HA-Ub75-intein-CBD) were transferred from a glycerol stock into LB medium (8 mL) containing ampicillin (100 μg/mL) and grown for 18 h at 37 °C at 180 rpm. The cells were transferred into fresh LB medium (300 mL) containing ampicillin (100 μg/mL) and grown at 37 °C at 180 rpm until an OD600 of 0.6 to 0.9 was reached. IPTG was added at a final concentration of 0.4 mM and the bacteria were incubated at 18 °C for 16 h with vigorous shaking. The cells were centrifuged at 8000 rpm for 15 min. The resulting pellet was re-suspended in column buffer (20 mL; 50 mM HEPES pH 6.8, 100 mM NaOAc) and lysed via sonication. The lysate was centrifuged at 14000 rpm for 45 min. A column containing chitin resin (2.5 mL) (New England Biolabs) was equilibrated with column buffer (25 mL). The clarified supernatant was run over this column. The column was washed with column buffer (25 mL). After these washes, column buffer containing sodium 2-sulfanylethanesulfonate (MeSNa) (7.5 mL; 50 mM) was run through the column before incubation in this buffer for 18 h at 37 °C with gentle shaking. HA-Ub75-MeSNa was eluted in column buffer (5 mL) before concentration by spinning at 14,000 rpm in Vivaspin 500 centrifugal concentrators (Sartorius, Göttingen, Germany). HA-Ub75-MeSNa was desalted using a NAP-5 column (GE Healthcare, Illinois USA) and eluted in column buffer according to the manufacturer's instructions. The sample was concentrated again at 14,000 rpm using Vivaspin centrifugal concentrators and the protein concentration was measured on a nanodrop (4.8 mg/mL, 100 μL), affording HA-Ub75-MeSNa (S1).
Coupling HA-Ub75-MeSNa to the bromide warhead
HA-Ub75CH2CH2Br (S2) was synthesised using literature procedures. [6] 2-Bromoethylamine•HBr (31 mg, 0.15 mmol) was dissolved in column buffer (200 μL) and the pH of the solution was adjusted to pH 8.0 by the addition of aq. NaOH (1 M). HA-Ub75-MeSNa in column buffer (2.2 mg/mL, 100 μL) was added to this solution and it was shaken gently for 90 min at rt. The reaction mixture was desalted using a NAP-5 column according to the manufacturer's instructions, eluted in column buffer and concentrated by centrifuging at 14,000 rpm in a Vivaspin centrifugal concentrator. The protein concentration was measured on a nanodrop (1.5 mg/mL, 100 μL).
In vitro DUB labelling

HEK293T cell lysate preparation
A HEK293T cell pellet was lysed using glass beads. To a 100 μL cell pellet, 100 μL of glass beads were added. Homogenisation buffer (200 μL; 50 mM Tris pH 7.4, 5 mM MgCl2, 250 mM sucrose, 1 mM DTT or 1 mM TCEP) was added. The mixture was vortexed for 20 s before being placed on ice for 90 s. This sequence was repeated 20 times. Cell debris and glass beads were pelleted by centrifuging at 14,000 rpm for 5 min. The resulting supernatant was aspirated off. The protein concentration of the clarified extract was measured by nanodrop (19.9 mg/mL, 200 μL).
In vitro Ub75CH2CH2Br probe labelling
HA-Ub75-Br probe S2 (0.75 μL, 1.5 mg/mL in column buffer) was incubated with HEK293T cell lysate (2.5 μL, 19.9 mg/mL in homogenisation buffer). The final volume of the labelling was adjusted to 30 μL with homogenisation buffer for the lysate labelling. Incubation was carried out for 90 min at 37 °C with gentle shaking. Upon completion, 2X reducing sample buffer (15 μL) was added and the proteins were heated to 95 °C for 5 min. The samples were separated by 12% SDS-PAGE and visualised using silver staining or western blotting.
Optimised in vitro thiol-ene labelling with alkene probes
The relevant alkene probe (1–4 μg) was incubated with HEK293T cell lysate (2.5 μL, 19.9 mg/mL in homogenisation buffer) or OTUB1 (2 μg). The final volume of the labelling was adjusted to 30 μL with homogenisation buffer containing TCEP (1 mM) for the lysate labelling, or phosphate buffer (pH 8.0) containing TCEP (1 mM) for the recombinant enzyme labelling. The probes were pre-incubated with the DUBs for 90 min at 37 °C with gentle shaking before the addition of the radical initiator 2,2-dimethoxy-2-phenylacetophenone (DPAP) (0.25 µM) and the radical stabiliser 4′-methoxyacetophenone (MAP) (0.25 µM). The reaction mixture was degassed for 2 min with N2 and exposed to UV light (365 nm) for 2 min. 2X reducing sample buffer (30 μL) was added and the samples were heated at 95 °C for 5 min. Proteins were visualised using silver staining and western blotting after being separated by 12% SDS-PAGE.
In vitro thiol-ene labelling with alkene probes and denatured OTUB1
OTUB1 (2 μg) was denatured either by heating at 95 °C for 10 min or by the addition of SDS (0.5% final concentration). The final volume of the labelling was adjusted to 30 μL with phosphate buffer (pH 8.0) containing TCEP (1 mM); in this step the SDS concentration was reduced fifteen-fold. The probes were pre-incubated with the DUBs for 90 min at 37 °C with gentle shaking before the addition of the radical initiator DPAP (0.25 µM) and the radical stabiliser MAP (0.25 µM). The reaction mixture was degassed for 2 min with N2 and exposed to UV light (365 nm) for 2 min. 2X reducing sample buffer (30 μL) was added and the samples were heated at 95 °C for 5 min. Proteins were visualised using silver staining and western blotting after being separated by 12% SDS-PAGE.
PR-619 pre-incubation assay
PR-619 was pre-incubated with HEK293T cell lysate (2.5 µL, 19.9 mg/mL) on ice for 30 min at a range of concentrations. Probe 1 (0.3 µL, 3.4 mg/mL in column buffer) was added giving the labelling a final volume of 30 µL. The reaction mixture was incubated for a further 90 min before addition of DPAP (0.25 µM) and MAP (0.25 µM) and degassing for 2 min with N2. The mixture was exposed to UV light (365 nm) for 2 min. 2X reducing sample buffer (30 μL) was added and the samples were heated at 95 °C for 5 min.
PR-619 equilibrium disruption assay
Probe 1 (0.3 µL, 3.4 mg/mL in column buffer) or HA-Ub75-Br probe S2 (0.75 μL, 1.5 mg/mL in column buffer) was incubated with HEK293T cell lysate (2.5 µL, 19.9 mg/mL) at 37 °C for 60 min. PR-619 was added at a range of concentrations and the mixture was incubated for a further 30 min at 37 °C. DPAP (0.25 µM) and MAP (0.25 µM) were added and the mixture was degassed for 2 min with N2. The mixture was exposed to UV light (365 nm) for 2 min. 2X reducing sample buffer (30 μL) was added and the samples were heated at 95 °C for 5 min.
Immunoprecipitation (IP)
The relevant alkene probe (5 μg) was pre-incubated with HEK293T cell lysate (12.5 μL, 19.9 mg/mL in homogenisation buffer) in NET buffer (136 μL; 50 mM Tris pH 7.5, 5 mM EDTA, 150 mM NaCl, 0.5% NP-40) containing TCEP (1 mM) for 90 min at 37 °C. DPAP (0.25 µM) and MAP (0.25 µM) were added and the solution was degassed with N2 for 2 min. The solution was exposed to UV light (365 nm) for 2 min. SDS solution (10% in dH2O, 7.5 μL) was added to the reaction before vortexing for 30 s and sonication for 2 min. The mixture was diluted with homogenisation buffer (1500 μL). EZview™ Red Anti-HA Affinity Gel (100 µL of 50% slurry) was equilibrated by adding NET buffer (750 µL), gently inverting and centrifuging at 9000 rpm. The supernatant was aspirated, and the equilibration step was repeated. The lysate was added to the equilibrated beads and incubated at 4 °C for 90 min with rolling. The mixture was centrifuged at 9000 rpm for 30 s and the supernatant was aspirated. NET buffer (750 μL) was added to the beads, which were inverted until fully resuspended before being centrifuged at 9000 rpm for 30 s. This washing step was repeated four times. After the final wash, glycine buffer (250 µL, 150 mM, pH 2.5) was added to the beads. The solution was inverted until the beads were resuspended and then left on ice for 1 min. The solution was centrifuged at 9000 rpm for 30 s. The resulting supernatant was aspirated, and this elution step was repeated. 1X reducing sample buffer (250 µL) was added to the beads, which were heated at 95 °C for 5 min. A small percentage of each sample was separated by 12% SDS-PAGE and visualised by western blotting. The remainder of each sample was subjected to tryptic digestion using the FASP protocol, desalted by zip-tipping and analysed by LC-MS/MS using an Orbitrap.
CHCl3/MeOH extraction
Probe samples were concentrated using a CHCl3/MeOH extraction prior to an in-solution digest to identify the C-terminal peptide. MeOH (600 μL) and CHCl3 (150 μL) were added to a sample of protein (200 μL) and the solution was vortexed for 20 s. dH2O (450 μL) was added and the sample was vortexed for a further 20 s. The sample was centrifuged at 14,000 rpm for 2 min. The upper layer was aspirated off and discarded. The sample was diluted with MeOH (450 μL), vortexed for 20 s and centrifuged at 14,000 rpm for 1 min. The supernatant was aspirated and discarded. The pellet was prepared for an in-solution digestion.
In-solution digest following CHCl3/MeOH extraction
The protein pellet obtained using a CHCl3/MeOH extraction was dissolved in urea buffer (50 μL; 6 M urea, 33 mM Tris pH 7.8) by vortexing for 20 s and sonicating for 2 min. The sample was diluted with dH2O (250 μL), vortexed for 20 s and sonicated for a further 2 min. Elastase was added at a 1:15 ratio relative to the protein concentration. The digest was carried out at 37 °C with gentle shaking for 16 h. Samples were prepared for MS analysis by zip-tipping and analysed by captive spray ionisation mass spectrometry.
In-gel digest
Samples were separated by SDS-PAGE and visualised by silver staining. Bands of interest were excised, cut into small pieces and incubated for 18 h in wash solution (200 μL; 50% MeOH, 45% dH2O, 5% formic acid). The wash solution was aspirated, and fresh wash solution was added. The samples were incubated for a further 2 h at rt. The wash solution was removed, and the gel pieces were dehydrated for 5 min in MeCN (2 x 200 μL). DTT buffer (30 μL; 100 mM NH4HCO3, 10 mM DTT) was added to the gel pieces and they were incubated for 30 min at rt. The DTT buffer was removed and iodoacetamide solution (30 μL, 50 mM) was added. The samples were incubated for a further 30 min. After removal of the iodoacetamide solution the gel pieces were dehydrated for 5 min in MeCN (200 μL). Rehydration was performed in NH4HCO3 solution (200 μL, 100 mM). The dehydration and rehydration steps were repeated. Trypsin stock was diluted in NH4HCO3 solution and this 1X stock (30 μL, 20 μg/mL) was added to the dehydrated gel pieces. The solution was incubated on ice for 10 min with gentle mixing. Following this incubation step, NH4HCO3 solution (5 μL, 50 mM) was added to the mixture and it was incubated for 18 h at 37 °C with gentle shaking. After this incubation, NH4HCO3 solution (50 μL, 50 mM) was added. The gel pieces were incubated in this mixture for 10 min with occasional vortexing. The supernatant was transferred to a fresh microcentrifuge tube. Extraction buffer 1 (50 μL; 50% MeCN, 45% dH2O, 5% formic acid) was added to the gel pieces. The pieces were incubated for 10 min in this buffer with occasional vortexing. The supernatant was then added to the collection tube and the gel pieces were incubated for a further 10 min in extraction buffer 2 (85% MeCN, 10% dH2O, 5% formic acid) with periodic vortexing. The supernatant was again added to the collection tube. For larger protein bands an additional extraction with extraction buffer 2 was performed. The combined supernatants were dried in a vacuum centrifuge, resuspended in buffer A (20 μL; 98% dH2O, 2% MeCN, 0.1% formic acid) and analysed by captive spray ionisation mass spectrometry.
Filter Aided Sample Preparation (FASP)
FASP [7] was carried out using Vivaspin 500 centrifugal concentrators (10,000 MWCO). The concentrator was conditioned with 50 µL dH2O and spun at 11800 rpm for 2 min. The protein solution to be digested was transferred to the filter and centrifuged at 13000 rpm until concentrated to a maximum of 25 µL. UA buffer (200 µL, 8 M urea in 0.1 M Tris/HCl pH 8.5) was added to the filter and centrifuged at 11800 rpm for 15 min. This step was repeated twice. DTT solution (100 µL, 10 mM in UA buffer) was added to the concentrator and vortexed for 5 s. It was centrifuged at 11800 rpm for 15 min. IAA solution (100 µL, 50 mM) was added and the solution was vortexed for 5 s and then centrifuged again at 11800 rpm for 15 min. Washes were performed using UA buffer (3 x 100 µL) followed by NH4HCO3 solution (3 x 100 µL, 50 mM). After the final wash, trypsin solution (200 µL, 50 mM NH4HCO3 solution, 1:50 enzyme:protein) was added and the concentrator was incubated overnight at 37 °C. The concentrator was centrifuged at 11800 rpm for 15 min. NaCl solution (50 µL, 0.5 M) was added and it was centrifuged at 11800 rpm until all of the solution had passed through the filter. Samples were desalted by zip-tipping and analysed by LC-MS/MS.
Zip-tip purification
A zip-tip (Merck Millipore, Massachusetts USA) was equilibrated by aspirating and dispensing buffer B (for peptides: 80% MeCN, 20% H2O, 0.1% TFA; for full proteins: 65% MeCN, 35% H2O, 0.1% TFA) four times and a further four times with buffer A (2% MeCN, 98% H2O, 0.1% TFA). The protein sample was aspirated across the tip ten times. Buffer A was used to wash the sample by aspirating and dispensing four times. The protein or peptides were then eluted in buffer B (2 x 10 μL) and dried using a vacuum centrifuge. The sample was analysed by MALDI or LC-MS/MS.
MALDI-TOF MS
MALDI-TOF analysis was carried out on a Bruker Ultraflextreme MALDI-TOF/TOF mass spectrometer. The matrix used was a saturated solution of HCCA (α-cyano-4-hydroxycinnamic acid) in TA 85% (85% MeCN with 0.1% TFA), and the calibrant was prepared in the same matrix. The matrix (1 µL) was mixed with the sample (1 µL) and 1 µL of this mixture was deposited onto a ground steel MALDI target plate and allowed to dry in air. Mass spectra were recorded in positive reflection mode.
Orbitrap mass spectrometry
Protein digests were redissolved in 0.1% TFA (30 µL per sample) by agitation (1200 rpm, 15 min) and sonication in an ultrasonic water bath (10 min). This was followed by centrifugation (14,000 rpm, 5 °C, 10 min) and transfer to MS sample vials. LC-MS/MS analysis was carried out in technical duplicates (4.0 µL per injection) and separation was performed using an Ultimate 3000 RSLC nano liquid chromatography system (Thermo Scientific) coupled to an Orbitrap Velos mass spectrometer (Thermo Scientific) via an Easy-Spray nano-electrospray source (Thermo Scientific). Samples were injected and loaded onto a trap column (Acclaim PepMap 100 C18, 100 μm × 2 cm) for desalting and concentration at 8 μL/min in 2% acetonitrile, 0.1% TFA. Peptides were then eluted on-line to an analytical column (Acclaim PepMap RSLC C18, 75 μm × 50 cm) at a flow rate of 250 nL/min. Peptides were separated using a 120 min gradient, 4-25% buffer B over 90 min followed by 25-45% buffer B over another 30 min (buffer A: 5% DMSO, 0.1% FA; buffer B: 75% acetonitrile, 5% DMSO, 0.1% FA), with subsequent column conditioning and equilibration. Eluted peptides were analysed by the mass spectrometer operating in positive polarity using a data-dependent acquisition mode. Ions for fragmentation were determined from an initial MS1 survey scan at 30,000 resolution, followed by CID (collision-induced dissociation) of the top 10 most abundant ions. MS1 and MS2 scan AGC targets were set to 1 × 10⁶ and 3 × 10⁴ for maximum injection times of 500 ms and 100 ms respectively. A survey scan m/z range of 350-1500 was used, with normalised collision energy set to 35%, charge state screening enabled with the +1 charge state rejected, and a minimal fragmentation trigger signal threshold of 500 counts. Data were processed using the MaxQuant [8] software platform (v1.6.7.0), with database searches carried out by the in-built Andromeda search engine against the SwissProt H. sapiens database (version 20180104, number of entries: 20,244). A reverse decoy database approach was used at a 1% false discovery rate (FDR) for peptide spectrum matches. Search parameters included: maximum missed cleavages set to 3, fixed modification of cysteine carbamidomethylation and variable modifications of methionine oxidation, asparagine deamidation and protein N-terminal acetylation. Label-free quantification was enabled with an LFQ minimum ratio count of 1.
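As an illustration of the target-decoy filtering mentioned above, the sketch below estimates FDR as the ratio of decoy to target hits down a score-sorted list of peptide spectrum matches and returns the score cutoff for a 1% FDR; the function and all scores are hypothetical illustrations, not MaxQuant internals.

```python
def fdr_score_cutoff(psms, max_fdr=0.01):
    """psms: iterable of (score, is_decoy). Returns the lowest score whose
    cumulative decoy/target ratio still satisfies the FDR bound."""
    targets = decoys = 0
    cutoff = None
    for score, is_decoy in sorted(psms, key=lambda m: m[0], reverse=True):
        decoys += is_decoy           # decoy hits estimate false positives
        targets += not is_decoy
        if targets and decoys / targets <= max_fdr:
            cutoff = score           # largest set still within the bound
    return cutoff

# Hypothetical matches: (search-engine score, matched a reversed sequence?)
psms = [(98.2, False), (95.0, False), (91.3, True), (88.7, False)]
print(fdr_score_cutoff(psms))        # 95.0; keep matches scoring >= this
```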
Captive spray ionisation mass spectrometry
Captive spray ionisation was performed using a Thermo Scientific UltiMate 3000 RSLCnano LC (Waltham, MA USA) equipped with an Acclaim PepMap C18 (2 µm, 0.075 mm x 150 mm) column. For each injection, 5 µL of digested peptide (1 µg/µL) was loaded onto a Nano Trap Column (100 µm I.D. x 2 cm, packed with Acclaim PepMap100 C18) at 10 µL/min with 95% water/5% acetonitrile/0.1% formic acid for 3 min. Trapped peptides were eluted onto the analytical column using a multi-step gradient with a flow rate of 0.3 µL/min. The gradient utilised two mobile phase solutions: A, water/0.1% formic acid and B, acetonitrile: 0 min, A (98%), B (2%); 3 min, A (98%), B (2%); 63 min, A (65%), B (35%); 64 min, A (5%), B (95%); 66 min, A (5%), B (95%); 67 min, A (98%), B (2%); 75 min, A (98%), B (2%). Peptide digests were analysed on a Bruker compact Qq-TOF mass spectrometer via a CaptiveSpray nanoBooster source (Bremen, Germany). Precursor ions were scanned from 150 m/z to 2200 m/z at 2 Hz with a cycle time of 3.0 seconds, with fixed windows excluded (20-350, 1221-1225, 2200-40000). Smart Exclusion was used to ensure only chromatographic peaks were selected as precursors, and Active Exclusion enabled less-abundant ions to be analysed rather than excluded from precursor selection. Data acquired on the Bruker compact was converted to mzXML format and searched against a custom database containing the probe sequence inserted into a UniProt database with taxonomy restricted to human, using PeptideShaker. [9]

General chemical methods

1H and 13C NMR spectra were recorded on Bruker 400 MHz or 600 MHz spectrometers. Spectra were recorded in DMSO-d6 or CDCl3 and referenced to residual DMSO (δ = 2.50 ppm) or CHCl3 (δ = 7.26 ppm). Chemical shifts are reported in parts per million (ppm); coupling constants are reported in Hertz (Hz) and are accurate to 0.2 Hz. NMR spectra were assigned using HSQC and HMBC experiments. Mass spectrometry measurements were carried out on a Bruker ESI or APCI HRMS instrument. Melting points were measured using a Griffin melting point apparatus and are uncorrected. Infrared (IR) spectra were obtained on a Perkin Elmer spectrophotometer. Flash column chromatography was carried out using silica gel, particle size 0.04-0.063 mm. TLC analysis was performed on precoated 60F254 slides and visualised by UV irradiation, potassium permanganate stain (3 g KMnO4, 20 g K2CO3, 300 mL dH2O) or ninhydrin stain (1.5 g ninhydrin, 5 mL AcOH, 500 mL 95% EtOH). All solvents were obtained from commercial sources and used as received. Petroleum ether refers to the fraction of petroleum ether that boils at 40-60 °C.
Synthesis of (E)-1-phenyl-3-phthalimido-2-propene
Cinnamyl bromide (500 mg, 2.54 mmol) and potassium phthalimide (729 mg, 3.94 mmol) were dissolved in dry DMF (10 mL) under argon. The reaction mixture was stirred at rt for 3 h. TLC analysis (petroleum ether) showed complete consumption of cinnamyl bromide (Rf = 0.6) and formation of the product (Rf = 0.1) after this time. The solution was diluted with Et2O (40 mL) and brine (30 mL), and the white precipitate formed was collected by vacuum filtration. The aqueous layer was extracted with Et2O (2 x 30 mL). The combined organic layers were dried over MgSO4, filtered and concentrated to afford the crude product as a yellow solid. This was combined with the precipitated product and recrystallised from toluene to afford the product S3 as colourless crystals (411 mg, 62%); mp 152–154 °C (toluene); lit. [10] 154–155 °C. The spectroscopic data are in agreement with those reported in the literature. [11]
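As a quick arithmetic check on the reported yield, the sketch below recomputes it from the isolated mass; the molecular weight of the product (C17H13NO2, ≈263.3 g/mol) is supplied by us, not stated in the source.

```python
mw_product = 263.3    # (E)-1-phenyl-3-phthalimido-2-propene, g/mol (assumed)
limiting_mmol = 2.54  # cinnamyl bromide (500 mg), from the text
isolated_mg = 411     # recrystallised product S3

yield_percent = (isolated_mg / mw_product) / limiting_mmol * 100
print(f"yield ≈ {yield_percent:.1f}%")  # ≈ 61.5%, matching the reported 62%
```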
Synthesis of (E)-3-phenyl-prop-2-en-1-amine
(E)-1-phenyl-3-phthalimido-2-propene S3 (700 mg, 2.66 mmol) was dissolved in MeOH (12 mL). Hydrazine hydrate solution (80%, 150 μL, 2.95 mmol) was added dropwise and the reaction was stirred at rt for 2 h. TLC analysis after this time showed complete consumption of the starting material (petroleum ether-EtOAc, 3:1; Rf = 0.8) and formation of the product S4 (H2O-IPA-EtOAc, 1:2:2; Rf = 0.2). The reaction was cooled to 4 °C, resulting in the formation of a white precipitate, which was isolated by vacuum filtration and washed with MeOH (3 x 10 mL). The filtrate was concentrated under reduced pressure and the residue was dissolved in DCM (20 mL) and aq. KOH (20 mL). The aqueous layer was extracted with DCM (3 x 20 mL) and the combined organic layers were concentrated to afford the product S4 as a yellow oil (228 mg, 65%). The spectroscopic data are in agreement with those reported in the literature. [12]
"year": 2020,
"sha1": "389ebf7b1ac406eb89ada70af10a197ee0c81b85",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/sc/c9sc05258e",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "557aae500a344aca97fd30009d72b8b246c88819",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Younger and Older Adults’ Cognitive and Physical Functioning in a Virtual Reality Age Manipulation
Objectives: Age group stereotypes (AGS), especially those targeting old age, affect an individual’s behavior and long-term cognitive and physiological functioning. Conventional paradigms investigating the related mechanisms lack validity and stability. Our novel approach for the activation of self-relevant AGS uses a virtual reality (VR) ageing experience, measuring relevant effects on performance parameters. Methods: In a between-subjects experimental design, young participants embodied either a younger or an older avatar in a 3D virtual environment to capture the effects on physical (Study 1; N = 68) and cognitive performance (Study 2; N = 45). In Study 3 (N = 117), the paradigm was applied to older participants. Results: For the younger participants, embodying older avatars was associated with declines in memory and physical performance when compared to the younger avatar age group. Furthermore, the manipulations’ main effects were moderated by negative explicit AGS that matched the respective performance domains. For the older participants, we found no significant performance differences in the two domains investigated. Discussion: The experimental manipulation demonstrated an impact on relevant performance parameters on a motivational and strategic level, especially for strong performance-related AGS, but for young participants only. Possible reasons and mechanisms for the differences between the younger and older samples’ results are discussed.
INTRODUCTION
Age group stereotypes (AGS) are widespread (Hummert et al., 1994), and they are mostly negative when concerning old age (Kotter-Grühn and Hess, 2012). The negative content of these ultimately self-relevant (Levy et al., 2012) stereotypes can have stark consequences for one's own aging process, particularly in the physical and cognitive domains. In the former, Levy et al. (2002) observed a decline in life expectancy of 50% in people with a negative view on aging. More specifically, holding negative AGS correlates with higher multimorbidity rates (Wurm et al., 2007), poorer cardiovascular health, and lower physical activity levels (Wurm et al., 2010). Research has also shown associations with sensory (Levy et al., 2006) and motor functioning (Robertson and Kenny, 2016). Meanwhile, in the cognitive domain, early studies by Levy and Langer (1994) showed a correlation between AGS and memory performance, and longitudinal associations have been found with working memory (Levy et al., 2012) and general cognitive ability (Uotinen et al., 2005). Furthermore, AGS seem to moderate the association between physical frailty and cognitive functioning (Robertson and Kenny, 2016).
Three mechanisms appear responsible for the association between self-relevant stereotypes and functioning. First, stereotypes might become self-fulfilling prophecies: longitudinal studies show that negative expectations of aging are associated with poorer health behaviors, such as consulting physicians less often, drinking more alcohol, or exercising irregularly (Levy and Myers, 2004). A second explanation is based on findings that people with more negative self-relevant AGS show higher cardiovascular reactivity to stressful stimuli and thus carry a higher cardiovascular morbidity risk (Levy et al., 2000). However, this mechanism alone cannot explain the range of effects found. Finally, stereotype threat, which undermines performance in tasks related to the stereotypes addressed (Steele and Aronson, 1995), has been discussed as a possible mediating process, and this notion was confirmed in a recent meta-analysis by Lamont et al. (2015). The awareness of self-relevant stereotypes posed a stronger threat than seeing information about the actual probabilities of fulfilling them. While we still seek direct methods for reducing the consequences of negative AGS (e.g., Wolff et al., 2014), their presence can be manipulated and intensified in the controlled setting of laboratory experiments.
Conventional Experimental Activation of Age Stereotypes
Evidence from studies observing a correlation between negative AGS and subsequent declines in performance cannot be considered causal evidence of the phenomenon or of its underlying mechanisms. Experimental studies allowing the deliberate manipulation of stereotypes are therefore needed. For this purpose, several experimental paradigms have been developed to explore competing explanations.
An early approach to activating positive or negative self-relevant AGS used a verbal priming technique, where age-related adjectives (e.g., "confused") were briefly flashed on a screen, resulting in cognitive activation of the semantic networks related to the concept. Negative AGS priming was shown to cause temporary declines in walking speed (Bargh et al., 1996) and memory performance (Stein et al., 2002); however, these experiments caused controversy regarding their replicability (Rivers and Sherman, 2018), and the approach does not give participants the perspective of an older person; thus, the activation of self-relevant AGS lacks external validity. Haslam et al. (2012) used two parallel manipulations with age-related self-categorizations by assigning middle-aged participants to either "old" or "young" groups (manipulating self-relevant affiliation with the stereotyped group) and by shaping deficit expectations with newspaper articles on either age-related memory decline or other age-related problems (manipulating the stereotype content). The study found that when participants were labelled "old," they performed worse on memory tests, particularly when they had been primed with a newspaper article on memory decline. These results support the link between stereotype content and behavioral outcomes. However, the effects were short-lived, and the introduction of a social comparison group possibly diluted the self-relevant nature of the stereotype activation, as it added a competitive component.
A more lifelike approach was used by Varkey et al. (2006), where medical students partaking in the ageing game were asked to perform simple everyday tasks while wearing technical appliances to reduce their general abilities. With this artificial simulation of living in a frail body, Varkey et al. were able to significantly improve the participants' empathy and attitudes toward older adult patients. This manipulation offered the continuous embodiment of a physical and perceptual condition commonly associated with older age, even though the design was non-experimental and only allowed an indirect activation of self-related AGS. Still, it demonstrates how complex embodiment manipulations create effective multi-level experiences, leading to promising results. Eibach et al. (2010) chose an experimental approach with a more complex age-related phenomenology and stronger ecological validity. The manipulation of vision decline and the induction of a generation-gap experience led to the alteration of subjective age in an adult sample. Furthermore, the successful manipulation of subjective age moderated the influence of AGS on the subjects' self-evaluation. In a comparable approach, Stephan et al. (2013) presented participants with manipulated performance feedback for a handgrip strength task. Participants in the experimental condition were given positive feedback regarding their performance in comparison to same-aged peers, leading to a reduction in subjective age and an increase in handgrip strength scores in a second measurement. Thus, an experimental manipulation of the self-relevance of AGS is indeed possible by inducing stereotypically old-age-related experiences on a perceptual or social level.
Even more complex and visually realistic manipulations are possible whenever stimulus content is created and presented in gaming scenarios in which participants control an avatar whose appearance can be modified. The resulting Proteus effect (Yee et al., 2009; named after the shape-changing sea god in Greek mythology) is based on high standardization and multi-sensory stimulus manipulation, implying that self-relevant stereotypes can be activated by visual identity cues, affecting people's behavior. Earlier studies successfully manipulated social behavior and aggression by increasing body height (Peña et al., 2009) and influenced negotiation interactions by changing the avatar's attractiveness (Yee et al., 2009). Unfortunately, most studies used a desktop setting, which cannot offer an intuitive immersive experience or multi-level sensory stimulation, and results can only be partially generalized to self-relevant stereotypes.
While the above-mentioned longitudinal relationships offer a view of possible health-related outcome variables, many short-term experimental findings are more difficult to transfer beyond the laboratory and fail to show lasting effects and external validity; hence, proving causality remains complicated. Combining a realistic aging experience with a small-step experimental variation that has a continuous impact on perceptions or behaviors therefore seems the necessary next step to push the limits of our understanding of this psychological mechanism and, furthermore, to develop interventional techniques for the known medical and social implications of strong self-relevant negative AGS.
Using Virtual Reality to Activate Age Group Stereotypes
Together with experts in the field, such as Slater (2018), Rizzo et al. (2019), or Tuena et al. (2020), we argue that virtual reality (VR) technology enables a realistic immersive experience and hence allows the combination of a high level of standardization with the presentation of true-to-life environments, thus maximizing both internal and external (or ecological) validity. In particular, users of VR technology can interact with their surroundings more naturally (at least more so than in an interaction with a computer mouse in front of a computer screen), and researchers can manipulate the characteristics of these surroundings in a highly flexible and cost-effective way (at least more flexibly and cost-effectively than setting up, furnishing, or driving to a desired setting). Numerous clinical applications (e.g., Brown et al., 2020) have demonstrated the benefits of an effective immersive experience when simulating general counselling settings (Slater et al., 2019), as well as of applying established therapy approaches to specific conditions, such as obesity and diabetes (Rizzo et al., 2011), eating disorders (Riva et al., 2021), and post-traumatic stress disorder (Rizzo and Shilling, 2017). Earlier studies successfully applied VR to nonclinical topics with regard to interactional or cognitive parameters. Seeing oneself as Sigmund Freud resulted in improved interpersonal problem-solving skills (Osimo et al., 2015), while the virtual embodiment of Albert Einstein improved cognitive performance when compared to a control group (Banakou et al., 2018). Hence, VR scenarios could give participants a realistic aging experience when embodying an older avatar by activating self-relevant AGS in a standardized environment. One implementation of this approach by Reinhard et al. (2020) demonstrated a replication of priming studies using a VR embodiment technique instead of verbal priming; young participants who first embodied an older avatar in a VR scenario showed a decreased walking speed. Furthermore, the approach offers an actual shift in perspective toward an older self, as demonstrated by Sims et al. (2020), who showed that age-progressed images influenced young participants' attitudes toward financial security topics in later life.
Aside from the various applications of VR, some technical details seem to alter the interventional value of the approaches. Slater (2018) argues that VR applications allowing detailed head tracking and the ability to look around at objects provide a higher degree of realism, making it more likely for participants to experience the simulation as if it were really happening. Daher et al. (2017) showed that this even applies to interacting with a virtual agent instead of a real person. If certain technical criteria are met, a strong body ownership illusion, one possible mechanism behind VR effects, is ensured. The body ownership illusion theory (e.g., Kilteni et al., 2015) focuses on the plasticity of the processing of perceptual stimuli. Based on the classic rubber-hand illusion findings (Botvinick and Cohen, 1998), it is argued that VR creates an intersensory bias to perceive the displayed body parts as one's own (Banakou et al., 2018). The combination of tactile, visual, or auditory stimuli must be coordinated well enough, together with the experience of actually controlling the virtual avatar by moving its virtual head and arms around. In this way, a realistic body ownership experience is provided, directly and intuitively activating the expectations and stereotypes connected to the virtual avatar. In contrast, the Proteus effect suggests that people in general (Frank and Gilovich, 1988) and computer gamers in particular (Yee et al., 2009) adjust their behavior to their own expectations and stereotypes related to the visual identity cues of their digital avatar. Here, the activation of self-relevant stereotypes takes place indirectly, from perceiving one's own appearance, and thus depends on appropriate perception and semantic interpretation of visual cues.
PRESENT STUDIES
Our studies offered realistic embodiment experiences with virtual avatars of different ages. We created a detailed virtual copy of the laboratory (see Figure 1) that allowed visual, spatial, and tactile orientation. To intensify the body ownership illusion, subjects saw their virtual body in a mirror and obtained direct visual feedback of their movements (see Figure 1). A set of four avatars (younger woman, older woman, younger man, and older man) was obtained (see http://www.renderpeople.com/) for the participants' virtual appearance (Figure 2), as implementing individually character-created avatars exceeded the project's technical and economic limits. Participants saw their virtual same-gender avatar as either younger or older. For the different participant age groups, the experimental groups consisted of age-incongruent avatars and the respective control groups of age-congruent avatars. Thus, participants belonged to either the younger avatar group (YA) or the older avatar group (OA). For readability purposes throughout the manuscript, the YA and OA group abbreviations are combined with an index letter indicating the subjects' chronological age group, resulting in the OAy (experimental group) and YAy (control group) identifiers for Studies 1 and 2, representing older and younger age avatars embodied by young participants; the group identifiers follow the same setup for the older participants in Study 3, with the YAo (experimental group) and OAo (control group) identifiers.
We conducted three independent studies with similar procedures but different age samples and different dependent variables. Study 1 included physical performance measures and Study 2 cognitive assessments. Both Studies 1 and 2 were conducted with younger participants. Study 3 included an older sample that was assessed on both physical and cognitive performance.
The paradigm combined the realism of a field environment with the standardization and controllability known, for instance, from experimental lab research on the Proteus effect. Our goal was to gain fundamental knowledge of the valid, immersive, and self-relevant activation of AGS in a VR setting, to capture the increase or decrease in relevant performance parameters, and thus to enable future development of interventional techniques against the abovementioned negative effects on health.
Data assessment for the studies was conducted in a time window of several weeks, which was limited by laboratory capacities, the availability of assistance staff, and unforeseen lockdown restrictions due to the COVID-19 pandemic. This particularly affected recruitment for Studies 1 and 2. Within these boundaries, we aimed to reach the determined minimum sample size while making maximum use of the resources allocated to the project.
Hypotheses
Based on previous findings and the characteristics of our design and methods, we present three hypotheses concerning the effects of the age avatar embodiments on the subsamples' performance scores and the associated moderating parameters.
Hypothesis 1
Studies 1 and 2 were expected to provide evidence of immersive virtual age embodiment and thus of the activation of self-reflexive AGS. For the OAy group, in contrast to the YAy group, we hypothesized that our experimental manipulation would result in the cognitive and physical performance decline commonly associated with old age.
Hypothesis 2
After providing evidence of this "virtual aging" effect in a young sample, insights from Study 3 served to provide cross-validation evidence of the reverse direction of our manipulation: a virtual "youth fountain" that makes older people feel and act younger. Even if our virtual old-age avatar embodiment were able to activate relevant self-reflexive expectations of the younger adults' own aging process, the technique would not necessarily be easily reversible. Providing evidence of successful manipulation in both directions would prove our intervention equally applicable to both age groups. Thus, we hypothesized a performance increase for the older sample in the YAo condition compared to the OAo condition.
Hypothesis 3
The design controlled for important covariates and allowed exploration of the possible moderating effects of negative AGS in the experimental group based on previous findings (Kornadt et al., 2016). Because this moderating effect seems particularly strong when the stereotype content is matched with the domain investigated (Levy and Leifheit-Limson, 2009), we considered both domain-general and domain-matched AGS in the analyses. We hypothesized an interaction effect of domain-general and domain-matched explicit AGS with our experimental intervention on the relevant performance parameters. Participants within the experimental age-incongruent subgroups of all studies were expected to show a stronger age embodiment effect when they carried particularly negative stereotypes of old age and particularly positive ones of young age. Hence, applying a successful VR age avatar embodiment to an older sample and comparing the quantity and quality of results was expected to give insight into the relevant mechanisms responsible for the effect within the subsamples and thus to provide evidence of the applicability of our manipulation to older samples concerning possible positive effects on short- and long-term correlates of the aging process.
Power Analysis
A priori power analysis using G*Power (Faul et al., 2007) was conducted to determine the necessary sample size. Parameters for the calculations were obtained from the experimental design of the study by Reinhard et al. (2020), which was closest to the current methodological approach and in which a comparison of two avatar groups' walking speed scores before and after avatar embodiment led to a Cohen's f of 0.43. The power analysis assumed a repeated-measures analysis of variance (ANOVA) with three measurements in total and two groups to compare, an error rate of α = 0.05, a power level of 1 − β = 0.95, and a correlation of r = 0.50 between measures; it resulted in a minimum total sample size of N = 50.
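The reported constellation can be checked analytically against the noncentral F distribution. The sketch below is a minimal illustration, not the authors' actual computation; it assumes one of G*Power's conventions for repeated measures (the "between factors" noncentrality λ = f²·N·m/(1 + (m − 1)ρ), which is an assumption, since the text does not state which G*Power option was used) and recovers a minimum total N of about 50 for the stated inputs:

```python
from scipy.stats import f as f_dist, ncf

def rm_anova_power(n_total, f_eff=0.43, k=2, m=3, rho=0.50, alpha=0.05):
    # Noncentrality for the between-groups effect of a k-group, m-measure
    # repeated-measures ANOVA (G*Power "between factors" convention;
    # assumed here, not stated in the paper)
    lam = f_eff ** 2 * n_total * m / (1 + (m - 1) * rho)
    df1, df2 = k - 1, n_total - k
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

n = 4
while rm_anova_power(n) < 0.95:
    n += 2  # keep the two groups balanced
print(n, round(rm_anova_power(n), 3))  # approximately 50 under these assumptions
```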
Data Analysis
To account for adequate randomization effects concerning possible parameters influencing the main results, the data analysis of each study started with a t-test comparison of the two experimental conditions on relevant baseline parameters: immersion intensity, the performance domain-related baseline measure (physical fitness or memory self-efficacy), positive and negative affect, and implicit and explicit AGS. The main analysis for each study was carried out using ANOVA and repeated-measures ANOVA to compare the experimental conditions (with and without age manipulation in VR) concerning their physical and cognitive performance, respectively. The statistical assumptions for the inferential tests were checked prior to the analysis and fulfilled the necessary criteria in accordance with the relevant literature (Kaur and Kumar, 2015).
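For illustration only, the described pipeline could be expressed in Python roughly as follows; the original analyses were not necessarily run this way, and the file name and column names (study1_long.csv, id, group, time, grip, and the baseline covariates) are hypothetical placeholders:

```python
import pandas as pd
import pingouin as pg
from scipy.stats import ttest_ind

# Long-format data assumed: one row per participant x measurement point
df = pd.read_csv("study1_long.csv")  # columns: id, group, time, grip, ...

# 1) Randomization check: baseline covariates compared between conditions
base = df.drop_duplicates("id")  # covariates assumed constant per subject
for cov in ["fitness", "pos_affect", "neg_affect", "implicit_ags", "explicit_ags"]:
    t, p = ttest_ind(base.loc[base.group == "OA", cov],
                     base.loc[base.group == "YA", cov])
    print(f"{cov}: t = {t:.2f}, p = {p:.3f}")

# 2) Mixed (repeated-measures) ANOVA: avatar condition x measurement repetition
aov = pg.mixed_anova(data=df, dv="grip", within="time",
                     subject="id", between="group")
print(aov.round(3))
```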
To provide evidence of the robustness of our results against alternative explanations and possible undisclosed associations, and to provide evidence of the validity of our approach, an ANCOVA analysis was carried out including the aforementioned moderators and controls as covariates and examining 1) the main effects of these covariates on performance, 2) their interaction effects with our experimental condition on performance, and 3) whether the ANOVA's main effect results remained unchanged.
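The covariate analysis described here corresponds to a linear model containing the covariate, the condition, and their interaction. A minimal sketch, again with hypothetical file and variable names:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("study1_long.csv")  # hypothetical long-format file, as above
# Average the repeated measures per participant for a simple ANCOVA
wide = (df.groupby(["id", "group", "explicit_ags"], as_index=False)
          .agg(grip=("grip", "mean")))
model = smf.ols("grip ~ C(group) * explicit_ags", data=wide).fit()
print(sm.stats.anova_lm(model, typ=2))  # covariate, condition, interaction
```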
STUDY 1
Participants
In total, N = 68 participants (54 female) aged 18-35 (M = 22.46, SD = 3.10) were recruited using both online and offline university blackboards and randomly assigned to the experimental (n = 35) or control group (n = 33). There were more female subjects in the total sample (79 vs. 21%), and the gender distribution was almost equal across the two conditions.
Procedure
The 75-min procedure (see Figure 3) started with information and consent forms, followed by a demographic questionnaire, an assessment of affective state, measures for implicit and explicit AGS, and a physical fitness questionnaire. The VR headset was then mounted, and motion trackers inside the headset and hand controllers allowed synchronization of the participants' movements. The avatar conditions' random assignment was based on a randomization list and was not announced or discussed by the laboratory members. The young participants were assigned to the younger (YAy) or older (OAy) age avatar group. Participants were presented with a 90-s audio instruction directing their attention to their avatar's appearance and movements to facilitate adequate immersion. After this introduction, a repeated measure of the affective state was followed by the physical performance measures for handgrip strength and endurance. After leaving the 60-min VR scenario, participants completed a presence and body ownership questionnaire.
Handgrip Strength
The participants' handgrip strength was measured three times in a row while the VR scenario was running. A handgrip dynamometer was placed into the participant's dominant hand and no visual reference was presented within the VR. As suggested by Innes (1999), each participant was asked to hold the device with their arm in a neutral position, their shoulder adducted and in a neutral rotation, and their elbow flexed at a 90° angle. The repeated-measures approach was chosen first to obtain several data points for a validity comparison with earlier findings. The observed within-subject effect of declining handgrip scores, F (2, 122) = 15.44, p < 0.001, η 2 = 0.20, was in line with an expected decline in arm and hand strength from earlier applications (Innes, 1999). Second, the repeated-measures approach allowed the assessment of the influence of both initial motivational drive and gradually advancing motivational parameters.
Endurance
To measure the participants' weight-holding endurance, they were instructed to hold the VR controller for as long as possible in a suspended position directly in front of their body with a straight arm and with a 90° angle between the arm and torso. The test ended when the hand was lowered beneath a point at the lower edge of the sternum, which was previously marked with a visual reference outside of VR and thus only visible to the person administering the test.
Implicit Age Group Stereotypes
The Implicit Association Test, which was presented on the VR-integrated virtual monitor, aimed to detect subconscious associations between different mental representations. The general procedure started with learning and practicing a first allocation of positive and negative words in response to certain stimulus categories, in this case age groups, and, in other studies, ethnicities (Saujani, 2002) or gender groups (Ramos et al., 2016). Participants were then asked to respond to a reversed allocation of the verbal labels. The resulting "improved D-score," described by Greenwald et al. (1998), was calculated from the reaction time difference between the earlier and later phases of the experiment, indicating a stronger link of the relevant concept to either positive or negative associations. The task was adapted to AGS and included images of older and younger people together with positive and negative words (e.g., "happiness," "cruelty"). A negative score represents a stronger association of "old" with negative content and "young" with positive content. To minimize possible priming effects of this presentation, the word list was balanced between positive and negative words and presented in randomized order.
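As a rough illustration of the scoring logic, the following is a simplified, single-block-pair reduction of the improved D-score algorithm (the published algorithm includes further steps, such as separate practice/test D-scores and additional trial exclusions; the column and block labels here are hypothetical):

```python
import pandas as pd

def improved_d_score(trials: pd.DataFrame) -> float:
    """Simplified improved D-score for one participant.
    Expects columns: block ('compatible'/'incompatible'), rt_ms, error (0/1).
    Sign depends on which pairing is labelled 'compatible'."""
    t = trials[trials.rt_ms < 10_000].copy()  # drop very slow trials
    # Error penalty: replace error-trial RTs with the block mean of
    # correct responses plus 600 ms
    for b, g in t.groupby("block"):
        penalty = g.loc[g.error == 0, "rt_ms"].mean() + 600
        t.loc[(t.block == b) & (t.error == 1), "rt_ms"] = penalty
    pooled_sd = t.rt_ms.std(ddof=1)  # SD across both blocks combined
    means = t.groupby("block").rt_ms.mean()
    return (means["incompatible"] - means["compatible"]) / pooled_sd
```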
Explicit Age Group Stereotypes
This rating task (Kornadt et al., 2016) consisted of 48 adjectives, equally distributed into positive and negative words and further balanced concerning the factors competence and warmth, following the stereotype content model by Fiske et al. (2002). Participants were asked to rate on an 8-point semantic differential scale whether each adjective presented on the virtual monitor applied more to younger or to older adults. Two separate sum scores were calculated for the positive and negative adjectives, and the positive score was subtracted from the negative one to create an overall AGS score. High values indicate stronger old-age AGS negativity, with a strong association of younger age with positive attributes and older age with negative attributes; lower values indicate weaker AGS, with a lesser tendency to view younger people as positive and older people as negative.
FIGURE 3 | General procedure. Note. Studies 1 and 2 consisted of either the physical or the cognitive performance focus, containing only the baseline and performance assessments of one performance domain as mentioned above. Study 3 included both performance domains in the procedure. *For Study 3, some questionnaires were completed in a paper-pencil setting prior to the laboratory appointment. **The immersion and presence assessment was only carried out in Studies 1 and 2.

A sub-score was calculated, including only the adjectives that were directly related to our dependent construct. The physical AGS score included the positive adjectives "healthy," "energetic," "lively," and "agile" and the negative reverse-scored adjectives "sick," "lazy," "powerless," and "frail." The resulting score indicated the domain-matched AGS for physical performance.
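Computationally, the overall and domain-matched scores reduce to simple sums over the adjective ratings. A minimal sketch, assuming ratings are oriented so that higher values mean the adjective applies more to older adults (the exact sign conventions are assumptions based on the description above):

```python
import pandas as pd

PHYSICAL = {"healthy", "energetic", "lively", "agile",
            "sick", "lazy", "powerless", "frail"}

def ags_scores(ratings: pd.DataFrame) -> dict:
    """ratings: one participant's data with columns 'adjective',
    'valence' ('pos'/'neg'), and 'rating' (1-8, assumed higher =
    applies more to older adults)."""
    pos = ratings.loc[ratings.valence == "pos", "rating"].sum()
    neg = ratings.loc[ratings.valence == "neg", "rating"].sum()
    phys = ratings[ratings.adjective.isin(PHYSICAL)]
    phys_pos = phys.loc[phys.valence == "pos", "rating"].sum()
    phys_neg = phys.loc[phys.valence == "neg", "rating"].sum()
    return {
        "overall_ags": neg - pos,        # high = stronger old-age negativity
        "physical_ags": phys_neg - phys_pos,  # sign convention assumed parallel
    }
```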
Affective State
Prior to and right after mounting the VR headset, we assessed the participants' affective state through an adaptation of the Implicit Positive and Negative Affect Test (Quirin et al., 2009) that was presented on the lab monitor outside (but visually alike) the VR scenario. In this projective test setup, participants were asked to rate how much the sounds of three fictitious words ("BELNI," "VIKES," and "TALEP") matched three adjectives describing positive emotions and three adjectives describing negative emotions using a 4-point Likert scale. The authors of the paradigm argue that the obligatory rating of neutral words using positive and negative emotions provides an implicit projection of the participants' current mood, so no explicit questionnaire is necessary. Positivity and negativity scores before and after mounting the VR headset were calculated as the means of each set of three positive/negative adjectives. The inclusion of affective state development in the analysis aimed to control statistically for possible mood changes after entering the VR scenario, while minimizing social desirability effects by using an implicit assessment. For the following analyses, two mean delta scores were included as moderators, indicating an increase or decrease either in positive or in negative affect during the introduction to the VR scenario.
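The resulting moderators are plain mean differences. A minimal sketch with a hypothetical, simplified input structure:

```python
import numpy as np

def affect_deltas(pre, post):
    """pre/post: dicts mapping 'pos'/'neg' to the three adjective
    ratings (1-4 Likert) at each measurement point (simplified)."""
    return {
        "delta_pos": np.mean(post["pos"]) - np.mean(pre["pos"]),
        "delta_neg": np.mean(post["neg"]) - np.mean(pre["neg"]),
    }

# Example: positive affect rose slightly, negative affect unchanged
print(affect_deltas({"pos": [2, 3, 2], "neg": [1, 2, 1]},
                    {"pos": [3, 3, 3], "neg": [1, 2, 1]}))
```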
Immersion and Presence
As previous studies have shown VR embodiment effects to be sensitive to variations in the intensity of the immersive experience and presence perception, we used the Igroup Presence Questionnaire (IPQ; Schubert, 2003) to assess presence and the Body Ownership Questionnaire (BOQ; adaptation by Reinhard et al., 2020) to assess the participants' immersion, both presented on the laboratory monitor. The four resulting IPQ sub-scales, "spatial presence," "involvement," "realness," and "global presence," were summarized into one total IPQ presence score that was included in the analysis parallel to the body ownership score from the BOQ.
Physical Fitness Assessment
Physical fitness was assessed as a baseline control measure prior to the physical performance testing in VR using the General Practice Physical Activity Questionnaire (GPPAQ; Ahmad et al., 2015). The amount and intensity of physical activity in professional and recreational contexts are assessed and summarized as a total score.
Immersion and Presence
The total scores indicate sufficient levels of immersion (M = 4.12, SD = 0.89) and presence (M = 2.27, SD = 0.88).
Effects of the Experimental Age Manipulation
There was no main effect of the experimental condition on endurance, F (1, 60) = 1.52, p = 0.22, η 2 = 0.025, indicating that avatar age did not affect endurance performance. There was also no main effect on handgrip strength, F (1, 61) = 0.71, p = 0.40, η 2 = 0.01. However, an interaction effect between avatar age and measurement repetition was found, F (2, 122) = 7.06, p = 0.01, η 2 = 0.10, indicating a difference in slopes for the two conditions (Figure 4). On closer inspection, this interaction effect was mainly driven by the first measurement in the control group, which began exceptionally high and dissolved over the following two measurements, suggesting an initial difference in performance level rather than in handgrip or forearm physiology.
Controlling for Affective State
Including the delta scores for positive mood changes during the VR manipulation as a within-subject factor in a repeated-measures analysis revealed no significant effects. An exploration of negative mood changes as a within-subject factor also revealed no main effect of negative mood change, F (1, 55) = 0.13, p = 0.72, η 2 = 0.00, and no interaction effect with avatar age on handgrip strength, F (1, 55) = 2.52, p = 0.12, η 2 = 0.04. In line with our preliminary findings, there was no main effect of avatar age on handgrip strength, F (1, 55) = 0.70, p = 0.41, η 2 = 0.01, and the interaction effect between avatar age and measurement repetition remained significant, F (2, 110) = 3.85, p = 0.02, η 2 = 0.11. In summary, the results of the avatar age manipulation cannot be explained by changes in the subjects' affective state.
A separate analysis of domain-matched AGS for physical performance was carried out based on the stereotype-matching effect reported by Levy and Leifheit-Limson (2009). This analysis included a physical performance-related AGS negativity score as a possible moderator of handgrip strength and did not reveal a significant main effect of domain-matched AGS on handgrip strength, F (1, 52) = 2.0, p = 0.16, η 2 = 0.04. A significant interaction effect with avatar age was discovered, F (1, 52) = 4.34, p = 0.04, η 2 = 0.08, indicating that, within the experimental group, only subjects with stronger physical AGS performed worse in the handgrip strength task (Figure 5). Only the analysis of the adjective subgroup related to physical performance showed a moderating effect in the physical performance domain, which hints at the domain-matched relevance of AGS to performance in a given domain.

FIGURE 5 | Handgrip strength by experimental condition and physical age group stereotype negativity. Note. Handgrip strength scores of the young sample, displayed separately for the YAy (young avatar) and OAy (old avatar) subgroups over three repeated measures, with separate graphs for (median split) low and high negative domain-matched (physical) AGS.
Discussion
The virtual age embodiment did not directly affect handgrip strength. The significant interaction between condition and the repeated measurement, however, indicates that physical performance was affected in a more complex and subtle way than hypothesized. Thus, our first hypothesis is only partially confirmed. Compared to both the pattern shown in the control group and that known from other research (Innes, 1999), participants in the experimental group exhibited a conspicuously low performance in the first measurement; consequently, there was almost no fatigue-related decline in performance of the kind typically found after a major effort. Experiencing being older appears to have resulted in a strategic attempt to save effort and thus avoid premature resource depletion. This interpretation would be consistent with notions of selective optimization with compensation (Freund and Baltes, 1998) and reminiscent of an anticipated loss-based selection vis-à-vis a potentially challenging task or, as Ebner et al. (2006) would say, a goal orientation guided by loss prevention. The drawback of this interpretation is its assumption that younger people have at least tacit knowledge of the developmental regulation strategies of older individuals. A more simplistic explanation would be that participants approached the unknown task more cautiously, considering the possibly excessive demands they stereotypically expected for older people in unknown performance situations. Our data are compatible with both interpretations and invite further exploration of this issue in future research.

An intriguing finding of Study 1 was the significant interaction of the experimental condition with domain-matched (but not domain-general) explicit AGS, suggesting that AGS activation is indeed the effective mechanism behind our findings. In line with our expectations, participants only behaved stereotypically old in the aspects they believed to be related to old age. This aligns with research by Levy and Leifheit-Limson (2009), who showed stronger effects when the stereotype content matched the outcome performance domain.
Surprisingly, we did not find significant main effects of avatar age on endurance. A more strenuous variant with a heavier weight might have produced clearer effects, as the subjects' physical resources would then be depleted more strongly. It could also be that the task did not match the content of the self-relevant stereotypes people hold. As our findings from the handgrip strength task suggest, changes in performance seem quite sensitive to the specific domain and hence to the specific content.
STUDY 2
Procedure
The design, procedure, and time spent in VR remained almost identical to Study 1, except that the dependent variables included cognitive performance measures instead of physical ones (see Figure 3) and the respective baseline control measure was replaced with a memory self-efficacy questionnaire. The young participants were again assigned to the younger (YAy) or older (OAy) avatar group.
Verbal Memory and Recall
To assess verbal memory and recall (VMR) abilities, we adapted the word list reproduction task of the CERAD battery (CERAD-WL; Morris et al., 1989). Participants were asked to read aloud a set of 15 words presented for 2 s each. After the first sequence, each participant was asked to reproduce the words from memory. The procedure was repeated twice while changing the order of the words. The requirement of learning and recalling words allows the inspection of both a baseline recall ability and a learning slope across the repetitions of the sequence. The test shows strong associations with other established cognitive tasks (Yuspeh et al., 1998), while older samples are known to perform worse on the test (Hankee et al., 2016) and learning slopes appear steeper for younger individuals (Jones et al., 2011).
Alphanumeric Working Memory
In the letter-number sequencing subtest from the Wechsler Adult Intelligence Scale (Wechsler, 2008), a series of numbers and letters was read to the participants to measure their alphanumeric working memory capacity (AWM). Subjects were then asked to reproduce the sequence in a predetermined order. The test included 10 sets of three tasks, with a gradually increasing sequence length. The test ended as soon as the participant gave only false answers to a set of three tasks.
Sustained Attention
Our reproduction of the Testing Battery for Attention Performance (Zimmermann and Fimm, 2017) presented a series of geometric figures, each displayed for 2 s and followed by a 1-s fixation cross. Figure variations included assorted colors, shapes, and sizes. Target stimuli were defined as a sequence of two consecutive figures with an identical shape, regardless of size or color.
Participants were asked to respond as quickly and accurately as possible by pressing a button whenever a target stimulus appeared. Performance was measured as accuracy among 36 target stimuli distributed over 300 trials.
Memory Self-Efficacy
As a baseline control variable for the dependent measures, the Memory Self-Efficacy Questionnaire (MSEQ; Berry et al., 1989) assessed the participants' expectations concerning their memory performance using everyday examples of memorizing grocery shopping lists, telephone numbers, household items, or digits. For each category, the number of items that participants believed they could memorize was coded, and a total memory self-efficacy score for each subject was calculated following the original procedure.
Implicit and Explicit Age Group Stereotypes
Implicit and explicit AGS were assessed in a manner identical to Study 1. Again, a domain-matched explicit AGS score was calculated, this time matching the cognitive performance domain using a subset of adjectives, namely, "clever," "learned," "literate," and "versed" and the negative reverse-scored adjectives "forgetful" and "senile." The resulting score indicated the domain-matched AGS for cognitive performance.
Immersion and Presence
The total scores again indicate sufficient levels of immersion (M = 1.97, SD = 0.76) and presence (M = 3.64, SD = 0.97).
A domain-matched operationalization of explicit AGS was included as a between-subject factor, similar to Study 1 but now for cognitive performance, revealing no main effect of cognitive performance-related AGS on VMR, F (1, 40) = 0.02, p = 0.90, η 2 = 0.02. No interaction effect of the repeated measures with cognitive performance-related AGS was found, F (1, 40) = 0.12, p = 0.73, η 2 = 0.003. Due to the small sample size, we refrained from separate statistical analyses for each measurement and instead performed a descriptive inspection of the data. Here, participants from the experimental group with higher cognitive AGS scores showed notably lower scores in the first and second of the three consecutive measurements (see Figure 7).
Discussion
In Study 2, a multifaceted selection of cognitive tasks was used. Even though the required sample size was not reached and the study was possibly underpowered, we found a strong main effect of avatar age on VMR. Given the task's strong connections to the participants' motivation and persistence, this finding is in line with our expectations, and the effect size of η 2 = 0.18 is large. For Study 2, the first hypothesis was thus confirmed. However, not all cognitive measures applied in this study showed a meaningful difference across avatar age conditions. Most importantly, it is unclear why this experimental approach affected cognitive performance more strongly than physical performance. An obvious difference lies in the details of test administration. The cognitive measure of VMR was carried out right inside the virtual scenario, with the relevant words displayed on the virtual monitor; subjects were forced to direct their attention toward the monitor and mirror. Meanwhile, the AWM and handgrip strength tests were both administered without any visual reference within the VR scenario. Perhaps stereotypes related to cognitive performance decline in old age are also more widespread than those targeting physical performance (Levy, 2003). However, it must be noted that the main effect of avatar age on VMR was no longer significant when the implicit AGS score was included as a covariate. Unfortunately, a post-hoc inspection of the correlations between implicit AGS, the experimental condition, and memory performance did not provide a clear picture to explain this finding. More research is needed to understand the role of implicit AGS in this context.
Interesting results were also obtained when including domain-matched explicit AGS as a moderator in the analyses. We found a substantial offset in the learning slopes in the experimental older avatar group. A descriptive inspection of the data could indicate that strong domain-matched negative AGS were associated with low performance expectations for the older avatar group prior to the experiment, leading to a steeper learning curve once the task started and participants gained confidence. Examining the data more closely, however, leads to another possible interpretation. The largest difference in performance occurred at the first measurement, where participants differing in their domain-matched AGS also showed a considerable offset in their performance. This finding resembles the handgrip strength effect from Study 1, and it might indicate the use of strategies to prevent the depletion of cognitive resources by using limited resources economically. Meanwhile, the validity of these findings remains in question, as a significant baseline difference in this moderator was found between the experimental conditions; future research would require larger samples with careful randomization or matching of participants.

FIGURE 6 | Memory performance (VMR) by experimental condition. Note. Number of reproduced words from the verbal encoding and recall task (VMR) for the young sample, displayed separately for the YAy (young avatar) and OAy (old avatar) subgroups over three repetitions of learning and reproducing.
The effect we found for VMR, as well as its large size, cannot be generalized across all cognitive tasks. Data from the AWM task did not show significant effects following the avatar age manipulation. One possible interpretation considers the limited capacity of people's working memory (Baddeley and Hitch, 1974). With limited resources at a neuro-functional level, variations in self-relevant AGS activation and motivational aspects may simply not translate into performance differences.
STUDY 3
Participants
Study 3 comprised N = 117 participants (66 female) aged 50-83 years (M = 61.23, SD = 7.5) who were assigned to the control (n = 59) and experimental conditions (n = 58). The recruitment for Study 3 resulted in a much larger sample size compared to the earlier studies, whose recruitment was considerably limited by, for instance, the availability of student participants during certain times in the semester. An age range starting at 50 was selected for two reasons. First, other studies on aging and on AGS sometimes include participants who do not yet fall into the category of being old (e.g., Wurm et al., 2007, 2010), in order to cover this transition phase and to study precursors of aging. To cover both groups in the data analysis, chronological age was included as a covariate. Second, we were guided by more pragmatic reasons given the demanding recruitment situation: the parents and relatives of the university students were in their 50s or 60s and could thus be recruited more easily than the general public during the COVID-19 pandemic.
Procedure
Recruitment was carried out using a university blackboard, leaflets in a campus-attached GP practice's waiting room, and newspaper bulletins. Prior to the VR assessment, the participants received a bundle of documents by mail and were asked to complete them before visiting the VR lab. This included information and consent forms on the study, together with questionnaires on demographic variables, explicit AGS, and memory self-efficacy that were identical to the aforementioned studies but completed in a paper-pencil mode at home. By transferring these questionnaires to a paper-pencil assessment prior to the laboratory appointment, the procedure for Study 3 was deliberately reduced in comparison to Studies 1 and 2 (see Figure 3). As the earlier procedure had reportedly been demanding on the physical and cognitive levels, a trade-off had to be reached between a set of sufficiently valid and reliable performance measures and a minimization of possible fatigue effects toward the end of the laboratory appointment. The resulting 75-min procedure started with the assessment of implicit AGS. Participants were then introduced to the lab room, and the VR headset was mounted. Participants were assigned to the younger (YAo) or older (OAo) avatar group, and the introduction procedure remained identical to Studies 1 and 2. The following assessment of performance measures included verbal memory and alphanumeric working memory, followed by the handgrip strength task. After a break of 2 min, the arm-holding endurance task was administered.

FIGURE 7 | Memory performance by experimental condition and cognitive age group stereotype negativity. Note. Number of words reproduced from the verbal encoding and recall task by young participants, displayed separately for the YAy (young avatar) and OAy (old avatar) subgroups over three repeated measures, with separate graphs for (median split) low and high negative domain-matched (cognitive) AGS.
Dependent Measures
The participants' handgrip strength measurement was identical to Study 1. Due to a systematic operational error by one of the four test conductors, some handgrip measurements had to be excluded from the analysis, resulting in 89 complete datasets for this variable. Weight-holding endurance was assessed identically to Study 1, with an additional 500-g wristband attached to the relevant wrist. The additional weight was expected to reduce the overall testing time and the variance in results, both of which had been unexpectedly high in Study 1. The verbal memory and recall and alphanumeric working memory assessments remained identical to the previous studies.
Baseline Group Equivalence
There were no baseline group differences for memory self-efficacy, t (115) = -0.88, p = 0.80, but a significant baseline group difference in general explicit AGS, t (115) = -0.96, p = 0.02, was found, indicating stronger explicit AGS in the experimental YAo condition.
Effects of Experimental Manipulation on Physical Performance
For weight-holding endurance, there was no main effect of avatar age group, F (1, 115) = 0.09, p = 0.77, η 2 = 0.00. Further, with chronological age as a covariate, there was no main effect of age, F (1, 113) = 0.03, p = 0.87, η 2 = 0.00, but a significant interaction effect of chronological age with avatar age group on weight-holding endurance, F (1, 113) = 4.54, p = 0.035, η 2 = 0.039 (see Figure 8). The 50-60-year-old subgroup of the experimental YAo condition showed a stronger performance than the older 61-83-year-old subgroup of the same avatar condition. For the OAo control condition, the findings were reversed: the older 61-83-year-old subgroup showed a stronger performance than the younger 50-60-year-old subgroup of the same avatar condition.
Effects of Experimental Manipulation on Cognitive Performance
The analysis revealed no main effect of avatar age group on verbal memory and recall, F (1, 115) = 0.04, p = 0.85, η 2 = 0.00. Further, while a main effect of chronological age on verbal memory and recall was found, F (1, 114) = 8.26, p = 0.005, η 2 = 0.07, indicating lower scores for older participants, no interaction effect of age with avatar age group on verbal memory and recall was found, F (1, 113) = 0.18, p = 0.68, η 2 = 0.00.
In the absence of significant main effects of our experimental manipulation on the relevant dependent variables, we did not consider any further control or moderator variables.
Discussion
The results of Study 3 suggest that what we obtained for the young sample cannot simply be generalized to the older group. Most importantly, we did not find any evidence for the idea that embodying a younger avatar leads to improved performance parameters, so our second hypothesis was not confirmed. Thus, this part of the project was unable to provide the expected cross-validation. Possibly the most prominent explanation is that the virtual avatar's incongruence with the subject's own appearance might itself have caused the observed performance decline through an increased cognitive load (Sweller, 2011) induced by distraction or excitement. For the age-different avatar in both the young and old samples, this visual incongruence was possibly stronger than for the avatar that was closer to the subject's own chronological age. For future research, an additional control condition with, for instance, a gender/age-neutral mannequin or a personalized avatar appears a reasonable step to provide better insight into the relevant mechanism of our virtual reality approach. Furthermore, Study 3 was expected to provide evidence for the development of negative AGS interventions aimed at improving long-term health parameters known from previous research and thus at minimizing negative effects during the aging process. Such effects were not found, even though the sample size exceeded what was necessary based on our power analysis and the selected performance measures had proven reactive to the intervention among younger participants. The roles of several possible influential parameters remain unclear and must be explored in future studies: a strong variation in chronological age within the two age groups was matched with fixed-age avatars, which poses the risk of strong intra-group variation in self-relevant age perceptions relative to the avatar. Interactional findings from Study 3 support this argument, as participants with a chronological age closer to the virtual avatar's age performed better in the endurance task in both the YAo and OAo groups. Again, a baseline difference between the experimental conditions in explicit negative AGS was found, which limits the results and requires careful consideration in future experimental designs and randomization or matching procedures. Furthermore, the application of our performance tasks might have caused a stereotype threat (e.g., Hess et al., 2003), inducing higher stress among the older sample and thus diluting our results.
GENERAL DISCUSSION
Our research introduced a novel, sustainable, and flexible approach to activating self-relevant AGS within VR and capturing their effects on physical and cognitive performance. It advances the empirical validity of the aging game and surpasses both the experimental effects of priming and the immersive character of false-feedback approaches. It offers sufficient manipulation of the self-relevance of pre-existing AGS with a direct connection to the relevant performance domains, and it enables the further development of interventional strategies with a hopefully positive impact on the various health parameters linked to strong AGS negativity concerning older age groups. We hypothesized a performance decline among young participants (Studies 1 and 2) who embodied an older avatar in comparison to a same-aged avatar, which was partially confirmed. We also expected older participants (Study 3) to perform better when embodying a younger avatar in comparison to an older avatar, which was not confirmed. In addition, we hypothesized a general interaction effect of explicit AGS with avatar age embodiment on performance. This hypothesis was only partially confirmed, as the expected association occurred in certain results but not throughout the three studies. Given the contrasting findings of our intervention for different age groups and different performance parameters, we see several reasons why the hypothesized mechanisms for changing physical and cognitive performance did not affect the older and younger samples alike.
First, the younger and older samples possibly differed concerning their openness to new technology and their earlier experiences with digital technology and VR (e.g., Elias et al., 2012). Individuals who find themselves in a research setup that is largely unfamiliar to them could experience higher arousal and perhaps a stronger decline in cognitive focus or motivation.
Unfortunately, the affective state measurement was not applied in Study 3, so we could not test this explanation directly. Even though the immersion scores were sufficient in Studies 1 and 2, the immersion assessment was not applied to the older sample in Study 3, so it cannot be guaranteed that VR scenario immersion was as strong for the older participants. Further, in an explanation focused more strongly on resource availability and allocation, one must admit that performance is always tied to the availability of resources, both in the body and the brain, and it is always easier to diminish such resources than to build or activate them, especially in the short-term setting of a laboratory experiment. For instance, performance will immediately decline after cognitive overload (Decroix et al., 2016), distraction (Graydon and Eysenck, 1989), or the activation of affective states (Rader and Hughes, 2005). Conversely, activating a resource that will then immediately boost performance is much more difficult to achieve. Certainly, the activation of self-relevant AGS also does not affect physiological or neurological functioning directly, at least not in the brief period of a lab experiment. Muscles are not weakened or strengthened, and the brain is not slowed or sped up; instead, the utilization of available resources is influenced by the activation of self-relevant AGS. In hindsight, it is not surprising that we found no significant effects for the more basic measures and significant effects for measures with a stronger motivational component.
Furthermore, one must consider group differences in the length of lifetime exposure to the self-relevant stereotypes. Self-reflexive AGS could be much more difficult to change among older than younger people. Older participants have had much longer exposure to negative old-age AGS and have thus internalized them more strongly (Levy et al., 2012). Furthermore, the respective old-age AGS might have become more self-relevant as the individuals have grown older, resulting in strong, long-lasting, now activated, and highly relevant negative AGS. In comparison, the younger participants had much shorter exposure to the negative old-age AGS held by society; furthermore, these AGS are not yet self-relevant to them.
Old AGS are acquired early in life (Flamion et al., 2020), are nurtured throughout the lifetime in an environment that tends toward ageism (Naegele et al., 2018), are easily triggered by obvious features (Mason et al., 2006), and function via both explicit and implicit pathways (Hess et al., 2003), making them seemingly susceptible to relatively subtle experimental manipulation.
Finally, there is a difference in perspectives on the stereotype content. For young participants, the VR manipulation resembles a "time machine" into a future that they naturally have not yet experienced. Older participants, however, experience a journey into a past they have already lived through. Our VR manipulation for young people might have activated what they associate with old age, while the content is not self-relevant and not strongly represented in their minds. For older participants, we activated both personal knowledge and self-relevant stereotypes about early and late sections of their life span. For this reason alone, our apparently symmetrical study design was probably not symmetrical at all in its psychological implications.
Strengths and Limitations
Our research proposes a novel approach to studying the short-term effects of activating self-relevant AGS in the lab, and this approach has obvious strengths when compared to more conventional paradigms (Varkey et al., 2006). Our studies, however, also have several weaknesses that might be responsible for the lack of significant findings or might limit the generalizability of our results. The conditions for sample recruitment led to a highly selective sample, as psychology students from our university might have been aware of the content or intention of our study or might have recognized some assessment procedures. The older sample was recruited from a university blackboard, a nearby GP's waiting room, and newspaper bulletins, which might have produced a somewhat selective sample with higher levels of positive health behavior, openness to science in general, or interest in VR applications in particular. Such selectivity effects are known from cross-sectional age-comparative studies, even if all relevant measures are taken to prevent them (e.g., Lüdtke et al., 2003). In addition, the sample size was relatively small, especially in Study 2, and our careful randomization procedure nevertheless resulted in partial baseline differences between the experimental conditions concerning the relevant covariate of explicit AGS. A possible lack of identification with the virtual avatar might have resulted from the lack of personalization of the avatars (see Liu et al., 2019). The set of four avatars was chosen to ensure maximum coverage of our participants' visual features, but, as the avatars were not further individualized, possible inter-individual variation in avatar identification must be mentioned as a limitation of this piece of research. In addition, more movement trackers could have improved the body illusion effect (Kim et al., 2020). These latter two limitations of the immersion, and hence of the validity of our effects, were due to economic and technical factors. In Study 3, several questionnaires were moved from the laboratory assessment to paper-pencil questionnaires completed prior to the laboratory appointment to make the latter less time-consuming and less exhausting for the older sample. This might limit the comparability of our questionnaire assessments in an age group comparison.
We found that the average endorsement of items measuring self-reported immersion was not the highest possible. Instead of lamenting this fact, we want to draw two other conclusions. First, it seems the VR approach is quite effective in activating self-relevant stereotypes even when the experience of immersion is not perfect. This is a promising result for future applications of the paradigm. Second, there is room for improvement, and higher immersion could have been obtained with a higher-resolution VR headset, more individualized avatars, sensory feedback on more levels, or a virtual room matching the subjects' home environments. As stronger effects on cognition and behavior are expected with higher immersion, the full potential of the VR approach is yet to be explored.
Another possible weakness lies in the outcome variables, which may not have been directly affected by AGS activation. Especially in Study 3, we reduced the overall testing time for pragmatic reasons while combining two demanding performance measures in one session. We needed to focus on certain variables and deliberately decided to cover the widest possible range of performance domains vis-à-vis the given constraints rather than assessing only one aspect in full. Future research can investigate more diverse outcome variables with the VR paradigm.
It should be noted that the pre-existing AGS, whose moderating role we examined, were not manipulated experimentally. Hence, our interpretation that this finding identifies the actual mechanism behind the performance declines lacks causal evidence. Future research must manipulate both virtual age and the valence of stereotypes to prove the entire causal chain. Furthermore, the moderating role of pre-existing AGS in handgrip strength and VMR performance appears similar only at first glance. Upon closer inspection, the initial performance is impaired in both tasks, but only for VMR do we find a compensation of this impairment, as indicated by a steeper learning curve. It would be premature to draw substantial conclusions from this difference in performance patterns, as doing so would generalize this moderating effect too broadly.
One further limitation is that the test administrators were fully aware of both the participants' assignment to the two conditions and the research question, which might have led to a Rosenthal effect (e.g., Dumke, 1978). Their own expectations or AGS might have resulted in differences in behavior toward the control and experimental condition participants. Blinding the test investigators via automatic selection of and assignment to the conditions appears a feasible adjustment for the future.
Conclusion
As Marilyn Ferguson stated (Ferguson and Toms, 1982), "Of all the self-fulfilling prophecies in our culture, the assumption that aging means decline and poor health is probably the deadliest" (p. 272). For this reason, research on self-relevant AGS deserves more attention. At the same time, this research faces the fundamental challenge of manipulating age experimentally; only by meeting it can we come closer to an understanding of the causal processes that link stereotypical attitudes to biological and psychological outcomes. Fortunately, current technology allows us to close the gap between what is possible and what is needed from a methodological perspective. With the advent of VR applications, we might overcome at least some of the obstacles related to lifespan experimental research.
Open Practices Statement
The data and materials for all experiments are available at https://osf.io/rnz62/?view_only=d5a447aec74a4aeeab73e8eb38f0a8a1.
All of the experiments were preregistered using the OSF framework:
1) Physical performance: https://osf.io/dbns9
2) Cognitive performance: https://osf.io/w2msp

AUTHOR'S NOTE

Data collection for this paper was conducted from October 2019 to April 2020. Many thanks to Pointreef for their continuous technical support of the VR scenario and to Sebastian Unger for programming the experiments.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by University Witten/Herdecke, Ethics Submission #13/2019. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
NV contributed to the creation of the experimental plans, the organization of the laboratory procedures, the supervision of data recording, data preprocessing, and the calculation of the statistical results, as well as to the writing of the manuscript. MT contributed the overall project idea, advice on the execution of the study, on the calculation of results, and on the statistical analysis, as well as considerable contributions to the content and quality of the manuscript.
"year": 2022,
"sha1": "0a090a314ae43fcf67ad2aa31c5307f1307d94a6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "0a090a314ae43fcf67ad2aa31c5307f1307d94a6",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
Visible Light Communications-Based Assistance System for the Blind and Visually Impaired: Design, Implementation, and Intensive Experimental Evaluation in a Real-Life Situation
Severe visual impairment and blindness significantly affect a person's quality of life, sometimes leading to social anxiety. Nevertheless, instead of concentrating on a person's inability, we can focus on their capacities and on their other senses, which in many cases are more developed. On the other hand, the technical evolution that we are witnessing can provide practical means to reduce the effects that blindness and severe visual impairment have on a person's life. In this context, this article proposes a novel wearable solution that has the potential to significantly improve a blind person's quality of life by providing personal assistance with the help of Visible Light Communications (VLC) technology. To prevent the wearable device from drawing attention and to avoid further emphasizing the user's deficiency, the prototype has been integrated into a smart backpack that has multiple functions, from localization to obstacle detection. To demonstrate the viability of the concept, the prototype has been evaluated in a complex scenario in which it is used to receive the location of a certain object and to travel safely towards it. The experimental results have: i. confirmed the prototype's ability to receive data at a Bit-Error Rate (BER) lower than 10−7; ii. established the prototype's ability to provide support within a 3 m radius around a standard 65 × 65 cm luminaire; iii. demonstrated the concept's compatibility with light dimming in the 1-99% interval while maintaining the low BER; and, most importantly, iv. proved that the use of the concept can enable a person to obtain information and guidance, enabling a safer and faster way of traveling to a certain unknown location. As far as we know, this work is the first to report the implementation and experimental evaluation of such a concept.
Introduction
According to current statistics, there are about 43.3 million blind people worldwide [1], accounting for about 0.48% of the total human population. Although in the last two decades modern medicine has had increasing success in healing blindness, some current studies estimate that in the next 30 years the number of blind people will increase by up to three times due to the growing and aging population [2]. As one can imagine, blindness has a major impact on a person's life, from social integration to an increased susceptibility to accidents. For instance, the unemployment rate among blind individuals is three times higher than the average, whereas the risks associated with navigating sidewalks are at least twice as high [1,2]. Moreover, in the case of young people, blindness restricts access to normal education and limits personal progress.
One of the most basic and simple definitions of blindness is the inability to see, caused by an incapacity to discern light from darkness. In this context, blind people "see" the world through their other senses, most often by hearing and touching. Thus, blindness and severe visual impairment disturb a person's capacity to receive visual information and can have various causes, including congenital conditions, eye injuries, diseases, or degenerative conditions.
The development of new solutions to assist blind people is a major research domain with great potential to enhance the daily lives of such people [3][4][5][6]. These innovative solutions aim to augment the perception of the surrounding environment for the blind and severely visually impaired. Since a Visually Impaired Person (VIP) cannot rely on their sight, these systems must possess the capability to sense the surroundings, identify pertinent information, and convey it to the user through their other senses, with hearing and touch being the most suitable sensory channels for this purpose. To effectively perceive the environment, these devices incorporate various arrays of sensors, including ultrasound sensors, Passive InfraRed (PIR) sensors, Inertial Measurement Unit (IMU) sensors, LiDAR, GPS, and/or cameras. Once the sensory data are collected, a data fusion algorithm is employed to analyze them and provide the user with information that is not only accurate but also relevant, useful, and presented in an appropriate manner. At present, some of the most advanced solutions designed to assist VIPs are based on Artificial Intelligence (AI), specifically using neural networks for tasks like computer vision and data analysis [6]. These AI-driven systems play a crucial role in processing sensory input and providing meaningful insights that help individuals with visual impairments navigate and interact with their surroundings more effectively.
In the context of addressing the needs of visually impaired individuals, this article introduces an innovative concept aimed at providing assistance discreetly and effectively. To ensure a subtle and inconspicuous design, this concept takes the form of a backpack equipped with a range of smart features. The primary function of this backpack exploits the potential of Visible Light Communications (VLC) technology [7,8] to facilitate information transmission to the user through the indoor lighting systems. It can be recalled here that VLC is a new wireless communication technology that uses visible light for simultaneous illumination and data transfer [7,8], with high potential in distance measurement, relative positioning [9], and environment sensing [10]. Consequently, different from other technologies, VLC is developing on top of a preexisting LED lighting infrastructure, providing it with a high potential to become a ubiquitous technology. In addition to the VLC features, for situations where outdoor conditions prevail and/or when VLC coverage is unavailable, the prototype incorporates additional sensors designed for obstacle detection. These sensors offer a wide total coverage angle of 240 degrees, primarily focused on the front of the user, enhancing safety and awareness. Once the relevant information is obtained from the VLC infrastructure through the VLC receivers and/or from the obstacle detection sensors, the system dispatches it to the user by audio and/or by haptic means. Therefore, this article aims to provide the findings from the experimental evaluation of the VLC-based smart backpack within a complex real-world scenario. In this scenario, a visually impaired user intends to navigate from point A to point B, relying on guidance provided by the backpack. The results of this evaluation are highly promising, as they show that the prototype enables the user to receive crucial information and subsequently reach their desired destination efficiently, discreetly, swiftly, and, most importantly, in a secure manner. Thus, this article continues and complements [11,12] by providing a comprehensive presentation of the prototype design and the implementation process, together with additional experimental results that confirm the benefits of the proposed concept.
The rest of this article is structured as follows. Section 2 provides a brief overview of the trends in developing assistance solutions for blind persons and a motivation for the use of the VLC technology in such applications. Then, Section 3 debates the aspects regarding the design and implementation of the VLC-based blind persons' assistance solution, presenting the requirements of the system and describing the manner in which the system responds to them. Next, once the prototype has been presented, Section 4 delivers the results of the experimental evaluation of the system's components, as well as the results of the concept evaluation in a complex user-assistance situation. Then, Section 5 provides a discussion meant to point out the importance of this work and its possible perspectives, and Section 6 provides the conclusions of this article.
Existing Solutions and Approaches in Visually Impaired Persons' Assistance
The development of assistance solutions for blind and severely visually impaired people implies several challenges. First, in order to efficiently guide the VIP, the assistance solution must be able to identify the location of the user. Secondly, the solution must be able to identify the relevant information that would be useful to the user, and thirdly, it should be able to deliver the information in an adequate manner [13].
Generally, users' location can be established with the help of various wireless communication technologies, such as Bluetooth [14] or Wi-Fi, based on computer vision and inertial sensing [15], or with the help of the smartphone camera [16], with positioning errors between 0.4 and 1.5 m. To identify the information from the area, camera-based image navigation solutions are widely used [17,18]. These solutions imply a camera and specialized software that is able to recognize objects from the scene. Recently, various other solutions, such as artificial intelligence and Computer Vision (CV) applications, have been swiftly progressing [19,20]. For relative positioning and distance measurement, as well as for obstacle detection, LiDAR, ultrasound, and camera recognition systems are also used [21]. Next, once the relevant information is identified, it is transmitted to the user as audio information or by haptic means.
In terms of user solutions, blind assistance functions are generally integrated into various devices, most often into smart glasses and smart canes, while also being developed as personal computer or smartphone-compatible applications. These technologies are designed to enhance the daily lives of individuals with visual impairments by providing real-time assistance and information. These smart devices and applications offer features like person and object recognition, navigation aids, and text-to-speech capabilities, allowing users to receive auditory or haptic feedback about their surroundings. Smart canes are frequently equipped with sensors and GPS for obstacle detection and navigation support [22]. Computer and smartphone applications offer accessibility features such as voice assistants, screen readers, and GPS-based navigation. These integrated software solutions aim to improve mobility, independence, and the overall quality of life for VIPs. The high prevalence of personal computers and smartphones has led to the development of multiple software applications meant to assist visually impaired persons. Some of these applications are briefly presented in Section 2.2.
Commercial Software Applications for Blind and Severely Visually Impaired Persons' Assistance
A significant part of the software applications designed for individuals with visual impairment is closely related to assistive technology, playing a crucial role in enabling voice-based reading. Their main purpose is to convert text into speech, making it more accessible for the blind and visually impaired and facilitating their access to written information, documents, and websites. Popular software applications like Job Access with Speech (JAWS) [23] and NonVisual Desktop Access (NVDA) [24] are widely used for this purpose. JAWS, developed by Freedom Scientific, is renowned for its ability to provide voice-based reading and enhanced accessibility, converting on-screen text and graphics into speech or Braille. Its features include compatibility with online platforms, the Microsoft Office suite, web browsers, email applications, and social media. NVDA, on the other hand, is an open-source software solution that offers similar functionalities but stands out for its portability and compatibility across various operating systems and digital resources.
Additionally, there are electronic Braille devices, GPS navigation systems integrated with mobile applications like Lazarillo [25] and BlindSquare [26], voice recognition systems like Siri [27] and Google Assistant [28], or VoiceOver [29] for Apple devices, which provide accessibility features and support for users with visual impairment. There are also mobile applications designed to identify colors in the user's surroundings, as well as applications such as Be My Eyes [30], which allows users to request assistance from volunteers. These assistive technologies, along with handwriting-to-text conversion apps and Braille printers, contribute to creating a more inclusive and accessible environment for individuals with visual impairments.
Considering the important role of education, as well as the fundamental right to education, assistance software solutions for this purpose have been developed. Thus, specialized educational resources, AI assistance in online education, wearable technologies for content recognition and vocal feedback, and haptic wearables for urban navigation have been developed to enhance the daily lives and educational experiences of people with visual impairments. The rapid advancement of technology, including Virtual Reality (VR) and Augmented Reality (AR), has revolutionized the field of assistive technologies, providing innovative and intuitive solutions for the visually impaired. The integration of emerging technologies is transforming the way individuals with visual impairments access information and engage in educational processes. Accessibility standards like the Web Content Accessibility Guidelines (WCAG) [31] have played a pivotal role in ensuring that digital learning platforms and online courses are inclusive for individuals with disabilities.
While current studies and developments in this field are still in their early stages, the complexity of addressing the needs of VIPs has led to significant advancements, particularly in the medical field. These advancements range from retinal implants to facial recognition, text reading, and audio playback, with the ultimate goal of enhancing the quality of life for individuals with visual impairment.
Visible Light Communications and Their Potential in Blind and Severely Visually Impaired Persons' Assistance
Over the past decade, VLC has emerged as an exciting wireless technology that has witnessed significant advancements. As previously mentioned, VLC uses visible light not only for illumination but also as a means of transmitting data, thereby enabling pervasive wireless communication. Consequently, VLC has the remarkable capability to transform any LED light source into a data transmission device. Moreover, extensive research efforts have unlocked the potential of VLC for achieving highly precise localization, making it a valuable technology for delivering position-specific data. In contrast to traditional Radio Frequency (RF) communication, where a central device covers a wide area, VLC networks exploit the inherent properties of light. These characteristics allow for the deployment of a multitude of optical access points and the enhancement of overall performance.
Considering that LED lighting systems are an integral part of our daily lives, serving not only to illuminate our surroundings but also to provide essential visual information, it becomes apparent that their ubiquitous presence can be harnessed for a broader spectrum of applications. In this context, VIPs can derive substantial benefits from VLC's extensive coverage and its capacity to offer location-specific data. Thus, one can see that the VLC technology has the intrinsic means to solve a significant part of the tasks mentioned in Section 2.1, as it has the potential to identify users' locations and to timely deliver location-specific data. Furthermore, as the VLC technology is developing on top of a preexisting and widely available lighting network, its potential is very high. On these grounds, VLC empowers users to be constantly aware of their precise location and to receive pertinent information relevant to their specific surroundings.
However, despite the promising potential of VLC in assisting visually impaired individuals, there remains a relative lack of research focused on practical demonstrations of these concepts. More concerted efforts are needed in this area to fully explore and exploit the capabilities of VLC technology for the benefit of those with visual impairments. Examples of preliminary works focused on the use of VLC in blind persons' assistance can be found in [32][33][34][35]. Although these works emphasize the benefits of VLC and its suitability for visually impaired assistance, their implementations are still at a low Technology Readiness Level (TRL), and the experimental results are far from being relevant for real-life utilization. On the other hand, these works have the merit of pushing things forward in the right direction.
Conceptualization and Implementation of the Visible Light Communications-Based Smart Backpack for Blind and Severely Visually Impaired Persons' Assistance
This section addresses the issues related to the design and the practical implementation of the VLC-based blind persons' assistance smart backpack prototype. It states the purpose of the system and the development guidelines, argues for them, and illustrates the transition from design guidelines to an experimental prototype.
Purpose Statement
The existing blind-assistance literature and the market segment oriented towards blind persons' guidance enable the identification of several types of assistance applications, from solutions that help a person read information [23,36] to software solutions that help a person fully use a computer and its applications [26,30]. In this context, the purpose of the proposed solution is to provide blind and severely visually impaired persons with information that enables them to navigate in unfamiliar places, based on personal guidance and user-location-specific information. Furthermore, in order to optimize the path, as well as to minimize the risks associated with movement in unfamiliar locations, the proposed solution aims to detect possible obstacles (i.e., open doors, boxes, chairs, dispensers, etc.) in users' paths and to warn them. Another problem that the system aims to address is related to a blind person's ability to maintain a straight direction when necessary. Consequently, the system targets monitoring and providing the users with information about their path and trajectory.
Visible Light Communications-Based Smart Backpack for Visually Impaired Persons' Assistance: Requirements and Guidelines
From the users' perspective, in order to be effective, a human assistance solution should be useful to its users, versatile and simple to use, should achieve high user acceptance, and should be cost-efficient. These are the preliminary requirements that have been imposed on the blind assistance solution.
Usefulness: In order to be useful for blind persons' assistance, a system should be able to provide its users with information that they are not able to perceive through their sight. Nevertheless, as the environment has a lot of visual information to offer, the system should be able to analyze the available information and offer only the relevant part. Otherwise, too much data can distract the user's attention, making the system less useful. This implies a careful analysis of the available information and an adequate consideration of the users' needs.
Use simplicity and versatility: The system should be able to translate visual information into other types of sensorial data, where hearing and touching/sensing are the most straightforward senses that could be used. Nevertheless, as blind users sometimes already rely on their hearing to perceive the environment, the system should give users the possibility to use hearing options only when they want to. Another aspect of versatility is related to a system's ability to provide additional services and/or functionalities. For example, many successful blind-assistance solutions are applications that can be installed on a smartphone. Their success is based on the fact that the blind assistance function can be integrated into an already useful device, which is now able to provide an additional function. Versatility is also related to a solution's ability to remain helpful in as many situations as possible. For example, many GPS-based blind guidance solutions become useless in situations with no GPS coverage, which represents a major issue.
Users' acceptance: Many VIP support solutions encounter a common challenge related to their hardware design. To enhance functionality, numerous sensors are often incorporated. While this approach improves the perception of the user's surroundings, it frequently results in final prototypes that are oversized and lack aesthetic appeal, making them uncomfortable for users to wear. Wearing such a conspicuous and bulky device can inadvertently highlight the user's impairment, thereby limiting the device's overall practicality and benefits.
Cost-efficiency: Cost-efficiency is a feature that can be attributed to a system that is not necessarily cheap, but whose benefits are high with respect to its cost. Therefore, adding extra features to a product can make it cost-efficient if multiple user needs are satisfied.
From a developer's perspective, the Visible Light Communications technology imposes a series of constraints. As previously mentioned, VLC assumes the use of the LED lighting equipment for simultaneous illumination and data transmission.
Data transmission as a secondary feature: Because the digital information sent through an existing lighting system is a secondary feature, it must not affect the primary function, illumination, in any way. Therefore, the use of the VLC technology for blind assistance should respect the following principles:
• VLC should not impact the existing lighting infrastructure from a hardware point of view, or should have a minimal impact;
• the VLC function should not affect lighting from a regular user's visibility point of view, meaning that the same lighting intensity should be provided; thus, the light intensity should not be increased in order to improve the Signal-to-Noise Ratio (SNR), nor should it be decreased unnecessarily;
• VLC should not generate visible or perceivable flickering;
• when light dimming is necessary, data transmission should remain available.
Communication coverage: Another aspect that should be considered is related to the coverage area. In order to be useful and safe, the VLC system should have the potential to provide wide-area coverage. Therefore, while considering the existing distribution of the lighting devices within the space, the prototype should be able to ensure VLC data transmission for the entire envisioned area.
User-centered data distribution: In order to be effective in blind user assistance, the solution should be able to localize the user in order to transmit location-specific information. Nevertheless, different from IoT or robot control applications where centimeter precision is required, high-precision localization is less important in this case, as the main objective is to have an estimation of the user's position.
Visible Light Communications-Based Smart Backpack for Visually Impaired Persons: Implementation Process
Within the context described above, this ongoing project was initiated with the purpose of designing, developing, and experimentally testing a novel visually impaired and blind assistance device that is discreet, versatile, user-friendly, and cost-efficient. The main aim of the project is to create a device that seamlessly integrates multiple functions into an item that does not draw attention to the user's impairment. To achieve this goal, the proposed concept takes the form of an everyday backpack, an item that is already useful in itself. This design approach offers ample space for concealing the various sensors required for enhanced functionality, ensuring that the final product supports the aforementioned requirements of discretion, utility, and multi-purpose use. The schematic of the proposed design is illustrated in Figure 1. As one can see, in addition to the basic backpack, the system consists of five main blocks.
The first block is the energy power block. For improved utility and versatility, the concept uses a 10 W PhotoVoltaic (PV) panel that improves the energy efficiency and the autonomy of the system. The resulting energy is stored in a 5 V/20,000 mAh power bank, which powers the smart backpack's other blocks. For improved utility, the user can also recharge their personal devices, such as smartphones or smartwatches, through several USB ports.

The most important component of the concept is the optical wireless communications block. To facilitate data exchange with the indoor lighting system, the smart backpack prototype employs two optical transceivers positioned on the upper side, more precisely on the backpack braces. On the other side of the VLC channel, the infrastructure-integrated wireless communication component includes an optical transceiver fitted into the indoor lighting network. These transceivers employ visible light for receiving data from the indoor lighting system and Infrared (IR) links for uploading information requests. The schematic of the prototype's optical wireless communications component is shown in Figure 2, emphasizing the infrastructure-integrated transmitter and one of the smart backpack transceivers. This bidirectional connection allows the user to request information and, in future developments, can potentially enable the VLC lighting infrastructure to determine the user's location using Time-of-Flight (ToF) measurement properties combined with Received Signal Strength protocols [9]. Currently, user requests primarily focus on emergency assistance and are limited to several specific points of interest, such as the location of a restroom, elevator, door, stairs, or the campus restaurant. Nevertheless, as the concept is further developed, additional points of interest could be defined.
The VLC transmitter has been integrated into the indoor lighting infrastructure of one of our research laboratories. Specifically, the actual setup is based on a luminaire positioned at a height of 3.5 m above the floor. The luminaire uses four 60 cm off-the-shelf LED tubes, each having a power of 9 W. Controlled through a digital driver by an ARM Cortex M7 processor operating at 600 MHz, the LED tube-based luminaire becomes an information broadcasting device. For VLC data transmission, the system uses an adapted form of Variable Pulse Position Modulation (VPPM) [37], with asynchronous communication, capable of providing a data rate of up to 100 kb/s. VPPM is a modulation technique suitable for VLC applications which combines Pulse Width Modulation (PWM) and Pulse Position Modulation (PPM), enabling precise light dimming without compromising data transmission. The design provides an optical intensity of 101 lux at the workspace level (2.7 m from the ceiling) when a 50% duty cycle is used. However, the light intensity can be adjusted from virtually the off state (1% duty cycle) to nearly the maximum intensity of the lighting device (obtained at a 99% duty cycle). Thus, it is important to emphasize that the system is designed to be compatible with light dimming, making it suitable for energy-saving applications while maintaining its blind assistance functions. For situations that imply high optical noise, the concept is also compatible with Binary Frequency-Shift Keying (BFSK) modulation [38], a solution that further enhances the system's resilience to noise.
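To make the VPPM dimming mechanism concrete, the following minimal C sketch generates the on/off pattern of one VPPM symbol. It is an illustration only: the slot-based software bit-banging, the helper names, and the console output are assumptions made for readability, not the prototype's actual driver implementation (which runs on an ARM Cortex M7 with a digital LED driver). What it does show is the defining property of VPPM: the duty cycle (dimming level) fixes the pulse width, which is identical for bit 0 and bit 1, while the bit value only shifts the pulse to the start or the end of the symbol, so the average light output stays constant and no flicker is introduced.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define SYMBOL_SLOTS 100  /* time slots per symbol -> 1% dimming resolution */

/* Fill one VPPM symbol with the LED on/off pattern for a single bit.
 * The duty cycle sets the pulse WIDTH (identical for both bit values,
 * so the perceived intensity is constant), while the bit value sets
 * the pulse POSITION:
 *   bit 1 -> pulse at the start of the symbol,
 *   bit 0 -> pulse at the end of the symbol. */
static void vppm_symbol(uint8_t bit, uint8_t duty_percent,
                        uint8_t slots[SYMBOL_SLOTS])
{
    for (int i = 0; i < SYMBOL_SLOTS; i++) {
        if (bit)
            slots[i] = (uint8_t)(i < duty_percent);                 /* leading pulse  */
        else
            slots[i] = (uint8_t)(i >= SYMBOL_SLOTS - duty_percent); /* trailing pulse */
    }
}

int main(void)
{
    uint8_t slots[SYMBOL_SLOTS];
    const uint8_t bits[] = {1, 0, 1, 1, 0};
    const uint8_t duty = 50; /* 50% duty cycle, as in the reference setup */

    for (size_t b = 0; b < sizeof bits / sizeof bits[0]; b++) {
        vppm_symbol(bits[b], duty, slots);
        printf("bit %u: ", (unsigned)bits[b]);
        for (int i = 0; i < SYMBOL_SLOTS; i += 10) /* coarse 10-slot view */
            putchar(slots[i] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```

Printing a coarse view of the slots at a 50% duty cycle yields "#####....." for bit 1 and ".....#####" for bit 0, which mirrors the waveform behavior discussed around Figure 5.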
In its turn, the VLC receiver represents a vital component of the smart backpack. It consists of an optical collecting system, a signal processing segment, and a data processing unit. The front-end employs an optical filter used to eliminate unwanted spectral components and a PIN photodiode-based optical detector with a Field-of-View (FoV) of ±53°, which converts the incident light into a proportional electrical signal. Then, the signal processing module handles tasks such as signal band-pass filtering, signal amplification, and signal reconstruction. More precisely, the signal passes through a 1 kHz-500 kHz 4th order band-pass Bessel filter, several preamplification stages, an adaptive gain control circuit that stabilizes the signal amplitude, and a Schmitt trigger circuit which provides the digital output containing the binary data. Finally, the data processing unit is responsible for real-time data decoding, data analysis, and Bit Error Rate (BER) measurement. To accomplish these tasks, an ARM Cortex M7 processor running at 1008 MHz is used. Table 1 summarizes the parameters of the prototype's VLC component.

The following block is the environment perception block. This unit consists of sensors that analyze the user's movement and detect potential obstacles. For improved effectiveness, an array of four obstacle-detection modules is distributed on the backpack. Two of these modules are positioned on each lateral side, while the other two are oriented towards the front, as depicted in Figure 3. Within each detection module resides a combination of a PIR sensor and an ultrasound sensor. The inclusion of these two distinct sensor types is motivated by their high compatibility with each other, which serves to reinforce the reliability of the information provided to the user. These obstacle-detection modules are strategically placed, each encompassing a 60-degree angle, resulting in a 240-degree coverage. Each module has a detection range set at 90 cm. Although longer ranges of up to 250 cm could be used, it was considered that the most relevant information is that from the immediate vicinity. Choosing a wider range can instead lead to situations in which all the sensors are constantly detecting certain obstacles, providing the user with too much information, which in turn can be distracting and less effective. Nevertheless, the obstacle detection range can be increased beyond 250 cm if the user requires it. The obstacle detection modules are also vertically adjustable, ensuring adaptability to the user's specific needs. The core function of these modules is to identify and alert the user to the presence of objects or individuals obstructing their path, thereby enhancing safety and facilitating smooth navigation. The environment perception unit also includes a gyroscopic sensor and an accelerometer. The gyroscopic sensor is used to monitor the user's orientation, in order to properly guide them in the right direction. As the chances of falling are significantly higher for blind persons, the accelerometer is envisioned to identify potential situations in which users have fallen and/or are injured.
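As a simple illustration of how the two sensor types in one detection module can corroborate each other, consider the C sketch below. The driver functions, the returned values, and the person/object distinction drawn from the PIR reading are hypothetical assumptions added for the example; the article itself only states that the PIR and ultrasound readings are combined to reinforce reliability and that the default range is 90 cm.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define DETECTION_RANGE_CM 90u /* default monitoring range from the text */

typedef enum { NO_OBSTACLE, STATIC_OBSTACLE, PERSON_IN_PATH } obstacle_t;

/* Hypothetical stand-ins for the real sensor drivers; on the prototype
 * these would read the ultrasound ranger and the PIR element of one of
 * the four detection modules. */
static uint16_t ultrasound_distance_cm(int module_id) { (void)module_id; return 70; }
static bool pir_motion_detected(int module_id)        { (void)module_id; return true; }

/* Report an obstacle only when the ultrasound ranging places it inside
 * the configured range; a simultaneous PIR event suggests a moving
 * person rather than a static object, which corroborates the reading. */
static obstacle_t classify_obstacle(int module_id, uint16_t *distance_cm)
{
    uint16_t d = ultrasound_distance_cm(module_id);
    if (d > DETECTION_RANGE_CM)
        return NO_OBSTACLE;
    *distance_cm = d;
    return pir_motion_detected(module_id) ? PERSON_IN_PATH : STATIC_OBSTACLE;
}

int main(void)
{
    uint16_t d = 0;
    obstacle_t o = classify_obstacle(0, &d);
    printf("front-left module: class %d at %u cm\n", (int)o, (unsigned)d);
    return 0;
}
```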
While the communication component excels in indoor VLC-covered areas, the obstacle-detection component complements it by providing supplementary information. More importantly, it also extends its utility beyond the limits of VLC-enabled indoor environments, making it useful for both indoor and outdoor settings. Thus, once the smart backpack prototype has received the VLC data from the indoor lighting infrastructure and the information from the environment perception module, a data processing unit (again an ARM Cortex M7 board at 600 MHz) analyzes them and decides what type of information should be transmitted to the user.
Another important module of the smart backpack prototype consists of the block that enables the user to exchange information with the backpack and vice versa. Thus, to request information on certain points of interest, the user relies on a gesture sensor which recognizes predefined gestures and assigns them to a certain request. The request is then analyzed and communicated through the embedded IR transmitter module to the optical transceiver integrated in the indoor lighting infrastructure. The lighting infrastructure module analyzes the request and provides a response, which is transmitted using VLC. When the backpack VLC component receives the data, it analyzes them and provides the user with the requested information. The interaction with the user is made through a synthesized voice. Nevertheless, there are situations in which users might require their hearing for other purposes. For such circumstances, the smart backpack integrates a series of four vibration motors. Two of them are located on the backpack shoulder straps, one for each arm, and the other two are located on the bottom of the backpack, being in contact with the user's lumbar area, on the left and right sides. Thus, information is transmitted based on a predefined haptic language that the user has to get used to. The four vibration motors are also assigned to the four obstacle sensors. More exactly, each obstacle detection sensor, monitoring a certain area around the user, is assigned to one vibration motor. Thus, when an obstacle is detected on the user's front left side (i.e., the user could hit the obstacle with their left shoulder while moving forward), the vibration motor on the left shoulder strap begins to vibrate (i.e., distance-dependent vibrations). Similarly, to suggest that a wall is located on the left side, the vibration motor in contact with the user's lumbar area is activated (i.e., distance-dependent vibrations). Accordingly, the 90 cm monitoring range of each sensor is divided into three sectors: 0-30 cm, 30-60 cm, and 60-90 cm. Further on, when an obstacle is detected in a certain sector, the vibration motor associated with that sensor generates vibrations of a certain frequency, where the vibration frequency is in accordance with the distance to the obstacle. Thus, when the obstacle is detected at 90 cm, the frequency of the vibrations is low, whereas as the user gets closer to the obstacle, the frequency of the vibrations increases. This versatility allows the system to accommodate a variety of scenarios and user preferences. Figure 3 illustrates the distribution of the modules on the smart backpack; a minimal sketch of the distance-to-vibration mapping is given below.
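The following C sketch illustrates the sector-based mapping described above. The sector boundaries (0-30, 30-60, 60-90 cm) and the "closer means faster" rule come from the text; the concrete frequency values and the helper names are assumptions made for the example.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* The four motors described in the text, one per obstacle sensor. */
enum motor { LEFT_SHOULDER, RIGHT_SHOULDER, LEFT_LUMBAR, RIGHT_LUMBAR };

/* Map a measured distance to a vibration frequency in Hz.
 * Sector boundaries (0-30, 30-60, 60-90 cm) follow the text;
 * the frequency values themselves are illustrative assumptions.
 * 0 Hz means "motor off" (no obstacle inside the 90 cm range). */
static unsigned vibration_freq_hz(uint16_t distance_cm)
{
    if (distance_cm <= 30) return 12; /* nearest sector: fast pulses  */
    if (distance_cm <= 60) return 6;  /* middle sector                */
    if (distance_cm <= 90) return 2;  /* farthest sector: slow pulses */
    return 0;
}

int main(void)
{
    /* Example: an obstacle approaching the front-left sensor. */
    const uint16_t readings_cm[] = {120, 85, 55, 25};
    for (size_t i = 0; i < sizeof readings_cm / sizeof readings_cm[0]; i++)
        printf("distance %3u cm -> motor %d vibrates at %u Hz\n",
               (unsigned)readings_cm[i], (int)LEFT_SHOULDER,
               vibration_freq_hz(readings_cm[i]));
    return 0;
}
```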
Experimental Testing Procedure, Experimental Results and Discussions Concerning the Importance of This Work
The following section presents the aspects related to the intensive experimental testing procedure of the blind persons' assistance smart backpack. It details the experimental evaluation method and the associated experimental results, focusing on the evaluation of the individual system components, as well as on the prototype's evaluation in a complex setup.
Experimental Testing Procedure
To validate the feasibility of the proposed concept and plan the future course of this project, the VLC-based smart backpack prototype underwent experimental assessment in controlled laboratory conditions. As depicted in Figure 4, the optical communications component was integrated into the indoor lighting system, forming the basis for the experimental testing environment. These initial tests were structured into two distinct phases, each serving a unique and different purpose.
In the first phase, the system's capability to receive data from the indoor lighting component and relay it to the user in the form of audio and haptic information was evaluated. Thus, this first set of tests focused on investigating the VLC component's ability to provide low-BER communication, to support communication in light dimming conditions, to support user mobility, and to maintain connectivity within the area of the VLC transmitter. Therefore, these tests aimed to confirm the proper functionality of the VLC component.
The second phase focused on assessing the system's ability to operate in scenarios where VLC coverage is unavailable or obstructed. In this regard, the backpack was used by two individuals who were blindfolded. Their task was to exit the laboratory, traverse a 23 m corridor, open a door, and reach a point situated behind it to find an object placed on a table. During this process, the user wearing the backpack initially received location guidance from the indoor lighting system through VLC. Upon processing this signal, the user was provided with audio instructions such as "Walk 4 meters to reach the door. After passing through the door, turn right and proceed down the 23-m hallway. The destination is 1 meter behind the door, on the left side." As the users moved beyond the VLC-covered area, they relied only on the obstacle detection sensors integrated into the backpack. As previously mentioned, these sensors provide relevant information to the user through haptic feedback, utilizing the four vibration motors strategically positioned within the backpack. Vibration frequencies vary based on the distance to the obstacle, with higher frequencies indicating closer proximity to the obstacle.
This experimental setup was conducted with each of the two users repeating the task only five times. Although five experiments could be considered insufficient from a statistical point of view, the number of trials was limited in order to prevent the users from becoming accustomed to the route, a fact that would have influenced the results. Consequently, one can consider that the results of these tests are relevant for users traveling in unfamiliar locations. For comparative purposes, the same task was also performed by a blindfolded user who did not use the smart backpack. It is important to underscore that individuals with visual impairments often face challenges in maintaining a straight direction, and therefore the proposed solution seeks to address this specific issue by providing navigational assistance.
Apart from these tests, where the smart backpack should guide the user from one point to the other by relying on either VLC or the obstacle detection sensors, another obstacle detection challenge was introduced. In order to test the prototype's effectiveness in providing a safe path, obstacles were intermittently introduced along the path to test whether the device could assist the users in safely navigating from one location to another. This third test aimed to determine whether the prototype is able to prevent users from bumping into persons present in the path or into certain objects. Based on a meeting with a blind student and on studies focusing on blind persons' problems, our team found that one of the most challenging situations involves obstacles located at chest and head level, as these objects cannot be identified with the help of the blind stick. Otherwise, the blind stick is very effective in locating objects and ground-level anomalies.
Experimental Results for the Visible Light Communications Component Evaluation
One of the purposes of the experimental evaluation of the VLC component was to confirm the capacity of the system to reliably transmit data from the LED-based luminaire to the VLC receiver. Compared to the vehicular VLC channel, which primarily involves outdoor conditions [37][38][39][40], the indoor one is definitely less challenging, as it involves a communication range of only a few meters, fewer optical interferences, and a smaller degree of unpredictability, so good results would be expected.
Another purpose of this experimental investigation was to determine the system's ability to maintain connectivity when the user (i.e., the VLC receiver) is moving inside the VLC transmitter area. To be effective, such a system should be able to maintain connectivity even within a wider area, and not only directly under the luminaire. More precisely, as the VLC receiver is oriented upwards, when it moves away from the VLC transmitter, the incidence angle increases. Given that the current generated by the photodiode has a cosine dependency on the incidence angle, whereas the luminaire directs its light downwards, toward the workspace, one can see that the communication link can be affected. Therefore, these experiments are also useful in determining the system's communication coverage for a given BER limit.
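For reference, this cosine dependency follows from the standard Lambertian line-of-sight channel model used throughout the indoor VLC literature; the formulation below is the textbook model, not one stated explicitly in this article:

$$H(0)=\frac{(m+1)A}{2\pi d^{2}}\cos^{m}(\phi)\,T_{s}(\psi)\,g(\psi)\cos(\psi),\qquad 0\le\psi\le\Psi_{c},$$

where $m=-\ln 2/\ln(\cos\Phi_{1/2})$ is the Lambertian order of the LED, $A$ is the photodiode area, $d$ is the transmitter-receiver distance, $\phi$ is the irradiance angle, $\psi$ is the incidence angle, $T_{s}(\psi)$ is the optical filter gain, $g(\psi)$ is the concentrator gain, and $\Psi_{c}$ is the receiver FoV (±53° for this prototype). The $\cos(\psi)$ term explains why the received power, and thus the SNR, drops as the user moves away from the luminaire axis.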
Thirdly, the purpose of these experiments was to confirm the VLC system's ability to provide simultaneous light dimming and data communication. As energy efficiency is becoming a major preoccupation for human society, compatibility with light dimming, and the system's ability to work in situations in which the user does not need the lighting function, become very important. Figure 5 exemplifies an oscilloscope capture showing the signals received by the VLC receiver. It also displays the manner in which the incident light is gradually transformed into a ready-to-use digital signal. Additionally, this figure illustrates the VPPM light dimming mechanism, its working principles, and a brief comparison with classical Manchester coding. As one can see, VPPM enables wide control over the lights-on period and, therefore, good control over the light intensity. Additionally, as the T_ON is similar for bit 1 and bit 0, no light flickering is introduced.

The summary of the experimental results describing the VLC component performance is provided in Table 2. As expected, the high SNR, corroborated with an optimized VLC hardware design, enabled the system to provide and maintain an extremely low BER. Thus, a BER lower than 10−7 is achieved without any use of error correcting techniques. For comparison, previous experience has shown that with an adequate hardware design and with an optimized software data extraction algorithm, a low BER can be maintained as long as the SNR does not drop below a 0-1 dB limit [38][39][40]. The very low BER is also the result of the integration of an Automatic Gain Control (AGC) circuit within the VLC receiver. The AGC constantly adjusts the signal amplification, compensating for the decrease in the incident power, while also preventing the over-amplification that could lead to signal distortion and, in turn, to bit errors. Furthermore, the results of the intensive experimental evaluation showed that the VLC receiver is able to maintain this low BER even in light-dimming conditions, as well as when the VLC receiver is moving away from the center of the VLC transmitter coverage area. Thus, it has been demonstrated that the system can provide connectivity for a circular region with a 3 m radius. As the lighting fixtures in the experimental setup are placed 1.5 m away from each other, this demonstrates that the proposed design is more than suitable to ensure the coverage of the entire area, and that it can definitely be replicated for other areas as well. Another aspect that has to be debated is related to the system's ability to work in "lights-off" conditions. On this topic, it has been experimentally demonstrated that the VLC transmitter can work with duty cycles as low as 1%, which generate an illuminance that can be considered close to the point where the lights are off. In its turn, it has been demonstrated that both the hardware and the software components of the VLC receiver are able to handle the 1% duty cycles. This demonstrates that the proposed concept is compatible with energy-saving applications, as well as with applications where lighting is not necessary all the time. On the other hand, for duty cycles below 10%, the complexity of the software routines forces the hardware to lower its data rate from 100 kb/s to 10 kb/s in order to keep the same quality of data transmission as for duty cycles of 10% and above. In this case, the higher BER limit is the result of a lower number of transmitted bits. Additionally, when the duty cycle is lowered, a slightly higher BER is expected; the reason for this is the wider bandwidth imposed for the band-pass filtering.
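Since the data processing unit performs real-time BER measurement, the following C sketch shows one minimal way such a measurement can be made, by XOR-comparing a received buffer against a known reference pattern and counting the differing bits. The reference-pattern approach and the function names are illustrative assumptions, not the prototype's documented procedure.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Count bit errors between a received buffer and the reference pattern
 * the transmitter is known to broadcast during the test, then divide
 * by the total number of transmitted bits. */
static double estimate_ber(const uint8_t *rx, const uint8_t *ref, size_t n_bytes)
{
    size_t errors = 0;
    for (size_t i = 0; i < n_bytes; i++) {
        uint8_t diff = rx[i] ^ ref[i]; /* differing bits are set to 1 */
        while (diff) {                 /* per-byte popcount           */
            errors += diff & 1u;
            diff >>= 1;
        }
    }
    return (double)errors / (double)(n_bytes * 8u);
}

int main(void)
{
    const uint8_t ref[4] = {0xAA, 0x55, 0xAA, 0x55};
    const uint8_t rx[4]  = {0xAA, 0x55, 0xAB, 0x55}; /* one flipped bit */
    printf("BER = %.3e\n", estimate_ber(rx, ref, sizeof ref)); /* 3.125e-02 */
    return 0;
}
```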
Experimental Results for the Obstacle Detection Component Evaluation
Once the performance of the VLC component was confirmed, the next phase focused on the obstacle detection component. Before moving forward to the evaluation in a complex situation, the first tests aimed to determine whether the system is able to safely localize potentially dangerous objects that could obstruct the path of a blind user. As mentioned in previous sections, these tests aimed to determine the system's efficacy in preventing the user from bumping into other persons or hitting objects located in the path and, most importantly, its capacity to localize objects that cannot be located by a blind stick, referring here to objects located at chest and/or head level. The envisioned situations are illustrated in Figure 6, whereas the summary of the experimental results is provided in Table 3.
Table 3. Summary of the obstacle detection experimental results.

Test Objective | Number of Trials | Successful Detections
Human in the pathway detection | 100 | 100
Chest and head obstacle detection (1) | 100 | 100
Dispenser in the pathway detection (2) | 100 | 100

(1) Testing setup is illustrated in Figure 6a. (2) Testing setup is illustrated in Figure 6b.
The experimental findings also demonstrate the prototype's efficacy in assisting visually impaired individuals with key aspects of navigation and spatial awareness. Specifically, the prototype aids users in maintaining a straight trajectory, detecting the presence of individuals or obstacles in their path, signaling the proximity of a door/wall, and enhancing their overall environmental perception.
Thus, as illustrated in Figure 7, users exhibit a notably straighter trajectory when utilizing the prototype. This improved directional control, coupled with the increased confidence imparted by the system, results in a significant reduction in travel times. Consequently, users are less apprehensive about encountering obstacles, leading to a noteworthy reduction in the average travel time, from 53.6 s to 39.3 s. It is worth noting that further enhancements are anticipated as users become more accustomed to the system's operation and as the prototype is further calibrated to users' specific requirements, promising even greater improvements in their navigation experience.
Discussion and Future Perspectives Regarding the Use of the Visible Light Communications Technology in Visually Impaired Persons' Assistance
Based on the results of the experimental evaluation, it can be considered that the proposed VLC-based smart backpack concept represents an innovative and versatile solution with the potential to revolutionize navigation, communication, and safety in both indoor and outdoor environments, helping blind persons to travel in unfamiliar locations. In summary, the proposed VLC-based smart backpack offers several compelling advantages and promising prospects. The VLC technology provides the concept with a versatile data transfer solution, enabling seamless data transfer in conjunction with the lighting function and making efficient use of limited power resources. Thus, the use of VLC delivers unlicensed spectrum access and a vast bandwidth of 400 terahertz, providing support for multiple applications ranging from user localization to location-dependent data distribution. Therefore, different from most existing technologies, VLC is very suitable for location-oriented data distribution, providing the user with more relevant information. Accordingly, each luminaire is able to provide the user with accurate and specific information correlated with its location, enabling improved context-aware assistance and navigation.
One of the main advantages of using VLC in blind persons' assistance comes from its relatively simple integration within the preexisting lighting infrastructure, which in turn offers support for the development of a ubiquitous assistance solution. In addition, as the proposed solution is developed on a preexisting lighting infrastructure, the implementation cost is partially reduced. The cost-efficiency is also provided by the relatively simple architecture and by the multipurpose functionality. Thus, the cost of the solution becomes controllable, and it can be estimated that the cost of a luminaire upgrade is below 100 euros, a cost that could go even lower if mass production is adopted. Furthermore, the low energy consumption associated with LED use and the VLC and light dimming functions further improves the cost-efficiency. For comparison, the operation of an RF-based solution requires continuous additional energy consumption, whereas in VLC the data transmission function requires no additional energy, as the light used for illumination also serves as the carrier for the data. As lighting systems are present in all public places, from airports to schools and public institutions, the development of such a solution can provide blind persons with access and personalized support in these areas, improving their independence.
From the VLC usage point of view, the experimental results have confirmed a very low BER (i.e., ranging within 10−7-10−6), a wide area coverage, and, very importantly, compatibility with mobility and light dimming. From this point of view, it is important to emphasize that although numerous works address VLC light dimming capabilities [41][42][43][44], only a few provide experimental demonstrations of such concepts [44]. Nevertheless, in [44], a limited 25-85% dimming range was demonstrated, with a 10−3 BER and a communication range below 1 m. Furthermore, demonstrating the system's ability to maintain the data link even for a 1% duty cycle emphasizes the concept's compatibility with energy-saving applications and with the current preoccupations for energy efficiency.
Another important contribution of this work comes from the fact that, unlike most works that promote the development of a new notion or of a new technology, this work delivers the integration of the proposed concept into a functional device, together with an extended experimental investigation that demonstrates the concept's utility in a relevant use case. Thus, this work not only introduces the use of the VLC technology in severely visually impaired and blind persons' assistance, but also provides a relevant experimental demonstration of its utility. Therefore, although the proposed concept is not yet a commercial, market-ready device, it represents a TRL 6 product, providing the basis for future enhancements. Thus, this work aimed to make the transition from fundamental research to experimental research, contributing to the deployment of the VLC technology in new applications. Hence, although several other works have analyzed the use of VLC in blind persons' assistance, the prototype presented in this article is one of the most advanced, benefiting from a more realistic integration, more intensive testing, and an overall enhanced design. Furthermore, to improve its capacity and to enable its use in areas with no VLC coverage, the smart backpack concept also integrates environment perception sensors that provide 240° scanning around the user, enabling the detection of obstacles located in the user's path. Additionally, these sensors significantly improve the trajectory and travel time of visually impaired users, instilling confidence and reducing the risk of collisions with other persons or obstacles, reducing in turn the chances of accidents during the walk. Thus, as the experimental results have confirmed, the use of this sensor fusion enabled the users to maintain a straighter trajectory and to improve their confidence, resulting in shorter walking times.
An important aspect that should be clarified relates to the functionality of the proposed solution in a setup with multiple lighting devices. The current version of the prototype did not address the issues associated with multiple lighting devices. Nevertheless, the indoor VLC literature has widely addressed these issues and has identified solutions that allow user mobility, provide hand-over mechanisms between multiple light sources, and enable resource sharing among multiple users [45-47]. Consequently, user movement and the synchronization between multiple lighting devices can be considered manageable. One limitation of the proposed solution comes from the relatively low data rates; for comparison, record data rates in indoor VLC applications reach a few tens of Gb/s [48]. Although the experimental results showed that the prototype is compatible with user movement, the mandatory Line of Sight (LoS) condition imposed on VLC systems is probably the most important limitation of the concept. It could lead to communication blockage in circumstances where the user is no longer within the VLC luminaire coverage. In such situations, the user has to rely on the previously received data and on the obstacle detection sensors until the link is reestablished.
Overall, the experimental evaluation of each of the smart backpack components, and of the backpack in a complex scenario, has confirmed the benefits of the proposed solution, as well as the benefits resulting from the fusion of multiple technologies and data sources.
Finally, it should be reemphasized that the utility of the concept is further enriched by the backpack's ability to be quasi-energy-independent thanks to the 10 W photovoltaic panel.
Conclusions
Acknowledging the fact that blindness and severe visual impairment affect human life in a serious manner, this work focused on investigating the way in which VLC technology, along with several types of sensors, can be used for indoor navigation purposes. For this goal, a novel VLC-based smart backpack for blind persons' assistance has been designed, implemented, and experimentally evaluated. The basic principle behind this concept is that although these people are not able to perceive the light, they have other senses that can be used for information reception. Thus, the smart backpack prototype is able to convert the light carrying the data from the indoor lighting system into audio or haptic information that can be perceived by blind people. In this manner, users can receive information concerning different points of interest, enabling them to travel in unfamiliar public places, contributing to enhanced independence and facilitating blind people's social inclusion. Consequently, the location of an elevator, of an airport terminal, or of a restroom can be transmitted upon request; it is emphasized here that the response information is location-oriented, meaning that the infrastructure is able to adapt and distribute the data in accordance with the user's location.
Therefore, this article presented the results of experiments conducted to assess the functionality of a novel VLC-based smart backpack designed to aid visually impaired individuals. The initial experimental outcomes confirmed the capability of the proposed prototype to extract data from modulated light sources and convert it into audio information, thereby assisting VIPs in navigating unfamiliar environments. Moreover, it has been experimentally demonstrated that the VLC component is able to ensure data transfer while maintaining a BER in the 10−7–10−6 range even under misalignment and/or light dimming conditions. Furthermore, the smart backpack is equipped with obstacle-detection sensors, which serve as a useful feature, especially in areas lacking VLC coverage. These sensors play an important role in helping users maintain a straight path while traversing a long straight area and in detecting potential obstacles, thereby further enhancing their mobility and safety in unfamiliar surroundings.
The future outlook for the VLC-based smart backpack is promising. The ongoing efforts within this project are concentrated on enhancing the indoor lighting system's capacity to determine the user's location through visible light positioning technology. This development aims to refine the accuracy and relevance of the support information provided to VIPs, further optimizing their navigation experience. Additionally, current work is focused on the development and integration of a voice recognition function able to process users' voice commands.
Finally, it is important to emphasize that although additional functions can and will be implemented, the current work provides very clear evidence of the potential and benefits associated with the use of VLC in blind persons' assistance.
Figure 1. Schematic representation of the VLC-based smart backpack for blind users' assistance.
Figure 2. Schematic of the smart backpack visible light communications component.
Figure 3. Visible light communications-based smart backpack for blind and severely visually impaired persons' assistance: (a) Front and lateral side view; (b) Backside view.
Figure 4. Visible light communication testing scenario: The indoor lighting system uses VLC to provide the user with information concerning a certain point of interest. After leaving the area, the user is no longer in VLC coverage and has to rely on the embedded obstacle localization sensors and on the haptic systems to travel the distance to the point of interest.
Figure 5. Illustration of signal processing at the VLC receiver level: Channel 1 (yellow) represents the output of the optical receiver; Channel 2 (cyan) represents the signal after filtering and amplification; Channel 4 (green) shows the digital signal used by the microcontroller to extract the binary data. The figure also depicts the use of VPPM modulation in a 30% duty cycle example and the light dimming principles, and provides a comparison with traditional Manchester coding.
Figure 6. Example of the smart backpack for blind persons' assistance in experimental use: the prototype's obstacle sensors are able to identify and inform the user that (a) an open glass door is obstructing the path; (b) a dispenser is in the path and an open door is found on the user's right side.
Figure 7. Experimental results showing the path taken by four blindfolded users as they navigated from point A to point B while utilizing the VLC-based smart backpack. The results revealed a notable enhancement in the users' path and a significant reduction in their travel time: (a) With backpack assistance; (b) Without any kind of assistance.
Table 1. Summary of the VLC parameters.
Table 2. Summary of the prototype VLC experimental results.
Table 3. Summary of the obstacle detection tests. | 2023-11-29T16:25:26.109Z | 2023-11-25T00:00:00.000 | {
"year": 2023,
"sha1": "20eabf1c024da1787fb49bdd77c251c68d4cddb4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/23/9406/pdf?version=1700991567",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0c5e45781ea78d9892d31615b0e8fcfd91c8d1b",
"s2fieldsofstudy": [
"Engineering",
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52216226 | pes2o/s2orc | v3-fos-license | Demineralised Lignite Fly Ash for the Removal of Zn(II) Ions from Aqueous Solution
Among the various possibilities for limiting the disposal of fly ashes (lignite), their reutilization as adsorbent materials is worthy of consideration. To this end, proper ash beneficiation techniques can be put into practice. The adsorption of toxic compounds from industrial wastewater is an effective method both for treating these effluents and for recycling lignite fly ash. The aim of this paper is to contribute to understanding the relationships among beneficiation treatments, adsorbent properties, and adsorption mechanism and efficiency. In this context, lignite fly ash was demineralised using concentrated HCl and HF (FA-DEM) and used as an adsorbent for Zn(II) ions from aqueous solutions. Batch experiments were carried out under various adsorbent dosages, pH values, contact times, and metal ion concentrations. For FA-DEM, 57.7% removal of Zn(II) ions was achieved under the optimum conditions of an adsorbent dosage of 4 g/L, pH 6, temperature of 303 K, and contact time of 1.15 h. The adsorption of Zn(II) ions onto FA-DEM followed pseudo-second-order kinetics. The Langmuir isotherm model best represented the equilibrium data.
Introduction
Heavy metals are present in wastewaters because of discharge by industry, for example electroplating, inorganic pigment manufacture, wood processing, photographic operations, and petroleum refining. Small amounts of some heavy metals are essential for organisms, but excessive levels may be harmful to the organisms and cause serious health effects (cancer, liver damage, renal disorder, visceral cancers, insomnia, depression, lethargy, vomiting) [1]. To minimize human and environmental exposure to hazardous heavy metals, the US Environmental Protection Agency (US EPA) established a limit of 0.8 mg/L for zinc discharged into wastewater. Heavy metals may deactivate the activated sludge (by poisoning the bacteria) in secondary treatment plants [2]; therefore, chemical treatment must be used to remove heavy metals before the biological step. Various physical and chemical methods are used for removing heavy metals from industrial wastewater, including adsorption, ion exchange, complexation, and membrane separation [3,4]. The most common and widely used method for removing heavy metals from wastewater is chemical precipitation with caustic soda or lime [5]. This method is inexpensive, but it requires a large amount of chemicals and produces a large quantity of sludge that requires supplementary treatment. Replacing synthetic substrates with low-cost adsorbents has therefore been intensively studied, and there have been reports of the use of materials obtained from agriculture and forest wastes, for example bagasse fly ash [6], sugar beet pulp [7], activated carbon derived from bagasse [8], maple sawdust [9], clay [10,11], volcanic ash, bone char [12], humus [13], or bituminous coal, for the removal of heavy metals. Removal of heavy metals (cadmium, copper, zinc, and nickel) on scrap rubber, bituminous coal, peat [14], natural zeolite [15], alkali-treated lignite fly ash, and alkali followed by methyl orange treated lignite fly ash [16] has been reported.
Fly ash is an amorphous mixture of ferro-aluminosilicate minerals generated by the combustion of ground or powdered coal [17]. Approximately 70% of the combustion by-products is fly ash collected in electrostatic precipitators. This is the most difficult by-product to handle [18], and there is a need for environmentally friendly uses of fly ash. Chemically, 90-99% of fly ash comprises Si, Al, Fe, Ca, Mg, Na, and K, with Si and Al being the major components. The applications of fly ash depend on the presence of basic mineral elements resembling the earth's crust, which makes it an excellent substitute for natural materials. Although many papers can be found in the literature on the possibility of removing heavy metals from wastewaters by adsorption on low-cost activated carbons, the intertwining among waste properties, beneficiated materials, adsorption mechanisms, and efficiencies has only been partly elucidated [19,20].
In this paper we report the efficiency of Zn(II) uptake on lignite fly ash modified by treatment with 4 M HCl followed by treatment with HF (FA-DEM). The adsorption kinetics and substrate capacity are discussed and correlated with the surface structure (SEM and EDAX) through a batch mode study.
Preparation of Adsorbents
The fly ash used for this study was collected from the NLC Power Plant, Neyveli, Tamil Nadu, India. 200 g of the raw fly ash was treated with 200 mL of 4 M HCl and stirred magnetically at 60 °C for an hour. The solution was then allowed to settle for 12 hours and washed repeatedly with distilled water until the conductivity of the filtrate was below 200 µS. It was then filtered and dried in a hot air oven at 105 °C. The dried acid-treated fly ash (FA-HCl) was then powdered and treated with HF solution [40%, liquid/solid ratio 0.006 dm³/g] at 60 °C for one hour over a water bath. After treatment, it was filtered, washed, dried at 105 °C, and used for further studies. The yield of FA-DEM was found to be 112 g. The reagents (ZnSO₄·7H₂O, NaOH, HCl, and HNO₃) were of analytical grade and were used without further purification. Stock solutions were prepared by dissolving 1 g of ZnSO₄ in one litre of water. Double distilled water was used throughout the study.
Acidity and Basicity (Boehm Titration)
Acidity and basicity were estimated by mixing 0.2 g of adsorbent (FA-DEM) with 20 mL of 0.1 M NaOH, 20 mL of 0.1 M Na₂CO₃, or 20 mL of 0.1 M NaHCO₃ in separate closed flasks and agitating for 48 h at room temperature. The mixtures were filtered, and a 5 mL aliquot of each filtrate was pipetted out and titrated with 0.1 M HCl [21].
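As a back-of-envelope companion to the titration procedure above, the following sketch (our own; the HCl titre is an illustrative input, since no titration volumes are reported in the paper) converts an aliquot titre into a surface site density:

```python
def boehm_sites(v_hcl_ml, v_base_ml=20.0, c_base=0.1, v_aliquot_ml=5.0,
                c_hcl=0.1, mass_g=0.2):
    """Surface sites (mEq/g) neutralizing the base consumed by 0.2 g of solid."""
    n_initial = v_base_ml * c_base                          # mmol of base added
    n_left = (v_base_ml / v_aliquot_ml) * v_hcl_ml * c_hcl  # mmol left unreacted
    return (n_initial - n_left) / mass_g

print(f"{boehm_sites(3.2):.2f} mEq/g")  # e.g. a 3.2 mL titre -> 3.60 mEq/g
```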
Fourier Transform Infrared Analysis
Functional groups in FA-DEM were examined using FTIR analysis. FTIR spectrophotometry is based on changes in dipole moment resulting from bond vibration upon absorption of IR radiation. Spectra were recorded at room temperature using a Perkin Elmer Spectrum RX1 spectrophotometer (version 5.3) over the spectral range 4000 to 400 cm⁻¹ with a resolution of 4 cm⁻¹.
Adsorption and Kinetic Studies
A stock solution of ZnSO₄·7H₂O (1000 mg/L) was prepared and suitably diluted to the various required initial concentrations. Adsorption studies were carried out at room temperature (28 ± 5 °C). Batch adsorption studies used 0.2 g of adsorbent per bottle with 50 mL of solution of the required concentration, with the solution pH varied from 2 to 9, in a bench shaker at a fixed shaking speed of 120 rpm. The resulting mixture was filtered (Whatman filter paper No. 41) and the final concentration of the metal ions in the filtrate was determined with a UV-2450 vis spectrophotometer at a λmax of 213 nm. The pH of the solution was adjusted using 0.1 M HCl and 0.1 M NaOH, and buffer solution was used to maintain the exact pH. The experiments were carried out for various adsorbent dosages, initial Zn(II) ion concentrations, contact times, and initial solution pH values. From the initial and final concentrations, the percentage removal can be calculated by

Removal (%) = 100 × (C0 − Cf)/C0

where C0 is the initial concentration of Zn(II) ions in mg/L and Cf is the final concentration of Zn(II) ions in mg/L. The data obtained in batch mode kinetics were used to calculate the equilibrium metal uptake capacity, i.e., the adsorbed quantity of Zn(II) ions, using the following expression:

qe = (C0 − Ce) v / w

where qe is the equilibrium metal ion uptake capacity in mg/g, v is the sample volume in litres, C0 the initial metal ion concentration in mg/L, Ce the equilibrium metal ion concentration in mg/L, and w is the dry weight of adsorbent in grams.
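Both expressions translate directly into code; a minimal sketch with illustrative concentrations (not the paper's raw data) is:

```python
def removal_percent(c0, cf):
    """Percentage removal from initial and final concentrations (mg/L)."""
    return 100.0 * (c0 - cf) / c0

def uptake_qe(c0, ce, volume_l, mass_g):
    """Equilibrium uptake in mg of metal per g of adsorbent."""
    return (c0 - ce) * volume_l / mass_g

c0, ce = 100.0, 42.3                       # mg/L, illustrative values
print(removal_percent(c0, ce))             # 57.7
print(uptake_qe(c0, ce, volume_l=0.050, mass_g=0.2))  # 14.4 mg/g
```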
Adsorption Isotherms
Equilibrium studies were undertaken to understand the behaviour of the adsorbent under equilibrium conditions. Equilibrium data are basic requirements for the design of adsorption systems, and adsorption models are used for the mathematical description of the adsorption equilibrium of the metal ion on the adsorbent. The results obtained for the adsorption of Zn(II) ions were analysed with the well-known Langmuir, Freundlich, Tempkin, Dubinin-Radushkevich, Harkin-Jura, and Frenkel-Halsey-Hill isotherm models. For the sorption isotherms, the initial metal ion concentration was varied while the solution pH and adsorbent weight were held constant in each sample. The sorption isotherms were measured with FA-DEM at solution pH 6. The Langmuir isotherm assumes monolayer adsorption onto a surface containing a finite number of adsorption sites [22]. The linear form of the Langmuir equation is

Ce/qe = 1/(KL Qe) + Ce/Qe

where Qe (mg/g) and KL (dm³/g) are Langmuir constants related to the adsorption capacity and the rate of adsorption. The Freundlich isotherm assumes heterogeneous surface energies, in which the energy term in the Langmuir equation varies as a function of the surface coverage [23]. The well-known logarithmic form of the Freundlich isotherm is given by

log qe = log KF + (1/n) log Ce

where KF ((mg/g)(l/mg)) and 1/n are the Freundlich adsorption constant and a measure of adsorption intensity.
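A minimal sketch of the two linearised fits above is given below; the (Ce, qe) pairs are synthetic placeholders, since the raw equilibrium data are not tabulated here:

```python
import numpy as np

ce = np.array([10.0, 25.0, 50.0, 75.0, 100.0])   # mg/L (placeholder data)
qe = np.array([4.1, 7.2, 9.6, 10.5, 11.0])       # mg/g (placeholder data)

# Langmuir: Ce/qe = 1/(KL*Qe) + Ce/Qe -> slope = 1/Qe, intercept = 1/(KL*Qe)
slope, intercept = np.polyfit(ce, ce / qe, 1)
Qe, KL = 1.0 / slope, slope / intercept
print(f"Langmuir: Qe = {Qe:.2f} mg/g, KL = {KL:.4f} L/mg")

# Freundlich: log qe = log KF + (1/n) log Ce
s_f, i_f = np.polyfit(np.log10(ce), np.log10(qe), 1)
print(f"Freundlich: KF = {10**i_f:.2f}, 1/n = {s_f:.2f}")
```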
The Tempkin isotherm assumes that the heat of adsorption (a function of temperature) of all molecules in the layer decreases linearly rather than logarithmically with coverage. Its derivation is characterized by a uniform distribution of binding energies (up to some maximum binding energy) [24]. The Tempkin isotherm has been used in the form

qe = B ln A + B ln Ce

where B = RT/b, and b, A, R, and T are the Tempkin constant related to the heat of sorption (J/mol), the equilibrium binding constant (l/g), the gas constant (8.314 J/(mol·K)), and the absolute temperature (K), respectively.
The D-R model was applied to estimate the porosity, apparent free energy, and characteristics of adsorption [25-27]. The D-R isotherm does not assume a homogeneous surface or a constant adsorption potential. The D-R model has commonly been applied as Equation (6), with its linear form given in Equation (7):

qe = Qm exp(−Kε²)    (6)

ln qe = ln Qm − Kε²    (7)

where K is a constant related to the adsorption energy, Qm the theoretical saturation capacity, and ε the Polanyi potential, calculated from Equation (8):

ε = RT ln(1 + 1/Ce)    (8)

The slope of the plot of ln qe versus ε² gives K (mol²/kJ²) and the intercept yields the adsorption capacity Qm (mg/g). The mean free energy of adsorption E, defined as the free energy change when one mole of ion is transferred from infinity in solution to the surface of the solid, is calculated from the K value using the relation [28]

E = 1/√(2K)

The Harkin-Jura adsorption isotherm can be expressed as

1/qe² = (B₂/A) − (1/A) log Ce

where A and B₂ are isotherm constants; 1/qe² was plotted vs. log Ce. This isotherm explains multilayer adsorption by the existence of a heterogeneous pore distribution [29].
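The D-R workflow above (compute ε, fit ln qe against ε², then obtain E from K) can be sketched as follows, again with placeholder data and the temperature of 303 K taken from the abstract:

```python
import numpy as np

R, T = 8.314e-3, 303.0   # kJ/(mol K), K

ce = np.array([1.5e-4, 3.8e-4, 7.6e-4, 1.1e-3])  # mol/L (placeholder data)
qe = np.array([4.1, 7.2, 9.6, 10.5])             # mg/g  (placeholder data)

eps = R * T * np.log(1.0 + 1.0 / ce)             # Polanyi potential, kJ/mol
slope, intercept = np.polyfit(eps**2, np.log(qe), 1)
K, Qm = -slope, np.exp(intercept)                # ln qe = ln Qm - K*eps^2
E = 1.0 / np.sqrt(2.0 * K)                       # mean free energy, kJ/mol
print(f"K = {K:.4f} mol^2/kJ^2, Qm = {Qm:.2f} mg/g, E = {E:.3f} kJ/mol")
```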
The Frenkel-Halsey-Hill isotherm is evaluated in linearised form by plotting ln qe vs. ln Ce. This isotherm likewise explains multilayer adsorption by the existence of a heterogeneous pore distribution of the adsorbent [30].
Scanning Electron Microscopic Studies (SEM)
SEM was employed to observe the physical morphology of FA-DEM before and after adsorption (Figures 1(a) and (b)). The SEM image in Figure 1(a) clearly shows that FA-DEM is mainly composed of irregular and porous particles. Figure 1(b) shows the SEM image of FA-DEM after adsorption of Zn(II) ions.
Energy Dispersive X-Ray Spectroscopic Analysis (EDX)
The components of FA-DEM were SiO₂ 1.86%, Al₂O₃ 24.87%, CaO 10.9%, MgO 5.53%, TiO₂ 2.54%, and FeO 1.14%. The presence of these elements in FA-DEM is clearly shown in Figures 2(a) and (b), before and after adsorption of Zn(II) ions, respectively. The EDX analysis proved that minerals such as SiO₂, Al₂O₃, and FeO in the fly ash were considerably reduced by treating it with 4 M HCl and HF solutions. Peaks between 1-2 keV and 8-10 keV in the EDX spectrum (Figure 2(b)) prove that Zn(II) ions were adsorbed by the adsorbent FA-DEM.
FTIR Spectroscopic Studies
Surface functional groups were detected by Fourier transform infrared (FTIR) spectroscopy over the scanning range 4000-400 cm⁻¹, and elemental analysis was also performed. The FTIR spectra of FA-DEM before and after adsorption (Figures 3(a) and (b)) show a broad band between 3100 and 3700 cm⁻¹. This stretching is due to both silanol groups (Si-OH) and adsorbed water [31,32]. The FTIR spectra of FA-DEM also show a weak, broad band in the region of 1600-1800 cm⁻¹ due to C=O stretching bands from aldehydes and ketones. The fundamental bending vibration of H₂O molecules corresponds to a sharp peak at 1646.8 cm⁻¹; this band may also be due to conjugated hydrocarbon-bonded carbonyl groups. The band at 1123.8 cm⁻¹ may be due to vibrations of the CO group of lactones [32]. This band was shifted to 1082.9 cm⁻¹ after adsorption, which indicates that the adsorption of Zn(II) ions may take place at this site. The 1396 cm⁻¹ band in FA-DEM may be attributed to aromatic CH and carboxyl-carbonate structures [33]. Additionally, intense vibrations at 600-400 cm⁻¹ for FA-DEM are attributed to clay and silicate minerals [34]. Although some inference can be drawn about the surface functional groups from the FTIR spectra, the weak and broad bands do not provide authentic information about the nature of the surface oxides. The presence of polar groups on the surface is likely to give the adsorbents considerable cation exchange capacity.
Acidity and Basicity (Boehm Titration)
This titration quantifies the numbers of acidic, basic, phenolic, carboxyl, and lactone sites. For FA-DEM, the number of basic sites was found to be 1.2024 mEq/g, the combined number of phenolic, carboxyl, and lactone groups was 3.161 mEq/g, and the number of carboxyl groups was 1.6416 mEq/g. These values for FA-DEM also support the possibility of an ion exchange mechanism during the adsorption of Zn(II) ions, because of the number of phenolic, carboxyl, and lactone groups in FA-DEM [35-38].
Effect of pH on Adsorption, Desorption and Recycling Ability
The pH of the solution has a significant impact on the uptake of heavy metals. The pHzpc of FA-DEM is 4. When the solution pH is above the pHzpc of the adsorbent, the surface of the adsorbent is highly loaded with negative charge, which favours the adsorption of metal cations onto the negative surface of the adsorbent through electrostatic attraction. Therefore, positively charged metal ions can be expected to adsorb onto the negatively charged adsorbent at pH values above the ZPC of FA-DEM [39]. Metal cations in aqueous solutions hydrolyse according to the generalized expression for divalent metals:

M²⁺(aq) + nH₂O ⇌ M(OH)ₙ⁽²⁻ⁿ⁾(aq) + nH⁺(aq)

The silica in FA-DEM can adsorb either positive or negative contaminants depending on the pH of the solution. The central ion of silicates has an electron affinity that gives the oxygen atoms bound to it low basicity. This allows the silica surface to act as a weak acid, which can react with water, forming silanol (SiOH) groups. As a result, at low pH the silica surface is positively charged, and at high pH values it is negatively charged. The pHzpc of silica is generally in the neighbourhood of 2.0 [40-42]. This indicates that the maximum Zn(II) adsorption capacity of FA-DEM can be attributed to the electrostatic interaction of the adsorbate with the surface silica sites of the adsorbent [43-45].
Figure 4 indicates that the solution pH (2.0-9.0) had a significant effect on the adsorption of Zn(II) ions onto FA-DEM. At pH 6, the adsorption of Zn(II) ions on FA-DEM was found to be 57.7%. Above pH 8, Zn(II) ions were removed by precipitation rather than by adsorption for FA-DEM; thus, pH 6 was fixed for Zn(II) adsorption onto FA-DEM in this study. As shown, the precipitation of the heavy metal ions except copper was less than 20% at pH below 8, indicating that the removal of these metals was mainly accomplished by adsorption below pH 8. Since FA-DEM has a low ZPC, the surface of the fly ash was negatively charged over the pH range investigated. As the pH increased from 4 to 9, the fly ash surface is expected to become more negatively charged; thus, more favourable electrostatic attractive forces enhanced cationic metal ion adsorption as the pH increased. However, the dependence of heavy metal adsorption on pH was different for each metal. The effect of pH on the adsorption, desorption, and recycling capacities of FA-DEM for Zn(II) removal in aqueous solution is given in Figure 4. For FA-DEM, the adsorption capacity increases initially up to 57.72% as the pH reaches 6, decreases between pH 6 and 8, and increases again above pH 8; the latter increase may be due to the precipitation of Zn(II) ions.
In wastewater treatment systems using an adsorption process, the regeneration of the adsorbent and/or the disposal of the loaded adsorbent are very important. Desorption studies were carried out for FA-DEM by employing batch methods, as shown in Figure 4. The maximum desorption of 7.76% took place in acidic medium at pH 5 for FA-DEM. The results indicate that only 7.76% of the Zn(II) ions adsorbed onto FA-DEM can be recovered with acidified distilled water. After desorption, the adsorbent was reused in the adsorption process for the removal of Zn(II) ions. The percentage of adsorption of Zn(II) ions on FA-DEM after desorption was found to be 47.78% at pH 6 (Figure 4).
Effect of Contact Time
Aqueous Zn(II) solutions with an initial ion concentration of 100 ppm were kept in contact with FA-DEM from 5 minutes to 1.20 h. The rate of removal was rapid for the first 1.15 h, after which the metal removal reached equilibrium; no significant change in metal ion removal was observed after 1.15 h for FA-DEM. During the initial stage of adsorption, a large number of vacant surface sites are available. After some time, the remaining vacant surface sites become difficult to occupy because of repulsive forces between the adsorbate molecules on the solid surface. The maximum uptake of Zn(II) ions at pH 6 for FA-DEM was found to be 57.7%.
Adsorption Kinetics
The adsorption of Zn(II) ions onto FA-DEM can be well fitted by the pseudo-second-order rate model. The kinetic parameters are given in Table 1. The qe value (14.28 mg/g) obtained from the second-order kinetic equation for FA-DEM was close to the experimental qe value (11.2 mg/g), and the linear regression coefficient R² (0.9661) obtained for pseudo-second-order kinetics was close to unity compared with the R² value (0.6427) obtained from first-order kinetics. The initial sorption rate h, which represents the rate of initial adsorption, is 8.08 mg/(g·min) for FA-DEM. This indicates that the adsorption of Zn(II) ions onto FA-DEM follows pseudo-second-order kinetics.
For the Elovich equation, the linear regression coefficient (R²) for FA-DEM was found to be 0.7078. The Elovich constants A_E (desorption constant) and B_E (initial adsorption rate) for FA-DEM were 0.781 mg/(g·min) and 2.32 × 10⁵ g/min, respectively.
In the intraparticle diffusion model, the values of qt were found to be linearly correlated with t^(1/2). The Kd values were calculated by correlation analysis: Kd = 0.7197 mg·g⁻¹·min⁻¹ᐟ², R² = 0.8979, and C = 6.4466 were obtained for FA-DEM. The value of the intercept C (Table 1) provides information about the thickness of the boundary layer; the resistance to external mass transfer increases as the intercept increases.
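The linearised kinetic fits discussed above (pseudo-second-order and intraparticle diffusion) can be sketched as follows; the (t, qt) pairs are placeholders, not the measured data:

```python
import numpy as np

t  = np.array([5, 15, 30, 45, 60, 75], dtype=float)  # contact time, min
qt = np.array([5.9, 8.6, 10.0, 10.7, 11.0, 11.2])    # uptake, mg/g

# Pseudo second order: t/qt = 1/(k2*qe^2) + t/qe ; initial rate h = k2*qe^2
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2 = slope**2 / intercept
print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2:.4f} g/(mg min), h = {1/intercept:.2f} mg/(g min)")

# Intraparticle diffusion (Weber-Morris): qt = Kd*sqrt(t) + C
kd, c = np.polyfit(np.sqrt(t), qt, 1)
print(f"Kd = {kd:.3f} mg/(g min^0.5), C = {c:.2f}")
```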
Adsorption Isotherm
To optimize the design of an adsorption system, it is important to establish the most appropriate isotherm model. The Langmuir, Freundlich, Tempkin, Dubinin-Radushkevich, Harkin-Jura, and Frenkel-Halsey-Hill isotherm equations were used to describe the mono-component equilibrium characteristics of the adsorption of Zn(II) ions onto FA-DEM. The experimental equilibrium adsorption data were obtained by varying the concentration of Zn(II) ions with a fixed dosage of FA-DEM. The adsorption parameters obtained from fitting the different isotherm models to the experimental data are listed in Table 2, together with the linear regression coefficients R². FA-DEM has a homogeneous surface for the adsorption of metal ions; it is therefore expected that the Langmuir isotherm equation better represents the equilibrium adsorption data. Indeed, the R² value is closer to unity for the Langmuir model than for the other isotherm models (R² = 0.9857). Therefore, the equilibrium adsorption data of Zn(II) on FA-DEM can be appropriately represented by the Langmuir model in the studied concentration range.
The calculated D-R parameters are given in Table 2. The saturation adsorption capacity Qm obtained using the D-R isotherm model for the adsorption of Zn(II) ions onto FA-DEM is 11.22 mg/g at an adsorbent dose of 0.2 g per 50 mL. The value of E calculated from K is 0.354 kJ/mol for FA-DEM, indicating that no ion exchange mechanism takes place in the adsorption of Zn(II) ions onto FA-DEM (for an ion exchange mechanism, E is typically within 1.3 to 9.6 kJ/mol) [46].
Influence of Ni(II) Ions and Cu (II) Ions on Adsorption of Zn(II) Ions
The concentration of the Zn(II) solution was kept at 100 ppm while the concentration of Cu(II) was varied as 10, 20, 30, and 40 ppm. Each solution was placed in a bottle, the pH was adjusted to 6 for FA-DEM, and after shaking for 1.15 h the percentage of adsorption was calculated. The percentage of adsorption decreased from 57.7% to 38.9% as the Cu(II) concentration increased, showing that competitive adsorption takes place to a certain extent between the Zn(II) and Cu(II) ions. The same procedure was repeated for Zn(II) in the presence of Ni(II); the adsorption percentage of Zn(II) decreased from 57.7% to 38.8%. Finally, with the Zn(II) concentration kept at 100 ppm, the concentrations of both Ni(II) and Cu(II) were varied together as 10, 20, 30, and 40 ppm; each solution was added to the Zn(II) solution, the pH adjusted to 6, and the adsorption percentage calculated after 1.15 h of shaking. The percentage of adsorption decreased from 57.7% to 37.26% as the concentrations of both Ni(II) and Cu(II) increased, showing competitive adsorption among the Zn(II), Ni(II), and Cu(II) ions. Overall, the percentage of adsorption of Zn(II) decreased in the presence of the other metals.
Conclusion
Treating the fly ash with 4 M HCl and HF solutions modifies the surface through dissolution and reprecipitation reactions. By dissolution of acidic oxides, the specific surface area is enhanced and activated, and the efficiency of heavy metal removal increases. The adsorption of Zn(II) ions is pH-dependent, with a maximum adsorption of 57.7% occurring at pH 6 for FA-DEM. The adsorption data were well fitted by the Langmuir isotherm model, which indicates monolayer adsorption on FA-DEM. Adsorption of Zn(II) ions onto FA-DEM obeyed pseudo-second-order kinetics. The adsorbed Zn(II) ions can be partially desorbed from the adsorbent using acidified water. The percentage of adsorption of Zn(II) ions on FA-DEM was slightly higher in the single system than in the binary and ternary systems, which shows competitive adsorption between the metal ions. The experimental results show that this can be an up-scalable solution and represents a step in investigating complex treatment processes for wastewater containing heavy metals.
"year": 2013,
"sha1": "7bb99f51e0d2ee72843fe6fa4f5870a1735ff1e4",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=27110",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7bb99f51e0d2ee72843fe6fa4f5870a1735ff1e4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
252865934 | pes2o/s2orc | v3-fos-license | Three-Dimensional Analytical Solutions for Acoustic Transverse Modes in a Cylindrical Duct with Axial Temperature Gradient and Non-Zero Mach Number
Cylindrical ducts with an axial mean temperature gradient and a mean flow are typical elements in rocket engines, can combustors, and afterburners. Accurate analytical solutions for the acoustic waves of the longitudinal and transverse modes within these ducts can significantly improve the performance of low order acoustic network models for analyses of acoustic behaviour and combustion instabilities in such ducts. Here, we derive an acoustic wave equation as a function of the pressure perturbation based on the linearised Euler equations (LEEs), and the modified WKB approximation method is applied to derive analytical solutions based on very few assumptions. The eigenvalue system is built from the proposed solutions and applied to predict the resonant frequencies and growth rates of the transverse modes. The proposed solutions are validated by comparing them to numerical results calculated directly from the LEEs. Good agreement is found between the analytical reconstruction and the numerical results for three-dimensional transverse modes. A system with both a mean temperature profile and a mean flow presents a larger absolute value of the growth rate than one with either uniform mean temperature or no mean flow.
Introduction
Combustion instabilities are typically present in rocket engines, aero-engines, and land-based gas turbine engines, and may cause severe damage due to coupling between the unsteady heat release from combustion and the acoustic system within the combustor or even the entire engine [1,2]. It is thus important to predict combustion instabilities and to optimise the engine geometry and operating conditions to eliminate these instabilities during the design stage of engines [3-5].
There are typically two methods to numerically predict and analyse combustion instabilities [5,6]. The first is to directly simulate the coupling mechanisms of the unsteady flow, combustion, and acoustic dynamics based on complete three-dimensional compressible Computational Fluid Dynamics (CFD) simulations; compressible large eddy simulation (LES) is typically preferred among these CFD solvers [5,7]. Although great achievements have been made in the development of LES, this approach remains very expensive and time-consuming, and is difficult to use in the real engine design process [5,8,9]. The second method is to decouple the calculation of the unsteady heat release of the flame from the acoustic system [10-12]; the first term is characterised by a flame transfer function for linear analysis or a flame describing function for weakly nonlinear analysis, see e.g., the recent reviews [13,14]. The generation, propagation, transmission, and reflection of acoustic waves, or even the entropy and vorticity waves within a combustor with complex geometries, are characterised by low order acoustic network models [15-21] or the linearised Euler equations. Assuming conserved mass, momentum, and energy, the Euler equations and the equation of state are expressed as follows:

∂ρ/∂t + ∇·(ρu) = 0,
ρ[∂u/∂t + (u·∇)u] + ∇p = 0,
∂p/∂t + u·∇p + γp∇·u = (γ − 1)q,
p = ρRgT,

where ρ, u = (ux, uθ, ur), and p denote the density, velocity components, and pressure, respectively, q represents the heat flux, and the heat capacity ratio γ and universal gas constant Rg are assumed to be constant along the cylindrical duct. To linearise the Euler equations, the primitive variables are assumed to be the sum of a time-averaged steady component, denoted by an overbar, and a small time-varying perturbation with time and angular periodic dependence, expressed as a = ā + â(x, r) exp(iωt + inθθ), where ω = 2πf + iωi is the complex angular frequency and the circumferential wavenumber nθ must be an integer due to continuity in the θ direction. It should be noted that ωi < 0 denotes that the corresponding thermoacoustic mode is unstable.
The conservation relations for the steady flow variables are deduced from the steady parts of these equations. With a prescribed mean temperature profile T̄(x), the consequent inhomogeneous mean flow field can be calculated along the axial direction. For greater convenience in the subsequent derivation, the derivatives of the steady flow variables are expressed in terms of the relative density derivative α = (dρ̄/dx)/ρ̄ and the steady variables, where c̄ = (γRgT̄)^(1/2) is the local sound speed.
The first derivatives of the steady temperature, the speed of sound, the mean flow Mach number Mx = ūx/c̄, and the local wave number k0 = ω/c̄ can likewise be expressed in terms of α. It is evident that the axial temperature gradient is associated with the steady heat flux in the presence of non-zero mean flow. Therefore, the mean temperature gradient can be maintained by steady heat addition or extraction as the mean flow propagates through the three-dimensional cylindrical duct.
For the perturbation variables, the linearised Euler equations (LEEs) for energy and momentum can be obtained, where ŝ denotes the entropy perturbation, introduced through the relation ŝ/cp = p̂/(γp̄) − ρ̂/ρ̄ to replace the density perturbation in the axial momentum LEE by the pressure perturbation and the entropy wave. Here, cp = γRg/(γ − 1) is the heat capacity at constant pressure. With the unsteady heat flux q̂ set to zero, only the steady heat communication is taken into account. Through the coupling term −ūx(dūx/dx)(ŝ/cp) in Equation (12), an entropy wave is generated as the acoustic wave travels through the inhomogeneous mean temperature zone, and it in return affects the acoustic field in the presence of non-uniform mean velocity. It has been validated that the sound regeneration by the entropy wave convected with the mean flow can be neglected, especially in the high-frequency domain [43,55]. Therefore, the axial momentum LEE (Equation (12)) is further simplified by neglecting the acoustic-entropy coupling term. A pure acoustic problem is thereby derived, facilitating the acoustic wave equation and its analytical solutions.
Assuming that the wave number is sufficiently larger than the critical value, |k0| ≫ |Mxα|, and that terms of order higher than Mx² can be ignored, Equation (15) can be divided by (iω + dūx/dx) to eliminate the term (dp̄/dx)ûx in Equation (11), leading to Equation (16). Then, Equation (15) is divided by ūx to give Equation (17). Subsequently, Equations (16) and (17) are combined such that all terms with ûx on the left-hand side (LHS) of the resulting formula are eliminated, and the terms ûθ and ûr on the right-hand side (RHS) are replaced by pressure-perturbation terms using the linearised radial and azimuthal momentum equations (Equations (13) and (14)), where β = (d²ρ̄/dx²)/ρ̄. Dividing both sides by the x-dependent coefficient (1 − Mxα/(ik0))² and assuming that |k0| ≫ |α| and |k0²| ≫ |β|/2, we finally obtain the wave equation in terms of the pressure perturbation only (Equation (20)). This general PDE for the pressure perturbation is thus obtained from the governing equations under the assumptions of a sufficiently large wave number and a negligible effect of the entropy perturbation in this high-frequency domain. It allows the derivation of an analytical expression for the pressure perturbation in the next section and the theoretical reconstruction of three-dimensional acoustic waves in the inhomogeneous background mean field.
Analytical Solutions of Three-Dimensional Acoustic Wave
The partial differential wave equation (Equation (20)) has partial derivatives and coefficients with respect to x and r on the LHS and RHS, respectively. Therefore, separation of variables can be applied by substituting p̂(x, r) = X(x)R(r) into the wave equation, resulting in the radial and axial ordinary differential equations (ODEs), Equations (21) and (22), where λ is a constant independent of the radial and axial directions. For the radial component, the solution of the Bessel equation can be written as the superposition of two linearly independent Bessel functions,

R(r) = c₁ Jnθ(λr) + c₂ Ynθ(λr),

where Jnθ and Ynθ are the Bessel functions of the first and second kind. Note that the coefficient c₂ must be 0, as Ynθ diverges as r tends to 0. With the rigid-wall boundary condition at r = r₀, which requires dp̂/dr = 0 and hence J′nθ(λr₀) = 0, a series of discrete solutions for the constant λ is obtained, corresponding to the different acoustic transverse modes labelled (nθ, nr), where nr denotes the (nr + 1)th solution for each value of nθ.
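For the rigid-wall condition J′nθ(λr₀) = 0, the discrete eigenvalues can be computed directly from the zeros of the Bessel-function derivative; the sketch below (our own, with an illustrative duct radius and inlet temperature, and a simplified no-flow cut-off estimate fc ≈ c̄λ/(2π) rather than the paper's full expression) shows this for the 1T, 2T, and 1R modes:

```python
import numpy as np
from scipy.special import jnp_zeros  # zeros of J_n'(x)

def radial_eigenvalue(n_theta, n_r, r0):
    """lambda for mode (n_theta, n_r) with a rigid wall, J'_n(lambda*r0) = 0.
    For n_theta = 0 the trivial plane-wave root lambda = 0 is the n_r = 0
    solution (scipy skips it), so the first radial mode is (0, 1)."""
    if n_theta == 0:
        return 0.0 if n_r == 0 else jnp_zeros(0, n_r)[n_r - 1] / r0
    return jnp_zeros(n_theta, n_r + 1)[n_r] / r0

r0 = 0.05                                 # duct radius, m (illustrative)
c1 = np.sqrt(1.4 * 287.0 * 1800.0)        # inlet sound speed; T1 assumed 1800 K

for n_theta, n_r, name in [(1, 0, "1T"), (2, 0, "2T"), (0, 1, "1R")]:
    lam = radial_eigenvalue(n_theta, n_r, r0)
    f_c = c1 * lam / (2.0 * np.pi)        # no-flow cut-off estimate (assumed)
    print(f"{name}: lambda = {lam:6.1f} 1/m, f_c ~ {f_c:6.0f} Hz")
```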
Analytical Solutions of the Axial ODE Using a Modified WKB Approximation Method
For the axial component, no known solutions can directly deal with the spatially varying coefficients of the axial ODE (Equation (22)), especially in the presence of both an axially arbitrary temperature profile and a mean flow. The modified WKB approximation method is therefore used to resolve the axial ODE; it is essentially based on the large-wave-number assumption and has been successfully applied to the one-dimensional acoustic wave equation with varying coefficients [43].
The modified WKB solution uses separate amplitude and phase factors to represent the properties of the acoustic wave propagating through the axially varying mean field, with a constant coefficient C and x-dependent real variables a and b describing the amplitude and phase, respectively. The large-wave-number assumption naturally results in the precondition |b| ≫ |a| within the modified WKB solution.
To obtain the real variables a and b, the locally complex wave number k0 = ω/c̄ = kr + iki is substituted into Equation (22), assuming |kr| ≫ |Mxki|. The axial ODE can then be transformed into an algebraic equation using the modified WKB solution. The real part is further simplified by combining the previous large-wave-number assumptions, |k0| ≫ |α|, |b| ≫ |a|, and |kr| ≫ |Mxki|, and the two solutions b± follow directly; from them the cut-off frequency fc is obtained. It should be noted that this work only considers the condition ω > ωc, which ensures that the expression under the square root is always positive, as acoustic waves rapidly dissipate with axial distance when ω < ωc.
Solutions of a are further derived by substituting b± into the imaginary part, with kx = ηkr. The analytical solution of X is thus the superposition of plane waves propagating downstream and upstream, and the pressure perturbation p̂ is expressed as the sum of the downstream- and upstream-propagating components X±, weighted by coefficients C₁ and C₂, where the subscript '1' represents variables at the inlet. Coefficients C₁ and C₂ can be calculated from the given axial boundary conditions, and the constant λ is determined by the mode numbers nθ and nr.
Solutions of the Three-Dimensional Velocity Perturbations
With the proposed solution of p̂ in separated form, the radial and azimuthal momentum LEEs (Equations (13) and (14)) can be transformed into non-homogeneous differential equations for ûr and ûθ. Corresponding analytical solutions can then be derived directly via the variation-of-constants method, introducing two further coefficients, C₃ and C₄, determined by the relevant boundary conditions or the initial values of the radial and azimuthal velocity perturbations. For example, C₃ and C₄ equal zero when there are no radial and azimuthal velocity perturbations at the inlet (ûr(x₁) = ûθ(x₁) = 0). Furthermore, a zero axial component of the vorticity perturbation (ξx = 0) in the incoming flow naturally provides the condition C₃ = C₄ (this is discussed in Section 6).
Compared with sound waves travelling at the speed of sound, the radial and azimuthal velocity disturbances involve waves propagating downstream at the axial mean flow velocity ūx. Typically, these convection waves are associated with the development of the vorticity wave as the acoustic waves propagate through the inhomogeneous background field [22,45,46].
Subtracting Equation (16) from Equation (17) yields a formula, Equation (35), that does not contain the first derivative of ûx. The axial velocity perturbation ûx is then easily obtained by substituting the solutions of the pressure and transverse velocity components into Equation (35). It should be noted that the r-derivatives of the Bessel function Jnθ(λr) introduced by ûr and ûθ can be eliminated using the Bessel equation of the radial component, Equation (21). The resulting explicit expression, Equation (36), again neglects terms of order higher than Mx², as is done throughout the solutions. It is obvious that the axial velocity perturbation involves both acoustic and convective waves at the same time; the latter greatly affects the resonant frequencies and growth rates of three-dimensional combustors that have a velocity-dependent end, for example an acoustically closed outlet. In total, four constant coefficients, C₁ to C₄, appear in the solutions of the pressure and velocity perturbations, and these are determined by boundary conditions or initial values.
Finally, Equations (32), (33), and (36) constitute our analytical solutions for the three-dimensional acoustic field. They require the high-frequency assumption |k0| ≫ |α| (the local wave number is sufficiently larger than the critical value) and neglect Mach number terms of order higher than Mx². It should be noted that these analytical solutions are not limited to particular forms of continuous axial mean temperature profiles.
Case of Zero Mean Flow
An assumption of zero mean flow is typically applied to combustion chambers when Mach numbers are sufficiently small (Mx ≈ 0). Therefore, the case of no mean flow is treated separately in this subsection; it provides a benchmark showing how the axial mean temperature gradient affects the thermoacoustic properties of a cylindrical combustion chamber containing mean flow.
Substituting Mx = 0 into Equation (26) leads to the wave equation for the case of no mean flow. This coincides with the axial wave equation directly obtained from the LEEs for no mean flow (Equation (16) in our previous work [54]) if the imaginary part is assumed to be sufficiently small, as stated earlier, i.e., |kr| ≫ |Mxki|. The modified WKB method is used again to deal with the spatially varying coefficients of the ODE, and the solutions for b and a are derived by assuming |k0| ≫ |α|, where the subscript '0' denotes the condition of no mean flow. The solution of the pressure perturbation then follows as the superposition of the two wave components X±₀(x, ω), the axial velocity is derived from the momentum LEEs for zero mean flow, and the radial and circumferential velocity perturbations take degenerate forms of their mean-flow counterparts. The analytical expressions for the three-dimensional acoustic field are thus obtained for the case of zero Mach number; the two constant coefficients in these solutions, C₁ and C₂, can be determined by the acoustic boundary conditions.
Validation Configuration
The proposed solutions are applied to predict the three-dimensional acoustic field for a straight cylindrical duct with the mean temperature profile T̄(x) and the inlet flow Mach number Mx,1. A linear temperature distribution is considered,

T̄(x) = T̄₁ + (T̄₂ − T̄₁) x/l,

with the axial mean temperature decreasing from T̄₁ at the inlet, x = x₁ = 0, to T̄₂ at x = l.
In order to validate the analytical solutions, the following boundary conditions are chosen for the inlet and outlet: an external pressure perturbation p̂in(x = 0, r, θ) = p₁Jnθ(λr)exp(inθθ) is prescribed at the duct inlet, and a pressure-release condition p̂out(x = l, r, θ) = 0 is prescribed at the outlet.
A dimensionless frequency is defined by Ω = f l/c̄₁, and the condition that the frequency exceed the cut-off frequency, ω > ωc, can be re-written in terms of Ω. Dimensionless transfer functions from p₁ to the acoustic perturbations are defined as functions of the geometric position and the forcing frequency. Importantly, the analytical model is based on almost the same linearised Euler equations, with the exception of the negligible entropic effect on the high-frequency acoustic waves; hence, the proposed analytical solutions can be well verified by comparing them to the numerical results from the LEEs. Table 1 presents the parameters used in the analyses in the next subsection.
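A small sketch of this validation setup is given below; the low-Mach relations ρ̄ū = const and p̄ ≈ const (so that Mx ~ sqrt(T̄)) are our own simplification of the conservation relations, and the numerical values of l, T̄₁, T̄₂, and Mx,1 are illustrative, not those of Table 1:

```python
import numpy as np

gamma, Rg = 1.4, 287.0
l, T1, T2, Mx1 = 1.0, 1800.0, 1200.0, 0.1   # illustrative parameters

x = np.linspace(0.0, l, 201)
T = T1 + (T2 - T1) * x / l          # prescribed linear mean temperature
c = np.sqrt(gamma * Rg * T)         # local speed of sound
Mx = Mx1 * np.sqrt(T / T1)          # low-Mach, constant-pressure assumption

f = 800.0                            # forcing frequency, Hz (illustrative)
Omega = f * l / c[0]                 # dimensionless frequency
print(f"Omega = {Omega:.2f}, outlet Mach = {Mx[-1]:.3f}")
```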
Reconstructions of Sound Responses
In this subsection, the proposed analytical solution is validated by comparing the predicted three-dimensional acoustic field to the results of the numerical LEEs.
For the first transverse (1T) mode, nθ = 1 and nr = 0, the dimensionless transfer functions are calculated analytically on the plane θ = π/2 for Ω = 0.8 and Mx,1 = 0.2. As shown in Figure 2, the proposed solutions accurately reconstruct the three-dimensional acoustic field when compared with the numerical results from the LEEs. Figure 3 presents the real and imaginary parts of Fp and Fux along the axial line (r, θ) = (r₀/2, π/2) for the 1T mode. The analytical solutions agree well with the numerical results for reduced forcing frequencies Ω of 0.8, 1.2, and 1.6, respectively. It is worth noting that the high-frequency condition of the modified WKB method, |k0| ≫ |α|, can be re-written in the form of a dimensionless frequency Ω₀; the cut-off frequency Ωc for a transverse mode (λ ≠ 0) is often much larger than this critical value Ω₀.
Therefore, the high-frequency assumption has almost no effect on the accuracy of the proposed analytical solutions when applied to the transverse acoustic field. High precision can be achieved even when the forcing frequency differs only slightly from the cut-off frequency.
Boundary Conditions and Eigenvalue Matrix
In this subsection, the proposed three-dimensional pressure and velocity solutions are applied to a cylindrical duct with an axial mean temperature profile and a mean flow, and are used to predict the transverse modes. For the incoming flow at the inlet (x = 0), no vorticity perturbation is assumed upstream of the varying-temperature region.
The flow vorticity is derived by taking the curl of the three-dimensional velocity vector u. The components of the vorticity vector can be calculated by substituting the proposed velocity solutions, and the zero-vorticity condition at the inlet can then be expressed in terms of the four coefficients C₁-C₄, where ∆ = ik0,1 − Mx,1α₁. These zero-vorticity conditions provide two equations, as the radial and circumferential components have the same expressions when the axial vorticity is zero. Another two equations are provided by the acoustic boundary conditions at the inlet and outlet, e.g., open-end (p̂ = 0) or acoustically closed-end (ûx = 0) boundary conditions.
The eigenvalue system is consequently built by combining the four boundary conditions at the duct ends. The complex angular frequency ω = 2πf + iωi is solved by setting the determinant of the eigenvalue matrix M to zero, where ωi < 0 denotes that the corresponding thermoacoustic mode is unstable.
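Numerically, the dispersion relation det M(ω) = 0 can be solved by a complex root search; the sketch below packs ω into two real unknowns for scipy's solver and, since the entries of M are not reproduced here, substitutes a closed-closed uniform-duct determinant sin(ωl/c̄) as a stand-in so the script runs (its roots are ω = nπc̄/l, with zero growth rate, as expected for a uniform duct):

```python
import numpy as np
from scipy.optimize import fsolve

l, c = 1.0, 800.0  # duct length (m) and uniform sound speed (m/s), illustrative

def det_M(omega):
    """Stand-in determinant; replace with the assembled 4x4 det M(omega)."""
    return np.sin(omega * l / c)

def residual(v):                       # pack complex omega as two reals
    d = det_M(v[0] + 1j * v[1])
    return [d.real, d.imag]

guess = [2.0 * np.pi * 420.0, 0.0]     # near the first mode ~ c/(2l) = 400 Hz
sol = fsolve(residual, guess)
omega = sol[0] + 1j * sol[1]
print(f"f = {omega.real / (2 * np.pi):.1f} Hz, growth rate = {omega.imag:.2e}")
```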
Effect of Mean Flow and Temperature Gradient on Resonant Frequencies and Instabilities
Predictions for the first transverse (1T) mode are carried out with the eigenvalue system for different outlet mean temperatures T̄₂. Two kinds of boundary conditions are prescribed at the inlet and outlet: either both open ends or both acoustically closed ends. The latter is representative of the inlets and outlets of many real combustion chambers.
As shown in Figure 4, the frequencies and growth rates of the thermoacoustic modes vary with the outlet mean temperature T̄₂. Good agreement is found between the analytical solutions and the numerical LEE results for both kinds of boundary conditions. When the outlet mean temperature increases, the spatially averaged speed of sound increases, which results in a higher resonant frequency. It is clear that when the duct outlet temperature T̄₂ differs substantially from the inlet temperature T̄₁, the growth rate is non-zero: a decrease in the axial mean temperature leads to an unstable mode, while a temperature increase corresponds to a stable mode. The resonant frequencies for the open-open ends are almost the same as those for the closed-closed ends; however, the growth rates for the closed-closed ends have larger absolute values, irrespective of whether the thermoacoustic modes are stable or unstable. Figure 5 presents the evolution of the resonant frequency and growth rate as a function of the inlet flow Mach number Mx,1. The analytical results calculated with the proposed solutions show good consistency with the numerical LEE results. The growth rate is zero when the mean flow is stationary; relative to this no-flow baseline, the 1T mode becomes more unstable as the incoming flow Mach number increases, especially for the closed-closed ends. These results show that a system with both a mean temperature profile and a mean flow presents different linear stabilities from one with either uniform mean temperature or no mean flow. The presence of a mean temperature gradient and a mean flow leads to evident shifts of the resonant frequencies and growth rates; therefore, both should be taken into account in predictions and simulations of sound propagation through a three-dimensional cylindrical duct.
Transverse Modes with Different Boundary Conditions
In addition to the 1T mode, the first radial mode and the second transverse mode were investigated to provide more general verification of the analytical solutions. For both kinds of boundary conditions, as presented in Table 2, the analytical predictions of resonant frequencies and growth rates are in good agreement with the numerical LEE results for the case of Mx,1 = 0.1 and T̄₂ = 1200 K. High precision is achieved by applying the proposed WKB-type solutions in the eigenvalue system. The convection wave generated as the acoustic waves propagate through the region of varying mean temperature and inhomogeneous mean flow results in a larger absolute value of the growth rate when a velocity-dependent boundary is prescribed.
Conclusions
In this work, we obtain analytical solutions for the three-dimensional acoustic field in a cylindrical duct in the presence of an axial temperature gradient and a mean flow. This paper extends our previous work on a one-dimensional straight duct [43] to a three-dimensional cylindrical duct. We first account for the multi-dimensional acoustic perturbations that propagate through the inhomogeneous field in which both an axial mean temperature distribution and a mean flow are present. A second-order partial differential wave equation is obtained from the linearised Euler equations (LEEs) in the cylindrical system. Then, separation of variables and a modified WKB method are applied to derive analytical expressions for the pressure and velocity perturbations when the high-frequency assumption |k0| ≫ |α| is satisfied. Based on the proposed analytical solutions, an eigenvalue system is built by imposing boundary conditions at both duct ends and assuming zero vorticity perturbation at the inlet.
Validation is conducted by comparing the analytical solutions to the results of numerical LEEs. For a linear axial temperature profile, the proposed analytical solutions are applied to predict the acoustic perturbations in the cylindrical geometry for three transverse modes and two kinds of boundary conditions. The results show that the proposed analytical solutions can provide accurate predictions for the three-dimensional acoustic field and the thermoacoustic modes with both the temperature gradient and the mean flow considered. High precision is achieved when frequencies are larger than the cut-off frequency. Both axially varying mean temperature and mean flow can change the apparent growth rate within a thermoacoustic system. Therefore, the assumption of either no mean flow or zero temperature gradient should be used cautiously in the low order modeling of combustion systems. | 2022-10-13T15:08:31.662Z | 2022-10-10T00:00:00.000 | {
"year": 2022,
"sha1": "ca8b6df2f3f3fc4eb8aef4e28bebc008d9d58055",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2226-4310/9/10/588/pdf?version=1665416705",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1bd456f0405f2f4e004a8869d9710170519999a2",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
5520818 | pes2o/s2orc | v3-fos-license | Picosecond and nanosecond pulse delivery through a hollow-core Negative Curvature Fiber for micro-machining applications
We present high average power picosecond and nanosecond pulse delivery at 1030 nm and 1064 nm wavelengths, respectively, through a novel hollow-core Negative Curvature Fiber (NCF) for high-precision micro-machining applications. Picosecond pulses with an average power above 36 W and energies of 92 µJ, corresponding to a peak power density of 1.5 TW cm^-2, have been transmitted through the fiber without introducing any damage to the input and output fiber end-faces. High-energy nanosecond pulses (>1 mJ), which are ideal for micro-machining, have been successfully delivered through the NCF with a coupling efficiency of 92%. Picosecond and nanosecond pulse delivery have been demonstrated in fiber-based laser micro-machining of fused silica, aluminum, and titanium.
Introduction
High average power short-pulsed lasers are increasingly used for micro-machining. However, flexible fiber beam delivery systems for such lasers are limited in terms of pulse energy, restricting the range of potential applications. As a result, these lasers are typically used to process flat parts placed under a galvo scan head. A suitable fiber delivery system is therefore important to extend applications to the processing of complex 3D components.
Results obtained over the last couple of years have demonstrated the suitability of hollow-core photonic crystal fibers (HC-PCFs), hollow waveguides, and large mode area solid-core fibers for high-energy pulse delivery [1][2][3][4][5][6]. These fibers overcome limitations due to the low damage threshold and the nonlinear effects dominant in conventional single-mode solid-core silica fibers [7,8], and can also minimize attenuation imposed by material absorption [9]. However, their energy handling capability is still limited to approximately 1 mJ in the ns pulse regime, and efficient delivery of high-energy ps pulses has not been reported.
In 2012, Benabid et al. reported a new design of kagome-type hollow-core microstructured fiber [10] with a hypocycloid-shaped core (negative curvature), capable of delivering ns pulses with energies in the range of 10 mJ. Moreover, their latest work presents delivery of 10.5 ps pulses with an energy of approximately 97 µJ and an average power of 5 W, corresponding to a peak power of 8 MW, through 10 cm of kagome fiber [11]. These fibers, however, have a complicated structure, which requires stacking many layers of capillaries during preform fabrication. Consequently, a less complex design that still provides the negative curvature of the core wall, the so-called Negative Curvature Fiber (NCF), was presented in [12,13]. This fiber proved to be an excellent candidate for high-power delivery of Er:YAG laser pulses at a wavelength of 2.94 µm for medical applications [14].
In this paper we investigate a similar hollow-core Negative Curvature Fiber optimized for the delivery of high-power, high-energy ns and ps pulses in the 1 µm wavelength region for precision micro-machining applications.
Fiber structure and light guidance mechanism
The NCF was fabricated by the commonly used stack-and-draw technique, described in [12]. Eight identical circular capillaries were used to form a preform, which was drawn down to produce a final fiber with a 43 µm diameter hollow air core and a measured NA of 0.03, as shown in Fig. 1(a). To shape the negative curvature structure, different pressures were applied to the core and the cladding during the fiber drawing process (details of the fiber fabrication are given in [12]). The critical features of the fiber are the cladding nodes between adjacent capillaries, marked with red circles in Fig. 1(b), which introduce high losses. These nodes behave as independent waveguides supporting their own lossy modes. Therefore, the curvature of the cladding (core wall) is directed in the opposite direction (in comparison with a typical cylindrical core such as in the HC-PCF structure) in order to physically separate the fiber guided mode from the cladding nodes, significantly reducing coupling between them.
The light guidance mechanism in the NCF is based on the Anti-Resonant Reflecting Optical Waveguiding (ARROW) phenomenon, which means that its structure can behave as a Fabry-Perot resonant cavity [15]. Therefore, all wavelengths of light that are not in resonance with the core wall are reflected back into the core and propagate with low loss as a result of destructive interference in the Fabry-Perot resonator. The resonant frequencies, meanwhile, cannot be confined in the core and leak into the cladding region, where they are highly attenuated. The guided wavelengths are hence strongly dependent on the thickness of the core wall (the capillaries); for guidance at 1030 nm and 1064 nm, a suitable wall thickness is in the range of 910 ± 50 nm.
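The guidance windows implied by this mechanism can be estimated with the standard capillary/ARROW resonance condition, lambda_m = (2 t / m) sqrt(n^2 - 1). The sketch below is our own illustration (assuming n = 1.45 for silica), not a calculation from the paper, but it places 1030 nm and 1064 nm comfortably inside an antiresonant window for a 910 nm wall.

```python
import math

# Standard capillary/ARROW resonance model (illustrative assumption:
# silica refractive index n = 1.45; wall thickness t = 910 nm as quoted).
t_nm, n_silica = 910.0, 1.45

for m in range(1, 4):
    lam = 2.0 * t_nm * math.sqrt(n_silica**2 - 1.0) / m
    print(f"m = {m}: resonance near {lam:.0f} nm")

# Both 1030 nm and 1064 nm lie between the m = 2 (~956 nm) and
# m = 1 (~1911 nm) resonances, i.e. inside an antiresonant window,
# consistent with the measured low-loss band of ~1000-1330 nm.
```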
Fiber attenuation
To measure the attenuation of the NCF at the 1030 nm and 1064 nm wavelengths, a cutback technique was used. A high bending sensitivity was observed with this fiber (this phenomenon is investigated further in Section 2.3); therefore, to measure the attenuation independently of bend loss, a 1 m long, straight fiber was used. Any additional micro-bends, which could potentially affect the results, were minimized by maintaining the fiber in the same horizontal position during the entire experiment.
A TRUMPF TruMicro ps laser (M² ≈ 1.3) and a JDS Uniphase microchip ns laser (M² ≈ 1.2) were used as coherent light sources for the 1030 nm and 1064 nm wavelengths respectively. In both cases the fiber was cut from 1 m to 10 cm while maintaining the same coupling conditions. The fiber attenuation was measured to be 0.23 dB/m at 1030 nm and 0.16 dB/m at 1064 nm.
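For reference, the cutback attenuation follows from the ratio of the transmitted powers at the two fiber lengths. The sketch below illustrates the arithmetic with hypothetical power readings chosen to reproduce a 0.23 dB/m result; the actual measured powers were not reported.

```python
import math

# Cutback method: compare transmitted power through the full length with
# the power after cutting back, under identical launch conditions.
# The power readings below are hypothetical, chosen only for illustration.
L_long, L_short = 1.0, 0.1        # fiber lengths, m
P_long, P_short = 9.54, 10.0      # transmitted powers, arbitrary units

loss_db_per_m = 10.0 * math.log10(P_short / P_long) / (L_long - L_short)
print(f"attenuation ~ {loss_db_per_m:.2f} dB/m")   # ~0.23 dB/m
```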
To obtain a loss spectrum over a wide spectral range from 1000 nm to 1400 nm (in this case with a coiled fiber) and determine the position of the antiresonant bandwidth, an additional cutback measurement was carried out with a broadband light source (a tungsten halogen bulb). The FC-connectorized fiber was connected to an Ando AQ-6315B Optical Spectrum Analyzer. In this case the fiber was cut back from 87 m to 3 m. The low-loss region covers over 300 nm (1000-1330 nm) of the IR spectral bandwidth. The attenuation spectrum of the NCF is presented in Fig. 2.
Bending losses
The fabricated NCF is sensitive to bending or any applied physical force, e.g. a point load, which introduces additional, significant losses. To fully understand the bending loss mechanism of the fiber, a series of different tests was performed. The measurements were conducted at 1064 nm with the JDS Uniphase microchip ns laser. The bending losses were measured for both a coiled fiber (4.1 m long) and a fiber with a 180° bend (3.1 m long). In both cases, the launching side of the fiber and its bent part were kept in the same horizontal plane to avoid micro-bends. Moreover, to maintain the same coupling conditions for each measurement, the input end-face of the fiber was not moved during the entire experiment.
The results plotted in Figs. 3 and 5 demonstrate that fiber bending has a significant impact on the delivered light for bending diameters below 40 cm (2.5 coils) for the coiled fiber and below 25 cm for the 180° bent fiber. For larger bend radii, no bend-induced loss was detected. In addition, changes in the delivered mode patterns were observed by imaging the fiber output with a CCD camera during fiber bending, as shown in Figs. 4 and 6. When the fiber is coiled, the delivered beam patterns are more single-mode-like in comparison with a single 180° bend, which indicates significant attenuation of the higher-order modes along the longer bending lengths. The observed fiber output beam profiles are not specific to particular bend diameters but are representative of the various profiles that can appear at any bend radius. The excitation of different modes during fiber bending is likely easy to induce mainly because the propagation constants of the higher-order modes and the fundamental mode are very similar, which results in coupling between them. Moreover, coupling between guided modes can be induced by applying a point force to either a straight or a bent fiber (i.e. touching or moving it). As a result, the delivered power can vary by up to ±10%. However, when the fiber is stationary, no changes in the delivered beam profile and power are observed.
Damage limitation of the NCF
The laser damage threshold of the NCF was investigated with the ps and ns lasers. In both cases a singlet lens (of different focal length for each laser), giving a focused spot diameter of ~36 µm (1/e²) with a focus cone angle of 0.029 rad and 0.09 rad for the ps and ns lasers respectively, was used to couple the laser beam into the fiber with high launch efficiency. The coupling efficiency was 84% and 78% for the ps and ns lasers respectively. In the case of the ns laser, the high coupling efficiency was obtained mainly due to the multimode guidance of the fiber.
The available industrial TRUMPF TruMicro ps laser (M² ≈ 1.3) provided 6 ps pulses with a maximum average power of 46.3 W and pulse energies of 116 µJ at 1030 nm, corresponding to a peak power of 19.3 MW and a peak power density at the fiber launch of 1.75 TW cm^-2. Even at this maximum power level the fiber did not suffer any damage, so it was impossible to establish the damage limit of the NCF. The coupled peak power was two times greater than the critical value for a hypocycloid kagome hollow-core fiber reported in [11]. The coupling efficiency was 84%, providing a delivered pulse energy of 92 µJ through 1 m of fiber. The fiber was also tested using a Q-switched ns laser (M² ≈ 5-6) with a pulse width of 9 ns and a repetition rate of 10 Hz at 1064 nm. The input pulse energy was gradually increased until fiber damage occurred. Fiber failure was observed at pulse energies exceeding 3.2 mJ with a peak power of 0.35 MW, corresponding to an energy density at the input fiber end-face of 310 J cm^-2 and a peak power density of 34.3 GW cm^-2, which is approximately three times greater than previously achieved for standard HC-PCFs [16]. The input end of the fiber was destroyed and light guidance was no longer observed. The main reason for the fiber failure at this particular pulse energy is the high M² value of the laser beam, which introduces a poorer coupling efficiency (in comparison with the ps laser) and a mismatch with the fiber guided mode; as a result, light is partially incident on the fiber cladding, damaging the structure. The peak power density at which the damage occurred is therefore significantly lower than for the ps laser. Due to the lack of an available ns laser with better beam quality providing sufficient pulse energy, it was impossible to determine the energy handling capabilities of the NCF in the ns pulse regime. However, based on the obtained results, we think that this fiber can successfully withstand higher pulse energies in both the ns and ps pulse regimes.
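The quoted peak powers and power densities can be reconstructed, to within the stated figures, by approximating the peak power as pulse energy over pulse duration and dividing by the 1/e² spot area. The sketch below is our own reconstruction, not the authors' script, and reproduces the delivered-pulse values (92 µJ, 6 ps, 36 µm spot).

```python
import math

# Reported delivered ps-pulse values: 92 uJ in 6 ps through a 36 um
# (1/e^2 diameter) spot.  Peak power approximated as E/tau; power density
# as peak power over the 1/e^2 spot area (a plausible reconstruction).
E, tau = 92e-6, 6e-12            # J, s
w = 0.5 * 36e-6 * 1e2            # 1/e^2 radius in cm (18 um)

P_peak = E / tau                 # W
I_peak = P_peak / (math.pi * w**2)

print(f"peak power    ~ {P_peak/1e6:.1f} MW")       # ~15.3 MW
print(f"power density ~ {I_peak/1e12:.2f} TW/cm^2") # ~1.5 TW/cm^2
```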
Nanosecond pulse delivery
To demonstrate the suitability of the fiber-delivered ns pulses for micro-machining applications, a diode-pumped Spectra Physics Q-switched Nd:YVO4 laser (M² < 1.3) was used. The laser operated at 1064 nm, providing 60 ns pulses with an energy above 1 mJ (ideal for precision machining applications) and an average power of 18.2 W at a repetition rate of 15 kHz. The pulse delivery tests were conducted for both a 0.7 m fiber without bending and a 10.5 m NCF coiled to a diameter of 60 cm. The laser beam was launched directly into the fiber using a plano-convex lens giving a focused spot diameter of 36 µm (1/e²) with a focus cone angle of 0.03 rad (matching the acceptance angle of the fiber). The high quality of the laser beam presented in Fig. 7(a) and well-optimized coupling parameters (for the maximum delivered power) allowed a coupling efficiency of 92% to be obtained. The maximum pulse energy from the laser was coupled into the fiber without damage, providing fiber-delivered pulses of 1.1 mJ energy with an average power of 16.3 W and a peak power of 18.3 kW. As described in [16], the fiber damage threshold normally scales with τ^0.5, where τ is the pulse duration. Thus, given the damage threshold measured for the 9 ns pulses, we would expect the fiber to be capable of delivering at least 8 mJ in a 60 ns pulse before damage occurs. This is approximately 16 times greater than previously achieved with a conventional 7-cell defect HC-PCF [1]. After propagation through 10.5 m of NCF, the delivered energy was reduced to 0.8 mJ, which was still sufficient for micro-machining (see Section 4.1). Figure 7(b) presents the fiber-delivered beam profile, captured by imaging the end of the fiber with a CCD camera and Spiricon software, which clearly indicates multi-mode guidance in the fiber. The measured 1/e² diameter of the delivered beam was approximately 36 µm.
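The τ^0.5 scaling quoted above reproduces the 8 mJ estimate directly, as the short sketch below shows.

```python
import math

# Empirical damage-threshold scaling with pulse duration, E ~ tau^0.5,
# as cited in the text; the numbers reproduce the paper's estimate.
E_damage_9ns = 3.2e-3       # J, measured failure energy at 9 ns
tau1, tau2 = 9e-9, 60e-9    # s

E_damage_60ns = E_damage_9ns * math.sqrt(tau2 / tau1)
print(f"expected threshold at 60 ns ~ {E_damage_60ns*1e3:.1f} mJ")  # ~8.3 mJ
```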
Picosecond pulse delivery
The experiment was performed using the TRUMPF TruMicro picosecond laser (M² > 1.3) at 1030 nm wavelength, giving 6 ps pulses at a 400 kHz repetition rate with a pulse energy of up to 116 µJ and average and peak powers of 46.3 W and 19.3 MW respectively. The beam from this laser was coupled into a 1 m straight length of fiber and an 8 m coiled length (with a 23 cm bending diameter). The tight bending of the fiber was essential due to limited working space.
To couple the laser beam directly into the fiber core, a singlet lens providing a focused spot diameter of ~36 µm (1/e²) with a focus cone angle of 0.029 rad was used. The obtained coupling efficiency, ~84%, allowed pulse delivery without fiber damage. The launching parameters were optimized to deliver the maximum power. The launch efficiency is lower than with the ns laser mainly due to the lower laser beam quality, as shown in Fig. 8(a), and a slight mismatch between the focused cone angle and the fiber acceptance angle. The maximum delivered pulse energy through a 1 m length was 92 µJ with an average power of 36.7 W, corresponding to a peak power of 15.3 MW. The delivered average and peak powers of the ps pulses are 7 and 2 times greater, respectively, than previously reported in [11]. The additional high bending loss of the 8 m length reduced the transmitted energy to 49 µJ with an average power of 19.6 W and a peak power of 5.6 MW. The fiber output was characterized with a CCD camera together with Spiricon beam profiling software and a Femtochrome FR-103XL autocorrelator. The delivered beam profile with a diameter of 36 µm (1/e²) is presented in Fig. 8(b). The results of the spatial beam quality measurements of the NCF-delivered ps pulses are presented in Figs. 9(a) and 9(b). The M² = 1.5 of the delivered beam shown in Fig. 9(a) corresponds to the highest beam quality that can be delivered through the fiber, by coupling the light mostly into the fundamental mode. However, due to the multimode behavior of the fiber, a typical profile of the delivered beam, with an M² of 3.2, is presented in Fig. 9(b). The intensity autocorrelation of the pulse delivered through 1 m of NCF, presented in Figs. 10(a) and 10(b), shows no pulse dispersion in comparison with the original laser pulse. However, after propagation along 8 m of fiber the transmitted pulse was stretched to 8.7 ps FWHM, as plotted in Fig. 10(c), likely associated with intermodal dispersion rather than non-linear effects. No significant distortion of the pulse shape was observed. In addition, no measurable degradation between the optical spectra of the laser pulse and the NCF-delivered pulse was observed.
Nanosecond and picosecond micro-machining
The suitability of the fiber-delivered laser beam for machining applications was demonstrated by through-cutting of aluminum sheet and marking of titanium with the ns laser, and by milling of fused silica with the ps laser. The experimental setup for fiber-based micro-machining is shown in Fig. 11. The laser-fiber coupling arrangement was identical to that used previously. A half-wave plate and polarizing cube beam splitter were used to provide precise power control without affecting the laser-fiber coupling efficiency. The output of the fiber was collimated with a singlet plano-convex lens, and a telescope was used to expand the beam to match the entrance aperture (14 mm diameter) of the galvo scan head. The focal lengths of the f-theta lenses in the galvo scan head were 125 mm and 160 mm for the ns and ps lasers respectively. The focused spot diameters at the workpiece were calculated to be approximately 30 µm (1/e²) for the ns laser and 36 µm (1/e²) for the ps laser. The lengths of the NCF used for the machining were 1 m (bent) and 10.5 m (coiled) for the ps and ns lasers respectively. The 1064 nm, 60 ns laser parameters used for through-cutting of the 0.3 mm thick aluminum sheet were 0.8 mJ pulses at a repetition rate of 15 kHz. The delivered pulse energy was sufficient to perform precise cutting of relatively small shapes, less than 1 mm by 1 mm, with a cutting speed of 1 mm/s, see Fig. 12. Despite the multi-mode nature of the fiber and the bending-induced profile changes, high-quality shapes were generated. Marking in titanium was performed using identical machining parameters, but with a significantly higher surface speed of 100 mm/s. A raster scanning method was used to fabricate the features. Excellent results were achieved, as shown in Fig. 13. Picosecond fiber-delivered pulses were used to perform high-quality machining of 1.5 mm thick fused silica. With the 1030 nm, 6 ps laser, the pulse energy used for machining was approximately 52 µJ at a 400 kHz repetition rate. The marking speed was set to 100 mm/s. This allowed features with dimensions of less than 1 mm by 1 mm and a depth of 30 µm to be fabricated. Examples of the machined fused silica are shown in Fig. 14. No cracks in the glass structure were observed. To our knowledge this was the first demonstration of crack-free machining of glass with fiber-delivered ps pulses in the 1 µm wavelength region, which is not possible with solid-core LMA and conventional hollow-core fibers due to their low damage thresholds and non-linear effects in the short-pulse regime. Moreover, crack-free milling of fused silica is not possible with longer (ns) pulses (which can be delivered through, e.g., solid-core LMA fibers) due to thermal effects induced in the glass.
Conclusions
In this paper we have presented high-power nanosecond and picosecond pulse delivery through a novel hollow-core Negative Curvature Fiber for micro-machining applications. We report for the first time successful transmission of ps pulses with an average power of more than 36 W, which is 7 times greater than previously reported with a hypocycloid-core kagome-type HC-PCF [11]. The non-complex structure of the fiber, consisting of only a single cladding layer, makes it relatively easy to fabricate compared with other ARROW-guiding fiber designs, e.g., hypocycloid-core kagome fibers. The energy handling capability of the fiber is significantly higher than for typical hollow-core photonic crystal fibers and conventional silica fibers. Moreover, it was not possible to establish the ultimate damage threshold due to the lack of a sufficiently powerful laser system; we therefore assume that the NCF can withstand higher pulse energies than reported here. The delivered pulse energies and average powers are at the level required for precision micro-machining applications. High-quality machining and marking has been demonstrated, despite the fiber being multi-mode with a somewhat unstable output profile due to its bend sensitivity, which makes the NCF a strong candidate for solving the flexibility problem of high-average-power pulsed lasers. Further development of the fiber will focus on modifying its structure to obtain more efficient separation between the cladding nodes and the core, to achieve single-mode, low-loss, and bend-insensitive high-power beam delivery. | 2018-04-03T02:03:14.730Z | 2013-09-23T00:00:00.000 | {
"year": 2013,
"sha1": "5f6e1432595b450631064ce456cbfc2cb13e732f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.21.022742",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "66b0806ac97e7d5cc6f8d5fa9c29b30af5577ca3",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
233799448 | pes2o/s2orc | v3-fos-license | Development of an electronic book epub 3.0 as a learning resource for blended learning IPA Terpadu
The purpose of this study was to produce an electronic book on IPA Terpadu competencies and to determine the quality of the product as a learning resource for blended learning. The electronic book consists of text, images, and video, and is readable on computers and other electronic devices. The development of this ebook consists of 3 stages: planning, design, and develop. The electronic book was tested on various devices with different operating systems, covering mobile (Android) and personal computers (Windows). The results of the study show that: (1) the developed product is an electronic book in epub 3.0 format on IPA Terpadu; (2) the results of the field test show that the developed product has good quality, with an average satisfaction of 85% (very satisfied).
The use of technology in learning offered a new way of learning, motivated students to learn, made learning more exciting, increased their attention toward instruction, was more efficient, and increased their interest in the class [9].
Blended learning is part of e-learning that utilizes information technology media to create optimal learning programs for students. Blended learning is a blend of face-to-face learning and e-learning [10]. Blended learning refers to the integration of a face-to-face classroom section with an appropriate use of technology [11,12]. Blended learning is a formal education program in which a student learns at least part of the material through e-learning, with some elements of student control over time, place, and face-to-face meetings [13]. Learning IPA Terpadu must also include the dimensions of attitude, process, product, application, and creativity. Students are expected to have a holistic grasp of science in dealing with everyday life problems contextually through integrated science learning [14]. The purpose of this study was to produce an electronic book on IPA Terpadu competencies and to determine the quality of the product as a learning resource for blended learning.
Methods
The product to be produced is an electronic book epub 3.0 application. This application can be used as learning material for IPA Terpadu in blended learning. In this study the authors use 3 stages of development: planning, design, and develop.
Planning phase
The planning phase is the initial stage of the research. Information was collected to identify problems using interview, observation, and questionnaire instruments. Next, software requirements were collected to build the product based on the objectives. It is justified to state that requirements engineering is a critical success factor in system development [15]. The development of this product is done to produce an electronic book that will be applied to learning; the formulation of learning objectives is carried out at this stage.
Design phase
At this stage, the activities include determining the design of the product display, developing the key material ideas, and making flowcharts.
Develop phase
This development phase includes writing the program code, making graphics, producing media, and alpha testing.
Planning phase
The process of developing an electronic book starts with the planning stage, which begins with a needs analysis. The purpose of analyzing requirements is to determine what type of software or system will be produced and to manage the results of the elicitation of the requirements, producing a specification document of the overall content requirements according to what the user wants.
Needs analysis.
Based on the results of interviews with teachers, the Integrated Science learning outcomes are very low because students do not have independent learning materials. Researchers also distributed questionnaires to 276 students to find out the needs and expectations associated with the desired learning program; 96% of the students agreed on making electronic books.
Formulate instructional goal.
The electronic book produced contains learning materials that will be implemented in learning to achieve the learning objectives. In the next step, the formulation of learning objectives includes Kompetensi Inti (KI), Kompetensi Dasar (KD), and indicators for IPA Terpadu subjects based on the 2013 curriculum. Kompetensi Inti and Kompetensi Dasar contain statements about the behaviors expected of students after participating in learning. The researchers classified general and specific learning goals on the cognitive and psychomotor aspects.
Design phase
Product making is done at this stage by building product prototypes, determining the material composition and ideas, and making flowcharts.
3.2.1. Determine the product display design.
At this stage a prototype of the electronic book epub 3.0 was made, and the student worksheet as a whole consisted of visual, audio, video, animation, and hypertext or hyperlink material containing content that referred to the learning objectives that had been formulated.
Developing the main material idea.
Developing the main idea of the material is done by first choosing and selecting the material to be used and writing the material description, which refers to the outline of the content made previously. Then the learning strategy and design are determined, along with the material experts and media experts involved in the development.
Make a flowchart and storyboard.
Flowcharts are used to show the structure and sequence of the project, while the storyboard shows the details of what will be displayed in the product. Flowcharts and storyboards are useful for designers to analyze the program components and sequences and to understand the information delivery.
Develop phase
At this stage, the electronic book epub 3.0 product is developed in accordance with the planning and design stages.
Write down the program code.
The developer turns the design into a product that fits the planning and design stages: an electronic book for IPA Terpadu. The author designs the program by writing the program code and editing the audio, visual, text, and animation elements.
Field test.
The last stage in this development is a field test; this field test involved senior high school students and used a questionnaire instrument. The distribution of questionnaires aims to determine student responses regarding satisfaction. Students were asked to express their opinions on a Likert scale: they checked "1" if they strongly disagreed, "2" if they disagreed, "3" if they had no clear opinion, "4" if they agreed, and "5" if they strongly agreed with the given statement. Before it was given to students, the electronic book epub 3.0 was tested on several devices with different operating systems; the results of this testing on various devices are shown in the corresponding table. According to Table 3, the results show that student satisfaction with the use of epub 3.0 IPA Terpadu in blended learning averaged 85% (very satisfied). According to students' responses to statement one, most of them like blended learning using the electronic book epub 3.0. Furthermore, students strongly agreed with the statement that blended learning has provided learning opportunities without being limited by place and time. This is a very positive aspect of students' right to learn: students can get access to learning in accordance with the learning objectives. Regarding the statement that student learning progress can be facilitated by blended learning, students can adapt according to their ability to understand the material quickly or slowly.
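For illustration, a percent-satisfaction figure like the reported 85% follows from dividing the mean Likert score by the maximum score of 5. The response counts in the sketch below are hypothetical and only demonstrate the arithmetic.

```python
# Converting 5-point Likert responses to a percent-satisfaction score.
# The response counts are hypothetical (276 respondents, as in the study)
# and were chosen only to illustrate how an ~85% average could arise.
responses = {5: 115, 4: 120, 3: 35, 2: 5, 1: 1}   # score -> count

n = sum(responses.values())
mean_score = sum(score * count for score, count in responses.items()) / n
percent = 100.0 * mean_score / 5.0                 # maximum score is 5

print(f"n = {n}, mean = {mean_score:.2f}, satisfaction = {percent:.0f}%")
```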
Conclusion
Based on the description of the stages of product development in the research results, the following conclusions can be drawn. The development of the electronic book epub 3.0 went through 3 stages: (1) the planning phase, to gather information about needs and problems that occur in the field; (2) the design phase, the stage of making a product prototype of the epub 3.0 electronic book, developed based on the findings in the planning phase; (3) the develop phase, developing the electronic book epub 3.0 product in accordance with the planning and design stages. A field trial was then conducted. Results show that student satisfaction with the use of epub 3.0 IPA Terpadu in blended learning averaged 85% (very satisfied). | 2021-05-07T00:04:23.007Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "52b023cd9ec36a23ca9453cb20ab342c6d64ca4d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1098/3/032115",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "423994834d07707b7d366f55745427a280acabbf",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
244247629 | pes2o/s2orc | v3-fos-license | Investigating the Relationship of Test Anxiety and Time Management with Academic Procrastination in Students of Health Professions
Academic procrastination is a harmful phenomenon among students and has many negative consequences. The present study aimed to investigate the relationship of test anxiety and time management with academic procrastination in students of health professions. The population of this correlational study consisted of 281 Iranian students of health professions. The Tuckman Procrastination Scale, Time Management Scale, and Sarason's Test Anxiety Scale were used to measure the variables. Pearson's correlation and multivariate regression tests were also performed. The mean score of students' academic procrastination was higher than the average level. A significantly negative correlation was found between time management and academic procrastination (r = −0.487, P ≤ 0.01). Additionally, there was a significantly positive correlation between test anxiety and academic procrastination (r = 0.443, P ≤ 0.01). The linear regression model indicated that the independent predictors, time management and test anxiety, accounted for 32.6% of the variation in academic procrastination (R² = 32.6%). The findings of this study indicated that test anxiety and time management were associated with academic procrastination. Therefore, purposeful educational and psychological interventions are required to reduce academic procrastination.
Introduction
Recent studies have shown that procrastination is one of the most important challenges for students in universities and higher-education institutions [1][2][3][4][5][6]. Some researchers have reported a prevalence rate of 14-50% for procrastination among students [7,8]. Procrastination is defined as "the students' delay in doing their homework or making a decision to act" [9,10]. Procrastination includes delays in doing course assignments, writing articles, and preparing for exams. Procrastination has a negative effect on individuals' quality of performance and mental-physical health [11]. Studies have illustrated that the most important characteristics of students who procrastinate are poor academic performance [12], low satisfaction with academic life [13], high stress [14], poor well-being, poor time management, misconceptions about the learning and teaching process [6], low marks in exams, and poor self-regulation [15]. In addition, one of the variables with a key role in increasing procrastination is anxiety [16]. Test anxiety is a common type of anxiety among students, which is measured using Sarason's Test Anxiety Scale [17,18]. This variable has a negative effect on students' readiness for exams. In this regard, the results of one study showed a significant correlation between test anxiety and academic procrastination [19]. Another study, however, indicated a negative correlation between test anxiety and academic performance [18]. In addition, test anxiety reduces subjective well-being [20].
According to studies, time management is a vital skill for university students [21,22]. Psychological studies have emphasized that people's ability in time management is rooted in their psychological and behavioral characteristics. Understanding the value of time, the ability to control time, and the optimal use of time are the most important characteristics of students with desirable time management skills [23,24]. Effective time management and optimal use of time involve planning, goal setting, and prioritizing activities in work and life [25]. According to the results of one study, time management training has a positive effect on reducing anxiety and depression and increasing sleep quality [24]. Other studies have shown that time management has a significantly positive correlation with academic motivation [25] and academic engagement [26]. Furthermore, time management has a significantly negative correlation with anxiety [25]. Therefore, it is important to identify its relationship with academic procrastination. Students of health professions are highly responsible for the health of the community.
Therefore, the prevalence of procrastination among them has many negative consequences [27,28]. Studies emphasize that procrastination in academic activities should be taken seriously [1,29]. Evidence suggests that students with academic procrastination fail to achieve academic goals [30] and have low levels of self-confidence [15]. In this regard, it is important to identify the factors associated with academic procrastination. Thus, the main purpose of this study was to investigate the correlation of test anxiety and time management with academic procrastination among students of health professions in Iran. To this end, the following questions were posed: (1) Is there any significant relationship between test anxiety, time management, and academic procrastination? (2) To what extent do test anxiety and time management (independent variables) predict students' academic procrastination (dependent variable)?
Study Design.
This was a cross-sectional, descriptive-analytical study.
Setting, Sample, and Sampling Method.
The present study was conducted in the schools affiliated with the Kermanshah University of Medical Sciences (KUMS) in western Iran in 2020. The schools included nursing and midwifery, paramedical, health, pharmacy, dentistry, nutrition, and medicine. The sample size was calculated to be 281 students. The samples were selected by the stratified random sampling method, and each school formed a stratum. Inside each stratum, samples were selected by simple random sampling using a random number table. The response rate to the questionnaires was 100%. The inclusion criteria were studying in the second semester or above and informed consent to participate in the study. The exclusion criterion was reluctance to answer the questionnaire.
Instruments.
The tools used in this study included a demographic information form, Tuckman's Procrastination Scale, the Time Management Scale, and Sarason's Test Anxiety Scale. The demographic information form included questions about gender, age, marital status, and school.
Tuckman's Procrastination Scale (TPS) was designed by Tuckman (1991) [31]. This questionnaire consists of 16 items and has a single-factor structure. The validity and reliability of the English version of this scale have been confirmed by Tuckman [31,32]. The TPS has been validated in Iran, and its single-factor structure has been confirmed [33]. Other studies have reported optimal levels for the validity and reliability of this scale [4]. In the present study, the internal consistency of this scale, calculated using Cronbach's alpha, was 0.89. Each item was rated on a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). The score range was from 16 to 64. A high score indicated academic procrastination.
The Time Management Scale (TMS) was designed by Trueman and Hartley and consists of 14 items [34]. This tool includes two subscales: daily planning (5 items) and confidence in long-term planning (9 items). In Trueman and Hartley's study, the reliability of the scale was 0.79 [34]. The validity and reliability of the Persian version of the TMS have been confirmed as well [35]. In this study, Cronbach's alpha coefficient was calculated to be 0.77. Each item is answered on a 5-point Likert scale (1 = never and 5 = always). The total score ranged from 14 to 70. High scores indicate strong time management skills.
Sarason's Test Anxiety Scale (STAS) was developed by Sarason [36] and consists of 37 items. Raju et al. calculated the reliability of the STAS to be 0.84 using Cronbach's alpha [37]. The validity and reliability of this scale in Iran have been confirmed at an acceptable level [37,38]. In the present study, Cronbach's alpha coefficient of the STAS was 0.78. A score of 0 was assigned for false answers and a score of 1 for correct answers. Total scores ranged from 0 to 37. The scores are categorized into three levels: low anxiety (0-12), moderate anxiety (13-20), and high anxiety (21-37).
Data Collection Method.
After the beginning of the academic year, permission to conduct the research was obtained from the national ethics committee of KUMS (IR.KUMS.REC.1397.1049). A statistical sample was determined, and the researchers attended the schools of KUMS. The students were informed about the objectives of the study and were asked to fill out the questionnaire. The students who were willing to participate in the research were included in the study. The questionnaires were distributed randomly and collected after completion.
Statistical Analysis.
Data were analyzed with IBM SPSS-18 software. Descriptive statistics (frequency, percentage, mean, and standard deviation) were used to identify the characteristics of the participants and to determine the means of the variables. Pearson's correlation coefficient and multivariate regression analysis were used to analyze the study questions. The level of significance was set at P ≤ 0.05.
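As an illustration of this analysis pipeline, the sketch below runs a Pearson correlation and a multivariate OLS regression on synthetic data. The generated numbers are not the study's data, and SciPy/statsmodels stand in here for the SPSS procedures actually used.

```python
# Illustrative sketch of the reported analysis on synthetic data;
# all generated values are invented and only mimic the study's scales.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 281
time_mgmt = rng.normal(42.0, 6.3, n)     # TMS-like scores
test_anx = rng.normal(16.4, 4.4, n)      # STAS-like scores
procrast = 60 - 0.45 * time_mgmt + 0.55 * test_anx + rng.normal(0, 5, n)
df = pd.DataFrame({"time_mgmt": time_mgmt, "test_anx": test_anx,
                   "procrast": procrast})

# Pearson correlations (cf. Table 3)
r_tm, p_tm = pearsonr(df["time_mgmt"], df["procrast"])
r_ta, p_ta = pearsonr(df["test_anx"], df["procrast"])
print(f"time management: r = {r_tm:.3f} (p = {p_tm:.3g})")
print(f"test anxiety:    r = {r_ta:.3f} (p = {p_ta:.3g})")

# Multivariate linear regression (cf. Table 4)
X = sm.add_constant(df[["time_mgmt", "test_anx"]])
model = sm.OLS(df["procrast"], X).fit()
print(model.params)
print(f"R^2 = {model.rsquared:.3f}")
```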
Ethical Considerations.
This study was approved by the Ethics Committee of the Kermanshah University of Medical Sciences with the code IR.KUMS.REC.1397.1049. The goals of the study were stated to the participants, and all of them were assured of the confidentiality of their demographic information and responses. Written informed consent to participate in the study was obtained from all participants. They were also given enough time to complete the questionnaires.
Results
More than half of the participants were male (n = 167, 59.4%). The mean age of the students was 22.7 ± 2.7 years. The majority of the students (n = 143, 50.9%) were in the age group 22-25 years and were from the school of medicine (n = 84, 29.9%) (Table 1). The mean and standard deviation scores for time management skills, academic procrastination, and test anxiety were 42.0 ± 6.3 out of 70, 46.4 ± 7.7 out of 64, and 16.4 ± 4.4 out of 37, respectively (Table 2).
Correlation analysis between the variables indicated a significantly negative correlation between time management skills and academic procrastination (r = −0.487, P ≤ 0.01). In addition, the total score of test anxiety was positively correlated with the academic procrastination score (r = 0.443, P ≤ 0.01) (Table 3). The results of multivariate regression analysis showed that time management (β = −0.382) and test anxiety (β = 0.316) predicted academic procrastination. Time management and test anxiety explained 32.6% of the variation in academic procrastination (Table 4).
Discussion
The present study was conducted to determine the relationship of test anxiety and time management with academic procrastination among students of health professions in Iran.
The results showed that the mean score of test anxiety was at a moderate level and that of time management was slightly above the moderate level. The mean score of academic procrastination was above the average level. This finding is consistent with the findings of other studies [2,6,39]. In this regard, the results of Zhang et al. in Chinese students showed that 74% of them procrastinated on at least one academic assignment [2]. The findings of Atalayin et al. showed that half of the dental students in Turkey suffered from procrastination [6].
The results of a study by Guo et al. in nursing students in China also revealed that students with low levels of self-efficacy had higher academic procrastination [39]. According to this evidence, procrastination is a challenge among students in many countries, including Iran. Many researchers have also emphasized that procrastination is a common and problematic phenomenon in students [3,40]. The high level of student procrastination in the current study may be related to several factors, such as lack of interest in the field of study, lack of motivation, and low self-esteem.
In the present study, a significantly negative relationship was found between time management skills and academic procrastination. In this regard, the results of one study showed that time management skills increased students' level of engagement and participation in learning activities [26]. Ghiasvand et al. reported a significant correlation between time management and academic motivation [25]. Ping and Xiaochun showed that increased time management skills play an important role in reducing depression [24]. (Table 4 notes: t is the test statistic; β is the standardized coefficient, indicating the effect of each independent variable on the dependent variable; R² is the coefficient of determination.)
It is believed that time management skills are very important in the learning process, because time management is a strategic skill and plays an important role in achieving goals in professional and academic fields [41]. Procrastination, by contrast, is the opposite of time management and can lead to failure in students' education and life [42]. Therefore, enhancing students' time management skills may play an important role in reducing the phenomenon of procrastination.
This study revealed that the test anxiety variable had a significantly positive correlation with academic procrastination [18,20,43]. The results of Balogun et al. showed a negative correlation between test anxiety and students' academic performance [43]. Therefore, psycho-educational interventions at the university level are necessary in this regard. Furthermore, Zhang and Henderson showed a negative and significant correlation between test anxiety and students' performance in the test [18]. The results of Steinmayr et al.'s study also showed that anxiety, as an important component of test anxiety, predicts changes in academic performance and subjective well-being [20].
Low levels of test preparation and lack of concentration are among the most important challenges for students dealing with test anxiety. Delay in preparing for exams increases test anxiety in students. Studies have reported a negative correlation between academic performance and test anxiety [18,20]. Zhang et al. showed a significantly positive correlation between fear of failure and procrastination [2]. Moreover, identifying the factors affecting fear of failure and test anxiety plays a very important role in managing test anxiety in students. Most students are afraid of the final examinations.
Therefore, evaluating their performance throughout the learning process can be effective in reducing their anxiety. Anxiety is a psychological variable that has several causes. In the present study, the correlation between test anxiety and procrastination was investigated. In this regard, if instructors use formative assessments along with summative assessments, this can be effective in reducing students' exam anxiety, because students are mostly concerned about grades in end-of-semester assessments. Therefore, professors can reduce students' test anxiety by assigning a part of the score to formative assessments.
Limitations.
This research had three limitations. The nature of this study was cross-sectional and correlational; therefore, it is impossible to determine causal relationships between the variables. A self-report questionnaire was used to collect data; hence, as it was not possible to verify the participants' answers, this may have affected the accuracy of the results. Numerous personal, social, psychological, and academic factors can affect academic procrastination, time management, and test anxiety, and these were not examined in this study.
Conclusion
In the present study, the academic procrastination score in students of health professions was above the average level.
Time management was negatively associated with academic procrastination. Furthermore, a significantly positive correlation was found between test anxiety and academic procrastination. The results of regression analysis showed that test anxiety and time management predicted academic procrastination. Evidence suggests that procrastination is a serious obstacle to students' academic achievement. Therefore, enhancing time management skills and taking measures to reduce test anxiety can play an important role in reducing academic procrastination in students. The findings of this study can thus be considered by educational planners and administrators in universities. Future studies are suggested to identify the social, psychological, and educational factors that influence academic procrastination, test anxiety, and time management.
Data Availability
The datasets used and analyzed during the present research are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2021-10-18T17:35:39.389Z | 2021-09-30T00:00:00.000 | {
"year": 2021,
"sha1": "93fc6f4174e281d6df71d3387f8d6adf7d770353",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/edri/2021/1378774.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0f6e7ab482c19d160451629f95f065fcf3fcecc5",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
259287227 | pes2o/s2orc | v3-fos-license | The EPICS control system for IsoDAR
Many large accelerator facilities have adopted the open-source EPICS software as the quasi-industry standard for control systems. They typically have access to their own electronics laboratory and dedicated personnel for control system development. On the other hand, small laboratories, many based at universities, use commercial software like LabView, or entirely homebrewed systems. These often become cumbersome when the number of controlled devices increases over time. Here we present a control system setup, based on a combination of EPICS, React Automation Studio, and our own drivers for electronics available to smaller laboratories -- such as Arduinos -- that is flexible, modular, and robust. It allows small laboratories, working with off-the-shelf modular electronics, power supplies, and other devices to quickly set up a control system without a large facility overhead, while retaining maximum compatibility and upgradeability. We demonstrate our setup for the MIST-1 ion source experiment at MIT. This control system will later be used to serve the entire IsoDAR accelerator complex and, as such, must be easily expandable.
Introduction
In the planned Isotope Decay-At-Rest experiment (IsoDAR) [1,2,3] we propose to search for sterile neutrinos through the disappearance of electron antineutrinos (ν̄e), produced in a neutrino production target and measured in a nearby 2.6 kt scintillator detector at the Yemilab underground facility. In order to be decisive within a time frame of five years, IsoDAR requires a continuous 60 MeV proton beam of 10 mA on target at 80% duty factor. We recently presented a mature design for a compact cyclotron capable of accelerating these record beam currents [4], which we optimized using AI/ML techniques [5,6]. The design utilizes several novel ideas, among them RFQ direct injection [7,8] and acceleration of H2+ ions. We built a prototype of the ion source, currently capable of delivering a 1 mA H2+ beam, at MIT [9,10], which we are in the process of upgrading. At the 10 mA nominal currents and in an underground laboratory setting, reliability and ease-of-use are paramount for a control system. Because we build and test our hardware step-by-step, beginning with the ion source, then the RFQ, and finally the cyclotron, we also require an easily expandable control system. Small laboratories, many based at universities, often use commercial software like LabView [11], or entirely homebrewed systems. These often become cumbersome when the number of controlled devices increases over time. The Experimental Physics and Industrial Control System (EPICS) [12] is the tool of choice for a multitude of large facilities around the globe, with applications in accelerators [13,14,15,16,17], fusion and neutron science [18,19], particle physics experiments [20,21,22], astronomy [23,24], and more.
In this paper, we present the details of the implementation of a highly reliable and easily expandable framework based on EPICS, React Automation Studio (RAS) [25,26], and our own code for 1) fast communication with microcontrollers and 2) data logging.
Our scripts and tutorials make this framework easy to deploy and to expand when needed.
With just a personal computer or laptop and an Arduino, hardware control examples using EPICS can easily be set up and arbitrarily expanded later. This makes our setup ideal for educational purposes. All our code is available freely on GitHub [27].
The structure of this paper is as follows. In Section 2 we describe the various software packages; in Section 4 we discuss the implementation at our ion source laboratory, which doubles as an EPICS test-bench. In Section 6 we discuss how our easy-to-use and expandable setup can be deployed quickly to control small-scale experiments in labs, at home, or in an educational setting, by presenting a set of instructions and tutorials. Finally, in Section 7 we present our conclusions.

Table 1: libcom1 message types, formatting, and response. Note that "Query All" will have a response length that depends on the total number of channels n.
Devices and communications
In many small experiments, equipment is commonly controlled by off-the-shelf microcontrollers such as Arduinos or Raspberry Pi computers. While these excel at fast deployment and low maintenance, their communication with other devices is often implemented inefficiently. To remedy this, we have developed a common protocol, libcom1, for Arduino-based devices to standardize their serial communications. The message types, formatting parameters, and responses are summarized in Table 1. Note that in this version of the library there are two modes for querying single devices: single channel and all channels. The response to an all-channel query concatenates the values of all n channels on the device, separated by semicolons. Multiple queries per message may be supported by a future version of the library.
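As an illustration, a host can poll such a device with a few lines of Python. The port, baud rate, and query bytes below are placeholders, since the exact libcom1 message format is defined in Table 1 and in the library source; the semicolon-separated parsing follows the all-channel response format described above.

```python
import serial  # pyserial

# Placeholder port, baud rate, and query bytes; the real message format
# is defined by libcom1 (see Table 1 and the GitHub repository).
PORT, BAUD = "/dev/ttyACM0", 115200
QUERY_ALL = b"q?\n"   # hypothetical "query all channels" message

with serial.Serial(PORT, BAUD, timeout=1.0) as ser:
    ser.write(QUERY_ALL)
    reply = ser.readline().decode("ascii").strip()

# An all-channel response concatenates n values separated by semicolons
values = [float(v) for v in reply.split(";") if v]
for i, v in enumerate(values):
    print(f"channel {i}: {v}")
```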
EPICS (Experimental Physics and Industrial Control System) is open-source software developed at Argonne National Laboratory. EPICS is the quasi-industry standard for large particle accelerator systems. The main benefit of using EPICS is its scalability to very large systems with many devices, as it can support a wide range of protocols. This is ideal for the breadth of devices present in the H2+ ion source. EPICS also offers real-time data communication and a high degree of user control. The minimal latency makes it ideal for running controlled experiments. The user control allows a user to set operating ranges for devices, create custom outputs, and create specific automated tasks, which makes it easy to interface with a Graphical User Interface (GUI).
In our control system design, EPICS serves as the bridge between the hardware and the user GUI. Many of the devices are in-house electronics that utilize off-the-shelf parts and use libcom1 to communicate with EPICS. EPICS process variables (PVs) are used to define the types of inputs, outputs, and calculations of the control system and their properties. The GUI, discussed further in Section 2.3, interacts with the EPICS PVs through EPICS Channel Access (CA).
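Any Channel Access client can read, write, and monitor these PVs; the sketch below uses PyEPICS, the same library the RAS pvServer builds on (see the RAS back end description below). The PV names are hypothetical placeholders, not records from our databases.

```python
import time
from epics import PV, caget, caput

# Hypothetical PV names for a power-supply record pair
SETPOINT, READBACK = "MIST1:PS1:VoltageSet", "MIST1:PS1:VoltageRead"

caput(SETPOINT, 5.0)                 # write a new set point
print("readback:", caget(READBACK))  # one-shot read

# Monitor the readback: PyEPICS invokes the callback on each value update
def on_change(pvname=None, value=None, **kw):
    print(f"{pvname} -> {value}")

pv = PV(READBACK)
pv.add_callback(on_change)
time.sleep(10)   # keep the process alive so monitor callbacks can fire
```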
libcom1 Integration
Integrating libcom1 into the EPICS framework requires a minor modification to the StreamDevice package. This modification is simply an extension of the format converters to correctly parse the incoming data from devices using libcom1. Once compiled with EPICS, the formatting tools can be used in the protocol files.
React Automation Studio
React Automation Studio (RAS) [25,26] is a scalable platform for instrumentation control which utilizes containerized services to provide EPICS integration, logging, alarm handling, a React GUI front end, and more. The suite can be deployed on a machine that accesses the lab devices through the local network while exposing the control system interface to the internet behind a secure login, allowing for remote control and monitoring if required. Alarm handlers are linked to EPICS alarms, which can alert operators through front-end notifications and email, and are logged through a MongoDB service. An important feature is the ability to save and load set points easily to restore components to a predetermined state for operation. These values are obtained from the EPICS PVs and stored in a separate MongoDB database, and this functionality can be accessed through standard system components within RAS.
RAS Front end -React
Since RAS can be deployed on the web, it can be used on any device with an online or local connection to the server. The React framework is used as it allows flexible development while offering a reactive and real-time experience to the user. A responsive interface based on Material-UI is implemented to serve a clean GUI on both desktop and mobile devices. Reusable components in the web views provide a high degree of flexibility for designing a control system, such as displays for a wide range of instrumentation.
These components take one or more PVs as input (which can be declared using macros for convenience) and instantiate themselves using relevant queried data. The front-end queries the pvServer using the Socket.IO socket library in real time and posts user input data to the APIs offered by pvServer. Plots are generated using the Plotly graphics library, which offers customizable views with a variety of data visualization tools. A few additional tools for visualization were developed for this work, which are available in our GitHub repository [27].
The development of the front end of RAS requires basic React, HTML, CSS, and JavaScript knowledge. However, due to the modularity of the implementation, most developers can use the existing components as black boxes and build on top of them conveniently.
RAS Back end -Python
The pvServer is the Python back-end service for RAS using Flask framework, which establishes connections to the EPICS PVs. EPICS is integrated into the RAS pvServer using PyEPICS Channel Access (CA). Flask-SocketIO is used to create sockets for sending real-time updates to the front end. This connection only happens when the user is authenticated and have access rights to the requested PV, and otherwise they are denied the connection. This back end also handles the user authentication, permissions, and provides APIs for writes to EPICS variables, which are handled similarly. MongoDB databases are used to manage some forms of data storage and alarm handling. Alarm configurations can be setup in JSON files, allowing for flexible conditions and predetermined alarm areas. Previous alarms that were triggered can be accessed at a later date 7 through the alarm logs.
Data Logger
A new data logging feature was developed to store real-time data quickly and efficiently. An important design principle was to avoid any changes to the existing RAS framework to simplify integration. The data logger interfaces with the pvServer by creating new SocketIO clients for each monitored PV. The clients wait for any data to be transmitted and store it in a buffer. Once the buffer is full, the data contained within it is saved as a chunk to a file. The default file format is HDF5, but additional file types can be implemented within the socket methods of the logger file.
The data logger is initialized with a configuration file, which allows the user to specify networking and data storage information as well as a list of desired PVs to monitor and their data types. To accommodate the varying rates at which certain PVs need to be logged, two scanning modes have been added: sampling and continuous. In the sampling mode, the user can specify a rate at which to poll the PV. In continuous mode, data will only be logged when the PV is updated by an EPICS IOC. Specifying the sample rates for individual devices reduces storage space usage by avoiding unnecessary polling. The continuous mode is useful for storing values that do not change often, such as an interlock or a switch. The data logger is contained within a simple Python script and requires few dependencies. This lightweight design allows for quick deployment and the ability to have multiple devices on the network storing data without significant overhead. The code for this tool is available on GitHub [28].
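For concreteness, a condensed sketch of how such a buffered logger could be structured is shown below. It is a simplification of the tool described above, not the actual implementation: the configuration keys, event names, buffer size, and file layout are illustrative assumptions, and only the continuous (event-driven) mode is shown; the sampling mode would instead poll each PV at its configured rate.

# Condensed sketch of a buffered PV-to-HDF5 logger (illustrative only).
import json, time
import h5py
import socketio  # python-socketio client, used here to talk to a pvServer

with open("logger_config.json") as f:  # networking info, PV list, data types
    cfg = json.load(f)

BUFFER_SIZE = 512
buffers = {pv["name"]: [] for pv in cfg["pvs"]}

def flush(pvname):
    # Write a full buffer as one chunk (dataset) to the HDF5 file.
    with h5py.File(cfg["output_file"], "a") as f:
        grp = f.require_group(pvname.replace(":", "_"))
        grp.create_dataset(str(time.time()), data=buffers[pvname])
    buffers[pvname].clear()

sio = socketio.Client()

@sio.on("pv_update")
def on_update(msg):
    buffers[msg["pv"]].append((time.time(), msg["value"]))
    if len(buffers[msg["pv"]]) >= BUFFER_SIZE:
        flush(msg["pv"])

sio.connect(cfg["server_url"])
for pv in cfg["pvs"]:  # register one monitor per configured PV
    sio.emit("subscribe", {"user": cfg["user"], "pv": pv["name"]})
sio.wait()  # block and keep logging until interrupted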
Example: Simple Control System
Included in the software repository is a simple working example of all the components described above. This simple example demonstrates how to map inputs and outputs of an Arduino microcontroller to EPICS with libcom1, and a bare-bones interface using RAS. It is intended to help new users get familiar with the individual components before building a more complex application.
The default Arduino configuration assumes a few simple connections between input and output pins, as seen in Figure 4. The Arduino sketch file contains the code for controlling these pins and creating the libcom1 channels. The EPICS database files simply create input and output records for these pins, requiring no modification assuming the default layout. Finally, there is a folder that includes a few files for RAS that create a page with buttons for toggling the outputs, indicators for the binary inputs, and a plot for the analog inputs. More complex behavior can be achieved with simple modifications to the individual components described here.

Figure 6: The layout of the MIST-1 H2+ Ion Source control system. The RAS server is set up to run on the lab workstation and serves as the outward/internet-facing device. Two EPICS IOCs are used for the controls of different devices. One is on the high voltage platform and is used to interface with the power supplies that drive the source. A fiber optic Ethernet adapter is used to provide electrical isolation from the high voltage. The other IOC handles other devices off of the source platform, such as the interlocks. The data-logging service is also run on the lab workstation, but is implemented such that it can be deployed on any computer on the local network.
Example: Ion Source Control System
The IsoDAR MIST-1 H2+ ion source uses the tools highlighted in the previous sections to create a stable and easy-to-use control system. Many of the devices utilize custom electronics and use microcontrollers (e.g., Arduino and Teensy) to read in or output voltages. Communication with these devices is streamlined using the libcom1 protocol over USB, which allows for a standardized and fast way of integrating new devices into the control system. In this example, two EPICS IOCs are used: one to communicate with devices on the high voltage platform inside the field cage of the ion source, and another for all devices outside. An optical fiber Ethernet cable is used for communicating with the platform IOC for electrical isolation. The packets are first decoded using a protocol file, which employs StreamDevice discussed in Section 2.2.1. Other devices which require manufacturer-supplied drivers or serial protocols can also be interfaced with EPICS as stream devices. Each device type has an associated database file that contains all relevant records and application-specific alarms. For example, a power supply will contain records to get/set/read voltages and currents, with thresholds set for over-voltage and over-current alarms. An example of the process variables defined for the Matsusada AU20P7 power supply is shown in Table 3. Another computer on the local network runs the Docker containers for the RAS suite and also serves as the control computer for operators. The RAS server runs the Docker containers within a Windows Subsystem for Linux (WSL) instance running Ubuntu 20.04, and the front end of the control system can be accessed from any web browser on the local network.
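As an illustration of what such a database file might contain, the snippet below sketches a pair of StreamDevice records for a power supply. The record names, protocol file, protocol entries, and limit values are hypothetical stand-ins, not the actual MIST-1 database.

# Hypothetical EPICS database records for a power supply (illustrative only).
record(ao, "MIST1:PS1:VOLTAGE_SET") {
    field(DESC, "Set output voltage")
    field(DTYP, "stream")
    field(OUT,  "@powersupply.proto setVoltage $(PORT)")
    field(DRVH, "20")  # drive high limit, guards against over-voltage
}
record(ai, "MIST1:PS1:VOLTAGE_RB") {
    field(DESC, "Read back output voltage")
    field(DTYP, "stream")
    field(INP,  "@powersupply.proto getVoltage $(PORT)")
    field(SCAN, "1 second")
    field(HIGH, "19")      # alarm threshold for over-voltage
    field(HSV,  "MAJOR")   # alarm severity when threshold is exceeded
}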
Operation of the control system requires careful monitoring of the voltages and currents of various power supplies over time, so the MIST-1 ion source pages use a host of different plotting features from existing RAS libraries as well as new application-specific modifications to existing tools. React components have also been created for certain device types, which allows for modular development of the beam line controls.
There are two methods for saving values from the PVs in the control system. The first is by using the native RAS tools, which are convenient for saving set points. The second is the data logger described above, which continuously records PV values to HDF5 files.
A primary concern is overheating of the source body if the plasma escapes the containment field within the chamber. To prevent this, water flow meters that monitor the source cooling water and thermocouples on the source body itself are routinely scanned for changes. Another heating concern is the melting of the filament, which can occur if there is a loss of vacuum, and so there is an upper allowed limit on the source pressure during running. If changes to these variables go outside of the normal operating conditions, the source heating power supply is disabled and an alarm is triggered.
Test Measurement
Here, we present a test measurement to demonstrate the functionality of the EPICS-based control system for the MIST-1 ion source. This measurement was part of the first commissioning runs after switching to EPICS and after the installation of a new pentode electrostatic extraction system [30] (cf. Figure 7). During this test, the filament heating current was varied from 20 A to 35 A. The other settings were kept constant and are listed in Table 5. It should be noted that normally, with increasing beam current, the voltages on the extraction system electrodes would be adjusted to ensure optimal transport. However, in this case, the resulting beam current remained low enough that this was not necessary. We show the drain current as a function of filament heating in Figure 8, left. We recorded the source parameters in HDF5 files using the data logger and subsequently plotted them in Figure 8, right. The control system performed as expected.
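As an example of how such logged data can be retrieved afterwards, the short sketch below reads chunked logger output with h5py and plots it with matplotlib. The file name and group layout are illustrative assumptions that match the logger sketch above, not the exact files from this run.

# Read chunked PV data from an HDF5 log and plot it (illustrative sketch).
import h5py
import numpy as np
import matplotlib.pyplot as plt

with h5py.File("mist1_run.h5", "r") as f:  # hypothetical output file
    grp = f["ISRC_FIL_CURRENT"]            # hypothetical PV group name
    data = np.concatenate([grp[k][...] for k in sorted(grp)])

# Each row is (timestamp, value); plot relative to the first timestamp.
plt.plot(data[:, 0] - data[0, 0], data[:, 1])
plt.xlabel("time (s)")
plt.ylabel("filament heating current (A)")
plt.show()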
Educational Use
The containerized control system as described here provides an excellent base for small systems in educational labs that require communication with off-the-shelf microcontrollers and RS232-, RS485-, or USB-enabled power supplies and other devices. The GitHub repository [27] for this work contains instructions for setting up and deploying a bare-bones version of EPICS and RAS, with additional code for the libcom1 library and the corresponding StreamDevice modification for EPICS. The individual tools described here may require more time to develop familiarity with, but the workflow of connecting a serial device and creating display elements is simple. The provided example (see Figure 4) only requires an Arduino, a few readily available components, and a platform to run EPICS and RAS, e.g., a low-cost laptop. The preferred operating system is Linux, but instructions using the Windows Subsystem for Linux (WSL-2) are available too.
Conclusion
In this paper, we presented the control system for the MIST-1 ion source, which is based on a combination of EPICS, React Automation Studio, and code we wrote for serial communication with Arduinos. We also introduced a tutorial and a set of instructions and examples to ease the installation of similar control systems. These are freely available on GitHub. We find that our setup is reliable and, due to the modular nature of EPICS and RAS, easily expandable, which ideally suits our plan to gradually expand the system. As the next steps, we will include a radiofrequency quadrupole linear accelerator and later a full cyclotron particle accelerator. The GitHub repository and tutorials we generated are well-suited for educational purposes, e.g., lab courses, or DIY projects. | 2023-06-30T06:43:00.224Z | 2023-06-29T00:00:00.000 | {
"year": 2023,
"sha1": "be8382f6a57379a62356e0b8990ec1c237d43402",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "be8382f6a57379a62356e0b8990ec1c237d43402",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
18766221 | pes2o/s2orc | v3-fos-license | Abnormal Response of the Proliferation and Differentiation of Growth Plate Chondrocytes to Melatonin in Adolescent Idiopathic Scoliosis
Abnormalities in the melatonin signaling pathway and the involvement of melatonin receptor MT2 have been reported in patients with adolescent idiopathic scoliosis (AIS). Whether these abnormalities were involved in the systemic abnormal skeletal growth in AIS during the peripubertal period remain unknown. In this cross-sectional case-control study, growth plate chondrocytes (GPCs) were cultured from twenty AIS and ten normal control subjects. Although the MT2 receptor was identified in GPCs from both AIS and controls, its mRNA expression was significantly lower in AIS patients than the controls. GPCs were cultured in the presence of either the vehicle or various concentrations of melatonin, with or without the selective MT2 melatonin receptor antagonist 4-P-PDOT (10 µM). Then the cell viability and the mRNA expression of collagen type X (COLX) and alkaline phosphatase (ALP) were assessed by MTT and qPCR, respectively. In the control GPCs, melatonin at the concentrations of 1, 100 nM and 10 µM significantly reduced the population of viable cells, and the mRNA level of COLX and ALP compared to the vehicle. Similar changes were not observed in the presence of 4-P-PDOT. Further, neither proliferation nor differentiation of GPCs from AIS patients was affected by the melatonin treatment. These findings support the presence of a functional abnormality of the melatonin signaling pathway in AIS GPCs, which might be associated with the abnormal endochondral ossification in AIS patients.
Introduction
Adolescent idiopathic scoliosis (AIS) is a three-dimensional structural deformity of the spine that occurs during the peripubertal period [1]. Although no consensus has been reached on its etiology, one of the generally accepted concepts is the presence of abnormal skeletal growth in AIS [2,3]. This abnormality is manifested during the peripubertal period in patients with AIS in that they tend to be taller, leaner and have a longer arm span than their healthy peers [4,5]. A faster growth of AIS subjects during puberty was also recorded in longitudinal studies [6]. Studies on magnetic resonance images disclosed a related anterior spinal overgrowth in girls with AIS [7,8]. Furthermore, a histomorphometric study of the vertebral endplate revealed more active growth in the anterior than the posterior spinal column in AIS patients [9]. All of these observations indicated the presence of an abnormal systemic growth and the likelihood of an abnormal regulation and modulation of skeletal growth and endochondral ossification in patients with AIS.
Growing interest has arisen in the past decades following the report of "idiopathic-like" scoliosis in animals with pinealectomy-induced melatonin deficiency [10][11][12]. Although considerable controversies still exist on whether an abnormal plasma melatonin level is present in patients with AIS [13][14][15], only a few studies focused on examining abnormalities in the signaling pathway of melatonin rather than the circulating melatonin level [16][17][18]. Melatonin failed to inhibit the increase of 3',5'-cyclic adenosine monophosphate (cAMP) induced by forskolin in osteoblasts from AIS patients when compared with cells from normal control subjects [16,18]. In addition, abnormality in the genotypic frequency at the promoter region of the MT2 gene, which was detected in a genetic association study [19], indicated that the MT2 gene was likely a predisposition gene for AIS, although there were controversies [20][21][22][23]. Furthermore, the osteoblasts from girls with AIS exhibited an abnormal response to melatonin in terms of proliferation and differentiation [24], which might be due to the abnormalities in MT2 receptor expression [25].
The functional outcome of abnormalities in the melatonin signaling pathway in the regulation of endochondral ossification in AIS, however, has not been reported. It has been documented that melatonin inhibited both proliferation and differentiation of rat vertebral body growth plate (VBGP) chondrocytes [26] with the involvement of MT1 and MT2 receptors. In addition, a recent study showed that melatonin enhanced the chondrogenic differentiation of human mesenchymal stem cells [27], mediated at least partially by the two membrane melatonin receptors. We hypothesized that melatonin could be involved in the regulation and modulation of endochondral ossification in humans, and that abnormality in the melatonin signaling pathway could contribute to the abnormal skeletal growth in AIS subjects. The present study aimed to investigate the expression of the melatonin membrane MT2 receptor in human growth plate chondrocytes (GPCs) and also the effect of melatonin on the proliferation and differentiation of GPCs isolated from AIS patients and normal control subjects, and cultured in vitro.
Expression of MT2 Receptor in GPCs
In both AIS and control groups, MT2 receptors were expressed mainly in the cytoplasm and not in the nuclei of GPCs (Figure 1). No positive signal was observed in the absence of the primary antibody for both receptors. In addition, the mRNA expression of the MT2 receptor in GPCs of AIS subjects, determined by quantitative real time-polymerase chain reaction (qRT-PCR), was significantly lower than that in control GPCs (p < 0.05).

Figure 1: MT2 receptor expression in growth plate chondrocytes (GPCs) from adolescent idiopathic scoliosis (AIS) patients and control subjects. Immunofluorescent staining was carried out using purified rabbit polyclonal anti-MT2 antibodies. MT2 receptor was demonstrated in GPCs from AIS patients (a) and control (b) subjects. The immunoreactivity was observed mainly in the cytoplasm. No staining was observed in the negative control (c). The mRNA expression of MT2 receptor was quantified by qRT-PCR. The GPCs from AIS patients showed a significantly lower expression than those of control subjects (d) (Student's t test, * p < 0.05).
To the best of our knowledge, the current study constitutes the first report on the inhibitory effect of melatonin on the proliferation and differentiation of GPCs in humans. In an in vitro study, melatonin at high concentrations (10, 100 µg/mL) showed an inhibitory effect on both the proliferation and differentiation of cultured rat vertebral body growth plate chondrocytes [26]. After incubation for 24 h in medium containing melatonin, the cell proliferation, gene expression of collagen type II and aggrecan, as well as protein expression of proliferating cell nuclear antigen (PCNA), Sox9 and Smad4 were significantly reduced. Moreover, it was found that the effects of melatonin could be reversed by the melatonin receptor antagonist luzindole, indicating the involvement of membrane melatonin receptors in these functions [26]. In a broiler chicken model, Aota et al. [28] noted that pinealectomized chickens had a significantly increased area of labeled hypertrophic zone per total hypertrophic zone compared with control chickens (32.8% ± 12.5% vs. 6.4% ± 5.0%, p < 0.005), as well as an increased number of hypertrophic and proliferative chondrocytes. In the present study, the GPCs were treated with melatonin at either physiological or pharmacological concentrations. It is interesting to note that in the study conducted by Zhong et al. [26], melatonin significantly inhibited the proliferation and differentiation of VBGP chondrocytes at high dosages (10 and 100 µg/mL, or 43 and 430 µM) but not at low dosages (0.1 and 1 µg/mL, or 0.43 and 4.3 µM), while in the present study, melatonin at 1 nM and 0.1 µM concentrations demonstrated an inhibitory effect on both proliferation and differentiation of human GPCs. These differences might be due to the different study protocols employed. In the study performed by Zhong et al. [26], the chondrocytes were treated with melatonin for 24 h only, while in our study the GPCs were cultured with melatonin for 72 h in view of the doubling time of GPCs being around 48 h in our study (data not shown).
Figure 2: Effect of melatonin on the proliferation and gene expression of collagen type X (COLX) and alkaline phosphatase (ALP) in cultured GPCs from control subjects. Data represent mean ± standard deviation (n = 10). The cell viability at the melatonin (MLT) concentration of 1, 100 nM and 10 µM was 82.2% ± 12.3%, 86.4% ± 11.0% and 83.8% ± 9.1% of that in the vehicle group, respectively. Significant differences were retrieved by one-sample t-test (* p < 0.05, ** p < 0.01) (a). The expression of COLX (b) and ALP (c) was also reduced by melatonin, especially at high concentrations. However, these inhibitory effects were reversed in the presence of 4-P-PDOT (10 µM).
One of the signaling pathways regulating chondrocyte proliferation and differentiation is mediated through G proteins. The activation of the Gs alpha-subunit stimulates the cAMP-protein kinase A pathway [29], subsequently resulting in phosphorylation of SOX9 and expression of Col2α1, which leads the chondrocytes to continue to proliferate without going into the normal differentiation phase [30,31]. On the other hand, it has been shown that melatonin, acting through the high-affinity GPCR melatonin receptors MT1 and MT2, activates the Gi alpha-subunit and, in turn, inhibits cAMP accumulation [32]. It is likely that in the current study, melatonin exerted its action by activating membrane receptors to inhibit cAMP accumulation. This was supported by the identification of the expression of the MT2 receptor in human GPCs, and the finding that the inhibitory effects of melatonin on both proliferation and differentiation of GPCs in control subjects could be reversed by 4-P-PDOT. Similar findings on rat VBGP have been reported [26]. Moreover, the hypothesized signaling pathway of melatonin was further supported by the finding of reduced protein expression of Sox9 in VBGP following melatonin treatment [26].
Lack of Response of GPCs to Melatonin Treatment in Both Proliferation and Differentiation in AIS Patients
In patients with AIS, the effect of melatonin on the proliferation of GPCs was assessed: the percentage of viable GPCs was 95.3% ± 10.3%, 96.0% ± 9.1% and 100.0% ± 12.3% of that in the vehicle group at the melatonin concentrations of 1, 100 nM and 10 µM, respectively. No significant difference was found between the vehicle and melatonin treatments (Figure 3). Furthermore, the expression of ALP and COLX was not affected in the presence of the different dosages of melatonin.
In AIS, the lack of response of GPCs in both proliferation and differentiation to melatonin might be attributed to derangements in the signaling pathway of melatonin. A systemic melatonin signaling pathway dysfunction has been described by Moreau et al. [17]. The dysfunction was shown to be related to abnormalities in the phosphorylation of Gi protein [16], suggesting a switching of MT2 receptor coupling from the Gi alpha-subunit to the Gs alpha-subunit [18]. MT2 gene polymorphism has been reported to be statistically associated with the occurrence of AIS, suggesting that MT2 might be a susceptibility gene [19]. In the present investigation, the expression of MT2 receptor in GPCs from AIS and control subjects was demonstrated by immunofluorescence, but there were no significant qualitative differences between AIS and control subjects. However, another study indicated that it was likely that the mRNA expression of MT2 receptor was significantly reduced in GPCs from AIS when compared with control subjects [33]. In osteoblasts from girls with AIS, an abnormal expression of MT2 receptor has been observed, and the expression was not demonstrable in four out of the eleven girls [25]. Furthermore, AIS patients with a low level of expression of MT2 receptor in osteoblasts showed a longer arm span than those with a normal expression level of MT2 receptor [34]. Hence, a comprehensive investigation on the MT2 receptor, including the expression of RNA and protein, the structure of DNA and protein, and the coupling between MT2 receptor and Gi/Gs proteins, should be carried out in GPCs from AIS and control subjects to facilitate further understanding of the abnormalities in the MT2 receptor.
In a histomorphometric study on the longitudinal bone growth in pinealectomized chickens, an increased number of hypertrophic and proliferative chondrocytes was observed compared to the control [28]. Although longitudinal growth was not measured due to the multiple labeling techniques used, the authors believed that the enlarged endochondral bone coverage could further minimize metaphyseal bone production, leading to a rapid and marked loss of cancellous bone volume, and this would accelerate bone elongation economically [28]. The lack of response of GPCs to the action of melatonin in regulating proliferation and differentiation could be linked to the abnormal endochondral ossification, which affects skeletal growth in AIS. Abnormal endochondral ossification has been speculated in patients with AIS [4,7,8] and thought to be a contributing factor in the etiopathogenesis of the disease. The impaired ability of GPCs in AIS patients to respond to the inhibitory effect of melatonin on chondrocyte proliferation and differentiation found in the current study indicates a role of melatonin in the regulation and modulation of endochondral ossification. Without the normal response to the inhibitory action of melatonin, the GPCs were speculated to proliferate. This, in turn, affects the growth activity as suggested in the previous study [9].
In addition to its effect on chondrogenesis, melatonin has shown a significant effect on osteogenesis. Melatonin has been found to induce osteogenesis of human mesenchymal stem cells [35], and the enhanced alkaline phosphatase activity in osteogenic medium could be reduced by the presence of 4-P-PDOT, the selective MT2 receptor antagonist [36]. Melatonin could enhance the proliferation of osteoblasts in vitro at pharmacological doses [37] but not at physiological doses [38,39]. In addition, Satomura et al. [37] demonstrated the dose-dependent stimulation of alkaline phosphatase activity in human osteoblasts by melatonin. Furthermore, melatonin enhanced mineralized matrix formation in vivo [37,40]. In patients with AIS, the abnormal response of osteoblasts to melatonin has also been reported recently. Man et al. [24] noted that melatonin failed to promote both proliferation and differentiation of osteoblasts from AIS subjects, which might contribute to the low bone mineral density in these patients [41][42][43][44]. This abnormality could be attributed to the abnormal expression of MT2 receptor [25,34]. Taking these observations together, it is likely that melatonin signaling pathway dysfunction could be a systemic problem that plays an important role in the abnormal systemic bone growth in AIS subjects [4,42,43]. This was also supported by the finding of the disrupted cartilage matrix and the rapid and marked loss of cancellous bone volume in chickens which had undergone pinealectomy [28].
A limitation of the present study is that the sites at which the cartilage specimens were collected varied because of the small number of subjects who could be recruited and the ethical problem and difficulty encountered in harvesting growth plate cartilages. By isolating GPCs from growth plates and culturing the cells in a standard environment, however, the environmental factors affecting the biology of chondrocytes could be minimized. Although functional abnormalities of the melatonin signaling pathway in regulating the GPCs from AIS patients were detected, the underlying pathogenesis involving the MT2 receptor and Gi protein has not been fully elucidated. As the exact mechanism of the dysfunction in melatonin signaling cannot be fully uncovered in this study, further investigations on the role and mechanism of MT2 receptor in AIS patients are warranted. Furthermore, although comparisons on the proliferation, mRNA expression of MT2 receptor, COLX and ALP of GPCs between AIS and control subjects would also advance our understanding of the endochondral ossification activity in AIS patients, this was not carried out in the present study. We had the concern that in vitro culture of GPCs might lead to changes in the cell viability and mRNA expression of COLX and ALP from the original status in cartilage. Hence, it would be best to extract RNA and protein from the cartilage samples to carry out such assays.
Recruitment of Subjects
Twenty AIS patients (seventeen girls and three boys) aged 13.8 ± 3.4 years, with a severe curvature of the spine as evidenced by Cobb angles ranging from 45° to 95°, and undergoing corrective spinal surgery with either anterior spinal fusion (n = 5) or posterior spinal fusion (n = 15), were recruited. Ten non-AIS subjects (six girls and four boys) with a mean age of 13.7 ± 2.8 years, and either developmental dysplasia of the hip, lumbar disc herniation, or trauma, and undergoing other forms of orthopaedic surgery, were recruited as control subjects. Subjects with other forms of spinal deformity, abnormal metabolic or melatonin-related diseases, such as sleeping problems, skin pigment anomalies, and endocrine disorders, were excluded [45]. Standardized growth plate cartilage biopsies were harvested intra-operatively from the vertebral end plate, spinous process or iliac crest apophyses in the AIS group and from the iliac crest apophyses in the control group.
Primary Culture of GPCs
The methods of Lee et al. and Hidvegi et al. [46,47] were followed with minor modifications. Growth plate biopsies were cleaned from the attached connective tissue under sterile conditions. After cutting into small pieces (around 1 × 1 mm), the cartilage samples were subjected to a series of enzyme digestions to isolate GPCs from the cartilage matrix [46,47]. The pieces of cartilage were incubated in the presence of trypsin (1 mg/mL, 10 mL/g cartilage) for 10 min, hyaluronidase (1 mg/mL, 10 mL/g cartilage) for 30 min, and collagenase II (0.5 mg/mL, 20 mL/g cartilage) for 4-6 h. GPCs were then cultured, at 37 °C in a humidified atmosphere of 5% CO2, in a monolayer in DMEM supplemented with 10% FBS and 100 U/mL PSN. The medium was refreshed every 2-3 days.
The morphology of GPCs in culture was observed under an inverted microscope and captured with the equipped camera. Chondrocytes from both AIS patients and control subjects displayed a fibroblast-like and flattened appearance when adhered to the bottom of the culture flask. After culture for another two weeks, the cells assumed a more polygonal shape. There was no discernible difference between GPCs from AIS and control subjects. GPCs at the end of the second passage were harvested and used for further assays.
Expression of Melatonin Membrane MT2 Receptor
The expression of MT2 receptor on GPCs was determined by immunofluorescent staining. GPCs were seeded on cover slips coated with poly-L-lysine and then cultured overnight for attachment. After washing in phosphate-buffered saline (PBS), the samples were fixed in acetone for 30 min. After eliminating non-specific binding by exposure to 5% goat serum in 1% BSA/PBS at room temperature for 20 min, the samples were incubated overnight with the primary antibody at 4 °C in a moist chamber. Rabbit anti-MT2 affinity-purified polyclonal antibodies (1 mg/mL) were diluted in blocking buffer (1% BSA/PBS) (1:100, v:v). Then samples were incubated in the presence of Alexa Fluor-488 goat anti-rabbit IgG diluted with blocking buffer (1:100, v:v) at room temperature for 1 h. The cover slips were stained with DAPI in VECTASHIELD mounting medium and mounted on glass slides with nail polish. Immunofluorescence was observed with a fluorescence microscope (LEICA DM RXA2, Leica Microsystems, Wetzlar, Germany).
To quantify the expression of MT2 receptor, GPCs were harvested and then RNA was extracted by using the RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Sample mRNA was quantified by measuring the optical density at 260 and 280 nm. Extracted total RNA (1 µg) was reverse-transcribed with Moloney murine leukemia virus reverse transcriptase (Promega) as detailed in the manufacturer's guidelines. qRT-PCR was performed in a total volume of 10 µL containing 5 µL diluted template DNA (1:50 dilution), 500 nM sense and antisense primers, 1 µL of SYBR Green master mix (Roche Applied Science), and 20 mM MgCl2. PCR amplification and quantification were performed in a Light Cycler Carousel-Based system (Roche Applied Science) as follows: denaturation for 10 min at 95 °C, followed by 45 amplification cycles (15 s at 95 °C for denaturation, 5 s for annealing at 56-58 °C, and 15 s for extension at 72 °C). The amount of RNA was calculated from the measured threshold cycles (Ct) by employing a standard curve. The data were normalized by determination of the amount of glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Sequences of the sense and antisense primers for MT2 receptor were 5'-CTCCCTATCGCTGTCGTGTC-3' and 5'-ATCTGGGGAGCCATTTCTTG-3', respectively [33]. The relative quantification values were expressed in log to obtain a normal distribution for analysis.
Effect of Melatonin on Proliferation of GPCs
Isolated GPCs were seeded in a 96-well plate at a density of 10,000 cells/cm2 in culture medium. After incubation overnight, the cells were cultured in serum-free DMEM medium for 24 h. The GPCs were then cultured with DMEM containing 1% FBS, together with vehicle alone (DMSO, 10 µM) or in the presence of various concentrations of melatonin (1, 100 nM, and 10 µM). The final concentration of the vehicle was minimized to 10 µM. To further evaluate whether the effect of melatonin on the proliferation of GPCs was mediated by the MT2 receptor, the selective MT2 receptor antagonist 4-P-PDOT (10 µM, Tocris Cookson Inc., Ellisville, MO, USA) [48] was added together with the various concentrations of melatonin. The treatment was refreshed daily for three days. Then the cell viability was determined by the MTT assay in triplicate. Absorbance was measured by using a microplate reader at the wavelength of 570 nm with a reference wavelength of 630 nm.
Effect of Melatonin on Differentiation of GPCs
The GPCs were cultured until confluence, followed by switching to chondrocyte differentiation medium (MEM alpha medium supplied with 10% FBS and 1 mM glycerol-2-glycerophosphate, 100 U/mL PSN and 50 µg/mL ascorbic acid) [49]. Then the GPCs were treated with either vehicle or melatonin (1, 100 nM and 10 µM), in the presence or absence of 4-P-PDOT for 14 days. The culture medium and melatonin were refreshed every other day. At the end of the treatment, the mRNA expression of COLX and ALP was determined by qRT-PCR.
GPCs were harvested and then RNA was extracted by using the RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Sample mRNAs were quantified by measuring the optical density at 260 and 280 nm. Extracted total RNA (1 µg) was reverse-transcribed with Moloney murine leukemia virus reverse transcriptase (Promega) as detailed in the manufacturer's guidelines.
Statistical Analysis
All data were expressed as mean ± standard deviation (SD). SPSS/PC 13.0 (SPSS Inc., Chicago, IL, USA) was used for all statistical computations. Comparisons between vehicle and melatonin treatments were analyzed by one-sample t-test. Comparisons between AIS and control were made using independent samples t-test. A p-value smaller than 0.05 was considered statistically significant.
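Because treated-group values were expressed as percentages of the vehicle group, the one-sample t-test amounts to testing whether the mean differs from 100%. A minimal sketch of this computation with SciPy, using made-up numbers rather than the study data, is shown below.

# One-sample t-test of vehicle-normalized viability against 100% (toy data).
import numpy as np
from scipy import stats

viability = np.array([82.0, 95.1, 78.4, 88.9, 70.3, 85.6, 92.2, 80.7, 76.5, 84.0])
t, p = stats.ttest_1samp(viability, popmean=100.0)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 indicates a difference from vehicle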
Conclusions
The present study represents the first report on the effect of melatonin on the proliferation and differentiation of GPCs from AIS and control subjects, signifying that melatonin is likely to be involved in the regulation and modulation of growth plate chondrocyte biology, which is important for endochondral ossification. The finding of the abnormal response of GPCs to melatonin in both proliferation and differentiation in AIS subjects provides further evidence supporting the presence of an abnormal systemic signaling pathway of melatonin in AIS patients, which might be linked to the abnormal endochondral ossification and skeletal growth. | 2016-03-22T00:56:01.885Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "ae1282b59dee39f8a088aa4bbe4fec75858cef04",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/15/9/17100/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae1282b59dee39f8a088aa4bbe4fec75858cef04",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
248507402 | pes2o/s2orc | v3-fos-license | NR4A1 inhibits the epithelial–mesenchymal transition of hepatic stellate cells: Involvement of TGF-β–Smad2/3/4–ZEB signaling
Abstract This study aimed to examine whether nuclear receptor 4a1 (NR4A1) is involved in inhibiting hepatic stellate cell (HSC) activation and liver fibrosis through the epithelial–mesenchymal transition (EMT). HSC-T6 cells were divided into the control group, the acetaldehyde (200 μM, an EMT activator) group, and the NR4A1 activation group (Cytosporone B; 1 μM). The expression levels of the epithelial marker E-cadherin, the mesenchymal markers fibronectin (FN), vimentin, smooth muscle alpha-actin (α-SMA), and fibroblast-specific protein 1 (FSP-1), and the components of the transforming growth factor (TGF)-β pathway were detected by real-time polymerase chain reaction and western blotting. Compared with the control group, E-cadherin in the acetaldehyde group was downregulated, whereas FN, FSP-1, vimentin, α-SMA, and COL1A1/COL1A2 were upregulated (P < 0.05). Compared with the acetaldehyde group, NR4A1 agonist upregulated E-cadherin and downregulated FN, FSP-1, vimentin, α-SMA, and COL1A1/COL1A2 (P < 0.05). After acetaldehyde stimulation, TGF-β, Smad2/3/4, and zinc finger E-box-binding homeobox (ZEB) were upregulated, while Smad7 mRNA levels were downregulated (all P < 0.05). Compared with acetaldehyde alone, NR4A1 agonist increased Smad7 mRNA levels and reduced TGF-β, Smad2/3/4, and ZEB mRNA levels (all P < 0.05). NR4A1 activation suppresses acetaldehyde-induced EMT, as shown by epithelial and mesenchymal marker expression. The inhibition of the TGF-β–Smad2/3/4–ZEB signaling during HSC activation might be involved.
Introduction
Liver fibrosis is a wound-healing response to chronic liver injury caused by viral hepatitis, alcohol, metabolic diseases, autoimmune conditions, and cholestatic liver diseases. The most common etiologies are alcoholic liver disease, non-alcoholic fatty liver disease, chronic viral hepatitis, genetic conditions (α-1 antitrypsin deficiency, hereditary hemochromatosis, and Wilson disease), autoimmune diseases (primary biliary cirrhosis, primary sclerosing cholangitis, and autoimmune hepatitis), and drugs. Sustained liver damage leads to fibrosis, liver failure, and even hepatocellular carcinoma [1,2]. In the United States, the prevalence of chronic liver diseases was 1.8% in 2017, with 12.8 deaths per 100,000 population [3,4].
The epithelial-mesenchymal transition (EMT) is a transition of epithelial cells to a mesenchymal state, which is a reversible process [5]. The EMT plays an essential role in tissue development, wound healing, fibrosis, and cancer progression [6][7][8]. Liver fibrosis is caused by excess extracellular matrix (ECM) production by myofibroblasts [9]. Activation of hepatic stellate cells (HSCs) is a key event in the formation of liver fibrosis since it is the major source of myofibroblasts [10]. Several studies indicated that HSCs undergo EMT during their activation [7,[11][12][13] to participate in liver fibrosis [14][15][16]. The main pathways involved in the EMT in liver fibrosis are the Hedgehog signaling pathway, transforming growth factor (TGF)-β signaling pathway, Notch signaling, and extracellular signal-regulated kinase (ERK) signaling pathway [7,17]. Excessive Hedgehog activation after liver injury participates in EMT and liver fibrosis [7,17]. TGF-β signaling is involved in ECM and collagen production by HSCs [7,17]. The ERK pathway plays a crucial role in cell growth and differentiation and represses EMT [17]. The Notch pathway is involved in cell differentiation [7]. Importantly, inhibition of the EMT of HSCs can suppress the activation of the HSCs, thereby alleviating the progression of hepatic fibrosis [11,18,19]. Hence, inhibition of the EMT is a promising strategy for reversing fibrosis.
Nuclear receptor 4a1 (NR4A1, also known as Nur77, TR3, or NGFIB) is a member of the NR4A family of nuclear orphan receptors. NR4A1 plays diverse and important regulatory roles in glucose and lipid metabolism and inflammatory responses [20,21]. NR4A1 is involved in the EMT in tumor metastasis and migration [22][23][24]. The loss of NR4A1 inhibits TGF-β-induced EMT and metastasis [22]. Furthermore, Palumbo-Zerr et al. [25] demonstrated that NR4A1 inhibits TGF-β signaling and can suppress experimental lung and liver fibrosis.
Additional recent studies have also demonstrated that NR4A1 inhibits TGF-β signaling [26][27][28]. Therefore, NR4A1 might represent a promising antifibrotic target, but since the EMT mechanisms associated with liver fibrosis and those associated with cancer might be different, it remains unclear whether NR4A1 inhibits HSC activation and liver fibrosis by modulating the EMT. This study aimed to examine whether NR4A1 is involved in inhibiting HSC activation and liver fibrosis through the EMT.
Cells
The HSC-T6 cells were purchased from Shanghai Tongpai Technology Co., Ltd.
Quantitative real-time PCR
Total RNA was extracted from the cells using TRIzol (MiniBEST Universal RNA Extraction Kit; Takara Bio, Otsu, Japan). Gene expression was measured by the quantitative real-time polymerase chain reaction (qRT-PCR) using the SYBR Green Real-time PCR Master Mix (Takara, Otsu, Japan) performed under standard conditions with an ABI 7900 Sequence Detection System (Applied Biosystems, Foster City, CA, USA). All primers were from Takara. The primer sequences are listed in Table 1.
Calculation of the gene expression
First, the average Ct value of each sample was calculated. The ΔCt of the target gene in each sample was then obtained relative to the internal reference gene, giving the relative expression of the target gene with respect to the reference gene. Finally, the ΔΔCt of each gene was calculated relative to the reference (control) sample group and expressed as a fold change.
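This description matches the widely used 2^(-ΔΔCt) (Livak) relative quantification method. A minimal sketch of that arithmetic is shown below; the function name and the Ct values are invented for illustration and are not data from this study.

# Relative expression by the 2^(-ddCt) method (illustrative Ct values only).
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # same for the control group
    dd_ct = d_ct_sample - d_ct_control            # normalize to control group
    return 2.0 ** (-dd_ct)                        # fold change vs. control

# Hypothetical Ct values for a downregulated target gene:
print(fold_change(ct_target=26.1, ct_ref=18.0,
                  ct_target_ctrl=24.5, ct_ref_ctrl=18.2))  # < 1: downregulated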
Reference gene selection
GAPDH was selected as the reference gene because it is a conserved housekeeping gene that has been used as a stable internal reference in previous studies [29-34].
Statistical analysis
The data are presented as mean ± standard deviation and were analyzed using a one-way analysis of variance with Fisher's least significant difference post hoc test. P-values <0.05 were considered statistically significant (*P < 0.05 and **P < 0.01).
NR4A1 regulates the expression of EMT-related genes to prevent EMT in HSC-T6 cells
To explore the regulatory effects of NR4A1 on the EMT of HSC-T6 cells, we first used acetaldehyde to stimulate HSC-T6 cells and measured the changes in the expression of EMT-related genes. Compared with the control group, the mRNA levels of E-cadherin in the acetaldehyde group were significantly downregulated, whereas those of FN, fibroblast-specific protein 1 (FSP-1), and vimentin were significantly upregulated. In addition, the mRNA levels of HSC activation markers, including smooth muscle alpha-actin (α-SMA) and COL1A1/COL1A2, were significantly upregulated in the acetaldehyde group compared with the control group. When the NR4A1 agonist (Csn-B) was used with acetaldehyde, compared with the acetaldehyde group, the mRNA levels of E-cadherin in the NR4A1 activation group were significantly upregulated, while those of FN, FSP-1, vimentin, α-SMA, and collagen genes (COL1A1/COL1A2) were significantly downregulated ( Figure 1). Similar changes were observed at the protein level ( Figure 2). These results indicated that the acetaldehyde model induces EMT in HSC-T6 cells and that NR4A1 is involved in regulating the expression of EMT-related genes, probably preventing EMT in the cells.
Discussion
Activation of HSCs is a key event in liver fibrosis [10].
HSCs undergo EMT during activation [7,[11][12][13]. NR4A1 inhibits TGF-β signaling, which can suppress experimental lung and liver fibrosis [25], but whether NR4A1 inhibits HSC activation and liver fibrosis through the EMT is unknown. Therefore, this study aimed to examine whether NR4A1 is involved in inhibiting HSC activation and liver fibrosis through the EMT. The results suggest that NR4A1 activation suppresses acetaldehyde-induced EMT in HSCs, as shown by increased epithelial and decreased mesenchymal marker expression. The inhibition of the TGF-β-Smad2/3/4-ZEB signaling during HSC activation might be involved. Liver fibrosis is a wound-healing response to various forms of liver damage, such as hepatitis, alcohol, drugs, metabolic diseases, biliary injury, and toxins. Importantly, HSC activation is a key event in liver fibrosis [10], and wounding and repair are dynamic processes that include matrix synthesis, deposition, and degradation [35]. HSC activation involves EMT-like changes and participates in fibrogenesis [11,12]. Consistent with these previous studies, the present study indicated that the acetaldehyde-induced activation of HSCs upregulated the expression levels of FN, FSP-1, vimentin, and α-SMA (all myofibroblastic markers) while downregulating those of E-cadherin (an epithelial marker). Hence, these findings confirm that HSC activation involves the EMT.

Figure 3: Effects of NR4A1 on the protein levels of Smad2/3/4, Smad7, and ZEB. Protein levels of Smad2/3/4 and ZEB in the acetaldehyde group were upregulated, while that of Smad7 was downregulated. The expression of these proteins was reversed in the NR4A1 activation group. The proteins were analyzed by western blotting. β-Actin was used as an internal control. *P < 0.05, **P < 0.01 vs control, #P < 0.05, ##P < 0.01 vs acetaldehyde. n = 3/group.

Figure 4: mRNA levels of the components of the TGF-β-Smad-ZEB signal pathway in HSC-T6 cells. The mRNA levels of TGF-β, Smad2/3/4, and ZEB were significantly upregulated, while Smad7 mRNA levels were significantly downregulated. The mRNA levels of Smad7 in the NR4A1 activation group were significantly upregulated, while those of TGF-β, Smad2/3/4, and ZEB were significantly downregulated. *P < 0.05, **P < 0.01 vs control, #P < 0.05, ##P < 0.01 vs acetaldehyde. n = 3/group.

Several previous studies have shown that NR4A1 can promote tumor cells' EMT [22][23][24]. Still, it remains unclear whether the role of NR4A1 in EMT in cancer is the same as in EMT in liver fibrosis and HSC activation. Csn-B is an NR4A1 agonist that enhances the transcriptional activity of NR4A1. The present study showed that Csn-B upregulated the expression levels of epithelial markers and decreased the expression levels of mesenchymal markers in acetaldehyde-induced EMT in HSCs, compared to these levels in the presence of acetaldehyde alone. These findings suggest that NR4A1 activation suppressed the EMT during acetaldehyde-induced activation of HSCs. This is contrary to the findings in tumor cells, which is not surprising since the pathogeneses and cellular characteristics of tumors and fibrosis differ. In the present study, the mechanism of NR4A1-mediated inhibition of the EMT of HSCs was explored. Palumbo-Zerr et al. [25] found that NR4A1 is an endogenous inhibitor of TGF-β signaling and inhibits skin, lung, liver, and kidney fibrosis. TGF-β signaling has also been identified as one of the predominant inducers of the EMT [36,37], and the inhibition of TβRI activity can block TGF-β-induced EMT.
TGF-β binds to TβR receptors to form a complex, activating Smad2/Smad3 to interact with Smad4 to form trimeric Smad complexes. The TGF-β/Smad signaling pathway activates the expression of EMT transcription factors and initiates the EMT [6,38]. Smad complexes interact with ZEB1 and ZEB2 to mediate TGF-β-regulated gene expression. ZEB is one of the key transcription factors of the EMT, and its functions are finely regulated at the transcriptional, translational, and post-translational levels. ZEB expression is activated early during the EMT and plays a central role in the development of both fibrosis and cancer. The TGF-β-Smad-ZEB pathway is involved in the EMT [6]. In the present study, we found that acetaldehyde-mediated stimulation of HSCs significantly upregulated TGF-β, Smad2/3/4, and ZEB levels and significantly downregulated Smad7 levels. Furthermore, NR4A1 activation in the presence of acetaldehyde resulted in higher Smad7 levels and lower TGF-β, Smad2/3/4, and ZEB expression levels than acetaldehyde treatment alone. Previous studies confirmed that Smad7 inhibits TGF-β signaling [39]. The present study illustrates that NR4A1 can suppress the EMT by inhibiting TGF-β-Smad2/3/4-ZEB signaling. NR4A1 might negatively regulate TGF-β signaling, at least in part, by promoting SMAD7 expression. NR4A1 activates TGF-β signaling and promotes EMT in tumor cells [22,23], whereas the present findings in HSCs differ from those in tumor cells. Interestingly, NR4A1 exhibits both tumor-suppressive and pro-oncogenic effects in cancer development [40][41][42]. NR4A1 translocation from the nucleus to the cytoplasm in colon cancer cells may initiate apoptotic cascades [43]. In contrast, NR4A1 exhibits anti-apoptotic effects when it is not exported from the nucleus [41,44]. TGF-β-mediated induction of the EMT is dependent on the nuclear export of NR4A1 in breast cancer cells, and NR4A1 antagonists inhibit the nuclear export of NR4A1 and thereby block the TGF-β-induced EMT [23]. NR4A1 phosphorylation decreases the transcriptional activity of NR4A1, and pNR4A1 is strongly associated with hepatic/lung fibrosis and is mainly located in the cytoplasm, whereas pan-NR4A1 localizes in both nuclear and cytoplasmic compartments [25]. Collectively, these studies suggest that the effects of NR4A1 depend on its subcellular localization and the cell type in which it is signaling.
This study has limitations. The subcellular localization and expression of NR4A1 protein were not examined, which would have added value to this study and should be tested in future experiments. In addition, the pathways involved in EMT and fibrosis were examined only superficially. Only an agonist of NR4A1 was used, and future studies should also use an antagonist. In addition, agonists/antagonists and silencing/overexpression of TGF-β and other proteins involved in that pathway should be used to determine the contribution of the TGF-β pathway to the EMT in liver fibrosis. Moreover, this study used only GAPDH as a reference gene because several previous studies used GAPDH as an internal reference [29][30][31][32][33][34]. However, it is recommended to use at least two reference genes to obtain more reliable results, and therefore, other reference genes will be considered for future studies.
In summary, this study indicates that NR4A1 suppresses the EMT during acetaldehyde-induced HSC activation. NR4A1-mediated inhibition of the EMT of HSCs involves the suppression of TGF-β-SMAD2/3/4-ZEB signaling and increased SMAD7 expression, but confirmation is needed. Hence, the findings suggest that NR4A1 plays an important role during HSC activation and that NR4A1 might be a promising therapeutic target for treating liver fibrosis.
Conflict of interest:
The authors state no conflict of interest.
Data availability statement: The datasets generated during and/or analyzed during this study are available from the corresponding author on reasonable request. | 2022-05-04T13:17:21.086Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "1a8b392dd72fbef904ffba1dc81f08da3de30e8f",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/biol-2022-0047/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0012b31ec5ff48195d6eee201be0255b86672616",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
224805617 | pes2o/s2orc | v3-fos-license | Ethical Framework for Assessing Manual and Digital Contact Tracing for COVID-19
The COVID-19 pandemic has challenged the traditional public health balance between benefiting the good of the community through contact tracing and restricting individual liberty. The authors analyze important technical and ethical issues regarding new smartphone apps that facilitate contact tracing and exposure notification. Then they present a framework for assessing contact tracing—whether manual or digital.
CONTACT TRACING
State laws authorize contact tracing by public health officials, with safeguards. It is routinely carried out, for example, in tuberculosis cases and during measles outbreaks (4). With COVID-19, contact tracing aims to notify all persons who were within 6 feet of an infected person for at least 10 minutes during the 14 days before diagnosis. Although public health laws allow "mandatory" contact tracing, in effect contact tracing is voluntary because people who do not want to cooperate can decline to talk or say they do not recall contacts or locations (5).
Manual Contact Tracing
In manual contact tracing, public health staff contact persons exposed to an infected person and ask them to be tested and to self-quarantine to prevent further transmission. Manual tracing has not been successful with COVID-19 because of the very large numbers of infected persons, the downsizing of public health departments, the shortage of experienced contact tracing staff, mistrust of government, and lack of cooperation by contacts (6,7). Contact tracers in different U.S. jurisdictions completed interviews with 64% to 71% of COVID-19 cases; of these, 53% to 70% reported no names of persons whom they might have exposed (6,8,9).
What might be the reasons for such low cooperation? Some infected persons may worry that they will lose their jobs or be stigmatized. Students may fear disciplinary action if they admit to attending a high-risk event. Furthermore, there may be insufficient numbers of appropriate bilingual contact tracers to address the disproportionately high number of infected persons who do not speak English at home; language and cultural barriers greatly hamper building rapport (10). Some people are mistrustful, fearing that "contact tracers" are actually police, Federal Bureau of Investigation, or U.S. Immigration and Customs Enforcement agents (7,11). Some people fear that contact tracing will lead to government surveillance, detention camps, and taking away their guns (2,3). Contact tracers have been threatened with violence and called "Gestapo" (2). Fox News host Laura Ingraham compared being contacted by a contact tracer to being groped by a Transportation Security Administration worker (3).
Smartphone Exposure Notification Apps
Smartphone apps might facilitate contact tracing by providing a record of the user's recent close exposures to the phones of persons later found to be infected with COVID-19. Unlike manual contact tracing, infected persons then do not need to recall their movements during their contagious period or know the identities of people close by. These apps can use Bluetooth, GPS, or WiFi technology. Table 1 compares important features of these technological approaches. Some apps are designed to prioritize privacy and require explicit consent.
Bluetooth-based Apps
Using Bluetooth technology, Google and Apple have developed a joint privacy-preserving app, called Exposure Notifications Express, that allows phones to detect nearby Android and iOS phones (16). The app collects and stores on the user's phone randomly generated encrypted Bluetooth keys that contain no identifying information about the other phones or the location or time of the exposure. If the phone's owner later contracts COVID-19, these Bluetooth keys can be used to notify exposed phones electronically without revealing the identity of any of the parties.
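To make the key-matching idea concrete, the toy sketch below shows a random daily key, rolling identifiers derived from it for Bluetooth broadcast, and local matching against keys voluntarily published by diagnosed users. This is a deliberately simplified illustration of the decentralized design, not the actual Apple-Google Exposure Notification cryptography.

# Toy sketch of decentralized exposure-key matching (not the real protocol).
import hmac, hashlib, secrets

def daily_key():
    return secrets.token_bytes(16)  # random key, kept only on the device

def rolling_id(key, interval):
    # Identifier broadcast over Bluetooth; unlinkable without the daily key.
    return hmac.new(key, str(interval).encode(), hashlib.sha256).digest()[:16]

# Phone A broadcasts rolling identifiers; phone B records those it hears nearby.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, i) for i in range(90, 96)}

# A tests positive and consents to publish the daily key; B matches locally,
# so neither party's identity nor location ever leaves the devices.
published_keys = [key_a]
exposed = any(rolling_id(k, i) in heard_by_b
              for k in published_keys
              for i in range(144))  # 144 ten-minute intervals per day
print("exposure detected:", exposed)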
Exposure Notifications Express is an app that allows public health authorities to utilize the privacy-preserving platform developed by Apple-Google without having to spend substantial effort and cost to develop their own exposure notification apps (17,18). Some other Bluetooth exposure notification apps implement a similar privacy-preserving approach as Apple-Google, but other apps that combine Bluetooth with other approaches, such as those developed by Citizen and Everbright, do not have strong privacy and consent protections (19, 20).
GPS-Assisted Apps
In the United States, cell phones' locations are continually tracked by service providers and apps from retailers and map services. Apps that log a user's locations on his or her phone can facilitate manual contact tracing by helping users who later become infected with COVID-19 recall where they have been and potentially whom they were near. Some state or local governments and private companies have developed exposure notification apps using GPS technology (21,22).
WiFi-based Location Tracking
Universities routinely track the locations of students, employees, and guests who enable WiFi access on their campus. Location is triangulated on the basis of the strength of signals that users' devices send to wireless access points to connect to the WiFi network. Each user is identified by a unique access address. A few universities are developing a contact tracing system based on this location information (23, 24). Some companies are also developing WiFi-based tracking programs for business clients. Finally, some universities and companies are developing apps that use more than one of the above technologies (19 -21, 24, 25).
Context of Exposure
No app can assess whether the exposure risk is lower because persons were wearing masks. Bluetooth-based apps cannot determine whether there was a floor or wall separating the persons. Although apps based on GPS provide locations and times of exposure, they may not identify physical separation and different rooms, floors, or offices at a GPS location. For WiFi-based location triangulation, the ability to distinguish different rooms or floors in a building depends on the density and configuration of the wireless access nodes.
Privacy Breaches
Serious privacy breaches have been identified in many COVID apps, all contrary to the apps' stated privacy policies. The North Dakota app Care19 shared information with a digital advertising firm, including the unique advertising identifier that allows targeted advertisements in other apps (26). Earlier, Google collected location data with its "privacy-preserving" contact-tracing application programming interface (27). Furthermore, technology and data companies have a history of violating their own privacy and consent policies, including sharing data beyond the scope of their policies, as Facebook did in the Cambridge Analytica scandal (28,29). Even decentralized exposure notification apps cannot eliminate the possibility of privacy breaches. For example, a malicious party running accounts on multiple phones can deduce the identity of a case by triangulating the notifications (30).
Coordination Among Jurisdictions
Infected persons living or working near borders and travelers may expose persons from several jurisdictions, presenting a challenge for both manual and digital contact tracing. The European Union has a pilot program to allow users of the Apple-Google app to report a positive test and receive alerts if they travel across borders (31). The U.S. Association of Public Health Laboratories is setting up a secure national server to host the deidentified Bluetooth keys from the Apple-Google app that infected persons voluntarily share; this national server will facilitate interoperability across states (32).
Low Uptake
In a national survey, only 42% of U.S. residents said they would download and use a mobile contact tracing app (33). Actual downloading rates are shown in Table 1 (12-15). Although maximum effectiveness requires about 60% of the population to be using the same app, lower levels of uptake may still provide some public health benefit (34).
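A rough calculation shows why the uptake threshold matters: under the simplifying assumption of random mixing, an exposure is logged only if both parties to a contact run the app, so the fraction of contacts covered scales roughly with the square of uptake:

```latex
p = 0.42 \;\Rightarrow\; p^{2} \approx 0.18, \qquad
p = 0.60 \;\Rightarrow\; p^{2} = 0.36
```

Even at the 60% target, then, only about a third of contacts would be captured, consistent with the point that partial uptake yields partial, not proportional, benefit.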
A FRAMEWORK FOR BALANCING PROTECTING PUBLIC HEALTH AND RESPECTING PRIVACY
The traditional ethical and policy framework sets criteria that justify liberty-limiting public health interventions, including contact tracing, to benefit the community by limiting spread of a disease (1, 4, 35-37). Balancing these countervailing considerations depends on a jurisdiction's attitudes toward privacy, trust in government institutions and technology companies, and the curve of infections. A public health measure that restricts individual liberty is appropriate if the answer to all of the following questions is "yes" (Table 2) (1, 35-37); if so, public health officials would be justified in implementing a digital or manual contact tracing program.
Is the risk to public health serious and likely?
The COVID-19 pandemic is clearly severe, owing to the large number of cases and excess deaths.
Is the public health intervention effective for diminishing the public health risk?
In apps introduced before the new Apple-Google platform, errors were frequently reported, and several apps needed to be withdrawn to make corrections (26,27).
How widely current exposure notification apps will be downloaded, how many positive cases will be reported to the apps, and how many contacts will be notified remain to be seen (6, 33). Apps that prioritize privacy cannot track those notified or how many of these people test positive for COVID-19. Thus, their effectiveness in preventing spread is hard to assess.
Exposure notification is only the first step toward the goal of reducing new COVID-19 infections; testing and quarantine or isolation are also needed. However, in the United States, access to testing and timely return of results still fall far short of the need (38) and undermine the value of exposure notification (39). Moreover, many people exposed to COVID-19 cannot quarantine because they live with others in close quarters or need to continue working to pay for food and rent (40). Some jurisdictions have offered logistic and financial support to exposed persons to help them overcome these challenges (39). Thus, for many reasons, the real-world effectiveness of exposure notification apps is likely to be limited.
Are the risks of the public health intervention acceptable?
To assess the risks of exposure notification apps, the public needs readily available answers to the following questions. "Yes" answers to these questions indicate lower risks (Table 2).
A. Is specific informed and voluntary consent required from the cellphone owner for the app to collect data and to notify potential contacts?
B. Are both the app user and potential contacts anonymous to each other? The app user and contacts may also be anonymous to public health authorities, which heightens privacy but may reduce the public health benefits.
C. Are the collected data the minimum needed to carry out an authorized public health purpose?
D. Is data use restricted to the public health purpose by designated public health officials? Is sharing data with other entities, such as law enforcement, immigration officials, the Department of Homeland Security, and commercial organizations expressly prohibited? Is combining the collected data with other data prohibited, so that individuals cannot be reidentified?
E. Are the data destroyed after a defined period, when they are no longer needed for the public health purpose?
F. Are strong security protections in place and tested?
G. Has the app been tested under field conditions?
Public health departments, app developers, and businesses and universities that implement apps should provide transparent and detailed answers to these questions. In Colorado, basic information originally was not disclosed about how an app was gathering and using cellphone location data. Investigative reporters who obtained such information identified significant privacy concerns (41). Apple and Google have provided detailed information about their app and platform and use open-source programming that facilitates independent review. In contrast, businesses, business consultants, and universities developing proximity detection apps have generally provided little specific information to evaluate these questions (42,43).
Privacy and security should be evaluated and audited by independent cybersecurity firms, to verify answers to these questions (44). Such independent analyses have revealed serious flaws in many COVID apps (26,27). The Apple-Google app does more to minimize these risks than other exposure notification approaches.
Are the benefits and risks of the public health intervention equitably distributed?
Historically, public health measures and contagious disease morbidity and mortality have fallen most heavily on disadvantaged and vulnerable groups, particularly ethnic and racial minorities that are already stigmatized (45). The number of COVID cases is strikingly higher among Black and Latinx persons, and COVID deaths are higher among Black persons (46). Manual contact tracing programs that do not match tracers to infected persons in terms of language, ethnicity, race, and culture have failed to build trust and overcome the disproportionate harms of the pandemic on these groups (10). Moreover, both manual and digital contact tracing in disadvantaged communities may be unsuccessful unless they mitigate contacts' poor access to testing, crowded housing, and financial pressures to continue to work in jobs that require close contact with many other persons (39,40).
Is the intervention the least restrictive alternative for achieving the public health goal?
In the United States and Europe, unlike Asian countries, such as South Korea and Singapore, voluntary and privacy-preserving public health interventions are strongly preferred. Involuntary exposure notification tracking smartphone location would be a significant step beyond current U.S. public health practice and is not being considered by any state or local government. However, some U.S. businesses are requiring employees to use location tracking or exposure notification apps (43). Moreover, some universities are tracking location routinely; one allows people to opt out only by forgoing WiFi access on the campus (23). As previously discussed, "mandatory" contact tracing by public health workers is de facto voluntary (5). Thus, privacy-preserving exposure notification apps coupled with voluntary cooperation of infected persons with public health officials to notify contacts who may not be using the app provides the best combination of public health benefit and protection of privacy.
In conclusion, in the United States, proven public health measures, such as wearing masks, physical distancing, and restricting gatherings in crowded venues, have not been consistently implemented or followed (47). In some areas, these measures, as well as contact tracing, have been resisted as unacceptable infringements on liberty (48).
Many U.S. government leaders have failed to articulate clear, effective, and consistent evidence-based messages on the value of placing some restrictions on liberty and privacy to serve the common good. In contrast, Germany and New Zealand, whose leaders have successfully communicated such messages, have maintained much lower percentages of cases in the population than the United States (49). During the COVID-19 pandemic, many governments around the globe lost public trust. Without trust, the public is unlikely to accept restrictions on liberty and privacy.
In addition to government officials, opinion influencers can urge people to protect others and build trust for public health measures. A positive example is professional basketball players making public service announcements to reinforce following public health measures. Football coaches and players, whose in-person audiences are now limited, should exhort fans to work together as a team toward the common long-term goal of overcoming the pandemic. Ministers and bishops should appeal to their congregations to follow Biblical exhortations to care for other people. Engagement is particularly important with communities of color, where there is more mistrust of contact tracing and addressing the basic needs of infected persons and contacts is crucial (50).
The urgent need to reduce the spread of the pandemic may lead national, state, and local governments, as well as businesses and universities, to rush to institute digital COVID-focused apps that seem desirable in theory. Even if apps have acceptable risks to privacy, complex cultural, political, and ideological problems and tradeoffs need to be resolved. Broad support must be forged for a coordinated and sustainable spectrum of effective public health measures that include prompt testing and return of results and consistent use of masks and physical distancing, as well as digital and manual contact tracing. | 2020-10-21T13:04:42.813Z | 2020-10-20T00:00:00.000 | {
"year": 2020,
"sha1": "6ffdc686a3fddf59f11fb7cc597875df272dc222",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7573931",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca508ddd512ca613ab292d1713dcfde3076e1a08",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231765268 | pes2o/s2orc | v3-fos-license | Clinical Outcome of Guillain-Barré Syndrome in 108 Children
Objectives To review the clinical outcome and electrophysiologic characteristics of children with Guillain-Barré syndrome (GBS) from Eastern India. Methods The hospital records of children aged less than 12 years with a final diagnosis of GBS at our hospital from November, 2015 to December, 2018 were reviewed. Disabilities were assessed at 8-week and 6-month follow-up using the Hughes scale (0–6). Results The demyelinating variety (57 patients; 52.8%) was more common than the axonal variety (33.3%). 71.1% (32/45) of GBS patients had recovered (scale 0, 1) during the 6-month follow-up period. These included 67.7% (21/31) of the axonal variety and 78.6% (11/14) of the demyelinating variety. Conclusion Irrespective of the severity, disability is less with the demyelinating variety as compared with the axonal subtype.
Guillain-Barré syndrome (GBS) is presently the most common cause of acute flaccid paralysis in our country. Among children in India and neighboring areas, the axonal variety predominates [1-5]; the demyelinating variety predominates in South America [6], other parts of Asia [7-9] and Europe [10,11]. This difference in subtypes across geographical areas is not clearly understood [12]. Data on the common subtype in Eastern India is lacking, including data on outcome as pertaining to its subtypes, which this study attempts to address.
METHODS
This is a hospital record review conducted in a pediatric tertiary care hospital from November, 2015 to October, 2018. During this period, 144 children with acute flaccid paralysis (AFP) were admitted, among whom 108 were diagnosed as GBS (Asbury and Cornblath criteria [13]) and included in our study after excluding GBS variants (n=12), hypokalemic periodic paralysis (n=4), transverse myelitis (n=6), traumatic neuritis (n=2) and those who did not give consent (n=12). Ethical clearance was obtained from the institutional ethics committee. Informed consent was taken before enrolment. Intravenous immunoglobulin therapy was given to all patients.
Nerve conduction study was done within 48 hours in 99 (91.7%) patients, after initial stabilization. All the patients underwent stool examination for poliovirus detection. Lumbar puncture was done in 102 (94.4%) patients in the second week after disease onset. Hughes GBS disability grade was applied to assess the outcome at eight weeks (n=66) and six months (n=45) after discharge [14].
Statistical analysis: SPSS 24.0 and GraphPad Prism version 5 were used for analysis. Proportions were compared by Chi-square test or Fisher exact test, as appropriate. A P value <0.05 was considered statistically significant.
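As an illustration of these tests, the subtype-by-outcome comparison can be run on the 6-month counts reported in the abstract (21/31 axonal and 11/14 demyelinating patients recovered); the sketch below is illustrative only and is not a reconstruction of the authors' actual analysis.

```python
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table: rows = subtype (axonal, demyelinating),
# columns = outcome at 6 months (recovered, not recovered).
table = [[21, 10],
         [11, 3]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferred when expected counts are small
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```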
RESULTS
A total of 108 cases (66 boys) of GBS, in the age range 1.2 to 10 years with a median (IQR) age of 4.2 years (2 years 3 months to 5 years), were enrolled in the present study. Preceding respiratory and gastrointestinal infections were found in 33.3% (n=36) and 25% (n=27) children, respectively. History of antecedent illness was present in 72 (66.7%) patients, including diphtheria-tetanus-whole cell pertussis vaccination in one child.
Fluctuating blood pressure (n=6) and postural hypotension (n=3) were among the autonomic disturbances seen in 47.2% (n=51) of the patients. Sensory symptom was the first symptom in 66 (61.1%) patients as compared to motor symptoms in 42 (38.9%) patients. The most common initial sensory symptom was paresthesia (33.3%) in the form of pin-and-needle sensations, burning sensation and itching. Among other initial sensory presentations, generalized muscle aches were found in 8.3% of cases, numbness of legs in 5.5%, pain in the back and neck in 8.3%, and pain in legs in 5.5% of cases. Pain at some time during the illness was present in 86% of patients. In this study, a demyelinating pattern (AIDP) was seen in 52.8% (n=57) and an axonal pattern in 33.3% (n=36), whereas 5.6% (n=6) had a normal NCV pattern. In the axonal variety, there were 34 cases of acute motor axonal neuropathy (AMAN) and two cases of acute motor sensory axonal neuropathy (AMSAN). Albuminocytological dissociation was found in 54 (50%) patients.
Pediatric intensive care unit (PICU) care for the management of dysautonomia and respiratory paralysis was required in 54 patients. Mechanical ventilation for respiratory failure was required in 24 (22.2%) patients, of whom nine (8.3%) died during the acute phase of the illness. Dysautonomia, bulbar involvement and diarrhea were present in all nine patients who died. The causes of death were cardiac arrest in the context of dysautonomic syndrome in four patients, ventilator-associated pneumonia (VAP) in three patients, adult respiratory distress syndrome (ARDS) in one, and sepsis in one patient. The duration of ventilation ranged from 2 to 64 days with a mean of 20.12 days, and the duration of hospital stay ranged from 2 to 74 days with a mean of 16.5 days. During ventilation, one patient developed pneumothorax, nine developed VAP, and 19 patients (17.9%) required tracheostomy.
Out of 108 patients, 99 were discharged, but only 66 (66.7%) patients were available for follow-up at 8 weeks after the onset of illness, and 45 patients at 6 months (Table I). The reasons for loss to follow-up were amelioration of weakness, minor sensory symptoms, distance from the hospital, and follow-up at nearby clinics. In the present study, three patients developed chronic inflammatory demyelinating polyneuropathy (CIDP) during the follow-up period and were treated with IVIG and steroids.
DISCUSSION
This is a single-center study done in eastern India, which included 108 children with GBS and compared their outcomes at eight weeks and six months of follow-up. Of the 99 patients available for electrophysiological studies, 52.8% had the demyelinating subtype and 33.3% had the axonal variety.
In the present study, those having the axonal variety had higher Hughes disability scores at presentation, at the peak of disease, on discharge, and on follow-up at eight weeks and six months. The axonal variety had a higher incidence of GI symptoms in our study as well as in other studies [7,15], while antecedent upper respiratory illness was more common in the demyelinating variety, as also noted in a few previous studies [3,7,15]. The reasons why some infections are more common in certain subtypes of GBS are not very clear.
In a study by Korinthenberg, et al. [10] on 95 children, there was an improvement of 96% (91/95) (75% Grade 0 and 21% Grade 1) at the end of an observation period of 288 days. They had 74% of the demyelinating subtype, which probably explains their excellent outcome. Kalra, et al. [2], in their studies of 52 children conducted in northern India, reported a recovery rate of 87.5% at 1-year follow-up and 95% thereafter. In a study from southern India by Kannan, et al. [15], all 43 children recovered (Grade 1, 2) at 6-month follow-up. They reported a mix of axonal (44.2%) and demyelinating (48.8%) subtypes. Recently, Yadav, et al. [1] studied 36 children and reported a recovery rate of 84.4% (27/32) (Grade 1, 2) at 3-month follow-up. Their predominant subtype was the axonal variety (69.4%). We had a higher number of the axonal variety in follow-up as compared to the demyelinating subtype (31 vs 14). This was probably due to persistence of weakness in the axonal type, which also explains the poorer outcome of our study. Data have been difficult to analyze, as a few studies [1,15] have used a Hughes disability score of 1 and 2 to discuss outcome while others [2,10], including the present study, have used a score of 0 and 1. More meaningful comparisons could have been done if the same disability scores were used. The overall prognosis in most studies was excellent, and it has been observed by most of the studies that a longer duration of follow-up showed improved disability ratings and scores [1,6,10].
The limitations of this study are its single-center design, the absence of a longer period of follow-up, and insufficient numbers to draw strong conclusions. | 2021-02-03T06:17:18.879Z | 2021-01-28T00:00:00.000 | {
"year": 2021,
"sha1": "19310e35994bd759e13ffb88ba0f7d313d4a18cf",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13312-021-2302-7.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e7b0fd50f7949301f0fc1f973aaa5df8c6c358c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
46029132 | pes2o/s2orc | v3-fos-license | Crystal Structures of Inhibitor Complexes Reveal an Alternate Binding Mode in Orotidine-5′-monophosphate Decarboxylase*
The crystal structures of the enzyme orotidine-5′-monophosphate decarboxylase from Methanobacterium thermoautotrophicum complexed with its product UMP and the inhibitors 6-hydroxyuridine 5′-phosphate (BMP), XMP, and CMP are reported. A mutant version of the protein, in which four residues of the flexible phosphate-binding loop 180Gly–Gly190 were removed and Arg203 was replaced by alanine, was also analyzed. The XMP and CMP complexes reveal a ligand-binding mode that is distinct from the one identified previously, with the aromatic rings located outside the binding pocket. A potential pathway for ligand binding is discussed.
Orotidine-5′-monophosphate decarboxylase (ODCase) catalyzes the last step in the de novo pyrimidine biosynthesis pathway, converting OMP to UMP (Scheme I), which in turn serves as the source of all cellular pyrimidine nucleotides. ODCase continues to elicit strong interest not only because of its obvious importance in DNA and RNA synthesis, and thus in cell growth and proliferation, but even more so because of the enormous acceleration it conveys on the catalyzed reaction as well as the elusive nature of its reaction mechanism.
The decarboxylation of orotic acid in water of neutral pH is a very slow process (t½ ≈ 78 million years at 25 °C). The enzyme ODCase, however, catalyzes the breaking of this stable carbon-carbon bond at rates that are only 2 orders of magnitude below the limits set by diffusion. This astonishing feat qualifies ODCase as the most proficient enzyme known (1). Such catalytic power is even more remarkable if one considers that it is achieved without the help of any metal ions or cofactors and does not involve acid/base catalysis (2,3). Not too surprisingly, several laboratories have made the chemical mechanism of this enzyme an object of their studies (4-8). Two strong inhibitors, 6-hydroxyuridine 5′-phosphate (barbituric acid ribosyl monophosphate; BMP) and 6-aza-UMP, are thought of as close mimics of the postulated carbanion intermediate, thereby functioning as transition state analogues. Several weaker inhibitors include the product UMP and the nucleotides XMP and CMP (Scheme II). All of these compounds show a competitive inhibition pattern.
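The half-life quoted above converts directly into a first-order rate constant, which makes the scale of the acceleration concrete. In the back-of-the-envelope calculation below, only the 78-million-year half-life comes from the text; the enzymatic turnover number used for comparison is an illustrative literature-scale value, not a figure from this paper.

```python
import math

# Uncatalyzed decarboxylation: t1/2 of ~78 million years at 25 C (see text).
SECONDS_PER_YEAR = 3.156e7
t_half_s = 78e6 * SECONDS_PER_YEAR       # ~2.5e15 s
k_uncat = math.log(2) / t_half_s         # first-order rate constant
print(f"k_uncat ~ {k_uncat:.1e} per s")  # ~2.8e-16 s^-1

# An enzyme turning over tens of substrate molecules per second would
# therefore accelerate the reaction by roughly 17 orders of magnitude.
k_cat_illustrative = 39.0                # s^-1, assumed for scale only
print(f"rate enhancement ~ {k_cat_illustrative / k_uncat:.1e}")
```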
Recently published crystallographic studies of ODCases from four different organisms (9-12) identified the close conservation of their three-dimensional structures but still did not lead to a generally accepted mechanistic proposal (13-15). A Lys 42–Asp 70–Lys 72–Asp 75B arrangement creates a chain of alternating charges, which lies on one side of the binding pocket for the aromatic base. This cavity fits tightly around the bound product or inhibitor ligands (9-12). The substrate OMP itself, however, only enters this site in crystals of the double mutant D70A/K72A. Without the removal of the two side chains, there was insufficient space for substrate molecules to bind, with Asp 70 not only overlapping in space but also exerting major electrostatic repulsion. In addition, there were no large conformational changes observed by either main chain or side chains when the wild-type enzyme and several active site mutants were crystallized in complex with substrate, product, and inhibitors (16). In light of this peculiar nature of the substrate-binding site, we have put forward the idea that stress created by charge-charge repulsion in a small rigid and hydrophobic environment was a major factor in rate acceleration. This stress would be expected to build up as the C6-carboxylate approaches the side chain of Asp 70, leading to the release of CO₂. The resulting carbanion would then be neutralized through proton transfer from the side chain of Lys 72. The substantial binding energy generated by interactions of the phosphate and ribose parts of OMP with the enzyme matrix could provide a large part of the driving force for this process (9). This model, however, does not explain why the purine nucleotide XMP with its bulkier two-ring base is a much better inhibitor of ODCase (Ki = 4.1 × 10⁻⁷ M) than the product UMP (Ki = 2.0 × 10⁻⁴ M), which in its chemical structure more closely resembles the strong inhibitor 6-aza-UMP (Ki = 6.4 × 10⁻⁸ M) (4). Given the remarkably tight fit to the binding site found in all structurally characterized pyrimidine nucleotide complexes (9-12, 16), one would have to postulate a major conformational change of the protein to accommodate the larger ring of XMP. The potential for such a reconfiguration of the active site would clearly weaken the argument for the involvement of electrostatic repulsion in the catalytic mechanism of the enzyme. We, therefore, set out to determine the binding mode of several well characterized inhibitor molecules, including XMP, to ODCase from the methanogenic archaeon Methanobacterium thermoautotrophicum.
In addition, we investigated the role played by the phosphate-binding loop, amino acids Gly 180–Gly 190, in ligand binding by analyzing the 6-aza-UMP complex of the deletion mutant ΔR203A, in which amino acids 184-187 were removed and the phosphate-anchoring Arg 203 was changed to alanine. The crystal structures of the complexes of wild-type ODCase with the product UMP and the most tightly binding inhibitor 6-hydroxyuridine 5′-phosphate (BMP) were also determined to allow meaningful comparisons not only with the other complexes studied by us but also with the results described by other groups for the enzymes from yeast (10), Bacillus subtilis (11), and Escherichia coli (12).
EXPERIMENTAL PROCEDURES
Cloning, Protein Expression, and Purification-Wild-type and mutant M. thermoautotrophicum ODCases were expressed and purified as recently described (17). The ΔR203A mutant was constructed by first introducing a single-site mutation, R203A (QuikChange™, Stratagene), followed by the removal of the four residues Ala 184–Gln 185–Gly 186–Gly 187 in a second round of mutagenesis. The coding strand sequences of the respective primers are (mutating triplet in bold and underlined; the point of deletion is indicated by a vertical line (|)): R203A, GATGCCATAATAGTTGGAGCGTCCATCTACCTTGCAG and Δ, CATTTCTCATATCCCCCGGTGTGGGA|GACCCAGGGGAGACCCTC.
Crystallization, Data Collection, and Processing-All crystals were grown in hanging drops using ∼1.2 M trisodium citrate, pH 6.5-8.5, as the main precipitant. To obtain reasonably sized, single crystals of the UMP and BMP complexes, 3-7% dioxane had to be added to the crystallization medium. The resulting crystals belonged to space group C222₁, with one monomer per asymmetric unit. Other additives were needed to generate single crystals of the XMP and CMP complexes. 10% (v/v) (±)-1,3-butanediol (Fluka) initiated crystal formation and suppressed twinning of the XMP crystals. A mixture of 1.5-2% (w/v) 1,2,3-heptanetriol (Fluka) and 5% (v/v) 12-crown-4 (Fluka) was essential in growing diffraction quality crystals of the CMP complex (space group P2₁, with one dimer per asymmetric unit) and to again suppress twinning of the crystals. To stabilize the 6-aza-UMP complex crystals of the deletion protein ΔR203A, MgCl₂ was added to a final concentration of 25 mM, in addition to 5% dioxane (space group P1; four monomers per asymmetric unit). Nevertheless, over time, the inhibitor molecules dissociated from the protein and crystals of the free protein appeared on the surface of the complex crystals.
Structure Determination-Both the structures of the XMP and ΔR203A complexes were determined using the molecular replacement package EPMR (19). In the case of the XMP complex, the wild-type ODCase monomer coordinates were used as the search model for the two monomers in the asymmetric unit (an identical procedure was followed for the CMP complex), whereas a dimer was used to find the two dimers of ΔR203A. Subsequent refinements were done with the program Crystallography & NMR System, version 0.9 (20). No noncrystallographic symmetry restraints were used for the final model. The refinement statistics are given in Table I. All images were prepared using SPOCK (21), MOLSCRIPT (22), and RASTER3D (23).
Effect of Alcohol on the Kinetics of ODCase-Wild-type protein was expressed in the ODCase-deficient E. coli cell line SKP10 (24) and purified according to the standard protocol (17). A concentration of 0.12 μM calculated for monomeric protein was used in each activity assay. The decarboxylation of OMP was monitored in a UV-visible spectrophotometer (Cintra 20, GBC) at 282 nm and 25 °C in 50 mM Tris buffer, pH 7.5, and 1 mM dithiothreitol. For each concentration of OMP, the initial linear rate of decrease in absorption was taken as a measure of the velocity. The effect of (±)-1,3-butanediol on the capability of XMP to inhibit ODCase activity was tested with various concentrations of the alcohol in the constant presence of 200 μM XMP. The data were fit to the Michaelis-Menten equation with the program KaleidaGraph 3.0 (Abelbeck Software) to provide the values of Vmax and the apparent dissociation constant Km.
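For reference, the fit described here can be reproduced with any nonlinear least-squares routine; the data points below are made up for illustration, and the closing comment states the standard textbook competitive-inhibition relation rather than anything measured in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

# Hypothetical substrate concentrations (uM) and initial velocities (a.u.).
s = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
v = np.array([0.9, 1.8, 2.6, 3.3, 4.0, 4.3])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[5.0, 10.0])
print(f"Vmax = {vmax:.2f} a.u., Km = {km:.1f} uM")

# For a competitive inhibitor such as XMP, Vmax is unchanged while the
# apparent Km rises: Km_app = Km * (1 + [I] / Ki).
```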
RESULTS
Both the CMP complex and XMP complex crystals were very difficult to grow. Modifications to the dielectric constants of the media caused by the addition of small organic compounds seemed to support the formation of crystals of ODCase complexes, but a large number of additives had to be screened before crystals of diffraction quality could be obtained reproducibly. For instance, it was mandatory to add (±)-1,3-butanediol to the crystallization mixture for the XMP complex crystals to suppress their very high tendency to twin.
The electron density maps for both the CMP and XMP complexes clearly show that neither of the nucleotide inhibitors has its base bound inside the cavity that accepts the aromatic rings of the substrate and product molecules. Also in both structures, the loop comprising amino acids Gly 180–Gly 190 was disordered. This binding mode leaves the inhibitor molecules partially exposed to solvent (Fig. 1). In substrate or product complexes, this loop clamps down on the nucleotide. Gln 185, the residue located in the center of the loop, interacts with both O-2 and an oxygen of the 5′-phosphate.
Purine Inhibitor XMP-The analysis of this complex was attempted to address why the purine nucleotide XMP, with its significantly larger base, is a much more effective competitive inhibitor of ODCases than the product UMP, especially given the narrowness of the site that was to accept the aromatic rings of these molecules.
Its limited size, together with its resistance to conformational change, prevents the base-binding cavity from accepting the xanthine group of XMP (Fig. 1B). Unlike 6-aza-UMP, UMP, BMP, and OMP, XMP adopts the lower energy structure also found in solution, with 3′-endo sugar puckering and anti-conformation of the base. Despite this change, Ser 127 still serves as an anchoring residue for the nucleotide base, forming two hydrogen bonds to xanthine. The side chain hydroxyl accepts a proton from N-1 and the backbone amide donates a proton to O-2. When compared with the complex of 6-aza-UMP and wild-type enzyme, the interactions between the phosphate group and Arg 203 as well as those of the 3′-OH of ribose with Asp 20 and Lys 42 were conserved. However, the 2′-OH group was no longer in contact with its usual partners, Asp 75B and Thr 79B from the other monomer. Instead, it was now only 3.0 Å away from the amino group of Lys 42 and forms an additional bond to the C3-OH of (+)-1,3-butanediol, a crystallization additive which, together with a water molecule W1, is bound in the active site of the XMP-ODCase complex (Fig. 2A).
The molecule of butyl alcohol could be described as acting like an adapter. Its hydrophobic chain fits nicely against the hydrophobic pocket of the active site, whereas its C1-OH binds to N-3 of xanthine, in addition to its C3-OH and the ribose 2′-OH interactions mentioned above. A superposition of the active sites of the XMP and 6-aza-UMP complexes (Fig. 3A) shows that water W1 assumes the regular binding spot of the ribose 2′-OH, forming hydrogen bonds to OG1 of Thr 79B and to the carboxylate of Asp 75B.
The refined B-factor values for XMP (average B = 20.8 Å²) indicate a high occupancy rate and low mobility of this molecule in the crystalline complex. For comparison, the average B-factor for the very rigid active site residues is 16.2 Å². Water W1 and the butyl alcohol molecule show significantly higher values of 34.9 and 43.6 Å², respectively, most probably indicative of increased mobility. It is interesting to note that both hydroxyls of the additive as well as the water W1 take up positions that are occupied by first-hydration shell water molecules in the free-enzyme structure.
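For orientation, an isotropic B-factor maps onto a mean-square atomic displacement through the standard relation below; applied to the average XMP value, it implies a positional spread of only about half an angstrom, consistent with a well-ordered, high-occupancy ligand.

```latex
B = 8\pi^{2}\langle u^{2}\rangle
\;\;\Rightarrow\;\;
\langle u^{2}\rangle = \frac{B}{8\pi^{2}} = \frac{20.8\ \text{\AA}^{2}}{8\pi^{2}} \approx 0.26\ \text{\AA}^{2},
\qquad u_{\mathrm{rms}} \approx 0.51\ \text{\AA}
```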
Kinetic Studies-We performed kinetic measurements to establish whether the presence of (±)-1,3-butanediol had any major influence on catalysis. Increasing concentrations of alcohol do lower the apparent dissociation constant Km from that measured with wild-type enzyme; Vmax, however, stays constant (Table II). As the dielectric constant (ε) of the medium is decreased, ODCase binds the substrate more tightly. Ionic interactions and hydrogen bonds increase in strength with decreasing ε; therefore our results corroborate the idea that a large part of the substrate binding energy was generated from the interactions of the enzyme with the phosphate and the ribose groups of the nucleotide. Lowering ε will also make it easier to protonate the C6-carboxylate of OMP, minimizing potential electrostatic repulsion. Our results align well with the general finding that decarboxylation reactions are accelerated in desolvating environments (8).
At a fixed concentration of 200 μM XMP, increasing amounts of (±)-1,3-butanediol (5-20%, v/v) do not significantly change the values of Vmax and Km (Table II), leading us to believe that the alcohol does not drastically change the binding mode of XMP. The high B-factor values of the alcohol molecules also imply that their interactions with amino acids and the xanthine ring were not very specific and not very strong. Another argument for the general relevance of the orientation of the aromatic ring of XMP observed in crystals was the fact that the base of bound CMP adopts an almost identical position in the absence of any mitigating molecules (see below).
Pyrimidine Inhibitor CMP-The potential substrate 6-carboxy-CMP binds only very weakly to ODCase, and its decarboxylation was almost undetectable (25); yet, there was no strong structural reason for the absence of catalysis. When comparing the interactions possible between the enzyme and orotate on the one hand and a cytosine ring on the other, only one hydrogen bond might be lost in the latter (9). Most of the hydrogen bonds formed (a water molecule linking the backbone carbonyl group of Gln 125 with O-4 of orotate and the hydrogen bond of the latter to the backbone amide of Ser 127) involve partners that could act equally well as hydrogen donors or acceptors. In complexes involving orotidine or uracil, the side chain hydroxyl of Ser 127 receives a bond from N3H of the base and, at the same time, donates one to the oxygen of the amide side chain of Gln 185. However, it could only act as a donor to either one of these positions in a potential CMP complex. To shed some light on the structural basis of the rejection of CMP-carboxylate as a substrate, we determined the crystal structure of CMP bound to M. thermoautotrophicum ODCase.
The crystal structure of the CMP complex reveals that the pyrimidine nucleotide not only adopts a position very similar to the one found for XMP but also engages in most of the interactions displayed by the purine nucleotide. Another feature common to both complexes was the disordered state of the loop comprising residues Gly 180 -Gly 190 , the phosphate-binding loop (Fig. 1C).
Both active sites of the functional dimer were occupied by CMP molecules as well as by waters of increased mobility (Fig. 2B). CMP assumes its solution conformation, with the ribose ring in the 3′-endo position and cytosine oriented anti to the sugar ring (Fig. 3B), again mirroring XMP in its complex with ODCase. Both CMP molecules have relatively high B-factor values, 34.2 and 42.3 Å² for the A and B monomers of the homodimer, respectively, in contrast to 19.4 and 23.3 Å² for the conserved enzyme residues surrounding them. The binding modes of the two cytosine bases were not completely identical; the one in the A monomer assumes a position slightly closer to that of the 6-azauracil ring in the 6-aza-UMP complex than does its counterpart in the B monomer. The increased mobility of CMP was accompanied by the disordering of the guanidinium head group of Arg 203 in the active site of monomer A. While the ribose 3′-OH was still fixed by hydrogen bonds to Asp 20 (2.7 Å) and Lys 42 (2.8 Å), the 2′-OH group was not making its proper contacts with the enzyme but formed a long hydrogen bond with the amino group of Lys 42 (3.1 Å) instead.
For the crystallization of the CMP-ODCase complex, 1,2,3-heptanetriol was used as an additive. As this 7-carbon alcohol is much larger than (+)-1,3-butanediol, it cannot fit between the nucleotide and the active site wall. Instead, five water molecules fill the empty cavity designed to hold the base in the productive nucleotide-binding mode. One of them sits in the position that the 2′-OH would occupy in such a complex. It is equivalent to water W1 in the XMP complex and engages in the same interactions with Asp 75B and Thr 79B.
Impaired Phosphate Binding Mutant, ΔR203A-Kinetic studies have made it quite clear that the binding energy contributed by the 5′-phosphate group is very important for ODCase catalysis. Mutants with impaired phosphate binding ability have drastically reduced catalytic rates (4). To investigate the structural basis of this property, especially to find out whether the loss of the majority of bonds between the phosphate group and the enzyme has any effects on the conformation of the residues surrounding the aromatic rings of the nucleotide bases, we mutated Arg 203, the main phosphate-interacting residue, to alanine and deleted the four residues 184AQGG187 of the phosphate-binding loop. Following co-crystallization, the structure of this mutant protein was determined in its complex with the inhibitor 6-aza-UMP. The reduced ligand binding affinity correlates well with a loss in crystal stability but does not lead to any structural adaptation of the active site. First, crystals of the complex form appear, but over time many of them dissolve and ligand-free crystals (space group P4₁2₁2) start to grow on their surfaces. The residual complex crystals start to bend. If complex crystals were harvested after nucleation of free enzyme crystals had started, the resolution of their diffraction pattern dropped to about 5 Å. Both effects are consistent with the build-up of stress in the crystal lattice. Given these observations, it came as no surprise to find that the refined overall B-factor value for this mutant complex is quite high (38.7 Å² averaged over the four monomers located in the asymmetric unit), with one of the monomers displaying an overall B-factor value of 44.6 Å². The electron density corresponding to the nucleotide ligand in the deletion mutant is distinctly weaker and of lower quality than the density found for the XMP or CMP ligands in their respective complexes. Together with the problems encountered during the crystallization of the 6-aza-UMP·ΔR203A complex, this indicates an elevated tendency for ligand loss.
Although MgCl₂ had been added to the crystallization solution for stabilization, no defined metal ion was observed in the resulting 1.9-Å electron density map. Superposition of active site residues in the wild-type enzyme with their counterparts in the ΔR203A mutant (Fig. 3C) does not reveal any significant differences (root mean square deviation of 36 main chain and 9 side chain atoms = 0.12 Å). Despite its reduced affinity for inhibitors, the ΔR203A mutant binds 6-aza-UMP in exactly the same position as the wild-type enzyme (Fig. 1D). All interactions between the nucleoside part of the inhibitor molecule and active site residues were maintained, including the hydrogen bond between N-6 and Lys 72 (3.1-3.2 Å). This was obviously not the case for the phosphate group. In addition to missing the ionic interaction with Arg 203, it has lost one of its first hydration shell waters that was held in place by the backbone amide of Gln 185 and the backbone carbonyl of Val 182 in wild-type ODCase.
One distinct difference between the structures of wild-type and deletion mutant enzymes was the presence of an additional, elongated electron density peak next to the azauracil ring in the latter (Fig. 2C). Whereas the average values of the B-factors for the active site residues and 6-aza-UMP range from 25 to 36 Å², the B-factor for the water molecule modeled into the density has a value of 60 Å². It was not clear whether this density represents alternate but overlapping binding positions of water molecules or whether it was caused by the low-occupancy presence of an as yet unidentified low molecular weight compound.
BMP, 6-Aza-UMP, and UMP Inhibitor Complexes-BMP, 6-aza-UMP, and UMP all have almost identical molecular structures except for position 6 of the respective bases. Both BMP (Ki = 8.8 × 10⁻¹² M) (26) and 6-aza-UMP (Ki = 6.4 × 10⁻⁸ M) (4) are strong inhibitors. This has been attributed to their similarity to the carbanion, which was postulated as a reaction intermediate stabilized by the positive charge of Lys 72 (26). This single favorable interaction alone, however, cannot adequately explain why the product UMP is so ineffective as an inhibitor (Ki = 2.0 × 10⁻⁴ M). The crystal structures of the three compounds complexed with various homologous ODCases were known (9-12). We decided to reanalyze the corresponding complexes with M. thermoautotrophicum ODCase as the crystals of this enzyme diffract to significantly higher resolution than do those from other sources. At the same time, the results of such analyses provide a common frame of reference against which the significance of small changes between the various ligand complexes can be assessed.
When the structures of the different M. thermoautotrophicum complexes were compared, only minimal changes were identified. The overall root mean square deviation value for 418 Cα atoms belonging to a dimer was less than 0.5 Å; the corresponding value was 0.2 Å for 50 backbone and Cβ atoms of the following conserved active site residues: Asp 20, Lys 42, Asp 70, Lys 72, Ser 127, Gln 185, Arg 203, Asp 75B, Ile 76B, and Thr 79B. The similarities even extend to water molecules accompanying the inhibitors in the binding site. As described for the structures of the enzymes from E. coli (12) and yeast (10), the negatively charged atom O-6 of BMP interacts electrostatically with Lys 72 (2.8 Å) and at the same time via hydrogen bonding with a solvating water molecule and with OD1 of Asp 70.
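For reference, the root mean square deviation quoted here is computed over N paired atoms after optimal superposition:

```latex
\mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert \mathbf{x}_{i} - \mathbf{y}_{i} \right\rVert^{2}}
```

A value below 0.5 Å over 418 Cα atoms is smaller than the positional spread implied by typical B-factors, so the backbones of the different complexes are effectively identical.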
As part of the uracil ring of UMP, C-6 does not carry a negative charge and cannot act as a partner in ionic or hydrogen bond interactions. Consequently, the amino group of Lys 72 was located more than 1 Å away from its positions in the 6-aza-UMP·ODCase and BMP·ODCase complexes. The rest of the conserved side chains shows no changes in position. In addition to the loss of an energetically very favorable interaction with the side chain of Lys 72, the UMP complex structure reveals another reason for the inefficiency of UMP to act as an inhibitor of ODCases. In the ligand-free structure, there was one well ordered solvent molecule for each of the charged residues in the active site. Binding of UMP creates a cavity between the inhibitor molecule and the enzyme surface because of desolvation. As the exact energy involved in such a process depends on the microenvironment, it was extremely difficult to estimate. However, it will undoubtedly impose appreciable entropic cost on the formation of the UMP·ODCase complex, which in turn will lead to an increase in the inhibition constant.

DISCUSSION

All of the complex structures reported here should help to improve our understanding of the highly specialized and finely tuned active site of ODCase. There was only one direct interaction (the interaction with Lys 72) between the active site of the enzyme and UMP that generates binding energy and was lost when compared with the strong inhibitors BMP and 6-aza-UMP. The xanthine ring of XMP does not interact with Lys 72 either; nevertheless, this nucleotide is a rather good inhibitor of ODCase. We identified two entropic factors that could mitigate the effects of this missing interaction. Binding of XMP is not accompanied by the adoption of one fixed conformation by the loop Gly 180–Gly 190, and it does not lead to the stripping of water molecules from the charged residues in the active site. In addition, XMP itself retains the low energy conformation it assumes in solution.
Whereas UMP was much less tightly bound than the analogues that carry a (full or partial) negative charge close to position C-6 of the base, its aromatic ring enters the base-binding cavity and adopts a position common to those charge-carrying analogues. Somewhat surprisingly, given its close similarity to UMP, the base of CMP assumes a position almost identical to that of its counterpart in XMP. As mentioned above, the removal of a hydrogen atom from N-3 of CMP makes it impossible for this atom to act as a hydrogen bond donor. This, in turn, would lead to the loss of one hydrogen bond emanating from the side chain hydroxyl of Ser 127. Whether this reduction in binding energy was sufficient to prevent proper binding of the aromatic ring of CMP was not clear, especially as the phosphate and ribose binding pattern was unchanged. There exists, however, another property of the nucleotide bases that could form the basis for discrimination: their dipole moment. For cytosine, its value was about 1.5 times larger and its direction rotated 70° when compared with the corresponding values for uracil (27). Given the unusual distribution of charges in the active site of ODCase (9), the binding site of the enzyme could well be able to recognize such differences in dipole moments and use them for ligand selection. After all, the enzyme does have a higher affinity for UMP than for ribose 5′-phosphate alone (4).
Although it does drastically reduce the binding strength of the inhibitor, the major impairment of phosphate binding alone, caused by the removal of four amino acids and a ligand-anchoring arginine side chain, did not shift the binding position of the 6-aza-UMP base if sufficiently high inhibitor concentrations were used (4) (Fig. 2C). Because there were no unfavorable steric clashes or electrostatic repulsions (including charge-hydrophobic contacts), 6-azauracil fits right into the active site pocket, and the limited number of contacts made by the ribose and the base were sufficient to keep it in place.
For both the XMP and CMP complexes, the binding modes were the same despite the use of different additives and, more importantly, the quite dissimilar nature of their base components. Phosphate and ribose groups of the inhibitors superimpose very closely (Fig. 3B), although the location of the ribose ring was not identical to the one found in the high affinity complexes with BMP and 6-aza-UMP. In the latter ones, the place occupied by the 2′-OH group finds a water molecule in the XMP and CMP structures, indicating a minimum energy position for hydroxyl groups at this spot. Given the small number of identifiable direct bonds formed between ODCase and CMP, it was surprising to find the relatively well defined binding revealed in the electron density maps of the complex (Fig. 2B). For the cytosine ring, no direct contacts to the enzyme can be seen; two water molecules form a bridge between the side chain of Ser 127 and O-2. The interactions of the phosphate group with Arg 203 in the two monomers in the asymmetric unit vary in their tightness. CMP seems to be mainly held in place by hydrogen bonds between the two ribose hydroxyls and Lys 42. The xanthine ring of XMP was held in place more firmly by side chain and main chain bonds to Ser 127.
The binding mode of the XMP and CMP inhibitors may reflect a transient conformation that the ligand passes through on its way to its fully docked position in the active site of ODCase, a position reflected in the 6-aza-UMP complex. Our results were consistent with the following scenario. The 5′-phosphate group serves as the first anchor point of binding. Then, while the Gly 180–Gly 190 loop closes down on the ligand, the formation of bonds to the ribose hydroxyls pulls the molecule deeper into the binding pocket. The ring pucker of the ribose sugar changes as water molecules bound in the active site are replaced by atoms of the ligand. However, only when size and/or dipole moment of the base correspond closely to those that the enzyme has evolved to accept is the aromatic ring of the ligand allowed to assume the "productive" orientation. This explains the decrease in kcat/Km when the ribose 2′-OH interaction with the enzyme was damaged either by mutation of enzyme residues Thr 79B and Asp 20B or by chemical modification of the substrate (6). Even though it is true that the enzyme has the highest affinity for the transition state, the components contributing to that affinity, in particular the 2′-OH interactions of the ribose 5′-phosphate part, do not reach their maximum possible strength until catalysis has progressed greatly along its reaction coordinate. The transient enzyme-substrate complex perhaps resembles the CMP·ODCase or XMP·ODCase complexes, where contacts were initiated but not finalized. Thus, the binding energy of the enzyme for phosphoribose is utilized for catalysis. Although the results of the structural analyses described above are suggestive of a temporal sequence of ligand binding, they do not yet yield definite conclusions about the chemical mechanism employed by this remarkable enzyme. Obtaining clues toward solving this puzzle will depend on the application of more dynamically oriented experimental methods. | 2018-04-03T06:13:49.767Z | 2002-08-02T00:00:00.000 | {
"year": 2002,
"sha1": "89c62179bd3c63a8212906a72c8075a4c7606e9a",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/277/31/28080.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "d2d7fea24b58f0dc12145f6f4b236c8d1c262a84",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
10976974 | pes2o/s2orc | v3-fos-license | Hemodynamic and Light-Scattering Changes of Rat Spinal Cord and Primary Somatosensory Cortex in Response to Innocuous and Noxious Stimuli
Neuroimaging technologies with an exceptional spatial resolution and noninvasiveness have become a powerful tool for assessing neural activity in both animals and humans. However, the effectiveness of neuroimaging for pain remains unclear partly because the neurovascular coupling during pain processing is not completely characterized. Our current work aims to unravel patterns of neurovascular parameters in pain processing. A novel fiber-optic method was used to acquire absolute values of regional oxy- (HbO) and deoxy-hemoglobin concentrations, oxygen saturation rates (SO2), and the light-scattering coefficients from the spinal cord and primary somatosensory cortex (SI) in 10 rats. Brief mechanical and electrical stimuli (ranging from innocuous to noxious intensities) as well as a long-lasting noxious stimulus (formalin injection) were applied to the hindlimb under pentobarbital anesthesia. Interhemispheric comparisons in the spinal cord and SI were used to confirm functional activation during sensory processing. We found that all neurovascular parameters showed stimulation-induced changes; however, patterns of changes varied with regions and stimuli. Particularly, transient increases in HbO and SO2 were more reliably attributed to brief stimuli, whereas a sustained decrease in SO2 was more reliably attributed to formalin. Only the ipsilateral SI showed delayed responses to brief stimuli. In conclusion, innocuous and noxious stimuli induced significant neurovascular responses at critical centers (e.g., the spinal cord and SI) along the somatosensory pathway; however, there was no single response pattern (as measured by amplitude, duration, lateralization, decrease or increase) that was able to consistently differentiate noxious stimuli. Our results strongly suggested that the neurovascular response patterns differ between brief and long-lasting noxious stimuli, and can also differ between the spinal cord and SI. Therefore, a use of multiple-parameter strategy tailored by stimulus modality (brief or long-lasting) as well as region-dependent characteristics may be more effective in detecting pain using neuroimaging technologies.
Introduction
While decades of discoveries using neuroimaging technologies have revealed rich and complex processes underlying various brain functions [1], the use of neuroimaging as an objective tool to quantify or measure pain has been questioned [2]. This is because pain is a multifactorial subjective experience of the nociceptive inputs associated with one's memories as well as emotional, pathological, genetic, and cognitive factors [3]. The often-called "pain matrix" due to a large distributed brain network involved during nociceptive/pain processing leads to limited validation and effectiveness (as examined by sensitivity and specificity) of neuroimaging signals in pain detection and/or quantification [4].
In order to validate, improve or support neuroimaging as an effective and objective tool to measure pain, a better understanding of neural, cerebral, and/or vascular physiology under different kinds of painful/noxious stimulation and at different locations of the central nervous system would be beneficial. A few recent studies using functional magnetic resonance imaging (fMRI) have confirmed that BOLD signals, originating from regional deoxy-hemoglobin concentration changes, are altered due to thermal pain and can be used as biomarkers to objectively assess pain [5,6]. Similarly, several human studies using near infrared spectroscopy have demonstrated that thermal or electrical stimulations induce changes of hemoglobin concentrations at different brain regions [7][8][9]. However, many important physiological questions related to pain processing at different neurological sites cannot be answered using a non-invasive approach in human subjects. Animal studies become a necessary and important approach to address many of the physiological questions. In this particular study, we aimed to answer the following questions: (i) how vascular hemoglobin concentrations and oxygenation change under various short-term mechanical and electrical stimuli as well as a long-lasting chemical stimulus; (ii) whether the nociception-induced hemoglobin-based parameters are contralateral or bilateral in the primary somatosensory cortex (SI) and/or spinal cord; and (iii) whether there exists a single parameter that is consistently associated with noxious stimuli.
To answer the questions given, it is necessary to precisely characterize the neurovascular coupling during pain processing in the central nervous system. In this study, we utilized our recently-developed fiber-optic method [10,11] that allows us to simultaneously acquire absolute values of regional oxy-hemoglobin and deoxy-hemoglobin concentrations (i.e., HbO and Hb) as well as the light-scattering coefficients (μs′: an effect describing photons being dispersed when diffusing into biological tissue) from rat bilateral spinal cord and SI. In particular, the light-scattering property of neurons is believed to manifest neural activation in various species, from the crab leg-nerves to squid giant-axons [12,13], and from the retina to neocortices in cats and rats [14][15][16][17][18][19]. To investigate the effectiveness of our optical signals-derived neurovascular parameters in pain detection/quantification, we used three different stimulus modalities, namely graded mechanical (brushing, innocuous pressuring, and noxious pinching), graded electrical (5, 10, and 15 V), or long-lasting chemical stimulus (formalin injection) to rat hindpaw. Our results indicated that all neurovascular parameters showed stimulation-induced changes. However, patterns of these stimulation-induced changes varied with regions and stimuli, suggesting that a multiple-parameter strategy tailored with respect to stimulus modality may be more effective in pain detection/quantification.
Animal Preparation
Ten male adult Sprague-Dawley rats were used, with a mean age of 102.1 ± 0.6 (±SEM) days and a mean weight of 377.9 ± 14.5 g. All animals were initially anesthetized by a single intraperitoneal injection of pentobarbital sodium solution (50 mg/kg). PE10 tubing was inserted into the jugular vein for continuous intravenous (i.v.) administration of pentobarbital sodium (5 mg/mL) at a fixed rate of 0.02 mL/min to maintain anesthesia throughout data acquisition [20]. The lumbosacral segment of the rat spinal cord was exposed following laminectomy, and the animal was then immobilized on a stereotaxic frame. The dura mater was resected, and mineral oil was used to cover the spinal cord to preserve moisture. Rat body temperature was maintained at 37 °C by using a feedback-controlled heating blanket (Homeothermic Blanket, Harvard Apparatus, Holliston, MA, USA). The animal was further paralyzed by i.v. injection of pancuronium (1 mL; 1 mg/1 mL/min) to prevent muscular twitches. Artificial ventilation (Model 683, Harvard Apparatus, Holliston, MA, USA) was maintained throughout the experiment. All procedures were approved by the Institutional Animal Care and Use Committee (IACUC) at the University of Texas at Arlington. Procedures also followed the guidelines described by the Committee for Research and Ethical Issues of the International Association for the Study of Pain [21].
Data Collection
A customized needle-like fiber-optic system was used to collect reflectance of light at wavelengths between 400 and 1000 nm [11]. Prior to the placement of optic probes over the spinal cord, a silver ball-electrode was used to locate the ipsilateral dorsal root entry zone, where the strength of the primary afferent inputs was maximal, by gently tapping the rat hindpaw and monitoring the real-time recording (using an oscilloscope and an audiometer). Two craniotomies were made over the bilateral primary somatosensory cortices for the hindlimb at AP 0.8 mm posterior and ±2 mm lateral, and the burr holes were then filled with drops of mineral oil before the placement of optic probes to avoid air gaps. Four optic probes (0.85 mm in diameter) were positioned at the dorsal surface of the dorsal root entry zones at the L4-5 lumbar segment, and at the SI burr holes bilaterally. A surgical microscope (Zeiss) was used to ensure no-pressure contact of the probes with the SI and spinal cord. To investigate the microcirculation system during functional activation, we purposely avoided large vessels for probe placements. The use of bilateral assessments (of the spinal cord and SI) allowed us to confirm functional activation during sensory processing. After the experiment, optical signals were converted into HbO, Hb, and μs′ by using an iterative algorithm in Matlab (MathWorks, Natick, MA, USA). The algorithm is detailed elsewhere [11]. The sampling rate was 0.6 Hz.
Mechanical, Electrical and Chemical Stimuli
Graded mechanical (i.e., brushing, pressuring and pinching) stimuli and electrical (i.e., 5, 10, and 15 V) stimuli at 10 Hz with 1-millisecond pulse duration (Grass S48 stimulator, USA) were applied to the rat hindlimb unilaterally for 10 s. Mechanical stimuli were applied on the plantar surface [22,23]. Electrical stimuli were delivered to the ankle using two leads (i.e., bent syringe-needles) pierced through the skin. Both mechanical and electrical stimulations were conducted in a block design with five consecutive trials and an interval of 2 min to the same paw for each animal. After mechanical and electrical stimulation, formalin (50 μL; 3%) was injected into the center of the plantar area of the other paw. Some data from the electrical stimulation paradigm were reported elsewhere [11].
Statistical Analysis
Wilks' lambda (multivariate analysis of variance, MANOVA) was utilized to test baseline differences between the spinal cord and SI based on bilateral measurements of each parameter (i.e., HbO, Hb, HbT, SO2 or μs′). T-tests were utilized to test a stimulation-induced relative change from baseline or to test a difference between hemispheres. Univariate one-way within-subject ANOVA was used to test formalin-induced relative changes over time. Post-hoc contrast analysis and Fisher LSD multiple comparisons were conducted to reveal the temporal pattern of formalin-induced change where necessary. The alpha level was set at 0.05. All data are expressed as mean ± SEM. All statistical analyses were performed in SPSS 17.0 (SPSS, Chicago, IL, USA).

Baseline Measurements

Table 1 illustrates baseline measurements of HbO, Hb, HbT (HbO+Hb), SO2 (HbO/HbT), and μs′ before the first stimulation, brushing, at four locations (n = 10). Within the spinal cord or SI, there appeared to be no systematic difference between hemispheres. Between the spinal cord and SI, there were significant differences in HbO, SO2, HbT and μs′. Most prominently, the basal HbO in the SI was higher than that in the spinal cord, whereas the basal light scattering in the SI was lower than that in the spinal cord. These basal differences should reflect a structural discrepancy. Anatomically, the scanned region at the SI was mainly grey matter, whereas at the spinal cord it was mainly white matter. Table 1. Absolute values of baseline neurovascular parameters (n = 10).
Mechanical- and Electrical-Stimulation-Induced Hemodynamic and Light-Scattering Changes
Typical examples of stimulation-induced hemodynamic and light-scattering traces from the ipsilateral spinal cord are shown in Figure 1. Examples of relative changes (after block averaging, i.e., five blocks, and baseline subtraction) are shown in Figure 2. In particular, the ipsilateral spinal cord tended to show longer (~30 s) responses in HbO, Hb, HbT and SO2, whereas the contralateral SI tended to show shorter (~10 s) responses in HbO, HbT, SO2, and μs′. To investigate the effects of stimulus modality (mechanical or electrical), intensity (low, medium, or high), and region (the SI or the spinal cord) on hemodynamics as well as on light scattering, averages of representative time periods (30 s for the spinal cord; 10 s for the SI) were used. As shown in Figure 3, all five parameters (rows) showed stimulation-induced changes along the somatosensory pathway (i.e., the ipsilateral spinal cord and/or the contralateral SI). Surprisingly, the stimulation-induced changes did not always occur even at the highest intensity of stimulation. That is, the stimulation-induced changes were not merely dependent on intensity, but also on other factors such as region and modality. For instance, the ipsilateral spinal cord was responsive to both modalities, whereas the contralateral SI was responsive only to electrical stimuli. In addition, the spinal cord was more likely to show a change in Hb than the SI. Finally, both regions showed a decrease in μs′, despite no consistency in intensity, modality or lateralization.
Temporal Characteristics of Hemodynamics in the Spinal Cord and SI
To precisely investigate temporal profiles of hemodynamic changes in response to brief stimuli, we sought to determine the most reliable hemodynamic parameter. A signal-to-noise ratio (SNR; the response maximum divided by the standard deviation of the 20-s baseline measurement) was computed for HbO, Hb, HbT, and SO2; the parameter with the largest SNR was considered the most reliable. Our SNR essentially utilized the peak value of a neurovascular response, which was intended to minimize the influence of complex temporal dynamics. As shown in Figure 4, SO2 and HbO were generally superior to Hb and HbT across regions, stimulus intensities and modalities. These two were therefore selected for temporal analysis. As shown in Table 2, onset, peak time and duration (all in seconds) were used to demonstrate temporal characteristics of stimulation-induced changes in HbO and SO2. Note that only a fraction of rats showed a detectable onset under all stimulation conditions. Because statistical power would have been insufficient, no statistical analysis was performed for less representative cases (n < 5). In brief, the ipsilateral spinal cord response appeared to occur later than the contralateral SI response (~4 vs. ~2 s in SO2; noted by †). Also, the ipsilateral spinal cord responses peaked later than the contralateral SI responses (~11 vs. ~4 s in HbO or SO2; noted by †). Similar to the peak time, the ipsilateral spinal cord responses returned to baseline later than the contralateral SI responses (duration: ~30 vs. ~8 s in HbO or SO2; noted by †). Finally, there were delayed ipsilateral SI responses (~15 s in HbO or SO2; noted by ‡). Collectively, there were significant differences in the temporal profile of stimulation-induced changes between the spinal cord and the SI.
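For concreteness, the SNR defined above reduces to a few lines of code; the following is a minimal sketch of that computation (the variable names, sample counts, and language are ours, not the original analysis code, which was written in Matlab):

using System;
using System.Linq;

static class PainSnr
{
    // 'baseline': the 20-s pre-stimulus window (12 samples at the 0.6 Hz
    // sampling rate reported above); 'response': the baseline-subtracted
    // post-stimulus trace. Returns max(response) / SD(baseline).
    static double Snr(double[] baseline, double[] response)
    {
        double mean = baseline.Average();
        double sd = Math.Sqrt(baseline.Sum(x => (x - mean) * (x - mean))
                              / (baseline.Length - 1));
        return response.Max() / sd;
    }
}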
Formalin-Induced Hemodynamic and Light-Scattering Changes
As shown in Figure 5, in the acute phase, the formalin-induced responses appeared to be highly variable over time. Averages of the first 60-s period indicated the following: (1) an increase in HbO and a decrease in Hb from the ipsilateral spinal cord and SI; (2) an increase in SO2 from the contralateral spinal cord and the ipsilateral SI; (3) a decrease in HbT only from the ipsilateral SI (Table 3). In the acute phase, we failed to find any statistically significant change from the contralateral SI. This may in part be due to a considerable level of individual difference (see large error bars in Figure 5). Note that due to massive bleeding and an unexpected death, the sample size differed among regions (for the bilateral SI and contralateral spinal cord: n = 8; for the ipsilateral spinal cord: n = 9). As shown in Figure 6 and Table 4 (see averages of every 5 min over a total of 45 min), in the delayed phase, only sustained decreases in SO2 (from the ipsilateral spinal cord and the contralateral SI), HbO (only from the contralateral SI), and HbT (from the bilateral SI) were statistically significant. It is noteworthy that light scattering failed to show any statistically significant change after formalin injection. Together, the hemodynamic parameters, namely SO2, HbO, and HbT, demonstrated formalin-induced changes; patterns of these changes were functions of time and region.
Discussion
We measured focal hemodynamic and light-scattering changes following mechanical, electrical, and chemical noxious stimulation in rat SI and spinal cord by using a fiber-optic method. A phantom experiment with a similar method suggests a signal penetration depth of ~1 mm [19]. Thus, our measurements in rats should have included sizable regions for assessing functional activation of the SI and spinal cord. Our absolute measurements of HbT from the SI are in the physiological range of a previous report [24]. More importantly, our observation of electrical-stimulation-induced HbO and Hb changes in opposite directions from the ipsilateral spinal cord and the contralateral SI is well documented and in agreement with the somatosensory pathway, suggesting that our measurements are functionally relevant to neural activation. In contrast to the contralateral SI, the ipsilateral SI is thought to play a role, to a lesser extent, in somatosensory processing, as examined by human neuroimaging techniques [25][26][27], likely via the corpus callosum [28]. The ipsilateral SI's neural activity also occurs later and is weaker in intensity [27]. Regarding onset and intensity, our results (Figure 2 and Table 2) appeared to resemble the electrophysiology of the human brain. Nevertheless, the physiological significance of the ipsilateral SI in somatosensory processing or pain processing remains poorly understood.
Electrical, but Not Mechanical, Stimuli Produced an Intensity-Dependent HbO Increase
The essence of our work was to determine a biomarker of pain. As shown in Figure 3, using HbO or any other parameter alone was unlikely to differentiate noxious stimuli (pinching or 15 V) from innocuous stimuli (brushing or 5 V). However, HbO from the spinal cord showed an intensity-dependent change in response to electrical stimuli (F(2, 18) = 4.15, p = 0.03), but not to mechanical stimuli (F(2, 18) = 1.85, p = 0.19). This modality disparity may be related to the nature of the stimuli rather than to nociception.
In the peripheral nervous system, the primary afferents respond differently to various stimuli in intensity- and modality-dependent manners. Modality is defined as a general type of stimulus that is associated with a specific type of receptor on the primary afferents [29]. As an innocuous stimulus, brushing only activates A-β primary afferent fibers conveying information from mechanoreceptors, whereas noxious pinching primarily activates A-δ and C fibers conveying information from both mechanoreceptors and nociceptors [30]. As an unnatural stimulus, electrical stimulation at 15 V is more likely to activate A-δ fibers than at 5 and 10 V [31]. Therefore, electrical and mechanical stimuli may activate different populations of neurons in the periphery. In the spinal cord, there are low-threshold (only responding to innocuous stimuli), high-threshold (only responding to noxious stimuli), and wide-dynamic-range neurons (responding to both), all of which receive sensory inputs from multiple primary afferents [32]. The energy consumption (e.g., oxygen and glucose) in the spinal cord should reflect a summation over all neuron types. It is thus possible that the population of neurons involved in the processing of mechanical stimulation was smaller than that involved in the processing of electrical stimulation. As a result, the energy consumption (in the spinal cord) induced by mechanical stimuli was smaller and less intensity-dependent than that induced by electrical stimuli.
Regional Characteristics of Hemodynamic Responses
We found clear temporal differences between the SI and spinal cord in response to mechanical or electrical stimuli (Table 2). There are two possible explanations. First, the HbO and SO2 increases from the spinal cord seemed to be greater (Figures 2 and 3) and longer (Table 2) than those from the SI; therefore, the metabolic demand of sensory processing may be greater in the spinal cord. Sensory information is processed at various stages in the central nervous system, and the spinal cord is the first stage. Likely, this stage of sensory processing has less screening/filtering, and also involves ascending information to a number of supraspinal structures (e.g., the thalamus, the cerebellum, the midbrain, etc.). As such, sensory processing in the spinal cord may consume more energy than that in the SI, where neurons are organized somatotopically with respect to specific body parts (dermatomes) and only process heavily filtered information.
Second, there may be differences in vasculature between the spinal cord and SI. Accumulating evidence indicates that during neuronal activation, a local increase in blood flow is mainly due to vasodilatation of arteries (but not veins) in the somatosensory cortex [33,34]. In our data, increases in HbO consistently occurred in both the SI and spinal cord, suggesting vasodilation of arteries during neuronal activation in sensory processing. However, changes in Hb were less consistent across regions, intensities, or modalities. It may be that the spinal cord veins act differently from the cortical veins in response to neuronal activation, and as a result, the temporal profiles of SO2 and HbO showed region-dependent differences.
Hemodynamic Signatures of Spinal Cord and SI in Response to a Long-Lasting Noxious Stimulus
The formalin test is a well-established model in the study of long-lasting pain. Following a single subcutaneous injection of formalin, animals show immediate pain behaviors (e.g., licking and elevating the injected paw) for the first 5 min (Phase I), and after a 10-15 min quiescent period, they start to show pain behaviors again for more than 1 h (Phase II) [35]. This biphasic pattern is also demonstrated by the excitability of spinal cord sensory neurons in anesthetized animals [36]. In line with these classic findings, our previous study using fNIRI in rats found distinct hemodynamic patterns correlated with Phases I and II in a number of brain regions [37]. Similarly, in the current study we found hemodynamic responses (from the bilateral spinal cord and ipsilateral SI) during Phase I (Figure 5, Table 3), and a different response pattern (i.e., sustained linear decreases in SO2, HbO, and HbT) during Phase II (Figure 6, Table 4). Such different response patterns between Phases I and II may underlie the well-documented biphasic physiological/behavioral pattern in the formalin test; the pattern in Phase II may indicate a hemodynamic signature of formalin-induced long-lasting pain.
It is well accepted that an increase in blood flow to a specific brain region during functional activation is a hemodynamic signature of neuronal activation, which is referred to as hyperemia [38]. Hyperemia is primarily due to arterial dilation [33,34]. As the metabolic-rate increase is significantly less than the arterial blood influx [39], there will be a large increase in HbO and a small or no increase in Hb. Likely, regional SO2 (i.e., HbO/(HbO+Hb)) will be elevated and will approach arterial SO2 (~90%). Our observations of hemodynamics during electrical stimulation are in accordance with this canonical notion of hyperemia. However, this was not the case in the formalin test (Figures 5 and 6). One possible explanation is that the neurovascular coupling in the process of long-lasting pain may be fundamentally different from the neurovascular coupling in sensory processing at an innocuous level. When there is a much higher demand for energy consumption (e.g., in response to formalin injection), the hyperemia hypothesis (i.e., arterial blood influx >> metabolic rate) may not hold. In the case of formalin injection, arteries, capillaries, and veins may all respond at a significant level due to the high metabolic rate in the local neurovascular network. SO2 may no longer be determined merely by a transient boost in HbO. A small decrease in Hb (~1.5 vs. ~10 µM from the spinal cord; Table 3, Figure 3) can counteract an increase in SO2 during Phase I (non-significance; Table 3). During Phase II, all blood vessels may be fully dilated and/or reach their physiological limits for transporting blood (i.e., a ceiling effect), where a change in either HbO or Hb cannot meet the demand of heavy energy consumption in the process of long-lasting pain (i.e., arterial blood influx << metabolic rate); only a decrease in SO2 can reflect the ongoing energy consumption.
Hindlimb injection of formalin is only an animal model of long-lasting pain. To the best of our knowledge, there are neither human data characterizing formalin-induced pain nor an equivalent type of persistent pain in humans that lasts for an hour or so. Thus, there is a scarcity of studies linking the common SO2 assessment in the human clinic [40,41] to the physiology of long-lasting pain. Nevertheless, both human and animal data using hyperbaric oxygen therapy suggest that a systemic increase in SO2 can alleviate chronic pain (note that chronic pain is associated with a neuropathological condition that often lasts for days and beyond) [42,43]. Together with our formalin results, the hyperbaric-oxygen-therapy-induced analgesic effect may indicate a causal relationship between persistent pain and a decreasing SO2 in associated regions of the central nervous system. That is, a decreasing SO2 may underlie an aberrant interaction between neurons and vascular activity; this aberrant interaction may play a role in pain processing.
Regional Characteristics of Light Scattering
Our measurements in the baseline period, as well as in response to stimuli, are in line with established findings in the field: white matter has a greater light-scattering effect than grey matter [44]. Because our probes were placed above the dorsal surface of the rat SI and above the dorsal column of the rat spinal cord, our baseline measurements (μs′: SI < spinal cord; Table 1) should mainly characterize the light-scattering difference between white matter and grey matter. In response to stimuli, the changes in light scattering appeared to be complex; both positive and negative light-scattering changes have been found in rat somatosensory cortex during forepaw stimulation [17]. Our results showed a decrease in light scattering only in a limited number of stimulation and measurement conditions (Figures 3, 5 and 6). Stimulation-induced light-scattering change appeared to be dependent on time, region, intensity, and modality.
Anesthesia
Pentobarbital anesthesia, which is known to depress cardiovascular activity and respiration [45], was used because an in vivo assessment in awake rats using our customized device is extremely challenging. The use of anesthesia completely ruled out such confounds as motion artifacts and behavioral states, and ensured precise and consistent locations of the scanned areas of interest over a long period of time. In short, the use of anesthesia allowed us to inspect unique patterns of response to nociceptive/noxious stimuli. In addition, as general anesthesia appears not to affect the early onset of evoked responses of the central nervous system [46,47], our results under pentobarbital anesthesia in rats may to some extent parallel genuine characteristics of the neurovascular response to somatosensory/nociceptive stimulation in the awake state. However, because general anesthesia alters arousal states [48], and pain perception consists of sensory, affective, and cognitive aspects, our results should be interpreted with caution in comparison to human data derived from neuroimaging techniques.
Conclusions
A gradual decrease in SO2 appeared to be a unique pattern for formalin-induced sustained pain, which may be a biomarker candidate for long-lasting pain. We also found changes in hemodynamic parameters and light scattering in rat spinal cord and SI during brief peripheral stimulation. Patterns of these changes (e.g., amplitude, duration, lateralization, increase or decrease) in a single parameter did not show a distinction between innocuous and noxious stimulation. Instead, patterns of these changes depended on time, region, stimulus intensity, and modality. We expect that a multiple-parameter strategy with careful consideration of combining factors such as regions and stimulus modality may be more effective in detecting pain using neuroimaging technologies.
"year": 2015,
"sha1": "849d5ce14ed629d09d4f73a312c1669735113bb9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/5/4/400/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "849d5ce14ed629d09d4f73a312c1669735113bb9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comprehensive Analysis of ATP6V1s Family Members in Renal Clear Cell Carcinoma With Prognostic Values
ATP6V1s participate in the biological process of transporting hydrogen ions and are associated with various cancers in expression and clinicopathological features, while their role in kidney renal clear cell carcinoma (KIRC) is unknown. We aimed to demonstrate the relationship between ATP6V1s and kidney renal clear cell carcinoma. This study investigated the expression and roles of ATP6V1s in KIRC using the Oncomine, The Cancer Genome Atlas, UALCAN, Human Protein Atlas, Clinical Proteomics Tumor Analysis Consortium, GeneMANIA, Tumor IMmune Estimation Resource, and GEPIA databases. Low mRNA and protein expression of ATP6V1s members was found to be significantly associated with clinical cancer stage, nodal metastasis status, and patient's gender in KIRC patients. Moreover, patients with lower mRNA expression of ATP6V1A, ATP6V1B2, ATP6V1C1, ATP6V1C2, ATP6V1D, ATP6V1E1, ATP6V1E2, ATP6V1F, ATP6V1G1, and ATP6V1H had shorter overall survival (OS). Taken together, these results indicate that ATP6V1s family members could be a potential target in the development of anti-KIRC therapeutics and an efficient marker of prognostic value in KIRC.
INTRODUCTION
Renal cell carcinoma (RCC) is one of the common urinary system tumors, accounting for about 2-3% of adult malignant tumors, and its incidence is increasing year by year (1). The most common pathological type is kidney renal clear cell carcinoma (KIRC), which accounts for about 70-80% of RCC (2,3). Compared with renal papillary cell carcinoma and renal chromophobe cell carcinoma, KIRC shows a poorer prognosis and is more prone to metastasis (4). When KIRC metastasizes, its outcome is less favorable (5). The curative treatment for early KIRC is partial or radical nephrectomy. However, about 30% of patients have a recurrence after surgery (6,7). Advanced KIRC can be treated with molecular targeted therapy and immunotherapy, but the long-term efficacy is still unsatisfactory (8). Because the onset is insidious and often without obvious symptoms, in 30% of patients the tumor has invaded adjacent tissues or metastasized by initial diagnosis. Therefore, screening effective biomarkers for the diagnosis, treatment and prognostic evaluation of KIRC is of great clinical significance.
Although various biomarkers have been considered to be related to KIRC, such as bone morphogenetic protein 8A (9) and Cripto-1 (10), their reliability remains controversial. Vacuolar adenosine triphosphatase (V-ATPase) is widely distributed in eukaryotic cells and transports H+ by hydrolyzing ATP. Studies have shown that V-ATPase affects tumor proliferation and invasion. V-ATPase consists of two parts: the cytoplasmic part V1 and the transmembrane part V0. V-ATPase V1 is also called ATP6V1 (11) and is composed of subunits A-H (12). The main function of ATP6V1 is to hydrolyze ATP to provide energy for transporting H+. Numerous studies have shown that ATP6V1 plays an important role in diseases such as tumors, kidney diseases, abnormal bone development, and diabetes (11,13,14). However, few studies about the relationship between ATP6V1 and KIRC have been reported so far.
In this study, we addressed this problem by identifying the transcriptional and protein expression patterns of ATP6V1s family members via The Cancer Genome Atlas (TCGA), Oncomine, Clinical Proteomics Tumor Analysis Consortium (CPTAC), and Human Protein Atlas (HPA) databases. Then we continued to predict Gene Ontology functions and biological pathways of ATP6V1s together with their 20 related genes. Furthermore, we analyzed clinical features and prognostic values of ATP6V1s family members in KIRC. The current study shows the potential biological functionality and prognostic value of ATP6V1s, which will be beneficial to the diagnosis and treatment of kidney renal clear cell carcinoma.
MATERIALS AND METHODS
Differentially Expressed ATP6V1s at the Transcriptional Level

Oncomine 4.5 (www.oncomine.org) is an integrated online oncogene microarray database and data-mining platform, which provides peer-reviewed, robust analysis methods and a powerful set of analysis functions to compute gene expression signatures (15). In our study, the mRNA expression of 8 different ATP6V1s family members in KIRC tissues and their corresponding adjacent normal control samples was analyzed using the Oncomine database. The data in our study were compared by the t-test, and the cut-off p-value and fold change were as follows: p-value < 0.0001, fold change = 2, gene rank = 10%.
The TCGA database (http://cancergenome.nih.gov/) is a comprehensive and coordinated project containing a gene expression database and corresponding clinical information (16). The gene expression of ATP6V1s in KIRC and the corresponding clinical information were downloaded from the TCGA database. UALCAN (http://ualcan.path.uab.edu) is a comprehensive and interactive web resource based on RNA-seq of 31 cancer types from the TCGA database (17). To determine the reliability of the differential expression data, the UALCAN database was selected for further verification. In this study, the mRNA expression of different ATP6V1s family members in KIRC tissues and normal tissues was analyzed in the TCGA-KIRC dataset. P < 0.001 was considered statistically significant.
Differentially Expressed ATP6V1s at Protein Level
In addition to the TCGA and UALCAN databases providing mRNA expression analysis of ATP6V1s family members, protein expression analysis of ATP6V1s family members was performed using data from the CPTAC Confirmatory/Discovery dataset for KIRC (18). The CPTAC is used for proteomics research on various tumors. In this work, the protein expression of different ATP6V1s family members in KIRC tissues and normal tissues was analyzed following the CPTAC reproducible workflow protocol. P < 0.001 was considered statistically significant.
HPA (http://www.proteinatlas.org) is a platform that contains representative immunohistochemistry-based protein expression data for nearly 20 common kinds of cancers (19). In this study, immunohistochemistry images of protein expression of different ATP6V1s family members in normal and KIRC samples were directly visualized via HPA.
Construction of Related Genes Network
GeneMANIA 3.6.0 (http://www.genemania.org) is a website for generating hypotheses about gene function using available genomics and proteomics data (20). In our study, the ATP6V1s family members were submitted to GeneMANIA to illustrate the functional association network among ATP6V1s and their related genes. The advanced statistical options were as follows: the maximum number of resultant attributes was 10, the maximum number of resultant genes was 20, and the weighting method was automatically selected.
GO Enrichment Analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathway Enrichment Analysis

DAVID (https://david.ncifcrf.gov/) is a functional enrichment analysis web tool that is continuously updated and effectively reduces data redundancy (21). GO functions and pathways of ATP6V1s and their 20 related genes were enriched by WebGestalt. The method of interest selected was Over-Representation Analysis (ORA). The GO functional enrichment was performed for the categories biological process noRedundant (BP), cellular component noRedundant (CC), and molecular function noRedundant (MF), and the pathway analysis was performed on the KEGG pathway.
Immune Infiltration Analysis of ATP6V1s
TIMER (https://cistrome.shinyapps.io/timer/) is a systematic database using microarray expression values for a comprehensive analysis of immune infiltrates across different cancer types (22). The immune infiltration estimation of ATP6V1s in KIRC was performed with TIMER. Scatterplots of ATP6V1s were generated to show the purity-corrected partial Spearman's rho value and statistical significance. A positive correlation with purity is expected for genes highly expressed in tumor cells, and the opposite is expected for genes highly expressed in the microenvironment.
Clinicopathological Analysis of ATP6V1s in KIRC
Furthermore, UALCAN was used to analyze the association between the mRNA or protein expression of ATP6V1s in KIRC tissues and clinicopathologic parameters such as individual cancer stage, nodal metastasis status, and patient's gender. The results could be obtained directly by selecting the clinicopathological grouping options integrated into the UALCAN database. In particular, only the tumor group could be divided into different clinicopathological groups. P < 0.001 was considered statistically significant.
Survival Analysis
In this study, the prognostic value of mRNA expression of distinct ATP6V1s in KIRC was analyzed by GEPIA (http://gepia.cancer-pku.cn/index.html) (23), which contains 9,736 tumor and 8,587 normal samples from TCGA and GTEx. Based on the median values of mRNA expression, patients with KIRC were divided into high and low expression groups. p < 0.05 was considered statistically significant.
Statistical Analysis
All statistical analyses and plots were produced using R (v.3.5.1). The t-test was used to analyze the expression of ATP6V1s. The one-way ANOVA test, Wilcoxon signed-rank test, and logistic regression were used to evaluate relationships between clinicopathologic features and the expression of ATP6V1s. Cox regression analyses and the Kaplan-Meier method were used to evaluate prognostic factors.
RESULTS

Low mRNA Expression of Different ATP6V1s Family Members in Patients With KIRC
The design flow chart of the whole analysis process of this study is shown in Figure 1.
In order to investigate the mRNA expression of different ATP6V1s family members in RCC patients, data from 20 types of cancers were analyzed and compared to normal tissues using the Oncomine database. As shown in Figure 2 and Table 1, the mRNA expressions of ATP6V1A, ATP6V1B1, ATP6V1D, ATP6V1F, ATP6V1G3, and ATP6V1H were significantly lower in RCC tissues. In the Beroukhim KIRC dataset, the mRNA expression of ATP6V1A, ATP6V1B1, and ATP6V1H was lower in RCC tissues than in normal tissues, with fold changes of 2.403, 13.706, and 2.276 (p = 4.75E-14, 1.03E-08, 6.78E-12), respectively. Higgins found a 3.226-fold decrease in mRNA expression of ATP6V1A in KIRC tissues. Yusenko, Gumz, and Jones observed significant down-regulation of ATP6V1B1 mRNA in KIRC tissues. Down-regulation of mRNA expression of ATP6V1G3 was also found in KIRC tissues. Gumz likewise found that mRNA expression of ATP6V1H in KIRC was down-regulated compared to normal tissues.
Next, the mRNA expression patterns of ATP6V1s family members were further measured using the TCGA database. Consistent with the Oncomine results, as shown in Figure 3, the mRNA expression of all ATP6V1 members was significantly down-regulated in KIRC tissues compared to normal samples. As shown in Figure 4, the protein expression of all ATP6V1s family members was lower in KIRC tissues than in normal tissues according to CPTAC. Similar to the CPTAC analysis, ATP6V1s proteins showed low expression in KIRC tissues according to HPA (Figure 5). Low protein expression of ATP6V1A, ATP6V1B1, ATP6V1B2, ATP6V1C1, ATP6V1C2, ATP6V1D, ATP6V1E1, ATP6V1F, ATP6V1G1, ATP6V1G2, ATP6V1G3, and ATP6V1H was found in KIRC tissues, while medium and high protein expression was observed in normal kidney tissues. Negative protein expression of ATP6V1E2 was observed in both normal kidney tissues and KIRC tissues (Figure 5). Taken together, our results showed that the protein expression of ATP6V1s family members was significantly reduced in patients with KIRC. Generally, all the results above showed that ATP6V1s were under-expressed in KIRC at both the transcriptional and protein levels.
Association of mRNA Expression of ATP6V1s Family Members With Immune Infiltration Level in KIRC
Then, we investigated whether the mRNA expression of the ATP6V1s family was correlated with immune infiltration levels in KIRC using the TIMER database. The results showed that the mRNA expressions of ATP6V1D, ATP6V1E2, and ATP6V1F were clearly related to tumor purity (Figures 7F, H, I). The correlation of the mRNA expression of ATP6V1A, ATP6V1B1, ATP6V1B2, ATP6V1C1, ATP6V1E1, ATP6V1E2, ATP6V1G2, ATP6V1G3, and ATP6V1H with B cells was statistically significant (Figure 7).
Association of mRNA and Protein Expression of ATP6V1s Family Members With Clinicopathological Features of KIRC Patients
Next, the relationship between the mRNA expression of ATP6V1s family members and clinicopathological parameters of KIRC patients, including individual cancer stage and nodal metastasis status, was analyzed using TCGA data. As shown in Figure 8, the mRNA expressions of ATP6V1s family members were markedly correlated with cancer stage, and patients at more advanced cancer stages tended to express lower mRNA levels of ATP6V1s. Compared to normal tissues, the mRNA expression of ATP6V1s family members was significantly lower in stages 1, 3, and 4, while there was no significant difference in mRNA expression between stage 2 and normal tissues; this may be due to the small sample size in stage 2 (only 87 samples). Then, we analyzed the relationship between the mRNA expression of ATP6V1s family members and the nodal metastasis status of KIRC patients. As shown in Figure 9, the mRNA expressions of ATP6V1A, ATP6V1B1, ATP6V1D, ATP6V1G2, and ATP6V1G3 were significantly related to nodal metastasis status (Figures 9A, B, F, K, L).

The relationship between the protein expression of ATP6V1s family members and the gender of KIRC patients was analyzed by CPTAC. The protein expressions of ATP6V1A, ATP6V1C1, ATP6V1C2, ATP6V1D, ATP6V1E1, and ATP6V1G2 in females were significantly higher than in males, while the difference for ATP6V1B1, ATP6V1B2, ATP6V1F, ATP6V1G1, ATP6V1G3, and ATP6V1H was not remarkable (Figure 10).
In brief, the mRNA expression of some ATP6V1s members was associated with clinicopathological parameters of KIRC patients.
Prognostic Value of mRNA Expression of ATP6V1s Family Members in KIRC Patients
The association between the mRNA expression of ATP6V1s family members and the prognosis of KIRC patients was analyzed with Kaplan-Meier plots. As shown in Figure 11, low mRNA expressions of ATP6V1D (HR(high) = 0.43, log-rank p = 1e-07), ATP6V1E1 (HR(high) = 0.41, log-rank p = 1.9e-08), ATP6V1E2 (HR(high) = 0.73, log-rank p = 0.045), ATP6V1F (HR(high) = 0.71, log-rank p = 0.024), ATP6V1G1 (HR(high) = 0.36, log-rank p = 2.7e-10), and ATP6V1H (HR(high) = 0.69, log-rank p = 0.017) were significantly associated with shorter OS of KIRC patients. However, the mRNA expression of ATP6V1B1 and ATP6V1G2 showed no correlation with the prognosis of KIRC patients (Figures 11B, K). These results indicate that the mRNA expressions of some ATP6V1s family members are significantly associated with the prognosis of KIRC patients and may be useful biomarkers for predicting KIRC patients' survival.
DISCUSSION
Many cytokines, hormones, and proteins are involved in the development and progression of KIRC. ATP6V1s, as components of V-ATPase, have been shown to participate in the development of multiple tumors, including KIRC. Even so, the role of ATP6V1s family members in the prognosis of KIRC is still unclear. In this study, we analyzed the expression and prognostic value of different ATP6V1s family members in KIRC. Our results showed that the mRNA expressions of ATP6V1A, ATP6V1B1, ATP6V1D, ATP6V1F, ATP6V1G3, and ATP6V1H were significantly lower in KIRC tissues compared to normal tissues in the Oncomine database, and the mRNA expressions of all ATP6V1s family members were significantly down-regulated in KIRC tissues in the TCGA database. Besides, by analyzing the protein expression of ATP6V1s family members in KIRC with HPA, we found that the protein expressions of ATP6V1A, ATP6V1B1, ATP6V1B2, ATP6V1C1, ATP6V1C2, ATP6V1D, ATP6V1E1, ATP6V1F, ATP6V1G1, ATP6V1G2, ATP6V1G3, and ATP6V1H were lower than in normal tissues, and similar results were found by CPTAC.

Abnormal ATPase subunit expression and dysregulated ATPase activity are closely related to the occurrence, proliferation, and invasion of various tumors (11,12,(24)(25)(26). Numerous studies have shown that ATP6V1s family members are abnormally expressed in tumor tissues or tumor cell lines. Over-expression of ATP6V1C1 has been found in oral squamous cell carcinoma (27)(28)(29). In human pancreatic cancer, V-ATPase is significantly overexpressed (30). The expression of ATP6V1A in gastric cancer tissue is significantly higher than that in normal tissue, and its expression is related to histological grade, lymph node metastasis, and vascular invasion; knocking down the expression of ATP6V1A in vitro inhibits the proliferation and invasion ability of gastric cancer cells (31). ATPase promotes the formation of a slightly alkaline microenvironment around tumor cells, which is beneficial to tumor cell proliferation (32). ATP6V1C1 promotes the growth of breast cancer by activating the mTORC1 pathway and promotes bone metastasis by activating V-ATPase (33). ATP6V1C1 may promote breast cancer growth and bone metastasis by regulating lysosomal V-ATPase activity in vivo and in vitro (34). Down-regulating the expression of ATP6V0C and ATP6V1A, which inhibits the activity of V-ATPase, reduces the invasiveness of liver cancer cells (35). Studies have shown that ATP6V1C1 could be used as a marker of diagnosis and prognosis in oral squamous cell carcinoma (29). In glioblastoma, high expression of ATP6V1G1 is associated with poor prognosis (36).
Obviously, there were some limitations to this study. First, all the data analyzed were based on online databases in silico; further in vivo and in vitro studies are required to verify these findings. Second, the underlying mechanisms of distinct ATP6V1s in KIRC are still unknown, and further experiments are warranted to reveal the detailed mechanisms linking ATP6V1s and KIRC. Besides, this study was only a retrospective study, and further detailed prospective studies are needed to support these results.
In conclusion, our results showed that under-expression of ATP6V1s members in KIRC was found across distinct public databases. Moreover, ATP6V1s were significantly associated with individual cancer stage, nodal metastasis status, and patient's gender. Furthermore, high expressions of ATP6V1s were significantly related to longer OS in KIRC patients. In a word, ATP6V1s family members could be a potential target in the development of anti-KIRC therapeutics and an efficient marker of prognostic value in KIRC.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.
AUTHOR CONTRIBUTIONS
XL and HL analyzed the data. CY and LL suggested online tools. SD and ML designed the project, selected the analyzed results, and wrote the paper. All authors contributed to the article and approved the submitted version.
FUNDING
The present study was supported by the National Natural Science Foundation of China (Grant No. 81200521).
ACKNOWLEDGMENTS
We thank Lao Xinyuan for his guidance and help in our scientific research work.
"year": 2020,
"sha1": "d57e9ce623402dc697ec88e0b9b107951bf9d8e2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2020.567970/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d57e9ce623402dc697ec88e0b9b107951bf9d8e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
SN-Engine, a Scale-free Geometric Modelling Environment
We present a new scale-free geometric modelling environment designed by the author of the paper. It allows one to consistently treat geometric objects of arbitrary size and offers extensive analytic and computational support for visualization of both real and artificial sceneries.
INTRODUCTION
Geometric modelling of real-world and imaginary objects is an important task that is ubiquitous in modern computer science. Today, geometric modelling environments (GMEs) are widely used in cartography [5], architecture [26], geology [24], hydrology [20], and astronomy [27]. Apart from being used for modelling, importing and storing real-world data, GMEs are crucially important in the computer games industry [12]. Creating a detailed model of a real-world or imaginary environment is a task of great computational complexity. Modelling various physical phenomena like gravity, atmospheric light scattering, terrain weathering, and erosion processes requires full use of modern computer algebra algorithms as well as their efficient implementation. Finding exact and numeric solutions to differential, difference, and algebraic equations by means of computer algebra methods (see [7] and the references therein) is at the core of the crucial algorithms of modern geometric modelling environments.
There exists a wide variety of geometric modelling environments, both universal and highly specialized. They include three-dimensional planetariums like SpaceEngine or Celestia, the virtual Earth engine Outerra, the procedural scenery generator Terragen, 3D computer graphics editors such as Autodesk 3ds Max, Blender, and Houdini, and 3D game engines like Unreal Engine 4, CryEngine 3, and Unity.
The present paper describes a new geometric modelling environment, SN-Engine, designed and implemented by the author of the paper. The main advantage of this GME is its capacity to treat geometric objects of arbitrary size, from the extragalactic down to the microscopic scale, consistently and in a computationally efficient way.
This geometric modelling environment is freeware implemented in the C# programming language. Scripts, sample video files and high-resolution images created with SN-Engine are publicly available at snengine.tumblr.com.
STATE-OF-THE-ART IN GEOMETRIC MODELLING: CAPABILITIES AND LIMITATIONS
The geometric modelling environment presented in the paper shares many properties and functions with other GMEs. These include procedural generation, instruments and algorithms of 3D rendering, internal elements of system architecture, file and in-memory data formats, etc. In the next tables we compare SN-Engine with other existing systems. Despite the extensive capabilities of modern geometric modelling systems, all of them have limitations in terms of world structure, engine modification, user-world interaction, and licensing. The presented modelling system SN-Engine is freeware which, unlike the other software listed above, is endowed with tools for procedural generation of objects of arbitrary scale.
The presented geometric modelling system unifies a procedural approach to content generation with game engine technologies to create a fully flexible multi-client platform for creating and experiencing worlds of arbitrary scale. The system can be used for solving various tasks like demonstration, design, recreation, and education. Its functions include generation of new content, transformation of existing content, and visualization. In terms of organizational structure, the presented geometric modelling system can be used in autonomous, local-networked or global-networked mode.
The engine provides flexibility in terms of world and entity modification, creation, and generation. Its main features include:
• Support of arbitrary scale worlds, from super galactic to microscopic level;
• Client-server world and logic synchronization;
• Scriptable game logic;
• Procedural generation [12,14,17,23] of any world content ranging from textures and models to full world generation;
• External data import: e.g. height maps [9,19], OpenStreetMap [18] data, textures, models, etc.;
• Integrated computational physics module;
• World state system for saving world data including any runtime changes;
• Script system controlling every world related function of the engine;
• Fully extendable and modifiable library of objects including galaxies, planets, lights, static and dynamic objects, items, and others;
• Support of player controlled or computer controlled characters.
The base engine realization contains a procedurally generated universe with a standard node hierarchy (Fig. 1) ranging from galaxies to planet surfaces and their objects. The hierarchy is displayed in columns from left to right and from top to bottom: galaxies (a), spiral galaxy stars and dust (b), a star system, planets and their orbits (enabled for clarity) (c), an earth-like planet (d), planet surface seen from a low orbit (e), planet surface from ground level (f). In addition, other nodal hierarchies can be created with custom nodes by means of procedural generation algorithms.
A planet has a touchable surface endowed with physical properties which satisfy the laws of the physics of solids. Players can walk on a planet's surface and interact with other objects.
IMPLEMENTATION OF THE GEOMETRIC MODELLING ENGINE
The engine is implemented in C# on .NET Framework and uses Lua [13] as the main scripting language. The engine consists of core systems and modules. The core systems are as follows:
• Task system managing task creation, processing, linking and balance;
• IO system or virtual file system, providing synchronous and asynchronous asset loading, reloading, and dependency management;
• Object system or global object management system, base for Assets, Nodes, Components, Events, and others;
• Event system that manages subscription and invocation of functions;
• Physics system handling physical interactions of nodes with instances of physics entities;
• Sound system supporting playback of sound effects and ambience in 3D space;
• Horizon system providing frame of reference update and loading of nodes;
• Input system or user input engine interface;
• Net system that enables client-server event relay;
• Rendering system which performs visualization of nodes with an attached cDrawable component;
• Scripting system providing safe and real-time game logic programming in Lua.
The list of modules comprises the following:
• Profile which stores user data, settings, saves, etc.;
• Add-on or user data package manager;
• GUI that manages user interface objects and their rendering;
• ModelBuilder providing classes and functions for dynamic model construction;
• ProcGen or operation based procedural generation system;
• sGenerator or script based procedural generation system;
• Flow, the node based visual programming language;
• Forms, the dynamic index database of user defined data assets;
• VR, the virtual reality Oculus HMD interface;
• Web module that provides the Awesomium browser interface, rendering to texture, and control functions.
Data modules include primary asset modules which contain methods for asset manipulation in JSON format [10], Material, StaticModel, Particles, Sprite font, SurfaceDataStorage, Package, Texture, Sound, and secondary format modules that are used in data import stage: SMD, FBX, BSP, OSM, MCV.
Hierarchical Nodal System of the Geometric Modelling Engine
The core component of the engine world structure is a node. Every node follows this set of rules:
• A node can have space bounds with box, sphere or compound shape;
• A node can contain any finite number of other nodes and one parent node, forming a tree graph structure;
• A node has an absolute size variable measured in meters per node unit, so in the case of a node with spherical bounds its radius is equal to the absolute size of the node;
• A node has position, rotation and scale variables which define its location and scale in its parent node;
• A node has a seed variable used for procedural generation;
• Nodes can have components, custom variables and event listeners;
• A node without a parent node is the world node.
A sketch of a data structure following these rules is given below.
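As an illustration only, the rules above map naturally onto a class of the following shape; the field names and types are our assumptions, not the engine's actual declarations.

using System.Collections.Generic;
using SharpDX;

enum BoundsShape { None, Box, Sphere, Compound }

class Node
{
    public Node Parent;                          // null => this is the world node
    public List<Node> Children = new List<Node>();

    public BoundsShape Bounds = BoundsShape.Sphere;
    public double AbsoluteSize;                  // meters per node unit; for a
                                                 // spherical node, its radius
    public Vector3 Position;                     // location within the parent node
    public Quaternion Rotation;
    public Vector3 Scale;

    public long Seed;                            // used for procedural generation
    public List<sComponent> Components = new List<sComponent>();
    public Dictionary<string, object> Variables = new Dictionary<string, object>();

    public bool IsWorldNode => Parent == null;
}

abstract class sComponent { }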
A possible node type hierarchy of a client camera flying two meters above the planet surface is as follows: 0) world sol; 1) spacecluster; 2) galaxy; 3) starsystem; 4) star; 5) planet; 6) planet surface; 7) planet surface node; 8) camera. All types in this hierarchy, with the exception for the camera type, are defined in script files.
The location variables and node hierarchy allows one to build relative transformation matrices and their inverse matrices for each parent node in its hierarchy. This in turn allows one to obtain relative transformations for any node within the same world.
The precision of the 32-bit floating point numbers that are widely used in computer graphics is limited, and the corresponding precision distribution is not linear. Any number with 7 or more significant decimal digits is subject to data loss during mathematical operations. There are several workaround methods to overcome this limitation, and each method needs to calculate object positions in relation to the camera at runtime. These methods include the following:
• Storing object positions as 64-bit floats, which raises the overall precision to 15 decimal digits but still has the same precision distribution limitations;
• Storing object positions as 32-bit or 64-bit integers, since integers have a linear precision distribution;
• Storing object positions in relation to a specialized group object. This method is used in most geometric modelling systems;
• Storing relative local object positions with respect to their parent object. This method requires hierarchical organization of objects.
We use the last method to solve this problem. This is done in three steps. The first two steps are performed when creating or updating a node while the third is performed at the rendering stage.
The first step is to compute the node world matrix, which consists of the node scale, rotation and position:

W = S(S) · R(R) · T(P).

Here W is the node world matrix, S = (S_x, S_y, S_z) is the node scaling vector, R = (R_x, R_y, R_z, R_w) is the node rotation quaternion, P = (P_x, P_y, P_z) is the node position vector, and S(·), R(·), T(·) denote the corresponding scaling, rotation and translation matrices.
The second step is to compute the hierarchical matrices using the world matrices obtained at the previous step and the parent nodes:

H_i = S(S_i) · W_i · H_(i+1),    H_n = Id_4.

Here
• Id_4 is the identity matrix of size 4;
• i is the node level in the parent hierarchy, where 0 = node, 1 = parent, 2 = parent of parent, . . . , n = world node;
• H_i is the i-th level node world transformation matrix;
• S_i is the i-th level parent node absolute node size in meters;
• W_i is the i-th level parent node world matrix.
Finally, the third step is the local world matrix computation (Fig. 2, where (a) is the current node, (b) is the target node, (c) is the local world matrix and (d) is the closest common ancestor for the current and the target nodes). This matrix is computed as

LW = H_(N→T) · (H_(C→T))^(-1),

where H_(X→T) denotes the hierarchical transformation of node X accumulated up to the common ancestor T. Here LW is the local world matrix of node N at node C, T (the top node) is the nearest node in the hierarchy to both the current and the target nodes, C is the current node, N is the target node, and the hierarchy levels are counted starting from the world node.
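The following sketch puts the three steps together in code, building on the Node fields sketched above and on SharpDX math types; the per-level size scaling and the multiplication order are our reading of the text, not verified engine internals.

using SharpDX;

static class NodeTransforms
{
    // Step 1: W = S(S) * R(R) * T(P).
    public static Matrix WorldMatrix(Node n) =>
        Matrix.Scaling(n.Scale) *
        Matrix.RotationQuaternion(n.Rotation) *
        Matrix.Translation(n.Position);

    // Step 2: hierarchical matrix accumulated from node 'n' up to (and
    // excluding) the ancestor 'top'; at the top the accumulator is Id4.
    public static Matrix Hierarchical(Node n, Node top)
    {
        Matrix h = Matrix.Identity;
        for (Node k = n; k != top; k = k.Parent)
        {
            // Convert child units into parent units via absolute node sizes
            // (assumed interpretation of the S_i factor above).
            float s = (float)(k.AbsoluteSize / k.Parent.AbsoluteSize);
            h = h * Matrix.Scaling(s) * WorldMatrix(k);
        }
        return h;
    }

    // Step 3: local world matrix of target node N at current node C via
    // their closest common ancestor T: LW = H(N->T) * H(C->T)^-1.
    public static Matrix LocalWorld(Node N, Node C, Node T) =>
        Hierarchical(N, T) * Matrix.Invert(Hierarchical(C, T));
}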
System of the Components
The engine nodes use a partial implementation of the Entity component system [4] pattern, with the difference that components can have their own methods. The nodes are the entities in this pattern. Components define how a node interacts with other nodes, how it is rendered and how it functions. Core engine components (sComponent) include the following: Drawable object components (cDrawable): cModel for model instances, cParticleSystem for particle system instances, cSkybox for skybox instances, cSurface for planet surface (terrain and water) instances, cSpriteText for text sprites in 3D space, cAtmosphere for atmospheric fog, cVolume for volumetric texture renderer instances.
Update phase components (cUpdater): cOrbit which sets node position using orbital parameters and current time, cConstantRotation that sets node rotation from current time.
Physics components (including interfaces for the physics engine [3]): cStaticCollision or collision mesh with infinite mass, cPhysicsSpace for physical space which contains physics objects, cPhysicsObject for physics objects with finite mass and volume, cPhysicsMesh or concave triangulated physics mesh, cPhysicsCompound or compound physics object, cPhysicsActorHull or physical controller for actors. Content generation components: cRenderer components that perform render to texture operations, cCamera which renders from a camera node, cCubemap that renders a 6-sided cubemap texture, cHeightmap which draws to a planetary height map texture, cInterface which renders a hierarchy of panel objects to a GUI texture, cShadow which draws the scene to a cascading shadow map [8] texture, cProcedural or procedural node generation component.
Other component types: cLightSource, a point light which is used for illumination, cNavigation which generates and contains the navigational map for AI actors, cPartition or hierarchical space partitioning (part of the HorizonSystem), cPartition2D or quad tree partitioning (4 subspaces on 2 axes), cPartition3D or octree partitioning (8 subspaces on 3 axes), cSurfaceMod or surface data (height, temperature, etc.) modifier (cSurface), cWebInterface or web interface data container.
Orbital Component

The orbital component uses Keplerian orbits to set up its node position. When an orbit is created, we calculate the average orbital speed V_orbital in terms of the masses and the semi-major axis length using the formula

V_orbital = sqrt(G (M_node + M_parent) / a).

Here G is the gravitational constant, a is the semi-major axis, M_node is the orbiting node mass, M_parent is the parent node mass.
Subsequently, in the update event we use Algorithm 1 to calculate the position vector along the shifted ellipse and then rotate it around the origin by the argument of periapsis about the Y axis, the inclination about the X axis, and the ascending longitude about the Y axis. The algorithm first normalizes the orbital phase: t' = mod(t · V_orbital, 2π) − π. Here a is the semi-major axis, b is the semi-minor axis, e is the eccentricity, p is the argument of periapsis, i is the inclination, l is the ascending longitude, t is time, and t' is the normalized orbital phase. The value ε = 0.001 in parent node units is used as a precision limiter.
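As an illustration of what Algorithm 1 computes, the sketch below follows the description above; the fixed-point iteration used to reach the ε precision and the exact rotation conventions are our assumptions, since only the phase-normalization step of the original algorithm survives in the text.

using System;

static class Orbit
{
    const double Eps = 0.001; // precision limiter, in parent node units

    // a, b: semi-major/minor axes; e: eccentricity; p: argument of periapsis;
    // i: inclination; l: ascending longitude; vOrbital: average orbital
    // speed; t: time. Returns a position with the parent body at the origin.
    public static (double X, double Y, double Z) GetOrbitalPosition(
        double a, double b, double e, double p, double i, double l,
        double vOrbital, double t)
    {
        // Normalized orbital phase, as in the text above.
        double m = ((t * vOrbital) % (2 * Math.PI)) - Math.PI;

        // Solve Kepler's equation m = E - e*sin(E) by fixed-point iteration
        // until the positional change drops below the Eps limiter (assumed).
        double E = m, prev;
        do { prev = E; E = m + e * Math.Sin(E); }
        while (Math.Abs((E - prev) * a) > Eps);

        // Point on the ellipse, shifted so the focus (the parent) sits at 0.
        double x = a * Math.Cos(E) - a * e;
        double z = b * Math.Sin(E);
        double y = 0;

        // Rotations: periapsis argument (Y), inclination (X), longitude (Y).
        (x, z) = RotY(x, z, p);
        (y, z) = RotX(y, z, i);
        (x, z) = RotY(x, z, l);
        return (x, y, z);
    }

    static (double, double) RotY(double x, double z, double ang) =>
        (x * Math.Cos(ang) + z * Math.Sin(ang),
         -x * Math.Sin(ang) + z * Math.Cos(ang));

    static (double, double) RotX(double y, double z, double ang) =>
        (y * Math.Cos(ang) - z * Math.Sin(ang),
         y * Math.Sin(ang) + z * Math.Cos(ang));
}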
The Horizon System
The horizon system provides dynamic recalculation of coordinate systems, space to space node transfer and the space partition update. The system checks all nodes with enabled space transfer flag against the bounds of their parent node and every other node in it. When a node is outside of its parent bounds or inside of any other node bounds, the system recalculates its position, rotation, velocity, and angular velocity and then changes its parent node.
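A hypothetical sketch of this transfer check, reusing the Node fields sketched earlier, might look as follows; the unit-sphere bounds test and the way the transforms are converted are illustrative assumptions, not the engine's actual code.

using SharpDX;

static class HorizonSystemSketch
{
    // Checks one node with the space-transfer flag enabled; spherical bounds
    // of radius 1 in node units are assumed here.
    public static void Update(Node node)
    {
        if (node.Parent == null) return;              // world node
        if (node.Position.Length() <= 1.0f) return;   // still inside parent

        Node grand = node.Parent.Parent;
        if (grand == null) return;                    // parent is the world

        // Re-express position and rotation in the grandparent's space, then
        // re-parent; velocity and angular velocity would be converted the
        // same way in the engine.
        Matrix m = NodeTransforms.WorldMatrix(node.Parent);
        node.Position = Vector3.TransformCoordinate(node.Position, m);
        node.Rotation = node.Rotation * node.Parent.Rotation;

        node.Parent.Children.Remove(node);
        node.Parent = grand;
        grand.Children.Add(node);
    }
}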
The partition component generates a dynamic tree structure from partition nodes and calls events on its host node. This allows one to create procedural generators for three-dimensional (octree) objects, such as galaxy and galactic cluster generation, and two-dimensional (quad tree) objects, such as planet surfaces. It can also be used for any other type of generation with scripting.
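As an illustration of how a partition component can drive generation, here is a minimal Python sketch of a quad tree node that subdivides toward a point of interest and fires a creation event for each new subspace (all names are hypothetical, not the engine's API):

```python
class PartitionNode:
    """Quad tree partition: each subdivision yields 4 subspaces on 2 axes.
    An octree variant would yield 8 subspaces on 3 axes."""
    def __init__(self, x, y, size, depth, on_create):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = None
        self.on_create = on_create
        on_create(self)  # event to the host node, e.g. spawn a surface patch

    def subdivide(self):
        if self.children is not None:
            return
        half = self.size / 2.0
        self.children = [
            PartitionNode(self.x + dx * half, self.y + dy * half,
                          half, self.depth + 1, self.on_create)
            for dx in (0, 1) for dy in (0, 1)
        ]

    def refine(self, px, py, max_depth):
        """Subdivide toward a point of interest (e.g. the camera)."""
        if self.depth >= max_depth:
            return
        if self.x <= px < self.x + self.size and self.y <= py < self.y + self.size:
            self.subdivide()
            for child in self.children:
                child.refine(px, py, max_depth)

root = PartitionNode(0.0, 0.0, 1024.0, 0,
                     on_create=lambda n: print("generate patch", n.x, n.y, n.size))
root.refine(px=100.0, py=200.0, max_depth=3)
```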
Procedural Generation
Procedural generation is a crucial part of the engine. It allows real-time content generation from pre-defined patterns and scripts. Different methods of procedural content generation are implemented in the engine, including:
• Node generation with scripted procedural generators in Lua or C#, e.g. cluster and galaxy generation (C#), star system generation (Lua);
• Model and node generation with the operation based generation system, e.g. models of objects and buildings, generated building interior world nodes and others;
• Texture generation from Lua scripts or JSON files;
• Texture generation from cCamera and cCubemap node components;
• Surface data generation [11,16,17,25] from the cHeightmap node component, noise functions [2] and JSON parameters.
Operation Based Procedural Generation System.
This system allows one to define dynamic model structures and to build complex 3D models with several levels. Models are defined in JSON-like files or Lua structures by lists of consecutive operations. These operations use named groups of 3D primitives along with parameters as input data. Fig. 3 illustrates the use of procedural generation in building interiors. The primitive types comprise the following: a point, a basic structure which contains a position in the local space; a path, a set of points with a loop flag; and a surface or polygon, which contains an edge (path) and other surface properties such as material and uv-matrix. The procedural generation system includes a total of 56 different operation types, divided into groups as follows: "Create" operations, which create sets of primitives from data; "Extend" operations, which create new primitives from existing ones; "Modify" operations, which alter existing primitives; "Select" operations, which filter existing primitives or return their data; and "Utility" operations, which provide branching in the generation algorithm.
An example of the use of an operation is given by
{ type: "inset", from: "bt base", out: ["bt base", "bt sides"], extrude: 0.4, amount: 0.5 }.
It applies the "inset" operation (which insets a polygon edge by the "amount" value and shifts it by the "extrude" value) to each primitive in the group "bt base". It outputs the central polygon to the "bt base" group and the edge polygons to the "bt sides" group.
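A plausible simplified implementation of such an inset step is sketched below in Python; it insets toward the polygon centroid rather than offsetting each edge exactly, and all names are illustrative rather than the engine's actual operation code:

```python
def inset(groups, from_group, out, amount, extrude=0.0):
    """Apply a simplified 'inset' to every polygon in `from_group`:
    shrink the edge loop toward its centroid by `amount` and lift the
    inner loop by `extrude` along +Z. Central polygons go to out[0],
    the ring of side quads to out[1]."""
    centers, sides = [], []
    for poly in groups.pop(from_group):        # poly: list of (x, y, z)
        n = len(poly)
        cx = sum(p[0] for p in poly) / n
        cy = sum(p[1] for p in poly) / n
        cz = sum(p[2] for p in poly) / n
        inner = [(x + (cx - x) * amount,
                  y + (cy - y) * amount,
                  z + (cz - z) * amount + extrude) for (x, y, z) in poly]
        centers.append(inner)
        for i in range(n):                     # one side quad per edge
            j = (i + 1) % n
            sides.append([poly[i], poly[j], inner[j], inner[i]])
    groups.setdefault(out[0], []).extend(centers)
    groups.setdefault(out[1], []).extend(sides)

groups = {"bt base": [[(0, 0, 0), (4, 0, 0), (4, 4, 0), (0, 4, 0)]]}
inset(groups, "bt base", out=["bt base", "bt sides"], amount=0.5, extrude=0.4)
print(len(groups["bt base"]), "base polygon,", len(groups["bt sides"]), "side quads")
```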
The Rendering System
The rendering system uses the SharpDX [22] library, an open-source managed .NET wrapper of the DirectX API. The engine works on the DX10 and DX11 APIs.
The system uses a combination of forward and deferred rendering techniques [1,6] to draw objects and effects. The rendering process is the most demanding in terms of computational complexity and consists of several stages. In the preprocessing stage, all texture generation is performed: the rendering system processes the incoming draw requests from cRenderer components and fires their respective events. Then, in the main stage, the system performs successive render calls to cDrawable components, which are divided by render groups and layers. Finally, at the post-processing stage, the system combines the results of the previous draw calls into a single texture and applies screen effects, including screen space local reflections [21] and screen space ambient occlusion [15]. The atmospheric shader is a GPU program which manages the rendering of a "fog" layer. This shader is a work in progress. One of the key algorithms implemented in this program is as follows.
Algorithm 2: Atmosphere glow and fog shader
Input: shader parameters C_atmosphere, C_i, C_sun, C_l, M, N_i, N_planet, H, W_hrz, W_planet, W_atmosphere, n, and the back buffer texture t_back.
Output: pixel colour and alpha values.
Function GetFogColour(N, W_world, N_planet, W_planet, W_hrz, ...)
// 1. The horizon line and gradient density calculation
Here C_atmosphere is the atmosphere colour, C_i is the i-th star colour, C_sun is the star colour, C_l is the star direction, M is the node scale, N is the surface normal, N_i is the i-th star direction, N_planet is the direction to the planet centre, H is the camera distance to the planet surface towards the centre of the planet, W_hrz is the distance to the horizon, W_planet is the distance to the planet centre, W_world is the depth of the current pixel, W_atmosphere is the atmosphere width, n is the star count, t_back is the back buffer pixel colour, and lerp(a, b, d) is the linear interpolation of two vectors a and b based on the weight d.
The resulting image is blended with the current view. The numerical coefficients in the above formulas have been selected on the basis of visual perception.
The Scripting System.
The scripting system uses Lua [13] as the scripting language. Scripts in Lua describe most of the game logic. Types of scripts include entity definitions; player controller definitions (free camera controller, actor controller, etc.); GUI widget definitions; autorun scripts; Lua modules (definitions for user class types, structure types and libraries); procedural generator definitions; and others. At the time of writing, there were 251 Lua script files in the base engine content directory, with a total size of 904 KB.
The Networking System.
The engine network system is based on a client-server model and uses packets to transfer data. Packets are sent and received asynchronously; incoming packets are placed into a queue which is then processed from the main thread on both the server and the clients.
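A minimal Python sketch of that receive path (illustrative only; the engine itself is a .NET application): a background thread stands in for asynchronous socket I/O, and the main loop drains the queue once per frame.

```python
import queue
import threading
import time

incoming = queue.Queue()  # thread-safe packet queue

def receive_loop():
    """Background thread: 'receives' packets asynchronously."""
    for seq in range(5):
        time.sleep(0.01)                 # stands in for socket I/O
        incoming.put({"seq": seq, "payload": b"state"})

def process_incoming():
    """Main thread, called once per frame: drain the queue."""
    while True:
        try:
            packet = incoming.get_nowait()
        except queue.Empty:
            break
        print("handling packet", packet["seq"])

threading.Thread(target=receive_loop, daemon=True).start()
for _ in range(10):                      # stands in for the main loop
    process_incoming()
    time.sleep(0.02)
```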
LIMITATIONS OF THE PRESENTED GEOMETRIC MODELLING ENVIRONMENT
There exist certain limitations for each of the main systems of SN-Engine. They are summarized in the next table.
DISCUSSION
The new geometric modelling system SN-Engine combines procedural generation algorithms, an arbitrary-scale nodal system, and an extensive Lua scripting system to construct and visualize complex scenes. High resolution screenshots and video are available on the system website at http://snengine.tumblr.com/ | 2020-06-24T01:01:11.648Z | 2020-06-22T00:00:00.000 | {
"year": 2020,
"sha1": "9aaf42a39cd5fa76cd9f9dfd9d6debc67db99ded",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9aaf42a39cd5fa76cd9f9dfd9d6debc67db99ded",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
204806091 | pes2o/s2orc | v3-fos-license | Reconstructing continuous distributions of 3D protein structure from cryo-EM images
Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structure of proteins and other macromolecular complexes at near-atomic resolution. In single particle cryo-EM, the central problem is to reconstruct the three-dimensional structure of a macromolecule from $10^{4-7}$ noisy and randomly oriented two-dimensional projections. However, the imaged protein complexes may exhibit structural variability, which complicates reconstruction and is typically addressed using discrete clustering approaches that fail to capture the full range of protein dynamics. Here, we introduce a novel method for cryo-EM reconstruction that extends naturally to modeling continuous generative factors of structural heterogeneity. This method encodes structures in Fourier space using coordinate-based deep neural networks, and trains these networks from unlabeled 2D cryo-EM images by combining exact inference over image orientation with variational inference for structural heterogeneity. We demonstrate that the proposed method, termed cryoDRGN, can perform ab initio reconstruction of 3D protein complexes from simulated and real 2D cryo-EM image data. To our knowledge, cryoDRGN is the first neural network-based approach for cryo-EM reconstruction and the first end-to-end method for directly reconstructing continuous ensembles of protein structures from cryo-EM images.
INTRODUCTION
Cryo-electron microscopy (cryo-EM) is a Nobel Prize-winning technique capable of determining the structure of proteins and macromolecular complexes at near-atomic resolution. In a single particle cryo-EM experiment, a purified solution of the target protein or biomolecular complex is frozen in a thin layer of vitreous ice and imaged at sub-nanometer resolution using an electron microscope. After initial preprocessing and segmentation of the raw data, the dataset typically comprises $10^{4-7}$ noisy projection images. Each image contains a separate instance of the molecule, recorded as the molecule's electron density integrated along the imaging axis (Figure 1). A major bottleneck in cryo-EM structure determination is the computational task of 3D reconstruction, where the goal is to solve the inverse problem of learning the structure, i.e. the 3D electron density volume, which gave rise to the projection images. Unlike classic tomographic reconstruction (e.g. MRI), cryo-EM reconstruction is complicated by the unknown orientation of each copy of the molecule in the ice. Furthermore, cryo-EM reconstruction algorithms must handle challenges such as an extremely low signal to noise ratio (SNR), unknown in-plane translations, imperfect signal transfer due to microscope optics, and discretization of the measurements. Despite these challenges, continuing advances in hardware and software have enabled structure determination at near-atomic resolution for rigid proteins (Kühlbrandt (2014); Scheres (2012b); Renaud et al. (2018); Li et al. (2013)).
Many proteins and other biomolecules are intrinsically flexible and undergo large conformational changes to perform their function. Since each cryo-EM image contains a unique instance of the molecule of interest, cryo-EM has the potential to resolve structural heterogeneity, which is experimentally infeasible with other structural biology techniques such as X-ray crystallography. However, this heterogeneity poses a substantial challenge for reconstruction as each image is no longer of the same structure. Traditional reconstruction algorithms address heterogeneity with discrete clustering approaches.

Figure 1: Cryo-EM reconstruction algorithms tackle the inverse problem of determining the 3D electron density volume from $10^{4-7}$ noisy images. Each image is a noisy projection of a unique instance of the molecule suspended in ice at a random orientation. Algorithms must jointly learn the volume and the orientation of each particle image. Example image from Wong et al. (2014).
Here, we introduce a neural network-based reconstruction algorithm that learns a continuous low-dimensional manifold over a protein's conformational states from unlabeled 2D cryo-EM images. We present an end-to-end learning framework for a generative model over 3D volumes using an image encoder-volume decoder neural network architecture. Extending spatial-VAE, we formulate our decoder as a function of 3D Cartesian coordinates and unconstrained latent variables representing factors of image variation that we expect to result from protein structural heterogeneity (Bepler et al. (2019)). All inference is performed in Fourier space, which allows us to efficiently relate 2D projections to 3D volumes via the Fourier slice theorem. By formulating our decoder as a function of Cartesian coordinates, we can explicitly model the imaging operation to disentangle the orientation of the molecule during imaging from intrinsic protein structural heterogeneity. Our learning framework avoids errant local minima in image orientation by optimizing with exact inference over a discretization of SO(3) × R 2 using a branch and bound algorithm. The unconstrained latent variables are trained in the standard variational autoencoder approach. We present results on both real and simulated cryo-EM data.
IMAGE FORMATION MODEL
Cryo-EM aims to recover a structure of interest $V: \mathbb{R}^3 \to \mathbb{R}$, consisting of an electron density at each point in space, based on a collection of noisy images $X_1, ..., X_N$ produced by projecting (i.e. integrating) the volume in an unknown orientation along the imaging axis. Formally, the generation of image $X$ can be modeled as:
$$X(r_x, r_y) = g * \int_{\mathbb{R}} V\big(R^T r + t\big)\, dr_z + \mathrm{noise}, \qquad r = (r_x, r_y, r_z)^T,$$
where $V$ is the electron density (volume), $R \in SO(3)$, the 3D rotation group, is an unknown orientation of the volume, and $t = (t_x, t_y, 0)$ is an unknown in-plane translation, corresponding to imperfect centering of the volume within the image. The image signal is convolved with $g$, the point spread function of the microscope, before being corrupted with frequency-dependent noise and registered on a discrete grid of size $D \times D$, where $D$ is the size of the image along one dimension.
The reconstruction problem is simplified by the observation that the Fourier transform of a 2D projection of $V$ is a 2D slice through the origin of $\hat{V}$ in the Fourier domain, where the slice is perpendicular to the projection direction. This correspondence is known as the Fourier slice theorem (Bracewell (1956)). In the Fourier domain, the generative process for image $\hat{X}$ from volume $\hat{V}$ can thus be written:
$$\hat{X} = \hat{g} \odot S(t)\, A(R)\, \hat{V} + \mathrm{noise},$$
where $\hat{g} = \mathcal{F}g$ is the contrast transfer function (CTF) of the microscope, $S(t)$ is a phase shift operator corresponding to image translation by $t$ in real space, and $A(R)\hat{V} = \hat{V}(R^T(\cdot, \cdot, 0)^T)$ is a linear slice operator corresponding to rotation by $R$ and linear projection along the z-axis in real space. The frequency-dependent noise is typically modelled as independent, zero-centered Gaussian noise in Fourier space. Under this model, the probability of observing an image $\hat{X}$ with pose $\phi = (R, t)$ from volume $\hat{V}$ is thus:
$$p\big(\hat{X} \mid \phi, \hat{V}\big) = \frac{1}{Z} \exp\left( -\sum_{l} \frac{\big|\hat{g}(l)\,\big(S(t)A(R)\hat{V}\big)(l) - \hat{X}(l)\big|^2}{2\sigma_l^2} \right),$$
where $l$ is a two-component index over Fourier coefficients for the image, $\sigma_l$ is the width of the Gaussian noise expected at each frequency, and $Z$ is a normalization constant.
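As a concrete illustration of this Fourier-space generative model, here is a minimal numpy sketch (illustrative only, not cryoDRGN's implementation; the nearest-neighbour slice lookup and the sign convention of the phase shift are simplifying assumptions):

```python
import numpy as np

D = 32
rng = np.random.default_rng(0)
V_hat = rng.standard_normal((D, D, D))          # stand-in for F(volume)

def project_fourier(V_hat, R, t, ctf, sigma):
    """X_hat = ctf * S(t) A(R) V_hat + noise, with a nearest-neighbour slice."""
    k = np.fft.fftshift(np.fft.fftfreq(D))      # frequencies in [-0.5, 0.5)
    kx, ky = np.meshgrid(k, k, indexing="xy")
    coords = np.stack([kx, ky, np.zeros_like(kx)], axis=-1)  # x-y plane lattice
    rotated = coords @ R                        # apply R^T to each row vector
    idx = np.clip(np.round((rotated + 0.5) * D).astype(int), 0, D - 1)
    slice_ = V_hat[idx[..., 0], idx[..., 1], idx[..., 2]]    # A(R) V_hat
    phase = np.exp(-2j * np.pi * (kx * t[0] + ky * t[1]))    # S(t)
    noise = sigma * rng.standard_normal((D, D))
    return ctf * phase * slice_ + noise

theta = 0.3                                      # rotation about z, for demo
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
X_hat = project_fourier(V_hat, R, t=(1.0, -2.0), ctf=1.0, sigma=0.1)
print(X_hat.shape, X_hat.dtype)
```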
TRADITIONAL CRYO-EM RECONSTRUCTION
To recover the desired structure, cryo-EM reconstruction methods must jointly solve for the unknown volume $V$ and image poses $\phi_i = (R_i, t_i)$. Expectation maximization (Scheres (2012a)) and simpler variants of coordinate ascent are typically employed to find a maximum a posteriori estimate of $V$ marginalizing over the posterior distribution of the $\phi_i$'s, i.e.:
$$\hat{V}^{MAP} = \arg\max_{\hat{V}} \sum_{i=1}^{N} \log \int_{\phi} p\big(\hat{X}_i \mid \phi, \hat{V}\big)\, p(\phi)\, d\phi.$$
Intuitively, given $V^{(n)}$, the estimate of the volume at iteration $n$, images are first aligned with $V^{(n)}$ (E-step); then, with the updated alignments, the images are backprojected to yield $V^{(n+1)}$ (M-step). This iterative refinement procedure is sensitive to the initial estimate of $V$ as the optimization objective is highly nonconvex; stochastic gradient descent is commonly used for ab initio reconstruction to provide an initial estimate $V^{(0)}$ (Punjani et al. (2017)).
Given sample heterogeneity, the standard approach in the cryo-EM field is to simultaneously reconstruct $K$ independent volumes. Termed multiclass refinement, the image formation model is extended to assume images are generated from $V_1, ..., V_K$ independent volumes, with inference now requiring marginalization over the $\phi_i$'s and class assignment probabilities $\pi_j$'s:
$$\hat{V}_1^{MAP}, \dots, \hat{V}_K^{MAP} = \arg\max \sum_{i=1}^{N} \log \sum_{j=1}^{K} \pi_j \int_{\phi} p\big(\hat{X}_i \mid \phi, \hat{V}_j\big)\, p(\phi)\, d\phi.$$
While this formulation is sufficiently descriptive when the structural heterogeneity consists of a small number of discrete conformations, it suffers when the heterogeneity is complex or when conformations lie along a continuum of states. In practice, resolving such heterogeneity is handled through a hierarchical approach refining subsets of the imaging dataset with manual choices for the number of classes and the initial models for refinement. Because the number and nature of the underlying structural states are unknown, multiclass refinement is error-prone, and in general, the identification and analysis of heterogeneity is an open problem in single particle cryo-EM.
METHODS
We propose a neural network-based reconstruction method, cryoDRGN (Deep Reconstructing Generative Networks), that can perform ab-initio unsupervised reconstruction of a continuous distribution over 3D volumes from unlabeled 2D images ( Figure 2). We formulate an image encoder-volume decoder architecture based on the variational autoencoder (VAE) (Kingma & Welling (2013)), where protein structural heterogeneity is modeled in the latent variable. While a standard VAE assumes all sources of image heterogeneity are entangled in the latent variable, we propose an architecture that enables modelling the intrinsic heterogeneity of the volume separately from the extrinsic orientation of the volume during imaging. Our end-to-end training framework explicitly models the forward image formation process to relate 2D views to 3D volumes and employs two separate strategies for inference: a variational approach for the unconstrained latent variables and a global search over SO(3) × R 2 for the unknown pose of each image. These elements are described in further detail below.
GENERATIVE MODEL
We design a deep generative model to approximate a single function, $\hat{V}: \mathbb{R}^{3+n} \to \mathbb{R}$, representing an $n$-dimensional manifold of 3D electron densities in the Fourier domain. Specifically, the volume $\hat{V}$ is modelled as a probabilistic decoder $p_\theta(\hat{V}|k, z)$, where $\theta$ are parameters of a multilayer perceptron (MLP). Given Cartesian coordinates $k \in \mathbb{R}^3$ and continuous latent variable $z$, the decoder outputs distribution parameters for a Gaussian distribution over $\hat{V}(k, z)$, i.e. the electron density of volume $\hat{V}_z$ at frequency $k$ in Fourier space. Unlike a standard deconvolutional decoder, which produces a separate distribution for each voxel of a $D^3$ lattice given the latent variable, following spatial-VAE, we model a function over Cartesian coordinates (Bepler et al. (2019)). Here, these coordinates are explicitly treated as each pixel's location in 3D Fourier space and thus enforce the topological constraints between 2D views in 3D via the Fourier slice theorem.
By the image formation model, each image corresponds to an oriented central slice of the 3D volume in the Fourier domain (Section 2). During training, the 3D coordinates of an image's pixels can be explicitly represented by the rotation of a $D \times D$ lattice initially on the x-y plane. Under this model, the log probability of an image $\hat{X}$, represented as a vector of size $D \times D$, given the current MLP, latent pose variables $R \in SO(3)$ and $t \in \mathbb{R}^2$, and unconstrained latent variable $z$, is:
$$\log p\big(\hat{X} \mid R, t, z\big) = \sum_{i} \log p_\theta\big(\tilde{X}_i \mid R^T c_0^{(i)}, z\big),$$
where $i$ indexes over the coordinates of a fixed lattice $c_0$. Note that $\tilde{X} = S(-t)\hat{X}$ is the centered image, where $S$ is the phase shift operator corresponding to image translation in real space. We define $c_0$ as a vector of 3D coordinates of a fixed lattice spanning $[-0.5, 0.5]^2$ on the x-y plane to represent the unoriented coordinates of an image's pixels.
Instead of directly supplying $k$, a fixed positional encoding of $k$ is supplied to the decoder, consisting of sine and cosine waves of varying frequency:
$$pe^{(2i)}(k_j) = \sin\big(k_j D\pi (2/D)^{2i/D}\big), \quad i = 1, ..., D/2;\ k_j \in k \tag{7}$$
$$pe^{(2i+1)}(k_j) = \cos\big(k_j D\pi (2/D)^{2i/D}\big), \quad i = 1, ..., D/2;\ k_j \in k$$
Without loss of generality, we assume a length scale by our definition of $c_0$, which restricts the support of the volume to a sphere of radius 0.5. The wavelengths of the positional encoding thus follow a geometric series spanning the Fourier basis from wavelength 1 to the Nyquist limit ($2/D$) of the image data. While this encoding empirically works well for noiseless data, we obtain better results with a slightly modified featurization for noisy datasets, consisting of a geometric series which excludes the top 10 percentile of highest-frequency components of the noiseless positional encoding.
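A direct transcription of this featurization into Python (a small sketch following the equations above; the function name and the array layout are our own choices):

```python
import numpy as np

def positional_encoding(k, D):
    """Map 3D coordinates k in [-0.5, 0.5]^3 to sin/cos features whose
    wavelengths form a geometric series from 1 down to the Nyquist limit 2/D."""
    i = np.arange(1, D // 2 + 1)                      # i = 1, ..., D/2
    freqs = D * np.pi * (2.0 / D) ** (2.0 * i / D)    # k_j * D*pi*(2/D)^(2i/D)
    args = k[..., None] * freqs                       # (..., 3, D/2)
    feats = np.concatenate([np.sin(args), np.cos(args)], axis=-1)
    return feats.reshape(*k.shape[:-1], -1)           # (..., 3 * D)

k = np.array([0.25, -0.1, 0.0])
print(positional_encoding(k, D=64).shape)             # (192,)
```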
INFERENCE
We employ a standard VAE for approximate inference of the latent variable z, but use a global search to infer the pose φ = (R, t) using a branch and bound algorithm.
Variational encoder: As each cryo-EM image is a noisy projection of an instance of the volume at a random, unknown pose (viewing direction), the image encoder aims to learn a pose-invariant representation of the protein's structural heterogeneity. Following the standard VAE framework, the probabilistic encoder $q_\xi(z|\hat{X})$ is an MLP with variational parameters $\xi$ and Gaussian output with diagonal covariance. Given an input cryo-EM image $\hat{X}$, represented as a $D \times D$ vector, the encoder MLP outputs $\mu_{z|X}$ and $\Sigma_{z|X}$, statistics that parameterize an approximate posterior to the intractable true posterior $p(z|\hat{X})$. The prior on $z$ is a standard normal, $N(0, I)$.
Pose inference: We perform a global search over $SO(3) \times \mathbb{R}^2$ for the maximum-likelihood pose for each image, given the current decoder MLP and a sampled value of $z$ from the approximate posterior. Two techniques are used to improve the efficiency of the search over poses: (1) discretizing the search space on a uniform grid and sub-dividing grid points after pruning candidate poses with branch and bound (BNB), and (2) band-pass limiting the objective to low-frequency components and incrementally increasing the k-space limit at each iteration (frequency marching). The pose inference procedure encodes the intuition that low-frequency components dominate pose estimation, and is fully described in Appendix A.

Figure 2: CryoDRGN model architecture. We use a VAE to perform approximate inference for latent variable $z$ denoting image heterogeneity. The decoder reconstructs an image pixel by pixel given $z$ and $pe(k)$, the positional encoding of 3D Cartesian coordinates. The 3D coordinates corresponding to each image pixel are obtained by rotating a $D \times D$ lattice on the x-y plane by $R$, the image orientation. The latent orientation for each image is inferred through a branch and bound global optimization procedure (not shown).
In summary, for a given image $\hat{X}_i$, the image encoder produces $\mu_{z|X_i}$ and $\Sigma_{z|X_i}$. A sampled value of the latent $z_i \sim N(\mu_{z|X_i}, \Sigma_{z|X_i})$ is broadcast to all pixels. Given $z_i$ and the current decoder, BNB orientational search identifies the maximum likelihood rotation $R_i$ and translation $t_i$ for $\hat{X}_i$. The decoder $p_\theta$ then reconstructs the image pixel by pixel given the positional encoding of $R_i^T c_0$ and $z_i$. The phase shift corresponding to $t_i$ and optionally the microscope CTF $\hat{g}_i$ is then applied to the reconstructed pixel intensities. Following the standard VAE framework, the optimization objective is the variational lower bound of the model evidence:
$$\mathcal{L}\big(\hat{X}; \xi, \theta\big) = \mathbb{E}_{q_\xi(z|\hat{X})}\big[\log p_\theta(\hat{X} \mid z)\big] - KL\big(q_\xi(z \mid \hat{X})\,\|\,p(z)\big),$$
where the expectation of the log likelihood is estimated with one Monte Carlo sample. By comparing many 2D slices from the imaging dataset, the volume can be learned through feedback from these single views. Furthermore, this learning process is denoising, as overfitting to noise from a single image would lead to higher reconstruction error for other views. We note that the distribution of 3D volumes models heterogeneity within a single imaging dataset, capturing structural variation for a particular protein or biomolecular complex, and that a separate network is trained per experimental dataset. Unless otherwise specified, the encoder and decoder networks are both MLPs containing 10 hidden layers of dimension 128 with ReLU activations. Further architecture and implementation details are given in Appendix A.
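A PyTorch-style sketch of this objective for one image (the names and the dummy decoder are illustrative, not cryoDRGN's API; under the Gaussian white noise model the reconstruction term reduces to a mean squared error):

```python
import torch

def elbo_loss(x, decoder, mu, logvar, coords):
    """ELBO for one image: reconstruction term minus the KL between
    q(z|x) = N(mu, diag(exp(logvar))) and the standard normal prior,
    estimated with one Monte Carlo sample of z (reparameterization)."""
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    x_rec = decoder(coords, z)            # pixel-by-pixel slice reconstruction
    rec = torch.mean((x_rec - x) ** 2)    # -log p(x|z) up to constants
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - torch.exp(logvar))
    return rec + kl

# Toy usage with a dummy decoder; in training, follow with loss.backward()
# and an optimizer step.
dec = lambda c, z: c.sum(-1) + z.sum()
x, coords = torch.randn(8, 8), torch.randn(8, 8, 3)
print(elbo_loss(x, dec, torch.zeros(4), torch.zeros(4), coords).item())
```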
RELATED WORK
Homogeneous cryo-EM reconstruction: Cryo-EM reconstruction is typically accomplished in two stages: 1) generation of an initial low-resolution model, followed by 2) iterative refinement of the initial model with a coordinate ascent procedure alternating between projection matching and refinement of the structure. In practice, initial structures can be obtained experimentally (Leschziner & Nogales (2006)), inferred based on homology to complexes with known structure, or via ab initio reconstruction with stochastic gradient descent (Punjani et al. (2017)). Once an initial model is generated, there are many tools for iterative refinement of the model (Scheres (2017)). CryoSPARC implements a branch and bound optimization scheme, where their bound is a probabilistic lower bound based on the noise characteristics from the image formation model (Punjani et al. (2017)). Ullrich et al. (2019) propose a differentiable voxel-based representation for the volume and introduce a variational inference algorithm for homogeneous reconstruction with known poses.
Heterogeneous cryo-EM reconstruction: In the cryo-EM literature, standard approaches for addressing structural heterogeneity use mixture models of discrete, independent volumes, termed multiclass refinement (Scheres (2010); Lyumkis, Dmitry et al. (2013)). These mixture models assume that the clusters are independent and homogeneous, and in practice require many rounds of expert-guided hierarchical clustering from appropriate initial volumes and manual choices for the number of clusters. More recently, Nakane et al. (2018) extend the image generative model to model the protein as a sum of rigid bodies (determined from a homogeneous reconstruction), thus imposing structural assumptions on the type of heterogeneity. Frank & Ourmazd (2016) aim to build a continuous manifold of the images; however, their approach requires pose supervision, and final structures are obtained by clustering the images along the manifold and reconstructing with traditional tools. Recent theoretical work for continuous heterogeneous reconstruction includes expansion of discrete 3D volumes in a basis of Laplacian eigenvectors (Moscovich et al. (2019)) and a general framework for modelling hyper-volumes (Lederman et al. (2019)), e.g. as a tensor product of spatial and temporal basis functions (Lederman & Singer (2017)). To our knowledge, our work is the first to apply deep neural networks to cryo-EM reconstruction, and in doing so, is the first that can learn a continuously heterogeneous volume from real cryo-EM data.
Neural network 3D reconstruction in computer vision: There is a large body of work in computer vision on 3D object reconstruction from 2D viewpoints. While these general approaches have elements in common with single particle cryo-EM reconstruction, the problem in the context of computer vision differs substantially in that 2D viewpoints are not projections and viewing directions are typically known. For example, Yan et al. (2016) propose a neural network that can predict a 3D volume from a single 2D viewpoint using only 2D image supervision. Gadelha et al. (2017) learn a generative model over 3D object shapes based on 2D images of the objects thereby disentangling variation in shape and pose. Tulsiani et al. (2018) also reconstruct and disentangle the shape and pose of 3D objects from 2D images by enforcing geometric consistency. These works attempt to encode the viewpoint 'projection' operation 2 explicitly in the model in a manner similar to our use of the Fourier slice theorem.
Coordinate-based neural networks in computer vision: Using spatial (i.e. pixel) coordinates as features to a convolutional decoder to improve generative modeling has been proposed many times, with recent work computing each image as a function of a fixed coordinate lattice and latent variables (Watters et al. (2019)). However, directly modeling a function that maps spatial coordinates to values is less extensively explored. In CocoNet, the authors present a deep neural network that maps 2D pixel coordinates to RGB color values. CocoNet learns an image model for single images, using the capacity of the network to memorize the image, which can then be used for various tasks such as denoising and upsampling (Bricman & Ionescu (2018)). Similarly, Spatial-VAE proposes a similar coordinate-based image model to enforce geometric consistency between rotated 2D images in order to learn latent image factors and disentangle positional information from image content (Bepler et al. (2019)). Our method extends many of these ideas from simpler 2D image modelling to enable 3D cryo-EM reconstruction in the Fourier domain.
RESULTS
Here, we present both qualitative and quantitative results for 1) homogeneous cryo-EM reconstruction, validating that cryoDRGN reconstructed volumes match those from existing tools; 2) heterogeneous cryo-EM reconstruction with pose supervision, demonstrating automatic learning of the latent manifold that previously required many expert-guided rounds of multiclass refinement; and 3) fully unsupervised reconstruction of continuous distributions of 3D protein structures, a capability not provided by any existing tool.
UNSUPERVISED HOMOGENEOUS RECONSTRUCTION
We first evaluate cryoDRGN on homogeneous datasets, where existing tools are capable of reconstruction. We create two synthetic datasets following the cryo-EM image formation model (image size D=128, 50k projections, with and without noise), and use one real dataset from EMPIAR-10028 consisting of 105,247 images of the 80S ribosome downsampled to image size D=90. The encoder network is not used in homogeneous reconstruction. As a baseline for comparison, we perform homogeneous ab-initio reconstruction followed by iterative refinement in cryoSPARC (Punjani et al. (2017)). We compare against cryoSPARC as a representative of traditional state-of-the-art tools, which all implement variants of the same algorithm (Section 2). Further dataset preprocessing and training details are given in Appendix B.
We find that cryoDRGN inferred poses and reconstructed volumes match those from state-of-the-art tools. The similarity of the volumes to the ground truth can be quantified with the Fourier shell correlation (FSC) curve. Reconstructed volumes and a quantitative comparison with the FSC curve are given in Figure S5. Pose errors relative to the ground truth image poses are given in Table 1. For the real cryo-EM dataset (no ground truth), the median pose difference between the cryoDRGN and cryoSPARC reconstructions is 0.002 for rotations and 1.0 pixels for translations, and the resulting volumes are correlated above an FSC cutoff of 0.5 across all frequencies.
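Since the FSC metric is used throughout the evaluation, the following is a generic numpy sketch of how such a curve can be computed (a textbook implementation, not the authors' code):

```python
import numpy as np

def fsc(vol1, vol2, n_shells=None):
    """Fourier shell correlation between two D^3 volumes: the normalized
    cross-correlation of their Fourier coefficients, per radial shell."""
    D = vol1.shape[0]
    n_shells = n_shells or D // 2
    f1, f2 = np.fft.fftn(vol1), np.fft.fftn(vol2)
    freq = np.fft.fftfreq(D)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    r = np.sqrt(kx**2 + ky**2 + kz**2)
    shells = np.minimum((r * 2 * n_shells).astype(int), n_shells)  # bin by |k|
    curve = []
    for s in range(n_shells):
        m = shells == s
        num = np.abs(np.sum(f1[m] * np.conj(f2[m])))
        den = np.sqrt(np.sum(np.abs(f1[m])**2) * np.sum(np.abs(f2[m])**2))
        curve.append(num / den if den > 0 else 0.0)
    return np.array(curve)

v = np.random.default_rng(1).standard_normal((32, 32, 32))
print(fsc(v, v + 0.1 * np.random.default_rng(2).standard_normal(v.shape))[:5])
```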
HETEROGENEOUS RECONSTRUCTION WITH POSE SUPERVISION
Next, we evaluate cryoDRGN for heterogeneous cryo-EM reconstruction on EMPIAR-10076, a real dataset of the E. coli large ribosomal subunit (LSU) undergoing assembly (131,899 images, downsampled to D=90) (Davis et al. (2016)). Here, poses are obtained through alignment to an existing structure of the LSU and treated as known during training. In the original analysis of this dataset, multiple rounds of discrete multiclass refinement with varying numbers of classes, followed by human comparison of similar volumes, were used to identify 4 major structural states of the LSU. We train cryoDRGN with a 1-D latent variable, treating image pose as fixed to skip BNB pose inference. As a baseline, we reproduce the published structures originally obtained through multiclass refinement with cryoSPARC. Further baseline and training details are given in Appendix C. We find that cryoDRGN automatically identifies all 4 major states of the LSU (Figure 3a). Quantitative comparison with FSC curves and additional volumes along the latent space are shown in Figure S7. We compare the cryoDRGN latent encoding $\mu_{z|X}$ for each image to the MAP cluster assignment in cryoSPARC and find that the learned latent manifold aligns with the cryoSPARC clusters (Figure 3b). CryoDRGN identifies subpopulations in some of the cryoSPARC clusters (e.g. Class D), which is partitioned by a subsequent round of cryoSPARC multiclass refinement (Figure S8). Published structures A and F correspond to impurities in the sample. CryoDRGN correctly assigns images from these impurities to distinct clusters, but does not learn their correct structure since the poses inferred from aligning to the LSU template structure are incorrect.

The FSC curve measures correlation between volumes as a function of radial shells in Fourier space. The field currently lacks a rigorous method for measuring the quality of a reconstruction. In practice, however, resolution is often reported as $1/k_0$, where $k_0$ is the smallest frequency at which $FSC(k)$ falls below some fixed threshold $C$.

Table 2: Reconstruction accuracy quantified by an FSC=0.5 resolution metric between the reconstructed volume corresponding to each image and its ground truth volume. We report the average and standard deviation across 100 images in the dataset (lower is better; best possible is 2 pixels).
UNSUPERVISED HETEROGENEOUS RECONSTRUCTION
We test the ability of cryoDRGN to perform fully unsupervised heterogeneous reconstruction from datasets with different latent structure. We generate four datasets (each 50k projections, D=64) from an atomic model of a protein complex, containing either a 1D continuous motion, a 2D continuous motion, a 1D continuous circular motion, or a mixture of 10 discrete conformations (Figure S7). We train cryoDRGN with a 1D latent variable for the linear 1D dataset and a 10D latent variable for the other 3 datasets. As a baseline, we perform multiclass reconstruction in cryoSPARC, sweeping K=2-5 classes. We compare against K=3, which had the best qualitative results.
We also propose a modification to cryoDRGN in order to train on tilt series pair datasets. Tilt series pairs is a variant of cryo-EM in which, for each image $X_i$, a corresponding image $X_{tilt,i}$ is acquired after tilting the imaging stage by a known angle. This technique was originally employed to identify the chirality of molecules (Belnap et al. (1997)), which is lost in the projection from 3D to 2D. We propose using tilt series pairs to encourage invariance of $q_\xi$ with respect to pose transformations for a given $\hat{V}_z$ (and incidentally to identify the chirality of $\hat{V}_z$). We make minor modifications to the architecture as described in Appendix D.
In Figure 4, we show that cryoDRGN reconstructed volumes for the circular 1D dataset qualitatively match the ground truth structures. Note that while we only visualize 10 structures sampled along the latent space, the volume decoder can reconstruct the full continuum of states. In contrast, cryoSPARC multiclass reconstruction, a discrete mixture model of independent structures, is only able to reconstruct 2 (originally unaligned) structures which resemble the ground truth. Volumes contain blurring artifacts from clustering images from different conformations into the assumed-homogeneous clusters of the mixture model. Results for the remaining datasets are given in Figures S10-13.
We quantitatively measure performance on this task with an FSC resolution metric computed between the MAP volume for each image, $\hat{V}_{z_i|X_i}$, and the ground truth volume which generated each image, averaged across images in the dataset (Table S4). We find that cryoDRGN reconstruction accuracy is much higher than state-of-the-art discrete multiclass reconstruction in cryoSPARC, with further improvement achieved by training on tilt series pairs.
CONCLUSIONS
We present a novel neural network-based reconstruction method for single particle cryo-EM that learns continuous variation in protein structure. We applied cryoDRGN on a real dataset of highly heterogeneous ribosome assembly intermediates and demonstrate automatic partitioning of structural states. In the presence of simulated continuous heterogeneity, we show that cryoDRGN learns a continuous representation of structure along the true reaction coordinate, effectively disentangling imaging orientation from intrinsic structural heterogeneity. The techniques described here may also have broader applicability to image and volume generative modelling in other domains of computer vision and 3D shape reconstruction.
A.1 BRANCH AND BOUND IMPLEMENTATION DETAILS
We perform a global search over SO(3) × R 2 for the maximum-likelihood pose for each image given the current decoder MLP. Two techniques are used to improve the efficiency of the search over poses: (1) discretizing the search space on a uniform grid and sub-dividing grid points after pruning candidate poses with branch and bound, and (2) band pass limiting the objective to low frequency components and incrementally increasing the k-space limit at each iteration (frequency marching).
Our branch and bound algorithm for pose optimization is given in Algorithm 1. Briefly, we discretize SO(3) uniformly using the Hopf fibration (Yershova et al. (2010)) at a predefined base resolution of the grid and incrementally increase the grid resolution by sub-dividing grid points. At each resolution of the grid, the set of candidate poses is pruned using a branch and bound (BNB) optimization scheme, which alternates between a computationally inexpensive lower bound on the objective function evaluated at all grid points and an upper bound consisting of the true objective evaluated on the best lower-bound candidate. Grid points whose lower bound is higher than this value are excluded from subsequent iterations. In our case, the loss is evaluated on low-frequency components of the image; specifically, the loss restricted to Fourier components with $|k| < k_{max}$ is an effective lower bound, as it is both inexpensive to compute and captures most of the power (and thus the error). This bound encodes the intuition that low-frequency components dominate pose estimation. We concomitantly increase $k_{max}$ at each iteration of grid subdivision.
At each iteration, some poses are excluded by BNB, and the remaining poses are further discretized. Although BNB is risk-free in the sense that the optimal pose at a given resolution will not be pruned, our application of it is not risk-free, as a candidate pose with a high loss at a given resolution does not guarantee that its neighbor in the next iteration will not have a lower loss. Irrespective, in practice, we find that at a sufficiently fine base resolution, we obtain good results on a tractable timescale (hours on a single GPU). We reimplement the uniform multiresolution grids on SO(3) of Yershova et al. (2010). The structure of the search is as follows:

Algorithm 1: Branch and bound pose search
Input: image X̂, current decoder V̂_z, base grid Φ over SO(3) × R², initial frequency limit k, maximum limit k_max
for iter = 1 ... N_iter do
    lb(φ_i) ← loss between X̂ and Slice(V̂_z, φ_i) at frequencies below k, for all φ_i ∈ Φ   (lower bounds)
    φ* ← arg min(lb)
    ub ← loss between X̂ and Slice(V̂_z, φ*) at frequencies below k_max   (upper bound)
    Φ ← subdivisions of the grid points with lb(φ_i) < ub
    increase k   (frequency marching)
end for
return φ*
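The following toy Python skeleton illustrates the combination of pruning and frequency marching on a 1-D stand-in for the pose space (purely illustrative; the real search runs over the Hopf-fibration grid on SO(3) × R² with the image-formation loss):

```python
import numpy as np

def bnb_search(loss_at, base_grid, n_iter=5, k0=4, k_step=2):
    """Frequency-marching branch and bound on a toy 1-D pose space.
    loss_at(pose, k) evaluates the loss band-limited to frequencies < k;
    band-limited losses are monotone in k, so they are valid lower bounds."""
    poses = list(base_grid)
    spacing = base_grid[1] - base_grid[0]
    k = k0
    for _ in range(n_iter):
        lb = np.array([loss_at(p, k) for p in poses])     # cheap lower bounds
        best = poses[int(np.argmin(lb))]
        ub = loss_at(best, k + k_step)                    # upper bound at best
        survivors = [p for p, l in zip(poses, lb) if l <= ub]
        spacing /= 2.0                                    # subdivide the grid
        poses = [p + d for p in survivors for d in (-spacing, spacing)]
        k += k_step                                       # frequency marching
    return min(poses, key=lambda p: loss_at(p, k))

true_pose = 1.234
loss = lambda p, k: sum((np.sin(j * p) - np.sin(j * true_pose)) ** 2
                        for j in range(1, k))
print(bnb_search(loss, np.linspace(-np.pi, np.pi, 32)))   # ~ 1.234
```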
A.2 TRAINING DETAILS
Given an imaging dataset,X 1 , ...X N , we summarize three training paradigms of cryoDRGN. 1) For homogeneous reconstruction, we only train the volume decoder p θ and perform BNB pose inference for the unknown φ i 's for each image. 2) As an intermediate task, we can perform heterogeneous reconstruction training the image encoder q ξ and the volume decoder p θ with known φ i 's to skip BNB pose inference. 3) For fully unsupervised heterogeneous reconstruction, we jointly train q ξ and p θ to learn a continuous latent representation, performing BNB pose inference for the unknown pose of each image.
Unless otherwise specified, the encoder and decoder networks are both MLPs containing 10 hidden layers of dimension 128 with ReLU activations. A fully connected architecture is used instead of a convolutional architecture because the images are not represented in real space.
Instead of representing both the real and imaginary components of each image, we use the closely-related Hartley space representation (Hartley (1942)). The Hartley transform of a real-valued function is equivalent to the real minus imaginary component of the FT, and thus is real valued. The Fourier slice theorem still holds and the error model is equivalent.
In this work, we simplify the image generation model to Gaussian white noise. Therefore, for a given image, the negative log likelihood for a reconstructed slice from the decoder corresponds to the mean squared error between the phase-shifted image and the oriented slice from the volume decoder. We leave the implementation of a colored noise model to future work.
B.1 DATASET PREPARATION
Simulated datasets: From a ground truth 3D volume, we simulated datasets following the cryo-EM image formation model by 1) rotating the 3D volume in real space by $R$, where $R \in SO(3)$ is sampled uniformly, 2) projecting (integrating) the volume along the z-axis, 3) shifting the resulting 2D image by $t$, where $t$ is sampled uniformly from $[-10, 10]^2$ pixels, and 4) optionally adding noise to an SNR of 0.1, a typical value for cryo-EM data (Baxter et al. (2009)). Following convention in the cryo-EM field, we define SNR as the ratio of the variance of the signal to the variance of the noise, where the signal is taken to be the entire noise-free $D \times D$ image. 50k projections were generated for each dataset with an image size of D=128.
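The noise-addition step follows directly from this definition of SNR; a small numpy sketch (illustrative):

```python
import numpy as np

def add_noise(images, snr=0.1, seed=0):
    """Add white Gaussian noise so that var(signal)/var(noise) = snr,
    with the signal variance taken over the entire D x D image."""
    rng = np.random.default_rng(seed)
    signal_var = images.var(axis=(-2, -1), keepdims=True)
    noise_std = np.sqrt(signal_var / snr)
    return images + noise_std * rng.standard_normal(images.shape)

clean = np.random.default_rng(1).standard_normal((3, 128, 128))
noisy = add_noise(clean, snr=0.1)
print(clean.var(axis=(-2, -1)) / (noisy - clean).var(axis=(-2, -1)))  # ~ 0.1
```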
Real dataset: To generate the real cryo-EM dataset for homogeneous reconstruction, images from EMPIAR-10028 (Wong et al. (2014)) were downsampled by a factor of 4 by clipping in Fourier space. The images were then 'phase flipped' in Fourier space by their contrast transfer function, a given real-valued function with range [-1,1] determined by the microscopy conditions, i.e. the Fourier components are negated where the CTF is negative.
B.2 TRAINING
For each dataset, we train the volume decoder (10 hidden layers of dimension 128) in minibatches of 10 images with random orientations for the first epoch to learn a volume with roughly correct spatial extent, followed by 4 epochs with branch and bound (BNB) pose inference (30 min/epoch for noiseless, 80 min/epoch for noisy datasets). Since BNB pose inference is the bottleneck during training, we employ a multiscale training protocol, where after 4 epochs with BNB pose inference, the latent pose is fixed, and we train a separate, larger volume decoder (10 hidden layers of dimension 500) for 15 epochs with fixed poses to "refine" the structure to high resolution (20 min/epoch). Training times are reported for 50k-image, D=128 datasets trained on a Nvidia Titan V GPU.

Figure S5: Left: CryoDRGN unsupervised homogeneous reconstruction on 2 simulated datasets and 1 real cryo-EM dataset matches state-of-the-art. Right: Fourier shell correlation (FSC) curves between the reconstructed volume and the ground truth volume for the synthetic ribosome datasets.
C HETEROGENEOUS RIBOSOME RECONSTRUCTION WITH POSE SUPERVISION
Dataset preparation: We used the dataset from EMPIAR-10076 which contains 131,899 images of the E. coli large ribosomal subunit (LSU) in various stages of assembly (Davis et al. (2016)). Images were downsampled to D=128 by clipping in Fourier space. Poses were determined by aligning the images to a mature LSU structure obtained from a homogeneous reconstruction of the full resolution dataset in cryoSPARC, i.e. "a consensus reconstruction".
Baseline: In the original analysis of this dataset, multiple rounds of multiclass refinement in sweeps of varying number of classes followed by expert manual alignment and clustering of similar volumes were used to identify 6 classes, labeled A-F consisting of 4 major structural states of the LSU (classes B-E) and 2 additional structures of the 70S and 30S ribosome, class A and F, respectively.
Since the published dataset did not contain the corresponding image cluster assignments, we perform multiclass refinement in cryoSPARC using the published structures of the 6 major states, low pass filtered to 25 Å, as initial models, to reproduce the results and obtain image cluster assignments. Aside from classes A and F (low population impurities in the sample), the remaining structures correlate well with the published volumes (Figure S6).

Figure S6: Reconstructed volumes from cryoSPARC multiclass refinement using the published structures of the 6 major states, low pass filtered to 25 Å, as initial models. Right: FSC curves between the cryoSPARC reconstructed and published volumes.
cryoDRGN training: We train cryoDRGN with a 1-D latent variable in minibatches of 10 images for 200 epochs, treating image pose as fixed (11 min/epoch on a Nvidia Titan V GPU). To simplify representation learning for q ξ , we center and phase flip images before inputting to the encoder. We encode and decode a circle of pixels with diameter D=128 instead of the full 128x128 image.
C.1 SUPPLEMENTARY RESULTS

D.1 DATASET PREPARATION

Linear 1D motion: We generated a dataset containing one continuous degree of freedom as follows: from an atomic model of a protein complex, a single bond in the atomic model was rotated while keeping the remaining structure fixed, and 50 atomic models were sampled along this reaction coordinate. 1000 projections with random rotations and in-plane translations were generated for each model, yielding a total of 50k images, approximating a uniform distribution along a continuous reaction coordinate.
Linear 2D motion: We extended the linear 1D motion dataset by introducing a second degree of freedom from rotating a bond in the atomic model that connected a different protein in the complex. Similar to the 1D motion dataset, from a starting configuration, the original bond was rotated +/-N degrees, and 50 models were sampled along this reaction coordinate. Then, from the starting conformation, the second bond was rotated +/-90 degrees, and 50 additional models were sampled along the second reaction coordinate. 500 projections were generated from each model, yielding a total of 50k images.
Circular 1D motion: For this dataset, we rotated a bond a full 360 degrees and sample 100 models along this circular reaction coordinate. 500 projections were generated from each model, yielding a total of 50k images.
Discrete 10 class: For this dataset, we sampled 10 random configurations for the proteins in the complex. 5000 projection images were generated from each model, yielding a dataset containing a mixture of 10 discrete states.
For all four datasets, random rotations were generated uniformly from SO(3), and translations were sampled uniformly from $[-5, 5]$ pixels. The image size was D=64 with an absolute spatial extent of 720 Å and a Nyquist limit of 22.5 Å. Schematics of the simulated motions are given in Figure S9.

Figure S9: Ground truth atomic model and the heterogeneity introduced for different datasets.
D.2 TILT SERIES PAIRS
Tilt series pairs is a variant of cryo-EM in which, for each image $X_i$, a corresponding image $X_{tilt,i}$ is acquired after tilting the imaging stage by a known angle. This technique was originally employed to identify the chirality of molecules (Belnap et al. (1997)), which is lost in the projection from 3D to 2D and therefore cannot be inferred from standard cryo-EM. Inferential procedures such as expectation maximization converge to one handedness or the other depending on their initialization. In multiclass reconstruction, different classes are not guaranteed to possess the same handedness even if there is a high relatedness between structures. We remark on this experimental technique as we propose using tilt series pairs to encourage invariance of $q_\xi$ with respect to pose transformations for a given $\hat{V}_z$ (and incidentally also to identify the chirality of $\hat{V}_z$). To train on tilt series pairs, the encoder is split into two MLPs, the first learning an intermediate encoding of each image, and the second mapping the concatenation of the two encodings to the latent space. We use an 8 layer MLP with output dimension 128 for the former and a 2 layer MLP with input dimension 256 for the latter. All hidden layers have dimension 128. For branch and bound, the combined loss over both images is evaluated for each grid point of $SO(3) \times \mathbb{R}^2$. To generate the image $X_{tilt,i}$ associated with $X_i$, prior to rotating the volume by $R_i$, we rotate the volume by a constant 45 degrees around the x-axis.
D.3 TRAINING
We trained cryoDRGN in minibatches of 5 images for 40 epochs without tilt series pairs and 20 epochs with tilt series pairs. We trained a 1-D latent variable for the linear 1D motion dataset, and 10-D latent variables for the remaining datasets. Random angles were used for the first epoch of training to learn roughly the correct spatial extent of the volume, and BNB pose inference was used for the remaining epochs. The runtime was 120 min/epoch vs. 2 min/epoch with and without BNB pose inference, respectively, on a Nvidia Titan V GPU.

Table S3: Relationship between the number of classes in cryoSPARC and reconstruction accuracy, quantified by an FSC=0.5 resolution metric between the reconstructed volume corresponding to each image and its ground truth volume. We report the average and standard deviation across 100 images in the dataset (lower is better; best possible is 2 pixels).

Table S4: Relationship between the z dimension in cryoDRGN and reconstruction accuracy, quantified by an FSC=0.5 resolution metric between the reconstructed volume corresponding to each image and its ground truth volume. We report the average and standard deviation across 100 images in the dataset (lower is better; best possible is 2 pixels). | 2019-09-11T17:13:06.000Z | 2019-09-11T00:00:00.000 | {
"year": 2019,
"sha1": "5305e9fa61d2b942e168651a300ad99afa1db3f1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "db04dc5b7f1864b9dbb1e79892a7d35767a83d0a",
"s2fieldsofstudy": [
"Biology",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Engineering",
"Mathematics"
]
} |
204824305 | pes2o/s2orc | v3-fos-license | Functional equation and zeros on the critical line of the quadrilateral zeta function
For $0<a \le 1/2$, we define the quadrilateral zeta function $Q(s,a)$ using the Hurwitz and periodic zeta functions and show that $Q(s,a)$ satisfies Riemann's functional equation studied by Hamburger, Heck and Knopp. Moreover, we prove that for any $0<a \le 1/2$, there exist positive constants $A(a)$ and $T_0(a)$ such that the number of zeros of the quadrilateral zeta function $Q(s,a)$ on the line segment from $1/2$ to $1/2 +iT$ is greater than $A(a) T$ whenever $T \ge T_0(a)$.
Introduction and Statement of the Main Results
For $0 < a \le 1$ and $s = \sigma + it$, the Hurwitz zeta function and the periodic zeta function are defined by the Dirichlet series
$$\zeta(s, a) := \sum_{n=0}^{\infty} \frac{1}{(n+a)^s}, \qquad \mathrm{Li}(s, a) := \sum_{n=1}^{\infty} \frac{e^{2\pi i n a}}{n^s},$$
respectively. The Dirichlet series of $\zeta(s, a)$ and $\mathrm{Li}(s, a)$ converge absolutely in the half-plane $\sigma > 1$ and uniformly in each compact subset of this half-plane. Moreover, the Hurwitz zeta function has an analytic continuation to $\mathbb{C}$ except $s = 1$, where there is a simple pole with residue 1 (e.g., [1, Chapter 12]). In contrast, the Dirichlet series of the function $\mathrm{Li}(s, a)$ with $0 < a < 1$ converges uniformly in each compact subset of the half-plane $\sigma > 0$ (e.g., [15, p. 20]). Furthermore, the function $\mathrm{Li}(s, a)$ with $0 < a < 1$ is analytically continuable to the whole complex plane (e.g., [15, Chapter 2.2]). We clearly have $\zeta(s, 1) = \mathrm{Li}(s, 1) = \zeta(s)$, where $\zeta(s)$ is the Riemann zeta function. Moreover, we show the following, which implies that $Q(s, a)$ has infinitely many zeros on the critical line $\sigma = 1/2$.
Theorem 1.2. For any $0 < a \le 1/2$, there exist positive constants $A(a)$ and $T_0(a)$ such that the number of zeros of $Q(s, a)$ on the line segment from $1/2$ to $1/2 + iT$ is greater than $A(a)T$ whenever $T \ge T_0(a)$.
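For concreteness, $Q(s, a)$ can be evaluated numerically. The sketch below assumes the definition $2Q(s,a) = \zeta(s,a) + \zeta(s,1-a) + \mathrm{Li}(s,a) + \mathrm{Li}(s,1-a)$ from (1.1), which is not reproduced in this excerpt, and uses mpmath, computing $\mathrm{Li}(s,a)$ via the Lerch transcendent. By Theorem 1.1, the completed function $\pi^{-s/2}\Gamma(s/2)Q(s,a)$ is invariant under $s \to 1-s$, hence real on the critical line, so sign changes of its real part locate the zeros counted in Theorem 1.2.

```python
from mpmath import mp, mpf, mpc, gamma, lerchphi, power, zeta

mp.dps = 25

def Li(s, a):
    """Periodic zeta Li(s,a) = sum_{n>=1} e^{2*pi*i*n*a}/n^s via Lerch."""
    z = mp.exp(2j * mp.pi * a)
    return z * lerchphi(z, s, 1)

def Q(s, a):
    """Quadrilateral zeta, ASSUMING 2Q = zeta(s,a) + zeta(s,1-a)
    + Li(s,a) + Li(s,1-a) as in (1.1)."""
    return (zeta(s, a) + zeta(s, 1 - a) + Li(s, a) + Li(s, 1 - a)) / 2

def xi_Q(s, a):
    """Completed function pi^(-s/2) Gamma(s/2) Q(s,a)."""
    return power(mp.pi, -s / 2) * gamma(s / 2) * Q(s, a)

a = mpf(1) / 3
s = mpc("0.3", "2.0")
print(xi_Q(s, a) - xi_Q(1 - s, a))   # ~ 0 by the functional equation

# Scan the critical line for sign changes of the (real) completed function.
prev = xi_Q(mpc(0.5, 10), a).real
for k in range(1, 9):
    t = 10 + k / 2
    cur = xi_Q(mpc(0.5, t), a).real
    if prev * cur < 0:
        print("zero of Q on the critical line in (", t - 0.5, ",", t, ")")
    prev = cur
```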
We share some remarks on the functional equation and zeros on the critical line of zeta functions in the next three subsections. Note that the quadrilateral zeta function Q(s, a) also has the following remarkable properties. From [17, (2.4)], it holds that Q(0, a) = −1/2 = ζ(0) for all 0 < a ≤ 1/2.
1.2. Zeros of zeta functions on the critical line. The famous Riemann hypothesis asserts that the real part of every non-real zero of the Riemann zeta function is $1/2$. The study of establishing lower bounds for the number of zeros of $\zeta(s)$ on the critical line $\sigma = 1/2$ has a long history. Denote by $N_{\mathrm{Ri}}(T)$ the number of zeros $\rho = \beta + i\gamma$ of the Riemann zeta function $\zeta(s)$ with $\beta = 1/2$ and $0 < \gamma \le T$. In 1914, Hardy proved that
$$N_{\mathrm{Ri}}(T) \to \infty \quad \text{as} \quad T \to \infty.$$
Later, Hardy and Littlewood [9] showed the following (see also [5, Chapter 11.2] and [22, Chapter 10.7]):
Theorem A (Hardy and Littlewood [9, Theorem A]). There are constants $A > 0$ and $T_0 > 0$ such that $N_{\mathrm{Ri}}(T) \ge AT$ whenever $T > T_0$.
In 1942, Selberg proved that there exists $A > 0$ such that
$$N_{\mathrm{Ri}}(T) \ge A\,\frac{T}{2\pi} \log T$$
for all sufficiently large $T$. Note that the numerical value of the constant $A$ in Selberg's theorem was very small. However, Levinson [16] greatly improved Selberg's result and showed that $A \ge 1/3$. Furthermore, Conrey [4] proved that $A \ge 0.4088$. The current (June 2021) best result for the lower bound of $A$ was proved by Kühn, Robles, and Zeindler [13]. It is well-known that the Riemann zeta function $\zeta(s)$ does not vanish in the region of absolute convergence, by the Euler product. Next, we review some facts about the zeros on the vertical line $\sigma = 1/2$ of the Epstein and Hurwitz zeta functions, which have complex zeros in the half-plane $\sigma > 1/2$ (e.g., [11, Chapter 7.4.3] and [15, Chapter 8.4]).
Let $B(x, y) = ax^2 + bxy + cy^2$ be a positive definite integral binary quadratic form, and denote by $r_B(n)$ the number of solutions of the equation $B(x, y) = n$ in integers $x$ and $y$. Then, the Epstein zeta function for the form $B$ is defined by the ordinary Dirichlet series
$$\zeta_B(s) := \sum_{n=1}^{\infty} \frac{r_B(n)}{n^s}$$
for $\sigma > 1$. It is widely known that the function $\zeta_B(s)$ admits an analytic continuation into the entire complex plane except for a simple pole at $s = 1$ with residue $2\pi\Delta^{-1}$, where $\Delta := \sqrt{4ac - b^2}$ (e.g., [6, Section 1]). Moreover, the function $\zeta_B(s)$ satisfies the functional equation
$$\Big(\frac{\Delta}{2\pi}\Big)^{s}\,\Gamma(s)\,\zeta_B(s) = \Big(\frac{\Delta}{2\pi}\Big)^{1-s}\,\Gamma(1-s)\,\zeta_B(1-s).$$
Denote by $N_{\mathrm{Ep}}(T)$ the number of zeros of the Epstein zeta function $\zeta_B(s)$ on the critical line whose imaginary part is smaller than $T > 0$. In 1935, Potter and Titchmarsh [19] showed that $N_{\mathrm{Ep}}(T) \gg T^{1/2-\varepsilon}$. Subsequently, Sankaranarayanan [20] obtained $N_{\mathrm{Ep}}(T) \gg T^{1/2}/\log T$, and Jutila and Srinivas [14] proved that $N_{\mathrm{Ep}}(T) \gg T^{5/11-\varepsilon}$. As the current (June 2021) best result, Baier, Srinivas, and Sangale [2] showed that
$$N_{\mathrm{Ep}}(T) \gg T^{4/7-\varepsilon}.$$
A key to the proof of the estimation $N_{\mathrm{Ep}}(T) \gg T^{4/7-\varepsilon}$ shown in [2] is that the first power mean of an ordinary Dirichlet series $\sum_{n=1}^{\infty} b_n n^{-s}$ with $b_n \in \mathbb{C}$ satisfying certain conditions cannot be too small. In view of the zeros on the critical line and the functional equation of the Epstein zeta function $\zeta_B(s)$ mentioned above, the quadrilateral zeta function $Q(s, a)$ has many analytical properties in common with the Epstein zeta function (and the Riemann zeta function). It should be mentioned that the gamma factor of $Q(s, a)$ does not depend on the parameter $0 < a < 1/2$, by Theorem 1.1, whereas the gamma factor of $\zeta_B(s)$ depends on the discriminant $\Delta$ of the positive definite integral binary quadratic form $B(x, y)$. Furthermore, we can see that the lower bound for the number of zeros of $Q(s, a)$ on the critical line is, at present, better than that of $\zeta_B(s)$, by virtue of Theorem 1.2.
Based on this conjecture and the facts above, we can guess that proving the existence of ≫ T zeros on the line segment (1/2, 1/2 + iT ) of the Hurwitz or Epstein zeta functions is difficult because these zeta functions have no Euler product in general. However, we show that the quadrilateral zeta function Q(s, a) has ≫ T zeros on the line segment (1/2, 1/2 + iT ) even though Q(s, a) cannot be written as an Euler product (see (1.1)).
Remark. The quadrilateral zeta function $Q(s, a)$ with $a \in \mathbb{Q}$ can be essentially expressed as a linear combination of Euler products from (1.6). Hence, under the GRH and some assumptions on the well-spacing of zeros of Dirichlet $L$-functions, we could show that $Q(s, a)$ with $a \in \mathbb{Q} \cap (0, 1/2) \setminus \{1/3, 1/4, 1/6\}$ has 100% of its zeros on the line $\sigma = 1/2$ if we could replace the function $\sum_{j=1}^{N} b_j L(s, \chi_j)$, where $b_j \in \mathbb{R} \setminus \{0\}$, in Bombieri and Hejhal [3, Theorem A] by the function $\sum_{j=1}^{N} (\beta_{1j} + \beta_{2j} q^s) L(s, \chi_j)$, where $\beta_{1j}, \beta_{2j} \in \mathbb{C} \setminus \{0\}$ and $q$ is a natural number. However, it seems to be extremely difficult for us to relax their assumptions in [3, Theorem A] as above (even when $\beta_{2j} = 0$). It is worth noting that $Q(s, a)$ with $a \in \mathbb{R} \setminus \mathbb{Q}$ can be expressed as neither an ordinary Dirichlet series nor a linear combination of Euler products. Despite these facts, we can prove Theorem 1.2 by modifying the classical ideas of Hardy and Littlewood in [9, Sections 2, 3, and 4] (see also [5, Chapter 11.2] and [22, Chapter 10.7]).

Hamburger proved the following theorem. Let $f(s) = \sum_{n=1}^{\infty} a_n n^{-s}$, the series being absolutely convergent for $\sigma > 1$, and suppose that $(s-1)f(s)$ is an entire function of finite order. Assume that
$$\pi^{-s/2}\,\Gamma\Big(\frac{s}{2}\Big)\, f(s) = \pi^{-(1-s)/2}\,\Gamma\Big(\frac{1-s}{2}\Big)\, f(1-s).$$
Then, we have $f(s) = C\zeta(s)$, where $C$ is a constant.
Hecke [10,Section 1] showed that Hamburger's Theorem can be rewritten as: the following three conditions characterize ζ(2s) up to a constant factor.
-(1)- The function $\varphi(s)$ is meromorphic, and $P(s)\varphi(s)$ is an entire function of finite genus for a suitable polynomial $P(s)$.
-(2)- The function $\varphi(s)$ satisfies the functional equation
$$\pi^{-s}\,\Gamma(s)\,\varphi(s) = \pi^{-(1/2-s)}\,\Gamma\Big(\frac{1}{2}-s\Big)\,\varphi\Big(\frac{1}{2}-s\Big).$$
-(3a)- Both functions $\varphi(s)$ and $\varphi(s/2)$ can be expanded in Dirichlet series that converge in a half-plane.
Moreover, Hecke [10] proved that the expressibility of $\varphi(s/2)$ as a Dirichlet series in -(3a)- can be replaced by the following restriction on the poles of $\varphi(s)$. More precisely, he showed that $\zeta(2s)$ (up to a constant factor) is uniquely determined by -(1)-, -(2)-, and
-(3b)- The function $\varphi(s)$ can be expanded in a Dirichlet series that converges somewhere, and the only pole allowed for $\varphi(s/2)$ is $s = 1$.
It is quite natural to relax the conditions introduced by Hecke. Knopp [12] showed the following, which implies that there are infinitely many linearly independent solutions if we drop the pole condition -(3b)- above, by using the Riemann-Hecke correspondence between ordinary Dirichlet series with functional equations and modular forms, or the generalized Poincaré series.
According to the theorems by Hamburger, Hecke, and Knopp, we can see that the conditions characterizing ζ(s) introduced by Hamburger or Hecke are so polished that a slight weakening of these conditions leads to infinitely many counterexamples, as mentioned by Knopp. Note that Knopp's theorem does not provide any explicit representation for the coefficients a(n) of the Dirichlet series satisfying condition -(3)-. However, as analogues or improvements of Knopp's Theorem, we show in the next subsection that the zeta function Q(s, a) defined explicitly in Section 1.1 fulfills the assumption -(2)- and some modified versions of the conditions -(1)- and -(3a)- or -(3b)-.
1.4. Variations of Knopp's Theorem. Now, we consider some variations of Knopp's Theorem; namely, we properly modify the conditions -(1)-, -(3a)-, and -(3b)- introduced by Hecke and prove that Q(s, a) fulfills the reshaped conditions. First, we have the following immediately from Theorem 1.1.
Next, let ϕ be the Euler totient function and χ be a primitive Dirichlet character of conductor q. Let L(s, χ) := Σ_{n=1}^∞ χ(n) n^{−s} be the Dirichlet L-function. Then, for 0 < r < q, where q and r are relatively prime integers, we have
ζ(s, r/q) = (q^s / ϕ(q)) Σ_{χ (mod q)} χ̄(r) L(s, χ). (1.4)
In addition, let G(r, χ) denote the (generalized) Gauss sum G(r, χ) := Σ_{n=1}^q χ(n) e^{2πirn/q} associated with a Dirichlet character χ. Then we have
Hence, from (1.4) and (1.5), it holds that
Therefore, we have the following from the functional equation (1.2).
For q ∈ ℕ, put H(s, q) := (q^s + q^{1−s})^{−1}. Then, we can see that H(s, q) = H(1 − s, q), and q^s H(s, q) is written as an ordinary Dirichlet series by
q^s H(s, q) = (1 + q^{1−2s})^{−1} = Σ_{k=0}^∞ (−1)^k q^k (q^{2k})^{−s} (σ > 1/2).
From (1.6), the function q^{−s} Q(s, r/q) can be expressed as an ordinary Dirichlet series. Therefore, we can see that
is also written as an ordinary Dirichlet series. Moreover, the function
is entire. Hence, we have the following from Theorem 1.1. Next, we show the following integral representation of π^{−s/2} Γ(s/2) Q(s, a).
Then, for 0 < ℜ(s) < 1, we have
Proof. For ℜ(s) > 1, we have
The first infinite series can be written as
Hence, we obtain
Similarly, when ℜ(s) > 1, one has
The first infinite series can be expressed as
Therefore, when ℜ(s) > 1, we have
For a, u > 0, it is well known that (see [11, p. 13, (6)])
Hence, we easily obtain
G_a(u) = u^{−1} G_a(u^{−1}), u > 0. (2.1)
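The self-reciprocity (2.1) has the same shape as the classical Jacobi theta transformation, which is presumably what the citation [11, p. 13, (6)] refers to. As an illustrative reconstruction (not the source's lost display): if G_a is built from theta-type series such as

\[ \theta_a(u) := \sum_{n\in\mathbb{Z}} e^{-\pi (n+a)^2 u} = u^{-1/2} \sum_{n\in\mathbb{Z}} e^{-\pi n^2/u}\, e^{2\pi i n a} \qquad (u > 0), \]

which follows from Poisson summation, then a product of two such factors transforms with u^{−1}, matching G_a(u) = u^{−1} G_a(u^{−1}).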
By using equation (2.1) and changing the variable u → v^{−1}, we have
Let a_⋆ := min{a, 1 − a}. From the definition of G_a(u) and (2.1), it holds that
Let ℜ(s) > 0. By the definition of G_a(u) and the estimate above, we have
Hence, both integrals in (2.2) converge when ℜ(s) > 0. Clearly, one has
for ℜ(s) < 1. Therefore, we obtain the integral representation in Proposition 2.1.
2.2. Lemmas. We contrast the integrals J_a(t) and I_a(t) given by
− Σ_{a_* = a, 1−a} Σ_{2 ≤ n < t/a_⋆} [ sin(k log(n + a_*)) / ((n + a_*)^{1/2+it} log(n + a_*)) + e^{2πia_* n} sin(k log n) / (n^{1/2+it} log n) ],
where C_1(a) and C_2(a) are some positive constants depending on a.
It is shown in [5, p. 236] that
Then, by Proposition 2.1, the function I_{x,k}(s, a) can be expressed as
By Cauchy's integral theorem and the fact that the function G_a(u) − 1 approaches zero rapidly as u tends to infinity along any ray u = xw in the wedge |ℑ(log x)| ≤ π/4, w ∈ ℝ, the integral above is equal to
This expresses I_{x,k}(s, a) as the transform of an operator and shows, from the Parseval-Plancherel identity (see [5, p. 216, line 7]), that
(1/2πi) ⋯
Note that under the change of variable w → w^{−1}, the form dw becomes −dw/w², the factor sin(k log w)/log w is unchanged, and the function G_a(xw) − 1 − (xw)^{−1} becomes the corresponding expression in x̄, where x̄^{−1} = x, from (2.1). Thus, the integral on the right-hand side of (2.6) is equal to twice the integral from 1 to ∞. The first step is deriving an upper bound for the integral given by (2.6).
Proof. From the inequality
|sin(ky)/y| ≤ k for 0 ≤ y ≤ π/k, and |sin(ky)/y| ≤ y^{−1} for y ≥ π/k,
the integral on the left-hand side of (2.6) is bounded above by (2.7). According to the inequality |A + B|² ≤ 2|A|² + 2|B|², where A, B ∈ ℂ, the first integral in (2.7) is at most
Obviously, the second definite integral is 4k²π^{−2}(1 − e^{−π/k}). From |A + B|² ≤ 2|A|² + 2|B|² again, the first integral is bounded above by
respectively. We divide each double sum Σ_{n,m=0}^∞ and Σ_{n,m=1}^∞ into three sums, one in which n = m, one in which n > m, and one in which n < m, because both double sums in (2.8) converge absolutely. If n = m, we have
for some positive constant C_5(a). From the inequality
Next, we estimate the definite integral of the remaining terms n ≠ m of (2.8) from 1 to e^{π/k}. The terms with m > n are the complex conjugates of those with m < n, so it suffices to estimate the latter. Consider the integral
Because cos δ is positive for small δ > 0, V(w, a) is a monotone increasing function with respect to w, and this integral can be rewritten in terms of the variable V as
where f and V′ are functions of V by composition with the inverse function V → w. By [5, Lemma in p. 197] and the fact that f is decreasing and V′ is increasing, the integral above is not more than
(8k²/π²) · 2f(1)/V′(1) = (16k²/π²) · exp(−π((n + a)² + (m + a)²)w² sin δ) / (2π((n + a)² − (m + a)²) cos δ).
A similar estimate can be applied to the imaginary part, and to both the real and imaginary parts of the following integral:
(8k²/π²) ∫_1^{e^{π/k}} exp(−π(n² + m²)w² sin δ − iπ(n² − m²)w² cos δ + 2πina − 2πima) dw.
Thus, by the inequality above and modifying the proof of (2.10), we have
For simplicity, we put
Then, we have J_a(t) ≥ |I_a(t)| for all t ∈ ℝ, and J_a(t) = |I_a(t)| whenever the interval of integration of I_a(t) contains no roots of Q(s, a) = 0 on the line ℜ(s) = 1/2. The basic idea of the proof is to show that, in a suitable sense, J_a(t) is much larger than |I_a(t)| on average. Thus, estimates of J_a(t) from below are required. Stirling's formula yields
Now, we are in a position to prove the main theorem. Note that the proof below is based on the argument in [5, Chapter 11.2] (see also [22, Chapter 10.7]). When a = 1/4 or 1/2, it holds that Q(s, 1/2) = (2^s + 2^{1−s} − 2)ζ(s) and 2Q(s, 1/4) = (2^{2s} − 2^s + 2^{2−2s} − 2^{1−s})ζ(s) by (1.6) (see also [17, Section 2.2]). Therefore, we suppose 0 < a < 1/4 or 1/4 < a < 1/2, which implies that cos(2πa) ≠ 0 (see Lemma 2.2).
Proof of Theorem 1.2. Let ν be the number of zeros of Q(1/2 + it, a) in the interval {0 ≤ t ≤ B + k}, and let the line ℜ(s) = 1/2 be divided into intervals of length k; for each of the ν zeros, strike out the interval which contains it and the intervals which adjoin this one. Let S be the subset of {A ≤ t ≤ B} consisting of points which do not lie in the stricken intervals. Then, the total length of the intervals of S is not less than B − A − 3νk, because a length of at most 3k was stricken for each zero. Note that |I_a(t)| = J_a(t) for all t ∈ S. Put I := ∫_S |I_a(t)| dt. Then, by Lemma 2.5 and the fact that there are no zeros between t − k and t + k, we have
Therefore, it holds that
K_1 kδ^{−3/4} − K_2 k²δ^{1/4}ν − K_3 δ^{−3/4} − K_4 k²δ^{−1/4} ≤ K_5 δ^{−3/4}(Kk + εk²)^{1/2},
which is equivalent to
ν ≥ ((K_1 k − K_3 − K_5 (Kk + εk²)^{1/2}) / (K_2 k²)) δ^{−1} − (K_4 / K_2) δ^{−1/2}.
We can make the coefficient of k^{−1}δ^{−1} on the right-hand side positive by choosing ε > 0 and k^{−1} > 0 to be sufficiently small. Hence, with this fixed k > 0, it has been shown that, for all sufficiently small δ > 0, the number of roots on the line segment from 1/2 to 1/2 + 2iδ^{−1} is greater than K_6 δ^{−1} − K_7 δ^{−1/2} with K_6 > 0. | 2019-10-22T08:53:13.000Z | 2019-10-22T00:00:00.000 | {
"year": 2021,
"sha1": "22812f3d3a9618400e149e99c89b50a31feaf814",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1910.09837",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "22812f3d3a9618400e149e99c89b50a31feaf814",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
247478204 | pes2o/s2orc | v3-fos-license | Ramadan Diurnal Intermittent Fasting Is Associated With Attenuated FTO Gene Expression in Subjects With Overweight and Obesity: A Prospective Cohort Study
Aim and Background A growing body of evidence supports the impact of intermittent fasting (IF) on normalizing body weight, and the interaction between body genes and environmental factors shapes human susceptibility to developing obesity. The FTO gene is one of these genes, with metabolic effects related to energy metabolism and body fat deposition. This research examined the changes in FTO gene expression upon Ramadan intermittent fasting (RIF) in a group of metabolically healthy subjects with overweight and obesity. Methods Sixty-three (63) subjects were recruited: 57 subjects with overweight and obesity (17 males and 40 females, mean age 38.4 ± 11.2 years; BMI = 29.89 ± 5.02 kg/m2) were recruited and monitored before and at the end of Ramadan month, and 6 healthy subjects with normal BMI (21.4 ± 2.20 kg/m2) were recruited only to standardize the reference for normal levels of FTO gene expression. At the two time points, anthropometric, biochemical, and dietary assessments were undertaken, and FTO gene expression tests were performed using RNA extracted from whole blood samples. Results In contrast to normal-BMI subjects, the relative gene expression in subjects with overweight/obesity was significantly decreased at the end of Ramadan (−32.30%, 95% CI −0.052 to −0.981) in comparison with the pre-fasting state. Significant reductions were found in body weight, BMI, fat mass, body fat percent, hip circumference, LDL, IL-6, and TNF-α (P<0.001), and in waist circumference (P<0.05), whilst HDL and IL-10 significantly increased (P<0.001) at the end of Ramadan in comparison with the pre-fasting levels. Binary logistic regression analysis for genetic expression showed no significant association between high energy intake, waist circumference, or obesity and FTO gene expression. Conclusions RIF is associated with downregulation of FTO gene expression in subjects with obesity, and this may explain, at least in part, its favorable metabolic effects. Hence, RIF presumably entails a protective impact against body weight gain and its adverse metabolic-related derangements in subjects with obesity.
INTRODUCTION
Obesity is one of the most prevalent chronic diseases; with its comorbidities and long-term consequent mortality, it has become a major challenge to global health (1). Obesity is a complicated, multifaceted disease that develops from the interaction of cellular, molecular, genetic, metabolic, physiologic, behavioral, cultural, and socioeconomic influences (2, 3). Notably, cardiovascular disease, diabetes, renal disorders, and neoplasms were the major causes of high body mass index (BMI)-related disability-adjusted life years (DALYs), accounting for 89.3 percent of all high-BMI-related DALYs, with the BMI-related disease burden varying significantly depending on the Socio-Demographic Index (SDI) (4). Despite the tremendous efforts in the MENA region to combat the problem, obesity is still a major health problem due to several factors, including the adopted dietary patterns and physical inactivity (5, 6). Genetic predisposing factors represent one of the major contributing factors in the etiopathogenesis of obesity and its consequent complications, with some studies revealing that high BMI is 25-40% heritable (7). However, to affect body weight, genetic predisposing factors often need to be coupled with environmental and behavioral triggering factors (8, 9).
With the progressive advancement of genome-wide association studies, more than 100 loci have been identified as associated with obesity and its related traits (10). Among these genetic loci, the fat mass and obesity-associated (FTO) gene has emerged as one of the influential genes with remarkable impact, with a strong effect on obesity and related biological functions such as adipogenesis and energy balance regulation (11). Recent large-scale analyses found that the obesity-risk allele (rs9939609 A allele) of FTO is associated with increased food intake (12, 13), and previous studies also reported that the FTO obesity-risk allele was associated with a reduced response to hunger and satiety after meals in adults and children (14)(15)(16).
Nowadays, intermittent fasting (IF) is regarded as an emerging, effective, and low-cost dietary intervention that helps to promote health and prevent disease and aging (17). Several reports showed the benefits of different styles of IF, including time-restricted eating (TRE, a form of IF that involves confining the eating window to 4-10 h and fasting for the remaining hours of the day) (18), modified fasting regimens allowing 20-25% of energy needs to be consumed on scheduled fasting days, and alternate-day fasting. The benefits are well evident for metabolic disorders, as well as cancer, obesity, diabetes, and neurological disorders (17, 19, 20).
Among the widely observed and extensively examined types of IF is the religious form observed during the month of Ramadan (RIF) (21). Ramadan is the ninth month of the lunar calendar, during which healthy adults are mandated to refrain from eating and drinking (including water) from dawn to sunset, for a period that extends from 12 to 17 h depending on the solar season and geographical location (22). This pattern of fasting is associated with dietary changes (in both food quality and quantity) (23) and lifestyle changes (including sleep quality and quantity) (24), as well as circadian rhythm hormonal changes (25) that may harbor changes in gene expression.
With the expansion of nutrigenomic studies, growing attention has been directed toward examining the effect of different regimens of IF on the expression of variable genes related to human health and diseases (16, 35). However, there is a paucity of studies tackling the effect of RIF and the associated dietary and lifestyle changes on the expression of specific genes related to human health and disease. Among these, only two studies examined the impact of RIF on the anti-oxidative stress genes (TFAM, Nrf2, SOD2) and metabolism-controlling genes (SIRT1, SIRT3) (36), and on the Circadian Locomotor Output Cycles Kaput (CLOCK) gene and other genes related to circadian rhythmicity (37). However, the relationship between RIF and the expression of obesity- and body fat-controlling genes is still to be investigated. In the former study, RIF was associated with significant increases in the relative expressions of the antioxidant genes (TFAM, SOD2, and Nrf2) in obese subjects in comparison to the counterpart expressions in healthy-weight subjects, with percent increments of 90.5, 54.1, and 411.5% for the three genes, respectively. However, the metabolism-controlling gene SIRT3 showed a highly significant downregulation, accompanied by a clear trend for reduction in the SIRT1 gene, at the end of Ramadan month, with percent decrements of 61.8 and 10.4%, respectively (36). For the latter, profound changes were reported in the diurnal expression of CLOCK, a central component of the circadian molecular clock, during Ramadan compared to the non-fasting month of Sha'aban (the month before Ramadan) (37). One study assessed the association of common single-nucleotide polymorphisms (SNPs) in the CLOCK and FTO genes (rs1801260 and rs9939609, respectively) with standardized BMI scores, and the impact of dietary and lifestyle modification, in school-age children (38). It was found that sex is a potential modifier of the association between the CLOCK polymorphism and BMI z-scores in school-age children (38), and that the FTO SNP rs9939609 did not significantly modify the effect of the intervention on BMI z-scores at follow-up or on changes of BMI z-scores (38).
Given the proven impact of RIF in lowering body weight (28), body fatness (29), visceral fat content (30), and satiety- and eating-controlling hormones (leptin, adiponectin, ghrelin) (25), it becomes rational to examine the relationship between the observance of RIF and the expression of the FTO gene. Considering the preventive effect of IF, and of RIF in particular as a unique diurnal TRE model (39), on the above-mentioned obesity-related indicators, and the principal role of FTO in controlling satiety, food intake, body fatness, and obesity risk (11, 13-16), the current work stemmed from the hypothesis that observing RIF would be associated with reduced expression of the FTO gene in fasting people with obesity. Therefore, the current work was designed to find out how the observance of RIF by fasting people with obesity is associated with changes in the genetic expression of FTO.
Participant Selection
In total, 63 subjects were recruited: 57 subjects with overweight and obesity (17 males and 40 females, mean age 38.4 ± 11.2 years; BMI = 29.89 ± 5.02 kg/m2) were recruited and monitored before and at the end of Ramadan month, and 6 healthy subjects with normal BMI (21.4 ± 2.20 kg/m2) were recruited only to standardize the reference for normal levels of FTO gene expression. All subjects who visited the University Hospital Sharjah (UHS), UAE, for screening were recruited for this study. All subjects were Arabs from the Arabian Gulf, Iraq, Egypt, Sudan, Tunisia, and the Levant countries (Syria, Jordan, Lebanon, and Palestine). The study protocol was designed and conducted following the Declaration of Helsinki and approved by the UHS Research Ethics Committee (Reference no. REC/16/12/16/002). All enrolled subjects (n = 63) were provided with an information sheet describing the research plan, objectives, and requirements of participation. Subjects were recruited using personal communication, social media, and institutional emails. All subjects attended the UHS for screening and investigations and provided signed informed consent to participate in this study. Subjects were men and women of either normal weight or overweight/obesity (BMI > 25 kg/m2) who decided to fast Ramadan and were willing to participate in this study. We collected basic and sociodemographic data using a self-report questionnaire that covered medical history and demographic information. The questionnaire was administered in individual face-to-face interviews; a trained research assistant conducted all interviews. The exclusion criteria were a history of metabolic syndrome, diabetes, or cardiovascular disease; taking regular medications for any chronic disease; following a weight-reducing diet; a history of bariatric surgery within the last 6-9 months before commencing Ramadan fasting; and being a pregnant or peri-menopausal woman.
Study Design
A prospective observational study design was used to investigate the effect of RIF on FTO gene expressions along with variable anthropometric, metabolic, and inflammatory markers in subjects with overweight and obesity. Data were collected at baseline (2-7 days before RIF) and after completing 28-30 consecutive days of diurnal RIF. During the fasting month of Ramadan, individuals abstain from all foods and drinks (including water) from dawn to sunset, with the average fasting duration being 15 h per day. Subjects were not requested to follow any dietary or physical activity regimens or recommendations during any stage of this study. All subjects were asked to pursue habitual lifestyle patterns during both fasting and non-fasting hours. According to Islamic laws of fasting, menstruating women are exempted from observing Ramadan fasting during their period; hence, the fasting period for participating women was less than that for men (23-25 vs. 28-30 days).
Anthropometric Assessment
Anthropometric measurements were taken at two time points (before Ramadan and after completion of 28-30 fasting days). Body weight, fat mass, body fat percentage, and fat-free mass were measured using segmental multi-frequency bioelectrical impedance analysis (DSM-BIA; TANITA MC-980, Tokyo, Japan) before and at the end of the fasting month. The DSM-BIA machine measured the visceral fat rating (from 0 to 100), and this value was converted into a visceral fat surface area by multiplying the obtained value by 10, consistent with the manufacturer's instructions. Height was measured using a fixed stadiometer to the nearest 0.1 cm. BMI was calculated as weight (kg) divided by height squared (m2). Waist and hip circumference were measured to the nearest 0.01 m using a non-stretchable measuring tape (Seca, Hamburg, Germany), and their ratio was calculated accordingly.
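As a minimal arithmetic sketch of the two derived quantities described above (BMI, and the manufacturer's rating-to-area conversion), with illustrative values that are not study data:

def bmi(weight_kg, height_m):
    # BMI = weight (kg) / height^2 (m^2), as described in the text
    return weight_kg / height_m ** 2

def visceral_fat_area(rating):
    # The DSM-BIA visceral fat rating (0-100) is multiplied by 10,
    # per the manufacturer's instructions cited in the text.
    return rating * 10

print(round(bmi(82.0, 1.65), 2))  # 30.12, i.e., within the obesity range
print(visceral_fat_area(12))      # 120 (area units per the manufacturer)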
Dietary Intake Assessment
No special dietary recommendations or food regimens were given to the study subjects during any stage of the study, and all subjects were asked to pursue their habitual dietary patterns during the eating period before and during Ramadan. Dietary intakes were assessed by trained nutritionists using the 24-h recall technique on 3 days (one weekend day and two weekdays) at the two time points (before and at the end of Ramadan fasting). Printed two-dimensional food models were used to help study subjects approximate the portion sizes eaten. Dietary intakes of energy (calories), macronutrients (carbohydrates, protein, fats, and water), and micronutrients (vitamins and minerals) were estimated using the Food Processor software (version 10.6, ESHA Research, Salem, OR, USA).
Physical Activity Level
The Dietary Reference Intakes classification for general physical activity level was used to assess subjects' level of physical activity (40). This classification depends on the general physical exercise pattern. Subjects were considered highly active if they performed at least 2 h per day of moderate-intensity physical exercise, or 1 h of vigorous exercise, in addition to daily living activities. Subjects were considered moderately active if they performed more than 1 h per day of moderate-intensity exercise in addition to daily living activities. Subjects who performed 30 min to 1 h per day of moderate-intensity physical exercise in addition to daily living activities were considered to have low activity. Finally, subjects who performed daily living activities without other physical exercise were considered sedentary (40).
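The DRI-based classification just described is a simple decision rule; a sketch of it follows, with thresholds as given in the text (the function name and inputs are illustrative):

def activity_level(moderate_hours, vigorous_hours=0.0):
    # Categories per the DRI classification described above (40),
    # all on top of daily living activities.
    if moderate_hours >= 2 or vigorous_hours >= 1:
        return "highly active"
    if moderate_hours > 1:
        return "moderately active"
    if moderate_hours >= 0.5:
        return "low active"
    return "sedentary"

print(activity_level(0.0))   # sedentary (about 91% of this cohort)
print(activity_level(0.75))  # low active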
Blood Sampling
A sample of 10 ml of blood was collected from subjects at baseline (before commencing fasting) and at the end of the fasting month, in 3 different tubes: red top (plain) for serum, purple top (EDTA) for plasma and RNA extraction, and gray top (sodium fluoride) for glucose level. At both time points, blood samples were collected after at least 8 h of fasting. The samples were collected between 11 a.m. and 1 p.m. to eliminate the effect of timing and dietary intake on the measured biochemical parameters and to ensure consistency in the duration of fasting at the two time points. Collected blood samples were divided into two aliquots. One aliquot was centrifuged at 2,500 rpm for 15 min within 1 h of collection; the serum was aliquoted, coded, and stored at −80°C until it was used for biochemical analysis. The second aliquot was used for RNA extraction, as explained below.
Biochemical Assay
In this study, we used a chemiluminescent immunoassay (CLIA) on a fully automated clinical chemistry analyzer (Adaltis, Pchem1, Italy) to quantify fasting glucose, total cholesterol (TC), LDL-cholesterol, HDL-cholesterol, and triglycerides (TG) at the two time points. The pro-inflammatory cytokines (IL-6 and TNF-α) and the anti-inflammatory cytokine (IL-10) were quantified using a multiplex assay (Luminex, Bio-Plex Pro TM Human Cytokine Plex Assay).
Blood Pressure Measurement
Blood pressure was measured before blood sampling using a digital blood pressure monitor (GE, USA), with subjects in an erect, seated position after a 5-min resting period.
FTO Gene Expression: RNA Extraction, Reverse Transcription, and qPCR
RNA was extracted using the column-based Total RNA Purification kit from Norgen (Thorold, Canada) and reverse transcribed to cDNA with the QuantiTect Reverse Transcription kit from Qiagen (Hilden, Germany), according to the manufacturers' instructions. cDNA and primer concentrations were optimized to obtain a single amplification peak (10 ng of cDNA and 0.1 µM of each primer per reaction). The qPCR reaction was performed in a volume of 20 µl, including 10 ng of cDNA with 5x HOT FIREPol EvaGreen qPCR Mix Plus (ROX) (Solis BioDyne). The cycling conditions included initial activation of the polymerase for 15 min at 95°C, followed by 45 cycles of 15-s denaturation at 94°C, annealing at 55°C for 30 s, and extension at 72°C for 30 s. The forward and reverse primers used in the study are presented in Table 1. For each sample, the expression of each gene was normalized to the housekeeping gene ribosomal protein L18 (RPL18) at the same time point; RPL18 was chosen as it showed less variation than the GAPDH and actin housekeeping genes during the initial optimization.
Three different negative controls were used in this analysis: (1) no enzyme added, (2) no mRNA added, and (3) water added instead of cDNA (NTC control). The minimum amounts of cDNA, primers, and SYBR green were used per reaction to obtain a specific signal and avoid false amplification. The relative expression was expressed as fold change according to Livak and Schmittgen (41) and was presented as mean and standard deviation, as described elsewhere (42). Considering the lack of a reference range for FTO gene expression, six subjects with normal BMI (21.4 ± 2.20 kg/m2) were recruited only to obtain the normal FTO expression levels at the two time points (before and after Ramadan fasting). For subjects with overweight/obesity, at each time point, FTO gene expression was first calculated relative to the housekeeping (RPL18) gene and then as a fold change compared to the gene expression of the normal reference levels obtained from subjects with normal BMI.
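The fold-change computation cited above (Livak and Schmittgen's 2^(-ddCt) method, normalizing FTO to RPL18 and calibrating against the normal-BMI reference group) can be sketched as follows; the Ct values are invented for illustration and are not study data:

def fold_change(ct_fto, ct_rpl18, ct_fto_ref, ct_rpl18_ref):
    # Livak & Schmittgen 2^(-ddCt):
    # dCt normalizes the target gene (FTO) to the housekeeping gene (RPL18);
    # ddCt then compares the sample with the calibrator (normal-BMI reference).
    d_ct_sample = ct_fto - ct_rpl18
    d_ct_reference = ct_fto_ref - ct_rpl18_ref
    dd_ct = d_ct_sample - d_ct_reference
    return 2 ** (-dd_ct)

# Illustrative Ct values only:
print(round(fold_change(27.8, 22.1, 26.9, 22.0), 3))  # 0.574; values <1 indicate downregulation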
Statistical Analyses
Statistical analyses were performed using SPSS 24 (IBM, Armonk, NY, USA) and reported based on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines (43). The primary outcome measure was the change in the genetic expression of the FTO gene between the two time points. We estimated that 51 subjects would provide 80% power to detect a significant difference of 5% in genetic expression between baseline (pre-fasting) and post-fasting using a two-tailed paired-samples t-test with α = 0.05. With an expected dropout rate of 10%, 56 subjects were planned for enrollment. Tests for normality were included in the model. The variables were expressed as the mean ± standard deviation (SD). An independent-samples t-test was used to compare baseline characteristics between males and females. Two-tailed paired-samples t-tests were used to compare within-subject changes from baseline (pre-fasting) to the post-fasting time point. Binary logistic regression [odds ratio (OR), 95% confidence interval (CI)] was calculated considering genetic expression as the dependent variable, and sex (male vs. female), caloric intake (high, >2,000 kcal vs. low, <2,000 kcal), and waist circumference as independent variables. We recoded the waist circumference variable as high or normal according to the sex of the participant, using the following criteria: high, ≥102 cm vs. normal, <102 cm for men, and high, ≥88 cm vs. normal, <88 cm for women. Linear regression was used to determine the relationship between the change in FTO gene expression (dependent variable) before and at the end of Ramadan and biochemical and anthropometric variables (independent variables). All data were tested at a 5% level of significance (P < 0.05).
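A sketch of the kind of a priori power calculation described above (80% power, two-tailed α = 0.05, paired-samples t-test, which reduces to a one-sample test on the within-subject differences). The effect size below is an assumed placeholder, since the paper reports only the resulting n = 51:

from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # one-sample/paired t-test power model
n = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.80,
                         alternative="two-sided")
print(round(n))  # about 52 with the assumed effect size, close to the reported 51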
RESULTS
Fifty-seven subjects with overweight/obesity (17 males and 40 females, mean age 38.42 ± 11.18 years; BMI = 29.89 ± 5.02 kg/m2) were recruited and monitored before and at the end of fasting the whole month of Ramadan. The majority of subjects were females (about 70%), and most subjects were married (about 83%), university graduates (around 77%), and sedentary (about 91%). About 91% of the study population were from non-Gulf Cooperation Council (GCC) countries (Table 2).
The basic and anthropometric characteristics of the subjects are shown in Table 3. Body weight and composition, glucose homeostasis, blood pressure, and inflammatory markers varied significantly between pre- and post-Ramadan fasting, as shown in Tables 4-6. By the end of the Ramadan fasting month, body weight, BMI, fat mass, body fat percent, waist circumference, and hip circumference were significantly (P < 0.05) reduced compared to pre-fasting levels. LDL-C, IL-6, and TNF-α were significantly reduced as well (P < 0.05) at the end of the fasting month, while HDL-C and interleukin 10 were significantly increased (P < 0.05) (Table 5). Changes in dietary intake are shown in Table 4. Significant increases were reported in the dietary intake of total sugars, PUFA, vitamin C, omega-3 fatty acids, lycopene, and vitamin E in comparison with the pre-fasting intakes, while the intake of protein and cholesterol decreased in comparison with the pre-fasting intakes (Table 6). The relative genetic expression in subjects with overweight/obesity showed significant downregulation of the FTO gene at the end of Ramadan in comparison with the pre-fasting level, with a percent reduction of about −32% (95% CI −0.052 to −0.981) (Figure 1). Binary logistic regression analysis for genetic expression showed no significant (P > 0.05) association between high energy intake (≥2,000 kcal vs. <2,000 kcal), waist circumference (high, ≥102 cm vs. normal, <102 cm for men, and high, ≥88 cm vs. normal, <88 cm for women), or obesity (BMI ≥ 30 vs. BMI < 30) and FTO gene expression (Supplementary Table 1). Linear regression analysis showed a significant, but weak, positive association between hip circumference and FTO gene expression at the end of the Ramadan fasting days (Supplementary Table 2).
DISCUSSION
The current study provides the first evidence of a link between RIF and FTO gene expression in a cohort of overweight/obese subjects who observed Ramadan fasting (28 days for an average of 15 h/day). There was an association between reduced FTO expression and favorable effects, as demonstrated by suppression of pro-inflammatory markers and improvement of the lipid profile. Above all, this association was also accompanied by a reduction of BMI and waist/hip ratio, denoting that the effect of RIF may be explained, at least in part, by its link to FTO expression. Several human studies have shown beneficial effects of IF (17, 19, 21, 44, 45). Recently, a growing body of evidence has suggested substantial health implications for the religious form of IF, with the Ramadan model being one of the most extensively studied forms, with variable anthropometric, metabolic, and inflammatory impacts (25-29, 31, 32). The main distinctive feature of RIF in comparison to other patterns of IF is that RIF involves diurnal, dawn-to-sunset IF for 29-30 consecutive days with complete abstinence from food and drink, including water. Other models of IF include modified fasting regimens (involving consumption of 20-25% of energy needs on scheduled fasting days, such as the 5:2 diet), TRE (which allows ad libitum nutrient and energy intake within specific time frames, inducing regular, extended, mostly nocturnal fasting intervals), and alternate-day fasting (alternating fasting days with eating days) (45).
In the current study, RIF reduced body weight, BMI, body fat percent, and waist circumference; RIF also resulted in a reduction of LDL, with increased HDL. This supports previous reports on the favorable effect of RIF on the cardiometabolic risk factor profile. In our recent meta-analysis, we demonstrated the favorable effect of IF on reducing total cholesterol, LDL, and triglyceride levels, as well as diastolic blood pressure and heart rate (26). That study included subgroup analyses of age, sex, and duration of fasting (as confounding factors), and the significant favorable effect of RIF was constantly evident in all subgroups. Concordant with our findings in the current and previous studies, Mindikoglu et al. showed that RIF resulted in a significant reduction of BMI and waist circumference and an improvement in blood pressure, with an anti-cancer, anti-diabetes, and anti-aging serum proteome response, providing another dimension of the benefits of IF that is likely to be promoted by cytokine modulation (46).
The observed changes in total energy and dietary intakes of different macro- and micronutrients have been repeatedly shown in several studies (30, 32), and are consistent with recent work that compared dietary intakes from different food groups and macronutrients using year-round dietary intakes (23).
FTO expression differs between underfeeding and fasting conditions and displays tissue-specific differences in mouse models of obesity, but it is not known whether these differences are the cause or the consequence of obesity (47). FTO mRNA expression in mice and humans is broadly distributed in many organs, with notably high levels of expression in the brain and hypothalamus, which regulate energy balance and hunger (48, 49). Previous studies showed that tissue-specific genes may be expressed in a wider variety of tissues. When the transcriptome of peripheral blood mononuclear cells (PBMCs) was compared to that of liver, kidney, stomach, spleen, prostate, lung, heart, colon, and brain, more than 80% of differentially expressed genes were shared (50). Other reports also suggested that PBMCs can represent a surrogate indicator in dietary investigations to identify differentially expressed genes in population studies (51, 52). In mice, the expression of FTO may be influenced by dietary condition (53). When mice are fasting, there is a strong stimulus to eat, and their hypothalamic FTO mRNA expression is significantly reduced compared to their fed counterparts. Supplementation with the anti-hunger hormone leptin does not reverse this effect, which suggests that the reduced hypothalamic FTO expression observed during fasting is independent of leptin levels (54). These findings show that FTO is downregulated during fasting and increased during feeding, and that a decrease in FTO expression or activity might be a signal that encourages overeating and obesity. Of note, the results in rats are different, possibly because of inconsistency in the conditions of different studies and the different sensitivity to starvation among species. Murine FTO gene expression was shown to be downregulated under fasting conditions, suggesting that obese mouse models mimic the fasted state, possibly contributing to their over-eating (47).
Interestingly, FTO was highly expressed in the cerebellum, salivary gland, and kidney of adult pigs, whereas it was not detected in blood (55). The latter study showed that FTO was positively associated with energy intake in the pancreas, and with age in muscle, adding to the multiple factors that affect FTO expression. Such variation can be explained by the different metabolic and secretory activities of different tissues at different ages. Moreover, as previously described for leptin, diurnal variation may also affect the level of FTO expression (56). The link of FTO expression to fasting and obesity is not yet fully elucidated in humans, where more factors may interplay to determine this effect. Such factors include food predilection, dietary patterns, and the complexity of gut-brain networking, including leptin and ghrelin, among other key players (57). Our current study highlights the reduction of FTO in subjects with overweight/obesity as a consequence of observing diurnal IF for four consecutive weeks.
Furthermore, the current study showed a reduction in both pro-inflammatory cytokines IL-6 and TNF-α. Concordantly, in a study by Faris et al., significant reductions in IL-6, IL-1β, and TNF-α were reported in fasting subjects of both sexes during Ramadan, when compared to basal pre-fasting values obtained 1 week before Ramadan (32). Furthermore, this finding is consistent with the results of a meta-analysis and original research showing that RIF is associated with significant reductions in serum pro-inflammatory cytokines (IL-6, IL-1β, and TNF-α), hs-CRP, the oxidative stress marker malondialdehyde, and urinary 15-f(2t)-isoprostane (32, 33). The current findings on the significant reductions in lipid profile components (TC, LDL, and TG) and increased HDL are consistent with systematic reviews and meta-analyses showing that RIF is associated with such improvements in cardiometabolic risk factors (26, 27). As shown by Faris and colleagues (30), these reductions in pro-inflammatory cytokines and other inflammatory adipokines were reported to be associated with significant reductions in visceral adiposity in obese subjects who observed the 4-week dawn-to-sunset IF of Ramadan. Experimentally, fasting reduced TNF-α in visceral white adipose tissue, IL-1β in subcutaneous tissue, as well as insulin and leptin in the plasma of stressed rats (58).
The current study showed that RIF increased IL-10, which is consistent with a previous study by Faris et al. among obese subjects observing RIF, when compared with pre-fasting levels (30). IL-10 has a strong immune-modulation activity (59). It is thought of as an anti-inflammatory cytokine that can suppress cytokine production by macrophages and the function of neutrophils (60, 61), but it can activate CD8+ T cells and natural killer (NK) cells for anti-viral immunity, denoting its dual role in immunity (62, 63). Intriguingly, the IL-10 signaling pathway was one of the top differentially expressed gene (DEG) pathways in COVID-19-infected normal epithelium vs. mock-infected cells (64) and could be, along with the reduction in other metabolic and inflammatory risk factors, involved in the plausible protective effect of Ramadan fasting against COVID-19 infection (65).
FTO expression did not correlate with high energy intake, waist circumference, or obesity, as shown by the binary logistic regression analysis performed. These findings denote that RIF exerts its beneficial effects independently of dietary and anthropometric factors, through different pathways that may or may not involve weight reduction and lower energy intake. Such dissociation between the beneficial effect of IF and caloric restriction is supported by previous work on rodents (66), which found that IF has beneficial effects on glucose regulation and neuronal resistance to injury in experimental mice that are independent of caloric intake. Several proteins interact with FTO; the most significant of these is melanocortin receptor 4 (MCR4), which is co-expressed with FTO in some species. MCR4 is involved in energy balance as well as somatic growth (67).
Pharmacologic treatments cannot reset the circadian clock rhythm; thus, there is an urgent need for an effective intervention to reset the circadian clock and prevent metabolic syndrome and metabolic syndrome-induced cancers (68, 69). However, IF practiced exclusively during human activity hours can reset the circadian rhythm. Therefore, resetting the disrupted circadian clock in humans by consecutive daily IF could provide a primary strategy to improve metabolic syndrome and reduce the incidence of metabolic syndrome-induced cancer (68, 69). RIF upregulated several key regulatory proteins that play a key role in tumor suppression, DNA repair, insulin signaling, glucose and lipid metabolism, the circadian clock, cytoskeletal remodeling, the immune system, and cognitive function (70).
Nonetheless, our results indicate a lack of association between FTO gene expression and caloric intake among the fasting subjects. This notion may appear inconsistent with the evident association of intakes of calories, carbohydrates, and fats with the FTO genotype (67), given some studies that showed a correlation between the FTO risk allele and high FTO gene expression (68, 69). Moreover, it has been reported that the FTO genotype may influence dietary macronutrient intakes, body weight, energy balance, appetite, and hormone secretion (70, 71). The SNPs of the FTO gene are likely associated with food intake and obesity through modifying the expression of other genes (72). Until now, there has been no strong evidence for an association between the FTO A risk allele and the level of gene expression. There is a recognized association of the A risk allele of FTO rs9939609 with overweight worldwide (48, 71) and in several Arab populations (72-75). In this study, we did not investigate the subjects' genotypes, as the study group is not from the same ethnicity. In our previous study on the Emirati population, the rs9939609 AA genotype was significantly associated with higher BMI in females, but not in males (76). In another study by our group, subjects with the rs9939609 AA genotype showed significantly higher fasting glucose compared to other genotypes, with a trend toward higher insulin levels and HOMA2-IR (77). We recently correlated the FTO genotypes, as well as FGF21 genotypes, with dietary patterns in the Emirati population (78). Whether the outcome of a caloric intervention is affected by the FTO rs9939609 A risk allele is only weakly evidenced (79).
A few limitations should be considered when interpreting the findings of the current work. First, causality cannot be inferred, as the design is observational and prospective in nature. Hence, undetected confounding factors could be involved in the downregulation or upregulation of the tested FTO gene upon RIF. Changes in circadian rhythm and sleep patterns, which have been reported to affect the expression of some genes (37), are among the factors that may be implicated in changing FTO gene expression. Tissue- and age-specific variations of FTO expression also add to the complexity of interpreting its expression in blood. Although the practice of physical exercise did not change during Ramadan month in comparison with the pre-fasting stage, this factor may still be of paramount effect, and objective measurements of physical exercise levels should be applied in forthcoming studies.
CONCLUSIONS
RIF is linked to the downregulation of FTO gene expression in subjects with obesity, which might explain, at least in part, its beneficial metabolic effects. Consequently, RIF may have a preventive effect against body weight increase and the associated negative metabolic-related derangements in people with overweight/obesity, possibly through modulation of FTO gene expression.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Research Ethics Committee, University of Sharjah. The patients/participants provided their written informed consent to participate in this study. | 2022-03-17T13:27:55.820Z | 2022-03-17T00:00:00.000 | {
"year": 2021,
"sha1": "8aab1834aa7a2f66ec0c9936cf51a8d34585b76e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2021.741811/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "8aab1834aa7a2f66ec0c9936cf51a8d34585b76e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202960565 | pes2o/s2orc | v3-fos-license | Joining of Microwave Components by Use in Micro Plasma Arc Welding Process for Space Applications
: Microwave components play a crucial role in communication systems, conveying all types of information, such as voice, data links, wireless networks, and satellite and spacecraft communications. One of the prominent microwave components extensively used in communication satellites is the amplifier. In this project work, fabrication, joining, and testing of microwave components were successfully performed. The amplifier components used in this work were fabricated using high-precision lathe machines. A special focus was given to the development of a circumferential edge joint process for microwave components using the Micro Plasma Arc Welding process on Kovar, Monel-404, and soft iron type microwave component materials, together with the necessary fixtures used during welding for joining the microwave components. Leak-proof joints were developed to sustain vacuum pressure of the order of 10^-10 Torr. The Micro Plasma Arc Welding process provides quick, better-quality, and defect-free welds in the microwave component materials as compared to other fusion welding processes. During this project work, Micro Plasma Arc Welding process parameters such as welding current, voltage, electrode-to-workpiece distance, duty cycle, and gas flow were studied. Moreover, various non-destructive tests, such as the X-ray radiography test and the dye penetration test, with a special focus on the helium leak proof test, were performed on the samples joined using the Micro Plasma Arc Welding process. Finally, the suitability of an advanced welding process like Micro Plasma Arc Welding for microwave components has been proved.
I. INTRODUCTION
Welding is the process of joining together two pieces of metal so that bonding takes place at their original boundary surfaces. The two parts to be joined are melted together, heat or pressure or both are applied, and metal may or may not be added for the formation of a metallic bond. During welding, the workpieces to be joined are melted at the interface, and after solidification, a permanent joint is achieved. Sometimes a filler material is added to the weld pool of molten material, which after solidification gives a strong bond between the materials. Weldability of a material depends on different factors, such as the metallurgical changes that occur during welding, changes in hardness in the weld zone due to rapid solidification, the extent of oxidation due to reaction of the material with atmospheric oxygen, and the tendency of crack formation at the joint position. Plasma arc welding is an arc welding process similar to gas tungsten arc welding (GTAW). The key difference from GTAW is that in PAW, by positioning the electrode within the body of the torch, the plasma arc can be separated from the shielding gas envelope. The plasma is then forced through a fine-bore copper nozzle, which constricts the arc, and the plasma exits the orifice at high velocity and a temperature approaching 28,000 degrees Celsius.
A. Basic Mechanism of Plasma Arc Welding Process
Plasma arc welding is an arc welding process wherein coalescence is produced by heating with a constricted arc set up between an electrode and the workpiece (transferred arc) or between the electrode and the constricting nozzle (non-transferred arc). The process uses two inert gases: one is the plasma gas (orifice gas), and the second is the shielding gas. The orifice gas is the gas directed through the torch to surround the electrode; it becomes ionized in the arc to form the plasma and issues from the orifice in the torch nozzle as the plasma jet. For some operations, auxiliary shielding gas is provided through an outer gas cup, similar to gas tungsten arc welding. The arc-constricting nozzle through which the arc plasma passes has two main dimensions: orifice diameter and throat length. Filler metal may or may not be added.
B. Amplifier Microwave Components Materials
There are different types of materials used in the fabrication of amplifier components. The micro plasma arc welding process is to be performed on materials like Kovar, Monel-404, and soft iron. 1) Kovar: Kovar is used for the fabrication of amplifier components because of its unique thermal expansion property. The coefficient of thermal expansion of Kovar is 5×10^-6/K over the temperature range of 30°C-200°C. 2) Monel-404: Monel 404 alloy is used for the fabrication of amplifier components because it is used primarily in specialized electrical and electronic applications. The composition of Monel 404 alloy provides a very low Curie temperature, low permeability, and good brazing characteristics. Monel 404 can be welded using fusion welding techniques but cannot be hot worked. Monel 400 series alloys have good machinability and provide good weld joints.
3) Soft Iron:
Soft iron is used in the fabrication of amplifier components because it is a low-carbon ferrous alloy that can be easily magnetized and demagnetized.
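To put the quoted Kovar figure in perspective, a back-of-envelope thermal expansion estimate follows; the part length is an assumed example, not a dimension from this work:

alpha_per_k = 5e-6    # Kovar CTE quoted in the text, 1/K (30-200 deg C range)
length_mm = 10.0      # assumed part length for illustration
delta_t_k = 200 - 30  # temperature swing over the quoted range, K

delta_length_um = alpha_per_k * length_mm * delta_t_k * 1000  # mm -> micrometres
print(round(delta_length_um, 1))  # 8.5 um of growth over the full range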
II. EXPERIMENTAL WORK AND METHODOLOGY
Microwave component materials were welded by the micro plasma arc welding process with a circumferential edge joint, using a semi-automatic welding fixture. High-purity argon gas (99.99%) was used as a shielding gas after welding to prevent absorption of oxygen and nitrogen from the atmosphere. Welding was carried out on space-qualified materials of the Kovar, Monel-404, and soft iron types. Three types of non-destructive tests are used with accuracy by the space industry: the dye penetration test, the X-ray radiography test, and the helium leak proof detection test.
A. Fixture Fabrication
To join microwave components by the micro plasma arc welding method, it is essential to fabricate a cylindrical fixture that provides efficient supporting, holding, and clamping and arrests the movement of the cylindrical microwave components being joined, hence resulting in quality welds. The layout of the fixture is such that it needs to provide rigid support to the cylindrical microwave components while welding and be well mounted on the semi-automatic welding fixture. The fixture was manufactured from aluminium alloy 6061 and stainless steel alloy.
B. Important Considerations for the Fixture
1) The fixture should efficiently support, hold, and clamp the cylindrical microwave components to be joined and arrest their movement, which results in quality welds.
2) The welding fixture should be capable of locating and holding all the cylindrical microwave components and producing high-quality welds efficiently.
3) The welding fixture should provide rigid support to the cylindrical microwave components while welding and be well mounted on the semi-automatic welding mechanism.
III. TESTING OF MICROWAVE COMPONENTS JOINED USING MPAW
After fabrication of the microwave components, it is necessary to test the weld joints for evaluation. The microwave component weld joints made by the MPAW process were evaluated by non-destructive testing. The process of interaction does not damage the test object or impair its intended utility value. NDT methods range from the simple to the intricate. Visual inspection is the simplest of all. Surface imperfections invisible to the eye may be revealed by dye penetrant or magnetic methods. If serious surface defects are found, there is often little point in proceeding further to the more complicated examination of the interior by other methods like X-ray radiography, ultrasonic testing, or helium leak proof detection testing. Non-destructive testing generally refers to all those inspection methods that permit evaluation of welds and related materials without destroying their usefulness.
The objectives of non-destructive testing of joints should be: 1) To seek out discontinuities, which are evaluated against the requirements of the quality standards. 2) To obtain clues about the causes of irregularities in the fabrication process. The following types of non-destructive tests were performed on the welded joints.
A. Dye Penetration Test
The dye penetration test, also called the liquid penetrant inspection (LPI) test, is widely applied for surface weld quality checks in industry. It is also a low-cost inspection method, used to locate surface-breaking defects in all non-porous materials. The dye penetration test was performed on the welded joints. This method was used to find cracks, porosity, and incomplete fusion in the weld joints. A limitation of this test is that it only checks surface cracks and cannot identify inner cracks, so further tests are applied to check for inner cracks, namely the X-ray radiography test and the helium leak proof test.
B. X-Ray Radiography Test
The X-ray radiography test method is a non-destructive test method that utilizes radiation to penetrate an object and to, 1) Record images on a variety of recording devices, such as film.
2) Be viewed on a fluorescent screen. 3) Be monitored by various types of electronic radiation detectors. Penetrating radiation is passed through the welded joint onto a photographic film, resulting in an image of the object's internal structure being deposited on the film. The amount of energy absorbed by the object depends on its thickness and density. Energy not absorbed by the object will cause exposure of the radiographic film; these areas will be dark when the film is developed, while areas of the film exposed to less energy remain lighter. Therefore, areas of the object where the thickness has been changed by discontinuities, such as porosity or cracks, will appear as dark outlines on the film. Inclusions of low density, such as slag, will appear as dark areas on the film, while inclusions of high density, such as tungsten, will appear as light areas. All discontinuities are detected by viewing the shape and variation in density of the processed film.
C. Helium Leak Proof Detection Test
After fabrication of the microwave components, it is necessary to test the weld joints for evaluation. The microwave component weld joints made by the MPAW process were tested by the helium leak proof detection test at an in-house facility. This test is also called the mass spectrometer leak detection test. Leaks in microwave component weld joints are the unwanted throughput of air or any other gas into the system from the external environment. Leaks can arise from cracks, defects in the weld joint, holes, or porosity in the microwave component weld joints, or due to impurities and imperfections in the welding joint area. Leaks in microwave component weld joints prevent attaining the desired level of vacuum pressure. The art of leak detection and fixing is: 1) To choose the appropriate non-destructive testing method of detection from a good knowledge of the magnitude of the leak.
2) To pinpoint and quantify the leak.
3) To seal it with an appropriate mechanism. A leak is quantified in terms of the rate at which a gas flows through the leak at certain conditions of pressure and temperature; the most commonly used unit in a vacuum system is, however, the "Torr". The helium leak proof detection test method involves the use of a tracer gas, which does not contaminate the system, and quantitative measurement of the partial pressure of the tracer gas entering the microwave component weld joints through leaks. The detection of the tracer gas is carried out by first ionizing it and then measuring it with a mass spectrometer. The most widely and accurately used method for calculation of the vacuum leak rate is helium leak proof detection testing. Appropriate fixtures were developed for sealing and evacuating the welded components. For conducting the helium leak proof detection test, the required equipment is: a) helium gas as the tracer gas, b) a leak detector, and c) a helium mass spectrometer leak detector. The microwave component weld joints were tested using the helium leak proof detection test with the necessary fixtures; the fixtures provide sealing and evacuation of the welded components, with a Viton O-ring and gasket used for sealing during the helium leak proof testing. When searching for leaks, the microwave component weld joint to be tested is continuously evacuated. The tracer gas (helium) penetrating from the outside into the system is pumped through a leak detector, where its concentration is measured. The recorded readings were as follows:
Sr. No. | Background leak rate (mbar-l/sec) | Tracer gas | Measured leak rate (mbar-l/sec) | Leak found
1 | 3.1×10^-10 | Helium | 3.3×10^-10 | NO
2 | 4.7×10^-10 | Helium | 4.5×10^-10 | NO
3 | 4.7×10^-10 | Helium | 4.5×10^-10 | NO
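A sketch of the accept/reject logic implied by the readings above, assuming the first figure in each row is the recorded background rate and the second is the measured rate (that column reading, the 10% margin, and the function name are assumptions for illustration):

def leak_detected(measured, background, margin=1.10):
    # Flag a leak only if the measured helium rate exceeds the background
    # by more than an assumed 10% instrument margin.
    return measured > background * margin

# (background, measured) pairs in mbar-l/sec, taken from the readings above:
readings = [(3.1e-10, 3.3e-10), (4.7e-10, 4.5e-10), (4.7e-10, 4.5e-10)]
for sr_no, (background, measured) in enumerate(readings, start=1):
    print(sr_no, "LEAK" if leak_detected(measured, background) else "NO")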
A. Dye Penetration Test
The dye penetration test is a non-destructive test for surface weld quality, performed using penetrant and developer sprays. This test was conducted on the weld joints of the microwave components produced by both welding processes. After the developer had been applied to the weld joints, a short time was allowed for the dry developer to blot or draw the red dye from any discontinuities and holes. Discontinuities show as bright red indications against the white background of the developer. The result of this test was positive, and no discontinuities, cracks, or holes were found during visual inspection.
B. X-ray Radiography Test
The X-ray radiography test is a non-destructive testing process. This method detects both internal and external weld surface defects during testing. Volumetric weld defects such as slag inclusions and various forms of gas porosity are easily detected by radiographic techniques due to the large absorption difference between the parent metal and the slag or gas. During this test, the weld joints of the microwave components produced by both welding processes were given sufficient exposure time as per ASME radiography standards. The joints were tested with reference to ASME Sec. V Art. 2 & 22 and accepted under ASME Sec. VIII Div. 1 UW-51. No significant defects were found in the joints. The result of this test was positive, and no cracks or porosity were observed in the radiographic X-ray images. Film radiography produces a permanent record of the weld condition, which can be archived for future reference and provides an excellent means of assessing the welder's performance; for these reasons, it is often still the preferred method for new job fabrication.
Helium leak proof detection test:
The weld joints of the microwave components were tested using the helium leak proof detection test, also called the mass spectrometer leak detection test. A helium mass spectrometer is an instrument commonly used to detect and locate small leaks. It typically uses a vacuum chamber in which a sealed container filled with helium is placed; helium leaks out of the container, and the leak rate is measured by the mass spectrometer. A blank background leak rate of 4x10^-10 mbar-l/sec was selected. After performing and monitoring this test, the result was positive: no small leak was found. This method can resolve leak rates down to the order of 10^-10 Torr, but it only indicates the presence or absence of a leak in the vacuum system.
V. CONCLUSION
For the development of effective leak-proof microwave component weld joints by the micro plasma arc welding process, this work covered the selection of microwave component materials, the fabrication of the components on a lathe machine, the welding procedure, and the testing of the weld joints. Because the component walls were very thin, special holding fixtures were developed for welding. A sequential experimental procedure was defined for the MPAW circumferential edge joint, and the weld joints were tested and the results evaluated. All the stated objectives of this project were successfully achieved. After completion of the fabrication, welding, and testing phases of the microwave components, the following points are concluded.
A. Fabrication of microwave components from Kovar, Monel-404, and soft iron materials is quite difficult because of the toughness and hardness of these materials.
B. Micro plasma arc welding is a suitable process for joining microwave components of small thickness (of the order of 0.5 mm).
C. The joint quality obtained with the micro plasma arc welding process was excellent, and the welded joints produced by the process passed the dye penetration test, the X-ray radiography test, and the helium leak proof detection test (leak rates of the order of 10^-10 mbar-l/sec). | 2019-09-17T03:05:53.965Z | 2019-03-31T00:00:00.000 | {
"year": 2019,
"sha1": "4ccdb74f03c9e0674e8cc5a1ec699e0a435b1d7b",
"oa_license": null,
"oa_url": "https://doi.org/10.22214/ijraset.2019.3475",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bea19fe9ee10a9f17199fc0d96e80acf3df54ffa",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
13005170 | pes2o/s2orc | v3-fos-license | Primary Nasopharyngeal Tuberculosis Combined with Tuberculous Otomastoiditis and Facial Nerve Palsy
Primary nasopharyngeal tuberculosis (TB) without pulmonary involvement is rare, even in endemic areas. Herein, we present a rare complication of primary nasopharyngeal TB accompanied by tuberculous otomastoiditis (TOM) and ipsilateral facial nerve palsy in a 24-year-old female patient, with computed tomography and magnetic resonance imaging findings.
Introduction
Upper respiratory tuberculosis (TB) is a rare extrapulmonary disease, even in endemic areas (1). The nasopharynx is the least common site for TB of the upper respiratory tract (2)(3)(4). To date, only 16 cases of primary nasopharyngeal TB have been reported in PubMed since 1967, and a search with the terms (primary tuberculosis, nasopharynx) returned 26 publications, including literature reviews and case reports. Most cases of nasopharyngeal TB occur with combined active pulmonary TB or systemic infection, spreading via the hematogenous or lymphatic systems (3). Conversely, primary nasopharyngeal TB without pulmonary involvement is an extremely rare disease, with very few reported cases in recent years (1,4). In addition, TB of the middle ear cavity is also a rare extrapulmonary manifestation (5). The incidence of tuberculous otomastoiditis (TOM) comprises 0.04 - 0.9% of all cases of chronic otitis media (6).
Case Presentation
A 24-year-old Asian female patient was admitted to our hospital with left ear fullness and otorrhea. At an outpatient clinic, she had previously been diagnosed with otitis media with effusion (OME) that was unresponsive to a three-week course of antibiotics. Otoscopic examination revealed perforation of the tympanic membrane and profuse amber-colored discharge behind the eardrum (Figure 1A). On physical examination, no palpable neck lymph nodes were noted. Routine hematologic examination revealed normal erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) levels. Initial ear culture studies did not identify any pathogens. After one week of further antibiotic medication, she underwent ventilation tube (V-tube) insertion to maintain aeration of the middle ear and prevent fluid reaccumulation.
However, the presence of purulent discharge from the V-tube was again noted at the third follow-up visit (Figure 1B). At this visit, the patient also presented with new-onset ipsilateral facial nerve palsy. Facial paralysis was identified as grade IV using the House-Brackmann grading system, owing to incomplete eye closure and mouth drooping with maximum effort. She was then hospitalized for further evaluation.
After hospitalization, the patient underwent flexible nasopharyngoscopy, which revealed a polypoid mass with severe inflammation in the roof of the nasopharyngeal wall. Extensive purulent granulation tissue was also noted along the left posterolateral nasopharyngeal wall (Figure 2). A biopsy was taken from the polypoid mass of the nasopharyngeal roof. The chest radiography was normal (Figure 3). The patient had no underlying disease and no history of pulmonary TB or contact with TB patients. Because the patient had no pulmonary symptoms and the chest radiography was normal, we excluded primary pulmonary TB. We did not perform sputum acid-fast bacilli (AFB) staining. Magnetic resonance (MR) examinations were performed using a 3.0-T MR system (Intera Achieva 3.0-T; Philips Medical Systems, Best, the Netherlands) with intravenous administration of a total volume of 15 mL of contrast medium (gadoteridol, ProHance, Bracco Diagnostics Inc., Princeton, NJ, USA). Contrast-enhanced CT and MR of the paranasal sinuses showed a polypoid mass, with necrotic foci, at the roof of the nasopharyngeal wall (Figure 4A and B). They also revealed diffuse mucosal enhancement and low-attenuation polypoid masses obliterating the left torus tubarius and the pharyngeal opening of the Eustachian tube (Figure 5A and B). Unenhanced temporal CT showed a fluid-filled middle ear cavity, mastoid antrum, and mastoid air cells, without sclerotic changes (Figure 6). Contrast-enhanced temporal MRI showed avid enhancement at the canalicular, labyrinthine, anterior genu, tympanic, and mastoid segments of the left facial nerve, which confirmed left facial neuritis (Figure 7). Histopathology of the biopsy specimens taken from the nasopharyngeal roof and mastoid antrum confirmed caseating granulomatous inflammation, consistent with TB (Figures 8 and 9). However, the specimen from the nasopharyngeal mass was negative on AFB staining and polymerase chain reaction (PCR) testing. Considering the low sensitivity of AFB smear and PCR in extrapulmonary specimens, these results were not unusual. We reached the final diagnosis of primary nasopharyngeal and middle ear TB on the basis of the clinical, radiologic, and histopathologic examinations, despite the negative cytology and microbiology results. The patient was started on four antituberculous medications (rifampicin, isoniazid, pyrazinamide, and ethambutol) and underwent left canal wall up (CWU) mastoidectomy with tympanoplasty. During the operation, multiple polypoid masses were observed in the entire middle ear cavity, mastoid antrum, and mastoid air cells. In addition, the pharyngeal opening of the Eustachian tube was obstructed by polypoid masses. All ossicles were surrounded by granulation tissue, and an ossiculoplasty was performed. Furthermore, bony canal dehiscence was noted at the tympanic segment of the facial nerve, and surgical decompression was performed as well. Histopathology of the biopsy specimens taken from the middle ear cavity and mastoid antrum during the operation also confirmed caseating granulomatous inflammation, consistent with TB. Electrodiagnostic testing, comprising needle electromyography and nerve conduction studies, was performed three weeks after the operation and revealed persistent left facial neuropathy.
Discussion
Upper airway TB is an uncommon clinical condition and is usually combined with pulmonary involvement (1). Moreover, primary TB of the upper respiratory tract, without lung involvement, is rare (1, 2). The nasopharynx is the least common site for TB involving the upper respiratory tract and comprises <1% of upper respiratory tract TB (4). However, it is interesting to note that recent large scale studies, analyzing nasopharyngeal TB, have reported that primary nasopharyngeal TB is more common than secondary involvement (7). Such discrepancy is probably derived from a limited ability to assess the nasopharynx by physical examination and a low clinical suspicion for nasopharyngeal TB.
Radiographically, nasopharyngeal TB presents with two main patterns: 1) polypoid masses and 2) diffuse mucosal thickening (3,7,8). Most cases of nasopharyngeal TB have been identified by a polypoid mass, which has been shown to indicate the proliferative phase (7). CT and MR imaging show the characteristic findings of nasopharyngeal TB, including the presence of necrosis and a striped pattern in nasopharyngeal lesions, a lack of invasion of regional structures, and peripherally enhancing cervical lymphadenopathy (7). The nasopharyngeal mass or diffuse mucosal thickening may reveal intermediate signal intensity on T1-weighted and T2-weighted images, with moderate contrast enhancement on MR images (8). In the present case, there were also soft tissue densities in the entire middle ear and mastoid antrum. It has been reported that the radiologic characteristics of TOM are soft tissue in the entire middle ear cavity, preservation of mastoid air cells without any sclerotic change, and mucosal thickening of the external auditory canal with an intact scutum (9), which are consistent with our case. The differential diagnosis varies according to the radiologic pattern. Lymphoid hyperplasia, nasopharyngeal carcinoma, lymphoma, and Castleman's disease should be considered in the differential diagnosis for the polypoid mass. Previous studies have reported that an isolated polypoid mass with central necrosis centered on the nasopharyngeal roof suggests a high probability of nasopharyngeal TB (8). For the second pattern of nasopharyngeal TB, diffuse nasopharyngeal wall thickening, various benign (Wegener's granuloma, syphilis, fungal infection) and malignant lesions (early local stage of nasopharyngeal carcinoma, lymphoma, minor salivary gland tumor) should be considered in the differential diagnosis (8). However, there are no definite imaging features to make an accurate diagnosis of nasopharyngeal TB, and a biopsy is required to confirm the diagnosis and to differentiate it from malignancy and the other conditions described above.
It is worth noting that most nasopharyngeal TB involves the posterior roof of the nasopharynx, equivalent to the "adenoid" during childhood or early adolescence (4). The findings lend support to a preceding report, stating that TB directly involves nasopharyngeal lymphoid tissue (3). Regardless, our case showed the two main patterns of primary nasopharyngeal TB: the polypoid mass at the nasopharyngeal roof and, also, the diffuse soft tissue thickening of nasopharyngeal wall, consistent with previous reports and studies. However, the nasopharyngeal carcinoma is most often centered in the lateral pharyngeal recess (also called the Fossa of Rosenmuller). Sites of predilection and morphologic features can be potential diagnostic clues to differentiate nasopharyngeal TB from malignancy.
The Eustachian tube is lined with a mucous membrane continuous with the pharynx and the mastoid air cells; therefore, infections can travel from the nasopharynx along the mucosal membrane of the Eustachian tube to the middle ear cavity. In addition, obstructing masses in the nasopharynx prevent air flow from passing through the Eustachian tube, creating a negative pressure in the middle ear cavity, followed by effusion (10). Therefore, it is unclear which came first in this case, the nasopharyngeal TB or the TOM. However, given the physiologic anatomy of the nasopharynx and middle ear cavity, there is a high probability of secondary involvement of the nasopharyngeal TB into the middle ear and mastoid antrum through the pharyngeal opening of the Eustachian tube.
Previous studies of nasopharyngeal TB have not provided in-depth discussion of the possibility for extranasopharyngeal spread and cranial nerve involvement. There have been limited single case reports identifying extensive nasopharyngeal involvement of TB beyond the nasopharyngeal mucosa and submucosa, skull base or parapharyngeal space (1). In this case report, besides in the nasopharyngeal roof, soft tissue masses and mucosal thickening were also noted in the pharyngeal opening of the Eustachian tube. Additional notable findings included soft tissue densities of the ipsilateral middle ear cavity, as well as along the facial nerve canal.
In conclusion, TB infection should be considered for necrotic polypoid masses with regional mucosal lesions centered on the posterior nasopharyngeal wall, especially in endemic areas. In addition, radiologists should keep in mind that the pharyngeal orifice of the Eustachian tube can be a potential route for the spread of nasopharyngeal TB from the nasopharynx to the middle ear cavity. Therefore, it is important to determine whether obliteration of the nasopharyngeal orifice has occurred, in order to prevent acid-fast bacilli from invading the middle ear cavity. | 2016-05-12T22:15:10.714Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "32cd86e9cc8362c6f75916b88a8d4f84a3652a9b",
"oa_license": "CCBYNC",
"oa_url": "https://iranjradiol.kowsarpub.com/cdn/dl/013e1944-5084-11e7-ae47-03e445dd19f2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "32cd86e9cc8362c6f75916b88a8d4f84a3652a9b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221516820 | pes2o/s2orc | v3-fos-license | A Simple and General Graph Neural Network with Stochastic Message Passing
permutation-equivariance with certain parametrization. Extensive experimental results demonstrate the effectiveness and efficiency of SMP for tasks including node classification and link prediction.
INTRODUCTION
Graph neural networks (GNNs), as generalizations of neural networks for analyzing graphs, have attracted considerable research attention. GNNs have been widely applied to various applications such as social recommendation (Ma et al., 2019), physical simulation (Kipf et al., 2018), and protein interaction prediction (Zitnik & Leskovec, 2017). One key property of most existing GNNs is permutation-equivariance, i.e., if we randomly permute the IDs of nodes while maintaining the graph structure, the representations of nodes in GNNs should be permuted accordingly. Mathematically, permutation-equivariance reflects one basic symmetric group of graph structures. Although it is a desirable property for tasks such as node or graph classification (Keriven & Peyre, 2019; Maron et al., 2019b), permutation-equivariance also prevents GNNs from being proximity-aware, i.e., permutation-equivariant GNNs cannot preserve walk-based proximities between nodes such as the shortest distance or high-order proximities (see Theorem 1).
Pairwise proximities between nodes are crucial for graph analytical tasks such as link prediction (You et al., 2019). To enable a proximity-aware GNN, Position-aware GNN (P-GNN) (You et al., 2019) proposes a sophisticated GNN architecture and shows better performance for proximity-aware tasks. But P-GNN needs to explicitly calculate the shortest distance between nodes, and its computational complexity is unaffordable for large graphs. Moreover, P-GNN completely ignores the permutation-equivariance property. Therefore, it cannot produce satisfactory results when permutation-equivariance is helpful.
In real-world scenarios, both proximity-awareness and permutation-equivariance are indispensable properties for GNNs. Firstly, different tasks may require different properties. In social networks, for example, recommendation applications usually require the model to be proximity-aware (Konstas et al., 2009), while permutation-equivariance is a basic assumption in centrality measurements (Borgatti, 2005). Even for the same task, different datasets may have different requirements on these two properties. Taking link prediction as an example, we observe that permutation-equivariant GNNs such as GCN (Kipf & Welling, 2017) or GAT (Velickovic et al., 2018) show better results than P-GNN on coauthor graphs, but the opposite holds on biological graphs (please see Section 5.2 for details). Unfortunately, in the current GNN frameworks, these two properties are contradictory, as we show in Theorem 1. Whether there exists a general GNN that is proximity-aware while maintaining permutation-equivariance remains an open problem.
In this paper, we propose Stochastic Message Passing (SMP), a general and simple GNN to preserve both proximity-awareness and permutation-equivariance properties. Specifically, we augment the existing GNNs with stochastic node representations learned to preserve node proximities. Though seemingly simple, we prove that our proposed SMP can enable GNNs to preserve walk-based node proximities in theory (see Theorem 2 and Theorem 3). Meanwhile, SMP is equivalent to a permutation-equivariant GNN with certain parametrization and thus is at least as powerful as those GNNs in permutation-equivariant tasks (see Theorem 1). Therefore, SMP is general and flexible in handling both proximity-aware and permutation-equivariant tasks, which is also demonstrated by our extensive experimental results. Besides, owing to the stochastic nature and simple structure, SMP is computationally efficient, with a running time roughly the same as those of the most simple GNNs such as SGC (Wu et al., 2019) and is at least an order of magnitude faster than P-GNN on large graphs. Ablation studies further show that a linear instantiation of SMP is expressive enough as adding extra non-linearities does not lift the performance of SMP on the majority of datasets.
The contributions of this paper are summarized as follows: • We propose SMP, a simple and general GNN to handle both proximity-aware and permutationequivariant graph analytical tasks.
• We prove that SMP has a theoretical guarantee in preserving walk-based proximities and is at least as powerful as the existing GNNs in permutation-equivariant tasks.
• Extensive experimental results demonstrate the effectiveness and efficiency of SMP. We show that a linear instantiation of SMP is expressive enough on the majority of datasets.
RELATED WORK
We briefly review GNN architectures and the permutation-equivariance property of GNNs.
The earliest GNNs adopt a recursive definition of node states (Scarselli et al., 2008; Gori et al., 2005) or a contextual realization (Micheli, 2009). More recent GNNs largely follow the message-passing framework. The most related work to ours is P-GNN (You et al., 2019), which proposes to capture the positions of nodes using the relative distance between the target node and some randomly chosen anchor nodes. However, P-GNN cannot satisfy permutation-equivariance and is computationally expensive.
MESSAGE-PASSING GNNS
We consider a graph G = (V, E, F), where V is the set of N = |V| nodes, E is the set of M = |E| edges, and F ∈ R^{N×d_0} is a matrix of d_0 node features. The adjacency matrix is denoted as A, with its i-th row, j-th column, and (i, j)-th element denoted as A_{i,:}, A_{:,j}, and A_{i,j}, respectively. In this paper, we assume the graph is unweighted and undirected. The neighborhood of node v_i is denoted as N(v_i) = {v_j : (v_i, v_j) ∈ E}. The existing GNNs usually follow a message-passing framework (Gilmer et al., 2017), where the l-th layer adopts a neighborhood aggregation function AGG(·) and an updating function UPDATE(·):

m_i^{(l)} = AGG({h_j^{(l-1)} : v_j ∈ N(v_i)}), h_i^{(l)} = UPDATE(h_i^{(l-1)}, m_i^{(l)}), (1)

where h_i^{(l)} ∈ R^{d_l} is the representation of node v_i in the l-th layer, d_l is the dimensionality, and m_i^{(l)} are the messages. We also denote H^{(l)} = [h_1^{(l)}, ..., h_N^{(l)}], where [·, ·] is the concatenation operation. The node representations are initialized as the node features, H^{(0)} = F. We denote a GNN following Eq. (1) with L layers as a parameterized function:

H^{(L)} = F_GNN(A, F; W), (2)

where H^{(L)} are the final node representations learned by the GNN and W denotes all the parameters.
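As a concrete, hypothetical illustration of Eq. (1), the following Python sketch implements one message-passing layer with mean aggregation as AGG(·) and a single linear layer with ReLU as UPDATE(·); these particular choices are assumptions for the sketch, not a prescription of this paper.

import numpy as np

# One message-passing layer in the sense of Eq. (1): mean aggregation (AGG)
# followed by a linear update with ReLU (UPDATE). Illustrative choices only.
def message_passing_layer(A, H, W_self, W_neigh):
    deg = A.sum(axis=1, keepdims=True).clip(min=1)   # node degrees
    M = (A @ H) / deg                                # messages m_i^(l)
    return np.maximum(H @ W_self + M @ W_neigh, 0)   # new states h_i^(l)

# toy usage on a 4-node path graph
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 8))                         # H^(0) = F
H1 = message_passing_layer(A, H0, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))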
One key property of the existing GNNs is permutation-equivariance.
Definition 1 (Permutation-equivariance). Consider a graph G = (V, E, F) and any permutation P : V → V so that G′ = (V, E′, F′) has an adjacency matrix A′ = PAP^T and a feature matrix F′ = PF, where P ∈ {0, 1}^{N×N} is the permutation matrix corresponding to P, i.e., P_{i,j} = 1 iff P(v_i) = v_j. A GNN satisfies permutation-equivariance if the node representations are equivariant with respect to P, i.e.,

F_GNN(PAP^T, PF; W) = P F_GNN(A, F; W). (3)

It is known that GNNs following Eq. (1) are permutation-equivariant (Maron et al., 2019b).
Definition 2 (Automorphism). A graph G is said to have a (non-trivial) automorphism if there exists a non-identity permutation matrix P ≠ I_N so that A = PAP^T and F = PF. We denote the corresponding automorphic node pairs as {(v_i, v_j) : P(v_i) = v_j, i ≠ j}.

Corollary 1. Using Definitions 1 and 2, if a graph has an automorphism, a permutation-equivariant GNN will produce identical node representations for automorphic node pairs:

H_i^{(L)} = H_j^{(L)}, for all automorphic node pairs (v_i, v_j). (4)

Since the node representations are used for downstream tasks, the corollary shows that permutation-equivariant GNNs cannot differentiate automorphic node pairs. A direct consequence of Corollary 1 is that permutation-equivariant GNNs cannot preserve walk-based proximities between pairs of nodes. The formal definitions are as follows.
Definition 3 (Walk-based Proximities). For a given graph G = (V, E, F), we use a matrix S ∈ R^{N×N} to denote walk-based proximities between pairs of nodes, defined as:

S_{i,j} = S({v_i ⇝ v_j}), (5)

where v_i ⇝ v_j denotes any walk from node v_i to v_j and S(·) is an arbitrary real-valued function.
Typical examples of walk-based proximities include the shortest distance and high-order proximities. Based on Corollary 1, we have the following theorem.

Theorem 1. For any walk-based proximity function S(·), a permutation-equivariant GNN cannot preserve S(·), except for the trivial solution in which all node pairs have the same proximity.

The formulation and proof of the theorem are given in Appendix A.1. Since walk-based proximities are rather general and widely adopted in graph analytical tasks such as link prediction, the theorem shows that the existing permutation-equivariant GNNs cannot handle these tasks well.
A GNN FRAMEWORK USING STOCHASTIC MESSAGE PASSING
A major shortcoming of permutation-equivariant GNNs is that they cannot differentiate automorphic nodes. To solve that problem, we need to introduce some mechanism for "symmetry breaking", i.e., to enable GNNs to distinguish these nodes. We sample a stochastic matrix E ∈ R^{N×d} where each element follows an i.i.d. normal distribution N(0, 1). The stochastic matrix can provide signals for distinguishing the nodes because they are randomly sampled without being affected by the graph automorphism. In fact, we can easily calculate that the Euclidean distance between two stochastic signals divided by a constant √2 follows a chi distribution χ_d:

‖E_{i,:} − E_{j,:}‖_2 / √2 ∼ χ_d, ∀ i ≠ j. (6)

When d is reasonably large, e.g., d > 20, the probability of two signals being close is very low.
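A quick numerical sanity check of this property (an illustration, not part of the paper's experiments) can be written as follows:

import numpy as np
from scipy.stats import chi

# Pairwise distances of i.i.d. N(0,1) rows, divided by sqrt(2), should match
# a chi distribution with d degrees of freedom (mean roughly sqrt(d)).
d = 32
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, d))
dists = np.linalg.norm(E[0] - E[1:], axis=1) / np.sqrt(2)
print(dists.mean(), chi.mean(d))   # empirical mean vs. theoretical chi_d mean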
Then, inspired by the message-passing framework, we apply a GNN to the stochastic matrix:

Ẽ = F_GNN(A, E; W_E). (7)

We regard Ẽ as the stochastic representation of the nodes. By using the stochastic matrix and message-passing, Ẽ can be used to preserve node proximities (see Theorem 2 and Theorem 3). Then, we concatenate Ẽ with the node representations from another GNN that takes the node features as inputs:

H = F_output([Ẽ, H^{(L)}]), (8)

where F_output(·) is an aggregation function such as a linear function or simply the identity mapping. In a nutshell, our proposed method augments the existing GNNs with a stochastic representation learned by message-passing to differentiate different nodes and preserve node proximities.
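A minimal sketch of Eqs. (7)-(8) is given below; the two propagation functions are placeholders for any message-passing GNN, and all names are illustrative rather than taken from this paper's implementation.

import numpy as np

# Eq. (7)-(8): propagate a random matrix E with one GNN and node features F
# with another, then combine through a linear F_output. gnn_e and gnn_f are
# placeholder callables standing in for arbitrary message-passing GNNs.
def smp_forward(A, F, gnn_e, gnn_f, W_out, d=32, rng=None):
    rng = rng or np.random.default_rng()
    E = rng.normal(size=(A.shape[0], d))    # stochastic matrix, i.i.d. N(0, 1)
    E_tilde = gnn_e(A, E)                   # stochastic representations (Eq. 7)
    H_L = gnn_f(A, F)                       # feature-based representations
    return np.concatenate([E_tilde, H_L], axis=1) @ W_out   # linear F_output (Eq. 8)

# usage with identity "GNNs" as placeholders
A, F = np.eye(4), np.ones((4, 3))
H = smp_forward(A, F, lambda A, X: X, lambda A, X: X, np.ones((32 + 3, 2)))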
There is also a delicate choice worth mentioning, i.e., whether the stochastic matrix E is fixed or resampled in each epoch. By fixing E, the model can learn to memorize the stochastic representation and distinguish different nodes, but at the cost of being unable to handle nodes not seen during training.
On the other hand, by resampling E in each epoch, the model can have a better generalization ability since the model cannot simply remember one specific stochastic matrix. However, since the node representations are not fixed (but pairwise proximities are preserved; see Theorem 2), in this case, E can only be used in pairwise tasks such as link prediction or pairwise node classification. In this paper, we use a fixed E for transductive datasets and resample E for inductive datasets.
A LINEAR INSTANTIATION
Based on the general framework shown in Eq. (8), we attempt to explore its minimum model instantiation, i.e., a linear model. Specifically, inspired by Simplified Graph Convolution (SGC) (Wu et al., 2019), we adopt a linear message-passing for both GNNs, i.e.,

H = F_output([Ã^K E, Ã^K F]), (9)

where Ã = (D + I)^{−1/2}(A + I)(D + I)^{−1/2} is the normalized graph adjacency matrix with self-loops proposed in GCN (Kipf & Welling, 2017) and K is the number of propagation steps. We also set F_output(·) in Eq. (9) as a linear mapping or identity mapping.
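A minimal sketch of this linear instantiation (with an identity F_output; purely illustrative) is:

import numpy as np

# Linear SMP (Eq. 9): K-step SGC-style propagation of both E and F with the
# normalized adjacency A_tilde = (D+I)^(-1/2) (A+I) (D+I)^(-1/2).
def normalized_adj(A):
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def linear_smp(A, F, K=2, d=32, rng=None):
    rng = rng or np.random.default_rng()
    A_t = normalized_adj(A)
    E_tilde, H = rng.normal(size=(A.shape[0], d)), F.astype(float)
    for _ in range(K):                     # K propagation steps, no non-linearity
        E_tilde, H = A_t @ E_tilde, A_t @ H
    return np.concatenate([E_tilde, H], axis=1)   # identity F_output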
Though seemingly simple, we show that such an SMP instantiation possesses a theoretical guarantee in preserving the walk-based proximities.
Theorem 2. SMP in Eq. (9) with the message-passing matrix Ã and the number of propagation steps K can preserve the walk-based proximity Ã^K(Ã^K)^T with high probability if the dimensionality d of the stochastic matrix is sufficiently large, where the superscript T denotes the matrix transpose. The theorem holds regardless of whether E is fixed or resampled.
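The theorem can be checked numerically with the sketch below; the random graph, the constants, and the normalized inner-product decoder (the form used in Appendix A.2) are choices made here purely for demonstration.

import numpy as np

# Empirical check of Theorem 2: with decoder (1/d) * E_tilde @ E_tilde.T,
# the decoded proximity approaches A_tilde^K (A_tilde^K)^T as d grows.
rng = np.random.default_rng(0)
N, K, d = 50, 3, 20000
A = np.triu((rng.random((N, N)) < 0.1).astype(float), 1)
A = A + A.T                                   # random undirected graph
A_hat = A + np.eye(N)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_t = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # normalized adjacency
P = np.linalg.matrix_power(A_t, K)            # A_tilde^K
E_tilde = P @ rng.normal(size=(N, d))         # linear SMP propagation
S_hat = E_tilde @ E_tilde.T / d               # decoded proximity
S = P @ P.T                                   # target walk-based proximity
print(np.abs(S_hat - S).max())                # small, and shrinks as d grows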
The mathematical formulation and proof of the theorem are given in Appendix A.2. In addition, we show that SMP is equivalent to a permutation-equivariant GNN with certain parametrization.

Remark 1. Suppose we adopt F_output(·) as a linear function with the output dimensionality the same as that of H^{(L)}. Then SMP in Eq. (8) can recover any permutation-equivariant F_GNN′(A, F; W′) by setting the weights of the linear function as all-zeros for Ẽ and an identity matrix for H^{(L)}.
The result is straightforward from the definition. Then, we have the following corollary.

Corollary 2. For any task, Eq. (8) with the aforementioned linear F_output(·) is at least as powerful as the permutation-equivariant F_GNN′(A, F; W′), i.e., the minimum training loss of using H in Eq. (8) is equal to or smaller than that of using F_GNN′(A, F; W′). In other words, SMP will not hinder the performance even when the tasks are permutation-equivariant, since the stochastic representation is concatenated with the node representations of the other GNN followed by a linear mapping. In these cases, the linear SMP is equivalent to SGC (Wu et al., 2019).
Combining Theorem 2 and Corollary 2, the linear SMP instantiation in Eq. (9) is capable of handling both proximity-aware and permutation-equivariant tasks.
NON-LINEAR EXTENSIONS
One may question whether a more sophisticated variant of Eq. (8) can further improve the expressiveness of SMP. There are three adjustable components in Eq. (8): the two GNNs that propagate the stochastic matrix and the node features, respectively, and the output function. In theory, adopting non-linear models for any of these components can enhance the expressiveness of SMP. Indeed, if we use a sufficiently expressive GNN in learning Ẽ instead of linear propagations, we can prove a more general version of Theorem 2 (Theorem 3; see Appendix A.3). Although non-linear extensions of SMP can, in theory, increase the model expressiveness, they also take a higher risk of over-fitting due to the added model complexity, not to mention that the computational cost will also increase. In practice, we find in ablation studies that the linear SMP instantiation in Eq. (9) works reasonably well on most of the datasets (please refer to Section 5.4 for further details).
EXPERIMENTAL SETUPS
Datasets We conduct experiments on the following ten datasets: two simulation datasets, Grid and Communities (You et al., 2019); two coauthor graphs, CS and Physics; two inductive datasets, Email and PPI; and one large-scale graph, PPA (Hu et al., 2020). We also adopt three GNN benchmarks: Cora, Citeseer, and PubMed (Yang et al., 2016). We only report the results of these three benchmarks for the node classification task; the results for other tasks are shown in Appendix B due to the page limit. More details of the datasets are provided in Appendix C.1.
We summarize the statistics of the datasets in Table 1. These datasets cover a wide spectrum of domains and sizes, with or without node features. Note that the Email and PPI datasets contain more than one graph, and we conduct experiments in an inductive setting on these two datasets, i.e., the training, validation, and testing sets are split with respect to different graphs.

Baselines We adopt two sets of baselines. The first set is permutation-equivariant GNNs, including GCN (Kipf & Welling, 2017), GAT (Velickovic et al., 2018), and SGC (Wu et al., 2019). The second set is the proximity-aware P-GNN (You et al., 2019). In comparing with the baselines, we mainly evaluate two variants of SMP with different F_output(·): SMP-Identity, i.e., F_output(·) as an identity mapping, and SMP-Linear, i.e., F_output(·) as a linear mapping. Note that both variants adopt linear message-passing functions as in SGC.
For fair comparisons, we adopt the same architecture and hyper-parameters for all the methods (please refer to Appendix C.2 for the details). For datasets without node features, we adopt a constant vector as the node features. We experiment on two tasks: link prediction and node classification. Additional experiments on pairwise node classification are provided in Appendix B.2. We repeat the experiments 10 times for all datasets except PPA and 3 times for PPA, and report the average results.

LINK PREDICTION

Link prediction aims to predict missing links of a graph. Specifically, we split the edges into 80%-10%-10% subsets and use them for training, validation, and testing, respectively. Besides adopting those real edges as positive samples, we obtain negative samples by randomly sampling an equal number of node pairs from all node pairs that do not have edges. For all the methods, we use a simple classifier, Sigmoid(H_i^T H_j), i.e., the inner product, to predict whether a node pair (v_i, v_j) forms a link, and use AUC (area under the curve) as the evaluation metric. One exception to the aforementioned setting is the PPA dataset, where we follow the evaluation protocol provided with the benchmark.
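A minimal sketch of this evaluation pipeline is shown below; the toy arrays stand in for learned node representations, and all names are illustrative.

import numpy as np
from sklearn.metrics import roc_auc_score

# The link classifier used for all methods: Sigmoid(H_i . H_j).
def link_scores(H, pairs):
    """Inner-product scores for node pairs, squashed through a sigmoid."""
    logits = np.sum(H[pairs[:, 0]] * H[pairs[:, 1]], axis=1)
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
H = rng.normal(size=(100, 16))               # stand-in learned representations
pos = rng.integers(0, 100, size=(200, 2))    # "real" edges (toy)
neg = rng.integers(0, 100, size=(200, 2))    # sampled non-edges (toy)
y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
s = np.concatenate([link_scores(H, pos), link_scores(H, neg)])
print(roc_auc_score(y, s))                   # AUC evaluation metric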
The results except PPA are shown in Table 2. We make the following observations.
• Our proposed SMP achieves the best results on five out of the six datasets and is highly competitive (the second-best result) on the other (Physics). The results demonstrate the effectiveness of our proposed method on link prediction tasks. We attribute the strong performance of SMP to its capability of maintaining both proximity-aware and permutation-equivariance properties.
• On Grid, Communities, Email, and PPI, both SMP and P-GNN outperform the permutationequivariant GNNs, proving the importance of preserving node proximities. Although SMP is simpler and more computationally efficient than P-GNN, SMP reports even better results. • When node features are available (CS, Physics, and PPI), SGC can outperform GCN and GAT.
The results re-validate the experiments in SGC Wu et al. (2019) that non-linearity in GNNs is not necessarily indispensable. A plausible reason is that the additional model complexity brought by non-linear operators makes the models tend to overfit. On those datasets, SMP retains comparable performance on two coauthor graphs and shows better performance on PPI, possibly because node features on protein graphs are less informative than node features on coauthor graphs for predicting links, and thus preserving graph structure is more beneficial on PPI.
• As Email and PPI are conducted in an inductive setting, i.e., using different graphs for training/validation/testing, the results show that SMP can handle inductive tasks as well.
The results on PPA are shown in Table 3. SMP again outperforms all the baselines, demonstrating that it can handle large-scale graphs with millions of nodes and edges. PPA is part of a recently released benchmark (Hu et al., 2020). To the best of our knowledge, SMP achieves the state-of-the-art on this dataset.
NODE CLASSIFICATION
Next, we conduct experiments of node classification, i.e., predicting the labels of nodes. Since we need ground-truths in the evaluation, we only adopt datasets with node labels. Specifically, for CS and Physics, following Shchur et al. (2018), we adopt 20/30 labeled nodes per class for training/validation and the rest for testing. For Communities, we adjust the number as 5/5/10 labeled nodes per class for training/validation/testing. For Cora, Citeseer, and Pubmed, we use the default splits that came with the datasets. We do not adopt Email because some graphs in the dataset are too small to show stable results and exclude PPI as it is a multi-label dataset.
We use a softmax layer on the learned node representations as the classifier and adopt accuracy, i.e., the percentage of nodes that are correctly classified, as the evaluation criterion. We omit the results of SMP-Identity for this task since the node representations in SMP-Identity have a fixed dimensionality that does not match the number of classes.
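A minimal sketch of this classifier and metric follows; the weights here are random stand-ins for parameters that would be trained in practice.

import numpy as np

# Softmax classifier on learned node representations, scored by accuracy.
def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
H = rng.normal(size=(100, 16))                 # learned node representations (toy)
W, b = rng.normal(size=(16, 7)), np.zeros(7)   # 7 classes; untrained stand-ins
pred = softmax(H @ W + b).argmax(axis=1)
labels = rng.integers(0, 7, size=100)          # toy ground-truth labels
print((pred == labels).mean())                 # classification accuracy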
The results are shown in Table 4. From the table, we observe that SMP reports nearly perfect results on Communities. Since the node labels are generated by graph structures on Communities and there are no node features, the model needs to be proximity-aware to handle it well. But P-GNN also fails because it selects anchor nodes randomly and thus can only capture the proximities between nodes and cannot learn a classifier to separate nodes into different communities.
On the other five graphs, SMP reports highly competitive performance. These graphs are commonlyused benchmarks for GNNs. P-GNN, which completely ignores permutation-equivariance, performs poorly as expected. In contrast, SMP can manage to recover the permutation-equivariant GNNs and avoid being misled, as proven in Theorem 1. In fact, SMP even shows better results than its counterpart, SGC, indicating that preserving proximities is also helpful for these datasets.
Following P-GNN You et al. (2019), we also conduct experiments on pairwise node classification. We observe similar results as link prediction and provide the results in Appendix B.2.
ABLATION STUDIES
We conduct ablation studies by comparing different SMP variants, including SMP-Identity, SMP-Linear, and three additional variants as follows:

• SMP-MLP: we set F_output(·) as a fully-connected network with 1 hidden layer.
• SMP-Linear-GCN_feat: we replace the linear propagation of the node features with a GCN while keeping the linear propagation of the stochastic matrix.
• SMP-Linear-GCN_both: we replace the linear propagation of both the stochastic matrix and the node features with GCNs.

We show the results for the link prediction task in Table 5. The results for node classification and pairwise node classification, which imply similar conclusions, are provided in Table 9 and Table 10 in Appendix B.3. We make the following observations.
• In general, SMP-Linear shows good-enough performance, achieving the best or second-best results on six datasets and remaining highly competitive on the other (Communities). SMP-Identity, which has no parameters in the output function, performs slightly worse. The results demonstrate the importance of adopting a linear layer in the output function, which is consistent with Theorem 1. SMP-MLP does not lift the performance in general, showing that adding extra complexity in F_output(·) brings no gain on those datasets.
• SMP-Linear-GCN_feat reports the best results on Communities, PPI, and PPA, indicating that adding extra non-linearities in propagating node features is helpful for some graphs.
• SMP-Linear-GCN_both reports the best results on Grid with a considerable margin. Recall that Grid has no node features. The results indicate that introducing non-linearities can help the stochastic representations capture more proximities, which is more helpful on featureless graphs.
EFFICIENCY COMPARISON
To compare the efficiency of different methods quantitatively, we report the running time of different methods in Table 6. The results are averaged over 3,000 epochs on an NVIDIA TESLA M40.
The results show that SMP is computationally efficient, i.e., only marginally slower than SGC and comparable to GCN. P-GNN is at least an order of magnitude slower except for the extremely small graphs such as Grid, Communities, or Email, which have no more than a thousand nodes, not to mention that its expensive memory cost makes P-GNN unable to work on large-scale graphs.
CONCLUSION
In this paper, we propose SMP, a general and simple GNN to maintain both proximity-awareness and permutation-equivariance properties. We propose to augment the existing GNNs with stochastic node representations learned to preserve node proximities. We prove that SMP can enable GNNs to preserve node proximities in theory and is equivalent to a permutation-equivariant GNN with certain parametrization. Experimental results demonstrate the effectiveness and efficiency of SMP. Ablation studies show that a linear SMP instantiation works reasonably well on most of the datasets.
BROADER IMPACT
GNNs have been a trending topic in the machine learning community for the past few years. Possible application scenarios of GNNs include social networks, biological networks, academic networks, information networks, etc. We expect our proposed SMP to find general applicability in all these scenarios, but the exact model performance may depend on the specific tasks and datasets. One advantage of SMP is its simple structure and superior efficiency, which makes it more suitable for large-scale graphs. Since SMP shares a similar backbone as other GNNs and we do not explicitly utilize any semantic information, we do not foresee that SMP will produce more biased or offensive content than the existing GNNs.
A THEOREMS AND PROOFS
A.1 THEOREM 1

Here we formulate and prove Theorem 1. First, we give a definition for preserving walk-based proximities.

Definition 4. For a given walk-based proximity, a GNN is said to be able to preserve the proximity if, for any graph G = (V, E, F), there exist parameters W_G and a decoder function F_de(·) so that ∀ε > 0:

|F_de(H_i^{(L)}, H_j^{(L)}) − S_{i,j}| < ε, ∀ i, j,

where H^{(L)} = F_GNN(A, F; W_G). Note that we do not constrain the GNN architecture as long as it follows the message-passing framework in Eq. (1), and the decoder function is also arbitrary. In fact, both the GNN and the decoder function can be arbitrarily deep and have sufficient hidden units. Next, we rephrase Theorem 1 using the above formulation.
Theorem 1. For any walk-based proximity function S(·), a permutation-equivariant GNN cannot preserve S(·), except the trivial solution that all node pairs have the same proximity, i.e., S i,j = c, ∀i, j, where c is a constant.
Proof. We prove the theorem by contradiction. Assume there exists a non-trivial S(·) which a permutation-equivariant GNN can preserve. Consider any graph G = (V, E, F) and denote N = |V|. We can create a graph G′ = (V′, E′, F′) consisting of two disjoint copies of G; basically, we generate two "copies" of the original graph, one indexed from 1 to N and the other indexed from N + 1 to 2N. By assumption, there exists a permutation-equivariant GNN which can preserve S(·) in G′, and we denote the node representations as H′^{(L)} = F_GNN(A′, F′; W_{G′}). It is easy to see that nodes v′_i and v′_{i+N} in G′ form an automorphic node pair. Using Corollary 1, their representations will be identical in any permutation-equivariant GNN, i.e., H′^{(L)}_i = H′^{(L)}_{i+N}. Also, note that there exists no walk between the two copies, i.e., the set of walks {v′_i ⇝ v′_{j+N}} is empty, so S′_{i,j+N} = S(∅). As a result, for ∀ i ≤ N, j ≤ N, ∀ε > 0, we have:

|S_{i,j} − S(∅)| = |S′_{i,j} − S′_{i,j+N}| ≤ |S′_{i,j} − F_de(H′^{(L)}_i, H′^{(L)}_j)| + |F_de(H′^{(L)}_i, H′^{(L)}_{j+N}) − S′_{i,j+N}| < 2ε,

where we use the fact that F_de(H′^{(L)}_i, H′^{(L)}_j) = F_de(H′^{(L)}_i, H′^{(L)}_{j+N}) since H′^{(L)}_j = H′^{(L)}_{j+N}. We can prove the same for ∀ i > N, j > N. The inequality naturally holds if i ≤ N, j > N or i > N, j ≤ N. Combining the results, we have ∀ε > 0, ∀ i, j, |S_{i,j} − S(∅)| < 2ε. Since ε can be arbitrarily small, this shows that all node pairs have the same proximity c = S(∅), which leads to a contradiction and finishes our proof.
Notice that in our proof, G ′ can be constructed for any graph, so rather than designing one specific counter-example, we have shown that there always exists an infinite number of counter-examples by constructing an automorphism in the graph.
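The two-copy construction can also be reproduced numerically; the sketch below (an illustration, not part of the paper) applies a linear, permutation-equivariant propagation to a graph and its disjoint copy and verifies that automorphic node pairs obtain identical representations.

import numpy as np

# Build two disjoint copies of a random graph; a permutation-equivariant
# (SGC-style) propagation cannot distinguish node v_i from its copy v_{i+N}.
rng = np.random.default_rng(0)
N = 20
A = np.triu((rng.random((N, N)) < 0.2).astype(float), 1)
A = A + A.T
F = rng.normal(size=(N, 8))
A2 = np.block([[A, np.zeros((N, N))], [np.zeros((N, N)), A]])  # two copies
F2 = np.vstack([F, F])
A_hat = A2 + np.eye(2 * N)
d_inv = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_t = A_hat * d_inv[:, None] * d_inv[None, :]                  # normalized adjacency
H = np.linalg.matrix_power(A_t, 3) @ F2                        # equivariant propagation
print(np.abs(H[:N] - H[N:]).max())   # 0.0: copies are indistinguishable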
A.2 THEOREM 2
Here we formulate and prove Theorem 2. Note that some notations and definitions are introduced in Appendix A.1.
Theorem 2. For the walk-based proximity S = Ã^K(Ã^K)^T, SMP can preserve the proximity with high probability if the dimensionality of the stochastic matrix is sufficiently large, i.e., ∀ε > 0, ∀δ > 0, there exists d_0 so that for any d > d_0:

P(|F_de(H_i, H_j) − S_{i,j}| < ε) > 1 − δ, ∀ i, j,

where H is the node representation of SMP in Eq. (9). The results hold for any stochastic matrix and thus regardless of whether it is fixed or resampled.
Proof. Our proof is mostly based on standard random projection theory. Firstly, since we have proven in Theorem 1 that permutation-equivariant representations cannot preserve any walk-based proximity, here we prove that we can preserve the proximity using only Ẽ, which can be easily achieved by ignoring H^{(L)} in F_output([Ẽ, H^{(L)}]); e.g., if we set F_output as a linear function, the model can learn to set the corresponding weights for H^{(L)} as all-zeros.
We set the decoder function as a normalized inner product:

F_de(Ẽ_i, Ẽ_j) = (1/d) Ẽ_i Ẽ_j^T.

Then, denoting a_i = Ã^K_{i,:} and recalling Ẽ = Ã^K E, we have:

F_de(Ẽ_i, Ẽ_j) = (1/d) a_i E E^T a_j^T, while S_{i,j} = a_i a_j^T.

Since E is a Gaussian random matrix, from the Johnson-Lindenstrauss lemma (Vempala, 2005) (in the inner-product preservation form, e.g., see Corollary 2.1 and its proof in (Sham & Greg, 2020)), ∀ 0 < ε′ < 1/2, the probability that |(1/d) a_i E E^T a_j^T − a_i a_j^T| exceeds (ε′/2)(‖a_i‖² + ‖a_j‖²) decays exponentially in d. By setting ε′ = ε / max_i ‖a_i‖², we have ε > (ε′/2)(‖a_i‖² + ‖a_j‖²), and the failure probability can be made smaller than δ, which leads to the theorem by solving for d and setting d_0 accordingly.

A.3 THEOREM 3

Here we formulate and prove Theorem 3. Note that some notations and definitions are introduced in Appendix A.1.

Theorem 3. For any length-L walk-based proximity, i.e., a walk-based proximity in which all walks have length at most L, SMP in Eq. (8) can preserve the proximity if the GNN learning Ẽ has L layers with bijective message-passing and updating functions.

Proof. Firstly, as the message-passing and updating functions are bijective by assumption, we can recover from the node representations in each layer all of their neighborhood representations in the previous layer. Specifically, there exist F^{(l)}(·), 1 ≤ l < L, such that:

(e_i^{(l−1)}, {e_j^{(l−1)} : v_j ∈ N(v_i)}) = F^{(l)}(e_i^{(l)}).

For notational convenience, we split the function into two parts, one for the node itself and the other for its neighbors:

e_i^{(l−1)} = F_self^{(l)}(e_i^{(l)}), {e_j^{(l−1)} : v_j ∈ N(v_i)} = F_neighbor^{(l)}(e_i^{(l)}).

For the first function, if we successively apply such functions from the l-th to the input layer, we can recover the input features of the GNN, i.e., E. Since the stochastic matrix E contains a unique signal for each node, we can decode the node ID from e_i^{(0)} given E. For brevity, we denote applying such l + 1 functions to get the node ID as F_self^{(0:l)}(·). For the second function, we can apply F_neighbor^{(l−1)} to the decoded vector set so that we can recover their neighborhood representations in the (l − 2)-th layer, etc.
Next, we show that by recursively applying these functions, we can recover from e_i^{(L)} and e_j^{(L)} all the walks of length at most L between v_i and v_j. Note that all the information is encoded in Ẽ, i.e., we can decode all the neighborhood representations using F_neighbor^{(l)}(·), and we can also apply F_self^{(0:L−1)} to e_i^{(L−1)} to get the start node ID i. Putting it together, we have:

{v_i ⇝ v_j : walks of length at most L} = F(e_i^{(L)}, e_j^{(L)}), (25)

where F(·) is composed of F_self^{(l)}(·), 0 ≤ l < L, and F_neighbor^{(l)}(·), 1 ≤ l < L. Applying the proximity function S(·), we have:

S_{i,j} = S(F(e_i^{(L)}, e_j^{(L)})).

We finish the proof by setting the real decoder function F_de(·) to arbitrarily approximate this desired function S(F(·, ·)) under the universal approximation assumption.
B.1 ADDITIONAL LINK PREDICTION RESULTS
We further report the results of link prediction on three GNN benchmarks: Cora, Citeseer, and Pubmed. The results are shown in Table 7 and show similar trends as the other datasets presented in Section 5.2.

B.2 PAIRWISE NODE CLASSIFICATION

We follow P-GNN (You et al., 2019) and further experiment on pairwise node classification, i.e., predicting whether two nodes have the same label. Compared with standard node classification, pairwise node classification focuses more on the relations between nodes and thus requires the model to be proximity-aware to perform well.
Similar to link prediction, we split the positive samples (i.e., node pairs with the same label) into an 80%-10%-10% training-validation-testing set with an equal number of randomly sampled negative pairs. For large graphs, since enumerating all possible positive samples is intractable (i.e., O(N^2)), we use a random subset. Since we also need node labels as the ground-truth, we only conduct pairwise node classification on datasets where node labels are available. We also exclude the results of PPI since the dataset is multi-label and cannot be used in a pairwise setting (You et al., 2019). Similar to Section 5.2, we adopt a simple inner product classifier and use AUC as the evaluation metric.
The results are shown in Table 8. We observe consistent results as in the link prediction experiments in Section 5.2, i.e., SMP reports the best results on four datasets and the second-best results on the other three datasets. These results again verify that SMP can effectively preserve and utilize node proximities when needed, while retaining comparable performance when the tasks are more permutation-equivariant-like, e.g., on CS and Physics.
B.3 ADDITIONAL ABLATION STUDIES
We report the ablation study results for the node classification task and pairwise node classification task in Table 9 and Table 10, respectively. The results again show that SMP-Linear generally achieves good-enough results on the majority of the datasets and adding non-linearities does not necessarily lift the performance of SMP. Table 10: The ablation study of different SMP variants for the pairwise node classification task. The best results and the second-best results are in bold and underlined, respectively.
B.4 COMPARISON WITH USING IDS
We further compare SMP with augmenting GNNs using a one-hot encoding of node IDs, i.e., the identity matrix. Intuitively, since the IDs of nodes are unique, such a method does not suffer from the automorphism problem and should also enable GNNs to preserve node proximities. However, theoretically speaking, using such a one-hot encoding has two major problems. Firstly, the dimensionality of the identity matrix is N × N, and thus the number of parameters in the first message-passing layer is also on the order of O(N). Therefore, the method is inevitably computationally expensive and may not scale to large graphs. The large number of parameters also makes over-fitting more likely. Secondly, the node IDs are not transferable across different graphs, i.e., the node v_1 in one graph and the node v_1 in another graph do not necessarily share a similar meaning. But as the parameters in the message-passing layers depend on the node IDs (since they are input features), such a mechanism cannot handle inductive tasks well.
We also empirically compare such a method with SMP and report the results in Table 11. The results show that SMP-Linear outperforms GCN_onehot in most cases, not to mention that GCN_onehot fails on Physics, which is only a medium-scale graph, due to its heavy memory usage. One surprising result is that GCN_onehot outperforms SMP-Linear on Grid, the simulated graph where nodes are placed on a 20 × 20 grid. A plausible reason is that since the edges in Grid follow a specific rule, the one-hot encoding gives GCN_onehot enough flexibility to learn and remember the rules, and the model does not overfit because the graph has a rather small scale. | 2020-09-15T06:17:11.584Z | 2020-09-05T00:00:00.000 | {
"year": 2020,
"sha1": "67cdafb71e2ec4e2d83341ee9bfa20d9933d6192",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "67cdafb71e2ec4e2d83341ee9bfa20d9933d6192",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250495433 | pes2o/s2orc | v3-fos-license | Effective Adsorption of Pb2+ on Porous Carbon Derived from Functional Octadecahedron ZIF-8
An adsorbent ZO (oxidized ZIF-8-derived carbon) was prepared from ZIF-8-derived carbon (ZC) by a modified Hummer's method. The removal rate and adsorption amount of Pb2+ were measured for different molar ratios of 2-Hmim to 2, 2′-bipyridine in ZO, including 1:1 (1:1 ZO) and 1:2 (1:2 ZO). The adsorption experiments show that the best conditions to adsorb Pb2+ from Pb2+ solution for 1:1 ZO are an adsorbent dosage of 20 mg, adsorption time of 16 h, initial Pb2+ concentration of 15 mg/L, and pH = 3; those for 1:2 ZO are an adsorbent dosage of 15 mg, adsorption time of 18 h, initial Pb2+ concentration of 15 mg/L, and pH = 4. The adsorption data fit the quasi-second-order kinetics (R2 = 0.99998), indicating that chemical adsorption plays a leading role. The fitted isotherm adsorption curve is more consistent with the Langmuir adsorption model (1:1 ZO, R2 = 0.95058; 1:2 ZO, R2 = 0.97488). The competitive adsorption results show that the removal rate of Pb2+ by 1:1 ZO and 1:2 ZO is more than 98%, indicating that 1:1 ZO and 1:2 ZO have a superior selectivity for Pb2+ competing with Cu2+ and Fe2+. The maximum adsorption amount of Pb2+ is 15.52 mg/g by 1:1 ZO and 18.09 mg/g by 1:2 ZO. This study shows that 1:2 ZO is more helpful for the removal of Pb2+ than 1:1 ZO.
Introduction
Heavy metals (HMS) are among the most common pollutants in sewage [1], including lead (Pb), cadmium (Cd), mercury (Hg), chromium (Cr), and arsenic (As). They are highly toxic, non-biodegradable, and harmful to all organisms [2]. Therefore, HMS removal has attracted much attention owing to the high requirements of ecological protection and health. Pb2+, as one of the HMS, is not only non-degradable but also among the most toxic metals [3]. Even a small amount of Pb2+ is toxic to plants and animals [4]. Thus, the emission of Pb2+ has led to a serious environmental problem, and effective removal of Pb2+ is essential.
Recently, a large number of scholars have studied the chemical precipitation method and the ion exchange method to remove Pb2+, demonstrating the high efficiency of those methods. Maria Teresa Alvarez et al. [5] investigated Pb2+ precipitation by biologically produced H2S, achieving a Pb2+ removal rate above 92%. James P. Bezzina et al. [6] reported that ion exchange removal of Pb2+ from acid-extracted sewage sludge is highly effective. However, limitations such as poor cost-effectiveness, incomplete removal of Pb2+, and high energy requirements restrict the application of chemical methods [7]. Comparatively, the adsorption method is not only environmentally friendly but also inexpensive for removing Pb2+ from sewage [8,9].
The adsorption method is widely used in research and industry for the advantages of strong operability and high efficiency [10]. Common adsorbents for the adsorption of Pb2+ are zeolite [11], graphene oxide [12], and biomass [13]. Although zeolite, graphene oxide, and biomass are popular adsorbents due to Pb2+ removal rates of almost 100%, their high cost limits wide application [11]. Zeolitic imidazolate frameworks (ZIFs), as a nanoporous carbon (Nc) material, are cost-effective and easy to synthesize, and they have a large specific surface area and pore volume. Therefore, they can be a promising adsorbent for Pb2+ removal from sewage [14].
Researchers have developed various ZIFs with zeolite or zeolite-like topological structures; they have strong chemical robustness and good thermal stability [15,16]. In particular, ZIFs with sodalite topology, such as ZIF-8, offer a high possibility of improving Pb2+ removal owing to their porous structure [17]. The successful synthesis of ZIF-8 in concentrated ammonium hydroxide aqueous solutions at room temperature was reported by Ming He et al. [17]. ZIF-8 is a framework formed by zinc ions and imidazole ligands with albite topology and has been widely studied among this type of material [18]. It can be used for gas adsorption/separation, catalysis, etc. [19,20]. Common ZIF-8 shapes are cube [17], cuboid [21], dodecahedron [22], spherical [23], and leaf-like [24]. After carbonization of ZIF-8 (zinc-based ZIF) powder in a nitrogen atmosphere, ZIF-8-derived carbon can be obtained after washing with hydrochloric acid and drying. Related literature reported that the morphology of ZIF-8 was not changed before and after carbonization [25]. Recently, researchers found that the porous carbon synthesized by the ZIF-8 carbonization method can be used as an efficient adsorbent to remove pollutants [26][27][28]. A number of studies show that the ZIF-8 membrane presents an advantage in the separation of H2 from a mixture [16]. However, Pb2+ adsorption by ZIF-8-derived carbons has rarely been investigated.
Currently, Hummer's method is the most common method used for preparing graphene oxide [29]. Thus, most of the previous studies have focused only on the preparation of graphene oxide by the modified Hummer's method [30]. ZIF-8-derived carbon prepared at 900 °C has a certain degree of graphitization, so Hummer's method can be used to surface-oxidize ZIF-8-derived carbon and increase the number of functional groups on its surface.
In this study, ZIF-8-derived carbon was oxidized by the modified Hummer's method and used to adsorb Pb2+ in an aqueous solution. Then, five methods, SEM, XRD, FT-IR, zeta potential, and BET, were used to characterize the materials. The effects of adsorption time, pH, adsorbent dose, and initial concentration on the removal of Pb2+ were studied, and the optimal parameters for the removal of Pb2+ by the materials were determined. After that, the materials were used to adsorb Pb2+ from a solution containing other metal ions to explore their selectivity toward Pb2+.
Chemicals and Materials
The details of the chemical materials used in the research are listed in Table 1. Deionized water (DW) used in all experiments was made in the laboratory. All chemical reagents were used as purchased, without further purification.
Synthesis of ZIF-8
Zn(NO3)2·6H2O at 1.8 g was added to 90 mL CH3OH, and 0.5 g 2-Hmim was added to 45 mL NH3·H2O. Then, the zinc ion-containing solution was slowly added to the above solution. Next, the solution was stirred for 5 h at 5 °C, centrifuged at 8000 rpm for 10 min, and washed with methanol three times. After drying at 80 °C in a vacuum drying oven overnight, ZIF-8 powder was obtained. Finally, the molar ratios of 2-Hmim to 2, 2′-bipyridine in ZIF-8 include 1:1 (symbol 1:1 ZIF-8) and 1:2 (symbol 1:2 ZIF-8). The removal rate and adsorption amount of Pb2+ were calculated as:

R = (C0 − Ct)/C0 × 100% (1)

Qt = (C0 − Ct)V/M (2)

where R is the removal rate of Pb2+ (%), C0 and Ct are the initial and equilibrium concentrations of Pb2+ (mg/L), Qt is the adsorption amount when the adsorption reaches equilibrium (mg g−1), V is the volume (L) of the adsorbed solution, and M is the mass (g) of 1:1 ZO or 1:2 ZO used for adsorption.
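As a minimal sketch, Eqs. (1)-(2) can be computed as follows; the variable names and the example numbers are illustrative, not measurements from this work.

def removal_rate(c0: float, ct: float) -> float:
    """Eq. (1): R = (C0 - Ct) / C0 * 100, in percent."""
    return (c0 - ct) / c0 * 100.0

def adsorption_amount(c0: float, ct: float, volume_l: float, mass_g: float) -> float:
    """Eq. (2): Qt = (C0 - Ct) * V / M, in mg/g."""
    return (c0 - ct) * volume_l / mass_g

# example: 20 mL of 15 mg/L Pb2+ solution with 20 mg (0.020 g) of adsorbent
print(removal_rate(15.0, 0.5))                       # removal rate in %
print(adsorption_amount(15.0, 0.5, 0.020, 0.020))    # adsorption amount in mg/g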
Each group of the adsorption experiment was measured in triplicate; the average value and standard deviation were calculated, and error bars were added in the adsorption experiment figures.
Adsorption Isotherm Experiment
In the experiment, 20 mg of 1:1 ZO and 15 mg of 1:2 ZO were added into 20 mL of Pb2+ simulated waste liquid with a concentration of 15 mg/L. After stirring at room temperature for 16 h and 18 h, respectively, the obtained solution was filtered with a syringe filter. After filtration, 5 mL of the supernatant was taken, and its concentration was determined by atomic absorption at a wavelength of 283.3 nm. Two isotherm adsorption models, Langmuir and Freundlich, were used to analyze the adsorption data of 1:1 ZO and 1:2 ZO for Pb2+.
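A minimal sketch of fitting the two models with non-linear least squares is shown below; the model forms are the standard Langmuir and Freundlich equations, and the Ce/Qe arrays are placeholder values, not this work's measurements.

import numpy as np
from scipy.optimize import curve_fit

# Langmuir: Qe = Qm*KL*Ce/(1+KL*Ce); Freundlich: Qe = KF*Ce**(1/n).
def langmuir(ce, qm, kl):
    return qm * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)

ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # equilibrium conc., mg/L (toy)
qe = np.array([5.1, 8.0, 11.2, 13.9, 15.3])     # adsorbed amount, mg/g (toy)

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=[16.0, 1.0])
(kf, n), _ = curve_fit(freundlich, ce, qe, p0=[5.0, 2.0])
print(f"Langmuir: Qm={qm:.2f} mg/g, KL={kl:.3f} L/mg")
print(f"Freundlich: KF={kf:.2f}, n={n:.2f}")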
Characterization
Scanning electron microscopy (SEM, SIGMA + X-MaxN, Germany) was used to observe the morphology of the materials. X-ray diffraction (XRD, X'PertPowder, Netherlands) was used to study the crystal structure and phase of the materials, with a scanning range of 3 ~ 90° at a scanning speed of 2.4 s/step; the tube voltage and tube current were 40 kV and 20 mA, respectively. Functional groups were identified by analyzing Fourier transform infrared (FT-IR) spectra (VERTEX 70) in the wavenumber range of 500 ~ 4000 cm−1. Nitrogen adsorption-desorption isotherm analysis was used for the structural analysis of the porous materials: samples were degassed at 120 °C for 6 h, and then measurements were taken at 77 K using a specific surface area and porosity analyzer (Micromeritics). The N2 adsorption and desorption curves of the materials were measured by a Quantachrome AUTOSORB-1. The specific surface area was calculated by the Brunauer-Emmett-Teller (BET) method in the partial pressure (P/P0) range from 0.02 to 0.22. Pore volume and pore size distribution were calculated by the BJH (Barrett-Joyner-Halenda) method. A thermogravimetric analyzer (TGA-DSC, STA 449C Jupiter, NETZSCH, Germany) was employed to measure the weight loss of the ZIF-8 in the temperature range of 30 ~ 1000 °C with a heating rate of 10 °C min−1 under a nitrogen stream. Zeta potentials of the nanoparticles were determined by dynamic light scattering (Beckman, USA).
Structural Characterization
Scanning electron microscopy images of the materials 1:1 ZO and 1:2 ZO are shown in Fig. 1. Both 1:1 ZO and 1:2 ZO are octadecahedral, and the particles are uniform and well dispersed. The particle size of 1:1 ZO is smaller than that of 1:2 ZO [14].
TGA curves of 1:1 ZO and 1:2 ZO are shown in Fig. 2. Below 150 °C, the weight losses of 1:1 ZO and 1:2 ZO are minimal and are attributed to the evaporation of water molecules adsorbed in the pores of the materials. No significant weight loss is observed in the temperature range of 150 ~ 400 °C. The weight losses of 5.5% (1:1 ZO) and 10.84% (1:2 ZO) at 400 ~ 600 °C can be attributed to the decomposition of the carbon-containing framework into gaseous products, such as nitric oxide and carbon monoxide, and the formation of metal oxide (zinc oxide). The weight losses of 10.94% (1:1 ZO) and 11.94% (1:2 ZO) above 900 °C can be attributed to the generation of nitric oxide and carbon monoxide and the release of Zn during the reduction of ZnO. The final weight of 1:1 ZO retains about 45% of the initial mass, while that of 1:2 ZO is only 33% of the initial mass.
The results indicate that the thermal stability of 1:1 ZO is higher than that of 1:2 ZO, which is related to the presence of more 2-Hmim in 1:1 ZO [17]. The nitrogen adsorption-desorption curves in Fig. 3a show that the isotherms of 1:1 ZO and 1:2 ZO have the characteristics of a type IV isotherm; the nitrogen uptake at low pressure indicates a certain microporous structure in the materials [23]. Table 2 lists the pore parameters of 1:1 ZO and 1:2 ZO. The specific surface area, pore volume, and pore size of 1:1 ZO are all larger than those of 1:2 ZO, which is related to the higher oxidation degree of 1:2 ZO and confirms the conclusions drawn from the nitrogen adsorption-desorption curves [11]. Figure 4a shows the XRD patterns of 1:1 ZO and 1:2 ZO [34].
As shown in Fig. 4b, the Fourier transform infrared spectra of 1:1 ZO and 1:2 ZO follow the same trend. Both have absorption peaks at 1578, 1252, and 1720 cm−1. The peak at 1252 cm−1 corresponds to C-N, and the peak at 1578 cm−1 belongs to C=N and N-H groups. The peak at 1720 cm−1 corresponds to the stretching vibration of -COOH, indicating that 1:1 ZO and 1:2 ZO have been successfully oxidized [12]. According to Fig. 5a, both 1:1 ZO and 1:2 ZO are negatively charged, with zeta potentials of −18.44 mV for 1:1 ZO and −20.66 mV for 1:2 ZO. These negative charges indicate that the surface of the ZO materials contains a large number of negatively charged functional groups. As shown in Fig. 5b, the pHPZC of 1:1 ZO is 2.38: when pH < 2.38, 1:1 ZO is positively charged, and when pH > 2.38, it is negatively charged. The pHPZC of 1:2 ZO is 3.87: when pH < 3.87, 1:2 ZO is positively charged, and when pH > 3.87, it is negatively charged [8].
Adsorption Research
To investigate the effects of the adsorbent dosage on the removal rate and adsorption amount of Pb2+, 5 mg, 10 mg, 15 mg, 20 mg, and 25 mg of 1:1 ZO and 1:2 ZO were added into 20 mL of 15 mg/L Pb2+ solution. The resulting removal rates and adsorption amounts are shown in Fig. 6.
As shown in Fig. 6, with the increase in adsorbent dosage (5 ~ 25 mg), the removal rate of Pb2+ increases from 20.62 to 96.62% for 1:1 ZO and from 21.41 to 98.21% for 1:2 ZO. The Pb2+ adsorption amount of 1:2 ZO first increases from 12.8 to 16.76 mg/g and then decreases to 11.79 mg/g, while that of 1:1 ZO first increases from 12.37 to 13.36 mg/g and then decreases to 11.59 mg/g. Thus, 1:1 ZO and 1:2 ZO show the same trends in removal rate and adsorption amount. The dosage at which the maximum adsorption amount is reached is lower for 1:2 ZO (15 mg) than for 1:1 ZO (20 mg). Under these conditions, the removal rate of Pb2+ by both materials exceeds 90%.
When only a small amount of adsorbent is added, the binding sites are insufficient, the adsorption is incomplete, and the removal rate is therefore low. When a sufficient amount of adsorbent is added, enough active sites are available to adsorb more Pb2+, and the removal rate increases. Increasing the adsorbent amount further, to the point where the target pollutant is almost completely adsorbed, lowers the adsorption amount per unit mass because the adsorption can no longer reach saturation. Therefore, a suitable adsorbent dose must be chosen: below a certain dosage (15 mg for 1:2 ZO and 20 mg for 1:1 ZO), the active sites on the adsorbent are insufficient, the adsorption does not reach a saturated state, and the removal rate is low; above it, the adsorption is saturated and the adsorption amount per unit mass decreases.
To investigate the effects of the initial Pb2+ concentration on the removal rate and adsorption amount, Fig. 7 shows the adsorption by 1:1 ZO and 1:2 ZO as a function of the Pb2+ concentration. The volume of the Pb2+ solution is 20 mL, and the concentrations are 5 mg/L, 10 mg/L, 15 mg/L, 25 mg/L, and 35 mg/L. In the experiment, 20 mg of 1:1 ZO and 15 mg of 1:2 ZO were used as the adsorbents.
As shown in Fig. 7, as the initial Pb2+ concentration increases (5 ~ 35 mg/L), the adsorption amount of Pb2+ by 1:1 ZO (Fig. 7a) first increases from 4.81 to 12.01 mg/g, then decreases to 7.17 mg/g, and finally increases to 9.67 mg/g, while the removal rate decreases continuously (96.35 ~ 27.62%). For comparison, the adsorption amount of Pb2+ by 1:2 ZO (Fig. 7b) first increases from 6.32 to 12.81 mg/g, then decreases to 12.38 mg/g, and finally increases to 16.82 mg/g, while the removal rate decreases continuously to 58.71%. Thus, the optimum initial Pb2+ concentration, which yields both a high adsorption amount and a high removal rate, is 15 mg/L for both 1:1 ZO and 1:2 ZO. Significantly, the adsorption amount and removal rate of 1:2 ZO are higher than those of 1:1 ZO at the same initial Pb2+ concentration, so 1:2 ZO removes Pb2+ more effectively. For the same adsorbent, when the Pb2+ concentration in the solution is below 15 mg/L, there are enough carboxyl groups to chelate Pb2+ or adsorb it electrostatically. However, when the Pb2+ concentration exceeds 15 mg/L, the carboxyl groups available for adsorption are insufficient and cannot provide enough active sites for Pb2+, so the removal rate decreases.
To investigate the effect of the pH of the Pb2+ solution on the removal rate and adsorption amount, 20 mg of 1:1 ZO and 15 mg of 1:2 ZO were added into 20 mL of simulated waste liquid with a Pb2+ concentration of 15 mg/L. 1 M NaOH and 1 M HCl were used to adjust the pH to 2, 3, 4, 5, 6, 7, and 8. The experimental results are shown in Fig. 8.
The pH value affects the interaction between adsorbates and adsorbents by changing the charge distribution on their surfaces [35]. When pH < 5, lead exists mainly as Pb2+ and Pb(OH)+, and when 5 < pH < 10, lead exists mainly as Pb(OH)2, Pb(OH)42−, and Pb(OH)3−. As shown in Fig. 8, the adsorption amount and removal rate of Pb2+ by 1:1 ZO and 1:2 ZO first increase and then decrease with increasing pH. The Pb2+ adsorption amount (14.57 mg/g) and removal rate (97.13%) of 1:1 ZO peak at pH = 3, as shown in Fig. 8a. Because the pHPZC of 1:1 ZO is 2.38 (Fig. 5), when pH < 2.38, 1:1 ZO is positively charged while lead exists as Pb2+ and Pb(OH)+, so electrostatic repulsion occurs between them. When 2.38 < pH < 5, the negative charge of 1:1 ZO is enhanced and the positively charged lead species are electrostatically attracted to the surface.
As shown in Fig. 8b, both the Pb2+ adsorption amount (19.52 mg/g) and the removal rate (97.58%) of 1:2 ZO peak at pH = 4. Since the pHPZC of 1:2 ZO is 3.78 (Fig. 5), when pH < 3.78, 1:2 ZO is positively charged; lead then exists mainly as Pb2+ and Pb(OH)+, which are repelled by the surface. When 3.78 < pH < 5, 1:2 ZO is negatively charged and electrostatically attracts Pb2+ and Pb(OH)+. When pH > 5, 1:2 ZO remains negatively charged, but the adsorption amount and removal rate decrease significantly because lead then exists mainly as Pb(OH)2 and Pb(OH)42−, and mutual repulsion occurs between these species and the surface.

To investigate the effects of the adsorption time on the Pb2+ removal rate and adsorption amount, 20 mg of 1:1 ZO and 15 mg of 1:2 ZO were added to 20 mL of simulated waste liquid with a Pb2+ concentration of 15 mg/L, and the solutions were shaken at room temperature (200 rpm); the results are shown in Fig. 9. As shown in Fig. 9, the Pb2+ removal rate of 1:1 ZO and 1:2 ZO gradually increases with adsorption time. The adsorption times needed for 1:1 ZO and 1:2 ZO to reach saturation (13.81 mg/g and 18.09 mg/g, respectively) are 16 h and 18 h, respectively. Significantly, at the same adsorption time, the adsorption amount of 1:2 ZO is much higher than that of 1:1 ZO; that is, the adsorption effect of 1:2 ZO is better.
At the beginning of adsorption, there are a large number of active sites on the surface of the adsorbent, so 1:1 ZO and 1:2 ZO combine readily with Pb2+, resulting in a high removal rate. As the active sites are consumed, the adsorption rate gradually slows down, and the adsorption reaches saturation after a certain time.
In addition, the adsorption of Pb2+ by the prepared materials was compared with that of other reported materials; the results are shown in Table 3. The saturation adsorption amounts of the prepared 1:1 ZO and 1:2 ZO for Pb2+ are higher than those of several other adsorbents, such as activated bentonite and magnetic graphene, indicating that 1:2 ZO in particular is an effective adsorbent for Pb2+.
Adsorption Kinetics
The pseudo-first-order kinetic model (3) and the pseudo-second-order kinetic model (4) were used to describe the adsorption rate:

ln(qe − qt) = ln qe − K1 t   (3)

t/qt = 1/(K2 qe²) + t/qe   (4)

where qe and qt (mg/g) are the equilibrium adsorption amount and the adsorption amount at time t, respectively, and K1 (min−1) and K2 (g mg−1 min−1) are the pseudo-first-order and pseudo-second-order rate constants, respectively.
The fitting curves and the related parameters are shown in Fig. 10 and Table 4. It can be clearly seen from Table 4 that the pseudo-second-order fit has a correlation coefficient of 0.99998, significantly higher than that of the pseudo-first-order fit. Moreover, the maximum adsorption amount calculated from the pseudo-second-order equation is closer to the experimental value. This indicates that the pseudo-second-order model better describes the adsorption kinetics of 1:1 ZO and 1:2 ZO for Pb2+, and that the adsorption process is therefore mainly chemisorption.
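For readers who wish to reproduce this kind of kinetic analysis, the sketch below fits the nonlinear forms of Eqs. (3) and (4) with SciPy; the time series is an illustrative stand-in for the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Nonlinear forms of the kinetic models behind Eqs. (3) and (4).
def pseudo_first_order(t, qe, k1):
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

t = np.array([30, 60, 120, 240, 480, 720, 960, 1200])        # time (min)
qt = np.array([4.2, 6.8, 9.5, 11.8, 13.1, 13.6, 13.8, 13.8])  # q_t (mg/g)

for model, name in [(pseudo_first_order, "PFO"), (pseudo_second_order, "PSO")]:
    popt, _ = curve_fit(model, t, qt, p0=[qt.max(), 0.01])
    resid = qt - model(t, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((qt - qt.mean())**2)
    print(f"{name}: qe = {popt[0]:.2f} mg/g, k = {popt[1]:.4f}, R2 = {r2:.5f}")
```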
Adsorption Isotherm
The adsorption isotherm is employed to evaluate the adsorption characteristics of an adsorbent. In this work, the Langmuir (5) and Freundlich (6) adsorption isotherms were used to analyze the Pb2+ adsorption behavior of 1:1 ZO and 1:2 ZO:

Ce/qe = Ce/qm + 1/(KL qm)   (5)

ln qe = ln Kf + (1/n) ln Ce   (6)

where qe (mg/g) is the equilibrium adsorption amount of 1:1 ZO and 1:2 ZO; Ce (mg/L) is the Pb2+ concentration when the adsorption is in equilibrium; qm (mg/g) is the maximum adsorption amount of 1:1 ZO and 1:2 ZO; KL is the Langmuir constant; Kf and n are Freundlich constants related to the adsorption strength of the adsorbent; and C0 (mg/L) is the initial Pb2+ concentration. Figure 11 shows the fits to the Langmuir and Freundlich isotherm equations. The data are better described by the Langmuir model (1:1 ZO: R2 = 0.95058; 1:2 ZO: R2 = 0.97488), suggesting monolayer adsorption.

Adsorption Thermodynamics

Adsorption thermodynamics can be used to identify the driving force of the adsorption process. The adsorption equilibrium constant, enthalpy change (ΔH0), entropy change (ΔS0), and Gibbs free energy (ΔG0) are related by

Kd = qe/Ce   (7)

ln Kd = ΔS0/R − ΔH0/(RT)   (8)

ΔG0 = ΔH0 − TΔS0   (9)

where Kd is the adsorption equilibrium constant (L/kg); R is the gas molar constant (8.314 J/mol/K); T is the absolute temperature (K); and ΔS0 (kJ/(mol·K)), ΔH0 (kJ/mol), and ΔG0 (kJ/mol) are the entropy change, enthalpy change, and Gibbs free energy, respectively. ΔH0 and ΔS0 are obtained from the slope (−ΔH0/R) and the intercept (ΔS0/R) of the ln Kd versus 1/T fitted line (Eq. 8). From the adsorption data measured at 298 K, 308 K, and 318 K, a scatter plot of ln Kd vs. 1/T was fitted, as shown in Fig. 12, and the values of ΔH0 and ΔS0 obtained from the fit are listed in Table 6. As Fig. 12 shows, the adsorption amount of Pb2+ by 1:1 ZO and 1:2 ZO decreases as the temperature of the system increases, and ΔH0 in Table 6 is negative, indicating that the adsorption of Pb2+ by 1:1 ZO and 1:2 ZO is an exothermic process. In addition, ΔG0 is negative, indicating that the adsorption process is spontaneous. The negative value of ΔS0 indicates that the disorder at the solid-liquid interface is reduced during adsorption.
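A minimal sketch of the isotherm and van 't Hoff analyses (Eqs. (5)-(9)) is given below; the arrays are illustrative placeholders for the measured equilibrium data:

```python
import numpy as np
from scipy.stats import linregress

# --- Linearized isotherm fits, Eqs. (5) and (6) ---
ce = np.array([0.5, 1.2, 3.0, 7.5, 14.0])     # equilibrium conc. (mg/L)
qe = np.array([6.1, 9.4, 13.0, 16.0, 17.5])   # adsorption amount (mg/g)

lang = linregress(ce, ce / qe)                # Ce/qe = Ce/qm + 1/(KL*qm)
qm, kl = 1.0 / lang.slope, lang.slope / lang.intercept
freu = linregress(np.log(ce), np.log(qe))     # ln qe = ln Kf + (1/n) ln Ce
print(f"Langmuir: qm={qm:.2f} mg/g, KL={kl:.3f}, R2={lang.rvalue**2:.4f}")
print(f"Freundlich: n={1.0/freu.slope:.2f}, Kf={np.exp(freu.intercept):.2f}, "
      f"R2={freu.rvalue**2:.4f}")

# --- Van 't Hoff analysis, Eqs. (7)-(9) ---
R = 8.314                                     # gas constant, J/(mol*K)
T = np.array([298.0, 308.0, 318.0])
kd = np.array([2.10, 1.65, 1.30])             # illustrative Kd values
vh = linregress(1.0 / T, np.log(kd))
dH = -vh.slope * R                            # slope = -dH0/R
dS = vh.intercept * R                         # intercept = dS0/R
dG = dH - T * dS                              # Eq. (9), J/mol at each T
print(f"dH0={dH/1000:.2f} kJ/mol, dS0={dS:.2f} J/(mol*K), "
      f"dG0={np.round(dG/1000, 2)} kJ/mol")
```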
Competitive Adsorption
In the experiment, Cu2+ and Fe2+ were selected as heavy metal ions to compete with Pb2+ during adsorption. The adsorption results of 1:1 ZO and 1:2 ZO for Pb2+ in the different mixtures are shown in Fig. 13. When Cu2+ and Fe2+ are present simultaneously, the removal rates of Pb2+ by 1:1 ZO and 1:2 ZO remain above 98%, indicating that both materials retain a high selectivity for Pb2+ in the presence of competing ions.
Conclusion
In this work, 1:1 ZO and 1:2 ZO were prepared and used to adsorb Pb2+. The best conditions for 1:1 ZO to adsorb Pb2+ are m (adsorbent dosage) = 20 mg, t (adsorption time) = 16 h, C0 (initial concentration) = 15 mg/L, and pH = 3. The best conditions for 1:2 ZO are m = 15 mg, t = 18 h, C0 = 15 mg/L, and pH = 4. The results show that 1:2 ZO is more effective than 1:1 ZO for the removal of Pb2+. The adsorption mechanism of 1:1 ZO and 1:2 ZO for Pb2+ has been discussed. The Pb2+ adsorption process of ZO follows pseudo-second-order kinetics (R2 = 0.99998), indicating that chemical adsorption plays the leading role. The fitted isotherm is more consistent with the Langmuir adsorption model (1:1 ZO: R2 = 0.95058; 1:2 ZO: R2 = 0.97488). The maximum adsorption amount of Pb2+ is 15.52 mg/g for 1:1 ZO and 18.09 mg/g for 1:2 ZO. Thermodynamic analysis shows that the Pb2+ adsorption process is spontaneous. Competitive adsorption experiments show that the removal rate of Pb2+ by 1:1 ZO and 1:2 ZO exceeds 98% in the presence of other heavy metal ions, indicating that the investigated materials have a high selectivity for Pb2+.
"year": 2022,
"sha1": "7ccd25c4d4981a85ba206eec44689a853db0fb0b",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1516642/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "673a02c6960cf2d2241fc443ebbcd429f3aa2544",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
FITTING A POINT CLOUD TO A 3D POLYHEDRAL SURFACE
The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem usually involves the ambiguous task of comparing two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched grid method (SGM) to replace the solution of a non-linear problem with several linear steps. The squared distance (SD) is the general criterion used to control the convergence of a set of points to a target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting of a point cloud to a target surface converges in several linear steps. The method is applicable to the remote measurement of the geometry of large-scale objects in a contactless fashion.
INTRODUCTION
The geometry measurement of large-scale objects in any industry is a very acute problem. It reduces to the comparison of a 3D point set obtained by remote measurement with a continuous theoretical surface, which we can classify as a point-to-surface (PTS) problem. Usually one can treat such a comparison as superposition, which requires no fewer than three reference points. However, reference points can be either unknown or meaningless for some classes of product. In this case, the problem comes down to a comparison of two geometric objects in 3D space according to a given criterion of optimality. That is, the unknown parameters of the 3D transformation, such as translation and rotation, are to be found so that the objects match optimally. The algorithms for comparing two 3D sets can be classified in the following way: 1. ICP algorithm (iterative closest point algorithm). The basis of the ICP algorithm is the assumption that two objects have a common area where they coincide well enough. It means that, in the common area, each point of one object has a corresponding point of the other object. Vaillant and Glaunes (Vaillant et al., 2005) described the basics of the ICP algorithm. However, the ICP algorithm has some disadvantages: - the computational complexity of finding the closest points is O(m N1 N2), where m is the number of iterations, N1 the number of points of the first object, and N2 the number of points of the second object (Friedman et al., 1977); - strong dependence on the given initial approximation; - strong dependence on the density of the point clouds; - the method requires the existence of a large overlap region where the points of one cloud correspond to the points of the other cloud.
Recently, many variants of the original ICP approach have been proposed, such as: - Brunnstrom and Stoddart (Brunnstrom et al., 1996) describe a genetic algorithm for finding the most successful initial approximation, which is then used as input to the ICP algorithm; - the research of Dyshkant (Dyshkant, 2010) is also dedicated to an ICP modification based on k-d trees, which reduces the computational complexity to O(m N1 log N2); - the works of Liu, Li, and Wang (Liu et al., 2004) propose algorithms that improve the accuracy and reliability of the ICP algorithm by imposing certain restrictions on the input data.
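The k-d tree modification mentioned above is easy to sketch in code. The following Python fragment (with randomly generated clouds as stand-ins for real data) builds a k-d tree over the target cloud once and then answers each closest-point query in O(log N2) time on average:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
source = rng.random((1000, 3))   # N1 source points (illustrative)
target = rng.random((5000, 3))   # N2 target points (illustrative)

# Build the tree once over the target cloud (O(N2 log N2)); afterwards,
# each nearest-neighbor query costs O(log N2) on average, so one ICP
# iteration over all source points costs O(N1 log N2).
tree = cKDTree(target)
dists, idx = tree.query(source)  # closest target point for every source point
print(dists.mean(), target[idx].shape)
```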
Methods based on curvature maps.
This class of methods requires knowledge of the curvature of the surface given by the point cloud. The algorithm was described by Gatzke et al. (Gatzke et al., 2005). The disadvantage of this method is a strong dependence on the point cloud density, because the density affects the accuracy of the curvature calculation.
The authors of (Bergevin et al., 1995) describe an algorithm that does not require an initial approximation. This algorithm can use a free-form surface; however, it is very slow.
The authors of (Sitnik et al., 2002) improved the steepest-descent optimization method. The disadvantage of the approach is its quadratic computational complexity.
A Delaunay triangulation algorithm in combination with the Nelder-Mead method is described in (Dyshkant, 2010) and (Nelder et al., 1965). The algorithm assumes that the surfaces are single-valued. Like the ICP algorithm, it depends on the given initial approximation.
The authors of (Gruen et al., 2005) proposed an algorithm based on the least-squares method. The algorithm requires that the point clouds have a significant overlap area.
In (Popov et al., 2013), an algorithm based on a step-by-step geometric transformation of the point cloud is formulated. The disadvantage of this algorithm is its lack of mathematical rigor.
Nowadays there are two trends in solving the surface superposition problem. The first group of methods restricts the initial data and therefore works fast. The second group is more general but has a large computational complexity. Hence, more algorithms for the comparison of two data sets are needed.
INITIAL BACKGROUND
The initial assumption is that there are two sets: P: Pi(xi, yi, zi), the cloud of source points obtained by measuring, and P̄: P̄i(x̄i, ȳi, z̄i), the cloud of target points (see Fig. 1). We should fit the point cloud P to the target point cloud P̄.
Figure 1: Two data sets

However, we cannot do this directly because of the lack of definite and understandable base points. Besides, both data sets have different structures, so there are no corresponding points to compare. Therefore, we triangulate the target cloud and turn it into a continuous 3D polyhedral surface Σ (see Fig. 2) specified on the same bounded domain D ⊂ R2 as the target data set. It is required to find, among all possible 3D transformations Ω, the transformation Ωopt for which the source set Ωopt(P) is closest to the surface Σ according to the given distance function ρ, that is,

Ωopt = arg min over Ω of ∑i ρ(Σ, Ω(Pi))   (1)

where ρ(Σ, X) is the distance function from point X to the surface Σ.
Since the two sets are specified with respect to two different origins, we need to transform one of them as a 'rigid body' to satisfy eqn (1). Such a 3D transformation is defined by six parameters: the components of the translation vector Δxc, Δyc, Δzc (here C is the geometric center of the relocated set) and three rotation angles φx, φy, φz.
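The six-parameter transformation can be sketched in Python as follows; rotating about the cloud's geometric center C and applying the rotations in the order Rz·Ry·Rx is our assumed convention, since the text does not fix one:

```python
import numpy as np

def rigid_transform(points, dx, dy, dz, phi_x, phi_y, phi_z):
    """Apply a 6-parameter rigid-body transformation to an (N, 3) cloud:
    rotation about the cloud centroid C, then translation (dx, dy, dz)."""
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    cy, sy = np.cos(phi_y), np.sin(phi_y)
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    c = points.mean(axis=0)                    # geometric center C
    return (points - c) @ (Rz @ Ry @ Rx).T + c + np.array([dx, dy, dz])
```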
The numerical solution of this non-linear problem by the usual optimization approaches is very complicated for various reasons, namely: • In general, it is difficult to fit two sets even approximately.
Hence, it is impossible to find the initial values of Δxc, Δyc, Δzc, φx, φy, φz. This forces us to take them with maximum values, which slows down the computing process.
• The surface may not be simply connected, which increases the number of constraints in the optimization problem.
• Often the surface does not have an analytical representation, so its derivatives are unknown or do not exist. That makes it impossible to use efficient numerical algorithms based on function derivatives.
• The computation time depends on N (the number of points in the set P). Therefore, the computing process becomes very slow when the density of the point set grows significantly.
Figure 2: The source data set and the 3D surface

Taking this into account, we propose a new approach that consists of two stages, the first of which is Principal Component Analysis (PCA). We apply PCA to the so-called 'rough fit', which is the approximate initial fit of the two sets. The second stage is the precise fit based on the Stretched grid method (SGM), which allows accurate fitting of the two sets according to the minimum-SD criterion in 1-4 linear steps. We demonstrate this approach on the parabolic aerial, where Σ is the analytic aerial surface and P is the source point cloud obtained by measuring the aerial with the standard electronic tacheometer «Trimble-M3» (Survey of Trimble M3 Mechanical Total Station). The target data set was obtained on the basis of available documentation.
ROUGH FIT
It should be noted that PCA is often used to map data onto a new orthonormal basis in the directions of the largest variance (Draper et al., 2002). The largest eigenvector of the covariance matrix always points in the direction of the largest variance of the data.
In our case, the first data set is the point cloud and the second is the continuous surface; therefore, we should represent the surface by another point cloud as well. The further procedure follows the scheme described in (Bellekens et al., 2014).
If the covariance matrix of two point clouds differs from the identity matrix, a rough fit can be obtained by simply aligning the eigenvectors of their covariance matrices. This alignment is obtained in the following way. In the first step, the two point clouds are centered so that the origins of their final bases coincide. Centering a point cloud simply corresponds to subtracting the centroid coordinates from each of the point coordinates; the centroid is the average coordinate, obtained by dividing the sum of all point coordinates by the number of points in the point cloud. Since the rough fit based on PCA simply aligns the directions in which the point clouds vary the most, the second step consists of calculating the covariance matrix of each point cloud. The covariance matrix is a symmetric 3 × 3 matrix whose diagonal values represent the variances and whose off-diagonal values represent the covariances. In the third step, the eigenvectors of both covariance matrices are calculated. The largest eigenvector points in the direction of the largest variance of the 3D point cloud and therefore represents the point cloud's rotation.
Further, let A be the covariance matrix, let v be an eigenvector of this matrix, and let λ be the corresponding eigenvalue. The eigenvalue problem is then defined as

Av = λv,   (2)

which further reduces to

(A − λI)v = 0.   (3)

It is clear that (3) has a non-zero solution only if A − λI is singular, and consequently if its determinant equals zero:

det(A − λI) = 0.   (4)

The eigenvalues can simply be obtained by solving (4), whereas the corresponding eigenvectors are obtained by substituting the eigenvalues into (2). Once the eigenvectors are known for each point cloud, the fit is achieved by aligning these vectors. Let the matrix TΣ represent the transformation that would align the largest eigenvector of the target point cloud, related to the surface Σ, with the X-axis, and let the matrix TP represent the transformation that would align the largest eigenvector of the source point cloud P with the X-axis as well. Finally, we can align the source point cloud with the target point cloud easily if we take into account the coincidence of the principal component systems (Xpr, Ypr, Zpr) of the source and target point clouds (see Fig. 3). Here we face some disadvantage, as we cannot always determine the direction of collinear principal component axes uniquely with PCA (see Fig. 3). Therefore, we correct their directions manually by rotating the source point cloud about the axes Xpr, Ypr, Zpr consecutively to meet the minimum of the SD criterion. In our sample, we rotate the point cloud about Xpr (see Fig. 4). The rough fit cannot obtain the real minimum solution according to the SD criterion; therefore, the next stage is the precise fit.
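A minimal NumPy sketch of this rough-fit procedure is given below (all names are illustrative). Note that the sign ambiguity of the computed eigenvectors is exactly the collinear-axis ambiguity discussed above, so the result may still need the manual π-rotation correction:

```python
import numpy as np

def pca_rough_fit(source, target):
    """Center both clouds and align their principal axes (rough fit)."""
    src_centered = source - source.mean(axis=0)
    tgt_centered = target - target.mean(axis=0)
    # Eigen-decomposition of the 3x3 covariance matrices, Eqs. (2)-(4);
    # eigh returns eigenvectors as columns, sorted by eigenvalue.
    _, src_vecs = np.linalg.eigh(np.cov(src_centered.T))
    _, tgt_vecs = np.linalg.eigh(np.cov(tgt_centered.T))
    # Rotation mapping the source principal axes onto the target axes
    R = tgt_vecs @ src_vecs.T
    return src_centered @ R.T + target.mean(axis=0)
```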
PRECISE FIT
The precise fit stage is based on SGM. SGM, described in work (Popov E.V., 1997), is a numerical technique for finding approximate solutions of various mathematical and engineering problems that can be related to the behavior of an elastic grid. In our case, we apply SGM to drag the source point cloud as a 'rigid body' to the target surface by a set of elastic springs (Fig. 5).
The displacement of an arbitrary point i of the cloud is related to the displacement of a single reference point j by expression (5), where Li and Lj are the vectors from the point cloud centroid to points i and j, respectively. Due to expression (5), we can calculate the displacement of each point in the point cloud if we know the displacement of the single point number j only.
The further step is to write the expression for the potential energy of all the connecting lines between the cloud points and the springs (Fig. 4), which takes the following form:

Π = ∑ over m = 1..n of D·Rm²   (6)

where n is the total number of springs, Rm is the length of spring number m, and D is an arbitrary constant (D = 1 in our case).
Then, let us assume that the co-ordinate vector {X} of all the points of the cloud is associated with the final cloud position, when the source cloud is fitted to the target surface, and the vector {X}′ is associated with the initial point cloud position. Thus, vector {X} takes the following form:

{X} = {X}′ + {ΔX}   (7)

where {ΔX} is the vector of the co-ordinate increments of all points.
Taking into consideration the 'rigid body' rotation of the point cloud due to the precise fit, we can write the displacement of an arbitrary point pi (Fig. 6) as expressions (8) and (9), where k is the number of the current point and t is the number of the current co-ordinate. After transformations using expressions (5), (7), (8), and (9), and keeping all lengths Lij constant (see Fig. 6), we obtain a linear 6 × 6 equation system in which the vector Δx has only six unknown components to be found, namely Δxj, Δyj, Δzj, Δxc, Δyc, Δzc, and K is the matrix of the system. When all six unknowns Δxj, Δyj, Δzj, Δxc, Δyc, Δzc are found, we can calculate the displacement of each source point due to the precise fit using expression (5).
"SURFACE FITTING" PROGRAM
We developed a program named "Surface Fitting" based on the algorithm described above (Popov, 2016). To make the program extremely accessible and mobile, we developed it as a web-based open-source application using the JavaScript language (Flanagan, 2011) together with the THREE.JS library (Dirksen, 2013). As is known, JavaScript is an object-oriented language designed in 1995 to allow non-programmers to extend web sites with client-side executable code. The language is also becoming a general-purpose computing platform, with office applications, browsers, and development environments being developed in JavaScript. Unlike traditional languages such as Java, C++, or C#, JavaScript strives to maximize programming flexibility and is ideal for programming accessible applications, including portable measuring systems. THREE.JS is a high-level scene-graph framework for 3D graphics built on top of WebGL; THREE.JS programs are written in JavaScript because there is no alternative for WebGL. The main idea when creating the program was to provide the user with a simple, accessible tool that depends neither on the hardware nor on the software platform. Figure 7 shows the user interface of the "Surface Fitting" program. Using the program, we can calculate the displacements of all source points needed to fit them to the target surface and visualize the whole scene. Input data in the form of source and target point clouds are loaded into the program automatically. The process involves four consecutive steps: target cloud triangulation, rough fit, manual rotation of the source cloud about the principal axes at a π angle if necessary, and the final precise fit. In spite of the linear nature of the precise fit, the process needs some iterations to converge because of some disparity between the two sets after the rough fit. The final fit of the two sets can be seen in Fig. 8.
Figure 8: The final fit of two data sets.
As we can see, the process meets the minimum-SD criterion very quickly. The final error (about 0.01%) means that the fit precision is about 1-2 mm for an aerial with an overall dimension of about 15 m.
CONCLUSION
This paper has introduced a new two-stage method, based on a rough fit with PCA and a precise fit by SGM, for direct fitting of a point cloud to a polyhedral surface in 3D. The algorithm has been designed to accurately minimize the distance between the source and target data sets. It differs from techniques based on the ICP algorithm, and from least-squares approaches, due to its clear physical meaning. It does not depend on the configuration of the data sets or their connectivity. The method consists of several linear steps and converges very fast. By varying the residual criterion, we can achieve any level of fitting relative error.
The "Surface Fitting" allows calculation of both data sets at the initial and fitting position.It also provides the user with comprehensive and detailed tools of the whole scene visualization.
The only temporary inconvenience in using this method for 3D data set comparison is the ambiguity of transformations in the rough fit when applying PCA. We currently resolve this problem by simple manual correction, consecutively rotating the source data set at a π angle about the principal axes to obtain the local SD minimum. In the future, we plan to make this automatic. Besides, we intend to improve the PCA algorithm by making it less sensitive to data set singularities.
Finally, it should be noted that the algorithms and the "Surface Fitting" program described in this paper can be successfully applied in healthcare, where a soft-body model often needs to be aligned accurately with a set of 3D measurements. Such applications include cancer-tissue detection, hole detection, dental occlusion modelling, artefact recognition, etc.
Figure 3: Two data sets in common principal component system
Figure 4: The rough fit of two data sets in common principal system
Figure 6: Rigid body rotation and transformation
Figure 7: User interface of the "Surface Fitting" program.
Table 1: The matching error of SD against the number of iterations.
"year": 2017,
"sha1": "adb2c2844b46ca3582f219c5ff44b4976f197043",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W4/135/2017/isprs-archives-XLII-2-W4-135-2017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "adb2c2844b46ca3582f219c5ff44b4976f197043",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Soft Magnetic Skin for Continuous Deformation Sensing
Recent progress in soft‐matter sensors has shown improved fabrication techniques, resolution, and range. However, scaling up these sensors into an information‐rich tactile skin remains largely limited by designs that require a corresponding increase in the number of wires to support each new sensing node. To address this, a soft tactile skin that can estimate force and localize contact over a continuous 15 mm2 area with a single integrated circuit and four output wires is introduced. The skin is composed of silicone elastomer loaded with randomly distributed magnetic microparticles. Upon deformation, the magnetic particles change position and orientation with respect to an embedded magnetometer, resulting in a change in the net measured magnetic field. Two experiments are reported to calibrate and estimate both location and force of surface contact. The classification algorithms can localize pressure with an accuracy of >98% on both grid and circle pattern. Regression algorithms can localize pressure to a 3 mm2 area on average. This proof‐of‐concept sensing skin addresses the increasing need for a simple‐to‐fabricate, quick‐to‐integrate, and information‐rich tactile surface for use in robotic manipulation, soft systems, and biomonitoring.
Introduction
Growing interest in wearable technologies, soft robotics, and human-robot interaction has renewed focus on the development of soft sensing. These materials gather information while remaining soft and stretchable by using a wide range of sensing technologies and modalities. Many artificial or electronic skin technologies commonly use resistive or capacitive sensing, but there have also been exciting advancements in the use of piezoelectrics, triboelectricity, optics, and acoustics. [1][2][3][4] Soft sensors can also be engineered that leverage the coupling of strain or pressure with changes in electrical resistance across fluidic channels embedded in the elastomer. [5,6] Composites can also be developed to add sensing properties to naturally soft host substrates, such as gel or elastomer. For example, loading elastomer with micro- or nano-particles of liquid metal or carbon black can markedly improve the host's thermal, electrical, mechanical, or radiofrequency properties. [7] However, these sensing technologies are difficult to scale up to large sensing areas due to corresponding challenges from fabrication, delicate interfaces, and the additional wiring that is required. Not only does integration of these larger systems become more difficult, but the probability of failure at one of the interfaces also increases.
Here, we introduce a tactile skin composed of a fixed 3-axis magnetometer covered with a soft elastomer that is embedded with a dispersion of magnetic microparticles (Figure 1A). As deformation is applied to the surface of the composite, the microparticles are displaced with respect to the static position of the magnetometer. The magnetometer can measure the changes in the surrounding magnetic field, without direct electrical contact, and estimate the location and force of the contact (Figure 1B,C). An approximate model for this sensing mode is described in Section 1.1, Supporting Information. We also leverage morphological computation through the inherent dimension reduction performed in the material itself. Although there are many individual magnetic particles distributed throughout the skin contributing to the signal, we can measure a simple 3-axis output that preserves information about the deformation. [8][9][10] Magnetic and ferromagnetic elastomer composites have been well studied and reported. [11][12][13][14] Previously established relationships between conductivity and applied pressure, [15,16] damping properties and external field, [17] and shear modulus and external field, [18] make them valuable for sensing [19] and actuation. [20] Recent approaches to magnetic tactile sensing measure magnetic flux with arrays of Hall effect sensors and rigid permanent magnets embedded within an elastomer. [21][22][23][24] Magnetic flux can also be measured through inductance changes with giant magnetoimpedance materials. [25] In contrast, our approach uses elastomers embedded with magnetic Nd-Fe-B microparticles that are on the order of ≈200 μm in diameter. The use of microscale magnetic particles reduces the intensity of internal stress concentrations when mechanical load is applied and also allows the potential for sensing skins that are flexible or stretchable. Moreover, they allow for skins that are thin or contain sharp 3D features.
We begin with a brief overview of the fabrication of the soft tactile skin and the method for collecting pressure data over a 15 mm2 square and a 5 mm radial circle. Due to the nonuniform distribution of particles, we opt for data-driven techniques to classify the location and estimate the depth of the contact, which have been shown to be successful for tactile data in many cases. [26][27][28][29][30] The top five classification and regression algorithms are reported and discussed. In particular, we show that we can classify location with 98% accuracy for both a 3 mm resolution 5 × 5 grid and a 5 mm radial circle with three discrete depths. Regression algorithms localize the contact to a 3 mm2 area. In summary, this work introduces a novel approach to address the need for a continuous and soft tactile surface with simple fabrication, quick integration, and adaptable geometry.
Results and Discussion
The skin is made by mixing a commercial silicone with magnetic microparticles and curing the composite under a magnetic field (see Section 4.1 for more details). We programmed a 4-degree-of-freedom (DOF) robotic arm to automate the applied pressure and collect magnetic field change and force over time (see Section 4.3 for more details). Here, we discuss two experiments: a 5 × 5 grid to demonstrate the spatial and force resolution given a fixed indentation depth, and an 8-point circle to test both depth and force resolution given a fixed distance from the magnetometer. The time-series data was represented by a set of static features (see Section 4.2 for more details). Classification and regression algorithm comparisons can be found in Section 1.2, Supporting Information and Figure S3, Supporting Information.
Location Sensing
For the 5 × 5 grid experiment, force and magnetic field changes were collected over a 3 mm resolution 5 × 5 grid up to a 3 mm depth (Figure 2A). We collected 2750 contact samples at these 25 locations using a uniform random distribution. Each class (25 total) has about 100 samples. Several different classification algorithms were able to accurately distinguish between the 25 locations (see Section 1.2, Supporting Information). Here, we present classification results using quadratic discriminant analysis (QDA), which achieved the best performance. In the event of a misclassification, the predicted class is always adjacent to the true location (Figure 2B). Classification accuracies for every location are shown in Figure 2C and are high across all 25 locations.
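As a hedged sketch (not necessarily the authors' exact pipeline), such a QDA location classifier can be set up with scikit-learn as follows; the feature matrix here is a random stand-in for the 21 static features described in the Experimental Section:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: one row of static features per contact; y: grid location label (0..24).
# Random placeholders for the ~2750 recorded contacts.
rng = np.random.default_rng(0)
X = rng.normal(size=(2750, 21))
y = rng.integers(0, 25, size=2750)

qda = QuadraticDiscriminantAnalysis()
scores = cross_val_score(qda, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.3f}")  # near chance on random data
```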
To estimate the location, we transformed the 25 discrete locations into their coordinate locations. For the 5 × 5 grid and linear regression, the x-position has an average error of 1.1 mm and the y-position has an average error of 3.8 mm. We attribute this difference in accuracy to the larger misalignment in the y-axis frame: if the location grid is not perfectly centered over the magnetometer, the y-axis will measure smaller signal changes. In Figure 2D, the x-axis looks well centered, with very similar accuracy across the 25 locations. However, the y-axis accuracy is biased toward the right-hand side (Figure 2E). This can be attributed to a combination of alignment, particle distribution, or chip manufacturing errors. These errors make model-based techniques very difficult to calibrate, further supporting our use of data-driven methods.

Figure 1: B) The composite retains the stretchability and flexibility of the host substrate, and is compatible with stretchable circuitry. C) The magnetic field measured at the magnetometer changes with the deformation of the elastomer; we attribute this to the change in position between each magnetic particle and the fixed magnetometer.
The output estimations near the edge of the sensor tend to have a lower accuracy and higher standard deviation. Due to the 1/d³ magnetic signal-to-distance relationship, the quality of the signal is expected to decrease drastically with distance. At these points along the edge, we believe that the random distribution of particles begins to have a larger effect on the output signal than the applied deformation. This leads to unusual signal changes, and is the main reason why we chose data-driven techniques instead of function fitting. A more detailed example of this type of noise can be found in Section 1.3, Supporting Information.
Location and Depth Sensing
For the 8-point circle experiment, the force-controlled changes in magnetic field were measured for eight different XY locations and three different depths (dZ = 1, 2, or 3 mm) (Figure 3A). We collected 2850 contact samples for these 24 XYZ locations using a uniform random distribution. Each class (24 total) had approximately 110 samples. See Figure S5, Supporting Information, for the experimental set-up.
As in Section 2.1, QDA can be used to classify location based on both XY location and depth. If the predicted class is wrong, it is most commonly an adjacent class (Figure 3B). Misclassification between adjacent locations is much more common than between adjacent depths. The strong correlation between the z-axis magnetic field and pressure can be used to easily distinguish between the three depths. Since all the tested locations are closer to the magnetometer than in the 5 × 5 experiment, we do not see the same noise introduced by the particle aggregates. Classification accuracies for every location are shown in Figure 3C. In general, less applied pressure (depth = 1) leads to a smaller signal change and lower accuracy. For this sample, location 3 at depth 1 had noticeably lower classification accuracy. We attribute this to a combination of misalignment effects leading to smaller signals on the right-hand side, which is also apparent in the larger error at locations 2, 3, and 4 in Figure 3D,E.
The 24 classes were transformed into their true (x,y,z) coordinates for location estimation. For the 8-point circle and linear regression, the x-position has a mean absolute error of 1.2 mm and the y-position has a mean absolute error of 3.4 mm across all the classes. The difference in error between the x and y coordinates implies a small misalignment in this test as well, which is also shown by the varied error by location in Figure 3D,E. The z-position error is much smaller (0.03 mm) due to the larger signal changes associated with 1 mm depth changes (Figure 3F).
Estimating Force
We can also estimate force with our time-series data and a k-nearest neighbors (k-NN) regression. The inputs are the Bx, By, and Bz components of the magnetic field, the internal temperature of the magnetometer Bt, and the load cell output at each time step. For the 5 × 5 grid experiments, the force estimation has a mean error of 0.44 N (Figure 2F), a minimum output of 0.03 N, and a maximum output of 1.9 N. For the 8-point circle, the force estimation has a mean error of 0.25 N (Figure 3G), a minimum output of 0.14 N, and a maximum output of 2.4 N. The z-axis of the magnetic field has the strongest correlation with the applied pressure, making force estimation quite accurate. However, a good signal change is dependent on the amount of deformation; therefore, we expect that if the elastomer had a higher Young's modulus, the force resolution would be much larger. The force range applied during both tests was approximately between 0 and 2.5 N, which was limited by our chosen maximum depth of 3 mm.
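A minimal sketch of such a force regressor with scikit-learn is shown below; the arrays are random placeholders for the per-timestep magnetometer and load-cell recordings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Per-timestep inputs: Bx, By, Bz and chip temperature Bt; target: force (N).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
F = rng.uniform(0.0, 2.5, size=5000)   # illustrative force labels

X_tr, X_te, F_tr, F_te = train_test_split(X, F, test_size=0.2, random_state=0)
knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, F_tr)
mae = np.mean(np.abs(knn.predict(X_te) - F_te))
print(f"mean absolute error: {mae:.2f} N")
```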
Sensor Demonstrations
A simple use case of the tactile skin is demonstrated by using the magnetic elastomer as a 4-key directional game pad. Four acrylic arrows are adhered to the surface to help the user locate where to apply pressure to input a direction command. The four commands can be identified by the changes in the X, Y, and Z components of the magnetic field. No classifier is used for this example; instead, simple thresholding is found to be adequate when the buttons are sufficiently spaced. The positive and negative X and Y changes are mapped to the four arrow keys on the keyboard to move an image around the screen. Example data for each direction are shown in Figure 4A.
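A minimal sketch of such a threshold decoder is given below; the threshold value, units, and sign conventions are illustrative assumptions, not values from the paper:

```python
def decode_direction(dbx, dby, threshold=20.0):
    """Map changes in Bx/By (relative to the resting baseline) to one of
    four arrow-key commands by simple thresholding."""
    if abs(dbx) < threshold and abs(dby) < threshold:
        return None                        # no press detected
    if abs(dbx) >= abs(dby):               # dominant axis wins
        return "right" if dbx > 0 else "left"
    return "up" if dby > 0 else "down"
```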
To demonstrate the speed and accuracy of the 5 × 5 grid classifier, we play a short game of Minesweeper with a robot-controlled cylindrical indenter. Each of the 25 grid locations is mapped to a mouse location on the screen. The duration of the applied pressure indicates whether the user wants a left-click to reveal the square or a right-click to place a flag. Immediately after the signal returns to resting, the QDA classifier is used to predict the location, and then the appropriate actions are performed. Raw data and classification results are shown in Figure 4B.
These demonstrations show that the magnetic skin can function with varied inputs and noise. However, all three inputs are relatively low frequency. As with most elastomer-based sensors, we expect hysteresis to play a larger role in more dynamic applications. Since the sensing mode is dependent on the deformation of the magnetic skin, any mechanical improvements that help the skin keep up with dynamic change would be beneficial.
Conclusions and Future Work
In conclusion, a novel integration of magnetic elastomer with data-driven analysis leads to a continuous interaction surface that can estimate location and depth of indentation. Classification results can distinguish between 25 grid locations in a 15 mm 2 area with >98% accuracy. The algorithms can also classify 24 classes in a constant diameter circle with varied depth. Regression algorithms can localize the contact to a 3 mm 2 area within the 15 mm 2 active sensing area. The magnetic skin leverages morphological computation properties to inherently reduce the dimensionality of the output before analysis, thereby eliminating the need for a dense array of underlying microelectronic chips and wiring.
In the future, we plan to improve the range and resolution for force and contact location by tuning the fabrication process of the magnetic elastomer and the training procedure, and by adding additional magnetometers. In addition, mechanical improvements to the composite can mitigate hysteresis to enable use in more dynamic applications. We also anticipate future applications in soft robotics, medical devices, manipulation, and tactile surfaces. As necessary, the magnetic skin can be molded to conform to the geometry of the host system and be magnetically programmed to respond to prescribed mechanical loads or deformations.
Experimental Section
Fabrication: The pre-polymer and cross-linker were shear mixed (AR-100; Thinky) for 30 s in a 1:1 ratio. The pre-cured elastomer mixture was immediately hand-mixed with magnetic particles (MQP-15-12; Magnequench) in a 1:1 weight ratio. The composite was then poured into a 3D-printed mold and degassed for 5 min. A thin plastic film was placed on top of the mold and excess elastomer was squeezed out. The filled mold was then placed upside down on the surface of a permanent magnet (N48; Applied Magnets). The elastomer was cured at room temperature and removed from the mold in an hour. Finally, the elastomer was adhered (Sil-Poxy; Smooth-On) to the top of the commercial magnetometer board (MLX90393; SparkFun) (Figure S6, Supporting Information). The magnetic skin required no electrical connection to the underlying magnetometer board; it only required proximity so that the magnetic flux changes could be detected. Feature Selection: In this article, we chose to represent the time-series data as a set of representative features. We manually identified 21 features, in lieu of automated feature selection methods, to aid our intuition about the results. The 21 features included the minimum, maximum, mean, standard deviation, median, and sum of the magnetic signal in each direction, and the scalar ratios between the three axes. At the end of each contact, we calculated the features from the data collected over the duration of the contact, and immediately output the classification and regression results. We were most interested in supporting evidence for our claim that deformation of the randomly distributed magnetic particles creates repeatable and separable signals. Classification and regression methods using these features are described in Section 1.2, Supporting Information.
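The 21-feature computation can be sketched as follows; the exact definition of the three inter-axis ratios is our assumption, since the text does not specify it:

```python
import numpy as np

def contact_features(bx, by, bz):
    """21 static features from one contact's magnetic time series:
    min, max, mean, std, median, and sum per axis (18), plus three
    inter-axis ratios of the summed signals (assumed definition)."""
    feats = []
    for sig in (bx, by, bz):
        feats += [sig.min(), sig.max(), sig.mean(),
                  sig.std(), np.median(sig), sig.sum()]
    eps = 1e-9  # guard against division by zero
    feats += [bx.sum() / (by.sum() + eps),
              by.sum() / (bz.sum() + eps),
              bx.sum() / (bz.sum() + eps)]
    return np.array(feats)
```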
Data Collection: Data collection was automated using a desktop 4-DOF robotic arm (uArm Swift Pro; UFactory). The magnetic skin board was mounted onto an acrylic plate with a 500 g load cell (TAL221; SparkFun). During contact, 3-axis magnetometer, load cell, and position data were collected and stored at approximately 50 Hz. The indentation locations were programmed in two patterns: a 5 × 5 grid (depth = 3 mm) and an 8-point circle (depth = 1, 2, or 3 mm). All indentations were performed at the same speed, 10 mm min−1. We used the robot arm kinematics as our ground-truth location. The indenter was a cylindrical rigid punch with a radius of 1.5 mm. In both cases, the location was considered as both a classification and a regression problem to focus on modeling the sensor implementation and supporting the proof-of-concept sensing mode. The magnetometer was located directly underneath location 13 of the 5 × 5 grid and the center of the 8-point circle.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
"year": 2019,
"sha1": "caebdcd655859fc9662749569af8b9c1e6998e5f",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/aisy.201900025",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "6d2d32f85f51ca4b61a75e2708088171ea4583f7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Materials Science",
"Computer Science"
]
} |
Annotating and Extracting Synthesis Process of All-Solid-State Batteries from Scientific Literature
The synthesis process is essential for achieving computational experiment design in the field of inorganic materials chemistry. In this work, we present a novel corpus of the synthesis process for all-solid-state batteries and an automated machine reading system for extracting the synthesis processes buried in the scientific literature. We define the representation of the synthesis processes using flow graphs, and create a corpus from the experimental sections of 243 papers. The automated machine-reading system is developed by a deep learning-based sequence tagger and simple heuristic rule-based relation extractor. Our experimental results demonstrate that the sequence tagger with the optimal setting can detect the entities with a macro-averaged F1 score of 0.826, while the rule-based relation extractor can achieve high performance with a macro-averaged F1 score of 0.887.
Introduction
With the rapid progress in the field of inorganic materials, such as the development of all-solid-state batteries (ASSBs) and solar cells, several materials researchers have noted the importance of reducing the overall discovery and development time by means of computational experiment design using the knowledge in published scientific literature (Agrawal and Choudhary, 2016; Butler et al., 2018; Wei et al., 2019). To achieve this, automated machine reading systems that can comprehensively investigate the synthesis processes buried in the scientific literature are necessary. In the field of organic chemistry, Krallinger et al. (2015) proposed a corpus in which chemical substances, drug names, and their relations are structurally annotated in documents such as papers, patents, and medical documents, while composition names are provided in the abstracts of molecular biology papers. Linguistic resources are available in abundance, such as the GENIA corpus (Kim et al., 2003) of biomedical events on biomedical texts and the annotated corpus (Kulkarni et al., 2018) of liquid-phase experimental processes on biological papers. In biomedical text mining, the detection of semantic relations is actively researched as a central task (Miwa et al., 2012; Scaria et al., 2013; Berant et al., 2014; Rao et al., 2017; Rahul et al., 2017; Björne and Salakoski, 2018). However, the relations in biomedical text mining represent the cause and effect of a physical phenomenon among two or more biochemical reactions, which differs from the procedure of synthesizing materials. In the field of inorganic chemistry, only several corpora have been proposed in recent years. Kononova et al. (2019) constructed a general-purpose corpus of material synthesis for inorganic materials by aligning the phrases extracted by a trained sequence-tagging model. However, this corpus did not include relations between operations, and therefore it was difficult to extract the step-by-step synthesis process. Mysore et al. (2019) created an annotated corpus with relations between operations for synthesis processes of general materials such as solar cells and thermoelectric materials. However, the synthesis processes of ASSBs are hardly included, even though the operations, operation sequences, and conditions differ according to the characteristics of the synthesis process for each material category. In this study, we took the first step towards developing a framework for extracting synthesis processes of ASSBs. We designed our annotation scheme to treat a synthesis process as a synthesis flow graph, and performed annotation on the experimental sections of 243 papers on the synthesis processes of ASSBs. The reliability of our corpus was evaluated by calculating inter-annotator agreements. We also propose an automatic synthesis process extraction framework for our corpus, combining a deep learning-based sequence tagger and a simple heuristic rule-based relation extractor. A web application of our synthesis process extraction framework is available on our project page 1. We hope that our work will aid the challenging domain of scholarly text mining in inorganic materials science. The contributions of our study are summarized as follows: • We designed and built a novel corpus on synthesis processes of ASSBs, named SynthASSBs, which annotates a synthesis process as a flow graph and consists of 243 papers.
• We propose an automatic synthesis process extraction framework combining a deep learning-based sequence tagger and a rule-based relation extractor. The sequence tagger with the best setting detects entities with a macro-averaged F1 score of 0.826, and the rule-based relation extractor achieves high performance with a macro-averaged F1 score of 0.887.
The pure Li4Ti5O12 material, denoted LTO, was obtained from Li2CO3 (99.99%, Aladdin) and anatase TiO2 (99.8%; Aladdin) precursors, mixed, respectively, in a 4:5 molar ratio of Li:Ti. The precursors, dispersed in deionized water, were ball-milled for 4 h at a grinding speed of 350 rpm, and then calcined at 800 °C for 12 h after drying.

Figure 1: Example of a synthesis process. The underlined phrases relate to the material synthesis process.
Annotated Corpus
In this section, we present an overview of our annotation schema and annotated corpus, which we named the SynthASSBs corpus.
Synthesis Graph Representation
We used flow graphs to represent the step-by-step operations with their corresponding materials in the synthesis processes. Flow graphs also allow us to represent links that are not explicitly mentioned in the text. In the inorganic materials field, Kim et al. (2019) proposed the representation of the synthesis process using a flow graph and the definition of annotation labels in experimental paragraphs. In our annotation scheme, we followed their definition, with three improvements: (1) the property of an operation is treated as a single phrase, and not as a combination of numbers and units; (2) each label has been modified to capture the conditions necessary to synthesize an ASSB; and (3) a relation label for coreferent phrases is included to capture anaphoric relations. A flow graph for the ASSB synthesis process is represented by a directed acyclic graph G = (V, E), where V is a set of vertices and E is a set of edges. We provide an example section of a paper (Bai et al., 2016) in Figure 1, and the graph extracted from the sentences in the section in Figure 2.
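As a concrete reading of this representation, the sketch below encodes a fragment of the Figure 1 example as a typed directed graph. The class names, and the direction chosen for the CONDITION edge, are illustrative assumptions, not prescribed by the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Vertex:
    span: str    # surface phrase, e.g. "ball-milled"
    label: str   # e.g. "OPERATION", "MATERIAL-FINAL", "PROPERTY-TIME"

@dataclass(frozen=True)
class Edge:
    source: Vertex
    target: Vertex
    label: str   # "NEXT", "CONDITION", or "COREFERENCE"

@dataclass
class SynthesisGraph:
    """Directed acyclic graph G = (V, E) for one synthesis process."""
    vertices: List[Vertex] = field(default_factory=list)
    edges: List[Edge] = field(default_factory=list)

# A fragment of the Figure 1 example expressed as a small graph.
ball_mill = Vertex("ball-milled", "OPERATION")
calcine = Vertex("calcined", "OPERATION")
time_4h = Vertex("4 h", "PROPERTY-TIME")

g = SynthesisGraph(
    vertices=[ball_mill, calcine, time_4h],
    edges=[
        Edge(ball_mill, calcine, "NEXT"),       # operation order
        Edge(time_4h, ball_mill, "CONDITION"),  # time condition of the milling step
    ],
)
```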
Label Set
The label set consists of vertex labels and edge labels for the synthesis graph representation, described in Sections 2.2.1 and 2.2.2, respectively.
Vertices
The following vertex labels were defined to annotate spans of text, which correspond to vertices in the synthesis graph. The labels represent materials, operations, and properties. For the material labels, we labeled all phrases that represent materials in the text, while operation and property labels were added only to those phrases related to the synthesis process. We segmented the roles of materials into categories. Moreover, we introduced multiple property types for analyzing the structure of the synthesis process.
Figure 2: Example of the synthesis graph generated from Figure 1.
MATERIAL-START represents a raw (starting) material used as input to the synthesis process; for example, Li2CO3 or TiO2. MATERIAL-INTERMEDIUM represents an intermediate material produced during the synthesis process. MATERIAL-FINAL represents the final material (or product) of the material synthesis process; for example, Li4Ti5O12. MATERIAL-SOLVENT is a liquid that is used to dissolve substances and create solutions; for example, deionized water, ethanol, or methanol. MATERIAL-OTHERS represents other materials that are not related to the synthesis process, such as compounds for thin films or catalysts; for example, "... and then purified with activated carbon and acid alumina." OPERATION represents an individual action performed by the experimenters. It is often expressed by verbs; for example, "... were ball-milled for 4 h ..." PROPERTY-TIME represents a time condition associated with an operation; for example, "... were ball-milled for 4 h ..." PROPERTY-TEMP represents a temperature condition associated with an operation; for example, "... and then calcined at 800 °C ..." PROPERTY-ROT indicates a rotational speed condition associated with an operation; for example, "... at a grinding speed of 350 rpm ..." PROPERTY-PRESS represents a pressure condition associated with an operation; for example, "The powder was uniaxially cold pressed at 300 MPa." PROPERTY-ATMOSPHERE represents an atmosphere condition associated with an operation; for example, "... was conducted in Ar atmosphere for 3 h." PROPERTY-OTHERS represents other conditions associated with an operation, or the manufacturer name and purity associated with a material; for example, "MgO (purity 99.999%)" or "... pressed into pellets ..."
Figure 3: Screenshot of the brat interface annotating the synthesis process in Figure 1.
Edges
We defined the following three edge labels, which represent the relations between vertices. CONDITION indicates the conditions under which an operation is performed (for example, the temperature, time, and atmosphere) as well as the properties of a raw material. This label is also used to express the relation between a raw material and its manufacturer name or purity.
NEXT represents the order of an operation sequence and indicates the input or output relations between a material and an operation. COREFERENCE is a link that associates two or more phrases when these phrases refer to the same material.
Annotation Details and Evaluation
In this section, we explain the annotation details, including the text preparation, preprocessing, and annotation settings; thereafter, we present the settings and results of the inter-annotator agreement experiments.
Annotation Details
We constructed a corpus including the experimental sections of 243 papers on material synthesis processes in the following manner. We collected papers on experimental processes from online journals. To limit the annotation target to ASSBs, which are synthesized using the "solid phase method" or "liquid phase method", we set the search queries to identify papers containing "solid electrolyte" or "ionic conductivity", but not containing "poly", "SEI", or "solid electrolyte interphase" in the titles, abstracts, and keywords. Four experts in materials science were involved in selecting the target journals and keywords. Thereafter, we manually selected 243 papers that were confirmed to include the synthesis process in the "Experimental", "Preparation" or "Method" sections, because synthesis processes often appear in these sections. We applied a PDF parser to extract text from the downloaded PDF papers. We extracted the texts of the above sections, manually corrected several typos, and unified certain orthographical variants in composition formulae and quantitative expressions. For example, "°C" was replaced with the token "degC". Finally, we annotated the synthesis graph on the obtained texts. Three annotators, who were master's course students in materials science, were involved in the annotation. Annotator A tagged 77 papers, annotator B tagged 68 papers, and annotator C tagged 98 papers. Finally, one professional in materials science verified the annotations of the three student annotators and corrected the annotation errors. We used the brat annotation toolkit (Stenetorp et al., 2012) for manual annotation. Figure 3 shows the annotation interface of brat.
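A minimal sketch of the kind of orthographic normalization described above; only the degree-Celsius mapping is taken from the text, and the function is a hypothetical stand-in for the preprocessing actually used.

```python
import re

def normalize(text: str) -> str:
    """Unify orthographical variants in quantitative expressions.
    Maps degree-Celsius notation (including the OCR variant with a bullet)
    to the token "degC", as described in the paper."""
    return re.sub(r"[°•]\s*C\b", "degC", text)

print(normalize("calcined at 800 °C for 12 h"))  # -> "calcined at 800 degC for 12 h"
```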
Inter-Annotator Agreement
The agreement calculations were based on whether the spans and labels annotated by the three materials science annotators matched precisely, using 30 randomly selected synthesis processes from the SynthASSBs corpus. We calculated the agreements using Cohen's kappa. For each pair of annotators selected from the three annotators A, B, and C, the agreement score was calculated by regarding the labels identified by one annotator as gold and the labels of the other annotator as the prediction, and the average of the scores in the two directions was determined.
For the vertices, we calculated two agreement scores: the agreement score of the spans and types (All), and the agreement score of the types on the spans that were annotated by both annotators (Type). For the edges, we also calculated two agreement scores on the vertices that were annotated by both annotators. One score was calculated by comparing the existence of edges and their types (All), while the other score was calculated by comparing the types on the edges that were annotated by both annotators (Type). The inter-annotator agreement results are presented in Table 1. We confirmed that the types (Type) of vertices and edges matched almost perfectly among the annotators (both kappa coefficients were over 0.99) and that their spans and types (All) also matched substantially. This demonstrates that the annotation scheme of the vertices and edges was clear when selecting types. However, the kappa coefficients in the All settings were lower than those in the Type settings. This indicates that ambiguity arose when deciding which phrases should be involved in the synthesis process. We leave improvements to the annotation guidelines that reduce this ambiguity for future work.
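As an illustration of the pairwise agreement computation, Cohen's kappa over token-level label sequences can be computed as below. The toy label sequences are hypothetical, and scikit-learn's cohen_kappa_score stands in for whatever implementation was actually used.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Token-level label sequences per annotator over the same text (toy data);
# "O" marks tokens outside any annotated span.
annotations = {
    "A": ["O", "MATERIAL-START", "OPERATION", "PROPERTY-TIME"],
    "B": ["O", "MATERIAL-START", "OPERATION", "O"],
    "C": ["O", "MATERIAL-START", "OPERATION", "PROPERTY-TIME"],
}

# One kappa per annotator pair (kappa is symmetric, so averaging the two
# gold/prediction directions yields the same value).
for a, b in combinations(annotations, 2):
    kappa = cohen_kappa_score(annotations[a], annotations[b])
    print(f"kappa({a}, {b}) = {kappa:.3f}")
```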
Statistics
Several key statistics of the SynthASSBs corpus, such as the number of documents, sentences, tokens, and entities, are summarized in Table 2. The number of vertices or edges per type is indicated in Table 3. For these statistics, we used scispaCy (Neumann et al., 2019) to split sentences, perform tokenization, and extract entities.
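A minimal sketch of the scispaCy preprocessing used for these statistics; "en_core_sci_sm" is the standard small scispaCy model, though the paper does not state which model size was used.

```python
import spacy

# Requires the scispacy package and the "en_core_sci_sm" model package.
nlp = spacy.load("en_core_sci_sm")

doc = nlp("The precursors were ball-milled for 4 h. "
          "The mixture was then calcined at 800 degC for 12 h after drying.")

# Sentence splitting and tokenization, as used for the corpus statistics.
for sent in doc.sents:
    print([token.text for token in sent])
```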
Synthesis Process Extraction
Our framework performed extraction of synthesis processes in a pipeline manner, using two modules: a deep learning-based sequence tagger for extracting the phrases we defined as vertices, and a rule-based relation extractor (RE) for connecting the edges, which are pairs of extracted phrases. As illustrated in Figure 4, our framework first performed sequence tagging (a) to extract the phrases related to the material synthesis process. Thereafter, the relations between entities were extracted by the rule-based RE (b).
Figure 4: Overview of synthesis process extraction. The red phrases and circles indicate terms related to materials, green indicates operations, and yellow indicates properties. The solid and broken arrows represent the NEXT and CONDITION edges, respectively.
Sequence Tagging
We used a bidirectional long short-term memory network with conditional random fields (BiLSTM-CRF) (Huang et al., 2015) as the sequence-tagging model to identify the spans of the vertices. We used six different base representations in the neural network-based sequence tagger: character-level embeddings (CE) (Zhang et al., 2015); byte pair encoding (BPE) (Sennrich et al., 2016); two word embeddings for inorganic materials science, Mat-WE and mat2vec; Mat-ELMo, an embeddings from language models (ELMo) (Peters et al., 2018) model pretrained on materials science texts; and SciBERT, a bidirectional encoder representations from transformers (BERT) model (Devlin et al., 2019) pretrained on biomedical and computer science texts. These representations were fine-tuned during training on the sequence-tagging task.
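Since the tagger was implemented with Flair (see the evaluation settings below), a training run could be sketched as follows, assuming Flair's classic sequence-labeling API. The data paths, column format, and the use of default ELMo weights in place of Mat-ELMo are assumptions, not the authors' code.

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import CharacterEmbeddings, ELMoEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# BIO-tagged files in CoNLL-style column format (hypothetical paths).
corpus = ColumnCorpus("data/synthassbs", {0: "text", 1: "ner"},
                      train_file="train.txt", dev_file="dev.txt",
                      test_file="test.txt")
tag_dictionary = corpus.make_tag_dictionary(tag_type="ner")

# Default ELMo weights stand in here for Mat-ELMo, which would be loaded from
# custom weight files pretrained on materials science texts.
embeddings = StackedEmbeddings([CharacterEmbeddings(), ELMoEmbeddings()])

tagger = SequenceTagger(hidden_size=256, embeddings=embeddings,
                        tag_dictionary=tag_dictionary, tag_type="ner",
                        use_crf=True)  # BiLSTM with a CRF output layer

trainer = ModelTrainer(tagger, corpus)
trainer.train("models/synthassbs-tagger", max_epochs=200)  # 200 epochs, Flair defaults otherwise
```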
Relation Extraction
We developed the following five rules using the training portion of the SynthASSBs corpus. The illustrations following the rule descriptions are used for visualization. The circles in the figures represent sequential tokens; the red, green, yellow, and white circles correspond to MATERIAL, OPERATION, PROPERTY, and other words/phrases, respectively. A bounding box around circles represents a sentence. A solid arrow represents a NEXT edge, while a broken arrow represents a CONDITION edge.
Rule of OPERATION to OPERATION (O-O):
An OPERATION phrase is connected to the next OPERATION phrase in the same sentence or in the following sentences.
Rule of MATERIAL to OPERATION (M-O):
When an OPERATION phrase appears in brackets, a MATERIAL-START or MATERIAL-SOLVENT phrase before the left bracket is connected to the OPERATION. In the example sentence "Samples were prepared from H3BO3, Al2O3, SiO2 and either Li2CO3 (dried at 200 degC)," the OPERATION phrase "dried" written in brackets is connected to its preceding MATERIAL-START phrase "Li2CO3", and not to "H3BO3", "Al2O3", or "SiO2". For other MATERIAL-START or MATERIAL-SOLVENT phrases, we applied the following rules while ignoring the OPERATION phrases in brackets. A MATERIAL-START or MATERIAL-SOLVENT phrase is connected to its closest OPERATION phrase in a sentence. If two candidates exist within the same distance, the preceding candidate is selected. If no OPERATION phrase exists in a sentence, the phrase is connected to the next-closest OPERATION phrase beyond the sentence boundaries.
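The core of the M-O rule, and of the P-O and PO-OM rules below, is a nearest-candidate search with a tie-break in favor of the preceding phrase. The sketch below is an illustrative reading of that logic with a hypothetical Entity structure and token positions; the bracket handling and cross-sentence fallback are omitted.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Entity:
    span: str       # surface phrase, e.g. "Li2CO3"
    label: str      # e.g. "MATERIAL-START" or "OPERATION"
    position: int   # token offset of the entity in the document

def closest_candidate(source: Entity, candidates: List[Entity]) -> Entity:
    """Pick the candidate nearest to `source` by token distance; on a tie,
    prefer the candidate that precedes the source, as the rules specify."""
    def sort_key(cand: Entity):
        distance = abs(cand.position - source.position)
        precedes = cand.position < source.position
        return (distance, not precedes)  # smaller distance wins, then "precedes"
    return min(candidates, key=sort_key)

# Toy example: "Li2CO3 and TiO2 ... were mixed and then calcined ..."
li2co3 = Entity("Li2CO3", "MATERIAL-START", position=0)
mixed = Entity("mixed", "OPERATION", position=5)
calcined = Entity("calcined", "OPERATION", position=8)
assert closest_candidate(li2co3, [mixed, calcined]).span == "mixed"
```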
Rule of OPERATION to MATERIAL (O-M):
An OPERATION phrase that appears at the end of the operation sequence is connected to all MATERIAL-FINAL phrases in the text.
Rule of PROPERTY-OTHERS to OPERATION or MATERIAL (PO-OM):
When a PROPERTY-OTHERS phrase appears in brackets, the phrase is connected to the closest preceding MATERIAL-START phrase. In the example phrase "TiO2, GeO2 and NH4H2PO4 (purity 99.999%)," "purity 99.999%" is connected to its closest preceding MATERIAL-START phrase, namely "NH4H2PO4", and not "TiO2" or "GeO2". Otherwise, a PROPERTY-OTHERS phrase is connected to the closest phrase of MATERIAL-START, MATERIAL-FINAL, MATERIAL-INTERMEDIUM, MATERIAL-SOLVENT, MATERIAL-OTHERS, or OPERATION. If two candidates exist at the same distance, the preceding candidate is selected.
Figure 8: Illustration of PO-OM.
Rule of PROPERTY to OPERATION (P-O):
A PROPERTY-TIME, PROPERTY-TEMP, PROPERTY-ROT, PROPERTY-PRESS, or PROPERTY-ATMOSPHERE (that is, properties other than PROPERTY-OTHERS) phrase is connected to its closest previous OPERATION phrase in the sentence or before it.
Evaluation Settings
We evaluated the sequence tagger and rule-based RE individually. The sequence tagger was implemented using Flair (Akbik et al., 2019), which is a multi-lingual, neural sequence-labeling framework for state-of-the-art natural language processing. When training the sequence tagger, we set the number of training epochs to 200, and used the default hyper-parameters of Flair. The sequence tagger was evaluated using two settings of type sets.
In the first setting, we extracted three coarse-grained distinct types of vertices in the flow graph: the MATERIAL, OPERATION, and PROPERTY vertices.
In the second setting, we extracted the fine-grained types of vertices. We divided the SynthASSBs corpus into three subsets: 145 sections for training, 49 for development, and 49 for testing. We used the F1 score as the primary evaluation metric. We report the macro-averaged F1 score of the three coarse-grained types (ALL) for the first setting, and the micro-averaged F1 scores for the three coarse-grained types (MATERIAL, OPERATION, and PROPERTY) together with the macro-averaged F1 score of these three types (ALL) for the second setting. We also plot the changes in F1 score as the training set is increased in increments of 5% to answer the question of whether the corpus size is large enough to train the sequence tagger. The evaluation was performed on the fine-grained types and the scores were calculated on the development set. We show the micro-averaged F1 scores for the three coarse-grained types and the macro-averaged F1 score of the three types (ALL) in the plot.
Table 4: F1 scores of sequence-labeling models with different base representations on the development dataset. Macro-averaged F1 scores were calculated using all three coarse-grained types (ALL). The highest and second-highest for each metric are indicated in bold and underline, respectively.
For the rule-based RE, we used the 145 sections (used for training in sequence tagging) to design the rules, whose details are given in Sec. 4.2., and the 98 sections (used for development and testing in sequence tagging) to evaluate the rules. To evaluate the RE, an F1 score based on an exact match was used as the primary evaluation metric. We used COREFERENCE relations in the evaluation: phrase pairs with COREFERENCE relations were treated as the same phrase in the RE evaluation. The performance of the rule-based RE was further analyzed in detail by evaluating the efficiency of the fine-grained labels in the entities as an ablation study, and by demonstrating the accuracy and coverage of each rule.
Sequence Tagging Results
Table 4 summarizes the sequence-labeling results for extracting the three coarse-grained vertex types over the six base representations described in Sec. 4.1. The results show reasonably high performance: Mat-ELMo achieved the highest performance, with an F1 score of 0.917 on MATERIAL and 0.826 on ALL, while SciBERT achieved the best score on OPERATION. The performance of the sequence tagger with Mat-ELMo, evaluated on the fine-grained types, is presented in Table 5. Among the MATERIAL types, MATERIAL-START achieved the highest F1 score of 0.887. The F1 score of OPERATION was 0.821, which was higher than the average. Among the PROPERTY types, PROPERTY-TIME achieved the highest F1 score of 0.928. However, the F1 score of MATERIAL-INTERMEDIUM was 0.105 in the sequence tagging. This may be because it is difficult to extract MATERIAL-INTERMEDIUM without understanding the whole structure of the synthesis process. Changes in F1 score according to training set size are presented in Figure 10. We observe that the curves of ALL remain almost flat after around 20% of the training set is used. Therefore, we conclude that the size of the SynthASSBs corpus is large enough to train the sequence tagger. In detail, for MATERIAL, the F1 score gradually increases as the training set size increases because material phrases often include unknown terms. OPERATION's performance is flat after 5% of the training set is used because only a limited set of OPERATION verbs is used in the synthesis process. Because PROPERTY is also in a steady state when 20% or more of the training set is used, it seems that the properties are described in a regular manner.
Relation Extraction Results
Table 6 displays the results of the rule-based model, as well as the rule-based RE results obtained in the ablation tests. The high performance, with a macro-averaged F1 score of 0.887, shows the effectiveness of the rules. To confirm the effectiveness of the fine-grained types or sub-labels, we compared the F1 scores in three settings. In the first setting, we extracted the relations without using material sub-labels (-MATERIAL-*), by applying the rule of M-O to all of the MATERIAL types and ignoring the rule of O-M. In the second setting, we extracted relations without using PROPERTY sub-labels (-PROPERTY-*), by applying the rule of PO-OM to all of the PROPERTY types and ignoring the rule of P-O. The final setting was without either MATERIAL or PROPERTY sub-labels (-both sub-labels). According to the ablation tests, the F1 scores were improved by 7.8% on CONDITION and 11.1% on NEXT when applying the sub-label rules.
Table 6: F1 scores of the rule-based system and ablation test results. Macro-averaged F1 scores were calculated using CONDITION and NEXT (ALL).
To analyze the effects of the rules in further detail, the coverage and accuracy for each rule were determined, and these are presented in Table 7. By comparing the rule coverage and accuracy, it could be observed that the rules of PO-OM and P-O, which exhibited wide coverage and high accuracy (over 25% and 85%, respectively), contributed significantly to the extraction performance. This indicates that the rules for extracting the relation between the PROPERTY and MATERIAL or OPERATION successfully mimicked the manner of reading a paper. Although the coverage of the rule O-M was extremely low and the accuracy was relatively low (4.6% and 48.9%, respectively), this rule was essential for constructing the synthesis graph and could not be omitted.
Qualitative Evaluation
We present a thorough evaluation on real-world scientific literature to demonstrate the efficacy of our framework. A prediction obtained by our framework and the corresponding synthesis graph are shown in Figure 11 and Figure 12, respectively. In this example, our framework could extract the phrases related to material synthesis almost without error. In particular, relations across sentences were extracted without problems; for example, our framework created a NEXT edge between "mixed" in the first sentence and "dispersed" in the second sentence. Moreover, our framework succeeded in identifying the type of MATERIAL even when the material is written in an abbreviated form; for example, our framework could detect "Li4Ti5O12" in the first sentence as MATERIAL-FINAL. However, the label type was wrong for "anatase" in the first sentence, and the OPERATION connection between "calcined" and "drying" in the second line differed from that labeled by the annotator. This is because our rule-based RE could not understand the meaning of "after drying".
Error Analysis
We analyzed 135 errors in the sequence-tagging results. Over-detection errors constituted 49 cases; these were often PROPERTY-OTHERS types that were not directly related to the synthesis process, for example, vessel size or thickness, and milling machine properties. A further 49 entities were missed; these were often PROPERTY types and were missed due to rare adverbs, adjectives, or units, for example, "naturally", "constant", "mm-thick", and "micrometers".
In the RE, we identified two major problems when we analyzed the 129 errors. The first problem was caused by the definition of the distance, which used the number of words and ignored syntactic structures. For example, in the sentence "LiNO3 were weighed according to the stoichiometry of the Li3xLa2/3-xTiO3 and dissolved in ethylene," our distance-based rule predicted that "Li3xLa2/3-xTiO3" qualifies "dissolved" instead of "weighed". This type of problem included 73 errors. The second problem was complex operation sequences. Where two or more material synthesis processes were described in one document, there were cases in which a synthesis process indicated at the beginning was omitted in the second and subsequent explanations. In such cases, branching and merging of synthesis processes occurred. Our rules assumed that the operation sequence was described sequentially, so they could not identify these processes. This type of complex operation sequence caused 28 errors. One means of addressing the above problems is to incorporate additional rules; however, it is not realistic to create more rules manually, because the descriptions are sometimes ambiguous, without an understanding of the contents. We are considering developing a deep learning-based extractor that can take syntactic structures into account.
Related Work
Process extraction from procedure texts has been studied in a wide range of fields. Such studies include an effort to extract liquid mixing procedures from text (Long et al., 2016), an annotated corpus of photosynthesis and formation erosion processes (Dalvi et al., 2018), the extraction of response processes from guidance texts at the time of disaster occurrence (Guo et al., 2018), and several attempts to structure and extract a series of cooking-related actions, such as baking and boiling, from cooking recipe sentences (Mori et al., 2014; Kiddon et al., 2015; Maeta et al., 2015; Abend et al., 2015). Numerous language resources exist in the organic chemistry field (Kim et al., 2003; Krallinger et al., 2015; Tsubaki et al., 2017; Kulkarni et al., 2018; Tanaka et al., 2018), which have been annotated with the experimental processes that appear in papers. Moreover, attempts have been made to extract processes by applying event extraction methods to realize machine-based text reading for biomedical papers (Miwa et al., 2012; Scaria et al., 2013; Berant et al., 2014; Rao et al., 2017; Rahul et al., 2017; Björne and Salakoski, 2018). In the inorganic chemistry field, several corpora are available for general-purpose materials (Mysore et al., 2019; Kononova et al., 2019), and some studies are underway to extract the synthesis process from papers (Mysore et al., 2017; Tamari et al., 2019); however, no corpus or extraction system exists for the synthesis of ASSBs. Therefore, we have presented a domain-specific corpus of the synthesis process for ASSBs, and an automated machine-reading system for extracting the synthesis processes buried in the scientific literature.
Figure 11: Synthesis process extraction results from the text in Figure 1.
Figure 12: Synthesis graph of the extracted synthesis process in Figure 11.
Conclusion
This study has addressed the problem of the lack of labeled data, which is a major bottleneck in developing ASSBs. We constructed the novel SynthASSBs corpus, consisting of the experimental sections of 243 papers. The corpus annotates synthesis graphs that represent the synthesis process of ASSBs in text. Moreover, we proposed an automatic synthesis process extraction framework using our corpus by combining a deep learning-based sequence tagger and rule-based relation extractor that mimics the experience in human reading. As a result, the sequence tagger with the best setting can detect the entities with a macro-averaged F1 score of 0.826. Furthermore, the rule-based RE demonstrates high performance with a macro-averaged F1 score of 0.887.
In future work, we will develop a deep learning-based relation extractor that incorporates syntactic information into the model to improve the extraction performance. We will also apply our extraction framework to existing papers and, using the extracted abundant knowledge, build a computational synthesis design framework for discovering novel materials.
"year": 2020,
"sha1": "1ceec1c790a4e3e18e19cdd92f57182f078abd11",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "e285ca4310d6b9f31bc54f14889f8ff88849bc2c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Aid effectiveness: when aid spurs investment
Purpose – This paper aims to analyze the effectiveness of aid in stimulating investment using different measures of aid and up-to-date panel time-series techniques. This study controls for endogeneity by using dynamic ordinary least squares (DOLS) and minimizes the risk of running a spurious long-run relationship by using series that are cointegrated. This paper finds evidence that aid promotes investment in countries with good institutional quality and gains interesting insights into the influence of country characteristics and the amount of aid received. Aid is ineffective in countries with unfavorable country characteristics such as a colonial past, being landlocked and having large distances to markets. Aid can boost investment in regions that receive high (above-median) amounts of aid, such as Africa and the Middle East, but not in regions that receive low amounts of aid. Investment-targeted aid is effective, but non-investment-related aid can also enhance investment.
Design/methodology/approach – Regressions on the aid-investment nexus are based on either a rather simple (115 countries) or an extended/augmented investment model (91 countries). The data covers the period 1973-2011, or 1985-2011 if institutional quality is included. This study estimates the relationship between aid and investment by applying the DOLS/dynamic feasible generalized least squares technique, which is based on a long-run relationship of the regression variables (cointegration). In this framework, this paper incorporates country-fixed effects, controls for endogeneity and autocorrelation, and takes heteroscedasticity and cross-country correlation of the residuals into account.
Findings – This study finds empirical evidence that aid promotes investment in countries with good institutional quality and gains interesting insights on the role played by country characteristics and the amount of aid received. Aid is ineffective in countries with unfavorable country characteristics such as a colonial past, being landlocked and being distant from markets. Aid can boost investment in regions that receive high (above-median) amounts of aid, such as Africa and the Middle East. Investment-targeted aid is effective, but non-investment-related aid is also able to enhance investment.
Research limitations/implications – The study looks at the investment-to-gross domestic product (GDP) ratio (including domestic investment and foreign direct investment (FDI)) and hence does not disentangle these factors. It looks at the net effect (positive and negative impact together) and therefore does not allow identifying the direct crowding-out impact of aid. Of course, if this study finds that aid has a negative impact on investment, it is clear that aid must have crowded out domestic investment or FDI or both.
Practical implications – ... measures that improve the effectiveness of aid. Also, it is relevant that the relative amount of aid received (aid-to-GDP ratio) must be quite high so that aid can increase investment.
Social implications – This study sees that the least developed, low-income countries and (in terms of regions) the sub-Saharan African countries benefit from aid. This is very desirable. This paper further sees that higher relative amounts of aid do help more and that it is helpful to care about better institutional quality in developing countries. Hence, this study provides some support for the desirability of aid.
Originality/value – The paper was done very diligently, and this study is very confident that the results are robust.
This paper is also confident that this study has examined the long-run nexus between aid and investment, which is of special importance. The estimation technique used is original, as it combines regular DOLS with corrections for autocorrelation and cross-section dependence.
Introduction
In recent decades, the improvement of aid effectiveness has been a priority for development agencies, which has led to a dramatic increase in the number of studies on aid effectiveness and its impact on growth. Research from the past 40 years suggests that development aid influences economic growth in a small but positive and statistically significant way. Mekasha and Tarp (2013) support the hypothesis of a positive impact of aid on growth.
However, there are still open questions concerning the growth-generating and growth-deterring impact of aid. A functioning aid-investment channel is critical, as the aid-productivity, aid-domestic savings and aid-real exchange rate transmission channels can counteract growth. Empirical evidence suggests that aid significantly reduces productivity (Alvi and Senbeta, 2012), significantly lowers domestic savings (Bowles, 1987) and leads to a significant appreciation of the real exchange rate that impedes growth by suppressing the export and the import substitution sectors of an economy (Rajan and Subramanian, 2011).
The aid-investment link requires further empirical analysis to identify the drivers of investment (such as domestic savings, foreign savings in the form of international loans, macroeconomic conditions, institutional quality, risk and aid) and the specific role of aid in enhancing investment. In an environment of low domestic saving rates, foreign aid can help bridge the saving-investment gap in the form of external savings. Aid is especially crucial in the most disadvantaged developing countries that attract neither large amounts of international portfolio investment nor international loans (Lucas, 1990). Most importantly, foreign direct investment (FDI) is also insufficient in these countries as, according to theory, unfavorable macroeconomic and institutional factors (e.g. high interest rates, high risk, high volatility, low trade openness and poor institutional quality) influence the investment decisions of foreign investors and can discourage FDI, which is part of domestic investment.
While boosting investment is clearly not the only objective of aid, it is an important instrument for breaking the vicious circle of underdevelopment (Solow, 1956;Swan, 1956;Romer, 1986). Depending on the theory of growth, investment is crucial not only to achieve higher short-to-medium-run growth (Solow's growth model) but also to achieve long-term growth (endogenous growth models). Levine and Renelt (1992) have identified investment as a robust driver of growth in cross-country growth regressions.
In the empirical literature on the aid-investment link, a variety of arguments have been cited to support either a positive (Donaubauer et al., 2016;Garriga and Phillips, 2014;Arazmuradov, 2012) or a negative (Boone, 1996;Herzer and Grimm, 2012;Selaya and Rytter Sunesen, 2012) impact of foreign aid on investment.
The contribution of this paper is to complement previous analyses of the aid-investment nexus with novel econometric methods that deal with relevant econometric issues (spuriousness, endogeneity, omitted variables), intending to find non-spurious, unbiased and robust estimates of the actual size of the aid coefficients. To pay tribute to the economic relevance of aid and other determinants of investment (measured as gross capital formation as a share of gross domestic product (GDP)), we evaluate the economic importance of the effect of aid on investment for different country groups and in different environmental settings. To complete the analysis, we also assess the impact of different types of development aid (classified by donor, financing instrument and sector) on investment, bearing in mind that aid is targeted at investment to varying degrees. The estimations of this study are based on a sample of 91 to 115 countries covering the period 1973-2011.
To summarize, we find, on average, a positive and significant impact of aid on investment. This finding holds in particular for sub-samples of developing countries such as Africa and the Middle East. All previously listed country agglomerations receive not only above-median (more than 2%) but also above-average (more than 3.8%) amounts of aid (in terms of the aid-to-recipient-GDP ratio). However, this finding does not hold for Asia and Latin America, where the amount of aid received is below the median. In addition, the impact of aid is found to depend on unchangeable (time-invariant) circumstances such as being a small island, being landlocked, having large distances to markets and the historical past; the empirical results show that aid is not effective in countries with such unfavorable time-invariant country characteristics. We find aid to spur investment in countries that obtain relatively substantial amounts of aid and in countries with good institutional quality. An additional finding is that both purely investment-related aid and non-investment-related aid are able to increase investment. An exception is aid targeted at improving institutions, which does not enhance investment.
The rest of the paper is structured as follows: In Section 2, we present a literature review on aid effectiveness and identify research gaps. In Section 3, we model the aid-investment relationship and explain our econometric estimation strategy. In Section 4, we address the most important conceptual issues when modeling the aid-investment link. Finally, Section 5 concludes.
Literature review and research gaps
Aid effectiveness has been studied quite intensively in recent decades but some key issues still require clarification: which type of aid is most effective; the circumstances (macroeconomic, political, institutional, inherited conditions) under which aid is most effective; and whether there is a minimum amount of aid necessary to be effective and promote development.
The majority of researchers have investigated aid effectiveness either at the macro or the micro level, and the contrast in the empirical evidence is known as the micro-macro paradox. While the micro-evidence has leaned toward a positive effect of aid, the macro-evidence has been less clear-cut and has remained more subject to controversy. From a theoretical perspective, aid should have a positive and significant impact on investment and a positive overall effect on economic growth. However, whether aid has a positive impact on investment, and the extent of its effect, is fiercely debated. Two meta-analyses showed that aid has a positive impact on investment (Hansen and Tarp, 2000; Doucouliagos and Paldam, 2006), while Herzer and Grimm (2012) find a negative and significant impact of aid on private investment in a panel data study. Arazmuradov (2012), Garriga and Phillips (2014) and Donaubauer et al. (2016) study the impact of foreign aid on FDI and show that aid, or at least specific types of aid such as aid for economic infrastructure, increases FDI. In contrast, Selaya and Rytter Sunesen (2012) find that the effect of aid is ambiguous: there is a positive impact when aid is invested in human capital and public infrastructure, but aid can crowd out private investment.
Other important transmission channels of aid on economic growth at the macro level relate to: the impact of aid on domestic savings (empirical studies find a negative and significant effect of aid and, hence, a negative effect on growth); the impact of aid on the real exchange rate (empirical studies show a negative growth effect via a real appreciation of the exchange rate); and the impact of aid on total factor productivity (TFP) (empirical evidence finds that aid decreases TFP) (Alvi and Senbeta, 2012). These transmission channels have been investigated by others and lie beyond the scope of this study (Gomanee et al., 2005).
As the literature on aid effectiveness has grown, researchers have started to group it into different generations of studies (Hansen and Tarp, 2000; Arndt et al., 2010, 2016; Mekasha and Tarp, 2013). The earlier studies (first generation) investigate the link between aid and domestic savings to finance investment and focus on foreign aid as part of external savings and a complement to domestic savings. These studies are based on the theoretical models of Rosenstein-Rodan and of Chenery and Strout, who identify structural disequilibria as a source of underdevelopment and argue in favor of planning economic development to overcome these structural deficits through high levels of investment spending. Rosenstein-Rodan (1961) emphasizes that underdeveloped countries require a big push, i.e. large amounts of investment, to embark on the path of economic development from their present state of backwardness. Chenery and Strout (1966) view foreign aid as a means to reach a certain output target by filling two gaps (the investment-savings gap and the trade gap). They used this two-gap model to compute the amount of foreign aid that is necessary to promote development. However, the simplistic view that foreign aid adds to the availability of external resources without substituting domestic savings has been challenged in empirical studies showing that foreign aid reduces the incentive to save in aid-receiving countries (Weisskopf, 1972). The second generation of aid effectiveness studies focuses on the aid-investment link. These studies mostly build on the Harrod-Domar model. Investment and the productivity of capital, which is considered constant, determine output growth. In this model, all savings, including aid as foreign savings, are used to finance investment.
Empirical studies show that aid can increase total savings, but not by as much as the aid flow, suggesting that a considerable part of the aid is consumed rather than invested [1] (Hansen and Tarp, 2000, referring to Rahman, 1968; Weisskopf, 1972; Griffin and Enos, 1970; and Gupta, 1970). A problem with these studies is that the endogeneity of aid is not addressed and the role of physical capital accumulation in the growth process is overemphasized. More recent evidence shows that aid has a negative impact on domestic savings because of substitution (Nowak-Lehmann et al., 2012). In line with this, Boone (1996) finds that aid does not significantly increase investment, but it does increase consumption and expand the public sector. Mosley (1987) adds that aid can also change the structure of government spending depending on the government's preferences for consumption and investment.
The third generation of aid effectiveness studies addresses the aggregate effect of aid, namely its direct impact on per capita income or its growth, using the neoclassical Solow growth model (Doucouliagos and Paldam, 2009). The econometric techniques became more sophisticated and more scrutiny was placed on the country context of the aid recipient. Studies found diminishing returns to aid (Hansen and Tarp, 2001) and aid effectiveness to be dependent on features of the recipient countries such as good economic policy (Burnside and Dollar, 2000), the share of a country's area that lies in the tropics (Dalgaard et al., 2004), its level of democratization (Svensson, 1999), institutional quality (Burnside and Dollar, 2004), political stability (Chauvet and Guillaumont, 2004), vulnerability to external shocks (Guillaumont and Chauvet, 2001), absorptive capacity (Chauvet and Guillaumont, 2004) and its national debt (Bjerg et al., 2011). All in all, the third-generation studies have produced mixed results. Some studies found a positive and significant impact of aid (Arndt et al., 2015), others found an insignificant impact (Burnside and Dollar, 2000; Easterly et al., 2004; Rajan and Subramanian, 2008; Doucouliagos and Paldam, 2009, 2013; Tezanos et al., 2013) and some studies even found a negative and significant impact of aid when institutional quality is low (Bräutigam and Knack, 2004). The third-generation studies that analyze aid's impact on growth essentially indicate that it may be more insightful to focus on specific transmission channels of aid, such as the impact of aid on investment, domestic savings, human capital, productivity or the real exchange rate.
The fourth generation of aid effectiveness studies moves away from studying the aggregate impact of aid to investigating the effectiveness of different types of aid (Rajan and Subramanian, 2008;Tezanos et al., 2013;Donaubauer et al., 2016).
Our study falls into the category of macro analyses of aid effectiveness examining the direct link between aid and investment. It borrows aspects from the second-generation models by studying a specific channel of effectiveness, namely the aid-investment nexus, and from the third-generation models by treating aid as endogenous and considering geographic and environmental settings. Finally, we integrate approaches from the fourth-generation models by examining the effectiveness of distinct types of aid.
It has to be pointed out that the varied outcomes of previous studies, from (significantly) negative to (significantly) positive aid coefficients, were sensitive to small changes in the data set, methodology and model specification (Easterly et al., 2004; Rajan and Subramanian, 2008; Doucouliagos and Paldam, 2009; Roodman, 2007; Clemens et al., 2012). However, there are also research gaps that may explain the diverging regression results of previous studies and that we try to fill: econometric issues such as omitted variable bias, endogeneity and the estimation of spurious regressions have not been sufficiently addressed; and settings in which aid unequivocally promotes investment have been omitted.
In contrast to previous research, we aim to identify relevant investment drivers and relevant scenarios in which aid can spur investment. To this end, we augment the investment model with various investment drivers that have been proposed in the literature. We then check the statistical significance of these potential drivers using the dynamic feasible generalized least squares (DFGLS) technique. After identifying statistically significant drivers of investment, we expose the regression model to various settings that have proven to be relevant, to capture discernible differences in aid effectiveness. Additionally, we investigate whether unchangeable country characteristics such as geography, distance and historical past are responsible for different responses to development aid. A further dimension we study is the aid amount in relation to a recipient country's GDP; we check the aid-investment link in scenarios of high and low inflows of aid.
The econometric model
The standard [2] theoretical literature offers few and limited approaches (Keynesian and neoclassical; post-Keynesian and modern neoclassical) to explain investment behavior. The first is the Harrod-Domar/Samuelson accelerator model of investment. It states that increases in income accelerate capital accumulation and decreases accelerate capital depletion. Investment depends on the capital-to-output ratio and (expected) changes in income. Investment is determined by expectations of output growth and investor mood (Westerhoff, 2006). The second is the (modern) neoclassical approach (Fisher, 1930; Jorgenson, 1963), which puts less emphasis on expectations and risk but more emphasis on the user costs of capital, which are mainly determined by the real interest rate, the relative price of capital and a depreciation allowance (Alexiou, 2009).
Both models have been subject to an empirical test by Alexiou (2009) but performed very poorly, pointing to the need to specify the investment environment more adequately.
We constructed a core model of investment (combining characteristics of both the accelerator and the neoclassical model), similar to Alexiou's, assuming that under well-functioning capital markets and rational decision-making, investment is determined by the profitability of investment, i.e. by the marginal product of capital, the real cost of capital and GDP growth. The real cost of capital is a function of the relative price of capital, the real interest rate, which also expresses the risk involved, and the depreciation rate (Mankiw, 2009). Our analysis shows that the above-mentioned direct determinants of investment do not explain investment, which is in line with the findings of Alexiou (2009), even though the empirical tests differed. We followed a cointegration approach, whereas Alexiou scrutinized the t-values of the above-mentioned direct investment determinants.
Together with Rama (1993), Chhibber and Dailami (1993), Dollar and Easterly (1999) and Agénor and Montiel (2008), we argue that other (indirect) investment determinants such as macroeconomic (i.e. debt-ratio, trade openness) and institutional environments should be included in an empirical investment model. This may be a more effective approach to model investment behavior given that the investment environment influences expectations. In terms of foreign aid, it should be treated as an international transfer of income if it is a grant. If it is a concessional loan (with a grant element of at least 25%), however, it reduces the cost of capital and has a lower interest rate. To understand its impact on investment, we include foreign aid and its sub-components as separate explanatory variables and as direct investment determinants.
We use the underlying data (see online appendix [3]) to identify a systematic long-run relationship, i.e. cointegration, in the aid-investment nexus. To do so, we apply panel time-series tests, which verify the existence of non-stationary series and stationary residuals (cointegration) in the investment models. We use gross fixed capital formation (the investment-to-GDP share) as the dependent variable, which captures overall investment at the national level but does not allow us to distinguish between domestic and foreign sources of investment. The results do not support the hypothesis that gross fixed capital formation is (exclusively) determined by the variables of the theoretical core model (real interest rate, risk, GDP growth). Hence, the determinants of investment in the core model are insufficient for explaining a long-lasting, systematic investment relationship in our sample of countries. In contrast, the variables of our extended investment model (aid, domestic savings, trade openness, etc.) seem to be systematically related to the investment-to-GDP ratio, so we use this model for our estimations.
We proxy investment, our dependent variable, by gross capital formation as a share of GDP, which comprises items such as machinery, plants and office buildings as well as inventory investment. Gross capital formation comes from both domestic and foreign sources.
Foreign aid as a driver of investment is the variable of interest measured by the net ODA as a ratio of GDP. Further, we use its composition, namely, grants or (net) loans, bilateral or multilateral aid. We also include aid spent on investment-related purposes to test whether aid can stimulate investment. Our expectation is that distinct types of foreign aid will vary in their effectiveness.
We also include the following macroeconomic variables: Domestic savings are essential for financing domestic investment and we expect to see a positive impact on investment (Ang, 2007; Feldstein and Horioka, 1980; Narayan, 2005). An increase in external indebtedness is expected to have a negative impact on investment, as amortization and interest payments reduce the financial means that could be used for investment (Alesina and Tabellini, 1989; Sachs, 1989; Savvides, 1992; Deshpande, 1997; Perkins et al., 2013). The openness of the economy also influences investment and we expect a positive impact; it indicates an extension of the market size, which enables a higher degree of specialization and higher productivity (Alesina et al., 2005; Perkins et al., 2013; Dowrick and Golley, 2004). Greater economic growth is considered positive for investment by improving the investment environment and a positive sign is expected (Attanasio et al., 2000; Binder and Bröck, 2011).
These variables are included in the empirical investment model [4] as:

$$ invy_{it} = \alpha_i + \beta\, aidy_{it} + \gamma\, domsy_{it} + \delta\, debty_{it} + \varepsilon\, tradey_{it} + \zeta\, growth_{it} + u_{it} \qquad (1) $$

where the dependent variable $invy_{it}$ is the investment-to-GDP ratio in country $i$ at time $t$; $aidy_{it}$ denotes the aid-to-GDP ratio; $domsy_{it}$ is the domestic savings-to-GDP ratio; $debty_{it}$ is the external debt stock-to-GNI ratio; and $tradey_{it}$ is the sum of imports and exports divided by GDP, which denotes the trade openness of a country. $growth_{it}$ represents economic growth measured as the annual change of GDP. $u_{it}$ is the error term that is iid with mean zero and follows an I(0) process. $\alpha_i$ are country-fixed effects. $\beta$ is the coefficient of interest, which depicts the average effect (in our sample of countries) of a one-unit increase of the aid-to-GDP ratio (or its subcomponents) on the investment-to-GDP ratio. All increases are in terms of percentage points. $\gamma$, $\delta$, $\varepsilon$ and $\zeta$ indicate the impact of an increase in the domestic savings ratio, external indebtedness, trade openness and economic growth, respectively. We find empirical evidence that, on average, aid, domestic savings, trade openness, external debt, GDP growth, institutions and, to a lesser degree, TFP are significant determinants of investment and could serve as relevant investment settings. Thus, we test our investment model in high and low settings of trade openness, institutional quality, GDP growth, TFP performance and external indebtedness, respectively.
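For concreteness, a fixed-effects estimation of equation (1) could be sketched as follows with the linearmodels package. The data file, column names and the clustered covariance choice are illustrative assumptions, and the endogeneity and FGLS corrections described next are not yet applied here.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical long-format panel: one row per (country, year) observation.
df = pd.read_csv("panel.csv").set_index(["country", "year"])

# invy ~ aidy + domsy + debty + tradey + growth with country-fixed effects (alpha_i)
model = PanelOLS(dependent=df["invy"],
                 exog=df[["aidy", "domsy", "debty", "tradey", "growth"]],
                 entity_effects=True)
result = model.fit(cov_type="clustered", cluster_entity=True)

print(result.params["aidy"])  # beta, the coefficient of interest
```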
Given the econometric limitations of previous studies on the aid-investment link, we propose applying time-series and cointegration tests before running regressions. We complement the dynamic ordinary least squares (DOLS) or "leads and lags" approach, which controls for all types of endogeneity using internal instruments (Wooldridge, 2009; Stock and Watson, 2012; Herzer and Grimm, 2012), with an FGLS procedure to mitigate the omitted variable problem (Kmenta, 1986; Mukherjee et al., 1998; Greene, 2000). By combining the two into the Dynamic FGLS (DFGLS) estimation procedure, we avoid spurious regressions and biased regression coefficients. Omitted variable bias is mitigated by a Prais-Winsten transformation of the series (an FGLS procedure).
We control for the endogeneity of all right-hand side variables by applying the DOLS approach (Wooldridge, 2009), which augments equation (1) with leads and lags of the first-differenced regressors:

$$ invy_{it} = \alpha_i + \mathbf{x}_{it}'\boldsymbol{\theta} + \sum_{j=-p}^{p} \Delta \mathbf{x}_{i,t+j}'\boldsymbol{\phi}_j + v_{it} $$

where $\mathbf{x}_{it}$ collects the regressors of equation (1). DOLS removes the endogeneity by moving the endogenous parts of the regressors into the error term $u_{it}$, which takes the following form:

$$ u_{it} = \sum_{j=-p}^{p} \Delta \mathbf{x}_{i,t+j}'\boldsymbol{\phi}_j + v_{it} $$

where $v_{it}$ is uncorrelated with each change in the right-hand side variables by construction.
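As a rough illustration of this leads-and-lags construction, and of the quasi-differencing step of the DFGLS procedure described above, the sketch below augments a (country, year)-indexed pandas panel. The column names, the default lag order p = 1 and the helper names are assumptions, not the authors' code.

```python
import pandas as pd

REGRESSORS = ["aidy", "domsy", "debty", "tradey", "growth"]

def add_dols_terms(df: pd.DataFrame, p: int = 1) -> pd.DataFrame:
    """Append leads and lags of the first-differenced regressors, per country,
    so the augmented regression can be estimated by OLS (the DOLS step)."""
    out = df.copy()
    for x in REGRESSORS:
        dx = out.groupby(level="country")[x].diff()  # Delta x_it, per country
        for j in range(-p, p + 1):
            # shift(-1) pulls the t+1 change to row t (a lead);
            # shift(+1) pushes the t-1 change to row t (a lag).
            out[f"d_{x}_{j}"] = dx.groupby(level="country").shift(j)
    return out.dropna()

def quasi_difference(s: pd.Series, rho: float) -> pd.Series:
    """Cochrane-Orcutt-style quasi-differencing with AR(1) coefficient rho;
    full Prais-Winsten additionally rescales each country's first observation."""
    return s - rho * s.groupby(level="country").shift(1)
```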
Is aid a determinant of investment?
We estimate the relationship between aid and investment using equation (1) by applying the DFGLS technique as described in Section 3. Within the DFGLS technique, we incorporate country-fixed effects, control for endogeneity and autocorrelation, and take heteroscedasticity and cross-country correlation of the residuals into account. The regressions in this section are based on either a simple (115 countries) or an extended/augmented investment model (91 countries). The data covers the period 1973-2011, or 1985-2011 if institutional quality is included. Based on a model which emphasizes the average impact of aid on investment, we find that aid has a positive and significant impact on investment and that all other regression coefficients carry the expected signs.
Comparing the aid coefficient produced with the DFGLS technique (Table 1, col. 1) with the aid coefficient produced with the DOLS technique (col. 2) results in similar aid coefficients of 0.30 and 0.35, respectively. The DFGLS technique is preferable as it controls for omitted variable bias whereas the DOLS technique does not.
Considering the DFGLS estimations in col. 1 of Table 1, a 1 percentage point increase of the aid ratio increases the investment ratio by 0.30 percentage points. Looking at the mean values, an increase in the aid ratio from 3.8% to 4.8%, which represents a 26% increase, will, therefore, equate to an increase in the investment ratio from 22.8% to 23.1% (this constitutes an increase of 1.3%).
Based on results from the preferable DFGLS estimation technique, an increase of the domestic savings ratio by 1 unit increases the investment ratio by 0.34 units; if domestic savings increase from 15.2% to 16.2%, which represents an increase of about 7%, the investment ratio will increase from 22.8% to 23.1%, which is about 1.5%. These changes imply that domestic savings are more effective in raising investment than aid.
Trade openness has a positive and statistically significant influence on investment. A one percentage point increase in openness leads to an increase in the investment ratio of 0.12 percentage points. Furthermore, an increase in growth by one percentage point spurs investment by 0.37 units, increasing the investment ratio from an average of 22.8 to 23.2 percentage points. In contrast, external indebtedness has a negative, although small, and non-significant impact on investment, confirming the burden of the debt service.
Computing the amount of investment generated by aid based on the aid coefficient in col. 1 of Table 1, we find that 1 dollar spent on aid translates into an increase in investment of about $1.70, indicating the existence of forward and backward linkages of investment projects [5].
To guarantee the reliability of the regression coefficients presented in Table 1, we check for robustness by inserting non-linearities and by extending the model.
The impact of aid on investment in different country groupings
We also analyze the aid-investment relationship in different regions of the world (Africa, sub-Saharan Africa (SSA), North Africa, the Middle East, Latin America and Asia), by applying the extended investment model. We run the model with aid, domestic savings, external debt, trade openness and growth as right-hand side variables applying the DFGLS technique. The results are presented in Table 2 and we concentrate on the aid coefficients and the amount of investment that is generated by one dollar of aid.
We find that the aid coefficients for Africa, SSA, North Africa and the Middle East (Table 2, col. 1-4) are positive and statistically significant, whereas in Latin America and Asia (col. 5 and 6) they are not statistically different from zero. It is worth noting that Latin America and Asia have aid-to-GDP ratios (between 1.32 and 1.98%), that are below the median (2% aid-to-GDP ratio) and well below the mean (3.8% aid-to-GDP ratio). Africa, SSA, North Africa and the Middle East, in contrast, have above-median aid-to-GDP values (6.2%, 6.3%, 3.2% and 2.8%).
We also compute the amount of investment generated by one dollar of aid in Africa, SSA, North Africa and the Middle East. We find low values of $0.98 and $0.27 for Africa as a whole (3)
Is aid more favorable for investment in certain environments?
For this analysis, we include environmental factors which are constant over time and determined by nature (climate, geography and being landlocked) or by history (e.g. colonial history). These factors are reflected in time-invariant country characteristics and are not attributable to policy interventions. Other environments are set by time-variant factors such as institutional quality, the aid ratio and economic growth. The environmental effect is modeled by the inclusion of country-specific fixed effects. When working with country-fixed effects, the average intercept and the deviation from the intercept for each country are used to split the sample into countries with a below-average intercept (signaling unfavorable conditions/country characteristics) and countries with an above-average intercept (signaling favorable conditions/country characteristics). A below-average intercept implies a below-average autonomous investment [6] and an above-average intercept implies an above-average autonomous investment. Table 3 (columns 1 and 2) provides evidence that an increase in the aid-to-GDP ratio leads to a significant increase in the investment-to-GDP ratio in countries that enjoy favorable country characteristics, whereas the effect is non-significant in countries with an unfavorable environment.
Interestingly, other variables such as increased domestic savings and trade openness enhance investment to a greater extent under favorable country conditions than under unfavorable country conditions. Table 3 (columns 3 and 4) shows the effect of aid on investment in high and low institutional settings. In column 3, we see that aid promotes investment when institutional quality is high. This is in line with the statement above on the importance of good institutions for investment. The rule of law, low levels of corruption and high bureaucratic quality can increase the investment share in recipient countries.
Moreover, we investigate the impact of an environment of disproportionately high or low amounts of aid in relation to a recipient country's GDP. Hence, in Table 3, columns 5 and 6, we test whether the effectiveness of aid is higher in countries that receive aid payments above the median compared to countries that receive aid amounts below the median. We find that in countries receiving above-median amounts of aid (more than 2%), aid is always effective. There is also some weak evidence (at the 90% confidence level) that a one percentage point increase of the aid-to-GDP ratio in below-median aid-receiving countries generates about twice as much investment as a one percentage point increase in the aid ratio in above-median aid recipients. Furthermore, in environments with relatively high amounts of aid, the effectiveness of trade openness is higher and the effectiveness of domestic savings is a bit lower.
Finally, we analyze the impact of high or low GDP growth (col. 7 and 8) on the effectiveness of aid. We find that aid enhances investment in economies that grow above the average rate (4.1%; the median, 4.2%, is almost the same) but not in countries that grow below the average rate. Domestic savings and trade openness seem to have a greater impact on investment in poor growth environments and therefore slightly compensate for the low-growth environment.
To conclude, aid does not spur investment when countries face unfavorable conditions and suffer from low institutional quality. It seems that path dependency and low-level economic equilibria affect investment spending, and that the historical past influences the quality of institutions and administrative structures, which provide the basis and security for investment decisions. In contrast, aid is consistently effective at increasing investment when countries enjoy favorable characteristics, good institutions or above-median aid inflows.
Are certain types of aid more investment-promoting than others?
We now turn to the role of different aid categories in promoting investment, as it is reasonable to assume that distinct types of aid differ in their impact on investment. What we see in Table 4 columns 1 to 6 is that domestic savings and trade openness contribute positively to investment and are statistically significant. For grants and net loans (col. 2) and aid to improve institutions (col. 6), no significant effects can be found, but the rest of the aid categories (columns 1, 3, 4 and 5) seem to have a statistically significant impact on the investment ratio.
The sub-types of aid, bilateral and multilateral aid (col. 1), do influence investment in a significant way. Unsurprisingly, investment-related aid (col. 3) has a significant influence on investment in addition to non-investment targeting aid. Backward and forward linkages in production and investment can possibly explain this phenomenon. In particular, aid used for infrastructure (col. 4) and its sub-component communications (col. 5), is very effective at boosting investment. However, its impact is not as large as the figure of 4.23/3.94 seems to suggest. These numbers imply that if aid is increased by about 100 times (which is unrealistic), then the investment ratio would rise by about 4 units (percentage points). In contrast, aid targeted at institutions (col. 6) has a non-significant impact. This could imply that investing in institutional improvement does not need much physical investment but rather an improvement in quality, which most often requires a change in habits, attitudes and priorities. It does not imply that aid targeted at institutional quality is ineffective.
Conclusion
This study sheds light on the aid-investment link. Apart from very heterogeneous and country-specific aid-investment relations, we discovered some generalizable aid-investment linkages. On the one hand, aid spurs investment, but factors such as domestic savings and economic growth are equally important. When countries enjoy favorable (time-invariant) conditions, have good institutions or receive above-median aid-to-GDP ratios (higher than 2%), aid promotes investment significantly.
The amount of investment that is generated by one dollar of aid is low in Africa as a whole, and especially in sub-Saharan Africa. In these regions, linkage effects of investment seem to be lower than in North Africa and the Middle East. Differentiating between the subcomponents of aid revealed that the general, quite stable positive impact of aid on investment is driven by both bilateral and multilateral aid, with the latter being more effective in promoting investment. Interestingly, we find that both investment-related and non-investment-related aid (targeting the health and education sectors) are effective in promoting investment as well. There is evidence that aid for infrastructure spurs investment. No investment-enhancing effect could be found for aid allocated to the improvement of institutions. However, even if aid given to improve institutions does not increase investment directly, it might do so indirectly, if it succeeds in enhancing institutional quality. As such, our empirical evidence suggests development aid should be given in sizable amounts exceeding a 2% share of a recipient country's GDP.
(Table 4. Effectiveness of different types of aid)
One limitation of our study also gives scope for further research: the main variable of interest is gross fixed capital formation (the investment-to-GDP share), which is an aggregate of domestic and foreign shares in national investment. It would be interesting to see whether, and how, development aid or different types thereof promote domestically financed and foreign investment. The G20 Compact with Africa as a political agenda aims to increase investment relations between private investors of industrialized countries and developing nations. The initiative should, however, not disregard that development aid itself is a contributor to investment in developing countries and fulfills its purpose if country conditions are favorable in terms of geography and institutions and if development aid is given in sufficient quantities. | 2021-08-02T00:05:22.002Z | 2021-05-17T00:00:00.000 | {
"year": 2021,
"sha1": "35e46b2119f97215a2d5d5a93e89ab75847605cb",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/AEA-08-2020-0110/full/pdf?title=aid-effectiveness-when-aid-spurs-investment",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a09d42f6ce9176aa2fe440cc1db14385e49c8b65",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
201843828 | pes2o/s2orc | v3-fos-license | Integrating the influence of weather into mechanistic models of butterfly movement
Background Understanding the factors influencing movement is essential to forecasting species persistence in a changing environment. Movement is often studied using mechanistic models, extrapolating short-term observations of individuals to longer-term predictions, but the role of weather variables such as air temperature and solar radiation, key determinants of ectotherm activity, is generally neglected. We aim to show how the effects of weather can be incorporated into individual-based models of butterfly movement, thus allowing analysis of their effects. Methods We constructed a mechanistic movement model and calibrated it with high precision movement data on a widely studied species of butterfly, the meadow brown (Maniola jurtina), collected over a 21-week period at four sites in southern England. Daytime temperatures during the study ranged from 14.5 to 31.5 °C and solar radiation from heavy cloud to bright sunshine. The effects of weather are integrated into the individual-based model through weather-dependent scaling of parametric distributions representing key behaviours: the durations of flight and periods of inactivity. Results Flight speed was unaffected by weather, time between successive flights increased as solar radiation decreased, and flight duration showed a unimodal response to air temperature that peaked between approximately 23 °C and 26 °C. After validation, the model demonstrated that weather alone can produce a more than two-fold difference in predicted weekly displacement. Conclusions Individual-based models provide a useful framework for integrating the effect of weather into movement models. By including weather effects we are able to explain a two-fold difference in movement rate of M. jurtina consistent with inter-annual variation in dispersal measured in population studies. Climate change for the studied populations is expected to decrease activity and dispersal rates since these butterflies already operate close to their thermal optimum. Electronic supplementary material The online version of this article (10.1186/s40462-019-0171-7) contains supplementary material, which is available to authorized users.
Background
Understanding individual movement is crucial to species conservation as it directly impacts metapopulation stability and species persistence [1]. In order to predict the consequences of anthropogenic change, it is essential to understand, in detail, the capacity and the motivation for movement of species within complex landscapes [2][3][4]. Butterflies have served as model systems to investigate movement processes [5] that determine metapopulation dynamics [6], home-range sizes [7,8], functional connectivity [9], and minimum area requirements [10], though accurately predicting movement rates remains challenging, since movement is context dependent and driven by multiple environmental factors [11].
The drivers of movement behaviour have been variously investigated and modelled in butterflies. Examples include: responses to boundaries [12][13][14][15][16], habitat-specific movement rates [17,18], and variation among individuals in motivation to move [19]. Progress in modelling these effects is achieved by incorporating mechanisms underlying the behavioural responses to changing conditions. Rarely, though, has the effect of weather been included (but see [18]), despite the well-established temperature-dependency of lepidopteran flight behaviour [20][21][22][23][24][25] and the underlying physics of heat transfer being known in detail for Colias butterflies [26]. Therefore, the consequences of weather and climate variability for potential movement rates have yet to be fully addressed.
Recent field studies conducted on a number of different butterfly species confirm that weather is an important factor explaining propensity for emigration [27] and underlying the variation in dispersal rate between years [28,29]. Specifically, rate of movement is found to increase with both air temperature and sunshine intensity due to their predicted independent effects on body temperature [30]. Environmental variability in propensity to move is shown to contribute to the kurtosis of dispersal kernels in general [31][32][33][34][35]. However, while metabolism is expected to increase with temperature under predicted climate change [36], performance is eventually impaired as species approach their thermal safety margins [37][38][39], forcing a change in thermoregulatory behaviour that can ultimately limit and reduce movement rates [40,41]. Understanding of these effects is necessary as species ranges are shifting rapidly in response to changing climates [42,43], and the rates of range shifts are linked to species mobility [44].
In order to better understand and predict the effects of weather on movement rate in butterflies, we investigated the weather-dependence of movement behaviour in the model species Maniola jurtina (L. 1758). M. jurtina is a common species which exists in networks of local fragmented populations. It is a relatively sedentary species with short mean dispersal distances. The majority of individuals remain in their natal patch [45], a situation typical of butterflies in metapopulations [46], making it ideal to model. Various aspects of the movement behaviour of M. jurtina have been empirically investigated, notably changes in movement rates with habitat quality and edge responses [47][48][49][50][51]. Both temperature and solar radiation are known to influence the movement rate of a range of butterfly species, including M. jurtina [29], though a basis for including these in predictions of movement is lacking. Here we address this issue by introducing an individual-based model which incorporates weather-dependent changes in duration of flights and inactivity (referred to hereafter as inter-flight durations). The model is parameterised with extensive high precision data on both flight tracks and behavioural time budgets, collected over the course of three seasons and at four sites, which demonstrate the influence of weather on flight and inter-flight durations. Movement models incorporating flight and inter-flight periods have only recently been developed [19], and we show how the influence of weather can also be included. The model is validated with data collected over 10-min intervals and is then used to explore the consequences of weather on weekly displacement rates. We conclude by discussing possible consequences of these findings for the responses of M. jurtina to climate change.
Study species and sites
The meadow brown (Maniola jurtina) is a widespread univoltine butterfly with a flight period that extends across the summer months in the UK from June to September [52]. It is commonly found in a variety of grasslands habitats [45], where the larvae feed mainly on Poa spp and the adults nectar on a range of flowering plants [53].
Data on individual flight tracks were collected over 72 days during the summers of 2016 (July-August), 2017 (June-September) and 2018 (June-July), at four sites in the south of England: North farm in Oxfordshire (51°37′ N, 1°09′W), Jealott's Hill farm Berkshire (51°27′N, 0°44′ W), the University of Reading (51.4414°N, 0.9418°W), and Sonning farm Berkshire (51°28′N, 0°53′W). Three of the sites were agricultural farms which had implemented agri-environment schemes and consisted of a mixture of arable fields, open meadows, and nectar rich field margins, while the fourth consisted of areas of meadow within the grounds of the Reading University campus.
Movement & behavioural observations
Three hundred and eighty-five (♀181, ♂204) individual butterflies were followed continuously, at a distance of approximately three metres, for up to 10 min to record both movements and behaviour. This distance allows careful observation of the butterflies without disturbing their behaviour. Flight paths were reconstructed as a series of steps and turns between landings and successive 15 s periods of continuous flight [54]. Positions were initially marked with numbered flags, the precise coordinates of which were subsequently mapped using a high-grade Global Navigation Satellite System receiver accurate to < 30 cm (Arrow 200 RTK). The time for which an individual was followed, termed the observation time, ended either at 10 min or after a set number of flags had been laid (20 in 2016 & 2017 and 15 in 2018), whichever occurred first.
Step distances and relative turning angles were calculated based upon the coordinates of the successive flagged positions. During the observations, activity was recorded continuously by categorising behaviour as flying or inter-flight, with the timing of behaviour recorded accurately using a bespoke Android phone app developed for the project by LE. Any flight and inter-flight durations which were ongoing at the end of the observation were treated as right-censored data in subsequent analyses.
We use two measures of 10-min displacement, which we term distance rate and displacement rate. Distance rate is here defined as the total flight path distance divided by the observation time; displacement rate (m/s) is the Euclidean distance moved during the observation divided by the observation time.
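A minimal sketch of the two definitions, assuming the flagged positions are available as planar coordinates in metres (the function and variable names are illustrative, not from the original analysis):

```python
import numpy as np

def track_rates(coords: np.ndarray, observation_time_s: float):
    """coords: (n, 2) array of flagged positions (m), in visit order.
    Returns (distance_rate, displacement_rate) in m/s."""
    steps = np.diff(coords, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()       # total flight path
    displacement = np.linalg.norm(coords[-1] - coords[0])   # straight-line distance
    return path_length / observation_time_s, displacement / observation_time_s

track = np.array([[0.0, 0.0], [3.0, 1.0], [5.0, 4.0], [2.0, 6.0]])
print(track_rates(track, 600.0))  # a 10-minute observation
```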
Dataloggers (HOBO Pendant) were used to record solar radiation (lux) at 10 s intervals, and air temperature was measured at hourly intervals from meteorological stations within 3 km of each site (Jealott's Hill, Sonning, University of Reading, RAF Benson).
Statistical analysis
Linear models were used to demonstrate the influence of sex, air temperature, (air temperature)², and solar radiation on the movement variables, though a different procedure was used for incorporating these effects into the individual-based model, as it is then desirable to model both the changing mean and variance of flight and inter-flight durations across weather categories (see Generalising behavioural responses to weather conditions). (Air temperature)² was introduced as a covariate after visual inspection of the relationship between air temperature and flight duration. To control for repeated measures from an individual, means of the variables were calculated such that each observation of a movement variable referred to a unique individual. Model diagnostics were used to check the conformity of the data to the assumptions of linear models, and minimal transformations were used when residuals were skewed: step speeds, displacement rates and distance rates were cube-root transformed, while flight and inter-flight durations were log-transformed. Stepwise AIC was used to drop uninformative covariates. The Wallraff rank sum test of angular distance, available through the circular package in R [55], was used to test for differences in turning angles between the sexes.
Generalising behavioural responses to weather conditions
The individual-based model required representative distributions fitted to the flight and inter-flight durations across weather conditions. The data were subdivided to allow for changes in both the means and the variances of the representative distributions across the changing weather conditions. To evaluate the effect of temperature on flight duration distributions, flights were ranked by recorded air temperature and then subdivided to give five categories across the observed range (median values: 16.2°C, 19.6°C, 23°C, 26.4°C, 29.8°C). Inter-flight duration distributions were similarly analysed across the range between 10 and 230 klx as measured on the dataloggers (i.e. from overcast to full sunshine), using median values of 30.2 klx, 76 klx, 120 klx, 164 klx, and 226 klx.
Flight and inter-flight durations were long-tailed, and goodness-of-fit statistics were used to choose between candidate parametric distributions (log-normal distributions were selected as most appropriate). As flight and inter-flight durations contain right-censored observations, distributions were fitted using 'fitdistcens', an algorithm available in the fitdistrplus package in R [56], which takes account of censoring and uses maximum likelihood methods to fit distributions to data. Flight duration distributions were then fitted across temperature categories and inter-flight duration distributions across solar intensity categories. This allowed evaluation of the change in the parameters of the log-normal distributions (log μ, σ) across weather conditions. The changes were summarised using a quadratic model, which was selected after visual inspection of the change in parameters across weather conditions. This provided an estimate of the shape of the flight and inter-flight distributions between the upper and lower bounds of the observed weather conditions. All analysis was carried out in R 3.4.2 (R Core Team, 2018).
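The study fits censored log-normal distributions with fitdistrplus::fitdistcens in R; the sketch below shows the same maximum-likelihood idea in Python on synthetic data, with the temperature-category medians taken from the text. The censoring threshold, sample sizes, and the "true" parameter trend are invented for illustration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lognormal_censored(durations, censored):
    """MLE for a log-normal (mu, sigma on the log scale) with right-censoring,
    mirroring what fitdistrplus::fitdistcens does in R."""
    x = np.log(np.asarray(durations, dtype=float))
    cens = np.asarray(censored, dtype=bool)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll = norm.logpdf(x[~cens], mu, sigma).sum() - x[~cens].sum()  # density terms
        ll += norm.logsf(x[cens], mu, sigma).sum()                    # survival terms
        return -ll

    res = minimize(nll, x0=[x.mean(), 0.0], method="Nelder-Mead")
    return res.x[0], float(np.exp(res.x[1]))

rng = np.random.default_rng(1)
cat_medians = np.array([16.2, 19.6, 23.0, 26.4, 29.8])  # degC, from the text
fits = []
for i in range(len(cat_medians)):
    true_mu = 1.0 + 0.3 * i - 0.06 * i**2    # synthetic unimodal trend
    d = rng.lognormal(true_mu, 0.8, 300)
    c = d > 60.0                             # right-censor at observation end
    fits.append(fit_lognormal_censored(np.minimum(d, 60.0), c))

mus = np.array([f[0] for f in fits])
poly = np.polyfit(cat_medians, mus, deg=2)   # quadratic trend in log-mu
print(np.round(poly, 4))
```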
Individual based model
A spatially-explicit individual-based random walk model was developed to evaluate the effect of temperature and solar radiation on the movement rates of M. jurtina. The model consists of individuals representing butterflies which move across a grid of habitat patches. Mechanistic movement models typically represent butterfly movement as a series of steps and turns which are used in a correlated random walk to simulate the flight path of a butterfly over time [57][58][59]. Our model is conceptually similar to a recent approach in which movement over time is represented as transitions between flights and inter-flight periods [10], rather than as fixed flight times for all butterflies. This allows representation of the changing durations of flights and inter-flights with environmental conditions and between the sexes (Fig. 1) and thus allows prediction of movement rates across a range of weather conditions. Durations of flight and inter-flight periods are drawn from solar-intensity- and temperature-specific log-normal distributions, with parameters predicted from the model fits to the observed changes in parameters across weather conditions (described above). Individuals in the model move during a flight by random draws from observed distributions of step lengths and turning angles. An overview of the model is given in Fig. 1. Each individual first selects an inter-flight duration and remains stationary until this time has elapsed; it then draws a flight duration. To move during flight, individuals draw step distances from marginal distributions of step lengths observed for flights of that duration. For example, if a four-second flight was drawn, a corresponding step from the four-second marginal distribution of step lengths would be selected. The butterfly then moves forward at a rate such that the step length is completed in the flight time. As step lengths were measured at a maximum of every 15 s, a long flight may result in multiple steps being drawn before the flight time has elapsed. This detail, which is not included in standard random walk approaches, decouples movement rate from flight time and is important here to fairly represent the effect of changing flight durations on movement. After a flight, or every 15 s during flight, the individuals change heading by drawing a turning angle and adding this turn to the current heading. After the flight time had elapsed, the individuals selected another inter-flight duration, and this was repeated until the end of the simulation. To match field observations as closely as possible, simulated observations ceased after 20 or 15 flags had been laid (in the proportions used in the field), and a low probability of being lost in flight was included. The model was built in NetLogo 6.0 [60] and analysis was carried out using the RNetLogo package [61]. Von Mises circular distributions were fitted to observed turning angles using the 'circular' package in R [55,62].
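A stripped-down sketch of the flight/inter-flight walk is given below in Python rather than NetLogo; the log-normal parameters, the step-length draw and the von Mises concentration are placeholders, and the step draw ignores the flight-duration-specific marginals used in the actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_butterfly(minutes, flight_params, interflight_params, kappa=2.0):
    """Minimal flight/inter-flight random walk, assuming log-normal duration
    distributions (parameters would come from the weather-specific fits) and
    von Mises turning angles. Returns the track as an (n, 2) array."""
    t, heading = 0.0, rng.uniform(0, 2 * np.pi)
    pos, track = np.zeros(2), [np.zeros(2)]
    end = minutes * 60.0
    while t < end:
        t += rng.lognormal(*interflight_params)           # wait before flying
        flight = rng.lognormal(*flight_params)            # flight duration (s)
        elapsed = 0.0
        while elapsed < flight and t < end:
            seg = min(15.0, flight - elapsed)             # re-steer every 15 s
            step = rng.lognormal(1.2, 0.5) * seg / 15.0   # illustrative step draw
            heading += rng.vonmises(0.0, kappa)           # correlated turn
            pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
            track.append(pos.copy())
            elapsed += seg
            t += seg
    return np.array(track)

track = simulate_butterfly(10, flight_params=(1.8, 0.9),
                           interflight_params=(3.0, 1.0))
print(len(track), np.linalg.norm(track[-1]))  # steps taken, 10-min displacement
```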
Short term movements of individual butterflies
The positions of individual butterflies were marked when they took off, when they landed, and every 15 s during flight: the distance between successive marks is referred to as a step, and the change in direction between successive steps is referred to as a turn. Males had significantly longer step distances than females (mean ± SE: females = 3.21 m ± 0.16 m; males = 3.88 m ± 0.11 m; t-test on log step distances: t = 5.09, p < 0.001, df = 1351.1) and more directed flights (circular mean resultant length: females = 0.40, males = 0.61; Wallraff test: χ² = 34.4, p < 0.001) (Fig. 2), but females flew faster than males as measured by step speeds (step distance/step duration) (Table 1).
Step speeds were not influenced by solar radiation, and there was only weak evidence of an effect of air temperature or (air temperature)², though both were retained in AIC model selection (Table 1).
Behaviour over 10 min
Males were significantly more active than females, with longer flights (Fig. 3a; median flight durations: males 9.1 s, females 3.8 s) and shorter inter-flight durations (Table 1). In addition to the effects of sex, flight durations were affected by air temperature but not solar radiation, while inter-flight durations were most affected by sex and solar radiation, with weak evidence for an effect of air temperature (Table 1). Flight durations increased with air temperature, peaked between 20°C and 26°C, and then decreased, but only marginally so for females (Fig. 3a). Inter-flight durations declined as solar radiation levels increased (Fig. 3b). Males had higher displacement rates than females (Table 1). For displacement and distance rates, which integrate effects on flight and inter-flight durations, air temperature, (air temperature)² and solar radiation all significantly affected observed rates.
Generalising behaviour with log normal distributions
Quadratic models fitted to the parameters of log-normal distributions (log μ, σ) were used to generalise the nonlinear behavioural changes of M. jurtina across weather conditions (coefficients presented in supplementary materials 1). The effect of insolation on inter-flight durations was well captured using this approach, fitting closely the parameters of the log-normal for both sexes (R²: males log μ = 0.94, σ = 0.91; females log μ = 0.98, σ = 0.88). For male butterflies, parameters of flight durations across air temperatures were also well fitted (R²: log μ = 0.86, σ = 0.81), though for females the effect of air temperature was generally much weaker (Fig. 3a); with no simple relationship between the log-normal parameters and air temperature, a data-driven approach was applied, using the fitted parameters for an air temperature category when simulating air temperatures within that interval in the individual-based model.
Using the individual-based model to predict dispersal rates
The individual-based model described in Methods was developed to bridge the gap between short-term observations of movements and 10-min displacements by explicitly representing changes in behaviour across weather conditions. The model uses weather-dependent parameterisations (supplementary material 1) of flight durations and inter-flight durations to predict movement rates, measured as distance rate (track path length/observation time) (Fig. 1) and displacement rate (Euclidean distance/observation time) (Additional file 1: Figure S2). The model was validated by comparing predictions of movement rate with the observations for each air temperature and solar-intensity level (Fig. 4 and Additional file 1: Figure S2). Predictions were obtained by inputting the air temperature and solar radiation of a field observation, running the model for ten minutes of simulated time and then collecting the measure of displacement; this process was repeated 20 times per individual. Distance rates are preferable for validation because they are not sensitive to edge-of-habitat effects, which are not included in the model, but displacement is a more direct measure of 10-min displacement because it represents the Euclidean distance moved.
(Table 1 note: analyses performed using linear models, with predictors removed or retained through AIC model selection; • p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001; numbers in parentheses indicate standard errors of the estimated coefficients.)
Predicted and observed distance rates were highly correlated across levels of sunshine (Fig. 4a, Pearson's r = 0.97, p < 0.001) and air temperature categories (Fig. 4b, r = 0.90, p < 0.001), though there is some under-prediction for males at the two highest temperature categories. Similarly high correlations were obtained for displacement rates across sunshine categories (Additional file 1: Figure S2A, Pearson's r = 0.89, p < 0.001) and temperature categories (Additional file 1: Figure S2B, Pearson's r = 0.90, p < 0.001). We consider that these high correlations between observations and predictions constitute satisfactory validation of the model.
To analyse the effects of solar radiation and temperature on movement over a meaningful timeframe for the dispersal potential of a population, simulations of the movement of 1000 butterflies over a week (5 days × 8 h) were performed for 25 simulated weather conditions (5 sunshine × 5 temperature levels). Daily temperatures were simulated by fitting a Loess curve to observed temperatures during the 2018 field observations and shifting the intercept of the function in 3°C intervals to replicate cooler or warmer days (Additional file 1: Figure S1). Daily sunshine levels were similarly replicated by fitting a custom function to observed solar radiation and shifting the intercept in 20 klx intervals (Additional file 1: supplementary materials 2). Weather changes occurred half-hourly in the simulation, and on-going behaviours, such as inter-flight durations, then ceased and a new behaviour was drawn, so that butterflies were reactive to the changing conditions. Maximum mean weekly displacements were predicted to be approximately three times greater for males than for females (Fig. 5). The range of weekly displacement predictions varied more than two-fold across solar-intensity and temperature categories for males and by more than 50% for females. For both sexes, predicted weekly displacement responded strongly to solar radiation. Displacement peaked at intermediate temperatures in males, but there was no strong effect in females. These results were similar for distances travelled (Additional file 1: Figure S3), with males flying much further than females and flying furthest at intermediate temperatures, and both sexes travelling further with increasing solar intensity.
Discussion
Our objective has been to integrate the effects of air temperature and solar radiation into an individual-based model which predicts movement rates for M. jurtina. Our method has been to identify the short-term effects of the weather variables on flight and inter-flight durations (Fig. 3 and Table 1), and then to draw from distributions representing these weather-dependent behaviours within the individual-based model. Two measures of movement are presented, displacement rates and distance rates, and the model is satisfactorily validated for both measures by comparing observations and predictions (Fig. 4 and Additional file 1: Figure S2). The model is subsequently used to analyse the effects of weather on weekly displacement and demonstrates that, within the analysed range, weather has a greater than two-fold effect for males and a greater than 50% effect for females (Fig. 5).
Weather strongly influences butterfly behaviour, primarily through the effects of air temperature on flight duration, and solar radiation reducing the time interval between successive flights (Fig. 3). These effects of weather on movement are consistent with theoretical expectations based on biophysical analysis and observations of thermoregulatory behaviour [63][64][65][66] and consistent with previous observations of butterfly movement [20,23,29,67]. While warmer temperatures are predicted to increase the scope for muscle power by enhancing aerobic capacity [68], we found no strong evidence of a relationship between flight speed and either air temperature or solar radiation. It is likely that the flight speed measured in this study reflects a foraging strategy optimised for favourable habitats rather than a maximal rate [69]. Therefore, a limitation when relating our results to longer-term dispersal is the complexity of the dispersal process with movement behaviour changing between habitat types [51] and influenced by edge effects [70]. Nonetheless, the influence of weather on behaviour was found to account for more than a twofold variation in displacement rate, which is consistent with observed annual variability in dispersal rates [28].
While both sexes showed similar flight speeds, males had longer flight durations and shorter intervals between successive flights, resulting in a three-fold greater predicted daily displacement. These sex differences likely reflect different priorities. Male M. jurtina continuously 'patrol' habitat in search of females to mate with, whereas mated females search for suitable host plants on which to lay eggs [20,45]. While males appear to maximise flight durations on sunny days when solar radiation can be used to elevate body temperature, females show reduced activity which is less temperature dependent. This restricted flight period for oviposition may ultimately reflect thermal constraints on egg maturation rate [71]. The optimal strategy for females may be to fly only when eggs are ready to lay, to minimise unwanted attention from males and associated energetic costs.
Although below 23°C temperature had a positive effect on flight duration, for male butterflies flight durations declined above 26°C (Fig. 3). Similarly, predicted displacement for males peaked at approximately 26°C and declined thereafter, though there was no strong effect of temperature on females (Fig. 5). For both sexes, movement predictions peaked at the highest solar radiation levels. Declines in activity and switches in behaviour are consistent with ectotherms nearing their thermal limits [40]. High temperatures have, for instance, been shown to reduce mate-searching behaviour in the small white (Pieris rapae) [72]. Our results suggest that while a warmer climate is likely to increase potential dispersal rate and potentially population stability for M. jurtina [29], particularly at its northern range boundary, predicted high temperatures under climate change might ultimately restrict movement, with detrimental effects on the stability of populations unless accompanied by an associated change in phenology, population size, habitat use and/or thermal adaptation [73,74], such as seen in the morphological differences in species of Colias butterflies across altitudinal gradients [23].
While the long-term ecological consequences are complex to predict, we have demonstrated that the current relationship between behaviour and weather can be defined and included in mechanistic movement models. The temperature-dependence of flight behaviour observed particularly for male M. jurtina, has a number of important general implications. Firstly, weather alone may explain much of the variation in movement observed for butterflies among sites and among years [28,31], and therefore ought to be accounted for when estimating butterfly and other ectotherm movement behaviours. Secondly, the influence of weather on dispersal may affect population synchrony in both space and time [75]-the Moran effect [76]. Thirdly, the finding that flight behaviour is constrained by unfavourably hot conditions suggests opportunities for oviposition may be more limited than previously thought, reducing the possible benefits of temperature dependent increases in fecundity [77].
We hope that the approach of representing the weather dependence of movement in models can be applied more generally across species, using mechanistic understanding of how movement depends on traits differing between species such as body size [64,78], thermoregulatory behaviour and melanism [25,65], or observation of thermal performance curves on a species by species basis. Thermal performance curves for movement are available for several insects [79][80][81], and reptiles [82][83][84]. We hope that in this way the effects of changing climate may be better predicted using mechanistic movement models that account for the effects of varying environmental conditions.
Conclusions
Individual-based models provide a useful framework for including mechanism in movement models. By disentangling the effects of weather on different aspects of flight behaviour, and then by demonstrating how to integrate these insights into an individual-based model of butterfly movement, we were able to explain up to a two-fold difference in movement rate of M. jurtina, consistent with inter-annual variation in dispersal measured in population studies. We have also revealed that climate change may be expected to decrease activity and dispersal rates for the studied populations, since these butterflies already operate close to their thermal optimum. We hope that developments of our model will enable improved forecasting of the ecological consequences of changes in weather, and ultimately climate, and provide impetus to include greater mechanism in future movement models. | 2019-09-02T22:13:10.536Z | 2019-09-02T00:00:00.000 | {
"year": 2019,
"sha1": "58dc321b0b5f37f9f3d106347e0579fd8394b016",
"oa_license": "CCBY",
"oa_url": "https://movementecologyjournal.biomedcentral.com/track/pdf/10.1186/s40462-019-0171-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58dc321b0b5f37f9f3d106347e0579fd8394b016",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
234816246 | pes2o/s2orc | v3-fos-license | Develop A Commissioning Process And Template In Building Project For Large-Scaled Construction Company
This study aims to establish a building commissioning process for a large-scaled construction company and to develop a commissioning template for each phase. The purpose of this paper is to develop a practical, systematic, professional commissioning template for a large construction company. In addition, the main subject of building commissioning is performance verification of Heating, Ventilating and Air Conditioning and Refrigeration (HVAC & R) systems, which require coordinated equipment and controls, so we focus on verification of HVAC & R. Large-scaled construction companies in Korea are transforming from general contractors to Engineering, Procurement, and Construction (EPC) firms, and it is expected that the role of building commissioning for the objective quality assurance of buildings will be important in the future. Preparation is also necessary, since a contractor that must obtain final approval from a professional commissioning company before delivering the building to the owner cannot otherwise respond properly in a timely manner. In order to solve these problems, it is necessary to properly understand the submission documents and work system required by a professional commissioning company and to establish a work performance system that enables commissioning from the general contractor's perspective. As the owner's demand for building commissioning increases day by day, a response is necessary; commissioning can also help respond to the global warming crisis. The research team built a commissioning template covering the plan phase through the O&M phase; the template contains key documents and a general document package.
Introduction
Since the late 2000s, interest in energy saving and eco-friendliness, driven by resource depletion and global warming, has been spreading across the United States and Europe, and the demand for eco-friendly performance verification of buildings is expanding. Commissioning, which began with the performance verification of ships and aircraft, is expanding to general buildings, and the building commissioning market is gradually growing. In the United States, Europe, and Canada, guidelines and laws have been established and distributed through various associations, and the introduction and concept of commissioning are currently spreading. [1][2][3] Large-scaled construction companies in Korea are transforming from general contractors to Engineering, Procurement, and Construction (EPC) firms, and it is expected that the role of building commissioning for the objective quality assurance of buildings will be important in the future. [4] In the current situation, the lack of experience in building commissioning will be a latent problem. In addition, preparation is necessary, since a contractor that must obtain final approval from a professional commissioning company before delivering the building to the owner cannot otherwise respond properly in a timely manner. In order to solve these problems, it is necessary to properly understand the submission documents and work system required by a professional commissioning company and to establish a work performance system to enable commissioning from the general contractor's perspective. [5] Therefore, the purpose of this paper is to develop a practical, systematic, professional commissioning template for a large construction company. In addition, the main subject of building commissioning is performance verification of Heating, Ventilating and Air Conditioning and Refrigeration (HVAC & R) systems that require equipment and control, so we focus on verification of HVAC & R. In order to create the template, the project was categorized into 4 phases.
Literature review
As the functions and performance of buildings have become more complicated in recent years, building commissioning has been widely applied, reflecting the perception that step-by-step review and verification of whether a building is being designed, constructed, and operated as intended is required to ensure that the building delivers its expected performance. Building commissioning is a technique that applies the quality assurance process to the construction industry. [6] It is defined as verifying that the needs of the owner are well reflected at each phase of construction. If appropriate commissioning is applied according to the characteristics of the building, it can not only satisfy the required performance of the building but also help complete a healthy, environment-friendly building.
The type of commissioning
Building commissioning can be divided into 5 categories: initial commissioning (ICx), re-commissioning (Re-Cx), retro-commissioning (Retro-Cx), ongoing commissioning (OCx), and monitoring-based commissioning (MBCx). According to overseas studies, the average operating cost of commissioned buildings is 8 to 20% lower than that of buildings that are not commissioned. In addition, according to Altwies' research [7], commissioning can reduce design changes by 87% and rework by 90%, and the total construction cost savings are 4-9%. The advantages for stakeholders are as follows.
(1) Advantages for the owner
- Efficient operation and maintenance
- Reduced failure frequency
(2) Advantages for the architect
- The equipment system meets the needs of the owner
- Change management is possible
(3) Advantages for the contractor
- Efficient construction management
- Reduced defects
Commissioning cost
The cost of building commissioning is affected by the type of facility, the difficulty of construction, operating hours, and equipment level. Therefore, it is difficult to calculate the standardized cost, but the US PECI and LBNL provide examples of commissioning cost. [8][9][10] The contents are as follows.
(1) California Commissioning Guide
The GSA's commissioning guide includes the charts provided by PECI, which serve as a reference for calculating commissioning costs for various buildings.
Team building of commissioning
The general commissioning team is as follows. The essential members in the planning and design phases are the Owner's representative (OR), Commissioning Authority (CxA), and Design Professional (DP); in general, the OR can be the Project manager (PM), occupants, users, the facility manager, or O&M personnel. The Architects & Engineers (A/E) play the DP role in a project. If a CM, PM, or Program manager (PM') is involved, they can participate as commissioning members. The mandatory members of the construction phase are the OR, CxA, DP, General contractor (GC), vendors, CM, PM, and PM'; the vendors are the equipment manufacturers and installers. The essential members of the warranty phase are the OR, CxA, DP, GC, vendors, CM, PM, PM', etc. In relation to the HVAC & R system, the required members, additional members, and members who can participate if necessary are as follows.
Establish commissioning process
Although the project was initially defined in 4 phases, the contents of the warranty phase are complicated, so the Cx tasks were further divided into acceptance and O&M phases. The process notation standard is as follows: the main tasks were divided into steps, and abbreviations for each phase were entered as intermediate numbers so that they could be clearly recognized.
Figure 2. Process notation and numbering criteria
Plan phase
In the plan phase, the "project proposal" starts under the authority of the owner. The first step of commissioning is forming the Cx team (commissioning team), which consists of the ordering company, OR, CxA, A/E, CM, GC, and manufacturers. In the second step, the "Cx scope meeting" hosted by the CxA is held; through this meeting, each team member's Role and Responsibility (R&R) and work scope are determined, and the determined R&R is later reflected in the Cx Spec. After the meeting, the CxA drafts a Cx Plan (commissioning plan) and shares it with all members of the Cx team, including the owner, and future business directions and plans are determined. The third step is the "OPR workshop", where the objectives of the project and the requirements of the owner are described under the authority of the OR or owner.
Design phase
The commissioning process at the design phase is shown in Fig. 4. The first step is "BOD development", in which the A/E team takes the lead in creating design guidelines that meet the OPR. Information such as system manuals and specifications is included in these guidelines.
Acceptance phase
The commissioning process at the acceptance phase is shown in Fig. 6.
Develop the commissioning template
The commissioning template is composed of a key document set and a general document package. The key documents are the Cx Plan and Cx Spec, which explain the commissioning process and the R&R of team members, while the general document package consists of the documents that should be completed phase by phase.
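A minimal sketch of how such a template could be held as a checklist data structure is shown below (Python); the assignment of documents to phases is an illustrative guess based on the phases described in this paper, not a prescription.

```python
# Hypothetical skeleton of the template: key documents plus the
# phase-by-phase general document package named in this paper.
commissioning_template = {
    "key_documents": ["Cx Plan", "Cx Spec"],
    "general_package": {
        "plan":         ["OPR", "MOCxM"],
        "design":       ["BOD", "DR", "SOO"],
        "construction": ["CIR", "DCR", "Issues log", "Progress log",
                         "FWT/FAT", "Vendor start-up", "PFC"],
        "acceptance":   ["FTP", "Training agenda"],
        "O&M":          ["O&M manual"],
    },
}

def outstanding(phase, completed):
    """Return documents still to be produced for a given phase."""
    done = set(completed)
    return [d for d in commissioning_template["general_package"][phase]
            if d not in done]

print(outstanding("design", ["BOD"]))  # -> ['DR', 'SOO']
```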
Cx Plan
The Cx Plan explains the overall business procedures and plans that form the basis for commissioning and describes the R&R of each participant. It should be revised to reflect the current status in each phase of design, construction, and O&M. It is written by the CxA and shared with the owner, OR, A/E, and GC. The main contents consist of 1) commissioning work, 2) R&R, and 3) the commissioning schedule.
Cx spec.
This is a document that describes the terms and conditions of proceeding and the R&R related to commissioning. This document should be revised to reflect the construction conditions. It is written by the CxA and shared with the owner, OR, A/E, and GC. The main contents consist of 1) contract terms, 2) R&R, and 3) proceedings. The documents related to the Cx Spec are as follows (a simple sampling sketch follows this list): 1) sample test rating, 2) R&R/activity matrix, 3) Cx system selection matrix for each product, 4) cross-check items when applying an integrated system.
1) Sample test rating
ASHRAE proposes a random sampling technique for sample test ratings and a guide to selecting sampling rates in consideration of project characteristics. The quality-based sampling examples consider complexity, criticality, length, owner's input, CxA, construction checklist, construction speed, and the amount of equipment. Final commissioning process testing considers complexity, criticality, owner's input, and CxA. In addition, the CxA can adjust the sample test rating, and the rate can vary depending on the contract.
2) R&R matrix
The US EPA, GSA, and NEBB each provide guidelines and standards for the R&R/activity matrix, so one can be selected and used according to the characteristics of the project.
3) Cx system selection matrix for each product
The Cx system selection matrix for each product consists of HVAC, plumbing system, automatic temperature control system (ATC), building envelope, life safety systems, security, and specialties. [14]
4) Cross-check items when applying an integrated system
Cross-check items for integrated systems are adopted from the ASHRAE guideline. [15]
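Relating to item 1) above, the sketch below shows one simple way to draw a random sample of identical units for functional testing at a contract-specific rate; the equipment names and the 20% rate are invented for illustration.

```python
import random

def sample_for_testing(equipment_ids, rate, seed=0):
    """Randomly select a fraction of identical units for functional testing,
    in the spirit of the ASHRAE sampling guidance; the rate itself is a
    contract-specific choice that the CxA may adjust."""
    k = max(1, round(rate * len(equipment_ids)))
    return sorted(random.Random(seed).sample(equipment_ids, k))

vav_boxes = [f"VAV-{i:03d}" for i in range(1, 121)]
print(sample_for_testing(vav_boxes, rate=0.20))  # test 20% of 120 boxes
```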
Conclusion
The purpose of this paper is to propose a template that enables a large construction company to respond to commissioning. To support this, the commissioning process should be established for each of the 5 categories (ICx, Re-Cx, Retro-Cx, OCx, and MBCx) and each of the 5 phases (plan, design, construction, acceptance, and O&M). The commissioning template is composed of a key document set and a general document package. The key documents are the Cx Plan and Cx Spec, which explain the commissioning process and the R&R of team members, and the general document package is composed of documents that should be completed phase by phase, such as the Owner's project requirements (OPR), Basis of Design (BOD), Design review (DR), Sequence of Operation (SOO), Minutes of commissioning meetings (MOCxM), Construction inspection report (CIR), Daily commissioning report (DCR), Issues log, Progress log, Factory witness test (FWT) or Factory acceptance test (FAT), Vendor start-up, Pre-functional checklist (PFC), Functional test procedure (FTP), Training agenda, and O&M manual. Previous research shows that an owner can save 8-20% in operating costs through the commissioning process. This paper showed how to prepare a commissioning template, with documents corresponding to each phase and step, suitable for large domestic construction companies. In addition, it is expected that this will not only improve the commissioning capacity of domestic construction companies but also make a great contribution to the maintenance work of owners. | 2021-05-21T16:57:22.035Z | 2021-04-10T00:00:00.000 | {
"year": 2021,
"sha1": "e50e3b97fa5549a58263ba93f561853e101f6f5e",
"oa_license": "CCBY",
"oa_url": "https://turcomat.org/index.php/turkbilmat/article/download/2054/1780",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4509e8f64d5dfd3578e92d5a1c1055110c107f94",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
265178790 | pes2o/s2orc | v3-fos-license | Serum procalcitonin has no significance in the diagnosis of periprosthesis joint infection before total hip and knee replacement
Background Currently, there is no "gold standard" for the early diagnosis of PJI, and the diagnosis of periprosthetic joint infection (PJI) is a challenging clinical problem. Many serum markers have been used in the early diagnosis of PJI. The aim of this study was to validate the value of PCT in the diagnosis of PJI. Methods A retrospective review of 77 patients with revision arthroplasties from January 2013 to July 2020 was conducted. PJI was defined using the modified Musculoskeletal Infection Society (MSIS) criteria combined with follow-up results. Besides medical history, clinical and laboratory data were gathered. Preoperative blood was taken for serum PCT and other biomarker measurements. Receiver operating characteristic (ROC) curves were generated to evaluate the biomarkers' diagnostic performance and optimal cut-off values. Results Forty-one patients were identified as the PJI group (27 hips and 14 knees), while thirty-six patients were identified as the aseptic loosening (AL) group (33 hips and 3 knees). The AUCs for C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), platelets (PLT), fibrinogen (FIB), and procalcitonin (PCT) were 0.845 (95% CI 0.755–0.936, p < 0.001), 0.817 (95% CI 0.718–0.916, p < 0.001), 0.728 (95% CI 0.613–0.843, p < 0.001), 0.810 (95% CI 0.710–0.910, p < 0.001) and 0.504 (95% CI 0.373–0.635, p = 0.950), respectively. Higher area under the curve (AUC) values were obtained for the combinations of PCT and CRP (AUC = 0.870; 95% CI, 0.774–0.936), PCT and ESR (AUC = 0.817; 95% CI, 0.712–0.896), PCT and PLT (AUC = 0.731; 95% CI, 0.617–0.825), and PCT and FIB (AUC = 0.815; 95% CI, 0.710–0.894). Serum PCT showed a sensitivity of 19.51% and a specificity of 83.33% for diagnosing PJI. When the optimal cut-off value for PCT was set at 0.05 ng/ml, its positive and negative predictive values were 57.1% and 47.6%, respectively. Conclusion In conclusion, serum PCT appeared not to be a reliable biomarker for differentiating PJI from aseptic loosening before revision arthroplasty. However, PCT combined with other biomarkers further increases the diagnostic accuracy.
Introduction
Total joint replacement (TJA) is the most effective treatment for advanced arthritis, but periprosthetic joint infection (PJI) is a serious complication and a major reason for postoperative revision (1, 2). The incidence of PJI after arthroplasty is 0.7% (3). A study has shown that each patient suffering from PJI would pay at least $15,000-$30,000 (4). Therefore, early diagnosis of PJI and effective intervention are essential to improve the prognosis after total joint replacement. A large number of clinical studies have shown that early diagnosis not only allows the prosthesis to be preserved to a certain extent, but also achieves an infection control rate of up to 70% (5). Currently, there is no "gold standard" for the early diagnosis of PJI, which makes the diagnosis of PJI a challenging clinical problem and leads to delayed diagnosis and treatment. As a result, this catastrophic complication can seriously diminish patients' quality of life and increase the financial burden on families and societies (2).
Serum samples are readily available and are especially important for patients from whom synovial fluid samples cannot be obtained. Many serum markers have been used for the early diagnosis of PJI. Erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) are recommended as essential indicators for the diagnosis of PJI by the American Academy of Orthopaedic Surgeons (AAOS) and the Musculoskeletal Infection Society (MSIS) guidelines due to their superior sensitivity and specificity (6, 7). Zhang et al. first reported that serum platelet count (PLT) is a promising marker for the diagnosis of deep surgical site infection after open reduction and internal fixation of traumatic limb fractures (8). Klim et al. showed that fibrinogen (FIB) is a cost-efficient and practical marker for diagnosing PJI (9, 10). Procalcitonin (PCT) is commonly used for the diagnosis of systemic infection (11-13). However, its diagnostic value for PJI is controversial, and there is no universal threshold value (14-16). To further clarify its diagnostic value, this study evaluated the value of PCT in the diagnosis of PJI by comparing it with CRP, ESR, PLT, and FIB.
In this retrospective study, we sought to: (1) assess the performance of PCT in distinguishing chronic PJI from aseptic loosening (AL) by comparing it with other inflammation indicators; and (2) evaluate the value of PCT combined with CRP, ESR, PLT, or FIB for diagnosing PJI.
Study design
After approval by our hospital's institutional review board, a single-center retrospective cohort study was performed in compliance with the Helsinki Declaration. We recruited patients who underwent revision hip or knee arthroplasties from January 2013 to July 2020 at our institution to determine the diagnostic value of PCT for diagnosing PJI.
Inclusion and exclusion criteria
We identified patients with revision hip or knee arthroplasties using the International Classification of Diseases, Tenth Revision, and Clinical Modification procedure codes (17). A total of 289 patients with revision arthroplasties were originally included in our retrospective cohort study. The primary causes of joint revision and clinical symptoms before surgery were pain, systemic or local joint fever, joint swelling, or sinus formation. Firstly, patients without serum PCT measurements at revision arthroplasty were excluded. In order to diminish the possibility of bias associated with comorbidities, we excluded 12 patients with a history of tuberculosis (TB) (n = 3), bone tumors (n = 2), or inflammatory arthritis (n = 7). Patients who had multiple concurrent joint infections (n = 1) or poly exchange surgery (n = 2) were also excluded due to the intricate source of pathogens and undetermined duration of infection (18). Finally, 77 patients were included in the analysis and divided into two groups: 41 patients in the periprosthetic joint infection (PJI) group and 36 patients in the aseptic loosening (AL) group (Figure 1). Patients' age, gender, and other baseline data were compared between the two groups (Table 1).
Diagnostic criteria of infection and data extraction
The final diagnosis of PJI was based on the MSIS criteria (Table 2) (6, 7). Using patients' electronic medical records, we carefully extracted the following baseline data: demographic information, diagnoses, treatments, the involved joint, symptoms and signs, time from primary arthroplasty to the first reoperation (years), time from symptom onset to the first reoperation (months), laboratory results, culture results, comorbidities, and medication use.
Laboratory evaluations
The patients' fasting cubital venous blood samples were routinely obtained by nurses the day before revision surgery. The samples were tested immediately, within 1-2 h, by our hospital's laboratory for PLT and FIB levels. Nurses also took blood samples for serum PCT, ESR, and CRP evaluation at the same time.
In our hospital, at least three tissue culture specimens were collected and cultured for 3-7 days. More than one periprosthetic tissue sample was selected by the chief surgeon during the revision surgeries and sent for biopsy and immediate histological analysis. After that, vancomycin or a sensitive antibiotic was used to prevent or treat the infection for 2 weeks after the operation. In addition, rivaroxaban was used to prevent deep vein thrombosis of the lower limbs. The follow-up time was at least 1 year.
Statistical analyses
We analyzed clinical and laboratory values using basic descriptive statistics. For quantitative data, there were two situations. For normally distributed continuous data, which are shown as mean ± standard deviation (SD), we used the independent-samples t-test to compare continuous variables between the PJI and AL groups. For non-normally distributed continuous data, which are shown as quartiles, we used the Mann-Whitney U-test to compare continuous variables between groups. For qualitative data, frequencies and constituent ratios were evaluated with the Pearson chi-square test or the continuity-corrected chi-square test between the two groups. P < 0.05 was considered statistically significant.
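As a hedged illustration only (the study used SPSS, not Python), the group-comparison logic described above can be sketched as follows; the function name and the choice of Shapiro-Wilk as the normality test are assumptions, while the gender counts in the chi-square example are taken from the Results section:

    import numpy as np
    from scipy import stats

    def compare_marker(pji, al, alpha=0.05):
        # Compare one continuous marker between the PJI and AL groups,
        # choosing the test according to normality (Shapiro-Wilk is an
        # assumed choice; the paper does not name its normality test).
        pji, al = np.asarray(pji, float), np.asarray(al, float)
        normal = (stats.shapiro(pji).pvalue > alpha
                  and stats.shapiro(al).pvalue > alpha)
        if normal:
            # independent-samples t-test; report mean +/- SD
            return stats.ttest_ind(pji, al).pvalue
        # Mann-Whitney U-test; report quartiles
        return stats.mannwhitneyu(pji, al, alternative="two-sided").pvalue

    # Qualitative data: Pearson chi-square on a contingency table,
    # e.g. gender counts [men, women] in the PJI and AL groups
    chi2, p, dof, expected = stats.chi2_contingency([[17, 24], [11, 25]])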
Receiver operating characteristic (ROC) curves were plotted to evaluate the diagnostic performance of each serological marker. The areas under the curve (AUC) and 95% confidence intervals (CI) were calculated via ROC analysis. AUC values were classified as excellent (0.900-1.000), good (0.800-0.899), fair (0.700-0.799), poor (0.600-0.699), or non-effective (0.500-0.599) (19). Youden's index was used to determine the optimal predictive cut-off for each marker. All analyses were conducted with SPSS software version 23 and MedCalc software version 15.0.
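A minimal Python sketch of the ROC/Youden procedure (scikit-learn in place of SPSS/MedCalc; the marker array is a synthetic placeholder, not patient data):

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # y: 1 = PJI, 0 = aseptic loosening; marker: e.g. preoperative CRP
    y = np.array([1] * 41 + [0] * 36)                       # group sizes from this study
    marker = np.random.default_rng(0).normal(y * 2.0, 1.0)  # placeholder values

    fpr, tpr, thresholds = roc_curve(y, marker)
    auc = roc_auc_score(y, marker)

    # Youden's index J = sensitivity + specificity - 1 = TPR - FPR;
    # the optimal cut-off maximizes J
    j = tpr - fpr
    best = int(np.argmax(j))
    cutoff, sens, spec = thresholds[best], tpr[best], 1.0 - fpr[best]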
Results
A total of 77 patients were included in the final analysis. According to the MSIS criteria, 41 patients were identified as the periprosthetic joint infection (PJI) group (27 hips and 14 knees), while 36 patients were identified as the aseptic loosening (AL) group (33 hips and 3 knees). The mean age in the PJI group was 64.5 ± 10.0 years; 17 were men and 24 were women. The mean age in the AL group was 65.9 ± 9.8 years; 11 were men and 25 were women. The two cohorts did not differ statistically in age (p = 0.529) or gender (p = 0.321). Likewise, there were no statistically significant differences between the groups in diabetes mellitus (p = 0.094) or hypertension (p = 0.683). However, statistically significant differences were found in joint type (p = 0.014), time from primary arthroplasty to the first reoperation (p = 0.001), and time from symptom onset to the first reoperation (p = 0.001) between the two groups. The characteristics of the recruited patients are depicted in Table 1.
We evaluated the tested markers (CRP, ESR, PLT, FIB, and PCT) for all included patients. The PJI group had significantly higher values for four of the markers (CRP, ESR, PLT, and FIB) compared with the AL group (all P < 0.05).
Unfortunately, there was no significant difference in PCT between the two groups (P = 0.747). The details, together with the normal ranges of the tested markers, are shown in Table 3. As shown in Table 4, Staphylococcus aureus and Streptococcus agalactiae were the two most common pathogens cultured in the PJI group. We classified these pathogens into two groups (Table 5); there were no significant differences in any of the tested markers (CRP, ESR, PLT, FIB, and PCT) between the two pathogen groups (P > 0.05). That is, the results of the tested markers were independent of the pathogen species. All tested markers (CRP, ESR, PLT, FIB, and PCT) were evaluated and plotted as ROC curves (Figure 2). The AUCs for CRP, ESR, PLT, FIB, and PCT were 0.845 (95% CI 0.755-0.936, p < 0.001), 0.817 (95% CI 0.718-0.916, p < 0.001), 0.728 (95% CI 0.613-0.843, p < 0.001), 0.810 (95% CI 0.710-0.910, p < 0.001) and 0.504 (95% CI 0.373-0.635, p = 0.950), respectively (Table 6). The ROC curves showed that CRP had the highest AUC, followed by ESR, FIB, PLT, and PCT. The AUCs of CRP, ESR, and FIB ranged from 0.800 to 0.899, demonstrating good diagnostic value for PJI. The AUC of PLT was between 0.7 and 0.8, indicating fair diagnostic value for PJI. In contrast, PCT had the lowest AUC, 0.504 (below 0.6), indicating an inferior diagnostic value for PJI. To improve diagnostic accuracy, we further analyzed the diagnostic value of PCT combined with the other markers. Higher AUC values, indicating better diagnostic accuracy, were obtained for the combinations of PCT and CRP (AUC = 0.870; 95% CI 0.774-0.936), PCT and ESR (AUC = 0.817; 95% CI 0.712-0.896), PCT and PLT (AUC = 0.731; 95% CI 0.617-0.825), and PCT and FIB (AUC = 0.815; 95% CI 0.710-0.894). Among them, the combination of PCT with CRP had the highest AUC. In conclusion, combining serum PCT with one of the other markers can improve their diagnostic accuracies (Table 6).
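The text does not state how the markers were combined; one common approach, assumed here, is to use the predicted probability of a logistic regression on both markers as a composite score and to compute the AUC of that score:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def combined_auc(y, pct, other_marker):
        # AUC of a composite score built from PCT plus one other marker;
        # the logistic-regression combination is an assumption, not
        # necessarily the method used in this study.
        X = np.column_stack([pct, other_marker])
        score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
        return roc_auc_score(y, score)

    # e.g. combined_auc(y, pct, crp) would correspond to the PCT + CRP
    # combination reported above with AUC = 0.870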
Discussion
PJI is a catastrophic complication after total joint arthroplasty (20). Currently, there is an international consensus on the diagnosis of PJI, but no "gold standard" (21). Differentiating between PJI and aseptic loosening (AL) is challenging in orthopedic surgery because the treatment of PJI is entirely different from the treatment of aseptic loosening (22). ESR and CRP are the initial markers recommended by current guidelines due to their low false-negative rates and high sensitivity (23-25). CRP is a protein produced by the liver; during acute inflammation its level rises in response to macrophage activity (26). The erythrocyte sedimentation rate (ESR) refers to the rate at which red blood cells settle under defined conditions. Although CRP and ESR have shown utility for diagnosing PJI after primary replacement, their efficacy is limited (27,28). Research by Paziuk et al. showed that initial platelet (PLT) counts could be used to distinguish between PJI and AL (29). Related studies have reported that fibrinogen (FIB), with high sensitivity and specificity, might become a novel biomarker for diagnosing PJI (19,30).
PCT, which is produced by thyroid C cells, consists of 116 amino acids. During infection, serum PCT levels rise with bacterial endotoxin (31). It is therefore helpful for diagnosing systemic infection (32). By contrast, PCT remains an undefined biomarker for diagnosing local infections such as PJI (33). After reviewing reports on the diagnosis of PJI, we found several studies that have illustrated the role of PCT in diagnosing patients with PJI; these studies showed that PCT is a sensitive and specific marker of bacterial infection (37-39). We wanted to investigate whether PCT is a superior marker. Therefore, we performed a sensitivity analysis to further validate the diagnostic performance of PCT. Ultimately, we found different results, which showed the limited efficacy of PCT for diagnosing PJI before revision arthroplasty. There was no significant difference in PCT between the PJI group and the AL group (P = 0.747). The AUC for PCT was 0.504 (95% CI 0.373-0.635, p = 0.950). Serum PCT showed a sensitivity of 19.51% and a specificity of 83.33% for diagnosing PJI. When the optimal cut-off value for PCT was set at 0.05 ng/ml, its PPV and NPV were 57.1% and 47.6%, respectively. Our results show that PCT is a specific, but less sensitive, biomarker for diagnosing PJI. The AUCs of the other biomarkers increased significantly when they were combined with PCT. In conclusion, the combination of serum PCT with one of the other markers can improve their diagnostic accuracies.
Several factors may explain the limited efficacy of PCT for diagnosing PJI. Firstly, PCT is not necessarily released into the blood if patients suffering from PJI do not show bacteremia (40). It is conceivable that the grade and virulence of the majority of PJIs are too low to trigger PCT release; it has been pointed out that the high rate of false negatives is associated with local low-virulence organisms (41). Secondly, in healthy adults, even tooth brushing can cause transient bacteremia leading to low-grade PCT release (42-44). Thirdly, since the penetration of PCT into the blood differs between patients, the cut-off value set for PCT may affect the study results; the cut-off value used here (0.05 ng/ml) may not be optimal for our cohort. In addition, we measured only serum PCT and did not measure synovial fluid PCT. Some limitations should be considered in our study. Firstly, the modified MSIS criteria used in this study may introduce bias when assessing diagnostic accuracy. Secondly, this was a retrospective cohort study, so its inherent selection bias may affect the results. Thirdly, only 77 patients were recruited, and the exclusion criteria further diminished the sample size; thus, a multi-center study with a larger sample size is needed for further analyses. Finally, our study examined only serum PCT, not synovial fluid PCT.
Conclusions
We evaluated multiple biomarkers for their diagnostic performance. In conclusion, this study demonstrates that serum PCT has limited efficacy in differentiating PJI from aseptic loosening before revision arthroplasty. However, PCT combined with other biomarkers further increases diagnostic accuracy. Further multi-center studies with large sample sizes are needed to improve its diagnostic rate and validate our results.
FIGURE 1 Flow diagram of patients showing the study design.
FIGURE 2 ROC curve of serum markers in diagnosing PJI.
TABLE 1 Demographic data for the study population.
TABLE 2 MSIS criteria for the diagnosis of PJI.
Major criteria: 1) Two positive periprosthetic cultures with phenotypically identical organisms; 2) A sinus tract communicating with the joint.
Minor criteria: 1) Elevated serum C-reactive protein (CRP > 10 mg/L) AND erythrocyte sedimentation rate (ESR > 30 mm/h); 2) Elevated synovial fluid white blood cell count (WBC > 3,000 cells/ml) OR change on leukocyte esterase test strip (+ or ++); 3) Elevated synovial fluid polymorphonuclear neutrophil percentage (PMN% > 80%); 4) Positive histological analysis of periprosthetic tissue [>5 neutrophils per high-power field in 5 high-power fields (×400)]; 5) A single positive culture.
PJI is present when one of the major criteria exists or three out of five minor criteria exist.
TABLE 3 The tested markers in the two groups.
TABLE 4 Culture organisms in the PJI group.
TABLE 5 The inflammatory and PCT markers of the 2 most common pathogens in PJI.
TABLE 6 Area under ROC curve. | 2023-11-15T17:52:23.139Z | 2023-11-06T00:00:00.000 | {
"year": 2023,
"sha1": "468fb02ce2df10935e8c6a8530ecfa06c5b97d50",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fsurg.2023.1216103/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "67b4ae3cf88c1dc3aa7d645e61c9623f21405e6c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
250688772 | pes2o/s2orc | v3-fos-license | Structural evolution in the isothermal crystallization process of the molten nylon 10/10 traced by time-resolved infrared spectral measurements and synchrotron SAXS/WAXD measurements
The structural evolution in the isothermal crystallization process of nylon 10/10 from the melt has been clarified concretely on the basis of time-resolved infrared spectral measurements as well as synchrotron wide-angle and small-angle X-ray scattering measurements. Immediately after the temperature jump from the melt to the crystallization point, isolated domains consisting of hydrogen-bonded random coils were formed in the melt, as revealed by the Guinier plot of the SAXS data and by the infrared spectral data. With the passage of time these domains approached each other with stronger correlation, as analyzed with the Debye-Bueche equation. These domains finally transformed into stacked crystalline lamellae, in which the conformationally-regularized methylene segments of the CO sides were connected to each other by stronger intermolecular hydrogen bonds to form the crystal lattice.
Introduction
Aliphatic nylon consists of amide groups, which form strong intermolecular hydrogen bonds, and methylene segments, which give flexibility to the skeletal chain. A sensitive balance between these two factors affects the hierarchical structure and the physical properties of nylon. In other words, the formation process of the hierarchical structure needs to be clarified in order to control the physical properties of this polymer with these two important factors taken into consideration. Despite the long history of nylon, however, almost no papers have described the structural changes during the crystallization process from the melt. In the present study we have successfully and concretely revealed the structural evolution of nylon 10/10, one of the most typical aliphatic nylons, in the isothermal crystallization process from the melt by performing time-resolved measurements of infrared spectra (FTIR) and synchrotron wide-angle X-ray diffraction (WAXD)/small-angle X-ray scattering (SAXS) patterns.
Experimental
The isothermal crystallization experiments were performed by changing the temperature of the sample rapidly (ca. 1000 °C/min) from above the melting point to the predetermined crystallization point. During this temperature jump, time-resolved measurements of FTIR, WAXD and SAXS were carried out at a constant time interval (every 1-2 s). The simultaneous measurements of WAXD and SAXS were performed using the synchrotron X-ray source at beamline BL40B2 of SPring-8, Japan.
The samples used in the experiments were nylon 10/10, -[NH(CH2)10NHCO(CH2)8CO]-, and nylon 10/10-d16, -[NH(CH2)10NHCO(CD2)8CO]-. The latter, partially-deuterated sample was especially useful in the infrared spectroscopic experiments since the infrared bands of the methylene segments on the CO side could be distinguished from those on the NH side. The transition behaviours of these two samples were essentially the same. Figure 1 shows the time dependence of the infrared spectra measured in the isothermal crystallization at 176 °C. Figure 2 shows the time dependence of the vibrational frequency and intensity of the infrared bands of the NH stretching mode (3300-3440 cm-1) and the CD2 group mode (1065 cm-1). In the melt before the temperature jump, the NH stretching band of the free hydrogen bonds [ν(NH, free)] was detected clearly at 3430 cm-1. At the same time, the ν(NH) band corresponding to the hydrogen-bonded amide groups (3300 cm-1) was also detected, though it was rather broad in the melt. In this way, intermolecular hydrogen bonds still exist in the melt. This is a key point for understanding the crystallization behaviour of nylon 10/10, as will be discussed in a later section. These hydrogen bonds became stronger and their relative amount increased drastically immediately after the temperature jump. On the way to crystallization, in the time region from 10 to 60 s, the hydrogen bonds were stabilized for a while (a plateau region observed for the NH stretching bands in Figure 2). After that they started to become stronger again, just when the methylene segments on the CO side were conformationally regularized; the methylene segments on the NH side were still in the disordered state [1]. Figure 3 shows the time dependence of the WAXD and SAXS profiles in the isothermal crystallization process of nylon 10/10 at 176 °C. For the quantitative analysis of the SAXS data in the time region of 0-60 s, we assumed that (i) in the earliest stage the locally-formed weak hydrogen bonds in the melt might cause the generation of some domains of relatively high density, these domains being isolated from each other, and that (ii) as time passes, these domains may transform into stacked lamellae with relatively strong correlation. We then analyzed the SAXS data on the basis of the Guinier theory (for isolated domains; scattering vector q < 0.007 Å-1) and the Debye-Bueche theory (for correlated domains; q > 0.007 Å-1) as well as the correlation function for the stacked lamellar structure. As shown in Figure 4(a), the Guinier plot [ln(I) vs q2] gave the size of the domains (radius of gyration Rg) consisting of relatively strongly hydrogen-bonded coils [2]. As shown in Figure 4(b), the Debye-Bueche plot [I-1/2 vs q2] gave the averaged distance (correlation length ξ) between these domains [3]. Figure 5 shows the time dependence of the thus-estimated Rg and ξ values. Figure 6 illustrates the relation between Rg, ξ and the long period given in Figure 5. As mentioned above, Rg is the size of a domain with almost no correlation with the neighbouring domains. Once crystallization starts, these domains approach each other more closely due to the thermal contraction of the whole system, and at the same time the inner structure becomes more regular due to the increasing strength of the hydrogen bonds, as revealed by the infrared spectral data.
Once the correlation length ξ, a measure of the distance between correlated neighbouring domains, becomes close to Rg [see Figure 6(b)], the correlation between these regular parts starts to increase. The correlation becomes stronger, and ξ finally approaches the long period of the stacked lamellae. (It should be noted that Rg was almost constant during this process, as long as the Guinier plot was assumed to be applicable, as an approximation, even for such a weakly correlated domain structure.) As time passed beyond 60 s, the ξ value started to deviate from the linear straight line and transformed into the long period of the stacked lamellae, about 240 Å. The SAXS data in this time region were analyzed by calculating the correlation function for the stacked lamellar structure [4], and the invariant Q was estimated as seen in Figure 5.
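As an illustration of the SAXS analysis described above, the Guinier and Debye-Bueche estimates can be written as linear fits on the transformed intensities; this is a schematic re-implementation in Python, not the authors' code, with the q-ranges taken from the text:

    import numpy as np

    def guinier_rg(q, I, q_max=0.007):
        # Guinier plot ln(I) vs q^2 for isolated domains (q in 1/Angstrom):
        # ln I = ln I0 - (Rg^2 / 3) q^2, so Rg = sqrt(-3 * slope)
        m = q < q_max
        slope, _ = np.polyfit(q[m] ** 2, np.log(I[m]), 1)
        return np.sqrt(-3.0 * slope)

    def debye_bueche_xi(q, I, q_min=0.007):
        # Debye-Bueche form I(q) = I0 / (1 + xi^2 q^2)^2 gives the linear
        # plot I^(-1/2) = I0^(-1/2) (1 + xi^2 q^2), so xi = sqrt(slope/intercept)
        m = q > q_min
        slope, intercept = np.polyfit(q[m] ** 2, I[m] ** -0.5, 1)
        return np.sqrt(slope / intercept)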
Structural Evolution Process of Nylon 10/10
From Figure 5, we may draw the structural evolution process of nylon 10/10 in the isothermal crystallization from the melt as illustrated in Figure 7. At the earliest stage of crystallization from the melt, the intermolecular hydrogen bonds become steeply stronger and the population of amide groups with free hydrogen bonds decreases. These hydrogen bonds induce the local creation of isolated domains of relatively high density. As time passes further, these domains approach each other because the sample contracts gradually during solidification, a period in which the intermolecular hydrogen bonds are stabilized temporarily (the 10-60 s region). After that, the domains gather together more closely to form the stacked lamellar structure, in which the conformationally regularized methylene segments (strictly speaking, the methylene segments on the CO side) are packed in parallel and linked together by strong intermolecular hydrogen bonds to form the crystal lattice. (According to the infrared spectra measured for the partially-deuterated nylon 10/10 sample, the methylene segments on the NH side are still in the conformationally-disordered state even when the methylene segments on the CO side are already regularized. The reason for this difference in conformational regularization between the two kinds of methylene segments is not yet resolved [1].) This structural evolution process is appreciably different from that detected for a polyolefin such as polyethylene.
In the latter case, the random coils in the melt start to regularize after the temperature jump, and trans-rich zigzag-like chains are formed with some conformational disorder. These chains gather together to form a pseudo-hexagonal lattice, which transforms gradually into the orthorhombic cell consisting of regular all-trans zigzag chain stems [5]. In this case the molecular chains are connected by weak van der Waals interactions. In the aliphatic nylon case, the methylene segments are more or less constrained by the intermolecular hydrogen bonds even in the molten state. Therefore, some domains of comparatively high density exist from the starting point of crystallization, making it easier to form the aggregated state of conformationally-regularized stems linked side by side by strong intermolecular hydrogen bonds.
Conclusions
In the present report the structural evolution process of nylon 10/10 in isothermal crystallization has been revealed by measuring the time dependences of the infrared spectra and the synchrotron WAXD/SAXS data. Hydrogen bonds between the amide groups exist even in the molten state, although they are rather weak, and they play an important role in the formation of domains of higher density. After the passage of some time, these domains gather together and transform into the stacked lamellar structure with regularly packed zigzag methylene segments combined by strong intermolecular hydrogen bonds. This crystallization behavior is in sharp contrast to the case of polyethylene, which lacks strong intermolecular hydrogen bonds. The difference in processability between these two kinds of polymer might reflect this difference in the crystallization process from the molten state. | 2022-06-28T02:16:21.704Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "319faf3312044237d256a37e5e037e65315404eb",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/184/1/012002",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "319faf3312044237d256a37e5e037e65315404eb",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
49362551 | pes2o/s2orc | v3-fos-license | Assessment of Fv/Fm absorbed by wheat canopies employing in-situ hyperspectral vegetation indexes
The chlorophyll fluorescence parameter Fv/Fm, an important index for evaluating crop yields and biomass, is key to guiding crop management. However, the shortage of good hyperspectral data can hinder the accurate assessment of wheat Fv/Fm. In this research, the relationships between wheat canopy Fv/Fm and in-situ hyperspectral vegetation indexes were explored to develop a strategy for accurate Fv/Fm assessment. Fv/Fm had the highest coefficients with the normalized pigments chlorophyll ratio index (NPCI) and the medium terrestrial chlorophyll index (MTCI). Both NPCI and MTCI increased with increasing Fv/Fm. However, the NPCI value ceased to increase as Fv/Fm reached 0.61, and MTCI showed a descending trend when the Fv/Fm value was higher than 0.61. A piecewise Fv/Fm assessment model with NPCI and MTCI as regression variables was established for Fv/Fm values ≤0.61 and >0.61, respectively. The model increased the assessment accuracy by up to 16% as compared with the Fv/Fm assessment model based on a single vegetation index. Our study indicated that it is feasible to apply NPCI and MTCI to assess wheat Fv/Fm and to establish a piecewise Fv/Fm assessment model that can overcome the limitations of vegetation index saturation at high Fv/Fm values.
Photosynthesis is the most important biological process on earth 1 . It is the unique approach by which plants gain energy from the environment. There are three basic effects when light strikes a leaf surface: absorption, reflection and transmission. The major part of the light is absorbed by chlorophyll and used for photosynthesis, and only a small proportion is de-excited via emission at a longer wavelength as fluorescence, or dissipated as heat 2 . Chlorophyll fluorescence emission occurs in the red and far-red regions of the plant spectrum (650-800 nm) 3 . Changes in the chlorophyll fluorescence parameters of plant leaves can, to a certain extent, reflect changes in environmental factors and their effects on plant photosynthetic physiology 4 . Among the many chlorophyll fluorescence parameters, Fv/Fm is used to characterize the light energy conversion efficiency of the PS II reaction center, and changes in its value are of special significance. However, conventional methods of assessing Fv/Fm from field observations, which involve site-specific complicated parameterizations and calculations, are difficult to apply over large agricultural areas 5 . These shortcomings can be overcome through the complementary use of hyperspectral measurements of crops, which have several advantages: they are non-destructive and uniform, can be performed rapidly, and require no complicated parameterizations.
Assessment of Fv/Fm from vegetation indexes (VIs) derived from hyperspectral data, especially remote sensing data, has been reported in several studies [6-10]. For instance, some researchers compared the performance of VIs for assessing the Fv/Fm of legume crops and concluded that, of the nine VIs closely related to Fv/Fm, the modified soil adjusted vegetation index (MSAVI) performed best 11 . If ground cover was significant, the impact of the background was significantly reduced, and Fv/Fm could be better estimated using the normalized difference vegetation index (NDVI). The re-normalized difference vegetation index (RDVI) showed an approximately linear relation to Fv/Fm regardless of ground cover. Hyperspectral remote sensing is an important technique for real-time monitoring of crop growth status based on its superior performance in acquiring vegetation canopy information rapidly and non-destructively. However, the regression analysis was based on only five points, making it statistically uncertain. Other scientists used radiative transfer models to estimate Fv/Fm and found that a linear model based on NDVI produced the best estimates 12 . There is a need for an investigation of the performance of VIs in different vegetation ecosystems 15 .
Models based on linear Fv/Fm-NDVI relationships suffer from a major flaw: NDVI saturates at high leaf area index values 16 , and thus a linear model tends to be insensitive to Fv/Fm changes in such cases 17 . Another issue that needs to be recognized is the scarcity of data for boreal ecosystems. The majority of the above-cited studies presented empirical evidence suggesting a functional relationship between Fv/Fm and hyperspectral VIs, and these were mostly focused on forests, grasses (prairies), and some crop types such as rice, wheat and cotton 18 . There are only a few reports on quantitatively estimating Fv/Fm for wheat canopies using VIs from remote sensing data 19 . Besides, VIs-Fv/Fm relationships differ from one ecosystem to another due to the influences of vegetation type, strong background signals, canopy structure and spatial heterogeneity 20,21 . Further, existing remote sensing-based Fv/Fm products lack adequate ground validation, which is critical for establishing the uncertainty and accuracy of such products so that they can be used to guide crop production practices 22,23 . This study is motivated by the above-mentioned issues and focuses on exhaustive statistical analyses of Fv/Fm-VIs relationships for wheat canopies, using in-situ hyperspectral data collected from a series of field experiments, and aims at determining a practical methodology for estimating the Fv/Fm of wheat canopies.
Results
Changes in wheat canopy Fv/Fm with growth stage. Fv/Fm increased progressively as the wheat crops developed through the growth stages (Fig. 1). An initial significant increase in Fv/Fm, by about 23.1%, was observed as the crop developed from the turning green stage to the jointing stage. However, the further changes in Fv/Fm from the booting stage to the milk stage were not significant (2.4%, 0.12%, 1.31% and −2.59%, respectively). By the blooming stage, Fv/Fm had increased to its maximum value of 0.68. From the blooming stage to the milk stage, Fv/Fm tended to level off, i.e., it saturated.
VIs-Fv/Fm relationship. Statistically significant correlations between Fv/Fm and VIs were observed for 51 of the 56 VIs considered (Table 1), and these were both positive and negative. Positive correlations between VIs and Fv/Fm were generally stronger than the negative ones. Fv/Fm was most strongly correlated with NPCI, MTCI and NDVI [900, 680] (correlation coefficients r of 0.891, 0.886 and 0.879, respectively). Thus, NPCI, MTCI and NDVI [900, 680] were identified as three common VIs relatively well correlated with wheat canopy Fv/Fm, and these were the most probable VIs of choice for estimating Fv/Fm.
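The three indices can be computed directly from canopy reflectance; the band definitions in the sketch below follow the common literature forms (NPCI from the 680 and 430 nm bands, MTCI from red-edge bands near 750/710/680 nm) and may differ slightly from the paper's Table 3, so treat them as assumptions:

    import numpy as np
    from scipy.stats import pearsonr

    def band(refl, wl, target_nm):
        # reflectance at the wavelength closest to target_nm
        return refl[:, np.abs(wl - target_nm).argmin()]

    def candidate_vis(refl, wl):
        # refl: (n_samples, n_bands) smoothed canopy reflectance; wl: wavelengths (nm)
        r430, r680, r710, r750, r900 = (band(refl, wl, t)
                                        for t in (430, 680, 710, 750, 900))
        npci = (r680 - r430) / (r680 + r430)
        mtci = (r750 - r710) / (r710 - r680)
        ndvi_900_680 = (r900 - r680) / (r900 + r680)
        return {"NPCI": npci, "MTCI": mtci, "NDVI[900,680]": ndvi_900_680}

    # correlation of each candidate VI with the measured Fv/Fm values
    # for name, vi in candidate_vis(refl, wl).items():
    #     r, p = pearsonr(vi, fvfm)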
Establishing the Fv/Fm assessment model based on VIs
A total of 10 VIs were considered for modeling Fv/Fm based on a threshold on the VI-Fv/Fm correlation (i.e. r > 0.82 in Table 1). These non-linear Fv/Fm assessment models were best represented as exponential functions and were evaluated using their predictive (R2) and error (RRMSE) statistics (Table 2). Among them, Fv/Fm had the closest exponential relation with NPCI, followed by MTCI and NDVI [900, 680]; the models based on NPCI, MTCI and NDVI [900, 680] were capable of estimating Fv/Fm with R2 of 0.874, 0.859 and 0.834, RRMSE of 0.109, 0.116 and 0.126, and assessment accuracies of 89.1%, 88.4% and 87.4%, respectively. Furthermore, according to comparisons of R2, RRMSE and assessment accuracy, it was more suitable to assess wheat canopy Fv/Fm by NPCI and MTCI than by NDVI [900, 680]. Saturation analysis of VIs. All three VIs in Fig. 2 tended to saturate at high Fv/Fm values: NPCI ceased to increase once Fv/Fm reached 0.61, whereas MTCI showed a descending trend when Fv/Fm was higher than 0.61. Based on the aforementioned results, a piecewise hyperspectral assessment model of Fv/Fm was built according to the range of the Fv/Fm value in Fig. 3. Namely, if Fv/Fm ≤ 0.61, NPCI should be used to assess Fv/Fm, with the assessment model y = 1.0616x − 0.076, R2 = 0.929 (p < 0.01); if Fv/Fm > 0.61, MTCI should be used to assess Fv/Fm, with the assessment model y = 5.5259e−3.529x, R2 = 0.835 (p < 0.01).
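A direct transcription of this piecewise model into code, using the fitted coefficients quoted above; because Fv/Fm is unknown at prediction time, the NPCI model is evaluated first and the MTCI model is substituted where the preliminary estimate exceeds 0.61 (this switching rule is an implementation choice, not spelled out in the text):

    import numpy as np

    def fvfm_piecewise(npci, mtci):
        # Fv/Fm = 1.0616*NPCI - 0.076       if Fv/Fm <= 0.61
        # Fv/Fm = 5.5259*exp(-3.529*MTCI)   if Fv/Fm >  0.61
        npci, mtci = np.asarray(npci, float), np.asarray(mtci, float)
        est = 1.0616 * npci - 0.076
        return np.where(est > 0.61, 5.5259 * np.exp(-3.529 * mtci), est)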
Evaluation of the VIs-based Fv/Fm model. A total of 60 samples observed in the 2017 experiments were used to test the hyperspectral VIs-based assessment model of Fv/Fm. The estimated and measured Fv/Fm values almost coincided with the 1:1 relation line shown in Fig. 4. As shown in Table 3, the assessment accuracy values of the piecewise Fv/Fm model in the different ranges of Fv/Fm increased by 11.3%, 13.9% and 16.4%, respectively. In conclusion, the piecewise model based on NPCI and MTCI, used to assess Fv/Fm, can not only improve the assessment accuracy but also solve the saturation problems that occurred with NPCI and NDVI [900, 680].
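Equations (3) and (4) do not survive in this text; the definitions assumed below (RMSE normalized by the measured mean, accuracy = (1 − RRMSE) × 100%) reproduce the reported pairs exactly, e.g. RRMSE 0.109 with accuracy 89.1%:

    import numpy as np

    def rrmse(y_meas, y_pred):
        # assumed form: RRMSE = sqrt(mean((y - yhat)^2)) / mean(y)
        y_meas, y_pred = np.asarray(y_meas, float), np.asarray(y_pred, float)
        return np.sqrt(np.mean((y_meas - y_pred) ** 2)) / y_meas.mean()

    def assessment_accuracy(y_meas, y_pred):
        # assumed form: accuracy (%) = (1 - RRMSE) * 100
        return (1.0 - rrmse(y_meas, y_pred)) * 100.0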
Discussion
Fv/Fm is primarily controlled by ground cover and leaf area 24 . Before the jointing stage, Fv/Fm increased significantly (Fig. 1), reflecting the strong absorption of incoming light as the wheat crops grew vigorously and added leaf area, driven by nitrogen fertilization. This was followed by a lower rate of crop growth (and leaf area expansion), which was captured by the lower rate of Fv/Fm increase. Although this study lacked Fv/Fm data after the milk stage, it can still be concluded from agronomic principles of wheat and its photosynthetic physiology that, as the growing season progresses, the leaves turn yellow and gradually senesce, and Fv/Fm declines. By the full-ripe stage, Fv/Fm is close to 0, because the leaves have lost their green color, withered and died, so that they are unable to absorb light energy and the accumulation of dry matter has stopped 25 . Significant efforts are presently focusing on the use of VIs in general, and NDVI in particular, for estimating vegetation canopy Fv/Fm. Furthermore, many studies have indicated that VIs are better correlated with Fv/Fm than the reflectance in single wavebands 26,27 , which can be plausibly explained by the fact that VIs minimize the influence of atmospheric scattering and soil background and enhance the information of the sensitive wavebands 28 . Similarly, this study found Fv/Fm to be strongly correlated with the majority of VIs (49 out of 56), with NPCI, MTCI and NDVI [900, 680] being the best performing VIs. This result helps provide an important technique for establishing well-structured wheat photosynthetic canopies, improving solar energy use efficiency and implementing cultivation control.
Compared with the previous studies using NDVI, the models based on NPCI and MTCI for estimating Fv/Fm gave lower RRMSE and higher assessment accuracy than the NDVI-based models proposed in several studies. Future research should focus on evaluating the performance of the proposed model on wheat crops grown under a variety of conditions and different wheat varieties, as well as on other crop types. This will help refine the model as a useful tool for informing crop management practices. Efforts should also be made to test this model with data from different sources: field-based spectral measurements as well as current and future satellite data.
Conclusion
VIs like NDVI are often plagued by saturation in high-biomass areas, which is a major disadvantage for VIs-Fv/Fm models. We have addressed this issue by employing the differences in the sensitivity of different VIs to Fv/Fm: Fv/Fm had the highest coefficients with NPCI and MTCI. Both NPCI and MTCI increased with increasing Fv/Fm. However, the NPCI value ceased to increase as Fv/Fm reached 0.61, while MTCI had a descending trend when the Fv/Fm value was higher than 0.61. A piecewise Fv/Fm assessment model with NPCI and MTCI as regression variables was established for Fv/Fm values ≤0.61 and >0.61, respectively. The model increased the assessment accuracy by up to 16% as compared with the Fv/Fm assessment model based on a single vegetation index. Our study indicated that it is feasible to apply NPCI and MTCI to assess wheat Fv/Fm and to establish a piecewise Fv/Fm assessment model that can overcome the limitations of vegetation index saturation at high Fv/Fm values.
Materials and Methods
Experimental site. The field experiments were conducted at a site located at 32°26′N. The former crop in the field was rice. The soil is a yellow-brown soil (Alfisols in U.S. taxonomy), containing 2.23 g kg−1 organic matter, 121.3 mg kg−1 available nitrogen, 25.9 mg kg−1 available phosphorus and 83.7 mg kg−1 available potassium in the 0-30 cm soil layer. Canopy spectral parameters were recorded alongside quasi-simultaneous measurements of Fv/Fm on the growing wheat canopies. In order to highlight variations in wheat growth due to changes in biochemical composition, three different levels of nitrogen fertilization as urea were implemented: no nitrogen fertilization, adequate nitrogen fertilization (450 kg ha−1) and heavy nitrogen fertilization (900 kg ha−1). There were three replicates for each nitrogen level. The plot size was 4 m × 4 m. Local standard wheat cropping management practices pertaining to water, pests, diseases and weeds were followed. Training data consisted of 95 and 87 samples in 2015 and 2016, respectively, and test data consisted of 60 samples in 2017.
Canopy hyperspectral reflectance data. In 2015, six spectral measurements were carried out at the wheat turning green stage (March 7), jointing stage (March 20), booting stage (April 9), blooming stage (April 25), 15 days after blooming (May 9), and milking stage (May 18). All canopy spectral determinations were taken at a vertical height of 1.6 m above the canopy under cloudless or near-cloudless conditions between 11:00 and 14:00, using an ASD FieldSpec Pro spectrometer (Analytical Spectral Devices, USA) fitted with 25° field-of-view fiber optics, operating in the 350-2500 nm spectral region with a sampling interval of 1.4 nm between 350 nm and 1050 nm and 2 nm between 1050 nm and 2500 nm, and with a spectral resolution of 3 nm at 700 nm and 10 nm at 1400 nm. Representative, uniformly growing, pest-free plants were selected, with the sensor probe pointed downward during measurement. A 40 cm × 40 cm BaSO4 calibration panel was used for the calculation of hyperspectral reflectance. Vegetation and panel radiance measurements were taken by averaging 20 scans at optimized integration time, with a dark current correction at every spectral determination.
In 2016, four spectral measurements with 87 test samples were carried out, starting from the wheat turning green stage.
Spectral smoothing. Spectral smoothing was performed in order to remove high-frequency noise and the random errors introduced by the spectral measuring instruments, thereby enhancing the signal-to-noise ratio. A five-point weighted smoothing method was used to process the raw spectral data 29 . Five-point weighted smoothing is carried out using Equation (1), where n̄ is the weighted average of the data points within the filter window centred on point n, namely the smoothed spectral value, and m indexes the unsmoothed data points, namely the original spectral values.
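Since Equation (1) is not reproduced here, the weights in the sketch below are an assumption (a common triangular 1-2-3-2-1 kernel); only the general structure of the five-point weighted moving average is taken from the text:

    import numpy as np

    def five_point_weighted_smooth(spectrum, weights=(1, 2, 3, 2, 1)):
        # weighted moving average over a 5-point window; 'same' keeps the
        # spectrum length (edge points are zero-padded and usually discarded)
        w = np.asarray(weights, float)
        return np.convolve(np.asarray(spectrum, float), w / w.sum(), mode="same")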
Fv/Fm measurement. The chlorophyll fluorescence parameters of the wheat leaves were determined with a modulated fluorometer OS1-FL (Opti-Sciences, Tyngsboro, MA, USA) after the completion of each spectral measurement. First, a dark-adaptation clamp was used to dark-adapt the leaf for 10 min, and then the initial light energy conversion efficiency of photosystem II (PS II), Fv/Fm, was measured; each measurement was repeated 9 times. The formula is Fv/Fm = (Fm − Fo)/Fm, where Fo is the basal fluorescence value under dark-adapted conditions, Fm is the maximum fluorescence value under dark-adapted conditions, and Fv = Fm − Fo is the variable fluorescence.
Hyperspectral VIs. With reference to previous studies, and based on the spectral characteristics of wheat combined with the physical meaning of each spectral index, a total of 56 VIs were considered (Table 3).
Statistical analysis. VIs-Fv/Fm relationships were analyzed using a variety of regression models: linear, exponential, logarithmic, and quadratic. Models were ranked based on statistically significant (p < 0.05 or 0.01) correlation coefficients (r in the case of linear models) and coefficients of determination (R2 in the case of non-linear models). Finally, by plotting the relation between estimated and measured Fv/Fm values on a 1:1 scale, the performance of the model was evaluated through the coefficient of determination (R2) and the relative root mean squared error (RRMSE) for the assessment of in-situ measured Fv/Fm. The higher the R2 and the lower the RRMSE, the higher the accuracy of the model in assessing Fv/Fm. The RRMSE and assessment accuracy are calculated using Equations (3) and (4), where yi and ŷi are the measured and predicted values of wheat canopy Fv/Fm, respectively, and n is the number of samples. | 2018-06-23T13:09:35.585Z | 2018-06-22T00:00:00.000 | {
"year": 2018,
"sha1": "b4a5d4324146c405d7c23b1f2d883193d8cbdac8",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-27902-3.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4a5d4324146c405d7c23b1f2d883193d8cbdac8",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
93950110 | pes2o/s2orc | v3-fos-license | Origin of $1/f$ noise transition in hydration dynamics on a lipid membrane surface
Water molecules on lipid membrane surfaces are known to contribute to membrane stability by connecting lipid molecules and acting as a water bridge. Although the number of water molecules near the membrane fluctuates dynamically, the hydration dynamics has been veiled. Here we investigate residence statistics of water molecules on the surface of a lipid membrane using all-atom molecular dynamics simulations. We show that hydration dynamics on the lipid membrane exhibit $1/f^\beta$ noise with two different power-law exponents, $\beta_l<1$ and $\beta_h>1$. By constructing a dichotomous process for the hydration dynamics, we find that the process can be regarded as a non-Markov renewal process. The result implies that the origin of the $1/f$ noise transition in hydration dynamics on the membrane surface is a combination of a power-law distribution with cutoff of interoccurrence times of switching events and a long-term correlation between the interoccurrence times.
In numerous natural systems, the power spectra exhibit enigmatic 1/f noise, S(f) ∝ 1/f^β, at low frequencies. In biological systems, 1/f noise has been reported for protein conformational dynamics [1][2][3], DNA sequences [4], biorecognition [5], and ionic currents [6-9], implying that long-range correlated dynamics underlie biological processes. Moreover, 1/f noise is involved in the regulation of permeation of water molecules in an aquaporin [3]. There are many mathematical models that generate 1/f noise, including stochastic models [10-13] and intermittent dynamical systems [14-17]. The power-law residence time distribution is one of the most thoroughly studied origins of 1/f noise [12,14-17]. In dichotomous processes, the power spectrum shows 1/f noise when the distribution of residence times of each state follows a power-law distribution with divergent second moment. For blinking quantum dots, which show a 1/f spectrum, residence times for "on" (bright) and "off" (dark) states have been experimentally shown to follow power-law distributions with a divergent mean [18,19]. In stochastic models, this divergent mean residence time violates the law of large numbers, which causes the breakdown of ergodicity, non-stationarity, and aging [20-23]. Conversely, the divergent mean residence time implies an infinite invariant measure in dynamical systems [24], and the time-averaged observables become intrinsically random [24,25].
In our previous work, we found that the residence times of water molecules on the lipid membrane surfaces followed power-law distributions [26,27]. Therefore, it is physically reasonable to expect that the hydration dynamics on membrane surfaces also obey 1/f noise. Although little is known about the hydration dynamics, it is important to understand the dynamics of resident water molecules because these water molecules may play important roles in the overall dynamics of the membrane, and will affect membrane stability and biological reactions. In fact, such water molecules stabilize the assembled lipid structures [26,28]; this water retardation increases the efficiency of biological reactions [27,29,30]. Water molecules enter and exit the hydration layer, and the number of water molecules near the lipid head group fluctuates.
In this letter, we perform a molecular dynamics (MD) simulation of water molecules plus a palmitoyl-oleoyl-phosphocholine (POPC) membrane at 310 K to investigate the hydration dynamics on the lipid surface (the details of the MD simulation are given in [31]). We find that fluctuations in the number of water molecules on the lipid surface show 1/f^β noise with two power-law exponents, i.e., β_l < 1 at low frequencies and β_h > 1 at high frequencies, and that the residence time distributions for the "on" and "off" states follow power-law distributions with exponential cutoffs. Moreover, we construct a dichotomous process from the trajectory of the number of water molecules on a lipid molecule to clarify the origin of the two power-law exponents in the power spectrum. By analyzing the constructed dichotomous process, we find that there is a long-term correlation in the residence times, which causes the two different power-law exponents in the power spectrum.
Fluctuations of water molecules on the lipid head group.−We recorded the number of water molecules whose oxygens were within an interatomic distance of 0.35 nm from any atom in the lipid head groups [Fig. 1A]. The number fluctuates around an average of about 14. Figure 1B shows the ensemble-averaged power spectral density (PSD) obtained from the average of the power spectra of the number of water molecules at 128 lipid molecules. The power spectrum exhibits two regimes with distinctive 1/f behavior: above the transition frequency f_t we have S(f) ∝ f^(−β_h) with β_h = 1.35, while below this frequency we have S(f) ∝ f^(−β_l) with β_l = 0.8; furthermore, the PSD shows a plateau at low frequencies. This crossover phenomenon is essential because S(f) ∝ f^(−β) with β ≥ 1 implies non-integrability and non-stationarity. We have confirmed that 1/f fluctuations of the number of water molecules are observed in boxes and spheres near the membrane surfaces but not in bulk water. A similar transition of the power-law exponent of the PSD has also been observed for the interchange dynamics of "on" and "off" states in quantum dot blinking [32]. This behavior was described theoretically using an alternating renewal process, where the residence time distributions of the "on" and "off" states are given by a power law with an exponential cutoff, ψ_on(τ) ∝ τ^(−1−α) e^(−τ/τ_on), and a pure power law, ψ_off(τ) ∝ τ^(−1−α) with α < 1, respectively [32]. The transition frequency f_t is related to the exponential cutoff in the quantum dot blinking experiment. In that case, the PSD exhibits aging, non-stationarity, and weak ergodicity breaking because the "off" time does not have a finite mean.
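The counting procedure can be sketched with MDAnalysis; the topology/trajectory file names and the POPC head-group atom names below are guesses (the actual setup is in ref. [31]), and for brevity the sketch counts waters over all head groups rather than per lipid:

    import MDAnalysis as mda
    import numpy as np

    u = mda.Universe("popc.psf", "traj.dcd")   # hypothetical file names
    # head-group atoms of POPC (force-field dependent; assumed names)
    head = u.select_atoms("resname POPC and name P N O11 O12 O13 O14")

    counts = []
    for ts in u.trajectory:
        # water oxygens within 3.5 Angstrom of any head-group atom
        shell = u.select_atoms("name OW OH2 and around 3.5 group head", head=head)
        counts.append(shell.n_atoms)
    counts = np.array(counts)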
To confirm whether the aging effect appears in the hydration dynamics on the lipid surface, we calculate the ensemble-averaged PSDs for different measurement times [ Fig. 1C]. The magnitudes of the PSDs do not depend on the measurement time t, i.e. there is no aging. It follows that the power-law distribution with an exponential cutoff considered in [32] cannot explain hydration dynamics on lipid membranes.
Dichotomous process.−To consider the origin of the 1/f noise, we constructed a dichotomous, i.e. two-state, process from the time series of the number of water molecules; the "on" (N = 1) or "off" (N = −1) state is assigned when the number of water molecules on each lipid molecule is above or below, respectively, the average number [Fig. 2A]. Figure 2B shows the ensemble-averaged PSD for the time series of the constructed dichotomous processes. The obtained 1/f noise is the same as in the ensemble-averaged PSD for the original time series [see Fig. 1B]. Figure 2C shows the probability density functions (PDFs) of the residence times for the "on" and "off" states. The PDFs follow power-law distributions with exponential cutoffs, P(τ) = A τ^(−(1+α)) exp(−τ/τ_c), where the power-law exponent is α = 1.2, and the cutoffs for the PDFs of the "on" and "off" states are τ_c = 59 ps and 1074 ps, respectively. The plateau of the PSD at low frequencies comes from the exponential cutoffs in the power-law distributions.
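A schematic reconstruction of the dichotomisation, the ensemble PSD and the residence-time extraction (Python/NumPy; the per-lipid series are assumed to have equal length and sampling interval dt):

    import numpy as np

    def dichotomize(n_waters):
        # +/-1 depending on whether the count is above/below its own mean
        return np.where(n_waters > n_waters.mean(), 1.0, -1.0)

    def ensemble_psd(series_list, dt):
        # average the one-sided periodograms over all lipids
        psds = []
        for x in series_list:
            x = x - x.mean()
            psds.append(np.abs(np.fft.rfft(x)) ** 2 * dt / x.size)
        f = np.fft.rfftfreq(series_list[0].size, d=dt)
        return f, np.mean(psds, axis=0)

    def residence_times(state):
        # run lengths (in frames) of consecutive "on" and "off" states
        change = np.flatnonzero(np.diff(state)) + 1
        starts = np.concatenate(([0], change))
        runs = np.diff(np.concatenate((starts, [state.size])))
        return runs[state[starts] > 0], runs[state[starts] < 0]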
Origin of the transition in the 1/f noise.−One important question remains unclear: What is the origin of the transition in the 1/f noise? In other words, does power-law intermittency or long-term memory (as expected for a non-Markov process) contribute to the transition in the 1/f noise? To address this question, we calculated the ensemble-averaged PSD for a shuffled time series of the dichotomous processes, where the residence times for the "on" and "off" states were randomly shuffled among themselves. The ensemble-averaged PSD of the shuffled time series is different from that of the original time series of the dichotomous process [Fig. 3]. The transition in the 1/f noise disappears, although the power spectrum still shows 1/f noise at high frequencies after shuffling. The power-law exponent of S(f) ∝ f^(−β) at high frequencies is about 0.8, and the PSD converges to a finite value at low frequencies. This suggests that the transition in the 1/f noise originates from the non-Markovian nature of the hydration dynamics. Following our observations, we performed a numerical simulation in which time series of "on" and "off" states were generated with random waiting times drawn from power-law distributions with exponential cutoffs, with α = 1.2 and τ_c = 60 for the "on" state and τ_c = 1000 for the "off" state. In Markovian dichotomous processes, the power-law exponent β in the PSD is given by the power-law exponent of the residence time distribution, i.e., β = 2 − α for α < 2 [15]. The power-law exponent β observed here in the PSD is consistent with this relationship.
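Both the shuffling test and the renewal-process benchmark can be sketched as follows; the rejection sampler for the power-law density with exponential cutoff is one possible choice, with the minimum time tau0 an added assumption:

    import numpy as np

    rng = np.random.default_rng(1)

    def shuffled_surrogate(on_times, off_times):
        # rebuild a +/-1 series after independently permuting the "on" and
        # "off" residence times, which destroys their correlations
        on = rng.permutation(on_times)
        off = rng.permutation(off_times)
        pieces = []
        for t_on, t_off in zip(on, off):
            pieces += [np.ones(int(t_on)), -np.ones(int(t_off))]
        return np.concatenate(pieces)

    def powerlaw_cutoff_times(n, alpha=1.2, tau_c=1000.0, tau0=1.0):
        # sample waiting times with density ~ tau^-(1+alpha) * exp(-tau/tau_c),
        # tau >= tau0, by rejection against the pure power law
        out = np.empty(n)
        i = 0
        while i < n:
            tau = tau0 * rng.random() ** (-1.0 / alpha)   # inverse transform
            if rng.random() < np.exp(-(tau - tau0) / tau_c):
                out[i] = tau
                i += 1
        return out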
To clarify the correlation of the residence times, we considered three types of time series of residence times: {τ_1^on, ..., τ_N^on}, {τ_1^off, ..., τ_N^off}, and {τ_1^on, τ_1^off, ..., τ_N^on, τ_N^off}. Figure 4A shows the correlations between "on" and "off" residence times. There are positive correlations between the residence times of the previous and the current "on" states, and between the previous and the current "off" states, and negative correlations between an "on" residence time and the next "off" time, and between an "off" residence time and the next "on" time. Moreover, the ensemble-averaged PSDs of the three types of time series of residence times exhibit 1/f noise [Fig. 4B]. This result means that the residence times have a long-term correlation. What is the biological significance of the 1/f noise in the hydration dynamics on lipid membrane surfaces? The roles played by the water molecules near the membrane depend upon their structure and dynamics. The 1/f noise attributed to a non-Markov renewal process can contribute to the stability of the hydration layer, which is important for membrane stability and physiological processes.
In conclusion, we have used all-atom molecular dynamics simulations to show that the number of water molecules on the lipid molecules exhibits 1/f noise. The power-law exponents differ below and above the transition frequency f_t: there is a transition from β_l < 1 at low frequencies to β_h > 1 at high frequencies, although ergodicity is not broken. Moreover, we provide evidence that the transition in the 1/f noise and the preserved ergodicity are caused by non-Markov power-law intermittency with an exponential cutoff. These results are relevant to a broad range of systems displaying 1/f fluctuations. | 2014-04-16T05:00:01.000Z | 2014-04-16T00:00:00.000 | {
"year": 2014,
"sha1": "a877c7b4fc5f55efefb25cee21483f489631f39d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a877c7b4fc5f55efefb25cee21483f489631f39d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
245774552 | pes2o/s2orc | v3-fos-license | Innovative therapeutic concepts of progressive multifocal leukoencephalopathy
Progressive multifocal leukoencephalopathy (PML) is an opportunistic viral disease of the brain caused by human polyomavirus 2. It affects patients whose immune system is compromised by a corresponding underlying disease or by drugs. Patients with an underlying lymphoproliferative disease have the worst prognosis, with a mortality rate of up to 90%. Several therapeutic strategies have been proposed but have failed to show any benefit so far. Therefore, the primary therapeutic strategy aims to reconstitute the impaired immune system to generate an effective endogenous antiviral response. Recently, anti-PD-1 antibodies and the application of allogeneic virus-specific T cells demonstrated promising effects on the outcome in individual PML patients. This article aims to provide a detailed overview of the literature with a focus on these two treatment approaches.
Introductory remarks
Progressive multifocal leukoencephalopathy (PML) is an opportunistic infection of the brain caused by the human polyomavirus 2 (HPyV-2) (previously known as JC polyomavirus). Overall, PML is associated with severe disability and a relatively high mortality [1]. Infection with HPyV-2 usually occurs during childhood, though the proportion of seropositive persons in the population increases with age, reaching approximately 60-80% in 70-year-olds [2,3]. The rate is also highly dependent on the serological test applied, with different groups reporting highly variable seropositivity rates. HPyV-2 usually leads to an asymptomatic, lifelong persistent and latent infection in the general population. However, in patients with long-lasting and profound impairment of cellular immunity, HPyV-2 can reactivate from latency or persistent asymptomatic infection and undergo intra-individually acquired viral genomic rearrangements leading to neuroinvasion and lytic infection of white matter (predominantly oligodendrocytes) and neuronal cells in the brain [4,5]. The in vivo diagnosis of PML is based on the clinical presentation, brain imaging findings (preferably magnetic resonance imaging, MRI) and the detection of the virus in the cerebrospinal fluid (CSF) by polymerase chain reaction (PCR) [6]. The fact that there is no animal model for PML and that HPyV-2 is difficult to grow in culture remains a major challenge for the development of antiviral therapeutic strategies against HPyV-2. To date, direct antiviral therapeutics such as cidofovir, mirtazapine, cytarabine, or mefloquine have failed to improve survival or reduce disability in PML patients [28][29][30][31]. Basically, the key to successful treatment of PML is restoring the functions of the immune system (Fig. 1). The aim of this review is to present
Immunological mechanisms and causative factors of PML
Because most adults are exposed to HPyV-2 in childhood, HPyV-2-specific antibodies and memory T cells are found in their blood. In patients with a human polyomavirus 1 (HPyV-1) (formerly known as BK virus) infection, a close relative of HPyV-2 sharing immunologically significant epitopes, it has been shown that an increase in antibody titers is associated with a reduction in viral load [7,8]. Although these antibodies can effectively control viremia, they cannot control polyomavirus-associated complications. In contrast, the presence of cytotoxic (CD8+) T cells has been shown to correlate positively with a beneficial clinical course of PML [9]. Thus, it is reasonable that an impaired T-cell immune response due to underlying immunosuppressive disease entities or therapy increases the risk of PML. The role of B cells in the pathogenesis of PML is rather unclear. Functional B cells are necessary for adequate viral defense, and the risk for PML is quite high, especially in chronic lymphocytic leukemia (CLL) and other B-cell-associated lymphoproliferative disorders. This might be due to the fact that certain B-cell transcription factors (especially Spi-B) promote not only B-cell differentiation but also HPyV-2 replication [10]. In general, the causes of PML can be divided into three major subgroups. The first group consists of patients with human immunodeficiency virus (HIV) infection, whose PML risk is due to the underlying disease alone. At the height of the acquired immunodeficiency syndrome (AIDS) pandemic, approximately 5% of HIV-positive patients developed PML [11]. After the introduction of highly active antiretroviral therapy, both the number of HIV patients developing PML and the mortality of PML in these patients decreased significantly [12]. The risk of developing PML is highly dependent on the number of CD4+ T cells, and the current overall incidence rate amounts to about 1/1,000 person-years [1,13-15]. The second group includes patients whose PML risk is due to both their underlying disease and its therapy. Particularly those affected by a lymphoproliferative disease should be mentioned here, since both the disease itself and its therapy severely compromise the immune system. To date, PML patients with an underlying malignant hematologic disease have the worst outcome. Mortality in individual studies of smaller size is about 90%, and many patients die within the first 2 months after diagnosis [16-19]. The last group comprises patients who receive immunosuppressive therapy, e.g. due to an autoimmune disease. In this context, it should be mentioned that the incidence of monoclonal antibody therapy-related PML has substantially increased in recent years. The best-known example of such an agent is certainly the anti-alpha4-integrin antibody natalizumab, which was designed to prevent the migration of leukocytes into the CNS and is approved for the treatment of highly active relapsing multiple sclerosis. Since its approval in 2004, 839 cases of PML have been reported to date (as of September 2020, www.tysabri.de, accessed 07/10/2021), with the risk correlating with duration of treatment and blood HPyV-2 antibody index values (which quantify antibody reactivity relative to reference sera) [20]. The mortality of natalizumab-related PML is approximately 20% to 23% [21][22][23], but survivors largely carry severe or at least moderate disability [22].
Prognosis in this group of patients largely depends on the time of diagnosis and the extent of the lesion burden on MRI at the time of PML diagnosis. Patients who were not yet symptomatic when PML was diagnosed and patients with less extensive disease on MRI at diagnosis tend to have a better prognosis, with better survival rates and less functional disability [23][24][25].
A special feature of this group of natalizumab-treated patients is that they are particularly prone to developing a so-called PML immune reconstitution inflammatory syndrome (PML-IRIS). This phenomenon is characterized by an excessive inflammatory reaction leading to paradoxical clinical worsening once restoration of the immune system sets in. In principle, it can occur in all PML patients, but it is particularly well characterized in natalizumab-associated PML [21]. Patients with other autoimmune diseases are also increasingly treated with novel immunomodulatory therapies, especially with monoclonal antibodies such as rituximab or infliximab [26]. The rate of PML cases in these patients is significantly lower than in those with lymphoproliferative diseases. Nevertheless, the risk of PML occurrence must also be considered here [27], especially since the use of such therapies will certainly continue to increase in the future. It should be noted that in this last group of patients, suspending the immunosuppressive treatment can help to improve the PML prognosis, while at the same time worsening the underlying disease.
Use of interleukins for the treatment of PML
Several studies have described the use of interleukins (interleukin-2 and interleukin-7) in PML to reconstitute the immune response. As a trophic factor for lymphocytes, interleukin-2 (IL-2) is required for the establishment and maintenance of adaptive T-cell responses; at the same time, IL-2 is also critical for immune regulation through its effect on regulatory T cells [32]. For example, disruption of IL-2 production after stem cell transplantation appears to lead to a deficiency in cell-mediated immunity and thus to an increased risk of opportunistic infections in stem cell-transplanted patients [33]. It is, therefore, not surprising that the positive reports on the use of IL-2 in PML come particularly from patients after stem cell transplantation. In addition, other previously used PML therapies such as cytarabine could not be applied in these patients because of the risk of cytopenias [34,35].
There are a total of five case reports on the use of IL-2 in patients with an underlying hematologic disease and PML. Two cases received a combination treatment of IL-2 and pembrolizumab. All affected individuals had in common that their underlying disease had led to a sometimes pronounced lymphocytopenia as the cause of PML. They all benefited from treatment and showed long-term improvement of their neurological symptoms [36][37][38][39]. Only a single publication has addressed IL-2 therapy for natalizumab-associated PML. A 51-year-old female patient with relapsing-remitting multiple sclerosis developed new-onset motor aphasia three years after initiation of natalizumab therapy and was diagnosed with PML. Since the symptoms progressed after plasmapheresis and intravenous immunoglobulins, therapy with subcutaneous IL-2 was initiated, under which the patient's clinical symptoms improved in the long term and the viral load in the CSF decreased [40].
In addition, subcutaneous IL-7 was combined with vaccination against the VP1 protein of HPyV-2 to target PML. Two patients were treated this way. Both showed an increase in VP1-specific CD4+ T cells and long-term stabilization or slight improvement of PML-associated symptoms [48]. Another paper, by Patel and colleagues, described the application of multiple PML therapeutics in a patient with CD4+ lymphopenia. In addition to cidofovir, risperidone, and mefloquine, IL-7 and CMX001, an investigational oral agent, were used [49]. The patient benefited from the treatment, but which of these agents was decisive remained unclear. In summary, the results for the treatment of PML with interleukins are promising, especially when PML is due to CD4+ lymphopenia. Since data remain sparse and most case reports are several years old, it is encouraging that a recent pilot study (NCT04781309) is investigating the value of recombinant IL-7 for the treatment of lymphopenia in PML patients.
Furthermore, in recent years, two modern therapeutic approaches have emerged as promising candidates for the treatment of PML in small case series and single case reports (Table 1). They are both based on the activation of the endogenous immune system and will be characterized in more detail below.
Anti-PD-1-antibodies
The primary therapeutic scope of anti-programmed death (PD)-1 antibodies such as nivolumab or pembrolizumab is oncological disease. The pharmacodynamic principle of activating CD8+ T cells to generate a potent antitumor response has revolutionized cancer therapy. After the approval for indications such as metastatic malignant melanoma or non-small cell lung cancer, the use of PD-1 antibodies in chronic viral infections has been suggested. Over time, increasing evidence suggested that the PD-1/PD-L1 axis is upregulated during acute viral infections to protect surrounding tissues from an exuberant immune response [50]. However, if the virus cannot be eliminated and a chronic viral infection develops, this mechanism causes exhaustion of the antiviral immune response (so-called exhausted T cells), particularly of the CD8+ T cells [50]. The hypothesis that such T-cell exhaustion may also play a role in PML has led to the attempt of using anti-PD-1 antibodies in this chronic viral disease [51]. In addition, increased PD-1 expression was detected not only on T cells in the blood and CSF of PML patients, but also on intralesional T cells in the CNS. Simultaneously, increased PD-L1 expression was shown on macrophages within PML lesions, highlighting the relevance of the PD-1/PD-L1 axis in PML [52]. The first paper on the use of pembrolizumab and nivolumab in PML was published in 2019. Eight PML patients with different underlying diseases (HIV infection (n = 2), (non-)Hodgkin's lymphoma (n = 4), idiopathic lymphopenia (n = 2)) were treated with pembrolizumab at a dose of 2 mg per kg body weight every 4-6 weeks. A maximum of three doses were administered in total. During therapy, two patients experienced improvement of neurological symptoms, stabilization of symptoms occurred in four cases, and two patients did not benefit from therapy. In the latter patients, worsening of clinical symptoms in combination with an increased lesion load on brain MRI and HPyV-2 viral load in CSF was observed [52]. It should be noted that no PML-IRIS occurred in any case. The authors reasoned that this was due to the persistence of lymphopenia in all patients beyond pembrolizumab therapy.
In addition to the eight cases mentioned above, a further 13 individual case reports and two smaller case series (totaling 22 additional patients) on the use of PD-1 inhibitors in PML have since been published [53][54][55][56][57][58][59][60][61][62][63][64]. Eight of the 13 individual case reports described the use of pembrolizumab, and the remaining five publications used nivolumab. Clinical improvement was achieved in 7 of 13 cases (54%), 2 patients (15%) stabilized, and 4 (31%) died due to disease progression. For more clinical and demographic details regarding the treated patients see Table S1 (supplementary material).
A recent case series by Roos-Weil and colleagues in 2021 described six PML patients treated with anti-PD-1 antibodies (nivolumab (n = 4), pembrolizumab (n = 2)). The cause of PML was a hematologic malignancy in four cases, one patient suffered from a primary immunodeficiency, and one patient was on immunosuppressive therapy for myasthenia gravis. At long-term follow-up, 14-33 months after initiation of anti-PD-1 treatment, three of the six patients were still alive, one with clinical improvement and two with stabilization of symptoms. Three patients died despite treatment [65].
We recently published a case series describing three PML patients with underlying hematological diseases. One patient with long-standing Waldenström's disease showed marked improvement of his PML-associated symptoms during therapy with pembrolizumab and ultimately became symptom-free, whereas the other two patients suffered a fatal course of PML despite anti-PD-1 therapy. Both had previously been treated with rituximab and had no detectable CD20- and CD19-positive cells in their blood at the time of diagnosis [66].
Formally, no causal conclusions can be drawn from the few publications with a very heterogeneous patient group. What can be concluded, however, is that in addition to the initial positive case reports, there was worsening of PML symptoms in 33% and even death despite the use of an anti-PD-1 antibody in 38% of the cases (Table 1). The level of HPyV-2 viral load, PD-1 expression of T cells, and the number of HPyV-2-specific CD8+ and CD4+ T cells are discussed as possible factors influencing the success of the treatment [67]. In addition, the T-cell phenotype seems to play an important role. Pawlitzki and colleagues observed that in a PML patient with fatal disease progression receiving pembrolizumab therapy (the 13th individual case report), the proportion of Ki67+ PD-1+ CD45RA− memory T cells, so-called terminally exhausted T cells, increased significantly [68]. Thus, characterization of immune cell subtypes in PML patients will be of great importance for future therapeutic decisions and prognostic assessments.
In addition, a single case report should be mentioned in which the authors postulated PML as a consequence of therapy with nivolumab. A 54-year-old patient with refractory Hodgkin lymphoma was treated with nivolumab for 13 months after multiple high-dose chemotherapies; oral steroid therapy for hypocortisolism was ongoing in parallel. Thirteen months after initiation of nivolumab therapy, with increasing neurological deficits, a diagnosis of PML was finally made, and anti-PD-1 therapy was discontinued due to a presumed causal relationship. Regarding the course, the authors reported that the patient was still alive 5 months after diagnosis, but without relevant improvement of the neurological deficits [69]. No other adverse events of this kind attributed to nivolumab have been published to date, and it is reasonable to assume that the PML in the aforementioned case was not triggered by anti-PD-1 therapy but was preexisting, or due to the underlying malignancy or the steroid therapy.
Severe autoimmune phenomena, often described in oncological patients receiving anti-PD-1 therapy [70], have not yet been observed in the treatment of PML. However, in contrast to the initial case series by Cortese et al., PML-IRIS occurred in a total of seven patients in the additionally published cases (Table 1). Also, individual less severe autoimmune phenomena, such as single cases of myositis, arthritis, or colitis, were observed as a consequence of anti-PD-1 therapy. Secondary autoimmune phenomena such as pneumonitis or hypophysitis are common with anti-PD-1 therapy, and neurologists should be vigilant for evidence of such side effects [71]. Especially in patients with pre-existing autoimmune diseases including multiple sclerosis, therapy with nivolumab or pembrolizumab must be discussed very critically. Considering possible autoimmune side effects, which may also affect the CNS in the form of demyelinating inflammation, anti-PD-1 antibody therapy does not seem to be the right option in certain cases.
Allogeneic virus-specific T cells
Currently, the most promising therapeutic approach may be the use of allogeneic virus-specific T cells. The method has its origins in hematology and has mainly been used in stem cell-transplanted patients with Epstein-Barr virus (EBV), cytomegalovirus (CMV), adenovirus (HAdV), or HPyV-1 infections [8]. Reactivation of HPyV-2 also plays a role in patients after hematopoietic stem cell transplantation, although PML rates are very low compared with hemorrhagic cystitis due to HPyV-1 [72,73]. The first treatments involving adoptive transfer of virus-specific T cells occurred in the early 1990s [74][75][76]. Since then, adoptive T-cell therapy has evolved tremendously. Whereas initially peripheral mononuclear cells from seropositive donors were expanded ex vivo in a time-consuming manner, in the course of time it became possible to isolate virus-specific T cells with major histocompatibility complex (MHC) multimers in a much more time-effective way [77]. In addition, the risk of graft-versus-host disease (GVHD) could be minimized by a more targeted selection of virus-specific T cells. Another promising immunotherapeutic approach that has emerged in recent years is the use of HLA-matched T-cell lines from third-party donors, which offers the advantage of timely availability of cells for clinical use. The efficacy of this method has been illustrated, particularly for CMV and EBV reactivation after allogeneic stem cell transplantation [78]. In 2011, an Italian group published the case of a young patient with severe chronic GVHD and 5 years of immunosuppressive therapy after allogeneic stem cell transplantation, in whom HPyV-2-specific T cells were used for the first time. At PML diagnosis, the remaining immunosuppression was stopped and antiviral therapy with cidofovir was initiated. In addition, the patient received two infusions of 0.5 × 10⁶ and 1.0 × 10⁶ T cells from his stem cell donor, which had previously been coincubated and activated ex vivo with HPyV-2-specific proteins [79]. After therapy, there was marked improvement of the neurological symptoms, a decrease in lesion burden on brain MRI, and no further detectability of HPyV-2 DNA in the CSF. Importantly, there was no evidence of GVHD.
However, this positive single case report was not followed by further publications on the use of virus-specific T-cell therapy in PML for several years.
In 2018, therapy with allogeneic T cells received increased attention as a treatment option for PML. Muftuoglu and colleagues treated three PML patients (32, 35, and 73 years old) with HPyV-1-specific T cells from third-party donors, with the patients receiving two, three, or four T-cell infusions. Of particular note, two of the three patients suffered from an underlying hematological disease (acute myeloid leukemia and polycythaemia vera), which is usually associated with high PML mortality [16][17][18][19]. Patient 3 had AIDS due to HIV infection; he had discontinued antiretroviral therapy 5 years before the PML diagnosis because of side effects. All patients experienced a reduction in HPyV-2 viral load in the CSF after the first treatment. With regard to clinical symptoms, two of the three patients experienced a significant reduction or complete remission of neurological symptoms. The third patient, who was 73 years old, experienced stabilization but no improvement of her symptoms and ultimately died under palliative care [80]. The case series suggests that a therapeutic attempt with HPyV-1-specific T cells may be reasonable in PML and probably has an acceptable safety profile, although proof of efficacy cannot be provided.
As HPyV-2 and HPyV-1 share certain significant epitopes (in particular, the capsid protein VP1 and the so-called large T antigen (T-Ag), an important regulatory protein of polyomaviruses), there is cross-reactivity between HPyV-2- and HPyV-1-specific T cells [81]. Since the production of HPyV-1-specific T cells is already established in some manufacturing centers under conditions of "Good Manufacturing Practice" (GMP), their use in PML suggests itself.
Encouraged by the work of our colleagues, we also started to treat PML patients with HPyV-1-specific T cells at our center. Recently, the experiences with two successfully treated cases were published [82]. One patient suffered from dermatomyositis as the underlying disease; the other patient had developed severe pulmonary fibrosis after successful treatment of Hodgkin's lymphoma, so that she had to undergo lung transplantation with consequent intensive immunosuppressive treatment. In contrast to Muftuoglu and colleagues, our clinic does not use preproduced frozen allogeneic peripheral blood mononuclear cells stimulated with HPyV-2 antigens. Rather, we have access to a registry of more than 3500 potential donors. Suitable donors are selected based on the appropriate HLA typing and their T-cell frequency. Direct isolation of antigen-specific T cells is achieved by stimulation with appropriate overlapping peptide mixtures, cytokine capture, and magnetic isolation, so that the cells are available after about 16-24 h.
Very recently, the literature on the use of HPyV-1-specific T cells in PML was extended by another case report and a first clinical study. A 57-year-old PML patient with underlying marginal zone lymphoma was initially treated with a total of 10 infusions of pembrolizumab. Because of insufficient reduction of the viral load in CSF, the patient additionally received two infusions of HPyV-1-specific T cells at a 7-week interval [83]. During therapy, there was both a significant reduction in viral load and improvement in neurological symptoms.
A first pilot clinical trial of treatment with HPyV-1-specific T cells was presented by Cortese and colleagues in August 2021 [84]. After screening of a total of 26 patients, 12 PML patients were ultimately treated. They received a maximum of three infusions of HPyV-1-specific T cells donated by first-degree relatives. One year after the start of the treatment, seven of the twelve patients were still alive, while five patients had died of PML. No treatment-associated adverse events were reported. It should be noted that the production of the final T-cell product in this study took up to 4-6 weeks. Because of this long duration of T-cell production, some individual patients could not be included in the study due to symptom exacerbation. The extent to which a longer manufacturing time of the final T-cell product negatively affects patient outcome cannot be conclusively assessed due to limited data.
In addition to HPyV-1-specific therapies, treatment of PML with HPyV-2-specific T cells is also gaining increasing interest. The literature contains a recent single case report and a case series on the use of HPyV-2-specific T cells in PML. In the case of a 59-year-old man with refractory multiple myeloma who had undergone allogeneic stem cell transplantation, HPyV-2-specific T cells were derived from the lymphocytes of the HLA-identical stem cell donor. After termination of the ongoing chemotherapy and subsequent T-cell administration, the neurological symptoms stabilized and HPyV-2 was no longer detected in the CSF, although there was subtle morphological evidence of an immune reconstitution syndrome. Apart from focal epilepsy secondary to PML, the patient was free of neurological symptoms 12 months after therapy with HPyV-2-specific T cells [85].
In January 2021, an Italian group published a case series on HPyV-2-specific T-cell therapy in nine PML patients, in which cell lines were derived from autologous or allogeneic peripheral blood mononuclear cells by stimulation with protein multimers in a procedure of approximately 4 weeks [86]. Seven of the nine patients suffered from malignant hematologic diseases, six of whom had previously received B-cell-depleting therapy (rituximab). Of the nine patients treated, three, all of whom had received B-cell-depleting therapy as part of their underlying malignant hematologic disease, died as a result of PML. One additional death was attributed to varicella zoster virus (VZV) encephalitis. Two of the surviving patients (each with non-Hodgkin's lymphoma as the underlying disease) had stabilization of neurologic symptoms, and three showed improvement of symptoms of varying degree. In five cases, the HPyV-2-specific cellular immune response was analyzed before and after cell therapy. In four of these five patients, there was a relevant increase in interferon-gamma (IFN-γ)-producing HPyV-2-specific T cells after T-cell application, which in turn correlated with a favorable outcome [86].
Conclusions
PML is a rare but often fatal opportunistic viral disease of the brain for which no adequate therapeutic strategy has existed so far. Basically, the outcome of patients depends on how quickly the body's own immune response, which is usually impaired in PML patients, can be restored. The ease with which such immune reconstitution can be achieved depends very much on the underlying disease. Patients with underlying hematological diseases remain particularly problematic, as both the disease itself and its therapy can lead to a significant impairment of the immune system.
The two innovative therapeutic concepts mentioned above show promising results in some cases, although the success of treatment varies considerably between patients, particularly in the case of anti-PD-1 therapy. The therapeutic options for PML described in this article cannot all be applied equally to the different PML subgroups. The primary target group is certainly those patients whose immune response is impaired by the underlying disease and its therapy. In the case of HIV-associated PML, the prognosis can usually be improved with the use of effective antiretroviral therapy alone. If the PML is triggered by an immunosuppressive therapy for the treatment of an autoimmune disease, the termination of this therapy can contribute to the treatment of PML, while at the same time a worsening of the underlying disease can occur. These factors must be considered when selecting a therapeutic regimen for PML. With a low overall number of cases to date, it is not yet possible to draw definitive conclusions regarding the efficacy of the treatment approaches. However, the adverse effects described so far seem to be limited. This is true for anti-PD-1 therapy as well as for the use of virus-specific T cells, although secondary autoimmune phenomena must probably be expected, especially with anti-PD-1 therapy. Based on the literature to date, therapy with allogeneic T cells seems to provide the most promising results in the treatment of PML. The factors for successful therapy, for example the question to what extent a delayed manufacturing time affects the outcome and whether success can be predicted pre-therapeutically, should be the subject of future studies.
Author contributions NM, LG-L, and TS had the idea for the article. NM and LG-L performed the literature search and data analysis. The first draft of the manuscript was written by NM. LG-L, FH, BE-V, BM-K, CW, K-WS, MPW, GUH, and TS critically revised the work. All authors read and approved the final manuscript.
Funding Open Access funding enabled and organized by Projekt DEAL. No funds, grants, or other support was received.
Conflicts of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2022,
"sha1": "a52f08a19fe4a077272665409d8877819ca9a8d3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00415-021-10952-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a52f08a19fe4a077272665409d8877819ca9a8d3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
We have developed a Green's function formalism based on the use of an overcomplete semicoherent basis of vortex states, specially devoted to the study of the Hamiltonian quantum dynamics of electrons at high magnetic fields and in an arbitrary potential landscape smooth on the scale of the magnetic length. This formalism is used here to derive the exact Green's function for an arbitrary quadratic potential in the special limit where Landau level mixing becomes negligible. This solution remarkably embraces under a unified form the cases of confining and unconfining quadratic potentials. This property results from the fact that the overcomplete vortex representation provides a more general type of spectral decomposition of the Hamiltonian operator than usually considered. Whereas confining potentials are naturally characterized by quantization effects, lifetime effects emerge instead in the case of saddle-point potentials. Our derivation proves that the appearance of lifetimes has for origin the instability of the dynamics due to quantum tunneling at saddle points of the potential landscape. In fact, the overcompleteness of the vortex representation reveals an intrinsic microscopic irreversibility of the states synonymous with a spontaneous breaking of the time symmetry exhibited by the Hamiltonian dynamics.
Serge Florens
Institut Néel, CNRS and Université Joseph Fourier, B.P. 166, 25 Avenue des Martyrs, 38042 Grenoble Cedex 9, France (Dated: September 17, 2009)
I. INTRODUCTION
The integer quantum Hall effect, with its remarkable transport properties, 1 offers perhaps the simplest route to understand the complex dynamics of electrons taking place in strongly inhomogeneous nanostructures. Indeed, the presence of a strong perpendicular magnetic field in two dimensions brings the classical motion close to integrability, with slow drift trajectories superimposed on faster cyclotron orbits, leaving hope that the quantum dynamics displays similar and simple structures. 2,3,4,5,6,7,8,9 Further simplification is brought by the fact that electron-electron interactions can be taken into account in the integer quantum Hall regime at the single-particle level, 10,11 so that the calculation of equilibrium properties, such as the local electron density and the distribution of permanent currents throughout the sample, can be carried out from a one-particle random Schrödinger equation. Nevertheless, the precise microscopic resolution of this equilibrium one-body problem still lacks a concrete analytical formalism, and progress in this direction would be useful toward a microscopic description that underlies more complex nonequilibrium phenomena.
The main difficulty in the resolution of the disordered Schrödinger equation resides in the complexity of the potential landscape, which competes with the kinetic energy in a nontrivial manner. In fact, it is worth mentioning that the standard procedure 12 to deal with a random potential, which consists in averaging over impurity configurations, is already physically questionable at high magnetic fields, at least at the microscopic level. Indeed, this theoretical route is usually justified by the physical assumption of randomness after successive collision events. However, instead of the chaotic exploration of the disordered landscape, the electronic classical motion becomes relatively regular for a smooth disorder potential at high magnetic fields. At the technical level, this difficulty was pointed out in the standard quantum-mechanical diagrammatic perturbative method, which leads to unsolved complications for a smoothly disordered potential: more and more classes of diagrams must be incorporated in the calculation as the strength of the magnetic field is increased. 13 This somehow indicates that the perturbative technique is ill adapted to the high magnetic-field regime, when the bending of the electronic trajectories becomes too important.
We note already that if one does not resort to any averaging over disorder, one faces not only a technical problem but also a more fundamental physical one, namely, the question of the microscopic origin of irreversibility and intrinsic dissipation from Hamiltonian quantum dynamics, an essential aspect for the calculation of transport properties. Indeed, the standard impurity averaging procedure, valid in the limit of low magnetic fields, introduces an effective description of the formalism. This allows one to obtain, in addition to the energy spectrum, lifetime effects, which are inaccessible in a purely Hamiltonian formalism limited to the Hilbert space of square integrable functions. In the absence of averaging, it is thus necessary to clarify the possible relation between the "complexity" of the potential landscape and the issue of the microscopic origin of irreversible processes. In other words, we are confronted with the controversial question of whether irreversibility results from supplementary approximations to the fundamental quantum-mechanical laws (which are strictly time reversible) or is subtly hidden in the usual formulation of quantum dynamics.
A well-known starting point to capture the regime of quasi-regular dynamics in high perpendicular magnetic fields is to implement in full exact quantum-mechanical terms the fast cyclotron motion of circling electrons resulting from the Lorentz force, which gives rise to the quantization of the kinetic energy into discrete Landau levels. The second relevant degree of freedom then corresponds to the slow guiding center of motion, whose dynamics is dictated by the smooth disordered potential landscape. The essential role of the potential landscape, to be captured precisely in the microscopic quantum-mechanical calculations, is to lift the huge degeneracy of the Landau levels. It is already important to note that the magnetic field B enters into the quantum-mechanical problem only via two different quantities: the cyclotron pulsation ω_c = |e|B/(m*c) and the magnetic length l_B = √(ħc/|e|B). While ω_c is a material-dependent parameter via the effective mass m*, l_B can be regarded as a more fundamental quantity since it involves only physical constants such as Planck's constant ħ, the speed of light c, and the absolute value of the electric charge |e|. The cyclotron pulsation ω_c determines a characteristic frequency for the circular motion which actually already enters into the problem at the classical level. In contrast, the magnetic length l_B is a purely quantum-mechanical quantity, especially characterizing the spatial extent of the wave functions. In the popular operatorial language of quantum mechanics, these two relevant degrees of freedom are introduced by decomposing the electronic coordinate r̂ = R̂ + η̂ into a relative position η̂ = v̂ × ẑ/ω_c linked to quasicircular cyclotron orbits (v̂ is the velocity operator), and a guiding center position R̂ = (X̂, Ŷ). These quantum variables obey the commutation rules [v̂_x, v̂_y] = −iħω_c/m* and [X̂, Ŷ] = i l_B². In close analogy with the canonical quantization rule between the position and the momentum, it is seen that, for the slow drift motion of the guiding center, l_B² plays the role of an effective magnetic-field-dependent Planck's constant.
The condition of a high magnetic field can thus be imposed either by expressing that l_B is the smallest length scale in the problem, i.e., by taking l_B → 0, or by considering that ω_c is the biggest frequency scale in the problem, i.e., by taking ω_c → ∞. These two limits can in principle be taken separately or simultaneously and actually yield different physical situations, which have been discussed in the literature. 4,5,6,7,8,14,15,16,17,18,19 For instance, a popular approach corresponding to the first limit l_B → 0 is to treat classically the slow guiding center motion while the fast cyclotron motion is kept quantum mechanical. 4,5,6,8,14 This case leads to great simplifications in the theoretical treatment, since the guiding center coordinates then commute and can be described entirely in classical terms. In this limit, the guiding center motion is restricted to equipotential lines, and the energy spectrum is characterized by a continuous potential energy on top of discrete Landau levels. Another standard approximation, corresponding to the second limit ω_c → ∞, is to neglect Landau level mixing and to study the Hamiltonian quantum dynamics at finite l_B projected onto a single Landau level. 7,15,16,17,18,19 This trickier regime requires working in a fully quantum-mechanical formalism taking into account rigorously the noncommutativity of the guiding center coordinates. An interesting aspect that was not fully clarified by either type of approach lies in how to capture the transition from quantum to classical, with the classical features emerging possibly from microscopic decoherence processes.
In order to study the interplay of Landau level quantization and a smooth disordered potential in a controlled fashion, we have developed 20,21 in recent years a specially devoted Green's function formalism based on the use of a semicoherent overcomplete set of states |m, R⟩ labeled by a continuous quantum number R, related to the classical guiding center coordinates, and an integer m, associated to the discrete Landau levels. This family of states was named vortex states 20 due to the vortexlike phase singularity of the associated wave functions ⟨r|m, R⟩ at the electronic position r = R. Because the vortex states encode no preferred symmetry, they allow a great adaptability to the local spatial variations of the random potential. More precisely, our approach consists in mapping the quantum equation of motion obeyed by the Green's function (the so-called Dyson equation) to this vortex representation, which then rigorously extends the classical guiding center picture to quantum mechanics. The essential difference from the guiding center treatment is that our method keeps the full quantum-mechanical noncommutativity of the guiding center coordinates through the overcompleteness property of the basis of states. We have shown 21 that, within the vortex representation, Dyson equation can be easily and systematically diagonalized order by order in powers of the magnetic length l_B. Quantum observables are then obtained by returning to the electronic representation from the vortex Green's functions, so that the semiclassical limit l_B → 0, as well as its systematic corrections, is naturally obtained with our approach. 21 Moreover, the vortex representation allows one to classify and include in a systematic and straightforward way the Landau level mixing processes in the calculations.
The results to be developed here aim at extending the work initiated in Refs. 20,21, where a systematic and closed-form expression for the solution of Dyson equation in a smooth arbitrary potential was already obtained, under the form of a series expansion classified order by order in powers of l_B. The further and important step made in the present paper is to include to all orders the contributions from first and second spatial derivatives of the potential, in loose analogy to the resummation of leading classes of Feynman diagrams in standard perturbation theory. For simplification, we will consider the mathematical limit where Landau level mixing can be considered negligible, and our solution will encompass all cases of quadratic potentials in that limit. A further motivation for a resummation of the gradient expansion is that the series obtained in powers of l_B may not converge in general, since the semiclassical guiding center limit l_B → 0 is expected to be singular, similar to the case of the more standard, fully semiclassical limit ħ → 0. That the small-l_B expansion is indeed singular will be illustrated by the asymptotic character of guiding center semiclassical results: physical aspects related to an exact quantum treatment, such as quantization of energy levels or lifetime effects, cannot be approximated by a finite expansion. We shall confirm this feature by comparing our exact quantum solution to various approximation schemes related to several improvements of the semiclassical guiding center method.
Before diving into the heart of the technique, we want to mention that, although our method obviously shares on certain aspects some similarities with theories already existing in the literature, important differences with these prior works can be emphasized. First, we would like to stress that our methodology is based on the exclusive use of Green's functions, not wave functions, in contrast to the theory pioneered by Girvin and Jach 7 (see also Refs. 16,18,19), where a one-dimensional (1D) Schrödinger equation for the electron dynamics projected onto a single Landau level (valid in the limit ω_c → ∞) was derived for finite l_B. This point could appear superfluous at first glance, since it is possible to get Green's functions from the knowledge of wave functions. However, the use of an overcomplete representation with nonorthogonal states to solve the dynamical equations of motion necessarily forces us to give up the wave-function picture and work in a Green's function formalism of partially coherent states. Furthermore, it is worth noting that the Hilbert space of square integrable wave functions is usually well suited for closed integrable systems, but turns out to be totally inadequate in situations presenting scattering processes in open systems (the case of a saddle-point potential, for instance), where one must appeal to another formalism, the scattering-states picture. We shall show that the use of an overcomplete representation of coherent states allows one to obtain and treat quantization and lifetime effects on an equal footing in the resolution of Dyson equation. Moreover, the appearance of lifetimes in the energy spectrum coincides with the impossibility of describing the solution of the Dyson equation in terms of a countable set of states, thus proving the relevance of an overcomplete 22 representation in such a situation. Second, we note that several authors 7,15,16,17 actually attempted to build theories based on the use of vortex states within the path-integral formalism, which, however, seemed to suffer from technical difficulties that were not elucidated. 7,15,16 In contrast, our theory is not tainted with the specific mathematical ambiguities which can often be encountered with the path-integral technique.
The derivation of an exact solution for the Green's function at large cyclotron energy yet finite magnetic length, embracing all possible cases of quadratic potentials, constitutes the main mathematical result of this paper. Besides capturing exactly the tunneling processes in the case of a saddle-point potential, it has the virtue of pointing out clearly the physical microscopic mechanism responsible for the appearance of lifetimes in the spectral decomposition of the Hamiltonian. We therefore hope that it will also help to clarify the debate about the physical roots of time irreversibility and the ubiquitous emergence of a classical character from quantum mechanics. An important point we will also demonstrate is that the derived solution provides a controlled approximation at finite temperature for all equilibrium local observables in the case of an arbitrary potential that is smooth on the scale of the magnetic length. This result, based on the fact that the local Green's function at high magnetic fields displays a hierarchy of energy scales controlled by successive spatial derivatives of the potential, can be used to write down an expression for the local density of states which may be useful in the context of recent scanning tunneling spectroscopy measurements. 23 A short report of parts of this work has been published in Ref. 24.
The paper is organized as follows. In Sec. II we present the vortex Green's function formalism and derive the general form of Dyson equation in the vortex representation. In Sec. III, Dyson equation is exactly solved for the two particular cases of an arbitrary 1D potential and an arbitrary two-dimensional (2D) quadratic potential. The obtained solutions are then exploited in Sec. IV to derive a general expression for the local density of states. Finally, we discuss in Sec. V the importance of considering an overcomplete representation in the present problem and its physical implications for the issue of time irreversibility. A small conclusion closes the paper. Some extra technical details providing complementary information for the calculations are given in several appendixes.
II. VORTEX GREEN'S FUNCTION FORMALISM

A. Hamiltonian and projection onto the vortex representation
We consider the single-particle Hamiltonian for an electron of charge e = −|e| confined to a two-dimensional (x, y) plane in the presence of both a perpendicular magnetic field B and an arbitrary potential energy V(r),

H = (1/2m*) [−iħ∇ − (e/c)A]² + V(r),   (1)

with the vector potential A defined by ∇ × A = B = Bẑ, and m* the electron effective mass [here r = (x, y) is the position of the electron in the plane]. For V = 0, the energy spectrum is quantized into Landau levels

E_m = (m + 1/2) ħω_c, with ω_c = |e|B/(m*c).   (2)

The high degeneracy of the energy levels in the absence of a potential is associated with a great freedom in the choice of a basis of eigenstates for the free Hamiltonian. To diagonalize Hamiltonian (1) for an arbitrary potential energy landscape V(r), a very convenient basis 20 turns out to be the overcomplete set of so-called vortex wave functions given by

Ψ_{m,R}(r) = ⟨r|m, R⟩ = (1/√(2π l_B² m!)) [(z − Z)/(√2 l_B)]^m exp[−(|z|² + |Z|² − 2 Z̄z)/(4 l_B²)],   (3)

with z = x + iy and Z = X + iY. The continuous variable R = (X, Y) constitutes the quantum analog of the semiclassical guiding center discussed in the introduction.
Here we have expressed the wave functions (3) in the symmetrical gauge A = B × r/2. Besides being eigenstates of the free Hamiltonian, the set of wave functions (3) has the coherent-state character with respect to the continuous (degeneracy) quantum number R, which also corresponds to a "vortex"-like singularity for r = R. Despite being semiorthogonal, the set of quantum numbers |m, R⟩ obeys the completeness relation

∫ (d²R / 2π l_B²) Σ_{m=0}^{∞} |m, R⟩⟨m, R| = 1,   (4)

thus allowing one 20 to use the vortex representation in a Green's function formalism, providing unicity of the development, related to the analyticity of the disorder potential. 25 Note, however, that the nonorthogonality of the states prevents one from building a perturbation theory solely based on wave functions to deal with the potential term V(r).
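As a quick sanity check on the reconstructed form (3), whose squared modulus depends only on |z − Z| and is therefore insensitive to the exact phase convention, the normalization can be verified numerically. The following is a minimal sketch; all parameter values are illustrative assumptions and the code is not part of the original formalism.

```python
import numpy as np
from math import factorial

# Numerical check of the normalization of the vortex state (3). Its squared
# modulus depends only on |z - Z|, so the test is insensitive to the phase
# convention of the reconstructed formula. All values below are arbitrary.

l_B = 1.0
m = 3                          # Landau level index
Z = 0.5 + 0.25j                # guiding center, Z = X + iY

x = np.linspace(-12.0, 12.0, 1201)
X, Y = np.meshgrid(x, x)
z = X + 1j * Y

rho2 = np.abs(z - Z) ** 2 / (2.0 * l_B**2)
prob = rho2**m * np.exp(-rho2) / (2.0 * np.pi * l_B**2 * factorial(m))

dA = (x[1] - x[0]) ** 2
print(np.sum(prob) * dA)       # -> 1.0 up to discretization error
```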
Here we have expressed the wave functions (3) in the symmetrical gauge A = B × r/2. Besides being eigenstates of the free Hamiltonian, the set of wave functions (3) has the coherent states character with respect to the continuous (degeneracy) quantum number R, which also corresponds to a "vortex"-like singularity for r = R. Despite being semiorthogonal, the set of quantum numbers |m, R obeys the completeness relation thus allowing one 20 to use the vortex representation in a Green's function formalism, providing unicity of the development, related to the analyticity of the disorder potential. 25 Note that, however, the nonorthogonality of the states prevents to build a perturbation theory solely based on wave functions to deal with the potential term V (r). We have shown in a previous work 21 that the electronic Green's function associated with Hamiltonian (1) and satisfying the evolution equation in the energy (ω) representation (we set from now on = 1) can be written exactly in terms of vortex wave functions Ψ m,R (r) as Here ∆ R means the Laplacian operator taken with respect to the vortex position R, and the term δ in the lefthand side of Eq. (5) is an infinitesimal positive quantity encoding the boundary condition for the time evolution.
The retarded Green's function G^R [with plus sign in Eq. (5)] represents the response of the system to an impulse excitation, while the advanced Green's function G^A (with minus sign) corresponds to a source wave with a delta-like response. Note that the correspondence between Green's functions in Eq. (6) is nonlocal with respect to the Landau level index m, as expected, but quasilocal with respect to the vortex position R. Equation (5) for the electronic Green's function then maps 21 exactly onto a Dyson equation [Eq. (7)] for the vortex Green's function g_{m;m′}(R, ω) (from now on, we do not specify that the Green's function depends on ω in order not to burden the expressions). The matrix elements v_{m;m′}(R) of the potential V in the vortex representation, which enter Eq. (7), can be evaluated exactly for an arbitrary potential provided that the latter is smooth, i.e., infinitely differentiable, which is the case for any physical potential. They take the form of a series expansion (8) in powers of the magnetic length (see Ref. 20 for the detail of the derivation).
B. Systematic magnetic length expansion
One method adopted in the paper 21 in order to solve Eq. (7) is to search the function g^{R,A}_m(R) under the form of a series in powers of the magnetic length l_B, similarly to the matrix elements of the potential,

g^{R,A}_{m;m′}(R) = Σ_{j=0}^{∞} l_B^j g^{(j)R,A}_{m;m′}(R).   (10)

The functions g^{(j)}_{m;m′}(R) obey a hierarchy of coupled equations, which allows one in principle to obtain an explicit expression for g^{(j)}_{m;m′} at any order j from the knowledge of all other components with subleading order i < j. The leading-order component can be readily obtained and reads

g^{(0)R,A}_{m;m′}(R) = δ_{m,m′} / [ω − E_m − V(R) ± iδ].   (11)

Inserting expression (11) in Eq. (6) and keeping only the l_B zeroth-order term coming with p = 0 yields the compact expression for the electronic Green's function

G^{R,A}(r, r′; ω) = Σ_m ∫ (d²R / 2π l_B²) Ψ_{m,R}(r) Ψ*_{m,R}(r′) / [ω − E_m − V(R) ± iδ],   (12)

which is a quite simple and general functional of V(R). Subleading corrections up to order l_B³ were explicitly calculated in Ref. 21.

C. Limitations of the strict l_B expansion

Since the leading term of the expansion (8) is the dominant one for the matrix elements of a smooth potential with characteristic length scale ξ ≫ l_B, one could naively expect that the leading component g^{(0)} in Eq. (11) is also the dominant one in the expansion (10) for the Green's function. As noted in Ref. 21, this conclusion has however to be contrasted, since it does not take into consideration the fact that the (l_B/ξ) expansion generates at higher orders systematic terms which may be highly singular in energy due to their multiple pole structure. This is most clearly seen from Eq. (6) for the electronic Green's function at coinciding points r = r′ obtained with the leading-order vortex propagator g^{(0)}:

G^{R,A}(r, r; ω) = Σ_m Σ_{p=0}^{∞} (1/p!) ∫ (d²R / 2π l_B²) |Ψ_{m,R}(r)|² [(l_B²/4) ∆_R]^p [ω_m − V(R)]^{−1}
              = Σ_m Σ_{p=0}^{∞} (1/p!) ∫ (d²R / 2π l_B²) {[(l_B²/4) ∆_R]^p |Ψ_{m,r}(R)|²} [ω_m − V(R)]^{−1},   (13)

where integrations by parts and the property |Ψ_{m,R}(r)|² = |Ψ_{m,r}(R)|² were used to get the last line of Eq. (13) (we have noted above ω_m = ω − E_m ± iδ). Now clearly the truncation of the above Eq. (13) to the first p = 0 term is only vindicated provided the integral varies on length scales larger than l_B, which is not always guaranteed, as the vortex wave function spatially extends precisely on the scale l_B. If these corrections become important, not only the whole sum over p above must be kept, but also all terms of similar form that appear within the complete vortex Green's function g_{m;m′} (i.e., to all orders in l_B). Let us see what kind of terms one should then consider. By inspecting the second line in Eq. (13), one is in fact looking for corrections in the vortex Green's function at order l_B² of the type

g^{(2)} ∼ (l_B |∇_R V|)² / [ω_m − V(R)]³.   (14)

Such terms with multiple poles, which indeed start to appear in g^{(2)} (see Ref. 21 for a complete derivation), proliferate at all orders of the l_B expansion, similar to the further contributions associated to values of p > 1 in Eq. (13). These corrections to the Green's function will not be perturbatively small whenever one probes energies or temperatures smaller than the first characteristic energy scale appearing above, namely, l_B|∇_R V|, in which case the leading expression (12) breaks down. Equation (14) is however hinting at how a controlled calculation can be performed: provided that a hierarchy of energy scales l_B|∇_R V| ≫ l_B²|∆_R V| ≫ … can be established, a systematic resummation to all orders in l_B of potential gradient terms will push the validity of the calculation down to the smaller scale l_B²∆_R V, and so on and so forth. This idea is quite analogous to the usual resummation of classes of Feynman graphs in standard perturbation theory and constitutes the basic motivation for the computations that will follow.
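To make the leading-order formula (12) concrete, the following minimal sketch evaluates the corresponding local density of states ρ(r, ω) = −(1/π) Im G^R(r, r; ω) restricted to the lowest Landau level, for which |Ψ_{0,R}(r)|² = exp(−|r − R|²/2l_B²)/(2πl_B²). The smooth test potential and all parameter values are illustrative assumptions, not quantities taken from the text.

```python
import numpy as np

# Leading-order local density of states from Eq. (12):
# rho(r, w) = -(1/pi) Im Sum_m Int d^2R/(2*pi*l_B^2)
#             |Psi_{m,R}(r)|^2 / (w - E_m - V(R) + i*delta),
# restricted here to the lowest Landau level m = 0.

l_B, delta, E0 = 1.0, 1e-2, 0.5
x = np.linspace(-20.0, 20.0, 401)
X, Y = np.meshgrid(x, x)              # grid of vortex positions R
dA = (x[1] - x[0]) ** 2

# Illustrative smooth saddle-like landscape (an assumption for the demo)
V = 0.05 * (X**2 - Y**2) / (1.0 + 0.002 * (X**2 + Y**2))

def ldos(r, omega):
    """rho(r, omega) from the m = 0 term of Eq. (12)."""
    w0 = np.exp(-((X - r[0])**2 + (Y - r[1])**2) / (2 * l_B**2)) / (2 * np.pi * l_B**2)
    gR = 1.0 / (omega - E0 - V + 1j * delta)
    return -(1.0 / np.pi) * np.sum(w0 * gR.imag) * dA / (2 * np.pi * l_B**2)

# Example: energy scan of the LDOS at the saddle point r = (0, 0)
energies = np.linspace(0.0, 1.0, 101)
profile = [ldos((0.0, 0.0), w) for w in energies]
```

The result is essentially δ(ω − E_0 − V(R)) smeared by a Gaussian of width l_B around r, which makes the breakdown discussed above explicit: structure of V on scales comparable to l_B, or energies below l_B|∇V|, is not resolved by the p = 0 truncation.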
D. Dyson equation in the absence of Landau level mixing
To present the method of resolution of Dyson equation (7) to infinite order in the l_B expansion, we shall focus for simplicity on the limit of vanishing Landau level mixing, i.e., ω_c → ∞ with l_B finite. In this case, one can easily check that the vortex Green's function becomes purely diagonal, g_{m;m′}(R) = g_m(R) δ_{m,m′}, so that Eq. (7) gets simplified into the scalar equations (15) and (16), for g_m(R) and for the diagonal matrix elements v_m(R), respectively. Equations (15) and (16) are exact in the limit ω_c → ∞ and valid for any (differentiable) potential V(X, Y). For the specific case of a quadratic potential, only the first terms k = 0, 1, 2 and j = 0, 1 of the series appearing, respectively, in Eqs. (15) and (16) remain, giving rise to a nontrivial second-order partial differential equation to be solved in Sec. III. Let us first continue considering a generic potential and try to simplify this Dyson equation as much as possible.
In order to solve Eq. (15), it appears very convenient to introduce a modified vortex Green's function g̃_m(R) and a modified potential ṽ_m(R) through the change in functions (17) and (18) [an insight suggested by the form of the electronic Green's function (6)], where expression (16) for v_m(R) was used to obtain Eq. (19). After some standard manipulations presented in Appendix A, one gets the very compact form (20) of Dyson equation (valid for an arbitrary potential, in the limit ω_c = ∞ with l_B finite), where the notations ∂_X^ṽ and ∂_Y^ṽ mean that these spatial derivatives act on the function ṽ_m(R) only [similarly for g̃_m(R)]. Interestingly, and in contrast to the initial Dyson Eq. (15), this differential operator starts now at order l_B⁴ ∂²_{XY}ṽ ∂²_{XY}g̃ (once Dyson equation has been properly symmetrized by taking its real part, see Appendix A), so that the change in functions (17) and (18) manages in principle to perform the whole resummation of potential gradient terms to all orders in l_B (this will be discussed in more detail in Sec. IV).
Before considering the solution of the transformed Dyson equation (20), we need to examine the change brought by the mapping (17) in the electronic Green's function (6), now diagonal in the Landau level index m [Eq. (21)], where the factors e^{−(l_B²/2)∆_R} and e^{(l_B²/4)∆_R} were combined together, and integrations by parts were performed. The last step, performed in Appendix B, is simply to compute the action of the exponential operator in Eq. (21) onto the product of two vortex wave functions, which finally yields the form (22), with a kernel (23) involving the coefficient A_s = (1 − s)/(1 + s). Form (22) will be particularly useful for subsequent calculations in Sec. IV and in Appendixes C and E.
III. SOLVING DYSON EQUATION
A. Absence of curvature: case of an arbitrary 1D potential or a locally flat disordered 2D potential

Dyson equation (20), also in its explicit form [Eq. (A7)], has the remarkable property that the differential operators necessarily involve derivatives of the potential in two orthogonal directions. For a 1D potential along the x direction, the function v_m(X) depends on a single coordinate, so that Dyson equation for g̃^{R,A}_m(X) becomes completely trivial, and its exact expression (in the limit ω_c → ∞) reads

g̃^{R,A}_m(X) = 1 / [ω − E_m − ṽ_m(X) ± iδ],   (24)

with ṽ_m(X) defined above in Eq. (18) playing the role of an effective potential energy.
To benchmark expression (24) for the modified vortex Green's function, we consider the exact solution for the electronic Green's function that can be derived using a standard wave-function formalism in the case of a parabolic 1D potential, and prove in Appendix C that both approaches lead to identical expressions. This establishes that formula (21), with even the lowest-order vortex Green's function g̃_m(R), contains the edge-state physics, which plays an important role in the understanding of transport properties observed in the quantum Hall effect regime. 26,27,28 From the present analysis of an arbitrary 1D potential, one can already guess (see Sec. IV for more details) that the differential operators appearing in Dyson equation (20) mainly play a role in the case of 2D equipotential lines that present a certain amount of curvature at the scale of the magnetic length. For a disordered 2D potential V(R), this can occur, e.g., in the vicinity of its critical points R_c characterized by ∇V(R_c) = 0. For an arbitrary smooth potential, and far from its critical points, the equipotential lines are locally straight at the scale of l_B, so that the modified vortex Green's function g̃^{R,A}_m(R) will be well approximated by the expression

g̃^{R,A}_m(R) ≈ 1 / [ω − E_m − ṽ_m(R) ± iδ].   (25)

Once inserted in the electronic Green's function (21), this simple result gives the approximate expression (26) that was proposed with little detail in our previous Ref. 21.
Considering that for a smooth 2D potential the equipotential lines are locally straight on the scale l_B (this requires a sufficiently large local radius of curvature), one can then perform in principle the integration over the variable parametrizing distance along the constant-energy "surface" V(R) = const, in the same way as done explicitly in Appendix C for a pure 1D potential. One then recovers from the obtained Green's function expression the property that the wave functions are locally well approximated by translation-invariant Landau states with drift velocity c∇V × ẑ/(|e|B), as argued in the seminal paper by Trugman. 4 Expression (26) is however quite powerful, because it does not rely on a particular parametrization of the equipotential lines, which can be cumbersome for a disordered potential, and can be used easily by performing the integral over the vortex coordinate R numerically or analytically.
However, as stressed before, approximation (26) breaks down in the vicinity of the critical points of the potential, where the drift velocity locally vanishes. This requires including in the analysis the second-order derivatives of the potential V in order to lift the degeneracy of the Landau levels, leading to strong quantum effects (quantization and/or lifetime), as we will discuss from now on.
B. Green's functions including curvature effects: case of a 2D quadratic potential

To investigate curvature effects and to determine more precisely under which conditions approximation (25) is valid, we expand the arbitrary potential V(R) around a given point R₀, up to quadratic order. This expansion appears to be sufficient provided that the gradient and the three possible second-order derivatives of the potential (locally) never vanish simultaneously, a realistic assumption. We thus write

V(R) ≈ V(R₀) + (R − R₀)·∇V(R₀) + ½ Σ_{α,β} (R − R₀)_α (R − R₀)_β ∂²_{αβ}V(R₀),   (27)

with α, β = X, Y. Inserting expression (27) into the symmetrized form (A7) of Dyson equation (20), we then find that the function g̃_m(R) is dictated by a second-order partial differential equation [Eq. (29)]. The antisymmetrized Dyson equation (A8) yields, on the other hand, an extra constraint [Eq. (30)] indicating that the function g̃^{R,A}_m(R) necessarily possesses the same equipotential lines as V(R). We thus write g̃^{R,A}_m(R) = f^{R,A}_m(V(R)) and substitute this expression into Eq. (29) to obtain a simple 1D differential equation (31) obeyed by the function f^{R,A}_m(E). The coefficient γ appearing in this equation is nothing but the determinant of the Hessian matrix of the potential V, with a prefactor l_B⁴/4,

γ = (l_B⁴/4) [∂²_{XX}V ∂²_{YY}V − (∂²_{XY}V)²].

Its sign determines the nature of the critical points R_c at which |∇_R V| vanishes. A saddle point is characterized by γ(R_c) < 0, while a strictly positive γ(R_c) indicates the presence of a local maximum or minimum.
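As a concrete illustration of this sign criterion, consider two textbook quadratic landscapes (these model potentials are illustrative choices, not taken from the text), using the Hessian-determinant expression for γ given above:

```latex
% Saddle-point potential (open geometry): lifetime effects
V(\mathbf{R}) = a\,(X^{2} - Y^{2}) \;\Rightarrow\;
\gamma = \tfrac{l_{B}^{4}}{4}\,(2a)(-2a) = -a^{2} l_{B}^{4} < 0 ,
\qquad
% Parabolic confinement (closed geometry): discrete levels
V(\mathbf{R}) = a\,(X^{2} + Y^{2}) \;\Rightarrow\;
\gamma = \tfrac{l_{B}^{4}}{4}\,(2a)(2a) = +a^{2} l_{B}^{4} > 0 .
```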
Differential equation (31) can be solved by Fourier transforming to the time domain (see Appendix D), so that the Green's function is given by the explicit formula (35), written as a time integral over a kernel h^{R,A}_m(R₀, t) built from the function τ(t) and related trigonometric factors [Eqs. (36) and (37)]. Noticeably, when γ > 0 the function τ(t) is a periodic function of time t, so that the Green's function g̃_m must display discrete poles, and quantization of energy levels in a confined potential is recovered (this is further discussed in Appendix E). We stress that such success of the vortex formalism was far from granted, because one has started with a family of wave functions labeled by the continuous quantum number R.
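The connection between periodicity and discrete poles can be made explicit by a generic argument that does not depend on the detailed form of the kernel: for any T-periodic function h(t), the retarded time integral resums into a geometric series,

```latex
\int_{0}^{\infty} \! dt\, e^{(i\omega-\delta)t}\, h(t)
= \sum_{n=0}^{\infty} e^{(i\omega-\delta)nT} \int_{0}^{T} \! dt\, e^{(i\omega-\delta)t}\, h(t)
= \frac{\int_{0}^{T} dt\, e^{(i\omega-\delta)t}\, h(t)}{1 - e^{(i\omega-\delta)T}} ,
```

whose denominator vanishes for δ → 0⁺ at the discrete energies ω = 2πn/T, with n an integer. The periodicity of τ(t) for γ > 0 therefore directly enforces the discrete pole structure of g̃_m.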
For γ < 0, the functions cos and tan in Eqs. (36) and (37) are to be replaced by their hyperbolic counterparts cosh and tanh, respectively, so that the kernel 1/cosh(√−γ t) obviously introduces lifetime effects into the description [the convergence of integral (35) over time is now ensured by this term and no longer by the cutoff function exp(∓δt), as was the case for γ > 0]. Clearly, the vortex self-energy obtained from Eq. (35) displays an elastic scattering rate proportional to √−γ, a clear signature of quantum tunneling at the saddle point, with important consequences for transport properties (see the discussion in Sec. V). This allows us to make the crucial physical identification between the scattering mechanism and the negative curvature of the potential in the quantum Hall regime.
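The qualitative difference between the two signs of γ can be visualized with a minimal Python sketch of the reparametrized time τ(t) entering the solution of Appendix D. The functional forms used below, τ(t) = tan(√γ t)/√γ for γ > 0 and τ(t) = tanh(√−γ t)/√−γ for γ < 0, are reconstructions based on the cos/tan-to-cosh/tanh replacement stated above, not verbatim formulas from the paper: the first is periodic in t (hence discrete poles of g̃_m), the second saturates (hence a finite lifetime).

```python
import numpy as np

def tau(t, gamma):
    """Reparametrized time tau(t) (reconstructed forms, see lead-in)."""
    if gamma > 0:
        s = np.sqrt(gamma)
        return np.tan(s * t) / s   # periodic divergences -> discrete poles
    if gamma < 0:
        s = np.sqrt(-gamma)
        return np.tanh(s * t) / s  # saturates at 1/sqrt(-gamma) -> lifetime
    return t                        # flat-potential (pure drift) limit

t = np.linspace(0.0, 20.0, 5)
for g in (0.25, 0.0, -0.25):
    print(f"gamma={g:+.2f}:", np.round(tau(t, g), 3))
```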
It is interesting to note that the strong quantum effects (quantization or lifetime) exhibited by the exact quantum solution (35) are dictated by the quantity √γ, which involves the square root of the second-order derivatives of the potential. They thus cannot be fully captured via a finite expansion in powers of the magnetic length, which can only generate integer powers of the derivatives of the potential; see Secs. II B and II C. This impossibility of approximating quantum effects at finite l_B in a controllable way with the l_B expansion illustrates its asymptotic character.
The function h_m^{R,A}(R_0, t) depends on the reference point R_0 via the coefficients η and ω̃_m for a generic quadratic potential, and possibly also via the coefficient γ for a potential characterized by higher derivatives. The geometric parameters γ and η are basically small coefficients for a potential V(R) that varies smoothly at the scale l_B. If we literally take γ = η = 0, we recover expression (25) for the function g̃_m(R). We will show further below in which circumstances it is nevertheless required to keep the dependence on the coefficients γ and/or η in the Green's function to correctly describe the local physical observables.
Making use of expression (35) together with Eq. (21), we finally obtain the electronic Green's function (38). Expression (38) is the main mathematical result of this work. It is exact in the limit ω_c → ∞ for any quadratic potential. In particular, it holds for quadratic confining potentials simulating closed systems, such as quantum dots, as well as for nonconfining quadratic potentials corresponding to open systems, such as quantum point contacts. Physical implications of this result are discussed in Sec. V, while further mathematical simplifications will now be performed in order to extract relevant physical observables.
IV. LOCAL DENSITY AND CURVATURE EFFECTS
A. Simplifying the Green's function expression

Expression (38) for the Green's functions can be calculated further in different ways. One possibility is to use a parametrization of the equipotential lines of V(R). Such an approach appears, however, not very practical for a generic random potential. Actually, it turns out that the two-dimensional integral over the position R can be performed analytically when V(R) is expanded up to its second derivatives around the point R_0. For a quadratic potential, the Green's function can then be rewritten at the final stage as a single one-dimensional integral over the time variable t, as shown in this section. For the numerics, this is more easily tractable than a direct computation of formula (38).
Note that for a quadratic potential, formula (38) is actually independent of the choice of R_0. This can easily be checked by taking the gradient of expression (38) with respect to R_0 and noting that, besides the explicit term V(R_0), the dependence on R_0 is also contained in the function h_m(R_0, t) through the coefficients η and ω̃_m (the other coefficient γ is independent of R_0 in the particular case of a quadratic potential). The independence of the electronic Green's function then follows from the exact cancellation of these contributions. For a smooth arbitrary potential V(R), result (38) is expected to give a very good approximation to the electronic Green's function provided that the temperature exceeds the energy scales associated with the third-order (and beyond) derivatives of the potential. Contrary to the case of a quadratic potential, formula (38) then depends on the reference point R_0, which thus has to be chosen appropriately. The natural choice is R_0 = (r + r′)/2.
Inserting formulas (22) and (23) into Eq. (38), using the expansion (27) of the potential V(R) up to quadratic order with R_0 = (r + r′)/2 = c, and evaluating the resulting Gaussian integrals over the variable R, we get expression (39), with coefficients defined in the subsequent equations, where H_V|_c is the 2 × 2 Hessian matrix composed of the second derivatives of the potential V taken at position c. For r = r′, we have the simplifications W(r, r, t) = ∇_r V(r) and η̃(r, r, t) = η(r) [the function defined in Eq. (34)]. Note that for a potential V characterized by derivatives of order higher than 2, formula (39) yields only an approximate result. In this case, all the geometric coefficients, including γ and ζ, depend on the center-of-mass position c.
B. Local density of states
We now aim at computing the local electronic density defined by Eq. (43), where the local density of states ρ(r, ω) is directly obtained from the retarded Green's function at coincident positions, Eq. (44). Here n_F(ω) = [1 + exp([ω − µ]/T)]^{−1} is the Fermi-Dirac distribution function, T the temperature, and µ the chemical potential. We thus need the simpler form of expression (39) taken at coincident positions, Eq. (45). To simplify the expression of the local density further, it is then required to consider the explicit expression (36) for the function h_m^R(r, t) and insert it into Eq. (45). In order to do the integral over ω in expression (43), we first introduce the change of variable ω′ = ω − µ and decompose the exponential factor in the numerator as exp(iω′t) = cos(ω′t) + i sin(ω′t). The contribution to the integral over the energy ω′ in Eq. (43) coming with the first term cos(ω′t) is then performed by writing the Fermi-Dirac distribution function in the expanded form (46). The second contribution, coming with the term sin(ω′t), is calculated by using the result (47). Finally, we find that the local density n(r) takes the form of a simple integral over the time t, Eq. (48). This formula is exact for any quadratic potential in the absence of Landau level mixing. To illustrate this strong statement, we prove in Appendix E its equivalence with the expression for the local density that can be derived by standard means in the specific case of a circular 2D parabolic confinement (note that we have already shown the correspondence at the level of the Green's functions in the different case of a 1D parabolic potential; see Appendix C). This shows that quantization effects, i.e., the presence of a discrete energy spectrum, are fully captured in the vortex representation, despite not being completely explicit in formula (48). The latter equation thus has a relatively general character, since it contains in a compact and unified form the cases of confining and nonconfining quadratic potentials. Note that expression (48) is naively problematic for the saddle-point quadratic potential model, because the energy spectrum in this case is unbounded from below; relative density variations are, on the other hand, perfectly well defined.
Of particular interest is the derivative of the local density with respect to the chemical potential, Eq. (49), which can be directly probed by the differential tunneling conductance in a scanning tunneling spectroscopy (STS) experiment (provided that the tip density of states is constant in the studied energy range). At zero temperature, this yields the local density of states at the chemical potential energy, ρ(r, µ), since then −n_F′(ω) = δ(ω − µ). Using formula (48), we directly get expression (50). Contrary to the local density formula, expression (50) is well defined for the saddle-point quadratic potential model because it involves only states around the energy µ. Formula (50) for the local density of states is exact for any quadratic potential. One may wonder about its accuracy for an arbitrary potential landscape that is smooth on the scale of the magnetic length. We investigate this question through a careful quantitative analysis in the next subsection.
C. Quantitative aspects: when do gradient and curvature corrections need to be included?
In order to illustrate on a concrete example how the successive steps in the resummation of leading derivatives of the potential really operate, we focus here on the 2D circular confining potential whose explicit solution is given by the so-called Fock-Darwin states (see Appendix E), and investigate the temperature-dependent local density of states (49). The simplest approximation scheme, which amounts to viewing the potential term (51) in a purely local manner, i.e., V(R) ≃ V(R_0), is obtained by setting |∇V| = ζ = γ = η = 0 in Eq. (50). This recovers the usual semiclassical guiding-center result, Eq. (52). This result is in fact accurate as long as one considers temperatures higher than the energy scale associated with the drift motion, namely, l_B|∇_R V|. At lower temperatures, the resummation of all leading gradient contributions needs to be performed, which corresponds to considering the potential as locally flat (in the geometrical sense). This calculation can in fact be achieved with the previously obtained results by setting ζ = γ = η = 0 in Eq. (50), giving Eq. (53). Clearly, the scale l_B|∇_R V| provides a cutoff in the above integral, so that the single-pole divergence associated with the derivative of the Fermi-Dirac distribution function in Eq. (52) is regularized. Figure 1 displays the STS local density of states as a function of temperature for a fixed chemical potential and the particular position r_peak given by µ − E_m − V(r_peak) = 0, according to the semiclassical expression (52), the leading gradient approximation (53), and the exact solution (50), which also includes curvature effects from the full quadratic dependence of potential (27). For the sake of simplicity, we have considered µ = 0.8 ℏω_c, which corresponds to filling the lowest Landau level m = 0 only. Clearly, in Fig. 1 the approximations (52) and (53) remain accurate down to the temperatures at which curvature effects of order √γ begin to be felt; see Figs. 1 and 4. Departure from the leading gradient result is manifest in a final divergence of the exact density of states at the peak position in the limit T → 0, since the Fock-Darwin energy spectrum is discrete (with level spacing √γ) due to the confinement. As a final remark, the above discussion is quite instructive, as it clearly shows under which conditions curvature effects associated with second-order derivatives of the potential can be neglected, namely, when the temperature is higher than the energy associated with the curvature. Therefore, successive approximation schemes can be devised for a smooth arbitrary (disordered or not) potential, leading to controlled expressions for the local density of states. The whole scheme is indeed based on the existence of a hierarchy of local energy scales of the type l_B^n ∂_r^n V(r). Expression (50), which includes all second-order derivatives of the potential, thus provides an accurate estimate as long as the temperature is larger than the scales set by cubic and higher-order derivatives of the potential. In particular, it is also valid near saddle points of the potential landscape, where the drift velocity vanishes. It is thus extremely useful for interpreting local STS experiments such as in Ref. 23, and completely bypasses the need to diagonalize numerically a complicated random Schrödinger equation.
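The temperature crossover just described can be reproduced with a small numerical sketch. The semiclassical line assumes the standard guiding-center form of (52), i.e., an STS signal −n_F′(E_m + V(r) − µ), which gives 1/4T at the peak position; the gradient-resummed line models the regularization by the drift scale l_B|∇V| through a Gaussian smearing of the pole, an illustrative stand-in for the exact Eq. (53) rather than the formula itself:

```python
import numpy as np

def dnF(e, T):
    """-n_F'(e) for the Fermi-Dirac distribution at temperature T."""
    return 1.0 / (4.0 * T * np.cosh(e / (2.0 * T)) ** 2)

def peak_signal(T, drift, cut=40.0, n=20001):
    """STS signal at the peak (mu = E_m + V(r)), with the pole smeared over
    the drift energy scale l_B|grad V| (Gaussian stand-in for Eq. (53))."""
    e = np.linspace(-cut, cut, n)
    de = e[1] - e[0]
    kernel = np.exp(-0.5 * (e / drift) ** 2)
    kernel /= kernel.sum() * de          # normalize the smearing kernel
    return (kernel * dnF(e, T)).sum() * de

drift = 1.0  # l_B |grad V| in arbitrary energy units
for T in (10.0, 1.0, 0.1, 0.01):
    print(f"T={T:6.2f}  semiclassical={1.0/(4.0*T):8.3f}  "
          f"gradient-resummed={peak_signal(T, drift):8.3f}")
# The semiclassical value diverges as T -> 0; the smeared one saturates
# at a value set by the drift scale, as in Fig. 1.
```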
V. DISCUSSION: ON THE FUNDAMENTAL IMPORTANCE OF OVERCOMPLETENESS
In the light of the technical results derived in the previous sections, we formulate here some general conclusions on very fundamental issues in quantum mechanics such as the emergence of classicality and the microscopic origin of time irreversibility.
A. Emergence of classicality in quantum mechanics
It is well known that the classical Hamilton-Jacobi equations of motion can be derived from the quantum-mechanical Schrödinger equation when terms having ℏ as a prefactor can be disregarded. In other terms, classical mechanics is clearly a limit of quantum mechanics. However, capturing the precise mechanism responsible for the emergence of classical behavior in the physical properties of a system within a fully quantum-mechanical framework, i.e., at finite ℏ, appears much more complicated. The essential reason is that establishing the quantum-classical correspondence requires one to consider not only the equations of motion but also the states of the system. And the limit ℏ → 0 appears to be much more singular for the wave functions than for the energy spectrum. When dealing with this limit, we are immediately confronted with a conceptual problem relying on the fact that quantum mechanics is originally formulated in a Hilbert space spanned by a countable basis of square-integrable states, while classical dynamics occurs in a continuous phase space. We are therefore in a delicate position to reproduce the basic structure of the classical phase space.
In the particular problem under study in this paper, the set of vortex states |m, R⟩ introduces from the very beginning a continuous representation for the quantum numbers into the quantum description. Because they obey in part the coherent-state algebra 25 (note that the vortex states are very peculiar coherent states insofar as they present the coherent character only via the degeneracy quantum number R and not via the eigenvalue quantum number m, so that they can also be eigenstates of the kinetic part of the Hamiltonian, in contrast to fully coherent states), and especially a completeness relation [Eq. (4)], we can legitimately use the vortex representation for the spectral decomposition of Hamiltonian (1) provided that the potential V is a smooth function. 20 As an original motivation to work preferentially with these states, 20 the quantum numbers (m, R) provide a very intuitive and clear physical connection to the classical dynamics of the free Hamiltonian when considering the de Broglie-Madelung hydrodynamic picture 29,30 of the Schrödinger equation: the quantization of the kinetic energy into Landau levels stems only from the interference of the electronic wave function with itself due to the completion of a circular orbit around the position R, where the phase of the wave function is ill-defined. The price to pay for the continuous aspect, i.e., for introducing overcompleteness into the quantum-mechanical formalism, is the nonorthogonality of the states with respect to the degeneracy quantum number R, which reflects the quantum indeterminacy in the positions of the vortices and is accounted for in formula (4) by associating the elementary area 2πl_B² with the incremental area in the integration over the vortex positions.
Being better armed to capture the quantum-to-classical transition, it is not a complete surprise that the vortex representation leads at the mathematical level to a systematic and straightforward expansion 21 in powers of the magnetic length (which, we recall, plays the role of an effective Planck constant in the present problem) of the vortex Green's functions, and thus of the physical observables. Overcompleteness therefore turns out to be not a drawback but an advantage at the technical level! However, behind this mathematical aspect, we also see a very fundamental physical aspect, which is rarely considered in quantum mechanics when choosing a particular representation of states. Obviously, the vortex representation offers the unique opportunity to derive quantum expressions without having to implement the complete explicit form of the potential V. This is exemplified by the exact compact formula (38) for the Green's function, which embraces all possible cases of quadratic potentials. The generic form of this result actually encodes the stability of the vortex states. Indeed, the Fock-Darwin states (E3), which correspond to the exact eigenstates of Hamiltonian (1) in the presence of a circular parabolic confinement and possess rotational symmetry (see Appendix E), appear to be very unstable: one cannot expect the confinement to be perfectly circular under realistic conditions, so that the real physical state certainly does not obey the rotational symmetry. In contrast, the vortex states, which encode no preferred symmetry, turn out to be stable with respect to an arbitrarily small asymmetric smooth perturbation of the potential landscape. From this robustness property, we can expect them to be the real physical states, i.e., the most predictable ones in an experiment.
Interestingly, the present study provides an illustration of the process of superselection of states put forward by Zurek 31 to explain the emergence of classical behavior from a quantum substrate. The only important difference is that we are accounting here for an intrinsic mechanism of classicality. Indeed, it is customary in quantum mechanics to appeal to extrinsic degrees of freedom brought by an environment (surrounding the studied quantum system) to explain the appearance of classical properties through decoherence processes. As developed by several authors (see the review 31), the environment prevents certain quantum superpositions of states from being observed as a result of their high instability. Only states that survive this process of coupling to the environmental degrees of freedom have predictable consequences. As shown by Zurek et al. 32 in a model of a weakly damped harmonic oscillator, coherent states, which are known to be the states closest to the classical limit, are minimally affected by the coupling to the environment. Due to this robustness, they emerge as a preferred set of states.
In the present problem of electron dynamics in a high magnetic field, we clearly see under which conditions the overcomplete vortex representation becomes effectively selected by the dynamics. Indeed, we have noted that formula (38), derived in the vortex representation, reproduces the exact Green's functions in the simple integrable case of a 2D circular confining potential (Appendix E). In that case, the system does not yet exhibit a preference for the overcomplete set of vortex states over the complete set of Fock-Darwin eigenstates. In contrast, the case of a quadratic saddle-point potential, which simulates an open system and introduces a dynamical instability, is quite instructive. Indeed, the conventional approach of quantum mechanics with square-integrable wave functions turns out to be inadequate to determine the energy spectrum, so that one usually has to resort to another formalism, namely the scattering-states formalism. 33 These difficulties are manifestations of the fact that the spectral problem for unstable unconfined dynamical systems is not computable in the Hilbert space. The overcompleteness of the vortex representation shows precisely its relevance in this specific case of a saddle-point potential by allowing one to solve the dynamical equations exactly, on the same footing as in the confining cases, within a Green's function formalism. One can thus argue that the overcomplete set of vortex states is naturally favored by the instability of the dynamics. Noticeably, the basic dynamical object in the vortex representation is no longer the wave function but the Green's function. By inspecting the form of the generic Green's function (38), one notices that the latter cannot be written explicitly as a product of two wave functions (as is usually the case when using a complete representation) due to the presence of the nonlocal operator exp[−(l_B²/4)Δ_R] acting on the vortex wave functions [see also Eqs. (22)-(23)]. This reflects the overcompleteness of the coherent-states basis with the two-dimensional continuous quantum numbers R associated with the vortex position. It is therefore clear that it is not possible to get a single expression encompassing all possible cases of confining and nonconfining quadratic potentials in terms of wave-function eigensolutions of the Schrödinger equation. This general result can only be achieved through the introduction of an overcomplete basis of physical states.
B. Time irreversibility
An attractive feature is the close link between the transition from quantum to classical (as a result of decoherence) and time irreversibility. By time irreversibility we mean the time asymmetry due to a preferred direction of time, as exhibited by decaying states. While quantum mechanics is able to provide a clear and successful dynamical foundation for the idea of quantum levels, the problem of decaying states with lifetimes remains somewhat obscure and controversial. The difficulty in identifying the physical roots of irreversibility essentially stems from the fact that the microscopic dynamical equations are time reversible, whereas complex macroscopic systems are always characterized by a time-asymmetric evolution. Consequently, it is generally believed that irreversibility arises from the macroscopically large number of degrees of freedom affecting the time evolution of a nonisolated system. 34 There have been many different approaches to deriving an irreversible dynamical evolution starting from the Schrödinger equation. The most popular one 35 is to consider the microscopic (integrable) system as part of a larger Hamiltonian system with many degrees of freedom (the environment or reservoir). Then, after tracing over the environmental degrees of freedom (which are disregarded because they are uncontrolled and unobserved), the dynamics of the (open) quantum system is no longer described by the Schrödinger equation, which is expected to apply only to a closed system. Another possibility is to solve the quantum-mechanical equations by dealing directly with tractable models of the environment, such as a collection of harmonic oscillators. The common denominator of all these approaches is to associate time asymmetry with the external influence of a reservoir or a measurement apparatus. Irreversibility thus seemingly has an extrinsic root.
In order to better clarify its possible link with the inherent dynamics of the system, Prigogine et al. 36,37,38 demanded that irreversibility instead be directly connected with the Hamiltonian of the microscopic quantum system, without introducing extra dynamical assumptions (because, after all, the division of a global system into a system and an environment is artificial and rather a matter of taste). These authors 36,37,38 used extensions of the traditional Hilbert space through the introduction of a nonunitary change of representation, and argued with a few simple examples that time asymmetry may spontaneously arise in systems whose dynamics is nonintegrable in the Hilbert space of quantum mechanics. The problems of integration and of irreversibility then enjoy a common solution in the extended space.
In Hilbert-space quantum mechanics, the time evolution described by the Hamiltonian must be time reversible, leading to a widespread belief that intrinsic irreversibility simply does not exist. Moreover, for nontrivial, physically interesting systems, the computability of the spectral problem is generally limited, the state of the art offering only perturbative and/or effective approximate solutions. In such systems, irreversibility does appear in the derivation, but as the result of supplementary approximations to the Hamiltonian formalism of quantum mechanics. A well-known example in condensed-matter physics is the case of a disordered system, for which elastic lifetimes in the spectrum are obtained by averaging over disorder configurations. 12 In brief, in order to clarify an intrinsic mechanism of irreversibility, it is of great interest to find nontrivial physical systems that are sufficiently simple to allow exact time-asymmetric solutions.
We strongly believe that the exact solution for the electron dynamics in a high magnetic field and a given yet arbitrary quadratic potential presented in this paper precisely offers such an opportunity. We have noted in Sec. III that the Green's functions are characterized by the presence of lifetimes in the case of saddle-point potentials (when the geometric curvature γ < 0), meaning that time symmetry is broken. We thus obtained irreversibility without appealing to extra dynamical considerations, such as an environmental coupling. In other terms, we are basically in the scenario depicted by Prigogine et al. 36,37,38 One may naturally wonder how the time-reversible Schrödinger equation can then lead to irreversible processes at the mathematical level. It is often believed that the complex poles of the Green's functions correspond to eigenvalues of a non-Hermitian operator. In contrast, we would like to point out that a broken time symmetry exhibited by the states is not necessarily in contradiction with a time-invariant Hamiltonian, provided one uses a mathematical theory that distinguishes between the states and the Hermitian Hamiltonian operator. Actually, the dynamics here remains time symmetric but is realized through an overcomplete representation that permits a broken time symmetry for the states. A complete (countable) representation, for its part, does not allow time-asymmetric solutions. The overcomplete vortex representation provides a more general type of spectral decomposition of the Hamiltonian operator, which is merely based on the use of Dirac's bra-ket formalism. The extension of the eigenvalue problem to the complex plane is then purely a qualifying feature of the instability of the dynamics, thus revealing an intrinsic irreversible character of the evolution of the states.
It has been stressed by several authors 38,39,40,41 that the natural setting of quantum mechanics is the rigged Hilbert space rather than the Hilbert space alone. The rigged Hilbert space is an extended space consisting of the Hilbert space equipped with distribution theory, and was originally introduced into quantum mechanics to give a mathematical justification of Dirac's bra-ket formalism. It establishes rigorously that the spectral decomposition formula acquires meaning in the continuous spectrum as well as in the discrete spectrum, and allows the appearance of complex eigenvalues. Plane-wave eigenvectors, which are generalized eigenvectors in the space of tempered distributions, are basic examples of elements of the rigged Hilbert space that do not live in the Hilbert space. They are routinely used in the scattering-states formalism, which contains an arrow of time hidden in the choice of time-asymmetric boundary conditions: the consideration of in- and out-plane-wave states asymptotically far from the scattering region is indeed a statement of causality, expressing the fact that the state at a given position is determined by the action of a source at a retarded time. Note that causality is naturally accounted for in the very definition of the retarded and advanced Green's functions. However, in this case, the presence of the infinitesimal quantity δ in the dynamical equations [see Eq. (5)] does not automatically imply a broken time symmetry for the physical states. For this, one needs in addition a dynamical instability occurring in an unconfined system, i.e., scattering events.
The introduction of a continuous ingredient plays an important role in all microscopic derivations of irreversible processes. With the consideration of asymptotic in- and out-plane-wave states, the scattering formalism presupposes the existence of a continuum via the presence of reservoirs, so that irreversibility finally acquires an extrinsic character within this approach. Moreover, this formalism is specifically limited to open systems and is thus antagonistic to the Hilbert-space quantum mechanics of closed systems. In this paper, we have shown that, by using an overcomplete representation of coherent states belonging to the Hilbert space, such as the vortex states, it is possible to embed quantum theory in a wider formalism of which the Hilbert-space quantum mechanics of closed systems becomes a special case. Moreover, in this approach quantization effects and lifetime effects are naturally treated on the same footing. The continuous ingredient is contained in the overcompleteness property of the chosen set of quantum numbers. The price to pay when working in a coherent-states representation is that the wave function must be given up as the fundamental quantity of quantum theory and replaced by the Green's function. It is worth emphasizing that overcompleteness does not necessarily imply a loss of information and time-symmetry breaking. For this, one needs in addition an instability of the dynamical motion related, e.g., to the presence of saddle points in the potential landscape. In this case, the overcompleteness of the representation 22 is necessary to obtain a solution of the spectral problem. The basic reason is that the crossing of the equipotential lines at the saddle-point energy (which schematically looks like a collision process and can be seen as a bifurcation of a path), together with the openness of the system, destroys the trajectory as well as the Hilbert-space description. The phenomenon of instability thus somehow imposes dealing directly with probabilities to describe the dynamical evolution of the physical states (which necessarily belong to the Hilbert space). It is worth noting that we then obtain a description which, from the point of view of its structure, is isomorphic to classical mechanics.
We have seen that irreversibility arises as a selection principle from the time-invariant Hamiltonian. The states selected by the unstable dynamics thus appear to be less symmetric than one would expect from the Hamiltonian description. This situation is reminiscent of the well-known spontaneous symmetry breaking occurring, e.g., in ferromagnetism. In the presence of a dynamical instability, bra and ket vortex states describe physically distinct states. Finally, we note that a critical ingredient for obtaining the time-symmetry breaking in our solution is quantum tunneling within an infinite, i.e., spatially unconfined, system (otherwise, the physical quantum numbers describing the dynamics are necessarily discrete and the evolution unitary).
VI. CONCLUSION
In this paper, we have built a Green's function formalism based on the use of an overcomplete semicoherent vortex representation to study electron quantum dynamics in high magnetic fields and in a smooth potential landscape. Within this formalism, we have shown that it is possible to derive, in a controllable way, approximate quantum expressions, e.g., for the local density of states, for an arbitrary potential smooth at the scale of the magnetic length. Moreover, we have obtained, in the limit of negligible Landau level mixing, an exact expression for the electronic Green's function which encompasses all possible cases of quadratic potentials. We have argued that this generic result, which is made possible by the use of an overcomplete representation of states belonging to the Hilbert space, is a manifestation of a stability property of the vortex quantum numbers. We have shown that the overcompleteness of the vortex representation does not introduce de facto a loss of information, since we are able to reproduce the solutions for the exactly solvable (integrable) cases of parabolic 1D and 2D confining potentials, which can be obtained by standard wave-function calculations. In contrast, we have found that a loss of information, associated with the introduction of a probabilistic description of the physical processes and concomitant with the appearance of lifetimes (synonymous with time-symmetry breaking), arises in the saddle-point quadratic potential model. The vortex representation turns out to be especially relevant in this latter case by providing, in the limit of negligible Landau level mixing, exact physical insight into the quantum tunneling processes originating at the saddle point. Therefore, we have explicitly proved that time irreversibility does not result from supplementary approximations to the Hamiltonian formalism of quantum mechanics, but arises naturally in the spectral decomposition of the Hamiltonian from the formulation of the dynamics in this overcomplete vortex representation of states. With the present analysis, we deduce that the minimal ingredient needed to obtain solutions of the Hamiltonian formalism exhibiting a broken time symmetry is an instability of the single-particle dynamics, as occurs from quantum tunneling at the saddle points of the potential landscape, manifesting itself in an unconfined (thus open) system. Therefore, besides permitting an efficient capture of the quantum-to-classical transition, the overcompleteness property of the representation allows the introduction of an intrinsic irreversibility at the microscopic level.

APPENDIX A: EQUIVALENT FORMS OF THE DYSON EQUATION

Dyson equation (15) has been rewritten in the ω_c = ∞ limit, and we aim here at getting a simpler yet equivalent form that trivializes the problem of local potential gradients. This can be achieved through the substitution of functions (17) and (18), which gives Eq. (A1). Going to Fourier space permits one to rewrite the right-hand side of expression (A1) as a single global operator. Indeed, defining the Fourier transforms (A2) and (A3) and inserting these expressions into the right-hand side of Eq. (A1), important simplifications occur, leading to Eq. (A4). The global operator exp[i l_B² (p_y q_x − p_x q_y)/2] appearing there can then be written back in real space, providing the final expression given in Eq. (20).
We note in passing that the other Dyson equation (i.e., G = G_0 + G V G_0) provides a second equation (A5) satisfied by the function g_m, which may be mapped in a similar way onto the corresponding equation (A6) for the function g̃_m. A more explicit expression for the Dyson equation can then be obtained by taking the symmetric sum of Eqs. (20) and (A6) and, afterward, expanding the exponential function and using the binomial theorem, which yields Eq. (A7). Note that the difference of Eqs. (20) and (A6) yields another equation, (A8), which may be useful in solving Eq. (A7) (e.g., in the case of a quadratic potential; see Sec. III).
APPENDIX B: MODIFIED VORTEX WAVE FUNCTIONS
Our aim in this appendix is to prove expression (22). Let us first analyze the differential operator Ô defined in Eq. (B1). Applying it to a function f(R) and introducing the Fourier transform of f, we get Eq. (B2). Using the inverse Fourier transform, we obtain Eq. (B3). The integral over q is formally divergent. We circumvent this problem by temporarily treating the parameter ξ = −l_B²/4 as positive (an analytic continuation). The resulting Gaussian integral can then easily be calculated, which finally yields Eq. (B6). We deduce from this calculation that the operator Ô is nothing but a convolution operator with a Gaussian kernel. We now apply it to f(R) = Ψ*_{m,R}(r′) Ψ_{m,R}(r). Using formula (B6) and the explicit expression (3) of the vortex wave functions, we get Eq. (B7), where we have made the change of variable η = u − c + i d × ẑ, with d = (r′ − r)/2 and c = (r′ + r)/2. We are again in the presence of a formally divergent integral. As above, we introduce the parameter ξ and use the trick (B8) to perform the Gaussian integral over η in Eq. (B7). The remaining Gaussian integral (B8) can now be straightforwardly evaluated (note that the contours of integration can be deformed to the real axes using the analyticity of the integrand). We finally find Eq. (B9). Inserting the definitions of the parameters c and d in terms of the positions r and r′ into Eq. (B9), we directly arrive at expressions (22) and (23).

APPENDIX C: CHECKING THE VORTEX FORMALISM: CASE OF A 1D PARABOLIC POTENTIAL

Standard derivation

In the particular case of a 1D parabolic potential, given by Eq. (C1), the wave functions and the energy spectrum of Hamiltonian (1) can be found by solving the Schrödinger equation directly, using well-known standard methods. The relevant quantum numbers are a nonnegative integer n, which labels the Landau levels, and a continuous quantum number p_y playing the role of the momentum in the y direction. In the Landau gauge A = Bx ŷ, the wave functions and the energy spectrum read as Eqs. (C2) and (C3), respectively, where Ω = (ω_c² + ω_0²)^{1/2} and L = (ℏ/m*Ω)^{1/2} are the renormalized cyclotron pulsation and magnetic length, respectively, and H_n denotes the nth Hermite polynomial.
In the absence of Landau level mixing, one has to consider ω_c ≫ ω_0, keeping all terms of order ω_0²/ω_c and neglecting higher powers of the ratio ω_0/ω_c. We thus have Ω ≈ ω_c + ω_0²/2ω_c and L ≈ l_B. From Eqs. (C2) and (C3), the Green's function in this limit of negligible Landau level mixing thus reads as Eq. (C4), with

E_{np} ≈ ℏω_c (n + 1/2) + (ℏω_0²/2ω_c)(n + 1/2) + V(p_y l_B²). (C5)
Derivation within the vortex formalism
Now, we show how one can recover the Green's function (C4) of a 1D parabolic potential from the vortex formalism. We start with expressions (21) and (24) and exploit the fact that the effective potential ṽ_m is independent of the coordinate Y for a 1D potential. The integral over Y can then be performed exactly, making use of expressions (22) and (23). Considering identity (C7), where c and d are defined in Appendix B, Eq. (C6) is rewritten as Eq. (C8). It can be checked that the algebraic relation (C9) holds. Inserting formula (C9) into Eq. (C8) and reintroducing the variables r and r′ everywhere in place of c and d, we find the final form (C10) of the Green's function. Introducing p_y = X/l_B² and making the term ṽ_m(X) explicit by inserting expression (C1) into definition (18) of ṽ_m, we see that expression (C10) corresponds exactly to Eq. (C4), up to a phase factor exp[i(xy − x′y′)/2l_B²], which comes from the fact that we work here within the vortex formalism in the symmetric gauge, and not in the Landau gauge.
APPENDIX D: SOLVING THE DYNAMICAL EQUATION FOR POTENTIAL LINES
Differential equation (31) is second order in the derivative with respect to E, but only first order in E itself. It obviously becomes second order in τ and first order in the derivative with respect to τ upon going to the Fourier component F_m^{R,A}(τ). So, in order to solve Eq. (31), we write

f_m^{R,A}(E) = ∫ dτ F_m^{R,A}(τ) e^{−iEτ} (D1)

and substitute this form into Eq. (31) to get

∫ dτ F_m^{R,A}(τ) [ω̃_m − E ± iδ − iγτ − (γE + η)τ²] e^{−iEτ} = ∫ dτ F_m^{R,A}(τ) [ω̃_m ± iδ − iγτ − ητ² − i(γτ² + 1) d/dτ] e^{−iEτ}. (D2)

Doing an integration by parts, we have

1 = −i(1 + γτ²) F_m^{R,A}(τ) e^{−iEτ} |_{−∞}^{+∞} + ∫ dτ e^{−iEτ} [ω̃_m ± iδ + iγτ − ητ² + i(1 + γτ²) d/dτ] F_m^{R,A}(τ). (D3)

Finally, taking the Fourier transform of this equation, we find that F_m^{R,A}(τ) is governed by the first-order differential equation (D4), provided that the integrated term in Eq. (D3) vanishes, i.e.,

(1 + γτ²) F_m^{R,A}(τ) → 0 (D5)

when τ → ±∞. Equation (D4) is readily solved, yielding the solution (D6). Here θ(τ) is the Heaviside function. For γ < 0, the solution must be understood in the sense of Eqs. (D7) and (D8), defined for √−γ |τ| ≤ 1. The variable t in expression (D6) actually plays the role of the time, since it is conjugate to the energy ω, which enters the expression via the quantity ω̃_m; see definition (32). Because the solutions of the homogeneous equation do not respect time causality, only the particular solution of the inhomogeneous equation (D4) has been considered.
For γ ≤ 0, solution (D6) fulfils requirement (D5) for any value of the parameter η (for γ = 0, condition (D5) is obeyed with the help of the infinitesimal quantity ±iδ, while for γ < 0 we have F_m^{R,A} = 0 for √−γ |τ| ≥ 1). However, for γ > 0, condition (D5) is not satisfied, so that expression (D1) together with formula (D6) does not yield a solution of the initial Eq. (31). Nevertheless, the solution of Eq. (31) for γ > 0 can be inferred from result (D6) by noting that the problem actually originates from the saturation of the function t(τ) when τ → ±∞. Indeed, by considering t instead of τ as the relevant variable and extending its domain of definition to the whole real axis, we can exploit the infinitesimal quantity ±iδ to get rid of the boundary term at infinity. For γ > 0, it can easily be checked that the function

f_m^{R,A}(E) = ∓i ∫ dt θ(±t) cos(√γ t) e^{−i(E+η/γ)τ(t)} e^{i(ω̃_m+η/γ±iδ)t} (D9)

with the function τ(t) given by

τ(t) = tan(√γ t)/√γ (D10)

is a solution of Eq. (31). Here integral (D9) is defined in the sense of a Cauchy principal value at the points √γ t = π/2 + nπ. This provides the exact result (35) for the vortex Green's function of an arbitrary quadratic potential in the ω_c → ∞ limit.
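As a numerical illustration of the lifetime effects, one can Fourier transform the decaying kernel 1/cosh(√−γ t), which the main text identifies as the term ensuring the convergence of integral (35) for γ < 0. The sketch below drops all phase factors and prefactors of Eq. (35), so it only demonstrates the broadening scale: the resulting spectral line has a half-width proportional to √−γ, i.e., to the elastic scattering rate:

```python
import numpy as np

def linewidth(gamma_neg, tmax=200.0, nt=40001, emax=3.0, ne=1201):
    """Half-width at half-maximum of A(E) ~ Re int_0^inf dt e^{iEt}/cosh(a t),
    with a = sqrt(-gamma); analytically A(E) ~ sech(pi E / 2a), so HWHM ~ a."""
    a = np.sqrt(-gamma_neg)
    t = np.linspace(0.0, tmax, nt)
    dt = t[1] - t[0]
    E = np.linspace(0.0, emax, ne)
    A = np.array([(np.cos(e * t) / np.cosh(a * t)).sum() * dt for e in E])
    below_half = A < 0.5 * A[0]
    return E[below_half][0] if below_half.any() else np.nan

for g in (-0.01, -0.04, -0.16):
    print(f"gamma={g:+.2f}  sqrt(-gamma)={np.sqrt(-g):.2f}  "
          f"HWHM={linewidth(g):.3f}")
# The half-width scales linearly with sqrt(-gamma), as stated in Sec. III.
```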
APPENDIX E: CHECKING THE VORTEX FORMALISM: CASE OF A 2D PARABOLIC CONFINING POTENTIAL
Recovering the set of two discrete quantum numbers of the circular confinement potential from a basis of states characterized by both discrete and continuous quantum numbers appears in principle to be a very challenging task. We show that the quantization in the confining potential appears in the vortex Green's function formalism in a rather different way from the usual derivation in the wave-function formalism.
Standard derivation
To benchmark our results for the Green's functions, we compare the general expression derived in Sec. IV using the vortex-states formalism with the exact solution for a circular confining potential. The potential profile

V(r) = (1/2) m* ω_0² (x² + y²) (E1)

leads, in a homogeneous magnetic field, to the well-known Fock-Darwin spectrum (E2), where n = 0, 1, 2, ... is a nonnegative integer and l = 0, ±1, ±2, ... a positive or negative integer. Here Ω = (ω_c² + 4ω_0²)^{1/2} is the renormalized cyclotron pulsation. The normalized wave functions associated with the energy spectrum (E2) are written in polar coordinates r = (r, θ) as Eq. (E3),
where L_n^{|l|}(z) corresponds to the generalized Laguerre polynomial of degree n, and L = (ℏ/m*Ω)^{1/2} is the renormalized magnetic length.
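For reference, a standard form of the Fock-Darwin spectrum (E2) consistent with the Landau level index m = n + (|l| − l)/2 used below is (sign conventions for l vary between references):

E_{n,l} = (ℏΩ/2)(2n + |l| + 1) − (ℏω_c/2) l,   Ω = (ω_c² + 4ω_0²)^{1/2},

which indeed reduces to E = ℏω_c(m + 1/2), with m = n + (|l| − l)/2, in the limit ω_0 → 0.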
The local density can be directly calculated from the knowledge of the energy spectrum and the exact wave functions, and is given by Eq. (E4). The projection onto a given Landau level is again obtained by considering ω_c ≫ ω_0, keeping terms of order ω_0²/ω_c. This is equivalent to taking ω_c → ∞ with l_B finite. We thus have Ω ≈ ω_c + 2ω_0²/ω_c and L ≈ l_B, so that the energy spectrum becomes Eq. (E5), with m = n + (|l| − l)/2 ≥ 0 the Landau level index. According to the second term on the right-hand side of Eq. (E5), the Landau levels are generally nondegenerate as a result of the circular confining potential characterized by the frequency ω_0 ≪ ω_c. If we restrict ourselves to the lowest Landau level contribution to the local density for the sake of simplicity and consider the absence of Landau level mixing, the exact local density simplifies to Eq. (E6). The different parameters for the circular confining potential are ζ = l_B² m* ω_0² = ℏω_0²/ω_c, γ = ζ²/4, and η(r) = l_B² ζ |∇V(r)|²/8. Using these values and the general formula (48) for the local density obtained from the vortex formalism, we get expression (E11) for the lowest Landau level contribution (m = 0) to the local density. Finally, by noting that the first term on the right-hand side of Eq. (E11) can be rewritten as Eq. (E12), we arrive at formula (E6) for the local density. This establishes the exact equivalence of the general formula (48) and of Eq. (E6) in the particular case of a circularly symmetric confining potential.
"year": 2009,
"sha1": "822da0d3bdd854b2113aff92d578ba9f581a8cb2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0906.3375",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "822da0d3bdd854b2113aff92d578ba9f581a8cb2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Detect Depression from Social Networks with Sentiment Knowledge Sharing
Social networks play an important role in propagating people's viewpoints, emotions, thoughts, and fears. Notably, following the lockdown periods of the COVID-19 pandemic, the issue of depression has garnered increasing attention, with a significant portion of individuals resorting to social networks as an outlet for expressing their emotions. Using deep learning techniques to discern potential signs of depression from social network messages facilitates the early identification of mental health conditions. Current approaches to detecting depression through social networks typically rely solely on analyzing the textual content, overlooking other potentially useful information. In this work, we conduct a thorough investigation that reveals a strong correlation between depression and negative emotional states. Integrating such associations as external knowledge can provide valuable cues for detecting depression. Accordingly, we propose a multi-task training framework, DeSK, which utilizes shared sentiment knowledge to enhance the efficacy of depression detection. Experiments conducted on both Chinese and English datasets demonstrate the cross-lingual effectiveness of DeSK.
Introduction
In recent years, mental health has become a growing focus of public attention. Particularly following the outbreak of the COVID-19 pandemic, a surge in the prevalence of common mental health disorders has been observed [2]. Among these disorders, depression stands out as the most prevalent, exhibiting a strong correlation with substantial morbidity and mortality rates [4]. Traditional methods for diagnosing depression usually rely on interviews with patients or self-report questionnaires, which are time-consuming and error-prone.
Social networks offer a pathway for capturing behavioral attributes relevant to an individual's cognition, emotional state, communication patterns, daily activities, and social interactions. The emotions conveyed and the linguistic patterns employed in posts on social networks can potentially serve as indicators of sentiments such as feelings of insignificance, culpability, powerlessness, and intense self-disdain, which are characteristic of major depressive disorder [5]. Hence, it is paramount to comprehend and analyze the emotions that individuals convey through social networks, especially during difficult times such as the pandemic. Furthermore, the timely identification of initial indicators of depression is of great importance, as it enables prompt intervention and assistance for those in need.
There is a large body of existing work analyzing depression using social network data. Early works are usually based on statistical or traditional machine-learning approaches. [20] proposed a multi-modal depressive dictionary learning model specifically for detecting users with depressive tendencies on Twitter. More recently, deep learning methods have exhibited remarkable advancements, attaining notable performance in depression detection. [3] proposed a deep learning model named X-A-BiLSTM for depression detection in imbalanced social network data. Leveraging the capabilities of the Transformer model, [12] demonstrated significantly improved accuracy in detecting depression among social network users.
Nevertheless, existing methods predominantly focus on using pre-trained models or deeper networks to capture the semantic aspects of sentences. Neglecting the sentiment features of the target sentences and external sentiment knowledge leads to unsatisfactory performance [1,24] of neural networks in depression detection. Because depression is usually highly correlated with negative emotions, these sentiment features can contribute to more comprehensive and accurate depression detection. Guided by the intuition that sentiment serves as a direct clue to depression [17], we introduce external sentiment knowledge into depression detection to enhance performance. Specifically, our approach incorporates external sentiment knowledge into the depression detection model by leveraging a multi-task learning framework. This framework facilitates the simultaneous learning of sentiment analysis and depression detection, enabling the model to benefit from the additional information provided by external sentiment knowledge. The main contributions of this work are summarized as follows:

1. We propose a Depression detection based on Sentiment Knowledge model, called DeSK, which employs multi-task learning to acquire and leverage external sentiment knowledge that has been overlooked by previous work.

2. Considering the scarcity of publicly available datasets pertaining to depression in Chinese social networks, we collect and construct a dataset focused on depression from the Weibo platform. This dataset was created through self-diagnosis methods for further research and analysis. 5
3. Experimental results on the Reddit dataset show that DeSK outperforms state-of-the-art methods. Ablation tests validate the model components and demonstrate their efficacy in detecting depression.
Related Work
Depression detection and sentiment analysis have been extensively studied in the field of mental health analysis using social network data. Early approaches focused on statistical or traditional machine learning methods that detect depression based on textual content, relying mainly on linguistic patterns and semantic information to identify signs of depression. However, they often overlooked the crucial role of sentiment features in depression detection. Multi-task learning offers a promising solution to this shortcoming: by jointly learning the two tasks, models can benefit from shared knowledge and improve performance in both domains.
Depression Detection
Trained professional psychologists rely on various methods, including written descriptions provided by individuals and psychometric assessments, to assess and diagnose depression accurately [16]. Social-network-based sentiment analysis is an alternative depression detection approach that has risen in recent years. Researchers can extract valuable insights from the vast amount of data available on social networks, such as patterns, trends, and user-generated content related to depression. By extracting various behavioral attributes from social network platforms, such as social engagement, mood, speech and language style, self-networking, and mentions of antidepressants, [5] provided estimates of depression risk. [3] proposed a deep learning model (X-A-BiLSTM) to handle the imbalanced data distributions of real-world social networks for depression detection. [9] proposed a deep visual-textual multi-modal learning approach aimed at acquiring robust features from both normal users and users diagnosed with depression. And [8] developed a depression lexicon based on domain knowledge of depression to facilitate better extraction of depression-related lexical features.
Sentiment Analysis
Sentiment analysis seeks to examine individuals' sentiments or opinions concerning various entities, including but not limited to topics, events, individuals, issues, services, products, organizations, and their associated attributes [29]. Over the past few years, the growth of social networks has significantly propelled the advancement of sentiment analysis. To date, the majority of sentiment analysis research is based on natural language processing techniques. [7] combined text features and machine learning methods to classify social network texts into six types of emotions. [21] employed machine learning algorithms, specifically Naive Bayes (NB) and the k-nearest neighbor algorithm (KNN), to discern the emotional content of Twitter messages and classified them into four distinct emotional categories.
Multi-task Learning
Multi-Task Learning (MTL) is a machine learning paradigm that aims to enhance the generalization performance of multiple related tasks by leveraging the valuable information inherent in them [30]. [31] proposed facial landmark detection combined with head pose estimation and facial attribute inference. [32] incorporated sentiment knowledge into a hate speech detection task by employing a multi-task learning framework.
Methodology
In this section, we introduce DeSK, which exhibits an enhanced capability to detect depression through the integration of target-sentence sentiment and external sentiment knowledge. The overall architecture is shown in Figure 1. The framework comprises three components: 1) an input layer, which captures the sentiment features of the sentence; a depressed-word dictionary is employed to determine whether each word exhibits depressed speech characteristics, and this information is appended to the word embedding as sentiment marker bits; 2) a multi-task learning framework, which leverages the strong correlation between sentiment analysis and depression detection to model task relationships and acquire task-specific features through shared sentiment knowledge; multiple feature extraction units, each consisting of a multi-head attention layer and a feed-forward neural network, are utilized for this purpose; and 3) a gated attention layer, which calculates the probability of selecting each feature extraction unit.
Input Layer
The central idea of DeSK revolves around the notable association between depression and negative emotions. We hypothesize that texts expressing depressive sentiments frequently contain explicit usage of negative emotion words [23]. Hence, directing attention toward capturing derogatory words within a sentence can enhance depression detection. More specifically, we capture this sentiment information by utilizing sentiment marker bits.
Sentiment Marker Bits. Our work is based on the intuition that certain words of an exceedingly negative nature, expressing, e.g., sadness [15] or disgust [28], have a more substantial impact on the assessment of depression. To address this, we have constructed a depressed-word dictionary whose vocabulary comes from the NRC Emotion Lexicon [14]. The dictionary is employed to classify the words of a social network text into two categories: depressed words and non-depressed words. The category assignment of each word is initialized randomly as a vector, which we call its sentiment marker bits: S = (s_1, s_2, ..., s_n).
Word Embedding. Our word embedding is based on distributed representations of words [13], mapping words into a high-dimensional feature space while preserving their semantic information. We represent each text as T = {w_1, w_2, ..., w_n} using word embeddings, where w_i ∈ R^D denotes each token embedding and D is the dimension of the word vectors.
Owing to the linear structure observed in typical word embedding representations, it is feasible to meaningfully combine words by element-wise operations on their vector representations. To effectively leverage the information contained within depressed words, we integrate each word embedding with its sentiment marker bits. In terms of implementation, a simple vector concatenation is used, and the embedding of a word is calculated as v_i = w_i ⊕ s_i.
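A minimal PyTorch sketch of this input layer: each token embedding w_i is concatenated with a small learned marker embedding s_i selected by the dictionary lookup. Class and variable names, as well as the embedding sizes, are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class SentimentMarkedEmbedding(nn.Module):
    """Word embedding concatenated with sentiment marker bits:
    v_i = w_i (+) s_i, where s_i depends on whether the word appears
    in the depressed-word dictionary."""

    def __init__(self, vocab_size, embed_dim=300, marker_dim=8):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, embed_dim)
        # Two marker categories: 0 = not in dictionary, 1 = depressed word.
        self.marker_emb = nn.Embedding(2, marker_dim)

    def forward(self, token_ids, is_depressed_word):
        w = self.word_emb(token_ids)             # (B, n, embed_dim)
        s = self.marker_emb(is_depressed_word)   # (B, n, marker_dim)
        return torch.cat([w, s], dim=-1)         # (B, n, embed_dim + marker_dim)

# Example usage with a hypothetical dictionary lookup already performed:
emb = SentimentMarkedEmbedding(vocab_size=30000)
tokens = torch.randint(0, 30000, (2, 16))
flags = torch.randint(0, 2, (2, 16))             # 1 where the word is "depressed"
print(emb(tokens, flags).shape)                  # torch.Size([2, 16, 308])
```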
Multi-task Learning Framework
Considering the diverse influences of countries, regions, religions, and cultures, meaning in many languages is often embedded within the underlying semantics rather than solely reflected in sentiment words. For instance, the word "blue" may not explicitly convey a sense of depression, but it often carries a pessimistic semantic meaning. As this example shows, depression-related text frequently includes negative sentiment words, but relying solely on the sentiment information within the target sentence itself is often insufficient to achieve satisfactory depression detection performance.
The task of determining the sentiment of a text based on its semantic information is commonly referred to as sentiment analysis. Extensive research has been conducted on sentiment analysis for many years, resulting in the availability of abundant high-quality labeled datasets. In contrast, in the field of depression detection, the availability of high-quality labeled data is limited, leading to a restricted vocabulary and inherent biases during training. In multi-task learning, the commonly used framework employs a shared-bottom structure in which different tasks share the bottom hidden layers. While this structure can mitigate overfitting risks, its effectiveness may be impacted by task dissimilarities and differences in data distribution. In DeSK, we instead incorporate multiple identical feature extraction units that take the output of the previous layer as shared input and pass their outputs on to the subsequent layer, allowing end-to-end training of the entire model. Each feature extraction unit consists of a multi-head attention layer and two feed-forward neural networks.
Multi-head Attention Layer. To capture long-distance dependencies between words in a sentence, we employ the multi-head self-attention mechanism introduced by [25]. This approach computes the semantic similarity and semantic features of each word with respect to every other word, allowing enhanced connectivity and information exchange throughout the sentence. For a given query Q ∈ R^{n1×d1}, key K ∈ R^{n1×d1}, and value V ∈ R^{n1×d1}, the attention output is Attention(Q, K, V) = softmax(QK^T / √d1) V, and the final feature representation concatenates the outputs of the individual heads.

Pooling Layer. Following the observation [19] that combining max-pooling and average-pooling performs significantly better than either pooling strategy alone, we apply both and concatenate the results: p = [max-pool(H); avg-pool(H)]. Leveraging both strategies captures different aspects of, and variations within, the features, improving overall performance.
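A single-head NumPy sketch of the attention and dual-pooling computations (a minimal illustration; head splitting and learned projections are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def dual_pool(H):
    # Concatenate max-pooling and average-pooling over the token axis.
    return np.concatenate([H.max(axis=0), H.mean(axis=0)])

H = np.random.randn(7, 64)                  # 7 tokens with 64-d features
print(dual_pool(attention(H, H, H)).shape)  # (128,)
```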
Gated Attention Layer
The gated attention mechanism enables the model to dynamically select a subset of feature extraction units based on the input. Each task has its own gate, and the weight selection varies for different tasks. The output of a specific gate represents the probability of selecting a different feature extraction unit. Multiple units are then weighted and summed to obtain the final representation of the sentence, incorporating the contributions from the selected feature extraction units.
The output for task k is computed as y^k = Σ_{i=1}^{n} g^k(x)_i f_i(x), where f_i denotes the i-th feature extraction unit, g^k(x) = softmax(W_g^k x) is the gate for task k, and k = 1, ..., K indexes the tasks.
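A sketch of the gate computation in the spirit of a multi-gate mixture-of-experts (all names and shapes are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_output(x, experts, W_gate):
    """y^k = sum_i g^k(x)_i f_i(x), with gate g^k(x) = softmax(W_g^k x)."""
    g = softmax(W_gate @ x)                      # unit-selection probabilities
    outputs = np.stack([f(x) for f in experts])  # (n_units, d_out)
    return g @ outputs                           # weighted sum of unit outputs

rng = np.random.default_rng(1)
experts = [lambda x, M=rng.normal(size=(32, 16)): np.tanh(M @ x)
           for _ in range(3)]                    # 3 shared feature units
W_gate = rng.normal(size=(3, 16))                # one such gate per task
print(gated_output(rng.normal(size=16), experts, W_gate).shape)  # (32,)
```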
Model Training
For the training process, the loss function used in DeSK combines cross entropy with L2 regularization: L = −Σ_i Σ_j y_{ij} log(ŷ_{ij}) + λ‖θ‖²₂, where i is the index of sentences, j is the index of classes, y_{ij} is the ground-truth label, ŷ_{ij} the predicted probability, and λ the regularization weight.
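A minimal sketch of this objective (λ and the example values are illustrative):

```python
import numpy as np

def desk_loss(probs, labels, params, lam=1e-4):
    """L = -sum_i sum_j y_ij * log(p_ij) + lam * ||theta||_2^2."""
    ce = -np.sum(labels * np.log(probs + 1e-12))
    l2 = lam * sum(np.sum(p ** 2) for p in params)
    return ce + l2

probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])  # predicted probabilities
labels = np.array([[1, 0, 0], [0, 1, 0]])             # one-hot ground truth
print(desk_loss(probs, labels, [np.ones((4, 4))]))
```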
Experiment
In this section, we begin by introducing the datasets used in our study as well as the evaluation metrics employed for performance assessment. Then, we present a series of ablation experiments conducted to showcase the effectiveness of DeSK. Finally, we provide a comprehensive analysis of the results obtained from these experiments.
Datasets
To explore whether sharing sentiment knowledge can enhance the performance of depression detection, we utilize three publicly available social-network depression-detection datasets and one sentiment dataset. To validate the cross-language nature of the model, we additionally use a depression-detection dataset that we constructed and a Chinese sentiment-analysis dataset. The specifics of these datasets are presented in Table 1.
Reddit Depression Dataset (RDD) was collected [18] from the archives of subreddits such as "r/MentalHealth," "r/depression," "r/loneliness," "r/stress," and "r/anxiety," online platforms where individuals share experiences and discussions related to mental-health issues. The collected posts were annotated by two domain experts, who assigned one of three labels denoting the level of signs of depression: "Not depressed," "Moderate," and "Severe."

Tweet Depression Dataset 60k (TDD-60k), as utilized in our study, carries four depression labels [22], each corresponding to a different level of depression signs. Incorporating four labels aims to capture a comprehensive range of depression severity in the Twitter data.
Tweet Depression Dataset 8k (TDD-8k) is from Hugging Face. The dataset exhibits a near-equal distribution of positive and negative examples, i.e., a balanced representation of the two classes.
Sentiment Analysis (SA) is from Kaggle 2018. The SA dataset contains a higher number of positive cases and a relatively smaller number of negative cases. As the test set is unlabeled, we rely solely on the training set for our analysis and model training.
Chinese Weibo Depression Dataset (CWDD) was compiled by collecting tweets from depression-related communities on Weibo. It underwent annotation and organization to ensure a balanced distribution of positive and negative cases, thereby achieving comparable proportions between the two classes. The dataset consists of 10,348 tweets from Weibo classified as depressed, while there are 7,562 tweets categorized as non-depressed.
Weibo Sentiment 100k (WS) consists of over 100,000 comments from Sina Weibo, which have been tagged with emotion labels. It contains 59,993 positive comments and 59,995 negative comments, making it a balanced dataset in terms of positive and negative sentiment. In consideration of the proportion of sentiment data and depression data, we have specifically chosen 10,500 instances from the dataset for the purpose of training. This selection process ensures a balanced representation of both sentiment and depression-related data in the training set.
Given the cross-language nature of our experiments, it is essential to construct separate depression word dictionaries for English and Chinese. The English depression dictionary is constructed with reference to the NRC Emotion Lexicon [14]. The NRC Emotion Lexicon is a widely recognized resource that provides a comprehensive collection of words annotated with their associated emotions. Our Chinese depression word dictionary is constructed based on the Dalian University of Technology Chinese Emotion Vocabulary Ontology Database [27]. This database is a recognized and comprehensive resource that contains a wide range of Chinese words and phrases annotated with their corresponding emotional categories.
Baselines and Metrics
Baselines. We compare the performance of DeSK with the following baselines to evaluate its effectiveness.
Doc2vec, proposed by [13], is an unsupervised learning algorithm that represents documents as fixed-length numerical vectors. It is an extension of the popular Word2Vec algorithm used to generate word embeddings.
BERT, proposed by [6], is a pre-trained language model used here to capture features for depression detection.
RoBERTa, proposed by [11], is an enhanced version of the BERT model, which introduces various optimizations to improve its performance. These optimizations focus on refining the underlying architecture and training process.
Metrics. In the depression detection task, we employ two evaluation metrics, namely Accuracy (ACC) and Macro F1, to assess the performance of DeSK.
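Both metrics can be computed with scikit-learn; a small illustrative check:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 1, 0, 2]   # e.g., Not depressed / Moderate / Severe
y_pred = [0, 1, 1, 1, 0, 2]
print("ACC:", accuracy_score(y_true, y_pred))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))
```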
Training Details
In the experiments, we use the following configurations for the different components of the model. In the input layer, all word vectors are initialized with GloVe Common Crawl embeddings (840B tokens) with a dimension of 300. The category embeddings are initialized randomly with a dimension of 100.
For the sentiment knowledge sharing layer, we employ a multi-head attention mechanism with four heads. The first Feed-Forward network consists of a single layer with 400 neurons, while the second Feed-Forward network includes two layers with 200 neurons each. Dropout is applied after each layer with a dropout rate of 0.1.
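A PyTorch-style sketch of one feature extraction unit with the reported sizes (the input width of 400 is an assumption, matching a 300-d word vector concatenated with a 100-d category embedding):

```python
import torch.nn as nn

D_IN = 400  # assumed input width: 300-d GloVe vector + 100-d category embedding

# One feature extraction unit: multi-head attention plus two feed-forward
# networks, with the head count, layer sizes and dropout rate given above.
attention = nn.MultiheadAttention(embed_dim=D_IN, num_heads=4, dropout=0.1)
ffn1 = nn.Sequential(nn.Linear(D_IN, 400), nn.ReLU(), nn.Dropout(0.1))
ffn2 = nn.Sequential(
    nn.Linear(400, 200), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(200, 200), nn.ReLU(), nn.Dropout(0.1),
)
```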
The RMSprop optimizer is used with a learning rate of 0.001, and the models are trained with mini-batches of 512 instances. To prevent overfitting, we apply learning-rate decay and early stopping during training.

The overall performance comparison is summarized in Table 2. DeSK outperforms the other neural-network models in both accuracy and F1 score. In particular, compared to Doc2Vec, DeSK achieves a 10% increase in F1 score, and it surpasses even the strong universal-encoder baseline. Furthermore, DeSK is easier to implement and has fewer parameters than the other models, making it more accessible and efficient for practical applications.

Additionally, we evaluated the cross-language capability of DeSK on further datasets, including the Chinese Weibo dataset. The results, shown in Table 3, indicate that DeSK analyzes and processes text from different languages effectively, which further broadens its applicability across linguistic contexts.

We also analyzed the impact of the individual components of DeSK. The results are presented in Table 4, where "-ss" denotes the ablation of both sentiment knowledge sharing and sentiment marker bits, while "-s" denotes that sentiment data was not used as input and only sentiment marker bits were retained. These findings quantify the contribution of each component to overall performance.

Finally, we evaluated the impact of gated attention in DeSK. As shown in Table 5, performance improves further when gated attention is used. By learning how the outputs of the different gates interact and contribute to the final representation, DeSK better captures the dependencies and correlations between tasks, improving its handling of complex task relationships.
The Influence of Sentiment Dataset
As noted earlier, depression detection and sentiment analysis are strongly correlated, suggesting that sharing sentiment knowledge can enhance depression detection. However, the relative data proportions of the two tasks in multi-task learning can influence the performance of each task [10]. To investigate the influence of the dataset ratio, we select the smallest dataset, TDD-8k, and analyze the effect of sampling the SA (Sentiment Analysis) dataset to different sizes. Figure 2 plots model performance against the ratio of sentiment data to depression data. Performance tends to be poor when the sentiment dataset is small relative to the depression dataset, and the model performs best when the ratio reaches 3:1. This suggests that an appropriate proportion of sentiment to depression data is crucial for optimal multi-task performance.
Discussion
Depression has emerged as a significant global health concern, affecting individuals across various geographical locations. With the widespread adoption of social networks and the optional anonymity they provide, many individuals, whether diagnosed or not, may express their mood or symptoms related to depression on these platforms. This presents an opportunity to identify individuals at risk of depression through their online activities. Detecting individuals at risk of depression on social networks has great potential. It allows for timely intervention and support for those who may struggle to access social support or effective treatment through traditional means. By leveraging the power of technology and analyzing online behavior, we have the potential to reach individuals who might otherwise go unnoticed and provide them with the necessary assistance they need.
Conclusion and Future Work
This paper focuses on investigating the effectiveness of multi-task learning in depression detection tasks. The core concept revolves around utilizing multiple feature extraction units to share multi-task parameters, thereby enabling improved sharing of sentiment knowledge. The proposed model incorporates gated attention to fuse features for depression detection. By leveraging both the sentiment information from the target and external sentiment resources, DeSK demonstrates enhanced system performance through ablation experiments, thereby advancing social network depression detection. Through detailed analysis, we provide further evidence of the validity and interpretability of DeSK.
Overall, our experiments contribute to a better understanding of the interplay between depression detection and sentiment analysis through multi-task learning. They lay the foundation for future work on refining modeling techniques and data selection, encompassing different types of social-network depression information, diverse sentiment data types and scales, and other related aspects. There are, however, still some limitations. One key limitation is the quality of the available depression-detection datasets, which are often labeled based on self-diagnosis. This introduces potential biases and uncertainties into the data, which can impact detection performance. Additionally, while psychologists may have access to a wealth of depression-related information through counseling sessions, privacy concerns prevent correlating this information with social-network data. This limits our ability to create a comprehensive, high-quality depression-detection dataset that combines social-network information with professional insights.
To overcome these limitations, we plan to explore privacy computing techniques to achieve alignment between social network data and individual entities without compromising privacy. This would allow us to create a more robust and reliable depression detection dataset, enhancing the accuracy and effectiveness of depression detection. | 2023-06-28T06:42:56.580Z | 2023-06-13T00:00:00.000 | {
"year": 2023,
"sha1": "3f5c69635c9c0f83031522b8149093b477f76119",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3f5c69635c9c0f83031522b8149093b477f76119",
"s2fieldsofstudy": [
"Computer Science",
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233813521 | pes2o/s2orc | v3-fos-license | QUANTITATIVE EVALUATION OF MICROWAVE IRRADIATION ON SHORT-ROTATION PLANTATION WOOD SPECIES
The durability of imported timber is a matter of growing concern in the tropical Indian climate, and the refractory nature of such timber further complicates its processing. In the present study, the effects of microwave pre-treatment, exposure time and initial wood moisture content on the retention, treatability and cross-sectional anatomical properties of Tectona grandis and Southern yellow pine imported from Ghana and South America were evaluated. A water-based preservative, copper chrome borate (CCB) at 2 % concentration, was used for the study. Microwave pre-treatment combined with the dip-diffusion method yielded a significant improvement in retention, about five- to six-fold over the control sets, in Southern yellow pine and Tectona grandis. Another set of Southern yellow pine and Tectona grandis samples was treated after microwaving using a full-cell pressure method without initial vacuum, which showed a similar trend, with a three- to four-fold increase in retention over the controls. Both experiments produced a significant improvement in the treatability class of Tectona grandis and Southern yellow pine. Anatomical analysis was performed with a light microscope at 5× and 10× magnification on treated and untreated samples of both species. For Tectona grandis, the treated samples exhibited enlarged vessel diameters and a reduced degree of occlusion by tyloses. For Southern yellow pine, checks at the micro level and cracks at the macro level appeared along the ray cells, and the diameter of the resin canals expanded substantially. These observations confirm that microwave pre-treatment improved fluid flow through the wood microstructure, enhancing permeability and resulting in better uptake and penetration.
INTRODUCTION
The threat of climate change looms over the world owing to various anthropogenic activities responsible for emissions of greenhouse gases (GHG) such as carbon dioxide (CO₂). The increase of atmospheric carbon in the form of CO₂ over the past few decades has driven a projected steep rise in global mean temperature of 1,8 °C to 4 °C (IPCC 2007, Singh et al. 2000). Sustainable forest management and efficient utilisation of the major forest product, wood, can positively influence CO₂ removals by locking up the carbon stored in harvested wood products and reducing carbon emissions in the global carbon cycle (UNECE 2008). For woody biomass to act efficiently as a carbon sink, the amount of CO₂ sequestered in growing forests and in the pool of long-lasting wood products must be acceptably larger than the amount of CO₂ released by decomposition and combustion (Flugsrud et al. 2001). Owing to its natural origin, aesthetic appeal, excellent workability and renewability, wood has gained the attention of stakeholders and is promoted worldwide as an eco-friendly construction material. Wood modification has recently become the most sought-after field in the niche sector of wood science and technology, as the dwindling supply of many durable wood species worldwide has compelled industrialists and other patrons to look for alternative species of non-durable nature. The Living Forests Model predicted global wood removals of 7,168 billion m³ in 2030 and 11,356 billion m³ in 2050 (WWF 2012), which establishes the demand for woody biomass worldwide. The demand for wood-based panel products and furniture from India is rising, owing to optimal labour cost and a less expensive production process, but the gap between demand and supply of raw material is widening rapidly (Ganguly 2018) and is managed by imports. The Asia-Pacific region had net imports of 36 million m³ in 2016 (FAO 2017), and India's industrial demand for wood was predicted to reach 150 million m³ by 2018 (AHEC 2016). India, a major importer of round logs and other allied products, had estimated imports worth $2 billion in the past decade (Sood 2019), with logs amounting to about 74 % of total imported forest products (Montiel 2016, AHEC 2016). Hardwood species such as Tectona grandis (TG) and meranti, along with softwood species such as Southern yellow pine (SYP), are mostly imported from Malaysia, the United States of America, Myanmar and New Zealand (Sood 2014). However, the performance of such imported wood and allied products in several end uses calls for attention, as several imported species perform rather dubiously in the tropical climate of the Indian subcontinent (Sundararaj et al. 2015). This may result in early and frequent replacement of the wood in use, which may not be economically or ecologically feasible (Samani et al. 2019). Hence, research on wood modification in India primarily focuses on several imported and indigenous species of lower durability and on enhancing their performance in service (Ganguly and Tripathi 2018, Hom et al. 2020a, Samani et al. 2020, Hom et al. 2020b, Saha et al. 2020). Additionally, wood in service is often exposed to harsh climate (Cheung 2019); thus, despite being a good building material, it has certain limitations that restrict its extensive outdoor use. Biodegradation of wood in its natural form is elementary but needs to be controlled significantly in service (Kutnik et al. 2014).
Wood preservation is the most convenient and efficient method of imparting a substantial service life to timber and timber products. However, the extent and execution of this method vary from species to species owing to differences in refractoriness and treatability. Poor treatability often results in moderate uptake of treating solutions but meagre penetration, which fails to serve the purpose. To remedy this, several wood modification techniques have been developed and are being explored to facilitate the uptake of treating chemicals. Present-day research primarily focuses on eco-friendly modification practices that limit or reduce the excessive use of wood preservatives and the frequent replacement of wood in service.
Microwave modification is an eco-friendly technique that reduces energy consumption (Sethy et al. 2016) and aids several wood-processing operations, such as seasoning and preservative treatment, by significantly improving the wood's permeability and preservative uptake, enabling its effective use over a longer service period. Previous research on microwaves (MW) showed a positive impact on retention and penetration in wood of moderate and high refractory index with different chemicals and catalysts (Samani et al. 2019, Ganguly and Tripathi 2018, Vinden et al. 2017, Terziev and Daniel 2013, Sethy et al. 2012, Dashti et al. 2012, Torgovnikov and Vinden 2009). Rapid heating of the wood microstructure during MW modification delaminates weak anatomical structures such as ray cells and parenchyma, resulting in better connectivity of the free space in the capillary system and, in turn, easier fluid flow (Dömény et al. 2014). Using optimum to severe levels of MW intensity, permeability improvements in the radial and longitudinal directions of up to several thousand times relative to untreated samples can be achieved (Liu et al. 2005, Torgovnikov and Vinden 2010, Vinden et al. 2011). Torgovnikov and Vinden (2010) further highlighted the importance of the initial moisture content (IMC) of wood prior to MW modification. The micro-cracks formed in wood during the treatment are essentially due to the high pressure gradient of the vapour generated during MW heating; green wood performs better in this regard than wood of very low moisture content (MC). MW was explored by Gašparik and Gaff (2013) for wood plasticizing, and they likewise reported the importance of higher MC for this intense process, which was later endorsed by Gašparik and Barcík (2014). Further, preliminary laboratory trials have found that wood of very low MC can be heated up to 170 °C by MW modification, which may result in some chemical modification, slow pyrolysis of the chemical constituents of wood, and charring. The char formed by microwave heating has a specific surface area of approximately 450 m²/g (Miura et al. 2004). Hence, selecting the modification parameters based on optimum MC is of high importance.
Anatomical changes after any physical or thermal wood modification are fundamental and well understood. Similarly, MW wood modification results in significant changes in the wood microstructure, as reported by several researchers (Hong-Hai et al. 2005, Jiang et al. 2006, Torgovnikov and Vinden 2009, Li et al. 2009, He et al. 2014, Samani et al. 2019), although in the case of moderate modification intensities or species with better structural integrity the changes may not be evident (Vongpradubchai and Rattanadecho 2009).
Based on the literature and findings cited above, the present study was designed to assess the impact of MW modification on the retention, treatability and anatomical properties of imported TG and SYP. The species were chosen with their future potential for Indian wood industries and their growing import to India in mind. One refractory hardwood species known to hinder preservative treatment and one easily treatable softwood species were taken to assess differences in treatment parameters.
Sample preparation
Seasoned planks of SYP and TG, imported from South America and Ghana respectively, were procured through local vendors. The study was carried out at the Wood Preservation Discipline of the Forest Research Institute, Dehradun, India. The planks were converted into cube samples of side length 3,5 cm (3,5 cm longitudinal × 3,5 cm radial × 3,5 cm tangential). Relatively straight-grained samples free from visual abnormalities were selected from the same part of each board to obtain optimum results and to limit variability in the data. Samples contained both sapwood and heartwood portions. The samples were conditioned at 20 °C ± 2 °C and 85 % RH in a conditioning chamber for 14 days prior to the experiments. In total, 188 samples (94 each of TG and SYP) were used in the study (Table 1). Each set of experiments had 10 replicates, whereas for the anatomical analysis 2 replicates per set were considered. For the oven-drying (OD) set, samples of each species were selected at random from the lot, and the IMC of the samples was determined on an OD basis as per IS 11215 (1991). The IMC (standard deviation in parentheses) was 42,0 (±0,8) % for TG and 40,8 (±1,52) % for SYP prior to commencement of the experiments.
Microwave pre-treatment and preservative impregnation
MW modification was carried out in a kitchen MW device (Model 30SC3, IFB Industries, India) with a frequency of 2,45 GHz and a maximum output power of 900 W. Treatments were defined based on power and sample volume, and the energies given in Table 1 were calculated accordingly (Samani et al. 2019). After MW modification, the treated samples were immediately dipped in a test vessel containing a 2 % solution of copper chrome borate (CCB) preservative. Dipping was done for 5 min; the samples were then taken out of the vessel, dripping preservative was blotted with a tissue, and the mass was recorded. Another set of samples of both species was further subjected to pressure impregnation after MW at 1034212,5 N/m² (10,3 bar) for 2 h without initial vacuum, although a final vacuum of 15 min was applied. Retention (kg/m³) of the absorbed preservative was calculated on a wet-weight basis as per IS 401 (2001) using Equation 1:

Retention (kg/m³) = (G × C) / (100 × V)    (1)

where G = weight of the treating solution absorbed by the sample, in kg; C = concentration of the treating chemical (%); and V = volume of the specimen, in m³.
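A worked example of Equation 1 (illustrative numbers, assuming G in kg, C in %, and V in m³ as defined above):

```python
def retention_kg_per_m3(G, C, V):
    """Equation 1: Retention = (G * C) / (100 * V)."""
    return (G * C) / (100.0 * V)

side = 0.035               # 3,5 cm cube edge, in metres
V = side ** 3              # specimen volume, m^3
G = 0.010                  # 10 g of solution absorbed (illustrative)
C = 2.0                    # 2 % CCB solution
print(round(retention_kg_per_m3(G, C, V), 2), "kg/m^3")  # ~4.66
```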
Treatability evaluation
After preservative treatment, specimens were allowed to dry in a controlled chamber at 20 °C ± 2 °C and 50 % ± 2 % relative humidity (RH) for better fixation of the preservative chemicals. After 21 days of conditioning, both treated and untreated specimens were cut into equal halves to detect the presence of copper. The exposed surfaces were sprayed with Chrome Azurol S solution. The spot test exhibited a blue colour (Figure 1) on cross-sections where copper had penetrated, while the untreated zones turned red (IS 2753 1991). The percentage area of the treated zone was assessed visually, and all four measurements (Figure 1) were averaged to obtain a single penetration value. The penetration data were analysed as per IS 401 (2001) to determine the treatability class. Treatability class and ICCA were evaluated following the classification given by Tripathi (2012).
Anatomical analysis
For the anatomical analysis, two replicates per species from the control sets and two replicates per species from the 1500 MJ/m³ set were chosen; the extreme sets were selected for a better understanding of the changes taking place within the wood microstructure. The samples were first checked for the orientation of the rays, and a cross-sectional cutting pattern for the preparation of blocks was chosen accordingly, so that the intersection with the growth rings remained as close to 90° as possible. Samples were cut perpendicular to the axially oriented xylem cells to avoid over- and underestimation of the measured anatomical features. In the present study, only cross-sectional features were analysed to assess the impact of MW modification. The selected specimens were first sliced into blocks of smaller dimensions (2 × 2 × 2 cm³) and soaked in distilled water for at least 24 h to avoid damage to the cell structures during cutting (Von Arx et al. 2016, Schneider and Gärtner 2013, Gärtner and Schweingruber 2013, Yeung et al. 2015). The samples were then boiled in 30 min cycles to prepare them for the microtome. Sections of roughly 12 μm to 20 μm thickness were made using a Reichert microtome (Austria, 358926). Heidenhain's haematoxylin and safranin were used for staining, and the standard laboratory schedule was followed: the sections were passed through grades of alcohol (10 % to 100 %) and then placed in xylene and clove oil (50:50) for making permanent slides. Finally, the sections were mounted in Canada balsam. For SYP the resin-canal diameters and for TG the vessel diameters were measured; twenty observations per slide were made and the mean values are reported.
Statistical analysis
The data were analysed using SPSS Version 25 (IBM 2017) to determine means, standard deviations and Pearson correlations and to perform the Kruskal-Wallis H test and ANOVA. Mean values were compared to examine significant differences between treatments, and Duncan's modified LSD was performed afterwards to examine differences between individual means.
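Several of these analyses map onto standard SciPy routines; a sketch with illustrative (not measured) values:

```python
from scipy import stats

# Illustrative retention values (kg/m^3) for three MW energy levels
control = [2.1, 2.4, 1.9, 2.2]
mw_low = [4.8, 5.1, 4.6, 5.0]
mw_high = [6.9, 7.2, 6.8, 7.1]

print(stats.f_oneway(control, mw_low, mw_high))  # one-way ANOVA
print(stats.kruskal(control, mw_low, mw_high))   # Kruskal-Wallis H test

energy = [0] * 4 + [750] * 4 + [1500] * 4        # MJ/m^3, illustrative
print(stats.pearsonr(energy, control + mw_low + mw_high))  # Pearson r
```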
Retention
Previous studies revealed that the improvement of preservative retention in wood is directly proportional to the applied MW energy, higher energies resulting in higher retention values (Ramezanpour et al. 2015, Ganguly and Tripathi 2018, Samani et al. 2019). In the present study, MW-modified wood samples were dipped in the preservative solution immediately after irradiation so as to simulate the vacuum stage of the full-cell method and to assess its effect on preservative retention. This approach has not been reported previously and can be explored further to optimise treatment cost. It was found that the steam escaping from the sample through the numerous micro-cracks formed within the wood facilitated preservative uptake and retention. For the less refractory SYP, only 5 min of dipping gave retention values of about 7 kg/m³ for the highest MW energy class, approximately 9 times more than the control set (Figure 2). Around 6,5 kg/m³ to 8 kg/m³ of absorption is recommended for copper-based preservatives in several above-ground applications (IS 401 2001), and this was achieved in the present study by dipping of MW-modified wood, with minimal energy consumption and without much effort. Such retention values impart adequate protection to timber that is not exposed to extreme humidity or in direct contact with water. For TG, a similar trend was found, although with a significantly lower retention value of 2 kg/m³ after dipping; TG falls under treatability class "e" as per IS 401 (2001), which explains this outcome. Nevertheless, the retention obtained by TG after 5 min of diffusion following the highest MW energy was 6 times that of the controls (Table 2). Both dipping and pressure treatment resulted in statistically improved retention for all treatments relative to the controls, as revealed by the Duncan subsets for both species (Table 2), which establishes the overall efficacy of the study. Similar trends in the uptake of water or treating solutions after MW pre-treatment were observed by Treu and Gjolsjo (2008) and Hong-Hai et al. (2005). With additional pressure, the impregnation improved further, SYP showing retention values of 19 kg/m³ to 24 kg/m³, 2 to 2,5 times more than that achieved by the non-MW-modified controls. Both treatments yielded statistically significant outcomes (Table 3). Wood from fast-grown plantation softwoods such as SYP usually has a high proportion of juvenile wood and little heartwood, hence low natural durability (Hill 2007), restricting its use in extremely harsh climatic conditions or outdoors; this can be alleviated by the present method. SYP treated with MW and pressure-impregnated with preservative can be explored for severe climatic conditions and may exhibit durability beyond its otherwise specified period of 60 months (IS 401 2001), enabling its more frequent use. Indigenous or imported TG is considered a durable species and falls under durability Class 1 (IS 401 2001), while plantation-grown TG sapwood is moderately durable and falls under Class 3. Under extreme exposure, not much protection is needed for TG timber; however, added preservative retention always helps to enhance the durability of wood in service. TG samples treated with the highest MW energy exhibited 4 times the retention of the controls.
The highest MW exposure yielded a retention of approximately 9 kg/m³ for TG, which should be sufficient to improve its durability substantially beyond the stipulated range of 120 months in its natural form. Table 3 shows that the retention of treating chemicals after diffusion or pressure impregnation has a strong positive correlation with the MW irradiation energy. The coefficients of determination (r²) were 0,516 and 0,378 for SYP and 0,491 and 0,573 for TG for the dipping and pressure types of preservative treatment respectively, indicating the same. Enhanced retention corresponds to enhanced durability, which means less frequent replacement of harvested wood products and thus a more sustainable use of the woody biomass, locking in the stored carbon. (Table 3 note: values in parentheses represent Pearson's correlation (r value); ** correlation is significant at the 0,01 level, 2-tailed.)
Treatability
Visual inspection after MW modification highlighted that the control specimens of TG were highly refractory, belonging to class "e" with very little or practically no impregnation, while the SYP controls were relatively more permeable and fell in class "d". The treatability class of both species improved significantly after the MW pre-treatment, the highly refractory TG being elevated from class "e" to class "a" and SYP from class "d" to class "a" (Figure 4). Tarmian et al. (2020) noted that treatability and permeability are strongly correlated, and the findings of this study further substantiate that claim: after MW treatment, the reduced occlusion level in TG and the checks and cracks formed in the microstructure of SYP likely influenced the permeability of the species positively (He et al. 2014, Wang et al. 2014), resulting in easier flow of the treating chemicals. The mean ICCA obtained for each set was analysed using the Kruskal-Wallis non-parametric ranking test to assess the significance of the MW modification energy. The impact of MW irradiation was statistically evident on the treatability and penetration of both species (Table 4), and the improvements in treatability were in line with similar findings by Ramezanpour et al. (2015) and Samani et al. (2019), where the highest treatability corresponded to the highest MW energy class (Figure 3). Table 4 shows a significant effect of MW modification energy on treatability, as the p-value is less than 0,05. A post hoc independent-sample test (Figure 4) shows that, for TG, dipping and pressure impregnation of the preservative differ significantly in treatability, although the effect of MW modification was non-significant between different MW energy levels for both dipping and pressure treatments. For SYP, the effect of different MW energy levels on treatability was statistically significant for dipping, while the pressure treatment showed no statistical difference (Figure 4). It is also pertinent to mention that chromium-based preservatives might not perform well in treatability evaluations owing to the high reaction rate of chromium with wood and the relatively long (24 h) fixation time (Morris et al. 2002, Cooper and Morris 2007), which might also explain the poor retention and treatability of the control samples after a subsequent dipping of only 5 min. The enhanced treatability of both hardwood and softwood after MW pre-treatment can be of particular interest for several wood-processing operations, such as coating, pulping, adhesive bonding and chemical modification of timber (Tarmian et al. 2020), where homogeneity of treatment is a must and can thus be ensured.
Anatomical analysis
The enhanced treatability and retention were further substantiated by the anatomical cross-sectional analysis of treated and untreated specimens of both species. For TG, refractoriness can be attributed to the degree of occlusion of its vessels, which are often fully or partially choked with tyloses (Figure 5A). The presence of these vessel inclusions in teak often makes it extremely difficult to treat with preservatives prior to use. MW pre-treatment, because of its characteristic mode of wood modification, can prove an efficient solution to these hindrances. It can be hypothesized that the high steam pressure, with fast-moving steam generated within the wood during the process, flushes such inclusions completely or partially from the core to the periphery. The pressure gradients developed during the treatment can increase the effective vessel diameter (Samani et al. 2019) and aid fluid and vapour flow through the entire volume of the test specimen along all three major axes (Torgovnikov and Vinden 2009). The effective mean vessel diameter (EMVD) of the TG samples (Figure 5J) improved significantly (p ≤ 0,05) in comparison with the untreated samples (Figure 5I): the EMVD of the treated samples was 259 (± 5,75) μm (standard error in parentheses), whereas that of the controls was 186,5 (± 6,17) μm. Apart from this obvious change (Figure 5J), the anatomical appearance changed drastically, with minor cracks (Figure 5B) and significantly clearer vessels (Figure 5F), free from vessel inclusions, after MW. The anatomy of SYP was also altered by the treatment, with resin canals showing a clearer appearance and visible cracks and checks along the weak ray cells, which might have facilitated fluid flow. The mean canal diameter of the untreated controls was 112,5 (± 6,17) μm, statistically different from the 266,25 (± 5,77) μm of the modified set. The results of the anatomical study are in conformity with Weng et al. (2020) and He et al. (2014).
CONCLUSIONS
MW wood modification is a potential tool for improving the treatability and permeability of several non-durable and refractory timber species, ensuring their longevity and reducing frequent replacement in service. This maintains the carbon-storage potential of harvested wood products and increases the possibilities of carbon dioxide sequestration by plantations. In addition, the method holds promise for reducing treatment time and energy consumption relative to otherwise extensive wood-processing techniques, which can be immensely profitable and of particular interest to small-scale industrialists. The process also ensures higher retention at relatively short exposure times, as established in the present study with the water-based preservative CCB. The anatomical structure of both the softwood and the hardwood showed some delamination, although the magnitude of the distortion varied between species, mainly owing to their inherent structural integrity. This phenomenon in particular must be considered before exposing MW-modified wood to extensive structural application, as it may lead to some reduction in strength; however, the reduction may be cosmetic if the treatment parameters are optimised.
Bureau of Indian Standards. IS 2753. 1991. Methods for estimation of preservatives in treated timber and in treating solutions. | 2021-05-07T00:04:28.580Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "b4bf6ab3576a4e5b063a14a295243ad59b3f81a7",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.cl/pdf/maderas/v23/0718-221X-maderas-23-25.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "98777930dfd031b318dd8d341ee42210fb751770",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
261934464 | pes2o/s2orc | v3-fos-license | The Frauchiger-Renner Gedanken Experiment: Flaws in Its Analysis -- How Logic Works in Quantum Mechanics
In a publication (Nature Communications 9, 3711 (2018)), Daniela Frauchiger and Renato Renner used a Wigner/friend gedanken experiment to argue that quantum mechanics cannot describe complex systems involving measuring agents. They were able to produce a contradictory statement starting from four statements about measurements performed on an entangled spin system. These statements had to be combined using the transitive property of logic: if A implies B and B implies C, then A implies C. However, in combining successive statements for the Frauchiger-Renner gedanken experiment, we show that quantum mechanics does not obey transitivity and that this invalidates their analysis. We also demonstrate that certain pairs of premises among the four statements are logically incompatible, meaning that the statements cannot all be used at once. In addition, to produce the contradiction, Frauchiger and Renner choose a particular run, which they call the 'OK'-'OKbar' one. However, the restriction to this case invalidates three of the four statements. Hence, there are three separate problems with logic in the 2018 Nature Communications publication. We also demonstrate the violation of the rules of logic -- including transitivity -- in certain situations in quantum mechanics in general. We use the Frauchiger-Renner gedanken experiment as a laboratory to explore a number of topics in quantum mechanics, including wavefunction logic, Wigner/friend experiments, and the deduction of mathematical statements from knowledge of a wavefunction, and we obtain a number of interesting results. We show that Wigner/friend experiments of the type used by Frauchiger and Renner are impossible if the Wigner measurements are performed on macroscopic objects. They are possible on certain microscopic entities, but then the Wigner measurements are rendered "ordinary", in which case ...
Introduction
In a publication, 1 D. Frauchiger and R. Renner argued that quantum theory cannot consistently describe the use of itself in the following sense: if quantum mechanics governs the agents involved in experiments, then a gedanken experiment exists that leads to a contradiction. By "agents," one means the experimentalists and their equipment that record the results of measurements on quantum systems. An experimentalist is able to note the outcome of an experiment through statements such as "I observed the spin of a spin-1/2 object to be up" (or down, if that is the result of the measurement), where "up" and "down" refer to the direction of the spin along, say, the z-direction. The equipment of the experimentalist equally has this capacity, by recording the result (be it up or down) in a database, for example.
In the Frauchiger-Renner gedanken experiment, the initial state involves two spin-1/2 objects that are entangled. Four measurements are performed: in the first two, two different agents measure the spin of each object in the z-direction; in the final two, Wigner agents perform measurements on those two agents themselves, 2, 3 and therefore quantum mechanically measure macroscopic systems. Four "If ..., then ..." statements concerning the measurements are derived. These statements are then combined using the transitive property of logic (if A implies B and B implies C, then A implies C) three times to arrive at a contradiction. From this, Frauchiger and Renner conclude that "quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner." The two authors cast the above result as a theorem involving three assumptions about quantum mechanics. We refer the reader to reference 1 for the details of these assumptions and, in this Introduction, describe them only in general terms. The first assumption is the "quantum mechanical" one (Q): it says that the probability of an output of a measurement is given by the Born rule. The second assumption (C) demands consistency, meaning that if one agent reaches a conclusion about a statement or prediction, then another agent, using the same theory, assumptions and information, will arrive at the same conclusion. Assumption (S) asserts that if an agent is certain of the outcome of a measurement or statement, then the agent cannot conclude something contradicting it. Assumption (S) may appear obvious, but the Contradictory Statement that arises in the Frauchiger-Renner gedanken experiment (see Section 3) takes the form "If I (Agent W) obtain a measurement of 'OK', then by using assumptions (Q) and (C), I can conclude that I should obtain a measurement of 'FAIL'." Here, 'OK' and 'FAIL' are similar to the down and up states of a spin-1/2 object when the axis of spin quantization is in the x-direction. So the argument in the gedanken experiment makes use of (S).
Frauchiger and Renner leave open the possibility that there are implicit assumptions beyond the above three: "Any no-go result ... is phrased within a particular framework that comes with a set of built-in assumptions. Hence, it is always possible that a theory evades the conclusions of the no-go result by not fulfilling these implicit assumptions." 1 Indeed, there are many overlooked assumptions.
The first omitted assumption in the Frauchiger and Renner paper is that wavefunction collapse does not occur. If wavefunction collapse occurs during a measurement, then one or more of the four "If ..., then ..." statements is rendered false and the contradiction cannot be generated. This is discussed at the end of Section 3, but intuitively, it can be understood as follows. Three of the four "If ..., then ..." statements sensitively depend on cancellations between different terms in the wavefunction. The slightest change in the coefficient of a term renders at least one of these three statements false. Wavefunction collapse is "brutal" in this regard, causing the coefficient of one term to vanish while forcing the coefficients of the remaining terms to increase in magnitude to preserve a unit norm for the wavefunction.
The next omitted assumption is unitarity, which we denote by (U); reference 4 noticed this. If a non-unitary version of quantum mechanics is used, then again one or more of the four "If ..., then ..." statements is no longer valid. Table 4 of reference 1 provides a list of 10 interpretations/modifications of quantum mechanics, each of which violates at least one of the assumptions (Q), (C) and (S) used in the Frauchiger-Renner "proof." However, most of these theories are non-unitary; the fact that they violate (Q), (C) or (S) is secondary, because at least one of the four "If ..., then ..." statements is not true. This second omission is not fatal: one can simply add unitarity as an additional assumption. Assumption (U) already subsumes the first omitted assumption concerning wavefunction collapse, because wavefunction collapse violates unitarity.
The third omitted assumption, which we label (L), is that the generation of the contradiction assumes that "standard classical" logic can be applied to statements about quantum-mechanical measurements. Indeed, the transitive property of logic needs to be used three times to generate the contradiction. R. Renner admits that he left assumption (L) out. 5 Although one can postulate that "standard classical" logic applies to quantum mechanics, 6 it is something that can be checked, and when one analyzes whether logical transitivity is obeyed in unitary quantum mechanics, one finds that, in general, it is not. Hence, the derivation of the contradictory statement by Frauchiger and Renner is flawed. We also demonstrate that certain pairs of premises among the four "If ..., then ..." statements are incompatible, meaning that the statements cannot all be logically used at once. In short, the two authors are not showing that quantum mechanics cannot be applied to macroscopic systems; they are actually showing that "classical" logic can be violated in quantum mechanics.
Intuitively, the reason for this is that the premises of the four "If ..., then ..." statements can interfere with each other. In our work on "quantum logic" in this article, we provide three explanations of why logical transitivity is violated, one of which is physically similar to the two-slit interference effect: It is as though one of the "If ..., then ..." statements requires "the wavefunction to go through both slits" to be true, while the other "If ..., then ..." statement requires "the wavefunction to go through only one of the slits". In addition to showing that transitivity is violated in a direct calculation and presenting three ways to understand its violation, we provide below the proper way of combining two "If ..., then ..." measurement statements and a formula for quantifying its violation when the naive use of logical transitivity is applied; from this perspective, Frauchiger and Renner used the wrong method of combining "If ..., then ..." statements about measurements.
A fourth omitted assumption is the following. The starting point of the Frauchiger-Renner gedanken experiment is the selection of a particular run: of the four possible measurement outcomes that the two Wigner agents can obtain, Frauchiger and Renner run the experiment until only the 'OK'-'OKbar' outcome occurs. The two authors implicitly assumed that the four statements remain valid for a particular run. [a] However, it turns out that this assumption is also false. For the particular run selected, two of the "If ..., then ..." statements no longer remain true. The third one (Statement 4) is only "technically" true because its premise is always false, as was also noted by D. Lazarovici and M. Hubert in Bohmian mechanics. 7 What happens is that when Agent W obtains a measurement of 'OK', it is impossible for Agent Fbar to obtain the outcome needed for the premise of Statement 4. Given that wavefunction collapse ruins the validity of at least one "If ..., then ..." statement, and that selecting a particular run is similar to wavefunction collapse, it is not surprising that this step of the Frauchiger-Renner gedanken experiment renders two of the "If ..., then ..." statements false. Instead of performing the gedanken experiment repeatedly until the two Wigner agents obtain the 'OK'-'OKbar' outcome, one can phrase the first step of the gedanken experiment as a logic statement: "If Agent W and Agent Wbar obtain measurements of 'OK' and 'OKbar' respectively, then ...". When formulated this way, assumption four is no longer needed, and the flaws in the Frauchiger-Renner analysis raised in this paragraph become violations of assumption (L).

[Footnote a: I initially overlooked this issue too. In the email correspondences between Renato Renner and myself from 3/11/21 to 3/21/22 (before submission of this work to Nature Communications) and from 4/15/22 to 5/21/22 (post submission), we both thought that Statements 2-4 were valid for any particular run.]
Frauchiger and Renner concluded from the contradictory statement that quantum mechanics cannot be applied to experimentalists and their equipment because the third and fourth steps in the gedanken experiment involved Wigner/friend measurements on these macroscopic systems. However, it could be that quantum mechanics does govern these macroscopic systems, but it is impossible to perform such measurements. Indeed, the Wigner/friend measurements used in the Frauchiger and Renner publication are of a very unusual nature, conducted on generalized Schrödinger cat states. Given the choice between whether (i) quantum mechanics can govern macroscopic systems and (ii) measurements can be performed on macroscopic Schrödinger cat states, many physicists would assume that (ii) is impossible rather than (i).
So the fifth omitted assumption is that the third and fourth measurements of the gedanken experiment in reference 1 are possible. However, in Section 7, we point out that an overlooked agreement between a Wigner agent and the agent's friend needs to be established prior to the start of the Frauchiger-Renner gedanken experiment. This agreement is of interest because it puts restrictions on the nature of Wigner/friend experiments. Indeed, the restrictions are so stringent as to rule out Wigner measurements of the type used in the Frauchiger-Renner gedanken experiment on macroscopic entities. We consider this to be one of the important results of our work. It is interesting that Heisenberg's uncertainty principle plays a role in this; see Section 7. This restriction already casts doubt on whether the Frauchiger-Renner gedanken experiment indicates that quantum mechanics cannot be extrapolated to complex systems.
It is possible, however, to modify the Wigner/friend measurements so that the Heisenberg uncertainty principle is not violated. When this is done, the third and fourth measurements involve only certain microscopic entities among the quantum constituents of the laboratory equipment of the friend agents. The third and fourth measurements are then rendered "ordinary" (such as measurements of a spin along a particular axis of quantization). If assumption (L) were valid, one would then conclude the nonsensical result that quantum mechanics is inconsistent at the microscopic level. By modifying the Wigner/friend measurements as described below, it becomes possible to perform the Frauchiger-Renner experiment in a real laboratory setting, a result that should be of interest to experimentalists working on the foundations of quantum mechanics.
In short, there are several flaws in the analysis of reference 1. There are also some issues with the assumptions (S) and (C), which are discussed in Appendix A; the problem is that they involve classical concepts.
We use the Frauchiger-Renner gedanken experiment as a laboratory to explore a number of topics in quantum mechanics including wavefunction logic, Wigner/friend experiments, and the deduction of mathematical statements from knowledge of a wavefunction and obtain a number of interesting results.
The validity of the Frauchiger-Renner gedanken experiment has been challenged 7-15 in two directions: whether the argument itself is incorrect, or whether there are hidden assumptions in it. However, no publication has pointed out that the fundamental reason why the analysis in the 2018 Nature Communications publication is wrong is that classical logic cannot be applied to agent measurement statements.
In unitary quantum mechanics, 16 measurement does not involve wavefunction collapse, 17, 18 the probability of an outcome is related to the absolute square of the wavefunction, and linearity and unitarity are strictly maintained even during a measuring process. There is a single universal wavefunction and one does not assign worlds to certain linear superpositions of this wavefunction. Instead, in unitary quantum mechanics, the quantum mechanical interpretation of a situation is obtained by examining the wavefunction itself; sections 2, 3 and 4 illustrate this.
Hence, unitary quantum mechanics is "standard" quantum mechanics without wavefunction collapse; however, unitarity and the absence of wavefunction collapse lead to some consequences for quantum measurement that many physicists may be unaccustomed to, embodied in the following basic Measurement Rule: 16 if wavefunction collapse is not needed "to explain" an experimental result, then a single measuring event suffices to determine the state with certainty; if this is not the case, then the uncertainty of the quantum state is transferred to the measuring agent, multiple measurements are needed to determine the state, and an output reading indicating that the state is S does not mean that the wavefunction is S.
In unitary quantum mechanics, one knows the form of the wavefunction at each stage of the Frauchiger-Renner gedanken experiment. From the wavefunction, it is quite easy to derive the four "If ..., then ..." statements in reference 1. This makes the analysis quite simple and the violation of classical logic manifestly evident. All the measurements in the Frauchiger-Renner gedanken experiment have binary outcomes. Let us illustrate the Measurement Rule of unitary quantum mechanics for this case. Let |↑⟩ and |↓⟩ denote the two outcomes of a measurement, thought of as the up and down spin of a spin-1/2 object. Let A be an agent who is about to measure the spin; the word "agent" includes an experimentalist and her equipment. Let Ψ_A be the agent's wavefunction before the measurement takes place. Then there are two generic cases: (i) the initial state is not a superposition; it is either |↑⟩ or |↓⟩ (up to an overall phase), but it is unknown which of the two possibilities holds; (ii) the initial state S_0 of the object to be measured is a superposition of up and down spin, that is, S_0 = a_↑|↑⟩ + a_↓|↓⟩ (with |a_↑|² + |a_↓|² = 1).

In case (i), the schematic description of the process is

|↑⟩ Ψ_A → Ψ_A^↑ ,  |↓⟩ Ψ_A → Ψ_A^↓ .    (1)

Here, Ψ_A^↑ and Ψ_A^↓ are two different wavefunctions involving the quantum constituents of A and the spin-1/2 object. In these schematic equations, the left-hand side (respectively, right-hand side) is the wavefunction before (respectively, after) the measurement is made. Case (i) corresponds to the situation in which wavefunction collapse is not needed to explain the experimental result: if the spin is up, then it is measured to be up and no wavefunction collapse is needed; ditto for the situation when the spin is down.

In case (ii), the schematic description in unitary quantum mechanics is

S_0 Ψ_A = (a_↑|↑⟩ + a_↓|↓⟩) Ψ_A → a_↑ Ψ_A^↑ + a_↓ Ψ_A^↓ .    (2)

Equation (2) follows from Eq. (1) and quantum-mechanical linearity, and linearity is a consequence of unitarity. If Agent A is a machine with no thinking capability, then Ψ_A^↑ in Eq. (2) is the same as Ψ_A^↑ in Eq. (1), and its quantum constituents involve the coding of an up-spin output; likewise for Ψ_A^↓. If Agent A involves a human or entities with reasoning ability (and this is the case for the agents in the Frauchiger-Renner gedanken experiment), then the relation between the wavefunctions in Eqs. (2) and (1) depends on what Agent A knows about the initial state. If Agent A knows nothing, the situation is the same as in the pure-machine case: the wavefunction components Ψ_A^↑ and Ψ_A^↓ in Eqs. (1) and (2) are equal. If Agent A knows that the initial situation is (i), then Ψ_A^↑ contains a configuration of the human's quantum constituents that embodies the thought "I measured the spin to be up, and so I know the initial wavefunction must have been S_0 = |↑⟩"; a similar thought, with "up" replaced by "down," is contained in Ψ_A^↓. If Agent A knows that the initial situation is (ii) and is using unitary quantum mechanics, then Ψ_A^↑ contains a configuration of the human's quantum constituents that embodies the thought "I measured the spin to be up, but I know that the current wavefunction must contain another component Ψ_A^↓ in a linear superposition, even though I cannot be directly aware of its existence." A similar statement arises for Ψ_A^↓. In case (ii), even though Ψ_A^↑ involves the observation of an output indicating up spin, Agent A cannot conclude that the spin is or was up.
Indeed, this is correct because the spin was initially a_↑|↑⟩ + a_↓|↓⟩ (and not just |↑⟩), and it is very unlikely to end up being proportional to |↑⟩ at any point during a physical measuring process. Furthermore, it takes multiple measurements, each beginning with the same S_0 = a_↑|↑⟩ + a_↓|↓⟩, to determine information about the coefficients a_↑ and a_↓.
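To make the unitary-measurement bookkeeping of Eqs.(1) and (2) concrete, here is a minimal numpy sketch (ours, not from the paper) that models the agent as a single "pointer" qubit and the measurement as a CNOT-like entangling unitary; the coefficients a_up and a_down are illustrative assumptions.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ready = up  # the agent's pointer qubit starts in a fixed "ready" state

# CNOT with the spin as control and the pointer as target: a unitary,
# collapse-free model of the measurement processes in Eqs.(1) and (2).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def measure(spin_state):
    """Unitary 'measurement': entangle the spin with the pointer."""
    return CNOT @ np.kron(spin_state, ready)

# Case (i): a definite initial spin gives a definite pointer record.
print(measure(up))    # -> |up, reads-up>,     i.e. [1, 0, 0, 0]
print(measure(down))  # -> |down, reads-down>, i.e. [0, 0, 0, 1]

# Case (ii): a superposition stays a superposition, as in Eq.(2).
a_up, a_down = np.sqrt(1/3), np.sqrt(2/3)   # illustrative coefficients
out = measure(a_up * up + a_down * down)
print(out)            # a_up|up,reads-up> + a_down|down,reads-down>

# Born-rule statistics require many runs starting from the same S_0.
probs = out**2
print(probs[0], probs[3])   # ~0.333 and ~0.667
```

The pointer never collapses; the |a|² statistics only emerge across repeated runs, which is the content of the Measurement Rule for case (ii).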
The agents in the Frauchiger-Renner gedanken experiment are informed not only of the initial wavefunction but also of the entire series of measurement steps. In relation to what was discussed in the previous paragraph, the analog is as follows: if Agent A was informed that S_0 was a_↑|↑⟩ + a_↓|↓⟩, then Ψ^A_↑ contains a configuration of the human's quantum constituents that embodies the thought "I measured the spin to be up but I know that the current wavefunction must contain another component Ψ^A_↓ in a linear superposition even though I cannot be directly aware of its existence, and that the form of this superposition is a_↑ Ψ^A_↑ + a_↓ Ψ^A_↓, where a_↑ and a_↓ are the same coefficients as in S_0." Furthermore, another Agent B, having been given all the information about the initial state and experimental procedure, can also conclude that the form of the wavefunction after the measurement is a_↑ Ψ̃^A_↑ + a_↓ Ψ̃^A_↓, where Ψ̃^A_↑ and Ψ̃^A_↓ are wavefunctions that Agent B does not know in detail but which incorporate the same measurement statements as Ψ^A_↑ and Ψ^A_↓: "Agent A measured the spin to be up (or down) but Agent A knows that the current wavefunction must contain another ...". Agent B can be one of the agents in the Frauchiger-Renner gedanken experiment or even an "outsider", that is, someone who is not directly involved. Reference 1 thought that an assumption (Assumption (C)) was needed for agents such as Agent B to make such deductions. In unitary quantum mechanics, Assumption (C) is not needed because the "If ..., then ..." statements can be derived without this assumption. See Appendix A.
Our notation differs somewhat from that of Frauchiger and Renner:^1 The use of "bars" over objects is unchanged. However, we use "bras" and "kets" to denote discrete states, which happen to come in pairs for the Frauchiger-Renner gedanken experiment; so, we denote them with up and down spins (|↑⟩ and |↓⟩) to take advantage of the isomorphism with spin-1/2 objects. For states involving many degrees of freedom, we denote the wavefunction using the symbol Ψ. A subscript M on a state indicates that it has been "measured", but M can also stand for "message" because these states also embody statements such as the ones in quotes in the two previous paragraphs. The states |↑⟩_S and |↓⟩_S in reference 1 are simply denoted by |↑⟩ and |↓⟩ in our paper. Our states |↑̄⟩ and |↓̄⟩ correspond to |heads⟩_R and |tails⟩_R in ref. 1. A discrete state associated with "ok" (respectively, "fail") in ref. 1 is represented by a '−' (respectively, a '+') in our work (except that our unbarred |−⟩_M corresponds to −|ok⟩_L, that is, it differs by a minus sign). The measurement times t_j in our paper are written as n:0j in reference 1.
Wavefunction Representations of the Extended Wigner/Friend Experiment in Unitary Quantum Mechanics
The initial state Ψ_0 of the Frauchiger-Renner gedanken experiment can be taken to be^b

Ψ_0 = (1/√3) ( |↑̄⟩|↓⟩ + |↓̄⟩|↑⟩ + |↓̄⟩|↓⟩ ) . (3)

This state involves two qubits, which we take to be two spin-1/2 objects: an "unbarred" spin, for which the basis is |↑⟩ and |↓⟩, and a "barred" spin, for which the basis is |↑̄⟩ and |↓̄⟩. In both cases, the axis of quantization for the spin is taken to be in the positive z-direction.
The gedanken experiment proceeds in four main measurement steps. In the first step, Agent F̄ measures the spin of the barred spin-1/2 object in the z-direction at time t_1. Using Eq.(2), this replaces Ψ̄^F̄ |↑̄⟩ and Ψ̄^F̄ |↓̄⟩ by Ψ̄^F̄_↑ and Ψ̄^F̄_↓ respectively. The wavefunction then becomes

Ψ_1 = (1/√3) ( |↑̄⟩_M |↓⟩ + |↓̄⟩_M |↑⟩ + |↓̄⟩_M |↓⟩ ) . (4)

In Eq.(4), we have replaced Ψ̄^F̄_↑ and Ψ̄^F̄_↓ by |↑̄⟩_M and |↓̄⟩_M respectively. They can be considered to make up a discrete two-state system with a message associated with each. The justification for this is given in Section 7, which discusses some aspects of Wigner/friend experiments in the context of the Frauchiger-Renner gedanken experiment. This replacement does not affect any of the conclusions obtained in our paper concerning classical logic in quantum mechanics. A reader who does not want to use this simplification can replace |↑̄⟩_M and |↓̄⟩_M with Ψ̄^F̄_↑ and Ψ̄^F̄_↓ in the equations below.
In the second step, Agent F measures the spin of the unbarred spin-1/2 object at time t_2 in a way analogous to what Agent F̄ did for the barred spin at time t_1, and the wavefunction becomes

Ψ_2 = (1/√3) ( |↑̄⟩_M |↓⟩_M + |↓̄⟩_M |↑⟩_M + |↓̄⟩_M |↓⟩_M ) . (5)

^b In reference 1, the initial state is created differently: a qubit of the form ( |↑̄⟩ + √2 |↓̄⟩ )/√3 is generated. Agent F̄ measures this barred spin state. If the barred spin is up, then she sends the state |↓⟩ to Agent F. If it is down, then she sends the state (|↑⟩ + |↓⟩)/√2 to Agent F. This procedure assumes that F̄ is able to manipulate states easily. In particular, the overall phase of a wavefunction is not an observable and cannot be controlled, and so she cannot guarantee that (|↑⟩ + |↓⟩)/√2 as opposed to e^{iϕ}(|↑⟩ + |↓⟩)/√2 is sent. The resulting initial wavefunction would become ( |↑̄⟩|↓⟩ + e^{iϕ}|↓̄⟩(|↑⟩ + |↓⟩) )/√3. Put differently, Agent F̄ cannot control the relative phase of the two terms when the procedure in reference 1 is used. Statement 2 below is not true unless ϕ = 0. Hence, it is better to begin with Eq.(3).
The third step involves a Wigner measurement by Wigner Agent W̄ of the 'measured' barred spin in the x-direction. Here, we are thinking of |↑̄⟩_M and |↓̄⟩_M as the up and down z-components of a spin-1/2 system. More precisely, the measurement is performed in the basis

|±̄⟩_M = ( |↑̄⟩_M ± |↓̄⟩_M ) / √2 ,

yielding

Ψ_3 = (1/√6) [ (Ψ̄^W̄_+ + Ψ̄^W̄_−) |↓⟩_M + (Ψ̄^W̄_+ − Ψ̄^W̄_−) |↑⟩_M + (Ψ̄^W̄_+ − Ψ̄^W̄_−) |↓⟩_M ] (6)

as the wavefunction after Agent W̄ makes the measurement at time t_3. Finally, Agent W performs a similar measurement (that is, in the x-direction for the 'measured' unbarred spin), as Agent W̄ did, but on |↑⟩_M and |↓⟩_M, with the result

Ψ_4 = (1/(2√3)) [ (Ψ̄^W̄_+ + Ψ̄^W̄_−)(Ψ^W_+ − Ψ^W_−) + (Ψ̄^W̄_+ − Ψ̄^W̄_−)(Ψ^W_+ + Ψ^W_−) + (Ψ̄^W̄_+ − Ψ̄^W̄_−)(Ψ^W_+ − Ψ^W_−) ] (7)

at the final time t_f = t_4. The first term in Eq.(7) comes from the first term |↑̄⟩|↓⟩ in Eq.(3), the second term from the second term |↓̄⟩|↑⟩ in Eq.(3), and the third from the third one |↓̄⟩|↓⟩. Some terms in Eq.(7) cancel among themselves (which can be considered a quantum interference effect) to give

Ψ_4 = (1/(2√3)) [ 3 Ψ̄^W̄_+ Ψ^W_+ − Ψ̄^W̄_+ Ψ^W_− − Ψ̄^W̄_− Ψ^W_+ − Ψ̄^W̄_− Ψ^W_− ] . (8)

Equations (5)-(8) are consistent with the results obtained in references 7-9 and 12-14 after one takes into account notational differences and/or the use of state-preserving^c measurements performed by the agents.
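As a cross-check of Eqs.(3)-(8), the following numpy sketch (ours, not from reference 1) tracks the two-qubit spin amplitudes through the two Wigner measurements by expanding both 'measured' spins in the x basis; the cancellation that produces Eq.(8), and the 1/12 probability for the ('−','−') run, drop out directly.

```python
import numpy as np

s3 = 1/np.sqrt(3)
# Spin part of Eq.(3): amplitudes for |barred, unbarred> in the z basis,
# ordered (up,up), (up,down), (down,up), (down,down).
psi0 = np.array([0.0, s3, s3, s3])

# Hadamard matrix: its rows give the '+' and '-' x-basis components.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# Both Wigner agents measure the recorded spins in the x basis, Eqs.(6)-(8).
amps = np.kron(H, H) @ psi0    # ordered (+,+), (+,-), (-,+), (-,-)
print(amps * 2 * np.sqrt(3))   # -> [3, -1, -1, -1], the coefficients in Eq.(8)
print(amps**2)                 # probabilities; the (-,-) run occurs with 1/12
```

The three '−1' amplitudes are what remain after the interference cancellations described above; nothing in the computation invokes wavefunction collapse.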
There is an additional step in which Agent W meets with Agent W̄ and they provide each other with their measurement results. This affects both of their wavefunctions, yielding

Ψ_f = (1/(2√3)) [ 3 Ψ̄^W̄_{++} Ψ^W_{++} − Ψ̄^W̄_{+−} Ψ^W_{+−} − Ψ̄^W̄_{−+} Ψ^W_{−+} − Ψ̄^W̄_{−−} Ψ^W_{−−} ] . (9)

A subscript xy on a Ψ encodes the statement "Agent W̄ measured x at time t_3 and Agent W measured y at time t_f." For example, Ψ̄^W̄_{−+} involves "I, Agent W̄, measured '−' at time t_3 and subsequently met with Agent W and 'learned' that Agent W had measured '+' at time t_f."

^c This is a particular form of measurement in which the measured state is preserved:

Ψ^A |↑⟩ → Ψ^A_↑ |↑⟩  and  Ψ^A |↓⟩ → Ψ^A_↓ |↓⟩ ,

where A is a "friend agent", that is, F or F̄, or where A is a "Wigner agent", that is, W or W̄, and ↑ is replaced by + and ↓ is replaced by −. Note that the difference between this and Eq.(2) is that the states |↑⟩ and |↓⟩ remain unchanged after the measurement. One might call this type of measurement an observation: an agent observes the state but leaves it intact. The "measurement information statement" is still in Ψ^A_↑ and Ψ^A_↓.
The Frauchiger-Renner Argument
The wavefunctions in Section 2 encode measurement statements about themselves: |↑̄⟩_M with the statement "Agent F̄ measured the barred spin to be up (that is, |↑̄⟩) at time t_1", |↓̄⟩_M with "Agent F̄ measured the barred spin to be down at time t_1", |↑⟩_M with "Agent F measured the unbarred spin to be up at time t_2", |↓⟩_M with "Agent F measured the unbarred spin to be down at time t_2", Ψ̄^W̄_+ with "Agent W̄ obtained a measurement of '+' at time t_3", Ψ̄^W̄_− with "Agent W̄ obtained a measurement of '−' at time t_3", Ψ^W_+ with "Agent W obtained a measurement of '+' at time t_4", and Ψ^W_− with "Agent W obtained a measurement of '−' at time t_4." The Frauchiger-Renner argument that quantum mechanics cannot consistently describe itself is based on four statements that can be derived from the wavefunction results in Section 2.
Statement 1: When the experiment is carried out multiple times, there is eventually a run in which Agent W measures '−' and, when he encounters Agent W̄, the latter informs the former that he has measured '−'. For this particular run, Agent W can say, "If I (Agent W) measured '−' at time t_4, then Agent W̄ measured '−' at time t_3." However, it is convenient to consider an alternative form of Statement 1. Statement 1′: "If, at time t_4, agents W and W̄ respectively obtain '−' and '−', then W̄ obtained '−'." This modified statement is trivially true. One also needs to note that the probability of them both measuring a '−' result is non-zero. The advantage of using this form is that one does not have to repeatedly run the experiment until the "minus-minus" outcome happens. Statement 2: If Agent W̄ measured '−' at time t_3, then Agent F measured the unbarred spin to be up at time t_2.
Statement 3: If Agent F measured the unbarred spin to be up at time t_2, then Agent F̄ measured the barred spin to be down at time t_1. Statement 4: If Agent F̄ measured the barred spin to be down at time t_1, then Agent W will measure '+' at time t_4.
Using the transitive property of logic and combining Statements 1 to 4 in order, for the run in which both Wigner agents get "minus", produces a contradiction: the "If ... then ..." logical statement begins with the premise "Agent W measured '−' at time t_4" and ends with the conclusion "Agent W will measure '+' at time t_4," which we call the Contradictory Statement.^d Notice that putting these statements together does not correspond to the order in which the measurements take place. However, this does not matter in unitary quantum mechanics: Agents W and W̄ can deduce the four statements all at time t_4.

^d When using Statement 1′, the Contradictory Statement becomes "If, at time t_4, agents W and W̄ respectively measure '−' and '−', then agent W can deduce that he will measure '+'."
In fact, anybody who knows the initial state and the experimental procedure can deduce the four statements, including those "If ..., then ..." statements whose premise involves a time later than its conclusion. See Appendix A. Based on the Contradictory Statement, Frauchiger and Renner conclude that quantum mechanics cannot consistently describe itself. As an illustration, Statement 4 can be derived from Eqs.(4) and (7): if Agent F̄ measured the barred spin to be down (that is, |↓̄⟩) at time t_1, then the relevant part of the wavefunction consists of the second and third terms in Eq.(4). These two terms evolve to the second and third terms in Eq.(7), and the terms proportional to Ψ^W_− cancel between them.
It is also possible to experimentally verify each of the above four statements by introducing two additional agents. See the discussion at the end of the next section.
If wavefunction collapse occurs, then the wavefunctions are not given as in Eqs.(4)-(7) but depend on the outcomes of the agents' measurements. For example, if in the first measurement Agent F̄ measures the barred spin to be up, then the wavefunction collapses to the first term in Eq.(4) at time t_1. Ignoring for the moment the effects of the measurements of the other agents, Statement 2 is no longer true because its validity depended on a cancellation of the first and third terms, but the latter is now missing. Statements 3 and 4 are technically true only because their premises are false. If Agent F̄ measures the barred spin to be down, then the wavefunction collapses to the second and third terms in Eq.(4) at time t_1, again rendering Statement 2 false; Statements 3 and 4 remain true. If Agent F̄ measures the barred spin to be down at time t_1 and Agent F measures the unbarred spin to be down at time t_2, then only the last term remains in the wavefunction in Eq.(5). In this case, Statements 2 and 4 are no longer true, and Statement 3 is "logically true" only because its premise is false. The above is illustrative of what is true in general: wavefunction collapse renders one or more of Statements 2 through 4 false.
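The effect of collapse on Statement 2 can be checked with a few lines of numpy (our sketch, with an illustrative collapse branch): projecting Eq.(4) onto the barred-up outcome and renormalizing, the conditional probability that W̄'s '−' outcome accompanies an unbarred-up record is no longer 1.

```python
import numpy as np

s3 = 1/np.sqrt(3)
# Amplitudes for |barred, unbarred> in the z basis, Eq.(4),
# ordered (up,up), (up,down), (down,up), (down,down).
psi = np.array([0.0, s3, s3, s3])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def p_up_given_minusbar(state):
    """P(unbarred recorded up | Wigner agent Wbar obtains '-')."""
    amps = np.kron(H, I2) @ state   # barred spin expanded in the x basis
    minus_branch = amps[2:]         # ('-', up) and ('-', down) amplitudes
    return minus_branch[0]**2 / np.sum(minus_branch**2)

# Unitary evolution, no collapse: Statement 2 holds with certainty.
print(p_up_given_minusbar(psi))         # -> 1.0

# Collapse onto "barred spin up" (first term of Eq.(4)), renormalized:
collapsed = np.array([0.0, 1.0, 0.0, 0.0])
print(p_up_given_minusbar(collapsed))   # -> 0.0: Statement 2 fails
```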
Quantum Logic and Mathematical Wavefunction Statements
In this section, we discuss the logical mathematical statements encoded in wavefunctions. Although individual mathematical statements have a physical analog in terms of experimental measurements, sets of mathematical statements do not necessarily have a physical analog in terms of a sequence of steps in an experiment, as will become clear below. The purely mathematical results presented in this section provide insights into the Frauchiger-Renner gedanken experiment and into the violation of the transitive property of logic in quantum mechanics in general. It is worth noting that experimentalists can make measurements to separately verify each of the mathematical statements about wavefunctions; this is shown near the end of this section; see the paragraph starting with Eq.(28) and subsequent paragraphs.
There are three basic rules: (i) When a wavefunction is written as a linear superposition of several orthogonal component wavefunctions, then the situation involves logical disjunction and the OR symbol.^e (ii) When a wavefunction involves a product of several component wavefunctions, then the situation involves logical conjunction and the AND symbol.
(iii) The probability that a (normalized) state occurs is given by the absolute square of its coefficient in the wavefunction (the Born rule).
Let us illustrate these rules using the spin part of the wavefunction at time t_0 in Eq.(3):

(1/√3) ( |↑̄⟩_z |↓⟩_z + |↓̄⟩_z |↑⟩_z + |↓̄⟩_z |↓⟩_z ) . (10)

Rule (i) tells us that either the situation is |↑̄⟩_z|↓⟩_z OR |↓̄⟩_z|↑⟩_z OR |↓̄⟩_z|↓⟩_z. This particular case involves not only logical disjunction but mutual exclusivity.^e If the wavefunction were only |↑̄⟩_z|↓⟩_z, then Rule (ii) would tell us that the barred spin is up AND the unbarred spin is down. As a more complicated example, combining Rules (i) and (ii), one arrives at the following mathematical statement from the wavefunction in Eq.(10):

Either (barred spin is up AND unbarred spin is down) OR (barred spin is down AND unbarred spin is up) OR (barred spin is down AND unbarred spin is down) . (11)
One can also derive

Mathematical Statement 3: If the unbarred spin is |↑⟩_z, then the barred spin is |↓̄⟩_z , (12)

because only the middle term in Eq.(10) has the unbarred spin being up. Different logical statements can be derived by expanding the wavefunction in different ways. For example, the second and third terms in Eq.(10) combine to give

(1/√3) |↑̄⟩_z |↓⟩_z + √(2/3) |↓̄⟩_z |↑⟩_x , (13)

where the z and x subscripts indicate the direction of spin quantization and where |↑⟩_x = (|↑⟩_z + |↓⟩_z)/√2. From the second term, one deduces

Mathematical Statement 4: If the barred spin is |↓̄⟩_z, then the unbarred spin is |↑⟩_x . (14)

^e It is not possible to have a logical conjunction (AND symbol) between two (or more) orthogonal components of a wavefunction. In Eq.(10), the logical statement (|↑̄⟩_z|↓⟩_z AND |↓̄⟩_z|↑⟩_z) makes no sense: it is impossible for the barred spin to be up (and unbarred spin down) and for the barred spin to be down (and unbarred spin up). The fact that linear superpositions of orthogonal wavefunctions have this property is the reason why Schrödinger cats are non-problematic in unitary quantum mechanics.^16
The violation of the transitive property of logic for mathematical statements derived from wavefunctions follows from Mathematical Statements 3 and 4. Let A = "the unbarred spin is |↑⟩_z", B = "the barred spin is |↓̄⟩_z", and C = "the unbarred spin is |↑⟩_x". Putting these two statements (that is, A ⇒ B and B ⇒ C) together using the transitive property of logic yields

If the unbarred spin is |↑⟩_z, then the unbarred spin is |↑⟩_x , (15)

which is a contradiction. As we show below, the problem with transitivity for Mathematical Statements 3 and 4 in this paragraph is closely related to the problem with logic in combining Frauchiger-Renner's measurement Statements 3 and 4, but with |+⟩_M playing the role of |↑⟩_x. Rule (iii) tells us that the probability of the barred spin being up AND the unbarred spin being down is 1/3 because the coefficient of |↑̄⟩_z|↓⟩_z is 1/√3 in Eq.(10). Using Rule (iii), one can obtain mathematical wavefunction statements involving probabilities:

Mathematical Statement 4′: If the barred spin is |↓̄⟩_z, then there is a 50% chance that the unbarred spin is up (|↑⟩_z) and a 50% chance that the unbarred spin is down (|↓⟩_z) . (16)

This follows from the second and third terms in Eq.(10). Given that quantum mechanics is a theory of probability, it is natural for logical mathematical statements about wavefunctions to involve probabilities. Now, when Mathematical Statements 3 and 4′ are combined using transitivity, they generate an invalid probabilistic statement:

If the unbarred spin is |↑⟩_z, then there is a 50% chance that the unbarred spin is up (|↑⟩_z) and a 50% chance that the unbarred spin is down (|↓⟩_z) , (17)

which is consistent with the incorrect statement in Eq.(15) because a spin in the up x-direction has a 50% chance of being up in the z-direction and a 50% chance of being down in the z-direction.
By expressing the wavefunction in Eq.(10) using a z-direction axis of spin quantization for the unbarred spin and an x-direction axis for the barred spin, one can also obtain the mathematical statement about wavefunctions that is analogous to the Frauchiger-Renner measurement Statement 2:

Mathematical Statement 2: If the barred spin is |↓̄⟩_x, then the unbarred spin is |↑⟩_z . (18)
There is also the analog of Frauchiger-Renner Statement 1:

Mathematical Statement 1: If the unbarred and barred spins are respectively |↓⟩_x and |↓̄⟩_x, then the barred spin is |↓̄⟩_x , (19)

which is obviously true. When all four mathematical statements (Eqs.(19), (18), (12) and (14)) are combined using classical logic, one can then rotate the unbarred spin to flip it completely (rather than rotating it by 90°, as is done in Eq.(15)). The chain of statements reads "If the unbarred and barred spins are respectively |↓⟩_x and |↓̄⟩_x, then the barred spin is |↓̄⟩_x," "If the barred spin is |↓̄⟩_x, then the unbarred spin is |↑⟩_z," "If the unbarred spin is |↑⟩_z, then the barred spin is |↓̄⟩_z," and "If the barred spin is |↓̄⟩_z, then the unbarred spin is |↑⟩_x." If these could be combined using the transitive property of logic, then one would obtain the Contradictory Mathematical Statement about wavefunctions: "If the unbarred and barred spins are respectively |↓⟩_x and |↓̄⟩_x, then the unbarred spin is |↑⟩_x." The fact that there is a one-to-one correspondence between mathematical statements about the wavefunction in Eq.(10) and the Frauchiger-Renner measurement statements beginning with the wavefunction in Eq.(3) (whose spin component is identical) suggests that the problem with the transitive property of logic for mathematical statements derived from wavefunctions is likely to arise in the Frauchiger-Renner gedanken experiment; this remains to be shown, and we show it below.
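Each of Mathematical Statements 2-4 can be read off numerically from Eq.(10) by expanding in the appropriate mixed bases; the short numpy sketch below (ours) checks the conditionals by verifying which amplitudes vanish.

```python
import numpy as np

s3 = 1/np.sqrt(3)
# Eq.(10) amplitudes for |barred, unbarred> in the z⊗z basis,
# ordered (up,up), (up,down), (down,up), (down,down).
psi = np.array([0.0, s3, s3, s3])

# Rows of H give the |up_x> and |down_x> components of a z-basis vector.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

zz = psi
zx = np.kron(I2, H) @ psi   # barred in z, unbarred in x
xz = np.kron(H, I2) @ psi   # barred in x, unbarred in z

# Statement 3 (Eq.(12)): no (barred up, unbarred up) amplitude in z⊗z.
print(np.isclose(zz[0], 0))   # True: unbarred up_z forces barred down_z
# Statement 4 (Eq.(14)): the (barred down_z, unbarred down_x) amplitude vanishes.
print(np.isclose(zx[3], 0))   # True: barred down_z forces unbarred up_x
# Statement 2 (Eq.(18)): the (barred down_x, unbarred down_z) amplitude vanishes.
print(np.isclose(xz[3], 0))   # True: barred down_x forces unbarred up_z
```

All three conditionals hold individually for the same state, even though chaining them with classical transitivity produces the contradiction above.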
Summarizing, the transitive property of logic cannot always be used for mathematical statements derived from wavefunctions. The question arises as to whether the above discussion can be "translated" into statements about measurements.
Mathematical Statements 3 and 4 have physical equivalents involving measurement statements:

If the unbarred spin is measured to be |↑⟩_z, then the barred spin will be measured to be |↓̄⟩_z , (20)

and

If the barred spin is measured to be |↓̄⟩_z, then the unbarred spin will be measured to be |↑⟩_x . (21)
From the above, it might seem easy to create a physical experiment that generates a contradiction. This is not the case, for two reasons. First, the measurements occur at different times. If the measurement of the unbarred spin in Eq.(20) happens at time t_1 and that of the barred spin in Eqs.(20) and (21) at time t_2, then the second unbarred-spin measurement in Eq.(21) must necessarily happen after t_2, since the conclusion of the "If ... then ..." statement in Eq.(21) is in the future. Suppose it happens at time t_3. Then combining Eqs.(20) and (21) using the transitive property of logic gives

If the unbarred spin is measured to be |↑⟩_z at time t_1, then the unbarred spin will be measured to be |↑⟩_x at time t_3 , (22)

which is not obviously a contradiction because the times are different. Second, measurements can affect wavefunctions. The measurement of the unbarred spin at time t_1 may disturb it, changing it from |↑⟩_z to some other value in an unpredictable way. Hence, the subsequent measurement of the unbarred spin at time t_3 could have a random relation to its value at time t_1. Statements 3 and 4 of the Frauchiger-Renner gedanken experiment involve only the second and third terms of Eq.(3). Let us focus on them and on the measurements by Agent F and Agent F̄:

(1/√2) ( |↓̄⟩|↑⟩ + |↓̄⟩|↓⟩ ) . (23)

When Agent F̄ makes her measurement at t_1, the wavefunction becomes

(1/√2) ( |↓̄⟩_M |↑⟩ + |↓̄⟩_M |↓⟩ ) , (24)

and when Agent F makes her measurement at t_2, it becomes

(1/√2) ( |↓̄⟩_M |↑⟩_M + |↓̄⟩_M |↓⟩_M ) . (25)

Now one has two valid statements. The first is the original one of the Frauchiger-Renner gedanken experiment, Statement 3: If Agent F measured the unbarred spin to be up at time t_2, then Agent F̄ measured the barred spin to be down at time t_1. The second is the analog of Mathematical Statement 4′, namely, Statement 4m: If Agent F̄ measured the barred spin to be down at time t_1, then Agent F will measure the unbarred spin at time t_2 to be up with a probability of 50% and down with a probability of 50%.
The conclusion of Statement 3 is the premise of Statement 4m. Hence, if the transitive property of logic were valid for combining statements about measurements, then Agent F could say, "If I measure the unbarred spin to be up at time t_2, then I can conclude that I am not guaranteed to measure the unbarred spin to be up at time t_2." Since this is a contradiction, the premise of the argument, namely that the transitive property of logic is valid for combining statements about measurements, must be false. Thus, we have a rigorous proof that the transitivity property can be violated for statements about measurements. Note that this violation occurs for a microscopic system, since spins are associated with microscopic objects. No measurements on the agents themselves are involved.
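For the restricted two-term state of Eq.(23), the two conditional probabilities behind Statements 3 and 4m can be checked directly; the numpy lines below (our sketch) give P(barred down | unbarred up) = 1 and P(unbarred up | barred down) = 1/2.

```python
import numpy as np

# Eq.(23): amplitudes for |barred, unbarred>, ordered
# (up,up), (up,down), (down,up), (down,down).
psi = np.array([0.0, 0.0, 1.0, 1.0]) / np.sqrt(2)
p = psi**2   # joint outcome probabilities in the z⊗z basis

# Statement 3: P(barred down at t1 | unbarred up at t2) = 1.
print(p[2] / (p[0] + p[2]))   # -> 1.0

# Statement 4m: P(unbarred up at t2 | barred down at t1) = 1/2.
print(p[2] / (p[2] + p[3]))   # -> 0.5
```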
In logic, one can consider the situation in which two premises P and Q must both be satisfied. This is logical conjunction and is denoted by (P AND Q). For example, in Eq.(10), if P = (barred spin is up) and Q = (unbarred spin is down), then (P AND Q) means that both conditions hold and one is restricting the situation to the first term in Eq.(10). This is a valid use of conjunction in quantum mechanics. Now consider P = (unbarred spin is |↑⟩_z) and Q = (unbarred spin is (|↑⟩_z − |↓⟩_z)/√2).^f These premises are incompatible, and using them with conjunction is an invalid operation. One might think that (P AND Q) means (unbarred spin is |↑⟩_z) since |↑⟩_z is common to both P and Q. However, if one uses the x axis of quantization, then P = (unbarred spin is (|↑⟩_x + |↓⟩_x)/√2) and Q = (unbarred spin is |↓⟩_x), and using the same faulty reasoning one might conclude that (P AND Q) means (unbarred spin is |↓⟩_x). In short, there is no way to define (P AND Q) for this situation.
A relevant generalization, which involves measurements, is the following. Suppose the wavefunction at time t_0 is

Ψ_0 = (1/2) [ Ψ^+_0 ( |↑⟩_z + |↓⟩_z ) + Ψ^−_0 ( |↑⟩_z − |↓⟩_z ) ] , (26)

where Ψ^+_0 and Ψ^−_0 are orthonormal wavefunctions that involve the barred spin but not the unbarred spin. Suppose that the barred and unbarred degrees of freedom evolve independently. Then the evolution of Ψ_0 maintains its factorized form, that is, at time t,

Ψ_t = (1/√2) [ Ψ^+_t Ψ^W_+ + Ψ^−_t Ψ^W_− ] , (27)

where Ψ^+_0 evolves to Ψ^+_t, (|↑⟩_z + |↓⟩_z)/√2 evolves to Ψ^W_+, Ψ^−_0 evolves to Ψ^−_t, and (|↑⟩_z − |↓⟩_z)/√2 evolves to Ψ^W_−. Suppose P = (Agent F measures the unbarred spin to be up at time t_0) and Q = (the measurement of Agent W is '−' at time t). Because of the factorized evolution, premise Q implies that the relevant part of the wavefunction is the second term in Eq.(26); it is proportional to Ψ^F_↑ − Ψ^F_↓ and incompatible with premise P, which says that Agent F measured the spin to be up at time t_0. Hence, (P AND Q) makes no logical sense. In the last part of Section 6, we show that certain pairs of premises in the Frauchiger-Renner argument are incompatible with logical conjunction.
At the beginning of this section, we promised to show that Mathematical Statements 1-4 can individually be verified experimentally. To perform this task, we introduce two "Referee Agents", R and R̄. The initial wavefunction is similar to Eqs.(3) and (10):

Ψ̄^R̄ Ψ^R (1/√3) ( |↑̄⟩_z |↓⟩_z + |↓̄⟩_z |↑⟩_z + |↓̄⟩_z |↓⟩_z ) . (28)

Mathematical Statement 1 in Eq.(19) is automatically true. To verify Mathematical Statement 3 (Eq.(12)), Referee Agents R and R̄ make spin measurements using the z-axis for spin quantization. The wavefunction becomes

(1/√3) ( Ψ̄^R̄_↑z Ψ^R_↓z + Ψ̄^R̄_↓z Ψ^R_↑z + Ψ̄^R̄_↓z Ψ^R_↓z ) . (29)

^f We choose a minus sign in Q because it corresponds more closely to the situation in reference 1.
Then, Agents R and R̄ get together to compare results. Whenever Agent R measures ↑_z, he finds that Agent R̄ measured ↓̄_z, due to the middle term in Eq.(29) and the lack of a Ψ̄^R̄_↑z Ψ^R_↑z term. To verify Mathematical Statement 2 (Eq.(18)), Referee Agent R uses the z-axis to make the spin measurement while Agent R̄ uses the x-axis. Using these bases, the initial state in Eq.(28) becomes

Ψ̄^R̄ Ψ^R (1/√6) [ |↑̄⟩_x ( |↑⟩_z + 2|↓⟩_z ) − |↓̄⟩_x |↑⟩_z ] . (30)

After the measurements are made, the wavefunction becomes

(1/√6) [ Ψ̄^R̄_↑x ( Ψ^R_↑z + 2 Ψ^R_↓z ) − Ψ̄^R̄_↓x Ψ^R_↑z ] . (31)

When Agents R and R̄ get together, they find that whenever Agent R̄ measures ↓̄_x, Agent R measures ↑_z. This is because of the last term in Eq.(31) and the lack of a Ψ̄^R̄_↓x Ψ^R_↓z term. To verify Mathematical Statement 4 (Eq.(14)), Agent R uses the x-axis to make the spin measurement while Agent R̄ uses the z-axis. Using these bases, the initial state in Eq.(28) becomes

Ψ̄^R̄ Ψ^R (1/√6) [ |↑̄⟩_z ( |↑⟩_x − |↓⟩_x ) + 2 |↓̄⟩_z |↑⟩_x ] . (32)

After the measurements are made, the wavefunction becomes

(1/√6) [ Ψ̄^R̄_↑z ( Ψ^R_↑x − Ψ^R_↓x ) + 2 Ψ̄^R̄_↓z Ψ^R_↑x ] . (33)

When Agents R and R̄ get together, they find that whenever Agent R̄ measures ↓̄_z, Agent R measures ↑_x. This is because of the last term in Eq.(33) and the absence of a Ψ̄^R̄_↓z Ψ^R_↓x term.
Physicists might argue that mathematical statements about a wavefunction of the type discussed in this section are not physical because they are not observed. However, we believe it is more reasonable to say that mathematical statements about a wavefunction are physically valid if there exists a measurement that can show that they are true. With this viewpoint, Mathematical Statements 1-4 are individually physically true statements.
Suppose that one attempts to use a single measurement procedure, similar to the above technique of using referee agents, to show the validity of any two of the four Mathematical Statements. Then one finds that this cannot be done. A measurement to establish the validity of one of the statements interferes with the measurement to establish the validity of the other statement, or vice versa. In the case of Mathematical Statements 1-4, one runs into the issue of compatibility of premises explained above (the discussion starting two paragraphs above Eq.(26)): one would have to measure either the barred spin or the unbarred spin in both the z-direction and the x-direction. Also problematic is that, after a measurement, the quantum degrees of freedom of an agent become entangled with the spin that he measures. Therefore, the second of the two measurements would have to be performed on the degrees of freedom of the agent, and this can be problematic; see Section 7. The discussion here can be considered as providing an understanding of why classical logic cannot always be applied when combining certain "If ..., then ..." statements in quantum mechanics.
If one were to attempt to carry out the Frauchiger-Renner experiment in the real world, then one would have to make sure that the correct initial spin state of Eq.(3) is being generated. The way to do this is to perform measurements on the spins, and this turns out to be the same procedure used to experimentally verify Mathematical Statements 2-4; see Eqs.(28)-(33) above and the corresponding discussion. For example, the verification of Statement 3 above shows that the spin state must contain the term |↓̄⟩_z|↑⟩_z but cannot contain |↑̄⟩_z|↑⟩_z. It can be checked that the verification of Statements 2-4 using measurements is sufficient to uniquely determine the spin wavefunction to be that in Eq.(29). In a real experiment, one would repeatedly check the initial spin-state generation until one gained confidence that one is producing it correctly each time.
One can also use referee agents to experimentally verify the statements in the Frauchiger-Renner gedanken experiment. As above, we need two of them: Agents R and R̄. The initial wavefunction in Eq.(3) is replaced by

Ψ̄^R̄ Ψ^R (1/√3) ( |↑̄⟩|↓⟩ + |↓̄⟩|↑⟩ + |↓̄⟩|↓⟩ ) . (34)

To experimentally verify Statement 2, namely that if Agent W̄ measures '−' at time t_3, then Agent F measured the unbarred spin to be up at time t_2, one has Agent R observe Agent F just after time t_2. Refer to footnote c above. An observation by Agent R of Agent F's wavefunction takes the form

Ψ^R Ψ^F_↑ → Ψ^R_↑ Ψ^F_↑  and  Ψ^R Ψ^F_↓ → Ψ^R_↓ Ψ^F_↓ , (35)

so that Agent R does not disturb the wavefunction of Agent F. The wavefunction just after t_2 takes the form

Ψ̄^R̄ (1/√3) ( |↑̄⟩_M Ψ^R_↓ |↓⟩_M + |↓̄⟩_M Ψ^R_↑ |↑⟩_M + |↓̄⟩_M Ψ^R_↓ |↓⟩_M ) . (36)

Then, at a time just after t_3, Agent R̄ observes what Agent W̄ has measured. The analog of Eq.(35) is

Ψ̄^R̄ Ψ̄^W̄_+ → Ψ̄^R̄_+ Ψ̄^W̄_+  and  Ψ̄^R̄ Ψ̄^W̄_− → Ψ̄^R̄_− Ψ̄^W̄_− , (37)

and the wavefunction just after t_3 becomes

(1/√6) [ Ψ̄^R̄_+ Ψ̄^W̄_+ ( Ψ^R_↑ |↑⟩_M + 2 Ψ^R_↓ |↓⟩_M ) − Ψ̄^R̄_− Ψ̄^W̄_− Ψ^R_↑ |↑⟩_M ] . (38)

Next, Agents R and R̄ meet. They find that whenever Agent R̄ observed Agent W̄ to measure '−', Agent R had to observe Agent F measuring |↑⟩. This result comes from the last term in Eq.(38).
In a similar manner, Statement 3 is experimentally verified by having Agent R̄ observe Agent F̄'s measurement just after time t_1 and then having Agent R observe Agent F's measurement just after time t_2. Statement 4 is verified by having Agent R̄ observe Agent F̄'s measurement just after time t_1 and then having Agent R observe Agent W's measurement just after time t_4. Although Statement 1′ is obviously true, one could have Agents R̄ and R respectively observe the measurements of W̄ and W just after times t_3 and t_4.
Quantum Logic and Measurement Statements
The example associated with Eqs.(23)-(25) above involves going "backward and forward" in time. Is there an example in which one avoids this? Consider a system involving three spin-1/2 objects and three agents, A, B and C, who perform measurements on them. Use the following as the initial wavefunction:

Ψ_0 = (1/√2) ( |↑⟩_A |↑⟩_B |↑⟩_C + |↓⟩_A |↑⟩_B |↓⟩_C ) . (39)

Agent A first measures the A-spin at time t_A, then Agent B measures the B-spin at time t_B, and finally Agent C measures the C-spin at time t_C. Here, t_A < t_B < t_C. After these measurements are made, the structure of the wavefunction is

(1/√2) ( Ψ^A_↑ Ψ^B_↑ Ψ^C_↑ + Ψ^A_↓ Ψ^B_↑ Ψ^C_↓ ) . (40)

The agents A, B, and C can then get together to discuss their results. The following statements are verified to be true from Eq.(40) using the quantum logic rules (i)-(iii):
Statement A: If Agent A measures the A-spin to be up at time t_A, then Agent B will measure the B-spin to be up at time t_B.

Statement T: If Agent A measures the A-spin to be up at time t_A, then Agent C will measure the C-spin to be up at time t_C.

One can also deduce:

Statement B: If Agent B measures the B-spin to be up at time t_B, then Agent C at time t_C will not necessarily measure the C-spin to be up.

Indeed, if Agent B measures the B-spin to be up at time t_B, then Agent C will measure the C-spin to be up 50% of the time and down 50% of the time. If Statements A and B could be combined using the transitive property of logic, then one would obtain Statement F: If Agent A measures the A-spin to be up at time t_A, then it is not guaranteed that Agent C at time t_C will measure the C-spin to be up.
Statement F is false, since it violates Statement T, the latter always being true. Therefore, the transitive property of logic can be violated in quantum mechanics for statements about measurements. Again, this is for a microscopic system, since spins are involved. If there is any doubt about the above, Statements A, B and T can be verified in a real experiment; the difficult part is generating the initial entangled spin state, but nowadays there are methods to handle this. In standard logic, an "If ... then ..." statement cannot be "50% true"; if it is not always true, then it is considered false. However, given that the conclusion of Statement F (obtained using transitivity) and the conclusion of Statement B can be replaced by "Agent C will measure the C-spin to be up 50% of the time and down 50% of the time", one might characterize Statement F as being "50% true" and "50% false".^g If, in the above example, the coefficient of |↑⟩_A|↑⟩_B|↑⟩_C in Ψ_0 is selected to be √0.1 while that of |↓⟩_A|↑⟩_B|↓⟩_C is √0.9, then Statement F is "90% false" and "10% true". One can adjust the component coefficients to "increase" the "falsehood" of F, but one cannot arrive at "100%" in this simple example. In addition, up to this point, all of the examples of violations of transitivity for statements about measurements involve an "If ... then ..." statement in which one of the conclusions of a premise involves probabilities; it is quite natural and acceptable to have such statements, since quantum mechanics involves uncertainty and probabilities. In the Frauchiger-Renner gedanken experiment, none of the conclusions of Statements 1-4 involve probabilities; however, in the next section, we show that violations of transitivity and other rules of logic still arise.

^g By performing a series of runs and collecting statistics, it can be verified that Statement F produces a false result in "50%" of the runs. It should be clear that standard classical logic is not the correct framework for dealing with statements about wavefunctions and measurements. In fuzzy logic,^19 it is permissible to have statements that are "fractionally" true.
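The A-B-C statistics, including the coefficient-dependent "degree of falsehood" of Statement F, can be tabulated directly from Eqs.(39)-(40); the numpy sketch below (ours, with illustrative coefficient choices) does this for general c_a and c_b.

```python
import numpy as np

def abc_statistics(c_a, c_b):
    """Conditional probabilities for the three-spin state
    c_a|up,up,up> + c_b|down,up,down> of Eqs.(39)-(40)."""
    # Outcomes labelled (A, B, C) with 0 = up, 1 = down.
    probs = {(0, 0, 0): abs(c_a)**2, (1, 0, 1): abs(c_b)**2}
    p_C_up_given_A_up = probs[(0, 0, 0)] / sum(
        p for (a, _, _), p in probs.items() if a == 0)   # Statement T
    p_C_up_given_B_up = probs[(0, 0, 0)] / sum(
        p for (_, b, _), p in probs.items() if b == 0)   # Statement B
    return p_C_up_given_A_up, p_C_up_given_B_up

# Equal coefficients: T holds with certainty while B gives 50/50,
# so transitivity (Statement F) fails in 50% of the runs, cf. Eq.(42).
print(abc_statistics(1/np.sqrt(2), 1/np.sqrt(2)))   # -> (1.0, 0.5)

# sqrt(0.1) and sqrt(0.9): Statement F is now "90% false".
print(abc_statistics(np.sqrt(0.1), np.sqrt(0.9)))   # -> (1.0, 0.1)
```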
Suppose that we have a wavefunction at time t = 0 of the form

Ψ_0 = c_a Ψ^a_0 + c_b Ψ^b_0 , (41)

where Ψ^a_0 and Ψ^b_0 are orthonormal and c_a and c_b are constants. Suppose that one is trying to combine two "If ... then ..." statements using transitivity, but the premise of the first "If ... then ..." statement restricts the wavefunction to c_a Ψ^a_0 while its conclusion involves both terms, and the premise of the second "If ... then ..." statement is this conclusion (and hence involves both terms). This is the case in the above example. It is accomplished by a "shift effect": the premise of "If |↑⟩_A at time t_A, then |↑⟩_B at time t_B" involves only the first term of Ψ_0 (that is, the c_a Ψ^a_0 = c_a |↑⟩_A|↑⟩_B|↑⟩_C term), but the premise of "If |↑⟩_B at time t_B, then ..." involves both terms. Roughly speaking, the premise has "shifted" from the first term to both terms. The statement "If |↑⟩_B at time t_B, then ..." can be true because of the second term (c_b |↓⟩_A|↑⟩_B|↓⟩_C), and this can mean that the premise "|↑⟩_A at time t_A" of "If |↑⟩_A at time t_A, then |↑⟩_B" is false. Indeed, this is the origin of the violations of transitivity in the examples presented so far. Consider Statements A and B above. When the premise of B is true but the premise of A is false, the A-spin is down, and this corresponds exactly to the cases in which transitivity leads to a false result (that is, the C-spin is down) in Statement F.^h Unitarity guarantees that the evolutions of Ψ^a_0 and Ψ^b_0 to a later time t (which we denote by Ψ^a_t and Ψ^b_t) proceed independently and that the two components remain orthonormal. This means that the future evolution of Ψ^a_t cannot depend on Ψ^b_t and vice versa. If the conclusions in the two "If ... then ..." statements are only "compatible" due to the first term in the wavefunction, then the use of transitivity will be violated in a fraction of the cases given by

|c_b|² / ( |c_a|² + |c_b|² ) . (42)

Since in the above examples c_a = c_b, the "violation of transitivity is 50%". It is possible for the conclusions of the two "If ... then ..." statements to be "compatible" for both terms in the wavefunction, in which case transitivity produces a valid statement even when the "shift effect" is present. This can be considered a coincidence. For example, if the C-spin of the second term in Eq.(39) is changed to be up, then the conclusions of Statements B and F above are both changed to "Agent C at time t_C will measure the C-spin to be up". However, the reason that Statement F is now true is that Agent C measures the C-spin to be up for both terms. In mathematical statements about wavefunctions, the "shift effect" can also be traced to the reason why the incorrect result in Eq.(21) arises when one makes use of logical transitivity.

^h This particular understanding of the violation of transitivity grew out of email exchanges that I had with Renato Renner from May 1, 2022 to May 13, 2022.
In the next section, we show that the combining of Statements 1′ and 2, of Statements 2 and 3, and of Statements 3 and 4 using transitivity in the Frauchiger-Renner gedanken experiment uses a "shift effect" and involves exactly the same structure discussed here, with c_a = c_b. Hence, the logical statements obtained from them using transitivity are not true.
In addition to transitivity, there are other rules of logic that are violated in quantum mechanics. For example, in standard classical logic, if P implies R and Q is any other condition, then (P AND Q) also implies R: (P ⇒ R) ⇒ ((P AND Q) ⇒ R). Return to the experiment associated with Eqs.(39) and (40), and let P be the premise of Statement B (P = "Agent B measures the B-spin to be up at time t_B"), let R be the conclusion of Statement B (R = "Agent C at time t_C will not necessarily measure the C-spin to be up"), and let Q be the premise of Statement A (Q = "Agent A measures the A-spin to be up at time t_A"). Then (P AND Q) actually implies S instead of R, where the conclusion S is "Agent C at time t_C will measure the C-spin to be up".
We now illustrate the power of unitarity by showing that the four measurement statements of the Frauchiger-Renner gedanken experiment can all be derived from the final wavefunction in Eq.(8) and knowledge of how the experiment was conducted, that is, that Agent F̄ first measured the barred spin at time t_1, then Agent F measured the unbarred spin at time t_2, etc. Note that |↑⟩_M (respectively, |↓⟩_M) at time t_2 always evolves to (Ψ^W_+ + Ψ^W_−)/√2 (respectively, (Ψ^W_+ − Ψ^W_−)/√2) at time t_4. The analogous statement is true for the barred states. The first step is to rewrite Eq.(8) so that the (Ψ^W_+ + Ψ^W_−) and (Ψ^W_+ − Ψ^W_−) dependence is evident:

Ψ_4 = (1/(2√3)) [ (Ψ̄^W̄_+ − Ψ̄^W̄_−)(Ψ^W_+ + Ψ^W_−) + 2 Ψ̄^W̄_+ (Ψ^W_+ − Ψ^W_−) ] . (43)

Since (Ψ^W_+ + Ψ^W_−) is the evolution of unbarred up spin and it multiplies (Ψ̄^W̄_+ − Ψ̄^W̄_−), which is the evolution of barred down spin, one obtains "If Agent F previously measured the unbarred spin to be up, then Agent F̄ measured the barred spin to be down," which is Statement 3. To obtain Statement 4, one needs to rewrite Eq.(8) so that the (Ψ̄^W̄_+ + Ψ̄^W̄_−) and (Ψ̄^W̄_+ − Ψ̄^W̄_−) dependence is evident:

Ψ_4 = (1/(2√3)) [ 2 (Ψ̄^W̄_+ − Ψ̄^W̄_−) Ψ^W_+ + (Ψ̄^W̄_+ + Ψ̄^W̄_−)(Ψ^W_+ − Ψ^W_−) ] . (44)

Barred down spin evolves to (Ψ̄^W̄_+ − Ψ̄^W̄_−) and it multiplies Ψ^W_+ (the first term in Eq.(44)), and so one obtains "If Agent F̄ previously measured the barred spin to be down, then Agent W will obtain '+' for his measurement," which is Statement 4. The above shows that there is a close relation between the measurement statements in the Frauchiger-Renner gedanken experiment and the mathematical statements of Section 4, particularly those in the paragraphs above and below Eq.(17), and that the reasons for the violation of transitivity are similar.
Issues with Logic in the Frauchiger-Renner Argument
Agent F can perform her measurement before Agent F̄, and the resulting wavefunction after both measurements remains the same. One can have t_1 approach t_2, or even have the two agents perform their measurements at the same time.^i Then Statements 3 and 4 are valid at the same time. So, one can take t_1 = t_2 and use Eq.(5) as the starting point for the Frauchiger-Renner argument. Below, we often make this simplification.
Consider the logic involved in combining Statement 3 and Statement 4 in the logical chain that leads to the Contradictory Statement. Let P be the premise of Statement 3, that is, P = (Agent F measured the unbarred spin to be up at time t_2). Let Q be the conclusion of Statement 3, which is also the premise of Statement 4. Here, Q = (Agent F̄ measured the barred spin to be down at time t_1 = t_2). Finally, let R be the conclusion of Statement 4: R = (Agent W will measure '+' at time t_4). Statement 3 is P ⇒ Q and Statement 4 is Q ⇒ R. Now, the premise of Statement 4 involves the second and third terms in the wavefunction at time t_2 (those involving |↓̄⟩_M|↑⟩_M + |↓̄⟩_M|↓⟩_M in Eq.(5)), while Statement 3 involves the middle or second term (the one involving |↓̄⟩_M|↑⟩_M). Let us focus on this part of the wavefunction and its evolution to time t_4:

(1/√3) ( |↓̄⟩_M|↑⟩_M + |↓̄⟩_M|↓⟩_M ) → (1/√6) [ Ψ̄^W̄_↓ (Ψ^W_+ + Ψ^W_−) + Ψ̄^W̄_↓ (Ψ^W_+ − Ψ^W_−) ] , (45)

where Ψ̄^W̄_↓ is an abbreviation for (Ψ̄^W̄_+ − Ψ̄^W̄_−)/√2. The first thing to note is that the "shift effect" is occurring, and so, given the results in Section 5, combining Statements 3 and 4 using transitivity is an invalid procedure. One can also "quantify" the violation of assuming ((P ⇒ Q) AND (Q ⇒ R)) ⇒ (P ⇒ R) using Eq.(42): P ⇒ R should be "50% false", which is easy to show.
The |↓̄⟩_M|↑⟩_M term in Eq.(45) at time t_2 evolves to the Ψ̄^W̄_↓(Ψ^W_+ + Ψ^W_−) term at time t_4. Likewise, the last term in Eq.(45) at t_2 evolves to the last term at t_4. When the premise Q (i.e., the barred spin is measured to be down) of Statement 4 holds, both terms are relevant and a cancellation of Ψ^W_− occurs, thereby yielding the conclusion R of Statement 4, that the probability of Agent W obtaining '+' is 100%. However, if an "If ... then ..." statement involves premise P (i.e., the unbarred spin is measured to be up), then the relevant term is the |↓̄⟩_M|↑⟩_M one. It evolves to something proportional to (Ψ^W_+ + Ψ^W_−). So, if P holds, that is, Agent F measured the unbarred spin to be up at time t_2, then there is a 50% chance that Agent W will obtain '+' for his measurement, not 100%. Hence, this "direct" calculation demonstrates that the transitivity property of logic cannot be used to combine Statements 3 and 4; its use produces an incorrect result. The different understandings of why transitivity is violated in the Frauchiger-Renner case are identical to those of why transitivity is violated for the "A-B-C" model of the previous section. However, for the Frauchiger-Renner case, there is another explanation. The conclusion of Statement 4 involves a delicate cancellation between two terms in Eq.(45) to eliminate the Ψ^W_− dependence in Ψ_4. This can be considered a quantum interference effect. The best example of quantum interference is the double-slit experiment. Imagine that |↓̄⟩|↑⟩/√2 is associated with passing through the left slit, while |↓̄⟩|↓⟩/√2 is associated with passing through the right slit. Each evolves respectively to Ψ̄^W̄_↓(Ψ^W_+ + Ψ^W_−)/2 and Ψ̄^W̄_↓(Ψ^W_+ − Ψ^W_−)/2 when they reach a certain point on the detection screen. See Figure 1. If no attempt is made to detect whether the wavefunction goes through the left or right slit, then the quantum interference effect occurs, there is a cancellation of the Ψ^W_− amplitude, and the screen will signal to Agent W a '+' outcome. Now, when premise P is operative, it means that Agent F has effectively "done something" to determine which slit the object went through. Indeed, she has determined that it went through the left slit because she has measured the unbarred spin to be up, which is associated with |↓̄⟩|↑⟩/√2. This disturbs the quantum interference effect, the wavefunction will evolve to Ψ̄^W̄_↓(Ψ^W_+ + Ψ^W_−)/2 at the screen, and the signal can no longer be guaranteed to be '+': half the time it will be '+' and half the time it will be '−'. Thus, using the analogy with the two-slit experiment, one understands physically why the transitive rule is violated in this case: when Statement 3 is combined with Statement 4, the resulting logical statement does not properly take into account the effect of the measurement performed by Agent F on the measurement by Agent W.

^i As an aside, it is also true that the temporal order does not matter for the two Wigner measurements at times t_3 and t_4; one can have t_3 < t_4, t_4 < t_3 or t_3 = t_4, and Statements 1-4 of the Frauchiger-Renner gedanken experiment are all still valid.
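The two-path bookkeeping in the double-slit analogy can be mimicked numerically: with no which-path record, the two amplitudes for the '−' outcome cancel, while conditioning on the left path restores a 50/50 split. The sketch below (ours, with illustrative amplitude labels) adds amplitudes in the first case and probabilities in the second.

```python
import numpy as np

# Amplitudes for the 'screen' outcomes ('+', '-') contributed by each path,
# mirroring psi_L -> (Psi_+ + Psi_-)/2 and psi_R -> (Psi_+ - Psi_-)/2.
left  = np.array([0.5,  0.5])
right = np.array([0.5, -0.5])

# No which-path information: amplitudes interfere; '-' cancels exactly.
coherent = left + right
print(coherent**2)          # -> [1.0, 0.0]: '+' is certain

# Agent F records the path (premise P): probabilities add instead.
incoherent = left**2 + right**2
print(incoherent)           # -> [0.5, 0.5]: '+' only 50% likely
```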
If the gedanken experiment involved only the terms displayed in Eq.(45), then one could derive the following two statements. Statement 3L: If Agent F measures the unbarred spin to be up at time t_2, then Agent W at time t_4 will measure '+' 50% of the time.
Statement 3R: If Agent F measures the unbarred spin to be down at time t_2, then Agent W at time t_4 will measure '+' 50% of the time. Now, in classical logic, if (L ⇒ Q) and (R ⇒ Q) are two valid "If ... then ..." statements, then one can conclude that (L OR R) ⇒ Q. Here, L and R are respectively the premises of Statements 3L and 3R, and Q = (Agent W at time t_4 will measure '+' 50% of the time), or one can use Q = (Agent W at time t_4 will not necessarily obtain a measurement of '+'). However, the correct statement involving the premise (L OR R) is (L OR R) ⇒ Q′, where Q′ = (Agent W at time t_4 will measure '+' with certainty). This is just another example of how classical logic cannot be applied to statements about measurements, especially when quantum interference effects are involved.
When Statement 2 is combined with Statement 3 using transitivity, the result is "If Agent W̄ measured '−' at time t_3, then Agent F̄ measured the barred spin to be down (|↓̄⟩) at time t_1." However, we know from the methods used in the last two paragraphs of Section 5 that if Agent W̄ measured '−' at time t_3, then it had to have evolved from a term proportional to |↑̄⟩_M − |↓̄⟩_M in Ψ_1 at time t_1, and not from something proportional to |↓̄⟩_M. In fact, one can show that it originated from (|↑̄⟩_M − |↓̄⟩_M)|↑⟩ (up to a factor). Hence, combining Statement 2 with Statement 3 using transitivity generates an invalid logical statement. It is violated 50% of the time.
To analyze whether Statement 1′ can be combined with Statement 2 using transitivity, one needs to consider the last two terms in Ψ_4 of Eq.(8). Recall that Statement 1′ is "If, at time t_4, agents W and W̄ respectively measure '−' and '−', then W̄ measured '−'." The premise of Statement 1′ involves the last term in Eq.(8), whereas the conclusion of Statement 1′, which is the premise of Statement 2, involves both the third and fourth terms. A "shift effect" is present and, not surprisingly, the logical statement generated using transitivity is "50% false". The conclusion of Statement 2, namely that Agent F measured the unbarred spin to be up at time t_2, uses a quantum interference effect similar to the one involved in combining Statements 3 and 4. In fact, the "wavefunction structures" and the "If ... then ..." statements for the two cases are isomorphic.
The "If ... then ..." Statements 1 through 4 in the Frauchiger-Renner gedanken experiment are all valid in unitary quantum mechanics when considered in isolation. In reference 1, a special run is selected, namely, the one in which agents W and W̄ measure '−' and '−'. What happens when one considers the effect of imposing this?
The answer is that one is restricting the wavefunction to the last term in Eq.(8), and Statements 2, 3 and 4 all become invalid, thereby ruining the analysis in reference 1. For example, the conclusion of Statement 3, which is that Agent F̄ measured the barred spin to be down at time t_1, is not a consequence of the premise "Agent F measured the unbarred spin to be up at time t_2 AND W̄ measures '−' at time t_3 AND Agent W measures '−' at time t_4," as one can verify. This is another example of the fact that the use of certain rules of logic does not always properly take into account the combined effects of the measurements made by the agents. Intuitively, it is easy to understand why Statements 2 and 4 are rendered invalid when the '−'-'−' condition is imposed: the validity of both these statements depends on a perfect quantum interference cancellation between two terms in the wavefunction. Anything that disrupts the delicate cancellation will render the corresponding statement false. Consider Statement 2, for example. Its validity depends on a cancellation involving the first and third terms in Eq.(6) and the evolution of these two terms going forward in time. However, the constraint that agents W and W̄ respectively measure '−' and '−' affects all three terms in Eq.(6) and upsets the quantum interference cancellation.
To generate the Contradictory Statement of reference 1, the premises of Statements 1 through 4 must all be true at once. These premises are "Agent W measures '−'", "W̄ measures '−'", etc. When Agent W measures '−' at time t_4, it can be verified that Agent F̄ had to have measured the barred spin to be up at time t_1. The premise of Statement 4 is "Agent F̄ measured the barred spin to be down at time t_1". Hence, whenever the premise "Agent W measures '−' at time t_4" is true, the premise of Statement 4 is false, and vice versa. There are no instances in which the premises of Statements 1 through 4 are all true at once. So, it is not surprising that, in incorrectly using the rules of logic for statements about measurements in their gedanken experiment, Frauchiger and Renner can arrive at the logically contradictory statement "If, at time t_4, agents W and W̄ respectively measure '−' and '−', then agent W can deduce that he will measure '+'."
There are additional incompatibilities among the premises. If the usual rules of logic held, then ((A ⇒ B) AND (B ⇒ C)) would imply ((A AND B) ⇒ C). So, it should be true that (P_1 AND P_2 AND P_3 AND P_4) ⇒ C_4, where P_i is the premise of Statement i and C_4 is the conclusion of Statement 4. Hence, the premises must all be true at once to derive the Contradictory Statement. However, the situation is even worse. The premise of Statement 2 is "Agent W̄ measured '−' at time t_3". The premise of Statement 4 is "Agent F̄ measured the barred spin to be down at time t_1". Now, from the analysis at the end of Section 5, we know that if Agent W̄ measured '−' at time t_3, then the relevant part of the wavefunction had to have evolved from something proportional to Ψ̄^F̄_↑ − Ψ̄^F̄_↓ (which is the same as |↑̄⟩_M − |↓̄⟩_M) at time t_1. This means that we are in the situation described in the paragraph that contains Eqs.(26) and (27), except that the barred spin is involved: P_2 and P_4 are "conjunctually incompatible", meaning that it is illegitimate to have them appear in the same logical AND statement. One of the premises of Statement 1, namely that "Agent W measures '−' at time t_4", is also "conjunctually incompatible" with the premise of Statement 3, namely that "Agent F measured the unbarred spin to be up at time t_2", for the same reason as in the "barred" case. In a single-photon interferometric setup implementing the scenario of Frauchiger and Renner, reference 15 observed similar compatibility issues.
From the results in this section, one can see that there are a plethora of errors in the analysis of the gedanken experiment in reference 1.
The Frauchiger-Renner Wigner/Friend Measurements
In this section, we reveal a technical problem with the Wigner/friend measurements used in the Frauchiger-Renner gedanken experiment. The experiment makes use of two such measurements. Suppose that Agent F measures a qubit, which we represent as the spin of a spin-1/2 object. Let Ψ^F be the wavefunction of Agent F before the measurement is made. The wavefunction Ψ^F in general consists of many degrees of freedom: those of the experimentalist and those of her apparatus. When Agent F measures the state |↑⟩, the wavefunction for F changes: at a minimum, the apparatus records the up-spin result and the experimentalist notes in her brain that the spin was measured to be up. As explained in the Introduction, we denote the resulting wavefunction by Ψ^F_↑. If the spin-1/2 object "survives", its degrees of freedom are included in Ψ^F_↑. In cases in which the qubit states are represented by the right and left polarizations of a photon and the photon is destroyed during the measurement, Ψ^F_↑ does not include the qubit degree of freedom; the measurement process is still represented by Ψ^F |↑⟩ → Ψ^F_↑. When Agent F measures the state |↓⟩, statements similar to the above apply and the process is represented by Ψ^F |↓⟩ → Ψ^F_↓. A Wigner/friend measurement involves a new Agent W who makes a measurement on the Ψ^F_↑ and Ψ^F_↓ states. In the Frauchiger-Renner gedanken experiment, Agent W makes the measurement in the basis

( Ψ^F_↑ ± Ψ^F_↓ ) / √2 .

If Ψ^W is the wavefunction before the "Wigner" measurement is made, then, as in the case of Agent F above, Ψ^W is affected by the measurement, and the process is represented by

Ψ^W ( Ψ^F_↑ + Ψ^F_↓ )/√2 → Ψ^W_+  and  Ψ^W ( Ψ^F_↑ − Ψ^F_↓ )/√2 → Ψ^W_− .

The degrees of freedom of F are included in Ψ^W_+ and Ψ^W_−. In the Frauchiger-Renner gedanken experiment, Agent W has an almost impossible task in measuring the '+' and '−' states of such a complicated system,^15 and, as Lídia del Rio and Renato Renner note, the Wigner agents must "have excellent quantum control of other agents' memories and labs"^20 in order to conduct their measurements on agents F and F̄, at least in the way that the Wigner measurements are presented in reference 1. However, there is also a tremendous burden on Agent F (and Agent F̄). If Agent F is a complicated object (and indeed up until now we have been assuming this, since F consists of the experimentalist and her equipment), then it is unlikely that the same Ψ^F_↑ is produced each time |↑⟩ is measured. This is a problem for Agent W: how can he perform a measurement in the above basis if he does not know what these states are? The solution is that Agent F must respond to the measurement of the spin in a predetermined, known way to produce specific Ψ^F_↑ and Ψ^F_↓, and Agent W must be informed of these "known" states at the start of the experiment and before the measurements are made. It is clearly impossible for Agent F to produce a specified Ψ^F_↑ or Ψ^F_↓ given that Agent F is such a complicated object. Among other things, the center-of-mass coordinate of Agent F is a continuous variable that cannot be precisely fixed because of the Heisenberg uncertainty principle.
To shed some light on the issue, consider replacing all the quantum degrees of freedom associated with Agent F with a system of 100 qubits, each qubit having two states: up and down. During the measuring process, these 100 qubits are "disturbed" randomly, but the signal of the experimental outcome is encoded in the last qubit: if the spin was up (respectively, down), then the last qubit is up (respectively, down) after the measurement is performed. Now Agents F and W agree, for example, that Ψ^F_↑ (respectively, Ψ^F_↓) corresponds to all the qubits being up (respectively, down). When the experiment takes place, Agent F has the very difficult task of controlling how the experiment affects the first 99 qubits. It is very unlikely that the qubits will all be up or all be down. Hence, almost all the time, Agent W gets no signal when he tries to make his measurement. So, the "100-spin case" is difficult but still doable in principle. However, the situation is rendered impossible when one considers that, among the enormous number of quantum degrees of freedom of Agent F, there are many (in fact, most) that are continuous and not discrete.
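A quick simulation conveys the scale of the problem: if the 99 "non-signal" qubits are disturbed at random, the chance that a run lands in the agreed-upon all-up (or all-down) reference configuration is about 2^(-99), so in any feasible number of trials Agent W essentially never gets a usable signal. The sketch below (ours, with an illustrative random-disturbance model) samples smaller registers to keep the run time reasonable.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_in_reference_state(n_noise_qubits, n_trials=1_000_000):
    """Fraction of runs in which randomly disturbed 'noise' qubits
    all happen to end up in the agreed-upon (all-up) configuration."""
    qubits = rng.integers(0, 2, size=(n_trials, n_noise_qubits))
    return np.mean(np.all(qubits == 0, axis=1))

for n in (5, 10, 20):
    print(n, fraction_in_reference_state(n))   # ~2**(-n): 0.031, 0.00098, ~1e-6
# For n = 99 the probability is ~2**(-99), roughly 1.6e-30: never seen in practice.
```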
A way around this problem is to have Agent W interact only with a small "important" subset of the degrees of freedom of F. These degrees of freedom might include a qubit in a database that recorded the reading as up or down (as in the previous paragraph), as well as data bytes providing the time of the measurement t_m and statements such as "Agent F knows that the measurement was up at time t_m" that are used in reference 1, and so on. In other words, if we write Ψ^F = Ψ′^F |S⟩, where |S⟩ indicates a state associated with these "important" degrees of freedom, then we can have

Ψ^F |↑⟩ → Ψ′^F |S⟩′ |↑⟩_M  and  Ψ^F |↓⟩ → Ψ′^F |S⟩′ |↓⟩_M .

If the initial state is |↑⟩, for example, then

Ψ^F |↑⟩ = Ψ′^F |S⟩ |↑⟩ → Ψ′′^F |↑⟩_M , where Ψ′′^F = Ψ′^F |S⟩′ .

It is easy to see that Ψ′′^F = Ψ′^F |S⟩′ plays no role in the Wigner/friend experiment: it is just an overall factor in the wavefunction from the time of the measurement by Agent F henceforth, and therefore can be ignored in the analysis. When this is done, |↑⟩_M and |↓⟩_M act like a qubit but with messages associated with them. This simplification justifies, for example, the replacement of a friend agent by photon polarizations, as is done in reference 21. Although there are many possibilities for Agents F and W to agree in advance on what Ψ^F_↑ and Ψ^F_↓ are, the result that these two states need to be replaced by two specific states, which we can call |↑⟩_M and |↓⟩_M, is a general result. Agent W can still involve the many degrees of freedom of the human experimentalist and his equipment.
Note that, when the procedure described in the previous two paragraphs is used, the final wavefunction for the experimentalist must be the same whether the initial spin state is |↑⟩ or |↓⟩. This means that the experimentalist cannot be conscious of the experimental outcome. The definition of a measurement on a quantum system is not universal in physics. In its definition, one might require a human or intelligent being to be conscious of the outcome, in which case a Wigner measurement of the type occurring in the Frauchiger-Renner gedanken experiment is impossible, given the results in the first four paragraphs of this section. Alternatively, the definition of a quantum measurement might only require the outcome to be "recorded" or "registered", which means that it is not necessary for an intelligent being to be aware of the experimental result (especially if it is potentially possible to verify the recorded result by human participation in the manner used at the end of Section 4). In this case, a Wigner measurement is possible, but it must be made on an entity without a center-of-mass degree of freedom, such as a spin, a photon polarization, a tensor product of these, et cetera. The Frauchiger-Renner experiment is therefore only possible if the states |↑⟩_M, |↓⟩_M, |↑̄⟩_M and |↓̄⟩_M are of this form. However, such states are necessarily microscopic. Hence, when the Wigner agents make their measurements, it is on microscopic entities, in which case they are "ordinary" quantum measurements. Regardless of the problems with the transitive property of logic, the Frauchiger-Renner gedanken experiment cannot be making a statement about a macroscopic system. If the measurements by Agents F and F̄ are not considered measurements but recordings, then the subscripts 'M' in Sections 2 through 6 are misleading and should be replaced by 'R'. For completeness, we provide a brief description of how the Frauchiger-Renner gedanken experiment is modified to take this into account. One needs to avoid saying that Agent F̄ and Agent F make measurements on the barred and unbarred spins, since almost all but a few discrete quantum degrees of freedom of these two agents are affected. For example, the original first step, which is "Agent F̄ measures the spin of the barred spin-1/2 object in the z-direction at time t_1," needs to be replaced by "An experimental procedure on the barred spin-1/2 object in the z-direction at time t_1 is performed and the outcome is recorded in another spin-1/2 object as |↑̄⟩_R or |↓̄⟩_R." A similar replacement occurs for step two. The subscript "M" is replaced by "R" on barred and unbarred spins in Sections 2 and 3. Statement 1 and Statement 1′ are unchanged, but Statements 2 to 4 become: Statement 2: If Agent W̄ measured '−' at time t_3, then the unbarred spin was recorded to be up at time t_2.
Statement 3: If the unbarred spin was recorded to be up at time t_2, then the barred spin was recorded to be down at time t_1.
Statement 4: If the barred spin was recorded to be down at time t_1, then Agent W will measure '+' at time t_4. If these four statements could be combined using the transitive property of logic, then one would still obtain the Contradictory Statement of Section 3.
It should be clear that any Wigner measurement on a linear combination of Agent F states involving all the degrees of freedom of a human experimentalist and her equipment is, in general, impossible. For the case in which Agent F performs a measurement on a spin-1/2 object in the z-direction, Agent W is unable to perform a measurement using a basis of cos θ Ψ_F↑ + sin θ Ψ_F↓ and sin θ Ψ_F↑ − cos θ Ψ_F↓ for any θ for which both cos θ ≠ 0 and sin θ ≠ 0. Note that θ = π/4 is the Frauchiger-Renner case. Hence, the only basis in which a Wigner agent can make a Wigner measurement on Agent F in the gedanken experiment is one that is "aligned" with the basis that Agent F used. For the case of the spin-1/2 object discussed in this section, the basis is Ψ_F↑ and Ψ_F↓, and the measurement process for this basis is quite simple: Agent W can meet with Agent F and just ask her what she measured. The process is schematically represented by Ψ_W Ψ_F↑ → Ψ_W↑ and Ψ_W Ψ_F↓ → Ψ_W↓.
Discussion and Conclusions
In their work, Frauchiger and Renner concluded that quantum theory cannot be extrapolated to complex systems in a straightforward manner. However, the generation of a contradiction arises only if Assumption (S) is replaced by the stronger Assumption (L), 5 which states that statements by agents concerning measurements of wavefunctions obey standard rules of logic. One might naively think that Assumption (L) should also hold. 6 Equations (1) and (2) correctly describe both the microscopic and macroscopic situations, so there is a consistent transition in going from small scales to large scales without wavefunction collapse, implying that there is no Heisenberg cut, 23 in contradiction to what references 4 and 20 believe. If a Heisenberg cut existed, then it would undermine the conclusion of Frauchiger and Renner, because classical mechanics replaces certain aspects of unitary quantum mechanics above the cut, thereby "not allowing" quantum mechanics to fully operate in the macroscopic world. Section 6 shows by explicit calculation that the use of logical transitivity in the Frauchiger-Renner gedanken experiment is violated in three separate instances. This result has been verified by three authors j for the case of Bohmian mechanics 24 (which is a modification of quantum mechanics involving a tracking field that maintains unitarity) for combining Statements 3 and 4. [25][26][27] In Section 6, we also provide three explanations for why statements in the Frauchiger-Renner gedanken experiment cannot be combined using logic: (i) Combining Statements 1′ and 2, Statements 2 and 3, as well as Statements 3 and 4 involves a "shift effect" (see Section 5), and this invalidates the use of transitivity for these pairs of statements. Eq. (42) provides a quantification of the violation, and it is 50% for each of the above three uses of transitivity. (ii) The premise of the first "If ... then ..." statement is false in a certain fraction of the instances for which the premise of the second "If ... then ..." statement is true. This fraction coincides with the result in Eq. (42). When the premise of the first statement is false while the premise of the second statement is true, a false result is generated from the "If ... then ..." statement obtained by combining the two "If ... then ..." statements using transitivity. This happens in combining Statements 1′ and 2, Statements 2 and 3, and Statements 3 and 4. (iii) Combining Statements 1′ and 2 and Statements 3 and 4 involves a quantum interference effect. This interference effect is upset by the measurement associated with the premise of the first statement. In other words, the "If ... then ..." statement obtained by using transitivity does not properly take into account the effect of the measurement performed by the first agent on the measurement performed by the second agent.
There are five problems with the analysis performed by Frauchiger and Renner. (1) The transitive property of logic is not valid in combining pairwise the four statements to generate a contradiction. This is related to another issue: (2) In logic, (A ⇒ B AND B ⇒ C) ⇒ ((A AND B) ⇒ C). However, in unitary quantum mechanics for the cases involving Statements 1-4, (A AND B) ⇒ C′, where C′ is a conclusion that is different from C. This shows that this rule of logic is violated in the Frauchiger-Renner gedanken experiment. This is relevant for the generation of the contradictory statement because it is unclear whether one should use C or C′; the argument in the Frauchiger-Renner publication needs to use C to generate the contradictory statement. However, in combining pairs of statements, C′ turns out to be the correct conclusion. In short, Frauchiger and Renner used the wrong formula for combining the statements.
(3) Frauchiger and Renner run their experiment until Agents W and W̄ respectively measure '−' and '−'. It can be shown that restricting the run to this case renders Statements 2, 3 and 4 false.
j While waiting for the editorial decision on the submission of my work to Nature Communications, I asked some researchers to determine the consequence of the measurement of Agent F on the measurement of Agent W.
(4) Instead of performing runs until Agents W and W̄ respectively measure '−' and '−', one can replace Statement 1 by Statement 1′ (see Section 3). All premises must be true at once to obtain a valid Contradictory Statement. One of the premises of Statement 1′ is that Agent W measures '−'. The premise of Statement 4 is that Agent F̄ measures the barred spin to be down. Section 6 showed that when the premise that Agent W measures '−' is true, the premise of Statement 4 is false, and when the premise of Statement 4 is true, the premise that Agent W measures '−' is false. (5) If one formulates the generation of the contradictory statement as (P_1 AND P_2 AND P_3 AND P_4) ⇒ C_4, where P_i is the premise of Statement i and C_4 is the conclusion of Statement 4, then one finds that P_2 and P_4 are incompatible for use with logical conjunction. In addition, P′_1 and P_3 are incompatible. See the next-to-last paragraph of Section 6.
In the above, (1) and (2) involve the issue of combining two successive statements using transitivity in the Frauchiger-Renner gedanken experiment. Items (3), (4), and (5) point out other problems in combining the four statements to produce the contradictory statement. In short, the usual rules of logic cannot be used on Statements 1-4 in the Frauchiger-Renner gedanken experiment.
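The incompatibilities in items (3)-(5) can be checked numerically. The following minimal sketch (not taken from reference 1 or from this paper's calculations) builds the joint state of the barred and unbarred spins and evaluates the relevant joint probabilities; the state construction and the convention that '−' corresponds to (|↑⟩ − |↓⟩)/√2 are our assumptions, chosen so that Statements 2-4 hold branch by branch.

import numpy as np

up = np.array([1.0, 0.0])   # |up>
dn = np.array([0.0, 1.0])   # |down>

# Joint barred/unbarred spin state consistent with Statements 2-4:
# (1/sqrt(3)) (|up-bar>|down> + |down-bar>|down> + |down-bar>|up>)
psi = (np.kron(up, dn) + np.kron(dn, dn) + np.kron(dn, up)) / np.sqrt(3.0)

# The '-' outcome of a Wigner measurement corresponds to (|up> - |down>)/sqrt(2).
minus = (up - dn) / np.sqrt(2.0)

def joint_prob(vec_bar, vec):
    """Probability of projecting the barred and unbarred systems onto vec_bar, vec."""
    return float(np.kron(vec_bar, vec) @ psi) ** 2

# Both Wigner agents obtain '-' with probability 1/12: nonzero, even though
# chaining Statements 1-4 by transitivity would imply this outcome never occurs.
print(joint_prob(minus, minus))   # 0.08333... = 1/12

# The premise of Statement 4 (barred spin down) and Agent W obtaining '-' are
# jointly impossible, illustrating the incompatibility noted in items (4) and (5).
print(joint_prob(dn, minus))      # 0.0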
At a minimum, our work has cleared up a misconception created by reference 1 that has already reached mainstream scientific media. [28][29][30][31][32][33] We have also obtained other important results. We developed the concept of quantum logic and used it to deduce physical and mathematical consequences from knowledge of a wavefunction. We have learned that one must be careful in using many of the "standard" rules of classical logic for statements about wavefunctions and measurements. Sections 4-6 shed light on why violations of the rules of logic are expected in certain circumstances. In Section 7, we pointed out a restriction on Wigner/friend experiments. If this restriction is imposed, then the Wigner/friend measurement of the Frauchiger-Renner gedanken experiment becomes an ordinary quantum measurement, allowing the possibility of carrying out the experiment in a real laboratory setting.
The Frauchiger-Renner gedanken experiment does not demonstrate problems with quantum mechanics at macroscopic scales, but it is a useful laboratory for exploring quantum mechanics, quantum logic, and Wigner/friend measurements.
Appendix
Assumption (S) states that if an agent is certain that "x = v at time t", then that agent must deny that "x = v′ at time t" where v′ ≠ v. k So, being cautious, Renner and Frauchiger require Assumption (S) so that premise 1 and conclusion 4 produce the Contradictory Statement. In unitary quantum mechanics, the same four statements are derivable. However, the problem is not that premise 1 implies conclusion 4 is a contradiction; the problem is that the transitive property of logic does not always apply when combining statements about wavefunction measurements.
Assumption (S) is also inconsistent with the experimental procedure of the Frauchiger-Renner gedanken experiment. Agent W, for example, is supposed to perform a measurement using the basis {(Ψ_F↑ ± Ψ_F↓)/√2}. l Since this is a Schrödinger cat state involving Agent F measuring spin up and Agent F measuring spin down, it is inconsistent with the assumption that an agent be certain of a measurement.
A careful reading of reference 1 reveals that, in the "proof", one only needed Assumption (S′): If a logical contradiction arises, one can conclude that something is wrong. In any case, Assumption (S) needs to be replaced by Assumption (L). Now consider Assumption (C). It says: Suppose that Agent A has established that "I am certain that Agent A′, upon reasoning within the same theory as the one I am using, is certain that x = ξ at time t." Then Agent A can conclude that "I am certain that x = ξ at time t." The problem with this is that in unitary quantum mechanics, one can be sure of a measurement result only if the situation is as in Eq. (1). However, the situation for the Frauchiger-Renner gedanken experiment involves Eq. (2). So, the premise of Assumption (C) is always false because of the word "certain" in it. The problems with Assumptions (S) and (C) have a common origin: trying to force "classical thinking" onto a quantum situation.
Frauchiger and Renner used (C) and the agreement among agents as to which quantum theory to use (which, because of Assumption (U), must be unitary quantum mechanics) to deduce Statements 1-4. However, in unitary quantum mechanics, Statements 1-4 can be deduced from the description of the gedanken experiment. Hence, Assumption (C) is not needed. m In the rest of the Appendix, we explain why.
In unitary quantum mechanics for the extended Wigner/friend gedanken experiment, all agents know the initial spin part of the wavefunction and the experimental procedure. From this information, agents, an outside observer, or even the reader of
k In the Wigner/friend gedanken experiment, Agent A is Agent W, x is the unbarred measured spin, v = |−⟩_M, v′ = |+⟩_M and t = t_4.
l Recall that Ψ_F↑ = |↑⟩_M (respectively, Ψ_F↓ = |↓⟩_M) is a wavefunction for Agent F in which the unbarred spin is measured to be up (respectively, down).
m In the context of the Frauchiger-Renner gedanken experiment, this is a good thing, since we have just shown that Assumption (C) has problems. | 2022-08-02T01:15:55.458Z | 2022-07-29T00:00:00.000 | {
"year": 2022,
"sha1": "a043cecfd80009704e617956f9ac0e0e58530559",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a043cecfd80009704e617956f9ac0e0e58530559",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
59445092 | pes2o/s2orc | v3-fos-license | Some Afterthoughts – or Looking Back
In contrast to the clear recognition of psychoanalysis as discursive activity - as Lacan (1953) espoused it succinctly - for quite a time the mainstream activity on the relation of psychoanalysis and language was focused on Freud's theory of symbols. Language and the development of the ego was a favourite topic in the New York study group on linguistics (Edelheit, 1968). As Freud had developed his own rather idiosyncratic way of understanding symbols, some conceptual work with the different usage of the term symbol had to be done. Victor Rosen, in his paper on "Sign Phenomena and their relationship to unconscious meaning" (1969), demonstrates that the work of the psychoanalyst can be conceptualized as a process of differentiating conventional symbols from sign phenomena. Understanding meaning by common sense has to be completed by understanding the additional unconscious meaning any concrete piece of verbal material may carry. The technical rule for the analyst of evenly hovering attention is directed to just this process. Listening to his patient's associations, the analyst receives the conventional meaning of what he listens to. Suspending his reaction to this level of meaning, he then tries to understand potential meanings beyond the everyday meaning. By interpreting, the analyst usually uses a perspective that is not immediate in his patient's view.
Horst Kächele, International Psychoanalytic University (IPU)
The relationship of "psychoanalysis and language" was in the center of many theoretical and clinical discussions ever since Freud (1916/17) had declared: Nothing takes place in a psycho-analytic treatment but an interchange of words between the patient and the analyst. The patient talks, tells of his past experiences and present impressions, complains, confesses his wishes and his emotional impulses.
The doctor listens, tries to direct the patient's processes of thought, exhorts, forces his attention in certain directions, gives him explanations and observes the reaction of understanding or rejection which he in this way provokes in him (p. 17).
In contrast to the clear recognition of psychoanalysis as discursive activity - as Lacan (1953) espoused it succinctly - for quite a time the mainstream activity on the relation of psychoanalysis and language was focused on Freud's theory of symbols. Language and the development of the ego was a favourite topic in the New York study group on linguistics (Edelheit, 1968). As Freud had developed his own rather idiosyncratic way of understanding symbols, some conceptual work with the different usage of the term symbol had to be done. Victor Rosen, in his paper on "Sign Phenomena and their relationship to unconscious meaning" (1969), demonstrates that the work of the psychoanalyst can be conceptualized as a process of differentiating conventional symbols from sign phenomena. Understanding meaning by common sense has to be completed by understanding the additional unconscious meaning any concrete piece of verbal material may carry. The technical rule for the analyst of evenly hovering attention is directed to just this process. Listening to his patient's associations, the analyst receives the conventional meaning of what he listens to. Suspending his reaction to this level of meaning, he then tries to understand potential meanings beyond the everyday meaning. By interpreting, the analyst usually uses a perspective that is not immediate in his patient's view.
However, Forrester (1980) expressed, in the introduction of his book "Language and the Origins of Psychoanalysis", astonishment that there were only a few treatises on psychoanalysis which dealt directly with the role of language in the course of treatment (p. X). Detailed studies concerning "spoken language in the psychoanalytical dialogue" were just beginning to blossom in the eighties of the last century (Kächele, 1983).
Praising the Freudian dictum, many a time psychoanalysts - often unintentionally - have been followers of the philosopher Austin (1962), who, in his theory of speech acts, proceeds from the observation that things get done with words. In the patterns of verbal action, there are specific paths of action available for interventions to alter social and psychic reality. In psychoanalysis, writes Shapiro (1999), "the prolonged interaction between patient and analyst provides numerous opportunities for redundant expression of what is considered a common small set of ideas in varying vehicles and at various times, designed to get something done or to re-create an old pattern" (p. 111). However, speech, if it is to become effective as a means of action, is dependent on the existence of interpersonal obligations that can be formulated as rules of discourse. These rules of discourse depend partly on the social context of a verbal action (those in a court of law differ from those in a conversation between two friends), and conversely, a given social situation is partly determined by the particular rules of discourse. Expanding this observation psychoanalytically, one can say that the implicit and explicit rules of discourse help to determine not only the manifest social situation, but also the latent reference field, that is, transference and countertransference.
If the discourse has been disturbed by misunderstandings or breaches of the rules, metacommunication about the preceding discourse must be possible which is capable of removing the disturbance. For example, one of the participants can insist on adherence to the rule (e.g., "I meant that as a question, but you haven't given me an answer!"). In such metacommunication, the previously implicit rules which have been broken can be made explicit, and sometimes the occasion can be used to define them anew, in which case the social content and, we can add, the field of transference and countertransference can also change.
The compulsion arises from the fact that analyst and patient have entered into a dialogue and are therefore subject to rules of discourse, on which they must be in at least partial (tacit) agreement if they want to be in any position to conduct the dialogue in a meaningful way. It is in the nature of a question that the person asking it wants an answer and views every reaction as such. The patient who is not yet familiar with the analytic situation will expect the conversation with the analyst to follow the rules of everyday communication.
The exchange process between the patient's productions, loosely called "free associations", and the analyst's interventions, loosely called "interpretations", most fittingly may be classified as a special sort of dialogue. The analyst's interventions encompass the whole range of activities to provide a setting and an atmosphere that allows the patient to enter the specific kind of analytic dialogue: If any kind of meaningful dialogue is to take place, each partner must be prepared (and must assume that the other is prepared) to recognize the rules of discourse valid for the given social situation and must strive to formulate his contributions accordingly (Thomä & Kächele, 1994b, p. 248).
The special rules of the analytic discourse thus must be well understood by the analysand, lest he or she waste the time not getting what he or she wants. Therefore, she or he has to understand that the general principle of cooperation is supplemented by a specific additional type of meta-communication on the part of the analyst. As we have already pointed out, the analyst's interventions have to add a surplus meaning beyond understanding the discourse on the plain everyday level.
How does one add a surplus meaning? Telling a joke is a good case of working with a surplus meaning not manifest in the surface material. Jokes have a special linguistic structure and most often work with a combination of unexpected material elements and a special tactic of presentation. Reporting clinical examples from the literature, Spence et al. (1994) suggest that the analyst is always scanning the analytic surface in the context of the two-person space, consciously or preconsciously, weighing each utterance against the shifting field of connotations provided by (a) the course of the analysis; (b) his or her own set of associations; and (c) the history of the analysand's productions (p. 45). An experimental way to detect the generation of such add-on meanings was Meyer's (1988) effort, via post-session free associative self-reports, to find out "what makes the psychoanalyst tick".
For such questions, which are basic for the psychoanalytic enterprise, the development of conversational and discourse analytical methods was crucial, moving the study of the pragmatic use of language as speech onto empirical grounds. When Sacks et al. (1974) proposed a "simplest systematics for the organization of turn-taking behavior in conversation", it was obvious that such tools would be of high relevance to psychotherapy as an exquisitely dialogic enterprise. Although Mahony (1977) gave psychoanalytic treatment a place in the history of discourse, Labov and Fanshel (1977) probably were the first to apply such concepts to the empirical investigation of psychotherapy sessions. In Germany, the linguist Klann (1977) connected "psychoanalysis and the study of language" no longer by focusing on the traditional discussion of symbols but by focusing on the pragmatic use of language as a therapeutic tool, exemplified by the role of affective processes in the structure of dialogue (Klann, 1979).
In this arena, many things that take place in the relationship between patient and analyst at the unconscious level of feelings and affects cannot be completely referred to by name, distinguished, and consolidated in experiencing (see Bucci, 1995). Intentions that are prelinguistic and that consciousness cannot recognize can only be imprecisely verbalized. Thus, in fact, much more happens between the patient and analyst than just an exchange of words. Freud's "nothing else" must be understood as a challenge for the patient to reveal his thoughts and feelings as thoroughly as possible. The analyst is called upon to intervene in the dialogue by making interpretations using mainly linguistic means. Of course, it makes a big difference if the analyst conducts a dialogue, which always refers to a two-sided relationship, or if he makes interpretations that expose the latent meanings in a patient's quasi-monological free associations. Although it has become customary to emphasize the difference between the therapeutic interview and everyday conversation (Leavy, 1980), we feel compelled to warn against an overly naive differentiation, since everyday dialogues often are: characterized by only apparent understanding, by only apparent cooperation, by apparent symmetry in the dialogue and in the strategies pursued in the conversation, and that in reality intersubjectivity often remains an assertion that does not necessarily lead to significant changes, to dramatic conflicts, or to a consciousness of a "pseudounderstanding"… In everyday dialogues something is acted out and silently negotiated that in therapeutic dialogues is verbalized in a systematic manner (Klann, 1979, p. 128). Flader and Wodak-Leodolter (1979) collected these first German studies on processes of therapeutic communication. Some years later, these researchers discovered the rich material available at the Ulm Textbank (Flader et al. 1982). This was probably not surprising because the availability of original transcripts for linguists was at the time very limited. Amongst others, the opening phase of Amalia X's treatment, that phase of familiarizing the patient with the analytical dialogue and the transition from day-to-day discourse into the analytical discourse, was examined (Koerfer and Neumann 1982): Towards the end of the second (recorded) session, Amalia X complains about the unusual dialogic situation in the following way: 'alas, I find this is quite a different kind of talk than I am used to'. This kind of difficulty has been described by Lakoff (1981) succinctly: "The therapeutic situation itself comprises a context, distinct from the context of 'ordinary conversation', and that distinction occasions ambiguity and attendant confusion" (p. 7). In fact, we are dealing with a learning situation comparable to learning a foreign language, though less demanding: If in fact psychotherapeutic discourse were radically different in structure from ordinary conversation, we should expect something quite different: a long period of training for the patient, in which frequent gross errors were made through sheer ignorance of the communicative system, in which he had time after time to be carefully coached and corrected (Lakoff, 1981, p.
8). This perspective supports our maxim of treatment technique: as much day-to-day dialogue as necessary to correspond to the safety needs of the patient and to allow this learning process, and as much analytical dialogue as possible to further the exploration of unconscious meanings in intra- and interpersonal dimensions (Thomä & Kächele, 1994b, p. 251 ff).
In the following years, the "linguistic turn", the inclusion of pragma-linguistic tools into the study of the psychoanalytical discourse, gained considerable momentum (Russell, 1989, 1993). For example, Harvey Sacks (1992) described "conversational analysis" (CA), which put "coherence" in the center, a notion which also plays a central role in attachment research. Lepper and Mergenthaler (2005) could show in a group therapy setting, and in a psychodynamic short-term therapy (Lepper & Mergenthaler, 2007), that "topic coherence" stands in close connection with clinically important moments, insights and changes. Systematic investigations on the special conversational nature of the psychoanalytic technique have become more diversified. The linguist Streeck (1989) illustrates how powerful conversational techniques were, even in identifying prognostic factors for shared focus formulation in short-term therapy related to positive outcome where psychometric instruments failed. The role of metaphor in therapeutic dialogues has developed into a field of its own (Spence, 1987; Buchholz, 2007; Casonato and Kächele, 2007). Intersubjectively conceived treatment research enlarges the empirical frame by including dimensions of conversational practice, narrative representation and use of metaphor. Is it too far-fetched to connect the development of the relational perspective in psychoanalysis with the rise of narrative treatment research focusing in great detail on what happens between patient and analyst, as Buchholz (2006, p. 307) does?
The mechanism of psychoanalytic interpretation had been the object of an early discourse-analytic case study by Flader and Grodzicki (1982), recently followed by a larger sample studied by Peräkylä (2004). The issue of whether discourse in psychoanalysis proper is different from discourse in psychotherapy might no longer be in the center of interest. The more empirical material is studied, the less these differences show up. Patients and their analysts display a range of conversational strategies in the diverse therapeutic situations, as Streeck (2004) has illustrated.
The contributions of the Berlin study group on conversational analysis have shouldered the unfinished task of detailing what goes on in psychotherapeutic sessions on a level that will certainly enrich our understanding. | 2018-12-29T12:30:04.128Z | 2017-01-15T00:00:00.000 | {
"year": 2017,
"sha1": "13ff564eee985ed45418b280cd33dd0269552fb7",
"oa_license": "CCBY",
"oa_url": "http://www.language-and-psychoanalysis.com/article/download/1777/pdf_25",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "13ff564eee985ed45418b280cd33dd0269552fb7",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
10879789 | pes2o/s2orc | v3-fos-license | Phylogenetic and recombination analysis of Tobacco bushy top virus in China
Background During the past decade, tobacco bushy top disease, which is mainly caused by a combination of Tobacco bushy top virus (TBTV) and Tobacco vein-distorting virus (TVDV), exhibited a sudden appearance, extreme virulence and subsequent degeneration of its epidemic in the Yunnan province of China. In addition to integrative control of its aphid vector, it is of interest to examine diversity and evolution among different TBTV isolates. Methods 5' and 3' RACE, combined with one-step full-length RT-PCR, were used to clone the full-length genomes of three new isolates of TBTV that exhibited mild pathogenicity in Chinese fields. Nucleotide and amino acid sequences of the TBTV isolates were analyzed by DNAMAN. MEGA 5.0 was used to construct phylogenetic trees. RDP4 was used to detect recombination events during the evolution of these isolates. Results The genomes of the three isolates, termed TBTV-JC, TBTV-MD-I and TBTV-MD-II, were 4152 nt in length and included one distinctive difference from previously reported TBTV isolates: the first nucleotide of the genome was a guanylate instead of an adenylate. Diversity and phylogenetic analyses among these three new TBTV isolates and five other available isolates suggest that the ORFs and 3'UTRs of TBTV may have evolved separately. Moreover, the RdRp-coding region was the most variable. Recombination analysis detected a total of 29 recombination events in the 8 TBTV isolates, of which 24 events are highly likely and 5 events have a low-level likelihood based on their correlation with the phylogenetic trees. The three new TBTV isolates have individual recombination patterns with subtle divergences in parents and locations. Conclusions The genome sizes of the TBTV isolates were constant, while the different ORF-coding regions and 3'UTRs may have evolved separately. The RdRp-coding region was the most variable. Frequent recombination occurred among TBTV isolates. The three new TBTV isolates have individual recombination patterns and may have different progenitors. Electronic supplementary material The online version of this article (doi:10.1186/s12985-015-0340-2) contains supplementary material, which is available to authorized users.
Introduction
Tobacco bushy top virus (TBTV) is a member of the Umbravirus genus, and it requires the presence of Tobacco vein-distorting virus (TVDV) for infectivity in the field [1]. TBTV is encapsidated by the coat protein encoded by TVDV, which is needed for transmission via aphids [1,2]. However, mechanical inoculation of sap from diseased tobacco onto healthy plants leads to the loss of TVDV, implying the independent pathogenicity of TBTV [1]. Together, these viruses caused severe stunting and destructive bushy-top disease in tobacco in sub-Saharan Africa in the 1960s [3] and in Asia, including China, in the 1990s [1]. From 1993 to 2001, there were 51,300 hm² of tobacco bushy top-diseased fields, including 8,700 hm² of total field failure, in the Yunnan Province of China [4]. In addition to TBTV and TVDV, two components, tobacco bushy top disease-associated RNA (TBTDaRNA) and the satellite RNA of TBTV, were also identified in tobacco with bushy-top disease [5,6], although all four components were not always present together [7].
During the past decade, tobacco bushy-top disease was infrequent, with only sporadic cases exhibiting mild symptoms in Yunnan province, which may be due to interruption of the natural epidemic cycle through the integrative control of its aphid vectors [7,8]. The sudden appearance, extreme virulence and degeneration of the epidemic of tobacco bushy-top disease formed a pattern similar to that of other destructive diseases [9][10][11][12], whose lethal pathogens underwent quick attenuation of pathogenicity. Therefore, it is of interest to determine whether the new TBTV isolates produce mild pathogenicity and how they evolved. For single-stranded RNA viruses, recombination is a major evolutionary event allowing isolates to adapt to new environmental conditions and hosts [13], and frequent recombination events have been detected for various RNA viruses such as Soybean mosaic virus and potyvirus isolates [14][15][16].
The TBTV genome contains a positive-sense single-stranded RNA of 4152 nt, which encodes four ORFs and contains a short 5' UTR of 10 nt and a 3' UTR of 645 nt [17]. Based on comparisons with other umbraviruses, p35 and its frameshift product p98 are responsible for genome replication [18][19][20]. p98 contains the ubiquitous RdRp GDD motif of positive-strand RNA viruses and is presumably the RNA-dependent RNA polymerase (RdRp) [18,21]. Based on studies conducted with Groundnut rosette umbravirus, p26 is a long-distance movement-associated protein and is also responsible for stabilization of viral RNAs and nuclear shuttling [22,23], and p27 is likely a cell-to-cell movement protein [22].
In this study, tobacco plants with suspected mild tobacco bushy-top disease were collected at three locations in Yunnan province. The full-length genomes of the three new TBTV isolates from Jiangchuan (termed JC) and Midu (termed MD) were cloned and sequenced, revealing a distinctive difference from previously reported TBTV isolates: the first nucleotide of TBTV-JC, TBTV-MD-I and TBTV-MD-II is a guanylate, compared with the adenylate reported for the other TBTV sequences. In addition, we compared these three new TBTV isolates with the five available TBTV sequences to study molecular diversity and recombination events among the isolates.
Results
Detection of TBTV, TVDV and TBTD-associated RNA in different sources of tobacco with tobacco bushy top disease
Total RNA was extracted from leaves of tobacco with suspected tobacco bushy top disease collected from three locations (JiangChuan county, MiDu county and BaoShan city) in China's Yunnan Province. RT-PCR and subsequent sequencing revealed that TBTV was present only in the samples from JiangChuan and MiDu. None of the samples contained the newly reported TBTDaRNA (Additional file 1: Table S1).
5'-RACE and 3'-RACE of TBTV from JiangChuan and MiDu
To determine the full-length sequences of TBTV from JiangChuan (termed JC) and MiDu (termed MD), 5'-RACE and 3'-RACE were first performed to determine the 5' and 3' terminal sequences. The size of the 5'-RACE PCR product with either poly(C) or poly(G) at the 5' end was approximately 500 bp (data not shown). Comparison of the sequencing results revealed that the first nucleotide of the JC isolate is a guanylate, with the sequence beginning with 5'-GGGUUACGAUAUGGAGUUCAUCAAC-3' (Fig. 1a). The MD isolate also has the same sequence at the 5' end of its genome. The first nucleotide of all previously reported TBTV isolates is an adenylate. Most Umbraviruses, as well as Necroviruses and Carmoviruses in the family Tombusviridae, also have 5' terminal guanylates.
The size of the 3'-RACE PCR product of the JC and MD isolates was approximately 950 bp (data not shown). The 3' terminal sequence of the JC and MD isolates is 5'-GGGAGAUGAGCACUCUCUCUCGCGCCC-OH-3' (Fig. 1b). The underlined cytidylate differs from previous TBTV sequences, which contain a uridylate at this position. This substitution of U to C seems to deform the loop of the 3' proximal stem-loop in TBTV (data not shown).
Comparison of the sequences of the three new TBTV isolates and five previously published sequences
The full-length genomes of TBTV-JC (KM016224), TBTV-MD-I (KM016225) and TBTV-MD-II (KM067277) are 4152 nt in length, as previously reported for the other TBTV isolates. Comparison of the in vitro expression levels of the ORF1 protein (p35) and the ORF1-frameshift protein (p98) for these isolates using wheat germ extracts (WGE) showed that the translated levels of p35 and p98 for TBTV-JC, TBTV-MD-I and TBTV-MD-II (all mild isolates) were approximately 40% of the level for TBTV-Ch, which was cloned from a sample showing typical tobacco bushy top disease (Wang and Yuan, unpublished data). This suggests that attenuated expression of the replicase components (p35 and p98) may be partly responsible for the mild pathogenicity of the new TBTV isolates in the field. Nucleotide sequence identities among TBTV-JC, TBTV-MD-I and TBTV-MD-II were 94.8% to 97.3% (Table 1). The highest nucleotide sequence identity among the 8 TBTV isolates was 98.9%, between TBTV-Ch and TBTV-YWSh, while the lowest was 89.0%, between TBTV-MD-II and TBTV-YWDu. The nucleotide sequence of TBTV-YWDu was the most different from the other 7 isolates, with identities ranging from 89.0% to 90.9%. Correspondingly, the nucleotide sequence identities among the other 7 isolates were 94.6% to 98.9% (Table 1).
TBTV contains a 5' UTR of 10 nt and a 3' UTR of 645 nt. The 5'UTRs of the three new isolates diverge from the previously reported isolates only in the 5' ultimate nucleotide. In the 3'UTR, identical residues ranged from 94.6% to 99.8% (Table 1), which was higher than the values found for the full-length genome. In particular, TBTV-YWDu, whose full-length genome differed the most from the other isolates (sharing 89.0% to 90.9% identity), had 3'UTR sequences sharing 94.6% to 99.8% identity with the other isolates (Table 1).
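As an aside (not part of the original analysis pipeline beyond what DNAMAN performs internally), pairwise identities such as those in Table 1 can be computed from an existing multiple alignment in a few lines of Python. The sequences below are short hypothetical placeholders built from the published 5' end, not the real 4152-nt alignment rows.

from itertools import combinations

def percent_identity(seq_a, seq_b):
    # Fraction of alignment columns with identical residues; gaps count as mismatches.
    assert len(seq_a) == len(seq_b), "sequences must come from the same alignment"
    matches = sum(a == b and a != '-' for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Placeholder alignment rows (hypothetical); the real rows are the aligned genomes.
aligned = {
    "TBTV-JC":    "GGGTTACGATATGGAGTTCATCAAC",
    "TBTV-MD-I":  "GGGTTACGATATGGAGTTCATCAAC",
    "TBTV-MD-II": "GGGTTACGATATGGAGTTCGTCAAC",
}

for name_a, name_b in combinations(aligned, 2):
    identity = percent_identity(aligned[name_a], aligned[name_b])
    print(f"{name_a} vs {name_b}: {identity:.1f}%")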
Phylogenetic relationship among all TBTV isolates
Phylogenetic trees were constructed based on the full-length genome, the 3' UTR or the ORF-coding regions of TBTV. The distances between groups in the phylogenetic trees of the full-length genome, ORF1 or p98-C were greater than those of the other regions (Fig. 2). This suggests that the RdRp-coding region was the most variable and mainly determined the divergence of the TBTV isolates. In the tree of the full-length genome, the 8 isolates were divided into three groups, with the first group containing only TBTV-YWDu (Fig. 2a). The second group contained TBTV-MD-II, while the third group contained three sub-groups, one of which included the single isolate TBTV-YYXi; the other two sub-groups included 2 and 3 isolates, respectively (Fig. 2a).
The phylogenetic trees based on the ORFs or the 3' UTR have distinctive patterns. The tree based on ORF1 has the same grouping as that of the full-length genome (Fig. 2a and c). The other four trees have patterns different from that of the full-length genome. This further confirms that the divergence of the ORF1-coding region is primarily correlated with the divergence of the full-length genome in TBTV.
In addition to the patterns of the trees, some identical clusters appear in different phylogenetic trees, i.e., the cluster containing TBTV-MD-I, TBTV-YLLi and TBTV-JC in the trees of the full-length genome and the ORF1, p98-C, ORF3 and ORF4-coding regions (Fig. 2a, c, d, e and f); the cluster containing TBTV-Ch and TBTV-YWSh in the trees of the full-length genome and the ORF1 and p98-C-coding regions (Fig. 2a, c and d); and the cluster containing TBTV-Ch and TBTV-YWDu in the trees of the 3'UTR and the ORF3 and ORF4-coding regions (Fig. 2b, e and f). This suggests that TBTV-MD-I, TBTV-YLLi and TBTV-JC have the greatest similarity except in the 3'UTR, while TBTV-Ch and TBTV-YWSh have the greatest similarity in the RdRp-coding region, and TBTV-Ch and TBTV-YWDu have the nearest relationship in the 3' half of the genome, including the ORF3 and ORF4-coding regions and the 3'UTR. All the data from the phylogenetic trees and the molecular diversity assay suggested that the different ORF-coding regions and the 3' UTR evolved separately.
Recombination analysis of the TBTV isolates
To find potential recombination signals in the TBTV isolates, recombination analysis was performed using the RDP4 program. Using six algorithms, 29 recombination events were detected in the 8 isolates (Fig. 3b and Table 3). TBTV-YYXi had 6 potential recombination signals, while TBTV-JC had only 1 potential recombination signal (Table 3).
Among all 29 potential recombination events, three (events 27, 28 and 29), detected in TBTV-YWDu, had a remarkably high degree of certainty, with P-values of at least three algorithms < 1 × 10⁻⁶ (Table 3). Nine other recombination events also have a high degree of certainty, with recombination scores > 0.6 (Table 3). In addition, the remaining 17 recombination events have a fair likelihood, since their recombination scores are between 0.4 and 0.6 (Table 3).
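The tiering just described amounts to a simple decision rule. The sketch below merely restates the thresholds from this section and the Methods; the Event container, its field names and the example values are hypothetical.

from dataclasses import dataclass

@dataclass
class Event:
    name: str
    p_values: list          # one P-value per RDP4 method supporting the event
    score: float            # RDP4 recombination score

def certainty(event):
    # "Highly likely": >= 3 methods with P < 1e-6, or recombination score > 0.6.
    if sum(p < 1e-6 for p in event.p_values) >= 3 or event.score > 0.6:
        return "highly likely"
    # "Fair likelihood": recombination score between 0.4 and 0.6.
    if 0.4 <= event.score <= 0.6:
        return "fair likelihood"
    return "low-level likelihood"   # our extrapolation for scores below 0.4

print(certainty(Event("event 27", [1e-9, 3e-8, 2e-7, 1e-4], 0.55)))  # highly likely
print(certainty(Event("event 4",  [1e-4, 5e-4],             0.65)))  # highly likely
print(certainty(Event("event 1",  [1e-3],                   0.45)))  # fair likelihood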
For the 8 TBTV isolates, there are three types based on the location of the recombination events. The first type contains TBTV-YWDu, which had recombinations only in the 3' half of the genome; the second type contains TBTV-YLLi and TBTV-JC, which had recombinations in the 5' half of the genome; and the third type contains the 5 isolates TBTV-MD-I, TBTV-MD-II, TBTV-Ch, TBTV-YWSh and TBTV-YYXi, which had recombinations throughout the genome (Fig. 3b).
In addition, there are recombination events of the same type, with the same locations and parents, in different TBTV isolates, including the recombination located at 616-965 with parents TBTV-YYXi/TBTV-MD-II in TBTV-JC (event 1), TBTV-MD-I (event 3) and TBTV-YLLi (event 23), and the four recombination events located at 253-385 with parents TBTV-MD-II/TBTV-YLLi and at 652-3207 with parents TBTV-YYXi/TBTV-JC in TBTV-Ch (events 9 and 10) and TBTV-YWSh (events 13 and 15) (Fig. 3b and Table 3).
Table 1. Nucleotide sequence identities (%) for TBTV-JC, TBTV-MD-I, TBTV-MD-II and previously reported TBTV isolates based on the full-length genome and 3'UTR sequence.
Discussion
In this study, three new TBTV isolates were characterized with respect to the outbreak of tobacco bushy top disease as well as phylogenetic relationship and possible recombination.
Among all 8 TBTV isolates, TBTV-YWDu had the most remarkable divergence from the other 7 isolates based on the identities of the full-length genome and the ORF1 and p98-C-coding sequences. TBTV-MD-II also showed divergence from the other 7 TBTV isolates based on the identities of the 3' UTR (Table 2). The remarkable divergence of TBTV-YWDu (Fig. 2a, c and d) and TBTV-MD-II (Fig. 2b) from the other TBTV isolates was also indicated by the phylogenetic trees. The two phylogenetic trees based on the ORF3 and ORF4-coding sequences showed similar branches, which suggests that ORF3 of the TBTV isolates underwent evolution similar to that of ORF4. Based on all the data from the molecular diversity and phylogenetic tree analyses, it is suggested that the different ORF-coding regions and the 3' UTR of TBTV underwent separate evolution, and that the diversity of the ORF1-coding region mainly determined the diversity of the full-length genome.
During the evolution of single-stranded RNA viruses, recombination is a major evolutionary path for an isolate to adapt to new environmental conditions and hosts [13][14][15][16]. In this study, recombination events were also analyzed among all 8 TBTV isolates. A total of 29 potential recombination events were detected, of which three (events 27, 28 and 29) in TBTV-YWDu showed high reliability, with P-values of at least three methods < 1 × 10⁻⁶ (Fig. 3b and Table 3). These three recombination events in TBTV-YWDu were located within the 3' half of the genome and have the same parents, TBTV-Ch/possibly TBTV-MD-II, which is supported by the phylogenetic analysis. In the phylogenetic trees based on ORF3, ORF4 and the 3'UTR in the 3' half of the TBTV genome, TBTV-YWDu and TBTV-Ch (the minor parent) formed a cluster, while TBTV-MD-II (the possible major parent) belonged to other, different groups (Fig. 2b, e, and f). However, TBTV-YWDu formed a separate group in the phylogenetic trees based on ORF1 and p98-C in the 5' half of the genome (Fig. 2a and c), which suggests why there are no recombination events in the 5' half of TBTV-YWDu. The other 9 recombination events (events 4, 10, 15, 19, 20, 22, 24, 25 and 26) seem to have a high degree of certainty, with recombination scores > 0.6 (Table 3). Three recombination events (events 4, 10 and 15) with similar breakpoints (629-3207 in TBTV-MD-I, 652-3207 in TBTV-Ch and TBTV-YWSh) were supported by the phylogenetic tree (Table 3 and Fig. 4a), in which TBTV-Ch and TBTV-YWSh formed a branch with their minor parent TBTV-YYXi of events 10/15, and TBTV-MD-I formed a branch with its minor parent TBTV-JC of event 4. Recombination event 22 was also supported by the phylogenetic tree (Table 3 and Fig. 4b), in which TBTV-YLLi formed a branch with its minor parent TBTV-JC. Meanwhile, TBTV-JC was also the minor parent in recombination event 2 in TBTV-MD-I (recombination score 0.469), which was also supported by the phylogenetic tree based on the fragment 200-1550 (Fig. 4b). Three recombination events (events 24, 25 and 26) in TBTV-YLLi also showed some correlation with the phylogenetic tree based on the fragment 1500-2480 (Table 3 and Fig. 4h), in which TBTV-YLLi formed a branch with its minor parent of event 24 and has a near relationship with the minor parent TBTV-YYXi (event 25) or TBTV-JC (event 26). For the two recombination events (events 19 and 20) in TBTV-YYXi, there is no correlation between the recombination assay and the phylogenetic trees based on the ORF3 or ORF4-coding regions (Fig. 2e and f), in which TBTV-YYXi formed a separate group. It is suggested that events 19 and 20 are possible but with uncertain minor parents (Table 3). A further event was supported by the phylogenetic tree based on the 3'UTR (Fig. 2b), in which TBTV-Ch and TBTV-YYXi, along with TBTV-YWSh, formed a branch. In addition, a clue to event 17 in TBTV-YYXi can be found in the phylogenetic tree based on the p98-C-coding sequences (Fig. 2d). Only three events (events 5, 6 and 18) were not supported by the corresponding phylogenetic trees (Fig. 4g and e). It is suggested that events 5, 6 and 18 are possible but with uncertain minor parents (Table 3). All the above data imply an inherent relationship between the phylogenetic analysis and the recombination assay. Based on the analysis of P-values, recombination scores and the correlation with the phylogenetic trees, 24 of the 29 possible recombination events are very likely true, while the other 5 events (events 5, 6, 18, 19 and 20) have only a low-level likelihood.
The three new isolates of TBTV seemed to undergo distinctive evolution. First, TBTV-JC and TBTV-MD-I had a near relationship compared with TBTV-MD-II based on the sequence diversity, the phylogenetic analysis and the recombination assay. However, TBTV-MD-I has characteristics distinct from TBTV-JC in the recombination assay (Fig. 3b). TBTV-MD-II may have a different ancestor, since it always formed a branch different from the branch including TBTV-JC and TBTV-MD-I in all phylogenetic trees (Fig. 2). Although they seemed to undergo different evolution, all three showed mild pathogenicity, which may be due to the following reasons. First, the activity of the RdRp may have been altered, since the RdRp-coding region is the most variable for TBTV. Second, the expression level of the replicase components was lower than that of the severe pathotype, which is partly confirmed by preliminary data on the in vitro translation of p35 and p98 (Wang and Yuan, unpublished data). In addition, mutagenesis of TVDV may also partially cause the mild symptoms of tobacco bushy top disease, since TVDV could not support TBTV in some samples of tobacco bushy top disease, such as the sample from Baoshan city (Additional file 1: Table S1), and in other related data [7]. Further attention and effort are necessary to figure out the detailed mechanism of the lower expression of the replicase components encoded by TBTV and the absence of interaction between TBTV and TVDV in nature.
Conclusion
Three new TBTV isolates from tobacco bushy top samples with mild symptoms were cloned. Their first nucleotide is a guanylate instead of the adenylate reported in the other TBTV isolates. Identity and phylogenetic analyses indicated that the different ORFs and the 3' UTR in TBTV evolved separately. The RdRp-coding region was the most variable among TBTV isolates, and the divergence of ORF1 is mainly correlated with the divergence of the full-length genome. Frequent recombinations were detected among TBTV isolates. The three new TBTV isolates have different recombination patterns and may have different ancestors.
Methods
RT-PCR detection of TBTV, TVDV and TBTD-associated RNA
Total RNA was extracted from tobacco leaves using Trizol reagent (TransGen) and reverse-transcribed using M-MLV reverse transcriptase and oligonucleotides corresponding to TBTV, TVDV or TBTD-associated RNA (Additional file 1: Table S2). PCR amplification was performed using Taq DNA polymerase and pairs of oligonucleotides (Additional file 1: Table S2). For the detection of TBTV, two pairs of oligonucleotides, TB-2263-F/TB-3263-R and TB-667-F/TB-1630-R, were designed for the RT-PCR assay. Accordingly, two pairs of oligonucleotides (TV-2728-F/TV-3458-R and TV-3454-F/TV-4166-R) for TVDV and one pair of oligonucleotides (TBTD-1507-F/TBTD-2016-R) for TBTD-associated RNA were designed for RT-PCR detection, respectively (Additional file 1: Table S2). The PCR products were cloned into the pMD18-T vector (TaKaRa) and sequenced using M13 primers.
5'-RACE, 3'-RACE and full-length RT-PCR of TBTV
For 5'-RACE, the total RNA was reverse-transcribed by M-MLV reverse transcriptase using the oligonucleotide TB-943-R and treated with a mixture of RNase H and RNase A. After purification using a cDNA purification kit (TransGen), the cDNAs were extended using dCTP or dGTP and terminal deoxynucleotidyl transferase (TaKaRa) and then subjected to PCR amplification using Oligo(dG)-anchor primer/TB-943-R or Oligo(dC)-anchor primer/TB-943-R. The first-round PCR products were amplified using Anchor primer/TB-510-R. The final PCR products were cloned into the vector pMD18-T and sequenced using M13 primers. At least three RT-PCR clones were sequenced to ensure the reliability of the 5'-RACE result.
For 3'-RACE, total RNA and an Oligo(dA)-linker were ligated with T4 RNA ligase (NEB). The oligo(dA)-linked RNAs were reverse-transcribed using an Oligo(dT)-anti-linker primer and then subjected to PCR amplification using Oligo(dT)-anti-linker/TB-3206-F. The final products were cloned into pMD18-T and sequenced using M13 primers. At least three RT-PCR clones were sequenced to ensure the reliability of the 3'-RACE result.
For full-length RT-PCR of TBTV, total RNAs were reverse-transcribed by PrimeScript reverse transcriptase (TaKaRa) using TBTV-3'-R. The cDNAs were then subjected to PCR amplification using LA Taq (TaKaRa) and the oligonucleotides TBTV-5'-F and TBTV-3'-R. PCR products corresponding to full-length TBTV genomic RNA were cloned into pMD18-T and sequenced using M13 primers and TBTV-specific primers (detailed information not shown).
All primers used for 5'-RACE, 3'-RACE and full-length RT-PCR of TBTV are shown in detail in Additional file 1: Table S2.
Sequence assembly and alignment, construction of phylogenetic trees and recombination analysis
Sequence assembly was accomplished with DNAMAN, which was also used to analyze the identities of nucleotide sequences or amino acids among the TBTV isolates.
Phylogenetic trees were constructed using the MEGA 5.0 software package [30] based on the neighbor-joining method with the Kimura 2-parameter model. Bootstrap resampling with 1000 replicates was used to ensure the reliability of individual nodes in the phylogenetic trees.
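For illustration only (MEGA computes this internally), the Kimura 2-parameter distance between two aligned sequences depends only on the observed transition proportion P and transversion proportion Q via d = -(1/2)ln(1 - 2P - Q) - (1/4)ln(1 - 2Q). A minimal, self-contained implementation:

from math import log

PURINES = {"A", "G"}

def k2p_distance(seq_a, seq_b):
    # Compare only unambiguous, aligned nucleotide pairs.
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(a != b and (a in PURINES) == (b in PURINES) for a, b in pairs)
    transversions = sum((a in PURINES) != (b in PURINES) for a, b in pairs)
    p, q = transitions / n, transversions / n
    # Kimura (1980): d = -(1/2)ln(1 - 2P - Q) - (1/4)ln(1 - 2Q)
    return -0.5 * log(1 - 2 * p - q) - 0.25 * log(1 - 2 * q)

# Toy example with two short hypothetical sequences:
print(k2p_distance("GGGTTACGATATGGAGTT", "GGGCTACGATATAGAGTT"))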
Recombination analysis was performed using RDP4 [31]. Six methods implemented in RDP4, namely RDP, GENECONV, Bootscan, Maxchi, Chimaera and SiScan, were used to detect recombination events, likely parental isolates and recombination breakpoints under default settings. If a recombination event was supported by at least three methods with P-values < 10⁻⁶, or its recombination score was above 0.6, the event was considered highly likely. If the recombination score was between 0.4 and 0.6, the event was considered to have a fair likelihood. | 2016-05-12T22:15:10.714Z | 2015-07-25T00:00:00.000 | {
"year": 2015,
"sha1": "8da1231a64d73bc59f2af993bf1a2466265ffa4a",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-015-0340-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8da1231a64d73bc59f2af993bf1a2466265ffa4a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
209489312 | pes2o/s2orc | v3-fos-license | Sexual Dimorphism and Breed Characterization of Creole Hens through Biometric Canonical Discriminant Analysis across Ecuadorian Agroecological Areas
Simple Summary
The first step towards the protection and valorization of the genetic resources of a country is their definition. Although Ecuadorian zootechnical species are very diverse, they are scarcely characterized, and hence the efforts towards their protection are not as fruitful as they could be. The present paper approaches the biometric characterization of the Creole hen population in Ecuador through the study of sexual dimorphism and the differentiation of an agroecologically-based structured population using fourteen zoometric measures as differentiation criteria. The Highlands region provinces of Cotopaxi and Tungurahua were the most zoometrically distant from the rest. However, the Morona Santiago province population in the Amazonian region differs only slightly from those in Guayas, Chimborazo and Bolívar in the Coastal and Highlands regions, respectively. The dual-purpose nature of Ecuadorian Creole hen resources enables the implementation of breeding programs that seek to meet a wider scope of public demands, through the definition of the agroecologically-based breed-differentiated production of local hen eggs and meat.
Abstract
Despite Ecuador having a wide biodiversity of zootechnical species, characterization studies of these genetic resources are scarce. The objective of this research was to perform the biometric characterization of the Creole hen population through 14 quantitative zoometric measures. We evaluated 207 hens and 37 roosters from Ecuador's three agro-ecological regions: the Sierra (highlands) region (Bolívar, Chimborazo, Tungurahua and Cotopaxi provinces); the Costa (coastal) region (Guayas); and the Oriente Amazonian region (Morona Santiago). Sexual dimorphism was assessed using one-way analysis of variance (ANOVA). Body dimensions were generally significantly higher for males (p < 0.05), especially for the lengths of the head, beak, neck, dorsum, tarsus, thigh, leg, and middle finger. Then, individuals were biometrically clustered into populations after a stepwise canonical discriminant analysis (CDA) computing interpopulation Mahalanobis distances. Agroecologically-based structured populations were identified when zoometrical criteria were used to classify the animals. The Cotopaxi and Tungurahua provinces were reported to be the most distant from the rest, with a slight differentiation of the Morona Santiago province population from those in Guayas, Chimborazo and Bolívar. Conclusively, Ecuadorian Creole hens were higher than they were long, in contrast to light hen breeds, which favors their dual-purpose aptitude. Hence, the development of selection programs aimed at an Ecuadorian differentiated entity of production of eggs and meat across agro-ecological areas is feasible.
Introduction
Genetic diversity of the domestic hens existing across Ecuador is promoted not only by climatic stratification but also by natural and human-driven selection. Starting from the red junglefowl (Gallus gallus), the most likely ancestor of these avian populations [1], the effects of natural selection may have resulted in a high heterogeneity and variability of the morphological characteristics of fowl, with a high potential to adapt to the different environmental conditions [2][3][4][5][6].
Apart from naturally driven selection processes and natural migratory movements, genetic variability in local chicken populations may have been shaped by human action. For instance, human-made migration processes [7] brought about the widespread distribution of poultry genetic material, given that the size of the animals was convenient and facilitated transport, favoring the expansion of these fowl across the different agroecological levels [3].
These factors led to genetic divergences contributing to poultry production under a family-run backyard system usually managed by each household's women [3]. Husbandry practices are characterized by the use of rustic animals in free-range conditions with a low capital investment, which enables assuming a relatively low economic risk while implementing an efficient productive management to produce high-biological-value protein sources such as meat and eggs [8,9]. Additionally, these products are preferred among consumers because of their pigmentation, taste, and the lean quality of the meat [10,11], which translates into acceptable income that returns to each family, closing the cycle [12][13][14][15][16][17].
Breeds originating in the Old World were introduced to Latin American territories by the Spanish colonists and adapted to the different agroecological areas and conditions that they found, forming what has traditionally been addressed as Creole hen populations. For decades, these Creole populations occupied local productive niches and evolved towards their current state, but they still lacked the necessary characterization actions that may help consolidate and protect them. In parallel, breed development and formation up to the XVII century provided the basic elements for the directed selection of our days and for the pursuit of concrete characteristics of interest to the farmer or producer. In this context, a new conglomerate of breeds and commercial lines formed in the first world was introduced into developing countries in an attempt to fulfil the growing market demands at a lower cost [18].
This global situation resulted in the alarming loss of biodiversity of animals of zootechnical interest that the region faces nowadays. According to the Food and Agriculture Organization (FAO), the endangerment risk to which 81 percent of Latin American and Caribbean avian breeds are exposed is unknown [19], as even censuses are not appropriately registered. The increased risk for a population whose endangerment status is unknown lies in the fact that measures towards its protection are not implemented. In this regard, efforts are being made to maintain, conserve and, in turn, benefit from their most profitable or useful traits, such as disease or stress resistance, in commercial breeding plans [18].
Not only do local hen breeds face a serious risk of extinction, but there is also a simultaneous loss of the traits that allowed them to survive the evolutionary process they followed when they arrived in and adapted to the lands to which they were introduced. Creole hens present a good ability to scavenge and forage, have good maternal qualities, and are hardier than exotic breeds, with higher survival rates and minimal care and attention requirements. This rusticity is one of the traits that positively influence avian zootechnical production, given its implication in the adaptation ability of animals to the environment in which they are produced.
After a period characterized by a lack of actions regarding local genetic resources conservation, with policies more likely focusing on intensive production, morphological characterization studies in fowl started being run again in Ecuador. These studies lay the basis for local resources conservation and breeding plans. Zoometric traits have widely been reported to depend on an inherited basis and to be suitable means of prediction for the live weight of the individuals [20][21][22]. Thus, they may play an important role in the subsequent performance of animal carcasses [23]; a relationship that translates to new potential selection criteria, seeking the maximization of the profitability of the products derived from such local genetic resources.
Despite the fact that research projects seeking the zoometrical characterization of Ecuadorian local hen breeds have started being implemented using univariate analysis, there is still a patent lack of knowledge regarding the differentiation of such local populations, and hence policies towards the protection of these genetic resources cannot be implemented properly. Therefore, the aim of this study was to perform a differentiated zoometric characterization of Creole hens through the application of a canonical discriminant analysis (CDA) to provide insights into the possible clustering patterns described by the population and into which subpopulations can be distinguished using Ecuadorian provinces as the criteria of origin [24]. Ultimately, this approach will enable quantification of the large existing phenotypic variability in the Ecuadorian Creole hen population as a strategy to facilitate the rational development and use of such productively important avian local resources, and to direct the implementation of conservation strategies aimed at ensuring their survival in the competitive world of poultry production and their future consolidation as breeds.
Sample Size and Distribution
The whole sample comprised 244 fowl, 207 hens (84.84%) and 37 roosters (15.16%), evaluated across the three regions and six provinces, as follows: the Sierra Region (Andean highlands), with the provinces of Bolivar (31), Chimborazo (70), Tungurahua (35) and Cotopaxi (32); the Amazonian Oriente Region (eastern rainforests), with Morona Santiago (38); and the Costa Region (Pacific coastal lowlands), with Guayas (28). Stevens [25] provided a very thorough discussion of the sample sizes that should be used to obtain reliable results for canonical discriminant analysis. Strong canonical correlations in the data (R > 0.7) will be detected most of the time, even in cases of relatively small samples (around n = 50). However, to obtain reliable estimates of the canonical factor loadings for interpretation, and hence to be able to draw valid conclusions, Stevens recommends that there should be at least 20 times as many cases as variables included in the analysis if one wants to interpret only the most significant canonical root, as is the case in our study. To arrive at reliable estimates for two canonical roots, Barcikowski and Stevens [26] recommend, based on a Monte Carlo study, including 40 to 60 times as many cases as variables.
Study Site Characterization and Sample Animals Management
The study was conducted under field conditions from January to December 2015 and from January 2017 to August 2018. The animals comprising the sample were raised and kept by backyard producers whose flocks did not present evidence of crosses with commercial lines, following a randomized design. Maps of the provinces and climatic floors of Ecuador are shown in Figures 1 and 2. Agroecological zones are detailed in Table 1. Animals were reared under extensive backyard conditions, and were not vaccinated against viruses nor treated against parasites such as coccidia. Chickens were fed on organic corn and were occasionally supplemented with household wastes, vegetables, and other sources of minerals from each area. Antibiotics and multivitamins were not administered.
Biometric Data Collection
Biometrical analysis was performed on each animal, measuring the sixteen quantitative variables proposed by FAO [28]. A summary of the biometric variables measured and the procedure followed is shown in Table 2. Quantitative data were obtained using a digital scale, a gauge with 0.02 mm accuracy, and a tape measure. All the biometric information was collected in a structured file including georeferencing for each producer together with the zoometric measurements. The mean and standard deviation of each measurement were computed for each sex and province. One-way analysis of variance (ANOVA) was carried out using the MEANS statement from the PROC GLM routine of the S.A.S. 9.4 software [31] to determine the existence of differences in the means of the fourteen variables measured between males and females and across provinces. Then, the WALLER option was used to perform the post hoc Waller-Duncan k-ratio t test on all main effects to measure specific differences between pairs of means (p < 0.05). Waller and Duncan [32] and Duncan [33] take an approach to multiple comparisons that differs from all the methods previously discussed in that it minimizes the Bayes risk under additive loss rather than controlling type I error rates. Furthermore, this range test uses the harmonic mean of the sample size, which makes it preferable when sample sizes are unequal [34].
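As a hedged, minimal illustration of this step outside SAS, the omnibus sex comparison per trait can be reproduced with a one-way ANOVA in Python; the Waller-Duncan k-ratio t test itself is SAS-specific and has no direct scipy equivalent, and the data and column names below are synthetic stand-ins rather than the study's measurements.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

# synthetic stand-in data: a sex label plus a couple of zoometric traits
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex": rng.choice(["male", "female"], size=244),
    "head_length": rng.normal(5.0, 0.6, size=244),
    "tarsus_length": rng.normal(9.0, 1.0, size=244),
})

for trait in ["head_length", "tarsus_length"]:
    groups = [g[trait].values for _, g in df.groupby("sex")]
    f_stat, p_val = f_oneway(*groups)      # one-way ANOVA across sexes
    print(f"{trait}: F = {f_stat:.2f}, p = {p_val:.4f}")
```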
Canonical Discriminant Analysis (CDA)
The multivariate technique involved the use of canonical discriminant analysis on the 14 biometric measurements, using the province to which each animal belonged as the labeling classification criterion, to identify the variation provided by the different variables under study and to establish clusters that may identify and outline subpopulations [35][36][37][38]. Hence, we determined the percentage of individuals correctly allocated to their populations of origin, in comparison to those animals which were statistically misclassified or attributed to a province different from the one in which they were sampled, in order to discover a linear combination of quantitative morphological variables providing maximum separation between the potentially existing populations when the classification criterion was the province in which the animals were located. CDA was also used to plot pairs of canonical variables to help visually interpret group differences. Variable selection was performed using Forward Stepwise (FSTEP) multinomial logistic regression algorithms. Canonical discriminant analysis was performed using the CANDISC procedure from the PROC CANDISC routine of the S.A.S. 9.4 software [31].
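For readers without SAS, a minimal sketch of the same idea, linear/canonical discriminant analysis with province labels, can be written in Python with scikit-learn; the random data below merely stand in for the 14 zoometric measures and the six province labels, so the numbers printed are illustrative only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(244, 14))        # stand-in for the 14 zoometric measures
y = rng.integers(0, 6, size=244)      # stand-in for the six province labels

lda = LinearDiscriminantAnalysis(n_components=5)   # groups - 1 = 6 - 1 functions
scores = lda.fit_transform(X, y)                   # canonical variates per animal
print(lda.explained_variance_ratio_)               # variance share per function
print((lda.predict(X) == y).mean())                # resubstitution classification rate
```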
Canonical Correlation Dimension Determination
Canonical correlation is a form of correlation relating two sets of variables. The maximum number of canonical correlations between two sets of variables is the number of variables in the smaller set. The first canonical correlation is always the one which explains most of the relationship [39]. Canonical correlations are interpreted like Pearson's r, and hence their square is the percentage of variance in one set of variables explained by the other set along the dimension represented by the given canonical correlation (usually the first); that is, Rc-squared is the percentage of shared variance along this dimension [40]. As a rule of thumb, some researchers state that a dimension is of interest if its canonical correlation is 0.30 or higher, corresponding to about 10% of variance explained. Although some researchers report just the first canonical correlation, it is recommended that all meaningful and interpretable canonical correlations be reported [41].
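As a small illustration of these quantities, discriminant-function eigenvalues can be converted into canonical correlations via Rc = sqrt(λ/(1 + λ)). In the sketch below, only the first eigenvalue (5.422, reported later for the first function) comes from this study; the remaining values are hypothetical placeholders.

```python
import numpy as np

eig = np.array([5.422, 0.35, 0.12, 0.05, 0.02])  # only 5.422 is from the paper
canonical_r = np.sqrt(eig / (1.0 + eig))         # Rc per discriminant function
shared_var = canonical_r ** 2                    # Rc^2: shared variance
pct_of_total = eig / eig.sum()                   # share of discriminating variance
print(canonical_r.round(3), shared_var.round(3), pct_of_total.round(3))
```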
Canonical Discriminant Analysis Efficiency
Wilks' lambda test assesses which variables significantly contribute to the discriminant function. As a rule of thumb, the closer Wilks' lambda is to 0, the more the variable contributes to the discriminant function. The significance of Wilks' lambda can be tested using a chi-square approximation; if the p-value is less than 0.05, we can conclude that the corresponding function explains group membership well [42]. For small sample sizes or a small number of treatments, the limiting chi-squared or normal distributions may not adequately describe the actual probability distributions of the test statistics. Here, a finite-sample approximation may be more appropriate than using the limiting distribution. One such method is Fisher's F approximation for Wilks' lambda by Rao [43], as developed in Chávez [44]. According to these authors, under normality conditions, this procedure performs more accurately than a χ² approximation [45].
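To make the chi-square route concrete, a minimal sketch of Bartlett's approximation (the χ² alternative to the Rao F approximation used here) is shown below, building Wilks' lambda from discriminant eigenvalues; all eigenvalues after the first, and the sample figures, are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def wilks_lambda_test(eigenvalues, n_obs, n_vars, n_groups):
    """Bartlett's chi-square approximation for Wilks' lambda."""
    lam = np.prod(1.0 / (1.0 + np.asarray(eigenvalues)))        # Wilks' lambda
    stat = -(n_obs - 1 - (n_vars + n_groups) / 2.0) * np.log(lam)
    df = n_vars * (n_groups - 1)
    return lam, stat, chi2.sf(stat, df)                         # lambda, chi2, p

# 14 variables, 6 provinces; eigenvalues after the first are placeholders
print(wilks_lambda_test([5.422, 0.35, 0.12, 0.05, 0.02], 244, 14, 6))
```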
Canonical Discriminant Analysis Model Reliability
Box's M is used to test the assumption of equal covariance matrices in multivariate analysis of variance (MANOVA) and discriminant function analysis (DFA). Box's M has very little power for small sample sizes [46]; hence, when we work with a small sample, a nonsignificant result may not necessarily indicate that the covariance matrices are equal. In contrast, for large samples a statistically significant result can be reported when a difference does not actually exist. To address this particular issue, a smaller alpha level (p < 0.001) is recommended [47]. Some authors suggest that Box's M is highly sensitive; hence, unless p < 0.001 and sample sizes are unequal, we should ignore it. However, if the results are significant and sample sizes are unequal, the test is not robust [48].
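Since Box's M is absent from many common statistics libraries, a minimal from-scratch sketch of the statistic is given below (the chi-square scaling constant is omitted for brevity, and the group data are synthetic):

```python
import numpy as np

def box_m(groups):
    """Box's M statistic for equality of covariance matrices across groups."""
    k = len(groups)
    sizes = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * s for n, s in zip(sizes, covs)) / (sizes.sum() - k)
    m = (sizes.sum() - k) * np.log(np.linalg.det(pooled))
    m -= sum((n - 1) * np.log(np.linalg.det(s)) for n, s in zip(sizes, covs))
    return m

rng = np.random.default_rng(1)
print(box_m([rng.normal(size=(40, 4)) for _ in range(6)]))   # six toy provinces
```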
In multiple regression, another assumption that should be tested for is multicollinearity. The variance inflation factor (VIF) is used as an indicator of multicollinearity. Computationally, it is defined as the reciprocal of tolerance: VIF = 1/(1 − R²).
Various recommendations for acceptable levels of VIF have been published in the literature. Perhaps most commonly, a value of 10 has been recommended as the maximum level of VIF [49][50][51][52]. The VIF recommendation of 10 corresponds to the tolerance recommendation of 0.10 (1/0.10 = 10). However, a recommended maximum VIF value of 5 [53] and even 4 [54] can be found in the literature.
For example, a VIF of 8 implies that the variance of the corresponding coefficient estimate is 8 times larger (and its standard error larger by a factor of √8 ≈ 2.8) than would otherwise be the case if there were no inter-correlations between the predictor of interest and the remaining predictor variables included in the multiple regression analysis.
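A hedged sketch of this screening step in Python (statsmodels) is shown below; the data and column names are synthetic stand-ins for the wing measures that are flagged as collinear later in this paper.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(244, 4)),
                 columns=["phwl", "hwrul", "dpwl", "metl"])
# force dpwl to be (almost) the sum of the other two wing segments
X["dpwl"] = X["phwl"] + X["hwrul"] + rng.normal(scale=0.1, size=244)

Xc = add_constant(X)
vif = {col: variance_inflation_factor(Xc.values, i)
       for i, col in enumerate(Xc.columns) if col != "const"}
print(vif)   # values above 4 (or 5, 10) flag multicollinear predictors
```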
Canonical Coefficients and Loading Interpretation and Spatial Representation
A preliminary principal component analysis (PCA) was performed to reduce the overall set of variables to a few meaningful variables that contributed most to the variation in the populations. As a result, half wing radius ulna length (hwrul) and distal phalanx wing length (dpwl) were discarded, given that they reported a component loading lower than |0.5|, which suggested their redundant, confounding nature; this may be based on the fact that they comprise the total length of the wing, defined by proximal humerus wing length (phwl). Discriminant function analysis was used to determine the percentage assignment of individuals into their own populations.
The traditional approach to interpreting discriminant functions examines the sign and magnitude of the standardized discriminant weight (also referred to as a discriminant coefficient) assigned to each variable in computing the discriminant functions. Small weights may indicate either that a certain variable is irrelevant in determining a relationship or that it has been discarded because of a high degree of multicollinearity.
Discriminant loadings reflect the variance that the independent variables share with the discriminant function. In this regard, they can be interpreted like factor loadings in assessing the relative contribution of each independent variable to the discriminant function.
In either simultaneous or stepwise discriminant analysis, variables that exhibit a loading of |0.40| or higher are considered substantive discriminating variables. With stepwise procedures, this determination is supplemented because the technique prevents nonsignificant variables from entering the function. However, multicollinearity and other factors may preclude a variable from entering the equation, which does not necessarily mean that it does not have a substantial effect. Loadings are considered to have relatively higher validity than weights as a means of interpreting the discriminating power of independent variables because of their correlational nature.
Standardized coefficients allow you to compare variables measured on different scales. Coefficients with large absolute values correspond to variables with greater discriminating ability. Also, discriminant scores can be computed by using the standardized discriminant function coefficients applied to data that have been centered and divided by the pooled within-cell standard deviations for the predictor variables, as discussed in IBM Corp. [55].
The data were standardized following the standard procedures of Manly [56] before squared Mahalanobis distances and principal component analysis were computed. Squared Mahalanobis distances were computed between populations using the following formula:
D²ij = (Yi − Yj)′ COV⁻¹ (Yi − Yj),
where D²ij is the distance between populations i and j, COV⁻¹ is the inverse of the covariance matrix of the measured variables, and Yi and Yj are the mean vectors of the measured variables in the ith and jth populations, respectively. The squared Mahalanobis distance matrix was converted into a Euclidean distance matrix and used to build a dendrogram by the unweighted pair-group method using arithmetic mean (UPGMA), via an agglomerative hierarchical cluster procedure with the software DendroUPGMA by Garcia-Vallvé and Pere Puigbo [57]. The Mahalanobis squared distance, defined as the square of the distance between the standardized values of Z (centroids), was used in this way to verify whether there were significant differences between provinces [58].
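A minimal Python sketch of this pipeline, pooled-covariance squared Mahalanobis distances between province centroids followed by a UPGMA ('average' linkage) tree, is given below; the group data are synthetic stand-ins, and the printed linkage table is what a dendrogram plot would be built from.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)
names = ["Bolivar", "Chimborazo", "Cotopaxi", "Tungurahua", "Guayas", "Morona"]
groups = {p: rng.normal(loc=i, size=(40, 4)) for i, p in enumerate(names)}

# pooled within-group covariance and its inverse
n_total = sum(g.shape[0] for g in groups.values())
pooled = sum((g.shape[0] - 1) * np.cov(g, rowvar=False)
             for g in groups.values()) / (n_total - len(groups))
inv_cov = np.linalg.inv(pooled)

# squared Mahalanobis distances between group centroids
d2 = np.zeros((len(names), len(names)))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        diff = groups[a].mean(axis=0) - groups[b].mean(axis=0)
        d2[i, j] = diff @ inv_cov @ diff

d2 = (d2 + d2.T) / 2.0                                     # enforce exact symmetry
tree = linkage(squareform(np.sqrt(d2)), method="average")  # UPGMA linkage
print(tree)                                                # dendrogram merge table
```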
Discriminant Function Cross-Validation
To establish whether the percentage of correctly classified cases is high enough to consider that the discriminant functions issue valid results, the leave-one-out cross-validation option can be used as a form of validation. Classification accuracy achieved by discriminant analysis should be at least 25% greater than that obtained by chance.
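An illustrative sketch of this leave-one-out validation of the discriminant classifier follows (synthetic data again, so the accuracy printed is not the study's):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(244, 14))        # stand-in zoometric measures
y = rng.integers(0, 6, size=244)      # stand-in province labels

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(acc.mean())                     # leave-one-out classification accuracy
```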
These results can be supported by Press's Q statistic. This parameter can be used to compare the discriminating power of our function to a model classifying individuals at random (50% of the cases correctly classified), as follows:
Q = [N − (n × K)]² / [N(K − 1)],   (2)
where N is the number of individuals in the sample, n is the number of observations correctly classified, and K is the number of groups. The next step is to compute the critical value, which equals the chi-square value at 1 degree of freedom. It is advisable to let alpha equal 0.05. When Q exceeds this critical value, classification can be regarded as significantly better than chance, thereby supporting cross-validation.
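A small sketch of this computation, using the sample figures reported later in the cross-validation section (N = 227 cases, n = 221 correctly classified, K = 6 groups), would be:

```python
from scipy.stats import chi2

def press_q(N, n, K):
    """Press's Q statistic; n is the count of correctly classified cases."""
    return (N - n * K) ** 2 / (N * (K - 1))

q = press_q(N=227, n=221, K=6)
print(q)                             # ~1064.14, matching the value reported later
print(q > chi2.ppf(0.99, df=1))      # exceeds 6.63 -> better than chance at 0.01
```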
Descriptive Statistics, ANOVA and Waller-Duncan Post-Hoc Test
Morphometric analysis indicated highly significant differences when males were compared to females, as shown in Table 3 (superscript a denotes the lower mean and superscript b the higher mean, p < 0.05; when there is no significant difference between sexes, the superscript letters are the same). Table 4 shows a summary of the significant results of the ANOVA and post hoc Waller-Duncan test for the zoometric characteristics of Ecuadorian Creole hens across the provinces. The variables analyzed show a very high and variable coefficient of variation across biometric traits.
Canonical Discriminant Analysis Model Reliability
The result obtained for Box's M test indicated no significant departure from the assumption of equal covariance matrices, so we could proceed with the analysis. The Wilks' lambda statistic was used to assess whether the canonical discriminating functions contributed significantly to the separation of treatments, that is, it was used to test the significance of the discriminating functions (Table 5). All zoometric variables were included at a preliminary stage of the analysis performed in this study. Tolerance (1 − R²) and the variance inflation factor (VIF) were analyzed to identify those variables that were responsible for multicollinearity. This analysis revealed that the variables proximal humerus wing length (phwl), half wing radius ulna length (hwrul) and distal phalanx wing length (dpwl) turned out to be highly related (VIF > 4). Therefore, we decided to retain distal phalanx wing length (dpwl) in the analysis, because that measure results from the combination of proximal humerus wing length (phwl) and half wing radius ulna length (hwrul), and it presented the lower VIF. After the removal of redundant variables, the results for tolerance and VIF can be seen in Table 6.
Canonical Coefficients and Loading Interpretation and Spatial Representation
The canonical discriminant analysis identified five discriminating canonical functions. The first had a high discriminatory power, as denoted by its eigenvalue of 5.422. The results are presented in Table 7. The first function explains 91.9% of the total variance. The fifth function contributes 49.9% of the information to the analysis, that is, relatively low. The results of the tests of equality of group means, used to test for differences across provinces once redundant variables had been removed, are shown in Table 8. The greater the value of F and the lower the value of Wilks' lambda, the better the discriminating power of a given variable and the lower the rank position it presents. Variables presenting equal values of lambda and F had equivalent discriminatory power, as shown by beak and dorsal length. When this happens, it is necessary to check whether these similarities are based on a multicollinearity problem or whether the variables, indeed, have a similar discriminant power. Once F and Wilks' lambda had been assessed, we evaluated the magnitude of the standardized and non-standardized coefficients, reported in Table 9, to determine whether there had been a reduction in the discriminant power of individual variables as a result of multicollinearity between pairs, which implies a reduction in the separate discriminant power of each of the two variables involved in the multicollinear relationship.
As shown in Table 8 for the variables neck and dorsal length, we observe that the standardized coefficients fell below 0.4; hence, there was a decrease in the individual discriminating power of these variables as a result of the effect of multicollinearity, that is, the neck and dorsal length variables being related and explaining a somewhat redundant fraction of variability. The greater the reduction in the standardized coefficient, the more important the multicollinearity problem between variables holding similar Wilks' lambda and F values. Standardized coefficients are shown in Table 9. According to Hair Jr [59], absolute values below |0.3| are indicative of multicollinearity problems when F and Wilks' lambda have previously been checked to be similar for a certain pair of variables.
Unstandardized coefficients, calculated on raw scores for each variable, are of most use when the investigator seeks to cross-validate or replicate the results of a discriminant analysis, or to assign previously unclassified subjects or elements to a group. As we are assessing the potential misclassification of individuals belonging to previously defined populations as a way to define such populations themselves, we must interpret the standardized coefficients, and hence the unstandardized coefficients were discarded [60]. Furthermore, the unstandardized coefficients cannot be used to compare variables or to determine which variables play the greatest role in group discrimination, because the scaling for each of the discriminator variables (i.e., their means and standard deviations) usually differs. The maximum number of canonical discriminant functions generated is equal to the number of groups minus one. In the present study, the number of canonical discriminant functions was 5 for each series, as we used the six provinces as the labelling criterion. After the evaluation of the standardized coefficients, the resulting discriminant functions were derived. To determine which variable to discard out of each pair for which a multicollinearity problem had been detected, we checked the discriminant loadings, which are presented in Table 10. Discriminant loadings measure the existing linear correlation between each independent variable and the discriminant function, reflecting the variance that the independent variables share with the discriminant function. In this regard, they can be interpreted like factor loadings in assessing the relative contribution of each independent variable to the discriminant function. A graphical representation of the discriminant loadings is shown in Figure 3, with those variables whose vectors extend furthest from the origin being the most representative discriminating ones. A territorial map was created by plotting the discriminating values (Z) of each observation for the first function on the x axis and those for the second discriminant function on the y axis. Figure 4 graphically depicts the canonical discriminant analysis of individuals across the six sampling provinces. The Cotopaxi and Tungurahua provinces are evidently independent, while the populations from the provinces of Guayas, Chimborazo and Bolívar displayed significant overlap. Further away from the latter three, we find the population from the province of Morona Santiago. Centroids designate the central observation for each province group. The probability that an unknown case belongs to a particular group was calculated by measuring the relative Mahalanobis distance to the centroid of a population. To compute discriminant scores or centroids, we substituted the mean for each possible province in the first three dimensions [61]. Then, to calculate the optimal cut-off point, that is, the probability of classification, we followed the procedures in Hair, Black, Babin and Anderson [52]. Then we could determine whether a certain case was appropriately classified.
It can be observed that the provinces of Cotopaxi and Tungurahua are located in different places on the Cartesian plane, that is, remote from each other. The opposite situation happened with the provinces of Bolívar, Guayas and Chimborazo, and it can also be observed that Morona Santiago is slightly separated from the three latter provinces.
Discriminant Function Cross-Validation
When the classification and leave-one-out cross-validation matrices are evaluated, it can be observed that 97.14% of cases were correctly classified for Bolivar, with 88.57% being validated for the same province. For Chimborazo, 93.33% were correctly classified, with 89.33% validated. For Cotopaxi, Guayas, Morona Santiago and Tungurahua, all observations were correctly classified, with 90.63%, 50%, 80% and 94.44% being validated, respectively.
Cross-validation reported a Press's Q value of 1064.1418 (Q = [N − (n × K)]²/[N(K − 1)] = [227 − (221 × 6)]²/[227 × (6 − 1)]). Hence, Q was above 6.63, the χ² critical value for one degree of freedom at the 0.01 significance level, so predictions were significantly better than chance, which would otherwise correspond to a correct classification rate of 50%. The absolute values of the Mahalanobis distances between the local populations of the six provinces involved in the analysis are shown in Table 11. The shortest distance is found between Bolivar and Chimborazo, while the longest distances are those found between the province of Cotopaxi and the rest. Contrastingly, the distances of Tungurahua to Bolívar, Chimborazo and Cotopaxi are similar. The level of statistical significance for all Mahalanobis distances was high and similar (p < 0.0001). Table 11 reports the Mahalanobis distances between locations (above the diagonal) and the F statistics (numerator degrees of freedom (dfn) = 6, denominator degrees of freedom (dfd) = 214) for the squared distances between locations (below the diagonal). Mahalanobis distance analysis is based on the analysis of generalized squared Euclidean distances adjusted for unequal variances. The Mahalanobis distance (D²), defined as the square of the distance between the standardized values of Z, was used to verify whether there were significant differences between the provinces. Thus, the greater the value of the distance, the greater the distance between the means of the provinces considered [58]. As can be seen in the dendrogram (Figure 5), the provinces of Bolivar-Chimborazo, Guayas-Morona Santiago, and Cotopaxi-Tungurahua group together as subpopulations.
The shortest Euclidean distance was observed between the provinces of Bolivar and Chimborazo, whereas the opposite occurred between the province of Cotopaxi and the others. The distances between Tungurahua and Bolívar, Chimborazo, or Cotopaxi are similar. In contrast, Morona Santiago is slightly distant from the provinces of Guayas, Chimborazo and Bolívar.
Discussion
The morphometric measurements show highly significant differences in relation to sex, as reported by Yakubu and Salako [62] in indigenous chickens in Nigeria, who attributed such differences to the hormonal effects of sex that condition growth. These results were consistent with similar results in the literature [6,63].
The high coefficient of variation observed in the results is similar to that reported in different populations of chickens in Mexico [64], and also in indigenous chickens in Nigeria [61]. This demonstrates the variability of the morphometry in the birds studied, which may be due to genetic divergence processes followed by the populations, such as migration [65], resulting in the morphological modification of the populations to adapt to the characteristics of the different environments and the orography to which the birds were introduced [63], [66][67][68].
Measurements for head length are higher than those measured in Batsi Alak Hens of Mexico whose mean for males and females varied from 4.16 to 4.6 cm [67], and lower than those reported for Yoruba ecotypes of Nigeria with an average of 9.90 cm [68].
In terms of crest length, the values are lower than those measured in indigenous Nigerian roosters and higher than those indicated for hens of the same country [62]. Likewise, the Yoruba and Fulani ecotypes [67] reported a value similar to that reported for the province of Cotopaxi when comparing fowl in general, without considering males and females separately [69]. However, in autochthonous Catalonian chicken breeds, such as the Partridged Penedesenca and Blonde Empordanesa, we observe crest sizes double those measured in the present study [70].
For beak length, the values are analogous to those found in Botswana hens, which could be supported by the fact that both studies were conducted across three agroecological regions [71]. However, higher values were reported by Yakubu and Salako [62] in indigenous Nigerian fowl for males and females, as well as in the native Catalonian breeds Partridged Penedesenca and Blonde Empordanesa. The opposite situation was described for Batsi Alak hens from Mexico [67] and Fulani ecotypes of Nigeria [68], which reported similar mean values to those of Bolivar, Guayas and Morona Santiago. This could be ascribed to the similarity between the climates of the locations in which the studies took place.
For the neck length trait, the values found were equivalent to those found in Partridged Penedesenca and Blonde Empordanesa [70] and Yoruba and Fulani Nigerian ecotypes [68]; while lower values were found in indigenous Nigerian hens for both males and females [62]. The highest values in literature were reported for Batsi Alak Hens from Mexico with measures of 19 to 17 cm in males and females, respectively [67].
Body and dorsal lengths, along with head length, have been related in the literature to the potential of animals for egg production [68]. When these data are compared with those reported by Moazami-Goudarzi [5] in their studies of local Tanzanian chicken ecotypes, the values for males of the Singamagazi ecotype were slightly higher than the average reported for Morona Santiago hens, but comparable to the males of the Kuchi ecotype. However, for males of the Mbeya ecotype and females of the Singamagazi ecotype [5], these values were similar to those reported for the Partridged Penedesenca and Blonde Empordanesa hens [70] and equivalent to those found in Bolívar, Chimborazo, and Guayas. Studies conducted on Botswanan hens across three different agroecological areas found values for females and males [71] similar to those measured in Cotopaxi and Tungurahua. In turn, Yakubu and Salako [62] reported higher average measurements than those of the Ecuadorian Creole hens found in this study or those found for the Fulani and Yoruba.
Higher values for the ventral length variable are found in the native breeds Partridged Penedesenca and Blonde Empordanesa [5], and in the Yoruba and Fulani ecotypes [68]. The thoracic perimeter variable is a good indicator of meat yield in most species of poultry [68]. Higher values were found here than those obtained for Nigerian males and females [62]. However, native Nigerian birds bred for research purposes belonging to the Anak Titan ecotype [72] and the Yoruba and Fulani ecotypes [68] reported values similar to those found for Chimborazo individuals.
The Batsi Alak hens in Mexico reached values similar to those reported in Bolívar, Chimborazo and Cotopaxi [67]. These values were common in backyard hens in Mexico [63]. The measurements for half wing radius ulna length (hwrul) were shorter than those for the males and females of the Batsi Alak hen breed from Mexico [67]. However, the values for distal phalanx wing length (dpwl) were similar to those observed in Batsi Alak hens from Mexico for both males and females. This may be due to the fact that that study was carried out at an altitude of 1200 to 2760 m above mean sea level, conditions similar to those found at the location where our study took place [67]. Thigh lengths in Partridged Penedesenca and Blonde Empordanesa hens [70] were similar to those of Ecuadorian fowl from the Cotopaxi and Tungurahua provinces, but shorter than those of the Nigerian Yoruba ecotype hens [68].
Regarding the circumference of the leg, the indigenous hens of Nigeria reported values similar to those measured in Bolívar and Chimborazo, and somewhat similar to those reported for the hens of Cotopaxi. The dimensions of the leg have been related in the literature to the type of production, with animals presenting higher dimensions (both in width and length) being more appropriate for meat production and characteristic of meat breeds [68]. Tanzanian local chicken ecotypes presented higher tarso-metatarsal lengths [5] for Singamagazi and Kuchi than those of Ecuadorian Creole hens. However, females of the same ecotypes reported average values [73] similar to those measured in Morona Santiago hens.
Similarly, males of the Ching'wekwe ecotype, females of the same ecotype and females of the Morogo ecotype presented values similar to those of the provinces of Guayas and Tungurahua, but much higher than those of the province of Cotopaxi. Likewise, this value was similar to that reported for Nigerian birds [62], which may support the idea that ecotype may be strongly conditioned by the agroecological conditions of the area in which these avian populations are based.
Similarly, larger sizes were recorded for breeds such as the Partridged Penedesenca and Blonde Empordanesa with an average of 8 cm, which may be because the birds studied also come from four different eco-climates, homologous to those of the areas considered in this research.
For birds from India [74], average values were similar to those measured in Bolívar and Chimborazo. Long tarsi have been associated with dry regions and flat topographies, as they allow birds to travel long distances in search of food, unlike birds with short tarsi, which could be attributed to the effects of natural selection [72].
Short tarsi have been identified with a greater ability to escape from predators [74]; hence, they have been directly related to processes of adaptation and improvement of survival. Functionally, from a productive point of view, tall animals tend to be destined for meat production and small animals for egg production [68]. In addition, the length of the tarsi may be related to the prediction of live weight in the field, as reported by some authors [75][76][77]. Ecuadorian Creole hens present morphological traits that would make them more prone to produce eggs. However, the dimensions of certain morphological variables could make them suitable for meat production, so it is essential to implement breeding programs to select and direct crosses that allow ecotypes to be obtained by classifying individuals according to their productive potential.
Canonical discriminant analysis suggests that the variables perimeter of the leg (legc), metatarsus tarsus length (metl) and middle finger phalanx length (mfpl) were the ones that had the greatest discriminatory capacity between provinces. The results revealed the presence of wide ranges of variation within and among Creole hens in Ecuador. However, four large population blocs could be identified, namely Cotopaxi, Tungurahua, Morona Santiago, and that comprising the populations of Bolivar, Chimborazo and Guayas, a fact that could be attributed to the different conditions found across the various agroecological zones in the country, ethnic groups handling these resources and cultural implications that they have, along with the huge migration events suffered by these resources when facing natural and/or man-made challenges.
Toalombo, et al. [78] identified a common pattern of haplogroups of mitochondrial DNA for Ecuadorian chickens reared across Ecuadorian agroecological systems, which suggests these animals may belong to the same maternal lineages. In fact, the maternal origin for these populations could presumably be attributed to pre-Columbian Asiatic matrilines or Iberian matrilines arriving during the Spanish colonization. Furthermore, mitochondrial findings support the fact that current Ecuadorian local chickens do not show maternal influences from commercial lines and maintain high levels of genetic diversity without evidence of genetic drift and/or population bottlenecks. This high diversity may be owing to internal heterogeneity, which, as suggested by the results in the present study, may be promoted by the breeding policies that are carried out. Additionally, the patterns found by canonical discriminant analysis are supported by the results reported by Toalombo et al. [78], as a certain internal substructure can be found. However, the absence of any breeding program, registers, or zootechnic management produces a high fragmentation of the potential Ecuadorian breeds.
The most likely reason why the Morona Santiago, Bolívar, Chimborazo, and Guayas populations present a mixed structure, which does not permit their complete segregation from one another, may lie in the fact that poultry farmers in these provinces tend to preserve their birds, avoiding the introduction of individuals from external populations, despite the fact that poultry production has already been developed in their areas.
The provinces of Morona Santiago, Bolívar, and Chimborazo are near each other, in contrast to the province of Guayas, located in the Costa Region. However, there is a provincial road that joins Chimborazo (Sierra) and Guayas (Costa). It should be noted that agricultural fairs are likely to take place along the road connecting both provinces. These events act as exchange centers of genetic material, which is mainly performed with minor species such as birds, given the considerable ease of transporting such resources. The same would happen with Morona Santiago, which is located in the Ecuadorian Amazon, but near Chimborazo, which is geographically located in the center of the country. The provinces in question are characterized by their agricultural and livestock background, such that 13% of the population is engaged in poultry production; hence, it could be assumed that these populations maintain the genetics of their fowl over time.
The connection with Guayas, as stated by the General Secretariat of the Andean Community in 2009, lies in the fact that during the seventies there was a reduction in domestic agricultural-livestock production, which led to the migration of the inhabitants of Nabuzo-Penipe (Chimborazo) to the coast of Ecuador. This social movement meant that people carried easily transported animal species such as hens along with them. Complementarily, as a result of the eruptions of the Tungurahua volcano, constant since the late 1990s [79], the Canton Penipe experienced immigration from the surrounding populations, which was also noticeable in Nabuzo (Chimborazo), the area least affected by volcanic ash. This confirms that the genetic material did not suffer such migratory pressure and hence its resources remain intact, which in turn explains the clustering revealed by the canonical discriminant analysis and which may outline the same genetic structure.
Conclusions
The results revealed the presence of wide ranges of variation within and among Creole hens in Ecuador. However, four large population blocs could be identified, namely Cotopaxi, Tungurahua, Morona Santiago, and that comprising the populations of Bolivar, Chimborazo, and Guayas, a fact that could be attributed to the different conditions found across the various agroecological zones in the country, the ethnic groups handling these resources and the cultural implications that they have, along with the huge migration events suffered by these resources when facing natural and/or man-made challenges. This highlights the many opportunities to implement programs for the promotion, conservation, and genetic improvement of these local resources, through their selection and crossing after their definition and characterization, since their inherent resistance and adaptation to the different environmental conditions allow the definition of technical and scientific strategies to exploit their productive potential. In addition, the local nature of these resources should be highlighted as an intangible ancestral value in the field of food sovereignty, and should be inserted within productive policies at a governmental level, as has already been done with other species, with the aim of improving rural livelihoods and meeting the growing demand for poultry products.
Funding: This study received funding from the "Instituto de Investigaciones (IDI) - Vicerrectorado de Investigaciones - Escuela Superior Politécnica de Chimborazo" through the project proposal "Caracterización morfológica y genética de la gallina criolla del Ecuador".
"year": 2019,
"sha1": "759d5bd77926fb6ab735f46b23ef4ab92a915d79",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/10/1/32/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b5c22e5068445a4891b4f8f115ab0dfcb83d1fa",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Mutated cuckoo search algorithm for dynamic vehicle routing problem and synchronization occurs within the time slots in home healthcare
The advancement and development of technologies promote more interest in exploring Dynamic Vehicle Routing Problems (DVRP), especially in home healthcare. Home healthcare (HHC) has gained increasing attention from researchers in recent years due to its growing demand. There are certain cases where some patients may require one or more care services concurrently. In such a situation, synchronization of vehicles within specific time slots is necessary. The inclusion of dynamic patients and the synchronization of vehicles without disregarding the time window becomes a formidable task for HHC organizations. Hence, the investigation of the Dynamic Vehicle Routing Problem where Synchronization occurs within the Time Slots (DVRPSTS) becomes a remarkable model in the context of HHC. This paper proposes one such complex endeavour considering multiple objectives: (i) minimizing the total travel time and the number of vehicles utilized, and (ii) maximizing the number of new patient visits in HHC. The accuracy and efficiency of the proposed Mutated Cuckoo Search Algorithm (MCSA) are validated by comparing its results with existing methods in the literature. The algorithm outperforms the competing methods on most of the randomly generated test instances. To the best of our knowledge, the proposed MCSA has not yet been modelled for the DVRPSTS in HHC.
Introduction
According to the ''World Population Ageing (2019)'' report, there is a constant rise in the ageing population. It is predicted that by 2050, the elderly population (65 years or more) will grow to 16 per cent of the world population, that is, one in six (1:6). As a result of the increase in the elderly population, the growing cost of in-patient hospitals, traffic congestion, and the rise in chronic (like diabetes) and contagious diseases (like coronavirus), there is a high demand for home healthcare among different healthcare service delivery systems (Landers 2016). Home Healthcare (HHC) provides a set of care services to patients at their respective homes using suitable caregivers. In this realistic world, innovations in technologies like Global Positioning Systems (GPS), Radio Frequency Identification (RFID) tags, smart mobile updates, etc., create a significant impact on logistics and transportation services. These advancements in technologies help to perform fleet management effectively and proficiently (Liao 2016; Nasir 2020). Hence, many researchers have been attracted to investigate the Dynamic Vehicle Routing Problem (DVRP) (Larsen 2002; Euchi 2015; Fikar 2017; Demirbilek 2019; Sangeetha 2020a).
Planning and routing the appropriate professional caregivers' vehicles to the patients according to their demands and preferences within the time window is a crucial job for HHC. In our HHC system, the DVRP process begins when new patients' requests arrive after the execution of the route plan. Indeed, the new patients are included in a suitable route path, and synchronization of vehicles happens within the time slots whenever it is required. Introducing two simultaneous real-time scenarios into the HHC service system, (i) vehicle routing in a dynamic environment and (ii) synchronization of visits within specific time slots, has made this problem more challenging. This proposed and unique model is called the Dynamic Vehicle Routing Problem where Synchronization occurs within the Time Slots (DVRPSTS). Since this model is an NP-hard problem (Fikar 2017; Sangeetha 2020a), solving it within a specific time frame using exact algorithms is notoriously difficult. Although contemporary optimization techniques such as metaheuristic algorithms cannot guarantee exact solutions, they are capable of producing high-quality, near-optimal solutions. Such algorithms are therefore well suited to solving a real-time problem like the DVRPSTS of HHC. Only a few researchers have used nature-inspired metaheuristic algorithms for solving DVRP in the literature of HHC (Fikar 2017; Sangeetha 2020a, 2020b; Borchani 2019). The Cuckoo Search Algorithm (CSA) has received much attention from researchers in various optimization areas (Yang 2009; Ouaarab et al. 2014; Xiao et al. 2018; Alssager et al. 2020; Swathypriyadharsini 2021). Therefore, this research proposes a novel Mutated Cuckoo Search Algorithm (MCSA) to solve this new variant of DVRPSTS in HHC.
Our proposed variant focuses on multiple objectives: minimizing the total travel time of caregivers and the number of vehicles utilized for the routing, and maximizing the number of patient visits in each route plan. To address this model, an MCS algorithm has been developed, and numerical analysis has been performed by comparing it with two well-established metaheuristic algorithms, the Genetic Algorithm (GA) and the Discrete Cuckoo Search Algorithm (DCSA). This research work shows that the proposed algorithm is very promising and outperforms the two competing algorithms, GA and DCSA, in most cases.
The rest of this paper is organized as follows: Section 2 provides a detailed literature review on static and dynamic routing, synchronization visits, and the application of the CS algorithm to VRP. Section 3 describes the DVRPSTS of HHC and its mathematical formulation. Section 4 presents the pseudocode and flowchart of the MCSA. Section 5 contains the experimental analysis and its outputs. Finally, Section 6 concludes the study.
Literature review
Innovations and advancements in technology transform and improve the monitoring and managing skills of the healthcare system by using wireless sensor networks (Li 2009; Kateretse 2013; Yaghmaee 2013; Prakash 2019), IoT (Kore 2020; Kadhim 2020), mobile healthcare (Jeong 2014), tele-homecare systems (Dinesen 2009), RFID technology (Liao 2016), the U-Health platform (Jung 2013), and sensing technology (Khawaja 2017; Simik 2019), etc. Owing to the impact of COVID-19 infections, patients preferred to be treated at home by their healthcare services rather than being hospitalised. Hence, the integration of advanced technology with the healthcare system will help in handling the pandemic effectively, and it has further increased the demand for HHC services significantly. Therefore, optimizing the routing plan of heterogeneous vehicles with the limited resources of the HHC organization is crucial. This creates vast interest in finding a Suitable Action Planner (SAP) for constructing the complex scheduling and routing of caregivers, taking into account various constraints in static and dynamic environments, synchronized visits of caregivers, etc.
Planned management in static circumstances was examined by Nickel (2012), who constructed constraint programming, adaptive large neighbourhood search (ALNS) and tabu search approaches with respect to cost and time benefits. Sangeetha (2020b) proposed enhanced elitism in Ant Colony Optimization (E-ACO) to solve the heterogeneous VRP, maintaining workload balance among various caregivers in order to preserve continuity of care in HHC.
In the dynamic environment of VRP in HHC, various types of action planners have been discussed. Demirbilek (2019) developed and examined two variants, the Daily Scenario-Based Approach (DSBA) and the Weekly Scenario-Based Approach (WSBA), for anticipating future demands by multiple nurses. Nasir (2018) proposed an integrated model of scheduling and routing for daily planning in HHC, solved using mixed-integer linear programming (MILP) and incorporating two heuristic methods: an initial heuristic solution and a self-correcting variable neighbourhood search algorithm. Euchi (2015) demonstrated an artificial Ant Colony (AC) with a 2-opt local search to solve the Dynamic Pick-up and Delivery Vehicle Routing Problem (DPDVRP) to reduce the total cost. Haitao (2018) proposed Enhanced Ant Colony Optimization (E-ACO) for homogeneous vehicles, combining ACO with K-means clustering and a crossover operation to extend the search space and avoid falling into local optima.
Suitable techniques for vehicle routing with synchronized visits have also been developed. Rabeh et al. (2012) modelled the problem as a Synchronized VRP with Time Windows (SVRPTW) and proposed a MILP, solved using the LINGO_11.0 solver, for reducing the total time. Parragh (2018) demonstrated the first problem as VRPTW with pairwise synchronization and the second problem as the service technician routing and scheduling problem (STRSP); both were designed using ALNS to reduce the total cost. Nasir (2020), under synchronization requirements between HHC staff and Home Delivery Vehicle (HDV) visits, developed a MILP model to characterize the optimization problem of minimizing the total cost, and a Hybrid Genetic Algorithm (HGA) was presented to suggest HHC planning decisions. Mankowska (2013) constructed the Home Health Care Routing and Scheduling Problem (HHCRSP) to minimize the total cost using an adaptive variable neighbourhood search algorithm. Borchani (2019) defined a variant of VRPTWSyn in HHC; a hybrid Genetic Algorithm with Variable Neighborhood Descent search (GA-VND) was proposed for reducing the difference in service times of different vehicles and providing workload balance. David Bredström (2008) presented a mathematical programming model for the combined vehicle routing and scheduling problem with time windows and additional constraints imposing pairwise synchronization and pairwise temporal precedence between customer visits, independently of the vehicles; it was solved using CPLEX Branch and Bound. Rousseau (2002) developed the Synchronized Vehicle Dispatching Problem (SVDP); the solution method proposed in that paper relies on the subsequent insertion of customers using a greedy procedure.
Regarding the application of the Cuckoo Search Algorithm (CSA) to VRP, the algorithm was initially introduced by X. Yang and Suash Deb (2009), who stated that CSA works well when dealing with multimodal and multi-objective optimization problems. Ouaarab et al. (2014) initially modelled an Improved CS (ICS), mainly adapted to solve the symmetric travelling salesman problem (TSP); local perturbations were introduced as 2-opt and double-bridge moves in their proposed Discrete CSA. Xiao et al. (2018) discussed the patient transportation problem, formulated as a CVRP model, to reduce transport emissions; a 'split' procedure was implemented to simplify the individual's representation, and astute cuckoos were introduced to improve the ICS's searchability. Alssager et al. (2020) developed a hybrid CS with Simulated Annealing (SA) algorithm for the CVRP, consisting of three improvements: the investigation of 12 neighbourhood structures, three selection strategies, and hybridization with the SA algorithm. Therefore, from the above survey, very few papers have discussed both dynamic and synchronization constraints, and no work in the HHC literature has modelled a CSA combined with different local search methods for solving the DVRP. Thus, it is worthwhile to examine the remarkable SAP model of MCSA proposed here for DVRPSTS in HHC.
Problem description
This study proposes a dynamic HHC model in which the complete data of all patients are not available initially. Each patient must be visited within a preferred time window. It is assumed that the number of caregivers, the number of vehicles of each type and the number of care activities to be performed are known before the route plan begins. Meanwhile, new patients' demands constantly emerge over time. A working day is divided into four time slots S = {S1, S2, S3, S4}, as mentioned in Table 8. Newly arriving patients are scheduled into one of these time slots based on their demands and allotted to the vehicles operating in that shift. During the execution of a route, typically some patients have already been visited while a few new patients are waiting to be serviced at any moment of the same working day. Hence, DVRPSTS is decomposed into a set of standard VRPs, one per time slot, which are then solved in order as instances using the metaheuristic MCS algorithm.
Measuring dynamism
The levels of dynamism vary across different types of DVRP. Usually, dynamism is categorized based on the ratio of dynamic requests relative to the total, via (i) the degree of dynamism, (ii) the effective degree of dynamism and (iii) the effective degree of dynamism with time windows. This paper follows the DVRP model of Haitao Xu (2018) and interprets the DVRP as a set of static CVRPs, one per time slot. Therefore, this paper selects the metric degree of dynamism (dod), the ratio of the number of patients known before the route plan starts to the total number of patients:

dod = (no. of known patient nodes) / (total no. of patient nodes)

If dod is 1, all patient demands are known in advance and the problem is entirely static, whereas if dod is 0, no patient demands are known in advance. Hence, this model problem of HHC is a partially dynamic problem. Figure 1 illustrates the working process of the DVRPSTS system in HHC. The red node indicates the HHC depot where all vehicles start and end. Three different capacitated vehicle route paths are shown in Fig. 1. All three heterogeneous vehicles accept the requests of known and unknown patients and visit them within a specific time horizon. Green nodes indicate visited, known patient requests; white nodes indicate unvisited, known patient requests; yellow nodes indicate new, unknown patient requests. Triangular nodes indicate the synchronization of the route paths of two different vehicles. Dash-dotted lines represent completed routes, straight lines represent planned but not yet visited routes, and dotted lines indicate unplanned or newly formed routes. Each time a patient request arises during the route plan, it must be inserted into the appropriate route without violating the vehicles' capacity and time-limit constraints. In real-time situations, however, inserting new patients into a specific time window is a significantly more complicated task. Hence, it is necessary to reschedule the order of visits of the remaining nodes in the route plan after each new node is added. This rescheduling provides a sustainable, balanced workload among the caregivers and lets them reach the HHC depot within their working time horizon. Thus, the action planner of MCSA uses a decision-making technique in which new patients' requests are dynamically assigned to suitable vehicle route paths, as demonstrated in Fig. 2.
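For illustration, the metric can be computed as in the following minimal Python sketch (the instance numbers are invented for the example, not taken from the test instances of this paper):

def degree_of_dynamism(n_known, n_total):
    # dod = known patient nodes / total patient nodes (definition above)
    return n_known / n_total

dod = degree_of_dynamism(n_known=60, n_total=150)
print(dod)   # 0.4: partially dynamic (1.0 = fully static, 0.0 = fully dynamic)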
Problem formulation
In this study, the locations of patients are scattered randomly or semi-clustered. The total travel time is evaluated and given as an input. Consider the various professional and non-professional caregivers working under the HHC organization, such as physicians, nurses, therapists and nurse assistants. This set of caregivers is assigned to vehicles of three different capacities. Every vehicle is allocated a single route plan for visiting a group of pre-assigned patients, and each patient must be visited within their preferred time window. Our idea is to provide a suitable action planner (SAP) for this complex dynamic routing and synchronization within the time slots. The working process of the SAP is explained in Fig. 2. Synchronization of vehicles is performed based on the patients' requirements; as a result, vehicles of different capacities must be coordinated so that synchronized vehicles meet within their respective time slots without overlap. The problem is first formulated as a MILP. The specifications, notations and mathematical formulation of the proposed model are discussed below:
Specifications and notations
The specifications and notations of the DVRP can be formally defined as follows: Graph: Consider an undirected graph G = (N, E), where N = {0, 1, 2, …, n, n+1} is the set of nodes and E is the set of all possible links between two nodes.
Depot: In this graph, nodes 0 and n+1 represent the depot, where routes begin and complete at the same node.
Patient nodes: There are n patients. Each patient's home is represented by one of the nodes 1, 2, …, n. Each node has a demand q for a set of care activities to be performed.
Total time ts_ij: In this problem, ts_ij is the sum of the travel time from node i to node j and the care service time s_i performed at node i. That is, t_ij denotes the travel time from node i to node j and s_i the care service time at node i (i.e., ts_ij = t_ij + s_i).
Synchronized node: A node i requires synchronization of vehicles when its demand for care services involves vehicles of different capacities, which must synchronize with each other within a specific time slot.
Route: Each route must start and end at the depot. It is defined as an ordered sequence of nodes compatible with the vehicle's demand capacity, served within its working time limit Q^wt_i. Dynamic route: A dynamic route is designed for a vehicle type to serve a set of known patients while allowing new patients' requests during the execution of the planned route.
DVRPSTS: The problem deals with three vehicle types (type 1, type 2, type 3) with limited capacities Q = {Q_1, Q_2, Q_3}. Dynamic patient nodes are assigned to the different vehicle capacities so that a patient's demand q_i is compatible with the vehicle capacity Q_i and the services are provided within the vehicle's total working time horizon Q^wt_i. Synchronization of vehicles occurs whenever synchronized visits are needed and must be achieved within the specified time slots. Hence, each vehicle may serve known or unknown patients and perform synchronization visits whenever necessary.
Mathematical formulation
The formulation focuses on multiple objectives: (i) minimizing the total travel time, the idle time and the number of vehicles utilized without exceeding the time window, and (ii) maximizing the number of patient visits during each route plan. The proposed DVRPSTS model is presented as a MILP formulation. The assumptions and notations used to formulate the model are given below for clarification.
Parameters
N = set of patients' homes or nodes = {0, 1, 2, …, n, n+1}
K = set of vehicle types = {type 1-SV, type 2-MV, type 3-LV}
Q_k = set of vehicle capacities = {Q_1, Q_2, Q_3}
m = maximum available number of vehicles of each type = {1, 2, 3, …, n}
N_k = set of patients to be visited by vehicle type k
w^k_i = set of new patients' arrivals with respect to their required demand = {w^1_i, w^2_i, w^3_i}
ts_ij = travel time of caregivers from node i to node j plus the service time taken at node i
d_i = demand required at patient node i
r_i = set of routes available for each shift.
S = {S1, S2, S3, S4} are the time slots within which the synchronized vehicles must meet at the required patient's node.
A standard cuckoo search algorithm
The standard Cuckoo Search Algorithm (CSA) was initially developed by Yang and Deb (2009). It is a metaheuristic algorithm inspired by brood parasitism in cuckoo species and was originally developed to address multimodal functions. CSA can be summarized by three idealized rules: (1) each cuckoo lays one egg in a randomly selected nest, and the egg represents a random solution; (2) towards the end of each iteration, the best nest with a high-quality egg is carried over to the next generation, i.e., the solution with the best fitness is retained; (3) the number of host nests is fixed, and P_a ∈ [0, 1] is the probability that the host bird discovers the cuckoo's egg in its nest. If a host bird discovers the cuckoo's egg, it either throws the egg out or abandons the nest, so that the egg is not incubated further. In CSA this phenomenon is described more simply: a fraction P_a of the current set of solutions is replaced by randomly generated solutions. A new solution X_i^(t+1) is generated from the current solution X_i^t of cuckoo i by performing a Lévy flight, Eq. (i):

X_i^(t+1) = X_i^t + α ⊕ Lévy(λ),   (i)

where α > 0 is the step size, which should be matched to the scale of the problem of interest; α = 1 is the most commonly used value. The significant characteristic of Lévy flights is to intensify the search around a solution while occasionally taking long steps, which reduces the probability of falling into local optima. The Lévy flight is based on a probability density function with a power-law tail, and its step length depends on the value generated by the Lévy flight trajectories. Both the step length and the step size s are randomly drawn from the Lévy distribution, Lévy ~ u = t^(−λ) with 1 < λ ≤ 3.
1: Initially generate the population of host bird nests x_i, i = 1, 2, …, n
2: while (t < MaxGen) or (stop condition) do
3:   Select a host bird nest randomly and improve the solution by generating a new solution using Lévy flights
4:   Evolve smart cuckoos on a small portion (P_c) of the population
5:   Calculate the fitness value (F_i) of the newly generated solution
6:   Randomly select a nest among the n nests (say j) and find its fitness value (F_j)
7:   if (F_i > F_j) then
8:     replace j with the newly generated solution
9:   end if
10:  Abandon a small portion (P_a) of the low-quality nests and replace them with new ones
11:  Hold the optimal solutions (nests with higher-quality solutions)
12:  Sort the solutions and find the current best solution
13: end while
14: Post-process the outcomes

The appeal of this algorithm lies in three key components: (1) a simple selection approach; (2) a small number of parameters; (3) greater diversity in the search space through the Lévy flight strategy, which controls the step length with small and large perturbations so that a balance exists between exploration and exploitation. Hence, the CS algorithm produces promising results and fits a wide range of optimization problems.
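To make the listing concrete, the following Python skeleton is a minimal sketch of the standard CSA loop; the population, fitness function and Lévy-flight move are placeholders to be supplied by the application (routing solutions here), and the parameter names are illustrative assumptions.

import random

def cuckoo_search(population, fitness, levy_move, max_gen, pa=0.25):
    # Minimal standard CSA loop; fitness is maximized, as in steps 7-9 above.
    nests = list(population)
    for _ in range(max_gen):
        i = random.randrange(len(nests))
        candidate = levy_move(nests[i])          # step 3: improve via a Levy flight
        j = random.randrange(len(nests))         # step 6: compare with a random nest
        if fitness(candidate) > fitness(nests[j]):
            nests[j] = candidate                 # steps 7-9: keep the better solution
        nests.sort(key=fitness, reverse=True)
        for k in range(int((1 - pa) * len(nests)), len(nests)):
            nests[k] = levy_move(nests[k])       # step 10: rebuild the worst fraction pa
    return nests[0]                              # steps 11-12: current best solution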
Mutated cuckoo search algorithm for DVRPSTS in HHC
This research proposes a modified swarm-intelligence algorithm inspired by the cuckoo's brood parasitism, called the Mutated Cuckoo Search Algorithm (MCSA), to address the new DVRPSTS variant in HHC. It is mainly based on enhancing the selection process with mutated strategies in each iteration. The key terms of MCSA are given below.
Nest
A nest is an individual in the population. Here, it is assumed that a cuckoo bird lays only one egg in a nest.
Egg
A solution of the algorithm is generally represented as the survival of an egg in a nest.
Host nest initialization
In the proposed model, the individuals are initialized randomly to make the problem more realistic.
Selection of host nest to lay a new cuckoo egg
The classic CSA uses a random selection approach. Real cuckoo birds do not follow this approach: they apply a mimicry strategy, targeting the most similar eggs in pattern, colour and shape to increase their egg's survival. This imitation behaviour motivates the performance of MCSA, which incorporates a more advanced selection strategy based on neighbourhood structures. The selection of the best initial route is performed by applying four neighbourhood structures, namely 2-opt, OR-opt, 3-opt and double bridge, as described in Table 1 (for the double bridge, four arcs that need not be successively adjacent in a route are deleted and re-added to produce a new initial solution). Hence, the selection of the best nest plays a vital role in this algorithm. The initial solutions are generated and sorted, and the current best initial solution is selected. This current best initial solution makes it likely that a high-quality solution will emerge within a relatively small number of generations. Thus, the algorithm relies on its unique mutated strategy to intensify and diversify the search (Fig. 3). The moves of Table 2 are: Crossover: the arc between two adjacent nodes i and j in route one and the arc between two adjacent nodes i' and j' in route two are both removed; next, an arc is inserted connecting i and j' and another linking i' and j (used when the Lévy flight value LF < 2). Inter-route: swap one node of one route with one node of another route.
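As an illustration of two of the four neighbourhood structures, the sketch below applies a 2-opt move and a double-bridge move to a route represented as a Python list of patient nodes; the list representation and random cut points are assumptions made for the example.

import random

def two_opt(route):
    # Reverse the segment between two random positions (2-opt).
    i, j = sorted(random.sample(range(len(route)), 2))
    return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

def double_bridge(route):
    # Delete four arcs (not necessarily adjacent) and reconnect the four
    # resulting segments in a new order; assumes at least four nodes.
    a, b, c = sorted(random.sample(range(1, len(route)), 3))
    return route[:a] + route[b:c] + route[a:b] + route[c:]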
Abandoned nest P_a
When an egg is abandoned, a new one replaces it in the population. The abandoned routes are rebuilt using the inversion mutation process (Fig. 4).
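A minimal sketch of the inversion mutation used to rebuild an abandoned route is given below (for a single route this coincides with a 2-opt segment reversal; the list representation is an assumption):

import random

def inversion_mutation(route):
    # Pick a random sub-sequence of the route and reverse it in place.
    i, j = sorted(random.sample(range(len(route)), 2))
    route[i:j + 1] = route[i:j + 1][::-1]
    return route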
The step
The step length is proportional to the crossover or inter-route move applied to a solution.
Lévy flights
To increase its chances of survival, the cuckoo bird needs to improve its skills. Moving from step to step via Lévy flights, it looks for the best solution at each step without stagnating in a local optimum. Moreover, smart cuckoos are introduced to improve the existing solution. In MCSA, the Lévy flight is performed using Path Relinking Strategies (PRS). Path relinking is a diversification strategy used to explore routes between selected initial solutions. By exploring trajectories that connect high-quality solutions with the original solution, this strategy generates a path in the neighbourhood space that leads to the final solution.
The paths between these two solutions are explored using crossover or inter-route techniques, as shown in Table 2. If this process results in a new best solution, the current best solution is replaced with the new one; otherwise, the current best solution is kept (Fig. 5).
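The following sketch illustrates path relinking in its simplest permutation form: the current solution is walked towards a guiding high-quality solution one position-fixing swap at a time, and the best intermediate solution on the path is kept. In MCSA the elementary swap would be replaced by the crossover or inter-route moves of Table 2; this simplified version is an assumption for illustration.

def path_relink(start, guide, fitness):
    # Walk from `start` towards `guide`, keeping the best solution seen
    # on the path (fitness is maximized here).
    current = list(start)
    best, best_fit = list(current), fitness(current)
    for pos in range(len(guide)):
        if current[pos] != guide[pos]:
            j = current.index(guide[pos])        # locate the guide's node
            current[pos], current[j] = current[j], current[pos]
            f = fitness(current)
            if f > best_fit:
                best, best_fit = list(current), f
    return best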
The above moves are linked with the step length generated by the Lévy flight. Usually, the Lévy flight is generated from a probability density function with a power-law tail; the Cauchy distribution is commonly used for this purpose (Alssager et al. 2020). According to Husselmann and Hawick (2013), random numbers are generated from a Lévy distribution as shown in the algorithm below (Fig. 6).
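A minimal sketch of such a step-length generator is given below: a heavy-tailed value is drawn from a Cauchy distribution by inverse-transform sampling and truncated to a small number of discrete moves. The scaling and truncation values are illustrative assumptions.

import math
import random

def levy_step_moves(scale=1.0, max_moves=5):
    # Standard Cauchy draw via the inverse CDF: x = scale * tan(pi*(u - 0.5)).
    u = random.random()
    step = abs(scale * math.tan(math.pi * (u - 0.5)))
    # Map the (occasionally very large) step to 1..max_moves discrete moves.
    return min(max_moves, 1 + int(step))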
The steps associated with the path relinking strategy are not set from prior knowledge. Instead, they are tuned based on experimental knowledge to identify the most appropriate moves, i.e., those that significantly impact the solutions. Therefore, an experimental investigation of these strategies has been carried out to identify their effectiveness in improving the solution.
Fitness evaluation
MCSA calculates the total travel time of each route and determines whether the route is feasible, i.e., whether it exceeds the overall working time. Every feasible connection between two nodes is denoted as an arc (i, j). The fitness value is calculated as the total travel time of each vehicle. A schematic representation of the DVRPSTS solution using MCSA is displayed in Fig. 7.
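A minimal sketch of this fitness evaluation is given below, assuming a travel-time matrix t, a dictionary s of service times, and a working-time limit; the names and data layout are assumptions for illustration. A capacity check comparing the route's total demand with the vehicle capacity Q_k would be added in the same way.

def route_fitness(route, t, s, depot, work_limit):
    # Total travel-plus-service time of one route; an arc (i, j) costs
    # ts_ij = t[i][j] + s[i]. Returns the time and a feasibility flag.
    total, prev = 0.0, depot
    for node in route:
        total += t[prev][node] + s.get(prev, 0.0)
        prev = node
    total += t[prev][depot] + s.get(prev, 0.0)   # close the route at the depot
    return total, total <= work_limit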
Pseudocode of MCSA
Thus, the new mutated selection process is performed by the P_c cuckoos, which play an important role in controlling the balance between intensification and diversification in the search space. Two main criteria are introduced to avoid infeasible situations: checking the capacity of the vehicles against the patients' demands, and checking the synchronized time slots of the vehicles. As far as the authors are aware, no research has yet been carried out on MCSA for dynamic routing and synchronization in home healthcare.
Experimental study
The metaheuristic algorithm MCSA is coded for DVRPSTS in HHC. The algorithm's performance has been tested and found to be better than that of GA and DCSA. In this numerical experiment, the test instances are run for three different vehicle capacities as given in Table 3. The day is divided into two shifts, each with a time window of 270 min. Since the caregivers work in (half-day) shifts, no lunch break is included in the route plan. Once the shift is over, all vehicles return to their depots. This experimental study carried out several numerical analyses using different test instances.

(Fragment of the MCSA pseudocode: Initialization. (1) Set the initial population size N. (2) Set MaxGen.)
Problem test instances
The algorithm is investigated using randomly generated test instances for three different vehicle capacities: small vehicles (SV), medium vehicles (MV) and large vehicles (LV). Based on real-time situations, the small vehicle type is assumed to serve a greater number of demands than the medium and large vehicle types. The problem test instances are displayed in Table 3.
Experimental analysis
The proposed algorithm is implemented in Python 3.9 on a machine with an Intel 1.33 GHz processor and 8 GB of RAM, running 64-bit Windows 10. Table 4 presents the parameter settings of the experimental setup.
Initially, the performance of MCSA is investigated using randomly generated test instances and compared with two other popular algorithms, GA and DCSA. This process is repeated 10 times, and the best solution is recorded for all three metaheuristic algorithms, as shown in Table 5. Table 5 reveals that MCSA finds the optimal route path, giving the least total travel time in all test instances.
The performance of the proposed algorithm is thus achieved, as shown in Fig. 8. Furthermore, the proposed MCSA is fitted to a unique combinatorial problem in HHC. The insertion of dynamic nodes into each route plan is carried out without violating constraints such as the total working time of caregivers, the synchronization constraints and the demands of patients. The proposed algorithm is also executed in a dynamic environment with synchronization of vehicles, using randomly generated test instances, to further check its efficiency. The results report the number of dynamic nodes included, the total number of nodes visited and the percentage of vehicles utilized in Table 6.
Table 6 shows that the computational results of MCSA outperform GA and DCSA in most of the test instances. Table 7 illustrates the dynamic node acceptance rate for the three metaheuristic algorithms. The experiment found that MCSA incorporates the maximum number of dynamic nodes within the number of vehicles available for each shift. The proposed algorithm performs comparatively well and helps to reduce the number of vehicles required to visit the patients. This would increase the continuum of care and the overall cost-benefit for the HHC organization, as demonstrated in the table below.
Results of the acceptance rate of dynamic nodes
A graphical representation of the inclusion of new nodes is also presented in Fig. 8, which helps to quickly compare the performance of MCSA, GA and DCSA. Figure 9 illustrates MCSA's superior performance for each vehicle type. (In the tables, bold values represent the optimal solutions for each instance.)
Synchronized nodes
The total working time of the caregivers (vehicles) in a day is divided into four time slots. Synchronization of vehicles (Rabeh et al. 2012) is performed at the nodes according to the time slots S = {S1, S2, S3, S4}. A day has a total of 540 min, and each time slot spans 135 min. Almost 20 per cent of the nodes are synchronized, covering both known and new patients, in the problem test instances (Table 3). Synchronization is required for patients based on their demands in a specific time slot. The patient nodes requiring synchronization, which may be served in shift I (i.e., slot 1 or slot 2) or shift II (i.e., slot 3 or slot 4), are listed below (a sketch of the corresponding slot check follows the list): Sync nodes = [5, 9, 11, 14, 25, 38, 40, 43, 51, 58, 60, 63, 65, 74, 77, 79, 81, 92, 96, 101, 104, 121, 129, 146, 154].
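The slot bookkeeping implied above can be sketched as follows: with a 540-minute day split into four 135-minute slots, a visit time is mapped to its slot, and a synchronized visit is accepted only if both vehicles meet the node within the same slot. This is an illustrative check, not the authors' implementation.

SLOT_LEN = 135   # minutes; four slots cover the 540-minute day

def time_slot(minute):
    # Map a time of day in minutes to its slot index (1..4).
    return min(4, minute // SLOT_LEN + 1)

def sync_ok(visit_a, visit_b):
    # Both vehicles must reach the synchronized node in the same slot.
    return time_slot(visit_a) == time_slot(visit_b)

print(time_slot(100), time_slot(300))   # 1 (shift I) and 3 (shift II)
print(sync_ok(120, 130))                # True: both visits fall in slot 1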
Optimal output
The optimal route for the dynamic routing instances is thus achieved by the proposed MCSA with the number of vehicles available in each shift, as shown in Table 10. This paper discussed a novel model of the DVRP with synchronization constraints in HHC. The purpose of this research is to bring insight into vehicle routing problems under a unique dynamic strategy that uses an improved CSA. Due to the complexity of the problem, only a few papers have dealt with metaheuristic algorithms for the DVRP in HHC. To deal with such a complex model of the DVRP with synchronization constraints in HHC, an appropriate metaheuristic algorithm, the Mutated Cuckoo Search Algorithm (MCSA), has been developed. To the authors' knowledge, this is the first application in HHC that handles both dynamic situations and vehicle synchronization. The problem is modelled on practical HHC scenarios, such as the inclusion of new patients and synchronization visits. The proposed new variant, DVRPSTS, strives to minimize total travel time and idle time while maximizing the number of patients on each route. Synchronization of vehicles takes place within their respective time slots. MCSA is composed of a few unique approaches, such as path relinking within the Lévy flights, inversion mutation, and mutated selection strategies. This combination of MCSA with the SAP enables prudent decisions in the planning of each route, especially when handling dynamic nodes.
The experimental study compares the approach with prominent algorithms such as GA and DCSA on randomly generated test instances. The computational results show that the enhanced metaheuristic algorithm produces the optimal outputs for solving the DVRPSTS variant in HHC.
The outcomes of the experimental analysis thus reveal the significance of the unique approaches built into MCSA. The solution quality achieved by the algorithm indicates that the proposed approach is a suitable action planner for dynamic routing problems with synchronization in HHC.
In future, this research can be extended to scheduling along with the dynamic routing plan and synchronization. Further work may also explore stochastic travel and service times of caregivers.
"year": 2021,
"sha1": "471e8e20b5f0e89dec454b3e2fb936dc0595357b",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13198-021-01300-x.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "67e10770326fc2de30207cc47272550cc9152b7c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
On the need for a new generation of coastal change models for the 21st century
The combination of climate change impacts, declining fluvial sediment supply, and heavy human utilization of the coastal zone, arguably the most populated and developed land zone in the world, will very likely lead to massive socio-economic and environmental losses in the coming decades. Effective coastal planning/management strategies that can help circumvent such losses require reliable local scale (<~10 km) projections of coastal change resulting from the integrated effect of climate change driven variations in mean sea level, storm surge, waves, and riverflows. Presently available numerical models are unable to adequately fulfill this need. A new generation of multi-scale, probabilistic coastal change models is urgently needed to comprehensively assess and optimise coastal risk at local scale, enabling risk informed, climate proof adaptation measures that strike a good balance between risk and reward.
Numerical Modelling of Climate Change Driven Coastal Change
At present, climate change impacts on coastal change are commonly estimated via: (a) one-dimensional, physically based, but simple models (e.g. the Bruun Rule 26); (b) highly scale-aggregated models with limited process descriptions (e.g. ASMITA 27, CASCADE 28); and (c) extensive time integration of micro-scale processes using process-based morphodynamic models 29-31. For the convenience of the reader, all coastal change models mentioned above and below are listed by model category in Table 2.
Straightforward applications of simple techniques such as the Bruun rule, while potentially being adequate for first-pass assessments at regional to global scale, are unlikely to produce results that are sufficiently reliable to support local scale coastal management/adaptation decisions with US $ billions at stake 32,33 . Highly aggregated models such as ASMITA and CASCADE essentially drive the models toward a prescribed end-state (i.e. equilibrium condition). Due to the empirically based (usually with data from one or two data rich locations) severe aggregation inherent in these models, they do not provide much insight on processes governing morphological evolution, and their general applicability is also somewhat tenuous. Attempts to date with fully process-based models (e.g. Delft3D, Mike21, CMS) forced with concurrent water level, wave, and riverflow forcing have only been able to produce accurate results for simulation lengths less than about 5 years 5,34-36 . Therefore, it appears that currently available modelling approaches are unable to provide sufficiently reliable predictions of integrated climate change impact on coastal change, and that new models underpinned by 'out-of-the-box' thinking are urgently needed.
As climate change impacts on sandy coasts will manifest themselves at a range of spatio-temporal scales (~10 m to ~100 km and days to centuries; see Table 1), what is ideally required for climate change impact assessments is a multi-scale coastal change model that concurrently simulates the physical processes occurring at different spatio-temporal scales, while also accounting for inter-scale morphodynamics.
Process-based modelling. To simulate coastal hydrodynamics relevant for episodic (ST), medium-term, and long-term (LT) morphodynamics, a process-based multi-scale model needs to incorporate both cross-shore (vertically non-uniform) and longshore (mostly vertically uniform) hydrodynamics. Ideally, therefore, the model would need to be a process-based model capable of simulating nearshore hydrodynamics in at least a quasi-3D fashion. Previous attempts at quasi-3D representation of nearshore coastal hydrodynamics have been successful (e.g. 37), and therefore this is not a major challenge. However, modelling morphological change that may occur under the combined forcing of waves and currents (including riverflow effects at coastal inlets), especially at time scales of more than a few years, still remains a significant challenge 34,38. Although there have been numerous attempts since the early 1990s to overcome this challenge, these have met with only partial success at best 29,38-40. The most recent attempt, which used a combination of parallel computing and wave input reduction techniques, achieved a 30-year morphodynamic simulation with combined wave-tide forcing 41, with a computing time of 5-19 days (depending on the parallel computing/input reduction combination used). While this is a huge improvement over what was possible 10 years ago, achieving a 100-year wave-tide-riverflow forced simulation within a few minutes (or hours) using traditional process-based models still seems very far away.
Perhaps a completely different approach is required to solve this problem. For example, the solution could lie in a novel concept in which morphological change is simulated using a non-gridded technique. In such a model, for instance, a traditional computational grid may still be used to compute time varying quasi-3D nearshore water level, velocity and transport fields, but these quantities would then be spatially and temporally aggregated in areas of interest where potentially mobile morphological features exist (e.g. sand bars, channels, mounds, ebb/flood deltas etc.). These aggregated hydrodynamic forcing fields may subsequently be used, in combination with an appropriate morphodynamic acceleration factor, to rapidly simulate the spatio-temporal evolution of only the morphological features of interest over a few tidal cycles. If successful, such an approach would enable morphodynamic simulations that are much faster, and consequently much longer, than what the present state-of-the-art would allow.
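Purely to make the final step of this concept concrete, the sketch below advances the bed level of a single morphological feature using a tide-averaged change rate derived from the aggregated forcing fields, scaled by a morphodynamic acceleration factor so that a few simulated tidal cycles represent a much longer morphological period. All names and values are illustrative assumptions, not an existing model interface.

import numpy as np

def accelerated_bed_update(bed, dzdt_tide_avg, morfac, n_cycles, tidal_period_s=44700.0):
    # bed: bed elevations over the feature [m]; dzdt_tide_avg: tide-averaged
    # bed change rate from the aggregated forcing [m/s]; morfac: acceleration
    # factor (e.g. 100 cycles of morphological change per computed cycle).
    return bed + morfac * dzdt_tide_avg * n_cycles * tidal_period_s

bed = np.zeros(50)                                   # idealized flat sand bar
dzdt = 1e-9 * np.sin(np.linspace(0, np.pi, 50))      # toy accretion pattern
new_bed = accelerated_bed_update(bed, dzdt, morfac=100, n_cycles=2)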
Reduced complexity modelling. While new ideas and increasing computational power might enable ~100-year-long process-based model simulations in the future, their inherent slowness will probably render this type of model unwieldy for the multiple simulations (~1000) required to derive probabilistic estimates of coastal change; a mandatory requirement of emerging risk-informed coastal zone management/planning frameworks 42,43. An effective approach to circumvent this problem is to develop physics-based, yet simple and fast numerical models known as reduced complexity models. This approach adopts simplified descriptions of fundamental system physics and delivers estimates of system response to forcing. It is a well-grounded and fast approach that lends itself to multiple simulations (thousands of simulations in minutes), enabling probabilistic estimates of system response.
While a few such reduced complexity numerical models have been developed since the turn of the century 44-47, no concerted efforts have yet been made to develop a reduced complexity model that is capable of providing rapid, probabilistic estimates of coastal change resulting from the integrated effect of climate change driven variations in coastal forcing. Such an attempt, which would undoubtedly be a challenging undertaking, could for example follow the basic approach outlined below.
Relevant concepts adopted in existing non-process-based LT coastal evolution models (e.g. 48-55) could be used to develop a reduced complexity model that provides rapid, probabilistic estimates of future LT nearshore morphological change. This model could then be combined with an existing ST reduced complexity model, such as the Probabilistic Coastal Recession (PCR) model 45, which provides probabilistic estimates of contemporary and/or future ST storm erosion volumes. Concepts adopted in existing non-process-based models of ST coastal change (e.g. 56-59) may also be used strategically in the model development, depending on local geomorphic conditions and/or the target coastline indicator (e.g. MSL contour, toe/top of dune, vegetation line).
The main result that can be expected from the application of such an LT/ST integrated, 2D reduced complexity coastal change model is a series of alongshore coastline positions with a range of exceedance probabilities (e.g. 0.9, 0.5, 0.1, 0.01) for every year of the simulation. These probabilistic results could then be combined with, for example, spatial maps of property value to derive economic risk maps.
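As an illustration of how such probabilistic outputs arise, the sketch below runs a toy reduced complexity recession model (a Bruun-type LT trend plus a random ST storm-erosion term, standing in for a real model) 1000 times and summarizes the annual coastline retreat as exceedance percentiles. All model forms and parameter values are assumptions for the example.

import numpy as np

rng = np.random.default_rng(1)
n_sims, n_years = 1000, 80
years = np.arange(n_years)

slr_rate = rng.normal(0.008, 0.002, n_sims)          # uncertain sea-level rise [m/yr]
lt = 50.0 * slr_rate[:, None] * years[None, :]       # Bruun-type LT recession [m]
st = rng.gamma(2.0, 5.0, (n_sims, n_years))          # episodic ST storm erosion [m]
recession = lt + st                                  # retreat per simulation and year

for p in (0.9, 0.5, 0.1, 0.01):                      # exceedance probabilities
    line = np.quantile(recession, 1 - p, axis=0)     # retreat exceeded with prob. p
    print(f"P = {p}: retreat in year {n_years - 1} = {line[-1]:.1f} m")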
It is important to note that the confidence with which any of the above discussed novel modelling approaches can be applied to address real-world situations depends very much on rigorous model validation against field measurements. The non-availability of (or lack of free access to) long term morphological and hydrodynamic data has for decades been a frustrating bottleneck in terms of achieving robust validation of, especially, longer term coastal change models. However, the recent emergence of open source satellite image based global data sets of coastal morphology and topography 24,60-64 and the general worldwide trend towards open source in-situ data (e.g. EMODNET, CEFASWavenet, SISMER, SHOM, Open Earth, DUCK FRF, Narrabeen-Collaroy) represents a step-change in the availability of/access to long term data, greatly improving opportunities for the validation of long term coastal change models.
While the economic damage (consequence) that can be caused by climate change driven coastal change (hazard) can be very high, foregoing land-use opportunities in coastal regions is also costly (opportunity cost), with both sides of the equation depending not only on climate change impacts but also on economic considerations such as future changes in coastal property values and returns on investments in the coastal zone 65. Developing effective policies and strategies for future coastal land-use planning is therefore a delicate balancing act. Quantitative coastal risk assessments are also invaluable to the insurance and re-insurance industries for determining optimal insurance premia, with follow-on effects on coastal property values and subsequently on the value-at-risk 65-70. Projections of future coastal change provided by a new generation of multi-scale, probabilistic coastal change models such as those discussed above will readily support comprehensive coastal risk assessment and optimisation, enabling risk-informed, climate-proof adaptation measures that optimise the balance between risk and reward.
"year": 2020,
"sha1": "e637b32a7666595aa449ed8bd1af482b3c6625e9",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-58376-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e637b32a7666595aa449ed8bd1af482b3c6625e9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
} |
Interactive comment on "The impact of volcanic aerosol on the Northern Hemisphere stratospheric polar vortex: mechanisms and sensitivity to forcing structure"
stratosphere caused by the radiative effects of the volcanic aerosols. In the present paper, the impacts of 4 different aerosol forcings, all estimates of Pinatubo, are studied with a climate model. For each forcing an ensemble of 12 simulations is performed, and the different ensembles are compared to a control ensemble. Large differences are found among the ensembles, and the changes are found to be fragile and sensitive to the details of the forcings. A more robust result is that the temperature gradient in the lower stratosphere is not due only to the direct radiative forcing but, to a large extent, to an increased wave-driven meridional circulation. I find that the topic is interesting and that the paper contributes important results.
Introduction
The Northern Hemisphere (NH) stratospheric winter polar vortex, which shows considerable interannual and intraseasonal variability, has been observed to be stronger than normal in winters after major volcanic eruptions (Kodera, 1995; Labitzke and van Loon, 1989). While this observation is based on a relatively small sample (limited to the winters after the 1963 Agung, 1982 El Chichón and 1991 Pinatubo eruptions), the theoretical argument to explain such a strengthening appears clear: namely, that heating of the lower stratosphere through the absorption of radiation by volcanic sulfate aerosols enhances the equator-to-pole temperature gradient in the lower stratosphere, which, through the thermal wind equation, leads to stronger westerly winds (Robock, 2000 and references therein). Satellite observations clearly show a warming of the tropical lower stratosphere after volcanic eruptions (Labitzke and McCormick, 1992), so changes in meridional temperature gradients and zonal
winds are logical consequences. The degree to which secondary feedback mechanisms, such as changes in ozone or upward propagating planetary waves (e.g. Graf et al., 2007; Stenchikov et al., 2006), also affect the vortex strength is at present unclear.
Post-volcanic strengthening of the NH polar vortex is an important step in a proposed mechanism which explains observed changes in surface climate in post-eruption winters (Kodera, 1994; Perlwitz and Graf, 1995). Specifically, it is thought that through stratosphere-troposphere coupling (Baldwin and Dunkerton, 2001; Gerber et al., 2012) the volcanically induced strong stratospheric polar vortex leads to the observed positive anomalies in surface dynamical indexes such as the Northern Annular Mode (NAM) or North Atlantic Oscillation (NAO) (Christiansen, 2008). Such dynamical changes lead to the "winter warming" pattern of post-volcanic temperature anomalies (Robock and Mao, 1992), which is characterized by warmer temperatures over large regions of the NH continents during winter, opposing the overall cooling impact of the volcanic aerosols on the surface. On the other hand, in a model study, Stenchikov (2002) found that a positive phase of the NAM was also produced in an experiment in which only the tropospheric impact of volcanic aerosols was included, implying that aerosol heating in the lower tropical stratosphere is not necessary to force a positive NAM response. Whatever the mechanisms, observations show that 11 out of 13 eruptions since 1870 were followed by positive wintertime NAO values (Christiansen, 2008): the apparent robustness of this post-eruption dynamical response should allow for enhanced skill in seasonal prediction for winters that follow volcanic eruptions (e.g. Marshall et al., 2009).
While a number of early model simulations reported qualified success in simulating the atmospheric dynamical response to volcanic eruptions (e.g. Graf et al., 1993; Kirchner et al., 1999; Rozanov, 2002; Shindell et al., 2001), assessments of the multi-model ensembles of the Coupled Model Intercomparison Projects (CMIP) 3 and 5 showed no significant winter warming response to prescribed volcanic forcing, nor did they show significant anomalies in post-eruption dynamical quantities in the stratosphere or at the surface (Driscoll et al., 2012; Stenchikov et al., 2006). It has been suggested that in order for a model to respond successfully to volcanic forcing, it should include a reasonably well-resolved stratosphere (Shindell, 2004; Stenchikov et al., 2006). However, analysis of the CMIP5 ensemble revealed no appreciable systematic difference in post-eruption geopotential height anomalies in response to volcanic aerosol forcing between models with or without well-resolved stratospheres (Charlton-Perez et al., 2013).
Most model simulations that incorporate the impact of volcanic eruptions, such as the CMIP3 and 5 historical simulations, do so using prescribed volcanic aerosol fields, which have associated uncertainties. In this study we investigate how the response of the atmosphere to volcanic aerosol forcing depends on the prescribed aerosol forcing used in the simulation. We use one CMIP5 model, the MPI-ESM, and focus on the first winter after Mt. Pinatubo, the period of strongest volcanic forcing within the era of satellite observations. We run ensemble simulations using four different forcing data sets: two observation-based forcing sets, based primarily on different versions of SAGE II aerosol extinction measurements, and two model-constructed aerosol forcing sets. By assessing the model response to these forcing sets, we investigate (1) the mechanisms linking volcanic aerosol heating in the lower tropical stratosphere and NH winter vortex strength, (2) the stratospheric circulation responses to volcanic aerosol forcing that are robustly produced by the model and forcings, and (3) the sensitivity of the vortex response to changes in the space-time structure of the volcanic forcing.
Experiment: observational basis, model and methods
In this section, the hypothesis that volcanic eruptions produce a strengthened NH winter polar vortex is briefly summarized by examining ERA-Interim reanalysis data. Then, the materials and methods of the present modelling study are presented, including the model and volcanic forcing sets used and the design of the ensemble simulations and analysis.
NH polar vortex response to volcanic eruptions in ERA-Interim
The NH polar vortex is highly variable as a result of unforced internal variability and the impact of external forcings such as volcanic eruptions, the 11-year solar cycle, the El Niño-Southern Oscillation (ENSO), and the Quasi-Biennial Oscillation (QBO). Isolating the vortex response to any individual forcing term can be difficult, especially in the case of major volcanic eruptions, for which so few actual events have occurred within the era of satellite measurements.
A common and simple method to isolate the pure volcanic impact on the state of the NH winter stratosphere is to simply average post-eruptive anomalies over winters after recent major volcanic eruptions. This technique is here applied to ERA-Interim reanalysis (Dee et al., 2011) temperature and zonal wind fields after the eruptions of El Chichón (1982) and Pinatubo (1991). Post-eruption winter anomalies are constructed for the two winters after each eruption by differencing post-eruptive December-to-February (DJF) mean fields with fields averaged over 3 (El Chichón) and 5 (Pinatubo) years before the eruption (the shorter reference period for El Chichón results from the fact that the ERA-Interim data set begins in 1979). Differences in post-eruption anomalies for Pinatubo are not strongly dependent on the choice of a 3-, 4- or 5-year reference period.
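For clarity, the differencing procedure reduces to the following numpy sketch, assuming a monthly field indexed as (year, month, ...) with 0-based months and December assigned to the following winter; the array layout is an assumption for illustration.

import numpy as np

def djf_mean(field, winter_year):
    # DJF mean for the winter labelled by its Jan/Feb year:
    # December of the previous year plus January and February.
    return (field[winter_year - 1, 11] + field[winter_year, 0] + field[winter_year, 1]) / 3.0

def post_eruption_anomaly(field, post_winters, ref_winters):
    # Composite post-eruption DJF anomaly relative to a pre-eruption reference.
    post = np.mean([djf_mean(field, y) for y in post_winters], axis=0)
    ref = np.mean([djf_mean(field, y) for y in ref_winters], axis=0)
    return post - ref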
Figure 1 (left) shows ERA-Interim temperature and zonal wind anomalies composited for four winters, the two winters each after Pinatubo and El Chichón. Mean temperature anomalies in the post-volcanic composite show positive anomalies in the tropical lower stratosphere, as would be expected due to aerosol heating. Temperature anomalies also show cooling of the tropical upper stratosphere, cooling of the NH polar lower stratosphere, and warming of the NH polar upper stratosphere. Mean post-volcanic zonal winds show a strengthening of the NH winter polar vortex by ∼6 m s−1. Such a simple average with a small sample size does not completely remove the influences of variability resulting from other forcing terms (notably, the solar cycle was at a maximum at the times of both the El Chichón and Pinatubo eruptions, and El Niño events were observed in the first winters after both eruptions), but the composite temperature and zonal wind anomaly structure is certainly a better approximation of the direct volcanic impact than anomalies in any single post-volcanic year. For comparison, single winter anomalies are shown for the first winter after the Pinatubo eruption. Temperature anomalies for DJF 1991/92 roughly follow the structure of the 4-year composite, albeit with tropical positive anomalies located at higher altitudes and weaker polar lower-stratosphere cooling. Post-Pinatubo zonal wind anomalies in the tropics reflect the state of the QBO at the time, with negative (easterly) wind anomalies in the middle stratosphere (∼50-15 hPa) and positive (westerly) anomalies in the upper stratosphere (15-2 hPa). The polar vortex in the first post-Pinatubo winter was actually not as clearly enhanced as in the 4-year composite, with positive zonal wind anomalies only in the mid to lower polar stratosphere centred at ∼70° N. It is likely that in addition to the volcanic forcing, the vortex in DJF 1991/92 was weakened somewhat due to the influences of the concurrent El Niño and QBO easterly phase. Based on these arguments, it can be hypothesized that the pure response of the stratosphere to Pinatubo aerosol forcing would have the approximate structure of the composite response shown in Fig. 1 (left), albeit with greater amplitude, since the aerosol optical depth and hence the aerosol radiative heating during the first post-Pinatubo winter are the strongest of the years used in the volcanic composite. This "expected" response in the first post-Pinatubo winter is based on a small sample size of observations and an assumption of linear response to the magnitude of volcanic forcing; however, in light of limited evidence, it represents a best first-order, observation-based expectation, consistent with that assumed explicitly or implicitly in prior studies.
MPI-ESM
The MPI-ESM is a full Earth system model, with atmosphere, ocean, carbon cycle, and vegetation components. Major characteristics of the full ESM and its performance in the CMIP5 experiments are described by Giorgetta et al. (2013).
The "low resolution" model configuration (MPI-ESM-LR) is used here, with horizontal resolution of the atmospheric component given by a triangular truncation at 63 wave numbers (T63) and 47 vertical layers extending to 0.01 hPa.Unlike model configurations with higher vertical resolutions, the LR version has no internally generated QBO.CMIP5 historical simulations have previously been performed with the MPI-ESM over the time period 1850-2005.Prescribed external forcings for the historical simulations, including volcanic aerosols as well as greenhouse gases and ozone, follow CMIP5 recommendations, and the responses to these forcings are described by Schmidt et al. (2013).
The MPI-ESM is configured to take volcanic aerosol forcing data in two formats, both of which are used in this study. One format consists of monthly zonal mean values of aerosol extinction, single scattering albedo, and asymmetry factor as a function of time, pressure, and the 30 short wave and long wave spectral bands used by the model. This format is consistent with the observation-based forcing sets introduced in Sect. 2.3. A second format consists of monthly zonal mean aerosol extinction at 0.55 µm and zonal mean effective radius, both as functions of latitude, height, and time. Pre-calculated look-up tables are then used to scale the 0.55 µm extinction to the wavelengths of the model's radiation code based on the effective radius. This methodology has been used to perform MPI-ESM simulations using the forcing time series of Crowley et al. (2008) (e.g. Timmreck et al., 2009) or output from prior runs of the MAECHAM5-HAM coupled aerosol-climate model (e.g. Timmreck et al., 2010), as done in this study and described in Sect. 2.4.
Observation-based aerosol forcing sets
Volcanic sulfate aerosol forcing for the MPI-ESM CMIP5 historical simulations is based on an extended version of the aerosol data set developed by Stenchikov et al. (1998, hereafter S98) on the basis of SAGE II measurements of aerosol extinction at 1.02 µm and estimates of effective radii derived from instruments on the Upper Atmosphere Research Satellite. The data are given on 40 pressure levels and interpolated to the actual hybrid model layers during the simulations. The S98 data set is based primarily on retrievals of aerosol extinction at 1.02 µm from SAGE II, with gaps filled with data from ground-based lidar systems. Together, the S98 forcing set and that of Sato et al. (1993, with updates: http://data.giss.nasa.gov/modelforce/strataer/), both primarily based on SAGE II data, have been used in roughly half of the models that performed CMIP5 historical simulations (Driscoll et al., 2012).
Subsequent updates to the SAGE II retrievals have led to significant changes in the space-time morphology of the estimated aerosol extinction after Pinatubo (Arfeuille et al., 2013; Thomason and Peter, 2006). A new volcanic forcing set (SAGE_4λ; Arfeuille et al., 2013) has been compiled and made available to modelling centres (http://www.pa.op.dlr.de/CCMI/CCMI_SimulationsForcings.html), specifically for use in chemistry climate simulations within the Chemistry Climate Model Intercomparison (CCMI) initiative (Eyring and Lamarque, 2013). Time series of zonal mean aerosol optical depth (AOD) at 1 µm, the wavelength closest to the original SAGE II measurements and thus less impacted by uncertainties in the derived aerosol size distribution, are shown in Fig. 2 over the Pinatubo period for the S98 and SAGE_4λ reconstructions.
It should be noted that even for Pinatubo, the best-observed eruption in history, the observation-based volcanic aerosol forcing sets suffer from significant but mostly unquantified uncertainties. Most notably, gaps in the satellite record result from the sparse sampling of the satellite instruments (Stenchikov et al., 1998) and the fact that large optical depths in the initial months after the Pinatubo eruption reduced atmospheric transmission below detectability (Russell et al., 1996).
Model-based aerosol forcing sets
Two "synthetic" volcanic aerosol forcing sets were constructed based on a 12-member ensemble of simulations of a Pinatubo-like eruption using the MAECHAM5-HAM coupled aerosol-climate model with SO 2 injections of 17 Tg and prescribed climatological sea surface temperatures (Toohey et al., 2011).Figure 3 shows lower stratospheric zonal mean temperature anomalies (at 100 hPa) and zonal wind anomalies (at 50 hPa) for these simulations, in comparison with ERA-Interim post-volcanic anomalies described in Sect.2.1.Most of the MAECHAM5-HAM ensemble members (roughly 9/12) show characteristics of a strengthened polar vortex in the lower stratosphere, as quantified as negative temperature anomalies at polar latitudes and positive zonal wind anomalies between 60 and 80 • N in Fig. 3.However, the ensemble variability of the simulations is pronounced, with three members showing a weakened polar vortex with positive temperature anomalies over the polar cap and negative wind anomalies between 60 and 80 • N. From the full 12-member ensemble, two subsets were defined based on the zonal wind anomalies, with strong and weak vortex composites (hereafter SVC and WVC) defined respectively as the average of the three realizations with the most positive and most negative zonal wind anomalies at 50 hPa and 70 • N. Aerosol properties (aerosol extinction at 0.55 µm and effective radius) for these two composites were collected for use in MPI-ESM simulations.SVC and WVC zonal mean AOD time series, scaled to 1 µm to be consistent with the observation-based AOD time series, are shown in Fig. 2.
Experiments
The forcing sets described above were used to force four 16-member ensemble simulations of the Pinatubo eruption time period. The number of MPI-ESM realizations used here is therefore notably greater than the three MPI-ESM realizations used in prior single-model (Schmidt et al., 2013) or multi-model (Charlton-Perez et al., 2013; Driscoll et al., 2012) investigations of the CMIP5 historical simulations. Ten of the 16 unique initial condition states (at June 1991) were taken from 10 independent, pre-existing CMIP5 historical simulations. In order to increase the ensemble size, six of the historical simulations were restarted in 1980 with a small atmospheric perturbation applied and integrated until 1991. All simulations were therefore forced with the S98 volcanic forcing up until June 1991, at which point the forcing either continued as S98 or switched to one of the other forcings.
A control ensemble (CTL) is based on the 5-year period 1986-1990 of the original 10 historical simulations used to produce the initial conditions, comprising in total 50 years, during which the other external forcings are negligibly different from 1991-1992 conditions. Anomalies for the volcanic ensembles are computed by differencing ensemble mean results with the CTL ensemble mean.
Observations imply that the NH winter vortex response to volcanic forcing may last for 2 years. While considering the results for the first two winters together would certainly increase the ensemble size, it is quite possible that the mechanisms leading to the vortex response are different in the first and second winters after an eruption. The magnitude of the aerosol forcing will differ between the two winters for any forcing set, with typically stronger aerosol forcing occurring in the first winter. Such temporal changes are likely larger than the differences between the different forcing sets used in this study. With these potential complications in mind, and in order to simplify the analysis and interpretation of the model results in this study, we choose to focus our analysis only on the first winter after the Pinatubo eruption.
Unless otherwise stated, results shown are ensemble means. Confidence intervals (95 % level) are calculated for differences between the forced and CTL ensemble means and for differences between the forced ensemble means using a bootstrapping technique, with 1000 resamples of the original ensembles. For example, for each latitude and pressure level, and for each forced ensemble, 1000 bootstrapped sample means are produced by sampling the 16 ensemble member values with replacement, resulting in an approximation of the uncertainty distribution of the mean value. The same process is applied to the CTL ensemble, with n = 50. The difference between the two bootstrapped distributions defines the uncertainty in the mean difference and is used to define the 95 % confidence interval of the ensemble mean difference. A parallel procedure is used to define the confidence intervals for the differences of two volcanically forced ensembles.
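The procedure can be summarized by the following sketch, with 1000 resamples as in the text; the remaining names are illustrative assumptions.

import numpy as np

def bootstrap_mean_diff_ci(forced, control, n_boot=1000, alpha=0.05, seed=0):
    # 95 % confidence interval for the difference of two ensemble means,
    # from resampling each ensemble with replacement (as described above).
    rng = np.random.default_rng(seed)
    f = rng.choice(forced, (n_boot, len(forced)), replace=True).mean(axis=1)
    c = rng.choice(control, (n_boot, len(control)), replace=True).mean(axis=1)
    lo, hi = np.quantile(f - c, [alpha / 2, 1 - alpha / 2])
    return lo, hi   # the anomaly is significant if the interval excludes zero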
Radiative forcing
Latitude-pressure plots of zonal mean DJF 1 µm extinction (EXT) are shown in Fig. 4 for the S98 and SAGE_4λ forcing sets. A major difference between the two forcing sets is the vertical distribution of extinction in the tropics, with SAGE_4λ extinction being more constrained to the lower stratosphere, compared to S98, which has considerable extinction extending down into the upper troposphere. This difference in tropical extinction is the result of improvements in the SAGE_4λ retrieval: the S98 vertical distribution in the tropics is very likely unrealistic (Arfeuille et al., 2013). The two forcing sets also differ in the magnitude of tropical extinction, with SAGE_4λ having stronger extinction in the tropical lower stratosphere for all wavelengths greater than 0.55 µm. Differences in extinction magnitude are also apparent in the high-latitude lower stratosphere of both hemispheres, with SAGE_4λ extinction much smaller than that of S98.
Direct aerosol radiative heating rates (Q_aer) are computed by performing radiative transfer calculations at each model time step twice, once with and once without the volcanic aerosols (as in Stenchikov et al., 1998). We have calculated Q_aer for single realizations of each forced ensemble.
Net (long wave + short wave) Q_aer for DJF for the two observation-based forcing sets is shown in Fig. 4. Q_aer values are positive over most of the stratosphere for both S98 and SAGE_4λ forcings, with the highest magnitude in the tropical lower stratosphere at approximately 30 hPa, just north of the equator. Like the extinction values (Fig. 4, upper row), S98 heating rates are spread over a larger vertical extent than SAGE_4λ heating rates: at the equator, S98 heating rates > 0.1 K day−1 extend upwards from ∼100 hPa, whereas for the SAGE_4λ forcing set, heating rates > 0.1 K day−1 begin at ∼60 hPa. Like the extinction at 1 µm (and longer wavelengths), SAGE_4λ forcing leads to stronger Q_aer, with maximum values of 0.5 K day−1 compared to maximum values of 0.3 K day−1 for S98. Although minor compared to the differences in the tropical stratosphere, there are differences in Q_aer at the NH polar latitudes, with S98 leading to slightly larger Q_aer values in the NH polar lower stratosphere than SAGE_4λ.
Figure 5 shows DJF 1 µm aerosol extinction for the SVC and WVC forcing sets. Compared to the observation-based forcing sets, both model-based forcing sets have greater magnitudes of aerosol extinction, especially at high latitudes. Such differences likely arise from a combination of (1) underestimates in the observation-based forcing sets due to saturation effects, especially in the weeks directly following the eruption, and (2) potential errors in the model simulations, in terms of either general model deficiencies or errors in the model formulation with regard to the actual Pinatubo eruption (e.g. SO2 injection amount or height). The model-based extinctions also have stronger gradients (both vertical and horizontal) across the tropopause compared to the observation-based forcing sets. These differences, especially in the vertical, are likely due to the limited vertical resolution of the observations. The primary difference between the SVC and WVC forcings is the hemispheric partitioning of the aerosol extinction, with WVC having larger extinctions in the NH than SVC. We interpret this difference as a result of wave-driven circulation in the NH: in cases of strong NH wave forcing in the original MAECHAM5-HAM runs, a stronger residual circulation transports more aerosol from the tropical region towards the NH, while also disturbing the NH polar vortex. Therefore, by selecting cases of weak polar vortex, we also select cases of strong northward aerosol transport. Another major difference is the magnitude of extinction at NH high latitudes, and therefore the gradient in extinction around 60° N. The stronger aerosol extinction gradient in the SVC forcing set is obviously a result of the strong vortex in the MAECHAM5-HAM simulations, which inhibits mixing of aerosol into the polar cap. In the tropics, SVC and WVC show very small differences, and their vertical distributions are almost identical.
Q_aer values from the model-based forcing sets, also shown in Fig. 5, are more similar to the observation-based Q_aer values than the extinctions: the differences in high-latitude extinction have relatively little impact on the Q_aer differences due to the much weaker long wave radiation field there than in the tropics. Differences in Q_aer between the two model-based forcing ensembles are relatively small (compared to differences between the two observation-based forcing ensembles), and are characterized primarily by the north-south shift between the two forcing sets, and the gradients in Q_aer at 60°S and 60°N. At high latitudes, Q_aer values are apparently not strongly dependent on the aerosol extinction; for example, SVC has smaller aerosol extinction values in polar latitudes than WVC, but shows larger Q_aer. This is likely due to the fact that the net (absorption minus emission) long wave heating rate is strongly temperature dependent because of the temperature dependence of the emission. As shown below, the SVC ensemble is characterized by a colder polar vortex than WVC, which should lead to less emission and thus larger net heating.
Temperature and zonal wind response
In order to determine which thermal and dynamical responses to the volcanic forcings are robust across the different forcings, results are first examined by concatenating the 64 volcanic simulations into one "grand" ensemble (VOLC), with ensemble mean anomalies of temperature and zonal wind shown in Fig. 6. Significant and robust temperature responses in the volcanic simulations include positive anomalies in the tropical lower stratosphere and negative temperature anomalies in the troposphere, extending to the surface between approximately 70°S and 45°N. In the NH high-latitude stratosphere, the temperature anomaly structure for the VOLC ensemble is roughly consistent with that shown by Schmidt et al. (2013) for three MPI-ESM realizations (using S98 forcing), with positive temperature anomalies in the upper stratosphere and mesosphere. However, while temperature anomalies for the VOLC ensemble are significant throughout most of the troposphere and lower-to-middle stratosphere, anomalies in the NH polar region are generally not significant.
Significant zonal wind responses in the VOLC ensemble include a weakening of the subtropical jets at ∼ 30° and ∼ 100 hPa in both hemispheres by 2-4 m s−1, and a weakening of the SH stratospheric easterlies by 4-6 m s−1. Grand ensemble mean zonal wind anomalies in the NH high-latitude stratosphere reach a maximum of ∼ 2-4 m s−1, and are not significant.
Temperature and zonal wind anomalies are shown separately for the observation- and model-based forcing ensembles in Figs. 7 and 8, respectively. For the observation-based forcings S98 and SAGE_4λ (Fig. 7), the general features are consistent with the VOLC grand ensemble. In agreement with Q_aer of Fig. 4, SAGE_4λ tropical temperature anomalies are greater in magnitude, with peak values of 4.8 K compared to 3.6 K for S98. Temperature anomalies are also shifted in height between the two ensembles, with peak temperature anomalies located at 30 hPa for SAGE_4λ, compared to 50 hPa for S98. Differences in tropical temperature anomalies between the two ensembles (right-most column of Fig. 7) are significant at the 95 % confidence level between approximately 200 and 20 hPa. The SAGE_4λ ensemble shows slightly larger warming in the polar upper stratosphere, although the difference between S98 and SAGE_4λ is not significant.
In the NH high latitudes, the S98 ensemble shows a weak (2-3 m s−1) increase in westerly wind, while the SAGE_4λ ensemble shows a weak (3-4 m s−1) negative wind anomaly in the upper stratosphere/mesosphere. Differences between the two ensembles are not significant in the midlatitudes to high latitudes. For the model-based forcing ensembles SVC and WVC (Fig. 8), temperature anomalies in the tropics and midlatitudes are quite similar in structure between the two ensembles and similar to the grand ensemble mean. In the NH high latitudes, however, the temperature responses are quite different in structure between the two ensembles. The SVC ensemble produces an NH high-latitude temperature anomaly pattern with significant warming in the upper stratosphere and lower mesosphere. WVC, on the other hand, gives a temperature anomaly pattern with insignificant positive temperature anomalies in the polar lower and middle stratosphere. Differences between the two forcing sets are significant in the polar lower and upper stratosphere.
Zonal wind anomalies for the model-based forcing ensembles follow from the temperature anomalies. The SVC ensemble produces a significant strengthening of the NH polar vortex, with peak zonal wind anomalies of 6-8 m s−1. The WVC ensemble produces no significant change in NH vortex winds. The difference between the vortex responses in the SVC and WVC ensembles is significant in the polar lower stratosphere, and in the mid-to-upper stratosphere at the highest latitudes (70-90°N).
DJF zonal mean zonal wind at 60°N and 10 hPa for each realization of each ensemble is shown in Fig. 9. Ensemble means with 95 % confidence intervals are also shown. The mean of the control ensemble is marked by a horizontal dashed line. Consistent with the results discussed above, only the SVC ensemble shows a significant zonal wind response to volcanic forcing, with an ensemble mean 95 % interval that excludes the control mean. In terms of individual ensemble members, 13 of 16 members of the SVC ensemble have zonal mean wind greater than the control mean, although all of the members lie within the natural variability of the control ensemble. Three SVC members have zonal winds weaker than the control; as such, the response to SVC forcing is not totally robust, and the increase in ensemble mean vortex strength appears to represent a change in the probability of weak vs. strong vortex states. The WVC and S98 ensembles show ensemble means and spreads very similar to the control ensemble, with S98 showing a weak and insignificant positive anomaly in wind strength. The SAGE_4λ ensemble shows an ensemble mean equal to that of the control ensemble, but interestingly shows some evidence of a decrease in ensemble variability.
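For concreteness, a percentile bootstrap of the ensemble mean, of the kind that could underlie such confidence intervals, is sketched below in Python; the paper's exact bootstrapping algorithm is not specified here, so this is illustrative only.

import numpy as np

def bootstrap_mean_ci(members, n_boot=10000, alpha=0.05, seed=0):
    # Resample the ensemble members with replacement, take the mean of
    # each resample, and report the percentile interval of the
    # resampled means as the (1 - alpha) confidence interval.
    rng = np.random.default_rng(seed)
    members = np.asarray(members, dtype=float)
    idx = rng.integers(0, members.size, size=(n_boot, members.size))
    boot_means = members[idx].mean(axis=1)
    lo, hi = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return members.mean(), lo, hi

Applied to the 16 DJF-mean zonal winds at 60°N and 10 hPa of one ensemble, this would return the ensemble mean and an interval analogous to the whiskers of Fig. 9.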
Wave-driven circulation response
For all ensembles, we have computed transformed Eulerian mean (TEM) diagnostics (Andrews et al., 1987), including Eliassen-Palm (EP) fluxes, the meridional residual mass circulation stream function (ψ*), the residual vertical velocity (w*), and temperature tendencies due to vertical residual advection. Ensemble means of these quantities have been compared to values from the control ensemble in order to compute post-volcanic anomalies in the first NH winter. As for the temperature and zonal wind anomalies, selected TEM quantities are examined first in terms of the grand VOLC ensemble in Fig. 10.
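For reference, the standard TEM definitions of Andrews et al. (1987), in log-pressure spherical coordinates, are (textbook forms; the particular discretization applied to the model output is not detailed here, and subscripts z and φ denote partial derivatives):

\bar{v}^{*} = \bar{v} - \frac{1}{\rho_{0}}\frac{\partial}{\partial z}\left(\rho_{0}\,\frac{\overline{v'\theta'}}{\bar{\theta}_{z}}\right), \qquad \bar{w}^{*} = \bar{w} + \frac{1}{a\cos\phi}\frac{\partial}{\partial\phi}\left(\cos\phi\,\frac{\overline{v'\theta'}}{\bar{\theta}_{z}}\right),

F^{(\phi)} = \rho_{0}a\cos\phi\left(\bar{u}_{z}\,\frac{\overline{v'\theta'}}{\bar{\theta}_{z}} - \overline{u'v'}\right), \qquad F^{(z)} = \rho_{0}a\cos\phi\left(\left[f - \frac{(\bar{u}\cos\phi)_{\phi}}{a\cos\phi}\right]\frac{\overline{v'\theta'}}{\bar{\theta}_{z}} - \overline{u'w'}\right),

where overbars denote zonal means and primes deviations from them; the EP-flux divergence forces the zonal-mean zonal wind through \nabla\cdot\mathbf{F}/(\rho_{0}a\cos\phi), and the stream function ψ* follows from (v̄*, w̄*) via mass continuity.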
It is well known that the variability of polar vortex strength is largely controlled by planetary wave drag, and therefore depends on the upward wave flux from the troposphere into the stratosphere (Newman et al., 2001; Polvani and Waugh, 2004). The vertical component of EP-flux (F_z) is a commonly used proxy for the amount of wave activity entering and propagating through the stratosphere. Figure 10 shows DJF F_z for the control ensemble and anomalies for the VOLC grand ensemble. Around the tropopause (∼ 100-200 hPa), F_z anomalies in the VOLC ensemble are negative in the midlatitude (30-45°) regions of both hemispheres, and generally positive in the midlatitudes to high latitudes (45-90°N). Positive F_z anomalies are significant throughout the SH stratosphere, while in the NH, significant positive F_z anomalies occur around 60°N and 100 hPa, and extend upwards, slanting equatorward with height.
Convergence of EP-flux (or negative values of EP-flux divergence, EPFD) leads to wave drag, a slowing of the wintertime westerly (eastward) zonal wind, and a poleward residual circulation. Enhanced wave drag in the volcanic simulations is found throughout the SH stratosphere and in the NH midlatitude middle stratosphere (around 30°N, 10 hPa; Fig. 10d). Wave drag in the latter location is especially important for forcing the residual circulation (Shepherd and McLandress, 2011). Figure 10e and f show the CTL stream function and the VOLC anomalies of the meridional residual circulation stream function. The poleward NH residual circulation stream function is found to be enhanced in the VOLC ensemble.
The volcanically induced residual circulation anomalies drive adiabatic heating anomalies where vertical motions are induced. Temperature tendency anomalies due to the residual vertical velocity (hereafter dT_w*; Fig. 10g, h) clearly show the tropical cooling associated with anomalous vertical upwelling, and heating at the midlatitudes and high latitudes due to anomalous downwelling.
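Explicitly, the contribution of the residual vertical velocity to the zonal-mean thermodynamic budget in log-pressure coordinates is

\left(\frac{\partial\bar{T}}{\partial t}\right)_{w^{*}} = -\bar{w}^{*}\left(\frac{\partial\bar{T}}{\partial z} + \frac{\kappa\bar{T}}{H}\right),

with \kappa = R/c_{p} and scale height H; the bracketed static stability is positive in the stratosphere, so anomalous upwelling produces adiabatic cooling and anomalous downwelling produces adiabatic warming, as seen in Fig. 10g and h.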
The terms dT_w* and Q_aer are found to dominate the temperature tendency budget compared to other terms in the TEM diagnostics. Therefore, the temperature anomalies found in the volcanic simulations can be understood to be primarily the result of the direct (diabatic) aerosol heating Q_aer, and the indirect, dynamical (adiabatic) heating dT_w*. These terms, along with the corresponding temperature anomalies, are shown for each of the volcanic ensembles as lower stratosphere (100-20 hPa) averages in Fig. 11.
Ensemble mean temperature anomalies show roughly similar behaviour in the tropics and midlatitudes (0-55°N) for all ensembles. In the NH polar latitudes, weak positive temperature anomalies are simulated for the S98, SAGE_4λ and WVC ensembles, and negative anomalies for the SVC ensemble. Q_aer peaks in the tropics and decays to zero between 30 and 60°N. dT_w* is negative in the tropics, opposing the Q_aer heating, and becomes generally positive in the extratropics where downward advection occurs. In the high latitudes, dT_w* is positive for S98, SAGE_4λ and WVC, but negative for SVC, consistent with the temperature anomalies (Fig. 8). It is thus clear that the differences in temperature anomalies at high latitudes, and therefore the temperature gradients around 60°N, result from differences in dT_w* between the ensembles. The effect of differences in temperature tendencies on temperature anomalies is likely amplified at higher latitudes, since radiative damping timescales increase with latitude in the lower stratosphere (Newman and Rosenfield, 1997). These results underscore the point that temperature gradients at the high latitudes are controlled by the structure of the volcanically induced residual circulation anomalies rather than by the direct aerosol heating.
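The latitude-dependent amplification can be made concrete with a schematic Newtonian-cooling balance (an illustrative scaling, not a diagnostic computed in this study):

\frac{\partial T'}{\partial t} \approx Q' - \frac{T'}{\tau} \quad\Longrightarrow\quad T'_{\mathrm{eq}} \approx \tau\,Q',

where Q' is the net anomalous tendency (here dominated by Q_aer + dT_w*) and \tau the radiative damping timescale: for the same Q', the longer \tau found at higher latitudes in the lower stratosphere yields a larger quasi-equilibrium temperature anomaly.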
Differences in the dynamical responses to the SVC and WVC volcanic forcings are further explored by examining TEM diagnostics for these ensembles. As in the VOLC grand ensemble, both model-based forced ensembles show positive F_z anomalies throughout the SH stratosphere and negative anomalies in the subtropical tropopause region (Fig. 12a, b). Positive F_z anomalies are found in the NH high-latitude lower stratosphere, extending upwards and slanting equatorward with height. However, these positive F_z anomalies are much stronger (and significant only) in the WVC ensemble.
As in the VOLC ensemble, negative EP-flux divergence (wave drag) is significantly enhanced in the SH stratosphere and in the NH midlatitude middle stratosphere in both the SVC and WVC ensembles (Fig. 12d, e). NH EP-flux divergence anomalies are notably stronger in the WVC ensemble between 10 and 100 hPa and 30-60°N; differences between the two ensembles are significant around 100 hPa and 60°N. Residual circulation anomalies (Fig. 12g, h) show significant poleward circulation cells in both hemispheres. Notable differences in the form of the induced residual circulation cells between the two ensembles exist in the lower and middle NH stratosphere, with circulation cells confined to 0-45°N in the SVC ensemble, and extending over 0-90°N in the WVC ensemble. These differences are significant around 100 hPa and 60°N, and as such appear related to the differences in EP-flux divergence discussed above. Temperature tendency anomalies due to the residual vertical velocity (dT_w*; Fig. 12j, k) show that the broad residual circulation anomaly cell in the WVC ensemble leads to dynamical heating of the lower stratosphere extending from ∼ 45 to 90°N, while the narrower poleward cell in SVC leads to significant heating only in the midlatitude (45-60°N) lower stratosphere. Differences in dynamical warming are marginally significant in the polar lower stratosphere, consistent with the differences in EP-flux divergence and residual circulation.
The TEM fields of Fig. 12 thus show that the significant difference in polar vortex zonal wind response to the SVC and WVC volcanic forcings comes about due to differences in the wave activity propagating upwards into the stratosphere from the troposphere. This mechanism is further explored in Fig. 13 for all volcanic ensembles. Figure 13 (left) shows the relationship between polar vortex wind (u at 10 hPa, 60°N; hereafter u_vortex) and lower stratosphere polar temperature (T at 50 hPa, averaged over 60-90°N; hereafter T_vortex). Individual ensemble members are shown by circular markers, while ensemble mean values are shown as triangles on the bottom and right-hand axes. The compact linear relationship between u_vortex and T_vortex apparent in the control run is shifted in the volcanic simulations: for a given T_vortex, the associated u_vortex is typically ∼ 5 m s−1 stronger in the volcanic runs than in CTL. The SVC ensemble mean T_vortex is similar to the CTL ensemble mean, and correspondingly, the SVC u_vortex is larger than the CTL ensemble mean by 5-10 m s−1. In contrast, the other volcanic ensembles all show increases in T_vortex, and correspondingly, u_vortex for these ensembles is less than that of SVC, and similar to the CTL ensemble mean value.
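The compact u_vortex-T_vortex relationship in the control run reflects thermal wind balance, which for the zonal mean in log-pressure coordinates reads

f\,\frac{\partial\bar{u}}{\partial z} = -\frac{R}{aH}\,\frac{\partial\bar{T}}{\partial\phi},

so a colder polar lower stratosphere (a stronger poleward temperature decrease) implies stronger westerly shear and hence a stronger vortex aloft; the offset of the volcanic scatter from the control line then corresponds to a wind change at fixed T_vortex, pointing to an influence beyond polar temperature alone.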
Polar lower stratosphere temperatures are related to upward EP-flux in the midlatitudes (Newman et al., 2001; Polvani and Waugh, 2004). In Fig. 13b, u_vortex is plotted vs. F_z at 100 hPa averaged over 45-75°N (hereafter F_z,100, a common scalar metric for wave activity entering the extratropical stratosphere). Polar vortex strength, u_vortex, is seen to be stronger for any particular F_z,100 value in the volcanic runs compared to the CTL ensemble. For the SVC ensemble, F_z,100 is comparable to the CTL value, and the u_vortex anomaly is 5-10 m s−1. The other ensembles show positive anomalies in F_z,100; the zonal wind of these volcanic ensembles is thus reduced due to the increased wave activity and hence wave drag. Post-volcanic anomalies in F_z,100 thus exert a negative feedback on the wind anomalies driven by changes in the lower stratosphere temperature gradient.
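A scalar metric such as F_z,100 reduces to a weighted latitude-band average of the gridded EP-flux field; a minimal sketch in Python follows, assuming fz_100 holds the zonal-mean vertical EP-flux component at 100 hPa on a latitude grid (variable names are illustrative).

import numpy as np

def fz_band_mean(fz_100, lat, lat_min=45.0, lat_max=75.0):
    # Cos(latitude)-weighted mean of F_z at 100 hPa over a midlatitude
    # band. The band edges are left as arguments since the text quotes
    # 45-75°N while the Fig. 13 caption quotes 40-75°N.
    lat = np.asarray(lat, dtype=float)
    fz_100 = np.asarray(fz_100, dtype=float)
    band = (lat >= lat_min) & (lat <= lat_max)
    weights = np.cos(np.deg2rad(lat[band]))
    return np.average(fz_100[band], weights=weights)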
Discussion
A major result of the preceding sections is that the temperature structure of the high-latitude lower stratosphere in post-volcanic winters is controlled primarily by induced residual circulation anomalies. Post-eruption enhancement of the stratospheric residual circulation (or Brewer-Dobson circulation) has been suggested based on previous model studies (Pitari and Mancini, 2002; Pitari, 1993). Graf et al. (2007) assessed the observational record and found evidence of increased winter stratospheric wave activity after three eruptions (Agung, El Chichón and Pinatubo). Poberaj et al. (2011) showed that anomalously strong EP-fluxes occurred in the SH after Pinatubo. Such increases in winter stratospheric wave activity may be a result of changes in the wave propagation conditions of the stratosphere following aerosol radiative heating, allowing more planetary waves to propagate upwards. Similar arguments explain climate models' predicted increase in the future stratospheric residual circulation due to changes in the atmospheric temperature structure under climate change (Garcia and Randel, 2008; McLandress and Shepherd, 2009; Shepherd and McLandress, 2011), which also enhances the meridional temperature gradient in the lower stratosphere, albeit at lower altitudes.
The residual circulation anomalies induced by the volcanic forcing in the ensembles strengthen the climatological residual circulation, although the structure of the anomalies differs from the climatology. For example, the induced upwelling is centred on the equator whereas the maximum climatological upwelling is centred in the summer hemisphere, and the induced upwelling is strongest above the level of maximum aerosol radiative heating, and even negative below. This latter point may explain why post-Pinatubo anomalies in tropical upwelling are not apparent in observational records, which are usually displayed as time series of upwelling in the lower stratosphere (Seviour et al., 2012). We have shown that increased wave activity and wave drag in both hemispheres is a robust response to volcanic aerosol forcing in the MPI-ESM, and therefore a component of the residual circulation anomalies in the volcanically forced simulations results from wave drag anomalies. However, given that the maximum anomalous tropical upwelling occurs at and above the location of maximum aerosol radiative heating in the tropics, it seems possible that there is also a diabatic component to the anomalous residual circulation, forced directly by the aerosol radiative heating, as suggested by previous studies (e.g. Aquila et al., 2013; Pitari and Mancini, 2002).
The lack of a robust NH polar vortex response to the Pinatubo forcings used here does not necessarily rule out the possibility that other forcings could produce significant vortex responses. Most obviously, the magnitude of the volcanic forcing may be especially important. Toohey et al. (2011) found a significant vortex response to a near-super-eruption volcanic forcing, which likely produced a much stronger direct aerosol radiative heating gradient in the midlatitudes to high latitudes. Similarly, Bittner et al. ("Sensitivity of the Northern Hemisphere winter stratosphere variability to the strength of volcanic eruptions", submitted manuscript) find a significant response of the MPI-ESM NH polar vortex to volcanic forcing representing that of the 1815 eruption of Tambora. It is also important that our simulations do not include possible chemical depletion of stratospheric ozone brought about by the presence of volcanic aerosols, or ozone anomalies due to changes in the residual circulation. Induced changes in the meridional structure of stratospheric ozone can also affect temperature gradients in the lower stratosphere (Muthers et al., 2014), and may be a significant feedback in the response of the polar vortex to volcanic forcing.
While the SAGE_4λ volcanic forcing set is almost certainly a more accurate reconstruction of the true aerosol evolution than S98 in many aspects, e.g. the vertical distribution of aerosol extinction in the tropics, it does not produce the expected increase in NH polar vortex strength in simulations with the MPI-ESM. It actually leads to a somewhat weaker vortex than the S98 forcing, which is especially surprising given that it produces stronger radiative heating in the tropical lower stratosphere than S98.
The model-based SVC forcing was the only forcing to produce a significant vortex strengthening. Differences in the simulated vortex response between the different ensembles imply a sensitivity of the vortex response to the exact structure of the volcanic forcing. These differences between the ensembles, and especially the significant response of the SVC ensemble, should perhaps be taken with a grain of salt: the large variability of wave drag and vortex dynamics necessitates the use of large ensembles in order to negate the impact of variability on the ensemble means. While our ensemble size of 16 is larger than most prior single-model volcanic studies, it may still be insufficient to unambiguously identify significant high-latitude responses to volcanic forcing. Furthermore, given the anomalous heating of the lower stratosphere in the volcanic simulations, it could be the case that insignificant anomalies in the residual circulation lead to significant anomalies in ensemble mean temperature gradients and therefore zonal wind. Nevertheless, planetary wave propagation through the tropopause region has been shown to be quite sensitive to local vertical and meridional gradients in zonal wind and temperature (e.g. Chen and Robinson, 1992). It is plausible that small differences in the structure of the prescribed volcanic aerosol forcing, and the resulting radiative heating, temperature and wind, could have relatively large impacts on wave propagation.
Conclusions
In simulations of the post-Pinatubo eruption period with the MPI-ESM with four different volcanic aerosol forcings, an enhanced polar vortex, which is expected based on limited observations and simple theoretical arguments, was not a robust response. The responses that were significant and robust across all four forcings in the NH winter stratosphere include: (1) positive temperature anomalies in the lower tropical stratosphere, (2) enhanced F_z in the NH midlatitudes (40-60°N) and wave drag in the midlatitude middle stratosphere, (3) an enhanced meridional residual circulation, and (4) dynamical cooling of the tropical lower stratosphere and heating of the midlatitude lower stratosphere.
The lack of a robust polar vortex response to volcanic aerosol forcing in the MPI-ESM simulations of this work is consistent with the multi-model results of the CMIP5 historical experiments (Charlton-Perez et al., 2013; Driscoll et al., 2012). We have shown that the meridional temperature gradient in the extratropical lower stratosphere induced by volcanic aerosol forcing, and therefore the strength of the induced stratospheric winter polar vortex wind anomalies, is controlled primarily by dynamical heating associated with the induced residual circulation rather than by the direct aerosol radiative heating. The vortex response in the model is therefore much less robust than would be expected if it were directly due to the aerosol radiative heating, as it is instead subject to the complexity and variability of wave-mean flow interaction in the winter stratosphere.
The results further imply that the NH polar vortex response is quite sensitive to the specific structure of the volcanic forcing used. Generally, volcanic forcing leads to an overall increase in resolved wave activity entering the midlatitude to high-latitude stratosphere, and the impact of this wave activity tends to weaken the polar vortex, counteracting the impact of low-latitude heating. However, one volcanic forcing set used here, the model-based SVC forcing, did not significantly affect high-latitude wave activity, and thereby produced a strengthened polar vortex, approximately in line with the expected response. We speculate that such sensitivity is due to the role that minor differences in volcanically induced temperature and wind anomalies play in wave-mean flow interactions in the stratosphere. An alternative theory is that differences in volcanic forcing lead to differences in wave generation in the troposphere. In either case, the results imply that, at least for a Pinatubo-magnitude eruption, very accurate reconstructions of volcanic aerosol forcing would be required to reproduce any impact of the aerosols on polar vortex strength in climate model simulations.
Figure 1. ERA-Interim DJF (top) temperature and (bottom) zonal wind anomalies for (left) a post-volcanic composite (n = 4) of the two winters after the eruptions of El Chichón and Pinatubo, and (right) 1991/92, the first winter after the Pinatubo eruption.
Figure 2. Zonal mean aerosol optical depth at 1 µm for the Pinatubo eruption, (left) as reconstructed from observations producing the S98 and SAGE_4λ forcing data sets and (right) based on MAECHAM5-HAM model simulations and composited according to strong (SVC) and weak (WVC) vortex states. Dashed vertical lines demarcate the DJF period of interest.
Figure 3. First post-eruption NH winter (DJF) anomalies of (left) temperature at 100 hPa and (right) zonal wind at 50 hPa for the 12-member MAECHAM5-HAM Pinatubo ensemble of Toohey et al. (2011) (grey lines). Ensemble members constituting the strong and weak vortex composites (SVC and WVC), as defined in the main text, are marked by circles and crosses, respectively. For comparison, ERA-Interim anomalies based on a composite of the two winters after the eruptions of El Chichón and Pinatubo (blue), and the single winter after the Pinatubo eruption (red), are also shown.
Figure 4. (top) Latitude-pressure distributions of 1 µm aerosol extinction in the S98 and SAGE_4λ volcanic aerosol forcing sets in the first DJF after Pinatubo (winter 1991/92). (bottom) Aerosol radiative heating (Q_aer) as computed within the MPI-ESM model for each forcing set. The right-hand column shows S98-SAGE_4λ differences for both 1 µm extinction and Q_aer.

Figure 5. (top) Latitude-pressure distributions of 1 µm aerosol extinction in the SVC and WVC volcanic aerosol forcing sets in the first DJF after Pinatubo (winter 1991/92). (bottom) Aerosol radiative heating (Q_aer) as computed within the MPI-ESM model for each forcing set. The right-hand column shows SVC-WVC differences for both 1 µm extinction and Q_aer.
Figure 6. DJF temperature (top) and zonal wind (bottom) from the CTL ensemble (left) and anomalies for the VOLC grand ensemble (right). Stippling highlights anomalies that are significant at the 95 % confidence level based on a bootstrapping algorithm.
Figure 7. Temperature and zonal wind response of the observation-based volcanically forced ensembles. Shown are DJF anomalies for the (left) S98 and (middle) SAGE_4λ ensembles, and (right) the difference between the two ensembles, for (top) temperature and (bottom) zonal wind. Hatching highlights anomalies that are significant at the 95 % confidence level based on a bootstrapping algorithm.
Figure 8. Temperature and zonal wind response of the model-based volcanically forced ensembles. Shown are DJF anomalies for the (left) SVC and (middle) WVC ensembles, and (right) the difference between the two ensembles, for (top) temperature and (bottom) zonal wind. Hatching highlights anomalies that are significant at the 95 % confidence level based on a bootstrapping algorithm.
Figure 9. DJF zonal wind at 60°N and 10 hPa for the CTL ensemble and the volcanically forced ensembles S98, SAGE_4λ, SVC and WVC. Individual ensemble members are shown as grey symbols. Ensemble means are shown as coloured symbols, with vertical whiskers representing the 95 % confidence interval of the ensemble mean. The dashed horizontal line shows the ensemble mean of the CTL ensemble.
Figure 10. Selected TEM diagnostics for (left) the CTL ensemble and (right) anomalies for the VOLC grand ensemble. Shown are (a, b) the vertical component of EP-flux, (c, d) EP-flux divergence, (e, f) the residual circulation streamfunction and (g, h) heating due to residual vertical advection. Contours in (a) are in units of 1 × 10^4 kg s−2. Streamfunction contours in (e) and (f) are red for positive values and blue for negative values, and are log-spaced, ranging from 10 to 1 × 10^4 kg m−1 s−1 in panel (e) and 1 to 100 kg m−1 s−1 in panel (f).
Figure 12. Selected TEM diagnostics for the SVC and WVC forced ensembles. Shown are DJF anomalies for the (left) SVC and (middle) WVC ensembles, and (right) the difference between the two ensembles, for (a, b, c) the vertical component of EP-flux, (d, e, f) EP-flux divergence, (g, h, i) the residual circulation streamfunction and (j, k, l) heating due to residual vertical advection. Streamfunction contours in panels (g, h, i) are red for positive values and blue for negative values, and are log-spaced, ranging from 1 to 100 kg m−1 s−1.
Figure 13. Scatter plots of vortex wind, temperature and wave forcing for the CTL and four volcanically forced ensembles. All quantities are averaged over the NH winter (DJF). Shown are (left) zonal wind at 10 hPa, 60°N plotted against temperature at 50 hPa averaged over 60-90°N, and (right) zonal wind at 10 hPa, 60°N plotted against the vertical component of EP-flux averaged over 40-75°N. Individual forced ensemble members are shown by coloured circles, and the CTL ensemble members by black crosses. Ensemble mean values of the quantities plotted on the horizontal and vertical axes of each panel are shown by coloured (black) triangles for the forced (CTL) ensembles along the bottom and right-hand axes.
"year": 2014,
"sha1": "3cf2cd4a3c1567ffbd895ae78259b6b6433daa7f",
"oa_license": "CCBY",
"oa_url": "https://acp.copernicus.org/articles/14/13063/2014/acp-14-13063-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "fa3587eb43a4082f3258088cc08f03915c23ba7f",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
106659802 | pes2o/s2orc | v3-fos-license | DAIRYING AND EMPLOYMENT IN THE AMURI: 1983 TO 2002
The dairy industry is currently experiencing a staff shortage, as are many other industries. It has experienced rapid expansion, and the situation is made worse by the poor image of the industry. This expansion has often occurred in areas with little prior dairying. The Amuri region, North Canterbury, is one such area. The conversion of farms to dairying began in I983, following the commissioning of the 17000-hectare Waiau Plains Irrigation Scheme in 1980. There are now 49 herds in the area. Some of the initial dairy farming experiences were very bad, giving all dairying in the region a poor reputation. The Amuri region is 'geographically isolated basin', and situated approximately 90 minutes from Christchurch, with limited social opportunities for the farm staff This resulted in extreme difficulties attracting and retaining good staff in the area. In response to this situation, the dairy farmers in the area formed the Amuri Dairy Employers Group, in March 2000. This group established a constitution including: Mandatory member employer training; Agreeing to an independent annual audit of member employment practices; and Agreeing to a Code of Practice for employment standards. I have undertaken a two-year investigation of the effects of the Amuri Dairy Employers Group, on dairy farming employment and the wider social effects in the Amuri area. A case-study approach has been used to gather the information. This research was conducted as part of a Masters of Commerce (Agriculture).
Introduction
The dairy industry faces a problem attracting and retaining sufficient numbers of adequately skilled staff currently.This state is being experienced by many industries at present and many regions have identified a shortage of skilled labour as one of the major limitations to economic growth and increasing productivity.
The New Zealand dairy farming industry has approximately 14,000 farmers with 3.45 million cows, producing 13 billion litres of milk per year, of which 95% is exported, providing NZ$7.5 billion in exports. This is some ~ percent of world production, and 31 percent of the world production that is traded across borders. New Zealand is the lowest-cost producer of traded milk products. The Fonterra cooperative handles 96 percent of New Zealand production, which is 20 percent of New Zealand's total exports and 7 percent of national Gross Domestic Product (Fonterra, 2002).
Estimates of Future Labour Needs
There are two basic needs in terms of labour in the dairy industry. 1) On a simple numeric basis, far more people are required due to increasing farm numbers and the increasing size of these new farms. Gaul (2000) projected that between 1578 and 3960 extra staff would be required in the South Island by 2005, depending on the efficiency ratios used. 2) Holmes and Cameron (2001) identified the need for higher levels of education among people in the dairy industry. The current percentage of dairy farm workers with degrees is 4%; in the general population it is 8%, and among 'managers' 12%, which is an indication of the lower academic levels among dairy farm workers (as of 1999). They also estimated the need for people with higher education levels as: 8,000 graduates on farm, 20,000 apprentices on farm, 250 agricultural post-graduates as consultants, and 100 PhD graduates as researchers, all needed by 2030. These are the only estimates which exist. No robust projections of staffing requirements for the dairy industry have been found.
The Industry
The dairy industry's staffing situation is made worse by the industry's poor image and its rapid growth. The "urbanization" of society is also playing its part in reducing the potential pool of employees available in rural areas. Much of the industry expansion is occurring in new areas, areas where there has not been previous or recent dairy activity. These areas are not accustomed to dairying and dairy farmers. The social changes associated with a large-scale change to dairy farming are significant. Some of the advantages are the increase in the number of people, which often increases school rolls and increases spending in local shops and businesses. Disadvantages include the increased mobility of dairy farmers. Sharemilkers, for example, are commonly on three-year contracts, moving to a new job in a new area after three years. This "instability" can often cause problems, with reduced opportunities to develop social networks and reduced social involvement. They often foresee only short-term involvement with both the job and the area. The numeric increases in school rolls are often upset by the comings and goings of dairy farm pupils midway through the academic year, at about the 1st of June.
In areas of traditional inter-generational ownership transfer, people changing properties every three years is very short-term.
The research occurred over two seasons of record payouts, reaching $5.33 per kgMS for the 2001/2002 season. The payout for Fonterra suppliers has dropped this season to $3.70/kgMS, which will put significant pressure on farm profitability and in turn on wages for farm workers. Any attempt to lower wages with the drop in profitability is not likely to be well received. There are no actual figures on wage levels, and whether any actual increase occurred is a matter of speculation. Anecdotally, wages have risen markedly in the last two years, which many people believe is linked to profitability.
The dairy farm season runs from 1st June to 31st May the following year, with most employment and land purchase contracts working to these dates. Workload during June and July is at its lowest. The spring period, involving calving, raising calves, moving through into the breeding season and irrigation, is the period of highest workload. The late spring/early summer period is often monotonous, with repetitive jobs coupled with long hours. From Christmas to drying off, the hours steadily decrease and milkings become quicker. Thus employees begin a job at the period of lowest workload, at a time when the opportunity to learn 'normal farm routines' is in short supply. Then suddenly the farm system moves into a period of very high workloads, when the opportunity to train someone, and for an employee to learn steadily, is minimal. For this reason, some farmers are moving to taking on inexperienced staff in February, to give them a more steady and realistic introduction to dairy farming.
Background to the Amuri
The Amuri region is a geographically isolated dry basin, 90 minutes northwest of Christchurch. The Weka Pass separates the basin from any centres of large population. This makes the area seem more isolated. The Waiau Plains Irrigation Scheme had been built in the late 1970s in response to drought, and was initially seen as no more than 'insurance' against drought. The investigative work prior to the construction of the scheme was carried out on the expectation of an intensification of current land uses, i.e. carrying more sheep and growing more reliable crops. The scheme was commissioned in 1980, and little was done differently in terms of farming practices until 1983, when the first farm was converted to dairying. Several people I spoke to from the region said that, following the commissioning of the irrigation scheme, "They had it, and it took them a while to work out what they were going to do with it" (Interview with Mr R. Davison, August 2002).
Interestingly, it was a local sheep farmer who converted his property first. Another interesting 'side-effect' of the irrigation scheme was that, with the farmers undertaking the on-farm irrigation development works required, many of them borrowed to do this, bringing debt to the area. Several of the community members I spoke with suggested that prior to the irrigation developments, there was little debt carried in the area. The effect of this is that in years of low return, where previously just operating costs had to be met, a reduction in personal income could cover the shortfall. Now large debt servicing costs also had to be met. To cover this, many farmers had to adopt more 'robust' farming systems, with increased reliability of returns.
In 1984 "Rogernomics" began.The key policy changes in terms of dairying were, the floating of the New Zealand dollar, the removal of agricultural subsidies, and deregulation of several key agricultural markets.The results were, a sharp decline in agricultural products prices and a massive increase in interest rates.
During the late 1980s Canterbury also faced a severe drought. This put a great deal of financial pressure on many farmers. Dairy product markets, and accordingly milk payouts, were affected but recovered relatively quickly. The development of dairying in the South Island, onto what had been sheep and beef country, followed. In North and Mid-Canterbury dairy farming developed on the light soils, which was made possible through irrigation.
The dairy farms in the Amuri were developed as "milking platforms", with all young stock and replacement stock carried off the farm. This has meant that the Amuri area has always had very high production per cow and per hectare.
For the time, the early developments were often very large dairy farms. This reflected the large blocks of land in the area. Often the management skills and experience to run these operations well were lacking. There were many highly indebted and relatively inexperienced farmers in the Amuri area during the early stages. Poor employment relationships often resulted, with high staff turnover, unhappy staff, and sharemilkers staying no longer than they had to. The Amuri was often seen as a first stop for North Island dairy farmers and sharemilkers moving south. These people had no experience with large herd operations, further worsening the situation.
The result was that the entire region developed a bad reputation as somewhere to live and work. There were examples of farm operations with greater than 100% staff turnover within a season, and they 'achieved' this most seasons. An example of this is a person with whom I worked. He and his partner had been the 34th and 35th employees on one large dairy operation in one season. These were not the majority of employers, but they were the ones whose 'legend spread'. The result was a real problem attracting and retaining good staff in the Amuri area, for all employers. This problem was the motivation behind the initiation of the Amuri Dairy Employers Group (ADEG).
Amuri Dairy Employers Group
The Amuri Dairy Employers Group developed from the concern several dairy employers in the area had about the effect of the 'staff shortage' on their farm performance. This concern came from people, some of whom had been in the Amuri area some time, and others who had just moved into the area. These people found the relative unavailability of staff a real problem for their businesses. These farmers put together a proposal to address the labour problems, and called a public meeting to discuss it. The meeting was held on 2nd May 2000, "to discuss the labour problem". 42 of the 45 farms in the area were represented at the meeting. During the meeting a draft proposal was put forward, including a Constitution and a Code of Practice. These were all up for discussion to reach general agreement. The draft Code of Practice contained:
- An employee had to have an employment contract and job description, and was to be allowed to see the contract for a minimum of 24 hours before having to sign it (this was before the ERA 2000).
- It was up to the employer to ensure that the employee had the required life skills to look after him/herself, e.g. issues like a healthy diet, cooking, budgeting etc.
- A proposed 50 hours per week limit on hours worked by under-18-year-olds.
- Rules on minimum standards regarding accommodation.
- A maximum of 12 consecutive days worked, with at least 2 consecutive days off to follow.
In the original 'discussion' version of the Code of Practice, the 50 hours per week maximum applied to under-20-year-olds. It was lowered to 18 years of age prior to the first public meeting.
Most of these rules were considered acceptable. The main 'bone of contention' was the limit on hours of work by young staff. People who employed young staff and had no problem retaining them, and therefore felt that their practices worked fine, objected to the limitations this rule imposed on their businesses. Two key issues were raised in the 'robust discussion' which surrounded this proposal: the 50 hour per week limit was not workable, as many employers had their staff and 'wage budgets' set for the coming season, and could not make the adjustments required to accommodate 50 hour weeks from the under-18s. The other issue raised was that many of the members were sharemilkers and would have to bear the cost of any increase in accommodation required to handle more staff working fewer hours.
The Constitution presented contained some key points that all members had to agree to: undertaking mandatory annual employer training; displaying the Code of Practice in the workplace; a list of 'contact people' for the staff regarding employment problems; providing employee training; and an emergency labour list.
At the meeting a few of the finer points were discussed at length, and some quite loudly, with some revisions suggested. The general guidelines were accepted by most of those who wished to be involved by the end of May 2000.
The 50 hours per week working limit was removed and replaced with 'up to 50 hours' and an agreement to provide meals each day of the work period for which the 50 hour limit was exceeded. An agreement was reached to 'try and limit hours to 50 hours per week', and for this point to be reviewed later.
The Amuri Dairy Employers Group has the following stated aims:
- To function as a group of high calibre employers who promote the Amuri Dairy Employers Group as such. Staff and employer training are deemed integral to this role.
- To promote the Amuri dairy industry as a positive career choice and an attractive employment option.
- As a secondary function, to offer a dairy employer network providing local area industry support.
From this meeting initial employer training was set up. It began on 12 June 2000; employee training was also started. Over the winter period a life-skills course was run. This covered basic 'well-being' issues including budgeting, cooking, sewing, and eating well. Many of the employees are young single males who have just left home and lack some of these skills. This shows that the ADEG was always looking at issues beyond just staff performance and was focused on staff well-being. Since that time, ADEG has run many courses on what you may consider the basic skills and husbandries related to agriculture, for example pasture scoring, condition scoring, animal health, motorbike safety, chainsaw courses, and a communication course for the senior managers. The employer members should have attended three employment management courses. There have also been many social functions for both employees and employers, some joint and some separate events. The ADEG put on a 'welcome' for new people to the area and a 'farewell' for those leaving. This is seen as an important community activity that was otherwise lacking. In the early stages, members who had agreed to membership and paid their subscription were able to advertise with the Amuri Dairy Employers Group logo (see below); now only those who have passed the audit and paid their sub are able to do so.
Figure 1. Amuri Dairy Employers Group Logo

After some discussion as to how best to go about the 'audit' of employment practices, which was felt necessary to give the group credibility, Investors In People NZ were contracted to perform an audit against the Code of Practice requirements. The development of the ADEG is shown on a timeline in Figure 3.
My Study
The research was carried out following a case study methodology. Several sources of data were used, including official statistics, interviews with ADEG members and employees of members, other background literature, and an anonymous survey of employees.
All current members, 29 and 18 respectively as at winter 2001 and 2002, were interviewed. I also interviewed 20 employees of ADEG members. This was supplemented by an anonymous survey of employees during the winter of 2001, to which I received 20 replies. I also spoke with some 'key' members of the Amuri community, many of whom had been in the area prior to the establishment of the irrigation schemes.
I interviewed ADEG members regarding their perception of the group and its performance against its stated aims, the employee and employer training provided, difficulties they foresaw for the group, and what effects the group had on the Amuri area. In the second-year interviews with the long-term ADEG members, I also asked questions regarding changes in employer practices they had made, the drivers of these changes, their effects, the benefits gained, and a series of questions regarding the employment practice audit.
Another nine members who had joined the group since the first winter were interviewed. I asked them questions regarding what effects the ADEG had on their employment practices, what it was like moving into an area with such a group, what expectations they felt there were regarding employment practices in the area, and what, if anything, could have been done to encourage them to have joined the group earlier.
I spoke with the employees during the early winter of 2002. I tried to gain from them their 'sense of job satisfaction', what issues were important to them in job satisfaction, what effect ADEG has had on their employment conditions, and on the employment conditions generally in the Amuri area.
As there is very little information on employment standards and practices in the dairy industry, the study is very much a qualitative study, and attempts to describe what happened and how it was done. The study cannot be statistically validated. There are no figures against which I can compare statistics gathered from the ADEG.
The Results - Employees
In relation to the employees, the group has had a very positive effect. However, from the employees' perspective the group has had no effect. That is, the employees I spoke with were generally very happy with most aspects of their jobs, and saw almost no correlation between their job satisfaction and employment conditions and the efforts of ADEG. The employees considered that employment conditions in the Amuri area, both within and outside the employers' group, had significantly improved in the last two years. Housing, hours worked, and time off were the primary factors which the employees of ADEG members identified as having improved. The increase in social interaction among farm staff and the increased availability of training in the Amuri were the only two factors which employees themselves identified as a benefit to them resulting directly from ADEG. It was felt by several employees that the group was an "employers group" and was seen as something of an old boys' club, and not particularly friendly towards employees. This seems to stem primarily from the agreed 'complaints procedures' not being followed, and from those activities that were provided by ADEG not being clearly identified. As approximately 32 of the farms in the area are involved with ADEG, it becomes hard to differentiate what is ADEG and what is not. The ADEG provides staff-only discussion groups, to encourage staff participation. The same people who run the general discussion groups, which are open to the entire community, often run these discussion groups. Hence the potential for confusion.
The Employers

Employers identified many benefits in belonging to the ADEG. Some of these are summarized in Tables 1 through 7. In the year between the interviews, the variation in group members' opinions had reduced significantly. Most of the members saw it as important to maintain the group in the future, but suggestions as to how this could best be achieved varied widely. The ADEG committee is currently working on a full business plan for the group. In working through this, hopefully a consensus, or at least a common vision, can be achieved. It was widely believed that the committee had done very well in getting the group up and running. The need to set policies and structures in place to ensure that the group is ongoing and that the enthusiasm is maintained was the primary concern of ADEG members in 2002.
I asked members what effect ADEG has had on their employment practices. Many consider it has had little effect and that their employment practices were above the Code of Practice level. Some others, who again considered that their practices were above the Code of Practice level anyway, felt that they had gained an increased awareness of the important factors in employment relationships. The employer training was found to have been useful to all members, and how useful was related to the employers' levels of experience and previous training in employment matters. In 2001 a major concern of group members was a lack of credibility of the group, given the audit process had yet to be established. However, in 2002, with the audit process established, most members consider the group entirely credible. Some members still held concerns about the employment practices of some of the other members. These members are meeting the minimum requirements and are complying with the Code of Practice, but by many are still considered to be poor employers. The ongoing problems generally stem from issues of communication and organisation.
The results of the questions regarding employers' perceptions of the group's performance against its aims are as follows.
(N/A is recorded for No Answer and Not Applicable.) Do you consider the promotion of the group as good employers has been effective (1 Very Effective through 5 Very Ineffective)? At the time of the first interviews the group had been in operation slightly over a year, yet it was the opinion of most group members that these aims had been met. By the second year, members were unanimous in their support of the group achieving these aims.
Do you consider there are advantages to you advertising for staff as a member of the Amuri Dairy Employers Group (Yes/No)? With just one year of marketing and having the logo, most members in the first year, and all members by the second, considered that there were advantages to them advertising for staff as a member of ADEG.
Could you please give each of the following factors a 1 (very much improved) through 5 (not at all improved) rating? Although not complete, the overwhelming majority of ADEG members felt that the ADEG has had a positive effect on the employment situation in the Amuri.
Do you consider that the Amuri Dairy Employers Group has had a Positive Effect on your Existing Staff? (Table 7, next column).
Not quite unanimous, but again most members felt that, through the training and increased social interaction between staff, ADEG has had a positive effect on their staff. Another point raised by many employers was the 'motivational effect' of their staff seeing the employers providing training for them, and being asked their opinions by people as part of the audit process and as part of this research. It has all added to the staff's sense of value and self-worth.
Discussion
The aspects of employment relationships covered by the ADEG Code of Practice are such that an employer could meet all requirements and their staff could still be unhappy.
The issues of 'basic respect', ensuring that the employees feel motivated, involved and valued within the operation, are far more complex management issues. After all this training, learning and discussion, a wide variation in points of view, motives, and opinions still exists.
There are people who became part of this group for little more than "wanting to be seen as being supportive of the group"; some who saw that their employment practices could be improved; and some who wanted an increased supply or availability of staff for purely economic reasons. Those who joined just to be 'supportive' left the group prior to the audits being carried out, but all the other points of view still exist. Yet much has been achieved. The concern looking to the future is that people who have achieved what they expected are not motivated to push on with the group. It took great passion, effort and belief to get the group started and to where it is. Without this same motivation going forward, this could be a real problem.
It comes down to a matter of expectation. The drive and enthusiasm to get the group started and developed came largely from people whose expectations have not been met. As these people step away from driving the group, the lesser expectations of those taking over may become a limiting factor. Many see maintenance of the group, and if necessary some 'tightening of the rules' to maintain a competitive advantage, as all that is necessary. Others consider that the group should continue to push on and try to raise standards. Finding a workable middle ground is the key to the future of the group. The business plan currently being developed will play a key role in gaining agreement on these issues. It is important to note that the input required of an 'average' (non-committee) member of the group so far has been minimal. Membership has cost them $350. They have had to undergo 3 training courses, attend some of the meetings held, and allow their staff to attend training: not a huge workload or required input, given what has been achieved.

The 'stigma' of employee turnover has been reduced, with people in the Amuri now more willing to talk about employment issues, and not being afraid to lose or replace a staff member if it is not working out. It may sound contradictory to be encouraging turnover, but in some cases, replacing a 'problematic' or unsuitable employee may often result in positive outcomes for the employer and remaining employees. An important issue in this regard was the increased importance members put on 'employee fit' with the other employees and with the position available.
People in the Amuri formerly had a "we'll take who we can get, and make them fit..." approach. The members, as a result of the training in these matters and an increased confidence in being able to attract and then select someone suitable, now formally define the position available and what attributes someone must have to fill it, and wait to get someone who meets those requirements. The occurrences of 'just putting up with someone until the end of the season', or 'until we're through the worst of it', would seem to be significantly reduced.
11 of the 18 members interviewed were sharemilkers, and one was a contract milker. These are 'categories' of employers who tend to consider they have limited options in terms of staffing policies due to their situation, both financially and in terms of having to work with what is provided by the farm owner. Despite 66.7% of members of the ADEG being in this position, much has been achieved. This shows that there are many factors significant to employee satisfaction that are within the realm of an employer's control. For example, the inability to provide a better house or bigger shed need not prevent an employer providing an attractive job to staff.
Conclusions
The Amuri Dairy Employers Group is a valuable organisation in terms of dairy farming employment issues.
The 'core functions' and activities which have resulted in much of the change observed could be replicated without the need for the full group. However, ADEG's success must be attributed to its dairy farmer members, who had a vision of better dairy farm employment relationships. The Amuri Dairy Employers Group has:
- Made employer training readily available to its members
- Set dates and deadlines to do certain things (i.e., review contracts, upgrade the house, etc.), perhaps 'closing the gap' between intent and action
- Provided a forum for discussion about employment issues
- Changed dairy farmers' core employment values in a positive way

With the increased discussion and raised awareness of important employment relationship issues, positive changes have occurred in the core values of members. Many aspects of what ADEG provides have contributed to this: the audit and associated discussion with an employment relationship expert; the thinking triggered by the 'self-assessment form'; open discussion amongst members; and even discussion with me on employment issues have all been factors.

In terms of Matching, all has gone pretty much according to expectation. The unplanned effects related to Opportunity in the HCF have been the support that the ADEG has received from the business community, resulting in referrals from professionals regarding employment, with both land agents and rural lenders considering that the group's existence is positive in relation to clients. The ADEG has also attracted a lot of sponsorship, allowing the provision of much training at little cost to members. The early employer training and initial employment practice audits were conducted at minimal cost to the group, with the organisations keen to become involved with the ADEG.

Notes
1. All ADEG members must volunteer their services (not their employees') for the Emergency Labour List, so that if another member is suddenly short-staffed, they have people to phone to provide short-term cover. Several very early morning phone calls have been made following car crashes involving employees. Previously in this situation, there were very few options available to an employer. This policy reduces both the stress on all parties in the event of an accident and the employers' sense of isolation.
2. Group rules not covered by the current code of practice include:
- A proposed 50 hours per week limit on hours worked by under-18-year-olds
- Rules on minimum standards regarding accommodation
- A maximum of 12 consecutive days worked, with at least 2 consecutive days off to follow

Figure 3. Timeline of the Amuri Dairy Employer Group
Table 1. Promotion of Group as Good Employers
Table 5. Ratings of Improvements Observed in Many Factors due to the Amuri Dairy Employers Group
"year": 1970,
"sha1": "26838040feea920c1383cda0c9e510adadc64e0f",
"oa_license": null,
"oa_url": "https://ojs.victoria.ac.nz/LEW/article/download/1235/1039",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "714edad7beb6cd46b3298c0f6aba1c3e9f22d97e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Tumor-Derived Extracellular Vesicles Predict Clinical Outcomes in Oligometastatic Prostate Cancer and Suppress Antitumor Immunity
Purpose: SABR has demonstrated clinical benefit in oligometastatic prostate cancer. However, the risk of developing new distant metastatic lesions remains high, and only a minority of patients experience durable progression-free response. Therefore, there is a critical need to identify which patients will benefit from SABR alone versus combination SABR and systemic agents. Herein we provide, to our knowledge, the first proof-of-concept of circulating prostate cancer-specific extracellular vesicles (PCEVs) as a noninvasive predictor of outcomes in oligometastatic castration-resistant prostate cancer (omCRPC) treated with SABR. Methods and Materials: We analyzed the levels and kinetics of PCEVs in the peripheral blood of 79 patients with omCRPC at baseline and days 1, 7, and 14 after SABR using nanoscale flow cytometry and compared them with baseline values from cohorts with localized and widely metastatic prostate cancer. The association of omCRPC PCEV levels with oncological outcomes was determined with Cox regression models. Results: Levels of PCEVs were highest in mCRPC, followed by omCRPC, and were lowest in localized prostate cancer. High PCEV levels at baseline predicted a shorter median time to distant recurrence (3.5 vs 6.6 months; P = .0087). After SABR, PCEV levels peaked on day 7, and median overall survival was significantly longer in patients with elevated PCEV levels (32.7 vs 27.6 months; P = .003). This suggests that pretreatment PCEV levels reflect tumor burden, whereas early changes in PCEV levels after treatment predict response to SABR. In contrast, radiomic features of 11C-choline positron emission tomography and computed tomography before and after SABR were not predictive of clinical outcomes. Interestingly, PCEV levels and peripheral tumor-reactive CD8 T cells (TTR; CD8+ CD11a-high) were correlated. Conclusions: This original study demonstrates that circulating PCEVs can serve as prognostic and predictive markers of response to SABR to identify patients with “true” omCRPC. In addition, it provides novel insights into the global crosstalk, mediated by PCEVs, between tumors and immune cells that leads to systemic suppression of immunity against CRPC. This work lays the foundation for future studies to investigate the underpinnings of metastatic progression and provide new therapeutic targets (eg, PCEVs) to improve SABR efficacy and clinical outcomes in treatment-resistant CRPC.
Introduction
Patients with metastatic castration-resistant prostate cancer (mCRPC) who progress after chemotherapy and next-generation antiandrogen therapy experience a meager median survival of 13.6 months. 1 A subset of these patients with oligometastatic disease, usually defined as ≤5 lesions, are ideal candidates for potentially curative SABR. 2 Indeed, 2 recent phase 2 randomized trials in oligometastatic castration-sensitive prostate cancer (CSPC) showed that SABR prolongs progression-free survival with minimal toxicity. 3,4 However, distant failure after SABR remains the primary manifestation of disease progression. In a separate phase 2 trial in oligometastatic CRPC (≤3 lesions) identified with 11C-choline positron emission tomography (PET) and computed tomography (CT), SABR was very effective for local control (75% at 2 years). 5 Unfortunately, the median time to distant recurrence was 5.1 months, and 19% of patients experienced distant recurrence within 3 months. This suggests that advanced PET imaging may not be sufficiently sensitive. Thus, a combination of PET imaging and minimally invasive biomarkers could improve the selection of truly oligometastatic CRPC. 6

Part of the benefit of SABR stems from the induction of a systemic antitumor immune response. 5,7-9 Peripheral expansion of clonotypic and tumor-reactive CD8 T cells is a prerequisite for local antitumor response and abscopal effect. 5,10,11 Thus, there is a critical need to understand the systemic suppression of anti-prostate cancer immunity, particularly in patients treated with SABR. This will provide a rationale to develop effective combination therapies, as currently tested in melanoma, renal cell carcinoma, and lung cancer. 12,13

Extracellular vesicles (EVs) are emerging as promising liquid biomarkers for cancer diagnosis and prognosis and prediction of treatment response. 14 EVs are nanosized vesicles released by all cell types, including tumor cells. They contain surface molecules and cargo from donor cells and travel in body fluids including blood and urine. The clinical utility of EVs in the management of prostate cancer has been an active area of investigation. 15,16 Markers that reliably identify prostate cancer-derived EVs (PCEVs) are needed, and prostate cancer-specific markers such as prostate-specific membrane antigen (PSMA) and STEAP1 are strong potential candidates. 17 Importantly, clinical data investigating a role for EVs in patient selection and response monitoring have not been documented. Given the observation that clinical interventions affect EV subpopulations in a variable manner, it is essential to identify and validate these markers as they relate to tumor burden, treatment response, and antitumor immunity. 18

Herein, we defined and examined baseline and post-SABR PCEV levels in a cohort of patients with omCRPC treated with SABR. We evaluated their performance as markers of disease burden and predictors of risk of distant failure in response to SABR. Finally, we investigated the association of PCEVs with peripheral CD8 T cells to gain novel biological insights on the crosstalk between tumor and immune cells in response to radiation therapy.
Patient cohort
A total of 89 patients with oligometastatic castration-refractory prostate cancer were identified with 11 C-choline PET/CT and treated with SABR at Mayo Clinic between August 2016 and December 2019. Patients with ≤3 extracranial lesions, testosterone levels <50 ng/dL on androgen deprivation therapy, an Eastern Cooperative Oncology Group performance status score of 0 to 2, and >6 months of life expectancy were eligible. According to Guckenberger and colleagues' classification of oligometastatic disease, 19 40 patients (50.6%) experienced repeat oligoprogression, whereas 35 patients (44.3%) experienced metachronous oligoprogression and 4 patients (5.1%) experienced induced oligoprogression. Study approval was granted by the Mayo Clinic Institutional Review Board (IRB #16-000785). The study was conducted in accordance with the Declaration of Helsinki, and written informed consent was obtained from all participants before enrollment. Whole blood was collected at baseline and at 3 time points after SABR (days 1, 7, and 14). In all, 79 of 89 patients provided blood at baseline. Of those 79 patients, 66 patients gave samples at baseline and day 7 after SABR. A total of 53 patients provided blood for all time points. A separate cohort of 40 patients with widely metastatic CRPC (mCRPC) (ie, patients with >3 metastatic lesions) detected by conventional CT and/or bone scan and with a prostate-specific antigen (PSA) level greater than 2.0 ng/mL was used to compare PCEV levels between patients with omCRPC and mCRPC (IRB #21-004451). Blood was also collected in a cohort of 22 men undergoing radical prostatectomy (RP) for localized prostate cancer. All men presented with undetectable PSA levels at the time of blood collection (within 4 months after RP) (IRB #19-011292).
Labeling of prostate cancer-derived extracellular vesicles (PCEVs)
The PCEVs were labeled using the following monoclonal antibodies: Alexa Fluor 647-conjugated PSMA (3E7 clone; Creative Biolabs) and Alexa Fluor 488-conjugated STEAP1 (SMC1 clone, Mayo Clinic Antibody Hybridoma Core) antibodies. The PSMA and STEAP1 antibodies were conjugated using protein labeling kits (A10235 and A20173, Thermo Fisher Scientific). The degree of antibody labeling (DOL) was measured using a NanoDrop One C spectrophotometer (Thermo Fisher Scientific). The DOL for the PSMA and STEAP1 antibodies was 3.2 and 3.6, respectively. Antibody sensitivity and specificity were validated in vitro using the human prostate cancer LNCaP (PSMA+ STEAP1+), PC3-flu (PSMA− STEAP1−), and PC3-PIP (PSMA+ STEAP1−) cell lines. PC3-flu and PC3-PIP cell lines were kindly provided by Dr Xinning Wang (Case Comprehensive Cancer Center, Cleveland, Ohio). PSMA and STEAP1 protein expression in the cell lines was validated by Western blot (Fig. E1A, E1B). Cell lines were cultured in serum-free media for 24 hours, and the conditioned medium was collected and concentrated using ultrafiltration (Amicon Ultra-15 100 kDa, Millipore). PSMA+ and STEAP1+ EVs were analyzed by nanoscale flow cytometry (Fig. E1C, E1D). Optimal antibody concentrations were determined by titration using 3 plasma samples with detectable levels of PSMA+ and STEAP1+ EVs. To confirm the specificity of the PSMA and STEAP1 antibodies, plasma samples were treated with detergent (0.1% sodium dodecyl sulfate) for 30 minutes on ice, followed by antibody staining (Fig. E2). Loss of >90% of fluorescent events in the presence of detergent would confirm that the PSMA and STEAP1 antibodies recognized intact EVs and did not form false-positive aggregates.
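For readers reproducing the DOL measurement, the value follows from the Beer-Lambert law applied to the NanoDrop absorbance readings. Below is a minimal sketch of that arithmetic; the extinction coefficients and A280 correction factors are nominal catalog values for the Alexa Fluor dyes (they are not stated in this paper and should be checked against the dye lot), and the example absorbances are invented for illustration.

```python
def degree_of_labeling(a280, a_dye, eps_dye, cf280, eps_protein=203000):
    """Moles of dye per mole of antibody (Beer-Lambert, 1 cm path).

    a280        : absorbance at 280 nm
    a_dye       : absorbance at the dye's excitation maximum
    eps_dye     : dye extinction coefficient (/M/cm)
    cf280       : dye's contribution factor at 280 nm
    eps_protein : IgG extinction coefficient at 280 nm (~203,000 /M/cm)
    """
    protein_molar = (a280 - cf280 * a_dye) / eps_protein  # dye-corrected A280
    dye_molar = a_dye / eps_dye
    return dye_molar / protein_molar

# Nominal dye constants (assumed, not taken from the paper):
#   Alexa Fluor 647: eps ~239,000 /M/cm (650 nm), CF280 ~0.03
#   Alexa Fluor 488: eps ~71,000  /M/cm (494 nm), CF280 ~0.11
print(round(degree_of_labeling(0.50, 0.45, 239000, 0.03), 2))
```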
Nanoscale flow cytometry
Platelet-poor plasma samples were thawed at 37°C for 1 minute and centrifuged at 13,000 × g for 5 minutes at room temperature. Platelet-poor plasma was diluted in sterile 0.22-μm-filtered phosphate-buffered saline and incubated for 30 minutes at room temperature with the fluorescently labeled PSMA and STEAP1 antibodies. Labeled samples were further diluted in sterile phosphate-buffered saline before analysis by nanoscale flow cytometry. Each plasma sample was analyzed on an Apogee A60-Micro Plus (Apogee FlowSystems Inc, Northwood, United Kingdom) equipped with 3 excitation lasers (405, 488, and 638 nm) and 9 detectors. Side scatter (large-angle light scatter) was used as the trigger at the 405-nm laser wavelength. Particle detection by the A60-Micro Plus was calibrated with Rosetta calibration beads according to the manufacturer's instructions (Exometry Inc). Before each run, a blank sample of Dulbecco's phosphate-buffered saline was analyzed to ensure a count rate <100 events per second. For each sample run, the event rate was kept below 7700 events per second to avoid a swarm effect. 20 Each sample was run in 3 technical replicates for 1 minute, and the coefficient of variation was kept below 15%. Data analysis was performed using FlowJo, version 10.6.1 (Tree Star, Palo Alto, California). The number of detected events, sample dilution, flow rate, and acquisition time were used to determine particle concentration (EVs per milliliter). For a detailed description of flow cytometer specifications and preanalytical and analytical procedures, please refer to the MIFlowCyt-EV report in Appendix E1 (Supplementary Materials).
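The concentration calculation described above reduces to dividing the gated event count by the analyzed volume and scaling by the dilution. A minimal sketch of that arithmetic, together with the replicate acceptance check (variable names and example numbers are ours, not from the instrument software):

```python
import statistics

def ev_concentration(events, flow_rate_ul_min, acq_time_min, dilution):
    """Particle concentration (EVs/mL) from a gated event count."""
    analyzed_volume_ml = flow_rate_ul_min * acq_time_min / 1000.0  # uL -> mL
    return events / analyzed_volume_ml * dilution

def replicate_cv(values):
    """Coefficient of variation (%) across technical replicates."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Three 1-minute technical replicates of one plasma sample (invented numbers)
runs = [ev_concentration(n, flow_rate_ul_min=3.0, acq_time_min=1.0,
                         dilution=1000) for n in (4210, 4385, 4120)]
assert replicate_cv(runs) < 15.0  # the acceptance threshold used here
```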
PET/CT imaging data analysis

11C-choline PET and CT were performed for all patients. Quantitative image features extracted from PET/CT images included the region of interest (ROI) volume, ROIMaxHU, ROIMeanHU, ROIMaxSUV, ROIMeanSUV, TotalVolume, and TotalGlycolysis (HU = Hounsfield unit; SUV = standardized uptake value). These 7 features were chosen because they are likely to correlate with the tumor or metastasis burden, and they can be divided into 2 groups: those based on the ROI selected by the algorithm described below, and those based on the sum over the total treatment volume.
To select the ROI, baseline PET/CT scans were first registered rigidly to the planning CT for each treated site, and the clinical target volume (CTV) high volume (defined by the treating physician) was copied to PET/CT. For patients with multiple lesions treated, the maximum SUV of each lesion was compared, and the volume with the highest SUV was selected as the ROI. The volume of the ROI is reported as ROIVolume.
To normalize the interpatient variation of the PET SUV and CT HU, a slice of the descending aorta (DA) was contoured by a nuclear medicine physician, and the mean SUV from PET (DAMeanSUV) and the mean HU from CT (DAMeanHU) were calculated. The ROI-based values (ROIMaxHU, ROIMeanHU, ROIMaxSUV, and ROIMeanSUV) were normalized by dividing by the DAMeanHU and the DAMeanSUV, respectively.
TotalVolume and TotalGlycolysis were not based on a selected ROI but on the sum of all treated sites. TotalVolume is the sum of all volumes treated. TotalGlycolysis is the sum of the product of the mean SUV and the treated volume of each site.
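To make the preceding feature definitions concrete, the sketch below normalizes the ROI-based features by the descending-aorta references and sums the two volume-based features over the treated sites; the dictionary keys are our own naming, not the study pipeline's.

```python
def normalize_roi(roi, da_mean_suv, da_mean_hu):
    """Divide ROI-based PET/CT features by the descending-aorta means."""
    return {
        "ROIMaxSUV": roi["ROIMaxSUV"] / da_mean_suv,
        "ROIMeanSUV": roi["ROIMeanSUV"] / da_mean_suv,
        "ROIMaxHU": roi["ROIMaxHU"] / da_mean_hu,
        "ROIMeanHU": roi["ROIMeanHU"] / da_mean_hu,
    }

def total_volume(sites):
    """Sum of all treated volumes."""
    return sum(s["volume"] for s in sites)

def total_glycolysis(sites):
    """Sum over treated sites of (mean SUV x treated volume)."""
    return sum(s["mean_suv"] * s["volume"] for s in sites)
```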
Threshold optimization
To identify optimal cut-point values for each variable, we used a bioinformatic method adapted from the X-tile method. 21 The optimized cut value of each quantitative feature was obtained by minimizing the P value of a log-rank test with the exhaustive search method.
To remove the effects of outliers, the 80th and 20th percentiles were used as the maximum and minimum of the search range for each feature, which was then divided linearly into 50 intervals (steps). Starting from the minimum, the cut value was incremented step by step. At each step, the data set was divided by the cut value into 2 groups, and the log-rank test P value for the 2 groups was calculated if the ratio of the group sizes was between 0.33 and 3.0. This constraint prevented extreme splitting of the data set. The cut value with the minimum P value was taken as the optimized cut value of the feature. To check the stability of the cut, the optimized cut value was shifted by 3% (either in the positive or negative direction), and the P value was recalculated with the shifted cut value. The Python lifelines module was used for the log-rank and Kaplan-Meier tests.
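A compact re-implementation of this exhaustive search, assuming the lifelines package mentioned above; the 3% stability shift is omitted for brevity.

```python
import numpy as np
from lifelines.statistics import logrank_test

def optimal_cutpoint(values, durations, events, n_steps=50):
    """Scan 50 linear steps between the 20th and 80th percentiles and keep
    the cut minimizing the log-rank P value, subject to the group-size
    ratio constraint (0.33-3.0) that prevents extreme splits."""
    values = np.asarray(values, dtype=float)
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=bool)
    lo, hi = np.percentile(values, [20, 80])
    best_cut, best_p = None, 1.0
    for cut in np.linspace(lo, hi, n_steps):
        high = values > cut
        n_hi, n_lo = int(high.sum()), int((~high).sum())
        if n_hi == 0 or n_lo == 0 or not 0.33 <= n_hi / n_lo <= 3.0:
            continue  # skip splits outside the allowed size ratio
        res = logrank_test(durations[high], durations[~high],
                           event_observed_A=events[high],
                           event_observed_B=events[~high])
        if res.p_value < best_p:
            best_cut, best_p = cut, res.p_value
    return best_cut, best_p
```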
Statistical analysis
Progression of PSA, distant failure, and overall survival were used as clinical endpoints to determine the association of PCEV levels with oncological outcomes. Kaplan-Meier estimates were used to estimate survival curves. For each Kaplan-Meier plot, P values were derived from the log-rank test for differences between groups. The hazard ratio (HR) of each biomarker was calculated with a univariate Cox proportional hazards model with different survival targets (PSA progression, distant recurrence, and overall survival). To minimize the effect of outliers, the biomarker values were capped by 2 times the 95th percentile value and standardized with the StandardScaler algorithm in the Python scikit-learn package. The association of PCEV levels with clinical features was determined using 2-sided Mann-Whitney U tests. Linear regression analysis of PCEV levels with imaging features or levels of peripheral CD8 T cells was used to determine Spearman r values and associated P values.
For correlative studies, PCEV levels were treated as continuous variables. For association with clinical features and oncological outcomes, PCEV levels were converted to categorical variables and used to classify patients as high and low. Prism, version 9.0.1 (GraphPad Software), Python SciPy, and Python scikit-learn packages were used for all statistical analyses.
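A minimal sketch of the univariate Cox analysis as described, using lifelines and scikit-learn; the column names are placeholders of our choosing.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.preprocessing import StandardScaler

def univariate_cox(marker, durations, events):
    """Univariate Cox PH model with outlier capping at 2x the 95th
    percentile followed by z-scoring, per the analysis described above."""
    x = np.asarray(marker, dtype=float)
    x = np.minimum(x, 2.0 * np.percentile(x, 95))  # cap outliers
    x = StandardScaler().fit_transform(x.reshape(-1, 1)).ravel()
    df = pd.DataFrame({"marker": x, "time": durations, "event": events})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    hr = float(cph.hazard_ratios_["marker"])
    p = float(cph.summary.loc["marker", "p"])
    return hr, p
```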
Association of PCEV levels with tumor burden in oligometastatic and metastatic prostate cancer
To enumerate PCEVs directly from patient plasma, we used nanoscale flow cytometry and antibodies against the 2 well-known prostate markers PSMA and STEAP1 (Fig. 1A). We compared the levels of PSMA + EVs and STEAP1 + EVs in the blood of patients with localized prostate cancer after radical prostatectomy (post-RP), patients with omCRPC, and patients with mCRPC ( Fig. 1B-C). In the post-RP cohort, all men presented with an undetectable PSA level at the time of blood collection (≤4 months post-RP). The matched cohort of patients with mCRPC comprised heavily treated patients with widespread metastatic lesions identified by conventional CT and/or bone scan. This cohort had significantly higher levels of PSMA + EVs compared with the omCRPC and post-RP cohorts (median, 24
In the omCRPC cohort, 67.1% of patients had 1 extracranial metastatic lesion, 25.3% had 2 lesions, and 7.6% had 3 lesions detected by 11C-choline PET/CT imaging (Table E1). We analyzed the relationship between baseline PCEV levels and several PET/CT imaging parameters (Table E2). No significant association was observed between levels of STEAP1 + EVs and tumor volume or characteristics within this cohort. Higher levels of PSMA + EVs were associated with increased ROI Max HU (CT) (P = .03), total volume (P = .09), and total glycolysis (P = .07). No correlation was found between PCEV levels and imaging features (Table E3). No association between PCEV levels and the number of metastatic lesions detected on 11 C-choline PET/CT imaging was observed (Fig. 2C).
Baseline PCEVs and risk of distant failure in omCRPC treated with SABR
Although SABR improves outcomes in oligometastatic prostate cancer, distant failure remains the major source of disease progression. 3,5 Some patients with limited metastatic burden (≤3-5 metastatic lesions) benefit significantly from SABR, but the identification of these patients remains challenging. To identify potential predictors of distant failure, we performed Cox univariate analysis with baseline clinical factors including PSA levels and PET imaging characteristics. No significant association was observed between clinical factors and risk for distant failure (Fig. 3A).
In contrast, a significant association was observed between PCEV levels at baseline and risk for distant recurrence. Both higher baseline PSMA + EVs (HR, 1.35; 95% CI, 1.03-1.76; P = .03) and STEAP1 + EVs (HR, 1.43; 95% CI, 1.09-1.86; P = .01) predicted a higher risk of distant recurrence. Furthermore, high baseline PSMA + EVs and STEAP1 + EVs were associated with a shorter time to distant recurrence (Fig. 3B). The median times to distant recurrence were 6.6 months and 3.5 months for patients with low and high levels of PSMA + EVs, respectively (P = .009). At 6 months, distant failure occurred in 19.5% of patients with low PSMA + EVs and in 70.4% of patients with high PSMA + EVs. Similarly, the median times to distant recurrence were 5.7 months and 4.2 months for patients with low and high STEAP1 + EVs, respectively (P = .022). The risk of distant failure at 6 months after SABR was 66.6% among patients with high STEAP1 + EVs, compared with 50% among patients with low STEAP1 + EVs. No association was observed between baseline PCEV levels and PSA progression or overall survival (Fig. E3). Our data posit PCEVs as a potential independent prognostic factor for risk of distant recurrence in omCRPC treated with SABR. The lack of prognostic value of PET imaging characteristics suggests that PET imaging may underestimate the true (micro)metastatic burden of omCRPC, and undetected metastases may continue to grow, causing distant failure. Patients with PET-identified omCRPC presenting high PCEV concentrations may have a higher tumor burden than anticipated with PET imaging, and they can be more at risk of distant recurrence.
Association of PCEV levels and immunologic changes
Preexisting antitumor immunity and expansion of tumor-reactive T cells are critical for response to SABR. 3,5 Patients with high levels of tumor-reactive T cells (CD11a high CD8 + ) responded better to SABR with prolonged PSA progression-free survival and time to distant recurrence. 5 Previous reports have demonstrated that tumor-derived EVs can carry immunosuppressive proteins and prevent immune-mediated tumor cell killing and response to immunotherapy. 22,23 In line with this, we performed a correlation analysis and evaluated the association between PCEV levels and peripheral CD8 T cells at baseline and after SABR (Fig. 6A). At baseline, we found that levels of both PSMA + and STEAP1 + EVs correlated negatively with several subpopulations of tumor-reactive T cells (Table E5). No association was found between levels of parent tumor-reactive CD8 T cells (CD11a high CD8 + ) and PCEVs (Fig. 6B). However, high baseline levels of PSMA + and STEAP1 + EVs were associated with a lower percentage of tumor-reactive CD8 T cells positive for markers of effector function Bim, CX3CR1/GZMB, and PD-1 (Fig. 6C-E). 11,24 Conversely, elevated PCEV levels at day 7 correlated positively with tumor-reactive CD8 T cells at day 14 after SABR (Table E4 and Fig. 6F).
Discussion
To our knowledge, this study is the first to analyze the kinetics and evaluate the clinical utility of tumor-derived EVs in patients with prostate cancer treated with radiation therapy. It is also the first study, to our knowledge, to evaluate an EV-based blood test as a complementary approach to 11 C-choline PET/CT imaging for the diagnosis of oligometastatic prostate cancer. We analyzed the relationship between circulating EVs and peripheral CD8 T cells at baseline and post-SABR in the intent to provide novel insights in the crosstalk between tumors and immune cells.
SABR has shown clinical benefit in oligometastatic prostate cancer, but defining oligometastatic disease is challenging because it relies on imaging modalities with variable sensitivity and specificity. 3 There is a critical need to develop sensitive tools to assess tumor burden and identify patients with truly oligometastatic disease who can benefit the most from SABR.
Along with circulating tumor cells and circulating tumor DNA, EVs have recently emerged as potential markers of tumor burden and predictors of response to therapy. 22 To detect EVs released from prostate cancer, we used 2 prostate-specific surface markers, PSMA and STEAP1, previously found to be enriched in prostate cancer cells and EVs. 25,26 PSMA has received significant attention with the development of PSMA-directed radionuclide therapy and PET imaging. 27,28 Similarly, STEAP1 is currently under investigation as a molecular imaging marker and therapeutic target. 29,30 Interestingly, our study identified STEAP1 as a robust marker of tumor-derived EVs in CRPC. At the time of diagnosis, we observed a strong correlation between levels of PSMA+ EVs and STEAP1+ EVs in patients with CRPC. We also found that circulating STEAP1+ EVs outnumbered PSMA+ EVs in patients with omCRPC and mCRPC. Although antibody affinity can affect EV quantification, this difference may also indicate differential expression of PSMA and STEAP1 in CRPC. Low PSMA expression has been previously reported in treatment-refractory patients. 31 In response to androgen-deprivation therapy, CRPC tumors can progress toward a neuroendocrine phenotype and lose PSMA expression. 32 Notably, we detected PCEVs in all patients, including those with an undetectable PSA level at baseline. Intratumoral and intertumoral heterogeneity of PSMA expression observed in advanced prostate cancer may have important consequences for patient selection using PSMA PET imaging and treatment with Lutetium-PSMA. 31 In a phase 1 study evaluating the safety profile of an antibody-drug conjugate targeting STEAP1, 99% of patients (133 of 134) showed positive STEAP1 expression, 73% with mid-high intensity. 30 Herein, we provide blood-based evidence that STEAP1 can be a promising alternative to PSMA for identification of aggressive prostate cancers, including those characterized by neuroendocrine features and low serum PSA levels. In line with this, a recent study in lung cancer found that STEAP1 is overexpressed in poorly differentiated neuroendocrine lung cancer (small cell lung carcinoma) compared with carcinoid tumors. 33 Further studies are warranted to determine the differential expression of PSMA and STEAP1 in metastatic prostate cancer lesions identified by PET imaging.
To improve the selection of patients with oligometastatic prostate cancer, we determined the association of baseline PCEV levels and tumor burden assessed by 11C-choline PET imaging. Surprisingly, we did not find any correlation between PCEV levels and imaging features; however, baseline levels of PSMA + EVs and STEAP1 + EVs were predictors of distant failure. Patients with high PCEV levels were at increased risk of distant metastatic progression. This suggests that 11 C-choline PET imaging may underestimate disease burden and fail to identify treatable metastases. A PCEV-based blood test may refine the identification of truly oligometastatic prostate cancer and patients who will benefit most from SABR therapy. Given the emergence of PSMA PET imaging, comparative studies are needed to further evaluate the potential of PCEV levels as a marker of disease burden in patients with oligometastatic disease diagnosed with PSMA and/or choline PET imaging.
Longitudinal analysis after SABR revealed that blood PCEV levels increase rapidly after treatment and peak at day 7. Although irradiation has been shown to stimulate EV biogenesis in in vitro cell cultures, 34 the current study is, to our knowledge, the first clinical demonstration of SABR-induced EV release. PSMA+ EVs at day 7 after SABR were a strong predictor of outcomes. In contrast to baseline levels, high levels of PSMA+ EVs at day 7 were associated with a lower risk of disease progression and better survival. This suggests that an increase in blood PSMA+ EV levels may indicate SABR-associated cancer cell death and antitumor immunity. Previous studies established that the expansion of peripheral tumor-reactive CD8 T cells after SABR is essential for local and systemic tumor control. 3,5 Herein, we found that elevation of PCEVs was positively correlated with levels of tumor-reactive CD8 T cells. Interestingly, the highest correlation was observed with CD8 T cells expressing the CX3CR1 chemokine receptor. CX3CR1 is expressed by effector CD8 T cells, and high levels of peripheral CX3CR1+ CD8 T cells have been linked to better response to immune checkpoint blockade. 24,35 Although these findings suggest EV-mediated crosstalk between tumors and CD8 T cells, the nature of this association remains elusive. The peak of PCEV levels (day 7) preceded the increased frequency of tumor-reactive CD8 T cells (day 14), which suggests that PCEVs may play an active role in SABR-induced antitumor immunity. Similarly, in a prior study, patients with melanoma who responded to pembrolizumab immunotherapy had levels of Ki67+ PD1+ CD8 T cells that positively correlated with blood-derived exosomal PD-L1. 22 Tumor-derived EVs can carry immunomodulatory molecules such as PD-L1, CD73, miRNAs, and cytosolic DNA that either induce or inhibit antitumor immunity. 36-40 Prostate cancer is characterized by an immunosuppressive tumor microenvironment, and EVs can contribute to tumor immune escape. 41-43 At baseline, we found an inverse correlation between PCEVs and tumor-reactive CD8 T cells, which may indicate an inherently immunosuppressive environment driven by PCEVs. After SABR treatment, a subset of patients with omCRPC have a durable response, with no signs of disease progression after 2 years. This suggests that in these best responders, SABR can alter the molecular composition of PCEVs toward an immunostimulatory phenotype, resulting in an abscopal response. 44,45 Conversely, limited control of distant metastases can be attributed to the release of immunosuppressive EVs that impair T-cell priming and tumor cell killing. PD-L1 and B7-H3 immune checkpoint molecules can be upregulated on the surface of EVs in response to radiation therapy. 46,47 Additional studies should focus on characterizing the molecular composition of PCEVs and determining the effect of SABR on the expression of immunoregulatory molecules. Furthermore, it is critical to decipher the molecular and cellular mechanisms involved in EV-mediated antitumor immunity in response to SABR using immunocompetent prostate cancer mouse models. This will pave the way for the design of biomarker-driven combination therapies, improving SABR efficacy and patient outcomes.
We recognize several limitations to this study. First, our study may be underpowered. This, accompanied by its retrospective design, may introduce selection bias that confounds the study results. In this study, we used nanoscale flow cytometry as a means of establishing proof-of-concept of the clinical utility of measuring circulating tumor-derived EVs for patient stratification and prediction of response to SABR. Although this technology is not primarily designed for clinical use, it has already been implemented in several clinical studies including randomized controlled trials. [48][49][50][51] Nanoscale flow cytometry offers numerous advantages for developing EV-based blood tests. It allows for high throughput multiparametric detection of EVs of approximately 150 nm or greater. In addition, this technology has minimal requirements with respect to preanalytical steps necessary to obtain a robust EV-based assay. This is a particularly important consideration, because most studies evaluating the clinical benefit of EVs are limited by cost and time-consuming preanalytical procedures affecting data reproducibility and implementation in a clinic setting. Our blood test can be performed within 1 hour from the time of blood collection to data analysis using basic laboratory equipment, highlighting the potential for using such technology in the clinical setting with broad feasibility.
Conclusions
This hypothesis-driven study sheds light on a new EV-based blood test that may improve identification of patients with oligometastatic castration-refractory prostate cancer who will benefit from SABR treatment. Additionally, it provides novel biological insights into the crosstalk between tumor cells and adaptive immune cells in response to SABR. Future endeavors will involve validation in larger patient cohorts and comprehensive profiling of the molecular cargo of prostate cancer-derived extracellular vesicles to refine risk prediction and identify potential therapeutic vulnerabilities to improve response to SABR.

Figure legend (fragment): In patients with omCRPC treated with SABR, the median times to distant failure for high and low levels of PSMA+ EVs were 3.47 months and 6.6 months, respectively (P = .0087); for high and low levels of STEAP1+ EVs, 4.2 months and 5.73 months, respectively (P = .022).
"year": 2022,
"sha1": "827bcb3b885382d517c66b17eacaea46688206e5",
"oa_license": "CCBYNCND",
"oa_url": "http://www.redjournal.org/article/S0360301622005442/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3377011f6736d244476d74ef0ad606ed1006b7c9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
High-quality Arabidopsis thaliana Genome Assembly with Nanopore and HiFi Long Reads
Arabidopsis thaliana is an important and long-established model species for plant molecular biology, genetics, epigenetics, and genomics. However, the latest version of reference genome still contains a significant number of missing segments. Here, we reported a high-quality and almost complete Col-0 genome assembly with two gaps (named Col-XJTU) by combining the Oxford Nanopore Technologies ultra-long reads, Pacific Biosciences high-fidelity long reads, and Hi-C data. The total genome assembly size is 133,725,193 bp, introducing 14.6 Mb of novel sequences compared to the TAIR10.1 reference genome. All five chromosomes of the Col-XJTU assembly are highly accurate with consensus quality (QV) scores > 60 (ranging from 62 to 68), which are higher than those of the TAIR10.1 reference (ranging from 45 to 52). We completely resolved chromosome (Chr) 3 and Chr5 in a telomere-to-telomere manner. Chr4 was completely resolved except the nucleolar organizing regions, which comprise long repetitive DNA fragments. The Chr1 centromere (CEN1), reportedly around 9 Mb in length, is particularly challenging to assemble due to the presence of tens of thousands of CEN180 satellite repeats. Using the cutting-edge sequencing data and novel computational approaches, we assembled a 3.8-Mb-long CEN1 and a 3.5-Mb-long CEN2. We also investigated the structure and epigenetics of centromeres. Four clusters of CEN180 monomers were detected, and the centromere-specific histone H3-like protein (CENH3) exhibited a strong preference for CEN180 Cluster 3. Moreover, we observed hypomethylation patterns in CENH3-enriched regions. We believe that this high-quality genome assembly, Col-XJTU, would serve as a valuable reference to better understand the global pattern of centromeric polymorphisms, as well as the genetic and epigenetic features in plants.
Introduction
The Arabidopsis thaliana Col-0 genome sequence was published in 2000 [1], and after decades of work, this reference genome has become the "gold standard" for A. thaliana. However, centromeres, telomeres, and nucleolar organizing regions (NORs) have been either misassembled or not even sequenced yet due to the enrichment of highly repetitive elements in these regions [2,3]. Long-read sequencing technologies, such as Oxford Nanopore Technologies (ONT) sequencing and Pacific Biosciences (PacBio) single-molecule real-time (SMRT) sequencing, generate single-molecule reads longer than 10 kb, which exceeds the length of most simple repeats in many genomes, making it possible to achieve highly contiguous genome assemblies [4]. Highly repetitive regions, e.g., centromere or telomere regions, however, remain mostly unassembled due to the limitations in read length and the error rate associated with sequencing of long reads. Although ONT sequencing has overcome the read length limitation and can generate ultra-long reads (longest > 4 Mb) (https://nanoporetech.com/products/promethion), the associated 5%-15% per-base error rate [5] leads to misassemblies or inaccurate assemblies. Naish et al. [6] used ONT-generated ultra-long reads to produce a highly contiguous A. thaliana Col-0 genome, but the consensus quality (QV) scores of all five chromosomes, ranging from 41 to 43, were lower than those of the reference TAIR10.1 (ranging from 45 to 52) [6]. High-fidelity (HiFi) data generated from circular consensus sequencing [7] are a promising strategy for repeat characterization and centromeric satellite assembly. The combination of ONT long reads and HiFi reads has been demonstrated to overcome the issues of sequencing centromere and telomere regions in the human genome, and generated the telomere-to-telomere (T2T) assemblies of human chromosome (Chr) X [8] and Chr8 [9].
Centromeres mainly consist of satellite DNAs and long terminal repeat (LTR) retrotransposons [10] that attract microtubule attachment and play an important role in maintaining the integrity of chromosomes during cell division [11]. In plant species, centromeric satellite DNA repeats range from 150 bp to 180 bp in size [12]. It has been reported that A. thaliana centromeres contain megabase-sized islands of 178-bp tandem satellite DNA repeats (CEN180) [13] that bind to centromere-specific histone H3-like protein (CENH3) [14,15]. Unfortunately, centromere sequences are largely absent from previously generated A. thaliana reference genome assemblies [15], hindering the investigation of CEN180 distribution and its genetic and epigenetic impacts on the five chromosomes.
To obtain T2T A. thaliana genome assembly, we introduced a bacterial artificial chromosome (BAC)-anchor replacement strategy to our assembly pipeline and generated the Col-XJTU genome assembly of A. thaliana. We completely resolved the centromeres of Chr3, Chr4, and Chr5, and partially resolved the centromeres of Chr1 and Chr2. The Col-XJTU assembly of A. thaliana genome was found to be highly accurate with QV scores greater than 60, which were obviously higher than those of TAIR10.1 and another recently deposited assembly [6]. Due to the unprecedented high quality of the Col-XJTU genome assembly, we were able to observe intriguing genetic and epigenetic patterns in the five centromere regions.
Results
Assembly of a high-quality genome of A. thaliana

We assembled ONT long reads using NextDenovo v. 2.0 and initially generated 14 contigs (contig N50 = 15.39 Mb) (Figure 1A, Figure S1A). Of these, eight contigs contained the Arabidopsis-type telomeric repeat unit (CCCTAAA/TTTAGGG) on one end, while two contigs had 45S rDNA units on one end (Figure 1A). Contig 13 (935 kb) and Contig 14 (717 kb), composed of CEN180 sequences, were neither ordered nor oriented, and thus were removed from the assembly (Figure S1A). We polished the remaining 12 contigs with HiFi data using NextPolish and scaffolded them using 3D-DNA derived from Hi-C data. Consequently, we obtained five scaffolds with seven gaps located at centromere regions (Figure 1A). To further improve the genome assembly, we assembled HiFi reads using hifiasm [16,17] and identified the centromeric flanking BAC sequences [18-20] on both the five ONT scaffolds and HiFi contig pairs (Figure 1A, Figure S1B and C). We first filled the gaps in the centromeres using the BAC-anchor strategy (Figure S1B). To guarantee the highest base-pair accuracy, we replaced the low-accuracy ONT genome assemblies with the PacBio HiFi contigs and kept the HiFi contigs as long as possible (Figure 1A, Figure S1C).

The total size of the Col-XJTU assembly is 133,725,193 bp, and the QV scores of all five chromosomes are greater than 60 (ranging from 62 to 68), which are markedly higher than those of the TAIR10.1 reference genome (ranging from 45 to 52) (Table 1) and a recently deposited genome (ranging from 41 to 43) [6], suggesting that our Col-XJTU assembly is highly accurate. The completeness evaluation showed a k-mer completeness score of 98.6%, suggesting that the Col-XJTU assembly is highly complete as well. The Col-XJTU assembly was composed of 97% HiFi contigs, with only 4,098,671 bp from ONT contigs, which contain highly repetitive elements (Table S1). The heterozygosity of A. thaliana Col-XJTU is very low (0.0865%), as estimated using GenomeScope v. 1.0 [21] from the k-mer (k = 17) histogram computed by Jellyfish v. 2.3.0 [22]. The base accuracy and structural correctness of the Col-XJTU assembly were also estimated from the sequenced BACs. Firstly, 1465 BACs were aligned to the Col-XJTU assembly via Winnowmap2, and the mapping results calculated using the CIGAR strings revealed good agreement, with high sequence identity (99.87%). We validated the structure of our assembly using bacValidation, and the Col-XJTU assembly resolved 1427 out of 1465 validation BACs (97.41%), which is higher than the BAC resolving rate reported for the human genome [23]. In addition, the Col-XJTU genome assembly corrected one misassembled region, 1816 bp in length and containing two protein-coding genes, in the TAIR10.1 genome (Figure 1B; Table S2).

Figure 1. High-quality T2T genome assembly. A. Assembly of ONT ultra-long reads. We obtained 14 contigs, two of which are composed of CEN180 repeat sequences. The 12 remaining contigs were ordered and oriented into five scaffolds using Hi-C data. After scaffolding, the assembly was represented as five pseudomolecules corresponding to the five chromosomes of the TAIR10.1 assembly. Gaps were filled using HiFi contigs based on BAC anchors. Whole ONT-HiC assemblies were replaced with HiFi contigs based on BAC anchors. The gray bars represent the centromere regions; the red lines represent the gap locations; the numbers and corresponding lines in blue indicate the numbers and loci of new gene annotations, respectively. B. The Col-XJTU genome assembly corrected a misassembled region of the TAIR10.1 genome assembly. Grey bands connect corresponding collinear regions. Duplicated segments that were misassembled are connected with bands in blue. C. Circos plot of the Col-XJTU genome assembly. The tracks from outside to inside show: the karyotypes of the assembled chromosomes, GC density, density of transposable elements, and gene density, calculated in 50-kb windows. Syntenic blocks are linked, and different colored lines represent different chromosomes. Chromosomes are labeled at the outermost circle, with centromeres shown in dark gray. T2T, telomere-to-telomere; ONT, Oxford Nanopore Technologies; HiFi, high-fidelity; BAC, bacterial artificial chromosome.
The assembly sizes of Col-XJTU centromere 1 (CEN1), CEN2, CEN3, CEN4, and CEN5 were 3.8 Mb, 3.5 Mb, 4.0 Mb, 5.5 Mb, and 4.9 Mb, respectively (Table S3). The sizes of the gap-free CEN3, CEN4, and CEN5 were consistent with the physical map-based centromeric sizes [18-20]; however, the 3.8-Mb-long CEN1 had a gap and was smaller than the estimated size of 9 Mb based on the physical map [20], and the 3.5-Mb-long CEN2 was assembled with one gap, accounting for 88% of the 4-Mb-long physical map [20]. None of the five centromeric CEN180 arrays contained large structural errors (Figure S2). Upon annotation of the five centromere regions, we found that all five A. thaliana centromeres were surrounded by transposon-enriched rather than protein-coding gene-enriched sequences (Figure 1C).
The Col-XJTU assembly (contig N50 = 22.25 Mb) improved the contiguity of the A. thaliana genome compared with TAIR10.1 (contig N50 = 11.19 Mb) (Table 1), and we filled 36 gaps, leaving only the two gaps in CEN1 and CEN2 (Table S4). Benchmarking Universal Single-Copy Orthologs (BUSCO) evaluation revealed higher genome completeness for Col-XJTU than for TAIR10.1 (Table 1). The synteny plot showed that the Col-XJTU genome is highly concordant with TAIR10.1 (Figure S3), but with three additional completely resolved centromere regions and partly resolved NORs. Novel sequences (regions not covered by TAIR10.1) totaling 14.6 Mb were introduced in the Col-XJTU genome; of these, 94.8% belong to the centromeric regions, and 3.7% are located in the NORs and telomeres (Table S5). The QV score of the novel sequences (> 10 kb) is 67.43, corresponding to a base accuracy of 99.999982%. The assembly sizes of the 45S rDNA units in Chr2 and Chr4 were 300,270 bp and 343,661 bp, respectively. The telomeres of the eight chromosome arms ranged from 1862 bp to 3563 bp in length (Table S6), consistent with the reported lengths [24]. The read depths of these telomeres did not differ obviously from the average coverage of the genome (Table S6). Moreover, no telomeric motif was found in the unmapped HiFi reads, indicating that the telomeres are likely completely resolved. The repeat content of the Col-XJTU genome (24%) is much higher than that of the current reference genome (16%) (Table 1), largely due to the higher number of LTR elements assembled and annotated in the Col-XJTU genome (Table S7).
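The check for residual telomeric sequence in the unmapped reads amounts to scanning each read for tandem copies of the Arabidopsis-type repeat on either strand. A minimal sketch (requiring at least three tandem copies is our arbitrary choice, not a threshold stated in the paper):

```python
import re

# Arabidopsis-type telomeric repeat unit on either strand
TELOMERE = re.compile(r"(?:TTTAGGG){3,}|(?:CCCTAAA){3,}")

def has_telomeric_motif(read_seq):
    """True if the read carries a run of >=3 tandem telomeric repeats."""
    return TELOMERE.search(read_seq.upper()) is not None
```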
A total of 27,418 protein-coding genes (99.9%) were lifted over from TAIR10.1 (27,444) using Liftoff (Table 1). We then masked repeat elements and annotated protein-coding genes in the novel sequences of the Col-XJTU genome. Finally, we obtained 27,583 protein-coding genes in the Col-XJTU genome, including 165 newly annotated genes. Of the newly annotated genes, 41 and 89 were located in the NORs of Chr2 and Chr4, respectively (Figure S4), while 35 were located in the centromeres (n = 33) and telomeres (n = 2) (Figure 1A). Only 14 of the 165 newly annotated genes contain functional domains, whereas the remaining 151 have unknown functions (Table S8). Interestingly, 96% of the newly annotated genes were found to be actively transcribed across different tissues (Table S9), especially in leaves (Figure S5). The highly expressed leaf-specific novel genes encode protein domains such as ATP synthase subunit C and NADH dehydrogenase (Table S8), indicating that these genes may be involved in photosynthesis.
Global view of centromere architecture
Previously, the centromere composition of A. thaliana was estimated using physical mapping and cytogenetic assays; however, such estimation resulted in incorrectly annotated and unknown regions, such as the 5S rDNA and CEN180 repeat regions [1]. The complete assembly of CEN3, CEN4, and CEN5 in this study revealed ~0.5-kb-long repeats in the 5S rDNA array regions (Figure 2), which is consistent with previous findings obtained by fluorescence in situ hybridization and physical mapping [25,26]. The 5S rDNA regions in CEN4 and CEN5 exhibited high similarity, with 95% sequence identity. However, this region in CEN3 was interrupted by LTRs, resulting in a low sequence identity. All 5S rDNA regions presented GC-rich and hypermethylation patterns (Figure 2). We detected 3666 5S rDNA monomers, approximately double the previously reported ~2000 5S rDNA gene copies in the Col-0 genome [27]. The 5S rDNA arrays were divided into four clusters (Figure S6A), wherein the 5S rDNA sequences in CEN4 and CEN5 formed independent clusters labeled as 5S Cluster 1 and Cluster 2, respectively (Figure 2). The 5S rDNA sequences in CEN3 were divided into two clusters, Clusters 3 and 4 (Figure 2), which contained obviously more polymorphic sites than Clusters 1 and 2 in CEN4 and CEN5 (Figure S7). We observed that CEN1, CEN2, CEN3, and CEN4 contained highly similar CEN180 arrays (Figure 2), and the reduced internal similarity in CEN5 was likely due to disruption by LTR/Gypsy elements (Figure 2). We found one CEN180 array each in CEN1, CEN2, CEN3, and CEN5, but two distinct CEN180 arrays in CEN4. Except for the downstream array in CEN4, all CEN180 arrays showed higher than 90% sequence identity with either inter- or intrachromosomal regions (Figure 2). The downstream CEN180 array in CEN4 showed a higher internal sequence identity (> 90%) and a lower external sequence identity (< 90%) than the other CEN180 arrays (Figure 2). Moreover, the downstream CEN180 array in CEN4 showed lower GC content and methylation frequency than the other CEN180 arrays (Figure 2). We performed a LASTZ search for tandem repeats to construct the CEN180 satellite library and identified 60,563 CEN180 monomers in the five centromeres. Phylogenetic clustering analysis revealed four distinct CEN180 clusters with single-nucleotide variants and small indels (Figures S6B and S8). Almost all the downstream CEN180 monomers of CEN4 belonged to CEN180 Cluster 1 (Figure 2, Figure S9), while the upstream CEN180 monomers of CEN4 belonged to the remaining three CEN180 clusters.
The functional region of a centromere is defined by the binding of CENH3 and its associated epigenetic modifications [28,29]. We observed that CENH3 was clearly enriched in the interior of the centromere but depleted at the LTR regions (Figure 2). The five centromeres showed higher DNA methylation than the pericentromeres (Figure S10); however, the CEN180 arrays presented hypomethylation patterns (Figure 2, Figure S10). Interestingly, we found that the CENH3-binding signal exhibited a strong preference for CEN180 Cluster 3 on all five centromeres (Figure S11). Such a preference was not observed for CEN180 Cluster 1 in CEN4 or the other four centromeres (Figure S11). The CENH3 signal enrichment showed the opposite tendency to the methylation frequency in 60% of the CEN180 clusters of the five centromeres (Figure 2, Figure S12).
Discussion
Long-read sequencing technologies have traditionally suffered from high error rates [30]. However, the recently developed HiFi reads from PacBio combine the advantages of long read lengths and low error rates, enabling the assembly of complex and highly repetitive regions in the new era of T2T genomics [31,32]. HiFi reads have been used to assemble the T2T sequences of human ChrX and Chr8 [8,9], aiding in the completion of the human genome [33]. Recently, two complete rice reference genomes have also been assembled using HiFi reads [32].
The size of A. thaliana centromeres is 2- to 5-fold larger than that of rice centromeres (0.6-1.8 Mb) [32]; hence, a sophisticated approach is required to complete the assembly of the A. thaliana centromeres. We combined the dual long-read platforms of ONT ultra-long and PacBio HiFi sequencing to produce the high-quality A. thaliana Col-XJTU genome with only two gaps, in CEN1 and CEN2. We assembled a 3.8-Mb-long CEN1, which is smaller than the 9-Mb region estimated by physical mapping [20]. We also assembled a 3.5-Mb-long sequence (88% of the physical map [20]) of CEN2 using hifiasm. Recently, a version of the A. thaliana genome was deposited with a ~5-Mb-long CEN1 sequence, which is still smaller than the physical map size [20], indicating the difficulty in assembling long centromere regions even with long-read technologies [6]. We are optimizing a singly unique nucleotide k-mers (SUNKs) assembly method [9] for plant genomes, aiming to eventually produce completely resolved long centromere regions.
Diverse methylation patterns have been observed in the centromere sequences of two human chromosomes upon completion of the human genome [8,9]. The centromeres of Chr8 and ChrX in the human genome contain a hypomethylation pocket, wherein the centromeric histone CENP-A for kinetochore binding is located [8,9,34,35]. This phenomenon has also been observed experimentally in A. thaliana [36]. Our high-quality centromere assembly of A. thaliana reveals that the CEN180 arrays enriched with CENH3 occupancy are hypomethylated compared with the pericentromeric regions. Although the primary function of centromeres is conserved between the animal and plant kingdoms, the centromeric repeat monomers are highly variable in terms of sequence composition and length, and little sequence conservation is observed between species [37]. Extensive experimental evidence has confirmed that convergent evolution of centromere structure, rather than sequence composition, is the key to maintaining the function of centromeres [38]. Furthermore, we have observed clusters with irregular patterns of methylation and CENH3 binding, indicating that centromeres may contain regions with unknown functions or still-evolving components. We would need to complete the assembly of centromere sequences for more related species to gain insight into the evolution of centromere structure and function.
In conclusion, our novel assembly strategy involving the combination of ONT long reads and HiFi reads leads to the assembly of a high-quality genome of the model plant A. thaliana. This genome will serve as the foundation for further understanding molecular biology, genetics, epigenetics, and genome architecture in plants.
Plant growth condition and data sources
The A. thaliana accession Col-0 was obtained as a gift from Shandong Agricultural University, China. The A. thaliana seeds were placed in potting soil and then maintained in a growth chamber at 22°C with a 16 h light/8 h dark photoperiod and a light intensity of 100-120 μmol·m−2·s−1. Young true leaves taken from 4-week-old healthy seedlings were used for sequencing.
Genomic DNA preparation
DNA was extracted using the Qiagen Genomic DNA Kit (Catalog No. 13323, Qiagen, Valencia, CA) following the manufacturer's guidelines. Quality and quantity of total DNA were evaluated using a NanoDrop One UV-Vis spectrophotometer (ThermoFisher Scientific, Waltham, MA) and Qubit 3.0 Fluorometer (Invitrogen life Technologies, Carlsbad, CA), respectively. The Blue Pippin system (Sage Science, Beverly, MA) was used to retrieve large DNA fragments by gel cutting.
Oxford Nanopore PromethION library preparation and sequencing
For the ultra-long Nanopore library, approximately 8-10 μg of genomic DNA was size-selected (> 50 kb) with the SageHLS HMW library system (Sage Science) and then processed using the Ligation Sequencing 1D Kit (Catalog No. SQK-LSK109, Oxford Nanopore Technologies, Oxford, UK) according to the manufacturer's instructions. DNA libraries (approximately 800 ng) were constructed and sequenced on the PromethION (Oxford Nanopore Technologies) at the Genome Center of Grandomics (Wuhan, China). A total of 56.54 Gb of ONT long reads (~388× coverage) were generated, including ~177× coverage from ultra-long (> 50 kb) reads. The N50 of the ONT long reads was 46,452 bp, and the longest read was 495,032 bp.
HiFi sequencing and assembly
SMRTbell libraries were constructed according to the standard PacBio protocol using 15-kb preparation solutions (Pacific Biosciences, CA). The main steps of library preparation were: 1) genomic DNA shearing; 2) DNA damage repair, end repair, and A-tailing; 3) ligation with hairpin adapters from the SMRTbell Express Template Prep Kit 2.0 (Pacific Biosciences); 4) nuclease treatment of the SMRTbell library with the SMRTbell Enzyme Cleanup Kit; and 5) size selection and binding to polymerase. In brief, the 15 μg genomic DNA sample was sheared in gTUBEs. Single-strand overhangs were then removed, and the DNA fragments were damage-repaired, end-repaired, and A-tailed. The fragments were then ligated with the hairpin adapters for PacBio sequencing. The library was treated with the nuclease provided in the SMRTbell Enzyme Cleanup Kit and purified with AMPure PB beads. Target fragments were screened by BluePippin (Sage Science). The SMRTbell library was then purified with AMPure PB beads, and an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA) was used to determine the size of the library fragments. Sequencing was performed on a PacBio Sequel II instrument with Sequencing Primer V2 and the Sequel II Binding Kit 2.0 at the Genome Center of Grandomics. A total of 22.90 Gb of HiFi reads (~157× coverage) were generated, with a read N50 of 15,424 bp. HiFi reads were assembled using hifiasm v. 0.14-r312 [17] with default parameters, and gfatools (https://github.com/lh3/gfatools) was used to convert sequence graphs from GFA to FASTA format.
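The GFA-to-FASTA step performed by gfatools essentially emits each segment (S) line of the graph as a FASTA record. A minimal Python stand-in, assuming sequences are stored inline in the GFA rather than as external references:

```python
def gfa_to_fasta(gfa_path, fasta_path, width=80):
    """Write every GFA segment (S) line as a FASTA record."""
    with open(gfa_path) as gfa, open(fasta_path, "w") as fa:
        for line in gfa:
            if line.startswith("S\t"):
                # S-line layout: S <tab> name <tab> sequence [<tab> tags...]
                _, name, seq = line.rstrip("\n").split("\t")[:3]
                fa.write(f">{name}\n")
                for i in range(0, len(seq), width):
                    fa.write(seq[i:i + width] + "\n")
```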
Hi-C sequencing and scaffolding
The Hi-C library was prepared from cross-linked chromatin of plant cells using a standard Hi-C protocol and sequenced on an Illumina NovaSeq 6000. A total of 21.14 Gb of Hi-C reads with ∼158× coverage were generated. The Hi-C sequencing data were used to anchor all contigs using Juicer v. 1.5 [44], followed by the 3D-DNA scaffolding pipeline [45]. Scaffolds were then manually checked and refined with Juicebox v. 1.11.08 [46].
Replacing ONT-HiC assemblies with HiFi contigs
We introduced a BAC-anchor strategy to fill the remaining gaps in the ONT-HiC assemblies. Briefly, for each gap, we first identified two BAC sequences flanking the gap locus that aligned concordantly (identity > 99.9%) to both the ONT-HiC assembly and a HiFi contig, and then replaced the gap-containing contigs with the corresponding HiFi contigs. We used the same method to polish the ONT-HiC assemblies with HiFi contigs. The BAC sequences used as anchors are listed in Figure S1 and Table S10.
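The replacement rule just described can be summarized in a few lines of code. The sketch below is a hypothetical illustration (the data structures, helper names, and example BAC identifiers are ours, not the authors' code): a gap is patched only when both flanking BACs align at > 99.9% identity to the ONT-HiC assembly and to the same HiFi contig.

```python
# A toy sketch of the BAC-anchor decision logic; not the published implementation.
from dataclasses import dataclass

@dataclass
class Alignment:
    bac: str
    target: str      # "ont_hic" or a HiFi contig name
    identity: float  # percent identity

def patchable(gap_flanking_bacs, alignments, hifi_contig, min_identity=99.9):
    """True if both flanking BACs anchor the ONT-HiC assembly and the same HiFi contig."""
    def hits(bac, target):
        return any(a.bac == bac and a.target == target and a.identity > min_identity
                   for a in alignments)
    return all(hits(bac, "ont_hic") and hits(bac, hifi_contig)
               for bac in gap_flanking_bacs)

# Hypothetical example alignments for one gap:
alns = [Alignment("F19P19", "ont_hic", 99.95), Alignment("F19P19", "hifi_ctg7", 99.97),
        Alignment("T7I23", "ont_hic", 99.93), Alignment("T7I23", "hifi_ctg7", 99.96)]
print(patchable(("F19P19", "T7I23"), alns, "hifi_ctg7"))  # True -> swap in the HiFi contig
```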
Genome comparisons
Two genome assemblies were aligned against each other using nucmer v. 4.0.0 (-c 100 -b 500 -l 50) [47], and the output delta file was filtered using delta-filter (-i 95 -l 50). The alignment regions between the two genomes were extracted using show-coords (-c -d -l -I 95 -L 10,000), and the novel regions of our genome were extracted using 'complement' in BEDTools v. 2.30.0 [48]. The synteny relationships among the five chromosomes were estimated using BLASTN v. 2.9.0 with an 'all vs. all' strategy and visualized using Circos v. 0.69-8 [49]. The genomic alignment dot plot between the Col-XJTU and TAIR10.1 assemblies was generated using D-GENIES [50]. QV and completeness scores were estimated using Merqury [51] from Illumina sequencing data generated on the same material in this study. The assembly accuracy for the five chromosomes was estimated from QV as follows: accuracy percentage = 100 − 10^(−QV/10) × 100 [9]. To assess genome completeness, we also applied BUSCO v. 3.0.2 analysis using the plant early release database v. 1.1b [52]. Pairwise sequence identity heatmaps of the five centromeres were calculated and visualized using the aln_plot (https://github.com/mrvollger/aln_plot) command: bash cmds.sh CEN CEN.fa 5000.
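The QV-to-accuracy conversion above is a simple logarithmic transform; the short Python sketch below (ours, for illustration) evaluates it for a few representative QV values.

```python
# accuracy (%) = 100 - 10^(-QV/10) * 100; e.g., QV 40 corresponds to 99.99% base accuracy.
def qv_to_accuracy_percent(qv: float) -> float:
    return 100.0 - (10 ** (-qv / 10.0)) * 100.0

for qv in (30, 40, 50, 60):
    print(f"QV {qv}: {qv_to_accuracy_percent(qv):.5f}% accurate")
```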
BAC validation
We validated the assemblies using bacValidation (https://github.com/skoren/bacValidation) with default parameters, which considers a BAC as 'resolved' within the assembly when 99.5% of the BAC length aligns to a single contig. BAC libraries were downloaded from the European Nucleotide Archive (ENA), and the BACs used to validate the five chromosomes are listed in Table S10.
Assembly validation of CEN180 arrays
We applied TandemTools [53] to assess the structure of the centromeric CEN180 arrays. We first aligned ONT reads (> 50 kb) to the Col-XJTU assembly with Winnowmap2 and extracted reads aligned to the centromeric CEN180 arrays (Chr1: 14
Misassembly evaluation
We first used QUAST v. 5.0.2 [57] to assess the structural accuracy of the new assemblies. QUAST parameters were set to 'quast.py <asm> -o quast_results/<asm> -r <reference> --large --min-alignment 20,000 --extensive-mis-size 500,000 --min-identity 90' according to a previous report [23]. Based on the QUAST evaluation, we did not detect any misassembly between the Col-XJTU and TAIR10.1 genomes at noncentromeric regions. Furthermore, we detected and labeled one potential misassembly due to segmental duplications on Chr5 when mapping the protein-coding gene sequences of TAIR10.1 to Col-XJTU using Liftoff. We aligned the BAC sequences (K3M16 and K10A8) to the regions that differ between TAIR10.1 and Col-XJTU using BLASTN, supporting that the Col-XJTU assembly is correct.
Methylation analysis
Nanopolish v. 0.13.2 with the parameters 'call-methylation --methylation cpg' was used to measure the frequency of CpG methylation in raw ONT reads. The ONT reads were aligned to whole-genome assemblies via Winnowmap v. 2.0 [67]. The script 'calculate_methylation_frequency.py' provided in the methplotlib package [68] was then used to generate the methylation frequency.
Data availability
The whole-genome sequence data reported in this study have been deposited in the Genome Warehouse [69] at the National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences / China National Center for Bioinformation (GWH: GWHBDNP00000000.1), and are publicly accessible at https://ngdc.cncb.ac.cn/gwh. The genome annotation has been deposited at https://dx.doi.org/10.6084/m9.figshare.14913045. The raw sequencing data for the PacBio HiFi reads, ONT long reads, Illumina short reads, and Hi-C Illumina reads have been deposited in the Genome Sequence Archive [70] at the National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences / China National Center for Bioinformation (GSA: CRA004538), and are publicly accessible at https://ngdc.cncb.ac.cn/gsa.
"year": 2021,
"sha1": "462bf946cfc95d3c16ba84e0cad6fdc345e36840",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.gpb.2021.08.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da1b026f959d00561c25f8ab88b7d8fedd543ef3",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Medicine"
]
} |
Gamma-ray bursts afterglow physics and the VHE domain
Afterglow radiation in gamma-ray bursts (GRBs), extending from the radio band to GeV energies, is produced as a result of the interaction between the relativistic jet and the ambient medium. Although in general the origin of the emission is robustly identified as synchrotron radiation from the shock-accelerated electrons, many aspects remain poorly constrained, such as the role of inverse Compton emission, the particle acceleration mechanism, the properties of the environment and of the GRB jet itself. The extension of the afterglow emission into the TeV band has been discussed and theorized for years, but long eluded observation. Recently the Cherenkov telescopes MAGIC and H.E.S.S. have unequivocally proven that afterglow radiation is produced also above 100 GeV, up to at least a few TeV. The accessibility of the TeV spectral window will largely improve with the upcoming facility CTA (the Cherenkov Telescope Array). In this review article, we first revise the current model for afterglow emission in GRBs, its limitations and open issues. Then we describe the recent detections of very high energy emission from GRBs and the origin of this radiation. Implications for the understanding of afterglow radiation and constraints on the physics of the involved processes will be investigated in depth, showing how future observations, especially by the CTA Observatory, are expected to give a key contribution in improving our comprehension of such elusive sources.
Introduction
Gamma-Ray Bursts (GRBs) are observed as transient sources of radiation displaying a distinctive pattern consisting of two different phases. The first phase is dominated by emission in the keV-MeV energy range, lasting from fractions of a second to several minutes, and reaching isotropic equivalent peak luminosities in the range L ∼ 10^49 − 10^53 erg s^−1. The bimodal distribution of the prompt emission duration reveals that there are two classes of GRBs, called short and long depending on whether the prompt emission lasts shorter or longer than 2 seconds [1,2]. The second emission phase, called afterglow, follows the prompt with a delay of tens of seconds, and is detected over a very wide range of frequencies, from γ-rays to the radio band. The afterglow flux decays smoothly as a power-law in time for weeks or months, and the typical frequency of the radiation moves in time from the X-ray to the radio band. Since 2019, the detection of a few long GRBs between 0.3 and 3 TeV on time-scales from tens of seconds to a few days has proven for the first time that GRBs can also be sources of radiation in the TeV band, where they can convey a sizable fraction (20-50%) of the total energy emitted during the afterglow phase [3][4][5].
All this prompt/afterglow emission is identified with radiation produced as a result of the launch of an ultra-relativistic (Γ ∼ 100 − 1000) jet from a newly born compact object. The ejecta undergoes first internal dissipation (through mechanisms such as shocks between different parts of the outflow [6] or magnetic reconnection episodes [7,8]). In a second moment, the ejecta undergoes external dissipation [9], triggered by interactions with the ambient medium (e.g., the interstellar medium or the wind of the progenitor's star [10]). The two different dissipation processes occur at different typical distances from the central engine (R ∼ 10^13−14 cm and R ∼ 10^15−20 cm) and generate two well distinguished emission phases, identified as the prompt and afterglow emission, respectively.
For long GRBs, it is widely believed that the involved energetics and time-scales and the successful launch of a relativistic jet can find justification in the collapsar model [11,12]. In this model, the core of a massive star collapses into a black hole and the accretion from the surrounding disk powers the launch of two opposite, collimated (θ_jet ∼ 5 − 10°) outflows. A similar scenario applies also to short GRBs, where the black hole originates from the merger of two neutron stars (as recently proven by the association of a short GRB with a gravitational wave signal [13]) or of a neutron star and a black hole. An alternative model [14][15][16][17] considers a millisecond magnetar (i.e. a rapidly rotating neutron star) as the progenitor of long GRBs (or at least a fraction of them). This model has the advantage of explaining more naturally the detections of late-time activity (10^2 − 10^3 s after the prompt onset) in the form of X-ray flares and plateaus, observed in about one third of the population.
Besides the nature of the progenitor star, another quite pressing open issue in GRB physics concerns the composition of the jet itself, i.e. the nature of the dominant form of energy stored in the outflow, which can be either magnetic (in the form of Poynting flux [7,14]) or kinetic (i.e. bulk motion of the matter). This uncertainty reflects into an uncertainty on the mechanism extracting energy from the jet (i.e. the process converting part of the jet energy into random energy of the particles), which is identified with internal shocks in the latter case, and with magnetic reconnection events in the case of Poynting flux dominated outflows [14]. While internal shocks in a matter-dominated jet have long been considered the mainstream model, tensions between some model predictions and observations have shifted attention in the last decade to a family of models based on magnetic jets [18][19][20]. In particular, internal shocks are not an efficient mechanism [21,22], and this is in contrast with the evidence that only a relatively small fraction (10 − 50%) of energy is still in the blast-wave during the afterglow phase, meaning that most of it must have been dissipated and radiated away during the prompt. It must be noted, however, that the estimate of the energy content of the blast during the afterglow is indirect, and contingent upon a proper modeling of the afterglow emission [23]. Investigations that took advantage of GeV emission detected by LAT (the Large Area Telescope onboard the Fermi satellite) reached the conclusion that the blast energy is usually underestimated by studies relying on X-ray emission, and inferred a prompt emission efficiency between 1-10% [24], which can still be consistent with internal shocks. The nature and efficiency of the dissipation mechanism in the prompt phase are still a matter of intense debate. In any case, the radiation is expected to be produced by the accelerated electrons, which efficiently lose energy via synchrotron cooling [6,25]. Inconsistencies between the expected synchrotron spectrum and the observed spectral shape of the prompt emission [25,26] have also called into question the nature of the radiative process. Recent works have made major advances towards the comprehension of the radiative mechanism responsible for the prompt emission, supporting the synchrotron interpretation [27][28][29][30][31].
The nature of the afterglow emission is much better understood, at least in its general features. The interaction between the jet and the external medium triggers the formation of a forward shock running into the external medium and a reverse shock running into the ejecta [32][33][34]. These shocks are responsible for the acceleration of particles and for the deceleration of the outflow, eventually down to non-relativistic velocities [35][36][37]. The observed radiation is the result of synchrotron radiation from electrons accelerated at the forward shock [38]. A contribution from the reverse shock may also be relevant, typically in the radio and optical bands [34,39]. Shock formation and particle acceleration in ultra-relativistic shocks are still not completely understood. Very important progress has been made in the last decade on i) the dynamics of the blast-wave, ii) shock formation, particle acceleration and self-generation of turbulent magnetic fields in the shock proximity, and iii) the main processes shaping the radiative output over the whole electromagnetic spectrum, from radio to very-high-energy γ-rays. In Section 3 we propose a discussion of the main open issues of the afterglow model, outlining which observations are at odds with model predictions, which observed features are missing in the basic scenario, and what are the present limitations that prevent us from extracting valuable information from the modeling of multi-wavelength afterglow radiation. In Section 4 we describe the recent discovery that GRBs can be bright TeV emitters. Each GRB with a firm detection (or a hint of detection) by MAGIC or H.E.S.S. is discussed in detail. We present multi-wavelength observations and review the proposed interpretations of the detected emission. In Section 5, we compare the general properties of the detected GRBs both among each other and with the general population. We discuss how the TeV emission can help to solve some of the most important issues of the afterglow model. Finally, in Section 6, we discuss the prospects for future studies of TeV emission from GRBs with the next generation of Cherenkov telescopes and their expected impact on GRB physics.
The afterglow model
Afterglow emission refers to all the broad-band radiation observed from a GRB on longer timescales (minutes to months) as compared to the initial prompt radiation detected in hard X-rays [38,47,48]. Its temporal evolution is usually well described by simple decaying power-laws, in contrast with the short (sub-second) variability that characterises the prompt emission [49][50][51][52]. These major differences place the emission region of afterglow radiation at larger radii (> 10^15 cm), pinpointing its origin in the processes triggered by the interaction between the jet and the circumburst medium.
The expansion of the relativistic jet into the external medium is expected to drive two different shocks: the forward shock, running into the external medium, and the reverse shock, running into the jet. The shocked ejecta and the shocked external medium, separated by the contact discontinuity, are both sources of synchrotron radiation from the accelerated electrons [53]. Most of the detected radiation is interpreted as emission from ambient particles energized by the forward shock. Spectra and light-curves are then shaped by the environment where the GRB explodes, which in turn is strictly connected to the nature of the progenitor. The other player that shapes the properties of afterglow radiation is particle acceleration at relativistic shocks, which is thought to proceed via diffusive shock acceleration, but for which the details of the underlying physics remain poorly constrained. Moreover, the overall luminosity of the afterglow radiation depends on the energy content of the blast-wave. This amount is determined by how efficiently the prompt mechanism has dissipated and released part of the initial explosion energy. Following these considerations, it is evident how the study of afterglow radiation impacts the general understanding of the GRB phenomenon: the progenitor and its environment, the nature and efficiency of the mechanisms responsible for the prompt emission, the properties of the jet, and the micro-physics of relativistic shocks.
In this section, the physics involved in the afterglow scenario is presented, with a particular focus on the forward shock emission and on the radiative output expected at VHE. This section is organized as follows: we revisit the physics of the jet dynamical evolution in its interaction with the ambient medium (Section 2.1), the particle acceleration mechanism (Section 2.2), and the resulting radiative output and its spectral shape (Section 2.3).
Jet dynamics
After the reverse shock has crossed the ejecta, the dynamics of the blast-wave enters a self-similar regime ([35], BM76 hereafter). In a thin shell approximation, the reverse shock crossing time corresponds to the time when the blast-wave starts decelerating. The deceleration of the jet, caused by the collision with the external medium, becomes significant at the radius R_dec where the energy transferred to the mass m collected from the external medium (∼ m(R_dec) c² Γ₀²) is comparable to the initial energy (E₀ = M₀ Γ₀ c²) carried by the jet. This deceleration radius is typically of the order of R_dec ∼ 10^15 − 10^16 cm, depending on the density of the external medium and on the ejecta mass M₀ and initial bulk Lorentz factor Γ₀. Before reaching this radius, the ejecta expands with constant velocity (coasting phase).
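For concreteness, the deceleration radius can be evaluated from the condition m(R_dec) Γ₀² c² ≃ E₀. The following minimal Python sketch (the parameter values are assumed fiducial numbers, not taken from any specific burst) does this for a constant-density medium:

```python
# Deceleration radius for a constant-density medium:
# R_dec = [3 E0 / (4 pi n m_p c^2 Gamma0^2)]^(1/3)
import math

m_p = 1.6726e-24   # proton mass [g]
c = 2.9979e10      # speed of light [cm/s]

def r_dec(E0_erg, n_cm3, Gamma0):
    return (3.0 * E0_erg / (4.0 * math.pi * n_cm3 * m_p * c**2 * Gamma0**2)) ** (1.0 / 3.0)

# Assumed fiducial values: E0 = 1e53 erg, n = 1 cm^-3, Gamma0 = 300
print(f"R_dec = {r_dec(1e53, 1.0, 300.0):.2e} cm")  # ~ a few x 10^16 cm
```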
Most analytic estimates of the afterglow evolution with the purpose of modeling data are developed for the deceleration phase, where the self-similar BM76 solution for adiabatic blast-waves is adopted [38,47,54]. Since VHE emission can be detected at quite early times (a few tens of seconds), we are interested also in the description of the coasting phase and in a proper treatment of the transition between coasting and deceleration.
In the following, to derive the evolution of the bulk Lorentz factor we adopt the approach proposed by [55]. This method allows one to describe the hydrodynamics of a relativistic blast-wave expanding into a medium with an arbitrary density profile ρ(R) and composition (i.e. enriched by pairs), and the transition from the free expansion of the ejecta to the deceleration phase, taking into account the role of radiative and adiabatic losses. The internal structure is neglected (homogeneous shell approximation), and the Lorentz factor Γ considered is that of the fluid just behind the shock front. In the deceleration phase, the self-similar solutions derived in BM76 are recovered by this method, both for the adiabatic and the fully radiative cases, and for constant and wind-like density profiles of the external medium. The presented approach also allows one to introduce a time-varying radiative efficiency, resulting either from a change of ε_e with time or from a change in the radiative efficiency of the electrons. The equations reported here are valid after the reverse shock has crossed the ejecta. Corrections to the hydrodynamics before the reverse-shock crossing time can be found in [55].
Equation describing the evolution of the bulk Lorentz factor
The aim is to derive an equation describing the change dΓ of the bulk Lorentz factor of the fluid just behind the shock in response to the collision with a mass dm(R) = 4πR²ρ(R)dR encountered when the shock front moves from a distance R to R + dR, with ρ being the mass density. The change in Γ is determined by the dissipation of bulk kinetic energy, the conversion of internal energy back into bulk motion, and the injection of energy into the blast-wave. The latter is sometimes invoked to explain plateau phases in the early X-ray afterglow or flux rebrightenings [56][57][58]. The following treatment neglects energy injection, which however can easily be incorporated in this kind of approach.
To write the equation for energy conservation, from which dΓ/dR can be derived, we first need to recall how the energy density transforms under Lorentz transformations. In the following, we denote quantities measured in the frame comoving with the shocked fluid (comoving frame), with a prime, to distinguish them from quantities measured in the frame of the progenitor star (rest frame, without a prime).
The energy density in the comoving frame is u′ = u′_int + ρ′c², where u′_int is the comoving internal energy density and ρ′ is the comoving mass density. Applying Lorentz transformations, u = (u′ + p′)Γ² − p′, where p′ is the pressure, related to the internal energy density by the equation of state p′ = (γ̂ − 1) u′_int = (γ̂ − 1)(u′ − ρ′c²), where γ̂ is the adiabatic index of the shocked plasma. The energy density is then given by u = u′_int (γ̂Γ² − γ̂ + 1) + ρ′c²Γ², which shows how the internal energy and rest mass density transform. The total energy in the progenitor frame will be E = uV = uV′/Γ, where V is the shell volume in the progenitor frame, and can be expressed as:

E = Γ (M₀ + m) c² + Γ_eff E′_int   (1)

where:

Γ_eff = (γ̂Γ² − γ̂ + 1)/Γ   (2)

which properly describes the Lorentz transformation of the internal energy. Here, M = M₀ + m = ρ′V′ is the sum of the ejecta mass M₀ = E₀/Γ₀c² and of the swept-up mass m(R), and E′_int = (u′ − ρ′c²)V′ is the comoving internal energy. The adiabatic index can be parameterized as γ̂ = (4 + Γ⁻¹)/3 to obtain the expected limits γ̂ ≃ 4/3 for Γ ≫ 1 and γ̂ ≃ 5/3 for Γ → 1. The majority of analytical treatments use Γ instead of Γ_eff, which implies an error of up to a factor of 4/3 in the ultra-relativistic limit [55].
The blast-wave energy E in Eq. 1 can change due to (i) the rest mass energy dm c² collected from the medium, (ii) radiative losses dE_rad = Γ_eff dE′_rad, and (iii) injection of energy. Ignoring possible episodes of energy injection into the blast-wave, the equation of energy conservation in the progenitor frame is:

dE = c² dm + Γ_eff dE′_rad   (3)

The overall change in the comoving internal energy dE′_int results from the sum of three contributions:

dE′_int = dE′_sh + dE′_ad + dE′_rad   (4)

The first contribution, dE′_sh = (Γ − 1) dm c², is the random kinetic energy produced at the shock as a result of the interaction with an element dm of circum-burst material: as pointed out by BM76, in the post-shock frame, the average kinetic energy per unit mass dE′_sh/dm is constant across the shock, and equal to (Γ − 1)c². The second term in Eq. 4, dE′_ad, is the internal energy lost due to adiabatic expansion, which leads to a conversion of random energy back into bulk kinetic energy. The third term, dE′_rad, accounts for radiative losses. From Eq. 3, it follows that the variation of the Lorentz factor is:

dΓ/dR = − [ (Γ_eff + 1)(Γ − 1) c² (dm/dR) + Γ_eff (dE′_ad/dR) ] / [ (M₀ + m) c² + E′_int (dΓ_eff/dΓ) ]   (5)

from which the evolution Γ(R) of the bulk Lorentz factor of the fluid just behind the shock can be derived as a function of the shock-front radius. The term Γ_eff dE′_ad/dR, accounting for adiabatic losses, allows one to describe the re-acceleration of the fireball: this contribution, usually neglected, becomes important only when the density decreases faster than ρ ∝ R⁻³. To evaluate Eq. 5 it is necessary to first specify dE′_ad and E′_int.
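As a simple illustration of the coasting-to-deceleration transition governed by Eq. 5, the sketch below integrates the dynamics in the fully radiative limit (E′_int = 0 and dE′_ad = 0, i.e. Eq. 18 below), where Eq. 5 closes without tracking the internal energy. This is a toy calculation with assumed fiducial parameters; a general treatment must also evaluate E′_int and dE′_ad via Eqs. 13 and 14.

```python
# Toy integration of dGamma/dR = -(Gamma_eff + 1)(Gamma - 1)(dm/dR)/(M0 + m)
# (the fully radiative limit of Eq. 5) for a constant-density medium.
import numpy as np

m_p, c = 1.6726e-24, 2.9979e10
E0, Gamma0, n = 1e53, 300.0, 1.0        # assumed fiducial values: erg, -, cm^-3
M0 = E0 / (Gamma0 * c**2)               # ejecta mass [g]
rho = n * m_p

def gamma_hat(G):
    # adiabatic index interpolation, gamma_hat = (4 + 1/Gamma)/3
    return (4.0 + 1.0 / G) / 3.0

def gamma_eff(G):
    # effective Lorentz factor of Eq. 2
    gh = gamma_hat(G)
    return (gh * G**2 - gh + 1.0) / G

R = np.logspace(14, 18, 40_000)         # shock radius grid [cm]
m = (4.0 / 3.0) * np.pi * R**3 * rho    # swept-up mass [g]
G = np.empty_like(R)
G[0] = Gamma0
for i in range(len(R) - 1):
    dm = m[i + 1] - m[i]
    dG = -(gamma_eff(G[i]) + 1.0) * (G[i] - 1.0) * dm / (M0 + m[i])
    G[i + 1] = max(G[i] + dG, 1.0 + 1e-9)

# Expect coasting (Gamma ~ Gamma0) followed by deceleration beyond ~10^16 cm.
for Rq in (1e15, 1e16, 1e17, 1e18):
    print(f"R = {Rq:.0e} cm -> Gamma ~ {np.interp(Rq, R, G):.1f}")
```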
Internal energy and adiabatic losses
In specific cases, the adiabatic losses and the internal energy content can be expressed in an analytic form. The following treatment to estimate the adiabatic losses and the internal energy content of the blast-wave assumes that, right behind the shock, the freshly shocked electrons instantaneously radiate a fraction ε_rad of their internal energy and then cool only due to adiabatic losses [55]. By assuming that the accelerated electrons promptly radiate at the shock and then evolve adiabatically, one is implicitly considering either the fast cooling regime or a quasi-adiabatic regime, in which case the radiative losses do not affect the shell dynamics.
Defining ε_e as the fraction of the energy dE′_sh dissipated by the shock that is gained by the leptons, the mean random Lorentz factor of the post-shock leptons becomes (for a more detailed discussion see Section 2.2):

γ_acc,e − 1 = (Γ − 1) ε_e / µ_e   (6)

Here, µ_e = ρ′_e/ρ′ is the ratio between the mass density ρ′_e of shocked electrons and positrons (simply "electrons" from now on) and the total mass density of the shocked matter ρ′. In the absence of electron-positron pairs µ_e = m_e/(m_e + m_p) ≃ m_e/m_p.
Leptons then radiate a fraction ε_rad of their internal energy, i.e., the energy lost to radiation is dE′_rad = −ε_rad ε_e dE′_sh = −ε dE′_sh, with ε ≡ ε_rad ε_e being the overall fraction of the shock-dissipated energy that goes into radiation. After radiating a fraction ε_rad of their internal energy, the mean random Lorentz factor of the freshly shocked electrons decreases down to:

γ_rad,e − 1 = (1 − ε_rad)(Γ − 1) ε_e / µ_e   (7)

The assumption of instantaneous radiative losses is verified in the fast cooling regime (ε_rad ∼ 1), which is required (but not sufficient) to have ε ∼ 1 (i.e., a fully radiative blast-wave). In the opposite case ε_rad ≪ 1, the evolution is nearly adiabatic (ε ≪ 1), regardless of the value of ε_e, and the details of the radiative cooling processes are likely to be unimportant for the shell dynamics. The case with intermediate values of ε_rad and ε is harder to treat analytically, since the electrons shocked at radius R may continue to emit copiously also at larger distances, affecting the blast-wave dynamics.
A similar treatment can be adopted for protons: if protons gain a fraction ε_p of the energy dissipated by the shock (with ε_p = 1 − ε_e − ε_B), their mean post-shock Lorentz factor will be:

γ_acc,p − 1 = (Γ − 1) ε_p / µ_p   (8)

where µ_p = ρ′_p/ρ′ is the ratio between the mass density of shocked protons ρ′_p and the total shocked mass density ρ′. In the standard case, when pairs are absent, µ_p ≃ 1. Since the proton radiative losses are negligible, the shocked protons will lose their energy only due to adiabatic cooling. Adiabatic losses can be computed starting from dE′_int = −p′ dV′, where p′ is the pressure in the comoving frame. For N particles with mean random Lorentz factor γ, the internal energy density is:

u′_int = (N/V′)(γ − 1) m c²   (9)

The radial change of the Lorentz factor, as a result of expansion losses, is:

(dγ/dR)_ad = −(1/3)(γ − 1/γ)(d ln V′/dR)   (10)

To estimate the adiabatic losses, let us assume that the shell comoving volume scales as V′ ∝ R³/Γ, corresponding to a shell thickness in the progenitor frame ∼ R/Γ². This scaling is correct for both relativistic and non-relativistic shocks in the decelerating phase (BM76). For re-accelerating relativistic shocks, Shapiro [59] showed that the thickness of the region containing most of the blast-wave energy is still ∼ R/Γ². For the sake of simplicity, changes in the comoving volume due to a time-varying adiabatic index or radiative efficiency are neglected. If the scaling V′ ∝ R³/Γ is assumed, the equation can be further developed analytically, and reads:

(dγ/dR)_ad = −(γ − 1/γ) [ 1/R − (1/3Γ)(dΓ/dR) ]   (11)

The comoving Lorentz factor at radius R, for a particle injected with γ(r) when the shock radius was r, will be:

[γ²(R) − 1]^{1/2} = [γ²(r) − 1]^{1/2} (r/R) [Γ(R)/Γ(r)]^{1/3}   (12)

where γ(r) is given by γ_rad,e(r) (Eq. 7) for leptons, and by γ_acc,p(r) (Eq. 8) for protons. Considering the proton and lepton energy densities separately, the comoving internal energy at radius R will be:

E′_int(R) = c² ∫₀^R { µ_e [γ_e(R, r) − 1] + µ_p [γ_p(R, r) − 1] } (dm/dr) dr   (13)

With the help of Eq. 12, one can explicitly find E′_int(R) and insert it in Eq. 5.
The other term needed in Eq. 5 is dE′_ad/dR. First, we have derived (dγ/dR)_ad for a single particle. Now, integrating over the total number of particles, again considering protons and leptons separately, one obtains:

dE′_ad/dR = c² ∫₀^R { µ_e (dγ_e(R, r)/dR)_ad + µ_p (dγ_p(R, r)/dR)_ad } (dm/dr) dr   (14)

In Eqs. 13 and 14, it is assumed that only the swept-up matter is subject to adiabatic cooling, i.e., that the ejecta particles are cold.
As long as the shocked particles remain relativistic, the equations for the comoving internal energy and for the adiabatic expansion losses assume simpler forms: for γ ≫ 1, Eq. 12 gives γ(R, r) ≃ γ(r)(r/R)[Γ(R)/Γ(r)]^{1/3}, so that:

E′_int(R) ≃ (c²/R) Γ^{1/3}(R) ∫₀^R (Γ(r) − 1)(ε_p + ε_e − ε) r Γ^{−1/3}(r) (dm/dr) dr   (15)

dE′_ad/dR ≃ −[ 1/R − (1/3Γ)(dΓ/dR) ] E′_int(R)   (16)

In the absence of significant magnetic field amplification, ε_p + ε_e ≃ 1 so that ε_p + ε_e − ε ≃ 1 − ε, and the radiative processes of the blast-wave are entirely captured by the single efficiency parameter ε. In the fast cooling regime ε_rad ∼ 1 and ε ≃ ε_e. In this case the term ε_p + ε_e − ε reduces to ε_p, meaning that, regardless of the amount of energy gained by the electrons, in the fast cooling regime the adiabatic losses are dominated by the protons, since the electrons lose all their energy to radiation.
Evaluating these expressions for adiabatic blast-waves in a power-law density profile ρ ∝ R⁻ˢ, where Γ ∝ R^{−(3−s)/2} as in the adiabatic BM76 solution, yields closed analytic forms for E′_int and dE′_ad.
In the fully radiative regime ε = 1, which implies E′_int = 0 and dE′_ad = 0, Eq. 5 reduces to:

dΓ/dR = −(Γ_eff + 1)(Γ − 1)(dm/dR) / (M₀ + m)   (18)

which describes the evolution of a momentum-conserving (rather than pressure-driven) snowplow. Replacing Γ_eff → Γ, the solution of this equation coincides with the result by BM76.
Since the model is based on the homogeneous shell approximation, the adiabatic solution does not recover the correct normalization of the BM76 solution. In this treatment, the total energy of a relativistic decelerating adiabatic blast-wave in a power-law density profile exceeds the BM76 value by a factor (17 − 4s)/(9 − 2s), so that the BM76 normalization can be recovered by multiplying the density of external matter in Eqs. 5, 13 and 14 by the factor (9 − 2s)/(17 − 4s). To smoothly interpolate between the adiabatic and the radiative regime, a correction factor interpolating in ε between these two limits should be adopted (Eq. 19). No analytic model properly captures the transition between an adiabatic relativistic blast-wave and the momentum-conserving snowplow as ε increases from zero to unity. The simple interpolation in Eq. 19 joins the fully adiabatic BM76 solution with the fully radiative momentum-conserving snowplow.
In summary, Eqs. 5, 13 and 14, complemented with the correction in Eq. 19 (which should be applied to every occurrence of the external density and swept-up matter), completely determine the evolution of the shell Lorentz factor Γ as a function of the shock radius R.
Relativistic shock acceleration
The spectral shape of the afterglow emission is well described by power-laws over a wide energy range (from radio to GeV-TeV). This is the clear manifestation of the presence of an electron population that has been accelerated into a power-law energy distribution. In GRB afterglows, the main candidate to explain the accelerated non-thermal particles is a Fermi-like mechanism that operates with the same general principles as non-relativistic diffusive shock acceleration: particles are scattered back and forth across the shock front by magnetic turbulence and gain energy at each shock crossing. The particles themselves are thought to be responsible for triggering the magnetic instability that produces the turbulent field governing their acceleration. The outcome of this acceleration process is determined by the composition of the ambient medium (an electron-proton plasma in the case of GRB forward shocks), the fluid Lorentz factor (Γ_GRB ≫ 1, decreasing to non-relativistic velocities only after several weeks or months), and the magnetization σ (i.e., the ratio between Poynting and kinetic flux in the pre-shocked fluid, σ = B²/(4π m_p n c²)), with B being the magnetic field strength. For GRB forward shocks, the magnetization is low, around 10⁻⁹ in the interstellar medium and in any case below 10⁻⁵ even for a magnetized circumstellar wind.
In this section we summarise the present understanding of particle acceleration and magnetic field generation in electron-proton, ultra-relativistic, weakly magnetized shocks. The statements and considerations reported in this section refer specifically to this case (which is the one relevant for forward external shocks in GRBs) and might not be valid for magnetized plasma and/or mildly-relativistic flows and/or electron-positron plasma.
In general, the information that one would extract from theoretical/numerical investigations and compare with observations are: i) the spectral shape of the emitting electrons (i.e., the minimum and maximum Lorentz factors γ_min and γ_max and the spectral index p), ii) the acceleration efficiency (i.e., the fraction of electrons ξ_e and the fraction of energy ε_e in the non-thermal population), and iii) the strength of the self-generated magnetic field, usually quantified in terms of the fraction ε_B of the shock-dissipated energy conveyed to the magnetic field. In particular, in order to compare with observations, the relevant ε_B is the one in the downstream, in the region where radiative cooling takes place and the emission is produced.
After revisiting the state-of-the-art of the theoretical understanding (for recent reviews, see [40,60]), we discuss how particle acceleration and magnetic field amplification are incorporated in GRB afterglow modeling, and then we comment on the constraints on the above-mentioned parameters as inferred from the comparison between the model and the observations.
Inputs from theoretical investigations
Analytical approaches and Monte Carlo simulations generally rely on the assumption that electromagnetic waves, providing the scattering centers that regulate and govern the acceleration, are present on both sides of the shock, with a given strength and spectrum, so that the Fermi mechanism can operate. The particle distribution is then evolved under some assumption on the scattering process (such as diffusion in pitch angle), and considering a test-particle approximation (i.e. the high-energy particles do not modify the properties of the waves).
The main success of these approaches is the verification that under these conditions power-law spectra are indeed produced, and the predicted spectral index is in very good agreement with observations of afterglow radiation from GRBs [61]. The spectral index has been calculated for different assumptions on the equation of state and the diffusion prescription, and for a wide range of shock velocities [61]. A quasi-universal value p ≃ 2.2 − 2.3 is found in the ultra-relativistic limit. Figure 1 shows a comparison between analytical and numerical results as a function of the shock velocity for three different types of shocks (see [61] for details). In the ultra-relativistic limit (γβ ≫ 1), the estimates of the spectral slopes converge to a universal value p = s_p − 2 ∼ 2.2.
The investigation of relativistic shocks is complemented by particle-in-cell (PIC) simulations, where the non-linear coupling between particles and self-generated magnetic turbulence is captured from first principles.
The limitations of this technique are imposed by the computation time: for accuracy and stability, PIC simulations need to resolve the electron plasma skin depth c/ω_pe of the incoming electrons (where ω_pe = (4πe²n_e/m_e)^{1/2} is the plasma oscillation frequency of the upstream plasma, n_e is the proper density, and m_e is the electron mass), which is orders of magnitude smaller than the scales of astrophysical interest. It is then difficult to follow the evolution on time-scales and length-scales relevant for astrophysics. Low dimensionality (1D or 2D instead of 3D) and small ion-to-electron mass ratios are additional limitations imposed by the computation time. Results of PIC simulations need then to be extrapolated to bridge the gap between the micro-physical scales and the scales of interest. With these caveats in mind, we summarize here the main achievements.
PIC simulations have shown that magnetic turbulence can be efficiently (ε_B ∼ 0.01 − 0.1) generated by the accelerated particles streaming ahead of the shock (in the so-called precursor region), where they generate strong magnetic waves which in turn scatter the particles back and forth across the shock. In particular, in the weakly magnetised shocks discussed in this section, the dominant plasma instability is thought to be the so-called Weibel (or current filamentation) instability [62], generated by the counter-streaming of the accelerated particles against the background plasma in the precursor region [42,63]. PIC simulations have shown that as long as the fluid is ultra-relativistic (Γ > 5), the main parameter governing the acceleration is the magnetization σ, i.e. the efficiency of the process is insensitive to Γ, as the precursor decelerates the incoming background plasma. An example of downstream particle spectra derived by PIC simulations is shown in Figure 2 [42]. The ion and electron spectra are shown for a 2D simulation with Γ = 15, ion-to-electron mass ratio m_i/m_e = 25, and σ = 10⁻⁵. The temporal evolution is followed up to t = 2500 ω_pi⁻¹. The formation of a non-thermal tail is clearly visible. The downstream non-thermal population is found to include around ξ_e ≃ 3% of the electrons, carrying ε_e ≃ 10% of the energy. The spectral index is around p ∼ 2.5. The acceleration proceeds similarly for electrons and ions, since they enter the shock in equipartition (i.e. their relativistic inertia is comparable) as a result of efficient pre-heating in the self-excited turbulence in the precursor.

Figure 1. Spectral index (s_p = p + 2) of the electrons accelerated in shocks as a function of the shock velocity. Curves refer to the equation derived by [61] under the hypothesis of isotropic, small-angle scattering and are a generalization of the non-relativistic formula. Symbols show the comparison with numerical studies. Different curves refer to different assumptions on the type of shock (see [61] for details): in all cases, the value of the spectral index approaches the same value s_p ∼ 4.2 (corresponding to p ∼ 2.2) in the ultra-relativistic limit.
The maximum energy γ_max increases proportionally to t^{1/2} (see inset in Figure 2), slower than the commonly adopted Bohm rate [64], in which case γ_max ∝ t. Extrapolating the γ_max behaviour to the relevant time-scales, and considering that synchrotron cooling will limit the acceleration of high-energy particles, the electron maximum Lorentz factor is found to reach values γ_max ∼ 10⁷ in the early phase of GRB afterglows, corresponding to synchrotron photon energies around 1 GeV, roughly consistent with observations. All these results on the particle spectrum are obtained on time-scales that are too short for the supra-thermal particles to reach a steady state, and their extrapolation to longer time-scales is not trivial.
A still debated open question (because computationally demanding) is how the magnetization evolves downstream. PIC simulations have found values of ε_B ∼ 0.1 − 0.01 in the vicinity of the shock front. How this turbulence evolves on longer time-scales is still a matter of debate. The turbulence is expected to decay rapidly, on time-scales orders of magnitude shorter than the synchrotron cooling time. The magnetization is then predicted to be very different close to the shock and in the region where particle cooling takes place. Electrons would then cool in a region of weak magnetic field [65,66]. These considerations suggest that it might not be correct to define a single magnetization ε_B in GRB modeling, infer its value from observations and compare with predictions from PIC simulations referring to the magnetization near the shock front. Magnetization values inferred from observations most likely probe a region downstream, far from the shock front (see Section 3.3 for a discussion).

Figure 2. Downstream ion and electron spectra from the 2D PIC simulation described in the text; the evolution is followed until t = 2500 ω_pi⁻¹. From [42]. ©AAS. Reproduced with permission.

Theoretical efforts are fundamental to provide physically motivated inputs for the phenomenological parameters included in the afterglow model. The large number of unknown model parameters, coupled with the limited number of constraints provided by observations, implies that constraints from theory are of paramount importance for a correct interpretation of the emission in GRBs and for grasping the origin of their non-thermal emission, from radio to TeV energies. On the other hand, despite the huge progress in the theoretical understanding of relativistic shock acceleration, the theory is not yet at the point of providing robust inputs for modeling observations. It is then clear how the two approaches must be combined to gain knowledge on the micro-physics of acceleration and magnetic field generation on the one hand, and on the origin of the radiative processes and the macro-physics of the emitting region (bulk Lorentz factor and energy content) of the sources on the other hand.
Description of shocks in GRB afterglow modeling
The theory of relativistic shock acceleration is applied to GRB afterglows by introducing several unknown parameters in the model. These are the fractions ε_e and ε_B of the dissipated energy gained by the accelerated particles and by the amplified magnetic field, the spectral index p of the accelerated particle spectrum, and the fraction ξ_e of particles which efficiently enter the Fermi mechanism and populate the non-thermal distribution.
Recalling that the shock-dissipated energy (in the comoving frame) is given by dE′_sh = (Γ − 1)dm c² (see Section 2.1), the corresponding energy density is u′_sh = (Γ − 1)ρ′c². From the shock jump conditions, the density in the comoving frame ρ′ is related to the density ρ of the unshocked medium (measured in the rest frame) by the equation:

ρ′ = 4Γρ   (20)

which is valid in both the ultra-relativistic and non-relativistic limits (see e.g., [35]). In the GRB afterglow scenario it is usually assumed that pairs are unimportant and then the density of protons and electrons is the same: n_p = n_e = n. This implies that the mass is dominated by protons: ρ = n m_p. In this case, the available energy density that will be distributed to the accelerated particles (electrons and protons) and to the magnetic field can be expressed as:

u′_sh = 4Γ(Γ − 1) n m_p c²   (21)

A fraction ε_B of this energy will be conveyed to the magnetic field:

B²/8π = ε_B u′_sh   (22)

from which it follows that the magnetic field strength B is:

B = [32π ε_B Γ(Γ − 1) n m_p]^{1/2} c   (23)

Similarly, for the accelerated electrons:

u′_e = ε_e u′_sh = ε_e 4Γ(Γ − 1) n m_p c² = ⟨γ⟩ m_e c² 4Γ ξ_e n   (24)

where ⟨γ⟩ is the average random Lorentz factor of the accelerated electrons:

⟨γ⟩ = (ε_e/ξ_e)(m_p/m_e)(Γ − 1)   (25)

The accelerated non-thermal electrons are assumed to have a power-law spectrum, as a result of shock acceleration. Their energy distribution can be described by a power-law N(γ)dγ ∝ γ⁻ᵖ dγ for γ_min ≤ γ ≤ γ_max, where γ_min is the minimum Lorentz factor of the injected electrons and γ_max is the maximum Lorentz factor at which electrons can be accelerated. To derive the relation between γ_min, γ_max and the model parameters, we consider the definition of the average Lorentz factor:

⟨γ⟩ = ∫ γ N(γ) dγ / ∫ N(γ) dγ   (26)

and solve the integrals. Equating 25 and 26 leads to (for p ≠ 1):

⟨γ⟩ = [(1 − p)/(2 − p)] (γ_max^{2−p} − γ_min^{2−p}) / (γ_max^{1−p} − γ_min^{1−p})   (27)

A simplified equation for γ_min can be obtained assuming that γ_max^{−p+2} ≪ γ_min^{−p+2}:

γ_min = [(p − 2)/(p − 1)] (ε_e/ξ_e)(m_p/m_e)(Γ − 1)   (28)

Since p is expected to be in the range 2 < p < 3, this condition is verified for γ_max ≫ γ_min. The minimum Lorentz factor is then not treated as a free parameter of the model, as it is calculated from Eq. 28 as a function of the free parameters ε_e, ξ_e and p. The prescription for the value of γ_max (for details see Section 3.5) usually relies on the condition that radiative losses between acceleration episodes equal the energy gains, where the energy gain proceeds at the Bohm rate. As mentioned in the previous section, PIC simulations however have shown that this might not be the case.
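The two prescriptions most often needed in practice, Eq. 23 for the downstream magnetic field and Eq. 28 for γ_min, are straightforward to evaluate numerically. The following sketch (with assumed fiducial parameter values, chosen only to illustrate typical magnitudes) makes them explicit:

```python
# Evaluating the microphysics prescriptions of Eqs. 23 and 28 for fiducial values.
import math

m_p, m_e, c = 1.6726e-24, 9.1094e-28, 2.9979e10

def B_field(Gamma, n, eps_B):
    """Downstream magnetic field, B = [32 pi eps_B Gamma (Gamma-1) n m_p]^(1/2) c (Eq. 23)."""
    return math.sqrt(32.0 * math.pi * eps_B * Gamma * (Gamma - 1.0) * n * m_p) * c

def gamma_min(Gamma, eps_e, xi_e, p):
    """Minimum injection Lorentz factor (Eq. 28), valid for 2 < p < 3."""
    return (p - 2.0) / (p - 1.0) * (eps_e / xi_e) * (m_p / m_e) * (Gamma - 1.0)

Gamma, n = 100.0, 1.0   # assumed bulk Lorentz factor and ambient density [cm^-3]
print(f"B         ~ {B_field(Gamma, n, eps_B=1e-4):.3f} G")
print(f"gamma_min ~ {gamma_min(Gamma, eps_e=0.1, xi_e=1.0, p=2.3):.0f}")
```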
A similar treatment can be adopted also for protons, simply substituting ε_e with ε_p, m_e with m_p, and assuming a power-law energy distribution with spectral index q. As a result, solving the equations under the assumption γ_max,p ≫ γ_min,p, the minimum Lorentz factor for protons can be derived as:

γ_min,p = [(q − 2)/(q − 1)] (ε_p/ξ_p)(Γ − 1)   (30)

The equation for γ_min and Eq. 23 for B, coupled with the description of the blast-wave dynamics presented in Section 2.1, provide all the necessary ingredients to derive the radiative output for a jet with energy E and initial bulk Lorentz factor Γ₀ expanding in a medium with density n(R). The derivation of the radiative output is detailed in Section 2.3. To conclude the discussion about particle acceleration, in the next section we anticipate which constraints can be inferred on the physics of particle acceleration from multi-wavelength observations, once the afterglow model is adopted.
Constraints to the acceleration mechanism provided by observations
Assuming that the accelerated particles have a power-law spectrum (dN_acc/dγ ∝ γ⁻ᵖ) and that the cooling is dominated by synchrotron radiation, the spectral slope p can be inferred from observations of the synchrotron spectrum and/or from the temporal decay of the light-curves, if observations are performed at frequencies higher than the typical frequency ν_m of photons emitted by electrons with Lorentz factor γ_min (this is correct both in the fast and in the slow cooling regime). The values of p estimated from afterglow modeling are spread over a wide range, from p ∼ 2 to p ∼ 3, suggesting that the spectrum of injected particles does not have a typical slope, at odds with theoretical predictions. The determination of p, however, suffers from the uncertainties on the spectral index inferred from optical and X-ray observations, where the observed spectra are subject to unknown dust and metal absorption. A derivation of p from the decay rate of the light-curves is also subject to the correct identification of the spectral regime, and partially also to the assumption on the density profile of the external medium, which is often unconstrained (see Section 3.2).
The typical value of ε_e inferred from afterglow modeling is around 0.1, meaning that 10% of the shock-dissipated energy is gained by the electrons, with values spanning from 0.01 up to large values, such as 0.8. Although this seems a large uncertainty, ε_e is perhaps the most well constrained parameter of the model, and is in good agreement with values predicted by numerical investigations [42]. For the fraction ε_B, on the contrary, the inferred values vary over a very wide range, typically from 10⁻⁵ to 10⁻¹ [67][68][69]. Recent studies that incorporate Fermi-LAT GeV observations [24,70] have shown that the typical values estimated for ε_B can be even smaller, in the range ∼10⁻⁷−10⁻². These values are needed in order to model the GeV radiation self-consistently with the radiation detected at lower frequencies, with repercussions on the estimates of the other parameters, such as n and E. These small values of ε_B needed to model the radiation have been tentatively interpreted as the sign of turbulence decay in the downstream [65,66]. As a consequence, even though the turbulence is strong (ε_B ≃ 0.1) in the vicinity of the shock, where particles are accelerated, it becomes weaker at larger distances, in the region where particles cool (see Section 3.3). Small values of ε_B are confirmed by the modeling of the recent TeV detections of afterglow radiation from GRBs ([71,72], see Section 4).
Another parameter that one would like to constrain from observations is the fraction of particles ξ_e that are injected into the Fermi process. In the vast majority of studies, this parameter is not included (i.e., it is implicitly assumed that all the electrons are accelerated, ξ_e = 1). This parameter is indeed difficult to constrain, as it is degenerate with all the other parameters [73].
Observations so far have not been able to identify the location of a high-energy cutoff in the synchrotron spectrum, which would reveal the maximal energy of the synchrotron photons and hence the maximum energy γ_max of the accelerated electrons. Observations by Fermi-LAT are in general consistent with a single power-law extending up to at least 1 GeV. Photons with energies in excess of 1 GeV have been detected from several GRBs, the record holder for Fermi-LAT being a 95 GeV photon [74]. These photons cannot be safely associated with synchrotron radiation on the basis of spectral analysis, as their paucity makes it difficult to assess whether they are consistent with the power-law extrapolation of the synchrotron spectrum or are indicative of the rise of a distinct spectral component. In any case, the Fermi-LAT detections suggest that synchrotron photons should be produced at least up to a few GeV. This is consistent with the limit commonly invoked for particle acceleration: if the acceleration proceeds at the Bohm rate (t_acc ≃ r_L/c, with r_L = E/eB being the Larmor radius) and is limited by synchrotron cooling (t_syn ≃ 6π m_e c/σ_T B² γ), then γ_max ∼ 10⁷ − 10⁸ can be reached. Even though this does not necessarily imply that the acceleration must proceed at the Bohm limit, the value of γ_max inferred from the detection of GeV photons is quite large and barely consistent with what is found by PIC simulations. Whether or not the observations are in tension with the present derivation of γ_max from PIC simulations and theoretical arguments strongly depends on a clear identification of the origin of photons in the GeV-TeV energy range. Present and future observations with Imaging Atmospheric Cherenkov Telescopes (IACTs) are the main candidates to shed light on this issue.
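The γ_max estimate quoted above follows from balancing t_acc ≃ r_L/c against t_syn, which gives γ_max = (6πe/σ_T B)^{1/2} and a maximum synchrotron photon energy that is independent of B. The sketch below (an illustration with an assumed field strength, not a fit to data) makes this explicit:

```python
# Bohm-rate acceleration balanced against synchrotron cooling:
# gamma m_e c / (e B) = 6 pi m_e c / (sigma_T B^2 gamma)  =>  gamma_max = sqrt(6 pi e / (sigma_T B))
import math

e_esu   = 4.8032e-10   # electron charge [esu]
sigma_T = 6.6524e-25   # Thomson cross section [cm^2]
m_e, c, h = 9.1094e-28, 2.9979e10, 6.6261e-27

def gamma_max(B):
    return math.sqrt(6.0 * math.pi * e_esu / (sigma_T * B))

def nu_syn(gamma, B):
    """Characteristic synchrotron frequency (comoving), ~ gamma^2 e B / (2 pi m_e c)."""
    return gamma**2 * e_esu * B / (2.0 * math.pi * m_e * c)

B = 1.0  # gauss, assumed for illustration
g = gamma_max(B)
E_ph_MeV = h * nu_syn(g, B) / 1.602e-12 / 1e6
print(f"gamma_max ~ {g:.2e}; comoving synchrotron cutoff ~ {E_ph_MeV:.0f} MeV")
# The cutoff is of order 10^2 MeV in the comoving frame, independent of B;
# it is boosted by the bulk Lorentz factor in the observer frame.
```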
Derivation of the radiative output
The expected radiative output can be estimated by means of analytical approximations, which provide prescriptions for the location of the synchrotron self-absorption frequency ν_sa, the characteristic frequency ν_m emitted by electrons with Lorentz factor γ_min, the cooling frequency ν_c emitted by electrons with Lorentz factor γ_c, and the overall synchrotron flux [38,47]. In these approaches, the synchrotron spectrum is in general approximated with power-laws connected by sharp breaks, but more sophisticated analytical approximations of numerically derived synchrotron spectra have also been proposed [54]. The associated SSC component in the Thomson regime [48] and corrections to be applied to the synchrotron and SSC spectra to account for the effects of the Klein-Nishina cross section [75] (see Section 2.3.1) are also available in the literature. These prescriptions are usually developed for the deceleration phase, when the Blandford-McKee solution [35] for the blast-wave dynamics applies, i.e., as long as the blast-wave is still relativistic. These models take as input parameters the kinetic energy content of the blast-wave E_k, the external density n(R) = n₀R⁻ˢ (with s = 0 or s = 2), the fractions of shock-dissipated energy gained by the electrons (ε_e) and by the amplified magnetic field (ε_B), and the spectral slope p of the accelerated electrons. During the deceleration phase, the initial bulk Lorentz factor Γ₀ does not play any role, but its value determines the radius (or time) at which the deceleration begins.
An alternative approach to estimate the expected spectra and their evolution in time consists in numerically solving the differential equation describing the evolution of the particle spectra and estimating the associated emission [76][77][78]. In this section we describe a radiative code that simultaneously solves the time evolution of the electron and photon distributions. The code has been adopted, e.g., for the modeling of GRB 190114C presented in [71].
The temporal evolution of the particle distribution is described by the differential equation:

∂N(γ, t′)/∂t′ = −∂/∂γ [ γ̇(γ, t′) N(γ, t′) ] + Q(γ, t′)   (31)

where γ̇ = ∂γ/∂t′ is the rate of change of the Lorentz factor γ of an electron, caused by adiabatic, synchrotron and SSC losses and by energy gains due to synchrotron self-absorption. In the SSC mechanism, the synchrotron photons produced by the electrons in the emission region act as seed photons that are up-scattered to higher energies by the same population of electrons. This mechanism generates a very high energy spectral component, which is the target of searches by IACTs such as MAGIC and H.E.S.S. In principle, the up-scattering of an external population of seed photons can also be considered and included in the cooling term, but here we will ignore this mechanism (external Compton) and focus only on SSC. The source term Q(γ, t′) = Q_acc(γ, t′) + Q_pp(γ, t′) describes the injection of freshly accelerated particles (Q_acc(γ, t′) = dN_acc/dγdt′) and the injection of pairs Q_pp(γ, t′) produced by photon-photon annihilation.
In the next sections we detail each of the terms included in Eq. 31 and how to estimate the synchrotron and SSC emission. To solve the equation, an implicit finite-difference scheme based on the discretization method proposed by [79] can be adopted.
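To illustrate the numerical strategy, the sketch below performs a backward-Euler (implicit), upwind-in-γ update of Eq. 31 with synchrotron cooling only and a toy power-law injection term. It is a simplified stand-in for the full discretization of [79], not the code described in the text, and all parameter values are assumptions chosen for illustration.

```python
# Implicit upwind update of dN/dt' = d(|gdot| N)/dgamma + Q (cooling only).
import numpy as np

sigma_T, m_e, c = 6.6524e-25, 9.1094e-28, 2.9979e10

def cooling_rate(gamma, B):
    """Synchrotron |dgamma/dt'| = sigma_T B^2 gamma^2 beta^2 / (6 pi m_e c)."""
    beta2 = 1.0 - 1.0 / gamma**2
    return sigma_T * B**2 * gamma**2 * beta2 / (6.0 * np.pi * m_e * c)

def implicit_step(N, gamma, Q, B, dt):
    """One backward-Euler step, swept from high to low gamma (cooling flows downward)."""
    gdot = cooling_rate(gamma, B)
    dg = np.diff(gamma)
    N_new = np.empty_like(N)
    # top bin: no inflow from above
    N_new[-1] = (N[-1] + dt * Q[-1]) / (1.0 + dt * gdot[-1] / dg[-1])
    for j in range(len(gamma) - 2, -1, -1):
        inflow = gdot[j + 1] * N_new[j + 1] / dg[j]
        N_new[j] = (N[j] + dt * (Q[j] + inflow)) / (1.0 + dt * gdot[j] / dg[j])
    return N_new

gamma = np.logspace(2, 8, 300)
N = np.zeros_like(gamma)
Q = np.where(gamma > 1e4, gamma**-2.3, 0.0)   # toy injection, p = 2.3
for _ in range(1000):                          # evolve for 10^4 s with dt = 10 s
    N = implicit_step(N, gamma, Q, B=1.0, dt=10.0)
# Above the cooling break the steady-state slope steepens from p to p + 1.
```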
Synchrotron and SSC cooling
The synchrotron power emitted by an electron with Lorentz factor γ depends on the pitch angle, i.e., the angle between the electron velocity and the magnetic field line. In the following, we assume that the electrons have an isotropic pitch angle distribution and we use equations that are averaged over the pitch angle (e.g., [80]). The synchrotron cooling rate of an electron with Lorentz factor γ is given by:

γ̇_syn = −σ_T B² γ² β² / (6π m_e c)   (32)

The cross section for the inverse Compton mechanism is constant and equal to the Thomson cross section (σ_T) as long as the photon energy in the frame of the electron is smaller than the electron rest mass energy m_e c². For higher photon energies, the cross section decreases as a function of energy and is described by the Klein-Nishina (KN) cross section. To estimate SSC losses, we adopt the formulation proposed in [81], which is valid in both regimes, defining an SSC scattering kernel (Eq. 33) that depends on the energies ε̃ and ε of the photons (normalized to the electron rest mass energy) before and after the scattering process, respectively. The two terms of Eq. 33 account respectively for the down-scattering (ε < ε̃) and the up-scattering (ε > ε̃) process. The energy loss term for SSC, γ̇_SSC, is then obtained by integrating this kernel over the seed synchrotron photon field (Eq. 35).
Adiabatic cooling
As discussed in Section 2.1, particles lose their energy adiabatically due to the spreading of the emission region. This energy loss term should be inserted in the kinetic equation governing the evolution of the particle distribution. To derive the adiabatic losses, we rewrite Eq. 10 as a function of the energy losses dγ in a comoving time dt′:

γ̇_ad = −(1/3)(γ − 1/γ)(d ln V′/dt′) = −(γβ²/3)(1/V′)(dV′/dt′)   (36)

with β being the random velocity of the particles in units of c. The comoving volume V′ of the emission region can be estimated considering that the contact discontinuity moves away from the shock at a velocity c/3. After a comoving time t′ = ∫ dR/Γ(R)c, the comoving volume is:

V′ ≃ 4πR² c t′/3   (37)

and:

γ̇_ad = −(γβ²/3)(1/t′ + 2Γc/R)   (38)
Synchrotron self-absorption (SSA)
Electrons can re-absorb low energy photons before they escape from the source region. The absorption coefficient α′_ν can be expressed as [80]:

α′_ν = −1/(8π m_e ν′²) ∫ dγ P′(γ, ν′) γ² ∂/∂γ [ N(γ)/γ² ]   (39)

valid for any radiation mechanism at the emission frequency ν′, with P′(γ, ν′) being the specific power of electrons with Lorentz factor γ at frequency ν′, and assuming hν′ ≪ γ m_e c². Thus, the SSA mechanism mostly affects the low frequency range. Assuming a power-law distribution of electrons, this results in a modification of the low-frequency tail of the synchrotron spectrum (Eq. 40), where the relevant frequencies are ν_i = min(ν_m, ν_cool) and ν_SSA, the frequency below which the synchrotron flux is self-absorbed and the source becomes optically thick.
Synchrotron and Inverse Compton emission
Following [82], the synchrotron spectrum emitted by an electron with Lorentz factor γ, averaged over an isotropic pitch angle distribution, is:

P′(ν′, γ) ∝ x² { K_{4/3}(x) K_{1/3}(x) − (3x/5) [ K²_{4/3}(x) − K²_{1/3}(x) ] }   (41)

where x ≡ ν′ 4π m_e c/(6 e B γ²), and K_n are the modified Bessel functions of order n. The total power emitted at the frequency ν′ is obtained by integrating over the electron distribution:

P′_tot(ν′) = ∫ dγ N(γ) P′(ν′, γ)   (42)

The SSC radiation emitted by an electron with Lorentz factor γ is calculated by convolving the SSC kernel of Eq. 33 with the photon density n′_ν of synchrotron photons (Eq. 43), where the integration is performed over the entire synchrotron spectrum. Integration over the electron distribution then provides the total SSC emitted power at frequency ν′.
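For illustration, the pitch-angle-averaged kernel of Eq. 41 (as reconstructed above, so the expression should be treated as indicative; the overall normalization is omitted and only the spectral shape is shown) can be evaluated with standard Bessel-function routines:

```python
# Shape-only sketch of the pitch-angle-averaged synchrotron kernel,
# with x = nu' 4 pi m_e c / (6 e B' gamma^2) as defined in the text.
import numpy as np
from scipy.special import kv

def syn_kernel(x):
    k43, k13 = kv(4.0 / 3.0, x), kv(1.0 / 3.0, x)
    return x**2 * (k43 * k13 - 0.6 * x * (k43**2 - k13**2))

x = np.logspace(-4, 1, 200)
W = syn_kernel(x)
# Low-frequency limit ~ x^(1/3); exponential cutoff above x ~ 1.
print("kernel peaks at x ~", x[np.argmax(W)])
```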
Pair production
Pair production by photon-photon annihilation is particularly important for a correct estimate of the radiation spectrum in the GeV-TeV band. Indeed, some of the emitted VHE photons are lost due to their interaction with photons at lower energies (typically X-ray photons). As a result, the observed flux is attenuated and the resulting spectrum at VHE is modified. Here we follow the treatment presented in [83]. The cross section of the process σ_γγ as a function of β, the centre-of-mass speed of the electron and positron, is given by:

$$\sigma_{\gamma\gamma}(\beta) = \frac{3\sigma_T}{16}\,(1-\beta^2)\left[(3-\beta^4)\ln\frac{1+\beta}{1-\beta} - 2\beta(2-\beta^2)\right]$$

where:

$$\beta = \left[1 - \frac{2}{\omega_s\,\omega_t\,(1-\mu)}\right]^{1/2} \quad (45)$$

and ω_t = hν_t/m_e c² with ν_t being the target photon frequency, ω_s = hν/m_e c² with ν being the source photon frequency, and µ = cos φ, where φ is the scattering angle. Then, it is possible to derive the annihilation rate of photons into electron-positron pairs, R(x) (Eq. 46), where x ≡ ω_s ω_t. An accurate and simple approximation which takes into account both regimes is given by:

$$R(x) \simeq 0.652\,\sigma_T c\,\frac{x^2-1}{x^3}\,\ln x\; H(x-1) \quad (47)$$

where H(x − 1) is the Heaviside function [83]. The approximation accurately reproduces the behaviour near the peak at x_peak ∼ 3.7 and over the range 1.3 < x < 10⁴, which usually dominates during the calculations. A comparison between Eq. 46 and Eq. 47 is given in Figure 3, where the goodness of the approximation over the mentioned x range can be seen.

Figure 3. Comparison between the exact annihilation rate (Eq. 46) and the approximated formula (Eq. 47). The ratio between the two curves in the range 1.3 < x < 10⁴ is shown in the bottom panel; in this range the ratio differs from unity by less than ∼ 7%.

The impact of the flux attenuation due to the pair production mechanism on GRB spectra is estimated in terms of the optical depth τ_γγ. From its definition:

$$\tau_{\gamma\gamma}(\nu) = \Delta R \int d\nu_t\; n(\nu_t)\,\sigma_{\nu\,\nu_t}$$

where n(ν_t) is the number density of the target photons per unit volume and frequency, σ_{ν ν_t} is the cross section, and ∆R is the width of the emission region. Introducing the cross section in terms of the annihilation rate R(x) in its approximated form and integrating over all the possible target photon frequencies:

$$\tau_{\gamma\gamma}(\nu) = \frac{\Delta R}{c} \int d\nu_t\; n(\nu_t)\, R(\omega_s\,\omega_t)$$

where ν and ν_t are the frequencies of the source and of the target interacting photons. The pair production attenuation factor can then be introduced by simply multiplying the flux by a factor (1 − e^{−τ_γγ})/τ_γγ. This attenuation factor will modify the GRB spectrum, giving a non-negligible contribution especially in the VHE domain. An example of the modification of a GRB spectrum due to pair production can be seen in Figure 4, where the flux emitted in the afterglow external forward shock scenario by synchrotron and SSC radiation and the flux attenuation due to pair production have been calculated with a numerical code. For a set of quite standard afterglow parameters and assuming ∆R = R/Γ, the attenuation of the observed flux due to pair production becomes relevant above 0.2 TeV, reducing the flux by ∼ 30% at 1 TeV and by ∼ 70% at 10 TeV. Similar considerations can be made for the electron/positron production. Assuming that the electron and positron arise with equal Lorentz factors γ and that x_peak ∼ 3.7, a photon with energy ω_s ≫ 1 will mostly interact with a target photon of energy ω_t ≈ 3/ω_s. Then, from the energy conservation condition:

$$\gamma = \frac{\omega_s + \omega_t}{2} \simeq \frac{\omega_s}{2}$$

The e± production can be seen as an additional source term for the distribution of accelerated particles. As a result, an additional injection term Q_e^{pp}, to be inserted in the kinetic equation (equation 31), is calculated from the photon annihilation rate, with the pairs injected at γ ≃ ω_s/2.
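A minimal implementation of the approximated annihilation rate and of the resulting attenuation factor follows (Python, cgs units). The 0.652 σ_T c normalization is taken from the standard approximation in the literature and is an assumption here, since the explicit body of Eq. 47 is given in [83].

```python
import numpy as np

SIGMA_T = 6.6524e-25  # [cm^2]
C = 2.9979e10         # [cm/s]

def R_approx(x):
    """Approximate annihilation rate coefficient [cm^3/s] as a function of
    x = omega_s * omega_t (photon energies in units of m_e c^2); the
    Heaviside factor enforces the pair-production threshold x > 1."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m = x > 1.0
    out[m] = 0.652 * SIGMA_T * C * (x[m]**2 - 1.0) / x[m]**3 * np.log(x[m])
    return out

def attenuation(tau):
    """Flux attenuation factor (1 - exp(-tau)) / tau; -> 1 as tau -> 0."""
    tau = np.asarray(tau, dtype=float)
    return np.where(tau > 1e-6,
                    (1.0 - np.exp(-tau)) / np.maximum(tau, 1e-30),
                    1.0)

# The approximated rate peaks near x ~ 3.5, close to the x_peak ~ 3.7
# quoted for the exact rate:
xs = np.linspace(1.01, 20.0, 4000)
print(f"x_peak (approx.) ~ {xs[np.argmax(R_approx(xs))]:.2f}")
print(f"attenuation at tau = 2: {attenuation(2.0):.2f}")
```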
Comparison with analytical approximations
In order to compare results from the numerical method described in the previous section with the analytical prescriptions available in the literature, we give an example in Figure 5. The analytical prescriptions for the synchrotron and the SSC components are calculated following [38] and [48]. In [38] the synchrotron spectra and light-curves are derived assuming a power-law distribution of electrons in an expanding relativistic shock, cooling only by synchrotron emission. The dynamical evolution is described following the BM76 equations for an adiabatic blast-wave expanding in a constant density medium. The resulting emission spectrum (green dashed lines in Figure 5) is described by a series of sharp broken power-laws. The SSC component associated with the synchrotron emission was computed, as a function of the afterglow parameters, in [48]. In that work, calculations are performed assuming that the scatterings occur in the Thomson regime. Modifications to the synchrotron spectrum caused by strong SSC electron cooling are also detailed.
From the comparison proposed in Figure 5, it can be clearly seen that analytical and numerical results are in general in good agreement. Both curves follow the same behaviour except for the high-energy part of the SSC component. Here the KN scattering regime, which is not taken into account in the analytical approximation, becomes relevant. As a result, the numerical calculations differ from the analytical ones showing a peak and a cutoff in the SSC spectrum due to the KN effects.
Nevertheless, there are a few minor discrepancies between the two methods. The numerically-derived spectrum is very smooth around the break frequencies, with the result that the theoretically expected slope (e.g., the one predicted by the analytical approximations) is reached only in regions of the spectrum that lie far from the breaks, i.e. it is reached only asymptotically. This calls into question simple methods for discriminating among different regimes and different density profiles based on closure relations, which are relations between the spectral and the temporal decay indices [10,38]. Regarding the flux normalization, there are minor discrepancies between the numerical and analytical results. This is due to the fact that in analytical prescriptions it is assumed that the radiation is entirely emitted at the characteristic synchrotron frequency. On the contrary, in the numerical derivation, the full synchrotron spectrum of a single electron is summed over the whole electron distribution. Similar considerations apply to the SSC component when comparing with the analytical spectra. Moreover, the discrepancies observed between analytical and numerical SSC spectra are amplified by the differences observed in the target synchrotron spectra.

Figure 5. The results from the numerical code described in section 2.3 are shown with solid blue and red lines. This example shows the spectrum calculated at t = 10⁴ s for s = 0, p = 2.3, ϵ_e = 0.05.
In general this comparison shows that the numerical treatment is a powerful tool able to predict the multi-wavelength GRB emission more accurately than the analytical prescriptions. The latter, however, still give valid approximations of the overall spectral shape. The main limitation of analytical estimates arises when TeV observations are involved. The importance of KN corrections is evident in this band and should be properly treated for a correct interpretation of the TeV spectra, as will be shown in section 4.
Open Questions
As predicted by the basic standard model presented in the previous section, the afterglow emission is the result of particle acceleration and radiative cooling occurring in two different regions: the forward and the reverse shock. The temporal and spectral behaviour of the two emission components can be inferred once the jet/blastwave dynamics, the acceleration mechanisms, and the radiation processes are modeled (section 2). The general agreement between model predictions and observations convincingly proves that the long-lasting radio-to-GeV radiation is indeed produced in interactions between the ejecta and the external medium. Also, the radiative mechanisms involved and the nature of the emitting particles are well established, with synchrotron (and possibly SSC) emission from the accelerated electrons (either at the forward or reverse shock) being the source of the detected radiation.

Despite the general success of the external shock scenario, there are several long-standing open issues which represent a serious challenge for our present understanding of the afterglow emission and the GRB phenomenon in general. Moreover, even when observations seem to be in qualitative agreement with predictions, the extraction of the model parameters (which would give important feedback on our understanding of particle acceleration and GRB environments) is limited by the large degeneracy among parameters and by the lack of solid inputs from theoretical considerations.

Afterglow emission studies have not seen substantial progress in recent years, with observations and techniques largely unchanged since the launch of the Swift satellite. The recent discovery of TeV radiation from GRBs is opening the possibility to revitalize and boost afterglow studies, with major impacts on the general understanding of GRB sources.

In this section we list and comment on those aspects still lacking a clear explanation, selecting in particular topics which might largely benefit from observations and detections in the VHE regime.
X-ray flares
Observations of the afterglow emission in the X-ray and optical bands often display behaviours that are not predicted by the standard scenario, and require the inclusion of additional emission components contributing to the detected radiation. In the standard external forward shock scenario the afterglow light-curves in the X-ray and optical bands are expected to decay following a power-law or broken power-law behaviour, where the breaks are interpreted as the cooling or injection frequency crossing the observed band [38,47,54]. The advent of Swift-XRT and the increasing number of optical follow-up observations performed by ground-based robotic telescopes have highlighted the presence, in a good fraction of cases, of unexpected features in the early time afterglow, such as flares and plateaus [49,57].
Flares are episodes of sudden rebrightening characterised by a very fast rise of the flux, followed by an exponential decay profile. Comprehensive studies of X-ray afterglows show that an X-ray flare is observed in ∼ 33% of GRBs [84,85]. The times at which they are observed span a very wide range, from around ∼ 30 s up to ∼ 10⁶ s after the trigger time. The flare peak time is shown in Figure 6 (T_pk, x-axis) for a large sample of 468 X-ray flares in long GRBs. Most of the flares occur within 10³ seconds, even though there are many cases of flares occurring several hours after the burst. The width ω of the flare is found to grow linearly with time, following the trend ω ∼ 0.2 T_pk [84]. The average and peak luminosities L of the flares also display a dependence on T_pk, with L ∝ T_pk⁻²·⁷, at least for early time (T_pk < 10³ s) flares [84,85]. When late time flares are also included [86,87], a shallower index is obtained, around ∼ −1.2. The energy emitted during flare episodes is quite large and, for early time flares, is around ∼ 10% of the prompt emission energy, or sometimes even comparable to it [88].
Flares have also been detected in the optical band, although the sample of optical flares is far smaller than the X-ray one [89]. A statistical study of optical flares detected by Swift/UVOT shows that most of them correlate with, and share similar temporal properties to, simultaneous X-ray flares. Nevertheless, there are a few dozen GRBs for which no X-ray flaring activity is observed simultaneously with the optical flares [89].
Flares are believed to have an internal origin and to be associated with prolonged activity of the GRB central engine [57,90–94]. However, the relatively long timescales on which they are detected represent a challenge for the model. Many questions are still open, such as the location of the emitting region, what is powering the flares, and whether late time flares have a different origin than flares detected at early times.
Speculations about possible signatures of X-ray flares in the GeV-TeV range are present in the literature [95–98]. Assuming that flares have an internal origin and are produced at R < R_dec, forward shock electrons will be exposed to the flare radiation, producing an IC emission component by up-scattering the flare photons. Following these estimates, the IC component peaks at ∼ 100 GeV and has a flux comparable to the X-ray flux. Alternatively, GeV-TeV radiation associated with flares can be produced by the SSC mechanism, where the electrons responsible for the X-ray synchrotron flares also up-scatter these photons to higher energies. The process is considered less interesting for TeV radiation because the peak of this SSC component is expected to be around 1 GeV [95], due to a relatively low minimum Lorentz factor γ_min ∼ 60. Such a value is estimated from theoretical considerations, where γ_min ≃ 60 ϵ_{e,−1} (Γ_IS − 1) for p = 2.5, ϵ_e = 0.1 and a relative shock Lorentz factor Γ_IS of the order of unity. We notice that the recent estimates of the minimum electron Lorentz factor in the late prompt emission of GRBs [29] may modify these predictions, and place the expected SSC peak around 100 GeV. The luminosity of this component will strongly depend on the size of the emitting region. As a result, the detection of flares in the GeV-TeV band can provide relevant information to identify the properties of the emitting region and the production site of the flaring activity.
To understand the chances of current and future VHE ground-based instruments to contribute to the study of flares, we perform some simplified estimates. The MAGIC telescopes observed 138 GRBs in almost ∼ 16.5 years, from 2005 up to June 2021 [99]. More than half of them (74 events) were observed with delays shorter than 10³ s, which corresponds to ∼ 4.5 GRBs yr⁻¹, and 37 events were observed with delays shorter than 100 s (i.e. ∼ 2.2 GRBs yr⁻¹). Considering that ∼ 33% of long GRBs have an X-ray flare, and considering the distribution of their peak times (see Figure 6), we estimate that GRBs with an X-ray flare occurring during MAGIC observations have a rate of ∼ 1 GRB yr⁻¹. Let us go a bit further and estimate the detectability of a putative ∼ 10² GeV counterpart of X-ray flares. For the flux of the GeV-TeV flare, we consider the X-ray flux as a reference value, and discuss what happens if a similar or ten times smaller flux is emitted at ∼ 10² GeV. We collect the X-ray fluxes of a large sample of flares from the catalog of X-ray flares presented in [87]. The results are shown in Figure 6. The average flux of the flare and the flare peak time correlate, and the orange line represents the best fit. To perform the estimates, we consider two different flare peak times, T_pk = 10² s and T_pk = 10³ s. The typical average fluxes at those times are F = 1 × 10⁻⁹ erg cm⁻² s⁻¹ and F = 1 × 10⁻¹⁰ erg cm⁻² s⁻¹, respectively. Assuming that a similar amount of flux is emitted around 100 GeV, we can compare these values with the differential sensitivity of IACT instruments as a function of the observing time. Figure 7 ([100]) shows the sensitivity of several telescopes for the detection of a point-like source at 5 standard deviations significance, as a function of the exposure time and for four selected energies. Considering that the width of the flare is related to the peak time by ω ∼ 0.2 T_pk, we can compare the flare fluxes estimated at T_pk = 10² s and T_pk = 10³ s with the differential sensitivity for observing times t_obs = 20 s and t_obs = 200 s. The flare fluxes lie close to the differential sensitivity of the MAGIC telescopes (for 100 GeV, ∼ 1.0 × 10⁻⁹ erg cm⁻² s⁻¹ at t_obs = 20 s and ∼ 5.0 × 10⁻¹⁰ erg cm⁻² s⁻¹ at t_obs = 200 s). This indicates that the MAGIC telescopes can barely detect such a flare. Moreover, Extragalactic Background Light (EBL) attenuation reduces the flux, which is why we make the estimates at 100 GeV, where the attenuation is still small. We conclude that MAGIC would be able to detect (or place relevant constraints on) only the brightest X-ray flares (as can be seen from Figure 6, the correlation has a large spread, and flares at 10² or 10³ s can easily have fluxes one order of magnitude larger than what is assumed here).
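The back-of-the-envelope comparison described above can be written down explicitly. The sketch below (Python) checks whether the average flare flux exceeds the differential sensitivity over an exposure equal to the flare width ω ∼ 0.2 T_pk; the flux and sensitivity numbers are the approximate values quoted in the text, not precise instrument figures.

```python
def flare_detectable(t_pk_s, flare_flux, sensitivity):
    """Compare the average flare flux [erg cm^-2 s^-1] with the instrument
    differential sensitivity over an exposure t_obs ~ omega ~ 0.2 * T_pk."""
    t_obs = 0.2 * t_pk_s
    return t_obs, flare_flux >= sensitivity

# (T_pk [s], X-ray flux, approximate MAGIC sensitivity at ~100 GeV for the
#  corresponding t_obs), all values as quoted in the text:
cases = [(1e2, 1e-9, 1.0e-9), (1e3, 1e-10, 5.0e-10)]
for t_pk, flux, sens in cases:
    t_obs, ok = flare_detectable(t_pk, flux, sens)
    verdict = "marginally detectable" if ok else "below sensitivity"
    print(f"T_pk = {t_pk:.0e} s (t_obs = {t_obs:.0f} s): {verdict}")
```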
Concerning future instruments, the Cherenkov Telescope Array (CTA) will have a sensitivity almost one order of magnitude deeper than that of MAGIC and similar slewing capabilities. The same estimates made for MAGIC can be applied to CTA, with the advantage that CTA will have both a northern and a southern site, approximately doubling the possibility to follow GRBs on short time-scales. This is a promising indication that the CTA array will potentially be able to detect a possible counterpart of X-ray flares at E ∼ 30 GeV, provided that this counterpart has a flux no less than ten times smaller than that detected in X-rays. As a result, it can play a major role in exploring and improving our knowledge of flares and their connection with the prompt emission and with the prolonged activity of the central engine.
Density profile of the external medium
Following the established connection between long GRBs and the core-collapse of massive stars, the jet is expected to produce the afterglow while propagating in the wind of the star in its free-streaming phase. Afterglow radiation of long GRBs should then be produced in the interaction with a medium with radial density profile n ∝ R⁻². However, several investigations have shown that about half of the long GRBs are better explained if the blastwave is assumed to run into a medium with constant density. We review the evidence in support of the constant density medium and discuss the difficulties in reconciling these observations with expectations on the environment surrounding long GRB progenitors.
Long GRBs originate from the core-collapse of massive stars, most likely rapidly rotating, with possible evidence of a preference for low metallicity. The most convincing evidence in support of this paradigm is the association with type Ic supernovae and the proximity of GRBs to young star-forming regions. While the connection of long GRBs (or at least of the bulk of the population) with the core-collapse of massive stars is solid, the role of metallicity and rotation in the launch of the GRB jets, and the identification of the progenitor star, are still uncertain. The progenitor is usually identified with Wolf-Rayet stars, massive stars (M > 20 M_⊙) in the final stages of their evolution, characterised by powerful winds and a high mass loss rate [101]. The wind from the star is expected to interact with and deeply modify the environment where the GRB explodes and to leave imprints on its afterglow emission.
More in detail, from the interaction between the stellar wind and the ISM, four concentric regions with different properties are expected to form. In the inner part (i.e., close to the star) the circumburst medium is permeated by the free-streaming wind, producing a density with radial profile n ∝ R⁻². The density is related to the mass loss rate Ṁ and to the velocity v_w of the free-streaming stellar wind by:

$$n(R) = \frac{\dot{M}}{4\pi\, m_p\, v_w\, R^2} \quad (52)$$

A termination shock separates the unshocked from the shocked wind: the latter forms a hot bubble of thermalised wind material, with a nearly constant density profile, as the formation of pressure and density gradients is prevented by the high sound speed inside the bubble. The hot bubble, in its outer part, is enclosed by a shell of shocked ISM, surrounded by the unshocked ISM. The GRB jet is supposed to drill its way through this stratified medium [10]. To understand where most of the afterglow evolution occurs, we have to estimate the deceleration radius R_d and the non-relativistic radius R_NR (i.e. the radius where the blastwave has decelerated to non-relativistic velocity) and compare them to the termination shock radius. For typical parameters (Ṁ = 10⁻⁵ M_⊙ yr⁻¹ and v_w = 10³ km s⁻¹), the fit to numerical models of Wolf-Rayet stars [102] gives the following relation between the termination shock radius and the density of the unshocked ISM: R_T = 10 n_ISM⁻¹ᐟ² pc, where n_ISM is the density of the unshocked ISM. Comparing the deceleration and non-relativistic radii derived from the blast-wave dynamics with R_T, it is evident that the complete evolution of the afterglow radiation occurs well inside the free-streaming region.
In afterglow modeling of long GRBs it is then customary to assume a density profile described by Eq. 52, where Ṁ and v_w are treated as unknown parameters (normalised to the typical values of a Wolf-Rayet star) combined into one single free model parameter A_*: n(R) = 3 × 10³⁵ A_* R⁻² cm⁻³ (R in cm). Despite this robust prediction, the modeling of afterglow observations shows that in a relevant fraction of cases, observations are better explained by adopting a circumburst medium with a constant density n = n₀. The fraction of these cases varies depending on the method and on the selected sample, and is on average about 50% [103–106].
To place the termination shock at least inside the non-relativistic radius, one should invoke a very large density of the ISM, n ≳ 10⁵ cm⁻³, typical of the dense cores of molecular clouds: R_T = 0.03 (n_ISM/10⁵ cm⁻³)⁻¹ᐟ² pc. Density profiles for different ISM densities are shown in Figure 8, upper panel. Alternatively, one can try to vary the wind parameters. How the termination shock radius changes for different values of Ṁ and v_w is shown in the bottom panel of Figure 8. A very low mass loss rate Ṁ = 10⁻⁷ M_⊙ yr⁻¹ (which may find a justification in the case of a low-metallicity star) is needed to bring the termination shock radius below 1 pc (for n_ISM = 10 cm⁻³). With this low mass-loss rate, the deceleration and non-relativistic radii increase (R_d ∼ 6 × 10⁻³ pc and R_NR ∼ 60 pc), placing the termination shock still beyond the deceleration radius but well within the non-relativistic radius, allowing part of the observed emission to develop in a constant density environment. By increasing the blast-wave energy, the deceleration radius can further approach R_T. This suggests that it is more likely for a very energetic GRB to cross the termination shock at early times and then expand in an ISM-like medium, as compared to a faint GRB. An indication of an average larger E_γ in GRBs with a wind-like medium as compared to GRBs with an ISM-like medium has been found in [107], but this is in contrast with results from the study performed in [106] on a larger sample.
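A minimal numerical sketch of these estimates follows (Python, cgs units). The wind density implements Eq. 52; the scaling of R_T with Ṁ and v_w is an assumption based on ram-pressure balance, normalised to the fit R_T = 10 n_ISM⁻¹ᐟ² pc quoted above for the reference wind parameters.

```python
import numpy as np

M_SUN = 1.989e33   # solar mass [g]
YR = 3.156e7       # year [s]
M_P = 1.6726e-24   # proton mass [g]

def n_wind(R_cm, mdot_msun_yr=1e-5, v_w_kms=1e3):
    """Free-streaming wind density n(R) = Mdot / (4 pi m_p v_w R^2) [cm^-3]."""
    mdot = mdot_msun_yr * M_SUN / YR   # [g/s]
    v_w = v_w_kms * 1e5                # [cm/s]
    return mdot / (4.0 * np.pi * M_P * v_w * R_cm**2)

def R_T_pc(n_ism, mdot_msun_yr=1e-5, v_w_kms=1e3):
    """Termination shock radius [pc], scaling the quoted fit
    R_T = 10 pc * n_ISM^(-1/2); the (Mdot*v_w)^(1/2) dependence is an
    assumption based on wind ram-pressure balance."""
    scale = np.sqrt((mdot_msun_yr / 1e-5) * (v_w_kms / 1e3))
    return 10.0 * scale / np.sqrt(n_ism)

print(f"R_T(n_ISM = 1)   = {R_T_pc(1.0):.1f} pc")
print(f"R_T(n_ISM = 1e5) = {R_T_pc(1e5):.3f} pc")     # dense molecular core
print(f"n_wind(R = 1e17 cm) = {n_wind(1e17):.1f} cm^-3")  # A_* = 1 profile
```

The last line reproduces the normalization n(R) = 3 × 10³⁵ A_* R⁻² cm⁻³ for A_* = 1, and the second line matches the R_T ≈ 0.03 pc quoted for a dense core.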
The parameter space for which part of the afterglow emission can indeed be produced in the ISM-like density profile of the shocked wind is very limited, as it corresponds to the most energetic GRBs, low-metallicity progenitors and high-density ISM, or a combination of these factors [107]. These considerations on the diversity of E_k, Ṁ, v_w and ISM density may not be sufficient to explain the results of the modeling (i.e., the preference for an ISM-like environment). The fraction of GRBs which might have these peculiar parameters can hardly account for the large fraction of GRBs for which a wind-like profile is excluded by observations. The required conditions are too extreme to be verified in half of the population. However, it is not clear whether this percentage has been overestimated by present studies. To quantify the inconsistency, the first step would be to perform a dedicated study of the afterglow emission to assess the percentage of long GRB afterglows that are not consistent with a wind-like environment.
Methods based on closure relations may not be valid if the spectrum is modified by Compton scattering in the Klein-Nishina regime (see also [108]). Moreover, these are based on a simple approximation of the synchrotron spectrum as power-law segments, while the broad curvature of real synchrotron spectra might lead to an incorrect estimate of the value of p if the observed frequency is in the vicinity of a synchrotron break frequency. A full modeling is then necessary to really assess the fraction of long GRBs for which an R⁻² density profile is excluded, and ultimately to understand whether the paradigm for the environment of GRBs should be drastically modified. Radio observations may be of great help, since the temporal behaviour of the flux does not depend on p and is quite different for constant and wind-like density profiles. Similarly, the detection of SSC radiation can help solve this ambiguity.
Small values of ϵ_B
For a long time, the typical value of ϵ_B has been considered to lie between 0.01 and 0.1, both on the basis of theoretical considerations on particle acceleration and of findings from numerical simulations. Indeed, the present understanding of the micro-physics at weakly magnetised shocks invokes the existence of self-generated micro-turbulence both behind and in front of the shock, at a level corresponding to ϵ_B ∼ 0.01 − 0.1. This layer of intense micro-turbulence is expected on theoretical grounds and has recently been corroborated by numerical PIC simulations. Inferences of the value of ϵ_B from early modeling of afterglow radiation were broadly consistent with these numbers, confirming the presence of large self-generated fields in ultra-relativistic weakly magnetized shocks. More recently, several independent methods have provided evidence for significantly lower values.
In particular, several studies on GRBs with GeV temporally extended emission detected by LAT arrive at the same conclusion: in order to explain the GeV radiation as part of the synchrotron emission, multi-wavelength observations require ϵ_B = 10⁻⁶ − 10⁻³ [24,109–112]. Similar values have been inferred from studies based on radio, optical and/or X-ray emission that do not make use of the high energy emission, such as [67–69,113]. A smaller magnetic field in the region where most of the particle cooling occurs might increase the expected relevance of the SSC component, as supported by recent detections of TeV radiation by IACTs.
Such small values of ϵ_B may appear problematic [114], because strongly self-generated micro-turbulence must be present to ensure the scattering and acceleration of the particles, which would otherwise simply be advected away.
It was later pointed out that the inferred low values of magnetization might be indicative of a turbulence that is decaying on time-scales comparable with the electron cooling time [65,66]. From a theoretical perspective, indeed, the micro-turbulence is expected to decay beyond some hundreds of skin depths. This picture has been validated by PIC simulations, which however are still far from probing time-scales comparable with the dynamical time-scale of the system. Dedicated simulations show that the magnetic field does decay behind the shock, on scales much larger than the plasma skin depth c/ω_pi. Immediately behind the shock, the magnetic field carries a magnetization ϵ_B ∼ 0.01, which decays after 10² − 10³ plasma times. Eventually, the magnetic field will settle to the shock-compressed value 4Γ B_u, where B_u is the magnetic field of the upstream unperturbed medium. In this scenario, high-energy particles, which produce MeV-GeV photons, feel only the region close to the shock, where the magnetization is large, due to their short cooling time. Particles that cool on longer time-scales (and produce radio, optical and X-ray photons) do so in a region where the magnetic field has already decayed.
The application of cooling in a decaying magnetic turbulence to four GRBs detected by LAT has proved to be very successful, and even able to give indications on how fast the turbulence decays, consistent with a power-law decay ϵ_B ∝ t⁻ᵅᵗ with α_t ∼ 0.5 [66].
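As an illustration of the decaying-turbulence picture, the toy parametrization below (Python; all numbers are illustrative assumptions) shows how particles with different cooling times sample very different values of ϵ_B, from ϵ_B ∼ 10⁻² just behind the shock down to the 10⁻⁶ − 10⁻³ range inferred from afterglow modeling.

```python
def eps_B(t_plasma, eps_B0=0.01, alpha_t=0.5, t0=1.0):
    """Toy decaying downstream magnetization: eps_B = eps_B0 near the shock,
    decaying as (t/t0)^(-alpha_t) afterwards (t in units of plasma times)."""
    if t_plasma <= t0:
        return eps_B0
    return eps_B0 * (t_plasma / t0) ** (-alpha_t)

# High-energy electrons (MeV-GeV photons) cool within ~1e2-1e3 plasma times
# and sample eps_B ~ 1e-2; slowly cooling electrons (radio/optical/X-ray
# photons) sample a much smaller eps_B:
for t in (1e0, 1e3, 1e7):
    print(f"t = {t:.0e} plasma times -> eps_B ~ {eps_B(t):.1e}")
```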
Understanding and constraining the value of the magnetic field relevant for particle cooling is of great importance, since an incorrect assumption or prior affects the estimates of all the other afterglow parameters, in particular the density of the external medium [67,106].
A low value of ϵ_B tends to increase the level of the SSC luminosity for a given synchrotron luminosity. The recent detection of bright TeV emission from the afterglows of GRBs is an indication that this might indeed be the case. Existing and future TeV observations will shed light on this issue, fostering a revision of our prejudices about the value of the magnetic field in the region where particles cool.
Variation of micro-physical parameters with time
Thanks to the increasing number of available observations over a wide range of frequencies (from radio to TeV) and times (from seconds to weeks), the basic assumption that micro-physical parameters (such as ϵ_e, ϵ_B, p and ξ_e) are constant over the whole afterglow evolution can be tested. We comment on the hints (inferred from afterglow modeling) for temporal evolution of these parameters.
In the case of well-sampled multi-wavelength light-curves, the modeling with synchrotron spectra makes it possible not only to identify the location of the spectral breaks at a certain time but also to evaluate their evolution in time. As a result, hints that the micro-physical parameters ϵ_e and ϵ_B may vary with time have been found in some events with detailed multi-wavelength follow-up campaigns.
In [115], broad-band (from near-infrared up to X-ray) afterglow data from GRB 091127 were interpreted in the standard external forward shock scenario assuming a constant-density medium. The good quality of the data allows the breaks in the light-curves to be identified and associated with the synchrotron spectral breaks. As a result, the time evolution of the synchrotron breaks was estimated. In particular, it was found that the cooling break frequency evolves as ν_cool ∝ t⁻¹·², in contrast with synchrotron predictions, for which a shallower decay ν_cool ∝ t⁻⁰·⁵ is expected. As a result, some assumptions of the standard model must be relaxed to remove the tension between observations and theoretical predictions. The most likely option able to explain the observed behaviour of the cooling break without affecting the general interpretation of the data is to let the ϵ_B parameter vary with time. Assuming ϵ_B ∝ t⁰·⁴⁹, the time evolution of ν_cool can be explained successfully.
In the modeling of GRB 130427A in [116], a temporal evolution of ϵ_e is invoked in order to explain the observed fast evolution of the injection frequency, ν_m ∝ t⁻¹·⁹. Considering that ν_m ∝ ϵ_e², a modest evolution of ϵ_e following the trend ϵ_e ∝ t⁻⁰·² is able to satisfactorily describe the observed light-curves.
A time-dependent evolution of the micro-physical parameters has also been proposed to explain features observed in the early afterglow phase which are not predicted by the external forward shock scenario, such as X-ray afterglow plateaus, chromatic breaks, and afterglow rebrightenings [117–120].
Information from TeV observations can certainly be exploited to reduce the uncertainty on the values of the micro-physical parameters. The expansion of broad-band afterglow observations to a new spectral window will be a further test and challenge for the multi-wavelength modeling based on the standard external forward shock scenario. In particular, the time evolution of the different energetic components, now including also the TeV emission, will give new insights useful to investigate the evolution of the micro-physical parameters. A first demonstration is provided by the well-sampled multi-wavelength emission observed for GRB 190114C, one of the few GRBs detected so far at TeV energies. The broadband emission can be explained only by invoking an evolution of the micro-physical parameters with time [121], as will be discussed in the next section.
Maximum synchrotron photon energy
One of the expectations from Fermi-LAT observations of GRB afterglows was the identification of a spectral cutoff in the afterglow synchrotron spectrum marking the maximum energy of synchrotron photons [122,123]. Such a cutoff has not been firmly identified. Its location is directly connected with the shock micro-physics conditions and the maximum energy at which electrons can be accelerated. This maximum energy is typically estimated by equating the timescale for synchrotron cooling and the acceleration timescale, where acceleration is assumed to proceed at the Bohm level, considered as the fastest rate. This estimate hence returns the maximum energy of the accelerated particles. Assuming that the accelerated particles are electrons, the acceleration timescale is:

$$t_{\rm acc} \simeq \frac{2\pi r_L}{c} = \frac{2\pi\,\gamma\, m_e c}{e B}$$

where r_L is the Larmor radius. For each crossing the electrons gain energy by a factor ∼ 2. On the other hand, the energy lost to synchrotron radiation over this timescale is:

$$\Delta E_{\rm syn} \simeq P_{\rm syn}\, t_{\rm acc}, \qquad P_{\rm syn} = \frac{4}{3}\,\sigma_T c\,\gamma^2\,\frac{B^2}{8\pi}$$

The particle stops gaining energy when the synchrotron losses over one acceleration cycle equal the energy gain, ΔE_syn ≃ γ m_e c². Therefore, the maximum Lorentz factor for electrons is:

$$\gamma_{\rm max} \simeq \left(\frac{3e}{\sigma_T B}\right)^{1/2}$$

The corresponding maximum synchrotron photon energy,

$$h\nu_{\rm max} \simeq \frac{3 h e B \gamma_{\rm max}^2}{4\pi m_e c} = \frac{9\, h\, e^2}{4\pi\, m_e c\,\sigma_T}$$

is independent of B and, for electrons, is ∼ 50 MeV in their rest frame. Similar considerations hold for protons. Following the same arguments presented above, one obtains the corresponding cooling Lorentz factor and maximum Lorentz factor γ_max,p ≃ (3e/σ_{T,p}B)^{1/2}, where σ_{T,p} = σ_T (m_e/m_p)² is the Thomson cross section for protons; since the maximum photon energy scales with the particle mass, this sets a maximum photon energy of ∼ 100 GeV. Synchrotron emission is less efficient for protons, so they are less affected by cooling and can reach higher maximum Lorentz factors than the electrons. Within this framework it is expected that observations in the GeV band can be exploited to identify the presence of a cut-off in the HE tail. At the current stage, Fermi-LAT observations indicate that the afterglow component of the HE emission is usually modelled with a single power-law with index ∼ −2, with no evidence of spectral evolution in time [124] nor of HE cut-offs.
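The burnoff estimate above can be evaluated numerically. The sketch below (Python, cgs units) adopts t_acc ≃ 2π r_L/c; with this convention the maximum comoving photon energy comes out at ≈ 40 MeV, of the same order as the ∼ 50 MeV quoted in the text (the exact prefactor depends on the adopted acceleration-time convention).

```python
import numpy as np

# cgs constants
E_CH = 4.8032e-10     # electron charge [esu]
SIGMA_T = 6.6524e-25  # Thomson cross section [cm^2]
M_E = 9.1094e-28      # electron mass [g]
C = 2.9979e10         # speed of light [cm/s]
H = 6.6261e-27        # Planck constant [erg s]
ERG_TO_MEV = 6.2415e5

def gamma_max(B):
    """Burnoff Lorentz factor from t_acc ~ 2 pi r_L / c = t_syn (Bohm)."""
    return np.sqrt(3.0 * E_CH / (SIGMA_T * B))

def h_nu_max_MeV(B):
    """Maximum comoving synchrotron photon energy, using the characteristic
    frequency nu_c = 3 gamma^2 e B / (4 pi m_e c). Independent of B,
    since gamma_max^2 * B is a constant."""
    nu = 3.0 * E_CH * gamma_max(B)**2 * B / (4.0 * np.pi * M_E * C)
    return H * nu * ERG_TO_MEV

print(f"gamma_max(B = 1 G) = {gamma_max(1.0):.2e}")
print(f"h nu_max ~ {h_nu_max_MeV(1.0):.0f} MeV (comoving frame)")
```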
The absence of the cut-off in the observational data may be explained in several ways. The most likely interpretations are the limited sensitivity of the LAT instrument in the GeV range and the possible contamination due to the rise of the SSC spectral component. As a result, the synchrotron cut-off may be hidden behind the VHE spectral component that can be detected in the GeV-TeV domain. This implies that TeV observations are fundamental in order to disentangle the two spectral components and infer the cutoff of the synchrotron spectrum.
Another possible interpretation is that the lack of a cutoff in the observational data is genuine. In this case, the synchrotron emission can exceed the limit assumed for the maximum photon energy and extend into the GeV-TeV domain. This interpretation can be tested with VHE observations: the extension of the HE power-law derived by LAT up to the TeV domain should be consistent with VHE data, and no spectral hardening in the GeV-TeV band should be seen. Such a scenario is in contrast with the standard particle acceleration model presented in Section 2. A deep revision of our understanding of acceleration mechanisms would be required to make TeV emission from synchrotron radiation possible. In particular, a mechanism able to accelerate electrons up to PeV energies is needed.
Calculations performed so far assume the presence of a uniform magnetic field B throughout the shock-heated plasma. If this assumption is relaxed, it is possible to consider a non-uniform magnetic field, stronger close to the shock front and decaying downstream. Following the calculation of [125], the magnetic field B(x) decays with the distance x from the shock front as a power law with index η, from a maximum strength B_s down to a minimum strength B_w over the field decay length scale L_p, which is estimated in [126] as a function of the shock front Lorentz factor Γ_s and of the number density n of the accelerated particles in the shocked fluid comoving frame. As a consequence, the Larmor radius r_L increases with the distance from the shock front, since B(x) becomes weaker, and an electron travelling downstream will likely be sent back upstream when r_L ≤ x. In the case B_s ≫ B_w and x ≫ L_p, the particles will lose most of their energy in the region of low magnetic field. Therefore, from the condition that losses in the low magnetic field region should be greater than losses in the high magnetic field region, after some algebra the following condition is obtained (Eq. 62):

$$r_L(B_s) \gtrsim x_0$$

valid for η > 1/2 and x_0/L_p ≫ 1, where x_0 is the width of the high magnetic field region, with x_0/L_p ≡ (B_s/B_w)^{1/η}. In other words, Eq. 62 states that the Larmor radius in the high magnetic field region is larger than the actual width of that region, so that electrons are barely deflected in this portion of the shocked plasma. As a result, it is possible to calculate the maximum Lorentz factor for electrons that lose most of their energy in the weak magnetic field region, following the same conditions presented for the uniform magnetic field case. The resulting maximum synchrotron photon energy is greater than the one calculated in Eq. 57 by a factor B_s/B_w. Numerical calculations [41] show that this ratio can be larger than ∼ 10². As a result, photons of energies ≳ 100 GeV can be produced via the synchrotron process when assuming a non-uniform magnetic field which decays downstream of the shock front, with particles losing most of their energy in the weakest field region. In both interpretations presented here, TeV observations are fundamental in order to investigate with unprecedented detail the possible presence or absence of the synchrotron cutoff. This also has a direct impact on the study of the possible radiation mechanisms responsible for the VHE component in GRBs.
Prompt emission efficiency
The overall efficiency η_γ of the prompt emission mechanism is the result of three processes: the efficiency η_diss of the (still unidentified) mechanism responsible for dissipation of the jet energy, the efficiency ϵ_e of the acceleration mechanism in converting the dissipated energy into random energy of the electrons, and the radiative efficiency ϵ_rad of the electrons: η_γ = η_diss ϵ_e ϵ_rad. Provided that it is reasonable to assume a fast cooling regime for the prompt emission (ϵ_rad = 1), the overall prompt efficiency is limited by the capability of the dissipation mechanism in extracting the kinetic or magnetic energy of the jet and the capability of the particle acceleration process to convey a fraction of this energy into the non-thermal electron population. The value of the efficiency then provides a fundamental clue to place constraints on the origin of energy dissipation in GRBs, which is still quite uncertain, discriminating between matter-dominated and magnetically dominated jets.
From the definition η_γ = E_γ/E_0 (where E_0 = E_γ + E_k is the initial explosion energy), we can write E_k = [(1 − η_γ)/η_γ] E_γ. The parameter η_γ can then be estimated by comparing the energy E_γ emitted in the prompt phase with the energy E_k left in the jet after the end of the prompt emission (i.e., at the beginning of the afterglow phase). While the former is directly estimated from observations, the latter can be inferred only indirectly, from the modeling of the afterglow radiation.
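The leverage of η_γ on the inferred kinetic energy is worth making explicit; a short numerical illustration follows (Python; the prompt energy E_γ is an assumed value).

```python
def kinetic_energy(E_gamma, eta_gamma):
    """Kinetic energy left in the jet, E_k = (1 - eta)/eta * E_gamma."""
    return (1.0 - eta_gamma) / eta_gamma * E_gamma

E_gamma = 1e53  # [erg], assumed prompt isotropic-equivalent energy
for eta in (0.9, 0.5, 0.1):
    print(f"eta_gamma = {eta:.1f} -> E_k = {kinetic_energy(E_gamma, eta):.1e} erg")
```

For the same observed E_γ, an efficiency of 0.9 implies a nearly empty jet (E_k ≈ E_γ/9), while an efficiency of 0.1 requires an order of magnitude more kinetic energy than was radiated.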
One of the most widely adopted methods to infer E_k for large samples of GRBs is to rely on the X-ray luminosity and use it as a proxy for the energy content of the blast-wave [127–131]. This method is solid as long as the X-ray band lies above max(ν_m, ν_c) and is not affected by inverse Compton cooling. If these two conditions are verified, then the electrons emitting X-ray photons are in the fast cooling regime and their cooling is dominated by synchrotron losses. The luminosity produced is then proportional to the energy content of the accelerated electrons, E_k ϵ_e. Assuming a value (typically 0.1) for ϵ_e, E_k can then be estimated. Investigations based on the X-ray emission have inferred very large values of η_γ, between 0.5 and 0.9 [49,129,132,133].
The very same approach can also be applied to the 100 MeV-GeV photons detected by the LAT, under the assumption that these are synchrotron photons. A strong correlation between the GeV luminosity and E_γ,iso has indeed been found, supporting the possibility that GeV photons lie in the high-energy part of the synchrotron spectrum, where the afterglow luminosity is proportional to E_k ϵ_e and can be used, similarly to the X-ray luminosity, to estimate E_k [134]. A study by [70] revealed that the energetics E_k inferred independently from X-ray and GeV luminosities for a sample of 10 GRBs are inconsistent with each other. The authors show that the inconsistency is solved if ν_c > ν_X (where ν_X is the X-ray frequency), or if Compton losses are important in the X-ray band. Full modeling of the GeV, X-ray and optical data supports this scenario. In both cases, ϵ_B is required to be much smaller than usually assumed, with values in the range 10⁻⁷ − 10⁻³. This analysis shows that the GeV band is a much better proxy for E_k, since it is above ν_c and is not affected by inverse Compton cooling, due to Klein-Nishina suppression. Adopting GeV luminosities as a proxy for E_k, the estimated values of E_k increase, affecting also the estimates of η_γ, which come out around 5 − 10% [24].
A correct estimate of η_γ is extremely important, since its value is related to the mechanism dissipating energy in the jet. Since internal shocks can hardly reach values of η_γ larger than 10%, values around 50-90% have been used to argue that internal shocks are not a viable mechanism to explain the prompt emission of GRBs, and that more efficient mechanisms should be considered (e.g., magnetic reconnection). If, however, the efficiency is smaller than initially estimated, internal shocks may still be a viable solution. Moreover, different estimates of η_γ lead to different estimates of the total initial jet energy E_0 = E_γ,iso + E_k, with repercussions on the energy budget of GRBs and ultimately on their progenitors and on the mechanisms for jet launching. Small values of ϵ_B may then relax the problem of a very large prompt efficiency, which is definitely unreasonable for internal shocks, but difficult to attain also for magnetic reconnection models (for a discussion see e.g., [19,20,135]).
A scenario where the magnetic field strength is relatively low in the emitting region implies a stronger SSC emission. Recent TeV detections of GRBs support this scenario and provide additional observations to constrain the magnetic field. Moreover, as shown by the first detections by IACTs (section 4), the energy in the TeV component is comparable to the energy in X-rays, providing better estimates of the energy budget in the afterglow phase. Future detections from a larger sample of GRBs can help in assessing more precisely the energy budget of the jet during the afterglow emission and add important information to constrain the efficiency of the prompt emission, favouring or excluding some dissipation scenarios.
Fraction of accelerated particles
As described in section 2, the representation of relativistic shock acceleration in GRB afterglows relies on some free parameters. These describe the energy equipartition between particles and magnetic field and the non-thermal accelerated particle distribution. In particular, the parameter ξ_e accounts for the fraction of particles (here we consider electrons, but the same considerations are valid for protons) accelerated into a non-thermal distribution. This means that from relativistic shock theory a fraction 1 − ξ_e of the electrons is expected to be heated into a thermal, rather than non-thermal, distribution (see Figure 9). For simplicity, afterglow studies usually assume ξ_e = 1, i.e. that all the particles are accelerated into a non-thermal distribution. Such an assumption is used to avoid the large degeneracy which affects the GRB parameters when this additional free parameter is included. In particular, afterglow modeling predictions obtained assuming ξ_e = 1 for the parameters E_k, n, ϵ_e and ϵ_B cannot be distinguished from those estimated for any ξ_e in the range m_e/m_p < ξ_e < 1 with afterglow parameters E'_k = E_k/ξ_e, n' = n/ξ_e, ϵ'_e = ξ_e ϵ_e and ϵ'_B = ξ_e ϵ_B [73]. This can be proven considering the dependencies of the jet dynamics and shock energy equipartition on the model parameters. As shown in BM76 and by previous calculations on the evolution of a relativistic blastwave, in the self-similar regime the bulk Lorentz factor evolves as Γ ∝ (E_k/n)^{1/2}. As a result, the same flow evolution can be obtained with different values of E_k and n as long as their ratio is preserved. The fraction of energy given to the magnetic field is reduced by a factor ξ_e, but at the same time the energy density given to the shock is increased by a factor 1/ξ_e, so that the magnetic field energy density is the same whether or not ξ_e is included. Analogous considerations hold for the number and energy density of the electrons, so that their values are preserved. As a result, in principle it is not possible to distinguish between the parameter sets obtained for ξ_e = 1 and for any value m_e/m_p < ξ_e < 1. This implies that afterglow model parameters, when assuming ξ_e = 1, are determined only up to a factor ξ_e (which can be as small as m_e/m_p), and afterglow observations do not constrain their values directly (e.g. E_k or ϵ_B) but rather their combination with ξ_e (e.g. E_k/ξ_e or ξ_e ϵ_B).
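The degeneracy can be summarized as a simple rescaling of a ξ_e = 1 best-fit solution; a minimal sketch follows (Python; the fit values are illustrative assumptions).

```python
def rescale_with_xi_e(params, xi_e):
    """Map a best-fit parameter set obtained with xi_e = 1 onto the
    degenerate set valid for m_e/m_p < xi_e < 1:
    E_k -> E_k/xi_e, n -> n/xi_e, eps_e -> xi_e*eps_e, eps_B -> xi_e*eps_B.
    Gamma ~ (E_k/n)^(1/2) and the electron/field energy densities are
    unchanged, so the two sets produce the same light-curves."""
    return {"E_k": params["E_k"] / xi_e,
            "n": params["n"] / xi_e,
            "eps_e": params["eps_e"] * xi_e,
            "eps_B": params["eps_B"] * xi_e,
            "xi_e": xi_e}

fit = {"E_k": 1e53, "n": 1.0, "eps_e": 0.1, "eps_B": 1e-4}  # xi_e = 1 fit
print(rescale_with_xi_e(fit, 0.1))
```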
It is possible to obtain information on the value of ξ_e through PIC simulations or through indirect features of the thermal component in the synchrotron afterglow spectra. As mentioned in the previous section, PIC simulations of weakly magnetized shocks have found that around ∼ 3% of the downstream electrons are accelerated into a non-thermal distribution. If this efficiency is low (around ∼ 10% or less), the presence of a large population of thermal electrons may affect the afterglow radiation spectra. The thermal electrons are distributed at lower energies than the non-thermal ones, since their typical energy is η γ m_e c² ≪ γ m_p c², where η ≲ m_p/m_e is a factor describing our ignorance of the plasma physics governing electron heating beyond γ m_e c². As a result, the synchrotron radio emission may be affected through the production of a new emission component from thermal electrons (for η ≫ 1 and moderate 1/ξ_e) or through a large self-absorption optical depth (for ξ_e ≪ 1), which may be visible on a time scale of a few hours. Possible effects of the thermal component are discussed in [136–138].
Information from the TeV component cannot completely solve the degeneracy between afterglow parameters and cannot provide additional clues about the non-thermal electron distribution. However, it can provide unique data useful to constrain the afterglow parameters that are otherwise poorly constrained, such as ϵ_B and the density. This also impacts the determination of ξ_e, since it helps reduce the degeneracy between the sets of solutions available in the parameter space. Indeed, the multi-wavelength modeling of GRB 190829A (detected at TeV energies by H.E.S.S.) showed that the only way to explain all the detected radiation, from radio to TeV, is to introduce the parameter ξ_e in the modeling, which is constrained by the data to be ξ_e < 0.13 [72].
Discovery of a TeV emission component in GRBs
The robust theoretical framework developed throughout the years to explain the afterglow radiation predicts that, to some extent, GRBs should be TeV emitters (section 2). Observations in the HE band, and in particular the presence of GeV photons with energies up to ∼ 100 GeV, support this possibility. On the other hand, from the observational side the search for such emission is hampered by several drawbacks. Space-borne telescopes such as Fermi-LAT, sensitive up to a few hundred GeV, have a hard time with GRBs due to their low γ-ray photon flux at the highest energies (∼ 10² GeV), caused by their cosmological distances and strong EBL absorption. These difficulties can be overcome by the much larger effective area of IACTs in the common energy range of sensitivity (50-300 GeV). As a downside, IACTs have a small field of view (a few degrees wide), a higher low-energy threshold (≳ 50 − 100 GeV), and a reduced duty cycle (less than 10%).
In the last decades, IACTs have made a huge effort to become instruments suitable for GRB observations. In particular, the efforts have been focused in two directions: i) the development of fast repointing systems to promptly react to GRB alerts and start observations within a few tens of seconds of the trigger time, and ii) the extension of the energy threshold below 100 GeV, important to reduce the impact of the EBL attenuation on the detection probability of cosmological GRBs.
After a decade of VHE observations resulting in non-detections, the first announcement of GRBs detected by IACTs arrived in 2019, thanks to the MAGIC and H.E.S.S. telescopes [139]. These detections have firmly established that GRBs can be bright sources of TeV radiation. Somewhat unexpectedly, VHE emission has also been detected several hours to a few days after the GRB onset, and up to energies of 3 TeV. The timescales of the detections place the origin of the emission in the afterglow phase. The TeV emission has been studied and interpreted in a multi-wavelength context, in order to evaluate the properties and the nature of the responsible radiation mechanisms. In particular, investigations have focused on SSC, external inverse Compton (EIC), and synchrotron radiation.
In this section, all GRBs for which a detection (significance > 5σ) or a hint of detection (significance between 3 and 5σ) has been claimed by Cherenkov telescopes are presented. These are in total six events (one short and five long): GRB 160821B (Section 4.1), GRB 180720B (Section 4.2), GRB 190114C (Section 4.3), GRB 190829A (Section 4.4), GRB 201015A (Section 4.5) and GRB 201216C (Section 4.6). For each event we start with a brief description of the prompt and afterglow multi-wavelength observations. Then, we describe the VHE observations and summarise the main results. Since these detections are a novelty, and some of them lie close to the sensitivity threshold of the instruments, we describe in detail the VHE data analysis, the calculation of the significance of the excess at the GRB position (following the usual prescription used for VHE sources presented in [140]), and the methods adopted for the derivation of the spectral energy distribution (SED) and of the light-curves. For each GRB, we also present the interpretations that have been put forward in the literature. A discussion of the main common properties of and differences among this initial population of VHE GRBs, and with respect to the whole GRB population, is presented in section 5, where we also address the question of what we have been learning from these few detections.
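The significance calculation referenced here is, presumably, the standard Li & Ma (1983) prescription commonly used by IACTs; a minimal sketch follows (Python; the formula is eq. 17 of that paper, and the count values are illustrative assumptions).

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Significance of an excess at a source position following Li & Ma
    (1983), eq. 17: n_on counts in the ON region, n_off counts in the OFF
    region(s), alpha = ratio of ON to OFF exposures."""
    n_tot = n_on + n_off
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * np.log((1.0 + alpha) * n_off / n_tot)
    return np.sqrt(2.0 * (term_on + term_off))

# Illustrative counts (assumptions, not from any of the GRBs below):
# 150 ON events over an expected background of alpha * n_off = 108.
print(f"{li_ma_significance(150, 360, alpha=0.3):.1f} sigma")
```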
In this section all quoted times refer to the time elapsed from the trigger time T_0 of the Swift-BAT or Fermi-GBM instrument, as will be specified. Photon indices are given in the notation N_ν ∝ ν^α, while temporal indices are defined by F(t) ∝ t^{β_T}.
GRB 160821B
GRB 160821B is a short GRB at z = 0.162 detected by the Swift-BAT [141] on 21 August 2016 at T 0 = 22 : 29 : 13 UT and by the Fermi-GBM [142]. The analysis of MAGIC observations shows a ∼ 3σ excess at the GRB position.
General properties and multi-wavelength observations
The BAT prompt spectrum (T_90 = 0.48 s) is well described by a power-law with index α = −0.11 ± 0.88 and an exponential high-energy cutoff, corresponding to a peak energy E_p = 46.3 ± 6.4 keV [143]. The GBM prompt spectrum (T_90 = 1.088 ± 0.977 s) is fitted with a cutoff power-law function as well, with E_p = 92 ± 28 keV. Located at redshift z = 0.162, this is one of the nearest short GRBs detected to date. Its isotropic-equivalent energy E_γ,iso ∼ 1.2 × 10⁴⁹ erg is toward the low-energy edge of the known distribution for short GRBs [144].
Afterglow observations are available in the radio, optical, X-ray and (V)HE bands. Fermi-LAT observations were performed from the trigger time up to 2315 s and from 5285 s to 8050 s and did not reveal any significant excess in the 0.3-3 GeV band [145]. Swift-XRT observations [146] started 57 s after the trigger time and revealed a complex behaviour of the X-ray afterglow light curve. An initial plateau is followed by a steep decay at around 10² s. Then, a power-law decay with index ∼ −0.8 is observed after ∼ 10³ s [147,148]. Optical observations were performed by several instruments [149–152], revealing the presence of a fading source with magnitude r = 22.6 ± 0.1 mag at 0.95 h after T_0. The identification of the host galaxy allowed the measurement of the spectroscopic redshift z = 0.162. The GRB is located in the outskirts of its host spiral galaxy, at ∼ 15 kpc projected distance from its center [148,153]. A fading radio afterglow counterpart was observed at 6 GHz by the VLA starting from 3.6 h after the burst trigger [154]. The multi-wavelength light-curves of GRB 160821B are shown in Figure 11.
VHE observations and results
The MAGIC telescopes started the follow-up of GRB 160821B with a very short delay of 24 s after T_0 and continued observations for about 4 h [145]. The observations were performed with a relatively high Night Sky Background (NSB) (2-8 times brighter than in dark nights) due to the presence of the Moon, and at mid-high zenith angles (from 34° to 55°). Unfortunately, the first ∼ 1.7 h of the data were strongly affected by clouds. As a result, dedicated and optimized software configurations were used. A more stringent image cleaning with respect to dark conditions was applied to take into account the spurious contribution coming from the high NSB. The analysis required cuts optimized on the Crab Nebula and on Mrk421 observed in similar conditions, and correction factors for the low atmospheric transmission, calculated thanks to the LIDAR facility [155]. The analysis showed the presence of a 3.1σ pre-trial (2.9σ post-trial) significance excess at the GRB position provided by Swift-XRT (see Figure 10). The flux has been estimated for energies above 0.5 TeV assuming a power-law spectrum with photon index α = −2. In the first 1.7 h, where the data taking was affected by bad atmospheric transmission, only flux upper limits could be derived. This time window has been divided into two intervals (24 − 1216 s and 1258 − 6098 s), and the derived upper limits are 1.1 × 10⁻¹¹ cm⁻² s⁻¹ and 5.4 × 10⁻¹² cm⁻² s⁻¹, respectively. In the subsequent 2.2 h (6134 − 14130 s), assuming that the signal is real, a flux value of (9.9 ± 4.8) × 10⁻¹³ cm⁻² s⁻¹ could be calculated. For the same time interval, a flux upper limit of 3.0 × 10⁻¹² cm⁻² s⁻¹ has also been estimated. All the mentioned fluxes are shown in Figure 11 (red symbols) and refer to the observed values, i.e., without correcting for EBL absorption. Upper limits have been calculated at 95% confidence level following the prescriptions of [156]. The low (∼ 3σ) significance did not allow an unfolded spectrum to be obtained. As a result, in the SED (Figure 12) the reconstructed flux in the third time bin (6134 − 14130 s) over the energy range 0.5 − 5 TeV is represented as an error box. The statistical error on the photon flux has been taken into account, while, for simplicity, the systematic error on the assumed spectral index was neglected. The flux inferred from the MAGIC observations in the 0.5 − 5 TeV energy range would imply a TeV luminosity at least 5 times larger (when de-absorbed for the EBL) than the luminosity emitted in the X-ray band.
Interpretation
A modeling of the multi-wavelength observations, including MAGIC data, has been presented by the MAGIC Collaboration in [145]. The emission is interpreted as the sum of several components, dominating at different times and in different energy bands:

• synchrotron and SSC emission from electrons accelerated at the forward shock; this is in general the dominant emission component;
• synchrotron emission from electrons accelerated by the reverse shock, which is found to dominate the radio band until t ∼ 4.8 h;
• kilonova emission, powered by freshly synthesized r-process elements released in neutron star mergers; this component is found to dominate the optical/nIR from around 1 to 4 days [148,153];
• an X-ray extended emission component, widely attributed to long-lasting activity of the central engine, here dominating the X-ray band for t < 10³ s.

In performing this multi-component modeling, the synchrotron and SSC forward shock emission have been calculated with a one-zone numerical code (see [71] for details), while the reverse shock and kilonova emission contributions have been taken from [148]. Only X-ray data at t > 10³ s have been included in the modeling, to exclude the extended emission component. The broad-band modeling is shown over-plotted on the light-curves in Figure 11 (solid lines) and on the SED between 1.7 and 4 h in Figure 12.
A very good agreement between data and modeling is found in the radio (green lines and points), optical (yellow and pink lines and points) and X-ray (blue lines and points) bands. A large degeneracy is present in the parameters, and the data modeling only allows the identification of ranges of permitted values for each parameter. These are reported in Table 1, and we note that they are very similar to those estimated in [148] and in a later work by [157]. Within the allowed parameter space defined by the radio, optical and X-ray observations, different combinations of the parameters predict different SSC fluxes at 1 TeV, reaching at most F_SSC(1 TeV) ∼ 2 × 10⁻¹³ erg cm⁻² s⁻¹. This value, when attenuated by the EBL, is at least one order of magnitude fainter than the one inferred from the data analysis of the MAGIC observations. In other words, the parameter space constrained by the observations at lower frequencies is unable to account for such energetic TeV emission, and the SSC forward shock scenario fails to reproduce the observations, provided that the hint of excess found by MAGIC is a real signal from the source.
An alternative scenario that has been explored is the external inverse Compton (EIC) scenario, investigated by [157]. These authors first consider a one-zone SSC model, and reach conclusions similar to those presented by the MAGIC Collaboration [145]: the SSC mechanism predicts a TeV flux around 1-2 orders of magnitude lower than the MAGIC observations (see Figure 13, orange curves). The alternative EIC scenario is then considered by the authors, where the seed photons are provided by the extended X-ray emission and the X-ray plateau. The extended emission and the plateau are fitted using two phenomenological functions.

Figure 11. GRB 160821B: multi-wavelength light curves (from radio to TeV) and their modeling according to [145]. ©AAS. Reproduced with permission. The different contributions from the forward shock (FS), reverse shock (RS), and kilonova are shown (see legend).

Figure 12. GRB 160821B: modeling of the simultaneous multi-wavelength SED at approximately ∼ 3 h according to [145] (©AAS. Reproduced with permission.), for the same parameters used to model the light-curves in Figure 11. The shaded areas show the sensitivity energy range of the different instruments. The MAGIC error box on the reconstructed flux is also shown. Synchrotron (solid black line), intrinsic SSC (before EBL absorption, dashed black line) and SSC emission with EBL attenuation (solid red line) estimated from the numerical modeling are shown.
The energy spectrum of the late-prompt emission is described by a broken power-law (see Figure 13, top and bottom panels). For the EIC model, the VHE spectrum is inferred at three different observed times (t = 1.1, 1.8, 2 h) and compared to the MAGIC flux averaged between 1.7 and 4 h. As can be seen in Figure 13 (bottom panel), the model flux at 2 h under-predicts the MAGIC flux (the MAGIC observed flux, green shaded area, should be compared with the EBL-absorbed model flux). We conclude that the EIC model is also unable to explain the large TeV flux suggested by the MAGIC observations.
GRB 180720B
GRB 180720B is a long GRB at z = 0.654 triggered on 20 July 2018 by the Fermi-GBM [158] at T_0 = 14:21:39 UT and, 5 s later, by the Swift-BAT instrument [159]. The H.E.S.S. telescopes observed and detected GRB 180720B about 11 h after the prompt emission, with a ∼ 5σ statistical significance.
General properties and multi-wavelength observations
The extremely bright prompt emission of this event, the seventh brightest among the GRBs detected by the Fermi-GBM until then, lasted for T_90 = 48.9 ± 0.4 s and released an isotropic-equivalent energy E_γ,iso = (6.0 ± 0.1) × 10^53 erg in the 50-300 keV range. Multi-wavelength afterglow observations covered the entire electromagnetic spectrum (see Figure 14). Significant signal was detected by Fermi-LAT from the trigger time up to 700 s, with the highest-energy photon of 5 GeV detected 137 s after the burst trigger [160]. The Swift-XRT telescope observed and identified a bright afterglow starting from 90 s, which was still visible almost 30 days after the trigger time. The late-time light curve (from 2 × 10^3 s to 4 × 10^6 s) can be modelled with an initial power-law decay with index −1.19 +0.01/−0.02, followed by a break at t_break = 8 × 10^4 s to an index of −1.55 +0.04/−0.05. Optical observations [161-170] revealed the presence of a counterpart and allowed the redshift to be estimated as z = 0.654. The optical afterglow was seen to fade slowly, at an almost constant rate, from around 10-11 h after the trigger time [171,172], as discussed in [173]. Radio observations (not shown in the figure) were also performed starting from ∼ 1.7 days after the burst, showing a steeply decaying power-law emission [173].
VHE observations and results
The H.E.S.S. telescopes followed up the event for ∼ 2 h starting from 10.1 h after the trigger, revealing the presence of a source with a 5.3σ pre-trial significance (5.0σ post-trial). The observation was performed in standard dark and good weather conditions, with a mean zenith angle of 31.5°. Another observation was performed under similar conditions 18 days later, with results consistent with background events. The inferred flux at ∼ 11 h and the flux upper limit at 18 d are shown in Figure 14 (red symbols).
The observed spectrum in the 0.1-0.44 TeV energy band has been fitted both with a power-law (left panel in Figure 15) and with an intrinsic power-law F_int = F_0,int (E/E_0)^−γ_int attenuated by the EBL through the factor e^−τ(E,z) (right panel in Figure 15). With reference to the second fit, the analysis returns a photon index γ_int = 1.6 ± 1.2 (statistical) ± 0.4 (systematic) and a flux normalization F_0,int = (7.52 ± 2.03 (statistical) +4.53/−3.84 (systematic)) × 10^−10 TeV^−1 cm^−2 s^−1, evaluated at an energy E_0 = 0.154 TeV.
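The fit just described can be sketched numerically. The snippet below is a minimal illustration, not the H.E.S.S. analysis: the optical depth τ(E, z) is a crude linear placeholder (real analyses interpolate tabulated EBL models, e.g., Domínguez et al.), and the spectral points are mock data generated around the quoted best-fit values.

```python
import numpy as np
from scipy.optimize import curve_fit

E0 = 0.154  # TeV, pivot energy of the H.E.S.S. fit quoted above

def tau_toy(E_TeV, z=0.654):
    # Crude placeholder optical depth, roughly linear in energy; NOT a real EBL model.
    return 3.0 * z * E_TeV

def model(E, F0, gamma):
    return F0 * (E / E0) ** (-gamma) * np.exp(-tau_toy(E))

# Mock spectral points in the 0.1-0.44 TeV band around the quoted best fit
rng = np.random.default_rng(0)
E = np.geomspace(0.1, 0.44, 8)
F_obs = model(E, 7.5e-10, 1.6) * rng.normal(1.0, 0.15, size=E.size)

popt, pcov = curve_fit(model, E, F_obs, p0=[1e-9, 2.0],
                       sigma=0.15 * F_obs, absolute_sigma=True)
print("F0 = %.2e TeV^-1 cm^-2 s^-1, gamma_int = %.2f" % tuple(popt))
```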
Interpretation
The H.E.S.S. Collaboration explored two possible radiation mechanisms to explain the VHE emission from GRB 180720B [4]: synchrotron emission and SSC radiation. A full modeling is not performed; the discussion and comparison between the two scenarios is based on estimates of the typical and maximal electron energies required in each case, and on the comparison between spectral and temporal indices in different energy ranges. A synchrotron spectrum with a flat (α ∼ −2) slope extending from the X-ray to the VHE band could model the emission with one single broad component and explain the similarity between the H.E.S.S., Fermi-LAT, and Swift-XRT luminosities, as well as the consistency among their photon index values.
The large error on the VHE photon index, however, does not place strong constraints, leaving open both the possibility of consistency with the extrapolation of the synchrotron spectrum and the possibility of a spectral hardening, indicative of a second component. A synchrotron origin of 10^2 GeV photons would require a process able to accelerate electrons up to PeV energies, in excess of the maximum electron energy achievable in external shocks (for a discussion, see Section 2.3.1). Adopting the standard Bohm limit, > 100 GeV emission 10 h after the burst would require a huge bulk Lorentz factor Γ ∼ 1000, which at these late times is highly unlikely. These strong requirements therefore disfavour synchrotron emission as the origin of the VHE component in GRB 180720B.

Figure 16. GRB 180720B: multi-wavelength modeling according to [174]. Both the synchrotron and the SSC contributions to the total flux are shown (see legend). In the SED, X-ray and H.E.S.S. data are shown with the green and blue boxes, respectively.
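The Bohm-limit argument above can be made quantitative with an order-of-magnitude sketch. It assumes the commonly quoted scaling E_max,obs ≈ k · Γ/(1 + z) with k of order 100 MeV; the exact prefactor depends on the acceleration details, so the numbers are indicative only.

```python
# Order-of-magnitude check of the Bohm-limit argument for GRB 180720B.
k_MeV = 100.0        # assumed burst-frame synchrotron limit (~100 MeV); indicative
z = 0.654            # redshift of GRB 180720B
E_obs_MeV = 100e3    # a 100 GeV photon, expressed in MeV

Gamma_required = E_obs_MeV * (1.0 + z) / k_MeV
print(f"Required bulk Lorentz factor: ~{Gamma_required:.0f}")   # ~1.7e3, i.e. Gamma ~ 10^3
```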
SSC scattering, on the contrary, emerges as a natural candidate. A full broad-band modeling of the GRB 180720B data in this scenario is presented in [174], using a numerical code that reproduces the synchrotron and SSC emission in the afterglow shocks (see [175]). The resulting light curves and SED in the H.E.S.S. observational time window are shown in Figure 16. The full emission is explained as afterglow forward shock radiation (except for the initial peak in the optical and X-ray curves at t ∼ 10^2 s, which is attributed to reverse shock emission). In the case of a constant-density ISM environment, the parameters that best reproduce the data are: E_k = 10^54 erg, n = 0.1 cm^−3, ε_e = 0.1, ε_B = 10^−4, Γ_0 = 300 and p = 2.4. As can be noticed, the equipartition factor ε_B needs to assume a rather low value in order to explain the observations. A stellar wind-like environment is discarded by the authors on the basis of the comparison between the flux expected at ∼ 1-10 GeV following the prescriptions of [48] (∼ 2 × 10^−6 erg cm^−2 s^−1 at t ≈ 100 s) and the one observed by Fermi-LAT (∼ 10^−8 erg cm^−2 s^−1 at t ≈ 100 s). A low magnetic field equipartition factor is derived from the condition E_KN ≳ 0.44 TeV at t ≈ 10 h, where E_KN is the energy at which KN scattering becomes relevant. A transition energy between the synchrotron and SSC components of ∼ 1 GeV is derived. Such a value falls into the Fermi-LAT energy range and is compatible with a hardening of the spectrum in the VHE band. However, since the Fermi-LAT sensitivity lies above the predicted flux of GRB 180720B at 10 h in the GeV band, the data cannot firmly confirm the presence of this transition.
GRB 190114C

General properties and multi-wavelength observations
The duration of the prompt emission is T_90 ≈ 116 s as measured by Fermi-GBM and T_90 ≈ 362 s as measured by Swift-BAT. However, the prompt light curve showed a multi-peak structure only for about 25 s, suggesting that the remaining activity recorded by these instruments, characterised by a smooth power-law decay, may already be afterglow emission. Support for this interpretation is also obtained from a joint spectral and temporal analysis of the Fermi-GBM and Fermi-LAT data [178]. The total radiated prompt energy is E_γ,iso = (2.5 ± 0.1) × 10^53 erg in the energy range 1-10^4 keV [179]. Extensive follow-up observations with several different instruments, from GeV to radio, are available; the light curves are shown in Figure 17. Fermi-LAT observations started at the beginning of the prompt phase. A GeV counterpart was detected from T_0 to 150 s, when the burst left the LAT field of view, outside which it remained until 8600 s. When LAT resumed observations, significant signal was still detected at a flux level of ∼ 2 × 10^−10 erg cm^−2 s^−1 (0.1-1 GeV). Approximately 60 s after the burst trigger, Swift-XRT started follow-up observations, which covered in total ∼ 10^6 s. The light curve in the 1-10 keV energy band is consistent with a power-law decay F ∝ t^α with α = −1.36 ± 0.02 [71]. NuSTAR and XMM-Newton observations are also available around 1-2 days. NIR, optical and UV data were taken from around ∼ 100 s. The early emission is particularly bright and is interpreted as dominated by the reverse shock component [180]. Afterwards, the decay rate flattens and then steepens again after ∼ 3 × 10^4 s (see Figure 17). The Nordic Optical Telescope measured a redshift of z = 0.4245 ± 0.0005 [181], which was then confirmed by the Gran Telescopio Canarias [182].
Radio and sub-mm data were taken from ∼ 10^4 s and exhibit an achromatic behaviour, possibly dominated by the reverse shock in the sub-mm range, followed by emission with nearly constant flux at late times.
VHE observations and results
After receiving (at 22 s after the BAT trigger time) and validating (at 50 s) the GRB alert, the MAGIC telescopes started observing GRB 190114C at 57 s and operated stably from 62 s, starting from a zenith angle of 55.8°. Observations lasted until 15912 s, when a zenith angle of 81.14° was reached. The observation was performed in good weather conditions but in the presence of the Moon, resulting in a night sky background approximately six times higher than under standard dark night conditions. The offline analysis shows a clear detection above the 50σ level in the first 20 minutes of observation [3]. The light curve (see Figure 18, upper panel) for the intrinsic flux (i.e., corrected for EBL absorption) in the 0.3-1 TeV range was derived from 62 s up to 2454 s. The TeV light curve is well described by a power-law with temporal decay index β_T = −1.60 ± 0.07, steeper than the one exhibited by the X-ray flux. The temporal evolution of the intrinsic spectral photon index α_int of the TeV differential photon spectrum is shown in the bottom panel. A constant value of α_int ≈ −2 is consistent with the data, considering the statistical and systematic errors, but there is evidence for a softening of the spectrum with time. The spectral fit in the 0.2-1 TeV energy range for the time-integrated emission (62-2454 s) returns α_obs = −5.34 ± 0.22 and α_int = −2.22 +0.23/−0.25 for the observed and EBL-corrected spectra, respectively.
Interpretation
The properties of the VHE light curve and spectrum of GRB 190114C were studied by the MAGIC Collaboration in [3]. The power-law behaviour, the absence of variability, and the relatively long timescale of the emission support the conclusion that the VHE component belongs to the afterglow phase. An estimate of the power radiated in the TeV range can be derived assuming that the afterglow onset is at ∼ 6 s [178]. In this case, considering the temporal evolution estimated from the MAGIC light curve, the energy radiated in the TeV band is ∼ 10% of the isotropic-equivalent energy of the prompt emission, E_γ,iso. The energies of the photons observed by the MAGIC telescopes were compared with the maximum energy of synchrotron photons assuming two possible scenarios for the radial profile of the external density, namely constant and wind-like (Figure 19). These estimates of the maximum energy are based on the widely adopted limit on the maximum electron Lorentz factor set by equating the acceleration rate in the Bohm regime with the synchrotron cooling rate (see Eq. 57). Adopting this limit, synchrotron emission cannot account for the TeV photons detected by MAGIC, and a different radiation mechanism must be invoked. An additional, model-independent indication of the presence of a spectral component other than synchrotron becomes evident once multi-wavelength simultaneous SEDs are built. In [71] the VHE data were rebinned into five time intervals, and XRT, BAT, GBM and LAT data were added when available, i.e., in the first two time intervals (Figure 20). The spectrum shows a double-peaked behaviour, with a first peak in the X-ray band and a second one in the VHE band. The Fermi-LAT data play a particularly important role in revealing the shape of the SED, as they show a dip in the flux, strongly supporting an interpretation of the whole SED as the superposition of two distinct components.
Following these considerations, in [71] the SSC mechanism is explored. The broad-band emission is modeled with a numerical code reproducing the synchrotron and SSC radiation in the external forward shock scenario, including the proper KN cross section and the effects of γ-γ annihilation.
The predicted spectra and light curves are compared with the data in Figures 21 and 22. Acceptable modelings of the multi-wavelength afterglow spectra have been found for a constant-density medium with E_k ≈ 3 × 10^53 erg, ε_e ≈ 0.05-0.15, ε_B ≈ (0.05-1) × 10^−3, n ≈ 0.5-5 cm^−3 and p ≈ 2.4-2.6. It is found that, since the peak of the SSC component lies below 200 GeV, the KN suppression and the internal γ-γ absorption play a non-negligible role in shaping the peak of the VHE spectrum. The modeling reproduces very well the XRT, LAT and TeV emission (solid blue curve in Figure 21 and solid blue, green and red curves in Figure 22), while it overproduces both the optical and the radio flux at late times (solid violet, yellow and cyan curves in Figure 22). According to [71], a similar fit is found also assuming a wind-like profile for the external density; in this case the parameters are E_k = 4 × 10^53 erg, ε_e = 0.6, ε_B = 1 × 10^−4, A_* = 0.1 and p = 2.4. Very interestingly, the modeling shows that the late LAT observation (around 10^4 s) is completely dominated by SSC emission (red dashed curve in Figure 22). A different type of modeling is also investigated in [71], under the requirement of reproducing the optical data. In this case (dotted curves in Figure 22), the fit is very good for the optical, X-ray and LAT observations, but fails to reproduce the MAGIC light curve. The values inferred for the GRB afterglow parameters are similar to those used in past GRB afterglow studies at lower frequencies. This is an indication that the SSC component can be a relatively common process in GRB afterglows, since it does not require peculiar values of the parameters to be explained.

Figure 21. GRB 190114C: modeling of the multi-wavelength SEDs. MAGIC spectra are EBL-corrected assuming the model by [183]. From [71].
Several other successful modelings of the GRB 190114C data within the synchrotron and SSC external forward shock scenario have been published in the literature [174,184-186]. A summary of the parameters inferred by the different works can be found in Table 2.
In [174] the X-ray, optical and LAT data before 100 s are attributed to reverse shock emission or to a prompt contribution. A constant-density environment for the circumburst material is assumed. A time-averaged SED (50-150 s) is estimated, showing that at GeV energies a transition between the synchrotron and SSC components can be identified. From a re-analysis of the LAT data, a hard photon index (1.76 ± 0.21) is derived, in agreement with the hardening of the spectrum caused by the rise of the SSC component. In contrast to what is found by [71,184], γ-γ absorption does not contribute significantly to shaping the VHE spectrum.
A similar interpretation is given in [185], although the inferred value of ε_B is larger (ε_B ∼ 10^−3). A consistent modeling of the multi-wavelength observations as synchrotron and SSC radiation is found in both the ISM and wind-like scenarios. The SED at 80 s (see figure 2 and figure 5 in [185]) and the broad-band light curves (figure 3 and figure 6) are reproduced, although at 10^3 s the model predictions in the 0.3-1 TeV band and in X-rays are slightly brighter than the observed data.
In [186], analytical approximations are adopted for the description of the synchrotron and SSC components. In addition, the KN cut-off energy and the γ-γ absorption contribution are calculated and compared with the data. A wind-like environment was adopted, with a KN cut-off energy found to lie around the TeV band. This implies that the KN effect is relevant only at TeV energies, and the VHE data can be modelled assuming that the SSC scattering is in the Thomson regime. The γ-γ absorption is also considered negligible, since the estimated attenuation factor is much lower than the one due to the EBL and reaches values around unity only for energies ≳ 1 TeV.
In [184] the multi-wavelength data were fitted with a single-zone numerical code with an exact calculation of the KN cross-sections as well as of the attenuation due to the pair production mechanism. A smoothed analytical approximation for the electron injection function was used. A systematic scan over a 4-dimensional parameter space was performed to search for the best-fit solution at early and later times. The SEDs calculated at 90 s and 150 s (figure 3 and figure 9, respectively) are found to be well described by a fast cooling regime. The KN effect and the pair production mechanism shape the VHE spectrum significantly: it is estimated that 10% of the total emitted power, i.e., 25% of the initially produced IC power, is absorbed.
GRB 190829A

General properties and multi-wavelength observations
The prompt emission detected by the two instruments (Fermi-GBM and Swift-BAT) consists of two episodes, the first one seen in the time interval from the trigger time to 4 s and the second, brighter episode from 47 s to 61 s. The two episodes have very different spectral properties: the first one is described by a power-law with index −1.41 ± 0.08 and an exponential high-energy cutoff with E_p = 130 ± 20 keV, while the second one can be described with a Band function with E_p = 11 ± 1 keV, α = −0.92 ± 0.62 and β = −2.51 ± 0.01 [189]. The (isotropic-equivalent) prompt emission energy inferred from the spectral analysis of the Fermi-GBM data is E_γ,iso ∼ 2 × 10^50 erg.
A multi-wavelength observational campaign covering the entire electromagnetic spectrum was performed for this event (see Figures 23 and 25). The event was not detected in the HE range by Fermi-LAT; nevertheless, upper limits have been reported in the MeV-GeV band up to 3 × 10^4 s [190]. The Swift-XRT started observations at 97.3 s and detected a bright X-ray afterglow, which was monitored until ∼ 7.8 × 10^6 s [191]. The X-ray light curve in the 0.3-10 keV energy range (observer frame) shows a peculiar behaviour, with an initial steep decay phase followed by a plateau and a strong flare episode (Figure 25, upper panel, blue points). After the flare, the standard afterglow phase starts, with a power-law decay and a possible steepening around 10 days. In the UV/optical/NIR band the event was followed by several instruments. The redshift was estimated to be z = 0.0785 ± 0.005 [192], which makes this event one of the closest GRBs ever detected. An associated supernova was reported starting from 4.5-5.5 days after the GRB trigger [193]. A flare is also seen in the optical data, simultaneous with the one in X-rays. In the radio band, detections were reported by several instruments starting from ∼ 1 day after the trigger [194-197]. The radio flux initially increases slowly and then starts to decay after 20-30 days. The VHE light curve measured by H.E.S.S. is shown in Figure 23, together with the XRT light curve and the LAT upper limits. The time-evolving VHE flux is satisfactorily modeled with a power-law decay F(t) ∝ t^α with α = −1.09 ± 0.05. Such a decay index is similar to the X-ray one derived in the same time interval (α_X = −1.07 ± 0.09).
Interpretation
The interpretation of the VHE emission from GRB 190829A is debated, and different radiation mechanisms, including synchrotron, SSC and EIC emission, have been proposed to explain the origin of the TeV emission. The H.E.S.S. Collaboration [5] investigated both synchrotron and SSC emission in the external forward shock as the radiation mechanism responsible for the observed TeV component. Multi-wavelength data collected simultaneously with the H.E.S.S. observations in the first two nights were modeled separately with a time-independent numerical code, using a Markov chain Monte Carlo (MCMC) approach to explore the parameter space. The results of the fitting show that the SSC mechanism fails to explain the VHE emission. The low bulk Lorentz factor implied by the observations (Γ ≲ 10) means that the SSC emission occurs in the KN scattering regime. As a result, a steep spectrum, inconsistent with the observational VHE data, is obtained (see Figure 24, light blue shaded area). Possible improvements of the agreement between data and model would require a higher Γ, which is in contrast with the observations, or the presence of an additional hard component in the distribution of the accelerated electrons. However, this latter solution implies extreme assumptions on the density of the circumburst medium (n_0 = 10^−5 cm^−3 in the case of a strong magnetic field, or n_0 = 10^5 cm^−3 for a weak magnetic field) and an SED strongly dominated by the SSC component, which is inconsistent with the data. A better fit of the observational data can be obtained with an alternative model in which the maximum electron energy set by radiative losses is ignored. In such a scenario, the synchrotron emission is able to extend up to TeV energies and the observational broad-band data are described by a single synchrotron component (see Figure 24, orange shaded area); the SSC contribution is negligible, while γ-γ absorption shapes the VHE spectrum. The single synchrotron component scenario provides a better fit (> 5σ preference) to the multi-wavelength data. On the other hand, this interpretation requires unknown acceleration processes or a non-uniform magnetic field strength in the emission region, as described for GRB 180720B (see Section 4.2).

Figure 25. GRB 190829A: multi-wavelength modeling according to [72]. The 90% and 50% credible intervals from the fit are shown in lighter shades.
A complete multi-wavelength modeling of the GRB 190829A data, including the contributions of synchrotron and SSC emission from both the forward and reverse shocks in a constant-density environment, is presented in [72]. The predicted broad-band light curves and the SED at the time of the H.E.S.S. detection are shown in Figure 25. An MCMC approach was adopted in order to estimate the best-fit parameters of the multi-wavelength modeling; the resulting values of the parameters related to the forward shock are shown in Table 3. In contrast with the H.E.S.S. Collaboration results, the VHE emission is well reproduced within the SSC external forward shock scenario. The usual simplifying assumption ξ_e = 1 is excluded by the fit, which provides acceptable solutions only for ξ_e ≲ 6.5 × 10^−2. Moreover, an isotropic-equivalent kinetic energy at the afterglow onset E_k = 2.5 +1.9/−1.3 × 10^53 erg is estimated. Considering the observed GBM prompt energy, such a value implies a prompt efficiency η = 1.2 +1.0/−0.5 × 10^−3, which is much lower than the typical values derived from previous GRB studies. The other parameters (n_0, ε_e and ε_B) are found to be similar to the ones estimated for GRB 190114C.
A two-component off-axis jet model has also been investigated [198]. This model proposes that the GRB jet is seen off-axis (θ_view = 1.78°) and consists of a narrow (θ_jet = 0.86°), fast (Γ = 350) jet and a slow (Γ = 20) co-axial jet. The former component is responsible for the emission of SSC photons in the VHE band. The calculation of the SSC flux at the time of the H.E.S.S. detection follows the prescriptions of [48], considering only the Thomson scattering regime.
An EIC plus SSC scenario has also been proposed for the production of the VHE component [199]. The seed photons belong to the long-lasting X-ray flare seen in GRB 190829A, and can be up-scattered to TeV energies. A numerical calculation of the afterglow dynamics and radiative processes has been used to model the observational data. For t ∼ 10^3-10^4 s the EIC component dominates the VHE emission, while at later times (t ≳ 3 × 10^4 s) the EIC contribution gradually decays and the SSC component becomes relevant. The initial afterglow kinetic energy used for the modeling (E_k = 10^52 erg) suggests that GRB 190829A is not a typical low-luminosity GRB but may have a much higher kinetic energy.

GRB 201015A

General properties and multi-wavelength observations

The (isotropic-equivalent) prompt emission energy inferred from the spectral analysis of the Fermi-GBM data is E_γ,iso = (1.1 ± 0.2) × 10^50 erg [202]. The prompt duration is T_90 = 9.78 ± 3.47 s (15-350 keV band). The BAT time-averaged spectrum in the time interval 0-10 s is well fitted by a power-law model with photon index −3.03 ± 0.68, suggesting a low peak energy E_p < 10 keV [203].
Swift-XRT [204] followed up the event starting only 3214 s after T_0 due to observational constraints. The light curve up to almost 1 day is well described by a power-law with decay index α = −1.49 +0.24/−0.21. Late-time observations performed by the Chandra X-ray Observatory [205] and Swift-XRT [206] from ∼ 8 days up to ∼ 21 days showed a flattening of the X-ray light curve, i.e., a flux level higher than, and inconsistent with, the extrapolation of the early-time power-law decay. Optical observations confirmed the presence of an afterglow counterpart from around 168 s [207]. The optical light curves showed a clear initial rise with a peak around 200 s, followed by a decay [208]. A bright radio counterpart (flux density ∼ 1.3 × 10^−4 Jy at 6 GHz, 1.4 days after the burst) was also detected by several instruments [209-211]. Late-time optical observations identified an associated supernova rising from 5 days after the burst and reaching its maximum flux around 12-20 days after T_0 [212,213]. The measurement of the redshift was reported by the GTC (z = 0.426) [214] and then confirmed by the NOT (z = 0.423) [215].

VHE observations and results

Final results from the VHE data analysis of GRB 201015A have not been published yet; the preliminary information reported here has been released in [216] and [217]. Observations of GRB 201015A were performed by the MAGIC telescopes starting 33 s after the trigger time, under dark conditions, with a zenith angle ranging from 24° up to 48°, and lasted for about 4 h. In the second half of the data taking, the presence of passing clouds affected the observation for ∼ 0.45 h; these data were removed and the remaining ones were analyzed with the standard MAGIC analysis software. The offline analysis showed a possible excess with a 3.5σ significance at the GRB position (see Figure 26) and a significant spot in the sky map. The energy threshold of the analysis is calculated to be 140 GeV from Monte Carlo simulated γ-ray data.
GRB 201216C
GRB 201216C is a long GRB at z = 1.1 triggered by the Swift-BAT at T_0 = 23:07:31 UT on 16 December 2020 [218]. Fermi-GBM also detected the event, with a slightly different trigger time (6 s before the Swift-BAT one) [219]. MAGIC detected GRB 201216C with a significance of ∼ 6σ.
General properties and multi-wavelength observations
The duration is estimated as T_90 = 48 ± 16 s in the 15-350 keV band by Swift-BAT [220] and around 29.9 s in the 50-300 keV band by Fermi-GBM. The light curve shows a multiple-peak structure with a main peak around 20 s after the trigger time. The time-averaged GBM spectrum in the first 50 s is best fit by a Band function with E_p = 326 ± 7 keV, α = −1.06 ± 0.01 and β = −2.25 ± 0.03. The isotropic-equivalent energy E_γ,iso in the 10-1000 keV band is (4.71 ± 0.16) × 10^53 erg, as calculated from the fluence measured by Fermi-GBM.
Fermi-LAT observed the GRB from around 3500 s up to 5500 s; no significant emission was reported [221]. Swift-XRT began observations at t = 2966.8 s due to an observational constraint. A fading source was detected, following a broken power-law behaviour with decay indices of 1.97 +0.10/−0.09 and 1.07 +0.15/−0.10 and a break at 9078 s [222]. Optical observations were also performed by several instruments. The r-band light curve, built from a VLT data point [223] and data inferred from FRAM-ORM [224], shows a power-law flux decay with index equal to 1. The Liverpool Telescope observations, performed around 177 s after the trigger time, seem to be around the peak of the optical afterglow [225]. The HAWC observatory followed up the event, but no significant detection was identified in the TeV band [226]. A redshift estimation of z = 1.1 was obtained by the ESO VLT [227].
VHE observations and results
Final results from the VHE data analysis of GRB 201216C have not been published yet; the preliminary information reported here has been released in [228] and [229]. MAGIC observations and data taking of GRB 201216C started with a delay of 56 s after the Swift-BAT trigger time. The observation lasted for 2.2 h and was performed in optimal atmospheric conditions and in the absence of the Moon, with the zenith angle ranging from 37° to 68°. The low level of night sky background allowed the low-energy events to be retained, and therefore a lower energy threshold to be reached compared to the other GRBs observed. To keep as many low-energy events as possible, an image cleaning method able to extract dimmer gamma-ray-initiated Cherenkov showers than the standard one was adopted. The signal significance was calculated to be 6.0σ pre-trial (5.9σ post-trial) for the first 20 minutes of observation (see Figure 27). A preliminary time-integrated spectrum for the first 20 minutes of observation was produced. Due to the strong absorption by the EBL, a very steep power-law was found for the observed spectrum, especially for events with energies higher than a few hundred GeV. The intrinsic spectrum, corrected for the EBL absorption, was found to be consistent with a flat single power-law up to 200 GeV, above which no significant spectral points have been derived. A preliminary light curve in the time interval from 56 s to 2.2 h was also calculated; after 50 min no significant emission was found, and only upper limits on the emitted flux have been derived. The preliminary results are consistent with a monotonically decaying light curve fitted with a power-law.
The new TeV spectral window: discussion
After decades of searches, the MAGIC and H.E.S.S. observations have unequivocally proved that (long) GRBs can be accompanied by a significant amount of TeV emission during the afterglow phase. Table 4 summarizes the main properties of the GRBs detected by IACTs and presented in detail in the previous section. The list also includes two events (namely GRB 160821B and GRB 201015A) for which only a hint of excess (i.e., with a significance of ∼ 3-4σ) was found. For the other four events, namely GRB 180720B, GRB 190114C, GRB 190829A and GRB 201216C, the detections are robust (> 5σ). The table lists several properties, such as the duration T_90 and total emitted energy E_γ,iso of the prompt emission, the redshift, and information on the IACT detection (the delay T_delay between the trigger time T_0 and the start of observations, the energy range where photons have been detected, the name of the telescope and the significance of the excess). GRB 160821B is the only one belonging to the short class, the other five being long GRBs.
In this section we address the question of why these GRBs have been detected, whether they have peculiar properties, and whether they show some common behaviour which may be at the basis of the production of TeV radiation. In doing so, one should be careful, since these GRBs have been followed up under very different observational conditions and with very different time delays after the trigger time, and they span a rather large range of redshifts (from 0.078 to 1.1). Keeping in mind these differences, which have a strong impact on the detection capabilities of IACTs, we compare the observed and intrinsic properties of the population of GRBs at VHE, highlighting their similarities and differences, and discuss how they compare to the whole population.

Table 4: List of the GRBs observed by IACTs with a firm detection (significance > 5σ) or a hint of detection (3-4σ) above 100 GeV. T_90 and E_γ,iso refer to the duration and total emitted energy of the prompt emission; the redshift is listed in column 3; T_delay is the time delay between the trigger time T_0 and the time when IACT observations started; E_range defines the energy range of the detected photons. The name of the telescope which made the observation and the significance of the detection are listed in the last column.
Observing conditions
Low zenith angles, fast repointing, dark nights, low redshift, and highly energetic events have always been considered optimal, if not necessary, conditions to have some chance of GRB detection with IACTs. On the other hand, these first VHE GRBs have demonstrated that GRBs can have a level of TeV emission large enough to be detected by the current generation of IACTs even under non-optimal conditions. GRB 190114C was observed at a zenith angle > 55° and in the presence of the Moon. Both conditions imply a higher energy threshold (typically ≳ 0.2 TeV) and require a dedicated data analysis. Another example is GRB 160821B, which was observed with a night sky background 2-8 times higher than under standard dark night conditions. Moreover, significant VHE excess was found not only in cases of short delays (less than hundreds of seconds) from the burst trigger but, somewhat surprisingly, also at quite late times, i.e., with delays of several hours or even days, as in the cases of GRB 180720B and GRB 190829A, respectively. This shows the importance of pointing at a GRB also at relatively late times, in case fast follow-up observations are not feasible.
Optimal observing conditions and short delays remain, however, crucial to detect GRBs at higher redshift, for which the impact of the EBL is large already at a few hundred GeV. This explains how the detection of a GRB at z = 1.1 (GRB 201216C) was possible: in this case, optimal observing conditions allowed a low energy threshold of the sensitivity window (∼ 70 GeV) to be reached. The excess signal was indeed found only below 200 GeV (more precisely, between 70 and 200 GeV), where the attenuation by the EBL is still limited.
Redshift and the impact of EBL
The redshifts of the detected GRBs cover a broad range, from z = 0.079 (GRB 190829A) to z = 1.1 (GRB 201216C). The impact of the EBL attenuation on the spectrum changes severely depending on the redshift and on the photon energy. For redshift z ∼ 0.4 the impact becomes relevant for energies ≳ 0.2 TeV, with a flux attenuation of ∼ 50% at 0.2 TeV and almost ∼ 99.5% at 1 TeV [183]. For nearby events (z ≲ 0.1-0.2) the effect of the EBL is less severe and becomes relevant only for energies ≳ 0.3 TeV, reaching an attenuation factor of an order of magnitude only for energies ≳ 2 TeV. As a result, the observed photon indices and the energy ranges of the detected TeV photons differ significantly between the events. GRBs with redshift z > 0.4, such as GRB 190114C and GRB 180720B, have very steep observed photon indices and are detected in the lower energy range, up to 0.44 TeV for GRB 180720B and 1.0 TeV for GRB 190114C. The spectral analysis of GRB 201216C is not yet public, but preliminary results indicate that the emission is concentrated in the lower energy band, between 0.1-0.2 TeV. Nearby GRBs with redshift z ≲ 0.1-0.2, such as GRB 160821B and GRB 190829A, show a less steep photon spectrum (around −2.5), and the TeV detection range extends above 1 TeV. The detection of several GRBs with significant redshift (z > 0.4) is robust proof that IACTs can overcome the limitations due to EBL absorption and can expand the VHE detection horizon, at the current stage up to z = 1.1. On the other hand, it is evident that the detection of nearby GRBs is fundamental in order to explore the spectral shape more robustly, unbiased by the EBL correction, which is a non-negligible source of uncertainty at higher redshifts.
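For orientation, the attenuation percentages quoted above map onto EBL optical depths through F_obs = F_int e^−τ; the short snippet below makes the conversion explicit.

```python
import numpy as np

# F_obs = F_int * exp(-tau): invert the quoted flux suppressions for tau.
for suppression in (0.50, 0.90, 0.995):
    tau = -np.log(1.0 - suppression)
    print(f"{100 * suppression:5.1f}% flux suppression -> tau = {tau:.2f}")
```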
Energetics
In terms of E_γ,iso the VHE GRB sample spans more than three orders of magnitude, from ∼ 10^49 erg up to ∼ 6 × 10^53 erg. The five long GRBs detected follow the Amati correlation, as shown in Figure 28. GRB 160821B, the only short GRB of the sample, is consistent with the existence of a possible Amati-like correlation for short GRBs, with this event falling in the weak-soft part of the correlation. The detections of GRB 190829A and GRB 201015A show that an event does not need to be extremely energetic in terms of isotropic-equivalent prompt energy in order to produce TeV emission with an (intrinsic) luminosity comparable to the X-ray luminosity. As a result, sources with E_γ,iso ∼ 10^50-51 erg are not excluded as possible TeV emitters, even though their detection is possible only at relatively low redshift. This reduces the available volume, and hence the detection rate of similar events. In any case, this is relevant also for short GRBs, which are less energetic than long ones, with typical isotropic energies falling within the ∼ 10^49-52 erg range [144].
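As a rough illustration of where these events sit with respect to the Amati correlation, the sketch below uses an assumed normalization and slope (E_p,i ∼ 100 keV at E_iso = 10^52 erg, with a square-root dependence); published fits differ in detail, so treat this purely as an order-of-magnitude check, here applied to the GRB 201216C numbers quoted earlier.

```python
def amati_Ep_keV(E_iso_erg, norm_keV=100.0, slope=0.5):
    # Assumed illustrative form: E_p,i ~ norm * (E_iso / 1e52 erg)^slope.
    return norm_keV * (E_iso_erg / 1e52) ** slope

# GRB 201216C: E_iso ~ 4.7e53 erg; observed E_p ~ 326 keV at z = 1.1,
# i.e. a rest-frame E_p,i ~ 326 * (1 + 1.1) ~ 685 keV.
print(f"Amati-predicted E_p,i ~ {amati_Ep_keV(4.7e53):.0f} keV (vs ~685 keV measured)")
```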
X-ray lightcurves
The comparison between the TeV and X-ray light curves suggests an intimate connection between the emission in these two bands, both in terms of emitted energy and of luminosity decay rate. In Figure 29 the XRT afterglow light curves (luminosity versus rest-frame time) in the 0.3-10 keV energy range are compared with the VHE light curves (integrated over different energy ranges, depending on the detection window, see Table 4). Different colors refer to the six different GRBs; the VHE luminosity is shown with empty circles.
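A sketch of how such luminosity light curves are built from observed fluxes: isotropic-equivalent luminosity L = 4π d_L² F against rest-frame time t/(1 + z). The flux value below is an arbitrary example, and the cosmology (astropy's Planck18) is an assumption.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

def luminosity(flux_cgs, z):
    """Isotropic-equivalent luminosity from an observed energy flux."""
    d_L = Planck18.luminosity_distance(z).to(u.cm).value
    return 4.0 * np.pi * d_L**2 * flux_cgs

z = 0.4245          # GRB 190114C
f_x = 1e-10         # example 0.3-10 keV energy flux, erg cm^-2 s^-1 (illustrative)
t_obs = 1e4         # observer-frame time, s
print(f"L_X ~ {luminosity(f_x, z):.1e} erg/s at t_rest ~ {t_obs / (1 + z):.0f} s")
```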
Considering the X-ray luminosity, the GRB sample can be divided into two groups: GRB 190114C, GRB 180720B and GRB 201216C display large and clustered X-ray luminosities (at t ∼ 10^4 s their luminosity is around (1-5) × 10^47 erg s^−1) and their light curves almost overlap for the entire afterglow phase. The other three GRBs (GRB 190829A, GRB 201015A and GRB 160821B) are much fainter in terms of X-ray luminosity (by at least two orders of magnitude at t ∼ 10^4 s). This is consistent with the fact that they also have a smaller E_γ,iso. The correlation between the X-ray afterglow luminosity and the prompt E_γ,iso is found in the bulk of the long GRB population, and these GRBs are no exception.
Observations in the VHE band (empty circles in Figure 29) reveal that the VHE luminosities observed in the afterglow phase are in general smaller than, but comparable to, the simultaneous X-ray luminosities, implying that an almost equal amount of energy is emitted in the two bands. Any theory aimed at explaining the origin of the TeV radiation should explain the origin of this similarity. Concerning the decay rate, observations are not yet conclusive: the decay rate of the TeV emission is available only for two events. For GRB 190829A the temporal indices in the X-ray and VHE bands are very similar, while for GRB 190114C the VHE emission clearly decays faster than the X-ray one.
For GRB 190114C at t ∼ 380 s the VHE luminosity L_VHE is ∼ 1.5-2.5 × 10^48 erg s^−1 and the X-ray one L_X is ∼ 0.6-1.0 × 10^49 erg s^−1; the power radiated in the VHE band is thus about ∼ 25% of the X-ray one. Similarly, for GRB 190829A at t ∼ 4.5 h the VHE luminosity L_VHE is ∼ 4.0-8.5 × 10^44 erg s^−1, which is around ∼ 15-20% of the corresponding X-ray one (L_X ∼ 2.0-5.0 × 10^45 erg s^−1). For GRB 180720B at t ∼ 2 × 10^4 s the VHE luminosity L_VHE is ∼ 9 × 10^47 erg s^−1 and the X-ray one L_X is ∼ 1.5-2.5 × 10^48 erg s^−1; in this case the power radiated in the VHE band is around ∼ 35-60% of the X-ray one.
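The percentages above follow from simple arithmetic on the quoted luminosity ranges (for GRB 180720B the X-ray range of (1.5-2.5) × 10^48 erg s^−1 is used, consistent with the stated 35-60% ratio):

```python
# (L_VHE range, L_X range) in erg/s, as quoted in the text above.
ranges = {
    "GRB 190114C @ 380 s": ((1.5e48, 2.5e48), (0.6e49, 1.0e49)),
    "GRB 190829A @ 4.5 h": ((4.0e44, 8.5e44), (2.0e45, 5.0e45)),
    "GRB 180720B @ 2e4 s": ((9.0e47, 9.0e47), (1.5e48, 2.5e48)),
}
for grb, ((v_lo, v_hi), (x_lo, x_hi)) in ranges.items():
    # Extreme ratios: faintest VHE over brightest X-ray, and vice versa.
    print(f"{grb}: L_VHE / L_X = {v_lo / x_hi:.2f} - {v_hi / x_lo:.2f}")
```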
The TeV contribution to the multi-wavelength modeling
Modeling of multi-wavelength afterglow data provides important insights into GRB afterglow physics. In particular, the VHE data have been crucial to investigate (i) the radiation mechanisms responsible for the production of the 10-100 GeV photons already detected by LAT; (ii) the environmental conditions at the GRB site; (iii) the free parameters describing the shock micro-physics, and in particular the self-generated magnetic field.
The modelings proposed so far in the literature to explain the VHE component have considered two different radiation processes at the origin of the TeV emission: SSC and synchrotron. In the first case, the VHE emission is interpreted as a spectral component distinct from the synchrotron radiation dominating from radio to ∼ GeV energies, with the synchrotron photons providing the seed photons that are up-scattered to higher energies by the same electron population. In the second scenario, the VHE emission is seen as the extension of the synchrotron spectrum up to TeV energies.
In principle, a simultaneous SED covering the X-ray, HE and VHE ranges should be sufficient to discriminate between these two possibilities: a hardening of the spectrum from GeV to TeV energies would be the smoking gun for the presence of a distinct component. In practice, the uncertainties in the spectral slope at VHE (caused by the uncertainty on the EBL and by the narrow energy range of the TeV detection) can make the distinction hard to perform. In this case, LAT observations are of paramount importance to reveal the presence of a dip in the SED, which would also prove the need to invoke a different origin for the VHE emission. This seems to be the case for GRB 190114C, for which in at least one SED the LAT flux strongly suggests a dip in the GeV flux, and hence the presence of the characteristic double bump expected for synchrotron-SSC emission (Figure 20). For GRB 190829A, LAT provides only an upper limit, which is not constraining for modeling the shape of the SED (Figure 24). For this GRB an interpretation of the whole SED in terms of synchrotron radiation cannot be excluded, although a modeling as SSC radiation has proved successful [72] (Figure 25). For the other events detected at VHE, either the data do not allow a proper SED with simultaneous multi-wavelength observations to be built, or they are not yet public. Despite this, SSC emission seems to be the most viable mechanism able to explain the TeV data. A firm conclusion on the responsible radiation mechanism has not been reached yet, and future detections will be crucial for deeper investigations.
Assuming one of the two scenarios, TeV data coupled with broad-band observations at lower energies can be exploited to give additional information on the details of the afterglow external forward shock scenario.
Concerning the shock micro-physics, several modelings have suggested the possibility that the fraction of electrons accelerated into the non-thermal distribution, ξ_e, is different from the standard value of 1 which is usually assumed. In the modeling of a few GRBs, namely GRB 190114C [185] and GRB 190829A [72,198], the introduction of ξ_e < 1 was essential in order to fit the observational data consistently. In particular, in [72] a low value ξ_e ≲ 6.5 × 10^−2 was required to provide an acceptable fit of the data, while the other modelings assume a larger value of ξ_e, around ∼ 0.3. Further detections will be exploited in order to verify whether such an indication is present also in other events.
Some considerations can also be drawn for the equipartition parameters ε_e and ε_B. These values, especially the latter one, are usually poorly constrained and can span several orders of magnitude. The TeV modelings described so far suggest that around ∼ 10% of the energy is given to the electrons, while a much lower fraction (from 10^−5 to 10^−3) is given to the magnetic field. Larger values of ε_B, such as 0.1-0.01, which are often considered in the external shock scenario, are excluded. Moreover, some results can also be interpreted as an indication of a time evolution of these parameters. In Figure 22 the modeling of the broad-band light curves of GRB 190114C is shown. Two different modelings are presented: one optimized for the early-time X-ray, HE and VHE observations (solid lines) and one optimized for the late-time lower-energy bands (dotted lines). This is due to the fact that the model which reproduces the early-time data over-predicts the late-time optical and radio observations. This result points towards the possibility that some of the parameters of the afterglow theory that are usually held fixed (e.g., the electron and magnetic field equipartition parameters) may evolve in time. A further clue to the presence of time-dependent shock micro-physics parameters comes from the low-frequency multi-wavelength modeling of GRB 190114C presented in [121]: in order to model the optical and radio data, the micro-physical parameters are required to evolve with time as ε_e ∝ t^−0.4 and ε_B ∝ t^0.1 in the ISM case, and ε_B ∝ t^0.76 in the stellar wind scenario.
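As a small illustration of the time-dependent microphysics suggested by [121], the scalings can be written directly (the normalizations at the reference time are arbitrary here):

```python
def eps_e(t, t0=1e3, eps0=0.1):
    return eps0 * (t / t0) ** -0.4          # ISM case, scaling from [121]

def eps_B(t, t0=1e3, eps0=1e-4, wind=False):
    return eps0 * (t / t0) ** (0.76 if wind else 0.1)

for t in (1e3, 1e4, 1e5):
    print(f"t = {t:.0e} s: eps_e = {eps_e(t):.3f}, "
          f"eps_B(ISM) = {eps_B(t):.1e}, eps_B(wind) = {eps_B(t, wind=True):.1e}")
```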
An issue that is still not settled by TeV observations is the discrimination between a constant and a wind-like profile for the ambient density. Long GRBs are expected to occur in wind-like environments; nevertheless, at the current stage there seems to be no preference between such an environment and a constant ISM one, which is also able to reproduce the observational data well. Conclusive answers on this topic cannot therefore be drawn yet.
In conclusion, the current population of GRBs at VHE already shows quite a broad range of properties, spanning more than three orders of magnitude in E_γ,iso and more than two orders of magnitude in afterglow luminosity, with redshifts ranging between 0.079 and 1.1. The afterglow X-ray and VHE emission have comparable fluxes and decay slopes, and the power emitted in the VHE band during the afterglow seems to constitute from 15% up to 60% of the X-ray one. Data modeling suggests that the radiation mechanism responsible for the VHE emission is SSC, although different mechanisms (e.g., synchrotron radiation, EIC) cannot be completely excluded and a conclusive answer cannot be given yet. Multi-wavelength modeling shows no preference between the ISM and wind-like scenarios for the GRB environment, and indicates that the shock micro-physics parameters able to reproduce the VHE emission are ε_e ∼ 0.1 and ε_B ∼ 10^−5-10^−3. Such features can be an indication of the universality of TeV emission in GRBs. It is then expected that a larger sample of GRBs will be detected in the VHE band, including also short GRBs, for which at the current stage there is no confirmed detection, except for the hint of excess seen for GRB 160821B.
Conclusions and future prospects
The recent discoveries performed by the current generation of Cherenkov telescopes in the VHE band have opened a new observational spectral window on GRBs. The presence of a TeV afterglow component has been unequivocally proven, and the studies on the currently available sample have shown the potential that such detections have in probing several long-standing open questions in the GRB field. These first studies have focused on the identification of the responsible radiation mechanism, which is the first issue to address, and on the comparison of the energetics, luminosity, and temporal behaviour of the VHE component with the emission at lower frequencies. Modelings of multi-wavelength data covering from radio up to TeV energies have been performed, giving interesting insights into the shock micro-physics conditions.
Limitations to the robust use of VHE data for afterglow modeling are imposed by the severe modification of the intrinsic spectrum caused by the energy-dependent flux attenuation induced by the EBL. GRBs with redshift z > 0.4, four out of six in the current VHE sample, are strongly affected by EBL absorption starting from hundreds of GeV. This implies large uncertainties on the shape and photon index of the intrinsic VHE spectrum. As a result, firm conclusions on the origin and spectral regime of the TeV component cannot be drawn yet. The extension of the sensitivity range of IACTs towards lower energies is therefore fundamental for reaching a larger detection rate and a more robust determination of the spectral index of the TeV component.
A full comprehension and exploitation of TeV data is expected to be reached thanks to the next generation of Cherenkov telescopes. The Cherenkov Telescope Array (CTA) will be a huge step forward for the detection of GRBs in the VHE band. The major upgrades with respect to the current generation of Cherenkov telescopes that will impact GRB observations are: (i) a lower energy threshold (≲ 30 GeV); (ii) a larger effective area at multi-GeV energies (∼ 10^4 times larger than Fermi-LAT at 30 GeV); and (iii) a rapid slewing capability (180° azimuthal rotation in 20 seconds). Moreover, its planned mixed-size array of large, medium and small size telescopes (called LSTs, MSTs and SSTs, respectively), situated at two sites in the northern and southern hemispheres, will provide full-sky coverage from a few tens of GeV up to hundreds of TeV. CTA will have a much better sensitivity and a broader energy range with respect to current ground-based facilities; a comparison is shown in Figure 30. At the present stage, the first prototype of the LSTs has been built and is operating in its commissioning phase at the northern site at the Roque de los Muchachos Observatory in La Palma. Despite these performance improvements, the expected CTA detection rates of GRBs will in any case be influenced by the relatively low duty cycle affecting IACTs and by the synergies with other instruments. Indeed, Cherenkov telescope repointing relies on external triggers coming from space satellites. Assuming that currently operating space telescopes will still be operative, GRB alerts will be mostly provided by Swift-BAT, partially by the Fermi-GBM, and in the future by the French-Chinese mission Space-based multi-band astronomical Variable Objects Monitor (SVOM [232]). BAT observes around 92 GRBs per year with a typical localization error of a few arcmin [233]. The good localisation (later refined by XRT to a few arcsec) is fundamental for Cherenkov telescopes, given their limited field of view (e.g., about 4° for the LSTs and 7° for the MSTs). The GBM provides a much higher number of alerts, around 250 per year, but with a larger localization error, from 1-3° up to 10°, which makes follow-up with IACTs very challenging. In the case of such large localization errors, CTA can exploit the so-called divergent mode for observations, which is currently under study [234]. In this pointing strategy, each telescope points to a position in the sky that is slightly offset from the others, so as to extend the total field of view. Concerning future instruments, SVOM is expected to provide Swift-like alerts at a rate of ∼ 60-80 GRBs/yr with a localization error < 1°, including ∼ 10 GRBs/yr with redshift z < 1.
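A back-of-the-envelope way to see why GBM-only localizations are challenging for IACT follow-up: assuming, for illustration only, a symmetric 2D Gaussian localization error (so the angular offset is Rayleigh-distributed), the chance that the true position falls within a ∼ 2° half-width field of view drops quickly as the error radius grows.

```python
import numpy as np

def p_contained(fov_half_width_deg, sigma_deg):
    # Rayleigh CDF: probability the angular offset is below the FoV half-width.
    return 1.0 - np.exp(-fov_half_width_deg**2 / (2.0 * sigma_deg**2))

for sigma in (0.05, 1.0, 3.0, 10.0):   # arcmin-level BAT up to worst-case GBM
    print(f"sigma = {sigma:5.2f} deg -> P(within 2 deg) = {p_contained(2.0, sigma):.2f}")
```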
Available estimates of the CTA detection rate of GRBs are reported in [235]. These studies were performed before the discovery of TeV emission, and are based on Swift-like alerts (triggered by Swift-BAT or SVOM) and Fermi-GBM alerts. The predicted detection rate is around a few GRBs per year, depending on the energy threshold of the observation and on the observation delay [235]. An updated study that considers the current knowledge of TeV emission in GRB afterglows is in progress [236].
Although for decades GRB hunting by Cherenkov telescopes was primarily focused on reaching low energy thresholds in order to explore the multi-GeV band, these first detections have shown that photons of TeV energies and above can be produced in GRBs and can be detected. This holds mostly for nearby events, with redshift below 0.1-0.2, for which the EBL attenuation is not too severe. The exploration of the GRB emission component above 1 TeV can be of potential interest for the SSTs and for the ASTRI Mini-Array. The ASTRI Mini-Array, currently under construction, will be an array of nine dual-mirror imaging atmospheric Cherenkov telescopes at the Teide Observatory site, expected to deliver the first scientific results in 2023. After the detection of GRB 190114C, the capabilities of the ASTRI Mini-Array in detecting and performing spectral studies of an event similar to the MAGIC GRB have been explored [237]. GRB 190114C has been taken as a template to simulate possible GRB emission from a few seconds to hours, extrapolated to 10 TeV on the basis of model predictions. The results show that the instrument will be able to detect afterglow TeV emission from an event like GRB 190114C up to ∼ 200 s (see the comparison between the GRB observed flux at 1 TeV and the differential ASTRI Mini-Array sensitivity in Figure 31). By moving the GRB to a smaller redshift (down to z = 0.078, the redshift of the TeV GRB 190829A), the time over which the GRB is detectable increases up to ∼ 10^5 s (although in this case the light curve should be re-scaled to the lower energetics of nearby events). Nearby GRBs are therefore potential targets of interest for the ASTRI Mini-Array. These are certainly rare events, but their detection will provide a wealth of information, with spectra that can be characterised up to several TeV [237].
In conclusion, after decades of huge efforts, current ground-based VHE facilities have started a new era in the comprehension and study of GRB physics. Their breakthrough detections allow unprecedented studies. As discussed in this review, many open questions in afterglow physics can largely benefit from the inclusion of TeV data, and the first detections are providing glimpses of this huge potential. Luckily, we are at the dawn of the VHE era thanks to the upcoming CTA observatory, which will assure major upgrades in sensitivity, energy range, temporal resolution and sky coverage. Future observations, if complemented by simultaneous observations in X-rays and at ∼ GeV energies, will play a paramount role in improving our knowledge of GRB physics during the afterglow phase, and hopefully also in the prompt phase. In particular, the one-zone SSC afterglow model will be tested to understand whether it can capture the main properties of the VHE emission, or whether a revision of our comprehension of the particle acceleration processes, shock micro-physics and radiation mechanisms is needed.
Author Contributions: All authors, D.M. and L.N., have contributed to writing. All authors have read and agreed to the published version of the manuscript.
"year": 2022,
"sha1": "751985fc2a4b795537085b48c796a47e3d5122ed",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4434/10/3/66/pdf?version=1651761475",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "2dbd481f975775d4f7808817a2db5b92b5efc1fc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Deletion and Insertion Tests in Regression Models
A basic task in explainable AI (XAI) is to identify the most important features behind a prediction made by a black box function $f$. The insertion and deletion tests of Petsiuk et al. (2018) can be used to judge the quality of algorithms that rank pixels from most to least important for a classification. Motivated by regression problems we establish a formula for their area under the curve (AUC) criteria in terms of certain main effects and interactions in an anchored decomposition of $f$. We find an expression for the expected value of the AUC under a random ordering of inputs to $f$ and propose an alternative area above a straight line for the regression setting. We use this criterion to compare feature importances computed by integrated gradients (IG) to those computed by Kernel SHAP (KS) as well as LIME, DeepLIFT, vanilla gradient and input$\times$gradient methods. KS has the best overall performance in two datasets we consider but it is very expensive to compute. We find that IG is nearly as good as KS while being much faster. Our comparison problems include some binary inputs that pose a challenge to IG because it must use values between the possible variable levels and so we consider ways to handle binary variables in IG. We show that sorting variables by their Shapley value does not necessarily give the optimal ordering for an insertion-deletion test. It will however do that for monotone functions of additive models, such as logistic regression.
Introduction
Explainable AI methods are used to help humans learn from patterns that a machine learning or artificial intelligence model has found, or to judge whether those patterns are scientifically reasonable or whether they treat subjects fairly. As Hooker et al. (2019) note, there is no ground truth for explanations. Mase et al. (2022) attribute this to the greater difficulty of identifying causes of effects compared to effects of causes (Dawid and Musio, 2021). Lacking a ground truth, researchers turn to axioms and sanity checks to motivate and vet explanatory methods. There are also some numerical measures that one can use to compute a quality measure for methods that rank variables from most to least important. These include the Area Over Perturbation Curve (AOPC) of Samek et al. (2016) and the Area Under the Curve (AUC) measure of Petsiuk et al. (2018) that we focus on. They have the potential to augment intuitive and philosophical distinctions among methods with precise numerical comparisons. In this paper we make a careful study of the properties of those measures and we illustrate their use on two datasets.
Insertion and deletion tests were used by Petsiuk et al. (2018) to compare variable importance methods for black box functions. In their specific case they had an image classifier that would, for example, conclude with high confidence that a given image contains a mountain bike. Then the question of interest was to identify which pixels are most important to that decision. They propose to delete pixels, replacing them by a plain default value (constant values such as black or the average of all pixels from many images), in order from most to least important for the decision that the image was of the given class. If they have ordered the pixels well, then the confidence level of the predicted class should drop quickly as more pixels are deleted. By that measure, their Randomized Input Sampling for Explanation (RISE) performed well compared to alternatives such as GradCAM (Selvaraju et al., 2017) and LIME (Ribeiro et al., 2016) when explaining outputs of a ResNet50 classifier (Zhang et al., 2018). For instance, Figure 2 of Petsiuk et al. (2018) has an example where occlusion of about 4% of pixels, as sorted by their RISE criterion, can overturn the classification of an image. They also considered an insertion test starting from a blurred image of the original one and inserting pixels from the real image in order from most to least important. An ordering where the confidence rises most quickly is then to be preferred. Petsiuk et al. (2018) scored their methods by an area under the curve (AUC) metric that we will describe in detail below. The idea to change features in order of importance and score how quickly predictions change is quite natural, and we expect many others have used it. We believe that our analysis of the methods in Petsiuk et al. (2018) will shed light on other similar proposals. Figure 1 shows an example where an image is correctly and confidently classified as an albatross by an algorithm described in Appendix A.1. The integrated gradients (IG) method of Sundararajan et al. (2017) that we define below can be used to rank the pixels by importance. In this instance the deletion AUC is 0.27, which can be interpreted as meaning that about 27% of the pixels have to be deleted before the algorithm completely forgets that the image is of an albatross. The model we used accepts square images of size 224 × 224 pixels with 3 color channels. As a preprocessing step, we cropped the leftmost image in Figure 1 to its central 224 × 224 pixels. The IG feature attributions from each pixel (summed over red, green and blue channels) of this square image are presented in the rightmost panel of Figure 1. A saliency map shows that pixels in the bird's face, especially the eye and beak, are rated as most important.
In this paper we study insertion and deletion metrics for uses that include regression problems in addition to classification. We consider a function f (x) such as an estimate of a response y given n predictors represented as components of x. The regression context is different from classification. The trajectory taken by f (·) as inputs are switched one by one from a point x to a baseline point x ′ can be much less monotone than in the image classification problems that motivated Petsiuk et al. (2018). We don't generally find that either f (x) or f (x ′ ) is near zero. There are also use cases where x and x ′ are both actual observations; it is not necessary for one of them to be an analogue of a completely gray image or otherwise null data value. Sometimes f (x) ≈ f (x ′ ) and yet it can still be interesting to understand what happens to f when some components of x are changed to corresponding values of x ′ .
Our main contributions are as follows. Despite these differences between classification and regression, we find that insertion and deletion metrics can be naturally extended to regression problems. We then develop expressions for the resulting AUC in terms of certain main effects and interactions in f building on the anchored decomposition from Kuo et al. (2010) and others. This anchored decomposition is a counterpart to the better known analysis of variance (ANOVA) decomposition. The anchored decomposition does not require a distribution on its inputs, much less the independence of those inputs that the ANOVA requires. In the regression context we prefer to change the AUC computation replacing the horizontal axis by a straight line connecting f (x) to f (x ′ ). We obtain an expression for the average AUC in a case where variables were inserted in a uniform random order over all possible permutations. In settings without interactions the area between the variable change curve (that we define below) and the straight line has expected value zero under those permutations, but interactions change this. We also show that the expected area between the insertion curve and an analogous deletion curve that we define below does have expected value zero, even in the presence of interactions of any order. Some other contributions described below show that in some widely used models the same ordering that optimizes an area criterion also optimizes a Shapley value.
We take a special interest in the integrated gradients (IG) method of Sundararajan et al. (2017) because it is very fast. The number of function or derivative evaluations that it requires grows only linearly in the number of input variables, for any fixed number of evaluation nodes in the Riemann sum it uses to approximate an integral. The cost of exact computation for kernel SHAP (KS) of Lundberg and Lee (2017) grows exponentially with the number of variables, although it can be approximated by sampling. We also include LIME of Ribeiro et al. (2016), DeepLIFT of Shrikumar et al. (2017), Vanilla Grad of Simonyan et al. (2013) and input times gradient method of Shrikumar et al. (2016). In the datasets we considered, KS is generally best overall. We note that the term 'Vanilla' is not used by Simonyan et al. (2013) to describe their methods but it has been used by others, such as Agarwal et al. (2022).
It is very common for machine learning functions to include binary inputs. For this reason we discuss how to extend IG to handle some dichotomous variables and then compare it to the other methods, especially KS. Simply extending the domain of f for such variables from {0, 1} to [0, 1] is easy to do and it avoids the exponential cost that some more principled choices have.
The remainder of this paper is organized as follows. Section 2 cites related works, introduces some notation and places our paper in the context of explainable AI (XAI), while also citing some works that express misgivings about XAI. Section 3 defines the AUC and gives an expression for it in terms of main effects and interactions derived from an anchored decomposition of the prediction function f. The expected AUC is obtained for a random ordering of variables. We also introduce an area between the curves (ABC) quantity using a linear interpolation baseline curve instead of the horizontal axis. We show that arranging input variables in decreasing order by Shapley value does not necessarily give the order that maximizes AUC, due to the presence of interactions. Models that, like logistic regression, are represented by an increasing function of an additive model do get their greatest AUC from the Shapley ordering despite the interactions introduced by the increasing function. When that function is differentiable, then IG finds the optimal order. Section 4 discusses how to extend IG to some dichotomous variables. We consider three schemes: simply treating binary variables in $\{0, 1\}$ as if they were continuous values in $[0, 1]$, multilinear interpolation of the function values at binary points, and using paths that jump from $x_j = 0$ to $x_j = 1$. The simple strategy of casting the binary inputs to $[0, 1]$, which many prediction functions can do, is preferable on grounds of speed. Section 5 presents some empirical work. We choose a regression problem about explaining the value of a house in Bangalore using some data from Kaggle (Section 5.1). This is a challenging prediction problem because the values are heavily skewed. It is especially challenging for IG because all but two of the input variables are binary and IG is defined for continuous variables. The model we use is a multilayer perceptron. We compare variable rankings from six methods. KS is overall best but IG is much faster and nearly as good. Section 5.2 looks at a problem from high energy physics using data from CERN. Section 6 includes a ROAR analysis that compares how well the KS and IG measures rank the importance of variables in a setting where the model is to be retrained without its most important variables. Section 7 has some final comments. Appendix A gives some details of the data and models we study. Theorem 1 is proved in Appendix B.
Related Work
The insertion and deletion measures we study are part of XAI. Methods from machine learning and artificial intelligence are being deployed in electronic commerce, finance and other industries. Some of those applications are mission critical (e.g., in medicine, security or autonomous driving). When models are selected based on their accuracy on holdout sets, the winning algorithms can be very complicated. There is then a strong need for human understanding of how the methods work in order to reason about their accuracy on future data. For discussions on the motivations and methods for XAI see recent surveys such as Liao and Varshney (2021), Saeed and Omlin (2021) and Bodria et al. (2021).
One of the most prominent XAI tasks is attribution, in which one quantifies and compares the importance of the model inputs to the resulting prediction. Since there is no ground truth for explanations, these attributions are usually compared based on theoretical justifications. From this viewpoint, SHAP (SHapley Additive exPlanations) (Lundberg and Lee, 2017) and IG (Sundararajan et al., 2017) are popular feature importance methods due to their grounding in cooperative game theory. Given some reasonable axioms, game theory can produce unique variable importance measures in terms of Shapley values and Aumann-Shapley values, respectively.
As a complementary method to theoretical a priori justification, one can also examine the outputs of attribution methods numerically, either applying sanity checks or computing quantitative quality measures. The insertion and deletion tests of Petsiuk et al. (2018) that we study are of this type. Quantitative metrics for XAI were recently surveyed in Nauta et al. (2022) who also discuss metrics based on whether the importance ratings align with those of human judges.
As an example of sanity checks, Adebayo et al. (2018) find that some algorithms to produce saliency maps are nearly unchanged when one scrambles the category labels on which the algorithm was trained, or when one replaces trained weights in a neural network by random weights. They also find that some saliency maps are very close to what edge detectors would produce and therefore do not make much use of the predicted or actual class of an image. Another example from Adebayo et al. (2020) checks whether saliency methods can detect spurious correlation in a setting where the backgrounds in training images are artificially correlated to output labels and the model indeed learns this spurious correlation.
One important method is the ROAR (RemOve And Retrain) approach of Hooker et al. (2019). It sequentially removes information from the least important columns in each observation in the training data and retrains the model with these partially removed training data iteratively. A good variable ranking will show rapidly decreasing performance of the classifier as the most important inputs are removed. The obvious downside of this method is that it requires a lot of expensive retraining that insertion and deletion methods avoid.
The insertion and deletion tests we study have been criticized by Gomez et al. (2022) who note that the synthesized images generated in these tests are unnatural and do not resemble the images on which the algorithms were trained. The same issue of unnatural inputs was raised by Mase et al. (2019). Gomez et al. (2022) also point out that insertion and deletion tests only compare the rankings of the inputs. Fel et al. (2021) noted that scores on insertion tests can be strongly influenced by the first few pixels inserted into a background image. They then note that images with only a few non-background pixels are quite different from the target image. The policy choice of what baseline to compare an image to affects whether the result can be manipulated. An adversarially selected image might differ from a target image in only a few pixels, and yet have a very different classification. At the same time both of these images differ greatly from a neutral background image. An insertion test comparing the target image to a blurred background might show that the classification depends on many pixels, while a deletion test with the adversarial image will show that only a few pixels need to change. The two tests will thus disagree on whether the classification was influenced by few or many pixels. Both are correct because they address different issues.
A crucial choice in using insertion and deletion tests is which reference input to use when deleting or inserting variables. Petsiuk et al. (2018) decided to insert real pixels into a blurred image instead of inserting them into a solid gray image because, when the most important pixels form an ellipsoidal shape, inserting them into a gray background might lead to a spurious classification (such as 'balloon'). On the other hand, deleting pixels via blurring might underestimate the salience of those pixels if the algorithm is good at inferring from blurred images. In our albatross example of Figure 1 we used an all black image with 224 × 224 × 3 zeros. We call such choices 'policies' and note that a good policy choice depends on the scientific goals of the study and the strengths and weaknesses of the algorithm under study. Haug et al. (2021) study different kinds of baseline images for image classification problems, noting that the choice of background affects how well a method performs. Among the backgrounds they mention are constants, blurred images, uniform or Gaussian noise, average images and baseline images maximally distant from the target image. They also mention the neutral backgrounds of Izzo et al. (2020) that lie on a decision boundary. Sundararajan and Najmi (2020) discuss taking every available data point as a baseline and averaging the resulting Shapley values. Sturmfels et al. (2020) note the importance of choosing baselines carefully, pointing out that zero would be a bad baseline for blood sugar and that if a solid color image is used as a baseline in image classification, then the result cannot attribute importance to pixels of that color. They discuss many of the baselines that Haug et al. (2021) do and they propose the 'farthest image' baseline.
There are also broader criticisms of XAI methods. Kumar et al. (2020) point out some difficulties in formulating a variable importance problem as a game suitable for use in Shapley value. Their view is that those explanations do not match what people expect from an explanation. They also identify a catch-22 where either allowing or disallowing a variable not used by a model to be important causes difficulties. Rudin (2019) says that one should not use a black box model to make high stakes decisions but should instead only use interpretable models. She also disputes that this would necessarily cause a loss in accuracy. We see a lot of value in that view, and we know that there are examples where interpretable models perform essentially as well as the best black boxes. However black boxes are very widely used. There is then value in XAI methods that can reveal and quantify any of their flaws. Furthermore, an explanation depends on not just the form of the model but also on the joint distribution of the predictor variables. For example an interpretable model that does not use the race of a subject might still have a discriminatory impact due to associations among the predictors and XAI methods can be used to evaluate such bias.
Prior uses of insertion and deletion tests have mostly been about image or text classification. There have been a few papers using them for tabular data or time series data, such as Cai et al. (2021), Hsieh et al. (2021), Parvatharaju et al. (2021), and Ismail et al. (2020). Ancona et al. (2017) also describe a strategy of changing variables one at a time and observing how the quality of a prediction changes in response independently of Petsiuk et al. (2018). Their Figure 3c compares the trajectories taken by several different methods on some image classification problems. They have insertion and deletion curves to compare an occlusion method to integrated gradients. Unlike Petsiuk et al. (2018), they do not report an AUC quantity.
The work of Samek et al. (2016) precedes Petsiuk et al. (2018) and uses the same global ablation strategy of deleting information in order from most to least important. Their deletions involve replacing a whole block of pixels (e.g., a 9 × 9 block) by uniformly distributed noise. One motivation for using such noise was to generate images outside the manifold of natural images. Their average over a perturbation curve is comparable to the average under the deletion curve of Petsiuk et al. (2018). Where Petsiuk et al. (2018) focus on curves for individual images, Samek et al. (2016) study the average of such curves over many images. In their numerical work they only make the first 100 such perturbations, affecting about 1/6 of the pixels.
For us the advantage of the approach in Petsiuk et al. (2018) is that their AUC is defined by running the variable substitution to completion, changing every feature x j to the baseline value x ′ j . Then we use two orders, one that seeks to increase f as fast as possible and one that seeks to decrease it as fast as possible, and study the area between those two curves.
The AUC for Regression
The AUC method for comparing variable rankings has not had much theoretical analysis yet. This section develops some of its properties. The development is quite technical in places. We begin with a non technical account emphasizing intuition. Readers may use the intuitive development as orientation to the technical parts or they may prefer to skip the technical parts.
If we order the inputs to a function f and then change them one at a time from the value in point x to that in a baseline value x ′ , the resulting function values trace out a curve over the interval [0, n]. If we have tried to order the variables starting with those that will most increase f and ending with those that least increase (most decrease) f then a better ordering is one that gives a larger area under the curve (AUC). We will find it useful to consider the signed area under this curve but above a straight line connecting the end points. This area between the curves (ABC) is the original AUC minus the area under the line segment.
A deletion measure orders the variables from those thought to be most decreasing of f to those that are least decreasing, i.e., the opposite order to an insertion test. For deletion we like to use the area ABC below the straight line but above the curve that the deletion process traces out. When we need to refer to insertion and deletion ABCs in the same expression we use ABC ′ for the deletion case.
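To make the bookkeeping concrete, here is a minimal sketch, in our own code rather than the authors', of how insertion and deletion curves and their ABC scores could be computed for a black-box f; the toy function, points and orderings below are made up for illustration.

```python
# A minimal sketch (ours, not the paper's code) of insertion/deletion curves
# and the ABC criterion; the function f and the points are hypothetical.
import numpy as np

def curve(f, x, x_ref, order):
    """f evaluated as components of x are replaced by those of x_ref, in `order`."""
    z = np.array(x, dtype=float)
    vals = [f(z)]
    for j in order:
        z[j] = x_ref[j]
        vals.append(f(z))
    return np.array(vals)                  # n + 1 values along the curve

def abc(vals):
    """Signed area between the curve and the line joining its endpoints."""
    line = np.linspace(vals[0], vals[-1], len(vals))
    return float(np.sum(vals - line))      # in the same units as f

f = lambda z: 2.0 * z[0] + z[1] - 0.5 * z[2] + z[0] * z[2]   # toy model
x, x_ref = np.zeros(3), np.ones(3)
ins = abc(curve(f, x, x_ref, [0, 1, 2]))    # insertion ABC for one ordering
dele = -abc(curve(f, x, x_ref, [2, 1, 0]))  # deletion ABC' flips the sign convention
print(ins, dele)
```

The sign flip in the deletion case reflects the convention above: for deletion we score the area above the curve but below the straight line.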
Additive functions are of special interest because additivity simplifies explanation and such models often come close to the full predictive power of a more complicated model. For an additive model, the incremental change in $f$ from replacing $x_j$ by $x'_j$, call it $\Delta_j$, does not depend on $x_k$ for $k \neq j$. In this case the AUC is maximized by ordering the variables so that $\Delta_1 \geq \Delta_2 \geq \cdots \geq \Delta_n$. The Shapley values are then $\phi_j = \Delta_j$ and so ordering by Shapley value maximizes both AUC and ABC. We also show that if one orders the predictors randomly, then the expected value of ABC under this randomization is zero for an additive function.
Prediction functions must also capture interactions among the input variables. We study those using some higher order differences of differences. This way of quantifying interactions comes from an anchored decomposition that we present. This anchored decomposition is a less well known alternative to the analysis of variance (ANOVA) decomposition. The interaction quantity for a set $u \subseteq \{1, 2, \ldots, n\}$ of variables is denoted $\Delta_u$. This interaction does not contribute to any points along the curve until the 'last' member of the set $u$, denoted $\lceil u\rceil$, has been changed. It is thus present only in the final $n + 1 - \lceil u\rceil$ points of the insertion curve. A large AUC comes not just from bringing the large main effects to the front of the list. It also helps to have all elements in a positive interaction $\Delta_u$ included early, and at least one element in a negative interaction $\Delta_u$ appear very late in the ordering.
When there are interactions present, it is no longer true that E(ABC) must be zero under random ordering. We show in Appendix B.1 that the area between the insertion and deletion curves ABC + ABC ′ satisfies E(ABC + ABC ′ ) = 0 under random ordering of inputs whether or not interactions are present.
Section 3.4 shows that if we sort variables in decreasing order of their Shapley values, then we do not necessarily get the ordering that maximizes the AUC. This is natural: the Shapley value $\phi_j$ of variable $j$ is defined as a weighted sum of $2^{n-1}$ incremental values for changing $x_j$, while the AUC uses only one of those incremental values for variable $j$. The anchored decomposition that we present below makes it simple to construct an example with $n = 3$ where the Shapley values are $\phi_1 > \phi_2 > \phi_3$ while the order $(1, 3, 2)$ has greater AUC than the order $(1, 2, 3)$. This is not to say that insertion AUCs are somehow in error for not being optimized by the Shapley ordering, nor that Shapley value is in error for not optimizing the AUC. The two measures have different definitions and interpretations. They can reasonably be considered proxies for each other, but the Shapley value weights a variable's interactions in a different way than the AUC does.
In Appendix B.2 we consider the logistic regression model $f(x) = \Pr(Y = 1 \mid x) = 1/(1 + \exp(-\beta_0 - x^\top\beta))$. Because of the curvature of the logistic transformation, this function has interactions of all orders. At the same time $\tilde f(x) = \log(f(x)/(1 - f(x))) = \beta_0 + x^\top\beta$ is additive, so on this scale the Shapley ordering does maximize AUC. The AUC on the original probability scale is perhaps the more interpretable choice. We show that due to the monotonicity of the logistic transformation, the Shapley ordering for $f(x)$ is the same as for $\tilde f(x)$ and so it also maximizes the AUC for $f(x) = \Pr(Y = 1 \mid x)$. Because $\exp(\cdot)$ is strictly monotone the Shapley ordering also optimizes AUC for loglinear models and for naive Bayes. It is also shown there that for a differentiable increasing function of an additive function, integrated gradients will compute the optimal order. The next subsections present the above findings in more detail. Some of the derivations are in an appendix. We use well known properties of the Shapley value. Those are discussed in many places. We can recommend the recent reference by Plischke et al. (2021) because it also discusses Harsanyi dividends and is motivated by variable importance.
ABC Notation
We study an algorithm $f : \mathcal{X} \to \mathbb{R}$ that makes a prediction based on input data $x \in \mathcal{X} = \prod_{j=1}^{n} \mathcal{X}_j$. The points $x \in \mathcal{X}$ are written $x = (x_1, x_2, \ldots, x_n)$. In most applications $\mathcal{X}_j \subseteq \mathbb{R}$. While some attribution methods require real-valued features, the AUC quantity we present does not require it. For classification, $f$ could be the estimated probability that a data point with features $x$ belongs to class $y$, or it could be that same probability prior to a softmax normalization. Our emphasis is on regression problems.
The set of variable indices is $1{:}n \equiv \{1, 2, \ldots, n\}$. For any $u \subseteq 1{:}n$ we write $x_u$ for the components $x_j$ that have $j \in u$. We write $-u$ for $1{:}n \setminus u$. We often need to merge indices from two or more points into one hybrid point. For this we write $x_u{:}x'_{-u}$ for the point whose $j$'th component is $x_j$ when $j \in u$ and $x'_j$ otherwise. That is, the parts of $x$ and $x'$ have been properly assembled in such a way that we can pass the hybrid to $f$, getting $f(x_u{:}x'_{-u})$. More generally, for disjoint $u, v, w$ with $u \cup v \cup w = 1{:}n$ the point $x_u{:}y_v{:}z_w$ has components $x_j$, $y_j$ and $z_j$ for $j$ in $u$, $v$ and $w$ respectively.
The cardinality of $u$ is denoted $|u|$. We also write $\lceil u\rceil = \max\{j \in 1{:}n \mid j \in u\}$ with $\lceil\emptyset\rceil = 0$ by convention. It is typographically convenient to shorten the singleton $\{j\}$ to just $j$ where it could only represent a set and not an integer, especially within subscripts.
Suppose that we have two points $x, x' \in \mathcal{X}$ and are given a method to attribute the difference $f(x') - f(x)$ to the variables $j \in 1{:}n$. This method produces attribution values $A_f(x, x')_j$. We can then sort the variables $j \in 1{:}n$ according to their attribution values $A_f(x, x')_j$. In an insertion test we insert the variables from $x'$ into $x$ in order from the ones thought to most increase $f(\cdot)$ (i.e., largest $A_f(x, x')_j$ first). If we have chosen a good order there will be a large area under the curve $(j, f(\tilde x^{(j)}))$ for $j = 0, 1, \ldots, n$ and consequently also a large (signed) area between that curve and a straight line connecting its endpoints. The left panel in Figure 2 illustrates ABC for insertion.
In a deletion measure we order the variables from the ones thought to have the most negative effect on $f$ to the ones thought to have the most positive effect. Those variables are changed from $x_j$ to $x'_j$ in that order and a good ordering creates a curve with a small area under it. We keep score by using the signed area above that curve but below the straight line connecting $f(x)$ to $f(x')$. Note that we are still inserting variables from $x'$ into $x$ but, in an analogy to what happens in images, we are deleting the information that we think would make $f$ large, which in that setting made the algorithm confident about what was in the image. Let $\tilde x^{(j)}$ be the point we get after placing the $j$ elements of $x'$ thought to most decrease $f$ into $x$. Our ABC criterion for deletion is the signed area above the curve $(j, f(\tilde x^{(j)}))$ but below the straight line connecting the endpoints. The right panel in Figure 2 illustrates ABC for deletion.
We also considered taking insertion to mean replacing components $x_j$ by $x'_j$ in increasing order of predicted change to $f$ when $f(x') > f(x)$ and taking deletion to mean replacement starting with the most negative changes when $f(x') < f(x)$. This convention may seem like a natural extension of the uses in image classification, but it has two difficulties for regression. First, it is not well defined when $f(x) = f(x')$. Second, while this exact equality might seldom hold, that definition makes cases with $f(x')$ slightly above $f(x)$ behave very differently from cases with $f(x')$ slightly below $f(x)$. When $x$ and $x'$ are two randomly chosen data points there is a natural symmetry between insertion and deletion. In many settings however, one of the points $x$ or $x'$ is not an actual observation but is instead a reference value such as the gray images discussed above. As mentioned above, choices of $x$ and $x'$ to pair with each other are called policies. Section 5.2 has some example policies on our illustrative data. We include a counterfactual policy with motivation similar to counterfactual XAI. The relation between counterfactual XAI and the choice of background data is also discussed in detail in Albini et al. (2021). A formal description of the curve is as follows. For a permutation $(\pi(1), \pi(2), \ldots, \pi(n))$ of $(1, 2, \ldots, n)$, and $1 \leq j \leq n$, define $\Pi(j) = \{\pi(1), \ldots, \pi(j)\}$ with $\Pi(0) = \emptyset$, and let $\tilde x^{(j)} = x'_{\Pi(j)}{:}x_{-\Pi(j)}$ be the hybrid point after the first $j$ substitutions. For our theoretical study it is convenient to define $\mathrm{AUC} = \sum_{j=0}^{n} f(\tilde x^{(j)})$. (1) If we connect the points $(j, f(\tilde x^{(j)}))$ by line segments then the area we get is a sum of trapezoidal areas, $\sum_{j=1}^{n} \frac{1}{2}\bigl(f(\tilde x^{(j-1)}) + f(\tilde x^{(j)})\bigr)$. The difference between this trapezoidal area and (1) is unaffected by the ordering permutation $\pi$ because $\tilde x^{(0)} = x$ and $\tilde x^{(n)} = x'$ are invariant to the permutation. One could similarly omit either $j = 0$ or $j = n$ (or both) from the sum in (1) without changing the difference between areas attributed to any two permutations. Our primary measure is the area below the curve but above a straight line from $(0, f(\tilde x^{(0)}))$ to $(n, f(\tilde x^{(n)}))$. It is the (signed) area between those curves, that is $\mathrm{ABC} = \mathrm{AUC} - \frac{n+1}{2}\bigl(f(x) + f(x')\bigr)$, (2) where $\frac{n+1}{2}(f(x) + f(x'))$ is a measure of the area under the straight line connecting $(0, f(x))$ to $(n, f(x'))$ compatible with our AUC formula from (1). The difference between ABC and AUC is also unaffected by the ordering of variables. The AUC and ABC going from $x$ to $x'$ are the same as those from $x'$ to $x$. That is, they only depend on the two selected points. The reason for this is that $\tilde x^{(k)}$ going from $x'$ to $x$ equals $\tilde x^{(n-k)}$ when we go from $x$ to $x'$. For the same reason the deletion areas are the same in both directions, but generally not equal to their insertion counterparts.
Both AUC and ABC have the same units that f has. Then for instance if f is measured in dollars then ABC/n is in dollars explained per feature. This normalization is different from that of Petsiuk et al. (2018) whose curve is over the interval [0, 1] instead of [0, n] and whose AUC is then interpreted in terms of a proportion of pixels.
Additive Functions
The AUC measurements above are straightforward to interpret when $f(x)$ takes the additive form $f_\emptyset + \sum_{j=1}^{n} f_j(x_j)$ for a constant $f_\emptyset$ and functions $f_j : \mathcal{X}_j \to \mathbb{R}$. We then easily find that $\mathrm{AUC} = (n+1)f(x) + \sum_{j=1}^{n} \bigl(n - \pi^{-1}(j) + 1\bigr)\bigl(f_j(x'_j) - f_j(x_j)\bigr)$, where $\pi^{-1}(j)$ is the position of variable $j$ in the insertion order. The best ordering is, unsurprisingly, the one that sorts $j$ in decreasing values of $f_j(x'_j) - f_j(x_j)$. If $f$ is additive then the insertion and deletion ABCs are the same. Also, the Shapley value for variable $j$ is proportional to $f_j(x'_j) - f_j(x_j)$ and so ordering variables by decreasing Shapley value maximizes the AUC and ABC.
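As a quick numerical illustration of these two facts, the following sketch (ours, with a made-up additive model) enumerates all $3!$ insertion orders: the maximizer sorts the per-variable changes in decreasing order, and the ABC values average to zero.

```python
# Sketch (ours): for a hypothetical additive model, enumerate all insertion
# orders; the best sorts f_j(x'_j) - f_j(x_j) decreasingly, and the mean ABC is 0.
import itertools
import numpy as np

def abc_for_order(f, x, x_ref, order):
    z = np.array(x, dtype=float)
    vals = [f(z)]
    for j in order:
        z[j] = x_ref[j]
        vals.append(f(z))
    vals = np.array(vals)
    line = np.linspace(vals[0], vals[-1], len(vals))
    return float(np.sum(vals - line))

f_add = lambda z: 3.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]
x, x_ref = np.zeros(3), np.ones(3)
areas = {p: abc_for_order(f_add, x, x_ref, p)
         for p in itertools.permutations(range(3))}
print(max(areas, key=areas.get))       # (0, 2, 1): changes 3.0 > 0.5 > -1.0
print(np.mean(list(areas.values())))   # 0.0, matching E(ABC) = 0 for additive f
```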
Interactions
The effect of interactions is more complicated, but we only need interactions involving the points $x$ and $x'$. We define the differences and iterated differences of differences that we need via $\Delta_j f = f(x'_j{:}x_{-j}) - f(x)$ and, for general $u \subseteq 1{:}n$, $\Delta_u f = \sum_{v \subseteq u} (-1)^{|u| - |v|} f(x'_v{:}x_{-v})$. We can interpret this as follows: the interaction for variables $u$, when represented as differences of differences, takes effect in its entirety once the last element $j$ of $u$ has been changed from $x_j$ to $x'_j$. It then contributes to $n - \lceil u\rceil + 1$ of the summands. Thus, in addition to ordering the main effects from largest to smallest, the quality score for a permutation takes account of where large positive and large negative interactions are placed.
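A small sketch (ours) of these anchored differences, computing $\Delta_u f$ by inclusion-exclusion over the subsets of $u$; the function is hypothetical.

```python
# Sketch (ours): the iterated difference Delta_u f via inclusion-exclusion,
# anchored at x, for a hypothetical f with an interaction between 0 and 2.
from itertools import chain, combinations

def delta_u(f, x, x_ref, u):
    total = 0.0
    for v in chain.from_iterable(combinations(u, r) for r in range(len(u) + 1)):
        z = list(x)
        for j in v:
            z[j] = x_ref[j]          # replace the coordinates in v by x_ref
        total += (-1) ** (len(u) - len(v)) * f(z)
    return total

f = lambda z: z[0] * z[2] + 2.0 * z[0]
print(delta_u(f, [0, 0, 0], [1, 1, 1], (0,)))     # 2.0: main effect of variable 0
print(delta_u(f, [0, 0, 0], [1, 1, 1], (0, 2)))   # 1.0: the {0, 2} interaction
```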
It is easy to see from the formulas above that for an additive function and a uniformly random permutation $\pi$ we have $E(\mathrm{ABC}) = 0$ because under such random sampling the expected rank of variable $j$ is $(n+1)/2$. Now suppose that we permute the variables 1 through $n$ into a random permutation $\pi$. A fixed subset $u = (j_1, \ldots, j_{|u|})$ is then mapped to $\pi(u) = (\pi(j_1), \ldots, \pi(j_{|u|}))$. Under this randomization $\lceil \pi(u)\rceil$ becomes a random variable whose expectation, for nonempty $u$, is $E(\lceil \pi(u)\rceil) = |u|(n+1)/(|u|+1)$.
Next we work out the expected value of the AUC. Using the decomposition in Appendix B.1 we have $\mathrm{AUC} = \sum_{u \subseteq 1{:}n} (n - \lceil u\rceil + 1)\Delta_u f$. Then with the expected value of $\lceil \pi(u)\rceil$ given above we find that $E(\mathrm{AUC}) = (n+1)\sum_{u \subseteq 1{:}n} \Delta_u f / (|u| + 1)$. As noted above, $E(\mathrm{ABC}) = 0$ if $f$ has no interactions because $E(\lceil\{j\}\rceil) = (n+1)/2$, but otherwise it need not be zero because $E(\lceil u\rceil) > (n+1)/2$ for $|u| > 1$. The contribution to $E(\mathrm{ABC})$ from a given interaction has the opposite sign of that interaction because $\frac{1}{|u|+1} - \frac{1}{2} < 0$ for $|u| \geq 2$. We show in Appendix B.1 that $E(\mathrm{ABC} + \mathrm{ABC}') = 0$ under random permutation of the indices.
AUC Versus Shapley
If we order variables in decreasing order by Shapley value, that does not necessarily maximize the AUC. We can see this in a simple setup for $n = 3$ by constructing certain values of $\Delta_u$. We will exploit the delay $\lceil u\rceil$ with which an interaction gets 'credited' to an AUC to find our example.
It is easy to show that there can be no counterexamples for n = 2.
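For readers who want a concrete instance, the following sketch (our own made-up function, not the example constructed in the paper) exhibits the phenomenon for n = 3: a negative interaction between the two largest main effects makes the Shapley ordering suboptimal for the insertion AUC.

```python
# Sketch (ours): a made-up n = 3 function where sorting by Shapley value does
# not maximize the insertion AUC, because of a negative {0, 1} interaction.
import itertools
from math import factorial

def f(z):
    return 3.0 * z[0] + 2.9 * z[1] + 1.5 * z[2] - 4.0 * z[0] * z[1]

x, x_ref = [0, 0, 0], [1, 1, 1]

def auc(order):                      # AUC = sum of the n + 1 curve values
    z = list(x)
    vals = [f(z)]
    for j in order:
        z[j] = x_ref[j]
        vals.append(f(z))
    return sum(vals)

phi = [0.0, 0.0, 0.0]                # exact Shapley values by enumerating orders
for perm in itertools.permutations(range(3)):
    z = list(x)
    for j in perm:
        before = f(z)
        z[j] = x_ref[j]
        phi[j] += (f(z) - before) / factorial(3)

print(phi)                                              # [1.0, 0.9, 1.5]
print(max(itertools.permutations(range(3)), key=auc))   # (0, 2, 1), not (2, 0, 1)
```

Here the negative interaction is deferred to the end under either ordering; the difference is that the Shapley value charges half of that interaction to variable 0, hiding its large main effect, while the AUC-maximizing order brings variable 0 to the front.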
Incorporating Binary Features into Integrated Gradients
IG avoids the exponential computational costs that arise for Shapley value. However, as defined it is only available for variables with continuous values. Many problems have binary variables and so we describe here some approaches to including them.
IG is based on the Aumann-Shapley value from Aumann and Shapley (1974) who present an axiomatic derivation for it. We omit those axioms. See Sundararajan et al. (2017) and Lundberg and Lee (2017) for the axioms in a variable importance context.
We orient our discussion around Figure 3 that shows a setting with 3 variables. Panel (a) shows a target data point that differs from a baseline point in three coordinates. Panel (b) shows the diagonal path taken by integrated gradients. The gradient of $f$ is integrated along that path to get the IG attributions $\mathrm{IG}_j(x, x') = (x_j - x'_j)\int_0^1 \frac{\partial f}{\partial x_j}\bigl(x' + t(x - x')\bigr)\,\mathrm{d}t$. If one of the variables is binary then one approach, shown in panel (c), is to simply jump from one value to another at some intermediate point, such as the midpoint. For differentiable $f$ the integral of the gradient along the line segment given by the jump would, by the fundamental theorem of calculus, be the difference between $f$ at the ends of that interval (times the Euclidean basis vector $(0, \ldots, 0, 1, 0, \ldots, 0)$ corresponding to that variable). Such a difference is computable for binary variables even though the points on the path are ill defined. Finally, the vector of Shapley values is an average over $n!$ paths making jumps from baseline to target in all possible variable orders (Sundararajan et al., 2017). Panel (d) shows two of those paths. For differentiable $f$ one could integrate gradients along those paths as a way to compute the jumps, again by the fundamental theorem of calculus, and then average the path integrals. Now suppose that we have $m > 0$ binary variables in a set $v \subset 1{:}n$. Without loss of generality suppose that $v = 1{:}m$. Then any data $x$ and $x'$ are in $\{0, 1\}^m \times \mathbb{R}^{n-m}$ and for IG we need to consider arguments to $f$ in $\mathbb{R}^n$. We consider three choices that we describe in more detail below: (a) use the fitted $f$ as if the binary $x_j \in [0, 1]$, for $j \in 1{:}m$; (b) replace $f$ by a multilinear interpolation $g$ and compute the integrated gradients of $g$; (c) take paths that jump for binary $x_j$ as shown in Figure 3(c). We call these choices casting, interpolating and jumping, respectively.
Option (a) is commonly available as many machine learning models cast binary variables to real values when fitting. Option (b) interpolates: if $x_{1:m} \in \{0, 1\}^m$ then $g(x) = f(x)$. Sundararajan et al. (2017) show that integrated gradients match Shapley values for functions that are a sum of a multilinear interpolation like $g$ above plus a differentiable additive function. Unfortunately, the cost of evaluating $g(x)$ is $\Omega(2^m)$, which is exponential in the number of binary inputs. For option (c) we have to choose where to make the $m$ jumps. For $m = 1$ we would naturally jump halfway along the path, though there is not an axiomatic reason for that choice. For $m > 1$ we have to choose $m$ points on the curve at which to jump. Even if we decide that all of those jumps should be at the midpoint, we are left with $m!$ possible orders in which to make those jumps. By symmetry we might want to average over those orders but that produces a cost which is exponential in $m$.
Based on the above considerations we think that the best way to apply IG to binary variables is also the simplest. We cast the corresponding booleans to floats.
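A minimal sketch of this casting approach follows (ours, not the paper's implementation; the paper's experiments used a 500-point Riemann sum, and the `model` here is a hypothetical differentiable PyTorch module mapping a 1-D feature vector to a scalar prediction).

```python
# Sketch (ours) of integrated gradients with binary features cast to floats.
# Assumes `model` is a differentiable PyTorch module returning a scalar.
import torch

def integrated_gradients(model, x, x_ref, steps=500):
    x, x_ref = x.float(), x_ref.float()        # casting: booleans become floats
    total = torch.zeros_like(x)
    for t in torch.linspace(0.5 / steps, 1.0 - 0.5 / steps, steps):  # midpoints
        z = (x_ref + t * (x - x_ref)).detach().requires_grad_(True)
        model(z).backward()                    # gradient of the scalar prediction
        total += z.grad
    return (x - x_ref) * total / steps         # one attribution per feature
```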
Experimental Results
In this section we illustrate insertion and deletion tests for regression. We compare our variable importance measures on two tabular datasets. The first one is about predicting the value of houses in India. It has mostly binary predictors and two continuous ones. The second dataset is from CERN and it computes the invariant mass produced from some electron collisions.
Bangalore Housing Data
The dataset we use lists the value in Indian rupees (INR) for houses in India. The data are from Kaggle at this URL: www.kaggle.com/ruchi798/housing-prices-in-metropolitan-areas-of-india. We use 38 of the 39 other columns in this dataset (excluding the "Location" column that contains various place names for simplicity), and we treat the "Area" and "No. of Bedrooms" columns as continuous variables.
We use only the data from Bangalore (6,207 records). Most of those data points were missing almost every predictor. We use only 1,591 complete data points. We normalized the output value by dividing by 10,000,000 INR. We centered the continuous variables at their means and then divided them by their standard deviations. We selected 80% of the data points at random to train a multilayer perceptron (MLP). The hyperparameters such as number of layers and ratio of dropouts are determined from a search described in Appendix A.2.
For the 20% of points that were held out (391 observations) we computed the ABC from (2) using our collection of variable importance methods. We also included a random variable ordering as a check. For each of those points x we made a careful selection of a reference point x′ from the holdout points as follows:
• the point x′ had to differ from x in at least 12 features,
• it had to be among the smallest 20 such values of ∥x − ·∥, and
• among those 20 it had to have the greatest response difference |f(x) − f(x′)|.
Having numerous different features makes the attribution problem more challenging. Having ∥x − x′∥ small brings less exposure to the problems of unrealistic data hybrids. Finally, having large |f(x) − f(x′)|, the absolute value of the sum of feature attributions for XAI algorithms with the completeness axiom, identifies data pairs in most need of an attribution. Despite having close feature vectors those pairs have quite different predicted responses.
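A sketch (ours) of this three-rule selection policy; `X_hold`, `f` and the thresholds are the hypothetical names used here, not code from the paper.

```python
# Sketch (ours) of the reference-point policy described above.
import numpy as np

def pick_reference(f, x, X_hold, min_diff=12, k=20):
    n_diff = (X_hold != x).sum(axis=1)
    cand = X_hold[n_diff >= min_diff]               # differ in at least 12 features
    nearest = np.argsort(np.linalg.norm(cand - x, axis=1))[:k]
    cand = cand[nearest]                            # among the 20 closest such points
    gaps = np.abs(np.array([f(c) for c in cand]) - f(x))
    return cand[np.argmax(gaps)]                    # largest |f(x) - f(x')|
```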
Our implementation of KS used 120,000 samples. Our implementation of IG used a Riemann sum on 500 points to approximate the line integrals. The hyperparameters for other XAI methods are summarized in Appendix A.3.
The ABCs and their differences are summarized in Table 1. There we see numerically that KS was best for the insertion ABC and IG was second best. For the deletion measure it was essentially a three-way tie for best among KS, IG and DeepLIFT.
The simple gradient based methods were disappointing. In particular, vanilla grad was worse than random. We note that vanilla grad uses a default variable scaling. We used the standard deviation of each input while another choice is to scale each variable to the interval [0, 1]. Neither of these choices use the specific baseline-target pair and this could cause poor performance.
The difference between KS and IG was not very large. Thus even in this setting where there are lots of binary predictors, IG was able to closely mimic KS. We see in Table 1 that insertion ABCs are on average higher than deletion ABCs for this policy.
While KS, IG, LIME and DeepLIFT make use of reference values in the computation of feature attributions, vanilla grad and Input×Gradient do not require one to specify reference values. They are determined only by local information around the target data. Since our ABC criterion is defined in terms of reference values it is not surprising that methods which use those reference values get larger ABC values. It is interesting that a method like Input×Gradient that does not even know the baseline we compare to can do as well as it does here.
We note that DeepLIFT does reasonably well compared to KS and IG, even though DeepLIFT is derived without any axiomatic properties such as the Aumann-Shapley axioms. Ancona et al. (2017) pointed out a connection wherein DeepLIFT can be interpreted as approximating the Riemann sum in IG by a single step at an average value, in spite of the difference in their computational procedures. Their implementation details are also summarized in Appendix A.3.
KS performed well and IG is a fast approximation to it. Therefore we compare the ABC of insertion and deletion tests for KS and IG in Figure 4. In both cases the left panel shows that the ABC for KS has a long tail. This is also true for IG, but to save space we omit that histogram. Instead we show in the middle panels that the ABC for IG is almost the same as that for KS point by point, with KS usually attaining a somewhat better (larger) ABC than IG. The right panels there show that ABCs for random orderings have nearly symmetric distributions with insertion tests having a few more outliers than deletion tests do. Figure 5 shows some insertion and deletion curves comparing a randomly chosen data point to a counterfactual reference point. Figure 6 shows analogous plots for the data with the greatest differences in ABC between KS and IG. It shows that a very large ABC difference between methods in the insertion test need not have a large difference in the deletion tests and vice versa.
CERN Electron Collision Data
The CERN Electron Collision Data (McCauley, 2014) is a dataset about dielectron collision events at CERN. It includes continuous variables representing the momenta and energy of the electrons, as well as discrete variables for the charges of the electrons (±1: positrons or electrons). Only events whose dielectron invariant mass was in the range from 2 to 110 GeV were collected. We treat it as a regression problem to predict the invariant mass from the other 16 features.
The data contain the physical observables of two electrons after collisions whose tracks are reconstructed from the information captured in detectors around the beam. The features are as follows: the total energy of the two electrons, the three directional momenta, the transverse momentum, the pseudorapidity, the phi angle and the charge of each electron. They are highly dependent features because some of them are calculated from the others. For instance, since the beam line is aligned on the z-axis as usual in particle physics, the transverse momenta $p_{t_i}$ for $i = 1, 2$ are composed of $p_{x_i}$ and $p_{y_i}$ such that $p_{t_i}^2 = p_{x_i}^2 + p_{y_i}^2$. The phi angle $\phi_i$ is the azimuthal angle in the $x$–$y$ plane, with $p_{x_i} = p_{t_i}\cos\phi_i$. The total energy is also calculated relativistically. Since the momenta are recorded in GeV units, which overwhelm the rest mass of the electron (∼511 keV in natural units), the total energies $E_i$ satisfy $E_i^2 \simeq p_{x_i}^2 + p_{y_i}^2 + p_{z_i}^2$. The pseudorapidities $\eta_i$ are given as angles from the beam line. The definition is $\eta = -\frac{1}{2}\ln\frac{|p| - p_z}{|p| + p_z} \simeq -\frac{1}{2}\ln\frac{E - p_z}{E + p_z}$ for each $\eta_i$. Regarding these definitions, only 8 of the 16 features (the three directional momenta and the charge of each electron) are independent, and the other features are used as convenient transformed coordinates in particle physics. Actually, the invariant mass $M$, the prediction target, is also approximated arithmetically from the momenta as $M^2 \simeq (E_1 + E_2)^2 - |\mathbf{p}_1 + \mathbf{p}_2|^2$, within 0.6% residual error on average for this dataset. From this viewpoint, users of machine learning might use XAI to confirm that the models properly exclude the charges as evidence for the predictions. This aspect is, in a sense, a case where ground truth in XAI can be obtained from domain knowledge. We have made such investigations but omit them to save space and because they are not directly related to insertion and deletion testing.
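The following sketch (ours, with made-up momenta in GeV) recomputes those derived features from the directional momenta, which can serve as a sanity check when working with this dataset.

```python
# Sketch (ours): recomputing the derived CERN features from hypothetical momenta.
import numpy as np

px = np.array([20.0, -5.0]); py = np.array([3.0, 12.0]); pz = np.array([40.0, -8.0])

pt  = np.hypot(px, py)                      # transverse momentum p_t
p   = np.sqrt(px**2 + py**2 + pz**2)
E   = p                                     # electron rest mass neglected at GeV scale
eta = -0.5 * np.log((p - pz) / (p + pz))    # pseudorapidity
phi = np.arctan2(py, px)                    # azimuthal angle: p_x = p_t cos(phi)
M   = np.sqrt(E.sum() ** 2 - (px.sum() ** 2 + py.sum() ** 2 + pz.sum() ** 2))
print(pt, eta, phi, M)                      # M: invariant mass of the electron pair
```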
We omit data with missing values, and randomly select 80% of the complete observations (79,931 data points) to construct an MLP. Embedding layers are not placed in this MLP, and all variables are Z-score normalized. Hyperparameters such as the number of layers are described in Appendix A.4. The predictions for 2,000 held out data points x were inspected with both KS and IG. The reference data x′ used in the XAI methods are collected from these 2,000 data points under this policy:
• x′_j ≠ x_j for all j ∈ 1:n, including the charges,
• x′ is among the 20 smallest such ∥x − ·∥ values, and
• it maximizes |f(x′) − f(x)| subject to the above.
This policy is called the counterfactual policy below. It has similar motivations to the policy we used for the Bangalore housing data. In this case it was possible to compute KS exactly using $2^{16} = 65{,}536$ function evaluations.
The results are given in Table 2. KS is best for both insertion and deletion measures. IG and DeepLIFT are close behind. LIME is nearly as good and the simple gradient methods once again do poorly. As we did for the Bangalore housing data, we make graphical comparisons between KS and IG.
The results for the insertion test are shown in Figure 7. The deletion test results were very similar and are omitted. These results are similar to what we saw in the previous experiment. As a meaningful XAI metric, KS provides a larger ABC than the other orderings we tried. Also, even in this case where differentiability with respect to charges cannot be assumed, IG does nearly as well as KS.
Although most of the data are close to the 45 degree line in the center panel of Figure 7, there are a few cases where KS gets a much larger ABC than IG does. Two such points are shown in Figure 8. Similarly to what we saw in the Bangalore housing example, the comparisons where KS and IG differ greatly in the insertion test have them similar in the deletion test and vice versa. A scatterplot of deletion versus insertion areas is given in Figure 9. Here and in the following we again only pick KS and IG as representative examples. Most of the data are near the diagonal in that plot but there are some exceptional outlying points where the two ABCs are quite different from each other. Next we consider two more policies, different from our counterfactual policy. In a 'one-to-one policy', observations are paired up completely at random. They almost always get different values of the continuous parameters and they get, on average, one different value among the two binary values. We also consider an 'average policy' where the reference point has the average value for all features. Such points are necessarily unphysical; for instance, they have near zero charge.
The results of these two policies are shown in Table 3. KS attains the best ABC value in all four comparisons there and IG is always close, but not always second, even though it treats the particle charges as continuous quantities.

Figure 7: Results of the insertion test in CERN electron collision data. Results for the deletion test looked almost the same.

One striking feature of that table is that the insertion ABCs are much larger than the deletion ABCs. There is a simple explanation. The invariant masses must be positive and their distribution is positively skewed. The model never predicted a negative value and the predictions also have positive skewness. Because the predicted values satisfy a sharp lower bound, that reduces the maximum possible deletion ABC. Because they are positively skewed, higher insertion ABCs are possible. We see that LIME does very well on the one-to-one policy examples, as it did on the counterfactual policy, but it does not do well on the (unphysical) average policy comparisons. Figure 10 shows results for the one-to-one policy. Unlike Figure 9 for the counterfactual policy, the points for f(x) > f(x′) are exactly the same as those that had f(x′) > f(x).
One data pair attaining an extreme difference in Figure 10 is inspected in Figure 11. This data gains over 150 GeV in the insertion ABC, which is anomalously large as this dataset is composed of events with invariant mass between 2 and 110 GeV. The figure shows that this large output is due to an extraordinary response to artificial data which appear on the path connecting x ′ and x. Both XAI methods (IG and KS) correctly identify the features that bring on this effect for the insertion test. Since the one-to-one policy can make long distance pairs compared to the counterfactual policy, synthetic data in the insertion and deletion processes can be far from the data manifold.
The result for the average policy, where the reference data is common to all test data, is shown in Figure 12. This is a setting where the reference data are very far out of distribution, just like a single color image is for an image classification task. In this case the deletion test, replacing real values in order by average ones, attains much smaller ABC values than the insertion test that starts from the average values. From the results in Figures 9, 10 and 12 it is easy to observe that the joint behavior of the insertion and deletion tests is totally different depending on the reference data policy, even for the same XAI algorithm. One cause is asymmetry in the response: if f(x) is sharply bounded below but not above then insertion ABCs can be much larger than deletion ones.

Table 3: Mean insertion and deletion ABCs for 2,000 of the CERN Electron Collision Data points whose reference data are determined by the one-to-one policy and the average policy respectively. The figures are rounded to three places.

Figure 8: Deletion test outliers in the CERN electron collision data. The top row shows a comparison where KS had much greater ABC for insertion than IG had. The bottom row shows a comparison where KS had much greater ABC for deletion. In both cases KS was comparable to IG for the other ABC type.

The ABCs including the other XAI methods are aggregated in Table 3. The relationships among their magnitudes are almost the same as those in the counterfactual policy of Table 2. It is also confirmed that insertion tests generally have larger ABC values than deletion tests with the same setup, and that their differences are larger than those of the counterfactual policy. This point supports our previous discussion of their asymmetry. The computation of Input×Gradient does not take the reference data into account, and this might explain why Input×Gradient scores comparatively well against the other methods in the average policy.
The average number of differing columns between reference data and target data, and the correlation coefficients of the ABCs in the two kinds of tests, are summarized in Table 4. All sixteen columns have different values in the counterfactual policy by definition. The deviation from sixteen in the one-to-one policy comes mostly from the two charges, which can take only two levels, +1 or −1. In this sense, the reference data in the average policy are unphysical since they have charges that are near zero. The correlation coefficients between the two tests also vary between policies. We note also that the behavior of the two ABCs in the average policy strongly depends on whether f(x) > f(x′) or f(x) < f(x′), as seen in the scatter plots of Figure 12.

Table 4: For three policies on (x, x′) in the CERN data: the average number of j with x_j ≠ x′_j and the correlation between insertion and deletion ABCs, for both KS and IG.
Figure 10: For the CERN data with the one-to-one policy, these figures plot deletion versus insertion ABCs. The left plot is for KS and the right is for IG. Each dot corresponds to the two data points that make up a pair.
RemOve and Retrain Methods
In this section we compare KS and IG via ROAR (RemOve And Retrain) (Hooker et al., 2019). ROAR is significantly more expensive to study than the other methods we consider as it requires retraining the models, and so we did not apply it to all of the methods. We opted to apply it just to Kernel SHAP and IG. We chose IG as our representative fast method because IG can be used on more general models than DeepLIFT can. We chose Kernel SHAP as the other method because it had the best or near best ABC values in our numerical examples. The task in ROAR is about which variables are important to the model's accuracy and not about which variables are important to any specific prediction. As a result the values in ROAR are not comparable to the other AUC and ABC values that we have computed. The original proposal of ROAR measures the drop in accuracy for image classification tasks, and applying it to regression tasks with tabular data raises the same issues as extending insertion/deletion tests to regression. We measure the test loss on held out data as a measure of retrained model performance. Retraining is performed with an increasing number of removed features, at each quantile in {0.1, 0.3, 0.5, 0.7, 0.9, 1.0}, where 1.0 means that all features are removed. The model architecture and hyperparameters of the retrained models are the same as for the original models described in Sections A.2 and A.4.
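A sketch (ours) of the ROAR loop for this tabular setting follows; `train` and `huber` are hypothetical stand-ins for the retraining and loss routines, and replacing removed features by their column means is one possible noninformative choice, not necessarily the one used in the experiments.

```python
# Sketch (ours) of ROAR for tabular regression. `train` and `huber` are
# hypothetical; mean-replacement is one possible noninformative fill.
import numpy as np

def roar_curve(X_tr, y_tr, X_te, y_te, ranking,
               quantiles=(0.1, 0.3, 0.5, 0.7, 0.9, 1.0)):
    losses = []
    col_mean = X_tr.mean(axis=0)
    for q in quantiles:
        k = int(round(q * X_tr.shape[1]))
        drop = np.asarray(ranking[:k])          # most important features first
        Xtr, Xte = X_tr.copy(), X_te.copy()
        Xtr[:, drop] = col_mean[drop]           # remove information, then retrain
        Xte[:, drop] = col_mean[drop]
        model = train(Xtr, y_tr)                # retrain from scratch each time
        losses.append(huber(model(Xte), y_te))  # held-out loss, as in Figure 13
    return losses
```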
To use ROAR we must decide how to remove features in the data. Removing in the original ROAR algorithm means padding pixels of the original images with noninformative values. The results of our ROAR calculations are shown in Figure 13. Features to be removed are sorted both by the absolute values of their attributions and by their original signed values, for both KS and IG. The error bars show plus or minus one standard error computed from five replicates.
Since ROAR in this experiment measures the Huber loss on the test points, it should be unaffected by the signs of the attributions and sensitive only to their magnitudes. For this reason, sorting features by the absolute values of their attributions should give a better score than sorting by their signed values. This is a point of contrast with insertion and deletion tests.
The results in Figure 13 are surprising to us. The curves we see are very nearly straight lines connecting the loss with all features present to the loss with no features present, so there is very little area between them and a straight line connecting the end points. This means that the loss in accuracy from variables deemed most important is about the same as those deemed less important. This could be because neither KS nor IG are able to identify important predictors for this task. It could also be that the majority of predictor variables in this data can be replaced by a combination of some other predictors which then prevents a large reduction in accuracy from removing a subset of predictors prior to retraining.
A second surprise is that the signed ordering, which we used as a control that should have been beaten by the absolute ordering, was nearly as good as the absolute ordering.
As before KS and IG are comparable, though here both seem disappointing. Using IG with variables sorted by their absolute values even came out superior to KS in the Bangalore housing dataset.
Conclusion
In this paper we have extended insertion and deletion tests to regression problems. That includes getting formulas for the effects of interactions on the AUC and ABC measures, finding the expected area between the insertion/deletion curve and a linear reference under random variable ordering, and replacing the horizontal axis by a more appropriate straight line reference. We gave a condition under which sorting variables by their Shapley value will optimize ABC, as well as constructing an example where that does not happen.
We compared six methods and several policies on two datasets. We find that overall the Kernel SHAP gave the best areas. The much faster Integrated Gradients method was nearly as good. In order to even run IG in settings with binary variables, some strategy for using continuum values must be employed. We opted for the simplest choice of just casting the booleans to real values.
A very natural policy question is whether to prefer insertion or deletion. Petsiuk et al. (2018) consider both and do not show a strong preference for one over the other. They use deletion when comparing a real image to a blank image (deleting real pixels by replacing them with zeros). They use insertion when comparing a real image to a blurred one (inserting real pixels into the blurred image). In other words the choice between insertion and deletion is driven by the counterfactual point. In the regression setting both input points could be real data. By studying both insertion and deletion we have seen that they can differ. A natural way to break the tie is to sum the ABC values for both insertion and deletion. Under a completely random permutation the expected value of that sum is zero. See Appendix B.1. In our examples, IG closely matches KS for both insertion and deletion, so it also matches their sum.

Table 5: AUCs for the albatross example of Figure 1 using various reference images. The parameter for blurring the image is the same one Petsiuk et al. (2018) used.
A Detailed Model Descriptions of the Experiments
This appendix provides some background details on the experiments conducted in this article.
A.1 The Example in Image Classification
Here we summarize how the example of insertion and deletion tests in image classification shown in Figure 1 was computed. The image is from Wah et al. (2011) and the model is pretrained for ImageNet classification (Russakovsky et al., 2015); its architecture is EfficientNet-B0 (Tan and Le, 2019). The preprocessing of the model includes cropping the center of the image to make it square, as shown in the saliency map of Figure 1.
The saliency map is computed using SmoothGrad of Smilkov et al. (2017). It averages IG computations over 300 randomly generated baseline images of Gaussian noise. The model outputs are attributed to the latent features in the first convolutional layer (implemented in Captum (Kokhlikyan et al., 2020) as layer integrated gradients). That layer has 32 blocks, each of which is a 112 × 112 grid whose cells correspond to 3 × 3 pixel patterns in the input. The effect of any pixel is summed over those 32 blocks and over all 3 × 3 patterns that contain it, so that in the insertion and deletion tests the saliency map has the same size, 224 × 224, as the preprocessed original image. The reference image for both insertion and deletion tests was a black image.
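A hedged sketch of this computation follows. It is not the authors' script: the exact layer path inside torchvision's EfficientNet-B0, the target class index, the noise scale, and the number of baselines (reduced here from 300 for speed) are all assumptions; the upsampling step stands in for summing each pixel's effect over the 3 × 3 patterns containing it.

```python
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_b0
from captum.attr import LayerIntegratedGradients

model = efficientnet_b0(weights="IMAGENET1K_V1").eval()
first_conv = model.features[0][0]            # assumed: the 32-channel stem conv
lig = LayerIntegratedGradients(model, first_conv)

x = torch.randn(1, 3, 224, 224)              # stand-in for the preprocessed image
target = 146                                 # assumed ImageNet class index
attrs = []
for _ in range(30):                          # paper averages over 300 baselines
    baseline = torch.randn_like(x) * 0.1     # Gaussian-noise baseline image
    attrs.append(lig.attribute(x, baselines=baseline, target=target))
layer_attr = torch.stack(attrs).mean(0)      # shape (1, 32, 112, 112)

# Sum over the 32 blocks, then upsample so each pixel collects the patterns
# that contain it, giving a 224 x 224 saliency map.
saliency = F.interpolate(layer_attr.sum(1, keepdim=True),
                         size=(224, 224), mode="bilinear")[0, 0]
```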
The AUCs for several different choices of reference data are summarized in Table 5. Note that the definition of the AUCs here differs from the main material of this paper, and a small deletion AUC is better in this situation. The reference image has a very significant effect on the AUCs.
A.2 Model for the Bangalore Housing Data
The details of the model used in Section 5.1 are summarized in the corresponding appendix table. Each intermediate layer uses a parametric ReLU activation (He et al., 2015) and dropout layers. The dropout ratio is common to all of them.
A.3 The Other XAI Methods in Tables 1, 2 and 3
The implementation details of the XAI methods other than KS and IG which appear in Tables 1, 2 and 3 are summarized in this subsection. We use the implementations in Captum (Kokhlikyan et al., 2020) for those methods, DeepLIFT (Shrikumar et al., 2017), Vanilla Grad (Simonyan et al., 2013), Input×Gradient (Shrikumar et al., 2016) and LIME (Ribeiro et al., 2016), with their default arguments. Inputs of Input×Gradient are those obtained after applying Z-score normalization in the electron-electron collision data from CERN. Zeros in binary vectors are replaced by a small negative value (−10⁻⁴) in Input×Gradient to avoid degeneration in the Metropolitan Areas of India dataset. As we use parametric ReLU as the activation function in our model, it is treated as a generic nonlinear function in the DeepLIFT calculations. The reference values that can be set in LIME and DeepLIFT are chosen to be the same data as for KS and IG, depending on the policy.
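The following is a minimal, self-contained sketch of invoking these four Captum methods with default arguments; the tiny model, input, and target index are placeholders, not the paper's networks. The PReLU activation is left to Captum's default gradient handling, mirroring the treatment described above.

```python
import torch
from torch import nn
from captum.attr import DeepLift, Saliency, InputXGradient, Lime

model = nn.Sequential(nn.Linear(8, 16), nn.PReLU(), nn.Linear(16, 1))
x = torch.randn(4, 8, requires_grad=True)
baseline = torch.zeros_like(x)               # reference point, policy-dependent

for name, method, needs_baseline in [
    ("DeepLIFT", DeepLift(model), True),
    ("Vanilla Grad", Saliency(model), False),
    ("Input x Gradient", InputXGradient(model), False),
    ("LIME", Lime(model), True),
]:
    kwargs = {"baselines": baseline} if needs_baseline else {}
    attr = method.attribute(x, target=0, **kwargs)   # target=0: the lone output
    print(name, attr.shape)
```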
A.4 CERN Electron Collision Data
The hyperparameters for the model used in Section 5.2 are given in Table 7. They were obtained from a hyperparameter search using Optuna (Akiba et al., 2019). Each intermediate layer is a parametric ReLU with dropout. The dropout ratio is common to all of the layers. The performance for test data is depicted in Figure 14. The model is overall very accurate but the very highest values are systematically underestimated.
B Proof of Theorem 1
Here we prove that $\mathrm{AUC} = \sum_{u \subseteq 1{:}n} (n - \lceil u \rceil + 1)\,\Delta_u f$. We use the anchored decomposition that we define next. We also connect that decomposition to some areas of the literature.
The anchored decomposition is a kind of high dimensional model representation (HDMR) that represents a function of $n$ variables by a sum of $2^n$ functions, one per subset of $1{:}n$, where the function for $u \subseteq 1{:}n$ depends on $x$ only through $x_u$. The best known HDMR is the ANOVA of Fisher and Mackenzie (1923), Hoeffding (1948), Sobol' (1969), and Efron and Stein (1981), but there are others. See Kuo et al. (2010). The anchored decomposition goes back at least to Sobol' (1969). It does not require a distribution on the inputs. Instead of centering higher order interaction terms by subtracting expectations, which don't exist without a distribution, it centers by subtracting values at default or anchoring input points. We only need it for functions on $\{0,1\}^n$ and without loss of generality we take the anchor to be all zeros.
There is an inclusion-exclusion-Möbius formula $g_u(z) = \sum_{v \subseteq u} (-1)^{|u| - |v|} f(z_v{:}0_{-v})$.
See for instance Kuo et al. (2010). The anchored decomposition is also called cut-HDMR (Aliş and Rabitz, 2001) in chemistry, and finite differences-HDMR in global sensitivity analysis (Sobol', 2003). When f is the value function in a Shapley value context, the values g u (1) are known as Harsanyi dividends (Harsanyi, 1959). Many of the quantities we use here also feature prominently in the study of Boolean functions f : {0, 1} n → {0, 1} (O'Donnell, 2014).
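To make the decomposition concrete, here is a small sketch (ours, not from the paper) that computes the values $g_u(1)$, i.e. the Harsanyi dividends, for a function on $\{0,1\}^n$ via the inclusion-exclusion formula above; the example function and its coefficients are made up for illustration.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of the iterable s, as tuples."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def dividends(f, n):
    """g_u(1) = sum_{v subset of u} (-1)^{|u|-|v|} f(1_v), anchored at 0."""
    div = {}
    for u in subsets(range(n)):
        total = 0.0
        for v in subsets(u):
            z = [1 if j in v else 0 for j in range(n)]
            total += (-1) ** (len(u) - len(v)) * f(z)
        div[frozenset(u)] = total
    return div

# Example: f with a pairwise interaction between features 0 and 1.
f = lambda z: 2.0 * z[0] + z[1] + 3.0 * z[0] * z[1]
print(dividends(f, 3))   # nonzero only on {0}, {1}, and {0, 1}
```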
The next Lemma is from Mase et al. (2019). We include the short proof for completeness.
Proof. The inclusion-exclusion formula for the binary anchored decomposition is $g_u(z) = \sum_{v \subseteq u} (-1)^{|u| - |v|} f(z_v{:}0_{-v})$. Suppose that $z_j = 0$ for some $j \in u$. Then, splitting up the alternating sum, $g_u(z) = \sum_{v \subseteq u - j} (-1)^{|u| - |v|} \big(f(z_v{:}0_{-v}) - f(z_{v+j}{:}0_{-v-j})\big) = 0$ because $z_v{:}0_{-v}$ and $z_{v+j}{:}0_{-v-j}$ are the same point when $z_j = 0$. It follows that $g_u(e_w) = 0$ if $u \not\subseteq w$. Now suppose that $u \subseteq w$. First $g_u(z) = g_u(z_u{:}1_{-u})$ because $g_u$ only depends on $z$ through $z_u$. From $u \subseteq w$ we have $(e_w)_u = 1_u$. Then $g_u(e_w) = g_u(1_u{:}1_{-u}) = g_u(1)$, completing the proof.
We are now ready to state and prove our theorem expressing the AUC in terms of the anchored decomposition. Without loss of generality it takes π to be the identity permutation.
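Before the proof, a quick numerical sanity check may help. The following self-contained sketch (an illustration under stated assumptions: the identity permutation, the discrete AUC taken as a plain sum of curve heights $f(1_{1{:}k})$ for $k = 0, \ldots, n$, and $\lceil \emptyset \rceil = 0$) compares both sides of the identity on a random function.

```python
import random
from itertools import chain, combinations

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

n = 4
coef = {u: random.gauss(0, 1) for u in subsets(range(n))}
f = lambda z: sum(c for u, c in coef.items() if all(z[j] for j in u))

# Left side: sum of insertion-curve heights f(1_{1:k}), k = 0..n.
auc = sum(f([1] * k + [0] * (n - k)) for k in range(n + 1))

# Right side: anchored-decomposition effects via inclusion-exclusion.
rhs = 0.0
for u in subsets(range(n)):
    g_u = sum((-1) ** (len(u) - len(v))
              * f([1 if j in v else 0 for j in range(n)])
              for v in subsets(u))
    ceil_u = max(u) + 1 if u else 0      # 1-based largest index, 0 for empty set
    rhs += (n - ceil_u + 1) * g_u
print(abs(auc - rhs) < 1e-9)             # True
```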
B.1 ABC for Deletion
Now suppose that we use the deletion strategy of replacing $x_j$ by $x'_j$ in the opposite order from that used above, meaning that we change variables thought to most decrease $f$ first. Then, letting $\lfloor u \rfloor$ be the index of the smallest element of $u \subseteq 1{:}n$, with $\lfloor \emptyset \rfloor = n + 1$ by convention, we get by the argument in Theorem 1, $\mathrm{AUC}' = \sum_{u \subseteq 1{:}n} \lfloor u \rfloor\, \Delta_u f$.
Our area between the curves, ABC, measure for deletion is defined analogously from $\mathrm{AUC}'$. If we sum the two ABC measures we get $\mathrm{ABC} + \mathrm{ABC}' = \sum_{u \neq \emptyset} (n - \lceil u \rceil - \lfloor u \rfloor + 1)\,\Delta_u f$.
B.2 Monotonicity
Here we prove a sufficient condition under which ranking variables in decreasing order by their Shapley value gives the order that maximizes the insertion AUC. We suppose that $f(z) = h(a(z))$ where $a$ is an additive function on $\{0,1\}^n$ and $h : \mathbb{R} \to \mathbb{R}$ is strictly increasing. An additive function on $z \in \{0,1\}^n$ takes the form $a(z) = \gamma_0 + \sum_{j=1}^n \gamma_j z_j$.
By choosing $h(w) = \sigma(w) \equiv (1 + \exp(-w))^{-1}$ we can study logistic regression probabilities, while $h(w) = w$ accounts for those same probabilities on the logit scale. By choosing $h(w) = \exp(w)$ we can include naive Bayes. Taking $h(w)$ to be the leaky ReLU function we can compare the importance of the inputs to a neuron at some position within a network.
Logistic regression is ordinarily expressed as $\Pr(Y = 1 \mid x) = \sigma(\beta_0 + x^T\beta)$. Then $\Pr(Y = 1 \mid x') = \sigma(\beta_0 + x'^T\beta)$. If we select $z \in \{0,1\}^n$ with $z_j = 1$ indicating that we choose $x'_j$ for the $j$'th component and $z_j = 0$ indicating that we choose $x_j$ for the $j$'th component, then the resulting probability takes the form $\sigma(a(z))$ for an additive function $a$. In other words we take $\gamma_j = (x'_j - x_j)\beta_j$ and $\gamma_0 = \beta_0$ to define the function on $\{0,1\}^n$ that we study.
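A toy check of the claim follows; it is our illustration under assumed values, not the paper's code. For $f(z) = \sigma(\gamma_0 + \sum_j \gamma_j z_j)$ with a strictly increasing link, the exact Shapley values of the $n$ binary inputs sort them in the same order as the coefficients $\gamma_j$, which is also the order given by any gradient $h'(a(z))\gamma$.

```python
import math
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, gamma0 = 5, 0.3
gamma = rng.normal(size=n)
h = lambda w: 1.0 / (1.0 + math.exp(-w))          # logistic link
f = lambda s: h(gamma0 + gamma[list(s)].sum())    # s = set of inserted features

def shapley(j):
    """Exact Shapley value of feature j by enumerating coalitions."""
    others = [k for k in range(n) if k != j]
    val = 0.0
    for r in range(n):
        for s in combinations(others, r):
            w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
            val += w * (f(set(s) | {j}) - f(set(s)))
    return val

phi = np.array([shapley(j) for j in range(n)])
print(np.array_equal(np.argsort(phi), np.argsort(gamma)))   # True
```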
Next we consider integrated gradients for this setting, assuming that $h$ is differentiable with $h' > 0$. The gradient is then $h'(a(z))\gamma$. The gradient at any point therefore sorts the inputs in the same order as the Shapley value, so any positive linear combination of those gradient evaluations sorts the inputs into this order, which then optimizes the insertion AUC. | 2022-05-26T13:31:35.258Z | 2022-05-25T00:00:00.000 | {
"year": 2022,
"sha1": "77ca2a3f954249f53c09768b15088c282c382170",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "77ca2a3f954249f53c09768b15088c282c382170",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
15369763 | pes2o/s2orc | v3-fos-license | Stability Bound for Stationary Phi-mixing and Beta-mixing Processes
Most generalization bounds in learning theory are based on some measure of the complexity of the hypothesis class used, independently of any algorithm. In contrast, the notion of algorithmic stability can be used to derive tight generalization bounds that are tailored to specific learning algorithms by exploiting their particular properties. However, as in much of learning theory, existing stability analyses and bounds apply only in the scenario where the samples are independently and identically distributed. In many machine learning applications, however, this assumption does not hold. The observations received by the learning algorithm often have some inherent temporal dependence. This paper studies the scenario where the observations are drawn from a stationary phi-mixing or beta-mixing sequence, a widely adopted assumption in the study of non-i.i.d. processes that implies a dependence between observations weakening over time. We prove novel and distinct stability-based generalization bounds for stationary phi-mixing and beta-mixing sequences. These bounds strictly generalize the bounds given in the i.i.d. case and apply to all stable learning algorithms, thereby extending the use of stability-bounds to non-i.i.d. scenarios. We also illustrate the application of our phi-mixing generalization bounds to general classes of learning algorithms, including Support Vector Regression, Kernel Ridge Regression, and Support Vector Machines, and many other kernel regularization-based and relative entropy-based regularization algorithms. These novel bounds can thus be viewed as the first theoretical basis for the use of these algorithms in non-i.i.d. scenarios.
Introduction
Most generalization bounds in learning theory are based on some measure of the complexity of the hypothesis class used, such as the VC-dimension, covering numbers, or Rademacher complexity. These measures characterize a class of hypotheses, independently of any algorithm. In
contrast, the notion of algorithmic stability can be used to derive bounds that are tailored to specific learning algorithms and exploit their particular properties. A learning algorithm is stable if the hypothesis it outputs varies in a limited way in response to small changes made to the training set. Algorithmic stability has been used effectively in the past to derive tight generalization bounds (Bousquet and Elisseeff, 2001, 2002).
But, as in much of learning theory, existing stability analyses and bounds apply only in the scenario where the samples are independently and identically distributed (i.i.d.). In many machine learning applications, this assumption, however, does not hold; in fact, the i.i.d. assumption is not tested or derived from any data analysis. The observations received by the learning algorithm often have some inherent temporal dependence. This is clear in system diagnosis or time series prediction problems. Clearly, prices of different stocks on the same day, or of the same stock on different days, may be dependent. But, a less apparent time dependency may affect data sampled in many other tasks as well.
This paper studies the scenario where the observations are drawn from a stationary φ-mixing or β-mixing sequence, a widely adopted assumption in the study of non-i.i.d. processes that implies a dependence between observations weakening over time (Yu, 1994; Meir, 2000; Vidyasagar, 2003; Lozano et al., 2006). We prove novel and distinct stability-based generalization bounds for stationary φ-mixing and β-mixing sequences. These bounds strictly generalize the bounds given in the i.i.d. case and apply to all stable learning algorithms, thereby extending the usefulness of stability bounds to non-i.i.d. scenarios. Our proofs are based on the independent block technique described by Yu (1994) and attributed to Bernstein (1927), which is commonly used in such contexts. However, our analysis differs from previous uses of this technique in that the blocks of points considered are not of equal size.
For our analysis of stationary φ-mixing sequences, we make use of a generalized version of McDiarmid's inequality (Kontorovich and Ramanan, 2006) that holds for φ-mixing sequences. This leads to stability-based generalization bounds with the standard exponential form. Our generalization bounds for stationary β-mixing sequences cover a more general non-i.i.d. scenario and use the standard McDiarmid's inequality; however, unlike the φ-mixing case, the β-mixing bound presented here is not a purely exponential bound and contains an additive term depending on the mixing coefficient.
We also illustrate the application of our ϕ-mixing generalization bounds to general classes of learning algorithms, including Support Vector Regression (SVR) (Vapnik, 1998), Kernel Ridge Regression (Saunders et al., 1998), and Support Vector Machines (SVMs) (Cortes and Vapnik, 1995). Algorithms such as support vector regression (SVR) (Vapnik, 1998;Schölkopf and Smola, 2002) have been used in the context of time series prediction in which the i.i.d. assumption does not hold, some with good experimental results (Müller et al., 1997;Mattera and Haykin, 1999). To our knowledge, the use of these algorithms in non-i.i.d. scenarios has not been previously supported by any theoretical analysis. The stability bounds we give for SVR, SVMs, and many other kernel regularization-based and relative entropy-based regularization algorithms can thus be viewed as the first theoretical basis for their use in such scenarios.
The following sections are organized as follows. In Section 2, we introduce the necessary definitions for the non-i.i.d. problems that we are considering and discuss the learning scenarios in that context. Section 3 gives our main generalization bounds for stationary ϕ-mixing sequences based on stability, as well as the illustration of its applications to general kernel regularization-based algorithms, including SVR, KRR, and SVMs, as well as to relative entropy-based regularization al-gorithms. Finally, Section 4 presents the first known stability bounds for the more general stationary β-mixing scenario.
Preliminaries
We first introduce some standard definitions for dependent observations in mixing theory (Doukhan, 1994) and then briefly discuss the learning scenarios in the non-i.i.d. case.
Non-i.i.d. Definitions
Definition 1 A sequence of random variables Z = {Z t } ∞ t=−∞ is said to be stationary if for any t and non-negative integers m and k, the random vectors (Z t , . . . , Z t+m ) and (Z t+k , . . . , Z t+m+k ) have the same distribution.
Thus, the index $t$, or time, does not affect the distribution of a variable $Z_t$ in a stationary sequence. This does not imply independence, however: for $i < j < k$, the variables $Z_i$, $Z_j$, and $Z_k$ may still be mutually dependent. The following is a standard definition giving a measure of the dependence of the random variables $Z_t$ within a stationary sequence. There are several equivalent definitions of this quantity; we adopt here that of Yu (1994).
Definition 2 Let $Z = \{Z_t\}_{t=-\infty}^{+\infty}$ be a stationary sequence of random variables. For any $i, j \in \mathbb{Z} \cup \{-\infty, +\infty\}$, let $\sigma_i^j$ denote the σ-algebra generated by the random variables $Z_k$, $i \le k \le j$. Then, for any positive integer $k$, the β-mixing and φ-mixing coefficients of the stochastic process $Z$ are defined as
$$\beta(k) = \sup_n \, \mathbb{E}_{B \in \sigma_{-\infty}^{n}}\Big[\sup_{A \in \sigma_{n+k}^{+\infty}} \big|\Pr[A \mid B] - \Pr[A]\big|\Big], \qquad \varphi(k) = \sup_{n,\; A \in \sigma_{n+k}^{+\infty},\; B \in \sigma_{-\infty}^{n}} \big|\Pr[A \mid B] - \Pr[A]\big|.$$
$Z$ is said to be β-mixing (φ-mixing) if $\beta(k) \to 0$ (resp. $\varphi(k) \to 0$) as $k \to \infty$. It is said to be algebraically β-mixing (algebraically φ-mixing) if there exist real numbers $\beta_0 > 0$ (resp. $\varphi_0 > 0$) and $r > 0$ such that $\beta(k) \le \beta_0/k^r$ (resp. $\varphi(k) \le \varphi_0/k^r$) for all $k$, and exponentially mixing if there exist real numbers $\beta_0, \beta_1 > 0$ (resp. $\varphi_0, \varphi_1 > 0$) and $r > 0$ such that $\beta(k) \le \beta_0 \exp(-\beta_1 k^r)$ (resp. $\varphi(k) \le \varphi_0 \exp(-\varphi_1 k^r)$) for all $k$. Both $\beta(k)$ and $\varphi(k)$ measure the dependence of an event on those that occurred more than $k$ units of time in the past. β-mixing is a weaker assumption than φ-mixing and thus covers a more general non-i.i.d. scenario.
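For intuition, a small sketch (an illustration we add, with an assumed transition matrix) shows how dependence decays with the gap $k$ for a stationary two-state Markov chain: the quantity $|\Pr[Z_{n+k} = 1 \mid Z_n = 1] - \Pr[Z_1 = 1]|$, a simple lower bound on $\varphi(k)$, shrinks geometrically, an example of exponential mixing.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # assumed transition matrix
pi = np.array([2 / 3, 1 / 3])              # its stationary distribution
for k in [1, 2, 5, 10, 20]:
    Pk = np.linalg.matrix_power(P, k)
    dep = abs(Pk[1, 1] - pi[1])            # event A = {Z_{n+k}=1}, B = {Z_n=1}
    print(k, dep)                          # decays like |lambda_2|^k = 0.7^k
```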
This paper gives stability-based generalization bounds in both the φ-mixing and β-mixing cases. The β-mixing bounds cover a more general case, of course; however, the φ-mixing bounds are simpler and admit the standard exponential form. The φ-mixing bounds are based on a concentration inequality that applies to φ-mixing processes only. Except for the use of this concentration bound, all of the intermediate proofs and results used to derive a φ-mixing bound in Section 3 are given in the more general case of β-mixing sequences. It has been argued by Vidyasagar (2003) that β-mixing is "just the right" assumption for the analysis of weakly-dependent sample points in machine learning, in particular because several PAC-learning results then carry over to the non-i.i.d. case. Our β-mixing generalization bounds further contribute to the analysis of this scenario.¹ We describe in several instances the application of our bounds in the case of algebraic mixing. Algebraic mixing is a standard assumption for mixing coefficients that has been adopted in previous studies of learning in the presence of dependent observations (Yu, 1994; Meir, 2000; Vidyasagar, 2003; Lozano et al., 2006).
Let us also point out that mixing assumptions can be checked in some cases such as with Gaussian or Markov processes (Meir, 2000) and that mixing parameters can also be estimated in such cases.
Most previous studies use a technique originally introduced by Bernstein (1927) based on independent blocks of equal size (Yu, 1994; Meir, 2000; Lozano et al., 2006). This technique is particularly relevant when dealing with stationary β-mixing. We will need a related but somewhat different technique since the blocks we consider may not have the same size. The following lemma is a special case of Corollary 2.7 from (Yu, 1994).
Lemma 3 (Yu (1994), Corollary 2.7) Let $\mu \ge 1$ and suppose that $h$ is a measurable function with absolute value bounded by $M$ on a product probability space over $\mu$ blocks of random variables separated by gaps of at least $k$ points. Let $Q$ denote the distribution under which the blocks are dependent and $\widetilde{Q}$ the distribution under which the blocks are independent, with the same within-block distributions. Then $\big|\mathbb{E}_{\widetilde{Q}}[h] - \mathbb{E}_{Q}[h]\big| \le (\mu - 1) M \beta(k)$. The lemma gives a measure of the difference between the distribution of $\mu$ blocks where the blocks are independent in one case and dependent in the other case. The distribution within each block is assumed to be the same in both cases. For a monotonically decreasing function $\beta$, we have $\beta(Q) = \beta(k^*)$, where $k^* = \min_i(k_i)$ is the smallest gap between blocks.
Learning Scenarios
We consider the familiar supervised learning setting where the learning algorithm receives a sample of $m$ labeled points $S = (z_1, \ldots, z_m) = ((x_1, y_1), \ldots, (x_m, y_m)) \in (X \times Y)^m$, where $X$ is the input space and $Y$ the set of labels ($Y = \mathbb{R}$ in the regression case), both assumed to be measurable. For a fixed learning algorithm, we denote by $h_S$ the hypothesis it returns when trained on the sample $S$. The error of a hypothesis on a pair $z \in X \times Y$ is measured in terms of a cost function $c : Y \times Y \to \mathbb{R}_+$. Thus, $c(h(x), y)$ measures the error of a hypothesis $h$ on a pair $(x, y)$; $c(h(x), y) = (h(x) - y)^2$ in the standard regression case. We will use the shorthand $c(h, z) := c(h(x), y)$ for a hypothesis $h$ and $z = (x, y) \in X \times Y$ and will assume that $c$ is upper bounded by a constant $M > 0$.
1. Some results have also been obtained in the more general context of α-mixing but they seem to require the stronger condition of exponential mixing (Modha and Masry, 1998).
We denote by $\widehat{R}(h)$ the empirical error of a hypothesis $h$ for a training sample $S = (z_1, \ldots, z_m)$: $\widehat{R}(h) = \frac{1}{m}\sum_{i=1}^m c(h, z_i)$. In the standard machine learning scenario, the sample pairs $z_1, \ldots, z_m$ are assumed to be i.i.d., a restrictive assumption that does not always hold in practice. We will consider here the more general case of dependent samples drawn from a stationary mixing sequence $Z$ over $X \times Y$. As in the i.i.d. case, the objective of the learning algorithm is to select a hypothesis with small error over future samples. But, here, we must distinguish two versions of this problem.
In the most general version, future samples depend on the training sample $S$ and thus the generalization error or true error of the hypothesis $h_S$ trained on $S$ must be measured by its expected error conditioned on the sample $S$: $R(h_S) = \mathbb{E}_z[c(h_S, z) \mid S]$. This is the most realistic setting in this context, which matches time series prediction problems. A somewhat less realistic version is one where the samples are dependent, but the test points are assumed to be independent of the training sample $S$. The generalization error of the hypothesis $h_S$ trained on $S$ is then: $R(h_S) = \mathbb{E}_z[c(h_S, z)]$. This setting seems less natural since, if samples are dependent, future test points must also depend on the training points, even if that dependence is relatively weak due to the time interval after which test points are drawn. Nevertheless, it is this somewhat less realistic setting that has been studied by all previous machine learning studies that we are aware of (Yu, 1994; Meir, 2000; Vidyasagar, 2003; Lozano et al., 2006), even when examining specifically a time series prediction problem (Meir, 2000). Thus, the bounds derived in these studies cannot be directly applied to the more general setting. We will consider instead the most general setting with the definition of the generalization error based on Eq. 4. Clearly, our analysis also applies to the less general setting just discussed as well.
Let us briefly discuss the more general scenario of non-stationary mixing sequences, that is, one where the distribution may change over time. Within that general case, the generalization error of a hypothesis $h_S$, defined straightforwardly by $R(h_S, t) = \mathbb{E}_{z_t}[c(h_S, z_t) \mid S]$, would depend on the time $t$ and it may be the case that $R(h_S, t) \ne R(h_S, t')$ for $t \ne t'$, making the definition of the generalization error a more subtle issue. To remove the dependence on time, one could define a weaker notion of the generalization error based on an expected loss over all time: $\bar{R}(h_S) = \lim_{T \to \infty} \frac{1}{T}\sum_{t=1}^{T} R(h_S, t)$. It is not clear, however, whether this term could be easily computed and useful. A stronger condition would be to minimize the generalization error for any particular target time. Studies of this type have been conducted for smoothly changing distributions, such as in Zhou et al. (2008); however, to the best of our knowledge, the scenario of both non-identical and non-independent sequences has not yet been studied.
ϕ-Mixing Generalization Bounds and Applications
This section gives generalization bounds for β̂-stable algorithms over a mixing stationary distribution. The first two sections present our main proofs, which hold for β-mixing stationary distributions. In the third section, we will briefly discuss concentration inequalities that apply to φ-mixing processes only. Then, in the final section, we will present our main results. The condition of β̂-stability is an algorithm-dependent property first introduced by Devroye and Wagner (1979) and Kearns and Ron (1997). It has later been used successfully by Bousquet and Elisseeff (2001, 2002) to show algorithm-specific stability bounds for i.i.d. samples. Roughly speaking, a learning algorithm is said to be stable if small changes to the training set do not produce large deviations in its output. The following gives the precise technical definition.
Definition 4 A learning algorithm is said to be (uniformly) β̂-stable if the hypotheses it returns for any two training samples $S$ and $S'$ that differ by a single point satisfy $\forall z \in X \times Y,\; |c(h_S, z) - c(h_{S'}, z)| \le \hat\beta$.
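The following is a rough empirical probe of this definition, an illustration we add under assumed data, not a proof of stability: swap one training point of a ridge regression, refit, and measure the largest change in squared loss over a grid of test points. Note that sklearn's `alpha` corresponds to $m\lambda$ for the averaged objective used in this paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
m, lam = 200, 1.0
X, y = rng.normal(size=(m, 3)), rng.normal(size=m)
Xt, yt = rng.normal(size=(500, 3)), rng.normal(size=500)

def losses(Xtr, ytr):
    h = Ridge(alpha=lam * m).fit(Xtr, ytr)   # alpha ~ m * lambda (see note)
    return (h.predict(Xt) - yt) ** 2

base = losses(X, y)
Xs, ys = X.copy(), y.copy()
Xs[0], ys[0] = rng.normal(size=3), rng.normal()   # change a single point
print(np.max(np.abs(losses(Xs, ys) - base)))      # empirical beta-hat probe
```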
The use of stability in conjunction with McDiarmid's inequality will allow us to produce generalization bounds. McDiarmid's inequality is an exponential concentration bound of the type $\Pr\big[|\Phi(S) - \mathbb{E}[\Phi(S)]| \ge \epsilon\big] \le \exp\big(-\tfrac{2\epsilon^2}{m\,l^2}\big)$, where the probability is over a sample of size $m$ and $l$ is the Lipschitz parameter of $\Phi$ (which is also a function of $m$). Unfortunately, this inequality cannot be easily applied when the sample points are not distributed in an i.i.d. fashion. We will use the results of Kontorovich and Ramanan (2006) to extend the use of McDiarmid's inequality to general mixing distributions (Theorem 9).
To obtain a stability-based generalization bound, we will apply this theorem to $\Phi(S) = R(h_S) - \widehat{R}(h_S)$. To do so, we need to show, as with the standard McDiarmid's inequality, that $\Phi$ is a Lipschitz function and, to make it useful, bound $\mathbb{E}[\Phi]$. The next two sections describe how we achieve both of these in this non-i.i.d. scenario.
Let us first take a brief look at the problem faced when attempting to give stability bounds for dependent sequences and give some idea of our solution for that problem. The stability proofs given by Bousquet and Elisseeff (2001) assume the i.i.d. property, so replacing an element in a sequence with another does not affect the expected value of a random variable defined over that sequence. In other words, the following equality holds for a random variable $V$ that is a function of the sequence of random variables $S = (Z_1, \ldots, Z_m)$: $\mathbb{E}_S[V(Z_1, \ldots, Z_i, \ldots, Z_m)] = \mathbb{E}_{S, Z}[V(Z_1, \ldots, Z, \ldots, Z_m)]$, where $Z$ is distributed as $Z_i$. However, clearly, if the points in that sequence $S$ are dependent, this equality may no longer hold.
The main technique to cope with this problem is based on the so-called "independent block sequence" originally introduced by Bernstein (1927). This consists of eliminating from the original dependent sequence several blocks of contiguous points, leaving us with some remaining blocks of points. Instead of these dependent blocks, we then consider independent blocks of points, each with the same size and the same distribution (within each block) as the dependent ones. By Lemma 3, for a β-mixing distribution, the expected value of a random variable defined over the dependent blocks is close to the one based on these independent blocks. Working with these independent blocks brings us back to a situation similar to the i.i.d. case, with i.i.d. blocks replacing i.i.d. points.
Our use of this method somewhat differs from previous ones (see Yu, 1994;Meir, 2000) where many blocks of equal size are considered. We will be dealing with four blocks and with typically unequal sizes. More specifically, note that for Equation 9 to hold, we only need that the variable Z i be independent of the other points in the sequence. To achieve this, roughly speaking, we will be "discarding" some of the points in the sequence surrounding Z i . This results in a sequence of three blocks of contiguous points. If our algorithm is stable and we do not discard too many points, the hypothesis returned should not be greatly affected by this operation. In the next step, we apply the independent block lemma, which then allows us to assume each of these blocks as independent modulo the addition of a mixing term. In particular, Z i becomes independent of all other points. Clearly, the number of points discarded is subject to a trade-off: removing too many points could excessively modify the hypothesis returned; removing too few would maintain the dependency between Z i and the remaining points, thereby producing a larger penalty when applying Lemma 3. This trade-off is made explicit in the following section where an optimal solution is sought.
Lipschitz Bound
As discussed in Section 2.2, in the most general scenario, test points depend on the training sample. We first present a lemma that relates the expected value of the generalization error in that scenario to the same expectation in the scenario where the test point is independent of the training sample. We denote by $R(h_S) = \mathbb{E}_z[c(h_S, z) \mid S]$ the expectation in the dependent case and by $\widetilde{R}(h_{S_b}) = \mathbb{E}_{\tilde z}[c(h_{S_b}, \tilde z)]$ the expectation where the test point $\tilde z$ is assumed independent of the training, with $S_b$ denoting a sequence similar to $S$ but with the last $b$ points removed. Figure 1(a) illustrates that sequence. The block $S_b$ is assumed to have exactly the same distribution as the corresponding block of the same size in $S$.
Lemma 5 Assume that the learning algorithm is β̂-stable and that the cost function $c$ is bounded by $M$. Then, for any sample $S$ of size $m$ drawn from a β-mixing stationary distribution and for any $b \in \{0, \ldots, m\}$, the following holds: $\big|R(h_S) - \widetilde{R}(h_{S_b})\big| \le b\hat\beta + M\beta(b)$. Proof The β̂-stability of the learning algorithm implies $\big|\mathbb{E}_z[c(h_S, z) \mid S] - \mathbb{E}_z[c(h_{S_b}, z) \mid S]\big| \le b\hat\beta$, since $h_S$ and $h_{S_b}$ differ by the removal of at most $b$ points. The application of Lemma 3 then yields $\big|\mathbb{E}_z[c(h_{S_b}, z) \mid S] - \mathbb{E}_{\tilde z}[c(h_{S_b}, \tilde z)]\big| \le M\beta(b)$, since the test point is separated from the points defining $h_{S_b}$ by a gap of $b$ points. The other side of the inequality of the lemma can be shown following the same steps.
We can now prove a Lipschitz bound for the function Φ.
Figure 1: Illustration of the sequences derived from S that are considered in the proofs.
Lemma 6 Let $S = (z_1, \ldots, z_m)$ and $S^i = (z_1, \ldots, z'_i, \ldots, z_m)$ be two sequences drawn from a β-mixing stationary process that differ only in point $i \in [1, m]$, and let $h_S$ and $h_{S^i}$ be the hypotheses returned by a β̂-stable algorithm when trained on each of these samples. Then, for any $i \in [1, m]$, the following inequality holds: $|\Phi(S) - \Phi(S^i)| \le 2(b + 1)\hat\beta + \frac{M}{m} + 2M\beta(b)$. Proof To prove this inequality, we first bound the difference of the empirical errors as in (Bousquet and Elisseeff, 2002), then the difference of the true errors. Bounding the difference of costs on the agreeing points with β̂ and on the one point that disagrees with $M$ yields $|\widehat{R}(h_S) - \widehat{R}(h_{S^i})| \le \hat\beta + \frac{M}{m}$ (14). Since both $R(h_S)$ and $R(h_{S^i})$ are defined with respect to a (different) dependent point, we apply Lemma 5 to both generalization error terms and use β̂-stability. This then results in $|R(h_S) - R(h_{S^i})| \le 2b\hat\beta + 2M\beta(b) + \hat\beta$ (15). The lemma's statement is obtained by combining inequalities 14 and 15.
Bound on Expectation
As mentioned earlier, to obtain an explicit bound after application of a generalized McDiarmid's inequality, we also need to bound E S [Φ(S)]. This is done by analyzing independent blocks using Lemma 3.
Lemma 7
Let $h_S$ be the hypothesis returned by a β̂-stable algorithm trained on a sample $S$ drawn from a stationary β-mixing distribution. Then, for all $b \in [1, m]$, the expectation $\mathbb{E}_S[\Phi(S)]$ is bounded by a term in $O(b\hat\beta + M\beta(b))$. Proof Let $S_b$ be defined as in the proof of Lemma 5. To deal with independent block sequences defined with respect to the same hypothesis, we will consider the sequence $S_{i,b} = S^i \cap S_b$, which is illustrated by Figure 1(c). This can result in as many as four blocks. As before, we will consider a sequence $\widetilde{S}_{i,b}$ with a similar set of blocks, each with the same distribution as the corresponding blocks in $S_{i,b}$, but such that the blocks are independent.
Since three blocks of at most $b$ points are removed from each hypothesis, the β̂-stability of the learning algorithm bounds the resulting change in cost by $3b\hat\beta$ per hypothesis. The application of Lemma 3 to the difference of two cost functions, each bounded by $M$, then introduces an additional mixing term in $\beta(b)$. Now, since the points $z$ and $z_i$ are independent and since the distribution is stationary, they have the same distribution and we can replace $z_i$ with $z$ in the empirical cost, where $\widetilde{S}^i_{i,b}$ is the sequence derived from $\widetilde{S}_{i,b}$ by replacing $z_i$ with $z$; the resulting inequality holds by the β̂-stability of the learning algorithm. The other side of the inequality in the statement of the lemma can be shown following the same steps.
ϕ-mixing Generalization Bounds
We are now prepared to make use of a concentration inequality to provide a generalization bound in the ϕ-mixing scenario. Several concentration inequalities have been shown in ϕ-mixing case, e.g. Marton (1998); Samson (2000); Chazottes et al. (2007); Kontorovich and Ramanan (2006). We will use that of Kontorovich and Ramanan (2006), which is very similar to that of Chazottes et al. (2007) modulo the fact that the latter requires a finite sample space.
These concentration inequalities are generalizations of the following inequality of McDiarmid (1989), commonly used in the i.i.d. setting.
Theorem 8 (McDiarmid (1989), 6.10) Let $S = (Z_1, \ldots, Z_m)$ be a sequence of independent random variables, each taking values in the set $Z$, and let $\Phi : Z^m \to \mathbb{R}$ be a measurable function that satisfies, for all $i \in \{1, \ldots, m\}$ and constants $c_i$, $\sup_{z_1, \ldots, z_m, z'_i} |\Phi(z_1, \ldots, z_i, \ldots, z_m) - \Phi(z_1, \ldots, z'_i, \ldots, z_m)| \le c_i$. Then, for all $\epsilon > 0$, $\Pr\big[|\Phi(S) - \mathbb{E}[\Phi(S)]| \ge \epsilon\big] \le 2\exp\big(-2\epsilon^2/\sum_{i=1}^m c_i^2\big)$. In the i.i.d. scenario, the requirement to produce the constants $c_i$ simply translates into a Lipschitz condition on the function $\Phi$. Theorem 5.1 of Kontorovich and Ramanan (2006) bounds precisely this quantity for φ-mixing sequences.³ Given the bound in Equation 20, the concentration bound of McDiarmid can be restated as follows, making it easily accessible to φ-mixing distributions.
It should be pointed out that the statement of the theorem in this paper is improved by a factor of 4 in the exponent over the one stated in Kontorovich and Ramanan (2006), Theorem 1.1. This can be achieved straightforwardly by following the same steps as in the proof by Kontorovich and Ramanan (2006) and making use of the general form of McDiarmid's inequality (Theorem 8) as opposed to Azuma's inequality. This section presents several theorems that constitute the main results of this paper. The following theorem is constructed from the bounds shown in the previous three sections.
Theorem 10 (General Non-i.i.d. Stability Bound) Let $h_S$ denote the hypothesis returned by a β̂-stable algorithm trained on a sample $S$ drawn from a φ-mixing stationary distribution and let $c$ be a measurable non-negative cost function upper bounded by $M > 0$. Then, for any $b \in [0, m]$ and any $\epsilon > 0$, the following generalization bound holds:
3. We should note that the original bound is expressed in terms of η-mixing coefficients. To simplify the presentation, we adapt it to the case of stationary φ-mixing sequences by using the following straightforward inequality for a stationary process: $2\varphi(j - i) \ge \eta_{ij}$. Furthermore, the bound presented in Kontorovich and Ramanan (2006) holds when the sample space is countable; it is extended to the continuous case in Kontorovich (2007).
Proof
The theorem follows directly from the application of Lemma 6 and Lemma 7 to Theorem 9.
The theorem gives a general stability bound for φ-mixing stationary sequences. If we further assume that the sequence is algebraically φ-mixing, that is, for all $k$, $\varphi(k) = \varphi_0 k^{-r}$ for some $r > 1$, then we can solve for the value of $b$ to optimize the bound.
Theorem 11 (Non-i.i.d. Stability Bound for Algebraically Mixing Sequences)
Let $h_S$ denote the hypothesis returned by a β̂-stable algorithm trained on a sample $S$ drawn from an algebraically φ-mixing stationary distribution, $\varphi(k) = \varphi_0 k^{-r}$ with $r > 1$, and let $c$ be a measurable non-negative cost function upper bounded by $M > 0$. Then, for any $\epsilon > 0$, the following generalization bound holds: Proof For an algebraically mixing sequence, the value of $b$ minimizing the bound of Theorem 10 is $b = (\hat\beta/(r\varphi_0 M))^{-1/(r+1)}$. Using the assumption $r > 1$, we upper bound $m^{1-r}$ by 1. Plugging in this value of $b$ into the bound of Theorem 10 yields the statement of the theorem.
In the case of a zero mixing coefficient ($\varphi = 0$ and $b = 0$), the bounds of Theorem 10 coincide with the i.i.d. stability bound of Bousquet and Elisseeff (2002). In order for the right-hand side of these bounds to converge, we must have $\hat\beta = o(1/\sqrt{m})$ and $\varphi(b) = o(1/\sqrt{m})$. For several general classes of algorithms, $\hat\beta \in O(1/m)$ (Bousquet and Elisseeff, 2002). In the case of algebraically mixing sequences with $r > 1$, as assumed in Theorem 11, $\hat\beta \in O(1/m)$ implies $\varphi(b) = \varphi_0(\hat\beta/(r\varphi_0 M))^{r/(r+1)} \in o(1/\sqrt{m})$. The next section illustrates the application of Theorem 11 to several general classes of algorithms in the case of an algebraically mixing sequence. We make use of the stability analysis found in Bousquet and Elisseeff (2002), which allows us to apply our bounds in the case of kernel regularized algorithms, k-local rules, and relative entropy regularization.
KERNEL REGULARIZED ALGORITHMS
Here we apply our bounds to a family of algorithms based on the minimization of a regularized objective function based on the norm $\|\cdot\|_K$ in a reproducing kernel Hilbert space, where $K$ is a positive definite symmetric kernel:
$$h = \operatorname*{argmin}_{h \in H} \frac{1}{m}\sum_{i=1}^m c(h, z_i) + \lambda\|h\|_K^2. \quad (22)$$
The application of our bound is possible, under some general conditions, since kernel regularized algorithms are stable with $\hat\beta \in O(1/m)$ (Bousquet and Elisseeff, 2002). Here we briefly reproduce the proof of this β̂-stability for the sake of completeness; first we introduce some needed terminology. We will assume that the cost function $c$ is σ-admissible, that is, there exists $\sigma \in \mathbb{R}_+$ such that for any two hypotheses $h, h' \in H$ and for all $z = (x, y) \in X \times Y$, $|c(h, z) - c(h', z)| \le \sigma|h(x) - h'(x)|$. This assumption holds for the quadratic cost and most other cost functions when the hypothesis set and the set of output labels are bounded by some $M \in \mathbb{R}_+$: $\forall h \in H, \forall x \in X, |h(x)| \le M$ and $\forall y \in Y, |y| \le M$. We will also assume that $c$ is differentiable. This assumption is in fact not necessary and all of our results hold without it, but it makes the presentation simpler. We denote by $B_F$ the Bregman divergence associated to a convex function $F$: $B_F(f \| g) = F(f) - F(g) - \langle f - g, \nabla F(g)\rangle$. In what follows, it will be helpful to define $F$ as the objective function of a general regularization based algorithm, $F_S(h) = \widehat{R}_S(h) + \lambda N(h)$, where $\widehat{R}_S$ is the empirical error as measured on the sample $S$, $N : H \to \mathbb{R}_+$ is a regularization function, and $\lambda > 0$ is the usual trade-off parameter. Finally, we shall use the shorthand $\Delta h = h' - h$.
Lemma 12 (Bousquet and Elisseeff (2002)) A kernel regularized learning algorithm (22), with bounded kernel $K(x, x) \le \kappa^2 < \infty$ and σ-admissible cost function, is β̂-stable with coefficient $\hat\beta \le \frac{\sigma^2\kappa^2}{m\lambda}$.
Proof Let $h$ and $h'$ be the minimizers of $F_S$ and $F_{S'}$ respectively, where $S$ and $S'$ differ in the first coordinate (the choice of coordinate is without loss of generality). Since $B_{F_S} = B_{\widehat{R}_S} + \lambda B_N$ and a Bregman divergence is non-negative, the sum $\lambda(B_N(h' \| h) + B_N(h \| h'))$ is bounded by the corresponding sum of divergences of the objectives. By the definition of $h$ and $h'$ as the minimizers of $F_S$ and $F_{S'}$, and by the σ-admissibility of the cost function $c$ and the definition of $S$ and $S'$, this establishes (25).
With $N(\cdot) = \|\cdot\|_K^2$, and by (25) and the reproducing kernel property, $\|\Delta h\|_K \le \frac{\sigma\kappa}{m\lambda}$. Using the σ-admissibility of $c$ and the kernel reproducing property we get $|c(h, z) - c(h', z)| \le \sigma|\Delta h(x)| \le \sigma\kappa\|\Delta h\|_K \le \frac{\sigma^2\kappa^2}{m\lambda}$, which completes the proof.
Three specific instances of kernel regularization algorithms are SVR, for which the cost function is based on the ε-insensitive cost $c(h, z) = \max(0, |h(x) - y| - \epsilon)$; Kernel Ridge Regression (Saunders et al., 1998), for which $c(h, z) = (h(x) - y)^2$; and finally Support Vector Machines with the hinge loss, $c(h, z) = \max(0, 1 - y\,h(x))$. We note that for kernel regularization algorithms, as pointed out in Bousquet and Elisseeff (2002, Lemma 23), a bound on the labels immediately implies a bound on the output of the hypothesis produced by equation (22). We formally state this lemma below.
Lemma 13 Let $h^*$ be the hypothesis returned by the algorithm (22). Then, the output of $h^*$ is bounded as follows: $\forall x \in X,\; |h^*(x)| \le \kappa\sqrt{B(0)/\lambda}$, where $\lambda$ is the regularization parameter, $B(0)$ bounds the cost of the zero hypothesis, and $\kappa^2 \ge K(x, x)$ for all $x \in X$.
Proof Let $F(h) = \frac{1}{m}\sum_{i=1}^m c(h, z_i) + \lambda\|h\|_K^2$ and let $0$ be the zero hypothesis. Then, by definition of $F$ and $h^*$, $\lambda\|h^*\|_K^2 \le F(h^*) \le F(0) \le B(0)$, so that $\|h^*\|_K \le \sqrt{B(0)/\lambda}$. Then, using the reproducing kernel property and the Cauchy-Schwarz inequality, we note $|h^*(x)| = \langle h^*, K(x, \cdot)\rangle \le \|h^*\|_K\sqrt{K(x, x)} \le \kappa\|h^*\|_K$. Combining the two inequalities produces the result.
We note that in Bousquet and Elisseeff (2002) the bound $c(h^*(x), y') \le B(\kappa\sqrt{B(0)/\lambda})$ is also stated. However, when it is later applied, the authors appear to use an incorrect upper bound function $B(\cdot)$, which we remedy in the following.
Plugging these values into the bound of Theorem 11 and setting the right-hand side to δ yields the statement of the corollary.
RELATIVE ENTROPY REGULARIZED ALGORITHMS
In this section we apply Theorem 11 to algorithms that produce a hypothesis $h$ that is a convex combination of base hypotheses $h_\theta \in H$ parameterized by $\theta \in \Theta$. Thus, we wish to learn a weighting function $g \in G : \Theta \to \mathbb{R}$ that is a solution to the optimization $g = \operatorname{argmin}_{g \in G} \frac{1}{m}\sum_{i=1}^m c(g, z_i) + \lambda D(g \| g_0)$ (29), where the cost function $c : G \times Z \to \mathbb{R}$ is defined in terms of a second internal cost function $c' : H \times Z \to \mathbb{R}$ by $c(g, z) = \int_\Theta c'(h_\theta, z)\,g(\theta)\,d\theta$, and where $D$ is the Kullback-Leibler divergence or relative entropy regularizer (with respect to some fixed distribution $g_0$): $D(g \| g_0) = \int_\Theta g(\theta)\log\frac{g(\theta)}{g_0(\theta)}\,d\theta$. It has been shown (Bousquet and Elisseeff, 2002, Theorem 24) that an algorithm satisfying equation 29 with bounded loss $c'(\cdot) \le M$ is β̂-stable with coefficient $\hat\beta \le \frac{M^2}{\lambda m}$. The application of our bounds results in the following corollary.
Discussion
The results presented here are, to the best of our knowledge, the first stability-based generalization bounds for the class of algorithms just studied in a non-i.i.d. scenario. These bounds are non-trivial when the condition $\lambda \gg 1/m^{1/2 - 1/r}$ on the regularization parameter holds for all large values of $m$. This condition coincides with the i.i.d. condition in the limit as $r$ tends to infinity. The next section gives stability-based generalization bounds that hold even in the scenario of β-mixing sequences.
β-Mixing Generalization Bounds
In this section, we prove a stability-based generalization bound that only requires the training sequence to be drawn from a stationary β-mixing distribution. The bound is thus more general and covers the ϕ-mixing case analyzed in the previous section. However, unlike the ϕ-mixing case, the β-mixing bound presented here is not a purely exponential bound. It contains an additive term, which depends on the mixing coefficient.
To simplify the presentation, here we will define the generalization error of $h_S$ by $R(h_S) = \mathbb{E}_z[c(h_S, z)]$. Thus, test samples are assumed independent of $S$. By Lemma 5, this can be assumed modulo the additional term $b\hat\beta + M\beta(b)$, for a cost function bounded by $M$. Note that for any block of points $Z = z_1 \ldots z_k$ drawn independently of $S$, $\mathbb{E}_Z\big[\frac{1}{k}\sum_{z \in Z} c(h_S, z)\big] = R(h_S)$, since, by stationarity, each point $z \in Z$ has the same marginal distribution. For convenience, we will extend the cost function $c$ to blocks as follows: $c(h_S, Z) = \frac{1}{|Z|}\sum_{z \in Z} c(h_S, z)$. With this notation, $R(h_S) = \mathbb{E}_Z[c(h_S, Z)]$ for any block drawn independently of $S$, regardless of the size of $Z$.
To derive a generalization bound for the β-mixing scenario, we will apply McDiarmid's inequality to Φ defined over a sequence of independent blocks. The independent blocks we will be considering are non-symmetric and thus more general than those considered by previous authors (Yu, 1994;Meir, 2000;Lozano et al., 2006).
From a sample $S$ made of a sequence of $m$ points, we construct two sequences of blocks $S_a$ and $S_b$, each containing $\mu$ blocks. Each block in $S_a$ contains $a$ points and each block in $S_b$ contains $b$ points. $S_a$ and $S_b$ form a partitioning of $S$; for any $a, b \in [0, m]$ such that $(a + b)\mu = m$, they are defined precisely as follows: $Z_i^{(a)} = (z_{(i-1)(a+b)+1}, \ldots, z_{(i-1)(a+b)+a})$ and $Z_i^{(b)} = (z_{(i-1)(a+b)+a+1}, \ldots, z_{i(a+b)})$, for all $i \in [1, \mu]$. We shall consider similarly sequences of i.i.d. blocks $\widetilde{Z}_i^{(a)}$ and $\widetilde{Z}_i^{(b)}$, $i \in [1, \mu]$, such that the points within each block are drawn according to the same original β-mixing distribution, and shall denote by $\widetilde{S}_a$ the block sequence $(\widetilde{Z}_1^{(a)}, \ldots, \widetilde{Z}_\mu^{(a)})$. In preparation for the application of McDiarmid's inequality, we give a bound on the expectation of $\Phi(\widetilde{S}_a)$. Since the expectation is taken over a sequence of i.i.d. blocks, this brings us to a situation similar to the i.i.d. scenario analyzed by Bousquet and Elisseeff (2002), with the exception that we are dealing with i.i.d. blocks instead of i.i.d. points.
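A small sketch of this block partitioning follows (our illustration; the helper name is made up): split a length-$m$ sequence into $\mu$ alternating blocks of sizes $a$ and $b$, with $(a + b)\mu = m$.

```python
def partition_blocks(seq, a, b):
    """Split seq into mu blocks of size a (S_a) interleaved with mu gap
    blocks of size b (S_b); requires (a + b) to divide len(seq)."""
    mu, rem = divmod(len(seq), a + b)
    assert rem == 0, "requires (a + b) * mu == m"
    Sa, Sb = [], []
    for i in range(mu):
        start = i * (a + b)
        Sa.append(seq[start:start + a])           # block Z_i^{(a)}
        Sb.append(seq[start + a:start + a + b])   # gap block Z_i^{(b)}
    return Sa, Sb

Sa, Sb = partition_blocks(list(range(12)), a=2, b=2)
print(Sa)   # [[0, 1], [4, 5], [8, 9]]
print(Sb)   # [[2, 3], [6, 7], [10, 11]]
```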
Lemma 16 Let $\widetilde{S}_a$ be an independent block sequence as defined above. Then the following bound holds for the expectation of $|\Phi(\widetilde{S}_a)|$: $\mathbb{E}_{\widetilde{S}_a}[|\Phi(\widetilde{S}_a)|] \le 2a\hat\beta$. Proof Since the blocks $\widetilde{Z}^{(a)}$ are independent, we can replace any one of them with any other block $\widetilde{Z}$ drawn from the same distribution. However, changing the training set also changes the hypothesis, in a limited way. This is shown precisely below, where $\widetilde{S}_a^i$ corresponds to the block sequence $\widetilde{S}_a$ obtained by replacing the $i$th block with $\widetilde{Z}$; the inequality holds through the use of Jensen's inequality, and the β̂-stability of the learning algorithm gives the stated bound. We now relate the non-i.i.d. event $\Pr[\Phi(S) \ge \epsilon]$ to an independent block sequence event to which we can apply McDiarmid's inequality.
Lemma 17 Assume a β̂-stable algorithm. Then, for a sample $S$ drawn from a stationary β-mixing distribution, the following bound holds: $\Pr_S[\Phi(S) \ge \epsilon] \le \Pr_{\widetilde{S}_a}\big[|\Phi(\widetilde{S}_a)| \ge \epsilon'\big] + (\mu - 1)\beta(b)$, where $\epsilon' = \epsilon - \frac{\mu b M}{m} - 2\mu b\hat\beta - a\hat\beta$. Proof The proof consists of first rewriting the event in terms of $S_a$ and $S_b$ and bounding the error on the points in $S_b$ in a trivial manner; this can be afforded since $b$ will eventually be chosen to be small. By β̂-stability and $\mu a/m \le 1$, this last term can be bounded in terms of $\Phi(S_a)$. The right-hand side can then be rewritten in terms of $\Phi(\widetilde{S}_a)$ and bounded in terms of a β-mixing coefficient, which ends the proof of the lemma.
The last two lemmas will help us prove the main result of this section formulated in the following theorem.
Theorem 18 Assume a β̂-stable algorithm and let $\epsilon'$ denote $\epsilon - \frac{\mu b M}{m} - 2\mu b\hat\beta - a\hat\beta$ as in Lemma 17. Then, for any sample $S$ of size $m$ drawn according to a stationary β-mixing distribution, any choice of the parameters $a, b, \mu > 0$ such that $(a + b)\mu = m$, and $\epsilon \ge 0$ such that $\epsilon' \ge 0$, the following generalization bound holds: Proof To prove the statement of the theorem, it suffices to bound the probability term appearing in the right-hand side of Equation 33, $\Pr_{\widetilde{S}_a}\big[|\Phi(\widetilde{S}_a)| - \mathbb{E}[|\Phi(\widetilde{S}_a)|] \ge \epsilon'_0\big]$, which is expressed only in terms of independent blocks. We can therefore apply McDiarmid's inequality by viewing the blocks as i.i.d. "points".
To do so, we must bound the quantity $|\Phi(\widetilde{S}_a)| - |\Phi(\widetilde{S}_a^i)|$, where the sequences $\widetilde{S}_a$ and $\widetilde{S}_a^i$ differ in the $i$th block. We will bound separately the difference between the generalization errors and the empirical errors.⁴ The difference in empirical errors can be bounded using the bound $M$ on the cost function $c$, and the difference in generalization errors can be straightforwardly bounded using β̂-stability.
Using these bounds in conjunction with McDiarmid's inequality yields the stated result. Note that to show the second inequality we make use of Lemma 16 to establish the bound on $\mathbb{E}[|\Phi(\widetilde{S}_a)|]$. Finally, we make use of Lemma 17 to establish the proof.
4. We drop the superscripts on $Z^{(a)}$ since we will not be considering the sequence $S_b$ in what follows.
In order to make use of the bounds, we must select the values of the parameters $b$ and $\mu$ ($a$ is then equal to $m/\mu - b$). There is a trade-off between choosing a large value for $b$, to ensure the mixing term decreases, and choosing a large value of $\mu$, to minimize the remaining terms of the bound. The exact choice of parameters will depend on the type of mixing that is assumed (e.g., algebraic or exponential). In order to choose optimal parameters, it will be useful to view the bound as it holds with high probability, in the following corollary.
Corollary 19
Assume a β̂-stable algorithm and let $\delta'$ denote $\delta - (\mu - 1)\beta(b)$. Then, for any sample $S$ of size $m$ drawn according to a stationary β-mixing distribution, any choice of the parameters $a, b, \mu > 0$ such that $(a + b)\mu = m$, and $\delta \ge 0$ such that $\delta' \ge 0$, the following generalization bound holds with probability at least $(1 - \delta)$: In the case of a fast mixing distribution, it is possible to select the values of the parameters to retrieve a bound as in the i.i.d. case, i.e., $|R(h_S) - \widehat{R}(h_S)| \in O\big(m^{-1/2}\sqrt{\log(1/\delta)}\big)$. In particular, for $\beta(b) \equiv 0$, we can choose $a = 0$, $b = 1$ and $\mu = m$ to retrieve the i.i.d. bound of Bousquet and Elisseeff (2001).
In the following, we will examine slower mixing algebraic β-mixing distributions, which are thus not close to the i.i.d. scenario. For algebraic mixing the mixing parameter is defined as $\beta(b) = b^{-r}$. In that case, we wish to minimize the following function in terms of $\mu$ and $b$:
$$s(\mu, b) = \frac{\mu}{b^r} + \frac{m^{3/2}\hat\beta}{\mu} + \frac{m^{1/2}}{\mu} + \mu b\Big(\frac{1}{m} + \hat\beta\Big).$$
The first term of the function captures the condition $\delta > (\mu + 1)\beta(b) \approx \mu/b^r$ and the remaining terms capture the shape of the bound in Corollary 19. Setting the derivative with respect to each variable $\mu$ and $b$ to zero and solving for each parameter yields closed-form expressions for $\mu$ and $b$ in terms of $\gamma = (m^{-1} + \hat\beta)$ and the constant $C_r = r^{\frac{1}{r+1}}$ defined by the parameter $r$. Now, assuming $\hat\beta \in O(m^{-\alpha})$ for some $0 < \alpha \le 1$, we analyze the convergence behavior of Corollary 19. The optimizing values of $b$ and $\mu$ grow polynomially with $m$, and the condition $\delta' > 0$ is equivalent to $\delta > (\mu - 1)\beta(b) \in O\big(m^{\frac{3}{4} - \alpha(1 - \frac{1}{2(r+1)})}\big)$.
In order for the right-hand side of the inequality to converge, it must be the case that $\alpha > \frac{3r+3}{4r+2}$. In particular, if $\alpha = 1$, as we have shown is the case for several algorithms in Section 3.4, then it suffices that $r > 1$.
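Rather than the closed-form expressions, the optimizing $\mu$ and $b$ can also be found numerically. The following sketch (illustrative parameter values, and treating $\mu, b$ as continuous) minimizes the trade-off function $s(\mu, b)$ given above.

```python
import numpy as np
from scipy.optimize import minimize

m, r = 10_000, 2.0
beta_hat = 1.0 / m                           # assumes beta-hat in O(1/m)

def s(v):
    mu, b = v
    return (mu / b**r + m**1.5 * beta_hat / mu + m**0.5 / mu
            + mu * b * (1.0 / m + beta_hat))

res = minimize(s, x0=[m**0.5, m**0.25], bounds=[(1, m), (1, m)])
mu_star, b_star = res.x
print(mu_star, b_star, s(res.x))
```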
Finally, in order to see how the bound itself converges, we study the asymptotic behavior of the terms of Equation 34 (without the first term, which corresponds to the quantity already analyzed in Equation 37).
This expression can be further simplified by noticing that (b) ≤ (a) for all $0 < \alpha \le 1$ (with equality at $\alpha = 1$). Thus, both the bound and the condition on δ decrease asymptotically as the term in (a), resulting in the following corollary.
Corollary 20 Assume a β̂-stable algorithm with $\hat\beta \in O(m^{-1})$ and let $\delta' = \delta - m^{\frac{1}{2(r+1)} - \frac{1}{4}}$. Then, for any sample $S$ of size $m$ drawn according to a stationary algebraically β-mixing distribution, and $\delta \ge 0$ such that $\delta' \ge 0$, the following generalization bound holds with probability at least $(1 - \delta)$:
Conclusion
We presented stability bounds for both φ-mixing and β-mixing stationary sequences. Our bounds apply to large classes of algorithms, including common algorithms such as SVR, KRR, and SVMs, and extend existing i.i.d. stability bounds to non-i.i.d. scenarios. Since they are algorithm-specific, these bounds can often be tighter than other generalization bounds based on general complexity measures for families of hypotheses. As in the i.i.d. case, weaker notions of stability might help further improve and refine these bounds. Our bounds can be used to analyze the properties of stable algorithms when used in the non-i.i.d. settings studied. But, more importantly, they can serve as a tool for the design of novel and accurate learning algorithms. Of course, some mixing properties of the distributions need to be known to take advantage of the information supplied by our generalization bounds. In some problems, it is possible to estimate the shape of the mixing coefficients. This should help in devising such algorithms. | 2018-04-03T04:07:08.683Z | 2008-11-11T00:00:00.000 | {
"year": 2008,
"sha1": "b9ea52b68288edd36df3d6056bd024b441690768",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b518f3bb1e21a47068e36f5b1452d7f638961244",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
233348070 | pes2o/s2orc | v3-fos-license | A POTENTIAL ZOONOTIC PARASITE: CRYPTOSPORIDIUM PARVUM TRANSMISSION IN RATS, PIGS AND HUMANS IN WEST LOMBOK, INDONESIA
Background: Cryptosporidium is a neglected zoonotic disease, but with the expansion of human communities into animal environments, its incidence is increasing. Animals such as rats and pigs can act as intermediate hosts and transmit Cryptosporidium to humans due to their proximity. Transmission occurs because of the ability of Cryptosporidium to survive in any new host. The research aimed to identify and describe the transmission of Cryptosporidium from animals to humans. Materials and Methods: This research was a cross-sectional study, and samples were collected from 84 rats caught in residential areas, 205 pigs, and 438 humans in West Lombok. Fecal samples were examined using polymerase chain reaction (PCR) and sequencing to detect the presence of Cryptosporidium and to identify the genetic similarity of the parasites found in rats and pigs with those that infect humans. Results: The PCR results found Cryptosporidium parvum in 4.76% (4/84) of rats, 6.34% (13/205) of pigs, and 0.91% (4/438) of humans. The sequencing results showed genetic kinship of C. parvum in rats, pigs, and humans. Based on sequence confirmation from GenBank, with alignments edited using ClustalW in MEGA X software, there are genetic similarities between Cryptosporidium isolates from West Lombok and C. suis isolates of cattle from Uganda and C. suis isolates of pigs from Slovakia. Conclusion: There are genetic similarities of Cryptosporidium in animals and humans, so the public health programs in those contaminated areas must receive priority attention to prevent further transmission of these potentially fatal parasites.
Introduction
Zoonotic diseases are increasing, especially in developing countries, and are becoming neglected diseases. A survey of 1,407 human pathogens showed that 58% are zoonotic and that 75% of emerging infectious diseases are zoonotic, directly resulting from human and animal interactions (Woolhouse and Sequeria, 2005; Austin, 2021). Factors contributing to the increase of these diseases are population growth and human activities that have converted forest environments into human environments. Natural habitats that have been repurposed as agricultural land, plantations, and shelter cause humans to live side by side with animals. Rats are reservoirs of infectious diseases because of their habitat and their habit of looking for food in dirty places, so the diseases they carry can harm humans. Rodent-borne diseases have increased as habitats changed and rats moved closer to the human environment (Thiermann, 2004; Woolhouse and Sequeria, 2005; Morand, 2015; Sun et al., 2018).
Rats can spread and transmit various infectious diseases to humans and other animals. Rats can carry 61 types of infectious agents, including 20 types of viruses, 19 types of bacteria, and 22 types of parasites, including Cryptosporidium spp. The spread of parasitic zoonotic diseases from rats in the human environment requires careful investigation of the source of transmission. The proximity of rats to the human environment can be a risk factor for transmission of parasites from rats to humans and animals (Perec-Matysiak et al., 2015; Zahedi et al., 2016; Azzam, 2017; Krijger, 2020).
The presence of poorly organized pig and cattle farms has led to the rapid spread of zoonotic parasites. Pollution of soil, water, and air around human housing by zoonotic parasites is a result of unsafe animal rearing. Research reports show that farming in a residential environment increases the incidence of disease in animals and has the potential to spread zoonotic diseases (Mosites et al., 2016). Baqer et al. (2018) showed contamination by Cryptosporidium oocysts in a river adjacent to a cattle farm in Baghdad.
Cryptosporidium parvum is a zoonotic parasite that can cause gastrointestinal disorders with symptoms such as diarrhea. Diarrhea increases morbidity and mortality rates, especially in children, and is a cause of death for four million people in developing countries each year (Badry et al., 2014; Verkerke et al., 2014; Dupont, 2016; Yee et al., 2018). Prolonged infection results in dehydration and weight loss. This condition can be severe in children or people with low immunity. Pathological changes caused by these parasitic infections include epithelial damage in the form of villous atrophy, mitochondrial changes, and increased lysosomal activity in infected cells (Ridley, 2012; Bogitsh et al., 2013).
Human Cryptosporidium infection often occurs in areas with poor hygiene, and reported contact between humans and rats suggests the movement of Cryptosporidium into human populations. Humans are often infected with C. parvum and C. hominis; cattle with C. parvum, C. bovis, C. ryanae, and C. andersoni; while sheep and goats are infected with C. parvum, C. ubiquitum, and C. xiaoi. Most of the species in these animals are also found in humans. So far, more than 20 species of Cryptosporidium have been identified in humans. Sequencing methods using the small subunit rRNA gene (SSU rRNA) showed the presence of C. parvum and C. muris in rats in China and detected C. parvum isolate 11dA15G1, identified using the gp60 gene, which can infect humans (Zhao et al., 2015; Beser et al., 2020).
The research was conducted on the island of Lombok, where pigs are often left free to find their food around houses or are fed leftovers. Poor pig farming practices facilitate the transmission of parasites between animals and humans. Zoonotic transmission on the island of Lombok has never been reported. This research concentrated on the transmission of zoonotic Cryptosporidium from animals to humans in West Lombok, Indonesia.
Materials and Methods
Study design
The study used a comparative cross-sectional design. The duration of the study was six months (January 2019 to June 2019).
Samples
Fecal sampling locations were selected based on areas with pig farms. Residents who volunteered and completed informed consent forms provided the human samples. Stool samples were collected from 84 rats caught and sacrificed in residential areas, 205 pigs, and 438 humans. Samples were taken at 191 locations in West Lombok, Indonesia. The freshness of the stool samples was maintained by the addition of a 5% potassium bichromate preservative.
Informed Consent
Informed consent was obtained from all individual participants included in this study.
Data collection
Data were gathered through interviews and laboratory examination of Cryptosporidium DNA in humans, pigs, and rats using PCR and sequencing methods (Munshi, 2012).
Laboratory Methods
DNA Extraction
DNA was isolated using the QIAamp Fast DNA Stool Mini Kit (Qiagen, Germany) procedure. A stool sample of 180-220 mg was placed in a 2 mL tube with 1 mL InhibitEX Buffer and vortexed until homogeneous. Samples were lysed using a Mini-Beadbeater for 5 minutes, followed by a freeze-thaw process: incubation at −80 °C for 5 minutes and then at 60 °C in a water bath for 5 minutes, repeated four times. To separate the pellet, the sample was centrifuged for 1 minute at 10,000 rpm. A 600 µL aliquot of supernatant was pipetted off, 25 µL proteinase K and 600 µL Buffer AL were added, and the mixture was vortexed for 15 seconds and incubated for 10 minutes at 70 °C. Next, 600 µL of 96-100% ethanol was added to the lysate and vortexed. The lysate was then loaded onto a spin column and centrifuged at 10,000 rpm for 1 minute, and the filtrate was discarded. Then, 500 µL Buffer AW1 was added to the spin column in a new collection tube, centrifuged at 10,000 rpm for 1 minute, and the filtrate discarded. Next, 500 µL Buffer AW2 was added in a new collection tube and centrifuged at 10,000 rpm for 3 minutes. The spin column was then transferred to another new collection tube and centrifuged at 10,000 rpm for 3 minutes. Finally, the spin column was transferred to a 1.5 mL tube, 100-200 µL Buffer ATE was added to the spin column, incubated for 1 minute, and centrifuged at 10,000 rpm for 1 minute. The spin column was discarded, and the tube containing DNA was stored at −20 °C.
PCR Amplification and Detection
Polymerase chain reaction (PCR) was performed with the Bioline mix, using 1 µL DNA template, 7 µL ultrapure water, 10 µL master mix and 1 µL primers. The primers targeted the Cryptosporidium parvum 18S rRNA gene: F: 5'-TAAACGGTAGGGTATTGGCCT-3'; R: 5'-CAGACTTGCCCTCCAATTGATA-3'. The PCR programme ran for 35 cycles, with initial activation at 95 °C for 5 minutes, denaturation at 95 °C for 30 seconds, annealing at 59 °C for 45 seconds, extension at 72 °C for 3 minutes, and a final extension at 72 °C for 10 minutes. The products were examined by electrophoresis on 2% agarose, with a Cryptosporidium parvum band expected at 240 bp (Zebardast et al., 2016).
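As a rough sanity check on the primer pair against the 59 °C annealing temperature reported above, the following hedged Python sketch computes GC content and nearest-neighbour melting temperatures with Biopython. The primer sequences are taken from the text; the salt and primer concentrations passed to the model are illustrative assumptions, not values from this study.

```python
# Quick check of the 18S rRNA primer pair (sequences from the text above).
# Requires a recent Biopython; Na+/primer concentrations are illustrative only.
from Bio.Seq import Seq
from Bio.SeqUtils import gc_fraction
from Bio.SeqUtils import MeltingTemp as mt

primers = {
    "forward": Seq("TAAACGGTAGGGTATTGGCCT"),
    "reverse": Seq("CAGACTTGCCCTCCAATTGATA"),
}

for name, seq in primers.items():
    tm = mt.Tm_NN(seq, Na=50, dnac1=250, dnac2=0)  # nearest-neighbour model
    print(f"{name}: {seq} | GC = {gc_fraction(seq):.0%} | Tm = {tm:.1f} C")
```

Under these assumptions, both primers melt near or slightly above the annealing temperature, which is consistent with the cycling conditions described.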
Sequencing
Sequencing was performed on an Applied Biosystems 3500 Genetic Analyzer with the BigDye Terminator kit. Cryptosporidium DNA sequences from rat, pig and human isolates were analyzed with the online BLAST program (NCBI), while GenBank sequences with genetic similarity to the Lombok isolates were aligned and edited using ClustalW in MEGA X software. Phylogenetic trees were constructed with the neighbor-joining method (Kumar et al., 2018).
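For readers who wish to reproduce the distance-based tree-building step outside MEGA X, the hedged Python sketch below performs an equivalent neighbour-joining construction with Biopython. The alignment file name is a placeholder assumption standing in for a ClustalW alignment of the isolates plus GenBank references.

```python
# Neighbour-joining tree from an existing alignment (Biopython sketch).
# "lombok_18S.aln" is a hypothetical ClustalW alignment file, not data
# distributed with this study.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("lombok_18S.aln", "clustal")
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbour-joining
Phylo.draw_ascii(tree)
```

Bootstrap support values, as used in this study, would still need to be added separately (e.g. by resampling alignment columns and rebuilding the tree).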
Results
PCR results showed Cryptosporidium infection in 4.76% (4/84) of rats, 6.34% (13/205) of pigs and 0.91% (4/438) of humans. Electrophoresis results on 2% agarose are shown in Figure 1. Cryptosporidium parvum from rat, pig and human isolates in Lombok was spread across several districts; the distribution is shown in Figure 2. Genetic kinship analysis used sequence alignment against GenBank entries. A bootstrap consensus tree was inferred from 1,000 replications, and evolutionary distances were calculated with the Kimura 2-parameter method. Phylogenetic analysis used MEGA X software (Kimura, 1980; Felsenstein, 1985; Saitou and Nei, 1987). Figure 3 shows the results. The genetic kinship of the rat, pig and human Cryptosporidium isolates was confirmed by pairwise distance calculations against GenBank (NCBI) DNA sequences; the genetic distances are shown in Table 2. The DNA sequences of the Cryptosporidium parvum isolates from rats, pigs, and humans lie within a tight genetic range of 0.0104-0.3112 from the GenBank sequences. The GenBank sequences comprise: C. parvum human isolates from Japan and Iran; C. hominis isolates from a Macaca in China, a human in Iraq, and an Estonian Homo sapiens; and C. suis pig isolates from China, Slovakia and Denmark.
Discussion
Cryptosporidium was identified in rats, pigs and humans in West Lombok in 4.76% (4/84), 6.34% (13/205) and 0.91% (4/438) of samples, respectively, by PCR at 240 bp. Cryptosporidium has also been identified in other countries using PCR analysis (e.g., El-Bakri et al.). The similarity of the DNA sequences of the Cryptosporidium parvum isolates from rats, pigs and humans is shown by the MEGA X program. These results identify genetic similarities among the C. parvum strains that infect rats, pigs, and humans. This genetic similarity is relevant to the emergence of zoonotic C. parvum infections passing from rats and pigs to humans. Molecular analysis of Cryptosporidium parvum uses the 18S rRNA gene because it is a reference gene often used as an internal control in gene-expression analysis. Reference genes are genes whose expression is stable, not induced by particular treatments, abundant in all tissues, and maintained throughout the stages of eukaryotic development. The 18S rRNA gene encodes the 18S ribosomal RNA, a constituent of the eukaryotic small ribosomal subunit involved in the recognition and hybridization of the mRNA translated in the ribosome (Thellin et al., 1999; Dresios et al., 2006). The 18S rRNA gene has also been used to detect Cryptosporidium in porcupine isolates in the UK, where it was detected in 8% (9/111) of samples, and it showed good results for detecting Cryptosporidium in pigs in Australia with the Next Generation Sequencing (NGS) method (Paparini et al., 2015; Sangster et al., 2016). The Cryptosporidium parvum isolates of rats, pigs and humans show genetic similarity to DNA sequences from NCBI (max score 303, total score 303, query cover 97%, E-value 53-73 and percent identity 92.71%). Among the species compared, the C. parvum gene gave the highest scores. Phylogenetic kinship analysis used the neighbor-joining method with 1,000 bootstrap replicates. The genetic relationship between the rat and pig C. parvum isolates in West Lombok is monophyletic, and the human C. parvum isolates of West Lombok are synapomorphic with the rat and pig isolates. The pairwise distances between the rat, pig and human Cryptosporidium isolates in West Lombok were 0.000-0.3170, indicating a close genetic relationship between C. parvum in rats, pigs, and humans in West Lombok.
The C. parvum identified in humans in West Lombok appears to come from rats and pigs. C. parvum found in West Lombok is a zoonotic parasite, and transmission of Cryptosporidium infection from rats and pigs to humans has occurred. The presence of C. parvum in rats and pigs constitutes a new source of C. parvum transmission in West Lombok. Zoonotic Cryptosporidium was also reported by Deng et al. (2020), who identified C. parvum in 8.6% (27/314) of pet red squirrels sold in Sichuan, China, by phylogenetic analysis. The presence of C. parvum in red squirrels is suspected to be a source of C. parvum transmission to humans, causing diarrhea. In the phylogenetic analysis, the C. parvum isolates of rats, pigs and humans fall into one genetic kinship group with isolates KP704556.1 (C. suis, pig, Slovakia) and MK301308.1 (C. suis, cattle, Uganda) (Gordon, 2003).
Pairwise distance analysis showed a genetic relationship between the Cryptosporidium of rats, pigs, and humans in West Lombok and GenBank Cryptosporidium isolates from Slovakia, with genetic distances of 0.1060-0.3030, and from pigs in Uganda, with genetic distances of 0.1060-0.2710. Pairwise distance analysis quantifies transition and transversion substitutions from the number of nucleotide differences per base pair. Species separated by smaller genetic distances have a stronger genetic relationship (Dharmayanti, 2018).
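To make the distance calculation concrete, the sketch below implements the Kimura 2-parameter distance used in this study from the observed proportions of transitions (P) and transversions (Q). The two example sequences are invented placeholders, not isolates from this study.

```python
# Kimura 2-parameter (K2P) distance between two aligned sequences.
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]        # skip gaps/ambiguities
    n = len(pairs)
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    p, q = transitions / n, transversions / n
    # K = -1/2 * ln((1 - 2P - Q) * sqrt(1 - 2Q))
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

# Toy example with made-up sequences (not data from this study):
print(round(k2p_distance("ACGTACGTACGTACGTACGT",
                         "ACGTACGAACGTACGCACGT"), 4))   # ~0.1076
```

A full distance matrix for the tree is obtained by applying this function to every pair of aligned sequences.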
Cryptosporidium parvum infection in West Lombok may be derived from rats and pigs, and alignment with GenBank isolates showed a genetic relationship with C. parvum isolates from rats and pigs. These results concur with the study by Zou et al. (2017), who identified Cryptosporidium on pig farms in China using the 18S rRNA gene and found infection rates of 8-23%, which can act as a zoonotic source for humans. Utsi et al. (2016) found that a cryptosporidiosis outbreak in America was caused by visitors infected with Cryptosporidium after returning from a petting farm.
The volunteers who provided stool samples in this study lived close to the pig farms. Pig fecal litter was strewn in the yards and sometimes thrown into gardens or the river. The pigs' fodder came from recycled food scraps from residents.
The presence of rats around pens and houses, and the movement of rats from pigpens to people's homes, can carry and transmit diseases from rats to humans or to other animals. The C. parvum identified in rats, pigs, and humans in West Lombok indicates a chain of Cryptosporidium transmission among rats, pigs, and humans. The C. parvum that infects rats, pigs and humans is a zoonotic intestinal parasite and may contribute to the incidence of diarrhea in West Lombok. A similar situation occurred in Korea, where C. parvum was identified in 0.37% (32/8,571) of hospital diarrhea samples (Ma et al., 2019). Zoonotic cryptosporidiosis has also occurred in Madagascar, infecting rats, pigs and humans around the national park (Bodager et al., 2015).
Cryptosporidium parvum infection in West Lombok can originate from rats and pigs. The zoonoses occur because of risk factors involving environmental hygiene, pig rearing, and the presence of rats. This accords with Innes et al. (2020), who note that the presence of pigs, cows, horses and wild animals such as rats around residential areas carries a high risk of environmental contamination by animal feces containing Cryptosporidium oocysts. Contamination of the environment by animal feces facilitates transmission of Cryptosporidium parvum to humans. Feng et al. (2018) identified transmission of Cryptosporidium from goats to cattle that can even infect humans. Most Cryptosporidium species and genotypes show host specificity, such that one to four Cryptosporidium species can be found in a single host (Widmer et al., 2020). The ability of the parasite to adapt within a new host, together with geographical pressures, drives the formation of unique subtypes and phenotypic properties, especially those found in humans (C. parvum and C. hominis) (Feng et al., 2018). The intensity of transmission, genetic diversity, and genetic recombination shape the genetic tree structure of Cryptosporidium. Molecular research on Cryptosporidium spp. can help increase our understanding of the various patterns of Cryptosporidium transmission to new hosts (Thompson, 2013).
Conclusions
There are genetic similarities among the Cryptosporidium spp. that infect rats, pigs and humans in the West Lombok Regency, West Nusa Tenggara Province. Based on this parasitic genetic kinship, zoonotic transmission of Cryptosporidium among rats, pigs and humans in the regency is plausible. These results argue that public health programs in contaminated areas should receive priority attention to prevent further transmission of this potentially fatal parasite. More research is needed to determine which risk factors contribute to these zoonoses.
Conflict of Interest:
The authors declare that they have no conflict of interest. | 2021-04-23T05:17:31.448Z | 2021-03-18T00:00:00.000 | {
"year": 2021,
"sha1": "67f1375dae0ecf0daa0e21470a6aab6b718411ab",
"oa_license": "CCBY",
"oa_url": "https://journals.athmsi.org/index.php/AJID/article/download/5756/3263",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "67f1375dae0ecf0daa0e21470a6aab6b718411ab",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11808925 | pes2o/s2orc | v3-fos-license | Genetic variations in the TERT and CLPTM1L gene region and gastrointestinal stromal tumors risk
Recent studies have suggested that polymorphisms in the TERT and CLPTM1L region are associated with carcinogenesis of many distinct cancer types, including gastrointestinal cancers. However, the contribution of polymorphisms in the TERT and CLPTM1L gene region to gastrointestinal stromal tumor (GIST) risk is still unknown. We tested six tagSNPs in the TERT and CLPTM1L region for association with GIST risk, using a population-based, two-stage, case-control study in 2,000 subjects. Functional validation was conducted to confirm our findings for TERT rs2736098 and explore its influence on relative telomere length (RTL) in GIST cells. The variant rs2736098 was significantly associated with increased risk of GIST (per-allele OR = 1.29, 95% CI: 1.14-1.47, P = 7.03 × 10−5). The difference remained significant after Bonferroni correction (P = 7.03 × 10−5 * 6 = 4.2 × 10−4). Real-time PCR showed that carriers of genotype CC have the longest RTL, followed by carriers of genotype CT, while carriers of genotype TT have the shortest RTL in GIST tissues (P < 0.001). Our data provide evidence implicating the TERT rs2736098 polymorphism as a novel susceptibility factor for GIST risk.
INTRODUCTION
Gastrointestinal stromal tumors (GISTs) are the most common mesenchymal tumors in the human digestive tract, representing 1-3% of gastrointestinal malignancies [1,2]. The histogenesis, classification, diagnostic criteria, and biological behavior of GISTs have been the subject of much controversy [2]. Put simply, they are typically defined as tumors whose behavior is driven by mutations in the KIT gene or PDGFRA gene [3][4][5]. In a subset of sporadic GISTs, the mechanism of activation is an alteration of the structure of the receptor's extracellular or cytoplasmic domains caused by somatic mutations of the c-kit gene, which leads to dimerization and autophosphorylation of KIT with subsequent activation of signal transduction cascades in the absence of ligand binding [6][7][8]. Inhibition of KIT activity by a specific tyrosine kinase inhibitor, imatinib, often results in dramatic clinical responses [8,9].
In contrast to GISTs associated with somatic mutations, little is known about inherited germline genetic risk factors. The rarity of the disease makes it difficult to conduct population-based genetic research and unbiased assessments of non-genetic risk factors in any study population. An evaluation of the genetic determinants of GISTs is much more feasible, as the germline DNA of individuals does not change over time or in response to disease processes. Recently, O'Brien et al [10] evaluated, for the first time, the associations between candidate SNPs and several common types of acquired KIT and PDGFRA somatic mutations in a case-only study. However, no other research groups have published such evaluations, let alone germline genetic associations with GISTs.
The TERT and CLPTM1L genes have been identified as being associated with carcinogenesis in at least 15 distinct cancers [11][12][13][14]. TERT promoter mutations have also been detected in GIST tissues [15]. Recently, six tagSNPs in the TERT and CLPTM1L region were identified (five SNPs in the TERT gene: rs7726159, rs2853677, rs2736098, rs13172201, rs10069690; one SNP in the CLPTM1L gene: rs451360), all of which influence the risk of multiple cancers, including several gastrointestinal cancers [16]. Given this evidence, our main objective was to identify whether these six tagSNPs are potentially related to GIST carcinogenesis. We therefore conducted this first large population-based, two-stage, case-control study of GIST risk.
RESULTS
A total of 2,000 subjects were included in the current study; 600 were genotyped in Stage I and 1,400 in Stage II (Table 1). Subjects in the two genotyping stages were generally comparable. As expected, GIST cases differed from controls with regard to known cancer risk factors: cases were more likely to have higher education, body mass index (BMI) and waist-to-hip ratio (WHR), and were more likely to be smokers and drinkers. Most of the GISTs were located in the stomach (63.8%) or small intestine (32.5%).
A total of six tagSNPs in the TERT and CLPTM1L region were included in the current study; of these, five SNPs were located in the TERT gene (rs7726159, rs2853677, rs2736098, rs13172201, rs10069690) and one SNP in the neighboring CLPTM1L gene (rs451360). None of the six polymorphisms deviated from HWE. We first evaluated the six tagSNPs in Stage I with 300 cases and 300 controls. The estimates of effect on GIST risk in Stage I, adjusted for age and gender, are shown in Table 2. Three SNPs (rs7726159, rs10069690, and rs2736098) showed significant associations with GIST risk. These were then evaluated in Stage II with an additional 700 cases and 700 controls (Table 3). One SNP (rs2736098) was replicated with significance (P = 7.03 × 10−5). The difference remained significant after Bonferroni correction (P = 7.03 × 10−5 * 6 = 4.2 × 10−4). Compared with individuals with the CC genotype, the age- and sex-adjusted OR for developing GIST was 1.49 (95% CI 1.18-1.88) among those with the TT genotype. Under the log-additive model, each additional copy of the minor allele T was associated with a 1.29-fold increased risk of GIST (OR = 1.29, 95% CI: 1.14-1.47, P = 7.03 × 10−5).
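For illustration, the sketch below computes a per-allele odds ratio with a Wald 95% confidence interval and the Bonferroni-adjusted significance threshold for six tests. The allele counts are invented placeholders, not the study's data; the study itself used logistic regression adjusted for age and gender.

```python
# Per-allele odds ratio with Wald 95% CI, plus Bonferroni threshold.
# All counts below are hypothetical placeholders.
import math

def allele_or(case_minor, case_major, ctrl_minor, ctrl_major):
    odds_ratio = (case_minor * ctrl_major) / (case_major * ctrl_minor)
    se = math.sqrt(1/case_minor + 1/case_major + 1/ctrl_minor + 1/ctrl_major)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se)
    return odds_ratio, lo, hi

or_, lo, hi = allele_or(820, 1180, 680, 1320)   # hypothetical 2x2 allele table
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"Bonferroni-adjusted alpha for 6 SNPs: {0.05 / 6:.4f}")
```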
The robustness of these findings was evaluated by sensitivity analyses. First, additional adjustments for education, BMI, WHR, physical activity, drinking and smoking were conducted; the results did not change materially. To validate our findings for TERT rs2736098 and explore its influence on RTL, we used real-time PCR to measure the RTL in a random sample of 150 GIST cases. In GIST tissues, we found a significant difference in RTL among the CC, CT, and TT genotypes (Figure 1, P < 0.001).
DISCUSSION
To the best of our knowledge, this is the first report to evaluate the association of six tagSNPs in the TERT and CLPTM1L region with GIST carcinogenesis. In this large population-based, two-stage, case-control study, we identified that the variant rs2736098 was significantly associated with increased risk of GIST, especially GISTs located in the stomach. To validate this finding, real-time PCR showed that the RTL in GIST cells was significantly shorter than that in adjacent normal tissues. Moreover, in GIST tissues, carriers of genotype CC had the longest RTL, followed by carriers of genotype CT, while carriers of genotype TT had the shortest RTL. These findings implicate the rs2736098 polymorphism as a novel susceptibility factor for GIST risk.
GISTs are the most common soft tissue sarcomas of the gastrointestinal tract, resulting most commonly from activating mutations of KIT or platelet-derived growth factor receptor alpha (PDGFRA) [19][20][21]. However, they have distinct genetic backgrounds and gene expression patterns according to localization, genotype and aggressiveness [22,23]. Chr5p15.33 harbors a unique cancer susceptibility region that contains at least two plausible candidate genes: TERT and CLPTM1L [24][25][26][27]. The TERT gene has been mapped to chromosome 5p15.33 and consists of 16 exons and 15 introns spanning 35 kb of genomic DNA [28]. It encodes the catalytic subunit of telomerase reverse transcriptase, which, in combination with an RNA template (TERC), adds nucleotide repeats to chromosome ends [29,30]. The CLPTM1L gene, also known as cisplatin resistance-related protein 9 (CRR9p), encodes a protein that is overexpressed in lung and pancreatic cancer, promotes growth and survival, and is required for KRAS-driven lung cancer [31,32]. It confers resistance to apoptosis caused by genotoxic agents, in association with up-regulation of the anti-apoptotic protein Bcl-xL [33]. Studies indicate that the TERT-CLPTM1L region may harbor multiple elements with the capacity to influence molecular phenotypes in cancer development [16,34]. It is therefore plausible that the interplay among risk variants, multiple biological mechanisms and the attributed genes influences various cancers, including GISTs.
Although a few studies have investigated the associations between somatic mutations of TERT and GIST risk [15,35,36], none has evaluated germline genetic associations with GISTs. In the current study, using a two-stage, case-control design, we identified that rs2736098 contributes to increased GIST risk and shorter RTL. This finding is consistent with many previous epidemiological studies of different cancer types, including lung cancer, bladder cancer, pancreatic cancer, gastrointestinal cancers, breast cancer, ovarian cancer, and others [16,[37][38][39]. All of the above evidence implicates the TERT rs2736098 polymorphism as a novel susceptibility factor for carcinogenesis. Considering that rs2736098 is a tagSNP, it is possible that the association seen with it is due to one of the linked polymorphisms. We list detailed information for these 11 linked polymorphisms of rs2736098 in Table 4. Among them, 9 are intergenic SNPs and 2 are intron variants. Although rs2736098 is a synonymous SNP, our results showed that carriers of genotype CC have the longest RTL, followed by carriers of genotype CT, while carriers of genotype TT have the shortest RTL in GIST tissues (P < 0.001). This evidence nonetheless points to the functionality of SNP rs2736098. Further fine-mapping and sequencing studies may be helpful for validating our conclusions. Strengths of the current study include a large population, a two-stage genotyping design to minimize type I error, and good coverage of the genetic variation in the TERT and CLPTM1L region. This study also had several limitations. First, selection bias might have occurred through the selection of control subjects when sampling is not random within the subpopulations of cancer and cancer-free subjects, although we tried our best to control for this throughout the study; moreover, since this study was restricted to a Chinese Han population, it is uncertain whether our findings can be replicated in other ethnic groups. Second, in spite of the relatively large sample size, the power to elucidate gene-environment interactions was limited because of the small magnitude of the overall association.
In summary, our findings regarding genetic variation in the TERT and CLPTM1L region and GIST risk add to the growing body of literature suggesting the importance of this genetic region to cancer development. Further research is needed to understand how changes in telomere length over time may influence GIST carcinogenesis in a prospective setting, and how this interacts with the p53 pathway.
Subjects
The methods were carried out in accordance with the approved guidelines, and all experimental protocols were approved by the institutional review boards of Liaoning Cancer Hospital and Shengjing Hospital. After giving written consent, participants provided demographic information using a standard interviewer-administered questionnaire. Stage I included 300 GIST cases and 300 controls, while Stage II included 700 GIST cases and 700 controls; in total, 1,000 cases and 1,000 controls were included in this study. Five ml of peripheral blood was obtained for DNA extraction.
SNP selection and genotyping
In total, five SNPs in the TERT gene (rs7726159, rs2853677, rs2736098, rs13172201, rs10069690) and one SNP in the neighboring CLPTM1L gene (rs451360) were selected for this study (details in Table 5 and Supplementary Table 1), based on the previous literature [16]. 5′-Nuclease TaqMan® assays were used to genotype the polymorphisms in 96-well plates on an ABI PRISM 7900HT Sequence Detection System (Applied Biosystems, Foster City, CA, USA). The primers and probes for the TaqMan® assays were designed using Primer Express Oligo Design software v2.0 (ABI PRISM) and are available upon request as TaqMan® Pre-Designed SNP Genotyping Assays.
Samples from matched case-control pairs were handled identically and genotyped in the same batch in a blinded fashion. All included SNPs had concordance rates of 100% among duplicates within each platform, and laboratory personnel were blinded to the case-control and QC status of all samples.
Relative telomere length (RTL) determination
The RTL of GIST cells was measured by quantitative real-time polymerase chain reaction (PCR), as described earlier, in a random sample of 150 GIST cases [17]. In short, telomeres and a single-copy gene (β2-globin) were amplified alongside an internal reference control cell line (CCRF-CEM), to which all samples were compared. The ΔΔCt method was used to calculate RTL values, and a standard curve was included in each PCR run to monitor PCR efficiency.
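As a minimal illustration of the ΔΔCt calculation described above, the Python sketch below converts telomere and single-copy-gene Ct values into a telomere/single-copy (T/S) ratio relative to the reference cell line. All Ct values are invented placeholders, not measurements from this study.

```python
# Relative telomere length (T/S ratio) via the delta-delta-Ct method.
# dCt = Ct(telomere) - Ct(single-copy gene); RTL = 2 ** -(dCt_sample - dCt_ref)
def rtl(ct_telo_sample, ct_scg_sample, ct_telo_ref, ct_scg_ref):
    d_ct_sample = ct_telo_sample - ct_scg_sample
    d_ct_ref = ct_telo_ref - ct_scg_ref
    return 2 ** -(d_ct_sample - d_ct_ref)

# Hypothetical run: one tumour sample vs the CCRF-CEM reference line
print(f"RTL = {rtl(16.2, 21.5, 15.0, 21.0):.2f}")   # ~0.62 in this toy case
```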
Statistical analyses
Hardy-Weinberg equilibrium (HWE) was tested by comparing observed and expected genotype frequencies among controls (χ2 test). Odds ratios (ORs) and corresponding 95% confidence intervals (CIs) were determined by logistic regression analyses using models that included adjustment for age and gender. Linkage disequilibrium (LD) was assessed by Haploview [18]. Differences in RTL among groups were compared by one-way ANOVA. All statistical analyses were conducted with SAS version 9.2 (SAS Institute Inc.). All statistical tests were 2-tailed, and P < 0.05 was interpreted as statistically significant unless otherwise indicated.

Figure 1: Boxplot of the RTL for the different genotypes of SNP rs2736098.
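The HWE check mentioned above can be illustrated with a short Python sketch that compares observed genotype counts to Hardy-Weinberg expectations using a 1-degree-of-freedom χ2 test. The genotype counts are illustrative placeholders, not the study's control data.

```python
# Hardy-Weinberg equilibrium chi-square test among controls (1 df).
from scipy.stats import chi2

def hwe_test(n_aa, n_ab, n_bb):
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                   # frequency of allele A
    expected = (p * p * n, 2 * p * (1 - p) * n, (1 - p) ** 2 * n)
    chi_sq = sum((obs - exp) ** 2 / exp
                 for obs, exp in zip((n_aa, n_ab, n_bb), expected))
    return chi_sq, chi2.sf(chi_sq, df=1)

chi_sq, p_value = hwe_test(350, 490, 160)   # hypothetical CC/CT/TT counts
print(f"chi2 = {chi_sq:.3f}, P = {p_value:.3f}")
```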
CONFLICTS OF INTEREST
The authors declare that they have no conflicts of interest. | 2016-05-04T20:20:58.661Z | 2015-09-08T00:00:00.000 | {
"year": 2015,
"sha1": "2c2c2914ae7a34230744e733339cea9047d9f730",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=14118&path[]=5153",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c2c2914ae7a34230744e733339cea9047d9f730",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
28122005 | pes2o/s2orc | v3-fos-license | NOTES ON INTESTINAL PARASITIC DISEASES IN ARTISANAL FISHERMEN OF THE FISHING TERMINAL OF CHORRILLOS (LIMA, PERU) NOTAS EN LAS ENFERMEDADES PARASITARIAS INTESTINALES EN PESCADORES ARTESANALES DEL TERMINAL PESQUERO DE CHORRILLOS (LIMA, PERU)
Suggested citation: Zelada-Castro, H, Rodríguez-Borda, J, Flores-Liñan, H, León-Manco, J, Wetzel, EJ & Cárdenas-Callirgos, J. Neotropical Helminthology, vol. 7, N°1, jan-jun, pp. 155-166.

Henry Zelada-Castro 1, Jenny Rodríguez Borda 1, Hugo Flores Liñan 2, Jorge León Manco 2, Eric J. Wetzel 3 & Jorge Cárdenas Callirgos 4

1 Alberto Hurtado School of Medicine, Cayetano Heredia Peruvian University. Av. Honorio Delgado 430, Urb. Ingeniería, S.M.P., Lima, Peru. E-mail: henryzc86@hotmail.com
2 Alberto Cazorla Talleri School of Sciences and Philosophy, Cayetano Heredia Peruvian University. Av. Honorio Delgado 430, Urb. Ingeniería, S.M.P., Lima, Peru. E-mail: hugo.flores@upch.pe
3 Department of Biology, Wabash College, Crawfordsville, IN 47933, Indiana, USA. E-mail: wetzele@wabash.edu
4 Invertebrate Laboratory, Museum of Natural History, Biological Sciences School, Ricardo Palma University. Av. Benavides 5440, Lima 33, Peru. E-mail: jmcardenasc@gmail.com
INTRODUCTION

Gastrointestinal parasitic diseases are common in our environment, mostly because of deficiencies in the health system (Contreras et al., 1993) and the cultural habits of the population, especially among those living in poor conditions without access to adequate health services, in addition to environmental characteristics that promote the transmission of the etiologic agents (Apt, 1987). Access to drinking water services, an efficient sewage disposal system and an adequate per capita income, together with the population literacy rate, are characteristics that determine the socio-economic level of a population and are related to human infection with gastrointestinal helminths (Mehraj et al., 2008).
Human communities dedicated to fishing present socio-cultural features that determine the transmission of protozoa and helminths, especially those transmitted orally through water and contaminated food, because human behavioral habits are directly related to the epidemiology of parasitic infections. This is the case for contamination by cysts of Giardia sp. and oocysts of Cryptosporidium spp., which, when transmitted by fecal contamination, cause diarrhea in populations exposed to a contaminated environment through lack of access to drinking water services (Macpherson, 2005). This health perspective, with its multifactorial impacts including environmental, cultural and social factors, all subject to continuous change driven by the dynamics of current human development (Petney, 2001), may be grouped under the heading of poverty ecology, denoting a context in which food safety, health infrastructure and a comprehensive educational programme must be considered (Stillwaggon, 2006), preferably targeting the most vulnerable human groups, such as children (Holland et al., 1988) and women (Brabin & Brabin, 1992), and in which food habits, the relationship with wild fauna and the ecological characteristics of the marine-coastal ecosystem play a fundamental role in the transmission of helminthic zoonoses (Cárdenas-Callirgos, 2012).
Based on the aforementioned, the objective of this study was to conduct a coproparasitological survey of the part of the port population of the Fishing Terminal of Chorrillos dedicated to artisanal fishing, and to explore its likely relationship to marine helminthic zoonoses transmitted through the consumption of marine products. The fishing community was considered a risk population since its diet, especially on the high seas, is based on fish, cephalopods and marine crustaceans (Cárdenas-Callirgos, 2010). We therefore expected to find eggs of helminths that develop to sexual maturity in the gastrointestinal tract and oviposit there. In some cases, larvae or immature stages of helminths cannot attach to the digestive tract, fail to develop sexually and are eliminated with the feces; in other cases, helminths migrate to other organs, so that diagnosis requires serological or surgical methods (Tantaleán, 1994; Cabrera & Trillo-Altamirano, 2004; Cárdenas-Callirgos, 2010). It should also be noted that several helminth larvae with zoonotic potential were previously reported in fish collected at Chorrillos, which constitute a risk factor for the inhabitants of Lima who consume them without prior cooking (Tantaleán & Huiza, 1994; Zelada-Castro et al., 2008).
MATERIALS AND METHODS

The study zone was the Fishing Terminal of Chorrillos, located in the District of Chorrillos in Lima, Peru (12°11'33"S, 77°0'23"W), where inhabitants dedicated to artisanal fishing for several generations sell fresh marine products, supplying several zones of Lima, particularly the neighboring districts. Prior to the collection of fecal samples, awareness-raising meetings and talks were held with the members of the José Olaya Fishermen's Association and their relatives. In these meetings, voluntary participants were registered and signed an informed consent form agreeing to participate in the study. We thus worked with a randomized and representative population sample (n = 50) across all seasons of 2007. Human coproparasitological examinations were conducted using the direct method with saline and Lugol's solution and two concentration techniques: the Ritchie method and the Spontaneous Sedimentation in Tube Technique (a sedimentation concentration technique without centrifugation), following previously standardized methods (Navone et al., 2005). The prevalence of parasitic infection was calculated as the number of parasitized individuals divided by the number of individuals sampled.
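To make the prevalence definition above concrete, the following Python sketch computes a prevalence with a Wilson 95% confidence interval. The counts reproduce the global parasitism figure reported in the Results (68% of n = 50, i.e. 34/50); the confidence interval itself is an illustrative addition, not a statistic reported in this paper.

```python
# Prevalence with a Wilson 95% confidence interval.
import math

def prevalence_wilson(k, n, z=1.96):
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n
                                             + z * z / (4 * n * n))
    return p, centre - half, centre + half

p, lo, hi = prevalence_wilson(34, 50)   # 34 parasitized of 50 sampled
print(f"prevalence = {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```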
RESULTS

No helminthic zoonoses of marine origin were observed in the results obtained, although the presence of other parasitic agents, some of probable zoonotic origin, confirms fecal contamination of possible animal and human origin in water and food. Table 1 shows that the six parasite species reported are broadly distributed, with high prevalences especially among those considered non-pathogenic. The most prevalent parasite was Endolimax nana (Wenyon & O'Connor, 1917), reaching 40%, followed by Entamoeba coli (Grassi, 1879) with 28% and Iodamoeba butschlii (Prowazek, 1911) with 4% among the commensal protozoa. Among the pathogens, the most prevalent was Giardia intestinalis (syn. G. lamblia) (Lambl, 1859) Kofoid & Christiansen, 1915 with 16% prevalence, followed by the helminths Hymenolepis nana (Culbertson, 1940) and Ascaris lumbricoides (Linnaeus, 1758), both with 4% prevalence. Overall, the studied population showed a global parasitism rate of 68%.
In Table 2, we may observe that helminths parasitize only the female population, while for the three parasites shared by both genders, men present a higher prevalence; nevertheless, in the total count, women present a higher global parasitism prevalence than men despite being represented in smaller numbers. The comparison in Table 3 shows that the parasitic richness (number of parasitic species) of the coastal populations assessed in the cited studies equals 22 species, distributed in 11 protozoa and 11 helminths (excluding the report of Taenia sp., for which the species are not specified, and counting the two hookworm species Necator americanus (Stiles, 1902) and Ancylostoma duodenale (Dubini, 1843), even though these cannot be distinguished in the coprological test). These studies were selected because they covered a wide range of parasitic agents, usually including protozoa, and because they were conducted with coastal populations presenting environmental and socioeconomic characteristics similar to those of the target population in our study. One of them is, moreover, the only study dedicated exclusively to helminthiasis in the District of Chorrillos, Lima. Finally, Table 4 shows the community structure of protozoan and metazoan parasites present in the human population studied, where a certain similarity can be discerned with one of the few studies conducted in artisanal fishermen, namely the study from Chala, Arequipa, on the southern coast of Peru: although that study reports 4 species not found in our study, both human populations share 4 protozoan species and 1 helminthic species, and the number of patients examined was very similar. As previously stated, very few studies have addressed the prevalence of parasitic diseases in artisanal fishermen's communities. Table 4 presents details of the Chala study; the similarity between the structures of both parasitic communities is likely explained by the fact that human populations dedicated to fishing are subject to the same health conditions despite the distance between the two localities.
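The community-structure comparison above can be made concrete with a small Python sketch computing the Jaccard similarity between two parasite species lists. The Chorrillos set uses the six species reported in this study; the Chala set is a hypothetical stand-in built from the five shared taxa (assumed here to be the four protozoa plus H. nana) and four placeholder species, since the full list is not reproduced in this text.

```python
# Jaccard similarity between two parasite communities (illustrative sets).
chorrillos = {"Endolimax nana", "Entamoeba coli", "Iodamoeba butschlii",
              "Giardia intestinalis", "Hymenolepis nana", "Ascaris lumbricoides"}

# Hypothetical Chala composition: 5 taxa shared with Chorrillos (assumed)
# plus 4 placeholder species standing in for its non-shared taxa.
chala = {"Endolimax nana", "Entamoeba coli", "Iodamoeba butschlii",
         "Giardia intestinalis", "Hymenolepis nana",
         "species A", "species B", "species C", "species D"}

jaccard = len(chorrillos & chala) / len(chorrillos | chala)
print(f"Jaccard similarity = {jaccard:.2f}")   # 5 shared / 10 total = 0.50
```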
DISCUSSION

Thus, the handling of food would be a determining factor in understanding the relevance of the parasitic diseases reported (Villegas et al., 2012). This research focused on assessing the presence of diphyllobothriasis, a parasitosis that could not be found in this study despite its wide distribution among the Peruvian coastal population, which habitually consumes raw or semi-raw fish; such fish is a daily food for artisanal fishermen in particular. This is a risk behavior because of the imminent danger it represents for people who may become infected with larvae of zoonotic helminths, all the more so in the population of Chorrillos that participated in this research. Nevertheless, no signs of zoonotic infection of marine origin were found. In particular, no cestode infections of the family Diphyllobothriidae were present, although these were found in a study conducted in a school population of Chorrillos (Table 3), which reported four helminths not found in this study, probably because different diagnostic techniques were used and because pediatric populations are more susceptible to helminthiasis. In the same line of work, a study of family groups of the coastal population of Riñihue Lake in Chile assessed the possible impact of educational interventions on the seasonal distribution of human diphyllobothriasis, and on its presence in freshwater fish of the lake in relation to the gender, size and diet of the host fish (Torres et al., 1998). Finally, one of the few published studies on human communities dedicated to fishing was conducted in Egypt. That study assessed the presence of heterophyiasis (caused by several species of zoonotic trematodes) by looking for metacercariae encysted in tilapias and found a 13.3% prevalence of eggs characteristic of the Heterophyidae group; a statistically significant positive correlation was also found between fishing activity and heterophyid infection in the population studied (Lobna et al., 2010).
As may be observed in Table 1, G. intestinalis presents a 16% prevalence. This flagellate, which parasitizes the duodenum, jejunum and upper ileum in human beings, is easily transmitted from person to person, although it may also be a zoonosis, with a wide range of domestic and wild mammals as reservoirs (Roberts & Janovy, 2005); in Peru it has been reported mostly in dogs in Lima (Zarate et al., 2003) and Callao (Araujo et al., 2004). The presence of this parasite has been examined in relation to the possible link between dogs and children (Pablo et al., 2012). This pathogen was followed in prevalence by H. nana with 4%, the only cestode for which an intermediate host is optional during its life cycle and for which domestic mice and rats may act as reservoirs (Roberts & Janovy, 2005); its prevalence is 5.95% in Lima and 9.95% on the Peruvian coast (Cabrera, 2003), both higher than the value found in this study. In Peru this helminth has been related to symptoms such as constipation, hyporexia and abdominal pain, although statistical significance (p < 0.05) was found only when relating the presence of the parasite to diarrhea, and not to the other symptoms mentioned (Romaní et al., 2005). A. lumbricoides was reported with the same prevalence; its prevalence is 6.23% in the Province of Lima and 6.58% on the Peruvian coast, with Oxapampa the province with the highest prevalence (64.32%) and the Low Jungle the geographical region with the highest prevalence (45.30%), giving a calculated national prevalence of 14.5% (Cabrera, 2003).
Finally, its transmission is related to fecal contamination and environmental pollution involving dogs, which are susceptible to the infection, and chickens, which play an important role as paratenic hosts; similarly, cockroaches may transport and disseminate the eggs (Roberts & Janovy, 2005). This worm has important economic implications for human populations: it negatively affects growth and nutrient utilization in undernourished children, and it is associated with intestinal obstruction and other life-threatening surgical emergencies requiring treatment, which in turn demands an efficient healthcare system and imposes treatment costs on families (Stephenson, 1984).
Among the commensals found in Table 1, E. nana was the most prevalent parasite with 40%. This ameba lives in the human large intestine, mainly close to the cecum and feeds on bacteria.
Although it is not pathogenic, its presence indicates the possibility of colonization of the host by pathogenic parasites (Roberts & Janovy, 2005). E. coli presents a 28% prevalence; this non-pathogenic ameba is more common than E. histolytica owing to its greater capacity to survive putrefaction, and its presence is an indicator of health level and of the efficacy of the water treatment system (Roberts & Janovy, 2005). Finally, I. butschlii may infect the large intestine (mainly the cecal area, where it feeds on the intestinal flora of primates and pigs) and is considered a zoonosis (Roberts & Janovy, 2005). In summary, although they do not cause disease in humans, commensal protozoa are good indicators of fecal-oral contamination of the food and water of the population studied and of a lack of adequate personal hygiene. These infections thus point to deficient hygiene in the handling of water and food in the fishermen's community of the Fishing Terminal of Chorrillos, which presents a 68% prevalence of global parasitism, a scenario of multi-parasitic infection involving 6 parasite species. The composition of the parasitic communities in previous studies indicates a higher number of helminth and protozoan parasite populations (Table 3), probably because of the number of patients examined, the different techniques used, and socio-cultural risk factors affecting the populations examined in those studies that differ from the factors affecting our population. We should therefore consider that several factors may influence the structure of parasitic communities: factors related to individual hosts (age, anatomy, behavior, diet, genetic basis, immune response, nutritional condition, predisposition, physiological condition, gender, size and social conditions), to host populations (host social group size, population density of the hosts, predator-prey interactions, range of host species, sympatry with other potential hosts and history), to environmental characteristics (latitude, season and habitat characteristics), to evolutionary backgrounds (age of the species, co-evolution, geographical barriers, key species, phylogeny of the host and life cycle of parasites) and, finally, stochastic factors (size of host samples, parasitic dispersion patterns, opportunity and source communities). Together, these influence the opportunity of parasites to infect a host individual (Petney & Andrews, 1998). It has even been shown that there is a relationship between the pathogenicity of the infection and the genetic predisposition of the exposed human population, as demonstrated in studies conducted in Sudan on Schistosoma mansoni Sambon, 1907 (Dessein et al., 1999) and in Nigeria on A. lumbricoides (Holland et al., 1992).
One of the factors mentioned is gender. As noted in relation to Table 2, the prevalence of parasitism in females is 77.77%, while in males it reaches only 62.5%; we should therefore analyze the factors underlying differential parasitism between the sexes. A given factor may itself depend on subordinate factors that condition it. Here we may consider differences attributable to the level of exposure of men and women to the etiologic agents, as well as immunological differences, including the effects of sex on the host immune response and its relationship with chemotherapy efficacy (Brabin & Brabin, 1992).
If we analyze some factors influencing the presence or absence of certain intestinal parasites that may be relevant for this study, it should be mentioned that several investigations have examined the possible relationship between the epidemiology of certain parasitic diseases and social and ecological factors, a relationship that influences the daily life of the target population (Bhattacharya et al., 1981). For example, studies in some towns of Reunion Island assessed environmental living conditions, weather conditions such as rainfall and altitude, and cultural aspects such as the diverse daily habits of inhabitants in relation to a varying composition of immigrants (Picot & Benoist, 1975). Similarly, in a study conducted in Peru, in Concepción, Puerto Maldonado, Madre de Dios, 136 people were parasitologically assessed; the lack of adequate hygienic standards, the environmental characteristics of cultivated land, the degree of geographic isolation of families and the different activities inhabitants perform according to age and sex were found to determine the transmission patterns of the helminthiases reported (McDaniel et al., 1979). In Argentina, the influence of environmental factors on parasitic infection was studied; variables such as house construction materials, floor characteristics and the type of water service showed a statistically significant association with the presence of intestinal parasites (Basualdo et al., 2007). In urban ecosystems, the consequences of poor health planning, together with the formation of slums and migration from the provinces to the capital, that is, from rural to urban areas, include inadequate housing, unsafe water, sewage problems and deficient waste management, which encourage the transmission of diverse infectious diseases, especially in tropical countries where heavy rain and poor drainage systems, where these exist at all, contribute to the transmission of parasitic diseases through contaminated water. Social inequality therefore renders poor populations susceptible to diverse parasitic diseases (Herbreteau, 2010). In this context, it is important that parasitological studies determine the risk factors contributing to infection through an adequate methodology (Alarcón et al., 2010) and use ecological analysis tools in studies related to public health (Iannacone et al., 2006).
As a final comment, it should be mentioned that control strategies for intestinal parasitic diseases must combine chemotherapy, health education, community participation, basic sanitation, the use of footwear and epidemiological research, supported by an adequate monitoring and assessment program (Albonico et al., 1999). | 2017-10-23T15:47:38.379Z | 2021-02-15T00:00:00.000 | {
"year": 2021,
"sha1": "c16bf9b4634c06280ce2ec2e9376ddf742ab564f",
"oa_license": null,
"oa_url": "https://doi.org/10.24039/rnh201371958",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c16bf9b4634c06280ce2ec2e9376ddf742ab564f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
1712876 | pes2o/s2orc | v3-fos-license | Elucidation of the tumoritropic principle of hypericin
Hypericin is a potent agent in the photodynamic therapy of cancers. To better understand its tumoritropic behaviour, we evaluated the major determinants of the accumulation and dispersion of hypericin in subcutaneously growing mouse tumours. A rapid exponential decay in tumour accumulation of hypericin as a function of tumour weight was observed for each of the six tumour models investigated, and a similar relationship was found between tumour blood flow and tumour weight. Moreover, there was a close correlation between the higher hypericin uptake in RIF-1 tumours compared to R1 tumours and tumour vessel permeability. To define the role of lipoproteins in the transport of hypericin through the interstitial space, we performed a visual and quantitative analysis of the colocalisation of hypericin and DiOC18-labelled lipoproteins in microscopic fluorescent overlay images. A coupled dynamic behaviour was found early after injection (normalised fluorescence intensity differences were on the whole less than 10%), while a shifted pattern in localisation of hypericin and DiOC18 was seen after 24 h, suggesting that during its migration through the tumour mass, hypericin is released from the lipoprotein complex. In conclusion, we were able to show that the tumour accumulation of hypericin is critically determined by a combination of biological (blood flow, vessel permeability) and physicochemical elements (affinity for interstitial constituents).
Photodynamic therapy (PDT) involves the local or systemic administration of a photosensitising drug that, upon light irradiation and in the presence of oxygen, results in tumour destruction (Dolmans et al, 2003). We have recently been focusing on hypericin, a natural compound isolated from Hypericum plants (Lavie et al, 1995a), as a potent photosensitiser with a high antitumoral PDT efficacy (Chen and de Witte, 2000; Chen et al, 2001, 2002). In the course of our study, we found that the compound accumulated to a large extent in tumour tissues. For instance, after systemic administration (i.p. 5 mg kg−1 hypericin), a 16-fold higher concentration of hypericin in tumour tissue vs surrounding healthy tissue (skin, muscle) was found in a subcutaneous P388 lymphoma tumour model growing in DBA/2 mice (Chen and de Witte, 2000). A fast clearance of hypericin from the liver, spleen, kidney and plasma was observed within 6 h, while the peak concentration of hypericin in the tumour (maximally 8.7% of the injected dose per gram tissue (% ID g−1)) occurred at 24-48 h after drug administration. To confirm these data, a study using C3H mice bearing subcutaneous RIF-1 fibrosarcoma tumours was performed. The tumour drug concentration increased rapidly over the initial hours and peaked (5.5% ID g−1) approximately 6 h after i.v. administration (Chen et al, 2001).
The tumoritropic characteristics of hypericin therefore imply that some (radio)labelled derivatives of the compound could be applied in the field of clinical radiodiagnosis, radiotherapy (131I-labelled) and possibly also in magnetic resonance imaging (MRI). In order to better understand the mechanistic background, the present paper addresses basic aspects and principles of the accumulation of hypericin in malignant tissue. Previous results have shown that, compared to normal cells, isolated malignant cells intrinsically do not take up more hypericin (Kamuhabwa et al, 2000). Therefore, the tumoritropic behaviour of hypericin should be envisioned as the result of molecular interactions with specific in vivo environmental, vascular and tumour tissue properties. In this paper, the tumour tissue accumulation of hypericin was examined as a function of tumour weight, and a correlation with tumour perfusion and tumour vessel permeability was explored. Furthermore, we also investigated the intratumoral distribution of hypericin and its association with lipoproteins, aimed at defining the role of lipoproteins in the transport through the interstitial space.
Animals and tumour system
The following tumour models were used: (a) mouse RIF-1 (radiation-induced fibrosarcoma) cells (kindly provided by Dr F Stewart, The Netherlands Cancer Institute, The Netherlands) subcutaneously (s.c.) grafted in female C3H/Km mice, (b) mouse MH22A hepatoma cells (kindly provided by Dr Z Luksiene, Institute of Materials Science and Applied Research, Lithuania) s.c. grafted in female athymic nude mice, (c) human CaCo-2 colon carcinoma cells (kindly provided by Dr P Augustijns (KU Leuven)) s.c. grafted in female athymic nude mice, (d) human A431 cervix carcinoma cells (obtained from the American Type Culture Collection (ATCC)) s.c. grafted in female athymic nude mice, (e) AY27 TCC (transitional cell carcinoma) rat cells (originally developed by Drs S Selman and JA Hampton (Ohio Medical College)) s.c. grafted in female athymic nude mice and (f) R1 rhabdomyosarcoma rat cells (kindly provided by Dr W Landuyt (KU Leuven)) s.c. grafted in female athymic nude mice.
Tumour cells (2 × 10^6) were inoculated s.c. on the depilated lower dorsum of female mice (weight range 21-25 g, purchased from Charles River Laboratories (France) or B&K Grimston (England)). Tumours were grown to surface diameters ranging from 2 to 9 mm and to thicknesses ranging between 2 and 5 mm, as measured by a calliper. These dimensions covered tumour weights from ca 5 to 200 mg.
All aspects of the animal experiment and husbandry were carried out in compliance with national and European regulations and were approved by the Animal Care and Use Committee of KU Leuven.
Statistical analysis was performed using Prism 4.00, GraphPad Software, San Diego, USA.
Hypericin accumulation in tumour tissue
Tumour-bearing animals were killed 6 h after i.v. tail injection of hypericin (5 mg kg−1). Hypericin (synthesised from emodin anthraquinone according to Falk et al (1993)) was dissolved in a mixture of 25% dimethylsulphoxide (DMSO), 25% polyethylene glycol (PEG) 400 and water (2 mg ml−1) immediately before injection. Tumour tissues were harvested, weighed and frozen at −20°C until determination of the hypericin content. Similar tissue samples were taken from control mice. Extraction and quantification of tissue hypericin concentrations were performed as previously described (Chen et al, 2001).
Tumour blood perfusion
The RIF-1 and R1 tumour-bearing animals were used to quantify tumour perfusion by spectrofluorometric determination of tumour FITC-dextran uptake (fluorescein isothiocyanate dextran, Mr 2 × 10^6, obtained from Sigma, St Louis, MO, USA), as described (Chen et al, 2002).
Tumour vessel permeability
The RIF-1 and R1 tumour-bearing animals were used to assess tumour vessel permeability by a modification of the procedure described by Graff et al (2001). Evans blue dye (Sigma, St Louis, MO, USA) was dissolved in PBS (5 mg ml−1) and injected i.p. into tumour-bearing mice (25 mg kg−1). After 48 h, the animals were killed, and the dissected and weighed tumour tissues were dissolved in 1 ml of tissue solubiliser (Soluene 350; Packard Industries, Downers Grove, IL, USA) at 37°C overnight. The solution was allowed to cool before the addition of 2 ml of ethyl acetate (Fisher Scientific, UK) and 2 ml of 1 N HCl. Absorbance of the upper phase was read at 626 nm using a UV/visible spectrophotometer (Ultrospec 2000, Pharmacia Biotech, Amersham Biosciences, Uppsala, Sweden). Concentrations were determined from a standard curve of Evans blue dye.
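The standard-curve step can be illustrated with a short Python sketch that fits a linear calibration and inverts it to obtain sample concentrations. The standard concentrations and absorbance readings below are invented placeholders, not data from this study.

```python
# Reading Evans blue concentrations off a linear standard curve.
import numpy as np

standards_ug_ml = np.array([0.0, 2.5, 5.0, 10.0, 20.0])    # known standards
absorbance_626 = np.array([0.00, 0.06, 0.12, 0.25, 0.49])  # measured A626

slope, intercept = np.polyfit(standards_ug_ml, absorbance_626, 1)
sample_a626 = np.array([0.18, 0.33])                       # tumour extracts
concentrations = (sample_a626 - intercept) / slope         # invert the fit
print(np.round(concentrations, 2))   # estimated ug/ml in the extracts
```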
Tumour samples were immediately mounted in medium (Tissue Tek embedding medium, Miles Inc., Elkhart, IN 46515, USA) and immersed in liquid nitrogen. Serial cryostat sections (5 µm) were taken from each tumour. The first of two serial sections was stained with hematoxylin and eosin (H&E) and the second was examined by fluorescence microscopy (Axioskop 2 plus equipped with a light-sensitive charge-coupled device digital camera (Carl Zeiss, Göttingen, Germany)). To specifically visualise hypericin, the Zeiss filter set 14 (ex: BP 510-560 nm, em: LP 590 nm) was used, whereas the distribution of DiOC18 or FITC-dextran was examined with Zeiss filter set 10 (ex: BP 450-490 nm, em: BP 515-565 nm).
Overlay fluorescence images were quantitatively analysed using a KS imaging software system (Carl Zeiss, Göttingen, Germany) by subdividing the images into 2,269 square fields of 38.7 µm2 (24 pixels/field) and measuring the average fluorescence intensity per field for hypericin and DiOC18, respectively. The data were normalised to the maximal fluorescence intensity of each compound and expressed as percentage fluorescence intensity (% f.i.). From these field-by-field data, scattergrams were constructed with axes representing the % f.i. of each compound. In addition, the absolute field-by-field differences in fluorescence intensity for hypericin and DiOC18 (i.e. D = |% f.i.hypericin − % f.i.DiOC18|) were calculated. The differences were grouped into fractions with increments of 10%, and the percentage of fields corresponding to each fraction was determined. In total, 15 overlay fluorescence images taken at random throughout different tumours (n = 3) were analysed for each time interval.
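For illustration, the hedged NumPy sketch below reproduces the normalisation and |difference| metric just described on synthetic arrays; the random arrays stand in for the hypericin and DiOC18 channel intensities and are not data from this study.

```python
# Field-by-field colocalisation analysis of two fluorescence channels,
# following the normalisation and |difference| metric described above.
import numpy as np

rng = np.random.default_rng(0)
hypericin = rng.random((40, 40))                 # placeholder field intensities
dioc18 = np.abs(hypericin + 0.05 * rng.standard_normal((40, 40)))

# Normalise each channel to its own maximum (% fluorescence intensity)
fi_hyp = 100 * hypericin / hypericin.max()
fi_dio = 100 * dioc18 / dioc18.max()

diff = np.abs(fi_hyp - fi_dio)                   # D = |%f.i.(hyp) - %f.i.(DiOC18)|
bins = np.arange(0, 110, 10)                     # 10% increments
fields_per_bin, _ = np.histogram(diff, bins=bins)
print(100 * fields_per_bin / diff.size)          # % of fields per 10% bracket
```

With well-colocalised channels, most fields fall in the lowest difference brackets, matching the 'less than 10%' behaviour reported for early time points.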
Tumour accumulation of hypericin
After systemic administration of hypericin, its uptake in tumour tissue was studied as a function of tumour weight in six tumour models. As previous results using the RIF-1 tumour model had demonstrated a peak concentration of hypericin in tumour tissue between 4 and 8 h after intravenous injection, a 6 h interval between administration and analysis was used in all cases. From Figure 1 it can be seen that, depending on the tumour size, large differences in hypericin accumulation exist. Typically, small tumours tended to accumulate three to four times more hypericin relative to their weight compared to larger tumours (ranging from 50 to 200 mg). Indeed, for each tumour model investigated, a rapid exponential decay in hypericin accumulation was observed from the smallest tumours, followed by a plateau phase starting on average from a tumour weight of 50 mg. Of interest, when comparing intertumoral dissimilarities, a difference in overall tumour accumulation was found between the mouse RIF-1 fibrosarcoma and rat R1 rhabdomyosarcoma tumours (see Figure 2).
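The one-phase exponential decay fits referred to here and in the figure legends can be reproduced with a short scipy sketch. The weight/uptake data points below are invented placeholders chosen only to illustrate the fitting procedure.

```python
# One-phase exponential decay fit of tumour uptake vs tumour weight.
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(w, span, k, plateau):
    return span * np.exp(-k * w) + plateau

weight_mg = np.array([5, 10, 20, 40, 60, 100, 150, 200], dtype=float)
uptake_pct = np.array([8.5, 6.9, 4.8, 3.0, 2.4, 2.1, 2.0, 1.9])  # hypothetical

params, _ = curve_fit(one_phase_decay, weight_mg, uptake_pct,
                      p0=(7.0, 0.05, 2.0))        # rough initial guesses
span, k, plateau = params
print(f"span = {span:.2f}, k = {k:.3f} per mg, plateau = {plateau:.2f} %ID/g")
```

The fitted plateau corresponds to the uptake level reached by tumours above roughly 50 mg, as described above.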
Tumour blood perfusion and vessel permeability
To further evaluate the intratumoral and intertumoral differences in hypericin accumulation disclosed in R1 and RIF-1 tumours, we analysed and compared their relative perfusion as well as their vessel permeability as a function of tumour weight. To measure tumour perfusion, FITC-dextran was i.v. injected 2 min before the animal was killed. At this short interval, extravasation of the FITC-dextran complex is negligible (Shockley et al, 1992), and therefore the fluorescent tracer is entirely confined to the lumen of the (tumour) blood vessels. The amount of dye extracted from the tumour was expressed as % of the injected dose per gram of tumour as a function of tumour weight (see Figure 3). Differences similar to those found for hypericin accumulation can be seen for both tumour types, that is, a weight-dependent exponential decay in tumour blood flow followed by a plateau phase from tumour weights of 50 mg onwards. As there was no significant difference between RIF-1 and R1 tumours (see Figure 3), intertumoral variability in perfusion could not be demonstrated.
Tumour vessel permeability was measured by i.p. injection of Evans blue dye, a dye with a strong affinity for serum albumin. Extravasation of this dye is determined by vessel permeability, which makes it a good marker for permeability measurements (Graff et al, 2001). Figure 4 shows the % of injected dose per gram of tumour vs tumour weight, 48 h after administration. Except for small tumours (<20 mg), the relative amount of Evans blue recovered from one tumour type was similar over a large range of tumour weights. However, major intertumoral differences between the permeability of vessels present in RIF-1 and R1 tumours were found.
Intratumoral localisation of hypericin
[Figure 2 legend: Accumulation of hypericin in R1 and RIF-1 tumours, expressed as % of injected dose per gram of tumour, shown as a function of tumour weight. Data points were fitted using one-phase exponential decay. The logarithm of the amount of hypericin recovered per gram of tumour tissue as a function of tumour weight is depicted in the inset (for RIF-1: y = −0.001436x + 3.305; for R1: y = −0.002194x + 3.015). The data were statistically compared after linearisation using the two-way ANCOVA test (analysis of covariance). The difference in relative accumulation of hypericin in R1 and RIF-1 tumours was extremely significant (P-value < 0.0001).]
[Figure 3 legend: Perfusion of RIF-1 and R1 tumours. The amount of FITC-dextran extracted is expressed as % of injected dose per gram of tumour, as a function of tumour weight. FITC-dextran (100 mg kg⁻¹) was i.v. injected 2 min before killing. Data points were fitted using one-phase exponential decay. The logarithm of the amount of FITC-dextran recovered per gram of tumour tissue as a function of tumour weight is depicted in the inset (for RIF-1: y = −0.003575x + 4.408; for R1: y = −0.001982x + 4.273). The data were statistically compared after linearisation using the two-way ANCOVA test (analysis of covariance). No significant difference between the perfusion of RIF-1 and R1 tumours was observed (P-value: 0.064). Axes: tumour weight (mg) vs log(ng FITC-dextran per gram tumour).]
To track the intratumoral fate of hypericin, a fluorescence microscopy study was performed on sections of tumour biopsies
taken at different time points after systemic administration of the compound to RIF-1 tumour-bearing animals. Since blood-borne hypericin mainly associates with high-density lipoproteins (HDL) and other lipoproteins (Chen et al, 2001), it was of interest to verify whether the compound colocalised intratumorally with the lipoproteins upon intravenous administration. For that purpose, DiOC18 was simultaneously injected with hypericin into the bloodstream. DiOC18 is a green fluorescent analogue of DiIC18 (Pitas et al, 1981), a marker that, by means of its very lipophilic C18 moieties, avidly binds to lipoproteins without altering their affinity for the receptors.
Fluorescence microscopic analysis of tumour tissue, taken 5 min, 2, 6 and 24 h after administration of the compounds, revealed a shifting pattern, as a function of time, in the localisation of labelled lipoproteins and hypericin. At 5 min, DiOC18 and hypericin were still confined to the luminal space of tumoral blood vessels (Figure 5A), as shown in separate experiments with FITC-dextran that was injected 2 min before killing the animals (results not shown). Figure 5B shows the situation after 2 h, where red (hypericin) and green (DiOC18) fluorescence are apparent in the vessels and in the perivascular region. Conversely, at 24 h, a more homogeneous distribution is observed for hypericin, in contrast to DiOC18-labelled lipoproteins, which show an irregular spreading (Figure 5C).
Quantitative measurements of colocalisation of hypericin and DiOC18 were obtained by a field-by-field analysis of fluorescent overlay images. Scattergrams revealed a good correlation between the percentage fluorescence intensity of hypericin and DiOC18-labelled lipoproteins at short intervals after coadministration of the compounds, whereas a poor correlation was observed at the later time points (Figure 6). For each time point, the percentage of fields that fall within fractions of grouped absolute differences (Δ) was scored (Figure 7, Table 1). A time-dependent shift in the distribution among the fractions can be seen, with an increased number of larger differences between both compounds at longer time intervals. For instance, at 5 min and 2 h, 92 ± 6.1 and 88 ± 5.6% (mean ± s.d.), respectively, of the fields had Δ values less than 10%, indicating that at these time points the distribution of fluorescence between hypericin and DiOC18 was similar.
However, 6 h after injection, significantly more fields displayed higher Δ values, and at 24 h the majority of the fields exhibited Δ values of more than 10%.
DISCUSSION
A basic understanding of the tumoritropic behaviour of hypericin would not only support the construction of hypericin derivatives with optimised PDT characteristics, but also sustain the development of labelled derivatives to extend the application of hypericins beyond the field of PDT. We therefore set out to gain better insight into the major determinants of the accumulation and dispersion of hypericin in tumours.
Our study points out that, at least in the 5-50 mg tumour weight range, the extent of hypericin uptake depends on the tumour weight, while in larger tumours the uptake remains constant. In the two models investigated (R1, RIF-1), an identical relationship was found between tumour blood flow and tumour weight. The fact that small tumours are relatively better perfused than larger ones has been documented before (Tozer et al, 1990; Hering et al, 1995). Since both the hypericin accumulation and the blood flow depend to the same extent on the tumour weight, our data suggest that the hypericin accumulation in tumour tissue is critically dependent on the extent of local blood perfusion. Consistent with this hypothesis, our fluorescence microscopy analysis of RIF-1 tumour sections revealed a more homogeneous vessel distribution in small tumours, with a high percentage of functional vessels and a lack of necrotic areas. In contrast, large tumours have a more heterogeneous vessel distribution, with well-perfused regions in the periphery and less blood flow in the centrally located viable tumour regions. As a consequence, hypericin mainly accumulates in the periphery of these tumours (results not shown). Our results therefore support the concept that, especially in larger tumours, the intratumoral blood flow distribution is rather heterogeneous, both spatially and temporally (Jain, 2001).
Furthermore, we consistently found that the hypericin uptake in RIF-1 tumours was about twice as high as in R1 tumours. Since this difference cannot be accounted for by a dissimilar tumour perfusion, we investigated the permeability of the tumour vessels involved.
Extravasation of plasma constituents by means of convective currents across the microvascular wall originates from the incomplete or totally missing endothelial lining of the rapidly formed vessels in tumours growing beyond a mass of 10⁶ cells (Feng et al, 2000; Hashizume et al, 2000; Kuszyk et al, 2001). Since Evans blue accumulated more in RIF-1 than in R1 tumours over a large tumour weight range, it could be concluded that the larger amount of marker extravasated in RIF-1 tumours was due to an increased permeability of the tumour vessels. Hence, it is likely that the relatively low leakiness of the R1 tumour vessels explains the lower uptake of hypericin in R1 as compared to RIF-1 tumours. A similar correlation between tumoral photosensitiser uptake and tumour vessel permeability has been reported before (Roberts and Hasan, 1993).
Importantly, it was reported that hypericin binds to human lipoproteins at high molar ratios (up to 467 and 14 moles per 10⁴ Da of low-density lipoproteins (LDL) and high-density lipoproteins (HDL), respectively) (Lavie et al, 1995b). In contrast to human plasma, mouse plasma contains a high HDL/LDL ratio (Kessel and Woodburn, 1993), so that mainly the hypericin-HDL complex is formed in the bloodstream of mice (results not shown). Once extravasated, the lipoproteins tagged with numerous hypericin molecules become embedded in the tumour interstitial fluid. At this stage, the hypericin-lipoprotein complexes can follow two routes leading to hypericin uptake and accumulation in tumour cells. Firstly, hypericin can comigrate with the lipoprotein microparticles throughout the interstitial space, followed by receptor-mediated intracellular uptake. Of importance here, a correlation exists between the extent of association of some classes of photosensitisers with LDL and their efficiency of tumour targeting (Jori and Reddi, 1993). Alternatively, after collision of the microparticles with interstitial proteins or tumour cell membranes, hypericin can be released from the lipoprotein complex, followed by a lipoprotein-independent diffusion and intracellular uptake of the compound. A similar in vitro cellular uptake of hypericin in serum-free conditions has been documented (Siboni et al, 2002).
[Figure 4 legend: Extravasation of Evans blue in RIF-1 tumours and R1 tumours, expressed as % of injected dose per gram of tumour, as a function of tumour weight. Data points were fitted using one-phase exponential decay. The logarithm of the amount of Evans blue recovered per gram of tumour tissue as a function of tumour weight is depicted in the inset (for RIF-1: y = −0.0007355x + 2.116; for R1: y = −0.001724x + 1.826). The data were statistically compared after linearisation using the two-way ANCOVA test (analysis of covariance). The difference in vessel permeability in R1 and RIF-1 tumours was extremely significant (P-value < 0.0001). Axes: tumour weight (mg) vs log(μg Evans blue per gram tumour).]
To investigate which route prevails, lipoproteins were marked with DiOC18, a compound that virtually irreversibly associates with lipoproteins. After simultaneous administration of hypericin and DiOC18 to mice bearing RIF-1 tumours, fluorescence microscopic analysis of tumour sections revealed that both compounds initially behaved similarly, indicating that hypericin and lipoproteins comigrate. However, later time intervals showed obvious differences in their intratumoral localisation. Thus, while the lipoproteins seem to be limited in their capacity to migrate through the interstitial space, hypericin spreads rather homogeneously over vascularised areas.
These results suggest that during its migration through the tumour mass, hypericin is released from the lipoprotein complex. In support of this notion, it is worth mentioning that, unlike the lipophilic DiOC18, hypericin is an amphiphilic compound that preferentially locates to the polar aprotic zone adjacent to the lipid-water interface (Lenci et al, 1995; Weitman et al, 2001). The phospholipid moiety in lipoproteins is situated at the surface of the microparticle, thereby allowing the associated hypericin molecules to dynamically interact with their immediate surroundings. These surroundings consist of phospholipid bilayers of plasma membranes of cancer cells and immune cells, and of proteins like collagen, which are abundantly present in the interstitial space (Jain, 1987). Interestingly, it has been shown that hypericin associates with collagen resulting in a photodynamic effect, while another photosensitiser, chlorin e6, was ineffective (Yova et al, 2001).
In conclusion, the results of the present study indicate that the tumour accumulation of hypericin is critically determined by a combination of tumour-dependent and compound-dependent factors. As far as the tumour tissue is concerned, it is clear that adequate vascularisation, blood flow and vessel permeability all contribute to the ability of hypericin to accumulate in the tumour mass. On the other hand, once extravasated and locally delivered by a lipoprotein carrier, the compound itself can dramatically affect the extent of its migration throughout the interstitial zone. Our study reveals that hypericin may have an affinity for some constituents typically present in the interstitium, and that there probably is a partitioning of the compound over lipoproteins, structural proteins and lipid bilayers of cancer and immune cells. This partitioning allows hypericin to spread rather homogeneously over the tumour mass.
Elucidation of the tumoritropic principle of hypericin M Van de Putte et al | 2014-10-01T00:00:00.000Z | 2005-04-20T00:00:00.000 | {
"year": 2005,
"sha1": "cc8a2379ef1077a370521512fc305c74360d22f9",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/6602512.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc8a2379ef1077a370521512fc305c74360d22f9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
233525003 | pes2o/s2orc | v3-fos-license | A new generalization of the inverse Lomax distribution with statistical properties and applications
In this paper, we introduce a new generalization of the inverse Lomax distribution with one extra shape parameter, the so-called power inverse Lomax (PIL) distribution, derived by using the power transformation method. We provide a more flexible density function with right-skewed, uni-modal, and reversed-J shapes. The new three-parameter lifetime distribution is capable of modeling decreasing, reversed-J and upside-down hazard rate shapes. Some statistical properties of the PIL distribution are explored, such as the quantile measure, moments, moment generating function, incomplete moments, residual life function, and entropy measure. The estimation of the model parameters is discussed using the maximum likelihood, least squares, and weighted least squares methods. A simulation study is carried out to compare the efficiencies of the different methods of estimation. This study indicated that the maximum likelihood estimates are more efficient than the corresponding least squares and weighted least squares estimates in most of the situations. Also, the mean square errors for all estimates decrease as the sample size increases. Further, two real data applications are provided in order to examine the flexibility of the PIL model by comparing it with some known distributions. The PIL model offers a more flexible distribution for modeling lifetime data and provides better fits than other models such as the inverse Lomax, inverse Weibull, and generalized inverse Weibull.
Introduction
The Lomax, or Pareto type II, distribution was suggested by Lomax (1954) as an important model for lifetime analysis. The Lomax distribution is widely applied in areas such as the analysis of income and wealth data, modeling business failure data, the biological sciences, modeling firm size and queuing problems, reliability modeling, and life testing (Harris, 1968; Atkinson and Harrison, 1978; Holland et al., 2006; Corbellini et al., 2010; Hassan and Al-Ghamdi, 2009; Hassan et al., 2016). In the literature, some extensions of the Lomax distribution are available, such as the Marshall-Olkin extended Lomax (Ghitany et al., 2007), gamma-Lomax (Cordeiro et al., 2015), power Lomax (Rady et al., 2016), exponentiated Lomax geometric (Hassan and Abdelghafar, 2017), power Lomax Poisson (Hassan and Nassr, 2018), exponentiated Weibull-Lomax (Hassan and Abd-Allah, 2018), and Type II half logistic Lomax (Hassan et al., 2020) distributions, among others.
The inverse Lomax (IL) distribution is a special case of the generalized beta distribution of the second kind. It is one of the significant lifetime models in statistical applications. Also, it has applications in various fields like stochastic modeling, economics, actuarial sciences, and life testing, as discussed by Kleiber and Kotz (2003). Besides this, it has been used to obtain the Lorenz ordering relationship among ordered statistics (Kleiber, 2004). The IL distribution has been used on geophysical databases, especially on the sizes of land fires in the California State of the United States (McKenzie et al., 2011). The IL distribution can be derived from the Lomax distribution using the transformation Y = 1/Z, where Z has a Lomax distribution.
The probability density function (pdf) of the two-parameter IL distribution is specified by

f(y; α, β) = αβ y^(−2) (1 + β/y)^(−(α+1)); α, β, y > 0,

where β and α are the scale and shape parameters, respectively. The associated cumulative distribution function (CDF) is

F(y; α, β) = (1 + β/y)^(−α); y > 0.

Rahman and Aslam (2014) used a two-component mixture IL model for the prediction of future ordered observations in the Bayesian framework using predictive models. Singh et al. (2016) obtained reliability estimates of the IL model under Type II censoring. In addition to this, the IL distribution under hybrid censoring was applied to survival data by Yadav et al. (2016).
In this paper, we provide a more flexible model by inducing just one extra shape parameter in the inverse Lomax model to improve its goodness-of-fit to real data. We discuss some of its statistical properties. Estimation of the unknown parameters of the subject model using the maximum likelihood (ML), least squares (LS), and weighted least squares (WLS) methods is considered. Finally, simulation issues as well as applications to real data are provided. The layout of the paper contains the following sections. In Section 2, we introduce the three-parameter PIL distribution. Some statistical properties of the PIL distribution are presented in Section 3. In Section 4, ML, LS, and WLS estimators for the model parameters are obtained. A numerical study and the analysis of real data sets are presented in Section 5. The article ends with concluding remarks.
Power inverse Lomax distribution
The power inverse Lomax distribution is developed by considering the power transformation X = Y^(1/δ), where the random variable Y follows the IL distribution with parameters α and β. The cdf of the PIL distribution is specified by

F(x; α, β, δ) = (1 + β x^(−δ))^(−α); x > 0. (3)

The pdf of the PIL distribution corresponding to (3) can be written as follows:

f(x; α, β, δ) = αβδ x^(−(δ+1)) (1 + β x^(−δ))^(−(α+1)); α, β, δ, x > 0. (4)
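Because the reconstructed cdf (3) inverts in closed form, the distribution can be evaluated and sampled directly. A minimal sketch (Python/NumPy) under the parameterisation of equations (3) and (4); all function names are illustrative:

```python
import numpy as np

def pil_pdf(x, a, b, d):
    """pdf of PIL(alpha=a, beta=b, delta=d), equation (4)."""
    return a * b * d * x**(-(d + 1)) * (1 + b * x**(-d))**(-(a + 1))

def pil_cdf(x, a, b, d):
    """cdf of PIL, equation (3)."""
    return (1 + b * x**(-d))**(-a)

def pil_quantile(u, a, b, d):
    """Inverse cdf: solve (1 + b*x^(-d))^(-a) = u for x, 0 < u < 1."""
    return (b / (u**(-1.0 / a) - 1.0))**(1.0 / d)

# inverse-transform sampling
rng = np.random.default_rng(1)
sample = pil_quantile(rng.uniform(size=10_000), a=2.0, b=1.5, d=3.0)
```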
A random variable X having the PIL distribution will be denoted by X ~ PIL(α, β, δ). Plots of the pdf and hazard rate function (hrf) for some selected parameter values are displayed in Fig. 1 and Fig. 2. As seen from these figures, the pdf and hrf take different shapes according to the different values of the parameters.
Properties of PIL distribution
In this section, some important mathematical characteristics of the PIL distribution are developed, specifically the r-th moment, the moment generating function, incomplete moments, moments of the residual life function, and the Rényi entropy measure.
Moments
An explicit expression for the r-th moment of the PIL distribution can be obtained from pdf (4). After simplification, the r-th moment of the PIL distribution is obtained as

μ'_r = E(X^r) = α β^(r/δ) B(α + r/δ, 1 − r/δ); δ > r, (5)

where B(·, ·) denotes the beta function. Setting r = 1, 2, 3, 4 in (5), we can obtain the first four moments about zero. Generally, the moment generating function of the PIL distribution is obtained as M_X(t) = Σ_(r=0)^∞ (t^r/r!) μ'_r. The r-th central moment (μ_r) of X is given by μ_r = Σ_(j=0)^r C(r, j) (−μ'_1)^j μ'_(r−j). The mean (μ'_1) and variance (μ_2) of the PIL distribution for some selected values of the parameters can be calculated numerically, as reported in Table 1. Also, the skewness (SK) and kurtosis (KU) of the PIL distribution for various values of the parameters can be calculated numerically, as reported in Table 2. From Table 1, it can be observed that both the mean and the variance of the PIL distribution increase as the values of two of the parameters increase, while the variance decreases as the values of the remaining parameter increase. From Table 2, it can be noticed that both the skewness and the kurtosis are decreasing functions of two of the parameters.
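The moment expression (5) can be checked numerically against direct integration of the density, reusing pil_pdf from the sketch above (the parameter values are illustrative and satisfy δ > r):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

a, b, d, r = 2.0, 1.5, 4.0, 2                                  # delta > r is required
closed_form = a * b**(r / d) * beta_fn(a + r / d, 1 - r / d)   # equation (5)
numeric, _ = quad(lambda x: x**r * pil_pdf(x, a, b, d), 0, np.inf)
print(closed_form, numeric)                                    # the two should agree closely
```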
Next, we derive a simple formula for the s-th incomplete moment of X, defined by φ_s(t) = ∫_0^t x^s f(x) dx. The quantity φ_s(t) follows from (4) as

φ_s(t) = α β^(s/δ) B(z_t; α + s/δ, 1 − s/δ), with z_t = (1 + β t^(−δ))^(−1), (6)

where B(z; a, b) is the incomplete beta function. The first incomplete moment of X is important for determining the mean deviations, which can be used to measure the amount of scattering in a population, and the Bonferroni and Lorenz curves. It can be obtained by substituting s = 1 in (6). Additionally, the Bonferroni and Lorenz curves of the PIL distribution are, respectively, given by B(p) = φ_1(q)/(p μ'_1) and L(p) = φ_1(q)/μ'_1, where q denotes the quantile of order p.
Moments of residual life function
The n-th moment of the residual life of X is given by m_n(t) = E[(X − t)^n | X > t] = (1/(1 − F(t))) ∫_t^∞ (x − t)^n f(x) dx. Hence, expanding the binomial and using (4), the n-th moment of the residual life of the PIL distribution can be obtained as follows:

m_n(t) = (1/(1 − F(t))) Σ_(j=0)^n C(n, j) (−t)^(n−j) α β^(j/δ) [B(α + j/δ, 1 − j/δ) − B(z_t; α + j/δ, 1 − j/δ)]; δ > n,

with z_t = (1 + β t^(−δ))^(−1), where B(z; a, b) is the incomplete beta function.
Rényi entropy
Entropy is a measure of the uncertainty of a random variable X. The Rényi entropy of a continuous random variable X with range R is defined as

I_R(ρ) = (1/(1 − ρ)) log ∫_R f(x)^ρ dx; ρ > 0, ρ ≠ 1.

Substituting pdf (4) of the PIL distribution and evaluating the integral, the Rényi entropy of the PIL distribution is given by

I_R(ρ) = (1/(1 − ρ)) log[α^ρ δ^(ρ−1) β^((1−ρ)/δ) B(ρ + (ρ − 1)/δ, ρα + (1 − ρ)/δ)].

Table 3 gives I_R(ρ) of the PIL distribution for different choices of the parameters. It seems that the entropy increases with increasing values of two of the parameters, while it decreases with increasing values of the third.
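Any closed-form entropy expression, including the one reconstructed above, can be verified by numerical integration of the definition, again reusing pil_pdf from the earlier sketch (parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def renyi_entropy(a, b, d, rho):
    """I_R(rho) = log(integral of f^rho) / (1 - rho), for rho > 0, rho != 1."""
    integral, _ = quad(lambda x: pil_pdf(x, a, b, d)**rho, 0, np.inf)
    return np.log(integral) / (1.0 - rho)

print(renyi_entropy(2.0, 1.5, 3.0, rho=0.5))
```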
Parameter estimation
This section deals with the parameter estimation for PIL distribution based on ML, LS, and WLS methods.
Maximum likelihood estimator
Let x_1, x_2, x_3, …, x_n be observed values from the PIL distribution with pdf (4). Then the log-likelihood function, denoted by lnL, based on a complete sample, can be expressed for the unknown parameters as

lnL = n ln α + n ln β + n ln δ − (δ + 1) Σ_(i=1)^n ln x_i − (α + 1) Σ_(i=1)^n ln(1 + β x_i^(−δ)).

The partial derivatives of the log-likelihood function with respect to α, β, and δ can be obtained as follows:

∂lnL/∂α = n/α − Σ_(i=1)^n ln(1 + β x_i^(−δ)),
∂lnL/∂β = n/β − (α + 1) Σ_(i=1)^n x_i^(−δ)/(1 + β x_i^(−δ)), and
∂lnL/∂δ = n/δ − Σ_(i=1)^n ln x_i + (α + 1) β Σ_(i=1)^n x_i^(−δ) ln x_i/(1 + β x_i^(−δ)).

Then the ML estimators of the parameters α, β, and δ, denoted by α̂, β̂, and δ̂, are determined by solving the non-linear equations ∂lnL/∂α = 0, ∂lnL/∂β = 0, and ∂lnL/∂δ = 0 numerically and simultaneously.
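In practice these score equations are rarely solved by hand; a direct numerical minimisation of the negative log-likelihood gives the same estimates. A hedged sketch (Python/SciPy), consistent with the log-likelihood reconstructed above and using the sample generated in the earlier sketch; the starting values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x):
    """Negative log-likelihood of the PIL distribution (reconstructed lnL)."""
    a, b, d = theta
    if min(a, b, d) <= 0:
        return np.inf                         # keep the search in the valid region
    n = x.size
    return -(n * np.log(a * b * d)
             - (d + 1) * np.sum(np.log(x))
             - (a + 1) * np.sum(np.log1p(b * x**(-d))))

res = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(sample,), method="Nelder-Mead")
a_hat, b_hat, d_hat = res.x
```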
Least squares estimator
The LS estimators were originally proposed by Swain et al. (1988) to estimate the parameters of the beta distribution. The method of LS estimates the parameters by minimizing the sum of squared differences between the theoretical distribution function evaluated at the ordered observations and their expected values; that is, by minimizing Σ_(i=1)^n [F(x_(i:n)) − i/(n + 1)]^2 with respect to the unknown parameters. Similarly, the WLS estimators can be obtained by minimizing the weighted sum Σ_(i=1)^n w_i [F(x_(i:n)) − i/(n + 1)]^2, with weights w_i = (n + 1)^2 (n + 2)/[i(n − i + 1)], with respect to the unknown parameters.
Let x_1, x_2, x_3, …, x_n be a random sample of size n from the PIL distribution, and suppose that x_(1:n) < x_(2:n) < x_(3:n) < ⋯ < x_(n:n) denotes the corresponding ordered sample. Therefore, the LS estimators of α, β, and δ, say α̃, β̃, and δ̃ respectively, can be obtained by minimizing

S(α, β, δ) = Σ_(i=1)^n [(1 + β x_(i:n)^(−δ))^(−α) − i/(n + 1)]^2

with respect to α, β, and δ. Also, the WLS estimators of α, β, and δ, say ᾱ, β̄, and δ̄ respectively, can be obtained by minimizing

W(α, β, δ) = Σ_(i=1)^n [(n + 1)^2 (n + 2)/(i(n − i + 1))] [(1 + β x_(i:n)^(−δ))^(−α) − i/(n + 1)]^2

with respect to α, β, and δ.
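The LS and WLS criteria can be minimised numerically in exactly the same way, reusing pil_cdf and sample from the earlier sketches (the weights follow the standard order-statistic form given above; names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def ls_objective(theta, x, weighted=False):
    """LS (or WLS) criterion for the PIL distribution."""
    a, b, d = theta
    if min(a, b, d) <= 0:
        return np.inf
    xs = np.sort(x)
    n = xs.size
    i = np.arange(1, n + 1)
    resid = pil_cdf(xs, a, b, d) - i / (n + 1)    # distance to plotting positions
    if weighted:
        resid = resid * np.sqrt((n + 1)**2 * (n + 2) / (i * (n - i + 1)))
    return np.sum(resid**2)

ls_fit = minimize(ls_objective, [1.0, 1.0, 1.0], args=(sample, False), method="Nelder-Mead")
wls_fit = minimize(ls_objective, [1.0, 1.0, 1.0], args=(sample, True), method="Nelder-Mead")
```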
Numerical illustration
In this section, a numerical study is presented to compare the performance of the estimates for different parameter values. The performance of the estimates of unknown parameters has been measured in terms of their absolute bias (AB), standard error (SE), and mean square error (MSE) for different sample sizes and for different parameter values. The numerical procedures are defined through the following algorithm.
Step 4: Steps from 1 to 3 are repeated 1000 times for each sample size and for selected sets of parameters. Then, the ABs, SEs, and MSEs of the estimates of the unknown parameters are calculated.
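The repetition loop of this algorithm (draw a sample, estimate, accumulate error measures over 1000 replications) could be sketched as follows, reusing pil_quantile and neg_log_lik from the earlier sketches; only the ML branch is shown, the true parameter values are illustrative, and SE is taken here as the Monte Carlo standard deviation of the estimates:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_ml(n, true=(2.0, 1.5, 3.0), reps=1000, seed=0):
    """Monte Carlo AB, SE, and MSE of the ML estimates for sample size n."""
    rng = np.random.default_rng(seed)
    true = np.asarray(true)
    est = np.empty((reps, 3))
    for j in range(reps):
        x = pil_quantile(rng.uniform(size=n), *true)          # generate a sample
        est[j] = minimize(neg_log_lik, [1.0, 1.0, 1.0],
                          args=(x,), method="Nelder-Mead").x  # estimate
    ab = np.abs(est.mean(axis=0) - true)                      # absolute bias
    se = est.std(axis=0, ddof=1)                              # standard error (MC s.d.)
    mse = ((est - true)**2).mean(axis=0)                      # mean square error
    return ab, se, mse
```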
Numerical results
Numerical results are reported in Tables 4 to 6 and illustrated in Figs. 3-6. From these tables and figures, the following observations can be made about the properties of the estimated parameters of the PIL distribution.
1. The MSEs and SEs of the ML, LS, and WLS estimates decrease as the sample sizes increase for the different selected sets of parameters (Fig. 3 and Fig. 4). […] (Table 6). 6. The SEs for the LS estimates take the smallest values among the corresponding SEs for the other methods in almost all of the cases.
Data analysis
In this subsection, two real data sets are analyzed to illustrate the importance of the PIL distribution by comparing it with some other distributions: IL, inverse Weibull (IW), and generalized inverse Weibull (GIW). These applications show that the PIL distribution can be applied in practice and can provide a better model than the alternatives. The first data set corresponds to remission times (in months) of a random sample of 128 bladder cancer patients given in Lee and Wang (2003). The maximum likelihood method was performed to obtain the point estimates of the model parameters.
To compare the fitted models, some selected measures are applied. The selected measures include the -2 log-likelihood function evaluated at the parameter estimates, the Akaike information criterion (AIC), Bayesian information criterion (BIC), consistent Akaike information criterion (CAIC), and Hannan-Quinn information criterion (HQIC). The better model corresponds to the lower values of AIC, CAIC, BIC, and HQIC. The results of these measures for the mentioned models are listed in Table 7. The results in Table 7 show that the PIL distribution has the smallest values of AIC, CAIC, BIC, and HQIC. Thus, the PIL distribution provides a significantly better fit than the other distributions considered here (IL, IW, and GIW). The second data set, from Murthy et al. (2004), concerns the times between failures for a repairable item. The data are listed as follows: 1.43, 0.11, 0.71, 0.77, 2.63, 1.49, 3.46, 2.46, 0.59, 0.74, 1.23, 0.94, 4.36, 0.40, 1.74, 4.73, 2.23, 0.45, 0.70, 1.06, 1.46, 0.30, 1.82, 2.37, 0.63, 1.23, 1.24, 1.97, 1.86, 1.17. The results in Table 8 indicate that the PIL model is suitable for this data set based on the selected criteria. The PIL model has the lowest AIC, BIC, CAIC, and HQIC values. Therefore, the PIL is a preferable model to the other models for this data set.
Concluding remarks
In this paper, we introduce a new model, the socalled, power inverse Lomax distribution. Several properties of the PIL distribution are investigated, including the moments, incomplete moments, moments of residual life, and Rényi entropy. The estimation of population parameters is discussed through the method of the maximum likelihood, least squares, and weighted least squares. The simulation study is presented to compare the performance of estimates. Applications of the power inverse Lomax distribution to real data show that the new distribution can be used quite effectively to provide better fits than the inverse Lomax, inverse Weibull, and generalized inverse Weibull models. The power inverse Lomax distribution parameters can be investigated using ranked set sampling methods (Al-Saleh and Al-Omari, 2002;Al-Omari, 2010;2011;Haq et al., 2014a;2014b). | 2021-05-04T22:06:07.774Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "7f7a0a907ebcc5ef29c5ec8f45bf059c6b9a281b",
"oa_license": "CCBY",
"oa_url": "http://science-gate.com/IJAAS/Articles/2021/2021-8-4/1021833ijaas202104011.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d12ec8be08710dd88d2387f5823146c02d60d2a3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
210922198 | pes2o/s2orc | v3-fos-license | Fulminant and Diffuse Cerebral Toxoplasmosis as the First Manifestation of HIV Infection: A Case Presentation and Review of the Literature
Patient: Male, 9-year-old Final Diagnosis: Fulminant and diffuse cerebral toxoplasmosis Symptoms: Decreased level of consciousness • fever • generalized tonic-clonic seizures • hemiplegia Medication: — Clinical Procedure: Decompressive hemicraniectomy Specialty: Neurosurgery Objective: Unusual clinical course Background: One of the most common causes of central nervous system (CNS) opportunistic infections in immunocompromised patients is toxoplasmosis. It can cause focal or disseminated brain lesions leading to neurological deficit, coma, and death. Prompt management with optimal antibiotics is vital. However, the diagnosis of cerebral toxoplasmosis is challenging in infected individuals with human immunodeficiency virus (HIV). The possible diagnosis is based on clinical presentation, imaging, and specific serologic investigations. The diagnosis can be confirmed by histopathological examination and/or by finding nucleic material in the spinal cerebrospinal fluid (CSF) examination. Case Report: We present a review of the literature with a rare illustrative case of diffuse CNS toxoplasmosis as the first manifestation of HIV infection in a young patient. Brain MRI showed diffuse, ring-enhancing lesions, and significant midline shift. Decompressive hemicraniectomy for control of intracranial pressure and anti-infectious therapy were performed. Conclusions: This should raise awareness that cerebral toxoplasmosis can occur in pediatric patients with HIV infection, and, more importantly, as the first manifestation of AIDS. Although the prognosis is often poor, early diagnosis and immediate treatment of this life-threatening opportunistic infection can improve outcomes.
Background
Toxoplasmosis is known as one of the most prevalent infections worldwide, and it is estimated that more than one-third of the human population is infected. The causative microorganism is Toxoplasma gondii, an obligate intracellular parasite that causes zoonotic infection [1].
Transfer occurs via several routes, including ingestion of contaminated water or food (the main route), contact with cat litter containing parasitic cysts, blood transfusion, organ/tissue transplantation, and via the placenta following maternal infection [2].
Although most infections are subclinical in immunocompetent individuals, the parasite remains dormant in tissues of an infected host, including the central nervous system (CNS) [3]. Significant clinical disease often occurs secondary to reactivation of the inactivated parasite, which occurs when the immune system is suppressed or compromised. This condition is most often seen in pregnancy, acquired immunodeficiency, or transplantation [4][5][6][7][8][9]. However, primary disease does occur and is associated with more severe and disseminated disease [10,11].
Toxoplasmosis is the leading cause of opportunistic infection and cerebral lesions in individuals with acquired immune deficiency syndrome (AIDS), accounting for 50% to 70% of all lesions in the brain [12]. Toxoplasma encephalitis usually appears in late stages of AIDS, when CD4 counts are below 200 cells/mm³, and patients with CD4 counts below 50 cells/mm³ are at higher risk [11].
Clinical manifestations of CNS involvement in HIV/AIDS patients are diverse and range from fever, headache, altered sensorium or motor function, and focal neurological deficit to disorientation, confusion, decreased level of consciousness, and seizure, related to a focal lesion or disseminated encephalitis [13,14]. In some patients with CD4 counts over 200 cells/mm 3 , CNS lesions mimic brain tumors [15,16].
Infrequently, cerebral toxoplasmosis present as the first manifestation of HIV/AIDS, which is life-threatening if left untreated [17][18][19]. Here, we describe a very rare case of a child who presented with diffuse cerebral toxoplasmosis as the first manifestation of pediatric HIV/AIDS.
Case Report
A 9-year-old boy was admitted to the Pediatric Emergency Department (ED) with a decreased level of consciousness, recent frequent episodes of generalized tonic-clonic seizures, and fever. The onset of fever was 3 weeks earlier, with oral ulcer, dyspnea, and coryza. He was treated on an outpatient basis, but the fever did not resolve. He also had a history of recent urinary incontinence and weight loss, but no history of night sweats or gastrointestinal symptoms (except oral ulcer).
Initial examination at the Pediatric ED revealed left hemiplegia, in addition to fever, seizure, and loss of consciousness (LOC), suggestive of meningitis. A hematologic workup showed significant leukopenia (WBC = 2400/mm³) and anemia (hemoglobin = 9.5 g/dl). Results of chest radiography and abdominal ultrasonography were normal.
The patient received intravenous steroids and empirical vancomycin plus ceftazidime plus metronidazole as broad-spectrum antibiotics, with clinical suspicion of meningitis. Brain magnetic resonance imaging (MRI) with and without intravenous contrast showed diffuse, ring-enhancing lesions with perifocal edema and significant midline shift (Figures 1, 2), suggestive of tuberculoma or cerebral toxoplasmosis. Therefore, serologic tests for toxoplasmosis and HIV antibodies were requested. A rapid decline in the patient's mental status and corresponding midline shift warranted an urgent decompressive hemicraniectomy and resection of the largest lesion to prevent herniation syndrome and to control the intracranial pressure, and thus provided the specimen for diagnosis of cerebral toxoplasmosis ( Figure 3). A pathological examination confirmed the diagnosis of cerebral toxoplasmosis.
Blood analysis for the HIV antibody was positive postoperatively and was confirmed with ELISA. CD4 counts were below 100 cells/mm 3 . Three days later, the toxoplasma IgG titer was reported to be positive with significant levels. Accordingly, we confirmed the diagnosis of CNS toxoplasmosis based on serology and pathologic evidence, and trimethoprim-sulfamethoxazole was added to his antibiotic regimen. Also, the tuberculin sensitivity test was negative. The patient's vaccination history was unclear. Unfortunately, the patient died on the 6 th hospital day, probably because of disease dissemination. A postmortem pathological examination was not done.
Discussion
Central nervous system involvement in patients with HIV/AIDS often occurs secondary to opportunistic infections, most commonly due to toxoplasma, mycobacteria tuberculosis, and fungi, and less commonly due to primary lymphoma [12].
Toxoplasma gondii, considered as one of the most prevalent parasites, causes clinical infection in an immunocompromised individual, usually by reactivation of the dormant form of the microorganism. The most common sites for latent infection are the CNS, eye, and muscles (skeletal, smooth, and heart muscles) [20].
Cerebral toxoplasmosis is among the most common CNS infections in untreated immunocompromised patients [21]. Toxoplasma encephalitis almost always occurs secondary to reactivation of the inactivated parasite in the brain of HIV-infected patients, especially when the CD4 count is below 200 cells/mm³ [11,14].
CNS toxoplasmosis causes a wide range of symptoms corresponding to the location and distribution of involvement. Patients usually present with headache, confusion, or changes in the level of consciousness, fever, focal neurological signs, and seizure [22,23]. Presentation with meningeal signs and diffuse encephalitis is less common [24]. These lesions usually are seen as unifocal or multifocal abscess-like lesions or, rarely, as diffuse lesions. However, the first manifestation of AIDS in a child with fulminant encephalitic illness and diffuse involvement of the brain is quite rare. This situation warrants prompt diagnosis and treatment because of its high morbidity and mortality [25].
AIDS-related neurological signs/symptoms of brain involvement are not specific; thus, brain imaging with computed tomography (CT) or magnetic resonance (MR) is essential for the diagnosis of toxoplasma lesions. However, MRI should be used as the initial choice if there is a high clinical suspicion, due to its greater sensitivity than CT [26].
In CT, lesions usually show as ring enhancement with intravenous contrast, and cerebral edema may be found, which could be responsible for the mass effect in these patients. The most common locations of the lesion are in frontal, basal ganglia, and parietal regions [27]. The lesion(s) of cerebral toxoplasmosis is usually round, and is iso/hyperdense in gray-white matter junction, basal ganglia, and deep white matter. The lesion is usually ring-enhancing with intravenous contrast, but can also have a homogenous enhancing pattern [26].
The characteristic "target sign" in CT findings in patients with cerebral toxoplasmosis is defined as low-density mass lesions that enhance with intravenous contrast and are surrounded by edema [26,28]. This pattern also is seen in cerebral tuberculoma [29]. Additionally, diffuse toxoplasma encephalitis can occur without abscess formation and CT findings [26].
MRI shows "target sign" enhancement, which is commonly seen in cerebral toxoplasmosis and described as an isointense "eccentric" (or concentric) core surrounded by a hypointense zone and a peripheral hyperintense enhancing rim on post-contrast T1-weighted images and inverse appearance in T2-weighted/FLAIR images, with a hypointense core, an intermediate hyperintense region, and a peripheral hypointense rim [30,31]. Different features for cerebral toxoplasmosis on MRI have been described, which are probably due to the different stages of infection, including the degree of necrosis and cyst stages [29,32]. Furthermore, the peripheral rim of hyperintensity and central hypointensity in T2-weighted images are seen in CNS tuberculoma [33].
After treatment with pyrimethamine and sulfadiazine, the resolution of lesions is indicated in CT scans, related to the degree of involvement and latency [26].
The MRI study in our illustrative case shows both of these diagnostic radiological signs. Some authors have described a T2-weighted symmetric, concentric target sign with a hypointense core, an intermediate hyperintense core, and a peripheral hypointense zone as a more specific diagnostic pattern (Figure 1) [34][35][36][37]. A ring-shaped zone of peripheral enhancement with a small eccentric nodule along the wall of the lesion on the post-gadolinium T1-weighted sequence is considered to be an eccentric target sign (Figure 2).
Primary CNS lymphoma is another differential diagnosis of cerebral toxoplasmosis in neuroimaging with CT scan. Primary CNS lymphoma can mimic toxoplasmosis in CT scans with ring-enhancing lesions [28]; indeed, primary CNS lymphoma and toxoplasmosis are indistinguishable on CT [26]. AIDS-associated lymphoma in T2-weighted MR images often shows areas of central hyperintensity with a surrounding hypointense area, presenting as a ring or "target sign" [28]. These similarities make biopsy necessary to differentiate lymphoma from toxoplasma lesions in people with AIDS [28].
The other mimicking brain lesions are pyogenic abscess (a thin hypointense rim on T2-weighted MRI) and metastasis (ring-enhancing lesions at the gray-white matter junction with vasogenic edema for the relative size of the lesion) [34].
MRI is more sensitive than CT and reveals multiple lesions [28]. Various neuroimaging patterns have been proposed for the diagnosis of toxoplasma lesions to differentiate from primary CNS lymphoma based on MR or other brain imaging modalities. Decreased or poor uptake in toxoplasmosis using singlephoton emission computed tomography (SPECT) and positron emission tomography (PET) has been reported [17,38,39], but cost and availability of these modalities limit their use in the clinical setting, especially in countries with a high prevalence of HIV/AIDS and toxoplasma infections.
Other diagnostic tools include polymerase chain reaction (PCR) detection of Toxoplasma gondii in cerebrospinal fluid (CSF) [40], but clinical use of this tool can be time-consuming and is less sensitive in toxoplasma encephalitis.
Definite diagnosis of toxoplasma encephalitis is only possible with histopathology. Some authors suggest a trial of treatment for toxoplasma, which could be helpful in presumptive diagnosis, particularly in patients with low CD4, multiple cerebral lesions suspicious of toxoplasmosis, reactive anti-toxoplasma IgG, and lack of proper prophylaxis [41].
A presumptive diagnosis of cerebral toxoplasmosis can be made based on a combination of the clinical syndrome, a positive toxoplasma IgG antibody, and brain imaging, especially if the CD4 count is below 200 cells/mm³. If a patient meets all the diagnostic criteria, the positive predictive value of toxoplasmosis is nearly 90% [37,42,43]. Biopsy confirms the clinical diagnosis of cerebral toxoplasmosis through detection of the organism and differentiates it from primary CNS lymphoma and tuberculoma, but may delay the start of treatment.
The cornerstone of treatment is a combination of pyrimethamine with sulfadiazine or clindamycin, or trimethoprim-sulfamethoxazole, in addition to treatment of the HIV infection by combination antiretroviral therapy (cART) [44][45][46][47][48]. Timely initiation of proper antibiotics to treat toxoplasma encephalitis is critical, and treatment should be started promptly when there is a high clinical suspicion of toxoplasmosis [11,41]. However, patients may need other interventions, including decompressive surgery, to reduce the mass effect of the lesion.
Empirical treatment with pyrimethamine and sulfadiazine should be kept in mind for a patient with neurological symptoms and an intracranial mass, especially in patients with a history of immunodeficiency [49]. However, it is more challenging when the initial manifestation of the immunodeficiency is encephalitis due to toxoplasmosis or tuberculosis, in which the clinical presentation of encephalitis and the mass effect due to edema indicate the use of corticosteroids.
In this illustrative case, given the patient's clinical presentation and brain MRI, the diagnosis was confirmed by pathology and high titers of anti-toxoplasma antibody, and treatment with TMP-SMX started shortly after serologic test results were received.
Conclusions
CNS toxoplasmosis should be considered in patients living in regions endemic for HIV and toxoplasma. Toxoplasmosis in immunocompromised patients should be considered when a combination of clinical presentation and neuroimaging evidence is suggestive, and promptly investigated as a life-threatening differential diagnosis, particularly in the pediatric population. Finally, toxoplasma encephalitis could be the first presentation of HIV infection in a child. | 2020-01-28T14:02:49.775Z | 2020-01-26T00:00:00.000 | {
"year": 2020,
"sha1": "e8139ef6cb4031471542925c5f75e56ed19fb2da",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc6998800?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b63984dbf6121f1af6d944c10ddb2a7ac2923aa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220942878 | pes2o/s2orc | v3-fos-license | Gene autoregulation by 3’ UTR-derived bacterial small RNAs
Negative feedback regulation, that is the ability of a gene to repress its own synthesis, is the most abundant regulatory motif known to biology. Frequently reported for transcriptional regulators, negative feedback control relies on binding of a transcription factor to its own promoter. Here, we report a novel mechanism for gene autoregulation in bacteria relying on small regulatory RNA (sRNA) and the major endoribonuclease, RNase E. TIER-seq analysis (transiently-inactivating-an-endoribonuclease-followed-by-RNA-seq) revealed ~25,000 RNase E-dependent cleavage sites in Vibrio cholerae, several of which resulted in the accumulation of stable sRNAs. Focusing on two examples, OppZ and CarZ, we discovered that these sRNAs are processed from the 3’ untranslated region (3’ UTR) of the oppABCDF and carAB operons, respectively, and base-pair with their own transcripts to inhibit translation. For OppZ, this process also triggers Rho-dependent transcription termination. Our data show that sRNAs from 3’ UTRs serve as autoregulatory elements allowing negative feedback control at the post-transcriptional level.
Introduction
Biological systems function on a mechanism of inputs and outputs, each triggered by and triggering a specific response. Feedback control (a.k.a. autoregulation) is a regulatory principle wherein the output of a system amplifies (positive feedback) or reduces (negative feedback) its own production. Negative feedback regulation is ubiquitous among biological systems and belongs to the most thoroughly characterized network motifs (Nitzan et al., 2017;Shen-Orr et al., 2002). At the gene regulatory level, negative feedback control has been qualitatively and quantitatively studied. Most commonly, a transcription factor acts to repress its own transcription by blocking access of RNA polymerase to the promoter region. This canonical mode of negative autoregulation is universally present in living systems and in Escherichia coli more than 40% of the known transcription factors are controlled by this type of regulation (Rosenfeld et al., 2002). Several characteristics have been attributed to negative autoregulatory circuits including an altered response time and improved robustness towards fluctuations in transcript production rates (Alon, 2007).
More recently, the mechanisms underlying RNA-based gene regulation have also been investigated for their regulatory principles and network functions (Nitzan et al., 2017;Pu et al., 2019). In bacteria, small regulatory RNAs (sRNAs) constitute the largest class of RNA regulators and frequently bind to one of the major RNA-binding proteins, Hfq or ProQ. Hfq-and ProQ-associated sRNAs usually act by base-pairing with trans-encoded target mRNAs affecting translation initiation and transcript stability (Holmqvist and Vogel, 2018;Kavita et al., 2018). The sRNAs frequently target multiple transcripts and given that regulation can involve target repression or activation, it has become ever more clear that sRNAs can rival transcription factors with respect to their regulatory scope and function (Hö r et al., 2018).
Another key factor involved in post-transcriptional gene regulation is ribonuclease E (RNase E), an essential enzyme in E. coli and related bacteria required for ribosome biogenesis and tRNA maturation (Mackie, 2013). RNase E's role in sRNA-mediated expression control is manifold and includes the processing of sRNAs into functional regulators (Chao et al., 2017;Dar and Sorek, 2018a;Papenfort et al., 2015a;Updegrove et al., 2019;Chao et al., 2012) as well as the degradation of target transcripts (Massé et al., 2003;Morita et al., 2005). Inhibition of RNase E-mediated cleavage through sRNAs can stabilize the target transcript and activate gene expression (Frö hlich et al., 2013;Papenfort et al., 2013;Richards and Belasco, 2019).
Global transcriptome analyses have revealed the presence of numerous sRNAs produced from 3' UTRs (untranslated regions) of mRNAs, a significant subset of which requires RNase E for their maturation . These 3' UTR-derived sRNAs can be produced from monocistronic (Chao and Vogel, 2016;Grabowicz et al., 2016;Huber et al., 2020;Wang et al., 2020) as well as long, operonic mRNAs (Davis and Waldor, 2007;De Mets et al., 2019;Miyakoshi et al., 2019) and typically act to regulate multiple target mRNAs in trans. The RNase E C-terminus also provides the scaffold for a large protein complex, called the degradosome, which in the major human pathogen, Vibrio cholerae, has recently been implicated in the turn-over of hypomodified tRNA species (Kimura and Waldor, 2019).
The present work addresses the regulatory role of RNase E in V. cholerae at a genome-wide level. To this end, we generated a temperature-sensitive variant of RNase E in V. cholerae and employed TIER-seq (transiently-inactivating-an-endoribonuclease-followed-by-RNA-seq) to globally map RNase E cleavage sites (Chao et al., 2017). Our analyses identified~25,000 RNase E-sensitive sites and revealed the presence of numerous stable sRNAs originating from the 3' UTR of coding sequences. Detailed analyses of two of these sRNAs, OppZ and CarZ, showed that 3' UTR-derived sRNAs can act in an autoregulatory manner to reduce the expression of mRNAs produced from the same genetic locus. The molecular mechanism of sRNA-mediated gene autoregulation likely involves inhibition of translation initiation by the sRNA followed by Rho-dependent transcription termination. This setup directly links the regulatory activity of the sRNAs to their de novo synthesis, analogous to their transcription factor counterparts. However, we show that, in contrast to transcriptional regulators, autoregulatory RNAs can act at a subcistronic level to allow discoordinate operon expression.
TIER-seq analysis of V. cholerae
The catalytic activity of RNase E (encoded by the rne gene) is critical for many bacteria, including V. cholerae (Cameron et al., 2008). To study the role of RNase E in this pathogen, we mutated the DNA sequence of the V. cholerae chromosome encoding leucine 68 of RNase E to phenylalanine (Figure 1-figure supplement 1). This mutation is analogous to the originally described N3071 rne TS isolate of E. coli (Apirion and Lassar, 1978) and exhibits full RNase E activity at permissive temperatures (30˚C), but is rendered inactive at non-permissive temperatures (44˚C). We validated our approach by monitoring the expression of two known substrates of RNase E in V. cholerae: A) 5S rRNA, which is processed by RNase E from the 9S precursor rRNA (Papenfort et al., 2015b), and B) the MicX sRNA, which contains two RNase E cleavage sites (Davis and Waldor, 2007). For both RNAs, transfer of the wild-type strain to 44˚C only mildly affected their expression, whereas the equivalent procedure performed with the rne TS strain led to the accumulation of the 9S precursor and the full-length MicX transcript (Figure 1A, lanes 1-2 vs. 3-4). Additionally, accumulation of the two RNase E-dependent processing intermediates of MicX was reduced in the rne TS strain at the non-permissive temperature.
These results showed that we successfully generated a temperature-sensitive RNase E variant in V. cholerae and enabled us to employ TIER-seq to determine RNase E-dependent cleavage sites at a global scale. To this end, we cultivated V. cholerae wild-type and rne TS strains at 30˚C to late exponential phase (OD 600 of 1.0), divided the cultures in half and continued incubation for 60 min at either 30˚C or 44˚C. Total RNA was isolated and subjected to deep sequencing. We obtained ~187 million reads from the twelve samples (corresponding to three biological replicates of each strain and condition; Figure 1-figure supplement 2A), resulting in ~98 million unique 5' ends mapping to the V. cholerae genome. Comparison of the 5' ends detected in wild-type and rne TS at 30˚C showed almost no difference between the two strains (Pearson correlation coefficients R² ranging from 0.82 to 0.99, depending on the compared replicates), whereas the same analysis at 44˚C revealed 24,962 depleted sites in the rne TS strain (Figure 1-figure supplement 2B-C). Given that γ-proteobacteria such as V. cholerae do not encode 5' to 3' exoribonucleases (Mohanty and Kushner, 2018), we designated these positions as RNase E-specific cleavage sites (Supplementary file 1).
[Figure 1 legend: (A) V. cholerae wild-type and rne TS strains were grown at 30˚C to stationary phase (OD 600 of 2.0). Cultures were divided in half and continuously grown at either 30˚C or 44˚C for 60 min. Cleavage patterns of 5S rRNA and 3' UTR-derived MicX were analyzed on Northern blots. Closed triangles indicate mature 5S or full-length MicX; open triangles indicate the 9S precursor or MicX processing products. (B, C, D) Biological triplicates of V. cholerae wild-type and rne TS strains were grown at 30˚C to late exponential phase (OD 600 of 1.0). Cultures were divided in half and continuously grown at either 30˚C or 44˚C for 60 min. Isolated RNA was subjected to RNA-seq and RNase E cleavage sites were determined as described in the materials and methods section. (B) Number of cleavage sites detected per gene. (C) Classification of RNase E sites by their genomic location. (D) The RNase E consensus motif based on all detected cleavage sites. The total height of the error bar is twice the small sample correction. The online version of this article includes the following source data and figure supplement(s) for figure 1: Source data 1. Full Northern blot images for the corresponding detail sections shown in Figure 1 and RNase E cleavage site counts within genes or transcript categories.]
Next, we analysed the ~25,000 RNase E sites with respect to frequency per gene and their distribution among different classes of transcript. We discovered that RNase E cleavage sites occur with a frequency of 2.8 (median)/6.3 (mean) sites per kb (Figure 1B). The majority of cleavage events occurs in coding sequences (~69.1%), followed by 5' UTRs (~8.4%), antisense RNAs (~7.1%), 3' UTRs (~5.3%), intergenic regions (~4.0%), and sRNAs (~0.6%) (Figure 1C). RNase E sites were slightly enriched around start and stop codons of mRNAs (Figure 1-figure supplement 3A). Furthermore, cleavage coincided with an increase in AU-content (Figure 1-figure supplement 3B) and a rise in minimal folding energies (Figure 1-figure supplement 3C), suggesting reduced secondary structure around RNase E sites. Together, these data allowed us to determine a consensus motif for RNase E in V. cholerae (Figure 1D). This 5-nt sequence, i.e.
'RN#WUU', is highly similar to previously determined RNase E motifs of Salmonella enterica (Chao et al., 2017) and Rhodobacter sphaeroides (Förstner et al., 2018), indicating that RNase E operates by a conserved mechanism of recognition and cleavage.
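The core TIER-seq comparison, flagging 5' ends that are well covered in the wild type but depleted after transient RNase E inactivation, together with a simple tally of nucleotide frequencies around the called sites, can be sketched as follows (Python; the normalisation, replicate handling, and thresholds actually used in the study are not reproduced here, so the cutoffs are illustrative):

```python
import numpy as np

def call_rnase_e_sites(wt_44, ts_44, min_reads=10, min_ratio=3.0):
    """Candidate RNase E cleavage sites from 5'-end read counts at 44 C.

    wt_44, ts_44 : library-size-normalised 5'-end counts per genomic
    position for the wild-type and rne-TS strains, respectively.
    """
    wt = np.asarray(wt_44, dtype=float)
    ts = np.asarray(ts_44, dtype=float)
    ratio = (wt + 1.0) / (ts + 1.0)       # pseudocount avoids division by zero
    return np.where((wt >= min_reads) & (ratio >= min_ratio))[0]

def position_frequencies(site_seqs):
    """Nucleotide frequencies for aligned windows (e.g. 5 nt) around called sites,
    the raw material for a consensus motif such as the one in Figure 1D.
    Assumes every sequence uses only the bases A, C, G, and U."""
    length = len(site_seqs[0])
    freqs = {b: np.zeros(length) for b in "ACGU"}
    for seq in site_seqs:
        for pos, base in enumerate(seq):
            freqs[base][pos] += 1.0
    return {b: f / len(site_seqs) for b, f in freqs.items()}
```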
RNase E-mediated maturation of sRNAs
Earlier work on sRNA biogenesis in bacteria revealed that the 3' UTR of coding transcripts can serve as source for non-coding regulators and that RNase E is frequently required to cleave the sRNA from the mRNA (Miyakoshi et al., 2015). In V. cholerae, we previously annotated 44 candidate sRNAs located in the 3' UTR of mRNAs (Papenfort et al., 2015b). To analyse which of these sRNAs depend on RNase E for maturation, we searched for RNase E-cleavage sites matching with the first three bases of the annotated sRNAs. 17 sRNAs revealed potential RNase E-dependent maturation (Supplementary file 2A) and using Northern blot analyses of wild-type and rne TS samples, we were able to confirm these results for 9 sRNAs (Vcr016, Vcr041, Vcr044, Vcr045, Vcr053, Vcr064, FarS, Vcr079, and Vcr084; Figure 1-figure supplement 4). In all cases, transfer of the rne TS strain to nonpermissive temperatures led to a change in mature sRNA levels and/or their upstream processing intermediates. We also discovered several sRNAs undergoing maturation by RNase E (Supplementary file 2B). Specifically, Northern blot analysis of Vcr043, Vcr065, and Vcr082 revealed that these sRNAs accumulate as multiple stable intermediates (Figure 1-figure supplement 5) that may contain different regulatory capacities as previously described for ArcZ and RprA of S. enterica (Chao et al., 2017;Papenfort et al., 2015a;Soper et al., 2010). In addition, we also analysed the expression of several RNase E-independent sRNAs (RyhB, Spot 42 and VqmR; Figure 1-figure supplement 6) on Northern blots. Inactivation of RNase E did not affect the levels of the mature sRNAs or any processed intermediates.
OppZ is produced from the oppABCDF 3' end
To understand the regulatory functions of 3' UTR-derived sRNAs in V. cholerae, we focussed on Vcr045, which is processed from the 3' end of the oppABCDF mRNA (encoding an oligopeptide transporter) and which we hence named OppZ. The oppZ gene is 52 bps long and conserved among the Vibrios (Figure 2A). RNase E-mediated cleavage of oppABCDF occurs immediately downstream of the oppF stop codon and, using the rne TS strain, we were able to validate RNase E-dependent processing of OppZ (Figure 2B). Northern and Western blot analysis of a V. cholerae strain carrying a 3XFLAG epitope at the C-terminus of the chromosomal oppA and oppB genes revealed that OppZ expression coincided with the expression of both proteins (Figure 2C, lanes 1-4). Previous transcriptome data showed that expression of oppABCDF is controlled by a single promoter located ~120 bps upstream of oppA (Papenfort et al., 2015b), indicating that the sRNA is coexpressed with all five opp genes. To test this prediction, we replaced the native promoter upstream of the chromosomal oppA gene with the L-arabinose-inducible pBAD promoter and monitored OppA, OppB, and OppZ expression under inducing and non-inducing conditions. In the absence of the inducer, expression of OppA/B and OppZ was strongly reduced (Figure 2C, lanes 5-8) and L-arabinose had no effect on the activity of the native oppA promoter (Figure 2C, lanes 9-10). In contrast, activation of the pBAD promoter led to a significant increase in OppA/B and OppZ (Figure 2C, lanes 11-12), indicating that expression of the oppABCDF-oppZ operon is indeed controlled by a single promoter.
To support these results and confirm production of OppZ from the longer precursor transcript, we generated two plasmids carrying either only oppZ or oppF-oppZ under the control of the P Tac promoter, and the size of the processed OppZ transcript was comparable to endogenously expressed OppZ (lane 1) and OppZ transcribed directly by the P Tac promoter (lane 3). We also repeated these experiments in a V. cholerae hfq mutant (Svenningsen et al., 2009).
[Figure 2 legend: (B) V. cholerae wild-type and rne TS strains were grown at 30˚C to stationary phase (OD 600 of 2.0). Cultures were divided in half and continuously grown at either 30˚C or 44˚C for 30 min. OppZ synthesis was analyzed by Northern blot with 5S rRNA as loading control. The triangle indicates the size of mature OppZ. (C) Protein and RNA samples were obtained from V. cholerae oppA::3XFLAG oppB::3XFLAG strains carrying either the native oppA promoter or the inducible pBAD promoter upstream of oppA. Samples were collected at the indicated OD 600 and tested for OppA and OppB production by Western blot and for OppZ expression by Northern blot. RNAP and 5S rRNA served as loading controls for Western and Northern blots, respectively. Lanes 1-8: growth without L-arabinose. Lanes 9-12: growth with either H₂O (-) or L-arabinose (+) (0.2% final conc.). (D) V. cholerae wild-type (control) and hfq::3XFLAG (Hfq-FLAG) strains were grown to stationary phase (OD 600 of 2.0), lysed, and subjected to immunoprecipitation using the anti-FLAG antibody. RNA samples of lysate (total RNA) and co-immunoprecipitated fractions were analyzed on Northern blots. 5S rRNA served as loading control. The online version of this article includes the following source data and figure supplement(s) for figure 2: Source data 1. Full Northern and Western blot images for the corresponding detail sections shown in Figure 2.]
[Figure 3 legend: (A) Lines indicate cut-offs for differentially regulated genes at 3-fold regulation and FDR-adjusted p-value ≤ 0.05. Genes with an FDR-adjusted p-value < 10⁻¹⁴ are indicated as droplets at the top border of the graph. (B) Predicted OppZ secondary structure and base-pairing to oppB. Arrows indicate the mutations tested in (C) and (D). (C) E. coli strains carrying a translational reporter plasmid with the oppAB intergenic region placed between mKate2 and gfp were co-transformed with a control plasmid or the indicated OppZ expression plasmids. Transcription of the reporter and oppZ were driven by constitutive promoters. Cells were grown to OD 600 = 1.0 and fluorophore production was measured. mKate and GFP levels of strains carrying the control plasmid were set to 1. Error bars represent the SD of three biological replicates. (D) Single-plasmid regulation was measured by inserting the indicated oppZ variant into the 3' UTR of a translational oppB::gfp fusion. Expression was driven from a constitutive promoter. E. coli strains carrying the respective plasmids were grown to OD 600 = 1.0 and GFP production was measured. Fluorophore levels from control fusions without an sRNA gene were set to one and error bars represent the SD of three biological replicates. OppZ expression was tested by Northern blot; 5S rRNA served as loading control. The online version of this article includes the following source data and figure supplement(s) for figure 3: Source data 1. Full Northern blot images for the corresponding detail sections shown in Figure 3 and raw data for fluorescence measurements.]
Here, processing of the precursor into OppZ was still detected
(lane 8), however, the steady-state levels of OppZ were lower, suggesting that OppZ binds Hfq. Indeed, stability experiments using rifampicin-treated V. cholerae showed that OppZ half-life is reduced in Dhfq cells (Figure 2-figure supplement 2), and RNA co-immunoprecipitation experiments of chromosomal Hfq::3XFLAG revealed that OppZ interacts with Hfq in vivo ( Figure 2D). Together, these data show that OppZ is an Hfq-dependent sRNA that is processed from the 3' UTR of the polycistronic oppABCDF mRNA by RNase E.
Feedback autoregulation at the suboperonic level
Hfq-binding sRNAs control gene expression by base-pairing with trans-encoded target transcripts (Kavita et al., 2018). To determine the targets of OppZ in V. cholerae, we cloned the sRNA (starting from the RNase E cleavage site) on a plasmid under the control of the pBAD promoter. Induction of the pBAD promoter for 15 min resulted in a strong increase in OppZ levels (~30-fold; Figure 3-figure supplement 1A). [...] Using the RNAhybrid algorithm (Rehmsmeier et al., 2004), we were able to predict RNA duplex formation of the oppB translation initiation site with the 5' end of the OppZ sRNA (Figure 3B). We confirmed this interaction using a variant of a previously reported post-transcriptional reporter system (Corcoran et al., 2012). Here, the first gene of the operon is replaced by the red-fluorescent mKate2 protein, followed by the oppAB intergenic sequence and the first five codons of oppB, which were fused to gfp (Figure 3C, top). Transfer of this plasmid into E. coli and co-transformation of the OppZ over-expression plasmid resulted in strong repression of GFP (~7-fold), while mKate2 levels remained constant. Mutation of either OppZ or oppB (mutations M1, see Figure 3B) abrogated regulation of GFP, and combining both mutations restored regulation (Figure 3C, bottom). In contrast, OppZ-mediated repression of OppB::GFP was strongly reduced in E. coli lacking hfq (Figure 3-figure supplement 2A-B). We also generated three additional variants of the reporter plasmids in which we included the oppBC, oppBCD, and oppBCDF sequences fused to GFP (Figure 3-figure supplement 2C). In all cases, OppZ readily inhibited GFP but did not affect mKate2. These results confirm that OppZ promotes discoordinate expression of the oppABCDF operon.
Next, we aimed to reproduce OppZ-mediated repression from a single transcript. To this end, we compared GFP production of a translational oppB::gfp reporter with the same construct carrying the oppZ sequence downstream of gfp (Figure 3D, top). Northern blot analysis revealed that OppZ was efficiently clipped off from the gfp transcript in this construct, and fluorescence measurements showed that OppZ also inhibited GFP expression (Figure 3D, bottom, lane 1 vs. 2). We confirmed that this effect is specific to base-pairing of OppZ with the oppAB intergenic sequence, as we were able to recapitulate our previous compensatory base-pair exchange experiments using the single-plasmid system (Figure 3D). In addition, mutation of the RNase E recognition site in oppZ (UU→GG, mutation M2; Figure 3B) blocked OppZ maturation and abrogated repression (Figure 3D). Together, our data demonstrate that OppZ down-regulates protein synthesis from its own cistron. Furthermore, mutation M2 shows that this autoregulation is not mediated by long-distance intramolecular base-pairing of OppZ with the oppB 5' UTR, but rather requires RNase E-dependent maturation of the transcript followed by Hfq-dependent base-pairing.
Translational control of OppZ synthesis
The above experiments revealed that OppZ inhibits protein production through feedback control; however, it was not clear whether OppZ would also inhibit its own synthesis. To address this question, we generated an OppZ over-expression plasmid in which we mutated the sequence of the terminal stem-loop at eight positions. We call this construct 'regulator OppZ' (Figure 4A). These mutations are not expected to inactivate the base-pairing function of OppZ, but they allow us to differentiate the levels of native OppZ and regulator OppZ on Northern blots. Indeed, when tested in V. cholerae, over-expression of regulator OppZ inhibited OppB::3XFLAG production but did not affect OppA::3XFLAG levels (Figure 4B, left). Importantly, regulator OppZ also reduced the expression of native OppZ (Figure 4B, right), and introduction of the M1 mutation (see Figure 3B) into regulator OppZ abrogated this effect. These results revealed that OppZ also exerts autoregulation of its own transcript.
Gene expression control by sRNAs typically occurs post-transcriptionally (Gorski et al., 2017), raising the question of how OppZ achieves autoregulation at the molecular level. Given that OppZ inhibits OppB production (Figure 4B), we hypothesized that OppZ synthesis might be linked to oppB translation. To test this prediction, we inactivated the chromosomal start codon of oppB (ATG→ATC) and monitored OppA/B and OppZ expression by Western and Northern blot, respectively. As expected, mutation of the oppB start codon had no effect on OppA::3XFLAG levels, but abolished OppB::3XFLAG production (Figure 4C, top). Lack of oppB translation also resulted in a strong decrease in OppZ levels (Figure 4C, bottom); however, it did not change OppZ stability, showing that OppZ production is independent of the cellular OppB levels. Based on these and the results above, we propose that autorepression of oppBCDF-oppZ must occur by a mechanism involving both translation inhibition and transcription termination.

[Figure 4 legend (panels B-C): (B) V. cholerae oppA::3XFLAG oppB::3XFLAG strains carrying a control plasmid (pCMW-1) or a plasmid expressing regulator OppZ (pMD194, pMD195) were grown to stationary phase (OD600 of 2.0); OppA/OppB production was tested by Western blot, and native OppZ and regulator OppZ were monitored on Northern blot using oligonucleotides binding the respective loop sequence variants; RNAP and 5S rRNA served as loading controls. (C) The oppB start codon was mutated to ATC in an oppA::3XFLAG oppB::3XFLAG background; strains with wild-type or mutated oppB start codon were grown in LB medium, and samples collected at the indicated OD600 were tested for OppA/OppB (Western blot) and OppZ (Northern blot), with RNAP and 5S rRNA as loading controls. Source data 1: full Northern and Western blot images for the detail sections shown in Figure 4.]
OppZ promotes transcription termination through Rho
To explain the reduction of OppZ expression in the absence of oppB translation, we considered premature transcription termination as a possible cause. This hypothesis was supported by our finding that OppZ over-expression efficiently reduced oppB mRNA levels without significantly affecting transcript stability (Figure 3-figure supplement 1C-D). In E. coli, the Rho protein accounts for a major fraction of all transcription termination events (Ciampi, 2006) and has previously been associated with the regulatory activity of Hfq-dependent sRNAs (Bossi et al., 2012; Sedlyarova et al., 2016; Wang et al., 2015). Rho is specifically inhibited by bicyclomycin (BCM; Zwiefka et al., 1993), and we therefore tested the effect of the antibiotic on OppZ expression in V. cholerae wild-type and the oppB start codon mutant. Whereas BCM had no effect on OppZ synthesis in wild-type cells (Figure 5A, lane 1 vs. 2), it strongly increased OppZ and oppBCDF expression in the absence of oppB translation (Figure 5A, lane 3 vs. 4, and Figure 5B). We confirmed these results by applying Term-seq analysis (Dar et al., 2016) to wild-type and oppB start codon mutants cultivated with or without BCM. Detailed inspection of transcript coverage at the oppABCDF-oppZ genomic locus showed that lack of oppB translation down-regulated the expression of oppBCDF-oppZ, while the presence of BCM suppressed this effect (Figure 5C and Supplementary file 3B). Similarly, inhibition of the oppBCDF mRNA and OppZ by over-expression of regulator OppZ (see Figure 4A) was suppressed in the presence of BCM, whereas OppB protein levels remained low, presumably due to continued repression of oppB translation initiation by OppZ (Figure 5D-E).
To map the position of Rho-dependent transcription termination in oppB, we generated five additional strains carrying a STOP mutation at the 2nd, 15th, 65th, 115th, or 215th codon of the chromosomal oppB gene (Figure 6A). In addition, we mutated the start codons of oppC, oppD, and oppF and probed OppZ levels on Northern blot (Figure 6B). In accordance with the data presented in Figure 4C, mutation of the oppB start codon resulted in strongly decreased OppZ levels (Figure 6B, lane 1 vs. 2), and we observed similar results when the STOP mutation was introduced at the 2nd, 15th, and 65th codon of oppB (Figure 6B, lanes 3-5). In contrast, a STOP mutation at codon 115 led to increased OppZ expression (lane 6), and OppZ levels were fully restored when the STOP was placed at codon 215 of oppB (lane 7). Likewise, mutation of the oppC, oppD, and oppF start codons had no effect on OppZ production (Figure 6B, lanes 8-10). To summarize, our data indicate that autorepression of the oppBCDF-oppZ genes relies on inhibition of oppB translation initiation by OppZ, which triggers Rho-dependent transcription termination in the distal part of the oppB sequence.
CarZ is another autoregulatory sRNA from V. cholerae

Our TIER-seq analysis revealed 17 3' UTR-derived sRNAs produced by RNase E-mediated cleavage in V. cholerae (Supplementary file 2A). Detailed analysis of OppZ showed that this sRNA serves as an autoregulatory element inhibiting the oppBCDF genes as well as its own synthesis. We therefore asked how widespread RNA-mediated autoregulation is and whether the other 16 3' UTR-derived sRNAs might serve a similar function in V. cholerae. To this end, we searched for potential base-pairing sequences between the sRNAs and the translation initiation regions of their associated genes using the RNAhybrid algorithm (Rehmsmeier et al., 2004). Indeed, we were able to predict stable RNA duplex formation between the Vcr084 sRNA (located in the 3' UTR of the carAB operon, encoding carbamoyl phosphate synthetase) and the 5' UTR of carA, which is the first gene of the operon (Figure 7A-B). In analogy to OppZ, we named this sRNA CarZ. Plasmid-borne expression of CarZ strongly inhibited GFP production from carA::gfp and carAB::gfp reporters in E. coli (Figure 7C and Figure 7-figure supplement 1A-B). Transcription of carAB-carZ is controlled by a single promoter located upstream of carA, and the three genes are co-expressed in vivo (Figure 7D and Papenfort et al., 2015b). These results suggested that CarZ provides feedback regulation, and using an experimental strategy analogous to Figure 4A, we were able to show that CarZ inhibits CarA and CarB protein expression as well as its own synthesis (Figure 7B,E). Furthermore, introduction of a STOP codon at the 2nd codon of the chromosomal carA gene abrogated CarZ expression, and similar results were obtained when the STOP codon was placed at the 2nd codon of carB (Figure 7F). Of note, inactivation of carA translation also blocked CarB production, indicating, among other possibilities, that translation of the two ORFs might be coupled and that expression of CarZ relies on active translation of both ORFs. Together, these results provide evidence that CarZ is an autoregulatory sRNA and suggest that this function might be more widespread among the growing class of 3' UTR-derived sRNAs.

[Figure 5 legend: OppZ promotes transcription termination through Rho. (A) V. cholerae oppA::3XFLAG oppB::3XFLAG oppF::3XFLAG strains with wild-type or mutated oppB start codon were grown to early stationary phase (OD600 of 1.5), divided in half, and treated with either H2O or BCM (25 µg/ml final conc.) for 2 hr before protein and RNA samples were collected; OppA, OppB, and OppF production was tested by Western blot and OppZ expression was monitored by Northern blot, with RNAP and 5S rRNA as loading controls. (B) Biological triplicates of strains with wild-type or mutated oppB start codon were treated with BCM as in (A); oppABCDF expression in the oppB start codon mutant relative to the wild-type control was analyzed by qRT-PCR (error bars: SD of three biological replicates). (C) Triplicate samples from (B) were subjected to Term-seq; average coverage of the opp operon is shown for one representative replicate, with the coverage cut-off set at the maximum coverage of annotated genes. (D) Strains carrying a control plasmid (pMD397) or a plasmid expressing regulator OppZ (pMD398) were treated with BCM as in (A); OppA and OppB production was tested by Western blot, and native and regulator OppZ were monitored on Northern blot using oligonucleotides binding the respective loop sequence variants, with RNAP and 5S rRNA as loading controls. (E) Levels of oppABCDF in the experiment described in (D) were analyzed by qRT-PCR (error bars: SD of three biological replicates). Source data 1: full blot images for the detail sections shown in Figure 5 and raw data for transcript changes as determined by qRT-PCR.]
Autoregulatory sRNAs modify the kinetics of gene induction
Bacterial sRNAs acting at the post-transcriptional level have recently been reported to add unique features to gene regulatory circuits, including the ability to promote discoordinate operon expression (Nitzan et al., 2017). Plasmid-borne over-expression of OppZ resulted in decreased expression of the oppBCDF cistrons, while leaving oppA levels unaffected (Figure 3-figure supplement 1B-C). We therefore asked if OppZ expression had a similar effect on the production of the corresponding proteins. To this end, we cultivated wild-type and oppZ-deficient V. cholerae (both carrying a control plasmid), as well as ΔoppZ cells carrying an OppZ over-expression plasmid, to various stages of growth and monitored OppA and OppB levels on Western blot. Given the relatively mild effect of oppZ deficiency on steady-state OppB protein levels (Figure 8-figure supplement 1A), we next investigated the role of OppZ in the dynamics of OppABCDF expression. Specifically, transcription factor-controlled negative autoregulation has been reported to affect the response time of regulatory networks (Rosenfeld et al., 2002), and we speculated that sRNA-mediated feedback control could have a similar effect. To test this hypothesis, we employed a V. cholerae strain in which we replaced the native promoter upstream of the chromosomal oppA gene with the L-arabinose-inducible pBAD promoter (see Figure 2C) and monitored the kinetics of OppA and OppB production in wild-type and ΔoppZ cells before and at several time points post induction (Figure 8A). Whereas OppA protein accumulated equally in wild-type and oppZ mutants (Figure 8B), expression of OppB was significantly increased in ΔoppZ cells (Figure 8C). This effect was most prominent at later stages after induction (>30 min) and coincided with the accumulation of OppZ (Figure 8A). Calculation of the OppB response time (the time to reach 50% of the maximal expression value) showed a significant delay in ΔoppZ cells (~78 min) when compared to the wild-type control (~52 min).

[Figure 7 legend (fragment, panels C-E): (C) GFP production from the indicated fusions, driven by constitutive promoters, was measured in E. coli at OD600 = 1.0; fluorophore levels from control fusions without an sRNA gene were set to one (error bars: SD of three biological replicates), and CarZ expression was tested by Northern blot with 5S rRNA as loading control. (D) Protein and RNA samples were obtained from V. cholerae carA::3XFLAG carB::3XFLAG carrying either the native carA promoter or the inducible pBAD promoter upstream of carA; samples collected at the indicated OD600 were tested for CarA/CarB production (Western blot) and CarZ expression (Northern blot), with RNAP and 5S rRNA as loading controls; lanes 1-8: growth without L-arabinose; lanes 9-12: growth with either H2O (-) or L-arabinose (+) (0.2% final conc.). (E) Strains carrying a control plasmid or a plasmid expressing a CarZ variant with a mutated stem loop (regulator CarZ) were grown to late exponential phase (OD600 of 1.0); CarA and CarB production was tested by Western blot, and native or regulator CarZ was monitored on Northern blot using oligonucleotides binding the respective loop sequence variants, with RNAP and 5S rRNA as loading controls.]
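To make the response-time calculation described above concrete, here is a minimal sketch (our own illustration in Python, not the published analysis code; the function name and the toy induction profile are hypothetical) of how the time to reach 50% of the maximal expression value can be computed from quantified band intensities:

```python
import numpy as np

def response_time(times, signal, fraction=0.5):
    """Time at which `signal` first reaches `fraction` of its maximum.

    times  : 1D array of sampling time points (e.g., minutes post induction)
    signal : 1D array of quantified band intensities at those time points
    Linear interpolation is used between the two samples that bracket
    the threshold crossing.
    """
    times = np.asarray(times, dtype=float)
    signal = np.asarray(signal, dtype=float)
    threshold = fraction * signal.max()
    above = np.nonzero(signal >= threshold)[0]
    if len(above) == 0:
        raise ValueError("signal never reaches the threshold")
    i = above[0]
    if i == 0:
        return times[0]
    # interpolate between the bracketing samples
    t0, t1 = times[i - 1], times[i]
    s0, s1 = signal[i - 1], signal[i]
    return t0 + (threshold - s0) * (t1 - t0) / (s1 - s0)

# hypothetical OppB induction profile (arbitrary units)
t = [0, 15, 30, 60, 90, 120]
wild_type = [0.0, 0.1, 0.3, 0.6, 0.9, 1.0]
print(round(response_time(t, wild_type), 1))  # -> 50.0 min
```

Applied to the densitometry time courses above, this kind of interpolation yields the ~52 min (wild-type) versus ~78 min (ΔoppZ) comparison.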
We therefore conclude that, like transcription factors, autoregulatory sRNAs change the dynamics of their associated genes; however, in contrast to transcription factors, sRNAs act at the post-transcriptional level and can direct this effect towards a specific subgroup of genes within an operon.
Discussion
Base-pairing sRNAs regulating the expression of trans-encoded mRNAs are a major pillar of gene expression control in bacteria (Gorski et al., 2017). Transcriptomic data obtained from various microorganisms have shown that sRNAs are produced from almost all genomic loci and that the 3' UTRs of coding genes are a hotspot for sRNAs acting through Hfq (Adams and Storz, 2020). Expression of 3' UTR-derived sRNAs can occur either from independent promoters or by ribonucleolytic cleavage, typically involving RNase E (Miyakoshi et al., 2015). In the latter case, production of the sRNA is intimately connected to the activity of the promoter driving the expression of the upstream mRNA, suggesting that the regulatory function of the sRNA is linked to the biological role of the associated genes. Indeed, such functional interdependence has now been demonstrated in several cases (Chao and Vogel, 2016; De Mets et al., 2019; Huber et al., 2020; Miyakoshi et al., 2019; Wang et al., 2020); however, it remained unclear if and how these sRNAs also affect their own transcripts. In this regard, OppZ and CarZ provide a paradigm for 3' UTR-derived sRNAs allowing autoregulation at the post-transcriptional level. This new type of feedback inhibition is independent of auxiliary transcription factors, and we could show that autoregulation by sRNAs can either involve the full transcript (CarZ) or act at the suboperonic level (OppZ).
Features of RNase E-mediated gene control
RNase E is a principal factor for RNA turnover in almost all Gram-negative bacteria (Bandyra and Luisi, 2018). The protein forms a tetramer in vivo and serves as the scaffold for the degradosome, a large, multi-enzyme complex typically containing the phosphorolytic exoribonuclease PNPase, the RNA helicase RhlB, and the glycolytic enzyme enolase (Aït-Bara and Carpousis, 2015). Substrates of RNase E are preferentially AU-rich and harbor a 5' mono-phosphate. Thus, the enzyme relies on RNA pyrophosphohydrolases such as RppH, which convert the 5' terminus from a triphosphate to a monophosphate, before transcript degradation can be initiated (Deana et al., 2008). Recognition of a substrate is followed by scanning of RNase E for suitable cleavage sites along the transcript (Richards and Belasco, 2019). TIER-seq-based identification of a consensus sequence for RNase E target recognition revealed highly similar motifs for V. cholerae (Figure 1D) and S. enterica (Chao et al., 2017). These results further support the previously proposed 'U+2 Ruler-and-Cut' mechanism, in which a conserved uridine located two nts downstream of the cleavage site is key for RNase E activity. However, in contrast to the data obtained from S. enterica, we discovered only a mild enrichment of RNase E cleavage sites occurring at translational stop codons (Figure 1-figure supplement 3A). This observation might be explained by differences in stop codon usage between V. cholerae and S. enterica (Korkmaz et al., 2014) and could point to species-specific features of RNase E activity.

[Figure 7 legend (continuation): (F) V. cholerae carA::3XFLAG carB::3XFLAG strains with wild-type carA/carB (lane 1) or a STOP codon inserted at the 2nd codon of carA (lane 2) or carB (lane 3) were grown to late exponential phase (OD600 of 1.0); protein and RNA samples were collected and tested for CarA and CarB production by Western blot and for CarZ expression by Northern blot, with RNAP and 5S rRNA as loading controls. Source data 1: full blot images for the detail sections shown in Figure 7 and raw data for fluorescence measurements.]

The role of termination factor Rho in sRNA-mediated gene expression control

Approximately 25-30% of all genes in E. coli depend on Rho for transcription termination (Cardinale et al., 2008; Dar and Sorek, 2018b; Peters et al., 2012). BCM treatment of V. cholerae wild-type cells revealed 699 differentially regulated genes (549 upregulated and 150 repressed genes; Supplementary file 3A), suggesting an equally global role for Rho in this organism. Rho-dependent transcription termination is modulated by various additional factors (Mitra et al., 2017). These include anti-termination factors such as NusG, as well as Hfq and its associated sRNAs (Bossi et al., 2020). For sRNAs, the effect on Rho activity can be either activating or repressing. Previous work has shown that sRNAs can mask Rho-dependent termination sites and thereby promote transcriptional read-through (Lin et al., 2019; Sedlyarova et al., 2016). Negative gene regulation involving sRNAs and Rho typically includes translation inhibition by the sRNA, resulting in separation of the transcription and translation complexes (Figure 9).
Coupling of transcription and translation normally protects the nascent mRNA from Rho action, and loss of ribosome binding supports transcription termination (Bossi et al., 2012). In addition, lack of ribosome-mediated protection can render the mRNA target vulnerable to ribonucleases, e.g., RNase E, which can also lead to degradation of the sRNA (Feng et al., 2015; Massé et al., 2003). Which of these mechanisms is at play for a given sRNA-target mRNA pair is most often unknown, and it is likely that both types of regulation can occur either independently or in concert. For example, over-expression of OppZ did not affect oppB transcript stability (Figure 3-figure supplement 1D), suggesting that induction of Rho-mediated transcription termination is the main mechanism of gene repression in this sRNA-target mRNA pair. In contrast, analogous experiments testing the stability of the carA and carB transcripts upon CarZ over-expression revealed a significant drop in transcript stability for both mRNAs (Figure 7-figure supplement 2A-B). These results suggest that translation inhibition of carA by CarZ has two outcomes: first, accelerated ribonucleolytic decay of the carAB transcript, and second, Rho-mediated transcription termination. Using two regulatory mechanisms (CarZ-carA) instead of one (OppZ-oppB) might explain the strong inhibition of carA::gfp by CarZ (~10-fold, Figure 7C) when compared to the relatively weak repression (1.8-fold) of oppB::gfp by OppZ (Figure 3D).

[Figure 9 legend: Model of the OppZ-dependent mechanism of opp regulation. Transcription of the oppABCDF operon initiates upstream of oppA and, in the absence of OppZ (left), involves all genes of the operon as well as OppZ; in this scenario, all cistrons of the operon are translated. In the presence of OppZ (right), the sRNA blocks translation of oppB, and the ribosome-free mRNA is recognized by termination factor Rho. Rho catches up with the transcribing RNAP and terminates transcription prematurely within oppB. Consequently, oppBCDF are not translated and OppZ is not produced.]
Employing multiple regulatory mechanisms on one target mRNA might have led to an underestimation of the prevalence of Rho-mediated transcription termination in sRNA-mediated gene control. In fact, sRNAs frequently repress genes that lie downstream in an operon of their base-pairing target, which could point to a possible involvement of Rho (Bossi et al., 2020). Rho is known to bind cytosine-rich RNA elements (Alifano et al., 1991); however, due to the strong variability in size and composition of these sequences, predicting Rho binding sites (a.k.a. rut sites) from genomic or transcriptomic data has been a difficult task (Nadiras et al., 2018). Indeed, while our transcriptomic data of the oppB start codon mutant did not allow us to pinpoint the position of the rut site in oppB (Figure 5C), evidence obtained from genetic analyses using various oppB STOP codon mutants revealed that Rho-dependent termination likely occurs at or close to codon 115 in oppB (Figure 6B). We attribute the lack of this termination event in the transcriptomic data to the activity of 3'-to-5' exoribonucleases (e.g., RNase II or PNPase; Bechhofer and Deutscher, 2019; Mohanty and Kushner, 2018), which degrade the untranslated oppB sequence. Identifying the relevant exonucleases might well allow for an advanced annotation of global Rho-dependent termination sites, and cross-comparison with documented sRNA-target interactions could help to clarify the relevance of Rho-mediated termination in sRNA-based gene control.
Dynamics of RNA-based feedback regulation
Transcription factors and sRNAs are the principal components of gene networks. While the regulatory outcome of sRNA and transcription factor activity is often very similar, the underlying regulatory dynamics are not (Hussein and Lim, 2012). Regulatory networks involving both sRNAs and transcription factors are called mixed circuits and have now been studied in greater detail. Similar to systems relying on transcription factors, feedback regulation is common among sRNAs (Nitzan et al., 2017). However, unlike the examples presented in this study, these circuits always involve the action of a transcription factor, which has implications for their regulatory dynamics. For example, the OmpR transcription factor activates the expression of the OmrA/B sRNAs, which repress their own synthesis by inhibiting the ompR-envZ mRNA (Guillier and Gottesman, 2008). This constitutes an autoregulatory loop; however, given that transcription of OmrA/B ultimately relies on OmpR protein levels, this regulation will only become effective once sufficient OmpR turn-over has been achieved (Brosse et al., 2016). In contrast, autoregulatory circuits involving 3' UTR-derived sRNAs are independent of such auxiliary factors and therefore provide a more rapid response. In the case of OppZ-oppB, we showed that the sRNA has a rapid effect on OppB expression levels (Figure 8C), and given the involvement of Rho-mediated transcription termination in this process, we expect similar dynamics for OppZ autoregulation (Figure 9).
Another key difference between feedback regulation by transcription factors and 3' UTR-derived sRNAs is the stoichiometry of the players involved. In transcription factor-based feedback loops, the mRNA coding for the autoregulatory transcription factor can go through multiple rounds of translation, which will lead to an excess of the regulator over the target promoter. The degree of autoregulation is then determined by the cellular concentration of the transcription factor and its affinity towards its own promoter (Rosenfeld et al., 2002). In contrast, autoregulatory sRNAs that are generated by ribonucleolytic cleavage come at a 1:1 stoichiometry with their targets. However, this situation changes when the sRNA controls multiple targets. For OppZ, we have shown that oppBCDF is the only transcript regulated by the sRNA (Figure 3A), and we currently do not know if CarZ has additional targets besides carAB. In addition, not all sRNA-target interactions result in changes in transcript levels, as previously reported for the interaction of the Qrr sRNAs with the luxO transcript (Feng et al., 2015). New technologies such as RIL-seq (Melamed et al., 2020; Melamed et al., 2016), which capture the global interactome of base-pairing sRNAs independent of their regulatory state, could help to address this question and clarify the stoichiometric requirements for sRNA-mediated autoregulation.
Possible biological relevance of autoregulatory sRNAs
Autoregulation by 3' UTR-derived sRNAs allows for discoordinate operon expression, in contrast to their transcription factor counterparts. This feature might be particularly relevant for long mRNAs containing multiple cistrons, such as oppABCDF. The oppABCDF genes encode an ABC transporter allowing high-affinity oligopeptide uptake (Hiles et al., 1987). OppBCDF constitute the membrane-bound, structural components of the transport system, whereas OppA functions as a periplasmic binding protein. The overall structure of the transporter requires one unit each of OppB, OppC, OppD, and OppF, while OppA does not constitutively interact with the complex and typically accumulates to higher concentrations in the periplasm (Doeven et al., 2004). Given that transcription of oppABCDF is controlled exclusively upstream of oppA (Figure 2C and Papenfort et al., 2015b), OppZ-mediated autoregulation of oppBCDF (rather than the full operon) might help to achieve equimolar concentrations of OppB, OppC, OppD, and OppF in the cell without affecting OppA production.
The carAB genes, which are repressed by CarZ, encode carbamoyl phosphate synthetase, an enzyme complex catalyzing the first step in the separate biosynthetic pathways for the production of arginine and pyrimidine nucleotides (Castellana et al., 2014). Similar to OppBCDF, the CarAB complex contains one subunit of CarA and one subunit of CarB. Transcriptional control of carAB is complex and involves several transcription factors integrating information from the purine, pyrimidine, and arginine pathways (Charlier et al., 2018). While the exact biological role of CarZ-mediated feedback regulation of carAB requires further investigation, transcription factor-based feedback regulation has been reported to reduce transcriptional noise (Alon, 2007), which could also be an important feature of sRNA-mediated autoregulation. The OppZ and CarZ sRNAs identified in this study now provide the framework to test this prediction.
Orthogonal use of gene autoregulation by 3' UTR-derived sRNAs
Regulatory RNAs have now been established as powerful components of the synthetic biology toolbox (Qi and Arkin, 2014). RNA regulators are modular, versatile, highly programmable, and therefore ideal candidates for synthetic biology approaches. Similarly, autoregulatory loops using transcriptional repressors find ample use in synthetic regulatory circuits (Afroz and Beisel, 2013). While it might seem counterintuitive for a transcript to also produce its own repressor, negative feedback regulation has been reported to endow regulatory networks with improved robustness when disturbances to the system are imposed. Hfq-binding sRNAs providing feedback control have recently also been demonstrated to efficiently replace transcriptional regulation in artificial genetic circuits (Kelly et al., 2018). However, these sRNAs were produced from separate genes and therefore required additional transcriptional input, which increases noise. In contrast, the autoregulatory sRNAs presented here are produced by ribonucleolytic cleavage, and we have shown that both OppZ and CarZ are efficiently clipped off from foreign genes, such as gfp (Figure 3-figure supplement 3, Figure 7C). We therefore propose that autoregulatory sRNAs can be attached to the 3' UTR of other genes as well, offering a simple and highly modular concept to introduce autoregulation into a biological system. These circuits can be further tuned by modifying the base-pairing strength of the RNA duplex formed between the sRNA and the target, as well as by introducing Rho-dependent termination events. The latter could be used to avoid over-production of the sRNA, which will further shape the regulatory dynamics of the system. Given that transcriptomic analyses have revealed thousands of stable 3' UTR RNA tails derived from human transcripts (Gruber and Zavolan, 2019; Malka et al., 2017), we believe that RNA-based gene autoregulation could also be present, and find applications, in higher organisms.
Materials and methods

Bacterial strains and growth conditions
The V. cholerae strain described previously (Thelin and Taylor, 1996) was used as the wild-type strain. V. cholerae and E. coli strains were grown aerobically in LB medium at 37°C, except for temperature-sensitive strains. For stationary phase cultures of V. cholerae, samples were collected with respect to the time point when the cells reached an OD600 > 2.0, i.e., 3 hr after cells reached an OD600 reading of 2.0. For transcript stability experiments, rifampicin was used at 250 µg/ml. To inhibit Rho-dependent transcription termination, bicyclomycin (BCM; sc-391755; Santa Cruz Biotechnology, Dallas, Texas) was used at 25 µg/ml. Other antibiotics were used at the following concentrations: 100 µg/ml ampicillin; 20 µg/ml chloramphenicol; 50 µg/ml kanamycin; 50 U/ml polymyxin B; and 5,000 µg/ml streptomycin. For transient inactivation of RNase E, V. cholerae wild-type and a temperature-sensitive strain harboring the rne-3071 mutation were grown at 30°C to the indicated cell density. Cultures were divided in half and either continuously grown at 30°C or shifted to 44°C. RNA samples were collected from both strains and temperatures at the indicated time points after the temperature shift.
RK2/RP4-based conjugal transfer was used to introduce plasmids into V. cholerae from E. coli S17-1 λpir plasmid donor strains (Simon et al., 1983). Subsequently, transconjugants were selected using appropriate antibiotics and polymyxin B to specifically inhibit E. coli growth. V. cholerae mutant strains were generated as described previously (Papenfort et al., 2015b). Briefly, pKAS32 plasmids were transferred into V. cholerae strains by conjugation, and cells were screened for ampicillin resistance. Single colonies were streaked on streptomycin plates for counter-selection, and colonies were tested for the desired mutations by PCR or sequencing. Strain KPEC53467 was generated by phage P1 transduction to transfer the Δhfq::KanR allele (Baba et al., 2006) into E. coli Top10, followed by removal of the KanR cassette using plasmid pCP20 (Datsenko and Wanner, 2000) following standard protocols.
Western blot analysis and fluorescence assays
Total protein sample preparation and Western blot analyses were performed as described previously. Signals were visualized using a Fusion FX EDGE imager (Vilber Lourmat, Marne-la-Vallée, France), and band intensities were quantified using the BIO-1D software (Vilber Lourmat). 3XFLAG-tagged fusions were detected using a mouse anti-FLAG antibody (#F1804; RRID:AB_262044; Sigma-Aldrich) and a goat anti-mouse HRP-conjugated IgG antibody (#31430; RRID:AB_228307; Thermo Fisher Scientific). RNAPα served as a loading control and was detected using a rabbit anti-RNAPα antibody (#WP003; RRID:AB_2687386; BioLegend, San Diego, California) and a goat anti-rabbit HRP-conjugated IgG antibody (#16104; RRID:AB_2534776; Thermo Fisher Scientific). Fluorescence assays of E. coli strains to measure mKate and GFP expression were performed as previously described (Urban and Vogel, 2007). Cells were washed in PBS, and fluorescence intensity was quantified using a Spark 10M plate reader (Tecan, Männedorf, Switzerland). Control strains not expressing fluorescent proteins were used to subtract background fluorescence.
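For illustration, the background subtraction and normalization used in these fluorescence assays can be sketched as follows (our own illustration with hypothetical plate-reader values, not the authors' analysis code; *_bg readings come from strains not expressing the fluorescent protein, and the control strain defines 1.0):

```python
def reporter_level(sample, sample_bg, control, control_bg):
    """Background-corrected reporter fluorescence relative to the
    control strain (whose corrected level defines 1.0)."""
    return (sample - sample_bg) / (control - control_bg)

# hypothetical raw GFP readings (arbitrary units)
gfp = reporter_level(sample=1500, sample_bg=200, control=9800, control_bg=200)
print(round(gfp, 2))  # ~0.14, i.e., roughly 7-fold repression
```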
RNA-seq analysis: TIER-seq

V. cholerae wild-type and rne TS strains were grown in biological triplicates at 30°C to an OD600 of 1.0. Cultures were divided in half and either continuously grown at 30°C or shifted to 44°C. Cells were harvested from both strains and temperatures 60 min after the temperature shift by addition of 0.2 volumes of stop mix (95% ethanol, 5% (v/v) phenol) and snap-frozen in liquid nitrogen. Total RNA was isolated and digested with TURBO DNase (Thermo Fisher Scientific). cDNA libraries were prepared by vertis Biotechnology AG (Freising, Germany): total RNA samples were poly(A)-tailed, and 5'PPP structures were removed using RNA 5' Polyphosphatase (Epicentre, Madison, Wisconsin). An RNA adapter was ligated to the 5' monophosphate, and first-strand cDNA synthesis was performed using an oligo(dT) adapter and M-MLV reverse transcriptase. The resulting cDNAs were PCR-amplified, purified using the Agencourt AMPure XP kit (Beckman Coulter Genomics, Chaska, Minnesota), and sequenced on a NextSeq 500 system in single-read mode for 75 cycles.
The minimum free energy (MFE) of sequence windows was computed with RNAfold (version 2.4.14) of the Vienna package (Lorenz et al., 2011). Sequence logos were created with WebLogo (version 3.7.4; Crooks et al., 2004). Overlaps of cleavage sites with other features were identified with the 'intersect' sub-command of BEDTools (version 2.26.0; Quinlan and Hall, 2010). Pair-wise Pearson correlation coefficients between all samples were calculated based on the above-mentioned first-base-in-read coverages, taking into account positions with a total sum of at least 10 reads in all samples combined. Positions representing outliers, with coverage values above the 99.99th percentile in one or more read libraries, were not considered. The values were computed using the 'corr' function of the pandas DataFrame class (https://doi.org/10.5281/zenodo.3509134). For further details, please see the analysis scripts linked in the data and code availability section.
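As a sketch of this correlation step (our own reconstruction with hypothetical library names, not the deposited analysis scripts), the stated filters and pandas call can be written as:

```python
import pandas as pd

def pairwise_correlations(cov: pd.DataFrame) -> pd.DataFrame:
    """Pearson correlations between read libraries.

    cov: rows = genomic positions, columns = read libraries (coverage).
    """
    # keep positions with a total of at least 10 reads across all samples
    cov = cov[cov.sum(axis=1) >= 10]
    # drop positions that are coverage outliers (> 99.99th percentile)
    # in one or more libraries
    outlier = (cov > cov.quantile(0.9999)).any(axis=1)
    cov = cov[~outlier]
    return cov.corr(method="pearson")

# toy example with hypothetical libraries
df = pd.DataFrame({
    "wt_30C_rep1": [12, 0, 5, 300, 8],
    "wt_44C_rep1": [10, 1, 6, 280, 9],
})
print(pairwise_correlations(df))
```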
RNA-seq analysis: Identification of OppZ targets
V. cholerae strains carrying either pBAD1K-ctrl or pBAD1K-oppZ were grown in biological triplicates to an OD600 of 0.5 and treated with 0.2% L-arabinose (final conc.). Cells were harvested after 15 min by addition of 0.2 volumes of stop mix (95% ethanol, 5% (v/v) phenol) and snap-frozen in liquid nitrogen. Total RNA was isolated and digested with TURBO DNase (Thermo Fisher Scientific). Ribosomal RNA was depleted using the Ribo-Zero kit for Gram-negative bacteria (#MRZGN126; Illumina, San Diego, California), and RNA integrity was confirmed with an Agilent 2100 Bioanalyzer. Directional cDNA libraries were prepared using the NEBNext Ultra II Directional RNA Library Prep Kit for Illumina (#E7760; NEB). The libraries were sequenced on a HiSeq 1500 system in single-read mode for 100 cycles. The read files in FASTQ format were imported into CLC Genomics Workbench v11 (RRID:SCR_011853; Qiagen, Hilden, Germany) and trimmed for quality and 3' adaptors. Reads were mapped to the V. cholerae reference genome (NCBI accession numbers: NC_002505.1 and NC_002506.1), including annotations for Vcr001-Vcr107 (Papenfort et al., 2015b), using the 'RNA-Seq Analysis' tool with standard parameters. Reads mapping in CDS were counted, and genes with a total count >15 across all samples were considered for analysis. Read counts were normalized (CPM) and transformed (log2). Differential expression was tested using the built-in tool corresponding to edgeR in exact mode with tagwise dispersions ('Empirical Analysis of DGE'). Genes with a fold change ≥ 3.0 and an FDR-adjusted p-value ≤ 0.05 were considered differentially expressed.
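The final thresholding can be reproduced downstream of any differential-expression tool; the sketch below assumes a hypothetical results table with log2 fold changes and FDR-adjusted p-values, which need not match the exact CLC Genomics Workbench output format:

```python
import numpy as np
import pandas as pd

def differentially_expressed(results: pd.DataFrame,
                             min_fold: float = 3.0,
                             max_fdr: float = 0.05) -> pd.DataFrame:
    """Keep genes with |fold change| >= min_fold and FDR <= max_fdr.

    results: table with columns 'log2_fc' (log2 fold change) and 'fdr'
    (FDR-adjusted p-value), e.g., from an edgeR-style exact test.
    """
    keep = (results["log2_fc"].abs() >= np.log2(min_fold)) & \
           (results["fdr"] <= max_fdr)
    return results.loc[keep]

# toy example with hypothetical genes
table = pd.DataFrame(
    {"log2_fc": [2.1, -1.0, 1.7], "fdr": [0.001, 0.2, 0.03]},
    index=["oppB", "geneX", "geneY"],
)
print(differentially_expressed(table))  # oppB and geneY pass both cut-offs
```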
RNA-seq analysis: Bicyclomycin-dependent transcriptomes

V. cholerae oppA::3XFLAG oppB::3XFLAG oppF::3XFLAG strains with wild-type or mutated oppB start codon were grown in biological triplicates to an OD600 of 1.5, divided in half, and treated with either bicyclomycin (25 µg/ml final conc.) or water. Cells were harvested after 120 min by addition of 0.2 volumes of stop mix (95% ethanol, 5% (v/v) phenol) and snap-frozen in liquid nitrogen. Total RNA was isolated and digested with TURBO DNase (Thermo Fisher Scientific). cDNA libraries were prepared by vertis Biotechnology AG using a 3' end-specific protocol: ribosomal RNA was depleted, and the Illumina 5' sequencing adaptor was ligated to the 3' OH end of RNA molecules. First-strand synthesis using M-MLV reverse transcriptase was followed by fragmentation and strand-specific ligation of the Illumina 3' sequencing adaptor to the 3' end of the first-strand cDNA. Finally, 3' cDNA fragments were amplified, purified using the Agencourt AMPure XP kit (Beckman Coulter Genomics), and sequenced on a NextSeq 500 system in single-read mode for 75 cycles. The read files in FASTQ format were imported into CLC Genomics Workbench v11 (Qiagen) and trimmed for quality and 3' adaptors. Reads were mapped to the V. cholerae reference genome (NCBI accession numbers: NC_002505.1 and NC_002506.1), including annotations for Vcr001-Vcr107 (Papenfort et al., 2015b), using the 'RNA-Seq Analysis' tool with standard parameters. Reads mapping in CDS were counted, and genes with a total count >8 across all samples were considered for analysis. Read counts were normalized (CPM) and transformed (log2). Differential expression was tested using the built-in tool corresponding to edgeR in exact mode with tagwise dispersions ('Empirical Analysis of DGE'). Genes with a fold change ≥ 3.0 and an FDR-adjusted p-value ≤ 0.05 were considered differentially expressed.
TIER-seq input data, analysis scripts, and results are deposited at Zenodo (https://doi.org/10.5281/zenodo.3750832). Further information and requests for resources and reagents should be directed to and will be fulfilled by the corresponding author, Kai Papenfort (kai.papenfort@uni-jena.de).
"year": 2020,
"sha1": "a40159f2ceb3a00ecf66777e389208a9f5a1a0c9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.58836",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d7a1d63d6532dd05b6d8cf37c940f7df7b999dd3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Iterative Token Evaluation and Refinement for Real-World Super-Resolution
Real-world image super-resolution (RWSR) is a long-standing problem, as low-quality (LQ) images often have complex and unidentified degradations. Existing methods such as Generative Adversarial Networks (GANs) and continuous diffusion models each present their own issues: GANs are difficult to train, while continuous diffusion models require numerous inference steps. In this paper, we propose an Iterative Token Evaluation and Refinement (ITER) framework for RWSR, which utilizes a discrete diffusion model operating in a discrete token representation space, i.e., indexes of features extracted from a VQGAN codebook pre-trained with high-quality (HQ) images. We show that ITER is easier to train than GANs and more efficient than continuous diffusion models. Specifically, we divide RWSR into two sub-tasks, i.e., distortion removal and texture generation. Distortion removal involves simple HQ token prediction from LQ images, while texture generation uses a discrete diffusion model to iteratively refine the distortion removal output with a token refinement network. In particular, we propose to include a token evaluation network in the discrete diffusion process. It learns to evaluate which tokens are good restorations and helps to improve the iterative refinement results. Moreover, the evaluation network can first check the status of the distortion removal output and then adaptively select the total number of refinement steps needed, thereby maintaining a good balance between distortion removal and texture generation. Extensive experimental results show that ITER is easy to train and performs well within just 8 iterative steps. Our code will be made publicly available.
Introduction
Single-image super-resolution (SISR) aims to restore high-quality (HQ) outputs from low-quality (LQ) inputs that have been degraded through processes such as downsampling, blurring, noise, and compression. Previous studies (Liang et al. 2021; Zamir et al. 2022; Chen et al. 2023) have achieved remarkable progress in enhancing LQ images degraded by a single predefined type of degradation, thanks to the emergence of increasingly powerful deep networks. However, in real-world LQ images, multiple unknown degradations are typically present, making previous methods unsuitable for such complex scenarios.

[Figure 1 caption (fragment): ...results from t = T to t = 0, where t is the iterative step index of the reverse discrete diffusion process. The textures are gradually enriched with iterative refinement; to obtain satisfactory results, ITER requires only T ≤ 8 total iteration steps. (Zoom in for best view.)]

Real-world super-resolution (RWSR) is particularly ill-posed because details are usually corrupted or completely lost due to complex degradations. In general, RWSR can be divided into two subtasks: distortion removal and conditioned texture generation. Many existing approaches, such as (Wang et al. 2018b; Zhang et al. 2019a), follow the seminal SRGAN (Ledig et al. 2017) and rely on Generative Adversarial Networks (GANs). Typically, these methods require the joint optimization of various constraints for the two subtasks: 1) a reconstruction loss for distortion removal, usually composed of a pixel-wise L1/L2 loss and a feature-space perceptual loss; and 2) an adversarial loss for texture generation. Effective training of these models often involves tedious fine-tuning of hyper-parameters to balance restoration and generation abilities. Moreover, most models have a fixed preference between restoration and generation and cannot be flexibly adapted to LQ inputs with different degradation levels. Recently, approaches such as SR3 (Saharia et al. 2022) and LDM (Rombach et al. 2022) have turned to the popular diffusion model (DM) for its realistic generative ability. Although DMs are easier to train and more powerful than GANs, they require hundreds or even thousands of iterative steps to generate outputs. Additionally, current DM-based methods have only been shown to be effective on images with moderate distortions. Their performance on severely distorted real-world LQ images remains to be validated.
In this paper, we introduce a new framework for RWSR based on a conditioned discrete diffusion model, called Iterative Token Evaluation and Refinement (ITER). ITER incorporates several critical designs to address the challenges of RWSR. Firstly, we formulate the RWSR task as a discrete token space problem, utilizing a pretrained codebook of VQGAN (Esser, Rombach, and Ommer 2021) instead of pixel-space regression. This approach offers two advantages: 1) a small discrete proxy space reduces the ambiguity of image restoration, as demonstrated in (Zhou et al. 2022); 2) generative sampling in a limited discrete space requires fewer iteration steps than denoising diffusion sampling in an infinite continuous space, as shown in (Bond-Taylor et al. 2022; Gu et al. 2022; Chang et al. 2022). Secondly, in contrast to previous GAN and DM methods, we explicitly separate the two sub-tasks of RWSR and address them with token restoration and token refinement modules, respectively. For the first task, we use a simple token restoration network to predict HQ tokens from LQ images. For the second task, we use a conditioned discrete diffusion model to iteratively refine the outputs of the token restoration network. This separation facilitates optimizing each module and enables flexible trade-offs between restoration and generation. Finally, we propose to include a token evaluation block in the conditioned diffusion process. Unlike previous discrete diffusion models (Bond-Taylor et al. 2022; Chang et al. 2022), which directly rely on the token prediction probability to select the tokens to keep in each de-masking step, we introduce an evaluation block that checks whether each token has been correctly refined. This allows our model to better select good tokens in each step of the iterative refinement process and therefore improves the final results. Additionally, the token evaluation block enables us to adaptively select the total number of refinement steps, balancing restoration and texture generation by evaluating the initially restored tokens. We can use fewer refinement steps for good initial restoration results to avoid over-textured outputs. The experiments demonstrate that our proposed ITER framework can effectively remove distortions and generate realistic textures without tedious GAN training and in an efficient manner, requiring fewer than 8 iterative refinement steps. Please refer to Fig. 1 for an example. In summary, our contributions are as follows:

• We propose a novel framework, ITER, that addresses the two sub-tasks of RWSR in discrete token space. Compared to GANs, ITER is much easier to train and more flexible at inference time. Compared to DM-based methods, it requires fewer iteration steps and has demonstrated effectiveness on real-world LQ inputs with complex degradations.
• We propose an iterative evaluation and refinement approach for texture generation. The newly introduced token evaluation block allows the model to make better decisions on which tokens to refine during the iterative refinement process. Furthermore, by evaluating the quality of the initially restored tokens, ITER is able to adaptively balance distortion removal and texture generation in the final results by using different numbers of refinement steps. In addition, the user can manually control the visual effects of the outputs through a threshold value without retraining the model.
Related Works
In this section, we provide a brief overview of SISR and the generative models utilized in SR. We also recommend recent literature reviews (Anwar, Khan, and Barnes 2020; Liu et al. 2022, 2023) for more comprehensive summaries. [...] Attention mechanisms, such as channel attention (Zhang et al. 2018b), spatial attention (Niu et al. 2020; Chen et al. 2020), and non-local attention (Zhang et al. 2019b; Mei, Fan, and Zhou 2021; Zhou et al. 2020), have also been found to be beneficial. Recent works employing vision transformers (Chen et al. 2021; Liang et al. 2021; Zhang et al. 2022; Chen et al. 2023) have surpassed CNN-based networks by a large margin, thanks to their ability to model relationships over a large receptive field.
The latest works have focused on the challenging task of RWSR. Some methods (Fritsche, Gu, and Timofte 2019; Wei et al. 2021; Wan et al. 2020; Maeda 2020; Ji et al. 2020; Wang et al. 2021a; Zhang et al. 2021a; Mou et al. 2022; Liang, Zeng, and Zhang 2022) implicitly learn degradation representations from LQ inputs and perform well in distortion removal. However, their generalization ability is limited due to the complexity of the real-world degradation space. BSRGAN (Zhang et al. 2021b) and Real-ESRGAN (Wang et al. 2021c) adopt a manually designed, large degradation space to synthesize LQ inputs and have proven to be effective. Li et al. (Li et al. 2022) proposed learning degradations from real LQ-HQ face pairs and then synthesizing training datasets. Although these methods improve distortion removal, they rely on unstable adversarial training to generate missing details, which may result in unrealistic textures.
Generative Models for Super-Resolution. Many works employ GAN networks to generate missing textures for real LQ images. StyleGAN (Karras et al. 2020) works well for real face SR (Yang et al. 2021; Wang et al. 2021b; Chan et al. 2021). Pan et al. (Pan et al. 2020) used a BigGAN generator (Brock, Donahue, and Simonyan 2019) for natural image restoration. The recent VQGAN (Esser, Rombach, and Ommer 2021) demonstrates superior performance in image synthesis and has been shown to be effective in real SR of both face (Zhou et al. 2022) and natural (Chen et al. 2022) images.
The latest works with diffusion models (Saharia et al. 2022; Rombach et al. 2022; Gao et al. 2023; Wang et al. 2023) are more powerful than GANs, but they operate in a continuous feature space and require many iterative sampling steps. In this work, we take advantage of discrete diffusion models (Gu et al. 2022; Bond-Taylor et al. 2022; Chang et al. 2022), which are powerful in texture generation and efficient at inference time. To the best of our knowledge, ours is the first work to show the potential of discrete diffusion models for image restoration.
Methodology
In this work, we propose a new iterative token sampling approach for texture generation in RWSR. Our pipeline operates in the discrete representation space pre-trained by VQGAN, which has been shown to be effective in image restoration (Chen et al. 2022; Zhou et al. 2022). Our framework consists of three stages:

• Stage I: HQ images to discrete tokens. Different from previous works based on continuous latent diffusion models, our method operates in a discrete latent space. Therefore, we pretrain a vector-quantized autoencoder (VQVAE) (Esser, Rombach, and Ommer 2021) with a discrete codebook to encode input HQ images I_h, such that I_h can be transformed into discrete tokens, denoted as S_h. [...]

After obtaining the discrete representations S_l and S_h, we formulate texture generation as a discrete diffusion model between S_l and S_h. The key difference of our method is that we include an additional token evaluation block to improve the decision-making process for which tokens to refine during the reverse diffusion process. In this manner, the proposed ITER not only generates realistic textures but also permits adaptable control over the texture strength in the final output.
Details are given in the following sections.
HQ Images to Discrete Tokens
Following VQGAN (Esser, Rombach, and Ommer 2021), the encoder E_H takes the input high-quality (HQ) image I_h and encodes it into a grid of latent features, each of which is quantized to its nearest entry in a learned codebook of size N. The corresponding indices k ∈ {0, . . . , N − 1} determine the token representation of the inputs, S_h ∈ Z_0^{m×n}. Finally, the decoder D_H reconstructs the image from the quantized latent features. Instead of using the original VQGAN (Esser, Rombach, and Ommer 2021), we replace the non-local attention with Swin Transformer blocks (Liu et al. 2021) to reduce the memory cost for large-resolution inputs.
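For illustration, a minimal PyTorch sketch of this token-extraction step (tensor shapes, names, and the toy sizes are our assumptions; the actual encoder/decoder architecture is as described above):

```python
import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map encoder features to nearest-codebook-entry indices.

    features: (B, m, n, d) encoder outputs; codebook: (N, d) entries.
    Returns S_h: (B, m, n) integer token indices in {0, ..., N-1}.
    """
    b, m, n, d = features.shape
    flat = features.reshape(-1, d)                      # (B*m*n, d)
    # squared Euclidean distance to every codebook entry
    dist = (flat.pow(2).sum(1, keepdim=True)
            - 2 * flat @ codebook.t()
            + codebook.pow(2).sum(1))                   # (B*m*n, N)
    tokens = dist.argmin(dim=1)
    return tokens.reshape(b, m, n)

# toy usage with random tensors
feats = torch.randn(1, 16, 16, 64)   # hypothetical E_H output
book = torch.randn(1024, 64)         # hypothetical codebook, N = 1024
S_h = quantize(feats, book)          # token representation of the input
```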
LQ Images to Tokens with Distortion Removal
It is straightforward to also encode I_l with the E_H pretrained in the first stage. However, since I_l contains complex distortions, the encoded tokens would also be noisy, increasing the difficulty of restoration in the following stage. Inspired by recent works (Chen et al. 2022; Zhou et al. 2022), we note that straightforward token prediction can eliminate evident distortions. Hence, we introduce a preprocessing subtask that removes distortions while encoding I_l into the token space. Specifically, we employ an LQ encoder E_l to directly predict the HQ code indexes S_h, as illustrated in Fig. 2, by minimizing a token-level cross-entropy loss:

\[ \mathcal{L}_{E_l} = \mathrm{CE}\big(E_l(I_l),\, S_h\big). \tag{2} \]

Through this approach, I_l can be encoded into a comparatively clean token space with the learned E_l.
Texture Generation with Discrete Diffusion
Although the distortions in S_l are effectively removed, generating missing details through Eq. (2) alone is challenging, because the generation of diverse natural textures is highly ill-posed and essentially a one-to-many endeavor. To address this issue, we propose an iterative token evaluation and refinement approach for RWSR, named ITER, following the generative sampling pipeline outlined in (Chang et al. 2022; Lezama et al. 2022). As ITER is based on the discrete diffusion model (Bond-Taylor et al. 2022; Gu et al. 2022), we first provide a brief overview of it.
Discrete Diffusion Model. Given an initial token map $s_0 \in \mathbb{Z}_0$, the forward diffusion process establishes a Markov chain $q(s_{1:T} \mid s_0) = \prod_{t=1}^{T} q(s_t \mid s_{t-1})$, which progressively corrupts $s_0$ by randomly masking it over $T$ steps until $s_T$ is entirely masked. Conversely, the reverse process is a generative model that incrementally 'unmasks' $s_T$ back to the data distribution, $p(s_{0:T}) = p(s_T) \prod_{t=1}^{T} p_\theta(s_{t-1} \mid s_t)$. According to (Bond-Taylor et al. 2022; Chang et al. 2022; Lezama et al. 2022), the 'unmasking' transition distribution $p_\theta$ can be approximated by learning to predict the authentic $s_0$ given any arbitrarily masked version $s_t$:

\[ p_\theta(s_{t-1} \mid s_t) = \sum_{\hat{s}_0} q(s_{t-1} \mid s_t, \hat{s}_0)\, p_\theta(\hat{s}_0 \mid s_t). \tag{3} \]

Following (Chang et al. 2022), during the forward process, $s_t$ is obtained by randomly masking $s_0$ at a ratio of $\gamma(r)$, where $r \in \mathrm{Uniform}(0, 1]$ and $\gamma(\cdot)$ denotes the mask scheduling function. In the reverse process, $s_t$ is sampled according to the prediction probability $p_\theta(s_t \mid s_{t+1}, s_T)$. The masking ratio is computed using the predefined total sampling step $T$, i.e., $\gamma(t/T)$, where $t \in \{T, \ldots, 1\}$.

[Algorithm 1 (training) fragment: N ← number of tokens in S_h; ... θ_e ← θ_e − η∇_{θ_e} L_e (update ϕ_e); ... repeat until convergence.]

Network Training. As depicted in Fig. 3, the proposed ITER model is a conditioned version of the discrete diffusion model. It is a Markov chain that goes from the ground-truth tokens $S_h$ (i.e., $S_0$) to fully masked tokens $S_T$ while being conditioned on $S_l$. The reverse diffusion step $p_\theta(s_{t-1} \mid s_t)$ is learned with the refinement network $\phi_r$ using a masked token-prediction objective:

\[ \mathcal{L}_r = \mathrm{CE}\big(m_t \odot \phi_r(S_t, S_l),\, m_t \odot S_0\big), \tag{4} \]

where $m_t$ is the random mask of the corresponding forward diffusion step and tells $\phi_r$ which tokens need to be refined.
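To illustrate the forward masking process, here is a small sketch (our own, in PyTorch; the paper's exact choice of γ(·) is not restated here, so a cosine-family schedule with γ(0) = 0 and γ(1) = 1 is assumed, and the [MASK] token index is hypothetical):

```python
import math
import torch

MASK_ID = 0  # hypothetical: index 0 reserved for the [MASK] token

def gamma(r: float) -> float:
    """Mask scheduling function with gamma(0) = 0 and gamma(1) = 1
    (fully masked); a cosine-family schedule is assumed here."""
    return math.sin(0.5 * math.pi * r)

def forward_mask(s0: torch.Tensor, r: float):
    """Forward diffusion step: randomly mask tokens of s0 at ratio gamma(r).

    s0 : (B, L) integer token map using codebook indices 1..N.
    Returns (s_t, m_t), where m_t is True at the masked positions, i.e.,
    the positions the refinement network must predict.
    """
    m_t = torch.rand(s0.shape, device=s0.device) < gamma(r)
    s_t = s0.masked_fill(m_t, MASK_ID)
    return s_t, m_t

# toy usage: draw a random r and corrupt a batch of token maps
s0 = torch.randint(1, 1024, (2, 256))
s_t, m_t = forward_mask(s0, torch.rand(()).item())
```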
The difference is that we introduce an extra token evaluation network ϕ_e to learn which tokens are good for both S_t and S_l, with the objective function below:

L_e = BCE(ϕ_e(S_t), m_t) + BCE(ϕ_e(S_l), m_l), (5)

where BCE denotes binary cross-entropy and m_l are the ground-truth sampling masks for S_l.
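To make the forward corruption concrete, the sketch below masks ground-truth tokens at a schedule-dependent ratio; this is the corruption that the losses in Eqs. (4) and (5) are trained against. The mask-token id and the particular monotone schedule are illustrative assumptions, not values from the paper.

```python
import math
import torch

MASK_ID = 1024  # assumed id of the special [MASK] token, outside the codebook

def gamma(r: float) -> float:
    """Monotone mask schedule with gamma(0) = 0 and gamma(1) = 1.
    A sine-shaped schedule is used here purely for illustration."""
    return math.sin(0.5 * math.pi * r)

def forward_mask(s0: torch.Tensor, t: int, T: int):
    """Corrupt ground-truth tokens s0 by masking a gamma(t/T) fraction.

    s0: (B, m, n) token indices. Returns (s_t, m_t), where m_t is the
    boolean mask of positions replaced by [MASK].
    """
    ratio = gamma(t / T)
    m_t = torch.rand(s0.shape, device=s0.device) < ratio
    s_t = torch.where(m_t, torch.full_like(s0, MASK_ID), s0)
    return s_t, m_t
```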
Adaptive Inference of ITER
As illustrated in Algorithm 2, the inference process of ITER can be a standard reverse diffusion from S_T to S_0 with the condition S_l. However, in our framework, the initially restored tokens S_l already contain good tokens and may not require the entire reverse process. With the aid of the token evaluation network ϕ_e, it is possible to select the appropriate starting time step T_s for the reverse diffusion process by assessing the number of good tokens in S_l using m_l = ϕ_e(S_l), as shown below:

m_s = 1[m_l < α], T_s = min{ t : γ(t/T) ≥ (1/(mn)) Σ_i m_{s,i} }, (6)

where α is the threshold value, and m_s is the binary mask for the starting time step T_s. We can quickly determine the appropriate T_s by comparing the mask ratio indicated by γ(·); see Algorithm 2 for further details. We can then initialize S_t and m_t using the following equations:

S_{T_s} = (1 − m_s) ⊙ S_l + m_s ⊙ [MASK], m_{T_s} = m_s. (7)

Finally, we follow the typical reverse diffusion process to compute the "unmasking" distribution p_{ϕ_r}, where t ∈ {T_s, ..., 1}. The final outcome is obtained by I_sr = D_H(S_0). The proposed adaptive inference strategy not only makes ITER more efficient but also avoids disrupting the initial good tokens in S_l.
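A minimal sketch of this adaptive inference loop is given below. The interfaces of ϕ_e and ϕ_r, the greedy argmax unmasking, and the re-masking rule are simplifying assumptions; the paper's Algorithm 2 may differ in detail (for instance, by sampling from p_{ϕ_r} rather than taking an argmax).

```python
import torch

@torch.no_grad()
def adaptive_inference(s_l, phi_e, phi_r, gamma, T=8, alpha=0.5, mask_id=1024):
    """Sketch of adaptive reverse diffusion, Eqs. (6)-(7).

    s_l:   (B, m, n) initially restored tokens from the LQ encoder.
    phi_e: evaluation net; returns per-token goodness scores in [0, 1].
    phi_r: refinement net; returns logits (B, N, m, n) over the codebook.
    """
    m_t = phi_e(s_l) < alpha                             # bad tokens, Eq. (6)
    bad_ratio = m_t.float().mean().item()
    # Smallest start step whose mask ratio covers the bad tokens.
    T_s = next((t for t in range(1, T + 1) if gamma(t / T) >= bad_ratio), T)
    s_t = torch.where(m_t, torch.full_like(s_l, mask_id), s_l)  # Eq. (7)
    for t in range(T_s, 0, -1):
        pred = phi_r(s_t, s_l).argmax(dim=1)             # fill masked tokens
        s_t = torch.where(m_t, pred, s_t)
        if t > 1:
            # Re-mask the lowest-scoring of the just-filled tokens so the
            # masked fraction follows the schedule at the next step.
            scores = phi_e(s_t).masked_fill(~m_t, 2.0)   # protect kept tokens
            n_mask = min(int(gamma((t - 1) / T) * s_t[0].numel()),
                         int(m_t.flatten(1).sum(1).min()))
            flat = scores.flatten(1)
            remask = torch.zeros_like(flat, dtype=torch.bool)
            remask.scatter_(1, flat.topk(n_mask, dim=1, largest=False).indices, True)
            m_t = remask.view_as(s_t)
            s_t = torch.where(m_t, torch.full_like(s_t, mask_id), s_t)
    return s_t  # S_0; decode with D_H to obtain I_sr
```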
Implementation Details
Training Dataset. Our training dataset generation process follows that of Real-ESRGAN (Wang et al. 2021c), in which we obtain HQ images sourced from DIV2K (Agustsson and Timofte 2017), Flickr2K (Lim et al. 2017), and OutdoorSceneTraining (Wang et al. 2018a). These images are cropped into non-overlapping patches of size 256 × 256 to serve as HQ images. Meanwhile, the corresponding LQ images are produced using the second-order degradation model proposed in (Wang et al. 2021c).
Testing Datasets. We evaluate the performance of our model on multiple benchmarks that include real-world LQ images, such as RealSR (Wang et al. 2021b), DRealSR (Wei et al. 2020), DPED-iphone (Ignatov et al. 2017), and RealSRSet (Zhang et al. 2021b). Additionally, we create a synthetic dataset using the DIV2K validation set to validate the effectiveness of different model configurations.
Training and Inference Details. ITER is composed of three networks, namely E_l, ϕ_r, and ϕ_e, trained with the cross-entropy losses in Eqs. (2), (4) and (5). In theory, the optimal strategy is to train E_l first, followed by ϕ_e and ϕ_r sequentially. Nevertheless, we discovered that training them concurrently works well in practice, thereby leading to a significant reduction in overall training time. The Adam optimizer (Kingma and Ba 2014) is employed to optimize all three networks, with parameters lr = 0.0001, β_1 = 0.9, and β_2 = 0.99. Each batch contains 16 HQ images of dimensions 256 × 256, paired with their corresponding LQ images. All networks are implemented in PyTorch (Paszke et al. 2019) and trained for 400k iterations on 4 Tesla V100 GPUs.
Experiments
Comparison with Other Methods
We perform a comprehensive comparison of ITER against several state-of-the-art GAN-based approaches, including BSRGAN (Zhang et al. 2021b), Real-ESRGAN (Wang et al. 2021b), SwinIR-GAN (Liang et al. 2021), FeMaSR (Chen et al. 2022), and MM-RealSR (Mou et al. 2022). Specifically, BSRGAN, Real-ESRGAN, and MM-RealSR employ the RRDBNet backbone proposed by (Wang et al. 2018b), whereas SwinIR-GAN utilizes the Swin transformer architecture, and FeMaSR utilizes the VQGAN prior. Regarding diffusion-based models, we compare with the most popular work, LDM-BSR (Rombach et al. 2022), which operates in the latent feature space using denoising diffusion models. The model is finetuned with the same dataset for a fair comparison. SR3 (Saharia et al. 2022) is not included in the comparison due to the unavailability of public models.
We use two different no-reference metrics, namely NIQE (Mittal, Soundararajan, and Bovik 2012) and PI (perceptual index) (Blau et al. 2018), to evaluate the performance of different approaches. NIQE is widely used in previous works on RWSR, such as (Wang et al. 2021b; Zhang et al. 2021a; Mou et al. 2022), while PI has been extensively used in recent low-level computer vision workshops, including the renowned NTIRE (Cai et al. 2019; Zhang et al. 2020; Gu et al. 2021) and AIM (Ignatov et al. 2019, 2020).

Comparison with LDM-BSR. As can be seen from Tab. 1, although LDM-BSR utilizes a diffusion-based model, its performance is worse than that of ITER. Fig. 5 makes it apparent why the quantitative results of LDM-BSR are suboptimal for the RWSR task. Although LDM-BSR is capable of generating sharper edges for the blurry LQ inputs, it struggles to eliminate complex noise degradations in both examples. On the other hand, our proposed ITER does not face such challenges and can produce outputs with greater clarity while maintaining reasonably natural textures. This can be attributed to two main reasons. Firstly, LDM-BSR incorporates continuous diffusion models, while ITER relies on discrete representations. Prior studies (Zhou et al. 2022; Chen et al. 2022) have shown that a pre-trained discrete proxy space offers benefits under intricate distortions. Secondly, ITER explicitly filters out the distortions during the encoding of LQ images into token space before diffusion processing. As a result, ITER avoids generating the spurious additional textures that can occur with LDM-BSR, as demonstrated in the second example.
Ablation Study and Model Analysis
We performed a thorough analysis of various configurations of our model using a synthetic DIV2K validation test set. Firstly, we evaluated the effectiveness of the refinement network in adding textures to the initial results S_l. Secondly, we assessed the necessity of the token evaluation block. Finally, we demonstrated how the token evaluation block can be exploited to manage the model's preference toward removing distortions or generating textures. We utilized the PSNR metric to evaluate the quality of distortion removal and used the widely recognized perceptual metric LPIPS (Zhang et al. 2018a) to measure the performance of texture generation. Together, these two metrics allowed us to assess the extent to which the proposed ITER adjusts the visual effects of its outputs in accordance with the threshold value α, as stated in Eq. (6).
Effectiveness of Iterative Refinement. We first evaluate the effectiveness of the iterative refinement network for texture generation. As illustrated in Fig. 6, the results obtained without the iterative refinement stage exhibit over-smoothed textures and inconsistency in color. This could be attributed to the inherent limitations of token classification when confronted with the complex distortions present in diverse natural images. In contrast, the results with iterative refinement are more realistic. Noticeable enhancements in texture richness and color correction are observed. These observations provide compelling evidence that the iterative refinement network plays a crucial role in our framework.
Necessity of Token Evaluation. An alternative method to decide which tokens to retain or refine involves directly selecting the top-k tokens in S_t with higher confidence, as implemented in MaskGIT (Chang et al. 2022). However, our experimental findings indicate that the top-k mask selection is trapped by local propagation. This is because, under the greedy selection strategy, the refinement network ϕ_r tends to assign higher confidence to the neighbors of previous selections. As illustrated in Fig. 8, the masks consistently expand around the previous step, resulting in some regions (indicated by the black mask) being left unrefined until the last step. This behavior is unfavorable in the iterative texture generation process because it corrupts some good-looking regions with unnecessary refinement. Our hypothesis is that low-level vision tasks exhibit a locality property whereby neighboring features are naturally more correlated.
Although the networks have large receptive fields thanks to Swin transformer blocks, they still prefer to propagate information to neighboring features, resulting in higher confidence scores surrounding previous selections. The use of the proposed token evaluation network ϕ_e allows the iterative refinement process to avoid the local propagation trap. As demonstrated in Fig. 8, the masks are distributed more evenly, leading to more consistent results.
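The two selection rules can be contrasted in a few lines. The sketch below is illustrative only: select_topk mirrors the MaskGIT-style greedy rule discussed above, while select_by_evaluation stands in for the proposed ϕ_e-based rule; the threshold and network interfaces are assumptions.

```python
import torch

def select_topk(confidence, k):
    """MaskGIT-style rule: keep the k most confident predictions. Confidence
    comes from the refiner itself, so kept tokens tend to cluster around
    earlier selections (the local propagation problem)."""
    flat = confidence.flatten(1)
    keep = torch.zeros_like(flat, dtype=torch.bool)
    keep.scatter_(1, flat.topk(k, dim=1).indices, True)
    return keep.view_as(confidence)

def select_by_evaluation(s_t, phi_e, threshold=0.5):
    """ITER-style rule: a separately trained evaluation network scores every
    token, decoupling the keep/refine decision from the refiner's confidence."""
    return phi_e(s_t) > threshold
```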
Balance Restoration and Generation. In Fig. 7, we present an example of the results with different thresholds α. It is evident from the results that a larger α leads to the identification of fewer valid tokens, thereby necessitating more refinement steps, or in other words, a larger start time step T_s. Consequently, a larger α creates images with stronger textures. Fig. 7 also provides quantitative results for the different α thresholds, where the effect of each threshold can be seen in the score curves of LPIPS and PSNR. We observe that a smaller α produces better PSNR scores, which is a clear indication of a better ability to eliminate distortion. As for texture generation performance, the optimal LPIPS score was achieved at α = 0.5, since both excessively strong and overly weak textures can negatively impact perceptual quality. In practice, we can adjust α to obtain the desired results without having to modify the network, resulting in a more adaptable framework during inference than GAN-based techniques, which cannot be modified once the training process is completed.
Conclusion
We presented a novel framework named ITER that utilizes iterative evaluation and refinement techniques for texture generation in real-world image super-resolution. Unlike GANs, which require painstaking training, we incorporate discrete diffusion generative pipelines with token evaluation and refinement blocks for RWSR. This new approach simplifies training to just cross-entropy losses and allows for greater flexibility in balancing distortion removal and texture generation during inference. Furthermore, ITER has demonstrated superior performance with ≤ 8 iterations, highlighting the vast potential of discrete diffusion models in RWSR.
Figure 1: Example result with the proposed ITER. Left top: input LQ image; Right top: SR result with ITER; Bottom: results from t = T to t = 0, where t is the iterative step index of the reverse discrete diffusion process. We can observe that the textures are gradually enriched with iterative refinement. To obtain satisfactory results, our ITER requires only a total iteration step of T ≤ 8. (Zoom in for best view)
Figure 2: Training of E_l to encode I_l to token space S_l.
Figure 3: Illustration of the forward and backward diffusion process with the conditioned discrete diffusion model. The condition inputs of ϕ_r are omitted here for simplicity.
Figure 4: Visual comparison between recent approaches and the proposed ITER on real LQ images.
Figure 5: Problem of LDM-BSR without explicit distortion removal. (Zoom in for best view)
Figure 6: Comparison of results with and without iterative refinement. We can observe that the results with only distortion removal present overly smoothed textures and inconsistent color. After iterative refinement, the textures are enriched and the color is also corrected.
Figure 7: Results with different thresholds. Left: visual examples of final results (top) and masks at the start time step (bottom). A bigger α leads to a stronger texture effect because more refinement steps are conducted. Right: LPIPS/PSNR with different α.
Figure 8: The top-k masking technique suffers from the local propagation problem, which is effectively avoided by the proposed token evaluation block.
"year": 2023,
"sha1": "1282229a4dcbced5b14376e141d1895168b2534d",
"oa_license": null,
"oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/27861/27747",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "aaccb7ef1d5871b650e553e5915fb3412e4bd7f8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Facilitating watermark insertion by preprocessing media
Introduction
There are a number of applications of watermarking in which it is necessary to deploy a very large number of watermark embedders. In such situations, economic constraints are often severe and constrain the computational resources that are available for embedding. Unfortunately, high performance -as measured by effectiveness, fidelity and robustness -watermark embedders commonly require very substantial computational resources, especially when perceptual modeling [22,13], informed coding [15,3,5] 1 and/or informed embedding [20] are utilized.
We address this dilemma by proposing a two stage procedure in which a substantial fraction of the computational workload is performed as a preprocessing step on the media prior to its release to the general public. This preprocessing step is designed to permit, at a later time, subsequent watermark embedding based on computationally simple algorithms that are very economic.
Our solution is appropriate in situations where content can be modified before it reaches the watermark embedders. Section 2 discusses two examples where this is common. The first example uses watermarks for transaction tracking (also known as fingerprinting) during consumer playback of copyrighted material. Here, each player embeds a unique watermark into everything it plays. The watermarks may be used to identify the source of any content that is subsequently distributed illegally. The second example uses watermarks to prevent certain forms of illegal copying. Here, a copy mark is added to video as it is being recorded in a consumer device, differentiating the original from the copy. The copy mark indicates that it is illegal to make a second-generation copy of the copy.
In Section 3, we describe the basic principles behind preprocessing and a two-step watermarking process. Some performance implications are discussed in Section 4. An illustrative implementation of preprocessing is then described and tested in Section 5. Finally, a discussion of results and future work are contained in Section 6.
We are motivated by watermarking applications in which watermarks must be inexpensively embedded. Below, we describe two such applications, both for video: the DiVX transaction tracking system and the proposed Galaxy copy-protection system. Many aspects of these applications severely limit the power of the embedders that may be used. At the same time, both applications allow expensive preprocessing of video before it reaches the watermark embedders, making our solution possible. How this preprocessing can be used to improve the performance of inexpensive embedders is described in Section 3.
DiVX transaction-tracking system
In late 1996, the DiVX Corporation 2 released an enhanced DVD player based on a pay-per-play business model. DiVX disks used proprietary encryption, so they could only play in DiVX-enabled DVD players. The players communicated with the DiVX Corporation over the phone lines, allowing DiVX to monitor the number of times a given player played each disk, and bill the player's owner accordingly.
In order to allay the piracy concerns of Hollywood studios, DiVX implemented a number of security technologies. One of these was a watermark-based system for transaction tracking. Each DiVX player embedded a unique watermark in the analog NTSC video signal during playback of a movie. These transaction watermarks were intended to be used to track the source of any pirated video that originated from the DiVX customer base. As players were connected to the DiVX corporation by phone, this would make it possible to quickly identify the pirate.
The DiVX DVD player was a consumer level product and, as such, was extremely price sensitive. Accordingly, the computational resources allocated to embedding the transactional watermark had to be small. This limitation on computational resources was further exacerbated by the requirement that the watermarks be embedded in real time. There are no published details regarding the design of the watermark embedder deployed by DiVX, but a personal communication between one of the authors and a Hollywood executive suggests that the fidelity was poor. This is to be expected given the design constraints.
The solution proposed here would preprocess the video prior to release of the DVD disc in order to improve the performance of the watermark embedder. This preprocessing could have been performed during DiVX's proprietary media preparation.
Generational copy-control for DVD
In 1997, the Copy Protection Technical Working Group issued a request for proposals for a watermarking system to prevent illegal copying of DVD movies. The basic idea is that each DVD recorder will contain a watermark detector, and will refuse to record video that contains certain watermarks. They received eleven proposals. After several rounds of testing and negotiations, these were reduced to the Millennium system, proposed by Philips, DigiMarc, and Macrovision, and the Galaxy system, proposed by NEC, IBM, Sony, Hitachi, and Pioneer. Both these systems involved embedding watermarks in video to implement one of the more difficult requirements in the request for proposals, known as generational copy-control or copy generation management.
Copy generation management is intended to allow a single generation of copies to be made from a master, but no subsequent copies to be made from the first generation copies. The requirement arises because consumers in the US are permitted by law to record television broadcasts for viewing later. This right was accorded consumers after the introduction of the video cassette recorder when Hollywood studios sued electronics manufacturers alleging that such devices enabled widespread piracy of movies [1]. DVD recorders are covered by this law, but the studios recognize that digital recording is a potentially greater threat than analog recording since there is no degradation in video quality with each generational copy.
In order to reduce the threat of piracy, content owners envisage labeling broadcasted material as copy once and subsequently labeling the material as copy no more after recording. A number of technical solutions to copy generation management were proposed in the context of DVD recorders. These are discussed in [2,17]. The solution proposed in the Galaxy system used a fixed watermark to encode the copy once state, and add a second, copy mark, to encode the copy no more state. This second watermark would be added during recording, within the consumer DVD recorder.
Because the second watermark embedder was to be incorporated into consumer devices, it was subject to severe economic constraints. These economic constraints mandated that the embedder circuitry not exceed 50K gates, which precluded the use of a framebuffer. A consumer DVD recorder is expected to have both analog and digital video input. In the analog case, e.g. NTSC, watermark embedding was required to proceed in real-time. The digital video input is assumed to be a compressed MPEG-2 stream. Copy mark embedding must therefore occur in both the compressed and baseband video domains. Moreover, compressed and baseband watermarks must be completely compatible, i.e. a watermark embedded in the MPEG domain must be detectable in the baseband domain and vice versa.
Embedding into the MPEG-2 stream introduces several additional limitations. First, because there is no possibility of employing a frame buffer, the watermark must be embedded without full decompression. This may be accomplished by directly modifying the compressed video stream in a manner that changes the underlying baseband video [10]. Second, MPEG-2 recording may occur faster than real-time, making it necessary to embed the watermark in up to eight times real-time. Third, to maintain the integrity of the transport stream, it is necessary to ensure that the size of individual transport packets remain unchanged by the watermarking process.
An embedder that satisfies all these constraints is unlikely to be capable of performing the processing required to embed high-fidelity, robust watermarks. In the Galaxy system, a primary component of our solution to the copy mark embedding problem was the use of preprocessing 3 . At the time that the copy once mark was embedded, video was also processed to ease the task of subsequent copy mark embedding. The principles for performing this type of preprocessing are the subject of the remainder of this paper. For the sake of simplicity, we describe these principles in the context of systems using conventional, baseband embedders. However, they apply equally well to any embedding method that embeds weak watermarks into the baseband video, even if that embedding is performed by modifying the compressed stream.
Media preprocessing
One of the main difficulties with cheap watermark embedders is that their performance is highly dependent on the cover Works to which they are applied. An embedder might perform well on one Work, successfully embedding a high-fidelity, robust mark, while completely failing to embed in another Work. The idea of preprocessing is to modify all the Works beforehand, altering them such that an inexpensive embedder will perform well.
We illustrate the idea of preprocessing by applying it to three basic watermarking systems: a simple, zero-bit 4 linear-correlation system (Section 3.1), a zero-bit, normalized-correlation system (Section 3.2), and a one-bit, normalized-correlation system (Section 3.3). Admittedly, these basic systems are quite rudimentary, and don't have the theoretical justification of more recent systems based on dirty-paper coding (see [6,21] for some recent examples). Nevertheless, systems like these have long proven useful in practice, and they serve nicely as test-beds for the concept of media preprocessing. In principle, the ideas presented here should also be applicable to more sophisticated systems.
Preprocessing for a linear correlation system
In a zero-bit, linear-correlation watermarking system, the detector tests for the presence or absence of a watermark by computing the linear correlation between a received Work, c, and a reference pattern, w_r:

z_lc = (1/N) c · w_r, (1)

where N is the dimensionality of the Works. If z_lc is greater than a detection threshold, τ_lc, then the detector reports that the watermark is present. The interested reader is directed to [9] for background on the justification and interpretation of this type of system.
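For concreteness, such a detector is only a few lines. The sketch below follows Eq. (1) using NumPy; treating a Work as a flat vector and the function name are our assumptions, not part of the original system description.

```python
import numpy as np

def detect_linear(c, w_r, tau_lc):
    """Zero-bit linear-correlation detection, Eq. (1).

    c, w_r: Work and reference pattern as flat vectors of equal length N.
    Returns (decision, z_lc)."""
    z_lc = float(np.dot(c, w_r)) / c.size
    return z_lc > tau_lc, z_lc
```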
The simplest method of embedding watermarks for such a system is with a blind embedder, in which the embedded pattern and embedding strength are independent of the cover Work. The structure of a blind embedder is shown in Figure 1. This contrasts with informed embedding, as shown in Figure 2, where the embedding strength can be adjusted to ensure that a watermark is successfully embedded in every cover Work. Blind embedding is computationally trivial. For example, a watermark can be added to a video stream (in baseband) without requiring that the frames be buffered. However, a blind embedder will necessarily fail to embed the watermark into some content, making its embedding effectiveness less than 100%. This makes it unacceptable for many applications in which the watermark must be embedded, even at the expense of occasional reductions in fidelity. An informed embedder, on the other hand, can guarantee 100% effectiveness by automatically adjusting the embedding strength (and hence the fidelity) for each cover Work, but to do so, it must examine the entire cover Work before embedding the mark, so a video system would require the expense of a frame buffer. Thus, informed embedding can be substantially more expensive than blind embedding. Below, we describe the two types of embedding in more detail, and then show how informed embedding can be split into a preprocessing step, followed by an inexpensive, blind embedder.
To understand the behavior of embedders, it is useful to consider a geometric model of the problem, in which cover Works are represented as points in a high-dimensional marking space. In blind embedding, a fixed vector that is independent of the cover Work is added to each Work, the intention being to move the cover Work into the detection region. A two-dimensional geometric model is illustrated in Figure 3a. If a simple correlation detector is used, then this detection region is a half-plane, the boundary of which is denoted by the vertical line in Figure 3a.
Unwatermarked cover Works lie to the left of this boundary and are denoted by the open circles.
Notice that some cover Works are closer to the boundary than others 5 . The horizontal arrows represent the watermarking process which moves the cover Work towards, and hopefully into, the detection region. This is also illustrated in Figure 3a where the majority of cover Works have indeed been moved into the detection region, but one cover Work has not. The embedder is said to have failed to watermark this particular cover Work, i.e. its effectiveness is less than 100%.
Clearly, if the magnitude of the arrows is larger, then more cover Works will be successfully watermarked. However, a compromise must be made between the strength of the watermark and the fidelity of the watermarked Work.
In contrast to blind embedding, informed embedding allows us to automatically vary the strength of the watermark based on the cover Work. Figure 3b illustrates the effect of an informed embedder in which a watermark of different magnitude is added to each cover Work, such that all watermarked Works are guaranteed to be a fixed distance within the detection region. Using such an informed embedder ensures that all watermarked Works will lie in the narrow shaded region of Figure 3b. We refer to this region as the embedding region.
It should be noted that the systems illustrated in Figure 3 are not strictly comparable because they solve subtly different problems. The blind embedder in Figure 3a is trying to embed the most robust watermark possible within a given prescribed limit on perceptual distortion. By contrast, the informed embedder in Figure 3b is trying to embed the least-perceptible watermark possible within a given prescribed limit on robustness. Thus, the blind embedder deals with the problem of unwatermarkable content - content which cannot be watermarked within a prescribed fidelity limit - by failing to embed, while the informed embedder deals with this problem by relaxing the fidelity constraint. (Dirty-paper techniques such as lattice codes may eliminate the problem entirely; however, lattice codes are inherently fragile against valumetric scaling distortions, which limits their applicability. Compensating for this limitation is a subject of on-going research [14,18]. It is not clear that the issue of valumetric scaling can be solved without re-introducing the problem of unwatermarkable content.) Which approach is better depends on the application.
Figure 3: The x-axis is aligned with the watermark reference vector. In (a), addition of the reference vector to unwatermarked Works moves these Works to locations denoted by the solid circles, which are usually, but not necessarily, within the detection region. This gives roughly constant fidelity at the expense of variable robustness (and occasional failure to embed). In (b), the reference vector is scaled to ensure that every watermarked Work lies a fixed distance inside the detection region, giving roughly constant robustness at the expense of variable fidelity.
In some
applications, maintaining fidelity (as specified with some numerical measure) is more important than ensuring that every Work is marked. In others, the watermark is more important. We have argued elsewhere [11,20] that the latter type of application is very common, and, for the remainder of this paper, we assume that this is the type of application in which our system will be employed. The difficulty we face is that informed embedding, because it requires a frame buffer, is too costly for our assumed application.

Now let us consider a two-step process in which informed preprocessing is used to guarantee that subsequent blind embedding will be successful. Figure 4 shows how such a system might work. Here, the preprocessing stage modifies each original cover Work (open circles) so that the processed Works (grey circles) all lie within a narrow region close to, but outside of, the detection region. We refer to this narrow region as the prepping region. Since the prepping region is outside the detection region, no watermarks are detected in the preprocessed content.
However, when a simple blind embedder is subsequently applied to the preprocessed content, it will be 100% effective in embedding the watermark.
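The division of labor between the two stages follows directly from this geometry. In the hypothetical sketch below, blind_embed is the trivial fixed-strength embedder, while preprocess_linear shifts a Work along w_r so that its linear correlation sits a margin beta below the threshold; beta is an assumed safety parameter rather than anything specified by the system.

```python
import numpy as np

def blind_embed(c, w_r, alpha):
    """Blind embedder: add a fixed-strength pattern, ignoring the cover Work."""
    return c + alpha * w_r

def preprocess_linear(c, w_r, tau_lc, alpha, beta):
    """Move a Work into a linear-correlation prepping region: set its
    correlation to (tau_lc - beta), just outside the detection region,
    so that a later blind embed of strength alpha lands safely inside."""
    N = c.size
    z = np.dot(c, w_r) / N
    target = tau_lc - beta
    # Shift c along w_r so its linear correlation equals the target.
    return c + (target - z) * N * w_r / np.dot(w_r, w_r)
```

Since one blind embedding step of strength α raises the linear correlation by α|w_r|²/N, any margin β smaller than that amount guarantees that the embedder pushes every preprocessed Work into the detection region.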
Preprocessing for a normalized correlation system
The same technique can be applied to more complex watermarking systems, such as those that use normalized correlation as a detection metric (see, for example, [12]). Here, the detector computes the normalized correlation between a received Work, c, and a reference pattern, w_r, as

z_nc = (c · w_r) / (|c| |w_r|). (2)

This results in a conical detection region.
Here again, blind embedding can often successfully embed watermarks, but it fails in many cases. It is argued in [11,20] that a more reliable method of embedding is to seek a fixed estimate of robustness. We can estimate robustness as the amount of white noise that may be added to the watermarked Work before it is likely to fall outside the detection region. This is given by

R² = |c|² (z_nc² − τ_nc²) / τ_nc², (3)

where τ_nc is the detection threshold that will be applied to the normalized correlation, and R² is the estimate of robustness (see [11] for a derivation of this equation). A fixed-robustness embedder that uses this estimate will employ a hyperbolic embedding region, as shown in Figure 5. Although such an embedder is preferable for many applications, it can be quite costly, as it not only requires examining the entire Work before embedding (which requires buffering), but also involves solving a quartic equation to find the closest point in the embedding region [9].
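Eq. (3) is cheap to evaluate. A one-function sketch, assuming Works are handled as flat NumPy vectors:

```python
import numpy as np

def robustness_estimate(c, w_r, tau_nc):
    """Estimated robustness R^2 of Eq. (3): roughly the white-noise power that
    can be added before the normalized correlation is likely to drop below
    tau_nc. Meaningful only when c is inside the cone (z_nc > tau_nc)."""
    z_nc = np.dot(c, w_r) / (np.linalg.norm(c) * np.linalg.norm(w_r))
    return np.dot(c, c) * (z_nc ** 2 - tau_nc ** 2) / tau_nc ** 2
```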
To obtain the reliability of a fixed-robustness embedder, while using a simple blind algorithm to embed, we can define a prepping region by shifting the embedding region outside the detection region. The distance that the embedding region must be shifted depends on the embedding strength that will be used by the blind embedder. This is shown in Figure 6. Here, the prepping region is a hyperboloid that lies entirely outside the detection cone. When a blind embedder is applied to a preprocessed Work (grey circle), the Work is moved into the detection region with the desired robustness.
Preprocessing for multiple bit watermarks
The two systems described above apply preprocessing to simple, zero-bit watermarks. That is, the detectors in these systems report whether the watermark is present or absent, but do not distinguish between different watermark messages, so the watermark carries zero bits of payload information. If we have a system that can embed several different watermark patterns, representing different messages, we must modify our preprocessing method accordingly.
In the simplest case, we might have a system with two possible messages, or 1 bit of payload.
For a message of m = 1, we might embed a reference mark, w r . For m = 0, we might embed the negation of the reference mark, −w r . The detector would check for presence of both the positive and negative watermark, reporting the corresponding message if one of them is found.
Such a system, then, would define two disjoint detection regions, one for each message.
To ensure that blind embedding will succeed in embedding any of the possible messages, the preprocessor must move content to a prepping region that is the intersection of appropriate prepping regions for all the messages. For example, consider a 1-bit system using normalized correlation as its detection metric, as illustrated in Figure 8. Because the figure shows only a two-dimensional slice of the N-dimensional media space, the two-point prepping region it depicts is, in the full space, rotated into an (N − 1)-dimensional sphere. Thus, although the figure appears to define a prepping region of only two points, the actual prepping region is a high-dimensional surface, and, with appropriate watermark extraction techniques, it is possible to implement a preprocessor that does not introduce too much distortion (see Section 5).
A problem that might arise is that the prepping regions for the separate messages do not intersect. This would occur if the embedding strength used by the blind embedder is too weak.
In such a case, it would be impossible to perform the type of preprocessing we are proposing here. However, this is a pathological case regardless of whether preprocessing is employed, as it means there is no single Work into which the blind embedder can embed all possible messages.
For every Work, there is at least one message that the blind embedder cannot embed. Thus, this would be a case of an unacceptable blind embedder that cannot be made acceptable by preprocessing.
It might appear, from Figure 8, that we will necessarily have the problem of non-overlapping message-prepping regions when we introduce even one additional message. After all, there is no way to place an additional cone in the figure so that its message-prepping region intersects with either of the two points of intersection illustrated. But this is an illusion caused by our limited, 2-dimensional figure. To understand that many more than two detection cones can have intersecting message-prepping regions, imagine that the center lines of the cones (reference marks) all lie on a single plane in a 3-dimensional space. The message-prepping regions for all these cones can intersect at two points, one above the plane and one below it. As in the case of the two-point prepping region of Figure 8, these two points in 3-space correspond to a high-dimensional hypersphere in media space.
Performance considerations
The above discussion of preprocessing has focused on the robustness of the watermark embedded by a simple embedder. However, by introducing the preprocessing step, we have introduced some new questions regarding the fidelity, robustness, and security in the overall system. Can we obtain satisfactory fidelity in both the preprocessed and watermarked media? What happens if the preprocessed Work is distorted by normal processing before the watermark is embedded?
Does preprocessing introduce any new security risks? Each of these questions is addressed in turn, below.
Fidelity
In watermarking systems that do not involve preprocessing, the embedder must create a watermarked Work that lies within some region of acceptable fidelity around the original. When we introduce a preprocessing step, we must now find two new Works within the region of acceptable fidelity: the preprocessed Work and the watermarked Work. These must be separated by the effect of the simple embedder.
In our experience, finding these two Works has not been difficult. This is not surprising, as the simple embedder will usually be designed to introduce very little fidelity degradation. Thus, the preprocessed and watermarked Works will be perceptually very similar, and if the fidelity of one is acceptable, the fidelity of the other is unlikely to be much worse.
Furthermore, the application may be designed in such a way that the preprocessed Work is never actually seen. For example, in the DiVX application, video never leaves the player without having a watermark embedded. In this case, the fidelity of the preprocessed video would be irrelevant, and we would only be concerned with the fidelity of watermarked video.
The problem of maintaining this fidelity is little different than that in a system that does not entail preprocessing.
Robustness
In some applications, the preprocessed Work might be expected to undergo some normal processing before the simple watermark embedder is applied. This would not be the case in the DiVX application, as the embedder is applied immediately after the video is read off the disk, but in the DVD application, it is expected that preprocessed video will be broadcast via television before it reaches the watermark embedder in a DVD recorder. Such broadcasting might entail lossy compression and analog distortions. This raises the question of whether these distortions will ruin the preprocessing, so that subsequent embedding fails.
In the case of additive distortion, where the distortion is independent of the Work being distorted, the performance of a system with a blind embedder is the same whether the distortion is applied before or after watermark embedding. If the distortion is applied first, it doesn't change the behavior of the embedder, and if the embedding is applied first, it doesn't change the nature of the noise. Thus, if the system is designed to yield a watermark that is robust to such noise, the use of preprocessing will not reduce its robustness.
However, many, if not most, distortions that can be expected are not independent of the Work. In these cases, there is a difference between applying the distortion before or after watermark embedding. For some distortions, this difference is small, and systems designed to be robust against additive noise will usually be reasonably robust against them. But other distortions are highly dependent on the Work to which they are applied, and these can represent a serious problem to a system employing preprocessing.
Perhaps the most severe example of such a class of distortions for video is the class of geometric distortions - translation, scaling, rotation, etc. If any of these distortions is applied to preprocessed video, it can desynchronise the preprocessing from the watermark embedding, causing the embedder to be no more effective than it would be on unpreprocessed video.
This is a problem that probably cannot be solved in a general way. In the DVD application, however, it can be solved by taking advantage of the detector for the copy once mark. This detector must be robust against the same geometric distortions that might cause copy no more embedding to fail. Robustness against geometric distortions is usually attained by detecting those distortions and inverting them before watermark detection. Thus, a description of the distortions can be made available to the copy no more embedder. The embedder can then apply them to the watermark pattern, so that the pattern is once again synchronized with the preprocessing.
Security
The final question to be addressed is whether a system that depends on preprocessing is necessarily less secure than one that does not. This question is of particular interest in the two example applications of Section 2, as they are both intended to deter unauthorized copying.
The main, novel security risk that preprocessing might introduce is a risk that adversaries might modify preprocessed media so that subsequent embedding fails. This assumes, of course, that the adversary has access to the preprocessed media before the embedder is applied. It is possible to imagine applications of preprocessing in which the adversary has no such access.
For example, we might build a streaming media server that puts a unique watermark into each stream. The stored media could be preprocessed to facilitate the use of inexpensive, real-time embedders. As all the embedding occurs before the media reaches the customer, the adversary will not have access to anything unwatermarked.
Unfortunately, in the DVD and DiVX applications, the adversary must be assumed to have access to unwatermarked video. In the case of DiVX, this would require hacking the player to disable or bypass the embedder. In the DVD application, unwatermarked video is broadcast in the clear. In these cases, the adversary may very well be able to modify the video so that the embedder will fail.
The question, however, is why would the adversary bother? Presumably, his aim is to make a copy of the video that does not contain the watermark. If he has access to the unwatermarked video, he needn't modify it -he can just copy it. In the case of DVD, this would require a noncompliant or hacked recorder that would not embed a mark. In the case of DiVX, this could be done with any recorder, once the DiVX player has been hacked. Thus, if the unwatermarked video is available to the adversary, the risk introduced by reliance on preprocessing is arguably irrelevant.
A second risk in the types of systems being discussed here arises from the weakness of the embedder itself. Simple embedding algorithms are more likely to be easily hacked. This risk is particularly high if the adversary has access to an embedder, and can compare unwatermarked with watermarked media, which is the case in the DVD and DiVX applications. But this risk is not a consequence of reliance on preprocessing. The simplicity of the embedder, and its availability to the adversary, are dictated by the application. Preprocessing is merely a trick that makes such an application feasible.
Our conclusion, then, is that preprocessing may introduce some novel security risks, but these only arise in application settings where security is extremely weak anyway. However, it must be noted that weak security can still be valuable. The proposed DVD system would add a level of deterrence to certain illegal copying which is presently entirely undeterred. If enough people are unwilling to bother breaking the system, the cost of that system may be justified.
An implementation
To illustrate the preprocessing technique, we implemented a preprocessor for the E_BLK_BLIND/D_BLK_CC image watermarking system described in [9]. This is a one-bit, normalized-correlation system which operates in a linear projection of image space.
E_BLK_BLIND is a simple blind embedder; its description and implementation are given in [9]. The D_BLK_CC detector proceeds in two steps. In the first step, the image is reduced to a 64-dimensional mark vector by averaging its 8 × 8 blocks of pixel values, as illustrated in Figure 9. The mark vector, v, is given by

v[i, j] = (64/(wh)) Σ_k Σ_l c[8k + i, 8l + j],

where 0 ≤ i < 8 and 0 ≤ j < 8, and w and h are the width and height of the image.
In the second step, the correlation coefficient, z_cc, is computed between the averaged 8 × 8 block, v, and the reference mark, w_r. (The correlation coefficient between two vectors is just their normalized correlation after projection into a space with one fewer dimension (see [9]); thus, the detector computes the normalized correlation in a 63-dimensional space.) That is,

z_cc = (ṽ · w̃_r) / (|ṽ| |w̃_r|),

where ṽ = (v − v̄), w̃_r = (w_r − w̄_r), and v̄ and w̄_r are the means of v and w_r. It compares z_cc against a detection threshold, τ_cc. If z_cc > τ_cc, it reports that message m = 1 has been embedded. If z_cc < −τ_cc, it reports that message m = 0 has been embedded. Otherwise, it reports that there is no watermark present.

We implemented a preprocessor for this system according to the principles described in Section 3.3 and illustrated in Figure 8. The preprocessor performs the following steps:
1. Extract a mark vector, v_o, from the unwatermarked Work, in the same manner as the detector.
2. Identify a 2-dimensional plane that contains v_o and the reference mark, w_r. The plane is described by two orthogonal unit vectors, X and Y, obtained by Gram-Schmidt orthonormalization [16]:

X = w_r / |w_r|, Y_0 = v_o − (v_o · X) X, Y = Y_0 / |Y_0|.

(Note, Y_0 here is a temporary vector.)
3. Project v_o into the X, Y plane:

x_vo = v_o · X, y_vo = v_o · Y.

4. Find the point in the prepping region, ⟨x_vp, y_vp⟩, that is closest to ⟨x_vo, y_vo⟩. As shown in Figure 8, the prepping region in this 2-dimensional plane comprises only two points.
Since y_vo is guaranteed to be positive, the upper of these two points will always be the closest to ⟨x_vo, y_vo⟩. Thus, x_vp = 0, and y_vp is a positive value chosen to ensure that blind embedding will yield the desired level of robustness. To find y_vp, first note that, in the X, Y plane, the watermark vector, w_r, will be either ⟨k, 0⟩ or ⟨−k, 0⟩, depending on whether we wish to embed a 1 or a 0. Here, k is the magnitude of the watermark reference pattern, which was √N in our experiments, where N is the size of the watermark reference pattern, i.e. N = 64. After the blind embedder is applied with a strength of α, we will obtain v_w = v_p + αw_r, which gives us, in the X, Y plane, either v_w = ⟨αk, y_vp⟩ or v_w = ⟨−αk, y_vp⟩.
By letting w_r = ⟨±k, 0⟩, c = ⟨±αk, y_vp⟩, and τ_nc = τ_cc in Equation 3, and solving for y_vp, we obtain

y_vp = sqrt( α²k²(1 − τ_cc²)/τ_cc² − R² ),

where R² is the desired robustness.
5. Obtain a preprocessed mark vector, v_p, by projecting ⟨x_vp, y_vp⟩ back into 64-dimensional space:

v_p = x_vp X + y_vp Y = y_vp Y.

6. Invert the original extraction operation on v_p to obtain the preprocessed cover Work, c_p. This is done by simply adding v_p − v_o to each block of the image.
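The six steps translate almost line-for-line into code. The sketch below is a hypothetical NumPy rendering for grayscale images whose sides are multiples of 8; the parameter values (detection threshold τ_cc, strength α, target robustness R²) must be supplied by the caller, since only α = 0.5 and k = 8 are stated in the experiments below.

```python
import numpy as np

def extract_mark(img):
    """Step 1 (also D_BLK_CC's first step): average the image's 8x8 blocks
    into a single 64-dimensional mark vector."""
    h, w = img.shape
    return img.reshape(h // 8, 8, w // 8, 8).mean(axis=(0, 2)).flatten()

def preprocess(img, w_r, tau_cc, alpha, R2):
    """Steps 2-6 of the preprocessor."""
    v_o = extract_mark(img)                              # step 1
    X = w_r / np.linalg.norm(w_r)                        # step 2: Gram-Schmidt
    Y0 = v_o - (v_o @ X) * X
    Y = Y0 / np.linalg.norm(Y0)
    # Step 3: (x_vo, y_vo) = (v_o @ X, v_o @ Y); y_vo >= 0 by construction.
    k = np.linalg.norm(w_r)                              # = sqrt(N) = 8 here
    # Step 4: the closest prepping point has x_vp = 0 and
    y_vp = np.sqrt(alpha**2 * k**2 * (1 - tau_cc**2) / tau_cc**2 - R2)
    v_p = y_vp * Y                                       # step 5: back to 64-d
    delta = (v_p - v_o).reshape(8, 8)                    # step 6: shift the
    h, w = img.shape                                     # average of each block
    return img + np.tile(delta, (h // 8, w // 8))

def blind_embed(img, w_r, alpha, message):
    """E_BLK_BLIND-style embedding: tile and add (or subtract) the mark."""
    sign = 1.0 if message == 1 else -1.0
    h, w = img.shape
    return img + alpha * sign * np.tile(w_r.reshape(8, 8), (h // 8, w // 8))
```

Note that the square root in step 4 is real only when α²k²(1 − τ_cc²)/τ_cc² > R²; as discussed in Section 3.3, a blind embedding strength too weak for the desired robustness makes the prepping region unattainable.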
To test these procedures, we first tested the watermarking system on original images that had not been preprocessed, using a weak embedding strength of α = 0.5. Watermarks of m = 1 and m = 0 were embedded in each of 2000 images from the Corel image database [7]. Each image was 256 by 384 pixels, and k = 8. Figure 10 shows the results of this test. We then preprocessed the images and repeated the test; as shown in Figure 11, the watermark was successfully embedded in every trial.
Figure 11: Results of the watermarking system applied to preprocessed images.
If we can guarantee that the detector will never be run on unpreprocessed images, we could take advantage of this to lower the detection threshold, thereby obtaining even better robustness.
The question arises whether we could obtain equally good results, with the same fidelity, by just increasing the embedding strength used during blind embedding. Blind embedding alone, with no preprocessing, yields an average mean-squared-error between marked and unmarked images of exactly α (because of the way we scaled w_r). Preprocessing, however, introduces additional fidelity degradation. The average mean-squared-error between original images and images that have been both preprocessed and watermarked was just under 1.04. If, instead of applying preprocessing, we simply increased α to 1.04, we would obtain the same fidelity impact as preprocessing plus embedding, but we would have substantially stronger watermarks than with α = 0.5. Would this yield 100% effectiveness without preprocessing? Figure 12 shows the results of applying the blind embedder to unpreprocessed images with α = 1.04. Although this performance is vastly better than that of Figure 10, it is still inferior to the performance obtained with preprocessing. With this higher value of α, blind embedding still failed to embed watermarks in just under 6% of the trials.
Of course, since we can assume that we have substantial computing power available during preprocessing, we can improve on the fidelity impact of preprocessing by applying more sophisticated algorithms, such as perceptual modeling. Such improvements would increase the disparity between watermarking with and without preprocessing.
Conclusion
There are several watermarking applications in which a potentially very large number of embedders must be deployed under severe computational constraints that limit performance. In order to attain the performance of sophisticated embedding algorithms, and yet maintain a simple, inexpensive embedder, we propose preprocessing media before it is released. Most of the computational cost is shifted to the preprocessing stage, where it is assumed that significant resources are available.
Our proposal is applicable in settings where content can be modified before it reaches the watermark embedders. Two examples of such applications are the transaction-tracking system deployed by the DiVX corporation, and the proposed Galaxy watermarking system for copy protection of DVD video.
Before preprocessing, unwatermarked Works can be thought of geometrically as being randomly distributed in a high-dimensional vector space. Within this space lies a detection region - Works falling within this region are said to be watermarked. Unwatermarked Works are seldom if ever found in the detection region. Traditional embedding algorithms seek to add a watermark pattern to a Work in order to move the Work into the detection region, subject to fidelity and robustness constraints. During the preprocessing stage suggested here, a signal is added to a Work such that the preprocessed Work lies on a predetermined surface near, but outside of, the detection region. That is, the unwatermarked but preprocessed Works are no longer randomly distributed in the high-dimensional space but lie in a well-defined region.
This preprocessing step provides two main advantages. First, since preprocessed Works lie on a well-defined surface, near yet outside of the detection region, simple embedding techniques are sufficient to watermark the Works with good fidelity and robustness. Second, the computational cost associated with the preprocessing step is not borne by the embedders. Instead, content creators bear this cost, the preprocessing being performed by dedicated devices located with the content creators. Thus, the performance of the overall system need no longer be constrained by the computational budget allocated to the embedder.
A third possible advantage of preprocessing is that it can reduce the probability of false positives. This results from the preprocessor ensuring that all Works are at least a certain distance outside the detection region. However, this advantage can only be exploited in applications where the watermark detector will never be applied to unpreprocessed content.
We have implemented a preprocessor for a simple, 1-bit watermarking system with blind embedding. Tests on 2000 images show that preprocessing significantly improves performance of the embedder.
"year": 2004,
"sha1": "38d7e7391e9b3012e4a82cbc72445892a4137498",
"oa_license": "CCBY",
"oa_url": "https://asp-eurasipjournals.springeropen.com/track/pdf/10.1155/S1110865704403072",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "38d7e7391e9b3012e4a82cbc72445892a4137498",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Water Contamination of Suburban South India and its Implication for Enteric Infection
Microbial contamination of drinking water is the major reason for the prevalence of diarrheal disease. This study assessed both microbial and chemical contamination of community water supply wells (CW) from November 2009 to December 2010. To analyze the seasonal and spatial variations, monthly sampling was done. In order to identify the risk of diarrheal prevalence, the study also surveyed 290 households and gathered information on household water storage and symptoms of diarrhea. Drinking water samples were collected from the households and analysed for E. coli contamination, as it is the main indicator of microbial contamination. Community water and purchased water samples showed high microbial contamination. Standard spread plate methods were used to enumerate the total coliforms and E. coli. Pathogenic organisms were isolated and identified from CW3, CW4 and CW5. The enumeration of E. coli ranged from 19-60 cfu/mL in the pre-monsoon season and 7-150 cfu/mL in the post-monsoon season, while faecal streptococci ranged from 0-33 cfu/mL and 11-55 cfu/mL in the pre- and post-monsoon seasons respectively. Vibrio sp. was observed only in the post-monsoon season for the community water supply wells. Diarrheal prevalence is comparatively higher among consumers of community water sources and purchased water than among groundwater consumers. To protect public health, routine monitoring and disinfection of groundwater for potable use should be implemented in all suburban areas.
Introduction
Urbanisation places a burden on water management systems and the provision of basic sanitation services. Most urban and peri-urban areas depend on groundwater for their drinking water, but its quality remains an issue. Microbiological contamination of drinking water and inadequate sanitation can lead to large waterborne disease outbreaks. Urbanisation leads to over-exploitation of water resources, congestion in water supply and drainage systems, and poor sewage disposal practices and on-site sanitation systems. The rate of depletion of groundwater levels and the deterioration of groundwater quality are of immediate concern in major cities and towns of our country.
Usage of on-site sanitation systems is common in urban and suburban areas of developing countries; they are increasingly adopted in urban cities in India 1 . The adoption of on-site sanitation systems endangers groundwater resources and puts human health at greater risk 2 . The contaminants are transported through the aquifers and ultimately come into contact with the groundwater. Groundwater sources are more vulnerable to nitrate contamination near on-site sanitation systems 1 . The siting and continued use of the solid waste disposal site in Perungudi, in the midst of the suburbs, caused further degradation of the environment and poses concerns for water quality and community health 3 .
Unhygienic household sanitation, unsafe environmental waste disposal and poor household water storage practices lead to the occurrence of waterborne diseases and place children at risk of illness or death. More than half of the reported waterborne disease outbreaks have been linked to contaminated drinking water [4][5][6][7][8][9] . Intake of contaminated water due to the lack of hygienic practices and sanitation contributes to about 1.5 million child deaths per year, with around 88% of them suffering from diarrhea 10 . Storage for longer durations and handling of water at home increase the chances of contamination 8 . The type of storage vessel, its mouth size, and the design and material of the vessel are also important factors that play a major role in maintaining the quality of water during storage in the domestic domain 11,12 .
Improving the microbiological quality at the source, together with point-of-use treatment and safe storage methods, may reduce diarrhea and other waterborne diseases in communities and households. Knowledge of and attitudes toward water usage, water handling and personal hygiene are important factors impacting health 13 . This study analyzed the link between the quality of different water sources and diarrheal cases. Microbial contamination of drinking water during storage in the household is a prime reason for diarrheal prevalence in developing countries. The aim was therefore to assess the microbial contamination of household stored drinking water and its implication for health, particularly diarrhea. This paper investigates the extent of microbial contamination of the community water supply and of household storage.
Study Area
Pallikaranai is located in the southern part of the Chennai Metropolitan Area (CMA). The study area lies between latitudes 12°54'44"N and 12°59'0"N and longitudes 80°11'41"E and 80°13'59"E, covering an area of 17.36 sq km. It is bounded by the Bay of Bengal in the north and by Kanchipuram district in the west and south. The various sources of water are community water supply, bore wells, open wells, hand pumps and can water (purchased water). Nearly half of the households depend on purchased water (cans) for drinking. The dependency on tanker lorry water supply and public hand pumps was comparatively low.
Sampling
To assess the microbial contamination of drinking water, samples were collected from 7 source wells (CW1, CW2, CW3, CW4, CW5, CW6, CW7) over the period November 2009 to December 2010. From each community well, 1 L of water was collected in a sterile 500 mL glass bottle. Samples were immediately transported to the laboratory at 4 °C in an icebox.
A sample size of 290 households from the study area was selected based on stratified random sampling. As per the 2001 census, there are 15 wards in Pallikaranai. Each ward has households ranging from 75 to 450; a list of streets in all the wards and their households was collected from the panchayat office. From each of the 15 wards, around 20 households having children in the age groups 0-5 and 6-12 years were selected. Samples of stored water were collected from the houses enrolled in the study. The household samples were categorized into groundwater, purchased water and community water.
Questionnaire Survey
The questionnaire was administered orally to the respondents, mostly housewives, from April 2010 to May 2011, and the responses were recorded. The mode of questionnaire administration was a face-to-face interview with the mothers; relevant information about the child in the family was obtained from the respondent using a pretested structured questionnaire in the local language.
Microbial Analysis
Faecal streptococci, E. coli, Vibrio, Salmonella and Pseudomonas were isolated and enumerated by the spread plate method for the community well samples. Household storage samples were analyzed for E. coli alone, as it is the main indicator organism.
E. coli was enumerated by placing a 1 mL aliquot onto Petri films for E. coli/Coliform colony counts. The plates were then incubated for 48 hours at 37°C. Blue coloured colonies with gas entrapment on EMB plates were presumed to be E. coli. Faecal streptococci were enumerated by the spread plate method: 15 mL of Slanetz agar was poured onto a sterile petri plate, 0.5 mL of sample was pipetted slowly onto the surface of the solidified agar and spread using an L rod, and the plate was inverted and incubated for 48 hours at 35°C. Vibrio sp. were enumerated by the spread plate method: 15 mL of Thiosulfate-Citrate-Bile-Sucrose (TCBS) agar was used for the primary isolates, incubated for 24 hours at 37°C, and biochemical tests were performed using a HiMedia specific kit for Vibrio sp. (Kb007).

Physico-chemical Parameters

Figure 1 shows the average concentrations of EC, TDS and chloride in pre and post monsoon seasons. The estimated EC, TDS and chloride values are in the ranges 661-1738 µS/cm, 390-1007 mg/L and 164-344 mg/L respectively. According to BIS and WHO standards, TDS should not exceed the desirable limit of 500 mg/L, with a maximum permissible limit of 2000 mg/L. The concentrations of all the parameters were observed to be high in the pre monsoon season and comparatively low in the post monsoon season. One possible reason for the uniform pattern of improvement of groundwater quality during the post monsoon season may be the location of these wells, either inside the lake/eri or adjoining them; the direct influence of recharge from these freshwater lakes into the shallow aquifer may therefore have had a significant role in the water quality. Table 1 shows the mean variation of pH, turbidity, nitrate, EC, TDS and chloride in the drinking water sources. In the present study, pH values for all the samples were well within the allowable range (6.2-8.7). Turbidity ranged from 0.1-3.31, 0.68-2.7 and 0.15-3.66 NTU in community water supply, groundwater and purchased water respectively. Nitrate for all the samples ranged from 0.1-26 mg/L, with the highest range recorded in the community water supply. TDS and EC have a linear relationship, EC increasing with TDS; the highest ranges of TDS and EC were observed in groundwater, from 92-4500 mg/L and 218-8820 µS/cm respectively. EC values exceeding 2000 µS/cm in the groundwater source result in laxative effects for consumers 14 . High EC and TDS concentrations are the main indicators of dissolved inorganic ions and the degree of mineralization in the groundwater. Chloride ranged from 5-2869 mg/L. The concentration of TDS is comparatively higher in community water supply than in purchased water and groundwater. However, nearly 44% of TDS values in community water supply are within the desirable limit of 500 mg/L, compared with 25% in groundwater and 49% in purchased water. The concentrations of EC, TDS and chloride were more or less similar in community water supply and purchased water.
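The percentage-within-limit figures above amount to simple threshold counts against the BIS desirable (500 mg/L) and maximum permissible (2000 mg/L) TDS limits. A minimal Python sketch of that classification is shown below; the sample values are illustrative placeholders, not the study's raw data.

```python
# Hypothetical sketch: classifying TDS readings against BIS limits.
# The sample values below are illustrative, not the study's data.

TDS_DESIRABLE = 500     # mg/L, BIS desirable limit
TDS_PERMISSIBLE = 2000  # mg/L, BIS maximum permissible limit

def classify_tds(value_mg_per_l: float) -> str:
    """Classify a single TDS reading against the BIS limits."""
    if value_mg_per_l <= TDS_DESIRABLE:
        return "within desirable limit"
    if value_mg_per_l <= TDS_PERMISSIBLE:
        return "within permissible limit"
    return "exceeds permissible limit"

samples = [390, 620, 1007, 450, 2500]  # illustrative TDS readings (mg/L)
within = sum(1 for v in samples if v <= TDS_DESIRABLE)
print(f"{100 * within / len(samples):.0f}% of samples within desirable limit")
for v in samples:
    print(v, "->", classify_tds(v))
```

The same two-threshold pattern applies to any parameter for which a desirable and a maximum permissible limit are specified.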
Microbiology
Since all the community wells are shallow, microbial contamination is high during the post monsoon season; the enumeration of the indicator organisms TC and FC ranged from 93-1100 MPN/100 mL and 45-460 MPN/100 mL respectively. During the pre monsoon season the values ranged from 9-20 MPN/100 mL and 4-15 MPN/100 mL respectively, as shown in Figure 2. The enumeration of E. coli ranged from 19-60 cfu/mL in the pre monsoon and 7-150 cfu/mL in the post monsoon season; faecal streptococci ranged from 0-33 cfu/mL and 11-55 cfu/mL in the pre and post monsoon respectively. Vibrio sp. was observed only in the post monsoon season. According to WHO standards, no coliforms should be detected in groundwater used for drinking purposes, because coliforms indicate recent contamination with faecal matter. Among all the wells, Chitteri (CW3) was found to be the most highly polluted.
These Gram-negative, rod-shaped organisms were identified using a readily available kit; three organisms, Salmonella choleraesuis subsp. choleraesuis, Klebsiella pneumoniae and Enterobacter aerogenes, were predominantly present in CW1, CW3 and CW4. A cattle and goat shed is located in the surroundings of CW1 and CW2. Klebsiella pneumoniae, present only in CW3, may be due to infiltration from the lake in front of the well. Moreover, stagnation of kitchen wastewater around the well was observed during the well sanitation survey, as shown in Table 2.
A total of 3 isolates were obtained and subjected to the biochemical tests shown in Table 3 for identification of Vibrio at the species level. In the present study, the Vibrio species V. cholerae, V. vulnificus and V. proteolyticus were isolated and identified, predominantly in both shallow wells, SW8 and SW9. A selective medium such as TCBS agar eliminates most non-target bacteria. Figure 4 shows the level of E. coli contamination in the sources of drinking water and its relation with diarrheal prevalence. E. coli contamination is considerably high in community water supply (piped water) and purchased water; this may be due to storage and household handling of water, whereas groundwater consumers do not need to store water. From the graph it is clear that diarrheal prevalence is comparatively higher among piped water and purchased water consumers than among groundwater consumers.
Sources of Contamination
The distance between septic tank and well was estimated in metres. 31.7% of wells were less than 15 m from a septic tank (the commonly used guideline is that the distance should be at least 15 m). About 9% were estimated to be at a distance of 16 to 30 m from the septic tank, and 14% of the wells were at a distance of more than 30 m, as shown in Table 4. About 12% of the households have their latrine sewage connected to an unlined septic tank, while 54.8% of the households are connected to a properly lined septic tank. Table 5 illustrates the risk estimation of E. coli contamination in groundwater and its relation with the distance between septic tank and well. The relative risk for the group with E. coli greater than 30 cfu/mL is 1.2 times higher than for the group with less than 30 cfu/mL, which reveals that as the distance between septic tank and well decreases, the concentration of E. coli in the well water increases.
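The relative-risk figure quoted above follows the standard ratio of risks between two exposure groups. A minimal sketch with placeholder counts is given below; the study's actual 2x2 table is in Table 5 and is not reproduced here.

```python
# Hypothetical sketch of the relative-risk calculation behind Table 5.
# Counts are illustrative placeholders, not the study's actual data.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """RR = risk in the exposed group / risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# "Exposed" = wells < 15 m from a septic tank; outcome = E. coli > 30 cfu/mL.
rr = relative_risk(exposed_cases=6, exposed_total=10,
                   unexposed_cases=5, unexposed_total=10)
print(f"Relative risk: {rr:.1f}")  # 1.2 with these illustrative counts
```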
Discussion
This study identifies fecal contamination of community water supplies both at source and during household storage in a suburban area of Chennai. The water supply is obtained from groundwater and purchased water. From the results reported, the community water supply was contaminated by pathogenic organisms such as V. cholerae, V. vulnificus, V. proteolyticus and Salmonella. The concentrations of all bacterial parameters were observed to be high in the post monsoon season and comparatively low in the pre monsoon season, since all the community wells are shallow. Vibrio cholerae was observed in wells CW2, CW3 and CW4 during the post monsoon season. The concentration of TDS is comparatively higher in community water supply than in purchased water and groundwater, which coincides with E. coli contamination also being high in the community water supply. The rate of occurrence of diarrhea was high in households using the community water supply, and the data also suggest that contamination of drinking water during storage in household vessels is the major reason for disease transmission. Using narrow-mouthed storage vessels and point-of-use water treatment before storage can reduce this risk, as reported previously 12 . The previous part of this study 15 observed that pathogenic microorganisms in the community water supply well (source water) multiply at the point of distribution and at the point of use. The major factors contributing to the contamination may be intermittent supply, as reported elsewhere also 8,16 , stagnation of sullage waste along the pipeline, breakage of the pipeline, and possible biofilm formation in the water distribution pipeline 17 .
Conclusion
The interdisciplinary approach of the study helps to explore and discuss the problems associated with ADD and its environmental relations. The results clearly indicate that the prevalence of diarrhea among children is mainly due to the consumption of microbially contaminated water. The sources of contamination of the community water supply are anthropogenic activities, the supply of water without proper chlorination and poor handling of water during storage; for groundwater, the presence of E. coli is due to inadequate distance between septic tank and well. Survival of Escherichia coli in water indicates recent contamination of the water source with fecal matter and hence the possible presence of intestinal pathogens in the well. Continuous monitoring of well water quality and inspection of well sanitation help to identify the source and route of contamination, which aids corrective action. Household storage water is more contaminated than the community water, indicating that interventions are needed to decrease the contamination of water during household storage. Since continuous monitoring of the water supply by the corporation may be unsuccessful in some circumstances, improving personal hygiene in the handling and storage of water in the domestic domain is the appropriate measure to reduce ADD prevalence.
Acknowledgements
The author wishes to acknowledge the Wageningen University, Netherlands and SaciWATERs, Hyderabad for providing the research fellowship to carry out the present study. Grateful appreciation is extended to the health offices and the communities of Pallikaranai. | 2019-04-25T13:10:17.455Z | 2015-12-03T00:00:00.000 | {
"year": 2015,
"sha1": "e664160b422420c373da69707ee885ef2509205b",
"oa_license": null,
"oa_url": "https://doi.org/10.17485/ijst/2015/v8i36/88613",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f853dce3cab456821716d470834023b4e5c51331",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
252463776 | pes2o/s2orc | v3-fos-license | Objective and subjective measures of sleep in men with Muscular Dystrophy
Purpose Although poor sleep quality is recognised in Duchenne Muscular Dystrophy, reports from milder forms of Muscular Dystrophy (MD), and the accompanying associations with quality of life (QoL), pain and fatigue, remain limited. Methods Adult males (n = 15 Beckers MD (BMD), n = 12 Limb-Girdle MD (LGMD), n = 12 Fascioscapulohumeral (FSHD), n = 14 non-MD (CTRL)) completed assessments of body composition (bio-electrical impedance), sleep (7-day 24-hour tri-axial accelerometer, Pittsburgh Sleep Quality Index (PSQI) and Insomnia Severity Index), QoL (SF36-v2), pain (visual analogue scale), fatigue (Modified Fatigue Index Scale) and functional assessments (Brooke and Vignos). Results FSHD and BMD reported worse sleep than CTRL on the PSQI. FSHD scored worse than CTRL on the Insomnia Severity Index (P<0.05). 25–63% and 50–81% of adults with MD reported poor sleep quality using the Insomnia Severity Index and PSQI, respectively. Accelerometry identified no difference in sleep quality between groups. Associations were identified between sleep measures (PSQI global and insomnia severity) and mental or physical QoL in LGMD, BMD and FSHD. Multiple regression identified associations between sleep impairment and fatigue severity (all MDs), body composition (BMD & LGMD), upper and lower limb function (LGMD, FSHD) and age (FSHD). Conclusions 25–81% of men with MD, depending on classification, experience sleep impairment according to self-report sleep measures. Whilst BMD and FSHD showed worse sleep outcomes than CTRL, no group difference was observed between LGMD and CTRL; however, all groups showed associations between sleep impairment and higher levels of fatigue. These findings, and the associations with measures of health and wellbeing, highlight an area for further research which could impact QoL in adults with MD.
Introduction

Muscular Dystrophy (MD) is an umbrella term for a set of progressive muscle weakness conditions, for which the focus of research and interventions has been genetics and clinical characteristics, in particular muscle strength and the maintenance of ambulation in Duchenne MD (DMD) [1][2][3][4]. More recently, an increasing volume of research has developed around identifying influences on quality of life (QoL) and health status in other forms of MD [5][6][7][8][9][10]. The Health Status in MD model by de Groot et al. [11] acknowledges the importance of sleep disturbances, along with pain and fatigue, as contributing factors to QoL and health in MD. Evidence surrounding the prevalence and impact of pain and fatigue on QoL is relatively well-established [12][13][14][15]; sleep disturbances, on the other hand, and the broader concept of sleep quality, remain comparably under-reported in adults with MD.
The term 'sleep quality' is commonly used to encompass a variety of sleep measures, including sleep time, onset of sleep, sleep efficiency and sleep disruptions [16], with poor sleep quality a defining feature of insomnia [17]. Sleep quality is known to be reduced in conditions with chronic pain [18,19], and poor sleep has been shown to result in sensations of fatigue in other clinical conditions [18]. Both pain and fatigue have been shown to be symptomatic within MD [6,14,[20][21][22]; indeed, our research group recently presented associations between both pain and fatigue and QoL in adults with MD [5]. Despite the presentation of pain and fatigue in MD [5,14], well-established associations with sleep quality in other clinical conditions [19], and inclusion within the Health Status in MD model [11], direct evidence of associations between sleep quality and pain, fatigue and QoL remains limited in adults with MD, with the exception of DMD [23].
In adults with DMD, poor sleep quality is linked to a combination of supine body position, a severely weakened respiratory system, and the use of a night-time ventilator (indeed, severe respiratory weakness can often result in the daytime use of a ventilator as well) [23][24][25][26][27][28]. In comparison, reports of sleep quality within milder conditions of MD, such as Beckers MD (BMD), Limb-Girdle MD (LGMD) and Fascioscapulohumeral MD (FSHD), are limited. In LGMD and FSHD, sleep disordered breathing has been identified previously [29][30][31] using recall questionnaires (LGMD and FSHD) and polysomnography (FSHD) [22]. Accelerometry has recently been used to identify impaired sleep quality in children with DMD [28]; however, this measurement technique has not yet been adopted in adults with milder forms of MD. While understanding of sleep quality in LGMD and FSHD is limited to these aforementioned studies [29][30][31], further research is required to understand the prevalence of poor sleep quality in adults with BMD, LGMD and FSHD, as well as its implications for health-related QoL and associations with pain and fatigue.
Therefore this research study aimed to: 1) Assess sleep quality in adults with BMD, LGMD and FSHD using 3 methods, 2) Assess the relationship between sleep quality and QoL, 3) Identify relationships between Pain, Fatigue and Participant Characteristics, and sleep quality.
Material and methods
Fifty-three adult males volunteered to participate in this study (n = 15 BMD, n = 12 LGMD, n = 12 FSHD, n = 14 non-MD controls (CTRL), Table 1). Participants were grouped by dystrophic condition, confirmed by gene analysis at referral. All participants with MD were recruited from, and tested at, The Neuromuscular Centre (Winsford, UK), where they habitually participate in monthly physiotherapy sessions (average monthly attendance = 2 ± 0.34 days). CTRL participants were recruited from the general population and were free from any health conditions. CTRL participants were tested at the local university campus, using identical methods and equipment to the MD participants (with the exception of height and body mass, see below). None of the MD or CTRL participants reported any change in their activity levels or physiotherapy provision in the 3 months prior to inclusion in this study. All participants arrived for their testing in a fasted state. Ethical approval was obtained through the Sports and Exercise Science Ethics Committee, and all participants provided informed written consent prior to participation. All procedures complied with the World Medical Association Declaration of Helsinki [32].
Procedures
Each participant was tested in a single testing session. The same equipment was used for all participants, with the exception of seated scales for body mass measures in non-ambulatory MD participants. Anthropometric measures were performed first, after which questionnaires for sleep quality, insomnia, quality of life, fatigue and pain were completed independently. The principal investigator was present to aid with any questions or, in some cases, to tick the desired box for participants with limited upper-limb function. Upon completion of the questionnaires, an accelerometer was strapped to the wrist of the self-reported dominant arm and worn for seven consecutive days and nights (GENEActiv, Cambridge, United Kingdom).
Anthropometry
Control participants' mass was measured whilst standing (unshod) using digital scales (Seca model 873, Seca, Germany). MD participants were weighed using a digital seated scale (model 6875).
Body composition
Body composition measures of lean body mass (LBM) and fat mass (FM) were obtained using BIA in a fasted state. Two adhesive electrodes were placed on the dorsal surfaces of the metacarpals and metatarsals of the right hand and foot. Two proximal electrodes were placed between the medial and lateral malleoli of the right ankle, and between the styloid processes of the right radius and ulna. BIA is a recommended method of body composition assessment in MD, given its speed and ease of use within populations that may be non-ambulant, and it has been promoted as a measure of change in FM and LBM over time in MD [37].
Functional scales
Upper and lower limb function was assessed using common functional scales in MD, namely the Brooke [38,39] and Vignos [1] scales, respectively. The Brooke scale ranges from 1-6, whereby 1 means the participant is able to "start with arms at the sides and abduct the arms in a full circle until they touch above the head" and 6 means "cannot raise hands to the mouth and has no function of hands" [40]. The Vignos scale ranges from 1-10, with 1 "walks and climbs stairs without assistance" and 10 "confined to a bed" [40]. All functional scales were administered by a chartered physiotherapist and are reported in MD participants only [14,41].
Sleep assessment
Pittsburgh sleep quality index. The Pittsburgh Sleep Quality Index (PSQI) is a reliable (α = 0.73-0.81) [42][43][44] and commonly used measure of sleep quality within clinical and research settings [23, 45,46]. The PSQI is a 19-item questionnaire determining a global score, representing the sum of its seven domain scores: Subjective Sleep Quality, Sleep Latency, Sleep Duration, Habitual Sleep Efficiency, Sleep Disturbances, Use of Medications for Sleep and Daytime Dysfunction. Individual domains are scored on a scale of 0-3, whereby 0 denotes no difficulty and 3 denotes severe difficulty. "Poor sleep quality" is indicated by a global score of 5 or greater.
Insomnia severity index. The Insomnia Severity Index is a self-report measure of sleep that has been widely used in clinical conditions [47][48][49][50], consisting of 7 items: severity of sleep onset, sleep maintenance, early morning wakening, satisfaction with sleep pattern, interference with daily functioning, impairment attributed to sleep, and distress caused by sleep [51]. Questions are scored on a 5-point Likert scale, from "Not at all" (0) to "Extremely" (4). Total scores range from 0 to 28, with higher scores indicative of greater insomnia severity. Totals up to 7 are considered 'Not Clinically Significant Insomnia', scores of 8-14 'Subthreshold Insomnia', scores of 15-21 'Clinical Insomnia (Moderate)', and scores of 22-28 'Clinical Insomnia (Severe)' [52].
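Taken together, the two self-report instruments reduce to simple score bands. The following sketch, written from the cut-offs described above rather than from any scoring software the authors may have used, illustrates how a PSQI global score and an Insomnia Severity Index total would be categorised:

```python
# Hypothetical sketch of how the two self-report scores are categorised.
# Cut-offs follow the descriptions above; this is not the authors' code.

def psqi_quality(global_score: int) -> str:
    """PSQI global score >= 5 denotes poor sleep quality."""
    return "poor sleep quality" if global_score >= 5 else "good sleep quality"

def isi_category(total: int) -> str:
    """Map an Insomnia Severity Index total (0-28) to its named band."""
    if total <= 7:
        return "Not Clinically Significant Insomnia"
    if total <= 14:
        return "Subthreshold Insomnia"
    if total <= 21:
        return "Clinical Insomnia (Moderate)"
    return "Clinical Insomnia (Severe)"

print(psqi_quality(6))   # poor sleep quality
print(isi_category(16))  # Clinical Insomnia (Moderate)
```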
Accelerometer. Participants wore a wrist-worn triaxial accelerometer (GENEACTIV, Kimbolton, Cambs, United Kingdom) over a consecutive 7-day period [53], a device which has previously been reported as reliable and valid [54,55]. All participants wore accelerometers on the wrist to increase adherence and remove the potential discomfort of waist-worn accelerometers for non-ambulant participants [56]. Furthermore, the use of accelerometers to assess sleep has shown 83-89% agreement with polysomnography [57,58], and has become a more common assessment method due to participant convenience compared to polysomnography [59][60][61][62]. Wrist-worn accelerometers were worn for 24 h a day on the self-reported dominant wrist of the participant, with daily physical activity reported previously by the authors [63]. Monitors were initialised to collect data at 100 Hz. Once the monitors were returned after the 7-day data collection, data were downloaded from the monitors into .bin files and converted into 60 s epoch .csv files using the GENEActiv PC Software (Version 2.1). The 60 s epoch data files were entered into a validated open-source Excel macro (v2, Activinsights Ltd.) [64], which reports participant sleep outcomes (sleep time, sleep efficiency, and the number and length of activity periods).

Pain. A Visual Analog Scale of pain (Pain VAS) was used to quantify the level of whole-body pain felt by participants over the 7 days preceding assessment. The VAS is a common method of pain assessment [65] used in many conditions [66,67]. Participants were given a 10 cm straight line, with "No Pain" at one end and "Worst Possible Pain" at the other, and instructed to mark where, on average, their pain over the preceding 7 days lay on the scale. The mark was then measured and presented as the distance (cm) from the "No Pain" end.
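Returning to the accelerometer pipeline described above, the reduction of 100 Hz raw tri-axial data to 60 s epochs is, in essence, a windowed summarisation. The sketch below is a hypothetical illustration of that step only; it is not the GENEActiv PC Software or the Activinsights Excel macro, and the summary metric (mean vector magnitude) is an assumption made for illustration.

```python
# Hypothetical sketch of reducing raw tri-axial data to 60 s epochs.
# Illustrates the general windowing step only; it is not the
# GENEActiv PC Software or the Activinsights Excel macro.
import numpy as np

FS = 100      # sampling rate (Hz)
EPOCH_S = 60  # epoch length (seconds)

def to_epochs(xyz: np.ndarray) -> np.ndarray:
    """Collapse (n_samples, 3) raw data into per-epoch mean vector magnitude."""
    samples_per_epoch = FS * EPOCH_S
    n_epochs = xyz.shape[0] // samples_per_epoch
    trimmed = xyz[: n_epochs * samples_per_epoch]
    magnitude = np.linalg.norm(trimmed, axis=1)  # vector magnitude per sample
    return magnitude.reshape(n_epochs, samples_per_epoch).mean(axis=1)

raw = np.random.normal(0, 0.05, size=(FS * EPOCH_S * 5, 3))  # 5 min of fake data
print(to_epochs(raw).shape)  # (5,) -> one summary value per 60 s epoch
```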
Fatigue. The Modified Fatigue Index Scale (MFIS) is a 21-item questionnaire completed by participants to provide a total fatigue score (MFIS Total) and subscores for its domains, namely Physical (MFIS Physical), Cognitive (MFIS Cognitive) and Psychosocial (MFIS Psychosocial). Participants rate the 21 questions on a 5-point Likert scale, from "Never" (0) to "Always" (4). MFIS Total is out of a possible score of 84, with scores of 38 or over indicative of fatigue.
Quality of life. All participants completed the SF-36v2 questionnaire, a reliable and validated measure with eight domains of quality of life (QoL) [68,69], a full description of which is available in our research group's previous work [5]. For the purpose of the present paper, data are presented as composite Total Mental (QoL Mental) and Total Physical (QoL Physical) component scores. All data were analysed using Health Outcomes Scoring Software 4.5 (QualityMetric Health Outcomes™, Lincoln, United Kingdom).
Statistical analysis
All analyses were performed using IBM SPSS Statistics 26 software. The critical level of statistical significance was set at 5%. All data, except for stature and LBM, were non-parametric, as determined through Levene's and Shapiro-Wilk tests. The Kruskal-Wallis test was used to compare between groups, with post hoc Mann-Whitney U (least significant difference) pairwise comparisons used where appropriate. Stature and LBM were compared between groups using a one-way analysis of variance (ANOVA), with Tukey's test used for post hoc pairwise comparisons. Where appropriate, bias-corrected accelerated confidence intervals were calculated with 1000 bootstrap replicates [70]. Linear regressions between sleep quality (PSQI Global Score; Insomnia Severity Index Total Score; and accelerometer Sleep Time and Sleep Efficiency) and QoL measures (SF36-v2 Physical and SF36-v2 Mental) were conducted. Stepwise multiple linear regressions were conducted between co-variates (pain, fatigue and participant characteristics of age, fat mass, fat-free mass, and Vignos and Brooke ratings) and measures of sleep quality (PSQI Global Score; Insomnia Severity Index Total Score; and accelerometer Sleep Time and Sleep Efficiency). ANCOVA was conducted to determine whether there were group differences in QoL (Physical and Mental) while controlling for the covariables of fatigue, pain and sleep (PSQI global), with Bonferroni post hoc comparisons. Significant differences in tables and figures are denoted in the MD group furthest from CTRL in comparison, and thereafter from most affected to least affected MD.
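As an illustration of the comparison strategy described above, the following sketch reproduces a Kruskal-Wallis test with post hoc Mann-Whitney U pairwise comparisons, using scipy in place of the SPSS procedures the authors used; the group values are invented placeholders, not study data.

```python
# Hypothetical sketch of the non-parametric group comparison described above.
# Group values are illustrative; scipy stands in for the SPSS procedures.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {
    "BMD":  [8, 10, 7, 9, 12, 6],
    "LGMD": [5, 6, 7, 4, 8, 5],
    "FSHD": [11, 10, 13, 9, 12, 10],
    "CTRL": [4, 3, 5, 4, 6, 3],
}

h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.3f}")

if p < 0.05:  # post hoc pairwise Mann-Whitney U comparisons
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        u, p_pair = mannwhitneyu(a, b)
        print(f"{name_a} vs {name_b}: U={u:.1f}, p={p_pair:.3f}")
```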
Sleep
For clarification, higher domain and global scores on the PSQI, as well as on the Insomnia Severity Index, are indicative of worse sleep.
No differences were found between groups for any of the accelerometer-determined measures of sleep time, sleep efficiency, number of activity periods or activity period length (P > 0.05; Table 3).
Discussion
This study has assessed sleep in adults with BMD, LGMD and FSHD using three separate assessments. Between 25% and 81% of adults with MD, depending on classification, experience some form of sleep impairment, compared to 14-43% of CTRL, using self-report measures. The severity of sleep impairment in adults with MD was associated with low physical function, higher body fat, higher levels of fatigue and lower QoL. These findings, although consistent with work in children with DMD [23, 26-28], could represent a new avenue for potentially improving QoL in adults with MD.
While accelerometry indicated that adults with MD had sleep comparable to CTRL, self-reported outcomes consistently identified impaired sleep quality in adults with MD. Of particular note, using the PSQI, poor sleep quality was noted in 80% and 81% of adults with BMD and FSHD, respectively. In comparison, Della Marca et al. [30] reported that 59% of adults with FSHD had poor sleep quality using the same assessment, while more recently Crescimanno et al. [23] reported that 56% of adults with DMD using night-time ventilators had poor sleep quality, with a PSQI Global Score of 6.1; within the present study, PSQI Global Scores ranged from 5.5 (LGMD) to 10.3 (FSHD). The increased prevalence of impaired sleep quality in the present study, despite participants not using night-time ventilators, may be surprising. It is possible that physical limitations and an impaired ability to re-position during sleep may explain these findings, given that accelerometers did not identify differences in sleep quality. This is further evidenced by the associations identified between physical function, namely the Brooke (FSHD) and Vignos (LGMD, FSHD) scales, and either the Insomnia Severity Index or sleep efficiency. It is pertinent to note that, despite accelerometry being a valid measure of sleep compared with polysomnography [57,58], there is a lack of polysomnography data from adults with MD with which to compare. Regardless, in the present study, adults with MD had a 50-81% prevalence of poor sleep quality (PSQI), and 8-27% had moderate insomnia (Insomnia Severity Index), emphasising the need for further investigation into impaired sleep quality in adults with MD. Despite experiencing low QoL, high pain and high fatigue, those with LGMD did not differ in sleep outcomes from CTRL, despite a 50% prevalence of poor sleep quality. Of interest, however, were the within-group associations, showing that those with poor sleep tended to have higher body fat (consistent with BMD) and higher levels of fatigue (MFIS Total; consistent with BMD and FSHD). The present study has therefore provided further evidence for the health model proposed previously by de Groot et al. [11], through associations between sleep and QoL in multiple classifications of MD. In addition, associations were consistently identified between measures of body composition, functional scales and fatigue with sleep measures in adults with MD. Previous work has reported associations between sleep disruption and respiratory impairment in LGMD [29]; these respiratory impairments are likely exacerbated by the body fat% and BMI associations identified in the present study, resulting in apnoea and a likely eventual need for nocturnal ventilation [30,31]. While relationships between visceral fat and sleep apnoea are recognised within healthy populations [71], further research is required to explore the relationships between absolute adiposity and fat distribution with sleep in adults with MD [72]. Associations between sleep impairment and functional measures, fatigue and QoL in the present study are similar to those reported previously in FSHD [22]. Fatigue has become a prevalent feature of MD research [13,73], particularly within FSHD [21,74], and the current authors previously reported its impact on QoL across adults with MD [5].
Therefore, the present authors propose an updated health model for adults with MD (Fig 2), in which independence, body composition and sleep disturbances are included and evidenced to impact QoL across adults with MD, through our current and previous work [5,63], as well as the evidence in the original de Groot model [11].
Using the updated health model, it could be suggested that physical activity may be a key area for improving QoL and health status in adults with MD. The effect of increased physical activity on improving body composition and reducing body fat is well established within the general population [75] and recognised in MD [33, 63,76,77]. This may therefore reduce the effect of increased body fat percentage on sleep disturbances in adults with MD. Furthermore, Voet et al. [74] have reported previously that increased aerobic exercise resulted in decreased experienced fatigue in adults with FSHD; however, the applicability across MD conditions remains unexplored. The description of physical behaviour (during waking hours, from physical activity through to sedentary behaviour) is relatively limited in adults with MD [63,78]; future studies should consider the potential benefits of physical activity interventions on fatigue, sleep quality and QoL in adults with MD.
Limitations
It should be recognised that the MD population recruited for this study voluntarily attends a centre for care and rehabilitative treatments. Participants must be referred to the centre by a clinician; therefore, participants may be predisposed to higher levels of physical dysfunction, pain or poor sleep quality, resulting in their referral. In addition, this study used a relatively novel method, accelerometry, to assess sleep quality in a clinical population [28]. Accelerometers allow an objective method to be used within a home setting, rather than the intrusive nature of polysomnography [79]. The outcome measures, however, are estimates of sleep quality based on movement and, as our results attest, should be treated with some caution. The present study included two self-report measures of sleep quality to supplement the objective method of assessment and, consistent with previous work in children with DMD [61], all three produced associations consistent with the conclusions proposed.
Conclusion
In conclusion, this study has identified a high prevalence of impaired self-reported sleep in adults with MD, along with a wide range of associations including fatigue and QoL. These findings have helped develop further the initial health model for adults with MD by de Groot et al. [11], resulting in the more comprehensive health model suggested by the present authors. This health model proposes that future interventions that improve sleep quality will benefit QoL in adults with MD and reduce some of the burden associated with these conditions. | 2022-09-24T05:23:35.409Z | 2022-09-22T00:00:00.000 | {
"year": 2022,
"sha1": "215cca7ccb07ab3fecac31ecb62e3646f04ad1f4",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "215cca7ccb07ab3fecac31ecb62e3646f04ad1f4",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
86863845 | pes2o/s2orc | v3-fos-license | Women , Politics and Decision-Making in Sierra Leone
The political and decision-making systems and processes of Sierra Leone are fraught with grave gender inequalities that disadvantage women. While women have not been formally barred from standing for political office or even partaking in decision-making in the history of the nation, systemic and structural factors and forces continue to restrict women's access, resulting in wide gaps in the participation of women and men. This paper examines such systemic and structural factors with an emphasis on the socio-cultural forces and factors that limit women's political participation. The analysis is informed by the equality strategy and quota movement, which have been posited in gender analysis as fundamental to democratic development. The analysis shows that although women have historically played key political roles in national development, they continue to be marginalized in formal politics and decision-making processes. Drawing from various quota and equality strategies from Africa and beyond, it argues that Sierra Leone in its post-conflict reconstruction should be guided by such positive examples. It notes that the continued marginalization of women constitutes an infringement on their human rights and contravenes various conventions such as CEDAW. Hence, recommendations are made for the elimination of moribund cultural practices that limit women's access and the institution of policies and practices that actively promote women's rights and gender equality. Key Descriptors: Women's Empowerment, Socio-cultural Factors, Political Participation, Post-Conflict Reconstruction, Gender Equality
Introduction
The role of women in politics and decision-making is not a new phenomenon in the sociopolitical development of Sierra Leone. Constitutionally, women have the legal right to be involved in politics, to vote in elections, and to be members of parliament or even become ministers or cabinet ministers (Act No. 6 of 1991, Chapter III, sections 15 and 27). In practice, however, the women of Sierra Leone have been restricted by many factors and forces, cultural, structural and material, which circumscribe their access to and participation in politics and decision-making at various levels of society. Yet, equitable access to politics and decision-making is critical for the post-conflict reconstruction of Sierra Leone. In particular, the re-emerging democracy, which embraces good governance, requires that the men and women who form that society are not just represented in government but also equitably included in the systems and structures of governance.
According to Sanders (1999), decision-making involves making wise choices within a particular context and understanding the consequences of such actions, or determining alternative solutions to problems. It deals with critical thinking skills in cases such as resisting peer group influence, rape or any form of violence. These are some of the realities particularly of women who have to live with the horrors and abuses of wartime and have to reclaim and rebuild their lives in a post-war era. If post-war and post-conflict reconstruction is going to benefit all, conscious efforts need to be made to address the particular needs of women, who often are marginalized in decision-making processes. Women's views have to be sought and their voices must be heard. It is therefore imperative to consult with and solicit the views of women who, like men, are affected by decisions at all levels. It is even necessary to gather information from many sources in the process of decision-making so as to cater to the multiplicity of perspectives on the issue. Indeed, women like men should be active participants in the decisions of the community and state if the post-war reconstruction efforts are to transcend the traditional inequalities and parochial patriarchies to promote gender equality and social justice.
Consequently, the promotion of gender equality, and especially empowering women, has been a focus of the governments of Sierra Leone, especially after the eleven-year civil war (1991-2002). In the past, the condition of women in the male-dominated society of Sierra Leone was appalling. However, wartime austerity and survivalism, as well as post-war reconstruction, appear to have inspired and renewed some agency for women. Under war conditions and circumstances, where there was the collapse of not just law and order but also the social systems and structures that held women back, it was probably survivalism that demystified male superiority and provided the agency that the women of Sierra Leonean society exhibit today. Austere war conditions have, however, been reinforced by the increasingly democratizing state, where respect for human rights and social justice have become important ingredients in governance.
At the level of the state, post-conflict reconstruction has entailed infrastructural and institutional rebuilding alongside efforts at reconciliation, retribution and peace-building. At the institutional level, the building of democratic systems that promote the rule of law, respect for life and equality before the law has been an important pillar in the reconstruction efforts. This has also entailed the formulation of policies and the initiation of programs and projects that foster social justice, including those that support women in politics and decision-making (Tickner, 2001). These, however, remain inadequate, as the participation of women compared to men in all spheres of Sierra Leonean society is still very low. Although the national constitution provides for the rights of women like men, access to formal politics and decision-making still remains largely closed off to women. This situation persists in spite of the long history of women's involvement in politics and decision-making in the country, including traditional societies.
With ongoing reforms and progress in the re-emerging nation state, which values democratic participation, it is expected that this would translate strategically into gender equality in politics and decision-making. While the existing democratic structures offer opportunities for the exercise of electoral rights, they have not been sufficient to assure equity. Yet as a member of the African Union (AU) and party to its treaties and conventions, Sierra Leone, like other countries of Africa, has a commitment to foster gender equality not just in politics and decision-making but also in all spheres of national life. The AU Solemn Declaration on Gender Equality and the Protocol on the Rights of Women are important reference points. Besides, Sierra Leone's national constitution provides for equality of rights of citizens and gender equality, under which women's rights to politics and decision-making are subsumed. However, the slow pace of mitigating change and achieving tangible results has led to the search for options such as the use of affirmative action in the form of quotas. The examples from Eastern and Southern Africa have been important referents.
This paper investigates the use of the quota system as a tool for gender equality and its place in fostering equitable democratic governance above other democratic values. It reveals that cultural identities affect political values, as support for women's equality in the political realm is not the same across all social cleavages. It draws examples from countries in Eastern and Southern Africa to show how they have engendered electoral processes and systems and used the quota system to increase the percentage of women in elected offices. It shows the extent to which such an institutional mechanism, when utilized in Sierra Leone, can make tangible contributions to the achievement of gender equality in national politics and decision-making. The work also looks at factors inhibiting women's political participation, such as traditional laws, the attitude of society, socialization and gender role expectations in Sierra Leone, in order to argue for change that recognizes women's histories of disadvantage and addresses the issue of women's poor participation from a radical perspective in the form of quotas.
Women In Politics And Decision-Making
The political history of Sierra Leone cannot be told without mention of the role of women. Yet these are individual women who have been able to attain very high positions and performed creditably, indicative of women's ability to perform as well as men. Sierra Leonean women's involvement in politics and decision-making dates back to the pre-colonial period and varies from one region and/or ethnicity to another. The history of the nation shows that even the leader of the Mende was a woman, Queen Masarico (Alie, 2005). It is the case that women rulers have predominantly hailed from the southern and eastern parts of the country, home to the Mendes. Thus, during the pre-colonial era, when inter-tribal wars and bush disputes were the norm, women in the south and east of the country played leading roles in the protection of their territories against antagonizing warring ethnic groups such as the Temne and Limba.
Women like Paramount Chief Madam Yoko, from Moyamba District in the South, and Paramount Chief Madam Humornya, from Kenema District in the East, were among the early female decision-makers of the colonial era. Among the early female members of parliament were Nancy Tucker, from Bonthe District in the South, Ella Kobolo Gulama, from Moyamba District in the South, and Madam Wokie, from Kenema District in the East, in the 1960s and early 1970s. There has been an increase in this number in all spheres of life. Today Sierra Leone can boast of many prominent female leaders in politics, the civil service, business enterprises and Christian organizations. The largest Christian church in Freetown, Jesus Is Lord Ministries (Mammy Dumbuya), is headed by a woman, Dora Dumbuya. This is quite a unique achievement for a woman in a Sierra Leone that grew increasingly patriarchal under colonialism.
Women like Dr. Kadi Sesay, from Bombali District in the North, a former Minister of Development; Mrs. Zainab H. Bangura, from Tonkolili District in the North, the current Foreign Minister; Dr. Christiana Thorpe, from the Western Area, the current Chief Electoral Commissioner; Hajia Hafsatu Kabba, from the Western Area, the current Minister of Energy and Power; and Mrs. Elizabeth Alpha-Lavallie, Deputy Speaker of Parliament, have made their marks in the political arena of Sierra Leone. In the Police Force, a woman like Mrs. Kadi Fakondor, Assistant Inspector General of Police from Moyamba District in the South, is one of the most senior officers. In business enterprise, Dr. Sylvia Blyden, from the Western Area and a presidential candidate in the 2002 general elections, has been a very successful journalist, as her Awareness Times newspaper is one of the most read. Presently, the country's Chief Justice is a woman, Her Excellency Justice Umu Hawa Tejan-Jalloh. There are many more female senior administrative officers who have performed their duties remarkably well in the civil service of Sierra Leone. This shows that Sierra Leone's women who have assumed political office or entered public life have played active roles in the nation's development. Yet women in their generality remain excluded en masse from representative politics as well as from decision-making systems at community, district, regional and national levels. Hence, the role of women in politics and decision-making is a typical and popular gender debate that figures in national development discourses, democratic processes and especially in matters of good governance.
However, there is nowhere, not even in developed countries, where women are equal to men in legal, social and economic rights, although there are laws that provide for the equality of all. There are gender gaps in access to and control over resources as well as economic, social and political opportunities. Yet, it is widely believed that women are less likely, compared to men, to support women candidates when it comes to national politics, or even to offer themselves for high-level decision-making offices. When it comes to competing with men, many women are not willing to budge. This is observable during socio-cultural and political gatherings, particularly among Muslim and traditional communities, where doctrinal tenets blatantly discriminate against women. The consequence is that in such spaces women's voices are not heard and their issues are marginalized. It must therefore be a national responsibility to sensitize people and raise awareness of women's issues and gender equality.
While religion and culture can be a hindrance to the role of women in politics and decision-making, some women have been able to break the barriers and have rightly inserted themselves in society. Such women have been able to provide for their basic needs and, in some cases, those of their families. However, there are other barriers that women have to overcome. The likelihood that women will participate in politics depends on resources such as income, education and occupation; usually those women of good socio-economic standing and strategic positions are able to make it that far. In this case, education matters as much as socio-economic wellbeing and might. In particular, education opens up the political world to women. Providing women with work careers may enhance their confidence and independence, thereby helping them to get involved in politics and higher decision-making bodies. After the eleven-year civil war (1991-2002), most women decided to be trained in vocational skills, which has helped them live independent lives. Women who are professionals and well-educated have recognized gender inequality and, through social networks, are able to improve their positions relative to their male counterparts. However, the majority of women are still waiting to be involved in politics and decision-making, as their cultural beliefs hold that the status of a woman is to be a wife and mother.
Factors Inhibiting Women's Participation in Politics and Decision-Making
As a member of the UN, Sierra Leone falls under the category of countries that must uphold the UN Charter and the Convention on the Elimination of all forms of Discrimination against Women (CEDAW). It is also a signatory to the Beijing Platform for Action, which advocates for women to enjoy their rights. Apusigah (2006: n.d.) has noted that "As people, we relish in making excuses for actions. We fail to take actions to remove the barriers; economic, cultural and even politics, that impede women's progress." Unfortunately, these forms of discrimination have socio-economic impacts on not just women but also the larger society. Sierra Leone is a poor nation and its situation has been compounded by the eleven-year civil war and the massive destruction that attended it. Post-conflict or post-war reconstruction has not done much to change the situation.
According to Jusu-Sheriff (2003: n.a.), "Sierra Leone is a poor country that has stubbornly refused to give up its last place in the UNDP Human Development Index." Apart from the war effects and resource limitations, socio-economic factors also play a role in keeping the nation in its unenviable bottom position on the Index. In the context of this paper, I argue that the inability of the state to harness all of its human capital, and especially to capacitate women, is a major factor. Discriminatory socio-economic and socio-cultural factors continue to deny or limit women's access to the necessary resources for effective participation in national development. Women, who form the majority of the population at 51%, are mostly engaged in either subsistence farming or petty trading. Most of them are housewives or are not engaged in anything that can help them live independent lives.
Education and Training
There is a great disparity between men and women in education. The overall adult literacy rate for women in Sierra Leone is currently about 22%-23%, compared with 36% for men. Although access to education is open to both sexes, the reality is that primary school enrolment is only 43% for girls compared to 57% for boys. Women and children enjoy a lower social status and face a number of disadvantages, the most significant being that the majority live under customary laws which deny them basic rights in a number of key areas. Rogers (2001: 114) explains:
In Sierra Leone women account for about 51% of the total population and contribute to most of the household food requirements, including carrying out domestic chores and caring for the aged and children. This notwithstanding, they are marginalized in society and lack adequate access to production assets, including land, credit, training and technology.
Generally, belonging to formal and informal groups is crucial, as such groups provide avenues to women's political participation. They also help to provide political experience and leadership skills that can be transferred to other organizations. Some researchers have even argued that political awareness, such as following and having an interest in political and public events, is a precondition for political participation (Rizzo, 2005; Inglehart & Norris, 2003). This therefore indicates that belonging to social organizations and having an interest in discussing politics and current events are indications of greater political participation among women. Also, political participation requires resources such as finance, education and status. Hence, only those few women who are able to meet the criteria, such as having higher socio-economic status and the requisite educational qualifications and political training, have been able to gain access to national and even traditional politics.
The reality is that most women have little education and hardly have the requisite higher qualifications that would make them competitive. The education of women is therefore particularly important, as it provides the cognitive skills and civic awareness that help women to mobilize and compete in politics. Educated women are also better informed about politics and their rights to political participation. Most women also do not have the high-paying jobs or businesses that would give them the finances for running costly political campaigns. A profession or career for women not only boosts confidence but also provides much-needed resources. Women who have entered professional occupations are more likely to go into politics. "Professionals are more likely to be well educated, practiced in public speaking, and familiar with the political system and the laws of the state" (Kenworthy & Malami 1999: 240). These challenges have been compounded by subjugated social positions that invariably make women second-rate citizens.
Traditional Gendered Ideologies
Culturally informed and defined ideological factors have been extremely important in determining women's opportunities to participate politically. For example, in the rural areas, candidates are expected to be members of male societies such as Poro, Gbangbani and Wonday before they can be considered for political or leadership offices, a requirement that effectively excludes women. For a democratic nation which accepts equality for both men and women, making such male-exclusive spaces a silent criterion for political qualification not only denies women their rights but also infringes on their fundamental human rights. The use of traditional practices has especially made women shy away from politics and/or decision-making in the rural areas. Besides, some of the people, including women, belong to either the Islamic or Christian faiths, and their religions forbid them from traditional practices. Yet these traditional practices are still enforced to keep women out of the political arena. Again, especially in the rural areas, where culture dictates that a woman's role is that of a wife and mother, most women are reluctant to become politically active, as such a move would be viewed as defying cultural norms about appropriate feminine behaviour.
Moreover, gendered ideologies affect not only women's willingness to be involved in politics, but also the willingness of the largely male political elites and the broader Sierra Leonean society to accept a woman as a capable political leader (Norris & Inglehart, 2001). Ideologies such as "women are emotional, weak, bossy, indecisive" work to put women out of competition. No matter how misinformed they may be, it would appear that such prevalent ideologies are taken seriously by many voters and tend to be more important than actual political factors in predicting differences between women and men in elections and representation in national politics (Paxton & Kunovich, 2003). It is this difference that explains the low representation of women in politics and decision-making in the northern and some eastern parts of the country. Thus, it could be said that, with the help of a government that promotes social equality, women with more progressive ideologies will be more politically involved than those who hold on to traditional viewpoints about gender roles.
Religious Beliefs
Religion has its setbacks for women's political participation in Sierra Leone. While Christianity, especially Protestantism, has encouraged women's political participation, some Islamists still do not allow women into leadership. Their excuse is that the women will become proud and even look down on the menfolk. Some gender analysts have observed recently that the core clash between Islamic states and the West is not only over the issue of democracy but also over issues concerning gender equality and sexual liberation (Norris & Inglehart, 2003). In Sierra Leone, even women from conservative religious communities are less supportive of equal rights and opportunities for women. Part of this stems from the varying interpretation (or even misinterpretation) of religious doctrines, Islamic or Christian, which often results in the viewing of women as the weaker, irrational and irresponsible sex who need to be obedient, taken care of, and under the control and protection of men. Generally, Muslims argue that Islam forbids women from governance and leadership. However, in places like Kuwait, Egypt and Iran, women activists are using orthodox Islamic beliefs from the early teachings of Muhammad and the Quran to seek equality for both sexes. Such activists argue that in the eyes of God both sexes are equal and need to be treated with justice and respect (Kusha, 1990). Other researchers have also observed that Mohammed (SAW) had great love and respect for women, and listened and gave weight to their expressed opinions and ideas (Mernissi, 1996; Afshar, 1996).
Gender Role Socialization
Gender role socialization starts at birth and is likely to continue through adolescence and even adulthood. In early childhood, boys and girls are taught different sets of expectations, responsibilities and personal attributes to which they should aspire. Such perspectives may continue up to adulthood and marriage, which are tied to the notion of masculinity. Often girls are socialized into domesticity and to aspire for less in families and society, while boys are socialized into public life and higher places. In Sierra Leone, marriage is considered important to society and social organization. Every male and female is seen as incomplete if they do not enter into marriage. In marriage, families pass on gender roles and attitudes to their children. Within families, children are socialized into roles as mothers, fathers or spouses, directly and vicariously. While it has been argued that education and financial independence give women, especially wives, more negotiating power in the home, the roles of women and girls compared to men and boys are gendered to foster subjugation (Oppong, 1987). For example, household chores are considered a woman's work irrespective of her education, career or economic leverage. Also, many traditionalists believe that the educated girl is a less desirable marriage partner (Salm & Falola, 2002). Women in traditional families and societies are also more likely to be forced to have many children even when they are not economically strong enough to look after them. Most traditional husbands are against the use of contraceptives and tend to have more children than they can financially support. This results in proneness to child mortality and maternal mortality due to the inability to pay for and access modern and costly medical facilities. Traditional practices and gender roles also function to promote early dating, which in most cases leads to early or even forced marriage through betrothal. The lack of maturity of such girls, who are likely to be coupled with much older men, keeps them out of decision-making. They are also denied the opportunity to pursue and obtain the higher education which would likely qualify them for political office or enhance their political chances.
Historically, especially in the rural areas, masculinity has been defined in three ways. Manhood could be achieved by becoming an elder, by establishing oneself economically, or by marrying many women and having many children. Being an elder is not achieved by age or wealth alone but also by how well the man can articulate himself in the community and by his skill at offering advice or resolving conflicts. Community chiefs are often rich businessmen and successful farmers who grow cocoa, coffee and palm trees. Because the first two qualifications are harder to attain, many men have tended to take the easier route of marrying many women who bear many children.
While marriage in Sierra Leone is considered an essential requirement and part of life for both men and women, traditionally it is not the union itself that matters most but the children that accompany the relationship. Children are usually the main reason for marrying. The husband has unlimited rights, and the two partners have the right to keep individual accounts. Men are often content with this arrangement, since it gives them the chance to support concubines without their wives' knowledge. The wife, however, still expects financial assistance from the husband, especially for food and housekeeping. It is through this food and housekeeping money that the husband exercises control over the wife, with threats to withdraw the financial assistance.
The payment of bride fees not only gives husbands and their families limitless rights over wives but also determines the wives' chances for political aspiration and the extent of their participation in household or family decision-making. The bride fee has been viewed as a means of subordinating women in marriage (Boni, 2001). It obliges women to bear children and to take subordinate roles. The man will constantly remind the woman that the bride payment was made so that she can bear children, warm his bed on demand, and care for the home and family. Fortes has observed that this payment gives husbands sexual domination, reproductive rights, control over women's labour, and lineage citizenship over wives (Fortes, 1962; in Tanbih, 1989).
The Civil War and Gendered Violence
The eleven-year civil war did not help the situation of women and girls in Sierra Leone. The war dealt a blow to Sierra Leonean society, but especially to women. There was a high death toll, destruction of property and resources, and gross violation of human rights, including mass murders, the worst forms of child labour, and violence against women in the form of torture, rape, forced marriage and sex slavery. In wartime, not only were the social systems that were supposed to support women and girls broken, leaving them to fend for themselves under dire conditions, but women also became breadwinners for their families. Girls in particular were forced to engage in transactional sex, trading sex for money, food and security. Sex and its trade became an important resource for such women to support themselves and whatever family was left. Others were forced into sex slavery in rebel camps, where they serviced warlords and other combatants. Women and girls who survived the war have had to live not only with the losses and pain but also with the after-effects of the loss of bodily integrity. That women suffered so severely in a man-made war, and that women were targeted in particular ways, makes gender issues a critical arena for redress in post-war reconstruction. Specifically, women, like men, should be effectively included in the politics and decisions of reconstruction. Only when women's experiences of the horrors of the war, the abuses and violations of womanhood, and their indelible scars and wounds become important targets for intervention in post-war reconstruction can justice be served, even if only partially.
Taken together, women's low status in politics has been attributed to poverty, low levels of education, socio-cultural inhibitions and state-sponsored violence. It is in view of such gendered violations that the post-conflict reconstruction effort, in addition to dealing with the broad issues, should of necessity also isolate and treat the needs of women and girls as deserving special attention. More importantly, the development of Sierra Leone can only yield the desired results when women, who form the majority, are equitably included in the political and decision-making processes that shape development policies and programmes.
Women's Empowerment and the Politics of Reconstruction
Since 2001, the governments of Sierra Leone have actively encouraged women to take part in national decision-making. Due to the socio-cultural factors and forces discussed above, the majority of women remain in the background, largely leaving men to take decisions. However, with education, women may come to know their rights and the contributions they could bring to the nation if they took an active part in decision-making. In the past, the Ministry of Education stated that "The need for equity in educational opportunities must be met by multiple interventions to ensure that women's enrolment, retention and achievement are significantly increased to enable them serve the nation at the highest levels of decision-making" (Department of Education, 1975: 13). This statement has been reaffirmed by subsequent governments in 1996 and 2007. However, governments have not applied adequate effort to ensure that the traditional inhibitors are curtailed and eventually eliminated.
Women, if independent and empowered, can develop a good sense of belonging and consequently find themselves in social clubs or religious organizations that have supportive social aims and that prepare them for decision-making and political participation. The empowerment of women therefore cannot be separated from agitating for their human rights. This means women should achieve equal status with men. Such aims can only be achieved when women have equal status with men in both the labour force and education, when women have access to employment and economic resources, and when all legal impediments to accessing political power are removed.
The primary focus of the gathering that produced the Beijing Declaration and Platform for Action was to advocate for women's empowerment. Some men who are gender-biased and have refused to move with the world toward development and modernization have looked upon such gatherings as being held by a bunch of frustrated women. In Sierra Leonean society, although the governments are making efforts to empower women, there are still many hindrances. It is therefore advisable that the government embark on drastic reforms to stem the tide. Passing a law is one issue; monitoring and implementing such laws effectively can be quite another. Any laws passed must be put into effect if development is to take place in the country. This is important in the sense that Sierra Leonean women, like other women in other parts of the world and particularly in Africa, have gone through persistent discrimination in a gender-biased, male-dominant society. The state and gender activists are still finding ways, through international bodies, to grapple with these unjust conditions.
There is also the international clamour for gender equality that has made women in other countries seize the bull by the horns. Countries like Denmark, Finland, Norway and Sweden have achieved a combination of approximate gender equality in secondary school enrolment, at least 30% of parliamentary or legislative seats held by women, and women representing approximately 50% of paid employment in non-agricultural activities (Lovenduski, 2005). Chandra (1999: 20) has stated that women and life are synonymous terms: a woman gives life and is the most apt in preserving it. Yet only 4% of decisions taken in the world are taken by women. She also believes that women are the best messengers for peace. This refers to women's primordial and divine obligation of motherhood. The natural role of women in giving birth accords them the right and obligation to treat children with endearment, and teaching children how to speak goes a long way toward influencing their character and thus their contribution to national development. As a popular adage by the Ghanaian educationist Dr. J. E. Kwegyir Aggrey puts it, 'to educate a man is to educate an individual; but to educate a woman is to educate a nation.' This is what Chandra (1999: 15) means in saying that "women do not seek power for power's sake, but to improve the human condition." During the civil war, the role of women could not be overemphasized. Women stood firmly by the men to bring the war to an end. They sat with them at round-table conferences during conflict resolution, played an active role in the democratization and election process, and continue to participate in post-war reconstruction and the poverty alleviation drive. All of this has helped to promote peace and national development. Yet women have yet to enjoy the full benefits of the now stable and peaceful Sierra Leonean society. This is not to deny the efforts that are being made to target and address women's and girls' special needs.
Other conventions and efforts to promote good governance and equitable or sustainable development have included gender equality as an important indicator of success or failure. The Millennium Declaration, signed in September 2000 at the United Nations Millennium Summit, required member states to commit themselves to promoting gender equality and the empowerment of women as effective ways to combat poverty, hunger and disease and to stimulate development that is truly sustainable. Thus the MDGs recognise that the only way of achieving sustainable development is to map out strategies that eliminate gender disparities in primary and secondary education and increase literacy rates, the share of women working in non-agricultural jobs, and the proportion of seats women hold in national parliaments. Sierra Leone is not only a signatory to the declaration but has also been committed to promoting gender equality and the empowerment of women through the Ministry of Gender and Children's Affairs.
In order to show its commitment to the cause of the poor and deprived and to promote gender equity, the government of Sierra Leone established the Ministry of Social Welfare, Gender and Children's Affairs in 2002. This Ministry works in line with the United Nations Convention on the Rights of the Child (CRC) to create safe childhoods and 'child-friendly' environments while empowering women through various strategies in view of global and national commitments. The Ministry does this in collaboration with various stakeholders in education, as well as in other equally important sectors such as health and agriculture, toward enhancing gender equality and promoting sustainable development. These stakeholders include parents, teachers, policy makers and the larger community.
Earlier, in 1995, the Ministry of Education revealed that about 66% of children of school-going age were not in school; of these, 65% were girls. When it later came into being, the Ministry of Social Welfare, Gender and Children's Affairs suggested that particular attention should be paid to girls, whose academic output lags behind that of boys. The Ministry argues that it is imperative for government to take action to increase the participation of all school-age children, but especially girls. The emphasis here is on education as a vital element of women's empowerment. Educating girls to grow into useful adulthood and take an active part in planning, monitoring and executing development programs can be an important step forward. Through education, girls can acquire job skills that will empower them economically and intellectually, as well as enhance their status in society. Education is the main gateway to alleviating the poverty and suffering of girls and women in particular, and of the poor of Sierra Leone in general.
Additionally, the CEDAW has become an important instrument for women's rights mobilization and advocacy. Such mobilizations have focused on addressing socio-cultural inhibitors, strengthening women's agency, supporting recovery from the war experiences, protecting women's interests and promoting their development needs. Hence, as part of post-conflict reconstruction, not only is the CEDAW gaining a place in gender discussions, but national development efforts are also incorporating gender discussions.
More importantly, women activists are mobilizing and lobbying community leaders, government and donors for socio-economic resources for women, for social protection policies and programmes, and for human rights protection for deprived women, children and even men. One line of activism has been the promotion of affirmative action. Activists have advocated the establishment of quota and equity strategies that target women's deprivations and history of discrimination. Government is also responding in many ways, although not at an appreciable pace.
Quota and Equity Strategies
The dire situation of many poor women and girls, as well as the post-conflict development needs of the country, require that vigorous efforts and strategic steps be taken toward improving women's conditions and status and creating a gender-balanced society. While some such efforts are ongoing, it has been argued that they are too slow in addressing the dire situation of a post-conflict country. Hence, affirmative action in the form of quotas should be instituted, as has been done in countries such as Rwanda and South Africa, which have undergone similar situations. These, along with others such as Uganda and Tanzania, have had to take strategic decisions and make strategic amendments in order to lift the political participation of women as well as other disadvantaged groups.
The quota movement, as a strategy for gender equality, has been identified as capable of bringing about change in the dire situation of post-conflict Sierra Leone. The quota system is expected to go a long way toward improving not only gender balance but also the development of the society (Lovenduski, 2005). Lovenduski (2005: 83) has stated that "The creation of a gender-balanced institution elected on the basis of party list proportional representation system of election, combined with arrangements that public appointed offices are filled according to non-sexist criteria of qualifications, would be central to such systems." By extension, a quota system is important for bridging gender gaps in politics and decision-making in both the formal and traditional systems of Sierra Leone. Within the traditional setting, where for example Paramount Chiefs are largely male rulers, women should have equal rights of succession; not only equality, but the succession should alternate between the sexes. Yet, in its short life, the engendering of such established political processes and institutions has proved to be a frustrating process in some parts of the country, especially the Northern and Eastern parts. For example, in the Eastern Kono District, no woman has yet been able to rise to the position of District Officer, even though the country's Civil Service laws do not bar women. Socio-cultural and even structural barriers hold women back and keep them out of such systems.
In the last two decades, large numbers of women in many Eastern and Southern African countries have entered parliament. As of 2007, the national legislatures of countries like Namibia, Mozambique, South Africa, Rwanda, Uganda and Tanzania had women's representation ranging from 25% to 50%, placing them among the top 30 countries in the world in increasing the representation of women in politics (Bauer & Britton, 2006). Such developments are far above what is happening in other parts of sub-Saharan Africa, where women's representation hovers around 17% on average. In response, countries such as South Africa, Tanzania and Namibia have used proportional representation electoral systems and voluntary quotas to increase the percentage of women in their parliaments.
Women in southern African countries have used strategies such as support from transnational feminist networks and pressure from women's groups to effect transformations. Women leaders meet other gender activists at international conferences, where they are able to share experiences and lessons which shape national structures for women's rights and the strategies they can use to gain access to formal political office. Namibia, Uganda, Zimbabwe, South Africa and Mozambique are all countries with a past of nationalist struggles, and some have even gone through civil wars. After those struggles, the women of these countries were determined that women's issues must be paramount in decision-making and politics. Sierra Leone compares with them due to its shared history of civil war (1991-2002), although not of the same proportions. Women's activists can put pressure on the government to increase women's representation in government. Shortly after the civil war in Uganda (1986), leaders of the women's movement met President Yoweri Museveni and demanded an increase in women's representation in government (Tripp, 2006). With such an opening, their representation has grown over the years (Tripp, 2006).
In Rwanda, women's groups were instrumental in grassroots economic development and over time pushed for democratic reform and political representation. With the help of NGOs and other women's groups, such strategies could be used to push for women's representation in Sierra Leone. Women's groups in Uganda, too, have focused on common interests and minimized differences in order to rebuild their nation (Tripp, 2000: 649). With such mobilizations, Ugandan women were heard immediately. Sierra Leone can implement such strategies: political leaders would be forced to recognize the power and influence of a women's movement, thereby yielding to its demands (Tripp, 2000: 233). Uganda was able to return a woman vice president in national politics by 1996 (Tripp, 2000: 67-71). The movement was also able to secure the rapid appointment of women ministers. All these achievements rested on the success of a unified women's movement, which was able to reduce women's marginalization and to influence the country's constitutional frameworks.
Reynolds (1999) has also observed that the rules guiding elections are very instrumental in determining women's access to political office. The quota system is therefore an important means of addressing gender imbalance in national legislatures and parliaments (Caul, 2001). Implementing a quota system is far less difficult than transforming the political and socio-economic structures of a state (Norris, 1996; Jones, 1998). Sierra Leone's electoral process can adopt voluntary or mandatory quota systems. Countries which have adopted such systems often refer to the CEDAW and the 1995 UN Conference on Women in Beijing and its Platform for Action, which laid down guidelines for increasing women's representation in politics. The quota system is a worldwide trend used by nations to fast-track equal legislative representation (Bauer & Britton, 2006). Such systems could be applied in the Sierra Leonean case, as they would transform the institution of parliament and change legislative priorities in favour of the needs and interests of the nation's women.
In southern Africa, the leaders of the Southern African Development Community (SADC) signed the Declaration on Gender and Development (1997), which set a target of 30% women in decision-making positions by 2005, consistent with Millennium Development Goal Three (MDG 3). Such agreements could be signed between the government of Sierra Leone and women activist groups, providing a binding tool with which women activists can agitate and hold the government accountable. But the most effective quota system is the mandatory one implemented in Rwanda, Burundi and Uganda.
In 2005, Burundi had only 18.4% women in its national assembly (www.ipu.org). With the implementation of the mandatory quota system, it is now ranked 18th among states worldwide in women's representation in parliament (www.ipu.org). In voluntary quota systems, political parties reserve seats for women on their pre-election party lists. Women in South Africa have been able to pressure political parties to put a woman in every third position on the party list, popularly called the Zebra List. While they have not been able to attain 50% women's representation, this move has helped to increase it (Tripp, 2004). When leading parties implement the quota system, other parties are forced to follow the same method to gain support from the citizens. This is another method activists in Sierra Leone could use to increase women's representation in parliament. In Mozambique, the Front for the Liberation of Mozambique (FRELIMO) adopted a 30% quota on its pre-election party list for Congress in 1992 (Disney, 2006). The party also distributed women throughout the list instead of placing them at the bottom in unwinnable seats. Disney has observed that FRELIMO has been able to return at least 43% women as Deputies in the National Assembly (Disney, 1996). Applying such systems in Sierra Leone can not only increase women's representation in parliament but also increase the popularity of the party. Thus, the normalization of gender equality in East and Southern African countries has changed cultural and societal perceptions about the nature of political leadership and governance.
Another institution established by state governments to ensure that the basic needs of women are met is the Gender Ministry, based on the idea of a centralized model which serves as the focal point for gender legislation and policy implementation for the government. Namibia has a Ministry of Gender Equality and Child Welfare which oversees national gender policy, and a Women and Law Committee which helps to draft new laws for women. Uganda has the Ministry of Gender, Labour and Social Development, which is responsible for legal aid services and legal education, National Women's Day, capacity building for each of the other government departments, monitoring and evaluation of gender mainstreaming, the Entandikwa Credit Scheme which provides lending and credit services for impoverished Ugandans, and gender sensitization for the legal profession. These are models Sierra Leonean society could adopt.
South Africa has the National Gender Machinery (NGM), which helps to mainstream gender issues in all spheres of government. Such machineries can make sure that gender issues are not sidelined (Seidman, 1999). Sierra Leone already has a Ministry of Gender and Children's Affairs, and has also introduced the Family Support Unit in the National Police Force to check domestic crimes and violence. However, it could do better by using more of the strategies other states have used to attain gender balance in their politics. In South Africa, women MPs have been involved in legislation dealing with abortion, employment equity, skills development, domestic violence, and basic income grants and maintenance (Britton, 2006). Women in Namibia were able to mobilize to fight against the apartheid regime and its racist laws, which doubly discriminated against women. Now Namibia has legislation which works to strengthen the economic and social development of women and the girl child: equality in marriage, land rights, gender violence, domestic violence, and child maintenance (Bauer, 2006). All such moves cater for a gender-balanced and developed state.
Sierra Leone has responded by setting up similar machinery, the Ministry of Social Welfare, Gender and Children's Affairs. The Ministry works with other ministries to foster gender equality. It also leads in policy and programming development on women and children. In collaboration with the Family Support Unit of the National Police Force, the Ministry has helped to secure the welfare of street children and sexually abused girls, and has even restored the rights of women who had been abused by their spouses. It has committed itself to activities that would ensure effective partnership to eradicate gender-based violence in the nation.
Most times women have aspired to be included in national decision-making, but they are simply pushed aside like rags. Equity strategies such as the quota movement call for a society that caters for an equal number of both sexes in national elections, job appointments and promotions. There could even be a principle of rotation guaranteeing that positions like president and vice president alternate between men and women. Culture and tradition have made the society fail to realize that women are equal citizens in the nation. Politics in general has been viewed as something treacherous and dangerous; as such, integrating women into politics and decision-making, both in numbers and in ideas, is seen as unappealing. The idea that politics is male-dominated has long been established. But with modernization, the presence of women in politics and public institutions has increased all over the world. It is unfair for men to monopolize representation, especially when a country is moving toward modernity and democracy (Philips, 1995).
The quota and equity paradigm therefore favours political parties increasing the number of their women representatives. At this time, when public distaste for corruption and distrust of politicians are high, especially in Africa, supporters of the quota and equity system have argued that increased women's representation in public offices could have an extremely significant beneficial impact on a nation (Randall, 1982). Women in countries such as Sweden have come closer to political power than women in any other country. While women in some of these states have been fortunate to be integrated into politics and decision-making, in others they have been effectively shut out.
Equity advocates in most democracies have therefore generated and engaged in debates and mobilized in local, national and international social movements, through which they have been able to effect changes in processes, treaties, constitutions, formal and informal rules, and daily practices. In places like Scotland, feminists were involved at every stage of the constitution-making process (Lovenduski, 2005). For the past 25 years, feminists in Scotland have sought a "legislature in which women held 50% of the places and were well represented in its cabinet and executive" (Lovenduski, 2005). They were able to draw attention to the kind of electoral system that would most benefit women, and moreover secured agreements from Labour and Liberal Democratic Party leaders to elect women candidates. Once this was achieved, the advocates became involved in discussions about institutional design. With this quota system, the Labour Party was able to return 28 women and 28 men to the new Parliament in the first election in 1999. By 2003, through the quota system, women made up 40% of members of the Scottish Parliament and 50% of members of the Welsh National Assembly (Lovenduski, 2005). Such a system could apply in the case of Sierra Leone, and with assertive leadership it can work. Women do not need to be men to bring change to their nations. What they need to do is disrupt old alliances, challenge prevailing problems in the society, and offer practical feminized solutions to the real obstacles to their integration into the higher offices of decision-making and politics.
To increase women's political and decision-making representation in Sierra Leone and other modern democracies, the idea of justice must be invoked in the society. The key actors in such advocacy are religious houses and bodies, political parties, and the media, especially newspapers. The equity strategy advocates special training and financial assistance, and the setting of targets for women's presence in government. The government of Sierra Leone should therefore fund women's advocacy organizations, fund research on women's representation, include women's advocacy organizations in consultations at all levels, sign international treaties and protocols that call for equality of women's and men's representation, make provisions for women to be appointed to public offices, remove legal obstacles to women's representation, remove local traditions which are obstacles to special measures to promote women's equality, reserve seats for women in legislatures, and encourage and facilitate women to compete in various ways. Parties and governments must secure places for women representatives by making their sex a necessary qualification for office.
Conclusion/Recommendations
The role of women in politics and decision-making in Sierra Leone began even before colonialism, especially among the Mendes. While women from every ethnic group are represented in decision-making bodies in the nation, their proportion is very small. It would do the government of Sierra Leone, and the society as a whole, good in its post-conflict reconstruction efforts in particular to apply the quota and equity strategies as critical tools for enhancing women's participation in politics and decision-making. Women in Sierra Leone have played no mean role in the transformation from military rule to democracy, and were at the forefront in the recent national elections that resulted in the peaceful handing over of power from one civilian government to another.
Undoubtedly, the elimination of all forms of discrimination against women is the key to empowering women in Sierra Leone to participate effectively in politics and decision-making. It must also be noted that Sierra Leone has been trying to fulfil the aims of CEDAW in the areas of bodily integrity and health rights for women, family and marriage rights, literacy and education rights, economic rights, and civil and political rights. Among all these, the most important is the equal right to education and political participation, which the government should revisit, putting together effective policies and programmes, as well as legislation and regulations, to ensure that all women and girls enjoy their human rights, especially the right to self-determination and political participation. They should also be granted the opportunities and access to the much-needed socio-economic and socio-cultural resources that enhance their competitiveness and success in representative politics. Above all, affirmative action is needed to expedite progress. The numerous success stories from Africa and beyond, discussed above, should guide the government and people of Sierra Leone in making the right choices.
Specifically, the following areas of intervention are needed:
Effective educational policies, with counselling and information campaigns, should be put in place to enable women to regain control over their own lives and bodies.
There should be zero tolerance of violence against women, backed by support programs and changes to the law.
Young people, both men and women, must be consulted and integrated into the development of their communities.
The National Electoral Commission should make it mandatory for political parties to integrate gender into all areas of their electoral manifestos, setting targets for 50-50 representation.
Government, in collaboration with civil society, should undertake a public sensitization program on the electoral process to assert women's right to equal political participation and to ensure transparency and equity.
Legislation should be enacted for the adoption of quotas in electoral and other political and decision-making spheres.
Traditional beliefs and practices that are gender-discriminatory should be identified, and laws must be enacted to eliminate them.
Government must strengthen the capacity of the national gender machinery to ensure equity in responding to the needs of women and men in the society. | 2019-02-12T20:54:01.621Z | 2012-02-10T00:00:00.000 | {
"year": 2012,
"sha1": "5215659bcb80f2734089b8f072f08007cdfcfd8c",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/gjds/article/download/73607/62777",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5215659bcb80f2734089b8f072f08007cdfcfd8c",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
204754320 | pes2o/s2orc | v3-fos-license | A Review on Viral Metagenomics in Extreme Environments
Viruses are the most abundant biological entities in the biosphere, and have the ability to infect Bacteria, Archaea, and Eukaryotes. The virome is estimated to be at least ten times more abundant than the microbiome, with 10⁷ viruses per milliliter in marine waters and 10⁹ viral particles per gram in sediments or soils. Viruses represent a largely unexplored genetic diversity, having an important role in the genomic plasticity of their hosts. Moreover, they also play a significant role in the dynamics of microbial populations. In recent years, metagenomic approaches have gained increasing popularity in the study of environmental viromes, offering the possibility of extending our knowledge related to both virus diversity and their functional characterization. Extreme environments represent an interesting source of both microbiota and their viromes due to their particular physicochemical conditions, such as very high or very low temperatures and >1 atm hydrostatic pressures, among others. Despite the fact that some progress has been made in our understanding of the ecology of the microbiota in these habitats, few metagenomic studies have described the viromes present in extreme ecosystems. Thus, limited advances have been made in our understanding of virus community structure in extremophilic ecosystems, as well as in their biotechnological potential. In this review, we critically analyze recent progress in metagenomic-based approaches to explore the viromes in extreme environments and we discuss the potential for new discoveries, as well as methodological challenges and perspectives.
INTRODUCTION
Viruses are the most abundant biological entities on the planet, from the world's oceans to the most extreme environments found in the biosphere (Zhang et al., 2018; Graham et al., 2019). Historically, the study of viral communities has been carried out by co-culture of viruses and their cellular hosts (Tennant et al., 2018), and more recently by viral metagenomic-based approaches (Nooij et al., 2018; Graham et al., 2019).
The exploration of viral populations in extreme environments has uncovered considerable genetic complexity and diversity. The biological organisms that inhabit extreme environments are termed extremophiles, and are found in all three domains of life (Merino et al., 2019). Like all other organisms, extremophiles serve as hosts for viral replication (Castelán-Sánchez et al., 2019). As viruses depend on a cellular host for replication, the interactions with their hosts affect microbial diversity, population interactions and dynamics, and even the genomes of these hosts (Le Romancer et al., 2006; Zhang et al., 2018; Castelán-Sánchez et al., 2019). In extreme environments their impact extends from influencing microbial evolution to playing an indirect but significant role in the earth's biogeochemical cycles (Weitz and Wilhelm, 2012; Munson-McGee et al., 2018). However, despite their relevance, little is currently known about their ubiquity and diversity in extreme ecosystems (Paez-Espino et al., 2016; Berliner et al., 2018).
Nowadays the study of viruses can be carried out using metagenomic-based strategies that do not depend on cell culture approaches (Rose et al., 2016; Nooij et al., 2018; Zhang et al., 2018; Graham et al., 2019). Metagenomics represents a unique opportunity to describe the composition of viral communities in extreme environments, as well as to analyze viral genetic reservoirs to characterize novel proteins and bioactive compounds of potential biotechnological utility.
Metagenomic studies are providing new sequences that in many cases do not share homology with sequences deposited in the reference databases (Hayes et al., 2017; Cantalupo and Pipas, 2019; Kinsella et al., 2019). It is evident that metagenomics from extreme habitats could be a powerful method to drastically increase the number of viruses reported to date. It is surprising that, despite the recent rapid advances in high-throughput sequencing techniques, there is still quite a limited number of studies describing the viromes of extreme environments in the literature. Here, we critically analyze recent progress in metagenomic-based approaches to explore the viromes in extreme environments, as well as methodological challenges and perspectives.
PERSPECTIVES ON SAMPLING AND PROCESSING: METHODOLOGICAL CHALLENGES FOR VIRAL METAGENOMICS IN EXTREME ENVIRONMENTS
Viral metagenomic studies depend on the ability to obtain sufficient amounts of nucleic acids from complex mixtures (Roux et al., 2019), particularly in extreme environmental samples as diverse as hot springs (Schoenfeld et al., 2008; Zablocki et al., 2017b), deep seawater, marine sediments and the oceanic basement (Breitbart et al., 2004; Hurwitz et al., 2013; Nigro et al., 2017), and Antarctic and desert soils (Zablocki et al., 2014, 2017a), among others, to facilitate either the construction of metagenomic libraries or direct sequencing. The number of viral particles estimated to be present in a liter of water or a kilogram of soil is on the order of 10⁹-10¹¹, while the world's oceans are estimated to contain up to 10⁶ viral particles per ml (Hara et al., 1991; Mokili et al., 2012).
In contrast, the viral abundances in Octopus hot spring water from Yellowstone National Park and in oceanic basement samples are at the lower range of ∼10⁴ and ∼10⁵ viral particles per ml, respectively, compared with non-extreme aquatic environments (Schoenfeld et al., 2008; Nigro et al., 2017). In spite of viral particle abundance, only subnanogram amounts of viral DNA or RNA are typically recovered during purification (Van Etten et al., 2010). Considering that a phage contains ∼10⁻¹⁷ g of DNA per particle, obtaining the 1-5 µg of DNA required for standard pyrosequencing would imply that ∼3 × 10¹¹ viral particles should be recovered, and for third-generation PacBio technology 10 µg of DNA is required (Faino et al., 2015). With the development of new sequencing technologies it is likely that lower amounts of nucleic acids will be needed. For example, only 50 ng of DNA was required to sequence bacteriophages and archaeal viruses from hypersaline environments (Motlagh et al., 2017; Liu et al., 2019) and, surprisingly, only 1 ng of DNA was needed to explore the metaviromes from the deep sea and the chaotropic salt lake Salar de Uyuni using the KAPA Hyper Prep and Nextera XT kits with Illumina platforms (Hirai et al., 2017; Ramos-Barbero et al., 2019).
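These particle counts follow directly from the per-particle DNA mass; a minimal back-of-the-envelope sketch in Python, using only the figures quoted above (the yield targets are the same values cited in the text):

```python
# Back-of-the-envelope check of the yield figures quoted above, assuming
# ~1e-17 g of DNA per phage particle.
DNA_PER_PARTICLE_G = 1e-17

def particles_needed(target_mass_g: float) -> float:
    """Viral particles required to yield a given mass of DNA."""
    return target_mass_g / DNA_PER_PARTICLE_G

targets = {
    "pyrosequencing (~3 ug, midpoint of 1-5 ug)": 3e-6,
    "PacBio (10 ug)": 10e-6,
    "Illumina library prep (50 ng)": 50e-9,
    "Illumina library prep (1 ng)": 1e-9,
}
for label, grams in targets.items():
    print(f"{label}: ~{particles_needed(grams):.0e} particles")
# pyrosequencing: ~3e+11 particles, matching the estimate in the text
```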
Filtration and Concentration of Viral Particles
To overcome the problem of obtaining sufficient amounts of viral nucleic acid from extreme habitats, viral particles must be concentrated by ultracentrifugation, flocculation, or filtration while minimizing contamination from prokaryotic or eukaryotic nucleic acids (Mokili et al., 2012; Liu et al., 2019; Ramos-Barbero et al., 2019; Roux et al., 2019). Classical size-selective ultrafiltration is not widely used, as the filters often become blocked by impurities while the samples are being concentrated. Instead, Tangential Flow Filtration (TFF) and/or ultracentrifugation were preferentially used for samples from hot springs and the chaotropic salt lake Salar de Uyuni (Diemer and Stedman, 2012; Zablocki et al., 2017b; Ramos-Barbero et al., 2019). An excellent review by Lawrence and Steward (2010) on centrifugation highlights the efficiency of the methodology in sedimenting even the smallest viruses; centrifugal separations can be divided into differential pelleting and zonal separations. The former has been successfully used to remove cell debris from samples obtained from enrichment cultures of archaeal viruses from the acidic hot spring Umi Jigoku in Beppu (Japan) (Liu et al., 2019) before concentrating and purifying viral particles, and the latter has been tested using different gradient materials such as glycerol, OptiPrep and sucrose to isolate viruses from a boreal lake in Finland (Laanto et al., 2017).
Due to limitations on the volume of aquatic samples, John et al. (2011) introduced the use of FeCl₃ flocculation to concentrate viruses from seawater, which results in the recovery of 92-95% of viruses and has been used to advantage in samples from glacier waters and the deep sea (Bellas et al., 2015; Poulos et al., 2018). This compares favorably with traditional centrifugation or TFF, which result in recovery levels of 18-26% and 62-93%, respectively (Furtak et al., 2016), as evaluated by SYBR Gold staining, meaning that FeCl₃ flocculation can improve the efficiency of viral particle recovery from extreme environment samples by around 30-60%.
Filters in the range of 0.1-0.22 µm have been used to enrich samples from South African hot springs and the igneous crust of the seafloor (Nigro et al., 2017; Zablocki et al., 2017b). However, in recent years giant viruses (giruses), with particle sizes of ∼720 nm, have been discovered (Van Etten et al., 2010), leading some groups to use 0.45 µm filters that are effective in recovering these larger viral particles (Van Etten et al., 2010; Hurwitz et al., 2013; Sangwan et al., 2015). To date, knowledge about giant viruses in extreme niches remains limited. For example, a new large DNA virus, named Medusavirus, was isolated from hot spring water in Japan using a 1.2 µm filter (Yoshikawa et al., 2019). In addition, 64 members of the Mimiviridae family were recently identified in Antarctic marine water (Andrade et al., 2018). Thus, selective filtration strategies should be considered to recover extreme giant viruses.
Despite the variety of approaches used to enrich viral particles from extreme environments, systematic studies comparing the different concentration methods (TFF, FeCl₃, PEG, commercial concentrators) are still lacking, and the methods employed for viral enrichment will likely need to be adapted to the nature of the sample.
Nuclease Treatment, Concentration and Viral Nucleic Acid Purification
Viral samples are usually treated with DNase I to avoid contamination with cellular genomic DNA, which would, following sequence-based analysis, result in a large number of spurious DNA sequences from sources other than the virome. This treatment was used to obtain viral metagenomic DNA from Boiling Springs Lake and the Great Salt Lake (thermophilic and hypersaline ecosystems, respectively, both in the United States) (Diemer and Stedman, 2012; Motlagh et al., 2017). However, there are examples, such as deep-sea ocean viral metagenomes (Hurwitz et al., 2013), desert perennial ponds (Fancello et al., 2013) and hot springs (Zablocki et al., 2017b), among others, where despite the use of DNase treatment prior to viral genome purification it was not possible to completely eliminate cellular genomic DNA.
After DNase treatment, concentration steps are recommended, using CsCl, sucrose or Cs₂SO₄ gradients in ultracentrifugation (Fancello et al., 2013; Bellas et al., 2015). Additionally, once the viral particles have been concentrated and purified, the capsids have to be broken open to release the viral genomes. The classical method is the use of formamide (Breitbart et al., 2002; Thurber et al., 2009; Fancello et al., 2013) followed by phenol:chloroform:isoamyl alcohol extraction (Diemer and Stedman, 2012; Nigro et al., 2017) or, alternatively, thermal shock (Bellas et al., 2015; Roux et al., 2016; Motlagh et al., 2017). However, in samples from hypersaline ponds thermal shock may not be fully efficient at releasing DNA from enveloped viruses, which may be the reason why in some studies the majority of viral DNA has been recovered from non-enveloped tailed viruses (Roux et al., 2016). Some single-stranded DNA (ssDNA) extreme viruses (e.g., HaloRubrum Pleomorphic ssDNA Virus 1, Haloarcula Hispanica Pleomorphic Virus 3, Aeropyrum Coil-shaped Virus) infecting hyperhalophilic or hyperthermophilic archaeal hosts present a lipid envelope and multiprotein complexes, or two criss-crossed halves of a circular nucleoprotein (Pietilä et al., 2010; Mochizuki et al., 2012; Demina et al., 2016), features that could confer resistance to capsid disassembly. Thus, the capsid composition of extremophile viruses is a relevant consideration when attempting to access the genetic material of unknown viruses, and consequently limits the identification of unusual extreme morphotypes.
Retrotranscription or Amplification Steps
The identification of viral genomes to date has mainly focused on ssDNA and double-stranded DNA (dsDNA) viruses, and only small RNA genomes of 5-10 kb have been assembled from extreme metaviromes. For example, RNA viruses infecting archaea were discovered in an acidic hot spring in Yellowstone (United States) (Bolduc et al., 2012; Wang et al., 2015), as well as in alkaline hot springs (Schoenfeld et al., 2008). In addition, RNA cyanophages have recently been reported from the Porcelana hot spring in Chilean Patagonia (Guajardo-Leiva et al., 2018). Andrews-Pfannkoch et al. (2010) implemented the use of hydroxyapatite chromatography to efficiently fractionate the dsDNA, ssDNA, dsRNA, and ssRNA genomes of known bacteriophages from marine environmental samples. This methodology has been employed to study ssDNA viruses from deep-sea sediments, alkaline siliceous hot springs and the Arctic shelf seafloor (Yoshida et al., 2013; Nguyen and Landfald, 2015), but to our knowledge it has not been applied to the study of RNA viruses from extreme ecosystems. When working with RNA viruses, a retro-transcription step is required prior to library preparation, and if the efficiency of nucleic acid recovery is low, amplification strategies are required. Among these, phi29 polymerase-based multiple displacement amplification and random PCR using modified versions of Sequence Independent Single-Primer Amplification (SISPA) have been useful for virome amplification in samples from hot acidic lakes, hot springs and polar aquatic environments (Diemer and Stedman, 2012; Mead et al., 2017; Yau and Seth-Pasricha, 2019). When the viral genetic material is RNA, a Random-Priming SISPA (RP-SISPA) method is frequently used (Miranda et al., 2016). This approach was successfully applied to the isolation of RNA viruses from seawater (Steward et al., 2013) and Antarctic virioplankton (Miranda et al., 2016).
Other strategies for viral amplification are also used when extremophile metaviromes are studied. While the Linker Amplified Shotgun Library (LASL) methodology is suggested for amplifying dsDNA, Multiple Displacement Amplification (MDA) is employed to preferentially enrich ssDNA (Fancello et al., 2013). LASL has been performed to analyze viral metagenomes from Yellowstone hot springs and Antarctic virioplankton (Miranda et al., 2016), while MDA has been used to describe the virome present in deep-sea samples from Antarctica (Gong et al., 2018). Zablocki et al. (2016) have argued that although viral amplification is commonly used in metavirome studies, especially for samples collected from extreme habitats such as hyperarid desert soils, this step should be avoided because it prevents the determination of viral particle abundance and diversity, and may promote biased amplification of certain virus groups.
Thus, it is clear that further comparative methodological studies using samples from extreme environments are required to evaluate whether purification, concentration and amplification methods have any impact on the virome structure obtained from metagenome analysis.
DATABASE AND BIOINFORMATIC ANALYSIS: GENERAL REMARKS
Up to now, viral sequence searches have been conducted essentially against the NCBI databases GenBank and RefSeq, according to their viral sequence classification criteria. The RefSeq database excludes some categories of data, such as those that incorporate too much information to be processed readily, for example metagenomes, or genomes that have significant mismatch or indel variation compared to other closely related genomes. In addition, not all sequences have a taxonomic classification in the International Committee on Taxonomy of Viruses (ICTV) (O'Leary et al., 2015). The number of viral sequences reported in GenBank reached almost two million by December 2018, of which only 3,279 were registered as genomes in RefSeq, and of these only 1,800 have a classification at the species level in the ICTV (Kang and Kim, 2018). The classification of viruses in the ICTV has been based on characteristics that can be used to distinguish one virus from another, such as genome composition, capsid structure, the gene expression program during viral replication, host range and pathogenicity, among others. Comparisons of both pairwise sequence similarity and phylogenetic relationships have become the primary guidelines used to define virus taxa (Simmonds, 2015). However, without the incorporation of metagenomic data in both the RefSeq and ICTV databases, the comparison of sequences and their allocation is limited (Simmonds et al., 2017). Alternative viral sequence similarity search strategies, such as VirSorter (Roux et al., 2015) and VirFinder (Ren et al., 2017), have been developed. The former is designed to search protein-coding genes and the latter works with k-mer composition, both attempting to identify viral sequences in prokaryotic genomes. Integration of such strategies should reduce the number of unidentified sequences, and the comparison of viromes should then help to formulate more robust theories about their biological roles within a given community, thereby increasing the possibility of gaining a fuller understanding of the viromes in any environment (Simmonds et al., 2017).
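To illustrate the k-mer composition signal that tools like VirFinder exploit, the sketch below summarizes a contig as a normalized tetranucleotide frequency vector; this is a simplified stand-in, not VirFinder's actual code, and the example sequence is invented:

```python
# Illustrative k-mer profiling: each contig becomes a normalized k-mer
# frequency vector that a classifier could use to separate viral from
# cellular sequences (the kind of signal VirFinder builds on).
from collections import Counter
from itertools import product

def kmer_profile(seq: str, k: int = 4) -> dict:
    """Return a normalized k-mer frequency vector over all 4^k DNA k-mers."""
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    return {"".join(kmer): counts.get("".join(kmer), 0) / total
            for kmer in product("ACGT", repeat=k)}

profile = kmer_profile("ATGCGTACGTTAGCATGCGTAGCTAGCTAAC")
print(len(profile))                      # 256 tetranucleotide features
print(round(sum(profile.values()), 3))   # 1.0: frequencies are normalized
```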
Virus databases have been developed, such as Virome, with 73 virome or metagenome projects currently containing data from 270 libraries (Wommack et al., 2012); EBI metagenomics, a virome dataset with pipelines for the analysis of metagenomes (Hunter et al., 2014); Metavir2, a web server for the analysis of environmental viromes (Roux et al., 2014); and, more recently, IMG/VR v.2.0 (Paez-Espino et al., 2018), which includes >600 extreme environmental metaviromes; the Gut Virome Database (GVD), with 648 viral or microbial metagenomes (Gregory et al., 2019); and iVirus (based on vConTACT as the main classification tool), which contains a dataset from 1,866 samples and 73 ocean expedition projects (Figure 1). Some of them include viral sequences obtained through strategies such as the construction of fosmid libraries (Mizuno et al., 2013), the cellular fraction of metagenomes (López-Pérez et al., 2017) or single-virus genomics (Martinez-Hernandez et al., 2017), which further enrich the virome sequences. However, none of the above databases is particularly dedicated to viromes from extreme environments. Despite the limitations described above, a comparative analysis of the population structure of viromes in extreme environments was carried out here, using publicly accessible virus metagenomic libraries, as an attempt to exemplify the results that may be obtained using available tools and information. Data deposited in MetaVir2 up to 2016 were selected because of its user-friendly interface, which allows access to raw data or contigs of metagenome samples with well-classified metadata. We selected the 17 studies in the MetaVir2 database that contain 66 viral metagenomes collected from the most representative extreme environments: deep sea (24), oxygen minimum zones (OMZ) (4), arid habitats (9), saline niches (23), cold environments (3) and hyperthermophilic regions (3). The bioinformatic pipeline used was common to all data, so the comparison between environments relied on the same criteria. MetaVir2 follows two strategies to search the contigs in each sample: a BLAST search against the RefSeq Virus database with best-hit selection, and a k-mer composition search using di-, tri- or tetranucleotide comparisons (Willner et al., 2009).
Comparison of Viromes Between Extreme Habitats
The relative abundances from these data were analyzed by comparing the similarities between environments. Their metadata are summarized in Supplementary Table S1.
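For readers wishing to reproduce this kind of comparison, the normalization step is a per-sample relative abundance computed from a counts matrix (virus family × sample); a minimal sketch in Python (the figures below were built with R, and the counts here are invented placeholders):

```python
# Sketch of the normalization behind the family-level comparisons:
# convert a counts matrix (family x sample) to per-sample relative abundance.
import pandas as pd

counts = pd.DataFrame(
    {"deep_sea": [120, 80, 10], "hot_spring": [40, 5, 30]},  # invented counts
    index=["Siphoviridae", "Myoviridae", "Circoviridae"],
)
relative = counts / counts.sum(axis=0)  # each sample (column) now sums to 1
print(relative.round(3))
```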
The structure of the viral populations in all the metagenomes analyzed by MetaVir2 was compared (Figures 2, 3). The 10 most abundant families are represented in Figure 2 and the rest in Figure 3, to visualize the differences in the abundance of each family. Some families of the order Caudovirales are ubiquitous, and the most abundant were the Siphoviridae, Myoviridae, and Podoviridae, as expected, since the viruses belonging to these families infect a wide range of bacterial hosts from more than 140 prokaryotic genera (Konstantinidis et al., 2009; Hurwitz et al., 2013; Danovaro et al., 2016; Graham et al., 2019). It has been considered that information from whole-metagenome analysis can give clues to potential model microorganisms hosting virus replication, through analysis of the Clusters of Regularly Interspaced Short Palindromic Repeats (CRISPR) loci in the cellular fraction of metagenomes isolated from extreme environments (Gudbergsdóttir et al., 2016; Sharma et al., 2018; Liu et al., 2019; Martin-Cuadrado et al., 2019).
FIGURE 1 | Global distribution of metagenomic studies from extreme environments in public databases. Circles represent metagenomes deposited in the Metavir database, stars in IMG/VR, triangles in Virome and squares in NCBI.
FIGURE 2 | Comparison of the 10 most abundant virus families according to the Metavir database. The taxonomic composition is expressed as relative abundance at the virus family level. The families Siphoviridae, Podoviridae, and Myoviridae are ubiquitous in extreme environments. The figure was constructed from an abundance matrix, using the number of sequences reported in the Metavir database, from which the relative abundances were obtained using R.
FIGURE 3 | Comparison of families with lower abundance in the Metavir database. Unclassified phages predominate in all environments, compared with other virus families. However, greater diversity is observed in the sediments and in hyperthermophilic and hypersaline environments compared to deep waters, oxygen minimum zones (OMZ) or saline environments.
Next in relative abundance were the Circoviridae, followed by the Phycodnaviridae, Microviridae, and Inoviridae, together with high fractions of unidentified ssDNA and dsDNA viruses and phages. Circoviridae infect vertebrates and were particularly abundant in sediment samples (Dennis et al., 2018, 2019; Blanc-Mathieu et al., 2019). Members of the Phycodnaviridae have been found at high levels in deep water samples (Mizuno et al., 2016; Gong et al., 2018; Blanc-Mathieu et al., 2019), which is curious given that they preferentially infect eukaryotic algae, which require light to grow. It is therefore possible that these dsDNA viruses may infect other, as yet unknown, marine hosts in deep waters (Van Etten et al., 2010; Blanc-Mathieu et al., 2019). However, the predominance of the above families presents obvious exceptions, such as in two samples from cold environments, a sample from a saline environment and most of the samples from deep marine sediments.
Within the metagenomes that correspond to deep-sea environments (depths greater than 1,000 m), where absence of light, oligotrophic conditions, low oxygen concentrations, low temperatures and high hydrostatic pressure dominate (Le Romancer et al., 2006; Liang et al., 2019), two categories were considered based on the origin of the samples: sediments and deep water (Figure 2). It is clear that in metagenomes from sediments of the Atlantic, the Arctic and the Pacific Northwest, ssDNA viruses like Circoviridae, Microviridae and Inoviridae are more abundant than dsDNA viruses (Figure 2). This characteristic seems to be exclusive to samples from this environment. It should be noted that the two samples from cold environments show a composition similar to that of the sediments, and all the others include ssDNA viruses in low abundance (Figure 2). This is in agreement with a recent report by Yoshida and coworkers, who found that ssDNA viruses predominate in marine sediments, with an estimated abundance of 1 × 10⁸ to 3 × 10⁹ genome copies per cm³ of sediment, clearly more abundant than dsDNA viruses, which range from 3 × 10⁶ to 5 × 10⁶ genome copies per cm³ (Yoshida et al., 2018).
In Figure 3, where the remaining viral families are shown, two general points can be highlighted: Mimiviridae are present in almost all environments, which is not surprising since some of their hosts are known polyextremophiles (Claverie et al., 2018;Yau and Seth-Pasricha, 2019). The second point is the abundance of unclassified sequences, which do not allow any conclusion to be made about the diversity observed by environment since these sequences could come from one or more than one family. A large part of the sequences obtained from different environments, except for sediments, have no similarity in the databases, an issue that should change with the inclusion of additional metagenomic-derived sequences in databases (Figure 3). Overall analysis of the composition of viral families present in each extreme environment could at the very least allow a description of the families that are shared or that are exclusive to each environment. Some environments are characterized by low-oxygen concentrations; these include those with high concentrations of greenhouse gases, which directly affect the biodiversity in those environments (Kiehl and Shields, 2005;Resplandy et al., 2018). There are three central oceanic regions which are considered to be Oxygen Minimum Zones (OMZ), namely the Eastern Tropical North Pacific (ETNP), the Eastern Tropical South Pacific (ETSP) and the Arabian Sea, within which the activity of anaerobic microorganisms is highly significant (Paulmier and Ruiz-Pino, 2009;Thamdrup, 2012). As expected, the viral population diversity closely reflects the microbial diversity in these environments (Cassman et al., 2012;Parvathi et al., 2018;Fuchsman et al., 2019), with the virome composition in OMZ being commonly composed of the Myoviridae and Siphoviridae families, followed by Phycodnaviridae (Figure 2).
OMZ were sampled at 200 m depth in Chile and Canada, and virus composition was analyzed using the MDA (Genomiphi and GenomePlex) protocols. While the ssDNA Circoviridae family was predominantly observed in samples from Chile, this virus family was not observed in samples from the Canadian OMZ. In addition, in the samples from Canada (Chow et al., 2015), Parvoviridae (ssDNA) were highly abundant but totally absent in samples from Chile (Figure 3). In previous studies it was observed that the viral community along vertical dissolved-oxygen gradients is characterized by fluctuations in taxon abundance and diversity. These differences could be related to changes in the viral replication strategy from lytic to lysogenic; reduced oxygen concentrations appear to coincide with a decrease in viral abundance (Cassman et al., 2012; Parvathi et al., 2018). It should be noted that a large proportion of the sequences obtained from these regions find no similarity with other viruses in the databases, but those sequences could be from viruses that infect little-known prokaryotic hosts, such as ammonia-oxidizing archaea and anaerobic ammonia-oxidizing (anammox) bacteria, which predominate in this environment (Parvathi et al., 2018).
Hyperarid environments exhibit conditions that are considered limiting for life, such as lack of water, high levels of UV radiation, and extreme temperatures. However, both prokaryotic and eukaryotic organisms have adapted to live in these environments (Merino et al., 2019). Although low diversity might be expected in these environments, metagenomic studies performed with hypolithic communities have shown this not to be the case, with a high level of diversity being reported, particularly in bacterial communities from Antarctica (cold desert) and Namibia (desert), which are mainly Actinobacteria, Proteobacteria, and Cyanobacteria (Vikram et al., 2016). In the hypolithic viral communities from the Namibian desert and the Antarctic, metagenomic data have revealed the presence of Caudovirales that do not correlate with phages infecting Cyanobacteria species (Adriaenssens et al., 2015). The samples from the Antarctic hyperarid region displayed a greater diversity of unique viruses, such as Bicaudaviridae, Asfarviridae, Lavidaviridae, Tectiviridae, and Sphaerolipoviridae, when compared with the families found in the Namibian desert (Figure 3). Zablocki and coworkers previously reported a higher viral diversity in the Antarctic when compared with the Namibian desert, and it has been observed that Antarctic desert soils contain higher proportions of free extracellular virus-like particles than hot hyperarid desert soils, where a lysogenic lifestyle seems to prevail (Zablocki et al., 2016).
In Figure 3 the variability in the composition of viral families in hypersaline habitats is evident. Such environments are widely distributed throughout the world and are present in salt lakes, salt flats, and salt deposits. In these environments, the low water activity directly affects the composition of the microbial communities (Le Romancer et al., 2006; Ma et al., 2010; Merino et al., 2019). The viruses identified in these ecosystems are haloviruses, and a large number of these infect Archaea, Bacteria, and Eukaryotes (Atanasova et al., 2018; Plominsky et al., 2018; Ramos-Barbero et al., 2019). About 64 archaeal viruses have been isolated from the two kingdoms Crenarchaeota and Euryarchaeota (Porter et al., 2007). These samples also show the greatest abundance of unassigned or unclassified viruses, which prevents determination of the real diversity of this group of archaeal viruses, probably because they are the least studied and have low representation in the databases (Atanasova et al., 2018; Ramos-Barbero et al., 2019).
In addition, unclassified dsDNA viruses have been observed (Figure 3), while haloviruses such as HGV-1, HTVAV-4, and HSTV-1 have been identified at high levels. On the other hand, ssDNA viruses, which mostly infect eukaryotes such as colpodellids, nematodes, arthropods, and chlorophytes, among others, are present at low levels in hypersaline habitats (Feazel et al., 2008; Heidelberg et al., 2013).
Thermophile environments are characterized by high temperatures: thermophilic microorganisms grow optimally at 65-80 °C, and hyperthermophiles at >80 °C (Merino et al., 2019). Viruses that infect bacteria and archaea are abundant in these hyperthermophilic habitats (Schoenfeld et al., 2008; Strazzulli et al., 2017; Liu et al., 2019). The virome of hyperthermophile environments is composed of viruses that infect all three domains of life, with members of the Turriviridae, Fuselloviridae, Bicaudaviridae, and Globuloviridae families infecting Archaea (Krupovic et al., 2018). Moreover, the Nudiviridae, Phycodnaviridae, and Poxviridae families that infect eukaryotes are also present (Figure 3).
Within the hyperthermophile metagenomes analyzed here, the presence of ssDNA or RNA viruses was not observed, but in other studies of these environments the presence of picornavirus-like, alphavirus-like, and flavivirus-like RNA viruses has been reported (Bolduc et al., 2012). It is possible that ssDNA and RNA viruses were not detected in the samples we analyzed due to differences in sample processing (Figures 2, 3). Thus, as previously mentioned, if comparative viral metagenomic studies are to be undertaken to allow an accurate comparison between viromes from different ecosystems and to potentially identify novel viral clusters, then standardized methodologies will need to be developed and employed.
The polar regions of the Earth are dominated by the polar ice caps, and the microbial diversity present in these regions is much higher than might be expected. It is well established that viruses play an important role in controlling microbial mortality in these habitats (López-Bueno et al., 2009; Cárcer et al., 2015; Yau and Seth-Pasricha, 2019). While it has been reported that different lakes located in the Arctic and Antarctic share similar virome compositions, marked differences have also been found.
Although at this taxonomic level it is possible to differentiate some of the particularities described above in terms of the virus composition in each environment studied, very little information is revealed at the genus or species level that would allow a better understanding of the virus-host relationship and its influence in the environment. Therefore, two environments, OMZ and deep-sediments, which at the family level have a very similar structure (Figures 2, 3), were selected in an attempt to determine whether biologically meaningful information on the differences or similarities in virus-host interactions can be obtained at the genus level. The genus composition of two well-known families was analyzed: Podoviridae, which infect bacteria and are ubiquitous even in extreme environments (Figure 4), and Poxviridae (Figure 5), whose known hosts are terrestrial vertebrates and invertebrates and whose presence in extreme environments, particularly aquatic ones, has not been reported. At this level, only some viruses could be taxonomically identified and seen to vary in abundance; most of the genera remained unidentified. The Podoviridae genera identified by sequence were Enterobacter phages, which infect bacteria that are known human pathogens and would not be expected in these environments (Figure 4). In the case of Poxviridae, most of the genera found infect terrestrial vertebrates, suggesting either that the Poxviridae sequences obtained from aquatic niches are sufficiently similar to those of Poxviridae members that infect terrestrial hosts, or that this is an artifact caused by the scarcity of Poxviridae sequences in the databases (Figure 5).
FIGURE 4 | Composition of the Podoviridae family at genus level. Metagenomes from OMZ and deep-sediments were considered in the analysis.
Clustering by Environmental Virome
Hierarchical clustering analysis was performed on the abundant viral families reported in the aforementioned metagenomic datasets (Figure 6). From this it was possible to conclude that some extreme environments form groups that indicate similarities in their viral communities. This was particularly evident for some viromes obtained from hypersaline, deep-sea, and hyperarid environments, while it was less evident in other extreme ecosystems that did not appear to cluster, such as cold environments. However, this analysis again shows that some viral families are ubiquitous in all extreme environments, while ssDNA viruses appear to predominate in sediments from deep-sea and cold environments.
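A minimal R sketch of such family-level clustering is shown below. The synthetic abundance matrix, the Bray-Curtis distance, and the average-linkage method are illustrative assumptions, since the original analysis does not state which distance and linkage were used.

```r
# Minimal sketch of hierarchical clustering of viromes by viral-family
# composition (cf. Figure 6). All values are synthetic placeholders.
library(vegan)  # install.packages("vegan") if needed

set.seed(1)
abund <- matrix(rpois(60, lambda = 20), nrow = 10,
                dimnames = list(paste0("metaG_", 1:10),
                                c("Siphoviridae", "Myoviridae", "Podoviridae",
                                  "Circoviridae", "Microviridae", "Inoviridae")))

rel <- abund / rowSums(abund)        # relative abundance per metagenome (rows)
d   <- vegdist(rel, method = "bray") # Bray-Curtis distances between metagenomes
hc  <- hclust(d, method = "average") # UPGMA-style linkage
plot(hc, main = "Viromes clustered by family composition")
```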
In general, the virome structure from hypersaline samples reveals low levels of diversity, even in samples from different geographical areas (Figure 6). The high concentration of NaCl might limit viral diversity due to the shortage of prokaryotic hosts, since Haloquadratum walsbyi, Salinibacter ruber, and nanohaloarchaea are the predominant organisms in these environments, with more than 90% of the contigs annotated to these taxa (Ventosa et al., 2015). Another factor that could determine the virome diversity observed in hypersaline environments is the dynamic switch between lytic and lysogenic replication cycles, since this represents a significant adaptation mechanism in environments with high salinity (Roux et al., 2016). This notwithstanding, from Figure 6 it is clear that the virus family composition is quite similar in these environments, which could provide significant information related not only to viral evolution but also to the physiological adaptation of microorganisms in response to high temperatures (Schoenfeld et al., 2008; Biddle et al., 2011). Figure 7 shows the degree of overall similarity between the viral metagenomes in relation to the extreme environment from which the viromes were isolated. As previously described, some viral families belonging to the Caudovirales order are ubiquitous and display polyextremophilic adaptation. The hypersaline environments present consistent clustering in terms of both viral diversity and relative viral family abundance, which suggests that NaCl-enriched environments impose strong constraints on the development of life that may restrict ecosystem diversity. Some viromes from hyperarid, deep-sea, and saline environments are closely clustered (Figure 7), suggesting that the organisms, and therefore the viral composition, are partially shared, at least between these environments. Regarding deep-sea environments, Figure 7 shows two clustered metagenome populations derived from the deep sea, where the viromes obtained from deep water are closely clustered, as are those from sediments.
FIGURE 7 | Multidimensional scaling (NMDS) to visualize the degree of overall similarity between the metagenomes. Metagenomes from different habitats are shown: deep-sea (pink), OMZ (green), hyperarid (red), hypersaline (blue), psychrophile (orange), hyperthermophile (yellow). Hypersaline metagenomes show strong clustering compared with other metagenomes. The figure was constructed in R from an abundance matrix, to which a multidimensional scaling algorithm was applied to visualize the overall similarity of the metagenomes.
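The ordination in Figure 7 can be sketched in R with the vegan package as below; the caption only states that a multidimensional scaling algorithm was applied in R, so the Bray-Curtis distance, the synthetic abundance matrix, and the habitat labels here are all illustrative assumptions.

```r
# Minimal NMDS sketch (cf. Figure 7); all input values are synthetic.
library(vegan)

set.seed(1)
abund <- matrix(rpois(60, lambda = 20), nrow = 10,
                dimnames = list(paste0("metaG_", 1:10),
                                c("Siphoviridae", "Myoviridae", "Podoviridae",
                                  "Circoviridae", "Microviridae", "Inoviridae")))

nmds <- metaMDS(abund, distance = "bray", k = 2, trymax = 50)

# Hypothetical habitat labels, used only to color the points
habitat <- factor(rep(c("deep-sea", "OMZ", "hyperarid",
                        "hypersaline", "psychrophile"), each = 2))
plot(nmds$points, col = as.integer(habitat), pch = 19,
     xlab = "NMDS1", ylab = "NMDS2")
legend("topright", legend = levels(habitat),
       col = seq_along(levels(habitat)), pch = 19)
```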
An interesting hypothesis to be investigated using metagenomic studies conducted in different geographical areas is the possibility of identifying specific viral clusters associated with a particular extreme environment. The large number of unclassified sequences in the databases is an important issue to consider in studies of viromes from extreme environments. The limitations of bioinformatic pipelines in assigning a taxonomic identity to the majority of viral sequences, together with our limited understanding of viruses in extreme environments, have resulted in a lack of progress in our knowledge of extremophilic viromes. This has also negatively impacted our understanding of evolution, horizontal gene transfer, ecology, and virus-host interactions.
To meaningfully compare viromes from different environments, it is necessary to at least partially answer the questions above and provide new information about how viromes are potentially shaped by extreme physicochemical characteristics, geographical area, or technical factors such as sampling methods, enrichment techniques, and other methodological biases. It should then be possible to determine whether some viral populations are closely related to a specific type of extreme ecosystem and consequently obtain more information about viral evolution (Simmonds, 2015).
Viral-host systems (in vitro screening), sequence-based screening, activity-based screening (heterologous expression of viral proteins), and PCR- and hybridization-based screening could be implemented for functional analysis of extreme viromes (Moser et al., 2012; Bzhalava and Dillner, 2013; Fancello et al., 2013; Heller et al., 2019). While sequence-based screenings have been responsible for the discovery of the majority of new viral enzymes (at least as annotated proteins) from extreme environments, PCR- and hybridization-driven methods have not been employed to date to functionally explore extremophile viromes.
In vitro screening from extremophile viromes is a challenge with respect to the co-cultivation of both hosts and viruses, and in particular in trying to mimic the conditions which are present in these habitats, thereby ensuring a better success rate regarding viral replication and viral protein expression. Thus, to help overcome this bottleneck, it will be important to develop new host systems (for prokaryotic and eukaryotic viruses) which grow under extreme pH, temperature, salinity, pressure and radiation (Schoenfeld et al., 2009).
Activity-based screening could be a very useful approach to identify novel enzymes. This method demands an efficient heterologous expression system for viral proteins. However, there are problems with the expression levels of many viral proteins in foreign host systems, particularly for genes isolated from extremophile viromes, which are dominated by rare genes; issues such as codon usage, together with promoter regulation/activation, negatively impact enzyme production in different heterologous systems (Kristensen et al., 2012).
While the well-established Escherichia coli heterologous expression system is available, it is clear that additional systems, with a particular focus on extremophile bacteria and fungi, will need to be developed to increase the chances of producing sufficient levels of viral extremoenzymes to allow their detection in function-based screens. These screens usually employ activity-based assays that involve colorimetric changes, typically following utilization of a substrate. However, these types of screenings are not particularly useful with viromes, since viral genes encoding enzymes involved in the metabolism of different substrates are quite rare. For this reason, there are no reports to date of the detection of viral enzymes from viromes through activity-based screening.
Despite the aforementioned disadvantages related to the heterologous expression of viral proteins, activity-driven screenings allow functional gene annotation through an in vitro phenotype-based test. This has an important advantage over sequencing-driven screening, where a high number of genes are annotated as "unknown function," because the gene repertoires of extremophile viromes are currently undersampled.
A number of viral enzymes with utility in scientific, diagnostic, and therapeutic applications have been identified using sequencing-based screens of extremophile viromes (Schoenfeld et al., 2008, 2009; Moser et al., 2012; Dwivedi et al., 2013; Schmidt et al., 2014; Mead et al., 2017). Also, the genomes of extremophile viruses are likely to be a source of novel antimicrobial peptides that may have applications in the biopharmaceutical and molecular diagnostics areas (Rice et al., 2001; Le Romancer et al., 2006; Schoenfeld et al., 2009).
For example, a sequencing-driven metagenomic study of two mildly alkaline hot springs in Yellowstone allowed identification of 532 lysin-like genes (Schoenfeld et al., 2008). In recent years, these lytic enzymes have gained increasing importance due to their potential use in biomedical science applications (Schmitz et al., 2010); however, no lysin-like genes from extreme environments have to date been experimentally characterized.
DNA polymerases (502 sequences) have also been detected in extreme metaviromes, particularly from hypoxic estuarine waters obtained in the Gulf of Maine, Dry Tortugas National Park, and the Chesapeake Bay (Andrews-Pfannkoch et al., 2010; Schmidt et al., 2014). These shotgun metagenomic studies revealed a novel DNA polymerase A family in marine virioplankton, since some sequences grouped distantly in a phylogeny comprising DNA polymerase A sequences from viruses and bacteria (Schmidt et al., 2014).
Ribonucleotide reductases (RNR) have also been found in viromes obtained from hypersaline, psychrophile, and thermophile niches (Dwivedi et al., 2013). For example, a bioinformatic analysis demonstrated that viruses isolated from hot springs contained a high abundance of RNR genes. However, some habitats, such as hydrothermal vents from the East Pacific Rise, a solar saltern pond, and salterns from Alicante (Spain), were found to have fewer (≤5) identifiable viral RNR homologs (Dwivedi et al., 2013).
Recent efforts to characterize new viral DNA polymerases from extreme environments have resulted in the identification of a thermostable polymerase in a viral metagenomic DNA library from a near-boiling thermal pool in a hot spring in Yellowstone (Moser et al., 2012; Heller et al., 2019). This was the first report describing the isolation of a polymerase from a viral metagenomic library. In this study, 59 complete polymerase clones were identified as possessing thermostable DNA polymerase activity following a functional screen. One of these polymerases, namely PyroPhage 3173 Pol, also has 5′-3′ exonuclease activity, as well as an innate reverse transcriptase activity. It was subsequently tested in high-fidelity reverse transcription PCR (RT-PCR) reactions and compared with some commercially available enzyme systems (Moser et al., 2012; Heller et al., 2019). The PyroPhage 3173 Pol-based RT-PCR enzyme was found to have a higher specificity and sensitivity than the other enzymes. While the PyroPhage 3173 DNA polymerase shares amino acid identity (∼32%) with a bacterial polymerase, no significant similarity was found with other viral proteins (Moser et al., 2012). This highlights the potential diversity of enzymes that may be present in extremophile viromes. The enzyme has subsequently been characterized and shown to be effective in the molecular detection of certain viral and bacterial pathogens by loop-mediated isothermal amplification (Chander et al., 2014).
STRUCTURAL BIOLOGY OF VIROMES
Specific molecular-level adaptations to extreme environments can only be appreciated once the detailed molecular structures are known. In order to explore the available structural information on proteins belonging to extremophile viruses, we carried out a manual search in the Protein Data Bank (Berman et al., 2000) for all the viral families and genera identified in the metagenomes analyzed in this review, and kept only those that were either marine viruses or viruses whose hosts are bona fide extremophiles. The resulting proteins were then classified according to their annotated function, and are discussed below.
In general, all these structures are valuable from a biochemical and biotechnological perspective, as they contain the molecular representation of the adaptation required for the particular extreme environment favored by the virus host. For example, viruses that infect Acidianus or Sulfolobus archaea are subject to the combination of high temperature and acidic pH; their proteins tend to have many charged residues, in particular acidic ones (see structure 3DF6, an orphan protein). They also tend to have compact folds with structured termini, short loops with prolines in specific positions to stabilize them (see structure 2BBD, a major capsid protein), and an absence of cavities. Despite being DNA-binding proteins, and therefore cytoplasmic, some of them also include disulfide bridges, intramolecular (see structure 2VQC) or intermolecular (in structure 2CO5), that can impart up to 14 °C of additional thermal stability to the protein. The formation of these disulfide bonds requires the existence of a sulfhydryl oxidase, encoded either by the host or by the virus itself. When a mesophilic homolog exists, a direct comparison of the structural features of the proteins can guide protein engineering to improve stability and/or function. Also, as some viruses have space limitations in their capsids, resulting in compact genomes, viral homologs in these cases tend to be the minimal possible version of the protein family, allowing identification of the critical residues that stabilize both structure and function. A nice example of this is the minimal catalytic integrase domain of Sulfolobus spindle-shaped virus 1 (structures 3VCF, 4DKS, and 3UXU). On the other hand, for viruses with fewer space limitations, viral proteins can have surprising combinations of domains, suggesting ways to engineer multidomain proteins. This is particularly notable in Mimivirus, where identifiable catalytic domains can be linked to domains with no sequence or structure homology to any known protein, as in the sulfhydryl oxidase in structure 3TD7.
Less dramatic examples are basic modules known to function as transcription factors, such as the ribbon-helix-helix domain, with an extra helix added as an embellishment that increases thermal stability, as in structure 4AAI from Sulfolobus virus Ragged Hills.
The analysis of the conservation of proteins amongst viruses of the same or different classes is instructive, and can help in establishing families and/or events of horizontal gene transfer. This conservation has been historically one of the criteria used to choose which proteins to study structurally from a particular virus. The wealth of information derived from identifying open reading frames in the data from sequencing endeavors can certainly be a source of novel activities, as described in the previous section. This functional annotation requires sequence homology to known proteins, something that does not happen frequently with extremophile viruses. As structure diverges more slowly than sequence, protein structural analysis allows for the inference of function when sequence homology is weak. In Supplementary Table S2 we list viral protein structures that were obtained in this spirit, sometimes as part of Structural Genomics Initiatives (Oke et al., 2010). As can be seen from Supplementary Table S2, the goal of assigning function is not always achieved, as on occasion novel folds are found (see, for example, structures 4ART and 3DF6 discussed above), precluding the transfer of function. In other, happier cases, the structure instructs the experiments needed to functionally annotate the sequence (see structure 3O27, with functional DNA binding activity).
Most of the structures we found were obtained with a previous inkling of the function of the protein. For example, Supplementary Table S3 lists structural proteins, such as capsids and tail spikes. The full capsid structures are interesting, for instance, as scaffolds for drug delivery, and as models to study capsid formation, propose infection mechanisms, and study the interactions with nucleic acids and membranes. In this regard, structure 5W7G proposes a model for the membrane envelope of Acidianus filamentous virus 1, composed of flexible tetraether lipids that are organized as horseshoes, including a mechanism for enrichment of the viral membrane with this particular lipid of low abundance in the host. Another important interaction is that of capsid proteins with DNA, and surprisingly, it appears that rod-shaped viruses (such as Sulfolobus islandicus rod-shaped virus 2 in structure 3J9X) organize their DNA in the A form, stabilized by alpha helices from the major capsid proteins. This is in stark contrast to icosahedral viruses, which pack their DNA in the B form. Turrets, tails and spikes are important to understand interactions with the hosts, as part of the ecological role that these viruses play.
Supplementary Table S4 lists proteins that bind either DNA or histones; the latter come from viruses that infect either fish or shrimp and are interesting because one of them is a DNA mimic (see structure 2ZUG). The architecture of these DNA-binding proteins is sometimes reminiscent of known bacterial classes (see structure 2CO5, a winged helix-turn-helix protein with an intramolecular disulfide bond, discussed above), or is a novel fold (see structure 2J85). Finally, Supplementary Table S5 lists enzymes found in extremophile viruses. The range of activities is wide, going from DNA, protein and sugar metabolism, to reactive oxygen species management (see, for example, structure 4U4I, a superoxide dismutase that does not require chaperones to capture copper or oxidize its disulfide bridges). There is also interest in auxiliary metabolic genes that support more efficient phage replication, and are normally related to photosynthesis; in this class we find structure 5HI8, a phycobiliprotein lyase, and 3UWA, a peptide deformylase particularly selective for the D1 protein of photosystem II.
The relevance and utility of all these structures is manifold: as crystallography-amenable homologs of difficult targets (see structure 3VK7, a DNA glycosylase) given their stability, as inspiration to improve the resistance of mesophilic orthologs to high temperature and low pH, as examples of how to trim these orthologs to minimal yet functional versions, in the identification of novel quaternary structures (see, for example, structure 5Y5O, a dUTPase with novel packing), and as examples of how to graft new modules onto them (see structure 3TD7, the sulfhydryl oxidase with a novel domain attached at the C-terminus). The field of structural biology of extremophile viruses is still young, and there is plenty of room for the exploration of orphan ORFs and of viruses subject to other extreme environments.
FINAL REMARKS
Metagenomics is a powerful approach to study the virome structure of extreme environments and its potential biotechnological applications in a number of fields. However, despite this potential, few studies have been undertaken to characterize viral communities in these environments. Some methodological challenges still need to be overcome to ensure that samples enriched in viral particles can be obtained and to increase the yields of viral nucleic acids that can be isolated.
A comparative analysis of the population structure of viromes in extreme environments was carried out here, using the 17 publicly accessible virus metagenomic libraries deposited in MetaVir2. Viral communities from different extreme environments showed quite high levels of overall similarity, with viral families belonging to the Caudovirales order being ubiquitous and displaying seemingly polyextremophilic adaptation. The most abundant families of Caudovirales were Siphoviridae, Myoviridae, and Podoviridae, followed by Circoviridae, Phycodnaviridae, Microviridae, and Inoviridae. However, very high fractions of unidentified ssDNA and dsDNA viruses and phages were also detected. Considering the large number of unclassified viral sequences from extremophilic viromes, it is currently not possible to definitively identify novel virus families that are uniquely present in different extreme environments, nor to correlate the presence of specific virus families or genera with any given environment.
Attempts to further explore specific virus-host relationships in the Podoviridae and Poxviridae present in OMZ and deep-sediments resulted in the identification of viruses whose known hosts are highly unlikely to reside in these environments. Although many more sequences were obtained when a similar comparison was made using newer, more up-to-date databases (RefSeq from NCBI, or IMG/VR), the viruses identified often correspond to those for which more sequences are available. Therefore, although MetaVir2 is no longer kept up to date and richer, more recent databases such as IMG/VR exist, a similar comparative analysis using such databases produced similar results (not shown). This indicates that more accurate taxonomic assignments are required, ideally collected in a common repository together with all the viral metadata. New tools should also be developed to automate sequence classification so that viral species assignments can be obtained.
Hierarchical clustering analysis was performed on the abundant viral families mentioned above, and it was possible to conclude that some extreme environments, such as hypersaline, deep-sea, and hyperarid niches, form groups that indicate similarities in their viral communities, although, as above, with the available data this comparison could only be evaluated at the family level. An important challenge for viral metagenomics is to establish specific viral clusters associated with particular extreme environments and to describe their role in different extreme ecosystems. Although taxonomic assignment of viruses at the genus or species level remains a challenge, new strategies for virus classification are under development that use genomic sequences without prior information, or clustering of their coding sequences, allowing a more efficient classification process that is scalable and user friendly.
In addition, functional prospecting for viruses in extreme ecological niches has to date been almost exclusively limited to sequence-based screening. While some viral sequences have been annotated and assigned to specific functions, very few viral proteins discovered using metagenomic approaches have been subsequently cloned, heterologously expressed, and biochemically characterized. Other function-based methods, such as activity-based screenings and PCR- or hybridization-based screenings, are currently underexploited as approaches to identify viral proteins from extremophilic viromes that may have utility in biotechnological applications. Considering that our current knowledge of viromes associated with extreme ecosystems is quite limited, we still cannot fully appreciate the great biotechnological potential that they may represent. Thus, further efforts should be made to screen extremophile viral metagenomes for novel proteins and biomolecules if we are to advance our understanding of their biological impact and to capitalize on the unique viral diversity that is present within these novel ecosystems.
AUTHOR CONTRIBUTIONS
SD-R, RG, AD, and RB-G designed and wrote the manuscript. MS-C prepared the Section "Perspectives on Sampling and Processing: Methodological Challenges for Viral Metagenomics in Extreme Environments." HC-S, SD-R, RP, and AH collected and processed the metagenomic data and conducted the analysis of viral communities. RB-G and LM-Á prepared Section "Functional Metagenomics in Extreme Environments: Methodological Challenges, Discoveries and Opportunities" related to functional bioprospection in extreme viromes, while NP prepared Section "Structural Biology of Viromes."
FUNDING
We acknowledge the graduate fellowships that HC-S and LM-Á received from CONACyT-Mexico. The authors appreciate the support received from the Unidad de Secuenciación Masiva y Bioinformática (Instituto de Biotecnología UNAM-Mexico) for access to computer facilities.
Isolation, characterization, and effectiveness of bacteriophage Pɸ-Bw-Ab against XDR Acinetobacter baumannii isolated from nosocomial burn wound infection
Objective(s): With the emergence of drug resistance, novel approaches such as phage therapy for the treatment of bacterial infections have received significant attention. The purpose of this study was to isolate and identify bacteriophages effective against extensively drug-resistant (XDR) bacteria isolated from burn wounds. Materials and Methods: Pathogenic bacteria were isolated from the wounds of patients hospitalized in specialized burn hospitals in Iran, and their identification was performed based on biochemical testing and sequencing of the gene encoding 16S rRNA. Bacteriophages were isolated from municipal sewage, Isfahan, Iran. The phage morphology was observed by TEM. After determination of the host range, adsorption rate, and one-step growth curve, the phage proteomic pattern and restriction enzyme digestion pattern were analyzed. Results: All bacterial isolates were highly resistant to antibiotics. Among the isolates, Acinetobacter baumannii strain IAU_FAL101 (GenBank accession number: MW845680), an XDR bacterium, showed significant sensitivity to phage Pɸ-Bw-Ab. TEM showed that the phage belongs to the family Siphoviridae and has double-stranded DNA. This phage showed the highest antibacterial effect at 15 °C and pH 7. Analysis of the restriction enzyme digestion pattern showed that the Pɸ-Bw-Ab genome was sensitive to most of the enzymes used, and SDS-PAGE revealed protein bands of 43 to 90 kDa. Conclusion: The isolated phage had an antibacterial impact on the other bacterial spp. tested and strong antibacterial effects on XDR A. baumannii, together with a long latent period and low burst size. This phage can be a suitable candidate for phage therapy.
Introduction
Antibiotic resistance among bacterial infections has increasingly concerned both developed and developing communities and nations (1). The rapid spread of antibiotic-resistant bacteria has led to a serious challenge in the treatment of wound infections (2,3). Burns are responsible for many pathophysiological changes (4) that lead to severe trauma (5,6). Patients with burns are clearly at high risk for acquired infections, which have been important causes of death in these patients (7,8). Bacteria are the most frequent microorganisms responsible for wound infections. The frequency and diversity of bacterial infections in different wounds are influenced by factors such as wound type, depth, and area, the manner of wound formation, the amount of tissue loss, and the efficiency of the host immune response (9). Wound colonization by pathogenic bacteria begins almost immediately after injury, and the colonizing bacteria include opportunistic pathogens acquired from the environment or the patient's microbial flora (10,11). In many cases, even low-level microbial populations growing in the wound impede wound healing (12). Among common gram-negative bacteria in burn wounds, drug-resistant Pseudomonas aeruginosa, Acinetobacter baumannii, members of the Enterobacteriaceae, and other broad-spectrum beta-lactamase-producing bacteria are of special importance (13,14). A. baumannii has recently been detected among the most abundant nosocomial wound-infecting bacteria, and eliminating it from hospital wards remains a serious challenge due to its long survival time in the hospital environment (15). On the other hand, the increasing pattern of antimicrobial resistance among A. baumannii strains has caused a great deal of concern (16,18). A. baumannii, Escherichia coli, and Klebsiella pneumoniae are gram-negative bacteria and are among the opportunistic and nosocomial pathogens of hospital-acquired infections in burn patients (19,20). Phages are viruses that infect bacteria and are used as antibacterial agents in the treatment of pathogenic and infectious bacteria. Identification of effective phages can significantly help solve the problems posed by the increasing emergence of drug-resistant infectious pathogens (21-23). Several burn wound phage therapy studies (24-26) suggest that phages could have the ability to limit bacterial wound infections in burn patients. Also, multiple pieces of research have illustrated the advantages of phage therapy for inhibiting MDR bacterial infections in burn patients (27,28). Lavergne et al. studied alternative treatments for multidrug-resistant bacterial infections. They evaluated a case of bacteriophage-treated multidrug-resistant A. baumannii infection and concluded that determination of the exact role of phages as an option for inhibiting multidrug-resistant bacterial infections needs more clinical trials (29). Similar conclusions were reached by Yang et al. and Jin et al. (30,31). Researchers have isolated bacteriophages that were effective against MDR bacterial isolates from septic ulcer infections: phage PA DP4 was effective on P. aeruginosa, phage KP DP1 on K. pneumoniae, and phage EC DP3 on E. coli. Moreover, the results of that study also showed that phages can be a valuable choice for prophylaxis against septic ulcers (32). Furthermore, researchers designed a new lytic bacteriophage cocktail with therapeutic potential against bacteria causing diabetic foot infections (33).
Researchers have also examined the stability of bacteriophages in burn wound care products. According to the results of that study, supplementation of burn wound care products with bacteriophages increased their antimicrobial activity; however, some topical antimicrobial compounds traditionally used to prevent and treat burn wound infections may decrease phage activity (34). Since several questions regarding the treatment of MDR, XDR, and PDR bacterial infections in burn patients remain unresolved, alternative approaches must be explored. Here, we report the morphological identification of a phage targeting MDR and XDR A. baumannii, K. pneumoniae, and E. coli isolated from burn wounds, together with characterization of phage survival, host range, adsorption rate, one-step growth curve, proteomic pattern, and restriction enzyme mapping, using several endonucleases for digestion analysis that have not been reported in other articles.
Isolation and biochemical identification of bacterial strains
In this study, 50 patients with burn wounds were randomly selected in several specialized burn hospitals in different areas of Iran. Patients' wounds were sampled over a period of three months (March to May 2020). Bacterial isolates causing infections were isolated from various burn wounds, and initial identification of the bacteria was performed using morphological and biochemical tests. The biochemical tests included IMViC, TSI, MR, VP, motility, H₂S and indole production, OF, catalase, oxidase, urease, nitrate reduction, and citrate utilization (Merck, Germany) (35,36). Standard strains of K. pneumoniae ATCC 10031, E. coli ATCC 25922, and A. baumannii ATCC 19606 were obtained as lyophilized ampoules from the Pasteur Institute of Iran.
Molecular identification of bacterial isolates
Bacterial DNA was extracted using a nucleic acid extraction kit (RIBO-prep, Russia) according to the manufacturer's protocol. Molecular identification of the bacterial isolates was performed based on the 16S rRNA gene sequence. For this purpose, the universal primer pair 27F (5′-AGAGTTTGATCCTGGCTCAG-3′) and 1492R (5′-ACGGCTACCTTGTTACGACTT-3′), prepared by Pishgam Co. (Iran) under license from Metabion (Germany), was used. The PCR reaction was performed in a thermal cycler (T100, Bio-Rad, Malaysia). The sequences of all PCR products were then determined with a DNA sequencer (3130, Applied Biosystems) according to the manufacturer's instructions (37-39).
Antibiotic resistance pattern testing
To evaluate the antibiotic resistance pattern of the gram-negative clinical isolates, an antibiogram test was performed by the agar disk diffusion method. The diameter of the growth inhibition zones was measured following incubation at 37 °C for 24 hr. The XDR isolates had the highest resistance and lowest susceptibility to the tested antibiotics. The antibiogram discs (CONDA, Spain) were selected according to the CLSI standard reference; extensively drug-resistant (XDR) means non-susceptibility to at least one agent in all but two or fewer antibiotic classes, and multidrug-resistant (MDR) means non-susceptibility to at least one agent in three or more antimicrobial classes (40,41).
Isolation and enrichment of bacteriophage
Municipal inlet sewage was used to isolate candidate phages. Sampling was done at the northern and southern municipal sewage treatment plants of Isfahan, Iran, and the samples were transferred to the laboratory on ice. Each sample was centrifuged at 8000 g for 15 min at 4 °C. Then, 10 ml of supernatant was filtered through a 0.22 μm syringe filter, and the filtrate was added to 50 ml of 2× BHIB (Ibresco, Iran). Then, 100 μl of each fresh overnight bacterial culture was separately added to each medium, and the mixtures were shaken overnight at 140 rpm and 37 °C (42).
Enumeration of the isolated bacteriophage particles
The double-layer agar plate technique (overlay method) was used for plaque formation. For this purpose, serial dilutions of each phage filtrate (10⁸ PFU/ml) were prepared in 50 ml SM buffer (1 M Tris-HCl pH 7.5, 5.8 g NaCl, 2 g MgSO₄·6H₂O in 1 L distilled water), and 0.1 ml of each dilution was separately added to 0.1 ml of fresh bacterial culture (10⁸ CFU/ml). Finally, this mixture was added to 5 ml of melted BHI medium containing 0.7% agar, the mixture was poured onto the surface of BHI with 1.5% agar, and the prepared culture was incubated for 24 hr at 37 °C. Afterward, the formed plaques were observed and counted (23,43).
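The titer back-calculation implied by this protocol is the standard one, titer (PFU/ml) = plaques / (dilution × volume plated in ml). A minimal R sketch with hypothetical input values follows.

```r
# Standard plaque-assay back-calculation; the input values are hypothetical.
pfu_titer <- function(plaques, dilution, volume_ml) {
  plaques / (dilution * volume_ml)   # PFU per ml of the undiluted stock
}

pfu_titer(plaques = 45, dilution = 1e-6, volume_ml = 0.1)  # 4.5e+08 PFU/ml
```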
Phage plaque purification and bacteriophage serial dilution
In order to purify specific phages, overlay agar plates were prepared with 10⁻⁴ to 10⁻⁸ dilutions of phage-containing supernatants. Single plaques formed on the plate surface were cut out with a sterile scalpel blade, transferred to sterile microtubes containing 1 ml SM buffer, and mixed thoroughly for 30 sec. The mixtures were then centrifuged at 8000 g for 5 min at 4 °C, and 0.1 ml of each separated supernatant was added to 0.9 ml of SM buffer. Plaque formation by each solution was investigated using the overlay method, and the plaques were counted after incubation for 24 hr at 37 °C (44).
Assessment of bacteriophage host range
The host range of the studied phage was investigated using the spot method on different bacterial strains. Accordingly, 0.1 ml of fresh overnight bacterial culture (1.5×10⁸ CFU/ml) was mixed with 5 ml of melted BHI medium containing 0.7% agar and poured onto the surface of BHI with 1.5% agar. After the media solidified, 10 μl of each dilution prepared from the phage filtrates in SM buffer was inoculated as spots on the agar surface in separate areas. After 24 hr of incubation at 37 °C, the spot test results were recorded as "++" for complete lysis, "+" for opaque or weak lysis, and "-" for no lysis (33,45).
Morphology and structure of bacteriophage by TEM
For a detailed study of the structure and morphology of the putative bacteriophages, 10 μl of phage filtrate (10⁸ PFU/ml) was applied to a carbon-coated copper grid for 30 sec and stained with 2% uranyl acetate (w/v) for 1 min. After drying, the phage particles in the sample were observed by TEM (EM 208S, 100 kV, Philips). Assignment of the phage family according to morphological features was performed using the latest reports of the International Committee on Taxonomy of Viruses (ICTV) (43, 46-48).
Assessment of the bacteriophage adsorption rate
A mixture containing equal volumes of fresh bacterial culture (1.5×10⁸ CFU/ml) and phage stock (10⁶ PFU/ml) was prepared and incubated at 37 °C. The host-phage mixture was then centrifuged at 8000 g at different time intervals (0, 5, 10, 15, 20, 25, and 30 min) to precipitate phage-adsorbing cells, and the titers of unadsorbed phages in the supernatant were determined by the overlay method (43).
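The adsorption rate then follows from the unadsorbed titers by expressing each supernatant titer relative to the input titer at t = 0, as in this minimal R sketch with hypothetical values.

```r
# Percentage of phage adsorbed at each sampling time; titers are hypothetical.
time_min <- c(0, 5, 10, 15, 20, 25, 30)
free_pfu <- c(1.0e6, 5.0e5, 2.5e5, 1.8e5, 1.9e5, 1.8e5, 1.8e5)  # PFU/ml

adsorbed_pct <- (1 - free_pfu / free_pfu[1]) * 100
plot(time_min, adsorbed_pct, type = "b", pch = 19,
     xlab = "Time (min)", ylab = "Phage adsorbed (%)")
```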
Phage stability at different pH values and temperatures
To measure the viability of the isolated phage at different pH values, the pH of SM buffer was adjusted in the range of 4-10, and phage filtrate (10⁸ PFU/ml) was added to the buffer at each pH value. After incubation for 1 hr at 37 °C, the survival rate of the phage was determined by counting the titers of active phages using the overlay method. In order to investigate the antibacterial effect of the phage at different temperatures, 10 μl of each phage filtrate solution (10⁴-10⁸ PFU/ml) was added to overlaid bacterial cultures, which were then incubated at various temperatures (15, 20, 30, 37, and 42 °C). The temperature at which clear zones were observed was determined as the optimum temperature for the antibacterial activity of the isolated phage (43).
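Survival at each pH is simply the treated titer expressed as a percentage of the untreated control titer; a minimal R sketch with hypothetical values follows.

```r
# Phage survival (%) after 1 hr at each pH; all titers are hypothetical.
ph      <- 4:10
titer   <- c(2e6, 8e6, 5e7, 1e8, 9e7, 6e7, 4e6)  # PFU/ml after treatment
control <- 1e8                                    # untreated control titer

survival_pct <- titer / control * 100
barplot(survival_pct, names.arg = ph, xlab = "pH", ylab = "Survival (%)")
```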
Obtaining the desired multiplicity of infection (MOI)
The ratio of plaque-forming units (PFU) to colony-forming units (CFU) was defined as the MOI. For this purpose, dilutions of 10⁵ to 10⁷ PFU/ml were mixed separately with 10⁶ CFU/ml of bacteria; 10⁷, 10⁶, and 10⁵ PFU/ml corresponded to MOI=10, MOI=1, and MOI=0.1, respectively. For MOI=0, phage-free bacteria were used as the bacterial growth control. The plates were then shaken at 37 °C for 13 hr, the optical density of the wells was read every 1 hr at OD=600 nm, and the results were recorded (49,50).
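The MOI arithmetic used here reduces to a single ratio, as the short R sketch below shows for the fixed host density of 10⁶ CFU/ml used in this experiment.

```r
# MOI = PFU added / CFU added
moi <- function(pfu_per_ml, cfu_per_ml) pfu_per_ml / cfu_per_ml

cfu <- 1e6  # host density used in this assay
sapply(c(1e7, 1e6, 1e5), moi, cfu_per_ml = cfu)  # MOI 10, 1, 0.1
```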
One-step phage growth curve plotting
To determine the latent period and burst size of the phage, 1 ml of bacterial culture (OD=0.2 at 600 nm in BHI broth) was first added to different dilutions of the isolated phage and incubated at 37 °C. After 6 min of incubation, the culture was centrifuged at 6000 g for 10 min at 4 °C to remove residual free phage particles. The resulting pellet was then added to 50 ml of BHI broth and re-incubated at 37 °C. Every ten minutes, the phage titer was determined using the overlay method (PFU/ml), and the resulting curve was plotted (51,52).
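From such a time series, the burst size is conventionally estimated as the ratio of the post-burst plateau titer to the initial, latent-phase titer of infected cells; the R sketch below illustrates the calculation on a hypothetical titer series.

```r
# Latent period and burst size from a one-step growth curve (toy data).
time_min <- seq(0, 120, by = 10)
titer    <- c(rep(1e4, 6), 2e4, 1e5, 4e5, 8e5, 1e6, 1e6, 1e6)  # PFU/ml

latent  <- mean(titer[1:6])       # plateau before the burst
plateau <- mean(tail(titer, 3))   # plateau after the burst
plateau / latent                  # burst size; 100 in this toy example

plot(time_min, titer, log = "y", type = "b", pch = 19,
     xlab = "Time (min)", ylab = "PFU/ml (log scale)")
```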
Extraction of bacteriophage genomic DNA
First, the enriched phage stock was treated with 1 µg/ml DNase I and RNase A (Fermentas/Thermo Fisher Scientific, USA) for 20 min at 37 °C, then passed through 0.22 μm syringe filters and centrifuged at 28000 g for one hour. Phage DNA extraction was performed using a genomic DNA extraction kit (Norgen Biotek, Canada). To remove protein residues, an equal volume of phenol/chloroform/isoamyl alcohol (25:24:1) was added to phage particles at a titer above 10⁸ PFU/ml. Phage DNA was then precipitated from the aqueous phase with isopropanol and washed twice with ethanol (53).
Sensitivity assessment by digestion profile
The purified phage nucleic acid was examined for its sensitivity to the HindIII, EcoRI, BamHI, KpnI, HaeIII, and XhoI enzymes (Fermentas/Thermo Fisher Scientific, USA). These enzymes were used to analyze the digestion pattern. For this purpose, the phage DNA was mixed with each endonuclease and incubated at 37 °C for 16 hr. The results were analyzed by 1% agarose gel electrophoresis at 75 V (43).
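When a genome sequence is available, the expected digestion pattern can also be cross-checked in silico. The R/Bioconductor sketch below counts recognition sites for the six enzymes used here; the random sequence is a placeholder, not the actual Pɸ-Bw-Ab genome.

```r
# In-silico site counting with Biostrings; the genome is a random placeholder.
library(Biostrings)  # BiocManager::install("Biostrings") if needed

set.seed(1)
genome <- DNAString(paste(sample(c("A", "C", "G", "T"), 5e4, replace = TRUE),
                          collapse = ""))

sites <- c(HindIII = "AAGCTT", EcoRI = "GAATTC", BamHI = "GGATCC",
           KpnI    = "GGTACC", HaeIII = "GGCC",  XhoI  = "CTCGAG")
sapply(sites, function(s) countPattern(s, genome))
```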
Assessment of the bacteriophage proteomic pattern
Phage particles purified in CsCl were examined by 10% SDS-PAGE. For this purpose, 20 μl of each phage filtrate sample was mixed with 5 μl of 6× SDS-PAGE sample loading buffer (50 mM Tris-HCl [pH 6.8], 2% (w/v) SDS, 10% (w/v) glycerol, 5% (v/v) 2-mercaptoethanol, 0.001% (w/v) bromophenol blue, 4.7 ml distilled water) and heated in boiling water for ten minutes. The separated bands formed by the phage proteins were stained and visualized in the gel using Coomassie Brilliant Blue (45).
Statistical analysis
The results are presented as mean ± SEM. One-way ANOVA in GraphPad Prism 8.0 was used to analyze the data and compare the experimental groups; SPSS version 20 was used for analytical statistics.
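For readers without access to GraphPad Prism or SPSS, an equivalent one-way ANOVA can be run in R; the measurements and group labels below are hypothetical.

```r
# One-way ANOVA with a Tukey post hoc test; the data are hypothetical.
survival <- c(92, 95, 90, 70, 68, 73, 40, 42, 38)
group    <- factor(rep(c("pH7", "pH5", "pH10"), each = 3))

fit <- aov(survival ~ group)
summary(fit)   # overall F test
TukeyHSD(fit)  # pairwise comparisons between groups
```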
Biochemical and molecular identification of the bacterial isolates
A total of 29 bacterial isolates from the burn wounds of the studied patients were identified and confirmed by biochemical and molecular tests. Among the hospitalized patients, 29 had gram-negative bacterial infections, and the bacterial identities were confirmed by biochemical tests. A. baumannii, E. coli, and K. pneumoniae were the most common gram-negative bacteria causing burn wound infection in this study. BLAST analysis of the amplified fragments from 16S rRNA gene sequencing showed that the isolates were strains of A. baumannii, K. pneumoniae, and E. coli with 98-99% identity (Figure 1) (Table 1).
Characteristics of the patients with burn wounds
In this study, 29 patients were eligible and had antibiotic-resistant burn wound infections. According to the descriptive statistics, most of the patients were male (58.62%), and patients aged 31 to 40 years formed the largest group (34.5%). Among the hospitalized patients, 37.93% had third-degree burns. The most common cause of burns was explosion (41.38%), followed by chemical burns (31.03%). The most common length of hospitalization was 1 to 3 weeks (48.27%). The upper limbs of burn patients were involved more often (37.93%) than other body regions (Table 2).
Evaluation of antibiotic resistance pattern in hospitalized burn patients
The antibiotic resistance patterns obtained by the agar disk diffusion method for the clinical isolates showed that A. baumannii strain IAU_FAL101 (GenBank accession number: MW845680), designated Shmy03, exhibited 80% to 100% resistance to most antibiotics. The lowest resistance of this bacterium was observed to colistin (37.5%). E. coli isolates showed the highest resistance to aztreonam, cotrimoxazole, and ampicillin-sulbactam (87.5%), and the lowest resistance to colistin (25%). K. pneumoniae isolates were 80% to 100% resistant to most antibiotics and 40% resistant to colistin (Table 3). According to these results, A. baumannii and K. pneumoniae were classified as XDR. Heat map results from the antimicrobial resistance testing are shown in Figure 2.
Enumeration of isolated bacteriophage
Clear plaques were observed after spotting the phage filtrate on a BHI agar culture of A. baumannii strain IAU_FAL101. On BHI agar, 31 PFU were counted on a plate, and the total titer of lytic phage against A. baumannii strain IAU_FAL101 was approximately 31×10⁸ PFU/ml (Figure 3A).
Morphological evaluation by TEM
The lytic phages were isolated from the raw inlet sewage of the Isfahan treatment plants in Iran by the double-layer method. The isolated lytic phage Pɸ-Bw-Ab, which formed clear plaques on the bacterial culture, was morphologically examined by TEM, and further studies were carried out on it. Observation of the phage morphology by TEM showed that this phage is a dsDNA virus with a long, narrow tail and a cylindrical shape, belonging to the family Siphoviridae in the order Caudovirales. The phage tail length was estimated at 160±10 nm, and the phage head diameter was estimated at 100 nm (Figure 3B).
Table 3. Antimicrobial resistance of Gram-negative bacteria isolated from burn wounds. The number and percentage of resistant (R) isolates among the total number of isolates (N) are shown. Of the 29 isolates in this study, Acinetobacter baumannii and Klebsiella pneumoniae were the most resistant and were identified as XDR (extensively drug-resistant: non-susceptibility to at least one agent in all but two or fewer antibiotic classes; in this study, resistant to all antibiotics except colistin), while Escherichia coli was identified as MDR (multidrug-resistant: non-susceptibility to at least one agent in three or more antibiotic classes). Overall, the Gram-negative isolates showed maximum susceptibility to colistin.
Host range determination
The host range and lytic activity of phage Pɸ-Bw-Ab on different clinical strains showed that, among the bacterial strains tested, A. baumannii strain Shmy03, designated IAU_FAL101 (GenBank accession number: MW845680; www.ncbi.nlm.nih.gov), was sensitive to the phage with a lysis score of ++, and plaque formation on its culture indicated bacterial lysis (Table 4).
The effect of different pH values and temperatures on bacteriophage stability
The effect of different pH values and temperatures on the stability of phage Pɸ-Bw-Ab showed that the phage had the highest stability and antibacterial activity at pH 7. The lytic efficacy of the phage was significantly reduced at pH 4, 5, and 10 (Figure 4A). The effect of different temperatures on the stability of phage Pɸ-Bw-Ab showed that this lytic phage had the highest stability (>90%) at temperatures between 15 and 20 °C. As the temperature increased, the stability of the phage decreased, so that at 42 °C the phage stability dropped to approximately 40% (Figure 4B).
Determination of the bacteriophage adsorption rate
The adsorption of phage Pɸ-Bw-Ab to the surface of A. baumannii was evaluated. Phage lysates showed adsorption above 80% within 10 to 15 min, and the adsorption level remained constant during the following 15 to 30 min of incubation. The isolated phage showed rapid, increasing adsorption to the bacterial host cells in the first 5 to 10 min. On the other hand, approximately 20% to 25% of the phages were not adsorbed by the host cells. The phage adsorption rate is shown in Figure 4C.
Bacteriophage one-step growth curve
In the one-step growth curve plotted to study the growth pattern and lytic activity of phage Pɸ-Bw-Ab, the latent period was about 50 min; the burst size after lysis of a single bacterial cell is shown in Figure 4D.
The bacteriolytic ability of Pɸ-Bw-Ab at different MOIs
The results of phage activity at different MOIs showed that in the control sample (MOI=0), bacterial growth increased over 13 hr due to the absence of bacteriophage. At MOI=0.1 and MOI=1, bacterial growth continued for 10 hr. At MOI=10, due to the high phage titer (10 phages per bacterium), the bacterial growth rate increased less than at the other MOIs in the first hour and stopped at 9 hr. Phage amplification rates at different MOIs, based on the optical densities of the medium, are shown in Figure 4E. As shown, at all four MOIs the culture turbidity was close to OD=0.3 during the first 2 hr.
Results of restriction map and protein profile of isolated bacteriophage
The restriction digestion patterns are shown in Figure 5A. Comparison of the enzyme digestion (cleavage) patterns showed that the isolated phage genome was sensitive to most of the enzymes tested and produced characteristic digestion fragments. The restriction patterns showed that the phage nucleic acid was cleaved more extensively by HindIII than by the other enzymes, whereas HaeIII had no effect on the phage genome; the enzymatic digestions yielded partial digests. The phage protein profiles are shown in Figure 5B alongside a 10 to 180 kDa protein marker. Protein profile analysis by SDS-PAGE estimated that the proteins of phage Pɸ-Bw-Ab ranged from 43 to 90 kDa.
Discussion
A. baumannii, E. coli, and K. pneumoniae were the most common Gram-negative bacteria causing burn wound infections in hospitalized patients. The results of the study showed a substantial extent of resistance to various antibiotics among the bacterial strains isolated from patient wounds. The common feature of the three main antibiotic-resistant bacteria was their sensitivity to colistin. In a previous study, carbapenem-resistant A. baumannii (CRAB) was detected in patient burn wounds, and high colistin sensitivity (99.9%) was observed among CRAB strains, which is consistent with the susceptibility of the XDR A. baumannii strain to colistin in the present study (54).
In the present study, the potential of bacteriophage Pɸ-Bw-Ab, isolated and identified from the inlet sewage of the Isfahan treatment plant in Iran, was determined for the removal of antibiotic-resistant bacteria. This bacteriophage was selected as the best candidate against A. baumannii strain IAU_FAL101.
Phage identification results showed that this phage was a tailed virus with a dsDNA genome and belonged to the Siphoviridae family. The bacteriophage had the highest percentage of stability and antibacterial activity at pH 7 and in the temperature range of 15 to 20 °C, with a strong antibacterial potential to lyse bacterial cells. Bacteriophage Pɸ-Bw-Ab had a large burst size, thermal and pH stability, and a high adsorption rate to the host cells in the first 5 to 10 min.
A 2019 study investigated phage Βϕ-R2096, which was used to control carbapenem-resistant A. baumannii.
That phage belonged to the family Myoviridae and showed high bacteriolytic activity at MOI = 10; our results demonstrated that phage Pɸ-Bw-Ab belongs to the family Siphoviridae and likewise showed high bacteriolytic activity at MOI = 10 (55). Furthermore, another study (2016) identified phage vB-GEC_Ab-M-G7 as a member of the Myoviridae. Phage vB-GEC_Ab-M-G7 had a short latent period, a large burst size, a wide host range, and thermal and pH stability; none of the eight restriction endonucleases used in that study (BamHI, EcoRI, EcoRV, HindIII, HincII, PstI, DpnI, and SpeI) digested phage vB-GEC_Ab-M-G7 (56). In our research, by contrast, bacteriophage Pɸ-Bw-Ab was identified as a member of the Siphoviridae and had a large burst size and high viability; among the six restriction enzymes tested (HindIII, EcoRI, BamHI, KpnI, HaeIII, and XhoI), the phage nucleic acid was cleaved most extensively by HindIII, and phage Pɸ-Bw-Ab was not sensitive to HaeIII.
Researchers (2012) tested the impact of phage ɸkm18p, isolated from sewage, on XDR strains of A. baumannii and found that the phage effectively lysed the bacteria. The phage virion proteins were separated by SDS-PAGE, and the most abundant protein was 39 kDa. That isolated phage belonged to the Podoviridae family, and the HincII and NheI enzymes digested phage ɸkm18p (57). In our study, by contrast, the isolated phage belonged to the Siphoviridae family, protein profile analysis estimated proteins of 43 to 90 kDa, and the most abundant protein was 43 kDa.
A 2020 study identified phage vB_AbaP_D2. A one-step growth curve reflects the latent period, burst size, and release period. The results showed that the latent and rising periods of phage vB_AbaP_D2 lasted 20 and 30 min, respectively, and the average burst size (the number of phage particles released by each infected host cell) was 80±6 (59). Our research showed a latency period of about 50 min; after this 50-min latent phase, cell burst occurred between 50 and 60 min after infection, and finally almost constant growth was observed from 60 to 130 min. Another study (60) isolated phage AB1 of A. baumannii. Restriction analysis indicated that phage AB1 was a dsDNA virus; it had an icosahedral head with a non-contractile tail and whisker structures and was classified as a member of the Siphoviridae family. The proteomic pattern of phage AB1, generated by SDS-PAGE, showed five major bands with molecular weights ranging from 14 to 80 kDa. The phage genomic DNA was digested with EcoRI, XbaI, BglII, BglII/XbaI, EcoRI/BglII, and EcoRI/XbaI enzymes (60). In our study, we used six restriction enzymes (HindIII, EcoRI, BamHI, KpnI, HaeIII, and XhoI), and SDS-PAGE showed major bands with molecular weights ranging from 43 to 90 kDa.
Conclusion
A. baumannii has become a prevalent antibiotic-resistant bacterium in the specialized burn wards of hospitals. Given the narrowing of the effective spectrum of antibiotics in recent decades due to widespread drug resistance, phages, with their specific properties, can be considered among the most desirable and appropriate options to replace various antibiotics. Phages able to replace antibiotics and control serious nosocomial infections can enter the therapeutic arsenal as inexpensive antibacterial agents. It is hoped that, by conducting further studies and evaluating the effectiveness of phages, they can be used as effective therapeutic agents against different antibiotic-resistant strains, which may reduce the prevalence of infections caused by multidrug-resistant pathogens. Based on our study, phage Pɸ-Bw-Ab had an antibacterial impact on the other bacterial spp. tested and, above all, strong antibacterial effects on Acinetobacter baumannii strains, especially Acinetobacter baumannii strain IAU_FAL101; phage therapy would therefore be an applicable proxy for antibiotics, especially in cases where there is resistance to antibiotics.
"year": 2021,
"sha1": "0a235034dd0b6704c1455edffc222e3cb5273cc9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0a235034dd0b6704c1455edffc222e3cb5273cc9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
The Flow Stress Behavior and Physical-Based Constitutive Model for As-Quenched Al-Zn-Mg-Cu Alloy
Although heat-treatable Al-Zn-Mg-Cu alloys are widely used in aerospace industries, distortion and cracks exist due to the residual stress during quenching. Understanding the flow stress behavior and numerically modeling the process is the key to predicting the residual stress. This paper investigated the flow stress behavior of the as-quenched 7050 alloy at strain rates from 0.1 s−1 to 1 s−1, temperatures between 423 K and 723 K, and cooling rates from 0.1 K/s to 10 K/s. The experimental results showed that the strain rate, cooling rate, and temperature have effects on the flow stress value, except for the cooling rates at a temperature of 423 K or 723 K. The kinetics model was used to obtain the precipitate features, i.e., precipitate size and volume fraction. Then, a physical constitutive model based on the evolution of immobile dislocation, solutes, and precipitates was developed. The predicted flow stresses showed good agreement with the experimental data. The findings of this work expand the knowledge on the as-quenched flow behavior of Al-Zn-Mg-Cu alloys, improving the prediction accuracy of residual stress by FEM.
Introduction
Al-Zn-Mg-Cu (7000 series) alloys have high strength and good toughness and are widely selected as structural component materials for the aerospace industry [1]. The structural components are usually manufactured from initial blanks in the aged condition; aging is the last step of the heat treatment sequence of solution treatment, quenching, and aging. In order to obtain a sufficiently supersaturated solid solution, the blanks are quenched very quickly. However, a large residual stress results from the quenching process. Although pre-stretching can be used to reduce the residual stress, residual stress and the resulting distortion and cracks still arise during component machining [2,3]. Therefore, it is very important to understand the residual stress distribution.
The precise knowledge of the flow stress behavior for as-quenched materials, which depends on the temperature, strain rate, strain, and microstructures, is the greatest challenge for the prediction of residual stress [4]. An accurate description of the flow behavior in the constitutive model is a difficult task. For aluminum alloys, the constitutive models are mainly divided into three kinds, i.e., phenomenological constitutive model, physical-based constitutive model, and the artificial neural network [5]. The phenomenological constitutive model includes the Arrhenius model, Johnson-Cook (JC) model, Khan-Huang (KH) model, and so on. The physical-based constitutive model includes the dynamic recovery and dynamic recrystallization model, Zerilli-Armstrong (ZA) model, cellular automaton (CA) model, and some other physical-based models.
Many researchers tried to build the constitutive model of as-quenched Al-Zn-Mg-Cu alloys. Ulysse [6,7] built an internal state variable model for the as-quenched 7075 and 7050 alloys. In this model, the state variable was resolved by the phenomenological exponent-type Zener-Hollomon parameter equation. Chobaut et al. [8,9] studied the as-quenched 7449 alloy, and they used a Chaboche-type constitutive model, which can combine the isotropic strain hardening and viscoplastic phenomena. In Chobaut's work, the threshold stress, which represented the effect of microstructures on the flow behavior, was identified by the inverse method. Reich and Kessler [10] experimentally studied the mechanical properties of undercooled 7020 aluminum alloy. The researchers used a revised empirical Hockett-Sherby hardening law describing the relationship of strain and stress. For the constitutive model of as-quenched 7075 and 7010 alloys, Robinson et al. [11] constructed the Arrhenius equation to generate the flow stress as a function of the temperature and strain rate.
From the above discussions, some models have been built for the as-quenched Al-Zn-Mg-Cu alloys. However, due to their empirical characteristics, the models cannot reflect the influence of the cooling rate on the flow stress behavior. According to the experimental results of [10], cooling rates can strongly influence the flow stress behavior, which is mainly attributed to the precipitation during quenching. The effects of the cooling rates on the flow stress behavior are found not only for the Al-Zn-Mg-Cu alloys, but also for other alloys, e.g., Al-Si alloys [12], Al-Cu-Mg alloys [13,14], and Al-Mg-Si alloys [10].
In the Al-Zn-Mg-Cu alloys, there exist different types of quench-induced precipitation. Starink et al. [15] studied the precipitation phenomenon of Al-Zn-Mg-Cu alloys and found three cooling reactions. In the temperature range of 723 K to 623 K, the S (Al 2 CuMg) phase formed, followed by the η (MgZn 2 ) phase forming from 623 K to 523 K and the Zn-Cu phase (deemed as the Y phase in [16]) at a temperature of 523 K to 423 K. In the further studies of [17,18], the results showed that different precipitates had an inconsistent critical cooling rate (CCR). Hence, it can be considered that the quench-induced precipitation is so complicated that the empirical constitutive model used in [6][7][8][9][10][11] cannot identify the impact of different precipitates on the flow stress behavior. Thus, a comprehensive constitutive model, which can fully describe the quench-induced precipitation and the flow stress behavior, is essential for modern residual stress prediction.
In the present work, a constitutive model that can describe the as-quenched flow stress behaviors was built. Firstly, the quench-induced precipitates were predicted by a kinetics model, and two variables (precipitate radius and volume fraction) were considered. Then, a dislocation-based model, considering the forest dislocation, solute element, precipitates, and their interactions, was built. Some isothermal tensile experiments were performed to observe the flow stress behavior.
Material and Experimental Procedure
The hot-extruded 7050 aluminum alloy was selected in this work. The chemical indices of AA 7050 are listed in Table 1. Based on the standard GB/T 4338-2006 [19], the dimensions of the tensile specimen are shown in Figure 1. Isothermal tensile tests were performed using the Gleeble® 3500 (Dynamic Systems Inc., New York, NY, USA) thermal simulator. Firstly, the samples were heated to the solution temperature (753 K) with a rate of 10 K/s. The solution time was 25 min, followed by the quenching process to the desired temperatures with a constant cooling rate. Different parameters such as the temperature, strain rate, and cooling rate were tested, as shown in Table 2. A schematic illustration of the heat treatment history is shown in Figure 2. Continuous cooling diagrams (CCDs) were used to choose the cooling rate parameters. According to [20], the CCRs for the high-, medium-, and low-temperature reactions were 10, 100, and 300 K/s, respectively. Since the temperatures used in this work covered all temperature reactions, cooling rates of 0.1, 1 and 10 K/s were investigated. The isothermal tensile tests were ceased when the strain was 0.2, and the samples were cooled to room temperature immediately.
Experimental Results of Tensile Test
The strain-stress curves of the as-quenched AA 7050 for different strain rates, cooling rates, and temperatures are shown in Figure 3. It can be seen that the strain rates and temperatures had a strong effect on the flow stress. The flow stress increased with the decreased temperatures or increased strain rates. The maximum flow stress value was found at a temperature of 423 K, a strain rate of 0.1 s −1 , and a cooling rate of 0.1 K/s. When the temperature was 423 K or 523 K, the stress increased with the strain, which showed a strain-hardening effect. When the temperature was 623 K or 723 K, the flow stress remained steady after reaching the peak stress. This phenomenon occurred when the dynamic softening and work hardening reached equilibrium, i.e., dynamic recovery. The observations were in good agreement with earlier reports [21,22] about AA 7050.
The effects of the cooling rate on the flow stress at different strain rates and temperatures are shown in Figure 4. When the temperature was 423 K or 723 K, the flow stresses remained constant for different cooling rates. According to [20], the amount of quench-induced precipitates at a high temperature (723 K) or a low temperature (423 K) is minor. Therefore, these quench-induced precipitates had a negligible effect on the strengthening. However, the flow stresses strongly depended on the cooling rates at temperatures of 523 K or 623 K: the flow stress increased with increasing cooling rate. Based on the experimental results of the step-quench study [15], the quench-induced precipitates at a temperature of 523 K or 623 K were the coarse η (MgZn2) phase, which is the most detrimental precipitation. The coarse precipitates led to a lowered solute concentration. Thus, the cumulative effects of the quench-induced coarse precipitates and the loss of solute elements weakened the strengthening. This observed phenomenon was similar to the finding in [10]. Owing to the non-negligible effects of the cooling rate on the flow stress at temperatures of 523 K and 623 K, the flow behavior cannot be described by a phenomenological model, e.g., the Arrhenius model [21]. Thus, considering the quench-induced precipitates, a physical-based constitutive model is built in the following section.
The Kinetics Model of Precipitation
The precipitation kinetics model was based on the KWN model [23], in which the nucleation, growth, and coarsening processes can be considered. In this model, the input parameters included the temperature, time, and initial concentration of the elements. The outputs of the model were the radius and volume fraction of the precipitates. Starink et al. [15] observed the quench-induced precipitates of 7050 alloys by TEM; the experimental results showed that the precipitates were irregularly shaped. Since the calculated parameter was the mean radius, all the irregularly shaped precipitates were treated as spherical, and only one kind of quench-induced precipitate, i.e., MgZn2, was considered.
The precipitate nucleation rate from the supersaturated solution can be expressed as [24]:

dN/dt = N₀ Z_f β* exp(−ΔG*/(kT)) exp(−τ_in/t)

where N₀ is the nucleation site number per unit volume, β* is the atomic attachment rate, Z_f is the Zeldovich factor, ΔG* is the nucleation activation energy barrier, k is the Boltzmann constant, and τ_in is the nucleation incubation time, where τ_in = 2/(πβ*Z_f²). The nucleation activation energy barrier ΔG* can be provided as:

ΔG* = 16πγ³/(3Δg_v²)

where γ is the interfacial energy and Δg_v is the chemical driving force for nucleation, defined as:

Δg_v = −(RT/V_molar^pre)[X_pre ln(X/X_eq) + (1 − X_pre) ln((1 − X)/(1 − X_eq))]

where R is the gas constant, V_molar^pre is the molar volume of the precipitates, X is the present concentration of the solute in the matrix, X_pre is the mole fraction of the solute elements in the precipitates (assumed to be Mg), and X_eq is the equilibrium solute concentration in the matrix, calculated by considering the Gibbs-Thomson effect [25], with its temperature dependence governed by the formation enthalpy ΔH. The atomic attachment rate β* is related to X, the diffusion coefficient for the solute D, and the critical nucleus size r*_cp, as [24]:

β* = 4π(r*_cp)² D X / a⁴

where a is the lattice parameter and r*_cp is the critical nucleus size, which can be calculated by:

r*_cp = 2γ/|Δg_v|

The diffusion coefficient for the solute D is expressed as [26]:

D = D₀ exp(−Q_d/(RT))

where D₀ is the material coefficient of the solute and Q_d is the diffusion activation energy. The diffusion-controlled growth rate of the precipitates is obtained through the solute balance between the precipitates and the matrix:

dr/dt = (D/r)·(X − X_{r*_p})/(X_pre − X_{r*_p})

where r*_p is the nucleus size, expressed as r*_p = r*_cp + (1/2)√(kT/(πγ)), and X_{r*_p} is the solute concentration at the precipitate/matrix interface, expressed as [27,28]:

X_{r*_p} = X_eq exp(2γV_molar^pre/(r*_p RT))

The solute element composition follows from the mass balance:

X = (X₀ − f_v X_pre)/(1 − f_v)

where X₀ is the initial solute molar fraction and f_v is the volume fraction of the precipitates. The precipitate radius can be discretized into several size classes; the kinetic model is calculated for each size class i (with number density N_i and radius r_i) to obtain the final volume fraction:

f_v = Σ_i (4/3)π r_i³ N_i
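As a rough illustration of how these equations couple, the sketch below integrates a single-size-class version of the kinetics (the paper uses multiple size classes and a Lagrange-like scheme) with explicit nucleation, Gibbs-Thomson-limited growth, and the solute mass balance. All parameter values are placeholders, not the calibrated values of Table 3.

```python
import math

# Minimal single-size-class sketch of the KWN-type kinetics step described
# above, for an isothermal hold. Parameters are illustrative assumptions.

k = 1.380649e-23       # Boltzmann constant, J/K
R = 8.314              # gas constant, J/(mol K)
T = 573.0              # temperature, K
gamma = 0.3            # interfacial energy, J/m^2 (assumed)
Vm = 2.0e-5            # precipitate molar volume, m^3/mol (assumed)
X_eq, X_pre, X0 = 1e-3, 0.33, 0.0376  # solute fractions (X0 from Table 3)
D = 1e-18              # solute diffusivity at T, m^2/s (assumed)
a = 4.05e-10           # lattice parameter of Al, m
N0, Zf = 1e20, 0.05    # nucleation sites per m^3, Zeldovich factor (assumed)

N, r, fv, X = 0.0, 1e-9, 0.0, X0
dt = 1e-2
for _ in range(100_000):                               # ~1000 s of hold
    dgv = -(R * T / Vm) * math.log(X / X_eq)           # driving force (< 0 here)
    if X > X_eq:                                       # supersaturated: nucleate
        r_star = 2.0 * gamma / abs(dgv)                # critical radius
        dG_star = 16.0 * math.pi * gamma**3 / (3.0 * dgv**2)
        beta = 4.0 * math.pi * r_star**2 * D * X / a**4
        N += N0 * Zf * beta * math.exp(-dG_star / (k * T)) * dt
    Xr = X_eq * math.exp(2.0 * gamma * Vm / (r * R * T))  # Gibbs-Thomson
    r += (D / r) * (X - Xr) / (X_pre - Xr) * dt           # growth rate
    fv = min(N * 4.0 / 3.0 * math.pi * r**3, 0.99)        # all N at radius r
    X = (X0 - fv * X_pre) / (1.0 - fv)                    # mass balance
print(f"r = {r*1e9:.1f} nm, fv = {fv:.2e}")
```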
Dislocation Density Flow Stress Model
The as-quenched flow stress behavior results from dislocation movement, whose barriers are immobile dislocations, solutes, precipitates, etc. The resistance, i.e., the shear stress, can be calculated in additive form:

τ = τ_G + τ_ss + τ_p

where τ is the shear stress, τ_G is the athermal stress caused by immobile dislocations, τ_ss is the shear stress caused by the solid solution, and τ_p is the shear stress caused by the precipitates. The athermal stress τ_G is a long-range, temperature-independent term:

τ_G = αGb√ρ_i

where α is a proportionality factor, b is Burgers vector, ρ_i is the immobile dislocation density, and G is the shear modulus, which depends on temperature [22]:

G = μ₀[1 + ((T − 300)/T_M)·(T_M/μ₀)(dμ/dT)]

where μ₀ is the shear modulus at a temperature of 300 K (2.54 × 10⁴ MPa), T_M is the melting temperature of aluminum (933 K), and (T_M/μ₀)(dμ/dT) is a parameter that equals −0.5. During deformation, the change of the dislocation density has two parts, i.e., athermal storage and dislocation annihilation:

dρ_i/dε = k₁√ρ_i − k₂ρ_i

where k₁ is a material constant describing the dislocation storage rate and k₂ is a parameter associated with dynamic recovery. Since dislocation annihilation is a thermally activated process, the parameter k₂ is a function of temperature, involving the material constants k₂₀ and C and the vacancy migration activation energy Q_vm. Assuming the dislocation density is ρ_i0 at zero plastic strain, the dislocation density evolution above can be integrated in closed form:

ρ_i = [k₁/k₂ + (√ρ_i0 − k₁/k₂) exp(−k₂ε/2)]²

Solute strengthening is a short-range contribution: gliding dislocations overcome the solute barriers with the assistance of thermal activation. The contribution τ_ss can be calculated as [29]:

τ_ss = τ₀[1 − ((kT/(Δf₀Gb³)) ln(ε̇₀/ε̇))^(2/3)]

where τ₀ is the shear stress at zero temperature, Δf₀Gb³ is the total energy barrier for dislocation movement by thermal activation, and ε̇₀ is a constant. The precipitates act as geometric barriers to dislocation movement; a dislocation may bypass or shear the precipitates, depending on the precipitates' characteristics. Earlier findings [30] for the Al-Zn-Mg-Cu alloy showed that the transition radius is about 3 nm, while according to the TEM observations in [15] the diameter of the MgZn₂ precipitates is about 100 nm. Thus, in the present work, the radius of the quench-induced precipitates (MgZn₂ phase) was larger than the transition radius, and the obstacle strength required for dislocations to bypass the precipitates is expressed as Orowan bowing [31]:

τ_p = ϕβGb/λ

where β is a dislocation line tension parameter, ϕ is an efficiency factor accounting for the influence of the Orowan loop stability, and λ is the mean spacing between precipitates, obtained from their radius and volume fraction.
Finally, the shear stress can be converted to normal stress by the equation:

σ = Mτ

where M is the Taylor factor, which has a value of 3.06 [32].
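A minimal sketch of evaluating this additive model is given below, using the closed-form integral of the Kocks-Mecking equation derived above. The numerical constants (k₁, k₂, τ_ss, τ_p, α, ρ_i0) are illustrative assumptions, not the GA-optimized values of this work.

```python
import math

# Sketch of the additive flow-stress evaluation described above:
# sigma = M * (tau_G + tau_ss + tau_p), with the dislocation density taken
# from the closed-form integral of d(rho)/d(eps) = k1*sqrt(rho) - k2*rho.

M, b, alpha = 3.06, 2.86e-10, 0.3            # Taylor factor, Burgers vector (m)
mu0, TM, dmu = 2.54e4, 933.0, -0.5           # MPa, K, (TM/mu0)(dmu/dT)

def shear_modulus(T: float) -> float:
    """Temperature-dependent shear modulus G(T), MPa."""
    return mu0 * (1.0 + dmu * (T - 300.0) / TM)

def dislocation_density(eps: float, k1: float, k2: float, rho0: float = 1e12) -> float:
    """Closed-form solution of the Kocks-Mecking evolution equation."""
    s = k1 / k2 + (math.sqrt(rho0) - k1 / k2) * math.exp(-0.5 * k2 * eps)
    return s * s

def flow_stress(eps, T, k1=2e8, k2=10.0, tau_ss=20.0, tau_p=15.0):
    G = shear_modulus(T)
    tau_G = alpha * G * b * math.sqrt(dislocation_density(eps, k1, k2))
    return M * (tau_G + tau_ss + tau_p)      # normal stress, MPa

for eps in (0.0, 0.05, 0.1, 0.2):
    print(f"eps = {eps:.2f}: sigma ~ {flow_stress(eps, 523.0):.1f} MPa")
```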
Summary of the Precipitation Kinetics and Constitutive Models
Using the precipitation kinetics model, the precipitates' characteristics, i.e., the radius and the volume fraction, can be obtained. The solution of the kinetics model was performed numerically using a Lagrange-like approach. Table 3 gives the data for the precipitation kinetics model's calculation (among them, the initial solute concentration of Mg in the matrix is 0.0376 at.%). Some known parameters used for the constitutive model are listed in Table 4. The remaining parameters have the domains given in Table 4 and were calculated by an optimization technique using the experimental data. In the present work, the genetic algorithm (GA) method was selected, using a Matlab-based toolbox (2014a). The objective function f_x is defined as the sum of the squared errors between the experimental data and the calculated data, to be minimized:

f_x = Σ_{i=1}^{n} Σ_{j=1}^{m} (σ^c_ij − σ^e_ij)²

where n is the number of strain rates, m is the number of temperatures, σ^c_ij is the calculated stress, and σ^e_ij is the experimental stress.
The efficiency factor is bounded as 0 < ϕ < 1 (Table 4). Since the precipitate hardening is negligible at a temperature of 423 K or 723 K, only two contributions (τ_ss and τ_G) remain at those temperatures. In general, the constitutive model for the flow stress prediction of the as-quenched Al-Zn-Mg-Cu alloy is expressed as σ = M(τ_G + τ_ss + τ_p), with τ_p omitted at 423 K and 723 K.
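The identification step can be sketched as follows. Differential evolution from SciPy is used here as a stand-in for the Matlab GA toolbox employed in the paper, and model() is a toy placeholder for the physical constitutive model; only the least-squares objective mirrors the f_x defined above.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Sketch of the least-squares parameter identification: f_x sums the squared
# errors between measured and model stresses. The "measured" curve and model
# are dummies; differential evolution stands in for the Matlab GA toolbox.

strains = np.linspace(0.0, 0.2, 21)
sigma_exp = 150.0 + 200.0 * strains          # placeholder measured curve, MPa

def model(strains: np.ndarray, params) -> np.ndarray:
    """Toy stand-in for the physical constitutive model."""
    a, c = params
    return a + c * strains

def f_x(params) -> float:
    return float(np.sum((model(strains, params) - sigma_exp) ** 2))

bounds = [(0.0, 500.0), (0.0, 1000.0)]       # parameter domains, cf. Table 4
result = differential_evolution(f_x, bounds, seed=1)
print(result.x, result.fun)                  # recovers a ~ 150, c ~ 200
```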
Model Prediction Results and Discussions
The calculated evolution of the volume fraction and radius with the cooling rate at different temperatures is shown in Figure 5. Starink et al. [15] performed step-quench experiments on AA 7150, with samples cooled at 3 K/s and 1 K/s to 593 K; for comparison, these literature data are also included in Figure 5. According to Starink's results, the radius of the MgZn₂ precipitates is about 100 nm (3 K/s) and 120 nm (1 K/s), respectively. In the present precipitation prediction, both the volume fraction and the radius of the precipitates decreased with increasing cooling rate. This variation trend agrees with the radii reported in the literature [15], although the experimental radii were larger than the predicted ones. One reason may be the irregular shape of the precipitates in the TEM analysis: in the present work the precipitates were assumed to be spherical, resulting in a smaller calculated radius. Another reason is the activation energy for diffusion. The η (MgZn₂) precipitates nucleate on dispersoids, grain boundaries, and so on, and the activation energy barrier differs among nucleation locations; an easy nucleation position requires a lower activation energy, leading to larger precipitates.

Figure 5. The predicted volume fraction and radius varied with the cooling rate at different temperatures. Literature data are included as points and referenced [15].

The experimental and calculated strain-stress data are plotted in Figure 6. The curves correspond to the physical constitutive model with parameters optimized by the GA-based method; the scatter points denote measurements. The results in Figure 6 show good agreement between the experimental results and the model predictions. To evaluate the accuracy of the model, the average absolute relative error (AARE) is defined as [39]:

AARE = (1/N) Σ_{i=1}^{N} |(σ_exp^i − σ_cal^i)/σ_exp^i| × 100%

where N is the number of flow stress data points at the different strain rates and cooling rates, σ_cal is the calculated flow stress, and σ_exp is the measured flow stress. The AARE values based on Equation (23) are also shown in Figure 6. As can be seen, the AAREs for the different strain rates, cooling rates, and temperatures were relatively low. Thus, the developed model can successfully describe the flow stress behavior of the as-quenched Al-Zn-Mg-Cu alloy under variable conditions.

Figure 6. Comparison of experimental (scatter points) and calculated (curves) strain-stress relationships at different temperatures, strain rates, and cooling rates.
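For reference, the AARE metric used above reduces to a few lines; the arrays below are dummy stand-ins for the measured and predicted flow stresses.

```python
import numpy as np

# Sketch of the AARE metric (in %) used to score the constitutive model.
# sigma_exp and sigma_cal are illustrative arrays, not data from this work.

def aare(sigma_exp: np.ndarray, sigma_cal: np.ndarray) -> float:
    return 100.0 * float(np.mean(np.abs((sigma_exp - sigma_cal) / sigma_exp)))

sigma_exp = np.array([120.0, 150.0, 180.0, 210.0])
sigma_cal = np.array([116.0, 155.0, 176.0, 214.0])
print(f"AARE = {aare(sigma_exp, sigma_cal):.1f}%")  # ~2.7% for these dummies
```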
Conclusions
The flow stress behavior of the as-quenched AA 7050 during uniaxial loading was investigated for different cooling conditions. The precipitate features obtained from the kinetics model were used to calculate the resistance to plastic deformation. The flow stress caused by the immobile dislocations was treated as athermal, and the solute contribution as thermally activated. Three components, i.e., immobile dislocations, solutes, and precipitates, were considered in the physical-based constitutive model. The following conclusions were drawn from the present study: (1) All the tested parameters, i.e., cooling rates, strain rates, and temperatures, can affect the flow stress behavior of the as-quenched AA 7050 alloy. The flow stress values increased with increasing strain rate and decreasing temperature. The flow stress values increased with increasing cooling rate at a temperature of 523 K or 623 K, whereas the cooling rate had negligible effects on the flow stress behavior at a temperature of 423 K or 723 K. (2) In the present precipitation prediction, both the volume fraction and the radius of the precipitates decreased with increasing cooling rate. The physical-based constitutive model was applied to predict the flow stress behavior in the isothermal tensile test of the as-quenched AA 7050 alloy, and the AARE between the calculated and the experimental flow stress was about 4%. The present constitutive model, which considers the effects of the precipitates, can also be used to describe the flow stress behavior of other series of as-quenched aluminum alloys.
Author Contributions: All authors contributed to the study's conception and design. The material preparation was performed by G.Q. The data collection and analysis were performed by R.G. and D.L. The first draft of the manuscript was written by R.G. All authors have read and agreed to the published version of the manuscript. Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
No new data were created nor analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "e3f47207403243c28b93d5dc8a86b828b2048e15",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/16/14/4982/pdf?version=1689239646",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78ccf08f819a88fcb3fe249935918f1f18a5fa4a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Optimization of Raw Ewes' Milk High-Pressure Pre-Treatment for Improved Production of Raw Milk Cheese
Serra da Estrela protected designation of origin (PDO) cheese is manufactured with raw milk from Bordaleira and/or Churra Mondegueira da Serra da Estrela sheep breeds. Several socio-environmental shortcomings have reduced production capacity; hence, treatments that may contribute to its efficient transformation into cheese are welcome. High-pressure processing (HPP) milk pre-treatment may contribute to a cheese yield increment, yet optimization of processing conditions is warranted. An initial wide-scope screening experiment allowed for pinpointing pressure intensity, holding time under pressure and time after HPP as the most important factors influencing curd yield. Based on this, a more targeted screening experiment allowed for selecting the range of experimental conditions to be used for an experimental design study that revealed an HPP treatment at 121 MPa for 30 min as the optimum for milk processing to improve curd yield (>9%) and effectively maintain the beneficial cheese microbiota; the optimum was validated in a final experimental framework.
Introduction
In cheesemaking, the cheese yield (kg cheese/kg milk) is of particular economic interest since small differences in yield translate into big differences in both milk volume savings and final profits; the higher the solids percentage recovered, the greater the amount of cheese obtained, thus reflecting economic gains. In the particular case of the protected designation of origin (PDO) Serra da Estrela ewe cheese, the available milk is becoming scarcer due to limitations of various kinds, such as environmental and social cues. Serra da Estrela cheese is made solely with milk from Bordaleira Serra da Estrela and/or Churra Mondegueira ewe breeds, and according to specifications the milk cannot undergo any thermal treatment [1][2][3].
High-pressure processing (HPP) is a non-thermal food processing technology wherein the food is subjected to very high pressures, from 100 to 800 MPa, for holding periods between 5 and 60 min. Different literature reports have indicated that HPP milk pre-treatment can increase curd yield [4,5]. Moreover, milk HPP has the potential to reduce viable cell numbers of undesirable contaminant micro-organisms, without significant effects on flavour and nutritional components, contributing to safer high-quality cheese products; however, HPP may influence the physicochemical and technological properties of the milk [6-8]. The effect of HPP on curd yield has been evaluated mainly in cows' milk [4,5,9-11] and goats' milk [12-14], while only a few studies have focused on ewes' milk [13,14]. In general, milk HPP pre-treatments have enabled an increase of the curd yield of about 4-23% in comparison to untreated milk. Huppertz et al. (2005) studied cows' milk HPP pre-treatment between 100 and 600 MPa and verified higher yield values (13-18%) at 100 and 250 MPa. One year earlier, the same group had verified lower values for HPP-treated milk at 250 MPa (except for the 60 min treatment, which gave a 4% yield increase) and higher values for HPP treatment at 400 and 600 MPa (4-23%) [5]. Furthermore, higher cows' curd yield values were verified after maintaining the milk at 20 °C for 24 h post HPP treatment [5]. The same study revealed that a longer holding time under pressure (from 5 to 30 min) also increased the curd yield. In ewes' milk, HPP treatment at 100 MPa for 30 min revealed a yield similar to that of untreated milk, and increases of about 5, 5 and 16% were observed for 200, 300 and 400 MPa, respectively [13]. Ewes' milk HPP treatment at 300 MPa with holding times of 10, 20 and 30 min showed similar yield values, but lower values when processing was for 5 min [13]. Similar results were verified in a further study by the same research group; López-Fandiño and Olano (1998) observed a higher yield after HPP at 40 °C than at 25 °C (about 23% vs. 9%) but reported that such treatment caused deleterious effects on gel firmness. Several questions remain unanswered; hence, the main objectives of the current research were to use design of experiments (DoE) and response surface methodology (RSM) to determine the optimum HPP milk pre-treatment conditions that maximize curd yield while maintaining the beneficial microbiota of the cheese at the most desirable levels for the development of the cheese's biochemical properties.
Screening Experimental Design and Rationale for Choice of Conditions
Firstly, an initial wide screening study, a two-level full factorial design for four factors run in triplicate (2⁴ = 16 combinations; 16 × 3 = 48 runs, see Table 1), was performed in randomized order to identify the factors with main effects, and the factor interactions, on Serra da Estrela curd yield and coagulation time; a sketch of the run plan is given after this paragraph. Based on previous studies in the literature, the four variables selected were: pressure intensity (range 200-400 MPa), holding time under pressure (range 5-60 min), time before HPP (1-48 h) and time after HPP (1-24 h), as shown in Figure 1. The waiting times before and after HPP allowed for assessing whether the storage time of the milk prior to and after HPP treatment influenced curd yield. The outcome parameters measured were yield and coagulation time. In parallel, untreated milk was studied as a control for comparison with the HPP-treated milk. Table 1. Factors and levels for the initial wide screening and focused screening studies and optimization design of experiment (HPP stands for high-pressure processing).
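A minimal sketch of generating such a randomized triplicate run plan is shown below. The holding-time levels (5 and 30 min) follow the processing conditions described in the HPP section; the random seed is an arbitrary assumption.

```python
from itertools import product
import random

# Sketch of the 2^4 full factorial screening plan: 16 factor-level
# combinations, each run in triplicate (48 runs), in randomized order.

levels = {
    "pressure_MPa": (200, 400),
    "holding_time_min": (5, 30),
    "time_before_HPP_h": (1, 48),
    "time_after_HPP_h": (1, 24),
}

runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
runs = [run for run in runs for _ in range(3)]   # triplicate -> 48 runs
random.seed(42)                                  # arbitrary seed
random.shuffle(runs)                             # randomized run order
print(len(runs), runs[0])
```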
Considering that this initial wide screening study revealed more interesting results at lower pressures, 100 MPa was also studied in the focused screening, with the processing time fixed at 5 min because it was found to have only a minor effect; further analyses were also carried out, namely pH, titratable acidity and microbiological enumeration.
Surface Model-Optimization Experimental Design-Central Composite Design
Upon selection of the most important factors in the initial wide and the focused studies, an optimization design for curd yield improvement was established using a central composite design (Figure 1). This design consisted of a factorial design with two factors at two levels, pressure intensity ranging between 100 and 300 MPa and holding time under pressure between 5 and 30 min, with additional axial points and 5 central points, as shown in Table 1; a coded-unit sketch of this design is given after this paragraph. The dependent variables were technological cheese parameters (curd yield and coagulation time) and milk microbiota viable cell numbers (lactococci, lactobacilli, enterococci, Enterobacteriaceae, total coliforms, E. coli, staphylococci, and yeasts and moulds counts).
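The coded run plan of such a two-factor central composite design can be sketched as below. Whether the axial distance alpha was √2 (rotatable design, assumed here) or 1 (face-centred design) is not stated in the text, so this is an illustrative choice.

```python
import itertools
import math

# Sketch of a two-factor central composite design: 4 factorial points,
# 4 axial points at distance alpha, and 5 centre points (13 runs), in coded
# units, mapped to pressure (100-300 MPa) and holding time (5-30 min).

alpha = math.sqrt(2)                      # rotatable design (assumed)
factorial = list(itertools.product((-1, 1), repeat=2))
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0.0, 0.0)] * 5
coded = factorial + axial + center        # 13 runs in coded units

def decode(x: float, low: float, high: float) -> float:
    mid, half = (low + high) / 2, (high - low) / 2
    return mid + x * half

for cp, ct in coded:
    print(f"{decode(cp, 100, 300):6.1f} MPa, {decode(ct, 5, 30):5.1f} min")
```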
Validation Experiment Design
The theoretical optimum conditions 121 MPa/30 min (obtained for the optimization experiment design) were applied to raw ewes' milk samples in a validation experiment in quintuplicate for greater validation robustness (untreated milk was also studied for data normalization).
Milk Supply
Raw ewes' milk (from three farms in Serra da Estrela cheese PDO region, Portugal) was kept in a refrigerated tank until use, and prior to sampling milk was well mixed to ensure a homogeneous sample. Five litres of milk were used for the initial wide screening and another 5 L for the focused screening experiments, which were performed in December and January, respectively. For the response surface design, 8 L of milk was used in February and another 8 L in March for the model validation.
Sample Packaging
In the dairy, milk aliquots (≈75 mL) were placed into polyamide-polyethylene (PA-PE) bags (Plásticos Macar-Indústria de Plásticos Lda, Santo Tirso, Portugal) and heat sealed. The milk bags were stored under refrigeration (4 °C) before and after HPP treatment until analysis.
High-Pressure Processing
HPP was performed in 55 L capacity industrial scale high-pressure equipment (model 55, Hyperbaric, Burgos, Spain). For all experiments, the initial temperature of the water used as transmitting fluid was 8 °C. For the initial wide screening study: HPP was performed on the day of milk collection and after 48 h, as shown in Table 1 and Figure 1, the milk having been treated at 200 and 400 MPa for 5 and 30 min. For the focused screening study: the milk was treated after 48 h of collection and the curd transformation occurred after 24 h of HPP treatment, and milk samples were treated at 100, 200, 300 and 400 MPa for 5 min. For the design of experiment, the milk was treated after 24 h of collection and the curd transformation occurred after 24 h of HPP treatment, according to Table 1. The validation step used the optimum HPP conditions obtained in the design of experiment study, i.e., 121 MPa for 30 min.
Yield and Coagulation Time
Yield was estimated by centrifugation. Milk (30 mL), prewarmed to 32 °C, was treated with 50 µL of standard vegetable rennet (Cynara cardunculus, strength 1:15,000, Enzilab, Maia, Portugal). After 1 h at 32 °C, the curd was cut and, 10 min later, centrifuged at 1500× g for 15 min at 5 °C. The curd and whey were then separated and weighed. Coagulation time was evaluated by dipping a spatula into the tubes every 10 min until the spatula came out of the curd free of any curd granules.
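The yield and whey-release arithmetic implied by this assay is sketched below; the curd and milk weights are hypothetical placeholders, as is the assumed milk density.

```python
# Sketch: curd yield and whey release from the centrifugation assay described
# above. Weights are illustrative placeholders, not measurements.

milk_g = 30.9          # ~30 mL of milk (assumed density ~1.03 g/mL)
curd_g = 16.8          # pellet weight after centrifugation (hypothetical)
whey_g = milk_g - curd_g

yield_g_per_g = curd_g / milk_g            # g curd per g milk
whey_pct = 100.0 * whey_g / milk_g         # released whey (syneresis proxy)
print(f"yield = {yield_g_per_g:.2f} g curd/g milk, whey = {whey_pct:.1f}%")
```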
Microbiological Analyses
Milk samples were added to and decimally diluted in 13.5 mL of sterile 0.1% (w/v) aqueous peptone and then plated, in triplicate, on several culture media. The following microbial groups were enumerated using the pour plate method: Enterobacteriaceae on violet red bile dextrose agar (VRBDA from Merck, Germany) and coliforms and E. coli on Chromocult coliform agar (CCA from Merck), both incubated at 37 °C for 1 d. The Miles and Misra technique [15] was used for enumeration of: total aerobic mesophilic micro-organisms on plate count agar (PCA from Merck), incubated at 30 °C for 3 d; Enterococcus spp. on kanamycin aesculin azide agar base (KAAA from Oxoid, UK), incubated at 37 °C for 1 d; Lactobacillus spp. on Man, Rogosa and Sharpe (MRS from Merck), incubated at 30 °C for 3 d; Lactococcus spp. on M17 (Liofilchem, Roseto degli Abruzzi, Italy), incubated at 30 °C for 3 d; Staphylococcus spp. on Baird-Parker agar (BPA from Merck) with egg yolk tellurite emulsion (Liofilchem), incubated at 37 °C for 2 d; Listeria spp. on PALCAM selective agar base (Liofilchem) with selective supplement for PALCAM (Liofilchem), incubated at 37 °C for 2 d; Pseudomonas spp. on pseudomonas agar base (PAB from Liofilchem) with glycerol and pseudomonas CFC supplement (CFC from Liofilchem), incubated at 30 °C for 2 d. Petri dishes containing 10-100 colony forming units (cfu) were selected for counting. The results were converted into decimal logarithms of the number of cfu per mL of milk.
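Converting plate counts to the reported log cfu/mL values follows the standard dilution arithmetic sketched below; the colony count and dilution are illustrative examples.

```python
import math

# Sketch: converting plate counts to log10 cfu/mL. For a pour plate,
# cfu/mL = colonies / (volume plated * dilution). Values are illustrative.

def log_cfu_per_ml(colonies: int, dilution: float, volume_ml: float = 1.0) -> float:
    return math.log10(colonies / (volume_ml * dilution))

# e.g. 54 colonies on the 10^-4 dilution plate, 1 mL plated:
print(f"{log_cfu_per_ml(54, 1e-4):.2f} log cfu/mL")  # -> 5.73
```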
Physicochemical Analyses
The pH values of the milk and cheese were measured, at room temperature, in random points using a properly calibrated pH/temperature penetration pH meter (Testo 205, Testo, Inc., Sparta, NJ, USA). The titratable acidity was determined according to AOAC 947.05 [16] procedure for milk, using an automatic titrator with pH meter (Crison TitroMatic 1S with pH electrode 5.14, Barcelona, Spain), by titration to a pH value of 8.9. Physicochemical analyses were performed in triplicate per milk and cheese samples.
Colour
Colour parameters were measured using a Minolta Konica CM-2300d (Konica Minolta CM-2300d, Osaka, Japan) at room temperature. The colour parameters were recorded in CIE Lab system and directly computed through the original SpectraMagic NX software (Konica Minolta, Osaka, Japan), according to the International Commission on Illumination regulations. Milk samples were kept 1 h at room temperature before measurements. Measurements were performed selecting six random spots, read in triplicate.
Statistical Analyses
For experimental design, Minitab and JMP software were used. SPSS software version 26.0 was used to evaluate the effect of factors and interactions in the initial wide screening study. For the focused screening, one-way analysis variance (ANOVA) was performed to establish the effect of different conditions (for HPP and untreated milk). The significant difference Tukey's test was applied to compare the mean values of parameters, with the significance assigned at p < 0.05.
Initial Wide Screening Study
In order to identify which factors may influence curd yield, a full factorial design was chosen, in which all possible combinations of all the input variables and their levels were included (Table 1 and Figure 1). Immediately after HPP treatment, the milk processed at 200 MPa was still liquid; however, the milk treated at 400 MPa revealed a more viscous texture, and after 24 h under refrigeration these samples revealed curd and whey separation. In the literature, a linear increase in skim milk viscosity was verified after HPP between 100 and 400 MPa for 30 min by Huppertz, Fox and Kelly (2003).
In general, firmer curds were obtained from milk HPP-processed for 5 min, while milk treated for 30 min resulted in a curd/paste similar to a granular whey cheese (Figure S1). López-Fandiño, Ramos and Olano (1997) also verified higher curd firmness for cows' milk HPP-treated for 10 rather than 30 min at 400 MPa. Results in the literature indicate that, for ewes' milk, curd firmness was not affected by the HPP conditions (100-400 MPa for 30 min), while for goats' milk firmness increased at 300 and 400 MPa [13]. In the present study, control samples revealed intermediate firmness compared to the HPP-treated samples. Faster coagulation occurred in HPP-treated milk at 400 MPa, particularly for a 30 min holding time (Figure 2A). The effect of HPP milk pre-treatment on coagulation time has been reported in the literature as being mainly dependent on pressure intensity and holding time. In this regard, a research group in this domain treated bovine milk by HPP using different pressure intensity/holding time combinations [14,17,18]. These researchers verified a reduction in coagulation time for HPP up to 200 MPa with treatment times within the range 10-60 min, while HPP treatment at 400 MPa only registered a lower coagulation time when applied for 10 min; longer HPP holding times under such pressure increased coagulation time to values close to those of unprocessed milk. In a study performed with ovine milk, the authors achieved results that support those presented in the current study; they showed that coagulation time decreased slightly after HPP at 100 MPa for 30 min and increased significantly after HPP at 200-300 MPa to values 14-28% higher than for untreated milk samples, albeit with a new decrease for HPP at 400 MPa, although to values that remained slightly higher than those of untreated milk. Notably, gel firmness was not affected over the whole pressure range studied (100-400 MPa) [13]. The timespan between HPP treatment and milk transformation into curd could also be a factor influencing coagulation time. In the present study, a lower coagulation time was verified when milk was transformed into curd immediately, compared to curd production after 24 h storage of the HPP-treated milk under refrigeration (Figure 2). Zobrist, Huppertz, Uniacke, Fox and Kelly (2005) [19] also reported a lower coagulation time for cows' milk stored for short (0 and 4 h) than for long periods (24 and 48 h). This might be related to the fact that HPP leads to an increase in the size and number of casein micelles, due to weakening of hydrophobic and electrostatic interactions between submicelles and further aggregation of submicelles into bigger clusters or chains ([6] cited in [20]).
Syneresis occurred only in those cheeses manufactured from HPP milk treated at the lower pressure under study (200 MPa) and also in the control cheeses (see Figure S1). Low syneresis has been reported in the literature for milk HPP-treated at higher intensity, e.g., treatments at 676 MPa/5 min at 10 °C for bovine milk [10] and 600 MPa/15 min for skim milk [21], while treatments at 200 and 400 MPa did not show significant differences [21].
Yield was improved by milk HPP pre-treatment at 200 MPa, as shown in Figure 2B; in particular, when the milk was treated for 30 min, after 48 h of refrigeration upon collection and with transformation 1 h after HPP, a 12% increase in yield was achieved (p > 0.05). In contrast to what has been reported in the literature [5,13], in this study milk HPP treatment at higher pressure intensity, i.e., 400 MPa, led to lower curd yields. A previous study reported a higher yield for cheeses made from HPP-treated cows' milk (100-400 MPa) upon storage for 24 h at 20 °C than for those produced immediately after HPP milk treatment.
Furthermore, a higher curd formation yield was obtained with ewes' milk HPP-treated at 200 MPa/30 min than with the control milk (11%); nevertheless, a considerable increase in curd yield was achieved after milk was HPP-treated at 400 MPa/30 min (15.6%) [13]. A similar HPP treatment (400 MPa/30 min) on cows' milk led to a curd yield increase of 20% [18].
The increase in curd/cheese yield may be due to greater moisture retention but also to the incorporation of some denatured ß-lactoglobulin [18].
Statistical analysis of the data showed that the curd yield was affected by the pressure intensity (p < 0.001), holding time under pressure (p < 0.05) and time after HPP (p < 0.05) as single factors, and by the interaction of pressure intensity, time before HPP and time after HPP (p < 0.001). This first step, i.e., the initial wide screening design, was crucial to determine that the pressure intensity, holding time under pressure and time after HPP were the most important factors when a curd yield increment is desired. To the best of our knowledge, this is the first research work in which all four factors and their interactions were studied for milk; furthermore, only Zobrist et al. (2005) studied the effect of cows' milk storage after HPP and prior to rennet addition on coagulation time, having also verified different coagulation times after milk HPP storage times of 4, 24 and 48 h at 4 and 20 °C.
Based on the results obtained in the wide screening study and discussed above, milk refrigerated storage before and after HPP treatment was fixed at 48 and 24 h, respectively, since the time before HPP showed no individual effects on curd yield.
Focused Screening Design
As mentioned above, the initial wide screening design revealed more interesting results in milk HPP pre-treated at low pressure intensity (200 MPa) than at high pressure intensity (400 MPa). Therefore, in order to rule out any possibility of lower pressures bringing on more favourable results, in the focused screening design, the range of pressure intensity was widened to include also 100 MPa, and additional analyses were also carried out, namely pH, titratable acidity and microbiological data.
As in the previous screening, HPP-treated milk at lower pressures, i.e., 100 and 200 MPa, remained in its liquid form, and the milk treated at 300 and 400 MPa became viscous and yellower and presented phase separation with time ( Figure S2). This visual analysis is in agreement with the obtained curds ( Figure S3) and milk pH values measured, as shown in Figure 3A. HPP-treated milk at 300 and 400 MPa revealed significantly higher pH values (6.38 and 6.42, respectively) than the control milk (5.74) (p < 0.001). Closer to the pH values of control milk were those of milk HPP-treated at 100 and 200 MPa (5.81 and 5.89), although statistically different (p < 0.001). In the literature, HPP goat milk treatment (500 MPa/15 min) led to significantly higher milk pH values in comparison to thermally pasteurized milk (6.66 vs. 6.54, respectively) [12]. Raw whole bovine milk revealed a similar effect, with HPP treatments (100, 250 and 400 MPa/15 min) inducing increments in milk pH values (to 6.73-6.75 vs. 6.66 of control milk) but without significant differences among HPP treatments [19]. These pH changes brought about by HPP can be due to dissolution of colloidal calcium phosphate (CCP), due to its dissociation from the casein micelle [22], possibly due to weakening of hydrophobic and electrostatic interactions between submicelles. Titratable acidity was in agreement with the changes in pH values ( Figure 3A). Relative to the curd yield, similar values were obtained for milk HPP-treated at 100 MPa and untreated milk (p > 0.05) (0.55 vs. 0.53 g milk/g curd), as shown in Figure 3B.
As expected, HPP treatment at 400 MPa strongly affected microbial cell viability, in particular the beneficial microbiota that contribute positively to the cheese ripening process. On the other hand, many of the microbial groups tested, namely lactobacilli, enterococci, total mesophilic micro-organisms, staphylococci, coliforms and Enterobacteriaceae counts, were only slightly affected when milk was treated at 100 MPa (data not shown). Thus, a lower pressure intensity kept the beneficial microbiota and could improve the yield, but the minimization of spoilage bacteria such as staphylococci, coliforms and Enterobacteriaceae was not successfully achieved.
Optimization Design of Experiment by Central Composite Design
Based on the results obtained in the two screening studies, an optimization approach followed, in which the factors studied were pressure intensity between 100 and 300 MPa and HPP treatment time between 5 and 30 min (Table 1 and Figure 1), with HPP applied 24 h after milk collection (this period was shortened because of the high viable cell numbers quantified in the focused screening design) and the milk transformed into curd 24 h after HPP treatment.
Visual analysis of the milk bags upon treatment revealed that samples treated at 300 MPa for 5, 30 and 17.5 min (samples 3, 4 and 6, respectively, in Figure S4) were yellower. Instrumental colour analysis confirmed these colour variations, since these HPP-treated milks revealed higher b*-values (Figure 4). HPP-treated milk resulted in curds (Figure S5) with yields increased by between 5 and 24% in comparison to the control milk (Figure 5A), with the highest values achieved for milk treated at 300 MPa/17.5 min. To the best of our knowledge, there is only one work that studied HPP application on ewes' milk; it revealed a similar behaviour but reported lower curd yield increases of about 5% for HPP-treated ewes' milk at 200 and 300 MPa for 30 min, while at 100 MPa a yield similar to untreated milk was verified (10, 20 and 30 min of treatment time at 300 MPa showed no effect on yield) [13]. In the present study, the model analysis of the results revealed that the effect of the studied variables on yield could be described by a linear model, in which pressure had the greatest contribution (p < 0.03, with lack of fit p = 0.067).
Since during the two previous screening studies it was visually observed that syneresis showed a clearly different behaviour among samples, syneresis was also studied (Figure 5B). Immediately after centrifugation, lower whey release was quantified for curds obtained from HPP-pre-treated milk, particularly for treatments at 100 and 300 MPa for 17.5 min (about 34 and 29%, respectively, against 45% for untreated milk), but syneresis after 24 h revealed lower, yet statistically insignificant, values for the control milk curds (p > 0.05). As previously mentioned, HPP may induce water retention in the curd [11,26-28], a situation that appears to be related to a change in the structure of the para-caseinate network [26], an observation that may help explain the different syneresis behaviours observed for the HPP-treated samples. Relative to coagulation time, HPP pre-treated milk revealed at least 12% faster coagulation than untreated milk, as reported in the literature [13,14,17,19]. The pH values were also analysed in untreated and HPP pre-treated milk and in the curds obtained therefrom, as shown in Figure 6. HPP-treated milk revealed higher pH values (6.4-6.5) than the untreated milk (6.29), a trend even more noticeable in milk pressurized at the highest pressure intensity (300 MPa), corroborating the results reported above, which can be justified by colloidal calcium phosphate solubilization [22]. Higher pH values were registered in curds resulting from HPP-treated milk (5.18-6.42) than from untreated milk (5.13), being significantly higher in curd resulting from milk treated at 300 MPa (p < 0.001) (Figure 6). Similarly, a curd pH about 0.6 units above that of the control cheese was reported for curd from HPP goat milk (400 MPa/5 min) [26]. However, higher intensity HPP treatments (586 MPa/1 min and 400-600 MPa/10 min) in bovine milk revealed no effect on curd pH [9] or led to a decrease [29]. Milk microbiota viable cell numbers are shown in Figure 7. In untreated milk samples, lactobacilli, lactococci and enterococci were found at 7.25, 4.28 and 5.35 log cfu/mL, respectively (Figure 7A). Enterobacteriaceae and total coliform viable cell numbers were found at a similar level, 6.53 and 6.69 log cfu/mL, respectively. Escherichia coli and Staphylococcus spp. were detected at 4.34 and 4.48 log cfu/mL, respectively. Yeasts and moulds were detected at 5.63 log cfu/mL (Figure 7B). As expected, a higher pressure intensity led to a higher microbial inactivation, particularly for longer holding times under pressure, as shown in Figure 7. The lethal effect of pressure on micro-organisms was also reported to increase as pressure increased from 100 to 300 MPa for 30 min in bovine milk [18], with the total aerobic counts reduced by approximately 0.9 log.
HPP has been reported as an alternative to traditional thermal pasteurization to improve the microbial quality of milk, but lactic starter cultures must then be added. This approach is not possible for PDO cheeses such as Serra da Estrela cheese, since the use of starter cultures is not allowed; a balance between inactivating the spoilage microbiota and keeping as much as possible of the beneficial microbiota was therefore necessary, and this became one of the objectives of the present work. Drake et al. (1997), Buffa, Guamis, Royo and Trujillo (2001) and Trujillo et al. (1999) treated bovine and goat milks at 586 MPa/1 min and 500 MPa/15 min, and the total viable cell numbers were reduced by 0.87-2.2 log cycles, coliforms by >1.3 log cycles and Enterobacteriaceae by >1.9-3.8 log cycles relative to control milk. This same HPP treatment in goat milk led to a reduction of lactobacilli to below the quantification limit (>2.36 log cycles) [30].
Analysis of the design-of-experiments results allowed optimization of the desirability responses (Table 2). Concerning the microbiota, viable cell number reduction data were normalized by dividing by the mean microbial load of the untreated samples, expressing the result as a microbial inactivation percentage. The model was then optimized in order to obtain: (1) the minimum values of the normalized logarithmic reductions of lactobacilli, lactococci and enterococci (the latter group was included among those to preserve, since its relevance to the development of cheese flavour is well known), (2) the maximum values of the normalized logarithmic reductions of Enterobacteriaceae, total coliforms, E. coli, staphylococci and yeasts and moulds (known spoilage micro-organisms) and (3) the highest yield possible.
In this analysis, it was taken into account that not all the microbial groups have equal relevance to cheese maturation. Different importance levels were therefore considered in the design-of-experiments optimization analysis to determine the optimal conditions, as shown in Table 2. Attributing equal relevance to all groups yielded 288.38 MPa for 5 min as the optimal HPP conditions. When lactobacilli and lactococci were weighted as fivefold more important, enterococci as threefold more important and the other microbial groups under study as onefold, HPP treatment at 121.5 MPa for 30 min was obtained as the optimum condition (the predicted results for these conditions are shown in Figure 8).
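The weighted optimization just described can be reproduced in outline with a Derringer-Suich-style desirability function. The sketch below is a minimal illustration of the approach, assuming simple linear stand-in response models; the coefficients and the `inactivation` helper are hypothetical, and only the weights (5, 5, 3, 1) and the design ranges (100-300 MPa, 5-30 min) come from the text.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linear response models over the design space (P in MPa,
# t in min); coefficients are illustrative stand-ins, NOT fitted models.
def inactivation(P, t, a, b):          # normalized log-reduction in [0, 1]
    return np.clip(a * (P - 100) / 200 + b * (t - 5) / 25, 0, 1)

def overall_desirability(x):
    P, t = x
    # Beneficial groups: desirability is high when reduction is LOW.
    d_lab = 1 - inactivation(P, t, 0.5, 0.2)   # lactobacilli, weight 5
    d_lac = 1 - inactivation(P, t, 0.6, 0.2)   # lactococci,  weight 5
    d_ent = 1 - inactivation(P, t, 0.4, 0.1)   # enterococci, weight 3
    # Spoilage groups: desirability is high when reduction is HIGH.
    d_coli = inactivation(P, t, 0.9, 0.3)      # coliforms,   weight 1
    w = np.array([5, 5, 3, 1])
    d = np.array([d_lab, d_lac, d_ent, d_coli])
    # Weighted geometric mean (Derringer-Suich style)
    return np.prod(d ** w) ** (1 / w.sum())

# Maximize desirability over 100-300 MPa and 5-30 min.
res = minimize(lambda x: -overall_desirability(x), x0=[200, 17.5],
               bounds=[(100, 300), (5, 30)])
print("optimal (P, t):", res.x)
```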
Model Validation
The optimum condition obtained in the optimization study considering different importance levels, 121 MPa/30 min, was subsequently applied to a new batch of raw ewes' milk, in quintuplicate, for greater robustness in validating the predicted results. Untreated milk was also studied to allow data normalization, and Table 3 and Figure S6 present all the obtained results: curd yield, microbiota viable cell numbers, whey quantification and pH values. Statistical analysis of all data obtained during model validation revealed that the inactivation percentages of lactococci, lactobacilli, enterococci, Enterobacteriaceae, total coliforms and yeasts and moulds (p > 0.05) were in agreement with the model predictions, thus validating these parameters (Table 3). However, E. coli and staphylococci inactivation was not validated (p < 0.05). Curd yield and released whey were also validated by the model (p > 0.05).
Thus, the HPP conditions 121 MPa for 30 min, when applied to raw ewes' milk within 24 h after collection and transformed within the next 24 h to curd, were validated as the optimal conditions to combine the best possible inactivation of spoilage microbial viable cells, with a very low reduction of viable cell numbers of beneficial microbiota, and simultaneously achieve a better curd yield.
Conclusions
When screening the factors that affect curd yield, it is very important to test as many factors as possible in order to identify the significance of each of them. The experimental design allowed us to determine that the most influential factors on Serra da Estrela cheese production from high-pressure-treated milk were pressure intensity, holding time under pressure and time after HPP. A focused screening design was able to pinpoint that the viable cell numbers in milk HPP-treated at 400 MPa were considerably affected, while lower pressure intensities preserved the beneficial microbiota and improved the curd yield. For the identification of optima, a response surface design was performed; higher pressure intensity led to higher microbial inactivation, which was more pronounced at longer holding times under pressure. Nevertheless, setting as the main target an equilibrium between the best possible inactivation of spoilage bacteria and the lowest possible reduction of beneficial microbiota viable cell numbers, coupled with an increased yield, led to HPP milk pre-treatment at 121 MPa for 30 min being determined as the optimum condition, and model validation confirmed the predicted results.
In conclusion, HPP treatment of raw ewes' milk prior to cheese manufacture can increase Serra da Estrela curd yield and improve the microbial profile, which is important from both safety and quality points of view. | 2022-02-04T16:15:49.770Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "6f111bbb8a53330ef1098359c4775fac106daf7e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/11/3/435/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13235055631c73b9fc6745f4d275711f589a181a",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17713626 | pes2o/s2orc | v3-fos-license | SALMO and S3M: A Saliva Model and a Single Saliva Salt Model for Equilibrium Studies
A model of synthetic saliva (SALMO, SALiva MOdel) is proposed for use as a standard medium in in vitro equilibrium and speciation studies of real saliva. The concentrations derive from a literature analysis of the composition of both real saliva and synthetic saliva. The chief interactions of the main inorganic components of saliva, as well as of urea and amino acids, are taken into account on the basis of a complex formation model, which also considers the dependence of the stability constants of these species on ionic strength and temperature. These features allow the modelling of the speciation of saliva in different physiological conditions deriving from processes like dilution, pH, and temperature changes. To simplify equilibrium calculations, a plain approach is also proposed, in order to take into account all the interactions among the major components of saliva, by considering the inorganic components of saliva as a single 1 : 1 salt (MX), whose concentration is c_MX = (1/2)Σc_i (c_i = analytical concentration of all the ions) and whose ion charge is calculated as z = ±(I/c_MX)^(1/2) = ±1.163. The use of the Single Saliva Salt Model (S3M) considerably reduces the complexity of the systems to be investigated. In fact, only four species deriving from internal ionic medium interactions must be considered.
Introduction
Chemical speciation studies in real systems are usually very complex, due to the large number of interactions that must be taken into account, which lead to the formation of several species of different stability [1][2][3][4][5][6][7]. This is particularly true in the case of biological fluids, where not only does the composition vary from fluid to fluid, but it may also depend on several other factors such as, for example, different physiological conditions, age, kind of living organism, and diseases [8,9]. These changes are usually the main cause of the differences between the results obtained and predictions made by in vitro and/or in silico studies and what is actually observed in vivo [9][10][11][12][13]. That is why, over the years, several "artificial media" have been proposed to simulate the composition of a wide number of real systems (with particular reference to biological fluids), with the aim of performing various studies in conditions that are as close as possible to those actually found in reality: typical examples are the use of artificial seawaters in environmental studies (e.g., [14,15] and references therein) or of simulated body fluids in the pharmaceutical field (e.g., [9] and references therein). Unfortunately, the simple preparation and use of an artificial medium is not sufficient when performing rigorous chemical speciation studies. This is due to the fact that the investigation of the "distribution of an element amongst defined chemical species in a system" (i.e., its speciation [16]) is based on the evaluation of the main interactions of this element with all the other components in the system and on the determination of the stability of the species formed, but this process requires prior knowledge of all the interactions occurring between the components already present in the system. In other words, a chemical speciation model of the biological fluid itself is necessary prior to any investigation of the speciation of any other component in that fluid. Furthermore, assuming that a speciation model of the fluid is available, the above-cited variability of conditions also makes it necessary to assess their effect on the speciation: the dependence of the stability and distribution of the various species on chemical (e.g., kind and concentration of components, ionic strength, and pH) and/or physical (e.g., temperature) parameters must be known in order to build accurate speciation models.
Over the years, this group has also been involved in this kind of work, proposing the use of new synthetic media (e.g., a synthetic seawater [14]), providing chemical speciation models of natural waters (e.g., seawater [15]) and biological fluids (e.g., urine [17] and blood plasma [18]), as well as alternative approaches to the study of chemical equilibria in these media [19].
In this contribution, a model of synthetic saliva (SALMO, SALiva MOdel) is proposed for use as a standard medium in in vitro equilibrium and speciation studies of real saliva. In fact, though various artificial media simulating saliva have been proposed for many years and are still used in several fields (see, e.g., [9,[20][21][22] and references therein), to our knowledge no "reference" speciation models are available in the literature, hampering the use of these media in chemical speciation studies.
Synthetic Saliva Composition and Formulation
As is well known, real saliva has a very complex and variable composition, depending on several factors, so that its exact replication is almost impossible [8-10, 20-22, 31-33]. Nevertheless, from the point of view of chemical speciation studies, it is initially possible to neglect many constituents of lower interest (in this case!), such as, for example, proteins, enzymes, bacteria, and cellular material. In fact, any speciation study in this medium should start from the interactions of the element or compound under investigation with the main inorganic components of saliva, successively extending it to some organic ligands. Bearing this in mind, we analysed the most relevant literature findings on the composition of real and artificial saliva from the present time (November 2014) back to 1983 and 2001 ([8,9,[20][21][22][32][33][34] and references therein), when Lentner (in the Geigy Scientific Tables [8]) and Gal and coworkers [20], respectively, published two updated, comprehensive, and detailed revisions of previous contributions on the composition of real and artificial saliva. The Geigy tables [8] represent a "standard" and well-regarded reference in the medical and biological fields on the composition of many biological fluids, including saliva.
They report data on the composition of hundreds of saliva samples, including stimulated and unstimulated saliva, organic and inorganic components, and differences of sex, age, and smoking: it is a very comprehensive reference reporting several chemico-physical parameters. Analogously, the work by Gal et al. [20] is one of the most successful and accurate attempts at building a synthetic saliva. Also in this case, a large number of synthetic (about 60) and natural saliva compositions are taken into account and critically evaluated.
On the basis of the data reported in the above-cited literature ([8,9,[20][21][22][32][33][34] and references therein), we here propose a saliva model (SALMO), which is able to summarize the main interactions of the main inorganic components of saliva, as well as urea and amino acids. Its composition is reported in Table 1. During model development, higher weights were given to data related to stimulated saliva, since this situation is probably the most important in many cases where speciation studies are required (stimulated saliva is produced, e.g., during oral drug absorption [12], eating, and drinking). The given composition takes into account (with different weights) both unstimulated saliva and stimulated saliva (of different origin). The synthetic saliva according to SALMO can be prepared as reported in Table 2. Glycine can be used as representative of the amino acids. It is also worth mentioning that, considering the usual pH values of saliva, the carbonate and phosphate ligands have been considered in Tables 1 and 2 as hydrogen carbonate and hydrogen phosphate, respectively, and must be added in this form in the formulation.
Data Sources.
In a formulation like the one proposed here, containing thirteen components (fourteen if one also considers H+/OH−), it is immediately evident that the number of species that could be formed is considerable. The stability constants to be taken into account refer to the protonation equilibria of the ligands, the hydrolysis of the cations, all possible species between cations and anions (including weak complexes), amino acid species with both cations and anions (due to the presence of both aminic and carboxylic groups), and urea interactions. Moreover, it is also well known that, in multicomponent solutions, the formation of mixed (ternary or higher) species is possible and usually favoured [4,35], so that these species cannot be neglected in a correct speciation model. On this basis, a huge dataset of stability constants is necessary to build the model and, furthermore, they must be available at the effective ionic strength and temperature of the system under study. In this work, most of these data have been taken from the most common general stability constant databases [36][37][38][39][40][41] and, when possible, from some reviews and/or papers dedicated to specific ligands and/or cations, by this and other groups (e.g., [17,42] for glycine, [43][44][45][46] for phosphate, [47] for thiocyanate, [48,49] for fluoride, [50] for carbonate, [51,52] for urea, and [43,53,54] for sulphate; all considering references therein). Though most of the latter references were already taken into account in the above-cited databases, they were nevertheless consulted because they contain some more specific information such as, for example, the parameters for modelling the dependence of the stability constants of various species on ionic strength and/or temperature.
Expression of Results.
All hydrolysis, protonation, and complex formation constants reported in the paper are given according to the overall equilibrium

p M^(z+) + q M'^(z'+) + r L^(z-) + s L'^(z'-) + t H+ = (M)p(M')q(L)r(L')s(H)t^(pz+qz'-rz-sz'+t)    (1)

where the superscripts z+/z'+ and z-/z'- denote the charges of the cations and ligands, with their corresponding signs. The extra cations (M') and ligands (L') were included in the general equilibrium only to describe the formation of mixed species: in all other cases, q = s = 0. For simple species, when p = 0, (1) refers to the ligand protonation constants; a negative index t refers to the formation of hydroxo-complexes and, in particular, to the cation hydrolysis constants when also r = 0. Where not necessary, the charges of the various species are omitted for simplicity.
Unless otherwise specified, errors are expressed as ± standard deviation, and formation constants, concentrations, and ionic strengths are expressed in the molar concentration scale (c, mol L−1). Rigorously, this scale is temperature dependent and should not be used to express quantities at different temperatures. In those cases, temperature-independent concentration scales, such as the molal scale (m, mol (kg solvent)−1), should be preferred. Nevertheless, the molar scale is more common and "practical" and, over relatively small temperature ranges and ionic strength values, the errors associated with using the molar scale instead of the molal scale may be negligible [55]. A detailed description of the errors associated with the data reported in this paper, and of their reliability, is given in the next sections.
The SALMO Model: Main and Minor Species.
According to the data sources described in the previous paragraph, the speciation of SALMO is given by 93 species, listed in Table 3 together with the corresponding stability constants at T = 37 °C and I = 0.15 mol L−1. Due to the availability of many data at these temperature and ionic strength values (which approximate many physiological conditions, e.g., blood plasma [8]), they have been taken as the reference conditions. The same table also reports the parameters for the dependence of the stability constants on ionic strength and temperature, though this aspect will be discussed in the next paragraphs.
Looking at the species (and at their corresponding stability constants) reported in Table 3, a series of comments and clarifications is necessary. Of the 93 species reported, some (those we call the "main species", e.g., many protonation constants or some alkaline earth complexes) are more important than others (the "minor species") and better characterized (i.e., many stability constants, as well as other thermodynamic parameters, are reported in the literature under different conditions). In contrast, many "minor species" have been less investigated or, in the worst cases, never reported, though it is reasonable that they may be formed in systems as complex as these. We refer, for example, to the formation of some mixed MM'LH or MLL'H species.
In fact, according to Beck and Nagypál [35], in a ternary system (A, B, C), if A forms binary complexes with both B and C (i.e., AB2 and AC2), the formation of the ABC species is possible and statistically favored, since the probabilities of formation of AB2, AC2, and ABC are 0.25, 0.25, and 0.5, respectively. Briefly, for the generic equilibrium

A + i B + j C = AB_iC_j    (2)

the probability of formation of the mixed species is given by

P(AB_iC_j) = (i + j)! / (i! j! 2^(i+j))

A more accurate approach for the calculation of the statistical stability of mixed species takes into account the specificity of the chemical interactions between the various components [4]. In the above-described ternary system, the statistical value of the formation constant relative to equilibrium (2) (i.e., (2) with i = j = 1) can be estimated knowing the stepwise formation constants of the simple species:

log β_stat(AB_iC_j) = log[(i + j)!/(i! j!)] + (i/(i + j)) log β(AB_(i+j)) + (j/(i + j)) log β(AC_(i+j))

The stability constant of a mixed species can, therefore, either be estimated statistically,

log β(AB_iC_j) = log β_stat + Δlog β    (6)

or be experimentally determined once the stability of the corresponding simple species is known. In this case, (6) may be rearranged to

Δlog β = log β_exp − log β_stat

The same approach could also be adopted for the estimation of thermodynamic formation parameters other than stability constants (e.g., formation enthalpy or entropy changes) [56].
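A small numerical illustration of the statistical estimate reconstructed above, assuming the binomial form of log β_stat: for the ABC species (i = j = 1) the estimate reduces to log 2 plus the mean of the parent log β values. The input constants are illustrative, not values from Table 3.

```python
import math

def log_beta_stat(i, j, log_beta_ABn, log_beta_ACn):
    """Statistical estimate for AB_iC_j from the parent AB_(i+j) and
    AC_(i+j) constants, assuming the binomial form given above."""
    n = i + j
    statistical = math.log10(math.factorial(n) /
                             (math.factorial(i) * math.factorial(j)))
    return statistical + (i / n) * log_beta_ABn + (j / n) * log_beta_ACn

# ABC (i = j = 1) with illustrative parent constants:
lb_stat = log_beta_stat(1, 1, log_beta_ABn=6.0, log_beta_ACn=4.0)
print(round(lb_stat, 3))                  # log 2 + (6 + 4)/2 = 5.301

# Extra stability of an experimentally determined mixed species:
log_beta_exp = 5.8
print(round(log_beta_exp - lb_stat, 3))   # ~0.5 -> formation favoured
```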
Higher log β_exp values than the corresponding log β_stat values indicate that the formation of the mixed species is thermodynamically favored, and they are a numerical index of the extra stability of mixed species with respect to simple ones. This extra stability has been observed for several systems, providing evidence of the formation of various mixed species (as has been assumed in this paper), which are able to affect the speciation and the thermodynamic properties of the systems where they are formed [56][57][58][59][60][61][62].
That is why some mixed species, determined in this way, have been reported in Table 3 and taken into account in the model (values for other mixed species had already been determined experimentally and were available in the literature, e.g., some glycinate [42] or phosphate [46] complexes). Their formation percentages may generally be low but, depending on changes in saliva conditions (e.g., pH, ionic strength, temperature, and the presence of other substances), some of these "minor species" may be formed in non-negligible amounts.
Dependence of the Stability Constants on Ionic Strength
and Temperature. As already discussed, saliva conditions may vary, so that the use of the stability constant values reported in Table 3 at temperatures and ionic strengths other than the reference ones (i.e., I = 0.15 mol L−1 and T = 37 °C) may represent a further source of error in the evaluation of saliva speciation. Fortunately, these errors may be significantly reduced by calculating these constants at the correct ionic strength and temperature values, applying some common and well-known models and equations.
In this work, the dependence of the various formation constants on ionic strength has been taken into account by an Extended Debye-Hückel (EDH) type equation of the form

log β(I) = log β(I_ref) − z* · DH + C · (I − I_ref)

where C is an empirical parameter (reported in Table 3 for every species), DH is the Debye-Hückel term

DH = 0.51 [√I / (1 + 1.5 √I) − √I_ref / (1 + 1.5 √I_ref)]

and z* = Σ(charges)², reactants − Σ(charges)², products. The dependence on temperature has been modelled by a van't Hoff-type equation:

log β(T) = log β(T_ref) + A (1/T_ref − 1/T)    (11)

where T is the desired temperature in Kelvin (t/°C + 273.15). As written in (11), the A parameter (reported in Table 3) directly takes into account the contribution of the formation enthalpy changes, the universal gas constant, and the conversion from natural to decimal logarithms. Parameters reported in Table 3 are generally valid at I ≤ 0.5 mol L−1 and in the temperature range 25 ≤ T/°C ≤ 40. By using (11), the SALMO stability constant datasets at the reference ionic strength were calculated at four different temperatures and are shown in Table 4. Table 3 (and Table 4) gives a clear indication of the complex network of interactions occurring between the different saliva components. As a direct consequence of these interactions, the free concentration of the saliva components is never equivalent to the analytical (total) one. SALMO, designed to be employed during speciation studies in saliva, can also be used for the calculation of the free concentrations of the different components of a saliva of given composition. For example, considering the analytical concentrations of the components of the synthetic saliva reported in Table 1, the free concentrations of its components at two temperatures and two pH values have been calculated by SALMO (using common speciation programs [63,64]). These results are summarized in Table 5 and demonstrate what was already stated: all the internal ionic interactions between the saliva components cannot be neglected, because they lower the concentration of free ions. For example, at T = 37 °C, more than 40% of Mg2+ and Ca2+ is complexed, while urea exists almost entirely in free form. It is also worth mentioning that, instead of giving free phosphate and carbonate concentrations, we preferred to report their monoprotonated species as the reference, since they are more relevant for the speciation of saliva and other natural and biological fluids [8].
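The two corrections can be combined in a short routine. The sketch below assumes the EDH and van't Hoff-type forms reconstructed above, with the usual 0.51 and 1.5 Debye-Hückel coefficients; the species parameters passed in the example are hypothetical, not those of Table 3.

```python
import math

def dh_term(I, I_ref=0.15):
    """Debye-Hueckel term of the EDH equation sketched above,
    using the common 0.51 / 1.5 coefficients."""
    f = lambda x: math.sqrt(x) / (1.0 + 1.5 * math.sqrt(x))
    return 0.51 * (f(I) - f(I_ref))

def log_beta_at(log_beta_ref, z_star, C, I, A=0.0,
                T=310.15, T_ref=310.15, I_ref=0.15):
    """Move a formation constant from (I_ref, T_ref) to (I, T).
    z_star = sum(charges^2, reactants) - sum(charges^2, products);
    C and A play the role of the species-specific Table 3 parameters."""
    lb = log_beta_ref - z_star * dh_term(I, I_ref) + C * (I - I_ref)
    lb += A * (1.0 / T_ref - 1.0 / T)   # van't Hoff-type T correction
    return lb

# Hypothetical (2-, 1+) ion pair, z* = (4 + 1) - 1 = 4, at 25 degrees C:
print(log_beta_at(log_beta_ref=2.0, z_star=4, C=0.1, I=0.30,
                  A=800.0, T=298.15))
```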
The Single Saliva Salt Model, S 3 M
All the considerations just presented on the advantages of using a synthetic medium cannot lead the reader astray from the fact that the speciation model proposed, as usually occurs for many other models of multicomponent systems, is "quite complex". Performing the speciation study of an "external" component in this (or another) medium would require the evaluation of all its relevant interactions with all the saliva components, with the possibility of forming many species, whose stability constants should be determined and then added to the model. As a consequence, if we take these interactions into account when SALMO is used in speciation studies of saliva, along with the other species formed by other components, a considerable number of species needs to be considered.
To bypass this problem and simplify equilibrium calculations, a simpler approach is proposed here, based on the Single Salt Approximation adopted for synthetic seawater [19] and successfully tested in several speciation studies (e.g., [23][24][25][26][27][28][29][30]). In order to take into account all the interactions among the major components of saliva, we considered the inorganic components of saliva given in Table 1 (i.e., all components except amino acids and urea) as a single 1 : 1 salt (MX), whose concentration is c_MX = (1/2)Σc_i (c_i = analytical concentration of all the ions) and whose ion charge is z = ±(I/c_MX)^(1/2) = ±1.163. The main characteristics of the Single Saliva Salt (MX) are summarized in Table 6. The use of the Single Saliva Salt allowed us to build a speciation model for synthetic saliva that is much simpler than SALMO but equally reliable. In fact, the Single Saliva Salt Model (S3M) considerably reduces the complexity of the systems to be investigated, since only four species deriving from internal ionic medium interactions must be considered. These species represent the self-association of the salt, the hydrolysis of the cation M, and the protonation and deprotonation of the anion X (coherently with the fact that HPO4 2− and HCO3 − were used as reference components and that they may be deprotonated). The overall stability constants relative to the formation of the S3M species are reported in Table 7 at the reference ionic strength and temperature, together with their dependence parameters (as was done for SALMO). Further details on the procedure adopted to calculate these parameters may be found, for example, in [19].
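Given any ionic composition, the single-salt parameters follow directly from the two definitions above. The sketch below uses placeholder concentrations (the actual Table 1 values are not reproduced here); with the real SALMO composition the same formulas yield z = ±1.163.

```python
# Each entry: (analytical concentration in mol/L, ionic charge).
# Values are illustrative placeholders, NOT the Table 1 composition.
ions = {
    "Na+":     (0.030, +1),
    "K+":      (0.020, +1),
    "Ca2+":    (0.001, +2),
    "Mg2+":    (0.0002, +2),
    "Cl-":     (0.030, -1),
    "HCO3-":   (0.012, -1),
    "HPO4_2-": (0.004, -2),
}

c_MX = 0.5 * sum(c for c, _ in ions.values())       # single-salt concentration
I = 0.5 * sum(c * z**2 for c, z in ions.values())   # ionic strength
z_MX = (I / c_MX) ** 0.5                            # mean ion charge

print(f"c_MX = {c_MX:.4f} mol/L, I = {I:.4f} mol/L, z = +/-{z_MX:.3f}")
# With the actual SALMO composition these formulas give z = +/-1.163.
```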
By means of S 3 M, all the internal interactions between the inorganic components of synthetic saliva are taken into account considering just four equilibria. As a consequence, the speciation of "external" components in saliva can be studied just by considering its interactions with the "M" and "X" ions of saliva (reducing the complexity to "just" a ternary one metal + one ligand + one component system).
The importance of the various MX species according to S3M is better appreciated by looking at Figures 1 and 2, where two speciation diagrams are reported for the M1.163+ and X1.163− species, respectively. As can be noted, in the pH range 3 ≤ pH ≤ 9 the M(OH) and H−1X species can be neglected. In the pH range of interest, ~12% of the MX salt is self-associated, whilst the rest is present as free X and M. Only below pH ~5 does the protonation of the ligand become significant.
The Reliability of the Models
Both SALMO and S3M, as well as the synthetic saliva composition proposed, are "models". Models are built to describe and/or interpret some observed phenomena but, by their intrinsic nature, they are "approximations": a "good model" should be a good compromise between simplicity of use and reliability of the results obtained. Also in the case of the models proposed here, some aspects must be discussed in more detail.
Purposes of the Models.
We have already discussed the composition and formulation of the proposed synthetic saliva. As already stated, several other compounds could have been included in the formulation, other concentrations could have been used, or other modifications would have been possible. As intended, this formulation represents the "starting point" for specific studies, that is, those aimed at understanding the thermodynamic behavior and the speciation of components "of" and "in" the saliva system. From this point of view, more attention should be (and has been) given to the chemical and physical aspects of the saliva system (e.g., ionic strength, temperature, and ionic composition) rather than to others that are less important for the proposed aims (e.g., the presence of enzymes and "living material"). A similar consideration can be made for SALMO. Its purpose is to describe the speciation of a complex system like saliva and to take into account the most relevant interactions in this medium, but what does "relevance" mean? Of course, of the 93 species reported, many could have been neglected, appreciably reducing this number (to about 60-70 species). Nevertheless, though the formation percentage of a single minor species may be "not significant", all species together contribute to give a comprehensive picture of what really happens in saliva. This is also the reason why some species (some of them mixed) never previously reported in the literature have been estimated in this work. Furthermore, the possibility that these species are really formed, as well as their stability, has already been discussed above.
A last consideration is necessary for S3M. Its peculiarity and simplicity should not obscure the fact that this interaction model is directly derived from its parent model SALMO, maintaining all the characteristics of a comprehensive speciation model.
Errors Associated to the Stability Constants and
Influence on "Real" Speciation. Both SALMO and S3M are thermodynamic models, based on stability constants and on parameters for their dependence on ionic strength and temperature. As already stated, some of these values have never been determined experimentally, or are reported in the literature at conditions other than those of interest, and have been estimated by taking into account well-known facts such as, for example, (a) the similarities in the thermodynamic behaviour of similar species (e.g., concerning the dependence on ionic strength and temperature, see [65,66]) and/or (b) well-defined trends in the stability of complexes of homogeneous ligand classes (see, e.g., [67][68][69][70]). As a direct consequence, we associated a wide range of errors (±0.01-0.1 standard deviation, see Table 3) with the stability constants reported in this work. This width arises from the differences between well-known stability constants and ionic strength and temperature dependence parameters (with standard deviations lower than 0.01, e.g., the ionic product of water and some hydrolysis and protonation constants) and some estimated values (with higher ones). Isolating this concept from the context of this work, from a purely thermodynamic point of view, errors like those reported here for a single stability constant appear to be quite high. Nevertheless, during speciation studies, especially for very complex multicomponent systems, the critical aspect is the propagation of these errors onto the "real" speciation of a given system. ES4ECI [63], the program we used to calculate the concentration of the different species (as well as the free components reported in Table 5), is able to propagate the errors of the stability constants (included in the input) onto the formation percentages of the different species. As can be noted in Table 5, such (apparently) high standard deviations in the stability constants used result in an acceptable uncertainty in the formation percentages of the species (below 3% for the free components in Table 5). For practical uses and applications to real systems, this order of uncertainty is common and is generally regarded as "low", supporting our assumptions on the reliability of the proposed models.
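The propagation of stability constant errors onto formation percentages can be illustrated on a minimal single-equilibrium case by Monte Carlo sampling (the full 93-species calculation is what ES4ECI performs). The equilibrium, concentrations, and the ±0.05 standard deviation below are illustrative choices within the stated ±0.01-0.1 range.

```python
import numpy as np

rng = np.random.default_rng(1)

def bound_fraction(log_beta, L_free=1e-3):
    """Fraction of M present as ML for a single M + L = ML equilibrium,
    with the free ligand concentration held fixed for simplicity."""
    beta = 10.0 ** log_beta
    return beta * L_free / (1.0 + beta * L_free)

# Propagate a +/-0.05 standard deviation on log_beta by Monte Carlo.
samples = bound_fraction(rng.normal(loc=3.0, scale=0.05, size=100_000))
print(f"ML formation: {100 * samples.mean():.1f} "
      f"+/- {100 * samples.std():.1f} %")
```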
Literature Comparisons
As stated above, saliva composition is very variable. As a consequence, we have already pointed out that many "different" artificial saliva models of very "different" composition have been proposed over the years, for many "different" purposes. Depending on the aim of the studies performed, single components or classes of components may be included in or excluded from the formulation, as done, for example, by Björklund et al. [21], who considered vitamins, enzymes, and glycoproteins (mainly mucins) in the artificial saliva they prepared for studying the influence of different carbon sources on bacterial growth. To our knowledge, no artificial media have ever been prepared, nor complex formation models proposed, specifically for speciation studies of saliva. The closest attempt is represented, once again, by the comprehensive review by Gal et al. [20]: in that work, some chemico-physical aspects were considered, such as the buffering effect of saliva, its ionic strength, and its pH, as affected by the presence of selected ions (Ca2+, SCN−, HCO3−, and HPO4 2−), which lead to the formation of selected species. Some acid-base titrations of saliva were also simulated, and the free concentrations of some species were calculated using literature stability constants. From the comparison of the data reported by Gal et al. and the results obtained in this work, it is still possible to state that an excellent agreement exists, at least in the order of magnitude of the free concentrations of some components (in mol L−1) at pH 6. The discrepancies can be ascribed to differences in the saliva composition but, mainly, in the number of species and in the stability constants considered (taken from the literature at T = 25 °C and I = 0 mol L−1). In fact, the same authors state in their work that only species whose thermodynamic constants were known were taken into account. This last consideration strengthens the necessity of a more comprehensive and dedicated speciation model for saliva.
Final Remarks
The results reported in this paper can be summarized as follows: (a) a formulation of synthetic saliva specifically aimed at thermodynamic and speciation studies is reported here for the first time, based on numerous literature findings on the composition of real and synthetic saliva in various conditions; (b) a comprehensive complex formation model of this saliva, based on the formation of 93 species, has been proposed for the modelling of its speciation at different ionic strengths and temperatures; (c) another, simpler model, based on the "Single Salt Approximation", is also proposed, in which the inorganic components of saliva are treated as a single 1 : 1 salt, reducing the complexity of the saliva system; (d) the data reported have been critically analysed in terms of the reliability of the results obtained and their applicability to real systems. | 2018-04-03T01:12:15.156Z | 2015-02-04T00:00:00.000 | {
"year": 2015,
"sha1": "434dd4df08d822b69e1e0df279776696b93d4f63",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bca/2015/267985.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bb89057e64b7d2da9cc03b6bc697cc03802dab93",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
219905095 | pes2o/s2orc | v3-fos-license | Emergent Management of a Tracheoinnominate Fistula in the Community Hospital Setting
Tracheoinnominate fistula is a rare but highly lethal complication of tracheostomy. Early recognition and intervention are key to patient survival. A 63-year-old woman had undergone tracheostomy for respiratory failure secondary to disseminated histoplasmosis. She presented to the community hospital intensive care unit from a long-term acute care facility for presumed gastrointestinal bleeding. A tracheoinnominate fistula was suspected when there was bleeding around the tracheostomy. The patient underwent a median sternotomy with innominate artery ligation. This article discusses the presentation, evaluation, and emergent management of this lethal complication of tracheostomies. Patient survival is dependent on high clinical suspicion, rapid diagnosis, and emergent surgical management.
Introduction
Tracheoinnominate fistula (TIF) is a rare (0.1%-1%) but life-threatening complication after tracheostomy. The clinician caring for a patient with a tracheostomy must have a high suspicion for TIF when bleeding occurs three days to six weeks after tracheostomy. TIF accounts for 10% of all bleeding associated with tracheostomies. Fifty percent of patients present with a sentinel bleed, described as minor bleeding that spontaneously stops. The peak incidence is between seven and fourteen days after the procedure. Risk factors for developing TIFs include chronic steroid use, recent tracheostomy, an overinflated cuff, a high-riding innominate artery, excessive movement of the tracheostomy, or low positioning of the tracheostomy.
Case Presentation
A 63-year-old female with a history of renal transplantation and chronic immunosuppression was admitted to a tertiary care center for respiratory symptoms. During her hospitalization, she progressed to respiratory failure and was found to have disseminated histoplasmosis. After failed attempts at extubation, the patient underwent an open tracheostomy. She was discharged to a long-term acute care facility (LTAC).
The patient developed bright red blood per rectum during her stay at the LTAC without evidence of bleeding at the tracheostomy. Her hemoglobin level was 4.5 mg/dL; a blood transfusion was initiated at the facility and she was transferred to the local community hospital's intensive care unit (ICU). During routine morning patient care, the nursing staff noticed minor non-pulsatile bleeding around the tracheostomy and immediately alerted the ICU physician team. On initial inspection, there was no noted bleeding of the exterior surfaces while the patient was positioned with the head of the bed at 30 degrees. Bleeding recurred when the patient was placed back into the supine position, prompting urgent surgical consultation.
The surgical team and the ICU team proceeded to evaluate for bleeding sources. A flexible fiberoptic scope was used to investigate the upper airway, and no bleeding was seen from her nasopharynx or oropharynx. The tracheostomy was also evaluated, and no bleeding was seen distal or proximal to the tracheostomy site while the tracheostomy cuff was deflated. Due to the recurrence and unidentifiable source of the bleeding, surgical evaluation and management were needed. While the patient was being prepped, pulsatile bleeding was observed from the tracheostomy. The patient started to develop hemodynamic instability, and surgical management was deemed necessary. Interventional radiology was not considered at this time because the service was not immediately available on site. The cuff was hyperinflated, which stopped the bleeding, and the patient was emergently taken to the operating room with the general surgery team. The thoracic surgeon on call was notified and would meet the team in the operating room.
The patient was prepped for surgery. A median sternotomy was used to gain access to the great vessels. The pericardium was opened to provide additional visualization and mobilization of the great vessels. The innominate vein was first mobilized, providing exposure of the innominate artery. The innominate artery's course was traced and the fistula palpated on the posterior wall of the artery. Vascular clamps were applied proximal and distal to the fistula, ensuring that the thyrocervical trunk remained intact. The proximal end of the vessel was ligated using a vascular stapler. The distal end was oversewn in two layers using a 4-0 prolene suture. Thymic tissue was mobilized and placed over the tracheal fistula. This method provided quick control and coverage of the defect. The sternotomy was closed and resuscitation continued until the transport team arrived in the operating room to take the patient to the tertiary care center.
Discussion
TIFs are highly lethal complications associated with tracheostomy. Early bleeding, within minutes to hours after performing a tracheostomy, is often due to poor hemostasis during the initial tracheostomy or to a coagulopathy. Tracheostomy bleeding occurring three days to six weeks into the postoperative period should be assumed to be a TIF until proven otherwise [1]. Fistulas to surrounding vessels (the common carotid, inferior thyroid, and thyroid ima arteries, the aortic arch, and the innominate vein) have been reported [2]. Risk factors for developing a TIF include steroid use, tracheal infection, tracheostomy below the third tracheal ring, a high-riding innominate artery, pressure necrosis from an overinflated tracheal cuff, malposition of the tracheal cuff, a poorly sized tracheal appliance, or excess movement of the appliance [2]. Pathologically, the fistula evolves from superficial tracheitis to necrosis, loss of cartilage, and subsequent fistulization [3].
Controlling hemorrhage
Quick bedside management of the bleeding should include hyperinflating the tracheal cuff balloon, which achieves temporary control in 85% of cases and is the first maneuver that should be attempted [1,2,[4][5][6][7]. Additional methods of bleeding control involve withdrawal of the tracheostomy tube with advancement of an endotracheal tube past the bleeding, to prevent blood from collecting in the lungs, and digital compression of the artery against the manubrium by entering the pretracheal space, known as the Utley maneuver [1,7,8]. Bronchoscopy is used to evaluate for the presence of the fistula, and imaging studies such as conventional angiography or CT angiography should be used in stable patients. Imaging shows a blush into the trachea and can be used to diagnose a TIF [7]. Patients should additionally have adequate peripheral access to allow effective resuscitation. The unstable patient may need blood products and should be started on massive transfusion, crystalloids, or vasopressors until the TIF can be definitively managed.
Surgical management
Communication between the anesthesia and surgical team is crucial while preparing the patient for surgical intervention. The anesthesia team evaluates for bleeding in the airway while the tracheostomy cuff is deflated and the appliance is withdrawn. An endotracheal tube is advanced under visual guidance past the suspected source of fistulization. This placement helps prevent blood from entering the distal airway. Additional hemodynamic monitoring devices and vascular access lines are commonly placed.
A median sternotomy is used to gain access to the great vessels. The innominate artery is the first branch of the aorta and gives rise to the thyrocervical trunk. The innominate vein is mobilized, and the pericardium may need to be opened to obtain sufficient exposure. The innominate artery is then followed distally until it crosses over the trachea. Careful blunt dissection will reveal the fistula posterior to the innominate artery.
Once the TIF is identified, vascular clamps are placed to obtain control of the innominate artery ( Figure 1). The proximal end of the innominate artery is ligated with a vascular stapler. The distal end of the innominate artery or the thyrocervical stump can be stapled or suture ligated in an oversewn fashion. When the artery is ligated, there is a sharp sudden drop in the right arm arterial pressure.
FIGURE 1: Clamped innominate artery with tracheal fistula exposed
The trachea may be primarily repaired with PDS suture after debridement of devitalized tissue, or a bovine pericardial patch may be necessary. A less well documented but viable option, a thymic flap, can be used to buttress the tracheal opening and the innominate artery stumps to prevent fistulization. Other options include using omentum or the sternocleidomastoid muscle.
The endotracheal tube is left in place, the sternotomy is closed, and the patient is maintained in the ICU for continued neurovascular monitoring.
Intra-operative and postoperative considerations
Arterial lines placed in the right arm will demonstrate decreased perfusion when the innominate artery is ligated. It is vital that the placement of hemodynamic monitoring devices take this effect into consideration. Checking the stump pressure of the distally ligated innominate artery will provide an early indication of retrograde flow to the right upper extremity and cerebral hemisphere.
Postoperatively, the patient should be monitored for ongoing neurovascular changes with special attention to the right upper extremity and right brain hemisphere perfusion. Perfusion to these areas is dependent on retrograde flow from the left brain hemisphere through a patent Circle of Willis.
Long-term outcomes show an overall poor prognosis for patients who develop a TIF, and for survivors the cause of death is often their other comorbidities. Innominate artery ligation has shown up to a 10% incidence of neurological deficit, and patients should be monitored for these changes, since perfusion to the brain and right upper extremity relies on retrograde flow through the Circle of Willis.
Conclusions
TIFs are highly lethal if not addressed in a timely manner, and physicians caring for a patient with a tracheostomy should have a high suspicion for a TIF when bleeding occurs beyond the immediate postoperative period (three days to six weeks after tracheostomy). Successful management includes immediate bedside maneuvers, such as hyperinflating the tracheostomy balloon or the Utley maneuver, as well as utilizing interventional radiology and endovascular treatment for a stable patient, or surgical management for an unstable patient or when minimally invasive procedures are not immediately available. Communication between the surgical, anesthesia, and critical care teams is vital to isolate the fistula while providing ongoing hemodynamic and neurological monitoring. | 2020-06-04T09:12:14.996Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "8fe75a658dee2d4d5650e7b0887781a7e09bbd8c",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/29644-emergent-management-of-a-tracheoinnominate-fistula-in-the-community-hospital-setting.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43ec126bbaefa1b02ae207f242c8db323b36e307",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
49671486 | pes2o/s2orc | v3-fos-license | A Requirement for Zic2 in the Regulation of Nodal Expression Underlies the Establishment of Left-Sided Identity
ZIC2 mutation is known to cause holoprosencephaly (HPE). A subset of ZIC2 HPE probands harbour cardiovascular and visceral anomalies suggestive of laterality defects. 3D-imaging of novel mouse Zic2 mutants uncovers, in addition to HPE, laterality defects in lungs, heart, vasculature and viscera. A strong bias towards right isomerism indicates a failure to establish left identity in the lateral plate mesoderm (LPM), a phenotype that cannot be explained simply by the defective ciliogenesis previously noted in Zic2 mutants. Gene expression analysis showed that the left-determining NODAL-dependent signalling cascade fails to be activated in the LPM, and that the expression of Nodal at the node, which normally triggers this event, is itself defective in these embryos. Analysis of ChIP-seq data, in vitro transcriptional assays and mutagenesis reveals a requirement for a low-affinity ZIC2 binding site for the activation of the Nodal enhancer HBE, which is normally active in node precursor cells. These data show that ZIC2 is required for correct Nodal expression at the node and suggest a model in which ZIC2 acts at different levels to establish LR asymmetry, promoting both the production of the signal that induces left side identity and the morphogenesis of the cilia that bias its distribution.
primitive streak formation as well as for that of the node and of node-derived mesendoderm 7 . An epistatic genetic interaction exists between Nodal and Zic2 in forebrain development 8 .
Laterality of the heart and viscera is also determined by the NODAL pathway. Bilateral symmetry is broken through generation of a leftward fluid flow by cilia within the node 9,10 . Nodal is initially expressed bilaterally at the node within perinodal cells (at E7.5 in mouse), but in response to the nodal flow, this expression is reinforced in left perinodal cells, thus becoming asymmetric 11,12 . NODAL produced in perinodal cells is required to trigger its own expression in the left lateral plate mesoderm (LPM) 13 , where it induces the expression of Lefty2 14 , a NODAL antagonist, and Pitx2c 15,16 . Pitx2c expression in the left LPM determines left-sided identity in mesoderm derivatives. In the absence of Pitx2c expression the default pattern is that of two morphologically right sides, a condition known as right-isomerism. Ectopic activation of Pitx2c in the right lateral mesoderm only results in situs inversus whereas bilateral expression of Pitx2c results in two morphologically left sides, known as left-isomerism. These phenotypes can be distinguished by examination of lung lobulation, atrial morphology and other characters 17 . The establishment of Left-Right (LR) polarity is essential for key aspects of cardiovascular, thoracic and abdominal development. It determines the lateralised identity of atria and lungs, influences looping morphogenesis of the linear heart tube and the gut, directs asymmetric remodelling of the vascular system and determines the positioning of visceral organs such as stomach and pancreas [17][18][19][20][21] . Mutations mapping to NODAL pathway genes are associated with heterotaxy 22,23 , a condition characterized by discordant LR arrangement of internal organs which accounts for approximately 3% of all congenital heart disease 22 .
Cardiac malformations have been noted amongst a number of extra-craniofacial anomalies reported in HPE patients with ZIC2 mutation, occurring in 9-14% of probands 2,3 , but have not previously been described in detail. We hypothesise these may result from an underlying laterality defect. The Zic2 ku mutant shows randomised direction of heart tube looping during cardiac morphogenesis 24 , supporting this hypothesis. These embryos exhibit mid-gestation lethality, preventing a more detailed study of laterality defects 24 . Nodal cilia are shorter and morphologically abnormal in the Zic2 ku mutant suggesting that Zic2 may function during cilia morphogenesis 24 . Expression of Nodal at the node and of downstream genes in the LPM is reduced in the Zic2 ku mutant 24 , but ectopic right-sided or bilateral expression, which is present in iv embryos with nodal cilia defects 12,25 , is not observed.
Zic2 is also expressed earlier in development and is present in both embryonic (ESC) and epiblast (EpiSC) stem cells. ZIC2 is bound to the Nodal locus in both cell types 26,27. EpiSCs have been shown to resemble cells from the anterior primitive streak 28, from which the node is derived. ZIC2 appears to play a central role in the transition from the naive pluripotent state of ESCs to the primed pluripotent state of EpiSCs 26,29 and has been proposed to act as a "pioneer factor" that functions to seed enhancers, recruiting additional transcription factors in order to prime loci for transcriptional activation at a later stage 26. The MBD3-NuRD chromatin remodelling complex is associated with ZIC2 at a subset of binding sites in ESCs 27; this complex is known to play a role in fine-tuning the dynamic expression of bivalent enhancers during development 30, consistent with a role for ZIC2 in such a priming process.
Here, we examine clinical data and show that the cardiovascular, pulmonary and visceral phenotypes of ZIC2 HPE patients are consistent with an underlying laterality defect. We identify a novel Zic2 ENU mouse mutant in a screen for cardiovascular laterality phenotypes. We use 3D imaging to characterise in detail the phenotype of this mutant and that of a series of Zic2 alleles generated by TALEN gene editing 31 , revealing a complex set of cardiovascular, thoracic and abdominal malformations, in addition to the previously-described holoprosencephaly and heart tube looping phenotypes. Analysis of the phenotype reveals a strong bias towards right isomerism, indicative of defective left-sided identity specification during development. This is supported by gene expression data revealing weak or absent Nodal node expression and loss of downstream gene expression in the LPM. We use in vitro assays to demonstrate that ZIC2 can activate transcription from Nodal enhancer reporters. Our data suggest that ZIC2 acts upstream of Nodal expression at the node, possibly to prime the gene locus for the subsequent activation of its expression there. Together with a previous study that identified a role for ZIC2 in ciliogenesis, our results suggest a model in which ZIC2 acts at multiple levels during the establishment of laterality, upstream of genes and events that are critical for the process to take place.
Cardiovascular and pulmonary malformations in human ZIC2 HPE cases suggest a laterality defect.
Extra-craniofacial defects, including cardiovascular, visceral and urino-genital anomalies have previously been noted in patients with holoprosencephaly (HPE) carrying ZIC2 mutations 2,3 , but the details of these malformations have not previously been documented. We examined clinical reports derived from a previously published European series consisting of 645 HPE probands 3 , including both liveborns and medically terminated pregnancies. A total of 67 probands in this series have ZIC2 mutation, of which 8 (12%) exhibit cardiovascular or visceral anomalies suggesting a putative laterality defect ( Table 1). The affected probands have mutations including three single amino acid substitutions (p.(His156Tyr); p.(Gln36Pro); p.(Phe314Cys); all affect highly conserved residues and are not found in 61,000 control exomes from the ExAC Project), alanine tract deletions and duplications, and larger chromosomal aberrations (Table 1; Fig. S1a,b).
Proband 7 has the most severe alobar form of HPE and also exhibits the most pronounced laterality defect (Table 1). This proband exhibits a loss of normal asymmetric thoracic anatomy, indicated by a duplicated superior vena cava (SVC) and a bilobulated lung. The right and left brachiocephalic veins are normally fused in man to form a single SVC which enters only the right atrium. Duplicated SVC therefore indicates bilateral connection to both atria, indicative of the laterality defect right atrial isomerism. Pulmonary morphology is also normally asymmetric in man such that, while the right lung has three lobes the left has only two. This patient has a symmetrical bilateral left-sided anatomy, suggesting left pulmonary isomerism. Thus, there is discordance between cardiovascular and pulmonary situs indicating situs ambiguus, a common phenomenon in heterotaxy. Proband 4 also has abnormal lung lobulation indicating a laterality defect, but unfortunately the attending clinician did not record the details of this malformation and we are unable to assign situs. The same proband has additional features indicative of a laterality defect including a single umbilical artery (also seen in Probands 2 and 5) and a common mesentery. Proband 1 has spleen hypoplasia suggestive of right isomerism (which is also known as asplenia). This proband also has both adrenal and renal hypoplasia, features suggestive of abnormal abdominal situs. Pulmonary hypoplasia in this proband may indicate abnormal pulmonary situs.
Ventricular septal defect (VSD) is the most common cardiovascular anomaly observed (5 of 67 ZIC2 probands). This is associated with hypoplastic ascending aorta in two probands, while another exhibits Tetralogy of Fallot. VSD is frequent in mouse models with a laterality defect 32 but is not diagnostic because it is also commonly associated with other genetic conditions, such as Chromosome 22 deletions 33,34 .
In summary, this analysis indicates that HPE is seen in ZIC2 probands together with cardiovascular, pulmonary and visceral malformations suggestive of an underlying laterality defect.
Identification of the Zic2 iso mutant. We performed a recessive ENU mouse mutagenesis screen in which MRI screening was used to identify novel mutants exhibiting cardiovascular anomalies at E14.5 35,36 . This resulted in isolation of the iso (isomeric) line. Mapping and sequencing revealed a stop-gain mutation (Y401X) in the Zic2 gene disrupting the fifth zinc finger domain of the protein ( Fig. S2; Table S1). The phenotype was validated through generation of a series of additional alleles, each with mutation targeted to the fifth zinc finger domain using TALEN-based gene editing 31 . Line Zic2 A8 encodes an identical protein to iso (Y401X), while Zic2 A5 , Zic2 A10 , Zic2 A17 and Zic2 A19 have short DNA deletions leading to a frameshift followed by premature termination (Zic2 A5 , Zic2 A10 , Zic2 A17 ) or an internal deletion (Zic2 A19 ). All mutations affect the fifth zinc finger domain (Fig. S2) and thus differ from the previously published Zic2 ku mutant in which the fourth zinc finger is disrupted (C370S) 37 . iso fails to complement either Zic2 A8 or Zic2 A5 (Table 2).
Zic2 mutants exhibit holoprosencephaly. External examination revealed that all Zic2 mutant embryos (n = 37) had neural tube defects including exencephaly, spina bifida and curly tail (Table 2; Figs 1 and 2a,b); spina bifida was also present in 2/27 heterozygotes, which also frequently show curly tail (Fig. S3 31), while wild-type embryos (n = 22) had no anomalies. We employed µCT and MRI imaging to analyse the phenotype more closely. All embryos examined had holoprosencephaly (Table 2; Fig. 1). A range of severity was seen, but the majority of embryos had the most severe alobar form in which the two hemispheres are completely fused (27 of 33; Fig. 1b2). The remaining 6 had the semilobar form, indicating partial hemisphere fusion. No embryos were observed to have the mild lobar form of HPE. Six of 33 embryos lacked eyes, cyclopia was seen in 17 of 33 (Fig. 1b3), and hypotelorism in 10 of 33. These neural tube phenotypes are consistent with those of the previously described Zic2 Ku mutant 4,37, suggesting that Zic2 iso may also carry a loss of function mutation.
Zic2 iso mutants have extensive cardiovascular, thoracic and abdominal malformations. CT imaging also revealed extensive defects within the cardiovascular system, thorax and abdomen of Zic2 iso and TALEN mutants ( Fig. 2; Table 2), many of which have not been previously described for Zic2. We observed severe cardiovascular malformations in the majority of embryos. Abnormal ventricular topology was seen in half of all embryos examined (16 of 32; Table 2) such that the morphologically right ventricle was positioned on the left side of the embryo (Fig. 2c,d), while it was normal in the other 16. This 50-50 split indicates that the direction of heart tube looping was randomly assigned in these embryos, consistent with the phenotype previously described in younger Zic2 Ku embryos 24 . The majority of embryos had an ostium primum atrial or atrio-ventricular septal defect (19 of 30; Fig. 2e,f asterisk). 21 of 35 embryos exhibited bilateral systemic venous sinuses ( Fig. 2e-h), often with bilateral right-sided atrial appendages. Double outlet right ventricle was seen in 14 of 32 embryos examined (Fig. 2c,d) and ventricular septal defect was present in 23 of 33 embryos (Fig. 2c,d). Defects were observed in the vascular system at reduced penetrance relative to the cardiac defects, and were generally present in about a third of embryos. The inferior vena cava (IVC) was found to be aberrantly left-sided in 11 of 36 embryos (Fig. 2g-j), while 8 of 36 exhibited hepatic vein drainage directly into the atrium, bypassing the IVC (Fig. 2g,h). A right-sided aortic arch was observed in 11 of 37 embryos (Fig. 2k,l).
In mouse, the right lung is divided into four distinct lobes while the left consists of a single lobe (Fig. 2m). We observed bilateral multilobed lungs in 27 of 37 mutant embryos (Figs 1b4; 2n), while a single embryo was observed with a 1:1 lobed arrangement ( Table 2). All control and heterozygous embryos showed a normal 4:1 lobed arrangement. The stomach and pancreas are normally located on the left side of the body but were observed to be ectopically right sided in 11 of 37 and 7 of 28 mutant embryos respectively ( Fig. 2p; Table 2). The spleen was reduced or absent in 5 of 16 embryos and right-sided in another two ( Table 2).
Zic2 iso mutants have right isomerism. A random distribution in the direction of looping of the heart tube and gut among mutant embryos is indicative of a generic laterality defect, but does not provide information on the specific situs of affected individuals. This may be assigned based on careful examination of the anatomy of the lungs and atria, the positioning of visceral organs and the organisation of the vascular system, all of which show precise phenotypes that may be classified as right-isomerism, left-isomerism or situs inversus 17,38 . Situs inversus, in contrast (right-sided Pitx2c expression), would be expected to result in a four-lobed lung on the left side and a single-lobed lung on the right. Our data indicate that 73% of Zic2 embryos (27/37) exhibit right pulmonary isomerism, 3% (1/37) exhibit left isomerism, 24% (9/37) have the normal anatomy (situs solitus) and no embryos have reversed situs. This distribution is significantly different from a random distribution of the four phenotypes, and reveals a strong bias towards the phenotype of right isomerism (chi-squared test, p = 5.7 × 10−11). Similarly, bilateral systemic venous sinus and bilateral right atrial appendage are indicative of right atrial isomerism. We observe bilateral systemic venous sinus (right atrial isomerism) in 60% of embryos (21/35), with no evidence for left isomerism or reversed situs. This distribution is also significantly different from a random distribution (chi-squared test, p = 3.1 × 10−8). Double outlet right ventricle (44%; 14/32) is associated with right but not with left isomerism 32,39 . Abnormal right-sided aortic arch looping is not seen in left isomerism 40 but is present in right isomerism 32,41 and in situs inversus. We did not observe vascular phenotypes associated with left isomerism such as interruption of the inferior vena cava and partial anomalous pulmonary venous return 17 . Asplenia is commonly associated with right isomerism 42 , and indeed the human disease is sometimes known as asplenia. The spleen is reduced in mice lacking the NODAL receptor Acvr2b, although asplenia seems to be observed only in Cfc1 (Cryptic) mutants 41,43 . We observe a reduced or right-sided spleen in 44% of embryos (7/16), while none exhibited polysplenia, a feature of left isomerism.

Table 2. Neural tube, visceral and cardiovascular defects identified in mouse Zic2 mutants. A summary of the phenotypes observed by MRI and µCT imaging in all Zic2 mutants examined, including both the original iso line and TALEN-generated lines. All embryos are homozygous mutants, except for those labelled "trans-hets", which carry one iso allele and one TALEN allele as a test of complementation. For each phenotype described, the first number indicates the number of embryos observed with that phenotype, while the second indicates the number examined. The latter number differs for different phenotypes because it was not possible to assess every phenotype in every embryo due to limitations of imaging.

Figure 2 legend (fragment, panels g-l): (g,h) The azygos vein (AV) drains into the LSVC just above the level of the atria. In the mutant embryo, the IVC is left-sided while the hepatic vein (HV) and PV also drain into the sinus venosus (red asterisk), which opens bilaterally into the atria. The AV is duplicated. (i,j) Hepatic venous anatomy (ventral view). In the wildtype, the IVC passes through the liver (grey shading) to drain into the right atrium (red arrowhead) and is connected to the umbilical vein (UV). In the mutant embryo, the IVC drains into the left atrium. The UV connects to the right hepatic vein (HV), which drains directly into the right atrium (red arrowheads). (k,l) Right-sided aortic arch (AoA). The descending aorta (Dao) may be seen to be ectopically located to the right of the trachea (Tr) in the mutant.
These data thus indicate a strong bias towards right isomerism. Only one embryo exhibits any feature of left isomerism (bilateral unilobed lungs) and this embryo does not show atrial left-isomerism nor any vascular phenotype associated with left-isomerism.
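The chi-squared comparisons quoted above are easy to reproduce. The sketch below (Python with SciPy; our own illustration, not the authors' code) tests the observed situs counts against the uniform null implied by "a random distribution of the four phenotypes", and recovers the quoted p-values.

```python
# Chi-squared goodness-of-fit of observed situs counts against a uniform null.
from scipy.stats import chisquare

# Pulmonary situs in 37 mutants: [situs solitus, situs inversus, right iso, left iso]
stat, p = chisquare([9, 0, 27, 1])          # expected = 37/4 per class by default
print(f"pulmonary: chi2 = {stat:.1f}, p = {p:.1e}")   # p ~ 5.7e-11

# Atrial situs in 35 mutants: [normal, reversed, right iso, left iso]
stat, p = chisquare([14, 0, 21, 0])
print(f"atrial: chi2 = {stat:.1f}, p = {p:.1e}")      # p ~ 3.1e-8
```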
The NODAL pathway is downregulated in Zic2 mutants. We performed a microarray experiment in mice as an unbiased screen to identify putative Zic2 transcriptional targets. Global gene expression was assayed in whole Zic2 iso/iso embryos harvested at E8.0-E8.5 (0-4 somites) relative to wild-type. Only 29 genes showed a significant change in their expression level (with a false discovery rate of 5%) and the majority (23/29) were downregulated (Fig. 3a,b; Table S2). Some of these, such as Dmrt3 and Fzd5, are specifically expressed in the developing head. Other downregulated genes have an established role in left-right patterning, including Lefty2 (−5.1), Nodal (−2.5), Lefty1 (−1.8) and Shh (−2.0). Downregulated genes also include Gdf10 (−2.01) and Chordin (−1.53), which are, like Nodal and Lefty, associated with TGFβ signalling, and Sox9, which has not so far been associated with LR establishment in the mouse although it is known to be involved in the sea urchin 44 . Many of the changed genes, including Nodal, Shh, Fam183b, Foxd4 and Dynlrb2, are known to be expressed within the node, while others including Gsc and Lefty1 are expressed in node-derived cells at the midline. Upregulated genes included the pluripotency factor Nanog (+1.67). To validate these changes we performed quantitative real-time polymerase chain reaction (qPCR) assays for changed genes, as well as for the NODAL pathway genes Pitx2c and Dand5 (also known as Cerl2; Fig. 3c). This analysis confirmed the downregulation of Nodal, Lefty2, Lefty1 and Shh. Surprisingly, Pitx2c was not changed. We hypothesised that this might be because the assay was performed at an age before Pitx2c is upregulated in the LPM, and therefore repeated the analysis using older embryos. In older embryos (aged 4-6 somites), we observed a significant reduction in expression, which persists in embryos of 12-25 somites (Fig. 3d), indicating that Pitx2c is downregulated in the iso embryo during the time at which it is normally expressed in the LPM. Thus, these data suggest downregulation of the NODAL pathway in the iso embryo.
Zic2 is required for Nodal expression in perinodal crown cells. We used in situ hybridisation to further analyse changes in the core NODAL pathway genes. Pitx2 expression was investigated using a probe which recognises all three Pitx2 isoforms 45 and was found to be expressed in the head folds and in the LPM in the wildtype embryo (Fig. 4a). In Zic2 iso/iso embryos Pitx2 expression was seen in the head folds, but was weak or, in many cases, absent from the LPM (Fig. 4b, arrow). Expression appeared to be delayed from the 4- to the 6-somite stage; it was weaker, and its laterality was perturbed, often bilateral (Fig. 4c). Lefty2 was expressed exclusively in the left LPM of wildtype embryos aged between 4 and 6 somites (Fig. 4d,e). All Zic2 iso/iso embryos failed to express Lefty2 (Fig. 4f-h). Nodal was expressed in the node of all wildtype embryos examined (Fig. 4i, arrowhead) and was either bilaterally expressed or was enriched on the left side of the node (Fig. 4m). In the LPM, Nodal was expressed in all wildtype embryos from 2 to 6 somites and was restricted to the left side (Fig. 4i, arrow; Fig. 4n). Nodal expression at the node of Zic2 iso/iso embryos was in most cases (7 of 8) weak or absent (Fig. 4j-m). Expression was only detectable in early somite-stage (1 to 4 somites) mutant embryos (5 of 8), and its abnormal laterality in one of them suggested a defective nodal flow, which would be consistent with the requirement for Zic2 in node cilia development or function as previously proposed 24 . Most Zic2 iso/iso embryos (6 of 8) failed to express Nodal in the left LPM (Fig. 4j,n). In the two mutant embryos that did express Nodal in the LPM, it was weak and was observed in both the left (Fig. 4k) and right LPM (Fig. 4l). Cerl2 (Dand5), known to be the earliest asymmetrically biased gene expressed at the node (L < R) 46 , was expressed exclusively within the node of wildtype embryos (Fig. 4o,p) and was observed to be enriched on the right side of the node in 8 of 9 embryos (Fig. 4s). Cerl2 appeared to be maintained at wildtype levels in Zic2 iso/iso embryos (Fig. 4q-s), but the onset of its laterality appeared to be delayed and slightly perturbed (Fig. 4s), an observation again consistent with the possibility that the mutation of Zic2 initially results in defective nodal flow and randomisation of gene expression at the node. We also studied the expression of Shh, which was detected along the embryonic midline in wildtype embryos in a solid band (Fig. 4t). In Zic2 iso/iso embryos, expression levels were maintained but there seemed to be fewer positive cells and the band of expression appeared disrupted (Fig. 4u), suggestive of impaired development of Shh-expressing cells in these embryos.
In summary, we find some anomalies suggestive of an occasional randomisation of LPM identity (bilateral Pitx2c expression, right-sided expression of Nodal). These anomalies match the laterality defects we characterized in the expression of Nodal and Cerl2 at the node, defects which are consistent with the Zic2 mutation leading to nodal flow perturbations, most likely as a result of the node cilia defect described in Zic2 Ku 24 . However, our data show that the predominant phenotype is the absence of expression in the LPM. This failure to activate NODAL-dependent gene expression (Nodal, Pitx2c, Lefty2) in the LPM is consistent with the later bias towards right-isomerism revealed by our analysis of the Zic2 iso mutant phenotype. Nodal expression within the node, a prerequisite for downstream gene expression in the LPM 13,46 , is reduced or absent in most embryos. These data suggest that ZIC2 may be required for Nodal expression at the node.

ZIC2 binds to the Nodal locus and can activate transcription. The expression of Zic2 in the epiblast overlaps with that of Nodal from the blastocyst stage to the late gastrula stage, up to and including formation of the node 6,7 . The dynamic expression of Nodal during development is regulated by five enhancers [47][48][49] . Analysis of previously published ChIP-seq datasets 26,27 shows that ZIC2 is bound to the Nodal locus in both ESCs and EpiSCs (Fig. 5a). In ESCs, prominent binding peaks were mapped to the Proximal Epiblast Enhancer (PEE) 50 and to the Highly Bound Element (HBE) 49 (Fig. 5a, green dots). These binding sites are also occupied in EpiSCs, but two additional high-affinity binding peaks are seen within the Node Dependent Enhancer (NDE) 51 and in HBE (Fig. 5a, red dots), indicating that ZIC2 binding is more widespread in these cells. Further low-affinity peaks map to the Asymmetric Enhancer (ASE) 48,51 , to Exon 1 and to HBE (blue dot).
We tested the ability of ZIC2 to regulate the transcriptional activity of each of the five Nodal enhancers using luciferase reporter constructs in which the enhancer is linked to the minimal E1b promoter 49 (Fig. 5b). Each construct was co-transfected in U2-OS cells together with an expression plasmid encoding ZIC2 or an empty vector (pcDNA).
The ASE enhancer showed a strong basal activity in control cells, suggesting activation by endogenous factors present in U2-OS cells (Fig. 5b). For this reason, we were not able to accurately test the ability of ZIC2 to activate this enhancer. Low basal activity was observed for the remaining enhancers. We found no evidence for ZIC2-mediated transcriptional activation from the PEE, NDE or AIE enhancers. HBE, in contrast, shows a robust ZIC2-mediated transcriptional activation. HBE has been shown to be transcriptionally active in primitive streak-like cells (node precursors) 49 .

Figure 3 legend (fragment, panels c,d): (c) Three independent biological replicates of pooled embryos (0-4 somites) were performed for each condition and these are plotted as individual data points (blue circles indicate wildtype; orange triangles indicate Zic2 iso/iso ). Asterisks above each column indicate the result of a one-tailed t-test of samples with unequal variance testing the null hypothesis that loss of Zic2 has no effect on gene expression: ***p < 0.0005; **p < 0.005; *p < 0.05, NS = not significant. (d) Taqman qRT-PCR analysis of Pitx2c expression at three different ages (2-4 somites, 4-6 somites and 12-25 somites). Three independent biological replicates of pooled embryos were performed for each condition. Labels as per panel c.
Zic3 has a phenotype closely resembling that of Zic2, including laterality-related malformations and HPE 52,53 , and also exhibits reduced or absent expression of Nodal 52 . ZIC3 has previously been shown to be able to activate an NDE enhancer 54 . We therefore asked whether ZIC3 can also activate expression from the HBE enhancer. Luciferase assays demonstrate that ZIC3 activates HBE at a level similar to that of ZIC2 (Fig. 5c), lending further support to the hypothesis that this enhancer may be important in LR patterning.
We next asked whether the iso mutation reduces the ability of ZIC2 to activate the HBE enhancer, and thus whether a failure to activate Nodal might explain the phenotype. The Zic2 kumba allele has been shown to evade nonsense-mediated decay (NMD) but to produce a protein which cannot activate the ApoE promoter in luciferase assays 55 . While the kumba mutation disrupts a cysteine residue in the fourth zinc finger 37 , the iso mouse carries a Y401X mutation which disrupts the fifth zinc finger of ZIC2 (Fig. S1). Two lines of evidence indicate that iso, like kumba, may evade NMD. Firstly, Zic2 mRNA is not significantly reduced in iso embryos by microarray analysis (Table S2). Secondly, qRT-PCR analysis indicates that the mRNA can be detected in mutant embryos (Fig. S4). We used site-directed mutagenesis to generate a Y401X mutation within the ZIC2 expression construct (named ZIC2-ISO) and tested the response of the HBE luciferase reporter to this. ZIC2-ISO shows a reduced ability to activate HBE (Fig. 5c), although this reduction falls just short of statistical significance (p = 0.09), and it still maintains some transcriptional activity. This, together with potential redundancy with ZIC3, may explain the partial penetrance of the Zic2 iso phenotype.
To gain further insights into the regulation of HBE by ZIC2, we investigated ZIC2 binding sites within HBE. ChIP-seq data suggests the presence of three putative binding sites, and we found that each of these contains sequence matching the consensus ZIC2 binding motif. ZIC2 Binding Site 1 (ZBS1) is located at position 99-109 bp within HBE and has the sequence CACCTCCTGGG (Fig. 5a,d,e red dot), ZBS2 at position 615-625 bp with sequence CCCCTGGGGTG (Fig. 5a,d,e green dot), and ZBS3 at 1845-1855 bp with sequence GCCCTCCTGGG (Fig. 5a,d,e blue dot). We used site-directed mutagenesis to delete each of these sites within the HBE-luciferase reporter construct. Luciferase assays indicated that while deletion of sites ZBS1 and ZBS2 has no effect on ZIC2-mediated transcriptional activation, reporters lacking ZBS3 completely lost their responsiveness to ZIC2, indicating that an intact site ZBS3 is an absolute requirement (Fig. 5d).
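As a sanity check on the reported coordinates, the three ZBS sequences and their 1-based positions can be verified with a short script; the HBE sequence below is a synthetic placeholder built to match the stated positions, not the real enhancer sequence.

```python
# Locate the three reported ZIC2 binding sites in an enhancer sequence string.
ZBS = {"ZBS1": "CACCTCCTGGG", "ZBS2": "CCCCTGGGGTG", "ZBS3": "GCCCTCCTGGG"}

def find_sites(seq, motifs):
    """Return 1-based start positions of each motif found in seq."""
    return {name: [i + 1 for i in range(len(seq) - len(m) + 1)
                   if seq[i:i + len(m)] == m]
            for name, m in motifs.items()}

# Placeholder sequence: N-padding sized so each 11-bp motif starts where reported.
hbe_seq = ("N" * 98 + ZBS["ZBS1"] + "N" * 505 + ZBS["ZBS2"]
           + "N" * 1219 + ZBS["ZBS3"])
print(find_sites(hbe_seq, ZBS))  # {'ZBS1': [99], 'ZBS2': [615], 'ZBS3': [1845]}
```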
An electrophoretic mobility shift assay (EMSA) confirmed binding of ZIC2 to ZBS2 (occupied in both ESCs and EpiSCs) and ZBS3 (required for activation but demonstrating only low-affinity binding in ESCs and EpiSCs), but not to ZBS1 (occupied in EpiSCs but not in ESCs) in vitro (Fig. 5e).

Figure 5 legend (fragment): (a) … 27 and in epiblast stem cells (EpiSC) 26 . The five previously-characterised enhancer elements are indicated by yellow boxes in the cartoon above, while exons are indicated by blue boxes. Coloured dots in the cartoon indicate putative ZIC2 binding sites. In ESCs (upper trace), two high-affinity binding sites are seen, located within the PEE and HBE enhancers (green dots). In EpiSCs (lower trace), four high-affinity sites are observed, which include the two bound in ESCs (green dots) as well as two additional sites located within the NDE and HBE enhancers (red dots). Several lower-affinity binding sites map to ASE, HBE and exon 1; the site located within HBE is indicated by a blue dot. (b) Luciferase assays performed using reporter constructs consisting of each of the known Nodal enhancers linked to a viral E1b promoter and luciferase, as shown in the cartoon. Blue circles indicate control transfections, orange triangles ZIC2 transfections. Luciferase activity is plotted relative to control. The data show that the HBE reporter is activated by ZIC2 above background. (c) Luciferase assays using the native HBE reporter, performed as in b. Cells were transfected with the reporter and an expression plasmid for ZIC3, ZIC2 or ZIC2-ISO, or a control empty vector (pcDNA). (d) Luciferase assays using modified HBE reporters, performed as in b. HBE contains 3 putative ZIC2 binding sites, indicated by the coloured dots in the cartoon; deleted binding sites are indicated by an "X". Deletion of sites ZBS1 and ZBS2 (red and green dots in the cartoon) has no effect on the ability of ZIC2 to activate the reporter, while deletion of site ZBS3 (blue dot) eliminates this activity. (e) Gel shift (EMSA) assays to study the binding of ZIC2 to HBE. P = probe only, Z = probe + HA-ZIC2, ZC = probe + HA-ZIC2 + unlabelled competitor, ZA = probe + HA-ZIC2 + αHA antibody. Red arrow = gel shift, blue arrow = supershift. Images show cropped gel pictures; uncropped images are shown in the supplemental data.

Thus, these data demonstrate that the HBE enhancer, which is bound by ZIC2 both in vivo in node precursors (Fig. 5a) and in vitro (Fig. 5e), can mediate ZIC2-dependent and ZIC3-dependent transcriptional activation in vitro (Fig. 5b,c). An identified ZIC2 site within HBE, bound at low affinity in precursor cells (Fig. 5a), is required for this activity in vitro (Fig. 5d), and a ZIC2 mutation reproducing the iso mouse mutant, which has impaired Nodal node expression (Fig. 4j-m), can reduce the ability of ZIC2 to activate HBE (Fig. 5c).
Discussion
In this work, we reveal that ZIC2/Zic2 loss in both man and mouse is associated with a complex set of previously unappreciated malformations affecting the cardiovascular, pulmonary and digestive systems, in addition to the better known neural tube defects. The mouse phenotype reveals a laterality defect which shows a strong bias towards right-isomerism, indicative of a lack of left-sided identity in the lateral plate mesoderm during morphogenesis of these organ systems. This is supported by molecular data indicating that the left-determining NODAL signalling cascade fails to be activated in the LPM. Nodal expression at the node is also impaired, implying that ZIC2 acts upstream of this event. Binding of ZIC2 to the Nodal locus in EpiSCs, together with in vitro data revealing ZIC2-mediated transcriptional activation of regulatory sequences, suggests that a direct interaction between ZIC2 and Nodal is critical for the establishment of left-sided mesoderm identity.
The association of Zic2 with cardiovascular laterality defects was first demonstrated in the Zic2 Ku mouse, in which the direction of heart tube looping was shown to be randomly assigned 24 . Node cilia were found to be reduced in length in these mutants from 4 µm to 2.5 µm, and this, together with the earlier observation that Zic2 expression in the node is turned off just before Nodal expression in perinodal cells is initiated 6 , led to the conclusion that the laterality defect (interpreted as random cardiac situs) resulted from impaired nodal cilia development 24 . A randomised distribution in the direction of heart tube looping within a population of mutants does not in itself indicate randomised cardiac situs. Such a phenotype is also associated with specific, non-random laterality defects such as isomerism 32,56 . This is because, in the absence of laterality, heart looping lacks directionality and may turn in either direction. The Zic2 Ku mutant exhibits mid-gestation lethality, preventing further investigation. In contrast, most Zic2 iso and TALEN mutants survive until E15.5, which has allowed us to perform a more detailed analysis of the laterality phenotype. We confirmed the heart tube looping phenotype but revealed that this results not from randomisation of situs, but from right isomerism. Zic2 iso and TALEN mutants show a strong bias towards right isomerism over other laterality phenotypes. Pulmonary situs was assessed in a total of 37 Zic2 mutant embryos; of these, 73% showed right pulmonary isomerism, a single embryo exhibited left isomerism and none had situs inversus. The same phenotype was seen in multiple independent Zic2 alleles. Laterality of the cardiovascular system matched that of the thorax, albeit at reduced penetrance, as demonstrated, for example, in right atrial isomerism, which was seen in 60% of embryos. Many other features indicate right isomerism and not left isomerism or situs inversus.
This conclusion is consistent with the observed molecular phenotype. Both Zic2 Ku and Zic2 iso mutants fail to activate NODAL signalling in the LPM. Zic2 Ku mutants lack Nodal and Lefty2 LPM expression, and have reduced Pitx2 expression 24 . Our analysis reveals a similar molecular phenotype in the Zic2 iso mutant. Mutations which impair the motility of node cilia, such as the iv mutant of the cilia dynein Dnah11 12 , or mutations in transcription factors required for cilia development, such as Noto 57 , result in the stochastic activation of NODAL signalling in the LPM. This is visualised via detection of the expression of Nodal/Pitx2c/Lefty2 with the same frequency on the left, on the right, on both sides or on neither, and leads at later stages to equal proportions of embryos with situs solitus, situs inversus, right isomerism or left isomerism. Only 4 out of 20 E8.5 Zic2 iso mutant embryos examined showed ectopic expression of these genes in the LPM; 15 failed entirely to express them in this tissue, an observation consistent with the fact that 73% of those examined at E14.5 exhibited right isomerism. These numbers do not support the hypothesis that a defect in cilia motility is the major cause of the laterality defect.
Although there is ample evidence that cilia motility conditions the asymmetry of Nodal expression in perinodal cells, this asymmetry in mRNA expression is not itself required to induce Nodal expression in the left LPM 47,58,59 . What appears to matter is the amount of NODAL protein produced. Pioneering studies relying on the removal of specific Nodal enhancers have established that Nodal expression at the node is required to induce Nodal expression in the LPM 13,58,59 thus providing an explanation for the right isomerism of Zic2 mutant embryos that fail to express Nodal at the node. However these studies also showed that residual Nodal expression at the node could be sufficient to induce correct expression in the LPM, and therefore call into question whether the low level of Nodal we detect at the node in some of our Zic2 mutants is the only reason for their failure to induce NODAL downstream targets in the LPM. Assessing the exact contribution of the loss of Nodal expression at the node to the observed phenotype would require a rescue experiment in which a transgenic construct is used to drive Nodal expression at the node in a Zic2 iso mutant background.
The situation is arguably complex in these mutants, because Nodal may not be the only gene in the pathway regulated by ZIC2. The level of Nodal expression in the node required to induce its own expression in the LPM may thus be different, and perhaps higher, than that in wildtype embryos, to compensate for a as in b. Cells were transfected with the reporter and an expression plasmid for ZIC3, ZIC2 or ZIC2-ISO, or a control empty vector (pcDNA). (d) Luciferase assays using modified HBE reporters, performed as in b. HBE contains 3 putative ZIC2 binding sites, indicated by the coloured dots in the cartoon, deleted binding sites are indicated by an "X". Deletion of sites ZBS1 and ZBS2 (red and green dots in the cartoon) has no effect on the ability of ZIC2 to activate the reporter, while deletion of site ZBS3 (blue dot) eliminates this activity. (e) Gel shift (EMSA) assays to study the binding of ZIC2 to HBE. P = probe only, Z = probe + HA-ZIC2, ZC = probe + HA-ZIC2 + unlabelled competitor, ZA = probe + HA-ZIC2 + αHA antibody. Red arrow = gel shift, blue arrow = supershift. Images show cropped gel pictures, uncropped images are shown in the supplemental data.
SCIEnTIFIC REPORtS | (2018) 8:10439 | DOI:10.1038/s41598-018-28714-1 possible concomitant down-regulation of partners or agonists. Gdf1, encoding a co-ligand of NODAL, is likewise expressed in perinodal cells where it is required to ensure the adequate transfer of NODAL to the LPM 60 , and could be one such partner as its absence from the node also leads to right isomerism. No ZIC2-binding peaks are present at the Gdf1 locus 26,27 and our microarray analyses of Zic2 iso mutant embryos detected no alteration of Gdf1 expression (Table S2), but, given that Gdf1 is also bilaterally expressed in lateral plate mesoderm at this stage, this experiment may not be sensitive enough to detect its specific misregulation in perinodal cells. Interestingly, the expression of Gdf1 in these cells, like that of Nodal and Cerl2, is known to be dependent on NOTCH signalling 61 . Our observation that in most Zic2 iso mutant embryos Cerl2 shows normal levels of expression in the node indicates that NOTCH signalling is likely to be intact in these mutants. This, together with a previous report that absence of Nodal expression in the node does not affect the expression of Gdf1 there 58 make it quite possible that its expression is similarly unaffected in the node of Zic2 iso mutant embryos. However, further investigations are necessary to make certain that this is case, not just for Gdf1 but also for all the genes expressed at the node that contribute to the production and propagation of the left identity-inducing signal.
The occurrence of right isomerism has been described in embryos carrying point mutations in either Pkd1l1 or Pkd2, which are believed to affect the detection of nodal flow by immotile cilia 62 , rather than affecting motile cilia function (node morphology and cilia motility are unaffected in these mutants). Crucially, not only does the NODAL signalling cascade fail to be activated in the LPM of these mutants, but Nodal and Cerl2 expression at the node also remains symmetrical, unlike in Zic2 mutants. Pkd1l1 and Pkd2 expression is restricted to the node at E7.0-E7.75. Our microarray data shows no evidence for a downregulation of their transcripts in the Zic2 iso mutants, but in situ data suggests they may be reduced in Zic2 Ku mutants 24 . The anomalies we detected in the laterality of Nodal and Cerl2 expression around the node of Zic2 iso mutant embryos confirmed that their nodal flow is perturbed, and suggest that motile cilia at the node present defects similar to those characterized in Zic2 Ku mutants 24 . However, the emergence of these anomalies, and their occasional consistency with corresponding anomalies in the expression of Nodal or Pitx2 in the LPM, suggest immotile cilia are functional and argue against this mutant version of ZIC2 having a major impact downstream of the nodal flow.
The expression of Cerl2 in perinodal cells of Zic2 iso mutant embryos indicates NOTCH signalling is intact. This suggests that the Nodal locus in this mutant is either unresponsive to NOTCH signalling or unable to maintain its own expression after it is induced. The dynamic expression of Nodal is regulated by five distinct enhancers, three of which (NDE, PEE and HBE) were found to harbour significant ZIC2-binding peaks in ESCs and/or in EpiSCs (Fig. 5a). NDE is active in perinodal cells and its deletion eliminates most, but not all, of Nodal expression at the node. Its transcriptional activity has been shown to be dependent on NOTCH signalling 63 , and a previous study using Xenopus cells has shown that it can be activated by ZIC3 54 , a transcription factor closely related to ZIC2. NDE would therefore appear to be an ideal candidate to mediate the influence of ZIC2 on Nodal expression at the node; however, we did not detect an effect of ZIC2 on its transcriptional activity in our luciferase assay, and further analysis will be necessary to find out whether it does mediate this influence in vivo.
PEE transcriptional activity is detected in the proximal epiblast and in the anterior primitive streak, but not in the node 50 . Deletion of PEE has been shown to result in a range of defects, including heart abnormalities 50 , which are reminiscent of those of Zic2 mutants. However, as for NDE, we did not detect an effect of ZIC2 on the transcriptional activity of PEE in our luciferase assay. These data leave open the possibility that this enhancer mediates the influence of ZIC2 on a domain of Nodal expression that is critical for anterior patterning and the establishment of laterality, but again further analysis will be necessary to confirm this is the case.
HBE was the only Nodal enhancer showing a ZIC2-mediated transcriptional activation response. Mutagenesis analysis indicated both that a defined ZIC2-binding site within HBE, ZBS3, is required for this response, and that a protein carrying the same mutation as the iso mouse shows a reduced ability to activate HBE. While the transcriptional activity of HBE is highest at preimplantation stages it is still detectable in the post-implantation epiblast, in the primitive streak and in the early node 49 . The impact on embryonic development of its deletion at these stages is not yet known, but these data suggest that an interaction between ZIC2 and HBE may be required for Nodal to be correctly expressed at the node. Testing this hypothesis calls for an investigation of the impact of a targeted mutation of ZBS3 on Nodal expression and the establishment of laterality.
Our analyses indicate that ZBS3 is a low-affinity ZIC2-binding site, and that higher affinity ZIC2-binding sites in HBE are dispensable for its transcriptional activity. This is consistent with current models of how transcription factors regulate gene expression, which place greater emphasis on the critical role played by low-affinity binding sites 64 . This result led us to consider the possibility that another Nodal enhancer, ASE, in which a low-affinity ZIC2-binding site similar to ZBS3 was identified in EpiSCs, might also mediate the influence of ZIC2 on Nodal expression at the node. ASE is an auto-regulatory enhancer, known to be dependent on Activin/Nodal signalling. Mouse embryos deleted for ASE show very weak Nodal expression in the left LPM, which leads to partial right isomerism later on 47 . Crucially, the level of Nodal expression at the node at E7.5 and E8.5 in these mutants is similar to that of wildtype embryos, except that it remains symmetrical. This observation appears to rule out the possibility that ASE plays a critical role in mediating the influence of ZIC2 on Nodal node expression.
Thus, a review of the evidence relating to NDE, PEE and HBE leaves open the possibility that they may all be, directly or indirectly, involved in promoting the influence of ZIC2 on the expression of Nodal during the time window in which ZIC2 function at the midline is critical for the establishment of a left identity in the LPM, which broadly extends from E6.5 to E7.5. The expression of Zic2 and the transcriptional activity of NDE only overlap briefly in the young node, which seems to preclude a prolonged interaction. In contrast, the overlap with the transcriptional activities of HBE and PEE in the epiblast and in the primitive streak, ahead of node formation, is extensive. This may be important as ZIC2 has been shown to act as a pioneer factor in ESCs, binding to target genes before they are transcriptionally active, seeding the locus to facilitate binding of additional factors later in more differentiated cell types 26,29 . Furthermore, in the case of Nodal, the presence of one of the ZIC2-bound enhancers, HBE, has been shown to condition the later activation of at least one of the other Nodal enhancers 49 . These observations may explain how the impact of ZIC2 absence on Nodal node expression could be delayed until a time when the transcription factor is no longer expressed. ZIC2 associates at a subset of loci in ESCs with the NURD chromatin remodelling complex 27 , a complex known to regulate bivalent enhancers. ZIC2 is bound to the Nodal locus in epiblast stem cells, precursors of the node, and is able to activate expression in in vitro assays from one such bound enhancer (HBE), yet there is no obvious phenotype at this stage of development in Zic2 mutants. All available data is therefore compatible with the possibility that ZIC2 interaction with regulatory sequences at the Nodal locus in node cell precursors in the primitive streak primes it for later activation, a pre-requisite for the correct establishment of left-sided identity. Further work will be required to test this hypothesis and to elucidate the precise molecular mechanisms by which ZIC2 may regulate Nodal expression.
Our analysis lends support to the hypothesis that cardiovascular and other visceral defects seen in ZIC2 HPE patients may result from an underlying laterality defect. However, it would appear that the co-morbidity of cardiovascular laterality defects with HPE is a relatively rare event in reported ZIC2 HPE cases, occurring in only 9-14% of cases 2,3 . Cardiovascular malformations of this kind are embryonic lethal in mice, and thus many pregnancies may be lost before term; indeed, only one of the probands we describe was born alive. Thus, many cases may go unreported. It should also be noted that most patients have a heterozygous ZIC2 mutation; heterozygosity in mice is associated with mild neural tube defects but is not associated with cardiovascular malformations. Thus, milder forms of HPE reported in living patients would not be expected to be associated with cardiovascular anomalies. Situs abnormalities do not always manifest as right isomerism in human cases as they do in the mouse model, and we find evidence for discordance between cardiac and pulmonary situs suggesting heterotaxy in one patient. This is likely to be for a number of reasons. The patient with heterotaxy (Proband 7) carries a large chromosomal deletion encompassing many genes in addition to ZIC2, and therefore genetic interactions are likely to be a factor. Genetic background is known to affect the penetrance of cardiovascular laterality phenotypes 32 , indicating that genetic interactions at multiple loci influence the phenotype. This may also reflect the influence of environmental factors, which are known to impact upon the expression of cardiovascular phenotypes 65 . Finally, in many cases it is not possible to make a definitive diagnosis based on the information provided.
Our data nevertheless explain the co-morbidity of holoprosencephaly with congenital heart disease and suggest that ZIC2 should be considered as a candidate for screening for the latter disease.
Methods
Human genetics. All protocols were approved by the local ethics committee of Rennes Hospital, and all work was carried out in accordance with these protocols. All samples were obtained and analysed with informed consent according to the approved protocols.
Mouse genetics. All animal procedures were approved by the Committee for Animal Care and Ethical Review at the University of Oxford, and all the experiments conformed to the UK Animals (Scientific Procedures) Act, 1986, incorporating Directive 2010/63/EU of the European Parliament. All animal procedures were performed in accordance with UK Home Office regulations (PPL 30/3174). The iso allele was isolated in a random recessive ENU mutagenesis screen in which cardiovascular anomalies were identified by MRI screening of E14.5 embryos 35,36 . C57BL/6J males were mutagenized and then crossed to C3H/HeH females. G3 progeny were screened for a phenotype. TALEN alleles of Zic2 have been previously described 31 . All Zic2 alleles were subsequently backcrossed for several generations and then maintained on a C3H/HeH background (mice obtained from MRC Harwell, Oxfordshire, UK), and embryos of both sexes were used in experiments. Heterozygous animals were crossed and pregnant dams were sacrificed by cervical dislocation before the embryos were dissected and processed for further analyses. Genotyping was performed using allele-specific Taqman probes on DNA obtained from ear biopsies. In all experiments, mutant embryos were compared to littermate controls.
Genetic mapping and sequencing. Mutations were mapped using a panel of SNP markers that differentiate between the C57BL/6J and C3H/HeH mouse strains, as described previously 35 , identifying a minimal homozygous segregating interval lying on chromosome 14 between SNP rs13482392 (118.9 Mb) and the end of the chromosome at 124 Mb. The identified chromosomal interval was then exome sequenced using the SureSelectXT mouse all exon kit (Agilent), as recommended by the manufacturer. Captured libraries were sequenced on the Illumina platform as paired-end 76-bp reads (study accession number ERP000530). The only exonic mutation identified within this interval was a C to A substitution at position 122877892, indicating a stop-gain mutation (Y401X) in the Zic2 gene disrupting the fifth zinc finger domain ( Fig. S1; Table S1). This was confirmed by capillary sequencing. No other exonic mutations were identified within the mapped interval.
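As a small aside, the link between a single C-to-A change and a Y401X stop-gain is easy to see at the codon level. The sketch below assumes the substitution falls at the third position of a TAC tyrosine codon on the coding strand, which is the only way a single C>A change converts a tyrosine codon into a stop codon.

```python
# Why a C>A substitution can convert a tyrosine codon into a stop codon.
codon_meaning = {"TAC": "Tyr (Y)", "TAT": "Tyr (Y)", "TAA": "stop", "TAG": "stop"}
ref = "TAC"          # tyrosine codon 401 (assumed coding-strand sequence)
alt = ref[:2] + "A"  # C -> A at the third codon position
print(f"{ref} [{codon_meaning[ref]}] -> {alt} [{codon_meaning[alt]}]")  # Y401X
```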
Phenotyping by MRI/µCT scanning. Embryos were harvested at either E14.5 or E15.5, exsanguinated in warm Hanks' saline (SIGMA, H4641), cooled down in ice-cold PBS and then fixed in 4% paraformaldehyde. Imaging was performed using either MRI (iso, A8, A10/17, A19) or µCT (A5). MRI was performed as previously reported 66 . µCT was performed using a SkyScan 1172 scanner (Kontich, Belgium). Prior to scanning, embryos were incubated in 0.025 N Lugol's solution for 4 days to achieve soft tissue contrast and embedded in a tube with 1% agarose. Generated datasets were analysed and 3D reconstructions generated using Amira 5.3.3 software (FEI Visualization Sciences Group, Merignac, France).
Microarray analysis. E8.5 wildtype and Zic2 iso/iso whole embryos were collected and snap frozen in liquid nitrogen. Eight stage-matched embryos were pooled per sample and three replicates of each genotype were performed. RNA was prepared using the RNeasy Plus Micro kit (Qiagen) and hybridised to Illumina Mouse WG6 v2 arrays at the Oxford Genomics Centre (following the manufacturer's protocol). Raw data were imported into R statistical software for processing and analysis (http://www.R-project.org). Pre-processing and normalisation steps were performed with the BioConductor 67 package 'Variance Stabilisation and Normalisation' (VSN) 68 . Quality control analyses showed the data were high quality with no outlier samples. Statistical analysis was performed on the full dataset (approximately 45,000 probes) with the Linear Models for Microarray Analysis (LIMMA) package 69 . Raw p-values were corrected for multiple testing using the false discovery rate (FDR) controlling procedure 70 . At 5% FDR, this resulted in 30 significantly changed genes. A heatmap was generated using the Expander programme 71 . Microarray data has been submitted to the Gene Expression Omnibus (accession number GSE106350).

qPCR. Total RNA was harvested from whole embryos as above. RNA was reverse transcribed using the Quantitect reverse transcription kit with genomic DNA eliminator (Qiagen), and analysed by SYBR Green-based qPCR (Fig. 3c; Supplementary Table 3 lists primers used) or using Taqman probes (Figs 3d, S4). Graphs show the mean +/− standard deviation for three biological replicates, each consisting of a pool of embryos. A one-tailed t-test was performed on linear expression values normalised to GAPDH to test the hypothesis that expression of selected genes is reduced in the mutant relative to wildtype.
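For illustration, the FDR-controlling step cited above can be sketched as the Benjamini-Hochberg procedure applied to per-probe p-values; this is our own minimal NumPy version with random placeholder p-values, not the authors' R/LIMMA output.

```python
# Benjamini-Hochberg procedure: reject hypotheses while controlling the FDR.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of tests significant at FDR level alpha."""
    p = np.asarray(pvals)
    n = p.size
    order = np.argsort(p)
    # Largest k with p_(k) <= (k/n)*alpha; reject the k smallest p-values.
    below = p[order] <= (np.arange(1, n + 1) / n) * alpha
    mask = np.zeros(n, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        mask[order[:k + 1]] = True
    return mask

pvals = np.random.default_rng(0).uniform(size=45000)  # ~45,000 probes (placeholder)
print(benjamini_hochberg(pvals).sum(), "probes pass 5% FDR")
```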
In situ hybridisation. Embryos were harvested at E8.5 and fixed in 4% paraformaldehyde, 0.2% glutaraldehyde for 3 hours at 4 °C before being dehydrated and stored in methanol at −20 °C until use. Prior to hybridising, embryos were bleached in 6% H 2 O 2 and digested for 6 minutes in 10 µg/ml proteinase K. Hybridisation was performed with digoxigenin-labelled RNA probes (2 µg/ml) at 70 °C following standard protocols 72 . Probes used are listed in Supplementary Table 4.
ChIP-seq analysis. ChIP-seq data used in this paper are from published studies 26,27 ; GEO accession numbers GSE61188, GSE74636. Binding peaks were visualised using the Integrated Genome Viewer software 73 .
Luciferase assays. All assays were performed with U2-OS cells (obtained from ATCC) grown in DMEM with 10% FBS. Cells were tested and confirmed free of mycoplasma contamination. Luciferase reporters carrying Nodal enhancers linked to a viral E1b promoter have been previously described 49 . Generation of Y401X ZIC2 and deletion of ZIC2 binding sites within HBE was performed by site-directed mutagenesis (NEB) using primers listed in Supplemental Table 5. Cells were transfected using Fugene reagent (Promega) with either HA-ZIC2 (Zic2), HA-ZIC3 (Zic3), Y401X ZIC2 (Zic2-iso) or an empty vector (pcDNA) along with a 5× excess of reporter plasmid. A Renilla luciferase reporter was co-transfected to control for transfection efficiency. Cells were harvested after 48-72 hrs and assayed using the Dual Luciferase Reporter assay system (Promega). Each data point represents the mean of three technical replicates performed during the same experiment, and each experiment was repeated three times, as shown on the graphs. Asterisks indicate results of a one-tailed t-test of samples with unequal variance testing the null hypothesis either that transfection with ZIC2 does not result in a change in activation of a given reporter over transfection with pcDNA (Fig. 5b,d) or that there is no difference between the two indicated conditions (Fig. 5c). *p < 0.05, NS = not significant.

EMSA assays. Crude protein extracts were derived from HEK293T cells (obtained from ATCC, tested for mycoplasma contamination) transfected with a Zic2-HA expression plasmid or a control pcDNA plasmid. Cells were lysed in a solution of 10 mM HEPES pH 7.9, 1.5 mM MgCl2, 10 mM KCl, 0.5 mM DTT and nuclei pelleted by centrifugation. Nuclei were lysed in 20 mM HEPES pH 7.9, 25% glycerol, 0.42 M NaCl, 1.5 mM MgCl2, 0.2 mM EDTA, 0.5 mM DTT. Short 31 bp double-stranded biotin-labelled probes (listed in Supplemental Table 6) were synthesised and EMSA assays performed using the Lightshift Chemiluminescent EMSA kit (Life Technologies). Unlabelled oligos were used as competitors and a supershift was performed with a monoclonal αHA antibody (Covance #MMS-101R).
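The one-tailed t-test with unequal variance used for the luciferase and qPCR comparisons above is Welch's test; a minimal SciPy version (requires SciPy >= 1.6 for the alternative argument) is sketched below with made-up replicate values.

```python
# Welch's one-tailed t-test on three biological replicates per condition.
from scipy.stats import ttest_ind

control = [1.0, 1.2, 0.9]   # e.g. pcDNA-transfected, relative luciferase units
zic2 = [3.1, 2.7, 3.4]      # e.g. ZIC2-transfected (illustrative values)

# equal_var=False gives Welch's test; alternative='greater' makes it one-tailed.
stat, p = ttest_ind(zic2, control, equal_var=False, alternative="greater")
print(f"t = {stat:.2f}, one-tailed p = {p:.4f}")
```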
Data availability. The microarray datasets generated in the current study are available in the GEO repository, accession number GSE106350. Luciferase constructs and mouse alleles described in this work are available on request.
"year": 2018,
"sha1": "df6d9b017b03c6351251335b40ecaf7b2fc5b02a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-28714-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "560390e362b811f3ab1f22d1ba11385e6ba1b806",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Cameron ulcers: An atypical source for a massive upper gastrointestinal bleed
Cameron lesions represent linear gastric erosions and ulcers on the crests of mucosal folds in the distal neck of a hiatal hernia (HH). Such lesions may be found in up to 50% of endoscopies performed for another indication. Though typically asymptomatic, these may rarely present as acute, severe upper gastrointestinal bleed (GIB). The aim is to report a case of a non-anemic 87-year-old female with a history of HH and atrial fibrillation who presented with hematemesis and melena resulting in hypovolemic shock. Repeat esophagogastroduodenoscopy was required to identify multiple Cameron ulcers as the source. Endoscopy in a patient with HH should involve meticulous visualization of the hernia neck and surrounding mucosa. Cameron ulcers should be considered in all patients with severe, acute GIB and especially in those with known HH with or without anemia.
INTRODUCTION
The incidence of hiatal hernia (HH) rises with age [1] . Given the rising demographics and the growing number of endoscopies, this condition now constitutes an increasingly common endoscopic finding. One study reported the incidence to be upwards of 50% during upper endoscopies performed for another indication [2] . Though HH is typically asymptomatic, several complications can occur, including gastroesophageal reflux disease [3] , iron deficiency anemia [4] , acute or chronic bleeding [5] , and ulcer or erosion formation [2] . Usually an incidental endoscopic finding, Cameron lesions represent linear gastric erosions and ulcers on the crests of mucosal folds in the distal neck of a HH. Both erosions and ulcers are thought to be distinct forms of the same disease process. Lesions are found in 5.2% of patients with HH identified on upper endoscopy [6] and, in over 60% of these patients, multiple lesions may be found [1] .
CASE REPORT
An 87-year-old female presented to the Emergency Department (ED) with several episodes of bright red hematemesis and black, tarry stools over six hours. She complained of severe lightheadedness and crampy, lower abdominal pain. On examination, she was found to be hypotensive with systolic blood pressures in the 60s, and despite aggressive resuscitation with intravenous fluids and blood products, she required vasopressors.
Two days earlier, she had been discharged following a brief hospital stay for atrial fibrillation and a large HH. She had been treated conservatively with a proton pump inhibitor, sucralfate, and ondansetron as needed, with fair results.
At this presentation, hemoglobin (Hgb) was 12.7 g/dL (baseline of 14.8 g/dL). Nasogastric lavage returned 500 cc of bright red blood. An intravenous proton pump inhibitor was started, the patient was intubated, and an emergent esophagogastroduodenoscopy (EGD) revealed a large clot in the gastric fundus along with diffuse, friable mucosa in the mid-distal esophagus. No other bleeding site was identified. Despite multiple units of packed red blood cells and fresh frozen plasma, the patient's hemoglobin continued to decline. A repeat EGD demonstrated multiple circumferential nonbleeding Cameron ulcers at 37 cm with large placental clots in the fundus (Figure 1). Small clots were also seen at the gastroesophageal junction. In total, the patient received 14 units of packed red blood cells, multiple units of fresh frozen plasma, protamine sulfate and Vitamin K. Her Hgb stabilized on hospital day 10. During the course, she developed aspiration pneumonia, which was successfully treated with antibiotics. No further bleeding occurred and the patient recovered.
DISCUSSION
The pathogenesis of Cameron lesions is poorly understood. Some attribute lesion formation to mechanical trauma to the esophagus caused by respiration-related diaphragmatic contractions [6] . Other etiological factors may include acid reflux, ischemia, Helicobacter pylori infection, gastric stasis or vascular stasis [7] . It is likely that the etiology is multifactorial, including genotype, phenotype and patient risk factors such as underlying co-morbidities and medication use. The prevalence is also likely dependent on the size of the HH, with a 10%-20% risk for Cameron ulcers in hernias 5 cm in size or greater [8] . However, the absence of HH should not rule out Cameron lesions either. One study used push enteroscopy to evaluate ninety-five patients with obscure GIB previously investigated with standard endoscopy. Of the thirty-nine patients with an identifiable source, Cameron lesions were the second most commonly missed lesion (21%) [9] . Presumably, the lack of a large HH may have reduced the suspicion for Cameron lesions. Among the most concerning clinical manifestations of Cameron lesions are acute and chronic GIB. Chronic blood loss resulting in anemia has been well described. A large prospective, national, population-based study found that patients with HH had a significantly higher association with iron-deficiency anemia compared to those with esophagitis [4] . Additionally, in a case-control study, Cameron showed that of 259 patients with radiographic evidence of HH, 18 were anemic compared to 1 in the control group (P < 0.001) [10] . Patients with Cameron lesions typically respond well to medical treatment consisting of iron supplementation with or without acid-suppression therapy. It is worth noting that our patient's MCV was 97 on admission without any supplementation.
More alarming are lesions that present as severe, acute GIB. Studies have reported rates of 29%-58%, raising the possibility of life-threatening hemorrhage [2,10] . To our knowledge, there are very few reports of life-threatening upper GIB secondary to a Cameron ulcer. One case report describes the successful treatment of a visible vessel within a Cameron ulcer with band ligation [11] . However, in these cases, surgical intervention is recommended, as endoscopic hemostasis can be technically difficult. Potential risks of endoscopy in acute, upper GIB include deep ulcers and perforation as the gastrointestinal wall around the gastroesophageal junction lacks fibrous tissue. Thus, surgery should be considered in those with either severe sliding hernias with Cameron lesions and in patients with lesion-related complications refractory to medical treatment [12] . Long-term recurrence rates are extremely low following surgery [13] .
Complicating our case was the need to anticoagulate the patient for atrial fibrillation prior to endoscopy. More studies are needed to evaluate whether radiographic evidence of a large HH should prompt an endoscopy prior to anticoagulation. In conclusion, endoscopy in a patient with radiographic evidence of a large HH should involve meticulous antegrade and retrograde visualization with views of the hernia neck and surrounding mucosa. Cameron ulcers should be considered in all patients with severe, acute GIB, especially in those with known HH.
"year": 2012,
"sha1": "de496fd88701c51e6e74e11265acf014f1ac0f66",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v18.i35.4959",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "c3b1771c704e199b27b8b7744d1c1dfed4c5c6da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Experimental study on the efficiency of dodecafluoro-2-methylpentan-3-one on suppressing lithium-ion battery fires
Currently, the effective and prompt suppression of lithium-ion battery fires is still challenging. Herein, a 38 A h prismatic ternary (Li(Ni1/3Co1/3Mn1/3)O2/graphite) battery with the size of 150 × 92 × 27 mm3 was adopted to investigate the suppression efficiency of dodecafluoro-2-methylpentan-3-one (C6F12O) in high capacity lithium-ion battery fires. Five doses of C6F12O agent including 0, 0.5, 1.0, 1.5 and 2.0 kg were adopted. It was concluded that as the dose of C6F12O agent increased, the peak temperature of the long surface and bottom of the cells first increased slowly and then decreased rapidly. The results indicated that the C6F12O agent first shows a negative inhibitory effect, which is then transformed into an inhibitory effect as the dose increases. This inhibitory effect grew distinct gradually with an increase in dose. It was found that in a 47.5 × 21.5 × 16 cm3 module box, the appropriate dose of C6F12O agent was 9.42 g W−1 h−1. Accordingly, these results have implications in the fire suppression design for lithium-ion batteries.
Introduction
Due to their advantages of high energy density, long lifespan, no memory effect and environmentally friendly nature, lithium-ion batteries have become the main medium for new energy storage systems. However, batteries may undergo thermal runaway 1 under abuse conditions, including overcharging, overheating, and short circuiting, which may develop into violent burning and/or explosion without effective protective measures. Some lithium-ion battery fire accidents are summarized in Table 1. [2][3][4] Thus, the issue of lithium-ion battery safety has attracted great concern. [4][5][6][7][8] Recently, many experimental and numerical investigations have been conducted with the aim to understand the thermal runaway and fire hazard of lithium-ion batteries, and some progress has been achieved. It was found that cells with a LiFePO4 (LFP) cathode seemed to show better safety characteristics, and batteries with a higher energy content performed the worst in safety tests. 5 Thermal runaway is the most intractable safety issue for lithium-ion batteries. When thermal runaway occurs, the temperature inside the battery reaches 870 °C, 6 which is much higher than its surface temperature. Wang et al. 1 and Feng et al. 4 provided a comprehensive review on the thermal runaway mechanisms. Thermal runaway leads to a mechanism of chain reactions, during which the decomposition of the battery component materials occurs. 1,4 Then, fires or explosions may occur after thermal runaway. Huang et al. 7 investigated the combustion behavior of a lithium-titanate battery, and found that the fire hazard increased with the battery state-of-charge (SOC), and the battery combustion time became shorter with an increase in the SOC. Sun et al. 8 conducted a toxicity analysis of the battery combustion products, which indicated that the SOC significantly affected the types of toxic combustion products, and 100% SOC had the most serious toxicity.
Hence, aiming to reduce the thermal risk of lithium-ion batteries, many researchers 9-13 have tried to achieve active protection by changing the internal structure of the battery. Nevertheless, existing technologies cannot fundamentally prevent thermal hazards of the battery, and fire accidents related to lithium-ion batteries still occur frequently. Consequently, in lithium-ion battery-based energy storage systems, passive protection methods, such as extinguishing techniques, are important for the prevention and control of fire accidents at the present stage.
Many scholars and institutions have conducted relevant experimental studies on suppressing lithium-ion battery fires. [14][15][16][17][18][19][20][21][22] The fire test conducted by the National Technical Information Service (NTIS) 14,15 showed that different Halon products could suppress battery fires, but the battery temperature would still increase after the flame was extinguished. Later, Egelhaaf et al. 16 studied the suppression effect of a water agent with surfactant, a gelling agent and a pure water agent on lithium-ion battery fires.
They proposed that water could be effective for lithium-ion battery fires and that additives helped to largely reduce the amount of water required for fire-fighting. Nevertheless, a lot of white smoke was emitted after the fire was extinguished. Then, a full-scale suppression test was conducted by the Fire Research Foundation. 17 It was suggested that although battery fires could be quickly knocked down by a water jet flow within 25 s, smoke and gas were still released after suppression. In the study of the Federal Aviation Administration (FAA), 18 the results showed that aqueous extinguishing agents such as water, AF-31, AF-21, Aqueous A-B-D, and Novec 1230 (C6F12O) were the most effective and the nonaqueous agents were the least effective. To find a high-efficiency extinguishing agent for lithium-ion battery fires, Wang et al. 19 carried out a series of tests based on the lithium-titanate battery. Their results indicated that a single-cell or small-scale battery pack fire could be extinguished by heptafluoropropane. However, it was also found that the battery may reignite after it was put down due to the violent reactions inside the battery. In their other work, 20 the extinguishing agents CO2 and C6F12O were utilized to suppress lithium-titanate battery fires. Their results showed that C6F12O could suppress the fire within 30 s, whereas CO2 was incapable of fully extinguishing the flame over the full duration of the test. In the test of Det Norske Veritas and Germanischer Lloyd (DNV GL), 21 F500, Fireice, PyroCool, aerosol and water were applied to test their extinguishing effects on battery fires. Their results showed that all the tested extinguishers could put down battery fires if they were used immediately upon the detection of a thermal spike. However, water was demonstrated to have the best ability to cool and maintain low temperatures in the battery. A water mist system containing additives was tested on an iron phosphate lithium-ion battery fire. 22 5% F-500 solution and 5% self-made solution were verified to be more efficient than pure water in the water mist system.
To date, numerous experimental studies on lithium-ion battery fire suppression have been conducted. However, there are still many deficiencies in the current research. For example, fire extinguishing agents can cause dramatic damage to batteries and modules, and the dose of agent may be hard to estimate during extinguishing.
As a new clean Halon-alternative agent, C6F12O combines an outstanding extinguishing performance with an excellent environmental profile. In addition, the insulation and cooling performance of C6F12O are both outstanding, and it is widely used in electrical fire protection. However, the application of the C6F12O agent in suppressing NCM lithium battery fires has not been reported to date. In this research, experiments were performed to investigate the inhibition efficiency of C6F12O on lithium-ion battery fires in a module box.
Battery
A commercial ternary battery with a capacity of 38 A h and a voltage of 4.2 V was used for the fire extinguishing experiments. The battery was prismatic, measuring 150 mm, 92 mm and 27 mm in length, width and thickness, respectively. The cathode and anode electrode materials were Li(Ni1/3Co1/3Mn1/3)O2 (NCM) and graphite, respectively. Before the tests, the batteries were charged to full state of charge (100% SOC), with an open-circuit voltage of 4.2 V.
Experimental apparatus
A schematic view of the experimental platform is depicted in Fig. 1; it mainly consisted of an agent storage tank, an explosion-proof module box, a fire detection tube, a scale, a temperature data acquisition system, several thermocouples and a digital video camera. The explosion-proof module box measured 47.5 × 21.5 × 16 cm³, matching the dimensions of a commercial single-battery module box. A pressure relief vent was placed in the upper part of the box to emit smoke and reduce the internal pressure. The fire detection tube was placed above the cell safety valve at a height of 7.5 cm, and the tube was connected to the agent storage tank, where the C6F12O and high-pressure N2 were stored. When the temperature in the protected enclosure rose to a critical threshold, the fire detection tube melted at the point of highest temperature. The C6F12O agent stored in the tube was then released through the melted hole of the tube onto the source of the fire. Fig. 2 shows that a 400 W electric sheet heater with the same size as the battery was placed next to the battery to induce thermal runaway. The battery and the heater were clamped by two steel holders to simulate the close arrangement of the batteries. Two mica plates were placed between the battery and the steel holder, and between the heater and the steel holder, to simulate the real arrangement of the batteries in the module.
Different masses of C6F12O were loaded into the agent storage tank before each fire extinguishing test. Five experimental cases were conducted, with 0, 0.5, 1.0, 1.5 and 2.0 kg of C6F12O agent initially filled into the tank. Nitrogen was then pressed into the tank until the interior pressure reached 2.5 MPa. The battery and the agent store were weighed before and after each experiment to determine the real mass loss of the battery and agent. Repeated tests were conducted for each condition to ensure the accuracy of the results. The specific experimental conditions are summarized in Table 2.
During the tests, the explosion-proof tank was placed on the scale, and the test was carried out in a confined compartment, as shown in Fig. 1. Once thermal runaway occurred, the heater was switched off and the ventilating fan was switched on.
Experimental condition settings and characteristic temperature
Eight K-type thermocouples (TCs) were adopted to measure the battery surface and flame temperatures. The positions of the TCs are shown in Fig. 3. The temperatures (T_lf) on the long surface of the cell were monitored by TCs 0-2, while the temperature (T_uf) on the cell bottom surface was detected by TC4. A TC was always located on the surface of the heater element to verify adequate heat input. In addition, three TCs were placed 0, 30, and 75 mm above the safety valve to record the flame temperature during the thermal runaway and extinguishing processes. Fig. 4 shows a schematic diagram of the commercial battery module. When thermal runaway occurs, the heat transfer and thermal runaway propagation between adjacent batteries mainly depend on heat conduction through the long surface. Similarly, the heat transfer between the batteries and the electronic circuit relies on the thermal radiation above the safety valve. Moreover, the heat transfer between different modules mainly depends on the heat radiation from the bottom surface. Thus, to investigate the suppression and cooling effects of the C6F12O agent, the temperatures on the long surface (T_lf), the bottom surface (T_uf) and 7.5 cm above the safety valve (T_a), as well as the mass loss during the suppression process, were compared across the different cases.
Processes of thermal runaway and extinguishing
Fig. 5 shows the typical thermal runaway and fire suppression scenario in case 2. As heat accumulated during the heating process, various gases such as CO2 and H2 (ref. 23 and 24) expanded within the limited cell space, which caused the internal pressure to increase dramatically. Due to the restraint of the steel holders, deformation did not occur on the long surface, but it occurred slightly on the side surface. After heating for 272 s, as the cell reached its stress limit, the safety valve broke. White electrolyte together with some gas spilled from the safety valve in a remarkably short period of time, as shown in Fig. 5(a). One second later, with the ignition of the electrolyte and gas, the white smog turned black. Meanwhile, the anode and cathode materials were ejected together with the dense black smog. Due to the large amount of smoke, the jet fire was not recorded by the digital camera. From Fig. 5(c), 3 s after the safety valve opened, the fire detection tube melted due to the hot gas and fire, and subsequently the C6F12O agent was sprayed onto the cell. Nine seconds later, the extinguishing agent release was complete, while the smog was still rather thick. As shown in Fig. 5(b)-(d), the black smog first turned brown and then white. The initial black smoke was mainly composed of the ejected electrode materials and the incompletely combusted electrolyte. After the fire extinguishing agent was released, the combustion of the battery was chemically suppressed and the combustion reaction was weakened, so the black smoke gradually turned brown. Finally, due to the poor cooling effect of the agent, the electrolyte that was not involved in the combustion reaction was vaporized into white vapour at the high temperature. This final process lasted about 60 s. About 60 s after the agent was applied, the smog and vapour were diluted and the battery did not reignite. The burning and suppression behaviors in the other cases were similar to those of case 2. Likewise, the cell fires in the other cases were put out and the cells did not reignite after the consumption of the agent. Due to the different rupture shapes of the safety valve, the timing of the agent application varied among the four cases. The experimental results show that the extinguishing agent was mostly released within 3 to 5 s after the safety valve opened. It was also found that 60 s after the agent was applied, the density of the smog and vapour did not decrease with an increase in the dose of the suppression agent.
Moreover, Fig. 6 shows the case where no C6F12O agent was used. Since no C6F12O was used as an inhibitory agent, a jet fire formed above the safety valve after thermal runaway. Simultaneously, the duration of the brown smoke increased, which also indicates that the combustion reaction inside the battery was more violent in the case without any agent.
The results indicate that the efficiency of the C6F12O suppression agent was remarkable, since it controlled the battery fire within 2 to 3 s and no obvious reignition appeared after suppression. After the agent was applied, the battery produced a large amount of white smoke, which lasted for 60 s or even longer. The amount of white smoke was reduced with an increase in the dose of the agent, but its duration seemed to be independent of the dose.
Battery temperature response during thermal runaway and suppression process
The temperature of the cell surface is the most persuasive parameter for characterizing the thermal runaway and suppression processes. Thus, four TCs were placed around the cell surface to measure the surface temperature, and three other TCs were arranged 0 cm, 3 cm, and 7.5 cm above the safety valve to gauge the air and flame temperatures. Fig. 7 shows the temperature responses without agent in case 1, and Fig. 8 shows the temperature responses before and after the agent was applied in case 3.
From Fig. 7 and 8, the temperature of the cell increased dramatically due to thermal conduction and radiation from the heater. The increasing temperature promoted the decomposition of the solid electrolyte interface (SEI) film and the reaction between the electrolyte and the anode.
After heating for nearly 240-265 s, thermal runaway occurred. A jet fire formed at the safety valve, where the three TCs above the safety valve detected the high-temperature process. During the tests, the maximum flame temperature of around 350-420 °C was much lower than the typical flame temperature, which may be a result of many uncontrollable factors such as the pushing of the agent stream. About 9 s later, with the thermal runaway propagation inside the battery, the cell surface temperature increased dramatically from 80 °C to nearly 450 °C. Among the cases, the temperature rising rate (TRR) of the surface near the anode and cathode was the highest, whereas the TRR of the bottom surface was much lower.
From Fig. 8(a), when the agent was completely released, the surface temperature still rose quickly, but the TRR decreased remarkably. This may be due to the following reasons: (1) the cell was clamped tightly by the holders, and the contact interface between the cell and the agent was limited, so the cooling efficiency of the agent was weakened; and (2) although the flame and some of the reaction chains could be controlled and blocked by the C6F12O agent, it was nearly impossible to hinder all the violent reactions inside the battery. Thus, the battery surface temperature still increased, but the TRR was much slower than before. Notably, there was a minor temperature decline at the center of the cell long surface when the safety valve opened, which is attributed to the ejection of the active substance and the cooling effect of the high-pressure stream inside the battery.
From Fig. 7(a) and 8(a), in case 1 without the C6F12O agent, the average TRR of the cell surface was 4.0175 °C s⁻¹, while in case 3 it was 3.795 °C s⁻¹, which means that the C6F12O agent removed some of the heat and delayed the propagation of heat.
After the C6F12O agent was exhausted, the surface temperatures differed greatly at different locations on the cell. It was found from Fig. 8(a) that the peak temperatures at the bottom and at the center of the long surface were about 470 °C and 490 °C, while those on the long surface near the anode and cathode were almost 570 °C and 550 °C, respectively. Simultaneously, the temperature above the safety valve decreased gradually and then fluctuated around an average value, which decreased from the surface of the safety valve to the upper air. The average value at the surface of the safety valve was nearly 180 °C, while the temperatures at 3 and 7.5 cm above the safety valve were both almost 90 °C.
In summary, the experimental results indicate that the C6F12O agent cannot reduce the battery temperature immediately after the extinguishing process. When the C6F12O agent was exhausted, the battery temperature still increased. However, for different doses of the C6F12O agent, the peak temperature of each surface of the battery differed, as discussed in the next section.
Suppression efficiency of C6F12O
To study the suppression efficiency of C6F12O at different doses, the characteristic temperature responses and the mass changes were compared. Fig. 9 shows the temperature responses of the cell long surfaces after the agent was applied. The blue band in Fig. 9 represents the release time of the C6F12O agent. From Fig. 9, the peak value of T_lf significantly decreased as the amount of agent increased. The average TRR in cases 2-5, from the application of the agent to the peak temperature, was 5.5, 4.08, 3.7 and 2.7 °C s⁻¹, respectively. The results suggest that the exothermic reaction inside the battery becomes much more moderate with an increase in the amount of C6F12O agent, i.e., as the dose increases, the cooling effect of the agent becomes much more pronounced.
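To make the TRR comparison concrete, the sketch below computes an average temperature rise rate from a time-temperature trace over the interval from agent application to the surface-temperature peak. The raw thermocouple data are not reproduced in this paper, so the arrays and interval bounds here are purely illustrative assumptions.

```python
import numpy as np

def average_trr(time_s, temp_c, t_apply_s, t_peak_s):
    """Average temperature rise rate (degC per s) between agent
    application and the surface-temperature peak."""
    t = np.asarray(time_s, dtype=float)
    T = np.asarray(temp_c, dtype=float)
    i0 = np.searchsorted(t, t_apply_s)
    i1 = np.searchsorted(t, t_peak_s)
    return (T[i1] - T[i0]) / (t[i1] - t[i0])

# Illustrative trace: surface heats from 80 degC at agent application
# (t = 275 s) to a 490 degC peak at t = 330 s.
time_s = np.arange(0, 400, 5)
temp_c = np.interp(time_s, [0, 275, 330, 400], [25, 80, 490, 460])
print(average_trr(time_s, temp_c, 275, 330))  # ~7.45 degC/s for this toy trace
```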
It was also found that the TRR and peak temperature in case 1 were lower than those in case 2, as shown in Fig. 9. This is mainly because a small amount of agent may promote a temperature increase in the cell, which indicates the peculiar performance of C6F12O in extinguishing battery fires.
The relationship between the peak of T_lf (T_lf,max) and the dose of agent (X_in) is shown in Fig. 10, which was fitted with a third-order polynomial curve. The T_lf,max in each case was taken as the average value of several repeated tests. According to Fig. 10, the curve can be segmented into two characteristic regions. In the first region, as X_in increased, T_lf,max increased slightly and then peaked at the critical dose (X_inc). Thereafter, in the second region, for inhibitor loadings greater than X_inc, T_lf,max decreased gradually with an increase in X_in. The system thus has an unsuppressed interval and an inhibition interval, depending on the dose of the C6F12O agent. When the dose of C6F12O agent exceeded the critical inhibition dose (X_inc), the agent played an inhibiting role; otherwise, the agent exhibited a negative effect on the inhibition.
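A minimal sketch of the fitting step follows, using the long-surface peak temperatures reported in the Conclusions for the five doses; the third-order polynomial and the grid search for the interior maximum (taken here as X_inc) mirror the construction of Fig. 10, but the exact fitting procedure used for the figure is an assumption.

```python
import numpy as np

doses_kg = np.array([0.0, 0.5, 1.0, 1.5, 2.0])              # cases 1-5
t_lf_max = np.array([571.8, 582.7, 564.4, 547.9, 530.2])    # degC, from the Conclusions

fit = np.poly1d(np.polyfit(doses_kg, t_lf_max, 3))  # third-order fit, as in Fig. 10

# Locate the critical dose X_inc as the maximum of the fitted curve:
# below it, T_lf,max rises with dose (unsuppressed interval); above it, it falls.
grid = np.linspace(0.0, 2.0, 401)
x_inc = grid[np.argmax(fit(grid))]
print(f"X_inc ~ {x_inc:.2f} kg, fitted peak ~ {fit(x_inc):.1f} degC")
```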
This peculiar phenomenon may be related to the special nature of C6F12O. In a rich-burn system, the inhibition effect becomes more obvious as the dose increases. 25 However, in our experiments, the batteries were ignited in a semi-closed tank in which oxygen was amply supplied. Thus, the battery fire inside the tank can be regarded as lean combustion. In a lean-burn system, when the amount of fire extinguishing agent is limited, the number of fluorine atoms released by C6F12O is less than the number of hydrogen atoms. There are then enough H atoms to form HF, which is the most stable fluorine product, and more heat is released in this process than in the formation of other fluorine species. At X_inc, the fluorine-to-hydrogen ([F]/[H]) atomic ratio is 1, 25 and thus T_lf,max reached its peak value among all the conditions. In the second region, T_lf,max decreased gradually. This is because above X_inc there are insufficient H atoms in the system to form HF, and instead partially oxidized species (such as COF2 and CF4) are formed, leading to less heat release. Another theory 26 indicates that at low inhibitor loadings and over-ventilated conditions, adding agent makes the system more reactive, while at higher loadings, higher concentrations have little suppression effect on the reactivity.
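The [F]/[H] argument is essentially stoichiometric, so the number of fluorine atoms delivered by a given dose can be estimated directly from the molar mass of C6F12O (about 316 g/mol, 12 F atoms per molecule). The sketch below does this; the moles of available hydrogen depend on how much electrolyte takes part in the reaction and are left as an unknown input rather than estimated here.

```python
# Molar mass of C6F12O, g/mol
M_C6F12O = 6 * 12.011 + 12 * 18.998 + 15.999   # ~316.0

def fluorine_atoms_mol(agent_kg: float) -> float:
    """Moles of F atoms released by a given mass of C6F12O agent."""
    return agent_kg * 1000.0 / M_C6F12O * 12

def fh_ratio(agent_kg: float, hydrogen_mol: float) -> float:
    """[F]/[H] atomic ratio; the cited analysis places the T_lf,max
    peak where this ratio is about 1."""
    return fluorine_atoms_mol(agent_kg) / hydrogen_mol

print(fluorine_atoms_mol(0.5))   # ~19 mol of F atoms from 0.5 kg of agent
```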
However, due to the uneven distribution of the agent, the inhibition effect of C6F12O at different positions of the cell may differ dramatically. Fig. 11 shows the temperature responses of the bottom surfaces of the cells after the agent was applied in cases 1-5. Both the average TRR and the peak temperature in cases 2 and 3 were significantly higher than those in case 1. The TRR and peak temperature in case 4 slightly increased compared to case 1, which illustrates that the C6F12O in case 4 still had an adverse effect on inhibiting the temperature increase on the bottom surface.
T_uf,max was fitted with a third-order polynomial curve as well, as shown in Fig. 12. The trend of the curve was almost the same as that in Fig. 10. Nonetheless, due to the uneven distribution of the agent, the critical dose differed between positions. Compared to the long surface, the critical dose (X_inc,uf) and the unsuppressed interval on the bottom surface seemed larger. When thermal runaway arose, plenty of black smoke was produced, which contained numerous unreacted electrode materials, including graphite. A large amount of graphite dust therefore settled toward the bottom of the explosion-proof tank owing to its larger relative molecular mass. As a result, the agent concentration at the bottom was much lower than that at the long surface. Hence, X_inc,uf was larger than X_inc,lf, and the unsuppressed interval was more extensive. During the experiments, the mass change of the experimental system was also determined, as shown in Fig. 13. When thermal runaway occurred in the cell, the mass of the system decreased rapidly due to the release of the electrolyte and electrode material. Although the agent was applied, the mass of the system still decreased. This is mainly because the C6F12O could not spread into the interior of the cell, where the violent reaction was continuing and material was being quickly released.
When the suppression effect improved, the system residual mass (Q_sr) was much higher, since the decomposition responsible for the mass loss was weakened by the C6F12O. From Fig. 13, when the extinguishing agent was exhausted, Q_sr in case 2 was lower than that in case 1. Q_sr in case 3 was slightly higher than in case 1, which indicates that a small amount of agent exerts a negative effect on the inhibition. Q_sr in case 4 and case 5 was higher than in the other cases, but the system mass still declined after the agent was released. This implies that the combustion reaction inside the cell was still taking place; however, the reaction rate and material consumption were both at a low level. The higher Q_sr in cases 4 and 5 is possibly because the number of F atoms exceeded the number of H atoms in the system after the agent was released, so some fluorine-containing species (CF4, etc.) with a larger molecular weight were generated and deposited in the bottom part of the module box, which increased the mass of the system. Thereafter, the mass of the system decreased slowly with the diffusion of gases and the deferred reaction inside the battery. For cases 2 and 3, in which less C6F12O was applied, the amount of H atoms was sufficient to consume all the F atoms and generate HF. However, the molecular weight of HF is lower than that of air, so HF was released from the top pressure relief hole of the module box during the test. In addition, when the dose of agent was limited, the inhibitory effect was much poorer, so the reaction inside the cell was more severe and Q_sr was much lower. For the system mass in cases 1, 2 and 3, the slight increase observed may be attributable to the deposition of suspended graphite powder in the module box.
In summary, as the dose of C6F12O agent increased, the residual mass of the battery remained higher and the mass change became much slower, which indicates that a larger amount of agent can slow down the reaction but may not prevent it. Moreover, more C6F12O cannot fundamentally interrupt the reaction; it only delays the reaction process, which can provide more time for system alerts and personnel evacuation.
Proper choice of C6F12O dose
For a lithium-ion battery system, the combustion type of the system should first be defined. If the combustion is a lean-burn process, the critical inhibition dose needs to be considered. However, if the combustion is a rich-burn process, the critical inhibition dose may not need to be considered, because the inhibition effect improves with an increase in the dose of agent. 26 According to the above analysis, due to the uneven distribution of the agent, the critical inhibition dose at different parts of the battery pack may be significantly different, as shown in Fig. 10 and 12. Specifically, for a given lithium-ion battery system, the proper dose of C6F12O may be determined by coupling the critical inhibition doses of several characteristic surfaces. By this method, the proper suppression dose for a single-cell fire in the 47.5 × 21.5 × 16 cm³ module box should be more than 1.504 kg. Thus, for other similar lithium-ion battery-based systems, the proper dose of the C6F12O agent is 9.42 g per W h of battery energy. However, the final dose should be evaluated by also considering weight, cost and other factors, since this method considers only the inhibitory effect.
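The per-energy dose figure can be reproduced from the numbers above: the recommended minimum of 1.504 kg is divided by the battery's stored energy, taken here as capacity times full-charge voltage (38 A h × 4.2 V ≈ 159.6 W h). A quick check, under that assumption about how the energy was computed:

```python
capacity_ah = 38.0          # cell capacity
voltage_v = 4.2             # full-charge open-circuit voltage
energy_wh = capacity_ah * voltage_v          # ~159.6 Wh

proper_dose_kg = 1.504      # minimum dose from the surface-coupling method
dose_g_per_wh = proper_dose_kg * 1000.0 / energy_wh
print(f"{dose_g_per_wh:.2f} g per Wh")       # ~9.42, matching the stated value
```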
Conclusions
In this work, the efficiency of C6F12O in suppressing lithium-ion battery fires was experimentally investigated. The primary results are as follows: (1) The present results show that an open fire can be extinguished by C6F12O within 2 to 3 s. The amount of smoke released during thermal runaway is reduced with an increase in the dose of C6F12O, while the duration of the smoke release is independent of the dose. Moreover, when the dose of agent is limited, the battery may undergo reignition due to deep smoldering inside the prismatic battery.
(2) In the case with steel holders, the cooling effect of C6F12O is limited. Therefore, to control the battery temperature immediately after fire extinguishing, other auxiliary means such as liquid cooling are required.
(3) It was found that the relationship between the dose of the agent and the inhibitory effect is not a simple linear one. With an increase in the dose, the C6F12O agent first exerts a negative effect on the inhibition and then exhibits an inhibitory effect. For doses larger than the critical value (X_inc), the inhibitory effect improves. A critical inhibition dose exists in the system, but due to the uneven distribution of the agent, the critical inhibition dose varies with location in the battery. In this research, after using C6F12O, the peak temperatures of the long surface with 0, 0.5, 1.0, 1.5 and 2.0 kg of C6F12O were 571.8 °C, 582.7 °C, 564.4 °C, 547.9 °C and 530.2 °C, and the peak temperatures of the bottom surface were 456.1 °C, 483.8 °C, 481.4 °C, 476.7 °C and 415.7 °C, respectively. Thus, the proper dose of C6F12O may be determined by coupling the critical inhibition doses of several characteristic surfaces. For the experimental module box, the proper dose of the C6F12O agent is 9.42 g per W h.
Conflicts of interest
There are no conflicts to declare.
"year": 2018,
"sha1": "ccfcd2e5f26d8a53bcaf1d71d7fff5bc88a5b10e",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/ra/c8ra08908f",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ccb6d2711c9c210741d6c37f436dd0c6e4d8e396",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Incidence and clinical value of prolonged I–V interval in NICU infants after failing neonatal hearing screening
Infants admitted to neonatal intensive care units (NICUs) have a higher incidence of perinatal complications and delayed maturational processes. Parameters of the auditory brainstem response (ABR) were analyzed to study the prevalence of delayed auditory maturation or neural pathology. The prevalence of a prolonged I–V interval as a measure of delayed maturation and its correlation with ABR thresholds were investigated. All infants admitted to the NICU of the Sophia Children's Hospital between 2004 and 2009 who had been referred for ABR measurement after failing neonatal hearing screening with automated auditory brainstem response (AABR) were included. The ABR parameters were retrospectively analyzed. Between 2004 and 2009, 103 infants were included: 46 girls and 57 boys. In 58.3% (60 infants) of our population, the I–V interval was recordable in at least one ear at the first diagnostic ABR measurement. In 4.9%, the I–V interval was severely prolonged. The median ABR threshold of infants with a normal or mildly prolonged I–V interval was 50 dB. The median ABR threshold of infants with a severely prolonged I–V interval was 30 dB. In conclusion, in cases where both peaks I and V were measurable, we found only a limited (4.9%) incidence of severely prolonged I–V interval (≥0.8 ms) in this high-risk NICU population. A mild delay in maturation is a more probable explanation than major audiologic or neural pathology, as ABR thresholds were near normal in these infants.
Introduction
Infants admitted to the neonatal intensive care unit (NICU) have a higher incidence of congenital hearing loss as compared to the healthy newborn population [1,2]. Several risk factors have been associated with this increased risk [3][4][5][6]. Moreover, preterm infants often have a delayed maturation of the auditory system as compared to term infants. This results in a vulnerable population regarding audiologic problems.
The I-V interval is often used as a measure of auditory maturation to describe the central conduction time. It is reported to be increased in preterm infants as compared to term infants [7][8][9]. The I-V interval shows an age-dependent decline up to about 2 years of age [10][11][12]. Explanations for the normalization of the I-V interval are increased myelination or increased synaptic efficacy [8,10,[12][13][14][15]. Although it is known that infants admitted to NICUs are at higher risk of developing perinatal complications and abnormal maturational processes, the incidence of prolonged I-V interval in NICU infants who failed neonatal hearing screening is unknown.
This study adds data on the incidence of prolonged I-V interval in a large cohort of NICU infants after failing neonatal hearing screening. We also investigated whether there is a correlation between a prolonged I-V interval and elevated auditory brainstem response (ABR) thresholds. The development of these parameters over time was followed to study auditory maturational changes.
Patients
The Sophia Children's Hospital is a tertiary care center in Rotterdam, the Netherlands. In 2008, the number of live births in the Netherlands was 184,634; 4,003 of these infants required NICU care, of which 639 were admitted to the NICU at the Sophia Children's Hospital.
In the Netherlands, all infants admitted to the NICU for longer than 24 h undergo standard hearing screening by means of automated auditory brainstem responses (AABR). The first AABR screening is usually conducted upon discharge from the NICU. In case of unilateral or bilateral failure on AABR screening, the AABR measurement should be repeated before 6 weeks corrected age (46 weeks post-conceptional age). Upon a second AABR failure, children are referred for audiologic evaluation. This audiologic evaluation consists of ABR, transient evoked otoacoustic emissions (TEOAEs) and tympanometry measurement. After diagnostic evaluation, all infants are seen by an experienced audiologist and otorhinolaryngologist. This should ideally take place before 3 months corrected age (52 weeks post-conceptional age).
Between 2004 and 2009, 3,366 infants were admitted to our NICU, of which 3,316 were screened with AABR. A total of 103 infants were referred for ABR analysis after repeated failure on AABR screening. Data of these ABR recordings were used to retrospectively analyze the ABR parameters.
Apparatus and procedures
All children were discharged from the NICU by the time ABR measurement was conducted. ABR measurements were recorded at our outpatient clinic in a soundproof room.
All children were in natural sleep or in calm conditions throughout the assessment. Both ears were tested sequentially. ABRs were recorded using the EUPHRA-1 system with a Toennies preamplifier. Responses were recorded using silver cup electrodes placed at both mastoids, with a reference at the vertex and a ground electrode on the forehead. A band-pass filter with cut-off frequencies of 20 Hz and 3 kHz was used. The repetition frequency was 23 Hz. Click stimuli were presented starting at a level of 90 dB nHL; the level was decreased in 10 dB steps until no response was found.
TEOAE measurements were performed using the Otodynamics ILO 288 USB II system with the standard settings. The stimulus level was set to 84 dB SPL, and 260 averages were used.
Tympanometry was performed with an Interacoustics AT 235H system using the standard settings and a 1 kHz probe frequency. Clinical experts interpreted the results.
After diagnostic evaluation, all infants were seen at the outpatient clinic by an experienced audiologist and otorhinolaryngologist.
Analysis of response
The absolute latencies and interpeak intervals as well as the response thresholds were recorded. Experienced clinical specialists interpreted the ABR waves. The response latencies in milliseconds were obtained by locating the peak of each wave and reading out the digitally displayed time. The I-V interval was obtained by subtracting the latency of peak I from that of peak V, measured at a 90 dB nHL stimulation level. The response threshold was estimated as the lowest level at which a response was found; the corresponding hearing loss was estimated as 10 dB below this level.
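The derived ABR quantities described above are simple to compute from peak latencies and a level series; a minimal sketch follows, with the dictionary/list layout of the measurements being an assumption made for illustration.

```python
def iv_interval_ms(peak_latency_ms):
    """I-V interval: latency of peak V minus latency of peak I,
    both measured at the 90 dB nHL stimulation level."""
    return peak_latency_ms["V"] - peak_latency_ms["I"]

def abr_threshold_db(levels_db, response_found):
    """Lowest stimulus level (dB nHL) at which a response was found;
    levels were decreased from 90 dB nHL in 10 dB steps."""
    found = [lvl for lvl, ok in zip(levels_db, response_found) if ok]
    return min(found) if found else None

def estimated_hearing_loss_db(threshold_db):
    """Hearing loss is estimated as 10 dB below the response threshold."""
    return threshold_db - 10

print(iv_interval_ms({"I": 1.8, "V": 6.9}))                    # 5.1 ms
print(abr_threshold_db([90, 80, 70, 60, 50], [1, 1, 1, 1, 0])) # 60 dB
```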
The absolute latencies and interpeak intervals from the ABR measurements were compared with reference values based on normal-hearing infants from our clinic [16]. These reference values are corrected for post-conceptional age to account for maturational changes in ABR parameters.
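Classification against the age-corrected reference values then reduces to comparing each measured interval with the reference mean plus one or two standard deviations for the infant's post-conceptional age. A sketch, assuming the reference tables from [16] supply a mean and SD per age (the numbers below are placeholders, not the clinic's actual norms):

```python
def classify_iv_interval(iv_ms, ref_mean_ms, ref_sd_ms):
    """Grade an I-V interval against age-corrected reference values:
    'normal', 'mild' (>= mean + 1 SD) or 'severe' (>= mean + 2 SD)."""
    if iv_ms >= ref_mean_ms + 2 * ref_sd_ms:
        return "severe"
    if iv_ms >= ref_mean_ms + 1 * ref_sd_ms:
        return "mild"
    return "normal"

# Placeholder reference for one post-conceptional age, not real clinic norms:
print(classify_iv_interval(iv_ms=5.5, ref_mean_ms=5.0, ref_sd_ms=0.2))  # 'severe'
```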
TEOAE and tympanometry measurement were used to confirm the diagnosis of conductive hearing loss when available.
Results
Between 2004 and 2009, 3,366 infants were admitted to our NICU, of which 3,316 were screened with AABR. A total of 103 infants were referred for ABR analysis after a second failure on AABR screening: 46 girls and 57 boys. The median gestational age at birth was 34.7 weeks (interquartile range 27.3-39.3 weeks). The median birth weight was 1,930 g (interquartile range 946-2,911 g). The median post-conceptional age at the first diagnostic ABR measurement was 43 weeks (interquartile range 39-48 weeks). Data of repeated ABR measurements were available for 79 of the 103 infants (76.7%). The majority (75%) of infants without a repeated ABR measurement had normal ABR results at the primary assessment. Five infants died after the primary ABR measurement. The median post-conceptional age at the final ABR measurement was 83 weeks (interquartile range 62-124 weeks).
ABR results were analyzed in 103 NICU infants (206 ears). In Table 1, the different types of responses at first ABR measurement are presented. In some cases, all peaks were recordable, whereas in others only a single peak (mostly peak V) or no measurable ABR response was found. The peaks were not always equally measurable in both ears.
In 104 ears (60 infants), the I-V interval was measurable at the first diagnostic ABR after failing neonatal hearing screening. Figure 1 shows the I-V intervals of these infants and the age corrected reference values used in our clinic [16]. A clear age-dependent decline of I-V interval with increasing post-conceptional age is present. A prolonged I-V interval compared with our reference values is mainly seen in the younger post-conceptional ages.
From here on, we focus on infants rather than ears. In 44 infants, the I-V interval was recordable in both ears. In eight infants, the I-V interval was recordable only in the right ear, and in another eight infants only in the left ear. Table 2 shows the number of cases in which the I-V interval was prolonged by one (mildly) or two (severely) standard deviations compared to our reference values. In 15.5% of our population (16 infants), at least a mildly prolonged I-V interval was found; in 4.9% (5 infants), the I-V interval was severely prolonged by two standard deviations. It can be concluded from Table 2 that a prolonged I-V interval very often affects only one ear. Table 3 shows the follow-up of the 16 infants with a prolonged I-V interval. Nineteen percent of infants with a prolonged I-V interval, by either one or two standard deviations, developed a normal I-V interval during follow-up.
ABR response thresholds
To better assess the effect of a prolonged I-V interval on the ABR results, we also analyzed the corresponding ABR thresholds. In infants with a normal I-V interval, the median ABR threshold was 50 dB (interquartile range 32.4-70 dB). In infants with a mildly prolonged I-V interval (by one standard deviation), the median ABR threshold was 50 dB (interquartile range 37.5-70 dB). In infants with a severely prolonged I-V interval (by two standard deviations), the median ABR threshold was 30 dB (interquartile range 30-35 dB).
After follow-up, the median ABR threshold of infants with a normal I-V interval was 50 dB (interquartile range 30-62.5 dB). The median ABR threshold of infants with a prolonged I-V interval after follow-up was also 50 dB (interquartile range 30-60 dB).
In 31.5% of infants with elevated ABR thresholds (≥50 dB), a flat tympanogram was found; it should be noted that tympanometry was not available for all infants. A conductive hearing loss will influence ABR thresholds and peak latencies, but will have no effect on the I-V interval. (Note to Table 1: the peaks were recordable in at least one ear but were not always symmetrically measurable; all infants with no measurable response were affected on both sides.)
Fig. 1: The I-V intervals of 104 ears (60 infants) with a recordable I-V interval at the first diagnostic ABR measurement after failing neonatal hearing screening. The black line represents the reference values used in our clinic, which correct for post-conceptional age.
Discussion
The prevalence of prolonged I-V interval and its correlation with ABR thresholds were analyzed in a population of 103 NICU infants who failed neonatal hearing screening. In 58.3% of infants, the I-V interval was recordable at the first diagnostic ABR measurement after failing neonatal hearing screening. A prolongation of the I-V interval by one or two standard deviations (≥0.4 ms) was found in 15.5% of our population. Jiang et al. [7] found an incidence of abnormal central ABR components in 17% of preterm very-low-birth-weight infants. Although the populations differ with respect to birth weight and failing neonatal hearing screening, the prevalences of prolonged I-V interval as a measure of an abnormal central component concur. It is known that high-risk infants have an increased incidence of prolonged I-V interval as compared to low-risk infants [17].
Several studies regarding normal values and maturational changes of ABR parameters have reported no significant differences between right and left ears [8,18,19]. Therefore, it is remarkable that we found that a prolonged I-V interval often affects only one ear. However, in the three infants with a unilateral prolongation of the I-V interval by two standard deviations, the I-V interval in the other ear was either unrecordable or prolonged by one standard deviation. Therefore, no large interaural differences in the I-V interval were found.
Jiang et al. [7] found that 14% had an elevation of the ABR threshold (>30 dB). In our population, the median ABR threshold was elevated at 50 dB, both in infants with a normal I-V interval and in infants with a mildly prolonged I-V interval. The ABR threshold of infants with a severely prolonged I-V interval was lower, with a median of 30 dB. The lower ABR thresholds in infants with a more severe prolongation of the I-V interval suggest that a severely prolonged I-V interval has no large impact on hearing sensitivity. This also suggests that a delay in maturation is a more probable cause than major audiologic or neural pathology. This is supported by the fact that these infants were among the youngest in our population. The immature auditory system is characterized by increased ABR peak latencies and increased ABR thresholds. We know that auditory maturation can be delayed in preterm as compared to term infants [17]. The maturation effect on the response threshold is relatively small, and the threshold matures sooner than the I-V interval [20]. Therefore, the combination of a normal response threshold and a prolonged I-V interval is likely to occur in case of delayed auditory maturation. In addition, in the presence of a normal ABR threshold, severe neural pathology is unlikely.
In 41.7% of the population, the I-V interval could not be recorded at the first diagnostic ABR measurement. In 22.3%, no measurable ABR response was found. After follow-up, this improved to a normal or prolonged I-V interval for eight infants (7.8% of the total population). In these infants, again, delayed auditory maturation or resolution of middle ear effusion is the most likely explanation. There were only a few infants in whom a normal I-V interval deteriorated to a prolonged or absent I-V interval after repeated ABR measurement.
The aim of universal neonatal hearing screening is to diagnose hearing impairment and start treatment before the age of 6 months [2]. Based on our findings that only 4.9% of infants have a prolonged I-V interval, the timing of the first diagnostic evaluation in our population seems adequate (median post-conceptional age 43 weeks). When a prolonged I-V interval is found, infants should be followed to determine if the I-V interval normalizes. Especially since we know that the maturational processes can be delayed in preterm infants.
Conclusion
The I-V interval and ABR thresholds were analyzed in a population of 103 NICU infants who failed neonatal hearing screening. In 58.3% of the population, the I-V interval could be measured at the primary ABR measurement. In 4.9% of the population, a severely prolonged I-V interval was found. The corresponding ABR thresholds were lower than in infants with a normal I-V interval, suggesting delayed auditory maturation or at least no large impact of hearing pathology.
"year": 2010,
"sha1": "604bd71feb7b93214b1d43362bd84858438b0603",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00405-010-1415-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9505b3d803468483f03a4fa761ae8bd97efbe6bb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Mitoxantrone Induces Natural Killer Cell Maturation in Patients with Secondary Progressive Multiple Sclerosis
Mitoxantrone is one of the few drugs approved for the treatment of progressive multiple sclerosis (MS). However, the prolonged use of this potent immunosuppressive agent is limited by the appearance of severe side effects. Apart from its general cytotoxic effect, the mode of action of mitoxantrone on the immune system is poorly understood. Thus, to develop safe therapeutic approaches for patients with progressive MS, it is essential to elucidate how mitoxantrone exerts it benefits. Accordingly, we initiated a prospective single-arm open-label study with 19 secondary progressive MS patients. We investigated long-term effects of mitoxantrone on patient peripheral immune subsets using flow cytometry. While we corroborate that mitoxantrone persistently suppresses B cells in vivo, we show for the first time that treatment led to an enrichment of neutrophils and immunomodulatory CD8low T cells. Moreover, sustained mitoxantrone applications promoted not only persistent NK cell enrichment but also NK cell maturation. Importantly, this mitoxantrone-induced NK cell maturation was seen only in patients that showed a clinical response to treatment. Our data emphasize the complex immunomodulatory role of mitoxantrone, which may account for its benefit in MS. In particular, these results highlight the contribution of NK cells to mitoxantrone efficacy in progressive MS.
Introduction
Multiple sclerosis (MS) is the most common autoimmune disease of the central nervous system (CNS) leading to severe disability in young adults. It is considered to be initiated by autoreactive T cells that recognize CNS antigens, and in concert with numerous immune cells orchestrate an inflammatory reaction which eventually results in demyelination and neuroaxonal damage [1]. The most typical disease course is relapsing-remitting MS (RRMS) characterized by total or partial recovery after attacks. Most patients initially displaying a relapsing-remitting course eventually convert to a secondary progressive disease course (SPMS) after 10-25 years of disease [2].
Current therapies for MS focus mainly on immune aspects of the disease and benefit principally patients with RRMS, while their efficacy is minimal or even lacking in patients with primary progressive disease. Mitoxantrone (MX) is one of the few treatments licensed for use in SPMS [3]. MX is an anti-neoplastic anthracenedione derivative that inhibits DNA replication and induces single and double strand breaks by intercalating in DNA through hydrogen bonding [4]. The mechanisms of action of MX are still not fully understood, and clear data on its effects on the immune system are limited [5]. Although MX has shown effectiveness in SPMS [6], a substantial proportion of patients fail to respond to treatment, and thus there is an urgent need to identify markers that allow the prediction of individual treatment responses. Moreover, the administration of MX is limited to a treatment-period of 2-3 years by the cumulative dose-dependent risk of severe adverse effects, such as cardiotoxicity [7][8][9][10][11] and potential leukemia development [12]. Nevertheless, understanding how MX benefits SPMS patients is essential for establishing safer and more effective treatments for this group of patients. Therefore, we conducted a longitudinal study on an SPMS cohort, analyzing intra-individual comparisons (baseline versus treatment) of major populations of peripheral blood lymphocytes using flow cytometry.
Study Design and Participants
A prospective monocentric single-arm open-label study design was used to evaluate the effects of MX treatment on immunological parameters in MS patients. The study was approved by the ethics committee of the Charité-Universitätsmedizin Berlin and was conducted in accordance with the Declaration of Helsinki, the guidelines of the International Conference on Harmonization of Good Clinical Practice, and the applicable German laws. All participants gave informed written consent. Patients were screened and enrolled at the neuroimmunology outpatient clinic of the Charité-Universitätsmedizin Berlin, Germany. Inclusion and exclusion criteria are summarized in Table 1. Patients received MX according to the standard protocol [6], meaning that, unless dose reduction was required owing to side effects such as hematological abnormalities, an MX dose of 12 mg/m² body surface area (BSA) was applied intravenously every three months up to a cumulative dose of 140 mg/m² BSA. Treatment with other cytotoxic or immunomodulatory drugs was prohibited during the study.
In this study, we aimed to determine the persistent effects of MX that may account for its long-term benefit in SPMS. Therefore, study-related clinical assessment and blood sampling for the evaluation of immunological parameters were performed immediately before the next MX administration, i.e., three months after the previous MX infusion. Clinical examination and venipuncture occurred at baseline, after six months (directly prior to the third MX cycle) and after twelve months (directly prior to the 5th MX cycle). As controls, we included 10 RRMS patients with no sign of disease activity (mean EDSS of 2), as well as 8 healthy controls. Both groups were gender- and age-matched.
Isolation of PBMCs
Peripheral blood mononuclear cells (PBMCs) were obtained from heparinized peripheral blood of the patients and isolated by density gradient centrifugation (Percoll, Nycomed Pharma, Roskilde, Denmark) according to the manufacturer's instructions. PBMCs were then cryopreserved in liquid nitrogen for later analysis.
Flow Cytometry Analysis
For ex vivo investigation, whole blood was lysed, washed, and stained with antibodies against CD14, CD19 or CD3/CD4/CD8 to identify monocytes, B cells or T cells, respectively. Neutrophils were identified by gating on the granulocyte population in the forward and side scatter profile, combined with CD16 positivity.
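As a rough illustration of this gating logic, the sketch below classifies events from scatter and CD16 intensity; all cutoffs and the synthetic event data are hypothetical, since in the study the gates were set by expert review of the actual cytometry plots.

```python
import numpy as np

def neutrophil_mask(fsc, ssc, cd16, fsc_min=400, ssc_min=300, cd16_min=100):
    """Granulocyte gate on forward/side scatter, then CD16 positivity.
    All thresholds are illustrative placeholders."""
    return (fsc > fsc_min) & (ssc > ssc_min) & (cd16 > cd16_min)

rng = np.random.default_rng(0)
n = 10_000                               # synthetic events, one row per cell
fsc = rng.normal(450, 120, n)
ssc = rng.normal(320, 100, n)
cd16 = rng.normal(110, 60, n)

frac = neutrophil_mask(fsc, ssc, cd16).mean()
print(f"events in neutrophil gate: {frac:.1%}")
```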
Statistical Analysis
The paired t-test was used to calculate p-values for comparisons between two groups (i.e., baseline versus six months and baseline versus 12 months). Repeated-measures ANOVA with the Tukey post-hoc test was used for comparisons among three groups (i.e., baseline versus six and 12 months). Statistical significance was defined as p < 0.05, and depicted as *p < 0.05; **p < 0.01; ***p < 0.001. One-way ANOVA with the Bonferroni post-hoc test was used to compare baseline and 12 months of treatment with healthy controls and with RRMS patients. We verified that the data conformed to a Gaussian distribution. Statistical significance was depicted as #p < 0.05; ###p < 0.001.
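For reference, the comparisons described above map onto standard routines; a sketch in Python with SciPy/statsmodels is given below on synthetic data (the per-patient values are fabricated placeholders, not study data). Note that statsmodels' Tukey helper treats groups as independent, so it only approximates a repeated-measures post-hoc.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic placeholder data: 12 patients x 3 time points
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "patient": np.repeat(np.arange(12), 3),
    "time": ["0m", "6m", "12m"] * 12,
    "value": rng.normal(50, 10, 36),
})

# Paired t-test: baseline vs 12 months (rows stay ordered by patient)
base = df.loc[df["time"] == "0m", "value"].to_numpy()
m12 = df.loc[df["time"] == "12m", "value"].to_numpy()
print(ttest_rel(base, m12))

# Repeated-measures ANOVA across the three time points
print(AnovaRM(df, depvar="value", subject="patient", within=["time"]).fit())

# Tukey post-hoc (approximation: ignores the within-subject pairing)
print(pairwise_tukeyhsd(df["value"], df["time"]))
```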
Cohort Description
Of the 19 SPMS patients screened, 15 were included in this study; four patients did not meet the eligibility criteria. Of the 15 patients enrolled, two dropped out before receiving the third MX dose because of intolerability, and one patient terminated MX treatment before receiving the 5th dose because of severe disease progression. The remaining twelve patients completed the study period of twelve months.
In accordance with the pivotal MX trial in MS [6], clinical assessment of the therapy response was based primarily on the EDSS and secondarily on the occurrence of relapses. Patients who improved or remained stable on the EDSS throughout the study period and who did not experience any relapses were considered responders.
Effects of MX Treatment on the Frequency of Peripheral Immune Cell Populations
To capture the persistent effects of MX on the immune system in SPMS patients, we first determined the effect of the treatment on neutrophils, monocytes, and T and B lymphocytes in whole blood directly after venipuncture, at baseline and after six and 12 months of treatment (Figure 1).
MX treatment did not affect the populations of CD14+ monocytes, CD4+ Th or conventional CD8high T cells at these time points (Figure 1B). In contrast, we observed a significant increase of a subset of immunomodulatory CD8low T cells at six and 12 months (repeated measures ANOVA, p = 0.0002), as well as an increase in the frequency of neutrophils at month 12 (repeated measures ANOVA, p = 0.044, Figure 1C). Moreover, confirming previously reported data, the B cell population was persistently reduced during the entire observation period (repeated measures ANOVA, p < 0.0001, Figure 1C). Furthermore, in order to elucidate whether the MX-induced alterations in the proportions of B cells, CD8low T cells and neutrophils reflected a restoration toward the normal cell levels observed in healthy individuals or toward the levels observed in stable patients, we assessed the percentages of these three immune cell populations in a gender- and age-matched cohort of RRMS patients with mild and stable disease, and in a matched group of healthy controls. Figure 1D shows that only in the case of the immunomodulatory CD8low T cell population did MX treatment seem to restore the proportion of these cells to the levels observed in healthy controls (one-way ANOVA p = 0.015). No significant difference was observed between the frequencies of CD8low T cells before or after MX application and the levels observed in stable MS patients. In contrast, the MX-induced neutrophil enrichment does not seem to reflect any trend towards normalization, since the proportion of neutrophils at 12 months was also significantly elevated compared to both healthy controls and stable MS patients (one-way ANOVA p = 0.001). Likewise, the selective depletion of B cells during MX treatment cannot be considered a normalization of immune cell proportions, at least in terms of absolute numbers.
We also examined the effects of MX on NK cells in fresh blood and observed an increased frequency of NK cells at six months. In light of this observation, we conducted a more detailed analysis of these cells in subsequent investigations using frozen material, in an attempt to clarify specifically how the NK cell compartment is modulated during MX treatment.
Effects of MX Treatment on Frequency and Absolute Numbers of Circulating NK Cells
We then analyzed NK cells at the different time points using thawed PBMCs. NK cells were first categorized according to their expression of CD56 and CD16 in the well defined subsets of CD56 dim and CD56 bright NK cells (Figure 2A). Using a more comprehensive set of markers, we could confirm a significant increase of the total NK cell frequency during the first part of the study, i.e. after six months of treatment ( Figure 2B). This initial increase receded in the latter stages after 12 months of treatment (repeated measures ANOVA, p = 0.001). Also here, we examined whether MX effects on NK cells reflected a restoration toward NK cell frequencies observed in healthy individuals. As shown for B cells and neutrophils, MX treatment seems not to restore the proportion of NK cells or their subsets to levels observed in healthy controls ( Figure 2C). Interestingly, we did not observe any increase of the absolute NK cell numbers during treatment, indicating that the increased frequency was rather an indirect cell enrichment due to the depletion of other major immune cell populations ( Figure 2D).
Effects of MX Treatment on Maturation and Differentiation of Circulating NK Cells
Next, we focused on the effects of MX on the distinct NK cell subsets, the cytotoxic CD56dim and immunomodulatory CD56bright NK cells. The frequencies of both CD56dim and CD56bright NK cells increased concomitantly after six months of treatment (Figure 2B) (repeated measures ANOVA, p = 0.004 and 0.029, respectively). Again, the analysis of absolute cell numbers did not show any statistically significant differences in these populations (Figure 2C), confirming the interpretation of a general and non-specific enrichment of all NK cells at six months. Moreover, neither total NK cells nor the CD56dim and CD56bright NK cell subsets normalized to healthy control levels after treatment (Figure 2C). We and others have recently reported on distinct stages of NK cell maturation with distinct functional properties, which can be characterized by the expression of specific markers including CD27 [14], CD57 [15], CD62L [16], or CX3CR1 [13].
To determine the effects of MX on the NK cell maturation and activation phenotype, we analyzed an array of NK cell markers at months six and 12 of treatment. Although at month six the NK cell phenotype appeared unaffected, we did observe a long-term MX-associated reduction of CD62L expression (p = 0.025), which is indicative of a process of maturation [16]. The expression of CD27 and CD57 remained unaltered (Figure 3A).
Moreover, to test whether this maturation process is coupled to the regulation of NK cell receptors, we examined the expression of the activatory receptors NKp30 and NKp46, and the inhibitory receptors CD94/NKG2A and panKIR, which are known to be regulated during NK cell maturation [13]. Our data show that, coinciding with maturation, both NKp46 (p = 0.041) and NKp30 (p = 0.004) were decreased after treatment (Figure 3B). No changes were detected in the inhibitory receptors CD94/NKG2A and killer cell immunoglobulin-like receptors (panKIR) (data not shown). Thus, altogether these results indicate that MX promoted a shift towards a more mature NK cell phenotype, as reflected by the downregulation of CD62L as well as of NKp46 and NKp30. However, these changes in NKp30, NKp46 and CD62L expression did not reflect any trend toward a restoration to healthy control levels (data not shown).
Association of the Clinical Response to MX and the Modulation of NK Cells
In Figure 3, we show a heterogeneous picture of NK cell modulation by MX. Since the regulation of NK cells has been associated with the response to diverse MS therapies such as daclizumab or interferon beta [17,18], we asked whether changes in NK cell status correlate with the treatment response. The cohort was stratified into responders and non-responders according to the response criteria described above. Despite the limited sample size, we demonstrate that maturation, reflected by the downregulation of CD62L (p = 0.029), NKp46 (p = 0.032) and NKp30 (p = 0.020), occurred exclusively in the cohort of responders (Figure 4A). In contrast, patients who did not benefit clinically from MX treatment showed no significant alterations in the various NK cell markers examined here (Figure 4B).
Discussion
To address the specific and persistent effects of MX treatment on different immune cell subpopulations in SPMS patients, we conducted a longitudinal study on a cohort of 15 SPMS patients. Using flow cytometry, we compared intra-individually (baseline versus treatment) major populations of peripheral blood lymphocytes including NK cells. We demonstrated that, apart from being cytotoxic for B lymphocytes, MX promoted the enrichment of peripheral neutrophils as well as of subsets of CD8 low T lymphocytes and of NK cells. In addition, we observed that sustained MX treatment induced a shift in the maturation of circulating NK cells which was associated with clinical response to MX treatment.
Our initial results addressed the effects of MX treatment on major blood cell populations (Figure 1). We showed a dramatic depletion of B cells following MX treatment. The baseline B cell levels in these patients were comparable to levels seen in stable MS patients and healthy controls (Figure 1D); therefore, the MX-induced depletion cannot be considered a normalization of an atypically elevated proportion of B cells, at least in terms of quantity. B cell depletion by MX was previously reported by other groups, both as an immediate and as a persistent consequence of MX therapy in vivo [19]. Moreover, Chan et al. demonstrated that B cells as well as CD8+ T cells underwent apoptosis as early as 1 h after MX infusion [20]. We did not observe any effects of MX on T cells, which is consistent with the results reported by Gbadamosi et al. [19]. Thus, the immediate MX-induced apoptosis of CD8+ T cells is probably a transient phenomenon. However, we observed a strong and consistent increase in the frequency of a subset of CD8 T cells characterized by a lower expression of the CD8 co-receptor.
CD8low T cells are less cytotoxic than CD8high T cells, and they express IL-4, IL-10, and interferon-gamma [21]. CD8low lymphocytes, and in particular CD8low NK cells, have been shown to be reduced in untreated patients with clinically isolated syndrome and MS [22]. Interestingly, MX appears to restore the frequency of CD8low T cells to the levels observed in healthy individuals (Fig. 1D), suggesting that this regulation may contribute to the efficacy of the treatment in MS. In any case, it is evident that investigations on the role of CD8low T lymphocytes in MS and other neuroimmunological disorders, and their modulation during treatment, are worthwhile topics for future investigations.
We also observed an increased frequency of peripheral neutrophils in MX-treated patients, the proportion of which is elevated even when compared to control stable patients or to healthy individuals (Fig. 1D). This was an unexpected result, as MX was reported to normalize IL-6 production [23], a cytokine that together with G-CSF is known to induce neutrophil production [24]. Neutrophil infiltration into the CNS has been associated with the acute EAE phase [25] and with early axonal pathology in EAE [26]. A recent report indicates that neutrophils may also contribute to MS pathogenesis, as patients display elevated numbers of pre-activated or primed neutrophils in peripheral blood [27]. It is not clear whether neutrophil activity or phenotype are altered by MX treatment. Since an elevated frequency of neutrophils was observed in both responders and non-responders (data not shown), one may speculate that the observed effect was not related to the efficacy of MX treatment. Similarly, the elevated frequency of CD8 low T cells did not appear to be associated with clinical response to MX.
In our study, MX induced an enrichment of the NK cell population after six months of treatment, which subsequently stabilized by month 12 (Figure 2). This was an unexpected finding in view of the potent immunosuppressive function of MX on other lymphocyte populations [28][29][30].
To better understand how MX treatment influences NK cells, we investigated the treatment effects on NK phenotype and activity. We showed that at six months of treatment, both the cytotoxic CD56 dim and the immunomodulatory CD56 bright subpopulations were enriched, without any shift towards a particular phenotype. Moreover, MX did not appear to restore the frequency of NK cells and their subsets to healthy control levels, as shown in Figure 2C. The elevated frequencies of total NK cells and of the CD56 dim and CD56 bright subsets were not accompanied by an elevation of absolute NK cell numbers. Thus, NK cell enrichment seemed to be the indirect consequence of the dramatic suppression of other immune populations, primarily B cells (Figure 1).
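This frequency-versus-absolute-count distinction can be made concrete with a short sketch. The cell counts below are invented for illustration (they are not data from this cohort); the point is that depleting B cells alone inflates the NK cell frequency among lymphocytes even when the absolute NK count is unchanged.

```python
# Hypothetical lymphocyte counts (cells/uL) -- not data from this cohort --
# illustrating how B cell depletion alone inflates the NK cell frequency.
baseline = {"B": 300, "T": 1500, "NK": 200}
post_mx = {"B": 60, "T": 1500, "NK": 200}     # B cells depleted, NK count unchanged

for label, counts in (("baseline", baseline), ("post-MX", post_mx)):
    total = sum(counts.values())
    print(f"{label}: NK = {counts['NK']} cells/uL, "
          f"{100 * counts['NK'] / total:.1f}% of lymphocytes")
# baseline: 10.0% -> post-MX: ~11.4%, an apparent enrichment in frequency
# with no change in the absolute NK cell number.
```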
Interestingly, prolonged MX treatment did not further affect immune cell frequencies, but appeared to modulate immune cells in a more specific way. In particular, we observed a significant reduction in the frequency of circulating immature NK cells after 12 months of treatment. During maturation, NK cells downregulate the expression of CD27, CD57 and CD62L [16] as well as the expression of the NKp30 and NKp46 activatory receptors and the inhibitory CD94/NKG2A receptor complex. Here, we showed that at 12 months of treatment, the maturation marker CD62L and the activatory receptors NKp46 and NKp30 were significantly downregulated, suggesting a late and persistent effect of MX on the maturation of circulating NK cells. This could be in part the consequence of the expected elevated susceptibility of immature proliferating NK cells to MX-induced cytotoxicity. Indeed, we previously demonstrated that immature NK cells proliferate much better than mature NK cells in response to IL-2 [13]. However, cytotoxicity alone may not entirely explain this rather late effect of the treatment, which manifested only after repeated applications of MX, at 12 months. It was previously reported that MX treatment enhances the expression of Th2-related cytokines [31]. IL-4 is known to induce NK cell maturation [32]. Therefore, we speculate that MX-promoted enhancement of IL-4 production may also contribute to NK cell maturation in treated patients. Similarly, in agreement with the report of Kienzle et al. on the generation of CD8 low T cells in the presence of IL-4 [33], the strong IL-4 production induced by MX may contribute to the expansion of CD8 low T cells observed in our study.

[Figure 2 caption: After six months of treatment, the NK cell population was significantly enriched and then decreased from six to 12 months of treatment. The CD56 dim and CD56 bright NK cell subsets were significantly increased after six months of treatment, but no difference was detected from six to 12 months. (C) Frequencies of NK cells and NK cell subsets in SPMS patients before and after MX treatment compared to the frequencies observed in matched healthy individuals. (D) Absolute counts of NK cells and CD56 subsets: NK cells and the CD56 dim and CD56 bright NK cell subsets remained unchanged over time. *p < 0.05; **p < 0.01; FSC, forward scatter; SSC, side scatter; mth, months; ns, not significant.]
In MS, deficient NK cell activity has been extensively reported by several groups during the last 40 years [34][35][36][37]. Thus, the observed increased frequency of mature and active NK cells may contribute to the benefit of MX in patients with MS, although we could verify that MX treatment did not restore NK cells to a maturation or activation status observed in healthy donors.
Treatment-related enrichment of particular NK cell subsets, or induction of NK cell activation, has been associated with the therapeutic success of numerous MS drugs including interferon-beta, glatiramer acetate, and daclizumab [17,18,38-40]. Stratifying our data according to MX response revealed that the elevated frequency of mature NK cells was observed exclusively in patients that responded to the therapy (Figure 4). This result is especially promising, considering the robust significance seen in this relatively limited data set. Moreover, of all the effects of MX we observed here, only the NK cell effects were related to the clinical response to MX. It remains elusive why NK cell maturation occurred primarily in patients that responded to the treatment. The fact that both responder and non-responder patients showed the same ratio of immature/mature peripheral NK cells at baseline suggests that baseline differences between the two patient groups likely did not account for this effect. Thus far, we have not explored additional factors, such as selective production of Th2 cytokines [31] or selective alteration of the functionality of antigen-presenting cells [41], which may contribute directly or indirectly to NK cell maturation, and which may also differ in responders and non-responders.
Thus, while alteration of immune cell frequencies in the peripheral blood was an early and sustained effect of MX in MS patients, the alteration of NK cell phenotype appears to be a later mechanism of action (observed only after 12 months of treatment), related to the clinical benefits of the treatment.
In conclusion, we have shown for the first time that the NK cell population is promoted by MX treatment in vivo, accompanied by a shift towards a mature NK cell phenotype associated with the response to the therapy. The contribution of NK cells to beneficial effects of MX in SPMS may serve as a first step to establish novel and safe treatments for progressive MS.
"year": 2012,
"sha1": "5282c2641a806419f816365afd8637c8b047d761",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0039625&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d988738ccbae3d122bd93f17b046cb2a881e49d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
HBG2 −158 (C>T) polymorphism and its contribution to fetal hemoglobin variability in Iraqi Kurds with beta-thalassemia minor
PURPOSE: Hemoglobin (Hb) F% is increased in up to half of beta-thalassemia (β-thal) carriers. Several polymorphisms have been linked to such variability in different populations, including HBG2 −158 (C>T) (Xmn I polymorphism) on chromosome 11. The current study was initiated to determine the role of this polymorphism in such variability among Iraqi Kurds. MATERIALS AND METHODS: A total of 102 consecutive patients diagnosed as β-thal minor were enrolled. The enrollees had their diagnosis based on peripheral blood counts and high-performance liquid chromatography to determine HbA2 and HbF. All enrollees had their DNA extracted by the phenol-chloroform method, and the Xmn I polymorphism was detected by restriction fragment length polymorphism-polymerase chain reaction. RESULTS: The mean age (standard deviation [SD]) of the 102 enrollees was 25.4 (14.0) years, and the enrollees included 48 males and 54 females. The Xmn I polymorphism was identified in the heterozygous state in 46 (45.1%) patients and in the homozygous state in one patient (0.98%). Thus, the minor allele frequency of this polymorphism was 0.235 in the studied group. There were no significant differences in red cell indices and HbA2% in carriers of the minor allele compared to noncarriers, while HbF% and absolute HbF concentrations were significantly higher in the former subgroup (P = 0.032 and 0.014, respectively). This polymorphism's contribution to HbF variability was found to be 5.8% in the studied sample. Furthermore, those with HbF ≥2% were 3.2-fold more likely to carry the minor allele. CONCLUSIONS: The Xmn I polymorphism is frequently encountered in Iraqi Kurds with β-thal minor, and it is significantly associated with higher fetal hemoglobin in these patients.
Introduction
Beta-thalassemia (β-thal) is an autosomal recessive inherited disorder of hemoglobin (Hb) synthesis, associated with a defect in the synthesis of β-globin chains. [1] Its inheritance is associated with a variety of phenotypes ranging from severe transfusion-dependent thalassemia major to usually asymptomatic thalassemia minor, with an intermedia phenotype in between. [2] The major phenotype is due to homozygous or compound heterozygous β-thal gene inheritance, while the minor is heterozygous for the mutant allele. The genetics of the intermedia phenotype are much more complex. [3] In addition to the type of β-thal mutation, other modulators are responsible for the variability in phenotype in this inherited disorder. One such modulator is inheritance of determinants associated with increased γ chain production, with a resultant increase in HbF leading to reduction in the α:β ratio. [4] There are three major quantitative trait loci (QTLs) on chromosomes 11, 6, and 2 related to this γ chain modulation. One of the most significant single-nucleotide polymorphisms relevant to these QTLs is located at the HBG2 locus on chromosome 11 (HBG2 −158 [C>T] [rs7482144]). Its minor allele creates a site for the restriction endonuclease Xmn I, hence the name Xmn I polymorphism. [4][5][6][7] While earlier studies have focused on and documented the contribution of the Xmn I polymorphism to phenotype and HbF levels in homozygous and compound heterozygous β-thal in various populations including Iraq, [6,8-11] such contribution, particularly to HbF, has been subject to controversy in heterozygous β-thal (thal minor) [12-15] and has not been addressed in Iraqi patients, which is why this study was initiated.
Materials and Methods
A total of 102 consecutive patients (aged 2 years or older) diagnosed as β-thal minor by two specialist laboratories in Duhok, Kurdistan, Iraq, were recruited. All enrollees had a full blood count and red cell indices determined using a hematology analyzer (Sysmex XP-300, USA). This instrument is calibrated daily with calibrators provided by the manufacturer. Quantitation of HbF and HbA2 and exclusion of other hemoglobinopathies were performed by high-performance liquid chromatography using the D-10 short thalassemia program (Bio-Rad Laboratories Inc., CA, USA). Thereafter, patients had their DNA extracted by a phenol-chloroform method. [16] The extracted DNA was then amplified using an AB2720 thermocycler (Applied Biosystems, USA) for a 650 bp sequence in the promoter region of the Gγ-globin gene. The primers used were as follows: forward 5' AAC TGT TGC TTT ATA GGA TTT T 3' and reverse 5' AGG AGC TTA TTG ATA ACT CAG AC 3'. The polymerase chain reaction (PCR) program consisted of pre-PCR denaturation at 94°C for 2 min, followed by 30 cycles of denaturation at 95°C for 1 min, annealing at 60°C for 1 min, and extension at 72°C for 1.5 min, with a post-PCR final extension for 5 min at 72°C. [16] The resultant 650 bp amplicon was digested with the enzyme Xmn I according to the manufacturer's instructions (Promega, USA), and the digestion products were run on a 2% agarose gel and visualized after ethidium bromide staining via an ultraviolet transilluminator (HVD Life Sciences, Austria). This study was approved by the Ethics Committee at the College of Science, University of Duhok, Iraq, and informed consent was obtained from all participants. Statistical analysis utilized the SPSS software program (release 20, SPSS Inc., Chicago, IL, USA). The chi-square test and Student's t-test were used when applicable. To assess the effect of the Xmn I polymorphism on HbF concentration variability, the latter was natural-log transformed (to ensure linearity), and then linear regression was applied. P < 0.05 was considered statistically significant.
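As an illustration of this regression step, the sketch below computes the genotype's contribution to HbF variability as the increment in R² when the T-allele count is added to a model that already contains age and sex. The data are synthetic (individual-level data are not reproduced here), and the effect-size coefficients are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 102                                            # cohort size from the study
age = rng.uniform(2, 61, n)                        # years
sex = rng.integers(0, 2, n)                        # 0 = male, 1 = female
genotype = rng.choice([0, 1, 2], n, p=[0.54, 0.45, 0.01])   # T-allele count
# Synthetic natural-log HbF with an invented genotype effect (illustration only)
log_hbf = 0.2 * genotype + 0.002 * age + rng.normal(0, 0.5, n)

def r_squared(predictors, y):
    # Ordinary least squares with an intercept; returns the model R^2
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([age, sex], log_hbf)           # covariates only
r2_full = r_squared([age, sex, genotype], log_hbf)
print(f"genotype contribution to HbF variability: {100 * (r2_full - r2_base):.1f}%")
```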
Results
The 102 enrolled β-thal minor patients had ages ranging from 2 to 61 years, with a mean age of 25.4 ± 14.0 years, and included 48 males and 54 females. Their main hematological parameters are outlined in Table 1. HbF% varied from 0.4% to 7.7%, with a mean of 1.7% ± 1.25%. In 26.5% of the enrollees, HbF% was equal to or in excess of 2%.
HBG2 −158 C>T polymorphism was detected in 47/102 patients, including 46 in the heterozygous state (CT) and one in the homozygous state (TT). This would give a minor allele frequency (MAF) of 0.235. Figure 1 shows examples of gel electrophoresis of Xmn I-digested amplicons in those with the homozygous (TT) and heterozygous (CT) states for the minor allele, as well as in those with the homozygous state for the wild allele (CC).

There were no significant differences in red cell indices or HbA2% between carriers of the minor allele and noncarriers, except for HbF (%), which was significantly higher in carriers of the T allele (P = 0.032). This was even more significant when the absolute HbF concentrations were compared between carriers and noncarriers (P = 0.014).
The contribution of carriage of the minor allele to HbF concentration variability was found to be a significant 5.8% when age and sex were taken as covariates in the linear regression (effect size 0.294, P = 0.016). Moreover, enrollees with HbF% ≥2% were a significant 3.2-fold (CI 1.2-7.6) more likely to be carriers of the minor allele (T) than those with lower HbF% (P = 0.012) [Table 2].
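As a check on the reported allele frequency, the short calculation below reproduces the MAF directly from the genotype counts given above.

```python
# Genotype counts reported above: 46 CT, 1 TT, remainder CC
n_total = 102
n_CT, n_TT = 46, 1
t_alleles = n_CT * 1 + n_TT * 2      # heterozygotes carry one T allele, homozygotes two
maf = t_alleles / (2 * n_total)      # 48 / 204
print(f"MAF = {maf:.3f}")            # 0.235, matching the reported value
```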
Discussion
HbF levels are variably increased in β-thal carriers, as documented by many studies all over the world, with up to half of the cases having a slightly increased HbF%. [3,17] These increases are attributed to preferential survival of red cell precursors that synthesize relatively more γ chains. Several factors have been implicated in this increase in γ-globin chain production, although twin studies have confirmed that genetic factors are the main culprit. [18] One of the earliest genetic factors implicated was HBG2 g.-158 C>T rs7482144 (Xmn I polymorphism), which has been reported to be associated with a 3-11-fold increase in Gγ-globin chain production, by increasing the rate of transcription of the gene, in conditions characterized by hematopoietic stress. [5,19,20] In this series of Iraqi Kurds who are β-thal carriers, nearly 60% had increased HbF%, including 26% with HbF in excess of 2%, which is slightly higher than in many previous studies. [3,13,17] This observation further justifies the need for addressing the issue in our population.
The MAF of HBG2 g.-158 C>T as determined in the studied sample of β-thal carriers was 0.235, which is intermediate between the rates of 0.36 in β-thal intermedia and 0.13 in β-thal major reported earlier in the same population. [9] The higher rate in thal intermedia compared to major is well documented and further supports the role of Xmn I as a modulator of disease severity in β-thal. [7,9,21] Population studies have revealed that the MAF of the Xmn I polymorphism varies between 0.10 and 0.26 in different populations. [22] Studies focusing particularly on the MAF in β-thal minor, however, are not frequent. Studies on β-thal carriers from Brazil, Northern Pakistan, Turkey, and Hong Kong reported rates of 0.19, 0.16, 0.18, and 0.07, respectively. [23-26] These rates may be relevant to the underlying β-thal genotypes in these populations. Although the current study did not include molecular characterization of the underlying β-genotypes, earlier studies have documented that IVS-II-1 (G>A), codon 44 (-G), codon 5 (-CT), IVS-I-1 (G>A), and codon 39 (C>T) are the five most common β-thal mutants in carriers from our region. [27] The first two mutations have been reported as associated with the Xmn I polymorphism in 89% and 75%, respectively, of carriers from Turkey, [26] while the latter two were linked to Xmn I in a lower but considerable proportion of cases in the same study. More or less similar observations were also documented by a study on Italian carriers, where the Xmn I polymorphism was frequently associated with IVS-II-1 and less so with codon 39 and IVS-I-1. [15] Similarly, IVS-II-1 was highly associated with the Xmn I polymorphism in Greek carriers. [28] Furthermore, IVS-II-1 was quite frequently associated with Xmn I in an earlier study on thalassemia intermedia in our region. [6]

Data from studies on β-thal intermedia and β-thal major support a role of the Xmn I polymorphism in higher HbF production and amelioration of phenotype. [9] This contribution is related to the ability to increase γ chain production in homozygous and compound heterozygous patients, where there is evident erythropoietic stress. Such stress seems less evident in thalassemia carriers (heterozygotes), though a mild degree of ineffective erythropoiesis, presumably due to extramedullary destruction of cells with excess alpha chains, has been documented in these carriers. [18] This may explain the significant association of this polymorphism with increased HbF in carriers, an association which is even more evident at HbF ≥2% in the current study. Similarly, several authors found a significant association between the Xmn I polymorphism and HbF levels in Chinese, Brazilian, and Portuguese β-thal carriers. [12,22,25,29,30] On the other hand, an association could only be documented with the combination of the Xmn I and (AT)x(T)y polymorphisms in Italian carriers. [15] Conversely, other investigators failed to demonstrate an association between this polymorphism and HbF in carriers. [13,14,31] The failure to document an association with HbF in the latter studies may be related to the background β-genotype or small sample size.

It is important to note that the 5.8% contribution of this polymorphism to HbF variability in the current study means that there is a need to study the contributions of polymorphisms in the other two major QTLs, namely BCL11A and HBS1L/MYB. The latter has been found to contribute to variability in HbF among β-thal carriers in other populations. [12,29] Other culprits that may have played a role in this variability, as documented by studies in other populations, and that need scrutiny are the β-genotype and alpha gene triplication. [13,14,32]
Conclusions
It appears that the Xmn I polymorphism is quite frequent in Iraqi Kurd carriers of β-thal and is associated with significantly higher HbF proportions in these carriers, though it does not explain all HbF variability; other polymorphisms related to the three major QTLs, β genotypes and haplotypes, as well as alpha gene numbers, need to be addressed by future studies.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
"year": 2018,
"sha1": "c0b7e4bd2d21ee50bb1a9399fc7c977323354424",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.4103/JLP.JLP_22_18.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07acdb729caabb671c6cf45691e1de94fff96060",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of Eccentric Contractions Induced Electrical Stimulation Training System on Quadriceps Femoris Muscle
We developed an eccentric contraction induced electrical stimulation (ES) training system. The purpose of this study was to investigate whether eccentric contraction induced ES enhances knee extension torque compared with typical ES. Twenty-two young untrained men (age: 23 ± 3 years) in the acute response trial (a single training session) and seven untrained men in the long period training trial (6 weeks) were studied. We measured muscle thickness and knee extension torque evoked by ES with the eccentric contraction training system (ES + ECC) or by ES alone for the quadriceps muscle. The levels of pain and discomfort were evaluated using a numeric rating scale (NRS) and heart rate variability. The knee extension torque with ES + ECC was higher than that with ES alone in the acute response trial. There were no significant differences in the levels of pain and discomfort between ES and ES + ECC. Additionally, ES + ECC training for 6 weeks was effective in increasing quadriceps muscle thickness and knee extension torque. In contrast, ES-alone training failed to increase muscle thickness and knee extension torque. These results suggest that eccentric contraction induced ES has the potential to become an effective intervention to promote muscle strengthening.
Introduction
Resistance exercise can be effective for muscle strengthening [1]. The effect of resistance exercise is known to depend on the intensity of muscle loading [2]. Exercise involving eccentric contractions has a greater effect on muscle strengthening because a higher intensity of muscle loading can be generated by eccentric contraction compared to concentric or isometric contractions [3] [4].
Previous studies have suggested that eccentric exercise has advantages compared with concentric training, e.g., increases in peak torque and strength-related performance parameters [5] [6]. Therefore, eccentric exercise might be a more efficient exercise for muscle strengthening than concentric or isometric exercise.
It is well established that electrical stimulation (ES) can be effective in inducing muscle strengthening [7] [8] [9] [10]. The effectiveness of ES is determined by the intensity of muscle loading, as in resistance exercise [11] [12] [13]. The muscle loading induced by electrical stimulation is influenced by the current intensity, current frequency, and waveform [12] [14]. ES with low-frequency direct current is commonly used in electrical stimulation therapy [14]. However, it has been suggested that ES with low-frequency direct current cannot elicit muscle contraction in the deep portion of the limb due to its low conductivity [11]. Slow-fiber muscles are located mainly in the deep portion of the extremities and the trunk, and fast muscles are located in the superficial portion [15]. Deep muscles have important functions, e.g., joint stability and maintaining posture [16]. Our previous study suggested that middle-frequency electrical stimulation could induce strong contraction in skeletal muscle located in the deep portion of the calf compared with low-frequency electrical stimulation [13]. Therefore, middle-frequency ES has the potential to be an effective intervention for deep muscle strengthening.
In contrast, ES causes pain and discomfort [17]. Additionally, the levels of pain and discomfort with ES depend on current intensity [17]. Therefore, the intensity of ES cannot simply be increased to obtain strong muscle contraction, and it is necessary to develop new methods for strengthening muscles in the deep portion of the extremities without pain. As a solution to this problem with ES for muscle strengthening, an ES method combined with voluntary eccentric contraction, in which an agonist performs a voluntary concentric contraction against an electrically stimulated antagonist, was developed [18]. However, this eccentric contraction training has some limitations [19]. Patients severely affected by neuromuscular diseases might not be suited to this form of eccentric contraction training because they need to be able to generate agonist muscle forces to overcome the resistance provided by the electrically stimulated antagonist. Additionally, this method cannot set the joint range or maintain a constant angular velocity at the joint. To overcome these problems, we developed an ES with eccentric contraction system. The purpose of this study was to investigate, as an acute response, whether the eccentric contraction induced electrical stimulation training system enhances knee extension torque compared with a typical electrical stimulation method, and to evaluate the effects of a long period of training.
Participants
This study recruited twenty-two young untrained men (mean age ± SD: 23 ± 3 years; height: 176 ± 7 cm; mass: 67 ± 7 kg) who responded to an invitation to participate in the acute response trial (Experiment 1) and seven young untrained men (mean age ± SD: 23 ± 8 years; height: 175 ± 9 cm; mass: 67 ± 3 kg) who responded to an additional invitation to participate in the 6-week training trial (Experiment 2). In the acute response trial, measurements were made on the left limb. In the long period training trial, both limbs were trained. The subjects were free from known cardiovascular, neurological, or orthopedic problems and volunteered to participate in the study. The subjects were asked to avoid stimulants (e.g., alcohol, caffeine, chocolate) and exercise on the test day, and did not perform any intense exercise for 2 days prior to the tests. The subjects were informed of all the procedures, purposes, benefits, and risks of the study and signed an informed consent form, which was approved by the Medical Ethical Committee of Kobe University in accordance with the Declaration of Helsinki. Experiment 1 was conducted from February to April 2015 and Experiment 2 from July to September 2015.
Electrical Stimulation with Eccentric Contraction System
Our eccentric contraction induced electrical stimulation system consists of two parts: 1) a continuous passive movement (CPM) device for the knee joint; and 2) an ES device with a controller. The CPM device includes an actuator (EASM6, Oriental Motor, Tokyo, Japan) to generate knee movements at a velocity which can be set freely, and an exoskeleton to fix the limb. The exoskeleton was designed to allow a knee joint range of motion (ROM) from 5° (fully extended) to 100° (flexed).
The ES device (ES-360, Ito, Tokyo, Japan) was used to stimulate the quadriceps femoris muscle, focusing on the vastus intermedius (VI) muscle, only while the knee joint was flexing; thus, the VI muscle could perform eccentric contractions without voluntary contraction. A controller was used to link the CPM and the ES device, controlling the knee joint movement using the current intensity modulation function and triggering the stimulation only while the knee joint was in flexion (Figure 1).
During the training, the subject was required to maintain a supine position, and at the start position the hip and knee joint angles were fixed at 30° and 5°, respectively.
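The gating logic just described can be summarized in a short control-loop sketch. The angle window and the flexion-only trigger follow the text; read_knee_angle, es_on, and es_off are hypothetical placeholders for the actuator/stimulator interface, not the actual firmware of the ES-360 or the CPM controller.

```python
import time

# Stimulation window from the start position (5 deg) up to 30 deg of flexion
FLEX_START_DEG, FLEX_STOP_DEG = 5.0, 30.0

def control_loop(read_knee_angle, es_on, es_off, dt=0.01):
    """Turn ES on only while the knee is moving into flexion inside the window."""
    prev_angle = read_knee_angle()
    while True:
        angle = read_knee_angle()
        flexing = angle > prev_angle                       # knee moving into flexion
        in_window = FLEX_START_DEG <= angle <= FLEX_STOP_DEG
        if flexing and in_window:
            es_on()                                        # eccentric contraction phase
        else:
            es_off()                                       # passive return, no stimulus
        prev_angle = angle
        time.sleep(dt)
```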
ES Procedures
In Experiment 1, the acute response trial compared measurements before and after an exercise session (a single bout of training). In Experiment 2, the long period training trial compared measurements before the first training session and 48 h after the last training day. One burst of electrical stimulation was delivered every 3 sec (time on: 1 sec; time off: 2 sec) for 1 min, followed by 5 min of rest. An exercise session consisted of six consecutive stimulation sessions. Eccentric contractions were induced at an angular velocity of 30°/sec as described previously [18]. In this study, we set the stimulation time to 1 second. It has been suggested that the quadriceps femoris muscle plays a crucial role at flexion angles from 0° to 30° during walking [20]. Additionally, the VI muscle is crucial to dynamic stability control and may make the greatest contribution to knee extension during dynamic contractions [21] [22]. To stimulate at flexion angles from 0° to 30°, we set the stimulation time to 1 sec. The electrical stimulation (carrier frequency: 2500 Hz; burst-modulated frequency: 100 Hz) was delivered through a pair of 9 × 5 cm gel-coated electrodes attached to the region of the VI muscle belly as described previously [23].
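A quick arithmetic check of these timing parameters (all values taken from the text):

```python
angular_velocity = 30.0                  # deg/s, CPM setting
flexion_travel = 30.0 - 5.0              # deg, start position (5 deg) to 30 deg
print(flexion_travel / angular_velocity)  # ~0.83 s, consistent with the 1-s on-time

bursts_per_session = 60 // 3             # one burst every 3 s for 1 min -> 20 bursts
on_time = bursts_per_session * 1         # 1 s on-time per burst
print(on_time, "s of stimulation per 1-min session, x6 sessions per exercise bout")
```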
Torque Assessment with ES
First, isometric knee extension torque was recorded at maximal voluntary contraction (MVC) using a Cybex dynamometer (CYBEX NORM, CYBEX Division of LUMEX, New York, USA) set at 0°/sec angular velocity while the subjects sat strapped to a chair. Subjects completed 3 maximal isometric repetitions of the dominant limb for 10 sec at 5° of knee flexion (full knee extension, 0°) to match the knee flexion angle of the start position. Each maximal isometric repetition was followed by a 3 min rest interval. During voluntary contractions, participants were encouraged verbally and received visual feedback during each repetition. The greatest peak torque achieved was taken as the maximal voluntary contraction torque. After the MVC force was determined, the current intensity was set. Current intensity was increased gradually and was determined as the subject's maximum tolerated current level, but no more than 80 mA, with the system in the start position; the mean value was 49.5 ± 3 mA. The maximal tolerated intensity was identified as the intensity of stimulation at which the subject said that he could no longer tolerate an increase in intensity. After the tolerance current level was set, we set the current intensity to induce 30% MVC force, which was considered comfortable and safe in the present study. The quadriceps muscle torque at maximum voluntary contraction was 97 ± 9 N·m. The quadriceps muscle torque during ES was 37 ± 4 N·m, confirming that the intensity of ES was set to approximately 30% MVC force.
Muscle Thickness with Exercise
While subjects reclined on the training system in the assigned posture at the start position, the thickness of the VI muscle was measured with an ultrasound imaging device with a 9 MHz linear transducer (EUB-415, Hitachi Medico, Tokyo, Japan) at rest (REST), at MVC, and during electrical stimulation (ES), respectively.
Seven healthy untrained men were recruited for the reliability analysis. The intraclass correlation coefficient (ICC) for the test-retest reliability of the muscle thickness measurements was 0.991 (95% CI 0.971-0.996) for the vastus intermedius; this result indicated a high degree of reproducibility in measuring the thickness of this muscle.
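For readers who wish to reproduce this kind of test-retest analysis, the sketch below implements a single-measure ICC(2,1) following Shrout and Fleiss. The thickness values are hypothetical, and the exact ICC form used in the study is not specified, so this is an illustration rather than the study's analysis script.

```python
import numpy as np

def icc_2_1(data):
    """Two-way random, single-measure ICC(2,1) (Shrout & Fleiss) for an
    (n subjects x k sessions) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions
    ss_e = ((data - data.mean(1, keepdims=True)
                  - data.mean(0, keepdims=True) + grand) ** 2).sum()
    ms_e = ss_e / ((n - 1) * (k - 1))                               # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical test-retest VI thickness measurements (mm) for 7 subjects
thickness = np.array([[14.1, 14.2], [15.3, 15.2], [13.8, 13.9],
                      [16.0, 16.1], [14.7, 14.6], [15.1, 15.1], [13.5, 13.6]])
print(f"ICC(2,1) = {icc_2_1(thickness):.3f}")
```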
Torque Assessment with Exercise
In order to evaluate the acute response with the developed training system, a dynamometer (GT-30, OG Giken, Okayama, Japan), which was incorporated into the developed training system so as to adhere to the front part of the ankle, was used to measure torque at MVC and peak torque during knee flexion with (ES + ECC) and without the training system (ES).
Pain Evaluation
To evaluate pain during use of the training system, NRS (Numeric Rating Scale) scores were compared between the rest condition (REST) and ES with (ES + ECC) and without the training system (ES). Additionally, to evaluate the subjects' current tolerance, NRS was compared between a current intensity 10% below that producing 30% MVC force (20% MVC), the current producing 30% MVC force (30% MVC), and a current 10% above it (40% MVC).
For the NRS, pain intensity was rated on a numerical scale from 0 to 10 (0 = no pain and 10 = worst pain imaginable). The electrocardiogram (ECG) signals were obtained from a portable ECG recorder (Check My Heart, Daily Care BioMedical, Chungli, Taiwan) and transferred to a computer loaded with heart rate variability (HRV) analysis software. The HRV sampling frequency was 250 samples/sec, and recordings were taken for 5 min. Two components of the power of the R-R interval (RRI: ms·ms) were calculated: low frequency (LF: 0.04-0.15 Hz) and high frequency (HF: 0.15-0.4 Hz). The participants were allowed to settle comfortably in a supine position on the training system in a quiet environment for 5 min, as the rest condition. Then, recording of the ECG signal for HRV analysis started. The LF/HF ratio was measured at rest (REST), during ES, and during eccentric contraction induced ES (ES + ECC), respectively. To measure the change in HRV during training, we set the last training period to 5 min.
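A minimal sketch of the LF/HF computation described above, assuming an array of R-R intervals. The band limits follow the text, while the 4 Hz resampling rate and the Welch spectral estimator are common choices rather than the vendor software's documented settings; the R-R series here is random for illustration.

```python
import numpy as np
from scipy.signal import welch

rri_ms = 800 + 50 * np.random.randn(300)          # hypothetical ~5-min tachogram, ms
t = np.cumsum(rri_ms) / 1000.0                    # beat times, s
fs = 4.0                                          # resample tachogram at 4 Hz
t_even = np.arange(t[0], t[-1], 1 / fs)
rri_even = np.interp(t_even, t, rri_ms)

# Power spectral density of the detrended, evenly sampled R-R series
f, pxx = welch(rri_even - rri_even.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f < 0.40)
lf = np.trapz(pxx[lf_band], f[lf_band])           # LF power (band integral)
hf = np.trapz(pxx[hf_band], f[hf_band])           # HF power (band integral)
print(f"LF/HF = {lf / hf:.2f}")
```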
Training Protocol with Electrical Stimulation with Eccentric Contraction Training System
Previous studies have suggested that a load of at least 50% MVC is necessary to induce muscle strengthening [24] [25]. In addition, pain depends on current intensity [26]. Therefore, in the long period training trial, the current intensity was set to produce 50% MVC force with eccentric contraction induced electrical stimulation (current intensity: 36 ± 7 mA; NRS: 5 ± 1). The subjects were trained with the training system on the left limb and with ES alone on the right limb, three times per week for 6 weeks, following the previous study [18]. Before and after training, the thickness of the VI muscle and the maximum knee extension torque of both limbs were measured. Before the first training session and 48 h after the last training session, subjects reclined in a supine position and the thickness of the VI muscle was measured with an ultrasound imaging device with a 9 MHz linear transducer. The captured images were analyzed using the ImageJ software (NIH, Bethesda, MD, USA). The pre- and post-training values for the VI muscle were used to calculate the change in thickness. After the muscle thickness measurements, isometric knee extension torque was measured as the maximum knee extension torque using the Cybex dynamometer (CYBEX NORM, CYBEX Division of LUMEX) set at 0°/sec angular velocity while the subjects sat strapped to a chair. Subjects completed a maximal isometric repetition of the right and left limbs for 10 sec at 60° of knee flexion (full knee extension, 0°), respectively. Each maximal isometric repetition was followed by a 3 min rest interval. During voluntary contractions, subjects were encouraged verbally and received visual feedback during each repetition. The greatest peak torque achieved was taken as the maximal knee extension torque. The pre- (right limb: Pre-ES; left limb: Pre-ES + ECC) and post- (right limb: Post-ES; left limb: Post-ES + ECC) torque values were compared between pre and post and between the right and left limbs, respectively.
Data Analysis
Data are presented as mean ± SD. In the acute response trial, the thickness of the VI muscle, the quadriceps muscle torque, and the LF/HF measures were compared with one-way repeated-measures analysis of variance (ANOVA). When a significant difference was found, post hoc comparisons were performed using a Bonferroni correction. In the long period training trial, isometric knee extension torque was compared between pre- and post-training, with differences assessed by two-way ANOVA. The Tukey-Kramer post hoc test was performed if the two-way ANOVA indicated a significant difference. Student's t-test was performed to compare the VI muscle thickness between pre- and post-training.
Statistical significance was set at P < 0.05. To achieve a significant difference at α = 0.05 with 80% power, the necessary and sufficient n was calculated using the mean and SD from a pilot study involving similar experimental groups and from a previous study on the effects on muscle thickness [19].
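As an illustration of this sample-size step, the sketch below uses statsmodels; the effect size d = 1.4 is an assumed value, since the pilot-study means and SDs are not reproduced here.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the group size giving 80% power at alpha = 0.05 for an assumed
# standardized effect size (Cohen's d) of 1.4
n_required = TTestIndPower().solve_power(effect_size=1.4, alpha=0.05, power=0.8)
print(f"required n per group ~ {n_required:.1f}")
```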
Results

Experiment 1

The Thickness of the Vastus Intermedius Muscle
The thickness of the VI muscle was greater at MVC (P < 0.05) and during ES (P < 0.05) than in the REST condition (Figure 2). In addition, there was no significant difference between MVC and ES.
The Quadriceps Muscle Torque
The quadriceps muscle torque at MVC was higher than that during ES and ES + ECC (Figure 3). However, the torque during ES + ECC was higher than that during ES, reaching approximately 69% of MVC force.
Numeric Rating Scale (NRS) in Relation to Increased Current Intensity
The NRS was higher in the 30% MVC force trial than in the 10% down trial, in which the current was 10% lower than that producing 30% MVC force, and lower than in the 10% up trial, in which the current was 10% higher (Figure 4). Therefore, our results suggest that the current intensity producing 30% MVC force was close to the maximum tolerated current level in the present study.
The Changes in the LF/HF Ratio of HRV
The LF/HF ratio of HRV during ES was higher than that at REST (Figure 5). However, there were no significant differences in the LF/HF ratio of HRV between ES and ES + ECC.
Numeric Rating Scale (NRS) of ES with Eccentric Contraction
The NRS scores during ES and ES + ECC were higher than that at REST (Figure 6). Additionally, there were no significant differences between the ES and ES + ECC trials.
Experiment 2

The Effects of ES with Eccentric Contraction for Long Period Training
The change in VI muscle thickness after 6 weeks of training was greater for ES + ECC than for ES (Figure 7). There were no significant differences in knee extension torque between Pre-ES and Pre-ES + ECC (Figure 8). However, the knee extension torque for Post-ES + ECC was higher than that for Pre-ES + ECC. Additionally, Post-ES + ECC was higher than Post-ES. In contrast, there was no significant difference between Pre-ES and Post-ES.
Discussion
The main finding of the present study is the promotional effect of eccentric contraction training using the training system synchronized with ES on the enhancement of loaded muscle torque, without enhancing the pain and discomfort induced by ES. In addition, eccentric contraction induced ES over a 6-week training period was effective for muscle strengthening. In contrast, ES-alone training failed to strengthen muscle. Therefore, our findings suggest that eccentric contraction induced ES might be not only a more effective training for muscle strengthening than ES alone, but also a way to avoid the increases in pain and discomfort induced by the high-intensity electrical stimulation usually needed to cause strong muscle contraction.

The present study demonstrated increases in the thickness of the VI during ES similar to those during MVC in Experiment 1. Recently, ES with middle-frequency burst-modulated alternating current has also been used to stimulate skeletal muscles, as well as low-frequency direct current [14]. Petrofsky et al. reported that middle-frequency alternating current has higher conductivity than low-frequency direct current [11]. We have shown that ES with middle-frequency burst-modulated alternating current elicited muscle contraction in the deep muscle of the rat hindlimb [13]. In the present study, ES with middle-frequency burst-modulated alternating current increased the thickness of the VI. Therefore, it is suggested that ES with middle-frequency burst-modulated alternating current can induce effective contraction in deep muscle.

Our results showed that the muscle torque during ES + ECC was approximately 69% of MVC force, whereas that during ES alone was 30% of MVC force, in Experiment 1. In the present study, we developed the ES with eccentric contraction system from two points of view. The first point was to enhance the promotional effects on muscle strengthening by using the eccentric contraction system. The principle of overload is generally recognized as fundamental to the strengthening process, meaning that when the target muscle is loaded by resistance training, the muscle will adapt so as to enhance the effects of training, involving physiological changes, e.g., muscle hypertrophy or neural adaptations, following increased muscle loading [19] [27]. It has been suggested that eccentric contraction exercise can enhance the loading on the target muscle in comparison with isometric and concentric contraction [28]. The results of the present study showed that the muscle loading with ES was increased by using the eccentric contraction system. Therefore, the ES with eccentric contraction system in this study would be effective for enhancing the effect of ES alone, leading to muscle strengthening.
Our results (Figure 4) showed that the current producing 30% MVC force was close to the maximum tolerated current intensity for training. However, muscle loading of at least 50% MVC force is needed to induce muscle hypertrophy in healthy subjects [24]. The muscle loading induced by electrical stimulation could also be enhanced by increasing the electrical current intensity. However, the level of pain and discomfort increases with current intensity during electrical stimulation. It has been reported that some subjects complain of severe pain with electrical stimulation for muscle strengthening [29]. This pain can be so uncomfortable that many subjects prefer not to use this modality even though it is therapeutically beneficial [29]. Thus, the current intensity set for ES must balance tolerable pain against maximum muscle loading. Suppressing increases in severe pain and discomfort was our second point. In the present study, there were no significant differences in NRS between ES and eccentric contraction induced ES at the current intensity producing 30% MVC. Additionally, there were no significant differences in the LF/HF ratio between the ES and eccentric contraction induced ES trials. Heart rate variability (HRV) has been used as a biomarker of autonomic nervous system function. HRV is a reliable method to obtain information on sympathetic and parasympathetic contributions to heart rate, and several studies have shown that pain increases sympathetic activity [30] [31]. Frequency fluctuations of HRV in the LF range are considered markers of sympathetic and parasympathetic nerve activity, and HF fluctuations are considered markers of parasympathetic nerve activity [30] [31]. Additionally, the LF/HF ratio is considered an index of sympathetic nerve activity, and thus an index of pain and discomfort, since sympathetic nerve activity increases with the level of pain and discomfort [31] [32]. Therefore, in the present study, the LF/HF results suggest that the pain and discomfort induced by ES were not enhanced by eccentric contraction induced ES. It has been suggested that nociceptors in skeletal muscle, the receptors that detect nociceptive stimuli such as electrical stimulation and muscle stretch, are located mainly in the fascia. Nociceptors are related to muscle pain and discomfort. In addition, the high-threshold mechanical receptor, one type of nociceptor, responds to muscle extension [33] and detects overstretching of the muscle [34]. In the present study, the quadriceps femoris muscle would not have been overstretched because the knee was moved over flexion angles from 5° to 30°. Therefore, the ES with eccentric contraction system did not enhance the intensity of pain and discomfort induced by ES.
The results of the present study showed that middle-frequency ES could induce effective muscle contraction in deep muscle, and a promotional effect of the ES with eccentric contraction system was found in the VI muscle thickness after 6 weeks. In contrast, ES-alone training failed to strengthen muscle. These results show that although ES alone induced insufficient muscle loading for muscle strengthening, the ES with eccentric contraction system induced sufficient muscle loading for muscle strengthening while avoiding an increase in current intensity. Therefore, eccentric contraction induced ES could lead to muscle strengthening without severe pain and discomfort, even when ES alone would induce insufficient muscle loading for muscle strengthening.
The present study has some limitations. First, it was conducted with healthy men. Therefore, it is unclear whether the results apply to patients with neuromuscular diseases, patients with disuse atrophy of the lower limbs, or loss of skeletal muscle mass during aging (sarcopenia). Second, the protocol, such as current intensity and angular velocity, for effective therapy using eccentric contraction in various patient groups is unknown. Therefore, we plan to perform further studies to answer these questions.
Conclusion
Eccentric contraction induced ES enhanced muscle torque in the quadriceps femoris muscle in comparison to ES alone. Additionally, eccentric contraction induced ES did not increase pain and discomfort. Moreover, eccentric contraction induced ES in the 6-week training trial was shown to be effective for muscle strengthening. These results suggest that eccentric contraction induced ES has the potential to become an effective intervention to promote muscle strengthening.
Figure 1 .
Figure 1. Apparatus for eccentric contraction induced ES exercise and its application. (a) Schema of movement during ES exercise; (b) Electrical stimulator used for ES; (c) Actuator controller (1: start, 2: current up, 3: current down, 4: emergency stop); (d) Pictures of movement during ES exercise.
Figure 2 .
Figure 2. The muscle thickness of the vastus intermedius muscle measured by ultrasound imaging in the acute response trial at rest (REST), at MVC, and during ES (ES). The thickness is presented as mean ± SD. * indicates a significant difference compared to REST at P < 0.05.
Figure 3 .
Figure 3. The quadriceps muscle torque at MVC, during ES, and during eccentric contraction induced ES (ES + ECC) in the acute response trial. The quadriceps muscle torque is presented as mean ± SD. * and † indicate significant differences compared to MVC and ES, respectively, at P < 0.05.
Figure 4 .
Figure 4. Numeric rating scale (NRS) scores with ES at the maximum tolerated current (10% up), the current producing 30% MVC force (30% MVC), and the current 10% below that producing 30% MVC force (10% down). The subject's maximum tolerated current level was identified as the intensity of stimulation at which the subject said that he could no longer tolerate an increase in intensity. NRS is presented as mean ± SD. * and † indicate significant differences compared to the 10% down current and the 30% MVC force current, respectively, at P < 0.05.
Figure 5 .
Figure 5. The changes in the LF/HF ratio of HRV at rest (REST) and during ES (ES) and eccentric contraction induced ES training (ES + ECC). The ECG signal was recorded for heart rate variability (HRV) analysis. Frequency fluctuations of HRV were identified in the range 0.04-0.15 Hz (low frequency, LF) and in the range 0.15-0.4 Hz (high frequency, HF). The LF/HF ratio was calculated as the ratio of LF to HF power. The LF/HF ratio is presented as mean ± SD. * indicates a significant difference compared to REST at P < 0.05.
Figure 6 .
Figure 6. Numeric rating scale (NRS) scores at rest (REST), during ES (ES), and during eccentric contraction induced ES (ES + ECC). NRS is presented as mean ± SD. * indicates a significant difference compared to REST at P < 0.05.
Figure 7 .
Figure 7. The muscle thickness of the vastus intermedius muscle measured by ultrasound imaging in the long period training trial. The change after 6 weeks of training is shown for ES alone (ES) and for ES with the eccentric contraction system (ES + ECC). The thickness is presented as mean ± SD. ‡ indicates a significant difference compared to ES at P < 0.05.
Figure 8 .
Figure 8. The quadriceps muscle torque for ES and eccentric contraction induced ES in the long period training trial. The pre- (right limb: Pre-ES; left limb: Pre-ES + ECC) and post- (right limb: Post-ES; left limb: Post-ES + ECC) torques were compared between pre and post and between the right and left limbs, respectively. The quadriceps muscle torque is presented as mean ± SD. * and † indicate significant differences compared to Post-ES and Pre-ES + ECC, respectively, at P < 0.05.
"year": 2017,
"sha1": "b2f6cf6c7caaa7a698748ffa2039767232d4ca4f",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=79338",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b2f6cf6c7caaa7a698748ffa2039767232d4ca4f",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Enhancement of superconductivity in NbN nanowires by negative electron-beam lithography with positive resist
We performed a comparative experimental investigation of superconducting NbN nanowires which were prepared by means of positive and negative electron-beam lithography with the same positive-tone poly(methyl methacrylate) (PMMA) resist. We show that nanowires with a thickness of 4.9 nm and widths of less than 100 nm demonstrate, at 4.2 K, a higher critical temperature and higher densities of the critical and retrapping currents when they are prepared by negative lithography. Also, the ratio of the experimental critical current to the depairing critical current is larger for nanowires prepared by negative lithography. We associate the observed enhancement of superconducting properties with the difference in the degree of damage that the nanowire edges sustain in the lithographic process. The whole range of advantages offered by negative lithography with positive PMMA resist gives this technology high potential for improving the performance metrics of superconducting nanowire single-photon detectors.
Introduction
During the last 15 years, the technology of superconducting nanowire single-photon detectors (SNSPDs) has been under intense development, continuously improving SNSPD performance. Many efforts in the fields of optics, solid-state physics and thin-film technology have been made in order to find ways to increase the detection efficiency (DE) at longer wavelengths and to decrease the timing jitter and dark count rate (DCR) of such detectors.
The development of SNSPDs follows several main directions. The search for optimal materials and the improvement of the quality of superconducting nanowires are among the most straightforward ones.
First SNSPDs were made from thin NbN [1] and NbTiN [2] films. These materials can be reliably elaborated and patterned; they have a hard superconducting gap and critical temperatures well above the temperature of liquid helium, which facilitates their use. However, they exhibit a roll-off in the wavelength dependence of the detection efficiency which begins at wavelengths around 1 µm. Compared to nitrides, uniform thin films of amorphous superconductors like WSi [3], NbSi [4], MoGe [5] and MoSi [6] were shown to be more promising for effective detection of near-infrared photons with larger wavelengths. The drawback is a noticeably lower critical temperature and correspondingly smaller energy gaps, which unavoidably result in lower critical current densities. Moreover, it has been found [5,6] that SNSPDs from these low-temperature materials exhibit larger timing jitter than those from nitrides [7]. That is why high-quality ultrathin nitride films remain a reference in the highly competitive field of SNSPDs.
It has been confirmed experimentally that reduction of the cross-section of nanowires is a good approach for enhancement of the DE of SNSPD for photons with small energies [8,9,10]. However, this approach has several limitations. Ultra-thin films undergo superconductor-to-insulator transition [11], they are characterized by a reduced absorbance [9] and by a stronger spatial non-uniformity of the superconducting energy gap [12].
The decrease in the cross-section of nanowires also results in a smaller experimental critical current I C, which causes a relatively low amplitude-to-noise ratio (ANR) and significant timing jitter in the voltage transients after photon absorption events [13]. Furthermore, thin and narrow nanowires require special efforts to optimize optical absorption in the meanders of typical SNSPDs [14]. Theoretical models [15,16] of photon detection in SNSPDs predict an increase of the cut-off wavelength λ0 of the DE(λ) dependence when the current applied to the nanowire approaches the depairing current I C dep. In practice, SNSPDs operate at a bias current which is slightly less than the experimentally achievable critical current I C. The latter is usually smaller or even much smaller than the depairing current. Hence, the useful spectral bandwidth of SNSPDs could in principle be extended to larger wavelengths by pushing I C towards the depairing current limit I C = I C dep. This idea is very attractive. Once the current ratio I C /I C dep is enhanced, it would enable operation of detectors (made of materials with high T C) at relatively high temperatures and high bias currents. There have been already several reports on possible approaches towards enhancement of the I C /I C dep ratio. One of the reasons why the ratio remains significantly below unity is the crowding of the supercurrent in the vicinity of sharp bends inherent to nanowires in the form of a meander [17]. The current crowding can be effectively suppressed by optimization of the detector layout, in which the radius of the bends plays the major role. An increase of the bending radius decreases the strength of current crowding, thus lowering both the dark count rate and the timing jitter [18,19,20]. It has also been shown that adjustment of the stoichiometry of NbN films towards higher Nb content results in an increased I C /I C dep ratio and thereby in a broadening of the spectral bandwidth of SNSPDs [21]. In general, detection efficiency is higher for SNSPDs with local values of the I C /I C dep ratio uniformly distributed over the nanowires and bends. This ratio is affected by different types of defects, such as cross-section variations due to non-uniformity of the thickness or width of the film, nanowire edge defects, or internal structural defects weakening the superconducting order.
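For orientation, the depairing current of a dirty-limit superconducting strip is often estimated with a Kupriyanov-Lukichev-type expression, I_dep(0) = 0.74 w Δ0^{3/2} / (e R_s √(ħD)), with Δ0 ≈ 1.76 k_B T_C. The sketch below evaluates it for representative thin-NbN parameters; these numbers are assumptions for illustration, not the measured parameters of the films studied in this work.

```python
import numpy as np

# Physical constants (SI)
e = 1.602e-19          # elementary charge, C
k_B = 1.381e-23        # Boltzmann constant, J/K
hbar = 1.055e-34       # reduced Planck constant, J*s

# Representative thin-NbN parameters (assumed values)
T_c = 12.0             # critical temperature, K
R_s = 400.0            # sheet resistance, Ohm per square
D = 0.5e-4             # electron diffusivity, m^2/s
w = 100e-9             # nanowire width, m

Delta_0 = 1.76 * k_B * T_c                                   # BCS gap at T = 0
I_dep = 0.74 * w * Delta_0**1.5 / (e * R_s * np.sqrt(hbar * D))
print(f"I_dep(0) ~ {I_dep * 1e6:.0f} uA")                    # tens of microamperes
```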
All of these reduce the current ratio and make the detector characteristics worse. Therefore, the development and optimization of approaches which target the growth of highly uniform thin films and their patterning into defect-free superconducting nanowires are in the permanent focus of numerous research groups. In this paper we focus on the effect of patterning on the superconducting properties of NbN nanowires.
Usually nanowires below 100 nm in width are obtained from thin superconducting films by electron-beam lithography. The films are spin-coated with an electron-beam resist (PMMA, HSQ, or ZEP, among others), which is then locally affected by the electron beam. After the lithography, unprotected parts of the film are subsequently removed by one of the available etching techniques. Each step of the process may potentially influence the SNSPD performance. For instance, the choice of the resist type and the spin-coating procedure determine the smallest pixel size which it is possible to write and, consequently, the nanowire edge roughness.
Ideally, to achieve the ultimate I C /I C dep ratio for a specific SNSPD layout, the size of defects should be reduced below the coherence length, so that they do not affect the critical current. Since the lithography resolution is usually improved by reducing the thickness of the resist layer, the straightforward tendency would be to make it as thin as possible. However, the stability of the resist during etching should be sufficient to prevent the etching of the film under the protecting resist layer and at its edges. Thinner layers are more fragile against the etching attack, leading to rougher nanowire edges.
Here we demonstrate that, via strong overexposure, the standard (positive-tone) PMMA electron-beam resist can be made readily suitable for a negative lithographic process, and that this procedure enables a significant improvement of the superconducting characteristics of NbN nanowires. The nanowires with a width of less than 100 nm demonstrate an enhanced superconducting critical temperature, enhanced densities of the critical and retrapping currents, and an enhanced I C /I C dep ratio. We relate this enhancement to the reduced imperfection of the nanowire edges and invoke theoretical considerations [15,16] to estimate the expected improvement in SNSPD performance introduced by the negative-PMMA lithography.
A. Thin-film deposition
Thin NbN films were deposited simultaneously on two identical 10 × 10 mm² single-side polished substrates of R-plane-cut sapphire via reactive magnetron sputtering of a pure Nb target in an atmosphere of mixed argon and nitrogen gases. The partial pressures of argon and nitrogen were P Ar = 1.9 × 10⁻³ mbar and P N2 = 3.9 × 10⁻⁴ mbar, respectively. The substrates were placed, without being thermally anchored, on the surface of a copper holder, which was in turn placed onto a heater plate. During the deposition of the NbN layer the plate was kept at a temperature of 850 °C. The deposition rate of NbN was 0.14 nm/s at a discharge current of 275 mA.
These conditions ensure the stoichiometry of NbN films which results in the highest critical temperature for a given thickness. The film thickness d = 4.9±0.2 nm was measured by a stylus profiler.
B. Nanowire patterning
The films were patterned into nanowires via electron-beam lithography with the PMMA resist and subsequent Ar ion milling.
The PMMA resist is a well-known positive-tone resist which is attractive for users due to its easy handling, high temporal stability, reproducibility of lithographed structures, and high resolution. The PMMA resist is available with different sensitivities. The required thickness of the resist layer can be easily achieved by varying the speed of spinning and/or the amount of solid content in the resist. The PMMA itself and the required developer and stopper are water-free materials; this is preferable for patterning films from water-sensitive materials. A certain disadvantage of the PMMA resist is the relatively high temperature (between 150 and 190 °C) which is required to bake the resist after spinning. Such a high temperature stimulates diffusion of oxygen, increasing its penetration depth into the film, where, for films from Nb compounds, oxygen deteriorates or suppresses superconductivity. Furthermore, this resist is only moderately stable against plasma-assisted etching processes. This limits its applicability, especially in the case of the thin layers which are required for writing ultimately small features. Although the PMMA electron-beam resist was originally introduced as a positive-tone resist, it can also be used for negative lithographic processes.
When a primary beam of electrons with an energy of ≈10 keV (far below the threshold for displacement of carbon atoms [22]) enters the PMMA resist and the substrate, it produces low-energy secondary electrons (SE), which are mainly responsible for the scission of the PMMA polymer chains [23]. The reduced molecular weight of PMMA resist exposed with a dose in the range of ≈100 µC/cm² makes it soluble in solvents with a high enough activation energy [24]. In this case PMMA acts as a positive-tone resist.
At high exposure doses (≳1 mC/cm²) the PMMA chains decompose into very short low-molecular-weight fragments, which start to form a dense carbonized film. Structures made of this film are insoluble in the standard PMMA developer, and even in acetone, due to cross-linking and the formation of covalent bonds between the fragments. In this case PMMA acts as a high-resolution negative-tone electron-beam resist [25].
Two identical NbN films were patterned simultaneously, one by the positive and the other by the negative process, in order to eliminate different degrees of aging of the films. The substrates were spin-coated with PMMA 950k resist to a layer thickness of 95 nm. In order to minimize degradation of the films, the resist was baked on a hot plate at the lowest recommended temperature of 150 °C for 5 minutes.
The layout was the same for all samples in both the positive- and negative-PMMA series and represented a straight nanowire with a width W ≲ 100 nm, typical for SNSPDs, embedded between small contact pads. In order to avoid current crowding at the steps from the pads, which are a few tens of micrometers wide, to the nanowire, the steps were rounded off with a radius r ≈ 4 µm. In each series, the width of the nanowires was varied between 50 and 100 nm. Additionally, several strips with a similar layout but with a width of a few micrometers were made to serve as reference structures. We assumed that the influence of the edges is negligible for such wide strips and that their properties are mostly determined by the patterning and aging. The actual widths of the nanowires and strips were measured using scanning electron microscopy (SEM).
In the case of the "standard" positive-PMMA process, two separate islands were exposed by a 10 kV electron beam with a dose of about 100 µC/cm². The islands were separated by a slit which, in its middle, had a width equal to the design width of the nanowire and widened at the edges to encompass the rounded steps to the contact pads. After development in the standard developer for 30 s (30% MIBK in 2-propanol at 23 °C) and rinsing in the 2-propanol stopper, the exposed areas were removed. The unexposed resist between the islands remained on the surface of the film (Fig. 1a) and protected the film during the subsequent ion-milling process. Large contact pads, sized to a few millimeters for ultrasonic bonding, were prepared by photolithography with a mask which additionally protected the already patterned nanowire and small pads during the second etching step and separated the samples from each other, making them ready to measure.
In the negative-PMMA process, the exposure dose was increased by two orders of magnitude, to 10 mC/cm², while the energy of the electrons was kept unchanged. In contrast to the positive-PMMA process described above, here the electron beam exposed only the nanowire and the small pads (Fig. 1b), and after development these areas remained on the film surface. The pattern was developed in acetone for 1.5 min and then rinsed in 2-propanol. Right after that, large contact pads were prepared by standard photolithography with a mask which left the area with the nanowire and rounded steps under negative-PMMA open but overlapped with the small pads.
After development of the photoresist, the complete image, containing the central part with negative-PMMA and the large contact pads, was transferred into the NbN film by ion milling.
We note that the thickness of the PMMA resist in the areas exposed with the large dose of 10 mC/cm² shrank from 95 nm to about 50 nm (this value was measured right after exposure) and remained unchanged after the development in acetone. No measurable changes were observed in the thickness of PMMA resist exposed with the low dose of ≈100 µC/cm². The etching rate of the negative-PMMA resist was found to be about 2.7 nm/min. This rate is comparable to the etching rate of NbN by Ar ions with an energy of 200 eV and a current density of 1 mA/cm² at a 10° angle of incidence. Under the same etching conditions, the etching rate of the positive-PMMA resist was almost 2.5 times higher (≈6.7 nm/min). After the etching, the residual resist (both positive and negative) was removed from the surface of the NbN film using a combination of warm acetone, ultrasonic shaking, and gentle mechanical brushing. The possibility to remove a hardened PMMA mask after etching is essential for multi-layer structures such as a single-spiral SNSPD [20].
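As a rough consistency check of these rates, the following sketch estimates how much resist is consumed while milling through the film. It assumes, purely for illustration, that NbN is removed at a rate similar to the quoted 2.7 nm/min; in practice an over-etch margin would be added.

```python
# Resist-budget estimate for Ar ion milling of a 4.9 nm NbN film.
# Assumption (illustration only): NbN is milled at roughly the same rate
# as the negative-PMMA resist, as the text states the rates are comparable.

FILM_THICKNESS_NM = 4.9
RATE_NBN = 2.7          # nm/min, assumed comparable to negative PMMA
RATE_NEG_PMMA = 2.7     # nm/min, measured
RATE_POS_PMMA = 6.7     # nm/min, measured

mill_time = FILM_THICKNESS_NM / RATE_NBN   # minutes to clear the film
loss_neg = RATE_NEG_PMMA * mill_time       # resist consumed, negative process
loss_pos = RATE_POS_PMMA * mill_time       # resist consumed, positive process

print(f"milling time ~ {mill_time:.1f} min")
print(f"negative PMMA consumed ~ {loss_neg:.1f} nm of 50 nm")
print(f"positive PMMA consumed ~ {loss_pos:.1f} nm of 95 nm")
```

Even with a generous over-etch both masks survive; the point is rather that the much thinner hardened negative mask offers comparable protection per nanometer of resist.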
However, it has to be noted that the two-orders-of-magnitude increase of the exposure dose, together with the increase of the area which has to be exposed in the negative-PMMA lithography, results in an increased writing time of the electron beam. In turn, the increase in writing time requires additional efforts to ensure the long-term stability of the electron-beam parameters and the long-term suppression of external acoustic, mechanical, and electromagnetic interferences disturbing the lithography apparatus.
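The scaling behind this remark is simply t_write ≈ D·A/I_beam: exposure dose times exposed area divided by beam current. The numbers below are illustrative placeholders, not values from the paper:

```python
def write_time_s(dose_C_per_cm2, area_cm2, beam_current_A):
    """Ideal shot time t = D * A / I, ignoring stage moves and beam settling."""
    return dose_C_per_cm2 * area_cm2 / beam_current_A

# Relative cost of the negative process: the dose grows 100-fold and the
# exposed area grows as well; the 5x area factor is a made-up illustration.
dose_ratio = 10e-3 / 100e-6          # negative vs. positive exposure dose
area_ratio = 5.0                     # assumed growth of the exposed area
print(f"writing time grows by ~{dose_ratio * area_ratio:.0f}x")

# Absolute example with assumed numbers: 100 um^2 at 10 mC/cm^2, 100 pA beam.
print(f"{write_time_s(10e-3, 100e-8, 100e-12):.0f} s")  # -> 100 s
```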
A. NbN film
The temperature dependence of the square resistance R_sq of the NbN film was measured immediately after deposition at temperatures from 4.2 K up to 300 K by the standard four-probe technique. The critical temperature of the film was 13.55 K. The specific resistivity ρ was evaluated from the measured thickness d and the square resistance of the film as ρ = R_sq × d. The residual resistivity ρ₀ at T = 25 K was about 120 µΩ·cm.
The residual-resistivity ratio (RRR) of the film, i.e., the ratio of the resistivity at room temperature to the residual resistivity, was slightly larger than one (RRR ≈ 1.02). The temperature dependence of the second critical magnetic field B_C2(T) was measured at temperatures in the vicinity of the transition in an external magnetic field of up to 3 T applied perpendicular to the film surface. The value of the second critical magnetic field at zero temperature was calculated in the dirty limit [26] and amounted to B_C2(0) = 18.8 T. With this value the coherence length at zero temperature was calculated as

ξ(0) = √(Φ₀ / (2π B_C2(0))),   (1)

where Φ₀ is the magnetic flux quantum. The value found, ξ(0) ≈ 4.2 nm, is close to the thickness of our film. The magnetic-field penetration depth was calculated as

λ(0) = √(ħ ρ₀ / (π μ₀ Δ(0))),   (2)

where Δ(0) = 2.05 k_B T_C is the superconducting energy gap of NbN [27,12]. We found λ(0) = 287 nm, which is close to the value [11] obtained by the inductive technique for similar NbN films.
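A quick numerical check of Eqs. (1) and (2) with the film parameters quoted above (a sketch; only standard physical constants are used):

```python
import numpy as np
from scipy.constants import h, e, hbar, k, mu_0

Phi0 = h / (2 * e)      # magnetic flux quantum, Wb

Bc2_0 = 18.8            # T, second critical field at T = 0
rho0 = 120e-8           # Ohm*m (= 120 uOhm*cm), residual resistivity
Tc = 13.55              # K, critical temperature of the film

xi0 = np.sqrt(Phi0 / (2 * np.pi * Bc2_0))              # Eq. (1)
Delta0 = 2.05 * k * Tc                                 # Delta(0) = 2.05 k_B Tc
lam0 = np.sqrt(hbar * rho0 / (np.pi * mu_0 * Delta0))  # Eq. (2)

print(f"xi(0)  = {xi0 * 1e9:.2f} nm")   # ~4.2 nm, matching the text
print(f"lam(0) = {lam0 * 1e9:.0f} nm")  # ~289 nm vs. the quoted 287 nm
```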
B. NbN nanowires
After patterning, the resistance of all nanowires was measured from room temperature down to 4.2 K.
The critical temperatures of the nanowires were lower than those of the micrometer-wide reference strips made from the same film. Independently of the type of lithographic process, the T_C of the micrometer-wide strips was almost equal to the critical temperature of the non-patterned film. It is seen in Fig. 2 that the critical temperature decreases with decreasing nanowire width and that, at a given width, the nanowires of the negative-PMMA series retain noticeably higher T_C values than those of the positive-PMMA series.
IV. Discussion
The negative-PMMA lithography offers the following advantages over the positive-PMMA lithography.
The negative-PMMA resist is more stable, which is seen in its more than two times smaller etching rate.
Furthermore, since the critical temperatures of the micrometer-wide reference strips made by the positive and the negative processes were the same, the 50 nm thick layer of the negative-PMMA resist protects the film during etching as well as the almost twice-thicker layer of the positive-PMMA resist. In electron-beam lithography, a smaller resist thickness generally allows smaller pixels to be written in a more reproducible manner. Therefore, a further reduction of the nanowire width is possible with the negative-PMMA process while keeping the protective properties of the resist at a sufficiently high level. This is already seen in Fig. 2, where the critical temperature of the 47 nm wide nanowire made by the negative-PMMA process is as high as the T_C value of the 80 nm wide nanowire from the positive-PMMA series. While the enhancement of the critical temperature of the negative-PMMA nanowires is about 0.5 K, which is only about 5% of the T_C of the positive-PMMA nanowires, the increase in the critical current density measured at 4.2 K is much more significant (Fig. 3). The latter makes the negative-PMMA technology even more attractive for the optimization of SNSPDs. A larger critical current, which can be realized for a given width, means not only a larger amplitude-to-noise ratio (ANR) in the voltage-pulse response of the detector but, according to [13], also leads to smaller values of the timing jitter. The higher retrapping current density (Fig. 4) can be interpreted as a result of an enhanced cooling efficiency of the negative-PMMA nanowires. For SNSPDs, this should lead to lower dark-count rates [28] and to a smaller latching probability.
A higher T_C corresponds to a larger energy gap. With other material parameters and operating conditions being equal, this should reduce the value of the cut-off wavelength. On the other hand, the less the T_C of the nanowires is reduced, the thinner the films with lower critical temperature (the critical temperature decreases with decreasing thickness due to the proximity effect [29]) that can be used to fabricate SNSPDs for operation at a given temperature, e.g., at T = 4.2 K. Furthermore, the higher T_C is one of the reasons for the higher j_C of the negative-PMMA nanowires.
However, the increase in T_C alone cannot explain the almost 40% increase in j_C(4.2 K). In Fig. 5 the ratio of the measured critical current density to the de-pairing critical current density at 4.2 K, j_C/j_C^dep(4.2 K), is shown for all nanowires. The densities of the de-pairing critical current were computed with Eq. (3) for the actual T_C of each nanowire (Fig. 2). The temperature dependence of j_C^dep was adopted from the work of Kupriyanov and Lukichev [30]. We used the KL(T) correction (Fig. 1 in Ref. 30) for the extreme dirty limit to the Ginzburg-Landau (GL) de-pairing current density and obtained j_C^dep for our films as given by Eq. (3), in which dB_C2/dT is the temperature derivative of the second critical magnetic field in the vicinity of the critical temperature. It is seen that the j_C/j_C^dep(4.2 K) ratio for the negative-PMMA nanowires is approximately 0.67(3), noticeably larger than the value 0.48(5) obtained for the positive-PMMA nanowires. According to the theoretical models of the SNSPD response [15,16], an increase of this ratio should result in a shift of the cut-off wavelength towards longer wavelengths.
Although the superconductivity in the negative-PMMA nanowires is enhanced, the ratio j_C/j_C^dep(4.2 K) is still less than one. One possible reason for this is the particular stoichiometry of our NbN films. The films were deposited at discharge parameters providing the maximum T_C. A larger j_C/j_C^dep(4.2 K) ratio is achieved in films with a larger relative content of Nb [21], which, however, have a slightly smaller critical temperature. Another reason is revealed by the temperature dependence of the critical current density of the 80 nm wide nanowire, which is shown in Fig. 6. Such a dependence is typical for nanowires with W ≲ 100 nm. As a function of the reduced temperature in the form t = (1 − T/T_C)^(3/2), j_C increases linearly (the dashed line) at temperatures in the vicinity of T_C. At t ≈ 0.08 the experimental points deviate from the linear dependence. The deviation becomes larger at temperatures close to 4.2 K. Although the linear part is present in the j_C(t) dependences for nanowires of both series, its slope and the temperature at which the deviation begins vary from sample to sample, affecting the current ratio j_C/j_C^dep(4.2 K). The physical phenomena which cause this deviation and the variations of the slope are beyond the scope of the present work. They will be considered in detail elsewhere [31].
Here we deal only with the high-temperature part of the j_C(t) dependence near T_C (t → 0), where the critical current density increases linearly with the reduced temperature. Since the width of our nanowires is much smaller than the Pearl length, the supercurrent is assumed to be distributed evenly over the cross-section of the nanowire. Within this approximation, the critical current of the nanowires should be determined solely by the de-pairing mechanism, at least at temperatures close to T_C, where the film thickness is additionally much less than the coherence length (Eq. 1). In the framework of this approximation, the slope of the linear fit of the j_C(t) dependence at t << 1, j_C^extr (the dashed line in Fig. 6), should equal the temperature-independent coefficient j_C^dep(0) of the KL-corrected expression (Eq. 3). The ratio j_C^extr/j_C^dep(0), shown in Fig. 7, is systematically closer to unity for the negative-PMMA nanowires, indicating that a larger portion of the nominal cross-section of these nanowires remains superconducting, i.e., that their edges are less damaged. Less damaged edges lead to a weaker proximity effect and to an increased T_C (Fig. 2). A larger superconducting cross-section of the nanowire is capable of carrying a larger critical current. Therefore, for a given nominal width, the calculated density of the critical current is also larger for the negative-PMMA nanowires (Fig. 3). The same arguments explain the increase of the retrapping current density (Fig. 4).
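The extraction of j_C^extr described above amounts to a one-parameter linear fit through the origin in the variable t = (1 − T/T_C)^(3/2), restricted to points near T_C. A minimal sketch (the data arrays are placeholders, not measured values):

```python
import numpy as np

def extract_jc_slope(T, jc, Tc, t_max=0.08):
    """Fit jc(t) = jc_extr * t with t = (1 - T/Tc)^(3/2), using t < t_max.

    Only points near Tc (small t) are used, where the de-pairing mechanism
    is expected to dominate; returns the slope jc_extr.
    """
    t = (1.0 - np.asarray(T) / Tc) ** 1.5
    mask = t < t_max
    t_fit, j_fit = t[mask], np.asarray(jc)[mask]
    # Least-squares slope of a line through the origin: sum(t*j) / sum(t^2)
    return np.sum(t_fit * j_fit) / np.sum(t_fit ** 2)

# Placeholder usage with made-up readings (A/m^2):
T = np.array([12.9, 12.6, 12.3, 12.0, 11.0, 9.0, 6.0, 4.2])
jc = np.array([0.4, 0.9, 1.3, 1.7, 3.0, 4.8, 6.5, 7.2]) * 1e10
print(f"jc_extr = {extract_jc_slope(T, jc, Tc=13.0):.3g} A/m^2")
```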
To prove the assumption that the edges of the nanowires are less damaged in the negative process, we performed a statistical analysis of the edge roughness of our nanowires. Figure 1 shows two SEM images of nominally 100 nm wide nanowires after ion milling. The nanowires were prepared by the positive (Fig. 1c) and the negative (Fig. 1d) electron-beam lithography of PMMA resist. The rest of the resist was stripped off and the samples were thoroughly cleaned. For the analysis of the nanowire geometry, we used a high-resolution field-emission SEM equipped with a highly efficient in-lens detector of secondary electrons (SE). This detector is sensitive mainly to secondary electrons which are generated in the vicinity of the primary beam at the very surface of the sample ("SE type-I"). Therefore, the in-lens detector ensures a spatial resolution of about 1-2 nm. The raster scan was aligned along the length of the nanowire to minimize the low-frequency noise contribution to the image. The set of images at different magnifications was acquired with nominal resolutions better than 0.5 nm/pixel.
V. Conclusions
We investigated experimentally and compared the superconducting properties of thin-film NbN nanowires fabricated using positive and negative electron-beam lithography over a standard positive PMMA resist. Nanowires with smaller widths and smaller edge roughness were obtained in a reproducible manner due to the higher resolution of the negative-PMMA lithography process. The higher critical temperature (about 5% enhancement) of the negative-PMMA nanowires allows a further decrease of the nanowire cross-section while keeping T_C high enough for the operation of SNSPDs at 4.2 K. The improved cooling efficiency of the negative-PMMA nanowires manifests itself in the approximately 20% higher values of the retrapping current density. For SNSPDs, an increase in the cooling efficiency should result in a lower dark-count rate and a lower latching probability. Furthermore, the approximately 30% higher critical current density of the negative-PMMA nanowires should improve the signal-to-noise ratio of detectors, while the 40% higher j_C/j_C^dep ratio at 4.2 K should shift the cut-off wavelength of the NbN SNSPD response towards the infrared range. A critical current density j_C = 0.85 j_C^dep in the negative-PMMA nanowires should be reachable also at the operation temperature once the edge quality of the nanowires is further improved.

FIG. 6 Dependence of the experimental critical current density j_C (red circles) on the reduced temperature t = (1 − T/T_C)^(3/2) for the 80 nm wide nanowire. The dashed line shows the best linear fit j_C(t) = j_C^extr · t of the experimental data near the transition temperature, where j_C^extr is the fit parameter.

FIG. 7 Ratio of the extracted current density j_C^extr (the slope of the dashed line in Fig. 6) to the de-pairing critical current density j_C^dep at T = 0 (Eq. 3) as a function of nanowire width for nanowires made by the positive-PMMA (black squares) and the negative-PMMA (red circles) lithography. The dashed lines are guides to the eye. | 2019-04-13T18:20:01.169Z | 2017-06-05T00:00:00.000 | {
"year": 2017,
"sha1": "4648170c2f7a81ad27cf1c39c0d74af8d8eeba62",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1706.01289",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "25fd8393bdb56c9821b742b6e016bacfb1e8702a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
268360646 | pes2o/s2orc | v3-fos-license | Risk of supranormal left ventricular ejection fraction in patients with aortic stenosis
Abstract Background Cardiovascular events are increasing in patients with supranormal left ventricular ejection fraction (snLVEF). However, the effect of snLVEF in patients with aortic stenosis (AS) remains unclear, especially in patients with moderate AS. Hypothesis This study aimed to evaluate the prognosis of mortality and heart failure (HF) in patients with LVEF ≥ 50% and moderate or severe AS. Methods This retrospective study targeted patients with moderate or severe AS and LVEF ≥ 50%. An LVEF of 50%-65% was classified as normal LVEF (nLVEF, nEF group) and an LVEF > 65% as snLVEF (snEF group). AS severity was stratified based on the aortic valve area into moderate (1.0-1.5 cm²) and severe (<1.0 cm²). Primary outcomes included all-cause mortality and HF hospitalization. Results A total of 226 participants were included in this study. There were 67 and 65 participants with moderate AS in the snEF (m-snEF) and nEF (m-nEF) groups, respectively, and 41 and 53 participants with severe AS in the snEF (s-snEF) and nEF (s-nEF) groups, respectively. During the observation period (median: 554 days), the primary composite outcome occurred in 108 individuals. Cox hazard analysis revealed no significant differences among the four groups in the primary composite outcome. With respect to HF hospitalization, the adjusted hazard ratios (95% confidence intervals) with m-snEF as the reference were as follows: m-nEF, 0.41 (0.19-0.89); s-nEF, 1.43 (0.76-2.67); and s-snEF, 1.83 (1.00-3.35). Conclusions The risk of HF hospitalization in m-snLVEF was higher than in m-nLVEF and not significantly different from that in s-nLVEF.
| INTRODUCTION
The prevalence of cardiovascular disease in Japan and other countries is increasing with the aging of the population.1,2 Among cardiovascular diseases, valvular diseases are the fifth highest cause of death,3 with a concerning risk of sudden death associated with severe aortic stenosis (AS).4 The prevalence of AS among those aged ≥75 years is 12.4%, with severe AS accounting for 3.4%,5 making it a common condition.
In addition to surgical aortic valve replacement (AVR), transcatheter AVR has been introduced in recent years, allowing AVR to be performed in older patients and leading to improvements in mortality rates.6,7 The timing of AVR has traditionally been determined based on the presence of symptoms associated with AS and its severity as assessed by echocardiography.8 However, it is recommended to assess these indications comprehensively rather than relying on a single criterion.8 Khan et al.9 reported that heart failure (HF) in cases of moderate AS complicated by reduced left ventricular ejection fraction (LVEF) is associated with higher mortality and HF hospitalization rates than HF without AS. In some cases, conditions other than severe AS may also be associated with poor prognosis. Moreover, Franke et al. suggested that performing AVR in cases of moderate AS improves prognosis.10 It has been suggested that LVEF decreases under the influence of the afterload during the progression of AS,8 and an LVEF ≤ 50% is often used in the treatment guidelines for patients with AS.8 Recently, a new category called supranormal LVEF (snLVEF; ejection fraction [EF] >65%-70%) has been proposed in the field of HF,11 and several reports have indicated a poor prognosis, including mortality, in patients with snLVEF suffering from HF.12,13 However, to date, studies examining the relationship between LVEF and the severity of AS are limited. In a study by Imamura et al.,14 patients with severe AS and snLVEF showed poorer outcomes after AVR than those with severe AS and normal LVEF (nLVEF; EF 50%-64%). However, that study only focused on cases of severe AS, which already warrant consideration for AVR. Hence, its impact on the treatment guidelines may not be significant.
The prognostic influence of snLVEF in moderate AS, for which AVR is currently not indicated, has not been evaluated previously.
Investigating the prognosis of such cases may have significant implications for the future reevaluation of the indications for AVR and thus holds valuable research potential. Therefore, this study aimed to evaluate the prognosis of mortality and HF in patients with LVEF ≥ 50% and moderate or severe AS.
| METHODS
This retrospective observational study was performed in accordance with the standards of the Declaration of Helsinki and current ethical guidelines.
This study was approved by the Ethics Committee on Medical Research of the Chutoen General Medical Center (Reference Number: 1260231201). Informed consent was obtained using an opt-out approach, and information about this study (opt-out statement) was published on the hospital's website according to the Personal Information Protection Law. In cases where requests were made by patients or their families, the relevant information was removed from the research subjects. Patients who met the inclusion criteria were categorized as follows: patients with an LVEF of 50%-65% were classified as having nLVEF (nEF group), and those with an LVEF > 65% as having snLVEF (snEF group).14 AS severity was stratified based on AVA as follows: moderate AS (AVA 1.0-1.5 cm²) and severe AS (AVA < 1.0 cm²).9 The use of AVA for stratification was justified by the fact that, even with preserved LVEF, a small left ventricular volume can lead to a decrease in stroke volume (SV), resulting in a low pressure gradient.
The study population was further classified into the following four groups: m-snEF (moderate AS and snEF), s-snEF (severe AS and snEF), m-nEF (moderate AS and nEF), and s-nEF (severe AS and nEF).
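For illustration, the two-way stratification above can be expressed as a small labeling function (a sketch; the variable names are ours, and patients outside the study's inclusion window return None):

```python
def classify(lvef_percent: float, ava_cm2: float):
    """Assign the study's four-group label from LVEF and aortic valve area.

    Returns one of 'm-snEF', 's-snEF', 'm-nEF', 's-nEF', or None for
    patients outside the inclusion window (LVEF < 50% or AVA > 1.5 cm2).
    """
    if lvef_percent < 50 or ava_cm2 > 1.5:
        return None                                   # reduced EF or mild AS
    ef = "snEF" if lvef_percent > 65 else "nEF"       # 50-65% counts as normal
    severity = "s" if ava_cm2 < 1.0 else "m"          # AVA 1.0-1.5 = moderate
    return f"{severity}-{ef}"

print(classify(70, 1.2))  # 'm-snEF'
print(classify(55, 0.8))  # 's-nEF'
```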
| Primary composite outcome
Patients identified from the echocardiographic data were retrospectively followed up until November 2023 (maximum follow-up duration: 1500 days), with all-cause mortality and HF hospitalization defined as the primary composite outcome. The criteria for hospitalization due to HF were based on previous reports and included worsening of typical HF symptoms and radiographic congestion, elevated natriuretic peptides and filling pressures on echocardiography, and initiation of intravenous diuretic therapy.15,16 If HF hospitalization occurred at the time of echocardiography, a subsequent hospitalization due to HF was considered the endpoint event. In the case of repeated hospitalizations for HF, only the first hospitalization was recorded. HF hospitalization as a primary outcome was censored at the time of surgical AVR or transcatheter AVR; however, patients were monitored for survival after AVR.
| Statistical analysis
Categorical variables were analyzed using the chi-square or Fisher's exact test and are presented as numbers and percentages. For continuous variables, after evaluating normality, parametric or nonparametric tests (Kruskal-Wallis tests) were performed. Time-to-event rates for mortality and HF after echocardiography were calculated, and group-wise comparisons of the event-occurrence curves were conducted using the log-rank test. When a significant difference was observed in the log-rank test among the four groups, multiple comparisons were performed using the Bonferroni method. Univariate Cox proportional hazards analysis was performed for the primary composite and individual outcomes, followed by multivariate analysis using a forced-entry method for factors potentially influencing clinical outcomes. Statistical significance was set at p < .05. All statistical analyses were conducted using R Statistical Software (R Foundation for Statistical Computing) or EZR (Jichi Medical University).17 To identify associations between HF hospitalization and LVEF, we employed restricted cubic splines. We conducted a complete-case analysis by examining only samples with no missing values in the data necessary for each statistical analysis during the patient-characteristic comparisons. For the Cox hazard analysis, there were 11 missing data points for body mass index (BMI) and 18 for B-type natriuretic peptide (BNP); consequently, we performed the analysis using data imputed through multiple imputation with the mice function in the R package to account for missing values.
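The core survival workflow can be mirrored in Python for readers who do not use R; a hedged sketch with the lifelines package (the file and column names such as time_days and group_s_snEF are our assumptions, not the study's variables):

```python
# Illustrative Python analogue of the R/EZR survival workflow.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("cohort.csv")  # hypothetical file: time_days, event, group, ...

# Log-rank test across the four groups (m-snEF, s-snEF, m-nEF, s-nEF)
lr = multivariate_logrank_test(df["time_days"], df["group"], df["event"])
print(f"log-rank p = {lr.p_value:.3f}")

# Multivariate Cox model with m-snEF as the reference (dummy-coded groups);
# covariates mirror the adjusted model described in the figure legend.
cph = CoxPHFitter()
cph.fit(
    df[["time_days", "event", "group_s_snEF", "group_m_nEF", "group_s_nEF",
        "sex", "log_bnp", "log_bmi"]],
    duration_col="time_days",
    event_col="event",
)
cph.print_summary()
```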
| Comparison of patient characteristics
Among the patients who underwent echocardiography at our hospital between May 2013 and November 2020, 226 were included in this study. Of these, 67, 41, 65, and 53 patients were included in the m-snEF, s-snEF, m-nEF, and s-nEF groups, respectively. Significant differences among the four groups were observed in sex, BMI, BNP, and echocardiographic parameters such as LVEF, AVA, mPG, and Vmax. Additionally, differences were observed in LVDd, LVDs, and left ventricular posterior wall thickness (Table 1).
| Impact of supranormal and normal LVEF on primary outcomes
Log-rank tests were conducted to compare the primary outcomes between the snEF and nEF groups. During the observation period (median, 554 days), the primary composite outcome occurred in 108 patients (snEF group, 56.1% vs. nEF group, 39.8%; log-rank p = .14; Figure 1A). There was no significant difference in all-cause mortality (p = .83; Figure 1B). However, a significant difference was observed in HF hospitalization (p = .041; Figure 1C).
During the observation period, the follow-up for HF was censored due to the implementation of AVR in 32 individuals, 15 in the snEF group and 17 in the nEF group (p > .99).
| Comparison among the four groups considering the grade of AS in relation to LVEF differences
To assess the impact of AS severity, we evaluated the primary outcomes in the four groups. The primary composite outcome rates per 100 person-years were 24.6, 36.8, 18.4, and 28.1 in the m-snEF, s-snEF, m-nEF, and s-nEF groups, respectively. Detailed results are shown in Supporting Information S1: Table S1.
Subsequently, we conducted log-rank tests using Kaplan-Meier curves for the four groups. The results remained consistent across the four groups, showing no significant differences in the primary composite outcome (log-rank test; p = .14; Figure 2A) or all-cause mortality (p = .90; Figure 2B). However, there was a statistically significant difference in HF hospitalization (p < .001; Figure 2C) among the groups. Furthermore, in the comparison of patient characteristics, there was a difference in the proportion of men among the groups. To account for this influence, the analysis was restricted to women, and the results of the log-rank test showed a similar trend (log-rank test for HF hospitalization, p = .019; Supporting Information S1: Figure S1).
| Restricted cubic spline analysis
Finally, using restricted cubic spline analysis, we evaluated the relationship between the risk of HF hospitalization and LVEF (Figure 4A). In the analysis focusing on moderate AS, the risk of HF hospitalization peaked at an LVEF between 65% and 70% (Figure 4B). Conversely, in cases of severe AS alone, a U-shaped relationship was observed, with the lowest risk of HF hospitalization at an LVEF between 60% and 65% (Figure 4C).
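As a reminder of the machinery behind Figure 4: a restricted (natural) cubic spline represents LVEF through basis functions that are constrained to be linear beyond the boundary knots. A minimal sketch of the standard basis construction (the knot positions are illustrative, not the study's):

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (natural spline, linear in the tails).

    For knots k_1 < ... < k_K, returns columns [x, b_1, ..., b_{K-2}] with
    b_j(x) = d_j(x) - d_{K-1}(x), where
    d_j(x) = [(x - k_j)_+^3 - (x - k_K)_+^3] / (k_K - k_j).
    """
    x = np.asarray(x, dtype=float)
    k = np.sort(np.asarray(knots, dtype=float))
    K = len(k)

    def d(j):
        num = np.maximum(x - k[j], 0) ** 3 - np.maximum(x - k[-1], 0) ** 3
        return num / (k[-1] - k[j])

    cols = [x] + [d(j) - d(K - 2) for j in range(K - 2)]
    return np.column_stack(cols)

# Illustrative: 4 knots over the study's LVEF range; the resulting columns
# can be entered as covariates in a Cox model to draw a smooth HR-vs-LVEF curve.
lvef = np.linspace(50, 80, 7)
print(rcs_basis(lvef, knots=[52, 60, 68, 76]).shape)  # (7, 3)
```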
| DISCUSSION
This is the first study to compare the prognosis of snLVEF and nLVEF in patients with moderate to severe AS. The results indicated no significant difference in mortality but an increased risk of HF hospitalization in patients with snLVEF. In the analysis considering severity, the risk in m-nEF was lower than that in m-snEF, the risk in s-snEF was higher, and the comparison between m-snEF and s-nEF did not show a significant difference.
In recent years, there have been many reports of a U-shaped relationship between mortality and high LVEF.11,13,14 Van Essen et al.11 evaluated mortality and hospitalization risks by stratifying HF patients based on EF. The results showed a tendency towards increased risk in a U-shaped pattern, using an LVEF of 40%-49% as the reference point for 180-day death. In our study, we did not observe a difference in the risk of all-cause mortality between groups. This difference may be related to the characteristics of elderly AS patients, in whom 39%-45% of deaths are reported to be due to noncardiac diseases,18,19 suggesting that noncardiac deaths may influence all-cause mortality in elderly AS patients. In a report by Imamura et al., which included only patients with severe AS who underwent AVR, the risk of mortality was significantly higher in the snEF group than in the nEF group. Therefore, in populations where the risk of cardiovascular-related mortality is high, such as those with severe AS, an LVEF > 65% may suggest a potential risk of mortality, similar to other cardiovascular diseases.
On the other hand, the previously reported U-shaped relationship between the risk of HF and high LVEF was consistent with the findings of this study in severe AS patients.13,20 Based on these results, we speculated on the basis of myocardial characteristics and cardiac mechanics, considering the relationship with afterload due to AS. Rosch et al.21 and Popovic et al.22 found differences in myocardial characteristics between patients with an LVEF of 50%-60% and those with an LVEF > 60%, with the LVEF > 60% group showing less concentric remodeling and fibrosis. In addition, in patients with LVEF > 65%, left ventricular volumes (LVDd and LVDs) were smaller and left ventricular diastolic stiffness increased, resulting in a leftward shift of the end-diastolic pressure-volume relationship.22,23 When snLVEF coexists with AS, the afterload induces concentric remodeling, causing not only left ventricular wall thickening but also further narrowing of the left ventricular cavity.24 Although SV analysis was not conducted in this study, a comparison of patient characteristics revealed that LVDd and LVDs were significantly decreased in the snEF group. Moreover, significant thickening of the left ventricular posterior wall was observed in the s-snEF group, supporting the aforementioned speculation. As expected, a readily rising LVEDP leads to an increase in left atrial pressure, causing pulmonary congestion and exacerbating HF,25 as evidenced by the higher frequency of HF hospitalization events observed relatively early after echocardiography in this study.
By including patients with moderate or severe AS in our analysis, we discovered that moderate AS with snEF may have a prognostic value similar to that of severe AS. Khan et al. showed that even moderate AS is a poor prognostic factor for heart failure patients with LVEF < 50%;9 thus, depending on the circumstances, even moderate AS can have a significant impact on patient prognosis. We believe that this study provides important findings to better define the target population for AVR in patients with moderate AS, which is recommended as Class IIA in the latest guidelines.8
| Study limitations
This study had some limitations. First, this was a retrospective single-center study, and the involvement of potential bias cannot be ruled out. Additionally, the small sample size and shorter follow-up period compared to previous studies may have led to an underestimation of mortality events and inadequate statistical power. Furthermore, there were significant differences in the men-to-women ratio among the four groups in this study. Although the analyses were conducted with maximum consideration of the influence of sex differences on the results, it is possible that this bias was not completely eliminated. Moreover, reliable data on patients' frequent or emergency visits due to worsening HF were not consistently obtained in this retrospective study. This lack of information could significantly impact the study and may lead to overinterpretation of the results in some cases. However, considering the concerns regarding information collection and reporting bias, this impact was not assessed.

FIGURE 3 Univariate and multivariate Cox hazard analysis for primary outcomes. (A) Unadjusted hazard ratio (HR) and 95% confidence interval. (B) HR adjusted for sex, log-transformed brain natriuretic peptide level, and log-transformed body mass index. (C) HR adjusted for hypertension, dyslipidemia, diabetes mellitus, and current smoking status. *Denotes a logarithmic transformation. Green, reference; white, no significant difference; blue or orange, statistically significant (p < .05).
| CONCLUSION
In conclusion, compared to the m-snLVEF group, there were fewer HF hospitalizations in the m-nLVEF group, while no significant difference was observed in comparison to the s-nLVEF group. Thus, more intensive treatment may be beneficial in patients with moderate AS and snLVEF. However, to address these biases and generalize the results of this study, prospective and large-scale studies are required.
2.1 | Study design, participants, and measurements

This study included patients who underwent echocardiography at Chutoen General Medical Center between May 2013 and November 2020. Patients diagnosed with AS who were ≥18 years of age and had LVEF ≥ 50% were identified. Data on the aortic valve area (AVA), maximum blood flow velocity across the aortic valve (Vmax), and the mean pressure gradient (mPG) across the aortic valve were extracted when available. Only echocardiography data from the earliest date were used in cases with duplicate data from the same patient. The exclusion criteria were as follows: (1) missing measurements for AVA, Vmax, or mPG; (2) suggestion of mild AS with AVA > 1.5 cm²; (3) history of AVR; (4) presence of obstructive hypertrophic cardiomyopathy; and (5) lack of follow-up at the hospital after echocardiography.
FIGURE 1 Comparison of primary outcomes between normal and supranormal left ventricular ejection fraction. (A) Kaplan-Meier curves for primary composite outcomes in the normal and supranormal LVEF groups. (B) All-cause mortality. (C) HF hospitalization. HF, heart failure; LVEF, left ventricular ejection fraction. | 2024-03-13T06:17:57.191Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "b28683518992a1af9bd37b1cd4e9cd99650f939d",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ae253fb231eac22f95b965ec59e134ca54f3734f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55156681 | pes2o/s2orc | v3-fos-license | Muscle training program in patients with severe chronic obstructive pulmonary disease
DOI: 10.5935/0104-7795.20160028 ABSTRACT Objective: The objective of this study was to analyze the effects of a muscle training program with neuromuscular electrical stimulation (NMES) for the lower limbs (LL) and active resisted exercises for the upper limbs (UL) in patients with severe Chronic Obstructive Pulmonary Disease. Methods: With a sample of 5 subjects (65.2 ± 6.09 years), the initial and final evaluations were: One-Repetition-Maximum testing; Sit-to-stand test; perimetry of the thigh; 6-minute walk test; Saint George's Respiratory Questionnaire; Medical Research Council scale for dyspnea; and the BODE index. The intervention was performed three times a week and comprised 18 sessions of 30-minute NMES followed by 30 minutes of exercise for the UL based on the Kabat diagonal method. The NMES parameters were: frequency of 50 Hz, 6 s on and 8 s off, ramp-up of 2 s and ramp-down of 2 s, pulse width of 400 μs, and intensity set at patient tolerance and increased by 1 to 5 mA each day. Results: The results showed an increase in muscle strength (p = 0.01) and in muscle resistance (p = 0.01). There was a trend toward improvement in quality of life (p = 0.16) and in cardiorespiratory fitness (p = 0.11). Conclusion: The association of physical exercise in diagonals with NMES can be a beneficial resource for the treatment of patients with severe COPD. Further studies with larger sample sizes are nevertheless needed to confirm these benefits.
INTRODUCTION
Chronic obstructive pulmonary disease (COPD) has severe economic and social consequences and, at the individual level, is a substantial source of incapacity and poor quality of life for patients and their caregivers.1 According to the World Health Organization,2 80 million individuals have moderate or severe COPD. COPD is the fifth leading cause of mortality worldwide and, according to recent estimates, will reach the third position by 2030.
According to the Brazilian Society of Pneumology and Tisiology,3 COPD is classically defined as a chronic and progressive reduction in airflow, secondary to an abnormal inflammatory response of the lungs to the inhalation of toxic gases or particles. This inflammation promotes alterations of variable intensity in the bronchi (chronic bronchitis), bronchioles (obstructive bronchiolitis), and/or pulmonary parenchyma (emphysema).4 The diagnosis of COPD, confirmed by the pulmonary function test,5 must be considered in the presence of cough, sputum production, dyspnea, and/or a history of exposure to risk factors for the development of the disease, such as smoking, environmental pollution, and occupational exposure to toxic gases or particles. These factors may overwhelm the repair mechanisms that restore tissue structure damaged by injury.6 In addition to the structural and functional consequences induced in the lungs, COPD also entails relevant systemic effects with important repercussions on the quality of life and survival of patients, including nutritional depletion and skeletal muscle dysfunction, which contributes to intolerance to physical exercise7,8 and therefore hinders their participation in rehabilitation programs.
Due to this factor, neuromuscular electrical stimulation may be a facilitating resource so that these patients can reach the minimal physical conditioning needed to participate in such programs.9 Neuromuscular electrical stimulation (NMES) is a technique in which an electrical current is applied to evoke muscle contractions and thereby promote functional movements and improvements in physical performance.10,11 In COPD, the peripheral muscle dysfunction (PMD) is characterized by structural and functional abnormalities.12 The peripheral skeletal muscles undergo morphologic and metabolic alterations due to a combination of events whose etiology seems to be multifactorial, including hypercapnia, oxidative stress, long-term use of corticosteroids, hypoxemia, nutritional depletion, systemic inflammation, disuse atrophy, and altered amino acid metabolism.8 Recent clinical guidelines on the treatment of COPD emphasize the role of physical exercise in disrupting the vicious circle of deconditioning. On this subject, the most recent guidelines on pulmonary rehabilitation recommend that physiotherapy programs include exercises targeted at the upper limb (UL) muscles of patients with COPD, owing to their importance in daily life activities.13 Many of these muscles are also accessory muscles of respiration; therefore, activities with the arms elevated cause these muscles to reduce their participation as respiratory accessories, imposing more respiratory work on the diaphragm, which increases dyspnea and fatigue in these patients.14
OBJECTIVE
This study proposes the application of a training program with the aid of NMES for the lower limb (LL) muscles and active resisted exercises for the upper limb muscles in patients with severe COPD, in order to verify whether there are improvements in the muscle strength and resistance and in the cardiorespiratory fitness of these patients.
METHODS
This is an experimental study with a quantitative approach, of patients of both genders recruited at the pneumology outpatient clinic of the Santa Maria University Hospital, Brazil (HUSM), with a clinical and functional diagnosis of severe or very severe COPD (GOLD stages III and IV).5 This clinical trial was approved by the Independent Ethics Committee of the Federal University of Santa Maria, Brazil, under registration 0393.0.243.000-10.
The inclusion criteria of this study were: at spirometry, forced expiratory volume in one second (FEV1) < 65%, forced vital capacity (FVC) < 70%, and a residual volume/total lung capacity ratio (RV/TLC) > 40%; clinical stability; a sedentary lifestyle with self-reported exercise limitation; and age < 75 years. Patients with functional class III or IV heart failure, renal or hepatic dysfunction, orthopedic or traumatic diseases and/or neuromuscular deficit, cognitive deficit, paresthesia or tissue injuries at the electrode placement site, use of a pacemaker, or a diagnosis of HIV infection, and those who did not sign the informed consent form, were excluded. Thirty subjects met the inclusion criteria and, among them, 25 were excluded based on the exclusion criteria, yielding a total of 5 eligible subjects for the study.
The eligible patients underwent the following evaluations: anamnesis; physical examination; pulmonary function test (spirometry);15 the 6-minute walk test (6MWT);16 the Sit-to-stand test;17 perimetry of the quadriceps muscle;18 quadriceps muscle strength testing (One-Repetition-Maximum testing, 1RM);19 the Saint George's Respiratory Questionnaire (SGRQ) for quality of life;20 and the upper limb incremental test.21 In addition, the Medical Research Council (MRC) scale22 was used to evaluate dyspnea, and the BODE index (B, body mass index; O, airflow obstruction; D, dyspnea; E, exercise capacity) was calculated.23 The evaluations were applied sequentially by the same evaluator before the intervention as well as after the intervention, during the re-evaluation.
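For reference, the BODE index combines the four components named above into a 0-10 score. The sketch below uses the widely published cut-points of the original scoring table (Celli et al.); whether the study implemented exactly these thresholds is not stated, so treat them as indicative.

```python
def bode_index(bmi, fev1_pct, mmrc, walk_m):
    """BODE score (0-10) from the widely published cut-points (assumed here).

    bmi      : body mass index, kg/m^2
    fev1_pct : FEV1, % of predicted
    mmrc     : modified MRC dyspnea grade, 0-4
    walk_m   : 6-minute walk distance, meters
    """
    b = 0 if bmi > 21 else 1
    if fev1_pct >= 65:   o = 0
    elif fev1_pct >= 50: o = 1
    elif fev1_pct >= 36: o = 2
    else:                o = 3
    d = max(0, min(3, mmrc - 1))       # mMRC 0-1 -> 0, 2 -> 1, 3 -> 2, 4 -> 3
    if walk_m >= 350:   e = 0
    elif walk_m >= 250: e = 1
    elif walk_m >= 150: e = 2
    else:               e = 3
    return b + o + d + e

print(bode_index(bmi=23.8, fev1_pct=40, mmrc=3, walk_m=300))  # 0+2+2+1 = 5
```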
The intervention was performed at the Physiotherapy outpatient clinic of the Santa Maria University Hospital, Brazil (HUSM) for 6 weeks, totaling 18 sessions. The protocol consisted of 30 minutes of NMES applied to the quadriceps and 30 minutes of functional exercise for the upper limbs, thus a 1-hour session, with a frequency of three times a week.
The NMES was applied to the quadriceps muscles of each patient with the KLD-Biosistemas neuromuscular electrical stimulator, model nms.0501, Endophasys. Self-adhesive electrodes were placed on the thighs, approximately 5 cm below the inguinal fold, 5 cm above the suprapatellar edge, and on the vastus medialis muscle, at the medial femoral condyle. Before placement of the electrodes, the patient's skin was cleaned with cotton previously soaked in a 70% alcohol solution.
The NMES protocol was based on the study of Vivodtzev et al.,24 which aimed to minimize the effects of fatigue on the contractility of the quadriceps muscles of patients with COPD, and was performed as follows: the patient remained lying on a stretcher with the knees flexed at 60°, supported by a wedge pillow; the current used was a symmetric biphasic square pulse. The 60° knee flexion was used to optimize the muscle contraction since, according to the literature, this is the angle at which maximum force is produced by the quadriceps muscles.25 The first 5 minutes were performed as a warm-up at a frequency of 5 Hz with a pulse width of 400 µs, with reciprocal electrical current. Over the next 25 minutes, the stimulator generated electrical pulses at a frequency of 50 Hz with a pulse width of 400 µs for 6 s, alternated with a resting period of 8 s, also reciprocally, alternating the contractions between the lower limbs. The applied intensity was defined as the patient's maximum tolerable intensity and was increased by 1 to 5 mA each day. This strategy allowed a better tolerance of long exposure to electrical stimulation in patients with severe COPD.
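To make the dose concrete, the sketch below computes the duty cycle and the number of evoked contraction cycles per session implied by these parameters (pure arithmetic on the values quoted above; since stimulation alternated between the legs, each quadriceps sees roughly half of the cycles):

```python
# Session arithmetic for the NMES protocol described above.
ON_S, OFF_S = 6.0, 8.0   # stimulation on/off times per cycle
WORK_MIN = 25.0          # 50 Hz work phase after the 5 min warm-up

duty_cycle = ON_S / (ON_S + OFF_S)
cycles = WORK_MIN * 60 / (ON_S + OFF_S)   # one 6 s contraction per 14 s cycle
on_time_min = cycles * ON_S / 60

print(f"duty cycle: {duty_cycle:.0%}")                # ~43%
print(f"contraction cycles per session: {cycles:.0f}")  # ~107
print(f"total 'on' time: {on_time_min:.1f} min")      # ~10.7 min
```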
During the NMES, the patients were asked not to perform any active movement with the lower limbs, so that the quadriceps contractions were produced entirely by the stimulation. Using an ankle weight, an extra load was applied (in kilograms), beginning with 50% of the maximum load of the 1RM test19 and increased as the patient became accustomed to the electrical current used in the study. This was done as a means to enhance the patient's strength without having to change the stimulation device parameters, which could cause discomfort to the patient.
The upper limb (UL) training had three stages: warm-up, upper limb exercise, and stretching.26 The UL exercises were performed with dumbbells, with a load of 50% of the maximum weight measured in the incremental test for the UL.21 The first and second diagonals of the proprioceptive neuromuscular facilitation method27 were used because of their functionality and ability to recruit the several UL muscle groups needed in daily life activities. Each cycle of diagonal movement was performed for two minutes followed by a resting period of one minute, and the patient was asked to exhale during limb elevation.
The analysis of the obtained variables considered percentage distributions and measures of central tendency (mean, standard deviation). For the statistical analysis, the software SPSS (Statistical Package for the Social Sciences) version 13.0 was used. The Shapiro-Wilk test was performed to evaluate the distribution of the data. As the data were considered normally distributed, the t-test was used to compare the variables, and the chosen significance level was 5% (α < 0.05).
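The pre/post design described above maps onto a Shapiro-Wilk check followed by a paired t-test. A minimal Python analogue (the study itself used SPSS, and the arrays below are placeholders, not the study's data):

```python
from scipy import stats

# Placeholder pre/post values, e.g., 1RM load in kg for the 5 subjects.
pre = [10, 14, 18, 20, 16]
post = [13, 17, 21, 23, 19]

diff = [b - a for a, b in zip(pre, post)]
w, p_norm = stats.shapiro(diff)      # normality of the paired differences
t, p = stats.ttest_rel(pre, post)    # paired t-test, matching the design

print(f"Shapiro-Wilk p = {p_norm:.2f}")
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")
```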
RESULTS
Regarding the anthropometric characteristics, most patients were male (4/5), the mean age was 65.2 ± 6.09 years, and the mean body mass index was 23.84 ± 3.37 kg/m². The average cigarette consumption was 79 ± 41.29 pack-years. Of the 5 patients, 3 were rated as GOLD III and 2 as GOLD IV by the pulmonary function test.
Table 1 presents the comparisons of the variables walk test, MRC, BODE, and quality of life before and after the training sessions. An improvement trend can be observed in all the analyzed variables; however, the differences were not statistically significant.
As for the perimetry, no substantial increase was found in the circumference of either thigh, in any of the measurements performed at 5 cm, 10 cm, and 15 cm in the study.
The results of the Sit-to-stand test and the quadriceps muscle strength test (1RM) showed a significant increase after the intervention (p = 0.001). The mean number of repetitions in the Sit-to-stand test was 22.40 ± 8.26 before training and 26.80 ± 7.39 after training. In the 1RM, the means before and after training were 15.60 ± 6.42 and 18.60 ± 5.77 kilograms, respectively (Figure 1).
DISCUSSION
The main findings of this study, an intervention with a muscle training program combining NMES of the quadriceps and physical exercises for the upper limbs, were the increases in muscle strength and resistance, as evaluated by the 1RM and Sit-to-Stand tests.
In patients with COPD, the quadriceps muscle is characterized not only by muscle weakness12,28 but also by premature fatigability,29 due to the reduction in the proportion of type-I fibers and oxidative enzymes.30,31 The findings of this study, as verified with the 1RM test, showed that NMES could increase the capacity to perform knee extension with higher loads, as well as improve performance in the Sit-to-Stand test. Other studies have shown that the strength increase in NMES programs can be related to increases in muscle activation, in electromyographic activity (i.e., neural activation), and in the anatomical cross-sectional area.32,33 Moreover, the neural adaptations occur in the first four weeks of training with electrical stimulation, and the alterations in muscle density between the 4th and 8th weeks.32

By comparing the acute effect on strength of functional electrical stimulation at frequencies of 15 Hz and 50 Hz over the quadriceps muscle of aged patients, Sbruzzi et al.34 found that 50 Hz NMES reaches a higher peak of isometric muscle torque. That study attributed this difference to the fact that muscle strength is proportional to the stimulation frequency and to the number of recruited motor units. Therefore, the higher the frequency, the greater the motor recruitment, yielding higher muscle strength.35

The NMES effects on the motor units depend on the stimulation frequency.36 With a frequency as low as 20 Hz, the work is directed at the type-I fibers,37 which produce effective muscle contractions at a low metabolic cost, decreasing muscle fatigue.38 With stimulation frequencies between 35 and 70 Hz, it is possible to work the fast-twitch (type-II) muscle fibers.38 Possibly, this explains why the 50 Hz frequency yields a higher torque peak when compared to the 15 Hz frequency.36

The maximum overload applied was equivalent to 50% of the 1RM, using ankle weights. Guedes & Guedes39 emphasized that exercises with overloads above 40% of maximum strength enhance strength either by fiber hypertrophy or by increased recruitment of motor units. Conversely, exercises with overloads below 40% of maximum strength emphasize resistance, even though they produce strength gains as well.
Perimetry, a method for evaluating muscle trophism, is broadly applied in both therapy and research, for it is considered practical, low-cost, and a non-invasive way of analyzing body mass.40 The fact that we did not find a significant difference in trophism, despite the application of NMES being consistently associated with an increase in muscle mass,41 can be explained by the subjectivity of this measurement method in determining muscle circumference, since different tensions may be applied to the tape measure during the evaluation, as well as by the small number of subjects in the sample.
The average age of the participants was 65 years; therefore, they were considered aged. Concerning this variable, it is important to consider that, combined with the degenerative effects of COPD on peripheral muscles, the aging process itself progressively produces loss of muscle mass, especially of fast-twitch type-II fibers.42 The cardiorespiratory fitness analyses, as measured by the 6MWT,16 showed that the patients walked longer distances at the end of the study. This difference, however, was not significant, which may also have been due to the insufficient number of subjects.
The findings on quality of life and desensitization to dyspnea were not significant. On this matter, it is important to consider that, during the therapy period, 3 of the 5 patients had relevant symptoms of dyspnea, fever, and productive cough suggesting acute respiratory infection, probably due to the seasonal temperature changes that occurred during that period of the year in the region where the study was performed, where the weather is humid and cold.
Concerning the BODE mortality-predictor index, Pitta et al.43 found a higher mortality rate in insufficiently active subjects when compared to sufficiently active ones. In the present study, no significant difference was found after the training program, which can also be explained by the respiratory complications, since the BODE index is directly related to the dyspnea score.
The study limitations include the small sample size, the narrow data collection period, and the scarce financial resources to extend the treatment period. Nevertheless, the results encourage new research on this muscle training program as part of the treatment given to patients with severe COPD.
CONCLUSION
The muscle training program applied in this research, in which NMES of the lower limbs was combined with training of the upper limbs, was shown to be effective in increasing the muscle strength and resistance of patients with severe COPD. On this subject, new studies with longer follow-up periods and a larger number of subjects are suggested, so as to possibly yield statistical significance in the results for all the analyzed variables.
"year": 2016,
"sha1": "d45ab7ce074d387bdf48a0275a61767364ca27af",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.revistas.usp.br/actafisiatrica/article/download/137663/133308",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d45ab7ce074d387bdf48a0275a61767364ca27af",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
2151967 | pes2o/s2orc | v3-fos-license | Central mechanisms in burning mouth syndrome involving the olfactory nerve: a preliminary study
INTRODUCTION
Burning mouth syndrome (BMS) is characterized by a continuous sensation of burning or heat in the oral cavity, mainly on the tongue, palate, and/or gingiva,1-3 in the absence of a primary cause.4,5 Systemic diseases, such as diabetes mellitus or anemia, must be ruled out.3 It is most common among postmenopausal women and causes intense discomfort and suffering.

There is no defined etiology for BMS other than precipitating causative factors, and it is still considered idiopathic. One of the most widely accepted theories is that the partial or total loss of chorda tympani (facial) nerve function disinhibits the trigeminal nerve, resulting in pain along trigeminal pathways, as both the taste and pain systems are regulated by interneurons of the central nervous system (CNS).6-8 This theory is based on evidence of neuropathic mechanisms,9,10 including the loss of small fibers in oral tissues,11 salivary and somatosensory abnormalities,8-10,12-14 reduced corneal reflexes,15 and peripheral nerve degeneration.11-16 Despite the known interaction between smell and taste,17 we found no studies that investigated it in relation to BMS.

Thus, the objective of this preliminary study was to determine the tactile, pain, thermal, gustative, and olfactory thresholds in a group of patients with BMS as compared with controls.
METHODS
This research was approved by the Ethics Committee of Hospital das Clinicas, Medical School, University of Sao Paulo (HC-FMUSP), and all patients provided informed consent. Twenty consecutive patients with BMS, diagnosed according to the International Association for the Study of Pain (IASP) criteria,18 were evaluated by the HC-FMUSP orofacial pain team between August 2007 and January 2008 and compared with 30 normal subjects. All patients had had BMS for more than 3 years, had no oral infections or other lesions, and had none of the diseases included in the exclusion criteria.
Inclusion criteria: The study included 20 patients newly diagnosed with BMS who had not begun pharmacological treatment and 30 healthy controls with no complaint of facial or intraoral pain within the last 6 months who were consecutively selected from patients receiving dental treatment at the Dentistry Division of the hospital.
Exclusion criteria (for patients and controls): Exclusion criteria included Sjögren syndrome, rheumatological diseases (i.e., fibromyalgia and rheumatoid arthritis), diabetes, anemia, hyper- or hypothyroidism, generalized pain, and a history of surgery in the facial/oral region. The patients and controls underwent a systematized evaluation by the hospital's general physician to investigate the presence of systemic diseases. In addition to the clinical examination, a hematological evaluation of thyroid hormones, glycemia, rheumatological markers, including C-reactive protein and erythrocyte sedimentation rate, and hemogram values was performed.
All subjects underwent a standardized superficial facial sensibility protocol applied to distinct areas of the face (bilateral trigeminal branches) and oral mucosa (superior and inferior arches) 19 in the following order.
(1) Thermal sensibility (using an electrical device designed at the Functional Neurosurgery Division of HC-FMUSP) at a temperature range between 0˚C and 50˚C. (2) Mechanical/tactile sensibility (using von Frey microfilaments) ranging from 0.1 g/mm2 to 10.0 g/mm2. Each thermal and mechanical stimulus was applied three times, and the threshold was established when the subject recognized at least two of the three applications; otherwise, the next stimulus in ascending order was applied to avoid a tolerance effect. (3) Pain threshold (algometry), performed with a superficial device and a disposable 0.7615-mm needle. The ophthalmic branch (V1) was evaluated 1 cm above the eyebrow, the maxillary branch (V2) 1 cm to the side of the nose wing, and the mandibular branch (V3) 1 cm below the angle of the lips. (4) Gustative thresholds, with sweet, salty, sour and bitter solutions in increasing concentrations. A single drop of each concentration was applied and swallowed by the patient; the results were compared to results from a single drop of distilled water. When the stimulus was not perceived, the next concentration was applied. The patient's mouth was washed with distilled water between different tastes.
(5) Olfactory threshold with isopropanol solutions (9.9; 15; 23.3; 32; 48; 53; 70%) [23][24]. Each concentration was offered to the patient along with a bottle of water, and the patient was asked to choose the bottle containing the substance three times. The threshold was established when the patient correctly chose all three times. If the patient chose incorrectly, the next concentration was offered along with a bottle of water (a sketch of this ascending procedure is given below).
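This ascending forced-choice procedure is easy to simulate. The sketch below is purely illustrative and is not part of the study protocol: the detects function is a hypothetical subject model invented here for demonstration.

```python
import random

# Isopropanol concentrations (%) offered in ascending order, as in the protocol.
CONCENTRATIONS = [9.9, 15, 23.3, 32, 48, 53, 70]

def detects(concentration, true_threshold):
    """Hypothetical subject model: the bottle containing the substance is
    identified reliably at or above the subject's true threshold; below it,
    the choice between the two bottles is a coin flip."""
    if concentration >= true_threshold:
        return True
    return random.random() < 0.5

def olfactory_threshold(true_threshold):
    """Ascending staircase: at each concentration the subject must pick the
    correct bottle in all three presentations; otherwise the next (higher)
    concentration is offered."""
    for c in CONCENTRATIONS:
        if all(detects(c, true_threshold) for _ in range(3)):
            return c
    return None  # no threshold established within the tested range

print(olfactory_threshold(true_threshold=30))  # typically prints 32
```

Averaging many simulated runs shows how rarely guessing alone passes a level, which is why the protocol requires three correct choices rather than one.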
All subjects were evaluated in the sitting position, with the head resting on a flat surface and the Frankfurt line parallel to the ground. All evaluations took place at the same time of day (between 1 and 4 pm) in a silent room with acoustic protection on the walls and with the door closed. Only the patient and the researcher were in the room during evaluations. All patients were evaluated by the same researcher. The subjects received the same instructions after being positioned, which were to keep their eyes closed during the exam and to identify and report whether they felt the stimuli being applied to the face (by saying ''yes'' or ''no'') and what they felt (by naming the stimulus). Only the researcher knew the order in which the stimuli would be presented. Finally, all findings were tabulated and statistically analyzed.
Statistical analysis
For age and algometry, we used the one-factor ANOVA and Tukey's test. The Kruskal-Wallis test was used to analyze facial and oral sensitivity. Finally, gustative and olfactory thresholds were evaluated with the Kruskal-Wallis test followed by Dunn's test. The level of significance was p < 0.05.
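For readers who want to replicate this pipeline on their own data, a minimal sketch follows. The arrays are invented placeholder values, and the SciPy/statsmodels calls are standard equivalents of the tests named above (Dunn's post-hoc test is available in the third-party scikit-posthocs package and is omitted here).

```python
import numpy as np
from scipy.stats import f_oneway, kruskal
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder data: one value per subject (e.g., an algometry threshold).
bms = np.array([3.1, 2.8, 3.5, 4.0, 3.3, 2.9])
controls = np.array([2.4, 2.6, 2.2, 2.8, 2.5, 2.7])

# Age and algometry: one-factor ANOVA, with Tukey's test as post hoc.
f_stat, p_anova = f_oneway(bms, controls)

# Sensitivity and threshold data: Kruskal-Wallis (non-parametric).
h_stat, p_kw = kruskal(bms, controls)

values = np.concatenate([bms, controls])
groups = ["BMS"] * len(bms) + ["control"] * len(controls)
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")
print(tukey)
```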
RESULTS
Demographic characteristics
The mean age of the subjects was 60.95 years, and there were 16 women and 4 men in the BMS group. There was a significant age difference between the groups (Table 1).
Somatosensory findings
There were no between-group differences in the somatosensory results for the ophthalmic branch, and similar cold thresholds were noted between the groups. The BMS patients had higher tactile thresholds at the maxillary branch (p = 0.001) and higher warm thresholds at the maxillary (p = 0.032) and mandibular (p = 0.001) branches (Table 2). The BMS patients had higher pain thresholds at the ophthalmic and maxillary branches (p < 0.05) (Table 3). There were no intraoral sensibility differences between the studied groups (p = 0.87).
Gustative evaluation
The gustative evaluation showed significant differences in all basic tastes (sweet p < 0.001; salty p = 0.004; sour p = 0.001; bitter p = 0.001). The BMS patients had higher salty, sweet and bitter thresholds but lower sour thresholds (Figure 1). Neither group exhibited difficulties with taste identification.
Olfactory evaluation
The BMS patients had higher olfactory thresholds ( Figure 2).
DISCUSSION
This study presents evidence that supports the theory that the neuropathic mechanisms underlying BMS involve the somatosensory, gustative and olfactory pathways. To our knowledge, this is the first time the olfactory threshold of BMS patients has been investigated, and the findings show abnormalities in the trigeminal, gustatory and olfactory systems. Thus, these findings support the notion that central sensitization is involved in the physiopathology of this disease [6][7][8]. The pathophysiology is complex because of the overlapping of cortical areas that receive afferents with trigeminal and gustative inputs 25. Furthermore, taste perception includes olfaction, and olfaction also requires somatosensory input 26. These three sensory systems (i.e., trigeminal, gustatory and olfactory) show abnormal interactions in patients with BMS.
Our results are similar to those of previous studies that show decreased somatosensory perception in BMS patients, including higher tactile and thermal thresholds in all trigeminal branches 8,9,12, higher taste thresholds 10,13,14 and a delay in the blink reflex 15. Thermal abnormalities in the orofacial region that only pertain to the perception of warmth might be associated with the burning sensation these patients describe, and it has been reported that abnormal functioning of the warmth-perceiving pathways can induce burning pain sensations 27. It is possible that in BMS, there is a malfunction in warmth detection that leads to a pain sensation and, when it becomes chronic, induces other somatosensory, gustative and olfactory disturbances through central mechanisms, including neuroplasticity at the cortical areas responsible for sensory interaction 26. Abnormal taste has been described as a consequence of the loss of warmth detection 27. In this sense, BMS could be described as a phantom sensation of heat on the tongue.
In the gustative evaluation, the sweet, salty and bitter tastes had higher thresholds, but the sour taste had lower thresholds. Sour is the taste that involves the activity of H+ ions directly through channels in the receptor membranes, which also can activate small pain fibers. In addition to peripheral nerve degeneration 11,16 , a more sensitive perception of acids (for taste and pain) could be a peripheral mechanism of BMS.
A limitation of this study is the small sample size; larger studies are necessary to confirm the reported results. Although there was an age difference between the groups, the implications of this difference are controversial, especially for gustation and olfaction, which apparently do not differ by age or gender 28.
In conclusion, this preliminary study shows evidence of abnormal tactile, pain, warmth, olfactory and gustative thresholds. Given these results and previous results in the literature, we propose a phantom heat sensation involving central pathophysiology as a mechanism in BMS. | 2014-10-01T00:00:00.000Z | 2011-03-01T00:00:00.000 | {
"year": 2011,
"sha1": "4b23e7098a9594bfa928086cebb0311d554f3e31",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/clin/v66n3/v66n3a26.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b23e7098a9594bfa928086cebb0311d554f3e31",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
254091 | pes2o/s2orc | v3-fos-license | Determination of Morphine and Codeine in Human Urine by Gas Chromatography-Mass Spectrometry
A sensitive and selective gas chromatography-mass spectrometry (GC-MS) method was developed and validated for the determination of morphine and codeine in human urine. The analysis was carried out on an HP-1MS column (30 m × 0.25 mm, 0.25 μm) with temperature programming, and helium was used as the carrier gas at a flow rate of 1.0 mL/min. Selected ion monitoring (SIM) mode was used to quantify morphine and codeine. The derivatization solvent, temperature, and time were optimized; a mixed solvent of propionic anhydride and pyridine (5 : 2) was finally used for derivatization at 80°C for 3 min. Linear calibration curves were obtained in the concentration range of 25–2000.0 ng/mL, with a lower limit of quantification of 25 ng/mL. The intra- and interday precision (RSD) values were below 13%, and the accuracy was in the range 87.2–108.5%. The developed method was successfully used for the determination of morphine and codeine in human urine in a forensic identification study.
Introduction
Morphine and codeine are naturally occurring alkaloids of the opium poppy; they have long been used as drugs and are also widely abused. While the presence of illicit drugs or their metabolites in urine is evidence of intake, their concentrations in blood are expected to correlate with their effects on the central nervous system [1]. Morphine is a powerful, highly addictive narcotic analgesic. Codeine is a potent μ-opioid receptor agonist which is used for the treatment of adult cough. Some athletes have also used large doses in sports competitions to improve performance, a practice that is contrary to the principle of fair competition and harmful to athletes' health. Heroin, one of the most widely abused drugs, is rapidly metabolized to 6-monoacetylmorphine (6-MAM) once inside the human body. This specific heroin metabolite is usually detected at its highest concentration within 2 to 4 hours and is no longer detectable in urine after about six hours. Even when 6-MAM is absent from urine, however, morphine, which is both a well-known pharmaceutical agent and an important metabolite of codeine and heroin, has a relatively long detection time. Morphine and codeine analysis of urine is therefore used in forensic toxicology to study drug addiction.
There are numerous published papers on the simultaneous determination of morphine and codeine in human fluids, including micellar electrokinetic chromatography (MEKC) [2], disposable pipette extraction (DPX) [3], high performance liquid chromatography [4], liquid chromatography-mass spectrometry [5], and liquid chromatography/triple quadrupole tandem mass spectrometry (LC/MS/MS) [6][7][8]. Several gas chromatography-mass spectrometry (GC-MS) methods have been developed for the analysis of codeine, morphine, or other opiates, and much attention has been directed to the confirmation of morphine and codeine in urine by GC-MS [9]. A few methods have been developed specifically for the analysis of 6-acetylmorphine (6-AM) together with morphine and codeine because all three drugs are often present after heroin use. GC-MS assays of morphine and codeine are capable of high sensitivity, specificity, and selectivity, and provide important diagnostic value in the study of drug abuse. The aim of this study was to establish a reliable method for the identification and quantitation of morphine and codeine in samples from suspected drug users.
Currently, urine sampling is extensively employed for the evaluation of drug consumption. Saliva is another possible matrix, but the reliability of saliva analysis is limited by the fact that analyte levels, and even the availability of the required sample volume, depend on several physiological factors, nutrition and fluid intake, while the biological effects of the consumed illicit substance may also be a significant factor [10]. The identification of chronic consumers or the late verification of a single intake is feasible using hair as a matrix [11], but hair is not suitable for the early verification of consumption. Urine is a preferable matrix for analytical purposes in comparison with saliva because of the minimal discomfort caused to sampled individuals, so it is widely used.
Sample preparation is a key step in the determination of drugs in biological samples. A simple and effective ethyl acetate extraction was employed in our work; ethyl acetate was adopted because of its high extraction efficiency. Pyridine is a catalytic solvent for reactions with propionic anhydride. Propionic anhydride was chosen as the derivatization reagent because it exhibited a better effect than acetic anhydride or trifluoroacetic acid anhydride: it provides better stability and avoids the drawback that the acetyl derivative of morphine is indistinguishable from 6-AM. Kushnir et al. [12] evaluated propionic anhydride, MBTFA, HFAA, and BSTFA for GC-MS analysis of 6-AM. They concluded that propionic anhydride gave accurate, precise, and sensitive results while providing compatibility with other methods on the same GC-MS instrument; residual derivatization reagent in the injector will react with drugs in other methods not intended for derivatization. The derivatization procedure accommodates the analysis of opioids commonly requiring GC-MS confirmation in urine. Difficulties were expected to arise for a number of reasons. Concentrations of the analytes in the samples were expected to be smaller than the low end of the therapeutic range (25 ng/mL), which highlighted the importance of efforts aimed at increasing the sensitivity of detection. Validation of the analytical method also posed certain requirements: the relative standard deviation of the retention parameters of the target compound was required not to exceed 5%.
Chemicals and Reagents.
Morphine (10 μg/mL in methanol) and codeine (10 μg/mL in methanol) solutions were obtained from the Institute of Forensic Science under the Ministry of Justice (Shanghai, China). Sodium hydroxide (purity >98.0%) was purchased from Sigma-Aldrich Trading Co (Shanghai, P.R. China), ethyl acetate (purity >98.0%) was purchased from Siyou Chemical Reagent Co., Ltd (Tianjin, China), and propionic acid anhydride (purity >98.0%) was purchased from Sinopharm Chemical Reagent Co., Ltd (Beijing, China). Pyridine was from Shenbo Chemical Co., Ltd (Shanghai, China), while methanol was obtained from Siyou Chemical Reagent Co., Ltd (Tianjin, China). Ultrapure water was prepared by a Milli-Q purification system from Millipore (Bedford, USA). All other chemicals were of analytical purity and used without further purification.
Instrumentation and Conditions.
Analysis was performed on an Agilent 6890N gas chromatograph (GC) coupled with an Agilent 5975B mass spectrometer (MS, Agilent Technologies, Wilmington, DE, USA). Samples were injected using an Agilent autosampler unit.
The capillary column used was an HP-1MS (30 m × 0.25 mm, 0.25 μm). Helium was the carrier gas at a flow rate of 1.0 mL/min. The temperature program was: initial temperature, 100°C for 1.5 min; ramp at 25°C/min to 280°C, held for 15 min; injection temperature, 250°C; and transfer line, 280°C. The sample injection volume was 1 μL in splitless injection mode. Electron impact ionization was performed at 70 eV energy and at a 230°C ion source temperature; the quadrupole temperature was 150°C. The MS was operated in selected ion monitoring (SIM) mode, which was applied to quantify the analytes using target ions at m/z 341, 397, and 268 for the morphine propionyl derivative and m/z 229, 355, and 282 for the codeine propionyl derivative (Figure 1).
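As a quick sanity check on these conditions, the oven program can be encoded and its total run time computed. The sketch below is illustrative only and assumes the program is exactly the stated hold-ramp-hold sequence.

```python
# Oven program from the text: 100 C held 1.5 min,
# then 25 C/min up to 280 C, held 15 min.
segments = [
    {"start_c": 100, "end_c": 100, "rate_c_per_min": None, "hold_min": 1.5},
    {"start_c": 100, "end_c": 280, "rate_c_per_min": 25.0, "hold_min": 15.0},
]

def total_run_time(segments):
    """Sum of hold times plus ramp times (temperature change / ramp rate)."""
    minutes = 0.0
    for seg in segments:
        if seg["rate_c_per_min"]:
            minutes += (seg["end_c"] - seg["start_c"]) / seg["rate_c_per_min"]
        minutes += seg["hold_min"]
    return minutes

print(f"Total GC run time: {total_run_time(segments):.1f} min")  # 23.7 min
```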
Sample Preparation.
The primary standard stock solutions of morphine (100 μg/mL) and codeine (100 μg/mL) were separately prepared in 10 mL volumetric flasks with urine. 10% NaOH was added dropwise until pH 9.0-9.2 was reached, and 1.0 mL of borax buffer solution was added. To this, 3 mL of extraction solvent (ethyl acetate) was added and vortex-mixed for 2.0 min, followed by centrifugation at 3000 r/min for 5 min. The supernatant organic layer was transferred into a 5 mL glass test tube and dried under an air stream at 60°C. The dried residue was reconstituted in 50 μL of propionic anhydride and 20 μL of pyridine. All reagents were vortex-mixed, then heated for 3 min at 80°C and dried under an air stream at 60°C. The dried residue was reconstituted in 50 μL of methanol, and 1 μL of this solution was injected into the GC-MS.
Method Validation.
Specificity was determined by analysis of blank urine, without addition of morphine and codeine to determine possible interference with these compounds.
To evaluate the linearity, the calibration curves were generated using the analyte peak area by linear regression on three consecutive days. The LLOQ was estimated in the process of calibration curve construction and was defined as the lowest concentration for which precision (RSD) was better than 20%.
QC samples at three concentration levels (50, 200, and 1600 ng/mL for morphine and codeine) were analyzed to assess the accuracy and precision of the method. Again, the assays were performed on three separate days, and on each day six replicates of the QC samples at each concentration level were analyzed. The assay accuracy was calculated as relative error. The assay precision for each QC level was determined as the relative standard deviation (RSD) of the measured concentrations. The intra- and interday precisions were required to be below 15%, and the accuracy was required to be within ±15%. Stability in urine was assessed in the autosampler at room temperature for 12 h. The effect of three freeze-thaw cycles was also investigated.
Selectivity and Linearity.
Figure 2 shows the typical chromatograms of a blank urine sample spiked with morphine and codeine. No interfering endogenous substances were observed at the retention times of morphine and codeine.
Calibration curves for morphine and codeine were generated by linear regression of peak areas against concentrations. The regression equations were y = 2270.9x + 202.3 with r = 0.9974 for morphine, and y = 3099.0x + 31625.7 with r = 0.9958 for codeine (y is the peak area of the analyte and x is the concentration of the analyte in human urine), with concentrations in the range 25-2000 ng/mL for both morphine and codeine.
The LLOQ for morphine in human urine was 25 ng/mL and the precision and accuracy at LLOQ were 10.5% and 87.6%, respectively. The LLOQ for codeine in human urine was 25 ng/mL and the precision and accuracy at LLOQ were 13.8% and 88.9%, respectively.
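These calibration equations can be inverted to back-calculate a concentration from a measured peak area. The sketch below is illustrative: the peak areas are made up, and x = (y - b)/a is simply the inverse of the reported regression lines.

```python
# Calibration parameters (slope a, intercept b) taken from the text:
# morphine: y = 2270.9x + 202.3; codeine: y = 3099.0x + 31625.7
CALIBRATION = {
    "morphine": (2270.9, 202.3),
    "codeine": (3099.0, 31625.7),
}
LLOQ_NG_PER_ML = 25.0
ULOQ_NG_PER_ML = 2000.0

def back_calculate(analyte, peak_area):
    """Invert y = a*x + b to get the concentration x in ng/mL, flagging
    results outside the validated 25-2000 ng/mL range."""
    a, b = CALIBRATION[analyte]
    conc = (peak_area - b) / a
    in_range = LLOQ_NG_PER_ML <= conc <= ULOQ_NG_PER_ML
    return conc, in_range

# Made-up peak areas, for illustration only.
for analyte, area in [("morphine", 456000.0), ("codeine", 650000.0)]:
    conc, ok = back_calculate(analyte, area)
    print(f"{analyte}: {conc:.1f} ng/mL ({'valid' if ok else 'outside range'})")
```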
Precision, Accuracy, and Extraction Recovery.
The precision of the method was determined by calculating the RSD for QCs at three concentration levels over three validation days. Intraday precision was 12% or less, and interday precision was 13% or less at each QC level. The accuracy of the method ranged from 87.2% to 99.7% at each QC level. Assay performance data are presented in Table 1. These results demonstrate that the values are within the acceptable range and that the method is accurate and precise. The recovery of morphine and codeine was evaluated by comparing the peak area ratios of extracted QC samples with those of reference QC solutions reconstituted in blank urine extracts. Mean recoveries of morphine and codeine were better than 75.5%.
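The reported validation metrics reduce to a few one-line formulas. The following sketch, using invented replicate measurements, shows how RSD, accuracy, and extraction recovery would typically be computed.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation: 100 * sample SD / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def accuracy_percent(measured, nominal):
    """Mean measured concentration as a percentage of the nominal value."""
    return 100 * statistics.mean(measured) / nominal

def recovery_percent(extracted_areas, reference_areas):
    """Mean peak area of extracted QCs vs. references spiked into blank extracts."""
    return 100 * statistics.mean(extracted_areas) / statistics.mean(reference_areas)

# Invented replicate data for the 200 ng/mL QC level (six replicates).
measured = [196.1, 188.4, 205.2, 191.7, 183.9, 199.5]
print(f"RSD: {rsd_percent(measured):.1f}%")                  # acceptance: <= 15%
print(f"Accuracy: {accuracy_percent(measured, 200):.1f}%")   # acceptance: 85-115%

extracted = [455000, 448000, 462000]
reference = [583000, 590000, 577000]
print(f"Recovery: {recovery_percent(extracted, reference):.1f}%")
```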
Stability.
All the stability studies of morphine and codeine in human urine were conducted at three concentration levels (50, 200, and 1600 ng/mL for morphine and codeine) with three replicates for each concentration. The stability results showed that morphine and codeine in human urine were stable during three freeze-thaw cycles. Stability of morphine and codeine extracts in the sample solvent on autosampler was also observed over a 12 h period. The results of stability experiments are listed in Table 2.
Conclusions
A stable, selective, and sensitive GC-MS method has been developed for the simultaneous determination of codeine and its metabolite morphine in human urine. This developed method with derivatization for sample preparation was successfully applied for the determination of morphine and codeine in human urine for methodological study. | 2016-05-12T22:15:10.714Z | 2013-10-10T00:00:00.000 | {
"year": 2013,
"sha1": "8af03130eb9d3a92cf5b72f6fd3c351e0fa3a479",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2013/151934",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b8d9db802f7c0ba5484182b9e6ce440e781a93c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
1588617 | pes2o/s2orc | v3-fos-license | Equidistribution towards the Green current for holomorphic maps
Let f be a non-invertible holomorphic endomorphism of a projective space and f^n its iterate of order n. We prove that the pull-back by f^n of a generic (in the Zariski sense) hypersurface, properly normalized, converges to the Green current associated to f when n tends to infinity. We also give an analogous result for the pull-back of positive closed (1,1)-currents.
Introduction
Let f be a holomorphic endomorphism of algebraic degree d ≥ 2 on the projective space P^k. Let ω denote the Fubini-Study form on P^k normalized so that ω is cohomologous to a hyperplane or, equivalently, ∫_{P^k} ω^k = 1. It is well-known that the sequence of smooth positive closed (1,1)-forms d^{-n}(f^n)^*(ω) converges weakly to a positive closed (1,1)-current T of mass 1. Moreover, T has locally continuous potentials and is totally invariant, i.e. f^*(T) = dT. We call T the Green current of f. The complement of the support of T is the Fatou set, i.e. the sequence (f^n) is locally equicontinuous there. We refer the reader to the survey [29] for background. Our main results in this paper are the following theorems, where [·] denotes the current of integration on a complex variety. Theorem 1.1. Let f be a holomorphic endomorphism of algebraic degree d ≥ 2 of P^k and T the Green current associated to f. There is a proper analytic subset E of P^k such that if H is a hypersurface of degree s in P^k which does not contain any irreducible component of E, then d^{-n}(f^n)^*[H] converges to sT in the sense of currents when n tends to infinity. Moreover, E is totally invariant, i.e. f^{-1}(E) = f(E) = E.
The exceptional set E will be explicitly constructed in Sections 6 and 7. It is the union of the totally invariant proper analytic subsets of P^k which are minimal, that is, which have no proper analytic subsets that are totally invariant; see Example 7.5. That example shows that E is not the maximal totally invariant analytic set. The previous result is in fact a consequence of the following one, see also Theorem 7.1 for a uniform convergence result. Theorem 1.2. Let f be a holomorphic endomorphism of algebraic degree d ≥ 2 of P^k and T the Green current associated to f. There is a proper analytic subset E of P^k, totally invariant, such that if S is a positive closed (1,1)-current of mass 1 on P^k whose local potentials are not identically −∞ on any irreducible component of E, then d^{-n}(f^n)^*(S) → T as n → ∞.
The space H_d of holomorphic maps f of a given algebraic degree d ≥ 2 is an irreducible quasi-projective manifold. We will also deduce from our study the following result due to Fornaess and the second author [17], see also [16,29]. The rough idea in order to prove our main results is as follows. Write S = dd^c u + T. Then, the invariance of T implies that d^{-n}(f^n)^*(S) = d^{-n} dd^c(u∘f^n) + T. We have to show, in different situations, that d^{-n} u∘f^n converges to 0 in L^1; this implies that d^{-n}(f^n)^*(S) → T. So, we have to study the asymptotic contraction (à la Lojasiewicz) by f^n. The main estimate is obtained using geometric estimates and convergence results for plurisubharmonic functions, see Theorem 5.1. If d^{-n} u∘f^n do not converge to 0, then, using that the possible contraction is limited, we construct a limit v with strictly positive Lelong numbers. We then construct other functions w_{-n} such that the current dd^c w_{-n} + T has Lelong numbers ≥ α_0 > 0 and w_0 = d^{-n} w_{-n}∘f^n. It follows from the last identity that w_0 has positive Lelong numbers on an infinite union of analytic sets of a suitable dimension. The volume growth of these sets implies that the current associated to w_0 has too large a self-intersection. This contradicts bounds due to Demailly and Méo [5,26]. (One should notice that the Demailly-Méo estimates depend on the L^2 estimates for the ∂-equation; they were recently extended to the case of compact Kähler manifolds by Vigny [33].) The previous argument has to be applied inductively on totally invariant sets for f, which are a priori singular and on which we inductively show the convergence to 0, starting with sets of dimension 0. So, we also have to develop the basics of the theory of weakly plurisubharmonic functions on singular analytic sets, which is probably of independent interest. The advantage of this class of functions is that it has good compactness properties.
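The reduction described above fits in one display. The following lines (standard notation, reconstructed from the surrounding text) record it for reference.

```latex
% Writing S = T + dd^c u and using the total invariance f^*(T) = dT,
% so that (f^n)^*(T) = d^n T:
\[
  d^{-n}(f^n)^*(S) \;=\; d^{-n}(f^n)^*(T) + d^{-n}\, dd^c (u\circ f^n)
  \;=\; T + dd^c\!\left(d^{-n}\, u\circ f^n\right).
\]
% Hence d^{-n}(f^n)^*(S) -> T is equivalent to d^{-n} u\circ f^n -> 0
% in L^1(P^k).
```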
One may conjecture that totally invariant analytic sets should be unions of linear subspaces of P k . The case of dimension k = 2 is proved in [3,28]. These authors complete the result in [18]. If this were true for k ≥ 3, our proof would be technically simpler. It is anyway interesting to carry the analysis without any assumption on the totally invariant sets since our approach may be extended to the case of meromorphic maps on compact Kähler manifolds. At the end of the paper, we will consider the case of regular polynomial automorphisms of C k .
The problem of convergence was first considered by Brolin for polynomials in dimension 1 and then by Lyubich, Freire, Lopes and Mañé for rational maps in dimension 1. The measure µ := T^k is the unique invariant measure of maximal entropy, see [17,1,29]. In this case, the conjecture was proved by the authors in [9]. Weaker results in this direction were obtained in [17] and [1]. We will give some details in Theorem 6.6. For 2 ≤ p ≤ k − 1, the authors have proved in [12] that for f in a Zariski dense open set H′_d ⊂ H_d, there is no proper analytic subset of P^k which is totally invariant and that the conjecture holds. Indeed, a version of Theorem 1.3 is proved.
Plurisubharmonic functions
We refer the reader to [22,6,10] for the basic properties of plurisubharmonic (psh for short) and quasi-psh functions on smooth manifolds. In order to study the Levi problem for analytic spaces X, the psh functions which are considered, are the restrictions of psh functions on an open set of C k for a local embedding of X. Let u : X → R ∪ {−∞} be an upper semi-continuous function which is not identically equal to −∞ on any irreducible component of X. Fornaess-Narasimhan proved that if u is subharmonic or equal to −∞ on any holomorphic disc in X, then u is psh in the above sense [15]. However, this class does not satisfy good compactness properties which are useful in our analysis. Assume that X is an analytic space of pure dimension p. Let reg(X) and sing(X) denote the regular and the singular parts of X. We consider the following weaker notion of psh functions which is modeled after the notion of weakly holomorphic functions. The class has good compactness properties.
Fornaess-Narasimhan's theorem implies that psh functions are wpsh. Wpsh functions are psh when X is smooth. One should notice that the restriction of a wpsh function to an irreducible component of X is not necessarily wpsh. For example, consider X = {xy = 0} in the unit ball of C^2, let v = 0 on {x = 0} \ {(0,0)} and v = 1 on {y = 0}; then v is wpsh on X but its restriction to {x = 0} is not wpsh. Consider the (strongly) psh functions v_n := |x|^{1/n} on X. The sequence (v_n) converges to v in L^1(X). So, psh functions on analytic sets do not have good compactness properties.
Proposition 2.2. Let Z ⊂ X be an analytic subset of dimension ≤ p − 1 and v ′ a wpsh function on X \ Z. If v ′ is locally bounded from above near Z then there is a unique wpsh function v on X equal to v ′ outside Z.
Proof. The extension to a psh function on reg(X) is well-known. So, we can assume that Z ⊂ sing(X). Condition (b) in Definition 2.1 implies the uniqueness of the extension of v′. Define v(a) := lim sup v′(x) for x ∉ Z and x → a. It is clear that v = v′ out of Z and that v satisfies the conditions in Definition 2.1. Now assume for simplicity that X is an analytic subset of pure dimension p of an open set U in C^k; the general case can be deduced from this one. The following results give characterizations of wpsh functions.
Proof. Define ṽ := v∘π outside the analytic set π^{-1}(sing(X)). This function is psh and is locally bounded above near π^{-1}(sing(X)). We can extend it to a psh function on X̃ that we also denote by ṽ. For x ∈ X, π^{-1}(x) is compact. The maximum principle implies that ṽ is constant on each irreducible component of π^{-1}(x). From the definition of wpsh functions, we get v(x) = max_{π^{-1}(x)} ṽ. The second assertion in the proposition follows from the definition of wpsh functions.
A theorem of Lelong says that the integration on reg(X) defines a positive closed (k − p, k − p)-current [X] on U, see [23,6]. Let z denote the coordinates in C^k. Proposition 2.4. Let v be an upper semi-continuous function on X which is not identically −∞ on any irreducible component of X. Then v is wpsh if and only if it satisfies the following properties: (a) v is in L^1_loc(X), i.e. ∫_K |v| (dd^c|z|^2)^p < +∞ for any compact set K ⊂ X; (b) v is strongly upper semi-continuous, i.e. for any a ∈ X and any full measure subset X′ of X, v(a) = lim sup v(x) for x ∈ X′ and x → a; (c) dd^c(v[X]) is a positive current on U.
Proof. We use the notations in Proposition 2.3. The proposition is known for smooth manifolds, see [6]. Assume that v is wpsh. The function ṽ defined above satisfies properties (a), (b) and (c) on X̃. It follows that v satisfies (a), (b) and (c) on X. Conversely, properties (a)-(c) imply that v is psh on reg(X). Then, property (b) implies that v satisfies the conditions of Definition 2.1.
Proposition 2.5. Let (v n ) be a sequence of wpsh functions on X, locally uniformly bounded from above. Then, there is a subsequence (v n i ) satisfying one of the following properties: (a) There is an irreducible component Y of X such that (v n i ) converges uniformly to −∞ on K \ sing(X) for any compact set K ⊂ Y .
(b) (v n i ) converges in L q loc (X) to a wpsh function v for every 1 ≤ q < +∞.
In the last case, lim sup v n i ≤ v on X with equality almost everywhere.
Proof. Let π : X → X ⊂ U be as above. We extend the functions v n •π, which are psh on π −1 (reg(X)) to psh functions v n on X. Recall that v n (x) = max π −1 (x) v n . Now, since the proposition holds for smooth manifolds, it is enough to apply it to ( v n ). If a psh function v is a limit value of ( v n ) in L q loc ( X), the function v, defined by v(x) := max π −1 (x) v, satisfies the property (b) in the proposition. If not, v n converge to −∞ locally uniformly on some component of X and the property (a) holds.
The following result is the classical Hartogs' lemma when X is smooth [22]. Lemma 2.6. Let (v n ) be a sequence of wpsh functions on X. Let u be a continuous function on X such that lim sup v n < u. Then for every compact set K ⊂ X, v n < u on K for n large enough. This holds in particular, if (v n ) converges to a wpsh function v in L 1 loc (X) and v < u.
Proof. Let π and v n be defined as above. These functions v n are psh on X. Define u := u•π. It is clear that u is continuous and that lim sup v n ≤ lim sup v n •π < u. We only have to apply the classical Hartogs' lemma in order to obtain v n < u on π −1 (K) for n large enough. This implies the result. The last assertion in the lemma is a consequence of Proposition 2.5.
The following lemma will be useful.
Lemma 2.7. Let G be a family of psh functions on U locally uniformly bounded from above. Assume that for each irreducible component of X there is an analytic subset Z such that the restriction of G to Z is bounded in L^1_loc(Z). Then, the restriction of G to X is bounded in L^1_loc(X). Proof. We can assume that X is irreducible. For (v_n) ⊂ G, define the psh functions ṽ_n on X̃ as above. It is clear that the ṽ_n are locally uniformly bounded from above. Let W ⋐ U be an open set which intersects Z. The maximal value of ṽ_n on π^{-1}(Z ∩ W) is equal to the maximal value of v_n on Z ∩ W. It follows from the hypothesis that no subsequence of (ṽ_n) converges uniformly on compact sets to −∞. Proposition 2.5 applied to (ṽ_n) implies that this sequence is bounded in L^1_loc(X̃). Applying again Proposition 2.5 to (v_n) gives the lemma. Let R be a positive closed (1,1)-current on U with continuous local potentials, i.e. locally R = dd^c v with v psh and continuous. Let R′ be a positive closed (k − p, k − p)-current on U, 1 ≤ p ≤ k − 1. Recall that we can define their intersection by R ∧ R′ := dd^c(vR′), where v is a local potential of R as above. This is a positive closed (k − p + 1, k − p + 1)-current on U which depends continuously on R′. The definition is independent of the choice of v. By induction, if R_1, ..., R_p are positive closed (1,1)-currents with continuous local potentials, the intersection ν := R_1 ∧ ... ∧ R_p ∧ [X] is a positive measure with support in X. This product is symmetric with respect to R_1, ..., R_p. Proposition 2.8. For all compact sets K and K′ in X with K contained in the interior of K′, there is a constant c > 0 such that, for every wpsh function u on X, max_K u ≤ c‖u‖_{L^1(K′)} and ∫_K |u| dν ≤ c‖u‖_{L^1(K′)}. In particular, ν has no mass on analytic subsets of dimension ≤ p − 1 of X.
Proof. Choose a compact set L such that K ⋐ L ⋐ K ′ and a neighbourhood W of sing(X) small enough. If a is a point in K ∩ W , then we can find a Riemann surface in X containing a and having boundary in L \ W . Indeed, it is enough to consider the intersection of X with a suitable linear plane P of dimension k − p + 1 passing throught a. The maximum principle applied to the lift of u to X (defined above) implies that u(a) ≤ max L\W u and hence max K u ≤ max L\W u. Since L \ W ⊂ reg(X), the submean inequality for psh functions on smooth manifolds implies that max L\W u ≤ c u L 1 (K ′ ) for some constant c > 0. Hence, max K u ≤ c u L 1 (K ′ ) .
We now prove the second inequality. Replacing u by u − c‖u‖_{L^1(K′)} allows us to assume that u ≤ 0 on K. Since the problem is local, we can assume that R_i = dd^c v_i with v_i continuous on U. Moreover, we can approximate v_i by decreasing sequences (v_{i,n}) of smooth psh functions. Define R_{i,n} := dd^c v_{i,n}. It is well-known that ν_n := R_{1,n} ∧ ... ∧ R_{p,n} ∧ [X] converge to ν in the sense of measures. Using the same arguments as in the Chern-Levine-Nirenberg inequalities [4,6,29], we get ∫_K |u| dν_n ≤ c′‖u‖_{L^1(K′)}, where c′ > 0 is independent of n. When n → ∞, since ν_n → ν and since u is upper semi-continuous, we obtain ∫_K |u| dν ≤ c′‖u‖_{L^1(K′)}. This implies the second inequality in the proposition.
Let Y be an analytic subset of X of dimension ≤ p − 1. Then, there is a psh function u ′ on U such that {u ′ = −∞} = Y . The last inequality applied to the restriction of u ′ to X, implies ν(Y ) = 0.
Modulo T plurisubharmonic functions
We are going to develop in this section the analogue in the compact case of the local theory in Section 2. Consider a (compact) analytic subset X of P k of pure dimension p. Recall that the Green current T of f has locally continuous potentials. Observe that in what follows (except for Lemma 3.8, Corollary 3.9 and Remark 3.10), T could be an arbitrary positive closed (1, 1)-current of mass 1 with continuous potentials, and P k could be replaced by any compact Kähler manifold. We will use the following notion that allows us to simplify the notations.
locally it is the difference of a wpsh function on X and a potential of T . If X is smooth, we say that u is psh modulo T .
The following result is a consequence of Proposition 2.4. Note that if u is a modulo T wpsh function, dd c (u[X]) + T ∧ [X] is a positive closed current of bidegree (k − p + 1, k − p + 1) supported on X. If S is a positive closed (1, 1)-current on P k of mass 1, then it is cohomologous to T and we can write S = T + dd c u where u is a modulo T psh function on P k . The restriction of such a function u to X is either wpsh modulo T or equal to −∞ on at least one irreducible component of X.
The following proposition is a consequence of Proposition 2.5. Proposition 3.3. Let (u n ) be a sequence of modulo T wpsh functions on X, uniformly bounded from above. Then there is a subsequence (u n i ) satisfying one of the following properties: (1) There is an irreducible component Y of X such that (u n i ) converges uniformly to −∞ on Y \ sing(X).
(2) (u n i ) converges in L q (X) to a modulo T wpsh function u for every 1 ≤ q < +∞.
In the last case, lim sup u n i ≤ u on X with equality almost everywhere.
The Hartogs' lemma 2.6 implies the following.
Lemma 3.4. Let (u n ) be a sequence of modulo T wpsh functions on X converging in L 1 (X) to a modulo T wpsh function u. If w is a continuous function on X such that u < w, then u n < w for n large enough.
The following lemma is deduced from Lemma 2.7.
Lemma 3.5. Let G be a family of modulo T psh functions on P k uniformly bounded from above. Assume that each irreducible component of X contains an analytic subset Y such that the restriction of G to Y is bounded in L 1 (Y ). Then, the restriction of G to X is bounded in L 1 (X).
Define a positive measure supported on X by µ X := T p ∧ [X]. By Bézout's theorem, the mass of µ X is equal to the degree of X. The same argument implies that µ X has positive mass on any irreducible component of X. The following result is a consequence of Proposition 2.8.
In particular, µ X has no mass on analytic subsets of dimension ≤ p − 1 of X.
We also have the following useful Proposition and Lemma. Proof. Proposition 3.6 implies that µ Y has no mass on sing(X). If G is bounded in L 1 (X) then it is bounded in L 1 (Y ). We have seen that the restriction of u ∈ G to Y is equal outside sing(X) to a modulo T wpsh function on Y . By Proposition 3.6, there is a constant c > 0 such that | udµ Y | ≤ c for u ∈ G . Conversely, assume that | udµ Y | ≤ c for u ∈ G and for any irreducible component Y of X. Since µ Y has no mass on sing(X), we can replace X by Y and assume that X is irreducible. Define m u := max X u and v := u − m u . Since max X v = 0, Proposition 3.3 implies that the family of such functions v is bounded in L 1 (X), see also Definition 2.1(b). On the other hand, we have This and Proposition 3.6, applied to v, imply that |m u | is bounded. Since u = m u + v, we obtain that G is bounded in L 1 (X).
Lemma 3.8. Assume that X is invariant, i.e. f(X) = X. If u is a modulo T wpsh function on X, then d^{-1} u∘f is equal outside an analytic subset of X to a modulo T wpsh function w on X. Moreover, w depends continuously on u.
Proof. Since u is bounded from above, d^{-1} u∘f is bounded from above, and Proposition 2.2 implies the existence of w. That w depends continuously on u follows from Proposition 3.3.
Corollary 3.9. Assume that X is invariant. Let G be a family of modulo T wpsh functions on X, bounded in L^1(X). Then, the family of modulo T wpsh functions on X which are equal almost everywhere to d^{-n} u∘f^n with n ≥ 0 and u ∈ G is bounded in L^1(X). Moreover, if u_n ∈ G and if d^{-n} u_n∘f^n converge in L^1(X) to a function u, then u ≤ 0 and u = 0 on supp(µ_X). Proof. Replacing f by an iterate f^n allows us to assume that f fixes all the irreducible components of X. So, we can assume that X is irreducible. For the first assertion, by Proposition 3.6, we can subtract from each u a constant in order that max_X u = 0. So, we can assume that G is the set of such functions u. This is a bounded set in L^1(X). All the functions d^{-n} u∘f^n are equal almost everywhere to functions in G. The first assertion follows. For the second assertion, by Lemma 3.8, d^{-n} u_n∘f^n is equal outside an analytic set to a modulo T wpsh function v_n on X. Propositions 3.3 and 3.6 imply that u_n ≤ A and ∫|u_n| dµ_X ≤ A for some constant A > 0. It follows that v_n ≤ d^{-n}A, see also Proposition 3.2(b), and then lim sup v_n ≤ 0. Hence, u ≤ 0. On the other hand, since X is invariant and T is totally invariant, we have (f^n)_*(µ_X) = µ_X and ∫ v_n dµ_X = d^{-n} ∫ u_n∘f^n dµ_X = d^{-n} ∫ u_n dµ_X. Hence, ∫ v_n dµ_X → 0. By Propositions 3.7 and 3.6, (v_n) is bounded from above. This allows us to apply the last assertion in Proposition 3.3. We deduce from Fatou's lemma and the convergence ∫ v_n dµ_X → 0 that ∫ u dµ_X ≥ 0. This and the inequality u ≤ 0 imply that u = 0 µ_X-almost everywhere. By upper semicontinuity, u = 0 on supp(µ_X).
Remark 3. 10. Assume that f is chaotic, i.e. the support of the Green measure µ of f is equal to P k . Then, the previous corollary gives us a simple proof of the following property: for all positive closed (1, 1)-currents S n of mass 1 on P k , we have d −n (f n ) * (S n ) → T . Indeed, we can write S n = T + dd c u n with u n bounded in L 1 (X), and hence d −n u n • f n converge to 0.
Lelong numbers
In this section, we recall some properties of the Lelong numbers of currents and of plurisubharmonic functions, see [6] for a systematic exposition. Let R be a positive closed (p, p)-current on an open set U of C^k. Let z denote the coordinates in C^k and B_a(r) the ball of center a and of radius r. Then, ν(R, a, r) denotes the normalized mass of R on B_a(r) (see the display below for one standard normalization). When r decreases to 0, ν(R, a, r) is decreasing, and the Lelong number of R at a is the limit ν(R, a) := lim_{r→0} ν(R, a, r).
The property that ν(R, a, r) is decreasing implies the following property that we will use later: if R_n → R and a_n → a, then lim sup ν(R_n, a_n) ≤ ν(R, a). The Lelong number ν(R, a) is also the mass of the measure R ∧ (dd^c log‖z − a‖)^{k−p} at a. It does not depend on the coordinates. So, we can define the Lelong number for currents on any manifold. If R is the current of integration on an analytic set V, by Thie's theorem, ν(R, a) is equal to the multiplicity of V at a. Recall also a theorem of Siu which says that for c > 0 the level set {ν(R, ·) ≥ c} is an analytic subset of dimension ≤ k − p of U.
Let S be a current of bidegree (1,1) and v a potential of S on U. Define the Lelong number of v at a by ν(v, a) := ν(S, a). We also have ν(v, a) = lim_{r→0} (sup_{B_a(r)} v) / log r (1). The function log r ↦ sup_{B_a(r)} v is increasing and convex with respect to log r. It follows that if v is defined on B_a(1) and is negative, the fraction in (1) is decreasing when r decreases to 0. So, if two psh functions differ by a locally bounded function, they have the same Lelong number at every point. Moreover, identity (1) allows one to define the Lelong number for every function which locally differs from a psh function by a bounded function. Let X be an analytic subset of pure dimension p in U and u a wpsh function on X. Then, the Lelong number ν_X(u, a) of u at a point a ∈ X is defined by the same formula, with B_a(r) replaced by B_a(r) ∩ X. When X is smooth at a, we can also define a positive closed (1,1)-current on a neighbourhood of a in X by S_X := dd^c u. We have ν_X(u, a) = ν(S_X, a), where the last Lelong number is defined on X.
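For reference, the following display records one standard normalization of these quantities (conventions differ by multiplicative constants); it is a reconstruction of the definitions alluded to above, not a quotation of the paper.

```latex
% Normalized mass of a positive closed (p,p)-current R on the ball B_a(r):
\[
  \nu(R,a,r) \;=\; \frac{1}{(\pi r^{2})^{\,k-p}}
  \int_{B_a(r)} R \wedge \Bigl(\tfrac{i}{2}\,\partial\bar\partial\,|z|^{2}\Bigr)^{k-p},
  \qquad
  \nu(R,a) \;=\; \lim_{r\to 0}\,\nu(R,a,r).
\]
% For a (local) potential v of a (1,1)-current S, identity (1) reads
\[
  \nu(v,a) \;=\; \lim_{r\to 0}\,\frac{\sup_{B_a(r)} v}{\log r}.
\]
```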
Consider a proper finite holomorphic map h : U ′ → U between an open set U ′ of C k and U. Let X ′ be an analytic subset of pure dimension p of U ′ such that h(X ′ ) = X, and a ′ ∈ U ′ a point such that h(a ′ ) = a. It follows from Proposition 2.2 that u • h is equal almost everywhere to a wpsh function u ′ on X ′ . The continuity of u ′ with respect to u is proved as in Lemma 3.8.
Proof. Recall that X and X ′ may be reducible and singular, but one can work on each irreducible component separately. We deduce from the identity h(X ′ ) = X and from the definition of δ that near a : Hence, On the other hand, by Theorems 9.9 and 9.12 in [6], we have The inequalities in the proposition follow from (2) and (3).
Let B X a (r) denote the connected component of B a (r) ∩ X which contains a. We call it the ball of center a and of radius r in X.
Proposition 4.2. Let G be a family of wpsh functions on X which is compact in L^1_loc(X). Let δ > 0 be such that ν_X(u, a) < δ for u ∈ G and a ∈ X. Then, for any compact set K ⊂ X, there exist constants c > 0 and A > 0 such that sup_{B^X_a(r)} u ≥ cδ log r − A for every u ∈ G, a ∈ K and 0 < r < 1. Moreover, the constant c is independent of G and of δ.
Proof. Reducing U allows to assume that G is bounded in L 1 (X) and ν X (u, a) ≤ δ − ǫ, ǫ > 0, on X for every u ∈ G . Moreover, by Proposition 2.8, G is uniformly bounded from above. So, we can assume that u ≤ 0 for every u ∈ G . If 0 < r 0 < 1 is fixed and r 0 < r < 1, the fact that G is bounded in L 1 (X) implies that sup B X a (r) u ≥ −A for every a ∈ K where A > 0 is a constant. Hence, it is enough to consider r small.
We first consider the case where X is smooth. Since the problem is local we can assume that X is a ball in C^p. Up to a dilation of coordinates, we can assume that the distance between K and ∂X is larger than 1. Define s(u, a, r) := (sup_{B_a(r)∩X} u) / log r.
Hence, for a ∈ K and for 0 < r < 1, s(u, a, r) decreases to ν(u, a) when r decreases to 0. For every (a, u) ∈ K × G , since ν(u, a) ≤ δ − ǫ, there is an r > 0 such that s(u, a, r ′ ) ≤ δ − ǫ/2 for r ′ ≤ 2r. It follows that if a psh function v on X is close enough to u then s(v, a, r) ≤ δ − ǫ/4, see Lemma 2.6. We then deduce from the definition of s(v, a, r) that if b is close enough to a and if r ′′ := r − |b − a| then The fact that s(v, b, r) is increasing implies that s(v, b, r ′ ) ≤ δ for r ′ ≤ r and for (b, v) in a neighbourhood of (a, u). Since K × G is compact, if r is small enough, the inequality s(u, a, r) ≤ δ holds for every (a, u) ∈ K × G . This implies the proposition for c = 1 in the case where X is smooth. Now consider the general case. Since the problem is local, we can assume that X is analytic in U = D 1 × D 2 where D 1 and D 2 are the unit balls in C p and C k−p respectively. We can also assume that the canonical projection π : D 1 ×D 2 → D 1 is proper on X. Hence, π : X → D 1 defines a ramified covering. Let m denote the degree of this covering. For u ∈ G , define a function u ′′ on D 1 by Since dd c u ′′ = π * (dd c (u[X])) ≥ 0, u ′′ is equal almost everywhere to a psh function u ′ . It is easy to check that the family G ′ of these functions u ′ is compact in L 1 loc (D 1 ). Fix a ball D containing π(K) such that D ⊂ D 1 . We need the following Lojasiewicz type inequality, see [17,Proposition 4.11], which implies that z → π −1 (z) ∩ X is Hölder continuous of exponent 1/m with respect to the Hausdorff metric. The lemma is however more precise and is of independent interest. Lemma 4.3. There is a constant A > 0 such that for z ∈ D and x ∈ X with π(x) ∈ D, we have Moreover, if y and z are in D we can write Proof. We prove the first assertion. Let x j , p + 1 ≤ j ≤ k, denote the last k − p coordinates of x. Let z (1) , . . ., z (m) denote the points in π −1 (z) ∩ X and z their last k −p coordinates. Here, the points in π −1 (z) ∩X are repeated according to their multiplicities. For w ∈ D 1 , define w (i) and w (i) j in the same way. We consider the Weierstrass The coefficients of these polynomials are holomorphic with respect to w ∈ D 1 . The analytic set defined by the polynomials P j contains X. In particular, we have P j (x j , π(x)) = 0. We consider the case where z = π(x), otherwise the lemma is clear. We will show the existence of a z (i) with good estimates on z for every p + 1 ≤ j ≤ k. We call this the security ring. For θ ∈ R define Observe that G j,c,θ (w) are Lipschitz with respect to w in a neighbourhood of D uniformly with respect to (j, c, θ). Using the choice of l, we have Hence, if c is large enough, since the G j,c,θ (w) are uniformly Lipschitz, they do not vanish on the ball D of center π(x) and of radius 2 z − π(x) . Note that here we only need to consider the case where z and π(x) are close enough, and we have D ⋐ D 1 . We denote by Σ the boundary of the polydisc H of center (x p+1 , . . . , x k ) ∈ C k−p and of radius lc m z − π(x) : the P j (t, w) have no zero there when w ∈ D. Then, X does not intersect D × Σ. Since z ∈ D and x ∈ X, by continuity, there is a point z (i) satisfying |z . This gives the first assertion of the lemma.
We now prove the second assertion. Fix a point x in π^{-1}(y) ∩ X and use the above construction. In the box D̃ × H, X is a ramified covering over D̃ of some degree s ≤ m. So we can write, in an arbitrary order, π^{-1}(y) ∩ X ∩ (D̃ × H) = {y^{(1)}, ..., y^{(s)}} and π^{-1}(z) ∩ X ∩ (D̃ × H) = {z^{(1)}, ..., z^{(s)}} with the desired estimates on |y^{(i)} − z^{(i)}|, since the diameter of D̃ × H is controlled by ‖y − z‖^{1/m}. This gives a partial correspondence between π^{-1}(y) ∩ X and π^{-1}(z) ∩ X.
Choose another point x′ ∈ π^{-1}(y) ∩ X outside D̃ × H and repeat the construction in order to obtain a box D̃ × H′. We only replace the constant c by 8[m(k − p) + 1]c. This guarantees that either D̃ × H and D̃ × H′ are disjoint or D̃ × H is contained in D̃ × H′, because of the security rings. In the last situation, we remove the box D̃ × H. Then, we repeat the construction for points outside the boxes obtained so far. After less than m steps, we obtain a finite family of boxes which induces a complete correspondence between π^{-1}(y) ∩ X and π^{-1}(z) ∩ X satisfying the lemma. Lemma 4.4. Let u, G and δ be as in Proposition 4.2 and let u′ be associated to u as in (4). Then ν(u′, x) ≤ m^p δ for every x ∈ D. Proof. Consider the functions u′ ∈ G′ and u′′ as above, see (4). Let y be a point in π^{-1}(x) ∩ X and V a neighbourhood of y such that π^{-1}(x) ∩ X ∩ V = {y}. We can choose V so that X ∩ V is a ramified covering over π(V). Let l denote the degree of this covering. Consider the current R := dd^c(u[X]) in V. In a neighbourhood of x, dd^c u′ (which is equal to dd^c u′′) is the sum of the currents π_*(R) for y varying in π^{-1}(x) ∩ X. Since ν(R, y) < δ and l ≤ m, it is enough to prove that ν(π_*(R), x) ≤ l^{p−1} ν(R, y). Assume that y = 0 and x = 0 in order to simplify the notation. If z = (z′, z′′) = (z_1, ..., z_p, z_{p+1}, ..., z_k) denote the coordinates in C^k = C^p × C^{k−p}, then the mass of π_*(R) ∧ (dd^c log‖z′‖)^{p−1} at x = 0 is equal to ν(π_*(R), 0). It follows from the definition of π_* that the mass of R ∧ (dd^c log‖z′‖)^{p−1} at y = 0 is also equal to ν(π_*(R), 0). Define v := max(log‖z′‖, l log‖z′′‖ − M) with M > 0 large enough. Lemma 4.3 applied to X ∩ V implies that v = log‖z′‖ on X ∩ V.
the comparison lemma in [6] implies that the mass of R ∧ (dd c v) p−1 at 0 is smaller than the mass of l p−1 R ∧ (dd c log z ) p−1 at 0 which is equal to l p−1 ν(R, 0). This completes the proof.
End of the proof of Proposition 4.2. Now, we apply the case of a smooth variety to G′. If 0 < ρ < 1 then sup_B u′ ≥ m^p δ log ρ − const, where B is the ball of center π(a) and of radius ρ in C^p. Let B′ be the connected component of X ∩ π^{-1}(B) which contains a. This is a ramified covering over B. Since u is negative, we have sup_{B′} u ≥ sup_B u′, and the desired estimate follows. Consider now the case where X is an analytic subset of pure dimension p of P^k. The following proposition is a direct consequence of the last one. Proposition 4.5. Let G ⊂ L^1(X) be a compact family of modulo T wpsh functions on X. Let δ > 0 be such that ν_X(u, x) < δ for u ∈ G and x ∈ X. Then, there exist constants c > 0 and A > 0 such that sup_{B^X_a(r)} u ≥ cδ log r − A for u ∈ G, a ∈ X and 0 < r < 1.
Moreover, the constant c is independent of G and of δ.
The following result is a consequence of an inequality due to Demailly and Méo [6,26]. It gives a bound for the volume of the set where the Lelong numbers are large.
Lemma 4.6. Let u be a modulo T wpsh function on an analytic set X of pure dimension p in P k . Let β ≥ 0 be a constant and q the dimension of {ν X (u, x) > β}. Consider a finite family of analytic sets Z r , 1 ≤ r ≤ s, of pure dimension q in X. Assume that ν X (u, x) ≥ ν r for x ∈ Z r where (ν r ) is a decreasing sequence such that ν r ≥ 2β. Assume also that deg Z r ≥ d r where the d r 's are positive and satisfy d r−1 ≤ 1 2 d r . Then Recall that ν X (u, x) = ν(R, x). The mass of R is equal to deg(X). Define Z ′ 1 := Z 1 and for r ≥ 2, Z ′ r the union of irreducible components of Z r which are not components of Z 1 ∪ . . . ∪ Z r−1 . So, Z ′ i and Z ′ r have no common component for i = r. Let d ′ r denote the degree of Z ′ r . We have d ′ 1 + · · · + d ′ r ≥ d r for r ≥ 1. We also have ν(R, x) ≥ ν r on Z ′ r . The inequality of Demailly-Méo [6,26] implies that Hence, since β ≤ ν r /2, On the other hand, using the properties of d r , d ′ r , the fact that (ν r ) is decreasing and the Abel's transform, we obtain This proves the lemma.
Asymptotic contraction
In this section, we study the speed of contraction of f n . More precisely, we want to estimate the size of the largest ball contained in the image of a fixed ball by f n . Our main result is the following theorem where the balls in X are defined in Section 4.
Theorem 5.1. Let f be a holomorphic endomorphism of algebraic degree d ≥ 2 of P^k and X an analytic subset of pure dimension p, 1 ≤ p ≤ k, invariant by f, i.e. f(X) = X. There exists a constant c > 0 such that if B is a ball of radius r in X with 0 < r < 1, then for every n ≥ 0, f^n(B) contains a ball in X of radius exp(−c r^{−2p} d^n).
Corollary 5.2. Let f be a holomorphic endomorphism of algebraic degree d ≥ 2 of P^k. There exists a constant c > 0 such that if B is a ball of radius r in P^k with 0 < r < 1, then f^n(B) contains a ball of radius exp(−c r^{−2k} d^n) for every n ≥ 0.
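To get a feel for how fast this guaranteed radius shrinks, the following sketch evaluates the bound exp(−c r^{−2p} d^n) numerically. The parameter values are arbitrary illustrations; the theorem only asserts the existence of some constant c > 0.

```python
import math

def guaranteed_radius(c, r, p, d, n):
    """Radius exp(-c * r**(-2p) * d**n) of a ball guaranteed inside f^n(B)
    by Theorem 5.1, for a ball B of radius r in the invariant set X."""
    return math.exp(-c * r ** (-2 * p) * d ** n)

# Illustrative values only: c = 1, a ball of radius r = 0.5 in a set of
# dimension p = 2, for a map of algebraic degree d = 2.
for n in range(5):
    print(n, guaranteed_radius(c=1.0, r=0.5, p=2, d=2, n=n))
# The doubly exponential decay in n reflects the d^n in the exponent.
```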
Let H be a hypersurface in P^k which does not contain any irreducible component of X and such that the restriction of f to X \ H is of maximal rank at every point. We choose H containing sing(X) ∪ f^{-1}(sing(X)). If δ is the degree of H, there is a negative function u on P^k, psh modulo T, such that dd^c u = δ^{-1}[H] − T. Lemma 5.3. There are constants c_1 > 0 and c_2 > 0 such that for every a ∈ X and 0 < r < 1, if B is the ball of center a and of radius r in X, then f(B) contains the ball in X of center f(a) and of radius c_1 r exp(c_2 u(a)). Proof. The constants c_i that we use here are independent of a and r. We only have to consider the case where u(a) ≠ −∞. Observe that when c_1 is small and c_2 is large enough, the ball of center f(a) and of radius c_1 r exp(c_2 u(a)) does not intersect sing(X). Let π : X̃ → X ⊂ P^k be a desingularization of X and A := ‖π‖_{C^1}. If π(ã) = a, and if B̃ is the ball of center ã and of radius r̃ := A^{-1} r, then π(B̃) is contained in the ball B. Define h := f∘π, ũ := u∘π and T̃ := π^*(T). Since T has continuous local potentials, so does π^*(T).
The current π * [H] is supported in π −1 (H) and satisfies dd c u = δ −1 π * [H] − π * (T ). Since u = −∞ exactly on π −1 (H) and since π * (T ) has continuous local potentials, the support of π * [H] is exactly π −1 (H). So, π * [H] is a combination with strictly positive coefficients of the currents of integration on irreducible components of π −1 (H). Observe that h is of maximal rank outside π −1 (H). It is enough to prove that h( B) contains the ball of center h( a) and of radius c 1 r exp(c 2 u( a)) in X.
We can assume that r is small and work in the local setting. We use holomorphic coordinates x = (x 1 , . . . , x p ) of X and y = (y 1 , . . . , y k ) of P k in small neighbourhoods W and U of a and a respectively. Write h = (h 1 , . . . , h k ) and consider a holomorphic function ϕ on W such that ϕ −1 (0) = π −1 (H) ∩ W . Then, δ −1 π * [H] ≥ ǫdd c log |ϕ| with ǫ > 0 small enough. We have dd c ( u−ǫ log |ϕ|) ≥ − T . It follows that u − ǫ log |ϕ| is a difference of a psh function and a potential of T . Since T has local continuous potentials, u − ǫ log |ϕ| is bounded from above. Up to multiplying ϕ by a constant, we can assume that ǫ log |ϕ| ≥ u.
If J ⊂ {1, ..., k} is a multi-index of length p, denote by M_J the matrix (∂h_j/∂x_i) with 1 ≤ i ≤ p and j ∈ J. Since h is of maximal rank outside π^{-1}(H), the zero set of Σ_J |det M_J|^2 is contained in {ϕ = 0}. Lojasiewicz's inequality [31] implies that Σ_J |det M_J|^2 ≥ c_3 |ϕ|^{c_4} for some constants c_3 > 0 and c_4 > 0. Up to a permutation of the coordinates y, we can assume that |det M(ã)| is, up to a fixed constant, the largest of the |det M_J(ã)|. Proof of Theorem 5.1. The L^1(X)-norm of d^{-n}(u + u∘f + ··· + u∘f^{n−1}) is bounded by a constant c′ > 0 independent of n. If A > 0 is a constant large enough, the set of points x ∈ X satisfying u(x) + u∘f(x) + ··· + u∘f^{n−1}(x) ≤ −A r^{−2p} d^n has Lebesgue measure ≤ c′A^{-1} r^{2p}. By a theorem of Lelong [23,6], the volume of a ball of radius r/2 in X is ≥ c′′ r^{2p}, c′′ > 0. Therefore, since A is large, there is a point b ∈ X, depending on n, such that |b − a| ≤ r/2 and u(b) + u∘f(b) + ··· + u∘f^{n−1}(b) ≥ −A r^{−2p} d^n. We obtain the result using (5) and the estimate (1/2)c_1^n r ≥ exp(−c_3 r^{−2p} d^n) for 0 < r < 1, where c_3 > 0 is a constant.
In the following result, we use the Lebesgue measure vol X on X induced by the Fubini-Study form restricted to X.
Theorem 5.5. Let f and X be as in Theorem 5.1. Let Z be a Borel set in X and n ≥ 0. Then there is a Borel set Z_n ⊂ Z with vol_X(Z_n) ≥ (1/2)vol_X(Z) such that the restriction f^n|_X of f^n to X defines a locally bi-Lipschitz map from Z_n to f^n(Z_n). Moreover, the differential of the inverse map f^{-n}|_X satisfies ‖Df^{-n}|_X‖ ≤ exp(c·vol_X(Z)^{-1} d^n) on f^n(Z_n), with a constant c > 0 independent of n and Z. In particular, we have vol_X(f^n(Z)) ≥ exp(−c′·vol_X(Z)^{-1} d^n) for some constant c′ > 0 independent of n and Z.
Proof. As in (5), there is a subset Z_n of Z with vol_X(Z_n) ≥ (1/2)vol_X(Z) on which f^n is locally bi-Lipschitz with the stated bound on ‖Df^{-n}|_X‖. Since the fibers of f^n contain at most d^{kn} points, the estimate on ‖Df^{-n}|_X‖ implies the lower bound on vol_X(f^n(Z_n)). The last assertion in the theorem follows.
Remark 5.6. It is not difficult to extend Theorems 5.1 and 5.5 to the case of meromorphic maps or correspondences on compact Kähler manifolds. We can use the continuity of f * on the space DSH in order to estimate the L 1 -norm of u • f n for u ∈ DSH, see [10]. The volume estimate in Theorem 5.5 for meromorphic maps on smooth manifolds was obtained in [21], see also [16,13,20] for earlier versions.
Let $\mathcal G$ be a compact family of modulo $T$ wpsh functions on $X$. Let $\mathcal H_n$ denote the family of modulo $T$ wpsh functions which are equal almost everywhere to $d^{-n}u\circ f^n$, $u \in \mathcal G$. Define $\nu_n := \sup\{\nu_X(u, a) : u \in \mathcal H_n,\ a \in X\}$.
We have the following result.
Proposition 5.7. Assume that $\inf \nu_n = 0$. Then, $d^{-n}u_n\circ f^n \to 0$ in $L^1(X)$ for all $u_n \in \mathcal G$. In particular, the hypothesis is satisfied when there is an increasing sequence of integers $(n_i)$ with $\nu_{n_i} \to 0$.

Proof. Consider a sequence $(d^{-n_i}u_{n_i}\circ f^{n_i})$ converging in $L^1(X)$ to a modulo $T$ wpsh function $u$. Corollary 3.9 implies that $u \leq 0$. We want to prove that $u = 0$.
If not, since $u$ is upper semi-continuous, there is a constant $\alpha > 0$ such that $u \leq -2\alpha$ on some ball $B$ of radius $0 < r < 1$ in $X$. By Lemmas 3.4 and 3.8, for $i$ large enough, we have $d^{-n_i}u_{n_i}\circ f^{n_i} \leq -\alpha$ almost everywhere on $B$.
Fix $\delta > 0$ small enough and $m$ such that $\nu_m < \delta$. Consider only the $n_i$ larger than $m$. This is a contradiction if $\delta$ is chosen small enough and if $n_i$ is large enough.
Assume now that $d^{-n_i}u_{n_i}\circ f^{n_i}$ converge to 0 in $L^1(X)$ for all $u_{n_i} \in \mathcal G$. Then, for every $\epsilon > 0$, we have $\nu_X(u, a) < \epsilon$ for $u \in \mathcal H_{n_i}$, $a \in X$ and for $i$ large enough. Therefore, $\inf \nu_n = 0$. Here, we use that if positive closed currents $R_n$ converge to $R$ and $a_n \to a$ then $\limsup \nu(R_n, a_n) \leq \nu(R, a)$.
Corollary 5.8. Let $\mathcal F$ be a family of positive closed $(1,1)$-currents of mass 1 on $\mathbb P^k$. Assume that there is an increasing sequence of integers $(n_i)$ such that $d^{-n_i}(f^{n_i})^*(S_{n_i}) \to T$ for all $S_{n_i} \in \mathcal F$. Then, $d^{-n}(f^n)^*(S_n) \to T$ for all $S_n \in \mathcal F$.

Proof. Observe that the hypothesis implies that $d^{-n_i}(f^{n_i})^*(S_{n_i}) \to T$ for all $S_{n_i} \in \overline{\mathcal F}$. So, we can replace $\mathcal F$ by $\overline{\mathcal F}$ and assume that $\mathcal F$ is compact. To each current $S \in \mathcal F$ we associate a modulo $T$ psh function $u$ on $\mathbb P^k$ such that $dd^c u = S - T$. Subtracting from $u$ some constant allows us to have $\max_{\mathbb P^k} u = 0$. Proposition 3.3 and Lemma 3.4 imply that the family $\mathcal G$ of these functions $u$ is compact. The hypothesis and Corollary 3.9 imply that $d^{-n_i}u_{n_i}\circ f^{n_i} \to 0$ for $u_{n_i} \in \mathcal G$. Proposition 5.7 gives the result.

Corollary 5.9. Let $\mathcal F$ be a compact family of positive closed $(1,1)$-currents of mass 1 on $\mathbb P^k$. Assume that for any $S \in \mathcal F$, the Lelong number of $S$ vanishes at every point out of $\mathrm{supp}(\mu)$. Then, $d^{-n}(f^n)^*(S_n) \to T$ for any sequence $(S_n) \subset \mathcal F$.
Lemma 5.10. Let $(u_{n_i})$ be modulo $T$ wpsh functions on $X$ such that $(d^{-n_i}u_{n_i}\circ f^{n_i})$ converges in $L^1(X)$ to a modulo $T$ wpsh function $v$. Assume also that for every $\delta > 0$, there is a subsequence $(u_{m_i}) \subset (u_{n_i})$ converging to a modulo $T$ wpsh function $w$ with $\nu_X(w, a) < \delta$ at every point $a \in X$. Then, $v = 0$.
Proof. Corollary 3.9 implies that $v \leq 0$. Assume that $v \neq 0$. Then, since $v$ is upper semi-continuous, there is a constant $\alpha > 0$ such that $v < -2\alpha$ on a ball of radius $0 < r < 1$ in $X$. As in Proposition 5.7, for $i$ large enough we have $u_{n_i} < -d^{n_i}\alpha$ on a ball $B_{n_i}$ of radius $\exp(-cr^{-2p}d^{n_i})$ in $X$ with $c > 1$. Fix $\delta > 0$ small enough, and $(u_{m_i})$ and $w$ as above. The property of $w$ implies that if $s$ is an integer large enough, we have $\nu_X(u_{m_i}, a) < \delta$ for every $a \in X$ and for $i \geq s$. By Proposition 4.5 applied to the compact family $\{u_{m_i},\ i \geq s\} \cup \{w\}$, there is a constant $c' > 0$ independent of $\delta$, $r$ and a constant $A > 0$ such that $u_{m_i} \geq -A - c'\delta\, cr^{-2p}d^{m_i}$ on $B_{m_i}$. This is a contradiction for $m_i$ large enough, since $\delta$ is chosen small.
Exceptional sets
Let $X$ be an analytic subset of pure dimension $p$ in $\mathbb P^k$ invariant by $f$, i.e. $f(X) = X$. Let $g : X \to X$ denote the restriction of $f$ to $X$. We will follow the idea of [9] in order to define and study the exceptional analytic subset $E_X$ of $X$ which is totally invariant by $g$, see also [7, 8]. The following result can be deduced from Section 3.4 in [9].

Theorem 6.1. There is a (possibly empty) proper analytic subset $E_X$ of $X$ which is totally invariant by $g$ and is maximal in the following sense. If $E$ is an analytic subset of dimension $< p$ of $X$ such that $g^{-s}(E) \subset E$ for some $s \geq 1$, then $E \subset E_X$. In particular, there is a maximal proper analytic subset $E_{\mathbb P^k}$ of $\mathbb P^k$ which is totally invariant by $f$.
We will need some precise properties of $E_X$. So, for the reader's convenience, we recall here the construction of $E_X$ and the proof of the previous theorem, since the emphasis in [9] is on polynomial-like maps. Observe that $g$ permutes the irreducible components of $X$. Let $m \geq 1$ be an integer such that $g^m$ fixes the components of $X$.
Lemma 6.2. The topological degree of $g^m$ is equal to $d^{mp}$; that is, $g^m : X \to X$ defines a ramified covering of degree $d^{mp}$. In particular, for every $x \in X$, $g^{-m}(x)$ contains at most $d^{mp}$ points, and there is a hypersurface $Y$ of $X$ containing $\mathrm{sing}(X) \cup g^m(\mathrm{sing}(X))$ such that for $x \in X \setminus Y$, $g^{-m}(x)$ contains exactly $d^{mp}$ points.
Proof. We can work with each component. So, we can assume that $X$ is irreducible. It follows that $g^m$ defines a ramified covering. We want to prove that the degree $\delta$ of this covering is equal to $d^{mp}$. Consider the positive measure $(f^m)^*(\omega^p)\wedge[X]$. Its mass is equal to $d^{mp}\deg(X)$ since $(f^m)^*(\omega^p)$ is cohomologous to $d^{mp}\omega^p$. The operator $(f^m)_*$ preserves the mass of positive measures. We also have $(f^m)_*[X] = \delta[X]$. Hence, $\delta\deg(X) = d^{mp}\deg(X)$ and therefore $\delta = d^{mp}$. So, we can take for $Y$ a hypersurface containing the ramification values of $f^m$ and $\mathrm{sing}(X) \cup g^m(\mathrm{sing}(X))$.
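The displayed identity lost in extraction presumably combined the projection formula with the invariance of mass under $(f^m)_*$; a reconstruction of the chain:

$$d^{mp}\deg(X) \;=\; \big\|(f^m)^*(\omega^p)\wedge[X]\big\| \;=\; \big\|\omega^p\wedge(f^m)_*[X]\big\| \;=\; \delta\,\big\|\omega^p\wedge[X]\big\| \;=\; \delta\deg(X).$$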
Let $Y$ be as above. Observe that if $g^m(x) \notin Y$ then $g^m$ defines a biholomorphic map between a neighbourhood of $x$ and a neighbourhood of $g^m(x)$ in $X$. Since $(g^{mn})_*[Y]$ is a positive closed $(k-p+1, k-p+1)$-current of mass $d^{mn(p-1)}\deg(Y)$, we can define the following ramification current $R := \sum_{n \geq 0} R_n$ with $R_n := d^{-mpn}(g^{mn})_*[Y]$. By a theorem of Siu [30, 6], for $c > 0$, the level set $E_c := \{\nu(R, x) \geq c\}$ of the Lelong number is an analytic set of dimension $\leq p - 1$ contained in $X$. Observe that $E_1$ contains $Y$. We will see that $R$ is the obstruction for the construction of "regular" orbits.
For any point $x \in X$ let $\lambda'_n(x)$ denote the number of distinct orbits $x_{-n}, \ldots, x_{-1}, x_0 = x$ with $g^m(x_{-i-1}) = x_{-i}$ which avoid $Y$. These are the "good" orbits. Define $\lambda_n := d^{-mpn}\lambda'_n$. The function $\lambda_n$ is lower semi-continuous with respect to the Zariski topology on $X$. Moreover, by Lemma 6.2, we have $0 \leq \lambda_n \leq 1$ and $\lambda_n = 1$ out of the analytic set $\cup_{i=0}^{n-1} g^{mi}(Y)$. The sequence $(\lambda_n)$ decreases to a function $\lambda$, which represents the asymptotic proportion of good orbits in $X \setminus Y$.
Lemma 6.3. There is a constant $\gamma > 0$ such that $\lambda \geq \gamma$ on $X \setminus E_1$.

Proof. We deduce from Siu's theorem the existence of a constant $0 < \gamma < 1$ satisfying $\{\nu(R, x) > 1 - \gamma\} = E_1$. Consider a point $x \in X \setminus E_1$. We have $x \notin Y$. Define $\nu_n := \nu(R_n, x)$. We have $\sum \nu_n = \nu(R, x) \leq 1 - \gamma$. Since $E_1$ contains $Y$, $\nu_0 = 0$ and $F_1 := g^{-m}(x)$ contains exactly $d^{mp}$ points. The definition of $\nu_1$ implies that $g^{-m}(x)$ contains at most $\nu_1 d^{mp}$ points in $Y$. Then $F_2 := g^{-m}(F_1 \setminus Y)$ contains at least $(1 - \nu_1)d^{2mp}$ points. The definition of $\nu_2$ implies that $F_2$ contains at most $\nu_2 d^{2mp}$ points in $Y$. Hence, $F_3 := g^{-m}(F_2 \setminus Y)$ contains at least $(1 - \nu_1 - \nu_2)d^{3mp}$ points. In the same way, we define $F_4, \ldots, F_n$ with $\#F_n \geq (1 - \sum_{i < n}\nu_i)d^{mpn}$. Hence, for every $n$ we get the following estimate: $\lambda_n(x) \geq 1 - \sum_{i \geq 1}\nu_i \geq 1 - \nu(R, x) \geq \gamma$. This proves the lemma.
End of the proof of Theorem 6.1. Let $E^n_X$ denote the set of $x \in X$ such that $g^{-ml}(x) \subset E_1$ for $0 \leq l \leq n$ and define $E_X := \cap_{n \geq 0} E^n_X$. Then, $(E^n_X)$ is a decreasing sequence of analytic subsets of $E_1$. It must be stationary. So, there is $n_0 \geq 0$ such that $E^n_X = E_X$ for $n \geq n_0$. By definition, $E_X$ is the set of $x \in X$ such that $g^{-mn}(x) \subset E_1$ for every $n \geq 0$. Hence, $g^{-m}(E_X) \subset E_X$. It follows that the sequence of analytic sets $g^{-mn}(E_X)$ is decreasing and there is $n \geq 0$ such that $g^{-m(n+1)}(E_X) = g^{-mn}(E_X)$. Since $g^{mn}$ is surjective, we deduce that $g^{-m}(E_X) = E_X$ and hence $E_X = g^m(E_X)$.
Assume as in the theorem that $E$ is analytic with $g^{-s}(E) \subset E$ for some $s \geq 1$. Define $E' := E \cap g^{-1}(E) \cap \cdots \cap g^{-s+1}(E)$; then $g^{-1}(E') \subset E'$, so $g^{-n-1}(E') \subset g^{-n}(E')$ for every $n \geq 0$. Hence, $g^{-n-1}(E') = g^{-n}(E')$ for $n$ large enough. This and the surjectivity of $g$ imply that $g^{-1}(E') = g(E') = E'$. By Lemma 6.2, the topological degree of $(g^{m'})_{|E'}$ is at most $d^{m'(p-1)}$ for some integer $m' \geq 1$. This, the identity $g^{-1}(E') = g(E') = E'$ together with Lemma 6.3 imply that $E' \subset E_1$. Hence, $E' \subset E_X$.

Remark 6.4. The maximality of $E_X$ in Theorem 6.1 implies that it does not depend on the choice of $m$ and of the analytic set $Y$ satisfying Lemma 6.2. Moreover, $E_X$ is also the exceptional set associated to $g^n$ for every $n \geq 1$. An analytic set which is totally invariant by $g^n$ is not necessarily totally invariant by $g$, but it is a union of components of such sets. We deduce from our construction that $E_{\mathbb P^k}$ depends algebraically on $f$.

Corollary 6.5. There are only finitely many analytic subsets of $X$ which are totally invariant by $g$. In particular, there is only a finite number of analytic subsets of $\mathbb P^k$ which are totally invariant by $f$.
Proof. We only have to consider totally invariant analytic sets $E$ of pure dimension $q$. The proof is by induction on the dimension $p$ of $X$. Assume that the corollary is true for $X$ of dimension $\leq p - 1$ and consider the case of dimension $p$. If $q = p$ then $E$ is a union of components of $X$. There is only a finite number of such analytic sets. If $q < p$ then, by Theorem 6.1, $E$ is contained in $E_X$. Applying the induction hypothesis to the restriction of $f$ to $E_X$ gives the result.
We now give another characterization of $E_X$. Recall that $\mu_X := T^p \wedge [X]$. This is a positive measure of mass $s := \deg X$. The invariance of $T$ implies that $\mu_X$ is totally invariant by $g^m$, that is, $(g^m)^*(\mu_X) = d^{pm}\mu_X$. Since $g^m$ fixes the components of $X$, we can apply the following result to each component; the second assertion was proved by the authors in [9].

Theorem 6.6. Assume that $X$ is irreducible. Let $\delta_a$ denote the Dirac mass at a point $a \in X$. Then $d^{-pmn}(g^{mn})^*(\delta_a)$ converge to $s^{-1}\mu_X$ if and only if $a$ is out of $E_X$. In particular, if $a$ is a point in $\mathbb P^k$ then $d^{-kn}(f^n)^*(\delta_a)$ converge to $\mu$ if and only if $a$ is out of $E_{\mathbb P^k}$.
Since $T$ has continuous local potentials, $\mu_X$ has no mass on proper analytic subsets of $X$. It follows that if $a \in E_X$, any limit value of $d^{-pmn}(g^{mn})^*(\delta_a)$ has support in $E_X$ and is singular with respect to $\mu_X$. Consider a point $a$ in $X \setminus E_X$. We only have to check the convergence to $s^{-1}\mu_X$. Fornæss and the second author proved this convergence for $X = \mathbb P^k$ and for $a$ outside a pluripolar set [16]. Briend and Duval extended this result to $a$ outside the orbit of the critical set of $f$ [1]. They also proposed a geometrical approach in order to prove this property for $a$ outside an analytic set, but there is a problem with the counting of multiplicity in their lemma in [1, p. 149].
The Briend–Duval result can be extended to our situation: for $a$ outside the orbit of $Y$ we have $d^{-pmn}(g^{mn})^*(\delta_a) \to s^{-1}\mu_X$. We recall the following proposition, see [1] and also [9, 7, 8] for more general cases, in particular, for non-projective manifolds.

Proposition 6.7. For every $\epsilon > 0$, there is a proper analytic subset $Y_\epsilon$ of $X$ containing $Y$ such that if $b$ is out of $Y_\epsilon$ then every limit value $\nu$ of $d^{-pmn}(g^{mn})^*(\delta_b)$ satisfies $\|\nu - s^{-1}\mu_X\| \leq \epsilon$.
Observe that if $n \geq r \geq 0$ then $(g^{mn})^*(\delta_a) = \sum_{b \in g^{-mr}(a)} (g^{m(n-r)})^*(\delta_b)$, where the points in $g^{-mr}(a)$ are counted with multiplicities. So, if a point $a$ does not satisfy the conclusion of Proposition 6.7 then it admits many preimages in $Y_\epsilon$. We quantify now this property.
Let $N_n(a)$ denote the number of orbits of $g^m$, $O = \{a_{-n}, \ldots, a_{-1}, a_0\}$ with $g^m(a_{-i-1}) = a_{-i}$ and $a_0 = a$, such that $a_{-i} \in Y_\epsilon$ for every $i$. Here, the orbits are counted with multiplicities. So, $N_n(a)$ is the number of negative orbits of order $n$ of $a$ which stay in $Y_\epsilon$. Observe that the sequence of functions $\tau_n := d^{-pmn}N_n$ decreases to some function $\tau$. Since the $\tau_n$ are upper semi-continuous with respect to the Zariski topology and $0 \leq \tau_n \leq 1$, the function $\tau$ satisfies the same properties.
Observe that $\tau(a)$ is the probability that an infinite negative orbit of $a$ stays in $Y_\epsilon$. The following proposition also gives a characterization of $E_X$.
Proposition 6.8. The function $\tau$ is the characteristic function of $E_X$, that is, $\tau = 1$ on $E_X$ and $\tau = 0$ on $X \setminus E_X$.
Proof. Since $E_X \subset Y_\epsilon$ and $E_X$ is totally invariant by $g$, we have $E_X \subset \{\tau = 1\}$. Let $\theta \geq 0$ denote the maximal value of $\tau$ on $X \setminus E_X$. This value exists since $\tau$ is upper semi-continuous with respect to the Zariski topology (indeed, it is enough to consider the algebraic subset $\{\tau \geq \theta_0\}$ of $X$, which decreases when $\theta_0$ increases). We have to check that $\theta = 0$. Assume in order to obtain a contradiction that $\theta > 0$. Since $\tau \leq 1$, we always have $\theta \leq 1$. Consider the non-empty analytic set $E := \{\tau = \theta\} \setminus E_X$. We deduce from the definition of $\tau$ and $\theta$ that $g^{-m}(a') \subset E$ for every point $a' \in E$. Therefore, the analytic subset $E$ of $Y_\epsilon$ satisfies $g^{-m}(E) \subset E$. This contradicts the maximality of $E_X$.
End of the proof of Theorem 6.6. Let $a$ be a point outside $E_X$. Fix $\epsilon > 0$ and a constant $\alpha > 0$ small enough. If $\nu$ is a limit value of $d^{-pmn}(g^{mn})^*(\delta_a)$, it is enough to show that $\|\nu - s^{-1}\mu_X\| \leq 2\alpha + \epsilon$. Proposition 6.8 implies that $\tau(a) = 0$. So for $r$ large enough we have $\tau_r(a) \leq \alpha$. Consider all the negative orbits $O_j$ of order $r_j \leq r$ of $a$ which exit $Y_\epsilon$ at their last point $a^{(j)}_{-r_j}$. Each orbit is repeated according to its multiplicity. Let $S_r$ denote the family of points $b \in g^{-mr}(a)$ such that $g^{mi}(b) \in Y_\epsilon$ for $0 \leq i \leq r$. Then $g^{-mr}(a) \setminus S_r$ consists of the preimages of the points $a^{(j)}_{-r_j}$. So, by definition of $\tau_r$, we have $d^{-pmr}\#S_r = \tau_r(a)$. We have for $n \geq r$: $d^{-pmn}(g^{mn})^*(\delta_a) = d^{-pmn}\sum_{b \in S_r}(g^{m(n-r)})^*(\delta_b) + d^{-pmn}\sum_j (g^{m(n-r_j)})^*(\delta_{a^{(j)}_{-r_j}})$. Since $d^{-pmn}(g^{mn})^*$ preserves the mass of any measure, the first term in the last sum is of mass $d^{-pmr}\#S_r = \tau_r(a) \leq \alpha$ and the second term is of mass $\geq 1 - \alpha$. We apply Proposition 6.7 to the Dirac masses at the $a^{(j)}_{-r_j}$. We deduce that if $\nu$ is a limit value of $d^{-pmn}(g^{mn})^*(\delta_a)$ then $\|\nu - s^{-1}\mu_X\| \leq 2\alpha + \epsilon$. This completes the proof of the theorem.

Corollary 6.9. There are only finitely many extremal probability measures on $X$ which are totally invariant by $g$.

Proof. Replacing $f$ by an iterate allows us to assume that $g^m$ fixes all the components of every analytic set which is totally invariant by $g^m$. So, all these components are totally invariant. Let $\nu$ be an extremal probability measure totally invariant by $g^m$. Let $X'$ be the smallest analytic set totally invariant by $g^m$ such that $\nu(X') = 1$. Since $\nu$ is extremal, $X'$ is irreducible and $\nu(E_{X'}) = 0$. It follows from Theorem 6.6 and the invariance of $\nu$ that $\nu$ is proportional to $\mu_{X'}$. By Corollary 6.5, the family of such measures is finite.
The following lemma will be useful in the proof of our main results; here $n_0$ is an index such that $E^n_X = E_X$ for $n \geq n_0$.

Lemma 6.10. There is a constant $\theta > 0$ such that if $Z$ is an analytic subset of pure dimension $q \leq p - 1$ of $X$ not contained in $E_X$, then for every $n \geq 0$, $g^{-mn}(Z)$ contains an analytic set $Z_{-n}$ of pure dimension $q$ and of degree $\geq \theta d^{mn(p-q)}$. Moreover, if $n \geq n_0$ and if $x$ is a generic point in $Z_{-n}$, then $x \in \mathrm{reg}(X)$, $g^{m(n-n_0)}(x) \in \mathrm{reg}(X)$, and $g^{m(n-n_0)}$ defines a biholomorphism between a neighbourhood of $x$ and a neighbourhood of $g^{m(n-n_0)}(x)$ in $X$.
Proof. Let $P$ be a generic projective plane in $\mathbb P^k$ of dimension $k - q$. Consider a point $a$ in $Z \cap P \setminus E_X$. Since $E_X = E^{n_0}_X$, we have $g^{-ml}(a) \not\subset E_1$ for some $0 \leq l \leq n_0$. Then, by Lemma 6.3, $g^{-mn}(a)$ contains at least $\gamma d^{mp(n-n_0)}$ distinct points $x$ satisfying the last property in the lemma. Let $Z_{-n}$ denote the union of the irreducible components of $g^{-mn}(Z)$ which contain at least one such point $x$. Then, $Z_{-n}$ satisfies the last property in the lemma. We have $\#(Z_{-n} \cap f^{-mn}(P)) \geq \gamma d^{mp(n-n_0)}$. Since $\deg f^{-mn}(P) = d^{mnq}$, we obtain that $\deg Z_{-n} \geq \theta d^{m(p-q)n}$ for $\theta := \gamma d^{-mpn_0}$.
Convergence towards the Green current
In this section, we will prove the main results. Define the exceptional set $\mathcal E$ as the union of the proper analytic subsets $E$ of $\mathbb P^k$ which are totally invariant by $f$ and are minimal in the following sense: $E$ does not contain any non-empty proper analytic subset which is totally invariant by $f$. Theorem 6.1 and Corollary 6.5 imply that $\mathcal E$ is a totally invariant analytic set and that it does not change if we replace $f$ by an iterate of $f$; see also Remark 6.4. We have the following result which implies Theorems 1.1 and 1.2.
Theorem 7.1. Let $f$, $T$, $\mathcal E$ be as above. Let $\mathcal G$ be a family of modulo $T$ psh functions on $\mathbb P^k$ which is bounded in $L^1(\mathbb P^k)$. Assume that the restriction of $\mathcal G$ to each component of $\mathcal E$ is a bounded family of modulo $T$ wpsh functions. Then, $d^{-n}u\circ f^n$ converge to 0 in $L^1(\mathbb P^k)$ uniformly in $u \in \mathcal G$.
Let $m \geq 1$ be an integer such that $f^m$ fixes all the irreducible components of all the totally invariant analytic sets. By Proposition 5.7, we can replace $f$ by $f^m$ and assume that $f$ fixes all these components. Let $X_p$ denote the union of the totally invariant sets of pure dimension $p$. We will prove by induction on $p$ that $d^{-n}u\circ f^n$ converge to 0 in $L^1(X_p)$ uniformly in $u \in \mathcal G$. We obtain the theorem for $p = k$ and $X_k = \mathbb P^k$. Assume this convergence on $X_0, \ldots, X_{p-1}$ (the case $p = 0$ is clear). Define $X := X_p$ and $E_X$ as in Section 6. From the induction hypothesis, on each component $E$ of $E_X$, $d^{-n}u\circ f^n$ converge in $L^1$ to 0 uniformly in $u \in \mathcal G$. We deduce that $\mathcal G$ is bounded in $L^1(E)$. So, if $Z$ is a component of $X$ which is not minimal in the above sense, by Lemma 3.5, $\mathcal G$ is bounded in $L^1(Z)$. If $Z$ is a minimal component of $X$, then by the hypothesis of the theorem, $\mathcal G$ is bounded in $L^1(Z)$. So, we can apply Corollary 3.9 to $\mathcal G$. Let $\mathcal G'$ denote the set of all the modulo $T$ wpsh functions on $X$ which are limit values in $L^1(X)$ of a sequence $(d^{-n}u_n\circ f^n)$ with $u_n \in \mathcal G$. For every $u \in \mathcal G'$, Corollary 3.9 implies that $u \leq 0$. Since $E_X \subset X$, by the induction hypothesis we have convergence on $E_X$. The last assertion of Proposition 3.3 implies that $u \geq 0$ on $E_X$. Hence, $u = 0$ on $E_X$ for every $u \in \mathcal G'$. It is clear that $\mathcal G'$ is compact. Fix a function $v_0 \in \mathcal G'$. We have to show that $v_0 = 0$.
Lemma 7.2. There are functions $v_{-n} \in \mathcal G'$, $n \geq 0$, such that $v_0 = d^{-n}v_{-n}\circ f^n$ almost everywhere for every $n \geq 0$.

Proof. Assume that $v_0$ is the limit of a sequence $(d^{-n_i}u_{n_i}\circ f^{n_i})$. Then, for $n \geq 0$ the sequence $(d^{-n_i-n}u_{n_i}\circ f^{n_i+n})$ converges to $d^{-n}v_0\circ f^n$. Lemma 3.8 implies that a limit value $v_{-1}$ of $(d^{-n_i+1}u_{n_i}\circ f^{n_i-1})$ satisfies $v_0 = d^{-1}v_{-1}\circ f$ almost everywhere. We construct the functions $v_{-n}$ in the same way by induction. If $v_{-n}$ is the limit of $(d^{-m_i}u_{m_i}\circ f^{m_i})$, we obtain $v_{-n-1}$ as a limit value of $(d^{-m_i+1}u_{m_i}\circ f^{m_i-1})$.

Proof of Theorem 7.1. Let $\mathcal G''$ denote the set of all the modulo $T$ wpsh functions $w$ on $X$ which are limit values of the sequence $(v_{-n})_{n \geq 0}$. Since $\mathcal G'$ is compact, we have $\mathcal G'' \subset \mathcal G'$. We have to show that $v_0 = 0$. Assume this is not the case. Since $v_0 = d^{-n}v_{-n}\circ f^n$ almost everywhere, by Lemma 5.10, there is a constant $\alpha_0 > 0$ such that $\max_{a \in X}\nu_X(w, a) \geq \alpha_0$ for every $w \in \mathcal G''$. Fix a function $w_0 \in \mathcal G''$.

Lemma 7.3. There are functions $w_n \in \mathcal G''$ such that $w_{n+1} = d^{-1}w_n\circ f$ almost everywhere for $n \in \mathbb Z$.
Proof. Assume that $w_0$ is the limit of $(v_{-n_i})$. Let $w_1$ and $w_{-1}$ be modulo $T$ wpsh functions which are limit values of $(v_{-n_i+1})$ and $(v_{-n_i-1})$ respectively. These functions belong to $\mathcal G''$. Then, $w_0 = d^{-1}w_{-1}\circ f$ and $w_1 = d^{-1}w_0\circ f$ almost everywhere. We obtain the lemma by induction. If $w_n$ is the limit value of $(v_{-m_i})$ then we obtain $w_{n-1}$ or $w_{n+1}$ as limit values of $(v_{-m_i-1})$ or $(v_{-m_i+1})$ respectively.
Let $\pi : \mathbb P^1 \times \cdots \times \mathbb P^1 \to \mathbb P^k$ ($k$ factors) denote the map given by the elementary symmetric functions of coordinates. If $\tilde f$ is the endomorphism of $\mathbb P^1 \times \cdots \times \mathbb P^1$ defined by $\tilde f(x_1, \ldots, x_k) := (h(x_1), \ldots, h(x_k))$, then there is a holomorphic map $f : \mathbb P^k \to \mathbb P^k$ of algebraic degree $d$ such that $f\circ\pi = \pi\circ\tilde f$. We also have $f^m\circ\pi = \pi\circ\tilde f^m$. Consider a point $x$ in $\mathbb P^k$ and a point $\tilde x$ in $\pi^{-1}(x)$. We have $\pi^{-1}(f^{-m}(x)) = \tilde f^{-m}(\pi^{-1}(x))$. Hence, $\#\pi^{-1}(f^{-m}(x)) \geq \#\tilde f^{-m}(\tilde x) \geq 2^{-k}d^{mk}$. Since $\pi$ has degree $k!$, we obtain $\#f^{-m}(x) \geq \frac{1}{2^k k!}d^{mk} > d^{m(k-1)}$. This completes the proof.

Remark 7.4. Let $\mathcal C$ denote the compact convex set of totally invariant $(1,1)$-currents of mass 1 on $\mathbb P^k$. Define an operator $\vee$ on $\mathcal C$. If $S_1$, $S_2$ are elements of $\mathcal C$, write $S_i = T + dd^c u_i$ with $u_i$ psh modulo $T$ on $\mathbb P^k$ such that $u_i \leq 0$ and $u_i = 0$ on $\mathrm{supp}(\mu)$, see Corollary 3.9. Define $S_1 \vee S_2 := T + dd^c\max(u_1, u_2)$. It is easy to check that $S_1 \vee S_2$ is an element of $\mathcal C$. An element $S$ is said to be minimal if $S = S_1 \vee S_2$ implies $S_1 = S_2 = S$. It is clear that $T$ is not minimal if $\mathcal C$ contains other currents. A current of integration on a totally invariant hypersurface is a minimal element.
The currents $T_i$ of integration on $\{z_i = 0\}$ belong to $\mathcal C$ and $T_j = T + dd^c u_j$ with $u_j := \log|z_j| - \max_i \log|z_i|$. These currents are minimal. If $\alpha_0, \ldots, \alpha_k$ are positive real numbers such that $\alpha := 1 - \sum\alpha_i$ is positive, then $S := \alpha T + \sum\alpha_i T_i$ is an element of $\mathcal C$. We have $S = T + dd^c u$ with $u := \sum\alpha_i u_i$. The current $S$ is minimal if and only if $\alpha = 0$. One can obtain other elements of $\mathcal C$ using the operator $\vee$. One can also prove that $\mathcal C$ admits an infinite number of elements which are extremal in the cone of positive closed $(1,1)$-currents. This implies that $\mathcal C$ has infinite dimension. The elements of the set $\mathcal E$ in this case are just the points $[0 : \cdots : 0 : 1 : 0 : \cdots : 0]$.
Polynomial automorphisms
The approach that we used above can be extended to other situations. From now on we consider a polynomial automorphism $f : \mathbb C^k \to \mathbb C^k$ of degree $\geq 2$ and its extension as a birational map on $\mathbb P^k$, that we also denote by $f$. Let $I^+$ and $I^-$ denote the indeterminacy sets of $f$ and $f^{-1}$ respectively. These are the analytic sets where $f$ and $f^{-1}$ are not defined; they are contained in the hyperplane at infinity $L := \mathbb P^k \setminus \mathbb C^k$. Assume that $f$ is regular, i.e. $I^+ \cap I^- = \varnothing$. We refer the reader to [29] for the basic properties of regular automorphisms. There is an integer $1 \leq s \leq k - 1$ such that $I^+$ and $I^-$ are irreducible analytic sets of dimension $s - 1$ and $k - s - 1$ respectively. We also have $f(L \setminus I^+) = f(I^-) = I^-$ and $f^{-1}(L \setminus I^-) = f^{-1}(I^+) = I^+$. The maps $f^n$ and $f^{-n}$ are also regular. The algebraic degrees $d_+$ and $d_-$ of $f$ and $f^{-1}$ satisfy the relation $d_+^{k-s} = d_-^s$. The Green currents of bidegree $(1,1)$ associated to $f$ and $f^{-1}$ are denoted by $T_+$ and $T_-$. They are limits in the sense of currents of $d_+^{-n}(f^n)^*(\omega)$ and $d_-^{-n}(f^n)_*(\omega)$ respectively. The current $T_+$ has locally continuous potentials outside $I^+$, and the current $T_-$ has locally continuous potentials outside $I^-$. We also have $f^*(T_+) = d_+T_+$ and $f_*(T_-) = d_-T_-$. We will consider the problem of convergence towards $T_+$; the case of $T_-$ is obtained in the same way.
Let $g : X \to X$ denote the restriction of $f$ to $X := I^-$. The positive measure $\mu_X := T_+^{k-s-1}\wedge[X]$ has positive mass. Since $T_+$ is totally invariant, we have $g^*(\mu_X) = d_+^{k-s-1}\mu_X$. This implies that $g$ has topological degree $d_+^{k-s-1}$. We construct as above the families $X_0, \ldots, X_{k-s-1}$ of totally invariant sets associated to $g$, with $X_{k-s-1} = I^-$. Let $\mathcal E_+$ denote the union of the minimal components in $\{X_0, \ldots, X_{k-s-1}\}$. We have the following result, see [16] for the case of dimension 2.
Theorem 8.1. Let $S$ be a positive closed $(1,1)$-current of mass 1 on $\mathbb P^k$. Assume that the local potentials of $S$ are not identically equal to $-\infty$ on any irreducible component of $\mathcal E_+$. Then, $d_+^{-n}(f^n)^*(S)$ converge to $T_+$.

The proof follows the same lines as above. We will describe the differences with the case of holomorphic endomorphisms and leave the details to the reader. There is a neighbourhood $V$ of $I^+$ with smooth boundary, which can be chosen arbitrarily small, such that $f(\mathbb P^k \setminus V) \Subset \mathbb P^k \setminus V$, see [29]. If $S$ is as above, there is a modulo $T_+$ psh function $u$ such that $S = T_+ + dd^c u$. This function is defined and is locally bounded from above on $\mathbb P^k \setminus I^+$. Denote by $\mathcal G$ the set of modulo $T_+$ psh functions on $\mathbb P^k$ which are limit values of $d_+^{-n}u\circ f^n$. Since the Lelong number of $u$ is $\leq 1$ at every point in $\mathbb P^k \setminus I^+$ and since $f$ is an automorphism, Proposition 4.1 implies that the Lelong number of $d_+^{-n}u\circ f^n$ is $\leq d_+^{-n}$ at every point in $\mathbb C^k$. On the other hand, for $v \in \mathcal G$, we prove as in the previous sections that $v \leq 0$ and $v = 0$ on $X = I^-$. It follows that $v = 0$ on $L \setminus I^+$ since we can write $v = d_+^{-1}v'\circ f$ with $v' \in \mathcal G$ and $f(L \setminus I^+) = I^-$. The upper semi-continuity of the Lelong number implies that for every $\delta > 0$, there is an $m$ such that the Lelong number of $d_+^{-m}u\circ f^m$ is smaller than $\delta$ on $\mathbb P^k \setminus V$. We want to prove that $v = 0$ on $\mathbb P^k \setminus V$.
Assume that $v = \lim d_+^{-n_i}u\circ f^{n_i}$ and that $v \leq -2\alpha$, with $\alpha > 0$, on a ball $B \subset \mathbb P^k \setminus V$ of radius $r$. Then, as in Proposition 5.7, we will have that $d_+^{-m}u\circ f^m \leq -d_+^{n_i-m}\alpha$ on a ball $B_i \subset \mathbb P^k \setminus V$ of radius $\exp(-cr^{-2k}d_+^{n_i-m})$; this contradicts Proposition 4.2 for $\delta$ small and $n_i$ large. We can also obtain a uniform convergence for regular automorphisms as in Theorem 7.1.
| 2014-10-01T00:00:00.000Z | 2006-09-25T00:00:00.000 | {
"year": 2006,
"sha1": "d2435f5c4fd72a02c94c203b4cb77bd139014f42",
"oa_license": null,
"oa_url": "http://www.numdam.org/item/10.24033/asens.2069.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "dc9892e4da2cacf9413d170577fd46a145f6b579",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Mathematics"
]
} |
245097094 | pes2o/s2orc | v3-fos-license | Detection of Forced Change Within Combined Climate Fields Using Explainable Neural Networks
Assessing forced climate change requires the extraction of the forced signal from the background of climate noise. Traditionally, tools for extracting forced climate change signals have focused on one atmospheric variable at a time; however, using multiple variables can reduce noise and allow for easier detection of the forced response. Following previous work, we train artificial neural networks to predict the year of single- and multi-variable maps from forced climate model simulations. To perform this task, the neural networks learn patterns that allow them to discriminate between maps from different years—that is, the neural networks learn the patterns of the forced signal amidst the shroud of internal variability and climate model disagreement. When presented with combined input fields (multiple seasons, variables, or both), the neural networks are able to detect the signal of forced change earlier than when given single fields alone by utilizing complex, nonlinear relationships between multiple variables and seasons. We use layer-wise relevance propagation, a neural network explainability tool, to identify the multivariate patterns learned by the neural networks that serve as reliable indicators of the forced response. These "indicator patterns" vary in time and between climate models, providing a template for investigating inter-model differences in the time evolution of the forced response. This work demonstrates how neural networks and their explainability tools can be harnessed to identify patterns of the forced signal within combined fields.
Recently, neural networks have also entered the fold. Neural networks are machine learning algorithms that are able to detect complex, nonlinear relationships between input and output data (Abiodun et al., 2018). Because neural networks are able to detect highly complex relationships, they are useful for many high dimensional problems and have become prevalent in several atmospheric science research fields, such as weather forecasting (e.g., Lagerquist et al., 2019; Lee et al., 2021; Weyn et al., 2020), climate model parameterizations (e.g., Brenowitz & Bretherton, 2018; Gettelman et al., 2021; Silva et al., 2021), and, most relevant to the focus of this study, detection of a forced climate response (e.g., Barnes et al., 2019, 2020; Labe & Barnes, 2021; Madakumbura et al., 2021). To detect patterns of forced change, Barnes et al. (2020) trained a neural network to predict the year label of maps of annual-mean temperature (or precipitation) from climate model simulations for forced historical and future scenarios. Given that the internal variability in any given year differs between the various climate models, the neural network had to learn patterns of the forced climate response. Using neural network explainability methods, they then visualized the regions that were the most reliable indicators for identifying change across the Coupled Model Intercomparison Project (CMIP5) models. Barnes et al. (2020) demonstrated that neural networks, and their explainability methods, are powerful tools for extracting forced patterns from climate data. This neural network method is a natural approach for isolating the forced climate response. While many other methods require assumptions to be made about the time evolution of the forced signal and internal variability within the system, neural networks do not (Barnes et al., 2019). Following Barnes et al. (2020), neural networks have since been used to explore the sensitivity of regional temperature signals to aerosols and greenhouse gases using single-forcing large ensembles, and to detect the signal of extreme precipitation in observational data sets (Labe & Barnes, 2021; Madakumbura et al., 2021). Though many climate signal detection studies focus on single variables, such as annual-mean temperature or a single season of precipitation (Gaetani et al., 2020; Li et al., 2017; Santer et al., 1996, 2019), there are benefits to studying climate change through a multivariate lens (Bindoff et al., 2013; Bonfils et al., 2020; Mahony & Cannon, 2018). Many variables in our atmosphere are closely interconnected, so when the variables are intelligently selected, signals of change within multiple variables may be detected earlier than in single variables alone. For example, departure from natural variability can be seen decades earlier in bivariate maps of summertime temperature and precipitation than in either variable alone (Mahony & Cannon, 2018). Similarly, Fischer and Knutti (2012) found that climate model biases in the signal of relative humidity and temperature are negatively correlated, such that climate model simulations of their combined quantity, heat stress, have considerably less spread. Combined variables have also been used to identify the impacts of anthropogenic forcings on climate in observational data sets by identifying the multivariate patterns that enhance the signal of change relative to the underlying noise (e.g., Barnett et al., 2008; Marvel & Bonfils, 2013).
Understanding how the patterns of the forced response take shape through multiple atmospheric variables also allows for a deeper understanding of the physics at play, as in Bonfils et al. (2020). They explored the evolution of the climate fingerprint by analyzing the leading combined empirical orthogonal functions of temperature, precipitation, and climate moisture index. This multivariate approach illuminated two cross-variable patterns of change: intensification of wet-dry patterns and meridional shifts in the ITCZ associated with interhemispheric temperature contrasts. Neither pattern can be fully explained by a single variable which highlights the utility of combining variables when identifying patterns of the forced response.
Combining fields can be useful for identifying patterns of forced change that do not reveal themselves in single fields alone, but this added information does not come without its drawbacks. Many variables covary in complex and nonlinear ways, such as sea surface temperature and precipitation (Lu et al., 2015), drought indices (Wu et al., 2017), and snowpack, soil moisture and flood risk (Swain et al., 2020), often requiring complex statistics to isolate these interactions. Identifying nonlinear correlations within climate fields introduces another issue, namely in explaining the complex interplay between fields. These drawbacks highlight the need for methods that are both complex and explainable in multivariate climate analyses.
Providing a method for both nonlinear and multi-variable analysis of the forced response, this study extends the neural-network approach of Barnes et al. (2020) to combined fields of input. Combined fields could mean the same variable for different temporal segments (e.g., seasons), or different geophysical variables, both of which are explored here. For the sake of consistency and comparability, this study largely follows the methodology of Barnes et al. (2020); however, there are some departures. We standardize the input fields differently, which improves the predictive skill of the neural networks. We also use a slightly simpler neural network architecture.

Data
CMIP6 Climate Models
We use climate model output from phase six of the Coupled Model Intercomparison Project (CMIP6). Specifically, we focus on monthly-, seasonal-, and annual-mean fields of 2-m air temperature (K), precipitation rate (kg m−2 s−1), and precipitation rate from very wet days (kg m−2 s−1), hereafter referred to as temperature, precipitation, and extreme precipitation, respectively. We use the meteorological seasons of December-January-February (DJF), March-April-May (MAM), June-July-August (JJA), and September-October-November (SON) for calculating seasonal-mean fields. Defining seasons in this way allows for the earliest detection of forced change (see Figure S1 in Supporting Information S1 for more details).
Very wet days are defined as days that exceed the 95th percentile of all days with precipitation over a pre-defined baseline period (Donat et al., 2016). This is a popular index for measuring changes in extreme precipitation (Cui et al., 2019; Kim et al., 2020) and is used as an indicator of climate change in the U.S. Global Climate Research Program (USGCRP, 2018). We define the baseline as the 40 years from 1980 to 2019, a period for which daily precipitation data exists in both the climate models and the observations. To remove the instances in which climate models simulate sub-trace daily precipitation totals, we only include days that simulated at least 1 mm of precipitation when calculating the 95th percentile of all days with precipitation (Dai et al., 2007).
The neural networks are trained on CMIP6 climate model data. One ensemble member is selected for each of the 37 CMIP6 climate models analyzed, so each climate model is only represented once in the training and testing data. Since daily output is required to calculate very wet days, we are limited to 32 models for extreme precipitation (Figure S3 in Supporting Information S1). We analyze the climate model data from 1920 to 2098 under historical forcing and the shared socioeconomic pathway 585 (SSP585) scenario. SSP585 represents the highest development pathway within CMIP6 scenarios (O'Neill et al., 2016), combining SSP5 and representative concentration pathway 8.5.
Our neural network methodology requires that all climate model fields have the same shape. To accommodate this we regrid the climate model fields from their native resolutions using the second-order conservative remapping method in the Climate Data Operators package from MPI (Schulzweida, 2019). This regridding step reduces the spatial resolution of the data for most climate models. For temperature and precipitation, the data is regridded to 4° latitude by 4° longitude. We elect to use lower resolution data to reduce the computational expense of training neural networks over global maps of temperature and precipitation. Since the domain for extreme precipitation is smaller than the domain for temperature and precipitation (see the following paragraph), and higher resolution data may better capture regional extreme precipitation patterns, the data for extreme precipitation is regridded to a slightly higher resolution: 1.5° latitude by 1.5° longitude.
Two spatial domains are considered in the results of this paper. For temperature and precipitation, the neural networks are trained on all land north of 60°S. Here, we choose to focus on land grid points because that is where humanity lives and will acutely feel the impacts of changing surface temperatures and precipitation. We also exclude Antarctica where climate models and reanalyses struggle to accurately simulate temperature and precipitation. Each map of temperature and precipitation has 948 unique data points. For extreme precipitation, the neural networks are trained on North and South America (land grid points bounded by 90°N, 55°S, 170°W, and 25°W). Here, we choose to narrow the regional scope to show that neural networks are powerful tools for identifying the forced response even when the spatial domain, and thus the available data, is limited. Each map of extreme precipitation has 2,314 unique data points.
Observations
While this work largely focuses on the results of neural networks trained and tested on climate model data, we show that neural networks trained on climate model data can be applied to observational data as well. For temperature, we use the Berkeley Earth Surface Temperature data set (Rohde & Hausfather, 2020). This data set provides both a temperature climatology and the anomalies at monthly resolution from 1850 to the present. We added the anomalies to the climatology to reconstruct the absolute temperature (K) at each grid point for all months between 1920 and 2019. Monthly observational precipitation fields are obtained from the NOAA Global Precipitation Climatology Project (GPCP), version 2.3, for 1979 to the present (Adler et al., 2018). Since daily precipitation fields are required to calculate extreme precipitation, and daily GPCP precipitation observations are only available back to October 1996, we elected to calculate observed extreme precipitation using the European Centre for Medium-Range Weather Forecasts' ERA5 global reanalysis (Hersbach et al., 2020) at 6-hr resolution from 1980 to present. All observations are regridded in the same way as the climate model data for each respective variable.
Neural Network Design
To identify indicator patterns of the forced response for combined fields we first develop artificial neural networks that, given maps of CMIP6 climate model output from every simulated year from 1920 to 2098, are tasked to predict the year that is being simulated. The results for neural networks trained on 10 different input vectors are explored in the following two sections. The input vectors include annual-, seasonal-, and monthly-mean data for temperature, precipitation, and temperature and precipitation combined, as well as seasonal-mean maps for extreme precipitation over the Americas. We use this diverse selection of input vectors to compare neural network performance and indicator patterns for single-field and combined-field inputs.
The neural network architecture is illustrated in Figure 1. Each unit of the input layer corresponds to a different grid point in the input fields. For example, a neural network that uses seasonal-mean maps of temperature and precipitation as input (two variables and four seasons for a total of eight maps, 948 grid points per map) would have an input vector with 7,584 units. In all cases, this input layer is followed by two fully connected hidden layers with 10 nodes each. The hidden layers are followed by an output layer that consists of 22 classes, one corresponding to each decade midpoint between 1905 and 2115 (e.g., 1905, 1915, 1925, …, 2115). A softmax function is applied to the outputs to convert them to units of likelihood, where the sum of the output vector is one.
Neural networks with this architecture learn the patterns of forced change well, and more complicated architectures do not substantially improve neural network performance (see Figure S2 in Supporting Information S1). It is also notable that this neural network architecture performs better than multiple linear regression, especially when trained on precipitation, and thus using nonlinear techniques improves our ability to detect the year via patterns of forced change (Figure S2 in Supporting Information S1). This architecture is also widely accessible to most in the climate science community as it can be trained on a personal laptop—highly complex architectures can be prohibitively computationally expensive (Chen et al., 2020). These neural networks were trained on a standard desktop computer with 16 GB of RAM and a 3.1 GHz, 6-core processor. Training a single network took anywhere between 2 and 10 min depending on the size of the input field. More details on the neural network design and hyperparameter tuning can be found in Supporting Information S1.
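The architecture above is simple enough to state in a few lines. The sketch below uses TensorFlow/Keras (assumed here; the paper does not name its framework), with ReLU activations and placeholder optimizer and ridge-penalty values, since the exact hyperparameters appear only in the Supporting Information:

```python
# Minimal sketch of the described architecture. The activation choice,
# ridge penalty value, and learning rate are illustrative placeholders.
import tensorflow as tf

n_inputs = 8 * 948  # e.g., four seasons x two variables x 948 land grid points

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_inputs,)),
    tf.keras.layers.Dense(
        10, activation="relu",
        # ridge (L2) regularization applied to the first hidden layer only
        kernel_regularizer=tf.keras.regularizers.l2(1e-3)),
    tf.keras.layers.Dense(10, activation="relu"),
    # 22 output classes: decade midpoints 1905, 1915, ..., 2115
    tf.keras.layers.Dense(22, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy")
```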
The neural network is tasked with "predicting the year" rather than "predicting the decade" as the output layer may suggest. To translate between decade midpoints and individual year labels, we use fuzzy encoding (Zadeh, 1965) such that each year can be mapped to one or more neighboring classes with varying degrees of membership (encoded as likelihood). This is different from traditional methods that would map each year to a single decade midpoint. In the traditional case, 2040 and 2049 would be considered to be members of the same class since they are in the same decade, and information would be lost as there is no way to distinguish whether the samples come from the beginning or the end of the decade. Using fuzzy encoding, this information of where a sample lies in each decade is retained. We use a triangular membership function (Zadeh, 1965) with a width equal to one decade such that each year has partial membership in one or two neighboring decade classes, and the total membership sums to one. Following this method, any year directly on a decade midpoint has membership in that class only, while years that fall between decade midpoints have membership in the two neighboring classes. The year 1925, for example, is mapped to a likelihood of one for the class 1925 and a likelihood of zero in all other classes. The year 2078 is mapped to a likelihood of 0.7 for the 2075 class and a likelihood of 0.3 for the 2085 class. Note that decoding class likelihoods back to their year is simply the decade-weighted sum of the likelihood: 0.7 × 2075 + 0.3 × 2085 = 2078. A visualization of the encoding/decoding process can be found in Figure 2 of Barnes et al. (2020).
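The triangular fuzzy encoding and its decoding are easy to make concrete. A minimal NumPy sketch, reproducing the worked example from the text (2078 maps to 0.7 in class 2075 and 0.3 in class 2085):

```python
import numpy as np

DECADES = np.arange(1905, 2116, 10)  # 22 class midpoints: 1905, 1915, ..., 2115

def encode_year(year):
    """Triangular membership of width one decade: each year belongs to at
    most two neighbouring decade classes, with weights summing to one."""
    weights = np.maximum(0.0, 1.0 - np.abs(year - DECADES) / 10.0)
    return weights / weights.sum()

def decode_year(likelihoods):
    """Decode class likelihoods back to a year (decade-weighted sum)."""
    return float(np.dot(likelihoods, DECADES))

# The example from the text: 2078 -> 0.7 * 2075 + 0.3 * 2085.
assert abs(decode_year(encode_year(2078)) - 2078.0) < 1e-9
```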
Neural Network Training
For each input vector we train 100 neural networks that differ only in which climate models are randomly split into the training and testing sets. Partitioning so that each climate model's samples are all part of either the training set or the testing set avoids issues with autocorrelation (i.e., near-identical data appearing in both the training and testing sets). One hundred neural networks provide a range of results across multiple combinations of training and testing simulations and offer confidence that the results are consistent across CMIP6 climate models and do not overfit to any one training set. Each neural network is trained over the entire 1920-2098 period on 80% of the climate model simulations, and then tested on the remaining 20%. This leads to a training set of 30 simulations and a testing set of 7 simulations for temperature and precipitation fields, and a training set of 26 simulations and a testing set of 6 simulations for extreme precipitation fields. We train the neural networks using the binary cross-entropy loss (see Barnes et al., 2020) between the predicted class likelihoods and the correct class membership weights, such that the loss function is minimized when the two are equal. Properties of the neural network training process, such as the learning rate and activation functions, can be found in Supporting Information S1.
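A sketch of the model-level train/test split described above; the function name and seed handling are illustrative, not from the paper:

```python
# All samples from a given climate model fall entirely on one side of the
# split, avoiding near-identical data in both training and testing sets.
import numpy as np

def split_models(n_models=37, n_train=30, seed=0):
    order = np.random.default_rng(seed).permutation(n_models)
    return order[:n_train], order[n_train:]  # (train model ids, test model ids)

train_ids, test_ids = split_models(seed=42)  # one of the 100 random splits
```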
The neural networks have several hidden nodes which enable them to learn complicated relationships between the input and output data. However, with limited training data, many of these learned relationships will capture patterns of the noise in the training data set, which can lead to overfitting (Srivastava et al., 2014). Atmospheric science data is also highly correlated in space, and this collinearity can cause complications in the interpretation of the learned weights (Newell & Lee, 1981). Thus, to reduce overfitting and address these issues, we apply ridge regularization (L2 regularization, see Barnes et al., 2020) to the weights of the first hidden layer. Ridge regularization adds a penalty (called the ridge penalty) proportional to the square of the weights, so the solution is penalized for having large weights. Through training, this acts to shrink the largest weights, thus spreading the weight out more evenly across multiple grid points. In our application this results in a more even distribution of importance across regions with strong spatial correlation and improves the performance of the neural networks when given data they were not trained on, namely those models in the testing set (elaborated on in Figure 3, Section 4 of Barnes et al., 2020).

Figure 1. Schematic of the fully connected neural network architecture. Inputs from multiple maps of data are flattened into an input layer vector (the size of the input layer ranges from 948 to 22,752). These inputs are fed through two hidden layers with 10 nodes each. The neural network is trained to predict the year that the data came from, outputting the likelihood that the input data came from each decade midpoint between 1905 and 2115. This is then converted to a year via fuzzy classification.
Unlike classical approaches, which tune the neural network to reduce the mean squared error (MSE) between the predicted and truth outputs in the testing set (in our case this would be the MSE between the truth and predicted years), we select the ridge penalty that minimizes the time of emergence (TOE) of the forced climate signal (see Section 3.3). Using TOE, rather than MSE, to identify the appropriate ridge penalty ensures that we are encouraging the neural networks to learn the patterns of the forced response across all decades. When a small ridge penalty is used, the neural networks are able to predict the year at the end of the twenty-first century almost perfectly, at the expense of the predictive skill in earlier decades. This results in a later calculation of TOE for the testing set. Slightly increasing the ridge penalty can allow the neural networks to detect the climate change signal slightly earlier (Figure S4 in Supporting Information S1). The ridge penalty used for each input vector can be found in Supporting Information S1. We use the same ridge penalty for all 100 neural networks trained on each input vector.
All input fields (for climate models and observations) are standardized to assist with the training and overall performance of the neural network. We subtracted the 1980-2019 mean at each grid point of the input fields for each climate model independently. This recasts each input field to measure the change relative to the 1980-2019 mean, rather than the raw magnitudes, which improves the predictive skill of the neural networks and is also appropriate for identifying indicator patterns of forced change. Since values for precipitation change are often on the order of 10 −6 , while the values for temperature change are on the order of 10 0 , we normalized the data so the inputs to the neural network all have a similar magnitude. To do this, the data from 1980 to 2019 at each grid point for each climate model are detrended using ordinary least squares linear regression. We then take the multi-model mean of the standard deviation of the detrended 1980-2019 data for each grid point. The input fields are then divided by this new field of standard deviations so the inputs are of the same magnitude and fall in a reasonable range for training the neural networks. Since all our observational data sets include the years 1980-2019, we standardize the observations as if they were additional climate models: raw observations are subtracted by their own 1980-2019 mean, and divided by the same multi-model standard deviations that were used to standardize the CMIP6 data.
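The standardization procedure can be summarized in a short sketch (Python/NumPy; the array shapes and names are illustrative):

```python
# Subtract each model's 1980-2019 mean, then divide by the multi-model mean
# of the grid-point standard deviations of the detrended 1980-2019 data.
import numpy as np

def standardize(fields, years):
    """fields: array (n_models, n_years, n_grid); years: array (n_years,)."""
    ref = (years >= 1980) & (years <= 2019)
    anomalies = fields - fields[:, ref, :].mean(axis=1, keepdims=True)

    t = years[ref].astype(float)
    stds = []
    for model in fields:
        segment = model[ref, :]                            # (n_ref_years, n_grid)
        slope, intercept = np.polyfit(t, segment, deg=1)   # OLS fit per grid point
        residuals = segment - (np.outer(t, slope) + intercept)
        stds.append(residuals.std(axis=0))                 # detrended variability

    scale = np.mean(stds, axis=0)                          # multi-model mean std
    return anomalies / scale
```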
Time of Emergence Calculation
The TOE of the forced climate response is the time in which the forced response signal is distinguishable from the background climate by the neural network. Specifically, we define the TOE as the year when the neural network is able to distinguish that year's map from any map over a historical baseline period. In this work, we define this baseline period as 1920-1959 and, under this definition, the earliest possible TOE estimate is 1960. The TOE is estimated for each climate model simulation independently, and a schematic of how the TOE is estimated is presented in Figure 2. First, we calculate the maximum of the neural network-predicted years over 1920-1959 for each model, which is referred to as the baseline maximum. We then identify the TOE as the earliest year in which a map, and all subsequent maps, permanently exceed the baseline maximum. In Figure 2, sample model 1 has a baseline maximum of 1966 and permanently exceeds this prediction threshold in 2028. Sample model 2 has a baseline maximum of 1981 and permanently exceeds this threshold in 1989. Thus, the TOE for sample models 1 and 2 are estimated as 2028 and 1989, respectively. In the following sections we present the TOE for the testing set; however, TOE estimates are similar for both the training and testing sets.

Figure 2. The TOE is defined as the earliest year in which a map, and all subsequent maps, permanently exceed the maximum predicted year from the baseline period. The baseline maximum for each model is indicated by the horizontal lines, the last year that falls below the baseline maximum is circled, and the TOE is indicated by the vertical lines. Sample model 1 (dark red) has a baseline maximum of 1966 and permanently exceeds this threshold in 2028. Sample model 2 (light green) has a baseline maximum of 1981 and permanently exceeds this threshold in 1989. Thus, the TOE for sample models 1 and 2 are estimated as 2028 and 1989, respectively.

Figure 3. Year predicted by the neural network (y-axis) versus the truth year (x-axis) for temperature (a, d, g), precipitation (b, e, h), and temperature and precipitation combined (c, f, i). Input maps include annual-mean data (a-c), seasonal-mean data (d-f), and monthly-mean data (g-i). Testing data is shown in color and observations are shown in white.
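Under the definition above, the TOE for one model reduces to a few lines. A minimal sketch, assuming aligned 1-D arrays of years and neural-network predictions:

```python
import numpy as np

def time_of_emergence(years, predicted, base_start=1920, base_end=1959):
    """TOE: first year after which predictions permanently exceed the
    baseline maximum of the predicted years."""
    baseline = (years >= base_start) & (years <= base_end)
    baseline_max = predicted[baseline].max()
    below = np.where(predicted <= baseline_max)[0]  # years failing to exceed
    # `below` is never empty: the baseline maximum itself does not exceed
    # itself, so the earliest possible TOE is the year after the baseline.
    if below[-1] == len(years) - 1:
        return None  # never permanently exceeds the baseline maximum
    return int(years[below[-1] + 1])  # first year of permanent exceedance
```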
Layer-Wise Relevance Propagation
To visualize the patterns learned by the neural network we apply LRP, which highlights the regions that were most relevant in the neural network's decision-making process (Bach et al., 2015; Montavon et al., 2019). Toms et al. (2020) discusses in detail how LRP can be used for neural network explainability in the geosciences, though the most relevant details of LRP are described here.
LRP is a neural network explainability method that traces how information flows through the pathways of a trained neural network. The values in a single-sample input vector (in our case, a single year) are passed forward through the neural network. Using the same weights and activations used in the forward pass, LRP then propagates a single-valued output back through the neural network to infer the extent to which each of the values in the input layer contributes to the output (see Figure 2 in Bach et al., 2015). We refer to this quantity as relevance. Through this backpropagation process the output value is conserved such that the sum of all relevance is equal to the output. At first order, relevance can be likened to the product of the regression weights and input map in a linear model. This quantity is natively unitless, but we convert it to a fraction by dividing by the output value. This way, we can consider the relevance of a single pixel in terms of its fractional contribution to the predicted class. Since LRP propagates only a single output value at a time, we propagate relevance only for the decade class with the highest likelihood. While the relevance maps detected by these networks evolve from year to year, this evolution is slow, so we find visualizing the highest likelihood decade is sufficient.
There are several LRP decomposition rules which provide different methods of visualizing neural networks (Lapuschkin, 2019; Mamalakis et al., 2021). In our applications we use the αβ-rule, which propagates positive relevance (regions that act to increase the class likelihood) and negative relevance (regions that act to decrease the class likelihood) separately. Using the parameters α = 1 and β = 0, we choose to only propagate positive relevance, thus highlighting the regions that added to the likelihood of the selected decade class. We also looked at the relevance maps for β = 1 and found that propagating negative relevance did not impact the conclusions.
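For a fully connected layer with non-negative (ReLU) activations, the α = 1, β = 0 rule reduces to redistributing relevance through positive contributions only. A minimal single-layer NumPy sketch (not the exact library implementation used by the authors):

```python
import numpy as np

def lrp_alpha1_beta0_step(activations, weights, relevance_out, eps=1e-12):
    """activations: (n_in,) layer inputs; weights: (n_in, n_out);
    relevance_out: (n_out,) relevance arriving from the layer above."""
    w_pos = np.maximum(weights, 0.0)
    z = activations[:, None] * w_pos            # positive contributions a_i * w_ij^+
    denom = z.sum(axis=0) + eps                 # total positive input per output node
    return (z * (relevance_out / denom)).sum(axis=1)  # conserves total relevance
```

Starting from the winning decade class's output value and applying this step backwards layer by layer yields the input-layer relevance; dividing by that output value gives the fractional relevance described above.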
Signal-to-Noise Ratio Calculation
In Section 4, we compare the LRP relevance maps to maps of S/N ratio, a more conventional method for identifying indicator patterns of the forced response. The S/N ratio consists of three distinct components: the forced signal, which is divided by the sum of the noise due to internal variability and the noise due to climate model disagreement. A higher S/N ratio indicates that the signal of the forced response within the climate models is very large relative to the underlying noise. We evaluate the S/N ratio for each grid point separately, following the methodology in Hawkins and Sutton (2012). First, we smooth the data from 1920 to 2098 for each climate model using a fourth-order polynomial fit. The signal is defined as the difference between 2090 and 1920 in the smoothed data, while internal variability is defined as the standard deviation of the residuals from the smoothed data, and climate model disagreement is defined as the standard deviation of the signals calculated for all the climate models. The S/N ratio is calculated by dividing the climate signal by the 90% confidence interval in the noise: internal variability and climate model disagreement. The S/N ratio, and its components, can be seen in Figure S8 in Supporting Information S1.
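A grid-point sketch of this S/N calculation (Python/NumPy). The 90% confidence factor of 1.645 and the additive combination of the two noise terms are assumptions about implementation details not spelled out in the text:

```python
import numpy as np

def signal_to_noise(series, years):
    """series: (n_models, n_years) values at one grid point; years spans
    1920-2098 so that both 1920 and 2090 are present."""
    signals, internal = [], []
    for y in series:
        coeffs = np.polyfit(years, y, deg=4)      # fourth-order polynomial fit
        smooth = np.polyval(coeffs, years)
        signals.append(smooth[years == 2090][0] - smooth[years == 1920][0])
        internal.append(np.std(y - smooth))       # internal variability
    signal = np.mean(signals)
    # assumed noise combination: internal variability + model disagreement,
    # scaled to a 90% confidence interval
    noise = 1.645 * (np.mean(internal) + np.std(signals))
    return signal / noise
```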
Time of Emergence
Across all input vectors of temperature and precipitation, the neural networks are able to learn patterns of the forced response. In the early-to-mid twentieth century, the forced signal is small and undetectable by the neural networks amidst the noise of internal variability and model disagreement, which leads to poor predictive skill (Figure 3). However, as the signal increases in magnitude into the late-twentieth and twenty-first centuries, the neural networks are able to detect the patterns of the forced response and distinguish between maps in different years. These patterns of the forced response detected by the neural networks are generalizable across CMIP6 models, and as a result the neural network has predictive skill for seen data (the training set, see Supporting Information S1) as well as unseen data (the testing set). These behaviors are shown in Figure 3, which presents the predicted years from one trained neural network for each combination of global precipitation and temperature input fields. Across all input vectors, a similar story of the forced signal unfolds. Prior to the TOE, the neural network is unable to identify patterns that allow it to accurately predict the year. As a result, the neural network is equally confident (or unconfident) about which year, between 1920 and the TOE, each input came from, so it predicts years right around the middle of the twentieth century. After the TOE, the predicted years tend to follow a 1:1 line with the truth years, indicating that the neural network has identified reliable indicators of change for this period.
Although the neural networks are trained on climate model simulations, their learned patterns can be used to predict the year for observational data as well. When observations are used as input, the predicted years increase with time, just as they do for climate model input (Figure 3). This means that the indicators of change derived by the neural networks trained on climate model simulations are largely consistent with the real world. Pearson correlations (r) of the actual years with the years predicted by each neural network are shown in Figure 4. All correlations are positive, indicating that the years predicted by the neural networks increase with time. These correlations are strongest for temperature and combined observations (r ≈ 0.9), but still quite high for precipitation (r ≈ 0.8). Correlations of actual years with predicted years are slightly higher for the combined temperature and precipitation observations than for temperature observations alone (Figure S5 in Supporting Information S1), suggesting that the multivariate indicator patterns derived from climate model data are useful for understanding trends in the present-day climate. Across all variables, the highest observational correlations are found by the neural networks trained on seasonal-mean data. The correlation of actual years with predicted years for precipitation observations is sensitive to the data set of choice, which is expanded on in Section S4 and Figures S5 and S6 in Supporting Information S1.
The average TOEs, calculated from the climate models in the testing sets of all 100 trained neural networks for each input field (Figure 5), reveal that the forced response can be detected earlier in maps of temperature than in maps of precipitation (Figures 5a-5c). When presented with combined fields the neural networks are, in many cases, able to detect the forced signal even earlier than when given single fields alone (Figures 5b and 5f). The TOE is generally earlier for the neural networks trained on seasonal-mean data than for the neural networks trained on annual-mean data (Figures 5d-5f). This is most notable for precipitation fields, likely because there are large seasonal precipitation responses muted by taking the annual mean (Tabari & Willems, 2018; Zappa et al., 2015). The TOEs are earlier for temperature and precipitation combined than temperature alone when using seasonal-mean maps (Figure 5b), but are approximately equal when using annual-mean or monthly-mean maps (Figures 5a and 5c), which suggests that precipitation only improves upon the detectability of the forced temperature signal when seasonal-mean fields are used. While annual-mean precipitation may mute seasonal precipitation signals, monthly-mean precipitation is noisy. In this case, seasonal means emerge as the appropriate temporal segments for detecting precipitation change, underlining the importance of the intentional and intelligent selection of neural network inputs.

Figure 4. Correlations were computed for all years beginning in 1980 where observational data exists for all variables. The box plots indicate the first, second, and third quartile statistics, and the whiskers denote 1.5 times the interquartile range, or the minimum/maximum value, whichever is less extreme. Outliers are excluded for clarity, but can be found in Figures S5 and S6 in Supporting Information S1.
The neural networks identify the earliest TOEs when trained on seasonal-mean temperature and precipitation combined (Figures 5b and 5f). The TOE results for all 100 seasonal-mean neural networks are summarized in the box plots in Figure S7 in Supporting Information S1. While the improvement in forced response detection is small when precipitation is combined with temperature, it is still notable given that the forced signal of temperature is much clearer than the forced signal of precipitation. We use these variables as an initial example for employing this neural network methodology. We anticipate that more robust results might be found for combinations of variables that have more distinct combined signals, such as humidity and temperature (Fischer & Knutti, 2012).

Figure 5. Average TOE for neural networks trained on annual-mean (a), seasonal-mean (b), and monthly-mean (c) input fields, and for neural networks trained on temperature (d), precipitation (e), and temperature and precipitation combined (f). A total of 100 neural networks with different train-test splits were trained for each input field. Each dot represents the mean TOE for all climate models in the testing set for a single trained neural network, ranked from earliest to latest. Note the change in the y-axes between panels, and that the TOE results for each set of neural networks appear once in panels (a-c), and once in panels (d-f).
Indicator Patterns for Combined Variables
Having shown that the neural networks are able to predict the year given seasonal means of temperature and precipitation (Figures 3 and 5), we now identify and explore the spatial indicator patterns used by the neural networks to make correct predictions. By understanding the neural networks' decision-making process, we can identify which regions act as combined (multi-seasonal and multi-variable) indicators of forced change amidst a background of internal variability and climate model disagreement. To identify these indicator patterns, we apply LRP to all climate model samples in the training and testing sets from the year 2090 for the seasonal-mean combined neural networks. Averaging the LRP results for each season and variable, we highlight the regions that have the highest mean relevance across the 37 CMIP6 climate models and 100 trained neural networks. The relevance maps for temperature (precipitation) are shown in Figures 6a-6d (7a-7d) and indicate the importance of each region in the neural networks' predictions of the year 2090.
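For intuition, the epsilon rule of LRP can be written in a few lines for a small dense ReLU network: the output relevance is redistributed backward in proportion to each unit's contribution to every pre-activation. The random weights below stand in for the trained networks, and the authors' exact LRP variant may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 500, 64                                   # flattened map size, hidden units
W1, b1 = 0.05 * rng.standard_normal((n_in, n_hid)), np.zeros(n_hid)
W2, b2 = 0.05 * rng.standard_normal((n_hid, 1)), np.zeros(1)

def lrp_layer(a, W, b, R, eps=1e-6):
    """Epsilon rule: redistribute relevance R from a layer's output to its input."""
    z = a @ W + b                     # pre-activations (eps stabilizes near-zero terms)
    s = R / (z + eps * np.sign(z))
    return a * (W @ s)

x = rng.standard_normal(n_in)         # one input sample (e.g., a year-2090 map)
a1 = np.maximum(x @ W1 + b1, 0.0)     # forward pass through the ReLU hidden layer
out = a1 @ W2 + b2                    # scalar output ("predicted year")

R_hid = lrp_layer(a1, W2, b2, out)    # relevance at the hidden layer
R_in = lrp_layer(x, W1, b1, R_hid)    # relevance heatmap over the input map
# Averaging R_in over many samples and models yields maps like Figures 6 and 7.
```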
LRP identifies temperature over North Africa and Central Asia in JJA (Figure 6c) and the Andes and Central Africa in SON (Figure 6d) as the most relevant regions for predicting the year. For precipitation, the regions of highest relevance can be found in Canada and Russia in DJF and SON (Figures 7a and 7d) and in Central Africa and India in JJA and SON (Figures 7c and 7d). That is to say, these are the regional patterns identified by the neural networks that indicate the presence of forced change across the CMIP6 climate models. The scales of the color bars differ between Figures 6 and 7, such that the darkest regions in the temperature maps are approximately one order of magnitude more relevant than the darkest regions in the precipitation maps. Hence, the neural network is relying more heavily on the temperature inputs than the precipitation inputs to accurately predict the year. This is not surprising because the forced signal of temperature is clearer than the forced signal of precipitation (Figure SPM.7 in Field et al., 2014). Even so, including seasonal precipitation allows the neural networks to detect forced change earlier within combined fields than in temperature fields alone (Figure 5b). The improvement in neural network performance provided by precipitation (alongside temperature) is particularly noteworthy given that the S/N ratio for temperature is larger than the S/N ratio for precipitation in all seasons and regions (Figures 6e-6h and 7e-7h, discussed further in this section). In other words, the forced temperature signal is always more pronounced than the forced precipitation signal, but the precipitation signal is still useful for detecting forced change.
LRP is designed to highlight the regions that were most relevant for predicting the correct class (in our case, the correct decade class). These LRP indicator patterns for 2090 are not the time-mean patterns of the forced response; they are the patterns used by the neural network to distinguish the end of the twenty-first century from all other decades. This is distinctly different from the S/N ratio, which identifies the regions where the forced change from 1920 to 2090 is largest relative to internal variability and climate model spread. Maps of S/N ratio for temperature are shown in Figures 6e-6h, and the corresponding maps for precipitation are shown in Figures 7e-7h, where a higher S/N ratio (darker green) indicates a clearer forced signal. These regions of high S/N ratio are consistent with other related studies (e.g., Hawkins et al., 2020). For the most part, the indicator patterns identified by LRP correspond with the regions with the highest S/N ratios. Calculating the Spearman's rank correlation (ρ) between each map of relevance and S/N ratio, we find that there is generally a strong positive correlation (0.71 ≤ ρ ≤ 0.77) between the LRP indicator patterns and the S/N ratios for temperature, and a moderate positive correlation (0.30 ≤ ρ ≤ 0.56) for precipitation. The exact correlation coefficients between each map are displayed in the subtitles for Figures 6e-6h and 7e-7h.
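A sketch of the relevance-versus-S/N comparison, assuming both fields are flattened to one value per grid cell; the S/N definition used here (absolute forced change divided by a combined noise term) is a simplified stand-in for the paper's definition.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_grid = 72 * 36
signal = rng.standard_normal(n_grid)      # forced change, 1920 to 2090 (synthetic)
noise = 0.5 + rng.random(n_grid)          # internal variability plus model spread
sn_ratio = np.abs(signal) / noise

relevance = sn_ratio + 0.3 * rng.standard_normal(n_grid)  # toy LRP relevance map
rho, _ = spearmanr(relevance, sn_ratio)
print(f"Spearman rho = {rho:.2f}")        # cf. 0.71-0.77 for temperature in the paper
```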
Given that precipitation only contributes a small amount of relevance compared to temperature, it is perhaps unsurprising that there are several regions where the S/N ratio for precipitation is high, but the relevance is low (e.g., Alaska in JJA, Figures 7c and 7g, or South Africa in SON, Figures 7d and 7h). Most likely, the forced signal of temperature is clear enough that these regions do not add to the predictive skill of the neural networks. Regions also exist where the S/N ratio for temperature is high despite low relevance (e.g., North Africa in DJF, Figures 6a and 6e), although these are more rare, as hinted by the strong correlation between the temperature maps of S/N ratio and relevance. In contrast, there are fewer regions with high relevance despite low S/N ratios, but they do occur (e.g., SON temperatures in northern South America, Figures 6d and 6h). These high-relevance, low-S/N ratio regions confirm that the indicator patterns identified by LRP capture more than the local S/N ratio. Some reasons a region/variable/season may be important in terms of LRP, but not in terms of S/N ratio, are: (a) LRP may be identifying places in our data where a signal exists only in the combination of regions/seasons/variables, which would not be captured by this definition of S/N ratio. (b) Since LRP highlights the patterns the neural networks use to predict the correct decade over all other decades, it may be capturing abrupt nonlinear changes in the climate that are filtered out by the century-long analysis of S/N ratio. In the next section, we discuss further applications of neural network-derived indicator patterns and task the network with the much harder problem of identifying changes in extreme precipitation over the Americas.

Figure 6. Combined indicator patterns of the forced response (temperature). Average temperature LRP results for the seasonal-mean combined neural networks (left, in yellow) and signal-to-noise (S/N) ratio (right, in green) for 2090. Darker shading indicates regions of temperature that are more relevant for the neural network's prediction or have a higher S/N ratio. The Spearman's rank correlation (ρ) between corresponding maps of relevance and S/N ratio are shown in the subtitles of panels (e-h).
Extreme Precipitation Over the Americas
We now task the neural networks to predict the year given combinations of seasons for a single variable: extreme precipitation over the Americas. We choose to shift our focus for a few reasons. First, we wish to demonstrate that this neural network approach can be extended to variables that have considerable noise (like extreme precipitation, see Figure S8 in Supporting Information S1), and data sets that do not cover the globe. Second, extreme precipitation has major implications for human health (Ali et al., 2019; Eekhout et al., 2018; Rosenzweig et al., 2002) but there is considerable disagreement between climate models in its signal (Figure S8 in Supporting Information S1). This neural network approach can be used to identify agreed-upon patterns despite climate model spread. Further in this section, we will demonstrate that LRP maps can be used to investigate climate model differences and better understand the time evolution of the forced response.

Figure 7. Combined indicator patterns of the forced response (precipitation). Average precipitation layer-wise relevance propagation results for the seasonal-mean combined neural networks (left, in yellow) and signal-to-noise (S/N) ratio (right, in green) for 2090. Darker shading indicates regions of precipitation that are more relevant for the neural network's prediction or have a higher S/N ratio. The Spearman's rank correlation (ρ) between corresponding maps of relevance and S/N ratio are shown in the subtitles of panels (e-h).
The extreme precipitation signal is not as pronounced as the temperature signal, and using the Americas rather than the full globe limits the amount of unique information in the input field. Nevertheless, the neural networks are still able to detect patterns of forced change. Figure 8 depicts the years predicted by one neural network trained on seasonal-mean extreme precipitation. As in Figure 3, the neural network is unable to accurately predict the year given CMIP6 data prior to the TOE around 2010, whereafter the predicted years generally follow the 1:1 line with the truth years, indicating that the neural network has identified reliable indicators of change for this period. All Pearson correlations of the actual years with the predicted years for extreme precipitation in observations are positive (r ≈ 0.4), demonstrating that the indicator patterns found in climate models can be successfully applied to observations (Figure 4). These correlations are not as strong as those for mean precipitation observations, due in part to the magnitude of climate model disagreement in extreme precipitation as well as the observational data set used: ERA5. As shown in Figure S6 in Supporting Information S1, the correlations of actual with predicted years for ERA5 precipitation observations are far smaller than those for GPCP observations. ERA5 tends to perform poorly in remote regions such as northern North America and northwestern South America (Bell et al., 2021), which may be responsible for these low correlations. The correlations between actual years and neural network-predicted years for extreme precipitation observations are explored in much more detail by Madakumbura et al. (2021).
To investigate the indicator patterns used by the neural networks to predict the year when the forced signal first emerges from the background noise, we apply LRP to all climate model samples in the training and testing sets for all 100 neural networks at the TOE (using the TOE calculated for each climate model and neural network individually, see Figure S9 in Supporting Information S1). LRP points to western South America in DJF and British Columbia in MAM and SON as the most relevant regions when the neural networks first detect the forced response (Figures 9a-9d). These LRP maps exhibit a more even distribution in relevance across each region and season than the end-of-the-twenty-first-century LRP maps of global temperature and precipitation (Figures 6a-6d and 7a-7d). Predicting the year at the TOE, when the signal has just barely emerged from the background climate, likely requires the neural networks to use all of the information available to them.
Up to this point, we have only considered the mean LRP maps across climate models. Since the neural networks are nonlinear by nature, they can identify multiple patterns that differ between climate models for a given decade. We apply k-means clustering to all 3200 LRP maps at the TOE (32 climate model samples, 100 neural networks) to identify two distinct indicator patterns that are being used by the climate models (Figures 9e-9l, see Supporting Information S1 for more details on k-means clustering). Taking the difference between the mean LRP maps for clusters one and two reveals that the Amazon in JJA is a highly relevant region in cluster one, while western Canada in DJF is a highly relevant region in cluster two (Figures 9m-9p). With the sole exception of MPI-ESM1-2-HR, all 100 LRP maps for each individual climate model fall cleanly into one cluster or the other, suggesting that there are two distinct ways in which the forced signal emerges in the CMIP6 simulations (Figure 10). Interestingly, when k-means is instructed to identify 32 unique clusters within the LRP maps, each cluster contains all 100 relevance maps for each of the 32 climate models. In other words, the pathway used by the neural networks to predict the year is unique to each climate model and distinguishable from all other climate models, regardless of whether the climate model samples appear in the training or testing sets (further investigated by Labe and Barnes (2022)).
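The clustering step itself is a standard k-means call over flattened relevance maps; in this sketch, synthetic two-pattern data stands in for the 3,200 TOE relevance maps.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_maps, n_grid = 3200, 500
base = rng.standard_normal((2, n_grid))              # two underlying indicator patterns
true_labels = rng.integers(0, 2, n_maps)
lrp_maps = base[true_labels] + 0.3 * rng.standard_normal((n_maps, n_grid))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(lrp_maps)
diff = km.cluster_centers_[0] - km.cluster_centers_[1]  # cf. Figures 9m-9p
```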
In the same way that indicator patterns can differ between models, indicator patterns are also able to evolve through time (e.g., Barnes et al., 2020; Labe & Barnes, 2021; Madakumbura et al., 2021). Comparing the LRP maps at the TOE (Figures 11a-11d) with those at the end of the twenty-first century (Figures 11e-11h) highlights the regions that become more important for predicting the year over time. The difference plots in Figures 11i-11l reveal that the neural network learns to focus on Alaska during MAM, JJA, and SON, Greenland in JJA and SON, and Quebec in MAM and SON as the forced response becomes stronger. These regions are more important for predicting the year at the end of the twenty-first century than the early twenty-first century. While further exploration is required, there are several reasons a region may become more relevant over time. For example, it may be that the region does not initially have a clear forced signal, but following some abrupt change (e.g., an ice-free Arctic) the forced signal becomes extremely pronounced. It may also be that the region has a signal that is consistently agreed upon by the majority of CMIP6 climate models, and becomes more relevant compared to other regions as climate model projections in those other regions drift apart. These time-varying patterns support the idea that combined indicators are effective for identifying dynamically evolving patterns of forced change.

Figure 9. Relevance map clusters at the time of emergence (TOE) for extreme precipitation. Average layer-wise relevance propagation results for: extreme precipitation at the TOE (a-d), each cluster identified by k-means (e-l), and the difference between the clusters (m-p). In panels (a-l), darker shading indicates regions of extreme precipitation that are more relevant for the neural networks' prediction of the year at the TOE. In panels (m-p), blue shading indicates the regions that are more relevant in cluster 1, while red shading indicates the regions that are more relevant in cluster 2. Note that panels (a-d) are identical to panels (a-d) in Figure 11.
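The time-evolution comparison reduces to differencing mean relevance maps from two periods; a minimal sketch with synthetic maps standing in for the TOE and 2090 LRP composites:

```python
import numpy as np

rng = np.random.default_rng(0)
n_maps, n_grid = 100, 500
lrp_toe = rng.random((n_maps, n_grid))          # relevance maps at the TOE
lrp_2090 = rng.random((n_maps, n_grid)) + 0.1   # relevance maps at 2090

delta = lrp_2090.mean(axis=0) - lrp_toe.mean(axis=0)  # cf. Figures 11i-11l
gaining = np.argsort(delta)[-10:]               # grid cells gaining relevance over time
```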
Conclusions
Neural networks are powerful tools for identifying patterns of forced change in the climate system. When tasked with predicting the year given climate model simulations of temperature, precipitation, or extreme precipitation, artificial neural networks can learn the patterns of forced change that allow them to distinguish between maps from different years. In combined fields, such as multiple variables, seasons, or both, the forced response can be detected earlier than in single fields alone. By visualizing the decision-making process of the neural networks with an explainability method, we extracted reliable, multivariate patterns of forced change. These neural network-derived combined indicator patterns are complex and nonlinear and capture more than the local S/N ratio. Explainability methods take a huge step toward disentangling the relationships learned by neural networks by pointing out what inputs contributed most to the final prediction, but they stop short of explaining why.
Expanding on previous work by Barnes et al. (2020), we used k-means clustering in tandem with LRP to study the relationships learned by the neural networks. This approach revealed two distinct ways in which the extreme precipitation response emerges in CMIP6 data. While combining neural network explainability methods with other statistical techniques can enhance explanations of neural network decisions, there is still a large gap between what the neural network has learned and what we can explain post hoc. Some unanswered questions, such as "why does temperature in Region A combine with precipitation in Region B to improve the signal of the forced response?" may be better answered with a different architectural approach, such as neural network designs that are inherently interpretable and do not require post-hoc approaches like LRP (Rudin, 2019). This is a natural next step for future work. The flexibility and accessibility of this framework provide several other future research directions. Given that this predict-the-year approach can be applied to observational data, one possible extension of this work could involve exploring the observed features of forced change that are consistent with climate model simulations. There is also space for these methods to be used to determine which definitions of seasons are optimal for detecting forced change. While we used meteorological seasons here, there may be more appropriate definitions, such as unique definitions of the wet and dry seasons, or the shoulder seasons, that vary between variables and regions. Furthermore, this framework should be expanded to other variables, regions of focus, and climate change scenarios, to identify the combined indicators that best elucidate the forced signal. For example, extreme precipitation and extreme drought may combine to capture the increased volatility in precipitation extremes that are expected with climate change (O'Gorman, 2015). Further application of this technique to compound climate extremes, such as heat wave intensity, drought duration, and flood frequency, may reveal that explainable neural networks are useful for assessing societal impacts and improving climate change preparedness.

Figure 10. Climate models in each relevance map cluster at the time of emergence (TOE). The number of times each climate model appears in each cluster when k-means is applied to the maps of relevance at the TOE for 100 ANNs trained on extreme precipitation over the Americas. Only the relevance maps for MPI-ESM1-2-HR appear in both clusters. All other relevance maps for each climate model are found in one cluster or the other.

Figure 11. Time evolution of extreme precipitation relevance. Average layer-wise relevance propagation results at the time of emergence (a-d), 2090 (e-h), and the difference between (i-l). Darker shading in panels (a-h) highlights regions that were more relevant for the neural networks' prediction of the year. In panels (i-l), red shading indicates regions where the relevance has increased over time, while blue shading indicates regions where the relevance has decreased over time. Note that panels (a-d) are identical to panels (a-d) in Figure 9.
Data Availability Statement
All data used in this study is publicly available and referenced throughout the paper. The CMIP6 simulations used in this study can be accessed via the Earth System Grid Federation (ESGF, https://esgf-node.llnl.gov/projects/cmip6/). Monthly temperature observations are available through Berkeley Earth (http://berkeleyearth.org/data/). Global Precipitation Climatology Project monthly global precipitation fields are available through the NOAA Physical Sciences Laboratory (https://psl.noaa.gov/data/gridded/data.gpcp.html). Monthly, daily, and sub-daily precipitation reanalyses were provided by the European Centre for Medium-Range Weather Forecasts (ERA5: https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5) and the National Center for Atmospheric Research (JRA55: https://climatedataguide.ucar.edu/climate-data/jra-55). Python code used in this work has been made publicly available at https://doi.org/10.5281/zenodo.6780638.
"year": 2022,
"sha1": "70906b97919a95677c9cdc04ef6ae933b504160d",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2021MS002941",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "13c8664c85dbc6c4d96e2ab53672da668b32a45b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Types of Nursing Intervention on Improving Quality of Life among Patients with Diabetes Mellitus: A Systematic Review
Background: Long-term treatment of patients with diabetes mellitus (DM) is considered a major factor causing disease complications. DM complications mostly impact the patient's quality of life (QoL). Only a few studies have been conducted summarizing the types of nursing interventions for improving the QoL of DM patients. Objective: The objective of this study is to explore the types of nursing interventions that can improve the QoL of DM patients. Methods: The online databases ScienceDirect, Medline, Google Search, and ProQuest were used to search for relevant articles. Articles that met the inclusion criteria were analyzed, and their level of evidence was determined and synthesized. Results: A total of 30 articles defining the types of nursing intervention for improving the QoL of DM patients were discovered, comprising five types of nursing interventions: health education (15 articles), exercise (8 articles), WhatsApp/short message service (WA/SMS) gateway (3 articles), blood glucose control (3 articles), and black garlic herbal therapy (1 article). Conclusion: Sequentially, the most common type of nursing intervention to improve the QoL of DM patients was health education, followed by exercise, WA/SMS gateway, and glucose control. A personal approach to health education is a significant point in improving the QoL of DM patients in the future. The findings of this study might not be strongly generalized, so further randomized controlled trial (RCT) studies with larger samples are needed.
INTRODUCTION
Diabetes Mellitus (DM) is considered one of the most prevalent chronic diseases worldwide. Globally, a total of 422 million DM patients were recorded in 2022 (an increase of 28.6% compared to the previous year), and it is predicted to be elevated in 2045 with a total number of 629 million cases. An increasing rate of DM cases has also been witnessed in Indonesia. A total of 19.5 million DM cases were reported in 2021 (an increase of 28.6% from 2020), and it is estimated to increase to 183 million in 2045 [1]. The increasing prevalence of DM indicates the need for prevention efforts. These prevention efforts aim to control blood glucose levels through four pillars of DM management, such as education, meal planning, physical exercise, and pharmacological therapy [2]. Several etiologies contribute to DM prevalence, including obesity, low physical activity, sugar levels in the body, stress, genetics, and the aging process (elderly). A previous study supported that a higher percentage of DM complications are triggered by obesity (53.6%), low physical activity (50.7%), and higher fasting sugar levels (75.4%) [3]. Numerous efforts have been implemented to prevent DM complications and maintain blood sugar within the normal range. The Indonesia Ministry of Health has initiated a healthy living program for DM patients by maintaining a diet style. A good diet style will improve the quality of life (QoL) of DM patients [4].
Patients with DM potentially encounter a decrease in QoL, which contributes to decreasing health-related quality of life (HRQoL). It occurs due to limited physical, psychological, and cognitive abilities in performing daily activities. To date, many studies have been investigated to improve the QoL of DM patients. A recent study conducted in 2020 described that health education intervention is effective in improving the QoL of DM patients [5]. A previous investigation also explored that exercise, blood glucose level management, and black garlic consumption can also improve the QoL of DM patients [6]. The several studies mentioned above are still categorized as single studies. There is no systematic review that discusses the types of nursing interventions for increasing the QoL of DM patients. Thus, it is necessary to conduct a systematic literature study to determine the types of nursing interventions that contribute to increasing the QoL among DM patients. Therefore, the purpose of this study was to explore and review the current evidence regarding the types of nursing interventions that contribute to improving the QoL among patients with DM.
Method of Assessment and Appraisal of the Articles' Quality
Articles that met the inclusion criteria were systematically analyzed, and their level of evidence was determined and synthesized (Table 2). Concerning these systematic procedures, the findings of this review can be summarized into nursing interventions on improving the QoL of DM patients, which are beneficial as the fundamentals of implementing nursing care in hospital and community settings.
Data Extraction Methods
Data extraction was performed by reading all of the selected articles' findings and then summarizing their main contents. The main contents of the articles included the title of the study, authors' details, study methods, number and characteristics of samples, and the number of intervention and control groups (Table 3).
RESULTS
A total of 30 articles defining the types of nursing intervention for improving the QoL of DM patients that met the inclusion and exclusion criteria were retrieved from various types of study methods, including RCT, quasi-experiment, and cross-sectional designs. Those selected articles were then systematically analyzed. Five types of nursing interventions that contributed to improving the QoL of DM patients were identified: health education (15 articles), exercise (8 articles), WhatsApp/short message service (WA/SMS) gateway (3 articles), blood glucose control (3 articles), and black garlic herbal therapy (1 article). The research locations of the articles were diverse, such as the USA, China, Indonesia, India, Saudi Arabia, England, Spain, Singapore, Jordan, Malaysia, Turkey, Romania, Brazil, Pakistan, Thailand, Canada, Kuwait, Netherlands, and Iran.
Health Education
An investigation study [7] described that the QoL of DM patients was improved after the implementation of a health education intervention. Health education was provided with an empowerment process model. The intervention group received a 6-week empowerment-based transitional care program, emphasizing personally meaningful goals, facilitated collaborative partnerships, shared decision-making, and an approach through situational reflection, while the control group received health education twice, after discharge from the hospital and during routine visitation assessment and management. Other findings of that study also mentioned the level of empowerment and decreased DM status. Furthermore, another published study [8] highlighted that health education has an effective impact on blood glucose control in the intervention group. During the intervention, blood glucose levels in the intervention group were well maintained, while in the control group, there was no alteration. Furthermore, a study [8] underlined that diabetic foot ulcers (DFUs) are considered a major complication of DM, which strongly impacts patients' QoL. The aim of the study was to determine the QoL of DM patients (HRQoL) and the management of DFUs. The results showed that HRQoL scores improved in DM patients who received foot care management education. Research on diabetic ulcer patients, using the DFS-SF questionnaire (Diabetic Foot Scale-Short Form-8), determined that patients with diabetic ulcers had low mean QoL scores on the physical and mental components, and men had a significantly higher DFS-SF score than women. Moreover, a study about complementary and alternative medicine (CAM) in DM patients, with inclusion criteria of age, gender, family income, occupation, length of suffering, and complications, showed that the use of CAM was relatively high, so it was concluded that the development of CAM management is necessary [9]. The increasing incidence of DM is a rising concern in the health sectors worldwide, including in Saudi Arabia. In a research study [10], the SF-36 questionnaire was implemented to determine random sugar levels in the community. It underlined that the incidence of DM affects the QoL of adolescents, but the domain of physical function is the most influential aspect impacting social activities. Moreover, an investigation study on a 6-month community-based program improved the HRQoL of mental health and self-management domains and reduced depressive symptoms in adults with type 2 diabetes in the intervention group [11]. In addition, a study on the effect of lifestyle interventions on diabetes medical care costs that investigated two groups of patients highlighted that the group without the lifestyle counseling intervention spent more money on diabetes treatment than the lifestyle modification counseling group. There was a significant difference in hospitalization costs, with a significant p-value of 0.0038. The difference in the cost of hospitalization in DM patients with surgery due to DM complications was higher in the group without lifestyle counseling intervention (p-value 0.0111). Significant improvement was also found in the quality of life domain, which included physical, emotional, and social functioning [5]. Along with physical activity and sports, the Prolanis group was also given education, with a focus on DM nutrition diet, amount, and type of food, as well as regularity in medication and blood sugar measurement.
The findings revealed that there was a difference in blood sugar levels between the intervention and control groups with a p-value of 0.008, and the Prolanis exercise activities were applied to treat type 2 diabetes patients at the Padangmatingi Public Health Center in Padangsidimpuan.

Table 3. Summary of the reviewed articles (selected entries).

"The Effects from Physical Exercise on the Blood Glucose Levels, HbA1c and Quality of Life of Type 2 Diabetes Mellitus Patients: A Systematic Review". Objective: to explore the benefits of resistance training, aerobic exercise, and a combination of both on HbA1c, blood sugar levels, and quality of life in T2DM patients; several studies and recommendations for the management of T2DM promote physical activity. Findings: the clinical situation and the patient's specific physical fitness must be taken into consideration when choosing the type and intensity of exercise for the therapy of T2DM; more research is required to evaluate the combined benefits of resistance training and aerobic exercise on glucose, HbA1c, and the quality of life of T2DM patients adjusted for different age categories.

IMB intervention study. Objective: to combine the IMB intervention with protection motivation theory to help type 2 DM patients become more psychologically resilient, improve their quality of life, and lower their blood glucose levels. Findings: prior to the study, there was no measurable difference between the treatment and control groups (p > 0.05); following the intervention, the treatment group's blood glucose level and depression score were substantially lower than those of the control group (p < 0.05), and their psychological resilience and quality of life were significantly greater; within the treatment group, the blood glucose level and depressed mood scale decreased (p < 0.05), and psychological resilience and quality of life considerably improved (p < 0.05); the control group's blood sugar levels, depression, psychological resilience, and quality of life did not differ significantly before and after the treatment (p > 0.05).
6. Rosyid (2020) [46], "Fasting Blood Glucose Levels Associated with Quality of Life in Diabetic Foot Ulcer Patients". Objective: to evaluate the relationship between fasting blood glucose (GDP) levels and quality of life in people with diabetic foot ulcers. Findings: GDP levels were substantially correlated with quality of life (p = 0.04); blood sugar levels are strongly correlated with patients' quality of life when they have diabetic foot ulcers.

7. Sesaria (2020) [47], "Mobile Smartphone Intervention for Managing Glycaemia Control in Patients with Diabetes Mellitus: A Systematic Review". Objective: the effectiveness of a mobile smartphone application for regulating blood sugar levels in people with diabetes. Findings: people with diabetes mellitus who used mobile phones had lower HbA1c and fasting blood glucose levels; the results support the effectiveness of the mobile diabetes intervention in maintaining glycemic control in diabetes mellitus patients.
8. "Health-Related Quality of Life and Self-Care Management Among People With Diabetic Foot Ulcers in Northern Thailand". Objective: to examine how patients with DFUs manage their foot care and their HRQoL; foot care is crucial in the self-care management of people with DFUs, and the HRQoL and self-care behaviours of people with DFUs in Northern Thailand had never been studied before. Findings: improving HRQoL requires tailored, targeted foot care instruction that includes self-care management techniques.

9. Alrub (2019) [49], "Factors Associated with Health-Related Quality of Life among Jordanian Patients with Diabetic Foot Ulcer". Method: two self-administered questionnaires, the Diabetic Foot Scale-Short Form (DFS-SF) and the Short Form-8 (SF-8), were used to measure health-related quality of life. Findings: patients with diabetic foot ulcers had low mean DFS-SF scores and low mean scores on the physical and mental component summary scales (PCS8 and MCS8); males had considerably higher DFS-SF scores than females, indicating a superior health-related quality of life (p-value 0.03).
10. Joeliantina (2019) [9], "A Literature Review of Complementary and Alternative Medicine Used Among Diabetes Mellitus Patients". Description: along with conventional treatment, some diabetic patients also use complementary and alternative medicine (CAM) to preserve their health and regulate their blood sugar. Findings: depending on the patients who use it, CAM may be classified as complementary medicine, alternative medicine, or integrative medicine; patients with chronic illness frequently use CAM.

11. Ibrahim (2019) [10], "Diabetes Prevalence and Quality of Life of Female Nursing Students". Objective: to evaluate the impact of diabetes mellitus on the quality of life of female nursing students at the King Saud bin Abdulaziz University for Health Sciences (KSAU-HS) in Riyadh. Findings: the measured quality-of-life domains of people with diabetes mellitus were negatively impacted by constraints on activities, emotional health issues, and social activities.

12. Almaramhi (2018) [50], "The Correlation of Fasting Blood Glucose Levels with the Severity of Diabetic Foot Ulcers and the Outcome of Treatment Strategies". Objective: the relationship between fasting blood sugar levels, DFU grades, and the effectiveness of proposed treatment methods.

13. Nagaraja Y (2018) [17], "Discovering the Benefits of Yoga and Improving Quality of Life". Objective: the advantages of consistent yoga practice and the enhancement of quality of life; there is evidence that yoga practice increases both physical and mental capabilities. Findings: yoga is a type of mind-body exercise that combines physical exertion with a conscious interior focus on awareness of the self, the breath, and energy; yoga is very helpful for its advantages in enhancing quality of life.

Medication adherence study. Findings: healthcare practitioners should pay close attention to how their patients take their medications and make an attempt to explain to them the advantages of doing so; in the study, patients' HRQoL was subpar, with mean scores of 0.48 and 0.36; in a crosstabulation analysis, significant relationships were reported between HRQoL and age, disease duration, number of prescription medications, medication adherence, and treatment satisfaction (p < 0.05); the significant variables were added to the model after it passed the Omnibus Test of Model Coefficients (chi-square = 12.983, p = 0.030, df = 4) and showed a considerable goodness of fit.

Aromatherapy massage study. Objective: the impact of treatment on the quality of life (QoL) and degree of neuropathic pain of patients with painful diabetic neuropathy. Findings: aromatherapy massage is a quick and efficient non-pharmacological nursing intervention to alleviate neuropathic pain and enhance QoL in patients with acute neuropathy.
21. Teston (2017) [55], "Effect of the Consultation of Nursing on Knowledge, Quality of Life, Attitude towards Disease, and Self-Care among Persons with Diabetes". Objective: to examine how self-care-based nursing consultation affects the knowledge of and attitudes toward the disease, the quality of life, and the adherence to self-care rituals of patients with type 2 diabetes mellitus (DM). Findings: the intervention group revealed a significant shift in education about diabetes (p < 0.001), the impact of the condition on life satisfaction (p = 0.002), approach toward the condition (p = 0.024), and self-care activity compliance (p < 0.001); the nursing consultation on encouraging self-care has a beneficial influence on knowledge, attitude, and commitment to self-care activities, and also has an increased impact on quality of life.

22. Duration-of-illness study. Objective: to determine the association between the duration of chronic diabetes mellitus and the quality of life of those who have it. Findings: the p-value < 0.05 indicates that there is a connection between the duration of suffering and the standard of living of DM patients at the external medical centre of RSUD Prof. Dr. Wahidin Sudiro Husodo.
23. Cai (2016) [56], "Effect of Exercise on the Quality of Life in Type 2 Diabetes Mellitus: A Systematic Review". Objective: to evaluate the quality of life of type 2 diabetic patients before and after physical activity; exercise is one of the crucial diabetic therapies. Findings: exercise was divided into four modes: aerobic, resistance, a combination of aerobic and resistance, and yoga; aerobic exercise showed a significant effect between groups; resistance and combined exercise showed mixed results; yoga also showed good intervention effects on quality of life.

24. Kueh (2016) [39], "The Effect of Diabetes Knowledge and Attitudes on Self-Management and Quality of Life among People with Type 2 Diabetes". Method: the Summary of Diabetes Self-Care Activities (SDSCA) scale, the Diabetes Knowledge Scale (DKN), the Diabetes Integration Scale-19 (ATT19), and the Diabetes Quality of Life (DQoL) scale were used to measure, respectively, diabetes knowledge, attitudes, self-management, and QoL. Findings: diabetes knowledge was related to behaviors in the areas of monitoring blood glucose levels and foot care; attitudes significantly predicted the impact dimension of QoL; diet was a strong predictor of the impact of QoL, and glucose control was a major correlate of the impact of QoL; aspects of self-management, such as exercise and foot care, were highly predictive of the satisfaction and impact dimensions of QoL, respectively.

Thai self-efficacy study. Description: the antecedent constructs of diabetes treatment self-efficacy and diabetes knowledge have been found to be closely related to effective diabetes self-management. Findings: in the analysis, 52.4% of the patients had blood sugar that was out of control (HbA1c > 7%); the bivariate analysis revealed that all three psychometric measures (DK, DMSE, and DSM) were related to blood glucose control; blood glucose control was found to be substantially correlated with diabetes management self-efficacy in the Thai type 2 diabetic population.
"SMS-based Reminder System Design to Improve Medication Adherence of Diabetes Mellitus Patients". Description: sending SMS reminders to patients as part of the delivery of health services is one technology-based approach to enhancing drug adherence. Findings: since the reminder system report suggests that 81.3% of participants attended the clinic after getting a reminder SMS, the proposed notification system can be advised as a technique to increase treatment adherence of patients with diabetes mellitus.
29. Power (2020) [59], "Diabetes Self-management Education and Support in Adults with Type 2 Diabetes: A Consensus Report of the American Diabetes Association, the Association of Diabetes Care and Education Specialists, the Academy of Nutrition and Dietetics, the American Academy of Family Physicians, the American Academy of PAs, the American Association of Nurse Practitioners, and the American Pharmacists Association". Description: Diabetes Self-Management Education and Support (DSMES) addresses the full variety of clinical, intellectual, social, and psychosocial aspects of care necessary for everyday self-management and lays the groundwork for assisting all people with diabetes in navigating their daily self-care with optimism and improved outcomes. Findings: the clinical, psychosocial, and behavioral facets of diabetes are all improved by DSMES; DSMES provides ongoing support to encourage the accomplishment of individual goals and the best results; DSMES has been shown to have numerous advantages, however, only a small percentage of persons with diabetes are actually referred for and receive it.

30. Si (2019) [60], "Lactobacillus bulgaricus Improves the Antioxidant Capacity of Black Garlic in the Prevention of Gestational Diabetes Mellitus: A Randomized Control Trial". Description: black garlic's antioxidant activity may be enhanced by Lactobacillus bulgaricus to help prevent gestational diabetes mellitus (GDM); L. bulgaricus was used in the preparation of the black garlic. Findings: the treatment lasted 40 weeks; blood glucose levels were measured following an oral glucose tolerance test (OGTT), fasting and at 1 and 2 hours (FBG, 1hBG, and 2hBG); the antioxidant efficacy of black garlic in GDM patients was assessed by measurements of plasma malondialdehyde (MDA), superoxide dismutase (SOD), glutathione peroxidase (GSH-PX), and total antioxidant capacity (T-AOC); two-independent-samples t-tests were used to compare the two groups; L. bulgaricus decreased the rates of FBG, 1hBG, and 2hBG as well as the incidence of fetal problems.
Exercise
The findings of the study on the Prolanis exercise intervention on blood sugar levels in DM patients showed a significant result in reducing blood sugar levels (p-value: 0.001). Before and after the intervention, the average blood sugar levels were, respectively, 150.31 mg/dl (SD 24.730 mg/dl) and 127.33 mg/dl (SD 14.764 mg/dl). Prolanis exercise has the benefit of preventing obesity by burning calories within the body. During Prolanis activity, blood glucose can be used as energy [12]. Furthermore, a meta-analysis study about yoga intervention revealed that yoga significantly affected the physical function of DM patients (5% significance level). The physical function of the intervention group that was actively implementing yoga was better than that of the control group that was not [13]. Yoga was also found to improve the QoL in physical and social domains [14]. Aerobic physical exercise has a greater impact on blood sugar, HbA1c, and QoL. Aerobic exercise, resistance training, and a combination of both activities have benefits in reducing blood glucose and HbA1c levels, as well as improving the QoL of patients with type 2 diabetes [15]. Another study on aerobic exercise found that the social and environmental domains in the peer group and the environmental domain in the yoga group significantly increased the QoL scores. Moreover, 96% of the intervention group's members reported feeling the program's benefits. The main goal of treating chronic DM is to improve well-being and achieve QoL [16]. The concept related to the QoL of DM patients is well-being, which assesses the positive aspects of an individual's life, such as positive emotions and life satisfaction. Well-being measurement assesses a positive evaluation of a person's daily life, such as healthier and satisfied feelings or satisfaction with the quality of good relationships, positive emotions, resilience, and realization of personal potential. Yoga practice aims to improve the quality of life by modifying the level of fitness in a way that is safe and suitable for all ages [17]. A research study that combined the Sun and Yang and Kai Mai styles of Tai Chi and Qigong came to the conclusion that Tai Chi can enhance quality of life. It was concluded that Tai Chi might play a significant role in encouraging individuals to become more spirited and physically active. Tai Chi is a gentle activity that has the ability to put people at ease, and the intervention demonstrated a positive impact on the quality of life of DM patients [7]. Moreover, a study conducted on patients with type 2 diabetes showed that yoga contributed to increasing QoL, while a study on type 2 DM that focused on metabolic factors and glycemic control after yoga showed significant differences in pain disturbance, the Fullerton Advanced Balance scale, upper limb strength, lower limb strength, and QoL scores [18]. Yoga requires less equipment and is relatively inexpensive. Once trained, patients can practice at home, thus leading to long-term adherence. Yoga is a relatively low-cost intervention to reduce stress and improve the QoL of patients with type 2 diabetes. By supporting stress resilience, yoga can prevent stress-induced increases in cortisol, thereby controlling the elevation of blood sugar levels [19]. It has been reported that Tai Chi statistically improved QoL as measured by the SF-36 in each domain of physical function [20]. Furthermore, aromatherapy is a complementary therapeutic method that uses essential oils for therapy. Aromatherapy in several studies has been shown to have benefits for pain, anxiety, depression, cognitive function, and sleep disorders in the elderly. The study on the effect of aromatherapy and massage on neuropathic pain and the QoL of DM patients demonstrated a significantly increased quality of life score in the intervention group in the fourth week of the study [21]. In addition, research on the efficacy of Pilates-based mat training programs on the parameters of QoL, sleep quality, and satisfaction in type 2 DM patients concluded that Pilates-based mat training has a significant effect on improving the QoL parameters after a 4-week training program. This study also revealed the significant effect of an exercise intervention on QoL, sleep quality, and life satisfaction [4]. Exercise is part of a planned, structured, and repetitive physical activity of body movement to improve and maintain one or more physical components. Exercise can improve sleep quality and prevent chronic diseases [7].
Reminder WA/SMS Gateway
This current review found 3 articles describing the effect of the WhatsApp (WA)/Short Message Service (SMS) gateway on the QoL of DM patients. These studies described the uses of diabetes self-management smartphone applications on both iOS and Android smartphones, which aimed to manage diabetes through self-management and glycemic control in DM patients. Smartphone applications and their varied features have shown a significant effect on decreasing HbA1c and blood glucose in DM patients. The results of this systematic review found that the implementation of mobile smartphone applications led to a decrease in HbA1c and fasting blood glucose in patients with DM [3]. Mobile health interventions are an evolving research area for changing behavior among patients with chronic diseases. This study showed that interactive text messaging was a viable and enjoyable intervention among a population of adolescents and young adults with DM. Based on participant feedback in the TEACH intervention group, a significant improvement was reported by comparing enrollment scores with 3-month follow-up scores in patients' activity (p-value = 0.01) and quality of life (Global Mental Health p-value = 0.01 and Global Physical Health p-value = 0.03) [22]. The study on the SMS intervention found that SMS-based self-management support programs contributed to the improvement of glycemic control. The effect of the intervention was also seen in foot care behavior and diabetes support levels. This program showed significantly high acceptance in the intervention group, with the majority of participants experiencing a decrease in HbA1c at nine months [23]. This application may be a standalone intervention but can also be used as part of a more comprehensive program. The possible limitations of this intervention might be considered for large-scale delivery. This study provided evidence of the effectiveness of a newly developed smartphone application designed to trigger diabetes self-management [24]. This smartphone application is easy to use for adults and allows a quick assessment of whether personal habits match the recommended healthy lifestyle in terms of nutrition and physical activity. The broad use of mobile health technologies, such as m-health, is promising in providing high validity in diabetes self-care. M-health is a virtual communication tool that can offer psychological support to encourage and facilitate changes in diabetic self-care. It is versatile and quickly adjusts to varied cultural norms. In comparison to standard care without the use of biofeedback devices or software, an additional motivational interview conducted using m-health in Kuwait improved glycemic control. Since the dates intervention is non-pharmacological, negative side effects are anticipated to be limited [25]. The goal of the trial was to determine whether personalized text messages in addition to conventional lifestyle recommendations could slow the progression of prediabetes to type 2 diabetes in two different settings (India and the UK). If the results are positive, text messages can provide a low-cost and far-reaching modality in addition to diabetes prevention programs globally. Diabetes prevention using intervention and intensive lifestyle monitoring has achieved reductions in the cumulative incidence of diabetes of 36% and 65%, respectively [26]. All of the described mHealth interventions were tested in RCTs, and eight out of the thirteen interventions showed clinically and statistically significant results. Five interventions, however, had no results or differences in HbA1c reduction between the intervention and control groups of less than 0.5% (5.5 mmol/mol). These interventions include anything from new developments in glucose monitoring and insulin bolus calculators to health education and lifestyle changes. Their components also include communication with distant clinicians using the telemedicine model, educational information, self-monitoring, and automated messages that provide inspiration, education, and feedback. The mHealth intervention's design should take into account input from patients and doctors on lifestyle or workflow integration, as well as usability and contents [27]. In addition, a study [28] highlighted that participants who received a smartphone-based self-management intervention had improved self-efficacy with a large effect size of 0.98 (p < 0.001), self-care activities with an effect size of 0.90 (p < 0.001), health-related quality of life with an effect size of 0.26 (p < 0.01), and lower glycated hemoglobin (pooled MD = -0.55; p).
Glucose Control
Cardiovascular events are less likely in people with intensive glycemic control than in those who receive standard medication over an extended period of time. The findings demonstrated that better glucose management decreased the frequency of diabetic microvascular problems. Although follow-up of the advantages of rigorous glucose control helped improve QoL, it did not demonstrate a substantial decrease in the incidence of cardiovascular disease and mortality in diabetes patients. Nearly all of the association between glucose control and major cardiovascular events is explained by the mean glycated hemoglobin level over the past 3 years [29]. Only 1 patient (2%) showed good QoL with controlled fasting blood glucose levels in the category of hyperglycemia [30]. This is in line with the results of a study describing that most of the patients with controlled fasting blood glucose levels in the hyperglycemia category showed a good QoL. The results of the distribution of respondents in terms of fasting blood glucose levels in the emergency room were in accordance with research conducted previously [31], where uncontrolled glycemic control led to long-term complications and stress and impacted the quality of life among DM patients [32]. Therefore, glycemic control is very important to achieve target blood glucose levels and encourage patients to improve QoL. Considering the importance of QoL in diabetic patients, self-monitoring of blood glucose levels involves self-care behaviors that are assessed from adherence to four dimensions, such as nutrition, activity, medication, and glucose control [33].
Black Garlic Herbal Therapy
Hyperglycemia, a hallmark of type 2 DM, and dyslipidemia are linked to a higher risk of complications in diabetic patients. Antioxidants may be helpful in preventing diabetes complications, according to mounting research. Antioxidant minerals and phytochemicals have been reported to help lower blood sugar levels and prevent diabetes-related problems. Garlic has been shown in several trials to have a hypoglycemic impact (Lee et al., 2009). According to a recent study, fermented black garlic products include a high concentration of polysaccharides, phenolic compounds, organic sulfur compounds, proteins, and melanoidins, all of which have numerous health advantages (Zhang et al., 2019). In streptozotocin-induced diabetic rats, consumption of an 80% ethanol extract of garlic decreased serum glucose levels, and injection of the extract delayed the onset of hyperglycemia and structural nephropathy. As it is processed at a controlled temperature and humidity, aged black garlic, which has lately become available on the Korean market, is an example of a garlic product that is anticipated to have a strong antioxidant capacity (Jing, 2020). Aged black garlic consumption significantly lowered the homeostasis model assessment for insulin resistance (HOMA-IR) by 11.0% and tended to lower serum glucose levels. In db/db rats, a 7-week diet dramatically elevated insulin levels by 12.1% and lowered serum glucose by 8.7% (Seo et al., 2009). Black garlic also normalizes plasma lipid imbalance and increases fibrinolytic activity (Setiawan et al., 2021). In rats given a high-fat diet, the consumption of black garlic enhanced adiponectin and downregulated PAI-1, thus improving insulin resistance (Nurmawati et al., 2021). The capacity of black garlic to neutralize free radicals, such as hydroxyl radicals, 2,2-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), and DPPH, was enhanced by L. bulgaricus (Si et al., 2019). This is associated with an improvement in gut microbial composition, which has been shown to decrease blood sugar levels and limit body weight in diabetic patients. By lowering lipid and glucose levels, butyrate synthesis by the gut bacteria may be one of the key mechanisms regulating energy metabolism. The potent antioxidant content is beneficial for protecting the heart against atherosclerosis, where the presence of SAC has the potential to provide anti-inflammatory benefits and inhibit endothelial vasodilation (Setiawan et al., 2021).
DISCUSSION
The types of nursing interventions that have been proven to improve the QoL of DM patients are discussed in this review. First, exercise is effective in burning and converting sugar into energy and can improve the QoL of DM patients. Subsequently, the WA/SMS gateway is capable of reducing HbA1c and blood glucose as well as supporting behavioral changes among patients with chronic disease. Various nursing interventions are indeed proven to significantly improve QoL and reduce blood glucose levels and anxiety levels in DM patients. DM is viewed as a significant public health issue that negatively impacts sufferers' QoL. The high prevalence has a negative impact on the patient's health and causes a number of issues with low quality of life. From a self-evaluation of the consequences of diabetes care, one of the most significant elements influencing treatment results is the quality of life (QoL). Patients' individual expectations, attitudes, behaviors, and information regarding specific diseases have a significant impact on their quality of life [32]. Diabetes self-management education (DSME) is a crucial program that helps patients maintain regular blood sugar control and has been shown to be a successful strategy. The curriculum for all prepared content includes information on the fundamentals of diabetes, the value of self-management, and self-care. It is conducted over the course of several sessions and includes presentations, discussions, demonstrations, and self-care. The DSME program has a beneficial impact on enhancing the health status of type 2 diabetes patients. For diabetic patients to improve QoL, exercise is a powerful and motivating tool. Self-management of diabetes and blood sugar levels was demonstrated by an intervention trial in Thailand [34]. Prior to the intervention, there were no appreciable differences in the participants' age, education, blood sugar monitoring behavior, health examination, knowledge, self-care, stress, or hemoglobin HbA1c (p > 0.05) between the two groups' baseline characteristics. However, type 2 DM patients in the intervention group demonstrated significant changes (p < 0.05) in the HbA1c serum test, stress levels, and QoL following the intervention. While the control group's results remained unaltered, there was no statistically significant difference (p > 0.05) between the two groups. The results of the study showed that DSME improved the QoL of adult female patients with type 2 diabetes by lowering blood glucose levels, reducing tension, and lowering anxiety. Therefore, patients with diabetes may benefit from this intervention [32]. Diabetes is a difficult and complex condition that people must manage on a daily basis. A foundation is provided by DSMES to help DM patients navigate their daily self-care with confidence and respectable results. It addresses the combination of clinical, educational, psychosocial, and behavioral aspects of care needed for daily self-management [35]. Encouraging patients to use DSMES helps to implement informed decision-making, self-management behaviors, problem-solving, and active collaboration with healthcare providers to improve critical outcomes, health status, and QoL. The goal of DSMES is to provide DM patients with the knowledge, skills, and confidence to accept responsibility for self-management of type 2 DM. DSMES has various benefits, including clinical, psychosocial, and behavioral outcomes, in type 2 DM patients, such as improving QoL, healthy life planning, and patient involvement in regular physical activity [35, 36].
A technology-based strategy to improve patients' behavior in DM treatment provides controlled treatment services to patients at home using mobile smartphone technology. An SMS-based reminder system was tested in the Netherlands to determine the effect of SMS reminders on patient adherence to oral antidiabetic drugs, using real-time medication monitoring. The study proved that SMS reminders were effective in increasing the treatment adherence of type 2 DM patients, with the aim of preventing patients from dropping out of treatment. The reminder system application is open source and proposed as a non-profit reminder service that delivers important information quickly, inexpensively, and accurately to the target group's cellphone numbers. The reminder system produces monitoring reports, visiting reports, and reminder reports that help in monitoring outpatient DM patients [37].
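As an illustration of how such a reminder system can work, the following is a minimal sketch in Python; the dose schedule, grace window, and send_sms() gateway call are hypothetical placeholders, not the actual system evaluated in the Netherlands:

from datetime import datetime, time

DOSE_TIMES = [time(8, 0), time(20, 0)]  # assumed twice-daily oral antidiabetic schedule
GRACE_MINUTES = 60                      # remind if no intake is logged within this window

def send_sms(phone, message):
    # Placeholder for a real SMS gateway call
    print(f"SMS to {phone}: {message}")

def check_and_remind(phone, intakes_logged_today, now):
    """Send a reminder for each scheduled dose that is overdue and unlogged."""
    for dose in DOSE_TIMES:
        overdue = (now.hour * 60 + now.minute) - (dose.hour * 60 + dose.minute)
        taken = any(
            abs((t.hour * 60 + t.minute) - (dose.hour * 60 + dose.minute)) <= GRACE_MINUTES
            for t in intakes_logged_today
        )
        if overdue > GRACE_MINUTES and not taken:
            send_sms(phone, f"Reminder: your {dose.strftime('%H:%M')} medication dose has not been logged.")

# The 08:00 dose was logged at 08:15, so only the missed 20:00 dose triggers an SMS
check_and_remind("+31600000000", [time(8, 15)], datetime(2023, 1, 5, 21, 30))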
According to an updated investigation, non-pharmacological therapy has a positive effect in reducing blood sugar levels, fasting blood sugar levels, HbA1c, and excessive use of diabetes drugs in type 2 DM patients. The results showed that exercise therapy is a promising intervention for type 2 DM management [38]. The prognosis of DM affects patients' QoL, as most of them have been suffering for years. It causes patients to feel restless and hopeless about treating the disease, especially patients who experience DM complications, which can have a negative impact on their QoL. In addition, due to low knowledge about the DM prognosis, patients mostly do not understand the need for treatment; therefore, they feel hopeless about what to do, and this affects their QoL [15]. Type 2 DM self-management is not limited to education: with simultaneous and regular exercise, type 2 DM patients achieve a higher level of satisfaction with QoL compared to patients with less exercise management [39]. The higher incidence of type 2 DM can be associated with adverse lifestyles, such as lack of physical activity and obesity. A previous study showed that exercise is effective for type 2 DM management; however, the optimal intensity of exercise to prevent type 2 DM progression remains to be investigated. A prior RCT found that moderate-intensity exercise was effective in improving blood glucose tolerance. Regular exercise can improve insulin sensitivity and help to control blood glucose levels and lose weight. Traditional Chinese exercises such as Qigong, Tai Chi, and Baduanjin, as well as Indian Yoga, which combine body regulation and breathing with unifying body movements, have been frequently used by type 2 DM patients [38]. Based on the preceding study, exercise can increase the QoL of DM patients with peripheral neuropathy. Diabetic peripheral neuropathy can cause leg pain, ulcers, and amputation, which seriously affect QoL. Exercising with family can encourage DM patients to exercise, improve glycemic control, and foster a positive attitude toward life. Moreover, it can make patients feel better physically and psychologically and improve their social relationships [7].
A prior prospective diabetes study revealed that intensive glucose control can significantly reduce cardiovascular risk [40]. Self-monitoring glucose control (SMGC) is the standard of care in DM patients with continuous glucose monitoring. The SMGC monitoring system has added advantages for DM patients in supporting clinical decisions and alerting them to hypoglycemic events, with results integrated every 2 weeks. The integration of the results of the SMGC system provides a more complete description of glucose control throughout the week; therefore, it is useful in reducing complications in DM patients [41].
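To illustrate the kind of fortnightly integration such a system performs, the following is a minimal sketch in Python; the 70-180 mg/dL target range and all readings are illustrative assumptions, not the system's actual specification:

HYPO_THRESHOLD = 70  # mg/dL, a commonly used hypoglycemia cutoff

def weekly_report(readings_mg_dl):
    """Summarize one week of CGM readings into basic glucose-control metrics."""
    hypo_events = [g for g in readings_mg_dl if g < HYPO_THRESHOLD]
    in_range = [g for g in readings_mg_dl if 70 <= g <= 180]
    return {
        "mean_glucose": sum(readings_mg_dl) / len(readings_mg_dl),
        "time_in_range_pct": 100.0 * len(in_range) / len(readings_mg_dl),
        "hypo_event_count": len(hypo_events),
    }

week1 = [110, 95, 150, 65, 180, 140, 100, 98, 160, 72]  # synthetic readings
week2 = [105, 88, 145, 130, 120, 99, 68, 150, 110, 95]

for i, week in enumerate([week1, week2], start=1):
    report = weekly_report(week)
    print(f"Week {i}: {report}")
    if report["hypo_event_count"] > 0:
        print(f"  Alert: {report['hypo_event_count']} reading(s) below {HYPO_THRESHOLD} mg/dL")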
Moreover, complementary and alternative medicine studies have reported the use of black garlic (BG) (Allium sativum) as an effective complementary intervention to prevent diabetes complications, since it has been tested and used to prevent hyperglycemia in diabetic patients. BG develops a high antioxidant content when exposed to humidity and fermentation, and its antioxidant activity is influenced by the way it is processed. Allicin is an unstable compound in garlic that is converted into stable compounds during the aging process, producing a stronger antioxidant content that can be used in treatment management for type 2 DM patients [42]. Recent studies have reported that bioactive compounds in BG have various biological activities and pharmacological properties that show effectiveness in preventing various types of diseases. Most of the benefits of BG can be attributed to its antioxidant, anti-inflammatory, anti-obesity, anti-cancer, anti-allergic, and hypolipidemic properties. With the increasing prevalence of chronic diseases such as diabetes, health has become a top research priority, with the aim of finding new foods and tactics to address the health burden on patients [43]. Garlic has been used throughout the world as a traditional medicine to treat several disorders, such as rheumatism and diabetes [44].
CONCLUSION
This systematic review concluded that nursing intervention methods, such as exercise, the WA/SMS gateway, health education, BG herbal therapy, and glucose control, are proven to improve the QoL of DM patients. It also found that these nursing interventions are widely used in Asian, American, European, and African countries. The application of these types of interventions is relatively easy and low cost and has been supported by adequate training facilities and instructors. Health education is the most frequently implemented nursing intervention; moreover, a personal approach has become a trend. However, further research needs to be conducted in the future.
STANDARDS OF REPORTING
PRISMA guidelines and methodology were followed.
The PRISMA checklist is available on the publisher's website.
Fig. (1). PRISMA flowchart of screening articles on interventions for improving quality of life.
Model Based on Protection Motivation Theory on the Psychological Resilience and Quality of Life of Patients with Type 2 DM (2020)
Table 2. Data extraction and assessment of articles' quality. Columns: No; Author (Year); Method; SMS Gateway; Education; Glucose Control; Exercise; Black Garlic; Level of Evidence; Summary; Quality of Article.
Annotation: RCT: Randomized Controlled Trial; SR: Systematic Review; MA: Meta-Analysis; CS: Cross-Sectional; QE: Quasi-Experimental; ES: Experimental Study.
"year": 2024,
"sha1": "d143223386e3c103803d39457c33c6ce83981cc0",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc6faa12701ccb196015d0834c8a9a9398071bba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |