Preparation and Application of Molecularly Imprinted Monolithic Extraction Column for the Selective Microextraction of Multiple Macrolide Antibiotics from Animal Muscles
This study aimed to prepare a molecularly imprinted monolithic extraction column (MIMC) inside a micropipette tip by in situ polymerization with roxithromycin as the dummy template. The polymers possessed excellent adsorption capacity and class-specificity toward multiple macrolide drugs. The MIMC was directly connected to a syringe for template removal and for the optimization of extraction conditions without any other post-treatment of the polymers. A liquid chromatography-tandem mass spectrometric method based on MIMC was developed for the selective microextraction and determination of macrolide antibiotics in animal muscles. High recoveries of 76.1–92.8% for six macrolides were obtained with relative standard deviations less than 10.4%. MIMC exhibited better retention ability and durability than the traditional C18 and HLB cartridges. The proposed method shows great potential for the analysis of macrolide drugs at the trace level in animal foodstuffs.
Introduction
Macrolide antibiotics (MALs), a group of alkalescent antibiotics produced by Streptomycetes, consist of a large lactone ring (12-16 carbon atoms) and sugar moieties. MALs are used in animal husbandry for therapy and prophylaxis because of their antibacterial activity against gram-positive bacteria and some gram-negative cocci. MALs have been extensively used as feed supplements in industrialized animal production since the discovery, in the 1940s, that antibiotics could increase animal growth rates [1,2]. Despite their benefits to animal production, the incorrect use of these drugs may leave residues in food products, with adverse effects on human health such as allergic dermatitis [3]. Researchers have confirmed that the increase in drug resistance is associated with the massive consumption of MALs in animal husbandry [4,5]. As a result of increasing concerns over drug resistance, regulatory bodies in the European Union [6], the United States [7], and China [8] have established maximum residue limits for MALs in animal products. Besides, spiramycin (SPM), tylosin (TYL), and tilmicosin (TIM), which were previously used as animal growth promoters, are now banned or severely restricted [9,10]. Worryingly, MALs including TYL, TIM, erythromycin (ERY), and roxithromycin (ROX) are still frequently detected in animal foodstuffs due to their wide availability and low cost [11][12][13]. Thus, it is necessary to monitor MAL residues in animal-derived food.
Sigma-Aldrich (St. Louis, MO, USA). The inhibitors in MAA, 4-VP, and EGDMA were removed with activated carbon before use. Acetonitrile (ACN), methanol (MeOH), and formic acid (FA) were purchased from Fisher Scientific (Fairlawn, NJ, USA). Guangzhou Chemical Reagent Factory (Guangzhou, China) supplied the other reagents (analytical grade), including toluene, dodecanol, acetone, and ammonium hydroxide (AM). Oasis HLB (60 mg, 3 mL) and C18 cartridges (60 mg, 3 mL) were purchased from Waters Co. (Milford, MA, USA) and Agilent Technologies Co. (Santa Clara, CA, USA), respectively. A Milli-Q water system (Molsheim, France) was used to produce de-ionized water.
The standards of ERY, TIM, and azithromycin (AZI) were bought from Sigma Chemicals Co. (St. Louis). ROX, SPM, clarithromycin (CLA), and tulathromycin (TUL) were purchased from European Pharmacopoeia (EDQM, Strasbourg, France). The chemical structures of these compounds are shown in Figure S1. The purity of each standard was above 95.5%. Individual stock standard solutions (1000 mg/L) were prepared by weighing 10 mg of each standard into a 10 mL volumetric flask and dissolving it in MeOH. The stock solutions were stored at −20 °C and are stable for six months. Intermediate standard solutions (200 mg/L, 100 mg/L, and 10 mg/L) were prepared by diluting the stock solutions with MeOH. Mixed working solutions were prepared daily by mixing the intermediate standard solutions and appropriately diluting with MeOH.
Preparation of MIMC
Figure 1 illustrates the preparation procedure of MIMC. 1 mmol ROX (template) was dissolved in 12.5 mL of toluene/dodecanol solution (1:6, v/v) in a 50 mL polypropylene tube and sonicated for 5 min. 4 mmol MAA (functional monomer) was added and pre-assembled at 4 °C for 4 h. Subsequently, 20 mmol EGDMA (cross-linker) and 30 mg AIBN (initiator) were added in turn. Under nitrogen protection, 40 µL of the mixture was transferred into a 200 µL micropipette tip, the bottom of which had been sealed beforehand, and the upper part was then sealed with silicone rubber. The polymerization reaction was allowed to proceed in a vacuum chamber at 60 °C for 24 h. After polymerization, the silicone seals on both ends of the pipette tip were removed to obtain the MIMC. The MIMC was connected to a 5 mL syringe, and the syringe was installed on a syringe infusion pump (Baoding Longer Precision Pump Co. Ltd., Baoding, China) for the delivery of loading, washing, and eluting solvents. The template was removed by continually loading MeOH-acetic acid (90:10, v/v) onto the MIMC at a flow rate of 0.2 mL/min until no ROX could be detected by high performance liquid chromatography with evaporative light-scattering detection (HPLC-ELSD). 20 mL of MeOH was then used to remove the remaining acetic acid. NIMC was prepared by the same procedure but without adding the template.
Equipment for Characterization
A ZEISS EVO18 microscope (Jena, Germany) was used to obtain the scanning electron microscope (SEM) images of MIMC and NIMC.
A Micromeritics Gemini VII 2390 surface area analyzer (Atlanta, GA, USA) was applied to evaluate the specific surface area via the BET method.
Fourier-transform infrared spectroscopy (FT-IR) was carried out on a Thermo Nicolet 6700 Fourier transform infrared spectrometer (Thermo Nicolet, Waltham, MA, USA) with anhydrous KBr as the background. The IR spectra were recorded from 500 to 4000 cm−1.
Binding Assays
The polymer particles of MIMC and NIMC were collected and dried before use. The adsorption capacities for single and multiple macrolide drugs (ERY, ROX, CLA, AZI, TUL, TIM, and SPM) were evaluated. 20.0 mg of polymers was incubated with 5 mL of single/mixed standard solutions in ACN (200 mg/L) at 25 °C for 24 h. After centrifugation at 15,000 rpm for 5 min, the free MALs in the supernatant were detected by HPLC-ELSD. The adsorption amounts of MIMC and NIMC were calculated by the equation Q = (C0 − Ce) × V/m, where Q (mg/g) is the adsorption amount at equilibrium, C0 (mg/L) and Ce (mg/L) are the initial and equilibrium concentrations, respectively, V (L) is the volume of the standard solution, and m (g) is the weight of the polymers. The imprinting factor (IF) was introduced to evaluate the specific recognition ability according to the equation IF = QMIMC/QNIMC, where QMIMC and QNIMC (mg/g) are the adsorption amounts of MIMC and NIMC, respectively.
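As a rough illustration of these two expressions, the following sketch (plain Python; the equilibrium concentrations are hypothetical, not measured values from this study) computes Q for MIMC and NIMC and the resulting imprinting factor. The polymer mass is entered in grams so that Q comes out in mg/g.

    # Illustrative only: equilibrium concentrations below are made up, not data from this work.
    def adsorption_capacity(c0_mg_per_l, ce_mg_per_l, volume_l, mass_g):
        """Q = (C0 - Ce) * V / m, in mg of analyte per g of polymer."""
        return (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_g

    def imprinting_factor(q_mimc, q_nimc):
        """IF = Q_MIMC / Q_NIMC; values above ~1.5 suggest specific recognition."""
        return q_mimc / q_nimc

    # 5 mL of a 200 mg/L standard incubated with 20.0 mg (0.020 g) of polymer
    q_mip = adsorption_capacity(200.0, 120.0, 0.005, 0.020)  # hypothetical Ce = 120 mg/L
    q_nip = adsorption_capacity(200.0, 170.0, 0.005, 0.020)  # hypothetical Ce = 170 mg/L
    print(q_mip, q_nip, imprinting_factor(q_mip, q_nip))     # 20.0, 7.5, about 2.67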
Sample Preparation
Chicken, pork, and beef were purchased from local supermarkets (Guangzhou, China), and a previous LC-MS/MS analysis confirmed that these samples contained no MAL residues. 2 g of homogenized sample was weighed into a 15 mL polypropylene centrifuge tube. For the recovery test, 100 µL of the mixed working solution was added to the muscle sample to prepare three spiked levels (5.0, 10, and 25 µg/kg). The spiked samples were kept at room temperature for 30 min before proceeding. 5 mL of ACN was used to extract the MALs with the aid of ultrasonication and shaking for 10 min. After centrifugation at 8000 rpm for 5 min, the supernatant was transferred into a new centrifuge tube. Another 5 mL of ACN was added to the residue to repeat the extraction. The extracts were combined for cleanup.
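The spiking arithmetic implied by this protocol can be sketched as follows (a minimal illustration; it assumes the entire 100 µL aliquot ends up in the 2 g portion and ignores the solvent's contribution to the sample mass).

    # Concentration the 100 uL working solution must have to reach a given spike level in 2 g of muscle.
    SAMPLE_MASS_KG = 0.002   # 2 g
    SPIKE_VOLUME_ML = 0.100  # 100 uL

    for level_ug_per_kg in (5.0, 10.0, 25.0):
        total_ug = level_ug_per_kg * SAMPLE_MASS_KG      # analyte needed, in ug
        working_conc = total_ug / SPIKE_VOLUME_ML        # ug/mL of the mixed working solution
        print(f"{level_ug_per_kg:>5} ug/kg -> {working_conc:.2f} ug/mL working solution")
    # 5.0 ug/kg -> 0.10 ug/mL, 10.0 -> 0.20 ug/mL, 25.0 -> 0.50 ug/mL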
1 mL of ACN was applied to condition the MIMC, and then 5 mL of extract solution was loaded onto the MIMC at a flow rate of 0.2 mL/min. The MIMC was successively rinsed with 1 mL of ACN and 1 mL of water. 1 mL of 2% ammonium hydroxide in MeOH was used to elute the target compounds. The eluate was dried under a gentle stream of nitrogen at 40 °C, and finally the residues were reconstituted with 1 mL of 20% MeOH in water (containing 0.1% formic acid) for LC-MS/MS analysis.
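Laid out as data, the cleanup protocol and the time it implies look roughly as follows (a sketch only; the 0.2 mL/min rate is stated for the loading step, and applying it to every step is an assumption made here purely for the time estimate).

    # (step, volume in mL) for the MIPMME procedure; flow rate assumed constant at 0.2 mL/min
    steps = [
        ("condition: ACN",                         1.0),
        ("load: sample extract in ACN",            5.0),
        ("wash: ACN",                              1.0),
        ("wash: water",                            1.0),
        ("elute: 2% ammonium hydroxide in MeOH",   1.0),
    ]
    flow_ml_per_min = 0.2
    for name, volume in steps:
        print(f"{name:40s} {volume:4.1f} mL  ~{volume / flow_ml_per_min:4.1f} min")
    print(f"total solvent handling time ~{sum(v for _, v in steps) / flow_ml_per_min:.0f} min")
    # loading 5 mL alone takes ~25 min at 0.2 mL/min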
HPLC and LC-MS/MS Analysis
The binding assays and optimization of MIPMME procedures were performed on HPLC-ELSD, according to our previous study [37].
LC-MS/MS was applied for the recovery and reusability tests of MIMC. The LC-MS/MS conditions, including mass parameters, mobile phase, and separation program, were the same as in the reported method [29]. Briefly, an Agilent 1200 HPLC system (Palo Alto, CA, USA) with an Agilent Zorbax SB-Aq C18 column (150 mm × 2.1 mm i.d., 3.5 µm) separated the MALs. The mobile phase consisted of ACN (solvent A) and 0.1% FA in water (solvent B) with the following gradient elution: 0.0-5.0 min, 10-60% A; 5.0-7.0 min, 60-45% A; 7.0-7.01 min, 45-10% A; 7.01-15 min, 10% A. The flow rate was 0.25 mL/min and the injection volume was 10 µL. Mass analyses were performed on an Applied Biosystems API 4000 triple quadrupole mass spectrometer (Foster City, CA, USA) in positive electrospray ionization mode. Table S1 gives the mass parameters of each analyte.
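Read as a program, the stated gradient can be expressed as time/%A breakpoints with linear interpolation between them; the sketch below is an illustration of how the published program reads, not vendor method code.

    # Breakpoints of the stated gradient: (time in min, % solvent A = ACN)
    gradient = [(0.0, 10), (5.0, 60), (7.0, 45), (7.01, 10), (15.0, 10)]

    def percent_a(t):
        """Linear interpolation of %A between the programmed breakpoints."""
        for (t0, a0), (t1, a1) in zip(gradient, gradient[1:]):
            if t0 <= t <= t1:
                return a0 + (a1 - a0) * (t - t0) / (t1 - t0)
        raise ValueError("time outside the 0-15 min program")

    print(percent_a(2.5))  # 35.0, i.e. halfway through the initial 10->60% ramp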
Preparation of MIMC
The volume and type of monomer and porogen are two important factors that can impact the structural, physical, and molecular recognition properties of the polymers [38], so four types of functional monomers (MAA, AM, HEMA, and 4-VP) were evaluated. 1 mL of ROX in ACN (10 µg/mL) was percolated through the MIMC/NIMC and the analysis was performed by HPLC-ELSD. Figure 2 gives the results. With MAA as the functional monomer, a fairly high recovery of ROX (more than 95%) was obtained from MIMC, more than twice that of NIMC, which suggested the high specificity of MIMC. Although HEMA provided satisfactory retention of ROX on MIMC (recovery above 70%), it was not selected for further optimization because its IF value was less than 1.5. MIMC failed to form rigid polymers with 4-VP (an alkaline monomer), leading to poor retention of ROX (less than 50%). As a hydrogen bond donor, the carboxyl group of MAA was more likely to participate in the formation of hydrogen bonds with the abundant hydroxyl groups of the template. As previous studies found [25][26][27], with a 4:1 ratio of MAA to template, the polymers gained high adsorption capacity and specificity toward MALs. Based on this ratio, other polymerization parameters were investigated using single-factor analysis.
Figure 2. Effect of molecularly imprinted monolithic column prepared by different functional monomers on the recovery of roxithromycin.
Generally, the combination of toluene and dodecanol as porogen can produce porous structures during in situ polymerization, which might benefit the permeability of MIMC [36]. Different ratios of toluene to dodecanol (1:4, 1:5, 1:6, 1:7, and 1:8, v/v) were evaluated as porogen candidates. As shown in Figure S2A, the recovery of ROX on MIMC decreased markedly when the proportion of dodecanol exceeded that of the 1:6 ratio, whereas ROX behaved differently on NIMC. This trend could be explained by the fact that a higher ratio of dodecanol in the porogen resulted in soft polymers with low specificity, on which ROX was not well retained. Although toluene-dodecanol (1:4, v/v) provided satisfactory recovery as well as specificity, the loading process was prolonged owing to the decreased permeability of MIMC. As a compromise, toluene-dodecanol (1:6, v/v) was deemed the most suitable porogen. From Figure S2B, it was obvious that increasing the porogen volume did not favor the retention of ROX on MIMC. However, when the volume of porogen was less than 12.5 mL, the polymers were highly rigid, which made the following MIPMME step cumbersome because of the low porosity and permeability. Thus, 12.5 mL of toluene-dodecanol (1:6, v/v) was used as the porogen for further experiments.
EGDMA was used as the cross-linker after the self-assembly of MAA and ROX, followed by the addition of AIBN as the initiator to accelerate the polymerization reaction. The amounts of EGDMA and AIBN were evaluated to obtain a highly specific MIMC. As presented in Figure S2C,D, increasing EGDMA and AIBN brought no significant improvement in the retention of ROX on either MIMC or NIMC. Nevertheless, when EGDMA was less than 15 mmol, the polymers formed tightly packed tiny particles, so the permeability of MIMC was fairly poor. Besides, it was rather difficult to form rigid polymers with less than 20 mg of AIBN. With 20 mmol EGDMA and 30 mg AIBN, we successfully synthesized MIMC with excellent performance (favorable permeability, ruggedness, and adsorption capacity).
Characterization of MIMC
Figure 3 gives representative SEM images of MIMC and NIMC. The surface of MIMC was rougher and more porous than that of NIMC, which could improve the mass-transfer rate and provide larger surface areas for binding compounds. The specific surface areas obtained using the BET model were 191.5 m²/g for MIMC and 123.3 m²/g for NIMC, confirming the presence of more imprinted cavities in MIMC.
The IR spectra of MIMC and NIMC were similar (Figure S3). Absorption bands at around 3543 cm−1 and 2990 cm−1 corresponded to the stretching vibrations of O-H and C-H bonds, respectively. An obvious absorption peak at 1731 cm−1 was attributed to the stretching vibration of C=O, provided by EGDMA and MAA. The weak absorption peak at 1636 cm−1 was assigned to the C=C vibration, which indicated successful polymerization.
Thermogravimetric analysis revealed that the decomposition of MIMC began at 260 °C and was complete at 450 °C (95% weight loss), indicating the outstanding thermal stability of MIMC under extreme conditions.
Binding Assays
Adsorption capacities of MIMC and NIMC toward single and multiple macrolide drugs (spiked concentration: 200 mg/L of each compound in ACN) were investigated. 5 mL of ACN was subjected to the same procedure as a blank reference to confirm that no template bleeding occurred during the whole process. As listed in Table 1, there was a great difference between the adsorption amounts of MIMC and NIMC, resulting from the abundant binding sites in MIMC. The specificity of MIMC seemed to be related to the number of carbon atoms within the lactone ring; that is, MIMC showed higher specificity (IF > 2.0) toward macrolides sharing the template's lactone ring size (ROX, CLA, and ERY), meaning higher affinity of MIMC for analogues whose molecular structures are highly similar to the template. Furthermore, MIMC could recognize multiple macrolides well, with IF values higher than 1.5, revealing class-specificity to some extent. Compared with single-analyte adsorption, there was competitive adsorption among multiple macrolides due to the limited binding sites in MIMC. MIMC showed highly specific adsorption of CLA and ERY (IF > 2.0), because their ring sizes and the spatial arrangement of their glycosidic side chains are highly similar to those of ROX, so they could quickly occupy the imprinted cavities created by the template [29]. These results indicated that not only the lactone ring size of MALs but also the spatial structure of the glycosidic side chains plays an important role in the specific recognition of MIMC.
Packing Volume
The binding capacity of MIMC for MALs heavily relies on the amount of polymer. Generally, the more polymer added, the more MALs could be retained on MIMC. At the same time, more polymer prolongs the loading time, which is time-consuming and reduces the microextraction efficiency. Consequently, the volume of the polymerization mixture packed inside the micropipette tip was optimized. As presented in Figure S4, the recovery of ROX from MIMC rapidly increased with increasing polymerization volume and gradually reached equilibrium when the volume was up to 40 µL. Meanwhile, NIMC provided steadily growing recovery of ROX, suggesting enhanced non-specific adsorption. Thus, 40 µL of polymerization mixture was packed into micropipette tips to synthesize the MIMC.
Loading Solvent
The molecular recognition of MIPs is mainly based on hydrogen bonding interactions between the MIPs and analytes, and such interactions are often more stable in weakly polar media [39]. Organic solvents with various polarities, including MeOH, ACN, and ethyl acetate (EA), which are commonly used to extract MALs from animal tissues, were investigated as loading candidates (fortified concentration: 10 µg/mL of six macrolides in 5 mL of each candidate solution). Considering the possible leakage of the template during LC-MS/MS analysis, ROX was not evaluated in the following experiments. Figure 4A gives the results. Satisfactory recoveries of the target compounds (above 95%) were obtained using ACN and EA as loading solvents. In sharp contrast, MeOH provided low recoveries for the six MALs, especially for SPM (less than 50%), which might be caused by the strong suppression of the imprinted interactions by the highly polar solvent (MeOH). We selected ACN as the loading solvent for further study in view of the easy volatilization of EA and consistency with the extraction solvent.
Washing Solution
The optimization of the washing solution is a crucial step in imprinted extraction: it should eliminate co-extracted impurities with little loss of analytes and simultaneously reduce non-specific adsorption. Several solvents with different polarities (MeOH, ACN, acetone, and water) were evaluated as washing solutions after loading the extract solution. As shown in Figure 4B, MeOH washed off a large fraction of the analytes, and the recoveries were below 50%. It has been reported that polar solvents such as MeOH can disrupt the non-covalent interactions between analytes and MIPs [40]. On the contrary, there was no obvious loss of the six MALs when MIMC was washed with ACN, acetone, or water. However, the recoveries on NIMC were also high (above 80%) when acetone and water were used as washing solvents. Ultimately, MIMC was successively rinsed with ACN and water, which could effectively remove lipid and water-soluble impurities. Under the optimal conditions, non-specific adsorption was reduced, whereas most of the analytes were trapped in the polymer through the specific interactions. The recoveries of the six MALs on MIMC were more than twice those on NIMC.
Eluting Solution
MeOH and different percentages of ammonium hydroxide in MeOH (1%, 2%, 3%, and 4%) were assessed as eluting candidates. The results (Figure S5) revealed that adding AM to the eluting solution tended to improve the elution ability, presumably because of the weak alkalinity of MALs. A higher concentration of AM (>3%) in MeOH did not significantly improve the recoveries of the analytes but extended the evaporation time. Therefore, 1 mL of 2% ammonium hydroxide in MeOH was chosen to elute the MALs from MIMC, and the recoveries were higher than 94.5%.
Class-Specificity of MIMC
The class-specificity of MIMC was investigated through analyzing six MALs and other pharmaceuticals with a large consumption in animal husbandry, including florfenicol (FLO), sulfadimidine (SM2), and valnemulin (VAL). 1 mL of mixed standard solution in ACN (10 µg/mL) was loaded onto MIMC or NIMC, followed by the optimal MIPMME procedure. As illustrated in Figure 5, six MALs were well recognized by MIMC, with their recoveries being higher than 88% due to the complementation between target MALs and imprinted cavities, while poor recoveries were obtained from NIMC. Both MIMC and NIMC presented low affinity to FLO, SM2, and VAL, since their molecular structures are quite different from the template (see Figure S1 for their structures). It can be demonstrated that the MIMC had good class-specificity for multiple MALs and it had great potential for the selective separation of MALs.
Comparison of Different Cleanup Methods
The recoveries of the six MALs obtained from different SPE cartridges, including MIMC, C18, and Oasis HLB, were compared. 1 mL of blank pork matrix was spiked at 10 ng/mL of the six MALs for the subsequent SPE procedures. For the C18 cartridge, 3 mL of MeOH and 3 mL of water were used to condition the cartridge. 1 mL of the extract was diluted with 4 mL of water before loading. The C18 cartridge was washed with 3 mL of 10% MeOH in water, and the analytes were eluted with 5 mL of 5% ammonium hydroxide in MeOH. For the Oasis HLB cartridge, 1 mL of the extract was evaporated and the residues were re-dissolved in 5 mL of phosphate buffer solution (0.1 M, pH 8.0) before loading. Except for the loading process, cleanup and elution were the same as for the C18 cartridge. As shown in Figure S6, MIMC provided higher recoveries than the other cartridges for the six MALs, especially for ERY and SPM. This confirmed that MIMC had excellent purification and enrichment ability for the six MALs in animal muscles.
Reusability of MIMC
The MIMC was subjected to repeated binding and elution SPE cycles to investigate its reusability (each SPE cycle: 5 mL of pork matrix spiked at 10 ng/mL of the six MALs). After each cycle, MIMC was rinsed with 1 mL of 5% ammonium hydroxide in MeOH to remove residual MALs, and then regenerated by washing with 1 mL of water and 1 mL of MeOH three times. The results showed that MIMC could be reused at least 20 times with only a slight decrease in its recognition properties (recovery loss less than 5%). In contrast, the recoveries of most MALs on the C18 cartridge dramatically declined (below 60%) after five cycles. It was obvious that MIMC had favorable durability, which makes it economical and stable for the analysis of real samples.
Application of MIMC in Animal Foodstuff
The developed MIPMME procedure, coupled with LC-MS/MS, was applied to selectively enrich and detect six MALs in chicken, pork, and beef samples. Good linearity was achieved in the concentration range of 1.0-100 µg/kg for the target compounds, with correlation coefficients (r²) higher than 0.99 under the optimized conditions of sample separation and detection (Table 2). Linear equations for the target analytes in the three matrices are listed in Table S2. The limits of detection (LOD) and limits of quantification (LOQ) were in the range of 0.5-1.0 µg/kg and 2.0-5.0 µg/kg, respectively.
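As an illustration of how such a matrix-matched calibration can be checked, the sketch below fits a straight line and computes r² with numpy; the concentrations and peak areas shown are invented for demonstration and are not the values behind Table 2 or Table S2.

    import numpy as np

    # Hypothetical calibration points: spiked concentration (ug/kg) vs. peak area
    conc = np.array([1.0, 2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
    area = np.array([210.0, 405.0, 1020.0, 1990.0, 5100.0, 9950.0, 20100.0])

    slope, intercept = np.polyfit(conc, area, 1)   # least-squares line
    pred = slope * conc + intercept
    ss_res = np.sum((area - pred) ** 2)
    ss_tot = np.sum((area - area.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                     # acceptance criterion here: r2 > 0.99

    print(f"area = {slope:.1f} * conc + {intercept:.1f}, r2 = {r2:.4f}")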
Chicken, pork, and beef samples at three spiked concentration levels of six MALs (5.0, 10, and 25 µg/kg) were analyzed to investigate the accuracy and precision of this method. The precision was described as the relative standard deviation (RSD). Table 2 also gives the results. The average recoveries of six MALs in three muscle samples were from 76.1% to 92.8%, with intra-day and inter-day RSDs that were lower than 10.4%. Figure 6 shows the typical SRM chromatograms of spiked chicken sample. Figure S7 shows the chromatograms of spiked pork and beef samples after the MISPE procedure.
Table 3 lists the comparison of other reported methods with the proposed method for the determination of MALs. The LODs of the developed method were lower than those of LC-MS/MS analysis based on traditional extraction approaches, such as pressurized liquid extraction (PLE) [41] and accelerated solvent extraction (ASE) [12]. When compared with novel extraction strategies using MIP technology, the method had lower LODs than the HPLC methods based on multi-walled carbon nanotubes MISPE (MWNTs-MIPSE) [28] and magnetic MISPE (MMISPE) [27]. Two papers described the determination of multiple macrolides in animal foodstuffs by LC-MS/MS based on MISPE [29] and hollow porous MIP-DSPE (HPMIP-DSPE) [26]. Although the LODs of the method that was developed were slightly higher than those of the MISPE method, the recoveries of this method were much better than the latter. The HPMIP-DSPE method had higher recoveries and lower LODs for the analysis of MALs in honey due to the differences between sample matrices. Thus, the LC-MS/MS coupled with MIPMME was efficient and sensitive in analyzing trace amounts of MALs in animal-derived food.
Conclusions
In this work, a MIP monolithic extraction column for MALs was prepared inside a micropipette tip using ROX as the dummy template. The MIMC was directly connected to a syringe to perform template removal and the subsequent microextraction procedures, which makes sample pre-treatment simple and convenient. Based on MIMC, an LC-MS/MS method was established for the analysis of MALs in animal muscle samples. The developed method provided good linearity, high sensitivity, and satisfactory recoveries for MALs within the experimental concentration ranges. Compared with traditional C18 and HLB cartridges, MIMC showed more favorable retention of the six MALs and could be reused at least 20 times. Thus, the proposed method is selective, efficient, and economical for monitoring trace MAL residues in animal foodstuffs.
Supplementary Materials: The following are available online at http://www.mdpi.com/2073-4360/11/7/1109/s1: Figure S1, Chemical structures of macrolide antibiotics; Figure S2, Effects of (A) different ratio of toluene to dodecanol as porogen, (B) porogen volume, (C) different amounts of EGDMA as cross-linker and (D) AIBN as initiator on the recovery of roxithromycin obtained from MIMC and NIMC; Figure S3, FT-IR characterization of MIMC and NIMC; Figure S4, Effect of polymerization volume on the recovery of roxithromycin for MIMC and NIMC; Figure S5, Effect of MeOH and different percentages of ammonium hydroxide (AM) in MeOH as elution solutions on the recoveries of macrolides drugs; Figure S6, The comparison of MIMC, C18 and Oasis HLB cartridges on the recoveries of target macrolides (the abbreviations are same as Figure 4) at 10 ng/mL spiked concentration of six macrolides in pork matrix; Figure S7, Typical SRM chromatograms obtained from (A) spiked pork and (B) spiked beef matrices at the concentration of 5 µg/kg and their corresponding blank matrices; Table S1, SRM parameters for target analytes in positive ion mode; Table S2, Linear equation for each analyte in three matrices.
Social Withdrawal in Adolescence and Early Adulthood: Measurement Issues, Normative Development, and Distinct Trajectories
Social withdrawal during adolescence and early adulthood is particularly problematic due to the increasing importance of social interactions during these ages. Yet little is known about the changes, trajectories, or correlates of being withdrawn during this transition to adulthood. The purpose of this study was to examine the normative change and distinct trajectories of withdrawal in order to identify adolescents and early adults at greatest risk for maladjustment. Participants were from a Dutch population-based cohort study (Tracking Adolescents’ Individual Lives Survey), including 1917 adolescents who were assessed at four waves from the age of 16 to 25 years. Five items from the Youth Self Report and Adult Self Report were found to be measurement invariant and used to assess longitudinal changes in social withdrawal. Overall, participants followed a U-shaped trajectory of social withdrawal, where withdrawal decreased from ages 16 to 19 years, remained stable from 19 to 22 years, and increased from 22 to 25 years. Furthermore, three distinct trajectory classes of withdrawal emerged: a low-stable group (71.8%), a high-decreasing group (12.0%), and a low-curvilinear group (16.2%). The three classes differed on: shyness, social affiliation, reduced social contact, anxiety, and antisocial behaviors. The high-decreasing group endorsed the highest social maladjustment, followed by the low-curvilinear group, and the low-stable group was highly adjusted. We discuss the potential contribution of the changing social network in influencing withdrawal levels, the distinct characteristics of each trajectory group, and future directions in the study of social withdrawal in adolescence and early adulthood. Electronic supplementary material The online version of this article (10.1007/s10802-018-0497-4) contains supplementary material, which is available to authorized users.
Social withdrawal is an umbrella term referring to an individual's voluntary self-isolation from familiar and/or unfamiliar others through the consistent display of solitary behaviors (Rubin et al. 2009) such as shyness, spending excessive time alone, and avoiding peer interaction. Underlying motivations to withdrawal may vary between individuals (Asendorpf 1990;Ozdemir et al. 2015;Wang et al. 2013). Based on varying approach-avoidance motivations, Coplan and Armer (2007) identified three types of social withdrawal: shyness (high approach, high avoidance), unsociability (low approach, low avoidance), and social avoidance (low approach, high avoidance). Phenotypic withdrawal behaviors overlap across withdrawal types. In the current manuscript, we use the term 'social withdrawal' to refer to the global, multidimensional, behavioral phenotype of voluntary self-isolation. Socially withdrawn adolescents (ages 10-20) and early adults (ages 20-25) face challenges both parallel to those of withdrawn children and unique to the transition to adulthood (Hamer and Bruch 1997;Nelson et al. 2008;Nelemans et al. 2014;Rowsell and Coplan 2013). Yet only a small segment of the withdrawal literature has examined the normative changes, heterogeneous trajectories, or correlates of withdrawal during these ages. More research is needed to increase our understanding of the specific roles social withdrawal plays in the lives of adolescents and early adults in order to increase wellbeing during this transitional period and promote positive adjustment thereafter. The current study contributes to the literature by examining the developmental changes of social withdrawal while considering measurement issues pertinent to developmental research. The following sections review the current state of knowledge of both aspects.
Normative Patterns and At-Risk Trajectories of Social Withdrawal
Socio-cognitive abilities allow youth to evaluate themselves in their social contexts. These abilities improve during early adolescence, which leads to growing attention to and perceived importance of adolescents' social functioning relative to their peers, and a heightened sensitivity to how they are perceived by their peers (Gazelle and Rubin 2010; Steinberg and Morris 2001; Weems and Costa 2005; Westenberg et al. 2004). This increase in evaluative concerns might induce an increase in social withdrawal in early and middle adolescence. With greater social experience and brain maturation (Choudhury et al. 2006), and larger social networks (Wrzus et al. 2012), evaluative concerns likely diminish during late adolescence and early adulthood, thereby decreasing social withdrawal. To date, only two studies have reported on changes in withdrawal through adolescence and early adulthood, with contradictory findings. The first found a small increase in parent-reported social withdrawal from ages 4 to 18 years (Bongers et al. 2003), while the second found a small decrease in parent-reported shyness from ages 4 to 23 years (Dennissen et al. 2008). Neither study tested for potential curvilinear associations of withdrawal over time. Curvilinear associations are likely because several phenomena related to social evaluation and social withdrawal have been found to follow a curvilinear pattern, with increases during early adolescence and decreases during late adolescence and adulthood, such as self-consciousness (Rankin et al. 2004), social conformity (Sistrunk et al. 1971), perceived importance of popularity (LaFontana and Cillessen 2010), and social anxiety (Nelemans et al. 2014).
Regardless of the mean-level trajectory of withdrawal, not all adolescents and adults will follow the general developmental pattern. On the individual level, increases, decreases, and stability in social withdrawal are all likely. The contradictory findings and weak effects in the two mean-level studies may point to heterogeneous patterns of withdrawal trajectories. In the preadolescent social withdrawal literature, distinct trajectories of increasing, decreasing, and low-stable withdrawal have been consistently reported using peer nominations of anxious withdrawal (Booth-LaForce et al. 2012;Oh et al. 2008) and teacher-reported social withdrawal (Booth-LaForce and Oxford 2008). In this body of work, the majority of children have very low and stable levels of withdrawal over time. The increasing withdrawal group exhibits the highest maladjustment, and the decreasing group exhibits intermediate maladjustment levels. Only Tang and colleagues (Tang et al. 2017) have examined the trajectories of withdrawal into adolescence and adulthood, with participants aged 8 to 35 years. They found the same three withdrawal trajectories as reported in the preadolescent literature and comparable group differences in maladjustment. However, it is too early for firm conclusions, because Tang's study had three major limitations. First, curvilinear mean-level patterns and distinct curvilinear trajectories were not reported. Second, informant effects were introduced by assessing social withdrawal with parent-reports during their first two measurement waves and self-reports during the last two. Different informants provide unique information that cannot be adjoined without first testing for measurement invariance across reporters. Finally, measurement invariance of the social withdrawal measure was not established prior to trajectory analysis. To date, no study has examined if the social withdrawal scales used were measurement invariant.
Measurement Issues
We suspect that the limited progress of adolescent and adult withdrawal research is partly related to measurement obstacles such as possible informant biases and lack of measurement invariance. In the following sections, we will argue that self-report items from the Achenbach System of Empirically Based Assessment (ASEBA) provide an adequate method for obtaining social withdrawal ratings in adolescence and adulthood, and stress the importance of assessing measurement invariance in studies examining developmental patterns.
Measuring Social Withdrawal in Adolescence and Adulthood
The majority of assessment measures and methods of social withdrawal focus on childhood, and only few have been developed or adapted for use in adolescent or adult samples. Furthermore, no questionnaire has been developed to measure the longitudinal changes in the global behavioral aspects of social withdrawal in adolescence and adulthood, such as shyness, spending excessive time alone, and avoiding peer interaction. To the best of our knowledge, only two validated proxy measures are available, the Behavioral Inhibition/Activation Scales (Coplan et al. 2006) and the Revised Cheek and Buss Shyness Scale (Cheek and Buss 1981), but these measures capture only inhibition toward novelty and shyness, respectively, rather than the global behavioral aspects that span across the types of social withdrawal. The Child Social Preferences Scale has been adapted for use with early adults to measure the three aspects of social withdrawal (i.e. shyness, unsociability, and avoidance; Nelson 2013). Although a promising new scale, more research is needed to determine its application in longitudinal research. A measure capturing the global behavioral aspects of social withdrawal, longitudinally, in adolescence and adulthood has been lacking.
Another issue that deserves attention concerns the validity of informant reports. Most instruments that assess social withdrawal in childhood obtain ratings from parents, teachers, or peers, but other-reports become less reliable in adolescence, when individuals spend more and more time outside parental supervision. Consequently, the difference between parent- and self-reports increases from childhood through adulthood (Van der Ende et al. 2012) due to decreasing parent-child contact. Similarly, obtaining teacher- or peer-ratings of withdrawal becomes more difficult when individuals take part in more flexible classes (i.e. attend secondary and tertiary education) or no longer belong to a formal education setting. A way to avoid these informant-related measurement problems is to assess social withdrawal by means of self-reports.
To overcome these two measurement issues, we need a self-report measure that captures the key characteristics of social withdrawal and is designed for longitudinal use during adolescence and adulthood. We suggest that the commonly used and well-validated ASEBA Youth Self-Report (YSR) and Adult Self-Report (ASR) could fulfil these criteria and overcome the limitations of previous studies. Both the YSR and the ASR contain a Withdrawn/Depressed scale that measures aspects of depression and social withdrawal. Several studies have already used this scale to assess social withdrawal. Among them, four have used the complete Withdrawn/Depressed scale (Katz et al. 2011; Lamb et al. 2010; Perez-Edgar et al. 2010; Rubin et al. 2013) and three used selected items from the scale, removing depression-related items to avoid confounding results (Booth-LaForce and Oxford 2008; Eggum et al. 2009; Tang et al. 2017). Importantly, only a few of these studies spanned the adolescent or early adult periods, and none consistently used the YSR/ASR to assess the development of self-reported social withdrawal.
Longitudinal Measurement Invariance
Items from the Withdrawn/Depressed scale have been reported to have high internal consistency, which is promising. However, it is unclear whether these items measure the same construct over time and show measurement invariance. Longitudinal measurement invariance (also called factorial invariance, measurement equivalence, or structural stability) of a variable is especially important when examining mean-level or individual trajectories. When examining longitudinal changes in social withdrawal, it is essential that the variable used captures the same aspects of withdrawal in the same way at every assessment wave. Although this may seem obvious at first glance, most studies do not examine measurement invariance prior to interpreting results, and few mention the possibility of measurement variance as a study limitation. Assessing longitudinal measurement invariance tells us if individuals interpret specific items of a given measure in the same way over time through a series of increasingly constrained Confirmatory Factor Analysis (CFA) models. Briefly, four types of measurement invariance, at increasing strength, are examined: configural invariance (baseline), metric invariance (weak), scalar invariance (strong), and residual invariance (strict), see methods section for details. Examining the longitudinal measurement invariance of social withdrawal is not only novel and timely but will provide information on the underlying structure of withdrawal across multiple developmental periods and, if social withdrawal is at least partially invariant, allow for valid interpretations of observed changes or trajectories in withdrawal over time.
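In equation form, the four nested levels can be summarized with a standard longitudinal CFA measurement model (a generic formulation, not the exact specification fitted in this study), where x_it is the score on item i at wave t, η_t the latent withdrawal factor at wave t, τ the item intercepts, λ the factor loadings, and ε the residuals:

    x_it = τ_it + λ_it · η_t + ε_it

    Configural (baseline): the same items load on the same factor at every wave, with no equality constraints.
    Metric (weak):        λ_it = λ_i for all t.
    Scalar (strong):      metric constraints plus τ_it = τ_i for all t.
    Residual (strict):    scalar constraints plus Var(ε_it) = θ_i for all t.

Each step is retained only if imposing the additional constraints does not meaningfully worsen model fit relative to the previous, less constrained model.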
Overview of the Current Study
The current study uses 9 years of longitudinal data from a population-based cohort survey in order to fill some of the gaps in the literature and to answer four main questions: (1) Which withdrawal-related YSR and ASR items measure social withdrawal validly and reliably in our sample? (2) Is the structure of social withdrawal invariant over time? (3) What is the stability and normative change (i.e., mean-level continuity) of social withdrawal during adolescence and early adulthood? (4) How many trajectories of social withdrawal can be distinguished?
Method
Participants
Participants were part of the Tracking Adolescents' Individual Lives Survey (TRAILS), a prospective, population-based cohort study aiming to track the social, psychological, and physical development of pre-adolescents through adulthood. During the first measurement wave in 2001 (T1), 2230 children (M age = 11.09, SD = 0.56; 50.8% female), who were born between October 1989 and September 1991, were recruited for participation. In subsequent waves, occurring every 2 or 3 years, 73-96% of the children from T1 participated again. More information about the recruitment and assessment procedure has been reported by De Winter et al. (2005), Huisman et al. (2008), and Oldehinkel et al. (2015). Extensive case analyses from T1 to T4 can be found in Nederhof et al. (2012). Participants who missed at least one measurement wave between T1 and T5 were more likely to be male, to come from low-socioeconomic-status families, and to have more externalizing problems at T1 (Oldehinkel et al. 2015). The current study uses data from the last four measurement waves (T3-T6). Because of missing social withdrawal data at every time point from T3 to T6, 313 participants were excluded from analyses, leading to a final sample size of 1917 adolescents (53% female; Table 1 depicts the retention rates and demographics).
Data Collection Procedure
The TRAILS study protocol was approved by the Dutch Central Committee on Research Involving Human Subjects. The adolescent participants provided written consent for their own participation from the second through the sixth assessment waves. A parent or guardian provided written parental consent for the adolescent's participation during the first three assessment waves, and written consent for their own participation at every assessment wave.
At the initial assessment wave, well-trained interviewers visited one of the parents or guardians (95.6% mothers) at their home to conduct interviews regarding the family composition, the child's developmental history, somatic health and impairments, health care use, and familial psychopathology. During this visit, parents also completed a written questionnaire. Children completed a questionnaire and neuropsychological tests at school, under the supervision of at least one TRAILS assistant. During the second and third assessment waves, parents completed a questionnaire, which they received via postal mail, and children completed a questionnaire at school, in groups, under TRAILS supervision. At the fourth assessment wave, a custom research company (CRC) was hired to recruit and assess participants, who were by then over the age of 18 years, thereby requiring written informed consent from the adolescents but not parental consent for adolescent participation. Participants received information explaining that the CRC would collect data, and if participants gave informed consent to participate, the CRC sent them a web-based questionnaire battery. During the fourth wave, parents again completed a questionnaire, which they received via postal mail. During the fifth and sixth assessment waves, data collection was completed by the TRAILS team. During these waves, participants and parents received study information in print via mail, followed two to three weeks later by an email or letter with the website link to the online questionnaire. Reminders to complete the questionnaires were sent by email, followed by letters and/or telephone calls.
Measures
Social Withdrawal
Social withdrawal was measured using items from the Youth Self-Report (YSR; Achenbach 2001) and Adult Self-Report (ASR; Achenbach 2003) Withdrawn/Depressed and Withdrawn scales, respectively. The YSR is a widely-used, 112-item self-report measure of emotional and behavioral problems, developed for adolescents aged 11 to 18 years. The items can be rated on a 3-point scale, with 0 = not at all; 1 = a little or sometimes; and 2 = always or often true in the past 6 months. The ASR is the adult version of the YSR, meant for individuals aged 18 to 59 years. The ASR includes 102 items rated on the same 3-point scale as the YSR. The YSR was administered at T1 to T3 and the ASR at T4 to T6. In a sample of 11- to 18-year-old youth, the YSR Withdrawn/Depressed scale had moderate 8-day test-retest reliability (r = 0.67), and scores were positively correlated with measures of depression (rs > 0.36, ps < 0.001) and withdrawal (rs > 0.58, ps < 0.001; Achenbach and Rescorla 2007). In a sample of adults over the age of 18 years, the ASR Withdrawn scale had high 7-day test-retest reliability (r = 0.87), and scores were positively correlated with measures of depression (r = 0.46, p < 0.01), anxiety (r = 0.44, p < 0.01), and social introversion (r = 0.43, p < 0.01; Achenbach and Rescorla 2003).
As a starting point for the analyses, we selected five withdrawal-related items that were identical in the YSR and ASR: "I would rather be alone than with others," "I am secretive or keep things to myself," "I am too shy or timid," "I refuse to talk," and "I keep from getting involved with others." Selection was based on face validity and on previous research (e.g., Booth-LaForce and Oxford 2008; Eggum et al. 2009; Katz et al. 2011; Tang et al. 2017). Cronbach's alphas of the five items at T1 to T6 were 0.49, 0.57, 0.65, 0.67, 0.72, and 0.72, respectively. Although our original, preregistered plan included analyses of data from all six measurement waves, measurement invariance was not found when including T1 and T2, likely because of the insufficient internal reliabilities of the pre- and early adolescent responses (Cronbach's alphas < 0.60, given the small number of items; Loewenthal 2004). We therefore decided to perform all following analyses on T3 to T6 data, which showed sufficient reliabilities and measurement invariance over time.
[Table 1 note: Mean age (SD) is presented for the participants included in the current study (N = 1,917), while the remaining columns present demographic data for all participants in the larger survey (N = 2,230). Survey retention refers to the proportion of baseline participants who participated in subsequent assessments.]
Criterion Variables
Shyness and social affiliation were assessed by the Early Adolescent Temperament Questionnaire-Revised (EATQ-R, Ellis and Rothbart 2001). The EATQ-R is a parent-report questionnaire measuring temperament in adolescents aged 9 to 15 years with 65 Likert-type items (1 = Almost never true to 5 = Almost always true). The scales of the EATQ-R include Fearfulness, Frustration, Shyness, Surgency, Affiliation, and Effortful Control, of which the Shyness and Affiliation scales were used for the purposes of this study. The Shyness scale measures hesitancy toward novel social situations, with items including "My child is shy" and "My child is shy when he or she meets new people." The Affiliation scale measures the tendency to want closeness with others, including items such as "My child likes to talk to someone about everything he or she thinks" and "My child would like to spend time with a good friend every day." In a sample of early adolescents, the Shyness scale had good 8-week test-retest stability (intra-class correlation = 0.73), and scores were positively correlated with two measures of behavioral inhibition (rs = 0.39 and 0.45, ps < 0.001) and with measures of anxiety, depression, and emotional problems (rs = 0.34, 0.25, and 0.34, ps < 0.001, respectively; Muris and Meesters 2009). In the same sample, the Affiliation scale had good test-retest stability (intra-class correlation = 0.80), and scores were correlated with a measure of prosocial behavior (r = 0.39, p < 0.001). The EATQ-R was completed by parents at T3, T4, and T5 with acceptable internal consistencies for both Shyness (4 items, α = 0.87, 0.78, 0.80) and Affiliation (5 items, α = 0.73, 0.63, 0.66).
Reduced social contact was measured by the 12-item Reduced Social Contact scale of the Children's Social Behavior Questionnaire (CSBQ) and Social Behavior Questionnaire-Adult Version (SBQ-A; Hartman et al. 2006). The CSBQ and SBQ-A were developed to assess the sociobehavioral symptoms of Autism Spectrum Disorders. The CSBQ is a parent-report measure consisting of 49 items rated on a three-point scale (0 = Never; 1 = A little/sometimes; 2 = Often). The SBQ-A parent-report form is the adult version of the CSBQ, with 44 items rated on the same three-point scale. The CSBQ Reduced Social Contact scale was administered at T3 and T4 (α = 0.89, 0.86); at T6, the SBQ-A Reduced Social Contact scale was administered (α = 0.77). The Reduced Social Contact scale includes 12 items such as "Does not start playing with other children," "Has little or no need for contact with others," and "Does not respond to other children's initiatives."
Anxiety was assessed by the YSR (Achenbach 2001) and ASR (Achenbach 2003) Anxious/Depressed subscale. The YSR Anxious/Depressed subscale included 13 items at T3 (α = 0.84) and the ASR Anxious/Depressed subscale included 18 comparable items at T4-T6 (α = 0.91, 0.92, 0.93). Because of the discrepancy between the number of items, our analyses utilized the mean Anxiety per participant, based on item endorsement, rather than the total Anxiety score.
Statistical Analyses
Analyses were conducted in Mplus Version 8.0 (Muthén & Muthén, 1998-2017) using maximum likelihood estimation with robust standard errors (MLR). First, a CFA model with the five YSR items loading onto a single social withdrawal latent variable at baseline (T3) was tested in half of the data. The following goodness-of-fit cutoffs were considered to indicate good model fit: comparative fit index (CFI) ≥ 0.95, root mean square error of approximation (RMSEA) ≤ 0.06, and standardized root mean square residual (SRMR) ≤ 0.08 (Hu and Bentler 1999). If the model had good fit, the analysis was repeated in the second half of the data.
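The fit screening described above can be expressed as a small helper; a minimal sketch in Python, assuming the fit indices are read off the output of whatever SEM program is used (the function name and inputs are illustrative, not part of the original analysis):

```python
def acceptable_fit(cfi: float, rmsea: float, srmr: float) -> bool:
    """Screen a CFA solution against the Hu and Bentler (1999) cutoffs used here:
    CFI >= 0.95, RMSEA <= 0.06, SRMR <= 0.08."""
    return cfi >= 0.95 and rmsea <= 0.06 and srmr <= 0.08

# Example: hypothetical fit indices for the baseline (T3) CFA in one half of the data
print(acceptable_fit(cfi=0.97, rmsea=0.04, srmr=0.03))  # True -> good fit
```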
Next, a series of increasingly constrained CFA models systematically tested whether the YSR/ASR items, indicating a single social withdrawal latent factor, were measurement invariant over time. Differences in model fit were examined in two ways: first, using Satorra-Bentler scaled chi-square difference tests (ΔSBχ²; Satorra and Bentler 2001), and second, using change-in-fit indices following the Chen (2007) criteria: noninvariance is indicated by a decrease in CFI of at least 0.010 (ΔCFI ≤ −0.010) combined with ΔRMSEA ≥ 0.015 or ΔSRMR ≥ 0.030 for metric invariance, and ΔSRMR ≥ 0.010 for scalar invariance, because SRMR is less sensitive to noninvariance in intercepts than to noninvariance in item loadings. Priority was given to the chi-square difference test for determining whether the data demonstrated invariance (Bowen and Masa 2015; Vandenberg and Lance 2000), with further support for model fit conclusions based on the change-in-fit indices (Chen 2007). In the configural invariance model, factor loadings, intercepts, and residual variances were allowed to vary. Factor loadings were constrained to be equal over time in the metric model, and factor loadings and item intercepts were constrained to be equal in the scalar model. If the scalar model fit significantly worse than the metric model, up to 20% of the intercepts were freed until the model fit the data as well as the metric model, in which case partial scalar invariance was established. Within the (partial) scalar invariance model, the correlations between the social withdrawal latent variables at consecutive time points provide information about the rank-order stability of social withdrawal.
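For readers who want to reproduce the nested-model comparisons, the sketch below implements the Satorra-Bentler scaled chi-square difference test from the robust (MLR) chi-square values, scaling correction factors, and degrees of freedom that an SEM program reports, plus an illustrative helper for the Chen (2007) change-in-fit screen; this is a generic sketch, not the study's actual code.

```python
from scipy.stats import chi2

def sb_scaled_chi2_diff(t0, c0, df0, t1, c1, df1):
    """Satorra-Bentler scaled chi-square difference test.
    (t0, c0, df0): robust chi-square, scaling correction factor, and df of the
    more constrained (nested) model; (t1, c1, df1): the same for the less
    constrained model. Returns the scaled difference, its df, and the p-value."""
    d_df = df0 - df1
    cd = (df0 * c0 - df1 * c1) / d_df      # scaling correction for the difference
    trd = (t0 * c0 - t1 * c1) / cd         # scaled difference statistic
    return trd, d_df, chi2.sf(trd, d_df)

def chen_flags_noninvariance(d_cfi, d_rmsea, d_srmr, step="metric"):
    """Chen (2007) screen: noninvariance is flagged by a CFI drop of .010 or more
    together with an RMSEA increase >= .015 or an SRMR increase >= .030 (metric)
    or >= .010 (scalar)."""
    srmr_cut = 0.030 if step == "metric" else 0.010
    return d_cfi <= -0.010 and (d_rmsea >= 0.015 or d_srmr >= srmr_cut)
```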
After that, we examined the normative mean-level change in social withdrawal over time. A multiple-indicator Latent Growth Model (mLGM) assessed the growth curve of the social withdrawal latent variable. This model was preferred over traditional item summation scores, because mLGMs account for both random and systematic variance through the use of latent variables. Furthermore, the mLGM included results from the measurement invariance analyses, and hence prevented biasing growth results with any metric discrepancies. The mLGM included (1) the measurement model, which defined social withdrawal from the five YSR/ASR items and specified factor loadings and intercept equalities found in the measurement invariance analyses, and (2) the intra-individual linear or quadratic changes in social withdrawal over time, defined by the intercept growth factor, linear slope growth factor, and the quadratic slope factor latent variables. The variance of the intercept and slope growth factors indicated the amount of individual differences at baseline and in trajectories, respectively.
Finally, we extended the mLGM to determine the number and type of distinct linear and quadratic social withdrawal trajectory classes by assessing how well the data fit one- to four-class models through multiple-indicator Latent Class Growth Analysis (mLCGA). To determine the best class enumeration, we used the Lo-Mendell-Rubin adjusted Likelihood Ratio Test (aLRT) and the Bayesian Information Criterion (BIC). The aLRT compares the (k−1)-class model with the k-class model; a significant value indicates that the k-class model fits the data better than the (k−1)-class model. Additional evidence for the number of latent classes was provided by Akaike's Information Criterion (AIC), the sample-size-adjusted BIC (SSBIC), and the entropy of the model.
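Two of the class-enumeration indices named above can be computed directly from model output; the sketch below shows the BIC from a model's log-likelihood and the relative entropy from the matrix of posterior class probabilities (all input values are placeholders, not estimates from this study).

```python
import numpy as np

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion; lower values favor a class solution."""
    return -2.0 * log_likelihood + n_params * np.log(n_obs)

def relative_entropy(post: np.ndarray) -> float:
    """Relative entropy of a latent class model (0-1; higher = cleaner classification).
    `post` is an (n_individuals x n_classes) matrix of posterior class probabilities."""
    n, k = post.shape
    p = np.clip(post, 1e-12, 1.0)  # guard against log(0)
    return 1.0 - (-(p * np.log(p)).sum()) / (n * np.log(k))

# Toy example: three people assigned to three classes with near certainty
toy = np.array([[0.98, 0.01, 0.01], [0.02, 0.97, 0.01], [0.01, 0.01, 0.98]])
print(round(relative_entropy(toy), 3))  # high value -> well-separated classes
```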
Longitudinal Measurement Invariance
Results from the longitudinal measurement invariance models are depicted in Table 2. First, we specified a configural invariance model in which all factor loadings, intercepts, and residual variances were allowed to vary. Factor variances were all fixed to 1 and all factor means were fixed to 0 for model identification. Results indicated that the configural invariance model was an excellent fit to the data. Next, we specified a metric invariance model in which the factor loadings were constrained to be equal over time, while intercepts and residual variances were allowed to vary. The withdrawal factor variance at T3 was fixed to 1 and all factor means were fixed to 0 for identification. Results indicated that the metric invariance model also had an excellent fit to the data and fit no worse than the configural invariance model. Finally, we specified a scalar invariance model, which constrained all factor loadings and intercepts to be equal while allowing residual variances to vary. The withdrawal factor variance at T3 was fixed to 1 and the factor mean at the first time point (T3) was fixed to 0 to allow model identification. The scalar invariance model fit the data significantly worse than the metric model. Using modification indices to determine which intercepts needed to be freed to improve model fit, we freed intercepts one by one to test partial scalar invariance. Comparisons between the partial scalar model and the metric model were made until the partial scalar model no longer fit the data significantly worse than the metric model. We freed four item intercepts (20% of the intercepts) to achieve partial scalar invariance ("I am too shy or timid" and "I keep from getting involved with others" at T3; "I'd rather be alone than with others" at both T5 and T6). Although the SB-scaled χ² difference test was still significant (p = 0.02), the strictest change-in-fit index criteria were met, indicating that the metric model and the partial scalar model with four freed intercepts fit the data almost identically well.
Rank-Order Stability of Social Withdrawal
Rank-order stability of social withdrawal was determined by the correlations between consecutive withdrawal latent factors, and indicated substantial stability over time: r T3-T4 = 0.70; r T4-T5 = 0.67; r T5-T6 = 0.72; all ps < 0.001.
Mean-Level Change of Social Withdrawal across the Full Sample
We could examine both linear and quadratic changes in social withdrawal because we had four time points of data. In our mLGM, factor loadings were constrained to be equal across time (except for the first item at every time point, which was constrained to 1 for model identification), item intercepts were constrained to be equal across time except for the intercepts that were freed in the partial scalar model, and residual errors were correlated. The means and standard errors are depicted in Fig. 1. Social withdrawal steeply decreased from T3 to T5 and increased again from T5 to T6. The intercept variance was significant (intercept variance = 0.068, p < 0.001), while the quadratic slope variance was non-significant (quadratic slope variance = 0.002, p = 0.277), indicating a U-shaped trajectory in the overall sample, with individual differences in baseline levels of social withdrawal but not in the shape of the trajectory.
Distinct Social Withdrawal Trajectories
Multiple-indicator LCGA identified three trajectories of social withdrawal: a low-stable group (71.8%), a high-decreasing group (12.0%), and a low-curvilinear group (16.2%; Fig. 2). Table 3 depicts the multiple-indicator LCGA results and Table 4 depicts the parameter estimates of the intercept and slope factors, and their respective variance estimates, for the three classes. The majority of participants were classified into the low-stable group, with the lowest levels of withdrawal throughout the four measurement waves. The high-decreasing withdrawal group had the highest level of social withdrawal at every time point, but demonstrated a linear decrease over time. Finally, the low-curvilinear group had baseline levels of withdrawal between those of the low-stable and high-decreasing groups, decreased to the low-stable group's levels of withdrawal during the second and third measurement waves, and showed a slight increase in withdrawal during the final wave.
Post-Hoc Analyses: Differences between Trajectory Groups on Associated Variables
Once we identified the three trajectories of social withdrawal, we were interested in exploring how these groups differed. We selected six relevant variables: gender, antisocial behavior, anxiety, shyness, affiliation, and reduced social contact. Gender, antisocial behavior, and anxiety were selected because the relationships between these variables and social withdrawal are commonly examined in the preadolescent literature, but little is known about these associations during adolescence and adulthood. Shyness, affiliation, and reduced social contact were selected as measures pointing to individuals' more specific withdrawal characteristics. Shyness captures a hesitancy toward novel social situations; affiliation is the extent to which close relationships are desired; and reduced social contact measures the underlying social disinterest toward peers.
A chi-square test indicated that gender was not equally distributed among the three groups, χ²(2, N = 1917) = 18.33, p < 0.001. The low-curvilinear group had a significantly higher proportion of males (58.7%) compared to the low-stable (44.6%) and high-decreasing (47.5%) groups. To examine class differences on the withdrawal-related variables, while controlling for gender, accounting for classification error, and keeping the class distributions the same as in the three-class LCGA model, we used the three-step BCH approach (Asparouhov and Muthén 2014; Bolck et al. 2004). Results indicated significant class differences on shyness, reduced social contact, and anxiety at every time point, on affiliation during two out of three time points, and on antisocial behaviors during two out of four time points. Pairwise comparisons indicated that the high-decreasing class was significantly shyer than the low-stable and low-curvilinear classes at every time point. At T3 and T4, the low-stable and low-curvilinear classes did not differ on shyness, but at T5 the low-curvilinear group was significantly shyer than the low-stable class. At T3 and T4, the high-decreasing class had the lowest affiliation. At T3, the low-stable and low-curvilinear classes did not differ in affiliation, but at T4, the low-curvilinear class reported significantly lower affiliation compared to the low-stable class. The three classes no longer differed on affiliation at T5. Classes also differed on antisocial behaviors at T3 and T5, but did not differ at T4 and T6. At T3, the low-curvilinear group endorsed more antisocial behaviors than the low-stable class, while the high-decreasing class did not differ from the other two. At T5, the high-decreasing class endorsed more antisocial behaviors than the low-stable class, while the low-stable and low-curvilinear classes did not differ. The high-decreasing class had the highest reduced social contact at every time point, while the low-curvilinear and low-stable groups did not differ. Finally, the high-decreasing class also reported the highest anxiety at every time point. The low-curvilinear class reported higher anxiety than the low-stable class at T3; the low-stable class reported higher anxiety than the low-curvilinear class at T4 and T5; and at T6, the low-stable and low-curvilinear groups did not differ on anxiety.
[Table 4 notes: Means with the same subscript do not differ significantly; means without a subscript differ significantly at p < 0.05. The mean and standard error (SE) of the intercept factor of the low-stable trajectory were set to zero for model identification. *p < 0.05, **p < 0.01, ***p < 0.001.]
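As a hedged illustration of the gender test reported above, the sketch below reconstructs an approximate contingency table from the published class proportions and percentages of males (the exact cell counts are not given in the text) and runs Pearson's chi-square test with SciPy.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: trajectory classes (low-stable, high-decreasing, low-curvilinear);
# columns: males, females. Counts are rough reconstructions from the reported
# class sizes (71.8%, 12.0%, 16.2% of N = 1917) and percentages of males.
counts = np.array([
    [614, 762],   # low-stable: ~1376 participants, ~44.6% male
    [109, 121],   # high-decreasing: ~230 participants, ~47.5% male
    [182, 129],   # low-curvilinear: ~311 participants, ~58.7% male
])
chi2_stat, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}, N = {counts.sum()}) = {chi2_stat:.2f}, p = {p_value:.4f}")
```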
Discussion
The study presented in this article used almost a decade of longitudinal data to examine the mean-level change and specific trajectories of social withdrawal through adolescence and early adulthood. We contributed to the small, but expanding, adolescent and early adult social withdrawal literature in hopes of increasing our knowledge of the normative and at-risk patterns of withdrawal during this transitional period of life. Prior to examining the trajectories of social withdrawal, we aimed to overcome some of the measurement-related limitations in previous studies, such as informant biases and possible measurement noninvariance, by examining the ability of self-report measures to capture withdrawal in the same way over time. We found evidence that the YSR and ASR can be used through adolescence and early adulthood to assess global behavioral aspects of social withdrawal, such as shyness, spending excessive time alone, and avoiding peer interaction. The five selected withdrawal-related items captured a single dimension of social withdrawal and were partially measurement invariant over time. This indicates that the selected withdrawal items were interpreted in the same way between the ages of 16 and 25. We could not establish measurement invariance when including data from measurement waves prior to the age of 16 years, indicating that the interpretations of the withdrawal items were different in pre- and early adolescence compared to middle and late adolescence and young adulthood. This is important because previous studies have drawn conclusions about the trajectories of social withdrawal from broad age ranges spanning childhood through adulthood without examining the measurement invariance of their withdrawal items. The validity of these conclusions is questionable considering the changing interpretations of the items during adolescence. With confidence, we can interpret our subsequent findings in participants aged 16 to 25 as real developmental changes rather than as measurement artifacts.
Normative Mean-Level Withdrawal Changes
Results did not support our hypothesis that the mean-level change of social withdrawal follows a curvilinear, inverted-U trajectory. On the contrary, we found a U-shaped curvilinear trajectory, in which social withdrawal decreased from 16 to 19 years (T3-T4), remained low and stable from 19 to 22 years (T4-T5), and increased again from 22 to 25 years (T5-T6). This curvilinear pattern of social withdrawal might be related to the changes in individuals' social relationships during late adolescence and again during early adulthood. The decrease in mean-level social withdrawal from 16 to 19 years might be driven by the increasing size of individuals' social network during the same time. The size of the social network increases during late adolescence (Wrzus et al. 2012), due to increasing social motivations, greater autonomy from parents, and the entry to postsecondary institutions which expose individuals to a large number of new peers. These changes mean more opportunities for socializing, forming new relationships, and expanding one's social network. The increasing social network size likely underlies the decrease in social withdrawal from 16 to 19 years and the maintenance of low levels of withdrawal from 19 to 22 years. Similarly, the increase in mean-level social withdrawal from 22 to 25 years might be driven by decreasing sizes of individuals' social networks. The size of the social network of adolescents increases until early adulthood, then begins to steadily decrease (Wrzus et al. 2012). This social network decline is due to common life events during early adulthood such as exiting post-secondary education, entering the job market, transitioning to parenthood, and/or relocating. These life events lead to fewer people in the social network of early adults, thereby limiting opportunities for social experiences and contributing to early adults' perceptions of themselves as more withdrawn.
In sum, the U-shaped mean-level trajectory of social withdrawal during adolescence and early adulthood is probably related to the changes in the social network during these ages. Future studies should examine the longitudinal relationship between social network changes and the social withdrawal trajectory during adolescence and early adulthood.
Furthermore, more frequent assessments of the size and changes of the social network over a short period of time during the observed withdrawal decreases (16 to 19 years) and increases (22 to 25 years) could offer insights into the underlying mechanisms of the network-withdrawal relationship.
Three Trajectories of Social Withdrawal
We found three distinct withdrawal trajectory groups: a low-stable group (71.8%), a high-decreasing group (12.0%), and a low-curvilinear group (16.2%). Most individuals had consistently low levels of withdrawal, which was expected considering that most community cohort studies report low levels of withdrawal or other problem behavior. Our post-hoc analyses indicated that this group was well adjusted, with high initial levels of social affiliation and low initial levels of shyness, antisocial behaviors, reduced social contact, and anxiety. Furthermore, we found that even in this majority low-stable group social withdrawal increased from 22 to 25 years, providing further support for a normative increase in withdrawal during early adulthood.
The low-curvilinear group had higher withdrawal than the low-stable group when they were aged 16 and 25 years, but was no different from the low-stable group from 19 to 22 years. Notably, the social withdrawal levels of the low-curvilinear group deviated substantially enough from the low-stable group to distinguish these individuals as following a distinct trajectory. The higher withdrawal of the low-curvilinear group at 16 years could be due to unsociable or avoidant tendencies of these adolescents. At age 16, the low-curvilinear group endorsed more frequent participation in antisocial behaviors than the low-stable group, indicating higher externalizing behaviors which may have contributed to withdrawal via peer exclusion. These results may indicate that externalizing youth become less withdrawn during late adolescence and early adulthood due to decreases in externalizing behaviors, which promote greater peer acceptance (Bongers et al. 2003). The low-curvilinear group was also more withdrawn than the low-stable group at 25 years. This increase likely reflects the normative increase in withdrawal in early adulthood that was discussed previously, but the reason why the low-curvilinear group surpassed the withdrawal levels of the low-stable group after being at identical withdrawal levels for years is unknown. Future research should focus on nonanxious withdrawn adolescents, such as those with unsociable, avoidant, or externalizing characteristics. Our results point to the possibility that these individuals decrease in withdrawal during late adolescence and early adulthood, but further investigation into the reasons behind this decrease (e.g. greater sensitivity to social network changes) is warranted.
Finally, a considerable percentage of individuals were persistently withdrawn through adolescence and early adulthood.
The high-decreasing group reported the highest shyness, reduced social contact, and anxiety, and the lowest affiliation, at every time point, indicating that the high-decreasing group was the most maladjusted. Although this high-decreasing group had decreasing levels of withdrawal over time, they were considerably more withdrawn compared to those in the other two groups at every time point. The decrease in withdrawal could be due to the establishment of new relationships, albeit at a slower rate, or age-related improvements in social or coping skills. Regardless, withdrawal in this group might be maintained by a negative feedback loop described by Rubin et al. (2009). Withdrawn youth avoid interacting with peers, which limits opportunities to develop social skills. Limited social skills contribute to withdrawn behavior during peer interactions, which elicits negative feedback from peers. Negative peer feedback perpetuates negative self-beliefs and anxiety, leading to greater withdrawal. Through this cyclical process, socially withdrawn adolescents are unable to follow the normative social network expansion during adolescence. Relatedly, withdrawn individuals are at greater risk for psychopathology (e.g. anxiety, depression), which could further maintain withdrawal during adolescence and early adulthood. The specific factors and mechanisms that maintain these high levels of social withdrawal during adolescence and early adulthood remain unknown. Future studies should examine the mechanisms that maintain high levels of social withdrawal in some adolescents. One possible mechanism is anxiety, which is theorized to underlie the negative feedback loop mentioned previously. The relationships between social withdrawal and anxiety are still poorly understood (Kingerly et al. 2010) and more research is needed to determine how withdrawal and anxiety influence and perpetuate one another (Perez-Edgar and Guyer 2014).
Three withdrawal trajectories have been reported in previous studies. Apart from the current study, the only other study to examine the trajectories of social withdrawal in adolescents and early adults was by Tang and colleagues (Tang et al. 2017). They found three trajectories in participants ages 8 to 35 years, two trajectories of which were different from the trajectories found in our study. Consistent with Tang et al., as well as with the preadolescent literature (Booth-LaForce et al. 2012;Booth-LaForce and Oxford 2008;Eggum et al. 2009;Oh et al. 2008), we found that most individuals had low-stable levels of withdrawal over time. Inconsistently, we found a high-decreasing and a low-curvilinear trajectory instead of linear increasing and decreasing groups. These inconsistencies are related to the differences in how we conceptualized and measured social withdrawal. First, we used only self-reports to capture social withdrawal at every assessment wave while Tang and colleagues used self-and parent-reports at different ages. Different levels of withdrawal symptoms are found when using ratings from different informants due to the context in which behaviors are observed. Parents may underestimate the social interaction and social involvement of their children as youth become increasingly autonomous during adolescence or no longer live in the parental home in early adulthood. This underestimation could inflate withdrawal estimates, thereby influencing the shapes of the trajectories. Second, we included participants who were in a specific period of development, namely adolescence and early adulthood. Perhaps by focusing on these two short and adjacent developmental periods, we captured withdrawal processes which were more time-limited compared to the broader developmental periods included in Tang et al. Notably, Tang et al. included withdrawal assessments during 12-16 years and 22-26 years, leaving a gap during the major transition from adolescence to early adulthood (i.e. 16 to 22 years). Our study filled this developmental gap and zoomed in on the withdrawal patterns during this transition, thereby creating differences in the trajectory shapes from previous studies.
Strengths and Limitations
The current study was the second to include assessment points during adolescence and early adulthood when examining the longitudinal trajectories of social withdrawal, and the first to include multiple assessment waves during this transitional period to capture more fine-grained withdrawal changes during these ages. Prior to trajectory analyses, we established partial measurement invariance of the YSR and ASR withdrawal items. No previous study examined the measurement properties of these items in relation to developmental changes. In doing so, we have obtained further psychometric support for the use of the YSR and ASR withdrawal items, allowing for more social withdrawal research using these measures. Furthermore, our longitudinal design allowed us to examine curvilinear patterns of withdrawal, both at the mean and individual levels. Through our longitudinal design, we could also examine how trajectory groups differed on the same variable (i.e., shyness, affiliation, antisocial behaviors, etc.) over time, providing preliminary insights into the magnitude and stability of these group differences. Overall, this study contributed to the expanding literature on adolescent and early adult social withdrawal and increased our understanding of the normative and at-risk expression of withdrawn behaviors during these ages.
Results should be interpreted with consideration of several limitations. First, we used a global conceptualization of social withdrawal, which did not distinguish individuals based on underlying motivations for withdrawal. The five YSR and ASR items captured global behavioral characteristics of withdrawal such as shyness, preference for solitude, and refusal to talk; although they loaded on a single withdrawal latent factor, the underlying reason for endorsing an item can vary widely. An individual may "keep from getting involved with others" because they fear negative evaluation, because they are disinterested in others, or because they are excluded or neglected by their peers. Different underlying motivations for withdrawal contribute to different types of maladjustment (Rubin et al. 2009). Future studies are advised to examine the motivation for withdrawal and whether underlying motivations change over time. A second limitation is that there was some overlap between the ages of our participants in adjacent assessment waves. This may have increased the standard errors because older individuals could differ from younger individuals within an assessment wave, but it is unlikely that this within-wave heterogeneity caused systematic bias. Future studies could model withdrawal trajectories more sensitively with more homogeneous age groups or more frequent assessment waves. Third, we did not include T1 and T2 data because measurement invariance was not found when including these time points, possibly due to the low internal reliability of the social withdrawal items at these time points. On one hand, this means that subsequent results were robust and reflected real developmental changes; on the other hand, perhaps we applied stricter invariance criteria than necessary to draw valid conclusions about the invariance and exclusion of younger ages. The reasons behind and the developmental implications of the non-invariance of the withdrawal scores at younger ages are beyond the scope of this study, but seem worthy of exploration in future studies. Fourth, we relied solely on self-reported social withdrawal. Although we established measurement invariance of the self-reported social withdrawal items and believe self-reports of withdrawal are more suitable for early adulthood than other-reports, a multi-informant approach might capture withdrawal more validly and across multiple settings. Future studies should aim to include additional informants, such as parents, romantic partners, or observations, when examining social withdrawal in early adulthood. Finally, the large majority of participants were from an ethnically Dutch background, and participants from minority groups were heterogeneous. This prevented us from examining ethnic differences in social withdrawal trajectories. Future studies might include a more ethnically diverse sample to examine whether minority group status is a risk or protective factor for social withdrawal during early adulthood.
Conclusion
This study investigated the mean-level and individual trajectories of social withdrawal during adolescence and early adulthood. We found that the normative pattern of social withdrawal during these ages follows a U-shaped curve, with the lowest levels during late adolescence, and that individuals follow three withdrawal trajectories. Although most maintained low levels of social withdrawal throughout adolescence and early adulthood, 12% of individuals were persistently withdrawn.
These results indicate that social withdrawal continues to be a developmentally relevant behavior after childhood, impacting the lives of adolescents and young adults. Many questions remain about the roles and mechanisms of social withdrawal during adolescence and adulthood. | 2018-12-02T16:55:10.381Z | 2018-11-27T00:00:00.000 | {
"year": 2018,
"sha1": "e16643720414b1504655576e494e1c860186a0ca",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10802-018-0497-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e16643720414b1504655576e494e1c860186a0ca",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
233470084 | pes2o/s2orc | v3-fos-license | Performance and perception on front-of-package nutritional labeling models in Brazil
ABSTRACT OBJECTIVE: To evaluate the performance and perception of five models of front-of-package nutrition labeling (FOPNL) among Brazilian consumers. METHODS: Cross-sectional study, which applied an online questionnaire to 2,400 individuals, allocated randomly into six study groups: a control group and five others exposed to FOPNL (octagon, triangle, circle, magnifier and traffic light), applied to nine products. We evaluated the understanding of nutritional content, the perception of healthiness, the purchase intention and the perception of Brazilian consumers on the models. RESULTS: All FOPNL models increased the understanding of the nutritional content and reduced the perception of healthiness and purchase intention, when compared with the control group (41.3%). FOPNL warning models — octagon (62.4%), triangle (61.9%) and circle (61.8%) — performed significantly better than the traffic light (55.0%) regarding the understanding of the nutritional content. The performance of the magnifier (59.5%) was similar to the other four tested models, including the traffic light (55.0%), for understanding nutritional content. The individual analysis of the products suggests a better performance of warnings in relation to the magnifier and the traffic light for the perception of healthiness and purchase intention. Consumers were favorable to the presence of FOPNL, perceiving it as reliable to increase the understanding to nutritional information. CONCLUSION: FOPNL must be implemented on food labels in Brazil, considering that it increases the nutritional understanding, reduces the perception of healthiness and the purchase intention of products with critical nutrients. Warnings showed a better performance when compared with other models.
INTRODUCTION
Front-of-package nutritional labeling (FOPNL) is internationally recommended 1 as a tool to assist the consumer in interpreting quantitative nutrient statements in foods, which are generally difficult to understand and arranged in small print on the back of the package 2 . Almost half of the Brazilian population has difficulty interpreting nutritional information on food labels 3 . When consumers do not understand the content of products, their judgment regarding healthiness and, consequently, their purchase decisions are affected 4,5 .
Several countries have adopted different FOPNL models to help the consumer with this interpretation. Warning models (octagon, circle and triangle) inform, in a simple and direct way, whether the product has a high content of a given nutrient (sugars, fats, sodium). They have been more efficient in increasing understanding and, consequently, in reducing the perception of healthiness and the intention to buy the product, when compared with the nutritional traffic light, which informs the low, medium and high content of nutrients, or the Guideline Daily Amounts (GDA), which indicates the percentage of nutrients present in the product in relation to the recommended daily value [6][7][8][9][10] . In recent years, four Latin American countries - Chile, Peru, Uruguay and Mexico - have adopted the octagon-shaped warning FOPNL as mandatory [11][12][13][14] .
In Brazil, the National Health Surveillance Agency (ANVISA) approved, in 2020, a FOPNL model in a black rectangular format with a magnifier, similar to the model discussed in Canada 15,16 . However, only two studies have evaluated the performance of this FOPNL model: it was inferior to the octagon and the triangle in reducing the time needed to identify nutrients in excess among Brazilian adults 9 , and it was also inferior to the octagon, circle and triangle in increasing the understanding of nutritional content among adults from the United States, Canada, Australia and the United Kingdom 17 .
The performance of the FOPNL models can also be influenced by factors such as motivation for health, ease of preparation and price 18 , as well as by aspects related to the model's own design, such as its ability to draw attention, the ease with which the consumer can identify and process its information, familiarity with the FOPNL, and the perception of risk it generates 8,19-21 . Because of this, it is important to conduct local studies to identify the most appropriate FOPNL model for the population of each country 22 . There is a need for studies comparing the performance of different FOPNL models in Brazil, including the magnifier, which was investigated in only one of the two studies conducted with Brazilian adults that compared more than one FOPNL model 9,10 .
Therefore, the objective of this study was to evaluate the performance of five models of front-of-package nutritional labeling (octagon, triangle, circle, magnifier and traffic light) in increasing the understanding of nutritional content and in reducing the perception of healthiness and the intention to buy the product, in addition to identifying the perception of Brazilian adult consumers of these models and the importance of factors related to food choice.
Study participants
A cross-sectional study was conducted with a sample of 2,400 individuals randomly assigned to six study groups. The sample was drawn by quotas, being representative of the Brazilian population in relation to sex, economic class and the five macro-regions of the country. Participants were recruited digitally by a company specialized in online surveys that maintains a register of respondents. Invitations were sent only to people who met the quota profile pre-determined for the sample. As quotas were filled, invitations were sent only for the quotas that remained open.
The questionnaire was applied in August 2019. All individuals agreed to their participation by signing the informed consent form. The study was approved by the Ethics Committee in Human Research of the Faculdade de Ciências da Saúde of the Universidade de Brasília (protocol 67420817.7.0000.0030).
Sample Allocation in Research Groups
The participants were randomly allocated into six groups: 400 in the control group (CG) and 400 in each of the five exposure groups: 1) magnifier (n = 400); 2) circle (n = 400); 3) octagon (n = 400); 4) triangle (n = 400); and 5) nutritional traffic light (n = 400) (Figure 1). The inclusion of a control group allowed comparing the performance of the FOPNL models among themselves and the performance of each model individually in relation to the absence of FOPNL on the product.
Position and size of FOPNL models
The FOPNL models (Figure 1) were applied in the upper right corner of the main panel of the product, occupying different percentages of its area: 15% if the product had a high content of sugars, sodium and saturated fat; 10% when high in two of these nutrients; and 5% when high in only one of these nutrients. The magnifier always used 10% of the area.
The magnifier, octagon and triangle were shown in black and the circle in red color. The nutritional traffic light was shown in red, yellow and green colors, indicating respectively the high, medium or low levels of nutrients.
For the definition of low, medium and high nutrient content (free sugars, saturated fat and sodium), the most restrictive nutritional profile model proposed by ANVISA was adopted 23 .
Product selection
A panel of experts selected nine products commonly consumed by the Brazilian population 24 and usually perceived as healthy, despite having a high content of at least one nutrient (free sugars, fat and sodium) (Table 1). As in a previous study 9 , product images were prepared by a company specialized in graphic design exclusively for this research and did not contain health claims, trademarks or trade names, in order to neutralize the influence of these factors on the performance of the models.
Data collection
The questionnaire was organized into three sections: 1) characteristics of participants, 2) performance of FOPNL models and 3) consumer perception of FOPNL models.
In Section 1, the characteristics of the participants (sex, age group, education, income and region of residence) were identified, including the importance of factors related to food choice. Ten items were developed based on the Food Choice Questionnaire 18 , for which the participants evaluated the importance with answer options on a 5-point Likert scale, ranging from 1 - "not at all important" to 5 - "very important." The items completed the stem "I choose food": a) easier to prepare; b) whose place of purchase is close to me; c) for the price; d) that are healthier; e) that make me cheerful, relaxed, active/awake; f) natural, without additives or artificial/industrialized ingredients; g) with few calories, sugars or fats; h) from the brand I usually buy; i) equal or similar to what I ate in childhood; j) that do not harm the environment, preferring organic foods and avoiding foods with pesticides.
In Section 2, we evaluated the performance of the FOPNL models in increasing the understanding of nutritional content and in reducing the perception of healthiness and the intention to purchase the products shown. Individuals viewed the nine products, one at a time and in random order, with the FOPNL model corresponding to their randomization group. No individual was exposed to more than one type of FOPNL. While viewing each product, participants answered three questions. The first question measured the understanding of the nutritional content of the product: "in your opinion, does this product contain nutrients at higher levels than recommended for a healthy diet?". For the purposes of standardization and comparison between the FOPNL models studied, we chose to keep only this question even for the traffic light, although it was the only FOPNL model that quantified medium and low levels of nutrients 9,10 . The answer options were multiple choice: "too much sugar," "too much sodium," "too much saturated fat" or "does not contain any nutrients in too much quantity," and the participant could choose more than one answer option. Two other questions measured the purchase intention and the perception of healthiness of the products, with answer options on a 5-point Likert scale: "would you buy this product?" (1 - "I certainly would not buy it" to 5 - "I certainly would buy it"); and "you consider this product": (1 - "not healthy" to 5 - "very healthy"). The control group visualized the same products, but without any FOPNL model. If desired, the subjects of all groups could look at the nutritional information table and the list of ingredients of each product by clicking on a button located just below the image of the product.
In Section 3, participants' perception of the FOPNL models was evaluated in relation to "ease of identification," "reliability," "information processing" and "preference." These dimensions were based on the acceptability structure proposed by Nielsen 25 , already used in studies on FOPNL 19,20 . The questions were shown while the participant viewed, in isolation, the FOPNL model of their randomization group. To assess the ease of identification, the individual answered the following questions/statements: "1. Did you see this label on the product you evaluated?" (yes or no); "2. It was difficult to see this label on the package"; "3. I found the nutritional information more quickly with this label." For "reliability," the statement was "4. I trusted the information on this label." To evaluate information processing, the following statements were shown: "5. I understood the nutritional information more quickly with this label"; "6. I understood this label"; "7. I felt uncomfortable with this label." The following statement assessed preference: "8. I would like to find this label on food packaging." Also in this section, we investigated whether the FOPNL models induced in the participants the basic emotion of fear 25 with the statement: "9. The presence of this label made me afraid." Questions 2 to 9 had 5-point Likert scale answer options ranging from 1 - "I totally disagree" to 5 - "I totally agree." This section was shown only to the participants of the exposure groups.
[Table 1 caption: Nutritional composition of the products included in the study and content (low, medium and high) of nutrients associated with chronic non-communicable diseases, according to the criteria established in the preliminary report of the regulatory impact analysis on nutritional labeling. General Food Management, National Health Surveillance Agency, 2018.]
Statistical Analysis
To estimate the sample size, we considered a 95% confidence level, a maximum acceptable error of 2 percentage points, an alpha of 0.05 and a test power of 95%. The sample estimation considered a mean of 57.6% correct answers for understanding the nutritional content with the traffic light and 79.9% with the triangle 10 , and the use of the one-way ANOVA test, with an estimated effect size of 0.11. Thus, the study should include at least 210 adults per group; 100% was added to cover possible data loss or inconsistency, resulting in an estimated sample of 2,400 individuals. The calculations were performed in the G*Power 3.1.9.2 software 2 .
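An equivalent calculation can be sketched in Python with statsmodels' ANOVA power solver; the effect size entered below (Cohen's f) is only an illustrative placeholder and G*Power's conventions may differ slightly, so this is not a reproduction of the study's exact computation.

```python
from statsmodels.stats.power import FTestAnovaPower

# Solve for the total sample size of a one-way ANOVA with 6 groups,
# alpha = 0.05 and power = 0.95; effect_size is Cohen's f (placeholder value).
n_total = FTestAnovaPower().solve_power(effect_size=0.11, alpha=0.05,
                                        power=0.95, k_groups=6)
print(round(n_total), "participants in total, about", round(n_total / 6), "per group")
```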
To assess the understanding of the FOPNL, we first calculated, for each product, whether the participant answered correctly regarding the presence or absence of nutrients in excess. We then computed, for each participant, the percentage of correct answers across the nine products. Subsequently, the mean percentages of correct answers of the exposure and control groups were compared.
We also estimated, for the six groups, the mean purchase intention and the mean perception of healthiness of the participants for the nine products together and for each product individually. The visualization of the FOPNL model was expressed as a percentage. We also calculated the means of agreement, according to the Likert scale treated as a continuous variable, for the questions that evaluated the participants' perception of the FOPNL models.
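These descriptive computations could be organized as in the following sketch; the data layout, column names and response coding are hypothetical.

```python
import pandas as pd

# Hypothetical long-format data: one row per participant x product, with a 0/1
# flag for a correct "nutrient in excess" answer and 1-5 Likert ratings.
df = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "group": ["octagon", "octagon", "control", "control"],
    "correct": [1, 0, 1, 1],
    "purchase_intention": [2, 3, 4, 5],
    "healthiness": [1, 2, 4, 4],
})

# Percentage of correct answers per participant across the products viewed
pct_correct = df.groupby(["group", "participant"])["correct"].mean() * 100

# Group-level means: percent correct, purchase intention and perceived healthiness
print(pct_correct.groupby("group").mean())
print(df.groupby("group")[["purchase_intention", "healthiness"]].mean())
```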
Pearson's chi-square test (categorical variables) or one-way ANOVA with Tukey's post-test (continuous variables) were used to verify whether there were differences between the groups regarding the characteristics of the participants, the performance of the FOPNL models and the perception of the participants in relation to the FOPNL models between the groups. We considered a 95% confidence interval. All analyses were conducted using the Statistical Package for the Social Sciences (SPSS) software, version 23.0.
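A sketch of these between-group tests with SciPy and statsmodels rather than SPSS is shown below; the scores are simulated solely to make the example runnable and do not reproduce the study data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated per-participant percent-correct scores for three of the six groups
rng = np.random.default_rng(0)
control = rng.normal(41, 15, 400)
octagon = rng.normal(62, 15, 400)
traffic = rng.normal(55, 15, 400)

# One-way ANOVA across groups
f_stat, p_value = f_oneway(control, octagon, traffic)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")

# Tukey's HSD post-hoc pairwise comparisons
scores = np.concatenate([control, octagon, traffic])
labels = ["control"] * 400 + ["octagon"] * 400 + ["traffic light"] * 400
print(pairwise_tukeyhsd(scores, labels))
```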
RESULTS
Most of the sample of 2,400 adults was aged between 18 and 34 years (55.1%), 51.2% were women and 37.1% had completed high school. The characteristics of the participants showed no statistical difference between the six research groups.
The mean importance of the factors related to food choice (ease of preparation, proximity to the place of purchase, price, preference for healthier foods, foods with natural content, foods for weight control, and ethical concern in food choice) attributed by the participants was similar between the control and exposure groups (Table 2).
Understanding the nutritional content
In relation to the mean percentage of correct answers of the participants for the set of nine products, all FOPNL models performed significantly better than the CG (Table 3). In the presence of the octagon, circle and triangle, the percentages of correct answers were significantly higher than the percentage observed in the presence of the traffic light. The mean percentage of correct answers in the presence of the magnifier was similar to the percentage observed in the presence of the other four FOPNL models. In the analysis of each product individually, the mean percentage of correct answers in the presence of the octagon, magnifier, circle and triangle was higher than that of the CG for all nine products (Table 3). For the traffic light, the mean percentage of correct answers was significantly higher than that of the CG for eight of the nine products.
Perception of Healthiness
The performance of the five FOPNL models was significantly higher than that of the CG, reducing the mean perception of healthiness for all nine products (Table 3). In the analysis of the mean perception of healthiness for each product alone, the presence of the octagon was the only one that significantly reduced the participants' perception of healthiness for all nine products compared with the CG. The traffic light showed means lower than the CG for four products, and the magnifier for only three products.
[Table 3 notes (differences significant at p < 0.05): a mean percentage of correct answers for the nine products (0 to 100%); b mean percentage of correct answers for each product; c mean perception of healthiness for the set of nine products (1 - "not healthy" to 5 - "very healthy"); d mean perception of healthiness for each product individually; e mean purchase intention for the set of nine products (1 - "I would certainly not buy" to 5 - "I would certainly buy"); f mean purchase intention for each product.]
Purchase intention
The presence of FOPNL reduced the purchase intention in relation to the CG for the group of products investigated (Table 3), regardless of the FOPNL model. In the analysis of the mean purchase intention for each product individually, the octagon and the triangle showed significantly lower means than the CG for all nine products. The circle showed lower means than the CG for eight products. The magnifier and the traffic light showed means lower than the CG for only five of the nine products investigated.
Consumer perception of FOPNL models
There was a significant difference between the five FOPNL models for the "I saw the label" item. The percentage of participants who declared having seen the FOPNL ranged from 73.3% to 83.3%, being higher for the traffic light (83.3%) and the circle (79.0%) than for the octagon (73.3%). The agreement for the item "I understood this label" was higher for the octagon (4.59) when compared with the traffic light (4.41).
Consumers supported the presence of FOPNL, perceiving it as reliable to increase the understanding of nutritional information. There was a high degree of agreement (means greater than 4, on a 5-point scale) for all positive items of perception of the models. For negative items, such as discomfort or difficulty in identifying the model, low agreement was observed (means less than 2.5). Despite the low agreement, the mean of the octagon (2.58) for the item "it was difficult to see the label" was higher than the mean of the circle (2.21). For the item "the presence of this label made me afraid," the mean of the traffic light (2.34) was lower than that observed for the triangle (2.75).
DISCUSSION
Understanding nutritional content is considered crucial to evaluate the effectiveness of nutritional labeling 4,5 . Warning FOPNL models (octagon, triangle, circle) performed better than the traffic light regarding the understanding of nutritional content. The superiority of warnings over traffic lights had already been reported in previous studies 6,10,21,27 .
Different from what had been reported by a previous study 9 , the magnifier model had a performance similar to the traffic light for this issue, and several factors may explain these results. Information processing, familiarity with the symbol used, the ability of the model to capture the consumer's attention and its color are factors already evidenced as important influencers to understand nutritional content 9,27,28 .
Regarding the processing of information, the traffic light does not show the same objectivity as the warnings, which inform only the nutrients present in high content in the product. Thus, a product may have, for example, one red (high) and two green (low) labels, which may increase the perception of its healthiness; this is a possible limitation of the model 6,10 .
Regarding familiarity, the magnifier is the only model that is not widely used or standardized, being less familiar to the consumer than the warnings 9 . Familiarity with the symbol used in the FOPNL is essential to establish a fast and clear communication, enabling better understanding. According to the human information processing model, the internationally standardized and familiar warning signs are the triangle (sign most associated with risk), the octagon (associated with traffic stop sign), the traffic light and the red circle (used in traffic) 28 .
Understanding nutritional content is also related to attention capture, measured by the time necessary for the consumer to locate and visualize the FOPNL and the time required to identify the nutrients present in excess 27 . The traffic light and magnifier are FOPNL models that require longer attention capture time compared with warnings 9 . Warning models show images that repeat with each nutrient present in excess, drawing more consumer attention when compared with single-image models 29 .
Attention capture is also influenced by color, image or text presentation, position and symbol used in FOPNL [27][28][29] . The better performance of the warnings (octagon, triangle and circle) in relation to the traffic light, for understanding, may be related to color, since black captures attention faster, followed by red 27 .
All FOPNL models reduced the perception of product healthiness and the purchase intention when compared with the CG; however, in the individual analysis, the traffic light and the magnifier reduced the perception of healthiness and the purchase intention for fewer products than the warnings. When a product is perceived as unhealthy, the consumer is expected to reduce the purchase intention 8 . One explanation for the lower performance of the traffic light on these two questions may be the presence of "low content" information in green, a color usually associated with positive references, which may increase the perception of healthiness of the product even in the presence of a high content of another critical nutrient and, consequently, increase the purchase intention 6,9,10 . The lower performance of the magnifier may be related to the fact that it is the only model whose design is not familiar to the consumer, requiring more effort to interpret when judging the healthiness of the food and deciding on the purchase; in addition, it is not a model associated with risk, unlike the warnings, which have already been shown to reduce the purchase intention in previous studies 9,17,21,26 .
The participants' perception was favorable to the presence of FOPNL on food packaging, viewing the FOPNL as reliable and easy to see and interpret, and as improving the understanding of the nutritional content of the product, as already observed in similar studies 9,10,30 .
The percentage of participants who reported seeing the traffic light and the circle was higher than that observed for the octagon. The mean agreement for the item "it was difficult to see the label" was also higher for the octagon than for the circle. These subjective findings, measured from the participants' perception, differ from a previous study in which the difficulty of visualization was measured objectively, by the time required to see the FOPNL with the use of software, and in which the circle required more time than the traffic light and the octagon 9 . We expected that the model that is easier to visualize would also perform better regarding the understanding of nutritional content; however, the octagon was the warning model that showed the highest percentage of correct answers (62.4%) for understanding nutritional content. A contradiction between results obtained by objective and subjective measures was also reported by a previous study, suggesting that consumer perception may not accurately reflect the performance of FOPNL models 9 .
Regarding the item "the presence of this label made me afraid," caution is needed in the interpretation of the results. Studies suggest limitations of the approach to basic emotions in detecting different aspects of emotional experience 26 . The higher mean of this item for the triangle compared with the traffic light may, for example, be in line with psychology studies that report that fear is an emotion activated by potentially threatening situations or real dangers 31 . In addition, it may also be aligned with the human communication model, which reports that the triangle is the most risk-associated sign 27,28 . The warnings (octagon, circle and triangle) were similar in this respect, corroborating the human communication model that reports that they are more familiar warning signs to consumers, and are also more commonly associated with risk 26 . We suggest that other investigations deepen the study of emotions associated with the presence of FOPNL, not restricted to fear, evaluating in detail its factors and the generated behaviors 31 .
Regarding the limitations of this study, we suggest that future research include the simulation of factors present in the actual purchasing situation, such as limited time, presence of nutritional claims and advertising on the food label, as well as a greater number of products, including healthy and unhealthy products.
This study was conducted with a robust and diverse sample in terms of sex, age group, education, income, and region of the country. The control of these variables ensured homogeneity between the groups, which did not show statistical differences. Finally, it should be noted that the importance of factors related to food choice, such as price, ease of preparation, proximity to the place of purchase, and preference for healthier foods, was similar between the exposure and control groups, and therefore these were not factors influencing the performance of the FOPNL models in the studied sample.
CONCLUSION
This study showed that FOPNL increases the understanding of nutritional content, reduces the perception of healthiness and the intention to buy foods with a high content of sugars, saturated fats and sodium. Warning FOPNL models (octagon, triangle and circle) showed superior performance to the traffic light for understanding. The magnifier model showed less consistent results than the warning models (octagon, triangle and circle). Regarding perception, the results revealed that consumers are favorable to the presence of FOPNL in food packages.
The results of this study provide important inputs for public policy makers, reinforcing the need for and advantages of adopting FOPNL in Brazil. Such a measure is urgent in a scenario where studies already point to rising prices of healthy foods and falling prices of ultra-processed products in the coming years 32 . The choice of the FOPNL model to be adopted in the country should be discerning and consider the available scientific evidence, seeking the model with the greatest potential for good performance, aligned with the particularities of the population that will benefit from it. | 2021-05-02T05:15:34.509Z | 2021-04-23T00:00:00.000 | {
"year": 2021,
"sha1": "0f8247e7c5944fa4ac02277dcd412538358c243c",
"oa_license": "CCBY",
"oa_url": "https://www.revistas.usp.br/rsp/article/download/185587/171562",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f8247e7c5944fa4ac02277dcd412538358c243c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234037619 | pes2o/s2orc | v3-fos-license | Research on Railway Ballast With the Optimal Sphere Filling by Using Discrete Element Model
DEM (discrete element method) is an effective method to study the dynamics of ballast track. The DEM assumes that the ballast is composed of spheres, and the optimal combination of spheres within a single ballast particle is key to studying ballast tracks with DEM. A DEM track ballast model is established from accurate ballast profiles obtained with a binocular 3D scanner and filled with spheres using the Bubble Pack algorithm. In this paper, a direct shear test is adopted to study the mechanical properties of track ballast filled in different ways, and the optimal sphere filling method is chosen using the DEM-based ballast model. The results show that: the volume filling rate ω of the ballast is negatively correlated with the radius ratio r of two adjacent small and large balls, and positively correlated with the overlap angle φ of the two adjacent balls; with the same volume filling rate, the smaller the ratio r, the greater the number of spheres and the closer the numerically calculated maximum shear stress is to the measured level; and for general engineering calculations, the optimal combination with a volume filling rate of 0.8 should be used.
Introduction
DEM (discrete element method) is an effective method to study the dynamics of ballast track. As numerical computation has developed, the basic research unit of DEM has evolved from the early disc and sphere to arbitrary particles with irregular shapes. When modeling the ballasted track with DEM, a ballast particle with irregular shape is usually assembled from, or bonded by, multiple spheres. Therefore, scholars at home and abroad pay much attention to how to optimally fill the ballast with spheres, which is a key first step in building a DEM ballast track model.
Jing [1] used PFC 2D connecting rods to combine 7 two-dimensional discs into particle clusters to simulate the crushing mechanism of ballast under cyclic loading. Xiao [2] uses three-dimensional spherical particles to simulate the ballast, but the interlocking characteristics between ballast particles cannot be accurately simulated since the geometric characteristics of the ballast particles are not considered. McDowell [3][4] uses PFC3D connecting rods to generate composite particle clusters with irregular shapes, using bonding units with 2, 4, and 8 balls to simulate ballast particles in order to study the cracking laws of ballast particles and the influence of ballast particle shape on the performance of the ballast bed. The results show that bonding units with 8 balls enhance the interlocking between particles, which restrains the rotation of the particles in the sample to a certain extent and reflects the real circumstances better. Tutumluer [5][6][7][8] obtains ballast particle profiles by three-dimensional scanning, and selects 11 typical polyhedrons to simulate the real ballast particles. However, the shape of real ballast particles is quite random, and the selected polyhedrons cannot fully reflect the real ballast particles.
Research presents in this paper can help to better resolve these limits mentioned above.
Taghavi's proposal [9] is based on the Bubble Pack algorithm: a profile, after triangular meshing, can be filled with spheres whose packing is characterised by the radius ratio r of two adjacent spheres and their overlapping angle φ. These two factors can be controlled in order to fill different profiles. The Bubble Pack algorithm shows that when r is smaller and φ is larger, the larger and smaller balls are packed more densely and the result is closer to the real ballast profile.
In the particle discrete element method, the indicators to measure the filling of the ballast include the total number of spheres, the mass filling rate of the 3D ballast profile, and the volume filling rate of the 3D ballast profile. When the ballast is regarded as the homogeneous, its mass filling rate and volume filling rate can be converted to each other. In this article, the volume filling rate of the ballast serves as the control index. A track ballast model using DEM is established with accurate ballast profile that is filled with spheres obtained by using a binocular 3D scanner and the Bubble Pack algorithm; A direct shear test is adopted to study mechanical properties of the track ballast filled differently. The optimal sphere filling method is then given.
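As an illustration of the control index used here, the sketch below estimates the volume filling rate of a filled ballast as the ratio of the union volume of the filling spheres to the volume of the scanned profile. It is only a minimal Monte Carlo example: the sphere centres, radii, and profile volume are hypothetical placeholders, not output of the Bubble Pack algorithm.

import numpy as np

def sphere_union_volume(centers, radii, n_samples=200_000, rng=None):
    """Monte Carlo estimate of the volume of a union of spheres.

    centers: (N, 3) array of sphere centres; radii: (N,) array of radii.
    A sample point is counted once no matter how many spheres contain it,
    so the overlap between adjacent spheres is not double-counted.
    """
    rng = np.random.default_rng(rng)
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)

    # Axis-aligned bounding box of the whole sphere packing.
    lo = (centers - radii[:, None]).min(axis=0)
    hi = (centers + radii[:, None]).max(axis=0)
    box_volume = np.prod(hi - lo)

    # Uniform samples in the bounding box; a point is "inside" if it lies
    # within at least one sphere.
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    inside = (d2 <= radii[None, :] ** 2).any(axis=1)
    return box_volume * inside.mean()

# Toy usage: two partially overlapping spheres filling a ballast whose
# scanned profile volume is assumed to be 1.0e-4 m^3 (hypothetical value).
centers = [[0.0, 0.0, 0.0], [0.03, 0.0, 0.0]]
radii = [0.025, 0.015]
profile_volume = 1.0e-4
omega = sphere_union_volume(centers, radii) / profile_volume
print(f"volume filling rate omega = {omega:.3f}")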
Particle Discrete Element Ballast Filling Modeling
From the above analysis, we can see that the value range of r is (0, 1] and the value range of φ is [0°, 180°]. A 3D scanner is used to obtain the profiles of multiple ballasts randomly selected in the laboratory with uniform pin-sheet coefficients. Based on the value ranges of r and φ, with 0.1 as the step length of r and 20° as the step length of φ, a 2-factor, 10-level orthogonal test is designed for each ballast. A DEM ballast filling test with 10 ballasts, 1000 working conditions in total, is carried out, and the average filling rate of the 10 ballasts is analyzed statistically when the test is finished. The results are shown in Figure 1. Figure 1 shows that the smaller r is, the greater the volume filling rate ω is, and the greater φ is, the greater ω is. In other words, ω, the volume filling rate of the ballast, is negatively correlated with r and positively correlated with φ, but not linearly. Within the studied ranges of r and φ, a fitting equation for ω in terms of r and φ, Equation (1), is obtained from the test data. Following the principle of the determination coefficient [10], the adjusted coefficient of determination and the prediction coefficient of determination of the fit are both close to 1, and the difference between the two is less than 0.2, indicating that the coefficients of the fitting equation are valid.
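The following sketch illustrates how such a fit and its determination coefficients can be checked. The response values are synthetic placeholders standing in for the measured average filling rates, and the quadratic surface is only an assumed form; the actual Equation (1) is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical orthogonal-test results: one mean filling rate omega per
# (r, phi) combination; real values would come from the DEM filling tests.
r_levels = np.arange(0.1, 1.01, 0.1)          # radius ratio r, 10 levels
phi_levels = np.arange(0.0, 181.0, 20.0)      # overlap angle phi (deg), 10 levels
R, PHI = np.meshgrid(r_levels, phi_levels, indexing="ij")
omega = 0.55 - 0.25 * R + 0.0015 * PHI + rng.normal(0.0, 0.01, R.shape)

# Quadratic response surface omega = f(r, phi) fitted by least squares.
X = np.column_stack([np.ones(R.size), R.ravel(), PHI.ravel(),
                     R.ravel() ** 2, PHI.ravel() ** 2, (R * PHI).ravel()])
y = omega.ravel()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# Ordinary, adjusted, and (PRESS-based) prediction coefficients of determination.
ss_res = (resid ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
n, p = X.shape                                # p includes the intercept column
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)
hat = X @ np.linalg.inv(X.T @ X) @ X.T        # hat (projection) matrix
press = ((resid / (1.0 - np.diag(hat))) ** 2).sum()
r2_pred = 1.0 - press / ss_tot
print(f"R^2 = {r2:.3f}, adjusted R^2 = {r2_adj:.3f}, prediction R^2 = {r2_pred:.3f}")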
Research on the optimal filling method with the same volume filling rate
Equation (1) shows that the volumetric filling rate ω of the ballast is related to the small-to-large radius ratio r and the overlap angle φ. When the volume filling rate of the ballast is smaller than 0.5, that is, when the volume of the filling spheres is less than half of the actual ballast profile, there is a large difference in mass and volume between the filled ballast and the actual ballast. Therefore, only the filling cases with ω > 0.5 are studied.
The Hertz-Mindlin contact constitutive model is adopted, and the DEM contact parameters are determined with reference to the research of Liu et al. [11]. Taking ω = 0.8 as an example, the discrete element direct shear test is performed on ballast particles with different filling combinations, as shown in Figure 2.
Figure 2: DEM direct shear test diagram.

The comparison between the numerical results of the direct shear test and the actual test of the same size [12] is shown in Figure 3. Figure 3 shows that, with the same volume filling rate, the smaller the ratio r is, the greater the number of spheres and the closer the numerically calculated maximum shear stress is to the measured value.
Research on the optimal filling method with different volume filling rates
Following Section 2.2, the optimal filling method for different volume filling rates is screened out, and the corresponding maximum shear stress is calculated, as shown in Table 1. Table 1 shows that when the volume filling rate is 0.8, the change rate of the maximum shear stress relative to the actual test is 4.7% < 5%. Therefore, for the best balance of calculation accuracy and efficiency, general engineering calculations should adopt the optimal combination with a volume filling rate of 0.8.
3. The lateral resistance test of the gravel ballast bed based on the optimal discrete element packing method
A discrete element model of the gravel track bed composed of three Type III sleepers is established in accordance with the typical cross-sectional dimensions of China's single-track ballasted track. The Bubble Pack algorithm is used to optimally fill multiple conventional, needle-shaped, and sheet-shaped ballasts meeting the super ballast standard; the sleepers are modeled by wall units, and the balancing of the complex sleeper gravity is handled through the API interface.
After the model is built, the volume density of the ballast at the surface of the gravel ballast bed model is measured as 1.70 g/cm³. The middle sleeper is pushed laterally and uniformly at a speed of 1 mm/s to analyze the lateral resistance of the track bed, which is shown in Figure 5. Figure 4 shows that the simulated lateral resistance of the discrete element track bed and the measurement result in the lab are consistent, allowing for a certain inevitable error. It can be concluded that a discrete element ballast model with a volume filling rate of 0.8 can better reflect the engineering mechanical properties of the ballast.
Conclusion
A track ballast model using DEM is established with accurate ballast profile that is filled with spheres obtained by using a binocular 3D scanner and the Bubble Pack algorithm; A direct shear test is adopted to study mechanical properties of the track ballast filled differently. The optimal sphere filling method is chosen using a ballast model based on DEM. The results show: (1) The volume filling rate of ballast, ω is negatively correlated with r and positively correlated with φ.
(2) With the same volume filling rate, the smaller the ratio r is, the larger the number of spheres and the closer the calculated maximum shear stress is to the real test result.
(3) Therefore, for the optimal combination, a volume filling rate of 0.8 should be adopted in calculations. | 2021-05-10T00:03:27.244Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "fdd7a47758ed72de3fd52428a373986b467f3844",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/676/1/012092",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "bd49ebb6c775f397416cc0b45d9c227cffb50b80",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
118935026 | pes2o/s2orc | v3-fos-license | Satellite-to-ground quantum communication using a 50-kg-class micro-satellite
Recent rapid growth in the number of satellite-constellation programs for remote sensing and communications, thanks to the availability of small-size and low-cost satellites, provides impetus for high capacity laser communication (lasercom) in space. Quantum communication can enhance the overall performance of lasercom, and also enables intrinsically hack-proof secure communication known as Quantum Key Distribution (QKD). Here, we report a quantum communication experiment between a micro-satellite (48 kg and 50 cm cube) in a low earth orbit and a ground station with single-photon counters. Non-orthogonal polarization states were transmitted from the satellite at a 10-MHz repetition rate. On the ground, by post-processing the received quantum states at an average of 0.14 photons/pulse, clock data recovery and polarization reference-frame synchronization were successfully done even under remarkable Doppler shifts. A quantum bit error rate below 5% was measured, demonstrating the feasibility of quantum communication in a real scenario from space.
The final version of this manuscript is accessible through the Nature Photonics website: http://www.nature.com/nphoton/journal/vaop/ncurrent/full/nphoton.2017.107.html

Quantum communication can enhance not only the overall performance of conventional lasercom, but also provides a prerequisite platform for intrinsically hack-proof secure communication, i.e., QKD [12]. The satellite QKD technology allows a truly global-scale QKD, which cannot be covered by earthbound networks alone due to the inevitable losses of optical fibers. There have been significant efforts on developing the basic technologies for QKD in space, namely, terrestrial free-space quantum-communications [11,[12][13][14], demonstrations with moving terminals to emulate the motion of a satellite [15,16], experiments using orbiting objects such as passive corner-reflector satellites to receive single photons in a ground station [17,18], a program to miniaturize the QKD technologies for future cube satellite missions [19], and an experiment of quantum-limited coherent communication from a geostationary satellite to a ground station [20]. Recently a 600-kg quantum-communication satellite has been launched into orbit for QKD and quantum teleportation experiments [21]. However, it remains a greater challenge to demonstrate quantum communication with a small-size and low-cost satellite. If this could be done using a micro-satellite, the paradigm of satellite communications would change.
Here, we report the first satellite-to-ground quantum-communication experiment using a micro-satellite based on polarization encoding and the ground station with single-photon counters. The SOTA (Small Optical TrAnsponder) terminal, which is as light as 5.9 kg, onboard the micro-satellite SOCRATES (Space Optical Communications Research Advanced Technology Satellite), which is as small as 50 cm cube and 48 kg, transmitted Pseudo-Random Binary Sequences (PRBSs) of non-orthogonal polarization states at a 10-MHz repetition rate from a LEO using a wavelength of 0.8 μm. On the ground, the polarized quantum states at an average of 0.14 photons/pulse were received by a 1-m diameter telescope. By post-processing the received sequence of quantum states, clock data were successfully recovered even under remarkable Doppler shifts, and the polarization reference frame could be well aligned between SOTA and the ground station. Binary non-orthogonal polarization states were finally discriminated by a polarizing quantum receiver with the Quantum Bit Error Rate (QBER) below 5%, demonstrating the feasibility of quantum communication (e.g. B92 QKD protocol [22]) in a real scenario from space.
The SOTA lasercom terminal was designed to carry out feasibility studies on optical downlinks and quantum communications with a low-cost platform onboard the micro-satellite SOCRATES inserted in a LEO at an altitude of about 650 km. Instead of a fine-pointing mechanism, which usually requires an additional bulky payload, the transmission of the 0.8-μm signals is based solely on a coarse-tracking gimbal system with stepping motors. So far, optical downlinks of imaging-sensor data at different wavelengths (980 nm and 1550 nm) were successfully carried out, using On-Off Keying (OOK) modulation at 10 Mbit/s from SOTA, and a 1-m diameter telescope in the Optical Ground Station (OGS) at the NICT headquarters, in Koganei (Tokyo, Japan). An experiment on the effect of the atmospheric propagation on the polarization was performed using circular and linear polarizations transmitted from SOTA [11]. CNES (National Centre for Space Studies) could also successfully receive the signals from SOTA in the 1.54-m MeO OGS in Caussol (France), demonstrating satellite-to-ground links with adaptive optics to compensate atmospheric effects [9,10]. After the success of these experiments, we moved forward to the quantum-communication experiment based on binary non-orthogonal linear-polarization encoding to emulate the B92 QKD protocol [22]. The purpose of the quantum-communication experiment was to verify the feasibility of the polarization-encoded onboard-laser-transmitter technology in orbit and the photon-counting polarization-decoding technology on the ground through a space-to-ground slant atmospheric path. Since the laser beam divergence was widened to be able to track the OGS more reliably with the SOTA coarse pointing, brighter laser pulses than those required in QKD were used, although the optical signals arriving at the NICT OGS were photon-limited at 0.14 photons/pulse on average, which is in the regime of quantum communication.
The polarization encoding is the most reasonable option for quantum communications from space thanks to its stable propagation through the atmosphere while time-bin encoding is widely used in fiber networks [23].
The polarization-based quantum communication allows simple and compact implementations of transmitter and receiver systems with low-cost optical components because it does not require an interferometer, and hence can be adapted to an environment of strong mechanical vibrations. A big challenge in this kind of systems is the polarization reference-frame synchronization between the fast-moving LEO satellite and the OGS in order to perform a reliable implementation of the QKD protocol. Another challenge is the clock-data recovery using the received sequences of quantum states directly, which will enable compact and low-cost transmitter and receiver implementations. For a slant-atmospheric downlink from a LEO satellite, the Doppler shift is also an important factor to evaluate for precise clock-data recovery. The purposes of this work are to solve these two main issues in order to demonstrate correct decoding of polarized quantum-state sequences and finally be able to evaluate the QBER.
For these purposes, we transmitted repeating PRBSs generated by a linear feedback shift register with a period of 2^15 - 1 = 32767, the so-called PN15, encoding it into a signal sequence of binary non-orthogonal polarization states. Using this known bit pattern of PRBSs, we performed the necessary tasks for quantum communication, including clock data recovery, timing offset identification, bit pattern synchronization, polarization reference-frame synchronization, and decoding of the polarized quantum states.

Transmitter and receiver

Figure 1 shows a picture of SOTA (Fig. 1a), the configuration of the two linearly polarized laser diodes Tx2 and Tx3 (Fig. 1b) in SOTA, and the receiver telescope and the quantum receiver in the NICT OGS (Fig. 1c). The NICT OGS consists of the 1-m diameter Cassegrain telescope and the polarization-based quantum receiver. The incident light reflected by the primary and the secondary mirror passes through a tertiary mirror, made of aluminum to minimize the linear polarization deterioration. At this point, the beam after the tertiary mirror has a width of 3 mm, and is guided towards the quantum receiver installed at the Nasmyth bench of the telescope. A 1.5-μm-wavelength circularly-polarized laser beam transmitted from SOTA was used for satellite tracking purposes, and was separated from the 0.8-μm light using a dichroic mirror in the quantum receiver. This 1.5-μm beam was then guided to a photodetector and monitored using an IR camera.
The quantum receiver consists of beam splitters, polarizing beam splitters and half-wave plates, ending with four ports, where four Single-Photon Counter Modules (SPCMs of Excelitas Technologies Corp.) were used as detectors after coupling the beams to multi-mode optical fibers using converging lenses. Received photon counts were then time-tagged by a time-interval analyzer (HydraHarp 400 of PicoQuant) whose timing resolution is 1 ps, generating a time-tagged photon-count sequence for each SPCM. Table 1 shows the losses of the quantum receiver, the estimated atmospheric attenuation and the total loss budget for a 50° elevation angle. The total loss includes the coupling loss of the arriving beam into the receiver telescope and the following losses: the quantum receiver loss, with the SPCM coupling efficiency as the main source of loss; and the receiver telescope loss, including the reflectivity of the secondary and tertiary mirrors and a vignetting effect produced in the secondary mirror because it was replaced by a slightly smaller uncoated aluminum mirror to maintain the linearity of the Tx2/Tx3 polarizations. The atmospheric attenuation was calculated with MODTRAN (MODerate resolution atmospheric TRANsmission by Spectral Sciences) using the conditions of the quantum experiment.
Clock-data recovery and timing-offset identification
In the OGS, the clock data was first recovered by post-processing a part of the received photon-count sequence from the quantum receiver. Due to the heavy attenuation in the atmospheric path of the optical downlink and in the quantum receiver, which is roughly −87 dB to −70 dB in total, many optical pulses emitted at SOTA did not arrive at the SPCMs in the quantum receiver. Therefore a 10-Mbit block of the transmitted PRBS at SOTA was used for the clock-data recovery. Since SOCRATES was moving fast at a velocity of 7 km/s, with propagation distances ranging from 650 km to 1000 km, the Doppler shift through each optical-link campaign was expected to be within ±200 Hz around the clock frequency f0 = 10 MHz, with the shift rate (frequency drift) also changing over each pass.
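A minimal sketch of this kind of clock recovery is given below. It grid-searches a ±200 Hz window around the nominal 10-MHz clock for the frequency that makes the detection times most phase-coherent; the actual post-processing used in the experiment, and the handling of frequency drift within a block, are not reproduced.

import numpy as np

def recover_clock_frequency(time_tags, f0=10e6, span=200.0, n_grid=2001):
    """Estimate the received pulse-clock frequency from photon time tags.

    Searches a +/- `span` Hz window around the nominal clock for the
    frequency that maximises the phase coherence |sum_i exp(2*pi*j*f*t_i)|
    of the detection times.  Illustrative grid search only.
    """
    t = np.asarray(time_tags, dtype=float)
    freqs = f0 + np.linspace(-span, span, n_grid)
    power = np.abs(np.exp(2j * np.pi * np.outer(freqs, t)).sum(axis=1))
    return freqs[np.argmax(power)]

# Toy usage: sparse detections of a pulse train at 10 MHz + 87 Hz.
rng = np.random.default_rng(0)
f_true = 10e6 + 87.0
pulse_idx = np.sort(rng.choice(10_000_000, size=2000, replace=False))
tags = pulse_idx / f_true + rng.normal(0, 50e-12, size=pulse_idx.size)  # 50 ps jitter
print(f"recovered clock: {recover_clock_frequency(tags):.1f} Hz")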
Bit-pattern synchronization
After the clock-data recovery and the timing-offset identification, the time-tagged photon-count sequence in the time domain turns into a simple bit sequence in the bit domain. The next task is to establish the synchronization of bit patterns between the transmitted signal sequence and the received bit sequence. This could be made by calculating the cross correlation between the transmitted signal sequence and the received bit sequence for the period of PN15 PRBS, namely the 32767 bit length. Figures 3a and 3b present the experimental result on the cross correlation between the transmitted signal sequence and the received bit sequence. This has a peak at 29656, which corresponds to the offset.
By compensating this offset, we could finally synchronize the photon-count histogram in the bit domain with the transmitted bit patterns. Figure 3c shows, in the bottom, the histogram of photon counts summed up for 1 sec (a time span of a 10 Mbit transmitted sequence) after the bit patterns were synchronized, as well as the PN15 PRBS (top), the on/off sequences of Tx2 (second top), and Tx3 (third top). As indicated by red vertical arrows, one can recognize that whenever the quantum receiver registered finite counts, SOTA had always emitted optical pulses.
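The sketch below illustrates the two ingredients of this step: generating one PN15 period with a linear feedback shift register and locating the cyclic offset by cross-correlation. The feedback taps (x^15 + x^14 + 1) and the toy detection model are assumptions for illustration; they are not taken from the SOTA implementation.

import numpy as np

def pn15_sequence():
    """Generate one period (2**15 - 1 = 32767 bits) of a PRBS15 sequence.

    A maximal-length Fibonacci LFSR with feedback taps at bits 15 and 14
    (x^15 + x^14 + 1) is assumed here.
    """
    state = [1] * 15
    bits = np.empty(2 ** 15 - 1, dtype=np.int8)
    for i in range(bits.size):
        bits[i] = state[-1]
        fb = state[-1] ^ state[-2]
        state = [fb] + state[:-1]
    return bits

def find_bit_offset(received, prbs):
    """Return the cyclic offset that maximises the correlation between a
    (possibly sparse) received bit sequence and the known PRBS period."""
    r = np.asarray(received, dtype=float)
    p = 2.0 * prbs.astype(float) - 1.0          # map {0,1} -> {-1,+1}
    # Circular cross-correlation over one PRBS period, computed via FFT.
    corr = np.fft.ifft(np.fft.fft(r, p.size).conj() * np.fft.fft(p)).real
    return int(np.argmax(corr)), corr

# Toy usage: the received counts are a heavily thinned copy of the PRBS,
# cyclically shifted by an unknown offset (29656, as in the experiment).
prbs = pn15_sequence()
true_offset = 29656
shifted = np.roll(prbs, -true_offset)
detect = np.random.default_rng(1).random(prbs.size) < 0.01   # ~1% detections
received = shifted * detect
offset, corr = find_bit_offset(received, prbs)
print("estimated offset:", offset)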
Polarization reference-frame synchronization
The tracking of the polarization between SOTA and the OGS should be able to be realized by adaptively rotating a half-wave plate at the entrance of the quantum receiver, based on the orbital and telemetry information from the satellite. However, in the campaign on 5 th of August 2016, this polarization-tracking system was not used, and shortly after that the satellite operation of SOCRATES was terminated. In addition, the alignment of the receiver was not optimum because the quantum-communication experiment was carried out in parallel with other experiments within the SOTA mission. Therefore, we performed the polarization reference-frame synchronization by post-processing a part of photon counts registered at the SPCMs.
In principle, this task, whose purpose is to extract the polarization angle of the input state in front of the receiver telescope, could be done with two SPCMs with a polarizing beam splitter. However, because of a non-optimum alignment in this experiment, the situation was more complex. In fact, the polarization characteristics and hence the effective coupling efficiency of the input state into each port to the SPCM in the quantum receiver depends on the telescope elevation and azimuth angle. Therefore we first made a calibration chart to correct the relative sensitivity of each port for various telescope elevation and azimuth angles, by observing the light from different high-luminosity stars as reference point sources. As the light from the stars is not polarized, a rotating linear polarizer was inserted between the telescope and the quantum receiver to prepare linearly polarized light at different azimuth/elevation angles. Then given a known sequence of polarized states either from Tx2 or Tx3, we analyzed photon-count sequences from SPCM1, SPCM2, and SPCM4, because this combination maximized the signal-to-noise ratios in the polarization angle evaluation, and finally reconstructed the polarization angle which varied as SOTA was moving. This evaluation was performed for every second during the optical-link campaign from 22:58:03 to 23:00:18 on 5 th of August 2016. Figure 4b shows the variations of received power estimated by the total photon counts of the SPCMs for the sequences from Tx2 and Tx3, as well as the distance from the NICT OGS to SOTA.
In the first half period 22:58:03~22:59:00, the received power fluctuated, which may be due to unstable tracking.
On the other hand, in the last half period 22:59:00~23:00:18, the tracking could be more stabilized, the received power was increasing, and the observed curves of polarization angle variation could be fitted much better with the theoretical curves.
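As a simplified illustration of how a linear-polarization angle can be extracted from the photon counts behind the four ports, the sketch below uses the normalised Stokes parameters of the H/V and ±45° bases. It assumes equal detector efficiencies, whereas in the experiment the per-port sensitivities were first calibrated against unpolarized starlight, and the port assignment shown is hypothetical.

import numpy as np

def linear_polarization_angle(n_h, n_v, n_d, n_a):
    """Estimate the linear-polarization angle (degrees) from photon counts.

    n_h, n_v: counts behind the H and V polarizing-beam-splitter ports;
    n_d, n_a: counts behind the +45 and -45 degree ports.
    """
    s1 = (n_h - n_v) / (n_h + n_v)          # normalised Stokes parameter S1
    s2 = (n_d - n_a) / (n_d + n_a)          # normalised Stokes parameter S2
    return 0.5 * np.degrees(np.arctan2(s2, s1))

# Toy usage: counts consistent with light polarized at roughly -45 degrees.
print(f"{linear_polarization_angle(n_h=480, n_v=520, n_d=30, n_a=970):.1f} deg")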
Quantum bit error rate
Once the polarization reference-frame was established between SOTA and the OGS, we can estimate an essential parameter for the QKD protocol, i.e., QBER. In the B92 protocol which we emulate here, the bit information 0 and 1, denoted as inputs x=0, 1, are encoded into binary non-orthogonal quantum states.
According to quantum mechanics, it is impossible to distinguish them with certainty. Moreover, the more one tries to distinguish the two states, the more the states get disturbed. These quantum states are detected by a receiver which has three kinds of outcomes: (i) the detection of 0 with certainty, (ii) the detection of 1 with certainty, and (iii) an inconclusive outcome, for which both 0 and 1 remain possible and which is treated as a failure event. These outcomes are denoted as y=0, 1, and F. The inconclusive outcomes, y=F, are discarded; this process is referred to as sifting. After the sifting, one obtains the transition statistics N(y|x), which represent the number of events detecting y given the input x.
The QBER is given by QBER = [N(1|0) + N(0|1)] / [N(0|0) + N(0|1) + N(1|0) + N(1|1)], i.e., the fraction of erroneous detections among the sifted events. Since we did not track the polarization angle of SOTA but established the polarization reference-frame by post-processing in our experiment, we chose, for the QBER estimation, a 12-sec duration of 22:59:21~22:59:33 on 5th August 2016, in which the quantum states arriving at the OGS were actually -45° and -90°, which were originally emitted in H from Tx2 and in -45° from Tx3 at SOTA, respectively. For this configuration, the three outcomes in the quantum receiver correspond to the clicks at (i) SPCM3 for y=0, (ii) SPCM2 for y=1, and (iii) SPCM1 or SPCM4 for y=F (see Fig. 1 again for SPCM positions). The observed QBER was smaller than 4.9% and reached the minimum value of 3.7% at 22:59:25. The variation of the QBER around this time is shown in Fig. 5 for a wider duration of 1 min (22:59:00~23:00:00). The increase of the QBER outside the estimation interval, as marked by vertical dashed lines, is attributed to the non-optimal polarization reference-frame configuration of the quantum receiver for the input polarized quantum states, which arose because we could not use the polarization tracking between SOTA and the OGS.
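A minimal sketch of the QBER estimation from the sifted transition statistics is shown below; the counts are hypothetical and are not the measured SOTA data.

def qber_from_transition_counts(n):
    """Compute the sifted QBER from transition statistics N(y|x).

    `n` is a dict keyed by (y, x) with y, x in {0, 1}; inconclusive
    outcomes (y = F) have already been discarded in the sifting step.
    """
    errors = n[(1, 0)] + n[(0, 1)]
    sifted = errors + n[(0, 0)] + n[(1, 1)]
    return errors / sifted

# Hypothetical counts for one 1-s interval.
counts = {(0, 0): 5120, (1, 1): 4980, (1, 0): 210, (0, 1): 190}
print(f"QBER = {qber_from_transition_counts(counts):.3%}")   # ~3.8%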
Conclusion
We have demonstrated the first satellite-to-ground quantum-communication experiment with a micro-satellite as small as 50-kg-class. Our techniques of clock-data recovery, and polarization reference-frame synchronization directly from quantum states will enable compact implementation of a quantum communication system. The results of QBER<5% demonstrated the feasibility of quantum communication in a real scenario from space. | 2019-04-13T18:46:03.322Z | 2017-07-10T00:00:00.000 | {
"year": 2017,
"sha1": "c89c0e78c14fb07df08b745a3fb3bdc104262df8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1707.08154",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c89c0e78c14fb07df08b745a3fb3bdc104262df8",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
39097434 | pes2o/s2orc | v3-fos-license | Context-Based Topology Control for Wireless Mesh Networks
Topology Control has been shown to provide several benefits to wireless ad hoc and mesh networks. However these benefits have largely been demonstrated using simulation-based evaluations. In this paper, we demonstrate the negative impact that the PlainTC Topology Control prototype has on topology stability. This instability is found to be caused by the large number of transceiver power adjustments undertaken by the prototype. A context-based solution is offered to reduce the number of transceiver power adjustments undertaken without sacrificing the cumulative transceiver power savings and spatial reuse advantages gained from employing Topology Control in an infrastructure wireless mesh network. We propose the context-based PlainTC+ prototype and show that incorporating context information in the transceiver power adjustment process significantly reduces topology instability. In addition, improvements to network performance arising from the improved topology stability are also observed. Future plans to add real-time context-awareness to PlainTC+ will have the scheme being prototyped in a software-defined wireless mesh network test-bed being planned.
Introduction
Wireless mesh networks (WMNs) are increasingly used both as an inexpensive alternative to broadband provisioning in urban areas and as a primary method for broadband provisioning in rural areas.The most common form of WMN deployment consists of a two-tier architecture comprising an access and a backbone network.This type of WMN is commonly referred to as an Infrastructure WMN (I-WMN).Client devices connect to the I-WMN backbone which is typically self-organising and self-configuring.These backbone nodes, comprising Mesh Points (MPs), Mesh Access Points (MAPs), and Mesh Portals (MPPs), collaborate to maintain network connectivity and deliver traffic to the intended destinations (see Figure 1).
Despite the stationary nature of the I-WMN backbone, maintaining network connectivity is made difficult by the transient nature of wireless links, making them unreliable when deployed [1][2][3].Network connectivity is traditionally assured by ensuring that each device in the I-WMN backbone utilises its maximum transceiver power.The use of maximum transceiver power is disadvantageous, resulting in high levels of interference, increased contention for the shared transmission medium, a reduction in network capacity, and unnecessary transceiver power consumption.
As a result of the inefficiencies associated with maximum power consumption in ad hoc networks, several Topology Control (TC) schemes have been developed that can be applied to the WMN backbone in order to maintain network connectivity whilst reducing interference, enhancing the network capacity, and reducing transceiver power consumption.Within the context of TC, power consumption usually refers to the power consumed by a node's wireless transceiver.Power consumed by the wireless transceiver is reported to account for between 15% and 35% of the total energy consumed by the device [4].TC aims to enhance the QoS capabilities of the WMN backbone by optimizing the transceiver powers of all backbone devices whilst maintaining network connectivity.Distributed TC, however, does have the potential to add to the inherent variations experienced by wireless links by dynamically changing nodes' transceiver power levels.This may cause link quality to exhibit higher-than-normal variation or links may disappear altogether.
Figure 1: Infrastructure WMN architecture [13].

Several simulation studies [5][6][7][8] have demonstrated the efficacy of TC in ad hoc networks whilst the study of TC prototypes for I-WMNs remains rare. It is important to fill this gap in the literature since there is often a significant discrepancy between what models and simulations predict and the actual performance experienced in a deployed mesh network. TC implementation for the laptop [9] and sensor [10] platforms is available but these devices are not typical I-WMN backbone nodes. A study reported in [11] used a commercially available wireless router platform, but these routers were arranged in a string topology, which is an unrealistic topology for the mesh networking use cases described in [12]. The study also reported that the transceiver powers of the participant nodes were manually adjusted. A TC prototype based upon a popular hardware platform for I-WMN backbone nodes was reported in [14]. The prototype platform was also used as the basis for experimentation with various instances of the Critical Number of Neighbours connectivity strategy in [15]. The prototype, dubbed PlainTC, was found to be capable of maintaining network connectivity whilst achieving cumulative transceiver power savings.
Wireless network topologies experience inherent variations in link quality and the sudden breaking and reestablishment of links when deployed.This leads to temporary fluctuations in network performance as routing protocols and other QoS mechanisms adjust to the new network topology.These natural variations in link quality are difficult to eliminate and must be separated from the topology instability caused by varying the wireless transceiver power of a node as directed by a TC scheme.In the context of this paper, topology instability refers to the forced changes in the network topology caused by a TC scheme.
In this paper, we pay further attention to the PlainTC prototype.We first demonstrate the negative impact that the PlainTC prototype has on topology stability.To the best of the authors' knowledge this is the first instance of such an observation for a TC scheme.This observation is made possible by the use of a test-bed prototype and would have been prohibitively difficult to make with a simulation tool due to the tool's high levels of abstraction.This paper then proposes a context-based solution to reduce the topology instability caused by PlainTC.This solution allows for changes in a node's context to be identified and the quantum of context change to be computed.The quantum of context change is then used to regulate, using a context-change threshold, the adjustment of a node's transceiver power output.
The evaluation of PlainTC+ on our indoor I-WMN testbed indicates that the incorporation of context information produces a 45% reduction in the number of transceiver power changes when compared to the original PlainTC scheme.In addition, the reductions in the oscillations of observed neighbourhood size and link quality are indicators of reduced topology instability.The reduction in topology instability is also shown to improve the PDRs and throughputs being measured.These improvements are achieved without significant increases in the computational resources being required.
The remainder of this paper is organised as follows.Section 2 reviews earlier TC prototypes that can be considered applicable for an I-WMN.This section also reviews earlier attempts to address topology instability in I-WMNs.Section 3 demonstrates the negative impact of TC on topology instability.A context-based TC prototype is presented in Section 4 and our indoor I-WMN test-bed environment is described in Section 5.The test-bed evaluation of the PlainTC+ prototype is contained in Section 6 and the paper is concluded in Section 7. Section 7 also describes future work that would have PlainTC+ being prototyped in a software-defined I-WMN test-bed.
Literature Review
This section reviews existing TC prototypes that are applicable to I-WMNs.These prototypes are discussed in terms of their usage of context information and their control of topology instability.In addition, this section also reviews other attempts to reduce topology instability in I-WMNs.
Topology Control Prototypes Applicable to Infrastructure Wireless Mesh Networks. The overwhelming majority of TC schemes in the literature have only been subjected to simulation-based evaluations, and the reliance on this evaluation type has provided an idealised view of the efficacy of TC in wireless ad hoc networks. These TC schemes have not easily resulted in prototypes, forcing researchers to adopt simpler, more practical approaches in order to study the efficacy of TC in test-beds or deployed networks.
Numerous sleep-based TC prototypes can be found in the literature but these prototypes are better-suited for wireless sensor networks.Typical I-WMN usage scenarios discussed in [16] require that all backbone nodes remain available, thus rendering sleep-based TC schemes inappropriate.Therefore, only those TC prototypes that do not employ sleep states are considered in this review.
To the best of our knowledge, only five TC prototypes that can be considered for use in an I-WMN appear in the literature.The COMPOW, CLUSTERPOW, and MINPOW schemes were reported in [9].These schemes create routing tables for each specific power level supported by the wireless cards employed.Each scheme requires the modification of existing packet headers as well as the routing protocol messages, thus resulting in tight coupling between the schemes and the protocol stack.The evaluation of these schemes was limited only to the correctness of the routing tables created since the authors reported that the hardware used crashed repeatedly.These schemes are not context-aware and exacerbate topology instability due to the near-constant changing of transceiver power levels in order to maintain updated routing tables for each supported power level.
The scheme reported in [11] required the predetermination and subsequent manual tuning of a common transceiver power level, on all network nodes, to ensure network connectivity.This results in a completely static scheme that lacks scalability.The scheme was found to have a negative effect on a Network Layer routing protocol since the protocol performed best when the nodes operated at maximum transceiver power.The evaluation of the scheme was based on a string topology, which is an uncommon network topology for most I-WMN deployment scenarios.This scheme is not considered to be context-aware due to the manual tuning involved and topology instability is not a concern as the nodes are not involved in distributed decision-making.
The ConTPC power control scheme was presented in [17].ConTPC attempts to minimise packet loss caused by transceiver power reductions.A transceiver power reduction is only allowed if the reduction would not affect the delivery rate of the wireless link involved.This method ensures that only high-quality links are considered for transceiver power reductions.The mechanism for computing the delivery rate requires that probe packets be periodically sent at every permissible power level.The disadvantages of such an approach are that it adds to the existing communications overhead and may result in unstable network topologies as node power levels are continuously changing in order to send these probe messages at the various power levels.In addition, the sending and receipt of these probe messages are subjected to the prevailing scheduling mechanism which may result in a delayed view of network conditions.
The TC prototypes discussed above may actually exacerbate topology instability by continuously switching between the various supported transceiver power levels.Each change in transceiver power level causes a change in the network topology.
The PlainTC prototype presented in [14] attempted to maintain network connectivity whilst producing cumulative transceiver power savings. The prototype employed the Critical Number of Neighbours connectivity strategy [16] to achieve its goal. Nodes iteratively adjusted their transceiver power levels to maintain the required number of one-hop neighbours. This prototype differed from the previous ones by not requiring the broadcast of probe messages at every supported power level and being evaluated on Linksys WRT54GL devices, which are commonly used in wireless mesh network deployments. However, despite these differences, PlainTC may also exacerbate topology instability due to the transceiver power changes required to maintain the required neighbourhood size.
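A minimal sketch of one iteration of such a Critical-Number-of-Neighbours adjustment is shown below. The step rule, the power levels, and the target neighbourhood size are illustrative assumptions and do not reproduce PlainTC's actual timers or hysteresis.

def adjust_tx_power(current_power, neighbour_count, target_neighbours,
                    power_levels):
    """One iteration of a Critical-Number-of-Neighbours power adjustment.

    `power_levels` is the ordered list of transceiver power settings
    supported by the radio (illustrative values only).
    """
    idx = power_levels.index(current_power)
    if neighbour_count < target_neighbours and idx < len(power_levels) - 1:
        return power_levels[idx + 1]      # too few neighbours: step power up
    if neighbour_count > target_neighbours and idx > 0:
        return power_levels[idx - 1]      # too many neighbours: step power down
    return current_power                  # CNN satisfied: hold current power

# Example with hypothetical dBm settings of a WRT54GL-class radio.
levels = [0, 4, 8, 12, 16, 20]
print(adjust_tx_power(20, neighbour_count=7, target_neighbours=4,
                      power_levels=levels))   # -> 16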
Efforts to Control Instability in Infrastructure Wireless Mesh Networks. Instability has been recognised as being detrimental to the QoS offered by an I-WMN. Thus, there are existing efforts to control instability. Instability in the I-WMN is usually discussed from a network routing perspective, where link quality fluctuations and frequent route flapping are observed in deployed networks [18]. This observation led to the proposal of a stability-aware routing protocol in [18], and a stability-aware routing metric can be found in [19].
The work in [18] was motivated by observations of a deployed I-WMN.The network deployment made it easier to observe that the link quality varied often and route flapping was excessive.These observations point to the instability of the underlying network topology.Additional factors leading to network instability are revealed in [19].These factors include interference, topology and traffic patterns, congestion, and the presence of distributed decision-making agents whose local actions have wider-reaching consequences.These observations in [19] led to the development of a stabilityaware routing metric.
Both examples react to the instability of the underlying network topology whereas the TC scheme being proposed in this paper proactively seeks to minimise the topology instability caused by TC in I-WMNs.A demonstration of the topology instability caused by TC in I-WMNs is presented in the next section.
Demonstrating the Impact of Topology Control on Topology Instability
The distributed QoS mechanisms employed in wireless ad hoc networks work best in stable network topologies [19].However, topology instability is a feature of deployed networks.This section practically demonstrates the additional instability introduced by employing a distributed TC scheme in an I-WMN.The basis for this demonstration is the PlainTC prototype described in [14].
The Side-Effects of Topology Control Exposed by the OLSR Routing Protocol. The OLSR routing protocol is a useful tool to observe the side-effects of a TC prototype's iterative transceiver power adjustment process. These side-effects are exposed by the metrics measured by the OLSR routing protocol and can be thought of as depicting a micro-level view of PlainTC's iterative transceiver power adjustment process. Figure 2 depicts the effect of iterative transceiver power adjustment on the number of neighbours maintained by a node in the network. There is an almost-constant oscillation in the number of neighbours, which causes topology instability. Figure 2 only depicts this behaviour for a single node in the test-bed, but every node experiences a similar phenomenon. I-WMN deployments possess some inherent instability due to variations in link quality, but this does not account for the almost-constant variations in the number of neighbours being observed via OLSR. The changes in the number of neighbours can only be caused by changes in a node's transceiver power levels, and these power levels are under the direct control of PlainTC's transceiver power adjustment process. The effect of the almost-constant variations in the number of neighbours is topology instability.
The topology instability negatively affects the operation of OLSR's ETX routing metric as paths to destinations are affected by the changing ETX values experienced by a node as depicted in Figure 3. Excessive route flapping is the result.Link quality is known to be subjected to natural variations but these variations are exacerbated by PlainTC's iterative transceiver power adjustment process.
Topology Instability.
According to [19], wireless ad hoc networks are inherently unstable due to the high bandwidth demands and dynamic traffic variations.Therefore, this particular experiment was carried out with PlainTC activated but without data traffic being transmitted so as to expose the amount of instability being measured.
The previous experiment has shown that the use of PlainTC, which is based upon iterative transceiver power adjustment, injects instability into the network.The instability caused by PlainTC is explored further and differs from the existing literature in that the instability being observed is caused by a TC scheme rather than being reacted to by a routing protocol as in [18] or a routing metric as found in [19].
This experiment was conducted over a 24-hour period to track the changes that occur in an I-WMN test-bed that employed PlainTC.Readings were collected every second and the number of recorded changes for each context variable was reported for every hour period.The test-bed is described in Section 5.The experiment logged the number of changes to important topology variables such as the transceiver power output level, neighbourhood size, network size, and link quality.These variables are chosen for their relevance to WMNs and TC but other variables such as speed, altitude, orientation, location coordinates, and available sensors may be more applicable for mobile ad hoc or sensor networks.
The four chosen variables collectively represent the network topology and changes to these variables signify changes to the topology.These variables were collected during the normal operation of a test-bed node.
The number of recorded changes for each context variable was reported across all the network nodes and these changes are presented for twenty-four one-hour intervals.As stated before, the test-bed carried no data traffic as the objective was to explore the global effect of PlainTC's distributed transceiver power adjustment process on topology instability.
The number of changes recorded for the four context variables is recorded in Table 1.The first hour recorded the greatest cumulative number of changes for the four context variables.This is due to the nodes adjusting their individual transceiver power levels (from the initial maximum power level) to satisfy the required connectivity requirements.It would be expected for the subsequent one-hour intervals to show little or no further changes in transceiver power output levels, neighbourhood size, and network size as the network topology should have stabilised within the first hour.This is clearly not the case.
Nodes are found to be constantly adjusting their transceiver power output levels throughout the 24-hour period.This is caused by nodes reacting to the transceiver power changes effected by their neighbours.Changes in transceiver power output levels made by neighbouring nodes caused a change in the neighbourhood size recorded by the affected node.The affected node responded by adjusting its own transceiver power output level.This action, in turn, caused the neighbours of the affected node to adjust their own transceiver power output levels in response.Thus a cycle of transceiver power output level adjustments and counteradjustments took place amongst the backbone nodes.
The cascading effect of changes in the transceiver power output caused changes in the other observed variables. The data contained in Table 1 shows that changes in transceiver power output levels have a multiplier effect on the other observed variables. Consider a node, node n, and its immediate neighbours. Node n adjusts its transceiver power output level. This change causes a change in its neighbourhood size (in order to maintain the required CNN) and link quality. The neighbours of n may record changes in their own neighbourhood sizes, network sizes, and link quality until the local topology stabilises. Note that n's neighbours have not had to adjust their own transceiver power output levels in order to experience changes to their own observed variables. This example shows that a single change in transceiver power output can cause changes in neighbourhood size, network size, and link quality amongst the originating node and each of its immediate neighbours. Thus, changing the transceiver power level caused multiple changes to be recorded for the other observed variables.
The observed changes contribute to instability in the network topology.This topology instability affects the QoS offered by the backbone network.This is because the frequent adjustment of transceiver power output levels causes changes in neighbourhood sizes.The changes in neighbourhood sizes affect the routes being maintained by the OLSR routing protocol as these neighbouring nodes are the potential next hop to an intended destination node.Constant changes to the neighbourhood size thus result in changes to the routing table maintained by OLSR.This, in turn, means that routes are constantly changing which causes an increase in OLSR's route creation and maintenance traffic.This additional routing protocol overhead consumes bandwidth at the expense of data traffic and thus the data throughput rates would be reduced in a network that is forwarding traffic.
It must be remembered that the instability measured in this experiment only captured the instabilities caused by the TC scheme as the network was not carrying data traffic.Other works in the literature have shown the instability caused by traffic forwarding in the I-WMN.Thus, PlainTC's use in a deployed I-WMN is likely to exacerbate the existing network instability.Solutions to the instability caused by traffic forwarding have been proposed in [18,19] in the form of a stability-aware routing protocol and routing metric, respectively, but these existing solutions are not able to reduce topology instability.A solution to reduce the topology instability caused by a TC scheme is required.Section 4 presents a context-based solution to reduce the topology instability introduced by a distributed TC scheme.
A Context-Based Topology Control Prototype: PlainTC+
PlainTC+ is an improvement upon the PlainTC scheme in [14].PlainTC+ uses context information to reduce the topology instability caused by TC in the I-WMN.This instability was demonstrated in Section 3.
4.1. Context Sources. Topology instability cannot be measured directly but it can be inferred from the number of recorded changes to the transceiver power output level, neighbourhood size, network size, and link quality variables, respectively, for the entire network. Thus, these four variables are employed as sources of contextual information for the context-aware PlainTC+ scheme.
The four variables represent a node's view of its local neighbourhood and the global network and are taken to represent the context of a node.A change to any of these four variables represents a change to the node's observed context.
4.2. Quantifying Context Changes. Changes in the observed context variables are referred to as events. The weights associated with each of the observed variables are meant to identify the high-impact events that PlainTC+ should respond to. This differs from PlainTC's current operation, where events taking place outside of a node's immediate vicinity cause the node to adjust its transceiver power output. The envisaged result is a reduction in the number of changes made to a node's transceiver power levels and a reduction in the multiplier effect that such changes have on the other observed variables. This multiplier effect was shown in Section 3.
The weights associated with each variable are meant to be used to quantify the change in context experienced by each node. The process that is established here can be used on any deployed I-WMN to determine the appropriate weightings for that particular deployment. The use of this process will be demonstrated using data obtained from an indoor I-WMN test-bed. The data found in Table 1 is specific to our test-bed, but the methodology employed to record the changes to the context variables and the process described here can be applied to any WMN.
The data contained in Table 1 is subjected to a statistical method described in [20] called Principal Component Analysis (PCA). PCA is commonly used to reduce the dimensionality of large data sets, but it can also be used to determine the contribution of each variable to the total variability contained in multivariate data. PCA is being employed, in this instance, to determine the contribution (weight) of each variable to the overall context change experienced by a node.
The FactoMineR package introduced in [21] is used within the R statistical environment to perform the PCA. Apart from being able to perform classical PCA, FactoMineR also has a mechanism for determining the contribution of a variable to the total variability of multivariate data. This mechanism is documented in [22]. This makes FactoMineR and its PCA methodology a reliable choice for determining the contribution of each of the four variables to topology instability.
The process defined by FactoMineR for calculating the variable weights is as follows.
Step 1. The first step is to load the contents of Table 1 into R.
Step 2. The second step is to perform the PCA on the input data.
Step 3. The third step determines the proportion of variation retained by the Principal Components. The amount of variability retained by each Principal Component is calculated from its eigenvalue. Eigenvalues less than 1 are commonly used to eliminate their associated dimensions, as these dimensions do not greatly account for the total variance in the data. Thus, only the first component is selected for further use, as it has an eigenvalue greater than 1 and accounts for approximately 84% of the total variance contained in the data. Contributions towards the total variance contained in the data are depicted in Figure 4.
Step 4. The fourth step determines the percentage contribution of each variable to the total variance explained by the identified components. These contributions correspond to the weights that can be associated with each variable and represent the relative impact of a change in that variable on the context change experienced by a node. This step represents the end of the PCA on the observed data.
Step 5. The fifth step uses the normalised weights (obtained in the previous step) to determine the appropriate context-change threshold for the network. The goal of the threshold value is to reduce the number of changes made to the transceiver power output level of a network node. The multiplier effect of the number of changes to a node's transceiver power output on the other context variables was shown in Section 3. Thus, reducing the number of transceiver power changes performed by a node will have a positive impact on reducing the number of changes observed for the other context variables. The role of the threshold is to act as a sentinel value that determines whether an observed change is sufficiently significant to warrant a change to a node's transceiver power output level. A numerical sketch of the weight-derivation steps is given below.
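To make the weight-derivation process concrete, a minimal numerical sketch is given below. It is a hypothetical Python/scikit-learn analogue of Steps 2-4 rather than the FactoMineR workflow used in this work: the change counts are placeholder values, not the contents of Table 1, and only the first principal component is retained, mirroring the selection made in Step 3.

```python
# Hypothetical sketch of Steps 2-4 (scikit-learn stands in for FactoMineR;
# the change counts below are placeholders, not Table 1 data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: observation windows; columns: number of recorded changes to
# transceiver power, neighbourhood size, network size, and link quality.
changes = np.array([
    [12, 30, 41, 55],
    [ 9, 25, 38, 60],
    [15, 33, 45, 58],
    [11, 28, 40, 52],
    [14, 31, 44, 57],
], dtype=float)

X = StandardScaler().fit_transform(changes)   # Step 2: PCA on standardised data
pca = PCA().fit(X)

eigenvalues = pca.explained_variance_         # Step 3: variance retained per component
print("eigenvalues:", eigenvalues.round(3))   # keep only components with eigenvalue > 1

# Step 4: contribution (weight) of each variable to the retained first
# component, normalised so that the four weights sum to 1.
loadings_pc1 = pca.components_[0] ** 2
weights = loadings_pc1 / loadings_pc1.sum()
variables = ["tx_power", "neighbourhood_size", "network_size", "link_quality"]
print(dict(zip(variables, weights.round(4))))
```

Applied to real per-node change counts, this kind of computation yields the normalised weights referred to in Step 4.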
4.3. Deriving a Context-Change Threshold. The context-change threshold is proposed to limit the number of transceiver power changes produced by the TC scheme. This threshold value quantifies the minimum amount of change required to trigger a transceiver power adjustment.
The weightings derived for each of the four observed variables and knowledge of the multiplier effect of the transceiver power variable on the other three variables provide valuable input into the formulation of the context-change threshold value. This value has to be sufficiently high to reduce the number of changes to the transceiver power output variable but sufficiently low to cause PlainTC to react to significant events.
A significant event is defined as one that takes place within the immediate neighbourhood of an affected node and affects a variable with network-wide impact, such as the network size. Less significant events originate from outside of the immediate neighbourhood. These definitions allow a node to react to significant events originating in its immediate neighbourhood by adjusting its transceiver power output, whilst maintaining the current transceiver power output for events originating elsewhere. A balance between reducing the number of transceiver power changes and reacting to significant events can thus be achieved.
The individual normalised weights obtained in Step 4 of the PCA process (0.1956, 0.2482, 0.2767, and 0.2795, respectively) are too low to ensure that the TC scheme only reacts to local events with network-wide impact. A threshold value of 1.0000 is too high, and this hurdle would only be met in extreme circumstances such as network partitioning or network merging immediately after the affected node has adjusted its transceiver power output. The TC scheme would thus be rendered unresponsive to all events, even those in its immediate neighbourhood, in this scenario.
The only means that a TC scheme possesses in order to react to network events is to adjust the transceiver power output of the affected node. Calculating the threshold using the transceiver power variable with any combination of the other variables would enable continuous adjustment of node transceiver power output levels. This is a situation to be avoided, as the aim of the threshold value is to reduce the number of transceiver power changes performed by the network nodes.
PlainTC currently adjusts its transceiver power output in order to maintain the desired connectivity at each network node. The number of neighbouring nodes to be maintained is derived from the network size variable, and this variable has been shown to change quite frequently. Thus, the connectivity value changes frequently in response to the network size variable and the required transceiver power adjustment is attempted. The transceiver power change should only occur if the change in the observed network size is caused by a change in the observed neighbourhood size. This would force the TC scheme to react only to events in the affected node's immediate neighbourhood which have network-wide impact. Thus, the network size and neighbourhood size variables must contribute to the threshold value. The link quality variable changes most frequently because it encapsulates natural variations in link quality as well as changes caused by TC. Thus, the link quality variable must also contribute to the threshold value. Therefore, the most appropriate threshold value to force the TC scheme to react only to local events with a network-wide impact is one that ensures that PlainTC+ will only adjust a node's transceiver power output in situations where the affected node simultaneously experiences changes in neighbourhood size, network size, and link quality. The selected threshold value is derived from the weightings determined in Step 4 and is computed as
threshold = 0.2767 + 0.2482 + 0.1956 = 0.7205. (1)
4.4. PlainTC+ Design. The execution logic consists of four phases, each one corresponding to a stage of the Autonomic Control Loop described in [23] and shown in Figure 5. Autonomous systems must collect information to determine the current situation, or context, in which they operate. The collected information is analysed to inform the adaptation decisions to be taken. These decisions are subsequently implemented to complete the adaptation response.
PlainTC+ is designed to employ the Autonomic Control Loop as follows. In the first phase, PlainTC+ collects the current node, neighbourhood, and network states. These states are comprised of low-level variables such as the current transceiver power level, the number of neighbouring nodes, and the total backbone network size. Thus, the current context of both the node and the network can be ascertained. This contextual information is then analysed in the second phase to quantify the change in context experienced by the node and to determine the current neighbourhood size. The neighbourhood size is used to maintain network connectivity, as this approach has been shown in [15] to be effective. PlainTC+ now decides whether to change the node's transceiver power output in the third phase. If the decision is made for the node to adjust its transceiver power output, then this decision is acted upon in the final phase. The node's transceiver power will be adjusted upwards or downwards depending upon the decision taken in the previous phase. PlainTC+ is designed to iteratively adjust its transceiver power to reach the required level, as this approach does not add to the computational complexity of the scheme. Thus, a faster convergence time is sacrificed for simplicity. Figure 6 depicts a single execution cycle.
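A self-contained sketch of this four-phase cycle is shown below. It is illustrative only: the state values are randomly generated placeholders for the node- and OLSR-supplied data, the CNN rule is an assumed stand-in for the one used by PlainTC, and the decision phase shown here only maintains the CNN; the additional context-change condition introduced by PlainTC+ is sketched separately after the discussion of Algorithm 1.

```python
# Illustrative skeleton of PlainTC+'s four-phase Autonomic Control Loop.
# All values and rules below are placeholders, not the deployed implementation.
import math
import random
import time

P_MIN, P_MAX, STEP = 1.0, 19.5, 1.0   # dBm limits of the WRT54GL radio

def collect_state():
    """Phase 1 (collect): node, neighbourhood, and network state (stubbed)."""
    return {"tx_power": 10.0,
            "neighbourhood_size": random.randint(2, 6),
            "network_size": random.randint(8, 14),
            "link_quality": round(random.uniform(0.5, 1.0), 2)}

def required_cnn(network_size):
    """Phase 2 (analyse): neighbour count needed for connectivity.
    The logarithmic form is an assumed stand-in for PlainTC's CNN rule."""
    return max(2, round(math.log(network_size)))

def decide(state, cnn):
    """Phase 3 (decide): move the power one step towards satisfying the CNN."""
    if state["neighbourhood_size"] > cnn:
        return max(P_MIN, state["tx_power"] - STEP)
    if state["neighbourhood_size"] < cnn:
        return min(P_MAX, state["tx_power"] + STEP)
    return state["tx_power"]

def act(new_power):
    """Phase 4 (act): placeholder for setting the radio's power level."""
    print(f"setting transceiver power to {new_power} dBm")

for _ in range(3):                     # in practice one iteration every two minutes
    state = collect_state()
    cnn = required_cnn(state["network_size"])
    act(decide(state, cnn))
    time.sleep(0.1)                    # shortened here; PlainTC+ uses a 120 s period
```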
The use of the Autonomic Control Loop in PlainTC+ does not impact on the performance of the scheme, as all the data being processed is always locally available at the node. In addition, there are no additional messaging overheads and latency introduced, since all the required data is collected in the normal operations of the routing protocol and the nodes do not collaborate with, or communicate their decisions to, each other.
PlainTC+ is an amalgamation of the original PlainTC algorithm and the results of the process to quantify topology instability. The process (contained in Section 4.2) produces two outcomes: the first outcome is the determination of the weights associated with each observed context variable and the second outcome is the appropriate context-change threshold value for the observed network.
The context change experienced by a node in the observed test-bed network is computed as the sum of the weights (determined in Step 4) of those variables in which the node has recorded a change; the full formulation, together with the transceiver power adjustment logic that uses it, is given in Algorithm 1. A change in any variable is thus mapped to the contribution of that variable to the total variability in the collected data, and the formulation quantifies the change that a node detects in its environment. The result is a normalised value in [0, 1] that is computed independently by each node and depends upon the variable changes being reported by that node. This quantified context-change value can then be compared against the appropriate threshold value within PlainTC+ in order to reduce the number of transceiver power output changes performed by the network nodes.
The resultant algorithm employed by PlainTC+ is shown in Algorithm 1. The use of the CNN value is augmented with the context-change threshold value when deciding to adjust a node's transceiver power output. The potential reduction of a node's transceiver power output is a key motivator for the use of a TC scheme in the I-WMN. Thus, PlainTC+ is designed to allow for the easy reduction of a node's transceiver power output by basing the adjustment decision only upon the CNN to be maintained.
Raising a node's transceiver power output requires that the quantified change being observed meets or exceeds the context-change threshold value. The threshold value is defined in line (8) and the additional context-change condition can be found in line (16) of Algorithm 1. These additions to the algorithm are meant to reduce the number of transceiver power adjustments made by a node whilst allowing for savings in a node's transceiver power output to be achieved.
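The effect of these two additions can be paraphrased in a few lines of code, shown below. This is a hedged paraphrase rather than a transcription of Algorithm 1: the assignment of the three smaller weights to individual variables is an assumption (only their sum, 0.7205, is fixed by Equation (1)), and power decreases are left ungated, as described above.

```python
# Paraphrase of the PlainTC+ adjustment rule: decreases are always allowed
# in order to maintain the CNN, while increases must clear the
# context-change threshold.  The mapping of the three non-power weights to
# individual variables is an assumption; only their sum (0.7205) is fixed.
WEIGHTS = {
    "tx_power": 0.2795,
    "neighbourhood_size": 0.2767,   # assumed assignment
    "network_size": 0.2482,         # assumed assignment
    "link_quality": 0.1956,         # assumed assignment
}
THRESHOLD = 0.7205                  # Equation (1)

def context_change(changed_variables):
    """Normalised context change: sum of the weights of the changed variables."""
    return sum(WEIGHTS[v] for v in changed_variables)

def adjust_tx_power(power, neighbours, cnn, changed_variables,
                    step=1.0, p_min=1.0, p_max=19.5):
    """Return the next transceiver power level (dBm) for a node."""
    if neighbours > cnn:
        return max(p_min, power - step)            # reductions are ungated
    if neighbours < cnn and context_change(changed_variables) >= THRESHOLD:
        return min(p_max, power + step)            # increases need the threshold
    return power

# Example: a node below its CNN that observes simultaneous neighbourhood,
# network-size, and link-quality changes may raise its power (prints 11.0);
# a network-size change alone does not clear the threshold (prints 10.0).
print(adjust_tx_power(10.0, 2, 3, {"neighbourhood_size", "network_size", "link_quality"}))
print(adjust_tx_power(10.0, 2, 3, {"network_size"}))
```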
It should be noted that the threshold value derived above is based upon data obtained from our test-bed, and this value is therefore unique to this particular test-bed. However, the procedure for deriving the threshold value is still applicable to any other WMN test-bed or real-world deployment, and this procedure will yield a value that is unique to that deployment.
In the event that other context variables are considered, the process for quantifying context changes in Section 4.2 can be used to determine their various contributions to the context change experienced by a node. These variables can be divided into variables that are affected by neighbourhood changes and those affected by network changes. The principle of only adjusting a node's transceiver power in response to significant events, as defined earlier in this section, can still be applied to determine a threshold value that will only allow responses to neighbourhood-originating events that have a network-wide impact. Therefore, the threshold value needs to incorporate all the identified neighbourhood-based context variables but exclude the variable whose changes are being minimised.
4.5. Implementation. Linksys WRT54GL nodes are used as the hardware platform on which to deploy PlainTC+. These devices are popular backbone nodes for I-WMN deployments due to their low cost, rugged reputation, and easily available tutorials and related support documentation.
The Linksys nodes are transformed into I-WMN nodes via the use of the OpenWRT firmware [24]. This firmware is a stripped-down version of the Linux OS that caters for the limitations imposed by embedded devices and wireless routers such as the Linksys WRT54GL. Apart from common embedded Linux tools such as uClibc, Busybox, and a shell interpreter, a package manager is also provided. Due to OpenWRT's modular design, the Linux kernel can be optimized to suit the underlying hardware platform whilst allowing the user-space environment to remain unaffected (requiring only a recompilation). Mesh networking functionality is enabled by the installation of the necessary routing and network management packages via the in-built package manager. The OpenWRT firmware also allows access to the NVRAM (non-volatile random access memory) partition of the Linksys node. This partition is the equivalent of the secondary storage facilities available on all possible I-WMN nodes. User-space packages can interact with the NVRAM partition. The 64 KB NVRAM partition stores configuration variables that span the entire logical protocol stack and is thus a ready-made source of cross-layer optimization data. It is also possible to alter the values of these configuration variables.
As the Linksys nodes were deployed with the OpenWRT firmware to provide the mesh networking capabilities, PlainTC+ has to exist within the OpenWRT ecosystem depicted in Figure 7. PlainTC+ is implemented as a user-space application that is initiated at node start-up and executes at two-minute intervals thereafter. The application is required to interact with the configuration variables (stored within the NVRAM partition) that control the magnitude of the transceiver power output of the Linksys nodes, as well as with the OLSR routing protocol, which is usually also implemented as a user-space application. This interaction is depicted in Figure 8. PlainTC+ relies upon the topology information collected during OLSR's normal operations. The total number of backbone nodes derived from OLSR's routing table is used to determine the appropriate CNN to be maintained. If the transceiver power output requires modification, then the value of the state variable associated with the node's transceiver power level is modified. This modification forces an OpenWRT firmware trigger that sets the updated value of the state variable on the wireless transceiver hardware. The change of transceiver power level is performed without having to perform a node reboot.
No modifications to the OpenWRT firmware were necessary. The advantage of this implementation approach is that the benefit of a cross-layer approach is gained without the need to re-engineer the protocol stack by specifying an additional protocol layer, its inter-layer interfaces, and the various communication messages that would be required; the implementation thus conforms to the logical architecture shown in Figure 9.
The Test-Bed Environment
The indoor mesh test-bed operated in a ground-floor laboratory in a 3-storey office building at the University of Zululand's main campus. The test-bed network operated in 802.11g mode on channel 6 in order to mitigate interference caused by a separate WLAN that was operational within the building.
The mesh test-bed consisted of 14 nodes placed in a 6 m × 4 m area, as shown in Figure 10. The node placement was determined by the availability of plug points, which is analogous to the coupling of nodes with existing infrastructure in real-world deployments. Each node in the mesh backbone consisted of a mains-powered Linksys WRT54GL router with the OpenWRT-based Freifunk firmware (version 1.7.2) used to provide mesh networking functionality. The Freifunk firmware uses the OLSR routing protocol by default, as this protocol has been successfully employed in large-scale WMN deployments.
The Linksys WRT54GL routers possessed a 200 MHz processor, 16 MB of RAM, 4 MB of flash memory, and a Broadcom 802.11b/g radio chipset. The wireless chipset allowed transceiver power output levels to be set from 1 dBm to 19.5 dBm, the latter value being the maximum power output recommended by the manufacturer.
Each node was connected via Ethernet through a switch to a central server. The use of the Ethernet port for data collection allowed network traffic to flow over the wireless mesh network while performance data and other statistical information flowed over the Ethernet network. Thus, the performance data did not interfere with the data flowing over the wireless mesh network. The central server also synchronised the clocks of all the network nodes in order for the collected data to be accurately time-stamped.
Custom data collection scripts were written and installed within each test-bed node. These scripts were designed to amalgamate the time-stamped data from the node and the routing protocol at one-second intervals. This data was "pushed" by each test-bed node to the central data collection server, where it was stored whilst awaiting further analysis. The data collected from the individual nodes included their current transceiver power output, network size, neighbourhood size, and the link quality associated with their various neighbouring nodes.
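A minimal sketch of what such a per-node collection loop might look like is given below. It is purely illustrative: the metric reader is a stub for the node- and OLSR-derived values, the server address and field names are invented for the example, and Python is used here only for readability rather than as a claim about how the actual scripts were written.

```python
# Illustrative per-node collection loop: read the four observed metrics once
# per second, time-stamp them, and push them to the central server.  The
# metric values are stubs and the server URL/field names are invented.
import json
import time
import urllib.request

SERVER = "http://192.168.1.100:8080/metrics"     # hypothetical collection server
NODE_ID = "node-07"                              # hypothetical node identifier

def read_metrics():
    """Stub: in the test-bed these values came from the node and from OLSR."""
    return {"tx_power": 10.0, "network_size": 14,
            "neighbourhood_size": 3, "link_quality": 0.87}

def push(sample):
    req = urllib.request.Request(
        SERVER, data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=2)

while True:
    sample = {"node": NODE_ID, "timestamp": time.time(), **read_metrics()}
    try:
        push(sample)
    except OSError:
        pass                                      # drop the sample if the server is unreachable
    time.sleep(1)                                 # one-second collection interval
```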
Data traffic was generated by using the iperf traffic generator (with its default settings) installed on each test-bed node. The central server remotely controlled the operation of these traffic generators. The test-bed supported two traffic scenarios: the intramesh traffic scenario, where the source and destination are within the test-bed backbone, and the Portal-oriented traffic scenario, where either the source or destination was resident outside of the test-bed backbone. For the Portal-oriented scenario, a PC accessible via the designated gateway node (node 10 in Figure 10) was set up to act as an external source or destination. These two scenarios are depicted in Figure 11.
The iperf tool allowed for the generation of both UDP- and TCP-based traffic. The UDP-based traffic was used in the intramesh scenario to emulate activities such as LAN gaming. The TCP-based traffic was meant to emulate web activities, where at least 80% of all traffic is carried via TCP.
The server was set up to initiate traffic flow from 90% of the network nodes to randomly selected destination nodes for the intramesh scenario. The server initiated traffic flows from 90% of the network nodes to the external PC for the Portal-oriented scenario.
The test-bed allowed for the evaluation of both constant-size and incrementally increasing network backbones. The constant-size networks were initiated with all the network nodes being switched on simultaneously, and a ten-minute stabilisation period was allowed to pass before data was collected. The duration of the data collection in the constant-size experiments was 60 minutes. Network sizes ranged from 8 to 14 nodes.
A second set of experiments was designed to examine the dynamic, or run-time, behaviour of PlainTC+ when nodes were added to an existing backbone network. These experiments began with an initial network size of eight (8) nodes, and edge nodes were incrementally introduced to the backbone network at ten-minute intervals. The initial 8-node network was given a ten-minute stabilisation period before data collection began. Data collection stopped after a duration of 70 minutes, with data being collected whilst each newly introduced node adjusted its transceiver power output. Thus, the collected data showed the response of the newly introduced nodes to the existing network and showed the reaction of the existing nodes to the introduction of a new node. With data being collected at one-second intervals, a total of 600 data points were collected by each node for each observed metric for each evaluation run. Each experiment was repeated five (5) times, as the volume of data collected posed a storage challenge.
Experimental Results
PlainTC+ is designed to reduce the topology instability caused by transceiver power changes. This is achieved by limiting the number of power changes performed by a node in the backbone network. PlainTC+ is evaluated against PlainTC to determine whether the context-change mechanism can successfully reduce topology instability.
6.1. Topology Instability. The numbers of changes to the observed network variables, when using PlainTC+, are listed in Table 2. This is compared to the outcome for PlainTC found in Table 1. PlainTC+ is found to significantly reduce the number of transceiver power changes recorded over a 24-hour period when compared to the original PlainTC scheme. This result is produced by using the threshold value to suppress unnecessary transceiver power increases, and compares favourably with a similar threshold mechanism described in [18] that reduced route flapping by 60%. It must be remembered that PlainTC+ only applied its context-change threshold mechanism to transceiver power increases. It is expected that PlainTC+ would have achieved a larger reduction in topology instability if the threshold was applied to transceiver power decreases as well. However, in the context of TC, such a move would have been at the expense of cumulative transceiver power savings. The reduction in the number of transceiver power output changes also caused reductions in the other variables observed within the network. Reductions of 36%, 37%, and 38% were recorded for the neighbourhood size, network size, and link quality variables, respectively. These values represent a significant improvement to the stability of the network topology.
6.2. Effect on the OLSR Routing Protocol.
The OLSR routing protocol reports both the number of neighbouring nodes and the link quality associated with each neighbour. These metrics are affected by transceiver power output changes, and the effect of a reduction in the number of transceiver power changes is depicted in Figures 12 and 13.
The restriction on transceiver power changes imposed by the context-change threshold helps to maintain a stable one-hop neighbourhood, as depicted in Figure 12. PlainTC+ causes a node and its neighbours not to deviate from the required CNN. This results in a stable neighbourhood size. Nodes may reduce their transceiver power output level to maintain the required CNN, but the context-change threshold causes the node to avoid increasing its transceiver power output to maintain a new CNN when a new node joins the network and this new node is not an immediate neighbour. Any increases in neighbourhood size are either caused by neighbouring nodes in the process of adjusting their own transceiver power outputs to maintain their CNN or caused by the natural variations in link quality, which cause a temporary loss of links to neighbours.
The improved network stability offered by PlainTC+ is also found to reduce the severity of the fluctuations in observed link quality. This is depicted in Figure 13. Nodes do not adjust their transceiver power output levels as frequently as with PlainTC. This improves the performance of the probe packets sent out by OLSR to calculate the ETX metric and leads to reduced route flapping caused by variations in the route cost metric.
6.3. Transceiver Power Output.
PlainTC+ results in a lower cumulative transceiver power output level compared to PlainTC (see Figure 14). The power savings are a result of PlainTC+ subjecting any increases in transceiver power output to the context-change threshold. This prevents nodes from increasing their transceiver power outputs in response to changes in network size that arise from outside of their immediate neighbourhood. The new CNN to be maintained in such scenarios is overruled by the context-change threshold. Therefore, nodes within an unaffected neighbourhood do not adjust their transceiver power output levels upwards with PlainTC+, whereas these same nodes increase their transceiver power levels when employing PlainTC.
6.4. Network Connectivity.
Figure 15 shows that PlainTC+ reduces the time to establish network connectivity when changes to the network size occur. This reduction (when compared to PlainTC) is caused by the positive effect of the context-change threshold on the workings of the OLSR routing protocol.
Network connectivity is determined by the availability of routes to all possible destinations. The reduction in network instability caused by the context-change threshold aids in the propagation of routing information by OLSR. The routing updates propagate faster due to the increased stability of links between neighbouring nodes. Thus, the routing tables at each network node report routes for every other node at a faster rate than with PlainTC.
6.5. Packet Delivery Ratio and Throughput. PlainTC+ outperforms PlainTC in delivering data traffic to the intended destinations as the network size increases. The improvements in topology stability offered by PlainTC+ result in PDR and throughput increases for both the intramesh and Portal-oriented traffic scenarios. These increases cause the data traffic performance of the network created by PlainTC+ to approach the performance achieved when employing Max.Power. Thus, PlainTC+'s performance gap to Max.Power is decreased despite PlainTC+ achieving even greater cumulative transceiver power savings than PlainTC. The PDR performance of PlainTC+ is contained in Tables 3 and 4, and the throughput results are contained in Tables 5 and 6.
6.6. Resource Consumption. PlainTC+ was observed to consume 371 KB of memory (2.3% of total memory), and the CPU utilisation equalled that of PlainTC at 0.3%. Therefore, the addition of the context-change mechanism did not significantly increase the recorded resource consumption.
Conclusion and Future Work
In this article, we demonstrated the negative impact that the PlainTC prototype has on topology stability. This instability was diagnosed to be caused by transceiver power adjustments being carried out under the auspices of a distributed TC scheme.
We subsequently proposed the use of context information to reduce the topology instability caused by PlainTC. The PlainTC+ prototype was presented, incorporating the collection of context information and using this information to filter out unnecessary transceiver power adjustments.
An indoor test-bed evaluation of PlainTC+ showed a 45% reduction in the number of transceiver power adjustments undertaken in the network. This reduction stabilised the neighbourhood sizes and link qualities experienced by network nodes. The improved topology stability enhanced the network performance, with improvements to the observed PDR and throughput metrics when compared to PlainTC. The performance gap to the Max.Power scheme was narrowed, and PlainTC+ offers a deployed I-WMN the possibility of achieving significant cumulative transceiver power savings with a minimal sacrifice in network performance.
Our future work seeks to provide a mechanism for the dynamic, real-time calculation of the context variables' contributions to the total variability in topology stability. The current process to determine these contributions, defined in Section 4.2, is backwards-looking, and the results may not be an accurate reflection of current network conditions. We are in the process of deploying a software-defined I-WMN test-bed, and the centralised software-defined network controller is ideally suited to the task of collating the context information from the network nodes and dynamically computing each context variable's contribution to the total variability in topology instability. The appropriate context-change threshold value can subsequently be determined in real time at the controller, and this threshold value can be pushed to the WMN nodes to regulate the adjustment of each node's transceiver power. PlainTC+ will thus become a real-time, context-aware TC mechanism with the ability to regulate nodes' transceiver powers in response to current network conditions.
Figure 2: Varying neighbourhood size as a result of transceiver power adjustments.
Figure 4: Contribution of dimensions towards the total variability in the observed data.
Figure 9: Placement of PlainTC with regard to the network protocol stack.
Figure 14: Dynamic adjustment of transceiver power by PlainTC+.
Figure 15: Dynamic establishment of network connectivity by PlainTC+.
Table 1: Number of changes to node variables over a 24-hour period.
Table 2: Number of changes in node variables over a 24-hour period.
Table 3: Average PDR achieved for intramesh traffic.
Table 4: Average PDR achieved for Portal-oriented traffic. | 2018-04-03T04:28:37.616Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "986479554cfbfee954783c3ef6754d978069f089",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/misy/2016/9696348.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "986479554cfbfee954783c3ef6754d978069f089",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
226776211 | pes2o/s2orc | v3-fos-license | STEM Outreach via Science Forensic Module: The Impact of the Near-peer Mentoring Approach
STEM education emphasizes the integrated study of science passing the boundaries of traditionally labelled disciplines while demonstrating its application in real life. Science forensic is an eye-catching subject for students, which implements the knowledge of biology, physics, and chemistry. The purpose of this study was to identify students’ interest towards STEM through science forensic module and the impact of the near-peer mentoring approach in the module for STEM outreach. This pilot study was conducted for the students of Sekolah Berasrama Penuh Integrasi (SBPI) Gombak with 36 participants. By using the Statistical Package for the Social Science (SPSS), the analysis revealed that 94.5% of students found the module interesting and 77% of participants agreeing that the module increased their interest to pursue their future study in the STEM-field. Focusing on demographics, this module received more positive responses from females and pure science stream students as compared to males and Islamic science stream, respectively. This result is consistent with the higher number of female students and pure science stream students in STEM-field study at university level. The near-peer mentoring approach showed a promising impact with 88% of students giving positive feedback on the credibility of mentors. The science forensic modules consisted of 8 main stations with the Fingerprinting station being the most popular (94.5%) and DNA profiling being the least popular (77.8%). One possible explanation of this is that the level of understanding for DNA profiling is harder with larger amounts of knowledge needed to be learned in a short period of time. Overall, the outcomes of this study suggest that exposing secondary school students to science forensic has a positive impact on their level of interest towards STEM education.
Introduction
The Malaysian Education Blueprint 2013-2025 includes STEM education as part of the national agenda to address the worrying trend of the declining number of students pursuing STEM-field courses at university level (Bahrum, Wahid, & Ibrahim, 2017). STEM education emphasizes the integrated study of science passing the boundaries of traditionally labelled disciplines while demonstrating its application in real life. Science forensic is an eye-catching subject for students, which implements the knowledge of biology (e.g. DNA fingerprinting), physics (e.g. blood splatter) and chemistry (e.g. toxicology), as well as mathematics, as integrated tools.
The purpose of this study was to identify students' interest towards STEM through science forensic module and the impact of the near-peer mentoring approach in the module for STEM outreach.
Methodology
This pilot study was conducted with the secondary school students of Sekolah Berasrama Penuh Integrasi (SBPI) Gombak with 36 participants. The module is based on a detective role-play setup where the students were divided into groups of 6 with the aim of solving a crime. There were 8 experimental stations in which they were required to analyse the samples collected from the crime scene. The stations were fingerprinting, blood typing, DNA profiling, bioinformatics, toxicology, forensic mathematics, geology, and trace elements. Each station was duplicated in a training room where the students were equipped with relevant knowledge and skills to solve the task. Students were allowed to switch between the training room and the experimental stations as long as the crime was solved within the expected given time frame.
Besides the eye-catching content, the near-peer mentoring approach was applied, where the mentors were selected from among trained university students aged between 18 and 22 years. These volunteer mentors were supervised by the lecturers, who acted as facilitators in the module. The module was evaluated at the end of the program via a short survey answered by the participants and analysed by using the Statistical Package for the Social Science (SPSS).

Results and Discussion

Figure 1 revealed that 94.5% of students found the module interesting and 77% of participants agreed that the module increased their interest in pursuing their future study in the STEM-field. The science forensic module consisted of 8 main stations, with the fingerprinting station being the most popular (94.5%) and the DNA profiling station being the least popular (77.8%). One possible explanation for this is that DNA profiling is harder to understand, with larger amounts of knowledge needing to be learned in a short period of time. A previous study has reported the effectiveness of forensic science in increasing students' level of confidence and motivation towards science (Marle et al., 2014). Another study has also claimed that forensic-based activity could encourage students to discover a diverse career in science (Miller, Chang, & Hoyt, 2010).
Focusing on the demographics (see Figure 1), this module received more positive responses from females and pure science stream students when compared to males and Islamic science stream students, respectively. This result is consistent with the higher number of female students and pure science stream students in the STEM-field study at Malaysian university level. However, there is a gender gap problem in certain STEM-fields especially the male dominant field of engineering. A STEM outreach program has been shown to address this problem by encouraging female students, boosting their enthusiasm, and tackling their perception of this male niche area in STEM (Levine et al., 2015).
The near-peer mentoring approach showed a promising impact with 88% of students giving positive feedback on the credibility of the mentors. This approach has been identified as guiding students to a visible education pathway as well as envisioning themselves as future scientists (Pluth et al., 2015). The smaller gender gap between the students and mentors could break the wall and allow a safe space for students to interact and learn science without being labelled or judged. In addition, this approach could also tackle the problem of science teachers with lack of STEM literacy. Several studies have highlighted this alarming issue in Malaysia (Amiruddin, Azman & Ismail, 2018;Mahmud et al., 2018) and the near-peer approach could also be applied to teacher-training where the mentors could either be university students or lecturers.
Conclusion
Forensic science reflects a good STEM module in which it encompasses all the STEM education elements. Early exposure to STEM education through the forensic science module could nurture the interest of the next generation towards science and technology. The study was conducted successfully showing a positive impact on students' level of interest towards STEM education through the implementation of this module. Similar extended problembased scenario modules and using the near-peer mentoring approach could be integrated in the STEM outreach program in order to catapult the interest of students as well as exposing them to a STEM career. | 2020-07-30T02:06:58.020Z | 2019-12-31T00:00:00.000 | {
"year": 2019,
"sha1": "477678b1c39000ba94c8ef0ac6e7cdf574d2933d",
"oa_license": "CCBY",
"oa_url": "http://journal.qitepinmath.org/index.php/seamej/article/view/76/70",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "139045e6407879f14aeca5b48652f8cc99249558",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
248740671 | pes2o/s2orc | v3-fos-license | Proteomic analysis reveals dual requirement for Grb2 and PLCγ1 interactions for BCR-FGFR1-Driven 8p11 cell proliferation
Translocation of Fibroblast Growth Factor Receptors (FGFRs) often leads to aberrant cell proliferation and cancer. The BCR-FGFR1 fusion protein, created by chromosomal translocation t(8;22)(p11;q11), contains Breakpoint Cluster Region (BCR) joined to Fibroblast Growth Factor Receptor 1 (FGFR1). BCR-FGFR1 represents a significant driver of 8p11 myeloproliferative syndrome, or stem cell leukemia/lymphoma, which progresses to acute myeloid leukemia or T-cell lymphoblastic leukemia/lymphoma. Mutations were introduced at Y177F, the binding site for adapter protein Grb2 within BCR; and at Y766F, the binding site for the membrane associated enzyme PLCγ1 within FGFR1. We examined anchorage-independent cell growth, overall cell proliferation using hematopoietic cells, and activation of downstream signaling pathways. BCR-FGFR1-induced changes in protein phosphorylation, binding partners, and signaling pathways were dissected using quantitative proteomics to interrogate the protein interactome, the phosphoproteome, and the interactome of BCR-FGFR1. The effects on BCR-FGFR1-stimulated cell proliferation were examined using the PLCγ1 inhibitor U73122, and the irreversible FGFR inhibitor futibatinib (TAS-120), both of which demonstrated efficacy. An absolute requirement is demonstrated for the dual binding partners Grb2 and PLCγ1 in BCR-FGFR1-driven cell proliferation, and new proteins such as ECSIT, USP15, GPR89, GAB1, and PTPN11 are identified as key effectors for hematopoietic transformation by BCR-FGFR1.
INTRODUCTION
Over the last half century, chromosomal translocations encoding functional oncogenic proteins have been identified as drivers of multiple cancers, and account for 20% of all malignant neoplasms [1,2]. With the emergence of personalized medicine and cancer genome sequencing, the characterization of these oncogenic fusions created by chromosomal translocations -which serve as drivers of specific cancers -is vital to advance therapeutic methods and improve outcomes.
Genomic studies have revealed the presence of many specific RTK fusion proteins as drivers of blood cancers [3]. In particular, fibroblast growth factor receptors (FGFRs), a subfamily of RTKs, have been identified as recurrent translocation partners in both solid and hematologic malignancies [4]. Constitutively activated FGFR1 fusion proteins give rise to 8p11 myeloproliferative syndrome (EMS), also known as stem cell leukemia/lymphoma (SCLL), which can progress to acute myeloid leukemia (AML) or T-cell acute lymphoblastic leukemia/lymphoma (T-ALL), dependent on the fusion partner gene [5,6]. Patients positive for FGFR1-driven SCLL often present with eosinophilia and have a poor prognosis, as these fusions are not responsive to first-generation tyrosine kinase inhibitor (TKI) therapies, and the one-year overall survival from time of diagnosis is 43% for SCLL patients [5,7]. Although both ponatinib and pemigatinib have been used to treat SCLL with mixed results, hematopoietic stem cell transplantation remains the only known curative option for SCLL patients, and few alternative treatment plans exist for those who are either awaiting or are unable to receive transplantation [8]. The poor prognosis and lack of molecular targeted therapies highlight SCLL as a critically unmet medical need.
This work focuses on the t(8;22)(p11;q11) chromosomal translocation which creates the Breakpoint Cluster Region-Fibroblast Growth Factor Receptor 1 (BCR-FGFR1) fusion protein. This fusion protein retains the coiled-coil dimerization/oligomerization domain and partial RhoGEF domain contributed by BCR, and a tyrosine kinase domain contributed by FGFR1. Our recent work demonstrated the importance of the Hsp90 protein chaperone complex for BCR-FGFR1 driven oncogenic activation, together with the importance of several salt bridges for stabilization of the coiled-coil dimerization domain of BCR [9].
Earlier work examining two FGFR1-containing fusion proteins, BCR-FGFR1 and ZNF198-FGFR1, provided important insights into mechanisms of cancer progression; specifically, this work identified the importance of the phospholipase PLCγ1 binding site at Y766 in the ZNF198-FGFR1 fusion, and the importance of the small adapter protein Grb2 binding site at Y177 in BCR-FGFR1 for progression of myeloproliferative disease in murine models [10]. From this work, the authors concluded that PLCγ1 represents a critical downstream pathway for ZNF198-FGFR1-induced disease, and that Grb2 activation was important for BCR-FGFR1 in the induction of CML-like leukemia in mice [10].
Building from these advances, our current work examines mutations in the PLCγ1 and Grb2 binding sites individually and when combined together in a double mutant within BCR-FGFR1. Importantly, our work finds that this Grb2 and PLCγ1 binding site double mutant is no longer biologically active. We exploit quantitative proteomic analyses to identify crucial protein-protein interactions necessary for BCR-FGFR1 activation. Thus, we are able to demonstrate a dual requirement for Grb2 and PLCγ1 for BCR-FGFR1-mediated oncogenic cell proliferation. We extensively profiled the differences in cell signaling between BCR-FGFR1 and the non-biologically active mutants BCR(Y177F)-FGFR1(Y766F) and BCR(Y177F)-FGFR1(K656E/Y766F), containing both Grb2 and PLCγ1 interaction site mutations, through proteomics analysis to elucidate the BCR-FGFR1 total proteome, the phosphoproteome, and protein interactome. This systematic study reveals the multisubstrate docking protein, Gab1, and the protein tyrosine phosphatase, PTPN11 (Shp2), as likely downstream targets of Grb2 and PLCγ1 in BCR-FGFR1-driven SCLL. Furthermore, we identified PLCγ1 as a potential therapeutic target to treat BCR-FGFR1-mediated SCLL using the PLCγ1 inhibitor U73122, and show that futibatinib, an irreversible FGFR inhibitor, suppresses downstream signaling and cell transformation. These data unravel essential roles of Grb2 and PLCγ1 in BCR-FGFR1-mediated oncogenic growth and suggest the importance of further investigation into PLCγ1 as a potential therapeutic target in treating SCLL.
BCR-FGFR1 requires Grb2 and PLCγ1 interaction for cell transformation and proliferation
During RTK-mediated signal transduction, Grb2, a small adapter protein, associates with SOS (son of sevenless), leading to Ras activation. Furthermore, the enzyme PLCγ1, a protein involved in cell growth and proliferation, has been known to play a role in cancer progression, yet the role of PLCγ1 in BCR-FGFR1-mediated malignancies is undetermined [11].
We constructed BCR-FGFR1 derivatives containing single mutations to abolish the Grb2 and PLCγ1 interaction sites, and BCR(Y177F)-FGFR1(Y766F), containing a double mutation abolishing both interaction sites (Figure 1A). These were assayed for NIH3T3 focus formation (Figure 1B and 1C). NIH3T3 cells expressing BCR(Y177F)-FGFR1 exhibited nearly a 50% decrease in focus-forming ability, while cells expressing BCR-FGFR1(Y766F) showed an 80% decrease (Figure 1B and 1C). Interestingly, the double mutant BCR(Y177F)-FGFR1(Y766F) completely abolished focus formation in this assay. Although the use of NIH3T3 cells, a murine fibroblast cell line, may be criticized as a proxy for hematopoietic cell cancer, nevertheless, this assay has routinely served as a useful biological readout for many different oncogenic fusion proteins [9,12,13].
STAT3 signaling along with Grb2 and PLCγ1 association are necessary for BCR-FGFR1 mediated cell growth
The cell signaling differences between BCR-FGFR1 and the non-transforming derivative, BCR(Y177F)-FGFR1(Y766F), remain unclear, particularly since this mutant retains tyrosine kinase activity contributed by FGFR1 [9]. Signaling analyses were performed in HEK293T cells, as they have previously been used in FGFR signal transduction and protein phosphorylation studies [12]. HEK293T cells expressing either BCR-FGFR1, the kinase-dead variant BCR-FGFR1(K514A), the single mutants, or the non-transforming double mutant BCR(Y177F)-FGFR1(Y766F) were analyzed for cell signaling differences by immunoblotting. HEK293T cells expressing BCR-FGFR1 display activation of MAPK, STAT3, and PLCγ1 pathways, while BCR-FGFR1(K514A), containing a kinase-inactivating mutation, was unable to activate downstream pathways (Figure 2A). HEK293T cells expressing the non-transforming BCR(Y177F)-FGFR1(Y766F) displayed a substantial decrease in STAT3 signaling and nearly total ablation of PLCγ1 phosphorylation, even while retaining tyrosine kinase activity (Figure 2A). Additionally, cells expressing BCR(Y177F)-FGFR1(Y766F) were unable to interact with Grb2 and PLCγ1, as seen through immunoprecipitation analyses followed by immunoblot analysis (Figure 2B). BCR-FGFR1(K514A), the kinase-inactive mutant, was unable to associate with either Grb2 or PLCγ1, suggesting that receptor kinase activity leading to tyrosine phosphorylation is required for this protein-protein interaction (Figure 2B, lane 3). These data suggest that BCR-FGFR1 may rely on the Jak/STAT pathway and interactions with Grb2 and PLCγ1 for cell proliferation, as BCR(Y177F)-FGFR1(Y766F) displays low levels of STAT3 activation and minimal association with Grb2 and PLCγ1 (Figure 2A and 2B). Furthermore, MAPK activation may be inconsequential for BCR-FGFR1-driven oncogenesis, as cells expressing BCR(Y177F)-FGFR1(Y766F) exhibited increased levels of MAPK phosphorylation despite the inability of this variant to transform NIH3T3 cells (Figure 1B).

Figure 2: (A) Downstream pathways potentially activated by either BCR-FGFR1, a kinase-inactivated BCR-FGFR1(K514A), BCR(Y177F)-FGFR1, BCR-FGFR1(Y766F), or BCR(Y177F)-FGFR1(Y766F) were examined. All pathways were detected by anti-sera directed towards each phosphorylated protein as shown; blotting for total protein is shown below each activated panel. (B) Protein interactions are shown by immunoprecipitation with anti-sera for Grb2 or FGFR1 followed by immunoblotting with anti-sera against either FGFR1 or PLCγ1 to detect protein interactions for BCR-FGFR1. Each experiment was performed a minimum of 3 times. (C) Graph of focus formation by BCR-FGFR1(K656E) and its derivatives in NIH3T3 cells. Each experiment was performed a minimum of 3 times, and standard error of the mean (SEM) is shown. (D) HEK293T cell lysate expressing BCR-FGFR1(K656E) or its derivatives subjected to immunoblot analysis. All pathways were detected by anti-sera directed towards each phosphorylated protein as shown, followed directly below by blotting for each total protein. Each experiment was performed a minimum of 3 times, and standard error of the mean (SEM) is shown.
Kinase-activating mutations in BCR-FGFR1 do not overcome a dual Grb2 and PLCγ1 interaction requirement
Kinase-activating mutations and gatekeeper mutations are commonly found in patients receiving TKI treatment [14]. Therefore, we introduced a kinase-activating K656E mutation (Figure 1A) to determine if a constitutively activated kinase would alter the potential requirement for Grb2 and PLCγ1 interactions with BCR-FGFR1 for cell transformation and signal cascade activation. The K656E mutation lies within the "YYKK" activation loop sequence in FGFR1 and is an activating mutation found in cancers as well as developmental disorders [4,15,16].
When assayed for focus formation, cells expressing BCR-FGFR1, or the kinase-activated variant, BCR-FGFR1(K656E), were biologically active and generated foci. However, when the double mutant was combined with the kinase-activating mutation, the resulting BCR(Y177F)-FGFR1(K656E/Y766F) was unable to transform NIH3T3 cells (Figure 2C). Cells expressing the non-transforming triple mutant, BCR(Y177F)-FGFR1(K656E/Y766F), containing a deficiency in both Grb2 and PLCγ1 interaction sites along with the kinase-activating K656E mutation, displayed a lack of PLCγ1 phosphorylation while maintaining FGFR1 activation loop phosphorylation (Figure 2D, lane 6). Additionally, these cells displayed increased levels of MAPK phosphorylation, similar to BCR(Y177F)-FGFR1(Y766F), despite the inability of either of these variants to transform NIH3T3 cells (Figure 2A and 2D). These data suggest that this kinase-activating mutation is unable to overcome the need for protein-protein interactions of BCR-FGFR1 with both Grb2 and PLCγ1 for oncogenic growth, highlighting the importance of these interactions as plausible therapeutic targets.
Characterization of the BCR-FGFR1 protein interactome and phospho-proteome
Examining the protein interactome and phosphoproteome of various oncogenes has led to the identification of important biomarkers and therapeutic targets in cancer [17][18][19]. Recent studies have utilized proteomic approaches to determine differences in cell signaling between BCR-ABL p210 and p190 isoforms [20]. We employed quantitative mass spectrometry to characterize the BCR-FGFR1-mediated protein interaction network, or interactome, as well as the BCR-FGFR1-mediated phospho-proteome. For these proteomic studies, four biological replicates of each sample were included to achieve statistical significance. Of importance, the inclusion of the biologically inactive, but kinase-activated mutant, BCR(Y177F)-FGFR1(K656E/Y766F), allowed the elimination of many interacting and phosphorylated peptides that might otherwise appear as authentic hits.
This interactome analysis detected over 3000 unique BCR-FGFR1 derivative complexes. To subsequently identify the interactome differences between BCR-FGFR1 and the non-biologically active mutants, interacting protein hits were screened against interactions with the kinase inactive BCR-FGFR1(K514A) mutant. Each interacting protein presented in this data was detected in at least three out of four biological replicates ( Figures 3B and 4A).
BCR-FGFR1 preferentially forms protein complexes with only seven proteins, including PTPN11 (Shp2), Gab1, ECSIT, USP15, and GPR89 in addition to Grb2 and PLCγ1, when compared to the biologically inactive mutants (Figures 3B and 4A). Of these identified complexes, BCR-FGFR1 interactions with PTPN11 and Gab1 are particularly interesting. PTPN11 is a well-studied tyrosine phosphatase, known to modulate oncogenic signaling pathways downstream of Grb2, while Gab1 is an adapter protein associated with Grb2, and known to activate signal transduction pathways [21,22]. Furthermore, ECSIT is an adapter protein known to activate the NF-κB signaling pathway, and USP15 is a deubiquitinating enzyme (DUB) responsible for ubiquitin chain cleavage on known substrates, ultimately leading to cancer cell survival [23,24]. GPR89, or G Protein-Coupled Receptor 89A, represents an effector for the RAS family member RABL3 in hematopoietic cells [25]. These data support previous studies demonstrating that PTPN11 inhibition reduces BCR-FGFR1-driven cell viability and leads to suppression of leukemogenesis in mice [26]. Discovery of the novel interacting proteins ECSIT and USP15 as potential targets in BCR-FGFR1-mediated cell growth will require further investigation to determine their role in SCLL progression.
Phospho-proteome analysis
We also wished to characterize the BCR-FGFR1 induced total proteome and phospho-proteome to further understand cell signaling differences between the fusion and the biologically inactive mutants. HEK293T cells expressing either BCR-FGFR1 or its derivatives were harvested in PBS, labeled with a tandem mass tag (TMT) [27], and subjected to IMAC and CST Y1000 phosphoenrichment prior to LC-MS/MS detection ( Figure 3A). The resulting phosphopeptides were then combined to provide greater overall coverage of the BCR-FGFR1 phosphoproteome.
This phospho-proteome analysis method resulted in the detection of over 5,000 phosphorylated proteins ( Figure 3C). As expected, BCR-FGFR1 demonstrated an increase in Grb2 and PLCγ1 phosphorylation, when compared to its biologically inactive mutants; furthermore, an increase in PTPN11 and TCP1 phosphorylation was also detected in BCR-FGFR1 ( Figure 3C). Of note, PTPN11 (Shp2) preferentially formed protein complexes with BCR-FGFR1 as seen through the interactome data ( Figures 3B and 4A). BCR-FGFR1 stimulates TCP1 phosphorylation, a protein involved in the TRiC chaperone complex [28], suggesting that TCP1 mediated protein folding may play a role in the regulation of the BCR-FGFR1 oncoprotein. The inactive BCR-FGFR1 mutant also demonstrated an increase in MAPK1, MARK2, and CDK1 phosphorylations ( Figure 3C, Table 1A).
The BCR-FGFR1 associated phospho-proteome demonstrates an increase in proteins associated with catalytic activity, signal transduction, and cell communication, as seen through gene ontology analyses (Table 1B). Overall, these data demonstrate that the BCR-FGFR1 phospho-proteome may be driven by Grb2, PLCγ1, and PTPN11 mediated signaling cascades, with the ultimate result of cell proliferation.
Total proteome analysis
The total proteome was analyzed to identify differences in protein expression that contribute to the activity of BCR-FGFR1. The BCR-FGFR1 proteome is associated with an increase in expression of several proteins, notably, ISG15, IFIT1, IRF9 and SP110, which are interferon response genes associated with JAK/STAT signaling (Table 1C, Figure 4B) [29,30]. Overexpression of these proteins may explain the increase in STAT3 activation seen in BCR-FGFR1 compared to biologically inactive derivatives. Furthermore, the proteomes of both BCR(Y177F)-FGFR1(Y766F) and BCR(Y177F)-FGFR1(K656E/Y766F) are associated with an increase in expression of 44 proteins and with a decrease in 8 proteins when compared to BCR-FGFR1 ( Figure 4B). Of these, GADD45A is a well characterized TP53 effector and stress-induced protein shown to induce overactivation of the MAPK pathway, resulting ultimately in apoptosis [31]. The overexpression of GADD45A may explain the increase in phosphorylated MAPK signaling in the BCR-FGFR1 biologically inactive mutants as seen by immunoblotting (Figure 2A and 2D) and phosphoproteome analysis ( Figure 3C, Table 1A). Overall, the total proteome of the BCR-FGFR1 fusion demonstrates an increase in cytokine stimulus and interferon response genes, while the biologically inactive mutants demonstrate an increase in apoptotic pathways, negative regulation of kinase signaling, and positive regulation of ubiquitination, as seen through gene ontology analyses (Table 1C).
Examination of PLCγ1 and Grb2 mutations on hematopoietic cell proliferation
To confirm the results presented in Figure 1, showing the effects of PLCγ1 and Grb2 mutations on NIH3T3 cell transformation, we wished to examine the biological effects of these mutations using a more relevant hematopoietic cell line. Previous studies have utilized either Ba/F3 or 32D hematopoietic cell lines to demonstrate oncogenic and proliferative potential in these IL-3 dependent cell lines [9,12,32,33]. Using 32D cells, expression of the double mutant BCR(Y177F)-FGFR1(Y766F) was unable to drive proliferation in the absence of IL-3 (Figure 5A). In contrast, cells expressing the single PLCγ1-site mutant, BCR-FGFR1(Y766F), proliferated as well as or better than BCR-FGFR1-expressing cells, while cells expressing the single Grb2-site mutant, BCR(Y177F)-FGFR1, exhibited reduced but significant proliferative ability. These data demonstrate that loss of either signaling interaction alone fails to block hematopoietic cell proliferation, and indicate a dual requirement for Grb2 and PLCγ1 interactions with BCR-FGFR1 for proliferation.
Futibatinib inhibits BCR-FGFR1 and BCR-FGFR1(K656E)-driven cell proliferation
Tyrosine kinase inhibitor (TKI) therapy is often prescribed to patients with FGFR fusions; however, while ATP-competitive FGFR inhibitors can deter tumor growth, patients commonly develop secondary kinase domain resistance mechanisms in response [37,38]. Futibatinib (TAS-120) is a non-ATP competitive, irreversible pan-FGFR inhibitor which binds covalently to a conserved cysteine in the P-loop of the kinase domain [38]. Furthermore, futibatinib has demonstrated clinical efficacy in patients harboring FGFR2-fusion-driven cholangiocarcinoma, and is in clinical trials to assess its efficacy in the treatment of solid tumors or myeloid and lymphoid neoplasms with FGFR1 re-arrangements (NCT04189445) [38]. 32D cells stably expressing BCR-FGFR1 were treated with increasing concentrations of futibatinib in the absence of IL-3 (Figure 5C), and exhibited a dose-dependent response to futibatinib treatment.
BCR-FGFR1 exhibits absolute requirement for both Grb2 and PLCγ1
Since the discovery of BCR-ABL, over 500 additional oncogenic fusion proteins have been identified as drivers of hematologic malignancies, emphasizing the importance of characterizing these drivers and their respective cancers [9]. While FGFR2 alterations and FGFR2 fusion proteins have been identified as drivers of intrahepatic cholangiocarcinoma [13,38,39], FGFR1 fusion proteins are implicated as drivers of stem cell leukemia/lymphoma. The use of TKI therapy treatment often results in acquired drug resistance in patients, often through secondary kinase-activating mutations, highlighting the need to develop alternative treatments [37].
We demonstrate here that BCR-FGFR1 relies dually on the small adapter protein, Grb2, and the phospholipase, PLCγ1, for biological activity and the activation of cell signaling pathways (summarized in Figure 6). Previous work demonstrated the dependence of BCR-FGFR1 on Grb2 for CML-like leukemia, and the importance of PLCγ1 for ZNF198-FGFR1-driven EMS-like disease [10]. Mutation of the Grb2 and PLCγ1 phospho-acceptor sites in BCR-FGFR1 abolished cell transformation ability and cell proliferation (Figures 1 and 5). While single mutations of either the Grb2 interaction site (Y177F in BCR) or the PLCγ1 interaction site (Y766F in FGFR1) reduced biological activity, both mutations were necessary for ablation of BCR-FGFR1-driven cell proliferation. Importantly, the BCR(Y177F)-FGFR1(Y766F) double mutant, despite being biologically inactive, retains tyrosine kinase activity; this demonstrates clearly that kinase activation alone is insufficient for biological transformation (Figure 2). Furthermore, addition of a secondary K656E kinase-activating mutation in BCR-FGFR1 did not overcome the dual requirement for Grb2 and PLCγ1 interaction for biological activity.
Our novel proteomic screen reveals for the first time the BCR-FGFR1 protein interactome, phosphoproteome, and total proteome (Figures 3 and 4). These data confirm that Grb2 and PLCγ1 interactions are necessary for BCR-FGFR1 mediated cell proliferation and identify Gab1 and PTPN11 as possible downstream effectors of Grb2 and PLCγ1 (Figure 3). Importantly, PTPN11(Shp2) inhibition has recently emerged as a therapeutic target in multiple cancer models [26,40]. A recent study has demonstrated that certain RTK fusion proteins have the ability to assemble into higher order membraneless protein granules, which activate Ras/MAPK signaling in a ligand independent manner [41]. Interestingly, Grb2, PLCγ1, PTPN11(Shp2) and Gab1 were all enriched in these RTK protein granules, suggesting that BCR-FGFR1 may also function in the same modality, with the additional identified proteins, USP15, GPR89, and ECSIT ( Figure 6). Recently, PLCγ1 inhibition has emerged as a therapeutic target for hematologic cancers and PLCγ1 phosphorylation status is a biomarker for metastatic risk in luminal breast cancer [11,42,43]. However, the importance of PLCγ1 for SCLL remained uncharacterized prior to this study. While this work clearly shows the importance of PLCγ1 for BCR-FGFR1-driven SCLL, through cell-based assays and quantitative proteomics, we further demonstrate that PLCγ1 inhibition reduces overall biological activity as seen through the assays performed with U73122 ( Figure 5). U73122, a known inhibitor of PLCγ1, was able to drastically decrease the biological activity of BCR-FGFR1, or of the kinase-activated BCR-FGFR1(K656E) mutant. These experiments yielded unequivocal results using NIH3T3 cell transformation assays, and of greater relevance to hematopoietic cancers, using the hematopoietic IL3-dependent cell line 32D. However, further experiments will be required examining PLCγ1 inhibitors in patient-derived cell lines and clinical studies to fully understand the efficacy of inhibiting this pathway.
ATP-competitive TKIs allow durable responses in patients with FGFR-driven tumors [44]. However, patients often develop acquired resistance to these inhibitors through the emergence of secondary kinase-activating mutations, as observed in FGFR2 fusion-driven intrahepatic cholangiocarcinoma [13,38]. Futibatinib, a non-ATP competitive irreversible pan-FGFR inhibitor, reduces BCR-FGFR1 and BCR-FGFR1(K656E)-driven cell transformation and cell signaling in a dose-dependent manner (Figure 5). Furthermore, futibatinib treatment resulted in a durable complete hematologic and cytogenetic remission in a patient with a PCM1-FGFR1 positive myeloid neoplasm. This demonstrates that futibatinib may be efficacious in treating BCR-FGFR1-driven SCLL, including in the presence of additional kinase-activating mutations. (Figure 6 legend: BCR-FGFR1 activity is driven by the phosphorylated Y177 binding site for the adapter protein, Grb2, within the BCR domain of the oncogenic fusion protein, and by the phosphorylated Y766 binding site for the membrane-associated enzyme, PLCγ1, within the FGFR1 domain. The proposed membrane-less protein granule [41] is represented in blue, containing the additional proteins found in our mass spectrometry interactome screen; these proteins include Shp2 (PTPN11), Grb2, PLCγ, Usp15, Gpr89, and ECSIT. Small molecule inhibitors of PLCγ, such as U73122, used in conjunction with FGFR1 inhibitors such as the irreversible TKI futibatinib, are able to efficiently abrogate the proliferative and oncogenic effects of the BCR-FGFR1 fusion protein.)
Implications for additional hematological cancers
Since the detection of BCR-ABL, BCR has been identified as a commonly occurring fusion partner in many other hematologic malignancies. Notably, BCR-PDGFRA, BCR-JAK2, and BCR-RET fusions have been established as additional drivers of myeloid and lymphoid neoplasms, while BCR-NTRK2 was identified as a potential driver of glioblastoma [45,46]. Clinical evidence suggests that patients who harbor these mutations benefit from personalized therapies, highlighting the importance of molecular testing and oncoprotein characterization. Identified BCR fusion proteins in patients contain at minimum the coiled-coil oligomerization domain and Grb2 binding site contributed by BCR, fused to a constitutively activated tyrosine kinase contributed by a partner gene [45]. Due to the many structural similarities between these identified fusion oncogenes, the results described in this study may be applicable to additional leukemias driven by BCR fusion proteins.
The quantitative proteomic profiling described here identified Shp2 and Gab1 as possible downstream effectors of Grb2 in BCR-FGFR1-induced malignancies. While Shp2 is essential in driving BCR-ABL mediated leukemogenesis [47], our results suggest that Shp2 also plays a vital role in BCR-FGFR1 driven hematologic malignancies. As the Grb2 binding site at Tyr177 in BCR is uniformly conserved among other BCR-fusion proteins, such as BCR-JAK2, BCR-PDGFRA, BCR-RET and BCR-NTRK2, our results suggest that Shp2 and Gab1 play an equally important role in cancers driven by these oncogenes as well. Furthermore, inhibition of Shp2 may be beneficial for these BCR-fusion protein driven hematologic cancers; however, this remains to be investigated.
PLCγ1: an emerging target for myeloid and lymphoid neoplasms
The membrane-associated phospholipase PLCγ1 is typically activated by RTKs and mediates downstream signaling and cell proliferation. However, PLCγ1 is overexpressed and mutated in various cancers including breast cancer, gastric cancer, colorectal cancer, T-cell lymphoma, and AML [11,48]. Activation of this enzyme is associated with cancer cell migration and metastasis, which has resulted in PLCγ1 emerging as a potential therapeutic target for cancer treatment [11,48]. In hematological malignancies, PLCγ1 is known to play an important role in AML leukemogenesis and is required for AML1-ETO induced leukemic stem cell survival; however, the role of PLCγ1 in SCLL was unknown prior to this study [11,49]. Through this work, we demonstrate that PLCγ1 is required for BCR-FGFR1-induced cell proliferation and establish PLCγ1 inhibition as a potential therapeutic target for SCLL. Furthermore, PLCγ1 inhibition may emerge as an alternative therapeutic option for imatinib-resistant CML cases.
Stem cell leukemia/lymphoma (SCLL) exhibits distinct clinical and pathological features, characterized by chromosomal translocations involving the FGFR1 gene at chromosome 8p11. Currently, 15 FGFR1 partner genes have been identified in SCLL, all of which contain a crucial dimerization domain, imperative for FGFR1 tyrosine kinase activity [4,7]. Due to the large number of FGFR1 partner genes, each with its own specific dimerization domain, inhibition of oligomerization or dimerization may not be easily feasible as a therapeutic modality for SCLL. However, all identified FGFR1 fusions in SCLL display a commonality in containing a PLCγ1 binding site at the C-terminus of FGFR1 at Tyr766 [4]. Due to this similarity across these FGFR1 fusions, PLCγ1 inhibition may be a beneficial therapeutic target in treating FGFR1 translocation induced myeloproliferative neoplasms.
The characterization of driver mutations in cancer is imperative, as it provides a mechanistic understanding of cancer progression. SCLL patients have a median one-year overall survival rate of 43%. This poor prognosis and the lack of molecularly targeted therapies highlight SCLL as a critical unmet medical need. This study provides new information concerning the dual roles of Grb2 and PLCγ1 as modulators of BCR-FGFR1-driven SCLL. Our use of quantitative mass spectrometry methods unraveled the BCR-FGFR1-mediated protein interactome and phosphoproteome. This comprehensive screen identified Shp2, Gab1, GPR89, USP15, and ECSIT as new proteins for further study, as they may be key effectors in hematopoietic transformation exploited by BCR-FGFR1. With the advent of personalized medicine, the characterization of oncogenic fusion proteins resulting from chromosomal translocations provides an opportunity to introduce molecular therapies. Our work highlights the importance of sequencing-based, mutation-specific therapies for FGFR1-induced hematologic malignancies.
DNA constructs
The mutations for BCR(Y177F), FGFR1(Y766F), and all other mutations described were introduced by PCR-based site-directed mutagenesis. Other clones were as previously described [9].
U73122 and futibatinib experiments
U73122 was obtained from Selleckchem (Houston, TX, USA) and futibatinib (TAS-120) was obtained from Chemgood (Glen Allen, VA, USA). Approximately 24 h after transfection, cells were starved with no FBS for 18 h. Stated concentrations of U73122 or futibatinib were added 14 h into the starvation period. Cells were then collected and lysed as described for immunoblotting and immunoprecipitation analyses. For experiments involving U73122 or futibatinib performed in NIH3T3 cell focus assays, cells were re-fed with the respective drug in 2.5% CS/DMEM media every 3-4 days, after which they were fixed and scored for transfection efficiency as described. The amount of drug was initially titrated for each assay in order to avoid toxicity to the various cell lines. Each experiment had a total of 2 technical replicates and 4 biological replicates.
Following cell lysis and protein digestion, peptides were labeled with Tandem Mass Tags (TMT) and fractionated by high pH reversed phase chromatography. The subsequent TMT-labeled phosphopeptides were sequentially enriched by Immobilized Metal Affinity Chromatography (IMAC) and anti-phospho-Tyrosine antibody. All mass spectra were analyzed with Spectromine software [52,53]. Statistical analyses of TMT total and phosphoproteome data were carried out separately using an in-house R script (version 3.5.1, 64-bit), including the R Bioconductor packages limma (Linear Models for Microarray Data) [53], ssGSEA [54] and MSstatsTMT (Mass Spectrometry statistical package) [27]. All gene ontology functions in Tables 1B and 1C met a p-value threshold of 1.0 × 10−3 and had a minimum of three protein hits per GO function. All determined phosphosites in Table 1A
Author contributions
MNP and DJD were responsible for project conceptualization. MNP, ANM, and DW performed research. ARC performed mass spectral data analysis. DJD reviewed all data and provided funding acquisition. MNP prepared the initial draft of the manuscript. MNP, ANM, ARC, and DJD revised the manuscript.
ACKNOWLEDGMENTS
We thank all current and past lab members, particularly Juyeon Ko and Clark Wang, for advice and encouragement, and Dan Crocker for additional support.
Consent for publication
All authors have contributed to this work and consent to this publication.
Availability of data and materials
All materials and data described herein will be fully available to members of the scientific community.
CONFLICTS OF INTEREST
Authors have no conflicts of interest to declare.
FUNDING
MNP gratefully acknowledges support from a UC San Diego San Diego Fellowship, and DJD gratefully acknowledges generous philanthropic support from the UC San Diego Foundation. Support to the SBP Proteomics Facility from grant P30 CA030199 from the National Institutes of Health is also gratefully acknowledged.
Editorial note
This paper has been accepted based in part on peer-review conducted by another journal and the authors' response and revisions, as well as expedited peer-review in Oncotarget. | 2022-05-13T15:07:21.088Z | 2022-05-11T00:00:00.000 | {
"year": 2022,
"sha1": "4d4665efafcee6965caa0146607619341fd944cf",
"oa_license": "CCBY",
"oa_url": "https://www.oncotarget.com/article/28228/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7823cb0d4d9d60608386ad9082aa688d1e8281b7",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
84181702 | pes2o/s2orc | v3-fos-license | YAng Integrated decIsIon on spare parts orderIng and equIpment maIntenance under condItIon based maIntenance strategy
Aiming to jointly optimize equipment maintenance and spare parts ordering management, a comprehensive decision model under a condition based maintenance (CBM) policy is presented for a single-equipment system with continuous and random deterioration. In this model, the equipment deterioration is a continuous Gamma process under continuous condition monitoring, and the spare parts inventory is controlled by the spare parts support probability. Firstly, a spare parts support probability model is developed to determine the optimal spare parts stock level S, which is set to meet a predetermined stockout probability requirement. Secondly, the equipment replacement and spare parts ordering decision is made to optimize equipment replacement and spare parts ordering jointly, based on the equipment deterioration level and the total operating cost of the system. Thirdly, an integrated decision simulation model is presented for evaluating the cost rate, availability and stockout probability. Finally, a numerical example is given to illustrate the performance of this model. The results show that the optimal preventive maintenance threshold obtained from the proposed decision model can satisfy the spare parts support requirements under the (S-1, S) inventory control strategy.
Introduction
Condition based maintenance (CBM), as one of the most important maintenance methods, has been studied extensively in recent years. Based on analysis of the failure mechanism and on test results obtained without disassembly, repair-or-replace decisions can be made and functional failures can be avoided to a large extent. The advantage of CBM is that the working state is continuously supervised, so that problems are detected and corresponding measures are taken in time. Effective prevention can thus be carried out before failures occur, and even serious failures can be prevented or excluded. As a result, CBM can prevent the occurrence of serious faults, greatly reduce the failure rate and repair costs, improve the availability of equipment, and change maintenance work from passive to active.
Since the 1990s, research reports on CBM modeling and optimization have grown steadily. Using CBM, engineers can make maintenance decisions on the basis of the equipment deterioration level and the health status inferred from condition monitoring data. In current engineering practice, the decision criteria of CBM are mostly based on the engineer's experience or the supplier's recommendations. However, working environments and workloads differ between companies, and therefore the rationality and scientific basis of these decision criteria have been questioned by researchers. Condition based maintenance modeling and optimization techniques have been studied widely [17]. Grall et al. [2,5,6] studied a class of single-equipment systems, modeled as a continuous stochastic deterioration process with the state observed by non-periodic inspection, and established an appropriate analytical model. However, these studies did not take into account the availability of repair spare parts. Dohi et al. [3] proposed an ordering and replacement strategy for a single-equipment system in which the decisions on spare parts ordering and system replacement are time-based; the delivery times of normal and emergency orders are random variables obeying different distributions. Sheu and Griffith [14] proposed an age replacement policy in which the maintenance work includes minor maintenance and the delivery time of spare parts orders is random. Sheu et al. [15] studied an age replacement policy similar to that of [14] for a single-equipment system subject to shock (impact) deterioration. Alenka [1] studied the joint optimization of periodic batch replacement and periodic spares procurement. These studies considered spare parts ordering strategies only under planned maintenance regimes and based on spare parts inventory levels, while research on the joint optimization of spare parts ordering and equipment replacement under a CBM strategy is still rare. Yoo et al. [19] presented an expected cost model formulated for a joint spare stocking and block replacement policy using the renewal process. Kawai [7,8] discussed a decision optimization problem on spare parts ordering and equipment replacement for a Markov degradation system, in which the decision on spare parts ordering or equipment replacement is based on the deteriorating state of the system. Y. B. Wang et al. [18] discussed spare parts allocation optimization in a multi-echelon support system based on a multi-objective particle swarm optimization method. In fact, most degradation processes are continuous, and modeling the system with a Markov chain requires dividing the degradation states into several intervals to obtain a discrete random process; in addition, estimating the state transition matrix that characterizes the discrete-time Markov chain is often difficult.
In this paper, the maintenance strategy problem is considered for a single-equipment system whose shutdown causes large losses or even disastrous consequences. It is therefore necessary to perform a preventive replacement of the equipment before failure: when the deterioration of the working unit reaches a scheduled preventive maintenance threshold, a preventive replacement is conducted using a spare part. In view of the continuous monitoring of the single-equipment system, this paper develops a strategy of equipment replacement and spare parts ordering under condition based maintenance. Under this strategy, the replacement actions and the spare parts inventory control are driven by the unit deterioration. The spare parts inventory control strategy is (S-1, S), and the spare parts support meets the given stockout probability.
The rest of this paper is arranged as follows. In section 2, the system description is presented, including the analysis of the degradation process and the modeling assumptions. Section 3 describes the development of the spare parts inventory control model, in which a spare parts support probability model is established to determine the optimum spare parts stock S; the equipment maintenance and spare parts ordering model is then presented. In section 4, an integrated decision model for equipment maintenance and spare parts inventory is developed. Section 5 gives an example to show the performance of the proposed model. Finally, in section 6, conclusions are drawn from the work.
Notations
In order to establish the comprehensive decision model, the main parameters are defined as follows: F(t; α, β) denotes the failure cumulative distribution function of the equipment, where α is the shape parameter and β is the scale parameter; L denotes the order delivery time, which is a random variable.
Degradation processes analysis
Generally, there are many kinds of distributions for equipment deterioration. However, the Gamma distribution is highly general, since the traditional exponential, Chi-square and Erlang distributions are all special cases of it [4]. The Gamma distribution is therefore suitable for representing various forms of distribution, including different failure patterns such as early failures, random (occasional) failures and wear-out failures. We therefore take the Gamma distribution as the form of equipment deterioration in this paper.
The status parameter of the equipment is characterized by the random variable X(t). If the increments of X(t) over disjoint time intervals are mutually independent, {X(t), t ≥ 0} is called an independent increment process [11]. If, in addition, the distribution of the increment X(t + Δt) − X(t) depends only on Δt, the increments are stationary, and an independent increment process with stationary increments is called a stationary and independent increment process.
The Gamma process is a time-homogeneous Lévy process, i.e., a continuous-time random process with independent increments [16].
A Gamma degradation process X(t) has stationary, independent and nonnegative increments. If the incremental degradation is expressed as ΔX = X(t + Δt) − X(t), the Gamma process has the following properties: ΔX is a stationary, independent and nonnegative increment, and ΔX obeys a Gamma distribution with shape parameter α and scale parameter β. The cumulative failure distribution function of the Gamma process is the distribution of the first time at which the degradation reaches the failure threshold, where X is the life of the product and D_f is the failure threshold.
The deterioration process of the equipment is modeled as such a continuous Gamma process with shape parameter α and scale parameter β, and the cumulative failure distribution function also depends on the state parameter x_0 at the initial moment.
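As a point of reference, and assuming the common parameterization in which the accumulated deterioration satisfies X(t) − x_0 ~ Ga(αt, β), the density and the failure distribution take the standard forms

f_{X(t)}(x) = \frac{\beta^{\alpha t}}{\Gamma(\alpha t)}\,(x - x_0)^{\alpha t - 1} e^{-\beta (x - x_0)}, \quad x \ge x_0,

F(t) = \Pr\{X(t) \ge D_f\} = \frac{\Gamma\left(\alpha t,\; \beta (D_f - x_0)\right)}{\Gamma(\alpha t)},

where \Gamma(\cdot,\cdot) is the upper incomplete gamma function; other shape-function conventions are possible, so these should be read as a sketch rather than the exact expressions of the original formulation.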
Park and Padgett [13] described the Gamma degradation process in terms of the degradation time and the amount of degradation, together with the corresponding failure distribution function.
General assumptions
We suppose that the following assumptions are satisfied.
(1) The working period T is always greater than the order delivery time L, and T is a fixed value.
(2) Downtime loss is generated by equipment maintenance or by a lack of spare parts, and it is proportional to the downtime.
(3) The unit spare parts management fee c_o includes the spare part's own cost, the ordering cost and the storage cost, and c_o is a constant.
(4) Stocked spare parts do not deteriorate or fail, and the length of storage does not affect the later working life of a spare part.
(5) A spare part ordered earlier always arrives earlier than one ordered later.
(6) In the integrated decision model, the replacement time is greater than zero and D_pm is less than D_f. When the replacement time is greater than zero, the stockout probability of spare parts decreases, while the stockout probability increases when D_pm is less than D_f. In this paper, it is assumed that the optimal number of spare parts S* meets the spare parts support probability requirement in the integrated decision model.
(7) The equipment deterioration is under continuous condition monitoring, and after preventive or corrective maintenance the equipment is restored to a state of "as good as new".
Spare parts inventory control model
Spare parts support probability represents the probability that the demand for equipment spare parts is satisfied. Its value is closely related to the spare parts inventory level and the spare parts demand.
There are two main indices of spare parts support probability. One is the fill rate, the percentage of spare parts demand that can be met from supply at any time; the other is the stockout rate, the percentage of the required quantity of spare parts that cannot be supplied at a given time. The spare parts support probability model is particularly important in the support process for critical equipment, because downtime of such equipment due to a lack of spare parts causes serious safety and economic consequences. For critical-equipment spare parts, which usually have high prices and very low demand, the (S-1, S) inventory control strategy is the optimal support scheme [10,12,13]. This article uses the spare parts support probability model for the spare parts inventory decision. The meaning of the inventory policy is that the initial inventory level is S; whenever the stock falls to S-1, a spare part order is placed with an order quantity of 1. S_OH is defined as the available (on-hand) inventory of spare parts, S_DI is the number of spare parts in transit, and S_BO is the number of backordered spare parts. During the operation of the inventory system, the relationship among S_OH, S_DI, S_BO and S is as follows.
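In the standard (S-1, S) base-stock formulation these quantities satisfy the usual inventory-position identity (stated here as the textbook relationship):

S_{OH} + S_{DI} - S_{BO} = S,

that is, on-hand stock plus stock on order, less backorders, always equals the base-stock level S.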
Spare parts support probability model
It is assumed here that the preventive maintenance threshold D_pm is equal to the failure threshold D_f and that the replacement time can be ignored. Under this inventory strategy, when the equipment degradation level reaches D_f, a spare part is withdrawn from the inventory for the replacement; at the same time, a spare part order is placed, with an order quantity of 1 and an order delivery time L (whose probability density function is g(l)). The inventory is out of stock when the equipment degradation reaches the failure threshold and no spare part is available for the preventive replacement.
Because the spare parts order delivery time L > 0, under the condition of meeting the stockout probability requirement η, the initial spare parts inventory quantity S must satisfy the following condition, where ω_S is the calculated stockout probability of spare parts, T_i is the time for a spare part to degrade from installation to the failure threshold D_f, and ϖ_S denotes the total usage time of the S spare parts, whose probability density function is the S-fold convolution of the density of T_i. As the spare parts are of the same type, the T_i are identically distributed, and S represents the spare parts inventory quantity. When the spare parts inventory quantity S satisfies ω_S < η < ω_{S-1}, this inventory quantity is the optimal inventory S*.
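A sketch of the support condition referred to above (Eq. (5)), assuming that a stockout occurs when the total usage time of the S stocked spares is shorter than the order delivery time:

\varpi_S = \sum_{i=1}^{S} T_i, \qquad \omega_S = \Pr\{\varpi_S < L\} = \int_0^{\infty} \Pr\{\varpi_S < l\}\, g(l)\, \mathrm{d}l \le \eta.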
Model solution
In order to find the optimal spare parts inventory quantity that meets the stockout probability requirement, the spare parts support probability model is solved. Eq. (5) can be solved by an iterative method with the following steps:
Step 1: Initialize the probability density function f(t; α, β) of the component Gamma degradation process, the failure threshold D_f, the probability density function g(l) of the order delivery time L and the maximum allowable stockout probability η, and set the counter variable i = 1.
Step 2: Set S_i = i and calculate the corresponding stockout probability ω_{S_i}.
Step 3: If ω_{S_i} < η, then S* = i and the calculation ends; otherwise, continue to the next step.
Step 4: Set i = i + 1.
Step 5: Set S_i = i.
Step 6: Calculate the stockout probability ω_{S_i} for S_i = i.
Step 7: If the calculated stockout probability ω_{S_i} < η, then S* = i and the calculation ends; otherwise, return to Step 4.
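A minimal Monte-Carlo sketch of this iteration in Python is given below. The gamma-distributed time-to-threshold, the lognormal lead-time model and all parameter values and function names are illustrative assumptions rather than quantities taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def time_to_threshold(alpha, beta, d_f, dt=0.1, n=2000):
    # simulate first-passage times of a stationary gamma process X(t) to the level d_f
    x = np.zeros(n)
    times = np.zeros(n)
    active = np.ones(n, dtype=bool)
    t = 0.0
    while active.any():
        t += dt
        x[active] += rng.gamma(alpha * dt, 1.0 / beta, size=active.sum())
        hit = active & (x >= d_f)
        times[hit] = t
        active &= ~hit
    return times

def stockout_probability(s, alpha, beta, d_f, n=2000):
    # omega_S = P(total usage time of the S stocked spares < order delivery time L)
    usage = time_to_threshold(alpha, beta, d_f, n=n * s).reshape(n, s).sum(axis=1)
    lead = rng.lognormal(mean=3.0, sigma=0.4, size=n)   # assumed lead-time distribution
    return float(np.mean(usage < lead))

def optimal_stock(eta, alpha, beta, d_f, s_max=20):
    # increase S until the estimated stockout probability drops below eta (Steps 1-7)
    for s in range(1, s_max + 1):
        omega = stockout_probability(s, alpha, beta, d_f)
        if omega < eta:
            return s, omega
    return s_max, omega

s_star, omega = optimal_stock(eta=0.05, alpha=1.2, beta=0.5, d_f=50.0)
print(f"S* = {s_star}, estimated stockout probability = {omega:.3f}")

Each call estimates ω_S by simulation; in practice the convolution in Eq. (5) can also be evaluated numerically instead of by sampling.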
Model description
In many cases, a system shutdown will cause significant losses or even lead to disastrous consequences. In order to reduce the risk of shutdowns caused by failures, a CBM strategy may be employed. In this paper, we consider a system that deteriorates and whose condition is monitored under a CBM strategy. Both preventive maintenance and corrective maintenance are involved in the strategy. The role of preventive maintenance is to replace the equipment before failure: when the deterioration of the working equipment reaches a predetermined preventive maintenance threshold, a preventive replacement is implemented with a spare part. Because the equipment deterioration is under continuous condition monitoring, corrective maintenance occurs only when spare parts are out of stock at the moment the deterioration reaches the preventive maintenance threshold. After maintenance, either preventive or corrective, the equipment is restored to a state of "as good as new".
It is supposed that the system is made up of S + 1 components, of which one is working while the others are held as cold-standby (redundant) units. When the working component fails, the redundant parts replace it one by one, and the spare parts inventory is controlled under the (S-1, S) strategy. This means that a preventive replacement (requiring one spare part at a time) is performed once the equipment degrades to the preventive maintenance threshold D_pm; a spare part order is then placed, with an order quantity of 1 and a random order delivery time L. Here, the number of redundant components S must be such that the stockout probability within the order delivery time L is less than η. The aim of constructing this model is to find the optimal D_pm that minimizes the total operating cost of the system, based on the calculation of S under the prescribed conditions.
Model construction
Under the CBM strategy considered in this paper, maintenance decision-making and spare parts ordering are performed based on the equipment degradation and the inventory level. In addition, the spare parts inventory level is affected by the spare parts order delivery time and by the replacement of the equipment. As a result, it is difficult to establish an analytical model to calculate the system costs. A simulation model is therefore used to represent the system maintenance cycle, including replacement, spare parts ordering and inventory, and the related costs are estimated through the Monte-Carlo simulation method.
Equipment maintenance and spare parts ordering
In one work cycle [0, T], the equipment maintenance and the spare parts inventory are analyzed as follows. (1) Equipment maintenance analysis: the maintenance and replacement actions follow the policy described above. (2) Spare parts inventory analysis: when the ordered spare parts arrive at the warehouse, the number of arrived spare parts increases by 1, and the total existing inventory I increases by 1.
The cost rate
The total operating cost of the equipment consists of the following three parts: the costs of equipment maintenance and replacement, the downtime costs, and the costs of spare parts management.
The expected cost of maintenance and replacement in the cycle [0, T], the expected cost of downtime loss in the cycle [0, T] and the expected cost of spare parts management in the cycle [0, T] are obtained from the simulation. The model can then be formulated with an objective function that minimizes the total operating cost rate, subject to the constraint that the spare parts stockout probability does not exceed the allowable value, where φ_k denotes the spares stockout time and ∂_k the number of spares stockouts in the cycle.
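A sketch of the resulting optimization in the notation used above (the detailed cost-component expressions are assumed rather than reproduced):

\min_{D_{pm}} \; C(D_{pm}) = \frac{E[C_m] + E[C_q] + E[C_o]}{T} \quad \text{subject to} \quad \omega(D_{pm}) \le \eta,

where C_m, C_q and C_o denote the maintenance/replacement cost, the downtime-loss cost and the spare parts management cost incurred in [0, T], respectively.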
Availability
If the average downtime in the working cycle [0, T] can be obtained, the equipment availability can be calculated from it. Because the equipment downtime includes both the maintenance time and the stockout time, the average downtime in a work cycle T is expressed as the sum of these two components.
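The implied expressions, with the notation assumed here, are

A = 1 - \frac{E[T_{down}]}{T}, \qquad E[T_{down}] = E[T_{maint}] + E[T_{stockout}],

where T_{maint} and T_{stockout} are the maintenance downtime and the downtime caused by the lack of spare parts in one work cycle.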
Model solution
In order to obtain the optimal preventive maintenance threshold, the minimum operating cost of the system and the system availability, a Monte Carlo simulation and iterative optimization procedure for the ordering and replacement policy is given as follows. Step 1: Parameter initialization, i.e., the system parameters are initialized. In each simulated work cycle the stockout number of spare parts SCL is recorded, and the simulation is repeated N times, where N should be set large enough. The subsequent steps iterate over the candidate preventive maintenance thresholds; if the stopping condition is not reached, the procedure continues with the next step. Accordingly, the system's total operating cost rate, the spare parts stockout probability and the system availability can be worked out for each candidate threshold over the work cycle [0, T]. Finally, the optimal preventive maintenance threshold is obtained by a decision based on the calculated results above.
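A compact sketch of the integrated simulation for one candidate threshold is given below. All parameter values, the lognormal lead time and the exponential growth of maintenance time and cost with the deterioration level (as assumed in the example of Section 5) are illustrative placeholders, and corrective failures during stockout are not modelled.

import numpy as np

rng = np.random.default_rng(1)

def simulate_cycle(d_pm, T=200.0, s_star=3, alpha=1.2, beta=0.5, dt=0.1,
                   c_m0=100.0, c_q=50.0, c_o=5.0, t_m0=0.5, k=0.02):
    # one work cycle [0, T] under CBM with (S-1, S) spare parts control
    t, x = 0.0, 0.0                   # elapsed time and current deterioration level
    stock, orders = s_star, []        # on-hand spares and pending order arrival times
    downtime, cost, stockouts = 0.0, 0.0, 0
    while t < T:
        stock += sum(1 for a in orders if a <= t)            # receive delivered orders
        orders = [a for a in orders if a > t]
        if x >= d_pm:                                        # preventive replacement is due
            if stock > 0:
                stock -= 1
                orders.append(t + rng.lognormal(3.0, 0.4))   # reorder one spare
                t_m = t_m0 * np.exp(k * x)                   # maintenance time grows with x
                cost += c_m0 * np.exp(k * x) + c_q * t_m     # maintenance cost + downtime loss
                downtime += t_m
                t += t_m
                x = 0.0                                      # "as good as new"
            else:                                            # stockout: wait for next delivery
                stockouts += 1
                wait = min(orders) - t if orders else dt
                cost += c_q * wait
                downtime += wait
                t += wait
        else:
            x += rng.gamma(alpha * dt, 1.0 / beta)           # gamma degradation increment
            t += dt
    cost += c_o * s_star * T                                 # spare parts management fee
    return cost / T, 1.0 - downtime / T, stockouts

# cost rate, availability and fraction of cycles with a stockout for each candidate D_pm
for d_pm in (40, 30, 20, 13, 10, 5):
    runs = np.array([simulate_cycle(d_pm) for _ in range(100)])
    print(d_pm, runs[:, 0].mean().round(2), runs[:, 1].mean().round(3),
          float((runs[:, 2] > 0).mean()))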
Example analysis
In order to illustrate the integrated decision model for equipment maintenance and spare parts ordering, the following numerical example is given. The assigned model parameters are shown in Table 1.
In the example, it is assumed that the more seriously the equipment has deteriorated, the longer the time spent on preventive maintenance; the maintenance time increases exponentially with the degree of deterioration [9]. Similarly, it is assumed that the more seriously the equipment has deteriorated, the higher the cost of preventive maintenance; the maintenance and replacement cost also increases exponentially with the degree of deterioration. (1) Spare parts inventory decision. According to the data in Table 1, the optimal spare parts inventory level S* can be worked out by the iterative method proposed in this paper. The iterative results are shown in Table 2.
From Table 2, it can be seen that ω_3 < η < ω_2. That is, when the inventory level S = 3, the system stockout probability meets the stated requirement, so the optimal spare parts inventory level is S* = 3. (2) Preventive maintenance threshold decision. Once the optimal spare parts inventory level S* is determined, the model can be computed by the Monte-Carlo simulation and iterative methods described above. Using the parameters in Table 1 and setting D_pm to 40, 39, …, 6, 5, with 1000 simulation runs, the results of the integrated decision model are obtained, as shown in Table 3.
The results of the simulation and iterative optimization can be seen in Table 3. When D_pm = 13, the total system operating cost is minimal and the spare parts stockout probability ω is less than the maximum allowable value. However, the system availability is not optimal at D_pm = 13; it is optimal at a different threshold value. This shows that the total operating cost and the availability of this system cannot both be optimal at the same time.
In order to analyze the simulation data more intuitively for maintenance decision-making, three figures were drawn from the simulation data (Figure 1, Figure 2 and Figure 3). Fig. 1 shows the relationship between the total operating cost C, the maintenance cost C_m, the cost of downtime loss C_q, the cost of spare parts management C_o and the preventive maintenance threshold D_pm. From Fig. 1, it can be seen that: (1) there is an optimum preventive maintenance threshold with a minimum total operating cost; (2) the maintenance cost increases with increasing D_pm, which is related to the model assumption that the more seriously the equipment has deteriorated, the higher the cost of preventive maintenance; (3) the cost of downtime loss first drops and later increases with increasing D_pm, which is related to the stockout time increasing rapidly when the threshold is too small and the maintenance time increasing rapidly when the preventive maintenance threshold is too large; (4) the cost of spare parts management is gradually reduced with increasing D_pm, which is related to the fact that the longer a single piece of equipment works, the fewer maintenance actions are needed during the working cycle. Fig. 2 shows the relationship between the stockout probability ω and D_pm when the maximum stock number S is fixed. It can be seen from Fig. 2 that the larger D_pm is, the smaller ω is; the reason is that the longer the single equipment works, the smaller the stockout probability of the system. Fig. 3 shows the relationship between the system availability and D_pm. It can be seen from Fig. 3 that the equipment availability first increases with increasing D_pm and, after reaching a certain value, decreases with further increases of D_pm. This is because the maintenance frequency increases when D_pm is too small and the equipment downtime increases due to the lack of spare parts, while the maintenance time increases when D_pm is too large and the equipment availability decreases correspondingly.
Conclusions
In this paper, a spare parts inventory control and integrated decision model based on CBM is proposed for a class of single-equipment systems. Firstly, the spare parts support probability model is established to determine the optimum stock of spare parts. Then, a comprehensive decision-making simulation model for system operation and spare parts ordering is established, which can be used to calculate the operating cost rate, the system availability and the stockout probability of spare parts. Finally, the model is verified by a numerical example. The results show that the optimal preventive maintenance threshold D_pm obtained with the cost-based decision objective can meet the requirements of the spare parts support degree under the (S-1, S) strategy. However, the cost rate and the availability cannot reach their optima at the same time.
It is noted that although a Gamma deterioration process is used to develop the spare parts control model, the proposed model may allow other types of distribution to be incorporated to obtain an optimal strategy for spares control. | 2019-03-17T06:50:18.806Z | 2015-09-16T00:00:00.000 | {
"year": 2015,
"sha1": "202f7945cafd40995b0d562ca3d21fdd9b694c24",
"oa_license": null,
"oa_url": "https://doi.org/10.17531/ein.2015.4.15",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "29e8e2591acb3964ed21517fa3fafc7dbce8cec8",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
259328124 | pes2o/s2orc | v3-fos-license | NeuroMabSeq: high volume acquisition, processing, and curation of hybridoma sequences and their use in generating recombinant monoclonal antibodies and scFvs for neuroscience research
The Neuroscience Monoclonal Antibody Sequencing Initiative (NeuroMabSeq) is a concerted effort to determine and make publicly available hybridoma-derived sequences of monoclonal antibodies (mAbs) valuable to neuroscience research. Over 30 years of research and development efforts, including those at the UC Davis/NIH NeuroMab Facility, have resulted in the generation of a large collection of mouse mAbs validated for neuroscience research. To enhance dissemination and increase the utility of this valuable resource, we applied a high-throughput DNA sequencing approach to determine immunoglobulin heavy and light chain variable domain sequences from source hybridoma cells. The resultant set of sequences was made publicly available as a searchable DNA sequence database (neuromabseq.ucdavis.edu) for sharing, analysis and use in downstream applications. We enhanced the utility, transparency, and reproducibility of the existing mAb collection by using these sequences to develop recombinant mAbs. This enabled their subsequent engineering into alternate forms with distinct utility, including alternate modes of detection in multiplexed labeling, and as miniaturized single chain variable fragments or scFvs. The NeuroMabSeq website and database and the corresponding recombinant antibody collection together serve as a public DNA sequence repository of mouse mAb heavy and light chain variable domain sequences and as an open resource for enhancing dissemination and utility of this valuable collection of validated mAbs.
Introduction
Using antibodies (Abs) to detect endogenous target proteins in brain samples is foundational to many aspects of neuroscience research. Antibodies provide specific and effective labeling of endogenous targets in diverse brain samples including those obtained from human donors 1 .
Antibody labeling can be detected with various imaging modalities, allowing for determination of spatial details of protein expression and localization across a wide range of scales, which in neuroscience research can range from single molecules to nanoscale molecular assemblies to cells to intact brain circuits 1 . Conventional (i.e., non-recombinant) Abs can be produced in a variety of animal species (e.g., mice, rats, rabbits, goats, etc.) as polyclonal Abs, and from hybridoma cell lines as monoclonal Abs (mAbs) 2 . Combinatorial (i.e., multiplex) labeling and detection can be performed using combinations of Abs from different species followed by their detection with species-specific dye-conjugated secondary Abs. Moreover, except for those of rabbit origin, within a given species individual mAbs exist as one of multiple IgG subclasses, and those of different IgG subclasses can be multiplexed and separately detected with subclass-specific secondary Abs 3 .
As a consequence of mAb development efforts that span over 30 years, including at the UC Davis/NIH NeuroMab Facility, we have generated a large collection of cryopreserved hybridoma cells producing mouse mAbs. These mAbs have well-defined target specificities and efficacies for immunolabeling endogenous target proteins in mammalian brain samples by immunoblot (IB) and immunohistochemistry (IHC) applications [4][5][6] . Cryopreserved archives of viable mAb-producing hybridoma cells define mAbs as renewable research reagents, a major distinguishing characteristic of mAbs when compared to polyclonal Abs 7 . However, the continued availability of a given mAb is not absolutely guaranteed as it relies on the successful recovery into cell culture of these cryopreserved hybridoma cells, and that these cells in culture continue to reliably produce the exact same mAb that was characterized during its development.
The target binding specificity and efficacy of a given Ab is defined by its light and heavy chain variable domains (i.e., VL and VH domains) that together with the light and heavy chain constant regions define the full Ab molecule 2 . Determining the sequence of a particular mAb's VL and VH domain generates a truly permanent and unique Ab archive in the form of DNA sequence 8 . Furthermore, utilizing such sequence information to generate plasmids expressing recombinant forms of these mAbs (R-mAbs) effectively eliminates the need for the expensive and labor-intensive maintenance of cryopreserved hybridoma collections in liquid nitrogen and allows for inexpensive archiving and simple dissemination as nucleotide sequence and/or plasmid DNA. Defining the primary sequence of mAbs also allows for their use as molecularly defined research regents, enhancing their value in terms of research transparency 8 .
Recombinant expression can also afford more reliable and often higher-level expression than from hybridomas and enhance research reproducibility as the expression plasmid can be resequenced prior to each use 8 . Plasmids can also be archived at and disseminated from open access nonprofit resources such as Addgene (https://www.addgene.org/), with increased ease and lower cost dissemination than cryopreserved hybridoma cells. Cloning and recombinant expression also allows for diverse forms of Ab engineering. This includes engineering to confer distinct detection modalities to the expressed mAb, facilitating their use in multiplex labeling 9 , as well as development of miniaturized Abs such as single chain variable region fragments (scFvs) 10,11 with additional advantages due to their small size, which enhances tissue penetration and allows for increased imaging resolution 12 .
To generate a lasting archive and obtain recombinant Abs with enhanced opportunities for engineering, we sequenced the VL and VH domains of mAbs in our large and extensively characterized collection. Initial efforts used RT-PCR-based cloning of mAb VL and VH domains into mammalian expression plasmids followed by Sanger plasmid sequencing. This led to the successful cloning, sequencing, and expression of almost 200 of our mAbs 9 , but this effort only represented a small fraction of the ≈2,400 mAbs in our extensive collection. Here we describe the development of a pipeline for high-throughput sequencing of hybridomas to obtain mAb VL and VH domain sequences. We also detail novel bioinformatics approaches used to analyze the quality of the obtained sequences and the diversity of identified VL and VH domain sequences.
Together these efforts have led to a large public repository of VL and VH domain sequences. We also used these sequences to generate R-mAb expression plasmids that are available through open access resources. We also describe pipelines for engineering these R-mAbs into forms with distinct detection modalities and miniaturizing them into scFvs. Together these efforts have generated a resource that further enables antibody-based neuroscience research and serve as a model for enhancing the archiving and dissemination of other mAb collections in recombinant form.
Results
Our mAb development projects typically start with 960-2,880 candidate oligoclonal hybridoma samples, from a set of between 10-30 x 96 well microtiter plates in which the initial products of the mouse splenocyte-myeloma fusion reaction are cultured 5 . These cultures and the Abs they produce are oligoclonal, likely containing more than one hybridoma clone, but producing a collection of Abs much less complex than that present in polyclonal antiserum and/or affinity-purified polyclonal Ab preparations. We refer to these hybridoma samples as "parent" samples as it is from these initial oligoclonal samples that monoclonal hybridomas and mAbs are derived by subcloning to monoclonality. Conditioned medium from each culture well, referred to as tissue culture supernatants or TC supes, is evaluated by ELISA from which we typically identify 24-144 ELISA positive hybridoma samples for expansion and further characterization. The TC supes from each of these expanded parent hybridoma cultures are subsequently evaluated by numerous assays (transfected cell immunocytochemistry/ICC, brain immunohistochemistry/IHC, and brain immunoblots/IB being the standard set) in parallel [4][5][6] . A subset of parent hybridomas, up to five per project, are selected for subcloning to monoclonality by limiting dilution 2 . We typically retain and archive five independent target-positive subclones for each parental hybridoma cell line with the expectation that these are independent isolates of a single clone of target-positive hybridoma cells present in the oligoclonal parent hybridoma culture. Relatively few target-positive wells (e.g., 5%) are observed among the large collection of parent samples initially screened 2,5 , suggesting that it is unlikely that there exist more than one target-positive hybridoma clone in the oligoclonal parental cell culture.
Our mAb nomenclature reflects this process. In our naming system, individual mAb projects are designated by a letter (for the most part a K, L or N) and a number, followed by a "/" and the number assigned to that ELISA-positive sample. Overall, from ≈800 mAb projects, we have ≈45,000 distinct parent oligoclonal hybridoma cell lines, comprising a collection of ≈60,000 cryopreserved vials of parent oligoclonal hybridomas. From these parents, we subcloned by limiting dilution ≈3,500 distinct parental hybridoma cell lines, from which we cryopreserved between 1-5 subcloned monoclonal sample per parent, for a total of ≈11,000 distinct samples. As we typically cryopreserve multiple vials of individual subclones, our collection comprised ≈26,000 vials of cryopreserved monoclonal hybridoma samples. For sequencing purposes, which focused on monoclonal samples, we separated these samples into two classes. The first we defined as "biological replicates", representing independent subclones obtained from the subcloning of a single parental hybridoma culture (e.g., K89/34.1 and K89/34.2 are biological replicates of the K89/34 hybridoma). The second class we defined as "technical replicates", comprising independent cryopreserved vials of the same subcloned hybridoma (e.g., distinct vials of K89/34.1, including those cryopreserved on different dates). Our overarching goal was to use the novel high throughput sequencing and bioinformatics workflow we developed (Figure 1), as detailed below, to obtain corroborating VL and VH domain sequences from at least three biological replicates for each of the ≈3,500 subcloned mAbs in our collection.
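To make the replicate grouping concrete, a toy parser for this naming scheme is shown below; the regular expression, function and field names are illustrative and not part of any NeuroMabSeq code.

import re

MAB_ID = re.compile(r"^(?P<project>[A-Z]+\d+)/(?P<sample>\d+)(?:\.(?P<subclone>\d+))?$")

def parse_mab_id(name: str) -> dict:
    # split an ID such as 'K89/34.1' into project (K89), parent sample (34) and subclone (1)
    m = MAB_ID.match(name)
    if m is None:
        raise ValueError(f"unrecognized mAb ID: {name!r}")
    return {k: v for k, v in m.groupdict().items() if v is not None}

# K89/34.1 and K89/34.2 are biological replicates of parent K89/34; two cryopreserved
# vials of K89/34.1 sequenced separately would be technical replicates of each other.
print(parse_mab_id("K89/34.1"))   # {'project': 'K89', 'sample': '34', 'subclone': '1'}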
Establishment of a hybridoma sequencing pipeline
Prior to initiating large-scale sequencing efforts, we optimized the sequencing pipeline, beginning with processing of the frozen collection of hybridoma cells, and all subsequent steps, up to and including Illumina MiSeq sequencing (Figure 1). We previously found that RNA of sufficient quantity and quality for RT-PCR based cloning of VL and VH domain sequences could be isolated directly from cryopreserved hybridoma cells, without the need to recover the cells into culture 9 . As such, we assumed that this would also hold for obtaining RNA that would enable effective and reliable sequencing of the mAb VL and VH domains employing Illumina-based high throughput sequencing. We made aliquots of hybridoma cells in 96 well plates after rapid thawing and after a single PBS wash, lysed them and isolated RNA using a QiaCube HT system. RNA was quantified on a well-by-well basis by Nanodrop readings and normalized across all wells of the plate to a range of 7-15 ng/µL. (Figure 1 legend: Reads from Illumina sequencing are run through HTStream for base quality trimming and other read processing. Next, they are passed through DADA2 for amplicon denoising, followed by the SAbPred ANARCI tool based on the IMGT numbering scheme. All ASVs, metadata, and other quality metrics are uploaded to the NeuroMabSeq database and website, where further information and tools are provided to the end users. This includes but is not limited to BlastIR results, BLAT searches across the database, and recommended high quality sequences for recombinant antibody design. Annotations of internally generated scores are provided in addition to other database information. Finally, high quality sequences are used in the design of gene fragments for generation of R-mAb and scFv expression plasmids.)
Sequencing of antibody variable regions
The sequencing library preparation employed a 5'-RACE-like approach combined with a semi-nested barcode-indexing PCR (Supplementary Figure 1). The protocol of Meyer, DuBois, and colleagues 13 was modified to reverse transcribe four transcripts in a single reaction, employing a cocktail of four reverse transcription primers (see Supplementary Table 1 for all primer sequences). Two of these reverse primers were specific for the mouse heavy chain constant region, one representing a sequence conserved in the heavy chain constant regions of the IgG1, IgG2a and IgG2b subclasses, and the other specific for the IgG3 subclass. The second pair of reverse primers used were specific for the mouse kappa and lambda light chain constant region, respectively. We also utilized a shorter version of the template switching oligo (TSO) than used previously 13 to preserve more sequencing cycles for the regions of interest. The cDNA was subsequently PCR-amplified with a cocktail of four nested constant region chain-specific reverse primers analogous but internal to those used in the cDNA synthesis reaction on
Bioinformatics processing
A novel bioinformatic pipeline was developed to analyze the resultant sequences and make them easily and publicly accessible via a software package, database, and website ( Figure 1).
The forward and reverse reads from the Illumina sequencing were joined bioinformatically and demultiplexed to the sample level using Illumina barcodes and Illumina bcl2fastq (v. 2.20) software. Primer sequence was used to determine whether the sequence obtained corresponded to mouse VL or VH and was then removed. TSO sequence was identified and removed, any sequence containing an 'N' character was removed from further consideration, and low quality base pairs (<10 q-values) were removed from the 3' ends, followed by overlapping of the forward and reverse reads. ASVs containing the known aberrant Sp2/0-derived VL sequence 16 were eliminated so that they were not included in any subsequent analyses.
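An illustrative sketch of the per-read filtering just described (read joining and demultiplexing are handled upstream by bcl2fastq/HTStream; the function name and example inputs below are hypothetical, only the thresholds mirror the text):

def qc_read(seq: str, quals: list[int], min_q: int = 10) -> str | None:
    # discard reads containing 'N'; trim bases with quality < min_q from the 3' end
    if "N" in seq:
        return None
    end = len(seq)
    while end > 0 and quals[end - 1] < min_q:
        end -= 1
    return seq[:end] or None

print(qc_read("ACGTACGT", [30, 30, 30, 30, 30, 8, 7, 6]))   # 'ACGTA'
print(qc_read("ACGNACGT", [30] * 8))                        # None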
We then applied a filter to eliminate sequences that did not correspond to the full-length coding regions of mouse VL and VH domains. The nucleotide sequences were first translated in silico, and the deduced amino acid sequences were analyzed for correspondence to expected immunoglobulin sequences using the ANARCI (Antigen receptor Numbering And Receptor ClassificatIon) tool as applied to Ab variable domains 17 downloaded from the SAbPred structure-based Ab prediction server (https://opig.stats.ox.ac.uk/webapps/sabdabsabpred/sabpred/) 18 . This tool defined the sequences corresponding to the expected primary structure of the VL and VH domains. It assigned each amino acid a position corresponding to the IMGT (international ImMunoGeneTics information system) numbering scheme 19 ; this entailed aligning the obtained sequence against that of consensus VL and VH domains and assigning IMGT position numbers. This allowed for the generation of trimmed amino acid sequences corresponding to only the VL and VH domains themselves. These sequences were assigned positions within the FR and CDR domains using the Abysis tool, which is displayed in a color-coded format on the NeuroMabSeq website (Figure 2). Translated hybridoma sequences that did not yield any amino acids within any one of the assigned FR1-4 or CDR1-3 regions were filtered from the database, as were any sequences lacking the first 10 amino acids of FR1 or the last 10 amino acids of FR4. (Figure 2 legend: Shown are the boxes for "Sequencing Information", "Scoring Information" and "Amino Acid Information". The "Sequencing Information" dropdown contains data such as the number of ASVs attributed to the obtained sequence and the number of total reads attributed to light chains or heavy chains for the sample, as well as the plate. The "Scoring Information" reveals the star rating assigned to each sequence, as well as the contribution of the ASV-based star and replicate-based star components of the scoring to the total score. The "Amino Acid Information" dropdown contains information such as the full amino acid sequence, the sequence corresponding to the ANARCI prediction of IMGT amino acid positions for the VL domain, and within this the FR 1-4 and CDR 1-3 boundaries. The nucleotide sequence corresponding to the ANARCI prediction of IMGT amino acids is also shown to facilitate design of gBlocks for Gibson Assembly-based cloning of recombinant mAbs and scFvs. In addition, the "BLAT Sequence" feature is available to compare this sequence to all other sequences in the database. An analogous set of information is supplied for the heavy chain.)
Finally, sequences from each hybridoma sample were filtered based on ASV count and only those sequences corresponding to ≥10% of the total ASVs were included in the database.
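A minimal sketch of this per-sample threshold is shown below, assuming ASV counts have already been tallied per distinct sequence; names are illustrative only.

```python
# Keep only sequences supported by at least 10% of a sample's total ASV read count.
# `asv_counts` maps sequence -> ASV count for one hybridoma sample (illustrative).
def filter_by_asv_fraction(asv_counts, min_fraction=0.10):
    total = sum(asv_counts.values())
    if total == 0:
        return {}
    return {seq: n for seq, n in asv_counts.items() if n / total >= min_fraction}
```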
The ASV counts, quality scoring, and amino acid information of each distinct sequence corresponding to IMGT amino acids are grouped additively and displayed on the NeuroMabSeq website ( Figure 2). This allows for ease of access to all nucleotide information, amino acid information, and tools embedded on the website.
To date we have sequenced 8,642 novel (i.e., non-control) monoclonal hybridoma samples using this approach (Figure 3). Of these, 1,903 samples (22%) did not yield any sequences that met the sequencing criteria, including any sequences for the aberrant Sp2/0-derived VL domain (referred to as "dropout" samples). Removing these yielded a database of 6,739 hybridoma samples, which yielded 15,064 distinct ASVs. After eliminating those that did not correspond to bona fide VH and VL sequences, 13,401 were used as the input to the star scoring system (see details below). Finally, sequences were grouped by whether they came from biological or technical replicates of hybridomas with the same mAb ID (Figure 3). Figure 3 depicts the details of hybridoma samples sequenced to date: the 8,642 novel (i.e., non-control hybridoma) samples sequenced included 1,903 that did not yield usable sequences and were designated as "dropouts". 6,739 samples returned usable sequences that yielded 15,064 total VL and VH sequences. 13,401 of these remained after eliminating any sequences that had insufficient support, did not conform to ANARCI conventions of valid antibody sequences, or were duplicates; these were subjected to the star scoring system. Of these, 11,226 had sequencing quality scores greater than 3, while 2,175 samples had sequencing quality scores less than 3. The number of unique mAb IDs after grouping all biological and technical replicates is also provided (1,931 high quality and 331 low quality).
Representation of monoclonal hybridomas that express multiple productive VL and VH domain transcripts
Prior studies have revealed the presence of more than one productive light and heavy chain transcript in a subset of what are presumably monoclonal mouse and rat hybridoma cells [21][22][23][24][25][26][27] .
The largest study to date used sequence information, primarily from Sanger sequencing of plasmids containing VL and VH domains obtained from PCR-based cloning, to evaluate the incidence of additional productive VL and VH transcripts in 185 otherwise unrelated rat and mouse hybridoma cell lines from seven different laboratories 27 , many of which produce mAbs that are commercially available. Approximately one-third of these hybridomas (59/185) were found to contain an additional productive (i.e., non-aberrant) light and/or heavy chain transcript 27 . Our large-scale sequencing efforts represent a valuable opportunity to determine the prevalence of VL and VH domain sequences in a much larger set of curated sequencing samples from 1,931 distinct monoclonal mouse hybridomas. We found that in addition to the predominant sequence with the preponderance of ASV counts, a substantial subset of our samples contain low levels of one or more alternate productive VL and/or VH domain sequences that represent <10% of the total ASV count ( Figure 4). It is not clear due to their relatively low abundance whether these sequences represent biologically relevant transcripts (i.e., would substantively impact the population of mAb produced by that hybridoma). However, another subset of samples contains alternate sequences, primarily VL, at levels similar to the predominant component of the total ASV counts returned (e.g., representing 30-50% of the ASV counts), and presumably would make a substantive contribution to the Abs produced from that hybridoma.
From the 1,931 unique high quality monoclonal hybridoma cell lines we sequenced that contained a bona fide VH and VL, we found that 37 (1.9%) consistently contained, across biological and technical replicates, an additional productive (i.e., non-aberrant) VL and/or VH transcript that represented ≥10% of the total count of productive VL or VH encoding reads corresponding to the ASVs (Table 1). We analyzed whether the representation of these values differed based on whether the hybridomas were made during the earlier period when PEG chemical fusion was used to generate hybridomas, or the more recent period when electrofusion was used 5 . We did not detect any difference in the overall representation of VH or VL transcripts between unique mAb sequences from these two periods (Table 1). Table 1 separates samples by whether PEG chemical fusion or electrofusion (849 total) was used to generate the hybridomas. Samples with at least one VL and one VH sequence are then separated into those containing exactly one VL and one VH sequence, one additional chain of some sort, one additional VH, one additional VL, and one additional VL and VH.
Projects performed using either PEG chemical fusion or electrofusion were then analyzed separately.
Development of a scoring system to rank the quality of sequences from individual hybridoma samples
To rank the quality of sequence support for any given mAb and identify the best sequence to utilize in subsequent cloning efforts, we developed a simple scoring system. Scoring was based on both the quantity of sequences as defined by ASV counts, and the quality of the sequences obtained. This is defined by the presence of that sequence among biological and technical replicate samples of the same hybridoma, and the lack of that sequence in unrelated hybridoma samples (see Methods for details). The scoring system returned scores on a continuous scale ranging from 0-5 based on read support, defined as the ratio of a particular ASV relative to total reads produced for that sample, as well as consistency across biological and technical replicates of the same hybridoma cell line. The result of this scoring system and the distribution of VL and VH scores is shown in Figure 5. Individual VL sequences often have a lower proportion of total ASV read support compared to VH sequences ( Figure 5), likely due to the penalty imposed because of the higher incidence of samples yielding multiple VL sequences than seen for VH sequences and less average support for any given VL compared to VH ( Figure 5). Due to the tendency of VL to report more ASVs we see a tendency for a left skewed distribution for VL compared to VH. The sections of high density shown are due to the scoring system which counts the number of matches of biological and technical replicates for a given sequence
Gibson Assembly-based cloning of recombinant mAbs based on hybridoma sequences
The availability of VL and VH sequences from hybridomas allowed for the conversion of the mAbs they produced into recombinant form. We employed the hybridoma-derived sequences to design VL and VH gene fragments that were used to generate R-mAb mammalian expression plasmids in four-fragment Gibson Assembly reactions (Figure 6a). Selected colonies were subjected to colony PCR to amplify the VL-joining fragment-VH region of the plasmid, as we had done previously for screening R-mAb colonies from our PCR-based cloning approach 9 . We used Sanger sequencing to verify the presence and fidelity of the mAb-specific VL and VH sequences, and sequence-positive plasmids were then used to transfect HEK293-6E cells to produce R-mAb TC supes. These TC supes were subsequently evaluated in immunoassays (IB against brain samples, ICC against transfected cells, and/or IHC on brain sections) in each case performed as a side-by-side comparison with the corresponding conventional mAb TC supernatant (Figure 7).
This R-mAb cloning process has been completed employing 410 unique pairs of VL and VH gene fragments in Gibson Assembly reactions. We primarily focused on mAbs whose sequencing yielded only one productive VL and VH domain sequence, and/or which are widely used in the research community based on the number of research publications citing that particular mAb (https://neuromab.ucdavis.edu/publications.cfm). Of these, a very high percentage (381/410 = 93%) yielded at least one plasmid encoding an R-mAb that tested positive in one or more assays in a side-by-side comparison against the progenitor mAb (Figure 7). We also completed this process for eleven mAbs whose hybridomas yielded more than one prominent VL sequence. For ten of these we found one VL-VH combination that yielded a functional mAb, which in six cases was the sequence with the majority of reads, and in four cases the sequence with the minority of reads. One mAb (K60/87) yielded two VL and two VH sequences. In this case all four pairwise combinations were generated, of which only one was functional; this corresponded to the VL and VH sequences with the majority of reads. We note that these outcomes are based almost entirely on a single attempt at cloning, expressing, and evaluating each R-mAb using the workflow detailed above. A list of the 392 R-mAbs cloned using this sequence-based approach is provided as Supplementary Table 2.
Discussion
Here we describe a high throughput sequencing-based workflow for determining the sequences of VL and VH domains from cryopreserved hybridomas. This has led to a public sequence database that unequivocally defines at the molecular level a valuable collection of mAbs highly validated for neuroscience research. This workflow includes processing hybridoma samples in 96 well plates to obtain RNA that is used as a template for cDNA synthesis employing a modified 5'-RACE approach. We then use PCR amplification employing nondegenerate semi-nested primers followed by generation of bar-coded sequencing libraries and Illumina sequencing to determine VL and VH domain sequences from hundreds of hybridoma samples in a single sequencing run. A novel bioinformatics platform is then applied to filter and curate the sequences obtained, which are then posted to a public database. The database is accompanied by a variety of useful features for researchers to utilize such as novel quality assessment system, protein annotation, and alignment tools built in across all sequences.
Finally, the sequences are used in gene fragment-based cloning of plasmids expressing R-mAbs and scFvs engineered to enhance their utility in multiplex labeling. Together, this represents an effective workflow to preserve in the form of a public database the DNA sequence information defining the mAbs that comprise this large collection of hybridomas, and to use these sequences to develop validated recombinant Ab expression plasmids and make them available through open access non-profit sources.
It is well established that mAbs have tremendous value as renewable research reagents with the potential to endure indefinitely, being produced by immortalized hybridoma cells that can be archived by cryopreservation 2 . However, it is possible to lose a particular mAb when cryopreserved hybridomas fail to grow when recovered into cell culture, when mAb production is reduced or lost spontaneously, or when mutations arise leading to the production of a mAb with altered properties. In addition, maintenance of cryopreserved hybridoma collections in liquid nitrogen is challenging due to its high cost and labor-intensive nature. Moreover, entire hybridoma collections can be lost upon loss of research funding or the closure of a laboratory.
Open access hybridoma banks such as the Developmental Studies Hybridoma Bank at the University of Iowa (https://dshb.biology.uiowa.edu/) represent an important resource in providing longer term preservation of hybridomas above and beyond that afforded by an individual research laboratory. However, they are still subject to the potential loss of hybridoma viability and/or mAb production and fidelity, and the high cost and labor-intensive nature of maintaining cryopreserved hybridomas in liquid nitrogen. Obtaining the sequences of VL and VH domains provides a reliable method for permanently preserving the quintessential identity of a particular mAb and allowing for its functional expression after the hybridoma cells themselves cease to exist. Re-sequencing of extant hybridomas being used for repeated mAb production can also be used as a routine analytical tool to verify that the VL and VH domains remain intact years or decades later after their sequence was originally determined.
We had previously used an RT-PCR based approach employing highly degenerate 5' PCR primers to generate VL and VH regions amplified from hybridoma-derived cDNA followed by their insertion into plasmids, validation of functional expression and sequencing 9,28 . This process is labor intensive and can be prohibitively resource intensive for preserving large hybridoma collections as DNA sequence. Direct sequencing of hybridomas using the approach described here has a higher throughput and requires fewer resources than the cloning-based approach. Existing public repositories of antibody sequences, for the most part, lack information on the Ab target, although ≈750 sequences in the IMGT database are returned with a search for "hybridoma". Our public database is distinct from these in that it represents novel sequences of highly characterized mAbs obtained from our in-house sequencing efforts.
The investigation into expression of additional productive (i.e., non-aberrant) VL and/or VH transcripts has revealed a wide range of variation across different monoclonal hybridoma cell lines [21][22][23][24][25][26][27] . This has prompted speculation into the factors contributing to this phenomenon, examples being imperfect allelic exclusion in the splenocyte that gave rise to the hybridoma, or anomalies within the fusion process whereby more than one splenocyte fuses to a single myeloma cell. Compared to a previous analysis of almost 200 rat and mouse hybridoma cell lines 27 our sequencing of multiple independent samples from almost 2,000 hybridoma cell lines indicates a lower frequency of multiple productive VL and/or VH transcripts. While we do not know the basis for this, this previous study analyzed both rat and mouse hybridoma cell lines generated in many different laboratories using different approaches and employing different myeloma partners 27 . Moreover, in this previous study the methods used to obtain VL and VH transcript sequences differed across samples. While a subset was obtained from direct highthroughput sequencing as used here, others came from RT-PCR-based plasmid cloning followed by Sanger sequencing. We used uniform methods to obtain and then process the sequencing data, in the latter case employing a multi-step process that incorporates ASV support, ANARCI prediction, and an in-house scoring system based on reproducibility of obtaining the same sequence from multiple biological and technical replicates. Our dataset also came from a hybridoma collection that, while diverse in the proteins targeted, is otherwise relatively homogenous as it is exclusively mouse hybridomas generated in a single laboratory using the same Sp2/0 myeloma cell line. The one distinction among the hybridomas in our collection was whether they were generated by the original method of PEG/chemical fusion, or the more recent method of electrofusion. However, we found no differences in the incidence of additional among VL and VH transcript sequences among these two sets of hybridomas. We note that the presence of aberrant VL transcripts in Sp2/0-derived hybridoma cells could potentially introduce additional complexity into sample preparation process (e.g., cDNA synthesis and/or RT-PCR amplification steps) and the sequencing data analysis aimed at determining the productive (i.e., non-aberrant) VL transcript repertoire of a given hybridoma cell line. Our curation steps were aimed at addressing these issues to provide a consistent dataset.
Our public database represents a valuable resource for those wishing to use these sequences to recapitulate these mAbs in recombinant form, including those engineered to enhance their utility in multiplex labeling. Cloning of R-mAb expression plasmids employing Gibson Assembly using VL and VH gene fragments designed from hybridoma sequences has advantages over our previous method of RT-PCR based cloning 9,28 . First, we avoid the use of degenerate primer sets such as those we 9 and many others [e.g., 24,28 , etc.] used previously, so that the sequences of the VL and VH domains obtained and used to generate R-mAbs and scFvs are an exact match to the hybridoma. Second, our previous method 9 relied on evaluating numerous candidate expression plasmids (sometimes ranging into tens or hundreds) to find those that expressed a functional R-mAb, in part due to the presence of the aberrant VL transcript but also due to mutations that can occur with PCR amplification. Generation of R-mAb expression plasmids based on high quality sequences leads us more directly to a functional R-mAb. We have generated a collection of mouse IgG2a R-mAbs, and in some cases mouse IgG1 and IgG2b versions of the same mAbs, to greatly enhance their utility in combining with other mouse mAbs and/or R-mAbs multiplex fluorescence labeling. We have also generated plasmids encoding functional miniaturized mAbs, in the form of scFvs, that have substantial benefits for immunolabeling due to their small size , such as enhanced sample penetration and increased imaging resolution. Some of these scFvs have already been used to enable a higher resolution correlative light and electron microscopy analysis of cell population within mouse brain 32 .
Moreover, unlike antibodies in their intact IgG format, scFvs can be used as intracellular antibodies or intrabodies 33 to effectively report on or manipulate neuronal function 34 . These publicly posted sequences can be used by researchers to generate a variety of other alternative forms of R-mAbs such as those with heavy chain constant regions from different species, and various fragments, such as Fab, F(ab')2, Fab2 (monospecific and bi-specific), scFv-Fc, and many others, each of which have distinct properties advantageous to specific applications. The availability of hybridoma-derived sequences also allows for numerous other forms of Ab engineering of recombinant mAbs 35 to enhance their binding (affinity and specificity) and biophysical properties (folding and stability), as well as providing insights that could be used in de novo design 36 . Overall, the workflow described here can be effectively applied to first obtain mAb sequences from hybridomas, and then to use these sequences to generate R-mAbs including those in formats engineered to enhance their utility. In addition, the database and user interface utilized in our study offer a unique and robust platform for standardized data processing. The NeuroMabSeq platform provides enhanced capabilities for data analysis, contributing to a more thorough understanding of the biological processes under investigation.
Conclusion
We generated an open access publicly available resource in the form of a mAb sequence database and a collection of R-mAbs and scFvs to enable neuroscience research. We first developed a workflow for sequencing and curation of sequences from a large collection of hybridomas. This allowed us to generate a publicly available curated online database of these sequences with numerous attributes to enhance the use of the sequences. We used these sequences to develop a Gibson Assembly-based cloning strategy that led to generation of hundreds of R-mAbs. We also generated additional R-mAb variants with altered IgG subclasses to enhance their utility in multiplex fluorescence labeling. We developed miniaturized versions of these R-mAbs in the form of scFvs to enhance their tissue penetration and imaging resolution due to their small size. All R-mAb and scFv plasmids are publicly available from the open access plasmid resource Addgene.
RNA extraction
RNA was extracted from cryopreserved hybridoma cells in a 96 well plate format. Hybridoma
High throughput sequencing of hybridoma VL and VH domains
A schematic of cDNA synthesis and PCR amplification steps is presented as Supplementary
Bioinformatic processing of hybridoma VL and VH domains
The resulting forward and reverse reads were cleaned, joined bioinformatically, and demultiplexed using a custom in house software pipeline. Primer sequence was used to determine heavy or light chain sequence and removed. TSO sequence was identified and removed. Any sequence containing a 'N' character was removed from further consideration.
Sequence quality assessment and quality control
Sequences with poor annotation via IMGT VL and VH amino acid prediction were removed from the database. This includes sequences with any framework region (FR) or complementarity-determining region (CDR) of zero length. Some basic requirements that need to be met to be deemed a quality annotation include intact FR1 and FR4 regions, a start codon, and the absence of stop codons in the VL and VH coding regions. ANARCI IMGT prediction was also used to further group sequences into bins, because variation in non-VL and non-VH regions caused DADA2 to report distinct ASVs whose predicted amino acid sequences were identical within a given monoclonal hybridoma. This was likely caused by trimming edges based on quality and 'N' values. Finally, prior to final scoring, ASV results with < 10% of read support were removed.
Each individual sequence sample was scored to provide users with a measure of confidence in the sequences in the database. Sequences were assigned a score based on the following heuristic, where the ASV score can contribute up to 2 points and the match score up to 3 points, for a total of 5 points.
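One possible reading of this heuristic is sketched below. The split into a read-support component (up to 2 points) and a replicate-match component (up to 3 points) follows the text, but the exact weighting within each component is an assumption and may differ from the scoring actually used by NeuroMabSeq.

```python
# Hypothetical 0-5 star score: up to 2 points for ASV read support (fraction of the
# sample's reads attributed to this sequence) plus up to 3 points for consistency
# across biological and technical replicates of the same hybridoma.
def star_score(asv_reads, total_reads, n_matching_replicates, n_replicates):
    asv_score = 2.0 * (asv_reads / total_reads) if total_reads else 0.0
    match_score = 3.0 * (n_matching_replicates / n_replicates) if n_replicates else 0.0
    return asv_score + match_score  # continuous value between 0 and 5
```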
Generation of R-mAb expression plasmids
Nucleotide sequences corresponding to the ANARCI predicted IMGT amino acids 1-127 of the VL region and amino acids 1-128 of the VH region were used to design gene fragments for use in Gibson Assembly reactions. Gene fragments were designed with 50 bp overhangs corresponding to regions in the P1316 plasmid backbone 9,28 (modified to contain a mouse IgG2a heavy chain) and in the joining fragment region 9,28 . For subclass-switched R-mAb plasmids, the order of sequencing and expression was reversed: expression and verification of immunoreactivity was performed first, followed by sequence verification of any subclass-switched R-mAb plasmids whose TC supe exhibited immunoreactivity comparable to the corresponding mAb and/or IgG2a R-mAb by either IB, ICC or IHC.
scFv cloning
Nucleotide sequences corresponding to the ANARCI predicted IMGT amino acids 1-127 of the VL region and amino acids 1-128 of the VH region were used in combination with a synthetic linker sequence to design an scFv gene fragment for use in Gibson Assembly reactions. The gene fragment encoded a leader-VH-linker-VL scFv followed by HA, sortase and 6xHis tags. The encoded leader amino acid sequence was MGWSCIILFLVATATGVHS, and the encoded linker amino acid sequence was GGGGSGGGGSGGGGSGGGS. Gene fragments were designed with overhangs corresponding to flanking regions in the polylinker cloning region of pCDNA3.4.
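The construct layout can be illustrated with the short sketch below, composed at the protein level in the order described above (leader-VH-linker-VL followed by HA, sortase and 6xHis tags). The leader and linker sequences are taken from the text; the HA and sortase motif sequences shown are the commonly used ones and are assumptions rather than values reported in this study.

```python
# Sketch of composing the scFv open reading frame at the amino acid level.
LEADER  = "MGWSCIILFLVATATGVHS"    # leader sequence given in the text
LINKER  = "GGGGSGGGGSGGGGSGGGS"    # linker sequence given in the text
HA_TAG  = "YPYDVPDYA"              # assumed standard HA epitope tag
SORTASE = "LPETGG"                 # assumed sortase A recognition motif
HIS6    = "HHHHHH"                 # 6xHis tag

def scfv_protein(vh, vl):
    """vh and vl are the IMGT-trimmed VH and VL amino acid sequences for one mAb."""
    return LEADER + vh + LINKER + vl + HA_TAG + SORTASE + HIS6
```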
Gene fragments (Twist Biosciences) were used in two-piece Gibson Assembly reactions. The initial subset of scFvs were cloned into the pcDNA3.1 expression plasmid. The plasmid backbone was generated by EcoRI/HindIII restriction digestion of the N52A/42 scFv originally generated to order by Genscript. The second subset of scFvs were cloned into the pcDNA3.4 expression plasmid. In this case, the plasmid backbone was generated by EcoRI/HindIII restriction digestion of the K89/34 scFv originally generated to order by Genscript. These digestions were followed by heat inactivation of the enzymes by incubation at 80 °C for 20 min.
In some cases, gel isolation of the 6,702 bp fragment was performed. The plasmid backbone was then used in a two-piece Gibson Assembly reaction with the synthetic scFv gene fragment | 2023-07-05T16:55:12.991Z | 2023-06-30T00:00:00.000 | {
"year": 2023,
"sha1": "709a8f5f4f711d0212697f7b1c0e3dc2d76559a5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "BioRxiv",
"pdf_hash": "709a8f5f4f711d0212697f7b1c0e3dc2d76559a5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
10198081 | pes2o/s2orc | v3-fos-license | Cluster-based MDS Algorithm for Nodes Localization in Wireless Sensor Networks with Irregular Topologies
Nodes localization in Wireless Sensor Networks (WSN) has arisen as a very challenging problem in the research community. Most of the applications for WSN are not useful without a priori known nodes positions. One solution to the problem is by adding GPS receivers to each node. Since this is an expensive approach and inapplicable for indoor environments, we need to find an alternative intelligent mechanism for determining nodes location. In this paper, we propose our cluster-based approach of multidimensional scaling (MDS) technique. Our initial experiments show that our algorithm outperforms MDS-MAP[8], particularly for irregular topologies in terms of accuracy.
INTRODUCTION
In recent years, there has been great advancement in wireless sensor computing technology [2]. The sensors are organized in a network and communicate by exchanging information using radio modules. After taking samples from the environment they sense (light level, air temperature, humidity, etc.), they can process the data or exchange it. The processed packets are sent to the sink node. Not all data is sent directly to the sink, as sensor nodes have a short radio communication range while being deployed over a vast region. There are many multi-hop routing protocols that offer optimal communication cost. Each sensor sends data to its closest neighbor, which is responsible for retransmitting the packets [3].
The problem of nodes localization appears in a variety of wireless sensor network (WSN) applications. The information gathered from the network can often be useless if not matched with the location where it is sensed. Finding the exact physical location is a crucial issue for continual network operation and WSN management. Many different techniques have been proposed for solving this problem, however, since most of them fail to perform well on irregular topologies, this problem remains a challenge.
Multidimensional Scaling (MDS) is a set of analytical techniques that have been used for many years in disciplines such as mathematical psychology, economics and marketing research. It is a suitable method for reducing the dimensionality of data, showing multidimensional data as points in two- or three-dimensional space [5]. This technique can also be used in WSN where only distances between nodes are known. The main advantage in using MDS is that it performs very accurate position estimation even when there are no anchor nodes. The distance measurements between nodes are used as input data. Although this method outperforms many others, it still provides poor results for irregular topologies. Since MDS is a centralized technique, all measurements are collected at the sink node where further processing is done. Another drawback of MDS is its time complexity. Sending all distance measurements to the sink node may result in unacceptable bandwidth occupation, which will reduce the efficiency of the system.
In this paper, we propose a modification of the classical Multidimensional Scaling technique in order to improve its performance. The general idea of our cluster-based MDS algorithm is to generate local maps within each cluster. Local maps are then merged into a global one in order to estimate the positions of the nodes. Given a sufficient number of anchor nodes (nodes with a priori known locations), this global map can be transformed into an absolute map.
The rest of the paper is organized as follows: In the second section, the relevant work related to the present localization techniques is discussed. The third section gives a detailed explanation of our cluster-based MDS algorithm (CB-MDS). The fourth section gives the results provided from the experiments. Finally, we conclude this paper in section five.
RELATED WORK
Many research groups have investigated different techniques for nodes localization in WSN. Most of the techniques proposed within the last years can be basically divided into two categories: range-based and range-free methods.
Range-free methods are also known as "hop-based" methods. They use hop or connectivity information for discovering nodes location.
The category of range-based methods estimates the distance between the neighboring nodes using different signal measurement techniques. RSSI (Receive Signal Strength Indicator) is the most common technique used since it doesn't require any additional hardware. Other popular techniques are ToA (Time of Arrival), AoA (Angle of Arrival) and TDoA (Time Difference of Arrival). They all require sensors equipped with powerful CPUs and appear as an expensive solution.
Multidimensional scaling (MDS) based algorithms are range-based sensor localization algorithms. There are different versions of MDS for node localization; the most popular is MDS-MAP, proposed by Yi Shang and Wheeler Ruml [8]. They showed that MDS-MAP outperforms other techniques, especially when applied to dense networks. But this centralized approach gives significant errors for irregular topologies, such as a C-shaped topology. Other approaches based on MDS exist, but they are complex and thus computationally demanding. One example is MDS-MAP(P) [9], which is a modification of MDS-MAP based on a decentralized approach. It shows better results than MDS-MAP, but requires intensive computational resources at each node. MDS-MAP(P) computes a local map at each node in the network and then merges the local maps into a global map.
The algorithm proposed in this paper is based on MDS and shows good results for irregular topologies in terms of accuracy and performance. Our cluster-based MDS is similar to MDS-MAP(P), but it calculates local maps only for each cluster instead of each node. Thus, only one node per cluster does the computation. Additionally, for irregular C-shaped and H-shaped topologies, our algorithm gives more accurate predictions of the sensor locations than MDS-MAP. Although variations of cluster-based approaches have been proposed (for example, see [1]), we used the metric proposed in [8] in order to provide results comparable with the reference MDS-MAP algorithm.
CLUSTER-BASED MDS
In this section we will explain in detail our cluster-based MDS algorithm for nodes localization within WSN.
Since radio signals are omni-directional, only nodes within a certain radio range R can communicate with each other. If two nodes are within each other's transmission range, they are called neighbors. Further, we made the following assumptions:
• There is a path between every pair of nodes.
• Nodes that belong to the same cluster are deployed in a small geographical area. In other words, each cluster consists of nodes in close proximity to each other.
• Each node uses the RSSI method for distance estimation.
• RSSI provides accurate distance estimation between neighboring sensors.
Our cluster-based MDS algorithm is divided into four phases described below:
1. Initial clustering
2. Cluster extension
3. Local map construction
4. Local map merging
In the initial clustering phase, the network is divided into subsets called clusters. There are many algorithms for node clustering in WSN [4][6]. In this paper, node clustering is not the subject of interest, so it is assumed that the network has already been clustered by a clustering algorithm. Each cluster consists of several neighboring nodes grouped together. In each cluster, one representative node is chosen to be the cluster-head. Other nodes in the cluster are called members of the cluster. Clusters are disjoint sets, so each node in the network belongs to only one cluster.
Figure 1. Cluster extension
In the second phase, clusters are extended by adding nodes from neighboring clusters. If node A from cluster a has a one-hop neighbor node B that belongs to another cluster b, then node B is added to cluster a, and node A is added to cluster b (Figure 1). Thus nodes A and B become gateways. Gateways act as cluster members at the same time. Each gateway node participates in at least two clusters.
In the third phase, the MDS-MAP algorithm generates a local map of each cluster. This phase has three sub-steps:
3.1. All member nodes send their measured distances to the cluster-head.
3.2. The cluster-head computes the shortest paths between all pairs of nodes that belong to the cluster and constructs the distance matrix.
3.3. The cluster-head applies classical MDS to the distance matrix. The output of MDS is a local map that consists of the relative positions of the cluster members.
Members of each cluster send their neighboring distance measurements to the cluster-head. After receiving the estimated distances, the cluster-head creates the distance matrix. If there are no measured distances between some of the nodes in the cluster, the cluster-head calculates these distances using Dijkstra's shortest path algorithm. When the distance matrix is filled with appropriate values (the true distances or shortest path distances), the cluster-head applies the multidimensional scaling technique to the matrix. The output from MDS-MAP gives the coordinates of the sensors within that cluster. At the end of the third phase, each cluster-head has the coordinates of its members. If the network consists of n clusters, then there are n local maps, each in a different coordinate system.
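A minimal sketch of this third phase (shortest-path completion of the distance matrix followed by classical MDS) is given below in Python rather than Matlab, purely for illustration; variable names and the neighbor convention (0 for unmeasured pairs, valid because the network is assumed connected) are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def local_map(measured):
    """measured[i, j] = RSSI distance estimate between cluster nodes i and j,
    or 0 if the pair is not within radio range (zero entries are treated as 'no edge')."""
    # Dijkstra completes the matrix with shortest-path distances for non-neighboring pairs.
    D = shortest_path(measured, method="D", directed=False)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                    # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:2]          # keep the two largest eigenvalues (2-D map)
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))
```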
In the fourth phase, the local maps are merged using the gateways. Two neighboring clusters share at least three common nodes, and these are called common-gateways. The common-gateways have different coordinates in the two local maps. One of the two neighboring clusters is known as the master and is denoted Cm, while the other is the slave and is denoted Cs. All member node coordinates that belong to the master are declared to be the correct positions. They are also known as local anchors. Common-gateway coordinates from Cm are also declared as local anchors, and their coordinates are considered to be correct positions. At the same time, their coordinates in Cs are considered to be relative positions. Then the best linear transformation is applied to all nodes in Cs, and their relative positions are aligned to the correct positions. The alignment consists of a shift, rotation and reflection of coordinates [7]. All nodes in Cm retain their positions, while all nodes in Cs receive new positions obtained after the alignment. Then all Cs members become local anchors. It is worth mentioning here that this phase can be done either iteratively or in parallel, which enables better overall performance. This process continues until the global map is generated.
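This merging step can be sketched as an orthogonal Procrustes alignment of the common-gateway coordinates, as below. Whether the best linear transformation of reference [7] also includes the scale factor is an assumption here; the function and variable names are illustrative.

```python
import numpy as np

def align_slave_to_master(gw_master, gw_slave, slave_all):
    """gw_master, gw_slave: (k x 2) coordinates of the common-gateways in the master
    and slave maps; slave_all: (m x 2) coordinates of every node in the slave cluster.
    Returns the slave coordinates expressed in the master's coordinate system."""
    mu_m, mu_s = gw_master.mean(axis=0), gw_slave.mean(axis=0)
    A, B = gw_master - mu_m, gw_slave - mu_s
    U, S, Vt = np.linalg.svd(B.T @ A)          # optimal rotation/reflection (Procrustes)
    R = U @ Vt
    scale = S.sum() / (B ** 2).sum()           # optional scaling factor (assumed)
    return (slave_all - mu_s) @ (scale * R) + mu_m
```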
If classical MDS is applied to the true distance matrix, node positions are estimated without error. Since only the distances between neighboring nodes are known, Dijkstra's algorithm is used to find the shortest path between each pair of nodes. This approximation produces an error, i.e., the correct positions usually differ from the predicted ones. This error grows when the nodes are in multi-hop communication range, and that is the main reason why MDS-MAP reports very poor results on irregular topologies. In cluster-based MDS, multidimensional scaling is applied to each cluster. Nodes in a cluster are close to each other, so the shortest-path distance estimation error is very small. Thus cluster-based MDS is expected to give better performance than MDS-MAP for irregular topologies.
SIMULATION RESULTS
We simulated CB-MDS algorithm on different network topologies with Matlab. Our work was mainly focused on irregular topologies (both grid and random), but we also considered other network properties like number of clusters, number of anchors and average connectivity.
• We consider grid and random deployment of the nodes for C-shaped, L-shaped and H-shaped topologies.
• Each node location was discovered with the MDS-MAP algorithm and the CB-MDS algorithm (using 5, 7, 10 and 15 clusters).
• Different numbers of global anchors (3, 4, 6 and 10) were used for generating the absolute map.
For each topology, we experimented with randomly placed nodes with a uniform distribution. Nodes are deployed on a 10r x 10r square, where r is a unit edge distance. For the grid topology, nodes are not placed exactly on the grid points, but a 10%r placement error is added to each coordinate. This error is modeled as Gaussian noise. We varied the radio range R (1.3r, 1.5r, 1.8r, 2.0r, 2.5r), which led to different connectivity of the network. For network clustering, we used the kmeans function from Matlab. After computing the global map, different numbers of randomly chosen anchor nodes (3, 4, 6 or 10) were used for aligning the relative positions to absolute positions [7]. We find the best linear transformation to generate the absolute map of the nodes. Figure 6 to Figure 9 demonstrate the average performance of our algorithm as a function of connectivity for different topologies. The connectivity values represent averages over 30 trials. Estimation errors are normalized with R, as proposed in [8][9]. As can be seen from the figures, CB-MDS produces smaller estimation errors than MDS-MAP for both topologies. Figure 6 shows that CB-MDS is better than MDS-MAP for all connectivity levels. It can also be mentioned that when the connectivity level is low, better results can be achieved if a small number of clusters is used. When the connectivity level is 13, the best results are obtained with 7 clusters. As the connectivity level increases, it is better to use more clusters. For connectivity levels above 17 (20), the error is minimal if 10 (15) clusters are used to compute the local maps. This is true regardless of the number of anchors.
CONCLUSION
In this paper, we presented a new cluster-based MDS algorithm for node localization in WSN. In our approach, the network is divided into clusters and each cluster has one cluster-head for computing local information. Each cluster-head creates its own local relative map, which consists of the nodes in its cluster. All local maps are merged into one global relative map using the best linear transformation. If anchor nodes are present in the network, this global map can be transformed into a global absolute map.
Our experiments (using the metric given in [8]) show better results than MDS-MAP and MDS-MAP(P). Our algorithm estimates the node locations with greater accuracy than the MDS-MAP algorithm when applied to irregular topologies. Compared with MDS-MAP(P), our algorithm is less computationally intensive, since in MDS-MAP(P) each node computes its own local map, whereas in CB-MDS only the cluster-heads do the computation. | 2015-07-06T21:03:06.000Z | 2008-10-28T00:00:00.000 | {
"year": 2016,
"sha1": "bd18695b5e61cb38e3f2a24b9f6f60e79237462d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1606.07506",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bd18695b5e61cb38e3f2a24b9f6f60e79237462d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
1534979 | pes2o/s2orc | v3-fos-license | The correlation functions of the $(D_{4},A_{6})$ conformal model
In this work, we exploit the operator content of the $(D_{4}, A_{6})$ conformal model. By constructing $Z_{2}$-invariant fusion rules for a chosen subalgebra and by solving the bootstrap equations consistent with these rules, we determine the structure constants of the subalgebra.
Introduction
The biggest advantage in studying two-dimensional conformal field theories is the presence of an infinite number of symmetries. This fact is very helpful for computing the correlators of the fields present in the theory.
The first approach proposed to reach this goal is the bootstrap approach [1]. Using the Coulomb gas formulation, Dotsenko and Fateev solved, in [2], the bootstrap equations and determined the conformal blocks as well as the structure constants of the operator algebras associated with the minimal models. These models were later shown to form the A-series of the A-D-E classification obtained in [3]. They involve only spinless fields.
In the case of the models of the D-series, involving also spin fields, many approaches have been proposed for the determination of the structure constants and the correlators (see for example [4], [5], [6], [7] and [8]).
In [5] the authors have determined the structure constants appearing in the three point correlation functions for the particular case of the (A 4 , D 4 )model or what they call the D 5 -model. They have used the well known coulomb gas technique adapted to the cases where the correlators involve spin fields. Their results depend on the choice of the Z 2 -symmetry used to separate the two copies of certain fields present in the theory. Among the four different possibilities, they took only one by imposing the consistency of the results with the bootstrap equations. Their choice has led to the vanishing of one of the structure constants. This fact was not predicted by the non-chiral Z 2 invariant fusion rules, but at the same time it does not disagree with a non-zero Z 3 -invariant 3-pt function of extended fields [10]. The implementation of such discrete symmetries in non diagonal models is discussed in [11].
In our present work, we apply exactly the same method, as in [5], to determine the structure constants of the (D 4 , A 6 )-model. The only difference is that we consider a different Z 2 -transformation. Our results are consistent with the bootstrap equations and also with the non-chiral Z 2 invariant fusion rules.
The paper is organised as follows. In section 2, we start with the operator content of our model. After that, we write the fusion rules and exhibit the different choices for the Z 2 -symmetries. In section 3 , we derive the desired Z 2 -invariant fusion rules of our operator algebra. In section 4 and 5, we give the essential of the method used to determine the structure constants. In section 6, we apply it to the (D 4 , A 6 )-model and give the conclusion.
The conformal (D 4 , A 6 ) Model
The partition functions of unitary conformal field theories defined on the torus were classified into three infinite series: the A-D-E series [3]. In the generic case, for a modular invariant model corresponding to a minimal one, the partition function is written as a sesquilinear form in the characters χ_h (and their antiholomorphic counterparts) of the representations of the left (right) Virasoro algebra generated by the primary fields, where E(p, p′) is the set of conformal weights given by the Kac formula, with its usual symmetry property. The nonnegative integers N_{h,h̄} denote the multiplicities of occurrence of the corresponding left-right representation modules (the identity operator Φ_{(1,1),(1,1)} being nondegenerate makes N_{h_{1,1},h_{1,1}} = N_{0,0} = 1). The characters χ_h and χ̄_h̄ are shown to form a unitary representation of the modular group PSL(2, Z). So, denoting by N the matrix whose elements are the integers N_{h,h̄}, the problem of finding all possible modular invariant forms of equation (2) is reduced to the problem of finding all matrices N commuting with those representing the generators S and T of the modular group (in [10] one can find the explicit elements of the matrices S and T).
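The display equations dropped from the paragraph above are the standard ones; in one common convention (which may differ from the paper's normalization) they read

$$ Z = \sum_{h,\bar h \in E(p,p')} N_{h,\bar h}\,\chi_h\,\bar\chi_{\bar h}, \qquad h_{r,s} = \frac{(r p - s p')^2 - (p - p')^2}{4 p p'}, \qquad h_{r,s} = h_{p'-r,\,p-s}. $$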
For a given minimal model with m ≥ 5 there are always at least two solutions to this problem. A trivial one, with N_{h,h̄} = δ_{h,h̄}, corresponds to the diagonal modular invariant models of the A-series; here χ_{r,s} = χ_{h_{r,s}}. The second (less trivial) solution gives the modular invariant models of the D-series, including those associated with the minimal models with p′ = m = 2(2n + 1), to which our model belongs. For the model with m = 6, associated with the minimal model M(7, 6), the partition function Z_{(D_4,A_6)} is obtained as a sum over s = 1, 2, 3. A third solution exists for some exceptional minimal models; it gives the modular invariants of the E-series (see [10] for a complete review of conformal field theory).
The fusion rules
The set of primaries and their descendants of a given minimal model forms a closed algebra with respect to the fusion operation. This is in general an operation among the operators forming representations of chiral algebras, inherited from the operator product algebra. In our case, when considering only the "chiral part" of the conformal theory, i.e., only the holomorphic part (or, in the same manner, only the anti-holomorphic part), the fusion between two conformal families is written with fusion coefficients {N^k_{i,j}} taking the values 1 or 0 to indicate whether at least one member of the conformal family φ_{r′′,s′′} is present in the fusion of the two families φ_{r,s} and φ_{r′,s′}. This looks similar to the decomposition of the usual tensor product of representations in terms of irreducibles. There is a formula, due to Verlinde [9], expressing these coefficients for the minimal models in terms of the elements of the unitary matrix S of the modular group. On the operator algebra, the fusion operation is realized by the short-distance expansion of the operator product, where the contribution of the descendants is contained in O(z, z̄). The coefficients appearing in the expansion are the structure constants of the operator product algebra. They are of great interest because, as we will see later, they also appear in the 3-point correlation functions.
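For reference, the standard forms of the expressions alluded to here are (conventions may differ slightly from the paper's)

$$ \phi_{(r,s)} \times \phi_{(r',s')} = \sum_{(r'',s'')} N_{(r,s),(r',s')}^{(r'',s'')}\; \phi_{(r'',s'')}, \qquad N_{ij}^{\;k} = \sum_{l} \frac{S_{il}\,S_{jl}\,S^{*}_{lk}}{S_{0l}}, $$

$$ \Phi_i(z,\bar z)\,\Phi_j(0,0) = \sum_k C_{ijk}\; z^{h_k-h_i-h_j}\;\bar z^{\bar h_k-\bar h_i-\bar h_j}\;\bigl[\Phi_k(0,0)+O(z,\bar z)\bigr]. $$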
For our model, if we restrict ourselves to the subalgebra containing the operators Φ_{(1,1),(1,1)}, Φ_{(1,1),(5,1)}, Φ_{(5,1),(1,1)}, Φ_{(3,1),(3,1)} and Φ_{(5,1),(5,1)}, we first write the chiral fusion rules, i.e., the fusion rules between the holomorphic parts only (or the antiholomorphic ones). Combining both the left and right parts, we then obtain the non-chiral fusion rules. To be able to distinguish between the two copies of the field Φ_{(3,1),(3,1)}, both present in our subalgebra, one introduces a Z_2-symmetry which gives opposite parities to the two copies. The action of this transformation on the rest of the fields is fixed by imposing a consistency condition on the fusion rules. Denoting the two copies Φ_+ and Φ_−, it is easy to verify that the fusion is preserved under four distinct transformations. One can then choose four different Z_2-symmetries. They lead to four different operator algebras, i.e., the fusion rules are constrained differently in each case. But as we will see later, only the results for the structure constants obtained from the possibility T_2 are in total concordance with the fusion rules and the bootstrap equations. This transformation was already used in [8] for the D-odd models. This choice is different from that taken in [5], where the possibility T_1 was preferred. With the possibility T_2, the fusion rules (12) become:
With the possibility T 2 the fusion rules (12) become : The correlation functions and the structure constants In a 2-dimensionnal conformal field theory, the form of the two and 3-point correlation functions are fixed only from symmetry considerations. Considering only primary fields, and for a particular normalisation, the two and three point functions are written as where This means that the product Φ h 1 ,h 1 × Φ h 2 ,h 2 contains the field Φ h 3 ,h 3 with strength C 123 . As already mentioned the constants C ijk of the 3-point correlation function are then related to the structure constants of the operator product algebra by The 4-point correlation functions are fixed up to an undermined factor depending on the variable z = z 12 z 34 Using the operator product expansion (10), one can write G(z 1 , ..., z 4 ) in terms of the structure constants and because of the fact that it is not obvious for which pairs of fields we should compute the operator product first (the duality or the crossing symmetry), one obtains strong constraints on the structure constants : the bootstrap equations. In the s-channel (z 12 , z 34 → 0,or z → 0 ) one obtains where the F i and F i are the conformal blocks. In the case where all the fields present in the correlation are symmetric left-right combination (i.e are spinless), the coefficients A kl are diagonals. This is always the case in the models of the A-series but not for those of the D-series because of the presence of spin fields.
The conformal blocks and their integral representation
To determine the structure constants for the (D 4 , A 6 ) model, we use the method first used by Dotsenko in [2] for the models of the A-series and later adapted by McCabe for the D 5 model in [5] and [6]. It consists at first in finding the integral representation of the conformal blocks in the coulomb gas formulation and to write their approximate forms as well as that of the associated 4-point correlation function (21) in both the s-and t-channel. After that one can compare the obtained expressions for G(z 1 , ...., z 4 ) with those given by the equations (19) and (20) respectively and extract the structure constants.
In this section we try to reproduce, with very few details and using the same notation as in [5], the most important steps in the determination of the integral representation of the conformal blocks for the special case of 4-point correlations containing the field Φ_{(3,1),(3,1)}. In the Coulomb gas approach, the conformal blocks are written in terms of vertex operators V_{α_{rs}}(z) of charge α_{rs} and corresponding conformal weight. The operators Q_± are the screening operators; they are integrals over closed contours of the vertices V_{α_±} of conformal dimension 1, and their exponents N and M are fixed by the neutrality condition. For the chosen 4-point correlation (22), the neutrality condition is solved for N = 0 and M = 2, and the conformal block can then be written in a simpler form in terms of the integrals I_k(a, b, c, g, z). To obtain (25) one performs the global conformal transformation which fixes z_1, z_3 and z_4 to 0, 1 and ∞ respectively, while z_2 is transformed into z = z_{12}z_{34}/(z_{13}z_{24}). There are in principle many possibilities for the choice of the integration contours appearing in (25). But because the integrand has branch cuts at 0, 1, z and ∞, it is shown that there are only three independent integration contours (see [2] for more details), leading to three different blocks defined by the corresponding integrals, whose behaviour in the s-channel (z → 0) can be read off directly. Now one can write the 4-point correlation function of interest (22) in the s-channel and compare it with the corresponding one given by (19) to obtain equations involving the desired structure constants. To do the same in the t-channel one uses the monodromy properties of the integrals I_k to transform them into functions of (1 − z) with the monodromy matrices; [γ] is the monodromy matrix. Its general expression, as well as that of the N_k, is given in [2].
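In standard Dotsenko-Fateev conventions (which the paper presumably follows up to normalization), the vertex charges, conformal weights and neutrality condition that were dropped here read

$$ \alpha_{r,s} = \tfrac{1}{2}(1-r)\,\alpha_+ + \tfrac{1}{2}(1-s)\,\alpha_-, \qquad \Delta_\alpha = \alpha\,(\alpha - 2\alpha_0), \qquad \alpha_\pm = \alpha_0 \pm \sqrt{\alpha_0^2+1}, $$

$$ \sum_i \alpha_i + N\alpha_+ + M\alpha_- = 2\alpha_0 . $$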
In the particular case where the correlation function studied contains an operator having a null vector at level one, like Φ_{(1,1),(m,n)}(z, z̄) for example, we have an additional constraint. This equation, together with the usual ones obtained from global conformal invariance, permits very simple forms of the conformal blocks to be obtained [5].
Application
As an application of the previous algorithm, explicit values of some structure constants of the subalgebra are calculated in this section. The first constant is obtained by considering a suitable correlation function. From the bootstrap equations (19) and (20) one can write its expansion in the s-channel (31) and in the t-channel. On the other hand, from the integral representation one finds only one block, whose behaviour when z → 0 and when (1 − z) → 0 can be written down (35). Comparing the leading terms of the corresponding expressions one finds ( C_{(51|11),(11|51),(51|51)} )^2 = 1 (36).
6.2 The structure constant C_{(51|11)+−}
Here we use the function G_2. The bootstrap equations give its expansions in the s-channel and in the t-channel, and its development in terms of the conformal blocks follows. The blocks F_l(z_1, .., z_4) are determined using the fact that the correlation G_2 involves null state vectors at level one. From the equations (29a), (29b) and (29c) for the behaviour of the I_k's when z → 0, a comparison of both expressions in the s-channel gives a first set of relations. The comparison between its behaviour when z → 0 and (47), and that when (1 − z) → 0, leads to further relations. Resolving the two systems of equations, we find the corresponding values. It is noted here that G_3 can be calculated as a correlator of the A-series; however, we calculated it with the non-diagonal block form and observed that the only non-vanishing elements of A_{ij} are those of the diagonal, a fact that is not trivial when considering G_3 with a different action of Z_2.
Let us consider the correlation G_4. In the s- and t-channels we have, respectively, the expansions (58). A comparison with (49) in the s-channel leads to (59), and in the t-channel to (60). The second equation in (60) gives A_{13} < 0, and the second one of (59) leads to C_{−−+} = −C_{+++}. Finally, we note from the results obtained from G_4 that if we choose C_{+++} to be positive (as are all the structure constants of the A-series), C_{−−+} will be of negative sign.
Sign of C −−(51|51)
From the corresponding correlation function, and if we take C_{++(51|51)} to be positive, we obtain the sign of C_{−−(51|51)}. It is noted here that the calculation of the conformal blocks and the corresponding monodromy matrices for a correlation of type G_5 is carried out in the same way as was done for the correlation (22).
Sign of C (51|11)(11|51)(51|51)
Consider the corresponding correlation. The structure constants on the l.h.s. of the relation above can be chosen to be positive (this does not alter the form of the two-point correlation). This fixes the sign of C_{(51|11),(11|51),(51|51)}.
Summary
To conclude this work, we would like to note that the use of the transformation T_2 as the Z_2-symmetry in our case does not give any inconsistencies with the non-chiral fusion rules and the bootstrap equations. We also note that the relative signs of the structure constants we obtained are in concordance with those obtained in [4]. | 2014-10-01T00:00:00.000Z | 2002-10-08T00:00:00.000 | {
"year": 2002,
"sha1": "2feec57bc7bf0ce84daddfbceba9c7a5005675ba",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0210071",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e1fbf4542a05e24e3284b90959454a0a8244503f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15742299 | pes2o/s2orc | v3-fos-license | Colonization factors of Campylobacter jejuni in the chicken gut
Campylobacter contaminated broiler chicken meat is an important source of foodborne gastroenteritis and poses a serious health burden in industrialized countries. Broiler chickens are commonly regarded as a natural host for this zoonotic pathogen and infected birds carry a very high C. jejuni load in their gastrointestinal tract, especially the ceca. This eventually results in contaminated carcasses during processing. Current intervention methods fail to reduce the colonization of broiler chicks by C. jejuni due to an incomplete understanding on the interaction between C. jejuni and its avian host. Clearly, C. jejuni developed several survival and colonization mechanisms which are responsible for its highly adapted nature to the chicken host. But how these mechanisms interact with one another, leading to persistent, high-level cecal colonization remains largely obscure. A plethora of mutagenesis studies in the past few years resulted in the identification of several of the genes and proteins of C. jejuni involved in different aspects of the cellular response of this bacterium in the chicken gut. In this review, a thorough, up-to-date overview will be given of the survival mechanisms and colonization factors of C. jejuni identified to date. These factors may contribute to our understanding on how C. jejuni survival and colonization in chicks is mediated, as well as provide potential targets for effective subunit vaccine development.
Introduction
Campylobacter infections are now the leading cause of human bacterial gastroenteritis in many developed countries [1,2]. Campylobacter enteritis in humans is mainly caused by C. jejuni [2]. Chickens are a natural host for Campylobacter species and are often colonized by C. jejuni in particular [2]. Therefore, this review will focus on the interaction of C. jejuni with the broiler chick in particular. Transmission to humans probably most commonly occurs through consumption and handling of chicken meat products contaminated with this zoonotic pathogen during slaughter and carcass processing [3,4], in which Campylobacter colonization of the chicken intestinal tract plays an important role [5]. The chicken reservoir as a whole is estimated to be responsible for up to 80% of human campylobacteriosis cases [4]. But despite many efforts, current intervention methods fail to reduce the colonization of chickens with Campylobacter [6]. Intensive research in the past few years resulted in an increased insight into the colonization mechanism of C. jejuni in chicks, with several of its colonization factors identified. This newly gathered information, which is the topic of this review, might aid in the development of new effective vaccines to reduce C. jejuni prevalence in broiler flocks and eventually the number of chicken meat-related human campylobacteriosis cases.
C. Jejuni Colonization Pattern in Broiler Chicks
It is generally accepted that C. jejuni colonizes the avian gut as a commensal, and colonized broilers carry a large number of bacteria in their ceca (generally around 10^6 to 10^8 cfu/g), the predominant site for colonization [7,8]. Ingestion of as few as 35 cfu of C. jejuni can be sufficient for successful colonization of chicks [9]. After ingestion, the bacterium reaches the cecum and multiplies, resulting in an established colonizing Campylobacter population within 24 hours after entrance [10]. Most flocks become colonized only at an age of two to four weeks [11,12], probably due to the presence of maternally-derived antibodies in young chicks conferring protection against colonization [13]. Once flock colonization is detected, the majority (> 95%) of the birds of that flock are colonized within several days [14] and stay so until slaughter [10,15].
C. jejuni isolates can have different colonization potential [9,16,17]. Isolates from humans have been reported to be less successful in colonizing chickens than poultry isolates [17,18]. C. jejuni isolates from poultry have been divided in three colonization phenotypes. Strains of the first phenotype fail to colonize 14-day-old chickens. In the second phenotype, strains can colonize but are readily eliminated and are classified as transient. The third phenotype contains strains that show efficient and sustained colonization [16,18]. These three colonization phenotypes were found to be stable and independent of in vivo passages and the amount of viable bacteria in the inoculum. Although C. jejuni strains did show enhanced colonization capacity (i.e. the minimal infective dose required for maximal colonization decreased) after passage through the avian gastrointestinal (GI) tract, their colonization phenotype did not change [17]. Enhanced colonization capacity and increased virulence after in vivo passage through chicks has been shown in several other studies as well [9,19,20]. This variability in colonization capacity, but the fixedness of the colonization phenotype of a given strain indicates that C. jejuni genes involved in initial and sustained colonization are not identical. However, in contrast to this stable colonization phenotype [17], it has been previously reported that after several in vivo passages a poorly colonizing isolate was able to consistently colonize chicks [9].
C. Jejuni Colonization Factors in the Chicken Gut
As in the external environment, C. jejuni is likely to encounter stressors in the chicken intestine that compromise optimal growth [21]. The persistent colonization of the chicken GI tract by C. jejuni indicates that the bacterium harbours regulatory systems that confer protection against a hostile environment both inside and outside the host. The mechanism by which the bacterium adapts to this "hostile" environment, resulting in successful and persistent colonization, is poorly understood. It is clear, however, that successful colonization of the chicken GI tract is a multifactorial process [22] in which genes involved in all areas of the colonization process of C. jejuni play a role. In the next sections, a thorough, up-to-date overview is given of identified colonization factors of C. jejuni in the chicken gut, summarized in Table 1.
Multidrug and bile resistance
The Campylobacter multidrug efflux pump (CME) plays an important role in multidrug resistance in C. jejuni, mediating resistance to heavy metals and a broad range of antibiotics and other antimicrobial agents [23]. It is also responsible for resistance to bile salts in the chicken intestinal tract and is therefore essential for successful intestinal colonization in chickens [24]. CME is encoded by the operon cmeABC and consists of a periplasmic protein (CmeA), an inner membrane efflux transporter (CmeB) and an outer membrane protein (CmeC). Expression of cmeABC in C. jejuni is modulated by CmeR, functioning as a transcriptional repressor [23]. In a cmeR mutant, one gene in particular was upregulated most compared to the wild type strain: cj0561c, encoding a putative periplasmic protein [25]. It is suggested that CmeR directly inhibits the transcription of this gene. The expression of both cmeABC and cj0561c is strongly induced by bile compounds present in the chicken intestinal tract and expression of Cj0561c is increased over four-fold during chicken colonization [25,26]. Inactivation of cj0561c and loss-of-function mutation of CmeR resulted in reduced fitness of C. jejuni in chickens and impaired ability to colonize chicks, respectively [25]. Finally, a mutant in the Campylobacter bile resistance regulator (cbrR) gene, coding for the response regulator CbrR, was shown to be sensitive to bile components in vitro [27]. In addition, this mutant had reduced colonization ability in chicks indicating that also in vivo CbrR modulates resistance to bile salts in C. jejuni. Together these observations indicate that bile salts and multidrug resistance is crucial for C. jejuni to survive in the chicken gut.
Chemotaxis
Since C. jejuni is a highly motile bacterium, chemotaxis might be an important factor promoting its migration toward favourable conditions, and thus its survival in and colonization of the intestinal mucosa. For successful chemotaxis, an intact gradient-sensing mechanism, in which adaptation has a crucial role, is indispensable. The C. jejuni genome contains genes encoding putative adaptation proteins: a methylesterase CheB and a methyltransferase CheR, which are both involved in a methylation-dependent chemotaxis pathway [28]. A ΔcheBR mutant was shown to have a reduced ability to colonize the chick cecum [29]. C. jejuni is attracted by the glycoprotein mucin, the principal constituent of mucus, and also by the bile and mucin constituent L-fucose. The amino acids aspartate, cysteine, serine and glutamate, and the salts of the organic acids citrate, fumarate, α-ketoglutarate, malate, pyruvate and succinate also act as chemoattractants [30]. Additionally, L-asparagine, formate and D-lactate were recently identified as attractants of C. jejuni [31]. Surprisingly, in this study, C. jejuni was not attracted to citrate or L-fucose. All these chemicals are sensed by the transmembrane methyl-accepting chemotaxis proteins (MCP) of C. jejuni [31]. Hendrixson & DiRita [32] identified 22 C. jejuni genes involved in colonization of the chicken GI tract. Severely affected colonization capacity particularly resulted from mutation in the determinant of chick colonization gene B (docB), encoding a putative MCP and alternatively called chemoreceptor transducer-like protein 10 (Tlp10). DocC (Tlp4), another MCP, was important for obtaining wild-type colonization levels. Finally, Tlp1 is also important for chick colonization since a tlp1-isogenic mutant showed reduced colonization ability [32,33]. Surprisingly, all three chemoreceptors (tlp1, tlp4 and tlp10) have been identified as being important for invasion (see further) of C. jejuni in chicken embryo intestinal cells (used as a model for in vivo invasion in chicken gut epithelial cells), but not for chemotaxis [31]. While it is clear that these factors contribute to in vivo colonization, their precise role in colonization requires further study. The putative accessory colonization factor (acfB), encoding a probable MCP, is highly upregulated in the chick cecum and, although not important in the early stages of colonization, it cannot be ruled out that it might be involved in the persistence of C. jejuni in the chick cecum in the presence of a developed gut flora [26]. A number of other genes have also been associated with C. jejuni chemotaxis, including the Campylobacter energy taxis response genes cetA and cetB [34] and the chemotaxis regulatory gene cheY, which codes for a response regulator controlling flagellar rotation and is involved in the same signal transduction pathway as CheBR [28]. A cheY mutant was affected in its colonization potential of the chick cecum [32]. The production of the autoinducer signal AI-2 has also been shown to be important for colonization [35]. Inactivation of luxS, the gene encoding the AI-2 biosynthesis enzyme, led to a decrease in chemotaxis toward organic acids, in vitro adherence to chicken hepatoma (LMH) cells and chick colonization. These observations indicate that energy taxis may be an important force in environmental navigation by C. jejuni, driving the organism toward optimal chemical conditions for colonization.
Flagella and motility
Intact and motile flagella are important colonization factors for C. jejuni in chickens [36]. C. jejuni contains one or two polar flagella. The flagellar filament consists of multimers of the protein flagellin and is attached by the hook protein to a basal structure, embedded in the cell membrane and serving as a motor for rotation. The flagellin locus contains two adjacent genes, flaA (encoding the major flagellin) and flaB (encoding a minor flagellin). Both genes are independently transcribed, with the flaA gene regulated by a σ28 promoter and the flaB gene by a σ54 promoter [32,37,38]. Environmental and chemotactic stimuli modulate flaA and flaB promoter activity. Medium pH, growth temperature and the concentration of certain inorganic nutrients affect flaB promoter activity [39]. Lower pH, bovine bile, deoxycholate, L-fucose, high osmolarity and chemotactic effectors such as aspartate, glutamate, citrate, fumarate, α-ketoglutarate and succinate all upregulate the flaA promoter. Proline, high viscosity and milk fermented by Bifidobacterium or Lactobacillus strains downregulate the flaA promoter [40,41]. The flaA gene seems to be highly conserved among Campylobacter isolates and its transcription is usually higher than that of flaB [42]. Transcription of σ54-dependent genes, necessary for assembly of the hook-basal body filament structure, is regulated by a two-component system composed of the sensor kinase FlgS and the response regulator FlgR [43]. Experiments with mutants have shown that flaA but not flaB is essential for colonization of chickens [44,45], although probably both are needed for full motility [46]. Colonization is also impaired in a mutant of the motility accessory factor 5 (maf5) gene, important for the formation of flagella [44,47]. Once C. jejuni reaches the cecum, it seems that mutants in the flagellar biosynthesis genes rpoN (encoding σ54) and fliA (encoding σ28) and the response regulator gene flgR could establish colonization at a high inoculation dose, although bacterial numbers were much lower compared to the controls and the number of chicks colonized by these mutants was extremely low [32,43,48]. Chickens exposed to the flgR mutants showed a delayed colonization. Moreover, a re-infection of Campylobacter-negative chickens was not observed. Since bird-to-bird transmission in flocks is generally considered to be very rapid, this indicates that the FlgS/FlgR system is mainly required for initial colonization and less for survival and persistence in the cecum of chicks [43]. The flgK mutant, expressing only the hook, also showed diminished motility and was completely attenuated for colonizing the chick cecum [48]. Further support for flagella being important colonization factors for C. jejuni in chickens was given by Hiett et al. [49]. These authors demonstrated differential expression patterns of flagella proteins between a poor and a robust colonizer strain in poultry. These differentially expressed genes, coding for proteins involved in the modification of the flagellum, are located in hypervariable regions of the C. jejuni genome. This variability was shown to be extendable to the protein level, and thus may contribute to the survival of C. jejuni in its different environments and hosts. In C. jejuni chicken isolates, the flagellin O-linked glycosylation island, responsible for successful flagellin assembly and motility, is very diverse [50].
Five genes (cj1321 -cj1325/6) lying in this variable region are, however, significantly prevalent among C. jejuni strains associated with poultry [51] and might therefore be important for the ability of certain C. jejuni strains to colonize this host. Mutagenesis and functional and structural data supported this hypothesis, with particularly cj1324 being important for chick colonization [52].
The flagellar apparatus functions as a type III secretion apparatus for the Campylobacter invasion antigens (Cia proteins) [53], important for in vitro cell invasion [54] and chick colonization [55], and secretion is enhanced upon exposure to chicken mucus [56]. A correlation has been demonstrated between chicken colonization potential and in vitro secretion of Cia proteins [56]. RpoN mutants are completely aflagellated and as such do not secrete Cia proteins, nor do flgK mutants [48], making it clear that the molecular basis behind the colonization mechanism in chickens is complex.
The role of motility of C. jejuni colonization in the chicken GI tract is not fully understood. Non-motile C. jejuni mutants can colonize chickens, be it at substantially reduced levels and only when chickens are inoculated with high amounts of viable cells [43]. Probably, motility is needed for C. jejuni to pass the GI tract so it can reach its protective niche, the mucus layer of the cecal crypts [7], and to resist gut peristalsis [32], hence it is important for initial colonization. It is, however, not known if motility is important in the persistence of C. jejuni in the intestinal tract, leading to long-term colonization. In any case it is clear that the specialized flagellum of C. jejuni serves multiple functions in the adaptation of C. jejuni to the chicken GI tract.
Surface-accessible carbohydrate structures and immune evasion
Several surface-accessible carbohydrate structures (SACS) such as flagella, lipooligosaccharides (LOS), a capsule and Oand N-linked glycans contribute to C. jejuni colonization in chicks.
In C. jejuni, the lipopolysaccharide molecule only consists of lipid A and the (inner and outer) core oligosaccharide and is therefore referred to as LOS, as the high-molecularweight O-polysaccharide is a capsular polysaccharide not linked to the lipopolysaccharide molecule [57]. C. jejuni LOS is important for immune evasion in humans as well as host cell adhesion and invasion, and sialylation of the LOS outer core further enhances epithelial cell invasion [58]. Moreover, sialylated LOS results in reduced immunogenicity [59] and increased invasion potential in Caco-2 cells [60]. The majority of strains from human and chicken origin belonging to the clonal complex CC-21, an ecologically diverse and the largest complex in the general population structure of C. jejuni, were found to belong to one sialylated LOS class in particular, LOS class C, correlating with a high invasive potential [60]. Thus, sialylation of the LOS outer core is likely to contribute to successful colonization of C. jejuni in a suitable host. Genes responsible for the formation of the polysaccharide capsule, surrounding the surface of C. jejuni cells and possibly involved in survival, adherence and evasion of the host's immune system [57,61], also play a role in colonization of the chicken intestine by C. jejuni. Mutation in the capsular polysaccharide transporter gene M (kpsM), which results in the loss of a high molecular weight glycan, and thus absence of a capsule, abolished colonization of chickens [44,62]. A C. jejuni mutant for the kpsE gene, which is unable to express any capsular polysaccharide, was not hampered in its ability to colonize the chicken intestinal tract but the number of bacteria recovered from cecum and colon were lower compared to the control [63]. Interpretation of these results is hampered by the use of different chicken in vivo models and bacterial strains. Capsule formation and LOS biosynthesis genes are located in hypervariable regions in the C. jejuni genome [64], resulting in an enormous antigenic diversity among isolates.
C. jejuni is unique in being the only known prokaryote having an N-linked protein modification system, which is encoded by the pgl multigene locus [65,66]. The N-linked glycosylation pathway is responsible for posttranslational modification of multiple proteins, including flagellin, and is conserved among C. jejuni isolates [50,67]. In contrast, the only known proteins to be modified by O-linked glycosylation in C. jejuni (see above) are flagellar subunits [50]. In humans, most of the N-linked glycosylated proteins are highly immunogenic with their glycosyl moieties being immunodominant while only limited antibody is generated against the protein fraction [67]. This indicates that glycosylation might offer C. jejuni a unique system of immune evasion by masking primary amino acid sequences. A mutant in the N-linked general protein glycosylation pathway gene H (pglH) possessed an intact capsule, but was unable to glycosylate proteins and was severely reduced in its ability to colonize the chicken intestinal tract [44,68]. Also strains with other mutations in the pgl locus were affected in their ability to colonize chicks [32], indicating that N-linked glycosylation in C. jejuni is an important colonization determinant. However, glycan modification of Cj1496c, a glycoprotein important for in vitro cell invasion in human epithelial cells and initial chick colonization does not seem to influence its function [66]. Moreover, most N-glycosylated proteins, including Cj1496c, are annotated to be periplasmatic and do not come in direct contact with host factors and the exact mechanism by which this glycosylation system contributes to colonization remains to be elucidated [66,69].
To conclude, several SACS of C. jejuni, including the unique N-linked glycans, contribute to successful colonization in chicks. Not only by mediating adhesion (see further), but also by creating an enormous antigenic diversity in C. jejuni isolates resulting in persistent highlevel gut colonization of certain strains.
Two-component regulatory systems
C. jejuni, like all prokaryotes, responds to environmental changes by using two-component regulatory systems (TCRSs) consisting of response (R) regulators and sensor (S) kinases regulating C. jejuni gene expression [26,70]. A histidine kinase senses specific environmental triggers through autophosphorylation of the histidine residue. Subsequent transfer of the phosphate group to the corresponding response regulator turns it into an active transcription factor that can stimulate the differential expression of target genes, allowing C. jejuni to immediately respond to changing environmental conditions within the chicken gut such as several stressors, nutrients and temperature [70].
To date, five TCRSs have been identified in C. jejuni to be important for optimal chick colonization: FlgRS [43] and the orphan response regulator CbrR [28] (see above), the reduced ability to colonize (RacRS) system [71], diminished capacity to colonize (DccRS) [72] and Campylobacter planktonic growth regulation (CprRS) [73]. RacRS is responsive to temperature, and mutation of racR reduces the colonization potential of C. jejuni [26,71] (see also below). DccRS controls the expression of several genes encoding probable membrane-associated proteins [26]. Finally, CprRS is thought to control essential biological processes, stress tolerance and biofilm formation, making it possible for C. jejuni to adapt to different environments [73]. A ΔcprS mutant was reported to display a dramatic dose-dependent defect for chick colonization. Thus, it is clear that the genome of C. jejuni harbours multiple TCRS genes, involved in all aspects of C. jejuni biology, which are vital for its efficient adaptation to the chicken host.
Temperature regulation and heat shock response
The elevated body temperature of the chicken (42°C) as compared to humans implies the transcription of many different proteins uniquely transcribed in response to the chicken GI tract. The RacR/RacS signal transduction system responds to temperature changes and might play an important role in chicken colonization by C. jejuni [71]. Comparative analysis of the protein profile of wild type C. jejuni and racR mutants, revealed 11 proteins to belong to the RacR regulon. Three proteins were sequenced and were identified as RacR and two isoforms of a cytochrome c peroxidase homologue. A comparative study by Zhang et al. [74] revealed 15 to 20 proteins differentially expressed by at least two-fold when C. jejuni was grown at 37°C or at 42°C. All identified differentially expressed proteins are periplasmic proteins or major antigens of C. jejuni, or are involved in the metabolism or regulatory system. These proteins might play a role in adaptation to and pathogenicity in the different hosts of C. jejuni. DnaJ belongs to a family of heat shock proteins and plays a role in C. jejuni thermotolerance [75]. The dnaJ gene is located adjacent to racR and likely to be under the transcriptional control of RacR [71]. Mutation of dnaJ severely reduced colonization in chicks [55,75].
Adhesion
Campylobacter adhesion to epithelial cells of the chicken GI tract is believed to be an important step in successful colonization. Several studies contributed to the importance of intact flagella and adhesins, surface-exposed proteins, in chicken colonization. Mutation of the Campylobacter adhesion protein A (capA) gene, encoding an autotransporter lipoprotein, resulted in reduced capacity to adhere to human and chicken intestinal epithelial cells, reduced invasion capacity in human epithelial cells and abolished colonization in a chick model [76,77]. In another study, however, mutation of capA did not result in reduced colonization capacity [77]. Moreover, since this gene is absent in many C. jejuni poultry isolates, the genuine contribution of capA to successful chick colonization is unclear [77,78]. The Campylobacter adhesion to fibronectin (CadF) outer membrane protein was shown to bind to fibronectin, a glycoprotein of the extracellular matrix of the GI tract [79], and to be important for full binding capacity of C. jejuni to chicken epithelial cells [77]. Ziprin et al. [55,80] demonstrated that mutants in the genes cadF and pldA, the structural gene for phospholipase A, are impaired in their ability to colonize the cecum, indicating that these genes may play a prominent role in successful colonization. But in contrast to the highly prevalent cadF gene, many C. jejuni isolates lack the pldA gene [81]. Moreover, the biological function of pldA is not known. But due to its outer membrane localization it might be involved in maintaining the functional integrity of surface exposed adhesins in some strains [82]. Hiett et al. [49] demonstrated differential expression patterns for major outer membrane proteins in poultry between a poor and a robust colonizer strain. These differentially expressed genes are located in hypervariable regions of the C. jejuni genome and may contribute to the survival of C. jejuni in its different environments and hosts. Recently, a new adhesin, fibronectin-like protein A (FlpA), has been identified to be important for full binding capacity to chicken epithelial cells and successful colonization [77]. Konkel et al. [83] found that different C. jejuni strains compete for colonization in broilers and hypothesized that this is due to the sharing of common adhesins among these isolates and limited host epithelial cell binding places. This finding supports the hypothesis that adhesion is a key step in the colonization process of C. jejuni in chicks.
Invasion
Invasion might be an important colonization determinant of C. jejuni in chicks because mutations in ciaB as well as in the MCP genes tlp1, tlp4 and tlp10, important for in vitro invasion in mammalian and chicken cells respectively (see above), severely impair cecal colonization [31,55]. Studies with isolated primary intestinal cells from chickens indeed showed that C. jejuni was able to invade chicken cells [84,85], an unexpected feature since C. jejuni does not associate with chicken crypt epithelium in vivo [85]. Invasion capacity was largely strain-dependent, but overall no difference was observed between isolates from poultry or human origin. Microtubule-as well as microfilament-dependent invasion was reported, which is in accordance with results obtained from invasion experiments in human epithelial cell lines [86]. Many studies on the genes which are thought to play a role during invasion have been conducted on human epithelial cell lines, but thus far experiments on chicken primary epithelial cecal cells are lacking. While it is tempting to assume that invasion mechanisms in these cells are analogous to those in human cell lines, some differences do exist: C. jejuni can survive in vitro in human T84 epithelial cells by avoiding fusion with lysosomes [87], but intracellular survival seems not to be the case in the primary chicken enterocytes [84]. The lack of an immortalized chicken intestinal cell line and the complicated handling of primary chicken cecal cells clearly hamper investigation toward invasion (and other) mechanisms in chicken cecal cells. Nevertheless, the recent obtained in vitro and in vivo results described under this section suggest that invasion of C. jejuni in gut epithelial cells might be an important colonization determinant in vivo.
Iron transport and regulation
Regulation of the intracellular iron concentration is an important factor to secure colonization. Iron is essential for electron transfer processes and functions as a cofactor for several enzymes. It is also responsible for the generation of hydroxyl radicals. Moreover, iron availability modulates the transcription of genes belonging to several functional groups, thereby affecting the ability of C. jejuni to colonize the GI tract [88]. The soluble ferrous iron (Fe 2+ ) is readily transported across the outer membrane via porins and is subsequently transported across the cytoplasmic membrane by a specific transporter protein, FeoB. This transporter is important for iron acquisition and intracellular survival of C. jejuni, as well as for successful gut colonization [89]. Mutants in the ferric uptake regulator (fur) gene, the cfrA gene responsible for an outer membrane ferric enterobactin (FeEnt) receptor and the ceuE gene encoding a FeEnt periplasmic binding protein regulated by fur, are all compromized in their ability to colonize chickens, with complete absence of live bacteria for the latter two [88], as were mutants in another recently identified and characterized outer membrane FeEnt receptor CfrB, which is most prevalent in C. coli strains [90]. Inactivation of cfrB in a cfrA-negative C. jejuni strain fully abolished its ability to utilize FeEnt as a sole iron source for growth. Moreover, the reduced colonization phenotype of the isogenic cfrB mutant of C. jejuni could not be restored by the presence of a functional cfrA gene. In contrast, complementation of an isogenic cfrA mutant with the wild type cfrB gene in trans fully restored the ability of this C. jejuni mutant to utilize FeEnt. Thus, CfrB plays an important role during colonization of Campylobacter in chicks and cannot be compensated by other iron uptake mechanisms without affecting the colonization potential. Therefore, it is believed that CfrB is the dominant receptor both in FeEnt utilization by and during colonization of chickens with C. jejuni strains producing both a functional CfrA and CfrB. Transcription levels of chuA, a gene believed to code for an outer membrane receptor for hemin and hemoglobin, are increased over 40-fold in the chicken cecum, indicating that ChuA might be required for C. jejuni to colonize chicks [26]. Finally, mutation in Cj0178, a putative transferrin-bound iron utilization outer membrane receptor, resulted in reduced colonization potential [88]. Given this information, it can be concluded that several iron-uptake systems are essential for the survival of C. jejuni and for its successful colonization in the chicken host.
Besides iron, also zinc has been reported to be an important trace element necessary for C. jejuni growth inside the chicken host [91]. A C. jejuni mutant lacking ZnuA, the periplasmic component of a putative zinc ATP-binding cassette (ABC) transport system, had a growth defect in zinc-limiting media and was severely affected in its colonization potential in chickens.
Oxidative and nitrosative stress defence
C. jejuni is a microaerophilic microorganism and thus requires reduced oxygen levels for its growth. Nevertheless, it must resist oxidative stress it may encounter both in the environment and in its host, like the superoxide anion, hydrogen peroxide and biotoxic hydroxyl radicals. These stressors can result from incomplete reduction of oxygen by C. jejuni, or be induced by the chick immune system [92]. C. jejuni contains a wide range of enzymes involved in defence against oxidative stress. Several of these regulators have already been identified. However, the mechanism of gene regulation in C. jejuni is still poorly understood. Cytochrome c peroxidases (CcPs) are generally responsible for the conversion of hydrogen peroxide to water [92]. In a study by Ahmed et al. [93] 23 DNA sequences, including cytochrome oxidase III, were found to be present in a robust but absent from a poor colonizer C. jejuni strain. No direct link could be found that these factors correlate with the identified genes by Hendrixson and DiRita [32], but it can be assumed that also these strain-specific genes are factors important for efficient and sustained colonization. C. jejuni has two CcP loci, which surprisingly do not contribute to hydrogen peroxide resistance and thus do not protect against oxidative stress. Instead, it seems that in C. jejuni resistance to hydrogen peroxide is mainly mediated by the sole cytoplasmic catalase KatA, breaking it down to water and oxygen [92,94]. Nevertheless, mutation in one of the two CcP loci, docA, located immediately upstream of docB, resulted in a substantial dose-dependant decrease in colonization potential [32,94]. Moreover, Woodall et al. [26] found Cj0358, another putative CcP, to be upregulated 12-fold in vivo suggesting a role for this protein in hydrogen peroxide removal from the periplasm. By constructing an isogenic ΔperR mutant, deficient in the regulon of the peroxide-sensing regulator (PerR), and comparing its transcriptome profile with that of the wild type strain, Palyada et al. [95] identified over 100 genes to be part of the PerR regulon. Mutation of perR significantly reduced C. jejuni motility and attenuated colonization in chickens. This study also revealed a functional network between the key players of the oxidative stress defence system, including mainly the antioxidant proteins encoded by the superoxide dismutase (sodB), defending C. jejuni against the superoxide anion, the alkyl-hydroperoxide reductase (ahpC) and katA, their transcriptional regulators fur and perR and the regulatory pathways that connect them. This indicates that there is a link between oxidative stress (PerR regulated) and iron metabolism (Fur regulated) in C. jejuni and that oxidative stress defence mechanisms and their proper regulation are essential for successful and efficient colonization of the chick cecum. Indeed, the colonization potential in chicks was reduced by 50 000-fold in the C. jejuni ΔahpC mutant, while in ΔperRΔfur, ΔkatA and ΔsodB mutants colonization was completely abolished. This indicates that all key players of this functional network need to be intact for successful colonization of C. jejuni in chicks. Garenaux et al. [96] demonstrated that next to SodB, CadF and FlaA also a periplasmic protein (Cj1371) and a two-component regulator (Cj0355c) were overexpressed following exposure to paraquat, a strong oxidizing agent. These findings suggest that both proteins play a role in C. 
jejuni oxidative stress resistance and might be important for persistent chick colonization, but this has yet to be demonstrated.
The enzyme γ-glutamyl transpeptidase (GGT) is involved in maintaining cellular glutathione levels. Glutathione is an antioxidant molecule providing vital cellular protection against reactive oxygen species generated by aerobic respiration [97,98]. GGT was shown to be present in a robust but absent from a poor colonizer C. jejuni strain [93], suggesting that GGT activity is not needed for initial colonization but is indispensable for persistence of C. jejuni in the avian gut [99]. GGT catalyzes the conversion of glutathione and glutamine to glutamate, and the ability of certain C. jejuni strains to utilize glutamine or glutathione as a sole carbon source is absolutely dependent on the presence of GGT [100]. GGT is not present in all C. jejuni strains [99], which could explain the lower colonization capacity of strains lacking a functional GGT.
A ppk1 and ppk2 mutant, defective in respectively polyphosphate kinase 1 (PPK1) and 2 (PPK2), two key enzymes of the polyphosphate metabolization, were shown to have decreased invasion ability in human intestinal epithelial cells and a dose-dependent colonization defect in chicken ceca [101,102]. This indicates that the utilization and accumulation of polyphosphate helps C. jejuni to adapt to the cecal environment of the chick.
For survival and optimal colonization in the chick, C. jejuni must also be capable of eliciting a suitable response to cytotoxic nitric oxide (NO), a free radical produced by several cells of the host immune system that is bactericidal against C. jejuni [103]. C. jejuni is protected against NO-induced nitrosative stress by NO-detoxifying mechanisms, including a nitrite reductase and its single-domain Campylobacter globin (Cgb) [104,105]. Expression of Cgb in response to NO is regulated by neither Fur nor PerR, but is mediated by the transcription factor NssR, which regulates a nitrosative stress-response regulon that also includes a truncated haemoglobin (Ctb) probably involved in oxygen metabolism [98,106]. NO detoxification in C. jejuni is believed to proceed via a Cgb-catalyzed dioxygenase or denitrosylase reaction, converting NO and oxygen to nitrate [103].
Many C. jejuni redox proteins essential for electron transfer (see further) have N-terminal twin-arginine translocase (TAT) signal sequences ensuring proper transport across the cytoplasmic membrane [107]. The TAT secretion system has been shown to be important for C. jejuni to cope with stress and for chick colonization [108]. A C. jejuni tatC knockout mutant had defects in biofilm formation, motility and flagellation, and was defective in survival under osmotic shock and oxidative and nutrient stresses, impairing the efficient transmission of C. jejuni to a susceptible host. The ΔtatC mutant was unable to persistently colonize chickens which is likely the result of multiple, additive effects caused by the inability of the tatC mutant to translocate essential TAT substrates [108]. Also a cj0379c mutant, lacking a functional TAT translocated molybdo-enzyme of unknown function, was deficient in chick colonization [107]. The nitrosative stress phenotype of this mutant suggests a role for Cj0379 in the reduction of reactive nitrogen species in the periplasm.
It is clear that within its chicken host C. jejuni can encounter several stressors which it must resist for successful colonization. The evidence above indicates that C. jejuni developed some interplaying survival mechanisms that allow the organism to cope with chicken gut-induced oxidative and nitrosative stress.
Central intermediary and energy metabolism
In C. jejuni, all enzymes necessary for a complete oxidative tricarboxylic acid cycle are present. A key step in this cycle is the oxidation of succinate to fumarate. Until recently, it was believed that in C. jejuni this reaction is exerted by both a fumarate reductase (Frd) and a succinate dehydrogenase (Sdh) since both enzymes were found to contribute to the total fumarate reductase of C. jejuni in vitro and were significantly upregulated in the chick cecum [26,109]. A C. jejuni mutant missing the intact FrdA subunit of the FrdABC enzyme was completely deficient in its succinate dehydrogenase activity in vitro and had reduced colonization ability in chicks. In contrast, experiments with the sdhA mutant of C. jejuni showed that Sdh exhibits no succinate dehydrogenase activity and is not required for colonization, indicating that the sdh operon has been misannotated. Thus, Frd is the sole succinate dehydrogenase of C. jejuni and is therefore essential for full host colonization [109].
To meet all of its energy demands, C. jejuni utilizes oxidative phosphorylation [110]. In the chicken cecum, however, C. jejuni encounters an environment with reduced oxygen levels to which it must elicit a suitable response to efficiently and persistently colonize this part of the gut. Microarray analysis revealed several genes involved in this response to be upregulated when C. jejuni enters its host compared to in vitro culture [26], with three genes in particular: the anaerobic C 4 -dicarboxylate transporter genes dcuA and dcuB as well as the aspartase gene aspA. Probably these genes play an important role during chick colonization. A double mutant in hydrogenase (Hyd) and formate dehydrogenase (Fdh) and a mutant in 2-oxoglutarate:acceptor oxidoreductase (OoR), had markedly reduced colonization ability in chicks, indicating the importance of these electron donor enzymes [110]. The same authors also identified NADH:ubiquinone oxidoreductase (complex I) to play an important role because a mutant in this gene showed impaired colonization capacitiy. Mutants in the respiratory enzymes nitrate reductase, nitrite reductase and cbb 3 -type oxidase all colonize the chicken cecum to a lesser extent [111]. Moreover, these enzymes are upregulated in the chick cecum, indicating that C. jejuni might utilize nitrite and nitrate, as well as fumarate as a terminal electron acceptor instead of oxygen [26]. Especially nitrate is considered as a potential in vivo electron acceptor [104]. It is suggested that the ability of C. jejuni to use gluconate as an electron donor is important for full colonization potential in the avian host [112]. A cj0415 mutant, lacking gluconate dehydrogenase (GADH) activity, was impaired in establishing colonization in chicks but not in mice, which can probably explained by the higher expression level of cj0415 at 42°C compared to 37°C [112].
C. jejuni is an asaccharolytic bacterium and is therefore entirely dependent on a limited set of amino acids, including L-aspartate, L-glutamate, L-proline and L-serine, and Krebs cycle intermediates as a primary carbon and energy source [113]. Mutants of the L-serine dehydratase gene sdaA were unable to catabolize L-serine and their colonization potential in chicks was abolished [114]. Moreover, sdaA was upregulated more than two-fold in C. jejuni upon colonizing the chick cecum, indicating the importance of serine for in vivo survival [26]. aspA has also been demonstrated to be upregulated (by 4.8-fold) in the chick cecum [115]. An aspA mutant, which was unable to use any amino acid besides L-serine, was shown to have impaired ability to persist in the intestines of outbred chickens, which can possibly be explained by the reduced growth potential of this mutant in the avian gut, because aspartate enhances oxygen-limited growth of C. jejuni in an AspA-dependent way. Mutation in either of two genes probably involved in amino acid transport in C. jejuni, livJ and cj0903c, also resulted in a marked colonization defect in chicks [32].
Due to an observed reduction in adhesion to and invasion of cultured epithelial cells, the PEB1a protein has been regarded as a putative adhesin [77,116]. A peb1A mutant was not capable of colonizing chicks but did not, however, show a reduced binding capacity to chicken LMH cells [77]. This suggests that PEB1a serves a role other than, or in addition to, mediating adhesion during in vivo colonization. Indeed, the protein is mainly located in the periplasm and is believed to function as an ABC transporter of aspartate and glutamate, essential for the utilization of these amino acids as a carbon source during microaerobic growth [77,102,116]. However, the two-component signal peptide of PEB1a might be responsible for its localization both in the periplasm and on the cell surface, where it could act as an adhesin [116]. It is unclear whether PEB1a is present in the outer membrane, but it can definitely be found in the supernatant of C. jejuni cultures, indicating that the protein can be exported across the outer membrane. Nevertheless, no direct evidence is available that PEB1a functions as an adhesin in C. jejuni. Therefore, the inability of the peb1A mutant to colonize chicks [77] is probably attributable to the inability of this mutant to utilize glutathione, glutamine and the dipeptide γ-glutamylcysteine, although GGT activity is not affected. Thus, GGT allows the utilization of these nutrients by generating glutamate, which is then taken up by the PEB1a-dependent transporter and subsequently used as a carbon source [100].
To conclude, due to its asaccharolytic and microaerobic nature, C. jejuni is dependent on amino acids and electron acceptors other than oxygen as primary energy sources for optimal growth. Although the underlying mechanisms are not yet fully characterized, several of the key molecules and genes of the central intermediary and energy metabolism have been identified to date. It is clear that disturbance of the proper metabolism of these nutrients is accompanied by a severely hampered survival potential and colonization ability in chicks.
Vaccine Application Versus Immune Evasion
Although it is generally accepted that C. jejuni colonizes its avian host as a commensal, C. jejuni inefficiently adheres to and invades cells of the chicken gut epithelium [117]. This is initially followed by an inefficient innate immune response by the chick, eventually resulting in the production of specific antibodies [117,118]. Although such a response is not able to clear C. jejuni from the gut, reduced bacterial counts have been observed [118]. Moreover, if antibodies against C. jejuni are already present in the chick, the ability of C. jejuni to colonize is dramatically reduced, whether through transfer of maternal antibodies or through immunization of such birds [13,118]. Therefore, the recent identification of many of the factors needed by C. jejuni to colonize the chicken gut opens the way for subunit vaccine development to eradicate this pathogen from poultry flocks.
Potential vaccine candidates must be expressed during, and be important for, colonization in chicks. In addition, they should ideally be highly immunogenic, conserved and prevalent among C. jejuni isolates. Although some promising results were obtained focusing on C. jejuni outer membrane proteins (OmpH1 and Omp18) and FlaA as subunit vaccine candidates, no effective commercial vaccine against Campylobacter in chicks is available to date [6].
Bacterial OMPs are regarded as promising vaccine components because of their accessibility for the host immune system and the key roles they play in the host-bacterium cross-talk [119]. CadF and CfrA may therefore hold much promise for such applications. Not only are they highly conserved and prevalent in C. jejuni strains, but these surface-exposed proteins are also highly immunogenic in chicks [118,119]. Moreover, antibodies directed to CfrA were recently suggested to hinder the interaction of FeEnt with this receptor [91,119]. C. jejuni periplasmic PEB1 can possibly be transported across the outer membrane and is highly immunogenic in humans. Whether this immunogenicity can be extended to the chicken host is not known, but clearly PEB1 deserves further attention as a possible candidate for vaccination studies in chicks. Finally, Cj0178, the secreted CiaB and transmembrane Tlp-10 may be immunogenic but their precise role during chick colonization has yet to be determined.
C. jejuni surface-exposed polysaccharide structures may also be promising candidates for subunit vaccines. Indeed, several genes essential for successful colonization (kpsM, flaA, flgK, pglH, maf5 and cj1324) are involved in SACS biosynthesis. However, most SACS are highly variable and implicated in immune evasion in humans. Whether this can be extended to the chicken host is not clear. In any case, identification of conserved polysaccharide epitopes of SACS is critical for exploiting these structures for vaccine application.
Finally, a plethora of other C. jejuni factors are indispensable for chicken gut colonization. These include znuA, cj0379, docA, docB, perR, fur, ceuE, katA, dnaJ and sodB, as well as the (highly) conserved cj0415, tatC and ppk1 genes. Their gene products are, however, not known to be surface-expressed, but rather reside in the peri-or cytoplasm where they exert their vital roles. As a consequence, they do not come in direct contact with the chick immune system. Nevertheless, their identification significantly contributes to a better understanding in the C. jejuni biology during chick colonization. Therefore, it could be useful to examine whether (and how) also these targets could be exploited for C. jejuni control in poultry.
Concluding Remarks
Poultry is a natural host for zoonotic Campylobacter species and the broiler chicken gut is often colonized by C. jejuni in particular. As a result, chicken meat products are considered to be the main source of campylobacteriosis in humans. Despite many efforts, no effective strategy exists to clear this pathogen from chickens, in part due to the poor understanding of their dual interaction. Besides genes which are probably necessary during colonization of the GI tract in a wide range of animal species in general, it seems that C. jejuni needs a distinct set of gene products for optimal adaptation to the unique aspects of the chicken intestinal environment, resulting in high-level cecal colonization. And although information about genes important for C. jejuni colonization in chicks is increasing and some cooperative functional networks (e.g. iron metabolism/oxidative stress defence) crucial for colonization are starting to unravel, the mechanisms by which these factors interplay to form the basis behind the complex interaction of C. jejuni with its avian host remain largely unclear. Nevertheless, we can now conclude that several factors and processes, involved in all branches and stages of the C. jejuni cellular response, are crucial for the adaptation of this bacterium to the chicken gut and thus indispensable for the organism to colonize its avian host. Some of these critical colonization determinants may be exploited by researchers in the field to develop new, effective vaccines to eradicate this zoonotic pathogen from poultry flocks. Especially CadF, CfrA, Tlp-10, CiaB and PEB1 seem promising targets and further research, including identification of their functional and conserved epitopes, could result in the identification of factors capable of targeting a wide range of the circulating C. jejuni strains in poultry.
In conclusion, intensive research in the last few years resulted in the identification of several of the chicken colonization determinants of C. jejuni. Further research must give a better insight of how these factors interplay, forming the functional network that is responsible for the highly adapted nature of this organism to the avian gut. Unravelling these mechanisms might aid in the development of more efficient control measures for clearing this zoonotic pathogen from poultry lines, thereby reducing the number of human campylobacteriosis cases associated with consumption and handling of contaminated poultry meat products. | 2014-10-01T00:00:00.000Z | 2011-06-29T00:00:00.000 | {
"year": 2011,
"sha1": "225e2cfcb8a79831329bbde17b24761fb40affb2",
"oa_license": "CCBY",
"oa_url": "https://veterinaryresearch.biomedcentral.com/track/pdf/10.1186/1297-9716-42-82",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a62452cd9af3358028c1cac8a0c5f2e057172a63",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
231587307 | pes2o/s2orc | v3-fos-license | Implicit Solvation Using the Superposition Approximation (IS-SPA): Extension to Polar Solutes in Chloroform
Efficient, accurate, and adaptable implicit solvent models remain a significant challenge in the field of molecular simulation. A recent implicit solvent model, IS-SPA, based on approximating the mean solvent force using the superposition approximation, provides a platform to achieve these goals. IS-SPA was originally developed to handle non-polar solutes in the TIP3P water model but can be extended to accurately treat polar solutes in other polar solvents. In this manuscript, we demonstrate how to adapt IS-SPA to include the treatment of solvent orientation and long-ranged electrostatics in a solvent of chloroform. The orientation of chloroform is approximated as that of an ideal dipole aligned in a mean electrostatic field. The solvent-solute force is then considered as an averaged radially symmetric Lennard-Jones component and a multipole expansion of the electrostatic component through the octupole term. Parameters for the model include atom-based solvent density and mean electric field functions that are fit from explicit solvent simulations of independent atoms or molecules. Using these parameters, IS-SPA accounts for asymmetry of charge solvation and reproduces the explicit solvent potential of mean force of dimerization of two oppositely charged Lennard-Jones spheres with high fidelity. Additionally, the model more accurately captures the effect of explicit solvent on the monomer and dimer configurations of alanine dipeptide in chloroform than a generalized Born or constant density dielectric model. The current version of the algorithm is expected to outperform explicit solvent simulations for aggregation of small peptides at concentrations below 150 mM, well above the typical experimental concentrations for these materials.
I. INTRODUCTION
The solvation of polar molecules changes the conformations they sample and the higher order structures they form relative to those found in vacuum. Self-assembling peptides, for example, achieve different macroscopic structure and properties under different solvent environments. 1 Computational tools used to predict this behavior are limited due to (1) the length and time scales achievable using all-atom simulations and (2) the lack of solvent transferability in coarse-grained models. A computationally cheap, thermodynamically accurate, and solvent-adaptable implicit solvent model will have an immediate impact on studying these processes.
Molecular simulations play an important role in discerning the mechanism of aggregation and self-assembly of peptide-based materials. Experimental approaches often lack the multiscale resolution necessary to determine macroscopic structure as well as the underlying molecular driving forces. All-atom molecular dynamics (aaMD) simulations have been employed to help elucidate the latter component. [2][3][4][5] aaMD, however, cannot readily sample the large length- and time-scales necessary to capture macroscopic self-assembly. Additionally, these methods struggle with the low concentration of solutes typically used in self-assembly experiments. 6 Top-down coarse-grained (CG) simulations and mesoscale models have been used to investigate peptide aggregation but their tie to atomistic detail is often obscured. 7-10 Bottom-up CG models provide a rigorous tie between scales but often lack transferability. Recent efforts have attempted to alleviate the transferability issue but have not been widely adopted in molecular simulations. 11,12 Recently, a multiscale model for peptide assembly has been proposed that uses a combination of top-down CG models and aaMD simulations, but a rigorous connection between scales is not guaranteed. 13 Thus, it remains an open challenge in the field of molecular simulation to achieve a computationally efficient and thermodynamically consistent bridge between all-atom and CG models. 14 Implicit solvent models provide a necessary bridge between aaMD simulations and CG models and an appealing platform for self-assembly prediction. Previous simulation studies of peptide aggregation using implicit solvation are limited. 15,16 The limited application of these models is due to the inaccuracies of the computationally feasible models 15 and the significant computational expense of more accurate models. 16 Recent developments have attempted to bridge the gap between the computationally expensive and cheap models but, as of yet, the Goldilocks implicit solvent model does not exist. [17][18][19] Additionally, much of the development of implicit solvent models focuses on water as the solvent and the portability to other solvents is not considered.
Our recently developed implicit solvent model, implicit solvation using the superposition approximation (IS-SPA), provides a platform to achieve an accurate and computationally feasible implicit solvent model. 20 This model builds on previous approaches that utilize the Kirkwood superposition approximation (SPA) 21 to estimate the mean solvent force for a given solute configuration. 22,23 Although initially developed for non-polar solutes in water, the underlying theory contains nothing specific to water as a solvent. Indeed, the SPA is predicted to be more accurate for a less polar solvent, such as chloroform.
In this paper, we extend the development of IS-SPA to accurately and efficiently capture peptide monomer configurations and dimerization behavior in chloroform. In the subsequent theory section, we set forth the underlying theory of IS-SPA and how it is adapted for (1) nonspherically symmetric solvent forces and (2) polar solutes in a polar solvent. The accuracy and limitations of the approach are demonstrated on the solvation and dimerization of spherical charges in chloroform followed by the modeling of the monomeric conformations and dimerization behavior of alanine dipeptide (ADP) in chloroform.
II. THEORY
IS-SPA relies on estimating the average solvent force on atom i given the positions R^N of the N solute atoms, which, when the solvent is spherically symmetric, is given exactly by

⟨f_i⟩(R^N) = ρ ∫ g(r; R^N) f_i,solv(r − R_i) dr, (1)

where ρ is the solvent density, g is the solvent distribution function, and f_i,solv is the solvent force. The first generation of IS-SPA introduced two main approximations to estimate Equation 1 on the fly: (1) the many-body solvent distribution function, g(r; R^N), is approximated using the SPA with atomic radial distribution functions fit from the results of a single molecular configuration and (2) Monte Carlo (MC) integration is used to approximate the integral. 20 This second point yields a simulation with the correct force on average, with the thermostat being responsible for removing any excess heat produced by the inexact calculation. This approach to an implicit solvent model was shown to capture the physics of solvent interactions for non-polar association in water better than other implicit solvent models such as SASA and RISM. 20 Subsequent sections detail the extensions of IS-SPA to account for the non-spherically symmetric solvent potentials, the long-range electrostatic forces, and how the associated parameters are calculated.
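To make these two approximations concrete, the sketch below estimates the mean solvent force on each solute atom by uniform-sample Monte Carlo integration, with the many-body distribution function replaced by the superposition-approximation product of atomic radial distribution functions. The uniform bounding-box sampling, the callable g_funcs/f_funcs interfaces, and the cutoff value are illustrative choices for this sketch only; the production algorithm tabulates these functions from explicit-solvent simulations and samples far more efficiently.

```python
import numpy as np

def mean_solvent_force_mc(R, g_funcs, f_funcs, rho, r_cut=12.0, n_mc=50000, rng=None):
    """Monte Carlo estimate of <f_i> = rho * integral[ g(r; R^N) f_i(r) dr ].

    R       : (N, 3) solute atom positions (Angstrom)
    g_funcs : list of N callables, g_i(distance) -> atomic radial distribution value
    f_funcs : list of N callables, f_i(distance) -> radial solvent force magnitude
              on atom i (positive values push the atom away from the solvent point)
    rho     : bulk solvent number density (Angstrom^-3)
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(R)
    forces = np.zeros((N, 3))
    # Uniform sampling in a box that bounds the union of per-atom spheres.
    lo, hi = R.min(axis=0) - r_cut, R.max(axis=0) + r_cut
    volume = float(np.prod(hi - lo))
    for p in rng.uniform(lo, hi, size=(n_mc, 3)):
        d = R - p                                # vectors from the MC point to each atom
        dist = np.linalg.norm(d, axis=1)
        if dist.min() > r_cut:                   # outside the interaction volume
            continue
        # Superposition approximation: g(r; R^N) ~ product of atomic g_i values.
        g = np.prod([g_funcs[j](dist[j]) for j in range(N)])
        for i in range(N):
            if 1e-6 < dist[i] < r_cut:
                forces[i] += g * f_funcs[i](dist[i]) * d[i] / dist[i]
    # <f_i> = rho * (V / n_mc) * sum over sampled points.
    return rho * volume / n_mc * forces
```

Because the estimator is unbiased, the force is correct only on average, which is exactly why the thermostat is relied upon to absorb the noise, as noted above.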
A. Inclusion of Non-spherically Symmetric Solvent Potentials
The previous version of IS-SPA only considered the spherically symmetric Lennard-Jones solvent forces from the TIP3P model of water. In the case of polar solutes in a solvent of chloroform, both the Coulomb and Lennard-Jones solvent-solute forces also depend on the orientation of the solvent relative to the solute atoms. The mean solvent force found in Equation 1 then becomes

⟨f_i⟩(R^N) = ρ ∫ ∫ g(r, Ω; R^N) f_i,solv(r − R_i, Ω) dΩ dr, (2)

where Ω represents the internal coordinates of the solvent molecule. We use a series of approximations to analytically integrate over Ω such that only the position r of the solvent needs to be sampled.
The first approximation we introduce is to presume that the only important internal and orientational degree of freedom of a solvent molecule is the alignment of its dipole moment and that the molecule is axially symmetric. This reduces the twelve degrees of freedom in Ω for a chloroform molecule to two, signified by p̂. Next, the SPA is modified to account for the orientation of the dipole moment through P_i(p̂; |r − R_i|), the probability distribution of the dipole moment at a given distance from atom i.
Next, we approximate the distribution of the dipole moment to be that of a thermally ideal dipole in a constant electric field. Each atom is presumed to produce a radially symmetric mean field along its separation vector, E_i(r) r̂. Thus,

P_i(p̂; r) ∝ exp[ p E_i(r) (p̂ · r̂) / T ],

where p is the static dipole moment of a chloroform molecule and T is the temperature in units of energy. In this sense, the orientation is dependent on the superposition of electric fields generated by each atom. The only parameters that are needed for the model to predict the solvent distribution function are the atomic radial distribution functions g_i(r) and the magnitude of the effective electric field from each atom, E_i(r). The latter is measured in simulation through the average polarization and related to the effective electric field using the Langevin function.
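The last step can be made concrete: a measured average dipole alignment is converted to an effective field magnitude by inverting the Langevin function L(x) = coth(x) − 1/x. The snippet below is a sketch with hypothetical names; the bisection-based inversion is our own choice and is not taken from the paper's code.

```python
import numpy as np
from scipy.optimize import brentq

def langevin(x):
    """L(x) = coth(x) - 1/x, the mean alignment of an ideal dipole in a field."""
    if abs(x) < 1e-6:          # series limit avoids 0/0 at small arguments
        return x / 3.0
    return 1.0 / np.tanh(x) - 1.0 / x

def effective_field(mean_alignment, p_dipole, kT):
    """Invert <r_hat . p_hat> = L(p E / kT) for the effective field magnitude E."""
    target = min(abs(float(mean_alignment)), 1.0 - 1e-9)
    if target < 1e-12:
        return 0.0
    # L is monotonic on (0, inf), so bracket the root and solve
    a = brentq(lambda x: langevin(x) - target, 1e-12, 1e12)
    return np.sign(mean_alignment) * a * kT / p_dipole
```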
The solute-solvent force functions depend on all of the solvent degrees of freedom and so must be simplified in a manner analogous to the distribution function. We use the expansion of the axial multipole moments for the electrostatic force. A description of the equations associated with the solvent force is found in the Supplementary Information. For this, the multipole moments are calculated for the minimum energy structure of the solvent molecule in vacuum, assuming that the distribution of the rotation around the dipole moment is isotropic. A free parameter, d, is introduced relating to the position of the molecular center relative to the carbon atom along the dipole of the molecule, with positive values being towards the chlorine atoms (Figure 1). We choose a value of -0.21 Å from the carbon atom in order to minimize the magnitude of the moments and to have the hexadecapole moment be zero, as shown in Figure SI2. We then calculate the solvent electrostatic force to fourth order.
The simplification of the Lennard-Jones potential is not as immediately obvious. It is possible to expand the potential analogously to the multipole expansion. The problem with this approach is that the functions do not converge as quickly due to the rapidly varying potential energy as a function of orientation. Instead, we consider the force to be radially symmetric, effectively truncating the expansion at zeroth order. We use the same definition of the solvent molecular center as with the electrostatic force. Instead of developing a function to describe the Lennard-Jones force, the average force that the solvent exerts on each atom at a given distance, projected along the separation vector, is measured and histogrammed for use in the simulations.
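The histogramming step is a simple binned average. The following sketch, with names and bin width of our choosing, shows one way to tabulate the mean radial Lennard-Jones force from explicit-solvent samples.

```python
import numpy as np

def mean_radial_force_table(distances, radial_forces, r_max, dr=0.1):
    """Bin the solvent force projected on the separation vector by distance.

    distances     : 1D array of solute-atom--solvent-center distances from explicit MD
    radial_forces : matching projections of the solvent force on the separation vector
    Returns bin centers and the mean force per bin (zero where no samples fell).
    """
    edges = np.arange(0.0, r_max + dr, dr)
    idx = np.digitize(distances, edges) - 1
    n_bins = len(edges) - 1
    sums = np.bincount(idx, weights=radial_forces, minlength=n_bins)[:n_bins]
    counts = np.bincount(idx, minlength=n_bins)[:n_bins]
    mean_f = np.divide(sums, counts, out=np.zeros(n_bins), where=counts > 0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, mean_f
```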
B. Treatment of Long Ranged Electrostatics
The inclusion of electrostatic forces requires handling their long-range nature. We calculate an analytic result for the long-ranged interactions instead of proposing any approximations to simplify the calculation of the electrostatic solvent force. We limit the discussion to non-periodic systems to avoid needing to use methods such as particle mesh Ewald. This limitation still allows for calculating the monomer configurations and the dimer potentials of mean force (PMFs) discussed herein.
The approach to calculating the integral of the solvent force over all space is to divide the space into two parts. One part is the union of spherical volumes within some cut off distance from the solute atoms, named the interaction volume. The MC sampling is only performed within the interaction volume and the force from any MC point is calculated for all solute atoms. Outside of the interaction volume, the Lennard-Jones force is set to zero, the solvent density is presumed to be that of the bulk fluid, and the polarization density is taken to be that found in a constant density dielectric (CDD). A cut off distance of 12Å is chosen to define the interaction volume based on when the variations in density and polarization tend to these bulk values.
An analytic result for the force due to a CDD continuum is not immediately accessible for the arbitrary shape of the interaction volume created by a molecule. What is calculated readily is the solvent force on atom i arising from the solvent polarization produced by a second atom j in the region outside the interaction volume of the two atoms. This pairwise force is added to the calculation of the direct solute-solute forces. Since the law of superposition is an exact relation in the case of the CDD, adding all these pairwise interactions correctly calculates the force outside of the interaction volume, at the expense of also calculating the forces inside the interaction volume of the rest of the molecule but outside that of the given ij pair. This overcounted force is sampled along with the MC sampling within the molecular interaction volume and subtracted from the final forces. Note that this force is non-zero even within the excluded volume of the atoms, such that the MC integration needs to sample volumes that have zero density. This exact approach to calculating the long-range force introduces many inefficiencies to the model, including the need to sample within the excluded volume of the solute where the solvent forces are identically zero, and the fact that the force from each MC sampled point needs to be calculated for each solute atom regardless of separation distance. These inefficiencies are a target for future approximations to simplify the calculations while introducing minimal error.
C. Fit Parameters for Molecular Systems
There are two parameter sets discussed in the above theory that need to be measured for the system: the atomic radial distribution functions and the effective electric fields. In the previous work on IS-SPA, it was shown that the SPA does not accurately describe the solvent distribution function of a molecule when built from the radial distribution functions of each isolated atom. Instead, it is better to fit the parameters to the solvent distribution function observed for the molecule. The input for this fit is obtained by running a simulation with the solute molecule pinned in space and sampling the solvent distribution. The mean density and polarization of the solvent are measured on a cubic grid with cells much smaller than the size of a solvent molecule, cubes of linear length 0.25 Å in the case of chloroform. Instead of restricting the fit to some functional form with a minimal number of free parameters, the functional form of the fit is a binned function of distance with bins of 0.1 Å.
Since the underlying statistics for measuring the distribution function from a molecular dynamics simulation are counting statistics, the goodness-of-fit function that is minimized when finding parameters for the atomic radial distribution functions is related to the Poisson distribution. Finding the most likely set of parameters to describe the explicit solvent simulation results leads to minimizing the function

Υ_g = Σ_n [ ∏_i g_i(|r_n − R_i|) − g_n^obs Σ_i ln g_i(|r_n − R_i|) ]   (6)

with respect to each fit parameter, where the outer sum is over all cells n in the measured distribution function, the inner sum and product are over all solute atoms i, and g_n^obs is the observed value of the distribution function in the cell. The Newton-Raphson method is applied to iteratively converge to the minimum in Υ_g.
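The same Poisson structure can be written compactly in code. The sketch below evaluates a Poisson negative log-likelihood for binned atomic g(r) parameters (equivalent to Υ_g up to constants that do not depend on the fit parameters); here a general-purpose quasi-Newton minimizer from SciPy stands in for the Newton-Raphson iteration used in the paper, and all names, the array layout, and the log-parameterization of the bins are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_nll(log_g_bins, cell_counts, cell_bin_index, n_frames, rho, v_cell):
    """Poisson negative log-likelihood for binned atomic g(r) parameters.

    log_g_bins     : flattened log of the g(r) bin values for all atom types (fit parameters)
    cell_counts    : observed solvent counts per grid cell, summed over frames
    cell_bin_index : (n_cells, n_atoms) flat indices into log_g_bins giving, for each cell,
                     the distance bin of each solute atom
    """
    # SPA: log g(cell) = sum over solute atoms of log g_i at that cell's distance bin
    log_g_cell = log_g_bins[cell_bin_index].sum(axis=1)
    lam = n_frames * rho * v_cell * np.exp(log_g_cell)     # expected counts per cell
    return np.sum(lam - cell_counts * np.log(np.maximum(lam, 1e-300)))

# Hypothetical usage (names are placeholders):
# res = minimize(poisson_nll, x0=np.zeros(n_params),
#                args=(counts, bin_index, n_frames, rho, v_cell), method="L-BFGS-B")
# g_bins = np.exp(res.x.reshape(n_atom_types, n_distance_bins))
```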
A different goodness of fit is needed for finding the effective electric field generated by each atom in the solute. The nature of the statistics associated with the distribution of polarizations of a solvent molecule is not as simple, but it can be approximated as that of a dipole in a constant external field. Finding the most likely set of parameters to describe the all-atom MD (aaMD) results leads to minimizing a function Υ_EF with respect to each fit parameter; Υ_EF depends on N_n, the number of solvent molecules observed in the cell, p̄_n, the average unit vector of the dipole moment in the cell, and the model effective electric field, E_n^mod = Σ_i E_i(|r_n − R_i|) r̂_{i,n}. In this sense, the polarization is the observable in the simulation and the effective electric field per atom is the fit parameter. The total effective electric field at a given point in space is the sum over all atomic effective fields, which is converted to a polarization using the Langevin function. Again, Newton's method is used to iteratively converge to the minimum of Υ_EF.
Using a goodness-of-fit function related to the underlying statistics of the measurement gives the most reliable set of parameters. In fact, artifacts in the fit parameters can be found if a typical Gaussian χ² method is used, especially for small distances in the radial distribution functions. It should also be noted that the parameters associated with atoms with identical underlying force field parameters are fit to a single function. The results of these fitting procedures are found in Figure SI3 for the ADP system.
III. METHODS
Explicit solvent molecular dynamics simulations, along with the Generalized Born (GB) and CDD simulations, are performed with the AMBER 18 software package. 24 The GB model used is the GBneck model with the modified Bondi radii for the atoms. 25 The implicit solvent models use an external dielectric constant of 2.3473, as measured from the bulk chloroform model. 26,27 The custom-made single ion solutes are given Lennard-Jones parameters of ε = 0.152 kcal/mol and r_min = 7 Å. The parameters for alanine dipeptide are from the ff14SB force field. 28 Simulations were run in the NPT ensemble at 298 K maintained using a Langevin thermostat with a collision frequency of 2 ps −1 and a pressure of 1 bar maintained using a Berendsen barostat. A direct interaction cutoff of 12 Å is used, with particle mesh Ewald accounting for long-range electrostatic forces. SHAKE is employed to allow the use of a 2 fs integration time step.
The IS-SPA simulations are performed using our own Fortran code. These simulations maintain the NVT ensemble with an Andersen thermostat with a collision frequency of 16.67 ps −1 . The mass of the hydrogen atoms is set to 12 u to reduce the frequency of those bonds and allow the use of a 2 fs integration time step. No cutoff is used in the IS-SPA simulations and 100 MC points per solute atom are used.

IV. RESULTS

A. Charged Lennard-Jones Spheres in Chloroform

The solute-chloroform radial distribution functions for the five different Lennard-Jones spheres, as measured from explicit solvent simulation, are shown in Figure 2 (a) and (d). The q = 0 e, q = 0.5 e, and q = 1.0 e curves depicted in Figure 2(a) all demonstrate a first solvation shell peak at r = 6 Å with g < 2, followed by an interstitial region, and a slight second solvation shell before achieving bulk density. The only major discrepancy between these three species is the shoulder in the q = 0 e curve (purple curve) at r = 4.5 Å. These results indicate that the chloroform molecule packs similarly around neutral and positively charged species. The q = 0 e, q = −0.5 e, and q = −1.0 e curves depicted in Figure 2(d), however, demonstrate significant differences in solvation structure. The q = −0.5 e species (green curve) has its first solvation shell peak at r = 5 Å followed by a shoulder at r = 6 Å before the chloroform starts to behave similarly to that found around the q = 0 e species. The q = −1.0 e (yellow curve) g(r) demonstrates an even more dramatic discrepancy from the neutral case, with g > 4 around 5 Å. The discrepancy in radial packing around positive and negative ions can be attributed to the asymmetric charge distribution in the chloroform model. The asymmetry of charge solvation is something that is also observed in water and any asymmetric polar solvent. 29 This asymmetry is captured in IS-SPA since the g(r)'s plotted in Figure 2 (a) and (d) are used directly as parameters in the model.
The polarization of chloroform around the solute is an additional parameter that is measured from explicit solvent simulation and utilized in the IS-SPA equations. From the simulation of each solute, the average orientation of the dipole moment of chloroform, p̂, relative to the solute-solvent separation vector, r̂, is measured as a function of solute-solvent separation distance. These functions are plotted for the five Lennard-Jones spheres in Figure 2 (b) and (e). Given the orientation of the dipole moment in chloroform depicted in Figure 1, a value of r̂·p̂ = 1 indicates that the hydrogen of the chloroform is pointing towards the solute and a value of r̂·p̂ = −1 indicates that the hydrogen of the chloroform is pointing away from the solute. Chloroform shows little preferential orientation around the neutral solute (purple curves in Figure 2 (b) and (e)) until it comes within 8 Å of the solute. At distances 6 < r < 8 Å, the hydrogen orients away from the solute until r < 6 Å, at which point the hydrogen orients toward the solute. This feature is due to the definition of the molecular center being closer to the hydrogen atom; the chloroform must orient in this manner to be able to pack close to the solute. This orientation is also preferred around positively charged solutes at short distances despite the Coulombic repulsion between the positively charged hydrogen and solute. An orientation with the hydrogen pointing away from the solute is observed for the positively charged solutes at distances r > 5 Å, as expected. The decay of the solvent orientation behavior agrees with the CDD result (dashed lines, dielectric of 2.3473) at distances r > 9 Å. The orientation of the dipole plateaus to a magnitude of 1 at around 5 Å, which is not predicted by the CDD result but is a consequence of having a physical dipole. Regardless of the exact origin of the oscillations in the polarization, these aspects of the individual solutes' solvation are fed directly into the IS-SPA model. Using these parameters, IS-SPA reproduces the dimer PMF for charged and uncharged Lennard-Jones spheres with high fidelity. For this analysis, we look at the PMF of dimerization of the neutral pair and the |q| = 1 e pair, plotted in Figure 3(a). The green curves are for the neutral pair, with the explicit curve dashed and the IS-SPA curve solid. The neutral species has a minimum in the free energy of −1.63 kcal/mol and −0.79 kcal/mol from explicit and IS-SPA, respectively, at a separation distance of 6 Å. Both IS-SPA and explicit solvent demonstrate a slight desolvation barrier at 9.5 Å with minimal correlations beyond this distance. The charged Lennard-Jones spheres demonstrate a large propensity to dimerize, with the explicit (dashed purple) and IS-SPA (solid blue and red) binding free energies being close to 30 kcal/mol. The CDD result under-predicts the stabilization of the contact pair by approximately 5 kcal/mol. The IS-SPA free energy curve for the oppositely charged solutes shows high fidelity with explicit solvent over the entire domain plotted, lending strong support that IS-SPA is capturing the correct physics of ion solvation in chloroform.
The Lennard-Jones and Coulombic components of the free energy are computed separately in the IS-SPA framework and can be assessed from explicit solvent using a free energy decomposition scheme. 4 The discrepancies between the IS-SPA and explicit solvent Lennard-Jones components of the PMF, shown in Figure 3(b), are outweighed by the dominant Coulombic contribution depicted in Figure 3(c). Of particular importance is how the positive and negative ions have slightly different Coulombic contributions, which is captured by IS-SPA. Additionally, the CDD result agrees at large distance, as expected, but deviates from the explicit and IS-SPA results for both ions at distances shorter than 5 Å.
The Lennard-Jones and Coulombic components of the PMF need not be identical for the positive and negative ion, as they are not state functions, but the sum of the two must be equivalent. To demonstrate this, we integrate the mean forces from IS-SPA on the positive and negative ions separately and plot them as the solid red and blue curves in Figure 3(a). Despite the different parameters for the cation and anion, IS-SPA produces nearly identical PMFs for both species during their dimerization. That IS-SPA accurately reproduces the explicit PMF for both the positive and negative ions despite having no closure relation to restrict such an equivalence is an important test of the method.
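Integrating a tabulated mean force into a PMF, as done here for the ions (and later for the IS-SPA/Explicit curve), amounts to a one-dimensional cumulative quadrature. A minimal sketch, assuming the mean force is the component along the separation coordinate and referencing the PMF to zero at the largest sampled separation:

```python
import numpy as np

def pmf_from_mean_force(r, mean_force):
    """PMF from the mean force along the separation coordinate, via the trapezoid rule.

    W(r) = -integral of <F(r')> dr', shifted so that W -> 0 at the largest separation.
    r must be increasing and mean_force must be tabulated at the same points.
    """
    r = np.asarray(r, dtype=float)
    f = np.asarray(mean_force, dtype=float)
    # cumulative integral of F dr from r[0] up to each point
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(r))))
    w = -cum
    return w - w[-1]
```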
B. Alanine Dipeptide in Chloroform
Alanine dipeptide (ADP) is chosen as a model molecular system since it is a well studied model system for solvent models 16,30 as well as enhanced sampling methods.
ADP has been mostly studied in aqueous environment but there have been studies of how solvent, including chloroform, affects the observed configurations. [31][32][33] As in these previous studies, we are concerned with the monomer configurations observed in a solvent of chloroform as quantified by the φ and ψ backbone dihedrals.
The solvent density around ADP is used to fit atom-based radial density functions. For molecular systems, a particular solute configuration is chosen and an explicit solvent simulation is run to measure the solvent distribution around the solute. The chosen solute configuration and a 2D slice of the 3D measured explicit solvent density are shown in Figure 4(a). In this plot, white pixels indicate bulk density, red indicates below bulk density, and blue indicates above bulk density. The excluded volume of the solute molecule is evident as the contiguous red area in the center of the figure. The blue to black regions just outside the excluded volume are the first solvation shell of the molecule. Two subsequent rings of low and high density are evident as the solvent gets radially farther away from the solute, followed by the noisy region of bulk density. The complete 3D data set is used to fit radial atomic densities (results provided in Figure SI1) using the Poisson regression in Equation 6. The resulting SPA fit densities are shown in Figure 4(b). Although there are discrepancies between the SPA and explicit densities, in particular in the immediate vicinity of the solute molecule, the free energy differences are within the thermal energy of the system. Thus, we conclude that SPA and the fitting procedure performed here are sufficient for reproducing the solvent density around a given configuration of the solute.
Similarly, the solvent polarization around ADP is used to fit atom-based radial electric mean fields. A 2D slice of the 3D chloroform polarization around ADP measured from explicit solvent simulation is presented as a vector field in Figure 4(d). These are fit to atomic radial mean fields using the regression in Equation 7, resulting in the polarization depicted in Figure 4(e). A difference map is present in Figure 4(f) with little quantitative difference observed. Thus, we conclude that the SPA and fitting procedure performed here are sufficient to capture the polarization of chloroform around this single orientation of ADP.
The atom-based parameters for density and polarization used to generate Figure 4(b) and (d), in combination with the force functions, can be used to compute the mean solvent force as a function of ADP internal coordinates. Unlike the results for the Lennard-Jones spheres, the molecular systems necessitate a sampling protocol, for which we developed our own IS-SPA simulation code. The results from the IS-SPA simulation are compared to vacuum and explicit chloroform simulations by looking at the relative free energy from each simulation as a function of two backbone dihedral angles, φ and ψ. Also known as a Ramachandran plot, the free energy plots are depicted in Figure 5 for (a) vacuum, (b) explicit solvent, and (c) IS-SPA. A low relative free energy is depicted in black to purple and indicates a high propensity for the simulation to populate that state; a low propensity is indicated in yellow, with regions in white never being sampled. All three systems (vacuum, explicit, and IS-SPA) have free energy wells in four regions: C_5 (2π/3 < φ < −2π/3, 2π/3 < ψ < −2π/3, wrapping through ±π), P_II (−2π/3 < φ < 0, 2π/3 < ψ < −2π/3), C_7^eq (−2π/3 < φ < 0, 0 < ψ < 2π/3), and C_7^axial (0 < φ < 2π/3, −2π/3 < ψ < 0).
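A Ramachandran free energy surface of this kind can be built directly from the sampled dihedrals by histogramming and Boltzmann inversion. The sketch below is illustrative only; the bin count, the default temperature, and the handling of unvisited bins are our assumptions, not those of the paper.

```python
import numpy as np

def ramachandran_free_energy(phi, psi, kT=0.593, n_bins=72):
    """Relative free energy -kT ln P(phi, psi) from sampled dihedrals (radians).

    kT defaults to ~0.593 kcal/mol (298 K). Unvisited bins come out as +inf.
    """
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    hist, _, _ = np.histogram2d(phi, psi, bins=[edges, edges])
    p = hist / hist.sum()
    with np.errstate(divide="ignore"):
        fe = -kT * np.log(p)
    return fe - np.min(fe[np.isfinite(fe)])   # zero at the global minimum
```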
There are discrepancies, however, in the relative populations of each of these states, as quantified in Table I. The dominant state in vacuum is found to be C_7^eq with a probability of 64.4%, which is depleted in both the explicit (47.9%) and IS-SPA (36.7%) models of chloroform. The dominant increase due to solvation is seen in the C_5 populations of both explicit (32.20%) and IS-SPA (41.18%) as compared to vacuum (22.38%). Based on the probability of these four states, IS-SPA captures the impact of chloroform solvation on the ADP monomer.
In addition to the vacuum, explicit, and IS-SPA simulations of the ADP monomer, we also ran simulations of ADP with a CDD and GB solvent with the dielectric constant of the solvent set to 2.3473 to match the explicit chloroform. The percent populations of the four states discussed above are provided in Table I for the CDD and GB models in addition to vacuum, explicit, and IS-SPA. The CDD model dramatically overstabilizes the C_5 configuration (48.5%) and understabilizes the C_7^eq configuration as compared to explicit solvent. The GB model also understabilizes the C_5 configuration but overstabilizes the P_II configuration as compared to explicit. To more concisely compare these distributions, we consider the relative entropy of the Ramachandran distribution of each model (IS-SPA, CDD, and GB) as compared to explicit solvent. We include vacuum compared to explicit as a control in the values tabulated in Table II. Of all the models, IS-SPA has the smallest relative entropy value of 0.072(5), indicating that it has the Ramachandran distribution most similar to explicit solvent. Vacuum has a relative entropy to explicit solvent of 0.114(5), demonstrating that IS-SPA is more similar to explicit than vacuum is to explicit. Surprisingly, the CDD and GB results are actually worse than vacuum.
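For reference, a relative entropy between two binned Ramachandran distributions can be computed as sketched below. The direction of the divergence (reference versus model) and the regularization of empty model bins are conventions we assume for illustration; the excerpt above does not spell them out.

```python
import numpy as np

def relative_entropy(p_model, p_ref, eps=1e-12):
    """Kullback-Leibler divergence D(p_ref || p_model) between two binned
    distributions on the same grid, in nats.

    Bins the reference never visits contribute zero; eps guards model bins
    with zero probability.
    """
    p = np.asarray(p_ref, dtype=float).ravel()
    q = np.asarray(p_model, dtype=float).ravel()
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))
```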
C. Dimerization of Alanine Dipeptide in Chloroform
The IS-SPA parameters developed for the monomer of ADP can also be used to simulate a dimer of ADP. To quantitatively compare with explicit solvent, we investigate the dimerization behavior along the center-of-mass separation distance. We utilize umbrella sampling simulations of five different models: vacuum, explicit, IS-SPA, GB, and CDD. Additionally, we compute the PMF by integrating the mean force of IS-SPA using the configurations sampled in the explicit solvent simulations and refer to these data as 'IS-SPA/Explicit'. The resulting PMFs as a function of center-of-mass separation between the monomers are shown in Figure 6.
IS-SPA correctly captures the effect of chloroform solvation on the dimerization of ADP. This is evident in two aspects of the PMFs shown in Figure 6. The first aspect is the dimerization free energy of explicit and IS-SPA as compared to vacuum. ADP dimerization has a minimum in the PMF of 5.0 kcal/mol in explicit solvent as compared to 9.5 kcal/mol in vacuum. This demonstrates that solvation destabilizes dimerization by 4.5 kcal/mol. IS-SPA (green curve) destabilizes dimerization of ADP relative to the gas phase by 2.7 kcal/mol. The second aspect of the PMFs that indicates IS-SPA is correctly capturing this effect of chloroform is the position at which the PMF goes to zero. Chloroform screens the interaction of one ADP with the other such that the attractive component of the PMF is not present until R < 8 Å. In contrast, vacuum and the GB and CDD models demonstrate finite attraction between the molecules at all distances beyond contact.
The quantitative discrepancy between explicit and IS-SPA stems from a disagreement in the populated solute configurations between 5 and 8 Å. This is demonstrated in two ways. The first piece of evidence is the impressive agreement between the explicit solvent and IS-SPA/Explicit curves in Figure 6. This curve is computed by integrating the mean IS-SPA force using the explicit solvent sampled solute configurations. Thus, the IS-SPA curve in Figure 6 differs from explicit due to sampling different solute configurations. Since the solvent forces projected along the separation of the centers of mass of the two molecules agree, it must be a small difference in the force acting on some other degree of freedom in the system. The second piece of evidence is the deviation in mean solute-solute forces along the center-of-mass separation. The mean solute-solute force is decomposed into Coulomb and Lennard-Jones components and plotted in Figure 7(a) and (b), respectively. Focusing on the solute Coulomb forces, we see that explicit (purple) and IS-SPA have finite attractive forces for R < 10 Å but that these values are discrepant for 5 < R < 8 Å. Outside of this domain, IS-SPA and explicit have good agreement. This suggests that the IS-SPA and explicit solute configurations are almost identical outside of this domain. The discrepancy in forces in this domain propagates into the PMF at the minimum upon integrating the mean force. A similar argument can be made by investigating the solute Lennard-Jones force in Figure 7(b). We note that the explicit, IS-SPA/Explicit, and IS-SPA solvent-solute forces plotted in Figure 7(c) and (d) demonstrate less quantitative difference for R > 5 Å than the solute-solute components. The takeaway is that IS-SPA simulations populate a different set of solute configurations than explicit in the domain 5 < R < 8 Å while populating the same states in the rest of the domain sampled. IS-SPA better reproduces the explicit dimerization PMF than the other solvation models tested here, as seen in Figure 6. Unlike explicit and IS-SPA, the GB and CDD models demonstrate finite attraction out to R > 10 Å. Additionally, CDD and GB understabilize the ADP contact dimer and oversimplify the curvature near the minimum. These discrepancies can be quantified in a single parameter, χ², defined as the integral of the squared difference between the PMFs,

χ² = ∫ [∆∆A(R)]² dR,

where ∆∆A(R) = ∆A(R)_model − ∆A(R)_explicit. These values are computed for each model depicted in Figure 6 as compared to explicit solvent and are tabulated in Table III. The least discrepant model is IS-SPA/Explicit with a value of χ² = 6.0 Å. With the IS-SPA simulation and the sampling of different states between 5 < R < 8 Å we get an IS-SPA χ² = 14.0 Å, which is still smaller than the CDD (χ² = 16.6 Å) and GB (χ² = 19.5 Å) models.
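The χ² metric itself is a one-line quadrature over the PMF difference; a minimal sketch (function and argument names are ours):

```python
import numpy as np

def pmf_chi2(r, pmf_model, pmf_explicit):
    """chi^2 = integral of (PMF_model - PMF_explicit)^2 dR, via the trapezoid rule."""
    dd = np.asarray(pmf_model, dtype=float) - np.asarray(pmf_explicit, dtype=float)
    return float(np.trapz(dd**2, np.asarray(r, dtype=float)))
```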
D. IS-SPA Algorithm Scaling
An IS-SPA simulation follows typical all-atom molecular dynamics simulation protocols except for the inclusion of an additional IS-SPA routine to estimate the mean solvent force on each atom. Currently, this routine is performed at every step of the simulation. For a system of N solute particles and N_MC MC points per particle, the algorithm is as follows: 1. Generate N · N_MC MC points (sampled from a predetermined distribution).
2. For each MC point, loop over all atoms and use SPA to determine density and mean field at MC point.
3. For each atom, loop over all MC points and compute Lennard-Jones and Coulomb force from MC point on atom.
The algorithm involves two loops of size N² · N_MC: the first to compute the density and mean field at each MC point, and the second to push the force from that MC point onto each atom. This is similar to what is done in GB except that we have the additional N_MC points per solute atom in IS-SPA. Considering a system of N solute particles solvated with either M solvent atoms per solute atom in explicit solvent or N_MC MC points per solute atom in IS-SPA, we achieve the following naive (no neighbor list, no PME) performance scaling relationships for the non-bonded and solvent calculations,

t_IS-SPA ∝ a N² N_MC + b N²,
t_explicit ∝ b N² (M + 1)²,

where a and b are performance coefficients in front of the IS-SPA and non-bonding loops, respectively. We expect a ≥ 2b since IS-SPA involves two loops over all pairs. In practice, we get b ≈ 0.27a in our IS-SPA code due to the additional algebraic steps to compute the solvent forces. Notice that IS-SPA does not have an N_MC² term in the expected scaling relationship since each MC point is independent of the others. From these scaling relationships, it is apparent that IS-SPA will perform better than explicit when N_MC < (b/a) M(M + 2). If we set N_MC = 100 as used in the current work and b/a = 0.27, we find that IS-SPA is expected to perform better than explicit for ADP concentrations lower than 150 mM in chloroform. This is significantly higher than the high experimental concentrations of 54 mM of diphenylalanine in self-assembly experiments. 6
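Written out, the two O(N²·N_MC) loops of the per-step IS-SPA routine have the shape sketched below. This is schematic Python, not the production Fortran code: the three callables stand in for the sampling, SPA density/field evaluation, and solvent force machinery described above, and the normalization (bulk density and sampling volume folded into the returned weight) is only indicative.

```python
import numpy as np

def is_spa_step_forces(solute_pos, n_mc, sample_mc_point, spa_density_field, solvent_force):
    """One IS-SPA force evaluation, written as the two all-pairs loops in the text.

    sample_mc_point(i)          -> a solvent-center position drawn around atom i
    spa_density_field(pt)       -> (weight, field): SPA product and summed mean field,
                                   each obtained by looping over all solute atoms
    solvent_force(i, pt, field) -> force on atom i from a solvent molecule at pt
    """
    n_atoms = len(solute_pos)
    forces = np.zeros((n_atoms, 3))
    mc_points, weights, fields = [], [], []
    # Loop 1: for each MC point, loop over all atoms (inside spa_density_field)
    for i in range(n_atoms):
        for _ in range(n_mc):
            pt = sample_mc_point(i)
            w, field = spa_density_field(pt)
            mc_points.append(pt); weights.append(w); fields.append(field)
    # Loop 2: for each atom, loop over all MC points and accumulate the solvent force
    for i in range(n_atoms):
        for pt, w, field in zip(mc_points, weights, fields):
            forces[i] += w * solvent_force(i, pt, field)
    return forces / n_mc
```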
V. CONCLUSIONS
In the current work, we adapt the previous IS-SPA model to accurately account for polar solutes in a polar, non-spherical solvent. We consider the solvent to be a dipole orienting in a superposition of mean fields emanating from each solute atom. These mean fields and radial solute-solvent densities are determined from a simulation of the monomer in explicit solvent. Combined with a long-ranged electrostatic term and solvent-solute Coulombic forces through the octupole term, IS-SPA simulations can be performed in a procedure analogous to our previous non-polar version.
Using this model, it is demonstrated that polar solute solvation and association are accurately captured. As a test case, the dimerization of charged Lennard-Jones spheres in chloroform using IS-SPA is found to be in high fidelity with explicit solvent simulations. It should also be noted that the asymmetry of charge solvation is built into the parameters fed into IS-SPA. Thus, IS-SPA captures the asymmetry of charge solvation in a polar solvent such as chloroform. More importantly, the asymmetry of the charge solvation still amounts to the same PMF on the anion and cation for the dimerization of the oppositely charged pair. This behavior is not guaranteed due to the lack of closure of the SPA.
These additions to IS-SPA also accurately capture the solvation behavior of chloroform around alanine dipeptide. The Ramachandran plot of the monomer is well replicated by the model, with IS-SPA netting the lowest relative entropy to explicit solvent out of the three solvation models tested. The dimerization of ADP is also well captured by IS-SPA, with the lowest integrated free energy difference relative to explicit solvent. Here there is still room for improvement. The quantitative discrepancy in the dimer PMFs between 5 and 8 Å stems from subtle inaccuracies in the solvent force. We hypothesize this is mainly due to the Lennard-Jones force, since this force is less quantitatively reproduced for the Lennard-Jones spheres. We will pursue a variety of ways to improve this, including accounting for the non-radial component of the Lennard-Jones force from chloroform.
Finally, using the naïve performance scaling behavior, we determined that the current IS-SPA model should outperform explicit solvent simulations at peptide solute concentrations below 150 mM. This predicted behavior does not account for neighbor lists or long-ranged electrostatic corrections that will impact performance. Next steps in the development of the method for broad use will be to determine how to implement cutoffs, and thus the ability to use neighbor lists, while still accurately accounting for the solvent forces.
SUPPLEMENTARY MATERIAL
Supporting information is provided including: simulation methods, multipole moments of chloroform, equations for the electrostatic solvent forces in IS-SPA, and example atomic parameters for alanine dipeptide. Initial frames for all the implicit solvent simulations are collected from the explicit solvent simulations. The input for the explicit solvent simulations is produced using tleap. The energy of the initial configuration is first minimized for 20000 steps. The system is then heated to 298 K over 50 ps and then the volume is allowed to change for another 50 ps. The system is evolved for 3 ns before collecting data in order to remove any artifacts from the initial configuration. The direct interaction between the two ions is not calculated, allowing for calculation of the PMF between 0 Å and 25 Å, using windows separated by 0.5 Å. Each window is simulated for 100 ns. Only the explicit solvent model is simulated since the IS-SPA results can be obtained by direct integration of the mean force for a given ion separation.
The distribution of internal orientations of alanine dipeptide is calculated by simulating a single molecule. For the explicit solvent simulation, the molecule is solvated with 5,000 chloroform molecules resulting in a cubic box with an approximate linear length of 65Å. A bias potential is applied to the φ-dihedral angle in order to fully sample the dihedral states of the molecule. The bias potential is found by calculating the minimum energy states of the system in vacuum in all φ-and ψ-dihedral space and finding the Fourier decomposition to third order. The resulting potentials have spring constants of k 1 = 3.8124 kcal/mol, k 2 = 3.2418 kcal/mol, and k 3 = 3.2418 kcal/mol, and phase shifts of ψ 1 = 4.6815 rad, ψ 2 = 1.9924 rad and, ψ 3 = 3.3407 rad, where the subscript is the periodicity of the potential.
All solvent model systems were run with 5 replicas for 200 ns each.
Average solvent distributions for a single solute configuration are needed to parameterize IS-SPA. A frame of alanine dipeptide in explicit solvent in the minimum free energy dihedral configuration is chosen for this purpose. The solute atoms are then restrained so that they do not move, and the solvent distribution is sampled in a simulation of 100 ns. The resulting solvent density and polarization are then measured in cubic bins of 0.25 Å length around the molecule.
The mean force and PMF for two alanine dipeptide molecules are calculated by performing umbrella sampling (US) simulations as a function of the center-of-mass distance between the three central heavy atoms of each solute. In the case of the explicit solvent simulations, the two solutes are solvated in 2,000 chloroform molecules resulting in a cubic box with an approximate linear length of 65 Å. A harmonic restraint with a spring constant of 20 kcal/mol/Å² is applied to the two solute molecules. The PMF is calculated via windows between 3.5 Å and 16.5 Å separated by 0.5 Å. Each window is simulated for 100 ns.
III. ELECTROSTATIC SOLVENT FORCES IN IS-SPA
The electrostatic force from the chloroform solvent is expanded in a multipole expansion. The assumption is that the distribution of the dipole moment of a chloroform molecule at a given point in space is that of an ideal dipole. Given that distribution, the average force is then expanded. The force is expressed in terms of q_i, the charge on the solute atom, the multipole moments M, the unit vector r̂ and distance r between the solute and solvent, the unit vector Ê of the electric field, and the Legendre polynomials P. The angle brackets denote the ensemble average over the distribution of dipole moments. This requires calculating the moments ⟨(r̂ · p̂)^n⟩ for a dipole in a uniform electric field, which are obtained recursively starting from ⟨(r̂ · p̂)^0⟩ = 1.
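The recursion itself is garbled in this extraction. Under the stated assumption of an ideal dipole, i.e. a Boltzmann distribution P(x) ∝ exp(a x) for x = r̂·p̂ on [−1, 1] with a = pE/T, integration by parts gives one standard form of the recursion, sketched below. The weak-field branch and all names are our own; the first moment reduces to the Langevin function as a check.

```python
import numpy as np

def dipole_moments(a, n_max):
    """Moments <(r_hat . p_hat)^n> of an ideal dipole with alignment parameter a = pE/T.

    Integrating x^n exp(a x) on [-1, 1] by parts gives
        M_n = coth(a) - (n/a) M_{n-1}   for odd n,
        M_n = 1       - (n/a) M_{n-1}   for even n,
    with M_0 = 1; M_1 equals the Langevin function coth(a) - 1/a.
    """
    if abs(a) < 1e-6:
        # weak-field limit (the recursion suffers cancellation here):
        # even moments -> 1/(n+1), odd moments -> a/(n+2) to first order in a
        return [1.0 / (n + 1) if n % 2 == 0 else a / (n + 2) for n in range(n_max + 1)]
    m = [1.0]
    coth_a = 1.0 / np.tanh(a)
    for n in range(1, n_max + 1):
        lead = coth_a if n % 2 == 1 else 1.0
        m.append(lead - n / a * m[-1])
    return m
```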
"year": 2020,
"sha1": "35bd725b754b4f57aafafda374ba81d7475e7f02",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "35bd725b754b4f57aafafda374ba81d7475e7f02",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
Lipids in Health and Disease
Background: Coronary heart disease is increasing in urban Indian subjects and lipid abnormalities are important risk factors. To determine secular trends in the prevalence of various lipid abnormalities, we performed studies in an urban Indian population. Methods: Successive epidemiological Jaipur Heart Watch (JHW) studies were performed in urban locations in Western India. The studies evaluated adults ≥ 20 years for multiple coronary risk factors using standardized methodology (JHW-1, 1993–94, n = 2212; JHW-2, 1999–2001, n = 1123; JHW-3, 2002–03, n = 458; and JHW-4, 2004–2005, n = 1127). For the present analyses, data of subjects 20–59 years (n = 4136, men 2341, women 1795) have been included. In successive studies, fasting measurements of cholesterol lipoproteins (total cholesterol, LDL cholesterol, HDL cholesterol) and triglycerides were performed in 193, 454, 179 and 252 men (n = 1078) and 83, 472, 195 and 248 women (n = 998), respectively (total 2076). Age-group-specific levels of various cholesterol lipoproteins, triglycerides and their ratios were determined. The prevalence of various dyslipidemias (total cholesterol ≥ 200 mg/dl, LDL cholesterol ≥ 130 mg/dl, non-HDL cholesterol ≥ 160 mg/dl, triglycerides ≥ 150 mg/dl, low HDL cholesterol <40 mg/dl, high cholesterol remnants ≥ 25 mg/dl, and high total:HDL cholesterol ratio ≥ 5.0 and ≥ 4.0) was also determined. Significance of secular trends in prevalence of dyslipidemias was determined using linear curve-estimation regression. Associations of changing trends in prevalence of dyslipidemias with trends in educational status, obesity and truncal obesity (high waist:hip ratio) were determined using two-line regression analysis. Results: Mean levels of various lipoproteins increased sharply from JHW-1 to JHW-2 and then gradually in JHW-3 and JHW-4. Age-adjusted mean values (mg/dl) in the JHW-1, JHW-2, JHW-3 and JHW-4 studies, respectively, showed a significant increase in total cholesterol (174.9 ± 45, 196.0 ± 42, 187.5 ± 38, 193.5 ± 39, 2-stage least-squares regression R = 0.11, p < 0.001), LDL cholesterol (106.2 ± 40, 127.6 ± 39, 122.6 ± 44, 119.2 ± 31, R = 0.11, p < 0.001), non-HDL cholesterol (131.3 ± 43, 156.4 ± 43, 150.1 ± 41, 150.9 ± 32, R = 0.12, p < 0.001), remnant cholesterol (25.1 ± 11, 28.9 ± 14, 26.0 ± 11, 31.7 ± 14, R = 0.06, p = 0.001), total:HDL cholesterol ratio (4.26 ± 1.3, 5.18 ± 1.7, 5.21 ± 1.7, 4.69 ± 1.2, R = 0.10, p < 0.001) and triglycerides (125.6 ± 53, 144.5 ± 71, 130.1 ± 57, ...).

Lipids in Health and Disease 2008, 7:40. doi:10.1186/1476-511X-7-40. Received: 16 July 2008; Accepted and Published: 24 October 2008. Available from: http://www.lipidworld.com/content/7/1/40. © 2008 Gupta et al; licensee BioMed Central Ltd. Open Access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0).
Introduction
Cardiovascular diseases, especially coronary heart disease, are important public health problems in India and many developing countries [1,2]. There is evidence that the diseases are increasing in these countries in contrast to developed nations of Europe and North America where the incidence has decreased [3,4]. Societal changes as well as individual lifestyle factors are important in driving this cardiovascular epidemic [5]. These changes influence the proximate determinants of atherosclerosis which include smoking and tobacco use, high total and low density lipoprotein (LDL) cholesterol, low high density lipoprotein (HDL) cholesterol, high blood pressure, diabetes and the metabolic syndrome. Trends of these risk factors have been well studied in developed countries and show significant correlation with rise and fall of the coronary heart disease epidemic [5].
There have been only a few studies that have examined trends in cardiovascular risk factors in middle- and low-income countries [6]. In the Seven Countries Study, multiple cross-sectional surveys were conducted among men aged 40-59 years in Yugoslavia, Italy, Greece, Holland, Finland, Japan and the USA [7]. These studies reported that while major coronary risk factors initially stabilized and later declined in many of these countries, in middle-income countries such as Yugoslavia the risk factors increased. The WHO-MONICA study reported that population risk factors increased in the Chinese cohorts while they declined in North American and Western European cohorts [8,9]. Increasing trends in coronary risk factors have also been reported from many middle-income Latin American countries [6]. In Asia, increasing trends in lipids and in the prevalence of dyslipidemias (high LDL cholesterol and low HDL cholesterol) have been reported in urban populations of Beijing [10], rural China [11] and South Korea [12].
To our knowledge, no single study exists that has systematically evaluated trends in major cardiovascular risk factors in India, although reviews have reported increasing prevalence of hypertension [13], diabetes [14], and hypercholesterolemia [2], and declining smoking rates among educated Indians [15]. All these evaluations suffer from multiple biases inherent in compiling studies from different sources and different methodologies [16]. We performed multiple coronary heart disease risk factor epidemiological studies in urban populations in the western Indian state of Rajasthan to determine their lifestyle and other determinants [17][18][19][20]. Here we report trends in levels of various lipoproteins (total, LDL, HDL and non-HDL cholesterol, triglycerides) and the total:HDL cholesterol ratio, and the prevalence of dyslipidemias using current definitions.
Methods
A series of cross-sectional epidemiological studies using similar tools was performed in the Indian state of Rajasthan over the years 1992-2005 to determine cardiovascular risk factors in urban populations [17][18][19][20]. All the studies were approved by the institutional ethics committee and supported financially by different organizations. The studies were performed in Jaipur, the capital city of Rajasthan state in western India, with a population of 2.34 million in the year 2001. The first study, Jaipur Heart Watch (JHW)-1 [17], was conducted in 1993-1994; 1608 randomly selected men and 1392 women were targeted using stratified cluster sampling based on voters' lists in six locations in Jaipur city. In total, 2212 subjects (1415 men, 88.0%; 797 women, 57.3%) were evaluated for various cardiovascular risk factors, and a fasting blood sample for cholesterol lipoproteins and triglycerides was attempted in 15%. In the second urban study (JHW-2) [18] we targeted 960 men and 840 women in the same locations as in JHW-1 and could examine 550 men (57.3%) and 573 women (68.2%). In this study we targeted all the participants for the fasting blood sample. The third (JHW-3) [19] and fourth (JHW-4) [20] urban studies targeted a smaller sample and were designed to gather information on risk factors in middle-class locations. Response rates are shown in Table 1.
Data collection
Methodological details have been previously reported [17]. Briefly, we collected information regarding demographic data, educational level, history of chronic illnesses such as coronary heart disease, hypertension, diabetes or high cholesterol levels, and smoking or tobacco intake. Brief questions were asked to evaluate physical activity and diet, but the results were considered inadequate and not included in the analyses. Physical examination was performed to assess height, weight, waist and hip size, and blood pressure. Body mass index (BMI) was calculated as weight (kg) divided by height squared (m²). The waist-to-hip ratio was calculated. Fasting glucose was determined at a central laboratory using the glucose peroxidase method and external quality control. Total cholesterol was measured using the cholesterol oxidase-phenol 4-aminophenazone peroxidase method and HDL cholesterol using an enzymatic method after precipitating non-HDL cholesterol with a manganese-heparin substrate. Triglycerides were measured using the glycerol phosphate oxidase-peroxidase enzymatic method. Quality control measures were followed for estimation of total cholesterol, high density lipoprotein (HDL) cholesterol and triglycerides, while low density lipoprotein (LDL) cholesterol was estimated using the Friedewald formula [21].
Diagnostic criteria
The diagnostic criteria for coronary risk factors have been advised by the American College of Cardiology clinical data standards [22]. Educational level was used as marker for socioeconomic status as reported in an earlier study [23]. More than 5 years of formal education (primary education or more) was taken as acceptable literacy level for analysis. Obesity or overweight was defined as body mass index of ≥ 25 kg/m 2 and truncal obesity was defined by waist:hip ratio of > 0.95 for men and > 0.85 for women [24]. Dyslipidemia was defined by the presence of high total cholesterol (≥ 200 mg/dl), high LDL cholesterol (≥ 130 mg/dl), low HDL cholesterol (< 40 mg/dl), high non-HDL cholesterol (≥ 160 mg/dl), high cholesterol remnants [very low density lipoprotein cholesterol = total -(HDL+LDL) cholesterol ≥ 25 mg/dl] or high triglycerides (≥ 150 mg/dl) according to National Cholesterol Education Program (NCEP) Adult Treatment Panel-3 (ATP-3) guidelines [25]. High total to HDL cholesterol was defined when ratio was either ≥ 5.0 or ≥ 4.0 as reported in an earlier study from India [26].
Statistical analyses
The continuous variables are reported as mean ± 1 SD and ordinal variables in percent. Prevalence rates are reported in percent. Age-stratified prevalence rates and distributions of various risk factors have been reported for decadal intervals from 20 to 59 years. Age-adjustment of various prevalence rates was performed using the direct method with the Jaipur urban population according to the 2001 census. Correlation of age with lipid values was performed by simple correlation analysis, and significance of age-adjusted trends in mean lipoprotein levels was evaluated by 2-stage least squares regression using the SPSS 10.0 statistical package (SPSS Inc, Chicago). Significance of trends in prevalence rates was determined using linear curve-estimation regression analysis using the SPSS package. Regression coefficients are reported as multiple R values after age adjustment. Significance of graphical trends was determined by logarithmic regression analysis using the Microsoft Office Power Point (2002) program. Significance of two-line trends was determined by least squares regression analyses using GB-Stat for Windows ® software 7.0 (Dynamic Microsystems Inc, Silver Spring, MD USA) and reported as r² values. r² values of more than 0.10 and p values less than 0.05 were considered significant.
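As an illustration of the direct method of age adjustment described above, the following minimal Python sketch weights age-group-specific prevalences by a standard population; the function name, age groups, and numbers in the usage comment are hypothetical and are not data from the JHW studies.

```python
import numpy as np

def direct_age_adjusted_prevalence(stratum_prevalence, standard_population):
    """Direct-method age adjustment: weight each age-group prevalence by the
    standard population (here, the 2001 census age structure of urban Jaipur).

    stratum_prevalence  : prevalence (%) in each age group of the study sample
    standard_population : counts (or proportions) of the same age groups in the standard
    """
    p = np.asarray(stratum_prevalence, dtype=float)
    w = np.asarray(standard_population, dtype=float)
    return float(np.sum(p * w) / np.sum(w))

# Hypothetical example with four decadal groups (20-29, ..., 50-59):
# direct_age_adjusted_prevalence([18.0, 27.5, 36.2, 41.0],
#                                [310000, 260000, 190000, 120000])
```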
Results

Mean levels of various lipoproteins at different age groups are shown in Table 2. There is an age-associated escalation in total cholesterol, LDL cholesterol, non-HDL cholesterol, remnant cholesterol, the total:HDL cholesterol ratio and triglycerides in men and women in all the cohorts. The levels of HDL cholesterol decline with age, the decline being similar in men and women. Correlation of various lipoproteins with age in combined data from the JHW studies is shown in Figure 1. There is a significant increase in total cholesterol (r = 0.16; Table 4).
Discussion
This study shows that there is a high prevalence of various forms of lipoprotein abnormalities in Indian urban subjects. Secular trends reveal increasing mean levels of total, LDL, non-HDL, and remnant cholesterol, the total:HDL cholesterol ratio and triglycerides, and a decline in HDL cholesterol. The prevalence of high non-HDL cholesterol, remnant cholesterol, and total:HDL cholesterol ratio increased. These changes correlate significantly with increasing education (socioeconomic status) and truncal obesity. Most of the lipid abnormalities are markers of dietary excess, low physical activity and increasing obesity [27]. The present study confirms that increasing obesity, manifest as truncal obesity and due to population-wide sedentary lifestyle and high calorie intake [28,29], leads to an increase in multiple dyslipidemias. We have previously reported an increase in the prevalence of coronary heart disease [2] in urban Indian populations, and the present study suggests that increasing non-HDL cholesterol, cholesterol remnants, and total:HDL cholesterol ratio are important risk factors. The importance of these dyslipidemias has been highlighted in multiple prospective studies from other countries [30][31][32].
The rise and fall of cholesterol and other lipoproteins associated with changing cardiovascular mortality and coronary heart disease incidence has been well documented in many developed countries [7,8,25]. There is a paucity of similar data from developing countries. Data of the present study have significant healthcare policy and pharmacoeconomic implications because more than 40% of the world's population is in India and China. As the economies of these countries boom [33] and individual buying capacity increases, lifestyle changes shall lead to a massive increase in lipid levels, fuelling the cardiovascular epidemic as observed in the present study in an Indian urban population. In China, two large-scale surveys have been carried out to determine the prevalence of lipid abnormalities [34,11]. The first survey in 1992 reported greater lipid values in urban as compared to rural populations [34]. Among 9477 subjects, the mean ± SD cholesterol at urban sites in China was 181.6 ± 32 to 184.8 ± 38 mg/dl in men and 187.5 ± 33 to 187.6 ± 42 in women. The values were 15-25 mg lower in rural subjects [34]. Prevalence of hypercholesterolemia ≥ 200 mg/dl was 29.1-31.0% in urban subjects and 7.7-20.0% in rural subjects. The second survey in 2004 was a population-based epidemiological study among 15540 adults and reported mean ± SEM cholesterol of 193.0 ± 0.7 in urban men and 196.4 ± 0.7 in urban women [11]. These levels were significantly greater than in the 1992 study. This study also reported a lower urban-rural gap in cholesterol levels (10-11 mg/dl more in the urban). Age-adjusted prevalence of hypercholesterolemia was 39.8% in urban men, 44.1% in urban women, 30.2% in rural men and 31.7% in rural women. Age-adjusted prevalence rates for hypercholesterolemia are lower in our study (Table 3). Prevalence of low HDL cholesterol < 40 mg/dl in Chinese urban subjects was 29.5% in men and 14.6% in women aged 35-74 years, which is lower than reported in our subjects. These studies did not report the prevalence of hypertriglyceridemia or high total:HDL cholesterol ratios. Another study from China reported changing trends of cardiovascular risk factors in different socioeconomic groups but did not comment on lipid levels [35]. The decline in total and LDL cholesterol has been attributed to documented decreases in dietary intake of saturated fats and cholesterol [40]. However, recent evidence suggests that the decline in the USA may have been due to increased use of medications rather than positive lifestyle changes [40]. It is also suggested that the slower decline in recent years is likely due to the increase in obesity among adults, and the observed increase in triglyceride levels is a marker [39]. In the present study too, it is observed that increasing obesity is an important determinant of increases in total, non-HDL, and LDL cholesterol and triglycerides. This augurs more adverse lipid profiles worldwide unless the obesity epidemic is controlled.
This study has multiple limitations as well as strengths. The variable and low response rates in some cohorts make the data tenuous, but the age structure of the studied cohorts was similar to the local populations and therefore the data can be generalized for evaluation of risk factor trends. The small number of subjects in each study and in age-specific subgroups could also be a concern, but the sample sizes have been determined using available recommendations for the prevalence of cardiovascular risk factors in a community [41] and are considered appropriate for inter-group comparisons. We have determined age-adjusted mean levels of various lipoproteins, as these could be the earliest population-level change, and these show significant trends. Prevalence rates are robust evidence of population-level change, and the present study, using simple meta-regression techniques, shows significant trends in important lipid abnormalities. This type of meta-regression is used for combining clinical trials as well as epidemiological studies using pre-defined end points [42]. Generalizability of the study results to the local urban population or to the whole country may not be appropriate at this time, as the socioeconomic structure of the country is so different from locality to locality and town to town [43]. A major strength of the study is the use of similar assessment methodologies that make the observations comparable. Another strength is the determination of different types of lipoprotein abnormalities and cholesterol ratios, which have emerged as important risk factors. The study definitively shows that biological risk factors (lipids) are causally related to increasing obesity and to increasing socioeconomic status as measured by educational status. It has been previously reported that up to a certain level of socioeconomic status (gross national product) the risk factors tend to increase, and once a particular per capita income is achieved the risk factors tend to decline with increasing socioeconomic status [44]. Indeed, a more careful assessment of the trends of dyslipidemias (Figure 3) reveals that risk factors in educated Indian middle-class subjects may be starting to level off, as observed in the JHW-3 and JHW-4 studies. This indicates the importance of evolving socioeconomic changes as an important driver as well as controller of cardiovascular diseases [44].
Low HDL cholesterol and high total:HDL cholesterol are important cardiovascular risk factors. Multiple prospective studies have identified the importance of low HDL cholesterol as a cardiovascular risk factor [25]. The importance of the total:HDL ratio has been highlighted in the Physicians' Health Study, which reported that the relative risk (RR) of acute myocardial infarction in the top vs. bottom quintile of the total:HDL cholesterol ratio was 3.73 (95% confidence interval 1.95-7.12) and was substantially greater than that for total cholesterol (RR 1.86; 1.05-3.28), HDL cholesterol (0.38; 0.21-0.69), and apolipoprotein B (2.50; 1.31-4.75) [45]. The INTERHEART Study also reported that the ratio of apolipoprotein B/A-1 was the most important risk factor for acute myocardial infarction in South Asians [46]. It has also been reported that in patients receiving statin therapy, levels of non-HDL cholesterol, apolipoprotein B, and a total:HDL cholesterol ratio of ≥ 4.0 were more important than other lipid parameters [47]. The increasing ratio in this Indian urban population, along with increasing non-HDL cholesterol and falling HDL cholesterol levels associated with increasing socioeconomic status and obesity, points to the appropriate direction for prevention efforts. The increasing socioeconomic status of Indians has to be complemented with intensive public health education and policy changes at the national level [2,48] for cardiovascular disease prevention.
In conclusion, this report is the first from a low-income country (India) that demonstrates cross-sectional and longitudinal trends in dyslipidemia and its causal relationship with adiposity and socioeconomic status. Data analysis of such a cohort reveals non-similarities (increasing non-HDL cholesterol, triglycerides, and total:HDL ratio) as well as similarities (increasing socioeconomic status and obesity) vis-à-vis other developing and developed regions of the world [49]. The inferences that can be translated into implications for health care policies and practice, medical education and research are beyond the scope of this publication.
"year": 2008,
"sha1": "34b116ab4a749b88b1f29a6b073914d432246e7b",
"oa_license": "CCBY",
"oa_url": "https://lipidworld.biomedcentral.com/track/pdf/10.1186/1476-511X-7-40",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34b116ab4a749b88b1f29a6b073914d432246e7b",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
Foreword to the special issue: Multidimensional objective functions and institutions: Efficiency assessment of public services
Introduction
The objective of this special issue was to produce theoretical and empirical work that provokes and fertilizes the scholarly debate on the role of public service providers and their institutional backbone, namely the public administration. More specifically, with this end goal at the center of our priorities, we expect to help advance our understanding of the antecedents and challenges ahead for research dealing with efficiency in the public sector. The primary role of public administrations (at national and local levels) as leading agents for the provision of public services has gained increased attention as a result of the economic challenges resulting from the economic downturn that hit most countries after 2008 (e.g., Refs. [1,2]). During the last decade, scholars and policy makers have witnessed how different policy efforts have materialized in economic reforms that condition the funding and performance of public services. Regardless of the outcomes of any specific performance evaluation, many of the concerns that led to the reforms promoted after 2008 are currently escalating, mostly because of the handling of the global Covid-19 pandemic by public administrations, especially in health care areas.
In parallel to the reforms and developments, scholars and policy observers have fueled the debate on what elements form the objective function of public service providers as well as on how to model the technology of public services for evaluation purposes. These aspects have become evident in the approach adopted by many academic studies dealing with efficiency assessments in a number of public services, including among others higher education, health care, and the functioning of local governments. Notwithstanding the large stock of knowledge on public service analyses generated in the last decade, various questions on the role as well as the assessment of public services remain unaddressed in the literature.
The first central question relates to the multidimensionality of the objective function of public sector agents. What methodological developments can contribute to improving our understanding of the sources of efficiency of public service providers? While academic work offers insights on how public service providers can benefit from different performance- and governance-led strategies (e.g., Refs. [1,3]), it is crucial to provide clear nuances of the factors shaping efficiency among both public administrations and public service providers so that scholars can build a significant and informative stock of research on this subject. The second central question deals with the operationalization and analytical approaches chosen to evaluate public services. What policy lessons can be drawn from empirical applications focused on efficiency evaluation of public services (e.g., frontier estimation methods, performance measurements and other techniques)? Because methodological choices condition the implications that can be extracted from reported empirical findings, our objective was to promote the publication of theoretically rooted work that helps to better evaluate public services, while acknowledging their specific attributes (e.g., production technology, economic/social function). The relevance of accurate efficiency analyses of public services and public administrations for academics and policy makers is unquestionable. In this special issue we therefore encouraged contributors to produce research that challenges canonical approaches and adopts a critical perspective that sheds valuable insights on the efficiency of public services and of specific policies designed to enhance their functioning, as well as of public administrations and other market agents interacting with the public sector.
We started this enriching journey in 2019 seeking to satisfy our academic curiosity by bringing together different perspectives on the analysis of public services as well as public administrations. Mostly thanks to the work sessions at the 7th International Workshop on "Efficiency in Education, Health and other Public Services" (Universitat Internacional de Catalunya, Barcelona, September 5th-6th 2019), this special issue received great support from scholars and policy observers working in the field. Obviously, all our efforts simply would not have been possible without the support and nurturing of the journal's Editor-in-Chief Vedat Verter, to whom we express our deepest gratitude.
As a result of our efforts, throughout this editorial note we address the two subjects outlined above, and then provide an overview of the collection of papers included in this special issue.
The contributions of this special issue to the literature on public services' efficiency
After an exhaustive peer review process, this special issue includes 13 articles that contribute significantly to advance the efficiency assessment of public services and public administrations.
By analyzing the approaches adopted by the selected papers, we observe that public services' efficiency can be researched from multiple angles, and that the unit of analysis varies from organizations (public service: 5 studies; private firms: 1 study) to different territorial levels (municipality: 2 studies; region: 3 studies; country: 2 studies). Note that part of the value of the papers included in this special issue is their capacity to bring together theoretical premises from different fields, including organizational theory as well as arguments closer to economic geography.
The richness of these papers also becomes evident in the variety of methods employed-spanning from parametric (3 studies) and nonparametric (8 studies) frontier approaches to regression models and spatial econometrics (3 studies)-and in the geographic diversity of the analyzed settings, covering different European countries (7 studies), Latin America (2 studies), Africa (1 study), Asia (1 study) as well as multi-country comparisons and cross-regional studies (2 studies). By using multiple analytical methods on cross-sectional (7 studies) and longitudinal (6 studies) data sets, the selected papers contribute to identify different patterns that are conducive to superior efficiency among public services and administrations. The diversity of the selected papers is consistent with and further reinforces the logic presented above on the need to analyze the drivers of efficiency in public services and local administrations from multiple perspectives.
Overall, the collection of papers included in this special issue focuses on three main topics, which are summarized in Table 1: analysis of education centers, efficiency assessment of public administrations, and the analysis of resource allocation and the provision of public goods.
Table 1. Methodology and geographic scope of the articles included in the special issue (columns: topic; organizational analysis; cross-regional/single-country analysis; cross-regional/multiple-country analysis).
Analysis of education centers
Seven of the manuscripts deal with the analysis of education centers from an organizational (4 studies) and a more territorial (3 studies) perspective. The paper by Ref. [4] employs stochastic frontier methods (SFA) to assess the cost structure of Indian higher education institutions (HEIs). The authors find an exhaustion of economies of scale in the teaching function of HEIs. This finding indicates that promoting the growth of smaller HEIs seems a more promising strategy if policy makers are interested in expanding the provision of higher education in India. Also, scale economies of the research function remain unexhausted, which suggests that a concentration of research activities may produce benefits among Indian HEIs.
By employing a conditional panel data DEA model on a sample of 124 Catalan primary schools during 2009-2014, the authors of Ref. [5] found that efficiency differences between top-performing and poor-performing schools were drastically reduced over the analyzed period. This suggests that Catalan primary schools improved their decision-making processes (in terms of resource allocation) during the crisis that characterized the period 2009-2014, a finding in line with prior work highlighting that budget constraints are effective tools for narrowing efficiency differences among primary schools.
The paper by Ref. [6] employs a centralized DEA approach and a 'benefit of the doubt' (BoD) model to evaluate the relative efficiency level of preschool education centers (kindergarten) in Chile. In a second analytical stage, the authors use decision trees to identify variables explaining the composition of homogeneous preschool groups according to their effectiveness. Results point to an average efficiency level of 70.54% with important heterogeneity across Chilean regions, a figure that is significantly lower than the 84.47% reported by the BoD model. The findings underline the importance of three factors shaping the effectiveness of Chilean kindergartens: size of the center, household income, and location (rural or urban).
Building on educational value-added theory, the authors of Ref. [7] analyze how class groups' efficiency is improved by transforming non-cognitive skills (linked to the traits and attitudes forming an individual's personality that can affect goal-directed effort, social relations, and decision-making) into cognitive skills (linked to acquired abilities that allow people to perform mental activities associated with learning and problem solving). Using a sample of 108 Italian school groups, the results of the SFA model indicate that actions targeting the development of non-cognitive skills improve cognitive performance during the school years. This suggests that conventional teaching can be challenged by encouraging the adoption of innovative teaching methods that stimulate other, equally important soft skills and, subsequently, help realize students' potential.
Among the studies evaluating the role of education on territorial outcomes, the paper by Ref. [8]; which was handled by Vedat Verter (Editor-in-Chief), employs longitudinal spatial econometrics models to evaluate how the local configuration of universities (i.e., number of universities and the proportion of public universities in a region) impact the regional rate of new knowledge-intensive business service (KIBS) firms on a sample of 47 Spanish provinces during 2009-2013. Results support that regions with a greater concentration of universities and a higher proportion of public universities attract more new KIBS. Also, the authors report a substitution effect between university-based variables and regions' industry specialization: new KIBS tend to locate in regions where they expect either greater knowledge input from universities or a higher presence of potential industrial partners.
[9] study the impact of HEIs on regional economic growth among 284 European regions during 2000-2017. Similar to Ref. [8]; the authors find a positive relationship between the number of universities at regional level and regional economic growth (GDP). Further analyses reveal that the quality of research and academic specialization in STEM (science, technology, engineering and mathematics) subjects are the main channels through which universities impact the regions' economic performance.
The authors of Ref. [10] focus on the Program for International Student Assessment (PISA) and evaluate how organizational heterogeneity (resource endowment) and local heterogeneity (contextual factors) affect the efficiency analysis of PISA results. Employing a flexible non-parametric location-scale model on a sample of 35 OECD countries for 2015, the study's core finding is that organizational factors (linked to resource availability and decision-making) and environmental factors (linked to location, rural or urban, and accountability) significantly impact the performance analysis of PISA results as well as country rankings based solely on student results.
Analysis of public administrations
By focusing on different, equally relevant aspects of local governments, the second group of four papers proposes an efficiency assessment of public administrations at different territorial levels.
In their study of the interplay between firm productivity and perceived corruption as determinants of the probability of obtaining government contracts among 949 firms located in 33 African and Asian developing countries, the authors of Ref. [11] find that corruption negatively moderates the relationship between firm productivity and a positive outcome in public procurement processes (i.e., a government contract) for pro-market firms, while this moderation effect turns positive for rent-seeking firms. The public procurement policy implication of this study is clear: the exclusion of productive (pro-market) firms from public procurement processes increases the cost and potentially reduces the quality of public services. In this sense, the authors suggest that encouraging the participation of internationalized (exporting) firms may constitute a valid mechanism both to ensure more efficient public procurement decision-making and to improve the quality of outsourced public services.
The study by Ref. [12], which was handled by Vedat Verter (Editor-in-Chief), employs a BoD model to build a composite indicator that evaluates the competitive efficiency of 81 Costa Rican counties during 2010-2016. The authors find that the informative power of the proposed BoD composite indicator (based on a participatory method) outperforms alternative specifications using homogeneous weight restrictions or weights estimated via principal component analysis, and that the resulting indicator is useful for monitoring local competitiveness. In a second analytical stage, the findings reveal how the BoD-based analysis may offer useful information to policy makers on which strategic actions may potentially optimize the allocation of local resources and, subsequently, enhance economic outcomes related to business creation rates and employment figures.
By applying conditional models (DEA and BoD) to a sample of 307 Flemish municipalities, the authors of Ref. [13] explore the relationship between municipality size and the provision of local services (i.e., administration, culture, care services, education, housing, local mobility, security, and environment). The findings highlight the presence of diseconomies of scale among Flemish municipalities, especially for those with more than 10,000 citizens. From a policy perspective, the results suggest that optimal public service provision can be realized by promoting inter-municipality collaborations in the provision of some services (e.g., waste disposal or recycling). The authors of Ref. [14] evaluate the implications of ideal and anti-ideal decision-making units for the BoD model. They propose that, in the presence of an ideal (anti-ideal) decision-making unit, the efficiency scores of the BoD (inverted BoD) model can be computed without solving the corresponding linear program. The authors show the value of their approach by evaluating the e-Government Development Index (e-GDI, United Nations) for 193 countries. The findings indicate that countries with a relatively balanced performance fall into the top-performing group (green group), while countries with a less balanced performance are classified as poor-performing territories (red group). The proposed analytical tool can help guide policy makers: countries in the red group should pay more attention to the component indicators on which they perform relatively worse in order to gradually transform their operating mix into a more balanced one.
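For readers unfamiliar with the benefit-of-the-doubt approach referred to throughout this section, the sketch below illustrates the standard BoD composite indicator as a small linear program: each unit receives the most favorable indicator weights subject to the constraint that no unit scores above one under those weights. The indicator matrix, the choice of scipy.optimize.linprog, and the toy numbers are illustrative assumptions, not taken from any of the papers summarized here.

```python
# Minimal benefit-of-the-doubt (BoD) composite indicator via linear programming.
# For each unit k: maximize sum_j w_j * y[k, j]
#   subject to      sum_j w_j * y[i, j] <= 1  for every unit i,  w_j >= 0.
# Hypothetical data; real applications add weight restrictions, a second stage, etc.
import numpy as np
from scipy.optimize import linprog

y = np.array([          # rows = units, columns = normalized sub-indicators
    [0.80, 0.60, 0.90],
    [0.55, 0.95, 0.70],
    [0.40, 0.50, 0.45],
])

def bod_score(k: int, indicators: np.ndarray) -> float:
    """BoD efficiency score of unit k (1.0 = on the best-practice frontier)."""
    n_units, n_ind = indicators.shape
    c = -indicators[k]                 # linprog minimizes, so negate to maximize
    A_ub = indicators                  # each unit's weighted score must stay <= 1
    b_ub = np.ones(n_units)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n_ind, method="highs")
    return -res.fun

for k in range(len(y)):
    print(f"unit {k}: BoD score = {bod_score(k, y):.3f}")
```

The inverted BoD variant mentioned above follows the same template with the objective and constraint directions reversed (minimize the unit's score subject to every unit scoring at least one).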
Analysis of resource allocation policies and the provision of public goods
Finally, the last group includes two papers that specifically analyze resource allocation policies and the provision of public goods from a territorial perspective.
The study by Ref. [15] evaluates how the political ideology (i.e., left-wing, populist, and extremist parties) of regional administrations in the Czech Republic influences the relative efficiency of public service policies (in terms of education, health care, and infrastructure spending) between 2007 and 2017. To compute the relative efficiency measures in each policy area, the authors use conditional non-parametric efficiency models that take into account the quality of service provision (outputs). The main findings reveal that the share of left-wing members in regional councils is negatively correlated with public service spending efficiency. This overall negative relationship appears to be explained by low performance in health care provision, whereas education efficiency is higher in councils governed by left-wing members. Also, the authors failed to find any significant relationship between the share of populist councilors in regional councils and overall spending efficiency; however, they found a significantly lower efficiency level in education provision in councils with a high presence of populist members.
The last paper included in this special issue by Ref. [16] focuses on the efficient allocation of public resources. Using a sample of 271 municipalities from the Spanish region of Navarra, the authors propose a directional distance function (DDF) in order to accurately deal with grant allocation problems that upper-tier local governments face among the municipalities under their corresponding jurisdiction. The authors found that policy priorities condition local administrations' efficiency: the total amounts of grants and taxes could be reduced by up to 9.4 and 28.8%, respectively (relative to their current level) while simultaneously increasing the level of all local services by the same proportion. The proposed model constitutes a valid tool to inform policy makers on how to achieve a more efficient and equitable utilization of public resources.
Funding
Esteban Lafuente acknowledges financial support from the Spanish Ministry of Economy, Industry and Competitiveness (grant number: ECO2017-86305-C4-2-R). | 2021-05-07T00:04:15.472Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "c3eb8690935f3b3b09d81c32cb069c36007cab55",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.seps.2021.101056",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "48885abdfa04267d937aa4ae19b817109f5ddf60",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
234470272 | pes2o/s2orc | v3-fos-license | Prevalence of postamputation pain and its subtypes: a meta-analysis with meta-regression
Supplemental Digital Content is available in the text. The prevalence of postamputation pain is high in patients with nontraumatic lower-extremity amputations, but the pooled prevalence rates were associated with high levels of heterogeneity. Ongoing research using the Durham Pain Investigations Group Postamputation Pain Algorithm taxonomy is needed to fully delineate the prevalence of postamputation pain and its subtypes.
Introduction
Chronic postamputation pain (PAP) is a debilitating condition that stems from a confluence of neurological and musculoskeletal factors. As a result, the prevalence of PAP has been difficult to establish. A related barrier to establishing the prevalence of PAP has been the inconsistent use of standardized approaches for classifying the various clinical conditions responsible for PAP. The Durham Pain Investigations Group PAP Algorithm (DPIG-PAPA) is a taxonomy for PAP based on pain type. 4 The first 2 subtypes are phantom limb pain (PLP) and residual limb pain (RLP). The latter category is subdivided into a somatic pain subtype (eg, chronic infection, chronic wound inflammation, and prosthesis maladaptation) and a neuropathic pain subtype. The neuropathic pain category is further subtyped as (1) sympathetically mediated pain, often referred to as complex regional pain syndrome-like pain, (2) painful neuroma, and (3) mosaic postamputation neuralgia. 4 The number of people living in the United States with limb loss is projected to double by the year 2050. 24 Acquiring detailed knowledge about the prevalence of PAP and its subtypes would enable clinicians, researchers, and policymakers to allocate health care resources based on projections of anticipated need. 6 Thus, the primary objective of this systematic review and meta-analysis is to determine the prevalence of nontraumatic lower-extremity PAP using an established taxonomy for PAP. The secondary objective is to determine the prevalence of PAP subtypes including PLP and the various subtypes of RLP.
Study protocol
This study was deemed exempt by the Mayo Clinic IRB. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 12 were followed. An a priori protocol was followed. The trial was registered in the PROSPERO database CRD42020159480. 2
Search strategy
A comprehensive search of several databases from each database's inception to November 20, 2019, was conducted. The databases included Ovid MEDLINE, MEDLINE Epub Ahead of Print, MEDLINE In-Process and Other Non-Indexed Citations, Daily, Ovid EMBASE, Ovid Cochrane Central Register of Controlled Trials, Ovid Cochrane Database of Systematic Reviews, and Scopus. The search strategy was designed and conducted by an experienced librarian with input from the study's principal investigator. Controlled vocabulary supplemented with keywords was used to search for studies of the prevalence of PAP in patients who have undergone lower-limb plus or minus upper-limb amputation. The actual strategy listing all search terms used and how they are combined is available in Appendix A (available at http://links.lww.com/PR9/A105).
Study selection process
Study inclusion criteria included (1) randomized, crossover, and parallel-design clinical trials and (2) prospective and retrospective observational cohort studies. Two independent pairs of reviewers screened all titles and abstracts identified by our search strategy in the first phase. In the second phase, the 2 pairs of independent reviewers screened the full text of all studies identified in the first phase and applied the inclusion and exclusion criteria. Any disagreements between reviewers with respect to inclusion of studies were resolved by an additional author (R.N.M.).
Data extraction
Data were extracted by 4 independent reviewers using a templated electronic database. Based on the a priori protocol, abstracted data included the prevalence of (1) PAP, (2) PLP, (3) RLP, and (4) each RLP subtype, including somatic, neuropathic pain, CRPS-like, neuroma, and mosaic neuralgia. The follow-up period of the studies varied; thus, the 6-month time point postamputation was used in the prevalence calculations. Baseline demographic data were collected, including age, sex, and the presence of presurgical limb pain.
Risk of bias assessment
Because the outcome of interest was the prevalence of pain in a single cohort, the risk of bias was assessed using a modified tool specifically designed for assessing bias in uncontrolled studies. 14 This modified tool consists of 4 questions: (1) do patients represent the whole experience of the investigator or center, (2) was the exposure adequately ascertained, (3) was the outcome adequately ascertained, and (4) is the case described with sufficient details. The risk of bias was reported for each of 4 questions relating to selection, ascertainment, and reporting for each study.
Evidence synthesis
The prevalence of PAP was extracted from each study and meta-analyzed. Statistical analysis was performed after the Freeman-Tukey double arcsine transformation. Results were pooled with random-effects models using the DerSimonian and Laird method and were reported with 95% confidence intervals (CIs). Statistical analyses were performed using R 3.5.0 (R Core Team, 2018), 1,23 and P values <0.05 were considered significant.
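To make the pooling procedure described above concrete, the sketch below shows one way the Freeman-Tukey double arcsine transformation and DerSimonian-Laird random-effects pooling can be computed. The event counts and sample sizes are invented for illustration, the back-transformation used here is the simple sin²(t/2) approximation rather than the exact harmonic-mean correction, and the original analysis was performed in R rather than Python.

```python
# Hedged illustration of pooling prevalence estimates with the
# Freeman-Tukey (FT) double arcsine transform and DerSimonian-Laird weights.
import numpy as np

events = np.array([30, 45, 12, 60])      # hypothetical cases with pain per study
n      = np.array([50, 80, 25, 90])      # hypothetical sample sizes

# FT double arcsine transform and its approximate sampling variance
t = np.arcsin(np.sqrt(events / (n + 1))) + np.arcsin(np.sqrt((events + 1) / (n + 1)))
v = 1.0 / (n + 0.5)

# Fixed-effect quantities needed for the DerSimonian-Laird tau^2 estimate
w = 1.0 / v
t_fixed = np.sum(w * t) / np.sum(w)
Q = np.sum(w * (t - t_fixed) ** 2)
df = len(t) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects pooling and a 95% CI on the transformed scale
w_star = 1.0 / (v + tau2)
t_pool = np.sum(w_star * t) / np.sum(w_star)
se_pool = np.sqrt(1.0 / np.sum(w_star))
ci = (t_pool - 1.96 * se_pool, t_pool + 1.96 * se_pool)

back = lambda x: np.sin(x / 2) ** 2       # simple approximate back-transform
print(f"pooled prevalence ~ {back(t_pool):.2%} "
      f"(95% CI {back(ci[0]):.2%}-{back(ci[1]):.2%}), I^2 = {i2:.0f}%")
```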
Risk of bias evaluation
The risk of bias assessment is contained in Appendix B (available at http://links.lww.com/PR9/A105). The most common sources of bias were related to patient selection (question 1) and adequacy of ascertaining outcomes (question 3).
The pooled prevalence of PLP in these studies was 53% (95% CI, 40%-66%) with high heterogeneity (I² = 93%). Subgroup analysis of PLP prevalence by study design revealed that prospective cohort studies were a statistically significant moderator of heterogeneity (P = 0.02), but statistically significant residual heterogeneity remained (P < 0.0001). This suggests that study design did not fully account for heterogeneity. Individual subgroup analyses of PLP showed that year of publication, country, and country development status were not significant moderators of heterogeneity. Meta-regression with study design, country development status, and year of publication as covariates resulted in significant residual heterogeneity (data not shown).
Discussion
The primary findings of this systematic review and meta-analysis are (1) the pooled prevalence of PAP is 61%, (2) the pooled prevalence of PLP is 53%, (3) the pooled prevalence of RLP is 32%, and (4) study design is a statistically significant moderator of heterogeneity in studies reporting the prevalence of PLP. The pooled prevalence values were associated with high levels of heterogeneity that warrant further consideration.
The prevalence range of PAP was 28%, but the prevalence ranges of PLP and RLP were 71% and 46%, respectively. A subgroup analysis of PLP demonstrated that study design was a significant moderator of heterogeneity. Alternatively, subgroup and meta-regression analyses of PLP demonstrated that year of publication, country, and country development status were not significant moderators of heterogeneity. For RLP, study design was not a significant moderator of heterogeneity. Although the multifactorial pathophysiological mechanisms responsible for PAP could contribute to heterogeneity, time since amputation and individual clinical factors could be important contributors. In a longitudinal cross-sectional study from the Netherlands, the prevalence of PLP in patients with lower-extremity amputations 6 months after surgery was 32%. 3 However, the prevalence declined to 27% at 3.5 years of follow-up. 3 These findings can be contrasted against a cross-sectional study from the United States where the prevalence of PLP in patients who had at least one amputation ranged from 78% to 85% during a mean follow-up period of 26 years. 20 Approximately 50% of patients reported some improvements in pain, and the remaining 50% reported stable or worsening pain during the follow-up period. 20 These studies suggest that PAP is a dynamic disease process and the prevalence may vary over time. This may be particularly relevant to patients with RLP due, in part, to the varied and time-dependent pathophysiological mechanisms responsible for the clinical manifestation of symptoms in this important subgroup of patients.
Individual patient factors may influence the prevalence of PAP. A cross-sectional study of 122 double amputees revealed high intraindividual concordance for the development of PLP and RLP. 22 Preoperative pain, sex, and age did not explain concordance in PLP or RLP, but the authors reported that recent amputation and short residual limb length were associated with a higher probability of PLP. However, the scope of our systematic review precluded investigating individual factors potentially associated with the development of PAP.
None of the studies included in this systematic review subdivide RLP into the somatic and neuropathic pain subtypes. However, one study described somatic pain and neuroma as possible causes of RLP but the prevalence was not reported. 13 This observation highlights the need for studies that characterize RLP subtypes in amputees because this subdivision has important treatment implications.
This study has limitations. First, the scope of this systematic review was limited to studies that reported the prevalence of chronic nontraumatic PAP involving the lower extremities. Studies of patients with traumatic amputations alone were excluded because of the risk that treatment of ongoing trauma-related conditions could adversely influence or obscure the identification of PAP. Thus, the prevalence reported in this article may not be applicable to populations of patients with traumatic PAP or populations of patients with upper-extremity PAP and PAP related to amputations of upper-extremity and lower-extremity digits. Second, only 2 studies reported the prevalence of PAP and no studies reported the prevalence of RLP subtypes. Ongoing research using the DPIG-PAPA taxonomy is needed to further investigate the prevalence of PAP and its subtypes. Third, the included studies were published between 1988 and 2019. Although subgroup and meta-regression analyses did not identify significant associations between year of publication and heterogeneity of pooled prevalence rates, it remains possible that advances in surgical technique, perioperative management, and rehabilitation strategies could have influenced the prevalence of PAP. Finally, most differences in the risk of bias were related to selection bias and, to a lesser extent, adequacy of ascertaining outcomes. Thus, these 2 key methodological shortcomings could have influenced the pooled prevalence rates reported in this systematic review.
In conclusion, this systematic review and meta-analysis demonstrate that the prevalence of PAP is high in patients with nontraumatic lower-extremity amputations, but the pooled prevalence rates were associated with high levels of heterogeneity. Aside from a subgroup analysis that suggested study design is a significant moderator of heterogeneity for PLP, other subgroup and meta-regression analyses did not yield significant sources of heterogeneity. Ongoing research that uses the DPIG-PAPA taxonomy is needed to fully delineate the prevalence of PAP and its subtypes. | 2021-05-13T05:17:43.697Z | 2021-05-04T00:00:00.000 | {
"year": 2021,
"sha1": "e9ee1b235e65e4a45867bbe57216ffb41770a90b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/pr9.0000000000000918",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e9ee1b235e65e4a45867bbe57216ffb41770a90b",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119110673 | pes2o/s2orc | v3-fos-license | Pion production in neutrino-nucleus collisions
We compare our pion production results with recent MiniBooNE data measured in mineral oil. Our total cross sections lie below experimental data for neutrino energies above 1 GeV. Differential cross sections show our model produces too few high energy pions in the forward direction as compared to data. The agreement with experiment improves by artificially removing pion final state interaction.
Introduction
In this contribution we present our results [1] for ν µ /ν̄ µ -induced one-pion production cross sections in mineral oil (CH 2 ) for neutrino energies below 2 GeV. These results are compared to the experimental data obtained by the MiniBooNE Collaboration [2,3,4].
Our calculational starting point is the pion production model at the nucleon level of Refs. [5,6], that we have extended from the ∆(1232) region up to 2 GeV neutrino energies by the inclusion of the D 13 (1520) resonance. Apart from the ∆(1232) already present in the model, the D 13 (1520) resonance gives the most important contribution in that extended energy region [7]. In-medium corrections in the production process include Pauli-blocking, Fermi motion, and the modification of the ∆ resonance properties inside the nuclear medium. Not only the ∆ propagator is modified, but there is also a new pion production contribution (referred to as C Q in the following) that comes from the changes in the ∆ width in the nuclear environment. For pion final state interaction (FSI) we use a cascade program that follows Ref. [8] where a general simulation code for inclusive pion nucleus reactions was developed. When coherent pion production is possible we evaluate its contribution using the model in Refs. [9,10]. Due to lack of space, here we shall just show the results. For details we refer the reader to Ref [1]. Our results are qualitatively similar to those obtained by other groups [11,12].
Results and comparison with MiniBooNE data
We start by showing total cross sections for a given neutrino energy. In the left panel of Fig. 1 we compare with MiniBooNE data our results for π + production in a charged current (CC) process. Our cross sections are below data for neutrino energies above 0.9 GeV. The contribution from the D 13 resonance only plays a role above E ν = 1.2 GeV, making up some 8% of the total at the highest neutrino energy. The C Q term contributes for all energies, being around 8% of the total. Similar results are obtained for a final π 0 (right panel). In Fig. 2 we show the effects in our results of changing the value of the dominant axial nucleon-to-Delta form factor within the uncertainties in its determination in Ref. [6]. A larger value than the central one we use (C_5^A(0) = 1) seems to be preferable in the high-energy region.
In Fig. 3 we compare results, convoluted with the neutrino flux of Ref. [2], for the differential dσ/dT_π cross section for CC 1π + production by ν µ . We disagree with data for T π above 0.15 GeV. The agreement improves if we artificially remove FSI (see right panel). Also in the right panel we show the effects of not including the C Q or D 13 contributions. By neglecting the C Q contribution the cross section decreases by some 10% around the peak at T π = 0.08 GeV. The D 13 plays a very minor role since the neutrino flux peaks at around 600 MeV.
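As a minimal illustration of what "convoluted with the neutrino flux" means in practice, the sketch below averages a differential cross section over a flux by numerical integration; both the flux shape and the toy dσ/dT_π parametrization are invented placeholders, not the model or the MiniBooNE flux of Refs. [1,2].

```python
# Flux-averaged differential cross section:
#   <dsigma/dT_pi> = Int dE Phi(E) dsigma/dT_pi(E, T_pi)  /  Int dE Phi(E)
# Toy flux and toy cross-section shape, for illustration only.
import numpy as np

E = np.linspace(0.2, 2.0, 200)            # neutrino energy grid [GeV]
phi = E * np.exp(-E / 0.6)                # hypothetical flux shape, peaks near 0.6 GeV

def dsigma_dT(E_nu, T_pi):
    """Placeholder differential cross section [arbitrary units]."""
    return np.maximum(E_nu - 0.3, 0.0) * np.exp(-((T_pi - 0.08) / 0.1) ** 2)

T_pi = np.linspace(0.0, 0.4, 80)          # pion kinetic energy grid [GeV]
num = np.trapz(phi[:, None] * dsigma_dT(E[:, None], T_pi[None, :]), E, axis=0)
flux_avg = num / np.trapz(phi, E)         # flux-averaged dsigma/dT_pi on the T_pi grid
print(f"peak of flux-averaged distribution at T_pi ~ {T_pi[np.argmax(flux_avg)]:.2f} GeV")
```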
Differential dσ/dp_π and dσ/d cos θ_π cross sections for CC 1π 0 production by ν µ are shown in Fig. 4. For their evaluation we take the neutrino flux from Ref. [3]. Our model agrees with data for pion momenta below 0.2 GeV/c but it produces too few pions in the momentum region from 0.22 to 0.55 GeV/c. As seen from the angular distribution, those missing pions mainly go in the forward direction. The effects of ignoring FSI are also shown in both panels.
In Figs. 5 and 6 we present the results for neutral current (NC) production, which we compare with data from the MiniBooNE collaboration [4]. In each case we use the ν µ /ν̄ µ fluxes reported in Ref. [4]. Fig. 5 shows the different contributions to the dσ/dp_π differential cross section. Our results show a depletion in the 0.25-0.5 GeV/c momentum region, though the agreement is better than in the CC case. The results agree with data if one neglects FSI. Looking now at the differential dσ/d cos θ_π cross sections shown in Fig. 6, one can see that our results agree better with data in the antineutrino case, where we are within error bars except in the very forward direction. A clear deficit in the forward direction is seen for the reaction with neutrinos, but the agreement is better than in the corresponding CC reaction. In both cases, the coherent contribution is shown to be very relevant in the forward direction. Once more, if one artificially switches off FSI effects we get good agreement with data. | 2013-10-16T10:08:02.000Z | 2013-10-16T00:00:00.000 | {
"year": 2013,
"sha1": "1b98e2b51c312d5b0ddf4dc43945db35d72442cb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1b98e2b51c312d5b0ddf4dc43945db35d72442cb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
6441650 | pes2o/s2orc | v3-fos-license | Particle-antiparticle asymmetries from annihilations
An extensively studied mechanism to create particle-antiparticle asymmetries is the out-of-equilibrium and CP violating decay of a heavy particle. Here we instead examine how asymmetries can arise purely from 2<->2 annihilations rather than from the usual 1<->2 decays and inverse decays. We review the general conditions on the reaction rates that arise from S-matrix unitarity and CPT invariance, and show how these are implemented in the context of a simple toy model. We formulate the Boltzmann equations for this model, and present an example solution.
I. INTRODUCTION
Cosmological observations have shown Ω DM ≈ 5Ω B ≈ 0.24, where Ω DM (B) is the dark matter (baryon) density divided by the critical density [1,2]. However, current physics cannot explain what makes up Ω DM , why the baryon asymmetry of the universe (BAU) and hence Ω B is non-negligible [3], or indeed why Ω B ∼ Ω DM . A baryogenesis mechanism satisfying the Sakharov conditions (violation of baryon number, violation of charge conjugation (C) and charge parity (CP) symmetries, and a departure from thermal equilibrium) is required to explain the BAU [4]. A similar asymmetry may also exist in the DM sector. In fact, asymmetric DM (ADM) scenarios seek to explain Ω B ∼ Ω DM as resulting from n B ∼ |n X − n X̄ |, where n B is the baryon number density and n X (n X̄ ) is the DM particle (antiparticle) density [5][6][7][8]. Understanding possible mechanisms for creating particle-antiparticle asymmetries is therefore crucial if we are to understand the cosmological history of the universe at the earliest times.
In well-known scenarios of baryogenesis, a matter-antimatter asymmetry is created by the out-of-equilibrium decay of a heavy particle [9][10][11][12]. Similar mechanisms have been applied to ADM scenarios [13]. The decays must be CP violating for a preference of matter over antimatter to be created. Furthermore, the asymmetry can only be created once the decaying particle has departed from thermal equilibrium, because S-matrix unitarity ensures no net preference for particle over antiparticle states can occur in equilibrium. Such scenarios have been studied extensively.
In contrast, there has been much less focus on asymmetries created from annihilations. Again, due to unitarity, one or more of the particles involved in the annihilation must go out of thermal equilibrium for an asymmetry to be generated [14][15][16]. This is the case in WIMPy baryogenesis, for example, in which heavy neutral particles freeze out and become the DM density and at the same time create the BAU through their annihilations [17][18][19][20][21]. The effect of 2 ↔ 2 annihilations has also been investigated in the context of resonant leptogenesis [22]. In this case, it was found that the annihilations change the asymmetry at high temperature but have only a negligible effect on the final asymmetry. However, there is no reason to expect this feature to hold for baryogenesis in general.
The effect of annihilations is therefore interesting from, at least, the perspective of baryogenesis. The WIMPy baryogenesis mechanism also explains the DM density, but with no asymmetry between DM particles and antiparticles. However, it may be possible to construct an ADM model in which such annihilations play a role: this paper is a first step towards such a goal. 1,2 The purpose of this paper is to provide a general framework for models which seek to create particle-antiparticle asymmetries from annihilations. While certain aspects of such mechanisms are necessarily model dependent, other considerations, such as the unitarity relations and the construction of the Boltzmann equations, are generic. Our focus in this paper is on examining asymmetries from annihilations alone; in future work we will examine scenarios in which decays and annihilations compete in creating the final asymmetry.
The structure of the paper is as follows. In the next section we review S-matrix unitarity and its implications for the CP violating reaction rates of annihilations. We then study a toy model involving the interaction between four fermions. We outline the Boltzmann equations for the model and show that a non-zero source term develops when one or more of the species depart from equilibrium. We calculate the relevant thermally averaged cross sections and solve the Boltzmann equations numerically.
Unitarity of the S-matrix (S†S = SS† = 1) together with invariance under charge parity time (CPT) implies for the usual invariant matrix elements: where α is an arbitrary state, ᾱ its CP conjugate, and the sum runs over all possible states β. Consider the collision term in the Boltzmann equations for the transition of a set of particles α i , where i = 1, ..., n, to and from a set of particles β j , where j = 1, ..., m. Let us denote the integrated collision term for transitions α → β in chemical equilibrium as W (α → β). Approximated using Maxwell-Boltzmann statistics, the net collision term is related to the matrix elements by [27]: where f ψ = exp[−(E ψ − µ ψ )/T ] is the phase space density of species ψ with chemical potential µ ψ at energy E ψ , is the normalized volume element of the three momenta, g ψ are the degrees of freedom, and we assume throughout kinetic equilibrium so that the temperature (T ) of each species is identical. Under chemical equilibrium we have in addition, Chemical equilibrium and the delta function enforcing four-momentum conservation allow the replacement: under the integral sign in Eq. (2). Using the replacement in Eq. (5) and taking the sum over all possible final states one finds [28]: where the second line follows from CPT invariance. Equation (6) means there must be a departure from thermal equilibrium for a baryon asymmetry to be produced (the third Sakharov condition). We note that the same result holds for full quantum statistics. The collision term and phase space densities are modified to take into account quantum statistics [27], but the unitarity condition is also modified: where Sβ = (1 + Θ β1 f β1 )...(1 + Θ βm f βm ) and Θ ψ = ±1 for Bosons (Fermions) [10,25]. Taking the sum over β for the collision term, one finds Eq. (6) also holds for full quantum statistics [9]. It is the unitarity of the S-matrix together with CPT invariance which elegantly ensures there are no spurious departures from the usual equilibrium phase space densities even if individual matrix elements are not invariant under time reversal [31,32]. We will apply this unitarity constraint below so as to correctly relate the CP violation in the reaction rates which enter the Boltzmann equations.
III. TOY MODEL
Consider the interaction Lagrangian: where the Ψ and f are Dirac fermions and the κ i and λ i are effective couplings with mass dimension -2.
The above Lagrangian violates the particle numbers associated with Ψ 1 , Ψ 2 and f but preserves the linear combination ∆(Ψ 1 + Ψ 2 − f ).We will show how these interactions will generate an asymmetry in the f sector and a related asymmetry in the Ψ sector, ∆(f ) = ∆(Ψ 1 +Ψ 2 ), through 2 ↔ 2 processes.The last three interaction terms break the particle numbers associated with Ψ 1 and Ψ 2 individually but preserve ∆(Ψ 1 + Ψ 2 ).These latter interactions must be included to allow CP violation to arise in the interference between tree and loop level diagrams.Majorana masses are prohibited by the global symmetry of the Lagrangian ∆(Ψ 1 + Ψ 2 − f ) = 0.
We assume f are in thermal equilibrium with the radiation bath and that Ψ 1 and Ψ 2 are coupled to the radiation bath only through their interactions in the above Lagrangian.The asymmetries are generated during the time when the Ψ particles are going out-of-equilibrium.
The above Lagrangian includes four physical phases in the couplings. CP violation arises in Ψ number changing interactions of the form Ψ i Ψ j → f f in the interference between the tree-level and one-loop-level diagrams such as those depicted in Fig. 1.
FIG. 1. Tree and one-loop diagrams for the annihilation Ψ 1 Ψ 1 → f f .
We define the equilibrium reaction rate density (which will enter as a collision term in the Boltzmann equation) for the annihilation Ψ 1 Ψ 1 → f f as W eq (Ψ 1 Ψ 1 → f f ) = n eq Ψ1 n eq Ψ1 ⟨vσ(Ψ 1 Ψ 1 → f f )⟩, where the thermally averaged cross section comes from integrating over the phase space densities: where n eq αi (f eq αi ) is the number (phase space) density in the absence of a chemical potential. We have parametrized the CP violation in the following way: hence time-reversed rates can be found by making the substitution a 1 → −a 1 . The other CP violating interactions are denoted: CP conjugate rates can again be found by substituting a i → −a i . The unitarity conditions yield: We have checked that the CP violating rates calculated in terms of the underlying parameters of the Lagrangian do indeed respect these unitarity conditions. Washout interactions of the form Ψ i f → Ψ j f must also be taken into account. Furthermore, sufficiently rapid interactions of the form Ψ i Ψ j ↔ Ψ k Ψ l relate the chemical potentials of Ψ 1 and Ψ 2 ; these are also included in our numerical solutions below. These are denoted as: We take the Ψ 2 mass greater than the Ψ 1 mass (M Ψ2 > M Ψ1 ) and consider the decays of Ψ 2 . A priori, Ψ 2 may have two decay channels: where the γ i denote the CP odd component. Unitarity implies γ a Γ 2a = −γ b Γ 2b . Here we kinematically forbid the second decay channel, ensuring no CP violation is possible in the Ψ 2 decays. The remaining decay width is given by: where we have ignored the final state masses. (We include the final state masses and the Lorentz factor suppression resulting from the thermal average in our numerical solutions.)
IV. BOLTZMANN EQUATIONS
We can now write down the Boltzmann equations using the usual approximation of Maxwell-Boltzmann statistics. The use of Maxwell-Boltzmann statistics allows one to factor out the chemical potential of a species from the collision term. The nonequilibrium rate is then simply the equilibrium rate multiplied by the ratio of the number density to the equilibrium number density of the incoming particles. For notational clarity we define the ratio of the number density to the equilibrium number density as r i ≡ n i /n eq i . We assume f and f̄ are in thermal equilibrium with the SM radiation bath so µ f = −µ f̄ . We find the Boltzmann equations for n 1 , n 2 , and the asymmetries n ∆1 ≡ n 1 − n̄ 1 and n ∆2 ≡ n 2 − n̄ 2 in terms of the CP even and odd interaction rates. This results in a system of four coupled first-order ordinary differential equations. The equations take the form: dn/dt + 3Hn = (source terms) + (washout terms), (25) where the source terms can create an asymmetry once one or more species depart from equilibrium and r i ≠ 1, while the washout terms drive towards equilibrium and wash out any asymmetries present. For example, the equation for n ∆1 has washout terms: The source terms for n ∆1 are: By the application of the unitarity conditions (18)(19)(20) these terms can only generate asymmetries, n ∆1 ≠ 0, when the distributions of the Ψ particles depart from equilibrium: r i ≠ 1. We proceed to solve the Boltzmann equations numerically. The standard change of variable is made to express the equations in terms of temperature rather than time. We calculate the relevant cross sections and find the thermally averaged cross sections numerically by making use of the single-integral formula [33]: ⟨σv⟩ = [g i g j T / (32π 4 n eq i n eq j )] ∫ ds 4p ij 2 σ(s) √s K 1 (√s/T), with the integration running over s from (m j + m i ) 2 up to Λ 2 , where s is the centre-of-mass energy squared, p ij is the initial centre-of-mass momentum, K 1 (x) is the modified Bessel function of the second kind of order one and Λ is the effective theory cut-off. Having calculated the reaction rates and CP violation, we then solve the system of coupled Boltzmann equations using Mathematica [34]. An example solution is shown in Fig. 2. The thermal history proceeds as follows. At high temperatures the 2 ↔ 2 annihilations keep Ψ 1 and Ψ 2 in thermal equilibrium and no asymmetry can develop. As the particles freeze out and approach the point where the equilibrium distributions become Boltzmann suppressed, the source terms in the Boltzmann equations become non-negligible and the asymmetries grow. Eventually the heavier Ψ 2 decays into Ψ 1 and the final ∆(Ψ) asymmetry is stored in Ψ 1 . Due to the different masses, couplings and phases, the asymmetries created in Ψ 2 and Ψ 1 are different, and hence the eventual decays of Ψ 2 do not wash out the overall ∆(Ψ) asymmetry.
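As a rough illustration of how the single-integral formula quoted above can be evaluated numerically, the sketch below computes a thermally averaged cross section for a toy σ(s); the masses, temperature, cutoff, and cross-section shape are placeholder values, and the original work used Mathematica rather than Python.

```python
# Thermally averaged cross section via a single-integral (Gondolo-Gelmini-type) formula:
#   <sigma v> = g_i g_j T / (32 pi^4 n_i^eq n_j^eq)
#               * Integral_{(m_i+m_j)^2}^{Lambda^2} ds 4 p_ij^2 sigma(s) sqrt(s) K_1(sqrt(s)/T)
# Toy inputs for illustration only.
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

def n_eq(g, m, T):
    """Maxwell-Boltzmann equilibrium number density (zero chemical potential)."""
    return g / (2 * np.pi**2) * m**2 * T * kn(2, m / T)

def p_cm_sq(s, mi, mj):
    """Squared centre-of-mass momentum of the initial pair."""
    return (s - (mi + mj)**2) * (s - (mi - mj)**2) / (4 * s)

def sigma_toy(s):
    """Placeholder cross section; replace with the model's sigma(s)."""
    return 1e-3 / s

def sigma_v_avg(mi, mj, gi, gj, T, Lam):
    # Note: the Boltzmann suppression makes the integrand sharply peaked near
    # threshold, so the upper limit and quadrature settings may need care.
    integrand = lambda s: 4 * p_cm_sq(s, mi, mj) * sigma_toy(s) * np.sqrt(s) * kn(1, np.sqrt(s) / T)
    integral, _ = quad(integrand, (mi + mj)**2, Lam**2, limit=200)
    return gi * gj * T / (32 * np.pi**4 * n_eq(gi, mi, T) * n_eq(gj, mj, T)) * integral

# Example with placeholder masses 1.0 and 1.2, T = 0.1, cutoff Lam = 3.0 (same mass units)
print(sigma_v_avg(mi=1.0, mj=1.2, gi=2, gj=2, T=0.1, Lam=3.0))
```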
Note that a large symmetric component of Ψ 1 is still present: |Y ∆1 | ≪ Y 1 . In a realistic model, so as not to overclose the universe, the symmetric component should be annihilated away. This can be achieved by introducing an interaction of the form Ψ 1 Ψ̄ 1 → f f̄ . Alternatively, Ψ 1 and Ψ̄ 1 could eventually decay. The asymmetry can then be stored in the decay products. These could be regular baryons or, if they make up the DM and have a sufficiently large annihilation cross section to annihilate away the symmetric component, form asymmetric DM [35][36][37].
V. CONCLUSION
We have presented a generic setup for the generation of particle-antiparticle asymmetries from 2 ↔ 2 processes, such as annihilations or scatterings. This is to be contrasted with the more well-known scenario in which such asymmetries are generated via 1 → 2 out-of-equilibrium decays. We have explicitly outlined how the Boltzmann equations should be formulated, taking S-matrix unitarity and CPT invariance into account. We have also presented an example numerical solution to the Boltzmann equations in the context of a simple toy model. Such techniques can be applied in the calculation of particle-antiparticle asymmetries in models of baryogenesis and ADM, as will be the focus of our future work. | 2014-11-03T04:41:26.000Z | 2014-07-17T00:00:00.000 | {
"year": 2014,
"sha1": "90b1923b0f442d334655a38d0e3ad7ccd33e16a1",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1407.4566",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "90b1923b0f442d334655a38d0e3ad7ccd33e16a1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
252524059 | pes2o/s2orc | v3-fos-license | Evaluation System of Mobile English Learning Platform by Using Deep Learning Algorithm
At present, China's economic development continues to progress and foreign exchanges are increasingly frequent, so learning to master English, the world's common language, has become ever more important. However, people's living habits make it difficult to carry heavy desktop devices for long periods, so a mobile English learning platform meets the development needs of English teaching. Based on the existing English mobile learning platform, this paper puts forward the concept of integrating artificial intelligence and deep learning technology into it. Through deep learning, the learning status and learning situation of students in the process of English learning are extracted, so as to analyze the needs and learning interests of each student and then push corresponding English materials to each student, which improves the efficiency of English learning. In addition, deep learning can also model data on students' behaviors and build a language vector feature extraction mechanism and a translation quality evaluation model, so as to provide intelligent auxiliary correction of students' English grammatical expressions and spoken English. The research in this paper has achieved good results in practice. The results show that the integration of deep learning into the existing mobile English teaching platform can optimize the functions of the existing platform, provide more ideas for the development of online English learning, and has good theoretical and practical value.
Introduction
The term "deep learning" comes from the machine learning field and refers to a form of neural network that builds and simulates the human brain for analysis and interpretation.
This deep learning network is extremely prominent in some complex problems. In the final analysis, it can strongly simulate the neural sensing system of the human brain for data analysis. This kind of deep learning algorithm has been tested in many industries. In particular, the development and technological progress of the image processing and computer industries have promoted the innovation of deep learning algorithms, and their complexity is no longer a problem. Therefore, the research and application of deep learning algorithms in the study of English learning methods can also greatly improve the data and information processing ability for English, improve the information processing efficiency of learners, and improve the overall user experience of learners. In English learning, many people use repeaters, MP3 and MP4 players, and mobile phone software to learn English at any time, but most of these facilities cannot fully achieve the purpose of learning. They can merely realize the functions of search and follow-along reading, and there is no way to intuitively give English learners guidance and advice or support follow-up learning. Moreover, due to the limitations of technical conditions, many network systems focus only on the spelling of words and on English grammar. Only one or two learning indicators are tested, so the functionality is incomplete and English learners are unable to intuitively feel their learning progress. However, due to the differences in English learners' levels, it is difficult for them to correct themselves in time through the errors prompted on the client. On the other hand, the final test of English is also subjective, and manual scoring is dominant for pronunciation standards and grammatical errors. In manual scoring, because experts have different levels and experience, the same expert may even give different scores, so there are many deviations. Expert scoring of this kind also consumes a lot of human and material resources, so it is not well suited for English learners.
In order to avoid the above shortcomings, we apply deep learning to the construction of English learning methods and establish a learning method model based on a deep learning algorithm, so as to improve the accuracy and speed of English learning [1]. In addition, the traditional English learning methods have been improved, and a reasonable and objective English learning method model has been established considering the pronunciation, grammar, composition, and other parameters involved in learning English [2,3]. This research will produce a number of original results, such as new English learning methods. In the future, the research results can be used in many applications, such as human-computer interactive English learning training and evaluation [4]. Giving corresponding guidance for error information in the English teaching mode can effectively enable learners to learn English well. In addition, the research results can effectively solve problems that arise in teaching and improve the current situation of learners' English learning [5].
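To make the idea of a deep-learning-based learning method model more concrete, the sketch below shows one possible (hypothetical) formulation: a small neural network that maps per-learner feature vectors covering pronunciation, grammar, and composition indicators to a proficiency score that could drive feedback and material recommendation. The feature layout, network shape, and training target are illustrative assumptions, not the model actually built in this work.

```python
# Hypothetical sketch: a small feed-forward scorer over pronunciation, grammar,
# and composition features, trainable against teacher-assigned scores.
import torch
import torch.nn as nn

class ProficiencyScorer(nn.Module):
    def __init__(self, n_features: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid(),   # score in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Toy training loop on random placeholder data (real data would be learner logs).
model = ProficiencyScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

features = torch.rand(64, 12)            # 64 learners x 12 assumed indicators
teacher_scores = torch.rand(64)          # placeholder teacher-assigned scores

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), teacher_scores)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```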
Related Work
Aiming at the problem that some English songs may have melody changes that make recognition difficult, the literature designed melody factors, constructed a prosody model, and added melody recognition to the original monotonic one-tone and three-tone models, thus improving the accuracy of speech recognition [6,7]. In view of the fact that speech quality is difficult to guarantee in noisy environments, the literature puts forward an improvement of the target speech acoustic model confusion [8]. In view of the details of phoneme pronunciation, the literature introduced the GMM model to model and sort out the feature distribution of sonic speed in a more detailed way, so as to address the details of English pronunciation recognition more specifically and improve the efficiency and accuracy of English pronunciation recognition [9]. In the literature, a new computational strategy is introduced for the grading of English speech, so as to further narrow the gap between the grades proposed by machine-assisted correction and those given by teachers' manual marking, so that the machine can learn the grading rules more accurately and improve the performance of the computer-aided learning system [10]. In terms of the design of English mobile teaching platforms, the literature has proposed that the current English mobile teaching platform mostly takes the form of a database [11]. The course is sorted out in advance by teachers and platform staff, and the materials are collected on the online platform for students to use [12]. According to the literature, the current mobile English teaching platform is essentially the use of social software in the field of learning, and learning is essentially completed through interpersonal communication [13]. The literature summarizes previous research experience and proposes that the current mobile English teaching platform is actually a new application, in the field of education, of the traditional social network carried by the Internet [14]. The existence of the Internet provides teachers and students with more technical support, and it is essentially a new medium [15]. According to the literature, under the current development status of artificial intelligence, traditional social networks have made new progress, and artificial intelligence technology should also be introduced into the design of English mobile teaching platforms [16]. However, there is still a lack of application of relevant technologies in this direction, so it is necessary to learn and master relevant artificial intelligence technologies as much as possible and optimize the existing mobile English teaching platform through machine learning [17,18].
Design of Mobile English Learning Platform
3.1. System Architecture Design. At present, the system architecture of the mobile English learning platform is still a common three-tier application service structure.
The overall block diagram of the system structure is shown in Figure 1.
The intelligent control layer includes commonly used mobile phones or laptops. The mobile smart client controls the service management layer through the built-in fixed access system and application server and manages the entire system server by logging in through the browser.
The focus of this system research is to realize the functions of the whole mobile intelligent client. After the installation of the mobile intelligent device system, the client preinstalls the logic functions of the whole client business.
This application is only one end of the server. The focus of the system is on the request function of the mobile smart client to send business logic. The design requirements of the whole server system should be combined with the interaction of the data server to interact and transfer data between the system client and the data server. All data should be processed and transferred by the unified server; that is, after receiving the data request from the client, the server should transmit and send the data of the application operation. The data information processed in the database should also be transmitted to the server terminal for system processing and unified transmission. Finally, the results should be screened and displayed in the mobile intelligent data terminal.
System Function Analysis.
The mobile client of this system is simply a smartphone or tablet computer using a 5G mobile network.
This kind of smart device has the characteristics of convenient carrying and light use. Through the whole mobile intelligent client platform, the system client can use its spare time every day to learn interesting knowledge. If the English learning materials and services provided by the system are not satisfactory, users can go to the customer service terminal for demand analysis and the push of more suitable learning materials.
In terms of technology, under the current network technology environment in China, the 5G network has been widely spread to medium and large cities and regions. Moreover, 3G network technology has a fast Internet speed, which is unmatched by ordinary broadband networks. Therefore, it is convenient and effective to use the 5G network to transmit data. This system also has an additional online video-on-demand function, which is not a small breakthrough for English learning software in a 4G environment. This is because it greatly improves the efficiency of English learning for users of the English learning system. Video media can be seen throughout the English learning system, which makes the whole English learning process more intuitive. The addition of an audition function is more conducive to the learning and practice of English knowledge.
However, in terms of other business functions and service requirements, this system also covers the functional requirements of most users for learning English, so that the English learning system can better meet the needs of potential customers. After analyzing and comparing many users of professional English learning, several aspects were added to the English learning system.
Online Simulated English Test Function.
Under the English level test function, the client of this English learning system can screen several popular English level tests at home and abroad and simulate the learning process under the test environment. The popular English tests at home and abroad include CET-4 and CET-6, the public English level test, the Cambridge Business English test, IELTS, and TOEFL. The background of the English system will push the real questions over the years and conduct online simulation tests according to the test content selected by the candidate, judge and analyze the test results, give standard answers, and uniformly explain the wrong answers. Of course, this system design only provides the option of objective questions and is not open for subjective items such as composition questions.
English Short Plays, Movies, and Other Video-on-Demand Functions.
For potential client users who need to strengthen the practice of listening and speaking, this system also provides English film series for oral material practice. In the video content push, the system pushes different materials for users at different levels, including educational films, cartoons, English films, and other materials with slow, standard pronunciation of sentences, which lays a solid foundation for practitioners in English listening and speaking. For some basic system clients, it provides some high-end English film classics to learn.
Life and Work Situation Simulation Dialog Function.
For some English system learners who need to go abroad, it will provide some daily functional scene simulation environments, so that learners have an immersive sense of dialog and constantly cultivate the ability of emergency dialog. These scenes usually include restaurants, hotels, banks, stations, supermarkets, and airports and even provide common sentences such as alarm, inquiry, and thank you to learn and use.
Online Translation Function.
This function is aimed at newer English users, allowing them to carry out a certain amount of online translation of the provided words and sentences. This translation function is not only implemented through third-party translation functions but is also intended to facilitate use by English client users and spare them the trouble of jumping between software.
Mobile Terminal Architecture Protocol Design.
The mobile-end architecture protocol is an open chat technology, which can be used to customize any chat software. Although the protocol can provide many customized functions, many expanded functions need users to realize and experience by themselves. Among them, the basic function is to realize single-person or multiperson chat, as well as a personal data display function. This paper starts from the specific content of the mobile terminal architecture protocol and describes its functions, internal implementation principles, and the way of network data transmission.
Protocol Architecture.
The architecture used by the mobile-end architecture protocol is based on the form of a client-server cluster, which has multiple forms of interaction with each other. The following figure shows a simple system architecture.
The corresponding background is the open-source server that runs the whole service and completes the writing. It dominates a huge amount of information and has powerful functions. The amount of background data contained in this system is huge, so the amount of relevant client software is also increasing. If these data are subject to the mobile-end architecture protocol, the responsibility separation mode between the client and the server can be realized. In this way, system development researchers do not need to install complex processing logic, but only need to pay attention to how to write the operating system of the client to make it run at high speed. The Openfire server needs very professional logic processing ability and receives and processes data requests transmitted from different clients at the same time. Many clients will customize different personality modules according to their own service providers.
Therefore, the mobile client protocol has strong scalability requirements and is deeply loved by customers, as shown in Figure 2.
Network Communication Mode.
On the basis of mobile Internet communication, if one party wants to receive the information of the other party, it will generate a specific address as its identity mark. The self-symbol in the mobile architecture protocol is called the JID, which is a kind of running code. The JID is a kind of entity and unique sign, just as everyone's ID number is different. The JID format code and the e-mail address format are very similar. Like the e-mail address example@Jabber.org, the example here refers to the user's name, and the address after the @ symbol is the server address information. In the common mobile-end architecture protocol, the transmission format between the two mainly uses the idea of layering to split information. It is divided into three information elements, and the content labeled by each element is a file message, including the sender's information, the discoverer's avatar, and the sending content. The presence tab is mainly used to obtain whether the user is online, as well as the list of relevant friends and other information. After we get the communication and analyze it, we can get the useful information we want. At this stage, the background service manager uses an Alibaba Cloud virtual server to transfer the corresponding Apache server and MySQL database to the cloud server. The corresponding website of the cloud server is www.bigtreecom. Download the latest Openfire server and configure it in the background. Local services can be provided in the whole test phase. On this basis, database information needs to be selected and connected with the MySQL database.
Mobile
The program development of the mobile client is operated on the Apple system, but because the Apple system and the Android system are not interconnected, the commercial operating system developed by Apple is adopted instead of the Android system. The iOS software development kit, known as the iPhone SDK, is a software development kit specially established and developed by Apple to develop iOS applications.
The first development was in early 2008. After its release, the software development package could only be hosted on iOS or Mac OS operating systems, and other operating systems cannot host and run it. This system is not open to the public and must be operated on an Apple system. When developers develop software, they can only distribute it for download and use by other users through the application store where they are located. Developers need to pay a certain fee to release an application. Therefore, developers can freely customize their own price in the application software. The tool for developing iOS programs is Xcode, which is the only software for developing iOS programs. The software can not only develop iOS applications but also develop computer applications. Apple generally releases two versions at the same time when it releases the system: one is a stable version that has been tested many times, and the other is a beta version for developers.
The advantage of the two versions is to let everyone debug the vulnerabilities under this version and increase the stability of the system.
Application of Deep Learning Algorithm in the System
Because the direct processing of the product term in the formula is relatively cumbersome, and the function ln x is strictly monotonic, maximizing L(θ) is equivalent to maximizing ln L(θ).
English Learning Level Evaluation Algorithm.
English learning level usually tests the speed of oral pronunciation, which is reflected in the speed of speaking intonation when learning English. It can also be calculated by computing the change of syllable length in unit time, or by the length of the pause between two English words. Due to individual differences in speaking, different people have certain differences in the pronunciation of different sentences. Moreover, different emotional states of speaking also affect the effect of sentences. For example, in the state of anger and happiness, the sentences expressed are slightly gentle, while in the state of sadness, the sentences are slightly slow.
This paper studies the change of English sentence length by calculating the duration ratio φ, as shown in formula (2), where Len_std is the standardized parameter duration and Len_test is the duration of the test statement. For further setting and comparison of data, see Figure 3.
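Formula (2) did not survive text extraction. A plausible reconstruction from the two quantities defined above (assuming the ratio compares the test-sentence duration against the standardized duration; the exact arrangement in the original cannot be verified) is:

\varphi = \frac{\mathrm{Len}_{\mathrm{test}}}{\mathrm{Len}_{\mathrm{std}}}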
The pronunciation speed displayed by sentences can follow rules and cycles. Speaking rhythm can be divided into stress type, incomplete stress type, and stress type. In English learning, reading, and talking, rhythm combination patterns in different states alternate, and language rhythm follows different itineraries. Therefore, the English sentence rhythm evaluation mechanism is shown in Figure 4, and the specific steps are as follows:
Extract English Short-Term Energy Values to Form an English Intensity Curve.
The stress-scale characteristics in the sentence reflect the change of energy intensity: the greater the stress, the greater the change of English intensity in the stressed sections. The definition of the short-term energy of the English signal s(n) formed in this state is shown in formula (3). For short English sentences, the corresponding calculation mode can be formed. Each frame on the X-axis matches the frame between [y_min, y_max] on the Y-axis. The calculation of y_min and y_max is as follows, where D and d represent the cumulative distance and the frame matching distance, respectively. This paper uses the double-threshold comparison method to detect the accent endpoint and sets the threshold after data comparison, as in formulas (5) and (6). According to the time-length analysis of English learning, the improved dPVI parameter calculation formula is adopted to compare the length of fragments of complete English sentences and test sentences, and the converted parameters are processed systematically, as shown in formula (7). For the correlation-function algorithm, the similarity between the sound frame s(i), {i = 0, 1, 2, ..., n − 1} and itself is calculated, as shown in formula (9).
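The energy formulas referenced in the preceding paragraph (formulas (3) and (5)-(9)) are missing from the extracted text. For orientation only, the textbook definition of short-term energy for a windowed signal, which formula (3) appears to correspond to, is:

E_n = \sum_{m=-\infty}^{\infty} \big[\, s(m)\, w(n-m) \,\big]^2

where s(m) is the speech signal and w(\cdot) is a finite-length analysis window for frame n. This is the standard definition, not necessarily the exact expression used by the authors.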
English Writing Level Evaluation Algorithm.
The evaluation of the English writing level under this system is very different from the situation of English texts. It is one of the main purposes of the establishment of the English learning system. It is to process the text data transmitted from the front end, remove the classification under complex English texts, and select English words and sentences with low correlation. The corresponding English words and sentences are transformed into vector data in the computer processing mode, so that the classification of the English original text is in one-to-one correspondence.
The constructed co-occurrence matrix is expressed as equation (10). The formula of the GloVe model is as follows (10): v_i and v_j represent the word vectors of words i and j, respectively, b_i and b_j represent bias terms, f represents the weight function, and the number of words is represented by N. It can be seen that this model does not involve any derivation and training in the form of a neural network.
The frequency of word i co-occurring with other words in the sentence can be expressed as formula (12). Therefore, the generated word vectors have to go through the derivation and training of some function. The text-related information contained in this word vector is expressed as formula (13). The three variables are infinitely close, and the variance between them can be used as a function.
Considering the linear relationship between the two words and the constancy of the result, the brought-in function can be expressed as follows. The expanded formula follows from it. Letting b_i and b_j be deviation values, J satisfies the resulting formula. It can be seen that the GloVe model can show the relevance between two words well, and its actual expression effect is also higher than that of other models.
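The GloVe equations referred to above (the model formula (10), formulas (12) and (13), and the objective J) were lost in extraction. For reference, the standard GloVe objective from Pennington et al., which this derivation appears to paraphrase, is:

J = \sum_{i,j=1}^{N} f\!\left(X_{ij}\right)\left( v_i^{\top} \tilde{v}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

where X_{ij} is the co-occurrence count of words i and j, v_i and \tilde{v}_j are the word and context vectors, b_i and \tilde{b}_j are the bias terms, and f is the weighting function. Whether the authors used exactly this form cannot be confirmed from the extracted text.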
The quantitative expression of English text words and sentences is transformed into text sequences, and a hash search is carried out.
Application of Mobile English Learning Platform
Platform Test Results.
For the delay effect of the English learning platform test system, when English learners enter the interface and click a function key to request to send data, the time interval between data sending and receiving can be calculated under the test module of the mobile data terminal. By increasing the number of test users one by one, the curve formed by the delay time under multiple modules is shown in Figure 5. The results reflected on the graph show that the functions realized by the system affect the delay. According to the data analysis, the unit of delay is seconds. Therefore, there is almost no so-called lag and hysteresis. With the increasing number of users, the response time did not increase significantly and gradually became a flat trend.
The test results show the stability of the mobile platform system and achieve the effect of user satisfaction.
Application Effect Analysis.
In terms of specific experimental classification, in addition to the distinction between single-group and multiple-group experimental objects, there are also differences between pretest and post-test. In order to further study the application effect of this system, the paper adopts a controlled experiment. The experimental group adopted the online mobile English teaching platform based on deep learning proposed in this paper, while the control group adopted a traditional online English learning platform. After one semester of online course learning, the final scores of the two groups of students were compared, as shown in Table 1.
In this group of satisfaction tests, a data analysis form with four options is designed to test the difference in learners' overall satisfaction with setting up an English learning platform. The satisfaction test results are shown in Table 2 below. The test results show that, in the comparison of pretest and post-test, English learners' difference data were analyzed from the mean value and standardization, and the data results formed by the post-test are more prominent than those of the pretest. For the establishment effect of the teaching mode of the mobile English learning platform, the degree of satisfaction formed is more distinct.
This means that, in this environment, English learners moved from an initially unfamiliar attitude to love and identification after learning for a period of time. The changes in this period are significant in many aspects, such as cognition, emotion, ability, and behavioral effect. Among them, there is no obvious change in satisfaction with the English learning process. The main reason is dependence on traditional learning methods. Individual English learners believe that, although the establishment effect of the mobile English learning platform is very obvious, due to individual differences and differences in learning concepts, it is difficult to achieve a long-term, persistent learning effect, and there is a certain sense of laziness. From the perspective of this theoretical concept, we have further realized the responsibilities and obligations of educators. As long as English learners are educated in ideas and theoretical concepts, they should also form quality-oriented training in the face of many individualized paths of student development. Only by constantly updating the educational concept and innovating the teaching mode can we change the current English teaching quality and make students develop in an all-round way.
Conclusion
The core of building a mobile English learning platform is the research and evaluation technology of the English learning system, and the technology of the English learning system is the key. As learning English becomes more and more complex, English learners have a huge amount of data and information, and there are more characteristic parameters in the English learning industry. Therefore, the English learning system and the evaluative calculation involved are also huge, which makes the processing of information under the English learning system place higher demands on hardware and algorithms. The traditional English learning system algorithm and artificial algorithm have their own advantages and disadvantages and face different bottlenecks in their development, so it is difficult to judge their accuracy. In recent years, with the development of learning algorithm definitions and deep learning achievements, the technology of establishing English learning systems has developed rapidly. The deep learning algorithm is a nonlinear network structure, characterized by distributed information and data processing, and rationalizes the ability to show multiple features sorted out by the sample set. It is more excellent among learning algorithms that simulate the human brain and has played a prominent role in the development of the mobile English learning platform.
Figure 1: Overall block diagram of system structure.
Figure 2: Simple mobile terminal architecture protocol system architecture.
4.1. Foundation of Deep Learning Algorithm. A deep learning adjustment algorithm means that the learning adjustment parameters are θ = {w, a, b}, assuming that, given a training sample, the distribution probability of the corresponding deep learning calculation algorithm matches it under this condition. The sample set satisfying the distribution conditions is given as S = {v^(1), v^(2), ..., v^(N)}, and the goal of training the deep learning algorithm is maximized as shown in formula (1):
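Formula (1) is absent from the extracted text. Based on the description above (maximizing the probability of the sample set S under parameters θ, with the logarithm used because ln x is strictly monotonic), a plausible reconstruction of the training objective is:

L(\theta) = \prod_{i=1}^{N} P\!\left(v^{(i)}; \theta\right), \qquad \ln L(\theta) = \sum_{i=1}^{N} \ln P\!\left(v^{(i)}; \theta\right)

so that maximizing L(\theta) is equivalent to maximizing \ln L(\theta). This is the generic maximum-likelihood objective implied by the text, not a formula recovered from the original paper.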
Table 2: Overall satisfaction test of mobile English learning platform.
Table 1: Comparison of final grades between the experimental group and the control group. | 2022-09-26T15:03:19.213Z | 2022-09-24T00:00:00.000 | {
"year": 2022,
"sha1": "f3d4bf5a460c6d46f4f8d3d6b6b2837244677e7f",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/misy/2022/3849079.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7334ec8dc4d72a2c4731fdd8cb76e3be4451eba9",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": []
} |
258188331 | pes2o/s2orc | v3-fos-license | A dual gene-specific mutator system installs all transition mutations at similar frequencies in vivo
Abstract Targeted in vivo hypermutation accelerates directed evolution of proteins through concurrent DNA diversification and selection. Although systems employing a fusion protein of a nucleobase deaminase and T7 RNA polymerase present gene-specific targeting, their mutational spectra have been limited to exclusive or dominant C:G→T:A mutations. Here we describe eMutaT7transition, a new gene-specific hypermutation system, that installs all transition mutations (C:G→T:A and A:T→G:C) at comparable frequencies. By using two mutator proteins in which two efficient deaminases, PmCDA1 and TadA-8e, are separately fused to T7 RNA polymerase, we obtained similar numbers of C:G→T:A and A:T→G:C substitutions at a sufficiently high frequency (∼6.7 substitutions in 1.3 kb gene during 80-h in vivo mutagenesis). Through eMutaT7transition-mediated TEM-1 evolution for antibiotic resistance, we generated many mutations found in clinical isolates. Overall, with a high mutation frequency and wider mutational spectrum, eMutaT7transition is a potential first-line method for gene-specific in vivo hypermutation.
INTRODUCTION
Directed evolution is a powerful approach that mimics natural evolution to improve biomolecular activity (1,2). Traditional directed evolution relies on in vitro gene diversification such as error-prone PCR or randomized oligonucleotide pools (2). In contrast, continuous directed evolution (CDE) adopts in vivo hypermutation, allowing for simultaneous gene diversification, selection, and replication in cells; this technique significantly enhances the depth and scale of biomolecular evolution (3)(4)(5). As random mutagenesis in the genome is highly deleterious to cells, in vivo hypermutation methods should aim to introduce mutations in a relatively narrow region around the target gene (5).
The deaminase-T7RNAP system was first reported in bacteria (MutaT7) (17) and further extended to mammalian cells (TRACE) (18), yeast (TRIDENT) (21) and plants (22). We previously demonstrated that the mutation frequency of MutaT7 could be enhanced 7- to 20-fold with a more efficient cytidine deaminase, Petromyzon marinus cytidine deaminase (PmCDA1) (20). This PmCDA1 T7RNAP mutator (previously termed eMutaT7, but here renamed eMutaT7 PmCDA1) generated ∼4 mutations per 1 kb per day in Escherichia coli, representing the fastest gene-specific in vivo mutagenesis. The major limitation of eMutaT7 PmCDA1 is a narrow mutational spectrum: it mainly generates C→T mutations on the coding strand and, with the Shoulders group's dual promoter/terminator approach that induces transcription in both directions, introduces C→T and G→A mutations (C:G→T:A) (17,20). Mutations could be expanded to A→G and T→C mutations (A:T→G:C) with engineered tRNA adenosine deaminases, TadA-7.10 (11,19) and yeTadA1.0 (21), but they either had a mutation frequency much lower than eMutaT7 PmCDA1 (19), or presented C:G→T:A as dominant mutations (∼95%) in nonselective conditions when combined with PmCDA1 T7RNAP (21).
Here, we report on eMutaT7 transition, a new dual mutator system that introduces all transition mutations (C:G→T:A and A:T→G:C) at comparable frequencies. The eMutaT7 transition system uses two mutators, eMutaT7 PmCDA1 and eMutaT7 TadA-8e. The latter is the fusion of T7RNAP and a recently evolved E. coli adenosine deaminase, TadA-8e (23), which had much higher mutational activity than the previously evolved TadA-7.10 (11). We optimized the expression of the two mutators and a uracil glycosylase inhibitor, and demonstrated that the frequencies of the C:G→T:A and A:T→G:C mutations were not significantly different. Furthermore, overall mutation frequency was not markedly reduced. eMutaT7 transition also promoted rapid continuous directed evolution of antibiotic resistance with various transition substitutions, suggesting that it is a viable alternative for gene-specific in vivo hypermutation with an improved mutational spectrum.
Materials
All PCR experiments were conducted with KOD Plus neo DNA polymerase (Toyobo, Japan). T4 polynucleotide kinase and T4 DNA ligases were purchased from Enzynomics (South Korea). Plasmids and DNA fragments were purified with the LaboPass™ plasmid DNA purification kit mini and LaboPass™ Gel extraction kit (Cosmogenetech, South Korea). Sequences of all DNA constructs in this study were confirmed by Sanger sequencing (Macrogen, South Korea and Bionics, South Korea). Antibiotics (carbenicillin, chloramphenicol, kanamycin), arabinose, and isopropyl β-D-1-thiogalactopyranoside (IPTG) were purchased from LPS solution (South Korea). Streptomycin was purchased from Sigma Aldrich. Tetracycline was purchased from Bio Basic. Cefotaxime and ceftazidime were purchased from Tokyo Chemical Industry (Japan). H-p-Chloro-DL-Phe-OH (p-Cl-Phe) was purchased from Bachem (Switzerland).
All plasmids expressing variants of mutators or targets (mutation, deletion, and insertion) were constructed using the site-directed mutagenesis PCR method (25). Plasmids expressing eMutaT7 PmCDA1 and UGI in different conditions (deletion of UGI, an optimized ribosomal binding site (RBS) for UGI, or a constitutive promoter for UGI) were made on pHyo094. The sequence of the optimized RBS region is AACAGAGCGCGCTCTGTTTGAGTACTAGCAATAAATAAGGAGGATTTTTT (the underlined sequence indicates the RBS) (26). Plasmids harboring TadA-8e were made on pDae029. Plasmids expressing PmCDA1 TadA-8e T7RNAP with different linkers were constructed on pDae036.
For evolution of antibiotic resistance, a target plasmid (pGE158) was constructed from pHyo245, which contains the pheS A294G gene between dual promoter/terminator pairs in a low-copy-number plasmid (20): Ampicillin resistance gene in pHyo245 was replaced with tetracycline resistance gene and pheS A294G was replaced with the TEM-1 gene by IVA cloning. Tetracycline resistance gene was amplified from the plasmid pREMCM3 (27) and the TEM-1 gene was obtained from pHyo182 (20).
The W3110 ΔalkA Δnfi strain (cDJ085) and W3110 ΔlacZ::KanR-P T7 -gfp-T T7 (cDJ092) were constructed by the homologous recombination method (28). The alkA and nfi genes in W3110 were replaced with the streptomycin resistance gene and the kanamycin resistance gene, respectively. The lacZ gene in W3110 was replaced with the kanamycin resistance gene and the gfp gene. 30 µg/ml of streptomycin or kanamycin was used for selection. Proper gene deletion was confirmed by colony PCR using 2X TOPsimple™ DyeMIX-Tenuto (Enzynomics).
In vivo hypermutation
Three biological replicates of W3110 or the Δung strain (cHYO057) harboring a mutator plasmid and a target plasmid (pHyo182, pDae117, pDae118, and pDae119 for a single promoter) were grown overnight in LB medium with 35 µg/ml chloramphenicol and 50 µg/ml carbenicillin (cycle #0). On the following day, the overnight cultures were diluted 100-fold in a fresh LB medium supplemented with 35 µg/ml chloramphenicol, 50 µg/ml carbenicillin, 0.2% arabinose, and 0.1 mM IPTG in a 96-deep well plate (Bioneer, South Korea) and incubated at 37 °C with shaking (cycle #1). Bacterial cells were diluted every 4 hours and this growth cycle was repeated up to 20 times for accumulation of mutations. At the end of each cycle, a fraction of cells was stored at −80 °C with 15% glycerol. To identify mutations in the target gene, cells at cycle #20 were streaked on LB-agar plates with 35 µg/ml chloramphenicol and 50 µg/ml carbenicillin. Three or six colonies were randomly chosen for isolation of target plasmids. The target genes in the purified target plasmids were sequenced by the Sanger method. Mutations were counted in the region between 147 bp upstream and 138 bp downstream of the pheS A294G gene (total 1269 bp), malE gene (total 1389 bp), and gfp gene (total 1005 bp). Primers 314 and 315 were used for amplification and sequencing of the target gene that has a single promoter system.
PheS A294G suppression assay
Suppression frequency of the pheS A294G toxicity was determined as previously described (20). Cells obtained at the endpoint of each cycle (overnight culture for cycle #0) were diluted to OD600 ∼0.2. Serial 10-fold dilutions of cells (5 µl) using LB broth were placed on YEG-agar plates with or without additives (16 mM p-Cl-Phe, 0.2% arabinose, and 0.1 mM IPTG) and grown overnight at 37 °C. On the following day, the number of colonies in each condition was counted to calculate the suppression frequency. The suppression frequency was calculated as N1/N0 (N1: colony forming units (CFU) on the p-Cl-Phe plates and N0: CFU on plates without p-Cl-Phe).
Assays for cell viability and off-target mutagenesis
Cell viability and off-target mutagenesis were assayed as previously described (20). Overnight cultures of the cells harboring the plasmid expressing eMutaT7 TadA-8e, no mutator, or MP6 were diluted 100-fold in LB supplemented with 35 µg/ml chloramphenicol and grown to a log phase (OD600 = 0.2-0.5) at 37 °C. Cells were diluted to OD600 ∼0.2 and serial 10-fold dilutions of cells (5 µl) using LB broth were placed on LB-agar supplemented with 35 µg/ml chloramphenicol and 0.2% arabinose. After overnight growth at 37 °C, the number of colonies on the plates was counted to calculate CFU/ml. To evaluate the off-target mutagenesis via rifampicin resistance, cells taken at cycle #0 and cycle #20 were grown to log phase in LB supplemented with 35 µg/ml chloramphenicol and 50 µg/ml carbenicillin, and subjected to the viability assay on plates with or without rifampicin (50 µg/ml).
Fluctuation analysis
Fluctuation analysis was performed as previously described (29). Cells harboring a mutator plasmid (pDae079, eMutaT7 transition) and a target plasmid (pHyo182 for a T7 promoter; pDae120 for a constitutive promoter (BBa J23100)) were grown overnight in LB medium with 35 µg/ml chloramphenicol and 50 µg/ml carbenicillin. The cultures were diluted 1:10^6 with induction media containing 35 µg/ml chloramphenicol, 50 µg/ml carbenicillin, 0.2% arabinose and 0.1 mM IPTG, and divided into 32 wells (50 µl each) in a 96-deep well plate. This plate was sealed and incubated for 6 hours (pHyo182) or 16 hours (pDae120) at 37 °C with shaking. To assess the total cell counts, 8 cultures were resuspended and plated on a YEG-agar plate at the required dilutions. The remaining 24 cultures were resuspended using a pipette and placed on YEG-agar plates with additional ingredients (16 mM p-Cl-Phe, 0.2% arabinose, and 0.1 mM IPTG). Colonies on YEG-agar plates with or without additives were counted after an overnight incubation.
The Ma-Sandri-Sarkar (MSS) maximum likelihood method was used to compute the loss-of-function mutation rate (30), and the 95% confidence intervals (CIs) were calculated as previously described (29). The FALCOR webtool (https://lianglab.brocku.ca/FALCOR) was used with both of these methods (31). The calculated loss-of-function mutation rates serve simply as comparative estimates for per-base-pair mutation rates in our study.
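The MSS calculation itself was performed with the FALCOR webtool, whose internals are not described here. As an illustration of the underlying estimator, the sketch below implements the Luria-Delbruck probability recursion and a simple grid-search maximum-likelihood fit in Python; the colony counts and the total cell number N_t are hypothetical placeholders, and the final m/N_t conversion is only a first-order approximation rather than FALCOR's exact procedure.

import math

def luria_delbruck_pmf(m, r_max):
    # MSS recursion for the Luria-Delbruck distribution:
    # p0 = exp(-m); p_r = (m / r) * sum_{j=0}^{r-1} p_j / (r - j + 1)
    p = [math.exp(-m)]
    for r in range(1, r_max + 1):
        p.append((m / r) * sum(p[j] / (r - j + 1) for j in range(r)))
    return p

def mss_mle(mutant_counts):
    # Grid-search maximum-likelihood estimate of m (expected mutations per culture).
    r_max = max(mutant_counts)
    best_m, best_ll = None, float("-inf")
    for k in range(1, 2001):          # scan m from 0.01 to 20 in steps of 0.01
        m = 0.01 * k
        p = luria_delbruck_pmf(m, r_max)
        ll = sum(math.log(max(p[r], 1e-300)) for r in mutant_counts)
        if ll > best_ll:
            best_m, best_ll = m, ll
    return best_m

# Hypothetical example: p-Cl-Phe-resistant colony counts from 24 parallel cultures.
counts = [0, 1, 0, 3, 0, 0, 2, 0, 1, 0, 0, 5, 0, 0, 1, 0, 0, 0, 2, 0, 0, 1, 0, 0]
m_hat = mss_mle(counts)
N_t = 2.0e7          # hypothetical mean viable cells per culture (from the plated cultures)
rate = m_hat / N_t   # first-order estimate of loss-of-function mutations per cell per generation
print(m_hat, rate)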
High-throughput sequencing and data analysis
Cells taken at cycle 0 and cycle 20 were sequenced as previously described (20). Cells taken at cycle 0 (n = 1) and cycle 20 (n = 3) were grown in 15 ml of LB broth without arabinose and IPTG, and the plasmids were extracted with a Plasmid DNA Miniprep Kit. The 3288 bp DNA fragments containing the pheS A294G gene were amplified using primers 512 and 513, covering from 999 bp upstream of the T7 promoter to 1020 bp downstream of the T7 terminator. The 2 × 151 paired-end sequencing library was constructed using the TruSeq Nano DNA Kit and was sequenced using NovaSeq™ (Illumina; operated by Macrogen).
The quality of the sequencing data was checked with FastQC (v0.11.8). Raw reads were trimmed to remove adapter sequences and low-quality end sequences using Trimmomatic (v0.38) (32). Processed data were aligned to the reference sequence (3288 bp) using the Burrows-Wheeler Aligner (BWA v0.7.17) with MEM mode, and the BAM files generated by mapping were sorted using SAMtools (v1.9) (33,34). Sorted BAM files were subjected to SAMtools mpileup to obtain a pileup output with the maximum depth option, which was set as the total number of trimmed reads, and an output tag list option consisting of DP, DP4 and AD. Alleles for each locus were called using BCFtools (v1.9), which is a set of utilities of the SAMtools package, with the multiallelic-caller option. The allele count for each allele and the ratio (each allele count/total allele count) were calculated based on the AD information of the VCF files.
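The allele-ratio calculation described above was done from the AD values in the VCF files. A minimal sketch of that step is given below; the file name is hypothetical, and the parser assumes an uncompressed, single-sample VCF with AD present in the FORMAT column (as produced when AD is requested from bcftools mpileup), which may not match every detail of the authors' pipeline.

def allele_ratios_from_vcf(vcf_path):
    # Compute per-site allele ratios (allele depth / total depth) from the
    # per-sample AD field of an uncompressed, single-sample VCF.
    ratios = {}
    with open(vcf_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            fields = line.rstrip("\n").split("\t")
            chrom, pos, ref, alt = fields[0], int(fields[1]), fields[3], fields[4]
            fmt_keys = fields[8].split(":")
            sample_values = fields[9].split(":")
            if "AD" not in fmt_keys:
                continue
            ad_field = sample_values[fmt_keys.index("AD")]
            if "." in ad_field.split(","):
                continue
            ad = [int(x) for x in ad_field.split(",")]
            total = sum(ad)
            if total == 0:
                continue
            alleles = [ref] + alt.split(",")
            ratios[(chrom, pos)] = {a: d / total for a, d in zip(alleles, ad)}
    return ratios

# Hypothetical usage:
# site_ratios = allele_ratios_from_vcf("cycle20_replicate1.vcf")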
Statistical analysis
For high-throughput sequencing data (Figure 4 and Supplementary Figure S6), the Mann-Whitney test (unpaired Wilcoxon test) was used to assess the significance of the substitution frequency caused by the eMutaT7 transition system. Calculation was conducted using Stata (USA). Statistical significance was determined with P values; P < 0.05 was considered significant for this experiment. For other data, statistical analyses comparing groups in pairs were performed using the two-sided Mann-Whitney test (Figures 1-3, Supplementary Figures S1C, S3B, and S5B), without assuming that the data follow a normal distribution, or the two-tailed Student's t-test (Supplementary Figure S1B), assuming that the data follow a normal distribution. Calculation was conducted using GraphPad Prism 5. P < 0.05 was considered significant.
TEM-1 evolution and identification of the evolved mutants
TEM-1 evolution experiments were performed as previously described (20). Strains were grown in LB medium supplemented with 6 µg/ml tetracycline, 35 µg/ml chloramphenicol, 0.2% arabinose, and 0.1 mM IPTG. Cells were grown without selection pressure at the initial cycle. Then, multiple cultures were grown with different concentrations of an antibiotic (cefotaxime and ceftazidime) at the same time, and the culture grown at the highest antibiotic concentration (OD600 > 1) was used for the next round of evolution. After the final cycle, the target plasmids were purified and re-inserted into fresh W3110 cells harboring the T7RNAP-expressing plasmid (pHyo183) for validation of antibiotic resistance. Twelve colonies were randomly selected for MIC measurement and those with high MIC values (five colonies with 400-1600 µg/ml MIC for CTX, three colonies with 4000 µg/ml MIC for CAZ) were subjected to target gene sequencing by the Sanger method.
MIC determination
MIC values were measured as previously described (20). Cells were grown overnight in LB medium supplemented with 6 µg/ml tetracycline and 35 µg/ml chloramphenicol. They were diluted 10,000-fold into fresh LB broth with increasing concentrations of antibiotics (2-fold) in 96-deep well plates, and grown at 37 °C with shaking (290 rpm) overnight. Final cell density (OD600) was measured by an M200 microplate reader (TECAN, Switzerland).
RESULTS AND DISCUSSION
eMutaT7 TadA-8e promotes rapid gene-specific in vivo hypermutation
To date, TadA-8e is the most efficient TadA variant, presenting a rate constant (k_app) 590 times higher than that of the previous TadA-7.10, and has been successfully used for genome editing (23). To evaluate their efficiency in gene-specific in vivo hypermutation, we fused TadA-7.10 and TadA-8e to the N-terminus of T7RNAP, creating eMutaT7 TadA-7.10 and eMutaT7 TadA-8e, respectively (Figure 1A, 1 and 2). As in the previous characterization of eMutaT7 PmCDA1 (20), we expressed the mutator and induced hypermutation in the target gene, pheS A294G, which was inserted between the T7 promoter and T7 terminator in a low-copy-number plasmid. We determined mutational suppression of the pheS A294G toxicity by counting viable cells in the presence of p-chloro-phenylalanine (p-Cl-Phe), which is toxic to cells containing intact pheS A294G. We performed 20 rounds of in vivo hypermutation (4 h growth and 100-fold dilution into a new medium for a single round) without p-Cl-Phe and then sampled cells at different time points for the cell viability assay. We found that the suppression frequencies of eMutaT7 TadA-8e were several orders of magnitude higher than the eMutaT7 TadA-7.10 frequencies after 8 h, indicating that eMutaT7 TadA-8e induces gene-specific hypermutation much faster than eMutaT7 TadA-7.10 (Figure 1B).
To examine whether eMutaT7 TadA-8e generates mutations in the target gene, we randomly selected three clones from cells that had undergone 20 rounds of hypermutation and sequenced the target gene by the Sanger method. We also included as negative controls cells that had an empty vector, expressed TadA-8e without T7RNAP, or contained the eMutaT7 TadA-8e plasmid without induction (Figures 1A, 3-5). Notably, we found ∼6.7 substitutions per clone in the eMutaT7 TadA-8e-expressing cells, while eMutaT7 TadA-7.10-expressing cells and negative controls did not exhibit mutations (Figure 1C and Supplementary Figure S1A). This mutation frequency is much higher than that of eMutaT7 TadA-7.10 and only 2.4-fold lower than that of eMutaT7 PmCDA1 (20). Interestingly, we identified nine A→G (45%) and eleven T→C (55%) mutations on the coding strand, indicating that eMutaT7 TadA-8e causes mutations on both DNA strands (Figure 1C and Supplementary Figure S1A). We observed that eMutaT7 TadA-8e neither noticeably reduced cell viability (Supplementary Figure S1B) nor induced rifampicin resistance (Supplementary Figure S1C). This result suggests that eMutaT7 TadA-8e does not generate significant off-target mutations in the genome.
Deletion of genes associated with hypoxanthine repair does not significantly increase eMutaT7 TadA-8e activity
In the eMutaT7 PmCDA1 system, deletion of a gene encoding a uracil-DNA glycosylase (UNG) enhanced the mutation frequency (20). UNG removes uracil (deaminated cytosine) and initiates the base excision repair pathway (35). Likewise, we hypothesized that the deletion of genes encoding hypoxanthine (deaminated adenine)-removing enzymes would further increase the eMutaT7 TadA-8e-mediated mutation frequency. We prepared a strain in which two genes involved in hypoxanthine repair, nfi (36,37) and alkA (38), are deleted and analyzed eMutaT7 TadA-8e-mediated hypermutation (Figure 2A). Twenty rounds of targeted hypermutation revealed that the mutation frequency in the Δnfi ΔalkA strain did not increase significantly from the wild-type level (11 and 7.2 substitutions per clone on average, respectively) (Figure 2B and Supplementary Figure S2). Because a DNA repair enzyme often reduces the mutation rate by more than an order of magnitude (39)(40)(41)(42) and the construction of a gene deletion strain requires additional experimental steps, we concluded that the Δnfi ΔalkA strain has no obvious advantage over the wild-type strain for eMutaT7 TadA-8e. Similarly, no significant increase of mutations in the Δnfi strain was previously observed (19).
Optimized expression of uracil glycosylase inhibitor increases eMutaT7 PmCDA1 activity
Although we co-expressed a UNG inhibitor (UGI) with eMutaT7 PmCDA1 from the plasmid pHyo094, we did not obtain an efficiency level that matched the Δung strain (20). Proper UGI expression can greatly expand eMutaT7 PmCDA1 utility by avoiding the ung deletion. To enhance UGI activity, we initially tested a new constitutive promoter for ugi or a triply fused protein, UGI PmCDA1 T7RNAP. However, both were less efficient than the Δung strain (Supplementary Figure S3). Next, we optimized the ribosomal binding site (RBS) of ugi (26) (Figure 2C), and obtained a suppression frequency indistinguishable from that of the Δung strain (Figure 2D). Thus, we were able to avoid the ung deletion for efficient eMutaT7 PmCDA1-mediated mutagenesis.
Dual expression system introduces all transition mutations at comparable frequencies
We examined whether the two deaminases could simultaneously install both C:G→T:A and A:T→G:C mutations at similar frequencies. Initially, we tested two triple-fused proteins, PmCDA1 TadA-8e T7RNAP and TadA-8e PmCDA1 T7RNAP, in which the two deaminases were attached to the N-terminus of T7RNAP in different orders (Figure 3A, 2 and 3). Sequencing of clones after 20 rounds of in vivo hypermutation revealed that PmCDA1 TadA-8e T7RNAP installed more A:T→G:C mutations (84%) than C:G→T:A (16%), whereas TadA-8e PmCDA1 T7RNAP generated more C:G→T:A (96%) than A:T→G:C (4%) (Figure 3B and Supplementary Figure S4A). This result indicates that the deaminase closer to T7RNAP is more active. Shorter or longer linker lengths between enzymes did not significantly reduce the gap (Supplementary Figure S5).
Next, we tested the expression of two mutators, eMutaT7 PmCDA1 and eMutaT7 TadA-8e, from a single plasmid (Figure 3C, 4-7). The pDae079 plasmid, in which the eMutaT7 TadA-8e gene is located in front of the eMutaT7 PmCDA1 gene, yielded the same amounts of A:T→G:C (50%) and C:G→T:A (50%) mutations (P = 0.87; Figure 3D and Supplementary Figure S4B). In contrast, the pDae080 plasmid, which reversed the order of the two mutators, disproportionately generated C:G→T:A (85%) over A:T→G:C (15%) (P = 0.012; Figure 3D and Supplementary Figure S4B). As expected, weaker UGI expression without the optimized RBS significantly reduced C:G→T:A mutations in the wild-type strain (P = 0.0046; Figure 3D and Supplementary Figure S4B) but produced comparable numbers of mutations in the Δung strain (A:T→G:C, 38%; C:G→T:A, 62%; P = 0.37; Figure 3D and Supplementary Figure S4B). We thus selected pDae079 for eMutaT7 transition, which on average installed 5.7 transition mutations in the 1269-bp gene during 80-hour in vivo hypermutation.
High-throughput sequencing demonstrates that eMutaT7 transition introduces all transition mutations at similar frequencies
To further dissect the eMutaT7 transition-mediated in vivo hypermutation, we used next-generation sequencing (NGS) to analyze the sequences of ∼3.3 kb DNA fragments around the target region from mixed pools of cells taken at cycle 0 (n = 1) or cycle 20 (n = 3). We found that, among all substitution types, all four transition substitutions were significantly accumulated at cycle 20 (Supplementary Figure S6); the adjusted average substitution frequencies (frequency differences between cycle 0 and cycle 20) were 0.28% for A→G, 0.22% for T→C, 0.046% for G→A, and 0.41% for C→T, respectively. We further dissected the 3.3 kb DNA into three regions: upstream, target gene, and downstream. Among them, the target gene showed higher adjusted transition substitution frequencies (0.52%, 0.54% and 0.54%, respectively) than the upstream (0.023%, 0.023% and 0.021%) and the downstream (0.089%, 0.090% and 0.088%) regions (Figure 4A and B). This result supports the gene-specific mutagenesis of eMutaT7 transition. As previously observed with eMutaT7 PmCDA1 (20), the downstream region showed higher leakage of gene targeting than the upstream region. Given the very low rifampicin resistance frequencies of eMutaT7 PmCDA1 (20) and eMutaT7 TadA-8e (Supplementary Figure S1C), however, we believe that eMutaT7 transition does not generate a high level of off-target mutations in the genome.
The average number of eMutaT7 transition-mediated substitutions in the target gene was 6.7 (1269 bp × 0.53%) in the NGS analysis, closely recapitulating the result from Sanger sequencing (5.7 substitutions; Figure 3D, 5). The high-throughput sequencing data also confirmed that eMutaT7 transition generates comparable amounts of A:T→G:C and C:G→T:A substitutions, whose average ratio was 1:0.93 (Figure 4B). Taken together, the NGS analysis corroborated that eMutaT7 transition rapidly introduces all transition mutations in the target gene at comparable frequencies.
Additional analyses demonstrate high mutational activity and target tolerance of eMutaT7 transition
To further estimate the mutational activity of eMutaT7 transition, we performed fluctuation analysis for the loss-of-function of pheS A294G (29). We used two target plasmids in which the target gene is controlled by either the T7 promoter or an unrelated constitutive promoter. The conversion of the pheS A294G loss-of-function colony counts to a loss-of-function mutation rate with the FALCOR webtool (31) resulted in 3.6 × 10⁻⁵ loss-of-function mutations per generation with the T7 promoter and 2.4 × 10⁻⁹ loss-of-function mutations per generation without the T7 promoter, indicating a 15,000-fold increase of the mutation rate with the proper targeting of eMutaT7 transition (Figure 5A and Supplementary Figure S7). This result suggests that eMutaT7 transition indeed has a high mutational activity.
We also tested target tolerance of eMutaT7 transition by using different target genes in various genetic contexts.
We initially performed the 20-cycle in vivo hypermutation of two additional genes (malE and gfp) encoding maltose binding protein (MBP) and green fluorescent protein (GFP), respectively, as well as pheS A294G, with or without eMutaT7 transition. We found that these three genes contained comparable numbers of transition mutations (average 6.3, 7.2 and 5.0 substitutions in pheS A294G, malE and gfp, respectively), whereas the pheS A294G gene without eMutaT7 transition displayed no mutation (Figure 5B and Supplementary Figure S8A). This result suggests that the presence of pheS A294G itself does not induce hypermutation and that the high mutational activity of eMutaT7 transition is not limited to our model gene, pheS A294G. We also tested the conditions in which T7-controlled gfp is inserted in the chromosome (Figure 5C), malE and gfp are located in a single target plasmid (Figure 5D), or malE and gfp are positioned in a target plasmid and the chromosome, respectively (Figure 5E). We found that the three conditions led to a total of 7.0, 4.5 and 6.0 mutations, respectively. Although the majority of these results showed comparable numbers of A:T→G:C and C:G→T:A substitutions (Supplementary Figure S8A-C), one experiment almost exclusively showed A:T→G:C (Supplementary Figure S8D), indicating that only eMutaT7 TadA-8e was active during hypermutation. Because we used the same DNA sequence of T7RNAP for the two mutator genes, deletional recombination of these two mutator genes might generate an eMutaT7 TadA-8e-only mutator plasmid. Indeed, we found that the mutator plasmid obtained from cycle #20 of this sample was shorter than the original eMutaT7 transition plasmid (Supplementary Figure S8E), suggesting that eMutaT7 transition needs to be improved for longer in vivo hypermutation experiments.
eMutaT7 transition evolves TEM-1 with various transition mutations
We previously demonstrated that eMutaT7 PmCDA1 promoted rapid continuous directed evolution of TEM-1 for resistance against the third-generation cephalosporin antibiotics cefotaxime (CTX) and ceftazidime (CAZ) (20). Here, we tested eMutaT7 transition in the same way. We used the dual promoter/terminator approach to install both C→T and G→A mutations (17,20). By sequentially increasing antibiotic concentrations during multiple rounds of in vivo hypermutation, we elevated minimum inhibitory concentrations (MICs) from 0.05 to 400-1600 µg/ml in 80 h for CTX (Figure 6A) and from 0.4 to 4000 µg/ml in 48 h for CAZ (Figure 6B).
In conclusion, this study described a new mutator system that combines eMutaT7 PmCDA1 and eMutaT7 TadA-8e, called eMutaT7 transition. This new system has advantages over previous deaminase-T7RNAP mutators. First, eMutaT7 transition expands the mutational spectrum to all transition substitutions (C:G→T:A and A:T→G:C). eMutaT7 PmCDA1 can mediate 8.4% of all amino acid changes (32 out of a total of 380 changes), but eMutaT7 transition expands them to 19% (74 changes). Accordingly, we observed in TEM-1 evolution experiments several A:T→G:C substitutions that have been previously identified in clinical or laboratory isolates. Although transition substitutions nominally compose only a small fraction of all amino acid changes, they generally appear more frequently in natural variants, explaining approximately two-thirds of single nucleotide polymorphisms in several species (52)(53)(54). Second, all transition substitutions are produced at similar frequencies. This outcome was made possible by the use of two efficient deaminases, PmCDA1 and TadA-8e, along with appropriate expression of the two mutators and a DNA glycosylase inhibitor. In contrast, TRIDENT generated considerably more C:G→T:A substitutions (∼95%) in yeast (21).
Future research should aim to include transversion mutations in the mutational spectrum without significantly sacrificing substitution frequencies. Additionally, eMutaT7 transition could be improved to suppress the deletional recombination for longer in vivo hypermutation experiments; either different DNA sequences of T7RNAP for the two mutators or the recently reported TadA variants that can mutate both cytidine and adenine simultaneously (55,56) may enhance its properties. With its good substitution frequencies and wider mutational spectrum, we believe that eMutaT7 transition or its improved variants can become the method of choice in synthetic biology studies requiring an evolutionary approach, particularly in the evolution or engineering of enzymes, metabolic pathways, or gene circuits.
DATA AVAILABILITY
Illumina sequencing data have been deposited in the ArrayExpress database at EMBL-EBI (www.ebi.ac.uk/ arrayexpress) under accession number E-MTAB-12258. Other data that support the findings of this work can be found in the paper and in the Supplementary Data files. Protein and primer sequences are listed in supplementary tables. All E. coli strains and plasmids described in this work are available upon request. The pDae029 (eMutaT7 TadA-8e ), pDae069 (eMutaT7 PmCDA1 ), and pDae079 (eMutaT7 transition ) have been deposited and are available through Addgene (#187620 for pDae029; #187621 for pDae069; #187622 for pDae079). | 2023-04-19T06:17:34.833Z | 2023-04-18T00:00:00.000 | {
"year": 2023,
"sha1": "22d3d6cc6a732a6a0333a85fe62ceffe77fb3ab8",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/advance-article-pdf/doi/10.1093/nar/gkad266/50004613/gkad266.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f8061d35916ebdefdee919ab18c80fbc6bb1a9b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
91045797 | pes2o/s2orc | v3-fos-license | An in silico Analysis of Upstream Regulatory Modules (URMs) of Tapetum Specific Genes to Identify Regulatory cis -Elements and Transcription Factors
The present work presents an in silico analysis of Upstream Regulatory Modules (URMs) of genes expressed in tapetum specific manner in dicotyledon and monocotyledon plants. In the current analysis, we identified several motifs conserved in these URMs of which ten were observed to be part of known cis-elements using tools and databases like MEME, PLACE, MAST and TFSEARCH. We also identified that binding sites for two transcription factors, DOF and WRKY71 were found to be present in majority of the URMs.
Introduction
Tapetum is the innermost layer of the anther wall of plants. It performs the function of a nourishing tissue that remains in continuity with the pollen mother cell through plasmadesmatal connections till the formation of meiocytes occurs in young anther. Tapetum varies from unilayer to multilayer in different plant species and can be uninucleate or multinucleate. Although tapetum cells form a single or at-most a few cell layers in the anther tissue, several studies have been carried out to understand how these cell layers develop and the functions played by them in pollen cell development [1] [2] [3] [4]. These studies have led to the identification of several genes expressed in tapetum specific manner. Such genes have mainly been identified by analyzing comparative cDNA libraries, subtractive hybridization, microarray analysis, in-situ hybridization and in recent years by Laser Dissection Microscopy followed by RNA sequencing [1] [5]- [16].
TA29 from Nicotiana tabacum [1] and A9 from Arabidopsis thaliana [6] are examples of tapetum specific genes that were identified in early years. The promoters of these genes known as TA29 and A9 promoters have been used extensively in the expression of transgenes like barnase and barstar from Bacillus amyloliquifaciens to develop pollination control systems for hybrid seed production [5] [17] [18] [19] [20] [21]. Tight regulation of these promoters leading to tapetum specific expression was the key to success of this system. Attaining a robust tissue specificity of a promoter may need the combinatorial interplay of positive and negative regulators (transcription factors, TFs). The TFs would bring about their outcome by binding to the promoter through specific motifs or cis-elements. Several tapetum specific promoters have been identified till date, examples of which have been summarized in Table 1. However, there is limited knowledge about the transcription factors or the cis-elements of the promoters that are important for regulating these promoters. Although some tapetum specific promoters have been recently characterized in details e.g. OsLTP6 from rice [22] and A9 from Arabidopsis [23] in most of the studies, the characterization is limited to identifying the minimum length of the promoter needed for tapetum specific expression.
The present work is an attempt to identify conserved motifs/cis-elements present in genes expressing in the tapetum tissue of dicotyledon and monocotyledon plants. Further, putative TFs that may bind to these elements have also been predicted. Information generated from this work can be used for experimental validation.
Method
Motif Based Sequence Analysis Suite, MEME suite ver. 4.9.1 [24] was used to find out the conserved motifs in the different datasets. PLACE database [25] was used to figure out the cis-elements from the conserved motifs so obtained. Multiple Alignment & Search Tool, MAST ver. 4.9.1 [26] was used to attain consensus sequence of the conserved motifs obtained from MEME analysis. TFSEARCH software ver.1.3 [27] was used to find the putative TFBS and the transcription factors.
Results and Discussion
A literature survey was carried out to identify genes that expressed in a tapetum or anther specific manner. A total of 34 genes, 24 from dicot and 10 from monocot plants were identified and used in the present analysis (Table 1). From these, two datasets were developed comprising of 600 bp Upstream Regulatory Module (URM), one from dicot and another from monocot species. URMs [40] are defined as a region of a gene upstream to the translational start site, which includes the 5'UTR. Analyzing the URM was necessary in this analysis as in most cases the transcriptional start site has not been experimentally identified. P. A. Sharma, P. K. Burma The sequences for the respective URMs were downloaded from NCBI website.
The sequence files of dicots and monocots URMs thus generated were submitted separately at MEME Tool available online for analysis of conserved motifs. In order to identify the conserved motifs, MEME program was run with different parameters that defined the motif width (5 -13, 6 -10 or 6 -14) and the total number of motifs to be generated was fixed at 10. After identifying the motifs generated using the different widths, it was observed that in most cases, the motif generated with 6 -14 width encompassed those generated by 5 -13 or 6 -10 width. Thus, the 10 motifs generated with 6 -14 width were taken for further analysis. The position of the motifs in the different URMs as generated by MEME for both datasets and the sequence of the motifs as identified by MAST are presented in Figure 1 and Figure 2.
After identifying the conserved motifs in the two datasets of anther/tapetum specific genes, the next step was to analyze if these motifs corresponded to any known cis-elements of plant promoters. This was done by creating strings of the identified motifs and submitting it to the PLACE database. This led to the identification of several known cis-elements. This data was then manually curated and 10 known cis-elements were identified that are enlisted in Table 2. It was observed that out of the 10 identified motifs in case of dicots (Figure 1), 6 of them have already been reported in the literature. In this case, no known cis-elements were identified for motifs 1, 2, 8 and 10. In case of monocots, we could identify known cis-elements only for motifs 1, 5 and 7. This could be ref- lective of the fact that generally more information is available for dicot promoters than those of monocots.
We then attempted to see if there was any information about TFs binding to these cis-elements. In order to do so, we first analyzed the presence of transcription factor binding sites using the TFSEARCH tool. TFSEARCH searches highly correlated sequence fragments against TFMATRIX, a transcription factor binding site profile database present in the "TRANSFAC" database [27]. Strings of the motifs listed in Figure 1 and Figure 2 were submitted as query sequences to TFSEARCH, which is available online. This led to the identification of four transcription factors which could bind to these URMs. These included DOF, a family found across lower and higher plants possessing multiple genes encoding the Dof domain containing protein [57]. Its cDNA was first isolated from maize [58] [62]. It is involved in the regulation of genes of a specific pathway for carbon metabolism in maize, where it regulates C4PEPC (C4 photosynthetic phosphoenol-pyruvate carboxylase), cyPPDK (cytosolic pyruvate orthophosphate dikinase) and non-photosynthetic PEPC [45].
WRKY71 belongs to WRKY family of transcription factors. They are reported to be present across lower eukaryotes (protista) to ferns (pteridophytes) and in plants [63]. The WRKY family members are identified by the presence of a conserved 60 amino acid residue region and a zinc finger domain. Promoters of genes carrying the W-box are potential targets of the WRKY factors [55]. They are key components in the innate immunity of the plant and bind to the W-box of pathogenesis related genes [55] [64]. They are involved in seed and trichome development and embryogenesis [63]. They function as both activators and repressors by protein-protein interaction and autoregulation [65]. WRKY71 expresses in the aleurone layer in rice and is reported to function as a repressor of gibberellic acid signalling pathway in aleurone layer cells. GA pathway is involved in growth and development of plants [66].
The present analysis has led to the identification of certain elements and TFs that could regulate tapetum specific promoters. However, the role of these needs to be experimentally analysed. This can be done by a "loss-of-function" strategy in which the cis-elements in a given URM are mutated and changes in promoter activity, if any are analysed. In a second strategy, "gain-of function", a given TF can be ectopically expressed and its influence on the activity of a given URM is recorded. | 2019-04-02T13:12:20.108Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "dffa577954d6cacd34c048a469003d73115c2bc3",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=81115",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d062828ab7fcb6be6fd6341d27500be4c0aeec89",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
53040020 | pes2o/s2orc | v3-fos-license | Fatigue is common and severe in patients with mastocytosis
Chronic fatigue is a common phenomenon in inflammatory and autoimmune conditions, in cancer, and in neurodegenerative diseases. Although pain and psychological factors influence fatigue, there is an increasing understanding that there is a genetic basis, and that activation of the innate immune system is an essential generator of fatigue. Mast cells are important actors in innate immunity and serve specialized defense responses against parasites and other pathogens. They are also major effector cells in allergic reactions. Primary disorders causing constitutive hyperactivity of mast cells are called mastocytosis and are frequently due to a gain-of-function mutation of the KIT gene encoding the transmembrane tyrosine kinase receptor. It is a clinical experience that patients with mast cell disorders suffer from fatigue, but there is a lack of scientific literature on the phenomenon. We performed a controlled study of fatigue in mastocytosis patients and document a 54% prevalence of clinically significant fatigue.
Introduction
Mastocytosis is a term that encompasses the primary mast cell (MC) disorders and is divided into a systemic form, a cutaneous form, and the rare MC sarcoma. MCs develop from myeloid stem cells in response to stimulation by stem-cell factor and migrate from the blood into various tissues where they mature and acquire specific phenotypes influenced by the local environment. The majority of patients with mastocytosis display a gain-of-function mutation of the KIT gene that encodes the transmembrane tyrosine kinase receptor (CD117), and this renders MCs constitutively hyperactive. A variety of symptoms and signs follow the continuous degranulation and release of histamine, tryptase, serotonin, pro-inflammatory cytokines, and other biological mediators from MCs and give rise to cardiovascular, cutaneous, digestive, musculoskeletal, neurologic, respiratory, and systemic phenomena. 1 It is a clinical experience that patients with mastocytosis suffer from severe fatigue and may report worsening of fatigue hours to days before the outbreak of disease attacks. To our knowledge, only one case report has addressed this aspect of the mastocytosis symptom spectrum. 2 On the other hand, cognitive disturbances and cerebral involvement are acknowledged, but the exact pathophysiology remains obscure. 3 Although much debated and thought to have a multifactorial origin, emerging evidence points to a genetic and molecular basis for fatigue. 4 Fatigue is generated at least partly through innate immunity responses, and MCs are strong activators of innate immunity. It is therefore to be expected that fatigue is a significant complaint among patients with MC disorders, but as far as we can understand there is a lack of literature based on systematic studies regarding this issue. We recently had the opportunity to investigate 28 subjects with mastocytosis, rate their fatigue, and compare findings with healthy subjects.
Subjects and methods
Twenty-eight patients with mastocytosis attending a national educational meeting were investigated. In addition, 28 healthy control subjects matched for age (±5 years) and gender were selected from our research cohorts on fatigue (Table 1). The severity of fatigue was rated by the fatigue Visual Analog Scale (fVAS), a generic instrument that is widely used to measure fatigue in various diseases. 5 It consists of a 100 mm horizontal line with the wording "no fatigue" at the left anchor and "fatigue as bad as it can be" at the right anchor. A higher score indicates more fatigue, and an fVAS score >50 is often regarded as clinically significant fatigue. 6
Statistical analysis
Normality of data was tested with the Shapiro-Wilk test. Some data were not normally distributed and the results are thus presented as medians and ranges for continuous data and as counts and percentages for categorical data. The Wilcoxon signed-rank test was used to compare the two groups of continuous data.
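As an illustration of the analysis described above, the same tests are available in common statistics packages; the sketch below uses Python/SciPy as a stand-in (the software actually used in the study is not stated), and the fVAS values are placeholder numbers rather than study data.

import numpy as np
from scipy import stats

# Hypothetical paired fVAS scores (patients and their matched controls).
fvas_patients = np.array([53, 60, 48, 91, 15, 70, 55, 62])
fvas_controls = np.array([6, 10, 0, 35, 5, 12, 8, 3])

# Shapiro-Wilk normality check, per group.
print(stats.shapiro(fvas_patients))
print(stats.shapiro(fvas_controls))

# Wilcoxon signed-rank test for the matched patient/control pairs.
print(stats.wilcoxon(fvas_patients, fvas_controls))

# Proportion with clinically significant fatigue, using the fVAS >= 50 cut-off.
print(100 * np.mean(fvas_patients >= 50), "% of patients above the cut-off")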
Ethics
This study was carried out in compliance with the Helsinki Declaration and approved by the Regional Committee for Medical and Health Research, West (2010/1455; 2011/2631). All subjects gave informed consent to participate in the study.
Results
Patients with mastocytosis reported a median fVAS score of 53 (15-91) versus 6 (0-35) in the healthy subjects; P < 0.001 ( Figure 1). If subjects were categorized in clinical significant fatigue versus not significant fatigue (fVAS score ⩾50 vs <50), 13 out of the 28 patients (54%) had fatigue, while none of the healthy subjects reported fatigue. Fatigue scores were not associated with age or gender in either group.
Discussion
This observation indicates that fatigue is a prevalent and clinically significant phenomenon in about half of patients with mastocytosis. The reported prevalence and severity match with findings in other chronic inflammatory conditions in which this issue has been more systematically investigated. 6,7 Fatigue is increasingly being recognized as a prominent and severe phenomenon of chronic inflammatory and autoimmune diseases, cancer, and various other chronic conditions. Although the pathophysiology is much debated, a conceptual biological model for understanding fatigue is the sickness behavior response, an evolutionary strongly based phenomenon triggered by innate immunity activation to invading pathogens and damage. 8 This unconscious and automated response is characterized by sleepiness, depressive mood, social withdrawal, and loss of grooming, thirst, appetite, and initiative and is supposed to increase survival of the sick animal. Fatigue is a dominant feature of this response. Several animal studies have demonstrated the fundamental role proinflammatory cytokines, especially interleukin (IL)-1β, play in this response. 8 In conditions with infection and/or tissue injury, activation of innate immunity cells will rapidly lead to increased production of IL-1β which pass through the bloodbrain barrier (BBB) and reaches neuronal cells in the brain by both passive and active transport systems and can even be produced intrathecally. Once in the brain, IL-1β binds to a subtype of the IL-1 receptor and to a brain isoform of the accessory protein, the IL-1RaAcPb. 9 Thus, while IL-1β in the periphery is a strong inducer of innate immunitybased inflammation, IL-1β directly modulates synaptic transmission through neuronal potassium and calcium influx (without inflammation) in the brain and induces subconscious and irresistible sickness behavior. In chronic inflammatory diseases, these processes are continuously active and sickness behavior (and fatigue) becomes chronic. Increased activation of IL-1β in the brain is observed in human subjects with chronic inflammatory and autoimmune conditions and severe fatigue, 10 and treatment with IL-1 blocking agents alleviates fatigue. 11 MCs serve important functions in innate immunity surveillance and carry out specialized defense responses against parasites and other pathogens when TLRs or G protein-coupled receptors are activated by peptidoglycans, snake venoms, wasp toxins, and so on. MCs are also major effector cells of allergic reactions. Whatever the primary response, degranulation of MCs releases a vast number of biological active molecules involved in innate immunity responses resulting in a focused and optimal attack on the invading pathogen.
A hypothetical model for generation of fatigue in mastocytosis is therefore that activated MCs outside the brain release IL-1β, IL-6, TNF-α, and other bioactive molecules that pass the BBB and activate neuronal cells as well as microglia ( Figure 2). Substance P (SP) and IL-33 together markedly enhance the production and release of TNF-α in MCs and leads to an increase in other pro-inflammatory cytokines. 12 Vascular endothelial growth factor (VEGF) disrupts the BBB and augments trafficking of immune cells and signaling substances across the BBB. Inside the brain, activated MCs secrete tryptase, histamine, IL-1β, and TNF-α that trigger microglial cells to produce IL-1β which bind to adjacent brain-specific neuronal IL-1 receptors. Activation of these receptors induces sickness behavior.
Weaknesses of the study
The patients were investigated during a national educational meeting for patients with different MC disorders. We had no access to the exact subtype of disorders, nor to tryptase levels, IgE-mediated allergies or other comorbidities. These matters obviously influence the interpretation of the results. Nevertheless, we think that the study throws light on a phenomenon that has gained relatively little attention in patients with MC disorders. Also, use of H1-antihistamines and sleep disorders due to nocturnal itch are phenomena that may influence the fatigue experience and should be included in future studies.
Figure 2. MCs both in the periphery and in the brain produce and secrete pro-inflammatory cytokines, histamine, proteases, substance P, and other highly active signaling and reactive substances. VEGF disrupts the blood-brain barrier and augments influx to the brain of immune cells, cytokines, and other signaling molecules. Activated microglia and MCs secrete IL-1β that binds to specific IL-1 receptors on cerebral neurons and induces the sickness behavior response, in which fatigue is a major element. VEGF: vascular endothelial growth factor; SP: substance P.
In conclusion, our observation emphasizes that fatigue is a prevalent and significant clinical phenomenon of the mastocytosis disease spectrum and can be explained in a biological context as part of the sickness behavior response driven by innate immunity mechanisms.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2018-11-10T06:17:23.851Z | 2018-10-23T00:00:00.000 | {
"year": 2018,
"sha1": "85466cdbb0b1aceffcb4fc77cf49c3e979ba0ca8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/2058738418803252",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "85466cdbb0b1aceffcb4fc77cf49c3e979ba0ca8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119351645 | pes2o/s2orc | v3-fos-license | Influence of noncommutativity on the motion of Sun-Earth-Moon system and the weak equivalence principle
Features of motion of macroscopic body in gravitational field in a space with noncommutativity of coordinates and noncommutativity of momenta are considered in general case when coordinates and momenta of different particles satisfy noncommutative algebra with different parameters of noncommutativity. Influence of noncommutativity on the motion of three-body Sun-Earth-Moon system is examined. We show that because of noncommutativity the free fall accelerations of the Moon and the Earth toward the Sun in the case when the Moon and the Earth are at the same distance to the source of gravity are not the same even if gravitational and inertial masses of the bodies are equal. Therefore, the Eotvos-parameter is not equal to zero and the weak equivalence principle is violated in noncommutative phase space. We estimate the corrections to the Eotvos-parameter caused by noncommutativity on the basis of Lunar laser ranging experiment results. We obtain that with high precision the ratio of parameter of momentum noncommutativity to mass is the same for different particles.
Introduction
In recent years much attention has been devoted to studies of quantized space realized on the basis of idea of noncommutativity. The idea was proposed by Heisenberg and later it was formulated by Snyder in his paper [1]. The interest to the studies of noncommutative space is motivated by the development of String Theory and Quantum Gravity (see, for instance, [2,3]).
The weak equivalence principle states that kinematic characteristics, such as the velocity and position of a point mass in a gravitational field, depend only on its initial position and velocity, and are independent of its mass, composition and structure. This principle is a restatement of the equality of gravitational and inertial masses. Implementation of this principle was considered in a space with noncommutativity of coordinates [39,40,44,45] and in a space with noncommutativity of coordinates and noncommutativity of momenta [46,47]. It was shown that the equivalence principle is violated in a space with noncommutativity of coordinates and noncommutativity of momenta [46]. In [47] the authors concluded that the equivalence principle holds in noncommutative phase space in the sense that an accelerated frame of reference is locally equivalent to a gravitational field, unless the noncommutative parameters are anisotropic (η xy ≠ η xz). In our previous papers we proposed ways to recover the weak equivalence principle in a space with noncommutativity of coordinates [39,45] and in a space with noncommutativity of coordinates and noncommutativity of momenta [48].
In the present paper we examine effect of noncommutativity on the motion of Sun-Earth-Moon system and consider the weak equivalence principle. We find influence of noncommutativity of coordinates and noncommutativity of momenta on the free fall accelerations of the Moon and the Earth toward the Sun. The results are compared with the results of the Lunar laser ranging experiment.
The paper is organized as follows. In Section 2 features of description of macroscopic body motion in noncommutative phase space are presented. The influence of noncommutativity on the motion of Sun-Earth-Moon system is studied in Section 3. In Section 4 the effect of noncommutativity of coordinates and noncommutativity of momenta on the free fall accelerations of the Moon and the Earth is obtained and the weak equivalence principle is examined. Conclusions are presented in Section 4.
Features of description of macroscopic body motion in noncommutative phase space
In the general case different particles may feel noncommutativity with different parameters: the coordinates X^(a)_i and momenta P^(a)_i of particle a satisfy a noncommutative algebra in which the indexes a, b label the particles and θ_a, η_a are the parameters of coordinate and momentum noncommutativity. So, there is a problem of describing the motion of the center-of-mass of a composite system in noncommutative phase space. This problem was studied in our paper [48].
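The commutation relations themselves are not reproduced in this text. As an illustration only, a canonical-type algebra consistent with the notation above (two spatial dimensions, particle-dependent parameters) could be written as follows; the exact form used by the authors, including possible correction terms of order θη in the coordinate-momentum bracket, is an assumption here.

% Schematic, assumed form of the particle-dependent canonical-type algebra:
[X^{(a)}_i, X^{(b)}_j] = i\hbar\,\delta_{ab}\,\theta_a\,\varepsilon_{ij}, \qquad
[P^{(a)}_i, P^{(b)}_j] = i\hbar\,\delta_{ab}\,\eta_a\,\varepsilon_{ij}, \qquad
[X^{(a)}_i, P^{(b)}_j] = i\hbar\,\delta_{ab}\,\delta_{ij},

where ε_ij is the two-dimensional antisymmetric symbol (ε_12 = 1).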
In the classical limit ℏ → 0, taking into account commutation relations (4)-(6), one obtains the corresponding Poisson brackets (7)-(9). Defining the momenta and coordinates of the center-of-mass of a composite system, and the momenta and coordinates of the relative motion, in the traditional way with µ_a = m_a/M and M = Σ_a m_a, and taking into account that the coordinates X^(a)_i and momenta P^(a)_i satisfy (7)-(9), one finds that the center-of-mass variables satisfy relations of the same form with effective parameters θ̃, η̃ of coordinate noncommutativity and momentum noncommutativity, which describe the motion of the center-of-mass of the composite system (macroscopic body) and are defined in (21), (22). Note that the effective parameters of noncommutativity depend on the composition of a system [48].
Sun-Earth-Moon system in noncommutative phase space
Let us study the influence of noncommutativity on the motion of the Earth and the Moon in the gravitational field of the Sun. We consider the Hamiltonian given in (23), where m_S, m_E, m_M are the masses of the Sun, the Earth and the Moon, respectively, and G is the gravitational constant. Writing Hamiltonian (23) we suppose that the influence of the relative motion of particles which form a macroscopic body on the motion of its center-of-mass is not significant.
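The Hamiltonian (23) itself is not reproduced in this text. As a hedged sketch only, a Newtonian three-body Hamiltonian consistent with the description above (Sun treated as a fixed source of gravity at the origin) would read as follows; the exact expression used by the authors may differ.

% Assumed form of the Hamiltonian (23):
H = \frac{(P^{E})^{2}}{2 m_{E}} + \frac{(P^{M})^{2}}{2 m_{M}}
  - \frac{G m_{S} m_{E}}{R_{ES}} - \frac{G m_{S} m_{M}}{R_{MS}} - \frac{G m_{E} m_{M}}{R_{EM}},

where R_ES, R_MS and R_EM denote the Earth-Sun, Moon-Sun and Earth-Moon distances.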
Choosing the Sun to be at the origin of the coordinate system, we denote by X^E_i and X^M_i the coordinates of the Earth and the Moon, respectively, with i = (1, 2). These coordinates and the momenta satisfy relations (27)-(29). Taking into account (27)-(29), the equations of motion read as in (30)-(37). It is worth mentioning that because of the terms caused by noncommutativity in (30)-(37) the velocity of a macroscopic body in a gravitational field depends on its mass. Also, taking into account the definition of the effective parameter of noncommutativity (21), which corresponds to the motion of the center-of-mass of a macroscopic body in noncommutative phase space, we can state that the velocities of the Earth and the Moon depend on the composition of these bodies. From this it follows that the weak equivalence principle is violated in noncommutative phase space.
Estimation of the effect of noncommutativity on the weak equivalence principle
A stringent limit on any violation of the equivalence principle was provided by the Lunar laser ranging experiment [49]. The result was obtained on the basis of a comparison of the free fall accelerations of the Earth and the Moon toward the Sun. According to the experiment, the equivalence principle holds to an accuracy of the order of 10^−13 in |Δa|/|a|, where a_E, a_M are the free fall accelerations of the Earth and the Moon toward the Sun when they are at the same distance from the Sun. Let us use this result for the analysis of the weak equivalence principle in noncommutative phase space.
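The accuracy relation itself is not reproduced here. For reference, the Eotvos-parameter conventionally used in such analyses is the normalized difference of the two free fall accelerations; the expression below is the standard definition and is assumed rather than quoted from the paper.

% Standard definition of the Eotvos-parameter for the Earth-Moon pair:
\frac{\Delta a}{a} = \frac{2\,(a_E - a_M)}{a_E + a_M}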
Using equations (30)-(37) we can write expressions for the accelerations of the Earth and the Moon up to the first order in the parameters of noncommutativity θ_M, η_M, θ_E, η_E. Let us compare the accelerations of the Moon and the Earth toward the Sun in the case when the Moon and the Earth are at the same distance to the source of gravity, R_MS = R_ES = R. It is convenient to choose the frame of reference with the X_1 axis perpendicular to R_EM (passing through the middle of R_EM), the X_2 axis parallel to R_EM, and with the origin at the Sun's center. Namely, X^E_1 = X^M_1 = R(1 − R^2_EM/4R^2)^(1/2) ≃ R (here we take into account that R_EM/R ∼ 10^−3), and X^E_2 = −X^M_2 = R_EM/2. So, we can write the expressions (41), (42) for the free fall accelerations of the Moon and the Earth toward the Sun; here we take into account that Ẋ^E_2 = Ẋ^M_2 = υ_E (υ_E is the Earth orbital velocity). Also, we have Ẋ^E_1 = 0 and Ẋ^M_1 = υ_M, where υ_M is the Moon orbital velocity. Note that R_EM/R ∼ 10^−3 and υ_M/υ_E ∼ 10^−2. So, the last terms in (41), (42) can be neglected, which leads to the expression (43) for the Eotvos-parameter. Let us analyze the obtained result. Note that because of noncommutativity the Eotvos-parameter is not equal to zero even in the case of equality of the gravitational and inertial masses of the bodies. In (43) one has a term caused by the momentum noncommutativity, ∆a_η/a, which is proportional to (η_E/m_E − η_M/m_M), and a term caused by the noncommutativity of coordinates, ∆a_θ/a, which is proportional to (θ_E m_E − θ_M m_M). Parameters θ_E, η_E, θ_M, η_M are effective parameters of noncommutativity which are given by (21), (22) and depend on the composition of the bodies. So, even if we consider as an example two bodies with the same masses but with different composition, the Eotvos-parameter is not equal to zero.
In our paper [48] we proposed conditions on the parameters of noncommutativity under which a list of important results can be obtained in noncommutative phase space. Namely, we found that in the case when the parameters of noncommutativity θ_i, η_i corresponding to a particle of mass m_i satisfy relations (46), (47), where γ, α are constants which do not depend on the mass, the weak equivalence principle is preserved; the kinetic energy has the additivity property and does not depend on the composition; the Poisson brackets (19), (20) are equal to zero, therefore the motion of the center-of-mass of a composite system is independent of the relative motion; the noncommutative coordinates can be considered as kinematic variables [50]; and the effective parameters of noncommutativity θ̃, η̃ describing the motion of the center-of-mass do not depend on its composition and are determined only by the total mass M of the system. Note that in the case when conditions (46), (47) are satisfied the Eotvos-parameter (43) is equal to zero and the equivalence principle is preserved.
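The relations (46), (47) themselves are not reproduced in this text. Judging from the combinations (η_E/m_E − η_M/m_M) and (θ_E m_E − θ_M m_M) that control the Eotvos-parameter, and from the conclusion that the ratio η/m should be the same for different particles, a plausible reading is the following; this reconstruction is an assumption, not a quotation from the paper.

% Assumed form of the conditions on the particle parameters (cf. (46), (47)):
\theta_i\, m_i = \gamma = \mathrm{const}, \qquad \frac{\eta_i}{m_i} = \alpha = \mathrm{const},

with γ and α independent of the particle species.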
If the conditions (46), (47) are not satisfied one has α_E ≠ α_M, γ_E ≠ γ_M. The result (43) can be used to estimate the values of the differences α_E − α_M, γ_E − γ_M. For this purpose we suppose that the effect of noncommutativity on the motion of the Earth and the Moon which causes the violation of the weak equivalence principle is less than the experimental limits on violation of this principle, where 2.1 · 10^−13 is the largest value of |∆a|/|a| obtained on the basis of the Lunar laser ranging experiment [49]. To estimate the orders of ∆α = α_E − α_M and ∆γ = γ_E − γ_M it is sufficient to consider the corresponding inequalities, in which m_n is the neutron mass. Taking into account (55), (57), we obtained a quite strong restriction on the value of ∆α and can conclude that the proposed condition (47) holds with high accuracy. The constant γ is of the order of 10^−66 s [39]. So, inequality (55) does not impose a strong restriction on the value of ∆γ. The result is expectable because of the reduction of the effective parameter of noncommutativity θ̃ with respect to the parameters corresponding to the individual particles (21). For instance, in the particular case when a system is composed of N identical particles of mass m and parameter of noncommutativity θ, from (21) we have θ̃ = θ/N. So, the effect of coordinate noncommutativity on the properties of macroscopic systems is less than the effect of the noncommutativity on the motion of individual particles. Therefore experimental data with very high accuracy are needed to obtain a strong upper bound on the parameter θ or on the value of ∆γ on the basis of studies of macroscopic bodies in noncommutative space.
Conclusions
Noncommutative phase space of canonical type has been considered. The influence of noncommutativity of coordinates and noncommutativity of momenta on the motion of the Sun-Earth-Moon system has been studied. We have found that the free fall accelerations of the Moon and the Earth toward the Sun in the case when the Moon and the Earth are at the same distance to the source of gravity are not the same even in the case of equality of gravitational and inertial masses of the bodies. Therefore the Eotvos-parameter is not equal to zero (43) and the equivalence principle is violated. The parameter depends on the values of (η E /m E − η M /m M ) and (θ E m E − θ M m M ).
We have used the result for the Eotvos-parameter (43) to estimate the values of (η_E/m_E − η_M/m_M) and (θ_E m_E − θ_M m_M), namely to estimate the differences of the constants α and γ for the Earth and the Moon. For this purpose the data of the Lunar laser ranging experiment have been considered. Assuming that the effects of noncommutativity which cause the violation of the weak equivalence principle are less than the limits for any violation of the principle, we have obtained upper bounds for the values ∆α and ∆γ (55), (56). The upper bound on ∆α (55) is quite stringent. The obtained restriction on ∆γ is not strong (56). We have concluded that to find a stronger restriction on ∆γ, experimental results of higher precision are needed. This is because of the reduction of the effective parameter of coordinate noncommutativity with increasing number of particles in a system.
It is important to note that the Eotvos-parameter is equal to zero and the equivalence principle is recovered in noncommutative phase space when conditions (46), (47) are satisfied. The importance of these conditions is stressed by the number of results which can be obtained in noncommutative phase space in the case when they hold. Among them are the preservation of the properties of the kinetic energy and the independence of the motion of the center-of-mass from the relative motion [48,50]. On the basis of our result for ∆α (55) we can conclude that the condition on the parameter of momentum noncommutativity (46) holds with high precision; that is, with high precision the ratio η/m is the same for different particles. | 2018-07-29T09:13:28.000Z | 2018-07-29T00:00:00.000 | {
"year": 2018,
"sha1": "6a1a7665cbe3519a7c1e742c297b0d2f9e2cbe28",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1808.02353",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ace98835ee9a2e95bdae219d7a5359a9c23e140e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
248106212 | pes2o/s2orc | v3-fos-license | Sandwich-structured electrospun all-fluoropolymer membranes with thermal shut-down function and enhanced electrochemical performance
Abstract High safety and rate capability of lithium-ion batteries (LIBs) remain challenging. In this study, sandwich-structured poly(vinylidene fluoride)/poly(vinylidene fluoride-co-hexafluoropropylene)/poly(vinylidene fluoride) (PVDF/PVDF-HFP/PVDF) membranes with thermal shut-down function were successfully prepared through electrospinning. The effects of different weight ratios of PVDF and PVDF-HFP in a composite membrane on the physical and electrochemical properties of the membrane were explored. It was found that the composite membrane with 36 wt% PVDF-HFP (P/H2/P) showed excellent electrolyte absorption (367%) and ionic conductivity (2.5 × 10−3 S/cm). Half-cell with P/H2/P as separator exhibited higher discharge capacity and better cycle performance than commercial PP membrane. More importantly, thermally stable high melting-temperature PVDF was chosen as outer layer, while low melting-temperature PVDF-HFP was used as inner layer. Self-shutdown function of this separator was achieved when heated at 140 °C, providing a safety measure for LIBs. These results indicate that PVDF/PVDF-HFP/PVDF composite membrane is a promising separator candidate in high performance LIBs applications. Graphical Abstract
Introduction
Due to their obvious advantages of high energy density, long cycle life and environmental friendliness, lithium-ion batteries (LIBs) have been widely used in smart electronic devices, electric vehicles (EVs) and energy storage system [1,2]. However, current safety issue and poor rate capability severely impede their further applications and development of new generation high performance LIBs [3,4]. In recent years, the safety of LIBs has been improved by adding electrolyte additives [5,6], composite electrodes [7] and modified separators [8][9][10], etc. As an inert component, separators do not participate in electrochemical processes but determine the electrochemical performance and safety of LIBs, providing an effective and less sacrificial way to improve the safety of LIBs [11]. Commercial polyolefin membranes including polyethylene (PE), polypropylene (PP) and PP/PE/PP have been widely used as LIBs separators due to their excellent mechanical properties and low cost. However, their relatively low porosity, insufficient electrolyte wettability severely affect the rate capability of LIBs [12]. Besides, low thermal stability may cause internal short circuit at elevated temperatures and lead to uncontrollable thermal runaway, limiting their applications in next generation of batteries [13]. Although there have been studies on modifying commercial membranes, such as coating inorganic particles to polyethylene terephthalate (PET) nonwoven fabrics [14], electrospun PVDF on PP nonwoven fabrics [15], etc. Yet the coated membranes suffer from pore blockage and performance deterioration [16]. On the other hand, poor compatibility between different polymer materials in the composite membrane may result in inferior interfacial bonding and premature interlayer failure. Therefore, a method to fabricate a durable membrane with shut-down function, high porosity and excellent ionic conductivity is still a challenge.
Electrospinning has been widely used to prepare various functional fiber membranes with high porosity, large specific surface area and interconnected porous structure [17,18]. Electrospun membranes have been made from a variety of polymers including polyacrylonitrile (PAN) [19,20], polyimide (PI) [21,22], thermoplastic polyurethane (TPU) [23,24], polyvinylidene fluoride (PVDF) [25,26], polyvinylidene fluoride-hexafluoropropylene (PVDF-HFP) [27,28], etc. In particular, PVDF and PVDF-HFP have attracted significant attentions because of their excellent mechanical properties and thermal stability, chemical inertness in electrolyte and good electrolyte affinity [29,30]. Kim et al. confirmed that the crosslinked PVDF-HFP membrane provided highly efficient ionic conducting pathways, resulting in higher discharge capacity of assembled LIBs compared with PP membranes [31]. Khalifa et al. demonstrated a nonwoven PVDF/halloysite nanotube (HNT) nanocomposite membrane with high ionic conductivity and low thermal shrinkage [32]. Although single-layer polymer fiber membranes with nanoparticles embedded can gain higher thermal stability, it is not able to achieve thermal shut-down at heat accumulation situation, therefore, three-layer membranes based on electrospun fibers have been further developed to improve safety and electrochemical performance of LIBs. Wu et al. reported a novel sandwich structured PI/PVDF/PI composite membrane. High meltingtemperature PI component improved thermal stability while low melting-temperature PVDF component melt to shut down ion pathways at elevated temperatures [33]. Pan et al. fabricated ultrathin SiO 2 -anchored layered PVDF/PE/PVDF porous fiber membranes. The membranes exhibited highly porous structure with high electrolyte uptake capability, and unique layered structure was beneficial to arrest heat accumulations by cutting off Li þ diffusion channels [34]. However, most polymer pairs are thermodynamically immiscible, which results in poor interfacial adhesion and therefore inferior mechanical and electrochemical properties [35].
In this study, we chose PVDF homopolymer and PVDF-HFP copolymer to prepare an all-fluoropolymer composite membrane, aiming to achieve better interlayer adhesion and lower lithium ion transfer resistance. Moreover, the outer PVDF microfiber layer with better thermal stability was selected as a support to avoid short circuits, and the intermediate PVDF-HFP microfiber layer with a low melting temperature was selected to realize thermal shutdown in high-temperature situations in order to improve battery safety. By controlling a total spinning time of 18 h, membranes with different time ratios were obtained (Figure 1), i.e. PVDF:PVDF-HFP:PVDF = 1:1:1, 1:2:1, 1:3:1, denoted as P/H1/P, P/H2/P, P/H3/P, respectively. The influences of different weight ratios of the two polymers in the composite membranes on ionic conductivity, electrolyte uptake, thermal stability, mechanical properties and electrochemical properties of assembled half-cells were investigated. The thermal shut-down function was also examined in a simulated high-temperature situation.
Commercial membrane (Celgard 2500) was used as a reference for comparison purposes. PVDF binder (Arkema 500) was purchased from Arkema, France, and super P, used as a conductive additive, was purchased from Tianchenghe Technology Co. Ltd. (Shenzhen, China).
Membranes fabrication
The PVDF/PVDF-HFP/PVDF microfibrous membranes were prepared by the electrospinning method, as shown in Figure 1. Before preparing the spinning solution, PVDF and PVDF-HFP powders were dried at 60 °C for 12 h. Then the PVDF and PVDF-HFP powders were dissolved in a mixed solvent of DMF:acetone = 7:3 (v:v), respectively, and mechanically stirred at 50 °C for 3 h to obtain a 14 wt% PVDF solution and a 20 wt% PVDF-HFP solution.
The as-prepared solution was then electrospun into fibers at a tip-to-collector distance of 19 cm, a voltage of 18 kV, a flow rate of 0.02 ml min−1 and a collector speed of 150 rpm. A total spinning time of 18 h was controlled to obtain membranes with different time ratios, i.e. PVDF:PVDF-HFP:PVDF = 1:1:1, 1:2:1, 1:3:1, denoted as P/H1/P, P/H2/P, P/H3/P, respectively. The as-prepared membranes were dried in a vacuum oven at 80 °C for 12 h to remove the solvent. Finally, the membranes were hot pressed at 120 °C under a pressure of 4 MPa for 1 h to consolidate them into PVDF/PVDF-HFP/PVDF composite membranes.
Characterizations
The microscopic morphology of the membranes was observed by a scanning electron microscope (SEM JSM-6390LV, Japan), and the accelerating voltage was 15 kV.
The porosity of membranes was measured as follows. The samples with a diameter of 18 mm were cut from membranes by a membrane-punching machine.
They were washed and dried at 50 °C for 6 h. The porosity was then calculated by using Eq. (1) [36],
where P is the porosity of the sample, ρ0 is the density of PVDF and PVDF-HFP (1.78 g cm−3) and ρ is the density of the sample. In order to evaluate the electrolyte affinity of microporous membranes, the electrolyte uptake ratio (EU) of the membranes was calculated by Eq. (2). The membranes were soaked in the electrolyte (1 M LiPF6 in EC:DEC) for 2 h.
where W0 and W are the mass of the membrane before and after absorbing the electrolyte, respectively. Tensile tests were carried out using a tensile tester (UTM4104X, SUNS, Shenzhen, China) at a testing speed of 20 μm s−1. The gauge length of the samples was 20 mm and the width was 8 mm.
The thermal behavior of the membranes was characterized by a differential scanning calorimeter (DSC 250, TA Instruments, US). The membranes were heated from 25 °C to 200 °C at a rate of 10 °C min−1 under a nitrogen atmosphere. The crystallinity of the membranes was calculated from the DSC data by using Eq. (3),
where ΔHm is the enthalpy of the melting peak in the DSC curve and ΔH100 is the apparent enthalpy of fusion per gram of totally crystalline PVDF/PVDF-HFP (ΔH100 = 104.7 J g−1) [37]. Samples with a diameter of 18 mm were cut from the membranes by a membrane-punching machine and treated at 130 °C for 0.5 h to compare the thermal stability of these membranes. The thermal shrinkage ratio (TS) was calculated by Eq. (4),
where S0 and S1 represent the surface area of the membranes before and after thermal treatment, respectively. Cells with a configuration of stainless steel (SS)/membrane/SS were assembled, and electrochemical impedance spectroscopy (EIS) was used to measure the bulk resistance (Rb) of the membranes at frequencies between 1 Hz and 500 kHz at an amplitude of 5 mV. The ionic conductivity was then calculated by Eq. (5) [38],
where σ is the ionic conductivity, L is the thickness of the membrane, Rb is the bulk resistance and A is the effective area of the membrane. The cathode materials were prepared by blending LiFePO4 powder (80 wt%), super P (10 wt%) and PVDF binder (10 wt%). A coin-type cell (CR 2032) with a configuration of Li/membrane/LiFePO4 was used to investigate the rate capability, cycling performance and EIS. Galvanostatic charge/discharge C-rate capabilities were examined in the voltage range of 2.5-4.2 V with a Neware Battery Test System (BTS-4000, China) at 0.2, 0.5, 1, 2, and 4 C, a 1 C-rate meaning that the selected discharge current discharged the battery in 1 h. The cycling performance was investigated at 1 C for 100 cycles.
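The bodies of Eqs. (1)-(5) are not reproduced in this text; only the variable definitions survive. The sketch below collects the quantities in the standard forms those definitions most plausibly imply (porosity from a density ratio, gravimetric uptake, DSC crystallinity, area shrinkage, and conductivity from bulk resistance and cell geometry); the exact expressions used by the authors may differ, so treat these as assumptions.

def porosity_percent(rho_sample, rho_polymer=1.78):
    # Eq. (1), assumed: porosity from apparent vs. bulk polymer density (g/cm^3).
    return (1.0 - rho_sample / rho_polymer) * 100.0

def electrolyte_uptake_percent(w_dry, w_wet):
    # Eq. (2), assumed: mass gained after soaking, relative to the dry membrane.
    return (w_wet - w_dry) / w_dry * 100.0

def crystallinity_percent(dH_m, dH_100=104.7):
    # Eq. (3), assumed: melting enthalpy over the 100% crystalline reference (J/g).
    return dH_m / dH_100 * 100.0

def thermal_shrinkage_percent(area_before, area_after):
    # Eq. (4), assumed: relative loss of membrane area after heat treatment.
    return (area_before - area_after) / area_before * 100.0

def ionic_conductivity(thickness_cm, r_bulk_ohm, area_cm2):
    # Eq. (5), assumed: sigma = L / (R_b * A), in S/cm.
    return thickness_cm / (r_bulk_ohm * area_cm2)

# Example with hypothetical cell geometry: a 0.01 cm thick membrane with
# R_b = 2 ohm over 2 cm^2 gives 2.5e-3 S/cm, the order reported for P/H2/P.
print(ionic_conductivity(0.01, 2.0, 2.0))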
In order to simulate its thermal shut-down behavior at high temperature, the P/H2/P was sandwiched between two steel plates and treated at 140 C for 30 min (denoted as SD-P/H2/P). The half-cell assembled with SD-P/H2/P was charged and discharged at 2 C and 4 C.
Results and discussion
The microscopic morphology of P/H2/P composite membrane was observed by SEM and shown in Figure 2. P/H2/P had interconnected fibrous structures leading to high porosity, which can facilitate the absorption and retention of electrolyte and lithium ion transportation. After a time-controlled electrospinning process, the P/H2/P membrane had a 72 lm-thick layer made of PVDF-HFP microfibers at the center and two 40 lm-thick layers made of PVDF microfibers at the top and bottom. After hot pressing at 120 C, PVDF-HFP microfibers were partially melted ( Figure S1) and consolidated well with the PVDF microfiber layer (Figure 2b). It is noticed from magnified image in Figure 2c that the partially melted and consolidated PVDF-HFP formed numerous microfibrils in the interfacial regions between PVDF and PVDF-HFP layers resulting in better interfacial bonds between these two layers. Furthermore, it is observed from Figure 2b and c that a crack was formed during the freezefracture preparation of these samples. It shows that the crack occurred in the outer PVDF layer instead of in the interlayer regions between PVDF and PVDF-HFP layers. This intralayer crack indicates that a good interfacial bonding between PVDF and PVDF-HFP layers were obtained. The cohesive and physically connected interfacial regions of these all-PVDF composite membranes can promote an integrity of composite membranes and therefore their mechanical properties.
Excellent tensile properties of separators are necessary to improve the safety and prelong service life of LIBs, but the tensile strength and modulus of electrospun membrane are generally low because its nonwoven features [21]. Figure 2d shows the stress-strain curves of different samples. The maximum tensile strengths of single-layered PVDF-HFP and PVDF membranes were only 3.3 MPa and 4.4 MPa, respectively. Besides, PVDF-HFP membrane was brittle and less tough than PVDF membrane. The tensile properties of sandwich-structured membrane without hot pressing (denoted as nHP-P/H2/P in Figure 2d) were also investigated. This nHP-P/H2/P membrane exhibited a similar brittleness as PVDF-HFP membrane. However, the sandwich-structured membranes after hot pressing showed much higher tensile strengths ($ 400% maximum increment) and tensile moduli ($1400% maximum increment) than single-layered and non hot-pressed membranes. During hot press process, PVDF-HFP microfibers with a low melting point (T m ) ($130 C) partially melted and consolidated with PVDF microfibers to obtain cohesive interfacial adhesions, which led to a significant increase in tensile strength. Inspiringly, the superior interfacial adhesion between microfibers in the interfacial regions similar as adhesion between fibers and matrix in all-polymer composites is a key factor for the improvement in tensile properties [39][40][41][42][43][44][45][46][47]. Meanwhile, as the content of PVDF-HFP in the composite membrane increased, the tensile strength of membrane was improved successively because more interfacial regions with cohesive adhesions were created.
Thermal behavior of PVDF/PVDF-HFP/PVDF composite membrane was characterized by DSC and shown in Figure 3a. It is observed that the composite membrane showed endothermic peaks at around 135 C and 172 C corresponding to the melting temperatures of PVDF-HFP and PVDF components, respectively. By calculating from DSC curves [48], the actual weight ratios of three layers in these composite membranes were 1:0.6:1, 1:1.1:1 and 1:2.1:1 for P/H1/P, P/H2/P and P/H3/P respectively. The crystallinity of PVDF-HFP and PVDF components was calculated by Eq. (3) and shown in Figure 3b. As the PVDF-HFP content increased from 23 wt% (P/H1/P) to 51 wt% (P/H3/P), the crystallinity of PVDF-HFP component only increased from 4.1% to 5.5%, revealing the crystallization of PVDF-HFP copolymer chains with strong steric hindrance is kinetically unfavorable. Meanwhile, when the content of PVDF component in these composite membranes decreased from 77 wt% to 49 wt%, the crystallinity of PVDF component decreased sharply by 58.8%. The total crystallinity of composite membranes was reduced to below 20% for P/H2/P and P/H3/P. Because electrolyte uptake and retention occur through a swelling process in the amorphous regions in separators [49], lower crystallinity is therefore beneficial to improve electrolyte affinity and ionic conductivity.
Additionally, porosity and electrolyte uptake ratio are two important indicators for evaluating the performance of membranes. Figure 3c shows that the composite membranes exhibited significantly higher porosity and electrolyte uptake ratio than the PP membrane, attributed to the three-dimensional fibrous network structure of electrospun microfibre membranes and their lower crystallinity. In particular, due to the effective combination of the PVDF and PVDF-HFP layers through hot pressing, the P/H2/P membrane had high porosity (69%) and electrolyte uptake ratio (367%), which is beneficial for improving ionic conductivity and subsequent cell performance.
Severe thermal shrinkage of the membrane may cause a short circuit within the battery and increase the risks of spontaneous combustion and explosion during heat accumulation; therefore good thermal stability of the membrane is crucial to the safety performance of LIBs. Figure 3d shows the photographs of the membranes before and after heat treatment at 25 °C and 130 °C for 0.5 h. The thermal shrinkage ratios of the PP, P/H1/P, P/H2/P, and P/H3/P membranes were calculated to be 11.9%, 1.1%, 2.1%, and 10.1%, respectively. The thermal shrinkage ratio of the membranes increased as the PVDF-HFP component increased, due to the relatively lower thermal stability of PVDF-HFP than PVDF. The thermal shrinkage ratio of PP was the highest among these membranes; obvious shrinkage and a shape change from round to rolled-up in the machine direction (uniaxial stretching direction) were observed for PP. Meanwhile, P/Hx/P showed uniform shrinkage in all directions, which is advantageous for effectively improving the safety of LIBs at high temperatures. Figure 4a shows the Nyquist plots of SS/membrane/SS cells. In the high-frequency region, the intercept of the Nyquist plot on the real axis represents the bulk resistance (Rb) of the membrane. The ionic conductivity values were hence calculated and presented in Table S1. Due to their high porosity and electrolyte uptake ratio, the composite membranes demonstrated lower Rb and much higher ionic conductivity than the PP membrane (0.8 × 10−3 S cm−1). Obviously, the Rb of the P/H2/P membrane was lower than those of the P/H1/P and P/H3/P membranes; therefore, it showed the highest ionic conductivity (2.5 × 10−3 S cm−1) among these composite membranes, indicating rapid migration of lithium ions during the charge-discharge process.
The compatibility of liquid electrolyte-soaked porous membrane with commercial electrode materials was characterized by EIS of Li/membrane/ LiFePO 4 half cells. All the Nyquist plots in Figure 4b show a semicircle in high-and medium-frequency region indicating the charge-transfer resistance (R ct ) in interfacial regions, as well as straight line in low-frequency region indicating the diffusion of lithium ions in cathode materials [50]. The R ct values of composite membranes were significantly lower than that of PP membrane (Table S1), particularly, P/H2/P membrane displayed the lowest charge-transfer resistance, due to its high porosity, better wettability to electrolyte and low crystallinity. The low resistance of P/H2/P composite membrane would improve the compatibility between electrode and electrolyte-soaked P/H2/P membrane, revealing that the transportation of lithium ions between electrode and electrolyte interfaces is more efficient.
The electrochemical performances of Li/membrane/LiFePO 4 half-cells are shown in Figure 4c-f. The type and structure of membranes influences lithium ions transport through electrolyte-soaked membranes. The initial discharge capacities of half-cells with composite membranes at 0.2 C were higher than cell with PP (147.0 mAhg À1 ) as shown in Figure 4c. This is correlated with the fact that these electrospun microfibrous membranes had higher porosity and electrolyte retention, leading to lower interfacial resistances. Moreover, P/H2/P membrane showed higher initial discharge capacity (157.1 mAhg À1 ) than P/H1/ P membrane (152.2 mAhg À1 ) and P/H3/P membrane (150.0 mAhg À1 ), due to its higher ionic conductivity than other two composite membranes. Figure 4d shows the rate capability of the cells assembled with various membranes. At the same C-rate, cells with PVDF/PVDF-HFP/PVDF composite membranes exhibited higher capacities than cell with PP due to their lower interfacial resistances. Meaningfully, the capacity-decay values of P/H1/P, P/H2/P and P/H3/P were 41.4%, 36.1% and 42.0% respectively, lower than PP (53.0%) when the charge-discharge rate increased from 0.2 C to 4 C, indicating that composite membranes can help to reduce the ohmic polarization of the cell. P/H2/P membrane had higher porosity and liquid electrolyte absorption, consequently, it had superior ionic conductivity which can facilitate rapid lithium ions transportation between electrodes and improve the high-rate performance of battery. Therefore, cell assembled with P/H2/P had the highest discharge capacity especially at higher rates ( Figure 4e). Figure 4f shows the cycling performance of the cells with different membranes at 1 C. The cells with PVDF/PVDF-HFP/PVDF membranes had no apparent capacity loss after 100 cycles, indicating that composite membranes had excellent cycle stability.
The shut-down behavior of composite membrane was further investigated. P/H2/P was sandwiched between two steel plates and treated at 140 C for 30 min (denoted as SD-P/H2/P). The microscopic morphology of SD-P/H2/P composite membrane was shown in Figure 5a-c. It is noticed from Figure 5a that pore blockage clearly occurred. Because 140 C does not reach the melting temperature of PVDF layer, it is supposed that the melted PVDF-HFP microfibers penetrated into PVDF outer layers and consolidated to cause pore blockage. Figure 5b and c indicate that melted and consolidated PVDF-HFP wrapped the fibers of PVDF layers to form a dense membrane. It is expected that the as-formed dense membrane would be able to effectively block ion transfer channels and therefore shut down electrochemical reactions between electrodes to prevent further heat accumulations.
The charge-discharge tests of Li/membrane/ LiFePO 4 half-cell with SD-P/H2/P membrane were performed at 2 C and 4 C, the results are shown in Figure 5d. The discharge capacity of this cell was almost zero at both 2 C and 4 C, indicating that the dense middle layer of SD-P/H2/P membrane blocked the transport channels of lithium ions [51], demonstrating that P/H2/P composite membrane can perform thermal shut-down function at hightemperature situation [33]. Currently, polypropylene (PP) and polyethylene (PE) monolayer membranes are mostly used in LIBs industry, but they suffer from low thermal stability and high safety risks [10]. Although commercial PP/PE/PP membranes (e.g. Celgard 2320) can provide thermal shut-down function at 135 C (T m of PE), its thermal shrinkage ratio is as high as 20% at 130 C [51] due to its low thermal stability and poor interfacial compatibility between different polymer layers. The current PVDF/PVDF-HFP/PVDF composite membrane can therefore provide a promising alternative separator solution for high safety LIBs. In summary, a sandwich-structured all-fluoropolymer composite membrane was prepared by timecontrolled electrospinning and hot pressing process. The all-fluoropolymer composite membrane exhibited excellent tensile properties due to its cohensive interfacial adhesion between microfibers in interfacial regions. The effects of different weight ratios of two polymers in composite membrane on physical and electrochemical properties of the membrane were explored. P/H2/P composite membrane showed superior electrolyte absorption ratio (367%), ionic conductivity (2.5 Â 10 À3 S/cm) and lower interfacial resistance (119.3 X). Hence, cell with P/ H2/P composite membrane demonstrated high discharge capacity (157.1 mAhg À1 ) at 0.2 C and low capacity-decay of 36% from 0.2 C to 4 C. More importantly, P/H2/P can perform thermal shutdown function at high-temperature situation to prevent heat accumulation and decrease the risks of thermal runaway. Therefore, the all-fluoropolymer composite membrane is a promising separator candidate to improve safety and electrochemical properties of LIBs.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes on contributors
Rongyan Wen received her BEng degree in 2019 and she is now a postgraduate student working on electrospun polymeric functional membranes for LIBs. Zhihao Gao received his BEng degree in 2019 and he is now a postgraduate student working on functional composite membranes for LIBs.
Lin Luo received his BEng degree in 2020 and he is now a postgraduate student working on functional composite membranes for LIBs.
Xiaochen Cui received his BEng degree in 2020 and he is now a postgraduate student working on carbon-based materials for energy storage applications.
Prof. Jie Tang is a managing researcher in advanced lowdimentional nanomaterials group in National Institute for Materials Science, Tsukuba, Japan. Her research interest is in the design, fabrication, characterization and applications of one-or two-dimensional nanostructured materials.
Dr. Zongmin Zheng received her PhD degree in Chemistry from Xiamen University and joined in Qingdao University in 2017. Her research focuses on materials for energy storage applications.
Dr. Jianmin Zhang is now an Associated Professor in Qingdao University. She received her PhD degree in Materials Science from Queen Mary University of London in 2009. Then she worked for AVIC and Simens in Beijing. In 2015 she joined in Qingdao University. Her research interests include polymeric functional membranes, carbon-based materials for energy storage applications. | 2022-04-13T15:04:31.412Z | 2022-04-11T00:00:00.000 | {
"year": 2022,
"sha1": "1840b35c2bacabe0595da4bcb72094f37dfa79a6",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20550324.2022.2057661?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "13c171fbde0559ca2cfe30760b0d886dd7631a02",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": []
} |
56125672 | pes2o/s2orc | v3-fos-license | The impact of HIV infection and disease stage on the rate of weight gain and duration of refeeding and treatment in severely malnourished children in rural South African hospitals
Background. Evidence of the effects of HIV infection and clinical stage on the duration of refeeding and treatment (DRT) and the rate of weight gain (RWG) in severely malnourished children remains inconclusive. Objectives. To determine whether the RWG and DRT differ by baseline clinical characteristics, and to assess the effect of HIV status and disease stage on the relationship between these two clinical outcomes. Methods. This was a retrospective record review of 346 patients discharged between 2009 and 2013 following treatment for severe acute malnutrition (SAM) at two rural hospitals in South Africa. Results. A third of the sample was HIV-positive. The RWG (measured as g/kg/day) was significantly slower in HIV-positive patients compared with HIV-negative cases (mean 5.2, 95% confidence interval (CI) 4.47 - 5.93 v. mean 8.51; CI 7.98 - 9.05; p<0.0001), and cases at stage IV of HIV infection had a significantly slower RWG (mean 3.97; CI 2.33 - 5.61) compared with those at stages I (mean 7.64; CI 6.21 - 9.07) (p<0.0001) and II (mean 5.87; CI 4.74 - 6.99). The mean DRT was longer in HIV-positive cases and those at advanced stages of HIV infection. HIV-positive cases were renourished and treated for almost 3.5 times longer than their HIV-negative counterparts to achieve a moderate RWG (5 - 10 g/kg/day). Conclusion. This study highlights the need to reconsider energy requirements for HIV-positive cases at different clinical stages, for more rapid nutritional recovery in under-resourced settings where prolonged hospitalisation may be a challenge.
Eligibility and selection
The unit of analysis in this study was the patient's medical treatment record.The treatment records were purposefully selected by one researcher (MM) during regular visits to each hospital, based on a set of eligibility criteria.In total, 346 medical records were reviewed over the study period.The research team reviewed updated medical records at 3-month intervals during hospital visits.Medical records were eligible for review if they belonged to children aged between 6 and 60 months, who were admitted at any of the two hospitals with SAM between January 2009 and May 2013, and were discharged following treatment.Children were discharged if they: (i) completed the transition to catch up and were eating well; (ii) they had no oedema; (iii) had completed antibiotic treatment; (iv) had received electrolytes and micronutrients for at least 2 weeks; (v) their immunisation was up-to-date; and (vi) their road-to-health card had been updated.The exclusion of patients that died was logical, as most of these patients died during the first three days of admission or earlier before the stabilisation phase.Thus, their RWG could not be determined.Other inclusion criteria included having patient treatment records with clearly defined SAM syndromic classifications based on the Wellcome classification system, [14] having records showing HIV test results and HIV clinical stage for HIVpositive patients, and having had a complete treatment record while in the hospital.A comprehensive written medical examination by a doctor, and the discharge criteria followed for patients who did not die while on treatment, were also used as eligibility criteria.
Patient management and follow-up
Treatment records were accumulated over the study period following standardised treatment of patients admitted with SAM.A patient with SAM brought to the hospital was seen by a doctor in the outpatient department, where the admitting doctor provided the diagnosis and the course of treatment to be followed based on the WHO 10-step guidelines.This information was recorded in standardised patient treatment charts as the basis for follow-up treatment and for record-keeping.For patients that were admitted to the ward, their caregivers were requested to provide consent for their children to take part in the study and also so that they would both be screened for HIV infection.Children who tested HIV-positive were identified and their treatment charts set aside for HIV disease staging by the doctor during follow-up ward rounds.Based on the screening results, two broad groups were formed: group A (HIV-negative patients) and group B (HIV-positive patients).Group B was further divided into four categories based on the clinical stage of HIV infection as defined by the WHO guidelines for staging infants and children. [15]The recruitment process is summarised in Fig. 1.
All SAM patients were treated at the hospital using the recommended WHO 10-step guidelines for the management of SAM. [16] Children with SAM and HIV co-infection were referred to an HIV clinic situated within the hospital premises for initial or follow-up treatment.
Variable definition and measurement
The outcome variables in this study were DRT and RWG. DRT was defined as the total number of days - from admission to discharge - during which a SAM patient was treated for SAM and other comorbidities as per the WHO treatment guidelines. This was computed from the admission and discharge dates in the patient treatment chart. The RWG was defined as the number of grams gained per kilogram of body weight per day (g/kg/day) during the rehabilitation phase. Patients with an RWG ≤0 g/kg/day were considered as those who lost or did not gain weight, whereas those with ≤5 g/kg/day had poor weight gain, those with 5 - 10 g/kg/day had moderate weight gain, and those with >10 g/kg/day had good weight gain. The data used to compute this measure were obtained from a standardised patient weight monitoring chart which was included in the patient treatment record. (The two hospitals, A and B, were selected on account of having been the best in the region at optimally implementing the WHO guidelines for some time.) The weight-for-height z-scores were not considered as a measure of nutritional recovery, as the height data were not always recorded in the patient treatment record.
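To make the two outcome definitions concrete, the following minimal Python sketch computes DRT from the admission and discharge dates and RWG from a daily weight series; the function names and the example values are purely illustrative and are not taken from the study records.

```python
from datetime import date

def duration_of_treatment(admitted: date, discharged: date) -> int:
    """DRT: total number of days from admission to discharge."""
    return (discharged - admitted).days

def rate_of_weight_gain(weights_g):
    """RWG in g/kg/day: grams gained per kilogram of starting body weight per day.

    weights_g: daily weights in grams recorded on the monitoring chart,
    starting at the first day of the rehabilitation phase.
    """
    days = len(weights_g) - 1
    gain_g = weights_g[-1] - weights_g[0]
    start_kg = weights_g[0] / 1000.0
    return gain_g / start_kg / days

# hypothetical example values
drt = duration_of_treatment(date(2012, 3, 1), date(2012, 3, 16))      # 15 days
rwg = rate_of_weight_gain([6000, 6040, 6100, 6180, 6260])             # ~10.8 g/kg/day
print(drt, round(rwg, 1))
```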
Predictor variables and possible confounders included baseline clinical characteristics, such as SAM classification, oedema grade, dermatosis grade, presence of LRTIs, critical illness on admission, presence of other comorbidities, HIV status, and the WHO HIV/AIDS disease stage. Classification of SAM followed the Wellcome system, [14] primarily because there was evidence of inconsistent measurement of patients' height/length. HIV testing was done using the HIV polymerase chain reaction (PCR) test, following confidential and private counselling of the caregiver by a professionally trained nurse. HIV clinical staging was done by the admitting doctor as per the WHO guidelines. [15] Oedema and dermatosis were graded on admission as none, mild (+), moderate (++), and severe (+++). [17,18] LRTIs was an umbrella term used for patients with comorbidities such as pneumonia, bronchitis and other infections below the larynx. Tuberculosis was not a common comorbidity in the treatment records, which may be a result of under-diagnosis or misdiagnosis of the condition in the study setting. Critical illness and other comorbidities were defined based on clinical diagnostic information in the patients' medical records. Cases were defined as 'critically ill' if they were admitted with one or a combination of five clinical features, namely: (i) depressed conscious state (prostration or coma); (ii) bradycardia; (iii) evidence of shock with or without dehydration; (iv) hypoglycaemia; and/or (v) hypothermia, as defined by Maitland et al. [19] Other comorbidities, directly or indirectly related to SAM, were also noted, for example: lethargy, hyponatraemia and hypokalaemia, dehydration, deep acidotic breathing, anaemia and pyrexia, herbal intoxication, presence of diarrhoea, burns and other congenital dysfunctions commonly reported by the doctors in each hospital.
A structured and validated questionnaire developed by the International Malnutrition Taskforce and Muhimbili Hospital in Tanzania [20] was used for the extraction of all the data.
Data analysis
All the data were cleaned and analysed using Stata/IC 13.0 (StataCorp., Texas). Subjects' baseline clinical characteristics were summarised using frequency tables. The RWG and DRT were first inspected for normality using the Shapiro-Wilk and Shapiro-Francia tests, which revealed that they were normally distributed. The distributions of these outcomes across all nine baseline clinical profile variables were displayed using forest plots with means and 95% confidence intervals (CIs). Inter-group mean differences were assessed using one-way analysis of variance and independent-sample t-tests, as applicable. To assess whether there were significant differences between the two study sites in terms of RWG and DRT, an independent-samples t-test was used.
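The analysis itself was run in Stata/IC 13.0; purely as an illustration of the same statistical steps (normality checks, one-way ANOVA and independent-samples t-tests), an equivalent sketch in Python with scipy might look as follows, using simulated RWG values rather than the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical RWG values (g/kg/day) for three groups, e.g. HIV-negative,
# HIV-positive stage I-II, and HIV-positive stage III-IV
g1 = rng.normal(8.5, 2.0, 120)
g2 = rng.normal(6.0, 2.0, 60)
g3 = rng.normal(4.5, 2.0, 40)

# normality check (the study used Shapiro-Wilk and Shapiro-Francia)
w, p_norm = stats.shapiro(np.concatenate([g1, g2, g3]))

# one-way ANOVA across more than two groups
f, p_anova = stats.f_oneway(g1, g2, g3)

# independent-samples t-test for a two-group comparison (e.g. by HIV status)
t, p_t = stats.ttest_ind(g1, np.concatenate([g2, g3]))

print(f"Shapiro-Wilk p={p_norm:.3f}, ANOVA p={p_anova:.4f}, t-test p={p_t:.4f}")
```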
Exploratory bivariate analyses were conducted using a linear regression model to explore the relationships between each outcome variable (RWG and DRT) and the nine baseline clinical characteristics as predictors. These relationships were further explored using multivariate regression analysis. The model estimates were plotted using the coefplot command in Stata 13.0, which displayed different levels of statistical significance for each predictor variable.
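Again only as an illustration (the study used Stata's regress and coefplot commands), a bivariate and an adjusted linear model could be set up in Python with statsmodels along these lines; the variable names and values are invented.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical data frame with one row per patient record
df = pd.DataFrame({
    "rwg":     [8.2, 4.1, 7.5, 3.0, 6.8, 9.1, 5.5, 7.9],   # outcome, g/kg/day
    "hiv_pos": [0, 1, 0, 1, 0, 0, 1, 0],                    # baseline predictor
    "lrti":    [0, 0, 1, 1, 0, 1, 0, 0],                    # another baseline predictor
})

# bivariate model: RWG on HIV status alone
m1 = smf.ols("rwg ~ hiv_pos", data=df).fit()

# multivariate model adjusting for other baseline characteristics
m2 = smf.ols("rwg ~ hiv_pos + lrti", data=df).fit()

print(m1.params["hiv_pos"], m2.rsquared)   # coefficient and explained variance (R^2)
```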
To assess the relationship between the RWG and DRT, and whether this was influenced by HIV status or HIV clinical stage, a non-parametric regression analysis using a locally weighted smoothing (LOWESS) technique was used. This technique generated a locally weighted regression of the dependent variable (RWG) on the independent variable (DRT), and two-way locally weighted scatterplot smooths stratified by the different levels of HIV status and HIV clinical stage. This non-parametric method was preferred because the relationship between RWG and DRT did not appear to be linear during exploratory analysis. LOWESS was also used because it is known to generate a regression line which follows the data and, as such, provides a more accurate reflection of the relationship between the RWG and DRT. [21]
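A minimal Python analogue of this LOWESS step, using statsmodels on simulated data (the study ran LOWESS in Stata, stratified by HIV status and clinical stage):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
drt = rng.uniform(3, 40, 200)                                   # days of refeeding and treatment
rwg = 10 * (1 - np.exp(-drt / 10)) + rng.normal(0, 1.5, 200)    # toy non-linear relation

# locally weighted regression of RWG on DRT; frac controls the smoothing window
smoothed = lowess(rwg, drt, frac=0.4, return_sorted=True)
x_smooth, y_smooth = smoothed[:, 0], smoothed[:, 1]
print(x_smooth[:3], y_smooth[:3])
```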
Descriptive results
Approximately 88% of the study records for children who were discharged during the study period met the eligibility criteria and were included in this study. Subjects' baseline clinical characteristics are presented in Table 1, which shows that 33.8% of SAM patients who survived and were discharged were HIV-positive, 15% were admitted in a critical condition, 28% had other comorbidities and 20% had LRTIs. A large proportion (86%) were younger than 25 months, and 42% were admitted with kwashiorkor, whereas 33% were admitted with marasmus. It was noteworthy that 28% and 8% of patients were at stages III and IV of HIV infection, respectively.
Inferential results
The comparison of the two hospitals in terms of the distribution of the RWG revealed that there were no statistically significant differences (mean (standard deviation (SD)) 7.788 (3.121) v. 7.186 (3.421) g/kg/day; p=0.236). The means for DRT were also not statistically different (13.15 (3.794) v. 12.011 (3.324) days; p=0.052). Pooled analyses were therefore carried out to determine the distribution of each of the two outcome indicators across various clinical characteristics at baseline, as shown in Figs 2 and 3. The mean RWG was slower with advanced HIV disease stage (p<0.0001), as shown in Fig. 2. Similarly, HIV-positive patients attained a much slower RWG compared with their HIV-negative counterparts (p<0.0001), as did marasmic patients compared with those with kwashiorkor and marasmic kwashiorkor, although this difference was not statistically significant (p=0.233). Patients who were admitted with other comorbidities, and those who were critically ill, attained a slower RWG than those who were not, but these differences were also not statistically significant (p=0.169 and p=0.102, respectively). The overall mean RWG was 7.38 g/kg/day (95% CI 6.91 - 7.84).
All inter-group differences in DRT were not statistically significant except for HIV status (Fig. 3). HIV-positive patients were hospitalised for notably longer periods (mean 18.59 days; 95% CI 16.96 - 20.22) than their HIV-negative counterparts (mean 14.07 days; 95% CI 13.17 - 14.97). However, there were some patterns of differences in other predictor variables which are worth noting despite the lack of statistical significance. Marasmic SAM patients who were discharged remained on treatment for longer periods compared with those who were classified as having kwashiorkor or marasmic kwashiorkor. Patients without oedema or with mild oedema (+) stayed a little longer than those with moderate (++) and severe oedema (+++), but there were no notable differences in respect of dermatosis grade. The average length of stay was also longer for SAM patients at stage IV of HIV infection compared with other clinical stages. The mean DRTs for all HIV clinical stages were higher than the overall mean DRT for the study sample, which was 15.6 days (95% CI 14.76 - 16.44).
Table 2 shows the bivariate relationship between patients' baseline clinical profile and each of the two outcome variables in this study.
As shown in Table 2, the DRT differed significantly among SAM patients depending on their SAM syndromic classification. Marasmic patients stayed significantly longer in hospital than kwashiorkor and marasmic kwashiorkor patients (p=0.032 and p=0.001, respectively). HIV-positive patients, most of whom were marasmic, stayed longer in hospital by four days compared with their HIV-negative counterparts (p<0.0001), whereas HIV-positive patients at stage IV stayed longer by nine days compared with those at stage I (p=0.004). Other baseline clinical characteristics were not significantly associated with the DRT.
With regard to the RWG, HIV status and HIV clinical stage were the only clinical characteristics significantly associated with the RWG at the bivariate level. HIV-positive patients achieved a slower RWG, by 3.3 g/kg/day, compared with HIV-negative patients (p<0.0001). Similarly, HIV-positive patients at stages IV, III and II attained a slower RWG, by 3, 5 and 1 g/kg/day respectively, compared with those at stage I; these results were statistically significant (p=0.006, p<0.0001 and p=0.032, respectively).
The multivariate model showed that HIV clinical stage was the only statistically significant predictor of RWG and DRT after adjusting for all other predictors in the model. Together, the predictor variables in the multivariate model explained 33% of the variability in the RWG and 26% of the variability in the DRT. The unexplained variance was most likely due to unmeasured confounders.
Since none of the predictors except HIV disease stage was significantly associated with the RWG and the DRT in the multivariable model, a multivariate LOWESS regression (MLOWESS) was not necessary to determine the adjusted relationship between the two outcome variables. A bivariate LOWESS regression was therefore used, and the results are presented in Figs 4 and 5. There were notable differences in the RWG between HIV-positive and HIV-negative SAM patients, as shown in Fig. 4. The locally weighted smooths predicted that, while HIV-negative patients who were on treatment for at least 10 days achieved an RWG of around 7.5 g/kg/day, those who were HIV-positive only attained a rate of 3.5 g/kg/day over the same time period (as shown by the vertical dotted lines in Fig. 4). For HIV-negative patients, a moderate RWG (5 - 10 g/kg/day) was achieved by patients who received refeeding and treatment for at least 5 days, whereas HIV-positive patients who attained the same RWG had to receive refeeding and treatment for at least 17 days. However, this analysis did not consider the SAM patients who had a negative RWG; there were 4 such patients from the two facilities that were extreme outliers and distorted the position of the locally weighted smoothed lines significantly. It is also important to note that the RWG was consistently higher among HIV-negative patients than among HIV-positive patients across all time intervals.
The locally weighted smoothed regression lines in Fig. 5 show that HIV-positive SAM patients who were at stage I achieved a faster RWG in a relatively shorter period of refeeding and treatment compared with those who were at advanced stages of HIV infection.
Discussion
The results of the relationship between HIV status and the RWG both confirm and refute evidence from past research. The current study showed that, on average, HIV-negative SAM patients recorded a better RWG than their HIV-positive counterparts. The results were similar in both hospitals. Savadogo et al. [22] also found similar relationships between HIV status and the RWG; however, unlike the present study, they used the median RWG as a measure of the distribution of the RWG by HIV status. Their results revealed that HIV-positive SAM patients achieved a median of 4.64 g/kg/day v. 9.04 g/kg/day for HIV-negative patients. Several other studies [2,5,10] have also confirmed this relationship. However, Fergusson et al. [11] reported similar RWGs between HIV-positive and -negative SAM patients (mean 8.0 v. 8.9 g/kg/day, respectively). In the present study, the poorer nutritional recovery observed among HIV-positive SAM patients may, in part, be a result of metabolic changes associated with HIV infection, which impact on the nutritional status of the child. These changes include, for example, hyper-metabolism of energy stores, nutrient losses and malabsorption as a result of inflammation of the gastrointestinal tract, reduced bioavailability of certain nutrients, and altered nutrient utilisation. [23] Poor appetite, which results in inadequate nutrient intake, has also been documented. [12] HIV-positive patients tend to present with severe oral and oesophageal candidiasis, which undermines therapeutic feeding efforts. [24] This finding begs the question of whether a much more aggressive therapeutic feeding approach and treatment modality may be required for HIV-positive SAM patients with associated comorbidities, to counteract the pathophysiological and metabolic challenges that HIV infection presents among SAM patients. The finding related to the relationship between DRT and HIV status agrees with results from a study by Madec et al., [25] who showed that the duration of refeeding was much longer among HIV-positive patients (mean 22 days) than in HIV-negative patients (mean 12 days). However, these estimates were larger than those found in our study, which recorded means of 14.07 and 18.59 days for HIV-negative and HIV-positive SAM patients, respectively. These differences may be related to concomitant differences in the discharge criteria set out in each study. The study by Madec et al. [25] seems to imply that the minimum number of days required to achieve good nutritional recovery is roughly 22 for HIV-positive SAM patients and 12 for HIV-negative patients. However, neither the present study nor that of Madec et al. was able to provide precise quantifiable targets, such as the time taken to achieve oedema-free weight-for-height z-scores. In the present study, weight-for-height z-scores were not used, as the medical records did not always have data on patient length and height. Perhaps the most important contribution of our study to the literature is the estimation of the relationship between HIV disease stage and the RWG. The study showed that the mean RWG became smaller with advanced HIV disease stage. The relationship between the RWG and HIV disease stage can be explained in light of a randomised controlled trial which demonstrated that half the children hospitalised for SAM developed oedema after starting antiretroviral therapy (ART). [26]
Oedema may be associated with a slower RWG, as children with oedema have to lose weight during the rehabilitation phase before they gain non-oedema-associated weight. Another possible explanation for this observation is that oedematous children are often more ill and unable to adequately metabolise nutrients. The evidence around this physiological process is still poorly understood.
Another key finding from this study was the estimation of the relationship between the RWG and DRT and how these variables can be influenced by HIV status and disease stage. The non-parametric regression and scatter plot smooths estimated that the trajectory to a better RWG was faster and consistently higher among HIV-negative SAM patients compared with their HIV-positive counterparts. To our knowledge, this finding has not been documented elsewhere in the literature and may need to be verified in future studies, within a variety of contexts. Nevertheless, against the backdrop of this study, where resources for prolonged management of SAM patients may be relatively scarce, the fact that HIV-positive SAM patients took longer to attain the same RWG as their HIV-negative counterparts has some practical implications to consider. To optimise outcomes in respect of nutritional recovery, it may be important to prioritise resources for HIV-positive SAM patients, particularly the availability of hospital beds and therapeutic feeds, in addition to medication stock for SAM-related comorbidities.
Study limitations
The measurement of quality of care for SAM patients and its relationship with the outcome variables (RWG and DRT) was beyond the scope of this study and is encouraged in future research. However, it was encouraging to learn that there were no statistically significant differences between the two hospitals in terms of the distribution of the two outcomes and how they were related to the predictor variables. Furthermore, only 88% of the available medical records for children who were discharged during the study period met all the eligibility criteria for record review. It is not known what the remaining 12% would have contributed to the direction and strength of the relationships presented in this study. There is also limited generalisability of the results presented here, as the study was conducted in purposefully selected facilities where the implementation of the WHO treatment modality for SAM was presumed optimal. Last, patient records did not always indicate whether the study subjects were already on ART at admission, or for how long they had been on treatment. This information could not be verified, since the study involved a retrospective record review. It would have constituted an important variable to assess as a potential confounder or predictor of the RWG and DRT. Given the design limitations of this study, the recommendations made in this article in relation to the WHO protocol should not be considered definitive but rather suggestive.
Conclusions
The findings from this study suggest that nutritional recovery is, in part, a function of HIV status, HIV disease stage and the duration of refeeding. Our findings raise some important topics to be explored in future research, including, for example, the determination of differential energy requirements among SAM patients depending on their HIV status. Such studies could also explore the optimal choice of therapeutic feeds during the transition phase for HIV-positive SAM patients, and how long it takes SAM patients, with or without HIV infection, to achieve specific targets for nutritional recovery in terms of weight-for-height z-scores.
Fig. 1. Flow chart of the participant recruitment and data extraction process.
Fig. 3. Distribution of the duration of refeeding and treatment (DRT) by HIV status, HIV disease stage and other baseline clinical characteristics: pooled analysis based on patients who were discharged (2009 - 2013).
Fig. 4. Relationship between rate of weight gain (RWG) and duration of refeeding and treatment (DRT) by HIV status: two-way scatter plot with locally weighted smoothed regression lines and a linear plot overlay.
Fig. 5.
Table 1. Characterisation of SAM patients by baseline clinical profile (N=346)
| 2018-12-12T08:29:26.344Z | 2017-07-05T00:00:00.000 | {
"year": 2017,
"sha1": "265dc016739a3e3d815084eb14013b4b2d261944",
"oa_license": "CCBYNC",
"oa_url": "http://www.sajch.org.za/index.php/SAJCH/article/download/1374/775",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "265dc016739a3e3d815084eb14013b4b2d261944",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219689413 | pes2o/s2orc | v3-fos-license | Insights into glycan import by a prominent gut symbiont
In Bacteroidetes, one of the dominant phyla of the mammalian gut, active uptake of large nutrients across the outer membrane is mediated by SusCD protein complexes via a “pedal bin” transport mechanism. However, many features of SusCD function in glycan uptake remain unclear, including ligand binding, the role of the SusD lid and the size limit for substrate transport. Here we characterise the β2,6 fructo-oligosaccharide (FOS) importing SusCD from Bacteroides thetaiotaomicron (Bt1762-Bt1763) to shed light on SusCD function. Co-crystal structures reveal residues involved in glycan recognition and suggest that the large binding cavity can accommodate several substrate molecules, each up to ∼2.5 kDa in size, a finding supported by native mass spectrometry and isothermal titration calorimetry. Mutational studies in vivo provide functional insights into the key structural features of the SusCD apparatus and cryo-EM of the intact dimeric SusCD complex reveals several distinct states of the transporter, directly visualising the dynamics of the pedal bin transport mechanism.
Introduction
The human large intestine is home to a complex microbial community, known as the gut microbiota, which plays a key role in host biology 1-3 . One such role is to mediate the breakdown [...] transporters (TBDTs) known as SusCs (we propose to re-purpose the term "Sus" for saccharide uptake system rather than starch utilisation system) 7,12,13,15 . SusC proteins are unique amongst TBDTs in that they are tightly associated with a SusD substrate binding lipoprotein [15][16][17] (Fig. 1a). Recently we showed that SusCD complexes mediate substrate uptake via a "pedal bin" mechanism 15,17 . The SusC transporter forms the barrel of the bin, [...]
A FOS co-crystal structure reveals SusCD residues involved in ligand binding
In the B. theta levan PUL, the periplasmic enzymes are GH32 exo-acting fructosidases that release fructose from the imported β2,6 FOS (Fig. 1a) 19 . [...]
A previous crystal structure of Bt1762-63 obtained in the absence of levan substrate revealed a dimeric (SusC2D2) closed state in which the SusC TonB-dependent transporter (Bt1763) lacked the plug domain as a result of proteolytic cleavage 15 . We have now obtained a structure without substrate using a preparation that did not suffer from proteolysis (Fig. 1b,
left panel;
Supplementary Table 1). Interestingly, while this structure is very similar to that reported earlier, density for the plug domain is weak but clearly present. While this suggests that the plug has been ejected from the barrel in the majority of transporter molecules in the crystal, the relatively poor fit of the density with the native plug domain suggests increased dynamics of the in situ plug domain in the subset of transporters that contain a plug (Extended Data Fig. 1).
126
To provide further insight into glycan recognition and transport by SusCD complexes, we next 127 used the same protein preparation to determine a co-crystal structure using data to 3.1 Å 128 resolution with b2,6-linked FOS. The FOS were generated by partial digestion of levan by 129 Bt1760 endo-levanase, followed by size exclusion chromatography (SEC) and analysis by 130 thin-layer chromatography (TLC) and mass spectrometry (MS;; Methods). In this structure, 131 containing FOS with a wide range of sizes (~DP15-25), the plug domain is present with normal 132 occupancy, suggesting that it is more stable in the presence of substrate (Fig. 1b, middle panel 133 and Extended Data Fig. 1). Like the oligopeptide ligands in the RagAB and Bt2261-64 134 structures 15,17 , the FOS is bound at the top of a large, solvent-excluded cavity formed by the 135 Bt1762-63 complex. Density for seven b2,6-linked fructose units can unambiguously be 136 assigned in the structure and this was designated as the primary binding site (Fig. 2a). The 137 bound oligosaccharide is compact and has a twisted, somewhat helical conformation. The 138 ligand makes numerous polar contacts with side chains of residues in both Bt1762 and Bt1763 139 ( Fig. 2b). For Bt1762 (SusD) these residues are D41, N43, D67, R368 and Y395, and for 140 Bt1763 (SusC) T380, D383, D406 and N901. In addition, prominent stacking interactions are 141 present between the ring of fructose 2 (Frc 2) and W85 of Bt1762. Interestingly, a β2,1 142 decoration is present in the bound ligand at Frc 4, and the branch point interacts with the 143 extensive non-polar surface provided by the vicinal disulphide between Cys298 and Cys299 144 of Bt1762 (Fig. 2b).
146
We also determined a co-crystal structure of Bt1762-63 with shorter β2,6 FOS (~DP6-12) 147 using data to 2.69 Å resolution (Supplementary Table 1 and W483 in the barrel wall, and with H169 and E170 in the plug domain (Fig. 2c). The fit to 153 the density is better for a 3-mer with a β2,1 decoration compared to a b2,6-linked 4-mer, 154 suggesting the transporter may have some specificity for FOS with a β2,1 decoration, or 155 alternatively, that Erwinia levan contains extensive β2,1 decorations such that most of the 156 levanase products are branched. The relatively small size of the co-crystallised FOS, 157 combined with the relative orientation and the large distance between FOS1 and FOS2 (> 20 158 Å;; Fig. 1c), makes it highly plausible that there are two ligand molecules in the Bt1762-63 159 cavity. The co-crystal structure with the longer FOS also shows some density at the secondary 160 site, but it is of insufficient quality to allow model building, perhaps due to the lower resolution. and is colored black, with sidechains displayed as sticks. The equivalent region in the closed 213 structure is not visible and is therefore assumed to be disordered and likely protrudes from the 214 barrel, leaving the Ton box accessible to TonB. The visible density for the N-terminus starts 215 at residue 96 of the "closed" plug and at residue 84 for the "open" plug. Cryo-EM figures were 216 made with ChimeraX 52 . 217 218 The established model of TonB-dependent transport 24 assumes that extracellular substrate 219 binding to a site that includes residues from the plug domain induces a conformational change 220 of hinge1 (Methods) caused little to no growth defect (Fig. 4b), which is surprising given that 270 Bt1762-63Dhinge1 expression is barely detectable (Figs. 4d,e). By contrast, the Dhinge2 strain 271 showed a complete lack of growth during the 24 h monitoring period, but expression of this 272 mutant was also very low (Figs. 4b,d,e). Surprisingly, a strain in which both hinges were 273 deleted (Dhinge1&2), grew similarly to the DSusD strain, i.e. after a ~8 hr lag phase (Fig. 3b).
275
TonB box and N-terminal extension mutants: SusC-like proteins are predicted to be TonB-276 dependent transporters (TBDTs), but direct evidence for this is lacking. We therefore 277 examined the importance of the putative TonB box located at the N-terminus of Bt1763. In Supplementary Tables 3 and 4). The NTE 304 structure shows a well-defined core of an Ig-like fold with a 7-stranded barrel (Fig. 5c, left 305 panel). The N-terminus (including the His-tag) and the C-terminus, corresponding to the Ton 306 box, are flexibly unstructured, as evidenced from their random coil chemical shifts and the 307 absence of long-range NOEs ( Fig. 5b and Fig. 5c Interestingly, the structure of the FoxA STN in complex with the CTD of TonB shows that the 329 STN is also composed of a small barrel with seven elements, some of which are helical instead 330 of strands (Fig. 5c) 27 . This similarity suggests that, like the STN, the NTE might interact with a 331 protein in the periplasmic space. In both domains, the Ton box is separated from the domain 332 body and will thus be accessible to binding by the C-terminal domain of TonB. One possibility 333 for a role for the NTE could be to provide interaction specificity for the multiple TonB orthologs 334 present in the B. theta genome. saccharide displaying no affinity for the transporter (Fig. 6, Extended Data Fig. 8 and 357 Supplementary Table 5). For the larger FOS, affinity increased from DP5 to 6 (Kd ~30 and 17 358 µM, respectively) and plateaued at DP8 (tube 174, T174 Kd ~1 µM), with Bt1762-63 binding 359 to all FOS between DP8 and at least DP13-14 (T115) with similar affinity. These data are in 360 broad agreement with the co-crystal structures, which show well-defined density for 7 fructose 361 units in the primary binding site, suggesting that these provide the bulk of the binding 362 interactions. Surprisingly, no binding was detected for the FOS in SEC fractions T114 and 363 T113, despite these fractions having similar MS profiles to T115 with a broad range of 364 oligosaccharides present (Fig. 6 and Extended Data Fig. 7). The average MW of the FOS in 365 tube T114 (Mn >2666) is larger than that of T115 (Mn >2193;; Extended Data Fig. 7), and it may 366 be that this increase in average size is enough to preclude binding to the transporter. 367 Furthermore, based on the co-crystal structure we can see that at least some, and perhaps 368 all, of the bound Erwinia levan-derived FOS has a b2,1 decoration, which may influence 369 binding to Bt1762-63. However, it was not possible to identify b2,1 decorations in the TLC or 370 MS analysis. Thus, T115 FOS might contain significantly more branched species than T114 371 and this could explain a higher affinity for the T115 fraction. Taken together, however, these 372 data indicate there is both an upper and lower size limit for FOS binding to the Bt1762-63 373 transporter in vitro, with the lower limit being DP5 and the upper limit ~DP15. In addition to 374 wild-type Bt1762-63, we also measured binding of ~DP9 FOS to the Bt1762(W85A)-63 variant 375 (Fig. 6). Surprisingly, no binding is observed for the mutant, even though the Bt1762W85A-63 376 strain grows as well as wild type on levan (Fig. 3a), suggesting that FOS binding by Bt1762 is 377 not essential for Bt1762-63 function in vivo. The complexity of the binding pattern in the spectrum is consistent with polydispersity of the 398 T114 and T115 fractions (Extended Data Fig. 9). 
More useful insights were obtained with the 399 T159 sample, which consists mainly of FOS with 8-10 fructose units (Fig. 7 and Extended 400 Data Fig. 7b). These medium-chain oligosaccharides bind preferentially to the intact SusC2D2 401 dimer rather than to the SusCD monomer such that no ligand-free dimer was evident in the 402 spectrum, potentially suggesting some kind of cooperativity for ligand binding in the dimer. 403 Interestingly, the relative proportions of protein-bound FOS mirrored their abundance in the 404 T159 sample (Fig. 7b), supporting the similar affinities of FOS with 8-10 fructose units for 405 Bt1762-63 as measured by ITC ( Fig. 6
and Supplementary Table 5). At the higher FOS 406
concentrations, binding of more than one FOS molecule per SusCD transporter was observed 407 (Fig. 7b), confirming the observation from our co-crystal structure that more than one ligand 408 molecule can be present in the binding cavity, at least for the relatively small FOS.
410
Finally, we wanted to confirm the upper FOS size limit in vivo by using testing growth of a 411 strain lacking the surface endo-levanase BT1760 against FOS of different sizes as the sole 412 carbon source. The D1760 strain was previously reported to lack the ability to grow on levan 19 , 413 which would provide another indication that the Bt1762-63 complex cannot import high 414 molecular weight substrates. Surprisingly, however, the growth rate of the D1760 strain on 415 levan from several different sources was similar or only slightly slower than that of the wild 416 type strain (Extended Data Fig. 10). PCR of the ∆1760 cells taken from stationary phase of 417 the cultures confirmed the deletion of the BT1760 gene from the cells, indicating the phenotype 418 was not due to contamination with wild-type strain (Extended Data Fig. 10). These data 419 suggest that all the levans tested contained enough low DP FOS to allow growth without 420 needing digestion by the surface endo-levanase. It was therefore not possible to determine 421 an upper FOS size limit of the Bt1762-63 importer in vivo. native MS data, it is likely that this total mass would comprise several individual molecules, 456 rather than one large molecule. As there are unlikely to be large structural differences among 457 SusCD-like systems, we suggest ~5 kDa as a general total size limit for these transporters, 458 which is consistent with recent data for the archetypal Sus 18 .
460
Our structures that show FOS in the principal binding site at the Bt1762-63 interface raise an 461 important question: how is ligand occupancy relayed to the plug domain, and how does this 462 lead to increased accessibility of the Ton box? This key issue is likely unique to SusCD 463 systems, in particular those SusCs without the long plug loop present in e.g. Bt2264 15 that is 464 able to contact ligand in the principal binding site and relay binding site occupancy directly to 465 the plug domain (Extended Data Fig. 11). In Bt1763, the smallest distance between the visible 466 part of the substrate in the principal binding site and the plug is 15 Å, and so an optimal-sized 467 FOS (in terms of binding affinity) of ~DP8-12 could not contact the plug directly. The presence 468 of a second substrate molecule at the bottom of the binding cavity (Fig. 1b) might be a way to 469 overcome this problem, implying a mechanism in which the binding cavity "bin" is filled first via 470 two or more substrate binding-release cycles to provide plug contacts, that collectively 471 increase accessibility of the Ton box and binding to the CTD of TonB.
473
The substrates for the transporter are generated by the combined action of the Bt1760 endo-474 levanase and the Bt1761 surface glycan binding protein (SGBP; ; Fig. 1a) transiently associating with the SusCD core complex 35,36 . Besides depending on the type of 480 levan 23 , the FOS sizes delivered to Bt1762-63 will depend critically on the binding kinetics of 481 the Bt1760 levanase and on the proximity of Bt1761: a close association between the two 482 would most likely favour production of uniformly-sized FOS of relatively small size which, as 483 we have shown, are preferred substrates. Likewise, a close association between the enzyme 484 and Bt1762-63 will enhance capture of the generated FOS by SusD and subsequent delivery 485 to SusC. With regards to this last step, it is interesting to note that, in contrast to in vitro 486 conditions, the substrate binding function by Bt1762 is not necessary in vivo (Fig. 4 and Fig. 487 Our structural data provide important clues about the function of SusD and about glycan import 508 in general (Fig. 8). The basis for these clues is the unprecedented observation that closed, 509 but empty transporters lack the entire plug domain. This spontaneous expulsion of the plug is 510 likely to be non-physiological and caused by a loss of lateral membrane pressure due to 511 detergent solubilisation. Nevertheless, it does suggest that Bt1762 lid closure causes 512 conformational changes within the Bt1763 barrel that decrease the "affinity" of the plug for the 513 barrel. This may facilitate the removal of the entire plug domain from the barrel by TonB action, 514 as opposed to local unfolding and formation of a relatively narrow channel as has been 515 proposed for non-Sus TBDTs 38-40 . To prevent plug removal in the absence of substrate 516 resulting in futile transport cycles, we postulate that only the direct contact of substrate with 517 the plug (as observed in the co-crystal structure with FOS2 and the in BT2263/64-peptide 518 complex 15 ) leads to increased accessibility of the TonB box and interaction with TonB (Fig. 8). 519 In our model, the impermeability of the OM, which otherwise would be compromised due to 520 the formation of a very large channel of ~20-25 Å diameter, would be preserved by the seal 521 provided by the closed SusD lid. Upon reinsertion of the plug, the transporter would revert 522 back to its open state (Fig. 8). The most important function of SusD proteins during glycan 523 import may therefore be to provide a seal to preserve the OM permeability barrier. Na2CO3 1 mg/ml, cysteine 0.5 mg/ml, KPO4 100 mM, vitamin K 1 µg/ml, FeSO4 4 µg/ml, 560 vitamin B12 5 ng/ml, mineral salts 50 µl/ml (NaCl 0.9 mg/ml, CaCl2 26.5 µg/ml, MgCl2 20 µg/ml, 561 MnCl2 10 µg/ml and CoCl2 10 µg/ml) and hematin 1 µg/ml. These cultures were supplemented 562 Fig. 2). Invariably, the closed 671 position of BT1762 was associated with an absence of density for the plug domain of BT1763. 672 Global 3D classification was unable to distinguish 'true' closed conformations from those 673 where BT1762 occupied a marginally open state. As a result, a masked 3D classification 674 approach was employed to achieve homogeneous particle stacks. The masked classification 675 was performed without image alignment and, since the region of interest is relatively small, 676 the regularization parameter, T, was set to 20. Intermediate results and further details are 677 provided in Extended Data Fig. 12. 
Clean particle stacks for the three principle conformational 678 states were subject to multiple rounds of CTF refinement and Bayesian polishing 49 . C2 679 symmetry was applied to both the OO and CC reconstructions. Post-processing was 680 performed using soft masks and yielded reconstructions for the OO, OC, and CC states of 3.9, 681 4.7 and 4.2 Å respectively, as estimated by gold standard Fourier Shell correlations using the 682 0.143 criterion.
684
Model building into cryoEM maps. Comparing the maps to the crystal structure of BT1762-BT1763 revealed that their handedness was incorrect. Maps were therefore Z-flipped in UCSF Chimera 52,53 . The reconstruction of the OO state was of sufficient resolution for model building and refinement. Bt1762 and Bt1763 subunits from the crystal structure were independently rigid-body fit to the local resolution filtered map and later subjected to several iterations of manual refinement in COOT 44 and 'real space refinement' in Phenix 45 . The asymmetric unit was symmetrised in Chimera after each iteration. Molprobity 46 was used for model validation. The reconstructions of the OC and CC states were of insufficient resolution to permit model building and refinement owing to low particle numbers and a poor distribution of viewing angles respectively. Instead, the crystal structure of Bt1762-Bt1763 was rigid-body fit to the CC state. The ligand was removed from the model and an inspection in COOT showed that no density extended past Lys213 in the direction of the N-terminus. All residues N-terminal of Lys213 were therefore removed from the model before rigid-body fitting. The open state from the OO EM structure and the closed state from the crystal structure (modified as described above) were rigid-body fit to their corresponding densities in the OC state. Rigid-body fitting was performed in Phenix. | 2020-06-16T13:11:16.120Z | 2020-06-11T00:00:00.000 | {
"year": 2020,
"sha1": "95403aab650c3454196b5d28d785fd48264574e0",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-020-20285-y.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "95403aab650c3454196b5d28d785fd48264574e0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Chemistry"
]
} |
68402061 | pes2o/s2orc | v3-fos-license | The effect of honey on the blood and urine glucose levels of wistar rats
The diabetogenic activity of honey was evaluated in a group of 40 wistar rats divided into experimental and normal control groups of 10 rats each. Experimental groups of male and female rats were administered honey at 100 g/kg of body weight through feed for 4 weeks, while control groups were given normal rat chow. Data were analysed using the Student's t-test. Fasting blood glucose and body weight were statistically significantly higher in the experimental groups; no glucose was detected in the urine of either control or experimental groups. This work suggests that continuous ingestion of honey could lead to the establishment of type II DM, and as such care must be exercised in substituting it for refined sugar.
INTRODUCTION
Honey is fast becoming a common food item in Nigeria, especially with increasing knowledge of its superior nutritional advantage over processed sugar. According to the National Honey Board (2003), honey is a sweet and viscous fluid produced by honey bees from the nectar of flowers. It is a pure product that does not allow for the addition of any other substances. Humans have been consuming honey, which contains about 60% fructose, for hundreds of years without considering its health significance. What is disturbing is that most individuals now take honey as a substitute for refined sugar as a way of controlling or preventing diabetes mellitus. Yet the incidence and prevalence of DM is on the increase.
Although there is little evidence that modest amounts of honey have detrimental effects on carbohydrate and lipid metabolism, larger doses have been associated with numerous metabolic abnormalities in laboratory animals and humans, suggesting that high consumption of honey may adversely affect health (Halltis, 1990; Henry et al., 1991).
Key words: Honey, Glucose levels, Wistar rats.
Blood glucose is closely regulated because of its importance to the brain, retina, germinal epithelium of the gonads and red blood cells. Thus there is a need to maintain blood glucose at a particular set point of about 100 mg%, as higher or lower values would expose the individual to health problems.
The indication that consumption of honey may affect the metabolism of lipids and carbohydrates calls for proper investigation into the diabetogenic effect of honey, in order to give appropriate advice on its consumption. This work aims to find out the effect of chronic honey consumption on the fasting blood glucose level and the urine glucose level of wistar rats.
Materials
Honey was purchased in the local market in Abraka, Delta State, Nigeria. Twenty wistar rats, 10 males and 10 females weighing an average of 121 g, were divided into 4 groups: two male groups, MC and ME, and two female groups, FC and FE. MC and FC were control groups and were fed normal rat chow and water ad libitum. ME and FE were experimental groups and were given 100 g/kg body weight of honey in feed, and water ad libitum, for 5 weeks.
Procedure
Animals were weighed before the commencement of the experiment and subsequently every week until the end of the experiment.
At the end of the administration period, blood samples were collected via cardiac puncture under chloroform anaesthesia after an overnight fast. About 5 ml of blood was collected into containers containing sodium oxalate; samples were centrifuged and the serum obtained was assayed.
Fasting blood glucose level was determined by the enzymatic colorimetric method of Trinder (1966).
Statistics - Student's t-test and ANOVA were used to analyse the data.
RESULTS AND DISCUSSION
Our results show a statistically significant (P<0.01) increase in the fasting blood glucose of male and female experimental rats, with male rats being more affected. Urine glucose was negative in all groups. This suggests that chronic consumption of honey may have a diabetogenic effect. This is in keeping with the work of Thorens et al. (1990) and Al-Waili and Boni (2007), who reported that an increase in the concentration of honey as a result of increased consumption, with subsequent metabolism of glucose-containing disaccharides, may lead to hyperglycaemia with its attendant effects on the pancreatic islets, liver, blood and urine. Hyperglycaemia reduces the expression of beta-cell-specific glucose transporter isoforms, and the extent of reduction correlates with the severity of hyperglycaemia. From this viewpoint, excess consumption of honey may not be of benefit but may result in the establishment or aggravation of diabetes mellitus.
We also observed an increase in weekly weight gain compared to control. However, in the 3rd week, when honey was withdrawn, there was a sudden and severe drop in weight, which increased again when honey was reintroduced in the fourth and fifth weeks. It is our view that this increase in weight may have a negative impact, as obesity is one of the strongest predisposing factors in metabolic syndrome. Thus the increase in weight of persons who continuously consume honey may predispose them to obesity and, coupled with hyperglycaemia and possible dyslipidaemia, may result in metabolic syndrome (syndrome X). We therefore advocate that the use of honey as an alternative to refined sugar in diabetics, or in those prone to diabetes, should be approached with caution; that is, it should not be taken continuously and should be taken only in very small quantities. | 2019-05-20T06:46:22.589Z | 2017-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "ef973cea02e9fdc4c0ed9154cfdd8c139fb69687",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.13005/bbra",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ef973cea02e9fdc4c0ed9154cfdd8c139fb69687",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
212725177 | pes2o/s2orc | v3-fos-license | MACHO 311.37557.169: A VY Scl star
Optical surveys, such as the MACHO project, often uncover variable stars whose classification requires follow-up observations by other instruments. We performed X-ray spectroscopy and photometry of the unusual variable star MACHO 311.37557.169 with XMM-Newton in April 2018, supplemented by archival X-ray and optical spectrographic data. The star has a bolometric X-ray luminosity of about $1\times 10^{32}$ erg s$^{-1}$ and a heavily absorbed two-temperature plasma spectrum. The shape of its light curve, its overall brightness, its X-ray spectrum, and the emission lines in its optical spectrum suggest that it is most likely a VY Scl cataclysmic variable.
Introduction
The MACHO survey (Alcock et al., 1992, 1997, 2000) was a two-colour photometric study of several million stars in the Magellanic Clouds and the Galactic Bulge that aimed to spot gravitational lensing events associated with massive free-floating bodies in the Galactic halo. A useful byproduct was the discovery of numerous intrinsically variable stars in the southern sky (e.g. Cieslinski et al. 2004), many of which still require classification.
Our target was first observed as a variable star of unknown nature by Hoffleit (1972), ranging between visual magnitudes 16.2 and 14.8 on photographic plates, and was eventually designated NSV 10530 (e.g. Samus et al. 2004). It was later observed during the MACHO survey, under the designation MACHO 311.37557.169 (henceforth M311, RA 18 18 41.7, DEC -23 56 21.10), and was identified as a possible cataclysmic variable by Zaniewski et al. (2005). Those authors were searching for R CrB stars, obtained optical spectra of numerous candidates, and excluded this object because it exhibits the conspicuous Balmer lines that R CrBs lack. Instead, they hypothesised that it is an AM Her star. The Gaia Data Release 2 (Bailer-Jones et al., 2018; Gaia Collaboration et al., 2018) gives a parallax of 0.723 ± 0.054 mas for M311, providing a distance estimate of 1.34 +0.11 −0.09 kpc. The star is listed as bright as m V = 14.8 in DR10 of the APASS catalogue (Henden et al., 2018), at m V = 15.7, m I = 15.2 in the OGLE-III database (Szymański et al., 2011), and at around m V = 14.8 in ASASSN (Kochanek et al., 2017; Shappee et al., 2014). M311 has been observed by XMM-Newton on two occasions. In 2006 it was spotted serendipitously in an observation of the pulsar candidate AX J1817.6−2401, in which M311 is visible far off-axis, about 11.7 arcminutes, with the MOS1 and MOS2 cameras, and was thus designated as 3XMM J181841.7-235618 in the 3XMM catalogue (Rosen et al., 2016). It is unfortunately outside the fields of view of the EPIC-pn and Optical Monitor. A second, targeted, pointing was performed in 2018 by XMM-Newton, and we additionally found a short serendipitous observation of its field by Swift from Aug 2017.
We here present an analysis of the X-ray data, the MACHO light curve, and the optical spectrum. Our aim is to classify M311.
Optical spectrum
Zaniewski et al. (2005) obtained an identification spectrum of M311 with the LDSS2 spectrograph of the Magellan telescope at Las Campanas, which they generously made available to us. The 900 s observation was performed on 2003 May 10. Unfortunately no standard star spectra are available for this observation, so we are unable to perform a proper flux calibration. We were, however, able to get an adequate wavelength calibration by identifying hydrogen Balmer lines in the spectra by eye and fitting their known wavelengths to the CCD pixel values with a low-order polynomial. The spectrum is shown in Figure 1.
The hydrogen Balmer lines are very prominent and the helium lines, though clearly present, are quite faint. These features are reminiscent of a cataclysmic variable (CV).
Optical photometry
We downloaded the light curves of M311 from the MACHO survey website and corrected the timings to the solar system barycenter using the algorithms of Eastman et al. (2010). The MACHO blue and red filter magnitudes were converted to Johnson V and Kron-Cousins R magnitudes using the conversion formulae in Popowski et al. (2003). The light curves and colours are shown in Figure 2. We also include absolute magnitudes, derived from the Gaia distance. These have not been corrected for Galactic extinction.
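As an aside, an equivalent barycentric timing correction can be made with astropy; this sketch uses approximate Mount Stromlo coordinates for the MACHO telescope and placeholder Julian dates, and is not the Eastman et al. implementation used here.

```python
import numpy as np
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation

target = SkyCoord("18h18m41.7s", "-23d56m21.1s", frame="icrs")
# approximate site of the MACHO 1.27 m telescope (Mount Stromlo, Australia)
site = EarthLocation.from_geodetic(lon=149.01 * u.deg, lat=-35.32 * u.deg,
                                   height=770 * u.m)

jd_utc = np.array([2450000.5, 2450001.5])            # placeholder timestamps
t = Time(jd_utc, format="jd", scale="utc", location=site)

# light travel time to the solar system barycenter, added on the TDB timescale
ltt = t.light_travel_time(target, kind="barycentric")
bjd_tdb = (t.tdb + ltt).jd
print(bjd_tdb)
```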
To find possible periodicities of a few hours in the long-term light curve, we subtracted the best-fitting quartics from the segments before and after the dip and performed an analysis-of-variance (AoV; e.g. Schwarzenberg-Czerny 1989) on the residuals. Other than a signal at 24 hours and some of its integer divisors (the rotation of the Earth and its aliases), we found no signal.
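For illustration only, a bare-bones version of such an analysis-of-variance search (fold the detrended light curve on trial periods and compare between-bin to within-bin variance) is sketched below on synthetic data; it is not the implementation used for this work.

```python
import numpy as np

def aov_statistic(t, y, period, nbins=10):
    """One-way ANOVA statistic of the light curve folded on a trial period."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    grand = y.mean()
    between, within, r = 0.0, 0.0, 0
    for b in range(nbins):
        yb = y[bins == b]
        if len(yb) == 0:
            continue
        r += 1
        between += len(yb) * (yb.mean() - grand) ** 2
        within += ((yb - yb.mean()) ** 2).sum()
    return (between / (r - 1)) / (within / (len(y) - r))

# toy detrended light curve with a 0.15-day modulation buried in noise
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 10, 2000))
y = 0.1 * np.sin(2 * np.pi * t / 0.15) + rng.normal(0, 0.1, t.size)

periods = np.linspace(0.05, 1.0, 5000)                 # trial periods in days
theta = np.array([aov_statistic(t, y, p) for p in periods])
print(periods[np.argmax(theta)])                        # strongest candidate period
```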
By adding a faked sinusoidal signal to the residuals we deduced that a modulation with a half-amplitude of 0.1 magnitudes would have been detectable if stable in time.
The ASASSN survey has observed the target 290 times, during which it had an average m V of 14.8 and no long-term light curve variations. We downloaded these data and performed another AoV search on them between 5 minutes and 24 hours to search for shorter-term periodicities, but there was no significant signal.
Hutton-Westfold Observatory Data
The Hutton-Westfold Observatory is a teaching observatory located at Monash University in Melbourne, Australia. It consists of a 14-inch telescope. On the night of 2019 Oct 23 we obtained a single 60 s exposure of M311 that confirmed the target was in a bright state.
The following night we obtained 63 × 60 s exposures using the V filter. One exposure, near the end of the run, was affected by a passing cloud and unusable. We performed bias, dark, and flat field correction, and image alignment using AstroImageJ (Collins et al., 2017). The observations were performed under very challenging conditions, with significant light pollution, some high cloud, and high airmass (1.46-2.01).
We used a nearby bright star at RA 18 18 42, DEC -23 52 43, with V magnitude 9.798 ± 0.004 (Henden et al., 2018), as the comparison star. The mean magnitude of M311, measured from the stacked observations, is around m V = 15.2 ± 0.1, slightly brighter than it was during the MACHO observations. We used apertures of radius 40 and 19 pixels for the comparison star and target respectively, and annuli of outer radius 58 and 29 pixels respectively for the sky.
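A rough outline of this differential aperture photometry in Python with photutils is given below; the aperture and annulus radii and the comparison-star magnitude follow the text, the annulus inner radii are assumed equal to the aperture radii (the text quotes only the outer radii), and the pixel positions are placeholders. The actual measurements were made with AstroImageJ.

```python
import numpy as np
from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

def differential_magnitude(image, xy_target, xy_comp, m_comp=9.798):
    def net_counts(xy, r_src, r_in, r_out):
        src = CircularAperture(xy, r=r_src)
        sky = CircularAnnulus(xy, r_in=r_in, r_out=r_out)
        phot = aperture_photometry(image, [src, sky])
        sky_per_pix = phot["aperture_sum_1"][0] / sky.area
        return phot["aperture_sum_0"][0] - sky_per_pix * src.area

    f_target = net_counts(xy_target, 19, 19, 29)   # target radii in pixels, as in the text
    f_comp = net_counts(xy_comp, 40, 40, 58)       # comparison-star radii in pixels
    return m_comp - 2.5 * np.log10(f_target / f_comp)

# usage on one calibrated frame (positions are placeholders)
# m_v = differential_magnitude(frame, (512.3, 498.7), (210.0, 900.5))
```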
We did not see any clear evidence for variability of M311 in the individual frames, due to the unfavourable viewing conditions. Stacking multiple frames together did not help, so we are unable to detect or to rule out variability of ∼ 0.2 magnitudes or less.
Table 1.
X-ray spectra
We reduced the XMM-Newton data with version 16.1.0 of the XMM-SAS software and produced photon event lists with the emchain and epchain tasks. The arrival times were corrected to the solar system barycenter with the barycen task. In the earlier observation the source was so far off-axis that the point spread function was distinctly non-circular, so we used an elliptical source extraction region, with minor and major radii of 10 and 15 arcsec respectively, rotated to approximately the same orientation as the source image. The background extraction region was a large circle located in a source-free region on the same chip. For the later, targeted, observation we used circular source and background extraction regions.
The spectra are shown in Figure 3. We assumed the source had the same spectrum, with possibly varying intensity, in both observations, so we fitted all five spectra jointly. We attempted to fit with a Mekal plasma model (Liedahl et al., 1995; Mewe et al., 1985) and found that a two-temperature plasma was necessary together with strong partial absorption. We also required an additional Gaussian near 6.4 keV. Thus, the Xspec model was const*pcfabs*(mekal+mekal+gaussian).
The results are given in Table 2. Uncertainties are at the 1σ level and fluxes are bolometric, obtained via the cflux command of Xspec (Arnaud, 1996). The equivalent width of the additional Gaussian component was determined using the eqwidth command and found to be 350 ± 100 eV.
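For orientation, a joint fit of this form could be set up in PyXspec roughly as follows; the file names are placeholders and the details of tying parameters across the data groups (so that only the overall constant differs between epochs) are omitted.

```python
import xspec

# load the five spectra into separate data groups so that an overall
# normalisation constant can differ between the 2006 and 2018 data
xspec.AllData("1:1 mos1_2006.pha 2:2 mos2_2006.pha 3:3 pn_2018.pha "
              "4:4 mos1_2018.pha 5:5 mos2_2018.pha")
xspec.AllData.ignore("bad")
xspec.AllData.ignore("**-0.2 10.0-**")   # restrict to ~0.2-10 keV

# constant * partial-covering absorber * (two-temperature plasma + Gaussian)
model = xspec.Model("constant*pcfabs*(mekal+mekal+gaussian)")

xspec.Fit.statMethod = "chi"
xspec.Fit.perform()
print(xspec.Fit.statistic, xspec.Fit.dof)
```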
The variable normalisation factor between the 2018 and 2006 observations was 1.14 +0.13 −0.12, indicating that the 2006 observation may have been slightly brighter, but the results are consistent with a constant X-ray luminosity. The absorption fraction, though very close to unity, did not give an adequate fit if we set it to 100%, or replaced it with a totally covering cold absorber. We also calculated the X-ray luminosity of M311 using the bolometric flux and the Gaia distance.
Simplifying the model, by changing to a one-temperature plasma or replacing the partially covering absorber with a cold totally-covering absorber, did not produce acceptable fits. We obtained reduced χ2 values of 1.84 and 1.30 respectively, and the cooler hump in the 2018 spectrum was clearly not fit adequately.
The binned spectra clearly show a feature around 6.0-7.0 keV, coincident with the iron emission line triplet. If, in particular, the 6.4 keV fluorescence line is present we can use its equivalent width to determine if the X-ray emission is primarily scattered, as is sometimes seen in CVs with discs. We therefore tested whether the iron lines are significantly detected, under the following assumptions: First, we assume that the emission between 5.5 keV and 7.5 keV is an unabsorbed plasma continuum with possibly Gaussian emission lines superimposed on it. We model the plasma continuum as a bremsstrahlung with temperature fixed at 7.8 keV. For this exercise we will not use a Mekal model since that already includes the 6.7 and 6.9 keV lines. Second, we assume that the spectrum is the same shape in 2018 as it was in 2006 but with possibly different luminosity. Thus, we separated the 2018 and 2006 data into two spectral groups, with a constant multiplicative factor that can differ between them, as for the previous fit.
To avoid losing fine spectral features to the binning procedure, we fit the unbinned spectra between 5.5 keV and 7.5 keV with the c-statistic. We used the method developed by Kaastra (2017) to estimate the goodness of fit. Fitting with only a bremsstrahlung model and no Gaussians gave a c-stat 4.1σ above the expected value, indicating a poor fit. Thus, the iron line complex as a whole is clearly detected.
Next, we added a single Gaussian to test the possibility that the iron line triplet is detected but that the individual lines cannot be distinguished. For a Gaussian with best-fit energy 6.68 +0.16 −0.21 keV and equivalent width of 1.4 +1.6 −0.9 keV (for all three lines combined), we obtained a fit consistent (0.45σ) with the data. We conclude therefore that there is no need to separate the iron line complex into three lines to obtain a formally acceptable fit and that, therefore, we cannot resolve the individual lines. Furthermore, this fit gave a line energy of around 6.7 keV, suggesting that the fluorescent line is approximately equal in intensity to the 7.0 keV line, or around 0.2 keV, somewhat lower than in the cruder fit performed above. These two lines have equivalent widths of 650 and 340 eV respectively for a 7.8 keV Mekal. Thus, the sum of the equivalent widths is consistent with the value derived above, with or without the 6.4 keV fluorescent line. We conclude that evidence for its presence is weak at best.
X-ray photometry
In Figure 4 we show the XMM-Newton X-ray light curves of M311 in all available instruments. There is no obvious evidence of variability.
We reduced the Swift XRT data using xrtpipeline version 0.13.3. The source was not detected in X-rays, and it was outside the field of view of the UV telescope. Using the procedure of Loredo (1992) (eq. 5.13) we obtain a 1σ upper limit to the count rate of 2.4 × 10 −4 s −1 . If we assume the same spectral shape as in the 2018 observation but a different normalisation we obtain a bolometric flux of less than 4.1 × 10 −13 erg s −1 cm −2 . Thus, it seems that M311 is X-ray variable by a factor of at least three.
We also looked for periodicities in the X-ray data for the 2018 observation. To do this, we applied the H-test (de Jager et al., 1989) to the barycenter-corrected EPIC-pn source event list from the 2018 observation. We sought periods between one minute and one hour. We found a significant signal at around 136 s but, on closer investigation, this turned out to be in the soft proton flaring and not in the source.
We constructed an X-ray to optical ratio by approximating the optical flux by log 10 (F opt ) = −m V /2.5 − 5.37 (Maccacaro et al., 1988) and comparing this to the X-ray flux between 0.5 and 2.0 keV. Since M311 is variable both in X-rays and in the optical, this ratio is not well defined. We therefore simply took the 2018 XMM observation, and the approximate non low-state m V = 16 magnitude observed by MACHO. We obtained log 10 (F X /F opt ) = −2.1.
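As a quick arithmetic check of this ratio (a hypothetical illustration in Python that only re-applies the relation and the m V = 16 magnitude quoted above; the variable names are ours):

m_V = 16.0
log_F_opt = -m_V / 2.5 - 5.37        # Maccacaro et al. (1988) optical flux relation
log_ratio = -2.1                     # log10(F_X / F_opt) quoted in the text
log_F_X = log_ratio + log_F_opt      # implied log10 of the 0.5-2.0 keV X-ray flux
print(round(log_F_opt, 2), round(log_F_X, 2))   # approximately -11.77 and -13.87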
Optical Monitor
In the 2018 observation XMM-Newton's Optical Monitor observed the target with the UVW1 filter, centered around 300 nm. The OM light curve ( Figure 5) showed significant variability over the duration of the observation, ranging from approximately 6 to 12 counts per second. Superimposed on this is apparently some shorter-term flickering of amplitude ∼ 0.1 mag, but the data is not of sufficient quality to make any definitive statement regarding this flickering. We performed an Analysis-of-Variance (AoV) period search (Schwarzenberg-Czerny, 1989) but there was no strong signal. To account for the possibility of longer term variability swamping a fast periodic signal, we subtracted the best-fitting sinusoid from the OM light curve and repeated the AoV search on the remainder. Again, we found no signal.
The magnitude of M311 was m UVW1 = 14.8 ± 0.2, M UVW1 = 4.2±0.3. This is quite bright and indicates that the XMM-Newton observation occurred during the high state, and not during one of the dips.
Discussion
We have studied the optical, UV, and X-ray properties of the X-ray source MACHO 311.37557.169 to attempt to determine its nature. The prominent emission lines of hydrogen suggest that it is a cataclysmic variable, and the long term behaviour of the MACHO light curve resembles a CV switching from high to low states. The decline to the low state, however, would be unusually slow for a magnetic CV. Furthermore, if it were magnetic, we would expect the 4686Å helium line to be stronger. A more likely hypothesis is that it is a VY Scl star. Its long-term optical light curve strongly resembles, for instance, that of TT Ari (Zemko et al., 2014) or V794 Aql (Greiner, 1998), both in the depth of the dip and in its duration of a few hundred days. Its high state absolute magnitude of ∼ 5 is similar to that of V794 Aql, 5.2, and the optical spectrum resembles that of the VY Scl star RX J2338+431 (Weil et al., 2018), with the He II 4865 line roughly the same intensity as the He I 4471 line, and showing just a trace of an iron feature at 5169Å. The UV light curve from the 2018 observation shows features that are consistent with flickering and possibly a superhump period of ∼ 4 hours in the UV, though the observation is not long enough for a definitive conclusion.
The X-ray spectrum of M311, a partially absorbed two-temperature plasma with a luminosity of order 10 32 to 10 33 erg s −1 , is also consistent with a CV; CVs have long been known to be strong X-ray emitters. Again, there are strong similarities to V794 Aql, which also showed a two-temperature plasma spectrum with luminosity 4 × 10 32 erg s −1 , as calculated from the flux measurement of Zemko et al. (2014) together with the distance determination of Bailer-Jones et al. (2018). There was some evidence for a fluorescent iron line at 6.4 keV. The equivalent width was difficult to determine because of the mediocre photon statistics, but seems to be in about the 200 to 400 eV range. Although this is a higher value than in the VY Scl star TT Ari (0.1 keV; Zemko et al. 2014), it appears similar to the one found for V751 Cyg (see Page et al. 2014, Table 2).
If the CV interpretation is correct, then the large absorption column density and covering fraction suggest a high system inclination. There is no evidence for eclipses in any of the light curves, however. The F X /F opt value of −2.1 is low compared to magnetic CVs, even the X-ray underluminous IPs (e.g., −1.7 in the case of V902 Mon, Worpel et al. 2018).
Other types of variable stars are unlikely. M311 shows Balmer lines, so it is not an R CrB star. The optical colour is too blue to be a semiregular variable, and the optical reddening with decreasing brightness does not seem to fit a normal dwarf nova or anti-dwarf nova. The two-peaked X-ray spectrum superficially resembles that of a δ-type symbiotic variable (e.g., Luna et al. 2013) but M311 is too close for the companion star to be a red giant. Similarly, it is not luminous enough to be a Herbig Ae/Be object. Conversely, it is too X-ray luminous, by at least an order of magnitude, to be a T Tauri star (e.g. Telleschi et al. 2007, Fig 1). Although the object shows He II lines, their weakness would be unusual for a magnetic CV.
We have found that M311 is likely to be a VY Scl star, based on its numerous similarities to that class and on ruling out other types of variable stars. A longer optical campaign aimed at determining the orbital, and possibly the spin, periods would be desirable. Further short-term photometry with the goal of finding or ruling out variability on ∼ 15 m time scales would also be helpful. | 2020-03-17T01:01:05.333Z | 2020-03-15T00:00:00.000 | {
"year": 2020,
"sha1": "3b0e21a56bd2f10b0c36354f7322ddf44e520aa6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/asna.202013531",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "ccda718fa522946a775354e557490843511f6ca2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
115879072 | pes2o/s2orc | v3-fos-license | A Modified Bayesian Network Model to Predict Reorder Level of Printed Circuit Board
Featured Application: The research was motivated by the requirement of a printed circuit board (PCB) manufacturer and the application of the work is to identify the repeated PCB orders for batch production according to the predicted reorder level. Abstract: Identifying the printed circuit board (PCB) orders with high reorder frequency for batch production can facilitate production capacity balance and reduce cost. In this paper, the repeated orders identification problem is transformed to a reorder level prediction problem. A prediction model based on a modified Bayesian network (BN) with Monte Carlo simulations is presented to identify related variables and evaluate their effects on the reorder level. From the historically accumulated data, different characteristic variables are extracted and specified for the model. Normalization and principal component analysis (PCA) are employed to reduce differences and the redundancy of the datasets, respectively. Entropy minimization based binning is presented to discretize model variables and, therefore, reduce input type and capture better prediction performance. Subsequently, conditional mutual information and link strength percentage are combined for the establishment of BN structure to avoid the defect of tree augmented naïve BN that easily misses strong links between nodes and generates redundant weak links. Monte Carlo simulation is conducted to weaken the influence of uncertainty factors. The model’s performance is compared to three advanced approaches by using the data from a PCB manufacturer and results demonstrate that the proposed method has high prediction accuracy.
Introduction
A printed circuit board (PCB) is found in practically all electrical and electronic equipment. It is the base of the electronics industry [1]. Due to increased competition and market volatility, demand for highly individualized products promotes a rapid growth of orders with a small batch of purchase and production. Some orders even with the relatively large volume have been placed separately and repeatedly at different times by customers. Dynamic fluctuation of market demands for PCB can easily bring great production imbalance, which is a waste of production capacity during the idle period with fewer orders from customers. However, during the busy period, it results in tardiness among many orders. Multi-batches of the same PCB product produced separately always require higher preparation and production cost with a higher scrap rate. Identifying orders with high reorder frequency and combining different batches of these orders during a reasonable period (e.g., an idle period) as batch and inventory-oriented production can reduce production cost, benefit production capacity balance, and facilitate on time delivery.
Taking an example from a PCB manufacturer named Guangzhou FastPrint Technology Co., Ltd. (called FastPrint in this paper), few orders were manually selected each month for batch production during the idle period based on reorder frequency and cumulative delivery area in the past 30 months. The reorder frequency is the number of times a customer places the same type of orders to a manufacturer in a given period. The delivery area of each order corresponds to the amount (quantity) of PCB products the customer orders multiplied by the area of each piece of PCB. The cumulative delivery area is the accumulation of the delivery area of the same type of orders in a given period. If 80% of the manually selected orders for batch production can be purchased by customers within six months (i.e., the maximum storage period in the manufacturer's inventory can be ordered by most of customers), then the manufacturer can profit from better utilization of idle resources and the reduction of repeated production preparation and cost. However, the manual selection process is experience-dependent and time-consuming. Meanwhile, the accuracy needs to be improved because only the reorder frequency and the cumulative delivery area are taken into consideration.
The selection of orders for batch production is not based on the accurate reorder frequency within a certain period but always according to the range of predicted reorder frequency (e.g., reorder frequency ≥ 3) in practice. Moreover, it is difficult to accurately determine the reorder frequency within a certain period for each PCB order in advance. Therefore, we transform the repeated orders identification problem into a reorder level prediction problem in which the reorder frequency within six months was divided into four reorder levels (i.e., 1, 2, 3, and 4) corresponding to the reorder frequency (0, 1-2, 3-5, and >5, respectively). On this basis, orders with a highly predicted reorder level corresponding to the range of high reorder frequency placed within six months are taken as candidates for batch production.
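For concreteness, the mapping from the reorder frequency within six months to the four reorder levels can be written as a small helper function (an illustrative Python sketch; the function name is ours, not part of the manufacturer's systems):

def reorder_level(freq):
    # Reorder frequency within six months -> reorder level:
    # 0 -> 1, 1-2 -> 2, 3-5 -> 3, >5 -> 4
    if freq == 0:
        return 1
    elif freq <= 2:
        return 2
    elif freq <= 5:
        return 3
    else:
        return 4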
The reorder level prediction is similar to the data mining based customer identification (also referred to as customer acquisition) problem as an important task of customer relationship management (CRM) [2]. The former can be conducted by analyzing characteristics of the orders and subdividing them into different groups in which the order groups with higher reorder levels (e.g., 3 and 4) can be taken as candidates for batch production. The latter, on the other hand, is to seek the profitable customer segments by analyzing their underlying characteristics and subdividing an entire customer base into smaller customer segments, which are comprised of customers who are relatively similar within each specific segment [2,3]. Identification of the most profit-generating customers and segmentation of customers are quite vital [3]. Previous studies reveal that recency, frequency, and monetary (RFM) analysis and frequent pattern mining can be successfully used or integrated to discover valuable patterns of customer purchase behavior [3][4][5][6][7][8]. Dursun and Caber [3] took the RFM analysis for profiling profitable hotel customers and related customers were divided into eight groups. Chen et al. [4] incorporated the RFM concept to define the RFM sequential pattern and developed a modified Apriori for generating all RFM sequential patterns from customers' purchasing data. Hu and Yeh [5] proposed RFM-pattern-tree to compress and store entire transactional database and developed a patterned growth-based algorithm to discover all the RFM-patterns in the RFM-pattern-tree. Coussement et al. [6] employed RFM analysis, logistic regression, and decision trees for the customers' segmentation and identification. Mohammadzadeh et al. [7] employed k-means clustering for identifying target patient customers and then conducted the prediction of customers churn behavior via the RFM model based on the decision tree classifier. Song et al. [8] employed RFM considering parameters with time series to cluster customers and identify target customers.
Other data mining approaches have also been developed and many special factors were considered to excavate the customer pattern purchase behavior. Liu [9] developed a fuzzy text mining approach to categorize textual data to analyze consumer behaviors for the accurate classification of customers. Sarti et al. [10] presented a consumer segmentation method using clustering based on consumers' purchase of sustainability and health-related products. Murray et al. [11] combined clustering with time series analysis to create customer segments and segment-level forecasts and then applied the forecasts to individual customers. Caigny et al. [12] proposed a logit leaf model for customer churn prediction in which customer segments are identified using decision rules and then a model is created for every segment using logistic regression. Ngai et al. [2] provided a comprehensive review of CRM from four dimensions such as customer identification, customer attraction, customer retention, and customer development. Zerbino et al. [13] presented a review of Big Data-enabled CRM including research on customer evaluation and acquisition. However, the topic discussed in this paper has seldom been studied to the best of our knowledge and it was not involved in the two previously mentioned reviews either. Customer identification and a reorder level prediction are similar in that both of them aim to develop classified treatment strategy according to historical transactions. Nevertheless, there are differences from the following three aspects. First, the customer identification problem is to develop more accurate sales and advertising strategies based on customer transaction history and, therefore, better for retaining target customers [10] while the final purpose of the problem discussed in this paper is to select orders for batch production with different misclassification risks based on accumulated manufacturing orders. Second, RFM of different purchase products (orders) should be considered for the customer pattern mining while, in this research study, we only consider the parameters of the same product ordered at different times. Third, RFM are the main parameters considered in the customer identification problem while the production scale including quantity, area, and lifecycle of orders should also be considered in this paper. However, the previously mentioned approaches cannot be employed directly for reorder level prediction. Therefore, more influential variables and misclassification loss should be considered and related approaches should be developed.
In this paper, a prediction model based on modified Bayesian network (BN) with Monte Carlo simulations is presented to predict a reorder level of PCB orders. More precisely, we apply BN to excavate the relationship between influential variables (factors) and the reorder level. The main reason for choosing BN is that it has the clearest common sense interpretation and can be viewed as causal models of the underlying domains. It also owns the powerful capability of dealing with uncertainty and causality inference and has been widely used in predicting and classifying problems [14][15][16][17][18][19][20][21]. Figure 1 illustrates the framework of the proposed approach in which all procedures will be discussed in detail except for decision making marked with the dashed boxes.
In this paper, a prediction model based on modified Bayesian network (BN) with Monte Carlo simulations is presented to predict a reorder level of PCB orders. More precisely, we apply BN to excavate the relationship between influential variables (factors) and the reorder level. The main reason for choosing BN is that it has the clearest common sense interpretation and can be viewed as causal models of the underlying domains. It also owns the powerful capability of dealing with uncertainty and causality inference and has been widely used in predicting and classifying problems [14][15][16][17][18][19][20][21]. Figure 1 illustrates the framework of the proposed approach in which all procedures will be discussed in detail except for decision making marked with the dashed boxes. The remainder of this paper is organized as follows. Relevant variables specification and data preprocessing including principal component analysis (PCA)-based factors extraction and entropy minimization-based data discretization are introduced in Section 2. The combination of conditional mutual information (CMI) and link strength percentage (LSP) to avoid the defect of tree augmented naïve (TAN) BN and conditional expected loss for final classification are described in Section 3. The model evaluation and comparison are given in Section 4 in which Monte Carlo simulation is conducted to determine the confidence upper limits of reorder frequency and weaken the influence of uncertainty factors. Additionally, performance of the proposed approach is compared to TAN, AdaBoost, and artificial neural networks (ANN). Conclusions are drawn in Section 5.
Variables Specification
Reorder level related variables were inherited and derived from fields in the enterprise resource planning (ERP) system and the order management system (OMS), in which repeated orders of the same type placed on different dates are labeled with the same production number but different order numbers generated by the manufacturer's coding rule. On this basis, statistics of delivery area, quantity, transaction money, and interval days over the past 30 months before a set date were derived, and the related description is presented in Table 1. The set date is prepared for order selection and batch production. The statistics exclude the accumulation before 30 months in consideration of the order's lifecycle, based on expert experience. The reorder level is the classification objective, and four levels are set based on the reorder frequency of a production number in the next six months after a set date.
Layer number (Ln): PCB is made of resin, substrate, and copper foil, and Ln is the number of copper foil layers.
Continued days (Condays): Interval days between the first order date and a set date.
Recency (Rec): The period between the last order date and a set date.
Reorder level (Rel): Levels 1, 2, 3, and 4, corresponding to reorder frequencies of 0, 1-2, 3-5, and >5 within six months, respectively.
Note: Statistic parameters of maximum/minimum/mean/sum were derived from the orders with the same production number accumulated in the past 30 months before a set date.
Data from three factories accumulated in the ERP and OMS of FastPrint were collected and integrated (33,542 records for training samples and 14,484 records for test samples). Each record was aggregated based on the orders placed during the past 30 months according to the production number. The records with reorder frequency 1 were not to be considered for batch production and have been deleted. Meanwhile, a few special orders with an odd number of layers or a layer number greater than 20 have also been excluded because they are seldom taken for batch production in practice.
Sample size and the proportion of different reorder levels are presented in Table 2. It can be seen that the sample proportions of the different reorder levels are similar between the training and test samples. The statistical results show that only about 5.5% of the records in Table 2 have a reorder level ≥ 3, i.e., were aggregated from separately placed reorders. However, this small proportion of records exerts a significant influence on resource utilization and balance for the manufacturer in practice.
Principal Component Analysis
There are significant differences in values among variables given in Table 1. Some variables may be redundant or not have a significant influence on the reorder level prediction. Furthermore, continuous-valued variables with a large amount of input types easily generate too many conditional probability tables (CPTs) with sparse samples for each value, which negatively affects the establishment of a robust model. It is, therefore, necessary to perform preprocessing before building a model.
First, in order to eliminate the negative impact caused by the huge difference between each variable in terms of values, there is a need to normalize each variable ranging from 0 to 1. Second, the total data sample matrix 48,026 × 23 (i.e., the number of the samples multiplied by the number of the input variables for each records) would be considerably complicated and time consuming to model and test for such a high-dimension data samples [22]. It is, therefore, essential for reducing the dimension of the data samples and extracting the typical features from the original data samples. Third, in order to reduce input type and get better performance for variables, it is important to discretize variables for BN model development [14].
PCA is an effective statistical analysis method in multi-dimensional data compression and factors extraction. It can fuse relatively useful features and extract more sensitive factors through the evolution of the variance contribution rate and the cumulative variance contribution rate of each variable [22]. In this study, PCA was used for reducing variable redundancy for the proposed models. This could greatly reduce the modeling time and improve operational efficiency. The procedure of PCA is described below.
Algorithm 1. Normalization and principal component analysis.
(1) Normalization: For each column of the data sample x i , min-max normalization was taken as x i ' = (x i − x imin )/(x imax − x imin ), where x i is the original data with x imin and x imax representing the minimum and maximum values in x i , respectively. (2) Principal component analysis: The correlation coefficient matrix and its eigenvalues and eigenvectors were calculated. Subsequently, a variance explained matrix was constituted based on the eigenvectors. All the columns of this matrix were ranked according to the variance contribution rate in descending order, and the cumulative variance contribution rate of the principal components (factors) was then calculated.
PCA was conducted by Algorithm 1 on the training samples using the 21 input variables given in Table 1 (i.e., all input variables except the layer number and recency), following some initial experiments. Seven factors were extracted with a cumulative variance contribution rate of 87.87%, which means the extracted factors can represent 87.87% of the information of the original 21 input variables. The variance contribution rate and cumulative variance contribution rate are shown in Figure 2. The factor loading matrix, in which each loading a ij represents how much of the information in variable x i can be explained by factor f j , is illustrated in Figure 3. The numbers in Figure 3 represent the original variables, and the main variables that each factor explains can be found in Table 3. On this basis, factor values of the test samples were computed based on the weighted sum of the original variables.
Data Discretization
The entropy minimization based binning method employed in this paper has been widely applied in discretizing variables [23]. The core measures of entropy minimization based discretization include information entropy and gain [24,25]. Let the k classes be C 1 , C 2 , . . . , C k in a sample set S and let P(C i , S) be the proportion of samples in S that have class C i . The entropy of S is defined as Ent(S) = −∑ k i=1 P(C i , S) log 2 P(C i , S) (1), where Ent(S) measures the amount of information needed to specify the classes in S. The greater the Ent(S) value is, the more information it contains and the lesser purity it has. A binned interval with all values belonging to the same class has the highest purity [26,27]. The entropy of the samples S partitioned by an arbitrary split point T of attribute X into two disjoint intervals S 1 and S 2 is defined as Ent(X, T; S) = (|S 1 |/|S|)Ent(S 1 ) + (|S 2 |/|S|)Ent(S 2 ) (2), where |S j | and |S| are the sample sizes of subset S j and S, respectively. The information gain for a variable X based on a given split point T can then be defined as Gain(X, T; S) = Ent(S) − Ent(X, T; S) (3).
A partition induced by a split point T for a set S is accepted according to the minimum description length principle (MDLP) [24]. The binning algorithm for the discretization of each variable (i.e., F1-F7, layer number and recency) is described below.
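The authors' Algorithm 2 is not reproduced here; the following Python sketch only illustrates the core entropy-minimization split search behind Equations (1)-(3), omitting the recursive partitioning and the MDLP stopping criterion, and all names are ours:

import numpy as np

def class_entropy(labels):
    # Ent(S) = -sum_i P(C_i, S) * log2 P(C_i, S)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(x, y):
    # Return the split point T of attribute x with the largest information gain.
    x, y = np.asarray(x), np.asarray(y)
    base, best_t, best_gain = class_entropy(y), None, 0.0
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        weighted = (len(left) * class_entropy(left) + len(right) * class_entropy(right)) / len(y)
        gain = base - weighted
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain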
The maximum number of the binned intervals was set to 10 and the discretization results of the variables obtained by Algorithm 2 are given in Table 4. Split points were used directly for the discretization of the test samples. Proportions of the different reorder levels for the training samples in the different binned intervals of the variables are illustrated in Figure 4.
It can be seen that proportions of the different reorder levels in the different binned intervals vary significantly, which indicates that the cumulative delivery quantity/area (F1 and F3), delivery interval day (F5), and continued days (F2) are also important for classification of reorder level besides RFM (i.e., Rec, F7, and F4). Proportion of reorder level 2, 3, and 4 decreases with an increase in the value of discretized recency and the reorder level of the samples with the binned interval 10 for recency can almost directly be classified as 1. F7 performs the opposite tendency compared to recency in which the sample with the binned value 1 or 2 has high purity (probability) to be classified as 1. A sample with the binned recency as 1, F7 as 6, F2 or F3 as 1, or F7 as 9 has high probability to be classified as 4.
Bayesian Network
Bayesian network (also known as belief network and causal network) is a probabilistic graphical model that represents a set of random variables and their conditional dependence by means of a directed acyclic graph (DAG) and CPTs [19,20]. Each node in the DAG represents a variable that ranges over a discrete domain and is connected to its parent nodes [14], and directed arcs represent the conditional or probabilistic dependency between random variables [14,28]. BN has become a popular knowledge-based representational scheme in data mining [27][28][29][30]. This graphical structure, which expresses causal interactions and direct/indirect relations as probabilistic networks, has secured BN's popularity. Experts can easily understand such structures and (if necessary) modify them to improve the model [28].
The critical problem in establishing a BN is to determine the network structure S and corresponding set of parameters θ [28,29], which are always called structure learning and parameter learning, respectively. In order to reduce arcs between nodes (variables) with weak causal interactions and corresponding CPTs, CMI and LSP were combined with expert's experience to establish BN structure and, therefore, avoid the defects of TAN. The CMI was first introduced in TAN [31] by relaxing the conditional independence assumption of naïve Bayesian for the purpose of selecting particular dependences [15]. However, TAN links all the input variables (evidence node) to output variables (class node) and allows at most two parents nodes with one connection to the class node and one causal connection to another evidence node, which easily misses some strong links and sometimes generates redundant weak strength links [28] that are negative for the robustness and generalization of the BN model. Suppose a set of discretized random variables is X = {x 1 , x 2 , . . . , x 9 } corresponding to the variables such as a layer number, recency, and F1-F7 and CMI between x i and x j can be computed below.
where x m i , x n j , and Rel k represent the mth, nth, and kth values of x i , x j , and Rel, respectively. CMI(x i ; x j |Rel) measures the information x j provides on x i when the value of reorder level Rel is known. The smaller the CMI(x i ; x j |Rel) value is, the weaker the connection between x i and x j is.
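Equation (4) can be estimated directly from the discretized training samples; the sketch below is a naive empirical estimator written for illustration (it is not the FullBNT routine used by the authors, and the function name is ours):

import numpy as np

def conditional_mutual_information(xi, xj, rel):
    # CMI(xi; xj | rel) = sum_r P(r) * sum_{u,v} P(u,v|r) * log2[P(u,v|r) / (P(u|r) P(v|r))]
    xi, xj, rel = map(np.asarray, (xi, xj, rel))
    cmi = 0.0
    for r in np.unique(rel):
        mask = rel == r
        p_r = mask.mean()
        a, b = xi[mask], xj[mask]
        for u in np.unique(a):
            for v in np.unique(b):
                p_uv = np.mean((a == u) & (b == v))
                if p_uv == 0.0:
                    continue
                p_u, p_v = np.mean(a == u), np.mean(b == v)
                cmi += p_r * p_uv * np.log2(p_uv / (p_u * p_v))
    return cmi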
The LSP from parent node x to child node y is defined by the equation below.
where Z = Pa y /{x} denotes the set of all parents of y other than x and Ent(y|Z) = −∑ z P(z) ∑ y P(y|z) log 2 P(y|z) is the conditional entropy of y given Z [32]. The structure learning algorithm is depicted below (Algorithm 3).
Algorithm 3. Modified BN structure establishment.
(1) Compute the CMI(x i ; x j |Rel) between x i and x j according to Equation (4); (2) Select the input variables x k1 , . . . , x kt whose CMI with x i is greater than a threshold, and manually link x i to x k1 , . . . , x kt with directed arcs if there are no arcs between the two nodes; (3) Combine CMI with expert experience to determine the variables that link to Rel with directed arcs; (4) Compute LSP according to Equation (5) for each link to evaluate the quality of the BN structure, and modify (delete) arrows with small LSP, e.g., less than 10%.
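Step (4) needs an empirical estimate of the link strength. Since Equation (5) is not reproduced above, the Python sketch below assumes one common reading of LSP, namely the percentage reduction of the conditional entropy of the child y when the parent x is added to the other parents Z; this is our assumption for illustration only:

import numpy as np

def discrete_entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(y, *parents):
    # Empirical Ent(y | parents) from discretized samples.
    y = np.asarray(y)
    if not parents:
        return discrete_entropy(y)
    keys = np.stack([np.asarray(p) for p in parents], axis=1)
    ent = 0.0
    for row in np.unique(keys, axis=0):
        mask = np.all(keys == row, axis=1)
        ent += mask.mean() * discrete_entropy(y[mask])
    return ent

def link_strength_percentage(x, y, other_parents=()):
    h_z = conditional_entropy(y, *other_parents)
    h_zx = conditional_entropy(y, *(list(other_parents) + [x]))
    return 100.0 * (h_z - h_zx) / h_z if h_z > 0 else 0.0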
The Bayesian estimation method was employed for parameter learning in this paper to estimate the parameters θ that maximize the posterior p(θ|X) based on the training samples. Initially, θ was treated as a random variable and prior knowledge of θ was expressed as a prior probability distribution p(θ). A likelihood function was then constructed from the samples, and the Bayesian formula was used to determine the posterior probability distribution of θ. The Dirichlet distribution was employed as the prior probability distribution p(θ) [16]. CMI between x i and x j for the training samples based on Equation (4) is given in Table 5. The threshold was set as 10% and CMI equal to or greater than 10% was reserved to construct the link. It can be seen that (1) CMI is small between layer number, recency, and F1-F7, which can be taken as independent variables while constructing the BN structure. (2) CMI is large between F1 and F3, F4, F6, and F7, which means that the cumulative delivery scale (F1) of repeated orders is not independent of the mean/min/max statistics of delivery quantity, area, transaction money (F3, F6, and F4), and frequency (F7). (3) Similarly, F6 (mean/min/max delivery area) has large mutual information with F3 (mean/min/max delivery quantity) and F4 (mean/min/max transaction money).
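Under a Dirichlet prior, the Bayesian estimate of each CPT entry reduces to smoothed empirical counts; the following is a minimal illustrative sketch (not the FullBNT implementation, and the uniform prior strength alpha is our assumption):

import numpy as np

def cpt_posterior_mean(counts, alpha=1.0):
    # Posterior mean of P(node = k | parents = j) under a Dirichlet(alpha) prior:
    # (N_jk + alpha) / (N_j + K * alpha), applied row-wise to a count table whose
    # rows index parent configurations and whose columns index node states.
    counts = np.asarray(counts, dtype=float)
    k = counts.shape[1]
    return (counts + alpha) / (counts.sum(axis=1, keepdims=True) + k * alpha)

# Example: two parent configurations, a node with three states.
print(cpt_posterior_mean(np.array([[5, 0, 1], [2, 2, 2]])))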
On this basis, the structure of the modified BN, with the entropy of each node and the LSP for each link, was constructed according to Algorithm 3 and is given in Figure 5. Entropy in each node reflects the purity of the node and was computed based on Ent(x) = −∑ x i P(x i ) log 2 P(x i ), where x i is the discretized value set of node x; it indicates how much uncertainty is in x if no evidence is given for any other nodes. LSP was computed based on Equation (5), and it can be seen that the LSP for links from Ln, Rec, F1, F2, F5, and F7 to Rel are large, which means that these variables can help reduce a high percentage of the uncertainty for Rel when their states are known. Similarly, the LSP of links from F2, F3, F6, and F7 to F1 are large, which indicates that F2, F3, F6, and F7 have a close causal relationship with F1. However, the weak LSP of the links from F7 to F2 and F4 to F6, marked with dotted lines, are less than 10% and, therefore, the directed arcs from F7 to F2 and F4 to F6 were deleted accordingly. Then parameter learning was conducted based on the training samples and the structure to determine the CPTs for each node.
Conditional Expected Loss-Based Classification
Classification can be conducted based on the learned BN structure and the joint probability (the product of all the conditional probabilities of the network). The posterior probability of the reorder level can be calculated according to the Bayesian equation, and each sample can then be predicted to the reorder level corresponding to the greatest posterior probability. However, a sample with reorder level 1 misclassified as 2, 3, or 4 will bring economic risk if it is taken for batch production. At the same time, posterior probabilities of the biased reorder level may bring a different misclassification. Posterior probabilities of the training samples with observed reorder level 2, 3, or 4 based on the modified BN are given in Figure 6 according to initial experiments, in which the posterior probabilities are generated by the formula below. P(Pr_Rel i |Ob_Rel = 2, 3, 4) = P(Pr_Rel i )P(Ob_Rel = 2, 3, 4|Pr_Rel i ) / ∑ 4 j=1 P(Pr_Rel j )P(Ob_Rel = 2, 3, 4|Pr_Rel j ) (6), where P(Pr_Rel i ) is the probability of predicted reorder level i (i = 1, 2, 3, 4), P(Ob_Rel = 2, 3, 4) is the probability of the observed reorder level taking the value 2, 3, or 4, P(Ob_Rel = 2, 3, 4|Pr_Rel i ) is the posterior probability of the observed reorder level being 2, 3, or 4 on the condition of the predicted reorder level i, and P(Pr_Rel i |Ob_Rel = 2, 3, 4) is the posterior probability of the predicted reorder level on the condition of the observed reorder level being 2, 3, or 4. It can be seen that samples to be predicted as 3 and 4 are few and the corresponding posterior probabilities are subject to a seriously left-skewed distribution with a mean value less than 0.25. In contrast, samples to be predicted as 1 or 2 are subject to a right-skewed distribution with a mean value greater than 0.5. This indicates that the posterior probability-based classification has, in many cases, a high posterior probability of predicting a reorder level of 1 for samples with an observed value equal to or greater than 2. Only a few instances with observed Rel = 3 or 4 have been predicted as 3 or 4.
Figure 6. Posterior probabilities corresponding to different predicted reorder levels.
Figure 7 illustrates the posterior probabilities of 100 randomly selected samples obtained by Equation (6) with observed Rel = 2 and Rel = 4, in which the probability of 1, 2, 3, and 4 corresponds to a predicted reorder level of 1, 2, 3, and 4, respectively. The posterior probabilities in Figure 7a illustrate that it is easy to predict the reorder level of 2 as 1. Figure 7b illustrates that many posterior probabilities corresponding to the predicted reorder level 4 have no significant possibility of classifying it as 4. Therefore, the conditional expected loss was introduced instead of the posterior probability for the final classification decision. Let α i be the decision to classify sample X as α i , let λ ij = λ(α i , ω j ) represent the loss (risk) of classifying X with observed value ω j as α i , and let all the λ ij = λ(α i , ω j ), i, j = 1, 2, 3, 4, constitute the classification loss matrix. The conditional expected loss, which illustrates the expected risk of a decision to predict X as α i , is defined as R(α i |X) = ∑ 4 j=1 λ(α i , ω j )P(ω j |X) (7).
The final decision can be conducted based on the minimization of the conditional expected loss.
The conditional expected loss-based classification can be described below (Algorithm 4).
Algorithm 4. Conditional expected loss-based classification.
(1) Compute the posterior probability of each reorder level for a sample and the conditional expected loss of classifying it as α i according to Equation (7); (2) classify the sample to the reorder level α i with the minimum conditional expected loss.
Initial results also show that the probability of predicting an order with an observed reorder level of 1 as 2, 3, or 4 decreases with an increase in the value of binned recency, and the risk of predicting a larger reorder level as a smaller one will also decrease. Therefore, four loss matrices corresponding to the binned recency 1, 2-3, 4-5, and 6-10 were introduced for final classification based on Algorithm 4, in which the value in the upper half of the matrix decreases with an increase in the value of binned recency while the value in the lower half of the matrix increases with an increase in the value of binned recency. The values were set to 1 and 0 in the non-diagonal positions and diagonal positions, respectively, when recency is 1. The other three matrices are shown in Table 6.
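The resulting decision rule, combining the posterior probabilities with a recency-dependent loss matrix, can be sketched as follows (variable names and the example numbers are illustrative):

import numpy as np

def classify_min_expected_loss(posterior, loss_matrix):
    # posterior[j]: P(observed reorder level = j+1 | sample X), j = 0..3
    # loss_matrix[i, j]: loss of deciding level i+1 when the observed level is j+1
    # R(alpha_i | X) = sum_j loss[i, j] * posterior[j]; pick the level with minimum risk.
    risk = np.asarray(loss_matrix) @ np.asarray(posterior)
    return int(np.argmin(risk)) + 1

# Example with the 0-1 loss matrix used when the binned recency is 1.
zero_one = np.ones((4, 4)) - np.eye(4)
print(classify_min_expected_loss([0.5, 0.3, 0.15, 0.05], zero_one))   # -> 1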
Estimation of Reorder Frequency
In order to get the expected reorder frequency for a given order, we use the sum of the mean value of reorder frequency in each level weighted by conditional probability as the expected reorder frequency. The expected output of the model can be computed by the equation below.
E(ReFreq|Cluster = i) = ∑ 4 k=1 M(Rel = k)P(Rel = k|Cluster = i) (9)
where M(Rel = k) is the mean value of reorder frequency within six months for the samples with Rel = k, which can be referred to Table 7. P(Rel = k|Cluster = i) is the average conditional probability determined by the modified BN with Rel = k given a specific cluster i and Cluster = i represents the ith cluster of the samples determined by the clustering algorithm. The purpose of the clustering is to classify samples according to their similarity by considering the input features of discretized F1-F7, Rec, and Ln. The k-summary approach that can handle both categorical and numerical data was adopted for the clustering and the number of clusters was set to 7 based on an initial experiment. On this basis, the average conditional probability of different reorder levels given different clusters is presented in Table 8.
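Equation (9) is simply a probability-weighted average, as in the short sketch below (the mean frequencies and cluster probabilities shown are placeholders, not the entries of Tables 7 and 8):

import numpy as np

def expected_reorder_frequency(mean_freq_per_level, prob_level_given_cluster):
    # E(ReFreq | Cluster = i) = sum_k M(Rel = k) * P(Rel = k | Cluster = i)
    return float(np.dot(mean_freq_per_level, prob_level_given_cluster))

M = [0.0, 1.4, 3.8, 7.5]          # hypothetical M(Rel = k), k = 1..4
P = [0.6, 0.25, 0.1, 0.05]        # hypothetical P(Rel = k | Cluster = i)
print(expected_reorder_frequency(M, P))   # 1.105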
Evaluation Indicators
The confusion matrix was taken to visualize the performance of the different approaches, in which each column of the matrix represents the instances in an actual class while each row represents the instances in a predicted class. All correct predictions are located on the diagonal of each table, and errors can be visually inspected through the values outside the diagonal. Related terminology and derivations are defined in Table 9 [33]. In order to evaluate the performance of the proposed model, the following mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) evaluation indicators were used. MSE is the average of the squared differences between the predicted reorder level α i and the observed value ω i [14]. It defines the goodness of fit of the models and is given by MSE = (1/N) ∑ N i=1 (α i − ω i ) 2 (10).
The MAE is the average of the sum of the absolute differences between the observed values and the predicted reorder levels, MAE = (1/N) ∑ N i=1 |α i − ω i | (11).
The MAPE is the average of the sum of the normalized absolute differences between the observed values and the estimated values, MAPE = (1/N) ∑ N i=1 |α i − ω i |/ω i (12).
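A minimal implementation of the three indicators, assuming the predicted and observed reorder levels are held in array-like objects:

import numpy as np

def mse(pred, obs):
    return float(np.mean((np.asarray(pred, float) - np.asarray(obs, float)) ** 2))

def mae(pred, obs):
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(obs, float))))

def mape(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.mean(np.abs(pred - obs) / obs))   # observed levels are >= 1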
Experimental Results
Reorder frequency of the samples six months after a set date can be taken as a random event for a manufacturer without prior knowledge, and the models are expected to have errors in prediction. It is therefore necessary to find an upper limit and a lower limit within which as many observed values as possible lie. Additionally, the small field data set may cause uncertain deviations between observed reorder levels and predicted ones, and it is also impractical to fit the reorder frequency to a specific distribution exactly. The counting nature of the reorder frequency makes it intuitive to use a Poisson distribution as the probability distribution function (PDF). Therefore, assuming that the reorder frequency in each reorder level follows a Poisson distribution is justifiable, since it has a high occurrence rate and larger counts at low reorder levels and a small probability at high reorder levels, as shown in Table 2. The approximate upper limit for 95% confidence is the smallest value at which the Poisson cumulative distribution function (CDF) is equal to or greater than 95%. The lower limits are considered to be zero because of the nonnegative counting property of the Poisson distribution [14].
Monte Carlo simulation is used to weaken the influence of uncertainty factors in this research. It is an effective method for quantifying the variance resulting from the random nature of repetition events. For any sample (orders with the same production number) with a given cluster and the distribution of the reorder frequency, a random number can be generated through simulation to present the reorder frequency for this sample and the 95% confidence upper limit can be obtained. As a result, it is used to estimate reorder frequency and the 95% upper limit for each sample. Along with the increase in the number of simulations, the prediction accuracy will increase gradually. Therefore, it can quantify the variance resulting from the randomness of repetition events. A procedure of Monte Carlo simulation is described below (Algorithm 5).
Algorithm 5. Monte Carlo simulation of reorder times.
(1) Given a specific order, determine the cluster of the sample; (2) Determine the expected reorder frequency (within six months after a set date) as the parameter (lamda) of Poisson PDF and CDF for the sample according to Equation (9) based on Tables 7 and 8; (3) Generate n (10,000 here) random number by Poisson PDF with lamda as its expectation and take the average result of the n random number as the simulated reorder frequency; (4) Determine the least integer for Poisson CDF being greater than 0.95 as the 95% upper limit of reorder frequency for the order.
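Algorithm 5 can be sketched directly with NumPy and SciPy Poisson routines; the function below is illustrative (lamda would come from Equation (9) and Tables 7 and 8, and the example value is a placeholder):

import numpy as np
from scipy.stats import poisson

def monte_carlo_reorder(lamda, n=10000, conf=0.95, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Step (3): the average of n Poisson draws approximates the expected frequency.
    simulated = rng.poisson(lamda, size=n).mean()
    # Step (4): the least integer whose Poisson CDF is greater than the confidence level.
    upper = int(poisson.ppf(conf, lamda))
    while poisson.cdf(upper, lamda) < conf:
        upper += 1
    return simulated, upper

print(monte_carlo_reorder(1.105))   # roughly (1.1, 3)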
A total of 250 randomly selected samples with Monte Carlo simulation results obtained by Algorithm 5 are presented in Figure 8. The performance of the Bayesian network for the training and test data can be seen in Figure 8a and Figure 8b, respectively. When the reorder level is low, the estimated value is close to the observed one. In some extreme situations, the actual reorder frequency can be greater than what is predicted, which appears in the figures as observed values lying above the 95% upper limits. Figure 8 indicates that the difference between estimated values and actual values is small in most cases and almost all of the differences are less than 1. In order to verify the proposed ensemble approach in this research, the data preprocessing and modified BN prediction model were implemented and the performance was compared to other classifiers including TAN, AdaBoost, and ANN. Among the four competing methods, TAN as a naïve Bayesian network has been widely utilized in classification [17,18]. AdaBoost is an ensemble method whose output is the weighted average of many weak classifiers and is the best-known and most widely applied boosting algorithm in both research and practice [34]. ANN has a strong learning ability and has also been widely used for prediction and classification [35][36][37]. TAN and ANN were implemented in the IBM SPSS Modeler and AdaBoost was developed in Matlab, while the proposed modified BN was developed based on Matlab and the package FullBNT.
Confusion matrices of different approaches are given in Figure 9. These confusion matrices show that TAN achieved the highest sensitivity for observed reorder levels 2, 3, and 4. However, the ability to identify a reorder level 1 was weak and the unbalanced and biased distribution of reorder level 1 can greatly increase the production risk if a large amount of orders were taken as batch production in advance without customers' confirmation. On the other hand, the modified BN achieved better results with sensitivity 98.5% and 98.1% for training and test samples, respectively. It can reduce the number of samples with a reorder level 1 to be incorrectly predicted as 2, 3, or 4, which reduces the risk of batch production. In addition, the modified BN can correctly identify a higher amount of orders with an observed reorder level 2 and 3 compared with AdaBoost and ANN both for training and test samples. The sensitivity of the modified BN for observed reorder level 4 deteriorated for the test sample. However, many samples have been predicted as 2 or 3, which can also be taken for batch production. Overall indicators of confusion matrices also show that the modified BN obtained the highest accuracy (81.9%) both for training and test samples.
The approaches were compared both for training and test samples according to the indicators presented in Equations (10)-(12), and the comparison results are given in Table 10. It shows that the proposed modified BN obtained the lowest MAE and MAPE for training samples and the lowest MSE and MAE for test samples. The results in Figure 9 also illustrate that TAN obtained the maximum correctly classified instances with observed reorder levels of 2, 3, and 4 as well as the maximum incorrectly classified instances with an observed reorder level of 1 compared to the other classifiers. Therefore, the indicators show that TAN achieved the lowest MSE for training samples as well as almost the largest MAE and MAPE both for training and test samples. This may be caused by redundant links between evidence nodes and the missing strong links such as the causal relationships between F2, F3, F6, and F1. Yet, it is worth noting that TAN deteriorated greatly on the test dataset according to MAE and MAPE, which indicates that the TAN considered in the current study lacked robustness and generalization ability. The ANN exhibited steady performance both for training and test samples but had no superiority according to the three indicators. The AdaBoost achieved slightly better performance for test samples for the indicator MAPE but the worst performance for the indicators MSE and MAE. The above results indicate that the modified BN combining CMI, LSP, and expert experience maintains the DAG requirement of BN and produces a more nuanced network that captures the main dependency relationships among evidence nodes (variables) while deleting some weak dependency relationships, without allowing arbitrary graphical structures that would make it harder to interpret and extract relations to enhance the prediction model. At the same time, the conditional expected loss can benefit the final classification and it can exhibit better performance especially when compared to TAN. In addition, the modified BN has the clearest common sense interpretation.
Conclusions
In this paper, the identification of repeated PCB orders for batch production was transformed into a reorder level prediction problem and a modified Bayesian network model with Monte Carlo simulations to study the relationship between different characteristic variables and reorder levels of PCB within six months was established. Reorder frequency was divided into four reorder levels and variables related to a reorder level were specified. Field data was exported and integrated from a PCB manufacturer with 33,542 training samples and 14,484 test samples. Normalization and PCA were employed to reduce differences and redundancy of the datasets, respectively. PCA results indicated that the causes of the reorder level are closely related to seven principal components and the other two variables, i.e., recency and layer number. Entropy minimization based binning method was employed to discretize model variables for the purpose of reducing input type and capturing better performance and results. The modified structure of BN was established by deleting redundant connections between nodes (with weak link strength) and corresponding conditional probability tables based on conditional mutual information and link strength percentage combining with expert experience. This can facilitate the manufacturer to comprehend causal interactions between variables. On this basis, the conditional expected loss was presented for final classification considering different misclassification risk.
Monte Carlo simulation was conducted to determine, with greater accuracy, a mean and confidence interval for the reorder frequency based on the predicted reorder level. The upper limits of the reorder frequency are particularly useful to the PCB manufacturer as a practical reference for each reorder level. The performance of the proposed modified BN was visualized with a confusion matrix, evaluated by the three indicators, and compared to three advanced methods: TAN, AdaBoost, and ANN. It was found that the modified BN prediction model achieved steady and satisfactory results for both the training and test samples, with the clearest common-sense interpretation. Therefore, the proposed model is an effective approach for capturing the repetition pattern of PCB orders, which has seldom been studied before. The explicit relationship established by the causal network between the variables (including the extracted factors) and the reorder level can directly facilitate order selection for batch production, which can be conducted according to the decision-making steps given in Figure 1.
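A minimal sketch of the Monte Carlo step is shown below. It assumes that, for a given predicted reorder level, historical six-month reorder frequencies of comparable orders are resampled to estimate a mean and an upper confidence limit; the input frequencies and the resampling scheme are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def reorder_frequency_interval(sample_frequencies, n_sims=10_000, upper_pct=95.0):
    """Monte Carlo (bootstrap-style) mean and upper limit of the reorder frequency."""
    freqs = np.asarray(sample_frequencies, dtype=float)
    # Resample the historical frequencies many times and average each resample.
    sims = rng.choice(freqs, size=(n_sims, freqs.size), replace=True).mean(axis=1)
    return sims.mean(), np.percentile(sims, upper_pct)

# Hypothetical frequencies observed for orders predicted at reorder level 3
mean, upper = reorder_frequency_interval([5, 7, 6, 9, 8, 7, 10, 6])
print(f"mean ~ {mean:.2f}, 95% upper limit ~ {upper:.2f}")
```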
The main contributions of this work are summarized below.
1. The tricky problem of identifying repeated orders for batch production was transformed into a reorder level prediction problem, and a reorder level prediction model based on a modified causal Bayesian network was proposed. Different characteristic variables were extracted and specified for the model from the historically accumulated data of a PCB manufacturer.
2. PCA was employed for data compression and factor extraction, and an entropy-minimization-based method was presented to discretize the variables and extracted factors. Together, these facilitate data compression, input-type reduction, and better classification performance.
3. In order to avoid the drawback of the TAN BN, which easily misses strong links between nodes and generates redundant weak links, CMI and LSP were combined for the establishment of the BN structure (a small illustrative sketch of the CMI computation follows this list).
4. By using Monte Carlo simulations, the upper confidence limits of the reorder frequency within six months were determined, and the influence of the random nature of reordering was reduced.
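As referenced in contribution 3, the conditional mutual information between two discretized variables given a third can be estimated from empirical frequencies. The sketch below is a generic illustration with hypothetical data columns; it is not the paper's implementation and it omits the link strength percentage and expert-experience steps.

```python
from collections import Counter
from math import log2

def conditional_mutual_information(x, y, z):
    """Empirical I(X;Y|Z) in bits for discrete variables given as equal-length sequences."""
    n = len(x)
    pxyz = Counter(zip(x, y, z))
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pz = Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in pxyz.items():
        # p(x,y,z) * log2[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]; the 1/n factors cancel inside the log
        cmi += (c / n) * log2((pz[zi] * c) / (pxz[(xi, zi)] * pyz[(yi, zi)]))
    return cmi

# Hypothetical discretized columns (e.g., two candidate parent nodes and one conditioning node)
x = [0, 0, 1, 1, 0, 1, 1, 0]
y = [0, 1, 1, 1, 0, 1, 0, 0]
z = [0, 0, 0, 1, 1, 1, 1, 0]
print(conditional_mutual_information(x, y, z))
```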
Further research will be conducted to design intelligent approaches that can predict and determine a reasonable batch production area for each candidate order. Further attempts will also be made to apply this method to similar order-oriented production and to develop other intelligent techniques for mining repetition patterns.
Author Contributions: S.L. implemented the algorithm and wrote the paper. H.K. edited the paper and improved the quality of the article. H.J. proposed the algorithm and the structure of the paper. B.Z. conducted the experiments and analyzed the data. | 2019-04-16T13:28:23.778Z | 2018-06-02T00:00:00.000 | {
"year": 2018,
"sha1": "21ae36a3a3db3fd5299e0a4cf13dde58fb1601ec",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/8/6/915/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e86b60440fa10567d996e619c0565e78056429c8",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
5471399 | pes2o/s2orc | v3-fos-license | Global well-posedness and scattering for the higher-dimensional energy-critical non-linear Schrodinger equation for radial data
In any dimension $n \geq 3$, we show that spherically symmetric bounded energy solutions of the defocusing energy-critical non-linear Schr\"odinger equation $i u_t + \Delta u = |u|^{\frac{4}{n-2}} u$ in $\R \times \R^n$ exist globally and scatter to free solutions; this generalizes the three and four dimensional results of Bourgain and Grillakis. Furthermore we have bounds on various spacetime norms of the solution which are of exponential type in the energy, which improves on the tower-type bounds of Bourgain. In higher dimensions $n \geq 6$ some new technical difficulties arise because of the very low power of the non-linearity.
Introduction
Let $n \geq 3$ be an integer. We consider solutions $u : I \times \R^n \to \C$ of the defocusing energy-critical non-linear Schrödinger equation
(1) $iu_t + \Delta u = F(u)$
on a (possibly infinite) time interval $I$, where $F(u) := |u|^{\frac{4}{n-2}} u$. We will be interested in the Cauchy problem for the equation (1), specifying initial data $u(t_0)$ for some $t_0 \in I$ and then studying the existence and long-time behavior of solutions to this Cauchy problem.
We restrict our attention to solutions for which the energy
\[
E(u) = E(u(t)) := \int_{\R^n} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{n-2}{2n} |u(t,x)|^{\frac{2n}{n-2}} \, dx
\]
is finite. It is then known (see e.g. [4]) that for any given choice of finite energy initial data $u(t_0)$, the solution exists for times close to $t_0$, and the energy $E(u)$ is conserved in those times. Furthermore this solution is unique 1 in the class $C^0_t \dot H^1_x \cap L^{2(n+2)/(n-2)}_{t,x}(I \times \R^n)$, and we shall always assume our solutions to lie in this class.
Mathematics Subject Classification. 35Q55. The author is a Clay Prize Fellow and is supported by the Packard Foundation. The author is indebted to Jean Bourgain, Jim Colliander, Manoussos Grillakis, Markus Keel, Gigliola Staffilani, and Hideo Takaoka for useful conversations. The author also thanks Monica Visan and the anonymous referee for several corrections.
1 In fact, the condition that the solution lie in $L^{2(n+2)/(n-2)}_{t,x}$ can be omitted from the uniqueness result, thanks to the endpoint Strichartz estimate in [14] and the Sobolev embedding $\dot H^1_x \subseteq L^{2n/(n-2)}_x$; see [13], [8], [9] for further discussion. We thank Thierry Cazenave for this observation.
In particular, from Sobolev embedding we have the spacetime estimate for some explicit function M (n, E) > 0. Because of this and some further Strichartz analysis, one can also show scattering, in the sense that there exist Schwarz solutions u + , u − to the free Schrödinger equation (i∂ t + ∆)u ± = 0, such that This can then be used to develop a small energy scattering theory (existence of wave operators, asymptotic completeness, etc.); see [3]. Also, one can show that the solution map u(t 0 ) → u(t) extends to a globally Lipschitz map in the energy spaceḢ 1 (R n ). The question then arises as to what happens for large energy data. In [4] it was shown that the Cauchy problem is locally well posed for this class of data, so that we can construct solutions for short times at least; the issue is whether these solutions can be extended to all times, and whether one can obtain scattering results like before. It is well known that such results will indeed hold if one could obtain the a priori bound (4) for all global Schwarz solutions u (see e.g. [2]). It is here that the sign of the non-linearity in (1) is decisive (in contrast to the small energy theory, in which it plays no role). Indeed, if we replaced the non-linearity F (u) by the focusing non-linearity −F (u) then an argument of Glassey [10] shows that large energy Schwarz initial data can blow up in finite time; for instance, this will occur whenever the potential energy exceeds the kinetic energy.
In the defocusing case, however, the existence of Morawetz inequalities allows one to obtain better control on the solution. A typical such inequality is Strictly speaking, the result in [4] did not obtain these estimates for the endpoint q = 2, but they can easily be recovered by inserting the Strichartz estimates from [14] into the argument in [4].
for all time intervals I and all Schwarz solutions u : I ×R n → C to (1), where C > 0 is a constant depending only on n; this inequality can be proven by differentiating the quantity R n Im( x |x| · ∇u(t, x)u(t, x)) dx in time and integrating by parts. This inequality is not directly useful for the energy-critical problem, as the right-hand side involves the Sobolev normḢ 1/2 (R n ) instead of the energy normḢ 1 (R n ). However, by applying an appropriate spatial cutoff, Bourgain [1], [2] and Grillakis [11] obtained the variant Morawetz estimate (5) for all A ≥ 1, where |I| denotes the length of the time interval I; this estimate is more useful as it involves the energy on the right-hand side. For sake of selfcontainedness we present a proof of this inequality in Section 2.3. The estimate (5) is useful for preventing concentration of u(t, x) at the spatial origin x = 0. This is especially helpful in the spherically symmetric case u(t, x) = u(t, |x|), since the spherical symmetry, combined with the bounded energy assumption can be used to show that u cannot concentrate at any other location than the spatial origin. Note that spatial concentration is the primary obstruction to establishing global existence for the critical NLS (1); see e.g. [15] for some dicussion of this issue.
With the aid of (5) and several additional arguments, Bourgain [1], [2] and Grillakis [11] were able to show global existence of large energy spherically smooth solutions in the three dimensional case n = 3. Furthermore, the argument in [1], [2] extends (with some technical difficulties) to the case n = 4 and also gives the spacetime bound (4) (which in turn yields the scattering and global well-posedness results mentioned earlier). However, the dependence of the constant M (n, E(u)) in (4) on the energy E(u) given by this argument is rather poor; in fact it is an iterated tower of exponentials of height O(E(u) C ). This is because the argument is based on an induction on energy strategy; for instance when n = 3 one selects a small number η > 0 which depends polynomially on the energy, removes a small component from the solution u to reduce the energy from E(u) to E(u) − η 4 , applies an induction hypothesis asserting a bound (4) for that reduced solution, and then glues the removed component back in using perturbation theory. The final argument gives a recursive estimate for M (3, E) of the form for various absolute constants C > 0, and with η = cE −C . It is this recursive inequality which yields the tower growth in M (3, E). The argument of Grillakis [11] is not based on an induction on energy, but is based on obtaining L ∞ t,x control on u rather than Strichartz control (as in (4)), and it is not clear whether it can be adapted to give a bound on M (3, E).
The main result of this paper is to generalize the result 3 of Bourgain to general dimensions, and to remove the tower dependence on M (n, E), although we are still restricted to spherically symmetric data. As with the argument of Bourgain, a large portion of our argument generalizes to the non-spherically symmetric case; the spherical symmetry is needed only to ensure that the solution concentrates at the spatial origin, and not at any other point in spacetime, in order to exploit the Morawetz estimate (5). In light of the recent result in [7] extending the threedimensional results to general data, it seems in fact likely that at least some of the ideas here can be used in the non-spherically-symmetric setting; see Remark 3.9.
for some absolute constants C depending only on n (and thus independent of E, t ± , u).
Because the bounds are independent of the length of the time interval [t − , t + ], it is a standard matter to use this theorem, combined with the local well-posedness theory in [4], to obtain global well-posedness and scattering conclusions for large energy spherically symmetric data; see [3], [2] for details.
Our argument mostly follows that of Bourgain [1], [2], but avoids the use of induction on energy using some ideas from other work [11], [7], [18]. We sketch the ideas informally as follows. Following Bourgain, we choose a small parameter η > 0 depending polynomially on the energy, and then divide the time interval [t − , t + ] into a finite number of intervals I 1 , . . . , I J , where on each interval the L 2(n+2)/(n−2) t,x norm is comparable to c(η); the task is then to bound the number J of such intervals by O(exp(CE C )).
An argument of Bourgain based on Strichartz inequalities and harmonic analysis, which we reproduce here 4 , shows that for each such interval I j , there is a "bubble" of concentration, by which we mean a region of spacetime of the form {(t, x) : |t − t j | ≤ c(η)N −2 j ; |x − x j | ≤ c(η)N −1 j } inside the spacetime slab I j × R n on which the solution u has energy 5 at least c(η) > 0. Here (t j , x j ) is a point in I j × R n and N j > 0 is a frequency. The spherical symmetry assumption allows us to choose x j = 0; there is also a lower bound N j ≥ c(η)|I j | 1/2 simply because the bubble has to be contained inside the slab I j × R n . However, the harmonic analysis argument does not directly give an upper bound on the frequency N j ; thus the bubble may be much smaller than the slab.
In [1], [2] an upper bound on N j is obtained by an induction on energy argument; one assumes for contradiction that N j is very large, so the bubble is very small. Without loss of generality we may assume the bubble lies in the lower half of the slab I j × R n . Then when one evolves the bubble forward in time, it will have largely dispersed by the time it leaves I j × R n . Oversimplifying somewhat, the argument then proceeds by removing this bubble (thus decreasing the energy by a non-trivial amount), applying an induction hypothesis to obtain Strichartz bounds on the remainder of the solution, and then gluing the bubble back in by perturbation theory. Unfortunately it is this use of the induction hypothesis which eventually gives tower-exponential bounds rather than exponential bounds in the final result. Also there is some delicate playoff between various powers of η which needs additional care in four and higher dimensions.
Our main innovation is to obtain an upper bound on N j by more direct methods, dispensing with the need for an induction on energy argument. The idea is to use Duhamel's formula, to compare u against the linear solutions u ± (t) := e i(t−t±)∆ u(t ± ). We first eliminate a small number of intervals I j in which the lin- norm; the number of such intervals can be controlled by global Strichartz estimates for the free (linear) Schrödinger equation. Now let I j be one of the remaining intervals. If the bubble occurs in the lower half of I j then we 6 compare u with u + , taking advantage of the dispersive properties of the propagator e it∆ in our high-dimensional setting n ≥ 3 to show that the error u − u + is in fact relatively smooth, which in turn implies the bubble cannot be too small. Similarly if the bubble occurs in the upper half of I j we compare u instead with u − . Interestingly, there are some subtleties in very high dimension (n ≥ 6) when the non-linearity F (u) grows quadratically or slower, as it now becomes rather difficult (in the large energy setting) to pass from smallness of the non-linear solution (in spacetime norms) to that of the linear solution or vice versa.
Once the bubble is shown to inhabit a sizeable portion of the slab, the rest of the argument essentially proceeds as in [1]. We wish to show that J is bounded, so suppose for contradiction that J is very large (so there are lots of bubbles). Then the Morawetz inequality (5) can be used to show that the intervals I j must concentrate fairly rapidly at some point in time t * ; however one can then use localized mass conservation laws to show that the bubbles inside I j must each shed a sizeable amount of mass (and energy) before concentrating at t * . If J is large enough there is so much mass and energy being shed that one can contradict conservation of energy. To put it another way, the mass conservation law implies that the bubbles cannot contract or expand rapidly, and the Morawetz inequality implies that the bubbles cannot persist stably for long periods of time. Combining these two facts we can conclude that there are only a bounded number of bubbles.
It is worth mentioning that our argument is relatively elementary (compared against e.g. [1], [2], [7]), especially in low dimensions n = 3, 4, 5; the only tools are (non-endpoint) Strichartz estimates and Sobolev embedding, the Duhamel formula, energy conservation, local mass conservation, and the Morawetz inequality, as well as some elementary combinatorial arguments. We do not need tools from Littlewood-Paley theory such as the para-differential calculus, although in the higher-dimensional cases n ≥ 6 we will need fractional integration and the use of Hölder type estimates as a substitute for this para-differential calculus.
Notation and basic estimates
We use c, C > 0 to denote various absolute constants depending only on the dimension n; as we wish to track the dependence on the energy, we will not allow these constants to depend on the energy E.
For any time interval $I$, we use $L^q_t L^r_x(I \times \R^n)$ to denote the mixed spacetime Lebesgue norm
\[
\| u \|_{L^q_t L^r_x(I \times \R^n)} := \Bigl( \int_I \Bigl( \int_{\R^n} |u(t,x)|^r \, dx \Bigr)^{q/r} \, dt \Bigr)^{1/q},
\]
with the usual modifications when $q = \infty$. We define the fractional differentiation operators $|\nabla|^\alpha := (-\Delta)^{\alpha/2}$ on $\R^n$. Recall that if $-n < \alpha < 0$ then these are fractional integration operators with an explicit form
\[
|\nabla|^\alpha f(x) = c_{n,\alpha} \int_{\R^n} \frac{f(y)}{|x - y|^{n + \alpha}} \, dy
\]
for some computable constant $c_{n,\alpha} > 0$ whose exact value is unimportant to us; see e.g. [17]. We recall that the Riesz transforms $\nabla |\nabla|^{-1} = |\nabla|^{-1} \nabla$ are bounded on $L^p(\R^n)$ for every $1 < p < \infty$; again see [17].
Duhamel's formula and Strichartz estimates.
Let e it∆ be the propagator for the free Schrödinger equation iu t + ∆u = 0. As is well known, this operator commutes with derivatives, and obeys the energy identity and the dispersive inequality for t = 0. In particular we may interpolate to obtain the fixed-time estimates We observe Duhamel's formula: if iu t + ∆u = F on some time interval I, then we have (in a distributional sense, at least) for all t 0 , t ∈ I, where we of course adopt the convention that t t0 = − t0 t when t < t 0 . To estimate the terms on the right-hand side, we introduce the Strichartz normsṠ k (I × R n ), defined for k = 0 as where admissibility was defined in (3), and then for general 7 k by Observe that in the high dimensional setting n ≥ 3, we have 2 ≤ r < ∞ for all admissible (q, r), so have boundedness of Riesz transforms (and thus we could replace |∇| k by ∇ k for instance, when k is a positive integer. We note in particular that for all positive integer k ≥ 1. Specializing further to the k = 1 case we obtain and in dimensions n ≥ 4 We also define dual Strichartz spacesṄ k (I × R n ), defined for k = 0 as the Banach space dual ofṠ 0 (I × R n ), and for general k as . From the first term in (11) and duality (and the boundedness of Riesz transforms) we observe in particular that (14) F We recall the Strichartz inequalities see e.g. [14]; the dispersive inequality (9) of course plays a key role in the proof of these inequalities. While we include the endpoint Strichartz pair (q, r) = (2, 2n n−2 ) in these estimates, this pair is not actually needed in our argument. Observe that the constants C here are independent of the choice of interval I.
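For convenience, the dispersive inequality and Duhamel formula invoked above take the following standard forms for the equation $iu_t + \Delta u = F$; these are the standard statements, supplied as a reference point and believed to coincide, up to notation, with the paper's numbered displays:
\[
\| e^{it\Delta} f \|_{L^\infty_x(\R^n)} \;\le\; C |t|^{-n/2} \| f \|_{L^1_x(\R^n)} \qquad (t \neq 0),
\]
\[
u(t) \;=\; e^{i(t-t_0)\Delta} u(t_0) \;-\; i \int_{t_0}^{t} e^{i(t-s)\Delta} F(s)\, ds \qquad (t_0, t \in I).
\]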
2.2. Local mass conservation. We now recall a local mass conservation law appearing for instance in [11]; a related result also appears in [1].
Let χ be a bump function supported on the ball B(0, 1) which equals one on the ball B(0, 1/2) and is non-increasing in the radial direction. For any radius R > 0, we define the local mass Mass(u(t), note that this is a non-decreasing function of R. Observe that if u is a finite energy solution (1), then (at least in a distributional sense), and so by integration by parts If u has bounded energy E(u) ≤ E, we thus have the approximate mass conservation law Observe that the same claim also holds if u solves the free Schrödinger equation iu t +∆u = 0 instead of the non-linear Schrödinger equation (1). Note that the righthand side decays with R. This implies that if the local mass Mass(u(t), B(x 0 , R)) is large for some time t, then it can also be shown to be similarly large for nearby times t, by increasing the radius R if necessary to reduce the rate of change of the mass. From Sobolev and Hölder (or by Hardy's inequality) we can control the mass in terms of the energy via the formula (18) |Mass(u(t), B(x 0 , R))| ≤ CE 1/2 R.
Morawetz inequality.
We now give the proof of the Morawetz inequality (5); this inequality already appears in [1], [2], [11] in three dimensions, and the argument extends easily to higher dimensions, but for sake of completeness we give the argument here.
Using the scale invariance (2) we may rescale so that A|I| 1/2 = 1. We begin with the local momentum conservation identity where j, k range over spatial indices 1, . . . , n with the usual summation conventions, and ∂ k is differentiation with respect to the x k variable. This identity can be verified directly from (1); observe that when u is finite energy, both sides of this inequality make sense in the sense of distributions, so this identity can be justified in the finite energy case by the local well-posedness theory 8 . If we multiply the above identity by the weight ∂ k a for some smooth, compactly supported weight a(x), and then integrate in space, we obtain (after some integration by parts) We apply this in particular to the C ∞ 0 weight a(x) := (ε 2 + |x| 2 ) 1/2 χ(x), where χ is a bump function supported on B(0, 2) which equals 1 on B(0, 1), and 0 < ε < 1 is a small parameter which will eventually be sent to zero. In the region |x| ≤ 1, one can see from elementary geometry that a is a convex function (its graph is a hyperboloid); in particular, (∂ j ∂ k a)Re(∂ k u∂ j u) is non-negative. Further computation shows that in this region; in particular −∆∆a, ∆a are positive in this region since n ≥ 3. In the region 1 ≤ |x| ≤ 2, a and all of its derivatives are bounded uniformly in ε, and so the integrals here are bounded by O(E(u)) (using (18) to control the lower-order term). Combining these estimates we obtain the inequality Integrating this in time on I, and then using the fundamental theorem of calculus and the observation that a is Lipschitz, we obtain sup t∈I |x|≤2 By (18) and Cauchy-Schwarz the left-hand side is O(E(u)). Since |I| = A −2 < 1, we thus obtain I |x|≤1 Taking ε → 0 and using monotone convergence, (5) follows.
Remark 2.4. In [7], an interaction variant of this Morawetz inequality is used (superficially similar to the Glimm interaction potential as used in the theory of conservation laws), in which the weight 1/|x| is not present. In principle this allows for arguments such as the one here to extend to the non-radial setting. However the (frequency-localized) interaction Morawetz inequality in [7] is currently restricted to three dimensions, and has a less favorable numerology 9 than (5), so it seems that the arguments given here are insufficient to close the argument in the general case in higher dimensions. At the very least it seems that one would need to use more sophisticated control on the movement of mass across frequency ranges, as is done in [7].
Proof of Theorem 1.1
We now give the proof of Theorem 1.1. The spherical symmetry of u is used in only one step, namely in Corollary 3.5, to ensure that the solution concentrates at the spatial origin instead of at some other location.
We fix E, [t − , t + ], u. We may assume that the energy is large, E > c > 0, otherwise the claim follows from the small energy theory. From the bounded energy of u we observe the bounds We need some absolute constants 1 ≪ C 0 ≪ C 1 ≪ C 2 , depending only on n, to be chosen later; we will assume C 0 to be sufficiently large depending on n, C 1 sufficiently large depending on C 0 , n, and C 2 sufficiently large depending on C 0 , C 1 , n. We then define the quantity η := C −1 2 E −C2 . Our task is to show that C1,C2) ). We may assume of course that t+ t− R n |u(t, x)| 2(n+2)/(n−2) dxdt > 4η since our task is trivial otherwise. We may then (by the greedy algorithm) subdivide [t − , t + ] into a finite number of disjoint intervals I 1 , . . . , I J for some J ≥ 2 such that (20) η ≤ Ij R n |u(t, x)| 2(n+2)/(n−2) dxdt ≤ 2η for all 1 ≤ j ≤ J. It will then suffice to show that We shall now prove various concentration properties of the solution on these intervals. We begin with a standard Strichartz estimate that bootstraps control on (20) to control on all the Strichartz norms (but we lose the gain in η): Proof. From Duhamel (10), Strichartz (15), (16) and the equation (1) we have for any t j ∈ I j . From (19), (14) we thus have But from the chain rule and Hölder we have (formally, at least) by (20), (11). Thus we have the formal inequality If η is sufficiently small (by choosing C 2 large enough), then the claim follows, at least formally. To make the argument rigorous one can run a Picard iteration scheme that converges to the solution u (see e.g. [4] for details) and obtain the above types of bounds uniformly at all stages of the iteration; we omit the standard details.
Next, we obtain lower bounds on linear solution approximations to u on an interval where the L 2(n+2)/(n−2) t,x norm is small but bounded below.
Proof. Without loss of generality it suffices to prove the claim when l = 1. In low dimensions n = 3, 4, 5 the Lemma is easy; indeed an inspection of the proof of Lemma 3.1 reveals that we have the additional bound and hence by (12) When n = 3, 4, 5 we have 2/(n + 2) > (n − 2)/2(n + 2), and so the above estimates then show that u − u 1 is smaller than u in L 2(n+2)/(n−2) t,x ([t 1 , t 2 ] × R n ) norm if η is sufficienty small (i.e. C 2 is sufficiently large), at which point the claim follows from the triangle inequality (and we can even replace η C by η).
In higher dimensions n ≥ 6, the above simple argument breaks down. In fact the argument becomes considerably more complicated (in particular, we were only able to obtain a bound of η C rather than the more natural η); the difficulty is that while the non-linearity still decays faster than linearly as u → 0, one of the factors is "reserved" for the derivative ∇u, for which we have no smallness estimates, and the remaining terms now decay linearly or worse, making it difficult to perform a perturbative analysis. The resolution of this difficulty is rather technical, so we defer the proof of the higher dimensional case to an Appendix (Section 4) so as not to interrupt the flow of the argument. We remark however that the argument does not require any spherical symmetry assumption on the solution.
Define the linear solutions u − , u + on [t − , t + ] × R n by u ± (t) := e i(t−t±)∆ u(t ± ); these are the analogue of the scattering solutions for this compact interval [t − , t + ]. From (19) and the Strichartz estimate (15), (12), we have Call an interval I j exceptional if we have Ij R n |u ± (t, x)| 2(n+2)/(n−2) dxdt > η C1 for at least one choice of sign ±, and unexceptional otherwise. From the above global Strichartz estimate we see that there are at most O(E C /η C1 ) exceptional intervals, which will be acceptable for us from definition of η. Thus we may assume that there is at least one unexceptional interval.
Unexceptional intervals will be easier to control than exceptional ones, because the homogeneous component of Duhamel's formula (10) is negligible, leaving only the inhomogeneous component to be considered. But as we shall see, this component enjoys some additional regularity properties. In particular, we now prove a concentration property of the solution on unexceptional intervals. Proposition 3.3. Let I j be an unexceptional interval. Then there exists an x j ∈ R n such that Proof. By time translation invariance and scale invariance (2) Since I j is unexceptional, we have 1 t * R n |u − (t, x)| 2(n+2)/(n−2) dxdt ≤ η C1 . From (24) and Lemma 3.1, it is easy to see (using the chain rule and Hölder as in the proof of Lemma 3.1) that and hence by Strichartz (16) 1 From these estimates and (26), we thus see from the triangle inequality (if C 0 is large enough, and η small enough (i.e. C 2 large enough depending on C 0 )) that where v is the function We now complement this lower bound on v with an upper bound. First observe from Lemma 3.1 that also from (19) and (15) we have Finally, from (28) and (16) From the triangle inequality and (27) we thus have We shall need some additional regularity control on v. For any h ∈ R n , let u (h) denote the translate of u by h, i.e. u (h) (t, x) := u(t, x − h).
Proof. First consider the high-dimensional case n ≥ 4. We use (19), the chain rule and Hölder to observe that so by the dispersive inequality (9) Integrating this for s in [t − , t * − η C0 ] we obtain interpolating this with (31), (11) we obtain The claim then follows (with c = 1) from the Fundamental theorem of calculus and Minkowski's inequality. Now consider the three-dimensional case n = 3. From (19), the fundamental theorem of calculus, and Minkowski's inequality we have while from the triangle inequality we have Since F (u) is quintic in three dimensions, we thus have from Hölder and (19) that Integrating this for s ∈ [t − , t * − η C0 ] using (8) we obtain On the other hand, from (31), (12), and the triangle inequality we have and the claim follows by interpolation.
We can average this lemma over all |h| ≤ r, for some scale 0 < r < 1 to be chosen shortly, to obtain where v av (x) := χ(y)v(x + ry) dy for some bump function χ supported on B(0, 1) of total mass one. In particular by a Hölder in time we have Thus if we choose r := η CC0 for some large enough C, and η is sufficiently small, we see from (29) that v av L 2(n+2)/(n−2) t,x On the other hand, by Hölder and Young's inequality v av L 2n/(n−2) t,x (11). Thus by Hölder we have Thus we may find a point (t j , and in particular by Cauchy-Schwarz for all R ≥ r. Observe from (30) that v solves the free Schrödinger equation on [t * − η C0 , 1], and has energy O(E C ) by (31), (11). Thus by (17) we have From Duhamel's formula (10) (or (27)) we have From (25) and Hölder we have Thus if we choose C 1 sufficiently large depending on C 0 (recalling that r = η CC0 and R = Cη −C E C r −C ), and assume η sufficiently small depending polynomially on E, we have Mass(u(t * − η C0 ), B(x j , R)) ≥ cη C E −C r C . By another application of (17) we thus have for all t ∈ [0, 1], and Proposition 3.3 follows.
We now exploit the radial symmetry of u to place the concentration point x j at the origin. This is the only place where the spherical symmetry assumption is used.
Corollary 3.5. Let I j be an unexceptional interval, and assume that the solution u is spherically symmetric. Then we have Proof. We again rescale I j = [0, 1]. Let x j be as in Proposition 3.3. Fix t ∈ [0, 1]. If |x j | = O(η −C ′ C0 ) for some C ′ depending only on n then we are done. Now suppose that |x j | ≥ η −C ′ C0 . Then if C ′ is big enough, we can find η −cC ′ rotations of the ball B(x j , Cη −CC0 ) which are disjoint. On each one of these balls, the mass of u(t) is at least cη CC0 by the spherical symmetry assumption; by Hölder this shows that the L 2n/(n−2) norm of u(t) on these balls is also cη CC0 . Adding this up for each of the η −cC ′ C0 balls, we obtain a contradiction to (19) if C ′ C 0 is large enough. Thus we have |x j | = O(η −C ′ C0 ) and the claim follows.
From this corollary and Hölder we see that |x|≤R |u(t, x)| 2n/(n−2) |x| dxdt ≥ cη CC0 |I j | −1/2 whenever t ∈ I j for some unexceptional interval I j , and R ≥ Cη −CC0 |I j | 1/2 . In particular we have Ij |x|≤R Combining this with (5) and the bounded energy we obtain the following combinatorial bound on the distribution of the intervals I j .
Corollary 3.6. Assume that the solution u is spherically symmetric. For any interval I ⊆ [t − , t + ], we have 1≤j≤J:Ij ⊆I (note we can use η −C to absorb any powers of the energy which appear; also, note that the O(Cη −C1 ) exceptional intervals cause no difficulty).
This bound gives quite strong control on the possible distribution of the intervals I j , for instance we have Corollary 3.7. Assume that the solution u is spherically symmetric. Let I = j1≤j≤j2 I j be a union of consecutive intervals. Then there exists j 1 ≤ j ≤ j 2 such that |I j | ≥ cη C(C0,C1) |I|.
Proof. From the preceding corollary we have Since j1≤j≤j2 |I j | = |I|, the claim follows.
We now repeat a combinatorial argument 11 of Bourgain [1] to show that the intervals I j must now concentrate at some time t * : Proposition 3.8. Assume that the solution u is spherically symmetric. Then there exists a time t * ∈ [t − , t + ] and distinct unexceptional intervals I j1 , . . . , I jK for some K > cη C(C0,C1) log J such that Proof. We run the algorithm from Bourgain [1]. We first recursively define a nested sequence of intervals I (k) , each of which is a union of consecutive unexceptional I j , as follows. We first remove the O(η −C1 ) exceptional intervals from [t − , t + ], leaving O(η −C1 ) connected components. One of these, call it I (1) , must be the union of J 1 ≥ cη C1 J consecutive unexceptional intervals. By Corollary 3.7, there exists an I j1 ⊆ I (1) such that |I j1 | ≥ cη CC0 |I (1) |, so in particular dist(t, I j1 ) ≤ Cη −CC0 |I j1 | for all t ∈ |I (1) |. Now we remove I j1 from I (1) , and more generally remove all intervals I j from I (1) for which |I j | > |I j1 |/2. There can be at most Cη −CC0 such intervals to remove, since I j1 was so large. If J 1 ≤ Cη −CC0 then we set K = 1 and terminate the algorithm. Otherwise, we observe that the remaining connected components of I (1) still contain at least cη CC0 J intervals, and there are O(η −CC0 ) such components. Thus by the pigeonhole principle we can find one of these components, I (2) , which is the union of J 2 ≥ cη CC0 J 1 intervals, each of which must have length less than or equal to |I j1 |/2 by construction. Now we iterate the algorithm, using Corollary 3.7 to locate an interval I j2 in I (2) such that |I j2 | ≥ cη CC0 |I (2) |, and then removing all intervals of length > |I j2 |/2 from I (2) |. If the number of intervals in |I (2) | is O(η −CC0 ), we terminate the algorithm, otherwise we can pass as before to a smaller interval I (3) which is a union of J 3 ≥ cη CC0 J 2 intervals. We can continue in this manner for K steps for some K > cη C(C0,C1) log J until we run out of intervals. The claim then follows by choosing t * to be an arbitrary time in I (K) .
Thus by the triangle inequality, we have
A m ∇u 1 X ≥ cη (n−2)/2(n+2) for some 0 ≤ m ≤ M , again assuming that η is sufficiently small. Ideally we would now like the operator A to be bounded on X. We do not know if this is true; however we have the following weaker (and technical) version of this fact which suffices for our application.
X for some absolute constant 0 < θ < 1 (depending only on n).
Assuming this Lemma for the moment, we apply it together with (35), (36), (37) we obtain a bound of the form for some constants C m , θ m > 0. Combining this with our lower bound on A m ∇u 1 X we obtain ∇u 1 X ≥ cη C (assuming η sufficiently small depending on E, and allowing constants to depend on the fixed constant M ), and Lemma 3.2 follows (again using the boundedness of Riesz transforms).
It remains to prove Lemma 4.1. The point is to take advantage of one of the (many) refinements of the Sobolev embedding used 13 to prove (37); we shall use an argument based on Hedberg's inequality. We will not attempt to gain powers of η here (since the Neumann series step has in some sense fully exploited those gains already) and so shall simply discard all such gains that we encounter.
We will make the a priori assumption that w is smooth and rapidly decreasing; this can be removed by the usual limiting argument. We normalize w Ṡ0 ([t1,t2]×R n ) := 1, and write α := w X , thus α ≤ C by (37). Our task is to show that Aw X ≤ Cα c .
By the above estimates, we see that we can prove this estimate in the regions |y| ≤ α c0 R and |y| ≥ α −c0 R, even if we place the absolute values inside the integral, where 0 < c 0 ≪ 1 is an absolute constant to be chosen shortly. Thus it will suffice to estimate the remaining region α c0 R ≤ |y| ≤ α −c0 R. Partitioning the integral via smooth cutoffs, we see that it suffices (if c 0 was chosen sufficiently small) to show that | R n v(y)ϕ(y/r) dy| ≤ Cα c r n 2 , for all r > 0, where ϕ is a real-valued bump function. One may verify from dimensional analysis that this estimate (as well as the hypotheses) are invariant under the scaling w(t, x) → λ −n/2 w(t/λ 2 , x/λ); v(t, x) → λ −n/2 w(t/λ 2 , x/λ) and so we may take r = 1. Since v = Aw(0), we thus reduce to proving that | Aw(0), ϕ | ≤ Cα c .
Expanding out the definition of A and using duality, we can write this as (40) | 0 t1 R n (V 1 (t)w(t) + V 2 (t)w(t))e it∆ ϕ dxdt| ≤ Cα c . From (33), (11) we have for all τ > 1. Thus if we set τ = Cα −c0 for some small c 0 to be chosen later, we see that the portion of (40) arising from [t 1 , −τ ] is acceptable, and it suffices to then prove the bound on [−τ, 0]. In fact we will prove the fixed time estimates | R n (V 1 (t)w(t) + V 2 (t)w(t))e it∆ ϕ dx| ≤ CE C (1 + t) C |∇| −1 w(t) c L 2(n+2)/(n−2) (R n ) for all t ∈ [−τ, 0], which proves the claim if c 0 is sufficiently small, thanks to Hölder's inequality and the hypothesis w X = α. Fix t. We shall just prove this inequality for V 2 w, as the corresponding estimate for V 1 w is similar. Because of the negative derivative on w on the right-hand side, we shall need some regularity control on V 2 . Note that V 2 behaves like |u| 4/(n−2) ; since 4/(n − 2) ≤ 1, the standard fractional chain rule is not easy to apply. Instead, we will work in Hölder-type spaces 15 , which are more elementary. As with Lemma 14 Indeed, one just needs to note that e −it∆ ϕ is bounded in L 2 x and decays in L ∞ x like O(t −n/2 ) to verify this claim. 15 Using Hölder spaces rather than Sobolev spaces costs an epsilon of regularity (see e.g. [17] for a discussion) but for our purposes any non-zero amount of regularity will suffice. The reader may recognize the arguments below as that of splitting a product into paraproducts; however we are avoiding the use of standard paraproduct theory as it does not interact well with non-linear maps such as u → V 2 which may only be Hölder continuous of order 4/(n − 2) < 1. | 2014-10-01T00:00:00.000Z | 2004-02-01T00:00:00.000 | {
"year": 2004,
"sha1": "99566a9cf01aef5728d17377cc144d31b30c6c31",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7337620fd7f79d653eb42269f5c7e9de0b3fd122",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
231799882 | pes2o/s2orc | v3-fos-license | Profile of fungal opportunistic infection in HIV/ AIDS patients: An appraisal at Indian tertiary care
Introduction: AIDS is characterized by a number of opportunistic infections which are responsible for high morbidity and mortality. The spectrum and distribution of opportunistic infections (OIs) in AIDS patients is due to viral, bacterial and fungal cytopathology; OIs are secondary to the failure of both the cellular and humoral immune responses, and a CD4 count of <200 cells/mm3 leads to morbidity and mortality. Aim of the study: To document the spectrum of fungal opportunistic infections in various age groups of HIV/AIDS patients and to note the CD4 counts among the group. Materials and Methods: This is a descriptive study. Clinically and laboratory-confirmed fungal opportunistic infections in HIV patients were recorded during the one-year period from June 2017 to May 2018. Blood from these patients was processed for CD4 counts by Partec flow cytometry to assess their immune status. Results: Out of 500 HIV-seropositive cases, we found 65 fungal opportunistic infections, accounting for 13% of the cases. The majority of opportunistic infections were in the age group of 31-40 years (37.8%), with a male predominance accounting for 55.2% of the cases. Out of the 65 cases, 9.2% had oral candidiasis, followed by 1% with vaginal candidiasis, with CD4 counts <100 cells/mm3. Conclusions: In our study, the predominant lesion observed among all the fungal opportunistic infections was oral candidiasis. Our study will help in programme management and in planning appropriate strategies for the investigation and treatment of common OIs as part of the management programme for HIV-infected populations. © This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
AIDS is an emerging pandemic viral infectious disease caused by the Human Immunodeficiency Virus, which has posed the greatest challenge to public health in the modern world. Clinical manifestations in HIV infection are primarily due to viral cytopathology and are secondary to the failure of both the cellular and humoral immune responses. [1][2][3][4][5] Opportunistic infections with low CD4 counts influence the morbidity and mortality due to HIV infection. 1,3,5,6 Patients with CD4 counts >200/mm3 are 6 times more likely to develop opportunistic infections compared to those with CD4 counts of >350/mm3. In India, tuberculosis is the most commonly reported opportunistic infection with CD4 cells >200/mm3. 2,3 Other commonly reported opportunistic infections among the HIV-infected are oral candidiasis, herpes zoster, cryptococcal meningitis, cerebral toxoplasmosis and cytomegalovirus retinitis, with CD4 counts <200/mm3. 6,7 The high incidence of commonly reported opportunistic infections with low CD4 counts in Indian HIV-infected individuals highlights the need for early screening and also the need to increase awareness among health care providers in order to improve decisions regarding prophylaxis for prevention and appropriate therapeutic interventions. [7][8][9] To determine the role of CD4 decline in the incidence of opportunistic infections, the CD4 count as a clinical score serves both as an alarm for the timing of prophylaxis and as a guide for therapeutic intervention. 2,3,6,10 It is also documented that the type of opportunistic infection is profoundly influenced by geography and the prevalence of infectious diseases in a particular region, as well as by nutritional, socioeconomic and other factors. 1,2,7,11 Therefore, this study was conducted to evaluate the correlation between CD4 counts in HIV-infected patients and the onset of specific opportunistic infections. 1,2,7,9
Materials and Methods
Source of data
This was a prospective study involving proven fungal cases of HIV/AIDS with signs and symptoms of opportunistic infections, attending the outpatient department or admitted to the hospital, during the one-year period from June 2017 to June 2018, drawn from the study group.
Sample size
A total of 500 HIV/AIDS-seropositive patients with signs and symptoms of OIs (clinically, radiologically and diagnostically proven cases) were included. Informed consent was obtained from all patients during the study.
Exclusion criteria
HIV-seropositive individuals already on antiretroviral therapy, asymptomatic partners and children of HIV-seropositive individuals, and HIV-seropositive individuals detected during routine ANC check-up or during pre-operative, pre-employment and pre-insurance screening.
Methods of specimen collection
Specimen for CD4 count
With strict aseptic precautions, 3 ml of venous blood was collected by venepuncture into an EDTA vacutainer and processed by flow cytometry according to the standard protocol supplied by the manufacturer (Partec IVD flow cytometer, Partec GmbH, Am Flugplatz 13, D-02828 Görlitz, Germany).
Principle
The mouse monoclonal antibody MFM-241 recognizes the human CD4 antigen, a transmembrane glycoprotein (55 kDa) of the immunoglobulin supergene family, present on a subset of T-lymphocytes ("helper/inducer" T-cells) and also expressed at a lower level on monocytes, tissue macrophages and granulocytes. Approximately 20-60% of human peripheral blood mononuclear cells, as well as a subpopulation of monocytes (with a weaker signal), are stained. The antibody was studied at the 8th International Workshop on Human Cell Differentiation Molecules, HCDM (former HLDA VIII), May 2006, Quebec, Canada. CD4 is the primary cellular receptor for the human immunodeficiency virus (HIV).
Flow cytometric analysis
CD4-PE fluorescence can be analysed on a Partec flow cytometer with an excitation light source of 488 nm or 532 nm (blue or green solid-state laser). To count CD4+ T-cells, transfer the test tube with 84 µl of the ready-prepared blood sample (see Method) to the Partec flow cytometer; the counting results will be displayed automatically as CD4+ T-cells per µl of whole blood.
Statistical analysis
The collected data were tabulated, analyzed and subjected to statistical analysis using SPSS 19.0. Results are presented as ranges for quantitative data and as numbers and percentages for qualitative data.
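The associations between categorical variables and CD4 count strata reported in the Results can be reproduced in outline with a chi-square test of independence on a contingency table, as sketched below. The table values are hypothetical, and the original analysis was performed in SPSS 19.0, so the exact test and options used there may differ.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = age groups, columns = CD4 count strata.
# Replace with the actual study counts to reproduce the reported p-values.
table = np.array([
    [12,  8,  5],
    [40, 25, 10],
    [60, 30, 15],
    [25, 15, 10],
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")  # p < 0.05 indicates a significant association
```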
Results
The present study was carried out on 500 HIV-seropositive patients with signs and symptoms of opportunistic infections, 65 of whom had fungal opportunistic infections, attending the District Hospital ART centre over a period of 12 months (June 2017 to May 2018), to determine the incidence of fungal infections and their correlation with CD4 count. The observations made from the study are presented below. The 500 cases ranged in age from <20 years to >60 years; the largest group, 189 cases (37.8%), was in the age group of 31-40 years, while the smallest, 18 cases (3.6%), was in the age group of >60 years. Regarding gender distribution, the majority, 276 cases (55.2%), were males, giving a male:female ratio of 1.2:1. The majority of the study group, 434 cases (86.8%), belonged to the heterosexual risk group, while the smallest group, 2 cases (0.4%), belonged to injecting drug users.
According to WHO grading, out of the 500 cases the majority belonged to Grade 3, accounting for 277 (55.4%) of the cases. In the occupation-wise distribution, the largest group was agricultural labourers, accounting for 31.4% of the cases. In our study we noted 65 (13%) cases of fungal infections, with the majority being oral candidiasis.
There was a statistically significant association between age, risk factor and WHO grading and CD4 count, with a p value of less than 0.05. There was no statistically significant association between gender or occupation and CD4 count.
Discussion
In the present study, the clinical profile of various fungal opportunistic infections among admitted HIV-seropositive patients was analyzed.
The maximum number of HIV-positive individuals (37.8%) was in the age group of 31-40 years. Several study groups, both in India and abroad, have reported 48.2% to 92% of HIV-seropositive individuals in this age group; our findings are in accordance with Vajpayee et al. The male:female ratio in the present study was 1.2:1, consistent with the Vickers et al study, which reported 1.4:1. While the males belonged to a wide age spectrum, the females were a considerably younger population, and most of them acquired the infection from their spouses, reflecting the male dominance in Indian society and emphasizing an increased need for awareness and counseling of both spouses. Our findings are consistent with the Ghate et al study. 11 The lower CD4 counts in the present study may be due to a diagnostic bias from later detection of the disease, reflecting a paucity of extensive diagnostic facilities at peripheral health care centres, so that the diagnosis remains uncertain or is not established until the late stages. The findings of low CD4 counts at admission to the hospital demonstrate that a high level of immunodeficiency was already present, defining advanced AIDS.
Epidemiological features depend upon social and cultural practices of the people which may again vary from region to region.
Our findings are in accordance with the Ghate et al 11 study. Oral candidiasis was the commonest mucocutaneous opportunistic infection observed in our study. The number of T-helper cells usually falls over the course of HIV infection, and serious fungal infections tend to occur when the T-helper cell count has dropped to around 100 cells/mm3.
Cryptococcal meningitis is the most common type of meningitis reported in important neurological studies in India. Cryptococcal meningitis, an AIDS-defining illness, usually appears when CD4 counts are below 100/mm3 and is associated with an increased risk of death.
Four of the HIV-seropositive patients were co-infected with Pneumocystis carinii pneumonia (PCP) in the present study. It is now established that PCP is one of the common opportunistic infections in HIV, but cases are relatively under-documented, possibly due to the lack of routine testing facilities; PCP is rarely documented in India. Our findings correlate with Ghate et al 11 and Vajpayee et al 8 for CD4 counts <100.
Four percent of the HIV-seropositive patients had polymicrobial infections, which included oral candidiasis plus pulmonary tuberculosis in 2% and PCP plus cryptosporidial infestation in 2%.
About one percent of the HIV-seropositive cases in the present study were co-infected with hepatitis B virus. All of the co-infected patients had previously undergone blood transfusion.
Conclusions
HIV/AIDS is a burning crisis worldwide. Early diagnosis of opportunistic infections and prompt treatment improve the quality of life, increase life expectancy among infected patients and delay progression to AIDS. Timely initiation and continuous intake of ART will not only prolong survival but will also decrease the viral load and the transmission of the disease.
Source of Funding
No financial support was received for the work within this manuscript.
Conflict of Interest
The authors declare they have no conflict of interest. | 2020-12-24T09:12:30.117Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "4e5f0647736b39bfde7d4592e82e3fe52cd386c5",
"oa_license": "CCBY",
"oa_url": "https://www.jdpo.org/journal-article-file/12826",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9f32afc8ebbbb6c1efab9faa1efab8f2ac474d91",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
89745676 | pes2o/s2orc | v3-fos-license | The immunomodulatory effect of Zingiber cassumunar ethanolic extract on phagocytic activity, nitrite oxide and reactive oxygen intermediate secretions of macrophages in mice
Immunomodulators could protect the body from a variety of infectious agents and boost immunity. Zingiber cassumunar rhizome, or bangle, has shown potential as an immunomodulator by increasing macrophage activity in vitro. The objective of this study was to determine the effect of Z. cassumunar rhizome ethanolic extract on the phagocytic activity and the nitrite oxide (NO) and reactive oxygen intermediate (ROI) secretions of macrophages in vivo. A total of 200 g of Z. cassumunar rhizome was powdered, macerated in 96% ethanol and evaporated to obtain a concentrated extract. Mice were divided into 5 groups as follows: the normal group was given water only, the negative control group was given a 0.94% CMC-Na suspension, and the treatment groups were given 250, 500 and 1000 mg/kg BW of Z. cassumunar ethanolic extract, respectively. The extract was administered orally for 7 days. On the 8th day the mice were injected intraperitoneally with 0.7 mg/kg BW of lipopolysaccharide, and four hours later macrophages were isolated. The phagocytic activity and the NO and ROI secretion levels of the macrophages were then determined. The treatments with 250, 500 and 1000 mg/kg BW of Z. cassumunar ethanolic extract significantly increased the ROI and NO secretion levels (p<0.05) but did not increase the phagocytic activity of macrophages (p>0.05). Z. cassumunar ethanolic extract therefore has an immunomodulatory effect in vivo.
Introduction
The environment around humans contains various pathogenic agents, such as bacteria, viruses, fungi, protozoa and parasites, which can cause disease in the human body. Infections that occur in normal people are generally short and rarely leave permanent damage. This is because the human body has a system, called the immune system, that responds to and protects the body against these pathogenic agents. The immune response depends on the ability of the immune system to recognize foreign molecules (antigens) presented by pathogens and then to generate appropriate reactions to eliminate the antigenic sources [1].
The immune response can be enhanced by improving the function of the immune system with immunostimulants. Immunostimulants can increase the body's resistance against various infections or assist in the treatment of diseases associated with suppression of the immune system. Immunostimulants work by stimulating the main components of the immune system through phagocytosis, either by direct phagocytic mechanisms or indirectly through the release of reactive oxygen compounds such as NO and ROI [2]. NO and ROI act together with macrophages to destroy pathogens via the oxygen-dependent pathway of the immune system.
Understanding the mechanism of action is an important part of developing extracts into herbal remedies that can be used in formal health services. Z. cassumunar has been used traditionally to treat various diseases, but knowledge of its mechanism of action is still very limited. This study examined the potential of the ethanol extract of Z. cassumunar as an immunomodulator and assessed its mechanism. This research is expected to provide benefits for the development and utilization of Z. cassumunar in health services.
Plant material
Z. cassumunar rhizome was purchased from the local market of Pasar Beringharjo, Yogyakarta, Indonesia. The rhizome was macerated in 96% ethanol and the extract was evaporated to obtain a concentrated extract.
Animal treatment
The animal handling procedures in this study were approved by the Research Ethics Committee, Universitas Ahmad Dahlan, with approval number 011601011. The test animals used in this study were male Swiss mice, 8 weeks of age. The mice were divided into 5 groups: the normal group was given water only, the negative control group was given a 0.94% CMC-Na suspension only, and the treatment groups were given 250, 500 and 1000 mg/kg BW of Z. cassumunar ethanol extract. The extract was administered orally for 7 days. On the 8th day the mice were injected intraperitoneally with 0.7 mg/kg BW of lipopolysaccharide, and 4 hours later macrophages were isolated.
Macrophage isolation
The mice were fasted for 10-12 hours and then anesthetized with chloroform. Each mouse was then sprayed with disinfectant solution and placed in the supine position. The abdominal skin was opened and the peritoneal sheath was cleaned with 70% alcohol. Ten ml of RPMI medium was injected into the peritoneal cavity and left for 3 minutes.
The peritoneal fluid was aspirated with an injection syringe from a fat-free area away from the intestine. The aspirate was centrifuged at 1200 rpm and 4°C for 10 min. The supernatant was removed, and the macrophages obtained were resuspended in 1000 μl of complete medium. The number of cells was counted with a hemocytometer using 10 μl of the macrophage suspension.
For NO secretion testing, 1000 μl of the cell suspension was seeded into the wells of a 24-well microplate at a density of 1x10^5 cells/ml. For the phagocytic activity and ROI secretion tests, the cells were seeded into 6-well microplates at a density of 5x10^5 cells/well. The cells were then incubated for 30 min in a 5% CO2 incubator at 37°C, after which 800 μl of complete medium was added to each well. The cell suspensions for the NO and ROI secretion tests were incubated in a 5% CO2 incubator at 37°C for 24 hours.
Phagocytic assay
The medium was discarded from the cultured cells in the microplate wells. Each well then received 50 µL of latex suspension and was incubated for 60 min in a 5% CO2 incubator at 37°C. Following incubation, the latex suspension was discarded and the cells were left to dry at room temperature. The cells were then fixed with methanol for 3 min and the methanol was discarded. After that, the cells were stained with 10% Giemsa for 30 min, washed with distilled water, and observed under a microscope at 400x magnification.
Griess assay for detection of NO secretion
Griess reagent was prepared by mixing Griess A and Griess B in equal amounts. Griess A was composed of 0.1 g of N-(1-naphthyl)ethylenediamine dihydrochloride (NED) (Sigma N 5889) in 100 ml of distilled water. Griess B was prepared by mixing 1 g of sulfanilamide (Sigma N 5589) in 100 ml of 5% orthophosphoric acid. Griess A and Griess B must be stored in a dark place to protect them from direct light.
Fifty µl of the macrophage culture suspension was placed into each well of a 96-well microplate. To each sample 50 µl of Griess reagent was added, and the plate was left at room temperature for 15 min until the colour changed. The absorbance was read at a wavelength of 550 nm. A nitrite standard solution was used to calculate the NO concentration.
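As an illustration of the final calculation step, the short sketch below fits a linear standard curve to a nitrite dilution series and converts sample absorbances at 550 nm into nitrite concentrations. The standard concentrations and absorbance values are hypothetical placeholders, not data from this study.

import numpy as np

# Hypothetical nitrite standard series (µM) and their A550 readings (placeholders)
std_conc = np.array([0.0, 3.125, 6.25, 12.5, 25.0, 50.0, 100.0])
std_abs = np.array([0.04, 0.06, 0.09, 0.15, 0.27, 0.50, 0.96])

# Linear standard curve: A550 = slope * [nitrite] + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def nitrite_um(a550):
    """Convert an absorbance reading at 550 nm to nitrite concentration (µM)."""
    return (a550 - intercept) / slope

sample_abs = np.array([0.12, 0.31, 0.22])  # culture supernatant readings (placeholders)
print(nitrite_um(sample_abs))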
ROI secretion assay
Fifty μl of NBT solution was added to each well. The cells were then given 1 ml of PBS containing 125 ng/ml of PMA in the middle of the coverslip and incubated in a 5% CO2 incubator at 37 °C for 60 min. The reagent was removed from the wells and the cells were dried at room temperature. After drying, the cells were fixed with 1000 μl of methanol for 30 seconds. After drying again, the cells were covered with 200 μl of 2% neutral red solution for 15 minutes, dried at room temperature, and then rinsed with distilled water. The percentage of macrophage cells showing NBT reduction was calculated from about 100 cells examined under a light microscope at 400× magnification.
Results and Discussion
The immune system is the mechanism by which the body maintains its integrity and protects itself against various pathogens. It consists of a specific and a nonspecific immune system. Z. cassumunar has been reported to be a potential immunomodulatory agent. One of the chemical constituents found in Z. cassumunar is curcumin. The extract dose given to the mice was equivalent to a treatment of 18.08 mg/20 g BW of curcumin. This dose can increase NO secretion compared to the negative control [11]. Treatment with the ethanol extract of Z. cassumunar for 14 days was also found to increase the phagocytic capacity of macrophages in mice induced by Plasmodium berghei [12].
In this research, lipopolysaccharide (LPS) was used as an antigen to boost the immune response. Intraperitoneal injection of 1 mg/kg BW of LPS from E. coli O111B4 can increase C-reactive protein in mouse serum as part of the acute inflammation reaction [13].
Effect of Z. cassumunar rhizome extract on macrophage phagocytosis
Phagocytic activity of macrophages is activated by the presence of antigens from macromolecules and pathogens. In this study the phagocytic activity of macrophages was observed by determining which macrophages had phagocytosed latex. The parameters used were Active Phagocyte Cells (SFA), which refers to the number of macrophage cells that phagocytosed latex particles among 100 macrophage cells; the phagocytic capacity, which refers to the amount of latex phagocytosed by 100 macrophage cells; and the Phagocytosis Index (IF), which refers to the average number of latex particles deposited per 100 macrophage cells (see the expressions after this paragraph). Table 1 shows that the greater the dose of ethanol extract of Z. cassumunar, the greater the value of SFA, phagocytic capacity and phagocytosis index. This result indicates that the greatest phagocytic activity occurred at a dose of 1000 mg/kg BW. This is probably due to the content of flavonoids and curcumin in the Z. cassumunar rhizome. Flavonoids potentially act on the lymphokines produced by T cells, which stimulate phagocyte cells to mount a phagocytic response [14].
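On one reading of these definitions (the source gives no explicit formulas, so this is an interpretation), the parameters can be written as

\[ \text{SFA (\%)} = \frac{\text{macrophages containing} \geq 1 \text{ latex particle}}{100\ \text{macrophages examined}} \times 100, \qquad \text{IF} = \frac{\text{total latex particles ingested}}{100\ \text{macrophages examined}}, \]

with the phagocytic capacity taken as the total number of latex particles ingested by the 100 macrophages examined.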
The curcumin that is also found in Z. cassumunar can enhance reactive oxygen species (ROS), which further activates a signal involving peroxisome proliferator-activated receptor gamma (PPAR-γ) and NF-E2-related factor 2 (Nrf2). The activity of both signals in monocytes and macrophages results in increased expression of cluster of differentiation 36 (CD36), thereby increasing macrophage phagocytosis. Activated macrophages receive signals from interferon alpha (IFN-α) to produce inducible nitric oxide synthase (iNOS). This enzyme catalyzes the conversion of L-arginine to L-citrulline, producing NO gas. An increase in NO is associated with increased macrophage activity as phagocytic cells [11]. Phenylbutenoids found in the Z. cassumunar rhizome also show antioxidant activity as free radical scavengers and can increase phagocytic activity [7]. The highest SFA value (%) among the groups treated with ethanol extract was 57.25% (in the group treated with 1000 mg/kg BW of extract). A previous study also reported that Z. cassumunar extract is capable of increasing phagocytosis capacity in an in vitro study [15]. Statistical analysis showed that the number of active phagocyte cells (SFA), phagocytic capacity and phagocytosis index in this research did not differ significantly among groups (p>0.05).
Increased ROI secretion after treatment with Z. cassumunar extract
The ability of peritoneal macrophages to secrete ROI was measured by the NBT reduction assay with PMA. PMA stimulates the macrophages to secrete ROI. Secretion of ROI (superoxide anion, O2−) indicates an increased respiratory burst and causes reduction of NBT to form an insoluble, blue formazan precipitate. The research found that treatment with 0.7 mg/kg bw LPS did not give any significant difference in ROI secretion compared to the normal group. Treatment with the extract at doses of 250 and 1000 mg/kg bw increased ROI secretion significantly compared to the negative control and the normal group. However, statistical analysis of the increasing doses in this study showed no significant dose-dependent difference in ROI secretion.
The effect of Z. cassumunar extract on NO secretion
Nitric oxide (NO) is an effective antibacterial effector in the immune system. NO production was found to increase after stimulating macrophages with IFN-γ and TNF-α. IFN-γ is one of the cytokines that can induce NO secretion without stimulation by others, whereas TNF-α cannot induce iNOS without IFN-γ. NO is also used as a marker of oxidative stress, in which the accumulation of free radicals and the inability of antioxidants to eliminate them cause an imbalance in the production of reactive oxygen species [16].
NO is also an intercellular messenger that has been recognized as one of the most important players in the immune system. Cells of the innate immune system, including macrophages, neutrophils and natural killer cells, use pattern recognition receptors to recognize the molecular patterns associated with pathogens. Activated macrophages then inhibit pathogen replication by releasing a variety of effector molecules, including NO. A large number of other immune-system cells produce and respond to NO. Thus, NO is important as a defense molecule against infectious microorganisms. It also regulates the functional activity, growth and death of many immune and inflammatory cell types, including macrophages, T lymphocytes, antigen-presenting cells, mast cells, neutrophils and natural killer cells [17].
A previous study reported the capability of curcumin, one of the active compounds found in many members of the Zingiber family, to increase reactive oxygen species (ROS) secretion by activating PPAR-γ and Nrf2 in macrophages, leading to increased CD36 expression [11].
The present study found that LPS could significantly increase NO secretion compared to the control. LPS was recognized as an antigen by the macrophages, which then stimulated NO secretion. Treatment with Z. cassumunar extract decreased NO secretion significantly (Table 3). The NO secretion levels of the groups treated with 250, 500 and 1000 mg/kg bw of extract were higher than that of the normal group, but lower than that of the group injected with 0.7 mg/kg bw of LPS. This means that treatment with Z. cassumunar extract at all three doses decreased the NO secretion of peritoneal macrophages in mice induced by 0.7 mg/kg bw LPS. A previous study reported that intraperitoneal injection of LPS increased NO levels in serum macrophages [18]. The decrease in NO level after treatment with Z. cassumunar extract may result from the antioxidant compounds in the extract. Curcumin has been reported to inhibit NO production in activated macrophages [19] and to reduce iNOS mRNA expression in LPS-injected mice [20].
An in vitro study supports this result: Z. cassumunar decreased NO secretion in the murine macrophage cell line RAW 264 [21]. Another active compound isolated from Z. cassumunar Roxb. was also reported to inhibit NO production by peritoneal macrophages in LPS-induced mice [22].
High levels of ROI and NO secretion indicate a good immune response. Nevertheless, excessive secretion of ROI and NO can produce reactive oxidants involving reactive oxygen compounds such as OH, OOH−, O2 and H2O2. The presence of oxygen is important for normal metabolism, but oxygen can increase the number of free radicals. Free radicals are highly reactive, can damage cells and can cause oxidative stress. Previous research reported the capability of cassumunin A and B, isolated from Z. cassumunar, to prevent the hydrogen peroxide (H2O2)-induced decrease in cell viability caused by oxidative stress [23].
Conclusions
The administration of ethanol extract of Z. cassumunar rhizome for 7 days could increase ROI and decrease NO secretion levels, but could not increase the phagocytic activity (p>0.05) of peritoneal macrophages in Swiss mice injected with LPS. These findings indicate that Z. cassumunar rhizome has an immunomodulatory effect in vivo.
Effects of different chemical additives on biodiesel fuel properties and engine performance. A comparison review
Biodiesel fuel can be used as an alternative to mineral diesel, and blends of up to 20% are used as commercial fuel for existing diesel engines in many countries. However, at high blending ratios the fuel properties deteriorate. The feasibility of pure biodiesel and blended fuel at high blending ratios using different chemical additives has been reviewed in this study. The results obtained by different researchers were analysed to evaluate trends in fuel properties, engine performance and emissions with different chemical additives. It was found that a variety of chemical additives can be utilised with biodiesel fuel to improve the fuel properties. Furthermore, the use of chemical additives in biodiesel is indispensable both for improving the cold flow properties and for better engine performance and emission control. Therefore, research is needed to develop biodiesel-specific additives that can be adopted to improve the fuel properties and achieve the best engine performance with lower exhaust emissions.
Introduction
Fossil fuel contributes to the prosperity of the worldwide economy since it is widely used due to its high combustion efficiency, reliability, adaptability and cost-effectiveness. However, the reserves of petroleum-based fuels are limited and on the verge of reaching their maximum production. Although the majority of renewable energy technologies are more eco-friendly than conventional energy, their use is still limited primarily to stationary operations, mainly due to technological limitations and poor economics [1]. A current alternative to diesel fuel is biodiesel. It can offer many benefits, including the reduction of greenhouse gas emissions and support for regional development and social structure, especially in developing countries [2,3].
Biodiesel is a renewable and environmentally friendly alternative diesel fuel for diesel engines [4,5]. It is an oxygenated fuel that contains 10-15% oxygen by weight [6,7] and is sulphur-free. These properties give biodiesel more complete combustion and lower exhaust emissions than diesel fuel. Biodiesel has a higher viscosity, density, pour point, flash point and cetane number than diesel fuel. On the other hand, the energy content, or net calorific value, of biodiesel is about 12% less than that of diesel fuel on a mass basis, causing lower engine speed and power [8][9][10][11]. The use of biodiesel instead of conventional diesel fuel significantly reduces exhaust emissions such as carbon dioxide (CO2) [12,13], particulate matter (PM), carbon monoxide (CO), sulphur oxides (SOX) and unburned hydrocarbons (HC) [14,15]. On the other hand, biodiesel has higher nitrogen oxide (NOX) emissions than diesel fuel [16,17]. The main disadvantages of biodiesel are injector coking, engine compatibility issues and high price [18]. The effects of oxidative degradation caused by contact with ambient air (auto-oxidation) during long-term storage present a legitimate concern in terms of maintaining the quality of biodiesel fuel.
A key property of biodiesel currently limiting its application to blends of 20% or less is its relatively poor low-temperature properties [19,20]. Petroleum diesel fuels are plagued by the growth and agglomeration of paraffin wax crystals when ambient temperatures fall below the fuel's cloud point. These solid crystals may cause start-up problems such as filter clogging when ambient temperatures drop to around -10 to -15 °C [21]. While the cloud point of petroleum diesel is reported as -16 °C, biodiesel typically has a cloud point of around 0 °C, thereby limiting its use to ambient temperatures above freezing [22,23].
Published research has shown that the physical properties of biodiesel can be improved by the use of different additives, which can solve the problems associated with the cold flow properties of biodiesel and allow its large-scale use in diesel engines. A number of additives have been tried by different researchers for improving performance and also reducing emissions from diesel engines. The objective of this study is to develop a database of the chemical additives used with biodiesel, grouped under certain categories, and to discuss their suitability for each type of biodiesel according to its properties.
Chemical additives
Several approaches have been proposed to improve the low-temperature properties of biodiesel, including blending with petroleum diesel, the use of additives, and the chemical or physical modification of either the oil feedstock or the biodiesel product [24]. Blending with petroleum diesel is only effective at low biodiesel proportions (up to 30% by vol.) with cloud points of around -10 °C [23]. Clearly, blends with petroleum diesel do not change the chemical nature of biodiesel and therefore will not facilitate its use at higher concentrations. Since the aim must be to maximize biodiesel utilization, petroleum blends with biodiesel will not be discussed further in this review. The use of additives can be further classified into traditional petroleum diesel additives and emerging new technologies developed specifically for biodiesel. Several authors have published work on improving the low-temperature properties of biodiesel through the use of different additives for convenient handling and use under different climatic conditions. Traditional petroleum diesel additives can be described as either pour point (PP) depressants or wax crystalline modifiers. Pour point depressants were developed to improve the pumpability of crude oil and do not affect nucleation habit. Instead, these additives inhibit crystalline growth, thereby eliminating agglomeration. They are typically composed of low molecular weight copolymers similar in structure to aliphatic alkane molecules, the most widely applied group being copolymers of ethylene vinyl ester. Wax crystalline modifiers, as the name suggests, are copolymers that disrupt part of the crystallization process to produce a larger number of smaller, more compact wax crystals [25].
Effect of chemical additives on fuel properties
Natural and synthetic antioxidants are often added to protect oils and fats by minimizing or retarding oxidation [26,27]. Several studies have been conducted on both types of additives to improve the cold flow properties of blends and pure biodiesel. For example, the pour point of neat soybean methyl ester was lowered by as much as 6 °C. Similar improvements in cold filter plugging point (CFPP) were achieved, but no discernible improvement in CP was reported, as might have been expected when taking into account their mode of action. As previously stated, a potential mechanism for reducing the CP of biodiesel is the use of bulky moieties that disrupt the orderly stacking of ester molecules during crystal nucleation. Experimental results were reported [28] on improving the low-temperature performance of palm oil products, with emphasis on non-food uses, and on finding additives (synthesized or commercially available) suitable for reducing the pour point and cloud point values of palm oil products. The samples studied in that research included palm olein (PO), super olein (SO), palm oil methyl esters (POME), palm kernel oil methyl esters (PKOME), a blend of POME and PO at a 2:1 ratio (POMEPO), a blend of POME and SO at a 2:1 ratio (POMESO), a blend of PKOME and PO at a 2:1 ratio (PKOMEPO) and a blend of PKOME and SO at a 2:1 ratio (PKOMESO). Among the additives studied were Tween-80, dihydroxy fatty acid (DHFA), acrylated polyester prepolymer, palm-based polyol (PP), a blend of DHFA and PP at a 1:1 ratio (DHFAPP), an additive synthesized from DHFA and ethyl hexanol (DHFAEH), and castor oil ricinoleate. All the additives used showed satisfactory results, with more significant reductions of pour point and cloud point values observed for the POME, PKOME, POMEPO, POMESO and PKOMESO samples. The biggest reduction of the pour point value in that research was about 7.5 °C (by the addition of 1.0% DHFA to POMEPO), while the biggest reduction of the cloud point value was about 10.5 °C (by the addition of 1.0% DHFA + 1.0% PP to POME). The significant reductions in pour point and cloud point values of POME, PKOME, POMEPO, POMESO and PKOMESO by the additives used indicate that the additives might be able to improve the low-temperature properties of palm oil products, for instance biodiesel. The authors speculated that the effectiveness, in particular of the polyhydroxy compounds, was due to the interaction between the hydroxy groups of the additives and the samples. Unfortunately, a large increase in the viscosity of the blends was reported: an addition of 1.0% PP to palm oil methyl ester increased its viscosity from 29.5 cP to 42.2 cP. Some effort has also been made to utilize the major by-product of biodiesel manufacture, glycerol, by reacting it with isobutylene to produce butyl ethers of glycerol [29]. A CP of -5 °C for 12% butyl ether and methyl ester was claimed. Further improvement in the low-temperature properties of the palm biodiesel-diesel blend, at 3 °C, was achieved by adding 1% of a palm-based additive [30].
Effect of chemical additives on oxidation stability
The oxidative stability of biodiesel was improved by adding natural and synthetic antioxidants at varying concentrations between 250 and 1000 ppm [31][32][33]. The various natural and synthetic antioxidants [α-tocopherol (α-T), butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), tert-butylhydroquinone (TBHQ), 2,5-di-tert-butyl-hydroquinone (DTBHQ), ionol BF200 (IB), propyl gallate (PG), and pyrogallol (PY)] improved the oxidative stability of soybean oil (SBO), cottonseed oil (CSO), poultry fat (PF), and yellow grease (YG) based biodiesel. The results indicated that different types of biodiesel had different natural levels of oxidative stability, indicating that natural antioxidants play a significant role in determining oxidative stability. Moreover, PG, PY, TBHQ, BHA, BHT, DTBHQ, and IB could enhance the oxidative stability of these different types of biodiesel. They also identified that antioxidant activity increased with increasing concentration. The induction period of SBO-, CSO-, YG-, and distilled SBO-based biodiesel could be improved significantly with PY, PG and TBHQ, while PY, BHA, and BHT showed the best results for PF-based biodiesel. They concluded that the effect of each antioxidant on biodiesel differs depending on the feedstock. They further identified that the effect of antioxidants on B20 and B100 was similar, suggesting that improving the oxidative stability of biodiesel can effectively increase that of biodiesel blends. The oxidative stability of untreated SBO-based biodiesel decreased with increasing indoor and outdoor storage time, while the induction period values after adding TBHQ to SBO-based biodiesel remained constant for up to 9 months. Although α-tocopherol showed very good compatibility in blends, it was significantly less effective than the synthetic antioxidants screened in this work. The cetane improvers DTBP and EHN are effective in reducing NOx by 4% in B20 blends. DTBP is also effective in NOx reduction for B100 fuels, but not in proportion to the NOx reduction observed for B20 blends. They observed that cetane improvers act largely to lower the NOx produced during the burning of the petroleum diesel fuel. The antioxidant TBHQ significantly reduced NOx but also caused a small increase in particulate matter.
The effect of antioxidant addition on pollutant emissions from the combustion of palm oil methyl ester blends with No. 2 diesel was investigated in a non-pressurised, water-cooled combustion chamber [34]. The antioxidant additives butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butyl hydroquinone (TBHQ) were individually dissolved at varying concentrations in B10 and B20 fuel blends for testing. Both BHA and TBHQ were effective in lowering the nitric oxide (NO) emission produced, and their concentrations in the fuel blends were shown to scale proportionately with NO levels in the flue gas. However, the addition of BHT to both fuel blends increased the generation of NO during combustion. BHA was found to decrease carbon monoxide (CO) levels when added to B10 and B20, while both BHT and TBHQ were observed to raise CO formation at all test points. With the proper selection of additive type and quantity for application to specific biodiesel blends, this simple measure has been shown to be an effective pollutant control strategy that is more economical than other existing technologies. The addition of the synthetic antioxidant tert-butylhydroquinone [35] at a concentration of 300 mg/kg to cottonseed biodiesel was sufficient to obtain acceptable oxidation stability values (>6 hours). Thermogravimetric analysis was also performed, and similar profiles were verified for both ethylic and methylic biodiesels. Therefore, this work demonstrates the feasibility of using the ethanolic route to produce cottonseed oil biodiesel.
Other research [36] experimentally studied and compared the effect of antioxidant additives on NOX emissions in a jatropha methyl ester fuelled direct injection diesel engine. The antioxidant additives L-ascorbic acid, α-tocopherol acetate, butylated hydroxytoluene, p-phenylenediamine and ethylenediamine were tested on a computerized Kirloskar-make four-stroke water-cooled single-cylinder diesel engine of 4.4 kW rated power. The results showed that the antioxidants considered in that study are effective in controlling the NOX emissions of biodiesel-fuelled diesel engines. A 0.025% concentration of the p-phenylenediamine additive was optimal, as NOX levels were substantially reduced over the whole load range in comparison with neat biodiesel. However, hydrocarbon and CO emissions were found to increase with the addition of antioxidants.
The influence of natural and synthetic antioxidant additives on the oxidative stability of palm biodiesel was investigated experimentally [37,38]. The experimental study was conducted on the crude and distilled methyl esters of palm oil and found that crude palm oil has better oxidative stability. This was attributed to the presence of vitamin E (about 600 ppm), a natural antioxidant, in the crude palm oil methyl esters. Natural and synthetic antioxidants were used in that study to investigate their effect on the oxidative stability of distilled palm oil methyl esters. It was found that both types of antioxidant showed beneficial effects in inhibiting the oxidation of distilled palm oil methyl esters. The synthetic antioxidants were found to be more effective than the natural antioxidants, as a lower dosage (17 times less) was needed to achieve the minimum Rancimat induction period of 6 hours required to meet the European standard for biodiesel (EN 14214).
Effect of chemical additives on engine performance and emissions
Many studies have investigated eliminating the biodiesel NOx effect by evaluating formulation strategies [39,40]. This was accomplished by spiking a conventional soy-derived biodiesel fuel with methyl oleate or with a cetane improver. The conventional B20 blend produced a NOx increase of 3.5% relative to petroleum diesel, depending on injection timing. However, when a B20 blend was used in which the biodiesel portion contained 76% methyl oleate, the biodiesel NOx effect was eliminated and a NOx-neutral blend was produced. Increasing the methyl oleate portion of the biodiesel to 76% also had the effect of increasing the cetane number from 48.2 for conventional B20 to 50.4, but this effect is small compared to the increase to 53.5 achieved by adding 1000 ppm of 2-ethylhexyl nitrate (EHN) to B20. They identified that, for the particular engine tested, NOX emissions were insensitive to ignition delay, maximum cylinder temperature and maximum rate of heat release. The dominant effect on NOX emissions was the timing of the combustion process, initiated by the start of injection and propagated through the timing of maximum heat release rate and maximum temperature.
Higher brake power over the entire engine speed range was obtained [41,42] with a 1% 4-nonyl phenoxy acetic acid (NPAA) additive in comparison with blended palm biodiesel B20 and B0 (diesel). The maximum brake power obtained at 2500 RPM was 12.28 kW for B20 blended with 1% additive, followed by 11.93 kW (B0) and 11.8 kW (B20). The results imply that the biodiesel with the additive (B20+1%) shows the best engine performance and reduces exhaust emissions, including NOX. The improvement was attributed to an increase in fuel conversion efficiency through improved fuel ignition and combustion quality due to the effect of the fuel additive in the B20 blend.
Other experimental studies [43][44][45] were carried out to evaluate the effect of triacetin (T) as an additive with biodiesel on direct injection diesel engine performance and combustion characteristics. By adding the triacetin [C9H14O6] additive to biodiesel, the results showed that the engine knocking problem can be alleviated to some extent and the tailpipe emissions are reduced. A comparative study was conducted on the engine using petro-diesel, biodiesel and additive blends of biodiesel. Coconut oil methyl ester (COME) was used with the additive at various percentages by volume over all load ranges of the engine, viz. at no load, 25, 50 and 75% of full load, and at full load. The results compared performance with neat diesel in terms of engine efficiency and exhaust emissions. Among all the blend fuels tried, the 10% triacetin combination with biodiesel shows encouraging results.
Ethanol is a low-cost oxygenate with a high oxygen content (approximately 35%) that has been used in biodiesel-ethanol blends [46]. It was reported [47] that ethanol-diesel-biodiesel fuel blends are stable well below sub-zero temperatures and have fuel properties equal or superior to those of regular diesel fuel. Ethanol and methanol, as well as products derived from these alcohols, such as ethers, are under consideration or in use as alternative fuels or as biodiesel fuel additives. Methanol offers very low particulate emissions, but its problems are toxicity, low energy density, low cetane number, high aldehyde emissions, and a harmful influence on materials used in engine production. Ethanol seems to be the best candidate as a sole fuel or as a component of either gasoline or diesel oil [48]. Until recently ethanol was recognized only as a component of gasoline and not as a component of diesel oils, but the properties of ethanol enable it to be applied also as a component of diesel oil. The potential of oxygenates as a means of achieving a zero-net-CO2 renewable fuel has resulted in considerable interest in the production and application of ethanol. In many countries, such as the United States of America, Canada, Australia, Brazil, South Africa, Denmark, Sweden and others, ethanol programs have been realized. Research in these ethanol programs is directed at identifying the factors that could influence engine performance and exhaust emissions; an understanding of these factors is necessary for the interpretation of the test results. Methanol can be produced from coal or petroleum-based fuels at low production cost, but it has very limited solubility in diesel fuel. On the other hand, ethanol is a biomass-based renewable fuel, which can be produced from vegetable materials such as corn, sugar cane, sugar beets, sorghum, barley and cassava, and it has higher miscibility with diesel fuel [49].
Improvement of the low-temperature operability, kinematic viscosity and acid value of poultry fat methyl esters was investigated [50] with the addition of ethanol, isopropanol and butanol. The blends of ethanol in poultry fat methyl esters afforded the least viscous mixtures, whereas isopropanol and butanol blends were progressively more viscous, but still within the specifications contained in ASTM D6751 and EN 14214. However, the study identified that blends of alcohols in poultry fat methyl esters resulted in failure of the flash point specifications found in ASTM D6751 and EN 14214. Flash points of butanol blends were superior to those of isopropanol and ethanol blends, with the 5 vol% butanol blend exhibiting a flash point (57 °C) superior to that of No. 2 diesel fuel (52 °C). The most interesting observation was that blends of alcohols in poultry fat methyl esters resulted in an improvement in acid value with increasing alcohol content. An increase in the moisture content of biodiesel was observed with increasing alcohol content, with the effect being more pronounced in ethanol blends than in isopropanol and butanol blends. No phase separation of the alcohol-methyl ester samples was observed in that study at temperatures below ambient.
The influence of ethanol and kerosene on Mahua methyl ester (Mahua biodiesel) was studied experimentally [51] with the objective of assessing the pumping and injection of this biodiesel in CI engines in cold climates. The effect of ethanol and kerosene on the cold flow behaviour of this biodiesel was studied, and a considerable reduction in pour point was observed with these cold flow improvers. Four concentrations of ethanol and kerosene blends, i.e. 5%, 10%, 15% and 20%, were tested with Mahua biodiesel for the cold flow studies. The cloud point of MME was reduced from 18 °C to 8 °C when blended with 20% ethanol, and down to 5 °C when blended with 20% kerosene. Similarly, the pour point was reduced from 7 °C to -4 °C when blended with 20% ethanol, and down to -8 °C when blended with 20% kerosene. MME with 10% ethanol and 10% diesel reduced the pour point from 18 °C down to -5 °C. The researchers concluded that ethanol and kerosene improve the cold flow properties of MME when blended up to 20%. However, higher blends with ethanol are discouraged as they may reduce the overall calorific value, and ethanol has a very low cetane number. Their results showed that diesel-ethanol blended MME had similar performance at part load and superior performance at full load compared with diesel. The average CO reduction for 20% ethanol blended biodiesel over diesel was as high as 50%; HC emissions for the ethanol blended biodiesel (E20 and E10) were lower by 9.15% and 5.25%, respectively; the ethanol blended biodiesel showed low NOX emission, which was lowest for the MMEE20 blend; and smoke emissions were lower for 20% ethanol blended biodiesel.
Anhydrous ethanol was experimentally investigated [52] as an additive to B20 diesel oil-soybean biodiesel blends with respect to the exhaust pollutant emissions of a passenger vehicle. Blends of diesel oil and soybean biodiesel with concentrations of 3% (B3), 5% (B5), 10% (B10) and 20% (B20) were used as fuels. Anhydrous ethanol was added to the B20 fuel blend at concentrations of 2% (B20E2) and 5% (B20E5). The results showed that increasing the biodiesel concentration in the fuel blend increased carbon dioxide (CO2) and oxides of nitrogen (NOX) emissions, while carbon monoxide (CO), hydrocarbon (HC) and particulate matter (PM) emissions were reduced. The addition of anhydrous ethanol to a B20 fuel blend proved it could be a strategy to control exhaust NOX and global warming effects through the reduction of CO2 concentration. However, it may require fuel injection modifications, as it increases CO, HC and PM emissions. With the addition of 2-5% ethanol to B20, the NOX emission levels were reduced to those of B3. With increased biodiesel concentration in the blend with diesel oil, reduced particulate matter emission was verified. Nevertheless, the fuel blends containing ethanol (B20E5 and B20E10) showed increased PM emission. The results showed that the use of ethanol as an additive to biodiesel-diesel oil blends can be an ally in controlling NOX emissions and global warming through CO2 concentration reduction, but is unfavourable for CO, HC and PM emissions.
Biodiesel from waste cooking oil blended with ethanol and methanol [53,54] was run in a diesel engine under the same operating conditions and compared to a baseline diesel fuel. Overall, the brake specific fuel consumption of the alcohol blends was higher than for diesel, while ethanol-blended fuels showed lower BSFC than methanol-blended fuels. There was no significant difference in exhaust gas temperature. Increasing the alcohol concentration reduces NO emissions, while increasing CO and HC emissions. Biodiesel-ethanol-diesel blends, as compared to standard diesel, increase CO and HC emissions while reducing NO emissions. Interestingly, biodiesel-methanol-diesel blends have the opposite effects on emissions. According to the study's results, methanol blends would be the choice if CO and HC emissions are the aim, and ethanol blends would be the right choice for reducing NO emissions at the concentrations investigated in that work. Overall, emissions depend strongly on engine operating conditions and alcohol blend ratios, which could have positive and negative effects overall, due to oxygen content and cooling effects [55].
Diethyl ether (DEE), an oxygenated additive, can be added to diesel/biodiesel fuels to suppress NOx emission. DEE is an excellent ignition enhancer and has a low auto-ignition temperature [56]. It is an aid for cold starting and an ignition improver for diesel-water emulsions [57]. Detailed experimental results were reported [58] on an evaluation of the effects of using diethyl ether and ethanol as additives to biodiesel/diesel blends on the performance, emissions and combustion characteristics of a direct injection diesel engine. The test fuels are denoted as B30 (30% biodiesel and 70% diesel by vol.), BE-1 (5% diethyl ether, 25% biodiesel and 70% diesel by vol.) and BE-2 (5% ethanol, 25% biodiesel and 70% diesel by vol.), respectively. The results indicate that, compared to B30, there is a slightly lower brake specific fuel consumption (BSFC) for BE-1. A drastic reduction in smoke was observed with BE-1 and BE-2 at higher engine loads. Nitrogen oxide (NOX) emission was found to be slightly higher for BE-2. Hydrocarbon (HC) emissions are slightly higher for BE-1 and BE-2, but carbon monoxide (CO) is slightly lower. The peak pressure, peak pressure rise rate and peak heat release rate of BE-1 are almost similar to those of B30, and higher than those of BE-2, at lower engine loads. At higher engine loads the peak pressure, peak pressure rise rate and peak heat release rate of BE-1 are the highest and those of B30 are the lowest. BE-1 reflects better engine performance and combustion characteristics than BE-2 and B30.
Bio-fish oil [59] blended with diethyl ether as an oxygenated additive, together with an EGR technique, was also used to improve the performance and reduce the emissions of the engine. Encouraging results were obtained from that investigation. Reductions of 91%, 62%, 92% and 90% in CO, CO2, NOX and CxHy, respectively, were attained when the engine was run at maximum load using BFO with 2% additive and EGR, and reductions were also observed when the engine was run at other loads. In the case of NOX, there was an increase of about 48% at maximum load with BFO when compared with diesel. The optimum values of the engine emissions in that study were obtained with 2% of additive; when this percentage was increased or decreased, the emissions increased.
In other studies [60,61] the oxygenated additive diethyl ether (DEE) was blended with biodiesel in ratios of 5%, 10%, 15% and 20% and tested for performance. Compared with biodiesel, a reduction of 15% in NOX emission was observed for the 20% DEE blend at full load, which was the highest reduction among the blends. The higher oxygen content of DEE reduced the smoke opacity: a reduction of 14.63% in smoke opacity was observed for the 20% DEE blend compared with biodiesel at full load. HC emissions were found to increase with the addition of DEE to biodiesel. The study concluded that a 20% DEE blend with Thevetia peruviana biodiesel would result in better performance and lower emissions than the other combinations.
The performance and emission characteristics [62] of a diesel engine fuelled with blends of pongamia biodiesel and diesel were determined at different proportions of diethyl ether. The engine NOX emission was higher than in diesel fuel operation with all blends. The addition of diethyl ether to the blends reduced the NOX emission at low and medium loads; however, at high loads the NOX emission was higher compared to diesel and lower compared to the corresponding biodiesel blend. The addition of diethyl ether to biodiesel blends thus reduced both NOX and smoke emission further. The biodiesel blends tested showed a significant reduction in smoke emission, and further improvement in smoke emission was obtained by the addition of DEE. The addition of DEE resulted in a marginal deterioration of thermal efficiency. It is therefore concluded that the addition of 15%-20% DEE to biodiesel blends would result in a reduction of both NOX and smoke emission.
The performance and emission characteristics of diesel and JOME (Karanja oil methyl ester) biodiesel were analyzed and compared [63,64] with JOME-diethyl ether blends, using the additive at different proportions with biodiesel, in a single-cylinder, four-stroke, naturally aspirated, computerized diesel engine. The measured performance parameters were brake thermal efficiency, brake specific fuel consumption and engine exhaust emissions of CO, CO2, HC, NOX and smoke intensity. Significant improvements in performance parameters and exhaust emissions were observed with the addition of diethyl ether blends with biodiesel. It was concluded that B-D15 was the optimum blend on the basis of performance and emission characteristics.
Conclusions
Due to the continuous effort to make biodiesel fuel economically viable, as well as to use cleaner fuels, additives will become an indispensable tool in the global fuel trade. The technical specifications of additives not only cover a wide range of subjects, but most of these subjects are also interdependent. Additives used to improve the properties of biodiesel may further improve the combustion performance of biodiesel engines. An additive that improves the ignition and combustion performance of biodiesel is advantageous for power recovery of the biodiesel engine; it will thus promote fuel economy and at the same time improve engine power. Oxygenate additives can improve the PM emissions of biodiesel, but they are not useful for power recovery. Furthermore, a small proportion of liquid chemical additive added to biodiesel and its blends with diesel can be beneficial for HC and CO2 emissions.
Experimental evidence for enhanced top-down control of freshwater macrophytes with nutrient enrichment
The abundance of primary producers is controlled by bottom-up and top-down forces. Despite the fact that there is consensus that the abundance of freshwater macrophytes is strongly influenced by the availability of resources for plant growth, the importance of top-down control by vertebrate consumers is debated, because field studies yield contrasting results. We hypothesized that these bottom-up and top-down forces may interact, and that consumer impact on macrophyte abundance depends on the nutrient status of the water body. To test this hypothesis, experimental ponds with submerged vegetation containing a mixture of species were subjected to a fertilization treatment and we introduced consumers (mallard ducks, for 8 days) on half of the ponds in a full factorial design. Over the whole 66-day experiment fertilized ponds became dominated by Elodea nuttallii and ponds without extra nutrients by Chara globularis. Nutrient addition significantly increased plant N and P concentrations. There was a strong interactive effect of duck presence and pond nutrient status: macrophyte biomass was reduced (by 50 %) after the presence of the ducks on fertilized ponds, but not in the unfertilized ponds. We conclude that nutrient availability interacts with top-down control of submerged vegetation. This may be explained by higher plant palatability at higher nutrient levels, either by a higher plant nutrient concentration or by a shift towards dominance of more palatable plant species, resulting in higher consumer pressure. Including nutrient availability may offer a framework to explain part of the contrasting field observations of consumer control of macrophyte abundance. Electronic supplementary material The online version of this article (doi:10.1007/s00442-014-3047-y) contains supplementary material, which is available to authorized users.
Introduction
Vertebrate herbivores can strongly affect vegetation biomass both through direct consumption and through altering nutrient availability in terrestrial systems (McNaughton et al. 1997; Pastor et al. 2006; Bakker et al. 2009; Schrama et al. 2013). However, the importance of vertebrate herbivores as a factor structuring submerged vegetation is unclear (Lodge et al. 1998). Generally, the abundance of primary producers is controlled by bottom-up and top-down forces which may interact with each other in both terrestrial and aquatic systems (Shurin et al. 2006; Gruner et al. 2008). However, whereas growth and abundance of freshwater macrophytes are strongly influenced by the availability of resources for plant growth [recently reviewed in Bornette and Puijalon (2011)], the importance of top-down control by vertebrate consumers is debated because field studies yield contrasting results (Marklund et al. 2002). Herbivores can strongly reduce macrophyte abundance (Van Donk and Otte 1996; Weisner et al. 1997; Hilt 2006), but in other water bodies no effect of herbivores on macrophyte biomass could be found (Perrow et al. 1997; Marklund et al. 2002; Rip et al. 2006). The biomass density of grazers, particularly waterfowl, as well as grazer identity, may explain part of the observed variation (Wood et al. 2012a), whereas others argue that grazers only have important effects under restricted conditions of low vegetation density or in certain seasons (Marklund et al. 2002).
However, nutrient quality of the vegetation is another factor that could explain the variable results. Generally, the proportion of primary production that is removed by herbivores increases with plant nutritional quality (often expressed as foliar N concentration), both in terrestrial and aquatic systems (Cebrian and Lartigue 2004; Shurin et al. 2006). Macrophytes take up nutrients from the sediment and the water column (Carignan and Kalff 1980; Madsen and Cedergreen 2002) and plant nutrient concentration increases with nutrient availability in the water body (Cronin and Lodge 2003), making plants more attractive for consumers (Dorenbosch and Bakker 2011). Furthermore, increased nutrient availability may also lead to a shift in vegetation composition (Blindow et al. 1993; Van den Berg et al. 1999), which may in turn affect plant palatability, as macrophyte species differ in palatability to generalist consumers (Elger et al. 2004; Dorenbosch and Bakker 2011).
Although increasing nutrient availability may alter plant nutrient quality, it also modifies plant growing conditions, which will affect plant regrowth after grazing (Wise and Abrahamson 2007). Factors that limit macrophyte growth change from nutrients to light over a gradient of increasing nutrient availability (Bornette and Puijalon 2011). Macrophyte biomass initially increases with increased nutrient availability, but macrophytes eventually disappear under eutrophic conditions with phytoplankton dominance, large epiphyton loading and, as a result, strong light limitation (Sand-Jensen and Borum 1991; Scheffer et al. 1993; Jeppesen et al. 2000; Hilt 2006). Therefore, macrophyte tolerance to grazing may be altered with nutrient availability (Gayet et al. 2011). Furthermore, consumers can also increase nutrient availability in the water through allochthonous nutrient input (Manny et al. 1994; Hahn et al. 2008) and through an increase in autochthonous nutrient cycling (Mitchell and Wass 1995; Vanni 2002). This may enhance plant growth in nutrient-limited systems, but not in water bodies that are already eutrophic.
To date, there have not been any controlled experiments that simultaneously investigated the effects of the nutrient status of a water body and waterfowl grazing impact. We created ponds of different nutrient status through fertilization and introduced facultative herbivorous ducks (northern mallards Anas platyrhynchos L.) on half of the ponds. We removed the ducks after 8 days, and measured plant biomass as well as plant and water nutrient concentrations in the duck and control ponds, both immediately after the ducks were removed and 6 weeks later, to be able to measure direct and indirect effects of duck presence. We hypothesize that: 1. Direct consumer impact is larger in nutrient-rich than nutrient-poor water bodies, which could be explained by higher plant nutritional quality. 2. Indirect consumer impact on macrophyte biomass depends on (re)growth after the presence of consumers, which could be: (a) higher in nutrient-rich than nutrient-poor systems, as more nutrients for growth are available in the former; or (b) higher in nutrient-poor than nutrient-rich systems, due to light limitation in the latter.
Experimental ponds
We conducted the experiment in 2007 and used 20 experimental ponds out of a set of 36 which had been established in 2005 in Loenderveen, the Netherlands (52°12′N, 5°02′E); see also Bakker et al. (2010). Each pond was 1.25 m deep with 0.3 m of sediment (10:1 sand and clay mixture). The water was controlled with a standpipe and fixed at 0.5-m depth. The ponds were square shaped (with slopes of 45°) with 20 m² of water surface area and 9 m² of sediment surface area, and held 7 m³ of water. The ponds contained a mixed macrophyte vegetation dominated either by Elodea nuttallii Planch. St John (hereafter Elodea) in the fertilized ponds or Chara globularis Thuill (hereafter Chara) in the ponds without extra nutrients. Other species were present in low amounts, including Ceratophyllum demersum L., Myriophyllum spicatum L., Potamogeton pectinatus L., Potamogeton perfoliatus L. and Ranunculus circinatus Sibth. Elodea is a facultative rooting species that reproduces clonally in our region, and Chara has rhizoids and is a spore plant which forms oogonia later in the season. Both species can take up nutrients from the sediment and the water column (Vermeer et al. 2003; Angelstein and Schubert 2008; Wüstenberg et al. 2011). Ponds were fishless and covered with nets to prevent grazing by wild waterfowl. The ponds were arranged in a grid with 1.5 m between ponds. The ponds had been cleared of remaining above-sediment macrophyte biomass in winter through raking and were re-filled with fresh dephosphatised lake water in early spring (water properties of inlet water: 0.650 mg NH4-N L−1, 0.650 mg NO3-N L−1 and 0.007 mg PO4-P L−1; Waternet, unpublished data).
Nutrient treatment
Half the ponds received weekly nutrient additions during the growing seasons in 2006 and in 2007 (60 g NH4NO3 and 15 g KH2PO4, which correspond to 3.0 mg N L−1 and 0.5 mg P L−1, respectively) between 4 May and 20 August. These nutrient levels were chosen to simulate a eutrophic condition under which submerged macrophytes would still be able to grow (Portielje and Roijackers 1995; Van de Bund and Van Donk 2004; Bakker et al. 2010). The other half did not receive any additional nutrients. High rainfall levels in 2007 kept the water level at 0.5 m all summer.
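As a quick consistency check of the reported concentrations (using the 7 m³ pond volume given above and standard molar masses, so the figures are approximate):

\[ \frac{60\ \text{g NH}_4\text{NO}_3 \times (28.0/80.0)}{7000\ \text{L}} \approx 3.0\ \text{mg N L}^{-1}, \qquad \frac{15\ \text{g KH}_2\text{PO}_4 \times (31.0/136.1)}{7000\ \text{L}} \approx 0.49\ \text{mg P L}^{-1}. \]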
Grazing experiment
We used captive mallards (Anas platyrhynchos L.) as our model consumer species. Mallard ducks are omnivorous, but their diet consists of about 90% plant material (Wood et al. 2012a). Mallards are able to reach a depth of up to 40 cm when dabbling (Kear 2005). In a test trial the mallards readily consumed macrophytes when these were offered in feeding trays in a shallow water layer. We used only female mallards in our experiment to avoid males being distracted by nearby female presence. The experiment was approved by the Animal Experiments Committee of the Royal Netherlands Academy of Arts and Sciences (protocol CL2007.01). Macrophytes were sampled on 15 June to determine biomass before the start of the duck experiment (see the section "Measurements" below for the sampling methods). We then selected 20 ponds of the 36 (ten with and ten without nutrient addition) which contained enough biomass to sustain mallards for at least 7 days, containing >700 g dry macrophyte standing crop per pond, based on the assumption that ducks need about 10% of their wet body weight as dry-weight food (Kear 2005), i.e. approximately 100 g dry macrophytes per mallard per day. We randomly assigned the duck treatment to five ponds at each of the two nutrient levels. Before the ducks were introduced on the ponds, macrophyte biomass was equal among the nutrient treatments and the assigned duck treatments (two-way ANOVA, nutrient treatment, F1,16 = 0.57, P = 0.46; duck treatment, F1,16 = 0.003, P = 0.96; nutrients × duck, F1,16 = 0.052, P = 0.82). On 2 July 2007 mallards were introduced into ten ponds (one mallard per pond); primaries were clipped before duck release. The ponds were fenced with a low fence around the pond about 1 m from the edge and a net on top which was 1 m above the ground around the pond and 2 m high in the middle of the pond. The area around each pond consisted of tiles, without terrestrial vegetation. The ducks were not provided with additional food other than that naturally present in the ponds. We placed two floating platforms (0.5 × 0.5 m) in these ponds, of which one had a roof to provide shelter. The ducks used the platforms for resting and defecating. Although ducks could climb out of the pond, we observed only one to do so once. Also, we found no tracks or droppings on the tiles that could indicate that ducks left the ponds when we were not looking, whereas we found many of these tracks on the platforms in the pond. Rain washed away most of the droppings from the platforms; where necessary we washed the platforms in the ponds daily to allow nutrients from the faeces to return to the ponds. All ducks produced droppings during the experiment on multiple days. All mallards were removed on 10 July 2007, eight days after their introduction. The length of the feeding trial was based on the amount of macrophytes in the ponds on 15 June, where the pond with the lowest macrophyte biomass was calculated to be able to sustain mallard feeding for 8 days (see above). After the mallards were removed, weekly nutrient addition was continued until 20 August.
Measurements
Macrophyte biomass was sampled on 15 June, 2 weeks before the start of the experiment, on 11 July, the day after the mallards were removed, and on 20 August, 6 weeks after mallard grazing. Macrophyte biomass was collected from a round metal sampler with a diameter of 0.5 m (0.2 m²) and 0.5 m height, which was placed randomly in each pond while avoiding duplicate sampling in time. Also, the area where the floating platforms could cast shade on the vegetation was avoided. One sample per pond per harvest date was taken, to avoid too much disturbance of the vegetation. The sampler was pressed firmly into the sediment and macrophytes were collected by hand, cleaned in the lab, sorted to species and dried at 60 °C for 4 days. Only charophytes were not sorted to the species level; Chara globularis was the dominant species with small amounts of Chara vulgaris present.
The nutrient concentration in the dominant plant species was determined in subsamples taken from the biomass harvest on 11 July. Elodea plants were collected from all ponds and Chara from all unfertilized ponds and from three fertilized ponds, as it was rare in these ponds. By 20 August it had disappeared altogether from the fertilized ponds; therefore, plant material from 20 August was not further analysed. All plant material was thoroughly cleaned in the lab. The rest of each sample was dried for 3 days at 60 °C and ground (1-mm mesh). C and N concentration was determined through combustion on an element analyser (Euro EA 3000; Hekatech, Wegberg, Germany). The P concentration of the macrophytes was determined by first incinerating the ground samples for 30 min at 500 °C, followed by a 2 % persulphate digestion step in an autoclave for 30 min at 121 °C. The digested samples were analysed using a QuAAtro segmented flow analyser (Seal Analytical, Beun de Ronde, Abcoude, the Netherlands).
We collected water samples from each pond on 2 July (before duck release), 11 July (immediately after) and 20 August (6 weeks after) by taking a 200-mL sample from each pond. Chlorophyll a concentrations were determined using the PHYTO-PAM fluorometer (Heinz Walz, Effeltrich, Germany) (Lürling and Verschoor 2003). To measure nutrient availability in the ponds we filtered the water samples over a 0.7-µm-mesh GF/F Whatman filter and analysed them with continuous-flow analysis on an auto-analyser (Skalar Sanplus Segmented Flow Analyser; Skalar Analytical, Breda, the Netherlands) to determine PO4, NO3 and NH4 concentrations in the water. The pH and conductivity were measured in situ in the field with a portable probe (340i SET, 2E30-101B02; Wissenschaftlich Technische Werkstätten, Weilheim, Germany). Light (photosynthetically active radiation) was measured with a LI-190 Quantum Sensor and a LI-192 (underwater) sensor (LI-COR Biosciences, Lincoln, NE); light availability was expressed as the percentage of light available at the bottom of the pond (50-cm depth) relative to ambient light. Alkalinity was measured in the lab by titration with 0.05 M HCl.
Epiphyton load on the plants was measured on 26 July on Elodea as this was the only species which occurred in all the ponds in sufficient densities. We collected three branches of Elodea plants (mean 0.26 ± 0.02 SE g dry weight in total) per pond, placed these in a 250-mL bottle filled with filtered (0.7-µm mesh) lake water and shook the bottle gently for 1 min following the method of Zimba and Hopson (1997). We selected pieces of 5-10 cm of fresh and green Elodea stems of the upper part of the shoots. Elodea pieces were removed, dried at 60 °C and weighed. The water samples containing the algae were filtered over washed Whatman GF/F filters, the filters were ashed for 2 h at 555 °C and algal biomass was calculated from the weight difference of the filters before and after ashing and divided by the dry weight of the Elodea branches in each bottle.
Because snail grazing can strongly affect epiphyton abundance (Jones et al. 2002), we counted the number of snails that were floating on the water surface as a proxy for snail abundance on 30 July. These were all Lymnaea stagnalis L.
Data analysis
We used a repeated-measures ANOVA to test whether consumer impact on macrophyte biomass is larger in nutrient-rich than nutrient-poor water bodies with nutrient treatment and duck presence as fixed factors and time in the experiment as repeated measure (before, immediately after, and 6 weeks after introducing ducks). The data followed a normal distribution and homogeneity of variances was obtained without transformations. We indeed found a significant interaction between nutrient treatment and duck presence as well as a three-way interaction between time of measurement, nutrient treatment and duck presence. Therefore, we further tested the impact of duck presence per date for the unfertilized and fertilized ponds using independent t-tests on which we applied a Bonferroni correction for multiple testing. The direct effect of ducks could be determined immediately after removal of the ducks, the indirect effect 6 weeks after introduction of the ducks. To test whether the nutrient concentrations (N and P) in Chara and Elodea plants were higher in fertilized ponds, we used a three-way ANOVA with nutrient treatment, duck presence and plant species as fixed factors. Data were log 10 transformed to obtain homogeneity in variances. There was a significant three-way interaction among the fixed factors, therefore plant nutrient concentrations were tested for each plant species separately. This revealed that there was no effect of duck presence, therefore this treatment was removed as a factor and plant nutrient concentrations were tested with two-way ANOVAs with nutrient treatment and plant species as fixed factors. To test whether water nutrient availability and light availability affected plant (re) growth, repeated-measures ANOVAs with nutrient and duck treatment as fixed factors were used. Nutrient data were log 10 transformed and light availability was square root transformed to obtain homogeneity of variance and a normal distribution of the data. Epiphyton load was tested with a Kruskall-Wallis test as the data were not normally distributed, also not after transformation. Differences among nutrient and duck treatments were tested with a multiple rank test. Snail density was tested with a two-way ANOVA after log 10 transformation with nutrient and duck presence as fixed factors. The relationship between epiphyton biomass and snail density was tested with a Spearman rank correlation.
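As an illustration only (not part of the original study), the sketch below shows how the post hoc comparisons described above could be computed in Python with SciPy: independent t-tests evaluated against a Bonferroni-corrected alpha, and a Spearman rank correlation between snail density and epiphyton biomass. All values and variable names are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch of the post hoc tests described above (hypothetical data).
import numpy as np
from scipy import stats

# Hypothetical macrophyte biomass (g dry weight per pond), fertilized ponds only:
# five ponds with ducks vs. five without, immediately after duck removal.
biomass_ducks = np.array([410.0, 380.0, 450.0, 300.0, 520.0])
biomass_no_ducks = np.array([820.0, 760.0, 900.0, 680.0, 950.0])

# Independent t-test comparing duck vs. no-duck ponds within one nutrient level.
t_stat, p_value = stats.ttest_ind(biomass_ducks, biomass_no_ducks)

# Bonferroni correction for the three sampling dates tested per parameter (alpha = 0.05 / 3).
n_comparisons = 3
alpha_corrected = 0.05 / n_comparisons
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at corrected alpha: {p_value < alpha_corrected}")

# Spearman rank correlation between snail density and epiphyton biomass (hypothetical values).
snails = np.array([2, 15, 9, 30, 22, 5, 12, 40, 18, 7])
epiphyton = np.array([0.9, 0.3, 0.5, 0.1, 0.2, 0.8, 0.4, 0.1, 0.3, 0.6])
rho, p_rho = stats.spearmanr(snails, epiphyton)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```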
Effect of duck presence on macrophyte biomass
There was a strong interaction between nutrient treatment and duck presence on macrophyte biomass, which changed over time (Table 1; Fig. 1). Immediately after duck presence, macrophyte biomass was reduced by approximately 50 % in the fertilized ponds, whereas mallards had no measurable effect on macrophytes in the unfertilized ponds (Table 2; Fig. 1). At the end of the growing season, 6 weeks after the mallards had been present on the ponds, macrophyte biomass had increased in the fertilized ponds without ducks, whereas the fertilized vegetation where ducks had been present had not grown much during this time, resulting in significantly lower biomass in the ponds where ducks had been (Fig. 1b). In contrast, in the unfertilized ponds, if anything, macrophyte biomass had increased in the ponds where ducks had been present, but the difference in biomass between the duck treatments was not significant (Table 2; Fig. 1a).
In the unfertilized ponds Chara was the dominant species with 78-95 % abundance in the biomass samples over time (see Fig. 1, Online Resource 1). The fertilized ponds were increasingly dominated by Elodea with 40-99 % abundance in the biomass samples over time. The ponds where ducks were introduced tended to have initially a larger proportion of Chara than the ponds without ducks, but there were no differences between the duck treatments (two-way ANOVA).
Table 1 Results of repeated-measures ANOVA with nutrient treatment and duck presence as fixed factors and macrophyte biomass and water nutrient and light availability in the experimental ponds as dependent variables. Data were tested over time before ducks were released in the ponds (2 July), immediately after ducks were removed (11 July) and 6 weeks later (20 August). Significant results at P < 0.05 are indicated in italic. Differences among treatments are indicated in Figs. 1 and 3 and Tables 1 and 2.
Table 2 Results of independent t-tests comparing the macrophyte biomass in ponds with ducks and without ducks, for the unfertilized and fertilized ponds, respectively. Data are presented in Fig. 1.
Plant nutrient concentrations
Nutrient addition resulted in a doubling and tripling of N in Chara and Elodea plants, respectively, and more than four to seven times higher P concentration, respectively ( Fig. 2; Table 3). There was a significant three-way interaction among nutrient treatment, duck presence and plant species on plant N concentration (Table 3). However, further testing of the effect of nutrient treatment and duck presence revealed no significant effect of duck presence on the nutrient concentrations of either plant species (see Table 1, Online Resource 1). When removing duck treatment as a factor, it became apparent that plant nutrient concentrations differed strongly among nutrient treatments and plant species. The concentration of P was higher in Elodea compared to Chara and higher in the fertilized ponds (two-way ANOVA, nutrient treatment F 1,31 = 202.18, P < 0.001; plant species, F 1,31 = 20.78, P < 0.001; nutrients × species, F 1,31 = 3.13, P = 0.087). The concentration of plant N depended on the nutrient status of the pond and the plant species: Chara in the unfertilized ponds contained the lowest plant N concentration, whereas Elodea in the fertilized ponds contained the highest plant N concentration (two-way ANOVA: nutrient treatment, F 1,31 = 135.43, P < 0.001; plant species, F 1,31 = 72.59, P < 0.001; nutrients × species, F 1,31 = 5.22, P = 0.029).
Water nutrients and light availability
Water nutrient concentrations were higher in the fertilized ponds, whereas there were no significant effects of duck presence at any of the sampling dates (Table 1; Fig. 3a-c). Before ducks were introduced, the fertilized ponds with an assigned duck treatment turned out to have a higher PO 4 concentration than the fertilized ponds which no ducks were assigned to. This difference was no longer significant after ducks had been present on the ponds (Fig. 3c). Almost twice as much light reached the bottom of the unfertilized compared to the fertilized ponds ( Fig. 3d; Table 1). Light availability in the fertilized ponds was still quite high, about 20-40 % of the ambient light. The effect of ducks was not significant and there was no trend in time (Table 1). The chlorophyll a concentration in the ponds ranged from 7 to 1,334 µg L −1 and was on average 107 ± 29 µg L −1 (mean ± SE) with no trend in time, pH in the ponds was 9.4 ± 0.05 with no trend in time, conductivity decreased from 141 ± 4 at the first sampling date to 124 ± 3 µS cm −1 in August and alkalinity decreased from 1.52 ± 0.05 to 0.88 ± 0.04 mEq L −1 during the experiment. There were no significant effects of duck or nutrient treatments on chlorophyll a concentration due to large variation in chlorophyll a concentrations within treatments, and no effects on pH, conductivity and alkalinity either (data not shown).
Epiphyton and snails
Epiphyton biomass was four times higher on Elodea plants in the ponds without versus those with added nutrients (Fig. 4a). This difference was significant between the treatments where no ducks had been, whereas the ponds where ducks had been were intermediate in epiphyton biomass (Kruskall-Wallis test, H 3,20 = 9.56, P = 0.023). The density of floating snails was more than sevenfold higher in the fertilized compared to the unfertilized ponds ( Fig. 4b; ANOVA, F 1,16 = 10.12, P = 0.006), whereas there was no effect of duck presence (F 1,16 = 0.02, P = 0.90) and no significant interaction (F 1,16 = 0.01, P = 0.92). There was a negative relationship between snail density and epiphyton biomass (Spearman rank correlation: r = −0.55, P = 0.013).
Discussion
We found evidence that consumer impact on macrophytes varies with pond nutrient status. Consumers had no effect on macrophyte biomass under nutrient-poor conditions, whereas they strongly reduced macrophyte biomass in fertilized ponds. In contrast to field studies, waterfowl species and density were controlled in our ponds and the differences thus depend on the properties of the ponds. We hypothesized that differences in plant nutritional quality or plant growth conditions may explain increased consumer impact on macrophyte biomass under nutrient-rich conditions. Below we discuss the evidence for the hypothesized mechanisms that may explain this result.
Fig. 3 Data were collected before (2 July), immediately after (11 July) and 6 weeks (20 August) after the presence of ducks on the ponds: a NO3, b NH4, c PO4 and d light availability at the bottom of the ponds relative to ambient light. Data are means + SE (n = 5). Data were tested with repeated-measures ANOVAs; see Table 1 for results. Different letters indicate statistically different treatments for each parameter tested per date (two-way ANOVA, followed by post hoc Tukey test if a significant interaction between nutrient and duck treatment was present; P < 0.016, as a result of Bonferroni correction for three dates tested per parameter). Capital letters indicate a significant main effect of nutrient treatment; small letters indicate significant differences among nutrient and duck treatments as a result of a significant interaction between nutrient and duck treatments. Dashed line unfertilized, solid line fertilized, open symbols no ducks, filled symbols with ducks.
Plant quality, nutrient addition and grazing pressure
Plant nutrient concentration was significantly affected by the nutrient treatments. Both Chara and Elodea plants contained a two to sevenfold higher N and P concentration when grown in fertilized ponds compared to unfertilized ponds. Therefore, we found support for our first hypothesis that increased nutrient availability in the environment resulted in higher grazing pressure, which coincided with higher nutrient concentration of the plants. Alternatively, these results can also be explained by a species-specific difference in palatability of Chara and Elodea, if Elodea is more palatable irrespective of its nutrient concentration. However, mallards did not seem to feed preferentially on Elodea, which occurred amongst the dominant Chara vegetation in the unfertilized ponds at up to 20 % of the total biomass. The N and P concentrations were generally low in Chara and Elodea under nutrient-poor conditions and clearly highest in Elodea in the fertilized ponds. This suggests that the nutrient-poor conditions were important in the lack of mallard feeding on macrophytes, apart from a possible general preference for Elodea over Chara. Separate feeding trials to determine duck preference for these macrophyte species showed that both species are palatable for mallard ducks (Ahmad, Bakker and Klaassen, unpublished data). Our study is representative for generalist consumers, whereas specialist feeders, such as red-crested pochard (Netta rufina Pallas), which feed particularly on charophytes, may induce different effects (e.g. Matuszak et al. (2012)). In our study the effect of plant species and pond nutrient status cannot be fully separated and the relative importance of plant species and plant nutrient concentrations in determining grazing pressure thus remains to be investigated in more detail. Furthermore, whereas nutrient addition led to higher plant nutrient concentrations in several experiments with macrophytes (Cronin and Lodge 2003; Dorenbosch and Bakker 2011), it should be noted that the nutrient concentrations found in plant tissues do not necessarily always reflect nutrient concentrations found in the water (Casey and Downing 1976). An increase in plant nutrient concentrations after nutrient addition and a concomitant increase in grazing pressure have been found across ecosystems. Nutrient addition increased the N concentration or decreased the C:N ratio in macrophytes, which enhanced consumption by fish (Dorenbosch and Bakker 2011), in salt marsh plants, which became more attractive for geese (Bos et al. 2005; Stahl et al. 2006), and in grassland plants leading to higher grazing pressure by rabbits. Similarly, sea turtles (Christianen et al. 2012) and ungulates (Van der Wal et al. 2003) are attracted to fertilized plants with increased N concentrations. Plant N concentration is generally found to correlate positively with herbivore consumption (Cebrian and Lartigue 2004). For herbivorous ducks and geese, plant quality such as N concentration is an important determinant of foraging decisions (Durant et al. 2004), and the strong increase in geese and swan numbers in north-western Europe has been at least partly attributed to the increased use of fertilizer on agricultural lands (Van Eerden et al. 2005). Therefore, nutrient availability and plant nutritional quality may be an important parameter to consider when predicting grazing pressure on submerged macrophytes.
See "Results" for statistical tests Herbivory and omnivory in aquatic vertebrates No significant removal of vegetation was found in the unfertilized ponds, suggesting that no macrophytes were consumed by the ducks. The ducks may have fasted to a certain extent, but they also ate, as droppings were observed for all ducks in both nutrient treatments during the 8 days that the ducks stayed on the ponds. Because the droppings were returned to the ponds, in order to study nutrient cycling, no diet data are available, but most likely, the mallards ate macro-invertebrates, as they are facultative herbivorous species (Kear 2005;Wood et al. 2012a).
Macro-invertebrates are abundant in the ponds (Declerck et al. 2011) and are a good food source for many waterfowl species, particularly the smaller species (Wood et al. 2012a). Most waterfowl species that consume macrophytes are omnivorous (Wood et al. 2012a) and they can shift to alternative prey when macrophytes are an unprofitable food source. We observed such a diet switch of facultative herbivorous fish in a different experiment in the same ponds (Dorenbosch and Bakker 2012): they prefer macroinvertebrates over macrophytes, but increase macrophyte consumption when plants are fertilized (Dorenbosch and Bakker 2011). In the wild, ducks on water bodies of low nutrient status may either move to other water bodies or shift their diet towards macroinvertebrates.
Apart from plant quality, the foraging costs may determine whether macrophytes are being consumed by waterfowl. Elevated costs of foraging can make macrophytes unattractive food resources, and cause waterfowl to shift to alternative food resources (Wood et al. 2013). Waterfowl which have to reach macrophytes by upending have greater foraging costs than when feeding on food resources just below the surface (Guillemain et al. 2000;Nolet et al. 2006). As Elodea can be a canopy-forming species and Chara species remain at the bottom, foraging on Elodea may be less costly than on Chara sp. As our ponds were shallow with 50-cm water depth there was little difference in plant height and therefore the difference in foraging effort seemed small, but we cannot entirely exclude that Elodea may have been somewhat less costly to access for the ducks.
Plant (re)growth after consumer presence
The longer term impact of consumers on plants depends on the recovery of the plants after being grazed and the indirect effects of consumer presence, including alterations of nutrient availability. Our second hypothesis was partly supported: the macrophytes responded differently to consumer presence in the nutrient treatments. The plants in the unfertilized ponds seemed not to be grazed and therefore there was no recovery 6 weeks after duck presence either. In the fertilized ponds, we observed a lack of regrowth of Elodea after duck presence, whereas in the ponds without ducks Elodea biomass had increased by 51 % after 6 weeks.
Plant regrowth after grazing is affected by nutrient and light availability (Hawkes and Sullivan 2001;Wise and Abrahamson 2007). The reduced amount of Elodea after grazing may have lost competition for nutrients from phytoplankton and epiphyton which possibly induced light limitation for macrophyte (re)growth (Sand-Jensen and Borum 1991;Hilt 2006), but we cannot test this because algal density (chlorophyll a concentration) and epiphyton load were not elevated under fertilized conditions. In the case of epiphyton, there was even less in the fertilized ponds, probably due to the grazing pressure by snails (Jones et al. 2002;Bakker et al. 2013). Also, light availability in the fertilized ponds was still rather high (minimum 20 % of ambient light on the bottom of the shallow pond) considering that Elodea sp. plants can still grow under very low light availability, also after being cut from the mother plant (Abernethy et al. 1996;Barrat-Segretain 2004). Possibly, snail grazing prevented recovery of Elodea after duck grazing, as snails can inhibit sprouting of plants and thus regeneration, when present in high enough densities (Elger et al. 2007). Furthermore, grazing by invertebrates and waterfowl has been shown to induce reallocation of resources to belowground plant parts and subsequent early senescence in above-sediment plant material in several macrophyte species (Hidding et al. 2009;Miler and Straile 2010). This may, therefore, also be an explanation for limited above-sediment recovery of grazed plants. Whereas herbivores can change the nutrient availability by importing nutrients from elsewhere (Kitchell et al. 1999;Hahn et al. 2008), through the consumption of plants and return of nutrients via faeces (Vanni 2002), we did not find evidence of enhanced nutrient availability as the nutrient concentration was not elevated in the water column nor in the macrophytes after the presence of the ducks in both nutrient treatments. Therefore, lower plant regrowth after grazing in the fertilized ponds confirms the pattern of our hypothesis 2b, which seems to be not due to abiotic factors, as we hypothesized, but possibly to biotic factors of snail grazing or plant reallocation of resources.
Comparison with the field
The two dominant macrophyte species in our ponds are also frequently found in the field where alkaline oligotrophic-to-mesotrophic waters are commonly dominated by charophytes (Van de Bund and Van Donk 2004; Rip et al. 2006; Ibelings et al. 2007), whereas mesotrophic-to-eutrophic waters are often dominated by Elodea sp. (Van Donk and Otte 1996; Perrow et al. 1997; Van de Haterd and Ter Heerdt 2007). Most studies that measured grazing impact on Chara-dominated vegetation found no significant effect of grazing on plant biomass during the summer (Van den Berg 2001; Rip et al. 2006; Hidding et al. 2010); but see Matuszak et al. (2012). In two eutrophic lakes dominated by Elodea sp., herbivores significantly reduced plant biomass (Van Donk and Otte 1996; Van de Haterd and Ter Heerdt 2007). The results of our pond study could explain the differences observed in these field studies, but we should keep in mind that in the field the density and species of consumers may differ among water bodies, which can change the impact on the macrophytes (Wood et al. 2012a, b). Currently few field studies of consumer control of aquatic macrophytes consider or report plant species identity or nutrient availability. We conclude that including water nutrient status and identity of the dominant plant species in the analysis of consumer control of macrophyte biomass may provide a framework with which to understand and predict top-down control in aquatic benthic systems. | 2017-04-04T16:39:53.284Z | 2014-09-07T00:00:00.000 | {
"year": 2014,
"sha1": "15820508b15663ec3c7e188e446165bd7f4084d8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00442-014-3047-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9aa48255ab3d99a27a48b3763f26a34e5c49681b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
245921641 | pes2o/s2orc | v3-fos-license | Duration of Protection against Mild and Severe Disease by Covid-19 Vaccines
Abstract Background Vaccines against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes coronavirus disease 2019 (Covid-19), have been used since December 2020 in the United Kingdom. Real-world data have shown the vaccines to be highly effective against Covid-19 and related severe disease and death. Vaccine effectiveness may wane over time since the receipt of the second dose of the ChAdOx1-S (ChAdOx1 nCoV-19) and BNT162b2 vaccines. Methods We used a test-negative case–control design to estimate vaccine effectiveness against symptomatic Covid-19 and related hospitalization and death in England. Effectiveness of the ChAdOx1-S and BNT162b2 vaccines was assessed according to participant age and status with regard to coexisting conditions and over time since receipt of the second vaccine dose to investigate waning of effectiveness separately for the B.1.1.7 (alpha) and B.1.617.2 (delta) variants. Results Vaccine effectiveness against symptomatic Covid-19 with the delta variant peaked in the early weeks after receipt of the second dose and then decreased by 20 weeks to 44.3% (95% confidence interval [CI], 43.2 to 45.4) with the ChAdOx1-S vaccine and to 66.3% (95% CI, 65.7 to 66.9) with the BNT162b2 vaccine. Waning of vaccine effectiveness was greater in persons 65 years of age or older than in those 40 to 64 years of age. At 20 weeks or more after vaccination, vaccine effectiveness decreased less against both hospitalization, to 80.0% (95% CI, 76.8 to 82.7) with the ChAdOx1-S vaccine and 91.7% (95% CI, 90.2 to 93.0) with the BNT162b2 vaccine, and death, to 84.8% (95% CI, 76.2 to 90.3) and 91.9% (95% CI, 88.5 to 94.3), respectively. Greater waning in vaccine effectiveness against hospitalization was observed in persons 65 years of age or older in a clinically extremely vulnerable group and in persons 40 to 64 years of age with underlying medical conditions than in healthy adults. Conclusions We observed limited waning in vaccine effectiveness against Covid-19–related hospitalization and death at 20 weeks or more after vaccination with two doses of the ChAdOx1-S or BNT162b2 vaccine. Waning was greater in older adults and in those in a clinical risk group.
In the United Kingdom, we recently found that vaccine effectiveness was slightly lower against symptomatic disease with the B.1.617.2 (delta) variant than with the B.1.1.7 (alpha) variant among adults who had received two doses of the BNT162b2 vaccine (Comirnaty, Pfizer-BioNTech) (88.0% vs. 93.7%) or the ChAdOx1-S vaccine (also known as ChAdOx1 nCoV-19; Vaxzevria, AstraZeneca) (67.0% vs. 74.5%), administered over an extended interval of 12 weeks.1 Other countries with high vaccination rates have reported substantially reduced protection against infection with the delta variant. Among elderly nursing home residents who had received two doses of a messenger RNA (mRNA) vaccine 3 weeks apart in the United States, vaccine effectiveness was 74.7% against symptomatic or asymptomatic SARS-CoV-2 infection during the period of March through May 2021 but decreased to 53.1% during the period of June and July 2021, when the delta variant was predominantly circulating.8 In Qatar, no evidence of protection against infection was seen at 20 weeks or more after mRNA vaccination.9 Nevertheless, the extent to which reduced vaccine effectiveness is a result of a new variant or waning immunity remains unclear. Immunogenicity data indicate that antibody titers wane relatively rapidly after the receipt of two doses of vaccine, which suggests that waning may be an important factor in reported decreases in vaccine effectiveness over time, although decreases in antibody titers may be more rapid than decreases in protection.10 A number of serologic markers have been found to correlate with SARS-CoV-2 infection but not with severe or fatal Covid-19.11,12 In the United Kingdom, Covid-19 vaccines have been used since early December 2020. Initially, a 3-week interval between doses of the BNT162b2 vaccine was used (for a period of approximately 4 weeks), which was then changed to an extended 12-week interval for all vaccines until June 2021; at that time, the interval was reduced to 8 weeks, after the emergence of the delta variant. In this study, we estimated vaccine effectiveness over time since receipt of the second dose of the ChAdOx1-S, BNT162b2, and mRNA-1273 (Spikevax, Moderna) vaccines in order to investigate the waning of protection against symptomatic Covid-19 and related hospitalization and death separately for the alpha and delta variants.
Study Design
We used a test-negative case-control design to estimate vaccine effectiveness of two doses of the ChAdOx1-S, BNT162b2, and mRNA-1273 vaccines against symptomatic disease as confirmed on polymerase-chain-reaction (PCR) testing, against hospitalization within 14 days after confirmation on PCR testing, and against death within 28 days after confirmation on PCR testing.The analysis was stratified to assess vaccine effectiveness against the alpha and delta variants during the periods when they were circulating.For each outcome of interest, we compared vaccination status in symptomatic adults who had PCR-confirmed SARS-CoV-2 infection (case participants) with vaccination status in adults who reported symptoms of Covid-19 but had a negative PCR test for SARS-CoV-2 (control participants).
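For illustration only, the sketch below shows the basic, unadjusted calculation that underlies a test-negative design: vaccine effectiveness is derived from the odds ratio of vaccination among case versus control participants. The counts are hypothetical placeholders; the study's actual estimates came from the adjusted logistic-regression models described under Statistical Analysis.

```python
# Unadjusted vaccine effectiveness from a 2x2 table in a test-negative design
# (hypothetical counts for illustration only).
vaccinated_cases = 800        # PCR-positive, vaccinated
unvaccinated_cases = 2000     # PCR-positive, unvaccinated
vaccinated_controls = 3000    # PCR-negative, vaccinated
unvaccinated_controls = 2500  # PCR-negative, unvaccinated

# Odds of vaccination among cases and among controls.
odds_cases = vaccinated_cases / unvaccinated_cases
odds_controls = vaccinated_controls / unvaccinated_controls

odds_ratio = odds_cases / odds_controls
vaccine_effectiveness = (1 - odds_ratio) * 100  # expressed as a percentage

print(f"OR = {odds_ratio:.2f}, VE = {vaccine_effectiveness:.1f}%")
```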
Data Sources
The data sources are described in detail in the Supplementary Appendix, which is available with the full text of this article at NEJM.org.Community-testing data between December 8, 2020, and October 1, 2021, were included.Data were restricted to persons who had reported symptoms and samples obtained for PCR testing up to 10 days after symptom onset in order to account for the reduced sensitivity of PCR testing after this period.Persons who had previously tested positive for SARS-CoV-2 (on PCR or antibody testing) were excluded from the analysis.
Before May 2021, the alpha variant was the main viral variant circulating in the United Kingdom, after which the delta variant predominated. Cases were categorized as being due to the alpha or delta variant on the following bases: first, on the basis of results of whole-genome sequencing; second, on the basis of the spike (S)-gene target status (alpha: target-negative before June 28, 2021; delta: target-positive from April 12, 2021); and third, for cases in which sequencing or S-gene testing was not done, on the basis of time period (alpha: from January 4, 2021, to May 2, 2021; delta: from May 24, 2021), because these variants were responsible for more than 80% of the cases of infection in all the weeks during this period (>95% in most weeks) (Table S1 in the Supplementary Appendix).13 Testing data were linked to the Emergency Care Data Set (ECDS) to assess vaccine effectiveness against hospitalization. We included emergency department visits resulting in inpatient admission among persons who had symptoms within 14 days after the positive test and whose visit was not related to injury. ECDS data include hospital admissions through NHS emergency departments in England but not elective admissions. Only first visits in the 14-day period were included if a person had multiple admissions from emergency care. To allow for delays in the ECDS data flow, only case and control participants with sample dates by September 17, 2021, were included. We used a sensitivity analysis to assess only admissions of persons with Covid-19 or respiratory SNOMED CT (Systematized Nomenclature of Medicine-Clinical Terms) codes as described in the Public Health England weekly bulletin for emergency departments.14 For the assessment of vaccine effectiveness against death, we used the NHS digital data on deaths reported in the National Immunisation Management System (NIMS). To allow for delays in death registrations and for all case participants to have at least 28 days of follow-up, we included only case and control participants with test results by July 29, 2021.
Statistical Analysis
Details of the statistical analysis are provided in the Supplementary Appendix.Vaccine effectiveness was adjusted in logistic-regression models for participants' age, sex, index of multiple deprivation (a measure of socioeconomic status), race or ethnic group, care home residence status (for analyses including persons ≥65 years of age), geographic region, period (calendar week), health and social care worker status (for analyses involving persons <65 years of age), and status of being in a clinical risk group (available only for persons <65 years of age) or a clinically extremely vulnerable group (any age).Clinical risk groups included a range of chronic conditions as described in the Green Book, 15 whereas the clinically extremely vulnerable group included persons who were considered to be at the highest risk for severe Covid-19, including those with immunosuppressed conditions and those with severe respiratory disease. 16For deaths, the period was modeled with the use of a cubic spline owing to smaller numbers.
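A minimal, hypothetical sketch of this kind of covariate-adjusted model is shown below, using Python and statsmodels. The simulated data frame, the column names, and the reduced covariate set are illustrative assumptions only; they do not reproduce the study's data or its full model specification.

```python
# Sketch of a covariate-adjusted estimate of vaccine effectiveness (hypothetical data frame).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per test: case (1 = PCR positive, 0 = negative), vaccinated (1 = >=14 days
# after second dose, 0 = unvaccinated), plus a reduced set of confounders.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),
    "vaccinated": rng.integers(0, 2, n),
    "age_group": rng.choice(["16-39", "40-64", "65+"], n),
    "sex": rng.choice(["F", "M"], n),
    "imd_quintile": rng.integers(1, 6, n),
    "week": rng.integers(1, 30, n),
})

# Logistic regression of case status on vaccination, adjusted for covariates
# (categorical terms wrapped in C(); calendar week treated here as categorical).
model = smf.logit("case ~ vaccinated + C(age_group) + C(sex) + C(imd_quintile) + C(week)", data=df)
result = model.fit(disp=0)

odds_ratio = np.exp(result.params["vaccinated"])
ve = (1 - odds_ratio) * 100
ci_low, ci_high = np.exp(result.conf_int().loc["vaccinated"])
print(f"VE = {ve:.1f}% (95% CI {(1 - ci_high) * 100:.1f} to {(1 - ci_low) * 100:.1f})")
```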
Analyses were stratified according to age group and according to the timing of vaccination in the general population (Table S2).Among persons 65 years of age or older, analyses were further stratified on the basis of an assessment of the participant's clinical vulnerability.Among persons 40 to 64 years of age, analyses were also stratified on the basis of a determination of being in the clinically extremely vulnerable group or in a clinical risk group.
Vaccine effectiveness was assessed for each vaccine separately and according to intervals after vaccination of at least 28 days after the first dose and at least 14 days after the second dose.To assess potential waning of vaccine effectiveness, we used intervals of 1 week (7 to 13 days), 2 to 9 weeks, 10 to 14 weeks, 15 to 19 weeks, and 20 weeks and after the receipt of the second dose.(Because second doses only started to be delivered in large numbers from late March 2021, the maximum follow-up in most groups was approximately 6 months.)For the earliest vaccinated group (persons ≥65 years of age), the last followup period was further stratified into periods of 20 to 24 weeks and of 25 weeks and beyond.An additional analysis of vaccine effectiveness against hospitalization among persons 80 years of age or older was assessed according to the interval between vaccine doses (≤28 days or ≥56 days).
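As a small illustration (field names and values are hypothetical), tests can be assigned to the post-second-dose intervals used in the waning analysis by binning the number of days since the second dose.

```python
# Assign each test to a post-second-dose interval used in the waning analysis
# (hypothetical values; field names are illustrative).
import pandas as pd

days_since_dose2 = pd.Series([8, 20, 45, 80, 101, 120, 145, 160, 175, 210])

# Bin edges: 7-13 days (1 wk), 14-69 (2-9 wk), 70-104 (10-14 wk), 105-139 (15-19 wk), >=140 (20+ wk).
bins = [6, 13, 69, 104, 139, 10_000]
labels = ["1 wk", "2-9 wk", "10-14 wk", "15-19 wk", ">=20 wk"]

interval = pd.cut(days_since_dose2, bins=bins, labels=labels)
print(interval.value_counts().sort_index())
```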
Descriptive Statistics and Characteristics
A total of 7,106,982 eligible SARS-CoV-2 PCR tests with a sample date within 10 days after symptom onset were assessed. Of these tests, 6,056,673 (85.2%) were successfully linked to the NIMS database for vaccination status, including tests for 84.7% of the case participants and for 85.4% of the control participants. The demographic characteristics of participants with linked and unlinked tests are summarized in Table S3. Of the participants with linked tests, 1,706,743 had a first recorded positive test result for SARS-CoV-2 during the study period, of whom 544,468 were classified as having infection with the alpha variant, 1,125,257 as having infection with the
delta variant, and 37,018 as having infection with another or unknown variant (not included in the analysis of vaccine effectiveness).Sequencing status according to S-gene target failure over time showed the high positive predictive value of using the S-gene target failure approach in the weeks in which the variant status was unknown.Over the same period, 4,349,930 negative tests were included from 3,763,690 participants (of whom 510,177 had two negative results and 76,063 had three negative results, all occurring >7 days after a previous negative test) (Table S4).Overall, 2,376,037 participants (39.2%) had received two doses of the ChAdOx1-S vaccine, 2,133,769 (35.2%) had received two doses of the BNT162b2 vaccine, 176,235 (2.9%) had received two doses of the mRNA-1273 vaccine, and 12,169 (0.2%) had received a mixed course of two different vaccines or had an interval of less than 19 days between doses; this last group was excluded from further analyses.A total of 22,575 participants with positive test results were hospitalized within 14 days after the test, and 6336 died within 28 days after the test (Table S5).
Vaccine Effectiveness Estimates and Vaccine Waning
Vaccine effectiveness and numbers of case and control participants according to vaccine, dose, and age group for the various outcomes are summarized in Tables S6 and S7.In general, vaccine effectiveness was higher with the mRNA vaccines than with the ChAdOx1-S vaccine with regard to several comparisons: against the more severe outcomes as compared with symptomatic infection, with the alpha variant as compared with the delta variant, and among younger persons as compared with older persons.Logistic-regression results for the all-age analysis for all the variables included in the models, along with goodness-of-fit assessments, are shown in Table S8.
Table 1 and Figure 1 summarize vaccine effectiveness against symptomatic disease according to week after receipt of the second dose for the delta variant (numbers of case and control participants are summarized in Table S9 and according to week in Fig. S1). Vaccine effectiveness against symptomatic disease due to the delta variant peaked in the early weeks after receipt of the second dose, then decreased by 20 weeks to 44.3% (95% confidence interval [CI], 43.2 to 45.4) for the ChAdOx1-S vaccine and to 66.3% (95% CI, 65.7 to 66.9) for the BNT162b2 vaccine. Waning of vaccine effectiveness was greater in persons 65 years of age or older than in persons 40 to 64 years of age. Follow-up was insufficient to estimate waning of vaccine effectiveness in persons younger than 40 years of age, who had been vaccinated more recently. The effectiveness of the mRNA-1273 vaccine against symptomatic disease and hospitalization is shown in Table S10. Follow-up after infection with the alpha variant was limited because this variant stopped circulating by the time that later follow-up periods were reached (Tables S11 through S13).
Limited waning of vaccine effectiveness was noted with regard to protection against hospitalization.Vaccine effectiveness against hospitalization with infection with the delta variant was 80.0% (95% CI, 76.8 to 82.7) with the ChAdOx1-S vaccine and 91.7% (95% CI, 90.2 to 93.0) with the BNT162b2 vaccine at 20 weeks or more after vaccination (Fig. 1 and Table 2).Similarly, limited waning of vaccine effectiveness was noted against deaths due to the delta variant for the ChAdOx1-S vaccine (84.8%; 95% CI, 76.2 to 90.3) and the BNT162b2 vaccine (91.9%; 95% CI, 88.5 to 94.3) at 20 weeks or more after vaccination (Fig. 1 and Table 3).Combined results for all the vaccines regarding effectiveness against hospitalization among participants 65 years of age or older are shown in Figure S2.The numbers of case and control participants in these analyses of hospitalization and death are summarized in Tables S14 and S15.
In general, lower vaccine effectiveness against hospitalization was seen in the oldest age group (≥65 years of age), except in the period of 20 weeks or more after vaccination with the ChAdOx1-S vaccine, although limited data were available for the group of persons 40 to 64 years of age for the period of 20 weeks or more after vaccination and the confidence intervals overlapped (Table 2 and Fig. 2). Results from the sensitivity analysis with the use of only hospitalizations that had been coded as respiratory admissions were similar to those of the primary analysis, showing vaccine effectiveness of 82.6% (95% CI, 79.1 to 85.4) for the ChAdOx1-S vaccine and 93.5% (95% CI, 91.9 to 94.7) for the BNT162b2 vaccine against the delta variant at 20 weeks or more after vaccination (Table S16). Similar results were also observed in analyses that included only control participants who went on to be hospitalized within 14 days after testing, but the number of control participants was much lower than in the primary analysis (Table S17 and Fig. S3).
Stratification according to risk-group status identified greater waning in vaccine effectiveness against hospitalization with the delta variant among persons 65 years of age or older in the clinically extremely vulnerable group than among those not in the clinically extremely vulnerable group (Fig. 3).Very little evidence of waning was seen up to 20 weeks or more after vaccination among persons 65 years of age or older who were not in the clinically extremely vulnerable group and had received the BNT162b2 vaccine.Greater waning was seen among persons 40 to 64 years of age in clinical risk groups than among their healthy peers, although by 20 weeks or more after vaccination, vaccine effectiveness with the ChAdOx1-S vaccine was similar in the two groups (Fig. S4).With regard to symptomatic infection, although vaccine effectiveness was lower among participants in risk groups, waning was similar according to risk status (Tables S18 and S19).
An analysis that was restricted to persons 80 years of age or older who had received the BNT162b2 vaccine before January 4, 2021, showed lower vaccine effectiveness among participants with a short interval (≤4 weeks) than among those with an extended interval (≥8 weeks) between doses in the latest follow-up periods (≥20 weeks after the second dose).However, confidence intervals were wide and overlapping (Fig. S5).
Discussion
Our data provide evidence of waning of protection against symptomatic infection after the receipt of two doses of the ChAdOx1-S or BNT162b2 vaccine from 10 weeks after receipt of the second dose. Protection against hospitalization and death, however, was sustained at high levels for at least 20 weeks after receipt of the second dose. At 20 weeks or more after receipt of the second dose, we observed more waning with the ChAdOx1-S vaccine than with the BNT162b2 vaccine, although the groups who received each vaccine differed.6 Waning of protection against hospitalization was greater in older adults and in participants in a clinical risk group. Among persons 65 years of age or older who were not in a clinical risk group, however, protection against hospitalization remained close to 95% with the BNT162b2 vaccine and just under 80% with the ChAdOx1-S vaccine at 20 weeks or more after receipt of the second dose.
In addition to the emergence of the more transmissible delta variant, waning protection
against symptomatic infection with increasing time since vaccination is also probably contributing to the increase in the incidence of Covid-19 in the United Kingdom and elsewhere. However, the incidence of Covid-19-related hospitalization and death has remained low, especially among vaccinated adults.20 Our finding of only limited waning of protection against hospitalization or death in most groups that we studied is consistent with the preserved vaccine effectiveness against hospitalization that was observed in Qatar.9 Regional U.S. studies have also shown sustained high vaccine effectiveness against Covid-19-related hospitalization despite the emergence and rapid local spread of the delta variant. Across 18 U.S. states, vaccine effectiveness after the receipt of two vaccine doses administered 3 weeks apart among adults (median age, 59 years) who had been admitted to 21 hospitals during the period from March 11 to July 14, 2021, was 86% (95% CI, 82 to 88) overall; vaccine effectiveness was 87% (95% CI, 83 to 90) among patients with illness onset during the period from March through May, as compared with 84% (95% CI, 79 to 89) among those with illness onset during the period of June and July 2021, with no evidence of a significant decrease in vaccine effectiveness over the 24-week period.21 A similar study involving adults in New York during the period from May 3 to July 25, 2021, showed hospitalization rates to be lower by a factor of nearly 10 among vaccinated adults (>90% of whom had received two doses of mRNA vaccine 3 weeks apart) than among unvaccinated adults (1.31 vs. 10.69 per 100,000 person-days). Vaccine effectiveness against hospitalization remained relatively stable (91.9 to 95.3%) during the surveillance period, although the age-adjusted vaccine effectiveness against new cases of Covid-19 decreased from 91.7% to 79.8%, a change that coincided with an increase in the circulation of the delta variant from less than 2% to more than 80% of cases.22 Conversely, reports have appeared of an increased proportion of hospitalization among infected adults who had been vaccinated the earliest and had received two doses of the BNT162b2 vaccine 3 weeks apart in Israel.17 The shorter interval of 3 weeks as well as the longer follow-up in a population with rapid vaccine uptake in Israel may be factors in explaining this difference as compared with findings in the United Kingdom, the United States, and Qatar.
Our findings and those from Qatar and the United States raise important questions about the timing of third doses of vaccine in adults who remain protected against hospitalization and death for at least 5 months after the receipt of two doses. Israel was one of the first countries to immunize adults with the BNT162b2 vaccine and began offering a third dose of the same vaccine to older adults starting in July 2021.23 Early data indicate that the third dose was associated with large reductions in the incidence of SARS-CoV-2 infection within 1 week after vaccination, with greater reductions in the second week.23 The duration of protection offered by the third dose, however, is uncertain. Many countries, including the United Kingdom and the United States, are now offering a third dose.
A third dose of vaccine improves both humoral and cellular immunity against SARS-CoV-2, with increased neutralizing activity against different variants, including the delta variant, which is likely to improve protection against infection.24 Waning of vaccine effectiveness against severe disease outcomes was relatively limited in most cohorts in this study but is likely to continue with time since the receipt of two vaccine doses. Decisions on timing of the third dose must balance the rate of waning immunity against the prevalence of disease, including the risk of new variants, and the prioritization of persons at highest risk for severe disease. Existing evidence suggests that vaccine effectiveness increases with longer intervals between doses and, if this also applies to third doses, the administration interval will also need to be considered.25 At the same time, it is possible that third doses will be more reactogenic than previous doses, especially if the recipient receives different vaccines for the initial and booster doses.26 Attractive alternatives include half-dose boosters or boosting with variant-targeted vaccines, which are both under investigation.27 For the United Kingdom and countries with administration intervals that are longer than the licensed interval, another important consideration is that the extended interval of 8 to 12 weeks between vaccine doses provides higher serologic responses and increased vaccine effectiveness than the licensed interval of 3 to 4 weeks for mRNA vaccines,25 which may provide the populations in these countries with better, longer-term protection.12 This hypothesis is supported by our current findings comparing short and long administration intervals among persons 80 years of age or older.
We found that waning effectiveness against hospitalization was greatest among persons in clinical risk groups. Other studies have shown lower immune responses and vaccine effectiveness among persons in clinical risk groups, most notably those with immunosuppression.10,21,28,29 The United Kingdom and other countries already recommend a third dose of Covid-19 vaccine for all adults as part of their primary immunization course.30,31 This study has some limitations. The test-negative case-control study design is observational and, therefore, subject to potential bias. The very narrow 95% confidence intervals in some analyses relate to the large sample size and do not account for what may be relatively larger effects of bias. A detailed quantification of potential bias is beyond the scope of this article, but others have assessed some biases such as exposure and outcome misclassification when using the test-negative design for hospitalized case and control participants.32 A full discussion of these limitations is provided in Section S3. The likely direction of these biases, if they exist, would be to reduce vaccine effectiveness, with the reduction being greater with longer intervals after vaccination. Other limitations include our limited ability to assess waning vaccine effectiveness against the alpha variant owing to low circulation since June 2021. In addition, these estimates of vaccine effectiveness relate to the population of persons who seek testing and were successfully matched to the NIMS database, so they may not be representative of the whole population. For example, a higher proportion of non-White persons than White persons do not match to the NIMS database. We also relied on tested persons declaring their symptoms when the test was requested, and some asymptomatic persons may declare symptoms in order to access the test. Overall vaccine effectiveness will be attenuated if it is lower against asymptomatic infection and, for control participants, may mean that they were not matched on the basis of exposure to an infectious disease that led to symptoms.
Our study showed evidence of significant waning of vaccine effectiveness against symptomatic disease, but with limited waning against severe disease, for at least 5 months after an extended-interval, two-dose schedule with the ChAdOx1-S or BNT162b2 vaccine.
Figure 1.Vaccine Effectiveness against Symptomatic Covid-19 and Related Hospitalization and Death in England.Vaccine effectiveness was assessed among persons 16 years of age or older who had received two doses of the ChAdOx1-S or BNT162b2 vaccine in England.Shown are data regarding vaccine effectiveness against infection with the B.1.1.7 (alpha) and B.1.617.2 (delta) variants, according to time since the second dose of vaccine.There were insufficient cases of infection with the alpha variant in the later periods after vaccination, given that the alpha variant had largely disappeared in the United Kingdom by this stage.The numbers were too small for the assessment of death at 1 week.I bars indicate 95% confidence intervals.Covid-19 denotes coronavirus disease 2019.
Figure 2. Vaccine Effectiveness against Covid-19-Related Hospitalization among Persons Who Received Two Doses of the ChAdOx1-S or BNT162b2 Vaccine, According to Age Group.Shown are data regarding vaccine effectiveness against Covid-19-related hospitalization with the alpha and delta variants, according to age group and time since the second dose of vaccine.I bars indicate 95% confidence intervals.
Figure 3. Vaccine Effectiveness against Covid-19-Related Hospitalization among Persons 65 Years of Age or Older Who Received Two Doses of the ChAdOx1-S or BNT162b2 Vaccine, According to Clinically Extremely Vulnerable Group Status. Shown are data regarding vaccine effectiveness against Covid-19-related hospitalization with the delta variant, according to time since the second dose of vaccine and clinically extremely vulnerable group status, among persons 65 years of age or older. The clinically extremely vulnerable group included persons who were considered to be at highest risk for severe Covid-19.16 The numbers were too small for the assessment of Covid-19-related hospitalization at 1 week. I bars indicate 95% confidence intervals.
Table 1. Vaccine Effectiveness against Symptomatic Covid-19 with the Delta Variant among Persons in England Who Received Two Doses of the ChAdOx1-S or BNT162b2 Vaccine, According to Weeks since Receipt of the Second Dose (columns: vaccine and age group; vaccine effectiveness, 95% CI).
Table 2. Vaccine Effectiveness against Delta Variant-Related Hospitalization among Persons in England Who Received Two Doses of ChAdOx1-S or BNT162b2 Vaccine, According to Weeks since Receipt of the Second Dose (columns: vaccine, age group, and subgroup; vaccine effectiveness, 95% CI).
| 2022-01-14T06:16:49.512Z | 2022-01-12T00:00:00.000 | {
"year": 2022,
"sha1": "759e3702b0d2316518dbbe6c48c1b7203df31f51",
"oa_license": null,
"oa_url": "https://www.nejm.org/doi/pdf/10.1056/NEJMoa2115481?articleTools=true",
"oa_status": "BRONZE",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8064276772517a9995afa8e05a8665dc3a581ef4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17849756 | pes2o/s2orc | v3-fos-license | Evaluating Classification Strategies in Bag of SIFT Feature Method for Animal Recognition
: These days automatic image annotation is an important topic and several efforts are made to solve the semantic gap problem which is still an open issue. Also, Content Based Image Retrieval (CBIR) cannot solve this problem. One of the efficient and effective models for solving the semantic gap and visual recognition and retrieval is Bag of Feature (BoF) model which can quantize local visual features like SIFT perfectly. In this study our aim is to investigate the potential usage of Bag of SIFT Feature in animal recognition. Also, we specified which classification method is better for animal pictures.
INTRODUCTION
In Content Based Image Retrieval (CBIR) (Qi and Snyder, 1999) proposed in the early 1990s, images are automatically indexed by extracting their different low level features such as texture, color and shape.Semantic gap is a well-known problem among Content Based Image Retrieval (CBIR) systems.This is caused by humans tendency to use concepts, such as keywords and text definitions, to understand images and measure their resemblance.Although low-level features (texture, color, spatial relationship, shape, etc.) are extracted automatically by computer vision techniques, CBIR often fails to describe the high-level semantic concepts in user's mind (Zhou and Huang, 2000).These systems cannot effectively model image semantics and have many restrictions when dealing with wide ranging content image databases (Liu et al., 2007).
Another problem caused by using low-level features like texture, color and shape is that they require image digestion. In contrast, the Scale-Invariant Feature Transform (SIFT) (Lowe, 1999) is robust to scaling, rotation, translation and illumination changes, and partially invariant to affine distortion. Also, there is no need to digest images; the only thing we need to do is to quantize the SIFT features with the well-known Bag of Feature (BoF) technique.
Furthermore, we observed that most previous works lack an appropriate investigation of animal annotation and animal picture recognition, because animals often share similar environments, which causes low accuracy. For this reason, our objective in this study is to investigate the potential usage of the bag of SIFT feature model in animal recognition and to find out which kind of classification is more suitable for our animal recognition system.
LITERATURE REVIEW
At the starting point of the BoF methodology we must identify local interest regions or points. Then we can extract features from these points; both steps are described in the following sections.
Interest point detection: There are several distinguished methods which are listed below (Mikolajczyk et al., 2005).
Harris-Laplace regions:
In this method corners are detected by using Laplacian-of-Gaussian operator in scale-space.
Hessian-Laplace regions: Are localized in space at the local maxima of the Hessian determinant and in scale at the local maxima of the Laplacian-of-Gaussian.
Maximally Stable External Regions (MSERs):
Are components of connected pixels in a threshold image.A water-shed-like segmentation algorithm is applied to image intensities and segment boundaries which are stable over a wide range of thresholds that define the region.
DoG regions:
This detector is appropriate for searching blob-like structures with local scale-space maxima of the difference-of-Gaussian. Also, it is faster and more compact (fewer feature points per image) than other detectors.
Salient regions:
In circular regions of various sizes, entropy of pixel intensity histograms is measured at each image position.
In our study we used Harris-Laplace for finding key points.
SIFT feature descriptors: After interest Points are detected we can describe them by their features like SIFT.SIFT is an algorithm published by for detecting and describing local features in images.Each SIFT key point is a circular image region with an orientation.It is described by four parameters: key point center (x and y coordinates), its scale (the radius of the region) and its orientation (an angle expressed in radians).SIFT detector is invariant and robust to translation, rotations, scaling and partially invariant to affine distortion and illumination changes.Four steps involved in SIFT algorithm.
Scale-space extrema detection: Identify those locations and scales that are identifiable from different views (Gaussian blurring and sigma) of the same object.
Keypoint localization: Eliminate more points from the list of keypoints by finding those that have low contrast or are poorly localized on an edge.
Orientation assignment: Assign a consistent orientation to the keypoints based on local image properties.
Keypoint descriptor: Keypoint descriptors typically use a set of 16 histograms, aligned in a 4×4 grid, each with 8 orientation bins, one for each of the main compass directions and one for each of the mid-points of these directions. This results in a feature vector containing 128 elements.
In other words, each pixel in an image is compared with its 8 neighbors as well as 9 pixels in the next scale and 9 pixels in the previous scale. If that pixel is a local extremum, the keypoint is best represented at that scale.
Figure 1 shows 2 examples of SIFT features of Harris-Laplace key points which are generated by our experiment.
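A minimal Python sketch of this descriptor extraction step, using OpenCV's SIFT implementation, is shown below; note that OpenCV's default difference-of-Gaussian detector is used here instead of the Harris-Laplace key points used in the experiment, and the image path is only a placeholder.

import cv2

# Load an image in grayscale ("example.jpg" is a placeholder path).
image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

# Create the SIFT detector/descriptor. OpenCV uses a difference-of-Gaussian
# detector internally; the paper relies on Harris-Laplace key points instead.
sift = cv2.SIFT_create()

# Detect key points and compute their 128-element descriptors.
keypoints, descriptors = sift.detectAndCompute(image, None)

# Each key point carries the four parameters mentioned above:
# center (kp.pt), scale (kp.size) and orientation (kp.angle, in degrees).
for kp in keypoints[:5]:
    print(kp.pt, kp.size, kp.angle)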
Visual word quantization: After extracting features, images can be represented by sets of keypoint descriptors. But they are not meaningful on their own. For fixing this problem, Vector Quantization (VQ) techniques are presented to cluster the keypoint descriptors into a large number of clusters by using the K-means clustering algorithm and then convert each keypoint to the index of the cluster to which it belongs. By using the Bag of Feature (BoF) method we can cluster similar features into visual words and represent each picture by counting each visual word. This representation is similar to the bag-of-words document representation in terms of semantics. There is a complete definition of BoW in the next part.
Bag of Words (BoW) model: The Bag of Words (BoW) model is a popular technique for document classification. In this method a document is represented as the bag of its words, and features are extracted from the frequency of occurrence of each word. Recently, the Bag of Words model has also been used for computer vision (Perona, 2005). Therefore, instead of the document version name (BoW), the term Bag of Feature (BoF) will be used, as described below.
Bag of Feature (BoF) model: These days, the Bag of Feature (BoF) model is widely used for image classification and object recognition because of its excellent performance.
Steps of the BoF method are listed as follows (a minimal code sketch of these steps is given after this literature review):

• Extract blobs and features (e.g., SIFT) on training and test images
• Build a visual vocabulary using a clustering method (e.g., K-means) and descriptor quantization
• Represent images with BoF histograms
• Image classification (e.g., SVM)

The related works in this area include Choi et al. (2010), who presented a method for creating fuzzy multimedia ontologies automatically. They used SIFT for their feature extraction and BoF for their feature quantization. Zhang et al. (2012) analyzed key aspects of the various Automatic Image Annotation (AIA) methods, including both feature extraction and semantic learning methods; the major methods are discussed and illustrated in detail. Tousch et al. (2012) reviewed structures in the field of demonstration and analyzed how the structure is used. They first demonstrated works without structured vocabulary and then showed how structured vocabulary started with introducing links between categories or between features. They then reviewed works which used structured vocabularies as an input and analyzed how the structure is exploited. Jiang et al. (2012) proposed a Semantic Diffusion (SD) approach which enhanced previous annotations (made manually or with machine learning techniques) by using a graph diffusion formulation to improve the stability of concept annotation. Hong et al.
(2014) proposed a Multiple-Instance Learning (MIL) method that performs feature mapping to convert the task into a single-instance learning problem, thereby addressing the limitations of the standard MIL method. This method is able to explore both positive and negative concept correlations. It can also select effective features from a large and diverse set of low-level features for each concept under MIL settings. Liu et al. (2014) presented a Multi-view Hessian Discriminative Sparse Coding (MHDSC) model which combines Hessian regularization and discriminative sparse coding to handle multi-view learning. Chiang (2013) offered a semi-automatic tool, called IGAnn (Interactive Image Annotation), that assists users in annotating images with textual labels. By collecting related and unrelated images over iterations, a hierarchical classifier for the specified label is built using the proposed semi-supervised approach. Dimitrovski et al. (2011) presented a Hierarchical Multi-label Classification (HMC) system for medical image annotation, where each case can belong to multiple classes and these classes/labels are organized in a hierarchy. In most of the reviewed literature, BoF with the SIFT feature has the key role in feature extraction and quantization and shows better results in comparison with using other low-level features like color or texture alone (Tsai, 2012).
Figures 2 and 3 depict the stages of animal recognition using the BoF model for training and testing, respectively.
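A minimal Python sketch of this BoF pipeline is given below; it assumes the SIFT descriptors of the training images have already been extracted, and the vocabulary size, chi-square kernel SVM and variable names are illustrative choices rather than the exact configuration of the experiment.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

def build_vocabulary(descriptor_list, n_words=1500):
    # Stack all SIFT descriptors and cluster them into visual words.
    all_descriptors = np.vstack(descriptor_list)
    return KMeans(n_clusters=n_words, random_state=0).fit(all_descriptors)

def bof_histogram(descriptors, kmeans):
    # Assign each descriptor to its nearest visual word and count occurrences.
    words = kmeans.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / max(hist.sum(), 1)  # normalized BoF histogram

# descriptor_list and labels are placeholders for the training data.
# kmeans = build_vocabulary(descriptor_list, n_words=1500)
# X_train = np.array([bof_histogram(d, kmeans) for d in descriptor_list])
# K_train = chi2_kernel(X_train)                 # chi-square kernel matrix
# clf = SVC(kernel="precomputed").fit(K_train, labels)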
METHODOLOGY
In this study we will investigate the potential and accuracy of the BoF model with the SIFT feature, the K-means method for clustering and quantization of words, and 6 different kinds of classification (NN L2, NN Chi2, SVM linear, SVM LLC, SVM IK and SVM Chi2) in a special domain (animal) to find which one is more effective.

Because of the variety of animal pictures and natural environments, our dataset is Caltech 256 (Griffin et al., 2007). We investigate 20 different animals, or 20 concepts, from different kinds of animals (bear, butterfly, camel, dog, house fly, frog, giraffe, goose, gorilla, horse, humming bird, ibis, iguana, octopus, ostrich, owl, penguin, starfish, swan and zebra) in different environments (lake, desert, sea, sand, jungle, bushy, etc.). For each animal, 40 images are randomly selected for training and 10 images are randomly selected for testing. The total number of images is 800 for training and 200 for testing. The number of extracted code words is 1500 and, for evaluating the accuracy of each concept, we used the well-known Precision, Recall and Accuracy formulas (Tousch et al., 2012; Chiang, 2013; Fakhari and Moghadam, 2013; Lee et al., 2011).

Although we have just focused on 20 different animals, this method can be used for other animals or other categories rather than animals. All we need to do is to separate the folder of the new concept and change its name; then all the stages can be done automatically by our algorithm.

Fig. 4: Visual word example in animal BoF

False negatives (fn): Items which were not labeled as belonging to this class but should have been.
True negatives (tn): The number of items correctly not labeled as belonging to this class.
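From these quantities, the standard definitions of the three measures (presumably the formulas referred to as (1), (2) and (3) in the text) can be written as:

\begin{align}
\text{Precision} &= \frac{tp}{tp + fp} \\
\text{Recall} &= \frac{tp}{tp + fn} \\
\text{Accuracy} &= \frac{tp + tn}{tp + tn + fp + fn}
\end{align}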
DISCUSSION
The normalized confusion matrix is an n×n matrix showing how many test images are correctly classified and how many are misclassified into other classes. In other words, for each concept it shows how many of its images are assigned to the other concepts. Therefore, by using this matrix we can analyze and find the reason for the misclassification of some pictures and find a good solution for it. Figure 5 shows our final experimental results for 20 concepts (bear, butterfly, camel, dog, house fly, frog, giraffe, goose, gorilla, horse, humming bird, ibis, iguana, octopus, ostrich, owl, penguin, starfish, swan and zebra); 40 images are randomly selected for training and 10 images are randomly selected for testing, so the total number of images is 800 for training and 200 for testing. The number of extracted code words is 1500 and, for computing the accuracy of each concept, we used the well-known Precision, Recall and Accuracy formulas with six kinds of image classification methods (NN L2, NN Chi2, SVM linear, SVM LLC, SVM IK, SVM Chi2). The results are respectively depicted in Fig. 6 to 8. Although we have just focused on 20 different animals, this method is scalable to other concepts. All we need is to separate the folder of the new concept and change its name; then all the stages can be done automatically by our experiment.
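For reference, a row-normalized confusion matrix of this kind can be computed with scikit-learn as in the minimal sketch below; the label arrays are placeholders for the true and predicted concepts of the test images.

import numpy as np
from sklearn.metrics import confusion_matrix

# y_true and y_pred are placeholders for the true and predicted concept
# labels of the test images (e.g., integers 0-19 for the 20 animals).
y_true = np.array([0, 0, 1, 1, 2])
y_pred = np.array([0, 1, 1, 1, 2])

# Row-normalize so that each row shows the fraction of a concept's test
# images assigned to every class (the diagonal holds the per-class recall).
cm = confusion_matrix(y_true, y_pred, normalize="true")
print(cm)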
Clearly, the results of SVM Chi-square are better than the others, as shown in Fig. 5; therefore SVM Chi-square is the better classifier. Running our code provides better results for three specific animals: zebra, horse and starfish. This is probably the result of a more distinctive pattern in these animals. So if we can omit the unimportant parts of our dataset pictures, we will get more accurate results.
CONCLUSION
Our objective in this research was to explore the potential of the bag of features in animal recognition and other concepts within the recognition category. After implementing our experiment we obtained reasonable results which show that BoF is a good choice for finding animals in nature. Also, SVM Chi-square has better accuracy in comparison with NN L2, NN Chi2, SVM linear, SVM LLC and SVM IK. However, most animals resemble their environment, since this camouflage protects them against enemies. In the future, if we omit the background parts we can get better results. Therefore, in future work we want to extract regions for addressing the location of objects and to extract other features as well (color, texture, shape, spatial location, etc.) to get better results.
Fig. 1: Detected SIFT features of Harris-Laplace key points as circles
Fig. 2: Animal recognition using BoF model training stages

Figure 2 illustrates the training model of Bag of SIFT Features in animal pictures, which was implemented in MATLAB 2014. Then we tested the Bag of SIFT Features with the test model which is shown in Fig. 3. All the pictures for both models are generated by our experiment.

Accuracy: For measuring accuracy we used the well-known measures Precision, Recall and Accuracy, which are used in Tousch et al. (2012), Chiang (2013), Fakhari and Moghadam (2013) and Lee et al. (2011). Their formulas are in (1), (2) and (3), and the definitions of tp, tn, fp and fn are as follows.

True positives (tp): The number of items correctly labeled as belonging to this class.

False positives (fp): Items incorrectly labeled as belonging to this class. | 2016-01-09T01:06:55.066Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "8c74d2d32e762e0245d5becd278008b8981fbbcd",
"oa_license": "CCBY",
"oa_url": "https://www.maxwellsci.com/announce/RJASET/10-1266-1272.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c6c716d368c3b377a0582dc0bb858a873c0e8eb5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
46759672 | pes2o/s2orc | v3-fos-license | Analytical Expressions for Numerical Characterization of Semiconductors per Comparison with Luminescence
Luminescence is one of the most important characterisation tools of semiconductor materials and devices. Recently, a very efficient analytical set of equations has been applied to explain optical properties of dilute semiconductor materials, with an emphasis on the evolution of peak luminescence gain with temperature and its relation to sample quality. This paper summarizes important steps of the derivation of these expressions that have not been presented before and delivers a theoretical framework that can be used to apply exactly solvable Hamiltonians for realistic studies of luminescence in various systems.
Introduction
Materials development requires characterization techniques. Among them, photoluminescence, or in more general terms, radiation emission due to different excitation mechanisms [1], is a very powerful tool to study semiconductor materials and map specific characteristics of new devices [2,3] for applications from the THz-Mid Infrared (TERA-MIR) to ultraviolet ranges [4][5][6]. A one-to-one correspondence between measured spectra and fundamental materials properties requires a clear theoretical model, ideally easy to understand and to programme, but at the same time with microscopic information for conclusive interpretation and as free as possible from phenomenological parameters.
A recent theoretical effort led to the development of analytical solutions for the interband polarization, which plays the selfenergy role in the Dyson equation for the Photon Green's functions [7], and which have been applied to study photoluminescence of Coulomb-correlated semiconductor materials. The accuracy of the resulting easily programmable solutions has been demonstrated by consistently explaining the low temperature s-shape of the luminescence peak of dilute semiconductors, such as ternary GaAsBi, InAsN, and quaternary InAs(N,Sb) [7][8][9]. The interplay of homogeneous versus inhomogeneous broadening at low and high temperatures is described, together with the relevance of many body effects, which are in very good agreement with experiments [10][11][12][13]. A similar set of equations was also used to study nonlinearities in GaAs-AlGaAs and GaAsN-AlGaAs superlattices [14][15][16]. The superlattice case is particularly noteworthy for room temperature GHz nonlinear multiplication into the THz range [17,18]. This paper has two objectives: to show hitherto unpublished details of the mathematical steps that lead to the equations used in Refs. [7][8][9][14][15][16], and to draw a bridge between the luminescence and nonlinear absorption calculations in superlattices. It is organized as follows: the main steps involving manipulations of hypergeometric functions that characterize the solution of the Hulthén potential problem are delivered. Next, a direct connection between luminescence and absorption equations is given together with a connection with generalized semiconductor Bloch equations, with potential for the study of polaritons in superlattices within a dielectric approach such as that used to investigate valence band THz polaritons and antipolaritons [6]. A brief conclusion follows.
Mathematical Model
In order to make this paper self-contained, some of the steps shown in Ref. [7] are followed to guide the presentation towards the more complete derivation presented here.
Integro-Differential Equation for the Power Spectrum
Luminescence, or equivalently the optical power density spectrum I(ω), is described quantum mechanically by the Poynting vector, which can be directly related to the transverse polarization function P, which is the selfenergy in the Dyson equations for the transverse photon Green's function components in the Keldysh formalism [7,19,20]. All the quantities presented in this paper are considered in frequency space, i.e., evaluated at steady state.
The free photon Green's function represents the photons propagating without any interaction with the medium. When carriers are injected the transverse polarization function P, which is the selfenergy in the photon Green's function Dyson equation, determines how the excited medium modifies the photon propagation. The lesser Keldysh component P < is proportional to the carriers' recombination rate and yields the number of emitted photons per unit area. It thus governs the power emission spectrum, as seen in Equation (1). The imaginary and real parts of P r are, respectively, proportional to absorption and gain and refractive index changes, since the dielectric function of the medium reads ε(ω) = 1 − (c^2/ω^2) P^r(ω), as shown in Ref. [19]. The starting points for the results derived here are the equations in Refs. [7,19,20].
Here, e, c, Ω, and Π denote, respectively, the electron charge, the speed of light, the sample volume and the velocity matrix element, which is the expectation value of the velocity operator, i.e., the momentum operator divided by the electron mass. It stems directly from the fact that current is charge times velocity. The formal definition of the transverse polarization function selfenergy in terms of functional derivatives is P = − 4π c δJ δA , where J and A are, respectively, expectation values of the induced current and vector potential operators. The full expression involves labels along the Keldysh contour and is tensorial. A complete discussion is beyond the scope of this paper. For details see Refs. [19,20].
The crystal momentum → k is a consequence of Fourier transforming from real space. Likewise, ω and thus the photon energy ω stem from a corresponding Fourier transformation from time to the frequency domain. The matrix element satisfies the integro-differential equation [20], where W is the screened Hulthén potential [21][22][23][24]. Furthermore, Electrons or holes are labelled, respectively, by λ = {e, h}, the renormalized energies e λ ( → k ), and dephasing Γ λ are calculated from the real and imaginary parts of the selfenergy in the Dyson equation for the retarded carriers Green's functions. This paper focuses on quasi-equilibrium luminescence and on three dimensional (bulk semiconductors) with one conduction and one valence band.
Under these conditions, f λ denotes a Fermi function characterized by a chemical potential µ λ and the spectral function in Equation (4) for each particle, derived from components of the carriers' Green's function in the Keldysh formalism, readŝ The next step is to re-write the last term in Equation (4) by means of the identity (6) and to approximate this factor by where µ = µ e + µ h is the total chemical potential, where T is the temperature in Kelvins and K B is the Boltzmann constant. Different versions of this approximation has been used before in phenomenological approaches for absorption Refs. [21][22][23] and delivered good agreement with experiments (see details and further references in Ref. [23]). Within the Keldysh Green's functions, context, a detailed derivation of its application is given in Ref. [20]. The fully numerical solutions of the equations that use this version of the approximation have given very good agreement with both single beam and pump-probe luminescence [20,25]. Its usefulness has been further confirmed recently by the good agreement between the analytical solutions shown here and the experimental luminescence of dilute semiconductors [7][8][9]14].
is an excellent approximation. Note that this theory is applied for photon energies around the semiconductor bandgap and thus in Figures 1 and 2, Under these conditions, denotes a Fermi function characterized by a chemical potential and the spectral function in Equation (4) for each particle, derived from components of the carriers' Green's function in the Keldysh formalism, reads The next step is to re-write the last term in Equation (4) by means of the identity and to approximate this factor by 1 − ( ) − ( − ) ≈ tanh (ℏ ) , where = + is the total chemical potential, where T is the temperature in Kelvins and is the Boltzmann constant. Different versions of this approximation has been used before in phenomenological approaches for absorption Refs. [21][22][23] and delivered good agreement with experiments (see details and further references in Ref. [23]). Within the Keldysh Green's functions, context, a detailed derivation of its application is given in Ref. [20]. The fully numerical solutions of the equations that use this version of the approximation have given very good agreement with both single beam and pump-probe luminescence [20,25]. Its usefulness has been further confirmed recently by the good agreement between the analytical solutions shown here and the experimental luminescence of dilute semiconductors [7][8][9]14]. ( , ), the term in curly braces in Equation (6) which has been approximated by ( , ) ≈ 1, evaluated at ℏ = for bulk GaAs at low temperatures, where the dephasing is typically small Γ . In this case, only a small range of detunings − ′ contribute to the integral in Equation (4). Thus the range chosen in the x-axis, from zero to approximately twice the exciton binding energy (2 ), is even larger than necessary. (a) T = 10 K; (b) T = 20 K.
Low temperature luminescence is typically performed with a small density of injected carriers. Very good agreement of this theory with results from different experimental teams for a variety of materials has been obtained with carrier densities around 10 15 carriers/cm 3 [7][8][9], further justifying the range of densities in the y-axis. The theory has also been used for high temperatures and high densities to investigate optical nonlinearites [14][15][16], and this range is illustrated in Figure 2. (6) which has been approximated by FF(ω, ω ) ≈ 1, evaluated at ω = E g for bulk GaAs at low temperatures, where the dephasing is typically small Γ λ . In this case, only a small range of detunings ω − ω contribute to the integral in Equation (4). Thus the range chosen in the x-axis, from zero to approximately twice the exciton binding energy (2e 0 ), is even larger than necessary. (a) T = 10 K; (b) T = 20 K.
Low temperature luminescence is typically performed with a small density of injected carriers. Very good agreement of this theory with results from different experimental teams for a variety of materials has been obtained with carrier densities around 10 15 carriers/cm 3 [7][8][9], further justifying the range of densities in the y-axis. The theory has also been used for high temperatures and high densities to investigate optical nonlinearites [14][15][16], and this range is illustrated in Figure 2. ( , ), the term in curly braces in Equation (6) which has been approximated by ( , ) ≈ 1, evaluated at ℏ = for bulk GaAs at higher temperatures and high densities, where the dephasing Γ is larger than in the low temperature case. In this case a wider range of detunings − ′ contribute to the integral in Equation (4). Thus, the range chosen in the x-axis is even larger Relevant dephasing mechanisms such as electron-electron, electron-phonon and electronimpurity scattering can be added to the selfenergy [17,18], and the resulting Γ is frequency and momentum dependent. However, in what follows, it is replaced by averaged values, leading to a simple approximation for { ( , } consistent with the Ansatz solution, where Γ = Γ + Γ and ≡ tanh (ℏ − )/2 . In 3D, the material resonance energy is: Δ = ℏ + , where 1 * = 1 + 1 . The bandgap is given by the sum of the fundamental band gap , and a many body renormalisation term Δ where , denote, respectively the electron and hole effective masses. The equation for , simplifies to: The total dephasing Γ will determine the luminescence linewidth. Thus, it can be treated as a phenomenological parameter used to interpret data, and at the same time estimate the strength of the scattering and dephasing processes [7][8][9] by comparison of adjusted data with microscopic calculations derived from the relevant selfenergies [17,18]. At this point, the Kubo-Martin-Schwinger (KMS) relation under the form derived in Ref. [20] can be applied to Equation (8), together with the auxiliary variable: Λ , = , (ℏ )/( ) , leading to the relation: Expressing , from Equation (8) in terms of Λ , the corresponding integro-differential equation becomes (6) which has been approximated by FF(ω, ω ) ≈ 1, evaluated at ω = E g for bulk GaAs at higher temperatures and high densities, where the dephasing Γ λ is larger than in the low temperature case. In this case a wider range of detunings ω − ω contribute to the integral in Equation (4). Thus, the range chosen in the x-axis is even larger than necessary. (a) T = 150 K, (b) T = 300 K.
Relevant dephasing mechanisms such as electron-electron, electron-phonon and electron-impurity scattering can be added to the selfenergy [17,18], and the resulting Γ λ is frequency and momentum dependent. However, in what follows, it is replaced by averaged values, leading to a simple approximation for Im{P 0 r (k, ω)} consistent with the Ansatz solution, where Γ = Γ e + Γ h and ϑ ≡ tanh[β( ω − µ)/2]. In 3D, the material resonance energy is: The bandgap E g is given by the sum of the fundamental band gap E 0 g , and a many body renormalisation term ∆E g where m e , m h denote, respectively the electron and hole effective masses. The equation for P r ( → k , ω) simplifies to: The total dephasing Γ will determine the luminescence linewidth. Thus, it can be treated as a phenomenological parameter used to interpret data, and at the same time estimate the strength of the scattering and dephasing processes [7][8][9] by comparison of adjusted data with microscopic calculations derived from the relevant selfenergies [17,18]. At this point, the Kubo-Martin-Schwinger (KMS) relation under the form derived in Ref. [20] can be applied to Equation (8), together with the auxiliary variable: Λ( , leading to the relation: . Before proceeding, the Hulthén potential [21][22][23][24] should be revised. The usual approximation for a static 3D screened potential is the Yukawa potential, W Y ( , has known analytical solutions that have proven to be very useful for the description of bulk absorption [22]. Recent applications have confirmed its relevance to explain experimental luminescence studies [7][8][9]. Figure 3 shows that, in the range of carrier densities and temperatures of interest, the Yukawa potential can be replaced by the Hulthén potential with negligible differences in numerical values. ) . Before proceeding, the Hulthén potential [21][22][23][24] should be revised. The usual approximation for a static 3D screened potential is the Yukawa potential, However, the corresponding Schrödinger equation does not have known analytical solutions. In contrast, the Hulthén potential: (| |) = −2 /((exp (2 | |) − 1)), has known analytical solutions that have proven to be very useful for the description of bulk absorption [22]. Recent applications have confirmed its relevance to explain experimental luminescence studies [7][8][9]. Figure 3 shows that, in the range of carrier densities and temperatures of interest, the Yukawa potential can be replaced by the Hulthén potential with negligible differences in numerical values. Both cases depend on the temperature T and carrier density N through the inverse screening length and the inset explains the results, because increases with increasing carrier density and with decreasing temperature. The dot-dashed (cyan) curve is for T = 10 K, while the double-dot-dashed (orange) curve is for T = 300 K.
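To illustrate the comparison behind Figure 3, the short Python sketch below evaluates the two potentials for a given screening length. The explicit forms W_Y(r) = -(e^2/epsilon_0 r) exp(-kappa r) and W_H(r) = -(e^2/epsilon_0) 2 kappa/(exp(2 kappa r) - 1) are the standard textbook expressions assumed here for the illustration, and the chosen kappa value is arbitrary.

import numpy as np

# Dimensionless units: lengths in exciton Bohr radii a0, energies in units of
# e^2/(epsilon_0*a0). Assumed standard forms: Yukawa W_Y(r) = -exp(-kappa*r)/r
# and Hulthen W_H(r) = -2*kappa/(exp(2*kappa*r) - 1); both reduce to the bare
# Coulomb potential -1/r as kappa -> 0 or r -> 0.
def yukawa(r, kappa):
    return -np.exp(-kappa * r) / r

def hulthen(r, kappa):
    return -2.0 * kappa / np.expm1(2.0 * kappa * r)

kappa = 0.3                      # illustrative inverse screening length (1/a0)
r = np.linspace(0.05, 10.0, 200)
max_diff = np.max(np.abs(yukawa(r, kappa) - hulthen(r, kappa)))
print(max_diff)                  # small for weak screening, as in Figure 3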
The Fourier transform of the Hulthén potential has an analytical expression, where Ω is the sample volume, is the Trigamma function [26], is the screening wavenumber and by including in at Equation (11), = / . Analytical approximations for and are given in Ref. [7]. Note that the bandgap renormalization including Coulomb hole and screened exchange corrections reads Both cases depend on the temperature T and carrier density N through the inverse screening length κ and the inset explains the results, because κ increases with increasing carrier density and with decreasing temperature. The dot-dashed (cyan) curve is for T = 10 K, while the double-dot-dashed (orange) curve is for T = 300 K.
The Fourier transform of the Hulthén potential has an analytical expression, where Ω is the sample volume, ψ is the Trigamma function [26], κ is the screening wavenumber and by including ϑ in W at Equation (11), 0 = 0 /ϑ. Analytical approximations for µ and κ are given in Ref. [7]. Note that the bandgap renormalization including Coulomb hole and screened exchange corrections reads The Fermi functions f e , f h are evaluated at the peak of the spectral function for each particle, i.e., in Equation (5), ω = e λ ( → k ). More details are given in Ref. [7]. Equation (13) goes beyond phenomenological term for the bandgap shift [21][22][23], and also, in contrast to those, here we can in principle take into account a reduction in the Coulomb interaction due to phase space filling through the factor ϑ. Note however that in the range of carrier densities and temperatures of interest ϑ ≈ 1 i.e., 0 ≈ 0 , as shown in Figure 4. At quasi-equilibrium, used in Refs. [7][8][9][14][15][16]22], the total chemical potential is calculated self-consistently with the many body renormalization of the bandgap and can be written exactly as = + , where is the total free carrier chemical potential calculated from the bottom of each band. In other words, the inversion factor can be equivalently written as ≡ tanh (ℏ − − )/2 , and it is now clear why Figure 4 has the detuning ℏ − in the x-axis. Furthermore, the 3D exciton binding energy for GaAs is 4.2 meV and there is no luminescence of absorption for a detuning below 4.2 meV, unless there are deep levels due to impurities and defects, which are not considered here. Thus the approximation, ≈ 1 i.e., ≈ for the dielectric constant used in the Hulthén potential is clearly excellent in the low power luminescence case. Nonlinear absorption studies are only meaningful away from population inversion leading to optical gain, i.e., the studies are in the range ≥ 1. Thus a decreasing occupation reflects phase space feeling and even for ≠ the approach is valid. In order to study the gain regime, the approximation used in the literature is to make at the Hulthén potential = and consider the inversion factor only on the right hand side of Equation (8). In other words, in the traditional "plasma theories" for bulk semiconductor absorption and gain, phase space filling ( ≠ 1) is not taken into account. Since in the high density case where gain develops, the Hulthén potential decreases due to screening, which described by large in Equation (12), there is still good agreement with experiments. See e.g., Refs. [22,23]. Equation (11) can now be Fourier-transformed Here Ω is the sample volume, ( ) denotes the Dirac delta function. Expanding Λ( , ) in the basis of eigenstates of the Hamiltonian: ℋ = − ℏ ∇ − ( ), At quasi-equilibrium, used in Refs. [7][8][9][14][15][16]22], the total chemical potential µ is calculated self-consistently with the many body renormalization of the bandgap E g and can be written exactly as where µ is the total free carrier chemical potential calculated from the bottom of each band. In other words, the inversion factor can be equivalently written as ϑ ≡ tanh β( ω − E g − µ)/2 , and it is now clear why Figure 4 has the detuning ω − E g in the x-axis. Furthermore, the 3D exciton binding energy for GaAs is 4.2 meV and there is no luminescence of absorption for a detuning below 4.2 meV, unless there are deep levels due to impurities and defects, which are not considered here. 
Thus the approximation, ϑ ≈ 1 i.e., 0 ≈ 0 for the dielectric constant used in the Hulthén potential is clearly excellent in the low power luminescence case. Nonlinear absorption studies are only meaningful away from population inversion leading to optical gain, i.e., the studies are in the range ϑ ≥ 1. Thus a decreasing occupation reflects phase space feeling and even for 0 = 0 the approach is valid. In order to study the gain regime, the approximation used in the literature is to make at the Hulthén potential 0 = 0 and consider the inversion factor only on the right hand side of Equation (8). In other words, in the traditional "plasma theories" for bulk semiconductor absorption and gain, phase space filling (ϑ = 1) is not taken into account. Since in the high density case where gain develops, the Hulthén potential decreases due to screening, which described by large κ in Equation (12), there is still good agreement with experiments. See e.g., Refs. [22,23].
Equation (11) can now be Fourier-transformed Here Ω is the sample volume, δ( → r ) denotes the Dirac delta function. Expanding Λ( → r , ω) in the basis of eigenstates of the Hamiltonian: Thus, Equation (15) can be rewritten as Projection onto state ν yields Substitution into Equation (16) Λ Fourier-transforming back to k-space From , a closed expression can be obtained.
where a factor 2 for spin has been explicitly written out of the summation over all quantum numbers. Introducing and combining Equations (1), (2), (10) and (21) leads to where δ Γ = 1 π Γ ( ω−E g −E ν ) 2 +Γ 2 reduces to a Dirac delta function for Γ → 0. The velocity matrix element is expressed in terms of the dipole moment matrix element and the fundamental bandgap as |Π| = (E 0 g / )|S|x|X|. Next, the Schrödinger Equation for the Hulthén potential must be solved, so that ψ ν (0) can be inserted in Equation (23). The first step is to separate the wavefunction in radial and angular parts. The label ν thus spans the set {n, l, m}, The corresponding Schrödinger Equation, which is a generalized Wannier equation [23] can be cast in the form: The energy eigenvalues depend only on the {n, l} quantum numbers, and thus we can replace E n by E nl . Introducing the 3D Rydberg e 0 and Bohr Radius a 0 , as well λ = 2κ, g = 1/(κa 0 ) and Note that the angular momentum operator has been applied to the wavefunction directly from Equation (25) to Equation (26), i.e., L 2 ψ ν ( → r ) = l(l + 1) 2 ψ ν ( → r ). Only solutions that do not vanish at r = 0 contribute to the emitted power, so l = 0 is selected. The labels "nl" will be dropped at the moment to simplify the notation, Introducing u = r f , and β = − λ 2 , Equation (27) is transformed into The auxiliary variables, z = 1 − e −λr and w = u z(1−z) β , lead to the equation which reduces to the Hypergeometric Equation [26,27], The generalized Wannier Equation, Equation (25), has two types of solutions: bound states for ν < 0 and unbound solutions for ν > 0. The wavefunctions and eigenvalues are thus different and it makes sense to study each case separately and then add all contributions when a sum over all possible ν as required from Equation (23).
Continuum States
The unbound solutions that make a continuum have positive eigenvalues, ν > 0, and thus imaginary β ν = i √ ν /λ. Dropping labels to simplify the development in the next equations yields which can be written for simplicity as Note that the transformation z = 1 − e −λr is being used. The solution that will be later inserted in Equation (23), will be normalized in a sphere of radius r = R and asymptotic solutions, obtained a large radius R will be investigated. Next, Equation (15.3.6) from Ref. [26] is used, i.e., F(a, b, c; z) = . Furthermore, note that lim R→∞ z = 1 and for all values of ξ, λ, δ, F(ξ, λ, δ; 0) = 1. Thus, [26]), gives in the asymptotic limit Leading to asymptotic forms of u, |u| 2 The normalization constant is thus given by (52) However, |Γ(−2i|β|| 2 = π 2|β|sinh(2π|β|) , see e.g., Equations (6.1.29) and (6.1.31) of Ref. [26], plus a little algebra deliver the continuum normalization constant The required value of the wave function at the origin can thus be expressed as Next, note that lim r→0 z = 0, but lim r→0 z r = λ, leading to The sum of continuum states becomes an integral, ∑ ν . . . = Rλ π ∞ 0 . . . Introducing 2 for spin and changing variables, the continuum contribution becomes which combines with the bound states to deliver the power spectrum Here, ζ = ω−E g e 0 , e n = − 1 n 2 (1 − n 2 g ) 2 , I 0 = e 2 ω 2 |Π| 2 πe 0 c 3 a 3 0 and the square of the velocity matrix element where the spin orbit shift, the free-carrier bandgap, and renormalized bandgap are given by ∆, E 0 g , and E g . Note that this approach does not include cavity effects, which can be introduced in the Photon Green's functions solution following Ref. [19]. Quasi-periodic structures can also be addressed by a Green's functions formalism as shown in Ref. [28].
Numerical Application
The goal of this section is to illustrate the approach and the many quantities and parameters used making reference to published material, where the equations are used delivering very good agreement with experimental data. Photoluminescence is a very powerful tool to characterize semiconductor materials and map specific characteristics of new devices. Equation (58) is the reference, since it delivers the emission spectrum, which can be directly compared with experimental data. The carriers generated by the photo excitation process modify the spectrum and these modifications are described in Equation (58) approximately by the corrections induced by the (screened) potential, bandgap renormalization, and changes in linewidth governed by the dephasing or scattering Γ. The temperature T can be measured and used as an input parameter. The carrier density N can be estimated by measuring the input power and its spot size when focused on the sample, but in our recent investigations, where this theory has been very successfully compared with experiments [7][8][9], it has been treated as a free parameter, which has been globally adjusted. The other parameters that characterize the material, i.e., the fundamental band gap E 0 g , the electron and hole effective masses m e , m h , the static dielectric constant 0 , and the spin-orbit shift ∆, can either be found in the literature or robust numerical methods such as simulated annealing can be used to determine these parameters by direct comparison with experiments.
As a reference for the material parameters recently used for dilute nitrides and bismides and the corresponding bandstructure calculations that lead to the material parameters, see: Ref. [7] for GaAs 1−x Bi x ; Ref. [9] for InAs 1−x N x and Ref. [8] for more complex quaternary materials, such as InAs 1−x−y N x Sb y . The "s-shape" in the luminescence profiles as a function of temperature for these materials have been well explained in Refs. [7][8][9]. However, in the case of completely new materials, expecting to have general characteristics as the ternary or quaternary above, the parameters leading to the bandstructure may be unknown. This theory can be used as a numerical characterization tool as follows.
The corresponding bandstructure can depend on a number of unknown parameters for new compounds, but the approach used in Refs. [7][8][9] can be extended in the following way to extract these parameters by a systematic comparison between theory and experiments. For fixed excitation power, the luminescence can be measured for a number of different temperature points. The dephasing corresponding to different excitation processes can be calculated or taken also as a parameters. Thus, at each Temperature T, there is an ensemble of parameters, such as Experiments provide a series of data points measured at T = (T 1 , . . . , T N ). The calculated luminescence spectrum will be a function of T and will depend on the ensemble of parameters, denoted u C (T i ). The least squares method leads to estimates of the parameter ensemble E by minimizing the residual between the theoretical function and the experiments. Therefore, the problem becomes Trust Region-Reflective (TRR) methods deliver an efficient solution for this numerical problem and Ref. [8] gives further details of their application. Figure 5 depicts a numerical example to further illustrate choices for the main input parameters. Short period superlattices with strong delocalization of the electron and hole wavefunctions can be described in many cases by anisotropic 3D media, characterized by in-plane and transverse (along the growth direction) effective masses and dielectric constants. The anisotropy parameter γ is given by the ratio between the in-plane µ and perpendicular µ ⊥ reduced effective masses, γ = µ /µ ⊥ , with These can be calculated from the corresponding free carrier Hamiltonian, and full details of the method, which has led to good agreement with experimental data, can be found in Refs. [15,16,21,29]. Figure 5 shows calculated luminescence using the modified parameters determined by anisotropic medium theory for a short period GaAs-Al 0.3 Ga 0.7 As superlattice with repeated barrier and well widths equal to 2 nm. The resulting effective masses are m e ≈ 0.08; m h ≈ 0.12, m e⊥ ≈ 0.08, m h⊥ ≈ 0.53. These lead the anisotropy parameter γ = 0.67. The resulting exciton binding energy and Bohr radius are given respectively by e 0 = 5.37 meV and a 0 = 11.04 nm. Except of course for the actual value of the bandgap, which is larger due to quantum confinement, the strong delocalization of electrons and holes in this short period superlattice make the evolution of the luminescence with temperature look qualitatively similar to a three dimensional (bulk) semiconductor, notably the evolution of the line-shape. This is quite similar to the calculations presented in Ref. [9], which are in very good agreement with the experiments discussed in Ref. [13].
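As a sketch of the fitting procedure just described, the Trust Region-Reflective solver available in SciPy could be used as follows; model_luminescence is a placeholder standing in for Equation (58), and the parameter names, initial guesses and bounds are illustrative only.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, temperatures, energies, measured_spectra):
    # Stack the difference between measured and modeled spectra over all
    # temperature points; model_luminescence is a placeholder for Eq. (58).
    res = []
    for T, spectrum in zip(temperatures, measured_spectra):
        res.append(spectrum - model_luminescence(energies, T, params))
    return np.concatenate(res)

# Initial guess and bounds for the parameter ensemble (illustrative values):
# [bandgap E0g (eV), carrier density N (cm^-3), dephasing Gamma (meV)]
p0 = np.array([1.40, 1e15, 5.0])
lower = [1.30, 1e14, 0.5]
upper = [1.50, 1e17, 50.0]

# fit = least_squares(residuals, p0, bounds=(lower, upper), method="trf",
#                     args=(temperatures, energies, measured_spectra))
# fit.x then holds the estimated parameter ensemble.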
Conclusions
Photon Green's function techniques have been used to study different types of luminescence over the years, but the solutions are typically numerically intensive and not accessible for experimentalists or non-specialists. To bridge this gap, a simple analytical solution of the relevant Green's functions was necessary and the approach described here meets those needs. Notably, the evolution of luminescence with temperature has been successfully compared for GaAsBi [7], InAs(N,Sb) [8] and InAsN [9] dilute semiconductors. Furthermore, the approach has been used to predict nonlinearities in short period superlattices treated as anisotropic three-dimensional media [14][15][16]. However, details of the mathematical steps needed to achieve the final formulas used in these publications have not been previously presented and they are given in this review, complemented by numerical results demonstrating the range of validity of the main approximations used in the development of the approach. To complete the picture, an application section shows how to use the method as a numerical characterization machine, and the main parameters needed in typical simulations are illustrated with results for luminescence of short period superlattices. Screening of the Coulomb interaction between electrons and holes is discussed by means of the Hulthén potential and the steps provided are suitable as a guideline to the study of other interacting potentials of interest. They can be followed for further development of a suite of algorithms for efficient and easily programmable numerical characterization tools for a host of new bulk materials or superlattices that can be described as effective 3D media using anisotropic medium approximations.
| 2018-04-03T00:50:29.967Z | 2017-12-21T00:00:00.000 | {
"year": 2017,
"sha1": "b886bea4074edb3e83865c27258535f6eb591ad2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/11/1/2/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b886bea4074edb3e83865c27258535f6eb591ad2",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
143813880 | pes2o/s2orc | v3-fos-license | Toward a Taxonomy of Harm
When we organize knowledge we act. The wholesomeness of our actions can be measured in the proportion of good or harm they do. How then do we identify and define potential harm in knowledge organization systems? A starting point for contributing to the greater good is to examine and interrogate existing knowledge organization practices that do harm, whether that harm is intentional or accidental, or an inherent and unavoidable evil. As part of the transition movement, the authors propose that we take inventory of the manifestations and implications of the production of suffering by knowledge organization systems through constructing a taxonomy of harm. The aim of our work is (1) to heighten awareness of the violence that classifications and naming practices carry, (2) to unearth some of the social conditions and motivations that contribute to and are reinforced by knowledge organization systems, and (3) to advocate for intentional and ethical knowledge organization practices to achieve a minimal level of harm.
Toward a Taxonomy of Harm
When we organize knowledge we act.The wholesomeness of our actions can be measured in the proportion of good or harm they do.How then do we identify and define potential harm in knowledge organization systems?A starting point for contributing to the greater good is to examine and interrogate existing knowledge organization practices that do harm, whether that harm is intentional or accidental, or an inherent and unavoidable evil.As part of the transition movement, the authors propose that we take inventory of the manifestations and implications of the production of suffering by knowledge organization systems through constructing a taxonomy of harm.The aim of our work is (1) to heighten awareness of the violence that classifications and naming practices carry, (2) to unearth some of the social conditions and motivations that contribute to and are reinforced by knowledge organization systems, and (3) to advocate for intentional and ethical knowledge organization practices to achieve a minimal level of harm.We do not aim to be prescriptive, but rather, we will describe many of the consequences of present knowledge organization systems, with the hope that it will stimulate and support corrective efforts.
Theoretical Underpinnings
The theoretical underpinnings of the taxonomy of harm derive from Žižek, Foucault, Haraway, and Arendt, who explore the semantic violence imposed by language and categories, as well as Buddhist teachings on harm and suffering. Drawing from Donna Haraway and Buddhist tenets on co-origination and mutually reinforcing ontology, we find wholesomeness, or interconnectedness, to be a central component in theorizing our taxonomy of harm. We recognize and commit to the ontological position that developing, maintaining, and using knowledge organization systems are acts in constant motion which stand in relation to others. Such systems, as tools we create, are always becoming together in mutually defining and reinforcing relationships. Classifications, those who classify, and those being classified are co-constitutive. At the same time, the use of language can often be a violent act and classifications always have the potential to inflict some degree of damage. Given this seemingly inescapable truth we ask, following Haraway (2007), "What might a responsible 'sharing of suffering' look like in classification and naming practices?" As knowledge workers, we have a responsibility to do the least harm possible.
Hannah Arendt believed that, in order to understand how violence works, we must be careful not to conflate violence with the concepts of power and authority. "Indeed one of the most obvious distinctions between power and violence is that power always stands in need of numbers, whereas violence up to a point can manage without them because it relies on implements. A legally unrestricted majority rule, that is, a democracy without a constitution, can be very formidable in the suppression of the rights of minorities and very effective in the suffocation of dissent without any use of violence" (41-42).
For Arendt, violence is distinguished from power by its instrumental character, with tools designed and used to increase strength. Power derives from a group of people acting in concert; a majority rule can suppress rights of minorities without tools, she argues (44-45). Authority can be vested in persons or in offices. "Its hallmark is unquestioning recognition by those who are asked to obey; neither coercion nor persuasion is needed" (45). What we hope to do here is show that bibliographic tools, particularly language and classifications, can be used as instruments of violence. "Violence, being instrumental by nature, is rational to the extent that it is effective in reaching the end that must justify it. And since when we act we never know with any certainty the eventual consequences of what we are doing, violence can remain rational only if it pursues short-term goals" (Arendt, 79). These are never neatly compartmentalized, so institutionalized power often appears in the guise of authority. Haraway observes that pain is often caused by an instrumental apparatus and is not borne symmetrically. Rather, those in positions to wield the apparatus have more control over actions and their effects. For our purposes, we view classification systems to be instrumental apparatuses capable of systemic and symbolic violence. Žižek (2008) outlines three kinds of violence: subjective, objective, and symbolic. Here we are interested in ways that language produces violence, which is primarily a symbolic form of violence. Žižek identifies a "direct link between the ontological violence [creating things in the world] and the texture of social violence (of sustaining relations of enforced domination) that pertains to language" (71). He suggests that the violence in the human ability to speak resides in its function of "othering" people, including our closest neighbors, which inherently leads to oversimplification and division. As Tennis (2013) has pointed out, objective violence can surface in our work, because our work is rooted in what Žižek calls symbols and systems. First, we use the symbolic systems of language and its more refined subset of indexing languages, often controlled indexing languages. And we operate within systems, as defined by Žižek, that are part of the socio-political system, legitimated as components to help the (capitalist) democratic citizen (45).
Manifestations of objective violence can take on multiple forms, with myriad consequences.The present project is a move toward identifying symbolic and systemic violence in KO.
We invoke Ron Day's critical research in information studies to illustrate ways that violence can materialize in information work. Day (2011) calls for a critical evaluation of our present networked information society, which has produced an increased need for "the transmission and inscription of 'clear' statements and the establishment of common classification structures, cataloging terms, and technical linking protocols" (25-26). According to Day, flattened hierarchies have brought more freedom for knowledge workers in the workplace, at the cost of restricting the worker's freedom of expression. We take this to be an example of symbolic violence.
Day's account of the production of needs by information systems serves as an illustration of systemic violence. He has concluded that the core traditions of information science are defined by the psychology of need, which is "based on a normative psychology of cultural forms and social situations, constructed by analyzing language vocabulary and other semantic markers and social associations" (Day 29). Information systems produce users and needs, taking advantage of and shaping social dynamics through algorithmic functions.
Much of Foucault's work interrogates the normalizing effects of disciplinary systems, which serve to correct deviant behavior by coercing citizens to live according to society's standards or norms.Discipline and Punish reveals how techniques and institutions have converged to create the modern system of disciplinary power, which situates individuals in a field of documentation, as results of exams are recorded in documents that provide detailed information about the individuals examined and allow power systems to control them.On the basis of these records, those in control can formulate categories, averages, and norms that are in turn a basis for knowledge.Viewed in this light a knowledge organization system is an instrument of documentation that carries disciplinary power.At the same time it provides evidence of the position from which people and institutions classified others or have become categories.
We are also speaking directly to Feinberg's (2007, 2011) research on classifications as situated knowledges, authority and voice, and morality by reflecting on the positionality from which people classify and the moral obligations we have in subject creation. We are also building upon Olson and Schlegl's (2001) meta-analysis of subject access, in which they delineate treatments of topics as exceptions to a norm. Bowker and Star's (2000) research unmasks classifications as hidden infrastructures that carry meaningful consequences in the lives of those who are classified and who fall outside of social norms.
These theoretical underpinnings inform our work; they ground our ontological commitment, our recognition of the problem of harm in knowledge organization systems, and our decisions about how to organize the taxonomy of harm.
Harm is apparent to us when we deviate from an agreed-upon set of precepts that dictate what is ethical. If we agree that there are particular precepts in the field of knowledge organization, we can then decide as a community what is ethical and what can be interpreted as causing harm. Elsewhere we have proposed some precepts which may be useful in this discussion (Tennis, 2013). These precepts can be interpreted as being prescriptive to a point, but in an effort to align our theoretical position with Buddhist ethics, we also assume a non-dualistic position that prescribes, but in a particularly impermanent and contextually sensitive manner.
Organizing principles
The most appropriate structure for a taxonomy of harm is open to discussion.Furthermore, the authors recognize their positions of privilege and the risks that naming conditions and concepts carry.To name is to wield some degree of power, and to organize any part of the universe is, to a lesser or greater degree, a coercive act.With that in mind, we believe this project is imperative.What we are naming and organizing are acts, actors, and effects of harm.To call these acts out and name them is to bear witness to suffering, to hold organizers of information accountable and reveal ways in which we are complicit or willing participants in reproducing harm, and to begin to take inventory of the weightiness of classification and categorization.We also acknowledge the limitations of language to describe suffering; our taxonomy here will be constrained by language and categories, just as classifiers of all sorts struggle to fit ideas, affects, and effects into words.Nevertheless, we must try and recognize that this taxonomy is intended to be amended, rearranged, and corrected.We call upon the community of knowledge organizers to reach a sort of consensus on what constitutes harmful acts and what might be done, knowing that debate will always surround many of the concerns we raise here.The classified and the classifier are mutually constitutive; beings are always becoming together in relationships.
The act of calling something into being by name is done as a witness who stands in a particular position. There are at least three levels on which classifiers bear responsibility: A) to name those conditions that remain unsaid or unnamed, particularly with regard to suffering; B) to recognize their positionality with respect to that being named; and C) to classify with intentionality toward justice and doing the least harm. By naming phenomena, events, or groups of people we are providing evidence of witnessing. The taxonomy of harm will be organized around three main questions, each of which has intersecting concerns, as described below. We ask A) What happens? B) Who participates? and C) Who is affected and how?
What happens?
In order to examine what happens when we classify, we operationalize tenets of Buddhism to apply them in the everyday practice of knowledge workers. We must consider (1) actions, (2) the wholesomeness of these actions, (3) the intentionality with which the actions are carried out, and (4) the implications of those actions. It is important to acknowledge that harm is installed. All knowledge organization systems are potentially harmful, and the consequences might vary greatly depending on perspectives (Tennis, 2012).
Actions
Following Olson and Schlegl's (2001) analysis of literature on bibliographic subject standards, we are locating harmful actions by looking for cases of exceptionalism, ghettoization, omission, inappropriate structure of the standard, biased terminology, erasure, and pathologization. Each of these can be understood as problems of normalization or disciplining.
And what classifications do, particularly for groups of people, but also across the disciplines and on a range of topics, is reproduce and reify norms.
Treatment of a topic as an exception occurs when something "is represented as being outside of some accepted norm" (Olson and Schlegl 2001, 67). [This seems like an umbrella category for omission/ghettoization/bias, doesn't it?] "Ghettoization is the problem of gathering and then isolating a topic rather than integrating it....indicative of the practice of considering disturbing ideas as other to be set aside, outside of the mainstream" (67, 69). "Omitting a topic is often a problem of the lack of currency of subject access standards, but may also be a problem of underlying assumptions" (Olson and Schlegl, 68). We suggest adding erasure as a harmful action, distinguishing it from omission. Erasure suggests greater purposiveness, the removal or covering up of something that was once there, rather than simply leaving it out. The reparative processes are slightly different, i.e., to counter omission we would write something into the story, as historians have given voice in recent decades to those left out. To overcome erasure requires a restoring or recovering. For example, Google just removed the word bisexual from its block list. It was there until the fall of 2012. This had rendered an entire community invisible because of the far reach of Google. It was present and then erased, and in order to repair the situation someone needs to recover the term. We also add pathologization as a particular form of bias when classifications serve as a sort of diagnosis and reproduce medicalized norms.
All of these categories are connected; for instance, some of the ghettoization may result from the structure of the standard, as illustrated by LCSH. This is a system in which categories are marked and unmarked, and within the unmarked categories are, implicitly, all of the groups that have yet to be named as well as those that do not require a name because they are assumed to be normative. The heading "Women accountants" is a typical case. There is no need for a heading "Male accountants," because maleness is the norm. "Asian American bisexuals" is another kind of case. There are at least two components, "Asian Americans" and "bisexuals," and both of these arose as marked categories. To illustrate the point, we do not find "Asian American heterosexuals" or "Caucasian bisexuals." Such marked categories set up a binary opposition of what something is and what something is not.
Wholesomeness
"living well, flourishing, and being 'polite' (political/ethical/in right relation) means "staying inside shared semiotic materiality, including the suffering inherent in unequal and ontologically multiple instrumental relationships."(Haraway,72) In consideration of wholesomeness we ask how these subjects are constructed in relation to others and to the knowledge workers producing them.Subjects are response-able: "responsibility is a relationship crafted in intra-action through which entities, subjects and objects, come into being" (Haraway 71) According to Buddhist principles, the pair of notions crucial to the study of Right View is that of subject and object.The world is an object of the mind."Subject and object manifest together at the same time and depend on each other" (Hanh,75).Interbeing in everything."How we view the world affects everything within it" (Hanh,76).
Failing or refusing to come face-to-face reduces our ability to recognize the extent of our relations and how our acts affect others and ourselves. We might also think in terms of the Buddhist notion of karma. One does not act in isolation when one produces or applies a system, and the classificationist bears a responsibility to do the least harm. Actions carried out with wisdom, compassion, and awareness of others are beneficial to those who are classified, as well as the classifiers and the world.
Intentionality
"According to the First Noble Truth, we need to call our suffering by its true name.Once we have named what is causing us to suffer, we are more able to look deeply into each suffering in order to find a way to transform it."(Thich Nhat Hanh,Good Citizens,31) Intentionality is a essential component in understanding what happens, as one may intentionally perform an evil act knowing that it is evil and will cause harm, one may produce suffering not knowing that the action is wrong or will cause harm, or one might cause suffering simply by accident.The purposefulness of the action depends to a great extent on intent, and this should have bearing on the meaning of the action.This matters because most acts of knowledge organization are not performed with an intent to harm.In unmasking our role in causing harm when we classify, it is hoped that we will inspire a will to more intentionally do the least harm.Adler, M., & Tennis, J. (2013).Toward a Taxonomy of Harm.NASKO,4(1).Retrieved from http://journals.lib.washington.edu/index.php/nasko/article/view/14641 Tennis has identified five levels of intentionality and two measures of knowledge of acts, which combined, can guide the ethical considerations of actions."Intention for our purposes is: performing an action for a specific purpose.If we want to believe we are doing good work, then we have to believe our intentions are good" (Tennis).
A critical objective of this project is to call out to classifiers and invite them to reflect on their intentions when they perform an organizing act. We will not speculate as to the intentions of producers or users of classifications, but rather, we ask knowledge organizers to consider their own intentions when they act.
Implications
"Far from making us more knowledgeable and careful toward other beings, information can give us a comforting stupidity."(Day 29) Implications include questions of morality, types of effects, and why these consequences matter.Again, Haraway and Buddhist tenets will guide us in observing implications.The question of implications remains open and will continue to reveal themselves.We can offer a starting point for considering some of the implications of this project.
Olson and Schlegl have concluded from their intertextual reading of the subject access literature that "our focus on users, our quest for objectivity, and the standardization we use to achieve these goals may be at least partly responsible for our systemic problems" (Olson and Schlegl, 62). In service to these goals, subject tools have contributed to larger systemic and symbolic conditions. Smith (1999) implicates classification systems as central to imperialist discourses. She writes, "The collective memory of imperialism has been perpetuated through the ways in which knowledge about indigenous peoples was collected, classified and then represented in various ways back to the West, and then, through the eyes of the West, back to those who have been colonized" (1-2). Classifications present ideologies and attitudes, depending upon the lens through which a classifier views the world. In the case of imperialism, various legitimizing discourses play out, including those of salvation, economics, and health.
Of course, there is the central question of access to information. By way of objective, standardized, and "user"-centered categories (which, according to Day, effectively produce users and their needs), our systems and terminologies fundamentally impede access to resources.
Who participates?
If we follow the stance of co-origination, then no one escapes responsibility in the production of knowledge organization systems.Clearly, the people and agencies who create classification systems carry power in relation to those being classified and those using the system.However, if we take it to be true that such systems are always becoming together with those who produce, use, and give meaning to the systems, we must ask about the agency and influence of the classified and the consumers of the systems.Is there a dialogue, resistance, or common ground among the classifiers and the classified?
Participants hold varying degrees of power. Those who create and structure a system or authorize names and categories wield greater power than those who select from existing systems and apply already authorized categories, or those who recycle already produced metadata. At every level, though, there is an opportunity to call one's actions into question, to ask whether the given name is the ethically sound choice. Ethically speaking, the optimal choice may be to reject what is offered, to refuse or elicit change, or even to remain silent.
Who is affected?
Those affected may be individuals, groups, nations, and any configuration of individuals who are served by or are somehow in service to a classification system. We will not be able to examine every instance of harm or every group or individual harmed. The goal is to recognize patterns of harm through representative cases. As an illustration, consider the Library of Congress Subject Heading "Paraphilias." We identify three key actions at work in it: bias, erasure, and pathologization. By drawing from the psychiatric literature, catalogers have implicitly accepted the assumption that certain sexual behaviors and expression are medical concerns. The heading is applied to works in the humanities and social sciences, which generally resist medicalizing discourses. By imposing medicalized language onto works that do not use such terminologies, there is a form of erasure, a refusal to allow the literatures to speak on their own behalf.
Those most directly affected are the people that would consult a catalog to find materials assigned this heading.Those who produce or read texts and reside outside of the psychiatric discipline, in particular humanities and social science scholars and public library patrons, will not only be underserved by the heading, but are also subjected to a pathologizing term.For example, the book description for Part-time Perverts: Sex, Pop Culture, and Kink Management published by Praeger in 2011, reads: An interdisciplinary exploration of sexual perversion in everyday life.Drawing on her own experience, as well as on pop culture and a multidisciplinary mix of theory, the author shifts the discussion of perversion away from the traditional psychological and psychiatric focus and instead explores it through a feminist lens as a social issue that affects everyone.
Despite the clearly stated aim to position alternative sexualities outside the medical establishment and inside an interdisciplinary field of cultural studies, the only subject headings applied to the bibliographic record for this book are "Paraphilias" and "Sex customs."The author has no recourse, other than to petition LC to drop or change the medically derived heading.The act of naming, in this case, ignores the author's stated objective and disciplines the work by situating it in psychiatry.
The implications of the heading are too expansive to detail here.The most direct effect is the limitation on access to information, as an obscure medical term is used to provide subject access for materials in a range of disciplines outside of psychiatry.But what is at stake here is much more than access to information, as this heading ultimately serves to reproduce dominant discourses concerning normal and abnormal sexualities.Inherent in the authorization of this word are histories of power, normativity, and citizenship borne out of state-defined notions of health.
The heading presents an almost paralyzing ethical dilemma. Is it better to have no heading at all that groups "deviant" sexual behaviors together? If we do use a term, what should it be? What are our intentions when we use this word? If it is to provide access, we are failing. It is unlikely that any librarian has set out to reproduce discriminatory or negatively biased assumptions.
The concept and field of eugenics can give us another example of harm. Eugenics is a term that first appears in the Dewey Decimal Classification in 1911. At that time it is considered a biological science. As of the 1950s it is no longer possible for a classifier to place a book primarily on eugenics in the biological sciences. The other options are social sciences, applied sciences, and philosophy and ethics. And while eugenics has a diverse set of related fields, ranging from family planning to anthropometry, we see a different kind of erasure here. This is especially true since eugenics is still used in population genetics work, albeit there is an open debate about what counts as eugenical work and thought (Paul, 1995). Yet even with that debate population genetics is squarely a biological science, so the erasure here seems to be more about avoiding a term that might have negative consequences when in fact it is the term used in the literature.
Along with erasure, another action taken is inappropriate structure. If we relegate eugenics to the applied sciences then we are not situating literatures on this aspect of population genetics alongside other aspects of evolutionary biology specifically or biology generally. Finally, the relationship between old classes and new classes in successive editions of a scheme, used in the same collection, causes another form of inappropriate structure, where materials classed under older and now outdated class numbers occupy a strange position in relation to biological texts. In the case of eugenics we see materials with this subject in the same class as those that have the reproductive parts of plants as their primary topic. The ethical concern here is the harm caused in misrepresentation: severing the cord to the earlier appearance of the concept. Haraway (2007) has stated that to live well means "staying inside shared semiotic materiality, including the suffering inherent in unequal and ontologically multiple instrumental relationships" (72). In working toward a taxonomy of harm we will get inside classification systems and realize and share the effects of knowledge organization systems, with the awareness that we have a responsibility toward the subjects that we organize. By witnessing some of the harmful effects of classifications we can continue to transition toward doing the least harm and the greatest good. | 2018-12-12T11:02:40.737Z | 2013-10-31T00:00:00.000 | {
"year": 2013,
"sha1": "55a11069ea3b5213b412b0b898ff6990984d1315",
"oa_license": "CCBY",
"oa_url": "https://journals.lib.washington.edu/index.php/nasko/article/download/14641/12285",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "55a11069ea3b5213b412b0b898ff6990984d1315",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
119207063 | pes2o/s2orc | v3-fos-license | On stars, galaxies and black holes in massive bigravity
In this paper we study the phenomenology of stars and galaxies in massive bigravity. We give parameter conditions for the existence of viable star solutions when the radius of the star is much smaller than the Compton wavelength of the graviton. If these parameter conditions are not met, we constrain the ratio between the coupling constants of the two metrics, in order to give viable conditions for e.g. neutron stars. For galaxies, we put constraints on both the Compton wavelength of the graviton and the conformal factor and coupling constants of the two metrics. The relationship between black holes and stars, and whether the former can be formed from the latter, is discussed. We argue that the different asymptotic structure of stars and black holes makes it unlikely that black holes form from the gravitational collapse of stars in massive bigravity.
Since the Hassan-Rosen theory contains two metrics, the SSS spacetimes necessarily become more involved. In this paper we are interested in metrics that are asymptotically flat. These spacetimes are asymptotically classified according to the relative strength of the massive and massless spin-2 mode that the theory contains [42], and the conformal relationship between the two metrics at infinity. Concerning the black hole solutions, it was shown in Ref. [43] that if one assumes non-singular solutions, the two metrics must share a common Killing horizon. This means that black hole solutions are highly restricted. For star solutions, one has the option of how to couple matter to the two metrics. In this paper we opt for the commonly chosen approach of coupling only one of the metrics to matter. The theory predicts that including a gravitational source gives rise to a fixed relationship between the asymptotic massive and massless spin-2 modes [28,44]. In this paper we show that this relationship is not the same as that for black holes. This makes it unlikely that the black holes that the theory contains are end-states of the gravitational collapse of matter. A possible cause is that the symmetry between the two metrics that the black holes display through the common Killing horizon, is broken when one couples only one of the metrics to matter.
Spherically symmetric systems in the context of the Hassan-Rosen theory were first studied in Ref. [45], where, in particular, the perturbative solutions to the equations of motion were presented. Ref. [46] performed an extensive numerical study, and gave conditions for the existence of asymptotically flat black hole solutions. These solutions were further studied in depth in Ref. [47]. Star solutions and the so-called Vainshtein mechanism, described further below, were studied in Ref. [44]. This reference is central to the analysis performed in this paper. Solutions for charged black holes were found in Ref. [48], and for rotating black holes in Ref. [49]. Stability properties of the black holes were investigated in Refs. [50][51][52][53][54]. A general review of black holes in massive bigravity can be found in Ref. [55].
The goal of this paper is to investigate which conditions on the parameters of the theory give rise to phenomenologically viable SSS solutions. Allowing for general parameter values by approaching the regime where the Hassan-Rosen theory becomes equivalent to general relativity, we constrain the ratio between the coupling constants of the two metrics. Furthermore, we analyse the relationship between black hole and star solutions, in order to see whether the gravitational collapse of stars can lead to the black hole solutions of massive bigravity. This paper is organized as follows. In Section 2 we introduce the Hassan-Rosen theory and the spacetime configuration under consideration. Section 3 describes the asymptotic solutions, and in Section 4 we state the solution for stars and their phenomenology. In Section 5 we constrain the phenomenology of galaxies. Section 6 discusses the relationship between stars and black holes, and whether black holes in massive bigravity can be considered as end-states of the gravitational collapse of stars. We conclude in Section 7.
Setup
The Lagrangian for the Hassan-Rosen theory is given by where L m is the matter Lagrangian and e n are the elementary symmetric polynomials presented e.g. in Ref. [56]. Varying the Lagrangian yields the equations of motion Here, we have defined 4) and the matrices Y n are given in Ref. [56]. The parameter κ is in principle redundant, since it can be put to unity through a rescaling of f µν and the β n (see e.g. Refs. [56,57]). We will keep it explicit, however, since it makes the limit to general relativity manifest. For the fields g µν and f µν , we use the following spherically symmetric and diagonal ansatz 1 where a prime signifies a derivative with respect to r. This form for g µν and f µν is the most general diagonal form of the metrics after using the possibility of doing a rescaling of the radial coordinate. Notice that f µν can equivalently be written and U (r) be interpreted as the radial coordinate for the f -metric. The energy density and pressure are given by ρ(r) = −T 0 0 and P (r) = T i i /3 (summation over i implied), and they satisfy the following conservation equation: In this paper, we will combine analytic and numerical studies. For the numerical analysis, we follow Ref. [46] and put the equations of motion in the following form: Y ′ = F 2 r, Q, N, Y, U, ρ, P, c, m 2 , β 1 , β 2 , β 3 , κ , U ′ = F 3 r, Q, N, Y, U, ρ, P, c, m 2 , β 1 , β 2 , β 3 , κ , Q ′ = F 4 r, Q, N, Y, U, ρ, P, c, m 2 , β 1 , β 2 , β 3 , κ , P ′ = F 5 r, Q, N, Y, U, ρ, P, c, m 2 , β 1 , β 2 , β 3 , κ , where c is defined below. The function a can be solved for directly once the other fields are given. When ρ = P = 0, i.e. in vacuum, F 1 , F 2 , F 3 become independent of Q. In vacuum, one thus first solves three first order equations for N , Y and U , and then integrate F 4 to get Q. When ρ and P are non-vanishing, the five first order differential equations instead have to be solved simultaneously.
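To illustrate the structure of the numerical scheme just described, the sketch below shows how the first-order system for (N, Y, U, Q, P) could be integrated radially. It is only illustrative: the right-hand-side functions F1-F5 are placeholders (their explicit forms follow from the equations of motion and are given in Ref. [46]), and the state layout, parameter names, and integrator choice are our own assumptions rather than the authors' actual code.

# Illustrative sketch only: radial integration of the first-order system of Section 2.
# The functions F1..F5 are stubs; the explicit right-hand sides of Ref. [46]
# must be inserted before the output is physically meaningful.
import numpy as np
from scipy.integrate import solve_ivp

PARAMS = dict(c=1.0, m2=1.0, beta1=7.0, beta2=-5.0, beta3=4.0, kappa=1.0)

def F(i, r, N, Y, U, Q, rho, P, prm):
    """Placeholder for the right-hand sides F1..F5 (not reproduced here)."""
    raise NotImplementedError("insert the explicit expressions from Ref. [46]")

def rhs_vacuum(r, state, prm):
    # In vacuum (rho = P = 0), F1..F3 do not depend on Q, so the (N, Y, U)
    # subsystem is integrated first; Q then follows by integrating F4.
    N, Y, U = state
    return [F(1, r, N, Y, U, None, 0.0, 0.0, prm),
            F(2, r, N, Y, U, None, 0.0, 0.0, prm),
            F(3, r, N, Y, U, None, 0.0, 0.0, prm)]

def rhs_star(r, state, prm, rho_star):
    # Inside a constant-density star all five equations are coupled and are
    # solved simultaneously, with the pressure vanishing at the surface.
    N, Y, U, Q, P = state
    return [F(i, r, N, Y, U, Q, rho_star, P, prm) for i in range(1, 6)]

# Typical usage (initial data near r -> 0 from a series expansion):
#   sol = solve_ivp(rhs_star, (r_min, r_max), state0,
#                   args=(PARAMS, rho_star), rtol=1e-10, atol=1e-12)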
Asymptotic structure
Since we are interested in solutions that are asymptotically flat, the metrics should approach at infinity. Here c is an asymptotic conformal factor between the two metrics. In order for Eq. 3.1 to be an solution, we need to impose to cancel the cosmological constant terms for g µν and f µν . 2 Linearizing around the flat space backgrounds, i.e. expanding the metric components as Q = 1 + δQ, N = 1 + δN , a = c(1 + δa), U = cr (1 + δU ) and Y = 1 + δY , gives These solutions, first appearing in Ref. [45], are well-known and have been presented on several occasions in the literature. The parameters C 1 and C 2 regulate the strength of the massive and massless modes. The graviton mass m g is given by Let us discuss the free parameters that we have at our disposal. Of the five β n , two have been fixed in order to yield asymptotically flat solutions. This leaves β 1 , β 2 and β 3 as free theory parameters. m 2 is not a free parameter since it can be absorbed into the β n :s. We will keep it explicit, however, since it sets an overall length scale when the β n :s are of order unity. As mentioned above, κ is also redundant since it can be put to unity through a rescaling of f µν and the β n :s. Since it is important for discerning solutions that lie close to those of general relativity, we will, however, keep it explicit. Added to this, we have the conformal factor c. On the whole, for vacuum solutions, we have four global parameters, 2 Notice that in Refs. [46] and [47] the parametrization was used, for which c = 1.
the three β i parameters and c, together with the local parameters C 1 and C 2 , which control the strength of the massless and massive modes. As discussed later, including a gravitational source fixes the relation between C 1 and C 2 . The equations of motion have the property that under the rescaling a solution is mapped onto a new solution [46]. We will interchangeably use r V (defined below) or λ g ≡ m −1 g as the radial coordinate. The linear solutions are valid up to the radius where higher order terms become important. This radius is usually called the Vainshtein radius, and was first identified in Ref. [58] in 1972. In massive bigravity, the Vainshtein radius is where M tot is defined as the total mass of a source. In Ref. [58], Vainshtein also conjectured that there should exist a mechanism, later dubbed the Vainshtein mechanism, that effectively restores general relativity inside the Vainshtein radius. That this exists in the context of massive bigravity for SSS spacetimes was shown in Ref. [46] for the case of κ → 0, and in Ref. [44] for the r ≪ λ g limit. It is important to note, however, that the existence of the Vainshtein mechanism depends on the specific choice of the β i parameters. For recent phenomenology concerning the Vainshtein mechanism, see Refs. [23,59,60] and references therein.
Stars
In this section we study the phenomenology of stars in massive bigravity. As a source, we use a star with constant energy density ρ ⋆ , pressure P (r) and radius r ⋆ . The pressure has to satisfy the conservation equation (2.7), and vanish at the surface of the star. The mass interior to r is and the total mass of the star is thus We have three effective scales for the stars: r ⋆ , r V and λ g . We will assume that r ⋆ ≪ λ g , and comment on both the r ⋆ < r V scenario as well as r ⋆ > r V . As shown in Refs. [28,44], the introduction of a source fixes the relation of C 1 and C 2 in the linear solutions to C 2 /C 1 = −2/3, and C 1 = 2M tot /(1 + κc 2 ). Asymptotically, the fields thus look like a massless general relativity (GR) like term plus a Yukawa term. They exhibit the usual vDVZ-discontinuity [61][62][63] which can be probed observationally. As r ≫ λ g , the Yukawa term decays, however, and the fields look identical to general relativity. It is only when r ≲ λ g , or when higher order terms become important, that we can expect any observational signatures. When massive bigravity is used for cosmological applications, for κc 2 ∼ 1, we expect λ g to be of the order of the Hubble scale today. It is then an excellent approximation that r ⋆ ≪ λ g . For this framework, it was shown in Ref. [44] that it is possible to obtain approximate analytical solutions by assuming that all fields and their derivatives are close to the flat space background, with the exception of U/r.
the metric perturbations can be expressed as These fields are thus functions of M (r), P (r) and µ, where µ satisfies a seventh-degree polynomial: (4.14) The function µ satisfies −1 < µ ≤ 0 for all physically relevant cases. In Sec. A, we show that real valued solutions to Eq. 4.14 that approach zero at infinity (which corresponds to the asymptotically flat solutions) exist if α > −1/ √ β. Furthermore, one must also have α < −d 1 /d 2 when d 2 < 0, where These constraints are depicted in Fig. 1 and are more restrictive than those presented in Ref. [44]. In terms of β i , we can write α > −1/ √ β and β > 1 as (using the normalization That is, we need β 2 to be strictly negative and β 1 and β 3 to be strictly positive.
For the phenomenological analysis, we will use the following definitions of the potentials: From Eqs. 4.10, 4.11 and 4.17, we have where v is the circular velocity. From this we see that as long as µ stays real and finite as r → 0, we will recover GR at small radii, as long as the potentials are small. Outside the source, we have (4.22). The limit r ≫ r V is derived by noting that we can neglect higher order terms in µ, and solve Eq. 4.14 as (4.23). Note that this "asymptotic" value is only valid far inside the Compton wavelength of the graviton and represents the maximal deviation we expect from GR. The deviation is monotonically increasing with κc 2 and has a maximal value of 1/3. For r ≫ λ g , we recover GR again. If we are far outside the graviton Compton wavelength, the exponential term will be negligible and GR is recovered. If we are far inside the Vainshtein radius r V ∼ (M tot λ g 2 ) 1/3 , GR is restored again. We thus expect the largest deviations from GR to happen at r V ≲ r ≲ λ g . As an example, assuming m g = H 0 , for the Sun we thus only expect the gravitational field from the Sun to be modified on scales much larger than the distances to its closest star neighbours, where they of course are completely negligible anyway. In the solar system (r ∼ 1 AU), deviations are of order (r/r V ) 3 ≈ 10 −21 . This value is way below current observational constraints showing that on AU scales, deviations from the inverse square force law are ≲ 10 −9 [64]. This observational constraint indicates that λ g ≳ 1 kpc, except for the case of κc 2 ≪ 1 when constraints on λ g will be weaker.
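As a rough numerical check of the solar estimate above, the snippet below evaluates r V ∼ (M tot λ g 2 ) 1/3 (with M tot the gravitational radius GM/c 2 ) for the Sun with m g = H 0 , and the corresponding (r/r V ) 3 suppression at 1 AU. The physical constants are standard values inserted by us; only the scalings are taken from the text, so the result should be read as an order-of-magnitude estimate.

# Order-of-magnitude check of the solar-system estimates, assuming m_g = H_0.
# Constants are standard values (not taken from the paper).
c     = 2.998e8        # m/s
G     = 6.674e-11      # m^3 kg^-1 s^-2
H0    = 2.2e-18        # 1/s (about 67 km/s/Mpc)
pc    = 3.086e16       # m
AU    = 1.496e11       # m
M_sun = 1.989e30       # kg

lam_g  = c / H0                         # graviton Compton wavelength ~ Hubble radius
M_grav = G * M_sun / c**2               # gravitational radius of the Sun (~1.5 km)
r_V    = (M_grav * lam_g**2) ** (1.0 / 3.0)   # Vainshtein radius, r_V ~ (M lam_g^2)^(1/3)

print(f"lambda_g ~ {lam_g:.2e} m")
print(f"r_V(Sun) ~ {r_V:.2e} m ~ {r_V / pc:.0f} pc")
# Prints roughly 1e-22; the text quotes ~1e-21, the difference reflecting
# O(1) prefactors in the definition of r_V.
print(f"(r/r_V)^3 at 1 AU ~ {(AU / r_V)**3:.1e}")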
For parameter values not fulfilling the requirements given, we may still have everywhere real solutions if the radius of the source is bigger than its Vainshtein radius. For example, if the source has a constant density (and zero pressure), µ will be constant inside the source. As shown in Ref. [23], the requirement of having sources larger than their Vainshtein radii corresponds to them having densities smaller than the order of the critical density of the Universe, if m g ∼ m ∼ H 0 . We still might be able to have more compact sources if we let κ be very small, since m g ∝ mκ −1/2 . For a source with mean density ρ ⋆ , for small κ, we can write Eq. 4.14 at the surface of the source, where the critical background density of the Universe today is given by the standard expression. Demanding that solutions exist down to the surface of a neutron star, we generally need (ρ ⋆ /ρ cr ) (H 0 /m g ) 2 ≈ (ρ ⋆ /ρ cr ) κ (H 0 /m) 2 to be smaller than order one, which means that κ should be less than the ratio of the critical density of the universe and the source density, assuming m ∼ H 0 . 4 For general values of m we get (4.28). Alternatively, we can constrain the Compton wavelength of the graviton: λ g ≲ (ρ cr /ρ neutr ) 1/2 r H ≃ 28 km, (4.29) where r H ≡ H −1 0 ≈ 1.3 · 10 26 m. Note however that this very restrictive limit only needs to be fulfilled for parameter values not fulfilling the ones illustrated in Fig. 1.
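The neutron-star bound of Eq. 4.29 can be reproduced to the stated precision as follows. The mean neutron-star density used below is an assumed representative value (of order 10 17 kg m −3 ), not a number taken from the paper, so only the order of magnitude of the result is meaningful.

# Reproducing the order of magnitude of Eq. (4.29):
#   lambda_g <~ sqrt(rho_cr / rho_neutron) * r_H
# rho_ns is an assumed representative mean neutron-star density.
import numpy as np

G      = 6.674e-11                        # m^3 kg^-1 s^-2
H0     = 2.2e-18                          # 1/s
r_H    = 1.3e26                           # m, Hubble radius H_0^-1 (value used in the text)
rho_cr = 3 * H0**2 / (8 * np.pi * G)      # critical density today, ~9e-27 kg/m^3
rho_ns = 2e17                             # kg/m^3, assumed mean neutron-star density

lam_g_max = np.sqrt(rho_cr / rho_ns) * r_H
print(f"rho_cr ~ {rho_cr:.1e} kg/m^3")
print(f"lambda_g <~ {lam_g_max / 1e3:.0f} km")   # of order a few tens of km (text: ~28 km)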
In Ref. [65] a limit of √κ ≲ 10 −17 was derived in order to push scalar instabilities back before BBN. For this to work, we need at least two β i = 0. For background and perturbation solutions, the main idea is that in the limit of κ → 0, the ratio between the scale factors of the two metrics goes to a constant determined by the values of the β i and c. This gives a cosmological constant-like contribution to the Friedmann equation and well-behaved perturbation theory.
Galaxies
We now turn to the phenomenology of galaxies. In the dark matter paradigm, there is an unexpectedly large correlation between the distribution of baryonic and dark matter. One of the main arguments for MOND is that it is able to explain this correlation on galactic scales [66]. However, it fails on larger scales [67]. The Vainshtein radius on the other hand naturally adapts to the scale of the object.
We add a galactic source (with negligible pressure) with a density profile that is truncated at r = r g ≡ lr V , where the parameter l sets the compactness of the galaxy and is of order one, or slightly lower, for m g ∼ H 0 . We then have We can now write (inside the galaxy; outside the galaxy the solution is given by Eqs. 4.20) This typically gives results like those in Fig. 2, where κ = c = l = 1, α = 1 and β = 4. The observed gravitational lensing and dynamical properties of elliptical galaxies are consistent with general relativity predictions, to an accuracy of ∼ 5 % [28,68]. We have three ways to make our model consistent with lensing constraints. The first is to make the Compton wavelength so small that we are well outside it for the lensing and dynamical observations (basically, the velocity dispersion of stars). The lensing radii typically are ≃ 5 kpc and the velocity dispersion is integrated out to similar radii. In order not to be in conflict with the observed constraints, we thus need λ g ≲ 0.5 kpc. However, as noted in Sec. 4, such small values are ruled out by Solar system constraints.
The second possibility is that the so-called gravitational slip γ, the ratio of the gravitational potentials experienced by massive and massless particles, is small. The largest deviations from general relativity predictions are found between the Vainshtein radius and the Compton wavelength. Using data from the strong gravitational lens sample observed with the Hubble Space Telescope Advanced Camera for Surveys by the Sloan Lens ACS (SLACS) Survey [69], we constrain κc 2 ≲ 0.1 at 2 σ. The third possibility, valid for parameter values for which we have a functioning Vainshtein mechanism, is to make sure that we are well inside the Vainshtein radius (5.5). For large κc 2 , we typically need to be a factor of 10 inside the Vainshtein radius not to be in conflict with observational limits, corresponding to m g /H 0 ≲ 40 or λ g ≳ 0.1 Gpc. To summarize, strong lensing galaxy systems constrain the graviton Compton wavelength λ g to be either smaller than ∼ 0.5 kpc or larger than ∼ 0.1 Gpc, or the combination κc 2 to be smaller than ∼ 0.1. However, λ g < 0.5 kpc is disfavoured by Solar system constraints. We also note that we generally expect the velocity dispersion in galaxies and galaxy clusters to increase as compared to the general relativity prediction, on scales similar to the sizes of the systems if m g ∼ H 0 or slightly larger. This will have an effect on the predicted abundance of dark matter in these systems, namely that we need less dark matter than in the general relativity case. However, since we maximally expect the velocity dispersion squared to increase by a factor of 1/3, the effect is not large enough to completely evade the need for dark matter in galaxies and galaxy clusters.
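For reference, the correspondence between the two ways the galaxy-scale bound is quoted, λ g ≳ 0.1 Gpc and m g /H 0 ≲ 40, is a simple unit conversion using r H = H 0 −1 ≈ 1.3 × 10 26 m as above; the short check below uses only these numbers.

# Conversion between the two forms of the galaxy-scale bound:
#   lambda_g >~ 0.1 Gpc  <->  m_g/H_0 = r_H/lambda_g <~ 40.
pc  = 3.086e16          # m
r_H = 1.3e26            # m, Hubble radius used in the text
lam = 0.1e9 * pc        # 0.1 Gpc in metres
print(f"m_g/H_0 = r_H/lambda_g ~ {r_H / lam:.0f}")   # ~40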
Vacuum solutions
In the previous section, we studied stars, galaxies and their phenomenology. In this section we comment on the relationship between the star solutions and vacuum solutions, such as black holes. Our chief interest here is to understand if the bimetric black holes can be the end-state of the gravitational collapse of massive stars.
Vacuum solutions in massive bigravity were studied extensively in Ref. [46]. Following the proof of Ref. [43]-that for non-singular metrics there has to be a common Killing horizon-we expand the fields N , Y and U close to the horizon, situated at r = r h , as From Eqs. 2.8, the coefficients a n , b n and c n can all be expressed in terms of u and a 1 , where u is arbitrary and a 1 satisfies a quadratic polynomial with coefficients depending on u and the parameters of the theory (i.e. c and the β i parameters). Since there are three equations of motion, and three free parameters (u, C 1 and C 2 ) there exists at most a discrete set of solutions for a given value of c and the β i parameters. The structure of these solutions was investigated extensively in Ref. [47], in the case of c = 1. This was done through a shooting method, where u, C 1 and C 2 were varied until the solution with asymptotic flatness was found. In this paper we have performed a similar numerical study, but with general c. Our results are in agreement with Refs. [47] and [46] wherever they overlap. It was found in Ref. [47], that for a given value of the β i parameters, the solutions are classified by r h /λ g , i.e the ratio between the horizon and the Compton wavelength of the graviton. An upper bound for r h /λ g is 0.876, a value related to the Gregory-Laflamme instability (see Ref. [70] for an interesting discussion of this result). Above that bound, only the bi-Schwarzschild solution exists (i.e. g µν is equal to the Schwarzschild solution, and f µν = c 2 g µν ). The minimum value of r h /λ g depends on the model under consideration. The conjectured parameter structure presented in Ref. [47] is that when β 3 is non-zero, solutions cease to exist below a critical value of r h /λ g (which excludes realistic astrophysical black holes). When β 3 = 0, β 2 > 1 and β 1 < −1, black hole solutions exist for all values of r h /λ g below the Gregory-Laflamme bound.
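To make the shooting strategy concrete, the toy example below applies the same logic to a single massive (Yukawa-type) mode, u'' + (2/r) u' − m 2 u = 0, whose general solution contains both a decaying e −mr /r and a growing e +mr /r branch, just as the linearized bigravity fields do. A trial derivative at the inner boundary is bisected until the growing branch is eliminated and the solution is asymptotically flat. This is only an illustration of the method used in Refs. [46, 47], not the full bigravity system, and all numerical choices below are our own.

# Toy illustration of the shooting method: tune the inner boundary data of
# u'' + (2/r) u' - u = 0 (mass set to 1) until only the decaying Yukawa
# branch e^{-r}/r survives at large r, i.e. the solution is asymptotically flat.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y):
    u, du = y
    return [du, u - 2.0 * du / r]

def u_at_infinity(slope, r0=1.0, r_max=30.0):
    sol = solve_ivp(rhs, (r0, r_max), [1.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]   # its sign tells whether the growing branch is present with + or - amplitude

# Bisection over the trial slope u'(r0); the bracket endpoints give opposite signs at r_max.
lo, hi = -3.0, -1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if u_at_infinity(mid) > 0.0:
        hi = mid
    else:
        lo = mid

slope = 0.5 * (lo + hi)
print(f"tuned slope u'(1) = {slope:.6f}  (analytic value for the purely decaying mode: -2)")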
Moving beyond the case of c = 1, we show in Fig. 3 the metric field N for different values of c and with r H /λ g = 0.04. This shows that several black hole solutions are possible for fixed r H /λ g , as long as c is varied. We also display the relationship between the constants u and c.

[Figure caption. Left panel: The function U/r solved using the full equations of motion numerically (dashed) and using the approximate solution, given by Eq. 4.14 (dotted). U/r departs from the constant solution predicted by the approximate solution when the other metric fields become nonlinear. Right panel: The metric function N divided by the GR solution and N divided by Q. For the GR solution, the Schwarzschild radius is given by r S /λ g = 10 −4 ; this ensures that the horizons of the GR and bigravity solutions coincide. In both the left and right panel, C 1 /λ g = 5 × 10 −5 , C 2 = −2/3 × C 1 and β 1 = 7, β 2 = −5, β 3 = 4, c = κ = 1 (these specific values ensure that the solution exists within the Vainshtein radius). N/N GR and N/Q approach unity as C 1 /λ g decreases.]
Stars and black holes. Concerning the relationship between the star and black hole solutions, we note the following: First of all, u < c for the stars, but u > c for the black holes. Secondly, for star solutions to exist inside the Vainshtein radius, we must have β 3 c 3 > 1. For the black holes, we must instead have β 3 = 0 for solutions to exist for all r h /λ g < 0.876, according to the conjecture of Ref. [47]. Finally, the asymptotic structure is different for the black holes and the stars. For stars, we have C 2 /C 1 = −2/3. For the black holes, while a full parameter scan is beyond the scope of this paper, we conjecture that all black holes satisfy 0 < C 2 c 2 < 2C 1 /3, as motivated below. This conjecture follows from a numerical analysis, where we find that the point C 2 c 2 = 2C 1 /3 (in the following we put κ = 1) marks a transition for the behaviour of N . Above this value, i.e. C 2 c 2 > 2C 1 /3, N will generically become larger than unity. 5 Below this value, N will become less than unity. Furthermore, for C 2 > 0, U/r will grow larger than c as one integrates from infinity towards lower r, and for C 2 < 0, it will become smaller. The point C 2 = 0 corresponds to the Schwarzschild solution, and as C 2 → 0, r h /λ g approaches the value 0.876 given by the Gregory-Laflamme instability. For the black holes, we have that N should become less than unity (and eventually approach zero), and U/r should be larger than c. Thus, for the black holes, we should have 0 < C 2 c 2 < 2C 1 /3. This is also confirmed for the black hole solutions that we have studied. There thus seems to be a qualitative difference concerning the overall sign of the massive Yukawa modes when comparing stars and black holes. This stands in contrast to the case of general relativity, where a spherical collapse of a massive star into a black hole does not change the asymptotic spacetime structure. 6 What happens, then, in vacuum when the asymptotic structure of stars is imposed? Solving the full numerical system, we find that the fields of g µν approach the Schwarzschild solution (for parameters that satisfy the bounds given in Eq. 4.16). The function U/r remains constant in a region inside the Vainshtein radius, but starts to grow close to the horizon of g µν . The fields a and Y remain small and finite. We depict this scenario in Fig. 4.
An interesting curvature invariant, introduced in Ref. [43], is This function remains finite for all non-singular metrics, in particular for the black hole and star solutions. For the vacuum solution shown in Fig. 4, it does, however, diverge close to the horizon of the g µν metric. This is related to the fact that there is no common horizon for both g µν and f µν when C 2 /C 1 = −2/3 in vacuum.
Instabilities. Let us also discuss the instabilities that are present for the bi-Schwarzschild solutions. It was shown in Refs. [50,51] that there exists unstable modes when the horizon radius of the source is less than the Compton wavelength of the graviton. This instability is, however, rather mild, with a timescale equal to the inverse graviton mass. When the latter is of the same size as the Hubble scale today, this means that the instability will require the entire lifetime of the universe to grow significantly. It does, therefore, not have to be important for astrophysical black holes. Intriguingly, the instability was shown to be absent for the non-diagonal bi-Schwarzschild solutions [53], as well as for the partially massless case [52]. Now, as was argued in Refs. [50,51,70], the instability shows that the bi-Schwarzschild solution can not be considered the end-state of a gravitational collapse. It is unclear whether the other black hole solutions, with massive hair, are stable or not. On the whole, then, there are two reasons why the end-state of gravitational collapse is unclear: the instabilities present for the bi-Schwarzschild case (which could also be present for the other black hole solutions), and the different asymptotic structure of stars and black holes.
To summarize, there is a qualitative difference between the star and black hole solutions. The end state of a collapse of a star is therefore uncertain. It could lead to a novel spherically symmetric solution that as of yet has not been discovered. It might lead to a time-dependent solution that does not settle down into a static final state. It seems unlikely, however, that it will lead to the black hole solution that share a common horizon for g and f . We therefore conjecture that black holes in massive bigravity can not be formed from the collapse of stars. This is probably due to the fact that the black hole solutions share a symmetry between g µν and f µν (i.e. a common horizon), whereas the coupling of matter to only one metric, e.g. g, breaks this symmetry. An interesting question is whether this conjecture also holds true when coupling matter to both fields.
Conclusions
In this paper, we have investigated the phenomenology of stars and galaxies in massive bigravity. Furthermore, we have discussed the relationship between black holes in massive bigravity and stars.
For the stars, we have been interested in the existence of solutions where the radius of the star is much smaller than the Compton wavelength of the graviton. The latter is usually assumed to be of the order of the Hubble scale of the universe today, when massive bigravity is used for cosmological applications. The parameter constraint that we found, which generalizes earlier work in Ref. [44], states that β 2 needs to be strictly negative and β 1 and β 3 need to be strictly positive. If these conditions are not met, we have shown that the ratio between the Planck masses of the two metrics needs to be less than 10 −22 , when the length scale of the theory is of the order of the Hubble scale today.
Moving on to galaxies, we show that the graviton Compton wavelength λ g has to be so small (less than ∼0.5 kpc) that the massive Yukawa mode does not produce sizable deviations between the lensing and dynamical observations. This is, however, in conflict with Solar system measurements. Another possibility is that λ g is so large that the galaxies fall within the Vainshtein radius. This requires λ g ≳ 0.1 Gpc. Yet another possibility is that κc 2 ≲ 0.1, which makes the deviation in the gravitational slip undetectable.
Finally, an open and interesting question, that deserves further studies, is the end-state of gravitational collapse. In general relativity, the asymptotic structure is unchanged as a star undergoes spherical collapse to a black hole (a fact related to Birkhoff's theorem). In massive bigravity, we find that the asymptotic structure of stars and black holes is qualitatively different. This is related to the sign of the massive Yukawa mode. This makes it unlikely that the black hole solutions are end-states of gravitational collapse. This could potentially be related to the fact that for the black holes g µν and f µν have a common Killing horizon. This symmetry is, however, broken by stars, since only one of the metrics couple to matter. It would therefore be interesting to investigate the star solutions when coupling both metrics to matter.
A Real solutions
In this Appendix we derive constraints on parameter values needed to have static, spherically symmetric solutions that are asymptotically flat and valid att all radii. We will make numerous references to the left hand side (LHS) and right hand side (RHS) of the polynomial equation (4.14). Assuming that we are outside the source, M (r) = M tot , the pressure is zero and we start by noting that the the RHS is zero at µ = −1 and µ = ±1/ √ β (for β > 0; for β ≤ 0, the only root is at µ = −1) and is being divided by r 3 . As r → ∞, the RHS becomes flat and as r → 0, RHS → ±∞, except at the points where it is zero. Defining (for the pressureless case) which is zero at µ = −1. Furthermore, which at µ = −1 is 2(1 − β). This is negative for β > 1 and vice versa. The LHS on the other hand has a shape that is fixed by the values of β i , µ, κ and c. It is always zero at µ = 0 and This has to be smaller than or equal to zero in order for solutions not to become imaginary as r → 0, since the RHS gets arbitrarily negative close to µ = −1. This means that we need to set β = −1 − 2α. Close to µ = −1, we can expand the RHS and LHS sides as LHS ∼ − 2 3 (1 + α) 3 + 2κc 2 + α(3 + κc 2 ) (µ + 1) 2 , showing that we will not have real solutions as r → 0 since the RHS always will be less than the LHS for some finite r. | 2015-12-04T11:07:11.000Z | 2015-07-03T00:00:00.000 | {
"year": 2015,
"sha1": "da9a509a004fc910f96c53521d3936dc29887953",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1507.00912",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "da9a509a004fc910f96c53521d3936dc29887953",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
99960657 | pes2o/s2orc | v3-fos-license | Catalyst-free site-specific surface modifications of nanocrystalline diamond films via microchannel cantilever spotting
The properties of nanocrystalline diamond (NCD) films offer great potential for the creation of various sensing and photonic devices. A great challenge in order to materialize such applications lies in achieving the micrometrically resolved functionalization of NCD surfaces. In the present work, we introduce a facile approach to meet this challenge employing the novel strain-promoted alkyne–azide cycloaddition “click” chemistry reaction, a catalyst-free ligation protocol compatible with biomolecules. The ability to achieve well-resolved multicomponent patterns with high reproducibility is demonstrated, paving the way for the fabrication of novel devices based on micropatterned NCD films.
Introduction
Besides being the hardest of all materials, diamond occupies a special place in industry. Its electronic properties, including a 5.47 eV band gap, make diamond a promising semiconductor material. Currently, diamond can be prepared synthetically either as a bulk material under high pressure and high temperature or in the form of thin films by chemical vapor deposition (CVD). The surface electronic properties of intrinsic diamond can be significantly altered by contact with oxidative or reductive plasma. Diamond hydrogen or oxygen surface termination results in marked differences in properties such as electrical conductivity, electron affinity, and surface wettability. 1 Bulk doping with boron (p-type) and nitrogen or phosphorous (n-type) can also be incorporated during the thin film deposition process to tune the semiconductor properties. 2,3 Microwave CVD has become the preferred deposition technique for opto-electronic applications due to its advantages, including large area deposition, high growth rate, and high surface quality. This technique usually yields films composed of nanocrystals, termed nanocrystalline diamond (NCD). Their inexpensive production on a large variety of substrates of arbitrary size and shape compared to monocrystalline diamond makes NCD films a more feasible alternative for industrial applications. 4,5 Recently, NCD films have been utilized for advanced electronic devices in the field of sensing, including gas sensors and photonic devices, as well as biosensors. [6][7][8][9] As an example, hydrogen-terminated NCD films have been shown to exhibit changes in their surface conductivity in the presence of phosgene and could be utilized as a physical transducer to detect this gas. 10 In a similar fashion, the amperometric detection of glucose was accomplished by the immobilization of glucose oxidase enzyme on NCD films. 11 Generally, diamond-based sensing devices rely on the binding and reaction of molecules directly at the NCD surface. 12,13 In the vast majority of cases, the specificity of the sensor response depends on the presence of appropriate receptors immobilized onto the surface. 9 Thus, in order to extend the applications of NCD films in the field of sensing, as well as biotechnology, the ability to functionalize their surface is a prerequisite. 9 In particular, being able to encode different functionalities in precise regions of NCD films can be envisioned as a feasible route for the design of surfaces with controlled interactions in biological environments, 14,15 as well as high-throughput sensors.
Surface immobilization of bioactive molecules by spotting and printing methods has drawn much attention in recent years, as it is a base technique for the preparation of structured, bioactive surfaces. [16][17][18][19] For highest feature resolution in the nanoscale, techniques like dip-pen nanolithography (DPN) and polymer pen lithography (PPL) offer interesting lithography routes. 16,20 While ink jet printing offers contactless high-throughput spotting at larger length scales (usually >50 µm), techniques such as microcontact printing (µCP) and spotting with microchannel cantilevers (µCS) 21 can readily address the scale range down to a few microns. 18,22 The deposition of microreactors on a surface to allow liquid phase coupling chemistry to take place for prolonged time or transfer of otherwise not writable species is used in DPN and PPL in the form of matrix-assisted transfer (MA-DPN/MA-PPL). 23 Similarly, µCS was previously demonstrated for the spotting of femtoliter-scale reaction chambers for surface immobilization of bioactive substances into microarrays by "click" chemistry. 24,25 The click reaction chosen in the previously mentioned approaches is the copper-catalyzed alkyne-azide cycloaddition (CuAAC), which has proven enormously useful for biochemical coupling reactions since the introduction of the concept by Sharpless. 26,27 In particular, several examples have been reported in which the CuAAC reaction was employed to functionalize the surface of diamond materials, including nanoparticles, 28 detonation diamond nanoparticles, 29 or conductive diamond, [30][31][32][33][34] which provided an avenue to electrochemically stable, functional electrodes.
However, since terminal alkynes are rather unreactive towards azides, the efficiency of the CuAAC reaction strictly relies on the presence of a Cu(I) catalyst species, often achieved by combining CuSO 4 with sodium ascorbate. This requirement poses a serious drawback for several applications. 35 In the electronic fields, the presence of Cu-ions on the surface can disrupt monolayer conductivity. 36 Particularly in many applications in the biological field, the presence of such ions cannot be tolerated, as they are cytotoxic and can disrupt the conformation of the biomacromolecules to be patterned, including proteins and DNA. 37,38 Though extensive washing or the use of heterogeneous catalysis can often alleviate the problem, concerns about the need for Cu-ions remain. 39 In order to overcome these limitations altogether, reactions are sought which proceed rapidly at room temperature, in aqueous solvents, and do not require any catalyst. In this regard the copper-free strain-promoted alkyne-azide cycloaddition (SPAAC) click reaction, developed by Bertozzi and coworkers, has attracted considerable attention. 35,40 It relies on the use of strained cyclooctynes and their derivatives to activate the reactions. Thus, reaction rates are very fast without the need for a metal catalyst. Its bioorthogonality and compatibility with biological molecules have been thoroughly demonstrated, as the reaction could even be applied to in vivo labeling and imaging. 40,41 Importantly, SPAAC reactions have been found to be suitable for the functionalization of material surfaces such as silicon, gold, or glass but also quantum dots, nanoparticles and polymer nanofibers. 42,43 Silane chemistry was used by Popik and coworkers to mediate the covalent surface attachment of the strained alkyne to glass slides, after which the SPAAC reaction was carried out in a surface-confined fashion. 44 A further refinement of this strategy consisted in the surface grafting of functionalizable polymer brushes and their side-chain substitution with "masked" dibenzocyclooctynes (DBCO), which could be converted to the strained alkyne form by irradiation with UV-light, allowing the ensuing SPAAC reaction to take place in a spatially resolved manner. 45 A later approach used microcontact printing in order to achieve the patterning. 46 Postpolymerization-functionalized surface-grafted polymer brushes were also used to compare the reaction kinetics of two types of DBCO-derivatives with an azide. 47 On the other hand, the surface immobilization of azide groups followed by SPAAC reaction with a cyclic alkyne present in solution has also been exploited. 48 Various surface functionalization strategies based on SPAAC have been reviewed recently. 49 The advantages of this novel catalyst-free click reaction can therefore be envisioned to further expand the power of µCS, PPL, and DPN.
Herein, we demonstrate a facile route for the surface functionalization of NCD films via µCS in order to create small-scale patterns suitable for different types of sensor applications. For this purpose, we create functionalized silane coatings bearing chemical groups that can participate in either the established CuAAC reaction or the catalyst-free SPAAC reaction. Moreover, we compare the effectiveness of both protocols for µCS. We further assess the possibility of employing this method to create well-resolved multi-component patterns on the NCD film surface.
Surface modification
The deposition of nanocrystalline diamond film on the silicon substrates proceeded in two steps: (i) seeding for 40 min, and (ii) microwave plasma-enhanced chemical vapor deposition (PECVD). Prior to the diamond deposition, the substrates were ultrasonically cleaned in isopropyl alcohol and afterwards rinsed with deionized water. Then, the substrates were immersed for 40 min into an ultrasonic bath containing an ultra-dispersed diamond colloidal suspension. This process leads to the formation of a 5 to 25 nm thin layer of nanodiamond powder on the Si substrate. The nucleation procedure was followed by the microwave PECVD using a pulsed-linear antenna microwave chemical plasma system (Roth&Rau AK 400). 4 The deposition of the NCD layer was carried out using a microwave power of 2 × 1700 W, a pressure of 0.1 mbar, a gas mixture of H2, CH4 and CO2 (100/5/30 sccm) and a substrate temperature of 500 °C.
2.2.1.2 Self-assembled monolayer (SAM) of epoxide. The substrates coated with a deposited NCD layer were rinsed twice with ethanol and deionized water, dried by blowing with nitrogen, and exposed to air plasma for 20 min. Subsequently they were immersed in a 1% (v/v) solution of (3-glycidyloxypropyl)trimethoxysilane in dry toluene and kept for 8 h at room temperature in a dry environment. After this time, the samples were removed from the solution, rinsed with toluene and acetone and dried by blowing with nitrogen.
2.2.1.3 Immobilization of the clickable group. Immediately after silanization, the samples were immersed in a solution of either propargylamine 2% (v/v) or dibenzocyclooctyne-amine (DBCO, 1 mg mL−1) in dry dichloromethane in individual sealed vials. The reaction was allowed to proceed at 35 °C for 12 h. Subsequently, the samples were rinsed with dichloromethane, acetone, ethanol, and water, and dried by blowing with nitrogen.
2.2.2 Pattern fabrication 2.2.2.1 Dye solutions used for patterning. All fluorescent dye solutions used in this work in the µCS procedure ("inks") consisted of either TAMRA-azide, Alexa Fluor 488-azide, or Cy5-azide, in all cases at a concentration of 100 µg mL−1, dissolved in a mixture of water/glycerol (7 : 3) to prevent drying. For the solutions to be patterned on propargyl-functional NCD films via CuAAC, CuSO4 (10 mM) and sodium ascorbate (20 mM) were added as catalyst. In the case of the dye solutions patterned on DBCO-functional NCD films using the SPAAC reaction, no catalyst was added.
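The catalyst concentrations above translate directly into weigh-in masses when an ink batch is prepared. The short sketch below only illustrates that arithmetic; the 1 mL batch volume is an assumed example and anhydrous CuSO4 is assumed (the pentahydrate has a different molar mass), so the numbers are not taken from the original protocol.

```python
# Illustrative arithmetic only: weigh-in masses for the CuAAC catalyst components
# at the molar concentrations stated above. Batch volume (1 mL) is an assumed
# example; anhydrous CuSO4 is assumed (the pentahydrate, ~249.7 g/mol, differs).
def weigh_in_mg(conc_mM: float, mw_g_per_mol: float, volume_mL: float) -> float:
    # mM (mmol/L) * g/mol (mg/mmol) * mL / 1000 = mg
    return conc_mM * mw_g_per_mol * volume_mL / 1000.0

print(weigh_in_mg(10.0, 159.6, 1.0))   # CuSO4 (anhydrous): ~1.6 mg per mL of ink
print(weigh_in_mg(20.0, 198.1, 1.0))   # sodium ascorbate: ~4.0 mg per mL of ink
```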
2.2.2.2 Pattern writing via µCS. All patterns were written with an NLP 2000 system (NanoInk, USA) using on-board software (NanoInk, USA) that allows programming of the dwell time and position of the surface patterning tool (SPT-S-C10S, Bioforce Nanosciences). Prior to use, the SPT pens were cleaned by oxygen plasma (10 sccm O2, 100 mTorr, 30 W for 5 min). Then, the SPT was mounted onto the tip holder by double-sided sticky tape and the pen reservoir was filled with 1 µL of the corresponding dye solution. The spotting proceeded at a relative humidity of 60% and with the sample stage tilted by 8° with respect to the tip. For all patterns, a dwell time of 0.5 s was used. After the lithography process, the click reaction between surface and azide-functional dye (either CuAAC or SPAAC) was allowed to proceed for 1 h in the dark at a constant humidity of 60%. Subsequently, the samples were rinsed with ethanol and deionized water to remove the excess solution, and dried by blowing with nitrogen.
Characterization methods.
Surface characterization was performed on pristine (as-grown) and chemically modified NCD films.
2.2.3.1 Scanning electron microscopy (SEM). The morphology of the nanocrystalline diamond films was obtained by scanning electron microscopy (SEM, Raith e_LiNE). The images were taken at a primary electron accelerating voltage of 10 kV using an in-lens detector.
2.2.3.2 Fluorescence microscopy. Fluorescence microscopy images were taken with a Nikon upright fluorescence microscope (Eclipse 80i), equipped with a sensitive CoolSNAP HQ2 camera (Photometrics). The broadband excitation light source (Intensilight illumination) is combined with sets of filters (Texas Red, FITC, DAPI) to separate excitation and emission spectra, depending on the dye molecule used. The fluorescence image analysis for determination of spot area was done in ImageJ software by applying a threshold to the raw images and using the particle analysis tool. 50 2.2.3.3 Atomic force microscopy (AFM). The surface topography of the NCD films was characterized in the quantitative nanomechanical mapping mode of a PeakForce AFM system (ICON, Bruker) using a new Multi75AL cantilever treated in CF4 plasma (pressure 100 mTorr) for 30 s. The root-mean-square roughness (R RMS) was evaluated from two scan area sizes: 5 × 5 µm² and 1 × 1 µm².
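The spot-size analysis described above was done in ImageJ; the following Python sketch is only an illustrative analogue of that threshold-and-measure workflow, not the authors' code. The image array, threshold value and pixel size are hypothetical inputs supplied by the user.

```python
# Illustrative sketch: a Python analogue of the ImageJ threshold + particle-analysis
# workflow used to measure spot sizes in the fluorescence images.
import numpy as np
from scipy import ndimage

def spot_radii_um(image: np.ndarray, threshold: float, pixel_size_um: float) -> np.ndarray:
    """Equivalent-circle radii (um) of connected bright spots in a fluorescence image."""
    mask = image > threshold                                # binarize, as with an ImageJ threshold
    labels, n_spots = ndimage.label(mask)                   # connected components ~ particle analysis
    areas_px = ndimage.sum(mask, labels, index=np.arange(1, n_spots + 1))
    areas_um2 = areas_px * pixel_size_um ** 2
    return np.sqrt(areas_um2 / np.pi)                       # radius of a circle of equal area

# synthetic demo image: one bright square on a dark background
demo = np.zeros((50, 50)); demo[20:30, 20:30] = 1.0
print(spot_radii_um(demo, threshold=0.5, pixel_size_um=0.4))  # ~[2.26] um
```

Averaging the returned radii over an array gives the kind of mean spot radius reported later for the printed patterns.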
2.2.3.4 Raman spectroscopy. Micro-Raman spectroscopy (inVia Reflex by Renishaw) using a HeCd laser with a 442 nm excitation wavelength and a spot diameter of 2 µm was employed to assess the allotropic components of the NCD films.
2.2.3.5 Thickness measurement. The thickness of the NCD films was evaluated from the interference fringes of the reflectance spectra measured in the UV-vis-NIR region using a custom-made device and commercial software for modelling the optical properties of the thin film.
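For readers unfamiliar with the fringe method, the sketch below illustrates the underlying textbook relation for a transparent film at near-normal incidence: two adjacent reflectance maxima at wavelengths lambda1 > lambda2 satisfy 2nd = m*lambda1 = (m+1)*lambda2, so d = lambda1*lambda2/(2n(lambda1 - lambda2)), neglecting dispersion. This is not the commercial model used by the authors, and the wavelengths and refractive index in the example are assumed values.

```python
# Illustrative sketch only: film thickness from two adjacent reflectance interference
# maxima at (near-)normal incidence, neglecting dispersion. Inputs are made-up values.
def film_thickness_nm(lambda1_nm: float, lambda2_nm: float, n_film: float) -> float:
    """Adjacent maxima obey 2*n*d = m*lambda1 = (m+1)*lambda2 (lambda1 > lambda2)."""
    lam_long, lam_short = max(lambda1_nm, lambda2_nm), min(lambda1_nm, lambda2_nm)
    return lam_long * lam_short / (2.0 * n_film * (lam_long - lam_short))

# Assumed example (n ~ 2.4 for diamond), with invented fringe maxima at 717 nm and 538 nm:
print(film_thickness_nm(717.0, 538.0, 2.4))   # ~449 nm, the same order as the ~448 nm film
```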
2.2.3.6 X-ray photoelectron spectroscopy (XPS). A K-Alpha spectrometer (Thermo Fisher Scientific, East Grinstead, UK) was used to perform XPS measurements. The samples were analyzed using a micro-focused, monochromated Al Kα X-ray source (400 µm spot size). The kinetic energy of the electrons was measured using a 180° hemispherical energy analyzer operated in the constant analyzer energy mode (CAE) at 50 eV pass energy for elemental spectra. Thermo Avantage software was used to analyze the spectra. The spectra were fitted with one or more Voigt profiles (binding energy uncertainty: ±0.2 eV). The analyzer transmission function, Scofield sensitivity factors, 51 and effective attenuation lengths (EALs) for photoelectrons were applied for quantification. EALs were calculated using the standard TPP-2M formalism. 52 All spectra were referenced to the C 1s peak of hydrocarbons at 285.0 eV binding energy; the energy scale was controlled by means of the well-known photoelectron peaks of metallic Cu, Ag, and Au.
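As a rough illustration of the quantification scheme described above (peak areas divided by effective sensitivity factors built from Scofield cross-sections, the transmission function and EALs, then normalized), a generic relative-sensitivity-factor calculation is sketched below. The element names and numbers are hypothetical, and this is not the Avantage implementation.

```python
# Illustrative sketch only: relative XPS quantification with effective sensitivity
# factors. All inputs below are hypothetical example values, not measured data.
def atomic_fractions(peak_areas: dict, scofield: dict, transmission: dict, eal: dict) -> dict:
    # corrected intensity = raw peak area / (cross-section * transmission * EAL)
    corrected = {el: peak_areas[el] / (scofield[el] * transmission[el] * eal[el])
                 for el in peak_areas}
    total = sum(corrected.values())
    return {el: value / total for el, value in corrected.items()}

# Hypothetical two-element example (C 1s and N 1s)
print(atomic_fractions({"C": 12000.0, "N": 900.0},
                       {"C": 1.00, "N": 1.80},
                       {"C": 1.0, "N": 1.0},
                       {"C": 1.0, "N": 0.95}))
```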
Results and discussion
In this study, we report a new approach for the surface functionalization of NCD films via µCS using alkyne-azide cycloaddition click reactions with the purpose of creating small-scale patterns (Fig. 1).
Deposition of nanocrystalline diamond films
The growth of NCD films on silicon wafer substrates was carried out using a plasma-enhanced CVD method. The thickness of the pristine NCD layers was measured by the spectral reflectance method assessing the interference fringes and was found to be 448 nm. Further characterization of the NCD layers was carried out by scanning electron microscopy (SEM), Raman spectroscopy, and atomic force microscopy (AFM). The SEM image (Fig. 2a) clearly indicates the polycrystalline nature of a continuous film consisting of grains with sizes up to 200 nm. Moreover, a SEM micrograph of the cross-section of the layer (ESI, Fig. S1 †) supports the homogeneity of the NCD layer thickness, providing a value of (435 ± 13) nm, which corresponds well with the measurement by optical means. The topography of the film, as assessed by AFM, also reveals a uniform and homogeneous coverage of the Si substrate with the diamond film (Fig. 2c). The surface roughness (R RMS) was evaluated from 5 × 5 µm² and 1 × 1 µm² scanned areas and was determined to be (16 ± 2) nm. This relatively low value of R RMS in comparison to other NCD growth methods (typically between 30 and 50 nm for NCD films) 53 is an important feature for many areas of application. It is noteworthy that the roughness can be tuned by varying the deposition parameters such as temperature, gas flow rate, and pressure, as was shown in previous reports. 54 For instance, it has been reported that the pressure has a sharp influence on the deposition process, as it can lead to inhibition of the film growth. 4 Increasing the partial pressure of oxygen in the precursor gases, either as pure O2 or in the form of CO or CO2, has been described for the purpose of enhancing the growth rate and improving the diamond film quality. 55,56 Moreover, the partial pressure of hydrogen in the feed gas plays an important role in determining the morphology of the film. Its decrease was reported to lead to a change in morphology from discrete well-faceted diamond grains larger than 100 nm to finer granular structures with sizes in the order of 30 nm. 57 The Raman spectrum (Fig. 2b) is characterized by three strong contributions. The peak centered at 970 cm−1 (Si-peak) reflects the second-order peak of the Si substrate. 58 The Raman peak centered at 1330 cm−1 corresponds to the diamond (sp3-bonded) component. The broad band at approximately 1610 cm−1 is attributed to the non-diamond phase (G-band), i.e., sp2-bonded carbon atoms. 4 The high intensity of the diamond peak with respect to the G-band indicates a strong predominance of the diamond phase in the film with respect to the graphite component.
Modification of NCD films with reactive coatings for µCS
In order to allow for a rapid and well-defined functionalization of the prepared NCD films using µCS, the surfaces were coated with a siloxane layer containing two types of chemical groups able to undergo a click chemistry reaction in the presence of an azide-containing molecule. The versatility of silane chemistry has enabled its utilization for a variety of nanoapplications. These range from the silica coating of iron particles to the surface modification of carbon nanofibers for polymer nanocomposites. [59][60][61] The preparation of these clickable layers proceeded in two steps: (1) immobilization of a silane containing an epoxide ring, and (2) nucleophilic epoxide opening with a primary amine bound to the clickable group (Fig. 1). While the silane immobilization step was common for both types of layers prepared, the nature of the clickable group immobilized could be changed simply by selecting an appropriate primary amine.
The successful preparation of the targeted layers was confirmed by means of XPS. Fig. 3 shows the changes recorded in the Si 2p and N 1s regions of the spectra after the successive reaction steps. The Si 2p region in particular confirms the attachment of the siloxane layer by the appearance of two signals. 62 They arise from the Si atoms in the layer bonded to different oxygen-containing surface groups and to each other by siloxane bonds. Their binding energies are observed at 101.5 eV and 102.8 eV, and indicate that the Si atoms are bonded to between 1 and 3 oxygen atoms. 63,64 The signals are broadened by splitting into closely spaced 2p3/2 and 2p1/2 components due to spin-orbit coupling, which are not individually resolved.
Propargylamine was chosen to prepare the surface to be functionalized using CuAAC. In our previous work we have demonstrated the effectiveness of this strategy for the surface functionalization of model substrates using DPN. 65 However, in order to overcome the requirement for a transition metal catalyst in the DPN ink formulation, we introduced a surface capable of participating in the SPAAC click reaction. While the CuAAC has enabled great advances in surface functionalization in general, and in particular for DPN and µCS, the use of a Cu-based catalyst may pose severe limitations for the patterning of some types of biomolecules whose activity may be impaired. Two main strategies have been introduced to increase the reactivity of the alkyne group in order to enable the cycloaddition reaction with an azide to proceed without catalyst, involving the activation of the triple bond with fluorine electron-withdrawing groups or placing it in a strained cycle. 66 In particular, dibenzocyclooctynes (DBCOs) show significantly increased reaction rates with azides, which allow the strain-promoted alkyne-azide cycloaddition to proceed in the absence of any Cu species. 67 Thus, a DBCO bearing a primary amine group was chosen in this work for the nucleophilic attack on the epoxide ring (Fig. 1). The binding of both types of amines to the surface is observed in the XPS spectra of the N 1s region, where signals are observed corresponding to C-N/O=C-N groups (399.5 eV) and to partially protonated amino groups (401.3 eV), 62,68 in contrast to the non-reacted epoxide surface (Fig. 3).
Pattern fabrication via µCS and comparison of reactions
The ability to create microarrays on the NCD films modified with "clickable" coatings was demonstrated using µCS. This was achieved by spotting the ink with an SPT attached to a DPN platform where the sample stage can be moved with high precision in all three directions (x, y and z). 21 The spotting process was relatively fast, as it can yield an area of 500 × 500 µm² with 100 dots in about 2 min. 69 Firstly, we compared the two different types of click chemistry reactions, i.e. CuAAC and Cu-free SPAAC click reactions, by performing µCS with a fluorescent azide-containing molecule on the NCD films modified with either propargyl or DBCO reactive groups. It should be noted that in both cases, after the lithography process was completed, the samples were incubated in the dark, at room temperature, at a relative humidity of 60% for 1 h and subsequently washed with ethanol, rinsed with deionized water, and dried. Arrays of dots were printed using a TAMRA-azide ink solution (containing 10 mM CuSO4 and 20 mM sodium ascorbate) on the propargylamine-modified NCD film. Fig. 4a shows a fluorescence microscopy image of the fabricated TAMRA-azide spot arrays. All printed spots are uniform in intensity (Fig. 4b). The size distribution histogram of the spots in the array is presented in Fig. 4c and shows a calculated mean spot radius of 10.2 µm.
To demonstrate a catalyst-free click chemistry reaction on the NCD film, a dibenzocyclooctyne-modified NCD substrate was employed for the printing of the ink solution containing TAMRA-azide but neither CuSO4 nor reducing agent. The obtained fluorescence image (Fig. 4d) shows a homogeneous binding of the ink over the whole pattern area after the washing procedure. In contrast to the propargyl-modified NCD surface, a markedly higher fluorescence signal was observed for patterns written on the DBCO-modified NCD surfaces, i.e. where the SPAAC click reaction took place. Furthermore, the intensity profile across one line of spots exhibits a uniform distribution, and the average spot radius was 9.5 µm (Fig. 4e and f). In both cases, a narrow distribution of the spot sizes is observed. The significantly increased fluorescence observed on the DBCO-modified NCD surfaces after µCS with the TAMRA-azide solution with respect to the propargyl-functional NCD surface indicates that the SPAAC reaction proceeds more rapidly and effectively in comparison with the established CuAAC. Thus, SPAAC could be used to achieve the effective functionalization of the surface after 1 h of reaction following the printing procedure.
Multicomponent arrays
Multicomponent patterns are a valuable tool as they demonstrate the potential use of the protocol for creating microarrays, important for various types of sensing devices. 9 In previous works, we have applied several scanning-probe-based patterning techniques in combination with CuAAC for the fabrication of such arrays. 24 Due to the interest in eliminating the need for the Cu catalyst, in the current work we fabricated multicomponent arrays employing DBCO-modified NCD substrates. This approach enabled us to assess the effectiveness of the SPAAC reaction for such a purpose. For the formation of multicomponent patterns, the SPT was first loaded with TAMRA-azide ink and the first sub-pattern of 50 spots was written. Subsequently, the same procedure was applied for the inks Alexa Fluor 488-azide and Cy5-azide, respectively. Fig. 5a shows the results of the fluorescence measurement after the binding and washing steps. The image evidences the formation of well-resolved sub-patterns from the three different fluorescent dyes. Fig. 5b shows a typical fluorescence intensity profile measured along a horizontal line in the multi-component array. It can be seen that while TAMRA-azide shows a stronger signal than Alexa Fluor 488-azide or Cy5-azide, the intensities corresponding to each sub-pattern are highly reproducible. Thus, the possibility to encode the NCD surface with multiple inks containing azide-functional molecules in a rapid catalyst-free approach is demonstrated. This is of high relevance for the fabrication of biochips, which can exploit the properties of the NCD film.
Arguably, a possible limitation of printing techniques based on scanning probe tools is their low throughput. Importantly, in the case of DPN and related methods, this problem can be easily overcome by parallelization of the process. 70,71 This is achieved by fixing several scanning probe tools in parallel while the sample stage is scanned. Thus, the same pattern can be printed many times simultaneously, significantly increasing the throughput, which is of relevance for the translation of such methods to practical applications.
Conclusions
A new route for the covalent surface functionalization of nanocrystalline diamond films via catalyst-free click chemistry utilizing spotting with microchannel cantilevers (µCS) was introduced. This could be achieved by following a very simple surface modification protocol applied to the NCD film. Moreover, we demonstrated the ability to produce multicomponent patterns using inks containing three different azide-functional molecules and applying them onto the dibenzocyclooctyne-modified NCD surface. Results presented herein indicate that formation of a multicomponent micropattern on the modified NCD surface by µCS represents a promising strategy for the fabrication of high-density arrays that will be useful in fields such as gas sensors, biosensors, as well as optical devices. | 2018-12-04T00:04:44.960Z | 2016-06-15T00:00:00.000 | {
"year": 2016,
"sha1": "cf0ae766d4f2dd0730e78e38931fa1c77a543f01",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2016/ra/c6ra12194b",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "6178971be0f5b0b3c1826c6408275ecd2552a847",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
229689764 | pes2o/s2orc | v3-fos-license | Real-world Evidence to Estimate Prostate Cancer Costs for First-line Treatment or Active Surveillance
Background Prostate cancer is the most common cancer in men and second leading cause of cancer-related deaths. Changes in screening guidelines, adoption of active surveillance (AS), and implementation of high-cost technologies have changed treatment costs. Traditional cost-effectiveness studies rely on clinical trial protocols unlikely to capture actual practice behavior, and existing studies use data predating new technologies. Real-world evidence reflecting these changes is lacking. Objective To assess real-world costs of first-line prostate cancer management. Design, setting, and participants We used clinical electronic health records for 2008–2018 linked with the California Cancer Registry and the Medicare Fee Schedule to assess costs over 24 or 60 mo following diagnosis. We identified surgery or radiation treatments with structured methods, while we used both structured data and natural language processing to identify AS. Outcome measurements and statistical analysis Our results are risk-stratified calculated cost per day (CCPD) for first-line management, which are independent of treatment duration. We used the Kruskal-Wallis test to compare unadjusted CCPD while analysis of covariance log-linear models adjusted estimates for age and Charlson comorbidity. Results and limitations In 3433 patients, surgery (54.6%) was more common than radiation (22.3%) or AS (23.0%). Two years following diagnosis, AS ($2.97/d) was cheaper than surgery ($5.67/d) or radiation ($9.34/d) in favorable disease, while surgery ($7.17/d) was cheaper than radiation ($16.34/d) for unfavorable disease. At 5 yr, AS ($2.71/d) remained slightly cheaper than surgery ($2.87/d) and radiation ($4.36/d) in favorable disease, while for unfavorable disease surgery ($4.15/d) remained cheaper than radiation ($10.32/d). Study limitations include information derived from a single healthcare system and costs based on benchmark Medicare estimates rather than actual payment exchanges. Patient summary Active surveillance was cheaper than surgery (−47.6%) and radiation (−68.2%) at 2 yr for favorable-risk disease, while savings diminished by 5 yr (−5.6% and −37.8%, respectively). Surgery cost less than radiation for unfavorable risk for both intervals (−56.1% and −59.8%, respectively).
Introduction
Prostate cancer is the most commonly diagnosed cancer in men and second leading cause of cancer-related deaths, with 192 000 new diagnoses and 33 000 deaths expected in 2020 [1]; yet the majority of cases are detected by screening, are slow growing, and will not become clinically evident during the patient's lifetime [2]. Active surveillance (AS) of low-risk cancers, which defers aggressive treatment until disease progression, is an increasingly followed management strategy [3]. This strategy aims to lower cost [4,5] and decrease treatment-related morbidity without impacting survival [6]. However, changes in the risk composition of the patient population due to changing screening guidelines [7][8][9] and recent incorporation of expensive new technologies such as routine multiparametric magnetic resonance imaging (mpMRI) into surveillance regimens have likely increased cost [10][11][12].
Previous cost-effectiveness studies of management strategies for localized prostate cancer usually favor AS with follow-up durations under 10 yr, while those with longer follow-up support aggressive surgical or radiation treatment [4][5][6][13][14][15]. However, these studies have important limitations. They rely on simulation of theoretical patients in which cost estimates reflect only services included in clinical trial protocols, which greatly differ from routine clinical care. Many studies assume that all patients' decisions conform to those of the average patient, do not account for patient comorbidities, and do not account for deviation from standard care [16][17][18][19][20]. This is especially relevant in AS, which lacks consistent long-term protocols. Furthermore, existing literature relies on data collected before the dissemination of new high-cost technologies [10][11][12]. Given differences in patient populations and clinical outcomes between randomized trials and real-world data, these assumptions underlying previous studies are concerning.
The USA addressed the need to incorporate evidence from real-world data under the 21st Century Cures Act [21]. Specifically, clinical assertions should include evidence from routinely captured clinical data, including electronic health records (EHRs) [21]. In parallel, insurance companies are increasingly demanding proof of real-world effectiveness of treatments to support reimbursement decisions [9]. However, secondary use of EHRs is challenging. The data are noisy, and require repurposing of billing codes and use of artificial intelligence to process multimodal data, including clinical notes [22,23]. For example, AS, which does not have a designated billing code, is difficult to identify reliably. In addition, it is challenging to obtain useful, reliable cost data that can be shared easily; closely guarded negotiated payments between hospitals and payors are proprietary, actual costs vary between payers, patients can change insurance coverage, and treatments have different densities of utilization and charges over time.
Understanding rising costs in routine care is therefore pertinent yet challenging, particularly in prostate cancer where incorporation of new high-cost technologies is coupled with an extended treatment course [24]; furthermore, patients are increasingly sharing in the burden of these rising costs [25,26]. In this study using real-world data, we characterize initial management costs of prostate cancer at 2 and 5 yr following initial diagnosis. We leverage an existing cost-of-care methodology [27] and the US Medicare Fee Schedule [28] to produce risk-stratified estimates of average calculated cost per day (CCPD). This framework provides increased transparency in healthcare spending, which can facilitate innovation, targeted reform, and shared decision-making.
Patients and methods
Data sources and study cohort
We used a clinical data warehouse (CDW) that integrates patient-level clinical data from EHRs reflecting clinical care at a tertiary academic medical center and associated network practice sites [29]. Patients who could not be assigned a cancer risk were excluded (Fig. 1).
Patients were classified as "unfavorable" if they had either stage ≥3 or Gleason grade group ≥3 disease and were otherwise classified as having "favorable" risk. Age was calculated at diagnosis. Charlson Comorbidity
Index at diagnosis was determined using active diagnoses in the patient's EHR over the past year. This study received approval from Stanford University's Institutional Review Board.
Outcomes
For each patient, all CPT codes were gathered with year of service and assigned a "cost" for each service, drug, or procedure by matching with the US Centers for Medicare and Medicaid Services (CMS) Medicare Fee Schedule and incorporating facility payments from CMS under the inpatient and relevant payment systems [27,28], adjusted to 2017-US$ via the US Bureau of Economic Analysis GDP Implicit Price Deflator [30]. Receipt of mpMRI was determined through data mining of radiological reports.
For primary analyses, an episode of care was defined from the date of diagnosis to the last follow-up or the maximum study interval (24 or 60 mo), whichever was earlier. Calculations over 60 mo were restricted to patients with follow-up ≥4 yr. For secondary analyses, the episode of care was split at 30 d following initial therapy into "initial treatment" and "post-treatment surveillance" periods (Fig. 2). A time period of 1 mo after treatment was chosen to capture immediate complications and postoperative care. For each patient, all billing codes within the episode of care were collected, assigned costs, summed, and then divided by the episode's duration to yield CCPD. All billing codes within the given interval were used to capture most potential complications, rather than making a priori assumptions on relevant services, since the analysis is comparative and one cannot be certain whether indirect events such as pneumonia were or were not related to the cancer or treatment. We expressed time in days instead of months or years to enable a realistic comparison between management strategies that differ in the distribution of services over time.
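As a minimal illustration of the CCPD definition above (benchmark costs assigned to each billed code, inflation-adjusted, summed over the episode, and divided by the episode length in days), the sketch below computes CCPD for a single hypothetical patient. The fee amounts, deflator factors and service records are made-up examples, not values from the study, which performed this calculation in SQL within the CDW.

```python
# Illustrative sketch only: calculated cost per day (CCPD) for one patient.
# Fee schedule entries, deflators and services below are assumed example values.
from datetime import date

def ccpd(services, fee_schedule, deflator_to_2017, diagnosis_date, end_date):
    """services: list of (service_date, cpt_code, service_year); returns 2017 US$ per day."""
    episode_days = (end_date - diagnosis_date).days
    total = 0.0
    for service_date, cpt_code, year in services:
        if diagnosis_date <= service_date <= end_date:          # keep codes within the episode
            total += fee_schedule[cpt_code] * deflator_to_2017[year]
    return total / episode_days

# Hypothetical example: two services over a 24-month episode
services = [(date(2015, 3, 1), "99213", 2015), (date(2016, 2, 1), "55700", 2016)]
fees = {"99213": 73.0, "55700": 170.0}           # assumed benchmark amounts, not actual fees
deflate = {2015: 1.03, 2016: 1.02}               # assumed adjustment factors to 2017 US$
print(ccpd(services, fees, deflate, date(2015, 1, 15), date(2017, 1, 15)))   # ~0.34 US$/d
```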
Statistical analysis
We separately compared men with favorable-risk and those with unfavorable-risk disease by treatment type. We plotted the density of healthcare encounters over time as the number of unique dates with at least one CPT code, normalized by the number of uncensored patients in monthly bins. We assessed unadjusted CCPD using the Kruskal-Wallis test given a right-skewed cost distribution and implemented an analysis of covariance model with a log-linear transform to provide estimated mean CCPD adjusted for age and Charlson comorbidity. Episodes of care were assessed over the first 24 or 60 mo (Table 2) after the diagnosis date, using the earlier of the maximum eligibility period or the duration from diagnosis to last follow-up as the time period for determining the calculated cost per day. Secondary analysis separately assessed the initial treatment and post-treatment component periods for patients receiving definitive management with surgery or radiation (24 mo: Table 2; 60 mo: Supplementary Table 4). As active surveillance (AS) has no distinction between initial treatment and post-treatment components, given that patients forgo definitive treatment in favor of carefully monitoring for disease progression, we assessed AS only in the primary analysis over the entire eligibility period. We defined the initial treatment period as the time from the diagnosis date to 1 mo after either the date of surgery or the date of the last radiation treatment, while the post-treatment period comprised the remainder of the entire eligibility period. We chose the time period of 1 mo after treatment to capture immediate treatment complications and postoperative care. We determined the last radiation treatment code by searching the 4 mo following the treatment start date; the period of 4 mo was chosen to ensure that codes were associated with the initial and not subsequent treatment. SQL was used for
database extraction and calculation of CCPD within the CDW. Statistical analyses were performed using R version 3.6.0 (R Foundation for Statistical Computing, Vienna, Austria).
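The analyses were run in SQL and R as stated above; purely to illustrate the two comparisons described (a Kruskal-Wallis test on unadjusted CCPD and a log-linear ANCOVA adjusting for age and Charlson comorbidity), an equivalent Python sketch might look like the following. The column names ("ccpd", "treatment", "age", "charlson") are assumed, and this is not the study's code.

```python
# Illustrative sketch only: mirrors the statistical comparisons described above
# on a hypothetical dataframe; not the study's R/SQL implementation.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def compare_ccpd(df: pd.DataFrame):
    # unadjusted comparison across treatment groups (right-skewed costs)
    groups = [g["ccpd"].values for _, g in df.groupby("treatment")]
    kw_stat, kw_p = stats.kruskal(*groups)
    # ANCOVA with log-linear transform, adjusting for age and comorbidity
    model = smf.ols("np.log(ccpd) ~ C(treatment) + age + charlson", data=df).fit()
    return kw_p, model.params
```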
Study cohort and characteristics
A total of 3433 men were included in the study (Fig. 1), with a mean age of 65 yr, generally low comorbidities (78.1% Charlson 0), and predominantly (67.8%) favorable-risk disease; the sample comprised mostly non-Hispanic white individuals insured through Medicare or privately (Table 1). Surgery (54.6%) was most common, followed by radiation (22.3%) and then AS (23.0%). Compared with AS, surgery patients were younger with fewer comorbidities, privately insured, and non-Hispanic white. Radiation patients also had fewer comorbidities than AS patients but were older, insured through Medicare, and included more racial/ethnic minorities, with the greatest difference seen in black men (Table 1 and Supplementary Table 1). In comparison with surgery, radiation patients were more likely to have unfavorable disease (Table 1).
Healthcare system interactions
Favorable-risk patients had fewer encounters than their unfavorable counterparts. Over 2 yr, radiation had the most visits (18), followed by surgery (8) and AS (8), and unfavorable-risk patients undergoing radiation had more activity (43) than surgery patients (10). Differences were most evident in the first 6 mo and the initial treatment period. When followed for 5 yr, AS had more visits (22) than surgery (12) but fewer than radiation (40; Fig. 3 and Table 2).
Costs of treatment
The median cost of care increased over the study period, with CCPD particularly increasing around 2013, which coincided with increasing utilization of mpMRI across all treatments, especially AS (Fig. 4).
Discussion
We developed a framework for comparing cost of care for localized prostate cancer using data from a real-world setting. We found that AS was the least costly strategy over the first 2 yr of management of favorable-risk tumors, providing savings of 47.6% and 68.2% compared with surgery and radiation, respectively, while savings were much smaller by 5 yr at 5.6% and 37.8%, respectively. At both 2 and 5 yr, surgery was cheaper than radiation in both favorable (39.3% and 34.2%, respectively) and unfavorable (56.1% and 59.8%, respectively) risk. Diminishing savings with AS at longer time intervals likely represents continued costs of surveillance as well as reclassification and treatment of some AS patients compared with predominantly one-time definitive treatments for patients with favorable risk. The introduction of expensive technologies such as mpMRI appears to coincide with increasing costs of care, although for now AS remains a cheaper strategy despite increasing utilization. Given the differences in age and comorbidity between surgery and radiation, it is reassuring that these relationships remained consistent before and after adjusting CCPD. We additionally provide new data on the distribution of healthcare system interactions over time, demonstrating a concentration of services in the initial treatment period, with radiation therapy involving the greatest intensity of visits. Our findings support the view that AS can be a preferable treatment for favorable-risk localized prostate cancer, providing both higher quality of life and up-front cost savings, although these savings appear to diminish over more extended timeframes.
Our findings leverage real-world actual practice data that include new high-cost technologies to provide cost estimates for prostate cancer management in routine clinical care. Our study has the strength that it reflects the realities of actual clinical care that may deviate from guidelines or clinical trial protocols that typically underpin the assumptions used to design traditional cost-effectiveness modeling studies. Further, our approach is transparent and generalizable, and can easily be implemented in any claims or EHR-based data ecosystem where codes reflecting services can be linked to a fee schedule; in our US study, we link CPT codes with the Medicare Fee Schedule and hospital facility payment systems, but these could be substituted for studies in other delivery systems. The cost estimates we attribute to AS, surgery, and radiation and the initial up-front savings provided by AS that diminish with extended follow-up are in line with prior reviews [12] and simulation studies [5,6,13,14] that did not consider new high-cost technologies such as mpMRI in their design. Interestingly, one recent simulation that considered the role of mpMRI in AS, as compared with traditional transrectal ultrasound-guided biopsy, found that mpMRI was cost effective only at lengthy 5-yr surveillance intervals at Medicare rates, with substantial sensitivity to price, being no longer cost effective at private charges [31]. Given that all these studies use simulated data extrapolated from clinical trials, they all make assumptions regarding which services and charges to assign to each treatment pathway, resulting in an unclear picture of whether high-cost technology is impacting the potential cost savings of AS. Our estimates derive from actual practice without reliance on such assumptions, and we demonstrate initial savings with AS; however, prostate cancer has a protracted course, and we found savings substantially diminished with assessment over 5-yr compared with 2-yr intervals. Further work will be needed to determine whether these cost relationships hold or reverse with even more distant time horizons, as suggested by modeling studies.
Few studies have attempted to use real-world data to assess costs in localized prostate cancer, with most limited in scope to smaller cohorts using Spanish [4] or German [15] data with limited applicability to the US setting. The sole US study was limited to a Medicare-derived population that was generally older than 75 yr and was therefore more suited to watchful waiting than to AS [32]. While these reports found cost savings with forms of delayed treatment, none included recent data after the introduction of mpMRI, which experienced a rapid 486% increase in utilization between 2013 and 2015 according to one study [10]. While some work has explored real-world costs of new treatments within radiotherapy [12,24], similar comparative estimates between different management strategies such as AS, surgery, and radiation are absent. Given the need for more current real-world cost data, particularly in the USA, our study sought to fill this gap. We have provided comparative estimates demonstrating that AS continues to deliver up-front savings despite recent changes in the clinical landscape, although these diminish as follow-up increases. CCPD will be a useful tool for future work aimed at exploring the drivers of increasing cost of care.
Limitations
Information was derived from a single healthcare system, and patterns in regional practices or local population attributes may limit generalizability to other settings; therefore, CCPD must be validated further in other healthcare networks. As it is challenging to maintain extended patient follow-up in real-world data, we were able to assess CCPD through 5 yr only, necessitating further work to assess longer time frames. Although the network contains an academic hospital, a community hospital, and a specialty care alliance, patient activity outside the network may not be captured, leading to an underestimation of costs. Medicare reimbursements are typically less than those from private payors, which would further underestimate the costs. An assessment of actual costs from payment exchanges would require access to closely guarded proprietary accounting data. However, CCPD benchmarks as proxy for cost adequately suited our purposes to obtain comparative measures, especially given that private insurance companies use Medicare prices as a benchmark for setting their own prices [33], and we anticipate that CCPD's shortcomings are likely distributed uniformly and therefore impact relative comparisons minimally. We believe that such comparable benchmarks are more useful for understanding the trends in healthcare costs by focusing on trends in delivery of services rather than on intricacies of constantly changing negotiated rates that generate heterogeneous payment exchanges that vary within institutions among payors and patient plans. Future work will need to address these limitations by assessing costs in other systems, determining areas driving rising costs, and understanding relationships between costs and clinical outcomes.
Conclusions
AS is a viable management strategy that can be encouraged to optimize the quality of life in select patients with favorable-risk disease; using real-world data, we found that initial costs may be reduced with AS, although these savings may not hold for patients followed over extended periods. Generally, definitive treatment with surgery appears less costly than radiation in both favorable and unfavorable disease.
There is a lack of high-quality real-world cost data, despite the fact that widespread EHR data ecosystems, as we demonstrate, can be harnessed to obtain comparable cost benchmarks such as CCPD that avoid traditional challenges to cost transparency. These methods are widely generalizable for application to other areas of clinical care and, for example, have also been applied to assess breast cancer survivorship care [27], another application with treatment options varying in duration and intensity. It is essential to supplement traditional modeling and decision analysis with an understanding of the real-world costs of new technologies and management strategies to inform recommendations and identify opportunities to promote high-value care. Therefore, further resources should be devoted to harnessing EHR data for these purposes.
Financial disclosures: Tina Hernandez-Boussard certifies that all conflicts of interest, including specific financial interests and relationships and affiliations relevant to the subject matter or materials discussed in the manuscript (eg, employment/affiliation, grants or funding, consultancies, honoraria, stock ownership or options, expert testimony, royalties, or patents filed, received, or pending), are the following: None. As these data contain patient identifiers, data sharing is not permitted under the constraints of the Institutional Review Board. However, requests for the statistical code will be considered if the proposed use aligns with public good purposes and does not conflict with other requests, contingent on approval from the local ethics committee. Requests can be addressed to the corresponding author.
Appendix A. Supplementary data
Supplementary material related to this article can be found, in the online version, at doi:10.1016/j.euros.2020.11.004. | 2020-12-17T09:10:38.522Z | 2020-12-10T00:00:00.000 | {
"year": 2020,
"sha1": "edadf396f40981b8db67a043ff7efe4f853c6aa7",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.euros.2020.11.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07f71c5f2591c2fabfe49c215730df70441d8f33",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59353169 | pes2o/s2orc | v3-fos-license | A review on bacterial stalk rot disease of maize caused by Dickeya zeae
Bacterial stalk rot of maize, caused by Dickeya zeae (previously known as E. chrysanthemi pv. zeae), is of economic importance and can reduce crop yield by up to 98.8%. The disease is more prevalent in the rainy season in India. The bacterium prefers high temperature and moisture for its growth; as a result, plants topple down within a week. The pathogen has a wide host range (maize, rice, tomato, chilli, brinjal, etc.), which helps it survive in soil for long periods. The bacterium is characterized by biochemical and molecular approaches; at present, pel gene and rDNA-specific primers are frequently used for D. zeae characterization. The pathogen is significantly controlled under in vitro and in vivo conditions with bleaching powder (drenching at 100 ppm) and antibiotics. The present review compiles data on pathogen nomenclature, etiology, epidemiology, host range, pathogen survival, biochemical, physiological and molecular characterization, germplasm evaluation and disease management.
INTRODUCTION
Maize is the third largest planted crop after wheat and rice in the world (USDA, 2014). Production of maize is constrained by a number of abiotic factors (unfavorable climate such as high and low temperature; nutritional imbalance) and biotic factors such as mycoplasma, nematodes, fungi and bacteria (Jugenheimer, 1976). Among the biotic factors, the diseases caused by fungi and bacteria are economically more important because they cause heavy yield losses of 8.5% (Oerke, 2006). During recent years, bacterial stalk rot disease has emerged as one of the most important diseases of kharif sown maize crop in India (Kumar et al., 2015a). The kharif sown crop has its most susceptible stage coinciding with the annual monsoon rainfall, which aggravates disease development. Bacterial stalk rot disease was reported for the first time by Prasad (1930), who identified the bacterium involved as E. dissolvens, but the symptoms described by him resembled more closely those incited by E. chrysanthemi pv. zeae. Its importance was realized during the 1969 season, when a severe outbreak occurred in Mandi district in Himachal Pradesh. The pathogen spreads from plant to plant and field to field through rainwater and its runoff. The occurrence of the disease has been described in various parts of the world (Hingorani et al., 1959; Pauer, 1964; Prasad, 1930; Sabet, 1954; Volcani, 1961; Zachos et al., 1963; Martinez-Cisneros et al., 2014). Three bacterial pathogens have been reported to cause stalk rot of maize, namely E. dissolvens, E. chrysanthemi pv. zeae and Pseudomonas syringae pv. lapsa (Prasad, 1930; Hingorani et al., 1959; Sinha, 1966). The pathogen has recently been re-classified as D. zeae (Samson et al., 2005). A survey that generated 458 votes from the international community allowed the construction of a top-10 list of bacterial plant pathogens, in which Dickeya spp. ranked ninth (Mansfield et al., 2012). This bacterium has a wide host range, causing soft rot (Bradbury, 1986), which makes it difficult to manage (Goto, 1979). Under severe conditions, maize plants topple down and a foul odor emerges. The disease causes severe grain yield losses, which can range from 21 to 98 per cent (Thind and Payak, 1978). Favorable environmental conditions: Dickeya zeae prefers high temperatures and high relative humidity for infection and disease development. High temperature and humidity are important for the physiological and metabolic activity of the bacterium; it therefore grows well and produces sufficient pectolytic enzymes, which are important for plant cell degradation. It can be a problem in areas of heavy rainfall or where overhead irrigation is used and the water is pumped from a lake, pond, or slow-moving stream. Prasad and Sinha (1980) reported that a temperature of 35°C, 70% relative humidity (RH) and an inoculum level of 2 × 10^8 cells/ml were essential for disease development in 15 to 30 day old maize plants. Saxena and Lal (1984) made an attempt to correlate weather parameters with the disease and found that temperature and RH did not fluctuate much during the crop seasons; however, a significant difference in total rainfall and duration of bright sunshine was observed. Saxena and Lal (1981) also reported a positive correlation of the disease with high nitrogen fertilization. Morphology: D.
zeae is a motile, gram-negative, rod-shaped bacterium. Cells vary from 0.8-3.2 × 0.5-0.8 µm (average 1.8 × 0.6 µm). There are 3-14, but more usually 8-11, peritrichous flagella. The bacterium produces off-white, slimy and shiny colonies on King's B medium (Fig. 1A and B) (Kumar et al., 2015b). Pathogen mode of infection and symptoms: Initial disease symptoms include discoloration of the leaf sheath, which spreads further to the stalk and leaves; under severe conditions the plant topples down and a foul odor is detected (Fig. 2A and B). The first stage of maceration by E. chrysanthemi involves the entry of the bacteria into the parenchymatous tissues of plants that have been physiologically compromised, such as by bruising, excess water or high temperature (Collmer and Keen, 1986). The next stage involves local maceration as a result of depolymerization of plant cell walls, followed by necrosis of the entire plant (Barras et al., 1994). Due to the complexity of plant cell walls, which consist of polysaccharides, the main ones being cellulose, hemicellulose and pectin, a variety of enzymes are accordingly produced by E. chrysanthemi for the efficient breakdown of cell walls (Robert-Baudouy et al., 2000). The major enzymes have been found to be pectinases, which degrade various components of pectin using different reaction mechanisms. Other hydrolytic enzymes are also produced, such as cellulase isozymes, protease isozymes, xylanases and phospholipases (Collmer and Keen, 1986; Hugouvieux-Cotte-Pattat et al., 1996; Kothari and Baig, 2013; Nahar et al., 2015). It has also been reported that E. chrysanthemi is capable of causing systemic disease by spreading through the vascular system of a plant. The physiological symptoms of such infection are yellowing of new leaves, wilting and a mushy, foul-smelling stem rot (Slade and Tiffin, 1984). Genetic and physiological studies show that systemic infection by E. chrysanthemi is dependent on two abilities, namely iron acquisition and production of the pigment indigoidine (Expert and Toussaint, 1985; Reverchon et al., 2002). Due to iron scarcity in the environment and its role as an essential element, most organisms have derived the ability to sequester iron by production of low-molecular-weight, high-affinity iron-chelating agents called siderophores. These are produced in response to iron limitation in order to capture Fe3+ ions. In a plant-bacteria interaction, the successful competition for iron between the two organisms could determine the outcome of an invasion (Enard et al., 1988). Similarities to other diseases: Pythium stalk rot (Pythium aphanidermatum) causes similar symptoms on maize, but bacterial stalk rot may be accompanied by a foul odour. Host range: D. zeae has a wide host range. Bradbury (1986) reported that E. chrysanthemi is the causal agent of soft rot disease on a wide range of plant species in tropical, subtropical and temperate regions of the world.
Figure: colonies of D. zeae (Kumar, 2015); electron microscopic image of D. zeae (James Hutton Institute, 2017).
It attacks tubers of potato and sweet potato, onion bulbs, bean pods, roots of carrot, turnip, radish and sugar beet, fruits of tomato, brinjal, chillies and papaya, and plants of pearl millet, sorghum, brinjal, potato, tomato, tobacco and cabbage (Thind, 1970; Rangarajan and Chakravarti, 1971; Hingorani et al., 1959; Mehta, 1973; Sinha and Prasad, 1977). Goto (1979) reported that E. chrysanthemi caused bacterial foot rot disease of rice in Japan. Similarly, Qiongguang and Zhenzhong (2004) reported foot rot disease of rice in China caused by E. chrysanthemi pv. zeae. Edward et al. (1973), Lakshmanan and Mohan (1980) and Khan and Nagaraj (1998) reported tip-over of banana caused by E. carotovora subsp. carotovora and E. chrysanthemi from across the world. In India it was reported to be caused by E. carotovora subsp. carotovora (Edward et al., 1973; Lakshmanan and Mohan, 1980; Khan and Nagaraj, 1998), while Chattopadhyay and Mukherjee (1986) attributed it to E. chrysanthemi. Bacterial heart rot of pineapple caused by E. chrysanthemi was first reported in Malaysia (Johnston, 1957) and has since been described in Costa Rica (Chinchilla et al., 1979), Brazil, and the Philippines (Rohrbach and Johnson, 2003). E. chrysanthemi is also known as a greenhouse pathogen in mild climate regions (Perombelon and Kelman, 1980). Stem rot caused by E. chrysanthemi on tomato in greenhouses was first reported in Turkey (Cinar and Aysan, 1995). Recently, Kumar et al. (2015a) reported that D. zeae populations from Punjab have a wide host range and cross-infect many hosts (Fig. 3).
Survival:
The soil represents a favorable habitat and is inhabited by a wide range of microorganisms, including bacteria, fungi and protozoa. D. zeae survives in plant debris, but the survival period varies with environmental conditions (Anil Kumar and Chakravarti, 1971b; Prasad and Sinha, 1977; Saxena and Lal, 1982). The soil conditions most favorable for D. zeae are a low population of plant growth-promoting rhizobacteria (PGPR) together with infected maize debris in the soil. Anil Kumar and Chakravarti (1971b) reported that the bacterium survived for 24, 15 and 12 weeks in infected tissue (40% soil moisture) at 10, 20 and 30 °C, and for 18, 15, 12 and 12 weeks (kept in soil at 27 °C) at 98, 95, 90 and 81% relative humidity (RH), respectively. However, the population of the bacterium was reduced at >90% moisture, due to decreased rates of organic matter decomposition resulting from low oxygen supply (Csonka, 1989; Killham et al., 1993). Survival on artificially inoculated seed was also studied by Anil Kumar and Chakravarti (1971a), who found that the bacterium survived for 5 months at 10 and 20 °C with 81 and 93% RH, and for 3-4 months at 30 and 35 °C with 51 and 62% RH. The bacterium survived for 140 days in autoclaved soil at 40% moisture compared to only 29 days in non-autoclaved soil (Anil Kumar and Chakravarti, 1970). However, Rangarajan and Chakravarti (1970b) reported that the stalk rot bacterium survived for 150 and 90 days in sterile and unsterile soils, respectively. Prasad and Sinha (1977) found that a sterilized environment increased the survival period of the bacterium in comparison to an unsterilized environment. It survived for 3-4 months in soil alone and for 4-6 months in soil containing healthy maize stalks. The survival period was longest (9 months) in soil which contained naturally and artificially infected maize plants as debris. Saxena and Lal (1982) also reported longer survival periods in sterilized soils and heavier soils. The maize borer, Chilo partellus, was shown to act as a carrier of this bacterium, spreading the pathogen from diseased to healthy plants (Thind and Singh, 1976). Molecular detection: Pectolytic enzymes encoded by pel genes are major pathogenicity factors of soft rot erwinias (Barras et al., 1994; Salmond, 1994). Darrasse et al. (1994) used the pel gene sequence to identify E. carotovora and observed that the tested
isolates (89) presented 420 bp bands. Similarly, Nassar et al. (1996) developed an E. chrysanthemi-specific primer set (ADE-1, ADE-2) for the detection of 78 strains of E. chrysanthemi and observed that all strains showed the 420 bp specific band (Fig. 6). Similar primers were also used by many authors for detection of this pathogen (Henz et al., 2006; Kaneshiro et al., 2008; Smid et al., 1995). Analysis of whole genome: Genome sequencing of pathogens is an important step towards understanding the mechanisms of pathogenesis and the processes that limit the host range of a strain. The nucleotide sequences of the genomes of several phytopathogenic bacteria, such as Agrobacterium tumefaciens, Pseudomonas syringae, Ralstonia solanacearum, Xylella fastidiosa, two Xanthomonas oryzae strains and many species of soft rot Erwinia, have recently been determined (Simpson et al., 2000; Buell et al., 2003; Lee et al., 2005; Salanoubat et al., 2002; Wood et al., 2001; Pritchard et al., 2013).
The Dickeya genus has recently been described as comprising six species: dianthicola, dadantii, zeae, chrysanthemi, paradisiaca and solani (Samson et al., 2005; Brady et al., 2012; Van der et al., 2013). Draft genome sequences of eight D. dianthicola and D. solani isolates were recently described (Pritchard et al., 2013), and four complete sequences of Dickeya strains, D. paradisiaca (Ech703), D. zeae (Ech586), D. chrysanthemi (Ech1591) and D. dadantii (Ech3937), have been deposited in GenBank (Glasner et al., 2011). Host-plant resistance: Host plant resistance is the most economic approach to manage this disease. Identification and use of resistance sources in breeding programmes have been employed by various researchers (Rangarajan and Chakravarti, 1969; Thind and Payak, 1976; Ebron et al., 1987; Sah and Arny, 1990). Complete resistance to this pathogen has not been reported so far, but various authors have tried to identify quantitative trait loci conferring quantitative/multigene resistance against bacterial soft rot (Canama and Hautea, 2010). Rangarajan and Chakravarti (1969) evaluated 20 maize varieties (4 composites and 16 hybrids) in the field against E. carotovora pv. zeae (M1 and M2) and observed that all varieties were resistant. Sinha and Prasad (1975) reported partial resistance against E. chrysanthemi pv. zeae in CM 600, CM 104 and CM 105 maize lines and their crosses in the field. Thind and Payak (1976) reported a laboratory method (cut stalk method) for the evaluation of maize lines against E. carotovora var. zeae. They observed that the development of disease reactions in both the laboratory and field methods was similar, but with some minor departures, and concluded that the "cut stalk method" can be used for screening maize germplasm. Thind and Payak (1978) evaluated 32 maize entries consisting of 13 inbred lines, 9 hybrids, 6 composites and 4 open pollinated varieties against E. chrysanthemi pv. zeae. They observed that two inbred lines, CM-101 and CM-110, and two open pollinated varieties, CM-600 and Basi, were tolerant to E. chrysanthemi pv. zeae. Sinha and Prasad (1981) reported that the susceptibility of maize varieties was due to enhanced proteolytic enzyme activity and changes in the protein and total amino acid contents of stalk and leaf tissues of the plant at middle and old crop ages. However, Srivastava and Prasad (1981) observed that the susceptibility of maize plants was dependent on the induction of cellulase activity by the bacterium in the infected tissues.
Cultural practices:
Pathogen infection can be suppressed through organic manure amendment, which stimulates populations of beneficial microflora, and by avoiding flooding and excessive irrigation. The ridge sowing method also helps farmers manage the disease. Kumar et al. (2015c) surveyed maize-growing areas of Punjab and found lower disease incidence and severity in ridge-sown fields than in flat-sown farmer fields.
Chemical management:
The use of chemicals to control E. chrysanthemi pv. zeae under in vitro and in vivo conditions has been widely reported (Chakravarti and Rangarajan, 1966; Rangarajan and Chakravarti, 1969; Thind and Payak, 1972; Saxena and Lal, 1972, 1973, 1974; Randhawa, 1977; Randhawa and Thind, 1978; Randhawa et al., 1979; Sinha and Prasad, 1977). Sabet (1956) tried streptomycin (dihydrostreptomycin sulphate) and terramycin (terramycin hydrochloride), singly and in combination, on E. chrysanthemi pv. zeae under both in vitro and in vivo conditions; both antibiotics were effective, singly and in combination, against the bacterium by the paper-disc method. Sinha and Prasad (1977) screened 35 chemicals and found 15 to be effective in disease control when applied immediately after inoculation of the plants. Chakravarti and Rangarajan (1966) studied the effect of streptocycline on 16 species of plant pathogenic bacteria; the antibiotic was effective at all the concentrations tested (25, 50, 100, 250, 500 and 1000 ppm) against Erwinia species, and E. chrysanthemi pv. zeae was highly sensitive. Rangarajan and Chakravarti (1969) further evaluated various antibiotics and fungicides against Pseudomonas lapsa and Erwinia chrysanthemi pv. zeae by the paper disc method. The antibiotics streptomycin, terramycin and streptocycline were very effective against both organisms at 100 ppm, while penicillin G was totally ineffective at all concentrations tested. Fungicides such as dithane M-22, captan, flytolan, ferbam and bisdithane showed little effect against either pathogen. Many other authors have also studied the effect of antibiotics on the growth of E. chrysanthemi pv. zeae (Rangarajan and Chakravarti, 1970a; Thind and Payak, 1972; Alberghina, 1974; Thind and Soni, 1983). Recently, Kumar et al. (2016) reported that copper fungicides in combination with antibiotics significantly inhibited the growth of the pathogen under both in vitro and in vivo conditions. Several authors have likewise reported the role of antibiotics, alone and in combination with copper fungicides, in controlling plant pathogenic bacteria in different crops (Raju et al., 2011; Ravi Kumar et al., 2011; Lokesh et al., 2013). E. chrysanthemi pv. zeae is highly sensitive to chlorine, which completely inhibits the growth of the pathogen at 1 µg/ml under in vitro conditions. Different methods of applying bleaching powder have been used, such as sprinkling chlorinated water between plant rows or on the basal internodes of plants, or broadcasting dust or granules (coated and uncoated, containing 22 and 28% chlorine, respectively) between the rows; all reduced disease incidence significantly, and the differences among them were not significant. Application of granules between rows, first at pre-flowering and again 10 days later, was better than the other methods. Drenching with bleaching powder solution (33% chlorine) containing 100 µg/ml chlorine at 24 h before, after and at the time of inoculation reduced the incidence by 70, 20 and 40%, respectively, in potted maize plants. Thind and Payak (1972) reported that chlorinated water (100 µg/ml chlorine) reduced the incidence by up to 75-92% when drenching was applied from the knee-high stage to the flowering stage at 15-day intervals. Similarly, Sharma et al.
(1982) found that two applications of Klorocin (22% chlorine) at the rate of 250 µg/ml chlorine resulted in significant disease control (48-28%). Broadcasting of bleaching powder in maize fields was also found to be effective and is widely acknowledged. Lal and Saxena (1978) applied bleaching powder (25 kg/ha) at two stages, first at flowering and again 10 days later, and obtained significant disease control. Many authors have likewise acknowledged the effect of bleaching powder in controlling bacterial pathogens in different crops (Padmanabhan and Jain, 1966; Segall, 1968; Lal et al., 1970; Dueck, 1974; Verma and Upadhya, 1974; Thind and Soni, 1983; Shekhawat et al., 1990; Ghosh and Mandal, 2009; Sharma and Kumar, 2009). Recently, Kumar et al. (2016)
Conclusion
D. zeae infects in the presence of adequate moisture, which is why bacterial stalk rot occurs in kharif-sown maize in India. D. zeae uses pectolytic enzymes as virulence factors, which contributes to its wide host range. The bacterium survives in soil and host debris, and its wide host range also favours long-term survival. The pathogen is characterized by biochemical and molecular approaches; at present, pel gene and rDNA specific primers are frequently used for D. zeae characterization. The disease can be controlled in the field by drenching with 100 ppm bleaching powder solution and by spraying antibiotics. However, more work is needed on resistant germplasm and on other chemicals to control this disease.
Fig. 4. Sensitivity of five different antibiotics against six isolates of D. zeae using HiMedia® antibiotic discs. Except for streptomycin, the other four antibiotics were ineffective against the test isolates (Kumar 2015).
Smid et al. (1995) developed the ERWFOR and ATROREV gene-specific primers and used them for the characterization of E. carotovora subsp. atroseptica and E. chrysanthemi in potato. Toth et al. (2001) used AFLP fingerprinting to determine the taxonomic relationships within the E. carotovora and E. chrysanthemi groups based on their genetic relatedness. Fessehaie et al. (2002) studied the molecular characterization of the DNA encoding the 16S-23S rRNA intergenic spacer regions and the 16S rRNA of pectolytic Erwinia species; comparison of 16S rDNA sequences from different species and subspecies clearly revealed intraspecies-subspecies homology and interspecies heterogeneity. Similarly, Slawiak et al. (2009) characterized Dickeya spp. from potato and two strains from Hyacinthus using biochemical assays, REP-PCR genomic fingerprinting, and 16S rDNA and dnaX sequence analysis. Furthermore, Ali et al. (2013) characterized twenty isolates of E. carotovora subspecies atroseptica (Eca) causing blackleg of potato with the help of the subspecies-specific primers Eca 1F and Eca 2R.
Table 1. Statistics for the 12 draft Dickeya genome sequences.
Kumar et al. (2017) studied the survival of the bacterium under in vivo and in vitro conditions at Punjab Agricultural University, Ludhiana. The longest survival of the pathogen (270 days) was found in both field soil and sterilized (autoclaved) soil when mixed with host (maize) debris. The survival period was positively correlated with moisture and was maximum at 90%. The pathogen showed the highest log cfu/ml at 30 °C. Pritchard et al. (2013) announced draft genome sequences of 17 isolates of Dickeya, including 12 isolates of D. dadantii, D. chrysanthemi, D. zeae, and D. paradisiaca (Table 1). Similarly, Bertani et al. (2013) determined the sequence of D. zeae (DZ2Q) from diseased rice of a Roma cultivar grown in the Po Valley.
Nagaraj et al. (2012) evaluated five antibacterial chemicals, viz. stable bleaching powder, streptocycline, cristocycline, blitox and kocide, against D. zeae under in vitro and in vivo conditions. Stable bleaching powder (100 ppm) was found to be the most effective in inhibiting the growth of the pathogen, with yield increases in three maize cultivars, viz. Dekalb Double (52.4%), Punjab Sweet Corn-1 (64%) and PMH-1 (57.9%).
Biological control: Only a few studies are available on the control of D. zeae by biological agents in maize, as compared with other crops such as potato and tomato. Kumar et al. (2016) studied the efficacy of the bioagent Pseudomonas fluorescens against D. zeae under in vitro and in vivo conditions; P. fluorescens was found effective only under in vitro conditions and not in the field. Kloepper (1983) reported that the application of plant growth-promoting rhizobacteria (PGPR) to potato seed resulted in a significant reduction of populations of E. carotovora in field trials. Nagaraj et al. (2012) also reported that tip-over disease of banana, caused by Erwinia carotovora subsp. carotovora and Erwinia chrysanthemi, can be controlled by the antagonistic bacteria Bacillus subtilis and Pseudomonas fluorescens and by the VAM fungus Glomus fasciculatum. | 2018-12-26T22:31:26.350Z | 2017-06-01T00:00:00.000 | {
"year": 2017,
"sha1": "a68572f5d94ec34cba7c4dfa3508366214743c6e",
"oa_license": "CCBYNC",
"oa_url": "https://journals.ansfoundation.org/index.php/jans/article/download/1348/1291",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a68572f5d94ec34cba7c4dfa3508366214743c6e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
254818496 | pes2o/s2orc | v3-fos-license | Model-Based Field Winding Interturn Fault Detection Method for Brushless Synchronous Machines
The lack of available measurements makes the detection of electrical faults in the rotating elements of brushless synchronous machines particularly challenging. This paper presents a novel and fast detection method for interturn faults at the field winding of the main machine, which is non-intrusive and straightforward to apply in industry, as it does not require any additional equipment. The method is built upon the comparison between the theoretical and the measured exciter field currents. The theoretical exciter field current is computed from the main machine output voltage and current magnitudes for any monitored operating point by means of a theoretical healthy brushless machine model that links the main machine with the exciter. The applicability of the method has been verified for interturn faults at different fault severity levels, both through computer simulations and experimental tests, delivering promising results.
Introduction
Wound field brushless synchronous machines (BSM) are widely used in power generation, besides many other applications, such as in the aeronautical and marine fields [1][2][3]. They are gaining ground as an alternative to permanent magnet synchronous machines, which have been extensively studied [4], especially in small- and medium-sized motor and generator applications, and more concretely in electromobility and wind power generation, respectively.
The main advantage of BSM is the absence of brushes and slip rings in the excitation system [5].As a result, maintenance requirements are reduced and safer operation is attained as spark production is avoided.In practice, common standards for AC power generators require that the protection systems of these machines shall guarantee power supply in a reliable way, but at the same time they shall trip the machine in case of an internal fault.In this regard, there are minimum requirements for fault protection [6].
There exist many BSM system topologies for various applications [7][8][9]. These systems, as shown in Figure 1, usually consist of a main synchronous machine and an exciter machine that provides the field power to the first one. A rotating diode bridge rectifier is placed between the AC output of the exciter and the main machine's field winding. The excitation system, i.e., the armature of the exciter, the rectifier and the main machine's field winding, rotates together on the same shaft. Therefore, no access is available to the excitation system; thus, when a fault takes place in any of the mentioned elements, it should be diagnosed using externally available electromagnetic or mechanical signals. This is a major diagnostic concern in BSM, aside from other common concerns shared with synchronous machines with static excitation, such as stator interturn short-circuit fault detection. Different electrical fault types can occur in the excitation system of BSM [10]. In the particular case of interturn short-circuit faults at the main machine's field winding, these shall be carefully considered, as they give rise to a magnetomotive force unbalance. These faults are represented by F in Figure 1.
In the cases that voltage regulation is performed, the Automatic Voltage Regulator (AVR) [11] seeks to compensate the loss of available turns by increasing the field current in the same proportion, in order to deliver the same average DC rectifier output level and consequently the same magnetomotive force at the field of the main machine.
Therefore, these faults typically lead to a current increase and, consequently, to an increase in the temperature of the field winding, as well as to mechanical oscillations.
The most common specific approaches regarding rotor interturn fault detection at the main field winding of BSM are based on externally available electrical or mechanical signals analysis.The Fast Fourier Transform (FFT) is applied in [12] and the third harmonic of the direct sequence of the current signal is used as an indicator of the presence of rotor interturn short-circuit faults at the field winding of the main machine.In [11], the transfer function between the output voltage and the exciter field current is monitored to detect rotor interturn faults.Alternatively, in [13], the machine's mechanical vibrations under normal and faulty conditions are analyzed through FFT, evidencing the relationship between these and the winding conditions in order to detect the presence of interturn short-circuits.
If a wider scope is considered, beyond BSM but also applicable to these types of machines [14], one of the most-used methods for rotor interturn fault detection in synchronous machines consists of an offline test known as the pole voltage drop test.There are also some popular online rotor interturn fault detection techniques for synchronous machines that are applicable to BSM, such as the ones based on air gap flux analysis [15][16][17] and on stray flux analysis [18,19], among others.
The most widespread online BSM field winding interturn fault detection methods have been categorized and compared in Table 1.Different electrical fault types can occur in the excitation system of BSM [10].In the particular case of interturn short-circuit faults at the main machine's field winding, these shall be carefully considered as magnetomotive unbalance is derived.These faults are represented by F in Figure 1.
In the cases that voltage regulation is performed, the Automatic Voltage Regulator (AVR) [11] seeks to compensate the loss of available turns by increasing the field current in the same proportion, in order to deliver the same average DC rectifier output level and consequently the same magnetomotive force at the field of the main machine.
Therefore, these faults typically lead to a current increase and, consequently, to an increase in the temperature of the field winding, as well as to mechanical oscillations.
The most common specific approaches regarding rotor interturn fault detection at the main field winding of BSM are based on externally available electrical or mechanical signals analysis.The Fast Fourier Transform (FFT) is applied in [12] and the third harmonic of the direct sequence of the current signal is used as an indicator of the presence of rotor interturn short-circuit faults at the field winding of the main machine.In [11], the transfer function between the output voltage and the exciter field current is monitored to detect rotor interturn faults.Alternatively, in [13], the machine's mechanical vibrations under normal and faulty conditions are analyzed through FFT, evidencing the relationship between these and the winding conditions in order to detect the presence of interturn short-circuits.
If a wider scope is considered, beyond BSM but also applicable to these types of machines [14], one of the most-used methods for rotor interturn fault detection in synchronous machines consists of an offline test known as the pole voltage drop test.There are also some popular online rotor interturn fault detection techniques for synchronous machines that are applicable to BSM, such as the ones based on air gap flux analysis [15][16][17] and on stray flux analysis [18,19], among others.
The most widespread online BSM field winding interturn fault detection methods have been categorized and compared in Table 1.
However, it shall be noted that rotor interturn faults usually occur due to turn-to-turn insulation failure. Therefore, insulation monitoring is closely related to these types of faults. The most common insulation failure causes are related to contamination and to the thermal and mechanical stress applied to the rotor windings [20]. Insulation condition can be monitored during maintenance through multiple techniques, such as partial discharge tests. In addition, double slot insulation failures or double rotor-ground faults, i.e., when a ground loop is established after a second rotor-ground fault is produced while a first rotor-ground fault was already present, can be at the origin of interturn short-circuits. Ground fault detection in BSM [21] can be performed through measurement brushes connected to the field circuit and a ground reference brush, although other online methods have also been developed, such as those based on flux analysis, the measurement of stator currents, the measurement of shaft voltages, twin signal sensing, telemetry or other communication modules [22,23]. Nevertheless, the most common methods to locate the position of a ground fault at the field winding of BSM involve disassembling the machine and measuring the winding insulation at different points.
In addition, there exist some model-based approaches for rotor interturn fault detection in BSM, even though they are scarcer. Model-based fault detection techniques are considered more convenient for designing protection schemes if the model is sufficiently accurate. Although electrical dq models have been developed for generators, these are not suitable for the complete BSM. Therefore, a different model was used in [24], in which discrete and continuous dynamics were combined to detect rotor interturn faults. In a closely related reference [25], a diagnosis criterion was suggested based on the relationship between the variation of the field current and the variation of the reactive power.
In order to overcome the main common disadvantage of electrical signal analysisbased and flux analysis-based methods, which is, according to Table 1, the computational complexity associated with the data acquisition time and the processing time, there is scope for faster online model-based approaches for the condition monitoring of excitation systems of BSM [26].
This paper presents an online fault detection method for rotor interturn faults of BSM, which is based on the comparison between the theoretical and the monitored actual exciter field current at steady state.The theoretical exciter field current calculation rests on a simple healthy excitation system model that takes the main machine stator voltage and current magnitudes as inputs and that applies two calculation stages successively, the first for the main machine and the second for the exciter.The computational simplicity enables real-time online diagnostics.The paper consequently develops a protection method and verifies its suitability for interturn faults at the field winding of the main machine.
The main advantages of the method hereby proposed are that it is non-intrusive, that its inputs are variables that are ordinarily available in the industry, and that it has a low computational complexity derived from the calculation algorithm in use, which is simpler than those used in other methods and more specific than other model-based approaches, enabling fast fault detection and protection.Nevertheless, before applying the proposed method, the machine shall be subjected to conventional testing in order to obtain the parameters needed as an input to compute the theoretical model.
In order to verify the method, computer simulations have been developed, and also a wide range of experimental tests has been performed on a special laboratory setup, for different rotor interturn severity levels.As mentioned before, the fault has the effect of increasing the need for excitation power in order to maintain the same output values.Therefore, the method could be generalized to any other fault in the rotating elements that has a similar effect.This paper starts with the principles of the proposed technique, which are thoroughly described in Section 2. Section 3 is dedicated to the computer simulations, followed by Section 4, which is dedicated to the experimental tests.Finally, Section 5 concludes with the main original contributions of the work.
Operational Principles of the Fault Detection Method
The fault detection method hereby proposed is based on two healthy model stages at a fundamental frequency, one for the main machine and the other for the exciter.As indicated above, the method is non-intrusive and its main distinctive factor is that its inputs are variables that are ordinarily available in the industry (a combination of machine output measurements and the exciter field current measurement), without the need for installing any additional internal or external devices or equipment.Therefore, this method is suitable for preliminary online diagnosis of the excitation system of BSM during operation at steady state, prior to moving onto other further diagnosis techniques.
The main machine model and the exciter model are built upon well-known standard methods, such as the Potier or ASA methods [27]. Therefore, use is made of conventional testing, which leads to the attainment of the no-load characteristic, the sustained three-phase short-circuit characteristic and the Potier reactance value. The characteristics are determined from a no-load saturation test and a sustained three-phase short-circuit test, respectively. Moreover, the determination of the Potier reactance value may additionally require the standard over-excitation test at zero power factor and variable armature voltage. Therefore, the parameters needed to build the models are easily available through standard testing of the machines.
It shall be noted that the Potier and ASA methods consider the magnetic saturation of the machines, which is more significant in the case of the main machine rather than the exciter, as the latter is usually oversized in order to prevent saturation even if the main machine is overloaded.
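To illustrate how the two characteristic slopes used later by the model could be obtained from the standard tests, a minimal sketch is given below. The test data, the variable names and the choice of fitting only the first points of the no-load curve for the airgap line are illustrative assumptions, not values taken from this work.

```python
import numpy as np

# Illustrative no-load saturation test: field current (A) vs. line voltage (V).
i_f_noload = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
u_noload = np.array([0.0, 110.0, 215.0, 300.0, 360.0, 400.0])

# Illustrative sustained three-phase short-circuit test: field current (A) vs. armature current (A).
i_f_sc = np.array([0.0, 0.2, 0.4, 0.6])
i_sc = np.array([0.0, 2.4, 4.8, 7.2])

# Airgap line slope (V per field ampere): fit only the unsaturated, low-excitation points.
m_airgap = np.polyfit(i_f_noload[:3], u_noload[:3], 1)[0]

# Short-circuit characteristic slope (armature amperes per field ampere): linear by nature.
m_sc = np.polyfit(i_f_sc, i_sc, 1)[0]

print(f"m_airgap = {m_airgap:.1f} V per field A, m_sc = {m_sc:.2f} A per field A")
```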
First Stage: Main Machine Model
At the first stage, a model of the main machine is constructed.The number of turns of the field winding of the main machine is referred to as N f .The theoretical main machine field current (I f,cal ) is computed from the machine output measurements by means of standard methods, such as ASA or Potier at any healthy operating point.These input measurements consist of three out of the following:
•	On the one hand, voltage measurements (UA and/or UB and/or UC) and/or current measurements (IA and/or IB and/or IC). Eventually, line voltages (UAB and/or UAC and/or UBC) could also be used instead of phase voltages;
•	On the other hand, the active power measurement (P) and/or the reactive power measurement (Q). Alternatively, the apparent power measurement (S) could be used as a replacement for either P or Q.
For example, taking the case of the ASA method, the total equivalent field magnetomotive force (m.m.f.) Ff results from the vector sum of the voltage-related (Ff,U) and current-related (Ff,I) equivalent m.m.f. components and the scalar addition of the m.m.f. related to the saturation correction (ΔFf), as per Equation (1).
As shown in Figure 2, the term Ff,U represents the equivalent field m.m.f. needed to deliver the given output voltage (U) at no-load conditions without saturation, and it is computed through the airgap line derived from the no-load saturation characteristic, using its slope value mairgap. The term Ff,I represents the equivalent field m.m.f. associated with the armature reaction, which is demagnetizing if inductive characterization is assumed, and with the voltage drop at the Potier reactance; it is computed through the sustained three-phase short-circuit characteristic, using its slope value msc for the given output current (I). Finally, the term ΔFf represents the additional equivalent field m.m.f. needed due to saturation, and it is computed through the difference between the no-load saturation characteristic and the airgap line for the actual delivered e.m.f. (Er). Equations (3)-(5) correspond to the mentioned terms.
Finally, the theoretical main machine field current (If,cal) is calculated through the expression resulting from the phasor composition shown in Figure 2 (Equations (6) and (7)).
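Since Equations (1)-(7) appear only as images in the source, the first-stage calculation is reconstructed below as a minimal sketch under the usual m.m.f.-method assumptions: the voltage-related component is taken 90° ahead of the terminal voltage phasor, the current-related component is taken opposite to the armature current phasor, and the saturation correction is read from the no-load curve at the e.m.f. behind the Potier reactance. The symbols, conventions and commented example values are assumptions of the sketch rather than the paper's exact formulation.

```python
import numpy as np

def i_f_cal(u_ll, i_line, pf, m_airgap, m_sc, x_potier, noload_if, noload_u):
    """Sketch of the first-stage (main machine) equivalent field current estimate.

    u_ll, i_line : terminal line voltage (V) and armature line current (A)
    pf           : power factor, assumed lagging (over-excited generator)
    m_airgap     : airgap line slope (V per field A)
    m_sc         : short-circuit characteristic slope (armature A per field A)
    x_potier     : Potier reactance per phase (ohm)
    noload_if, noload_u : tabulated no-load saturation characteristic
    """
    phi = np.arccos(pf)                      # lagging power-factor angle
    u = u_ll + 0j                            # terminal voltage phasor taken as reference
    i = i_line * np.exp(-1j * phi)           # lagging armature current phasor

    if_u = u_ll / m_airgap                   # voltage-related equivalent field current
    if_i = i_line / m_sc                     # current-related equivalent field current

    # Phasor composition: if_u leads U by 90 deg; if_i opposes the armature current phasor.
    ff = if_u * np.exp(1j * np.pi / 2) - if_i * np.exp(-1j * phi)

    # Saturation correction at the e.m.f. behind the Potier reactance (line value).
    e_r = abs(u + 1j * np.sqrt(3) * x_potier * i)
    delta_if = np.interp(e_r, noload_u, noload_if) - e_r / m_airgap

    return abs(ff) + max(float(delta_if), 0.0)

# Illustrative call (all values hypothetical):
# i_f = i_f_cal(400.0, 7.2, 0.85, m_airgap=540.0, m_sc=12.0, x_potier=1.1,
#               noload_if=[0, 0.2, 0.4, 0.6, 0.8, 1.0],
#               noload_u=[0, 110, 215, 300, 360, 400])
```

For a lagging power factor this composition reduces to the familiar |Ff| = sqrt(Ff,U² + Ff,I² + 2·Ff,U·Ff,I·sin ϕ) plus the saturation term, which is what Figure 2 constructs graphically.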
Intermediate Rectifier Relationship
It is noteworthy that the full-wave uncontrolled three-phase diode bridge rectifier feeds a highly inductive load, consisting of the main machine field winding, which is constant at the fundamental frequency. Therefore, in healthy conditions, a direct linear relationship can be established between the main machine field current (If,cal), considered as a constant DC value, and the exciter output current (Iout,cal) r.m.s., as per Equation (8). Consequently, the theoretical exciter output current in healthy conditions (Iout,cal) can be directly computed from the theoretical main machine field current (If,cal). The calculation of the exciter output voltage (Uout,cal) is also unequivocal on the same basis.
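As Equations (8) and (15) are not reproduced in the extracted text, the sketch below assumes the textbook result for an ideal six-pulse diode bridge feeding a smooth DC current, Iout,rms ≈ √(2/3)·If, together with an assumed constant of proportionality z_equiv between Uout,cal and Iout,cal; both are assumptions of the sketch rather than the paper's exact expressions.

```python
import math

def exciter_output_from_field(i_f_cal, z_equiv):
    """Intermediate rectifier stage (sketch): ideal six-pulse bridge with smooth DC current.

    i_f_cal : theoretical main machine field current (A, DC)
    z_equiv : assumed constant relating exciter output line voltage to line current (ohm)
    """
    i_out_cal = math.sqrt(2.0 / 3.0) * i_f_cal   # each AC line conducts during 2/3 of the period
    u_out_cal = z_equiv * i_out_cal              # linear relation, constant equivalent load
    return i_out_cal, u_out_cal
```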
Second Stage: Exciter Model
At the second stage, a model of the exciter is constructed in order to calculate the theoretical exciter field current (Ie,cal). The number of turns of the field winding of the exciter is referred to as Ne. Following the same ASA steps as developed in Section 2.1.1, the general expressions for the exciter model are given by Equations (9)-(14). However, the general exciter model is largely simplified due to the fact that the load connected to the exciter is comparable to a constant equivalent impedance (Z) at the fundamental frequency, as shown in Figure 3. The equivalent impedance represents the load consisting of the diode bridge rectifier and the main machine field winding, thus being constant both in magnitude (|Z|) and in phase (ϕe) at the fundamental frequency. The equivalent impedance is inductive, so the armature reaction is demagnetizing. The per-phase equivalent impedance can be expressed as per Equation (15).
Accordingly, as shown in Figure 4, when the excitation power varies and therefore the exciter output current varies (Iout,cal to I'out,cal), the same happens for the exciter output voltage (Uout,cal to U'out,cal), in the same proportion and keeping the same phase shift, so that the equivalent impedance Z remains constant both in magnitude (|Z|) and in phase (ϕe). As a consequence, both exciter ASA components (Fe,U to F'e,U and Fe,I to F'e,I) also vary in the same proportion and maintain their phase difference (θ), given that they stem from linear relationships with the exciter output voltage and the exciter output current, as per Equations (10) and (11), respectively. Finally, the exciter total equivalent field m.m.f. increases in the same-mentioned proportion (Fe to F'e), and a direct scalar magnitude relationship is deduced between the exciter output current (Iout,cal) and the theoretical exciter field current (Ie,cal), avoiding the general use of Equation (14). This way, the method for the exciter is ultimately simplified so that, with only If,cal as an input, Ie,cal can be directly and linearly computed.
It shall be noted that exciters are usually oversized in order to have a fast transient response in case of increasing the current at the main machine field winding.Therefore, they do not saturate at any expected operating point at the steady state.The non-linearity related to saturation has not been considered in the simplification (I e,cal is considered proportional to I out,cal at any steady state operation point), therefore ∆I e = 0, even though a scalar component addition could be eventually performed consisting in the difference between the no-load saturation characteristic and the no-load characteristic of the exciter.
Physically, as the effect of the exciter armature reaction depends on the power factor (cosϕ e ), which in addition to being lagging then demagnetizing remains constant, the phase difference between the exciter main field flux (linked to the exciter field poles) and the exciter armature reaction flux, is constant.Therefore, the exciter resultant flux (linked to the final e.m.f.delivered by the exciter) remains with a constant phase shift at any operating point.This interpretation, given that the impedance connected to the exciter is constant, justifies the direct relationship between the magnitudes of the flux components and the exciter load power.
Fault Detection Method
A schematic layout of the proposed fault detection and protection method is provided in Figure 5.The proposed fault detection method is based on the fact that in case of interturn fault in the field winding, the AVR will call for a greater need of excitation power with respect to the theoretical healthy excitation power, calculated through the two-stage model described in Section 2.1., in order to finally maintain the same machine output setpoints.However, the method is also applicable when the excitation is controlled in manual mode, as it will be developed further on.
The needed inputs consist of discrete measurements that are ordinarily available continuously in industrial applications, as the electrical equipment usually includes voltage and current instrument transformers, wattmeters and varmeters in order to monitor the operating point of the machine at each time. In Figure 5, basic electrical input magnitudes (voltages and currents) on each phase have been represented as an example, obtained through measurement voltage transformers (PT) and measurement current transformers (CT), respectively, although any of the combinations of basic and derived variables as described in Section 2.1.1 would be possible. Regarding the exciter field current measurement, a DC current shunt has been represented, although other techniques, such as a Hall effect sensor, could also be used.
First, the theoretical exciter field current value (Ie,cal) is estimated through the two-stage healthy condition model, which is summed up in the following calculation steps:
1. Main machine model
The theoretical main machine field current value (I f,cal ) is estimated through the first stage healthy condition model, through the application of standard methods, such as the ASA method, as per Section 2.1.1.Specifically, Equation ( 6) provides the desired estimation from the inputs.
2. Intermediate rectifier relationship
The theoretical exciter output current (I out,cal ) r.m.s. is computed from I f,cal through wellknown relationships of the full-wave three-phase diode bridge rectifier, as per Section 2.1.2.Specifically, Equation ( 8) provides the value of I out,cal from I f,cal and Equation ( 15) provides the value of the theoretical exciter output line voltage (U out,cal ) r.m.s.from I out,cal through a linear relationship.
3. Exciter model
The theoretical exciter field current value (I e,cal ) is estimated through the second stage healthy condition model, through the application of standard methods, such as the ASA method, as per Section 2.1.3.Specifically, Equation ( 14) provides the value of I e,cal from I out,cal and U out,cal .However, this step could be reduced to a simple linear experimental relationship of I e,cal with I out,cal , thus with I f,cal , owing to the fact that the load connected to the exciter is comparable to a constant equivalent impedance (Z) at a fundamental frequency.
The obtained value of Ie,cal represents the exciter field current that would be necessary if the excitation system were in a healthy condition for the same actual operating point of the BSM. Parameter r represents the ratio, at any certain operating point, between the theoretical exciter field current (Ie,cal) and the actual measured exciter field current (Ie,mea), which is usually measured with a shunt or a Hall effect sensor. If Ie,mea > Ie,cal, as in the faulty condition cases, r < 1.
In the case of interturn faults in the main machine field winding, the fault severity can be arbitrary inside a certain range depending on the number of shorted turns (0 < N ≤ Ntotal), or it can even increase progressively over time. The greater the proportion of shorted turns (N) with respect to the total number of field winding turns (Ntotal), the more severe the fault is said to be.
If the AVR is in operation, it seeks to compensate the mentioned loss of available turns by increasing the exciter field current in the same proportion, in order to deliver the same average DC rectifier output level and consequently the same magnetomotive force at the field of the main machine. Therefore, 1 − r coincides with the proportion of shorted turns (N) with respect to the total number of turns (Ntotal). In this case, an increase in Ie,mea would be seen while Ie,cal remains constant (r < 1), as the operating point of the machine remains at the same value.
On the other hand, if the AVR is not in operation, a drop in the output reactive power (Q) is produced, and in this case I e,mea remains at the same value while I e,cal decreases proportionally to the ratio between N and N total (r < 1 as well), as the new operating point with a lower reactive power output is considered.
Finally, parameter s represents the percentual fault severity level estimation, i.e., the estimation of the proportion of shorted turns, directly computed from factor r. Equations ( 16) and (17) gather the definitions of r and s, respectively, through which rotor interturn faults can be detected when I e,mea > I e,cal , which leads to r < 1 and s > 0.
The resulting severity estimation (s) shall be used in order to trip the machine, or alternatively to give out an alarm or a warning, when a certain threshold (s trip ) is attained, which is related to a certain proportion of admissible shorted turns.
The value of I e,cal can be affected by a factor k to perform the calculations, so that the rotor interturn fault detection is carried out based on the following comparison: I e,mea > k• I e,cal .The value of factor k shall be set according to the accuracy of the estimation, especially regarding the precision of the measuring devices in use, typically between 1.02 and 1.05 (k = 1.05 in the case of the experimental tests described in Section 4).This would be assimilable to applying a factor of safety with the aim of avoiding unwanted trips or alarms in normal conditions due to inaccuracy issues.
A time delay parameter (T ON ) shall be accurately set according to the exciter capacity to withstand the fault, generally upon an inverse function with respect to the severity level, among other factors.
The protection method is conceived to cover steady state operating points, because given that each variable has a different transient behavior, the reliability of the theoretical exciter field current estimation method during changes of the operating point is not guaranteed.The protection method may be applied computationally at an industrial level through the use of a Digital Signal Processor (DSP) or a Microprocessor to perform the real-time fault detection automatically.
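Summarizing Figure 5, a minimal sketch of the comparison and trip logic is given below. The exciter stage is collapsed into an assumed constant gain between If,cal and Ie,cal, as allowed by the linearity discussed in Section 2.1.3, and Equations (16) and (17) are taken as r = Ie,cal/Ie,mea and s = (1 − r)·100%; the sampling interface and the hypothetical helper i_e_cal_fn are assumptions of the sketch, not part of the paper.

```python
def detect_interturn_fault(samples, i_e_cal_fn, k=1.05, t_on=2.0):
    """Evaluate the r/s criterion over steady-state samples (sketch).

    samples    : iterable of (time_s, operating_point, i_e_mea) tuples
    i_e_cal_fn : hypothetical helper mapping an operating point to the healthy-model Ie,cal,
                 e.g. an assumed constant gain applied to the chained first-stage and
                 rectifier sketches given earlier
    k          : margin factor on Ie,cal accounting for model and sensor accuracy
    t_on       : required persistence of the fault condition (seconds) before tripping
    Returns (tripped, last_severity_percent).
    """
    fault_since = None
    severity = 0.0
    for t, op, i_e_mea in samples:
        i_e_cal = i_e_cal_fn(op)
        r = i_e_cal / i_e_mea                   # Eq. (16): theoretical over measured exciter field current
        severity = max(0.0, (1.0 - r) * 100.0)  # Eq. (17): estimated proportion of shorted turns, in %
        if i_e_mea > k * i_e_cal:               # fault condition with safety margin k
            if fault_since is None:
                fault_since = t
            if t - fault_since >= t_on:
                return True, severity           # trip (or raise an alarm) after T_ON
        else:
            fault_since = None                  # the condition must hold continuously
    return False, severity
```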
Computer Simulations

Computer Simulation Model
A Simulink ® simulation model was built, as shown in Figure 6, in order to validate the theoretical principles described in the previous section, which are the basis of the healthy model.On this simulation model, numerous healthy and faulty condition tests were carried out in order to check the usefulness of the detection method, for different [P,Q] operating points.
Healthy Condition Simulations
The main purpose of the healthy condition simulations is to verify the excitation system model, specifically the direct magnitude relationship between the theoretical main machine field current (I f,cal ) and the theoretical exciter field current (I e,cal ).
First of all, the assumption that the load connected to the exciter is comparable to an equivalent impedance which is constant both in magnitude and in phase shall be verified, as it is the basis of the deduction of the theoretical excitation system model.Therefore, U out and I out waveforms shall be checked for different operating conditions.
As an example, these waveforms are shown for [P1 = 250 kW, Q1 = 0 kvar] and for [P2 = 1500 kW, Q2 = 0 kvar] in Figure 7. Both waveforms have a heavy sinusoidal component and they are affected by the diode conduction sequence. When the [P1, Q1] case is compared to the [P2, Q2] case, in which the exciter needs to provide greater excitation power, it can be seen that only an amplitude variation takes place, resulting in the ratios |Uout|1/|Iout|1 and |Uout|2/|Iout|2 remaining essentially constant. This means that the magnitude of the equivalent impedance seen from the output terminals of the exciter is constant. Furthermore, if both cases are compared, there is no meaningful phase shift difference between the Uout and Iout waveforms, which is to say that Iout keeps the ϕe delay with reference to Uout regardless of the excitation level. It can be concluded that an eventual Z = |Z|∠ϕe equivalent resistive-inductive impedance remains constant both in its magnitude and its phase.
As an extension, the Uout and Iout phasors resulting from all the performed healthy condition simulations have been gathered in Figure 7, evidencing their proportional growth in magnitude and their constant phase shift regardless of the excitation level.

On the other hand, in order to verify the assumption made on the rectifier bridge model, the relationship between the main machine field current (If) and the exciter output current magnitude (|Iout|) was checked to be linear, as shown in Figure 8.

On the whole, all the results obtained from the performed simulations have been gathered in order to evaluate the relationship between both excitation currents (If and Ie), as shown in Figure 9. It is evidenced that the main machine field current (If) is proportional to the exciter field current (Ie), leading to the conclusion that the simulations sustain the theoretical developments.

Finally, all healthy condition exciter field current (Ie) simulation results are presented in Figure 10 for different [P,Q] operating points at the reference main machine output voltage. It shall be noted that different output voltages result in different parallel surfaces above (U > 400 V or 1 p.u.) or below (U < 400 V or 1 p.u.) the surface shown in the mentioned figure. This healthy set of data is provided as a reference to compare the results in the faulty condition cases for the same operating points in Figure 11.
Faulty Condition Simulations
Faulty condition tests were carried out to verify the potential of the method to detect electrical faults in the rotating excitation system of BSM, which have in common that they imply a rise in the exciter field current when the AVR system is in service.
Simulations of interturn faults at the field winding of the main machine with different fault severity levels (N/N total = 5, 10, 15 and 20%) have been performed.The exciter field current (I e ) in case of fault at reference voltage (U = 400 V or 1 p.u.) for different [P,Q] operating points is presented in Figure 11.The reference healthy case surface provided in Figure 10 is also shown in Figure 11 as a baseline.
As mentioned before, in order to deliver the same magnetomotive force at the field of the main machine and to consequently maintain the same output, the AVR system tends to compensate the loss of available turns by increasing the main machine's field current (If) in the same proportion. Therefore, and given that the main machine's field current (If) is linear with the exciter field current (Ie), as has been proved, an increase in Ie proportional to the proportion of shorted turns is attained, as shown in Figure 11.
As can be deduced from Figure 11, an appropriate differentiation can be carried out between the faulty condition cases and the healthy case baseline, making straightforward the distinction between the healthy and the faulty condition cases as well as the fault severity estimation. This fact is made manifestly clear by the perfectly proportional gaps between parallel surfaces according to the proportion of shorted turns, i.e., in case of a fault the exciter field current exceeds the analogous healthy operating point by 5, 10, 15 or 20%, according to N/Ntotal.
It shall be noted that if the AVR was not in service, characteristic drops in the output reactive power (Q) would be recognized instead.A new operating point on the faulty condition surface would be attained but for the same exciter field current, and consequently the proportional gaps would be similar.
Experimental Setup
A standard practice in BSM testing is to mount temporary slip rings on the rotor in order to enable taking direct measurements in the excitation system. A schematic representation of this arrangement is provided in Figure 12. A representation of the regular arrangement is also provided in the same figure.
A standard practice in BSM testing is to mount temporary slip rings on the rotor in order to enable to take direct measurements in the excitation system.A schematic representation of this arrangement is provided in Figure 12.A representation of the regular arrangement is also provided in the same figure.The experimental setup, which is shown in Figure 13, includes (a) an induction motor that drives the shaft, which is controlled by a variable-frequency drive (VFD), and on its same shaft, (b) a BSM, which is excited through a DC variable voltage supply system.A (c) diode bridge rectifier with sectional terminal blocks was made static between the exciter and the main machine and (d) slip rings were installed for both its inputs and its outputs.The experimental setup, which is shown in Figure 13, includes (a) an induction motor that drives the shaft, which is controlled by a variable-frequency drive (VFD), and on its same shaft, (b) a BSM, which is excited through a DC variable voltage supply system.A (c) diode bridge rectifier with sectional terminal blocks was made static between the exciter and the main machine and (d) slip rings were installed for both its inputs and its outputs.The experimental setup, which is shown in Figure 13, includes (a) an induction motor that drives the shaft, which is controlled by a variable-frequency drive (VFD), and on its same shaft, (b) a BSM, which is excited through a DC variable voltage supply system.A (c) diode bridge rectifier with sectional terminal blocks was made static between the exciter and the main machine and (d) slip rings were installed for both its inputs and its outputs.The machine is connected to the grid through a transformer and an adjustable AC busbar set.When the machine is paralleled with the grid, active power (P) and reactive power (Q) can be controlled through the VFD and the DC adjustable voltage supply system, respectively.In addition, the grid side voltage (U) can be modified through the adjustable AC busbar set.
The tests were performed on a 4-salient-pole, 5 kVA, 400 V BSM. Detailed data about the main synchronous machine and the exciter are provided in Table 2 and Table 3, respectively. Various measuring instruments were installed at the following points, as can also be seen in Figure 12:
1. An ammeter at the excitation DC input of the exciter;
2. An ammeter at the three-phase connection between the exciter and the rectifier;
3. An ammeter at the DC connection between the rectifier and the main machine field winding;
4. Three-phase voltage and current sensors and a wattmeter at the output of the main machine.
Healthy Condition Tests
The BSM has been tested in healthy conditions over a wide range of operating conditions. A set of 1575 healthy condition tests was performed in the range of 0 ≤ P ≤ 1500 W and −1000 ≤ Q ≤ 2500 var, with grid side voltages in the range of 320 ≤ U ≤ 420 V. As can be inferred, the healthy condition tests have covered generator operation, from under-excited to over-excited conditions, at different grid side voltage values.
The main machine model having been verified in previous works through standard methods [28], the excitation system model description made in Section 2.1.2, based on the direct magnitude relationship between the estimated main machine field current (If,cal) and the estimated exciter field current (Ie,cal), shall be experimentally verified. The relationship between the main machine actual field current (If,mea), which is directly measurable in the special experimental setup, and the exciter actual field current (Ie,mea) was studied for all the healthy condition tests. The result is shown in Figure 14, drawing the conclusion that the experimental tests evidence the aforementioned theoretical developments.

On the other hand, the results of the healthy condition tests provide a valuable set of reference experimental data, which consists of the actual measured exciter field current (Ie,mea) values for different [P,Q] operating points, at different grid side voltage values. Some exciter field current measurements in healthy conditions are presented in Figure 15 for different [P,Q] operating points at rated output voltage (U = 400 V). It shall be noted that different output voltages result in different parallel surfaces above (U > 400 V) or below (U < 400 V) the shown surface.
The data provided in Figure 15 may be used to assess the accuracy of the estimation method when actual measured exciter field current values (I e,mea ) are compared to the theoretical exciter field current value (I e,cal ) computed through the two-stage healthy condition model for each healthy operating point.The relative errors with respect to I e,mea obtained from this comparison are represented in Figure 16 for each [P,Q] operating point at rated output voltage (U = 400 V).
As shown in Figure 16, for operation points at rated output voltage (U = 400 V), the errors committed do not exceed 6% in any case. In general, the errors are more notable for low output voltage and low output power, because with low voltage and current magnitude values at the exciter field winding (voltages and currents that hardly reach 1 V and 10 mA, respectively), relative errors shoot up, given the sensitivity of the measuring devices. The numerical values are displayed in Table 4.
The obtained estimation confidence intervals are deemed acceptable for fault detection use. In any case, in industrial applications synchronous machines tend to operate in a steady state inside a specific operating region characterized by a minimum output active power and overexcited conditions, which means that the use of specific sub-models can be considered in different operating regions. This sub-model approach would lead to higher accuracy levels, which is desirable in order to avoid false fault detections or trips in the case of electrical faults that imply low increments in the excitation power (if AVR system is in service) or low drops in the output reactive power (if AVR system is not in service).
Moreover, it shall be noted that when the relative errors are studied, the main machine model is found to be responsible for the main contributions to the overall two-stage model performance errors; that is, linking the main machine model with the exciter model under the theoretical developments does not introduce any significant additional estimation error.
Faulty Condition Tests
Faulty condition tests were carried out with the main purpose of verifying the potential of the proposed approach to detect interturn faults at the main machine field winding of BSM (with proportion of shorted turns N/Ntotal).
These tests have been performed through the connection of different resistors in parallel with a rotor pole winding, so as to decrease the current flow through it in the desired proportion, as shown in Figure 17. The field current flow reduction through the mentioned pole has an effect equivalent to shorting a certain number of turns, namely a decrease in the field m.m.f. (ampere-turns) provided by the pole. Accordingly, faulty condition tests have been performed for different percentual fault severity levels: N/Ntotal = 4.36, 7.40, 11.17 and 15.91%, given the relationships provided in Table 5.

The results for P = 1000 W at constant voltage (U = 385 V) are shown in Figure 18. The exciter field current measurements for each of the previous faults are referred to as Ie,mea,F,4.36%, Ie,mea,F,7.40%, Ie,mea,F,11.17% and Ie,mea,F,15.91%. In addition, the exciter field current measurements in healthy conditions are provided in the same figure and are referred to as Ie,mea,healthy.

Table 5. Relationships between the parallel resistor value (Rn) and the equivalent proportion of shorted turns (N) with respect to the whole field winding (Ntotal). The whole field winding constitutes a 12.6 Ω total impedance.
Rn [Ω]    N/Ntotal [%]
1.8       15.91
3.9       11.17

As can be deduced from Figure 18, an appropriate differentiation can be carried out between the healthy condition case and the faulty condition cases. This fact is made manifestly clear by the gaps between the lines in healthy and faulty conditions, which are wider for higher Q values. Moreover, the proportional increase in the needed exciter field current with the proportion of shorted turns (N/Ntotal) is verified through the experimental approach.
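The mapping in Table 5 between a parallel resistor value and the equivalent proportion of shorted turns can be reproduced from a simple current-divider argument; the sketch below is an illustrative reconstruction only, since it assumes a four-pole field winding with equal per-pole resistance (12.6 Ω / 4), which is not stated explicitly in this excerpt.

```python
# Illustrative reconstruction (assumptions: four poles, equal per-pole
# resistance). The resistor Rn in parallel with one pole winding diverts part
# of the field current around that pole; the diverted share of that pole's
# ampere-turns, referred to the whole winding, is the equivalent N/Ntotal.

R_TOTAL = 12.6            # whole field winding resistance [ohm]
N_POLES = 4               # assumed number of poles
R_POLE = R_TOTAL / N_POLES

def shorted_turns_fraction(r_n: float) -> float:
    """Equivalent proportion of shorted turns for a parallel resistor r_n."""
    diverted = R_POLE / (R_POLE + r_n)   # fraction of pole current bypassed
    return diverted / N_POLES            # referred to the whole field winding

for r_n in (1.8, 3.9):
    print(f"Rn = {r_n} ohm -> N/Ntotal = {100 * shorted_turns_fraction(r_n):.2f} %")
# Rn = 1.8 ohm -> N/Ntotal = 15.91 %
# Rn = 3.9 ohm -> N/Ntotal = 11.17 %
```

Under these assumptions the two listed resistor values reproduce the 15.91% and 11.17% severity levels; the 4.36% and 7.40% levels would correspond to larger resistors whose values are not given in this excerpt.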
Conclusions
This paper presents a new model-based detection method for interturn faults at the field winding of the main machine.
The proposed method is based on the comparison of the actual measured exciter field current and the theoretical exciter field current calculated at each operating point. The theoretical exciter field current is computed through a two-stage model for healthy conditions from the machine output measurements.
At the first stage, a model of the main machine is used. The main machine theoretical field current is computed from the machine output measurements using one of the well-known standard methods, such as the ASA or Potier methods.
At the second stage, a model of the exciter is used. From the main machine theoretical excitation current, the exciter model calculates the exciter field current using several verified properties. Among these properties, it shall be remarked that, given the characterization of an analogous equivalent load connected to the exciter, with constant magnitude and power factor, it is possible to move from a vector relationship to a direct scalar relationship between the exciter field current and the exciter output current.
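The decision step that follows from this two-stage model can be summarized in a few lines; the sketch below uses hypothetical names and a placeholder threshold taken from the healthy-model error band reported earlier, and is not the authors' implementation.

```python
# Schematic decision step only. i_e_cal is assumed to come from the two-stage
# healthy-condition model (stage 1: ASA/Potier main machine field current;
# stage 2: exciter model); i_e_mea is the measured exciter field current.

def interturn_fault_suspected(i_e_mea: float, i_e_cal: float,
                              threshold: float = 0.06) -> bool:
    """Flag a possible field-winding interturn fault when the measured exciter
    field current exceeds the healthy estimate by more than the error band
    observed in healthy conditions (about 6% at rated voltage)."""
    return (i_e_mea - i_e_cal) / i_e_cal > threshold

# Example: a 12% excess over the healthy estimate would be flagged.
print(interturn_fault_suspected(i_e_mea=0.112, i_e_cal=0.100))  # True
```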
The advantages of this new method are its non-intrusiveness and the ordinary availability of the required signals in industrial applications such as power plants. Moreover, it requires a shorter computational time in comparison with other monitoring techniques.
The use of the method to provide an indication in the event of an interturn fault in the field winding is of particular interest for the operator as a first online strategy, before moving to further diagnosis methods. The fault detection method has been validated with consistent results through computer simulations and through experimental tests carried out on a special laboratory setup.
In future work, the method could be extended to the detection of any electrical fault throughout the rotating excitation system, such as diode faults or loose connections which render a branch of the rectifier out of service, as all of them imply a difference between the measured and the theoretical exciter field current. Fault classification techniques applied after detection could also be complemented with other diagnostic techniques for precise fault location throughout the rotating excitation system.
Figure 2. Main machine field current excitation model construction through the ASA method: (a) Equivalent exciter field current components determination; (b) Phasor diagram.
Figure 3. Exciter model (Upper figure: rectifier considered; Bottom figure: equivalent impedance Z simplification). Rotating elements are contained in rectangular sections.
Figure 4. Exciter machine field current calculation by the ASA method. Phasor diagram.
Figure 5. Simplified layout of the detection method for faults in the excitation system of BSM.
Figure 8. Relationship between If (DC) and |Iout| (AC rms) for the simulation model exciter.
Figure 9. Relationship between Ie (DC) and If (DC) for the simulation model exciter.
Figure 10. Simulation results. Per-unit exciter field current (Ie) simulation results for different [P,Q] operating points at reference output voltage (U = 400 V or 1 p.u.) in healthy conditions.
Figure 11. Simulation results. Per-unit exciter field current (Ie) simulation results for different [P,Q] operating points at reference output voltage (U = 400 V or 1 p.u.) in faulty conditions (interturn faults at the field winding of the main machine, with different severity levels: N/Ntotal = 5, 10, 15 and 20%).
Figure 14. Experimental results. The relationship between the measured main machine field current (If,mea) and the measured exciter field current (Ie,mea) for all the experimental healthy condition tests.
Figure 15. Experimental results. Measured exciter field current (Ie,mea) for different [P,Q] operating points at the rated output voltage (U = 400 V) in healthy conditions.
Figure 16. Experimental results. Healthy theoretical model relative errors (%) with respect to the actual measurements collected from healthy condition tests, for different [P,Q] operating points at rated output voltage (U = 400 V).
Figure 17. Main machine field winding experimental connection in order to perform interturn faults with different parallel resistors with Rn values: (a) Schema; (b) Connection.
Figure 18. Exciter field current measurements for [P = 1000 W, Q], at fixed output voltage (U = 385 V) in healthy and main field winding interturn fault conditions.
Table 1. A comparison of various online BSM field winding interturn fault detection methods.
Table 2. Main machine data.
Table 4. Experimental results. Healthy theoretical model relative errors (%) with respect to the actual measurements collected from healthy condition tests, for different [P,Q] operating points at rated output voltage (U = 400 V).
Table 5. Relationships between the parallel resistor value (Rn) and the equivalent proportion of shorted turns (N) with respect to the whole field winding (Ntotal). The whole field winding constitutes a 12.6 Ω total impedance. | 2022-12-18T16:15:53.322Z | 2022-12-15T00:00:00.000 | {
"year": 2022,
"sha1": "025798abeafe82ddd2dd3481b504f9afd33ced76",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1702/10/12/1227/pdf?version=1671524457",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d582e6333ff3c9c8bbd3b414e35367e013596cc4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
225751218 | pes2o/s2orc | v3-fos-license | The value of Muslim and non-Muslim life: A comparative content analysis of elite American newspaper coverage of terrorism victims
A spate of terrorist attacks in the Muslim-majority world and the non-Muslim-majority West has sparked debates about an alleged double standard in Western news coverage of terrorism victims, with critics alleging Western news outlets are less concerned with Muslim victims than non-Muslim victims. This content analysis comparatively examined American newspaper framing of two terror attacks occurring in the non-Muslim West with three attacks occurring in the Muslim-majority world. Findings show American papers covered attacks in non-Muslim-majority societies prominently and framed them as acts of terrorism, and covered attacks in Muslim-majority societies scantily and framed them as internal conflicts.
concerned with Muslim victims of tragedy than non-Muslim victims. For instance, informal analyses by Barnard (2016), Moghul (2016), and Johnson (2016) all contended that Western news media have allotted more attention to Western victims of terrorism than Muslim victims.
Although research has not examined this phenomenon specifically, scholars have looked at news attention to victims in the context of both race and geography. Studies have generally found that Western news outlets pay more attention to white, Western victims of natural and humanitarian disasters than brown, black, and non-Western victims (see Adams, 1986; Fair, 1993; Hanusch, 2008, 2012; Hawkins, 2002; Joye, 2009, 2010; Moeller, 1999; Simon, 1997; Van Belle, 2000), and that outlets also highlight white, Western casualties of war and conflict more than non-white victims (see Griffin and Lee, 1995; Herman and Chomsky, 1988; Youssef, 2009). Some American journalists, however, have argued that allegations of double standards in terrorism coverage, in particular, are unfair and overly simplistic. For instance, Phillips (2015) posited that there is not a double standard per se, but that Western news reporting of terrorism simply follows traditional news values formulas. Specifically, Phillips argued that Western news outlets cover terrorist attacks occurring in Western countries more prominently because they are rare, unusual, and unexpected. Wrong (2014) offered up a similar defense of Western journalism, but her argument centered primarily on conflict reporting in Africa.
This study builds on the aforementioned research and, more specifically, on a study by el-Nawawy and Elmasry (2017) that examined The Washington Post coverage of five prominent terrorist attacks occurring in 2015 and 2016 against Muslim and non-Muslim victims, respectively. Like the study by el-Nawawy and Elmasry (2017), this research examines American newspaper coverage of attacks carried out in Paris, France (November 2015); Ankara, Turkey (October 2015 and March 2016); Maiduguri, Nigeria (January 2016); and Brussels, Belgium (March 2016). While the study by el-Nawawy and Elmasry (2017) was carried out qualitatively and on a single newspaper, however, the current research employs a quantitative content analysis of 10 elite American newspapers.
As the subsequent literature review will make clear, previous studies into the broad areas of research mentioned above have tended to focus on either humanitarian disasters or wars and have generally relied on small samples and qualitative techniques which do not produce statistically generalizable results. Although prior quantitative research on terrorism coverage has been carried out, it has focused on terrorism perpetrators, rather than victims, and has not gone beyond simple measures of news topic prominence. The current study is unique, then, not only because it employs a quantitative approach and a coding scheme that goes beyond news topic prominence, but also because it examines a broad sample of elite, agenda-setting American newspapers and focuses on coverage of diverse victims of Muslim-perpetrated terrorism. As a subsequent section of this article will explain, four of the five attacks under study were carried out, at least partly, by Muslim groups who claim an Islamic identity, while one of the attacks -the March 2016 Ankara attack -was committed by a secular group (the Kurdistan Freedom Falcons or TAK) whose members are nominally Muslim Kurds (Brandon, 2006). This article's comparative focus on victims of Muslim-perpetrated terror is important because it is precisely this issue that is at the heart of many current debates. This article, then, proceeds from the premise that it is important to systematically quantify apparent disparities in elite American newspaper coverage of global terrorism victims, and also to go beyond notions of prominence to consider other aspects of victim humanization. The study's focus on attacks taking place in predominantly white, black, Muslim, non-Muslim, Western, and non-Western societies, respectively, allows it to simultaneously consider the impact of race, religion, and geography on American news coverage patterns. Importantly, all five selected attacks were carried out by Muslims, a fact which helps naturally control for a possible confounding variable -the religious identity of the attacker(s). The study's basic purpose is to determine how 10 elite, geographically dispersed American dailies framed Muslim victims of terrorism in Ankara, black, Muslim victims of terror in Maiduguri, and predominantly white, non-Muslim victims of terrorism in Paris and Brussels.
Background on studied terrorist attacks
This study examines American newspaper framing of five terror attacks that took place in four cities -Ankara, Turkey; Paris, France; Maiduguri, Nigeria; and Brussels, Belgium -within the span of 6 months near the end of 2015 and beginning of 2016. While attacks in Ankara and Maiduguri targeted mainly Muslim victims, attacks in Paris and Brussels were perpetrated against mostly non-Muslims. All five attacks were carried out by Muslims.
The Ankara attacks occurred on 10 October 2015 and 13 March 2016, respectively. In the October attack, two men detonated bombs outside the Ankara central railway station during a pro-Kurdish People's Democratic Party (HDP) peace rally. The attack killed more than 95 people and injured approximately 250 (Melvin, 2015). While no party claimed responsibility, the Turkish government blamed the Islamic State of Iraq and the Levant (ISIL) and Kurdish militants affiliated with the secular Kurdistan Workers' Party (PKK; 'Peace rally bombing', 2015). The Turkish state considers the PKK to be a terrorist organization, and both sides have been involved in violent confrontation since the 1980s (Yeginsu and Arango, 2015). The March 13, 2016 attack occurred when car bombs were detonated in Güvenpark, a central transport and commerce area in Ankara. Thirty-seven people were killed and 125 injured. The Kurdistan Freedom Falcons (TAK), which has links to the PKK, issued a statement claiming responsibility for the attack (Letsch, 2016). Although the TAK's membership base comprises Kurds who are nominally Muslim, the group's ideology is secular (Brandon, 2006).
The Maiduguri, Nigeria attack took place on January 30, 2016, when militants belonging to the extremist group Boko Haram released gunfire, detonated bombs and set fire to the Nigerian village of Dalori, just northeast of Maiduguri. About 90 people were killed (Isuwa and Searcey, 2016). Boko Haram joined the so-called Islamic State of West Africa and has been launching attacks in northeastern Nigeria since 2009 ('Boko Haram blamed for deadly attack on Nigeria village', 2016).
The Paris, France attack occurred on November 13, 2015, when eight ISIL members carried out six coordinated attacks in the northern and central parts of Paris, France, killing about 130 people and injuring about 400. The attacks hit a major stadium, a concert hall, several restaurants, and bars (Walt, 2015). The Brussels, Belgium attack was carried out on March 22, 2016 by five ISIL-affiliated men who carried out multiple, coordinated attacks. More than 30 people were killed ('Brussels explosions', 2016;Rankin and Henley, 2016).
Literature review
To date, no studies have employed quantitative techniques to compare American coverage of Muslim and non-Muslim victims of Muslim-perpetrated terrorism. However, a good deal of research has been devoted to related areas -coverage of conflict and war (including conflicts in Muslim countries), and reporting of natural and humanitarian disasters, including in Muslim countries.
Western media coverage of victims of conflict and war
A series of studies have evaluated Western, particularly Anglo-American, media framing of violent conflicts. General findings point to marginalized and stereotypical coverage patterns in Western reporting of conflicts taking place in non-Western, predominantly Muslim, and black or brown countries. Research also suggests that victims of these conflicts are often dehumanized and othered by Western media. Although these studies employ qualitative methods and do not focus primarily on terrorism, their focus on Western reportage of Muslim victims of violence makes them relevant to the current research.
A pair of relatively recent studies by Patrick (2016) and Gruley and Duvall (2012) analyzed Western news coverage of violent conflicts in Africa, with results pointing to both a general neglect of the issues and stereotypical reporting patterns. Patrick's (2016) analysis of American and British newspaper coverage of the Bosnian conflict (1992-1995) and Rwandan genocide (1994) showed that reportage of Rwanda was both comparatively scant and grounded in stereotypes about Africa. Gruley and Duvall's (2012) examination of The New York Times and The Washington Post coverage of the war in Darfur, Sudan suggested that coverage lacked contextual background on the origins of the conflict, and highlighted 'the stereotype of tribal conflict in Africa' (p. 38).
Several other qualitative studies have assessed news coverage of conflicts and violence involving Muslims, including wars in Muslim-majority countries (see Griffin and Lee, 1995;Steuter and Wills, 2009;Yang, 2008;Youssef, 2009). Collectively, this research lends insights into how Western news outlets talk about Muslim and non-Muslim victims and perpetrators of war violence.
A study of the framing of the wars in Afghanistan and Iraq by Canadian newspapers found that coverage tended to dehumanize Muslim victims and characterize them 'as animals, insects and diseases' (Steuter and Wills, 2009: 9). Analyses by Yang (2008), Griffin and Lee (1995), and Youssef (2009), meanwhile, focused on 2003 Iraq War coverage in American newspapers. Yang's (2008) research suggested that The New York Times and The Washington Post framing of the war was conflict-driven, revolving around issues such as weapons of mass destruction and daily combat. The studies by Griffin and Lee (1995) and Youssef (2009) both posited that American newspapers tended to neglect Iraqi casualties of war.
Other recent research has focused on acts of terrorism carried out by Muslims in the West (Kearns et al., 2018; Powell, 2011, 2018). Although studies by Powell (2011, 2018) and Kearns et al. (2018) focused on perpetrators, rather than victims, of terrorism, they offer insights relevant to the current research, in particular because they use comparative approaches and seem to provide evidence of alleged double standards in American reporting.
Powell's research comparatively examined American news coverage of terrorist attacks carried out by Muslim and non-Muslims. Results suggested that American news outlets discursively linked Muslim perpetrators with Islam, fanaticism, evil, and global terrorist networks, while discursively constructing non-Muslim perpetrators as mentally ill, irrational, and products of America's gun problem. A study by Kearns et al. (2018) provides perhaps the most robust quantitative evidence of a double standard in American news treatment of Muslim and non-Muslim violent perpetrators. The research found that American news outlets allotted significantly more coverage to terror attacks committed by Muslims than attacks carried out by non-Muslims.
Western media coverage of victims of humanitarian disaster
A number of studies have focused on natural and humanitarian disasters, with most studies suggesting that Western news tends to allocate insufficient focus to death and tragedy in the Global South, something which scholars argue devalues non-Western human life. For example, a number of studies have found that, in the context of disaster news, American media privilege American and other Western lives, while ignoring or downplaying deaths in the Global South. Although these studies do not focus on terrorism or violent conflict, they are relevant to the current research because they seem to point to a double standard in news treatment of different kinds of victims. Moeller (1999) argued that 'compassion fatigue' among American news consumers and a series of structural constraints on news organizations have led to a systematic decrease in foreign news in general, and news about foreign disasters and suffering in particular. Moeller (1999) cited an apparent American journalism truism: 'one dead fireman in Brooklyn is worth five English bobbies, who are worth 50 Arabs, who are worth 500 Africans' (p. 22). Hawkins (2002) affirmed this, arguing that, 'if anything, it is an understatement' (p. 230).
One of the earliest empirical assessments of this phenomenon was carried out by Adams (1986), who analyzed American broadcast news coverage of earthquakes in six countries. The study found that earthquakes occurring in geographically and culturally distant locations received scant American news attention, despite very large casualty figures. Meanwhile, results showed that Western European earthquakes were covered prominently.
Studies by Van Belle (2000), Simon (1997), Joye (2009, 2010), and CARMA (2006) provide further support for the thesis that Western news media tend to privilege Western lives over others. Van Belle's (2000) analysis concluded that foreign disasters geographically distant from the United States are generally ignored by American news media, while Simon (1997) found that geographically distant earthquakes received less prominent coverage in American media than earthquakes in countries closer to the United States. Joye's (2009) study suggests that disaster sufferers in the United States and Australia were privileged on Western broadcast channels, while victims in Indonesia and Pakistan were marginalized. Another study by Joye (2010) suggested that Western news coverage of the 2003 SARS outbreak lacked empathy, identification and compassion; presented information in an 'us' versus 'them' dichotomy; and helped 'reproduce a Euro-American centered world order' (p. 586). Meanwhile, a CARMA (2006) study examined American media portrayals of six geographically diverse disasters. Hurricane Katrina, the only studied disaster taking place in the United States, received 'by far the highest volume of coverage' (p. 11).
Other research suggests that non-Western victims of crises are given fair attention by Western news media, but covered stereotypically. For example, Singer et al. (1991) found that while disasters occurring in the United States are given 'disproportionate attention in the U.S. press' (p. 48), disasters in other parts of the world, including the Global South, are also covered prominently, particularly when casualty figures are high. Campbell (2012) posited, however, that Western images of African famine are stereotypical and sensational, working to exoticize Africans.
Framing theory
Framing is a theoretical approach for deciphering meanings, interpretations, connotations, and implications in a text. Framing is 'the process of culling a few elements of perceived reality and assembling a narrative that highlights connections among them to promote a particular interpretation' (Entman, 2007: 164).
Frames that are adopted and projected through news media can increase the prominence of certain events in ways that may impact audience perceptions and interpretations (Entman, 1993, 2007). Frames can be helpful in simplifying complex information and events (Entman, 1993). Frames make information 'accessible to lay audiences because they play to existing cognitive schemas' (Scheufele and Tewksbury, 2007: 12). Frames can be categorized as 'issue-specific' frames, which tend to focus on the particularities of certain topics, or 'generic' frames, which deal with broad, wide-ranging contexts (De Vreese, 2005: 55).
A news frame usually has distinguishable, identifiable, observable, and recognizable elements or framing devices, such as headlines, phrases, images, keywords, sourcing, leads, and metaphors (De Vreese, 2005). The 'lexical choices of words or labels' can impact audience interpretations (Pan and Kosicki, 1993: 62).
The effects of news framing can vary depending on the nature of the events being framed and the receivers' knowledge level regarding these events (Scheufele and Tewksbury, 2007). A frame's effectiveness is also determined by how persuasive it is and whether it faces competing or alternative frames (Chong and Druckman, 2007). The success of any news frame in affecting audience evaluation 'increases . . . when it comes from a credible source, resonates with consensus values, and does not contradict strongly held prior beliefs' (Chong and Druckman, 2007: 104). The prominence of frames is enhanced when they are associated with popular cultural symbols. Creating mental associations 'is a product of the interaction of texts and receivers' (Entman, 1993: 53).
Several factors play a critical role in how journalists frame news stories, including 'social norms and values, organizational pressures and constraints, pressures of interest groups, journalistic routines, and ideological or political orientations of journalists' (Scheufele, 1999: 109). All of these factors affect the 'decision frame' or 'the decisionmaker's conception of the acts, outcomes, and contingencies associated with a particular [framing] choice' (Tversky and Kahneman, 2013: 453).
Researchers of media discourse can determine news frames inductively by generating them from media content, or deductively by constructing them prior to the analysis (De Vreese, 2005).
Hypotheses and research questions
Several hypotheses and research questions seek to formally compare American newspaper coverage of the Paris and Brussels attacks with coverage of the Ankara and Maiduguri attacks.
Based on prior literature indicating that Western victims receive more Western news attention than non-Western victims, the first hypothesis predicts that the Paris and Brussels attacks will generate more news coverage than the Ankara and Maiduguri attacks.
H1:
Terrorist attacks occurring in non-Muslim, Western societies (i.e. Paris and Brussels) will receive more prominent coverage than attacks occurring in Muslim-majority societies (i.e. Ankara and Maiduguri).
This hypothesis is parsed out with four sub-hypotheses, which predict that the Paris and Brussels attacks will generate more articles, longer reports, more prominently placed stories, and more photographs than the Ankara and Maiduguri attacks.
H1a: There will be more newspaper articles devoted to terrorist attacks occurring in Paris and Brussels than to attacks occurring in Ankara and Maiduguri.
H1b: Articles about terrorist attacks in Paris and Brussels will be longer, on average, than articles about attacks in Ankara and Maiduguri.
H1c: Articles about terrorist attacks in Paris and Brussels will be placed more prominently within newspapers than articles about attacks in Ankara and Maiduguri.
H1d: Articles about terrorist attacks in Paris and Brussels will feature more accompanying photographs, on average, than articles about attacks in Ankara and Maiduguri.
Based on prior research suggesting that violent attacks can be framed as either acts of terrorism or internal conflicts (Norris et al., 2003;Jorndrup, 2016;Lewis and Reese, 2009), the second hypothesis predicts that the Paris and Brussels attacks will be more likely to be framed as acts of terrorism than the Ankara and Maiduguri attacks, which the hypothesis predicts will be more likely to be framed as internal conflicts.
H2: Studied American newspapers will be more likely to frame attacks occurring in Paris and Brussels as acts of terrorism than as internal conflicts, and more likely to frame attacks occurring in Ankara and Maiduguri as internal conflicts than as acts of terrorism.
The third and fourth hypotheses are grounded in past literature suggesting that Western news is more likely to humanize and highlight Western victims. These hypotheses predict that news reports will include more personal details about victims of the Paris and Brussels attacks than victims of the Ankara and Maiduguri attacks, and that reports will more often quote victims, family members, and civilian eyewitnesses of the Paris and Brussels attacks.
H3: Terrorism victims in Paris and Brussels will be humanized through personal details more than terrorism victims in Ankara and Maiduguri.
H4: Articles about terrorist attacks in Paris and Brussels will be more likely to feature quotes from victims, family members, or civilian eyewitnesses than articles about terrorist attacks in Ankara and Maiduguri.
Three research questions address how often the religious identity of attackers (Islam) was mentioned in news reports, the extent to which news articles linked attacks with the US-led war on terrorism, and how likely reports were to quote US officials.
RQ1:
Will the perpetrators of attacks occurring in Paris and Brussels be more likely to be associated with the religion of Islam than the perpetrators of attacks in Ankara and Maiduguri?
RQ2:
Will newspaper articles about attacks in Paris and Brussels be more likely to link these attacked societies to the US-led war on terror than articles addressing attacks in Ankara and Maiduguri?

RQ3:

Will US officials be quoted more in articles addressing attacks in Paris and Brussels than in articles addressing attacks in Ankara and Maiduguri?
Method
This study used content analysis to examine how elite American newspapers framed Muslim and non-Muslim terrorism victims. Content analysis is a quantitative research technique that enables researchers to systematically examine large quantities of content.
Following el-Nawawy and Elmasry (2017), five terrorist attacks -three that targeted primarily Muslim victims, and two which targeted primarily non-Muslim victims -were selected for analysis. All five attacks met textbook definitions of terrorism and were carried out by Muslims.

Ten prominent American daily newspapers were selected for analysis based on circulation figures and geographic distribution. The website Statista (2018) was used to generate circulation figures. The original plan was to select the 10 highest circulating American daily papers. However, this strategy would not have yielded meaningful geographic diversity, especially because several of the top-circulating papers are based in New York. Some papers, then, were eliminated, and high-circulating papers representing other geographic regions -the Midwest, the South, and the Pacific Northwest -were selected. The 10 papers selected for study were The Chicago Tribune, The Cleveland Plain-Dealer, The Denver Post, The Houston Chronicle, The Minneapolis Star-Tribune, The Los Angeles Times, The New York Times, the USA Today, The Wall Street Journal, and The Washington Post. These widely circulated, geographically dispersed newspapers likely contribute to shaping public opinion on a variety of issues, including matters concerning Islam, Muslims, and terrorism.
For all 10 studied newspapers, we aimed to select the 5 days/editions published immediately after each of the five attacks. This would have yielded a total of 250 issues (10 × 5 × 5 = 250), but the USA Today does not publish Saturday and Sunday editions, a fact that dictated that we were not always able to find five consecutive days of coverage following attacks. Ultimately, then, we searched through 19 editions of the USA Today, and 25 issues of each of the other nine studied newspapers, for a total of 244 editions searched. Our strategy involved looking through the main news section of all newspaper issues to search for news and opinion articles about the attacks. This yielded a total of 713 articles. This total included 400 articles on the Paris attack, 241 articles on the Brussels attack, 37 articles on the first Ankara attack, 25 articles on the second Ankara attack, and 10 articles on the Maiduguri attack. An undergraduate research assistant used the Library of Congress' newspaper archives to retrieve and collect all studied newspaper editions. The overwhelming majority of coded articles (297, 89.2%) were news articles, while just 36 (10.8%) were editorials.
We coded all found articles on both of the Ankara attacks and the Maiduguri attack. To keep the study manageable, we used systematic random sampling to select Paris and Brussels articles for final analysis. For Paris, we selected every third article, and for Brussels, we selected every other article. The final coded sample, then, included a total of 333 articles -139 for Paris, 122 for Brussels, 37 for the first Ankara attack, 25 for the second Ankara attack, and 10 for the Maiduguri attack.
The coding scheme sought to measure prominence, humanization, dominant frame, links to the West, and sourcing. Several items measured prominence, including article placement, article length (in words), and number of photographs. Framing was dichotomously coded as either 'terrorism' or 'internal conflict'. Coders were instructed to code as 'terrorism' any article that described the covered attack as an act of wanton aggression without apparent justification, and as 'internal conflict' any article that primarily discussed a political conflict that may have motivated attackers. Humanization was measured by counting the number of personal details -name, age, occupation, charitable/volunteer/community work, relationship status, nationality, and number of children -mentioned about victims. The coding scheme also assessed the extent to which attacks were linked to other attacks carried out in the West, and the total number of quotes allotted to US government officials, and victims, family members, friends, or civilian eyewitnesses. The coding scheme did not account specifically for newswire material -which is often reproduced in daily papers across the United States -because duplicated newswire material still broadly represents American news discourse.
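One way to picture the coding scheme just described is as a single record per article; the sketch below is only an illustration of the variables named in this section, and the field names are ours, not taken from the study's codebook.

```python
# Illustrative record for one coded article, mirroring the variables described
# above (prominence, frame, humanization, links to the West, sourcing).
# Field names are hypothetical, not taken from the study's codebook.
from dataclasses import dataclass

@dataclass
class CodedArticle:
    newspaper: str             # e.g. "The Denver Post"
    attack: str                # "Paris", "Brussels", "Ankara 1", "Ankara 2", "Maiduguri"
    is_editorial: bool         # news article vs. editorial
    front_page: bool           # placement
    word_count: int            # article length
    n_photos: int              # accompanying photographs
    frame: str                 # "terrorism" or "internal conflict"
    n_personal_details: int    # humanizing details about victims
    n_victim_quotes: int       # victims, family, friends, civilian eyewitnesses
    mentions_islam: bool       # religious identity of attackers mentioned
    links_war_on_terror: bool  # linked to the US-led war on terror
    n_us_official_quotes: int  # quotes from US government officials
```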
Two graduate students served as coders. Training on the coding scheme was carried out over a period of several weeks in the spring of 2018. Intercoder reliability testing was carried out on a total of 12% of the sample (N = 39). Scott's pi was calculated for all nominal level variables. For ratio level variables, Krippendorff's alpha was calculated. All reliability calculations were completed with ReCal, an online tool developed by Freelon (2010). Reliability figures ranged from very good to perfect. Scott's pi figures for nominal level variables ranged from .85 to 1.0, and Krippendorff's alpha figures for ratio level variables ranged from .91 to 1.0.
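For the nominal-level variables, Scott's pi corrects raw agreement for chance agreement estimated from the two coders' pooled category distribution; a minimal sketch is shown below (illustrative only, since the study's figures were computed with ReCal rather than with this code).

```python
# Minimal Scott's pi for two coders on one nominal variable (illustrative only;
# the reliability figures above were computed with the ReCal tool).
from collections import Counter

def scotts_pi(coder1, coder2):
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    pooled = Counter(coder1) + Counter(coder2)          # both coders' codes
    expected = sum((c / (2 * n)) ** 2 for c in pooled.values())
    return (observed - expected) / (1 - expected)

# Hypothetical frame codes for 10 doubly coded articles -> pi of about 0.78
c1 = ["terrorism"] * 7 + ["internal conflict"] * 3
c2 = ["terrorism"] * 6 + ["internal conflict"] * 4
print(round(scotts_pi(c1, c2), 2))
```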
Results
The first hypothesis predicted that terrorist attacks occurring in the non-Muslim, Western societies of Paris and Brussels would receive more prominent news coverage in the 10 studied American newspapers than attacks taking place in the Muslim-majority societies of Ankara and Maiduguri. This hypothesis included four sub-hypotheses, which predicted that the Paris and Brussels attacks would generate more news articles, be longer and placed more prominently, and include more photographs than attacks in Ankara and Maiduguri. Results suggest strong support for all four sub-hypotheses.
In all, the 10 studied newspapers published 713 articles about the examined attacks in the 5 days of coverage following the events. The overwhelming majority of these articles were written about the Paris (N = 400) and Brussels (N = 241) attacks. Only 72 articles were published about the three events occurring in Muslim-majority societies. Of these, the first Ankara attack generated the most articles (N = 37), followed by the second Ankara attack (N = 25), and the Maiduguri attack (N = 10). H1a was supported. These results are displayed in Table 1.
H1b was also supported. Articles about the Paris and Brussels attacks were longer than articles covering attacks in Ankara and Maiduguri. On average, articles about Paris and Brussels featured more words (M = 749.10, SD = 367.57) than articles about Muslim-majority societies (M = 454.00, SD = 312.77). An independent samples t-test showed these differences to be statistically significant at the .05 level (t(331) = 6.2, p < .001). These results are displayed in Table 2. H1c was also supported. Articles about the Paris and Brussels attacks were significantly more likely to appear on the front-page of newspapers than articles about the Ankara and Maiduguri attacks. In all, 35 percent of Paris and Brussels articles appeared on the front-page, compared with just 14 percent of Ankara and Maiduguri articles. A chi-square test showed these differences to be statistically significant at the .05 level (χ 2 (df = 1, N = 333) = 12.1, p < .001). These results are displayed in Table 3.
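The two tests reported for H1b and H1c can be reproduced in outline with SciPy; the arrays below are hypothetical stand-ins (article lengths drawn from the reported means and standard deviations, and front-page counts reconstructed approximately from the percentages above), not the study's coded data.

```python
# Sketch of the significance tests for H1b and H1c using SciPy.
# All numbers below are stand-ins, not the coded dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
words_west = rng.normal(749, 368, size=261)    # Paris/Brussels article lengths
words_other = rng.normal(454, 313, size=72)    # Ankara/Maiduguri article lengths
t, p = stats.ttest_ind(words_west, words_other, equal_var=True)
print(f"t = {t:.2f}, p = {p:.4f}")

# H1c: chi-square on front-page placement; counts reconstructed from the
# reported 35% of 261 and 14% of 72 articles (exact cells are not given).
table = np.array([[91, 170],    # Paris/Brussels: front page, inside pages
                  [10, 62]])    # Ankara/Maiduguri: front page, inside pages
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi2(df={dof}) = {chi2:.1f}, p = {p:.4f}")
```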
On average, articles about attacks in Paris and Brussels included more accompanying photographs (M = 1.10, SD = 1.18) than articles about Ankara and Maiduguri (M = .81, SD = .78). An independent samples t-test showed these differences to be statistically significant (t(331) = 2.02, p = .044). H1d was thus supported. Table 4 displays these results.
The second hypothesis predicted that studied American newspapers would be more likely to frame the Paris and Brussels attacks as acts of terrorism, and the Ankara and Maiduguri attacks as the products of internal conflicts. Results displayed in Table 5 show that examined papers almost exclusively framed the Paris and Brussels attacks as acts of terrorism (99.6%), while relying most heavily on an internal conflict frame in coverage of the Ankara and Maiduguri attacks (56.9%). A chi-square test revealed these differences to be statistically significant at the .05 level (χ 2 (df = 1, N = 333) = 163.8, p < .001). H2 was supported.
Hypothesis 3 predicted that terrorism victims in Paris and Brussels would be humanized through personal details more than terrorism victims in Ankara and Maiduguri. On average, articles about attacks in Paris and Brussels included more personal details about victims (M = 2.26, SD = 7.96) than articles about attacks in Ankara and Maiduguri (M = .25, SD = .84). An independent samples t-test showed these differences to be statistically significant at the .05 level (t(331) = 2.14, p = .033). These results are shown in Table 6. H3 was supported. Hypothesis 4 predicted that articles about attacks in Paris and Brussels would be more likely to feature quotes from victims, family members, or civilian eyewitnesses than articles about attacks in Ankara and Maiduguri. As shown in Table 7, results indicate that articles about attacks in Paris and Brussels did, on average, generate more quotes (M = .99, SD = 2.96) than articles about Ankara and Maiduguri (M = .60, SD = 1.15). These differences were not statistically significant, however (t(331) = 1.10, p = .273). H4 was thus not supported.
This study also presented three research questions about associations with Islam and the United States.
The first research question asked whether perpetrators of attacks occurring in Paris and Brussels would be more likely to be associated with the religion of Islam than the perpetrators of the Ankara and Maiduguri attacks. The religious identity of attackers was more likely to be mentioned in articles covering Paris and Brussels (84.7%) than in articles covering the Ankara and Maiduguri attacks (69.4%). A chi-square test revealed statistically significant differences (χ 2 (df = 1, N = 333) = 8.64, p = .003). The second research question asked whether newspaper articles about attacks in Paris and Brussels would be more likely to link these attacked cities to the US-led war on terror than articles addressing attacks in Ankara and Maiduguri. The Paris and Brussels attacks were more likely (48.3%) to mention the global war on terror than the Ankara and Maiduguri attacks (18.1%). These differences were statistically significant (χ 2 (df = 1, N = 333) = 21.19, p < .001).
The third research question asked whether US officials would be quoted more in articles addressing attacks in Paris and Brussels than in articles addressing attacks in Ankara and Maiduguri. US officials were more likely to be quoted in articles covering Paris and Brussels (52.5%) than articles covering Ankara and Maiduguri (26.4%). A chi-square test showed these differences to be statistically significant at the .05 level (χ 2 (df = 1, N = 333) = 15.44, p < .001).
Discussion
This study sought to determine to what extent differences in the religious, racial, and geographic identities of terrorism victims affect how 10 elite American newspapers report on them. Although this study builds on previous research in the broad area of media framing of global news events, it may be the first to include a comparative analysis of coverage of terrorism victims from a broad range of major American papers.
Findings from the content analysis point to clear differences in how the studied American dailies covered attacks that targeted primarily non-Muslim victims, on one hand, and primarily Muslim victims, on the other hand. American newspapers covered attacks in Paris and Brussels very prominently and framed them almost exclusively as acts of terrorism. Meanwhile, two attacks in Ankara and a single attack in Maiduguri were scantily covered despite high casualty figures. These attacks were also framed mostly as internal conflicts, despite the fact that all three attacks easily meet textbook definitions of terrorism. Importantly, the examined newspapers were also more likely to personalize Paris and Brussels victims than they were Ankara and Maiduguri victims. Furthermore, the Islamic identity of the perpetrators was more pronounced in articles dealing with Paris and Brussels than in articles about Ankara and Maiduguri. Also, attacks in Paris and Brussels were more likely to be linked to the US-led war on terrorism and feature quotes from US officials.
What is surprising about the results presented here, perhaps, is the scope of marginalization and neglect of victims in non-Western, Muslim-majority societies, including one black African society. Three major attacks in Ankara and Maiduguri generated just 72 articles in 10 major newspapers in 5 days of coverage. The two attacks in Paris and Brussels generated about nine times this many articles (N = 641). Articles about Paris and Brussels were also more likely to appear on the front-page of newspapers and were significantly larger and included more photographs, on average, than articles about Ankara and Maiduguri.
Although the five attacks studied here were characterized by different political contexts, all five fit neatly within the textbook definition of terrorism -targeting civilians for political reasons (Ganor, 2007) -and all produced significant casualties. In fact, the first Ankara attack produced three times the number of deaths as the Brussels attack, yet Brussels generated more than six times as many articles (N = 241) as the first Ankara attack (N = 37). The Maiduguri attack -which victimized black African Muslims -was the least covered event, despite a comparatively high death toll.
The framing differences uncovered here might point to important ideological constraints on US news coverage of terrorism. It may be that aspects of American journalism are constrained by ideological stereotypes both precluding the possibility that Muslims can be victims of terrorism and consistently associating Muslim countries with internal conflict (even when the contextual circumstances do not warrant such an association). The ideological underpinnings of US news coverage of the war on terror should be the subject of further scholarly research.
This study did not examine effects and cannot make any definitive statements about how readers might interpret sampled news articles. It is fair to consider, though, how American readers might understand stories about these attacks and the people they victimized given how they were covered. It is possible that readers may come away with the impression that societies like Ankara and Maiduguri are simply mired in prolonged conflict, and that the attacks represented battles in a war rather than acts of terrorism against civilian populations. Moreover, and perhaps more importantly given the paucity of coverage, it may be that many American newspaper readers would simply not be made sufficiently aware of the Ankara and Maiduguri attacks.
If this coverage represents a type of stable pattern, the long-term result may be the consistent impression that Western non-Muslim societies (like Paris and Brussels) are the only, or primary, victims of global terrorism. Also, given the attention allotted to the Paris and Brussels attacks, and also the focus on personalization, American readers may come to sympathize more with victims of these attacks. The patterns are in line, then, with what previous research has argued -namely, that Western reportage tends to feed the notion that Muslims are a menacing threat against a victimized, non-Muslim, white, Christian West (see Powell, 2011).
These differences also '[provide] evidence indicating that only stories involving a white or ex-colonial angle are taken seriously by media outlets in the developed world' (Franks, 2012: 207). Apparently, the extent of Anglo-American media coverage of a given terrorist attack is not determined by the number of fatalities or casualties, but rather by the media outlets' religious and cultural affinities with the victims. This coverage tends to challenge the principles of 'common humanity' and the 'equality of victimhood' (Patrick, 2016: 151).
As mentioned in the 'Introduction' section, some writers have defended Western reporting of foreign conflict. For example, the aforementioned article by Wrong (2014) argued that Western journalists do a good job contextualizing foreign conflict, and that academic arguments against Western journalistic practice seem unrealistic. Specifically, Wrong argued that academic instructions to journalists in foreign conflict zones are 'easier to say than do'. She also posited that current Western reporting of foreign conflict - particularly in Africa - provides sufficient nuance and context to inform news audiences, who, she said, are more intelligent than news scholars tend to believe. Wrong's defenses are arguably problematic, in the specific context of Africa and also if applied to broader global levels. First, academic instructions to provide greater context, proportionate coverage, and more realistic descriptors, are not unrealistic. Rather, they are in line with Western journalistic principles of fairness, balance, and context. Second, Wrong's argument that foreign reporting provides adequate nuance and context is not substantiated by evidence and seems to belie available data, including that which is cited in the literature review of this paper. Finally, Wrong's specific claim that American news audiences are well informed about African and other global issues seems to contradict empirical data. For example, a recent survey by National Geographic's Council on Foreign Relations found that college-aged Americans were largely uninformed about basic matters of global affairs (National Geographic Council on Foreign Relations, 2016).
el-Nawawy and Elmasry (2017) documented Phillips' (2015) defense of American news coverage of terrorist atrocities. Phillips' The Washington Post editorial contended that terrorist attacks occurring in major European cities generate more American news interest not because there is a double standard at play, but, rather, because attacks in those societies are rare and unusual. But, as el-Nawawy and Elmasry (2017) note, Turkey and France rank similarly on the Global Terrorism Index (Institute of Economics and Peace, 2016), so this argument does not seem to explain why an attack in Paris would generate so many more articles, front-page articles, and photographs than an attack in Ankara. Phillips also argued that major European cities share cultural similarities with the United States, something which influences American journalists to pay more attention to attacks in those cities. But, here, Phillips essentially makes the double standard argument for those against whom he is ostensibly debating. The argument that scholars (see Adams, 1986; Fair, 1993; Hanusch, 2008, 2012; Hawkins, 2002; Joye, 2009, 2010; Moeller, 1999; Simon, 1997; Van Belle, 2000) have made is that it is inappropriate to privilege Western victims simply because they look and talk like 'us'.
Victim racial and religious identities are not necessarily the only factors at play in the coverage disparities described here. Another contributing explanation for these coverage discrepancies lies in Wallerstein's world systems theory, which suggests 'a power hierarchy between core [mostly Western countries] and periphery [mostly Global South countries] in which powerful and wealthy "core" societies [have the upper hand over] weak and poor "peripheral" societies' (Chase-Dunn and Grimes, 1995: 389). Between the core and peripheral countries lie the semi-peripheral countries, which share characteristics of both the core and the periphery (Chase-Dunn and Grimes, 1995). The core countries' political and economic superiority over peripheral and semi-peripheral countries affects global information and news flows. News media in the core countries often play a hegemonic role that allows them to make decisions on 'who to include and exclude from the international communication network' (Himelboim, 2010: 387).
Our findings showed that victims of attacks in France and Belgium - two core countries - were prioritized by the American newspapers under study, especially when compared with their counterparts in Turkey (a semi-peripheral country) and Nigeria (a peripheral country). The patterns of coverage that this study revealed point to 'the usual practices of . . . U.S. media . . . [which] often fail to cover . . . [periphery and semi-periphery] countries adequately, if there is coverage at all' (Chang et al., 2009: 151). Our findings indicate that a country's place 'in the world system . . . determines its overall newsworthiness' (Golan, 2008: 54).
World systems theory should not, however, be seen as the only explanation for the coverage patterns described here, or as a substitute for the importance of religious and racial identity markers. It is likely that the fact that the primary victims of the Ankara and Maiduguri attacks were Muslims affected how American papers approached those attacks. And it is equally likely -perhaps more likely -that the fact that the Maiduguri victims were both Muslim and black contributed to the dearth of articles generated by that event. In short, it is possible that the racial and religious identities of victims and a country's position in the world systems framework can affect Western coverage tendencies.
Future research should look at other terror attacks and attempt to further parse out the influence of race, religion, and a country's position in the world systems framework on American and other Western news coverage patterns.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article. | 2020-06-11T09:02:57.404Z | 2020-06-10T00:00:00.000 | {
"year": 2020,
"sha1": "b8968c3d59a335383bf65ff76aa1d1f439eb129b",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1464884920922388",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "db2d8c7ea713d3b525549ef671f7341af85fb6fc",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
234395964 | pes2o/s2orc | v3-fos-license | Spatial and temporal distribution of mung bean (Vigna radiata) and soybean (Glycine max) roots
The spatial and temporal distribution of the roots of mung bean and soybean, which originated from different geographical backgrounds, is an important scientific issue. The aim of this study was to characterize the spatial and temporal distribution of the root systems of soybean cultivar ‘Hefeng55’ and mung bean cultivar ‘Jilv7’ and to elucidate the differences between soybean and mung bean roots at key spatial and temporal locations. The roots at the V6, R2, R4, R5, R6, and R7 stages were collected to acquire data on root length, root surface area, root volume and root dry weight. 49.8%, 11.7%, 13.2%, 14.7% and 10.6% of soybean roots and 57.8%, 10.7%, 11.2%, 11.9% and 8.4% of mung bean roots were in the 0-5, 5-10, 10-15, 15-20 and 20-25 cm horizontal soil layers, respectively; 79.2%, 11.5%, 4.3%, 1.8%, 1.1%, 1.0% and 1.1% of soybean roots and 70.0%, 12.3%, 8.0%, 3.0%, 1.6%, 1.7% and 3.4% of mung bean roots were in the 0-20, 20-40, 40-60, 60-80, 80-100, 100-120 and 120-140 cm vertical soil layers, respectively. Compared with mung bean, soybean had a much larger root system during development. In the horizontal direction, soybean roots tended to be more laterally developed, whereas the distribution of mung bean roots was more uniform in the vertical direction. With a greater root surface area to weight ratio (AWR), mung bean had a finer root system than soybean. These findings can help to clarify the four-dimensional spatial and temporal distribution characteristics of legumes and may provide a reference for the production practice of soybean and mung bean in the future.
Introduction
Roots are an important organ of plants (Fang, 2011) which determines the ability of plants to absorb water and nutrients (Vamerali et al., 2003;Ehdaie et al., 2010). Roots of different crops have different distribution characteristics (Lynch, 1995;Benjamin and Nielsen, 2006). Comparing the temporal and spatial distribution characteristics of different crops' roots is beneficial to research the root structure differences and the adaptability of root systems to the soil environment among different crops (Gan, 2009;Fan et al., 2016).
The distribution characteristic of roots is the basic attribute during plant development (Atta et al., other plant parts (Eissenstat and Yanai, 2002;Waisel and Eshel, 2002). Liu et al. (2011) studied the distribution of pulses and found that surface area was mainly distributed in 0-60 cm soil layer. Benjamin and Nielsen (2006) reported 97% of the root dry weight of soybean and about 80% of the root dry weight of chickpea and field pea were in the surface 23 cm. Mitchell and Russell (1971) found that 90% or more of the root dry weight of soybean was concentrated in the upper 7.5 cm early in the season and in the upper 15 cm during the remainder of the season. Both soybean and mung bean are leguminous crops, which originated from different geographical backgrounds. Various workers have studied the root distribution of soybean in the last few years (Calonego et al., 2010;Farmaha et al., 2012), but the difference between these two crops' root distribution at different time and space points is still unclear. In this study, we used innovative horizontal and vertical devices to study the root distribution of soybean and mung bean in 0-5, 5-10, 10-15, 15-20 and 20-25 cm horizontal soil layers and in 0-20, 20-40, 40-60, 60-80, 80-100, 100-120 and 120-140 cm vertical soil layers, respectively. We hypothesized that the root system of soybean and mung bean have different trends in temporal distribution and different pattern in spatial distribution. It can give a deeper understanding of the four-dimensional spatial and temporal distribution characteristics of legume roots and may provide reference for breeding new cultivars of soybean and mung bean in the future.
Experimental site
The experiment was carried out at outdoor test site in National Coarse Cereals Engineering Research Centre, Daqing, China on June 5, 2015 (soybean) and June 5, 2016 (mung bean). The annual precipitation at the experimental site was 508.7 mm, the average annual temperature was 5.60 °C, the effective accumulated temperature was 2900-3000 °C, and the sunshine duration was 1158 h (Collected from Daqing Weather Station).
Experimental devices
There were two kinds of devices: a horizontal device and a vertical device (Figure 1). The horizontal device was a cylindrical metal barrel with a diameter of 50 cm and a height of 50 cm, fixed to a cross steel frame 54 cm in length on both sides of the barrel. The inside of the barrel was equipped with metal nets of 10 cm, 20 cm, 30 cm and 40 cm in diameter. The distance between adjacent nets was 5 cm, and the nets were fixed on the cross steel frame with nylon straps (Figure 2). The vertical device was a cylindrical plastic barrel with a diameter of 30 cm and a height of 150 cm (Figure 3). To facilitate sampling, a plastic water belt 30 cm in diameter was placed inside the vertical device and the soil was filled into the plastic water belt (Figure 4). The lower end of the plastic water belt was sealed and four round holes were cut with scissors. The soil was chernozem, with physical and chemical properties characterized by a pH of 7.8, effective phosphorus of 13.69 mg·kg⁻¹, alkali-hydrolyzed nitrogen of 134 mg·kg⁻¹, available potassium of 204 mg·kg⁻¹, and organic matter of 32.8 g·kg⁻¹. The soil was screened before pouring into the devices to remove grass roots, tree roots, large clods and stones. The soil was then filled into the vertical and horizontal devices, respectively (bulk density 1.15×10² kg·m⁻³).
Experiment design, species and seeding
Soybean cultivar 'Hefeng55' and mung bean cultivar 'Jilv7' were planted at five seeds per barrel separately in 48 barrels which included 24 horizontal devices and 24 vertical devices. Two seedlings were retained, and grown with four replications per growth stage and type of device.
In horizontal devices, the centre of the cross section of the column was taken as the starting point to obtain root samples in the horizontal direction i.e. 0-5 cm, 5-10 cm, 10-15 cm, 15-20 cm and 20-25 cm layers. In vertical devices, the upper soil surface was taken as the starting point to obtain soil samples with root in 0-20 cm, 20-40 cm, 40-60 cm, 60-80 cm, 80-100 cm, 100-120 cm and 120-140 cm soil layers in vertical direction. The plants were clipped at cotyledons by scissors before sampling. Soil samples containing root were soaked in a plastic bucket filled with water until the soil became soft and then filtered. The obtained root samples were washed with clean tap water and then placed in a plastic, sealable bag, and the bag was placed in a refrigerator for further use.
Data collection
The harvested root samples were placed in a clear glass tray filled with water. The roots were washed to remove soil particles and other dirt that could hamper efficient scanning of root samples. The glass tray was placed on a scanner (Epson V700) and digital images were generated at 400 dpi. Digital image analysis of root samples was conducted using WinRHIZO (version 2014a, Reagent Instruments Inc., Quebec, Canada) and the data included root length, root surface area and root volume, from which root length density (RLD), root surface area density (RSAD) and root volume density (RVD) were estimated as follows:
RLD = L/V0
RSAD = S/V0
RVD = V/V0
V0 = πr²h
where V is the root volume, L is the root length, S is the root surface area, V0 is the soil volume, r is the radius, and h is the height.
After scanning, the roots were removed from the glass tray and placed in an oven at 105 °C for 2 hours, then dried to constant weight in a 75 °C oven. The dry weight of the roots was obtained with an analytical balance and the root dry weight density (RDWD) was estimated as:
RDWD = M/V0
V0 = πr²h
where M is the root dry weight.
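These density measures are simple ratios of the scanned root totals to the volume of the sampled soil cylinder. A minimal sketch of the calculation is given below; the function names and the numerical inputs are hypothetical placeholders, not values measured in this experiment.

```python
import math

def soil_volume(radius_m: float, height_m: float) -> float:
    """Volume of a cylindrical soil sample, V0 = pi * r^2 * h (m^3)."""
    return math.pi * radius_m ** 2 * height_m

def root_densities(length_m: float, area_m2: float, volume_m3: float,
                   dry_weight_g: float, v0_m3: float) -> dict:
    """Return RLD, RSAD, RVD and RDWD for one soil layer."""
    return {
        "RLD (m m^-3)": length_m / v0_m3,
        "RSAD (m^2 m^-3)": area_m2 / v0_m3,
        "RVD (m^3 m^-3)": volume_m3 / v0_m3,
        "RDWD (g m^-3)": dry_weight_g / v0_m3,
    }

# Hypothetical example: one 0-20 cm layer of the vertical device (r = 0.15 m, h = 0.20 m)
v0 = soil_volume(0.15, 0.20)
print(root_densities(length_m=35.0, area_m2=0.08, volume_m3=2.1e-5,
                     dry_weight_g=1.4, v0_m3=v0))
```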
Statistical analysis
Differences between soybean and mung bean roots were determined by the LSD test using SPSS 22.
Temporal distribution of total root dry weight
The distribution of total root dry weight showed that 'Hefeng55' had significantly greater root dry weight than 'Jilv7' during all growth stages in horizontal devices, and the total root dry weight of 'Hefeng55' was significantly greater than that of 'Jilv7' except at V6 in vertical devices (Figure 8).
Root dry weight density as estimated by the vertical device: 'Hefeng55' had significantly greater root dry weight density than 'Jilv7' in the 0-20 cm soil layer during all growth stages, in the 20-40 cm soil layer at R2, R4, R5, R6 and R7, and in the 40-60 cm soil layer at R6. The percentages of root dry weight in the 0-20, 20-40, 40-60, 60-80, 80-100, 100-120 and 120-140 cm vertical soil layers to total root dry weight of 'Hefeng55' were 79.2%, 11.5%, 4.3%, 1.8%, 1.1%, 1.0% and 1.1%, respectively, and those of 'Jilv7' were 70.0%, 12.3%, 8.0%, 3.0%, 1.6%, 1.7% and 3.4%, respectively (Table 8).
Table 7. Root dry weight density (g·m⁻³) of soybean cultivar 'Hefeng55' and mung bean cultivar 'Jilv7' in different horizontal soil layers at V6, R2, R4, R5, R6 and R7 growth stages. Data represent average ± standard error. Distinct letters in the row indicate significant differences. Significant at the 0.05 probability level.
Table 8. Root dry weight density (g·m⁻³) of soybean cultivar 'Hefeng55' and mung bean cultivar 'Jilv7' in different vertical soil layers at V6, R2, R4, R5, R6 and R7 growth stages. Data represent average ± standard error. Distinct letters in the row indicate significant differences. Significant at the 0.05 probability level.
Root surface area to root weight ratio
In horizontal devices, 'Jilv7' had a greater root surface area to root weight ratio (AWR) except at R7, and a significantly greater AWR of 'Jilv7' than 'Hefeng55' was found at V6. In vertical devices, 'Jilv7' had a significantly greater AWR than 'Hefeng55' during all growth stages (Figure 9).
In this research, the variation and trend of the horizontal distribution of total root length of soybean and mung bean were identical during all growth stages, but soybean had greater total root length than mung bean. In the horizontal direction, both soybean and mung bean had the largest proportion of root length in the 15-20 cm soil layer, followed by the 10-15, 20-25 and 5-10 cm soil layers, and the minimum proportion of root length was found in the 0-5 cm soil layer. Root length of soybean in the vertical 0-20 cm and 20-40 cm soil layers reached 56.8% and 23.2% of the total root length, respectively, which were higher than the 51.2% and 22.9% of mung bean in the same soil layers. This is similar to the finding of Gao et al. (2010). However, the ratios of root length to total root length of mung bean in the vertical 40-60 and 60-80 cm soil layers were both greater than those of soybean. In soil layers below 80 cm in the vertical direction, the root length percentages of soybean and mung bean were similar. The root length of soybean and mung bean was mainly concentrated in the 0-40 cm vertical soil layer, and the root length of mung bean was more evenly distributed in the vertical direction compared with soybean.
The root length density of the crop could be used to reflect extension and distribution of crop root (Adiku et al., 2001;Zhu, 2010;Liu et al., 2011). With the deepening of the soil layer, root length density of soybean and mung bean decreased gradually. One possible explanation for this phenomenon is that mechanical impedance limited the extension of the root system (Logsdon et al., 1987).
The distribution of crop root volume is critical to growth, development and yield formation of crop (Rao and Ito, 1998). The trend of total root volume of mung bean was more stable than that of soybean, but the total root volume of soybean was greater than that of mung bean during all growth stages. Spatial distribution of root showed that the root volume of mung bean was more concentrated in 0-5 cm horizontal soil layer compared with soybean, but soybean root volume tended to develop laterally. Both soybean (26.6%) and mung bean (23.9%) had the largest proportion of root volume in horizontal 15-20 cm soil layer. The ratio of root volume to total root volume in horizontal 0-5 cm soil layer of mung bean was greater than that of soybean, but the proportion of soybean in other horizontal soil layers were greater than those of mung bean. In vertical direction, soybean and mung bean were similar in root volume percentage in 0-20 cm vertical soil layer, and both of them had the greatest proportion in this layer. This is similar to the finding of . The root volume percentage of soybean in vertical 20-40 cm soil layer was higher than that of mung bean, but in the subsequent 40-80 cm vertical soil layer, mung bean had a larger root volume percentage compared with soybean. This showed that the difference of root volume distribution between soybean and mung bean was mainly concentrated in the upper middle soil layers in vertical direction. Compared with mung bean, soybean had a larger proportion of root volume in the upper soil layer, while proportion of mung bean root volume was higher than that of soybean in the middle soil layer.
In the horizontal spatial distribution, the root surface area of soybean and mung bean was mainly distributed in the 10-25 cm soil layer. Compared with soybean, mung bean had a larger percentage of root surface area in the horizontal 0-10 cm soil layer. In the vertical direction, soybean and mung bean had a similar distribution of root surface area in the upper, middle and lower layers. For root dry weight distribution, the root dry weight of soybean (49.8%) and mung bean (57.8%) was mainly distributed in the 0-5 cm horizontal soil layer; the dry weight of both soybean and mung bean in the vertical 0-20 cm soil layer reached more than 70% of the total dry weight, and more than 82% of the total root dry weight was in the vertical 0-40 cm soil layer. This is similar to the finding of Mitchell and Russell (1971), who found the highest proportion of root dry weight density in the upper 0.23 m of soil in the vertical direction.
A low AWR indicates either a thicker root system or roots with higher specific density (Benjamin and Nielsen, 2006). Mung bean had greater AWR than soybean during all growth stages. This showed mung bean had a finer root system or roots with lower specific density. From V6 to R7, soybean and mung bean AWR decreased indicating a thickening or densification of the root material.
In horizontal devices, we found that the maximum of total root length, total root surface area, and total root volume of soybean and mung bean were at R5. However, in vertical device, the maximum of total root length, total root surface area, and total root volume of soybean were at R5. But for mung bean, the maximum of total root length and total root volume were found at R4 and the maximum total root surface was at R2. The reason may be that limiting the extension of roots in horizontal direction accelerated the aging process of mung bean roots, but soybean roots showed stronger adaptability than mung bean.
Conclusions
Compared with mung bean, soybean had a much larger root system during development. In the horizontal direction, the root system was mainly concentrated in the 0-5 cm soil layer, but soybean roots tended to be more laterally developed than mung bean roots. In the vertical direction, the distribution of mung bean roots was more uniform than that of soybean. With a greater AWR, mung bean had a finer root system than soybean.
Authors' Contribution HZ: Collection of samples, data collection and analysis, article writing; DZ and NF: Guidance on methods. All authors read and approved the final manuscript. | 2020-12-24T09:05:08.983Z | 2020-12-22T00:00:00.000 | {
"year": 2020,
"sha1": "7d0bb139befd5bcf82bf8787905eb0a069f1f644",
"oa_license": "CCBY",
"oa_url": "https://www.notulaebotanicae.ro/index.php/nbha/article/download/11780/9073",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "87ad964cb2499afff9dffe7e0910c4beb45e5e35",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
2783 | pes2o/s2orc | v3-fos-license | Radiation Dose from Whole-Body F-18 Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography: Nationwide Survey in Korea
The purpose of this study was to estimate average radiation exposure from 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) examinations and to analyze possible factors affecting the radiation dose. A nation-wide questionnaire survey was conducted involving all institutions that operate PET/CT scanners in Korea. From the response, radiation doses from injected FDG and CT examination were calculated. A total of 105 PET/CT scanners in 73 institutions were included in the analysis (response rate of 62.4%). The average FDG injected activity was 310 ± 77 MBq and 5.11 ± 1.19 MBq/kg. The average effective dose from FDG was estimated to be 5.89 ± 1.46 mSv. The average CT dose index and dose-length product were 4.60 ± 2.47 mGy and 429.2 ± 227.6 mGy∙cm, which corresponded to 6.26 ± 3.06 mSv. The radiation doses from FDG and CT were significantly lower in case of newer scanners than older ones (P < 0.001). Advanced PET technologies such as time-of-flight acquisition and point-spread function recovery were also related to low radiation dose (P < 0.001). In conclusion, the average radiation dose from FDG PET/CT is estimated to be 12.2 mSv. The radiation dose from FDG PET/CT is reduced with more recent scanners equipped with image-enhancing algorithms.
INTRODUCTION
In the current medical practice, radiological imaging studies are of critical importance in every aspect of patient management, and thus, they have been dramatically expanded in recent years. Most commonly used radiological imaging methods are planar X-ray and computed tomography (CT), which cause radiation exposure of patients (1). Although the benefit from medical imaging far surpasses the potential risk of radiation, medical doctors need to properly understand the risk and benefit of radiation exposure in decision making of imaging studies. 18 F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is a molecular imaging method that visualizes glucose metabolism in vivo. Currently, a hybrid imaging of FDG PET/CT is widely used in clinical practice for diverse diseases such as cancers, inflammatory diseases, neurological disorders, and myocardial metabolic disorders. Due to its usefulness, FDG PET/CT has been rapidly expanded; in Korea, more than 500,000 examinations were performed in 2014 (2). With the increase of FDG PET/CT examinations, a concern has been raised with regard to the radiation exposure by PET/CT, because it causes both internal and external radiation from radiopharmaceutical administration and CT acquisition.
The radiation dose of FDG PET/CT depends on both injected activity of FDG and CT protocol. Notably, radiation dose may be reduced with recent PET/CT scanners, which enable sensitive gamma ray detection for PET and dose-reduction algorithms for CT. Thus, radiation dose from FDG PET/CT should be estimated separately in each society that has different conditions regarding scanner equipment and cultural background of medical imaging. However, there have been scarce data on radiation dose of FDG PET/CT based on a real world survey in Korea.
In this study, a nation-wide survey was conducted in Korea to estimate the average radiation dose of FDG PET/CT examinations.
Questionnaire survey
The study design was exempted from the ethical review by the decision of the Institutional Review Board of Seoul National University Hospital (E-1511-003-713). This survey aimed to include all working PET/CT scanners in Korea, which were known to be 154 scanners in 117 institutions according to a survey in 2013 (3). In July 2015, a survey questionnaire was e-mailed to the persons in charge of PET/CT examinations in all institutions where PET/CT was in operation. The questionnaire was designed for dosimetry-related information in usual FDG PET/CT examinations covering torso (from the skull base to the upper thigh) area. The questionnaire was composed of 3 parts (Table 1); the first part was related to the equipment information such as manufacturer, model name, and installation date; the second part was related to the examination protocol in terms of FDG injection and PET/CT acquisition, including image-enhancing methods such as time-of-flight (TOF) acquisition and point spread function (PSF)-recovery algorithms. In the third part, patient dosimetry data of PET/CT in real practice were requested, for the most recent 10 patient results per each scanner, including age, sex, body weight, scan-covered area, scan length, injected activity of FDG, CT parameters of volume CT dose index (CTDIvol) and dose-length product (DLP).
Estimation of radiation dose
Effective dose from FDG PET was calculated from the injected FDG activity using the conversion factor presented by the International Commission on Radiological Protection (4). Effective dose from CT was calculated from CT parameters using the CT-Expo method (version 2.4, Institut fűr Diagnostische und Interventionelle Radiologie, Hannover, Germany) with tissue weighting factors defined in the publication 60 of the ICRP (5,6). When additional contrast-enhanced CT scans were obtained after conventional FDG PET/CT scan, only the CT scan for attenuation correction and lesion localization was included in the analysis.
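As a rough illustration, both dose components reduce to a linear conversion. The sketch below is not the study's calculation: the FDG coefficient (about 0.019 mSv/MBq for adults) is a commonly cited ICRP value and is assumed here, and the single DLP-to-effective-dose factor (about 0.015 mSv per mGy·cm) is a simplification; the survey itself derived the CT dose with CT-Expo and organ-level tissue weighting, so its CT estimate differs slightly from what this shortcut gives.

```python
# Rough effective-dose estimate for a whole-body FDG PET/CT examination.
# Both coefficients are assumed, commonly cited adult values; the survey itself
# used CT-Expo with ICRP 60 tissue weighting factors for the CT component.
FDG_MSV_PER_MBQ = 0.019        # assumed adult dose coefficient for 18F-FDG
DLP_MSV_PER_MGY_CM = 0.015     # assumed whole-body DLP conversion factor

def pet_effective_dose(injected_activity_mbq: float) -> float:
    """Internal dose from the injected FDG activity (mSv)."""
    return injected_activity_mbq * FDG_MSV_PER_MBQ

def ct_effective_dose(dlp_mgy_cm: float) -> float:
    """External dose from the CT component, estimated from the DLP (mSv)."""
    return dlp_mgy_cm * DLP_MSV_PER_MGY_CM

# Survey averages: 310 MBq injected activity and a DLP of 429.2 mGy*cm
pet = pet_effective_dose(310)
ct = ct_effective_dose(429.2)
print(f"PET {pet:.2f} mSv + CT {ct:.2f} mSv = {pet + ct:.1f} mSv total")
```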
Statistical analysis
Radiation dose of FDG PET/CT was calculated from the injected FDG activity and DLP in real practice. Additionally, the influence of equipment characteristics on radiation dose was assessed in terms of equipment age (installation year) and use of dose-reduction software, TOF acquisition, and PSF-recovery algorithm. All values were expressed as mean ± standard deviation. In comparison of continuous variables, χ 2 test and one-way ANOVA test with Bonferroni's post-hoc correction were used and P values less than 0.05 were regarded significant. All statistical analyses were conducted using a commercial statistical software package (SPSS version 22, IBM SPSS statistics, Chicago, IL, USA).
Ethics statement
The study design was exempted from the ethical review by the institutional review board of Seoul National University Hospital (E-1511-003-713). Informed consent was also waived.
Collection of questionnaires
The questionnaires were returned from 73 institutions and information on 105 PET/CT scanners was collected. The response rate was 62.4% on an institution basis and 68.2% on a scanner basis. Regional distribution of the institutions that responded in this nation-wide survey is shown in Fig. 1. The response included PET/CT results of 1,041 adults (M:F = 633:408, age 60 ± 13 years, body weight 61.4 ± 11.4 kg) and 3 children. One responder returned only FDG PET results without CT information, and only the PET results were included in the analysis. Characteristics of the enrolled PET/CT scanners are summarized in Table 2.
Injected FDG activity and radiation dose
The distributions of FDG activity and CTDIvol are shown in Fig. 2. In adults, mean injected activity was 310 ± 77 MBq (range 126-729 MBq). In 90.2% of responding institutions, the injected activity was determined primarily based on body weight of a patient; mean value of injected activity per body weight was 5.11 ± 1.19 MBq/kg (range, 2.56-11.40 MBq/kg) ( Fig. 2A). When radiation dose was calculated from the injected activity of all real practice data, 75th percentile of injected activity was 368 MBq (Fig. 3A) and mean effective dose from FDG was estimated to be 5.89 ± 1.46 mSv (range 2.39-13.85 mSv). In children, the injected activity was determined primarily based on body weight of a patient in 75.5% of the surveyed institutions. Mean value of injected activity per body weight was 4.47 ± 1.20 MBq/kg, which was slightly lower than that of adults (P = 0.01). Among the collected data of real examinations, 3 were results of children, in which the injected dose was 4.37, 4.81, and 4.93 MBq/kg.
With regard to CT scan, mean CTDIvol was 4.60 ± 2.47 mGy (range 0.97-15.19 mGy) in adults. Mean CTDIvol of the surveyed 73 institutions are shown in Fig. 2B and 75th percentile was 5.96 mGy (Fig. 3B). Mean DLP was 429.2 ± 227.6 mGy•cm (range, 99.0-1,274.0 mGy•cm) and 75th percentile was 561 mGy•cm (Fig. 3C). Based on the results, radiation dose from CT component in adult patients was estimated to be 6.26 ± 3.06 mSv. In 9 institutions (12.3%), additional CT scan was routinely performed with contrast-enhancement or breath-holding, for which radiation dose was not evaluated.
Factors affecting radiation dose of FDG PET/CT
Among the surveyed PET/CT scanners, 73 were equipped with software for CT dose reduction. Mean DLP was not significantly different between scanners equipped with the software and those without the software (436.1 ± 217.1 mGy • cm vs. 412.9 ± 250.4 mGy • cm, P = 0.14).
When PET/CT scanners were classified into 3 groups according to installation year, 42 were less than 5 years old, 50 were 5-10 years old, and 13 were more than 10 years old. Injected FDG activity was significantly reduced in more recently installed scanner groups (P < 0.001, Table 3). In addition, radiation dose from CT was also significantly lower in more recently installed scanner groups (P < 0.001 for both CTDIvol and DLP).
In PET acquisition, TOF technology was available in 45 PET scanners. Mean injected FDG activity for the TOF-available scanners was lower than that for TOF-unavailable scanners (4.76 ± 0.96 MBq/kg vs. 5.37 ± 1.28 MBq/kg, P < 0.001). PSF-recovery algorithm was equipped in 36 PET scanners. Mean injected FDG activity for these scanners was also lower than that for PSF recovery-unavailable scanners (4.64 ± 0.85 MBq/kg vs. 5.36 ± 1.27 MBq/kg, P < 0.001).
DISCUSSION
In this nation-wide survey, which covered approximately 55% of the total PET/CT scanners in operation in Korea, the average radiation dose from FDG PET/CT was estimated to be 12.2 mSv; 5.89 mSv from FDG PET and 6.26 mSv from CT. It was also demonstrated that more recent PET/CT scanners equipped with certain image-enhancing methods are related to lower radiation dose.
With the recent expansion of radiological imaging procedures, medical radiation exposure has been rapidly increased during the last 3 decades; in the United States, annual per capita medical radiation exposure has been increased from 0.53 mSv in 1980 to 3.0 mSv in 2006, the largest source of which was CT (7). The proportion of CT examination has become more considerable in medical radiation exposure because the amount of CT examination is related to economic development (8). However, a concern recently has been raised that FDG PET/CT would be another large source of medical radiation exposure because it has been increased rapidly over the last 10 years. In Korea, a total of 308,663 PET/CT examinations were performed in 2009 (3), and approximately 513,000 FDG PET/CT examinations, in 2014 (9). Additionally, FDG PET/CT is a source of both internal and external radiations; internal radiation from intravenously injected FDG, and external radiation from CT imaging. On the other hand, a single examination of FDG PET/CT may substitute several CT scans and nuclear imaging studies because it covers whole body in a single scan. Thus, medical doctors need to understand the radiation dose from FDG PET/CT and to make a decision for diagnostic imaging based on appropriate risk-benefit assessment.
There have been nation-wide surveys of radiation dose from FDG PET/CT and its quality control in some European countries (10-12). However, radiation dose can vary widely according to imaging protocols and scanner models. Additionally, because recent PET/CT scanners are equipped with highly sensitive detectors and dose reduction algorithms, FDG PET/CT can be performed with a lower radiation dose than before. In Korea, many PET/CT scanners have been installed in recent years with the expansion of its use. Thus, a real world survey is required to estimate the overall radiation dose of FDG PET/CT. The average radiation dose demonstrated in this study is grossly similar to previously reported results. In a nation-wide survey in France, the mean radiation dose from FDG PET/CT was estimated to be 14.3 mSv; 5.6 mSv from FDG PET and 8.7 mSv from CT (10). In our study, the dose from FDG PET was slightly higher whereas the dose from CT was lower than that in the French survey. The injected FDG activity recommended by the European Association of Nuclear Medicine (EANM) is 2.5-5.0 MBq/kg (13), which is slightly lower than the mean injected FDG activity surveyed in our study (mean 5.11 MBq/kg). This is speculated to be caused by different imaging protocols. In the EANM guideline, the injected activity is based on a protocol using a fixed scan time of 5 minutes/bed; however, in Korea, scan time is usually less than 2-3 minutes/bed, chiefly for patients' convenience and high throughput. In a recent guideline, injected activity is determined considering scan time: 7-14 MBq·min·bed⁻¹·kg⁻¹ (14). Although the current FDG activity is within a reasonable range, more efforts should be made in the future based on the balance of risk and benefit.
An intriguing point of this study is the relationship between equipment characteristics and radiation dose. Both the radiation doses from FDG and CT were significantly reduced by using more recently installed scanners equipped with image-enhancing methods. TOF acquisition algorithms can reduce background signal noise and cause an increase in sensitivity (15). As PSF-recovery algorithms can enhance image resolution and overall image quality (16), TOF technology combined with PSFrecovery algorithm would be a main cause of the reduced injected activity. The optimal injected FDG activity is determined in each institution by considering image quality and patients' radiation safety. The present study demonstrated in a real world study that injected FDG activity is reduced by using more recent scanners equipped with these algorithms based on the improved image quality. Additionally, the radiation dose from CT was also lower in more recent scanners, although mean DLP was not significantly different between scanners with and without dose reduction software. It is speculated that hardware factors such as multi-detectors are more important than software factors. Additionally, the influence of specific CT protocol in each institution should be investigated in further studies. Considering the results of the current study, use of obsolete scanners should be discouraged by health insurance reimbursement system or healthcare policy, for patients' radiation safety.
Quality control programs for imaging equipment and protocols are also important for maintaining the performance of diagnostic tests and reducing unnecessary radiation exposure. In a quality control program, various steps of image acquisition and reconstruction are checked, and authoritative recommendations for standardized quality control protocols have been reported regarding daily procedures, calibration of PET/CT scanners and image quality evaluation (13,14). Quality control and standardization of imaging procedures are necessary not only for radiation safety but also for comparing image results between different institutions in the case of multicenter clinical trials (17,18). The International Commission on Radiological Protection recommended constitution of national diagnostic reference levels (DRL) to achieve evidence-based medical radiation protection (19). DRLs are defined as dose levels in medical radiological diagnostic practices or typical levels of radiopharmaceutical activity for groups of standard-sized patients or a standard phantom (20). In terms of radiation protection and standard procedure, DRLs are recommended to be implemented for medical radiation diagnostic procedures. The present study is expected to be a basis for establishing the national DRLs for FDG PET/CT scans; the DRL for the CT component of whole body FDG PET/CT may be suggested as 560 mGy·cm (75th percentile of DLP in Fig. 3C), which is lower than the value of 750 mGy·cm proposed in a French survey (10). The DRL for FDG activity may be suggested as 370 MBq (75th percentile of injected activity in Fig. 3A), which is similar to proposed values of 350-385 MBq in other countries (10,21,22).
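Deriving a DRL from survey data amounts to taking the 75th percentile of the observed dose distribution. The sketch below shows that calculation on made-up values; the arrays are placeholders, not the survey data.

```python
import numpy as np

# Hypothetical per-examination values standing in for the survey data
dlp_mgy_cm = np.array([320, 410, 455, 510, 565, 610, 380, 490, 700, 350])
activity_mbq = np.array([260, 300, 310, 330, 355, 370, 290, 345, 400, 280])

# A DRL is conventionally set at the 75th percentile of the distribution
print(f"DRL (CT, DLP): {np.percentile(dlp_mgy_cm, 75):.0f} mGy*cm")
print(f"DRL (FDG activity): {np.percentile(activity_mbq, 75):.0f} MBq")
```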
The present study has a limitation that it is based on a questionnaire survey without actual evaluation of radiation dose in each scanner. However, this is the first study that conducted a nation-wide survey on radiation dose of FDG PET/CT in Korea, and more than 50% of scanners were included in this study. Further studies are required regarding actual measurement of radiation dose.
In conclusion, the average radiation dose from FDG PET/CT is estimated to be 12.2 mSv from this nation-wide survey in Korea. The radiation dose is reduced with more recent scanners equipped with image-enhancing algorithms. The results are expected to be a basis for establishing the national DRLs for FDG PET/CT scan.
DISCLOSURE
The authors declare that they have no conflict of interest.
AUTHOR CONTRIBUTION
Conception and design: Paeng JC, Cheon GJ, Lee JS. Acquisition of data: Kwon HW, Kim JP, Lee HJ. Analysis and interpretation of data: Kwon HW, Kim JP. First draft of manuscript: Kwon HW, Paeng JC. Revision and critical review of the manuscript: | 2016-05-12T22:15:10.714Z | 2016-02-01T00:00:00.000 | {
"year": 2016,
"sha1": "9623381727023d1f06da99bec55ddb0d6a4c7539",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3346/jkms.2016.31.s1.s69",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9623381727023d1f06da99bec55ddb0d6a4c7539",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244765044 | pes2o/s2orc | v3-fos-license | Association between metabolic syndrome and incidence of ocular motor nerve palsy
To assess the association between metabolic syndrome (MetS) and the development of third, fourth, and sixth cranial nerve palsy (CNP). Health checkup data of 4,067,842 individuals aged between 20 and 90 years provided by the National Health Insurance Service (NHIS) of South Korea between January 1, 2009, and December 31, 2009, were analyzed. Participants were followed up to December 31, 2017. Hazard ratio (HR) and 95% confidence interval (CI) of CNP were estimated using Cox proportional hazards regression analysis after adjusting for potential confounders. Model 1 included only incident CNP as a time-varying covariate. Model 2 included model 1 and individual’s age and sex. Model 3 included model 2, smoking status, alcohol consumption, and physical activity of individuals. We identified 5,835 incident CNP cases during the follow-up period (8.22 ± 0.94 years). Individuals with MetS (n = 851,004) showed an increased risk of CNP compared to individuals without MetS (n = 3,216,838) after adjustment (model 3: HR = 1.35, 95% CI 1.273–1.434). CNP incidence was positively correlated with the number of MetS components (log-rank p < 0.0001). The HR of CNP for males with MetS compared to males without MetS was higher than that of females with MetS compared to females without MetS (HR: 1.407, 95% CI 1.31–1.51 in men and HR: 1.259, 95% CI 1.13–1.40 in women, p for interaction = 0.0017). Our population-based large-scale cohort study suggests that MetS and its components might be risk factors for CNP development.
reimbursement from the National Health Insurance Service (NHIS) for medical expenses. In this way, the NHIS gathers all healthcare utilization information including demographics, medical treatment, procedures, and disease diagnoses using codes from the International Classification of Diseases, 10th Revision-Clinical Modifications (ICD-10-CM). In addition, the NHIS offers a national health screening program (NHSP) for all beneficiaries aged ≥ 20 years at least every 2 years. In the present study, we used a customized NHIS database cohort that included 40% of the Korean population who were selected by stratified random sampling to ensure that the sample was representative of the entire population. Among individuals aged between 20 and 90 years who participated in the NHSP in 2009, a total of 2,721,914 eligible individuals were finally identified after excluding those who had missing information (n = 43,312) and those who had a previous history of CNP or new CNP diagnosis or death within 1 year after the date of their health examination (n = 6802). We used this 1-year time lag in sensitivity analysis to avoid the problem of reverse causation. Therefore, eligible individuals were followed up for CNP cases from 1 year after the date of their health examination (the 1-year time lag) until December 31, 2017. This study adhered to the tenets of the Declaration of Helsinki. It was approved by the Institutional Review Board of Samsung Medical Center (IRB; IRB no. SMC 2020-09-050). The IRB of Samsung Medical Center waived the requirement of informed consent from individual patients because data used were public and anonymized under confidentiality guidelines.
Assessment. Standardized self-reported questionnaires were used to collect general health behavior and lifestyle information at the time of enrollment 14 (Supplementary File 1). Smoking status was categorized into non-smokers, ex-smokers, and current smokers. Alcohol drinking was categorized into non-drinkers, mild to moderate drinkers, and heavy drinkers according to the amount of alcohol consumed on one occasion. Individuals who consumed over 30 g of alcohol per day were defined as heavy alcohol drinkers. Regular physical activity was defined as performing high-intensity exercise for at least 20 min three times per week or moderate-intensity exercise for at least 30 min five times per week. High-intensity physical activity was defined as physical activity that caused extreme shortness of breath (e.g., running, bicycling at high speed, or carrying boxes upstairs). Moderate-intensity physical activity was defined as physical activity that caused substantial shortness of breath (e.g., brisk walking, tennis, bicycling, carrying light boxes, and cleaning). Low-income level was defined as the lower quintile of the entire population.
Height (cm) and weight (kilogram [kg]) were measured using an electronic scale in medical institutions during health examinations. WC (cm) was measured at the middle point between the rib cage and iliac crest by trained examiners. Body mass index (BMI) was calculated as body weight in kg divided by height in meters squared (m 2 ). General obesity was defined as a BMI of ≥ 25 kg/m 2 based on the World Health Organization recommendations for Asian populations 15 . Abdominal obesity was defined as a WC of ≥ 90 cm for men and ≥ 85 cm for women according to the Asian-specific WC cutoff for abdominal obesity 16 . Blood samples were drawn after overnight fasting to measure serum levels of glucose, total cholesterol, triglycerides, HDL-C, low-density lipoprotein (LDL)-cholesterol, hemoglobin, serum creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), and gamma-glutamyl transpeptidase (γ-GTP).
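As an illustration of how a per-participant MetS component count could be derived from these measurements, a minimal sketch is given below. The waist cutoffs follow the Korean definition quoted above; the remaining thresholds (blood pressure ≥ 130/85 mmHg, triglycerides ≥ 150 mg/dL, HDL-C < 40/50 mg/dL, fasting glucose ≥ 100 mg/dL) are the commonly used revised NCEP ATP III values and are assumptions here, since the criteria are named but not enumerated in this section; medication use, which normally also counts toward a component, is ignored in this sketch.

```python
def mets_component_count(sex: str, wc_cm: float, sbp: float, dbp: float,
                         tg_mg_dl: float, hdl_mg_dl: float,
                         glucose_mg_dl: float) -> int:
    """Count MetS components for one participant (thresholds partly assumed)."""
    components = [
        wc_cm >= (90 if sex == "M" else 85),      # abdominal obesity (Korean cutoffs)
        sbp >= 130 or dbp >= 85,                  # elevated blood pressure (assumed)
        tg_mg_dl >= 150,                          # hypertriglyceridemia (assumed)
        hdl_mg_dl < (40 if sex == "M" else 50),   # low HDL-C (assumed)
        glucose_mg_dl >= 100,                     # impaired fasting glucose (assumed)
    ]
    return sum(components)

def has_mets(**measures) -> bool:
    """MetS is conventionally defined as three or more components."""
    return mets_component_count(**measures) >= 3

print(has_mets(sex="M", wc_cm=93, sbp=135, dbp=82,
               tg_mg_dl=180, hdl_mg_dl=38, glucose_mg_dl=96))  # True (4 components)
```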
Baseline comorbidities were identified based on past medical history with clinical and pharmacy codes of the ICD-10-CM. We defined hypertension as blood pressure (BP) of ≥ 140/90 mmHg or at least one claim per year for an antihypertensive medication prescription under ICD-10-CM codes I10-I13 and I15. Diabetes mellitus (DM) was defined by a fasting glucose of ≥ 126 mg/dL or at least one claim per year for a prescription of hypoglycemic agents under ICD-10-CM codes E11-E14. Dyslipidemia was defined as a total cholesterol level of ≥ 240 mg/dL or at least one claim per year for a lipid-lowering medication prescription under ICD-10-CM code E78.
Statistical analysis. Baseline characteristics of study participants according to the presence of MetS are presented as mean ± standard deviation for continuous variables and absolute frequencies for categorical variables. The Kolmogorov-Smirnov analysis to assess normal distribution of variables failed due to our large number of subjects. Instead, a histogram was drawn for each variable to confirm that it had a graph close to normal distribution (Supplementary File 2). Values were compared by independent t-test for continuous variables or chi-squared test for categorical variables. In Table 1, we analyzed the power and effect size for all inferential tests using the G*Power program. Incidence rates of ocular motor CNP were calculated as the number of events per 1000 person-years. We performed multivariable Cox proportional hazards regression analysis to evaluate the association of MetS with incident ocular motor CNP and calculated the hazard ratios (HRs) and 95% confidence intervals (CIs). Model 1 included only incident CNP as a time-varying covariate. Model 2 included model 1 and the individual's age and sex. Model 3 included model 2, smoking status, alcohol consumption, and physical activity of individuals. In addition, we evaluated the risk of incident ocular motor CNP according to the coexistence of general obesity and MetS. We also conducted clinically relevant subgroup analyses and calculated HRs and 95% CIs within each subgroup.
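The nested Cox models described above could be reproduced, for example, with the lifelines package in Python. The sketch below is illustrative only: the data frame is synthetic, the column names are assumptions, and model 3 would simply add smoking, alcohol consumption, and physical activity terms to the same formula; the actual analysis was run on the NHIS cohort, not on toy data like this.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for the cohort; column names are assumptions for this sketch.
df = pd.DataFrame({
    "followup_years": [8.2, 7.9, 8.1, 3.4, 8.0, 6.2, 5.5, 8.3, 4.8, 8.1],
    "cnp_event":      [0,   0,   0,   1,   0,   1,   1,   0,   1,   0  ],
    "mets":           [0,   1,   0,   1,   0,   1,   1,   1,   0,   0  ],
    "age":            [45,  61,  38,  66,  52,  59,  63,  57,  70,  49 ],
    "male":           [1,   0,   1,   1,   0,   1,   0,   1,   1,   0  ],
})

# Model 2 of the paper: MetS adjusted for age and sex.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="cnp_event",
        formula="mets + age + male")
cph.print_summary()  # exp(coef) gives the hazard ratios with 95% CIs
```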
Results
A total of 4,067,842 eligible participants were included in our cohort. At baseline, 851,004 individuals (20.92% of the total population) were diagnosed with MetS. Study participants were followed up until December 31, 2018, with an average follow-up duration of 8.22 ± 0.94 years. Table 1 shows baseline characteristics of the study population according to the presence of MetS. The proportion of men was higher in the MetS group than in the non-MetS group (60.79% vs. 53.49%). The mean age was 54.97 ± 12.92 years in the MetS group and 44.97 ± 13.63 years in the non-MetS group. The mean age was 51.24 ± 12.85 years for men and 60.76 ± 10.71 years for women in the MetS group. Individuals with MetS more frequently had DM, hypertension, dyslipidemia, and chronic kidney disease. The MetS group also exhibited higher mean BMI, WC, DBP, SBP, serum total cholesterol, and triglyceride values than the non-MetS group. Mean LDL-C and HDL-C values were lower in individuals with MetS than in those without MetS. Proportions of ex-smokers, current smokers, and heavy drinkers were higher in the MetS group than in the non-MetS group. Proportions of people with regular exercise and low income were higher in the MetS group than in the non-MetS group. All variables were significant (p-values < 0.001). This appeared to result from the very large sample size. We also calculated the power and effect size (absolute standardized difference, ASD) for each variable in Table 1. The power was 1 for all statistical tests in Table 1 due to the large sample size. A total of 5835 individuals were diagnosed with ocular motor CNP during the follow-up period. The incidence rate of ocular motor CNP in the MetS group was approximately 2.19 times higher than that in the non-MetS group.
Table 1. Baseline characteristics of the study population. N number, SD standard deviation, BMI body mass index, WC waist circumference, SBP systolic blood pressure, DBP diastolic blood pressure, HDL-C high-density lipoprotein cholesterol, LDL-C low-density lipoprotein cholesterol, PA physical activity. a Geometric mean.
Fig. 1 and Table 3 present the incidence probability of ocular motor CNP according to the number of MetS components compared to the group without any components. Ocular motor CNP incidence was positively correlated with the number of MetS components (log-rank p < 0.0001). The HR for incident ocular motor CNP compared to people without any MetS components gradually increased with the number of components (p for trend < 0.0001) (Table 3). These associations persisted even after adjusting for potential confounding variables including smoking status, alcohol consumption, and physical activity. Individuals with three MetS components were at 65% higher risk of developing ocular motor CNP (model 3, HR, 95% CI 1.65, 1.51-1.81). Those with all five components were at a 97% higher risk of developing ocular motor CNP compared to those without any components (model 3, HR, 95% CI 1.97, 1.70-2.26).
There was a significant interaction between sex and MetS on the risk of ocular motor CNP (p for interaction = 0.0017). The HR for males with MetS to develop ocular motor CNP compared to males without MetS was higher than that for females with MetS compared to females without MetS (HR, 95% CI 1.407, 1.31-1.51 in men and 1.259, 1.13-1.40 in women).
Discussion
Criteria for the diagnosis of MetS have been suggested by various organizations including the World Health Organization (WHO), the European Group for the Study of Insulin Resistance (EGIR), the National Cholesterol Education Program-Third Adult Treatment Panel (NCEP ATP III), the American Association of Clinical Endocrinologists (AACE), and the International Diabetes Federation (IDF) 3-5, 12, 13, 15 . All groups agree on critical components of metabolic syndrome: obesity, insulin resistance, dyslipidemia, and hypertension. This study used the NCEP ATP III criteria for hypertension, hypertriglyceridemia, low HDL-C, and hyperglycemia. For abdominal obesity, this study used the definition in Korean clinical practice guidelines. MetS affects nearly 30% of the world's population and causes a two to threefold increase in morbidity and mortality compared to healthy people 17 . In Korea, a 22.4-32.1% prevalence of MetS has been reported 8,18,19 . MetS has been associated
Our population-based cohort study found that the incidence of ocular motor CNP was 2.19 times greater for individuals with MetS compared to those without. Individuals with MetS had a 35% higher risk of incident ocular motor CNP than individuals without MetS during the mean follow-up period of 8 years. Each component of MetS was significantly associated with a higher risk of incident ocular motor CNP. As the number of MetS components increased, the risk of incident ocular motor CNP gradually increased. The hazard ratio for ocular motor CNP in males with MetS compared to males without MetS was higher than that in females with MetS compared to females without MetS.
In our study, a high fasting plasma glucose and an abdominal obesity were associated with a 43% increase and a 24% increase, respectively, in the risk of developing ocular motor CNP after adjusting possible confounding factors. Hyperglycemia affects the tricarboxylic acid (TCA) cycle and glycation reactions, resulting in oxidative stress 26 . Furthermore, advanced glycation end products (AGEs) can promote the inflammation cascade by activating AGE receptors on immune cells 27 . Based on these concepts, diabetes can be thought of as a chronic inflammatory disease 28 . Oxidative stress and inflammation, especially the overproduction of reactive oxygen species (ROS) from partial reduction of O 2 , can cause mitochondrial dysfunction in many cells including neurons. Due to high metabolic activity and dependence on energy supply, mitochondrial damage from oxidative stress such as ROS can cause nerve cell damage 29 . Moreover, leptin, a signaling molecule produced by adipocytes that acts on the hypothalamus to increase satiety, is overproduced in obese individuals. High plasma concentrations of leptin can induce leptin resistance in the brain. Thus, obese people continue to feel hungry. Leptin is also involved in immune modulation, such as leukocyte extravasation and the development and activation of leukocytes 30 . These pro-inflammatory environments in both obesity and type 2 diabetes mellitus contribute to the increased permeability of the blood-brain-barrier (BBB) 31 . BBB breakdown causes decreased removal of waste and increased infiltration of immune cells, leading to disruption of glial and neuronal cells.
In one previous study, diabetes and WC were the main metabolic factors associated with polyneuropathy, whereas SBP, triglyceride levels, and HDL-C levels were not 32 . In our study, not only diabetes, but also hypertension, high triglyceride levels, and low HDL-C levels significantly increased the risk of ocular motor CNP. The prevalent coexistence of hypertension 10, 33 , obesity 33 , and dyslipidemia 11 has been reported in ocular motor CNP patients. In one clinical study on the etiology of ocular motor CNP, hypertension and dyslipidemia along with DM were prevalent not only in patients with presumed microvascular ischemia but also in patients with other identifiable causes 11 . In our study, the presence of hypertension, elevated serum levels of triglycerides, and decreased levels of HDL-C increased the risk of ocular motor CNP by 13%, 18%, and 24%, respectively.
Individuals with a higher number of MetS components were at a higher risk of incident ocular motor CNP in this study. Our results suggest that each component of MetS has an additive effect on the risk of the development of ocular motor CNP. We presume that treating MetS as a whole might be of value to reduce the incidence of ocular motor CNP. However, due to limitations of a correlation study, we were unable to confirm the causative effect of MetS on ocular motor CNP. Further studies are warranted to explore the effect of MetS treatment on the development and progression of ocular motor CNP.
The HR of ocular motor CNP for males with MetS compared to males without MetS was higher than that for females with MetS compared to females without MetS. This finding suggests that men with MetS are more vulnerable to ocular motor CNP than women with MetS. Also, the proportion of males (23.12%) in the MetS group was higher than that of females (18.24%) and the mean age of the males in the MetS group was lower than that of the females in our cohort. These results suggested that men in our cohort were more prone to MetS and that men with MetS were more prone to ocular motor CNP. Although the underlying mechanism is currently unclear, sex differences in body fat distribution, glucose homeostasis, and lipid metabolism have been previously reported 34,35 . Women generally have higher adiposity relative to men throughout their entire lifespan. However, men often have higher adipose tissue distributed in the central or abdominal subcutaneous region. This distribution might be predominantly sex hormone-dependent 36 . Subcutaneous abdominal fat has been correlated with an increased susceptibility for MetS. Visceral lower body fat has been associated with reduced metabolic risk. It might be protective against adverse effects of obesity. Furthermore, visceral adipocytes in men exhibit higher rates of fatty acid turnover and lipolysis than those in women, leading to a greater release of free fatty acid into the circulation. Both sex hormones and sex chromosome complement may play roles in such differences. However, relatively few studies have focused on the underlying mechanisms 37,38 . Gender may not only affect obesity, but also affect neurological complications. In an experimental study using diabetic mice, male mice developed greater diabetes-induced cognitive deficits and peripheral neurovascular dysfunction than female mice 39 . A human study comparing male and female diabetic patients showed that males developed neuropathic complications 4 years earlier than females 40 .
This study has several limitations. They should be considered when interpreting our results. First, this was a retrospective cohort case-control study. Thus, our data could only suggest the correlation between MetS and ocular motor CNP. The causative relationship between the two could not be predicted. Second, the status of MetS changes over time. However, we used single, fixed measures at baseline. Thus, in this study, we do not know whether an improvement in MetS would decrease the risk of ocular motor CNP development. Third, since our study used insurance data composed of diagnostic codes, there might be a possibility of misdiagnosis. Moreover, due to the lack of data, some possible confounding factors such as dietary factors that may affect MetS or the duration of MetS could not be evaluated. Lastly, our data were comprised of mostly Koreans. Thus, our results might not be applicable to other ethnicities. However, the present study was a population-based, nationwide large-scale cohort study with merits to offset its limitations. Also, our study suggested the harmful effect of MetS on the development of ocular motor CNP in the general population for the first time to the best of our knowledge.
In conclusion, we found that MetS and its components were independent risk factors for ocular motor CNP development using a population-based sample over 8 years. Components of MetS had additive effects. An increased number of MetS components was associated with an increased risk of ocular motor CNP incidence, suggesting the importance of an integrated risk management approach for MetS. Males with MetS vs. males without MetS had a higher hazard ratio of experiencing ocular motor CNP than females with MetS vs. females without MetS.
Data availability
The data that support the findings of this study are available from Korean National Health Insurance (NHI). However, restrictions apply to the availability of these data, which were used under permission for the current study as such data are not publicly available. Data are however available upon reasonable request and with permission of Korean National Health Insurance (NHI). | 2021-12-01T06:23:52.321Z | 2021-11-29T00:00:00.000 | {
"year": 2021,
"sha1": "e0627f6d3c18c32d9f0021c631c7fef55d2a8f90",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-02517-3.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "a146dfae5df2e01708586bc6d61d6d465d7ae31f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
58360801 | pes2o/s2orc | v3-fos-license | Acute and Recovery Changes of TNF-α and IL-1β in Response to Aerobic Exercise in Smokers and Non-smokers
Introduction: Recent evidence has shown that acute exercise affects the immune response in healthy individuals. However, the effect of aerobic exercise on inflammatory markers in smokers has not been well studied. This study evaluated acute and recovery responses of inflammatory cytokines to moderate aerobic exercise in male smokers. Methods: For this purpose, 15 sedentary male smokers and 15 male non-smokers matched for age and body mass index (BMI) performed aerobic exercise involving 40-minute running at 70% of maximum heart rate (HRmax). Blood samples were obtained pre-exercise (baseline), immediately post exercise (zero), as well as 60 minutes and 24 hours after exercise for analysis of interleukin-1 beta (IL-1β) and tumor necrosis factor-alpha (TNF-α) levels. The data were analyzed using SPSS 16.0 and one-way analysis of variance (ANOVA) with repeated measures. Results: No differences were observed for baseline IL-1β between smokers and non-smokers. However, serum TNF-α level was significantly higher in smokers (75.1 ± 14.3) than in non-smokers (37.2 ± 9.11) at baseline (P = 0.01). Aerobic exercise significantly reduced TNF-α levels immediately after exercise (58.9 ± 11.6), at 60 minutes (50.1 ± 14.8), and at 24 hours (53.44 ± 12.3) post exercise in comparison with baseline (P = 0.02) in smokers. TNF-α levels remained significantly higher in smokers compared to non-smokers immediately, 60 minutes, and 24 hours post exercise. IL-1β levels revealed no significant differences between smokers and non-smokers at baseline, immediately, 60 minutes, and 24 hours post exercise. Furthermore, exercise did not significantly affect acute or recovery changes of TNF-α and IL-1β in non-smokers. Conclusion: In conclusion, based on acute and recovery responses of serum TNF-α to exercise, it seems that moderate aerobic exercise may have beneficial effects on the inflammatory profile in male smokers.
Introduction
Regular consumption of tobacco has been known as the second leading cause of death. It is predicted that more than 9 million people will lose their lives each year due to its consumption until 2030. 1 Heart diseases and some lung conditions such as lung cancer and chronic obstructive pulmonary disease (COPD), as well as skeletal muscle inflammation, are the major consequences of tobacco consumption. 2,3 Pro-inflammatory cytokines have been suggested to be secreted from immune cells as a result of smoking. Recent studies have reported changes in inflammatory cytokines in healthy smokers, not only in the lungs but also in blood circulation. Some studies have reported high levels of these inflammatory cytokines even 10 to 20 years after quitting. 4 Recent studies have revealed that inflammatory reactions (i.e. elevation in cytokines such as interleukin-1 beta [IL-1β] and tumor necrosis factor-alpha [TNF-α]) are increased particularly in response to cigarette smoking, predisposing to chronic disorders such as diabetes. 5,6 The role of smoking in increasing the inflammatory cytokines was observed in some other studies. 7 Meanwhile, some studies have confirmed the additive effect of smoking on serum TNF-α as one of the proinflammatory cytokines in blood circulation. 8,9 Increased IL-1β leads to respiratory inflammation, destruction of elastic fibers in pulmonary alveolar walls, obstruction of the airway wall, and accumulation of lymphocytes in the respiratory airways. 10 Scientific references have reported higher levels of IL-1β in smokers compared to non-smokers. 11 Therefore, implementing strategies for reducing the effects of smoking has always been health science researchers' top priority.
The role of exercise and physical activity is of great importance in regulating inflammation. In this context, although the inflammatory response to various exercise training has been less frequently studied in smokers, scientific findings in other healthy and sick populations have shown the anti-inflammatory impacts of exercise, with a reduction in the levels of inflammatory mediators such as IL-1β. 12 In addition, TNF-α level was reduced in skeletal muscle of thin older people in response to exercise. 13 Despite this evidence, there are few studies regarding acute or recovery responses of cytokines such as TNF-α and IL-1β to exercise in smokers. Therefore, this study aimed to measure and compare acute and recovery responses of these inflammatory cytokines following an exercise session in smokers and non-smokers.
Human Subjects and Study Inclusion
This semi-experimental study examined acute and post-exercise responses of serum TNF-α and IL-1β in smokers and non-smokers. Participants, including 15 non-trained healthy male smokers and 15 non-smokers matched for sex, age, height and body mass index (BMI), were recruited by convenience sampling. The sample size was determined according to equation 1.
Inclusion and Exclusion Criteria
To understand medical history, subjects were asked to complete questionnaires on present medications, general health, alcohol consumption and smoking. The inclusion criterion for the smoker group was a smoking history of at least 10 cigarettes a day for 3 years (current smokers). 14 All subjects of the two groups were inactive and non-alcoholics. None of the subjects used therapies or drugs for obesity, and none had a past history of injury or disease that would prevent daily exercise. All subjects of the two groups had not participated in regular diet programs/exercise for the preceding six months. Patients with a known history of neuromuscular disease, acute or chronic respiratory infections and cardiopulmonary disease were excluded from the study. Any subject who did not complete the exercise program was also excluded from the final analysis.
Anthropometric Measurements
Body weight was measured to the nearest 10 g using digital scales (OMRON, BF: 508, Finland). The height of barefoot participants was measured to the nearest 0.1 cm. Body fat percentage was measured using a body composition monitor (OMRON, BF: 508, Finland). BMI was calculated for each subject by dividing weight (kg) by the square of height (m2).

Acute Exercise and Blood Analysis

The aerobic exercise test lasted 40 minutes at 70% of maximum heart rate (HRmax) and involved running on a flat surface with no slope. Target HR was controlled using a Polar telemetry system. Venous blood samples were obtained before, immediately after, and 60 minutes and 24 hours after the exercise test in the two groups. Blood samples were dispensed into EDTA-coated tubes and then centrifuged for 10 minutes in order to separate serum. Serum was used to measure TNF-α and IL-1β by ELISA.

Serum TNF-α concentration was determined using ELISA for quantitative detection of human TNF-α (Human TNF-α total Platinum ELISA BMS2034/BMS2034TEN, Biovendor, Vienna, Austria). The intra-assay CV for TNF-α was 6.0% and the assay measuring range was 23 to 1500 pg/mL. Serum IL-1β was determined using ELISA for quantitative detection of human IL-1β (Human IL-1β Platinum ELISA BMS224/2/BMS224/2TEN, Biovendor, Vienna, Austria). The intra-assay CV for IL-1β was 5.1%, and the assay measuring range was 3.9 to 250 pg/mL.

Data Analysis

Data were analyzed using the Statistical Package for Social Sciences (SPSS) for Windows, version 16.0. Normal distribution of the data was determined by the Kolmogorov-Smirnov normality test. One-way analysis of variance (ANOVA) with repeated measures was used to assess the changes in serum TNF-α and IL-1β induced by the exercise test in the two groups. A P value of less than 0.05 was considered statistically significant.
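As a rough illustration of the repeated-measures comparison described above, the following sketch shows how the four-time-point cytokine data could be analyzed outside SPSS; the subject IDs and TNF-α values are hypothetical placeholders, not the study's data, and statsmodels' AnovaRM is used here instead of the authors' SPSS procedure.

```python
# Minimal sketch (not the authors' SPSS workflow): repeated-measures ANOVA on
# serum TNF-alpha across the four sampling points, assuming data in long format.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per subject per time point.
data = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "time": ["pre", "post0", "post60", "post24h"] * 3,
    "tnf_alpha": [75.0, 59.2, 50.5, 53.1, 80.3, 61.0, 52.4, 55.9, 70.1, 56.5, 47.8, 51.2],
})

# One-way repeated-measures ANOVA: does TNF-alpha change across time points?
result = AnovaRM(data, depvar="tnf_alpha", subject="subject", within=["time"]).fit()
print(result)
```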
Results
Values of anthropometric characteristics are reported as means and standard deviations (Table 1). Based on the independent-samples Student's t test, no statistically significant differences were found between smokers and non-smokers with regard to anthropometric characteristics (P > 0.05).

TNF-α serum levels were significantly higher in smokers than in non-smoker subjects at baseline (P = 0.01). In contrast, smokers and non-smokers did not reveal any significant difference regarding serum IL-1β at baseline (P = 0.31).

As mentioned above, the main objective of the present study was to determine the acute and recovery responses of IL-1β and TNF-α to aerobic exercise in the two groups. Based on repeated-measures data, although the acute and recovery response (1 and 24 hours) of serum TNF-α to the exercise test was not significantly different compared with pre-exercise (baseline) in non-smokers (P > 0.05), its levels were significantly decreased after exercise (0, 1 and 24 hours) compared to pre-exercise in smoker subjects (Table 2).

Regarding the serum IL-1β response to the aerobic exercise test, no differences were observed immediately or during recovery post-exercise compared to pre-exercise in this cytokine concentration in smokers and non-smokers (P > 0.05, Table 3).
Discussion
Although previous studies have confirmed higher levels of inflammatory cytokines in smokers than non-smokers, in the current study, no significant differences were observed between smokers and non-smokers in terms of serum IL-1β levels at baseline. Furthermore, no significant changes were detected at the acute (1 hour) and recovery (24 hours) phases post exercise in the smoker and non-smoker groups. This is despite the fact that smokers had significantly higher serum TNF-α levels compared to non-smokers at baseline and recovery phases. In this regard, a study reported increased IL-1β not only in smokers but also in people indirectly exposed to smoking. 14 The fact that no difference was observed in IL-1β between smokers and non-smokers can be attributed to our small sample size. However, levels of the inflammatory profile in smokers do not follow a regular pattern. Nevertheless, smokers showed higher serum TNF-α compared to non-smokers. In addition to smoking, inflammatory cytokine levels seem to fluctuate in response to exercise and physical activity as well. In the present study, a reduction of TNF-α at the acute and recovery phases (1 and 24 hours, respectively) in response to exercise was observed in male smokers. In general, both acute and long-term exercise seem to affect inflammatory cytokines. In this regard, low-intensity aerobic exercise and a combined aerobic-resistance training program decreased levels of IL-1β in people suffering from obesity and diabetes. 15 Similar to our findings, no significant change occurred in the levels of IL-1β as a result of long-term exercise training in another study. 16 In addition, short-term exercise training did not lead to any change in the plasma levels of IL-1β. 17 In some studies, despite the increase of physical fitness levels in female smokers in response to exercise training for 12 weeks, inflammatory cytokines did not significantly change in response to exercise. 12 In another study, aerobic training for 12 weeks significantly decreased serum CRP levels, but no changes were observed in TNF-α level. 18 In the current study, despite the lack of change in IL-1β in response to aerobic exercise in the smokers, serum levels of TNF-α showed a decreasing trend immediately and 24 hours following exercise. Nevertheless, TNF-α levels demonstrated no such changes in non-smokers compared to baseline levels. These results also indicated no acute or recovery responses of IL-1β to exercise in smokers and non-smokers. Regarding the acute response of IL-1β to exercise, Martin et al showed that a single session of exercise increased IL-1β in obese mice. 19 Moreover, in the study of Duran et al, despite the acute and recovery increase in IL-6 in response to a high-intensity interval training session (10 sets of 2-minute cycling followed by one-minute rest intervals), the levels of TNF-α and IL-10 showed no significant changes. 20 In another study, significant differences were not observed in the levels of IL-6 at 40 minutes post moderate-intensity cycling exercise between smokers and non-smokers. 21 It has been noted that systemic inflammation in response to smoking results from stimulation of various types of inflammatory cells. 11,22 The results of a recent study showed that serum TNF-α and IL-1β levels in smokers were significantly higher compared to non-smokers, and that simultaneous inhibition of TNF-α and IL-1β signaling pathways prevented endothelial disruption caused by smoking. 23
Limitation
We did not measure other inflammatory cytokines, which is the main limitation of the current study.
Conclusion
The acute and recovery responses of the two inflammatory cytokines, IL-1β and TNF-α, to a moderate-intensity aerobic exercise session differ in male smokers. In this regard, despite the absence of change in IL-1β, serum levels of TNF-α decreased significantly compared to baseline levels immediately, 1 hour and 24 hours after the exercise. Accordingly, it can be stated that a relatively long aerobic exercise session of moderate intensity is, to some degree, associated with improvement of the inflammatory profile, with emphasis on TNF-α, during a recovery period of up to 24 hours after the test in non-active male smokers, while these changes are not observed in male non-smokers.
Ethical Approval
The study was approved by the Ethics Committee of Islamic Azad University, Iran (73/454791). Informed consent was obtained from all participants before recruitment to the project.
Table 1 .
Anthropometric Indexes in Our Sample Population
Table 2 .
Acute and Recovery Response of Serum TNF-α to Aerobic Exercise of 2 Groups
Table 3 .
Acute and Recovery Response of Serum IL-1β to Aerobic Exercise of 2 Groups | 2019-01-21T14:12:04.821Z | 2018-10-03T00:00:00.000 | {
"year": 2018,
"sha1": "e112393e618248eac9e128b27da91787a2cfd39a",
"oa_license": "CCBY",
"oa_url": "http://ijbsm.zbmu.ac.ir/PDF/ijbsm-3193",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e112393e618248eac9e128b27da91787a2cfd39a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216275681 | pes2o/s2orc | v3-fos-license | Statistical analysis on tribology behavior of stainless steel surface in aloe vera blended lubricant
Lubricant oil is one of the important oils needed for any vehicle, and additives are added into the lubricant to boost its ability to protect the engine or any component that requires lubrication. The presence of lubrication oil is very important in an engine, where it protects the engine, especially during combustion processes, and helps cool the engine down after every cycle. However, most lubricant oils are not very eco-friendly and can cause harm to the environment. Several studies have been carried out recently to find new alternative additives for lubricant oil that are safer for our environment, which is now in danger. One of the alternative additives widely suggested by researchers involves plant-based materials such as vegetables. An experiment was carried out to justify the potential of Aloe Vera as a new additive that can replace existing chemical and synthetic additives, which are harmful to the environment and costly to produce. The conclusion of the experiments is that the researchers agree on the suitability of Aloe Vera as a new additive, based on a simple experimental method. Response surface methodology was used to design the experiments and to find the relationship between the parameters and responses. It shows that Aloe Vera additives reduce the coefficient of friction.
Introduction
Lubricant oil is important in the internal combustion engine, where it functions to reduce friction between rubbing and bearing surfaces. Proper lubrication of all moving parts is essential for the internal combustion (IC) engine. In addition, lubricant oil is important for preventing corrosion on metal surfaces, reducing the expansion of metal caused by frictional heat, and acting as a coolant by carrying heat away from the bearings, cylinders and pistons. An important purpose of engine oil is to lubricate engine parts so that friction and wear are reduced. Lubricant technology has undergone various improvements in order to extend the life of moving parts, so that they can operate under many different conditions of speed, temperature, and pressure. In the internal combustion engine, the loss of energy to friction between the piston ring and cylinder liner constitutes 20% to 40% of the total mechanical loss and is regarded as the greatest mechanical friction loss [1]. Different engine lubrication regimes significantly affect the tribological performance of the engine [2]. Aloe Vera nanocellulose is highlighted as one of the best potential additives for lubricant formulations. The presence of aminoacetic acid (glycine) in Aloe Vera makes it worthy as an additive, since glycine and its derivatives have been used as inhibitors to prevent corrosion of metals in various environments: acidic, neutral and deuterated carbonate solutions [3].
Regarding tribology at the cylinder wall, wear and corrosion of the engine cylinder wall are serious problems, and the cylinder is the heaviest-wear part of the engine. For the piston skirt and the cylinder liner/wall, friction is controlled by the diametral clearance, the design of the piston and its tilting action, the design of the piston skirt, the surface roughness and the pre-conditions for lubrication [14]. Strong adhesive forces between the piston rings and cylinder liner may occur under poor lubrication, leading to piston ring scuffing, which involves high friction forces and the formation of severe wear scars on the piston ring and cylinder surfaces [15]. Friction measurements with piston rings in combustion engines have been performed by different authors using the floating cylinder liner or movable bore system. In these measurement schemes, the cylinder liner is allowed to move axially by the small amount that is necessary to enable force measurements from the liner. As the corresponding normal force on a particular piston ring is known only to a certain degree, a precise friction coefficient curve cannot be established. The friction force readings are, however, very valuable, as they express the axial loads on the rings, the frictional losses of the engine and the friction force variations in real engines. On the other hand, in engine tests using the floating liner system, piston assembly friction has been studied by motored engine tests.
Cylinder bore polishing, which can be subdivided into light, medium and heavy polishing, is the first occurrence of wear in a cylinder liner. A light degree of bore polishing increases oil consumption. When bore polishing has evolved to a stage of heavy polishing, and most of the oil-retaining honing pattern has been erased, the risk of lubrication starvation and scuffing is obvious [16]. The thermal loads cause lubricant degradation by ageing and partial evaporation. The chemical loads comprise dilution by fuel, acidic combustion products and water vapor from the combustion process. The erosive loads comprise the mechanical effect of the flushing by hot gases along the upper parts of the cylinder liner surface, and the removal of oil from the liner surface. The wear of the cylinder liner is additionally accelerated by solid carbon particles from the combustion process, and possibly by dust from the intake air, which can contribute by causing abrasive wear. Wear of cylinder liners also occurs in the mid-stroke region of the piston ring motion. The wear of the cylinder liner is higher on the anti-thrust side than on the thrust side of the liner, owing to the distribution of the thrust forces during the different cycles of the engine [17].
On the piston ring and cylinder liner surfaces, evidence of scuffing may be found in the shape of wear scars indicating, for example, plastic deformation, abrasive ploughing and the adhesive transfer of work-hardened cast iron to a chromium-plated piston ring, as well as a "white layer" indicating that the temperature has locally exceeded 750°C [18]. Metallurgical investigations by Shuster and co-workers on initial scuffing failures have shown the presence of minor iron-based particles on the face surfaces of Mo- and Cr-coated piston rings, and the presence of martensitic transformation on the cylinder liner surface [19]. Since the introduction of aluminium cylinders in automotive engines three decades ago, iron plating of the piston skirt has been the prime solution against piston scuffing. Recently, Wang and Tung have presented the results of a scuffing resistance study on various candidate coatings for aluminium piston skirts in aluminium cylinder liners [20].
In statistical design and analysis, theoretical predictions based on experimental observations mark the essence of beneficial research. Proper use of statistical techniques significantly improves the efficiency of experiments and allows meaningful conclusions to be drawn from the experimental data. There are two basic elements of challenge in scientific experimentation: the design of the experiment and the statistical evaluation of the data. Successful experimentation calls for an understanding of the important elements that affect the output. Design of experiments allows the factors that can be vital for explaining process variation to be identified. Statistical design and analysis are the processes of identifying and making initial decisions before and after the experimental process. For the purpose of this surface study, DOE was chosen as the statistical design and analysis approach, with factorial design as the method for this research. Multiple objective optimization (MOO) considers optimization problems involving more than one objective function to be optimized simultaneously. MOO problems arise in many fields, for example engineering, economics and logistics, when optimal decisions need to be taken in the presence of trade-offs between at least two conflicting objectives. For example, the development of a new component may involve minimizing weight while maximizing strength, or choosing a portfolio may involve maximizing the expected return while minimizing the risk. In tribology applications, MOO can be used with methods such as the Taguchi method, factorial design and analysis of variance (ANOVA). The effect of the three process parameters, i.e., concentration, machine running time and traverse speed, on surface roughness and coefficient of friction will be investigated. In this investigation, a full factorial method is designed with three levels of parameters with the help of the software Minitab version 17. It is expected that the full factorial method is a good method for the optimization of various machining parameters, as it reduces the number of experiments.
Full factorial design is used for the simultaneous investigation of several variable effects on the process. By varying the levels of the factors simultaneously, an optimal solution can be found; responses are measured at all combinations of the experimental factor levels. The combinations of the factor levels represent the conditions at which the responses will be measured. Each trial condition is a run, the response measurement is an observation, and the whole set of runs is the design of the experiment. It is used to find the factors that have the most influence on the response and the interactions between two or more factors on the response [33].
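As a minimal sketch of what a full factorial run list looks like, the snippet below enumerates all combinations of two three-level factors; the concentration and time levels shown are assumptions for illustration and are not necessarily the exact settings of Table 5, although a 3 x 3 layout would match the nine COF experiments mentioned later.

```python
# Minimal sketch (illustrative levels, not necessarily the study's exact settings):
# enumerate all runs of a 3x3 full factorial design for two tribology factors.
from itertools import product

concentrations = [0.0, 0.5, 1.0]   # Aloe Vera concentration, % (assumed levels)
times = [3, 6, 9]                  # machine running time, minutes (assumed levels)

runs = list(product(concentrations, times))  # 3 x 3 = 9 runs
for i, (c, t) in enumerate(runs, start=1):
    print(f"Run {i}: concentration={c}%, time={t} min")
```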
Experimental Tribo-tester Setup
A tribological behaviour test was performed to determine the friction coefficient and specific wear rate between the contact surfaces using a wear tester. Figure 1 shows a schematic diagram of the wear tribo-tester for evaluating the friction coefficient for base oil and Aloe Vera-blended oil. It is based on simple sliding between contacting surfaces, namely a tool made from grey cast iron and an aluminium 6061 specimen. It functions much like the piston ring-cylinder contact in an engine, and the tester is a multi-purpose tester generally used for friction and wear experiments. For the study of the tribological behaviour, the wear tester was operated in reciprocating sliding friction mode. Table 1, shown below, contains the data retrieved from an experiment covering several speeds. To obtain the best result, a trial-and-error method was used to choose the suitable speed, which can give a more accurate coefficient of friction. Only one speed was chosen because the Design of Experiments setup requires a minimum of two factors and only one dependent (response) variable. For the 20 N load, the available data are shown in Table 2; the data were retrieved from an experiment with Aloe Vera as a lubricant additive. Table 3, shown below, contains the data from an experiment covering several speeds. The speeds involved in this research are 280, 300 and 320 rpm. For the 90 N load, the available data are shown in Table 4; the data were retrieved from an experiment with Aloe Vera as a lubricant additive. From Table 4, it can be seen that the parameters of load and concentration were held constant while the test was run at three different speeds. However, the average value is taken as the result, at 200 rpm and 300 rpm, respectively, because 200 rpm and 300 rpm are the speeds chosen based on the top dead centre concept in a combustion engine, even though the test was run at 20 rpm below and 20 rpm above each desired rpm value. Full factorial designs cover every possible combination; they are large compared with screening designs, and since high-order interactions are often not active, they can be inefficient. They are ordinarily used when there are few factors and levels and information about all possible interactions is needed [34].
Data Acquired and Screening Processes: 90N load
In a FFD, an experimental run is carried out at every combination of the factor levels. The sample size is the product of the numbers of levels of the factors. As an example, a factorial experiment with a two-level factor, a three-level factor, and a four-level factor has 2 x 3 x 4 = 24 runs. The FFD platform supports both continuous factors and categorical factors with arbitrary numbers of levels. It is assumed that the trials can be run in a fully random fashion.

FFD are the most conservative of all design types. However, because the sample size grows exponentially with the number of factors, FFD are frequently too expensive to run. Custom designs, definitive screening designs, and screening designs are less conservative but more efficient and cost-effective. In order to study the effects of the tribological parameters, the most important tribological criterion, the coefficient of friction (COF), acts as the response. Table 5 shows the levels of the factors used to design the parameters for the tribological experiment at 20 N.
Mathematical modelling and equation in FFD.
The modelling in this research is performed by regression analysis through FFD. In the present research, FFD is utilized to establish the relationship between the two tribology parameters and the two responses, COF and surface roughness. The experimental values were analysed and mathematical models were developed that illustrate the relationship between the process parameters and the responses. The coefficients of the models were estimated by statistical analysis from the experimental results. Overall, 9 COF experiments were carried out.
First order model
The simplest model which can be used in FFD is based on a linear function. For its application, it is necessary that the responses obtained are well fitted to the following equation:

y = β0 + Σ(i=1..n) βi xi + ε (1)

where n is the number of variables, β0 is the constant term, βi represents the coefficients of the linear parameters, xi represents the variables and ε is the residual associated with the experiments. The proposed linear model correlating the responses and independent variables can be represented by the following expression derived from equation (1): y = C + m x1 + n x2, where y is the response, C, m and n are the constants, and x1 and x2 are the independent variables. MINITAB 17 software was used to generate the response surface plot and factorial plot, and to perform the optimization of the factors to investigate COF.
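The following sketch illustrates a least-squares fit of the first-order model above for two factors (concentration and time); the COF values are hypothetical placeholders, not the measured data, and MINITAB remains the tool actually used in this work.

```python
# Minimal sketch (hypothetical COF values): least-squares fit of the first-order
# model  COF = b0 + b1*Concentration + b2*Time  from equation (1).
import numpy as np

conc = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0])   # %
time = np.array([3, 6, 9, 3, 6, 9, 3, 6, 9])                     # minutes
cof  = np.array([0.14, 0.15, 0.16, 0.12, 0.13, 0.14, 0.10, 0.11, 0.12])  # illustrative

X = np.column_stack([np.ones_like(conc), conc, time])  # design matrix with intercept
coeffs, *_ = np.linalg.lstsq(X, cof, rcond=None)
b0, b1, b2 = coeffs
print(f"COF = {b0:.4f} + {b1:.4f}*Concentration + {b2:.4f}*Time")
```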
Second Order Model
In a linear model, the responses should not present any curvature [35]. To evaluate curvature, a second-order model must be used. Three-level factorial designs are used in the estimation of first-order effects, but they fail when additional effects, such as second-order effects, are significant; therefore, a central point in three-level factorial designs cannot be used to evaluate curvature. The next level of the polynomial model should contain additional terms, which describe the interaction between the different experimental variables. A second-order polynomial response empirical model can be developed as follows to evaluate the parametric effects on the various tribological criteria:

f(x) = β0 + Σ βi Xi + Σ βii Xi² + ΣΣ βij Xi Xj + ε

where f(x) is the response, which is COF, generated by the various process variables of the tribological parameters. β0, βi, βii, and βij are the regression coefficients for the intercept, linear, quadratic, and interaction terms, respectively, Xi and Xj are the independent variables, and ε is the residual. Contour plots were obtained using the fitted model by keeping the least dynamic independent variable at a constant value while changing the other two variables [36].
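For comparison, the sketch below extends the same hypothetical data to the second-order model by adding quadratic and interaction columns to the design matrix; again, the values are illustrative only and do not reproduce the study's tables.

```python
# Minimal sketch (same hypothetical data as the earlier example): the second-order
# model adds quadratic and interaction terms to the first-order model.
import numpy as np

conc = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0])
time = np.array([3, 6, 9, 3, 6, 9, 3, 6, 9])
cof  = np.array([0.14, 0.15, 0.16, 0.12, 0.13, 0.14, 0.10, 0.11, 0.12])

X2 = np.column_stack([
    np.ones_like(conc),  # intercept (beta_0)
    conc, time,          # linear terms (beta_i)
    conc**2, time**2,    # quadratic terms (beta_ii)
    conc * time,         # interaction term (beta_ij)
])
coeffs, residuals, rank, _ = np.linalg.lstsq(X2, cof, rcond=None)
print("rank of design matrix:", rank, "of", X2.shape[1], "columns")
print("coefficients:", np.round(coeffs, 5))
```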
Empirical Model and Regression Analysis on First Order Model: 20 N Load
Below are the results and discussion of the statistical analysis of Aloe Vera concentration as an additive for lubricant oil. Table 6 represents the results of the experiments conducted to investigate the tribological properties of Aloe Vera as a lubricant oil additive, while Figure 2 shows the residual plots for the COF as a function of the independent variables. From the normal distribution of the data along the line in the normal probability plot in Figure 2, it can be assumed that the residuals of the COF models for Aloe Vera concentration as a lubricant oil additive are normally distributed. Table 7 represents the analysis of variance (ANOVA) for COF. From Table 7, the fit summary indicates that the empirical model is statistically significant for the analysis of COF. The value of R-sq was more than 95%, which means that the empirical model provides an excellent explanation of the relationship between the independent variables (factors) and the response (COF). Based on Table 7, the associated P-value for the model was lower than 0.05 (95% confidence interval). This indicates that the model was considered statistically significant, which implies that the model fits and is adequate [38]. Table 8 presents the analysis of the coefficients and effects of concentration and time on the COF. As can be seen in Table 8, only the 0% and 0.5% concentrations and times of 3 min and 6 min contribute values to the analysis, while 1.0% does not appear in the first-order model of COF. If the P-value is less than 0.05, the model is significant at the 95% confidence level. Furthermore, the significance of each coefficient in the full model was examined by the T-values and P-values, and the results are listed in Table 8. Larger T-values and smaller P-values indicate that the corresponding coefficient is highly significant [37]. To calculate the parameters of Concentration, C, and Time, T, the least squares method was used with the aid of MINITAB. The first-order linear equation used to predict the COF was expressed as
Empirical Model and Regression Analysis on Second Order Model: 20N Load
Table 9 shows the data analysed using MINITAB 17. The responses for this second-order model are FITS 2. However, for the second order, no P-value is stated in Tables 9 and 10, and the R-sq is 100%, which is invalid for the statistical analysis. To prove whether this second-order model is relevant or not, a P-value of less than 0.05 is required so that the 95% confidence interval can be established [37]. However, the value is unavailable, so the second-order model is invalid for this research. S = * R-sq = 100% R-sq(adj) = *% R-sq(pred) = *%
Empirical model and regression analysis on first order model: 90 N Load
Below are the results and discussion of the simulation of Aloe Vera concentration as an additive for lubricant oil. The same method was repeated for 90 N. Table 12 represents the results of the experiments conducted to investigate the tribological properties of Aloe Vera as a lubricant oil additive, while Figure 3 shows the residual plots for the coefficient of friction as a function of the independent variables. From the normal distribution of the data along the line in the normal probability plot in Figure 3, it can be assumed that the residuals of the coefficient of friction models for Aloe Vera as a lubricant oil additive are normally distributed. Figure 3. Residual plots of 90 N data obtained for coefficient of friction. Table 13 represents the analysis of variance (ANOVA) for the coefficient of friction. From Table 13, the fit summary indicates that the empirical model is statistically significant for the analysis of COF. The value of R-sq was more than 95%, which means that the empirical model provides an excellent explanation of the relationship between the independent variables (factors) and the response (COF). Based on Table 13, the P-value is less than 0.05, which means that the model is significant at the 95% confidence level [42]. This indicates that the model was considered statistically significant, implying that the model fits and is adequate [38]. To calculate the parameters of Concentration, C, and Time, T, the least squares method was used with the aid of MINITAB 17. The first-order linear equation used to predict the coefficient of friction was expressed as equation 6. Table 15 shows the data analysed using MINITAB 17. The responses for this second-order model are FITS 2. For the second order, there is no valid P-value and the R-sq is 100%, which is invalid for the statistical analysis; a valid P-value is needed to determine whether the second-order model is significant or not. So again, the second-order model is invalid for the full factorial design. 0.000746 S = * R-sq = 100% R-sq(adj) = * R-sq(pred) = * The goodness of fit of the mathematical models was also tested by the coefficient of determination (R-sq) and the adjusted coefficient of determination (R-sq adj). The R-sq is the proportion of the variation in the dependent variable explained by the regression model. On the other hand, R-sq(adj) is the coefficient of determination adjusted for the number of independent variables in the regression model. The R-sq and R-sq adj values of the mathematical models are found to be invalid (*), which clearly indicates that an excellent correlation between the experimental and the predicted values of the responses cannot be achieved [37]. As shown in Figure 4, the overall conclusion that can be obtained from the statistical analysis for 20 N (1st and 2nd order) is that increasing the concentration causes the value of COF to decrease gradually, while increasing the time causes the value of COF to increase. The same pattern is recorded for 90 N (1st and 2nd order), where increasing the concentration causes the COF to decrease sharply, but increasing the time causes the value of COF to decrease and then increase slightly. This confirms that time and concentration affect the value of the coefficient of friction.
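To make the R-sq and R-sq(adj) definitions used above concrete, the following sketch computes both quantities for the hypothetical first-order fit from the earlier example; it is not a reproduction of the MINITAB output.

```python
# Minimal sketch: R-squared and adjusted R-squared for the hypothetical
# first-order COF fit used in the earlier example.
import numpy as np

conc = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0])
time = np.array([3, 6, 9, 3, 6, 9, 3, 6, 9])
cof  = np.array([0.14, 0.15, 0.16, 0.12, 0.13, 0.14, 0.10, 0.11, 0.12])

X = np.column_stack([np.ones_like(conc), conc, time])
beta, *_ = np.linalg.lstsq(X, cof, rcond=None)
pred = X @ beta

ss_res = np.sum((cof - pred) ** 2)                 # residual sum of squares
ss_tot = np.sum((cof - cof.mean()) ** 2)           # total sum of squares
n, p = len(cof), X.shape[1] - 1                    # p = predictors, excluding intercept
r_sq = 1 - ss_res / ss_tot
r_sq_adj = 1 - (1 - r_sq) * (n - 1) / (n - p - 1)
print(f"R-sq = {r_sq:.3f}, R-sq(adj) = {r_sq_adj:.3f}")
```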
Factor (Concentration and RPM) against COF.
In Figure 5 below, the overall conclusion that can be obtained from the statistical analysis for 20 N is that increasing the concentration causes the value of COF to decrease gradually, while increasing the time causes the value of COF to increase. The same pattern is recorded for 90 N, where increasing the concentration causes the COF to decrease and increasing the time causes the value of COF to decrease gradually. Figure 6 is used to examine the correlation between the residuals; a tendency to have runs of positive and negative residuals would indicate the presence of a correlation. The plot shows that the residuals are distributed evenly between positive and negative values along the run order, so the data can be said to be independent [37]. Figure 7 is used in the same way to examine the correlation between the residuals for the other load; again, the residuals are distributed evenly between positive and negative values along the run order, so the data can be said to be independent [37].
Conclusion
In conclusion, the study examined the effects of various control parameters, namely speed, load, and volume composition, on the responses of coefficient of friction and wear rate. The following conclusions can be derived from the results obtained: 1. The relationship of the coefficient of friction with Aloe Vera concentration, load, speed (RPM) and time has been effectively obtained by utilizing FFD at a 95% confidence interval. 2. According to the concentration versus time results, the friction coefficient between the three concentrations of oil shows a considerable difference. 3. Using Aloe Vera concentration as a lubricant additive gives a small impact on the stainless-steel surface with regard to the coefficient of friction. 4. The data accumulated near the best-fit lines in the first-order model show that the results are significant and support the suitability of Aloe Vera as an additive. 5. For the second order, the data are negligible due to the invalid data that were recorded, and we can assume that the FFD is unsuitable for the second order.
Thus, the experimental conclusion that Aloe Vera can be used as an additive is acceptable. | 2020-03-05T10:17:55.437Z | 2020-03-05T00:00:00.000 | {
"year": 2020,
"sha1": "386bfc44cf86e8446d0135d93489cf686fdaeecf",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/736/5/052035",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "beaf31eec3c1738acf64e5e73997765ac928a647",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
267371766 | pes2o/s2orc | v3-fos-license | Usefulness of 18F-FDG PET-CT in the Management of Febrile Neutropenia: A Retrospective Cohort from a Tertiary University Hospital and a Systematic Review
Febrile neutropenia (FN) is a complication of hematologic malignancy therapy. An early diagnosis would allow optimization of antimicrobials. The 18F-FDG-PET-CT may be useful; however, its role is not well established. We analyzed retrospectively patients with hematological malignancies who underwent 18F-FDG-PET-CT as part of FN management in our university hospital and compared with conventional imaging. In addition, we performed a systematic review of the literature assessing the usefulness of 18F-FDG-PET-CT in FN. A total of 24 cases of FN underwent 18F-FDG-PET-CT. In addition, 92% had conventional CT. In 5/24 episodes (21%), the fever was of infectious etiology: two were bacterial, two were fungal, and one was parasitic. When compared with conventional imaging, 18F-FDG-PET-CT had an added value in 20 cases (83%): it diagnosed a new site of infection in 4 patients (17%), excluded infection in 16 (67%), and helped modify antimicrobials in 16 (67%). Antimicrobials could be discontinued in 10 (41.6%). We identified seven publications of low quality and one randomized trial. Our results support those of the literature. The available data suggest that 18F-FDG-PET-CT is useful in the management of FN, especially to diagnose fungal infections and rationalize antimicrobials. This review points out the low level of evidence and indicates the gaps in knowledge.
Introduction
Febrile neutropenia (FN) is a common complication of cancer, particularly in patients with hematological malignancies who receive high-intensity chemotherapy or stem cell transplantation (SCT) [1]. In this setting, infections are an important complication and cause significant morbidity and mortality. The etiological workup of FN includes, in addition to microbiological tests, different imaging tests; among them, the most commonly used is computed tomography (CT). Specifically, high-resolution computed tomography is recommended for suspected invasive fungal infection (IFI) [2].

Unfortunately, in a large number of cases it is not possible to determine the cause of the fever with conventional imaging. This results in the prolonged use of empirical broad-spectrum antimicrobials, both antibiotics and antifungals [3]. In addition, in almost 50% of the cases according to some studies [4], the etiology of FN is not infectious. In these situations, an early etiological diagnosis would allow the performing of targeted diagnostic tests and optimization of antimicrobial therapy, including timely discontinuation of unnecessary antimicrobials [4]. Therefore, there is a need to implement better complementary tests to improve diagnosis and allow de-escalation of antimicrobials.

PET with 18 F-fluorodeoxyglucose provides functional information that correlates with the anatomical data provided by CT. Using 18 F-FDG as a radiotracer, information on the metabolism in different tissues is obtained. Unlike conventional CT, 18 F-FDG-PET-CT is capable of evaluating more than one area of the body in just one session in addition to providing metabolic information, allowing one to more easily detect clinically silent lesions [5]. This imaging technique is typically used for staging cancer and has seen its use increase lately as part of the study of fever of unknown origin or FN [1,6].

In recent years, several studies have shown the potential of 18 F-FDG PET-CT to localize the source of fever and to differentiate between infection and other etiologies in patients with FN. Although 18 F-FDG-PET-CT has not shown a clear benefit in the differential diagnosis of cancer and infection, it could nevertheless have a role in FN and in detecting the spread of infection and occult infection [7]. Other authors have underlined the high negative predictive value of 18 F-FDG-PET-CT, which facilitates the adjustment of antimicrobial treatment [4,8]. In particular, they point out that 18 F-FDG-PET-CT could play an important role in the diagnosis of IFI and help with the withdrawal of empirically initiated antifungals [1,4,8]. In a recent review on the management of patients with high-risk FN, the authors suggest that 18 F-FDG-PET-CT could be especially helpful in these patients when it comes to reducing the antibiotic spectrum without changes in ICU admissions or mortality [9].

However, the literature on this topic is scarce and mainly consists of short case series in diverse settings. In addition, most of these studies do not compare the performance of conventional tests and 18 F-FDG-PET and the added value provided specifically by 18 F-FDG-PET-CT. Consequently, its role in routine clinical practice has not been well established so far [1,4,6,8].

Although several authors have reviewed the existing literature [1,4,6-8], to our knowledge, there is no comprehensive and systematic review on this topic.
In the present article, we analyze our center's data on the usefulness of 18 F-FDG PET-CT in hematological patients with FN and its added value compared with conventional imaging, and we perform a systematic review of the published literature on the use of 18 F-FDG PET-CT in that setting.
Single-Center Retrospective Cohort Study

2.1.1. Design, Study Period, and Subjects
Our institution is a 613-bed tertiary-care teaching hospital in Madrid, Spain. The Hematology Department has an active SCT program, including allogeneic SCT (haploidentical and cord as well).
We performed a retrospective observational study including all adult patients admitted to Puerta de Hierro-Majadahonda Hospital between 2015 and 2022 diagnosed with hematological malignancies (leukemia, aplasia, myelodysplastic syndrome, multiple myeloma, or lymphoma) undergoing chemotherapy or SCT who underwent at least one 18 F-FDG PET-CT as part of the FN management.
Data Collection
Epidemiological, clinical (including the type of hematological malignancy and SCT, the type of infection, localized or disseminated disease, and the type of pathogen), laboratory, and imaging data were extracted from electronic medical records (SELENE System, Cerner Iberia, S.L.U., Madrid, Spain) using a standardized data collection form. The 18 F-FDG PET-CT indication and the impact of the results on FN management were specifically addressed. All data were included by a primary reviewer and subsequently checked by two senior physicians.
2.1.3. 18F-FDG PET-CT Technique

All 18 F-FDG PET-CT scans were performed according to EANM (European Association of Nuclear Medicine) guidelines, in hybrid PET/CT chamber systems [10]. The CT component was non-contrast enhanced. All patients complied with a previous fasting period of at least six hours (12-18 h in cases of suspected endocarditis, in which case a dietary modification protocol was also applied). Ideally, they should maintain blood glucose levels lower than 180 mg/dL. If insulin was administered, the injection of 18 F-FDG was spaced at least four hours apart. For infectious and inflammatory diseases, the same acquisition, reconstruction, and post-processing described in the procedures of the EANM for tumors were used [10,11]. Full-body 18 F-FDG PET-CT, from the cranial vertex to the feet in a supine position, was acquired approximately 50-60 min after the intravenous injection of 370 ± 30 MBq of 18 F-FDG, depending on the patient's weight. When infective endocarditis was a possibility, the study was completed with a dedicated cardiac 18 F-FDG PET-CT acquisition.
The 18 F-FDG PET-CT images were analyzed for increased uptake of 18 F-FDG outside the areas of physiological incorporation. A qualitative analysis was carried out, considering the uptake pattern (focal, linear, or diffuse) and the distribution of the radiotracer in the pathological area or lesion (homogeneous or heterogeneous), and a semiquantitative analysis was performed considering the intensity of the uptake. The images were interpreted as normal, equivocal, or with pathological uptake according to the standard uptake values (SUV): the visual scores were 0, no pathological uptake; 1, uptake similar to the vascular pool in the mediastinum; 2, uptake higher than the vascular pool but lower than the liver pool; 3, uptake similar to or slightly higher than the liver; 4, uptake clearly higher than the hepatic uptake; where 0 and 1 would be negative and 2, 3, and 4 would be positive (always assessing the location and alternative causes that explain the uptake).
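A small illustrative helper, written for this review rather than taken from any imaging software, can express the visual scoring rule just described (scores 0-1 read as negative, 2-4 as positive, pending assessment of location and alternative causes):

```python
# Minimal sketch of the visual scoring rule described above: scores 0-1 are read
# as negative and 2-4 as positive, always subject to location and alternative causes.
def interpret_visual_score(score: int) -> str:
    """Map the 0-4 18F-FDG uptake visual score to a provisional interpretation."""
    if score not in range(5):
        raise ValueError("visual score must be an integer from 0 to 4")
    return "negative" if score <= 1 else "positive (assess location and alternative causes)"

for s in range(5):
    print(s, "->", interpret_visual_score(s))
```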
Other Imaging Techniques
The diagnostic workup for FN was performed at the discretion of the treating physicians. For every case, the results of conventional imaging techniques performed during the episode were compared with the 18 F-FDG PET-CT results, according to the reports by radiology specialists (or cardiologists, when applicable). This included X-ray, CT, MRI, and, in the case of bloodstream infection caused by Gram positives or yeasts, echocardiography.
Definitions
We followed the criteria for febrile neutropenia as per the NCCN guidelines [12]:
- For fever, a single temperature equivalent to ≥38.3 °C orally, or equivalent to ≥38.0 °C orally over a 1 h period;
- For neutropenia, ≤500 neutrophils/mcL, or ≤1000 neutrophils/mcL and a predicted decrease to ≤500/mcL in the next 48 h.

In addition, we also evaluated patients with persistent low-grade fever (temperature > 37.5 °C for more than 72 h).
Usual Care
For antimicrobial prophylaxis, see supplementary material S1 (for acute myeloid leukemia (AML), levofloxacin and posaconazole during the neutropenia period, plus acyclovir in patients who receive fludarabine; for acute lymphoblastic leukemia (ALL), cotrimoxazole and acyclovir, plus posaconazole when finishing vincristine; for autologous SCT, cotrimoxazole impregnation prior to transplantation, levofloxacin, fluconazole, and acyclovir; for allogeneic SCT, cotrimoxazole impregnation prior to transplantation followed by nebulized pentamidine, levofloxacin, posaconazole, and acyclovir, plus letermovir and azithromycin in high-risk patients).
Regarding empirical antimicrobials, before 2019, empirical therapy for febrile neutropenia consisted of meropenem. From 2019 on, empirical therapy consisted of piperacillin-tazobactam or cefepime (the latter in cases where no anaerobic coverage was deemed necessary). No surveillance cultures were obtained routinely, but in cases known to be colonized by resistant microorganisms, antimicrobial therapy was adjusted accordingly.
In both periods, teicoplanin was added in cases of catheter infection suspicion, and amikacin was added in cases of septic shock.
Data Analysis
Quantitative variables are expressed as means and standard deviations (SD) and/or medians and interquartile ranges (IQR), and qualitative variables are expressed as frequencies and percentages. Measures of central tendency (mean and SD, and median and IQR) and proportions were calculated with IBM SPSS Statistics 22.
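As an illustration of the descriptive statistics listed above, the sketch below computes mean/SD, median/IQR and proportions with pandas; the column names and values are hypothetical, and the actual analysis was performed with IBM SPSS Statistics 22.

```python
# Minimal sketch (hypothetical column names and values, not the SPSS workflow used here):
# descriptive statistics as reported in the cohort -- mean/SD, median/IQR, proportions.
import pandas as pd

df = pd.DataFrame({
    "age": [63, 71, 45, 58, 69, 52],     # quantitative variable (illustrative)
    "diabetes": [1, 0, 0, 1, 0, 0],       # qualitative variable coded 0/1
})

print("age: mean=%.1f SD=%.1f" % (df["age"].mean(), df["age"].std()))
q1, med, q3 = df["age"].quantile([0.25, 0.5, 0.75])
print("age: median=%.1f IQR=%.1f-%.1f" % (med, q1, q3))
print("diabetes: n=%d (%.0f%%)" % (df["diabetes"].sum(), 100 * df["diabetes"].mean()))
```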
Systematic Literature Review
The studies were identified through a systematic search of different bibliographic databases using search terms (MeSH) related to the topic, specifically PubMed, Embase, and the Cochrane Library. These databases were searched without language or publication date restrictions (see search strategy in supplementary material S2). We did not exclude articles based on the retrospective or prospective nature of the study. The reference lists of the relevant studies were checked to identify additional relevant articles.
To be eligible, a study had to evaluate the use of 18 F-FDG PET-CT in the management of FN. References were screened by two researchers based on the title and abstract using the PICOS framework (Table 1). Irrelevant references were excluded with explicit reasons. In a second step, the remaining references were screened based on the full text. This systematic review was performed according to the recommendations of the PRISMA guidelines. A checklist is included as supplementary material.
Single-Center Retrospective Cohort Study
Among 638 eligible patients with FN during the study period, 24 episodes of FN were detected in 23 patients who underwent 18 F-FDG PET-CT as part of the FN workup (Figure 1).
Figure 1. Flow chart of patient selection (patients admitted with haematological malignancies).

The characteristics of the patients who underwent 18 F-FDG PET-CT for FN in the study are displayed in Table 2. Fifty-seven percent were men. The mean age was 58.6 years (SD 19.6). Regarding the non-onco-hematological comorbidities, 17% had diabetes, 8% had at least moderate chronic kidney disease, and 4% had chronic pulmonary disease (COPD). The most frequent onco-hematological underlying disease was acute myeloid leukemia in 12 patients (50%), followed by acute lymphoblastic leukemia (ALL) in 3 patients (12%), multiple myeloma (MM) and myelodysplastic syndrome (MDS) in 2 patients each (8%), and NK immunodeficiency in 1 patient (4%). Four patients (17%) were stem cell transplantation recipients. The reasons for the SCT were as follows: ALL in two patients, MM in one patient, and MDS in one patient.
The majority of patients fulfilled criteria for persistent fever (58%), followed by patients with persistent low-grade fever (33%). Neutrophil counts on the day of the 18 F-FDG PET-CT were below 100 cells/mm3 in 12 (50%) and between 100 and 500 cells/mm3 in 10 (42%) patients. Two (8%) had neutrophil counts above 500 cells/mm3 on the day the test was performed but had recently recovered from severe neutropenia during the previous week.

Patients were receiving antimicrobial prophylaxis according to risk factors, following the institution's protocols (supplementary material S1). The distribution of the empirical treatments administered at the moment of 18 F-FDG PET-CT is shown in Table 1, the most commonly used drugs being meropenem (54%) and piperacillin-tazobactam (37%). In five of the 24 FN episodes (21%), the fever was considered of infectious etiology. The etiology was bacterial infection in two cases (8%), fungal in two (8%), and parasitic in one (4%). No viral infection was diagnosed. In one case (4%) there was a clinical diagnosis of infection, but a microbiological etiology could not be determined (catheter uptake without isolation of microorganisms). Among the bacterial infections, two bloodstream infections were diagnosed: a case of catheter-related S. haemolyticus bacteremia and another of catheter-related persistent E. faecium bacteremia without catheter uptake or septic metastases. Regarding fungal infections, one patient presented with invasive fusariosis and another with possible pulmonary IFI. The remaining patient with a known infectious etiology had visceral leishmaniasis.

In all but one patient, the infection was localized. The disseminated case was a patient initially diagnosed with naso-sinusal fusariosis, in whom the initial 18 F-FDG PET-CT helped to diagnose the occult source of fever and, in addition, the monitoring 18 F-FDG PET-CT detected the dissemination of the infection.

The median duration of antimicrobial therapy until 18 F-FDG PET-CT was performed was 13.5 days. The fever was considered to be secondary to non-infectious causes in 20 cases (83.3%): in 11 (46%) it was secondary to the underlying hematologic malignancy; in 1 (4%) it was considered of inflammatory etiology (reduction in corticosteroids in the context of disseminated mycobacterial infection); there were 3 cases (12%) of engraftment syndrome and 2 (8%) of graft-versus-host disease (GVHD); and in 3 cases (12%) an etiology was not identified.
Characteristics of the 18 F-FDG PET-CT as Compared with Conventional Imaging
The indication for 18 F-FDG PET-CT was the study of FN in all patients. The median time of neutropenia before 18 F-FDG PET-CT was 13.5 days (3-74), and the median time to 18 F-FDG PET-CT from the beginning of fever was 13 days (3-28).
The 18 F-FDG PET-CT showed pathological uptake in 20 (83%) cases. The most frequent location of this uptake was intra-abdominal visceral (37%, mainly hepato-splenic uptake), followed by bone marrow and lung uptake (21%). The most common distribution was multifocal (71% of cases), followed by focal uptake (12%). The remaining 17% did not show any 18 F-FDG uptake.
Table 3 shows the results of conventional imaging compared with those of 18 F-FDG PET-CT and the added value of 18 F-FDG PET-CT in the management of febrile neutropenia. The 18 F-FDG PET-CT provided added value to the previous conventional imaging study in 20 patients (83%), added value being defined as the diagnosis of a new infection or the exclusion of infection that led to the modification of antimicrobials or the initiation of treatment of the underlying hematological disease. It contributed to the diagnosis of new sites of infection in 4 patients (17%), ruled out infection in 16 patients (67%), and helped modify antimicrobial treatment in 16 patients (67%). It also allowed starting chemotherapy or immunomodulators in four patients: two patients started chemotherapy, one patient received specific treatment for graft-versus-host disease, and corticosteroid doses were increased in another patient.
Pathological uptake in 18 F-FDG PET-CT helped guide targeted diagnostic tests in 58% of cases: four patients (17%) underwent fine-needle aspiration/biopsy of the pathological uptake area, and one patient (4%) underwent bronchoscopy, among other complementary tests.
The results of 18 F-FDG PET-CT implied the removal of the venous catheter in one case and surgical debridement in another one for source control.
In the cases that were eventually diagnosed as infection, the most common site of infection identified by 18 F-FDG PET-CT was hepatosplenic and biliary (8%), followed by catheter and pulmonary (4%).
Among the non-infectious causes, the most common reason for pathological uptake in 18 F-FDG PET-CT was the underlying disease in 11 patients (46%). Concerning antimicrobial therapy, in 16 patients (67%) the antimicrobial spectrum was modified based on 18 F-FDG PET-CT results. In 2 (8%) patients, antimicrobials were de-escalated; in 1 (4%) case the spectrum was expanded; and in 10 (42%) patients it allowed the discontinuation of antimicrobials. There was a need to start new antimicrobial treatment in three (12%) patients, in one of the cases also accompanied by surgical debridement.
Only one patient had more than one 18 F-FDG PET-CT during the study of the episode of FN.The patient diagnosed with sinus IFI due to Fusarium had a control 18 F-FDG PET-CT scan performed one month after the first one, which, as aforementioned, detected dissemination of the infection to the lungs as well as persistence of the sinusal infection.
Systematic Literature Review
Search Strategy and Inclusion
The literature search retrieved 341 references that were de-duplicated, and non-English, Spanish, and French references were excluded (Figure 2). The remaining references were screened for eligibility based on the titles and abstracts (of which 160 were excluded). Based on full-text evaluation of the remaining publications, 16 articles that evaluated the use of 18 F-FDG PET-CT in the management of FN were included, resulting in a sample of one RCT and 15 other publications. Among them, five narrative reviews and a survey were excluded from the present analysis. Two case reports were excluded to avoid publication bias. Eight articles were selected (Table 4).
Quality appraisal
The quality of the eight included publications was evaluated as moderate to poor. The methodology used was heterogeneous. Because of the nature of the interventions, blinding of the patients and staff was not possible. Three of the studies were retrospective and, thus, non-random by design. Among the prospective studies, in one case there was no comparison of 18 F-FDG PET-CT and conventional techniques, whereas the remaining studies performed both techniques on the same patients sequentially; therefore, there was no randomization to one or the other study. Likewise, there was no randomization of the sequence in which the techniques were performed.
Only one open-label randomized controlled trial was identified. Although masking of the randomization was not possible, the clinical impact of the randomized scans and the cause of neutropenic fever were assessed by an independent adjudicating committee to reduce the risk of bias.
Results of the studies according to the methodology (Figure 2)
1. Clinical trial
Recently, a multicenter phase 3 controlled clinical trial [3] was published that randomized patients with high-risk FN 1:1 to CT vs. 18 F-FDG PET-CT. The primary endpoint was a composite of starting, stopping, or changing the spectrum of antimicrobial therapy as a result of the information provided by the imaging technique. A total of 134 patients were included (PET-CT 65; CT 69). Antimicrobial rationalization occurred in 82% of patients in the 18 F-FDG PET-CT group and 65% in the CT group. The most frequent action was the reduction in the spectrum of antimicrobial therapy, 43% for 18 F-FDG PET-CT compared with 25% for CT (p = 0.024). The authors concluded that 18 F-FDG PET-CT was associated with better optimization of antimicrobial therapy and could help decision making in this type of patient. The drawback of this clinical trial was the lack of direct comparison of 18 F-FDG-PET-CT and conventional imaging in the same patient. The authors did not provide information about whether the differences in baseline characteristics or the final fever etiology between patients who underwent 18 F-FDG PET-CT or conventional CT were statistically significant.
2. Original articles
Seven original articles that studied the usefulness of 18 F-FDG PET-CT in FN were found through the systematic search, five of them prospective and two retrospective [14-20]. The characteristics of the original articles that evaluated the usefulness of 18 F-FDG-PET-CT for FN management are summarized in Table 4.
a. Studies comparing the results of conventional tests and 18 F-FDG PET-CT in the same patient
Four articles [14,15,19,20] compared conventional imaging and 18 F-FDG-PET-CT performed in the same patient as part of the study of FN, in order to assess which one provides more information to improve management. Only one of these provided individual data with a head-to-head comparison of conventional imaging with 18 F-FDG-PET-CT in the same patients [19]. A total of 161 patients with FN were evaluated in these studies (147 adults and 14 children). The median time from the beginning of fever to the performance of 18 F-FDG PET-CT varied between 6 and 14 days. The median time from conventional imaging to 18 F-FDG PET-CT was not provided in any of these studies. The final diagnosis was infectious in a high proportion of cases, varying from 55% to 79%.
Camus et al. [14] carried out a prospective single-center study to investigate the ability of 18 F-FDG-PET-CT to find the source of infection in patients with FN. In this study, among the 38 patients with a final clinical diagnosis of infection (79%), 23 had a pathological FDG uptake, resulting in an 18 F-FDG-PET-CT sensitivity of 61%. Among the 17 patients diagnosed with pneumonia by conventional evaluation, 18 F-FDG-PET-CT detected pulmonary uptake in 11 (64.7%) and uptake at multiple levels in 6 (35.3%). Gafter-Gvili et al. also evaluated in a prospective study the performance of 18 F-FDG-PET-CT for the diagnosis and treatment of infections in high-risk patients with FN [15]. In this case, the sensitivity of 18 F-FDG-PET-CT was 79.8% compared with 51.7% for chest/sinus CT alone. The specificities were 32.14% versus 42.85%. Furthermore, in more than 50% of patients, 18 F-FDG-PET-CT changed the pre-test diagnosis and helped modify patient management. Both studies concluded that 18 F-FDG PET-CT had the ability to assist in the evaluation and management of these patients. A third prospective study, carried out by Guy et al. [19], which included 20 patients with FN who underwent 18 F-FDG-PET-CT in addition to conventional techniques, revealed that 18 F-FDG-PET-CT was able to identify nine infections that CT could not identify and had a clinical impact in 75% of patients, since it induced treatment changes. Like the previous articles, it concluded that 18 F-FDG-PET-CT was useful in the assessment of FN.
A last retrospective study [20], carried out in a pediatric population that included 14 patients, observed that 18 F-FDG PET-CT had a positive impact in 11 patients (79%), favoring the rationalization of antimicrobials in three (21%) and their discontinuation in five (36%). Furthermore, compared with conventional tests, it helped identify new sites of infection in seven (50%) patients and contributed to the final diagnosis in six (43%) patients. As in the previous articles, the authors considered 18 F-FDG-PET-CT potentially useful as part of the study of FN.
Another aspect to highlight is the usefulness of 18 F-FDG-PET-CT to assess the dissemination of infections. In the articles by Camus and Gafter-Gvili discussed previously, 18 F-FDG-PET-CT detected occult lesions that led to the diagnosis of disseminated infection in 27% and 1.3% of cases, respectively [14,15].
All of these studies [14,15,19,20] emphasize the usefulness of 18 F-FDG-PET-CT in the diagnosis of fungal infection and the rationalization of antifungal treatment.
b. Studies that do not compare conventional tests and 18 F-FDG PET-CT in the same patient
Three articles evaluated the contribution of 18 F-FDG PET-CT in the diagnosis of infection without a head-to-head comparison with conventional tests in the same patient. These studies either performed conventional tests and 18 F-FDG PET-CT in different patients [17] or performed only 18 F-FDG PET-CT [16,18]. A total of 92 patients with FN were evaluated in these studies.
The retrospective study by Koh KC et al. [17] evaluated the impact of 18 F-FDG-PET-CT on the use of antimicrobials in FN. They identified two groups: one that had undergone 18 F-FDG-PET-CT (n = 37) and another that had had conventional imaging (n = 76). There were no significant differences between cases and controls with respect to age, sex, underlying malignancy, and chemotherapy. The 18 F-FDG-PET-CT determined the cause of FN in 94.6% of patients compared with 69.7% in the conventional imaging group. Furthermore, 18 F-FDG PET-CT had a significant impact on antimicrobial use compared with conventional imaging (35.1% vs. 11.8%; p = 0.003) and was associated with a shorter duration of antifungal therapy. The authors stated that 18 F-FDG PET-CT improved diagnostic performance and allowed the rationalization of antimicrobials in these patients. In this study as well, the usefulness of 18 F-FDG-PET-CT for the diagnosis of IFI and the rationalization of antifungal treatment was evidenced.
The two remaining studies did not compare 18 F-FDG-PET-CT with conventional imaging. The retrospective study by Mahfouz T et al. [18] reviewed the contribution of 18 F-FDG-PET-CT performed on 248 patients with multiple myeloma (MM) for staging or for the diagnosis of infection when there was an uptake atypical for myeloma that could be suggestive of infection. A total of 165 infections were identified in 143 adults with MM, 27 of these episodes occurring in the context of neutropenia. The 18 F-FDG PET-CT detected 46 infections not detectable by other methods, helped determine the extent of infection in 32 episodes, and led to modification of the diagnosis and therapy in 55. In patients with staging 18 F-FDG PET-CT, twenty silent infections were detected. They concluded that 18 F-FDG PET-CT in MM was a useful technique for diagnosing infection; unfortunately, the authors did not provide specific results for the subset of patients with FN.
The prospective study by Vos FJ et al. [16] included 28 hematological patients with neutropenia who underwent 18 F-FDG-PET-CT in cases of CRP levels greater than 50 mg/L. In 26 out of 28 (92.9%) patients, the increase in CRP levels was accompanied by fever. The median time from starting chemotherapy to 18 F-FDG PET-CT was 14 days. They found pathological FDG uptake in 26 of 28 cases (92.9%). The authors did not specify in how many cases 18 F-FDG-PET-CT guided the performance of diagnostic tests. In this study, pulmonary uptake was significantly associated with the presence of IFI (p = 0.04). They determined that 18 F-FDG-PET-CT in the context of increased CRP was capable of detecting infection in situations of severe neutropenia. An evaluation of the impact of 18 F-FDG-PET-CT on antimicrobial prescriptions was not provided.
Discussion
Our data indicate that 18 F-FDG-PET-CT is useful in the management of FN. In 87% of the cases it helped to confirm or rule out infection, allowing optimization of empirical antimicrobial treatment, including de-escalation or discontinuation of unnecessary antimicrobials in 16 cases (67%). These results support data from the 318 total cases with 18 F-FDG-PET-CT for FN analyzed in the studies included in the present review.
To the best of our knowledge, the present study is one of the few that compares the performance of conventional tests and 18 F-FDG PET-CT in the same patient during the FN episode.When comparing conventional imaging with 18 F-FDG-PET-CT performed on different patients, the differences in underlying diseases or in fever etiology could account for the differences observed in the yields of these tests.Comparing both techniques in the same patient overcomes this limitation.In addition, only patients who were still neutropenic at the moment of 18 F-FDG-PET-CT were selected (with the exception of two patients who had recovered neutrophils very recently, fewer than 3 days before PET), so that we cannot attribute the better performance of 18 F-FDG-PET-CT to neutrophil recovery.
The 18 F-FDG PET-CT was especially relevant in the diagnosis of uncommon fungal and parasitic infections, such as fusariosis or leishmaniasis.Interestingly, in the present series the proportion of infectious etiology was low, only 16.7%.Being a retrospective study, we cannot exclude that only patients with a lower probability of infectious cause (non-responders to antimicrobials, with already negative prior tests) underwent 18 F-FDG-PET-CT.In any case, the ability to rule out infection in these cases is the reason why antimicrobial therapy could be adjusted, similar to what other authors report (in spite of having a much higher proportion of infectious etiologies, ranging from 55% to 79%).Another important aspect is that thanks to the 18 F-FDG PET-CT results in cases in which infection was ruled out, patients were able to resume the chemotherapy treatment necessary for their underlying hematological disease, similar to other works [21].
Another benefit that 18 F-FDG-PET-CT provides is its potential to detect dissemination of infection, especially in cases of IFI.Several articles state that 18 F-FDG-PET-CT may have greater sensitivity than conventional tests to detect dissemination and occult lesions in the context of IFI [1,4,22].In the analysis of our data, 18 F-FDG-PET-CT was key in the diagnosis of disseminated fungal disease in one of the patients, which led to changes in therapeutic management.
Limitations to stating the role of 18 F-FDG-PET-CT in febrile neutropenia workup are access and cost.
The systematic search retrieved several very heterogeneous articles that intended to evaluate the benefits of 18 F-FDG-PET-CT in FN. The different methodologies, the lack of direct comparison between the techniques, and the different populations of patients studied precluded the performance of a meta-analysis. With the exception of the only randomized controlled trial, the quality of the retrieved studies was poor. The lack of randomization, together with the impossibility of masking, increases the risk of bias. In this systematic review we analyze the results of these studies and point out knowledge gaps and unanswered questions.
First, although some of them are prospective studies, all are single-center studies that include only a small number of patients.The study by Mahfouz T et al. [18], even if it analyzes a large sample, is a retrospective study that only includes patients with MM, with a small proportion of FN.The studies by Koh et al. [17] and Guy et al. [19] were performed at the same hospital during the same period and might thus include in part the same patients.
We consider that the most relevant limitation of the majority of the articles is that they do not compare conventional tests and 18 F-FDG-PET-CT to better discern what value the 18 F-FDG-PET-CT adds in patients with FN, and among those that do (only five small studies) [14,15,17,19,20], not all perform both techniques on the same patient [17].In this sense, the clinical trial by Douglas A. et al. [3] is a significant contribution in this area, but similar to others, its main limitation is not performing both tests in the same patient.As aforementioned, differences in underlying baseline characteristics or even in the cause of fever are difficult to address with this design and could explain, at least to some degree, the differences observed in the yields of the techniques that are being evaluated.When evaluating a diagnostic test, we believe it should be compared with other tests performed on the same patient.
Moreover, among those that compared 18 F-FDG-PET-CT with conventional imaging, only one provided individual patient data [19]. This fact, in addition to the aforementioned methodologic heterogeneity, made it impracticable to perform a meta-analysis.
In spite of these limitations, according to the results of the aforementioned articles that altogether include a total of 344 cases of FN, 18 F-FDG-PET-CT seems to provide relevant information for the management of FN in a high proportion of cases.
In several of the studies the information provided by 18 F-FDG PET-CT is especially relevant in the case of difficult to diagnose infections such as IFI [1,3,15,17,20] or parasitic infections.In addition to helping diagnose IFI and unveil dissemination, it also seems to be more useful to monitor the response to treatment than CT alone [23] since CT in some cases continues to show radiological lesions corresponding to scar tissue that do not show pathological uptake in 18 F-FDG PET-CT, allowing the ending of antifungal treatment [1].
Of even greater importance is 18 F-FDG PET-CT's contribution to optimizing antimicrobial use in FN.Due to the high negative predictive value of 18 F-FDG PET-CT, it allows reducing the use of broad-spectrum antimicrobials, favoring in many cases de-escalation and even discontinuation of empirical treatment, mainly of antifungals [1,3,17].
The clinical trial by Douglas et al. provides relevant information about the safety of basing clinical decisions on 18 F-FDG PET-CT results.However, a formal cost-effectiveness analysis is pending to justify better access to 18 F-FDG PET-CT in FN high-risk patients [3].
Clinicians experienced with the use of 18 F-FDG PET-CT for the study of infection favor its use for prolonged FN and for an IFI diagnosis, according to a survey carried out by the Australian group.In particular, physicians who treat onco-hematological patients are likely to use 18 F-FDG PET-CT in patients with FN to optimize the diagnosis and therapeutic management [24].
Many unanswered questions remain. In many cases, 18 F-FDG PET-CT is considered when fever persists despite empirical antimicrobial treatment. But how long should we wait before performing 18 F-FDG PET-CT? When will it perform better during the course of FN? Is there a basic workup that should be performed before considering 18 F-FDG PET-CT? Should this basic set of studies include conventional imaging, or should 18 F-FDG PET-CT replace it? Is there a particular subset of patients who would benefit more from 18 F-FDG PET-CT? More studies with an adequate design are needed to clarify these points. We believe that a large multicentric prospective study selecting a well-categorized and homogeneous population of high-risk FN patients, with a protocolized workup for FN that includes both conventional imaging and 18 F-FDG PET-CT performed on the same patient during a short pre-established time window, would be an appropriate model to clarify the role of 18 F-FDG PET-CT and thus to define a diagnostic protocol that could include it.
Institutional Review Board Statement:
The authors confirm that the study was approved by the Institutional Review Board (CEIm) at Hospital Universitario Puerta de Hierro (Majadahonda) (PI 109/23), and a waiver for informed consent was granted.The study complied with the provisions in European Union (EU) and Spanish legislation on data protection and the Declaration of Helsinki.
Data Availability Statement: After publication, the data will be made available to others upon reasonable requests to the corresponding author.A proposal with a detailed description of study objectives and statistical analysis plan will be needed for evaluation of the reasonability of requests.It might also be required during the process of evaluation.Deidentified participant data will be provided after approval from the principal researchers of Hospital Universitario Puerta de Hierro (Majadahonda).
Table 1. Research question in PICOS framework.
P (population): Patients of all ages with febrile neutropenia in the setting of onco-hematologic malignancy
I (intervention): 18 F-FDG PET-CT in the setting of febrile neutropenia management
C (comparison): Conventional diagnostic tests used for febrile neutropenia workup
Discrepancies in the scores were resolved through discussion.
Table 2. Characteristics of FN episodes from our hospital.
Table 3. Evaluation of conventional imaging and added value of 18 F-FDG PET-CT in 24 patients with febrile neutropenia.
Table 4 .
Summary of the eight articles selected by means of the systematic search. | 2024-02-02T16:13:34.297Z | 2024-01-31T00:00:00.000 | {
"year": 2024,
"sha1": "43a9d2bcbdbb9359dbb217a45be9eb5468fbf73c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/12/2/307/pdf?version=1706699862",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4cc2bc06d03d43b768195995156ae984680ffcd9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249207263 | pes2o/s2orc | v3-fos-license | Study on Low-Frequency Repetitive Transcranial Magnetic Stimulation Improves Speech Function and Mechanism in Patients With Non-fluent Aphasia After Stroke
Objective To explore the therapeutic effect and mechanism of low-frequency repetitive transcranial magnetic stimulation (rTMS) on the speech function of patients with non-fluent aphasia after stroke. Methods According to the inclusion and exclusion criteria, 60 patients with post-stroke non-fluent aphasia were included and randomly divided into a treatment group (rTMS group) and a sham stimulation group (S-rTMS group). Patients in the rTMS group received low-frequency rTMS plus speech therapy (ST) training, and patients in the S-rTMS group received sham low-frequency rTMS plus ST training, once a day, 5 days a week, for a total of 4 weeks. The Western Aphasia Battery and the short-form Token test were used to evaluate the language function of the patients in the two groups before and after treatment. Some of the enrolled patients underwent functional magnetic resonance imaging, and fasting morning venous blood was drawn before and after treatment to determine the content of BDNF and TNF-α. Results In the within-group comparison before and after treatment, all dimensions of the WAB scale increased significantly in the rTMS group, whereas only two dimensions of the WAB scale improved significantly in the S-rTMS group after treatment. The short-form Token test results showed that patients in the rTMS group improved significantly before and after treatment. Resting-state functional magnetic resonance imaging of the two groups before and after treatment showed that activation of multiple brain regions in the left hemisphere increased in the rTMS group compared with the control group. After treatment, the serum BDNF content of patients in the rTMS group was significantly higher than that of patients in the S-rTMS group. Conclusion Low-frequency rTMS combined with conventional speech training can significantly improve the speech function of patients with non-fluent aphasia after stroke.
INTRODUCTION
Aphasia is a language disorder syndrome in which organic brain disease, arising from various causes, damages the brain areas that govern language expression and auditory comprehension, so that patients can neither express themselves normally nor understand what others say. It is very common in patients with cerebrovascular disease. According to research statistics, the incidence of aphasia in stroke patients is about 20-40% (Menichelli et al., 2019).
Aphasia Recovery Mechanism
Regarding the mechanism of aphasia recovery: when the language hub of the dominant hemisphere is damaged in the acute phase, its inhibition of the surrounding brain areas is weakened, which promotes activation of the brain areas around the damaged region and plastic functional reconstruction, and thus the recovery of the patient's language function. In the subacute phase, the mirror brain area of the language hub in the right hemisphere is activated because inhibition from the dominant hemisphere is weakened, which benefits functional recovery in patients with aphasia to a certain extent. In the chronic recovery period, as the function of the left dominant hemisphere gradually recovers, its activation level during language training gradually increases and its inhibition of the right hemisphere gradually strengthens; at the same time, the activation level of the right hemisphere gradually decreases, and the language hub gradually returns to the left dominant hemisphere. Therefore, in the chronic phase, in order to reduce the inhibitory effect of the non-dominant hemisphere on the dominant hemisphere, it is necessary to inhibit the corresponding brain areas of the non-dominant hemisphere, which in turn can excite the language hub in the dominant hemisphere and promote the recovery of the patient's language function. Clinical practice has also shown that cortical stimulation can facilitate functional improvement (Zhang J. et al., 2021).
Application of rTMS in Aphasia
Repetitive transcranial magnetic stimulation (rTMS) is one of the main non-invasive brain stimulation technologies to have emerged in recent years. It not only has a temporary inhibitory or excitatory effect on the cerebral cortex, but also induces long-term plasticity changes. A large number of studies affirm its efficacy in the treatment of aphasia (Rossetti et al., 2019), but the specific mechanism of action is still unclear. Some scholars have used functional magnetic resonance imaging to explore the mechanism of rTMS through the specific brain regions it activates or inhibits, but the conclusions differ widely (Szaflarski et al., 2018; Arheix-Parras et al., 2021; Fahmy and Elshebawy, 2021; Neri et al., 2021). In addition, studies have found that after rTMS treatment, the level of brain-derived neurotrophic factor (BDNF) in the peripheral blood of patients with depression is higher than before, which may be one of the mechanisms of rTMS (Zhao et al., 2019).
Research Purposes
In this study, low-frequency repetitive transcranial magnetic stimulation was applied to the posterior inferior frontal gyrus of the right cerebral hemisphere in patients with non-fluent aphasia after stroke, to clarify its therapeutic effect on language function. Some of the enrolled patients underwent resting-state functional magnetic resonance scans before and after treatment; fractional amplitude of low-frequency fluctuation and degree centrality methods were used to statistically analyze the image data and identify the specific brain regions that were activated or inhibited, and functional connectivity analysis was combined to explore plasticity changes in specific brain regions. At the same time, venous blood was collected from the enrolled patients in the early morning before the start of treatment and after the end of the treatment course to determine the BDNF content, in order to explore the treatment mechanism of rTMS in patients with non-fluent aphasia after stroke from the perspective of cytokines and to provide clinical and theoretical support for the clinical treatment of aphasia.
Research Object
According to the inclusion and exclusion criteria, 60 patients with post-stroke aphasia who were hospitalized in the Rehabilitation Medicine Department of Qingdao University Affiliated Hospital from 2017-12 to 2019-10 were randomly divided into treatment group (rTMS group) and control group (S-rTMS group). This study was reviewed by the ethics committee of the Affiliated Hospital of Qingdao University (qyfykyll 2018-23). Written informed consent was obtained from the individual for the publication of any potentially identifiable images or data included in this article.
Inclusion Criteria
(1) Clinical compliance with the criteria of "Diagnosis Essentials for Various Cerebrovascular Diseases" formulated by the Fourth National Cerebrovascular Disease Conference of the Chinese Medical Association in 1995, with CT or MRI confirming a first stroke in the left (dominant) hemisphere; (2) right-handed (standardized measurement), with normal language function before onset; (3) course of illness about 2 weeks to 6 months after stroke; (4) Western Aphasia Battery (WAB) aphasia quotient (AQ) < 93.8, non-fluent aphasia, with a score of 0-4 for speech fluency; (5) Chinese as the first language and an education level of elementary school or above, able to cooperate with the assessment; (6) no epilepsy, severe heart disease, or other severe physical disease; (7) clear mind, cooperative with physical examination, fully oriented, without obvious memory impairment or intellectual impairment; (8) able to independently maintain a sitting position for more than 30 min; (9) patients and family members signed informed consent.
Exclusion Criteria
(1) Complicated with other neurodegenerative diseases, such as speech disorders caused by Parkinson's disease, dementia, etc.; (2) auditory or visual defects that may affect assessment and treatment; (3) use of drugs that change the excitability of the cerebral cortex (antiepileptic drugs), sleeping pills, benzodiazepines, etc.; (4) epilepsy, severe heart, liver, or kidney dysfunction, or other serious physical diseases; (5) unconscious and unable to cooperate with examination and treatment; (6) a history of mental abnormalities; (7) contraindications to rTMS and MRI according to safety guidelines, such as metal foreign bodies or other electronic devices implanted in the body.
Among them, 60 patients completed the initial evaluation of the WAB scale and the short-form Token test. The patients in the rTMS group completed the entire experimental process and the final evaluation of the WAB scale and the short-form Token test. One patient in the S-rTMS group was transferred to neurosurgery because of recurrent cerebral hemorrhage, and another patient withdrew from the trial early for personal reasons; therefore, only 28 patients completed the final evaluation of the WAB scale and the short-form Token test. In order to explore the specific mechanism of rTMS, we performed resting-state functional magnetic resonance scans on some patients before and after treatment, collected the patients' peripheral blood, and measured the changes in BDNF and TNF-α in their peripheral serum. The general information of the enrolled patients is shown in Table 1.
Apparatus
All experiments were completed in the Department of Rehabilitation Medicine and the Central Laboratory of the Affiliated Hospital of Qingdao University.
Measurement of Motor Threshold
Select the contralateral abductor pollicis brevis as the recording muscle, placing the recording electrode on the muscle belly and the reference electrode on the first joint of the thumb of the same upper limb. The stimulation coil stimulates the patient's right hemisphere; the coil position is adjusted gradually to determine the optimal stimulation site and intensity (where the latency is shortest and the amplitude largest). The output intensity is then adjusted stepwise to find the lowest intensity at which, in 10 consecutive stimulations, motor evoked potentials of the contralateral abductor pollicis brevis appear at least 5 times with amplitudes of no less than 50 µV; this intensity is taken as the motor threshold.
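As an illustration of the threshold criterion just described, the following Python sketch (not the software used in the study; the intensities and MEP amplitudes are hypothetical) checks, for each tested stimulator intensity, whether at least 5 of 10 consecutive stimulations evoked motor evoked potentials of 50 µV or more, and reports the lowest intensity that satisfies the criterion as the motor threshold.

```python
# Illustrative sketch (not the authors' software): an intensity meets the
# resting motor threshold criterion when at least 5 of 10 consecutive
# stimuli evoke MEPs of >= 50 microvolts in the contralateral abductor
# pollicis brevis.

def meets_rmt_criterion(mep_amplitudes_uv, min_amplitude_uv=50.0, min_hits=5):
    """mep_amplitudes_uv: peak-to-peak MEP amplitudes (microvolts) from
    10 consecutive stimulations at one stimulator intensity."""
    hits = sum(1 for amp in mep_amplitudes_uv if amp >= min_amplitude_uv)
    return len(mep_amplitudes_uv) == 10 and hits >= min_hits

def estimate_rmt(responses_by_intensity):
    """responses_by_intensity: dict mapping stimulator output (% max) to a
    list of 10 MEP amplitudes; returns the lowest intensity that meets the
    criterion, i.e. the resting motor threshold."""
    passing = [i for i, amps in responses_by_intensity.items()
               if meets_rmt_criterion(amps)]
    return min(passing) if passing else None

# Hypothetical data: intensities tested in 5% steps
data = {45: [20, 30, 10, 55, 40, 25, 35, 20, 45, 30],
        50: [60, 55, 40, 70, 52, 48, 65, 58, 30, 66]}
print(estimate_rmt(data))  # -> 50 (% of maximal stimulator output)
```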
Stimulation Site
The posterior part of the inferior frontal gyrus of the patient's non-dominant (right) hemisphere was selected as the stimulation site. The stimulation coil was placed tangentially, close against the surface of the patient's skull, with the center point of the figure-of-eight coil on the mark and the coil handle pointing vertically toward the patient's occiput. Body-surface positioning was selected according to the electrode positioning map calibrated by the International Electroencephalography Society. Before and after treatment, WAB scale assessment and resting-state functional magnetic resonance scans were performed in the two groups of patients. The stimulation parameters and stimulation site of the sham stimulation group were the same as those of the treatment group, but the stimulation coil was held perpendicular to the surface of the skull.
Stimulation Parameters
The stimulation intensity was set at 80% of the motor threshold, with a stimulation frequency of 1 Hz; 10 pulses formed one sequence, with a 2 s interval between sequences, and 100 sequences were delivered per day (1,000 pulses in total), 5 days a week. The total course of treatment was 4 weeks.
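For orientation, the arithmetic implied by these parameters can be laid out as a short sketch; the parameter values are taken from the text above, while the helper code itself is only illustrative.

```python
# Minimal sketch of the daily stimulation protocol arithmetic described above
# (1 Hz, 10 pulses per train, 2 s inter-train interval, 100 trains per day).

freq_hz = 1.0          # stimulation frequency
pulses_per_train = 10  # pulses in one sequence
inter_train_s = 2.0    # pause between sequences
trains_per_day = 100   # sequences per daily session
intensity = 0.80       # fraction of the resting motor threshold

pulses_per_day = pulses_per_train * trains_per_day           # 1,000 pulses
train_duration_s = pulses_per_train / freq_hz                # 10 s per train
session_s = trains_per_day * train_duration_s + (trains_per_day - 1) * inter_train_s

print(f"{pulses_per_day} pulses/day, session ~ {session_s / 60:.1f} min at {intensity:.0%} RMT")
# -> 1000 pulses/day, session ~ 20.0 min at 80% RMT
```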
Routine Speech Training and Language Function Scale Assessment
Conventional speech training was conducted one-to-one by speech therapists using the Schuell stimulation method, blocking removal, de-inhibition, programmed training, and other methods, appropriately combined with computer-based picture naming training; each session lasted about 30 min. The Chinese version of the Western Aphasia Battery (WAB) scale and the short-form Token test were used to evaluate the two groups of patients before and after treatment, and the evaluation results were summarized by dimension.
Functional Magnetic Resonance Parameter Setting
Resting-state functional magnetic resonance (rs-fMRI) scans were performed for all subjects. The scan parameters were: TR 2,000 ms, TE 30 ms, slice thickness 5.0 mm with no gap, field of view 240 mm × 240 mm, and matrix 960 × 960. The imaging range covered the whole brain as far as possible, with 25 slices from the base of the skull to the parietal lobe and 279 frames per slice, so that a total of 6,975 images were collected over an acquisition time of 558 s. During the examination, patients were asked to avoid any purposeful thinking as far as possible, to lie supine on the examination bed with eyes closed, to breathe calmly, and to stay awake. Scanning started after the patient had adapted to the magnet and the surrounding environment.
Image Preprocessing
Preprocessing was performed on the Matlab R2017b platform, and the image data were then processed with DPABI v4.0 (http://rfmri.org/dpabi) and SPM12. The processing steps were as follows: format conversion, slice-timing correction, head motion correction, spatial normalization, removal of linear drift, regression of covariates, etc. The 0.010-0.027 Hz (slow-5) sub-band was selected for processing and analysis of the images.
Fractional Amplitude of Low-Frequency Fluctuation
The fALFF value is the ratio of the sum of the amplitudes within the preset frequency band to the sum of the amplitudes over the whole frequency range. The whole-brain voxels are then normalized, that is, each voxel's fALFF is divided by the whole-brain mean fALFF to obtain mfALFF, and Gaussian smoothing (FWHM 4 mm × 4 mm × 4 mm) is applied to obtain the smfALFF result.
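A minimal sketch of this fALFF/mfALFF definition is given below, assuming a simple NumPy implementation rather than the DPABI code actually used; the TR (2 s) and the slow-5 band edges (0.010-0.027 Hz) are taken from the text, while the example data are random placeholders. The spatial smoothing step (FWHM 4 mm) applied afterwards is omitted here.

```python
# Illustrative fALFF computation for voxel time series: the sum of spectral
# amplitudes in a preset band divided by the sum over the whole frequency
# range; mfALFF then divides each voxel's fALFF by the whole-brain mean.
import numpy as np

def falff(ts, tr=2.0, band=(0.010, 0.027)):        # slow-5 sub-band
    ts = ts - ts.mean()
    amp = np.abs(np.fft.rfft(ts))                   # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].sum() / (amp[1:].sum() + 1e-12)   # skip the DC term

def mfalff(voxel_ts, brain_mask, tr=2.0, band=(0.010, 0.027)):
    """voxel_ts: (n_voxels, n_timepoints) array; brain_mask: boolean (n_voxels,)."""
    vals = np.array([falff(ts, tr, band) for ts in voxel_ts])
    return vals / vals[brain_mask].mean()            # normalize by whole-brain mean

# Hypothetical example: 1,000 voxels, 279 time points (as acquired here)
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 279))
mask = np.ones(1000, dtype=bool)
print(mfalff(data, mask).mean())                     # ~1.0 by construction
```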
Degree Centrality Analysis
Each voxel is a node, and the connection between two voxels is called an edge. Pearson's correlation coefficients are calculated between all pairs of voxels (nodes) with obvious functional connections in the brain functional connectivity group; applying a threshold of r > 0.25 yields an undirected weighted adjacency matrix of size (number of voxels) × (number of voxels), from which the weighted DC value of each voxel is obtained. Each DC value is then divided by the whole-brain mean DC to complete data standardization, and Gaussian smoothing is performed before between-group statistical analysis.
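The weighted degree-centrality computation described above can be sketched as follows; this is an illustrative NumPy version (real whole-brain data would be processed in chunks rather than through one full voxel-by-voxel matrix), with the r > 0.25 edge threshold taken from the text.

```python
# Minimal sketch of weighted degree centrality (DC): voxel-wise Pearson
# correlations, edges kept at r > 0.25, weighted degree summed per voxel,
# then normalized by the whole-brain mean DC.
import numpy as np

def weighted_dc(voxel_ts, r_threshold=0.25):
    """voxel_ts: (n_voxels, n_timepoints) array of preprocessed time series."""
    r = np.corrcoef(voxel_ts)                  # voxel-by-voxel correlation matrix
    np.fill_diagonal(r, 0.0)                   # a voxel is not its own neighbor
    r[r <= r_threshold] = 0.0                  # keep only supra-threshold edges
    dc = r.sum(axis=1)                         # weighted degree of each voxel
    return dc / (dc.mean() + 1e-12)            # standardize by whole-brain mean

rng = np.random.default_rng(1)
print(weighted_dc(rng.standard_normal((500, 279))).shape)   # (500,)
```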
Functional Connectivity Analysis
Several speech-related brain regions of interest (ROIs) were selected based on previous research at home and abroad. The average time series of each ROI was calculated, and Pearson correlation analysis was then performed between every pair of ROIs to obtain the correlation coefficient between any two ROIs, yielding a correlation matrix that was then normalized and entered into the next step of processing and analysis.
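A minimal sketch of this ROI-to-ROI functional-connectivity step is shown below; the ROI labels are hypothetical placeholders rather than the exact regions used in the study, and the Fisher r-to-z step stands in for the normalization mentioned above.

```python
# Illustrative ROI-to-ROI functional connectivity: average the time series
# within each region of interest, correlate every ROI pair, and apply
# Fisher's r-to-z transformation before group statistics.
import numpy as np

def roi_fc_matrix(roi_mean_ts):
    """roi_mean_ts: (n_rois, n_timepoints) array of ROI-averaged signals."""
    r = np.corrcoef(roi_mean_ts)
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))   # Fisher z-transform
    np.fill_diagonal(z, 0.0)
    return z

rois = ["L_SMA", "R_MTG", "L_IFG", "R_IFG"]            # placeholder labels
rng = np.random.default_rng(2)
z = roi_fc_matrix(rng.standard_normal((len(rois), 279)))
print(dict(zip(rois, z[0])))                            # connectivity of L_SMA with each ROI
```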
Serum Processing and Storage Methods
Three milliliters of early-morning venous blood were collected from the enrolled patients before treatment and after the end of the treatment course. After centrifugation at room temperature for 5 min, the separated serum was stored in a refrigerator at −80 °C until analysis; if precipitation occurred during storage, the sample was centrifuged again. Detection steps: equilibrate the reagents, prepare the reagents, add the samples, develop the color, terminate the reaction, determine the optical density (OD value), and use ELISA calc software to calculate the serum factor content.
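The conversion from OD readings to concentrations is typically done by fitting a standard curve; the sketch below assumes a four-parameter logistic model, which is what common ELISA calculators implement, and uses hypothetical standard concentrations and OD values rather than the kit's actual standards.

```python
# Hedged sketch of converting ELISA optical density (OD) readings to
# concentrations with a four-parameter logistic (4PL) standard curve;
# the standards and OD values below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** hill)

std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0])  # pg/ml (hypothetical)
std_od = np.array([0.12, 0.20, 0.35, 0.60, 0.95, 1.40, 1.80])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 2.0, 100.0, -1.0], maxfev=10000)

def od_to_conc(od, p):
    """Invert the fitted 4PL curve to interpolate a sample concentration."""
    bottom, top, ec50, hill = p
    return ec50 * ((top - bottom) / (od - bottom) - 1.0) ** (1.0 / hill)

print(round(float(od_to_conc(0.50, params)), 1))   # sample concentration in pg/ml
```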
Statistical Analysis
The scores of the various dimensions of the WAB scale and the short-form Token test scores were analyzed with the SPSS 19.0 statistical software package. The measurement data obtained in this experiment are expressed as (χ ± s); measurement data within and between groups were compared using one-way analysis of variance, count data using the chi-square test, and pairwise correlation analysis using linear correlation analysis, with P < 0.05 indicating that the difference is statistically significant. DPABI v4.0 software was used to perform two-sample t-tests on the slow-5 band fALFF, DC, and FC image data of the rTMS group (after treatment minus before treatment) and the S-rTMS group (after treatment minus before treatment), with GRF correction for multiple comparisons (threshold: individual level P < 0.05, cluster level P < 0.05). Serum BDNF and TNF-α levels were analyzed with the SPSS 19.0 statistical software package; these measurement data are expressed as χ ± s and were compared within and between groups by one-way analysis of variance, with P < 0.05 indicating a statistical difference.
(Table footnotes: *statistically significant difference in the within-group comparison before and after treatment, P < 0.05; #statistically significant difference between the two groups after treatment, P < 0.05.)
FIGURE 1 | fALFF analysis of the difference between the two groups before and after treatment. The blue area is the brain area where activation is inhibited (threshold: individual level P < 0.05, cluster level P < 0.05).
FIGURE 2 | DC analysis of the distribution of brain regions with significant differences between the two groups before and after treatment; the yellow area is the activated brain area (threshold: individual level P < 0.05, cluster level P < 0.05).
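The group comparisons described above can be illustrated with SciPy in place of SPSS; the sketch below uses hypothetical score, count, and serum values purely to show the shape of the analysis (one-way ANOVA, chi-square test, and linear correlation).

```python
# Minimal sketch of the statistical comparisons described above, with
# hypothetical data standing in for the study measurements.
import numpy as np
from scipy import stats

rtms_aq_post = np.array([62.1, 70.4, 58.9, 66.3, 73.5])       # hypothetical AQ scores
srtms_aq_post = np.array([51.2, 55.8, 49.7, 60.1, 54.3])
f_stat, p_anova = stats.f_oneway(rtms_aq_post, srtms_aq_post)  # between-group comparison

sex_table = np.array([[18, 12],                                 # hypothetical male/female counts
                      [16, 14]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(sex_table)        # count data

bdnf = np.array([35.1, 40.2, 38.7, 42.9, 44.0])                 # hypothetical pg/ml
aq_change = np.array([8.0, 12.5, 10.1, 15.3, 16.8])
r, p_corr = stats.pearsonr(bdnf, aq_change)                     # linear correlation

print(f"ANOVA p={p_anova:.3f}, chi-square p={p_chi2:.3f}, r={r:.2f} (p={p_corr:.3f})")
```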
RESULTS
(1) The effect of low-frequency rTMS over the Broca mirror area in the right inferior frontal gyrus on the WAB scale dimensions and short-form Token test scores in patients with non-fluent aphasia after stroke: in the rTMS group, the scores of every dimension of the WAB scale and the short-form Token test scores improved significantly from before to after treatment (P < 0.05), whereas in the S-rTMS group only the three WAB dimensions of spontaneous language, naming, and aphasia quotient and the short-form Token test scores improved significantly (P < 0.05). After treatment, the scores of the two groups differed statistically only in the three WAB dimensions of spontaneous language, naming, and aphasia quotient (P < 0.05); see Table 2 for details.
(2) Brain regions with meaningful between-group differences in the fALFF analysis: the data analysis showed that, in the slow-5 sub-band, there were two clusters with statistically significant differences. The fALFF values of multiple brain regions were lower in the rTMS group than in the S-rTMS group, including the right dorsolateral superior frontal gyrus, right supplementary motor area, right inferior frontal gyrus pars opercularis (56 voxels, MNI X = 36, Y = −39, Z = 15, T = −4.76, P < 0.05), right Brodmann area 8, right angular gyrus, right supramarginal gyrus, and right middle temporal gyrus (19 voxels, MNI X = 27, Y = −9, Z = 24, T = −5.37, P < 0.05), indicating that activation of these brain regions was suppressed in the rTMS group relative to the S-rTMS group. See Figure 1 for details.
(4) On the basis of the previous image processing, according to the pre-selected multiple ROIs related to the language function, the pairwise function connection analysis is performed on the basis of the difference between the two groups, and the t-test is performed. The result shows: between the left frontal lobe (supplementary motor area) (voxel 35, MNI X = −3, Y = −24, Z = 57, T = 6.70, P < 0.05) and the right temporal lobe (middle temporal gyrus) (voxel 19, MNI X = 27, Y = −9, Z = 24, T = −5.37, P < 0.05) became stronger (P < 0.05), indicating that the connection between the two hemispheres of the patients in the rTMS group was strengthened as shown in Figure 3.
(5) Changes in the serum BDNF content of the two groups of patients before the treatment and after the end of the treatment course: the serum BDNF content (pg/ml) of the patients in the rTMS group increased from 35.34 to 42.09 (P < 0.05), while the serum BDNF of the patients in the S-rTMS group the content (pg/ml) increased from 31.24 to 34.76 (P > 0.05). There was no significant difference in serum BDNF content between the two groups before treatment (P > 0.05). After the treatment, the serum BDNF content of patients in the rTMS group was significantly higher than that of the patients in the S-rTMS group (P < 0.05) and the specific results are shown in Table 3.
DISCUSSION
Aphasia is a language disorder syndrome in which organic brain disease of various causes damages the brain areas that govern language expression and listening comprehension, leading to abnormal speech expression and abnormal listening comprehension. Among the various diseases causing brain damage, stroke is the most common; according to research statistics, about 30% of stroke patients are accompanied by aphasia (Menichelli et al., 2019).
As a new non-invasive technology that directly acts on the cerebral cortex, rTMS has been proven by many studies to treat patients with aphasia after stroke, but its specific mechanism is still controversial.
In this study, patients with non-fluent aphasia after stroke were treated with low-frequency rTMS for 4 consecutive weeks, and the WAB scale and short-form Token test were used to evaluate aphasia before and after treatment. The results showed that, compared with sham low-frequency rTMS combined with conventional speech training, low-frequency rTMS over the posterior inferior frontal gyrus of the right hemisphere combined with conventional speech training significantly improved the patients' naming, spontaneous language, and other expressive skills.
Low-frequency rTMS combined with speech training can significantly improve the expression ability of patients with aphasia. The results of this study are consistent with the conclusions of previous studies. Some scholars use low-frequency rTMS as a single treatment method. After a short-term treatment is given to the patient's non-dominant hemisphere inferior frontal gyrus (the mirror area of the Broca area) for a short period of time, it is found that this single stimulation can improve the accuracy of patient naming, and the patient's reaction time will be significantly shortened (Terao and Ugawa, 2002). Harvey et al. (2019) used another continuous theta pulse magnetic stimulation (similar to low-frequency rTMS stimulation) to act on the posterior part of the right inferior frontal gyrus of patients with chronic aphasia. After treatment, they found that the patient's picture naming ability was significantly improved, indicating that this treatment plan is beneficial to improve the patient's naming ability (Harvey et al., 2019). Some scholars also use low-frequency rTMS to stimulate the posterior part of the inferior frontal gyrus of the non-dominant hemisphere for 10 times. The results show that this treatment can significantly improve the patient's language fluency (Lopez-Romero et al., 2019). Some scholars have also combined low-frequency rTMS on the posterior part of the right inferior frontal gyrus with speech training. After 2 weeks of treatment, the language fluency of patients with aphasia in the treatment group has improved greatly compared with the control group (Haghighi et al., 2017).
Our study combined low-frequency rTMS with conventional speech training and found that its therapeutic effect was significantly better than that of speech training alone. There are many similar studies. Yoon et al. (2015) combined low-frequency rTMS therapy with speech therapy to explore the therapeutic effect of the combination over a treatment course of 4 weeks. The Korean version of the WAB scale was used to evaluate the two groups of patients before and after treatment, and it was found that the naming and retelling ability scores of the patients increased significantly after low-frequency rTMS stimulation, so that rTMS combined with speech training can be used as an effective treatment for patients with non-fluent aphasia after stroke (Yoon et al., 2015). In order to explore the specific mechanism of low-frequency rTMS in patients with aphasia, we performed resting-state functional magnetic resonance examinations on some patients before and after treatment and used different analysis methods for the statistical analysis of the image data. First, the fractional amplitude of low-frequency fluctuation (fALFF) method was used. This method standardizes ALFF at the individual level, which effectively avoids the shortcomings of ALFF and improves the sensitivity and specificity of detection. Some scholars (Liu, 2016) used functional magnetic resonance imaging to study brain plasticity in patients with aphasia and found that rehabilitation training can increase the ALFF values of the temporal lobe of the left cerebral hemisphere and the right cerebellum, suggesting that these brain areas play an important role in the recovery process of aphasia.
Our study found that the fALFF values of multiple brain regions in the right frontal lobe (right inferior frontal gyrus pars opercularis, right supplementary motor area, etc.) decreased, indicating that activation of these brain regions was significantly inhibited in the rTMS group. The likely reason is that low-frequency rTMS over the mirror area of the Broca area in the right hemisphere inhibits activation of this area; the region then shows relative hypoperfusion, local blood oxygenation is disproportionately reduced as neuronal oxygen consumption decreases, deoxyhemoglobin (which is paramagnetic) relatively increases, and the signal is therefore weakened. This shows that low-frequency rTMS can indeed markedly inhibit the target region, thereby reducing activation of the right brain areas, reducing their inhibition of the Broca area of the dominant hemisphere through the corpus callosum, promoting activation of the Broca area of the dominant hemisphere, and improving the speech expression of patients with aphasia by promoting brain plasticity. Some scholars have found that the degree of language impairment in patients with aphasia is positively correlated (Pearson correlation test) with the right middle frontal gyrus (Zhu et al., 2014), which is consistent with our finding that activation of the right frontal lobe decreases and activation of the left inferior frontal gyrus increases during the recovery of speech function.
At the same time, activation of the right temporal lobe (middle temporal gyrus), right parietal lobe (angular gyrus, right supramarginal gyrus), and other brain regions in the rTMS group was also inhibited. The likely reason is that after low-frequency rTMS is applied to the right inferior frontal gyrus, activation of this area decreases, and because this area has functional connections of varying strength with the surrounding brain areas, a decline in the function of one brain region also affects the function of its neighbors. As the function of these right-hemisphere regions decreases, their inhibitory effect on the corresponding brain regions of the dominant hemisphere weakens, thereby promoting the functional activation and recovery of those regions in the dominant hemisphere. Since these regions in the dominant hemisphere include the reading center and the naming center, this can also explain why low-frequency rTMS applied to the posterior part of the inferior frontal gyrus in the right hemisphere can improve naming and dyslexia in patients with aphasia.
Then, we used the Degree Centrality (DC) method to analyze. DC reflects the number of connections in the adjacent areas of the brain. Specifically, it refers to the number of direct connections between a node in the brain and other adjacent nodes, which can be directly quantified (Van den Heuvel and Sporns, 2013). DC can reflect the attributes of important nodes (hub nodes) at the center of the brain network. Because of its high connectivity with the surrounding brain nodes, it has a core dominance, and even has long-distance connections with other nodes, and its functions are the most complex, so its energy consumption (oxygen consumption) is higher than that of general nodes, also it is easy to be damaged in cerebrovascular diseases (Bullmore and Sporns, 2012). Wise (2003) found that speech training can activate the brain areas around the damaged language center in the dominant hemisphere. Other studies believe that inhibiting the activation level of the right cerebral hemisphere and reducing its inhibition to the dominant hemisphere through the corpus callosum can improve the long-term efficacy of patients with aphasia (Breier et al., 2009).
The left pars triangularis of the inferior frontal gyrus in the dominant hemisphere is known as the classic language brain area for oral expression, mainly responsible for speech planning and execution. In recent years, scholars have suggested that the scope of the classic Broca area should include other areas of the frontal lobe, such as the middle frontal gyrus of the dominant hemisphere, which participates in language production. At the same time, studies have found that the superior frontal gyrus of the dominant hemisphere is also a key area in the language network, related to patients' language fluency and to functions such as semantic conversion, retelling, naming, and listening comprehension (Sollmann et al., 2014). The results of our study showed that DC values increased in brain regions such as the left superior parietal lobule, the left angular gyrus, the left frontal lobe (BA6 area, middle frontal gyrus, superior frontal gyrus, supplementary motor area, paracentral lobule), and the bilateral limbic lobe (cingulate gyrus), indicating that these brain regions were more activated in the rTMS group than in the S-rTMS group, which also supports the above view.
Finally, we adopt the method of Functional Connection (FC) to analyze. FC reflects the degree of connection of neuronal activity between different brain regions that are far away. Through rs-fMRI, the functional network and anatomical structure of the entire brain can be studied. Li (2017) studied the brain function of patients with motor aphasia after stroke and found that after 1 month of rehabilitation, the functional connection between the middle temporal gyrus of the left dominant hemisphere and the left frontal lobe, insula and other brain regions increases. At the same time, the functional connection between the middle temporal gyrus of the left dominant hemisphere, the marginal lobe of the left hemisphere, and the cerebellum decreased. During the recovery period of aphasia, the functional connection between the left middle frontal gyrus and the undamaged brain area around the damaged brain area also increases. In addition, some researchers believe that (Zhu et al., 2017), patients with acute stroke not only have disordered language central function, but also interfere with the default network of the brain, which leads to a decline in the cognitive function of stroke patients.
Some scholars (Sreedharan et al., 2019) found that the recovery of language function in patients with aphasia after stroke is often accompanied by changes in functional connectivity: in the acute phase, the connectivity coefficient of the language network is significantly reduced, whereas in the chronic phase it is significantly enhanced. The same study also found that even high-risk patients show a decrease in resting-state functional connectivity. Others (Zhang C. et al., 2021) found that patients with motor aphasia after stroke showed a significant decline in language ability, accompanied by a significant decrease in the average functional connectivity index of the frontal and parietal lobes of the left dominant hemisphere; as language comprehension improved, the average connectivity index of the frontal and parietal lobes of the dominant hemisphere gradually increased. These findings indicate that improvement in the language comprehension of patients with aphasia after brain injury may be achieved by changing the functional connections between brain areas. Moreover, research has confirmed that improvement in language function in patients with aphasia is also related to changes in the functional connections of brain regions.
The supplementary motor area is very important for motor control. Its anterior portion is mainly responsible for the preparation and selection of movements, its posterior portion is responsible for their execution, and the area as a whole plays a decisive role in both the low-level execution and the high-level control of movement. Studies have confirmed that treatment targeting the bilateral supplementary motor areas in patients with aphasia can improve naming ability (Naeser et al., 2020). This also indicates that low-frequency rTMS over the mirror area of the Broca area in the right hemisphere can improve the language function of patients with aphasia by increasing the number and efficiency of functional connections across multiple brain areas. Our research also found that after low-frequency rTMS treatment, the functional connectivity between the supplementary motor area and several regions of both hemispheres was significantly enhanced in patients with aphasia, and this may be one of the mechanisms by which low-frequency rTMS benefits these patients.
At the same time, in order to explore the specific therapeutic mechanism of low-frequency rTMS, we measured the changes in peripheral serum BDNF and TNF-α concentrations of the enrolled patients before and after treatment, seeking to examine the mechanism from the perspective of cytokines.
Studies have confirmed that many neurotrophic factors in the brain promote the recovery and improvement of brain function. rTMS treatment may promote the release of some of these factors and thereby the repair or improvement of damaged brain function (Arheix-Parras et al., 2021).
Studies have found that after rTMS treatment, peripheral blood BDNF levels in patients with depression are higher than before treatment, which may be one of the mechanisms of rTMS (Zhao et al., 2019). We therefore measured the serum BDNF concentration in the peripheral blood of both groups of patients before and after treatment.
BDNF is a protein that plays an important role in the growth and differentiation of nerve cells and can repair damaged neurons, thereby improving higher cognitive functions such as learning and memory (Asadi et al., 2018; Huey Fremont et al., 2020); in particular, it plays an important role in mediating the neural plasticity underlying the recovery of language function in patients with aphasia after stroke (Di Pino et al., 2016). Animal studies have also shown that BDNF promotes long-term potentiation (LTP) through TrkB signaling (Lamb et al., 2015), which is considered essential for hippocampal memory processes (Zagrebelsky and Korte, 2014). Moreover, studies have found that BDNF can cross the blood-brain barrier through a high-capacity saturable transport system, and animal studies have observed a positive correlation between BDNF levels in the brain and in blood (Angelucci et al., 2011). Morichi et al. (2013) also found that changes in peripheral blood BDNF are related to changes in cerebrospinal fluid (CSF) BDNF, so that changes in BDNF at the peripheral level may reflect changes in BDNF in the brain. Accordingly, some scholars hold that peripheral BDNF changes may reflect, or at least partially reflect, changes in brain BDNF (Fritsch et al., 2010). Winter et al. (2007) also found that serum BDNF content increased significantly after high-intensity exercise and that vocabulary learning speed improved significantly; they attributed the increase in short-term learning success to the increase in BDNF level.
Therefore, in this study, peripheral blood was collected from the enrolled patients before and after low-frequency rTMS treatment, changes in peripheral serum BDNF content were measured, and from these we inferred changes in BDNF content in the central nervous system. The results showed that serum BDNF in the rTMS group increased significantly from before to after treatment, and that after treatment the serum BDNF of the rTMS group was significantly higher than that of the S-rTMS group. This suggests that patients with nonfluent aphasia show a significant increase in peripheral serum BDNF after low-frequency rTMS treatment. Many previous studies have confirmed that an increase in peripheral serum BDNF can indirectly reflect changes in BDNF content in the brain. We therefore believe that after low-frequency rTMS stimulation, the BDNF content in the brain also increases to a certain extent, which may be one of the mechanisms of low-frequency rTMS.
In summary, our results show that low-frequency rTMS combined with conventional speech training can significantly improve the language function of patients with nonfluent aphasia. In addition to directly changing the excitability of the cortex in the stimulated area, rTMS can inhibit the activation of brain regions in the frontal and temporal lobes of the right cerebral hemisphere and promote the activation of regions in the frontal and temporal lobes of the left dominant hemisphere, thereby improving the function of these regions and promoting changes in brain plasticity. It may also affect the transmission, expression, and release of various cytokines and neurotransmitters, especially BDNF, which in turn promotes changes in brain plasticity. This may be one of the mechanisms by which rTMS promotes the improvement of central nervous system function, especially the language function of the brain.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of The Affiliated Hospital of Qingdao University. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
QW and GB: conceptualization. LJ: methodology and formal analysis. YW: software. PM and XP: validation. SY and YZ: investigation. SH: resources and data curation. GB: writing - original draft preparation. GB, LJ, and QW: writing - review and editing. All authors have read and agreed to the published version of the manuscript. | 2022-06-01T13:28:00.348Z | 2022-05-30T00:00:00.000 | {
"year": 2022,
"sha1": "a6d3c1852df53fa53fa658038fae67ac2f64c6d3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a6d3c1852df53fa53fa658038fae67ac2f64c6d3",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
237572331 | pes2o/s2orc | v3-fos-license | Molecular Gas in the Nuclear Region of NGC 6240
NGC 6240 is a luminous infrared galaxy in the local universe in the midst of a major merger. We analyze high-resolution interferometric observations of warm molecular gas using CO J = 3 - 2 and 6 - 5 in the central few kpc of NGC 6240 taken by the Atacama Large Millimeter Array. Using these CO line observations, we model the density distribution and kinematics of the molecular gas between the nuclei of the galaxies. Our models suggest that a disk model represents the data poorly. Instead, we argue that the observations are consistent with a tidal bridge between the two nuclei. We also observe high velocity redshifted gas that is not captured by the model. These findings shed light on small-scale processes that can affect galaxy evolution and the corresponding star formation.
INTRODUCTION
NGC 6240 (Wright et al. (1984); Thronson et al. (1990)) is a unique galaxy in the local universe (z = 0.02448) in the midst of a major merger event (Fried & Shulz 1983) that is triggering high star formation rates (Genzel et al. (1998); Tecza et al. (2000)) and active galactic nuclei (AGN) activity in its two progenitor nuclei (Vignatti et al. 1999), separated by a projected distance of around 1 kpc. This activity results in a far-infrared (FIR) luminosity L FIR ≈ 10 11.8 L (Sanders et al. (1988); Thronson et al. (1990); Sanders & Mirabel (1996)) that classifies it as a luminous infrared galaxy (LIRG), just below the threshold that would classify it as an ultra-luminous infrared galaxy (ULIRG). Its luminosity is expected to cross this threshold when a second starburst is triggered during final coalescence (Engel et al. 2010). As such, NGC 6240 presents an excellent opportunity to study in fine detail the processes that power ULIRG activity.
The nuclear region (<1 kpc) of LIRGs and ULIRGs is a critical location to study as it is often the location of the highest star formation rates and the origin of many stellar feedback processes, such as stellar winds and AGN outflows. Correspondingly, in the most luminous LIRGs (L IR > 6×10 11 L ) the majority of the mid-IR emission originates from this central kpc region (Alonso-Herrero 2013). The dynamics in this central region are complicated by the galaxy merger interactions that trigger the processes that lead to the high IR luminosity of most LIRGs (Lonsdale, Farrah & Smith 2006). The nuclear molecular gas dynamics of NGC 6240 are no exception, with a concentration of molecular gas that peaks in emission between the two nuclei while the stars and dust are found to be concentrated around the two nuclei (Tacconi et al. (1999); Tecza et al. (2000); Engel et al. (2010)).
The molecular gas dynamics of NGC 6240's nuclear region have been studied and modeled for decades without consensus. Tacconi et al. (1999) observed a velocity gradient in CO J = 2 − 1 in the nuclear region and modeled it as a rotating disk. A similar disk model was used to describe the motions of HCN by Scoville et al. (2014). This inter-nuclear disk model has both been used to support other authors' observations of molecular gas (e.g. Iono et al. (2007) in CO J = 3 − 2 and HCO + (4 − 3)) and been claimed to be unphysical in the context of other observations (e.g. Gerssen et al. (2004) in Hα+[N II]). Alternate geometries have been proposed to explain this central molecular gas, including a tidal bridge connecting the two nuclei (Engel et al. 2010) and the origin site for a warm molecular outflow (Cicone et al. 2018). Even more recently, Treister et al. (2020) present observations of the nuclear region in CO J = 2 − 1 in unprecedented detail with angular resolutions of 0.03″. They find the central region to be a clumpy concentration of gas dominated by a high velocity outflow and a gas bridge connecting the two nuclei and interacting with the stellar disk kinematics.
With the lack of consensus on the geometry of the inter-nuclear molecular gas, updated detailed modeling of the central nuclear region using multiple new line transition observations in high resolution is needed. New telescopes have allowed the nuclear region of NGC 6240 to be observed in unprecedented detail and have enabled this modeling. In this paper we use high resolution CO J = 6 − 5 and CO J = 3 − 2 observations from the Atacama Large Millimeter Array (ALMA) to model the velocity profile and density distribution of the central molecular gas using the Line Modeling Engine (LIME) (Brinch & Hogerheijde 2010). These higher-J transitions trace shock excited warm molecular gas that was detected but unresolved in observations by Herschel (see, e.g. Kamenetzky et al. (2014)). LIME produces emission line profiles based on three-dimensional velocity and density distributions provided to the modeling code. We alter these three-dimensional distributions to match the modeled to observed emission. We find multiple fiducial models, with the model whose emission most closely matches observations suggesting the central molecular gas is flat and pancake-like with a high velocity dispersion compared to its rotational velocity, suggesting that it is not a self-gravitating, rotating disk but instead a transient tidal bridge connecting the two nuclei.
The residuals from the fiducial model definitively show high velocity gas (> 300 km/s) that is not associated with the nuclear pancake-like concentration of gas. This high-velocity gas exists in other groups' observations and is potentially associated with outflows postulated therein (Cicone et al. (2018); Müller-Sánchez et al. (2018); Treister et al. (2020)). The study of the high velocity gas is outside the scope of this work, which aims to describe the properties of the majority of the nuclear molecular gas. The presented observations also reveal extended emission features, some of which correspond to previously studied filamentary structures, and others of which are new to these observations. Section 2 presents the ALMA observations of CO J = 3 − 2 and J = 6 − 5 and continuum observations at 345 and 678 GHz. We analyze these observations in Section 3 and calculate the mass of the dust from the continuum emission, highlight observed extended molecular gas emission features, and explore the velocity structure of the gas. Section 4 presents a non-local thermodynamic equilibrium model created using LIME that fits the nuclear molecular gas's velocity and density distributions. We find that the molecular gas between the two nuclei is unlikely to be a self-gravitating disk, but could instead be a tidal bridge in light of the model findings.
OBSERVATIONS
Observations of CO J = 3 − 2 and J = 6 − 5 were completed using ALMA with baselines of 1.6 km and 460 m, respectively. These are the first observations of the nuclear region of NGC 6240 in CO J = 6 − 5 and the highest resolution observations to date in CO J = 3 − 2. The CO J = 3 − 2 observations were completed during Cycle 2 for project number 2013.1.00813.S. The CO J = 6−5 observation was completed for project number 2015.1.00658.S during Cycle 3. The longest baselines, full-width half-maxima (FWHM) of the beams, channel widths, reported channel root mean squared (RMS), central frequency of continuum observations, bandwidth (BW) of continuum observations, and RMS of continuum observations are reported in Table 1. For the integrated moment 0 maps, σ integrated is calculated using line channel RMS values σ line channel from Table 1. The RMS in the moment map is calculated using σ integrated = √ n chan σ line channel where n chan is the number of channels included in the integration.
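As a small illustration of the noise propagation quoted above, the following sketch applies σ_integrated = √(n_chan) σ_line channel; the per-channel RMS and channel count are placeholders, not the values in Table 1.

```python
import numpy as np

# Placeholder values standing in for Table 1 entries.
sigma_line_channel = 1.5e-3          # Jy/beam, per-channel RMS (assumed)
n_chan = 60                          # channels entering the integration (assumed)

sigma_integrated = np.sqrt(n_chan) * sigma_line_channel
```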
To check the reliability of the provided reduced data products, we re-imaged the data using the Common Astronomy Software Applications (CASA) software package and the National Radio Astronomy Observatory (NRAO) provided imaging script. Re-imaging was completed with natural, uniform, and Briggs weighting schemes and user-created masks. For CO J = 3 − 2 no new structure emerged in the molecular gas when re-imaging the data, nor could we improve upon the noise. Therefore, we deemed the NRAO provided data products sufficient for analyses of the CO J = 3 − 2 molecular gas.
Re-imaging was required for the CO J = 6 − 5 and both continuum observations due to contamination of the continuum maps by molecular gas at velocities highly redshifted with respect to systemic (∼ 400 − 740 km/s). This CO line contamination created a false continuum source between the two nuclei with the same peak flux density as the southern nucleus. Upon re-imaging, this source disappeared. We use Briggs weighting with robust parameter 0.5 for the re-imaged maps to match the weighting scheme used for the NRAO-provided molecular gas data products.
Below, we introduce all observations with further details of all figures discussed in the following sections. At the distance of NGC 6240, 1″ corresponds to 500 pc of projected distance. AGN locations from Hagiwara et al. (2011) are included as crosses in the figures.
The two continuum maps are shown in Figure 1, revealing continuum emission in the vicinity of the nuclei as observed by previous authors (e.g. Scoville et al. (2014)). The concentration around the southern nucleus is much brighter than the northern concentration, with flux densities reported in Table 2. In detail, the morphologies appear to be different because of differing synthesized beams and signal-to-noise ratios in the two maps; however, there are also differences that arise from real structure. In the south, the 678 GHz emission has a peak on the 345 GHz nucleus and in addition a stronger emission clump, ∼70% higher intensity, ∼0.4″ north of the southern nucleus and collinear with the two nuclei. The bright clump was seen also by Scoville et al. (2014). This fidelity of continuum observations lends confidence to the associated line observations presented in this paper. The differences between the 678 and 345 GHz observations likely derive from a gradient of dust temperature and optical depth along the axis between the two nuclei. The 678 GHz source may arise in a hot spot on the axis between the two nuclei or in a local fluctuation in the optical depth; it should be observed further. The potential temperature and optical depth gradient, with temperature increasing to the north, is supported by the CO J = 6 − 5 moment 0 centroid corresponding better than the CO J = 3 − 2 moment 0 centroid to the 678 GHz dust emission.
Moments 0, 1, and 2 (integrated flux density, average velocity, and velocity dispersion) for CO J = 3 − 2 are plotted in Figure 2 with the 345 GHz continuum contours included. Similarly, Figure 3 shows the moments of CO J = 6 − 5 with the 678 GHz continuum contours. Observations in both J = 3 − 2 and J = 6 − 5 show a concentration of molecular gas between the two nuclei, as observed previously in other observations of the molecular gas (e.g. Tacconi et al. (1999), Scoville et al. (2014), Treister et al. (2020)). Also similar to previous observations, the CO moment 1 maps show a velocity gradient from highly redshifted with respect to systemic (<v> ∼ 350 to 400 km/s) to blueshifted with respect to systemic (<v> ∼ -100 to -150 km/s) along a position angle of approximately 34°. Extended structure, including two dim features of molecular gas to the SE and SW of the southern nucleus, is observed in the CO J = 3 − 2 maps; these features have also been observed in other wavelength bands, such as H 2 (Max et al. 2005), Fe XXV, and Hα (Wang et al. 2014), compared in Figure 7. Although these lines trace different components of the ISM (molecular, shocked, and ionized, respectively), they are all associated with star formation activity. We also observe new extended emission to the north in the CO J = 3 − 2 observation.
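For orientation, moment maps of this kind can be computed from a masked spectral cube as in the following minimal sketch; the 5σ channel cut follows the masking described for the figures, while the array shapes and unit handling are illustrative assumptions rather than the CASA-based procedure actually used.

```python
import numpy as np

def moment_maps(cube, velocities, sigma_channel, dv):
    """Moment 0, 1, 2 from a cube of shape (n_chan, ny, nx).

    cube        : channel maps in Jy/beam
    velocities  : per-channel radio velocity in km/s
    sigma_channel : per-channel RMS used for the 5-sigma mask
    dv          : channel width in km/s
    """
    masked = np.where(cube > 5.0 * sigma_channel, cube, 0.0)
    weight = masked.sum(axis=0)
    m0 = weight * dv                                         # Jy/beam km/s
    with np.errstate(invalid="ignore", divide="ignore"):
        m1 = (masked * velocities[:, None, None]).sum(axis=0) / weight
        m2 = np.sqrt((masked * (velocities[:, None, None] - m1) ** 2).sum(axis=0)
                     / weight)
    return m0, m1, m2
```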
Channel maps for the CO J = 3 − 2 and CO J = 6 − 5 observations are shown in Figure 4 and Figure 5, respectively. Radio velocities of the observations are calculated by the CASA software relative to the central observed frequencies, 345.796 GHz for CO J = 3 − 2 and 692.2 GHz for CO J = 6 − 5. The systemic velocity of NGC 6240 is taken as the velocity corresponding to the peak flux density of the galaxy-integrated line profile from the observations of CO J = 3 − 2 and CO J = 6 − 5: 7180 ± 20 km/s for CO J = 3 − 2 and 7150 ± 20 km/s for CO J = 6 − 5.
The velocity structure of the inter-nuclear gas is most apparent in the higher sensitivity CO J = 3 − 2 channel map, showing emission at ∼ -500 km/s with respect to systemic near the southern nucleus and emission at ∼ 650 km/s with respect to systemic located between the two nuclei. The CO J = 6 − 5 channel map shows a similar velocity structure to that of the CO J = 3 − 2. This velocity structure is best resolved in the nuclear region between the two AGN, where the gas emission is brightest.
3. CONTINUUM AND EXTENDED MOLECULAR GAS EMISSION ANALYSIS
Mass from Continuum
To calculate the mass of the dust M_d from the continuum observations we use M_d = S_ν D_L^2 / (κ_ν B_ν(T)) (Equation 1; Casey 2012), where S_ν is the flux density in the continuum frequency band, D_L is the luminosity distance of 108 Mpc, κ_ν is the dust mass opacity coefficient, and B_ν(T) is the blackbody emission in the continuum frequency band for a dust temperature T. For this calculation, we use the 345 GHz continuum as the measure of S_ν because it should have a lower optical depth than the 678 GHz continuum observation. The total S_345GHz for this observation is 27 mJy, 18% of the galaxy-integrated flux density of 150 mJy measured at 850 µm by SCUBA (Klaas et al. 2001). This flux density recovery is comparable to Scoville et al. (2014), who measured 18-24 mJy at 340 GHz for a comparable beam size and sensitivity with ALMA in Cycle 0. The SCUBA beam is as large as our primary beam of the ALMA observations, with a diameter of 15″ (Klaas et al. 2001), and as such we can expect that their measurement includes extended emission that is lost in our high-resolution observations. Therefore we expect our value of S_ν to be lower than that measured with SCUBA. For B_ν(T) we choose a dust temperature of 56 K, the dust temperature fit in Kamenetzky et al. (2014) using a greybody fit to Herschel-SPIRE, IRAS, Planck, SCUBA, and ISO photometry for NGC 6240. This dust temperature is a galaxy-averaged property, since Kamenetzky et al. (2014) used observations with beam FWHM between ∼ 17″ and 45″, at least 100 times larger than the ALMA observations. It is likely that the dust temperature in NGC 6240's nuclear region is higher than the galaxy-averaged temperature due to the influences of concentrated star formation and AGN luminosity. However, we do not have flux density measurements across sufficient wavelengths in this central region to independently calculate the dust temperature. From James et al. (2002) we use κ_850 = 0.07 m² kg⁻¹ and κ_ν ∝ ν² to find κ_870 = 0.067 m² kg⁻¹.
[Figure 2 caption fragment — Upper right: the same, zoomed to the nuclear region; Lower left: moment 1 and Lower right: moment 2 in the nuclear region, with channels below 5 σ_line channel masked prior to the moment calculation; the beam FWHM contour is plotted at lower right of each image, 1″ corresponds to 500 pc, east is left, north is up; velocities are relative to the average radio velocity of CO J = 3 − 2, 7180 km/s, computed from the rest frequency 345.796 GHz.]
[Figure 3 caption fragment — 678 GHz continuum contours at 5, 10, and 15 σ_cont; channels below 5σ are masked out of the integration; Upper right: moment 1 and Lower panel: moment 2 of CO J = 6 − 5, with pixels below 5 σ_line channel masked prior to the moment calculation; the beam FWHM contour is plotted at lower right of each image, 1″ corresponds to 500 pc, east is left, north is up; velocities are relative to the average velocity of CO J = 6 − 5, 7150 km/s, computed from the rest frequency 691.473 GHz.]
The calculated masses for the regions outlined in Figure 6 are tabulated in Table 2. The total mass of the dust is calculated to be 1.2×10 7 M . As a check on this total dust mass we compare to the galaxy-integrated dust mass of 5×10 7 M found in Kamenetzky et al. (2014) from their dust SED model described earlier in this section. Our calculated dust mass is 24% of this value, consistent with filtered flux from the ALMA observations.
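As a sanity check on the Equation 1 estimate above, the following hedged sketch evaluates M_d = S_ν D_L² / (κ_ν B_ν(T)) with the values quoted in the text (27 mJy at 345 GHz, D_L = 108 Mpc, κ_870 = 0.067 m² kg⁻¹, T = 56 K); it reproduces a dust mass of order 10^7 solar masses, but it is not the authors' actual calculation.

```python
import numpy as np
from astropy import units as u
from astropy import constants as const

def planck_bnu(nu, temperature):
    """Planck function B_nu(T) (per steradian omitted, as is conventional)."""
    x = (const.h * nu / (const.k_B * temperature)).to_value(u.dimensionless_unscaled)
    return 2 * const.h * nu**3 / const.c**2 / np.expm1(x)

S_nu = 27 * u.mJy                    # 345 GHz continuum flux density (from text)
D_L = 108 * u.Mpc                    # luminosity distance (from text)
kappa_nu = 0.067 * u.m**2 / u.kg     # kappa_870 from James et al. (2002)
T_dust = 56 * u.K                    # Kamenetzky et al. (2014) greybody fit
nu = 345 * u.GHz

M_dust = (S_nu * D_L**2 / (kappa_nu * planck_bnu(nu, T_dust))).to(u.Msun)
# ~1e7 solar masses, consistent with the 1.2e7 quoted in the text.
```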
Using a gas-to-dust mass ratio of 100 we can convert the dust masses from Table 2 to a total gas mass, 1.2×10 9 M . The mass values derived from the continuum emission are similar to the values derived from the sub-mm continuum (235 GHz) in Treister et al. (2020) of 2.8×10 9 M . The gas mass is converted to a column density N_H2 by dividing the gas mass by the mass of molecular hydrogen and the projected size of the region on the sky, and multiplying by the fraction of the total mass assumed to be molecular hydrogen (0.73, assuming solar metallicity; Hollenbach & Thronson 1987). The average column density for the entire nuclear region is 3×10 22 cm −2 , while the concentrations around the two nuclei both have higher column densities of 1.5×10 23 cm −2 . This value is consistent with the findings of Tacconi et al. (1999) of N(H 2 ) ∼ 1-2×10 23 cm −2 .
Table 2. Integrated continuum flux densities S_ν, derived dust masses from the 345 GHz continuum calculated using Equation 1, gas masses assuming a gas-to-dust mass ratio of 100, and associated column densities of the regions outlined in Figure 6.
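The gas-mass-to-column-density conversion described above can be sketched as follows; the projected region size is a hypothetical placeholder (a circular aperture of 750 pc radius), not the exact regions outlined in Figure 6.

```python
import numpy as np
from astropy import units as u
from astropy import constants as const

gas_mass = 1.2e9 * u.Msun           # M_dust x 100 (from text)
f_H2 = 0.73                         # molecular hydrogen mass fraction (from text)
m_H2 = 2.0 * const.m_p              # approximate mass of an H2 molecule
area = np.pi * (750 * u.pc) ** 2    # hypothetical circular aperture (assumed)

N_H2 = (gas_mass * f_H2 / (m_H2 * area)).to(u.cm ** -2)
# Of order a few x 1e22 cm^-2 for this illustrative aperture.
```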
Extended Molecular Gas Emission
A dim trail of gas extending directly north of the central concentration, which we call the "Northern Finger", is observed in the higher-sensitivity CO J = 3 − 2 observation but not in CO J = 6 − 5 or either continuum observation. It is labeled in the CO J = 3 − 2 moment 0 map in Figure 2. It is not spatially coincident with observed H 2 V = 1 − 0 S(1) and S(5) emission (Max et al. (2005), their Figure 12a). It dimly appears in their Figure 2a. The Finger's average velocity is 44 km/s with a FWHM of ∼ 300 km/s, measured with an integrated spectrum of the emission in this region. The low average velocity suggests this gas is unlikely to be an outflow. A nuclear inflow would cause σ v to be elevated near the nuclei, for which there is no conclusive evidence. The low velocity is consistent with a tidal tail or bridge, but the high FWHM does not conform to this idea. The finger extends from the northern nucleus towards the northern dust lane of the galaxy, which is interpreted as a tidal tail by other authors (Gerssen et al. (2004); Yun et al. (2001)). The top edge of the finger is spatially coincident with the eastern edge of the dust lane, suggesting a possible correspondence. The dust lane is much wider and extends much farther beyond the extent of the finger - approximately 7″ wide (roughly east/west) and over 20″ long (roughly north/south), beyond the extent of our observations. There is also extended structure surrounding the two nuclei that aligns with H 2 V = 1 − 0 S(1) and S(5) concentrations presented in Max et al. (2005) (their Figures 10 and 11, reproduced in our Figure 7): a molecular concentration to the NE of the northern nucleus, a concentration to the SW of the southern nucleus, and dim, diffuse emission extending to the SE of the southern nucleus. The faint arms of H 2 extending SE and SW observed in Max et al. (2005) align with these faint arms extending SE and SW from the central concentration and are also bright in Fe XXV and Hα (Max et al. 2005; Wang et al. 2014). Wang et al. (2014) postulates that these faint molecular arms are molecular gas entrapped and shocked by the superwind caused by a vigorous starburst in the southern nucleus. Similarly, Max et al. (2005) argue that these arms are a thin layer of gas at the edges of soft X-ray bubbles observed in Komossa et al. (2003), where a starburst driven wind is "driving shocks or ionization fronts into the interstellar medium and surrounding molecular clouds". We therefore conclude that the CO arms to the SE and SW of the southern nucleus are likely associated with the same starburst-driven superwind shocked gas.
MOLECULAR GAS BETWEEN THE NUCLEI: A TEST MODEL MOTIVATED BY HISTORY AND OBSERVATIONS
In Tacconi et al. (1999) and in works since (e.g. Iono et al. (2007); Scoville et al. (2014)), the velocity gradient observed in the central molecular concentration was interpreted and modeled as a disk of gas between the two nuclei. The presence or absence of a molecular disk in the nuclear region of NGC 6240 is the basis for other authors' arguments regarding important stellar feedback processes, such as the outflow studied in Cicone et al. (2018). The disk model has been claimed to be unphysical in other studies of molecular gas in NGC 6240 (e.g. Gerssen et al. (2004)), but no further modeling has definitively proven or disproven the internuclear disk model. More generally, the nuclear regions of (U)LIRGs are critical to understand as they are often the location of the highest star formation rates and the origin of many feedback processes that affect star formation, such as stellar winds and AGN outflows (Alonso-Herrero 2013).
For these reasons, we model this nuclear region as a concentration of gas with exponentially decaying velocity and density profiles from the central peak, using the observations in CO J = 3 − 2 and J = 6 − 5, to explore in detail the validity of this disk interpretation. The modeled gas concentration is not constrained to a self-gravitating disk geometry; rather, it is a general concentration whose resultant velocity and density profiles can be compared to the disk interpretation. Our fiducial model finds the molecular gas concentration between the two nuclei is likely flat and pancake-like, with a high velocity dispersion that could cause it to dissipate in as little as ∼ 0.3 Myr. Its geometry suggests this transient structure could be a tidal bridge connecting the two nuclei.
Code for Generating the Test Model: the Line Modeling Engine (LIME)
To generate the molecular gas model that tests the validity of the disk interpretation, we use the Line Modeling Engine (LIME) (Brinch & Hogerheijde 2010). LIME uses non-local thermodynamic equilibrium radiative transfer and molecular rotational energy level population calculations to predict the line and continuum emission from molecular clouds. The user defines a 3-D model describing the density distribution of hydrogen molecules of the source, then assigns 3-D temperature and velocity distributions to those molecules. The gas-to-dust mass ratio, the abundance of CO relative to hydrogen, and the 3-D distribution of the dust temperature are also set by the user. Finally, the user defines the distance to the source to obtain the proper distance scale per pixel and observed flux density.
The code does not require that the source is in local thermodynamic equilibrium, and instead solves for population levels iteratively until the model populations have converged at all grid points. After convergence, LIME ray-traces photons to obtain an image of the modeled source at a user-defined observing angle. This simulation methodology allows great flexibility in both geometry and kinematics of the simulated source and minimizes the assumptions made about the source to generate the observed line profiles.
To compare the observed data to the simulated model images, we use the CASA package to smooth and continuum-subtract the simulated images. The resulting images have the same beam size as the observations. We must also account for the redshift of the galaxy, which is not included in the LIME simulations. To do this, we measure the central velocity of the galaxy-integrated line profile from the observations of CO J = 3 − 2 and CO J = 6 − 5: 7180 ± 20 km/s for CO J = 3 − 2 and 7150 ± 20 km/s for CO J = 6 − 5. These velocities are subtracted from the observed moment 1 maps in Figures 2 and 3, which we can then compare to the simulations, which are computed at zero redshift.
Parameters of the Test Model
Using LIME, we model the 3-D central gas concentration with exponentially decaying density and velocity profiles. Our 3-D model for the disk density follows n_H2(r, z) = n_H2,0 exp(−r/r_sh) exp(−|z|/z_sh) (Equation 2), where r = √(x² + y²), r_sh is the radial scale height of the density, z_sh is the vertical scale height, and n_H2,0 is the central H 2 number density of the gas concentration. The total gas mass within the extent of our ALMA observations is set to a simulation parameter g_mass, from which n_H2,0 is calculated based on the simulation geometry. We also allow an overall density asymmetry by multiplying the density on one half of the gas concentration (x > 0) by a model parameter a_n. For the CO J = 3 − 2 model we also allow the extent of the gas to differ for x > 0 and x < 0 by varying the scale height on either side of the modeled concentration.
We model the velocity with v(r) = v_circ (1 − exp(−r/r_circ)) (Equation 3), where v_circ is the circular velocity and r_circ is the radial scale length of the circular velocity. We use the exponential profile because, aside from the high velocity gas component, we find no sufficient justification for deviation from a rotation curve that rapidly becomes flat with radius (which would be appropriate in the case of a self-gravitating disk). In Equations 2 and 3, all parameters except r and z are fit parameters that can be tuned to match the resultant modeled line profiles to the observed data. In addition to those parameters, the LIME model requires a gas inclination, position angle, dust temperature T_dust, turbulent velocity v_turb, gas temperature T, and gas-to-dust mass ratio (set to 100). All available parameters, fiducial fit results, and acceptable ranges of fiducial fit parameters can be found in Table 3 and are discussed in Section 4.3.
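A minimal sketch of the model profiles of Equations 2 and 3 is given below; the parameter values are placeholders, not the fiducial values of Table 3, and the asymmetry factor a_n is applied only for x > 0, as described in the text.

```python
import numpy as np

def density(x, y, z, n0=1.0e3, r_sh=390.0, z_sh=20.0, a_n=1.0):
    """Equation 2: exponentially decaying H2 density (cm^-3), coordinates in pc."""
    r = np.hypot(x, y)
    n = n0 * np.exp(-r / r_sh) * np.exp(-np.abs(z) / z_sh)
    return np.where(np.asarray(x) > 0, a_n * n, n)   # optional x > 0 asymmetry

def circular_velocity(x, y, v_circ=100.0, r_circ=100.0):
    """Equation 3: rotation curve that rapidly becomes flat with radius (km/s)."""
    r = np.hypot(x, y)
    return v_circ * (1.0 - np.exp(-r / r_circ))
```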
The position angle of the gas concentration is modeled as 34°, the angle between the maximum of the highly blueshifted gas and that of the highly redshifted gas from Figure 8. This is close to the position angle used by Tacconi et al. (1999) of 40° for their disk models. Thirty-four degrees was chosen following methodologies laid out in Cicone et al. (2018) and Tacconi et al. (1999). Following the methodology presented in Cicone et al. (2018), we separate the observations into "quiescent" [-200, 250] km/s gas and high velocity gas. Figure 8 shows the contours of the observed highly redshifted (CO J = 3 − 2: [250, 700] km/s, CO J = 6 − 5: [250, 740] km/s) and highly blueshifted (CO J = 3 − 2: [-500, -200] km/s, CO J = 6 − 5: [-420, -200] km/s) gas in panels a and b. The axis between the centroids of these two velocity extremes (34°) is at a steeper angle than that between the two nuclei (∼19°). The contours of the moderate velocity, or "quiescent", gas with velocities of [-200, 250] km/s are shown in panels c and d. The velocity gradient observed in moment 1 for CO J = 3 − 2 and J = 6 − 5 remains clearly present in the high velocity gas, with distinct centroids of emission at a PA of ≈ 34°. The lower velocity gas has much more overlap between the redshifted and blueshifted components, without as clear a separation between centroids of emission. This lower velocity gas aligns approximately with the semi-major axis of the central gas concentration at a PA of ≈ 0°, along the north/south axis. Due to the distinct gradient present in the high velocity gas, which more closely resembles a traditional disk, we choose the position angle based on the highest velocity gas. However, we find during modeling that the position angle does not influence the fit parameters outside of acceptable ranges already incorporated into the model.
The center of the modeled gas concentration is placed at the maximum of the observed integrated line emission, following the methodology of Tacconi et al. (1999). If the gas were a simple disk, the maximum of the line emission would be expected to align closely with the turnover point of the velocity gradient. As noted in Tacconi et al. (1999) and confirmed in this paper's observations, this is not the case. This offset could be caused by a number of phenomena, including that the gas is not a simple disk. The offset could also be explained by a gas concentration with a highly asymmetric mass distribution leading to an offset in the maximum line emission. Optical depth effects could also displace the velocity gradient turnover point from the maximum of the line emission.
Test Model Fitting Strategy and Fiducial Fits
In NGC 6240 the CO J = 6 − 5 emission has been found to be dominated by relatively hot gas while the CO J = 3 − 2 emission is dominated by cool/warm gas (Kamenetzky et al. 2014). The upper levels of the CO J = 6 − 5 and J = 3 − 2 lines are at very different energies, with the J = 6 level at ∼116 K and the J = 3 level at ∼33 K; it is clear that the J = 6 − 5 line will arise in warmer gas than does the J = 3 − 2. The true excitation picture is of course more complicated than a simple model of 'warm' and 'cool' components. The J = 3 − 2 line is to some extent sensitive to warm and hot gas: it is observed in quite hot regions (e.g. Consiglio et al. (2017)) and in starburst galaxies is usually in excess of what the true 'cold' gas component predicts (Kamenetzky et al. 2014). But although there is probably some overlap between the J = 6 − 5 and J = 3 − 2 emitting gas, in bulk they preferentially probe two different temperature regimes.
We therefore generate separate models for a 'warm' component matching the J = 6−5 profile and a 'cool' component matching the J = 3 − 2 line. Models of the two temperature components have the same velocity structure, gas inclination, dust temperature and position angle. The gas mass, density and temperature distribution are different for the two models. In effect we require the cool and warm gas to have the same morphology, but permit the gas' extent and density to vary within that morphology.
Due to the simpler nature of the CO J = 6 − 5 line profiles compared to those of CO J = 3 − 2, we first use the CO J = 6 − 5 observations to constrain the velocity structure, inclination and position angle, and dust temperature. We then apply the fiducial fit parameters from the CO J = 6 − 5 model to find the gas density distribution and gas temperature of CO J = 3 − 2.
The model parameters are found by eye due to the complexity of the nuclear gas structure and dynamics, the large parameter space, and the number of fit parameters available to the model. The parameters are chosen to minimize the apparent differences between the modeled line profiles and the observed line profiles extracted at points separated by a beam FWHM along the major and minor axes of the modeled gas. Once a fiducial fit is found, ranges on acceptable fit parameters are found by varying one parameter at a time until the model's line profiles are no longer acceptably close to the observed line profiles. "Acceptable" fits are determined by a combination of factors: first, the modeled line profiles on average cannot appear to differ from the observed line profiles by more than ∼ 25% in height, width, or central velocity. The fits focus on the brightest region along the gas' major axis. Second, the shape of the modeled line profiles should approximately match that of the observed line profiles.
Many of the parameters can be well constrained by the observed line profiles, for example r_sh by the rate of decay of the line profile heights with radius and v_turb by their widths. The singly peaked profiles and lack of gradient in their central velocities constrain the inclination to nearly face-on models. Inclinations between 45° and 135° are strongly disfavored because they showed doubly peaked profiles. Other parameters are more difficult to constrain. v_circ and r_circ are the least constrained because they are anti-correlated (see Equation 3), leading to large fit ranges as they trade off against each other. However, v_circ is not wholly unconstrained. It cannot be so high as to shift the line centers, but must be high enough to create the singly peaked profiles. The temperature T is constrained by the brightness of the lines, with higher temperatures (and therefore higher CO rotational excitation) corresponding to brighter lines. All available parameters, fit values, and acceptable ranges of model parameters are reported in Table 3. The fiducial fit model whose emission profiles most closely match the observed profiles is a nearly face-on, pancake-like distribution of gas, with a scale height z_sh of 20-60 pc and a radial scale length of 390 pc at an inclination of 22.5° from face-on. The line profiles extracted from these models along the semi-major axis of the gas concentration for CO J = 6 − 5 and J = 3 − 2 are compared to those from the data in Figure 9.
Table 3. Fiducial LIME model values for CO J = 3 − 2 and J = 6 − 5 (column 2), as well as the ranges in parameter space that resulted in acceptable models (column 3). Velocity structure, gas inclination, and dust temperature are shared between the two models while gas mass, density distribution, and gas temperature vary between the two. Gas masses are calculated by integrating the modeled density over the observed extent of the gas (the high signal-to-noise ratio areas in Figures 2 and 3).
A wide range of parameter space resulted in fits of similar quality to our fiducial model. As an example, we include a thicker-disk model with the parameters in Table 4, with the corresponding modeled line profiles in Figure 10. This model is thicker than the models presented in Table 3, with a vertical scale height z_sh of 60 pc instead of 20 pc, a gas concentration that is more horizontally concentrated (r_sh = 270 pc instead of 390 pc), and with a lower ratio of turbulent to circular velocity (v_turb/v_circ = 0.44 instead of 1.4). While this thicker model's parameters are all within the thin disk's acceptable ranges and still capture the majority of the observed emission (line profile width, intensity, and in some cases line profile shape), the line profiles are more doubly peaked and less representative of the observed line profiles.
Table 4. Fiducial LIME model values for the thicker modeled CO J = 6 − 5 gas concentration compared to the thinner model. Parameters not listed are the same as in Table 3. The thick model's line profiles are presented in Figure 10. The thin model's values are taken from Table 3 and the line profiles are presented in Figure 9.
Kamenetzky et al. (2014) used observations (their Table 16) of CO J = 1 − 0 to 13 − 12 to model gas temperatures, H 2 column densities, gas masses, and H 2 number densities for NGC 6240. We find our CO J = 3 − 2 model has a temperature and mass consistent with their calculations. The CO J = 6 − 5 model is on the high end of both temperature and mass, but the total modeled mass of the gas is consistent with other findings for NGC 6240.
Fiducial Model Parameters in Context
Our models indicate the CO J = 6 − 5 emission is dominated by relatively hot (T = 2,000 K) gas, and the CO J = 3 − 2 emission by warm (T = 600 K) gas. Kamenetzky et al. (2014) modeled the CO emission as a combination of cold (16 K with 1σ range [5, 50] K) and hot (1,260 K with 1σ range [790, 2,000] K) gas. They found the CO J = 3 − 2 emission is dominated by a mixture of both cold and hot gas, while the CO J = 6 − 5 emission is produced primarily by the hot gas component. Our CO J = 3 − 2 fiducial model temperature of 600 K lies between the temperatures of Kamenetzky et al. (2014)'s cool and hot components, supporting the argument that the CO J = 3 − 2 emission is created by a combination of gas temperatures. Also in line with their model, our model finds the CO J = 6 − 5 emission to be dominated by hot (2,000 K) gas, at the upper bound of their modeled 1σ range on gas temperature. Our model temperatures may be on the higher end due to the much smaller ALMA beam sizes that probe the nuclear region while Kamenetzky et al. (2014)'s models incorporate Herschel data that combines emission from both nuclear and extended regions. The nuclear region is likely to be much warmer than the extended gas due to star formation and AGN activity concentrated in the central kpcs. The high gas temperatures are supported by the presence of prominent H 2 1 − 0S(1) emission that requires temperatures > 1,000 K to be excited (Max et al. (2005); Meijerink et al. (2013)).
The model fit dust temperature range of [20, 100] K contains Kamenetzky et al. (2014)'s best fit dust temperature of 56 K that we used in our continuum mass calculations (Section 3.1). Kamenetzky et al. (2014) also model the masses of these cool and warm components, finding a cool gas mass of 2×10 9 ± 5×10 8 M and a warm gas mass of 4×10 8 ± 10 8 M . Our total modeled mass for CO J = 3 − 2 is 7.6×10 8 M , higher than Kamenetzky et al. (2014)'s modeled warm gas mass but lower than their cool gas mass. This is again consistent with the theory that CO J = 3 − 2 is a mixture of cool and warm gas components. The range of CO J = 6 − 5 gas mass modeled by LIME spans a factor of two (Table 3) and contains Kamenetzky et al. (2014)'s masses. However, our modeled hot gas mass (captured by simulations of CO J = 6 − 5) is 6−7×10 8 M , higher than their best-fit warm gas mass by around 50%. Despite the high mass of the hot gas in the model, the total modeled mass of ∼1.4×10 9 M (CO J = 3 − 2 plus CO J = 6 − 5) is close to the nuclear region's mass calculated from the continuum in Section 3.1 (1.2×10 9 M , region 1, Figure 6). This presents a possible conundrum: the dust continuum observations in Section 3.1 suggest that much of the dust emission is not captured by these observations, while the modeled gas masses do appear to capture the majority of the gas mass. It is possible that this discrepancy comes down to the high L CO /L F IR ratio in NGC 6240, around 10 times higher than other nearby galaxies (Kamenetzky et al. 2014). The CO emission could be dominated by the shocked gas observed with ALMA, while there is more dust and cold (less luminous) gas outside of the ALMA observations that would be included in Herschel observations. In other words, due to the concentrated nuclear excitation of CO in NGC 6240, the ALMA observations efficiently captured CO emission, but not the extended dust emission.
Transience and Stability of Fiducial Models
In this section we examine the modeled velocity dispersion and circular velocities in the context of the model geometry, finding that the model suggests a transient structure that is stable against collapse.
The ratio of our average rotational velocity to our modeled velocity dispersion is <v>/σ ∼ 0.7 for the thin fiducial models presented in Figure 9. According to Tacconi et al. (1999) this is in the range that indicates a disk that must be geometrically thick. However, the fiducial models are quite thin, with z_sh of only 20 pc and r_sh of 250−390 pc. This aspect ratio is unlikely for a self-gravitating structure with such high velocity dispersion, since the dispersion would have the effect of "puffing up" the disk. It is also possible that with such a thin concentration of gas, the modeled high velocity dispersion could partially be accounted for by shear. That is, the gas concentration could be undergoing tidal disruption in the plane of the sky. The thicker fiducial model presented in Table 4 and Figure 10 has a higher ratio of <v>/σ ∼ 2.3 and a thicker geometry (z_sh = 60 pc, r_sh = 270 pc). In both senses this model is less extreme than the thinner model, though it visibly does not fit the observed line profiles as well. That the thicker model would correspond to a lower velocity dispersion may seem counterintuitive, since one might expect higher velocity dispersions to "puff up" the gas. However, shear in the plane of the sky would artificially inflate the velocity dispersion for thinner and more optically thin gas concentrations.
We can also conduct a back-of-the-envelope calculation to determine the lifetime of the gas concentration given its modeled velocity dispersion and vertical extent. For the thin model, the velocity dispersion is 140 km/s (assuming no contribution from shear) and the vertical extent of the concentration is ∼ 40 pc (two vertical scale heights). Gas moving at 140 km/s would travel this distance after 300,000 years. This is an extremely short timescale in the context of galaxy mergers that occur over timescales of Gyr, suggesting the model represents a highly transient structure. For the thicker model, gas traveling at a speed equal to the velocity dispersion of 110 km/s would travel two scale heights' distance (120 pc) in ∼ 1 Myr, still a short timescale in the context of galaxy mergers. Such short lifetimes are unlikely for a self-gravitating structure.
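The crossing-time estimate above can be reproduced with the following short sketch, using the thin-model numbers quoted in the text (σ = 140 km/s, z_sh = 20 pc); the thicker-model case follows by substituting 110 km/s and 60 pc.

```python
from astropy import units as u

sigma_v = 140 * u.km / u.s          # thin-model velocity dispersion (from text)
thickness = 2 * 20 * u.pc           # two vertical scale heights, z_sh = 20 pc

t_cross = (thickness / sigma_v).to(u.yr)   # ~3e5 yr, as quoted in the text
```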
As a further test of the models in the context of a self-gravitating structure, we can compare the velocity dispersion and circular velocities to the escape velocity from the modeled gas concentration. For this calculation, we use the escape velocity v_esc = √(2GM/r) for the entire gas concentration with M = 1.4×10 9 M . At the radial scale height of the thinner model, r_sh = 390 pc, v_esc is 170 km/s. This value is similar to the thin models' velocity dispersion of 140 km/s, and is less than the circular velocity (100 km/s) added to the modeled dispersion. At the radial scale height of the thicker model, r_sh = 270 pc, v_esc is 210 km/s. This exceeds the thicker model's turbulent velocity of 110 km/s but is less than its circular velocity alone (v_circ = 250 km/s). This means both models' gas velocity can exceed the escape velocity of the gas concentration, indicating it is extremely unlikely this gas is a self-gravitating disk.
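The escape-velocity comparison can likewise be sketched as follows, using M = 1.4×10^9 solar masses and the two radial scale lengths quoted above; it recovers v_esc ≈ 170 and 210 km/s.

```python
import numpy as np
from astropy import units as u
from astropy import constants as const

M = 1.4e9 * u.Msun                        # total modeled gas mass (from text)
for r in (390 * u.pc, 270 * u.pc):        # thin- and thick-model scale lengths
    v_esc = np.sqrt((2 * const.G * M / r).to(u.km**2 / u.s**2))
    print(r, v_esc)                       # ~170 km/s and ~210 km/s
```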
Given the fiducial LIME model, we can also check the stability of the gas concentration by calculating the Toomre parameter Q = σ_r κ / (π G Σ_gas). Here κ = √3 v_max / R is the epicyclic frequency, σ_r is the line-of-sight velocity dispersion, and Σ_gas is the mass surface density of the gas (Toomre 1964). Q for the thinner LIME model is 2.3 for CO J = 6 − 5 and 2.5 for the CO J = 3 − 2 model. Q > 1 means the gas is stable against collapse at this time, and the majority of models within the acceptable parameter ranges fall into this category. These high values of Q are consistent with this highly turbulent system. The majority of the luminosity generated in this central mass concentration is then unlikely to be due to star formation, consistent with other papers that argue for heating from shocks and superwinds originating in starbursts around the nuclei, not in the central region between the two nuclei (e.g. Tecza et al. (2000); Max et al. (2005); Engel et al. (2010)).
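The Toomre criterion quoted above can be sketched as follows; the surface density is a rough placeholder (the CO J = 3 − 2 model mass spread uniformly over a disk of radius r_sh), so the resulting Q is indicative only and not the exact 2.3-2.5 values from the fiducial models.

```python
import numpy as np
from astropy import units as u
from astropy import constants as const

sigma_r = 140 * u.km / u.s               # line-of-sight dispersion (from text)
v_max = 100 * u.km / u.s                 # circular velocity (from text)
R = 390 * u.pc                           # thin-model radial scale length

kappa = np.sqrt(3) * v_max / R           # epicyclic frequency, sqrt(3) v_max / R
Sigma_gas = 7.6e8 * u.Msun / (np.pi * R**2)   # rough placeholder surface density

Q = (sigma_r * kappa / (np.pi * const.G * Sigma_gas)).to(u.dimensionless_unscaled)
# Q of a few, i.e. stable against collapse, consistent with the quoted values.
```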
Model Residuals: What is Captured by the Model
Figures 11 and 12 show the moment 0 and 1 normalized residuals (upper panels) and the moment 1 absolute residual (lower panel) between the ALMA data and the thinner fiducial LIME models for CO J = 6 − 5 and J = 3 − 2 (presented in Table 3 and Figure 9). The normalized fractional residuals are calculated by subtracting the model from the observation and dividing by the observation for each pixel. We present the residuals for the thinner and not the thicker models because they most closely represent the observed line profiles.
The residuals for each moment take a similar shape for both emission lines. The low values of the moment 0 fractional residual in the brightest central region indicate that the test model captures the majority of the line emission in this central region. However, the model misses the northernmost portion of the observed gas concentration, especially apparent in the CO J = 6 − 5 residual. The test model does not accurately describe the <v> data, as evidenced by the large values and amount of structure that remains in the fractional residuals of moment 1 for both CO J = 6 − 5 and J = 3 − 2. The non-normalized residual of <v> (observed -model), shown in the bottom panel of each figure, remains largely unchanged from the observed values. This indicates that the model does not reproduce the velocity structure well at all, and explains why the normalized residuals are all close to one.
The residuals shed light on what components of NGC 6240's molecular gas are captured by the test model, if any. There are aspects of the gas that are clearly not captured, for example the modeled line profiles in Figure 9 miss a shoulder of highly redshifted emission (> 300 km/s) in the northeast plotted therein as dark brown and black line profiles. This emission corresponds to the highly redshifted gas visible in the average velocity maps (∼ 300 km/s in Figure 2 and ∼ 400 km/s in Figure 3) and in the channel maps (up to 665 km/s in Figure 4 and up to 724 km/s in Figure 5). It also corresponds to the highly redshifted gas in Figure 8 with an average velocity of ∼400 km/s when isolated from the quiescent gas, or ∼250 km/s when including the quiescent gas. This gas is dim compared to the majority of the line emission, and as such does not augment the normalized fractional residual moment 0 maps above ∼0 despite not being described by the models.
Despite missing this high velocity gas, the low values for the integrated line emission residuals in the central/southern portion of the central gas concentration indicate that the models capture the majority of the CO emission in this, the brightest and most massive part of the ALMA observations. We calculate that the model captures 96% of the observed emission, a value found by dividing the sum of the modeled CO J = 6 − 5 emission by the sum of the observed emission within the 5σ contour area. Capturing the majority of the emission indicates we accurately model the gas mass and gas temperature, which are degenerate parameters that each increase total emission when increased. Constraints on gas mass and temperature are provided by the modeled line profile shapes, with exceedingly high masses or low temperatures resulting in doubly peaked profiles. Accurate models of mass and temperature do not directly imply accurate models of gas distribution. The other parameters important for this are r_sh, well constrained by the visible extent of the gas, and z_sh. z_sh is somewhat difficult to constrain because the gas is nearly face-on, but thick gas results in doubly peaked profiles, constraining z_sh to small values (tens of parsecs). Density is directly calculated from the gas mass, r_sh, and z_sh. Our constraints on these values and the gas temperature indicate we have successfully modeled the physical distribution and density of the gas. Our models found that this central gas region is pancake-like: quite thin (tens of pc) with respect to its extent (hundreds of pc).
While the fractional residuals of the line emission appear to accurately capture the distribution of the gas, the velocity residuals tell a different story. The normalized fractional velocity residuals (upper right panel of each figure) are composed almost entirely of values close to one, with absolute residuals (lower panel of each figure) close to the average velocities of the observed gas. These large residuals suggest the simple exponentially decaying velocity profile (like that of a disk) does not accurately capture the kinematics of the gas. When taken with the arguments in Section 4.4 that the modeled velocity dispersion indicates a transient structure whose velocities can exceed the escape velocity of the gas concentration, we suggest that the nuclear molecular gas emission is dominated by a thin, pancake-like gas concentration that is not rotating like a disk. This conclusion is also consistent with the findings of Section 4.4: our masses and temperatures fit within current observations, but the high velocity dispersion relative to the circular velocity is unlikely for such a thin geometry if the gas were a self-gravitating disk.
IS THE CENTRAL MOLECULAR GAS A TIDAL BRIDGE?
The models indicate that the gas distribution has a vertical thickness of tens of parsecs with a horizontal extent of hundreds of parsecs. At the location of the brightest molecular gas emission the fractional moment 0 residuals between the observations and the models are close to zero. Capturing the majority of the emission indicates we accurately model the gas mass and gas temperature, which are degenerate parameters that each increase total emission when increased. However, as argued in Section 4.6, the molecular gas kinematics are not well captured by the disk model, as demonstrated by the high fractional and absolute residual values for <v> (moment 1). Additionally, our calculations and discussion in Section 4.5 indicate the gas is unlikely to be a self-gravitating disk, consistent with arguments from Treister et al. (2020), Engel et al. (2010), and Cicone et al. (2018) mentioned in the introduction. The contraindication of a gravitationally stable disk is true for both the thin and thicker fiducial models. Therefore, we must consider other possibilities to explain the molecular gas kinematics. The extended distribution of the model could suggest a tidal bridge between the two nuclei, stretched by tidal forces along the semi-major axis of the observed CO emission at PA = 0°. Other scenarios are possible, such as gas infall and outflow and multiple disks. However, it would not be surprising for there to be tidal disruption of dissipative gas in the center of two merging galaxies that have recently undergone a first pass. Furthermore, NGC 6240 does not exhibit the clear evidence for multiple disks seen in other double-nucleus galaxies, such as Arp 220 (Wheeler et al. 2020). Given that there is already evidence for a tidal bridge between the nuclei, we explore this interpretation. If the gas is indeed a tidal bridge, its low density means it is unlikely to be in virial equilibrium and could result in sub-thermal excitation. The X-factor (the relationship between N(H 2 ) and the CO luminosity) only applies for virialized, denser clouds, and can be radically modified for non-virialized, thin clouds (Tacconi et al. 2008). Physically this can manifest as higher than expected CO line luminosity compared to the observed H 2 and dust luminosity, as well as offsets between the peak of CO line emission and the peak of H 2 and dust emission (Engel et al. 2010; Tacconi et al. 2008). Our observations show an offset between CO and dust emission peaks, and comparison to H 2 (1 − 0) S(1) (Max et al. 2005) in Figure 7 also shows an offset between H 2 and CO emission. This sub-thermal and therefore optically thinner emission could also help to explain the anomalously high L CO /L F IR observed in NGC 6240, around 10 times higher than in other nearby galaxies (Kamenetzky et al. 2014). To check for the possibility of sub-thermal emission we compare the observed density to the critical densities n crit for the observed lines. The modeled average H 2 density is ∼ 1×10 3 cm −3 for both the thicker and thinner fiducial models. The critical density n crit is 3.6×10 4 cm −3 for CO J = 3 − 2 emission and 2.9×10 5 cm −3 for CO J = 6 − 5. n crit for both transitions is 1-2 orders of magnitude higher than the modeled density, indicating that sub-thermal emission is likely in this central region.
The presence of a tidal bridge or ribbon between the two nuclei is also supported by observations of H2 1−0 S(1) and S(5) presented in Max et al. (2005) and recreated in our Figure 7. They observe a ribbon of H2 with a reverse S shape (their Figure 10) extending between the northern and southern nuclei and postulate that it could be material flowing along a bridge connecting the two progenitor galaxy nuclei. They compare this geometry to the tidal bridges predicted by computer simulations (e.g. Barnes & Hernquist (1991); see also Figs. 9 and 10 in Barnes & Hernquist (1996); Figs. 1 and 4 in Barnes (2002)). One important note is that these simulations show tidal bridges on scales of ∼5-40 kpc, while the projected separation between the nuclei in NGC 6240 is only ∼1 kpc. Nonetheless, smaller scale simulations (e.g. Hopkins et al. (2013)) also show significant mass flowing between galaxy nuclei on scales ≲5 kpc. While the irregular H2 morphology does not follow the morphology of CO, this could be explained by an altered CO-to-H2 conversion factor in this nuclear region (Tacconi et al. 2008) causing CO to be brightened with respect to H2 and dust along the N/S axis, as described above. The differences between the two observations are unlikely to be caused by differences in sensitivities or beam sizes, as the H2 observations show a peak brightness located at the southern nucleus, farther than 1.5 beam FWHM away from the peak brightness observed in the CO observations.
Tidal bridges and filaments have been shown to have relatively small line widths (∼50-100 km/s) in tidal dwarf galaxies (Braine et al. 2001). In larger galaxies the line widths and velocities are larger; for example, the CO FWHM is ∼200 km/s with <v> ∼ 130 km/s in the bridge between the Taffy galaxies (Braine et al. 2003). In Arp 194 (total dynamical mass > 10^11 M_⊙), the dispersion is ∼25-125 km/s with |<v>| < 150 km/s in the bridge connecting the two galaxies (Zasov et al. (2016), their Figure 3). In Figure 13 we plot the velocity and dispersion along the major axis of the observed gas (PA = 0°) and the major axis of the modeled gas (PA = 33.7°) to compare to these observed line widths and velocities. For both extractions, the velocities all lie below approximately 100 km/s in absolute value, with a structure that moves smoothly from redshifted to blueshifted, as expected for a tidal bridge. The total velocity dispersion shows values of 150-180 km/s, slightly higher than in the bridge in Arp 194 but within the range of the Taffy galaxy bridge. Within the context of these observations, a tidal bridge remains a plausible explanation of the internuclear molecular gas.
POSSIBLE FATES OF THE MOLECULAR GAS
The geometry of the merger that formed NGC 6240 is discussed in detail in Engel et al. (2010). They propose that the merger has a geometry that tends towards coplanar/prograde because of the extended tidal tails observed in NGC 6240. Lotz et al. (2008) present simulations of equal-mass gas-rich mergers with a variety of geometries, including coplanar prograde encounters like NGC 6240. All coplanar encounters of gas-rich Sbc-type galaxies simulated in Lotz et al. (2008) have two peaks of star formation, one around 1 Gyr and another, stronger starburst around 2 Gyr after the merger begins. Consistently, Tecza et al. (2000) argued that we are observing NGC 6240 shortly after the first encounter triggered an initial starburst, as did Engel et al. (2010), who argued NGC 6240 "has currently elevated levels of star formation compared to a quiescent galaxy; and will experience another, likely stronger, peak in star formation rate in the near future when the galaxies coalesce". Engel et al. (2010) also make the important note that NGC 6240 is currently barely below the ULIRG classification threshold of L_IR ≥ 10^12 L_⊙ and will likely breach that threshold once the second starburst is triggered, as supported by the stronger second starburst predicted by the simulated merger models in Lotz et al. (2008).
It is possible that the tidal bridge will fall onto the nuclei prior to the second pass and final coalescence, possibly triggering the second starburst or feeding further AGN activity. We calculate the free-fall time of the gas of mass m onto a nucleus of mass M using t_ff = (π/2) √(R³ / (2G(M + m))), where R is the distance between the gas and the nucleus. As this is an order-of-magnitude calculation, we split the molecular gas mass in half and assume half will fall to the northern nucleus and half to the southern. We use nuclear masses of 1.3×10^10 M_⊙ for the southern nucleus and 2.5×10^9 M_⊙ for the northern (Engel et al. 2010). The projected radius from the maximum of the integrated line emission is 590 pc to the northern nucleus and 145 pc to the southern. This gives t_ff,northern = 4.3×10^6 years and t_ff,southern = 2.5×10^6 years. That is, the free-fall time of the gas onto the nuclei is a few Myr, while the time between the first pass and the galaxies' maximum separation prior to the second pass is likely to be ∼400 Myr (Lotz et al. 2008). Therefore, it is likely that this gas will fall onto the nuclei prior to the next pass, possibly adding to the current nuclear starburst. One issue with this interpretation is the high modeled velocity dispersion and small vertical extent, which indicate a transient structure that will dissipate after ∼0.3 Myr (see the argument at the end of Section 4.4). Therefore, it is possible that the nuclear concentration of gas will dissipate prior to streaming onto the nuclei.
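The order-of-magnitude estimate above can be reproduced with a few lines of code. The sketch below is illustrative only: it assumes the standard two-body free-fall expression quoted above, and the gas mass assigned to each nucleus (5×10^8 M_⊙) is a placeholder choice rather than a value taken from the text, so the output should be read as confirming the few-Myr timescale rather than the exact quoted numbers. The same function can be evaluated for the southern nucleus with its own mass and radius.

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
PC = 3.086e16        # parsec [m]
YEAR = 3.156e7       # year [s]

def free_fall_time_yr(M_nucleus_msun, m_gas_msun, R_pc):
    """t_ff = (pi/2) * sqrt(R^3 / (2 G (M + m))): time for gas of mass m,
    starting at rest a distance R from a nucleus of mass M, to fall in."""
    M_total_kg = (M_nucleus_msun + m_gas_msun) * M_SUN
    R_m = R_pc * PC
    t_s = (math.pi / 2.0) * math.sqrt(R_m**3 / (2.0 * G * M_total_kg))
    return t_s / YEAR

# Northern nucleus: 2.5e9 Msun at a projected 590 pc; the 5e8 Msun gas mass is
# an assumed placeholder. The result is ~4e6 yr, i.e. the few-Myr scale quoted.
print(f"{free_fall_time_yr(2.5e9, 5e8, 590):.2e} yr")
```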
SUMMARY
NGC 6240 presents an interesting test case of a galaxy merger between first pass and final coalescence, an intermediate and turbulent stage of galaxy evolution. It provides a detailed example of what comes just before galaxies evolve into ULIRGs, as it is just below the ULIRG classification threshold but is likely to cross it when the second, stronger, merger-induced starburst is triggered. We presented high-resolution ALMA observations of CO J = 6−5 and J = 3−2, the first observations of the nuclear region of NGC 6240 in CO J = 6−5 and the highest resolution observations to date in CO J = 3−2. We observe similar morphology to previous CO observations, notably a concentration of gas between the two nuclei that is distinct from the continuum, which is itself centered on the nuclei (Tacconi et al. 1999; Scoville et al. 2014, etc.). We model the molecular gas density and velocity distributions using LIME and find a thin, pancake-like distribution of gas whose velocities and velocity dispersions indicate a transient concentration that is unlikely to be a self-gravitating disk. The model captures the majority of the gas emission but fails to capture the gas kinematics, as demonstrated by the residual maps. We instead argue that the nuclear region observation is consistent with superposed emission from a tidal bridge and highly redshifted gas. This work demonstrates the importance of high-resolution multi-line observations when trying to disentangle the effects of energetic gas acceleration mechanisms, star formation, and tidal forces in the central regions of major mergers.
We argue that the majority of the central molecular gas concentration is a tidal bridge connecting the two nuclei of the progenitor galaxies. Our fiducial models show that this central gas region is likely quite thin (vertical scale height of 20 to 60 pc) with respect to its extent (horizontal scale height of 240 to 500 pc). That this central gas is a bridge connecting the nuclei is supported by Engel et al. (2010), who argue for an altered CO-to-H2 conversion factor in the central region that would be exacerbated by a drawn-out, thin, and diffuse gas bridge. The H2 observations presented in Max et al. (2005) also support the idea of a tidal bridge, an idea that is in turn motivated by simulations from Barnes & Hernquist (1991) and Barnes & Hernquist (1996), which show that mergers can result in material flowing along bridges connecting progenitor galaxies. Kinematic arguments from Treister et al. (2020) suggest the gas bridge connects the two nuclei with clear streaming kinematics. Gas that is subject to gravitational torques, such as that in a tidal bridge in the nuclear region, will likely fall into the nuclear regions by the time of final coalescence (Souchay et al. 2012). Therefore, the molecular gas in the central region will likely fall into the nuclear regions of NGC 6240 and contribute to the second starburst that will turn NGC 6240 into a bona fide ULIRG. However, the high velocity dispersion with respect to the vertical extent of the gas means it is also possible that the structure will dissipate prior to this streaming. These observations and models shed light on one mechanism for star formation in major gas-rich mergers, that is, small-scale tidal bridges forming between progenitor galaxy nuclei that may ultimately feed into the nuclear regions. | 2021-09-21T01:15:34.168Z | 2021-09-19T00:00:00.000 | {
"year": 2021,
"sha1": "4b7e14f570679be7e5bc83316620dd9ea40def89",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2109.09145",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4b7e14f570679be7e5bc83316620dd9ea40def89",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
271402883 | pes2o/s2orc | v3-fos-license | Amenorrhea as a presentation of Cushing’s syndrome
Summary Menstrual cycle abnormalities are common in premenopausal females with Cushing’s syndrome, although the underlying mechanism is poorly understood. Signs and symptoms found in Cushing’s syndrome overlap with polycystic ovarian syndrome (PCOS). The patient is a 33-year-old female previously diagnosed by a gynecologist with PCOS and treated with oral contraceptive pills (OCPs) for 2 years. She then discontinued her OCPs without consulting a clinician, resulting in amenorrhea for 6 months, for which she presented. She also had symptoms of depression and anxiety but had no other signs and symptoms of Cushing’s syndrome, except a plethoric face. Initial lab work showed evidence of central hypogonadism (low luteinizing hormone, follicle-stimulating hormone, and estrogen), so a complete anterior pituitary hormone workup was done. Her thyroid-stimulating hormone was also low with a low free T4 level. Prolactin level was normal, but surprisingly, her AM cortisol level was high. The Cushing’s syndrome workup revealed non-suppressed cortisol after a 1 mg dexamethasone suppression test and positive 24-h urine cortisol with suppressed adrenocorticotrophic hormone. A CT scan of her adrenal glands revealed a left adrenal adenoma. She underwent a left adrenalectomy, after which her menstrual cycles became regular again, and pituitary function has recovered. Learning points In Cushing's syndrome, female patients can have menstrual abnormalities due to the high cortisol levels, which can affect gonadotrophin levels. We encourage clinicians to include Cushing's syndrome in the differential diagnosis of patients with central hypogonadism.
Background
We present a case report of an unusual presentation of Cushing's syndrome. Our case involved amenorrhea accompanied by nonspecific signs and symptoms of Cushing's syndrome, which was initially misdiagnosed as polycystic ovarian syndrome (PCOS). This paper highlights the importance of including Cushing's syndrome in the differential diagnoses of central hypogonadism.
Case presentation
We present the case of a previously healthy 33-year-old Kuwaiti female patient who visited our clinic with a 6-month history of amenorrhea, along with nonspecific symptoms of joint pain, mood swings, slight weight gain, fatigue, and mild facial acne. Her mother reported that she had been depressed and very anxious for the last 2 years. However, they did not seek psychiatric treatment because they attributed it to the quarantine measures due to the COVID-19 pandemic. She had a history of regular periods until 3 years ago, when they became irregular, without dysmenorrhea. A gynecologist diagnosed her with PCOS, and she was on oral contraceptive pills (OCPs) for almost 2 years. She discontinued them without consulting a physician, and since then, she has developed amenorrhea.
Medical history
The patient had no history of known chronic diseases, similar conditions, previous hospital admissions, or surgical interventions. She denied using herbal medicine or over-the-counter medication and had no known allergies. She was a non-smoker, did not drink alcohol, and had a negative family history.
Physical examination
On physical examination, the patient was alert and conscious, looking well, with a weight of 49 kg, a height of 150 cm, and a body mass index of 21.8 kg/m². Her blood pressure was 150/70 mm Hg, heart rate was 80 beats/min, and respiratory rate was 18 breaths/min on room air. Examination of the hands was normal, with no clubbing, palmar erythema, muscle wasting, or hirsutism, and no acne or old scar lesions were found. The head and neck examination revealed a plethoric face. Chest examination revealed normal breath and heart sounds, with no murmurs or additional sounds. On abdominal examination, no striae, scars, tenderness, or organomegaly were found, and bowel sounds were normal. Lower limb examination was normal with no purpura, skin ulcers, muscle wasting, or edema. The patient's presentation suggests PCOS-related amenorrhea, possibly due to her discontinuation of OCPs. However, her joint pain, mood swings, and weight gain are non-specific symptoms that require further investigation. The plethoric face could be due to Cushing's syndrome or other hormonal disorders, and a more detailed endocrine evaluation is warranted.
Investigations
The patient's basic blood work showed mild hypernatremia with potassium at the lower end of the normal range (Table 1). Her initial hormonal blood work results showed low luteinizing hormone (LH), follicle-stimulating hormone (FSH), and estradiol levels, so a complete anterior pituitary hormone workup was conducted (Table 2) along with a pituitary MRI. She had a normal prolactin with low thyroid-stimulating hormone (TSH) and a low free T4. Surprisingly, she had high AM cortisol levels with suppressed adrenocorticotrophic hormone (ACTH). MRI of the pituitary gland showed no abnormalities. To further investigate, a 1 mg overnight dexamethasone suppression test and a 24-h urine cortisol test were conducted, which revealed non-suppressed cortisol and extremely high urine cortisol levels, respectively (Table 3). A CT scan of the adrenal gland revealed a well-defined triangular mass lesion in the left adrenal gland, measuring 4.3 × 3.3 × 3.5 cm with a mean density of 22 HU, suggestive of an adenoma (Fig. 1).
The patient was diagnosed with Cushing's syndrome, central hypogonadism, and central hypothyroidism.
Management
To manage her hypothyroidism and prepare her for left adrenalectomy, the patient was started on levothyroxine 75 mcg orally daily, targeting free T4 in the mid-high normal range. A laparoscopic adrenalectomy was performed without complications. The pathology report was consistent with adrenal cortical adenoma (Weiss score: 0) (Supplementary File 1, see section on supplementary materials given at the end of this article). The patient's post-operative course was uneventful, and she was started on hydrocortisone,
Outcome and follow-up
The patient was closely monitored for signs of adrenal insufficiency, and her cortisol levels were checked regularly. Her thyroid function had recovered, and levothyroxine was stopped.
After 6 months, the patient reported regular menstrual periods and significant weight loss (3 kg), and was symptom free. The patient continued to follow up with our service to monitor for any recurrence of her symptoms and to adjust her cortisol hormone replacement therapy as needed.
Discussion
Amenorrhea is the absence of menstruation. Primary amenorrhea is the absence of menarche in a female aged 15 or older, whereas secondary amenorrhea is the absence of menstruation for at least 3 months after regular menstruation is established. It can be further classified by the anatomic location of the disturbance (hypothalamus, pituitary, uterus, or vagina). Testing for the presence of hyperandrogenism can help narrow the differential diagnoses (1).
Menstrual disturbances are a frequent occurrence in females with Cushing's syndrome. Of the 390 female patients in the European registry on Cushing's syndrome, 56% reported experiencing menstrual disturbances (2).
In addition, Bolland and colleagues conducted a nationwide survey in New Zealand and discovered that 35.5% of female patients with Cushing's syndrome experienced menstrual disruption (2).
Lado-Abeal et al. studied 45 female patients with Cushing's syndrome and found that around 80% had menstrual irregularities. Menstrual cycle abnormalities are common in premenopausal females with Cushing's syndrome, although the underlying mechanism is poorly understood. Signs and symptoms found in Cushing's syndrome overlap with PCOS. These include amenorrhea or oligomenorrhea, obesity, hirsutism, exaggerated gonadotropic response to gonadotropin-releasing hormone (GnRH), and low sex hormone-binding globulin levels with high androgen levels in the blood (3).
Lado-Abeal et al. observed that menstrual irregularities in Cushing's syndrome are due to hypogonadotropic hypogonadism, in contrast to PCOS patients. This finding is supported by the fact that these patients' LH and FSH levels were inappropriately low for the estrogen levels in their blood. They also performed a GnRH stimulation test on their patients; their patients' FSH reserve was normal, and the LH response was normal or exaggerated. This suggests that the pituitary gonadotropin reserve is normal or increased. It has led to the conclusion that menstrual abnormalities in patients with Cushing's syndrome are likely due to abnormal hypothalamic GnRH secretion caused by long-standing high cortisol levels, which block the secretion of GnRH from the hypothalamus and the action of LH and FSH on the ovaries. Furthermore, it was also observed that serum cortisol levels were negatively associated with serum estrogen levels; however, this association was not found between estrogen and androgen levels, and they therefore concluded that the menstrual irregularities are due to high cortisol and not high androgen levels in the blood. Also, the normalization of cortisol levels with metyrapone is associated with resolution of the menstrual abnormalities, although metyrapone increases serum androgen levels. This is further supported by the observation that although administering testosterone to female-to-male transsexuals can lead to morphologic features of PCOS, this only occurs when the testosterone levels are higher than those of normal males or of females with virilizing tumors. Also, serum gonadotropins and LH pulsatility are unaffected if the testosterone level is only raised to the level of normal men, and menstrual irregularities would not occur. In addition, males with estrogen resistance due to a mutation of the estrogen receptors have elevated gonadotropin levels in the blood, although their serum androgen level is normal. This indicates that estrogens, not androgens, mainly regulate the feedback mechanism in the reproductive axis (3). Kaltsas et al. observed 13 Cushing's syndrome patients, with 70% having menstrual disturbances. They found that in women with Cushing's syndrome, the mechanism of menstrual disturbance can either be due to a PCOS phenotype, in which the ovaries are enlarged, or due to suppression of GnRH in the hypothalamus from the high cortisol levels, in which case ovarian volume is preserved. This depends on the cortisol level; if the cortisol level is not high enough to suppress the hypothalamic secretion of GnRH, a PCOS phenotype will develop with or without irregular menstruation. It was also hypothesized that a PCOS phenotype in Cushing's syndrome may develop due to hypercortisolemia-induced hyperinsulinemia and insulin resistance. This suggests that in patients with PCOS exhibiting other signs such as hypertension or myopathy, we should consider Cushing's syndrome as a differential diagnosis (4).
Because of the impact of hypercortisolism on the hypothalamic-pituitary-ovarian axis, pregnancy is rare in Cushing's syndrome. A systematic review by Caimari et al. noted that among pregnant females with active Cushing's syndrome, an adrenal source of Cushing's syndrome is the most common cause. This can be attributed to the fact that in Cushing's disease, elevated ACTH levels lead to overproduction of both cortisol and androgens, while cortisol-secreting adrenal tumors primarily lead to the overproduction of cortisol without the concurrent elevation of androgens (5). Our patient's hemoglobin level is normal. This is important as Cushing's syndrome can affect red blood cell parameters. Detomas et al. conducted a retrospective monocentric study that showed differences in hemoglobin and hematocrit levels between patients with endogenous Cushing's syndrome and control subjects. Controls were patients with non-functional adrenal incidentalomas or non-secretory pituitary microadenomas. The study included 210 patients, consisting of 162 females and 48 males, matched in age and sex with controls. It was concluded that hemoglobin and hematocrit levels are higher in females with endogenous Cushing's syndrome compared to controls (6).
In our case, the patient's presenting symptom was amenorrhea, and she exhibited ideal body weight with non-specific symptoms of joint pain, mood swings, and slight weight gain. She did not have significant symptoms of hyperandrogenism. The blood work showed a picture of central hypogonadism. Therefore, physicians should consider the possibility of Cushing's syndrome as a rare cause of hypogonadotropic hypogonadism.
Patient consent
Written consent was obtained from each patient or subject after explaining the purpose and nature of all procedures used.
Figure 1 CT scan of the adrenal glands, which reveals a well-defined triangular mass lesion in the left adrenal gland, measuring 4.3 × 3.3 × 3.5 cm. (A) Axial view and (B) coronal view.
Table 1
Basic blood work. | 2024-07-25T06:17:51.652Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "ec489dc664ca3d8247ed0416048c0f8af8185506",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1530/edm-23-0152",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "74ef3b1426dafd994a234cfcbf5b0d9fee76084a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9070239 | pes2o/s2orc | v3-fos-license | Addition of rapamycin and hydroxychloroquine to metronomic chemotherapy as a second line treatment results in high salvage rates for refractory metastatic solid tumors: a pilot safety and effectiveness analysis in a small patient cohort.
BACKGROUND
Autophagy is an important oncotarget that can be modulated during anti-cancer therapy. Enhancing autophagy using chemotherapy and rapamycin (Rapa) treatment and then inhibiting it using hydroxychloroquine (HCQ) could synergistically improve therapy outcomes in cancer patients. It is still unclear whether the addition of Rapa and HCQ to chemotherapy could be used to reverse drug resistance.
PATIENTS AND METHODS
Twenty-five stage IV cancer patients were identified. They had no clinical response to first-line metronomic chemotherapy; the patients were salvaged by adding an autophagy inducer (Rapa, 2 mg/day) and an autophagosome inhibitor (HCQ, 400 mg/day) to their current metronomic chemotherapy for at least 3 months. Patients included 4 prostate, 4 bladder, 4 lung, 4 breast, 2 colon, and 3 head and neck cancer patients as well as 4 sarcoma patients.
RESULTS
Chemotherapy was administered for a total of 137 months. The median duration of chemotherapy cycles per patient was 4 months (95% confidence interval, 3-7 months). The overall response rate to this treatment was 40%, with an 84% disease control rate. The most frequent and clinically significant toxicities were myelotoxicities. Grade ≥3 leucopenia occurred in 6 patients (24%), grade ≥3 thrombocytopenia in 8 (32%), and anemia in 3 (12%). None of them developed febrile neutropenia. Non-hematologic toxicities were fatigue (total 32%, with 1 patient developing grade 3 fatigue), diarrhea (total 20%, with 1 patient developing grade 3 diarrhea), reversible grade 3 cardiotoxicity (1 patient), and grade V liver toxicity from hepatitis B reactivation (1 patient).
CONCLUSION
Our results with the Rapa, HCQ, and chemotherapy triplet combination suggest that autophagy is a promising oncotarget and warrants further investigation in phase II studies.
INTRODUCTION
Periodical delivery of standard recommended chemotherapy doses in some types of cancers is often associated with significant toxicity without therapeutic gain. The frequent administration of low doses (1/10th to 1/3rd of the maximum tolerated dose, MTD) of certain antineoplastic drugs, known as metronomic chemotherapy, has demonstrated its efficacy and is now becoming more popular [1]. The anti-cancer effect occurs principally via an antiangiogenic/anti-vascular mechanism [1,2]. Several in vivo experiments have shown that metronomic chemotherapy is more effective in combination with anti-angiogenic, immunotherapeutic, or targeted therapeutic agents [3,4]. A growing number of clinical studies have adopted the concept of combining metronomic chemotherapy with anti-angiogenic therapy and have reported an increase in progression-free survival (PFS) in cases of recurrent glioblastoma multiforme [5], cisplatin-refractory ovarian cancer [6], advanced breast cancer [7,8], non-small cell lung cancer [9], hepatoma [10], and colon cancer [11].
Autophagy is known to promote cancer growth and survival under conditions of nutrient deprivation, hypoxia, or DNA damage caused by chemotherapy [12]. Hydroxychloroquine (HCQ), a clinically approved antirheumatoid drug, is an analogue of chloroquine (CQ) and acts as a lysosomotropic agent; HCQ inactivates lysosomal enzymes by increasing intralysosomal pH, significantly inhibiting the last step of autophagy [12]. A greater inhibition of the proliferative activity of various types of cancer has been reported when chemotherapy was combined with the inhibition of autophagy [13]. The premise of inhibiting autophagy to overcome chemotherapy resistance has been clinically investigated [14][15][16][17][18][19]. Rapa, a clinically approved anti-rejection drug, also known as a mammalian target of rapamycin (mTOR) inhibitor, can induce cellular autophagy [20]. Autophagy modulation by combined treatment with an mTOR inhibitor (Rapa) and a lysosome inhibitor (HCQ) was shown to be effective in models of breast cancer, melanoma, and glioma [21][22][23][24]. We have found the triplet combination of HCQ, Rapa, and chemotherapy to be synergistic, pushing autophagy through Rapa plus chemotherapy and then blocking the final autophagy process through HCQ [25].
A Rapa analogue, everolimus, in combination with HCQ, was found to inhibit the growth of endothelial progenitor cells [26]. In this retrospective report, by collecting anecdotal cases at our institute, we found that the addition of Rapa and HCQ to a metronomic chemotherapy regimen might be an attractive way to increase sensitization to both the anti-cancer and the anti-angiogenic effects of chemotherapy.
Patient characteristics
A total of 46 patients received metronomic chemotherapy from May 2012 to September 2014, and the 25 of them who fit the study criteria were included in the analysis (17 women and 8 men). The median age was 62 years (range 47-76). The Eastern Cooperative Oncology Group performance status was 0 in 10 patients and 1 in 15 patients. The characteristics of the patient population are summarized in Table 1.
Tumor responses
Within the group of 25 evaluable patients, 10 (40%) experienced PR, and 11 (44%) had SD. Eighty-four percent of patients experienced clinical benefits for more than 3 months. The clinical characteristics and results of patients who received this treatment strategy are summarized in Table 2. Representative images and tumor marker changes before metronomic chemotherapy, and before and after salvage metronomic chemotherapy, are shown in Figures 1 and 2. Many patients documented in the study had non-measurable lesions but had a drop in tumor markers >50%. The median follow-up time was 11 months (range, 3-28 months). The median duration of salvage treatment was 4 months (95% confidence interval, 3-7 months) before disease progression, contented stop, or refusal to continue treatment. It was very difficult to evaluate the effect of adding Rapa + HCQ to metronomic chemotherapy on the PFS in such a heterogeneous group of patients. Nevertheless, the state of 2 patients progressed from PD to PR and that of another 8 patients progressed from SD to PR following Rapa + HCQ salvage treatment, suggesting an encouraging response to this treatment.
Toxicities
Data related to non-hematologic toxicity indicated that therapy was well tolerated. As shown in Table 3, 8 patients (32%) reported grade ≥1 fatigue, including 1 who had grade 3 fatigue and had to discontinue the treatment; diarrhea followed as the second most common toxicity, with 4 (16%) patients exhibiting grade 2 and 1 (4%) reporting grade 3 diarrhea. Two (8%) patients had mucositis and 1 (4%) reported grade 3 nausea/vomiting or renal toxicity. One patient (patient # 23) had grade 3 cardiotoxicity, with a left ventricle ejection fraction of 35%. The patient recovered after discontinuing all treatment. She had no history of doxorubicin usage. One patient (patient # 22) experienced grade V hepatitis, which was attributed to the reactivation of previously unnoted hepatitis B virus. She had not been administered prophylactic anti-viral medicine. Myelotoxicity was relatively common, with 8 patients (32%) developing grade ≥ 3 thrombocytopenia, 6 patients (24%) developing grade ≥ 3 leucopenia and 3 patients (12%) having grade ≥ 3 anemia. None of the patients developed febrile neutropenia, and they all recovered quickly from myelotoxicities after one to two weeks of treatment interruption.
DISCUSSION
This is the first report on the addition of Rapa and HCQ to conventional metronomic chemotherapy, which was found to be safe and well tolerated in a variety of cancer types. Most importantly, this chemotherapeutic combination was associated with a 40% observed response rate and an 84% disease stabilization in a cohort of patients refractory to their chemotherapy regimen. The significant clinical benefit observed in patients resistant to chemotherapy was unexpected and merits further investigation.
A growing number of clinical studies have shown that the anticancer effect of metronomic chemotherapy is primarily a consequence of its anti-angiogenic effect. Low-dose chemotherapy is preferentially cytotoxic to dividing endothelial cells [30], leading to death of circulating endothelial progenitor cells [31] and a decreased microvessel density [32]. Rapa, although traditionally thought of as an immunosuppressive drug, may also inhibit tumor growth and have anti-angiogenic effects [33].
Recently, CQ has also been reported not only to reduce tumor growth, but also to improve tumor angiogenesis in a mechanism independent of autophagy [34]. CQ normalized tumor vessel structure and perfusion function, improved hypoxia, and reduced tumor invasion through endosomal Notch 1 trafficking and signaling in endothelial cells [34]. The synergistic effect of the everolimus and CQ combination on endothelial cell apoptosis was found to be linked to the down-regulation of ERK1/2 phosphorylation in these cells [26]. Clearly, the combination of metronomic chemotherapy, Rapa, and HCQ must activate common anti-angiogenic pathways. The change in CT attenuation (in Hounsfield units) of the bladder cancer liver metastases shown in Figure 1 is typical evidence of anti-angiogenic effects after combination treatment [35].
It is not yet known whether the positive results obtained using this strategy are caused solely by additive anti-angiogenic effects or by modulation of autophagy. Several phase II clinical trials have examined the potential benefits of adding either mTOR inhibitors (Rapa, everolimus, temsirolimus) or HCQ (an autophagy-lysosome inhibitor) to conventional chemotherapy in the treatment of patients with malignant gliomas [36,37], non-Hodgkin lymphomas [38], sarcoma [39], melanoma [15,16], breast [40], non-small cell lung [41,42], esophageal [43], and head and neck cancers [44]. Neither strategy has resulted in impressive results. This is the first clinical report describing concomitant use of Rapa, HCQ, and chemotherapy in various cancer types. The results of this self-controlled study indicated that such combinations were not only effective, but also could reverse drug resistance. The autophagy inducer and lysosomal inhibitor in combination with chemotherapy seem to work synergistically. The doses of Rapa (2 mg/day) and HCQ (400 mg/day) were derived from conventional therapeutic doses used in rheumatoid arthritis, kidney transplantation, or treatment of lymphangioleiomyomatosis [45][46][47]. These doses are not the MTD, especially for HCQ, which was reportedly used at a dose of 1200 mg/day in combination with temozolomide for the treatment of solid tumors [8]. Another weakness of this report is the lack of pharmacodynamic assays in tumor tissues or peripheral mononuclear cells. Indeed, the targeted signaling pathways (autophagy or angiogenesis) might not have been modified by these drugs. We have conducted a molecular imaging study in sarcoma patients before and after 2 weeks of treatment with Rapa (2 mg/day) and HCQ (400 mg/day) on the basis of reports that cancer-associated fibroblasts could be potential oncotargets [48].
Although a clinical benefit rate of 84% in this selectively chosen patient cohort seems impressive, additional randomized phase II trials are required before the claim of actual clinical benefits can be accepted. Nevertheless, the real benefit of dual modulation of autophagy might be even more impressive in combination with standard dose chemotherapy.
Patient selection
Patients chosen for analysis were required to have incurable metastatic/recurrent disease and no clinical response to their current metronomic chemotherapy, regardless of the primary tumor type. Patients needed to continue the same metronomic chemotherapy in addition to Rapa (2 mg qd) and HCQ (400 mg qd) as salvage treatment. They needed to have an Eastern Cooperative Oncology Group performance status of 0 to 1, a life expectancy of at least 3 months before salvage treatment, and either at least one single site of measurable (two-dimensional) disease or serially and continuously (>3 monthly measurements) elevated tumor markers (at least twice the upper limit) in the case of non-measurable lesions. Patients were excluded if treatment with metronomic therapy and salvage therapy was shorter than 12 weeks, unless there was radiographic confirmation of disease progression. This retrospective study was approved by our institutional review board (IRB).
Treatment
Patients were treated using different metronomic regimens based on their disease type: cyproterone, 50 mg orally 1 tablet twice daily (1# bid), and docetaxel, 40 mg per body intravenously (i.v.) every 2 weeks (q2w) for prostate cancer; capecitabine, 500 mg 1# bid, vinorelbine, 30 mg orally once a week (qw), and gemcitabine, 800 mg/m2 i.v. q2w for breast cancer; vinorelbine, 30 mg orally qw plus docetaxel, 40 mg per body i.v. q2w for lung cancer; capecitabine, 500 mg 1# bid, and irinotecan, 100 mg/m2 i.v. q2w for colorectal cancer; carboplatin, 150 mg per body, plus gemcitabine, 1400 mg per body i.v. q2w for bladder cancer; cyclophosphamide, 50 mg orally every other day (qod), methotrexate, 50 mg orally qw, tegafur and uracil (UFT), 1# three times a day (tid), and cisplatin, 30 mg/m2 i.v. q2w for head and neck cancers; cyclophosphamide, 50 mg orally qod, etoposide, 50 mg orally qod, and gemcitabine, 1400 mg per body i.v. q2w for sarcoma. Rapa (2 mg) and HCQ (400 mg) treatments were started following the physician's suggestion and the patients' signing of an agreement for the additional treatment. The doses of Rapa and HCQ were chosen from the conventional therapeutic doses used in rheumatoid arthritis and kidney transplantation. The metronomic chemotherapy schedule was followed as given above. There was no dose modification of Rapa, HCQ, or chemotherapy. The only treatment interruption was scheduled when grade ≥ 3 myelotoxicity was observed, with a maximum delay of 3 weeks being allowed. Supportive care agents such as anti-emetics, antibiotics, loperamide, growth factors, transfusions, or fluid supply were administered as indicated.
Treatment was discontinued upon the development of grade IV non-myelotoxicity, patient intolerance, or disease progression. Treatment-related toxicity was assessed every 2 weeks. Toxicity was scored according to the Common Terminology Criteria for Adverse Events (CTCAE) v4.0 [27]. Response was evaluated using chest radiography, computed tomography (CT), or positron emission tomography-computed tomography, which were obtained in principle every 2 to 3 months. Tumor markers were assessed every 1 to 3 months. Response Evaluation Criteria in Solid Tumors (RECIST) v1.1 guidelines were used as follows: complete response (CR, disappearance of measurable disease without development of new lesions, with tumor markers dropping to the normal range), partial response (PR, at least 30% reduction in the sum of the longest diameters measured at disease sites or enhanced areas), progressive disease (PD, at least 20% increase in the sum of the longest diameters measured at disease sites or appearance of new lesions), and stable disease (SD, if the determination did not meet the criteria for CR, PR, or PD, or, for patients with non-measurable lesions, a tumor marker decline of >50%) [28,29]. The radiologic evaluation of the response was obtained by consensus | 2016-05-12T22:15:10.714Z | 2015-04-12T00:00:00.000 | {
"year": 2015,
"sha1": "4c776ffd0b6ea0bd82053087cc2da58025507a58",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=3793&path[]=8077",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c776ffd0b6ea0bd82053087cc2da58025507a58",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263648765 | pes2o/s2orc | v3-fos-license | A Review of Research on Blockchain Empowered Supply Chain Financing in China
: To promote theoretical and empirical research on blockchain-enabled supply chain finance in China, this article provides an overview of relevant studies in China. Currently, research related to blockchain-enabled supply chain finance in China primarily focuses on three aspects: the underlying mechanisms of blockchain-enabled supply chain finance (including decentralization and consensus mechanisms, distributed storage, tamper resistance, and anti-denial features, as well as smart contracts), the positive effects of blockchain-enabled supply chain finance (including comprehensive effects, credit transmission, risk management) and the application scenarios of blockchain-enabled supply chain finance, and the impact mechanisms of blockchain on supply chain finance gaming behaviors (including the influence of blockchain on supply chain finance decision-making, risk assessment, and the supply chain financial system). Overall, research in China on the positive effects of blockchain-enabled supply chain finance (such as cost and benefit analysis, micro-level efficiency) is relatively limited. Additionally, studies on the factors affecting the adoption of blockchain in supply chain finance and the behavior of banks and small and medium-sized enterprises (SMEs) in blockchain adoption within supply chain finance are relatively scarce. Given the backdrop of financial technology, further research is needed to deepen our understanding of various aspects related to blockchain-enabled supply chain finance in China.
Introduction
Supply chain financing is a financing mode that takes the core enterprises in the supply chain together with their related upstream and downstream supporting enterprises, and develops an overall financial solution based on control of goods rights and cash flow, according to the transaction relationships and industry characteristics of the enterprises in the supply chain. Supply chain financing is considered a novel financing method that fosters a win-win situation for multiple parties.
It evaluates the credit risk of small and medium-sized enterprises (SMEs) from a supply chain perspective, emphasizing the stability of the supply chain and the reliability of trade backgrounds. It serves as an effective approach to address the financing challenges faced by SMEs. In accordance with the "Guidance on Promoting Supply Chain Financial Services for the Real Economy" issued by the China Banking and Insurance Regulatory Commission (CBIRC) (CBIRC Office [2019] No. 155), banks and insurance institutions are required to leverage core supply chain enterprises. They should integrate various types of information, including logistics, information flow, and fund flow, based on genuine transactions between core enterprises and upstream/downstream chain enterprises. This integration aims to provide a comprehensive range of financial services, including financing, settlement, and cash management, to enterprises throughout the supply chain. However, traditional supply chain finance faces challenges due to factors such as technology and management. These challenges include ineffective credit transmission, a lack of reliable information systems, and cumbersome operational processes. As a result, difficulties in obtaining financing, high financing costs, and slow financing processes for SMEs within the supply chain persist (Zhou Lei et al., 2021) [1].
Blockchain's innovative features such as decentralization, consensus trust, smart contracts, and collective supervision align naturally with the characteristics of supply chain finance, which involve multiple participants, upstream and downstream collaboration, and multi-level credit transmission.
Blockchain technology-driven supply chain finance can ensure value transfer and multi-layered credit penetration. It holds the potential to break through the limitations of traditional supply chain finance, which often struggles to cover small and medium-sized enterprises (SMEs) at the tail end, leading to a transformation in traditional supply chain finance. In light of this, the "Guidance on

This article will provide an overview of the current state of research on blockchain-enabled supply chain financing in China, including the intrinsic mechanisms, the positive impact of blockchain on supply chain financing, and its influence on the game theory of supply chain financing.
The aim is to further promote theoretical and empirical research on blockchain-enabled supply chain financing. The structure of the remaining sections of this article is as follows: Section 2 introduces the intrinsic mechanisms of blockchain-enabled supply chain financing. Section 3 presents the positive effects of blockchain-enabled supply chain financing and its application scenarios. Section 4 discusses the impact mechanisms of blockchain on the game theory of supply chain financing. Section 5 provides a brief conclusion and summary.
Decentralization and Consensus Mechanisms
In a digital supply chain network empowered by blockchain, once the core enterprise establishes ownership, the accounts receivable that carries its credit can be seamlessly recorded on the blockchain without the need for any centralized institution for registration or authentication, creating a flexible set of digital credit credentials, thus achieving "tokenization." After "tokenization," the digital accounts receivable certificates can be distributed, verified, split, and circulated on the blockchain value network (Zhou Lei et al., 2021) [1].
Blockchain's consensus mechanisms, based on algorithms such as proof of work and proof of stake, can further establish "machine trust." This allows digital credit credentials to circulate with trust on the blockchain, extending the creditworthiness of the core enterprise to small and medium-sized enterprises (SMEs) across the entire supply chain, including end-point nodes (Zhou Lei et al., 2021) [1].
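As a rough illustration of how such consensus mechanisms create "machine trust", the following minimal proof-of-work sketch is a generic Python toy example, not tied to any particular supply chain finance platform, and the record string in it is entirely hypothetical. It shows why a recorded credential becomes costly to forge: changing even one character of the recorded data invalidates the previously found nonce and forces the work to be redone by whoever wants to rewrite it.

```python
import hashlib

def mine(record: str, difficulty: int = 4):
    """Toy proof of work: find a nonce such that SHA-256(record + nonce)
    starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{record}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Hypothetical receivable record carrying the core enterprise's credit.
nonce, digest = mine("receivable: CoreCo owes SupplierA 1,000,000 CNY, due 2024-12-31")
print(nonce, digest)   # any tampering with the record invalidates this proof
```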
At every level of suppliers and distributors within the supply chain, whether or not they have direct transactional relationships with the core enterprise, they can utilize the blockchain-recorded digital credit credentials, which have been split and circulated step by step, based on their specific needs.
These credentials can be used to redeem due claims, to split payments to upstream and downstream enterprises at no cost, or as collateral for low-cost financing from banking and financial institutions (Zhou Lei et al., 2021) [1].
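A minimal sketch of this split-and-circulate bookkeeping is shown below. It is an illustrative in-memory Python model, not an actual on-chain implementation, and all names and amounts (CoreCo, Tier1Supplier, and so on) are hypothetical placeholders.

```python
from typing import Optional

class CreditCredential:
    """A digital accounts-receivable credential backed by a core enterprise's credit."""
    def __init__(self, issuer: str, holder: str, amount: float, parent: Optional[str] = None):
        self.issuer, self.holder, self.amount, self.parent = issuer, holder, amount, parent

class CredentialLedger:
    """Tracks issuance and step-by-step splitting of credentials along the chain."""
    def __init__(self):
        self.credentials = {}
        self._counter = 0

    def _new_id(self) -> str:
        self._counter += 1
        return f"C{self._counter}"

    def issue(self, issuer: str, holder: str, amount: float) -> str:
        cid = self._new_id()
        self.credentials[cid] = CreditCredential(issuer, holder, amount)
        return cid

    def split(self, cid: str, new_holder: str, amount: float) -> str:
        """Carve part of an existing credential off to an upstream supplier."""
        src = self.credentials[cid]
        assert 0 < amount <= src.amount, "cannot split more than the remaining face value"
        src.amount -= amount
        new_id = self._new_id()
        self.credentials[new_id] = CreditCredential(src.issuer, new_holder, amount, parent=cid)
        return new_id

ledger = CredentialLedger()
c0 = ledger.issue("CoreCo", "Tier1Supplier", 1_000_000)   # core enterprise confirms a receivable
c1 = ledger.split(c0, "Tier2Supplier", 300_000)           # tier-1 pays tier-2 with CoreCo's credit
print(ledger.credentials[c0].amount, ledger.credentials[c1].amount)   # 700000 300000
```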
Distributed Storage and Anti-tampering, Anti-repudiation
With the use of blockchain networks, supply chain consortia can be formed. Utilizing distributed storage and consensus mechanisms, the credit information of various entities within the supply chain and the authentic records of transactions are stored on a distributed ledger. Once any credit records and transaction information are verified through consensus, they are rapidly disseminated across the entire network and written into various nodes in an immutable manner. This establishes an efficient mechanism for sharing credit (Li Ming-xian and Chen Ke, 2021) [2].
On the basis of credit sharing, blockchain, as the architecture of the Internet of value, has a unique mechanism of economic incentives and constraints. Through the issuance and distribution of credit-carrying "tokens" as incentives, the trustworthy credit of entities on the supply chain can be written into blocks, maximizing credit value and jointly incentivizing trustworthy behavior, and ultimately promoting win-win cooperation among SMEs, core enterprises, and banking and financial institutions (Zhou Lei et al., 2021) [1].
Once integrated with a blockchain platform, in cases of default within the supply chain, adverse credit information is verified and stored in a distributed manner across all nodes in the network. It becomes tamper-resistant and irrefutable. When entities with a history of default apply for financing again, banking and financial institutions not only automatically reject loans based on smart contracts due to the adverse records but also store the loan rejection records in a distributed manner on the blockchain network. This creates a highly efficient mechanism for the joint punishment of dishonesty, in which the cost of a single default significantly outweighs any potential gains (Zhou Lei et al., 2021) [1].
Smart Contracts

Smart contracts are a set of predefined scenarios and corresponding actions triggered by contract execution, as per established conditions and transition rules (Zhan Ji-zhou and Zhang Ge-wei, 2023) [3]. The most significant advantage of smart contracts lies in their ability to intelligently assess trigger conditions, greatly reducing performance costs and enhancing transaction efficiency. When applied to supply chain financing scenarios, smart contracts improve the management model of traditional supply chain financing for small and medium-sized enterprises (SMEs). They eliminate the need for cumbersome manual processes such as accounts receivable assessment and inventory monitoring.
Instead, they directly integrate the "credit chain" of core enterprises and SMEs, enabling low-cost, intelligent, automated, and auditable analysis and processing of transaction information and data.
While effectively controlling risks, this significantly reduces the credit and operational costs associated with SME financing (Zhou Lei et al., 2020) [4].
When SMEs apply for financing, banking and financial institutions integrated with blockchain platforms will automatically complete the approval process based on predefined response conditions and rules. They will also automatically retrieve the enterprise's credit limit and related credit records from the credit management module for consensus validation. Once consensus at the network layer is confirmed, the contract executes automatically, and the disbursement is completed. SMEs can access the funds almost instantly, significantly enhancing financing efficiency (Long Yun'an et al., 2019) [5].
Combining smart contract technology with the distributed storage capabilities of blockchain ensures that all transaction records between core enterprises and SMEs on the supply chain are intelligently verified and stored in an immutable manner by all nodes. Based on real transactions and credit-bearing digital credentials, they are programmable throughout the entire lifecycle of split transfers and trusted circulation. When the core enterprise makes the final payment, funds are automatically transferred to all holders of digital credentials according to preset response rules, cashing out all transactions and completing financing repayment. Therefore, blockchain not only empowers SME financing by reducing costs and increasing efficiency but also enhances the flow of funds and collaborative efficiency throughout the entire supply chain (Zhou Lei et al., 2021) [1].
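The trigger-condition logic described above can be sketched in a few lines. The example below is written in plain Python rather than an actual smart-contract language (such as Solidity, or the SPESC language mentioned later in this review), and the haircut rate, credit limit, and participant names are illustrative assumptions rather than values from any cited platform.

```python
def approve_financing(credential, credit_limit, default_records, haircut=0.10):
    """Smart-contract-style rule: reject holders with recorded defaults,
    otherwise advance the discounted face value up to the credit limit."""
    if default_records.get(credential["holder"]):
        return 0.0                                    # joint punishment of dishonesty
    advance = credential["amount"] * (1 - haircut)    # discount the receivable's face value
    return min(advance, credit_limit)

def settle_at_maturity(payment, credentials):
    """When the core enterprise pays, split the cash pro rata across all credential holders."""
    total = sum(c["amount"] for c in credentials)
    return {c["holder"]: payment * c["amount"] / total for c in credentials}

credentials = [{"holder": "Tier1Supplier", "amount": 700_000},
               {"holder": "Tier2Supplier", "amount": 300_000}]
print(approve_financing(credentials[1], credit_limit=500_000, default_records={}))  # 270000.0
print(settle_at_maturity(1_000_000, credentials))  # pro rata payout to each holder
```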
Comprehensive Effects
In terms of comprehensive effects, Duan Wei-chang (2018) [6], in conjunction with the logical framework of supply chain management and the business model of supply chain finance, starts with fundamental elements such as documents and contracts and provides a detailed analysis of the restructuring process and innovative effects of blockchain technology on business processes and business models. Zhu Xing-xiong et al. (2018) [7] suggest that a blockchain supply chain finance platform can integrate the "four flows" into one, expand the range of service recipients, strengthen risk management, establish accounts receivable confirmation, manage collateral and its pricing, and manage cash flows. Chu Xue-jian and Gao Bo (2018) [8] argue that the combination of blockchain and supply chain finance can achieve information symmetry among participating parties, facilitate the transmission of core enterprise credit, make the supply chain finance process visible, enhance risk control, and provide full coverage of services. Xu Di-di (2019) [9] believes that blockchain technology can facilitate information transmission in supply chain finance, establish a multi-party cooperation and coordination mechanism for supply chain finance, address the challenges of risk control in supply chain finance, and simplify the operational processes of supply chain finance. Lin Nan (2019) [10] suggests that blockchain technology can establish a transparent financing ledger, eliminate information asymmetry issues, achieve financial disintermediation, reduce human-induced factors, provide smart contract capabilities, reduce human resource costs in supply chain finance, serve as a supplement to electronic bills of exchange, enhance the quality of supply chain finance services, innovate financial transaction mechanisms, and build a more orderly ecosystem for supply chain finance. Bai Yan-fei et al. (2020) [11] contend that blockchain technology empowers supply chain finance in three ways: building trust mechanisms among on-chain entities; reducing management risks and increasing the radius of credit transmission; and achieving efficiency, security, and privacy protection. Zhou Da-yong and Wu Yao (2020) [12] analyze the application of blockchain technology in supply chain finance, highlighting its potential to reduce information asymmetry, expand the scope of supply chain finance, make movable property collateral financing possible, and enhance the efficiency of supply chain finance. Xue Yang (2021) [13] believes that commercial banks have used blockchain technology to achieve iterative upgrading of their product systems, innovation in financing modes, and expansion to long-tail customers, effectively reducing losses of financial assets. Furthermore, Guo Jue and Chen Chen (2020) [14] analyze various aspects of blockchain technology, including how it breaks through the "information island" at the end of the chain, addresses the financing issues of small and medium-sized enterprises, establishes channels for mutual information exchange among key participants, assists banks in resisting market risks, and constructs industry alliances for supply chain finance, among other issues.
Credit Transmission
Regarding credit transmission, Wang Xin and Chen Li-yuan (2020) [15] suggest that blockchain technology has unique technical advantages for the lossless transmission of core enterprise credit in multi-tier supply chain finance scenarios. It helps enhance supply chain information transparency, ensures the transfer of value and multi-layer credit penetration, and achieves comprehensive risk monitoring and regulatory oversight. Lin Yong-min et al. (2021) [16], based on a coupling analysis of supply chain finance pain points and blockchain technology characteristics, have constructed an alliance chain framework centered on core enterprise credit to address the issue of credit penetration in supply chain finance. Their research shows that trustworthy, divisible, and transferable electronic debt certificates enable the lossless transmission of core enterprise credit along trade relationships; the "trust without intermediaries" model reduces the overall operating costs of the supply chain; and smart contracts that lock the payment settlement path automate the payment settlement process, reducing risk and expanding the financial market size. Yang Hong-xiong and Chen Jun-shu (2022) [17] establish a fundamental theoretical model of "digital credit governance - network embeddedness - supply chain finance performance." Using a structural equation model, they analyze the impact mechanism of blockchain technology on supply chain finance performance from the perspective of network embeddedness, unveiling the "black box" of how blockchain technology improves supply chain finance performance.
Risk Management

In terms of risk management, Wang Li-hua and Liu Ling (2020) [18] suggest that blockchain technology contributes to risk management in supply chain finance through various mechanisms. It reduces information asymmetry by providing distributed ledger functionality for real-time data sharing. The consensus mechanism and tamper-resistant features ensure the authenticity of data and transactions, reducing moral hazards. Smart contract functionality helps lower operational and market risks, thereby cutting risk management costs. Timestamping and traceability features enable comprehensive regulatory oversight. Wang Hong-yu and Wen Hong-mei (2021) [19] argue that blockchain technology, with its advantages in distributed accounting, information sharing, and asymmetric encryption, and through its service system, can accurately record all kinds of information in the agricultural supply chain, realize information resource sharing, break down the information barriers of traditional agricultural supply chain finance, and resolve the difficulty financial institutions face in accurately verifying information. Fu Han-yi et al. (2021) [20] believe that applying blockchain technology to supply chain finance can enhance the authenticity of data to reduce credit risks, improve the timeliness of data to mitigate moral hazards, and increase data transparency to lower operational risks. Feng Shan-shan and Li Yong-mei (2022) [21] conducted research showing that the application of blockchain technology can reduce the probability of credit risk occurring in supply chain finance, effectively meeting the financing needs of small and medium-sized enterprises upstream and downstream of the supply chain. Additionally, Han Jing-wang and Han Ming-xi (2022) [22] argue that blockchain technology offers advantages in innovating supply chain finance by ensuring information flow, safeguarding information security, and strengthening risk control.
Application Scenarios of Blockchain-Enabled Supply Chain Financing
Regarding the application scenarios of blockchain empowering supply chain finance, Han Jing-wang and Han Ming-xi (2022) [22] studied the implementation of blockchain technology in innovating the structure of supply chain finance systems, the innovation of supply chain finance through smart contracts based on blockchain technology, and the innovation of risk control with blockchain technology. Li Xiao-peng et al. (2022) [23] selected three typical supply chain financing business scenarios, including accounts receivable financing, confirmed warehouse financing, and movable asset pledge financing.
They analyzed the application of blockchain smart contracts in supply chain finance.Based on this analysis, they constructed a supply chain financing platform based on smart contract technology.
They also used SPESC language to compile smart contracts and provided relevant recommendations.
Zhan Ji-zhou and Zhang Ge-wei (2023) [3] conducted research on various aspects, including innovation in supply chain inventory pledge financing models, innovation in supply chain accounts receivable financing models, innovation in supply chain advance payment financing models, innovation in multi-level supply chain credit financing models, and the security mechanisms of blockchain-empowered supply chain finance.
Impact Mechanisms of Blockchain on Supply Chain Financing Decision-Making
Regarding the impact mechanisms of blockchain on supply chain finance decision-making, Zhang Lu (2019) [24] initially discussed supply chain finance service models and blockchain incentive mechanisms from a game-theoretic perspective.Tang Dan and Zhuang Xin-tian (2019) [25] used the Li-na Dong and Mu Zhang / Journal of Risk Analysis and Crisis Response, 2023, 13(3), 220-230 newsboy model to compare and analyze the differences in benefits among various supply chain entities in the context of blockchain debt conversion platforms and traditional supply chain financing models.They discussed the advantages of blockchain debt conversion platforms in reducing costs, increasing returns, and facilitating turnover.Deng Ai-min and Li Yun-feng (2019) [26] proposed specific application scenarios for blockchain smart contract technology in supply chain factoring business, focusing on the transfer of debt securities, upstream supplier factoring financing, and core enterprise mature payments.Using the idea of game theory, from the perspective of blockchain node activity technology, modeling and analysis of the automatic execution mechanism of smart contract, knowing that any rational node x will always choose to follow the protocol to make it automatically executed, emphasizing the important role of blockchain technology for the object of business process.
They also conducted a three-party game analysis of supply chain factoring financing processes considering the influence of blockchain technology from the perspective of supply chain business entity decision-making, seeking equilibrium solutions (lending, repayment, repayment) based on the principle of utility maximization and highlighting the optimization effect of blockchain technology on the decision-making behavior of the parties involved. Li Jian et al. (2020) [27] focused on small and medium-sized enterprise warehouse receipt pledge business. Using a comprehensive integrated methodology, they quantitatively modeled the impact of blockchain technology on various aspects of supply chain finance. They studied the loan and production decisions of production enterprises before and after using blockchain technology, analyzing the effects of blockchain technology on different types of companies' operations. They also used the Value at Risk (VaR) risk measurement method to study the bank's pledge rate decisions before and after using blockchain technology, analyzing the impact of blockchain technology on bank pledge rate decisions. Liu Lu et al. (2021) [28] established a three-level supply chain decision-making model involving manufacturers, distributors, and retailers to quantitatively analyze the impact of blockchain credit transmission technology on supply chain finance. They used the Stackelberg game method to characterize both traditional supply chain finance models and blockchain supply chain finance models. They optimized and solved the game model using backward induction, obtaining equilibrium states for both financing models in terms of optimal wholesale prices, distribution prices, and order quantities in the supply chain. They also conducted sensitivity analysis on key parameters such as the initial capital of retailers and the time value of corporate funds. Tang Dan and Zhuang Xin-tian (2021) [29] constructed a comparative model based on revenue-sharing contracts for bank credit financing, commercial credit financing, and blockchain supply chain finance. They explored the optimal decision-making solutions for supply chains under various financing models and quantitatively analyzed how supply chain performance varies with different parameters. Their research showed that when the cost of funds for retailers is lower than that of manufacturers, commercial credit financing is always superior to bank credit financing. They identified two threshold points for platform fees in blockchain supply chain finance, suggesting that blockchain supply chain finance is the optimal model for both manufacturers and retailers only when the platform fee is below the lower threshold point. Additionally, a higher revenue-sharing ratio results in higher manufacturer profits and lower retailer profits.
Additionally, Wang Dao-ping et al. (2023) [30] used quantitative models in the context of blockchain to depict the predictive role of applying blockchain technology in the face of output uncertainty. They analyzed the influence of the degree of blockchain technology application on the production decisions of small and medium-sized enterprises, bank lending decisions, and the expected profits of borrowing companies and banks. They also studied the credit limit decisions of banks considering risk avoidance using a downside risk control model. The research findings indicated that the planned production volume of borrowing companies increases with a higher degree of blockchain technology application. The loan amounts set by banks seeking profit maximization also increase under certain conditions as the degree of blockchain technology application rises. The bank's credit limit decisions, influenced by the changing degree of blockchain technology application, are related to risk tolerance and the mean of output fluctuations. The expected profits of borrowing companies initially decrease and then increase with the increasing degree of blockchain technology application. When the mean of supplier output fluctuations is large, the expected profits of banks increase with a higher degree of blockchain technology application.
Impact Mechanisms of Blockchain on Supply Chain Financing Risk
Regarding the impact mechanisms of blockchain on supply chain finance risk, Yang Hong-zhi et al. (2020) [31] found that the introduction of blockchain into supply chain finance platforms will reduce the probability of corporate default, strengthen the equilibrium point in the game for all participating entities (cooperation, compliance, compliance), and increase the profits of all parties in the game. At the equilibrium point, all parties achieve a win-win situation. Gong Qiang et al. (2021) [32] constructed a theoretical model for enterprises in the supply chain network to obtain collateralized financing from banks. They used Bayesian game theory to systematically analyze the economic operation principles of digital supply chain finance and its pros and cons compared to traditional supply chain finance. The research found that when the number of enterprises on the chain reaches a certain level and the quality of on-chain information reaches a certain standard, the enterprise information revealed by the blockchain consensus mechanism approaches the real information, which prevents information manipulation and malicious fraud by enterprises and allows banks, while effectively controlling risk, to provide supply chain enterprises with sufficiently accessible and low-cost financing services. On the contrary, when the number of enterprises on the chain is small or the quality of on-chain information cannot be guaranteed, banks are better advised to prevent and control risks through traditional offline due diligence and other methods. Zhou Lei et al. (2021) [1], based on an analysis of the mechanism of blockchain's empowerment in supply chain finance, constructed a dynamic evolutionary game model between financial institutions and small and micro-enterprises, as well as between core enterprises and small and micro-enterprises. They concluded that connecting to a blockchain platform is the dominant strategy for financial institutions. Blockchain helps small and micro-enterprises make compliance decisions by facilitating credit segmentation and circulation, improving financing efficiency, increasing default costs, and reducing financing rates, among other transmission pathways. It also encourages small and micro-enterprises to make compliance decisions through network-based cooperative credit incentives, joint punishment for dishonesty, and reasonable profit sharing. This drives the game towards an equilibrium in which financial institutions dare to lend and are willing to lend, showing the ideal state of "double trustworthiness" of core enterprises and small and micro-enterprises. In addition, Hasqiqige and Zhao Li-li (2022) [33] established an evolutionary game model of SMEs and financial institutions under the accounts receivable financing mode. They determined the dominant strategies for small and medium-sized enterprises and financial institutions to connect to the blockchain, and analyzed the impact of punishment and incentive factors on the decision-making and evolution paths of both parties in the model. In addition, Li Jun-qiang and Wang Yu (2022) [34], through the construction of an evolutionary game model for financial institutions and small and medium-sized enterprises under accounts receivable financing, conducted a dynamic evolutionary analysis of the strategy choices of various stakeholders using MATLAB simulations. The analysis results indicate that when the admission and operation costs of blockchain decrease, or when the incentives for blockchain increase, or when the rewards for small and medium-sized enterprises
for compliance increase, small and medium-sized enterprises are more inclined to comply with contracts, and financial institutions are also more inclined to use blockchain technology in supply chain finance.
Impact Mechanisms of Blockchain on Supply Chain Financing System
In the context of the impact mechanism of blockchain on the supply chain finance system, Lou Yong et al. (2022) [35] introduced a theoretical framework of blockchain + supply chain finance. They investigated the effects of blockchain technology on the supply chain finance system from the dual perspectives of optimizing financing efficiency for both banks and enterprises. They used a three-party game and a dynamic evolutionary game model to conduct their research. The study found that the introduction of blockchain technology reduced the financing constraints in the supply chain finance market and increased the accessibility of funds for financing enterprises. Additionally, the low operational costs of blockchain technology ultimately improved the financing efficiency of all participants in the blockchain + supply chain finance model. In the long term, achieving the optimal balance places certain demands on blockchain technology itself. These demands include reducing the operational costs of blockchain and designing stable and effective constraints for the supply chain finance system, which to some extent replace the inherent stability function of the supply chain itself.
Brief Review
Currently, research related to blockchain-enabled supply chain finance in China primarily focuses on three aspects: (1) the underlying mechanisms of blockchain-enabled supply chain finance (including decentralization and consensus mechanisms, distributed storage, tamper resistance and anti-denial features, as well as smart contracts); (2) the positive effects of blockchain-enabled supply chain finance (including comprehensive effects, credit transmission, and risk management) and its application scenarios; and (3) the impact mechanisms of blockchain on supply chain finance gaming behaviors (including the influence of blockchain on supply chain finance decision-making, risk assessment, and the supply chain financial system).
However, research in China on the positive effects of blockchain-enabled supply chain finance (such as cost and benefit analysis and micro-level efficiency) is relatively limited. Additionally, studies on the factors affecting the adoption of blockchain in supply chain finance and on the behavior of banks and small and medium-sized enterprises (SMEs) in blockchain adoption within supply chain finance are relatively scarce. In summary, in the context of financial technology, there is a need for further research in China to deepen the understanding of several aspects related to blockchain-enabled supply chain finance.
The "Guiding Opinions on Promoting Supply Chain Financial Services for the Real Economy" issued by the China Banking and Insurance Regulatory Commission (CBIRC) (CBIRC Office [2019] No. 155) encourages banking and financial institutions to cooperate with core enterprises and utilize technologies such as the internet, Internet of Things (IoT), blockchain, biometrics, and artificial intelligence (AI) to build supply chain financial service platforms for upstream and downstream chain enterprises. Furthermore, the "Opinions on Regulating the Development of Supply Chain Finance to Support the Stable Circulation and Optimization Upgrade of the Supply Chain Industry" (Yin Fa [2020] No. 226) stipulates that all participants in supply chain finance should prudently employ new-generation information technologies such as blockchain, big data, and AI. They should also continuously enhance the security and operational monitoring capabilities of supply chain financial service platforms and information systems to effectively mitigate risks related to information security and network security. Therefore, conducting in-depth research on issues related to blockchain-enabled supply chain financing holds significant theoretical and practical value. It can accelerate the digitization and intelligence of supply chain finance, alleviating the financing difficulties faced by small and medium-sized enterprises (SMEs). | 2023-10-05T15:40:38.949Z | 2023-09-30T00:00:00.000 | {
"year": 2023,
"sha1": "ce64c120dc1b4d117fecbe7d95087e456dddc7b8",
"oa_license": "CCBYNC",
"oa_url": "https://jracr.com/index.php/jracr/article/download/386/414",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5a8fb93696277cee037f9dafd73bcdc6121ad331",
"s2fieldsofstudy": [
"Business",
"Economics",
"Computer Science"
],
"extfieldsofstudy": []
} |
31947642 | pes2o/s2orc | v3-fos-license | Information and Training Needs Regarding Complementary and Alternative Medicine: A Cross-sectional Study of Cancer Care Providers in Germany
Background: Among cancer care providers (CCPs), lack of knowledge constitutes an important barrier to the discussion of complementary and alternative medicine (CAM) use with patients. This study assessed CCPs’ needs and preferences regarding CAM information and training (I&T). Methods: An online survey was completed by 209 general practitioners, 437 medical specialists, 159 oncology nurses and medical assistants, and 244 psychologists and social workers engaged in cancer care. Latent class analysis (LCA) was used to identify subgroups of individuals with distinct preference patterns regarding I&T content. Results: CCPs prefer CAM I&T to be provided as lectures, information platforms on the internet, workshops, and e-mail newsletters. Concerning subject matters, many CCPs considered CAM therapy options for the treatment of a variety of cancer disease- and therapy-related symptoms to be very important (75%-72% of the sample); the same applies to an “overview of different CAM therapies” (74%). LCA identified 5 latent classes (LCs) of CCPs. All of them attached considerable importance to “medical indication,” “potential side effects,” and “tips for usage.” LCs differed, however, in terms of overall importance ratings, the perceived importance of “patients’ reasons” for using specific CAM therapies, “case examples,” and “scientific evidence.” Notably, the 5 LCs were clearly present in all 4 occupational groups. Conclusions: CAM I&T should provide CCPs with an overview of different CAM therapies and show how CAM might help in treating symptoms cancer patients frequently demonstrate (eg, fatigue). Moreover, I&T programs should be flexible and take into account that individual information needs vary even within the same occupational group.
Introduction
According to a meta-analysis, as many as 41% of cancer patients use complementary and alternative medicine (CAM) therapies, and the prevalence of CAM use appears to be increasing. 1 Many cancer patients wish to know more about CAM 2,3 but rely primarily on friends, family members, and the media for information. 4 Patient-doctor discussions on CAM are crucial for a number of reasons 5 but rarely take place. 6 Many cancer patients do not disclose their CAM use to cancer care providers (CCPs) [6][7][8][9] either because it does not occur to them to do so, or they believe that their CAM use has no influence on their conventional cancer treatment, and/or because they expect physicians to have a negative attitude toward CAM and to be unable to help. 7,8 Likewise, CAM use is rarely proactively addressed by CCPs. According to a recent US survey, oncologists had discussed herbs and supplements with an average of 41% of their cancer patients over the previous 12 months, and only 26% of these discussions were initiated by the oncologists themselves. 10 A main barrier would appear to be a lack of knowledge: Two out of 3 oncologists in this sample indicated that they did not know enough to be able to answer patients' questions properly, and 59% reported not having received any education on the topic. In both the US study and a survey of hospital doctors and general practitioners (GPs) in New Zealand, 11 self-perceived knowledge was found to be a significant predictor of readiness to proactively discuss CAM use with patients. Hence, physicians and other CCPs should have access to reliable information and training (I&T) on CAM. 11 The purpose of the present study was to assess CCPs' needs for further I&T on CAM. Our focus was on the 4 main occupational groups involved in cancer care in Germany: medical specialists in oncology (MSs), oncology nurses and medical assistants (ONs/MAs), GPs, and psychologists and social (education) workers engaged in psycho-oncology care and social medicine (POs/SWs). We first aimed to examine what CAM therapies and potential fields of application for CAM (ie, treatment of specific cancer and cancer therapy-related symptoms) are of greatest importance and should be covered by information materials and training programs for CCPs. Second, using latent class analysis (LCA), we investigated whether subgroups of CCPs exist that show distinct preference patterns with regard to specific I&T content (eg, mechanisms of actions, evidence from studies regarding efficacy, potential side effects, tips for use) and how prevalent subgroups demonstrating the identified specific preference patterns are in each of the 4 occupational groups. Third, we determined CCPs' preferences with regard to the form CAM I&T should take.
Methods
An online survey was used to investigate CAM I&T needs among CCPs from all main areas of cancer care in Germany (see below). The study was conducted as part of the KOKON competence network for CAM in oncology, which was funded by the German Cancer Aid association from 2012 to 2015. The Head of the Institutional Review Board of the University Hospital Frankfurt/Main decided on the basis of the professional code of conduct of the Medical Association of the Federal State of Hessen/Germany ( § 15 BO hess. Ärzte) that specific ethical approval was not required for this investigation.
The items on the questionnaire were based on the results of semistructured interviews with 63 individuals from the 4 targeted occupational groups. To ensure a common, broad understanding of the term CAM among all survey participants, the questionnaire began with an item block gauging the importance of 25 different CAM therapies as potential subjects for CAM I&T. Next, the importance of 13 possible fields of application for CAM-that is, specific cancer disease- and therapy-related symptoms-was to be rated in the same manner. Further topics were how often and in what situations CAM information needs emerge and how participants have sourced information on CAM in the past. We also asked about experiences with existing sources of information, the perceived importance of specific I&T content on CAM, previous participation in training courses on CAM, preferences regarding the forms I&T should take, personal information (sociodemographic data, professional education, current occupation), and professional experience in oncology care and with patients using CAM as well as attitudes toward CAM.
Cognitive interviews with a GP, a MS, and an ON were used to test preliminary versions of the questionnaire for comprehensibility, and a programmed online version was piloted by a physician, a psycho-oncologist, and an ON.
Recruitment of Participants and Definition of the Analytical Sample
Our aim was to survey members of the 4 above-mentioned occupational groups that work in inpatient and outpatient oncology care, oncology rehabilitation centers, and counseling centers for cancer patients throughout Germany. From July 2013 to August 2014, we contacted scientific medical societies, German Cancer Society working groups, professional associations, educational institutions, and other national institutions. With only a few exceptions, these societies and institutions forwarded the study information letter and the invitation to participate to their members.
A total of 1257 individuals completed the online questionnaire between September 2013 and August 2014. A subsample of 128 participants did not have current working experience with cancer patients and were, therefore, excluded. Of the remaining 1129 survey participants, 80 individuals did not belong to any of the 4 targeted occupational groups; thus, the final analytical sample size was n = 1049.
Statistical Analyses
CCPs' preferences with respect to the subject matter and the different ways of providing CAM I&T were studied by means of descriptive statistics using SAS version 9.3. To investigate whether subgroups of individuals exist that show distinct preference patterns with regard to the I&T content, we performed latent class (LC) modeling 12 using CCPs' importance ratings (on a 4-point Likert scale ranging from "very important" to "not at all important") for 9 specific items. These were the medical indication for the CAM therapy in question, patients' reasons for using it, case examples, summary of evidence from studies, appraisal of evidence from studies, study references, mechanisms of action, potential side effects, and tips for use. The exact wordings of the 9 items are given in Supplementary File 1 (supplementary material available at http://ict.sagepub.com/supplemental). By means of the SAS procedure LCA, 12 we used the EM algorithm 13 to estimate maximum likelihood parameters for models with 1 to 6 LCs. For each model, 100 different start value sets were used to avoid the issue of local maxima. Model selection-that is, the decision on the number of LCs (latent subgroups of individuals)-was based on the Bayesian Information Criterion. 14 To characterize the LCs in the selected model, we provide a figure that depicts the expected values of the identified LCs for the 9 indicators. This figure shows the preference patterns and displays the probabilistic class sizes-that is, the a priori probabilities of LC membership, which were directly estimated by the model. A short description of the LC model and an explanation of how the expected values in the figure were calculated from the model parameter estimates are provided in Supplementary File 2. To allow the frequency distributions of the identified LCs to be compared across the 4 occupational groups, individuals were allocated to the different LCs according to the maximum posterior probability rule-that is, they were assigned to the class to which the probability that they belonged was highest.
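For readers who want to reproduce the general logic of this latent class procedure outside of SAS, a minimal Python/NumPy sketch is given below. It is not the PROC LCA implementation used in the study; the data matrix is a synthetic placeholder, and the number of random starts is reduced for speed.

```python
# Minimal latent class analysis (LCA) sketch illustrating the procedure described
# above (EM estimation, multiple random starts, BIC model selection, maximum-
# posterior class assignment). Not the SAS PROC LCA used in the study; the data
# below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, J, K = 1049, 9, 4                      # respondents, items, Likert categories
Y = rng.integers(0, K, size=(n, J))       # placeholder responses coded 0..K-1

def fit_lca(Y, C, n_starts=10, n_iter=500, tol=1e-8):
    """Fit an LCA with C latent classes by EM; return best (loglik, pi, rho, post)."""
    n, J = Y.shape
    onehot = np.eye(K)[Y]                 # one-hot responses, shape (n, J, K)
    best = None
    for _ in range(n_starts):
        pi = rng.dirichlet(np.ones(C))                # class sizes
        rho = rng.dirichlet(np.ones(K), size=(C, J))  # item-response probabilities
        ll_old = -np.inf
        for _ in range(n_iter):
            # E-step: log P(y_i, class c) = log pi_c + sum_j log rho_{c,j,y_ij}
            logp = np.log(pi)[None, :] + np.einsum('njk,cjk->nc', onehot, np.log(rho))
            m = logp.max(axis=1, keepdims=True)
            lik = np.exp(logp - m)
            ll = np.sum(m.ravel() + np.log(lik.sum(axis=1)))
            post = lik / lik.sum(axis=1, keepdims=True)   # posterior memberships
            # M-step
            pi = post.mean(axis=0)
            rho = np.einsum('nc,njk->cjk', post, onehot) / post.sum(axis=0)[:, None, None]
            rho = np.clip(rho, 1e-10, None)
            rho /= rho.sum(axis=2, keepdims=True)
            if ll - ll_old < tol:
                break
            ll_old = ll
        if best is None or ll > best[0]:
            best = (ll, pi, rho, post)
    return best

# Model selection over 1..6 classes via BIC = -2 logL + npar * ln(n)
results = {}
for C in range(1, 7):
    ll, pi, rho, post = fit_lca(Y, C)     # 100 starts in the study; fewer here for speed
    npar = (C - 1) + C * J * (K - 1)
    bic = -2 * ll + npar * np.log(n)
    results[C] = (bic, post)
    print(f"C={C}: logL={ll:.1f}, BIC={bic:.1f}")

C_best = min(results, key=lambda C: results[C][0])
assignments = results[C_best][1].argmax(axis=1)   # maximum posterior probability rule
```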
Results
A description of the sample in terms of sociodemographic and occupational characteristics, perceived importance of being well-informed with regard to CAM therapies, attitude toward CAM, and confidence in discussing CAM with cancer patients is provided in Table 1. Among the 4 occupational groups, MSs make up the largest group (n = 437), but even the smallest group (ONs/MAs) contains n = 159 participants. Participants in each group come from at least 15 of the 16 German federal states, and members of all 4 groups also have considerable experience of working with cancer patients. GPs have the greatest experience of working with them (mean = 20 years; SD = 9), and 77% of POs/SWs said that cancer patients make up a very large or large share of their patients. The share of individuals who consider a very large or large proportion of their cancer patients to be interested in CAM varies considerably between the 4 groups (range = 36%-66%) and is highest among GPs and lowest among ONs/MAs. More than half the survey participants strongly agree that being well-informed with respect to CAM is important for their daily work. This proportion is especially large among GPs and lowest among POs/SWs. Confidence in discussing CAM with cancer patients was low in all 4 occupational groups, with GPs being the most confident.
Information and Training Needs
All 4 groups report frequently needing further information on CAM in their daily working lives (see Figure 1). In the total sample, the proportion of individuals with "very frequent" or "frequent" information needs amounts to 58%. When "occasional" information needs are included, the proportion rises to as much as 92%.
With regard to the subject matters to be included in information materials and training programs on CAM, an "overview of CAM therapies for cancer patients" was rated to be "very important" by nearly 74% of the total sample (range across the 4 occupational subsamples: 70%-84%). In contrast, "very important" ratings with regard to information on the 25 named individual CAM therapies ranged from 5% (Bach flower remedies) to 55% (relaxation techniques/ meditation) in the total sample. Furthermore, the 4 occupational groups do not vary much as regards the 5 CAM therapies they consider most important. In the total sample, these are (1) relaxation techniques/meditation (55% "very important" ratings; range across the 4 occupations: 51%-62%), (2) herbal drugs (44%; range: 40%-55%), (3) nutritional supplements, vitamins and trace elements (39%; range: 36%-46%), (4) homoeopathy (39%; range: 29%-61%), and (5) mistletoe therapy (39%; range: 31%-54%). Many ONs/MAs also regard it as very important that I&T includes information on compresses (45%) and aromatherapy (41%), whereas POs/SWs regard visualization (52%) as very important as well (see upper part of table in Supplementary File 3). With regard to the subject matter of I&T, survey participants were also asked to rate the importance of 13 potential fields of application for CAM therapies, all of which were disease-or therapy-related symptoms cancer patients are known to often exhibit. A large majority of survey participants thought that every one of the fields of application was a "very important" topic for CAM I&T. In the total sample, the 5 most important application areas were (1) fatigue (75%; range across the 4 occupational groups: 72%-80%), (2) tumor-related pain (73%; range: 69%-87%), (3) psychological afflictions such as anxiety or depression (72%; range: 66%-81%), (4) nausea and vomiting (68%; range: 62%-89%), and (5)
Latent Subgroups of Individuals With Distinct Preference Patterns Regarding Content of CAM I&T
When using LC analysis to assess the importance ratings ascribed by participants to 9 items that CAM I&T could potentially focus on, 5 distinct LCs (subgroups) were identified. The preference patterns characterizing these 5 LCs can be seen in Figure 2. All LCs attached considerable importance to "medical indication," "tips for usage," and "potential side effects." They differed, however, in terms of mean overall importance ratings as well as in the perceived importance of "scientific evidence" on one hand and "patients' reasons" (for CAM use) and "case examples" on the other. Individuals belonging to the largest LC (π_1 = 29%) are likely to consider each of the 9 items "very important" and can hence be characterized as "very interested in all content." Those belonging to the third largest LC (π_3 = 22%) tend to rate all 9 items as "rather important" and may thus be labeled "moderately interested in all content." Whereas the second largest LC (π_2 = 28%) can be characterized as "especially interested in scientific evidence," the fourth largest LC (π_4 = 16%) appears to be "particularly interested in medical indication." The fifth and smallest LC (π_5 = 5%) can be characterized as "moderately interested in patients' reasons and case examples." As demonstrated in Figure 3, each of the 5 latent subgroups is present within each of the 4 occupational groups. LC 1 ("very interested in all content") is the most frequently occurring latent subgroup among ONs/MAs (38%) as well as among POs/SWs (31%). It is also often observed among MSs (31%). In this group, however, LC 2 ("especially interested in scientific evidence") is slightly larger (35%). In contrast, LC 3 ("moderately interested in all content") constitutes the largest latent subgroup among GPs.
How Should Information on CAM Be Presented and What Form Should Training Programs Take?
When asked how they would prefer information on CAM to be provided, CCPs most frequently chose "information platforms on the internet" (67% of the total sample), "lectures on specific CAM-related topics" (62%), and "regular e-mail newsletters" (62%). Furthermore, publications in scientific journals are highly regarded by MSs as well as by ONs/MAs, whereas many GPs favor the opportunity to contact experts. (For further details regarding preferred sources of information, see the upper part of the table in Supplementary File 4.) Survey participants were also asked what functions they would like to see if a new information platform was developed for the internet. The 2 most popular functions were "keyword search for specific disease-related symptoms" (81%) and "keyword search for specific CAM therapies" (75%).
With regard to training programs, "lectures (eg, as part of a conference)" (72%) were preferred to other forms of education, followed by "face-to-face workshops" (63%) in second and "continuing education courses that are accessible at all times on the internet" (45%) in third place (see lower part of table in Supplementary File 4 for further details regarding preferred types of training).
Discussion
The 4 occupational groups participating in the German nationwide survey prefer CAM I&T to be provided in the form of lectures, information platforms on the internet, face-to-face workshops, and e-mail newsletters. All 4 groups considered the most important subjects to be an "overview of different CAM therapies for cancer patients" and a variety of disease- and therapy-related symptoms as potential application areas of CAM in oncology (especially "fatigue," "tumor-related pain," and "psychological afflictions"). The 3 CAM therapies that participants thought it was most important that CAM I&T focus on were "relaxation techniques/meditation," "herbal drugs," and "nutritional supplements/vitamins/trace elements." When examining the ratings given by CCPs for 9 items relating to the content of I&T for particular CAM therapies, it was possible to identify 5 LCs with distinct preference patterns. These 5 latent subgroups not only differed in their mean overall importance ratings but also in the importance attached to specific content such as "scientific evidence" on one hand and to "patients' reasons" and "case examples" on the other. Interestingly, all 5 latent subgroups were found to be present in each occupational group, which means that there is substantial heterogeneity not only between but also within the 4 occupations.
Even though we took great care to include CCPs from all main areas of cancer care in Germany, one of the limitations of this study relates to convenience sampling. Moreover, the questionnaire took participants about 20 minutes to fill in, which may have been a barrier for those who are not especially interested in CAM. It is, therefore, uncertain to what extent noteworthy characteristics of survey participants, such as the high importance they attached to being well-informed about CAM and their very frequent information needs, can be transferred to the total population of CCPs in Germany. Acknowledging that future users of CAM I&T will probably be those who are interested in CAM, the present findings can nonetheless be considered suitable for helping in the development of demand-based information materials and training programs. Strengths of this study concern the careful, mixed-methods-based development of the questionnaire and the focus on CCPs' actual needs and preferences with regard to the content and type of CAM I&T. The present findings thus fill a gap left by previous studies investigating CCPs' knowledge, attitudes, and practices with regard to CAM 10,11,15-27 rather than specific CAM I&T needs. Furthermore, our sample of CCPs from all over Germany was large enough to conduct subgroup analyses. This enabled us to gain insights into the specific needs and preferences of 4 occupational groups. By applying LC analysis to investigate interindividual heterogeneity independently of predefined, known characteristics, it was also possible to identify 5 latent subgroups of CCPs with differing needs with respect to the content of CAM I&T.
Based on these findings, we would recommend the development of CAM I&T for CCPs that first presents a general overview of different CAM therapies used by cancer patients. More specific I&T programs should then provide information on CAM therapy options for symptoms associated with cancer disease and antitumor therapy-for example, for fatigue, tumor-related pain, and psychological afflictions. Alternatively, they could focus on specific CAM therapies such as relaxation techniques or phytotherapy. CAM I&T should, of course, also take the specific needs and preferences of the targeted occupational groups into account. A small group of GPs (about 9%) clearly preferred case reports over learning about studies in the field (LC5). We consider it necessary to inform all health care personnel who are skeptical about scientific studies (irrespective of their preferences) of study results in order to enable them to give their patients the chance to make informed decisions. Nonetheless, CAM I&T for CCPs should be flexible enough to consider participants' interests independently of their occupation. This especially applies to "scientific evidence," "patients' reasons," and "case examples," to which varying degrees of importance were attached, depending on latent subgroup membership, whereas information on "medical indication," "tips for usage," and "potential side effects" should always be provided.
It may, however, not be enough to enhance CCPs' knowledge of CAM. To allow for the development of trust and openness between patients and CCPs, a reluctance to discuss CAM therapies must be overcome. 7,28 It is, therefore, also important to train CCPs in initiating a sensible discussion on CAM by asking the "right" questions. 29 As an example, a recent study by Ben-Arye et al 29 found that the disclosure rate of dietary supplement (DS) use in cancer patients can be increased by naming DS options and by using DS-related keywords such as "teas" and "infusions." Likewise, the "how" in communicating treatment options should not be neglected but rather trained in workshops on CAM. Such training programs could follow the guidance provided by the model of Frenkel et al 5 on effective patient-doctor communication on CAM.
"year": 2016,
"sha1": "1c594c594c545f3f8f2c7fa8120dc3e95a89e932",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/1534735416666372",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c594c594c545f3f8f2c7fa8120dc3e95a89e932",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
51821590 | pes2o/s2orc | v3-fos-license | Small world of Ulam networks for chaotic Hamiltonian dynamics
We show that the Ulam method applied to dynamical symplectic maps generates Ulam networks which belong to the class of small world networks appearing for social networks of people, actors, power grids, biological networks and Facebook. We analyze the small world properties of Ulam networks on examples of the Chirikov standard map and the Arnold cat map showing that the number of degrees of separation, or the Erdös number, grows logarithmically with the network size for the regime of strong chaos. This growth is related to the Lyapunov instability of chaotic dynamics. The presence of stability islands leads to an algebraic growth of the Erdös number with the network size. We also compare the time scales related to the Erdös number and the relaxation times of the Perron-Frobenius operator showing that they have a different behavior.
I. INTRODUCTION
In 1960 Ulam proposed a method [1], known now as the Ulam method, for generating a discrete, finite-cell approximate of the Perron-Frobenius operator of a chaotic map in a continuous phase space. The transition probabilities from one cell to the others are determined from an ensemble of trajectories which generates the probabilities of Markov transitions [2] between cells after one map iteration. In this way the Ulam method produces Ulam networks with weighted probability transitions between nodes corresponding to phase space cells. For one-dimensional (1D) fully chaotic maps [3,4] the convergence of the discrete dynamics of this Ulam approximate of the Perron-Frobenius operator (UPFO) to the continuous dynamics, in the limit of cell size going to zero, has been mathematically proven in [5]. The properties of the UPFO were studied for 1D [6][7][8] and 2D [9][10][11][12] maps. It was shown that the UPFO finds useful applications for the analysis of the dynamics of molecular systems [13] and of coherent structures in dynamical flows [14]. Recent studies [15,16] demonstrated similarities between the UPFO, corresponding to Ulam networks, and the Google matrix of complex directed networks of the World Wide Web, Wikipedia, world trade and other systems [17][18][19].
From a physical point of view the finite cell size of the UPFO corresponds to the introduction of a finite noise with amplitude given by the discretization cell size. For dynamical maps with a divided phase space, like the Chirikov standard map [20], such noise leads to the destruction of invariant Kolmogorov-Arnold-Moser (KAM) curves [3,20,21], so that the original Ulam method does not operate correctly for such maps. However, the method can be used in a generalized form [22] in which the Markov transitions are generated by trajectories starting only inside one chaotic component, thus producing Markov transitions between cells belonging only to this chaotic component. Due to ergodicity on the chaotic component, even a single long chaotic trajectory can generate the complete UPFO, avoiding the destruction of KAM curves and stability islands. It was also shown numerically that the spectrum of the finite size UPFO matrix converges to a limiting density at cell size going to zero [22].
Certain similarities between the spectrum of UPFO matrices of Ulam networks and those of Google matrix of complex directed networks have already been discussed in the literature (see e.g. [19]). Here, we address another feature of Ulam networks showing that they have small world properties meaning that almost any two nodes are indirectly connected by a small number of links. Such a small world property with its six degrees of separation is typical for social networks of people [23], actors, power grids, biological and other networks [24][25][26]. Thus the whole Facebook network of about 700 million users has only four degrees of separation [27].
The paper is organized as follows: Section II presents the main properties of the two symplectic maps considered, the construction of Ulam networks is described in Section III, the small world properties of Ulam networks are analyzed in Section IV, and the relaxation rates of the coarse-grained Perron-Frobenius operator are considered in Section V. The obtained results are discussed in the last Section VI.
II. DYNAMICAL SYMPLECTIC MAPS
We analyze the properties of Ulam networks for two examples being the Chirikov standard map [20] and the Arnold cat map [21]. Both maps capture the important generic features of Hamiltonian dynamics and find a variety of applications for the description of real physical systems (see e.g. [28]).
The Chirikov standard map has the form:

p̄ = p + (K/2π) sin(2πx) (mod 1) , x̄ = x + p̄ (mod 1) . (1)

Here bars mark the variables after one map iteration and we consider the dynamics to be periodic on a torus so that 0 ≤ x ≤ 1, 0 ≤ p ≤ 1. It is argued that the last KAM curve is the one with the golden rotation number, being destroyed at the critical value K_c = K_g = 0.971635406... [29]. Indeed, further mathematical analysis [30] showed that all KAM curves are destroyed for K ≥ 63/64 while the numerical analysis [31] showed that K_c − K_g < 2.5 × 10^{-4}. Thus it is most probable that K_c = K_g and the golden KAM curve is the last to be destroyed (see also the review [32]). The Arnold cat map [21] of the form

p̄ = p + x (mod L) , x̄ = x + p̄ (mod 1) , (2)

is the cornerstone model of classical dynamical chaos [3]. This symplectic map belongs to the class of Anosov systems, it has the positive Kolmogorov-Sinai entropy h = ln[(3 + √5)/2] ≈ 0.9624 and is fully chaotic [3]. Here the first equation can be seen as a kick which changes the momentum p of a particle on a torus while the second one corresponds to a free phase rotation in the interval −0.5 ≤ x < 0.5; bars mark the new values of the canonical variables (x, p). The map dynamics takes place on a torus of integer length L in the p-direction with −L/2 < p ≤ L/2. The usual case of the Arnold cat map corresponds to L = 1 but it is more interesting to study the map on a torus of longer integer size L > 1 generating a diffusive dynamics in p [33,34]. For L ≫ 1 the diffusive process for the probability density w(p, t) is described by the Fokker-Planck equation

∂w(p, t)/∂t = (D/2) ∂²w(p, t)/∂p² , (3)

with the diffusion coefficient D ≈ ⟨x²⟩ = 1/12 and t being the iteration time. As a result, for times t ≫ L²/D the distribution converges to the ergodic equilibrium with a homogeneous density in the plane (x, p) [34].
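As a rough, purely illustrative check of this diffusive behavior (not taken from the paper), the following short Python sketch iterates the cat map in the form written above for an ensemble of initial conditions and compares the growth of the momentum spread with the estimate D ≈ ⟨x²⟩ = 1/12; since successive kicks are deterministic and correlated, the measured ratio is only expected to be close to, not exactly equal to, 1/12.

```python
# Rough numerical check of the diffusive growth of the momentum spread for the
# cat map written above: pbar = p + x (mod L), xbar = x + pbar (mod 1), with
# centered intervals. Because L is an integer, tracking the unwrapped momentum
# produces the same x-dynamics as the map on the torus.
import numpy as np

rng = np.random.default_rng(1)
n_traj = 10000
x = rng.uniform(-0.5, 0.5, n_traj)        # ensemble of initial conditions
p = rng.uniform(-0.5, 0.5, n_traj)
p0 = p.copy()

for t in range(1, 51):
    p = p + x                              # kick (unwrapped momentum)
    x = (x + p + 0.5) % 1.0 - 0.5          # free rotation, recentred to [-0.5, 0.5)
    if t in (10, 20, 50):
        D_est = np.mean((p - p0) ** 2) / t
        print(f"t={t:3d}  <(p-p0)^2>/t = {D_est:.4f}  (compare with 1/12 = {1/12:.4f})")
```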
III. CONSTRUCTION OF ULAM NETWORKS
We construct the Ulam network and the related UPFO for the map (1) as described in [22]. First we reduce the phase space to the region 0 ≤ x < 1 and 0 ≤ p < 0.5 exploiting the symmetry x → 1 − x and p → 1 − p. The reduced phase space is divided into M × (M/2) cells with certain integer values M in the range 25 ≤ M ≤ 3200. To determine the classical transition probabilities between cells we iterate one very long trajectory of 10^{12} iterations starting inside the chaotic component at x = p = 0.1/(2π) and count the number of transitions from a cell i to a cell j. Depending on the value of K it is possible that there are stable islands or other non-accessible regions where the trajectory never enters. This corresponds to certain cells that do not contribute to the Ulam network. In practice, we perform trajectory iterations only for the largest two values M = 3200, M = 2240 and apply an exact renormalization scheme to reduce successively the value of M by a factor of 2 down to M = 25 and M = 35 (for these two cases the vertical cell-number is chosen as (M + 1)/2 with the top line of cells only covering half cells). We consider the dynamics for four different values of K: the golden critical value K = K_g = 0.971635406, K = 5, K = 7 and K = 7 + 2π. There are small stability islands for the last three cases. The original Ulam method [1] computes the transition probabilities from one cell to other cells using many random initial conditions per cell, but for the Chirikov standard map this would imply that the implicit coarse graining of the method produces a diffusion into the stable islands or other classically non-accessible regions, which we want to avoid. The typical network size (of contributing nodes/cells) is approximately N_d ≈ M²/2, reduced by the cells belonging to stable islands or other regions not visited by the chaotic trajectory. For the Arnold cat map (2) we divide the phase space −0.5 ≤ x < 0.5 and −L/2 ≤ p < L/2 into M × LM cells where in this work we mostly choose L = 3 and M is taken from a sequence of prime numbers starting with M = 29 and increasing M roughly by a factor of 1.4 in order to minimize certain arithmetic effects from non-prime numbers. Since the Arnold cat map does not have any inaccessible regions, both variants of the Ulam method, with many random initial conditions or one long trajectory (using a suitable irrational choice of the initial position), work very well.
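A compact Python sketch of this generalized Ulam method for the standard map is given below. It is not the code used in the paper; the grid size and trajectory length are illustrative placeholders (far below the 10^{12} iterations quoted above), and the folding into the reduced phase space as well as the column-stochastic normalization (Google-matrix convention) are one simple way to implement the construction.

```python
# Sketch of the generalized Ulam method for the Chirikov standard map: one long
# chaotic trajectory, transition counts between cells of the reduced phase space,
# column-normalized sparse UPFO. Parameters are illustrative only.
import numpy as np
from scipy.sparse import csr_matrix, diags

K, M = 7.0, 100
n_iter = 10**6                                 # the paper uses up to 10^12 iterations
x = p = 0.1 / (2 * np.pi)                      # start inside the chaotic component

def cell_index(x, p):
    # reduced phase space 0 <= x < 1, 0 <= p < 0.5 divided into M x (M//2) cells
    return int(x * M) * (M // 2) + min(int(p * M), M // 2 - 1)

rows, cols = [], []
i_old = cell_index(x, p)
for _ in range(n_iter):
    p = (p + K / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0
    x = (x + p) % 1.0
    # fold into the reduced phase space with the symmetry (x, p) -> (1 - x, 1 - p)
    xr, pr = (x, p) if p < 0.5 else ((1.0 - x) % 1.0, 1.0 - p)
    i_new = cell_index(xr, pr)
    rows.append(i_new); cols.append(i_old)
    i_old = i_new

N_cells = M * (M // 2)
counts = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(N_cells, N_cells))
visited = np.flatnonzero(np.asarray(counts.sum(axis=0)).ravel())   # contributing cells
S = counts[visited, :][:, visited]
colsum = np.asarray(S.sum(axis=0)).ravel()
colsum[colsum == 0] = 1.0
S = S @ diags(1.0 / colsum)                    # column-stochastic UPFO
print(f"network size N_d = {len(visited)} out of {N_cells} cells")
```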
However, due to the exact linear form of (2), it is even possible to compute the transition probabilities directly, very efficiently and exactly (without any averaging procedure). Details of this procedure for the Arnold cat map, together with a discussion of related properties of the UPFO for the standard map, are given in Appendix A. The results for the UPFO of the cat map given below in this work have all been obtained with the exact UPFO computed in this way.
IV. SMALL WORLD PROPERTIES OF ULAM NETWORKS
To study the small world properties of the Ulam networks we compute a quantity which we call the Erdös number N_E (or number of degrees of separation) [26,35]. This number represents the minimal number of links necessary to reach indirectly a specific node, via other intermediate nodes, from a particular node called the hub.
Here the (non-vanishing) transition probabilities are not important and only the existence of a link between two nodes is relevant. The computation of N_E for all nodes can be done very efficiently for large networks by a breadth-first procedure: one keeps a list of the nodes with the same N_E found in the last iteration and uses it to construct a new list, with N_E increased by unity, of all nodes that are connected to a node of the current list and do not yet have a valid smaller value of N_E (assigned in a former iteration). After each iteration the list is replaced by the new list; the initial list of this procedure at N_E = 0 contains only the hub node.

Fig. 1 shows the probability distributions w_E(N_E) of the Erdös number N_E (using a hub cell at x = 0.1/(2π)) together with the distributions of the number of links per node N_l, whose maximal value is N_l^{(max)} = 18 (32) for K = 7 (K = 7 + 2π). This behavior can be understood in the framework of the discussion in Appendix A, showing that the image of an initial square cell is (up to non-linear corrections) a parallelogram with extreme points (relative to a certain reference cell) Δs (ξ_0, ξ_0) and Δs (ξ_0 + A + 2, ξ_0 + A + 1), where ξ_0 is a quasi-random uniformly distributed quantity in the interval ξ_0 ∈ [0, 1[ and Δs = 1/M is the linear cell size. Here we assume that A = K cos(2πΔs x_i) > 0 (the argumentation for A < 0 is rather similar with A → |A|). The parallelogram covers nearly always two cells in the horizontal direction and ⌈ξ_0 + A + 1⌉ ≥ 2 cells in the diagonal direction, where ⌈u⌉ is the ceil function of u, i.e. the smallest integer larger than or equal to u. Therefore typical values of N_l = 2⌈ξ_0 + A + 1⌉ are indeed even numbers with 4 ≤ N_l ≤ N_l^{(max)}, in agreement with the values in Fig. 1. Actually, for K = 7 + 2π ≈ 13.283 we also understand that the probability for N_l = N_l^{(max)} = 32 is quite strongly reduced because even for the maximal value A = K we need the offset to satisfy ξ_0 > 1 − 0.283, which is statistically less likely. Apart from this there is also a slight increase of histogram bins with larger values of N_l due to the cosine factor in A applied on a uniformly distributed phase. For sufficiently large M this argumentation does not depend on system/network size. We mention that for very small values of M there are deviations from this general picture, with some small probabilities for odd values of N_l due to boundary effects, also related to stable islands and inaccessible phase space regions (especially for K = K_g). For the largest values M = 3200, 2240 and K = 7 + 2π the figure shows some small deviations due to statistical fluctuations since the average number of trajectory transitions per link (out of 10^{12} iterations) is only ≈ 6000, which is rather modest. Furthermore, the data for K = 5 (not shown in Fig. 1) are also in agreement with this general picture with N_l^{(max)} = 14 and typically N_E ≈ 11 ± 3. According to Fig. 2 the average Erdös number for the three cases with K ≥ 5 behaves approximately as ⟨N_E⟩ ≈ C_1 + C_2 ln(N_d) (4), where C_1, C_2 are some numerical constants which have no significant dependence on the hub choice as long as it is not close to some stable island or similar. The typical values of C_2 are close to h^{-1} with h = ln(K/2) being the Lyapunov exponent of the standard map (for K > 4) [20]. This is due to the theoretically expected behavior N_cell ∼ exp(h N_E), where N_cell is the number of cells indirectly connected to the hub after N_E iterations. This theoretical behavior is rather well confirmed by the data of the left panel of Fig. 1 (when presented with a logarithmic y-axis and multiplied by N_d).
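A minimal sketch of the breadth-first procedure for the Erdös number described at the beginning of this section is given below; it assumes the sparse, column-stochastic UPFO matrix S from the previous sketch (only the existence of links is used, not their weights), and the choice of the hub index is arbitrary.

```python
# Breadth-first computation of the Erdös number N_E for all nodes of an Ulam
# network, following the iterative list procedure described above. The weighted
# UPFO 'S' is used only as an unweighted adjacency structure.
import numpy as np
from scipy.sparse import csr_matrix

def erdos_numbers(S, hub):
    A = csr_matrix(S, copy=True)
    A.data[:] = 1.0                       # keep only link existence
    A = A.tocsc()                         # column j lists the nodes reachable from j
    n = A.shape[0]
    N_E = np.full(n, -1, dtype=int)       # -1 marks nodes not yet reached
    N_E[hub] = 0
    frontier = [hub]
    level = 0
    while frontier:
        level += 1
        new_frontier = []
        for j in frontier:
            targets = A.indices[A.indptr[j]:A.indptr[j + 1]]
            for i in targets:
                if N_E[i] < 0:
                    N_E[i] = level
                    new_frontier.append(i)
        frontier = new_frontier
    return N_E                            # Erdös number of every node (hub = 0)

# Example usage with the UPFO 'S' from the previous sketch:
# N_E = erdos_numbers(S, hub=0)
# print("average Erdös number:", N_E[N_E >= 0].mean(), " max:", N_E.max())
```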
The exponential increase saturates at N_E = ⟨N_E⟩ with exp(h⟨N_E⟩) ≈ α N_d, where α is a constant of order unity, implying C_2 = 1/h and C_1 = ln(α)/h.
We have performed a similar analysis of N_l and N_E also for the Arnold cat map. Here the link number N_l is constant for all nodes, with values 4, 5 or 6 depending on the parity of M or LM, as explained in Appendix A. The behavior of ⟨N_E⟩ is presented in Fig. 3, showing again a logarithmic growth with the network size of the type (4).
We have performed a similar analysis of N l and N E also for the Arnold cat map. Here the link number N l is constant for all nodes with values 4, 5 or 6 depending on the parity of M or LM as explained in Appendix A. The behavior of N E is presented in Fig. 3 showing We also computed the restricted average of N E over the center square box (out of L = 3 squares) with |x| < 0.5 and |p| < 0.5 which turns out to be quite close to the full average with |x| < 0.5 and |p| < L/2 showing that for the Erdös number the diffusive dynamics is apparently not very relevant. To understand the spatial structure of the Erdös number of nodes we show in top panels of Fig. 4 density plots of the phase space probability distribution after a few iterations of the UPFO for the map (1) at K = 7 and M = 400 applied to an initial cell state. One can clearly identify the chaotic spreading along a one-dimensional manifold which fills up the phase space due to refolding induced by the periodic boundary conditions. The lower panels show the full spatial distribution of the Erdös number by a color plot using red (green, light blue) for nodes with smallest (medium, largest) Erdös number. Dark blue is used for non-accessible nodes in the stable islands which have no Erdös number. Furthermore, for a better visibility we consider a full square box with 0 ≤ x < 1 and −0.5 ≤ p < 0.5 where the data for p < 0 is obtained by the transformation x → 1 − x and p → −p from the data with p ≥ 0. In this way the two small stable islands at p = 0 for K = 7 have a full visibility (the influence of orbits sticking near these islands on Poincaré recurrences is discussed in [36]). Nodes with the smallest Erdös number follow the same one-dimensional unstable manifold as the chaotic stretching and nodes with maximal Erdös number are close to the outer boundaries of the stable islands which are last reached when starting from the hub. Fig. 5 shows the probability distributions of N E and N l for the standard map at the golden critical value K = K g = 0.971635406 with a complicated structure of stable islands inside the main chaotic component. The distribution of N E is now rather large with non-vanishing probabilities at values N E ∼ 10 2 and several local maxima due to the complicated phase space structure with different layers of initial diffusive spreading (limited by the golden curve). The distribution of N l is mostly concentrated on the two values N l = 4 and 6 in agreement with the above discussion since N (max) l = 2 2 + K g = 6. The spatial distribution of N E for K = K g (using a hub cell at x = 0.1/(2π) and p = 0) is illustrated in the top panel of Fig. 6 by the same type of color plot used for the lower panels of Fig. 5. In this case N E follows clearly the (very slow) diffusive spreading with smallest N E values in the layers close to the hub and maximal N E values closest to the top layers just below the golden curve.
The bottom panel of Fig. 6 shows the N E color plot (with hub cell at p = x = 0) for the Arnold cat map at L = 3 and the rather small value M = 59 for a better visibility. As for the case K = 7 of map (1) the Erdös number follows a one-dimensional unstable manifold (a straight refolded line for the cat map) and the chaotic spreading reaches quite quickly the two outer square boxes (with |p| > 0.5). We have verified that this behavior is also confirmed by the corresponding N E color plots at larger values of M . The evolution of the nodes with smallest N E values does not follow the classical diffusion which can be understood by the fact that the Erdös number only cares about reaching a cell as such even with a very small probability while the diffusive dynamics applies to the evolution of the probability occupation of each cell. This is similar to a one-dimensional random walk with a diffusive spreading ∼ √ Dt of the spatial probability distribution while the Erdös number (i.e. set of "touched" cells) increases ballistically in time ∼ t. Fig. 7 shows the dependence of the average N E on N d for K = K g which follows a power law N E ∼ N b d with b = 0.297 ± 0.004. For this case the logarithmic behavior N E ∼ ln N d (4) observed for K ≥ 5 is not valid due to the small Lyapunov exponent and complicated phase space structure with slow diffusive spreading and complications from orbits trapped around stable islands. To explain the obtained dependence N E ∼ N 0.3 d we give the following heuristic argument. According to the renormalization description of the critical golden curve the typical time scale of motion in the vicinity of a certain resonance with the Fibonacci approximate of the golden rotation number r n = q n−1 /q n → r g = ( √ 5 − 1)/2 with q n = 1, 2, 3, 5, 8, ... is t n ∼ q n (same for the symmetric golden curve with r = 1 − r g ) [29,37]. At the same time the area of one cell close to the resonance q n with typical size 1/q 2 n scales approximately as A n ∼ 1/(q 2 n t n ) ∼ 1/q 3 n . Since a cell of the Ulam network has an area 1/N d ∼ A n we obtain that t n ∼ N 1/3 d . We expect that the typical time to reach the resonance with largest q n value that can be resolved by the UPFO discretization is of the order of the most probable Erdös number such that N E ∼ t n ∼ N 1/3 d leading to b = 1/3 comparable with the obtained numerical value. Of course, this handwaving argument is very simplified since in addition to Fibonacci resonance approximates there are other resonances which play a role in long time sticking of trajectories and algebraic decay of Poincaré recurrences (see e.g. [38][39][40]). Also as discussed above the Erdös number is for a network with equal weights of transitions while in the UPFO for the Chirikov standard map the transition weights are different.
Indeed, since the Erdös number does not depend on the weight w_l of a link it follows in principle a different dynamics than the UPFO applied on an initial localized state. Therefore we also analyzed the statistical distribution of the link weights w_l. Fig. 8 shows the integrated weight distribution P_w(w_l) (fraction of links with weight below w_l) of the UPFO for the Chirikov standard map for different values of K and M. The vertical lines at some minimal value correspond to the smallest possible weight values w_l^{(min)} = N_d/10^{12}, being the typical inverse number of trajectory crossings per cell, and are due to the finite length of the iteration trajectory. Apart from this, in the regime w_l^{(min)} < w_l < 0.1, the behavior is very close to a power law P_w(w_l) ∼ w_l^b with some exponent rather close to b ≈ 0.5, depending on K values and fit ranges. This leads to a square root singularity in the probability distribution p_w(w_l) = dP_w(w_l)/dw_l ∼ w_l^{-0.5}.
To understand this dependence we recall that, according to the discussion of Appendix A, the weights w_l are given by the relative intersection areas of a certain parallelogram (the image of one Ulam cell under the map) with the target Ulam cells, and that the bottom corner point of the parallelogram (relative to its target cell) is given by ∆s(ξ_0, ξ_0), where ξ_0 ∈ [0, 1[ has a uniform quasi-random distribution (see also the bottom right panel of Fig. 11 in Appendix A). If 1 − ξ_0 ≪ 1 this provides the triangle area (relative to the cell size ∆s²) w_l = C(A)(1 − ξ_0)²/2, with a coefficient C(A) depending on the parameter A = K cos(2π ∆s x_i) and on whether we consider the triangle in the cell around the lowest corner point or in the cell right next to it (which may have a smaller area depending on A). Since ξ_0 is uniformly distributed we find (after an additional average over the initial cells, i.e. over the parameter A) immediately that p_w(w_l) ∼ w_l^(−1/2). It is also possible that the top corner point of the parallelogram (instead of the bottom corner point) produces the minimal weight (among all target cells for a given initial cell). However, the top corner point also lies on the diagonal (relative to its target cell) and therefore produces the same square-root singularity.
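This argument can be checked with a minimal Monte Carlo sketch (not from the paper; the coefficient C(A) is simply set to 1 here): drawing ξ_0 uniformly and forming w = (1 − ξ_0)²/2 reproduces the cumulative behavior P_w(w) ∼ √w, and hence the w^(−1/2) density singularity.

import numpy as np

rng = np.random.default_rng(0)
xi = rng.random(1_000_000)            # uniform xi_0 in [0, 1)
w = (1.0 - xi) ** 2 / 2.0             # smallest relative intersection (triangle) area
for w0 in (1e-4, 1e-3, 1e-2, 1e-1):
    emp = np.mean(w < w0)             # empirical integrated distribution P_w(w0)
    print(f"w0={w0:g}  P_w={emp:.4f}  sqrt(2*w0)={np.sqrt(2.0 * w0):.4f}")

The empirical cumulative fraction matches √(2 w_0) at small w_0, i.e. P_w ∝ w^(1/2) and p_w ∝ w^(−1/2).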
The appearance of the singularity is certainly very interesting. However, this singularity is integrable, and most links still have weights w_l comparable to their typical value w_l ∼ 1/N_l, given by the relative intersection areas of the parallelogram with the other target cells. Furthermore, despite this singularity, the dynamics of the Erdös number seems to follow qualitatively quite well the chaotic dynamics induced by the direct application of the UPFO, as can be seen for example in Figs. 4 and 6.
The results of this Section show that in the regime of strong chaos the Ulam networks are characterized by small values of the Erdös number, N_E ∼ ln N_d, growing only logarithmically with the network size N_d. However, the presence of stability islands can modify the asymptotic behavior, leading to a more rapid growth with N_E ∼ N_d^(0.3), as is the case for the critical golden curve of the Chirikov standard map where about half of the total measure is occupied by stability islands.
V. SMALL RELAXATION RATES OF UPFO
The average (or maximal) Erdös number gives the time scale at which the UPFO touches most (or all) Ulam cells when applied to an initial state localized at one cell (the hub), but it does not take into account the probability density associated with the target cells, which may be very small for the cells with the largest N_E at iteration times t ∼ N_E^(max). However, the direct iterated application of the UPFO to a typical localized initial state converges exponentially toward a (roughly) uniform stationary distribution (over the accessible cells) as ∼ exp(−γ_1 t/2), where the decay rate is given by γ_1 = −2 ln(|λ_1|) in terms of the second eigenvalue λ_1 of the UPFO (the first eigenvalue is always λ_0 = 1 for a non-dissipative map, and its eigenvector is the stationary homogeneous density distribution over the chaotic component of the phase plane).
First results for γ_1 were given for the Chirikov standard map in [22] and for the Arnold cat map in [34]. Here we present new results for γ_1 obtained by the Arnoldi method for additional values of K and larger M. In most cases an Arnoldi dimension of n_A = 1000 (see Ref. [22] for computational details) is largely sufficient to obtain numerically precise values of γ_1 as well as a considerable number of the largest complex eigenvalues. Only for the Chirikov standard map at K = K_g, where the eigenvalue density close to the complex unit circle is rather elevated, we used n_A = 3000 (4000) for M ≤ 1600 (1600 < M ≤ 3200). Fig. 9 shows two different representations of the dependence of γ_1 on M or N_d ∼ M² for the standard map and our usual values K = K_g, 5, 7, 7 + 2π. For K ≥ 5 the plot of the top left panel seems to indicate that γ_1^(−1) ∼ C_1 + C_2 ln(N_d) (with two different regimes for K = 5), possibly indicating that γ_1 ∼ 1/ln(N_d) → 0 for very large system size. However, the alternative plot of γ_1 versus 1/M in the top right panel might indicate a finite limit of γ_1 for M → ∞, at least for K = 7, which has a very particular classical behavior due to the stable island [31] visible in the bottom left panel of Fig. 4. We think that the numerical data do not allow a clear conclusion as to whether the infinite-size limit of γ_1 is vanishing or finite, since the possible logarithmic behavior may manifest itself only at extremely large values of M or N_d that are numerically inaccessible.
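As a hedged illustration (not the authors' code), the rate γ_1 discussed here can be extracted from a sparse UPFO matrix with SciPy's ARPACK-based Arnoldi routine; the matrix S below is assumed to be available as a column-stochastic scipy.sparse matrix built by one of the Ulam-method variants, and n_eigs plays a role loosely analogous to the Arnoldi dimension n_A.

import numpy as np
from scipy.sparse.linalg import eigs

def leading_decay_rate(S, n_eigs=10):
    # largest-modulus eigenvalues of the (sparse) UPFO; lambda_0 = 1 for a
    # non-dissipative map, lambda_1 gives the slowest relaxation mode
    vals = eigs(S, k=n_eigs, which="LM", return_eigenvectors=False)
    vals = vals[np.argsort(-np.abs(vals))]
    gamma1 = -2.0 * np.log(np.abs(vals[1]))
    return gamma1, vals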
For K = K_g we confirm the power-law decay of γ_1 with the network size for N_d > 10^4, in agreement with the results of [22]. However, as discussed in [22], taking into account the data with N_d < 10^4 one may also try a more complicated fit using a rational function in 1/M, providing a different behavior γ_1^(−1) ∼ N_d^(0.5) ∼ M, but this would be visible only for extremely large, numerically inaccessible values of M. Thus for the case K = K_g we can safely conclude that γ_1 → 0 for M → ∞, in agreement with the power-law statistics of the Poincaré recurrence times at K = K_g.
Concerning the Arnold cat map, the very efficient algorithm to compute the UPFO described in Appendix A, combined with the Arnoldi method, allows us to treat rather large values of M, e.g. up to M = 983 corresponding to N_d ≈ 3 × 10^6. We recall that, due to the necessity to store ∼ n_A vectors of size N_d simultaneously, it is not possible to apply the Arnoldi method for values such as M = 14699, for which we were able to compute the Erdös number only using the network link structure. We find that, apart from λ_0 = 1, (nearly) all real and complex eigenvalues of the UPFO are doubly degenerate due to the symmetry p → −p and x → −x. Therefore we also implemented a symmetrized version of the UPFO for the cat map where cells at p_i < 0 are identified with the corresponding cells at p_i > 0 (and x_i → −x_i). This reduces N_d by roughly a factor of two (cells at p_i = 0 are kept as such) and lifts the degeneracy, allowing us to obtain more distinct eigenvalues at a given value of n_A. For small values of M the symmetrized version may miss a few eigenvalues, but at M = 983 we find that the spectra coincide numerically (for the amount of reliable eigenvalues which we were able to compute for the non-symmetrized UPFO). For the computation of γ_1 this point is not important, since n_A = 100 is already sufficient (for both the symmetrized and non-symmetrized UPFO), but we verified all γ_1 values also with n_A = 1000.

For the cat map on a long torus the diffusive model predicts a relaxation rate γ_1 = D(2π)²/L² = π²/(3L²) (with diffusion coefficient D = 1/12), which agrees quite accurately with our numerical values for L ≥ 3 (this was also seen in [34] for smaller M values). Only for L = 2 the numerical value of γ_1^(−1) is roughly a third larger than the theoretical value, which is not astonishing since the modest value L = 2 limits the applicability of the diffusive model.

Furthermore, as an illustration, the right panel of Fig. 10 shows the top spectrum of ∼4000 eigenvalues for the case M = 983, L = 3 obtained by the Arnoldi method with n_A = 5000 applied to the symmetrized UPFO. We note that, apart from the two top eigenvalues (λ_0 = 1 and λ_1 ≈ 0.834183), the spectrum is limited to a circle of radius ≈ 0.6 in the complex plane, with a quite particular pattern for the top eigenvalues with 0.55 < |λ_j| < 0.6 and a cloud of lower eigenvalues with |λ_j| < 0.55.
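A quick numerical cross-check (a sketch using only the numbers quoted above) confirms that the Arnoldi value λ_1 ≈ 0.834183 at M = 983, L = 3 is consistent with the diffusive prediction π²/(3L²) to within about one percent.

import numpy as np

L_torus = 3
lam1 = 0.834183                                      # Arnoldi value quoted above (M = 983)
gamma_arnoldi = -2.0 * np.log(lam1)                  # ~ 0.363
gamma_diffusive = np.pi ** 2 / (3 * L_torus ** 2)    # pi^2/(3 L^2) ~ 0.366
print(gamma_arnoldi, gamma_diffusive)                # the two values agree to ~1%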
The results of Fig. 10 for the Arnold cat map clearly show that the Erdös number, shown in Fig. 3, is not directly related to the relaxation time 1/γ_1 of the UPFO. As already discussed above for the example of a diffusive process, this reflects the fact that the Erdös number does not take into account the variations of the transition weights and measures the time when a cell is first touched, leading to a ballistic type of propagation instead of diffusion, while the relaxation time measures the convergence to the stationary homogeneous probability distribution on long time scales.
VI. DISCUSSION
We analyzed the properties of Ulam networks generated by dynamical symplectic maps. Our results show that in the case of strongly chaotic dynamics these networks belong to the class of small-world networks, with the number of degrees of separation, or the Erdös number N_E, growing logarithmically with the network size N_d. This growth is related to the Lyapunov exponent of the chaotic dynamics. However, the obtained results show that in the presence of significant stability islands the Erdös number grows faster, with N_E ∼ N_d^(0.3), which is related to orbits sticking in the vicinity of islands. We also show that the Erdös number is not directly related to the largest relaxation times, which remain size independent in the case of a diffusive process such as the Arnold cat map on a long torus. We hope that our results will stimulate further useful exchange between the fields of dynamical systems and directed complex networks.

Appendix A

For the Arnold cat map, the image of each initial Ulam cell under the map is a parallelogram of the same area, spanned by the two vectors (∆s, ∆s) and (2∆s, ∆s), which intersects 4 (both M and LM even), 5 (both M and LM odd) or 6 (M odd but LM even) target cells, as can be seen in Fig. 11. The relative intersection areas of the parallelogram with each cell provide the exact theoretical transition probabilities, given as multiples of small powers of 1/2. For example, in the most relevant case of this work, where both M and LM are odd, each initial cell has one target cell with transition probability 1/2 and four other target cells with probability 1/8. For the other cases we have four target cells with probability 1/4 (both M and LM even), or two target cells with probability 3/8 and four target cells with probability 1/16 (M odd and LM even). Furthermore, Fig. 11 also shows the relative positions of the relevant target cells with respect to a reference point given by the image of the grid point of the initial Ulam cell. In this way it is possible to compute the exact Ulam network for the Arnold cat map very efficiently and directly, which allowed us to choose M up to M = 14699, corresponding to the network size N_d = LM² ≈ 6.5 × 10^8. We have also verified that our exact computation scheme is in agreement with the two other variants of the Ulam method (apart from statistical fluctuations in the latter).
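For comparison with the exact construction above, the following is a hedged sketch (not the authors' code) of the generic cell-sampling variant of the Ulam method for a cat-map-like system; the specific map convention, the torus length L, the grid indexing, and the number of sample points per cell are all assumptions chosen for illustration.

import numpy as np
from scipy.sparse import dok_matrix

def cat_map(x, p, L=3):
    # one assumed area-preserving cat-map convention on the torus 0 <= x < 1, 0 <= p < L
    p_new = (p + x) % L
    x_new = (x + p_new) % 1.0
    return x_new, p_new

def ulam_matrix(M=20, L=3, ns=8):
    nx, npc = M, L * M                      # number of cells in x and in p
    N = nx * npc                            # total number of cells N_d = L*M^2
    S = dok_matrix((N, N))
    ds = 1.0 / M                            # cell size
    offs = (np.arange(ns) + 0.5) / ns * ds  # sample points inside one cell
    for ip in range(npc):
        for ix in range(nx):
            xs, ps = np.meshgrid(ix * ds + offs, ip * ds + offs)
            xn, pn = cat_map(xs.ravel(), ps.ravel(), L)
            j = ix + nx * ip                # column index of the initial cell
            for xv, pv in zip(xn, pn):
                i = min(int(xv / ds), nx - 1) + nx * min(int(pv / ds), npc - 1)
                S[i, j] += 1.0 / ns ** 2    # fraction of sample points landing in cell i
    return S.tocsr()

Each column of the resulting matrix sums to one, so it is a (discretized) Perron-Frobenius operator; the exact-area scheme of Appendix A replaces the sampling by the analytic intersection areas.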
We may also try a similar analysis of the UPFO for the Chirikov standard map, which presents three complications: (i) the standard map is only locally linear for large values of M, so the scheme will only be approximate due to non-linear corrections; (ii) we have to add a certain (rather random/complicated) offset ξ_0 ∆s (with ξ_0 = K sin(2π x_i ∆s)/(2π ∆s) mod 1) in the above expressions in terms of x_i or p_i, since an initial point on the integer grid is no longer exactly mapped to another point of this grid, as was the case for the Arnold cat map; and finally (iii) the parallelogram is now spanned by the two vectors ∆s(1, 1) and ∆s(1 + A, A). Here the parameter A ≈ K cos(2π ∆s x_i) may take rather large values depending on K and depends on the phase-space position x ≈ ∆s x_i. The bottom right panel of Fig. 11 shows an example of such a shifted parallelogram with ξ_0 = 0.8 and A = 1.5.
For these reasons, this scheme is not suitable for constructing numerically a reliable UPFO for the map (1). However, it is still very useful for understanding quite well the distribution of the number N_l of cells connected to one initial cell and also the square-root singularity in the distribution of weights p_w(w_l) of the UPFO for the standard map (see the discussions in Section IV for both points). | 2018-07-13T17:39:26.000Z | 2018-07-13T00:00:00.000 | {
"year": 2018,
"sha1": "f51650dc0f4bf8da11e230dd093b4803a4686573",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1807.05204",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f51650dc0f4bf8da11e230dd093b4803a4686573",
"s2fieldsofstudy": [
"Physics",
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
232386124 | pes2o/s2orc | v3-fos-license | Beneficial Impacts of Incorporating the Non-Natural Amino Acid Azulenyl-Alanine into the Trp-Rich Antimicrobial Peptide buCATHL4B
Antimicrobial peptides (AMPs) present a promising scaffold for the development of potent antimicrobial agents. Substitution of tryptophan by the non-natural amino acid Azulenyl-Alanine (AzAla) would allow studying the mechanism of action of AMPs by exploiting the unique properties of this amino acid, such as its ability to be excited separately from tryptophan in multi-Trp AMPs and its environmental insensitivity. In this work, we investigate the effect of Trp→AzAla substitution in the antimicrobial peptide buCATHL4B (which contains three Trp side chains). We found that the antimicrobial and bactericidal activity of the original peptide was preserved, while cytocompatibility with mammalian cells and proteolytic stability were improved. We envision that AzAla will find applications as a tool for studies of the mechanism of action of AMPs. In addition, incorporation of this non-natural amino acid into AMP sequences could enhance their application properties.
Introduction
Antimicrobial peptides (AMPs) have been an area of intensive and multidisciplinary investigation through the last three decades. The interest in these peptides is driven by multiple factors, including their small size, high selectivity, low cytotoxicity, and broad evolutionary presence in higher organisms. Importantly, AMPs are also of great interest for potential pharmaceutical applications, as the persistence of these molecules throughout evolution indicates a low potential for resistance development. More recently, novel applications beyond the traditional pharmaceutical approach have begun to develop, using AMPs as components of hydrogels and as antimicrobial surface coatings [1].
A major area of interest in AMPs is the design and development of novel molecules with high activity and bioavailability but with low toxicity to the host. In this pursuit, numerous groups have studied AMPs from a variety of sources to gain insights into the structure-activity relationships within these peptides [2][3][4][5]. However, despite this sustained effort over the past 30 years, no consensus sequence has been identified, nor are there reliable prediction methods for the design of new AMPs. Generally, the majority of these peptides are found to be cationic and amphiphilic, containing a significant number of hydrophobic amino acids, and act via a membrane-disrupting mechanism [6,7]. This membrane disruption is driven by the peptide adopting a facially amphiphilic alpha-helix on the membrane surface, which allows the hydrophobic amino acids to insert into the core of the lipid membrane, causing destabilization.
Equally important in the development and application of novel AMPs as therapeutics is the relationship between selectivity, efficacy, and host cytocompatibility. Efficacy has long been a hurdle for AMPs due to concerns over bioavailability, primarily stemming from the natural host mechanisms for digesting proteins. One strategy that many groups have been using to approach the bioavailability problem is the incorporation of non-natural amino acids, which may be less susceptible to these proteolytic mechanisms. While this has shown success in a number of cases, the overall understanding of the physico-chemical properties of these amino acids is more limited than for natural amino acids. Perhaps the most important challenge is developing AMPs with low cytotoxicity to host cells. While AMPs evolved to be part of the immune response in higher organisms, and are thus biocompatible, the modification of these molecules has the potential to disrupt the evolved balance of efficacy and cytocompatibility. Importantly, it has been shown that increasing the hydrophobicity of AMPs and AMP-mimetics can dramatically increase toxicity [8][9][10][11]. These changes in toxicity are likely linked to changes in binding and pore-forming propensity as a result of the modulation of the hydrophobic balance in the peptide, as has been demonstrated for the AMP Magainin H2 [12][13][14]. Thus, any changes in AMP sequences must be carefully vetted to ensure that increases in antibacterial activity are not concomitant with increases in broad toxicity.
More recently, AMPs that diverge from the traditional cationic amphiphilic model have been identified, including anionic AMPs and Trp-rich AMPs (tryptophan = Trp). The Trp-rich class of AMPs has garnered significant interest, as these molecules follow the general cationic-amphiphilic model of most AMPs but show several significant differences. First, as per the nomenclature, these peptides have multiple Trp residues in their sequences, well above the statistically predicted number for peptides of this length. The Trp-rich class of AMPs is also generally much shorter in length than typical AMPs, often only 10-20 amino acids long. In order to study the mechanism of action of AMPs, fluorescence provides an easy readout that can be monitored in a high-throughput fashion in various environments (detergent micelles, lipid bilayers, and even in vivo). The challenge of incorporating a fluorophore into a target peptide lies in balancing response with retention of natural function. Extrinsic labeling with relatively large fluorophores engineered into a protein may perturb its structure and function. Intrinsic fluorescent probes for such studies are of great value, as they are potentially less disruptive to the structure of the biological object of interest. The ability to be excited separately from all other residues, high sensitivity to the local environment, and relatively low abundance in proteins make tryptophan particularly well suited for such studies [15]. Moreover, the sensitivity of tryptophan to the environment [16] and to locally present quenchers offers useful information about probe localization [17,18]. However, some of Trp's advantages can also be weaknesses in other scenarios [19]. The sensitivity of Trp to both the local environment and quenchers inherently present in proteins (or other biomolecules) can convolute the analysis of fluorescence data, for example when several events (such as binding, quenching, and a drastic change of the local environment) occur simultaneously and lead to concomitant changes in φ (quantum yield) and λ_max (emission maximum). Additionally, sequence modifications are necessary for peptides that contain multiple intrinsic fluorophores in order to obtain information for a distinct site. An approach to address this problem is to substitute the residue of interest with a non-natural analog which can be selectively excited above ~310 nm, where Trp absorption is negligible. Ideally, such a substitution should only minimally perturb the native peptide's properties. This could be achieved by developing iso- or pseudoisosteric mimics with distinct fluorescence properties [15,[20][21][22]. We identified azulene (Az), a pseudoisosteric hydrocarbon analog of indole that lacks the N-H functionality (Scheme 1), which can serve as an environmentally insensitive probe for fluorescence studies of AMPs while preserving (and even improving) their functionality. Azulene has a number of advantages: its absorption spectrum is well separated from those of all other amino acid residues, its fluorescence emission photophysics is simple, its quantum yield is comparable to that of tryptophan, and it does not possess functional groups that render it sensitive to the local environment [23].
Previously, it was demonstrated that the incorporation of AzAla into the well-characterized venom peptide melittin did not impact the structure or function of the peptide [24]. However, melittin is a longer, membrane-spanning peptide which forms distinct pores in the bilayer, in contrast to the general consensus for AMPs. In this work, we synthetically incorporated azulene into a Trp-containing AMP to demonstrate that AzAla does not perturb peptide function and could serve as a probe to dissect the roles and importance of individual Trp residues in multi-Trp AMPs.

Scheme 1. Structures of tryptophan (Trp, W) and its analog Azulenyl-Alanine (AzAla, Z).
Using a combination of spectroscopic, microbiological, and biochemical assays, we have characterized the AzAla-containing variants of buCATHL-4B. Most notably, while retaining antimicrobial activity, all of these variants show reduced cytotoxicity to mammalian red blood cells and mouse fibroblasts.
Peptide Synthesis and Purification
The peptides were synthesized by manual Fmoc solid-phase synthesis at elevated temperature using Rink Amide MBHA resin and Fmoc-protected amino acids according to previously reported protocols [26]. The final peptides were uncapped at the N-terminus and contained a C-terminal amide group. Coupling of Fmoc-β-(1-Azulenyl)-L-Alanine was performed for 30 min at room temperature. Cleavage of the peptides from the resin and side-chain deprotection were achieved simultaneously by treatment with a mixture of trifluoroacetic acid (TFA)/H2O/triisopropylsilane (TIS) (95:2.5:2.5, v/v) for 2 h at room temperature. The crude peptides were precipitated and washed with cold methyl-tert-butyl ether and purified on a Shimadzu (Kyoto, Japan) preparative reverse-phase High-Performance Liquid Chromatography system with a Jupiter C4 preparative column (Phenomenex, Torrance, CA, USA), using a linear gradient of solvent A (0.1% TFA in MilliQ water) and solvent B (90% CH3CN, 9.9% MilliQ water, 0.1% TFA). A gradient of 35-65% solvent B was used at a flow rate of 20 mL/min for 30 min to purify the peptides. The identities of the peptides were confirmed using a Bruker (Billerica, MA, USA) MALDI-TOF mass spectrometer. The purity of the obtained peptides was evaluated on an Agilent Infinity II 1260 (Santa Clara, CA, USA) with an analytical Zorbax Eclipse XDB-C18 column from Agilent (4.6 mm × 150 mm).
Peptide stock solutions in water were prepared from the lyophilized powder (>90% purity). Lyophilized peptides are stable at −20 °C for three months, and solutions of peptides need to be prepared immediately before the experiment.
Trypsin Digestion
Pure lyophilized buCATHL4B (WWW) peptides were dissolved in MilliQ water (obtained using an Elix Millipore system, Burlington, MA, USA) and syringe filtered (0.2 micron, 6500 rpm for 10 min) prior to checking the concentration. The concentration of the WWW peptide was determined by measuring absorbance at 280 nm on a UV-Vis spectrophotometer (Agilent 8453, Agilent Technologies, CA, USA) using ε280 = 16,500 M⁻¹ cm⁻¹ (Expasy). The concentrations of the AzAla variants were determined by measuring absorbance at 342 nm on the UV-Vis spectrophotometer using ε342 = 4212 M⁻¹ cm⁻¹ [27]. Peptide stocks of 2 mM concentration were prepared in buffer (50 mM sodium phosphate, 100 mM NaCl, pH 7.0). A trypsin stock (1 mg/mL) was prepared fresh in buffer, and serial dilutions were made for the assay: 0.1 mg/mL, 0.01 mg/mL, and 0.001 mg/mL. The reaction samples (500 µL final volume) were prepared in triplicate by mixing WWW (500 µM) and trypsin (50 µL of the 0.001 mg/mL solution) in buffer (50 mM sodium phosphate, 100 mM NaCl, pH 7.0). The trypsin digest was monitored on the HPLC (Shimadzu) every 15 min at room temperature by following the peak of the undigested peptide. The identities of the various peaks in the HPLC chromatogram were established by MALDI-TOF. For this, fractions containing peptides corresponding to the different HPLC peaks were collected, lyophilized, and redissolved in 10 µL of solvent B (90% acetonitrile, 9.9% MilliQ water and 0.1% TFA) and 10 µL of solvent A (99.9% MilliQ water and 0.1% TFA); then 2 µL of CHCA (α-cyano-4-hydroxycinnamic acid) matrix was added (1:10 proportion) and the mixture was loaded onto the MALDI target.
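As a small illustration (a sketch, not part of the published protocol), the concentration determination described above is a direct Beer-Lambert calculation with the quoted extinction coefficients; the 1 cm path length and the example absorbance value are assumptions.

EPS = {"WWW_280nm": 16500.0, "AzAla_342nm": 4212.0}    # extinction coefficients, M^-1 cm^-1

def concentration_uM(absorbance, epsilon, path_cm=1.0):
    # Beer-Lambert: A = eps * c * l  ->  c = A / (eps * l), converted to micromolar
    return absorbance / (epsilon * path_cm) * 1e6

print(concentration_uM(0.33, EPS["WWW_280nm"]))        # ~20 uM for an assumed A280 = 0.33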
Circular Dichroism Spectroscopy
The circular dichroism (CD) spectra were acquired on a Jasco J-715 CD spectrometer (Easton, MD, USA), collecting 64 scans (4 s averaging time) for each spectrum and using a quartz cuvette with a 1 cm path length. The measurements were performed on samples containing peptides (5 µM) in buffer (5 mM phosphate, 10 mM NaCl, pH 7.0) in the presence and in the absence of vesicles (250 µM of 100% POPC). Care was taken that the sample absorbance never exceeded 1.5 at any wavelength, to produce reliable ellipticity values. Mean residue ellipticity (MRE; deg × cm² × dmol⁻¹) values were calculated as MRE = θ/(10 × l × C × N), where θ is the ellipticity (mdeg), l is the pathlength (cm), C is the peptide concentration (M), and N is the number of residues.
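A minimal sketch of this conversion (not from the paper; the example values are illustrative only, with 14 residues corresponding to the length of the WWW sequence given in the Supplementary Materials):

def mean_residue_ellipticity(theta_mdeg, path_cm, conc_M, n_residues):
    # theta in mdeg, path length in cm, concentration in mol/L; result in deg cm^2 dmol^-1
    return theta_mdeg / (10.0 * path_cm * conc_M * n_residues)

# illustrative numbers: -15 mdeg for a 5 uM, 14-residue peptide in a 1 cm cuvette
print(mean_residue_ellipticity(-15.0, 1.0, 5e-6, 14))   # ~ -2.1e4 deg cm^2 dmol^-1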
Fluorescence
Fluorescence data for the peptides were obtained on a JY-Horiba 914D fluoromax-2 spectrofluorometer (Horiba Scientific, NJ USA) at room temperature. Emission spectra measurements were taken in Spectrosil quartz cuvettes (Starna Cells, Atascadero, CA USA; 1 cm path length cuvette, sample volume 2 mL) using a 5 mm excitation slit width and 5 mm emission slit width for spectra. The measurements were performed on samples containing peptides (2 µM) in buffer (50 mM sodium phosphate, 150 mM NaCl, pH 7.0). The excitation wavelength used to excite AzAla was 342 nm. Fluorescence emission spectra were recorded over the range of 355 nm to 455 nm. To excite tryptophan, the excitation wavelength used was 280 nm. Fluorescence emission spectra were recorded over the range of 300 to 400 nm.
Preparation of Peptide in Vesicles
Solid POPC (12.5 mg) was dissolved in chloroform (0.5 mL) to make a lipid stock (16.5 mM). This stock (303 µL) was pipetted into a glass vial and dried with nitrogen gas and then under vacuum for 1 h. Then, ethanol (10 µL of 100%) was added to the film and vortexed until the film was completely dissolved. This was further resuspended in PBS buffer (5 mM sodium phosphate, 10 mM NaCl, pH 7.0) containing peptide (final concentration = 2 µM).
Minimal Inhibitory Concentration and Minimal Bactericidal Concentration Analysis
Bacteria were streaked from frozen glycerol stocks onto LB agar plates (E. coli D31, S. aureus ATCC 35556, P. aeruginosa PA-01, A. baumannii ATCC 19609). The plates were incubated at 37 °C for ~18 h to allow growth. An individual colony from each plate was transferred into an individual sterile tube containing 3 mL of Mueller Hinton (MH) broth and subsequently incubated for 18 h with shaking (225 rpm) at 37 °C. An aliquot of this culture was then diluted with fresh MH broth (1:250) and further incubated with shaking (225 rpm) at 37 °C until an OD600 of 0.2-0.4 was reached. This mid-log culture was then diluted to 5 × 10⁵ CFU/mL in fresh MH broth. Serially diluted peptides were dispensed in a 96-well plate such that the final concentration in each well would be in the range of 15 µM to 0.234 µM. Then, 90 µL of the freshly diluted culture was transferred to the 96-well plate, yielding a final volume of 100 µL. The 96-well plate was covered and transferred to a humidified incubator at 37 °C for 18 h. Bacterial growth and its inhibition were determined by measuring OD600 directly in the plate and comparing to an untreated control. The Minimum Bactericidal Concentration (MBC) was determined by transferring 1 µL of culture from each well of the MIC plate onto fresh LB agar plates and incubating overnight at 37 °C. The MBC was taken as the lowest concentration from the MIC plate that resulted in no colony growth.
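For reference, the concentration range quoted above corresponds to a standard two-fold serial dilution series, as in this small sketch (illustrative only):

top_uM = 15.0
series = [top_uM / 2 ** i for i in range(7)]             # seven two-fold dilution steps
print([round(c, 3) for c in series])
# -> [15.0, 7.5, 3.75, 1.875, 0.938, 0.469, 0.234]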
E. coli Outer Membrane Permeability
An individual colony of E. coli D31 was transferred into a sterile tube containing 3 mL of Luria Bertani (LB) broth supplemented with 100 µg/mL of ampicillin and subsequently incubated for 18 h with shaking (225 rpm) at 37 °C. An aliquot of this culture was then diluted with fresh LB broth (1:250), again supplemented with 100 µg/mL of ampicillin to induce the expression of β-lactamase, and further incubated with shaking (225 rpm) at 37 °C until the culture reached an OD600 of 0.2-0.4. The culture was then centrifuged at 2500 rpm for 15 min in a benchtop clinical centrifuge (Clay Adams), the supernatant was discarded, and the pellet was resuspended in an equal volume of PBS (100 mM sodium phosphate, 200 mM NaCl, pH 7). Peptides were serially diluted in 0.01% acetic acid over the same concentration ranges as used in the MIC experiments. A series of the antimicrobial peptide polymyxin B sulfate was included as a positive control. In a clear, flat-bottom 96-well plate, 10 µL of the serially diluted peptide was added to each well, followed by 80 µL of the resuspended E. coli culture and 10 µL of 5 mg/mL nitrocefin substrate (dissolved in PBS). Immediately upon the addition of the substrate, the samples were briefly mixed by pipetting, and absorbance at 486 nm was measured every 5 min for a total of 90 min. Data reported are the averages of 3 replicates.
E. coli Inner Membrane Permeability
An individual colony of E. coli D31 was transferred into a sterile tube containing 3 mL of Luria Bertani (LB) broth and subsequently incubated for 18 h with shaking (225 rpm) at 37 °C. An aliquot of this culture was then diluted with fresh LB broth (1:250), supplemented with 100 µL of 100 mM IPTG to induce the expression of β-galactosidase, and further incubated with shaking (225 rpm) at 37 °C until the culture reached an OD600 of 0.2-0.4. Peptides were serially diluted in 0.01% acetic acid over the same concentration ranges as used in the MIC experiments. A series of the detergent cetyl-trimethyl ammonium bromide (CTAB) was included as a positive control. In a clear, flat-bottom 96-well plate, 10 µL of the serially diluted peptide was added to each well, followed by 56 µL of Z-buffer (60 mM Na2HPO4, 40 mM NaH2PO4, 10 mM KCl, 1 mM MgSO4, 50 mM β-mercaptoethanol, pH 7), 19 µL of the E. coli culture, and 15 µL of 4 mg/mL ONPG substrate (dissolved in Z-buffer). Immediately upon the addition of the ONPG, the samples were briefly mixed by pipetting, and absorbance at 420 nm was measured every 5 min for a total of 90 min. Data reported are the averages of 3 replicates.
Hemolysis
Fresh defibrinated blood from sheep (Hardy Diagnostics, Santa Maria, CA, USA) was transferred to a sterile centrifuge tube and diluted 10-fold with cold, sterile PBS for a total of 15 mL. The sample was centrifuged for 6 min in a clinical benchtop centrifuge. The supernatant was removed, and the pellet was gently resuspended to 15 mL with PBS. This was repeated for a total of 3 washes. After the 3rd centrifugation, the pellet was resuspended in PBS and 90 µL of the red blood cells (RBCs) were transferred to each well of a 96-well round-bottom plate containing 10 µL serially diluted peptides for a final volume of 100 µL in the wells. Wells containing PBS or Triton X-100 were used as the negative and positive controls, respectively. The plate was incubated at 37 • C for 60 min and subsequently centrifuged for 5 min to pellet remaining intact RBCs. Next, 6 µL of the supernatant from each well was transferred to a new 96-well plate containing 94 µL of PBS in each well and the absorbance of each well was measured at 409 nm and 415 nm. Percent hemolysis was calculated by normalizing the values from the negative and positive control wells to 0% and 100% hemolysis, respectively. All experiments were performed at least in triplicate. Error bars represent the standard deviations.
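A minimal sketch of the normalization described above (illustrative values only, not measured data):

import numpy as np

def percent_hemolysis(a_samples, a_pbs, a_triton):
    # scale sample absorbances between the PBS (0 %) and Triton X-100 (100 %) controls
    return 100.0 * (np.asarray(a_samples) - a_pbs) / (a_triton - a_pbs)

print(percent_hemolysis([0.08, 0.21, 0.55], a_pbs=0.05, a_triton=1.50))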
Cytocompatibility of buCATHL4B Peptide and its Derivatives with 3T3 Cells
Pure lyophilized peptides were prepared in 25% ethanol as 10X stocks and spin-filtered at 6500 rpm for 10 min (0.22 µm nylon membrane centrifugal filter, VWR) prior to checking the concentration. The concentrations of WWW and the AzAla-substituted peptides were determined as described in Section 2.3.
Mouse embryonic fibroblast cells (3T3, ATCC) were a kind gift from Dr. Mary Beth Monroe's lab (BioInspired Institute, Syracuse University). The cells were cultured in complete medium consisting of Dulbecco's Modified Eagle's Medium (DMEM) with 4.5 g/L glucose and sodium pyruvate (Corning, Manassas, VA, USA), supplemented with 2 mM L-glutamine (Gibco, Gaithersburg, MD, USA), 10% fetal bovine serum (Gibco) and 1% penicillin-streptomycin mix (Gibco), at 37 °C and 5% CO2 in a high-humidity environment. When the cell monolayer reached 70-80% confluency, the cells were detached by incubation with a trypsin-EDTA mixture (Gibco), and the cell quantity was determined using a hemocytometer with trypan blue (Gibco) staining to estimate the number of live cells.
After 10,000 cells per well in 100 µL were seeded into black 96-well plates with clear bottoms (Greiner Bio-One, Monroe, NC, USA), the plates were incubated at 37 °C, 5% CO2 (high humidity) for 20-24 h. For the experiment, 80 µL of fresh complete medium and 20 µL of 10× peptide stock (the final concentration of ethanol in the culture was 2.5%) were added, and the plates were incubated for 5 h under cell culture conditions. A resazurin assay was performed to evaluate cell viability. Resazurin was added to a final concentration of 0.67 mM, and the fluorescence of resorufin, the product of resazurin reduction by living cells, was measured after 4 h of incubation using a Biotek (Winooski, VT, USA) Synergy 2 plate reader (excitation at 530 nm, emission at 590 nm). Cells without peptide treatment (25% ethanol was added instead of the peptide stock) were used as a positive control, and cells treated with 3% H2O2 (Fisher) were used as a negative control. Percent viability was calculated as (F_peptide-treated − F_negative control)/(F_positive control − F_negative control) × 100%, averaged over 3 runs.
Peptides
To test the individual effects of Trp→AzAla substitutions, we designed three variants of buCATHL-4B in which each of the Trp residues was replaced with the non-natural amino acid β-(1-azulenyl)-L-alanine (AzAla, Z; Table 1) [23]. BuCATHL-4B, an antimicrobial peptide identified from genomic sequencing of five different breeds of Asian water buffalo (Bubalus bubalis), is part of a 12-member family of Trp-rich peptides linked to cathelicidin gene 4 (of 7). Based on Phyre2 simulations (Figure 1), the peptide is expected to adopt an alpha-helical structure [28].
Antimicrobial Activity
The original study on buCATHL-4B confirmed the peptide's antimicrobial activity against both Gram-positive and Gram-negative bacteria [29]. The peptide increased bacterial membrane permeability and, at low concentrations, was shown to enhance the expression of several proinflammatory cytokines [29]. The antimicrobial activity of the peptides against Gram-positive S. aureus and Gram-negative E. coli, P. aeruginosa, and A. baumannii was determined using a traditional broth microdilution assay. The results are shown in Table 2 and compared to the previously reported activity of the parent peptide (WWW) [30]. While the MIC determines the lowest concentration of peptide required to inhibit bacterial growth, it does not inform on the mechanism of this process. Thus, the minimal bactericidal concentration (MBC) was determined by plating treated bacterial cultures on antibiotic-free solid media (Table 2, Supplemental Figure S1). The MBC results closely tracked the MIC values, indicating the peptides are acting in a bactericidal manner and not by bacterial growth arrest. These results indicate that the substitution of Trp with AzAla has minimal, if any, effect on antimicrobial activity, with the possible exception of Z8, which exhibited a 4-fold decrease in MBC for Gram-positive S. aureus compared to the wild type peptide.
Bacterial Membrane Permeabilization
The widely accepted mechanism of action for many AMPs is the permeabilization of the bacterial membrane. To investigate the influence of the substitutions on the peptide's activity, the AzAla-containing peptides were screened for the ability to disrupt the E. coli outer and inner membranes [31,32]. Briefly, membrane-impermeable chromogenic substrates were used to assay compartment-specific enzymes in the periplasmic space (β-lactamase and the substrate nitrocefin, to assay outer membrane permeability) and the cytoplasm (β-galactosidase and the substrate ONPG, to assay inner membrane permeability) [33,34]. An increase in membrane permeability enhances transport of the substrates across the membrane, permitting enzymatic hydrolysis and producing a chromophore detectable by absorbance spectroscopy. Interestingly, none of the AzAla-substituted peptides caused any outer membrane permeabilization, as opposed to the parent peptide (Figure 2A, Figures S2 and S3). In contrast, while the parent, Z4, and Z8 peptides induced no permeabilization of the inner membrane, the Z6 peptide caused some delayed damage beginning after 20-30 min of exposure at the highest test concentrations (15 µM and 7.5 µM).
Circular Dichroism to Determine Peptide Structural Effects
The proposed mechanism of action for AMPs often involves the formation of an α-helical secondary structure upon binding to the bacterial membrane surface [35][36][37]. This structural transition is important for helical AMPs, as it results in the formation of a facially amphiphilic structure with hydrophobic amino acids sequestered to one face of the helix, promoting membrane insertion. Circular dichroism (CD) can be used to unambiguously determine peptide secondary structure [38]. We determined the secondary structure of the AzAla-substituted peptides in buffer solution and in buffer supplemented with the phospholipid POPC (1-palmitoyl-2-oleoyl-glycero-3-phosphocholine). All peptides adopted a mostly helical structure as expected [29], and interaction with lipid vesicles did not change the structure much (Figure 3).
Fluorescence Studies
The ultimate goal is to demonstrate that AzAla is a suitable substitution for tryptophan in fluorescence experiments; AzAla has a number of spectroscopic properties that make it a unique tool for studies of the mechanism of action of AMPs.
Azulene has an absorption spectrum with transitions distinctly different from those of native tryptophan. The AzAla spectrum is dominated by the S1–S0, S2–S0, and S3–S0 transitions, centered around 600 nm, 342 nm, and 280 nm, respectively. The S1–S0 transition has a low extinction coefficient (~400 cm⁻¹ M⁻¹) and does not lead to fluorescence; the S3–S0 transition overlaps with the Trp absorption spectrum. However, the S2–S0 transition is distinct from the Trp absorbance bands and exhibits an extinction coefficient and quantum yield similar to those of Trp in the 280 nm region (Figure S4). The resulting fluorescence emission spectrum of the peptide containing AzAla is compared to that of the native peptide, which has only Trps, in Figure 4. Due to these spectral properties, the S2 transition of AzAla can be selectively excited in the presence of Trp (Figure 4B). The selective excitation of the AzAla amino acid (λ_ex = 342 nm) in the presence of Trp allows for direct interrogation of the single aromatic residue without cross-excitation of the others. This provides for less ambiguity in the analysis of the environment around the AzAla [22,23].
Protease Degradation
For the following experiment we chose trypsin (a human endopeptidase that cleaves at arginine or lysine) due to the availability of this protease, although it is mostly found in the digestive system. The goal of this experiment was to provide proof-of-concept evidence that non-natural amino acids can confer proteolytic protection. Peptides composed of natural amino acids are susceptible to proteolytic degradation [39]. Proteolytic degradation of peptides is one of the defenses bacteria have developed against AMPs; therefore, therapeutic application of peptides and their materials requires resistance to proteases [40,41]. In order to gain proteolytic stability, several approaches have been used: (1) changing the chirality of the peptide from L to D [42][43][44][45][46], (2) mixing L and D peptides [47], (3) use of peptidomimetics [48], and (4) introducing non-natural amino acids [49]. Addition of AzAla to the peptide sequence increases the stability of the AMP, with Z8 being the most tolerant to the protease (Figure 5). The mechanism of proteolytic degradation of WWW by trypsin is described in Figure S5 and Table S1.
Cytocompatibility of AzAla-Containing Peptides
An important aspect that makes AMPs an attractive target for therapeutic development is the low toxicity these molecules generally exhibit towards host cells. However, with the incorporation of non-natural amino acids, confirming that these modified AMPs retain low levels of cytotoxicity is critical. A widely utilized model system for initial cytotoxicity screening is the hemolysis assay. The assay relies on the measurement of leakage of hemoglobin from ovine red blood cells (RBCs). Upon incubating the peptides with RBCs, the amount of hemoglobin that has leaked out of the cells is measured using its natural Soret band absorbance at 415 nm. The percent hemolysis exhibited in the samples is calculated by comparing the leakage induced by the peptides to that induced by a detergent (positive control) and buffer (negative control). The results are shown in Figure 6. The parent peptide, WWW, induced ~34% hemolysis at the highest concentration tested, 15 µM, and ~15% hemolysis at 7.5 µM, which represent the MIC values for P. aeruginosa and E. coli, respectively. Notably, all of the peptides with the AzAla substitution showed a decrease in hemolysis across all concentrations tested. Specifically, the Z4 and Z8 peptides exhibited 95% and 89% reductions in hemolysis compared to WWW at 15 µM, respectively. The Z6 peptide exhibited a less dramatic reduction in hemolysis, ~14% hemolysis at 15 µM, representing a 64% reduction. The mammalian cell cytocompatibility of the AzAla-substituted peptides was also tested using 3T3 mouse fibroblasts. After the cells were incubated with peptides for 5 h, the metabolic activity of the cells was assessed using a resazurin-based assay. The number of living cells was determined based on the fluorescence of resorufin, the product of resazurin reduction by mitochondrial enzymes, which are only active if cells are actively respiring (Figure 7).
Discussion
In this work, we establish AzAla as a conservative replacement for Trp that does not substantially change a peptide's secondary structure, interaction with lipids, or MIC, and that even improves the hemolytic and cytocompatibility profiles. The ability to use AzAla in place of Trp is an important tool for deconvoluting the contribution of individual tryptophan residues in multi-Trp AMPs, to further develop effective antimicrobial agents and a thorough structure-activity relationship in this class of peptides. This approach could also be extended to multi-Trp-containing proteins. Additionally, the incorporation of non-natural amino acids improves the resistance of these peptides to proteolytic digestion, which may prove effective in improving bioavailability.
Naturally occurring AMPs have a wide range of sequences, and some have been shown to exhibit dramatic changes in efficacy upon one or more amino acid substitutions. First, we tested the antimicrobial activity of the AzAla-substituted peptides against several clinically relevant Gram-negative and Gram-positive strains of bacteria. The antimicrobial efficacy of the wild type peptide and its AzAla-substituted analogs was determined by minimal inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) assays. The data showed that Trp→AzAla substitutions did not substantially impact the MIC and MBC values when compared to the WT peptide. Notably, the MBC values indicate that the peptides are all acting in a bactericidal manner.
The interaction with the bacterial cell wall is important for the activity of AMPs regardless of the mechanism of action. An alteration in amino acid composition can affect this initial interaction and, consequently, the properties of the peptide. The substitution of tryptophan, which is known to interact with the bilayer/headgroup interface, by AzAla could alter the peptide-lipid interactions due to the difference in the side chains. Considering that binding to the bacterial membrane is a critical step in antimicrobial activity, we examined whether the Trp→AzAla substitution had a negative effect on outer and inner membrane permeabilization using E. coli as a model organism. Enzymatic reporter assays utilizing chromophoric substrates showed that the substituted peptides induced significantly less outer membrane permeabilization but similar levels of inner membrane permeabilization compared to the WT. This is especially interesting as several groups have reported that Trp-rich AMPs may act through an intracellular mechanism [50][51][52][53]. This is consistent with our results presented here, as the loss of outer membrane permeabilization in the AzAla peptides did not correspond to any changes in the MIC or MBC against E. coli.
Structurally, the CD spectra of the free peptides and of peptides inserted into model membranes also showed that the Trp→AzAla substitution did not alter the AMP structure in buffer or in the membrane-mimetic environment. This indicates that the helical propensity of the AzAla-containing peptides is similar to that of the Trp-containing peptides, which is consistent with previous reports on this amino acid substitution in other peptide backbones [23,24,27]; however, any quantification of the helical propensity would require significant further experimentation. Retention of the helical conformation of the peptide is important when interpreting the proteolytic digestion assays. The data clearly show that the substitution provides protection from proteolytic degradation by trypsin, which preferentially cleaves peptide bonds on the C-terminal side of the cationic residues Lys and Arg. As there are no significant changes in the secondary structure of the peptide, these changes in susceptibility are likely caused by the enzyme losing efficiency at recognizing or binding to the peptide containing the non-natural amino acid. Further, the Z8 peptide showed the greatest resistance to trypsin, which we speculate is because this residue is the closest to the cationic residues that trypsin recognizes and cleaves. While this hypothesis requires further testing, it provides some guidance for targeted replacements of natural amino acids with non-natural ones in the design of novel AMPs.
The importance of engineering protease resistance into AMPs is twofold. Numerous proteases exist at the site of an infection. Bacterial proteases promote invasion and delay wound healing [54]. Proteolytic degradation of peptides is one of the defense mechanisms that bacteria have developed against antimicrobial peptides; therefore, therapeutic application of peptides requires resistance to proteases [40]. Additionally, human matrix metalloproteinases are overproduced in chronic wounds, contributing to delayed wound healing [55]. We utilized the commonly available protease trypsin to demonstrate that AzAla can indeed provide resistance to protease degradation. As the initial experiments showed promising results, the peptides' resistance towards bacterial and human proteases will be investigated further. Of specific interest would be the characterization of resistance to proteinase K, a serine protease that cleaves peptide bonds at the carboxylic sides of aliphatic, aromatic, or hydrophobic amino acids [56]; to aureolysin, a metalloprotease from S. aureus that cleaves at hydrophobic side chains [57,58]; and to elastase [59]. Beyond infection-site proteases, digestion by proteases in the GI tract is a major hurdle to the oral bioavailability of AMPs in therapeutic applications [60,61]. Several groups have shown that the incorporation of non-natural amino acids has improved the stability of AMPs to a variety of proteases [2,62,63]. The results shown here for all three peptides with a single AzAla substitution are consistent with this trend.
The antimicrobial efficacy of the AzAla-containing AMPs is only valuable if the molecules retain the traditionally low cytotoxicity of naturally occurring AMPs. Cytotoxicity was tested by hemolysis of red blood cells and by a fibroblast viability assay. Both approaches demonstrated that the AzAla substitution improved the biocompatibility of the AzAla-containing buCATHL-4B analogs with mammalian cells. Hemolysis is often used as an initial estimate of peptide toxicity [64,65], while the cytocompatibility assay with fibroblast cells is a more precise method for measuring the toxicity of AMPs toward skin cells. The results of both approaches are consistent with the literature data for the wild type (WWW) peptide [29], and the AzAla-substituted variants exhibited improved cytocompatibility. While the exact mechanism of cytotoxicity of buCATHL-4B has not been established, the AzAla substitution clearly counteracts it.
The combination of increased cytocompatibility and resistance to trypsin are promising early data for the further development of these peptides as antimicrobial therapeutics. By maintaining the antimicrobial efficacy but reducing the cytotoxicity, the AzAla substitution effectively increases the therapeutic window available for these AMPs. Naturally, further investigation in animal models would be necessary to develop a more precise toxicity profile, but the initial results are nonetheless promising.
Beyond the direct case of buCATHL-4B, these results support the application of the AzAla amino acid in the design and development of novel AMPs, as well as in peptide therapeutics with other targets. Specifically, AzAla could be considered for incorporation into a variety of different Trp-rich AMPs such as tritrpticin, indolicidin, and lactoferricin [50,66]. Other Trp-containing AMPs are equally attractive for exploring these substitutions, such as the gramicidins and the clinically approved Daptomycin, as are potential substitutions for other aromatic residues in clinically relevant antimicrobials like polymyxin B and Teixobactin, which both contain aromatic groups [60,[67][68][69][70]. Further, all peptide therapeutics can be approached through a structure-activity relationship (SAR) lens to improve efficacy, bioavailability, and cytocompatibility. Examples of clinically approved peptide therapeutics that contain a Trp residue include Afamelanotide, Semaglutide, and Enfuvirtide [71,72]. On a more fundamental level, characterization of the basic biochemical and biophysical properties of non-natural amino acids is essential to designing novel peptides for specific functions. Simply put, one of the significant challenges to protein and peptide design using non-natural amino acids is the lack of detailed information on amino acid characteristics in a wide range of sequences or environments. Through careful and systematic characterization, the application of non-natural amino acids in protein and peptide design can be greatly enhanced.
Conclusions
In summary, we investigated whether the non-natural amino acid AzAla could be used to replace tryptophan in one AMP sequence that naturally contains three tryptophan side chains. Fluorescence studies of the AzAla-substituted peptides showed that AzAla could be excited separately from the remaining tryptophan residues. CD experiments confirmed that the structure of the peptide does not change much compared to the original peptide, both in buffer and in lipid vesicles; therefore, AzAla is a suitable replacement for tryptophan in AMPs and could be used to study the mechanism of AMP function. In addition, we found that the substitution confers a number of benefits onto the original peptide. AzAla-substituted peptides show lower hemolysis and higher cytocompatibility compared to the WT peptide, while preserving the original antimicrobial efficacy. In addition, we demonstrated that AzAla provides protection against proteases, such as trypsin. Currently, we are investigating whether AzAla-containing peptides also survive longer in the presence of the proteases naturally occurring on the mammalian skin surface. Taken together, our work shows that AzAla not only preserves the natural features of antimicrobial peptides but also provides beneficial properties. Therefore, AzAla as a Trp analog could be used in other multi-Trp AMPs, such as the porcine host defense peptide tritrpticin [73] or Lys-C (a fragment of hen egg white lysozyme) [74]. The machinery to incorporate AzAla into larger proteins is available [75]. Given the rise in antibiotic resistance [76], these findings might result in more efficient AMPs for biomedical applications.
Overall, the AzAla amino acid has the potential to become a useful tool in the design and development of novel, functional peptides. Its unique spectral properties, coupled with the apparent beneficial impacts on cytocompatibility and protease resistance, can yield applications in a number of fundamental and therapeutic systems.
Supplementary Materials: The following are available online at https://www.mdpi.com/2218-273X/11/3/421/s1, Figure S1: Minimal Bactericidal Concentration (MBC). MBC was carried out by transferring 1 µL from each well of the MIC experimental 96-well plates onto antibiotic-free LB agar and allowing growth overnight at 37 °C. Plates were photographed the next morning and the MBC was determined visually by the presence/lack of colony growth., Figure S2: Time course of E. coli outer membrane leakage. Enzymatic hydrolysis of nitrocefin by β-lactamase was determined by increases in absorbance at 486 nm. The samples were monitored in 5-min intervals for a total of 90 min. Samples contained varying amounts of serially diluted peptides (A) Z4, (B) Z6, (C) Z8, or (D) Polymyxin-B sulfate. Peptide concentrations shown are in micromolar units. Data shown are the averages of 3 trials with standard deviations (in some cases smaller than the symbol size)., Figure S3: Time course of E. coli inner membrane leakage. Enzymatic hydrolysis of ONPG by β-galactosidase was determined by increases in absorbance at 420 nm. The samples were monitored in 5-min intervals for a total of 90 min. Samples contained varying amounts of serially diluted peptides (A) Z4, (B) Z6, (C) Z8, or (D) CTAB. Peptide concentrations shown are in micromolar units while CTAB concentrations are in millimolar units. Data shown are the averages of 3 trials with standard deviations (in some cases smaller than the symbol size)., Figure S4: Absorption spectra of AzAla-substituted peptides., Figure S5: Trypsin digest of the WWW peptide (AIPWIWIWRLLRKG). (A) Overlay of chromatograms acquired at various time points after peptide and trypsin were mixed. The inset graph shows the percentage of the undigested peptide peak area monitored over time. The identity of the peaks was established by MALDI-TOF using the sample that was digested for 4 hours (see Table S1 below). (B) MALDI-TOF analysis of the peak with retention time of 8.3 min: the m/z peak at 1237 Da was assigned to the AIPWIWIWR peptide segment. (C) MALDI-TOF analysis of the peak with retention time of 8.59 min. The peak at 1802 Da was assigned to the undigested peptide. (D) MALDI-TOF analysis of the peak with retention time of 8.87 min. The peak at 1618 Da corresponds to the AIPWIWIWRLLR peptide segment. Table S1: Fragments of peptides observed by MALDI-TOF after trypsin digestion. Individual fragments were identified by collecting the various peaks after HPLC separation. | 2021-03-29T05:17:47.702Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "eb6d9060c2a6959c98e659f4d9d49a56f694b8ca",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/11/3/421/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb6d9060c2a6959c98e659f4d9d49a56f694b8ca",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7244843 | pes2o/s2orc | v3-fos-license | Anti-carcinogenic properties of curcumin on colorectal cancer
Curcumin has been used in traditional Indian medicine for many centuries for its anti-inflammatory and anti-carcinogenic properties. There has been some promising research concerning curcumin as a safe therapeutic agent for many cancers, colorectal cancer being among them. This has been shown through research in cell cultures, animal models, and humans. At this time, it appears that curcumin's anti-carcinogenic properties are most likely due to its effects on multiple molecular targets, such as nuclear factor κ-light-chain-enhancer of activated B cells (NF-κB) and activator protein 1 (AP-1). NF-κB and AP-1 are both major transcription factors that regulate inflammation and thus affect cell proliferation, differentiation and even apoptosis. Curcumin has also been shown to affect a variety of other key players involved in carcinogenesis, such as cyclooxygenase-2, matrix metallopeptidases 2 and 9, and tumor necrosis factor α-induced vascular cell adhesion molecule, just to name a few. Although many molecular targets are involved, curcumin has been well tolerated in many studies: doses up to 8 g a day have been confirmed to be safe in humans. In this brief review, we will examine the current studies and literature, touch upon many molecular pathways affected by curcumin, and demonstrate the exciting possibility of curcumin as a chemopreventive agent for colorectal cancer.
INTRODUCTION
Colorectal cancer is the third leading cause of cancer death in the United States [1]. There is a pressing need for non-toxic agents that are easy and effective to use for the prevention and treatment of this cancer. Epidemiologic findings suggest a therapeutic possibility. Turmeric (and its active component curcumin) may be such an agent.
In India, traditional medicine uses turmeric for biliary disorders, anorexia, cough, diabetic wounds, hepatic disorders, rheumatism, and sinusitis [2]. It is pharmacologically safe even when consumed up to 100 mg per day in the Indian diet [3]. The incidence of colon cancer worldwide may vary 20-fold, with a higher prevalence in areas such as North America, Europe, Australia and New Zealand. A lower incidence is seen in countries such as India and less developed areas such as South America and Africa. Epidemiology suggests factors related to socioeconomic and dietary conditions may be important to colorectal cancer development. Significant risk factors include lower fiber intake, high fat diet and low calcium micronutrient intake.
Genetic predisposition to polyposis and cancer is also well established in literature [4] . Colorectal cancer is patterned into sporadic, inherited (10%) and familial (25%) categories. Germline mutations are seen in the two most common forms of inherited colon cancer: familial adenomatous polyposis (FAP) and hereditary nonpolyposis colorectal cancer.
Curative therapy for colon cancer is largely the province of surgery. Adjunctive chemotherapy and radiotherapy may be used depending on the course of the disease [5] . Exploitation of the over-expression of cyclooxygenase-2 (COX-2) in sporadic colon cancers (90%) and 40% of colon adenomas has shown promise for it as an avenue for chemoprevention of colon cancer. Non-steroidal antiinflammatory drugs (NSAIDS), such as sulindac, and COX-2 specific inhibitors, such as Celebrex (celecoxib), have shown great utility as a chemopreventive treatment for patients with the FAP genotype. Increased risk of myocardial infarction with COX-2 inhibitors, which led to the removal of valdecoxib and rofecoxib, and the increased risk of gastrointestinal bleeding and renal failure with NSAIDS (sulindac) make their recommendation problematic [6] .
The response of suppressors of COX-2 for prevention of polyp development and cancer production in FAP does show the utility of control of deregulated pathways and, if not for the side effects, COX-2 inhibitors would be strongly recommended as a cancer/polyp chemopreventive agent. Thus agents that inhibit cellular pathways which create or promote carcinogenesis without toxicity are needed. Curcumin is a strong chemopreventive candidate with these properties (Figure 1) [6] .
Turmeric, a spice common to India and its surrounding regions, is derived from the rhizome of Curcuma longa. The use of turmeric as a medicinal compound dates back to around 2000 B.C. when it was used as an anti-inflammatory agent. Fractions of turmeric known as curcuminoids (curcumin, demethoxycurcumin, and bisdemethoxycurcumin) are considered active compounds and possess a yellowish orange color [7]. Curcumin finds potential usefulness as an anti-inflammatory, anti-mutagenic, and anti-cancer molecule [8]. It also functions as an anti-oxidant and is capable of inducing apoptosis [9,10]. A wide variety of effects of curcumin are mediated by its capability to act as a free radical scavenger, to alter gene expression of various stress proteins and genes involved in angiogenesis, and to inhibit the activity of many important transcription factors such as nuclear factor κ-light-chain-enhancer of activated B cells (NF-κB) and activator protein 1 (AP-1) [11][12][13][14][15]. Its abilities are often seen to be concentration dependent. At 10 μmol/L, it has an antioxidant effect and at 50 μmol/L it induces apoptosis, possibly in conjunction with generation of superoxide radicals [16]. Oral intake of turmeric at 4-8 g per day in humans can generate plasma levels of as little as 0.41-1.75 μmol/L. When considering the biological effects elicited by regular oral consumption, the concentration of curcumin is very important. In light of low systemic bioavailability, the role of biotransformed moieties, such as tetrahydro- and hexahydrocurcumin, has received interest as to their biologic importance.
The anti-oxidant activity of curcumin can arise either from the OH group or from the CH2 group of the β-diketone (heptadiene-dione) moiety and it has been shown that the phenolic OH groups play a major role in the biological activity of curcumin [7,17] . Most of curcumin's cellular effects are an outcome of its redox characteristics; the phenolic OH groups seem to be the most important moiety in curcumin. Replacement of this group inhibits or eliminates the lipid peroxidation inhibitory and free radical scavenging properties of curcumin [18,19] .
Curcumin, in addition to demonstrating anti-tumor action, has been also shown to be an effective chemopreventive agent. Its action in tumors of colon, stomach and skin involve inhibition of cyclooxygenase, phospholipase A2 and phospholipase-Cr1 [20,21] .
CARCINOGENESIS
Carcinogenesis is a complex process but may be largely considered to be comprised of three phases: initiation, promotion, and progression [22] . These closely related steps: going from a normal cell to a transformed initiated cell (initiation); from initiated to pre-neoplastic cell (promotion); and from pre-neoplastic to neoplastic (progression); may lend themselves to curcumin intervention.
There is suggestive evidence that inflammation may have a role in the three phases of carcinogenesis [23] . Cancer initiation has been produced by oxidative stress and chronic inflammation [24] . Inflammation acts a key regulator in promotion of these initiated cells, possibly by providing them with proliferating signals and by preventing apoptosis [25] . The role of inflammation in tumor induction and subsequent malignant progression has been investigated [26] . Inflammatory response produces cytokines which act as growth and/or angiogenic factors leading transformed cells to proliferate and undergo promotion. Leukocytes produce cytokines, angiogenic factors as well as matrix-degrading proteases that allow the tumor cells to proliferate, invade, and metastasize. Tumor-infiltrating lymphocytes secrete matrix-degrading proteinases like matrix metallopeptidase 9 (MMP-9), thus promoting neoplastic proliferation, angiogenesis, and invasion [26] . These details demonstrate the role of inflammation in all three stages of carcinogenesis. Substantial evidence for the role of inflammation in cancer may be seen by the frequent up regulation of inflammatory mediators like NF-κB. The pathways activated by NF-κB up regulators are implicated not only in tumor growth and progression but also in cancer cell development of resistance to anti-cancer drugs, radiation and death cytokines. NF-κB is an excellent target for anti-cancer therapy [27] . The effect of curcumin on carcinogenesis is felt to be through inhibition of NF-κB as well as other molecular targets ( Figure 2). Tumor initiation is modified by curcumin in several ways. Many of these seem to involve the blockade or inhibition of NF-κB.
EFFECTS ON TUMOR INITIATION BY CURCUMIN
Inflammation may initiate carcinogenesis through the production of reactive oxygen species (ROS) and reactive nitrogen species by activated neutrophils and macrophages that leads to cancer-causing mutations [28]. Curcumin has demonstrated significant reduction of levels of inducible nitric oxide synthase (iNOS). Curcumin inhibits the induction of nitric oxide synthase and is a potent scavenger of free radicals like nitric oxide [29]. NF-κB has been implicated in the induction of iNOS, which produces oxidative stress, one of the causes of tumor initiation. Curcumin prevents phosphorylation and degradation of inhibitor κBα, thereby blocking NF-κB activation, which down-regulates iNOS gene transcription [30].
Deregulatory imbalances between adaptive and innate immunity results in chronic inflammation which is associated with epithelial tumorigenesis, the prominent mechanism being NF-κB activation [31] . Curcumin was found to inhibit cell proliferation and cytokine production by inhibiting NF-κB target genes involved in this mitogen induction of T-cell proliferation, interleukin IL-2 production and nitric oxide generation [30] . Reduction induced over expression of cytokines, such as IL-10, IL-6, and IL-18, is accompanied by NF-κB induction which is controlled by and inhibited by curcumin [32] .
Curcumin has been demonstrated to increase expression of conjugation enzymes (phase Ⅱ). These have been shown to suppress ROS-mediated NF-κB, AP-1 and mitogen-activated protein kinase (MAPK) activation [33]. These enzymes, such as sulfotransferase and glutathione-S-transferase, conjugate toxic metabolites (through phase I enzymatic action) and then excrete them [33]. Curcumin modulates cytochrome P450 function and has been demonstrated to reduce aflatoxin B1-DNA adduct formation, an inhibitory step important in chemical carcinogenesis [34]. In various cancer models, curcumin was seen to further counteract ROS by increasing ornithine decarboxylase, glutathione, antioxidant enzymes and phase Ⅱ metabolizing enzymes [35]. Heme oxygenase-1 (HO-1) has been seen to counteract oxidative stress, modulate apoptosis and inhibit cancer cell proliferation. Curcumin induces HO-1 expression by signaling through nuclear factor (erythroid-derived 2)-related factor 2 (NRF-2) and NF-κB and thereby has the potential to reduce oxidative stress [36][37][38][39][40]. NRF-2 is a transcription factor that regulates the expression of conjugatory enzymes like glutathione-S-transferase via an anti-oxidant response element (ARE) [41]. Curcumin prevents initiation of tumors either by curtailing the proinflammatory pathway or by inducing phase Ⅱ enzymes [42].
TUMOR PROMOTION AND PROGRESSION SUPPRESSION BY CURCUMIN
Evidence suggests NF-κB has an important role in cancer initiation, promotion and progression. NF-κB binds to DNA and results in transcription of genes contributory to tumorigenesis: inflammation, anti-apoptosis and positive regulators of cell proliferation and angiogenesis [42] . NF-κB activation occurs primary via inhibitor κ B kinase (IKK)mediated phosphorylation of inhibitory molecules [43] . Curcumin blocks NF-κB signaling and inhibits IKK activation [44] . Suppression is also noted on cell survival and Figure 1 Diagram showing curcumin and its potential inhibitory effects on the metabolic pathway of arachidonic acid. The anti-inflammatory properties of curcumin can be attributed to its effects on many molecular targets, 5-lipoxygenase and cyclooxygenase to name a few. Curcumin has been found to inhibit 5-lipoxygenase in-vitro in a concentration dependent manner in mouse epidermal cells [6] . The proposed mechanism of cyclooxygenase (COX) inhibition is believed to be due to the inhibition of Nuclear factor κ-light-chain-enhancer of activated B cells (NF-κB) activation [53] . NF-κB has been the subject of research for the development of anti-cancer therapeutic agents due to its effects on multiple stages of carcinogenesis. Curcumin has been shown to prevent phosphorylation and degradation of inhibitor κ B α, thereby blocking NF-κB activation [30] . NF-κB, through multiple pathways, can promote inflammation, angiogenesis and disrupt cell cycle and apoptosis regulation, thus promoting carcinogenesis.
is implicated in cancer progression and poor prognosis. β-catenin in the cytoplasmic pool is phosphorylated by the axin adenomatous polyposis coli-glycogen synthase kinase 3β complex and subjected to degradation by the ubiquitin proteasome pathway [63] . Non-degraded β-catenin either enters the nucleus to transactivate the TCF/LEF transcription factors, leading to the up regulation of many genes responsible for cell proliferation, or binds to the E-cadherin adhesion complex. Reduction or loss of E-cadherin and/or increased localization of β-catenin in the nucleus is associated with invasive metastatic cancer progression and poor prognosis [64,65] . Curcumin has been found to decrease nuclear β-catenin and TCF4 and hence inhibit β-catenin /TCF signaling in various cell cancer lines [66] . Curcumin induced G2/M phase arrest in the cell cycle and apoptosis in colon cancer cells by impairing Wnt signaling and decreasing transactivation of β-catenin /TCF/LEF, subsequently alternating tumor progression [67] .
The anti-tumor effect of curcumin was evidenced by its ability to decrease intestinal tumors in an animal model of FAP by reducing the expression of the oncoprotein β-catenin [68] . Some human β-catenin /TCF target genes, including cyclin D, MMP7, OPN, IL-8 and matrilysin, play a role in tumor promotion and progression [69] . NF-κB repression and decreased β-catenin signaling are some of the mechanisms by which curcumin suppresses the promotion and progression of cancer.
CURCUMIN CLINICAL TRIALS
Every clinical trial with curcumin has shown it to be safe with minimal adverse effect. Doses of up to 8000 mg per day were well tolerated.
Sharman and colleagues assessed the pharmacodynamic and pharmacokinetic properties of curcumin in 15 Caucasian patients with a history of colorectal cancer [70] . One patient had visible disease at the time of the study and the rest had complete surgical resection. Side effects were minimal, transient and not always determined to be due to curcumin. The one patient with local colonic disease saw a decline in a cancer biomarker, carcinoembryonic antigen, from 310 ± 15 to 175 ± 9 after 2 mo of treatment (440 mg/d). Computed tomography scan revealed that disease of the colon stabilized but metastasis was noted in the liver. This was felt due to probable low systemic bioavailability of curcumin though serum levels were not measured. Safety and tolerability of curcumin doses up to 2.2 g for 4 mo were documented.
A phase I clinical trial assessed tolerability of curcumin in 25 subjects from Taiwan with high risk or premalignant lesions [71] . In this study, curcumin was provided as 500 mg capsules for 3 mo. Twenty four of the twenty five subjects finished the study. Higher doses produced higher systemic levels. Subjects who consumed 2000 mg or less had curcumin levels barely detectable in serum and no detectable levels in the urine. Histological improvements independent of dosage were observed in precancerous lesion in 7 of the cell proliferation genes, including Bcl-2, cyclin D1, IL-6, COX-2 and MMP [44,45] . Curcumin also induces apoptosis by caspase activation of a poly (ADP-ribose) polymerase (PARP) cleavage [41,44] . Regulation of NF-κB by curcumin is associated with activation of caspase 3 and 9, decreasing Bcl-X (L) messenger RNA (mRNA) and increasing Bcl-X (S) and c-IAP-2 mRNA [45] . COX-2 is the inducible form of cyclooxygenase that catalyzes the role limiting step in prostaglandin synthesis from arachidonic acid and plays an important role in cancer and tumor promotion [46,47] . Overexpression of COX-2 leads to malignant cell proliferation and invasion and the effect is reversed by non-steroidal anti-inflammatory agents, elucidating the importance of COX-2 inhibitors in cancer chemotherapy [48] . It has been suggested that COX-2 induction is mediated by NF-κB intracellular signaling pathway [49] . Curcumin has also been noted to decrease proliferation of various cancer cells, especially in the colon by down-regulating COX-2 [45,50,51] . Curcumin inhibits COX-2 but not COX-1 in colon cancer cells, demonstrating its selectivity [52] . It has been shown to inhibit COX-2 expression by repressing degradation of the inhibitory unit inhibitor κ B α and hindering the nuclear translocation of the functionally active subunit of NF-κB, thereby blocking improper NF-κB activation [53] .
Curcumin has been found to reduce the invasion and subsequent metastasis of cancer cells. Curcumin suppresses MMP expression which is believed to play a major role in mediating neovascularization and is increased during tumor progression. MMPs play an important role in endothelial cell migration and tube formation. Two determinants of neovascularization that help in forming new capillaries from preexisting blood vessels are MMP-2 and MMP-9. These two MMPs are known to be involved in tumor angiogenesis mainly through their matrixdegrading capacity [54] . Curcumin down regulates MMP-9 expression by inhibiting NF-κB and AP-1 binding to the DNA promoter region [55] . Adhesion molecules, such as vascular cell adhesion molecules (VCAM), are implicated in cancer progression and they are elevated in patients with advance disease [56] .Curcumin has been noted to cause significant inhibition of tumor necrosis factor α induced VCAM-1 expression, related to the activation of the MAPK NF-κB pathway [57] . Curcumin has been shown to reduce cell migration and invasion induced by osteopontin, an extracellular matrix protein, through the NF-κB pathway [58] . Curcumin may inhibit cancer cell growth through down regulation of IL-1 and IL-8 induced receptor internalization [59] . Curcumin controls cancer progression by either blocking tumor growth or inhibiting its invasive and aggressive potential. Most of the effects in either case are exerted by curcumin-induced NF-κB inhibition.
Certain molecular targets of curcumin's chemoprotective action are β-catenin, β-catenin/T cell factor (TCF), and lymphoid enhance factor (LEF) which are often disrupted in many cancer cells, especially colorectal carcinoma [60][61][62] . Dysregulated β-catenin (TCF) 25 subjects. Frank malignancies were observed in 2 of the 25 subjects during the 3 mo treatment regimen. This study showed the possible activity of chemoprevention, safety and tolerability in doses up to 8000 mg per day, warranting further studies.
A second phase I clinical study by Sharma et al [72] assessed curcumin biomarkers for systemic activity. This was investigated in 15 patients with histologically proven adenocarcinoma of the colon and rectum. Two of the patients had disease seemingly limited to the colon and thirteen beyond the colon. Patients received between 450-3600 mg curcumin per day with water after a 2 h fast in the morning as a single dose. Side effects were mild and some elevation of alkaline phosphatase and lactate dehydrogenase were noted. Patients consuming 3.6 g of curcumin saw a 46% decrease in Prostaglandin E2 (PGE2) levels (P = 0.028). Mean plasma levels of 11.1 ± 0.6 mmol/L were shown at the 1 h point in 3 patients consuming 3.6 g of curcumin. The levels were 1/40 of that noted in the previous study. The previous study used a synthetic version of curcumin while this study used a natural curcumin with the presence of other curcumanoid properties [71,72] . A question of ethnically related nucleotide polymorphism in the metabolizing enzyme UGT1A1 gene which might produce altered metabolism should be considered [73] . No partial responses were seen and no reduction in tumor markers was observed. Safety and tolerability of curcumin was seen up to daily dosage of 8000 mg.
A phase I study was based on evaluating the presence of curcumin metabolites in hepatic tissue and portal blood on 12 patients. Dosages ranged from 450-3600 mg of curcumin capsules which were taken for 7 d before surgery. Only 3 of 12 patients receiving 3600 mg of curcumin had detectable curcumin metabolites. Curcumin, curcumin sulfate and curcumin glucuronide were not present in bile or liver tissues in any patient. Low oral availability was noted in this study but the possibility of an oral agent to treat distant metastases of the gastrointestinal tract was advanced.
Garcea et al [74] studied curcumin levels in the colorectum and the pharmacodynamics of curcumin in 12 patients with confirmed colorectal cancer. The staging of patients was noted; 2 patients with Duke A, 3 patients were Duke B, and 7 patients were Duke C. Patients were assigned to 450 , 1800 or 3600 mg of curcumin per day for 7 d prior to surgery. Detectable curcumin levels were seen in the serum of only one patient (who was taking 3600 mg per day). Every patient had detectable curcumin levels in normal and malignant colorectal tissue ranging from 7 nmol/g to 20 nmol/g of tissue. Curcumin levels were highest in the normal tissue of the cecum and the ascending colon as opposed to the transverse, splenic flexure and the descending colon, which suggests a local effect. COX-2 levels were undetectable in normal tissue but detectable in malignant colorectal tissue. Curcumin was not found to modulate the expression of Cox-2 in malignant tissues. It appears from this study that doses of 3600 mg of curcumin are safe and sufficient to see pharmacodynamic changes in the gastrointestinal tract.
Colonic polyps are considered to be a precursor to cancer. The effect of curcumin has been studied on humans and animals (mice) with FAP coli.
The C57BL/6J Min/+ mouse is an established model for the study of FAP coli [75]. In one study, 0.2% and 0.5% curcumin in the diet reduced adenoma multiplicity by 39% and 40%, respectively, compared to control. The concentration in the small intestine mucosa was noted to be between 39 nmol/g and 240 nmol/g of tissue. Curcumin disappeared from the tissues and plasma within 2-8 h after dosing. A suggested dosage for humans was estimated by extrapolation to be 1.6 g per day. Tumorigenesis was noted in the small bowels of the animal model.
A human study of 5 patients with familial adenomatous polyps was performed using 480 mg of curcumin and 20 mg quercetin three times a day [76] . Four patients had a retained rectum and one had an ileoanal anastomosis. This study spanned 6 mo. All five patients had a decrease in number and size of polyps from their baseline. A mean decrease in polyp number by 60.4% (P < 0.05) and size by 50.9% (P < 0.05) was noted. No adverse effects were noted to any patient and no related laboratory abnormalities were seen. This is the first human demonstration of the reduction in size and number of ileal and rectal polyps in patients with FAP by a curcumin containing agent. The lack of toxicity coupled with the benefits demonstrated makes larger studies compelling.
CONCLUSION
The preponderance of colon cancer is a subject of paramount importance. A need has been demonstrated for compounds that target multiple molecular and cellular pathways which may be important to chemoprevention and/or chemotherapy. Curcumin has demonstrated these chemopreventive properties in cell cultures, animal models and human investigations. Human trials have concluded that curcumin is safe and poses minimal adverse effects. Doses up to 8000 mg per day were well tolerated. Effectiveness in altering pathologic changes was demonstrated. Further studies and possible developments are necessary to fully confirm cardiovascular safety due to suppression of COX-2, albeit the mode of suppression is more difficult than COX-2 inhibitors such as celecoxib, valdecoxib, and rofecoxib. Further studies related to the relevance of bioavailability and curcumin effect on carcinogenesis are important. The development of standardized criteria for preparations of curcumin is critical for further in depth studies. There needs to be further investigation of the role of nucleotide polymorphism and altered metabolism of curcumin (i.e. VGT enzymes) and its impact on carcinogenesis.
Curcumin chemotherapy and chemoprevention of colon cancer presents many exciting possibilities. Many things need to be evaluated, investigated, and developed but the prospects for curcumin as a therapeutic agent are indeed promising. | 2018-04-03T03:36:32.664Z | 2010-04-15T00:00:00.000 | {
"year": 2010,
"sha1": "8318cd6bc7a86c69c2829fc7f04ccbd50486f000",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4251/wjgo.v2.i4.169",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ca0ab2595eda5077a44afc84f1a13d6f94c4017a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2812501 | pes2o/s2orc | v3-fos-license | A Graphical Method for Model Selection
In this paper, we present a graphical method for selection of the model among the many competitive models. The proposed method not only selects the model but also tests the equal prediction accuracy of the models.
Introduction
Model selection among many competing models is one of the crucial jobs in regression and time series analysis. Most of these criteria attempt to find the model for which the predicted values tend to be closest to the true expected values, in some average sense. In this paper, selection of the model among several models based on their out-of-sample forecasting errors is discussed. The proposed method is a two-step procedure. In the first step, we test the statistical significance of the models against the overall mean, and in the second step, we select a good model which has the minimum measure of error. Section 2 presents various procedures of model selection. Section 3 presents a graphical method for model selection. Section 4 presents an empirical study considering three models with an equal number of parameters. Section 5 presents the conclusion.
Methods for Model Selection
There are many proposed methods for model selection. Some of these techniques are presented below.
Model Selection using R²
The use of the coefficient of determination, R², in model selection is a common practice in regression analysis and time series analysis. We have seen that maximizing R² is not a sensible criterion for selecting a model, because the most complicated model will have the largest R² value. This reflects the fact that R² has an upward bias as an estimator of the population value of R². This bias is small for large n but can be considerable with small n or with many predictors. The major criticism of R² is due to the fact that the addition of an explanatory variable cannot cause this statistic to fall. In comparing the predictive power of different models, it is often more helpful to use the adjusted coefficient of determination, R²_adj = 1 − s_e²/s_y², where s_e² is the estimated conditional error variance (i.e. the mean squared error) and s_y² is the sample variance of y. Unlike ordinary R², if an explanatory variable is added to a model that is not especially useful, then R²_adj may even decrease. This happens when the new model has poorer predictive power, in the sense of a larger value of the mean squared error. One possible criterion for selecting a model is to choose the one having the greatest value of R²_adj. This is, equivalently, selection of the model with the smallest mean squared error value.
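As a minimal illustration of this criterion, the ordinary and adjusted R² could be computed as in the Python sketch below; the function and variable names are only illustrative and are not taken from the paper, and the degrees-of-freedom convention (n − p − 1, with p predictors) is one common choice.

```python
import numpy as np

def r_squared(y, y_hat, p):
    """Ordinary and adjusted R^2 for a model with p estimated predictors (illustrative)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    sse = np.sum((y - y_hat) ** 2)           # residual sum of squares
    sst = np.sum((y - y.mean()) ** 2)        # total sum of squares about the mean
    r2 = 1.0 - sse / sst
    # Adjusted R^2: 1 - (mean squared error) / (sample variance of y)
    r2_adj = 1.0 - (sse / (n - p - 1)) / (sst / (n - 1))
    return r2, r2_adj
```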
Model Selection using Index of agreement (d)
The index of agreement (d) was proposed by Willmott (1981) to overcome the insensitivity of R² to differences in the observed and predicted means and variances. The index of agreement represents the ratio of the mean square error and the potential error (Willmott, 1982) and is defined as d = 1 − [Σ(P_i − O_i)²] / [Σ(|P_i − Ō| + |O_i − Ō|)²], where P_i and O_i are the predicted and observed values and Ō is the observed mean. The potential error in the denominator represents the largest value that the squared difference of each pair can attain, with the mean square error in the numerator. The range of d is similar to that of R² and lies between 0 (no agreement) and 1 (perfect agreement). Select the model which has the maximum index of agreement.
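A short sketch of Willmott's index of agreement is given below; obs and pred are assumed arrays of observed and predicted values, and the names are illustrative only.

```python
import numpy as np

def index_of_agreement(obs, pred):
    """Willmott's index of agreement d: 0 = no agreement, 1 = perfect agreement."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    squared_error = np.sum((pred - obs) ** 2)       # numerator: sum of squared errors
    potential_error = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - squared_error / potential_error
```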
Model Selection using Measures of Error
One method for evaluating a forecasting technique uses the summation of the absolute errors. The mean absolute error, MAE = (1/n) Σ|e_t|, measures forecast accuracy by averaging the magnitudes of the forecast errors (i.e. absolute values of each error). MAE is most useful when the analyst wants to measure forecast error in the same units as the original series.
The mean squared error, MSE = (1/n) Σ e_t², is another method for evaluating a forecasting technique. This approach penalizes large forecasting errors, since the errors are squared. This is important because a technique that produces moderate errors may well be preferable to one that usually has small errors but occasionally yields extremely large ones. The root mean squared error is given as RMSE = √MSE.
The mean absolute percentage error (MAPE) is a relative error statistic measured as the average percent error of the historical data points, MAPE = (100/n) Σ|e_t/y_t|, and is most appropriate when the cost of the forecast error is more closely related to the percentage error than the numerical size of the error. MAPE is computed as the average of the absolute percentage error values. MAPE provides an indication of how large the forecast errors are in comparison to the actual values of the series. A model is said to be good if the MAPE value is not greater than five. Select the model which has the minimum MAE, RMSE and MAPE values (De Gooijer and Hyndman, 2006).
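The four error measures above could be computed together as in the following sketch (illustrative names; MAPE assumes the observed series contains no zeros):

```python
import numpy as np

def error_measures(y, y_hat):
    """MAE, MSE, RMSE and MAPE (in percent) of a set of forecasts."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    e = y - y_hat                              # forecast errors
    mae = np.mean(np.abs(e))
    mse = np.mean(e ** 2)
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(e / y))      # undefined if any y equals zero
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape}
```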
Model Selection using Percentage Better Statistic
There are several commonly used types of scale-independent statistic. The first type essentially relies on pairwise comparisons. If method A and method B, say, are tried on a number of different series, then it is possible to count the number of series where method A gives better forecasts than B (using any sensible measure of accuracy). Alternatively, each method can be compared with a standard method, such as the random walk forecast (where all forecasts equal the latest observation), and the number of times each method outperforms the standard is counted. Then the percentage number of times a method is better than a standard method can readily be found. This statistic is usually called 'Percent Better'.
Let r_t = e_t / e_t* denote the relative error, where e_t* is the forecast error obtained from the base method. Usually, the base method is a benchmark method or the naive method where ŷ_t is equal to the last observation.
The Percentage Better statistic can then be written as PB = (100/n) Σ I(|r_t| < 1), where I(u) = 1 if u is true and 0 otherwise. We select the model which has the maximum percentage better performance compared to the other models (De Gooijer and Hyndman, 2006).
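A possible implementation of the Percent Better statistic, under the reading that a forecast "wins" when its absolute error is smaller than that of the base method, is sketched below (names are illustrative):

```python
import numpy as np

def percent_better(e_model, e_base):
    """Percentage of forecasts where the model's absolute error beats the base method's."""
    wins = np.abs(np.asarray(e_model, float)) < np.abs(np.asarray(e_base, float))
    return 100.0 * np.mean(wins)
```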
Model Selection using AIC or SBC
An approach to model selection that considers both the model fit and the number of parameters has been developed. The information criterion of Akaike, or AIC, selects the best model from a group of candidate models as the one that minimizes AIC = n ln σ̂² + 2p, where σ̂² is the residual variance, n is the number of residuals and p is the number of parameters in the model.
The Bayesian information criterion developed by Schwartz, or SBC, selects the model that minimizes SBC = n ln σ̂² + p ln n. The second term in both AIC and SBC is a penalty factor for including additional parameters in the model. Since the SBC criterion imposes a greater penalty for the number of parameters than does the AIC criterion, use of the minimum SBC for model selection will result in a model whose number of parameters is no greater than that chosen by AIC. Often, the two criteria produce the same result. We select the model which has the minimum AIC and SBC values (Akaike, 1974; Schwartz, 1978).
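The two criteria could be computed from a model's residuals as in the sketch below; the residual-variance estimator (the mean squared residual) is an assumption consistent with the definitions above, and the names are illustrative.

```python
import numpy as np

def aic_sbc(residuals, p):
    """AIC = n*ln(sigma^2) + 2p and SBC = n*ln(sigma^2) + p*ln(n)."""
    e = np.asarray(residuals, float)
    n = e.size
    sigma2 = np.mean(e ** 2)                  # residual variance
    aic = n * np.log(sigma2) + 2 * p
    sbc = n * np.log(sigma2) + p * np.log(n)
    return aic, sbc
```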
Model Selection using Friedman Statistic
Friedman's test is used to compare multiple forecasting models with respect to squared errors or absolute errors, trying to infer whether there are significant general differences in the performance of the models. Friedman's test is a nonparametric test which is designed to detect differences among two or more groups. Friedman's test, operating on the sum of the ranks R_j, considers the null hypothesis that all models are equivalent in performance (have similar mean ranks). Under the null hypothesis, the statistic χ² = [12 / (n k (k + 1))] Σ_j R_j² − 3 n (k + 1) is approximately distributed as χ² with k − 1 degrees of freedom, where k = number of models and n = number of observations for each model. The null hypothesis of equal prediction accuracy of the models is tested using the Friedman test. If there is a significant difference among the models, we select the model which has the first rank. To discover the great winner of all the competing models, the above procedure should be repeated by eliminating the weakest model, to which the largest rank is mostly assigned (Adil Korkmaz and Burak Onemli, 2011).
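In practice the test can be run directly with SciPy, as in the hedged sketch below; the errors array is only a placeholder standing in for the absolute (or squared) out-of-sample errors of k models on the same n observations.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Placeholder data: n = 28 observations, k = 3 competing models.
rng = np.random.default_rng(1)
errors = np.abs(rng.normal(size=(28, 3)))

# Friedman test on the per-observation errors of the three models.
stat, p_value = friedmanchisquare(errors[:, 0], errors[:, 1], errors[:, 2])

# Mean rank of each model (rank 1 = smallest error on that observation).
mean_ranks = np.apply_along_axis(rankdata, 1, errors).mean(axis=0)
print(stat, p_value, mean_ranks)
```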
Model Selection using Principle of Parsimony
All things being equal, simple models are preferred to complex models. This is known as the "principle of parsimony". With a limited amount of data, it is relatively easy to find a model with a large number of parameters that fits the data well. However, forecasts from such a model are likely to be poor because much of the variation in the data due to random error is modeled. The goal is to develop the simplest model that provides an adequate description of the major features of the data. The principle of parsimony refers to the preference for simple models over complex ones (Chatfield, 1991).
A Graphical Method for Model Selection
In this section, we propose a graphical procedure using the bootstrap method for the selection of a good model among several competitive models. The bootstrap has been the object of much research in statistics since its introduction by Efron (1979). The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one's data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an approximation to the distribution of an estimator or test statistic that is at least as accurate as the approximation obtained from first-order asymptotic theory (Efron and Tibshirani, 1993).
Let e_it, t = 1, 2, …, m, denote the forecasting errors generated by the i-th model, where m is the number of forecasts generated by the i-th model, and let d_it = g(e_it), g being some specified loss function, for example the absolute error, the squared error or another suitable error measure. The bootstrap graphical procedure for selecting a model among the adequate models is given in the following steps:
1. Compute the mean of the error function of the i-th model, d̄_i, from its out-of-sample errors.
2. Draw B bootstrap samples from the errors and compute the mean of the b-th bootstrap sample for the i-th model, b = 1, 2, …, B.
3. Form the distribution of the mean using the B bootstrap estimates and compute the central decision line (CDL) from the ordered bootstrap means ([x] denoting the integer part of x).
4. The lower decision line (LDL) and the upper decision line (UDL) for the comparison of each of the d̄_i are given by the corresponding lower and upper limits of the bootstrap distribution of the mean at level α.
5. Plot d̄_i against the decision lines. If any one of the points plotted lies outside the respective decision lines, the hypothesis of equal prediction performance of the models is rejected at level α and we may conclude that the prediction performance of the models is not the same.
6. If any one of the points is plotted above the UDL, then the corresponding models are considered to be inefficient models and may be eliminated from the analysis. If the points are plotted below the LDL, then the corresponding models can be considered as efficient models for prediction and we select the model which is very close to the x-axis or zero. If the points fall in between the UDL and LDL, then the corresponding models can be treated as equally efficient in their prediction accuracy. (An illustrative sketch of these steps in code is given below.)
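The paper does not spell out the exact formulas for the decision lines, so the sketch below should be read only as one plausible, percentile-based realization of the steps above: losses are pooled under the null hypothesis of equal performance, bootstrap means give the CDL and the α-level limits, and each model's mean loss is compared against them. All names and the percentile construction are assumptions.

```python
import numpy as np

def bootstrap_decision_lines(errors, B=1000, alpha=0.05, seed=0):
    """Illustrative bootstrap decision lines for comparing models' mean absolute errors.

    errors : (n, k) array of out-of-sample errors for k models on n observations.
    Returns each model's mean loss d_bar and the LDL, CDL, UDL.
    """
    rng = np.random.default_rng(seed)
    d = np.abs(np.asarray(errors, float))          # loss g(e) = |e|
    n = d.shape[0]
    pooled = d.ravel()                             # pool losses under H0 of equal performance
    boot_means = np.array([rng.choice(pooled, size=n, replace=True).mean() for _ in range(B)])
    cdl = boot_means.mean()
    ldl, udl = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    d_bar = d.mean(axis=0)                         # one point per model to plot against the lines
    return d_bar, ldl, cdl, udl

# Models whose d_bar falls below the LDL would be flagged as efficient,
# above the UDL as inefficient, and in between as equally efficient.
```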
This method not only tests the significant difference among the models but also identifies the source of heterogeneity of the models. The proposed method depends only on the supplied information and does not require any distributional assumptions.
Empirical Study
The following table presents the out-of-sample data of size 28 and the forecasts generated from the three adequate models A, B and C, each with p = 2 estimated parameters (source: Naveen Kumar Boiroju, 2011). The table also presents the forecasts and errors generated from the three models. We compute the error statistics for the three models and the results are presented below for the models A, B and C, respectively. By applying the bootstrap procedure explained in the previous section, the LDL, CDL and UDL are obtained as 0.074, 0.088 and 0.102, respectively. Prepare a chart as in Figure 1, with the above decision lines, and plot the points d̄_i against them. From Figure 1, we observe that d̄_B lies outside the decision lines. Hence, H₀ may be rejected and it may be concluded that the mean absolute errors of the three forecasting models are not equal. From the same figure it is observed that d̄_A and d̄_C lie within the LDL and UDL, which indicates that the prediction performance of the models A and C is the same. Since the d̄_B value lies below the LDL, the corresponding model B is selected and we may conclude that model B is an efficient model among the models.
Conclusion
The proposed method, being a graphical procedure, simultaneously demonstrates the statistical significance and identifies the source of heterogeneity without knowing the underlying distribution of the errors. The proposed procedure depends on prediction performances that can be measured as distances on out-of-sample data, and this method can be treated as an alternative test procedure for the equal prediction accuracy of several models. The proposed method classifies the available prediction models under three categories: inefficient models, equally efficient models and efficient models. Finally, the proposed graphical method can be treated as a tool to test the equal prediction accuracy of the models, to classify the models into inefficient, equally efficient and efficient model categories, and to choose an efficient model among the several models.
Figure 1: Comparison of forecasting models.
Table 1: Out-of-sample data (Y), forecasts (Ŷ_A, Ŷ_B, Ŷ_C) and errors (e_A, e_B, e_C) for the three models.
Table 2: Measures of Errors
From the above table it is clear that the model B has maximum index of agreement and minimum MAE, MSE, RMSE, MAPE, AIC and SBC values. Hence the model B is selected among the models. The results of percentage better statistics for the selected models are presented in the following table.
Table 3: Percentage Better Performance of the Models
From the above table, it is observed that the model A is 21.43% and 46.43% better than the B and C models, respectively. Model B is 78.57% and 64.29% better than the A and C models, respectively. Model C is 53.57% and 35.71% better than the A and B models, respectively. Therefore the most suitable model for forecasting is model B, which has the maximum percentage better performance compared to the other models. We apply the Friedman test considering the absolute errors of the models; their mean ranks are 2.304, 1.589 and 2.107 for the models A, B and C, respectively. The following table shows the Friedman test statistic and its asymptotic significance probability.
Table 4: Friedman Test
Since the asymptotic significance probability is smaller than the chosen level, the null hypothesis of equal prediction performance of the models is rejected and we may conclude that the prediction performance of the models is not the same. Thus the model B is selected, since it has the first rank among the models. | 2014-10-01T00:00:00.000Z | 2012-11-08T00:00:00.000 | {
"year": 2012,
"sha1": "25e361a5ef47b3f9cbbc010c0a6184671afe4199",
"oa_license": "CCBY",
"oa_url": "https://pjsor.com/pjsor/article/download/427/281",
"oa_status": "GREEN",
"pdf_src": "CiteSeerX",
"pdf_hash": "25e361a5ef47b3f9cbbc010c0a6184671afe4199",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
246506525 | pes2o/s2orc | v3-fos-license | Photoreaction of biomass burning brown carbon aerosol particles †
The light-absorbing fraction of atmospheric organic particles, known as brown carbon (BrC) aerosol, can affect climate by influencing global radiative forcing. Regional effects arising from biomass burning BrC pollution are of particular interest, as they can have very high optical depth close to their sources. Due to the numerous fuel types, combustion conditions and reaction pathways encountered during the emission and lifetime of BrC, significant uncertainty arises from the impact that atmospheric aging can have on aerosol chemical composition and optical properties. Here, we investigate short-term aging processes driven by exposure to ultraviolet light (≈360 nm) that occur with primary BrC particles generated by smoldering pine wood. Suspended particles were aged in a chamber for a duration of 30 min. The single scattering albedo (SSA) at 405 nm decreased significantly by approximately 0.02 units (for example, from 0.98 to 0.96) during that time, due to an increase in absorption relative to scattering. This change was associated with an increase in oxygenation, represented by the signal fraction at m/z = 44 (f44) from the aerosol mass spectrometer (AMS). Surprisingly, the SSA continued to decrease after light exposure stopped, pointing to the presence of long-lived reactive species, perhaps radicals, that we hypothesize to form from photosensitized reactions. While relative humidity (RH) had only a minor impact on the rate of aging during light exposure, the aging rate in the dark may have occurred faster under dry conditions than at 45% RH.
Introduction
Biomass burning injects combustion compounds in enormous quantities into the atmosphere, including those that contribute to brown carbon (BrC) aerosol particles. Along with black carbon (BC), BrC is a component of carbonaceous aerosol that can significantly impact the Earth's radiative balance by its ability to absorb ultraviolet and visible light. 1,2 It was estimated that the magnitude of BrC absorption is up to 25% the magnitude of BC absorption (0.1-0.25 W m−2). 3 Regional, near-source absorption is also important for UV fluxes and photochemistry. Ground-based measurements have found varying contributions of BrC's absorption relative to BC depending upon proximity to the source. 4 Global model predictions show large regional variations. 5][8][9][10] All these factors lead to uncertainties in radiative forcing estimates. Thus, BrC aerosol optical properties and atmospheric behavior need to be better defined. 11][17][18][19] All of these reactants can change the ability of BrC to absorb light, and consequently, its impact on climate. To constrain particulate BrC's impact and lessen uncertainties in our model predictions, it is important to study each aging process to understand its relative contribution to BrC's overall light-absorption behavior. Of importance is determining the processes that occur both close to the source, often referred to as in the near-field, and those that occur as the particles reside in the atmosphere for much longer times in the far-field.
Photochemical aging of primary brown carbon has been investigated in the laboratory in a variety of media and over different timescales. These experiments have been performed using solvent extracts of BrC filter samples, particles collected on filters, and aerosol particles suspended in air. As described below, condensed-phase photochemistry can lead to either enhanced absorption for relatively short time exposures, decreased absorption, or increased absorption followed by decreases over long experiments. This wide range of behavior is not surprising given that both direct and photosensitized reactions can lead to individual chromophore formation and loss. 20 Direct light exposure frequently leads to photolysis, fragmenting chromophores into smaller, less absorbing molecules. 6,19][22] Specifically for BrC, Fleming et al. recently described the evolution of specific chromophores during the course of ultraviolet light exposure to filter-suspended biomass burning particles. Total UV-visible absorption measurements after long irradiation times showed an overall decrease in absorption for all wavelengths on timescales of 10-41 days, although individual absorbing molecules decayed more rapidly. 6 In the aqueous phase, Hems et al. observed an increase in the water-soluble fraction of the wood smoke BrC mass absorption coefficient by a factor of 2 at 400 nm, during 6 hours of light exposure. This effect was attributed to the initial formation of aromatic dimers. 13 Wong et al. also conducted experiments on water-soluble BrC and observed an increase in absorption in the initial 15 hours of aging at visible wavelengths associated with high molecular weight species (≥400 Da) followed by photobleaching for the remaining 10 hours. The lower molecular weight species, however, experienced rapid photobleaching. 14 Zhong & Jang looked at suspended aerosols generated from hickory wood smoke when exposed to natural sunlight. They observed an initial increase in absorption by 11-64% from secondary organic aerosol formation, followed by a decrease by 19-68% from sunlight photobleaching. 23 Jones et al. observed a redshift in the absorption peak from simultaneous UV illumination and drying of a water droplet containing wood smoke BrC. 24 Upon UV exposure, Saleh et al. observed enhanced absorption in the near-UV wavelengths compared to longer wavelengths. They measured the formation of SOA, however, in addition to aging by primary particles. 25 Finally, Hinks et al. explored the dependence of the rates of photoreactions of 2,4-nitrophenol upon environmental parameters, such as relative humidity (RH), temperature and organic matrices, finding that these reactions occur more slowly under drier, colder, and more viscous conditions. 26][8][9][10] Nonetheless, a consistent observation is that an increase in absorption occurs at short timescales followed by a decrease at longer timescales. This behavior is qualitatively similar to what has been observed with aging of aqueous wood smoke BrC using dissolved OH radicals, 13,14 and heterogeneous OH aging of both primary and secondary BrC surrogates. 12,15,17,18 This raises the question whether similar processes are involved in both photoreactions and OH oxidative aging. In particular, since the formation of OH radicals is usually initiated by photolysis reactions, OH oxidation as well as direct photochemistry are potentially both occurring in the reaction systems.
A challenge related to performing laboratory aging of BrC samples is the ability to reproduce realistic atmospheric conditions in an indoor, controlled environment. For example, Wong et al. and Hems et al. worked with bulk aqueous media, Fleming et al. conducted experiments on a filter, and Zhong and Jang worked with suspended aerosols with secondary particle formation occurring. The goals of this project are to complement past work by focusing on the optical properties of suspended wood smoke particles when exposed to UV radiation centered at 360 nm. We purposefully scrub most semi-volatile gases that can be rapidly denuded so that we can isolate the photochemical aging behavior associated with primary, low volatility wood smoke aerosol particles, and we avoid using light sources with intensity below 300 nm. This prevents formation of secondary organic aerosol. We track both the scattering and absorption of the suspended particles, in order to measure the single scattering albedo of the particles, which is the best measure of their intrinsic ability to absorb light. We use aerosol mass spectrometry to gain some indication of the nature of the chemical changes that occur, and we vary the relative humidity in the chamber to explore the impacts of this environmental parameter.
Generation of wood smoke
A representation of the experimental setup is shown in Fig. 1. Primary brown carbon aerosols were generated based on an experimental setup previously described. 12,27 A 4 g piece of pine wood of approximately 8 × 2 × 1 cm dimensions was heated in a quartz tube (inner diameter of 2.2 cm) using a furnace at 400 °C (Thermo, Lindberg Blue M). Clean air (Linde, Grade Zero 0.1) was flowed through the tube at a rate of 2 L min−1. After the first signs of smoldering from an orange glow, but with no flame, the wood smoke was connected to a 1 m³ Teflon chamber. Before entering the chamber, the smoke passed through an impactor (457 µm diameter, 1 L min−1) and cyclone (2.5 µm cutpoint) as well as a buffer volume of approximately 0.024 m³, which removed the largest particles. Two denuders with activated charcoal (Sigma-Aldrich, 4-14 mesh) were used after the buffer volume to remove gas-phase compounds released from the wood smoke. These denuders are sufficiently effective that no secondary organic aerosol was observed in past studies, when high OH concentrations (≈10⁷ molecules per cm³) were present in the chamber. 12,15 Wood smoke was added until the optical scattering signal as well as the total mass loading (typically a few hundred µg m−3) were stable. The particles entered a chamber that had been prepared at the desired experimental condition (<15% or 45% RH) with flow from a clean air generator through an H₂O bubbler.
Photoreaction aging of brown carbon
Particles were aged in the chamber during 30 min by exposure to 24 UVA lights (centered at 360 nm, Fig. S1†). Light level measurements in the chamber (StellarNet Inc. BLACK-Comet spectrometer) indicate that they are approximately the same when integrating from 300-400 nm as the actinic flux (see Fig. S1†). The experimental timescales are therefore roughly comparable to atmospheric timescales. We note, however, that there is variability in the estimations of clear-sky actinic flux depending on location and time. Additionally, light exposure varies greatly with different biomass burning plumes and within a single plume, where the darkest areas are located at the center. A dilution flow of approximately 4 L min−1 was continually flowing into the chamber. After 30 min, the lights were turned off and continuous measurements were taken for approximately 15 more minutes.
Connected to the chamber was an instrument outlet for measurements by a scanning mobility particle sizer (SMPS) equipped with a differential mobility analyzer (DMA; TSI, 3080), an X-ray neutralizer (TSI, 3087), as well as a condensation particle counter (CPC; TSI, 3776). The flow rate from the CPC was set to 0.3 L min−1, and the sheath flow of the DMA was 2 L min−1. The measurements were taken during 135 s scans. A photoacoustic soot spectrometer (PASS; Droplet Measurement Technologies) was also connected to the chamber. It measured the aerosol scattering and absorption coefficients at 405 nm. The flow rate through the PASS was 1 L min−1. During specific experiments, an aerosol mass spectrometer (HR-ToF-AMS; Aerodyne; data analysis performed with the SQUIRREL and PIKA software in Igor) was sampling as well. The AMS had a flow rate of approximately 0.07 L min−1. The reported f44 and f60 fractions were determined using high-resolution spectral peak fitting.
For offline measurements performed with UV-vis, the air from the chamber was pulled through a filter holder containing borosilicate glass filters (47 mm diameter; Pall) at a rate of approximately 15 L min−1 for 1 hour. The filter collection started as soon as the lights were turned off. For the collection of fresh wood smoke, the wood smoke was taken from the chamber without any light exposure. The mass collected on the filters was around 0.3 mg. The samples were extracted with 10 mL of dimethyl sulfoxide (DMSO; Sigma-Aldrich) and sonicated for 1 min. We have previously demonstrated that DMSO is the best solvent to remove colored wood smoke material from the filters. 27 Analysis by UV-vis (Ocean Optics) was performed in a 1 cm cuvette, with an integration time of 300 ms and 3 scans per average.
Results
Absorption, scattering and single scattering albedo (SSA) measurements at 405 nm from a single light exposure experiment are shown in Fig. 2. The SSA is defined using the following equation: SSA = b_scat / (b_scat + b_abs), where b_scat and b_abs are the scattering and absorption coefficients, respectively. Before light exposure, the total absorption and scattering data both exhibit a decreasing trend, due to wall loss of particles in the chamber. The SSA, however, is more stable as a function of time (see also Fig. 3b and 4b). With exposure to light, the absorption experiences an initial increase, while the scattering continues to decrease. This translates into an SSA value with a strongly decreasing trend. The SSA data thus decreases as a result of an increase in absorption relative to scattering, and not solely from reduced scattering. In the subsequent figures, only SSA data will be presented.
Fig. 1 Experimental setup for the generation, aging and detection of BrC photoreaction. UV-vis was used as an offline technique and required a separate experiment to the online techniques. The AMS was not used for every experiment. The denuders were composed of activated charcoal. The UVA lights were centered at approximately 360 nm (see Fig. S1†). SMPS: scanning mobility particle sizer. PASS: photoacoustic soot spectrometer. AMS: aerosol mass spectrometer.
Changes in the optical properties of primary brown carbon aerosols were investigated under 2 conditions, with average results presented for dry (Fig. 3b and S5†) and 45% RH (Fig. 4b and S5†) conditions. In these figures, in order to compare the single scattering albedo from different experiments, a normalized SSA was determined by initializing the SSA value at zero exposure time (SSA_o) to have a value of 1, to account for the inherent experimental variability in starting SSA values. The average absolute starting SSAs with their standard deviations are shown in Table S1.† While initially stable, the SSA starts to decrease upon light exposure with a total decrease of approximately 0.02 during 30 min of light exposure. An additional observation is that, surprisingly, the SSA continues to decrease after the lights were turned off. This continued aging persists for at least 30 minutes (Fig. S7†). As discussed below, this suggests that light induces the formation of sufficiently long-lived reactive species that are responsible for driving chemistry in the dark. Also shown in the figures are aerosol particle size distributions and select quantities from the AMS measurements. In particular, the particle size distributions (Fig. 3a and 4a) as a function of time show decreasing particle number concentrations due to losses to the chamber walls, but they do not exhibit any abrupt changes in the shape of the distribution upon exposure or removal of light. However, the increase in absorption at exposure time zero is associated with an increase in a measure of the particle oxidation state, represented by the fraction of the AMS organic mass spectrum (f44) (Fig. 3c and 4c). This m/z 44 mass spectral fragment arises from peroxides, esters and acids, often indicative of oxidation. 7 The SSA is an important metric used to estimate aerosol absorption capability by taking into account scattering as well.
To quantitatively compare the extent of SSA decrease for different experimental conditions and during the course of a photoreaction experiment, the linear regression slopes of the changing SSA values versus time are presented in Fig. 5. The more negative the slope, the faster and more significant the intrinsic absorption increase, i.e. increasing absorption relative to scattering.
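As a small illustration of how these quantities are obtained, the sketch below computes the SSA time series from the measured coefficients, normalizes it to its starting value, and fits the linear slope; variable names are assumptions and the routine is not taken from the authors' analysis code.

```python
import numpy as np

def ssa_trend(t, b_scat, b_abs):
    """Normalized single scattering albedo (SSA/SSA_o) and its linear slope versus time."""
    b_scat, b_abs = np.asarray(b_scat, float), np.asarray(b_abs, float)
    ssa = b_scat / (b_scat + b_abs)        # single scattering albedo at the measurement wavelength
    ssa_norm = ssa / ssa[0]                # normalize to the value at zero exposure time
    slope, intercept = np.polyfit(np.asarray(t, float), ssa_norm, 1)
    return ssa_norm, slope
```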
In general, the SSA slopes before light exposure are close to zero for all conditions, including during a control experiment during which the lights were not turned on (Fig. S6†). During light exposure, the slopes were consistently negative, signifying a decrease in SSA. Two of the dry and 45% RH experiments were performed in pairs, one immediately after the other. While the results in Fig. 5 suggest that the relative humidity doesn't significantly affect the results when the lights are on, in the paired experiments the SSA slope was consistently more negative for 45% RH. This suggests that there is potentially a small influence of RH on the optical properties.
When the lights are turned off, the photo-chemically initiated aging continues in the dark, but at a slower rate. Different environmental conditions yield different aging rates in the dark, where the higher relative humidity conditions may exhibit a slower decay than drier conditions, with the caveat that the error bars are overlapping.
As described in the Methods section, filter samples were collected after light exposure for UV-vis analysis. The UV-vis absorption spectra of the DMSO extracts of the brown carbon material were then measured. Although these measurements require sample extraction, they allow us to observe shifts in the overall absorption spectrum, whereas the absorption measurements described above were at a single wavelength (405 nm). In order to account for experimental variability and compare the shapes of the spectra from different experiments, the absorption was normalized to 1 at 275 nm, which is approximately the maximum of the absorption spectrum. Results are shown in Fig. 6. In particular, it is shown that 30 minutes of photoreaction significantly increased the absorption at wavelengths > 350 nm.
Discussion
The major finding in this work is that primary wood smoke particles exhibit enhanced ultraviolet and visible absorption relative to scattering when exposed to atmospherically relevant wavelengths of ultraviolet light. This is in agreement with prior results from the literature, as discussed in the Introduction. 13,14,26,28 This behavior is exhibited not only with the in situ measurements at 405 nm but also with the DMSO extracts, across the near-UV and visible parts of the spectrum. These changes occur without noticeable changes in the particles' size. As well, an intriguing observation is that the changes observed at 405 nm continue after the light exposure stops.
The results indicate that oxidation is occurring in the particles during the photoreactions. In particular, when the lights are turned on, we observe a pronounced increase in the m/z 44 fraction of the organic mass spectrum, which is indicative of oxidation (see Fig. 3c and 4c). The f60 AMS fraction also shows signs of oxidative aging during the experiment. This signal is representative of sugars such as levoglucosan and is a common biomass burning tracer. While f60 is rising before the lights are turned on (likely because the increasing mass loading in the chamber at this time is driving increased partitioning of semivolatile species to the particles), the signal abruptly starts declining when the lights are turned on. This collective behavior of a decrease in m/z 60 and an increase in m/z 44 (see also Fig. S2†) is characteristic of oxidative aging of biomass burning particles.29 As well, levoglucosan decay from light exposure has been previously observed in chamber experiments and was attributed in part to OH oxidation arising from particles.23 We note that levoglucosan does not absorb 360 nm light.30

The mechanisms of formation of reactive species that lead to the f44 increase and f60 decrease are unclear. We do not believe that gas-phase OH radicals are being formed in substantial quantities, leading to heterogeneous oxidation. In particular, the flow from the wood smoke source passes through two diffusion denuders, which will remove most potential gas-phase precursors; the wavelength of the UV light is high (≈360 nm); and we do not flame the wood to form NOx. Most importantly, the character of the oxidation processes does not change after the light exposure is removed, i.e. removal of light would have instantly removed gas-phase OH sources.
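For readers unfamiliar with the f44 and f60 notation, these are simply the fractions of the total organic AMS signal measured at m/z 44 and m/z 60. A minimal sketch of that calculation is shown below; the dictionary-based spectrum representation is an assumption made for illustration, not the AMS software's data model.

```python
def ams_fraction(organic_spectrum, mz):
    # Fraction of the total organic signal at a given m/z,
    # e.g. f44 (oxidation marker) or f60 (levoglucosan-like marker).
    # organic_spectrum maps integer m/z values to organic-equivalent signal.
    total = sum(organic_spectrum.values())
    return organic_spectrum.get(mz, 0.0) / total

# f44 = ams_fraction(spectrum, 44)
# f60 = ams_fraction(spectrum, 60)
```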
When the lights are on, one possibility is that OH radicals are being generated within the particles. In particular, there is a report in the literature that OH radicals are present in the aqueous phase during light exposure of wood smoke extracts using UVB wavelengths.14 More generally, for many decades, there have been indications that indirect or photosensitized condensed-phase photochemistry can drive the formation of reactive species, such as singlet molecular oxygen.20,31 However, even if such short-lived species are forming during light exposure, such generation mechanisms would stop after light exposure.
To discuss potential light-driven mechanisms in more detail, we note that there is evidence for the formation of light-induced radicals in a variety of atmospheric particle samples.20,22,32 These radicals are believed to form through excited triplet states of specific, light-absorbing BrC molecules, referred to as photosensitizers. Once excited to their triplet state, these BrC compounds can generate radicals and catalyze radical chain reactions, thus affecting chemical composition and potentially inducing a change in absorption. For example, an excited photosensitizer may react with a neighboring organic compound to form a carbon-centered organic radical, or with O2 to form reactive peroxy radicals.33-36 Dimerization of organic radicals may give rise to larger, more absorbing species.21,22 If OH arises from higher levels of HOx, then it may add to aromatic rings, increasing their electron densities and ability to absorb light.15,21,37 Observations of aqueous phase reactivity of aromatic carbonyl photosensitizers with phenolic compounds have also been reported. Mechanistic insight into the reactivity of the excited triplet state revealed scavenging of the hydrogen atom from the alcohol functional group, forming an oxygen-centered radical.38 Overall, we believe that the potential formation of both C- and O-centered radicals, and of OH radicals, all contribute to increased reactivity in the BrC particles, degradation of sugars such as levoglucosan, and the formation of more absorbing molecules through dimerization or functionalization. It is possible that the tight molecular confines of an aerosol particle promote intermolecular chemistry over the rate at which it occurs in a solvent extract. However, future experiments are needed to determine the details of the dominant mechanism.
We note that, during a light exposure event, we did not observe evidence in the AMS spectrum of higher molecular weight species forming (see Fig. S3†), only oxidation from the f44 fraction (Fig. 3c and 4c). However, this does not rule out dimerization of specific precursors to form stronger chromophores, given that the AMS is not well suited to the identification of low concentrations of higher molecular weight absorbing species. However, we do observe evidence of fragmentation, which has been previously demonstrated to occur as samples age.21,22

As noted above, a novel observation in our experiments is the continued aging of the particles in the dark. While the lifetimes of excited triplet state photosensitizers are fractions of a second,39,40 the lifetime of some radicals formed in the condensed phase can be much longer. Indeed, the term "environmentally persistent free radicals" has been used to describe such species.42-44 As one example, Qin et al. recently determined a 100 day half-life of some radicals produced from light exposure of catechol, a common biomass burning aerosol component, after mixing with ambient PM2.5 particle material.44 Another study also observed the lifetime of radicals from PM1 to be many days, with biomass burning being one of the sources of the particulate matter.43 While we cannot confidently say whether such long-lived radicals are driving continued oxidation in the particles, it is clear that some reactive intermediate with a lifetime of at least tens of minutes is present. It is interesting to note that experiments performed with the same chamber did not see evidence for continued oxidation in the dark when studying the photoreactions of water-soluble SOA generated from α-pinene ozonolysis.45 Thus, the effect appears to be specific to biomass burning aerosol, perhaps related to its reactive, aromatic composition.
Another important parameter related to the conditions in which aging occurs is the relative humidity. Within the experimental uncertainty, the results obtained in Fig. 5 show that the SSA slope is just marginally (6%) larger at higher relative humidity than in dry conditions, indicating either no impact or a slightly more rapid aging process, perhaps reflecting more rapid reactive intermediate generation or reaction. Past studies of atmospheric aerosol material surrogates have shown that photoreactions proceed more rapidly at higher RH, presumably due to lower viscosity in the material and more molecular interactions.26 Other photooxidation processes have also been shown to occur more rapidly at higher RH values. For example, Zhong & Jang determined that photooxidation of hickory wood smoke, when exposed to sunlight, occurred at a faster rate at higher RH.23 This was also demonstrated by Wong et al., where light exposure of SOA from α-pinene decreases the mass of total organics more rapidly at higher RH (85%) relative to mid-RH (60%) and low-RH (5%).45 As well, Schnitzler et al. observed a more rapid and higher peak absorption at 60% RH relative to 15% RH during OH oxidation of primary pine wood smoke particles.12 By contrast to these photo-oxidation studies, aging may have continued in the dark more rapidly at lower RH than at high RH after light exposure was terminated. This suggests that particle viscosity may affect the sources and sinks of reactive intermediates in the particles in a different manner. With the sources of such species turned off in the dark, it is possible that the lifetimes of the reactive intermediates are lengthened when molecular motion is suppressed at low RH. There is also potential for concentration effects, given that low RH leads to more concentrated solutes.33,46 We note that previous studies on the effects of RH on radical production from photosensitized reactions have observed complex behavior, which may be driven by a variety of factors, including concentration and viscosity.33
Atmospheric implications
The focus of this study was on aging by photoreaction of primary wood smoke brown carbon particles that will occur on a short timescale after injection into a sunlit atmosphere. Strong effects across the near-UV and visible parts of the spectrum were observed after only 30 minutes with light levels lower than those typically present in the daytime atmosphere. While this aging effect has previously been investigated in the aqueous phase,13,14,24 the scope of this project differs from past studies by aging suspended aerosol particles using a UVA light source centered at 360 nm. Other studies have either employed stronger UV sources or performed aging measurements offline with solvent extracts or filters. Thus, we have demonstrated that darkening photoreactions occur in suspended particles that are held under steady environmental conditions.
We note that this project focuses on the aging of primary particles by removing the gas-phase compounds generated from wood burning prior to aging. It is known that the formation of secondary brown carbon aerosols from gas-phase precursors generates additional absorption.23,25 By removing the effects of secondary gas-to-particle conversion, we are able to constrain the effects of light on primary particles only. We note that there is the possibility of evaporation of semi-volatile compounds within the chamber; however, there was no observation of significant changes to the size distribution (Fig. 3a and 4a) and no impact on the optical properties.
We demonstrate that this photo-initiated effect persists in the dark, albeit at a slower rate than when light was present. This points to an aging mechanism that involves long-lived radicals or reactive intermediates.44 This stands in contrast to OH-driven aging, which only occurs in the light.1,6 The persistent photo-initiated aging that we observe in the dark is relevant for atmospheric conditions where light levels are low. For example, particles generated from a biomass burning event can rapidly shift from direct sunlight exposure to a hazy or cloudy environment.
When placing this darkening photoreaction into an atmospheric context, it is important to consider the timescales involved. The aging mechanism studied here is relevant to short-term light exposure of primary BrC particles. A longer aging timescale would likely induce significant photobleaching subsequent to the photoenhancement observed at short times.14 For example, Wong et al. saw an initial increase of absorption from high molecular weight water-soluble BrC during the first 15 hours of aging, followed by bleaching for 10 hours. Fleming et al. reported the decay lifetimes of BrC chromophores from different fuels to be on the order of days to a week.6,14 Furthermore, competition with other daytime aging processes, such as OH and O3 heterogeneous oxidation, is likely to change the overall aging timescale as well.
Fig. 2
Fig. 2 Absorption (a), scattering (b) and single scattering albedo (SSA) (c) results from a single photoreaction aging experiment of BrC. Data shown are from the PASS at 405 nm and for dry conditions (<15% RH).
Fig. 3
Fig. 3 Changes in (a) particle size distribution, (b) single scattering albedo at 405 nm, and (c) AMS f44 and f60 fractions over time for a typical photoreaction experiment in dry conditions. Darker shaded regions in panel (a) represent a higher number concentration of particles. In panel (b) the SSA is represented as the ratio of the experimental SSA over the SSA at exposure time = 0, in order to directly compare different experiments. The shaded pink region in (b) represents the standard deviation (for 4 different replicates). Note that the changes in f44 and f60 during a control experiment (Fig. S4†) were much smaller than the changes shown in this figure; (a) and (c) are results for one replicate.
Fig. 4
Fig. 4 Changes in (a) particle size distribution, (b) single scattering albedo at 405 nm, and (c) AMS f44 and f60 fractions over time for a typical photoreaction experiment at 45% RH. Darker shaded regions in panel (a) represent a higher number concentration of particles. In panel (b) the SSA is represented as the ratio of the experimental SSA over the SSA at exposure time = 0, in order to compare multiple experiments. The shaded pink region represents the standard deviation (for 2 replicates). Note that the changes in f44 and f60 during a control experiment (Fig. S4†) were much smaller than the changes shown in this figure; (a) and (c) are results for one replicate.
Fig. 5
Fig. 5 Single scattering albedo slopes before, during, and after light exposure for dry (pink) and 45% RH (blue). The slopes were recorded for 15 min both before and after the lights were turned on. The slope during light exposure was for a duration of 30 min. The error bars represent the standard deviation. The control experiments are the SSA obtained from adding BrC aerosols to the chamber without any light exposure, where the data before lights were recorded for 10 min instead of 15 min. The absence of error bars for the control conditions after lights is due to the absence of data for one of the two control experiments (Fig. S6†).
Fig. 6
Fig. 6 Averaged normalized absorption spectra for photoreacted wood smoke (red) and control (blue) experiments. The absorption spectra were normalized to 1 at 275 nm in order to directly compare different experiments. The shaded regions represent the standard deviation obtained from the different replicates. The samples were taken at dry conditions (<15% RH). | 2022-02-04T16:35:27.255Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "138e0089c62b9c71c48d2c2ede9621c7724bb5f2",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/ea/d1ea00088h",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b1e08641ec66de1834368d53923874256391c3cb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
158137260 | pes2o/s2orc | v3-fos-license | The Adoption and Implementation of Transdisciplinary Research in the Field of Land-Use Science—A Comparative Case Study
: Transdisciplinary research (TDR) is discussed as a promising approach in land-use science and spatial research to address complex multifaceted “real-world problems” and to design strategies and solutions for sustainable development. TDR has become a widespread research approach in sustainability science and is increasingly promoted by research programmes and agencies (e.g., Future Earth and Horizon 2020). Against this backdrop, TDR can be considered a (social) innovation in the academic system, which is currently in the midst of an up-scaling diffusion process from a rather small TDR-advocating expert community to a broader science-practice community. We argue that this up-scaling phase also places TDR in a critical state as the concept potentially risks a type of “rhetorical mainstreaming”. The objectives of this study were to analyse how the challenging approach of TDR is currently adopted and implemented in the field of land-use research and to identify potential influencing factors. We studied 13 transdisciplinary research projects from Germany by performing qualitative interviews with coordinators, document analysis and participatory observation during meetings over a period of five years. Results show that the adoption level of the TDR concept varied widely among the studied projects, as did the adoption of the TDR indicators used in our analysis. In many of the investigated projects, we identified a clear lack of conceptual knowledge of TDR. In addition, we found that current academic structures limit the ability of researchers to thoroughly adapt to the requirements of TDR. We conclude that further communication and educational efforts that promote TDR are required. In addition, we advocate for the development of suitable funding instruments that support sustained research structures.
Introduction
Land-use practices and spatial development cover a wide range of sustainability problems. Increasing change in land use often causes environmental damage that reaches far beyond the local scope [1]. Different interests and demands as well as values and norms compete for limited land resources and the related ecosystem services and functions (e.g., Müller et al. [2], Zscheischler et al. [3]). Spatial development faces high degrees of complexity and uncertainty as it "operates in a world of becoming" under ever-changing societal, economic and biogeophysical conditions [4].
These challenges have been amplified by global changes that include land-related issues such as climate change, urbanisation, and decreasing biodiversity [5]. Thus, the demand for societal transformation towards sustainability has been increasingly discussed over the last two decades and programme). The comparative analysis is aimed at identifying potential explanatory factors and assessing their relevance for the adoption of the TDR approach to discuss implications and conclusions for better disseminating this approach.
Case Selection and Access
To analyse the adoption and implementation of the TDR concept, we studied 13 transdisciplinary joint research projects. These projects were funded by the same German funding programme, which was aimed at the development of sustainability solutions for land use-related challenges in Germany. Project objectives included the development of innovative value creation networks for sustainable regional development, new instruments and concepts of resource efficiency for settlement development, decentralised systems of renewable energies, and new technologies supporting sustainable land-use systems [50]. Project duration was between three and five years.
Application of the TDR approach was a pre-requisite for funding, with the call for proposals explicitly referencing a TD concept to integrate knowledge from different disciplines (especially the integration of knowledge from "natural scientific-technological and economic-social scientific disciplines") and involving practitioners such as "decision-makers" and "key actors".
The authors were members of an associated scientific coordination project (SCP) that accompanied these 13 joint research projects over a period of five years (2010–2015). The SCP encouraged interaction and mutual learning among the members of all 13 research projects and supported the identification and examination of cross-cutting themes. As one topic of focus was TD, the SCP initiated discussions and workshops addressing this issue. The SCP had no direct influence on the adoption of the TDR approach but presented the researchers with possibilities for reflecting on TDR processes in their projects. Concurrently, the SCP also initiated and observed communication processes among project members regarding TDR. Hence, the conditions provided particularly valuable access to the field; numerous informal discussions were complemented by insights from documents and multiple meetings. Thus, the case selection was strongly driven by the access provided via the SCP.
Research Design
The research design is based on an iterative research strategy using an inductive-deductive approach. We began with an explorative phase to develop the research field. We then developed an analytical framework derived from a literature review of the key principles of TDR and the factors that are important for the adoption of social innovation. This framework guided our data collection and analysis. During the analysis process, additional inductive categories were derived from the material.
Data and Material
The results of this study are based on the analysis of empirical data from different sources, obtained via the following procedures: (1). We continuously conducted participant observation during conferences and project workshops, resulting in field notes and protocols (see Figure 1) focussed on communications (informal talks, discussions, and presentations) regarding the TDR approach and corresponding experiences, notions, attitudes and settings. In accordance with de Walt and de Walt [51], we used the observation method to develop a comprehensive understanding of the adoption process of the TDR approach. This method was beneficial for applying and adjusting our analytical framework, developing an interview guide and validating the findings from document analysis and interviews. (2). We performed document analyses of project proposals, reports and web pages from all 13 research projects to explore the planning, operationalisation and implementation of transdisciplinary processes. We applied the categories of our analytical framework and complemented them with inductively derived categories related to the TDR concept.
(3). We conducted and transcribed semi-structured interviews with coordinating researchers to gather information regarding the initial phase of the projects, interdisciplinary collaboration and knowledge integration, and the implementation of practitioner involvement. Although additional interviews with other project participants would have been valuable, we focussed our study (due to resource limitations) on coordinating researchers as the most valuable knowledge carriers and key actors in enabling and constraining the implementation and adoption of the TDR process. In total, we conducted 14 interviews between September and November 2015. For the comparative case analysis, we selected 10 projects based on the sufficient depth and specifications of the interviews. Results of the interviews (presented in "The adoption of the TDR approach in 10 transdisciplinary joint research projects") are supported by direct quotes (Q n) listed in Supplementary Data.
In summary, the analysis combines material from different project phases (ex ante: project proposals; in operando: protocols of meetings and field notes from participant observations; ex post: interviews following project completion).
Qualitative Content Analysis
The interviews, documents and field notes were evaluated and interpreted following the guide of qualitative content analysis according to Mayring [52]. Data processing was performed using the software MaxQDA. After developing an initial analytical framework based on a literature review of the key principles of TDR and the main factors influencing the diffusion processes of social innovations, we developed and refined a category system for the complete material using coding (preferentially using in vivo codes) and paraphrasing (proposition-wise from the interviews; selective from the documents and field notes). Further themes were derived through an iterative process of rereading, following the recommendations of Ryan and Bernhard [53] (cit. after Bryman [54]). Iteratively, we generalised and reduced the analysis corpus by means of the summary technique. In a further step, we explicated and contextualised distinctive (incomplete or contradicting) propositions by linking the different types of material and codes. Subsequently, individual cases were summarised and described according to the final category system. In a final step, results were critically discussed and validated within the team of the SCP.
Analysing the Adoption of TDR: A Set of Key Features and Factors
To study the adoption of the TDR approach, we reviewed the current literature to extract key features of TDR and corresponding indicators to assess TDR adoption in the projects and to identify influencing factors for the adoption of social innovations.
Indicators for the Adoption of the TDR Approach
Many scholars have discussed what defines a "good" TDR practice. For the analysis, we focussed on the features described below (Sections 3.1.1-3.1.3), as they are the features most feasibly studied from an external perspective and are regarded as the most appropriate for reflecting the adoption of the TDR approach.
Collaborative Problem Framing and Co-Designing the Research Process
TDR is oriented towards the solution of complex real-world problems (e.g., Hirsch Hadorn et al. [55], Roux et al. [56], Mobjörk [24]). Accordingly, many scholars emphasised the need to integrate knowledge, perspectives and interests not only from different disciplines but also from related societal actors when the research project is designed (e.g., Bergmann [57], Tress et al. [58], Wiek [7], Enengel et al. [49], Goebel et al. [59], Lang et al. [28]). The initial phase is considered especially critical, as, in this stage, the most important goals, financial and staff margins, procedural operation possibilities, and limits of the management capacities of the project are determined. Following Lang et al. [28], this phase "orients, frames and enables the core research process." Hence, it should include: (1) the joint identification and definition of the complex real-world problem; (2) the joint formulation of research objectives and the research question; (3) a conceptual and methodological framework for knowledge integration; and (4) the formation of a collaborative research team. We used these determinants as deductive categories for our analyses.
Integrating Knowledge from Different Disciplines (Interdisciplinarity)
Another key feature of TDR is "interdisciplinarity". Originally, the term "transdisciplinarity" was introduced to further clarify the concept of "interdisciplinarity". The resulting notion of TDR as a "perfected interdisciplinarity" persists today, as particularly evidenced by regional differences between Europe and the US [47,60]. In the North American debate, the notion of TDR originated from the "taxonomy of cross-disciplinary research" of Rosenfield [61], who used the lexical morpheme "trans" to describe a collaborative research approach that, unlike interdisciplinarity, in which researchers "work jointly but still from a disciplinary-specific basis", transcends disciplinary boundaries by "using [a] shared conceptual framework drawing together disciplinary-specific theories, concepts, and approaches to address common problems". Interdisciplinarity is different from multidisciplinarity. The latter refers to the collaboration of disciplines that "relate to a shared goal, but with multiple disciplinary objectives" [58]. In contrast, to our understanding, interdisciplinarity is described by the following principles, which we used for indication: (1) the involvement of several unrelated academic disciplines (with contrasting research paradigms) in a way that forces them to cross subject boundaries [58]; (2) the targeting of a common goal [62,63]; (3) interdisciplinary theory development; and (4) the merging of concepts and methods [64].
The differences between TDR and other approaches can be found in the function of involvement and the roles of scientific and societal actors [24,71,72]. Furthermore, Scholz [15] distinguishes between transdisciplinary processes and TDR: whereas the former is a joint-controlled process, the latter is led and controlled by researchers. Hence, the decisions of who should be involved and how and when they should be involved depend strongly on the coordinating researchers of TDR projects. Different forms of involvement are discussed in the literature. Mobjörk [24] differentiates between "consulting transdisciplinarity" (meaning involvement that is limited to responding) and "participatory transdisciplinarity" (referring to fully and equally incorporating knowledge from societal actors with scientific knowledge). Pohl [39] describes the specific quality of transdisciplinary collaboration as "interrelating" perspectives and knowledge instead of simply "adding". Furthermore, Stauffacher [71] and Wiek [11] distinguish different levels of involvement: one-way information, mutual one-way information (mutual learning), collaborative research, and joint decision-making.
To summarise, there are several perspectives and no "one-size-fits-all solution" [71,72] regarding the framing and organisation of science-practice collaboration in TDR. Nonetheless, we identified some common principles that we consider to be mandatory regarding science-practice collaboration in TDR projects: (1) TDR organises and enables mutual learning processes between science and practice and hence must not be limited to data collection and information from societal actors. (2) Knowledge and perspectives must be integrated and interrelated in a cooperative manner. (3) Science and practice must collaborate "on equal footing" [64]. (4) TDR integrates "two pathways": one pathway that is focussed on solving societal problems and one that contributes to scientific knowledge gain by developing "interdisciplinary approaches, methods and general insights" [11,28].
Factors Influencing the Adoption Process
To better understand the adoption and application of the TDR approach, we additionally studied potentially explanatory factors (Sections 3.2.1-3.2.3), which are widely discussed in the field of (social) innovation research:
Knowledge of an Innovation: Notions of the TDR Concept
The diffusion theory by Rogers [35] distinguishes different phases that compose the decision process for adopting an innovation.
In the first phase, knowledge plays a crucial role. In this phase, the potential adopter gains three types of knowledge regarding an innovation: (1) awareness-knowledge (knowledge of the innovation's existence); (2) how-to-knowledge (knowledge of how to use the innovation); and (3) principles-knowledge (how and why an innovation works).
Knowledge is considered a decisive factor in the innovation-decision process. The likelihood of adoption increases with the level of "how-to-knowledge", which is especially critical for more complex innovations. Sahin [73] emphasises the importance of "principles-knowledge", stating that "innovations can be adopted without this knowledge, but the misuse of the innovation may cause its discontinuance". Adequate communication can be regarded as critical for the up-scaling process [74]. As an indicator of the factor of "knowledge", we studied the understanding of the TDR concept by scientists in the considered research projects.
Attitudes and Willingness to Adopt the TDR Approach
In the phase of up-scaling and diffusion of an innovation, communication becomes essential not only for enhancing knowledge regarding an innovation but also for shaping the attitudes of adopters, which in turn decisively influence the adoption or rejection of an innovation. The attitude (negative or positive) towards an innovation is strongly influenced by social reinforcement, for example, via colleagues or the community [35] and through social norms and values [74]. Mulgan [75] emphasises the importance and necessity of supporters such as funders. In addition, the success of a social innovation depends on the persuasive skills of innovators. Here, the credibility of the advocators plays a central role [76]. With respect to the factor of attitude and willingness, we studied the attitudes of scientists towards the TDR approach and their motivation to participate in a TDR project.
Compatibility of an Innovation with the Social System
A decisive factor for adoption is the compatibility of the innovation with the social system and social structures. Cajaiba-Santana [77] stresses the need to consider social structures as they enable and constrain agents while acting upon those practices (see also Esser [78]). These structures, which refer to the institutional setting (norms, conventions, and values), can be regarded as composing a "framework that guides individual and collective action" [79]. Giddens [80] regards structures as "both medium and outcome of the reproduction of practices" (p. 5).
In this respect, we define academic structures as collective patterns that determine the opportunities and restrictions of scientific practices, including funding conditions, disciplinary cultures, career pathways, scientific credibility, and powers and duties. To analyse the compatibility of TDR with academic structures, we investigated the call for applications and considered critical reflections on the framing conditions documented in the interviews, workshop protocols and field notes.
Results: The Adoption of the TDR Approach in Investigated TDR Projects
This section presents the results of a comparative case analysis of TDR adoption. The entire investigation comprised 13 joint projects. However, for the analyses (especially those referred to in Sections 4.1 and 4.2.3), only 10 cases were used for comparison, because the interview data from the remaining three cases lacked sufficient depth. Table 1 provides an overview of the main categories and corresponding characteristics of the investigated ten TDR projects. Deductive categories reflecting the presented analytical framework (see above) were complemented by inductive categories derived from the material. Table 1. Results of qualitative content analysis (sources: interviews and field notes).
Results show that a broad range of approaches was used to frame the research problem, define the research question and build up the consortium. A small number of projects strived to achieve an iterative process of collaboratively framing the research problem and the resulting research question in which the actors from practice were involved from the beginning and had a voice on equal footing (P1, P6, and P7). However, this process was based on different settings: Project 7 reported that they chose a reduced form of collaborative problem framing during the application phase but compensated for it immediately after the project start, whereas Project 6 was already in an intensive collaborative science-practice process when the call for funding was announced. In contrast, there were several research projects in which only a small group of researchers or an individual researcher was involved in formulating the research problem and research question, designing the entire project, and deciding who should be involved. Partners from practice were involved only to the extent of providing letters of intent to participate. In these research projects, neither criterion was rated as fulfilled (see Table 1).
Nearly all of the projects exhibited identical action patterns following the announcement of the call for proposals, which generally represented the initial moment of project application. Many projects showed a long history of consecutive funding phases, slightly changing and updating their core research questions to meet the requirements of the announcement. The selection of the scientific and practice partners and the project objectives were mainly based on pre-existing contacts and networks. This pattern can be observed in nine projects, thus stressing the importance of mutual trust and network reliance in project partner selection. In an extreme case (P2), the project partner selection process sought to maintain an "old boys" network (patronage): "... which is also related to the person A. He is very strongly network oriented and works very strongly with his mates together. And if he wants to do something, he always looks first for his trusted people. And if someone is familiar with him, he would not look for a better alternative. He's going to try to cover a certain topic with his friends before he might choose a more appropriate partner... " Only one project (P7) had developed an explicit concept for knowledge integration. In two other cases, implicit concepts could be assumed due to the documented project design and reported ideas regarding involvement.
Integrating Knowledge from Different Disciplines
In all of the projects, scientists from a broad range of disciplines were involved. However, involvement had a prevailing character of additive composition (Q18 and Q19). Interdisciplinary collaboration that integrated conceptual frameworks and theory from different disciplines was sporadic and frequently not strategically planned or managed. This sporadic collaboration was also evidenced in project structures in which the sub-projects were separated by discipline. Some coordinators commended this autonomous work style as an asset, as it promotes efficiency and minimises the work load (P2 and P4). Other projects had involved certain disciplines in order to appear "well-rounded" but admitted that these disciplines could have been omitted (P4 and P8). These cases represented the largest projects of the funding programme, with approximately 60 scientists involved. Remarkably, projects that managed to merge concepts and methods shared a common overall concept (vision development process), which served as a boundary object and provided guidance for collaboration.
The dominance of natural scientific-technological disciplines was apparent in many cases (Q15, Q16). In two projects, scientists with a social scientific background were responsible for (P1) or were involved in coordination (P7). In many other projects, social scientists were not only outnumbered (the P4 coordinator estimated a relation of 5 to 55) but also regarded as a "service-discipline" to facilitate stakeholder processes, working as transfer agents or science communicators (P2, P4, P5, P7, and P8). This revealed a marginalised importance and low expectations of social scientists' work (Q17). One coordinator observed that social scientists were not considered in the coordination activities: " . . . We also had our colleagues from the socio-economy or the social sciences from socio-geography, which is really quite strange to us, but of course it was difficult for us to co-ordinate, because they had to find out on themselves, to see what they do ...." (P4) Few coordinators reported an increase in acceptance between natural and social science perspectives over the course of the project. Many coordinators reported and welcomed the opportunities for mutual learning processes in TDR projects (Q34-Q36).
Another reported phenomenon was the "hidden research agenda", which occurred in two forms. In one form, several scientists had promised results in the application phase, but after the project start, they followed their own research agenda and supplied few results to the joint project (P5, P8, and P9). In the second form, four coordinators openly admitted that social scientists were involved in the project only because of strategic considerations; i.e., maximising grant opportunities (P2, P4, P8, and P9). One interviewee explained this flawed interdisciplinarity: "Well, 'interdisciplinary'-in my experience this is mainly demanded by calls for proposal, and then (scientists) respond to it and this interdisciplinarity-well-I don't want to say it is faked, but it is tried to be constructed." (P9) In general, the focus of coordinators was less on interdisciplinary research and more on practice-science collaboration.
Science-Practice Collaboration
The quality and role of practitioner involvement varied widely among the projects. In four projects, the science-practice collaboration had a central role and was designed as a process with equal footing (P1, P3, P6, and P7). In these projects, knowledge, interests, and perspectives from practice were considered in much of the researchers' work. Regular meetings between science and practice led to mutual learning.
In other cases, the involvement of practitioners was selective and focussed on product development or on process engineering, and it was accompanied by public relations work or consultancy workshops, indicating a rather traditional understanding of one-way information transfer (P2, P4, and P8). In one case (P5), one of the largest consortiums (approximately 35 partners) underwent an intense stakeholder dialogue process that involved more than 60 actors from practice. However, these activities remained completely independent from the core item of the project, which led to frustration for both the dialogue moderators (social scientists) and societal actors. In all of the studied projects, non-academic actors were classified as "partners" (bound to the project through contracts) or as "actors" (involved through interviews and surveys, focus groups or workshops). Practice partners (municipalities, public authorities, NGOs, consultancies and small and medium-sized enterprises) were often bound to the project via (co-funded) employment at their respective institution. In some cases, specific work packages were outsourced to providers that had been termed "non-scientific project partners". Another frequent notion of involvement was regarding landowners as project partners who provided testing areas (P2, P3, P4, and P8).
In many projects, information transfer and consultation events clearly outweighed more integrative approaches and methods. Stakeholder and public acceptance of science activities and implementation appeared as the prevailing goal of stakeholder involvement. Furthermore, we observed an imbalance between practice and science in output (Q23-Q33). Some projects were very practice oriented, with a strong tendency towards consultancy (P1-P3) and thus a neglected role of scientific output, which was regarded as a by-product: "This was more our problem-to process and make our many insights usable which we gained through the cooperation with each other so that the practitioners are not overstrained, and on the other hand, what we have learned, let me say, as a hobby by the way to utilise. The economist and I .... because, we are the two in the network, which are still most scientifically oriented ..." (P1) Other projects were focussed on scientific output, which corresponded to a weak (P8 and P10) or "outsourced" (P5) practice involvement process. However, several coordinators expressed their dissatisfaction with the general scientific output of their projects (P2, P4, P7, and P10) and questioned the scientific character of TDR in general: " . . . I would not say 'research'. Because I ask myself very often: where is the research now? Because in principle, only people speak to each other. So, of course, now is the question how to do something like that. But is it research?" (P4) Practitioners exhibited considerably different motivations and interests from those of the scientists. Their involvement in the research process was one of the main challenges (Q41 and Q43). Interview partners occasionally criticised practice partners. They complained about practice partner saturation and a resulting lack of motivation (P7), their focus on solution-based results, and their disinterest in integrated and abstract approaches (P2, P4, P8, P9, and P10). Notably, interviewees from projects who complained of disinterested practitioners were from projects that did not involve practitioners during the problem-framing phase.
Knowledge of the TDR Approach
Repeatedly, coordinators stated that "transdisciplinarity" remained a vague concept to them and was likely "just a new term" (P2, P4, P6, and P10) (Q1-Q10). This conceptual uncertainty is remarkable considering that the interviews primarily occurred after project completion or during the project's final stage. Four interviewees equated TDR with "applied research" (Q11-Q14) as illustrated by the following examples: " . . . applied research-um, yes, maybe these are all words that revolve around something similar. They surely find definitions where they can clearly distinguish it. But I do not have one prepared and in my field of imagination. I believe that this is very close together and that it is rather a scientific discourse, where one tries to distinguish any nuances, . . . " (P10) " . . . Otherwise, yes: what is transdisciplinary research? This is a new term. In Germany since 150 years we are doing applied research and so have, yes, our economic status, the reason is that we make applied research, . . . " (P4) In general, the notion of TDR is closely connected to the feature of science-practice collaboration. However, most coordinators described TDR as the collaboration between different scientific disciplines and practice without providing further specifications regarding the qualitative aspects (knowledge integration, mutual learning).
It is apparent that the projects that met many of the TD criteria had coordinators with previous TDR project experience (P1, P6, and P7) and generally deeper knowledge of TDR. In addition, these projects expressed an appreciative attitude towards social scientific disciplines.
To gather information on TDR knowledge, we analysed the project proposals. We reviewed the proposals for criteria of TDR (y-axis; Table 1) and ranked each proposal on a qualitative gradient: (1) elements were frequently verbalised and explained ("mentioned as important"); (2) elements were circumscribed or labelled with comparable items ("vaguely paraphrased"); or (3) no elements were mentioned ("not mentioned").
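A hedged illustration of how such a qualitative gradient can be encoded for cross-project comparison is sketched below; the numeric scores, project labels and criterion names are hypothetical and serve only to show the encoding idea, not the authors' actual scoring scheme.

```python
# Map the three-level gradient to ordinal scores (illustrative choice).
LEVELS = {"mentioned as important": 2, "vaguely paraphrased": 1, "not mentioned": 0}

# Hypothetical ratings; the real criteria correspond to the rows of Table 1.
proposal_ratings = {
    "P1": {"real-world problem": "mentioned as important",
           "knowledge integration": "vaguely paraphrased"},
    "P2": {"real-world problem": "vaguely paraphrased",
           "knowledge integration": "not mentioned"},
}

# Convert the verbal ratings into a numeric matrix suitable for comparison.
scores = {
    project: {criterion: LEVELS[level] for criterion, level in ratings.items()}
    for project, ratings in proposal_ratings.items()
}
```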
As Figure 2 shows, a shared notion of TD as a form of science-practice-collaboration starting with a "real-world problem" was common in all proposals. Although a few proposals did not directly use the term "transdisciplinarity", it was paraphrased as "participation of practitioners". In this respect, there appeared to be basic agreement on these two general features. Remarkably, many projects distinguished interdisciplinarity from TD. Moreover, TD appeared to be conceived as an instrument of transfer, meaning the application of solutions to real-world problems from academia into practice. In general, all of the proposals revealed a common understanding of TDR as an approach for harmonising research results with the requirements of practice. Often, the claim of implementing research results was made.
Few proposals explicitly targeted the creation of learning processes. The term "knowledge integration" was used explicitly only twice, and only one proposal displayed a sound understanding of the topic and, consequently, created a sub-project specifically assigned to this task. In the remaining proposals, the concept of "knowledge integration" appeared to be lacking. Where the term "integration" was used, it was used in the context of models and data rather than as part of a comprehensive understanding.
Motivation and Attitude towards TDR
The interview data and responses from a workshop on TD conducted in 2012 indicated that respondents exhibited a generally positive attitude towards the TDR approach. All workshop participants (n = 35) declared transdisciplinary cooperation within their respective project to be "important" (n = 5) or "very important" (n = 30) to project success. However, this generally positive attitude towards TDR was not shared by all of the scientists in the projects. Several coordinators (among the senior researchers) took a very critical stance (P5, P7, and P9). They did not see any need for changing their research practice (Q37). Other researchers tended to consider TDR an operative necessity for surpassing a specific "threshold" to secure funding.
TDR was also considered an alternative way to attract third-party funding (Q39-Q40). The interviews revealed that funding opportunities and the prevention of unemployment were primary reasons for starting a TDR project (P2, P3, P4, P7, and P9). In two cases, interviewees admitted that this resulted in a corresponding modification in the design and wording of the proposals to meet the requirements of the application call without deeper methodological proficiency (P4 and P9).
However, we also identified a group of researchers who shifted perspectives during the course of the projects (Q38). TDR appeared to be welcomed as an opportunity to pursue and apply transformation measures (P1, P2, P3, P4, P6, and P7). Scientists exhibited a high motivation to contribute to more sustainable land use (Q44).
Compatibility with Academic Structures
The application call explicitly mentioned the TDR approach as an important selection criterion and precondition for obtaining grants. However, TDR was not clearly defined. It was described as encompassing interdisciplinary collaboration between academics and the integration of knowledge (interdisciplinarity), in particular to join "nature scientific-technological sciences with the economic and social sciences disciplines" [50]. Furthermore, the call specified that regional "stakeholders" (decision-makers) had to be involved (TD).
Interdisciplinarity and TD were separately mentioned in the call. However, TD was not presented as a broader concept that encompasses interdisciplinarity; rather, it was treated in a modular form intended to increase the societal relevance of research results. Regarding funding conditions, a preliminary initialising phase was neither obligatory nor supported by the funding agency. At less than four months, the period for submitting project proposals was limited, restricting the time available to collaboratively frame the problem and design the research process. One coordinator remarked on the high expense of developing a proposal when several researchers are involved, compared with the moderate chances of grant success.
Half of the coordinators summarised that TDR does not "fit" into current academic structures and the scientific culture (P1, P2, P4, P5, and P10). There was doubt as to whether a project structure is the appropriate form to implement the approach and whether longer-term organisational structures are necessary. In particular, coordinators in the post-doc phase did not succeed in profiling themselves scientifically and ran the risk of dropping out of academia after their projects ended (P2, P8, and P9), which was associated with an erosion of TDR-specific knowledge.
Moreover, the management of TDR was described as especially challenging as scientists are difficult to coordinate. The specific "culture of universities" and the hierarchical leadership styles of senior scientists also complicated flexible project management. Time and resource constraints hampered effective collaborative processes, and researchers argued that funding conditions forced them towards some form of solutionism and induced science to be non-scientific.
Discussion
In this comparative study, one objective was to analyse how TDR is currently adopted in the field of land-use research. In addition, we aimed to identify potential influencing factors, assess their relevance for adopting the TDR approach, and consider the implications. We began with the assumption that TDR can be considered as a social innovation.
Our results show that the adoption of the TDR concept varied widely among the studied projects, as did the adoption of the TD indicators used in our analysis. On the one hand, this indicates that in research practice there are different qualities and degrees of TDR. Such a differentiation has scarcely been noted in the theoretical discourse thus far.
On the other hand, we argue that these findings also reveal a constrained adoption and implementation of TDR, which can be traced back to factors frequently discussed in the innovation literature.
There Is a Lack of Sufficient Knowledge of the TDR Concept
"Knowledge of the TDR concept" and "Previous TD experience" appeared to be the factors that most strongly influenced the quality of the transdisciplinary process. We concluded that the more extensive the background knowledge of TD (especially among the coordinating staff), the better the observed performance. This finding underlines the importance of sufficient "principle knowledge" for the innovation process [73]. However, this interpretation stands in contrast to the survey results of Tress et al. [43], who found no correlation between professional experience and the difficulties researchers face in TDR projects. However, whereas Tress et al. [43] focussed on general difficulties perceived by a broad range of researchers, the present study focused on the quality of the TDR process.
Although the coordinators generally showed a "positive general attitude" towards TDR, after 3-5 years of project experience, many coordinators possessed only a vague understanding of the TDR concept. This is remarkable, especially considering that TDR was frequently mentioned as a central feature of the projects. We speculate that this vague understanding might be due to a general lack of interest in learning about TDR or a general underestimation of the complexity and corresponding requirements of coordinating TDR projects [81].
Other scholars (e.g., Brandt et al. [82], Jahn et al. [46], Carew and Wickson [83]) noted that a lack of conceptual clarity regarding the TDR approach persists, which hampers its diffusion to other target groups. Thus, the "TD community" developed its own well-defined terminology, which is helpful for theoretical discussions but limits communication with scientists in other fields [82]. Tress et al. [58] found that a lack of common understanding of TD was one major obstacle to integration. They emphasise the importance of conceptual clarity to "compare and evaluate the outcomes of different research approaches" (ibid.). Therefore, the current process of "rhetorical mainstreaming" of TDR is misguided and could marginalise researchers who seriously seek to apply TDR [40]. Thus, the integrity of TDR might be negatively affected by both a continuous degradation of the standards of TDR in practice and the increasing detachment of expert discourse from the rest of academia.
We argue that the lack of a common and clear definition of TDR clearly constrains its diffusion. This argument is supported by the observation that interviewees did not recognise TDR as something "new" but equated it with "applied research". Thus, it is not unexpected that these coordinators did not see any need for a change of practice.
Funding Conditions and Review Processes Require Adjustment
The vague understanding of TD was apparent as early as the research proposal stage. Many proposals showed clear weaknesses in the conceptual understanding of TDR. This finding raises questions with regard to the scientists involved in the peer-review process and their TDR expertise. Although evaluations of funding conditions often address limited time and financial resources (e.g., Maasen and Lieven [84], Tress et al. [43], Horowitz et al. [16]), the role of peer reviewers remains widely unconsidered. In accordance with Lange and Fuest [81], we assume that the proposal reviewers were selected based on their expertise in land-use science and encountered the same difficulties in appraising the transdisciplinary concepts as did their applicant peers. As we had no access to data on the peer-review process, we cannot assess the extent to which TDR experts were included in the peer-review panel. Regardless, we argue that the involvement of TDR experts in the development of calls and the peer-review process is important for securing quality TDR. Our analysis of the programme call showed that an elaborated concept for the design and management of the transdisciplinary process was not demanded. In addition, the funding conditions limited the opportunities for collaborative framing of the research problem and for developing a common objective. The application phase was not funded, and the application time span of 4 months was very short. A lack of practice-partner involvement arising from short application phases has been documented in other studies, and such involvement appears to be crucial for the collaborative process (see also Viswanathan et al. [85], Horowitz et al. [16]). More recent TDR funding programmes from the same donor have recognised this shortcoming, and a longer "framing phase" has been implemented. Further investigations are needed to determine the effects of an extended and financed preliminary phase.
However, demanding TDR as a general requirement for funding seems questionable. Such a requirement forces scientists to think in instruments, but TDR is not an end in itself. Implementing TDR should rather depend on the research question that has been posed. Hence, many of the studied projects did not meet several TDR criteria but can be regarded as valuable projects in which the research problem and objectives do not require a "full" TDR approach.
Academic Structures and Cultures Do Not Integrate Well with TDR
In general, our results showed that TDR does not easily "fit" into the established competitive academic system with its discipline-based organisational structures and reputational system (e.g., Russell et al. [86]; see also Rip and van der Meulen [87], Leydesdorff and Gauthier [88]).
Many scientists were under "high pressure for third-party funding". This pressure is prevalent within the whole scientific system and forces researchers into a continual process of proposal writing under an increasing scarcity of research funding [89]. However, the extent to which researchers can adapt to call requirements and adjust their research direction is limited, as the evaluation of proposals strongly depends on prior expertise and research content. Thus, current funding mechanisms limit transitions towards new research topics and encourage researchers to engage in scientific "window dressing" [90]. Gläser and Laudel [42] regard "window dressing" as a way to "bootleg money for the start of new research under the cover of existing grants" (p. 125). Our results showed that this practice occurred in the studied projects, as some coordinators reported on the "hidden research agendas" of researchers who followed their individual research interests and contributed little to the joint projects. Although our sample size was small, the large-sized projects (in terms of the number of involved project partners) appeared to be prone to this form of pretence.
Hessels et al. [89] explain the tensions that evolve for scientists when they involve stakeholders in the "credibility cycle". In addition to the pursuit of funding, the pressure to publish has also risen [91], preventing scientists from reconciling the scientific demands of research with the promise of societal relevance and the involvement of stakeholders at the same time. This situation appears to be true for research fields "with less generous and powerful stakeholders" [89], as is the case in the field of land-use science. Our results showed that researchers struggled to achieve "balanced benefits" regarding practical relevance and scientific quality and productivity. Most projects were characterised by either a "primacy of practice" or a "primacy of science" [11]. In general, the scientific character and consequently the epistemic function of TDR were questioned [48].
In addition, we assume that the costs and benefits in TDR projects are not allocated in a just manner. Our investigation showed that coordinating positions are neither adequate nor recommendable for junior scientists in their post-doc phase. Post-doc coordinators carry a high load of administrative and representative duties and risk their further career opportunities, likely due to a decrease in publication output [92]. The potential loss of experienced TDR researchers for the academic system is accompanied by an erosion of knowledge and skills and hinders the development of professionalised experts who can build their careers by performing TDR. Thus, it is questionable whether a project-oriented organisation is suitable and sufficient to establish an open network structure and field practice for the development of adaptive "learning cycles" [93]. To perpetuate knowledge, among other aims, sole financing through third-party funds and the associated staff turnover should be reconsidered. Hence, the introduction of new and innovative funding instruments that support enduring structures appears necessary [7].
Another structural shortcoming appears to be the antagonism of "competition" and "cooperation". TDR as well as other collaborative research approaches such as CBPAR require inclusive and cooperative practice, whereas traditional science is a "competitive field that is exclusive" [16]. However, we found no empirical evidence for the influence of competition on the adoption of TDR in the studied projects. Rather, our results showed a disciplinary divide between social and nature-technological scientists. In several projects, social scientists had "a service role" [94] and were outnumbered. Their scientific relevance was also questioned, and some coordinators admitted to having "social scientists involved to get funding". Ledford [95] stated that social scientists are often involved in collaborative research teams to "tick a box" for societal impact but "without true commitment". Vadrot et al. [96] argued that the general underrepresentation of social scientists in science platforms dedicated to global challenges, such as IPBES (The Intergovernmental Platform on Biodiversity and Ecosystem Services), "mirrors institutional and knowledge barriers between research disciplines". In contrast, Van Langenhove [97] emphasised the importance of social scientists for addressing global challenges and recognised "reluctance" on the part of social scientists to do so.
Conclusions
The aim of this comparative study was to analyse how the challenging approach of TDR is adopted in 13 TDR projects over a period of five years. The projects covered the field of land-use research performed under similar conditions (same funding programme). In addition, we aimed to identify potential influencing factors to assess their relevance for adopting the TDR approach and to consider implications. Results are based on the analysis of interviews with coordinating researchers, project proposals, reports, web pages, protocols and field notes during meetings. We began with the assumption that TDR can be considered a social innovation in academia that is currently in the critical stage of up-scaling from a small TDR-advocating expert community to a broader science-practice community.
Our results show that the adoption of the TDR concept varied widely among the studied projects, as did the adoption level of the TDR indicators used in our analysis: (1) Only a few projects strove to achieve a process of collaboratively framing the research problem and defining the objectives with actors from practice in the initial project phase. (2) Interdisciplinary collaboration exhibited a prevailing additive character. The integration of conceptual frameworks and theory from different disciplines was frequently not strategically planned or managed. (3) The dominance of natural scientific-technological disciplines was apparent in many cases.
In many of the studied projects, social scientists were not only outnumbered but also regarded as a "service discipline". (4) In a minority of projects, science-practice collaboration had a central role and was designed as a process on an equal footing. In many projects, information transfer and consultation events outweighed more integrative approaches.
On the one hand, this indicates that in research practice there are different qualities and degrees of TDR, which has scarcely been noted in the theoretical discourse thus far. In addition, there are no minimal standards yet to distinguish between a TDR project and a non-TDR project.
On the other hand, we argue that these findings also reveal a constrained adoption and implementation of TDR, which can be traced back, among other factors, to: (1) a lack of knowledge among the broader community of scientists who apply TDR; (2) dysfunctional funding conditions; and (3) contradictory academic structures and cultures.
Even though our results present only a modest sample of projects from the specific field of land-use science, and acknowledging that empirical studies from other research fields are needed to confirm our findings, we conclude that the idea of TDR is based on expert-driven discussion and concept development, which have not yet been diffused to and adopted by a broader community.
Thus, the findings imply that in addition to further communication and educational efforts, novel funding instruments that support sustained structures are needed to promote TDR. These structural changes appear especially important as current adoption practice bears the risk of improper characterisation, implementation, or evaluation of the TDR approach. As a result, a persistent underperformance of TDR may cause its discontinuance and hinder its establishment as a well-accepted research approach in academia.
"year": 2017,
"sha1": "84af173248226d4e2d153d08545e380e3f7dd867",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/9/11/1926/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9cdefb36092e31903067f93e98097fe9378f79ab",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Energy mix, technological change, and the environment
This paper studies the relationship between the energy mix and the environment using a theoretical framework in which two alternative energy sources are considered: fossil fuels (dirty energy) and renewable energy (clean energy). We find that a positive aggregate productivity shock increases energy consumption and emissions but reduces energy intensity and emissions per unit of output as renewable energy consumption increases; that is, carbon emissions are procyclical but emissions per unit of output are countercyclical. Second, an energy efficiency improvement provokes a "rebound effect" above 100% (the backfire effect), resulting in a rise of pollutant emissions by increasing energy use. Third, a technological improvement in emissions leads to a reduction in emissions per unit of fossil fuel, but also implies a slow-down in the adoption of renewable energy sources. Finally, we consider the case of a decentralized economy in which the government chooses an optimal specific tax on fossil fuel to maximize social welfare. We show that the "second-best" policy is highly effective in correcting the negative effects of the environmental externality and is able to almost achieve the centralized economy outcome.
Introduction
The energy mix used in the economy is arguably one of the key factors in explaining the dynamic relationship among output, energy consumption, carbon emissions, and the environment (Golosov et al. 2014). However, existing environmental-economic models have focused on a variety of environmental policies, including Pigouvian taxes, abatement instruments, promotion of energy efficiency, and limits to emissions, with little attention to the implications of the energy mix and the energy transition in linking economic activities with damages to the environment. Production activities require the use of energy as an additional input to physical capital and labor. Pollutant emissions are not a direct by-product of production activities, which would imply constant energy intensity; rather, they depend on the particular energy mix of the economy, where each type of energy source has a different impact on the environment. Emissions from alternative energy sources are very heterogeneous, and therefore attention must be paid to the composition of primary energy consumption. In general, we can distinguish between two types of energy sources: renewable and non-renewable. Renewable energy sources (hydroelectric power, geothermal, solar, wind, and biomass) are considered "clean" energy sources, producing no direct greenhouse gas emissions. On the other side, non-renewable energy sources or fossil fuels (oil, natural gas, and coal) are "dirty" energies, as they produce direct gas emissions, although at different rates (coal produces more emissions than oil and natural gas). This paper contributes to the literature by studying how technological shocks affect the relationship between the environment and the energy mix, using an Environmental Dynamic Stochastic General Equilibrium (E-DSGE) model, and how "second-best" policies can, partially, internalize the negative externality created by emissions. The dynamic relationship between economic growth and environmental protection remains central for sustainable development, where environmental problems generated by economic activity can be an impediment to future economic growth (World Bank 2012). Figure 1 plots the energy intensity for the U.S. for the period 1950-2018, measured as the ratio of primary energy consumption (thousand BTU: British Thermal Units) to GDP. Over the full period, energy intensity declined from 15.12 to 5.45, that is, a reduction of 63.96%. During the same period, the carbon emissions to GDP ratio (measured in metric tons of carbon dioxide per million dollars) dropped from 1040 to 284 units, a reduction of 72.69%. Hence, not all of the decline in carbon emissions is explained by the decline in energy intensity. The reduction in energy intensity can be explained by sectoral change toward less energy-intensive industries and by energy-efficiency technologies. The reduction in carbon emissions not accounted for by the decline in energy intensity (about 12% for the U.S.) is explained by emissions technological change and by changes in the energy mix toward cleaner energy resources.
Figure 2 plots the proportion of clean (renewable) energy with respect to fossil fuel energy consumption for the U.S. Whereas renewable energy represents a small fraction of total energy consumption, its impact on emissions is large, measured in terms of the pollutant emissions forgone by the fossil fuels it replaces in total energy consumption. Prior to 1970, the ratio of renewable to fossil fuel energy declined, not as a consequence of a decrease in the use of renewable energy but because of a faster expansion in fossil fuel consumption. During the 1970s, a positive trend in the ratio is observed, with a rapid expansion in renewable energy consumption, followed by stagnation during the 1980s and even a decline in the renewable energy ratio in the first years of the twenty-first century. However, in recent years renewable energy has been gaining ground with respect to fossil fuels. The analysis in this paper highlights the importance of the energy mix, and of the energy policies applied to the alternative energy sources, in explaining environmental damage and the relationship between output and the environment. Whereas environmental policies based on a number of instruments have been widely studied in the literature, little focus has been placed on the implications of such policies for the energy mix and their impact on pollution. As pointed out by Atalla et al. (2017), the energy mix is the result of the interaction of fuel prices, technology, and energy policies. First, the energy mix can be a policy-driven decision for strategic reasons, mainly in economies without fossil fuel resources and as a way to diversify energy sources. Second, the energy mix is also determined by environmental concerns. This is evident in the case of nuclear and coal electric power. Finally, the energy mix depends on the relative prices of the alternative energy sources, which are mainly driven by technological factors. Tahvonen and Salo (2001) studied the transitions between non-renewable and renewable energy depending on the development stage of an economy. They obtain an inverted-U relationship between fossil fuel use and the income level. Atalla et al. (2017) studied the role of fossil fuel prices relative to energy policy in driving the primary fossil fuel mix, and found that relative fossil fuel prices are the main source explaining the fossil energy mix in the U.S., Germany, and the UK. However, the question remains open when not only fossil fuels but also other primary non-fossil energy sources are taken into account. We depart from the Atalla et al. (2017) study by also considering the role of non-fossil energy sources, and by focusing on the consequences of technological shocks and fuel price shocks in explaining the energy mix and the impact on the environment.
This paper contributes to the literature by studying the relationship between production, social welfare, and the environment in a Dynamic Stochastic General Equilibrium (DSGE) model with alternative energy sources and an endogenous energy transition, and by studying how the second-best policy outcome obtained from an optimal Ramsey problem differs from that of a central planner. In particular, we propose an economy where two alternative energy sources can be used in the production sector: one energy source that produces emissions (i.e., fossil fuels), and another clean energy source (i.e., renewable energy). In this framework emissions do not depend on final output, as has been considered previously in the literature (see Fischer and Springborn 2011; Angelopoulos et al. 2010, 2013; Heutel 2012; Annicchiarico and Di Dio 2015), but on the consumption of fossil fuel energy. Our model considers a three-input production function: physical capital, labor, and energy. Energy used in the production function is a composite of fossil fuels and renewable energy, and emissions depend on the quantity of fossil fuels used in the final energy mix. The stock of pollutants is an externality that negatively affects final output (see Nordhaus 2008; Heutel 2012). Two types of technologies related to energy and emissions are considered: a technology that improves energy use efficiency, and a technology that reduces the quantity of emissions as a function of the quantity of fossil fuels.
We use the model to study the implications for the economy, the energy mix, and the environment of three shocks: an aggregate productivity shock, an energy use efficiency technological shock, and a clean energy technological shock. First, a positive neutral technological shock produces two opposite effects on output. The increase in aggregate productivity increases output, as expected, but also increases the demand for the two types of energy, resulting in an increase in carbon emissions and in the accumulation of CO2 in the atmosphere. The higher level of CO2 concentration in the atmosphere has a negative impact on productivity, limiting the positive effects of the productivity shock on final output. We find that energy consumption is procyclical, as expected, but that the productivity shock reduces energy intensity and emissions per unit of output, consistent with empirical evidence. This is because the expansion in economic activity following the productivity shock increases the demand for both fossil fuels and renewable energy. As the demand for renewable energy also increases, emissions per unit of energy decrease. However, it is also true that, as a consequence of this productivity shock, the renewable to fossil fuel energy ratio falls. The main result is that carbon emissions are procyclical but carbon emissions per unit of output are countercyclical.
Second, we study the implications of an energy efficiency technological shock common to the two energy sources. Energy efficiency technology provokes an increase in the quantity of energy used in the production process, increasing the level of emissions, which implies that the positive initial effect of a technological improvement in energy efficiency leads to an increase in energy consumption and in CO2 concentration in the atmosphere. Energy efficiency technology not only provokes the well-known "rebound effect" (Frondel et al. 2012; Gillingham et al. 2016), which implies that a technological improvement in energy efficiency saves less energy than initially expected; we find that the energy efficiency improvement actually increases energy consumption (a "rebound effect" above 100%), the so-called "backfire effect" (Sorrell 2009; Gillingham et al. 2016), not as a consequence of the optimal response of households who do not internalize the cost of pollution, but as the optimal decision of a central planner maximizing social welfare. Importantly, the energy efficiency shock leads to a decline in the renewable-fossil fuels ratio, resulting in a technology that hinders the adoption of cleaner energy sources.
Third, we consider the case of a technological improvement in emissions (i.e., cleaner technologies such as particulate filters and catalytic converters). This is an example of an asymmetric, specific technological shock affecting only one of the energy sources: the "dirty" energy, as we assume that renewable energy does not produce carbon emissions. As one would expect, this technological improvement reduces emissions per unit of fossil fuel, and hence emissions per unit of output also fall. However, surprisingly, this technological change provokes an increase in the quantity of "dirty" energy used in production and reduces the use of renewable energy. Therefore, technological change associated with emissions promotes the use of "dirty" energy sources as the negative externality produced by this energy declines. These results show that environmental policies promoting investment in energy efficiency and emissions efficiency technologies have different effects on the stock of CO2 concentration in the atmosphere; whereas the former increases the stock of CO2, the latter reduces it. Nevertheless, both policies are an obstacle to the energy transition from non-renewable to renewable energy sources.
Finally, we consider the case of a decentralized economy where the government uses an optimal specific tax on fossil fuel. We assume a benevolent government that solves a Ramsey problem by choosing a per unit tax on fossil fuel consumption to maximize total expected discounted utility, subject to the first-order conditions for households' utility and firms' profit maximization. We calculate the steady state of the economy for three cases: the centralized economy, laissez faire, and the Ramsey problem. We find that the distortions introduced by pollution imply a reduction in consumption and an energy mix biased towards the use of fossil fuels compared to the central planner solution, where more renewable (clean) energy is used. A benevolent government choosing the optimal tax on fossil fuels shifts the economy away from the laissez-faire equilibrium to a Ramsey equilibrium that is close to the social planner solution. The optimal tax policy changes the energy mix of the economy, reducing emissions and increasing output and welfare. Whereas differences in output are small across the three scenarios, the energy mix, emissions, and social welfare show important differences across them. The most important result is that the second-best policy resulting from the Ramsey problem is able to shift the competitive equilibrium with no internalization of the environmental externality to an equilibrium close to the first best, where welfare losses are small compared to the laissez faire scenario.
The rest of the paper is structured as follows. Section 2 presents an E-DSGE model including non-renewable and renewable energy sources as an additional input factor to capital and labor, and solves the model for a centralized economy where the environmental externality is internalized. Section 3 presents the calibration of the parameters of the model. Section 4 studies the dynamic properties of the model in response to different technological shocks. Section 5 solves the model for a decentralized economy where a benevolent government chooses an optimal specific tax on fossil fuels to maximize social welfare. Finally, Section 6 presents some conclusions.
An E-DSGE model with energy mix
In this section, we develop an E-DSGE model with a three-input production function: physical capital, labor, and energy. We consider two types of energy sources: fossil fuels and renewable energy. We assume that for production some energy source must be used as an additional input to capital and labor, and that burning fossil fuels releases greenhouse gases (CO2) into the atmosphere. Renewable energy is a clean energy as it does not produce emissions. The stock of pollution is a negative externality that will negatively affect aggregate productivity. The model includes three technological shocks: an aggregate productivity shock, an energy efficiency technological shock, and an emission efficiency technological shock. Additionally, the model considers an oil price shock.
Household utility function
The economy is populated by an infinitely lived representative agent who maximizes the expected value of her lifetime utility. Households obtain utility from consumption and leisure. The household utility function is defined over consumption C_t and working hours L_t, with a constant relative risk aversion parameter, a parameter governing the Frisch elasticity of labor supply, and a positive parameter representing the willingness to work. We consider a centralized economy where a central planner maximizes social welfare. The resource constraint for this centralized economy involves investment in physical capital I_t, final output (total income) Y_t, the quantity of fossil fuel O_t, and the quantity of renewable energy S_t. For simplicity, it is assumed that the quantity of renewable energy is exogenously given and that no restriction on the extraction of non-renewable energy exists.
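As a concrete illustration of the preferences just described, the following Python sketch evaluates a period utility of the standard CRRA form with an additively separable disutility of working hours. Since the original equation is not reproduced above, the exact functional form and the parameter names sigma, gamma, and omega used here are assumptions, not the paper's notation.

```python
import numpy as np

def period_utility(c, l, sigma=2.0, gamma=1.0, omega=1.0):
    """Period utility: CRRA in consumption minus a power disutility of hours.

    c     : consumption
    l     : working hours
    sigma : constant relative risk aversion (assumed value)
    gamma : curvature of the labor disutility, related to the Frisch elasticity (assumed)
    omega : weight on the disutility of work (assumed)
    """
    if sigma == 1.0:
        u_c = np.log(c)
    else:
        u_c = (c ** (1.0 - sigma) - 1.0) / (1.0 - sigma)
    return u_c - omega * l ** (1.0 + gamma) / (1.0 + gamma)

# Lifetime utility is the expected discounted sum of period utilities; beta = 0.975 (annual, from the calibration).
beta = 0.975
utils = [period_utility(1.0 + 0.01 * t, 0.33) for t in range(50)]
lifetime = sum(beta ** t * u for t, u in enumerate(utils))
```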
In the literature, we find two alternative ways to introduce the negative externality produced by damages to the environment. The first is the introduction of this externality in the aggregate production function. This is the case, for instance, in Heutel (2012) and Golosov et al. (2014). It is assumed that climate change damages the environment and hence production, by reducing productivity. Pollution, defined as the CO2 concentration in the atmosphere, is considered a stock variable that accumulates with carbon emissions. Therefore, atmospheric carbon concentration has a negative economic impact, reducing final output. The second way to consider externalities from pollution is by assuming that it can be either a flow or a stock variable that negatively affects the households' utility function. Examples of this modeling are John and Pecchenino (1994), Jones and Manuelli (1995), and Stokey (1988). As pointed out by John and Pecchenino (1994), in general, environmental externalities could arise from production or consumption and could affect welfare or productivity. Following Nordhaus (2007) and Heutel (2012), our model only considers a pollution externality in production.
Investment accumulates into physical capital. The physical capital stock accumulation equation is K_{t+1} = (1 − δ_k) K_t + I_t, where K_t is the capital stock and δ_k (0 < δ_k < 1) is the depreciation rate of physical capital.
Emissions and the stock of pollution
In the environmental-economic literature, a number of works assume that emissions are a function of final output. However, this assumption neglects the possibility of declines in emissions when output increases. In this context, a negative relationship between emissions and output can only be obtained under technological change affecting abatement and/or emissions. A more realistic assumption is that carbon emissions depend on the energy mix combining non-renewable dirty energy with renewable clean energy sources. To consider that possibility, in our model carbon emissions are related to the energy source used in final production. That is, carbon emissions are assumed to be generated by the use of fossil fuels, whereas renewable energy sources are assumed not to produce emissions. In particular, we assume that damages are proportional to the quantity of fossil energy.
Emissions are proportional to fossil fuel use, with a positive parameter representing the carbon content of fossil fuel (carbon emissions per unit of fossil fuel), and B_t is an exogenous emissions technology (emissions efficiency) representing fossil fuel consumption technologies that reduce gas emissions. Hence, we are assuming that the change in emissions is equal to the change in fossil fuel consumption, and a technological improvement that reduces emissions per unit of fossil fuel energy is represented by a decrease in this exogenous shock (e.g., catalytic converter technology). We abstract from the fact that the level of emissions of the fossil fuel mix differs depending on the shares of oil, coal, and natural gas, where emissions produced by natural gas are lower than emissions from coal. For instance, if the share of gas increases at the expense of coal, this results in a "cleaner" energy mix. The emissions efficiency technology is assumed to be exogenous and to follow a first-order autoregressive process, with a steady-state value, an autoregressive parameter smaller than one, and an i.i.d. innovation. Emissions efficiency technological progress is represented by a negative shock to B_t.
Emissions accumulate into a stock of pollutants, Z_t, following an atmospheric carbon accumulation process in which the existing stock decays at a constant rate and new emissions add to it.
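To make the emissions block concrete, the sketch below assumes that emissions equal the carbon content parameter times the emissions technology B_t times fossil fuel use, and that the pollution stock decays geometrically while accumulating new emissions. Both functional forms are assumptions consistent with the verbal description above and with the decay rate reported in the calibration section.

```python
def emissions(o, b, phi=0.1):
    """Carbon emissions generated by fossil fuel use.

    o   : fossil fuel consumption
    b   : emissions-efficiency technology B_t (a fall in b means cleaner combustion)
    phi : carbon content per unit of fossil fuel (0.1 in the calibration)
    """
    return phi * b * o

def next_pollution_stock(z, e, delta_z=0.012):
    """Atmospheric carbon stock: decays at rate delta_z and accumulates new emissions e."""
    return (1.0 - delta_z) * z + e

# Example: constant fossil fuel use slowly builds up the pollution stock.
z = 0.0
for _ in range(100):
    z = next_pollution_stock(z, emissions(o=1.0, b=1.0))
```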
Production technology
The model considers a three-factor aggregate production function: physical capital, labor, and energy. We assume a Cobb-Douglas aggregate production function that exhibits constant returns to scale in all factors, where the term exp(− Z_t), with a positive damage parameter multiplying Z_t, represents the cost of the damage from pollutants measured as forgone output; this parameter governs the elasticity of aggregate productivity with respect to the stock of pollutants. Final output is influenced by a neutral technology component A_t (total factor productivity, TFP) and by an externality due to emissions. This externality could instead be introduced by affecting the utility function rather than the production function. However, the literature considers that this alternative is more appropriate for pollutants that affect health directly, and the stock of pollution is expected to affect the production possibilities of the world economy (Nordhaus 2008).
Energy is an Armington aggregator of fossil fuel and renewable energy, with an elasticity of substitution between the two types of energy greater than one and a share parameter representing the weight of each type of energy in the final energy mix. The model assumes that both types of energy are imperfect substitutes. The amount of energy used, E_t, is influenced by an energy-augmenting technological change, denoted by D_t. The higher is D_t, the more energy-efficient is the production sector. Total factor productivity is assumed to be exogenous and to follow a first-order autoregressive process, with a steady-state value, an autoregressive parameter smaller than one, and an i.i.d. innovation. A similar stochastic process is assumed for D_t.
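A minimal sketch of the production block, under the assumption that the damage term enters multiplicatively as exp(-chi*Z) in a Cobb-Douglas function and that the energy bundle is a standard CES (Armington) aggregator. The function names and the symbol chi are illustrative, while the share and elasticity values are taken from the calibration reported below.

```python
import numpy as np

def energy_bundle(o, s, d=1.0, mu=0.84, nu=1.5):
    """CES (Armington) aggregator of fossil fuel o and renewable energy s.

    d  : energy-augmenting technology D_t
    mu : weight of fossil fuel in the energy mix (0.84 in the calibration)
    nu : elasticity of substitution between the two energy types (1.5)
    """
    rho = (nu - 1.0) / nu
    return d * (mu * o ** rho + (1.0 - mu) * s ** rho) ** (1.0 / rho)

def output(k, l, e, z, a=1.0, alpha_k=0.2518, alpha_e=0.0982, chi=0.0875):
    """Cobb-Douglas output with an exponential productivity loss from the pollution stock z.

    Factor elasticities follow the calibration (labor share = 1 - alpha_k - alpha_e = 0.65);
    chi is the pollution damage parameter; the exact functional form is an assumption.
    """
    alpha_l = 1.0 - alpha_k - alpha_e
    return np.exp(-chi * z) * a * k ** alpha_k * l ** alpha_l * e ** alpha_e

# Example evaluation of output for an arbitrary input bundle.
y = output(k=10.0, l=0.33, e=energy_bundle(o=1.0, s=0.2), z=0.1)
```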
Centralized equilibrium
Given the existence of a negative externality on the environment, we consider a centralized economy. The central planner solution is derived by choosing the paths for consumption, labor, capital, fossil fuels, renewable energy, and the stock of pollution to maximize the sum of discounted utility subject to the resource, technology, and carbon emissions constraints. From the first-order conditions for the centralized problem, we obtain the equilibrium conditions, which involve the discount factor: expression (11) is the optimal labor supply, expression (12) is the optimal consumption path, expression (13) is the equilibrium condition for renewable energy consumption, and expression (14) indicates the optimal stock of pollution. Notice that the price of fossil fuels includes the pollution externality cost.
Data and calibration
This section presents the calibration of the parameters of the model. Since the model includes both macroeconomic parameters and parameters related to emissions, we use different sources for the calibration. Macroeconomic parameters are calibrated from the Real Business Cycle literature, while energy and emissions parameters are taken from studies related to the environment and climate change, mostly Stern (2012) and Heutel (2012). We use data for the U.S. The discount factor (for annual data) is fixed at 0.975, and the relative risk aversion parameter is taken from the literature. We assume that the fraction of labor compensation over total income is 0.65. As the production function assumes constant returns to scale, the sum of the technological parameters for the other two inputs, capital and energy, must be 0.35. The technological parameter governing the elasticity of output with respect to energy is obtained from the proportion of energy consumption over GDP and is estimated to be 0.0982. Therefore, the elasticity of output with respect to physical capital is 0.2518. The parameter representing the proportion of fossil fuels in the total energy mix is fixed at 0.84, with the remaining 0.16 corresponding to renewable energy. The parameter governing the elasticity of substitution between fossil fuel energy and renewable energy is fixed at 1.5. Finally, the environmental parameters are taken from Nordhaus (2008) and calibrated simultaneously to produce a loss of productivity of 1% in the steady state. The pollution decay rate is fixed at 0.012, as is standard in the literature, which corresponds to a half-life of carbon concentration of around 58 years. Heutel (2012) estimates an elasticity of emissions with respect to output of 0.696, whereas the productivity loss from pollution is estimated to be 0.26%. We fix the emission parameter at 0.1, resulting in a pollution damage parameter of 0.0875 that reduces productivity by 1% in the steady state. Parameters for the autoregressive exogenous TFP shock are estimated from the Solow residual of the production function. The estimated values are a persistence parameter of 0.923 and a standard deviation of 0.0079. Finally, given the uncertainty about the parameters of the exogenous processes for the energy efficiency and emissions efficiency shocks, we use an autoregressive parameter of 0.9 and a standard deviation of 0.01 for both shocks, values in line with those used in the literature. A summary of the calibration of the parameters is presented in Table 1.
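For reference, the calibrated values listed above can be collected in one place. The sketch below is only a convenience listing of the numbers reported in the text and summarized in Table 1; the key names are ours.

```python
# Calibrated parameters reported in the text (Table 1), collected for reference.
PARAMS = {
    "discount_factor": 0.975,        # annual data
    "labor_share": 0.65,             # labor compensation over total income
    "energy_share": 0.0982,          # elasticity of output with respect to energy
    "capital_share": 0.2518,         # residual under constant returns to scale
    "fossil_weight": 0.84,           # share of fossil fuels in the energy mix
    "energy_substitution": 1.5,      # elasticity of substitution between energy types
    "pollution_decay": 0.012,        # implies a carbon half-life of about 58 years
    "carbon_content": 0.1,           # emissions per unit of fossil fuel
    "damage_parameter": 0.0875,      # calibrated to a 1% productivity loss in steady state
    "tfp_persistence": 0.923,        # AR(1) coefficient of the TFP shock
    "tfp_std": 0.0079,               # standard deviation of TFP innovations
    "eff_persistence": 0.9,          # persistence of energy/emissions efficiency shocks
    "eff_std": 0.01,                 # standard deviation of efficiency innovations
}
```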
Technological shocks
The calibrated model is used to study how the economy, the energy mix, the level of emissions, and the environment respond to different shocks. In particular, we are interested in studying the impact of different technological shocks on the energy mix and emissions, and their implications for the shift from fossil fuels to renewable energy sources that can increase output without further damage to the environment. We simulate three technological shocks: a total factor productivity shock (i.e., a neutral technological shock), an energy efficiency technological shock (an energy-augmenting shock), and an emissions efficiency technological shock.
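Since all three shocks are modelled as first-order autoregressive processes, a minimal simulation sketch is given below. The log-linear AR(1) form and the impulse-response construction (a single innovation with no further noise) are assumptions consistent with the description in the model section; the persistence and volatility values follow the calibration.

```python
import numpy as np

def simulate_ar1(periods, rho, sigma, x_bar=1.0, shock_period=1, shock_size=None):
    """Simulate a log AR(1) process around a steady state x_bar.

    By default a single innovation of size `shock_size` (one standard deviation if not given)
    hits in `shock_period` and no further noise is added, mimicking an impulse-response experiment.
    """
    shock_size = sigma if shock_size is None else shock_size
    log_x = np.full(periods, np.log(x_bar))
    for t in range(1, periods):
        eps = shock_size if t == shock_period else 0.0
        log_x[t] = (1.0 - rho) * np.log(x_bar) + rho * log_x[t - 1] + eps
    return np.exp(log_x)

# One-off innovations to TFP, energy efficiency, and emissions efficiency, using the calibration.
tfp_path = simulate_ar1(40, rho=0.923, sigma=0.0079)
energy_eff_path = simulate_ar1(40, rho=0.9, sigma=0.01)
emis_eff_path = simulate_ar1(40, rho=0.9, sigma=0.01, shock_size=-0.01)  # cleaner technology = negative shock
```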
Aggregate productivity shock
First, we present some simulations to show the dynamics of the model economy via impulse-response functions to an aggregate productivity shock. This first exercise considers the case of an exogenous idiosyncratic positive neutral shock to the economy, represented by an increase in total factor productivity (TFP), A_t. The expansion of economic activity following the productivity shock is expected to increase energy consumption, but its effects on CO2 emissions will depend on how the energy mix changes. Empirical evidence shows that total energy consumption is procyclical, a result which is also consistent with the model. The key question here is, given the existence of two alternative energy sources with separate effects on emissions, how the shock affects the energy mix and damages to the environment. Figure 3 plots the impulse responses of the main aggregate variables of the economy, as percentage deviations from the steady state. As expected, the rise in total factor productivity increases output. This rise in output is distributed between consumption and investment, similar to the response observed in a standard DSGE model. The amount of inputs also increases, including energy, given that the marginal productivity of each input is now higher, as this is a neutral technological shock. The demand for both types of energy increases, with the response of fossil fuels to the shock being larger than that of renewable energy. As a consequence of the increase in fossil fuel consumption, the level of emissions also increases. Therefore, under our benchmark calibration, the model produces a positive relationship between output and environmental deterioration when an aggregate productivity shock occurs. Emissions are procyclical as the neutral technological shock increases the demand for all energy sources. Importantly, the effects of the positive aggregate productivity shock on economic activity are mitigated by the counter-effect of the pollution externality, which reduces the productivity gains following the shock. The higher the cost of the pollution externality, the less output increases following a positive TFP shock. These results are consistent with the findings of Heutel (2012), who found that carbon emissions are significantly procyclical, with an elasticity with respect to GDP between 0.5 and 0.9. However, that result emerges directly from the modelling assumption that emissions are a proportion of output. In the model presented in this paper, emissions are not proportional to output but to the consumption of fossil fuel, yet we also find that carbon emissions are procyclical, as the neutral technological shock increases the demand for both renewable and fossil fuel energy sources. Our estimated value is an elasticity of 0.8142, a value in the range estimated by Heutel (2012).
This simulation exercise illustrates that a substitution of "dirty" by "clean" energy does not happen endogenously under the benchmark calibration, and hence the expansion of economic activity does not produce an endogenous energy transition from fossil fuels to renewable energy sources. Only when the cost in forgone output is high enough would changes in the energy mix reduce the use of fossil fuels to mitigate environmental damage. But this is not the case in our simulated economy, where the profits from increasing fossil fuel consumption are higher than the cost of damages to the environment. The other important result we obtain is that the productivity shock reduces energy intensity. In response to the shock both output and energy consumption increase, but the increase in the former is larger than in the latter. Therefore, we show that TFP shocks are an important source in explaining the decline in energy intensity observed in the data. However, from the benchmark simulation it is clear that a decline in energy intensity does not necessarily imply a lower level of emissions when output grows. Finally, another notable result is that the productivity shock reduces the level of carbon emissions per unit of output as a consequence of an expansion in renewable energy sources. Nevertheless, total carbon emissions increase as a consequence of the higher fossil fuel consumption, and hence the accumulation of CO2 in the atmosphere accelerates. Table 2 compares observed moments for output, consumption, investment, fossil fuel, renewable energy, and emissions with the figures simulated by the model following a TFP shock. The relative standard deviation of fossil fuel to output is 1.72 in the data, compared to a value of 1.63 produced by the model. Additionally, the model is able to match the relative standard deviation of emissions to output observed in the data. Therefore, the model explains quite well the observed variability of these key variables. However, the model has little predictive power for the relative standard deviation of renewables and for the correlations. The relative standard deviation of renewables to output is around 3.67 in the data, indicating that volatility in the production of renewable energy is much larger than that of output. However, the model generates a much lower relative standard deviation, of only 1.09, where the simulated volatility of renewables is similar to that of output. On the other hand, the model produces correlations of the energy sources and emissions with output close to one. However, the data show a much lower correlation of these variables with output: around 0.6 for fossil fuels and emissions, and 0.23 for renewables, indicating that other shocks, apart from technological shocks, drive the relationship between output, energy, and emissions across the business cycle. Finally, the model produces a correlation between fossil fuel and emissions of one by construction. However, this value is very close to the one observed in the data, 0.98.
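The moments compared in Table 2 are relative standard deviations and contemporaneous correlations with output. The sketch below shows how such statistics could be computed from simulated series; the series used here are made up purely for illustration and are not the paper's data, and in the paper these statistics would be computed on detrended (log-deviation) series.

```python
import numpy as np

def business_cycle_moments(y, x):
    """Relative standard deviation and contemporaneous correlation of series x with output y."""
    rel_std = np.std(x) / np.std(y)
    corr = np.corrcoef(y, x)[0, 1]
    return rel_std, corr

# Illustrative use on made-up series (not the paper's data).
rng = np.random.default_rng(0)
y = rng.normal(0.0, 0.01, 200)
fossil = 1.6 * y + rng.normal(0.0, 0.002, 200)
print(business_cycle_moments(y, fossil))
```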
Energy efficiency shock
Second, we study the response of the economy to a shock that increases energy efficiency, D_t. Given that most anthropogenic emissions of greenhouse gases released into the atmosphere are generated by energy consumption, environmental policies have focused on promoting energy efficiency as an instrument to reduce emissions. Energy efficiency refers to technological changes that reduce the amount of energy needed to produce a given quantity of goods and services in combination with the other inputs, resulting in a decline in energy intensity (an energy-augmenting technical change). This shock is general to the consumption of energy per unit of output and hence affects the two energy sources symmetrically. The implications of energy-saving technological changes for the economy have been widely studied in the literature, for instance in Newell et al. (1999), who find that energy price changes are the main driving force of energy-efficiency technological change. Here, we pay attention to how energy efficiency changes the energy mix. We find that energy efficiency technology not only provokes the well-known "rebound effect" (Frondel et al. 2012; Gillingham et al. 2016), which implies that a technological improvement in energy efficiency saves less energy than initially expected, but actually increases energy consumption, resulting in the so-called "backfire effect" (Sorrell 2009; Gillingham et al. 2016), not as a consequence of the optimal response of households who do not internalize the cost of pollution, but as the optimal decision of a central planner maximizing social welfare. Energy intensity falls as energy efficiency increases, but surprisingly the level of emissions also increases, and hence energy efficiency policies have harmful consequences for the environment as they encourage energy consumption. Figure 4 plots the impulse-response functions of the main variables of the model to an energy-efficiency technological shock (percentage deviations of each variable with respect to its steady state). As expected, the response of output is positive, as the energy-saving shock increases the productivity of one of the inputs: energy. As a consequence, the responses of consumption and investment are also positive, indicating that this efficiency technology shock increases physical capital accumulation. More importantly, gains in energy efficiency lead to an increase in the demand for energy. As the shock is common to the two energy sources, both the demand for non-renewable energy and the demand for renewable energy increase, with the increase in renewable energy being larger than the rise in fossil fuel energy. This larger response of renewable energy is a consequence of the increase in the relative price of fossil fuel energy, as the pollution externality cost raises the total cost of fossil fuels. However, the increase in the user cost of fossil fuels caused by the pollution externality cost is lower than the reduction in the user cost of this energy source due to the improvement in energy efficiency, resulting in a net rise in the quantity of energy used in production and, hence, in more emissions.
These results are consistent with the so-called "rebound effect" or "take-back effect" described in the literature on energy efficiency, consisting of a reduction in the expected gains, or even a loss, from new technologies that increase the efficiency of energy use. That effect is derived from the optimal response of economic agents to a technological improvement in energy efficiency, leading to a rise in energy consumption. This is the mechanism that we observe in our model, where this technological shock provokes a rise in the quantity of energy used in production, as the rise in energy efficiency is equivalent to a reduction in the energy price. For the benchmark calibration of the model, we obtain a "rebound effect" higher than 100%, the so-called backfire effect, which generates a negative effect on the environment, as the technological improvement in energy efficiency implies a rise in the emission of pollutants. Given that energy is a normal and also an ordinary good, the rebound effect can be decomposed into an income and a substitution effect. In our theoretical framework this technological shock increases the productivity of energy in producing the final good, increasing the demand for this input and reducing the demand for the other inputs. This is the substitution effect. On the other hand, the rise in the productivity of one of the factors increases aggregate productivity, increasing the demand for all factors. This is the income effect. Both effects contribute to the observed backfire effect and the consequent increase in the carbon concentration in the atmosphere.
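The size of the rebound effect can be illustrated with the textbook definition: the share of the engineering-expected energy savings that is eroded by the behavioural response, with values above 100% corresponding to the backfire case discussed above. The numbers in the example are hypothetical.

```python
def rebound_effect(energy_before, energy_after, efficiency_gain):
    """Rebound effect of an energy-efficiency improvement.

    Expected savings are efficiency_gain * energy_before (holding behaviour fixed);
    the rebound is the share of those savings eroded by the behavioural response.
    Values above 1 (100%) indicate 'backfire': energy use rises despite the efficiency gain.
    """
    expected_savings = efficiency_gain * energy_before
    actual_savings = energy_before - energy_after
    return (expected_savings - actual_savings) / expected_savings

# Example: a 10% efficiency gain but energy use rises from 100 to 102 units -> rebound of 120% (backfire).
print(rebound_effect(100.0, 102.0, 0.10))
```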
Emissions efficiency technology shock
Finally, we study the implications of a technological change that increases emissions efficiency, i.e., a negative shock to B_t. This shock implies a reduction in the level of emissions per unit of fossil fuel used in the production process. For instance, this is the case of an improvement in catalytic converter technology. This shock can also be interpreted as an improvement in abatement technology. In this case, the technological change does not affect energy efficiency but emissions efficiency specific to fossil fuel consumption, resulting in an asymmetric shock depending on the type of energy source consumed. Therefore, this shock reduces the level of emissions per unit of fossil fuel, but it does not directly affect the use of renewable energy sources, as the latter is a "clean" energy and thus not related to emissions. However, given the general equilibrium effects generated by our model economy, this shock specific to the use of non-renewable energy will also change renewable energy consumption.
As shown in Figure 5, output increases in response to this technological shock. This change in total output results in a rise in consumption and investment. Investment shows a positive response in the following periods, increasing the stock of physical capital. Importantly, the amount of energy used in production increases but, on balance, produces a lower level of emissions. The economic intuition behind these effects is the following. This shock is equivalent to a reduction in the cost of the pollution externality, increasing productivity and reducing the user cost of fossil fuel energy. The positive effect on economic activity is explained by two forces. First, the shock reduces damages to the environment, relieving their harmful effects on productivity and expanding output. Second, this initial expansion in output results in higher investment, increasing the capital stock. Therefore, the effects of an emissions efficiency shock on the economy are, qualitatively, similar to those of an aggregate productivity shock.
Investing in cleaner technologies has two opposing effects. First, the shock means that fewer carbon emissions are generated per unit of fossil fuel consumption. However, total carbon emissions will depend not only on the direct effect of the shock on emissions per unit of fossil fuel but also on the indirect effect of the shock on fossil fuel consumption. Indeed, an increase in energy consumption is observed at the same time as the level of emissions declines. Second, there is an increase in the use of fossil fuels and a reduction in the use of renewable energy. These two different results are a consequence of the different impact of this technological shock on the two energy sources. The shock directly affects the use of the fossil fuel energy source: it reduces the level of emissions per unit of energy, reducing the externality cost of using the "dirty" energy. As an indirect effect, this changes the relative price of the two energy sources, reducing the price of fossil fuel energy relative to the price of the renewable energy source. This provokes a substitution of renewable energy by fossil fuel energy. The rise in the demand for fossil fuels is larger than the fall in renewable energy, resulting in a total rise in the demand for energy.
In sum, an emissions efficiency technological shock is equivalent to a reduction in the user cost of the "dirty" energy, and hence increases the price of the "clean" energy relative to the "dirty" energy. Indeed, the literature suggests that the relatively low prices of fossil fuels have driven technological progress toward fossil-fuel-intensive industries. Emissions efficiency technological change provokes similar results, having a negative impact on the energy transition to renewable energy sources. Overall, the social cost of energy declines following this shock, increasing the quantity of energy used in production. Nevertheless, the effect of the rise in the quantity of energy used for production is offset by the reduction in emissions per unit of energy, resulting in an improvement in environmental quality.
Decentralized economy
The model presented above assumed a centralized economy in which a benevolent central planner chooses optimal values for output, consumption, investment, labor, the energy mix, and emissions simultaneously to maximize social welfare. In that context, the central planner chooses the optimal energy mix (the proportions of fossil fuel and renewable energy) such that the negative externality produced by the use of fossil fuel is internalized. However, the solution of the model for a centralized economy does not consider any particular policy to internalize the negative externality, as it is assumed that the central planner can choose optimal quantities for all variables, including the environmental ones. In this section, we solve the model for the case of a decentralized economy. Under "laissez faire", households maximize utility and firms maximize profits without taking into account the negative pollution externality provoked by the use of fossil fuels. This leads to an excessive consumption of fossil fuels with respect to the optimum, increasing pollution and generating climate damages that lead to productivity losses. The competitive equilibrium is extended by including a government that can implement a specific environmental policy by solving a Ramsey problem. In this context, the government can implement an optimal policy by introducing, given the optimizing behavior of both households and firms, an additional distortion that can offset the negative environmental externality. We consider that the government uses a tax policy in which the instrument is a specific tax (a per unit tax) on fossil fuel consumption. Heutel (2012) showed that the outcome under a Ramsey optimal fiscal policy can also be obtained by a government implementing a Ramsey optimal quantity policy.
First, we consider a decentralized economy populated by households and firms, in which each agent takes optimal decisions without internalizing the cost of the environmental externality. In this framework, households maximize discounted utility subject to the budget constraint by choosing the optimal paths for consumption, labor supply, and the capital stock. In the household's budget constraint, W_t is the wage and R_t is the return on capital. From the household's maximization problem we obtain two equilibrium conditions, representing the optimal labor supply and the optimal consumption path, respectively. Next, consider the behavior of the representative firm. In a market economy, firms maximize profits and do not internalize the cost of emissions; that is, they take the stock of pollution as given. In the firm's profit function, P_{o,t} is the price of fossil fuel and P_{S,t} is the price of renewable energy.
First-order conditions from profit maximization determine the prices of the inputs and are given by: The combination of the above equilibrium conditions for both households and firms defines the competitive equilibrium.
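The first-order conditions themselves are missing from the extracted text. Under the hypothetical profit function sketched above, they would equate each input price to its marginal product, for example:

```latex
\frac{\partial Y_t}{\partial K_t} = R_t , \qquad
\frac{\partial Y_t}{\partial L_t} = W_t , \qquad
\frac{\partial Y_t}{\partial O_t} = P_{o,t} , \qquad
\frac{\partial Y_t}{\partial S_t} = P_{S,t} .
```

This is a sketch under the stated assumptions, not the paper's exact equations.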
Next, we extend the competitive equilibrium to an optimal Ramsey problem by adding a government. We consider a benevolent government that chooses the optimal specific fossil fuel tax by solving a Ramsey problem to maximize social welfare. The government chooses the optimal tax such that discounted utility is maximized subject to the first-order conditions from the households' and firms' maximization problems. The government's environmental fiscal policy changes firms' profits such that, where τ o,t is a per-unit tax on fossil fuel. This changes the behavior of the firm in choosing the optimal quantity of fossil fuel used as an input. We assume that all revenues from the fossil fuel tax are returned to the households as a lump-sum transfer. Hence, with a government implementing an environmental tax policy, the household's budget constraint becomes, The benevolent government's maximization problem is given by, subject to the constraints in (17) and (3). This maximization problem represents a government that, in choosing the optimal specific tax rate on fossil fuel, accounts for the impact of the tax on the optimal behavior of households and firms.

Table 3 shows the differences in steady states for "laissez faire" and the Ramsey problem with respect to the steady state obtained for the centralized economy, in order to study how far both outcomes are from the first best. First, we find that differences in output are small. Output in the Ramsey problem is equal to that in the centralized economy, whereas in laissez faire output is 0.31% lower. This is explained by the effect of pollution damage on aggregate productivity. On one hand, fossil fuel is an input that makes a positive contribution to output. On the other hand, consumption of fossil fuel produces emissions that have a harmful impact on aggregate productivity. These two opposing forces tend to offset each other, resulting in small differences in output across scenarios. However, differences are larger for the rest of the variables of the model. The Ramsey problem represents a second-best policy and, hence, the equilibrium under the optimal tax policy lies between the social planner solution and the laissez faire outcome. Fossil fuel consumption, emissions, and pollutant concentration are all about 48% higher in the laissez faire scenario compared to the centralized outcome, where the negative externality is fully internalized. Additionally, the consumption of renewable energy is around 14% lower in laissez faire, as it is optimal for the social planner to increase the use of clean energy and reduce the use of fossil fuels. Although the impact on output is small, consumption is found to be 3.4% lower in laissez faire, as a fraction of output is forgone due to climate damages, reducing the amount of resources that can be devoted to consumption. Results from the Ramsey problem show that fossil fuel use, emissions, and pollutant concentration are around 7% higher, whereas consumption of renewable energy is about 2.4% lower, compared to the centralized outcome. These differences contrast sharply with the laissez faire scenario, indicating that the second-best policy succeeds in advancing the energy transition by reducing the use of fossil fuels and increasing the use of renewable energy. All these results lead to consumption that is a mere 0.4% lower than in the centralized outcome, for an equivalent level of output.
Finally, with the benchmark calibration of the model, social welfare in laissez faire is 0.47% lower than in the first best, whereas welfare is only 0.07% lower in the Ramsey problem, indicating that this second-best policy is highly effective in correcting the negative effects of the environmental externality and is able to almost achieve the centralized economy outcome. As the damage parameter increases, the differences across scenarios also increase.
Concluding remarks
This paper studies how the energy mix of renewable versus non-renewable energy sources is affected by technological shocks, and the implications for the energy transition, carbon emissions, and the environment. Our starting evidence is that pollutant emissions vary widely depending on the energy source and, hence, alternative technological and price shocks have different effects on the environment depending on how they change the energy mix. The paper investigates those links using an E-DSGE model in which final-good-sector productivity is negatively affected by pollutant emissions. The model uses a three-factor production function with capital, labor, and energy, where two energy sources, fossil fuel and renewable energy, are considered. The main results of the paper can be summarized as follows. First, energy consumption and emissions are procyclical, but emissions per unit of output are countercyclical, consistent with empirical evidence. This is a direct consequence of the decline in energy intensity as economic activity expands. We find that a neutral technological shock provokes an expansion of economic activity and higher energy consumption, increasing both fossil fuel and renewable energy consumption, and generates more emissions, resulting in a harmful impact on the environment. Second, an energy efficiency shock provokes a rebound effect above 100% (the so-called backfire effect), resulting in an increase in emissions, indicating that energy efficiency policies must include additional instruments to avoid unanticipated negative effects on the environment. By contrast, emissions are reduced in the case of an emission efficiency technological improvement and in the case of an increase in the price of the "dirty" energy, although the transmission mechanisms are different. The emission efficiency technological shock reduces carbon emissions but also increases energy consumption and output. As pointed out by Acemoglu et al. (2012), if "dirty" and "clean" energy are sufficiently substitutable, then there is room for implementing directed technical change policies under alternative environmental policies to redirect technical change toward renewable energy sources and reduce environmental damage.
In our theoretical framework, emissions are generated by the use of fossil fuel in the energy production activity, whereas renewables do not produce any externality. As fossil fuel consumption increases, the pollution externality cost also increases, reducing productivity. This makes the use of "dirty" energy more expensive. The model is solved for a decentralized economy in which a benevolent government solves a Ramsey problem and chooses an optimal specific tax on fossil fuels to maximize social welfare. The specific tax on fossil fuel provokes a substitution of fossil fuel energy by renewable energy, resulting in a decline in the level of emissions. We find that the second-best policy resulting from the Ramsey problem is able to shift the competitive equilibrium, in which the environmental externality is not internalized, to an equilibrium close to the first best, where welfare losses are small compared to the laissez faire scenario. | 2021-10-22T15:23:21.755Z | 2021-09-04T00:00:00.000 | {
"year": 2021,
"sha1": "b7c9aded7bf86e568fe3a486ce6b628b5d4b3bae",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10018-021-00324-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "b426a18e95e9bc39db15558630bf6cd77a0b8927",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
228897518 | pes2o/s2orc | v3-fos-license | Kinetic Behaviour of Pancreatic Lipase Inhibition by Ultrasonicated A. malaccensis and A. subintegra Leaves of Different Particle Sizes
Gallic acid and quercetin equivalents were determined in the crude extracts of matured leaves of Aquilaria malaccensis and Aquilaria subintegra. The leaves of both Aquilaria species were dried at 60 °C for 24 hours, ground, and sieved into particle sizes of 250, 300, 400, 500, and 1000 μm. Then, each particle size of leaves was soaked in distilled water at a ratio of 1:100 (w/v) for 24 hours and underwent pretreatment in an ultrasonicator (37 kHz) at 60 °C for 30 minutes. The crude extracts were obtained after about 4 hours of hydrodistillation. The highest concentrations of gallic acid and quercetin equivalents were found in the crude extract from the 250 μm particle size. The kinetics of pancreatic lipase inhibition was further studied using the Lineweaver-Burk plot, wherein the concentrations of p-NPP (the substrate) and pancreatic lipase were varied. Based on the pattern of the lines in the plot, the crude leaf extracts of both Aquilaria species exhibit mixed inhibition of pancreatic lipase, which indicates that the inhibitors attach not only to free pancreatic lipase but also to the pancreatic lipase-(p-NPP) complex. The reaction mechanism is similar to non-competitive inhibition; however, the values of the dissociation constant, Ki, for the two inhibition pathways are different. The inhibition shows an increase in the Michaelis-Menten constant (Km) and a reduction in the maximum pancreatic lipase activity (Vm) compared to the reaction without Aquilaria spp. crude extracts (control). This proves that inhibition occurred in this reaction.
Introduction
The issue of obesity and weight gain has become a priority, as it affects a person's health and can give rise to critical illnesses; consequently, research on natural anti-obesity products is becoming more and more urgent. Nowadays, people are more concerned about the fat content in their body because, for them, it is a major cause of obesity and weight gain. Due to this, pancreatic lipase inhibitors have received more and more attention, since pancreatic lipase is a digestive enzyme produced by the stomach and pancreas whose primary function is to break down fats into smaller molecules that can be easily absorbed and digested by the intestines. Nevertheless, in theory, excessive pancreatic lipase activity can be problematic because it can lead to obesity.
One alternative to slow down this enzyme's activity is to take synthetic inhibitors such as anti-obesity drugs, which have been studied in depth over decades. Over the last 13 years, the FDA has approved four weight-loss drugs: Orlistat, Contrave, Belviq, and Qsymia. Only three drugs were approved by the FDA as adjunctive therapy for chronic weight management: Orlistat, approved in 1999, and Belviq and Qsymia, both approved in 2012 [1]. These clinical medications manipulate body weight by increasing energy expenditure, suppressing appetite, or inhibiting pancreatic lipase to decrease lipid absorption in the intestine [2]. As of September 2013, only Orlistat has been approved by both the FDA and the EMA (European Medicines Agency) for chronic weight management; it inhibits gastrointestinal lipases to reduce fat absorption and became the only FDA-approved weight-loss drug available without a prescription [3]. These drugs have the potential to decrease weight, but possible side effects are always a big public health concern in new drug development. Orlistat has a number of safety issues, including hepatotoxicity, nephrotoxicity, pancreatitis, and kidney stones, and its most common adverse effect is steatorrhea [3]. The risks also include severe liver damage, acute pancreatitis, acute renal failure, and pre-cancerous colon lesions. In addition, Orlistat interferes with fat-soluble vitamin absorption, which could cause a transient vitamin A, D, E, or K deficiency [4]. In conclusion, the modest efficacy, undesirable adverse effects, and serious health risks of Orlistat highlight its shortcomings and emphasize the need for other obesity drug options.
Therefore, traditional herbal medicines are believed to be able to suppress appetite and promote weight loss. These natural materials can be relatively more economical, with no toxic side effects, when compared with synthetic drugs. Natural products have potential in the treatment of obesity, and this potential is still largely unexplored [5]. Exploring the potential of Aquilaria sp. might therefore be an excellent alternative strategy for the development of safe and effective obesity control therapies. The World Health Organization (WHO) reports that about 20,000 plants are used for medical purposes and that over 80% of the world's population uses them for primary health care [6]. This can be seen from previous studies that explored anti-obesity compounds derived from green, white, and black tea leaves, which later became very popular functional foods [7]. Previous findings have shown strong pancreatic lipase inhibitory activity in in-vitro anti-lipase and antioxidant assays using crude ethanolic extracts from 30 plants grown in Oaxaca, México [8]. Strong inhibition of pancreatic lipase and pancreatic cholesterol esterase activities, as well as inhibition of cholesterol micelle formation and bile acid binding, was also demonstrated using mulberry leaf extract [9]. Thus, the results obtained by researchers show that plants are able to help in controlling obesity.
Plant-derived lipase inhibitors have become potential research hotspots compared with chemically synthesized lipase inhibitors. Although it is difficult to determine the active components in the plant-based products sold in the market, for consumers these products are relatively cheap, safe, and reliable and are known as effective slimming products, which keeps them in demand. Therefore, this research explored the potential of two Aquilaria sp., namely A. malaccensis and A. subintegra, as pancreatic lipase inhibitors. Moreover, to date, no studies have been found on these species as inhibitors of pancreatic lipase. The crude extract was obtained by hydrodistillation of ultrasonicated A. subintegra and A. malaccensis leaves for different particle sizes of 250, 300, 400, 500, and 1000 µm. Prior to the extraction process, soaking and ultrasonication were introduced because most researchers have reported that both soaking and ultrasonication increase the specific surface area of the prepared samples while decreasing the average particle size, and enhance the yield of gallic acid and quercetin produced during extraction [10−12]. The contents of gallic acid and quercetin equivalents in the crude extracts were determined and inhibitory studies were carried out.
Research on lipase inhibition by plant-based products has been widely carried out, and some of it shows obvious inhibitory effects [13]. Yet, due to the low content of active ingredients, complicated extraction procedures, and low recovery rates, such inhibitors cannot be produced in large quantities. Consequently, only a few of them have reached the clinical stage. This is a major drawback in the commercialization of lipase inhibitors derived from edible plants.
One suggestion is to study the action mechanism of natural compounds on pancreatic lipase while high-activity pancreatic lipase inhibitors continue to be screened [13]. Thus, this research also emphasizes the reaction mechanism of pancreatic lipase inhibition by A. malaccensis and A. subintegra leaf crude extracts, wherein the activity of the pancreatic lipase inhibitor was determined throughout the reaction. This research is one of the fundamental works for future studies on natural obesity treatment, which is a new direction of key research in the field of lipase inhibition mechanisms. It is also expected that the scientific data obtained will benefit the global development of agriculture and healthcare.
Drying, Milling, Sieving and Pretreatment of Leaves
The matured leaves of A. malaccensis and A. subintegra were collected from a farm in Selangor. The leaves were washed thoroughly and left to dry at room temperature before being dried in an oven (Memmert) at 60 ºC for 24 hours. The dried leaves were milled using a Mastar (MAS-160BL (A)-1) blender to obtain a fine powder. Next, the ground leaves were sieved into uniform particle sizes of 250, 300, 400, 500, and 1000 µm. Then, each particle size of A. malaccensis and A. subintegra powdered leaves was soaked for 24 hours in distilled water at a ratio of 1:100 (w/v) at room temperature. After soaking, all particle sizes of leaves were ultrasonicated in a NEXXsonics NS-A-18H ultrasonicator. This pretreatment was carried out for 30 minutes for each sample at a frequency of 37 kHz and 60 ºC.
Hydrodistillation of Crude Extract
The pre-treated A. malaccensis and A. subintegra leaves were hydrodistilled using a TOPS MS-6 heating mantle at constant atmospheric pressure and 70 °C until a sufficient amount of hydrodistillate was obtained (typically in the range of 300 to 400 mL). Then, the extract was evaporated under reduced pressure at 40 ºC using a Heidolph rotary evaporator to obtain a concentrated product. The crude leaf extract was kept in a refrigerator for further analysis and experiments in order to prevent microbial growth in the samples.
Determination of Total Phenol Contents (TPC)
The total phenolic content was determined using the Folin-Ciocalteu method [14−17]. A mixture of 0.2 mL of leaf extract and 0.2 mL of Folin-Ciocalteu reagent was left for 4 minutes to allow the reaction before 1 mL of 15% Na2CO3 was added. After the Na2CO3 was added, the mixture was allowed to stand for another 2 hours at room temperature. Then, the absorbance was measured at a wavelength of 760 nm. The absorbance was used to obtain the concentration of gallic acid equivalent, using the equation of the gallic acid calibration curve constructed from the absorbance and concentration of gallic acid in dilutions prepared from a gallic acid stock solution. The stock solution was prepared by dissolving 100 mg of dry gallic acid in 2 mL of ethanol and diluting to volume with water in a 100 mL volumetric flask. Dilutions of this stock solution were prepared to give a range of gallic acid concentrations between 0 and 1 mg/mL. The absorbance of each dilution was measured after the Folin-Ciocalteu method was carried out, and the plot of absorbance vs. gallic acid concentration was constructed. The results were expressed as mg/mL of gallic acid equivalent (GE). All readings were made in triplicate and averaged.
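A calibration curve of this kind is typically a simple linear regression of absorbance against standard concentration, which is then inverted to convert sample absorbances into equivalent concentrations. The short sketch below illustrates the calculation; the numerical values and function names are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical calibration standards (gallic acid, mg/mL) and their absorbances at 760 nm
conc_std = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
abs_std = np.array([0.02, 0.21, 0.40, 0.61, 0.79, 1.00])

# Fit absorbance = slope * concentration + intercept (linear Beer-Lambert-type response)
slope, intercept = np.polyfit(conc_std, abs_std, 1)

def absorbance_to_gae(absorbance_sample):
    """Convert a sample absorbance to gallic acid equivalent (mg/mL) via the calibration line."""
    return (absorbance_sample - intercept) / slope

# Triplicate readings for one extract, averaged as in the procedure described above
sample_abs = np.array([0.46, 0.47, 0.45])
print(f"GAE = {absorbance_to_gae(sample_abs.mean()):.3f} mg/mL")
```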
Determination of Total Flavonoid Contents (TFC)
The total flavonoid content was determined using the aluminium chloride (AlCl3) assay [18]. An amount of 0.5 mL of leaf extract, 0.1 mL of 10% AlCl3, 0.1 mL of potassium acetate, and 4.3 mL of deionized water were mixed. Then, the mixture was incubated for 30 minutes at room temperature. The absorbance was measured at a wavelength of 415 nm using a spectrophotometer. The absorbance was used to obtain the concentration of quercetin equivalent, using the equation of the quercetin calibration curve constructed from the absorbance and concentration of quercetin in dilutions prepared from the quercetin stock solution. The stock solution was prepared by dissolving 100 mg of dry quercetin in 2 mL of ethanol and diluting to volume with water in a 100 mL volumetric flask. Dilutions of this stock solution were prepared to give a range of quercetin concentrations between 0 and 1 mg/mL. The absorbance of each dilution was determined using the AlCl3 method, and the plot of absorbance vs. quercetin concentration was constructed. The results were expressed as mg/mL of quercetin equivalent (QE). All readings were made in triplicate and averaged.
Pancreatic Lipase Inhibition Reaction
The inhibition of pancreatic lipase activity was studied via spectrophotometric analysis [19,20]. Crude porcine pancreatic lipase (PPL) from Sigma (USA) was suspended in tris-HCl buffer (pH 7.4) with 2.5 mmol NaCl to give a pancreatic lipase concentration of 200 U/mL. Then, p-nitrophenyl palmitate (p-NPP) was dissolved in water as the pancreatic lipase substrate. Next, 1 mL of crude leaf extract was mixed with 1 mL of enzyme suspension, 3 mL of substrate solution, and 6 mL of tris-HCl buffer. The reaction was carried out in a water bath at 37 °C for 30 minutes. After the reaction, 1 mL of an acetone-ethanol mixture (1:1) was added to stop the PPL activity. The absorbance was measured using a spectrophotometer at a wavelength of 410 nm. The absorbance was used to obtain the amount of p-nitrophenol (p-NP) liberated, using the equation from the p-NP calibration curve. The calibration curve was constructed from the absorbance and concentration of p-NP in the range of 0 to 1000 µmol/mL. The readings were made in triplicate and averaged. The standard PPL activity (without Aquilaria crude extract) and the PPL activity of samples with Aquilaria crude extract were calculated using Equation (1), where [Cp-NP] denotes the amount of p-nitrophenol released from p-NPP by 1 mL of PPL at 37 °C (μmol) and tR denotes the reaction time (min). The percentage of inhibition (PI) was then determined using Equation (2).
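Equations (1) and (2) themselves did not survive extraction. Based on the variable definitions given above, they most likely take the standard forms below; this is a reconstruction under that assumption, not a verbatim copy of the paper's equations:

```latex
\text{PPL activity} \;(\mu\text{mol/min}) = \frac{[C_{p\text{-NP}}]}{t_R} \tag{1}
```

```latex
PI \,(\%) = \frac{\text{Activity}_{\text{control}} - \text{Activity}_{\text{sample}}}{\text{Activity}_{\text{control}}} \times 100 \tag{2}
```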
Evaluation of Pancreatic Lipase Inhibition Kinetics
The kinetics of PPL activity inhibited by A. malaccensis and A. subintegra leaf crude extracts was analyzed graphically using double-reciprocal (Lineweaver-Burk) plots. The plots were constructed at different p-NPP concentrations, varying from 100 to 1000 µmol/mL, for the PPL reaction with and without Aquilaria crude extract (control). The mode of inhibition was determined from the pattern of interception and crossing of the linear lines in the reciprocal plots of PPL activity with and without inhibitor vs. p-NPP concentration. The kinetic parameters, namely the Michaelis-Menten constant (Km) and the maximum reaction rate or enzyme activity (Vm), were obtained from the reciprocal of the Michaelis-Menten equation (Equation (3)), which is given by Equation (4), wherein Vm was calculated from the y-axis intercept and Km from the slope of the linear graph. The inhibition constant (Ki) was calculated by substituting Km and Vm into the Michaelis-Menten kinetic equation (Equation (5)), modified to include the additional Ki terms once the mode of inhibition had been identified from the Lineweaver-Burk plot.
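Equations (3) through (6) are not reproduced in the extracted text. For reference, the standard Michaelis-Menten and Lineweaver-Burk forms, and the usual mixed-inhibition rate law with distinct constants for binding to the free enzyme (Kia) and to the enzyme-substrate complex (Kib), are given below; the paper's own equations are assumed to correspond to these:

```latex
v = \frac{V_m [S]}{K_m + [S]},
\qquad
\frac{1}{v} = \frac{K_m}{V_m}\,\frac{1}{[S]} + \frac{1}{V_m},
```

```latex
v = \frac{V_m [S]}{K_m\!\left(1 + \dfrac{[I]}{K_{ia}}\right) + [S]\!\left(1 + \dfrac{[I]}{K_{ib}}\right)},
```

where [S] is the p-NPP concentration and [I] the inhibitor (Aquilaria extract) concentration.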
Content of Gallic acid and Quercetin in Ultrasonicated Aquilaria Leaves Crude Extract of Different Particle Sizes
The contents of gallic acid and quercetin in A. malaccensis and A. subintegra crude extracts for different particle sizes of ultrasonicated leaves are shown in Figure 1. The highest concentrations of gallic acid and quercetin were found in both Aquilaria leaf crude extracts at the particle size of 250 µm. This indicates that the total phenolic and flavonoid contents increased significantly with decreasing particle size [21]. Particle size reduction of plant material has become a fundamental consideration, as it has a significant effect on the extraction of active compounds: it provides a shorter mass-transfer distance for the extraction solvent to travel and more surface area for molecular transport [22−24]. The extensive mass transfer of solute between phases was also found to decrease the time required to extract the maximum phytochemical content [25]. Furthermore, a sample with a larger particle size has a smaller surface area, which restricts the solubility of water-soluble components and leads to lower total phenolic and flavonoid contents [26]. Pretreatment by ultrasonication also contributes a significant effect: the particle size of the leaves was reduced further in proportion to the treatment time, as the cavitation energy generated by ultrasound raises local pressure changes and shear in the liquid, which damages the particles [27]. Therefore, optimization of the ultrasonication process, specifically the reduction of particle size, can be rationally developed once such heuristic rules are quantified for each plant source [25]. The sample matrix, the particle size, and the extraction technique also strongly influence phenolic and flavonoid extraction [28]. The concentrations of gallic acid and quercetin equivalents in the A. subintegra crude extract were 10% and 20% higher, respectively, than those in the A. malaccensis crude extract.
In-vitro Inhibitory Effect of Ultrasonicated Aquilaria Leaves Crude Extract at Different Particle Sizes on Pancreatic Lipase
Figure 2 shows the percentage inhibition of porcine pancreatic lipase (PPL) by A. malaccensis and A. subintegra leaf crude extracts, calculated using Equation (2), for particle sizes of 250, 300, 400, 500, and 1000 µm. The highest percentage of PPL inhibition was given by the crude extract from leaves with a particle size of 250 µm for both Aquilaria species. The percentage of PPL inhibition increased as the content of gallic acid and quercetin equivalents in the crude extracts of the different particle sizes increased. This result is in line with research done on green tea powders, in which inhibition depended on the phenolic and flavonoid contents of leaves of different particle sizes [29]. Those results also revealed that antioxidant activity was dependent on the particle size of the powders [29]. Thus, a smaller particle size leads to a higher percentage of pancreatic lipase inhibition, owing to the total phenolic and flavonoid content of the sample. The percentage of PPL inhibition was higher for A. subintegra than for A. malaccensis because the contents of gallic acid and quercetin equivalents were higher in the A. subintegra crude extract.
Kinetic Inhibition of Ultrasonicated Aquilaria Leaves Crude Extract at Different Particle Sizes
The mode of PPL inhibition exhibited by the Lineweaver-Burk plots for the different particle sizes of ultrasonicated A. malaccensis and A. subintegra leaves was found to be mixed inhibition (Figures 3 and 4), which shows that the Aquilaria extract was able to bind to free PPL and also to the PPL-p-NPP complex. Based on the mode of inhibition identified, the overall mixed-inhibition reaction mechanism exhibited by A. malaccensis and A. subintegra leaf crude extracts is shown in Equation (5). According to this mechanism, the gallic acid and quercetin equivalents in the Aquilaria crude extracts bound to PPL and to the PPL-p-NPP complex, and it was possible for these inhibitors to bind to both states of PPL at the same time. In mixed inhibition, the inhibitor is capable of binding to both the free enzyme and the enzyme-substrate complex [30].
The linear equation of the Lineweaver-Burk plot (Equation (4)) is the reciprocal of the Michaelis-Menten equation (Equation (3)), and it can be written as Equation (6) to represent the overall PPL inhibition reaction by the Aquilaria extract. The kinetic parameters, namely the Michaelis-Menten constant (Km), maximal velocity (Vm), and inhibition constants (Kia and Kib), for the mixed-type inhibition reaction were calculated using Equations (4) and (6). Referring to the overall reaction mechanism of the Aquilaria extract shown in Equation (5), Kia is the inhibition constant for binding of the Aquilaria extract to PPL and Kib is the inhibition constant for binding of the Aquilaria extract to the PPL-p-NPP complex. All calculated values of the kinetic parameters are tabulated in Table 1.
Based on the linear equations obtained from the Lineweaver-Burk plots (Table 1), the y-axis intercept of each plot gives the value of 1/Vm, from which Vm was calculated. The slope of the graph gives the value of Km/Vm, also known as the specificity time [31]. Thus, the value of Km was calculated by substituting the value of Vm obtained from the y-axis intercept. In the inhibition study, Km and Vm in Equation (4) refer to the apparent values under inhibition, which can also be denoted as Km,app and Vm,app. In the inhibition reaction, Equation (6) shows that the slope Km/Vm is multiplied by a factor of {1 + [Aquilaria extract]/Kia} due to the inhibitory effect of the Aquilaria extract. In order to show that Km and Vm were affected by the Aquilaria crude extract and not by p-NPP, Equations (4) and (6) were combined and rearranged to obtain Equations (7) and (8), which were later used to calculate Kia and Kib, where [Aquilaria extract] is the concentration of gallic acid and quercetin equivalents in the Aquilaria crude extract.
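As an illustration of how Km and Vm are extracted from such a plot, the snippet below fits a straight line to 1/v versus 1/[S] data and recovers Vm from the intercept and Km from the slope. The data values are hypothetical and serve only to demonstrate the calculation described in the text.

```python
import numpy as np

# Hypothetical p-NPP concentrations (µmol/mL) and measured PPL activities (µmol/min)
s = np.array([100, 200, 400, 600, 800, 1000], dtype=float)
v = np.array([5.2, 8.8, 13.1, 15.5, 17.0, 18.1])

# Lineweaver-Burk: 1/v = (Km/Vm)*(1/[S]) + 1/Vm
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)

Vm = 1.0 / intercept          # maximum activity from the y-axis intercept
Km = slope * Vm               # Michaelis-Menten constant from the slope

print(f"Vm = {Vm:.1f} µmol/min, Km = {Km:.0f} µmol/mL")
```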
The value of Vm also decreased because the Aquilaria crude extract was able to prevent catalysis regardless of whether p-NPP was bound to PPL in the PPL-p-NPP complex (Equation (5)). With mixed-type inhibitors in the reaction, Km varies with the relative values of Kia and Kib; the value of Km increases as the Kia or Kib value decreases. Moreover, in mixed inhibition, the dissociation constant for inhibitor binding to free PPL differs from the dissociation constant for binding to the PPL-p-NPP complex. The value of Ki for an inhibitor is analogous to Km for a substrate: it reflects the strength of the interaction between PPL and the Aquilaria crude extracts in the states with and without p-NPP. A small Ki value reflects tight binding of an inhibitor to an enzyme, whereas a larger Ki value reflects weaker binding. The results showed that the Kib value was higher than Kia, indicating that the affinity of the inhibitor for free PPL was higher than its affinity for the PPL-p-NPP complex, which makes the inhibitory effect stronger. According to the Federation of European Biochemical Societies, the case in which Kia is lower than Kib is known as predominantly competitive inhibition [32]. Furthermore, the presence of p-NPP on PPL has no influence on the ability of a mixed-type inhibitor to bind to PPL: even though it has no structural similarity to p-NPP, it is able to bind to both the free PPL and the PPL-p-NPP complex. Although this binding is away from the active site, it can still alter the conformation of the enzyme and reduce its catalytic activity through changes in the nature of the catalytic groups at the active site [33].
A similar inhibition mode was also observed in the pancreatic lipase inhibition study by Ong et al. using extracts of Eleusine indica (L.) Gaertner [34]. It indicates the possibility of pancreatic lipase-substrate complex formation with the inhibitor binding at a site distinct from the active site, resulting in a reduction in complex affinity. This explains the increase in Km for the inhibition of PPL by crude extracts from ultrasonicated Aquilaria leaves. For inhibition by A. subintegra crude extract, the effect of mixed inhibition was a reduction in Vm and a decrease in Km as the particle size of the leaves increased.
However, Km for PPL inhibition by A. malaccensis crude extract increased for the particle sizes of 400 and 500 µm, which indicates that the affinity between PPL and p-NPP was lower, regardless of the content of gallic acid and quercetin equivalents in the A. malaccensis crude extracts. The sudden increase in Vm at 500 µm for A. malaccensis leaves was due to the higher PPL activity at lower p-NPP concentration compared to the PPL activity at higher p-NPP concentration (Figure 3). Thus, it can be concluded that a larger value of Km indicates weak binding of the substrate to the enzyme [35]. The change in Km may also vary, depending on the relative values of the inhibition constant at the free enzyme (Kia) and the inhibition constant at the enzyme-substrate complex (Kib).
Conclusions
Based on the results obtained, the smallest particle size of leaves, 250 µm, soaked for 24 hours at a ratio of 1:100 (w/v) and ultrasonicated at 60 °C for 30 minutes, gave the highest content of gallic acid and quercetin equivalents in the Aquilaria crude extract, as well as the highest percentage of inhibition of PPL. It was shown that the smallest particle size has a strong effect on obtaining higher gallic acid and quercetin contents in the crude extract. The lowest PPL activity was found in the inhibition reaction using Aquilaria crude extract from the 250 µm particle size of leaves. The gallic acid and quercetin equivalents in the crude extracts contributed to the inhibition of PPL, with the highest PPL inhibition of 82% for 1 mL of Aquilaria crude extract used. The percentage of PPL inhibition could be increased by increasing the volume of Aquilaria crude extract used. The kinetic studies were initiated by identifying the mode of inhibition of PPL by Aquilaria using the Lineweaver-Burk plot. The mode of inhibition identified for the sample with the highest content of gallic acid and quercetin equivalents was mixed inhibition. In this type of inhibition, the value of Km was higher and Vm was lower compared to the values of Km and Vm for the non-inhibited PPL. This indicates that this reversible inhibitor decreased the rate of PPL activity and also reduced the affinity between p-NPP and PPL. Furthermore, it is believed that pancreatic lipase has a site other than the active site, allowing both p-NPP and the inhibitor in the Aquilaria crude extract to bind simultaneously. The inhibition constants for mixed inhibition are Kia and Kib, and these two constants had different values, one higher than the other. The lower value of Kia indicates that the affinity between the inhibitors in the Aquilaria leaf crude extract and free PPL was higher. The inhibitor activity and kinetic parameters determined for Aquilaria spp. are expected to be beneficial in controlling obesity and problems associated with excess weight. Besides, this can further increase the potential of widely planted and wild-grown Aquilaria species. It is expected that this study will extend the knowledge on enzyme inhibition kinetics and the related parameters that may help in identifying the effectiveness of an inhibitor, which would also benefit related research in this area. | 2020-11-12T09:08:45.790Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "e73382fc9390a74bbbd380a5a4c09f79ddff9409",
"oa_license": "CCBYSA",
"oa_url": "https://ejournal2.undip.ac.id/index.php/bcrec/article/download/8864/4702",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8c8884512bcd721a98dac83cd02ca13d221f98b9",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
39154056 | pes2o/s2orc | v3-fos-license | Synthesis and swelling properties of a poly(vinyl alcohol)-based superabsorbing hydrogel
Superabsorbent hydrogels based on poly(vinyl alcohol) were prepared by a crosslinking technique using glutaraldehyde as the crosslinker. The hydrogel structure was confirmed using scanning electron microscopy (SEM). SEM observation showed a porous structure with smooth surface morphology of the hydrogel. We systematically optimized certain variables of the hydrogel synthesis (i.e., the crosslinker concentration, poly(vinyl alcohol) content, and the time and temperature of the crosslinking reaction) to achieve a hydrogel with maximum water absorbency. Under the optimized conditions, the maximum swelling capacity in distilled water was 231 g/g. The absorbency under load (AUL) of the hydrogels was also measured. In addition, the swelling ratio in various salt solutions was determined, and the hydrogels exhibited salt-sensitivity properties.
Introduction
Superabsorbent polymers (SAPs) are cross-linked networks that can absorb a great amount of water or aqueous solutions [1]. Due to the unique properties of SAPs, these polymers have been synthesized and characterized by several research groups around the world [2-5].
SAPs are mainly used in various industries such as hygiene products, foods, cosmetics, and agriculture [6-8]. Since they respond to changing environmental conditions such as temperature [9], pH [10], and solvent composition [11], SAPs have been attracting much attention in the medical and mechanical engineering fields.
Polyvinyl alcohol (PVA) is a synthetic polymer that is soluble in water. Because of its desirable characteristics, PVA hydrogels have been used in various pharmaceutical and biomedical applications [12-21]. However, PVA must be cross-linked in order to be useful for a wide range of applications. Cross-linked PVA can be synthesized by chemical or physical methods. The use of chemical crosslinkers such as glutaraldehyde and the use of electron beams are the most common methods of chemical crosslinking of PVA. Physically crosslinked PVA hydrogels can be prepared using methods such as freezing-thawing.
In the present report, we describe the preparation and characterization of a poly(vinyl alcohol)-based hydrogel. The effects of the reaction variables affecting the swelling capacity of the hydrogel, as well as the swelling behavior in various salt solutions, were investigated.
Synthesis of hydrogel
A general reaction mechanism for PVA-based hydrogel formation is shown in Scheme 1. As seen from this Scheme, acetal linkages are formed between the aldehyde groups of glutaraldehyde and the hydroxyl groups of the PVA backbones.

Scheme 1. Proposed mechanistic pathway for synthesis of the PVA-based hydrogels.
Characterization
In general, scanning electron microscopy (SEM) reveals the microstructural morphology of hydrogels. The SEM image of the synthesized hydrogel is shown in Fig. 1. This image verifies that the PVA-based hydrogel synthesized in this work has a porous structure. The existence of these pores strongly increases the swelling kinetics of the resulting product.
Effect of crosslinker concentration
As mentioned in the "Introduction" section, for the PVA hydrogel to be useful, it must be cross-linked. In hydrogel synthesis, crosslinking agents prevent dissolution of the hydrogels. As shown in Scheme 1, glutaraldehyde acts as the crosslinker. Fig. 2 shows the influence of the crosslinking agent on the swelling capacity of the hydrogel. As indicated in Fig. 2, a higher crosslinker concentration decreases the space between the polymer chains and, consequently, the resulting highly crosslinked rigid structure cannot expand and cannot hold a large quantity of water.
Effect of PVA concentration
The dependence of hydrogel swelling on the amount of PVA is shown in Fig. 3. The maximum swelling capacity (187 g/g) was observed at 2.4 wt% PVA, while the other factors were kept constant. The swelling of the hydrogel increased considerably as the PVA content increased from 1.2 to 2.4 wt%. This behavior is attributed to the availability of more sites for crosslinking. However, upon further increase in the substrate concentration, the increase in the viscosity of the reaction medium restricts the movement of the PVA chains, thereby decreasing the absorbency.
Effect of the reaction bath temperature
In this series of experiments, the swelling ratio as a function of the reaction temperature was studied by varying the temperature of the water bath from 50 to 100 °C (Fig. 4). The increase in swelling with increasing temperature up to 80 °C can be attributed to the increased rate of diffusion of glutaraldehyde into the PVA backbones. At temperatures higher than 80 °C, however, possible "thermal crosslinking" of the PVA backbones may play a major role, leading to low-swelling hydrogels. In addition, the swelling loss may be related to an increase in crosslinking extent via completion of di-acetal formation through further reaction of the possible mono-acetal species with another PVA chain (Scheme 1). The swelling capacity gradually increased with reaction time up to 1 h. Meanwhile, the swelling capacity of the hydrogel synthesized after 1 h (i.e., 231 g/g) decreased appreciably to 20-30 g/g at longer reaction times, due to an enhanced crosslinking extent. No remarkable change in water absorbency was observed for still longer reaction times.
Swelling in Various Salt Solutions
Swelling capacity in salt solutions is of prime significance in many practical applications, such as water release systems in agriculture. In the present study, the swelling capacity was studied in various 0.15 M chloride salt solutions (Fig. 6). As shown in Fig. 6, the swelling ability of the hydrogels in the salt solutions is decreased compared to the maximum swelling value in distilled water (231 g/g).
Absorbency Under Load
In general, the absorbency under load (AUL) of hydrogels is measured in order to investigate their gel strength. Load-free absorbency is usually given in the basic scientific literature, whereas the AUL value is a parameter often reported in technical data sheets and patents. Thus, the study of this parameter is of great significance from an industrial point of view. The AUL of the hydrogels synthesized in this work in 0.15 M NaCl solution is shown in Fig. 7. This figure shows the AUL of the PVA-based hydrogels under various pressures as a function of swelling time. As shown, the minimum time needed to reach the highest AUL for each load was 40 minutes. After this time, the AUL values remained unchanged. In addition, the final AUL values decreased as the applied pressure increased.
Conclusions
In the present study, a PVA superabsorbent hydrogel was synthesized in aqueous solution using glutaraldehyde as the crosslinking agent. The swelling capacity of the synthesized hydrogels is affected by the crosslinker concentration: the swelling decreases with increasing glutaraldehyde concentration. The effects of PVA content, reaction time, and temperature on the swelling capacity were also investigated. Swelling measurements of the hydrogels in different salt solutions showed a swelling loss in comparison with distilled water, which can be attributed to the charge screening effect. Finally, the measurement of the absorbency under load of the optimized hydrogels showed that the AUL values diminished with increasing applied pressure.
Instrumental Analysis
The surface morphology of the gel was examined using scanning electron microscopy (SEM). Dried superabsorbent powder was coated with a thin layer of palladium-gold alloy and imaged in an SEM instrument (Leo, 1455 VP).
Materials
Poly(vinyl alcohol) with molecular weight of 50000 was obtained from Aldrich, Milwaukee, WI, USA and used without further purification. Glutaraldehyde (from Merck) was of analytical grade and was used without further purification.
Preparation of Hydrogel
An aqueous PVA solution was prepared by dissolving 3.0 g of PVA powder in 30 mL of deionized water and heating it at 85 °C for 10 h. Glutaraldehyde at various concentrations was added to the resulting solution as the cross-linking agent. After 60 min, the reaction product was allowed to cool to ambient temperature. The hydrogel was poured into excess non-solvent ethanol (500 mL) and kept for 24 h to remove absorbed water. The ethanol was then decanted and the product was cut into small pieces. Again, 100 mL of fresh ethanol was added and the hydrogel was stored for 24 h. Finally, the filtered hydrogel was dried in an oven at 50 °C for 10 h. After being ground with a mortar, the powdered superabsorbent was stored protected from moisture, heat, and light.
Swelling measurements using tea bag method
A tea bag (i.e., a 100-mesh nylon screen) containing an accurately weighed powdered sample (0.5 ± 0.001 g) with average particle sizes between 40-60 mesh (250-350 µm) was immersed entirely in distilled water (200 mL) or the desired salt solution (100 mL) and allowed to soak for 3 h at room temperature. The tea bag was then hung up for 15 min to remove excess fluid. The equilibrium swelling (ES) was measured twice using the following equation. The accuracy of the measurements was ±3%.
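The swelling equation itself was lost in extraction. In the tea-bag method it is conventionally expressed on a dry-weight basis, so the missing expression presumably has the form below (a reconstruction under that assumption, not a verbatim quotation of the paper):

```latex
ES \;(\text{g/g}) = \frac{W_{\text{swollen}} - W_{\text{dry}}}{W_{\text{dry}}}
```

where W_dry is the weight of the dry sample and W_swollen is the weight of the swollen gel after draining.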
Absorbency under load (AUL)
AUL was measured using a piston assembly allowing the addition of weights on top of the superabsorbent sample [22]. A macro-porous sintered glass filter plate (d = 80 mm, h = 7 mm) was placed in a Petri dish (d = 118 mm, h = 12 mm), and the weighed dried sample (0.5 ± 0.01 g) was spread uniformly on the surface of a polyester gauze located on the sintered glass. A cylindrical solid load (Teflon, d = 60 mm, variable height) was placed on the dry hydrogel particles such that it could slip freely in a glass cylinder (d = 60 mm, h = 50 mm). The desired load (applied pressure of 0.3, 0.6, or 0.9 psi) was placed on the hydrogel sample. Then, 0.9% saline solution was added so that the liquid level was equal to the height of the sintered glass filter. The whole set-up was covered to prevent surface evaporation and any change in the saline concentration. After 60 min, the swollen particles were weighed again, and the AUL was calculated according to Eq. (1). | 2017-10-10T23:13:00.294Z | 2013-07-01T00:00:00.000 | {
"year": 2013,
"sha1": "0185e0386d5d787caa77af6010bb163cb10a76c0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5267/j.ccl.2013.05.001",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0185e0386d5d787caa77af6010bb163cb10a76c0",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
65096562 | pes2o/s2orc | v3-fos-license | A Review on Gas Well Optimization Using Production Performance Models — A Case Study of Horizontal Well
This study considered solution methods to determine optimal production rates and lift-gas rates in order to optimize regular operational objectives. The foremost tools used in this research are offered as software platforms. Most of the optimization problems are solved using derivative-free optimization based on a controlled well performance analysis (PERFORM). In line with the production optimization goal of maximizing ultimate recovery at minimum operating expenditure, pressure losses encountered in the flow process between the wellbore and the separator are reduced. Nodal analysis is the solution technique used to enhance the flow rate in order to produce wells, identify constraints, and design corrective solutions. A hypothetical case is considered, and sensitivity analysis using IPR models for horizontal gas wells shows the effect on pressure and liquid drop-out. The gas lift method is economically valuable, as it produced an optimal economic water cut of 80 percent with a 2-4 MM scf/day rate of gas injection; thus, 1800-2000 STB/day of gas was produced.
Introduction
Oil production technology is the series of activities related to production or injection wells, often described by a performance or injection capacity indicator. Production engineers frequently deal with one or multiple wells at a given time and with the delivery of oil and gas from the well to the point of sale. Most importantly, the motivation often goes beyond economics, to accelerating production by increasing productivity or by injecting into wells. Optimization of production wells ensures that these wells and installations operate at maximum capacity to maximize productivity.
There are numerous adverse effects associated with low flowing bottom-hole pressure, such as scale precipitation, deposition of paraffin and asphaltene, and gas and water coning. It is therefore important from the outset to recognize that stimulation and an increase in the apparent productivity index will not automatically lead to an increase in the production rate of the well; instead, whether the improved productivity index is taken as an increase in production rate and/or a decrease in drawdown depends on the needs of each individual well. Consequently, the goal of production optimization is to increase productivity and improve the overall value of assets (short-term), while meeting all physical and economic/financial constraints.
In early optimization research, reservoir models and linear programming methods were presented. Aronofsky and Lee proposed linear programming models to optimize profit and plan the production of several similar reservoirs. The wellhead choke is usually chosen so that pressure fluctuations in the downstream pipeline do not have any impact on the well flow rate. To ensure this, the choke flow must be in critical flow conditions; in other words, the choke flow is at acoustic (sonic) velocity. For this condition to exist, the pressure in the downstream pipeline should be about 0.55 or less of the inlet pipe or tubing pressure. The flow rate in this situation is a function only of the upstream or tubing pressure [1].
An integrated approach to improving productivity through reservoir management balances the short-term production optimization goals with long-term reservoir development tasks, in order to have a more rational impact on field development. Well performance analysis plays a vital role in production management and in optimizing the performance of a gas well. The problems faced in this analysis can be divided into two types. The first concerns the behaviour of the well when designing the completion (with emphasis on the short term) and the effect of the initial production state on well productivity. The second concerns the long-term behaviour of the well; at this stage, changes in well productivity are taken into account and projected as the reservoir pressure declines [2].
This study considered solution methods to determine optimal production rates and lift-gas rates in order to optimize regular operational objectives.
Statement of Problem
At some point in the life of the well, recovery may no longer satisfy physical or economic constraints, and closure or shut-in of the well will be required. At this stage, corrective measures or changes will be made if the preliminary analysis shows that additional economic value can be created. The objectives of production optimization can be to improve the reservoir inflow efficiency or to reduce restrictions in the outflow performance. The result can be more production with a reduction in pressure drop/drawdown. As a rule, sand production and high water influx indicate the need to revitalize the downhole environment of the gas well.
To optimize field performance, it is necessary to understand the reservoir inflow pressure, the vertical lift in the wellbore, and the surface pressure at the facilities. Production optimization refers to the various measurements, analyses, modeling, prioritization, and implementation steps taken to improve gas and oil field performance.
The main objective of this study is to optimize well productivity by analyzing a producing well. The study emphasizes simple objective functions that optimize weighted daily flows.
Literature Review
In oil and gas fields, hydrocarbon production is often limited by the conditions of the reservoir, pipeline networks, fluid treatment plants, economic and safety considerations, or a combination of these. Field operators are faced with the task of developing optimal operational approaches to accomplish specific operational goals. The ultimate goal of almost all efforts to develop an oil and gas field is an optimal strategy for the development, management, and operation of the field. Optimizing production operations for certain fields can be an important factor if production volumes are to be increased while reducing production costs. Though it may be useful for individual wells to perform nodal analysis for prediction, large systems require a more complex method to accurately predict the production response of a complex system [3]. The interaction of flow between wells can play a significant part in some rate-allocation problems. In most cases, the rate-allocation problem is expressed as a general nonlinear constrained optimization and solved by sequential quadratic programming. Various formulations have been investigated by different researchers [4].
The application of optimization methods in the oil industry (upstream) was reported earlier, but it began to flourish in the 1950s.
Deliverability of Gas Well
Well deliverability is determined from the intersection of the inflow performance relationship (IPR) curve and the vertical lift performance (VLP) curve. The IPR reflects the conditions and constraints of the reservoir, while the VLP reflects the producing well.
In gas production, pressure drops occur as the reservoir fluid moves from the reservoir to the surface through the well, the production tubing, and the processing facilities [5]. This concept combines the flow in the reservoir, as represented by the well's IPR, with the tubing performance curve, which embodies substantially all of the pressure drop associated with the well tubing and connections. This combination brings the components of the production system together and can also be applied in the diagnosis, analysis, and identification of faulty or restrictive parts of the well system. This approach is known in the petroleum industry as well performance analysis, or nodal analysis.
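As a rough illustration of the nodal-analysis idea described above, the operating point of a well can be estimated numerically as the intersection of an IPR curve and a VLP curve. The sketch below uses simple, hypothetical curve shapes (a back-pressure-style IPR and a linear VLP) with made-up coefficients purely to show the calculation; it is not the PERFORM workflow or the field data used in this study.

```python
import numpy as np

# Hypothetical IPR (back-pressure form with n = 1): q = C * (P_res^2 - Pwf^2),
# solved here for the bottomhole pressure the reservoir delivers at a given rate q
def ipr_pwf(q, p_res=2900.0, c=1.1e-3):
    return np.sqrt(np.maximum(p_res**2 - q / c, 0.0))

# Hypothetical VLP: bottomhole pressure required by the tubing to lift rate q to the surface
def vlp_pwf(q, p_wh=300.0, k=0.15):
    return p_wh + k * q

rates = np.linspace(1.0, 8000.0, 4000)        # trial gas rates (Mscf/day, made up)
gap = ipr_pwf(rates) - vlp_pwf(rates)

idx = np.argmin(np.abs(gap))                  # operating point: where IPR and VLP intersect
print(f"Operating rate ~ {rates[idx]:.0f} Mscf/day at Pwf ~ {ipr_pwf(rates[idx]):.0f} psia")
```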
Well performance analysis is used not only for determining a specific well's IPR and tubing performance; it can also be used to test a number of different modification options. These options include the tubing diameter, the wellhead pressure, the type and size of choke, the perforation density, horizontal and complex wells, and hydraulic fracturing. If all options are properly taken into account, they can lead to economic optimization: the additional design cost can be balanced against the increase in well productivity or performance.
One of the main concerns of the oil and gas producer is how long a well can produce without requiring intervention. Intervention is costly and can alter any previous economic evaluation. Sometimes it may even be more economical to abandon the well and drill a new one, or simply move to another area.
Horizontal Wells
Horizontal wells are wells that enter the reservoir at approximately 90 degrees from the vertical and extend a lateral tunnel through the formation.
It has been found that not all reservoirs are good candidates for horizontal technology. Horizontal wells are suitable for thin deposits (less than 500 feet thick), deposits with lower productivity than vertical wells, narrow formations with both horizontal and vertical permeability, naturally fractured reservoirs, and reservoirs prone to water or gas coning. Horizontal wells are mostly drilled as an alternative to hydraulically fractured vertical wells. Brown and Economides [6] presented a series of studies comparing the characteristics of horizontal wells and fractured vertical wells.
A more advanced concept is that a horizontal well can be drilled in a particularly favorable direction, usually that of maximum horizontal permeability. In very anisotropic formations, this further tilts the solution in favor of horizontal wells.
Methodology
Production data, well details, and reservoir data for this research were collected from an offshore gas field well that has been completed (Table 1).
Well Performance IPR Models Used
Giger et al. [7] IPR model
Giger et al. [7] presented the first mathematical model for analyzing the productivity of horizontal wells intersecting fractures, in which flow in the rock matrix and in the fractures was formulated for short and long horizontal wells and then combined to obtain a radial flow equation for the whole flow path from the external boundary to the wellbore.
It applies to a reservoir at steady state and is used to calculate sand-face pressure and flow-rate pairs for isotropic and anisotropic reservoirs. For anisotropic reservoirs, the Muskat method is used to calculate an equivalent reservoir permeability and adjust the remaining parameters. The method can be applied to both oil and gas wells.
Jones et al. [8] IPR:
P_R − P_wf = a q_g + b q_g²
where a = laminar flow coefficient, b = turbulence coefficient, P_R = average reservoir pressure, P_wf = bottomhole pressure and q_g = gas flow rate.
This model, mostly applied to gas wells, is used to account for turbulence in a producing gas well. Jones et al. [8] can also be used in oil wells with high GOR.
Therefore, this model is suitable for reservoirs above the bubble point. The Vogel equation can be used to adjust the Jones equation below the bubble-point pressure for solution-gas-drive reservoirs.
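Given the Jones et al. relation in the quadratic form written above, the gas rate can be recovered from a measured drawdown by solving the quadratic for its positive root. The sketch below assumes that form; the coefficients a and b in the example call are hypothetical, not values from this study.

```python
from math import sqrt

def jones_gas_rate(p_res, p_wf, a, b):
    """Solve the Jones et al. deliverability relation
    p_res - p_wf = a*q + b*q**2 for the gas rate q (positive root)."""
    dp = p_res - p_wf
    if dp <= 0:
        return 0.0          # no drawdown, no inflow
    return (-a + sqrt(a * a + 4.0 * b * dp)) / (2.0 * b)

# Hypothetical coefficients: a (laminar term) and b (turbulence term)
print(jones_gas_rate(p_res=2900.0, p_wf=2800.0, a=0.05, b=2.0e-5))
```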
Results and Discussion
The offshore well data were utilized to analyze solution methods for determining optimal production rates. Decline curve analysis was applied to identify the natural gas production optimization in the horizontal well. Applying Giger's model, the results for the pressure effect in horizontal wells are presented in Figure 1, and those of Economides et al. in Figure 2; the cases examined were reservoir pressures of (1) 2900.0, (2) 2850.0, (3) 2800.0 and (4) 2750.0 psi with corresponding pressure drops of 76.7, 74.3, 71.9 and 69.5 psig.
Effect of pressure in horizontal wells using the Giger et al. IPR model
At the recommended solution point, using the Giger et al. IPR model, we observed a very high flow rate of 3508 bbl/D and an equally excessive pressure drop of 74.3 psig, which increased further at the completion intervals to 76.7 psig. However, a pressure drop of 71.9 psig is recommended to maintain an optimal production rate. Based on the system analysis, further reduction in pressure only leads to lower liquid drop-out and reduced flow rates.
Effects of pressure in horizontal wells using the Economides et al. IPR model
Sensitivity analyses were conducted to determine the effect of reservoir pressure in the horizontal well using the Joshi model (Figure 3) and the Renard and Dupuy model (Figure 4). Table 3 presents the results of the evaluation of some steady-state models using the horizontal well data.
Effect of reservoir pressure in horizontal wells using the Joshi IPR model
The result of the analysis shows that lowering the wellhead pressure to 100 psi is recommended if the desired production optimization is to extend the well's life up to a 70% water cut, which can optimize production. A possible alternative would be to change the size of the tubing, but this is not recommended, since it did not cause an increase in the rate of gas production. The gas lift method is economical in this case, since it produced an optimum economic water cut of 80 percent when gas was injected at the rate of 2 - 4 MMscf/day to produce 1800 - 2000 STB/day of gas.
Table 3. Comparative evaluation of horizontal well steady-state models.
Conclusion
This study considered the solution methods for determining optimal production rates and the rates of lift gas needed to optimize regular operational objectives. The foremost tools used in this research are offered as software platforms. The natural gas production optimization in the horizontal well gas field was identified by decline curve analysis, and the parameters necessary to optimize well performance have been identified in this study. The result of the analysis shows that lowering the wellhead pressure to 100 psi is recommended if the desired production optimization is to extend the well's life up to a 70% water cut, which can optimize production. A possible alternative would be to change the size of the tubing, but this is not recommended, since it did not cause an increase in the rate of gas production. Performing this analytical technique requires a sound grasp of reservoir engineering concepts. The gas lift method is economical in this case, since it produced an optimum economic water cut of 80 percent when gas was injected at the rate of 2 - 4 MMscf/day to produce 1800 - 2000 STB/day of gas.
Figure 1. Plot of differential graph using Giger et al. model.
Figure 2. Sensitivity analysis plot using Economides et al. model.
Table 1. Reservoir data for IPR models.
Table 2. Categories of horizontal IPR types. | 2018-12-30T03:39:53.503Z | 2018-01-30T00:00:00.000 | {
"year": 2018,
"sha1": "2b4be46a8a37964dd0ae4a2f2fccdb7c70332c93",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=82194",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2b4be46a8a37964dd0ae4a2f2fccdb7c70332c93",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
225880616 | pes2o/s2orc | v3-fos-license | Adoption of Climate-Smart Practices and the Effects on Production Efficiency of Maize Farmers in Northern Nigeria
Adoption of climate-smart practices (CSP) and the effects on production efficiency of maize farmers in Northern Nigeria was studied with primary data, collected with the use of structured questionnaires and the household interview method administered on educated and illiterate farmers, respectively. Descriptive and inferential statistics analysed the data and identified the constraints to adoption. The Ricardian net revenue technique estimated the costs and returns, while multinomial logit statistics interpreted the probability of CSP adoption or non-adoption based on farmers' characteristics. Marginal factor productivity analysis determined the influence of CSP adoption on production efficiency, while a Likert scale established the severity of adoption constraints. Kano's 168 CSP adopters' and 110 non-adopters' mean incomes were ₦140,198 and ₦54,622 at ₦2 and ₦0.8 returns, respectively. Nasarawa's 107 CSP adopters' and 106 non-adopters' incomes were ₦162,545 and ₦85,570 at ₦1.8 and ₦0.9 returns, respectively. Kano and Nasarawa CSP adopters' estimated pseudo-R² were 51% and 50% at 52% and 51% significant log likelihood ratios (LR), respectively. Access to credit, extension, educational level and income were significant and positively related to CSP adoption decisions at P≤0.05 and P≤0.01 in Kano and Nasarawa, respectively. The CSP adopters were technically efficient, and Z-values of 2.61 and 2.83 at P≤0.01 for Kano and Nasarawa, respectively, at a 35% coefficient of variation led to rejection of the hypothesis of no significant difference between the mean incomes of the two groups of farmers. High cost of inputs was the most severe constraint to CSP adoption. Thus, maize farmers are advised to join cooperative societies to enjoy cheaper production costs and increase production efficiency.
INTRODUCTION
In Nigeria, food scarcity is a major problem despite the abundance of natural and productive resources. The agricultural practice is subsistence, traditional and climate-based, with low production output (Central Intelligence Agency (CIA), 2014). Farmers in Northern Nigeria constantly seek new farm practices that are adaptable to climate change in order to maintain crop productivity. Climate-smart practice (CSP) is thus inevitable, as it refers to the precise way of farming, tilling, fertilizing, weeding, tending and harvesting of agricultural produce. Lipper et al. (2014) defined it as an approach by which new realities of transformation and re-orientation are carried out for agricultural development. According to McCarthy et al. (2012), the word ''smart'' has been defined as an acronym derived from the words specific, measurable, achievable, reliable and timely (SMART). Graziano da Silva (2016) assumed it to affect food security and the rapid transformation of farming and food systems, which need to be adopted in order to cope with critical food scarcity through increased food production and poverty reduction.
Nigeria's high population growth rate of about 2.7% has resulted in an increase in food consumption to about 150 kg of grains per person. The country is known to be the world's tenth largest producer of maize, at about 10.4 million tonnes annually (FAO, 2007; Philip et al., 2006). Maize output in Nigeria is low due to climate change, and the National Bureau of Statistics (NBS) (2016) estimated the Gross Domestic Product (GDP) to be about 78.43 billion Naira in 2015. Although most of the quantity consumed in Nigeria is known to be produced in Northern Nigeria, previous studies (Jatto et al., 2015; Ajao, 2011) did not establish how CSP adoption affects production efficiency in the area. This study, therefore, aimed to provide a lasting solution to food scarcity in Nigeria by bridging the gap caused by the paucity of research information in the area. This has been achieved by determining the level of awareness and adoption of CSP among maize farmers in Northern Nigeria; estimating the costs and returns of CSP adopters and non-adopters; identifying the farmers' socio-economic factors which influence the adoption of CSP and the constraints that limit its adoption among the maize farmers in the study area. Policy makers are expected to be guided by this information to know whether the existing CSP has been fully adopted and optimum efficiency achieved in the study area. Investors are expected to be guided by such policies to make investment decisions, while the rural poor and small-scale farmers are expected to adjust their production resources to improve maize output, earn more income and develop themselves. The study has also indicated areas for further studies.
Theoretical and Conceptual Framework:
Agriculture is very important to Nigeria's economy, but severe climate change has become a major economic concern. The adoption of climate-smart practice (CSP) ensures food security and environmental preservation against climatic destruction (Beddington et al., 2011). Repairing and mitigating adverse effects on agricultural productivity ensures food security, depending on the farmer's socio-economic characteristics. This must have led Terdoo and Adekola (2014) to argue that climatic change assessment in Africa requires an integrative sustainability concept in the research framework to reduce arbitrariness.
Agricultural production efficiency can be allocative (scale), technical or economic. Economic efficiency is the combination of minimum factors of production to obtain optimum output. Scale efficiency is the optimization of total output at the least cost of production in terms of size (Lee et al., 2011). Technical efficiency is the ability to obtain maximum output by utilizing a certain amount of technology (Ajetomobi and Abiodun, 2010). Climate change affects agricultural production efficiency at every stage, and its economic impact is felt through reduced farm output and incomes for farming households (FAO, 2016).
Conceptually, the Ricardian regression models revealed that increasing precipitation by 1 mm and decreasing temperature by 1 °C affect the net revenue of maize and sorghum significantly in Northern Ghana (Bawayelaazaa et al., 2016). This confirms the link between climate and crop revenue, as the model involves the use of both linear and quadratic terms for the climatic variables, which may indicate higher temperatures and variations in precipitation levels (Lipper et al., 2014). It has been found to be the most suitable approach for determining the net revenue of both groups of farmers. The multinomial logit regression equation enabled the differentiation of farmers into CSP adopters and non-adopters. Meanwhile, the quest to end hunger and reduce poverty in Nigeria requires the adoption of CSP. Although findings from some African countries that adopted CSP have indicated positive improvements in output (Terdoo and Adekola, 2014), this is yet to be proven with empirical data in Nigeria, hence the need for this study.
Study Area:
The study was carried out in Kano and Nasarawa States in the northern zone of Nigeria. The zone is located between longitudes 2°44' and 14°42' East and latitudes 2°27' and 14°00' North. It occupies about 773,373 square kilometres and is populated with about 52.4 million people (United Nations, 2018). Kano State represents the Sahel Savannah region and is made up of about 20,760 square kilometres, with 86,000 hectares of dry-season irrigation farmland, 75,000 hectares of fallow and grazing land and 1,754,200 hectares of ordinary farmland (Kano ADP, 2011). The State has 400-1,200 mm average rainfall per annum and a temperature range of 14.02 °C-32.03 °C. The citizens are mostly farmers, traders and artisans and own about 4 hectares of farmland per person on average (Shuaibu, 2018). Nasarawa State represents the Guinea Savannah region and is a cosmopolitan State which lies in the central part of Nigeria within longitudes 7°0'E-9°37'E and latitudes 7°45'N-9°25'N. It covers about 27,137.8 square kilometres and is populated with about 2,523,400 people (NBS, 2016). The State has rich fertile soil from cretaceous sand, silt, lime and iron stones and shale (Agidi et al., 2017). It has a rainy season of about 6-7 months, from April to October, with average annual rainfall of about 1,100-2,000 mm. Agricultural practices in both States are subsistence and traditional, with bush clearing and burning, which exposes the soil surface to erosion, drought and desert encroachment (Farauta et al., 2011).
Method of Data Collection:
Primary data collected for this study were mainly on the maize farmers' socio-economic characteristics and the production data from the selected farmers. Both structured questionnaires and a household interview schedule, for the literate and illiterate farmers respectively, were used by trained enumerators between June and September 2018 to collect the data. The production data included annual maize outputs (kilogrammes) and inputs such as farm size cultivated (hectares), maize seed (kilogrammes), labour (man-days), fertilizer (kilogrammes) and capital (Naira and Kobo). A multi-stage sampling technique enabled the selection of maize farmers with the desired characteristics, since there was no proper record of the target population. Ten percent of the registered sample frame, which came to 491 farmers (275 CSP and 216 non-CSP), made up the sample size for the study. This technique was straightforward, unbiased and cost effective.
Analytical Techniques:
Descriptive and inferential statistics, the Ricardian net revenue technique, multinomial logit and marginal factor productivity analysis were used in analyzing the data. These were used to analyze the awareness and adoption of CSP; estimate and compare the costs and returns of both groups of farmers; assess the influence of farmers' socio-economic factors on CSP adoption and determine the adoption or non-adoption of CSP, respectively. The expected value of the dependent variable Y is interpreted as the probability that a farmer with certain characteristics X will adopt CSP or not, and is scored 1 or 0. Marginal factor productivity analysis determined the influence of CSP adoption on maize production efficiency (allocative, technical and economic). The intensity of the constraints was measured with the use of a Likert scale (1 = very serious; 2 = serious; 3 = not serious; 4 = not a problem and 5 = no response). The results of the analysis were compared between the States studied and presented in tables.
Ricardian Farm Net Revenue Function:
This was used in estimating and comparing the costs and returns of CSP adopters and non-adopters. The net revenue model is:
R = P Q(X, F, Z, G) − Σ P_x X (1)
where R represents the net revenue per hectare; P is the price of maize; Q is the maize output (kg); X is the purchased input; F is the climate variable; G is the set of economic variables; Z is the farm size variable and P_x is the cost of input. Following Mendelsohn et al. (1994, 2003), the specified net revenue model is as shown in equation (2):
V = β₀ + β₁F + β₂F² + β₃G + β₄Z + U (2)
where V represents the net farm revenue; F is the climatic variable; G is the economic variables; β is the coefficient and U is the error term.
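As a rough illustration of how a reduced-form net revenue regression of the kind in equation (2) can be estimated, the sketch below fits linear and quadratic climate terms by ordinary least squares. All data below are synthetic and the variable names are illustrative; none of them come from the survey analysed in this paper.

```python
import numpy as np

# Hypothetical plot-level data: net revenue per hectare (V), a climate
# variable F (e.g. seasonal rainfall, mm), an economic variable G and farm size Z.
rng = np.random.default_rng(0)
n = 200
F = rng.uniform(400, 1200, n)
G = rng.uniform(0, 1, n)
Z = rng.uniform(0.5, 4.0, n)
V = 50 + 0.3 * F - 0.0002 * F**2 + 20 * G + 10 * Z + rng.normal(0, 5, n)

# Ricardian-style regression: V = b0 + b1*F + b2*F^2 + b3*G + b4*Z + u
X = np.column_stack([np.ones(n), F, F**2, G, Z])
beta, *_ = np.linalg.lstsq(X, V, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b3", "b4"], np.round(beta, 4))))
```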
Multinomial Logit regression equation:
The multinomial logit regression equation uses a linear probability model and estimates, in linear form, the discrete dependent variable Y according to the farmer's characteristics X. This can be translated into adoption (1) or non-adoption (0) of CSP. A non-linear estimation method was used, since the error term may not be normally distributed and may cause heteroscedasticity in the estimation of β with the ordinary least squares method. The logit model was employed for its cumulative logistic probability function, which is easier to determine and is:
P₁ = 1 / (1 + e^(−(a + Σ βᵢXᵢ)))
where e represents the base of natural logarithms (approximately 2.718); P₁ is the probability of CSP adoption; Xᵢ is the socio-economic characteristics of the i-th farmer; β is the regression coefficient and a is the constant term. Maximum likelihood estimation (MLE) of the logit model was applied because of the smallness of the data. The Statistical Package for the Social Sciences (SPSS) was used to analyze the data. The explicit form of the model is as expressed in equation (5).
ln[P₁ / (1 − P₁)] = a + β₁X₁ + β₂X₂ + β₃X₃ + β₄X₄ + β₅X₅ + β₆X₆ + β₇X₇ + β₈X₈ + β₉X₉ (5)
where X₁ is the age of the farmer; X₂ is the household size; X₃ is the educational level; X₄ is the farm size; X₅ is access to credit; X₆ is access to extension service; X₇ is fertilizer used; X₈ is the agrochemicals used and X₉ is the climatic variable (increase/reduction in rainfall (mm³)).
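A hedged sketch of estimating an adoption logit with the nine predictors above is given below (the paper reports using SPSS; statsmodels is used here only as an available open-source equivalent). The data generated are synthetic and every coefficient is illustrative, so the output does not reproduce the paper's estimates.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical farmer-level data mirroring X1..X9 in equation (5);
# Y = 1 if the farmer adopted CSP, else 0.
rng = np.random.default_rng(1)
n = 491
X = np.column_stack([
    rng.integers(20, 70, n),        # X1 age (years)
    rng.integers(1, 12, n),         # X2 household size
    rng.integers(0, 16, n),         # X3 years of schooling
    rng.uniform(0.4, 4.0, n),       # X4 farm size (ha)
    rng.integers(0, 2, n),          # X5 access to credit (0/1)
    rng.integers(0, 2, n),          # X6 extension contact (0/1)
    rng.uniform(0, 300, n),         # X7 fertilizer (kg)
    rng.uniform(0, 10, n),          # X8 agrochemicals (litres)
    rng.normal(0, 50, n),           # X9 rainfall change (mm)
])
index = -2 + 0.15 * X[:, 2] + 1.2 * X[:, 4] + 1.0 * X[:, 5]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-index))).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.params)      # a, beta_1 ... beta_9
print(model.prsquared)   # McFadden pseudo-R^2
```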
Production efficiency of maize farming in Northern Nigeria:
Marginal factor productivity (MFP) analysis was used to achieve this objective:
r = MVP / MFC
where r is the efficiency ratio; MVP is the marginal value product and MFC is the marginal factor cost. When r = 1, resources are used efficiently; r > 1 implies under-utilization of resources and r < 1 implies over-utilization of resources. Here = refers to equal to, > is greater than and < is less than.
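The ratio above is simple to apply per input once MVP and MFC are known. The short sketch below encodes that decision rule; the per-input figures in the example are hypothetical, not the study's estimates.

```python
def efficiency_ratio(mvp, mfc):
    """r = MVP / MFC: r == 1 efficient, r > 1 under-utilised, r < 1 over-utilised."""
    return mvp / mfc

def interpret(r, tol=0.05):
    if abs(r - 1.0) <= tol:
        return "efficient use of the resource"
    return "under-utilisation (use more)" if r > 1 else "over-utilisation (use less)"

# Hypothetical per-input figures (value of extra output vs. cost of the input)
for name, mvp, mfc in [("seed", 180.0, 120.0), ("fertiliser", 95.0, 100.0)]:
    r = efficiency_ratio(mvp, mfc)
    print(name, round(r, 2), interpret(r))
```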
Z-test Formula:
The Z-test was used in testing the null hypothesis and is expressed as:
Z = (X̄₁ − X̄₂) / √(σ₁²/n₁ + σ₂²/n₂)
where X̄₁ is the mean income of CSP adopters; X̄₂ is the mean income of non-adopters; σ₁² is the income variance of CSP adopters; σ₂² is the income variance of non-adopters; n₁ is the number of observations of CSP adopters and n₂ is the number of observations of non-adopters.
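A direct implementation of the statistic above is given below. The variances in the example call are illustrative placeholders (the paper reports the means and the Z-values, not the income variances), so the printed value is not the paper's result.

```python
from math import sqrt

def two_sample_z(mean1, mean2, var1, var2, n1, n2):
    """Z = (mean1 - mean2) / sqrt(var1/n1 + var2/n2)."""
    return (mean1 - mean2) / sqrt(var1 / n1 + var2 / n2)

# Means and sample sizes as reported for Kano; variances are illustrative only.
z = two_sample_z(mean1=140198, mean2=54622, var1=9.0e10, var2=4.0e10, n1=168, n2=110)
print(round(z, 2), "reject H0" if abs(z) > 1.96 else "fail to reject H0")
```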
Awareness of Climate Change and Adoption of the CSP in Maize Production in Northern Nigeria:
The result of the descriptive statistics used in analyzing the data on farmers' awareness of climate change and adoption of CSP in maize production is presented in Table 1. Tofa Local Government Area of Kano State has the highest percentages (34% and 28%) of maize farmers who were aware of and adopted CSP and of those who were not aware and did not adopt CSP in maize production, respectively. In Nasarawa State also, the majority of the CSP adopters (20%) and non-adopters (26%) were from Keffi LGA.
ii) Costs and returns of CSP adopters and Non-CSP adopters in maize production in Northern Nigeria:
Average costs and returns for both CSP maize adopters and non-adopters are presented in Table 2. Kano CSP adopters produced maize at a gross margin of about ₦149,711 against their non-CSP counterparts' gross margin of about ₦71,885; the CSP adopters' net farm income was about ₦140,198, while that of the non-CSP adopters was about ₦54,622. The CSP and non-CSP adopters gained about ₦2 and ₦0.8, respectively, for every ₦1 invested in maize production in Kano State. In Nasarawa State, the CSP adopters produced about ₦109,301 gross margin, while the non-CSP adopters produced about ₦88,288 gross margin in maize production. The net farm incomes of the CSP and non-CSP adopters were ₦162,545 and ₦85,570, respectively, which indicated that for every ₦1 invested, the CSP adopters gained ₦1.8 while the non-adopters gained ₦0.9. This result is in agreement with the findings of Farauta et al. (2011), where climate change adaptation measures in Northern Nigeria were found to be more beneficial than non-adoption of climate-smart practice.
Influence of Farmers' Socio-economic Factors on Adoption of CSPs Maize in the Study Area:
The socio-economic factors of the farmers studied in the two States in Northern Nigeria are shown in Tables 3a and 3b. The majority of the farmers from both Kano and Nasarawa States who adopted climate-smart practice in maize production were male (85% and 83%, respectively). The non-CSP adopters too were mostly male in both States. This indicates an imbalance in gender distribution in the agricultural sector, as the maize farmers were mostly male, who are known to be the sole owners of major agricultural production assets in Northern Nigeria. This result is in agreement with the findings of Onogwu et al. (2017), where male dominance in agriculture was reported to exist in Nigeria. The highest percentage of the CSP maize adopters and non-adopters from Kano State belonged to the age group of 41-50 years. This indicated that the farmers were still young and productively active. In Nasarawa State, the greatest percentage (38%) of the maize farmers were non-CSP adopters who belonged to the age grade of 51 years and above. They were also still relatively active, which could boost the increase in maize production. This result is in agreement with the findings of Okojie (2019), where the average age of Nigerian farmers was found to be 60 years. The majority of the farmers interviewed in the study area were married, which indicated their seriousness and emotional stability in maize production. This result agrees with the findings of Adeniyi and Ogunsola (2014). The greatest percentages (66% and 38%) of the Kano non-CSP adopters and CSP adopters indicated having fewer than 5 members in the household, while in Nasarawa State the majority (53% and 37%) of the CSP adopters and non-adopters indicated having 6-10 household members, respectively. This implied that family labour was readily available where the household members were of productive age, which could lead to increased maize production. This result is in compliance with the report of the National Bureau of Statistics (2013), which indicates that North-Central Nigerians have an average household size of fewer than 10 persons per household. An average number of the non-CSP adopters (46% and 51%) in Kano and Nasarawa States owned about 1.1-2.0 hectares of farmland, which suggests that they were mostly small-scale farmers. This agrees with the estimates of Mgbenka et al. (2015), who reported through the Federal Office of Statistics in 1999 that farm sizes of 0.4 to 1.01 hectares are grouped under the small scale and 1.01 to 3.03 hectares are grouped under the medium scale.
An average number of the maize farmers belonged to cooperatives and as such were able to get access to information and community assistance. This result complies with the findings of Akudugu et al. (2009), who found that farmers joined cooperative societies in order to increase their chances of acquiring credit for farm activities. The majority of the non-CSP farmers from both States indicated that they obtained credit for maize production through personal savings, while the fewest indicated that they acquired theirs through bank loans. The socio-economic factors of the farmers studied in the two States, such as access to extension services, reason for adoption of a particular practice, cropping system and farming experience, are presented in Table 3b. All CSP adopters from both Kano (100%) and Nasarawa (100%) States indicated having access to extension services, while the majority of them also indicated having adopted the practice for its high yield and were sole maize croppers. About 58.9% of the CSP adopters in Kano State had 11-15 years of farming experience, while an average (60.0%) number of the non-CSP adopters from Nasarawa State indicated having more than 16 years of farming experience. An average number (72% and 80%) of the CSP adopters and non-adopters, respectively, from Kano State indicated not having sufficient rainfall for maize production, while the majority (79% and 87%) of the CSP and non-CSP adopters, respectively, in Nasarawa State indicated having sufficient rainfall for maize production. This result is in agreement with the findings of Terdoo and Adekola (2014), where lack of a coherent climate mitigation approach and poor institutional structures were found to be detrimental to the adoption of CSP in Nigeria.
Influence of Farmers' Socio-economic Factors on Adoption of CSP in the Study Area:
The estimated coefficients of the socio-economic factors that influence the adoption of CSP and non-CSP maize production in Northern Nigeria are presented in Table 4. The estimated pseudo-R² for Kano and Nasarawa State farmers were found to be 50.5% and 50.4%, respectively. The log likelihood ratio (LR) statistics were found to be significant at 52.0% and 50.9% for Kano and Nasarawa States, respectively. This implied that the exogenous variables in the model jointly explained the decisions of the maize farmers to adopt CSP in maize production. Access to credit and income of both groups of farmers were significant and positively related to the farmers' decisions to adopt both CSP and non-CSP maize production at P≤0.05 and P≤0.01 for Kano and Nasarawa States, respectively. Age of the farmers and household size were significant but negatively related to CSP adoption at the 1% and 5% levels of probability for both groups of farmers in Kano and Nasarawa States, respectively. This implied that the older the farmers were and the larger the household size, the less likely the farmers were to adopt CSP or non-CSP maize production. Educational level was found to be significant and to positively influence the adoption of CSP in both States at P≤0.01, but to negatively influence non-CSP adoption in both States at the same level of probability. This means that the more educated the maize farmers were, the more they adopted CSP and the less they adopted non-CSP maize production. Access to extension services was found to be significant and to positively influence the adoption of CSP maize at the 1% level of probability in both States.
Influence of CSP Adoption on Production Efficiency (allocative, technical and economic) of Maize Production in the Study Area:
The influence of CSP adoption on production efficiency of maize production in the study area is as shown in Table 5.
Marginal value productivity (MVP), as the yardstick for measuring the efficiency of resource use at a given level of the production process, involves the comparison of the cost of the inputs and the value of the outputs. In both Kano and Nasarawa States, the CSP adopters in maize production recorded allocative, technical and economic efficiency, while the non-CSP adopters recorded inefficiency throughout the production process. Technical efficiency was observed to be highest for the CSP adopters in both States, with Kano recording 1.00 and Nasarawa recording 1.01. This implied that the CSP adopters were most efficient in the technology used in maize production, which is in agreement with the findings of Babatunde and Boluwade (2004), where it was found that increasing the level of resources used in crop production can lead to higher output. Both Kano and Nasarawa non-CSP adopters over-utilized the scale, technology and overall resource management, and thus recorded allocative, technical and economic inefficiency, respectively, in maize production. Therefore, for the optimal level of efficiency to be attained, the non-CSP adopters must reduce the amounts of resources such as maize seed, fertilizer and agrochemicals by increasing their scale of operation. Labour and farm tools ought to be reduced, while the overall management has to be adjusted to obtain efficiency. This result is in agreement with the findings of Ogunniyi et al. (2012), where over-utilization or under-utilization of agricultural production resources was found to be attributable to the cultivation of small farm sizes and the use of crude farming implements. To increase maize output, more land should be cultivated, which can be achieved if farmers are provided with modern farm tools and other production resources at affordable prices.
Statistical Difference between the Mean Incomes of both CSP and Non-CSP Adopters:
The test of statistical difference between the mean incomes of CSP and non-CSP maize adopters in Northern Nigeria was carried out with the Z-test, to assess the hypothesis which states that 'there is no significant difference between the mean incomes of CSP and non-CSP farmers'. The CSP adopters' mean incomes were estimated to be ₦140,198 and ₦162,545 in Kano and Nasarawa, respectively, while those of the non-CSP adopters were ₦54,622 and ₦85,570 in Kano and Nasarawa, respectively. The calculated Z-values were found to be 2.61 and 2.83 at the 1% level of significance for Kano and Nasarawa States, respectively. These were greater than the tabulated value (1.96), which means that there is a significant difference between the mean incomes of the two groups of farmers. The coefficient of variation of the mean incomes of the two groups of farmers was found to be 35%; thus, the hypothesis that there is no significant difference between the mean incomes of CSP adopters and non-CSP adopters is rejected.
Constraints that limit the adoption of CSPs among the maize farmers in the study area:
The constraints that limit the adoption of the CSP in maize production in Northern Nigeria is as presented in Table 7.
The majority of the maize farmers indicated high cost of production inputs to be the most severe constraint affecting the adoption of CSP in maize production in Northern Nigeria. This scored the highest weighted mean (2.64) and was thus ranked as the first constraint, while poor transportation was recorded as the least severe constraint and was ranked eleventh.
CONCLUSION AND RECOMMENDATION
The study of the adoption of climate-smart practices and the effects on production efficiency of maize farmers in Northern Nigeria revealed that the CSP maize adopters earned more net income and were more efficient in crop production than the non-adopters. The most severe constraint to CSP adoption was the high cost of production inputs. Maize farmers are advised to adopt CSP and join cooperatives to benefit from cheaper inputs and increase efficiency. | 2020-06-11T09:08:14.159Z | 2020-05-25T00:00:00.000 | {
"year": 2020,
"sha1": "52a88cfd69f0a166343cde38e03ba2ea55920327",
"oa_license": null,
"oa_url": "http://innovativejournal.in/index.php/jbme/article/download/2922/2476/5599",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e1c467ec522ad677fc75bd08c73b1982def4826d",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
67889103 | pes2o/s2orc | v3-fos-license | ANALYSIS OF CODE SWITCHING USED BY DEDDY CORBUZIER ON HITAM PUTIH PROGRAM IN TRANS7
Using two or more languages within an utterance, or what linguists call code switching, is fairly common, especially between two well-mastered languages, when speaking in public. Code switching is often done when someone is talking to another person or speaking in front of the public, particularly by a person who has mastered more than one language. Deddy Corbuzier frequently code switches when speaking or explaining a particular topic, making the topic more interesting. This study aims to analyze the types of code switching used by Deddy Corbuzier on the Hitam Putih program on Trans7 and to describe why Deddy Corbuzier code switches, based on the theories of Romaine and Hoffman. The writer also analyzes the most dominant type of and reason for Deddy Corbuzier's code switching. The study uses a descriptive qualitative method and a case study design to analyze and describe the types of and reasons for Deddy Corbuzier's code switching on the Hitam Putih program on Trans7. The results of the research conducted by the author show that Deddy Corbuzier's most dominant type of code switching is inter-sentential switching, compared with the other types, and the most dominant reason for his code switching is quoting somebody else.
INTRODUCTION
With the development of technology in the world, we receive information ever more quickly, especially about events in Indonesia, through electronic media, one of which is television. In addition to being a medium for watching entertainment, television stations in Indonesia broadcast many programmes that educate. This programme is one of the best in terms of inspiration, education and motivation, because through it we can come to know and understand the people around us, from the poorest to the richest in Indonesia.
The Hitam Putih program on Trans7 is very simple and very pleasant: the host, Deddy Corbuzier, presents a story about something or someone interesting from Indonesia, such as our life, occupation, education, culture, habits and much more, which can be discussed together. Deddy says that this program contains no lies; everything is true and really happens in our lives, especially in Indonesia. Deddy tells the audience that a TV show can be a source of learning for our lives, because the talk show demonstrates how to present a story, how to make it matter, and how to make the audience feel it. Sometimes we can hear and see the host talking with a guest in a different language, such as English. This can make the audience not understand what he says, but Deddy always translates into Indonesian to make the audience understand. We may often hear bilingual people switch their language, or code switch. According to L. Bloomfield (1935), bilingualism is the native-like control of two languages. Usually people who have mastered more than one language, like Deddy Corbuzier, code switch when speaking in public. "Code switching is a conversational strategy, the function of which is to express social meanings" (Gumperz, 1983).
The writer concludes that code switching is worth analyzing because it is a phenomenon that influences our community today. So, the purpose of this research is to find out the types of and reasons for code switching used by Deddy Corbuzier, and the dominant type of code switching used by him, on the Hitam Putih program on Trans7. According to Spolsky (1998), a bilingual is a person who has some functional ability in a second language. This phenomenon is usually called code switching in sociolinguistics. As cited by Romaine in Faiz Ahmad (2016), code switching can be defined as the use of more than one language, variety or style by a speaker within an utterance or discourse, and between different interlocutors or situations. Code switching occurs in bilingual and multilingual communities when a person switches from one language, variety or dialect to another. Romaine in Afina (2016) states that there are three types of code switching: 1. Intra-sentential switching. Intra-sentential code switching concerns alternation that occurs within a sentence or a clause boundary, sometimes mixing within word boundaries; it occurs within a sentence, clause, word or phrase. This type of code switch is usually done from a language that is considered foreign into the language used daily. 2. Inter-sentential switching. Appel and Muysken in Afina (2016) state that inter-sentential code switching is the code switch that involves movement from one language into another between sentences. This type of code switch is usually marked by starting from a language that is often used and then switching to a foreign language. Inter-sentential switching can serve to emphasize a point made in another language in the conversation.
Tag Switching
Tag switching is the insertion of a tag or words from another language, for example when someone who daily uses Indonesian inserts English words when speaking. According to Hoffman (1992), there are a number of reasons for a bilingual person to switch languages: talking about a particular topic, quoting somebody else, showing empathy about something, interjection, repetition used for clarification, intention of clarifying the speech content for the interlocutor, and expressing group identity.
METHOD
According to Sink in Kaswan (2017), method is a style of conducting research work which is determined by the nature of the problem. In this research the writer uses a qualitative method, because it analyses the data, in the form of utterances, descriptively. Qualitative research is used to describe and analyze the code switching used by Deddy Corbuzier on the Hitam Putih program on Trans7.
The research design used is a case study. According to Mitchell in Rhee (2004), a case study is a detailed examination of an event (or series of related events) which the analyst believes exhibits the operation of some identified general theoretical principles. The case study is intended to study intensively the background and current state of affairs of an ongoing event, as well as the environmental interaction of a particular social unit. The case study is used to describe and analyze the code switching used by Deddy Corbuzier on the Hitam Putih program on Trans7. The writer analyzed the types of code switching used by Deddy Corbuzier on Hitam Putih based on data obtained from YouTube. In addition, the writer analyzed the reasons Deddy did the code switching.
The writer used a human instrument in this research because the writer is the key instrument, actively and directly involved in data collection and data analysis. In qualitative research, the research instrument is the researcher himself; in other words, the research tool is the researcher. According to Sugiyono (2006), the researcher plays the role of key instrument in the process of qualitative research.
The instrument used by the researcher to collect the data is thus the human instrument. The writer took several steps in collecting the data. First, the writer opened YouTube on the website. Second, the writer searched for videos of Deddy Corbuzier on the Hitam Putih program on Trans7. Third, the writer chose and identified some videos of Deddy Corbuzier on Hitam Putih and downloaded about 10 videos from YouTube. Fourth, the writer watched the videos, listened to them and understood the topic of each episode. Fifth, the writer transcribed the utterances and also selected and classified the data. Finally, the writer arranged the data systematically in accordance with the research questions.
To analyze the data the writer used several steps. The writer chose the Deddy Corbuzier videos in which he often code switches. Then, the writer identified every instance of Deddy Corbuzier's code switching and made notes. After that, the data obtained were classified into several categories according to the purpose of the research, that is, the types of code switching and the reasons Deddy Corbuzier code switches. Finally, the writer identified the type of code switching that Deddy Corbuzier most often uses and drew conclusions based on the data analyzed.
Results
The results of the analysis of the code switching used by Deddy Corbuzier are presented in two tables, obtained from analyzing the 10 videos that the writer collected from YouTube. Table 1 describes the types of code switching used by Deddy Corbuzier: from the 10 videos, Deddy performs intra-sentential switching twice (data 10 and 10.1), inter-sentential switching four times (data 10.2, data 1, data 4 and data 10.3) and tag switching twice (data 9 and 10.4). Table 2 explains the reasons Deddy Corbuzier code switches. From the 10 videos, the most dominant reason for Deddy Corbuzier's code switching is quoting somebody else.
Discussion
After analyzed and classified the data based on the types and the reason do code switching, it is clear that Deddy Corbuzier utterances show types and the reason do code switching. There are three types of code switching found in this study : inter-sentential switching, intrasentential switching, and tag switching by Romaine in (Afina, 2016) 1. Intra-sentential switching : Example 1 / Data 10 "I will come back to hitam putih" The words " I will come back " as the English word are mixed with Indonesia in sentence. So The sentence of example 1 / data 10 above is intra-sentential. Example 2 / Data 10.1 "So you want make you need to learn English, seperti itu"?
The sentence "So you want make need to learn English" as the English sentence are mixed with Indonesia "seperti itu" as verification about something. So the sentence of example 1 / Data 10.1 above is intra-sentential. 2. Inter-Sentential : Example 1 / Data 1 "Mereka juga punya acara sendiri di Trans7 15 Juni hari sabtu jam 9 pagi judulnya JKT 48 mission. When Deddy Corbuzier said the title of the film he switched code from indonesia to English. Mereka juga punya acara sendiri di Trans7 15 Juni hari Sabtu jam 09.00 WIB as the Indonesia sentence are mixed English "JKT48 mission". So the sentence of example 1/ Data 5 above is Inter-Sentential. Example 2 / Data 10.2 "Padahal itu British, dari mana kamu belajar itu "? When Deddy Corbuzier said the word "british" he switched to foreign language. So the sentence is inter-sentential. Example 3 /Data 4 "Karena dengan profokasi itu sengaja orang-orang menyebarkan berita informasi hoax". Example 4 / Data 10.3 "Apa yang loe dapat dari viral"? The sentence of example 3 and 4 is Inter-sentential because Deddy switched from Indonesia to English. 3. Tag Switching : Example 1/Data 9 "Kalau kita kan berbicara tentang behavior yang diguilt sama humanity because your behavior seperti itu"? Example 2 / Data 10.4 "Berarti loe habis ini dapat job banyak dubbing film yah"?
The sentences of examples 1 and 2 are tag switching because Deddy inserts a tag in one language into an utterance which is otherwise entirely in the other language.
According to (Hoffman, 1992) there are a number of reasons for bilingual person to switch their language that is : 1) Talking about a particular topic : Example 1 / Data 8 "Pendidikan penting, harus pendidikan harus, you will do that also. Banyak orang mengatakan that's never to all to you try thing" From the data above is data 8. The writer analyzed Deddy Corbuzier when explain a topic. When he explains the topic he sometimes switches code to English, for example from data 8 Deddy Corbuzier explains the importance of education with more than one language. When explaining something sometimes we have our own thinking and the way of delivery is different based on the knowledge . 2) Quoting somebody else : Example 1 / Data 2 "Make the rest of your life, Become the best of your life" Example 2 / Data 3 "Don't limit yourself" Example 3 / Data 7 "Find yourself and be that" From the data above are data 2, 3 and 7. The writer analyzed every last episode Hitam Putih Deddy always gives a quote. When giving a quote He always switches the code to English and He always explains the quote with more than one language. So when we hear it always touch the heart, amazed and make us motivated. According to (Hoffman, 1992) suggested that "people sometimes like to quote a famous expression or sying of some well-known figures". 3) showing emphaty about something (express solidarity ) : Example 1 / Data 7.1 "Jono, thank you for coming here " Example 2 / Data 1 "the amazing" From the data above. The writer analyzed when Deddy express his feelings to others. Sometimes, he switch code to English for example in data 7.1 and 1. He expressed his gratitude to Jono using English. In data 1, he express his feelings of admiration using English. 4) interjection (inserting sentence filler or sentence connectors ) : Example 1 / Data 10.4 "Okay, kalau begitu silahkan perform sekarang"! Example 2 / Data 3 "So for now hidup kita tidak panjang" Deddy Corbuzier always inserts English when he speaks. For example in data 10.4 and data 3, he uses interjection in English is okay and so for now. Of course, a lot of interjection are always used by Deddy on the sidelines of the conversation, because we know Deddy Corbuzier mastered the English language. Not only Deddy anyone who knows English, Sometimes, he insert sentence filler or sentence connector. 5) Repetition used for clarification : Example 1 / Data 4 "Hoax, kamu tahu hoax ? Hoax adalah berita yang dibuat-buat" From the data above. The writer analyzed Deddy Corbuzier repetitions used for clarification. For the example in data 4, he repetition "hoax" for clarification. 6) Intention of clarifying the speech content for interlocutor : Example 1 / Data 10.5 "So, you want make you need to learn English "? Example 3 / Data 5 "Mungkin saya akan tanyanya seperti ini, satu pertanyaan why"? From the data above. The writer analyzed Deddy Corbuzier intention of clarifying the speech content for interlocutor when he speaks. For the example In data 10.5 and 5 he ask for clarification something by using English. 7) Expressing group identity : Example 1 / Data 9 "Saya dulu itu, dari dulu ng'fans. I think that is amazing group" Example 2 / Data 8.1 "and thus, maybe i don't care about that " From the data above , writer analyzed Deddy corbuzier when he expresses his feelings a group identity. In data 9 he expresses his admiration to a group. In data 8.1 he expresses no regard to a group. 
When we talk about a group, we sometimes use another language to sound more expressive and animated, and Deddy Corbuzier does this too.
CONCLUSION
In relation to the title of this research, the analysis of the code switching used by Deddy Corbuzier on Hitam Putih on Trans7, and the aim of finding the types of and reasons for his code switching and the dominant type, the researcher reached several conclusions. First, Deddy Corbuzier has mastered English, as shown by his constant use of two languages, Indonesian and English, when hosting the Hitam Putih show, especially when a foreign guest star appears. Second, the types of code switching used by Deddy Corbuzier, following Romaine in Afina (2016), are intra-sentential switching, inter-sentential switching and tag switching. The most frequent type is inter-sentential switching, because he does a lot of code switching from Indonesian to English. Third, there are seven reasons Deddy Corbuzier code switches, based on Hoffman (1992): talking about a particular topic, quoting somebody else, showing empathy about something, interjection, repetition used for clarification, intention of clarifying the speech content for the interlocutor, and expressing group identity. The most dominant reason is quoting somebody else, because at the end of each Hitam Putih episode, Deddy Corbuzier always quotes somebody else. | 2019-03-06T14:02:28.084Z | 2018-06-30T00:00:00.000 | {
"year": 2018,
"sha1": "dfe98ec53d4a5a17b20a66b8e066cfa6141641ce",
"oa_license": "CCBYSA",
"oa_url": "https://journal.ikipsiliwangi.ac.id/index.php/project/article/download/1270/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "971804a073e9a85fc6570f9f0e05af0e09bf44bf",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
115169569 | pes2o/s2orc | v3-fos-license | Categorifying Coloring Numbers
Coloring numbers are one of the simplest combinatorial invariants of knots and links to describe. And with Joyce's introduction of quandles, we can understand them more algebraically. But can we extend these invariants to tangles -- knots and links with free ends? Indeed we can, once we categorify. Starting from the definition of coloring numbers, we will categorify them and establish this extension to tangles. Then, decategorifying will leave us with matrix representations of the monoidal category of tangles.
Topological Quantum Computation and Tangle Representations.
The rise of topological quantum computation as a method to provide fault-tolerance for quantum computers [13,9,18] brings with it the need to turn knot theory into representation theory. Every computation is actually approximating a topological invariant of the knotted paths anyons follow, and every knot invariant should give a quantum computer.
But we cannot simply consider these as invariants of knots. Computations take place through time, and we must be able to understand what happens in the first half of a computation as separate from what happens in the second half. When we consider less than the complete run of a topological quantum computer we do not find neatly knotted paths of anyons, but rather a loose collection of tangled paths with free ends hanging out at the beginning and end of the computation.
Thus we must consider tangles [10] as a natural generalization of knots and links, and a simpler one for the purposes of topological quantum computation. To describe a topological quantum computer corresponding to a tangle we must select one transition matrix for each of four simple generating tangles, subject to a short list of conditions. That is, we must define a matrix representation of the category T ang of tangles. This specifies not only the evolution of the computer's state as we move anyons around each other, but also the initial conditions and the measurements to be performed as we pair them off.
And so quantum computation requires us to consider the representation theory of tangles, and to think of knot invariants as restrictions of these representations.
1.2. Colorings and Quandles. In this paper we will lay out this picture for a particularly simple combinatorial invariant of knots and links: the number of colorings of a link by a given involutory quandle.
Colorings of knot and link diagrams go back to Fox [7], who asked if we can color the arcs of a diagram red, green, and blue, so that at each crossing either one color appears or all three do. More generally, in how many ways can we manage this? It turns out that this number of colorings depends only on the knot type and not on the particular diagram.
Quandles were introduced by Joyce [12] and Matveev [14] (under the name, "distributive groupoids") as a tool for studying oriented knots and links. The special case of involutory quandles first appeared as "keis" [23]. These fill the same role for unoriented links that general quandles do for oriented links.
The connection between quandles and colorings is that a coloring is essentially a homomorphism of quandles [11]. First, the set of colors {red, green, blue} can be given the structure of an involutory quandle. Then, to an unoriented link diagram we can assign a "fundamental" involutory quandle encoding exactly those relations demanded by the diagram's crossings. Fox's colorings, then, are homomorphisms from this fundamental involutory quandle to the quandle of colors. Replacing this target quandle with other involutory quandles gives a rich stock of invariants to investigate.
The framework of quandles has been extended to include a cohomology theory analogous to that of groups [6,5]. Link colorings by various sorts of quandles have also been extensively studied [16,19,21,20,15,17,8]. However, these invariants must be extended to tangles for our purposes! Our first step will be to "categorify" the coloring number invariants by considering instead the set of colorings of a given diagram. It is essential at this point to note that this set is not invariant under the Reidemeister moves -only its cardinality is. This leads us in passing to define it as an example of a link (or tangle) "covariant".
Next we extend our definition to cover tangles by introducing the category of spans, as defined by Bénabou [4]. We find that defining the colorings of a tangle to be a span of sets gives us exactly the handles we need to compose them properly, and to define colorings as a functor on the category of tangles.
Finally, we "decategorify" our spans to find matrices [2]. This gives us our sought-after matrix representation of the category of tangles. When we regard a link as a tangle, our representation will give us a 1 × 1 matrix whose single entry is the old number of colorings.
Acknowledgements. I am deeply indebted to the input and advice of John Baez and J. Scott Carter on the preliminary versions of this paper, and to Sam Lomonaco and Louis Kauffman in the development of these ideas.
Quandle Coloring Numbers
2.1. Quandles. A "quandle" is an algebraic structure consisting of a set Q and two binary operations ▷ and ▷⁻¹. These satisfy the three conditions
Q1. For all a ∈ Q, a ▷ a = a.
Q2. For all a, b ∈ Q, a ▷ (a ▷⁻¹ b) = b = a ▷⁻¹ (a ▷ b).
Q3. For all a, b, c ∈ Q, a ▷ (b ▷ c) = (a ▷ b) ▷ (a ▷ c).
As is usual for algebraic structures, we have a notion of a "quandle homomorphism" f : Q 1 → Q 2 , which is simply a function from the underlying set of Q 1 to that of Q 2 which preserves the two quandle operations. We then have the category Quan of quandles and quandle homomorphisms, which will feature prominently in our discussion.
It is useful to keep the following quandles in mind as examples. Given any group G, we have the conjugation quandle Conj(G) with the same underlying set as G. We define the operations by conjugation within the group:
a ▷ b = a b a⁻¹    a ▷⁻¹ b = a⁻¹ b a
If G is abelian, then the operations in Conj(G) are trivial. But we do have another interesting quandle structure. The dihedral quandle D(G) also has the same underlying set as G, but we now define the two operations:
a ▷ b = a ▷⁻¹ b = 2a − b
This quandle satisfies an additional condition
QInv. For all a, b ∈ Q, a ▷ (a ▷ b) = b.
When this condition is satisfied, we say the quandle is "involutory".
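To make these examples concrete, here is a minimal computational sketch (not part of the original paper) that encodes the dihedral quandle on Z_n and checks, by exhaustion over a finite set, the conditions Q1, Q3 and the involutory condition written above. The function names are ours.

```python
def dihedral_op(n):
    """Quandle operation of the dihedral quandle D(Z_n): a op b = (2a - b) mod n.
    The operation is self-inverse, so the quandle is involutory."""
    return lambda a, b: (2 * a - b) % n

def is_involutory_quandle(elements, op):
    """Exhaustively check Q1, Q3 and the involutory condition a op (a op b) = b."""
    elems = list(elements)
    q1 = all(op(a, a) == a for a in elems)
    qinv = all(op(a, op(a, b)) == b for a in elems for b in elems)
    q3 = all(op(a, op(b, c)) == op(op(a, b), op(a, c))
             for a in elems for b in elems for c in elems)
    return q1 and qinv and q3

print(is_involutory_quandle(range(5), dihedral_op(5)))  # True
```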
2.2. Colorings. Given an unoriented knot or link diagram and an involutory quandle X, we color the diagram by assigning an element of X to each arc of the diagram. When an arc with color a meets an overcrossing arc with color b, the arc on the other side must be colored b ▷ a, as in figure 1.
Figure 1. Coloring arcs at a crossing
Notice here that it doesn't matter which undercrossing arc we regard as coming in and which we regard as going out of the crossing, because we are using an involutory quandle. The axioms QInv and Q2 tell us that
b ▷ (b ▷ a) = a,
so the same constraint is imposed either way. As it turns out, the number of colorings of a diagram for a given link by a given involutory quandle is independent of which diagram of the link we use. Indeed, given a coloring of a link diagram, we get a unique coloring of any link diagram related to it by a Reidemeister move. In fact, the three quandle axioms exactly correspond to the three Reidemeister moves, as indicated in figure 2.
Thus we have the Theorem 2.1. For any involutory quandle X, the number of colorings of an unoriented link diagram by X is an invariant of unoriented links.
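To see the coloring count in action, the brute-force sketch below (our illustration, not the paper's) enumerates all assignments of quandle elements to arcs and keeps those satisfying the crossing relations. The trefoil crossing data are written down by hand; with the dihedral quandle on Z_3 the count is 9, the familiar number of Fox 3-colorings of the trefoil (three constant colorings plus six using all three colors).

```python
from itertools import product

def count_colorings(num_arcs, crossings, colors, op):
    """Count arc-colorings such that at each crossing (over, under_in, under_out)
    we have color[under_out] == op(color[over], color[under_in])."""
    return sum(
        1
        for assignment in product(colors, repeat=num_arcs)
        if all(assignment[out] == op(assignment[over], assignment[inn])
               for over, inn, out in crossings)
    )

# Trefoil diagram: arcs 0, 1, 2; each crossing listed as (over, under_in, under_out).
trefoil = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]
op3 = lambda a, b: (2 * a - b) % 3   # dihedral quandle on Z_3 (Fox 3-coloring rule)
print(count_colorings(3, trefoil, range(3), op3))  # 9
```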
The Fundamental Involutory Quandle.
Given an unoriented link diagram, we can define its fundamental involutory quandle [24]. This is a quandle which contains exactly the relations forced by the crossings in the diagram. It is, in a sense, "universal" for colorings.
We generate a free quandle [12] on the set of arcs in the diagram K. We then impose a relation for each crossing. If generators a and c meet at the overcrossing generator b, we add the relation c = b ▷ a. Once these relations are added, the result is the fundamental involutory quandle Q(K).
A coloring of the diagram K by the quandle X assigns to each arc of K an element of X. But these arcs are the generators of Q(K). Further, the relations defining Q(K) are enforced by the definition of an X-coloring. Thus an X-coloring of the link diagram K is exactly the same as a quandle homomorphism in hom_Quan(Q(K), X).
When we apply a Reidemeister move to turn the diagram K₁ into the diagram K₂, the fundamental involutory quandle doesn't stay the same. The set of arcs in K₂ is not the same as the set of arcs in K₁, and there are different relations imposed by the different crossings. However, we do have the Theorem 2.2. If link diagrams K₁ and K₂ are related by a Reidemeister move, then there is an isomorphism Q(K₁) ≅ Q(K₂).
Proof. If we refer to figure 2 we can see the proof. For example, let's say that K₁ is on the left side of a Reidemeister II move, while K₂ is on the right.
The labels in the middle row of figure 2 describe a coloring of K₂ using the quandle Q(K₁), or equivalently a coloring of K₁ using the quandle Q(K₂). Thus we can define two homomorphisms of quandles: f ∈ hom_Quan(Q(K₂), Q(K₁)) and g ∈ hom_Quan(Q(K₁), Q(K₂)). These are clearly inverses of each other, establishing the isomorphism.
In particular, this isomorphism gives a bijection between the sets of colorings hom_Quan(Q(K₁), X) and hom_Quan(Q(K₂), X), which reestablishes the invariance of coloring numbers.
It is important to note at this point that these sets of colorings are not the same set. They are merely isomorphic as sets, rather than identical. Therefore the set of colorings is not an invariant of the knot type. Only its cardinality is invariant. We must now lay out a language in which to talk about exactly these details.
Categorification
Categorification is, simply put ... the process of finding category-theoretic analogues of set-theoretic concepts by replacing sets with categories, functions with functors, and equations between functions by natural isomorphisms between functors, which in turn should satisfy certain equations of their own, called 'coherence laws'. [3] More to the point, we want to take things we'd called "identical" and see them as merely "equivalent".
In the case at hand, we're considering a knot to be an equivalence class of knot diagrams under the Reidemeister moves. Instead, we'd like to think of link diagrams as the objects of a category KDiag. The morphisms will be sequences of Reidemeister moves. Since any such move can be reversed, this category of link diagrams forms a groupoid. Now we can recast theorem 2.1 as follows: Theorem 3.1. For any involutory quandle X we have a functor Col X from the groupoid KDiag to the set of natural numbers, considered as a category with no non-identity morphisms.
Proof. To any diagram we associate the number of X-colorings. This defines the functor on objects.
Since every morphism is a composite of Reidemeister moves, we just need to define the functor on the Reidemeister moves to define it on all morphisms. But we know that under a Reidemeister move the number of X-colorings remains the same, so to any move between two diagrams we can associate the identity morphism on the (common) number of colorings.
We can also categorify the value of our invariant. Instead of considering how many colorings a given diagram has, we should instead consider the set of colorings itself. We further refine theorem 3.1 to state: Theorem 3.2. For any involutory quandle X we have a functor Col X : KDiag → Set which associates to any link diagram K the set of X-colorings of K.
Proof. Indeed, we can now see theorem 2.2 as asserting the functoriality of the fundamental involutory quandle construction. That is, to a sequence of Reidemeister moves connecting two link diagrams we get an isomorphism of fundamental involutory quandles. Then we can define Col X (K) = hom Quan (Q(K), X). Thus a sequence of Reidemeister moves connecting two link diagrams gives an explicit bijection between the sets of X-colorings. Since the sets are changing as we change the diagram, it no longer seems appropriate to call our functor a "link invariant". Instead, we will make the following definition: Definition 3.3. A link covariant is a functor from the groupoid KDiag to any other category. If the image of each morphism is an identity morphism, we call the functor a link invariant.
Thus the fundamental involutory quandle of a knot diagram is a covariant, as is the set of X-colorings for any involutory quandle X. Many other well-known "invariants" are actually covariants under this definition, like the knot group given by the Wirtinger presentation [22].
Tangles
4.1. The 2-category of Tangles. Now that we've categorified our link invariant, we have enough breathing room to truly extend its domain of definition. Specifically, we want to color tangle diagrams.
Topologically, a tangle is like a knot or a link embedded in a cube, but we now allow arc components with their ends running to marked points on the top and bottom of the cube. These tangles are known to form a monoidal category Tang. The objects of this category are the natural numbers, and a morphism from m to n is a tangle with m endpoints on the bottom of its cube and n endpoints on the top.
If we have a tangle from n 1 to n 2 , and another tangle from n 2 to n 3 , we can stack the second cube on top of the first and splice together the n 2 endpoints in the middle. This defines our composition. The monoidal product of two objects is their sum as natural numbers, while the monoidal product of two tangles is given by stacking their cubes side-by-side.
Just as for knots and links, tangles can be described by tangle diagrams. Ambient isotopies of tangles are again equivalent to sequences of Reidemeister moves. This leads to a well-known presentation of Tang as a monoidal category by generators and relations [10]. We read the generator X + as a right-handed crossing, X − as a left-handed crossing, ∪ as a local minimum in the tangle diagram, and ∩ as a local maximum. The relations T 1 , T 2 , and T 3 then encode the three Reidemeister moves, while T 0 and T 0 ′ handle the interaction of local maxima and minima with each other and with crossings.
As we did before, let's categorify this picture. Instead of identifying two tangle diagrams if they are related by a Reidemeister move (or one of the new "topological" tangle moves), let's just consider them to be equivalent.
That is, we consider a (strict) monoidal 2-category whose objects are again the natural numbers, and whose morphisms are built from compositions and monoidal products of the four generating tangles. Now instead of imposing the five relations, we add 2-isomorphisms to relate any tangle diagrams that would be identified by the relations. It is this 2-category that we will refer to as Tang.
In analogy with definition 3.3 for links, we introduce the notion of a tangle covariant: a 2-functor from the 2-category Tang to any other 2-category, which we call a tangle invariant if it sends every 2-morphism to an identity 2-morphism. The straightforward approach now is to define a coloring of an unoriented tangle diagram by an involutory quandle X exactly as we did for link diagrams.
We assign an element of X to each arc and subject these assignments to restrictions at crossings just as before. This indeed gives a set of X-colorings, but there is no way to compose two of these sets as morphisms in some category. We need to extend our naive notion of the set of tangle colorings and give it "handles" that we can use to compose them.
In a category C with pullbacks, a span from A to B is a diagram A ← F → B, and we compose two spans by pulling back over their common foot; taking the objects of C, spans as morphisms, and maps of spans as 2-morphisms gives a 2-category Span(C). This composition is not quite associative, but it's easily verified to be associative up to a unique 2-isomorphism, which gives the associator for the 2-category.
There are a few facts about the span construction which will be useful to us. [1] Theorem 5.1. Given categories C and D with pullbacks and a functor F : C → D preserving them, there is a 2-functor Span(F ) : Span(C) → Span(D) defined by applying F to all parts of a span diagram.
Theorem 5.2. If C is a monoidal category such that the monoidal product preserves pullbacks, then Span(C) is a monoidal 2-category.
Dually, given a category C with pushouts we can define the 2-category CoSpan(C) of cospans. A cospan diagram is like a span diagram, but with the arrows pointing in instead of out, and we compose them by pushing out a square rather than pulling back, but otherwise everything we've said about spans holds for cospans.
Coloring Spans.
The category Set of sets has fibered products, which act as pullbacks, and so we have a 2-category Span(Set). The two functions out to the side of the central set in a span will provide us with exactly the handles we need to compose sets of colorings. Now we can extend theorem 3.2 to: Theorem 5.3. For any involutory quandle X we have a 2-functor Col X : Tang → Span(Set). On an object n of Tang we define Col X (n) = X n , the set of n-tuples of elements of X.
For a tangle diagram T : m → n from m free ends to n free ends we define the span X m ← Col X (T ) → X n , where the arrow on the left is the function sending a coloring of T to the coloring it induces on the lower endpoints of the tangle, and the one on the right is the similar function for the upper endpoints.
The 2-functor is defined on 2-morphisms by the diagrams in figure 2, as in theorem 3.2.
Proof. The main thing to check here is that composition of coloring spans really does reflect composition of tangles. But given a composite tangle T 1 • T 2 , a coloring in Col X (T 1 • T 2 ) is exactly a coloring of T 1 and a coloring of T 2 that agree on the endpoints we splice together to compose the tangles. This is exactly the definition of the fibered product of Col X (T 1 ) and Col X (T 2 ) over the colorings of the spliced endpoints.

Notice what happens to this picture when we consider a link as a tangle from 0 to 0. Both sides of the span become empty products (singletons) and the functions in the span become trivial. What remains is the old set of link colorings.
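The fibered-product composition in this proof is easy to mimic concretely. The sketch below (an illustration, not the paper's construction) represents a coloring span as a set of colorings together with its two restriction maps, and composes two such spans by keeping exactly the pairs that agree on the spliced endpoints.

```python
def compose_spans(span1, span2):
    """Compose two coloring spans by a fibered product (pullback) of sets.

    A span is modeled here as a tuple (colorings, restrict_bottom, restrict_top),
    where the restriction maps send a coloring of the tangle to the tuple of
    colors it induces on the bottom or top endpoints.  Composing a span for
    T1 : m -> l with one for T2 : l -> n keeps exactly the pairs of colorings
    that agree on the l spliced endpoints.
    """
    colorings1, bottom1, top1 = span1
    colorings2, bottom2, top2 = span2
    glued = [(c1, c2) for c1 in colorings1 for c2 in colorings2
             if top1(c1) == bottom2(c2)]
    return (glued,
            lambda pair: bottom1(pair[0]),   # restriction to the m incoming ends
            lambda pair: top2(pair[1]))      # restriction to the n outgoing ends
```

The composite set here is only canonical up to bijection, which is exactly why spans are organized into a 2-category rather than a category.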
The Fundamental Involutory Quandle Cospan.
Earlier we identified the fundamental involutory quandle Q(K) of a link diagram K as the quandle that captures coloring numbers for all involutory quandles X: Col X (K) = hom Quan (Q(K), X). The same construction can give us a quandle Q(T ) from a tangle diagram T , which then gives us the set of X-colorings of T . Can we get the sides of our span as well?
Indeed, the free quandle on n generators Q n satisfies hom Quan (Q n , X) = X n . We can choose these generators to be a collection of free ends of our tangle diagram, and the inclusion of those ends into the whole diagram gives us a homomorphism Q n → Q(T ).
Theorem 5.4. There is a 2-functor extending the fundamental involutory quandle to tangles: Q : Tang → CoSpan(Quan). On an object n in Tang we define Q(n) = Q n , the free quandle on n generators. For a tangle diagram T : m → n from m free ends to n free ends we let Q m be the free quandle on the incoming ends and Q n be the free quandle on the outgoing ends. We define the cospan Q m → Q(T ) ← Q n , where the arrows are the quandle homomorphisms induced by including the endpoints into the tangle diagram.
For a 2-morphism φ we define Q(φ) by referring to figure 2, as in theorem 2.2.
Proof. Again, the meat of the proof is in showing that composition of tangles really does correspond to a pushout in Quan.
Composition of tangle diagrams T 1 and T 2 consists of laying down both diagrams and joining some arcs from T 1 to arcs from T 2 , as determined by the lineup of the endpoints. But matching endpoints corresponds to adding relations saying that the image of a generator of Q n in Q(T 1 ) equals its image as a generator in Q(T 2 ). This amalgamated free product is exactly the pushout construction in Quan.
Again, if we consider a link as a tangle from 0 to 0, the free quandle on zero generators is trivial, as are all homomorphisms from it. The only nontrivial information in this cospan is the old fundamental involutory quandle of the link.

Now we can use this fundamental involutory quandle cospan to recover the coloring spans. The contravariant hom-functor hom Quan (−, X) automatically takes all colimits to limits, so in particular it preserves pullbacks as a functor Quan op → Set.
Theorem 5.5. The coloring span 2-functor Col X factors as the composition of the span of the hom-functor Span(hom Quan ( , X)) and the fundamental involutory quandle 2-functor Q.
Monoidal structure.
All of the 2-categories considered above also carry monoidal structures, and all the 2-functors preserve them. This allows us to obtain tangle covariants, and to decategorify them to tangle invariants.
The category Set has all finite products, so it has the Cartesian monoidal structure. The direct product of sets preserves pullbacks, so Span(Set) is a monoidal 2-category.
Similarly, Quan has finite coproducts given by the free product of quandles, or equivalently by the pushout over the free quandle on zero generators. These coproducts preserve pushouts, so CoSpan(Quan) is a monoidal 2-category.

The 2-functor Span(hom Quan (−, X)) is monoidal. Proof. This is a straightforward consequence of the fact that the hom-functor hom Quan (−, X) : Quan op → Set preserves products.

The fundamental involutory quandle 2-functor Q is monoidal. Proof. Given two tangles T 1 : m 1 → n 1 and T 2 : m 2 → n 2 we form their monoidal product T 1 ⊗ T 2 by laying them side-by-side. When we calculate the fundamental involutory quandle of this diagram, we just use all the generators and relations that come from each of T 1 and T 2 , and none of them interact with each other. Thus the quandle of T 1 ⊗ T 2 is the free product of the quandles of T 1 and T 2 . Similarly at the ends, Q m1+m2 is the free product of Q m1 and Q m2 , and Q n1+n2 is the free product of Q n1 and Q n2 . So the monoidal product of tangles corresponds under Q to taking free products of cospan diagrams. But this is just the induced monoidal structure on CoSpan(Quan).

The coloring span 2-functor Col X is therefore monoidal as well. Proof. This is an immediate corollary of the preceding theorems and theorem 5.5.

Decategorifying

6.1. Coloring Matrices. When we decategorify a coloring set we get a coloring number. What happens when we decategorify a coloring span?
Decategorifying means identifying isomorphic spans. Consider two isomorphic spans of sets A ← F → B and A ← G → B, with legs f l , f r and g l , g r respectively, and a bijection φ : F → G commuting with the legs. The span functions f l and f r partition F into its "double preimages" F a,b , consisting of the elements sent to a by f l and to b by f r . Similarly, the functions g l and g r partition G into its double preimages G a,b . Then for φ to commute with the legs, the function φ must decompose into functions φ a,b : F a,b → G a,b . And then for φ to be a bijection, each of the φ a,b must be a bijection.
So when we identify isomorphic spans of sets, we retain only the cardinality of each of the double preimages. We are left with a matrix of cardinal numbers indexed by the set A on the one side and the set B on the other.
For a coloring span, these index sets are the colorings of the endpoints. Thus when we decategorify a coloring span we get a matrix Col X (T ) indexed by colorings of the endpoints of the tangle. The entry Col X (T ) µν is the number of colorings of the diagram T that agree with the coloring µ on the incoming ends and with the coloring ν on the outgoing ends.
This interpretation as matrices is compatible with matrix multiplication. That is, given tangle diagrams T 1 : m → l and T 2 : l → n, the number of colorings Col X (T 1 • T 2 ) µν agreeing with the colorings µ and ν on the ends can be calculated as a sum of products of coloring numbers: Col X (T 1 • T 2 ) µν = Σ λ Col X (T 1 ) µλ Col X (T 2 ) λν , where λ runs over the colorings of the l spliced endpoints.

Decategorification also plays nice with the monoidal structure on spans induced by the product of sets. Take two diagrams T 1 : m 1 → n 1 and T 2 : m 2 → n 2 . A coloring µ 1 of the incoming ends of T 1 and a coloring µ 2 of the incoming ends of T 2 combine to give a coloring (µ 1 , µ 2 ) ∈ X m1+m2 of the incoming ends of T 1 ⊗ T 2 . Similarly, we can combine colorings of the outgoing strands of each diagram to get a coloring (ν 1 , ν 2 ) ∈ X n1+n2 of the outgoing strands of T 1 ⊗ T 2 . Every coloring of the incoming or outgoing strands arises in this manner. Now when we count the colorings of T 1 ⊗ T 2 compatible with a given coloring of the incoming and outgoing ends, we find Col X (T 1 ⊗ T 2 ) (µ 1 ,µ 2 )(ν 1 ,ν 2 ) = Col X (T 1 ) µ 1 ν 1 Col X (T 2 ) µ 2 ν 2 . This follows since a coloring of T 1 ⊗ T 2 is simply a coloring of each of T 1 and T 2 with no particular relation between them. This shows that the coloring matrix for the monoidal product T 1 ⊗ T 2 is the Kronecker product of the coloring matrices for T 1 and T 2 .

Thus the coloring matrices assemble into a tangle invariant: for any finite involutory quandle X we have a monoidal 2-functor Col X from Tang to the category of matrices, where the target category is that of matrices with natural number entries, and with identity 2-morphisms added.
Proof. If we pick d to be the cardinality of X, then there are exactly d n colorings of a collection of n endpoints in a tangle. We thus set Col X (n) = d n on objects.
We already have a coloring span of sets for every tangle. Even if we disregard the coloring relations at crossings, we can only pick one color from X for each arc in the diagram, and so the sets in the coloring span are finite. Taking cardinalities, we get a matrix of natural numbers. As described above, this assignment of a coloring matrix to a tangle preserves the composition and monoidal structure.
Finally, if we have a 2-morphism φ : T 1 ⇒ T 2 in Tang we know that the coloring matrices for T 1 and T 2 will be the same, so we can pick Col X (φ) to be the identity 2-morphism on that matrix.
Since every 2-morphism becomes an identity 2-morphism under this functor, we have a tangle invariant.
In particular, when we consider a link L as a tangle from 0 to 0, we can find the 1 × 1 matrix Col X (L). The single entry in this matrix is the number of X-colorings of the link L.
Instead of restricting our attention to links, we may consider any n-strand braid as a tangle from n to n. In this case we find a matrix representation Col X of each braid group B n .
6.2. Computation. It turns out that not only do we have a tangle invariant in our coloring matrices, we have a straightforward way of computing them. The category of tangles was given by generators and relations. Thus we can calculate the coloring matrix of each generating tangle by hand, and then assemble the coloring matrix using matrix multiplications and Kronecker products.
The matrix for each generating tangle is straightforward to work out. The right-handed crossing, for instance, takes a pair of colors for each index. The entry Col X (X + ) (a,b)(c,d) will be 1 if a = d and c = a b, and 0 otherwise. As an example, figure 3 shows all the coloring matrices of the generating tangles for the quandle D(Z 3 ).
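To make this concrete, here is a short sketch using NumPy that builds the coloring matrices of the two generating crossings for D(Z_3) and checks, at the decategorified level, that stacking a right-handed crossing on a left-handed one gives the identity matrix, with the Kronecker product handling side-by-side tangles. The row/column conventions (rows indexed by bottom colorings, columns by top colorings) and the choice of which strand is the over-strand in each generator are my own assumptions, not taken from the paper.

```python
import numpy as np
from itertools import product

n = 3                                        # the dihedral quandle D(Z_3)
pairs = list(product(range(n), repeat=2))    # all colorings of two endpoints

def op(x, y):
    # dihedral quandle on Z_n: the color x after passing under a strand colored y
    return (2 * y - x) % n

def crossing_matrix(right_handed=True):
    """d^2 x d^2 coloring matrix of a generating crossing.

    Rows are indexed by colorings of the two bottom endpoints, columns by
    colorings of the two top endpoints; an entry is 1 when the crossing
    admits a coloring with those boundary colors.
    """
    M = np.zeros((n * n, n * n), dtype=int)
    for i, (a, b) in enumerate(pairs):        # bottom colors (left, right)
        for j, (c, d) in enumerate(pairs):    # top colors (left, right)
            if right_handed:
                # over-strand runs bottom-left to top-right and keeps its color
                ok = (d == a) and (c == op(b, a))
            else:
                # over-strand runs bottom-right to top-left and keeps its color
                ok = (c == b) and (d == op(a, b))
            M[i, j] = int(ok)
    return M

X_plus, X_minus = crossing_matrix(True), crossing_matrix(False)

# Stacking tangles is matrix multiplication; this is Reidemeister II, decategorified.
assert np.array_equal(X_plus @ X_minus, np.eye(n * n, dtype=int))

# Side-by-side tangles correspond to the Kronecker product of coloring matrices.
print(np.kron(X_plus, X_plus).shape)         # (81, 81)
```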
Computations with these matrices may be tedious by hand, but they are easily programmed into a computer.
Biological Basis of Differential Susceptibility to Hepatocarcinogenesis among Mouse Strains
There is a vast amount of literature related to mouse liver tumorigenesis generated over the past 60 years, not all of which has been captured here. The studies reported in this literature have generally been state of the art at the time they were carried out. A PubMed search on the topic "mouse liver tumors" covering the past 10 years yields over 7000 scientific papers. This review addresses several important topics related to the unresolved controversy regarding the relevance of mouse liver tumor responses observed in cancer bioassays. The inherent mouse strain differential sensitivities to hepatocarcinogenesis largely parallel the strain susceptibility to chemically induced liver neoplasia. The effects of phenobarbital and halogenated hydrocarbons in mouse hepatocarcinogenesis have been summarized because of recurring interest and numerous publications on these topics. No single simple paradigm fully explains differential mouse strain responses, which can vary more than 50-fold among inbred strains. In addition to inherent genetics, modifying factors including cell cycle balance, enzyme induction, DNA methylation, oncogenes and suppressor genes, diet, and intercellular communication influence susceptibility to spontaneous and induced mouse hepatocarcinogenesis. Comments are offered on the evaluation, interpretation, and relevance of mouse liver tumor responses in the context of cancer bioassays.
Introduction
Since the early 1970s and even before, the mouse liver tumor response observed in cancer bioassays has been a source of unresolved controversy with books, symposia, and advisory committee deliberations on the topic [1][2][3][4][5][6][7][8] . Some have taken the debate further and argued that the mouse bioassay should be eliminated altogether from our hazard identification/safety assessment armamentarium and that a one species default model, the rat, is sufficient 9 . Debates on a one or two species bioassay and the relevance of the mouse liver tumor response continue.
Our knowledge of factors associated with mouse hepatocarcinogenesis has significantly improved over the last two decades. The diagnostic and nomenclature issues of concern in the 1970's have been largely resolved. The propensity of male mice versus females to develop spontaneous and treatment-induced liver tumors is better understood. Advances in the genetics of strain susceptibility have been made, although more work remains to tease out the definitive genes responsible. Modulators of murine hepatocarcinogenesis, such as diet, hormones, oncogenes, methylation, imprinting, and cell proliferation/apoptosis are among multiple mechanistically associated factors that impact this target organ response in control as well as treated mice. With the advancement of our understanding, it has become obvious that no mode of action, pathway, or mechanism should be considered mutually exclusive.
There is no one simple paradigm to explain the differential strain sensitivity to hepatocarcinogenesis. With the recent effort to delineate over 8 million single nucleotide polymorphisms in 15 mouse strains selected for genetic resequencing 10 , the prospect for continued and possibly specialized use of mice for hazard identification and safety assessment remains. In the meantime, we need to continue ongoing efforts and progress in defining threshold limiting factors that impact murine hepatocarcinogenesis and use this knowledge to place the mouse liver tumor response in appropriate regulatory perspective. There will always be a need for research on mode of action and quantitative differences/similarities between species.
Origins of Inbred Mice
Classical strains used in research today were originally derived from interbreeding of Mus musculus subspecies. It first started as a hobby in Victorian England and in Asia with breeding of "fancy" mice based on coat color and the subsequent adoption of some of these mice in the early 1900's by researchers in the United States [10][11][12][13] . The oldest inbred strain, the DBA/2 (originally dba for d = dilute, b = brown, and a = non-agouti) was established in 1909 by C. C. Little 14 . Ten years later the mouse lady of Granby, Massachusetts, Miss Abbie E. C. Lathrop, intercrossed black offspring of her female number 57 to produce a stock of mice ultimately used by Clarence Cook Little to form his C line, resulting in the C57BL/6 12,13 . Strains A, C3H, and CBA were created by Leonell C. Strong in the early 1920's 15 .
Commonly used mouse strains
In the early 1960's the National Cancer Institute adopted the B6C3F1 mouse, the F1 hybrid of the C57BL/6 female and C3H male, as the mouse for use in the cancer bioassay program. The decision to use the B6C3F1 mouse was based on results from an 18-month study involving 20,000 mice including two hybrid lines and 127 different chemicals 16 . Partially based on historic inertia and partly based on a concern about losing the historic control database, the B6C3F1 has remained the mouse model of choice in U.S. Government-sponsored hazard identification programs for toxicity and cancer. Variations of the C57BL and the outbred Swiss stock are popular models for safety assessment by the chemical and pharmaceutical industries while these plus the A strain, B6CF1, C3H, BALB/c, FVB, 129, and others are used by biomedical researchers. The recent publication of mouse haplotype maps for 15 inbred mouse strains, including 8 classical strains commonly used for research, is expected to facilitate obtaining important information for a wide range of biological questions 10 and may influence how mice are used for hazard identification in the future.
Conventional hazard identification bioassay
While the hazard identification paradigm of maintaining a core group of 50 animals per dose per sex has remained relatively constant for many years, modifications such as inclusion of three to five doses, some of which approximate human exposure levels, in utero exposures, and stop studies have been added to answer specific questions. The basic hazard identification study commences exposures when mice and rats are 6 to 8 weeks of age and treatment typically continues for 2 years. When using specifically susceptible mouse strains such as the B6C3F1 hybrid, relatively high and variable incidences of liver tumors can occur in the untreated or vehicle control mice. The liver tumor incidence increases as the mice age with the majority of liver tumors occurring after 20 months of age 17 .
Single or multiple doses to adult mice
Administration of a few to several repeated doses of known and usually genotoxic hepatocarcinogens, starting with 5 to 8-week old mice, will yield liver tumors in a few months, making this a potentially useful tool to follow liver tumor pathogenesis and to study variables that might influence hepatocarcinogenesis. Published reports using this type of protocol were fairly common prior to the 1970s with a general switch to initiation-promotion studies in subsequent years. It should be noted, however, that repeated and prolonged exposures to nongenotoxic agents certainly can result in induction of mouse liver tumors.
Neonatal mouse model
While variations of this model have been used, the basic approach is to administer one to three doses of the test agent prior to weaning with no further treatment. The rationale is that "fixation" of hepatocellular initiation will be favored by the high rate of cell growth during the neonatal period. Subsequent endogenous promotion would allow for clonal expansion of any initiated clones with development of liver tumors within a 12-month observation period. This model, using either CD1 or B6C3F1 mice, has been described and shown to work well with genotoxic test agents 18 . A single dose of a hepatocarcinogen such as diethylnitrosamine to a neonate at 12 to 15 days of age will yield a high incidence and multiplicity of liver neoplasia with a relatively short latency while the same dose given to the adult mouse will result in a lower incidence and multiplicity with a much longer latency period. Neonatal treatment of B6C3F1 and C3AF1 mice with ethylnitrosourea, versus treatment at 42 days of age, results in a 3- to 8-fold increase in liver tumors in the neonates compared to the older dosed mice 19 . This age difference is based upon an efficient initiation of hepatocytes during the neonatal period when there is rapid liver growth [20][21][22] . The neonatal mouse model has been used extensively to learn more about the biology of murine hepatocarcinogenesis and has included studies on the genetics of susceptibility, the relative roles of cell proliferation and apoptosis, effects of hormones and diets on liver tumor development, and the effects of DNA methylation on expression of genes relevant to murine hepatocarcinogenesis.
Initiation-promotion protocols
These protocols typically consist of administration of the initiating agent, usually diethylnitrosamine, to either neonatal mice or to mice shortly after weaning followed by repeated dosing with an agent being tested as a liver tumor promoter. One variation involves administration of a necrogenic dose of diethylnitrosamine 1 to 2 weeks after weaning. Another variation involves partial hepatectomy of 5 to 6 week old mice followed in 36 hours by a non-necrogenic dose of diethylnitrosamine 23 . These two protocols without further treatment will generally yield 3 or more liver tumors per mouse at one year of age. A third protocol, the preweaning initiation-promotion protocol, involves administration of a non-necrogenic dose of diethylnitrosamine or an alternative initiator often at age day 12 or 15 followed by administering the agent being tested for promotion shortly after weaning. Without further treatment, these mice can develop a high multiplicity of liver tumors by one year of age while treatment with a promoter significantly shortens the time to tumor. Partial hepatectomy alone after preweaning initiation also serves as a liver tumor promoter 24 . While these initiation-promotion protocols can be carried out until there is an obvious liver tumor endpoint, quantitation of putative preneoplastic foci of cellular alteration has been used as a biomarker of hepatocarcinogenicity. Features of these initiation-promotion protocols have been described in a review paper dealing with phenobarbital promotion 25 . There are variations such as initiation by treatment with diethylnitrosamine in drinking water for 4 weeks followed by promotion 26 and other examples will be referenced throughout this document.
Genetically engineered mouse models
Several lines of viral oncogene induced mouse models of hepatic neoplasia are summarized by Sandgren 27,28 . These include ras and myc oncogenes under the influence of the albumin gene regulatory elements 29,30 as well as SV40-TAg driven by a variety of regulatory elements 27,31 . Growth factors such as TGFalpha and IGF2 under the influence of the metallothionein-1 or major urinary protein regulatory elements, respectively, yield dysplastic foci in as little as 6 months and hepatocellular neoplasms in 8 to 18 months 27,32 with evidence of severe hepatic dysplasia in newborn mice 33 .
In an effort to more closely mimic human hepatitis-associated hepatocellular neoplasia, a small number of genetically engineered mouse models based on incorporation of various portions of the human hepatitis virus such as the HBx gene and the pre-S2 gene have been developed 34,35 and show a male predominance under the influence of androgens and glucocorticoids 36 . Hepatic changes can appear as early as 4 weeks and consist of glycogen-rich centrilobular foci of cellular alteration 37,38 . During a latent period of several months the altered foci show enhanced cell proliferation and aneuploidy with ultimate development of hepatocellular carcinomas adjacent to adenomas and additional foci of cellular alteration. The yield of hepatocellular carcinomas can exceed 80% 39 and an X gene/c-myc construct will yield dysplastic foci in 12 weeks and hepatocellular carcinomas between 20 and 28 weeks 40 . In essentially all of the HBV models of HCC, there is an imbalance of cell proliferation and apoptosis 34,41 .
B6C3F1 mice expressing genotype 1a hepatitis C virus core and envelope proteins 1 and 2 develop hepatopathy and are prone to lymphoid neoplasia as well as hepatocellular carcinoma 42 . Lymphoid neoplasia was seen after age 18 months. Hepatopathy was present at 12 months and tumors later (exact age of occurrence of hepatocellular carcinomas not stated).
The genetically engineered mouse models that have been proposed for hazard identification, such as the rasH2 and p53+/-, fall into the category of hepatocarcinogenesis resistant mice.
Strain Susceptibility to Spontaneous and Induced Liver Tumors: Biology
Based on genetic and numerous other factors, mouse stocks and inbred strains differ in susceptibility to both spontaneous and treatment-induced liver neoplasia. Because of the variety of studies with differing protocols used to generate susceptibility data, direct comparisons among strains and stocks are problematic and, in looking across the numerous published studies, the best we can do is to classify mice by their relative susceptibility to hepatocarcinogenesis. By way of example, CBA and C3H inbred mice are considered highly susceptible to induction of liver neoplasia while in comparison C57BL/6 and BALB/c are relatively resistant 43 .
There are two metrics typically used to categorize liver tumor susceptibility: incidence and multiplicity. The former is most often used to characterize spontaneous liver neoplasia and the latter to assign relative susceptibility to treatment-induced liver neoplasia. A study by Hanigan and co-workers serves to demonstrate the magnitude of relative susceptibility. Utilizing an identical protocol, a direct comparison between C3H/HeJ and C57BL/6J male mice with respect to liver tumor induction demonstrated up to a 40-fold difference in liver tumor multiplicity 20 .
In defining resistance and susceptibility (i.e., strain dependent variability), two factors play a significant role: the number of initiated tumor cells and the rate of tumor growth. Using basal levels of 8-hydroxydeoxyguanosine (OH8dG) as an indicator of potential spontaneous initiation in mouse liver, strain differences among C3H, B6C3F1, and C57BL mice are positively correlated with spontaneous hepatocellular neoplasia 44 . Since formation of OH8dG has been shown to result in gene mutation 45 and hypomethylation 46 , it may also contribute via these mechanisms to the differential strain susceptibility to spontaneous hepatocarcinogenesis. C3H mice are more susceptible than C57BL/6 mice to a variety of hepatocarcinogens with differing metabolic activation patterns, yet the two strains form similar numbers of preneoplastic lesions and DNA adducts 47,48 , indicating that the relative susceptibility of these mice is based on detection of proliferative lesions which in turn is dependent upon lesion growth rate within the temporal context of the experimental study.
Spontaneous liver tumors
The strain-specific spontaneous incidence of liver tumors (originally diagnosed as hepatomas) in mice has been documented in the published literature since the late 1930's and early 1940's. Andervont reported on the occurrence of spontaneous hepatomas in C3H mice 49,50 , with Burns and Schenken publishing in the following year 51 . Subsequent reports of C3H sublines maintained in different laboratories and over time have confirmed the high spontaneous incidence of liver tumors 52,53 . The high susceptibility of CBA mice was noted in 1936 54 and has since been confirmed by others 53,[55][56][57] .
Attempts at comparative tabulation of the spontaneous liver tumor incidences reported for different mouse strains can be potentially misleading since countless variables including study design, diet, caging, diagnostic features, and study duration differ among these reports.
Treatment induced liver tumors
Strain sensitivity to treatment induced liver tumors generally parallels strain susceptibility to spontaneous liver tumor development 23,48,[59][60][61][62] . Susceptibility to treatmentinduced liver neoplasia is contingent upon the temporal timing and frequency of dosing as well as the magnitude of the dose and the duration of the study observation period. Thus, a single dose of a hepatocarcinogen such as diethylnitrosamine to a neonate at 15 days of age will yield a high incidence and multiplicity of liver neoplasia with a short latency while the same dose given to the adult mouse will result in a lower incidence and multiplicity with a much longer latency period. A 10-fold increase in diethylnitrosamine (5 to 50 mg/kg) resulted in a 3.7-fold increase in number of tumors but with size distribution similar at the low and high dose 63 . Swiss mice initiated neonatally with ethylnitrosourea at two different doses also had a dose-dependent increase in liver tumors 64 as did B6C3F1 mice initiated with dimethylnitrosamine 65 , and BALB/c mice treated with 2-acetylaminofluorene 66 . It was also reported that the higher doses favored the development of malignant over benign liver tumors 58,65 .
The number of papers dealing with treatment-induced hepatocarcinogenesis and the various treatment regimens are legion. A few illustrative examples to highlight strain susceptibility will be provided.
The sensitivity of C3H mice to chemical induction of liver tumors began to be documented shortly after their susceptibility to spontaneous liver tumors was known. Heston and coworkers showed increased susceptibility to liver tumors in C3H mice treated with urethane 53 and the antifertility drug Enovid 67 . In more recent years, Drinkwater and coworkers at the McArdle lab as well as Dragani and coworkers in Milan have studied the genetics and modifying factors responsible for differential strain sensitivity to murine hepatocarcinogenesis.
Using a neonatal mouse model in which the mice were injected no later than 16 hours after birth, it was shown that 9, 10-dimethyl-1,2-benzanthracene caused a pronounced increased incidence of liver tumors in male and female C3H, CBA, C3H × CBA, and CBA × C3H but only a marginal increase in strain A, C57BL, A × C57BL, C57BL × A, BALB/c and IF 68 . Liver tumor latency was the shortest in male C3H mice.
Both diethylnitrosamine and ethylnitrosourea have been used in studies of treatment-induced mouse hepatocarcinogenesis. Vesselinovitch published several studies in the 1970s primarily using the B6C3F1 preweaning initiation-promotion mouse model with ethylnitrosourea 19,[69][70][71] . Lee used this model to study diethylnitrosamine induced liver tumors in C3H-C57BL/6 chimeras 72 . A comparison of 11 inbred strains using ethylnitrosourea and/or diethylnitrosamine was reported by the McArdle lab 73 .
While treated males typically are more susceptible to liver tumor induction than treated females, occasional exceptions do occur. Male and female C3H mice treated with carbon tetrachloride by gavage and 4-o-tolylazo-o-toluidine by subcutaneous injection develop increased incidences of hepatomas with the female response to 4-o-tolylazo-o-toluidine exceeding that of the treated males 74 . Another example of a prominent female response is seen in diethylnitrosamine treated C57BR/cdJ mice 75 .
DBA/2 susceptibility
Male DBA/2 mice represent an unusual and puzzling situation with respect to liver tumor susceptibility. The spontaneous liver tumor rate is a low 1.5% 57 and treatment of 5-week-old male DBA/2 mice with diethylnitrosamine yields a relatively low liver tumor response 60 . However, treatment of 12-day old DBA/2 mice with diethylnitrosamine results in a 20-fold increase in liver tumor multiplicity versus liver tumor multiplicity in the resistant C57BL/6 mouse 76,77 . The reason for this differential in susceptibility is not known but differences in metabolism have been speculated as playing a role 60 .
Two specific categories of agents that induce murine liver tumors, viz., halogenated hydrocarbons and phenobarbital, are highlighted separately because of general academic and regulatory interest.
Halogenated hydrocarbons
Halogenated hydrocarbons continue to receive scientific attention because of potential widespread human exposure to contaminated drinking water as well as to water disinfection by-products. This class of compounds is clearly associated with liver tumor induction in mice and typically there is no clear evidence of carcinogenicity in other species, including rats. There are multiple modes of action believed to be contributory to the murine hepatocarcinogenicity associated with one or more of the halogenated hydrocarbons. These include enhanced cell proliferation, decreased apoptosis, perturbation of DNA methylation, disruption of gap junctional intercellular communication, activation of oncogenes, and peroxisome proliferation.
Often used as a classic example of the enhanced cell proliferation secondary to a cytotoxicity mode of action, chloroform has been classified as possibly carcinogenic to humans (IARC Classification 2B). While corn oil gavage of chloroform leads to murine liver tumors in B6C3F1 male and female mice 78 , chloroform given in drinking water is not hepatocarcinogenic 79,80 . This difference is attributed to differential hepatotoxicity and increased hepatocellular proliferation in gavage studies 79,81 . Similarly, when chloroform was given by drinking water in an initiation-promotion study, it inhibited liver tumor development compared with a positive liver tumor response with phenobarbital promotion 26 . Cytotoxicity and secondary enhanced cell proliferation have been proposed as a primary mode of action for three other trihalomethanes (bromodichloromethane, chlorodibromomethane, and bromoform) 82 .
The mode of action of the murine hepatocarcinogen trichloroethylene is extensively reviewed by Bull 83 . Trichloroethylene causes liver tumors in B6C3F1 and Swiss mice. The metabolism of trichloroethylene to chloral hydrate, dichloroacetic acid, and trichloroacetic acid complicates teasing out the modes of action since these metabolites are also hepatocarcinogenic in mice 83 . Modification of cell signaling pathways that alter cell replication and cell death as a consequence of decreased DNA methylation as well as c-myc and c-jun methylation are likely contributory modes of action for the tumorigenic effects of trichloroethylene 83,84 . Studies of the trichloroethylene metabolites dichloroacetic acid, trichloroacetic acid, and chloral hydrate suggest that both dichloroacetic acid and trichloroacetic acid are involved in trichloroethylene-induced liver tumorigenesis and that many dichloroacetic acid effects are consistent with conditions that increase the risk of liver cancer in humans 85 . Some of these effects involve GST Xi, histone methylation, and overexpression of IGF2 85 .
While there are differences in the type of dose response for dichloroacetic acid and trichloroacetic acid induced mouse liver tumors 86 , both dichloroacetic acid and trichloroacetic acid have caused liver tumors in multiple mouse studies [87][88][89] . Several studies have examined the modes of action involved in dichloroacetic acid induced murine hepatocarcinogenesis and have implicated hypomethylation [90][91][92] , enhanced cell proliferation 90 , peroxisome proliferation or even activation of PPAR without evidence of peroxisome proliferation 89,93,94 , and oncogene activation 84 . Hypomethylation has been identified in mouse liver tumors following initiation with methylnitrosourea and promotion by dichloroacetic acid or trichloroacetic acid, and as an early response to short term exposure to dichloroacetic acid and trichloroacetic acid 82,84,95,96 , and it can be blocked by dietary methionine 97 .
A comprehensive review of aldrin/dieldrin has been published 98 . Dieldrin is considered a ground water contaminant and, like the water disinfection by-products discussed above, it is associated with mouse liver tumor responses in different mouse strains [99][100][101] . Oxidative stress generated via futile cycling of cytochrome enzymes has been implicated as a primary mode of action responsible for the murine liver tumor response 98 . It has been proposed that the consequences of oxidative stress are threshold responses and should be used in risk assessment for dieldrin and for other halogenated hydrocarbons 98 .
In addition to dieldrin, the organochlorine class of chemicals has been clearly associated with induction of murine liver tumors 102 . Thirty-seven of 138 organochlorine agrochemicals were hepatocarcinogenic in mice without an apparent effect of mouse strain or study duration 103 . A clear positive association between hepatomegaly at one year and a liver tumor response at 18 or 24 months was documented in these studies.
Phenobarbital
Phenobarbital is one of the most widely studied rodent hepatocarcinogens, is considered the prototype chemical for a mode of action whereby hepatic enzyme induction leads to rat and mouse liver tumor induction, and is typically used as a positive control in rodent initiation-promotion protocols. An extensive review of the relationship between phenobarbital and mouse liver neoplasia has been published by McClain 104 . McClain provides information on a series of studies utilizing different phenobarbital treatment protocols from the early 1970s to the early 1980s to test for liver tumor susceptibility in C3H, CF1, B6C3F1, and C57BL mice. As more studies have been carried out from the early 1970s to the present time, it has become obvious that many different factors and variables contribute significantly to phenobarbital induced liver tumors. For example, depending upon the study protocol, phenobarbital may enhance or inhibit liver tumor formation following diethylnitrosamine initiation. Diethylnitrosamine initiation in the postweaning period versus diethylnitrosamine initiation in the preweaning period results in enhancement or inhibition of phenobarbital induced liver tumors, respectively. This paradoxical effect has been carefully studied 25 . The different effect on promotion may be a reflection of the observation that phenobarbital enhances the growth of eosinophilic but not basophilic foci of cellular alteration and proportionally fewer eosinophilic foci are produced with preweaning diethylnitrosamine initiation.
Most studies of the factors involved in phenobarbital associated murine hepatocarcinogenesis have been carried out utilizing initiation-promotion protocols. These studies include examination of differential strain sensitivity, influence of phenobarbital on cell proliferation, oncogene analysis in liver tumors, and effects of phenobarbital on global DNA methylation as well as on methylation of oncogenes. Most of these studies have utilized the preweaning neonatal model of initiation followed by different promotion regimens. The response in these studies is dependent on the strain of mouse used and on the initiating chemical carcinogen 105 . C3H initiated with diethylnitrosamine and given phenobarbital demonstrated an increase in adenomas compared to diethylnitrosamine treatment alone while B6C3F1 males had a decrease in adenomas as a result of phenobarbital treatment. Phenobarbital had no effect on hepatocellular adenomas in C57BL/6 males previously treated with diethylnitrosamine. Following 9, 10-dimethyl-1,2-benzanthracene initiation, phenobarbital increased adenomas in both C3H and B6C3F1 but not in C57BL mice.
In BALB/c mice phenobarbital treatment following initiation by diethylnitrosamine resulted in a decreased latency for hepatic adenomas and an increased incidence of adenomas at multiple sampling times 106 .
In comparing C57BL/6, C3H/HeN, and DBA/2 mice initiated with diethylnitrosamine and treated for 17 weeks with phenobarbital, Diwan and coworkers identified an increase in preneoplastic foci and liver tumors in C3H and DBA/2 mice but not in C57BL mice and noted that the DBA/ 2 mice were especially responsive 60 . In a different study using reciprocal C57BL/6 and DBA/2 hybrids, susceptibility to phenobarbital induced tumors was a dominant trait with both F1 hybrids responding similarly 107 .
Timing of phenobarbital dosing appears to be important in how mice respond to diethylnitrosamine initiation and subsequent promotion by phenobarbital. Diethylnitrosamine alone induced focal hepatic lesions, adenomas, and carcinomas in B6C3F1 mice. Diethylnitrosamine initiation at 6 and 10 weeks of age versus initiation at 15 days of age, with both followed by long-term treatment with phenobarbital, resulted in strong liver tumor promotion following initiation at 6 and 10 weeks of age and inhibition of liver tumor development following neonatal initiation in B6C3F1 males 109 . On the other hand, neonatal initiation followed by phenobarbital promotion in BALB/c males enhanced development of liver tumors 109 . Similar effects on inhibition of tumor development by phenobarbital have been seen in B6C3F1 initiated with diethylnitrosamine followed by phenobarbital promotion commencing at 4 weeks of age 110 .
There is considerable evidence to support the differential strain susceptibility to phenobarbital induced murine hepatocarcinogenesis. Diethylnitrosamine injection followed by partial hepatectomy and subsequent dietary phenobarbital resulted in accelerated growth of preneoplastic focal lesions in C3H and BALB/c mice with only slightly increased growth of preneoplastic lesions in C57BL/6 mice 23 .
Alterations in the balance between cell proliferation and cell death as they impact phenobarbital hepatocarcinogenesis are influenced by mouse strain. Maximum induction of hepatic DNA synthesis in the absence of any evidence of cytotoxicity in phenobarbital treated 8-week-old B6C3F1 mice is seen in short-term to 28-day studies 111 . This is suggestive of a mitogenic effect. Furthermore, a dose-dependent enhanced DNA synthesis and an associated decreased apoptosis are seen in preneoplastic foci in B6C3F1 mice treated with 100 and 500 mg phenobarbital/kg in the diet but not 10 mg phenobarbital/kg 111,112 . In an initiation-promotion study in C3H and B3B6F1, cell proliferation was measured in preneoplastic foci and non-involved hepatocytes and there was a differentially enhanced response to phenobarbital in the C3H strain preneoplastic foci 113 . The bromodeoxyuridine labeling index in male C57BL/6J, B6C3F1 and C3H/HeJ mice initiated neonatally with diethylnitrosamine followed by phenobarbital promotion for 12 months was positively correlated with focus growth showing a C3H > B6C3F1 > C57BL/6 strain dependent effect 114 .
Murine strain susceptibility to phenobarbital induced hepatocarcinogenesis may also involve alterations in global DNA methylation as well as methylation associated changes in oncogene expression. Both hypomethylation and hypermethylation can be associated with tumorigenesis 115 . While a choline-devoid, methionine-deficient diet causes global DNA hypomethylation in B6C3F1 and in C57BL mice, treatment with phenobarbital lowers DNA methylation in B6C3F1 mice to 20% of control. Phenobarbital treated C57BL mice, on the other hand, maintain normal levels of DNA methylation despite having a higher rate of treatment induced cell proliferation versus B6C3F1 116 . Phenobarbital treatment of B6C3F1 and the two parental strains is primarily associated with hypermethylation changes in GC-rich regions of DNA in the B6C3F1 and C3H mice 117 as an indication that inability to maintain normal methylation is involved with susceptibility to phenobarbital induced liver tumors.
Furthermore, B6C3F1 mice on a choline-devoid, methionine-deficient diet with and without phenobarbital treatment exhibit increased mRNA expression of Ha-ras and raf as a consequence of hypomethylation 118 . There was altered global DNA methylation, both hypomethylation and hypermethylation, in some spontaneous and phenobarbital induced liver tumors as well as increased Ha-ras expression 118 indicative of a decreased ability of the B6C3F1 mouse to maintain its methylation status. Following phenobarbital treatment, B6C3F1 mice are also less capable of maintaining methylation of raf in hepatocytes compared to C57BL mice 119 . However, more phenobarbital induced than spontaneous liver tumors had increased raf mRNA levels in contrast to equivalent frequency of enhanced Ha-ras mRNA levels in both phenobarbital and spontaneous tumors 119 indicating that at least for raf, B6C3F1 mouse liver tumors may arise by a separate pathway from spontaneous tumors.
Examination of the methylation status of Ha-ras, Ki-ras and myc in spontaneous, chloroform and phenobarbital induced liver tumors in B6C3F1 mice showed that Ha-ras was hypomethylated in all tumors examined, Ki-ras was hypomethylated in some tumors, and the methylation status of myc was not changed 120,121 indicating that there are some common biochemical pathways to spontaneous and induced liver tumorigenesis.
The oncogene mutational profile seen in phenobarbital induced tumors in both C3H and CF1 mice does not differ from the oncogene mutational profile in spontaneous tumors [122][123][124] supporting the contention that phenobarbital provides a selective growth advantage to hepatocytes with spontaneously occurring mutations.
Very little has been published regarding the hepatocarcinogenicity of phenobarbital in genetically engineered mice. In an initiation-promotion study utilizing 3 different doses of phenobarbital, there was no evidence of hepatic carcinogenicity in a 26-week study using rasH2 mice on a BALB/c × C57BL/6 F1 background 125 . A 9-month study of phenobarbital in DNA repair deficient XPA -/-mice did not yield any tumors 126 . There were also no liver tumors in XPA -/-× p53 +/-and C57BL/6 mice similarly treated with phenobarbital in the same study. On the other hand, phenobarbital decreased tumor latency and increased multiplicity in livers of c-myc/TGF-alpha mice, primarily as a consequence of phenobarbital blocking cell death during initial stages of tumor development 127 .
Genetics of Murine Liver Tumor Susceptibility
Although specific murine genes responsible for liver tumor susceptibility have not been discretely identified, several loci that influence susceptibility and resistance to liver tumor induction have been mapped by different research groups using backcrosses and linkage analysis 47,77,[128][129][130][131] . These loci influence hepatocyte growth control, especially in preneoplastic lesions. The loci and associated mouse chromosomes implicated in mouse hepatocarcinogenesis are presented in Table 1.
Susceptibility to liver tumor induction can vary more than 50-fold among inbred strains 43 . Quantitative comparison among strains is extremely problematic because of significant differences in animal models used, study protocols, choice of carcinogen, age at dosing, and duration of the observation period. Attempts to tabulate the liver tumor incidences and multiplicities along with identification of all the intended and unintended experimental variables from 70 years of publications would be non-trivial and probably not of much help in fostering an understanding of implications of this target tissue response for human risk assessment. Even qualitative statements of relative sensitivity and susceptibility across decades of studies need to be carefully considered. Most scientists will agree that the C3H male is highly susceptible to liver tumor induction while the C57BL/6 male is highly resistant. Based upon publications of the spontaneous incidences of liver tumors, the C3H male is intrinsically sensitive to develop liver tumors as a function of age while liver tumors are extremely rare in aged C57BL/6 males. Classifying the remaining mouse strains with respect to liver tumor susceptibility is more judgmental but I have attempted to do that in Table 2.
Up to 85% of the greater susceptibility of the C3H versus the C57BL/6 male mouse to liver tumor induction is attributable to an Hcs7 (hepatocarcinogenicity sensitivity 7) locus. Following a neonatal dose of N-ethyl-N-nitrosourea or diethylnitrosamine there was a 1.7- to 2-fold acceleration of the growth rate of preneoplastic foci of cellular alteration in C3H versus C57BL/6 males without further treatment 20,48,63 . The Hcs7 locus is also associated with a 2.6-fold higher growth rate of normal hepatocytes in C3H versus C57BL/6 males 20 .
Partial hepatectomy of 6-week old C3H and C57BL/6 mice treated neonatally with N-ethyl-N-nitrosourea results in an accelerated growth rate of preneoplastic lesions in the resistant C57BL/6 mouse but no increase in the already accelerated growth rate of preneoplastic foci in the C3H mice 24 supporting the contention that the Hcs7 locus has a role in hepatocyte growth control. Furthermore, liver tumor multiplicity was increased more than 5-fold in C57BL/6 males that underwent partial hepatectomy compared to the sham controls while there was a 60% reduction in liver tumor multiplicity in the C3H males that had partial hepatectomy. Thus, partial hepatectomy acted as a promoter for C57BL/6 males but not for C3H males. Partial hepatectomy had no effect on foci or tumors in female mice of either strain. The data suggest that the physiological growth stimulus provided by partial hepatectomy as well as the Hcs7 gene both work through the same growth regulatory pathway. It has been demonstrated that both the frequency of DNA adducts and number of preneoplastic foci of cellular alteration are similar in C3H and C57BL/6 mice following treatment with N-ethyl-N-nitrosourea 48 and that the hepatocellular foci take longer to grow to easily detectable size in C57BL/6 mice 20 , further supporting the likely role of the Hcs7 locus in preneoplastic lesion growth control. The Hcs7 locus appears to exert its effect at the level of the hepatocyte since diethylnitrosamine-induced liver tumors arise exclusively from C3H hepatocytes in C3H-C57BL/6 chimeric mice 75,132 .
Linkage studies of crosses between C3H and C57BL/6 have shown that the Hcs7 C3H allele is sufficient to render the C57BL/6 susceptible to liver tumor induction with up to a 14-fold increase in liver tumor multiplicity in congenic males 47 . Furthermore, the tumorigenic effect of the Hcs7 C3H allele is independent of gender, causing an increase in tumor multiplicity in congenic females as well as males 47 .
The C57BR/cdJ mouse was originally generated from the same breeding pair that produced the C57BL/6 mouse 11 and paradoxically has a 20-fold greater susceptibility to liver tumor induction 43 . Study of the C57BR/cdJ mouse and related chimeras has led to identification of several loci in addition to Hcs7 that are implicated in hepatocarcinogenesis and account for the enhanced liver tumor susceptibility of the C57BR/cdJ versus the closely related C57BL/6 mouse 47 .
Female C57BR/cdJ are insensitive to inhibitory effects of estrogen and are unusually susceptible to both spontaneous and treatment-induced liver neoplasia 75 .
The high susceptibility is attributable to 2 loci (Hcf1 and Hcf2; located on chromosomes 17 and 1, respectively). Using C57BR/cdJ-C57BL/6 chimeras, the increased susceptibility was found to be intrinsic to the C57BR/cdJ hepatocytes with over 90% of the tumors in both males and females originating from these hepatocytes. This finding provides evidence that the determinants of hepatocarcinogenesis sensitivity are intrinsic to the specific hepatocytes. Further evidence that the determinants are at the level of the hepatocyte is provided by the early work of Condamine and co-workers 133 using C3H-C57BL/6 and C3H-BALB/c chimeras to examine the cellular composition of liver tumors in aged mice as well as by Lee and coworkers 72 in C3H-C57BL/6 chimeras treated neonatally with diethylnitrosamine.
Multiple genetic loci affecting hepatocarcinogenesis susceptibility and resistance have been identified 77,129,130,134 , indicating that the genetics underlying susceptibility is complex. Using volume percent as a quantitative index of susceptibility in a study of C3H crosses with A/J and with M. spretus, Dragani and coworkers concluded that strain variation in susceptibility to hepatocarcinogenesis involves polygenic inheritance of unlinked genetic loci 128 .
While basic strain differences in hepatocarcinogen sensitivity are determined by intrinsic genetic factors, studies of C3H-BALB/c sexually chimeric mice treated neonatally with diethylnitrosamine show that male specific hormonal or micro-environmental factors are responsible for promotion of liver cancer in both XX and XY hepatocytes 135 .
Using recombinant inbred, backcross, and intercross mouse breeding schemes, researchers at the McArdle laboratory sought to tease out the genetics responsible for the biological complexity of hepatocarcinogenicity in DBA/2 mice. In the process they identified two hepatocarcinogenesis resistance genes (Hcr1 and Hcr2) in the very sensitive neonatally treated DBA/2 mouse 77 . Based on their linkage analysis, which covered ~95% of the genome, it was concluded that neonatally sensitive DBA/2 mice probably carry multiple hepatocarcinogen sensitivity loci, each with a small effect, that in the aggregate overcome the resistance conferred by Hcr1 and Hcr2 77 .
In most studies examining the susceptibility of different strains, the primary effect is a differential cell proliferative response in putative preneoplastic altered foci during the promotion phase of hepatocarcinogenesis. However, in an initiation-promotion study using diethylnitrosamine initiation after partial hepatectomy of males at 6 weeks of age followed by phenobarbital, clofibrate, or ethynyl estradiol promotion, Lee and coworkers found that interstrain differences in both initiation and promotion exist 23 . BALB/c and C57BL/6 had fewer foci of altered hepatocytes than C3H after diethylnitrosamine alone. Phenobarbital accelerated the growth rate of altered foci in both C3H and BALB/c, and clofibrate increased the growth of altered foci only in the C3H males.
Mouse strain differential susceptibilities differ between liver and lung tumors following treatment with agents that induce neoplasia in both target sites 43 , indicating that the genetic factors responsible for liver and lung neoplasia are tissue-specific.
Cell proliferation, apoptosis, and growth kinetics
Cancer development requires a heritable alteration of DNA plus cellular proliferation. It has been experimentally shown that cell proliferation is a fundamental requirement for initiation, promotion, and progression of spontaneous and treatment-induced liver cancer in mice 136,137 . The basis of the preweaning initiation-promotion model of murine hepatocarcinogenesis is dependent upon agent-induced promutagenic DNA damage being "fixed" by the enhanced hepatocellular proliferation associated with the rapid liver growth in the neonatal mouse. For the adult mouse, the regenerative hepatocellular proliferative response to hepatonecrogenic doses of chloroform 138 and other nongenotoxic agents [138][139][140][141] , or regenerative growth following partial hepatectomy, all serve to initiate the cancer process. It has also been postulated that natural infidelity in DNA maintenance methylation could lead to heritable hypomethylation that would play a functional role in liver tumor initiation 115,142 . Following initiation, liver cancer promotion requires a proliferative growth advantage for expansion of initiated clones of hepatocytes 143 . During the process of clonal expansion, further genetic alterations may yield subclones that have enhanced cell proliferative rates and the ability to progress to malignancy.
There has been considerable study of the relative roles of cell proliferation versus cell death (apoptosis) in murine hepatocarcinogenesis, especially for nongenotoxic liver tumor promoters. It is generally accepted that the critical factor driving growth or regression of preneoplastic hepatocellular lesions in rodents is the balance between cell proliferation and apoptosis, both in the preneoplastic lesions and in the surrounding noninvolved hepatic parenchyma 144 . In a comparison of the roles of apoptosis and cell proliferation in C3H/He and C57BL/6J mice, it has been reported that cell proliferation is the prevailing determinant of liver tumor promotion by phenobarbital and nafenopin in both strains following diethylnitrosamine initiation 145 .
While enhanced cell proliferation is not a universal predictor of liver carcinogenesis 146 , there is little doubt that it is an important and necessary component of murine hepatocarcinogenesis observed for a spectrum of nongenotoxic agents. Hepatocarcinogenicity may be driven by enhanced cell proliferation and/or reduced apoptosis within proliferative lesions. The liver tumor response in male B6C3F1 mice attributed to dichloroacetic acid was suggested to involve the ability of dichloroacetic acid to suppress apoptosis rather than to enhance proliferation of initiated cells 147 . Similarly, suppression of apoptosis rather than cell proliferation was attributed to account for the growth of H-ras positive C3H mouse liver tumors 148 . On the other hand, dieldrin-promoted focus growth appears dependent upon cell proliferation and without effects on apoptosis at multiple concentrations 149 . In general, murine susceptibility to peroxisome proliferators as well as other nongenotoxic hepatocarcinogenic agents correlates more strongly with induction of DNA synthesis and less so with suppression of apoptosis 150 .
There is often a concerted effect between cell proliferation and cell death involving preneoplastic foci of cellular alteration and noninvolved hepatocytes, and this concerted effect may be influenced by the continuation of treatment. For example, in male B6C3F1 mice dieldrin and phenobarbital increased hepatocyte labeling indices and decreased apoptosis in eosinophilic and basophilic lesions, and both decreased upon cessation of treatment 151 . Furthermore, the dose of test agent, for example phenobarbital, can influence the relative rates of DNA synthesis and apoptosis, with higher doses promoting outgrowth of focal lesions by increased cell proliferation and decreased apoptosis 112 . In an initiation-promotion study using C57BL/6, B6C3F1, and C3H males, phenobarbital promotion following neonatal diethylnitrosamine initiation resulted in a strain-dependent (C3H>B6C3F1>C57BL) increase in focus number and size with focus growth rates positively correlated with cell proliferation and with intrafocal apoptosis occurring late 114 . The authors also suggested that extrafocal apoptosis was contributory to clonal growth via removal of adjacent normal cells 114 .
It is generally accepted that larger proliferative hepatocellular lesions grow faster than smaller lesions. In female B6C3F1 mice initiated with diethylnitrosamine at 12 days of age and subsequently promoted with unleaded gasoline vapor or dietary ethinyl estradiol, larger preneoplastic foci in the treated mice had higher hepatocyte labeling indices 152 . In a hepatocarcinogenesis study involving diethylnitrosamine initiation and phenobarbital promotion of C3H and C3B6F1 mice, hepatocellular adenomas had higher bromodeoxyuridine labeling indices than altered foci with the lowest labeling indices in noninvolved hepatocytes 113 . It is noteworthy that the ratio of labeling index in foci to non-involved hepatocytes rather than the level of cell proliferation alone was related to the enhanced liver tumor susceptibility of the C3H mice versus the C3B6F1 hybrid 113 .
Preweaning initiation with ethylnitrosourea followed by a partial hepatectomy as a growth stimulus in C57BL/6J and C3H/HeJ male mice led to tumor volume doubling times of 2.2 and 2.9 weeks, respectively 24 . Following neonatal initiation with diethylnitrosamine, three strains of mice were euthanized at 4 intervals up to 42 weeks of age. The number of liver tumors in C3H mice was 2.5 times that in B6C3F1 and C57BL/6, but the growth rates were similar for the 3 strains with a doubling time of 2.1 to 2.5 weeks 63 . Larger tumors grew faster 63,152 and the tumor periphery grew faster than the central portions, probably due to a gradient of oxygen, nutrition, and growth factors. There were also differences in tumor growth rates among multiple tumors in the same mouse. The growth rate in C57BL/6 liver tumors was initially fast but subsequently slowed down by 80%. It has been suggested that impaired growth of some liver tumors in C57BL/6 mice is associated with accumulation of secretory protein cytoplasmic inclusions in tumor cells 63 .
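To make the growth-kinetics figures above concrete, the short sketch below (our illustration, not taken from the cited studies) assumes purely exponential growth, V(t) = V0 · 2^(t/td), and shows how doubling times in the reported 2.1- to 2.5-week range translate into fold-increases in tumor volume; the 20-week window and unit starting volume are hypothetical.

```python
import math

def volume_after(weeks: float, initial_volume: float, doubling_time_weeks: float) -> float:
    """Exponential tumor volume growth: V(t) = V0 * 2**(t / td)."""
    return initial_volume * 2 ** (weeks / doubling_time_weeks)

def growth_rate_per_week(doubling_time_weeks: float) -> float:
    """Exponential rate constant k = ln(2) / td (per week)."""
    return math.log(2) / doubling_time_weeks

# Hypothetical 20-week window and unit starting volume, for illustration only.
for td in (2.1, 2.5):
    fold = volume_after(20.0, 1.0, td)
    print(f"doubling time {td} weeks -> k = {growth_rate_per_week(td):.3f}/week, "
          f"about {fold:.0f}-fold volume increase over 20 weeks")
```

Even this small difference in doubling time compounds to a several-fold difference in final volume over a few months, which is why modest strain differences in growth rate can produce large differences in detectable tumor burden.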
Different factors may affect hepatoproliferative rates in various studies. In female B6C3F1 mice, the route of administration can determine whether chloroform enhances cell proliferation in the liver 81 . Chloroform given by gavage but not in the drinking water increased cell proliferation of hepatocytes. Similarly, the gavage administration of trihalomethanes, including chloroform, bromodichloromethane, chlorodibromomethane, and bromoform, enhanced cell proliferation consistent with their known hepatocarcinogenicity 82 . Dietary restriction of 12-month-old male B6C3F1 mice significantly increased the rate of hepatocyte apoptosis and significantly decreased the frequency of proliferating cells compared to ad libitum fed mice 153 .
Enzyme induction
Hepatic enzyme induction has been proposed as a mode of action to explain what is considered a rodent-specific liver tumor response to treatment with phenobarbital and other nongenotoxic rodent hepatocarcinogens. In addition to the publication by McClain 104 , this epigenetic mechanism of hepatocarcinogenesis has been discussed in a 2000 review 154 . Generation of reactive oxygen species as a consequence of futile cycling of P450s 155 is capable of producing tissue necrosis and mutations that would favor development of hepatic neoplasia. Reactive oxygen species can lead to lipid peroxidative damage to hepatocyte cell membranes and may then cause functional alterations in membrane receptors that in turn exert liver promoting action 156 . Reactive oxygen species can also modulate gene expression and lead to altered regulation of growth factors that favor tumor promotion and progression 154 . Temporal increases in hepatic malondialdehyde, a biomarker of oxidative damage to lipids, following dieldrin treatment suggest that oxidative damage may be an early event in dieldrin-induced mouse hepatocarcinogenesis 157 . It has been proposed that proliferative mouse liver lesions induced secondary to enzyme induction, such as eosinophilic nodules seen in phenobarbital treated mice, are phenotypically different from liver tumors seen in control mice or mice treated with genotoxic agents and should not be considered a carcinogenic response directly relevant to the chemical that caused the enzyme induction 158 .
DNA methylation and imprinting
In recent years there has been a wider acknowledgement of the importance and contributory nature of DNA methylation throughout different stages of cancer development 115,159,160 . Both global and gene-specific alterations involving hypomethylation and hypermethylation of DNA are significant contributory factors to oncogenesis 160 . DNA methylation is required for maintaining the status of imprinted genes and for epigenetic activation of oncogenes and silencing of tumor suppressor genes. Temporal analyses of methylation status in mice used in various hepatocarcinogenesis protocols, particularly in initiation and promotion studies, provide evidence that altered methylation is not just a consequence of malignant transformation but plays a contributory role in liver tumor genesis. I hasten to point out that DNA methylation is involved in a wide variety of biological processes and, consequently, there is no simple explanation that will cover the diverse possible ways methylation can impact oncogenesis. However, there are definite associations between DNA methylation, genomic imprinting, and mouse hepatocarcinogenesis 115,160 . The insulin-like growth factor IGF2 and the mannose 6-phosphate (M6P)/IGF2R receptor genes are known to be imprinted, have a monoallelic expression in mice, and provide a possible explanation for enhanced sensitivity of some mice to liver tumor formation [163][164][165] . It is also of some relevance that imprinting and alterations in the M6P/IGF2R gene are associated with human liver tumors 166,167 and that there is loss of IGF2 imprinting in mouse hepatocellular carcinoma cell lines 165 . Genomic imprinting of murine genes regulated by androgen could also play a role in murine hepatocarcinogenesis 168 . The recent creation of conditional knock-out mice with tissue-specific inactivation of murine M6P/IGF2R should prove useful in future studies 169 .
Oncogenes & tumor suppressor genes
Proto-oncogenes are believed to play an important role in the genesis of neoplasia when genetically altered or expressed at increased levels. In conjunction with several other genes, growth factors, and transcription regulatory proteins, proto-oncogenes play pivotal roles in regulating cell growth, differentiation, and development. Oncogene activation associated with mouse liver tumors is both strain and carcinogen dependent. Mouse liver tumors involving several strains document the common observation of mutations in the H-ras proto-oncogene 170 with the frequency of H-ras mutations approximately 10-fold higher in genetically susceptible versus resistant strains 171 . For example, codon 61 point mutations in H-ras in diethylnitrosamine induced liver tumors occur with a frequency greater than 50% in C3H mice, 33% in B6C3F1 mice and 0% in C57BL/6 and BALB/c mice 171 . Greater than 50% of spontaneous B6C3F1 and C3H liver tumors harbor H-ras mutations compared to 7% in BALB/c 172 . Analysis of precancerous foci of cellular alteration for evidence of H-ras mutations has shown that a small proportion of these foci have H-ras mutations indicating that this change may be an early and critical event in murine hepatocarcinogenesis 170,171,173 . The frequency of activated H-ras in induced mouse liver tumors is typically higher for genotoxic versus nongenotoxic chemicals. For genotoxic hepatocarcinogens there is an inverse dose-response relationship to H-ras activation suggesting that higher doses may at least partially utilize a non-H-ras pathway for tumorigenesis 170 . The low frequency of H-ras mutations in tetrafluoroethylene induced mouse liver tumors is indicative that some mouse liver tumors produced by nongenotoxic chemicals favor development by a ras-independent pathway 174 . The tumor promoters dieldrin and phenobarbital increase the frequency of c-Ha-ras wild-type, but not of c-Ha-ras mutated focal liver lesions in male C3H/He mice 124 . It has been shown that the preferential outgrowth of proliferative H-ras positive hepatic lesions is mediated by suppression of apoptosis rather than by accelerated rates of cell proliferation 148 .
Other oncogenes can play a role in murine hepatocarcinogenesis. C-jun, which has important functions in cell proliferation and differentiation, is believed to play a role in hepatogenesis and, by implication, in hepatocarcinogenesis 175 . Increased expression of c-myc as well as H-ras has been reported in hyperplastic nodules and hepatocellular carcinomas in B6C3F1 mice treated with dichloroacetic acid and trichloroacetic acid 176 and increased expression of c-jun and c-myc in B6C3F1 mice treated with dichloroacetic acid, trichloroacetic acid and trichloroethylene 84,92 . Hypomethylation of raf has been associated with enhanced expression of raf in phenobarbital-induced liver tumors of B6C3F1 mice 119 .
The ability to identify possible tumor suppressor genes that are involved with mouse liver tumor development has been hampered by the relative lack of LOH detection in murine liver tumors 177 . It has been speculated that difficulty in detecting LOH in murine liver tumors may be a consequence of the high frequency of tetraploidy in mouse hepatocytes 178 . However, since hypermethylation silencing of tumor suppressor genes is known to occur 179 , this epigenetic mechanism could be involved in murine hepatocarcinogenesis. In addition to altered expression of proto-oncogenes and tumor suppressor genes, mutations and perturbations in other genes such as beta-catenin, E-cadherin, cyclin D1, and EGFR have been documented in mouse liver tumors 180,181 .
Hormones
With the exception of the female C57BR/cdJ mouse, both spontaneously occurring and treatment induced hepatocellular tumors occur with significantly greater frequency and multiplicity in males than in females 19,58,65,182 .
Castration of 2-day-old BALB/c mice followed by gavage administration of N-2-fluorenylacetamide starting at one week of age completely abolished development of hepatocellular carcinomas 183 . Using orchidectomy and ovariectomy alone, with and without subsequent administration of androgens and ovarian hormones, or simply administration of hormone without ablative surgery, several studies have shown that testosterone promotes and ovarian hormones suppress development of liver tumors in mice 43,70,73,182,[184][185][186] . As an indication of the magnitude of the hormonal effect, neonatal urethane exposure followed by castration or ovariectomy at 6 weeks of age resulted in a 96% and 20% incidence of hepatomas in sham-operated male and female B6C3F1 hybrids, respectively, while orchidectomized and ovariectomized hybrids had hepatoma incidences of 62% and 67%, respectively 187 . Using C3H-BALB/c sexually chimeric mice, Tsukamoto and co-workers showed that while basic strain differences are genetically determined, male hormonal or micro-environmental factors lead to promotion of liver cancer in both XX and XY hepatocytes 135 .
C57BL/6 × DS F1 mice injected at 18 days of age with 3'-methyl-4-dimethylaminoazobenzene and castrated at 23 days of age had a reduced multiplicity of adenomatous hepatic nodules but castration did not influence the incidence of either adenomatous nodules or carcinomas in males. Ovariectomy of females shortened the latency and increased both the incidence and multiplicity of adenomatous nodules 186 .
Endogenous liver tumor promotion of preneoplastic foci of altered hepatocytes by testosterone is considered a primary explanation for the well-documented increased susceptibility of male versus female mice 21,70 . The tumor promoting effects of testosterone are mediated by the hepatocyte androgen receptor 43 . Castration of male mice leads to a 3-fold increase in hepatocyte androgen receptors, a result that approximates the androgen receptor levels in female mouse hepatocytes 43 . The greater susceptibility to induction of liver neoplasia in male versus female mice associated with testosterone is attributed to its positive effect on the growth rate of preneoplastic foci of cellular alteration 21,55,70,188 . Either castration of male mice or administration of testosterone to female mice results in a decreased or increased growth rate of preneoplastic lesions, respectively, as well as a corresponding alteration in the multiplicity of liver tumors 21,182 . Surprisingly, however, the actual testosterone levels among susceptible and resistant inbred mouse strains are not correlated with liver tumor susceptibility 43 . Similarly, the degree of binding of testosterone to hepatocellular androgen receptor is not correlated with the differential liver tumor susceptibility among mouse strains 43 .
In investigating the role of growth hormone in mediating the effects of sex hormones on liver tumor development, investigators at the McArdle lab treated growth hormone deficient C57BL/6J lit/lit mice neonatally with diethylnitrosamine 184 . There was up to a 59-fold increase in tumors in the growth hormone deficient mice versus the wild type C57BL/6J with the effect significantly more dramatic in males than in females. These investigators then bred the growth hormone deficiency onto a C57BR/cdJ and a C3H/HeJ background and demonstrated that growth hormone deficiency suppressed liver tumor development to less than 1% 184 . The authors conclude that growth hormone is a potent endogenous regulator of susceptibility and its absence abrogates the effects of sex hormones and genetic background on liver tumor susceptibility.
Diet and body weight
It has long been known that the type of diet (natural versus synthetic versus semisynthetic), caloric intake, amino acid composition, lipid content, methyl deficiency, and related factors affect rodent toxicity and cancer studies used for safety assessment and hazard identification 189,190 . Hancock and Dickie reported a 100% incidence of hepatoma in D2CEF1 and CED2F1 hybrids at 8 to 14 months of age when switched to a high-protein, high-fat diet, whereas for the previous 16 years neither hybrid had liver tumors even at 28 to 32 months of age 191 .
In an effort to better control growth, body weight, and age-related disease including tumor incidence, the National Toxicology Program changed from the NIH-07 diet to a new diet designated NTP-2000. By reducing the caloric content, mainly by increasing fiber content and reducing protein, the diet could be fed ad libitum. NTP-2000 proved beneficial for a number of variables and, importantly, lowered body weight and the spontaneous incidence of liver tumors in male B6C3F1 mice 192 . An alternative approach of utilizing dietary restriction has been championed by the FDA's National Center for Toxicological Research as a means to minimize tumor and survival variability within and between studies [193][194][195] and has been shown to inhibit spontaneous and treatment-induced tumorigenesis 189,193 . A rather extreme dietary restriction of 60% led to increased survival in male and female B6C3F1 mice and resulted in an 8.6-fold reduction in liver tumor incidence in control B6C3F1 mice 196 . Dietary restriction started as late as 3 months after treatment with a potent hepatocarcinogen can still have an inhibitory effect on mouse liver tumor development 197 .
There is a strong correlation between body weight at 52 weeks and the subsequent incidence of liver tumors in control B6C3F1 mice 193,198,199 and a concern that body weight differences between dosed and control groups could mask carcinogenic effects sensitive to body weight changes 197,198 . Consequently, reduction of body weight gain would be expected to reduce the spontaneous liver tumor burden. Using tumor risk data from several hundred B6C3F1 mice, Leakey and coworkers constructed body weight curves to predict the amount of dietary restriction required to achieve a 15 to 20% spontaneous liver tumor incidence 194 . In a chloral hydrate bioassay using this approach, they were successful in achieving their objective and, in addition, the variable feed restriction paradigm provided for a statistically significant dose-response in their study versus the ad libitum treated cohort 200 . As further testimony to the importance of diet, it has been shown that the post-weaning diet in C57BL/6J × Cast/EiJ F1 mice can affect IGFII methylation and lead to permanently decreased IGFII expression via imprinting 201 .
Cell-cell communication
Multiple in vitro and in vivo experimental studies have shown that fully functional gap junctions inhibit both spontaneous and treatment induced neoplasia and that a number of nongenotoxic rodent carcinogens inhibit gap junction cell-cell communication, leading to enhanced cellular proliferation and increased neoplasia [201][202][203][204][205] . Gap junction cell-cell communication is controlled by connexins which constitute a family of tumor suppressor genes 203 .
For the most part, studies of gap junction intercellular communication in hepatocytes have been carried out on primary cultures. Using primary B6C3F1 hepatocytes, it has been demonstrated that endosulfan and at least one endosulfan metabolite, plus chlordane and heptachlor, inhibit gap junctional communication in a dose-dependent manner 206 . A dose-dependent inhibition of gap junctional cell-cell communication in primary mouse hepatocytes has also been shown for phenobarbital, DDT, and lindane and is most probably mediated via cAMP 207,208 . Similar inhibition has been documented in primary mouse hepatocytes for monoethylhexyl phthalate, trichloroacetic acid, trichloroethylene, nafenopin, and Aroclor 1254 207 .
Studies of liver tumor promotion in rats using phenobarbital, polychlorinated biphenyl, and dichlorodiphenyltrichloroethane showed increased hepatocellular proliferation and decreased gap junction cellular communication, albeit without a quantitative association 209 .
Cell-cell communication can play an important role in mouse hepatocarcinogenesis. Mice deficient in connexin32, the major gap junction protein expressed in hepatocytes, had 25- and 8-fold increases in spontaneous liver tumors in males and females, respectively, compared to wild-type controls 202 . These same authors showed an increased incidence of liver tumors and a faster growth rate of these tumors in connexin32-deficient mice on a C57BL/6/129/Sv-F1 background compared to controls one year after neonatal treatment with diethylnitrosamine 202 .
Both oncogenes and growth factors have been shown to downregulate gap junctional function, and both rat and mouse hepatic neoplasms have altered gap junctional cell-cell communication 207 . Furthermore, hypermethylation inactivates connexin genes 210 , suggesting that methylation may contribute to carcinogenesis by disruption of gap junctional intercellular communication.
Viral infection
A retrospective review of over 30 control groups and over 30 low dose and high dose groups of B6C3F1 mice of each sex was undertaken to determine the effect of viral infection on tumor incidence in conventional cancer bioassays 211 . Sendai virus infection was associated with increased incidence of liver tumors and lymphomas in B6C3F1 males but no increase of tumors in female B6C3F1 mice. The increased tumor response may be a consequence of higher survival of the control, low-dose, and high-dose groups 211 .
Reversibility (Conditional Hepatocarcinogens)
It is certainly well documented that a major hallmark of liver tumor promotion by nongenotoxic rodent hepatocarcinogens is the regression of preneoplastic foci and nodules following cessation of treatment 151,212,213 . A dramatic example of the extent of this process is the regression of 30% of chlordane induced hepatocellular adenomas and carcinomas within a few weeks after stopping treatment 214 . Regression of frank liver neoplasia has been reported in humans following cessation of growth hormone supplements containing androgens 215,216 , in women after ceasing use of oral contraceptives 217,218 , and in rats after cessation of nafenopin 212 , phenobarbital and clofibrate 219 , and the peroxisome proliferator WY-14,643 220 . Agents which require continual administration for the stable presence and growth of preneoplastic and neoplastic rodent liver lesions are probably best categorized as conditional hepatocarcinogens.
Historical Controls
For assessment of liver tumor data in cancer bioassays, laboratory-specific historical control data are extremely valuable in putting unusually high or low tumor responses into perspective 198,221,222 . Acquiring accurate historical control data for liver tumors of different inbred mouse strains is problematic because of a host of variables such as caging, diet, individual study duration, route of test article administration, and the period of time over which the control data are acquired. The best data come from labs with consistent study protocols and with periodic updating of a moving window of observation to generate the most relevant contemporary historical control information. While historical control data from organizations such as the National Toxicology Program and the FDA National Center for Toxicological Research are readily available, similar data from industrial organizations are generally not available to the public. Based on extraction of concurrent control data from published reports, some crude estimates could be generated, but these would constitute isolated examples of concurrent controls rather than solidly reliable historical controls. Based on web site data for March 2007, the NTP mean (± SD) historical control data for B6C3F1 mouse liver tumors with all routes combined and using NTP-2000 diet is shown in Table 3.
Some comparative historical data extracted from the NTP Workshop on liver tumors in different mice but without background husbandry and study duration data 223 are presented in Table 4. There are dramatic strain and stock differences in liver tumor incidences and, based on data in Tables 3 and 4, the NTP B6C3F1 mouse has a particularly high and variable background incidence of liver tumors.
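As a minimal illustration of the moving-window updating of historical controls described above, the following sketch (our own; the study records, field layout, and five-year window are hypothetical and do not reproduce any NTP or laboratory dataset) summarizes control liver tumor incidence over only the most recent studies.

```python
from statistics import mean, stdev

def moving_window_controls(records, window_years=5, current_year=2007):
    """Mean, SD, and n of control liver tumor incidence (%) for studies
    completed within the most recent `window_years` before `current_year`."""
    recent = [incidence for year, incidence in records
              if current_year - window_years < year <= current_year]
    if len(recent) < 2:
        raise ValueError("need at least two studies inside the window")
    return mean(recent), stdev(recent), len(recent)

# Entirely hypothetical records: (study completion year, % control mice with liver tumors)
history = [(2000, 26.0), (2002, 33.5), (2004, 42.0), (2005, 31.0), (2006, 38.5), (2007, 45.0)]
m, sd, n = moving_window_controls(history)
print(f"moving-window historical control incidence: {m:.1f}% +/- {sd:.1f}% (n = {n} studies)")
```

The point of the window is simply that older studies run under different diets, caging, or durations drop out of the summary, keeping the historical control range contemporary with the study being evaluated.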
Rationale for Selection of Experimental Mice
Strain and stock selection of experimental mice for toxicity and carcinogenicity studies is based partly on ready accessibility, organizational history, and the specific purposes of the study. Unfortunately, the reasons for selection of specific mice for cancer bioassays are often obscure. However, once selected for use in hazard identification screening, there are two primary reasons for the continued use of the particular strain: (1) reluctance to lose the historic control database, and (2) historic inertia. In the early phases of the National Cancer Institute cancer bioassay program several mouse strains including HaM/ICR, CD-1, and strain A were used 224,225 . The report of a large scale study involving 20,000 mice exposed to 127 different pesticides and industrial chemicals compared responses from two F1 hybrids: (C57BL/6 × C3H/Anf)F1 and (C57BL/6 × AKR)F1 16 . The B6C3F1 hybrid was more effective at identifying the chemicals that were considered known carcinogens and, as a result, it ultimately became the default mouse for the National Cancer Institute cancer bioassay program. Use of this hybrid has continued with the transfer of the bioassay program to the National Toxicology Program.
A recent National Toxicology Program workshop was held to consider the most appropriate rodent strains for hazard identification studies 223 and, although there was considerable discussion considering mouse strains other than the B6C3F1, a decision to change has not been forthcoming. The contemporary National Toxicology Program philosophy would consider use of either additional or alternative mouse strains for specific studies on a knowledge-based approach to testing and based on specific issues related to the agent under study. There was general interest, however, in considering use of multiple isogenic mouse strains, especially if appropriate background information were available and in spite of potential logistical constraints 223 .
The outbred CD1 and NMRI stocks and inbred C57BL/10 mice are generally favored for carcinogenicity studies by the pharmaceutical, pesticide, and agrochemical communities 103 . A comfort level in use of these particular mice plus a robust historical control database are reasons these organizations maintain use of their preferred mouse.
Evaluation, Interpretation, and Relevance
General
As a separate exercise from assessing the relevance of a positive or negative cancer bioassay response, it must first be determined if the response observed in the test animals is credible. A negative cancer outcome in both sexes of two test species is generally interpreted to reflect the probability that the agent under test is not a potential human carcinogen. This judgment is not made in a vacuum but rather takes into account many other factors including whether the agent is genotoxic, the toxicokinetics in the bioassay models, evidence of a biological response indicative of adequate exposure, sufficient survival for the duration of the bioassay, etc. Given that all factors favorably support the bioassay outcome, it is probable that existing or pending regulatory actions will be mitigated. However, it is readily understood that one can never prove a negative; susceptible human subpopulations or individuals might react unfavorably to agent exposure; and the bioassay outcome could be reflective of a false negative response 226 .
Multi-site and trans-species carcinogens
Positive cancer responses in the bioassay test species engender a spectrum of interpretative response and sometimes a considerable degree of controversy. This follows from the fact that bioassays yield varying degrees of response. At one end of the spectrum is the clear response in which multiple cancers occur early in the study involving multiple tissues in both male and female rats and mice - a multi-site, trans-species rodent carcinogen. At the other end of the spectrum of response is the situation where there is a marginal increase in a common spontaneous tumor and the response is seen in one sex of one species and typically only at the highest dose. For those individuals who subscribe to a conservative public health policy, these responses are often considered indicative of potential human risk for developing cancer, at the very least for susceptible human subpopulations or in situations where there may be in utero human exposure. The majority of positive outcomes, which in the case of the National Toxicology Program represents approximately 50% of agents tested, fall between the extremes of response and are generally classified as showing either clear or some evidence of carcinogenicity. Appropriate regulatory agencies and bodies then take the bioassay outcome into account as one of the factors in the weight of evidence for determining the potential human health hazard. Multi-site and trans-species carcinogens would provide more compelling justification for protective regulatory decisions than a single sex, single species marginal response. Situations where there is an equivocal response ideally require additional, and perhaps more rigorous, studies; but cost considerations generally preclude additional studies except in unusual circumstances.
Mouse liver carcinogens
Concern about a mouse liver tumor response arises in situations where it is the sole response in a two-species carcinogenicity study, where it is typically seen in liver tumor-susceptible strains given high doses of nongenotoxic agents, when the likely human exposure is significantly lower than bioassay-responsive doses, when the likely mode of action is expected to have a threshold, and when the mode of action is not relevant to humans. Given what we have learned from various bioassay toxicity and carcinogenicity databases generated over the past 40 years, short and medium term exposures can reasonably be expected to identify predictive rodent- and chemical-specific biomarkers and, thus, to establish predictive testing strategies 227,228 . This may be relatively easy for prediction of liver responses by use of clinical chemistry, organ weight, histopathology, and cell proliferation measurements and should work equally well for rats as well as mice.
Mouse debate
Over the last three decades and especially during the last few years, there has been considerable concern and debate regarding the utility of the mouse for long-term rodent carcinogenicity testing 9,229-234 . A few quotes (without attribution) serve to illustrate one position regarding the utility of mouse bioassays: "It is not appropriate to make human risk assessment decisions based on a mouse liver tumor response" ".... it is suggested that strong consideration be given to deleting the mouse as a routine test animal ...." "So, my conclusion is, in the future, probably the near future, with sufficient scientific evidence added, we can eliminate the long-term mouse bioassay from our protocol." "For non-genotoxic hepatic tumor promoters, the weight of the evidence would indicate that a mouse liver tumor response is not a relevant indication of human cancer risk." ".... have strengthened my views that from a regulatory standpoint the use of mouse strains with a high spontaneous incidence of hepatic tumours for routine carcinogenicity testing is undesirable." "The utility of the mouse for purposes of routine screening of chemicals for carcinogenic potential is, therefore, highly questionable." A primary basis for recommendations that the mouse no longer be used for carcinogen hazard identification stems from the liver tumor response. Some argue that, at least for pharmaceuticals, the mouse cancer bioassay is redundant in that it has not provided evidence of carcinogenicity that was not already identified in the rat cancer bioassay 9,235 . The proponents of using the mouse bioassay are concerned that dropping it from the armamentarium will preclude the possibility of identifying trans-species carcinogens 223,232,[236][237][238] . The current development of new genetically engineered mouse models to better understand and combat cancer and ongoing efforts to map the genome of multiple inbred mouse strains 10 suggest that it would be prudent to retain the mouse as a cancer bioassay species. Although the debate regarding the utility of the mouse as a cancer bioassay model will probably continue, it appears that mice will retain a role in understanding the complexity of cancer as a disease, identifying causative factors, serving as a screening model for hazard identification, and providing a basis for therapy and, hopefully, prevention.
Relevance
Of all the concerns and controversies surrounding rodent cancer bioassay programs, relevance to potential human disease is of critical importance. Since the objective of the rodent cancer bioassay is to provide information that will permit the avoidance, reduction, and prevention of carcinogenic risk to humans, the relevance of the bioassay must be defined. Not surprisingly, there are two schools of thought on this topic 228,[239][240][241][242][243][244][245] . It is generally accepted that agents whose mode of action involves direct interaction with and alteration of DNA should be considered to have human carcinogenic potential. These DNA-reactive agents are typically considered to not have a threshold, although there may be some low level of exposure to genotoxic agents that simply will not result in cancer in a human lifetime. The majority of agents tested in contemporary rodent cancer bioassays, however, are non-genotoxic and, if they indeed do lead to cancer in rats and/or mice, the mechanism by which that rodent cancer occurs should ideally have relevance to humans. Over the years, diligent investigative work subsequent to observed cancer responses in the rodent bioassay has provided convincing evidence that some bioassay cancer responses are rodent-specific and simply not germane to humans. Agencies have tended to mitigate regulatory decisions when clear rodent-specific mode of action data are provided and all other factors are considered, including a cost-benefit analysis 246,247 .
The Future
The rodent cancer bioassay is unlikely to be abandoned in the near future. Even given an appealing alternative for identifying carcinogenic hazard, just the process of validating that alternative and overcoming historical inertia could easily take several years. Furthermore, the rodent cancer bioassay has evolved to address various toxicities other than just cancer. Concerns about reproductive toxicity, neurotoxicity, immunotoxicity, and developmental effects have captured public awareness and organizations are increasingly addressing the issue of toxicity and other noncancer consequences for human health.
Contemporary rodent bioassay design frequently incorporates sub-studies for the generation of information about mode of action. Concerns about the sensitivity of the rodent bioassay and growing concern about long-term and trans-generational human health issues are leading to the prospect of running bioassays that commence with in utero exposure followed by continuation of exposure during nursing and following weaning. This development will pose considerable logistical considerations for testing as well as being resource intensive. The general belief is that in utero exposures will provide a more sensitive bioassay animal model. Its use will undoubtedly identify more agents as hazards, including cancer hazards. If such studies are undertaken, they will hopefully be designed to simultaneously develop predictive biomarkers to help obviate the need for continual long-term resource intensive testing.
While the majority of bioassays have been carried out on single specific agents in the workplace and general environment, there is a growing interest in understanding the health consequences for the total environmental load of exposures and in testing relevant mixtures of agents that would likely reflect realistic human exposures. Examples include complex non-standardized agents such as readily available over-the-counter herbal medicines, combinations of exposures from water disinfection byproducts, physical exposures from electromagnetic fields and cell phones, nanoscale materials, and endocrine disruptors to name a few. This type of testing will require novel exposure scenarios and will create logistical hurdles and new issues of interpretation and relevance. The problems will be all the more complex as the rodent bioassay is expanded to include non-cancer endpoints, exploration of interacting modes of action, and identification of biomarkers of exposure and effect that might have applicability in protecting human health.
"year": 2009,
"sha1": "1b35075f9bc07a294f4586b8e24fb6fbbdddd197",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/tox/22/1/22_1_11/_pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b35075f9bc07a294f4586b8e24fb6fbbdddd197",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The detoxification mechanisms of low-accumulating and non-low-accumulating medicinal plants under Cd and Pb stress
Recently, the levels of heavy metals in medicinal plants have aroused widespread concern because these elements usually enter the food chain through plants and are gradually passed to the final consumers, greatly threatening human health. To reduce heavy metal pollution, it is necessary to solve the problem at the source to ensure environmental quality during medicinal material production. We used low-accumulating and non-low-accumulating medicinal plants to remediate soil contaminated by Cd and Pb. This experiment aims to study the amino acid levels in root exudates and the antioxidant enzymes and malondialdehyde (MDA) in leaves, and to discuss the detoxification mechanisms of low-accumulating and non-low-accumulating medicinal plants under Cd and Pb stress. Catnip, thyme, and Fineleaf Schizonepeta Herb were cultivated in soil contaminated with Cd or Pb. Enrichment factor (EF) and translocation factor (TF) values were calculated to determine which plants are low accumulators of Cd or Pb. The relationships between the amino acid levels in root exudates, the levels of antioxidant enzymes, the heavy metal species present, the heavy metal concentrations, and the plant species were discussed. Under Pb and Cd stress, the total amount of amino acids secreted by plant roots and the level of each amino acid were associated with the heavy metal concentration and the plant species. Plants alleviate Pb and Cd stress through increased malondialdehyde (MDA) and antioxidant enzyme levels. Thyme can be used as a low-accumulating medicinal plant at any of the tested concentrations of the heavy metal Pb. These results are of great significance for understanding the chemical behaviors of heavy metals at the root/soil interface under Cd and Pb stress and the detoxification mechanisms of medicinal plants.
Introduction
At present, heavy metal pollution in soil has attracted worldwide attention. Approximately 19.4% of arable land (2.6 × 10^7 hm^2) in China suffers from heavy metal pollution. 1 According to a 2014 National Soil Pollution Survey Bulletin reported by the Ministry of Environmental Protection of China, Cd levels in soil exceed the standard in 7% of cases, making Cd the most prevalent heavy metal pollutant. Cd is known to be extremely toxic, and it can be easily taken up by plant roots and transferred to the aerial parts. 2 A well-documented phenomenon is Itai-Itai disease, which arises due to the accumulation of Cd residue in rice. 3 Pb, a typical and highly toxic heavy metal pollutant that exists in many fertilizers, is easily absorbed and eventually accumulated in the edible parts of plants. It can easily enter the human body through the food chain and can harm human lives and health. According to Rodriguez et al., 4 because of the slow mobility of Pb, inorganic and elemental Pb migrate slowly from the soil to underground water, and Pb can be readily taken in by various plants. Even though the Pb content in the higher parts of plants (such as fruits, seeds, and leaves) is low, it remains toxic when consumed by humans due to the large accumulation of Pb in the roots. 5 In the last quarter of the last century, people became more and more interested in using naturally sourced substances, particularly herbal medicines, for therapeutic purposes. The WHO estimates that herbal medicines are currently used by approximately 4 billion people for certain aspects of primary health care. However, the existence of heavy metals may have a strong impact on the safety, efficacy, and quality of natural products prepared from medicinal plants. 6 Incidents of excessive heavy metals in traditional Chinese medicine (TCM) have occurred occasionally, seriously damaging the image of TCM and causing significant economic losses to the TCM industry. 7 At present, heavy metal pollution is becoming increasingly serious. To reduce heavy metal pollution, in addition to taking effective control measures, it is also necessary to solve the problem from the source to ensure environmental quality during medicinal material production. Hence, phytoremediation has become more and more important. The phytoremediation of heavy metals is of great significance [8][9][10] and it is considered one of the better measures to fix heavy-metal pollution due to its low cost, which can be 1000 times lower than traditional restoration methods such as excavation and reburying. 11 A desired phytoremediation model would not only be tolerant to heavy metals, but would also absorb heavy metals efficiently and grow rapidly, providing good economic benefit. 12 However, few plants are both hyperaccumulators and rapid-growth plants. Most so-called hyperaccumulators exhibit great potential for heavy metal accumulation, but their biomass accumulation is very low. On the other hand, some low-accumulators show higher biomass production, even though they have lower uptake capacities than hyperaccumulators. Zhi et al. 13 found that Chinese soybean (Tiefeng 29) was a low Pb-accumulator. Research by Manan et al. 14 showed that C. asiatica is a low-accumulator of Zn and Pb. Research by Huang et al. 15 indicated that Chinese cabbage (No. 12, No. 21) and cabbage (No. 6, No. 7) are low Cd-accumulating vegetables.
The "rhizosphere" is generally dened as an area of soil around the roots where microbes are highly active and affected by the secretion of a microbial community by the roots. 16 Root exudates usually include amino acids, sugars, organic acids, high molecular weight compounds, and phenolic compounds. 17,18 Low molecular weight compound (amino acids, sugars, organic acids, phenols) and high molecular weight compound (polysaccharides and proteins) root exudates play critical roles in rhizospheric processes. 19 In addition, the root exudates of wheat and rice also showed a certain degree of toxicity under Pb and Cd stress, unlike plants that were not treated with Pb and/or Cd. 20 Salt et al. 21 found that citrate-and Ni-chelated histidine accumulated in plant root exudates that were not hyperaccumulating; therefore, they can assist in Ni detoxication strategies by helping to reduce the uptake of Ni. Therefore, root exudates have an inuence on the distribution and absorption of Pb and Cd in plants. 22 Reactive oxygen species (ROS) accumulate and damage cell membranes under heavy-metal stress, leading to increased membrane lipid peroxidation products. The malondialdehyde (MDA) content is a lipid peroxidation indicator, representing the extent of membrane lipid peroxidation and the intensity of a plant's reaction to stress conditions. 23 To resist the negative effects of ROS accumulation and improve the survival rates of plants under these stresses, plants have to modulate the expression of related genes in complex antioxidant enzyme systems. Peroxidase (POD), superoxide dismutase (SOD), and catalase (CAT) are the main enzymes for scavenging ROS in plants. 24 SOD is the rst line of defense for plants when it comes to removing ROS, and it takes a core position in protecting enzyme systems. Its main function is to remove O 2À , which can disproportionate O 2À to generate H 2 O 2 . The main function of CAT and POD is to remove H 2 O 2 in organisms. 25 We use low-accumulating medicinal plants to remediate soil contaminated by Cd and Pb. A plant with low accumulation is one that can show reduced element absorption when the element concentration in the matrix is high or the net excretion of the element is high. Even though the concentration is high in the matrix, the element concentration in such plant tissue is still very low. 26 The cultivation process of low-accumulating plants is simple and poses no ecological risk, and the selected crops can be directly promoted in local communities, so it is an ascendant approach to use low-accumulating plants to remediate polluted soil.
The purpose of this paper is to study the detoxification mechanisms of low-accumulating and non-low-accumulating plants under Cd and Pb stress. By analyzing the amino acid contents of root exudates and the enzyme contents of leaves, the repair mechanisms of low-accumulating and non-low-accumulating plants can be further analyzed.
The cultivation of medicinal plants
Soil samples were obtained from depths of 0-20 cm from the surface, passed through a 4 mm sieve, and then mixed with suitable amounts of CdCl2·2.5H2O or Pb(NO3)2 solution. Plastic flowerpots (20 cm diameter × 15 cm depth) were used, containing 2.5 kg of soil in each case. Five treatments were adopted: 1 CK treatment (control, no Cd or Pb); 2 Cd treatments, i.e., T1 (1.0 mg Cd kg−1 soil) and T2 (2.5 mg Cd kg−1 soil); and 2 Pb treatments, i.e., T1 (500 mg Pb kg−1 soil) and T2 (1500 mg Pb kg−1 soil). T1 and T2 indicate low and medium contamination levels. The heavy metal pollution classification standard and single pollution index methods were used to assess the soil heavy-metal pollution levels. 27,28 500 g of each soil sample was accurately weighed and placed in a 300-mesh Nylon rhizosphere bag with a diameter of 15 cm. A rhizosphere bag was placed in each plastic flowerpot and the soil served as the plant rhizosphere soil. 29 The soil was watered and the pots were then kept under constant conditions for a month, which is a sufficient time for various adsorption mechanisms in the soil to obtain a natural balance.
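As an illustration of the single pollution index assessment mentioned above, the sketch below assumes the conventional definition Pi = Ci/Si (measured soil concentration divided by the screening or standard value) and commonly used classification bands; the standard value and cut-offs shown are placeholders and are not necessarily those applied in this study.

```python
def single_pollution_index(measured_mg_per_kg: float, standard_mg_per_kg: float) -> float:
    """Single pollution index Pi = Ci / Si for one heavy metal in soil."""
    return measured_mg_per_kg / standard_mg_per_kg

def pollution_class(pi: float) -> str:
    """Commonly used (but jurisdiction-dependent) classification bands."""
    if pi <= 1.0:
        return "unpolluted"
    if pi <= 2.0:
        return "lightly polluted"
    if pi <= 3.0:
        return "moderately polluted"
    return "heavily polluted"

# Placeholder numbers only: a measured Cd level of 0.8 mg kg-1 against an
# assumed screening value of 0.6 mg kg-1.
pi = single_pollution_index(0.8, 0.6)
print(f"Pi = {pi:.2f} ({pollution_class(pi)})")
```

The same index is computed separately for each metal; the pollution level assigned to a treatment then depends on both the background soil concentration and the standard value chosen.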
The catnip (Nepeta cataria), thyme (Thymus spp.), and Fineleaf Schizonepeta Herb (Nepeta cataria L.) seeds used in this study were all obtained from a seed company. After sterilizing with 2% (v/v) hydrogen peroxide for 10 minutes, the seeds were washed with distilled water several times and sowed in each pot. No chemical fertilizers were used in these pots. During the experiments, soil moisture loss was compensated for to maintain a soil moisture content of 75-80%.
Determination of heavy metals in soil and plant samples
After harvesting, the plants were rinsed entirely with tap water and deionized water 3 times, and they were then divided into shoot and root groups. After drying at 105 °C for 5 minutes, the samples were completely dried in an oven at 70 °C, and then weighed and ground into powder. 0.50 g of soil sample and 0.50 g of plant sample were digested in 12 ml of solution containing 13% concentrated HClO4 and 87% concentrated HNO3 (v/v). 30 ICP-AES (Spectro Arcos, Germany) was used to determine the Cd and Pb levels. 29 The recoveries of these two elements were between 94% and 99%.
Determination of amino acids in root exudates
Approximately 2 g of rhizosphere soil was weighed and placed in a 10 ml centrifuge tube. To inhibit microbial activity, 4 ml of 0.1% H3PO4 solution was added to the root exudate components. In order to reach apparent equilibrium desorption, the tube was first shaken using a rotating shaker at a speed of 200 rpm in the dark; the tube was then centrifuged at 5000 rpm for 5 minutes to remove all microorganisms, and the supernatant was passed through a syringe filter (0.45 μm). 0.25 g of soil sample was accurately weighed, divided into three portions, and placed in a 100 ml hydrolysis bottle; 20 mL of 6 mol L−1 hydrochloric acid was then added, followed by hydrolysis at 105 °C for 12 h. The amino acids were determined via liquid chromatography (Thermo 3000).
Determination of leaf enzymes
Guaiacol oxidation was used to measure POD activities. 31 The total SOD activity was measured using nitro-blue tetrazolium. 32 The total CAT activity was determined via spectrophotometry. 33 MDA was determined using the thiobarbituric acid method. 34
Statistical analysis
All treatments were repeated three times and each sample was assayed three times in parallel. Excel 2018 and SPSS 18.0 were used to analyze data. Significant differences between means (P < 0.05) were tested using the least significant difference (LSD) method. All results are expressed as dry weights.

Effects of Cd or Pb stress
3.1.1.1 Effects of Cd stress on plant heights. Under medium-concentration Cd stress, the plant heights of catnip, thyme, and Fineleaf Schizonepeta Herb were 1.06, 1.18, and 1.86 times those of the control groups, respectively (Fig. 1). In general, the plant heights of the control groups were lower than those of the catnip, thyme, and Fineleaf Schizonepeta Herb groups under Cd stress, and the plant heights of the three plants reached maximum levels under low Cd stress. This shows that Cd stress can promote the growth of catnip, thyme, and Fineleaf Schizonepeta Herb.
3.1.1.2 Effects of Pb stress on plant heights. Under low-concentration Pb stress, the plant heights of catnip, thyme, and Fineleaf Schizonepeta Herb were 0.79, 0.58, and 1.35 times those of the control groups (Fig. 2). Under high-concentration Pb stress, the plant heights of catnip, thyme, and Fineleaf Schizonepeta Herb were 0.93, 0.52, and 0.81 times those of the control groups, respectively. The plant height changes of the three plants under Pb stress were different. Under Pb stress, the plant heights of the control groups were higher in the cases of catnip and thyme, and the changes in the thyme heights were more obvious. The plant heights of Fineleaf Schizonepeta Herb showed a trend of increasing first and then decreasing as Pb stress increased. This indicates that Pb stress can inhibit the growth of catnip and thyme, while it promotes the growth of Fineleaf Schizonepeta Herb at low concentrations and inhibits it at high concentrations.
3.1.2 Effects on the dry weights of medicinal plants 3.1.2.1 Effects of Cd stress on dry weights. The dry shoot and root weights of catnip, thyme, and Fineleaf Schizonepeta Herb were affected by different concentrations of Cd (Fig. 3). In general, the dry weights of the roots and shoots of catnip, thyme, and Fineleaf Schizonepeta Herb showed a parabolic change as the concentration of Cd was increased. It can be seen that catnip, thyme, and Fineleaf Schizonepeta Herb are resistant to low Cd stress but weak under high-concentration Cd stress. In addition, under low Cd stress, the dry weights of the shoots and roots of the three plants were much higher compared with the control groups. It is speculated that the growth of catnip, thyme, and Fineleaf Schizonepeta Herb will be promoted under a certain concentration of Cd stress.
3.1.2.2 Effects of Pb stress on dry weights. Under low-concentration Pb stress, the catnip shoot and root dry weights tended to decrease (Fig. 4), while under medium-concentration Pb stress, the catnip shoot and root dry weights increased significantly and were much higher than those of the control group. Under Pb stress, the shoot and root dry weights of Fineleaf Schizonepeta Herb decreased.
3.1.3 The identification of medicinal plants with low heavy metal accumulation. The heavy metal levels in the shoots and roots of catnip, thyme, and Fineleaf Schizonepeta Herb are shown in Table 1.
Using translocation factor (TF) and enrichment factor (EF) values to identify low-accumulating medicinal plants, 35 the following criteria can be applied (Table 2): (1) the heavy metal levels in the aboveground parts are below the maximum allowable doses for medicinal plants when the Cd content is ≤ 0.3 mg kg−1 and the Pb content is ≤ 5.0 mg kg−1 (referring to the Chinese Pharmacopoeia, 2010 edition); (2) when the translocation factor, TF = C shoot /C root , is <1.0, a plant is low accumulating, where C shoot is the concentration of Cd or Pb in the aerial parts of the medicinal plant and C root is the concentration of Cd or Pb in the medicinal plant root; (3) when the enrichment factor, EF = C shoot /C soil , is <1.0, a plant is low accumulating, where C shoot is the Cd or Pb concentration in the aboveground parts of the medicinal plant and C soil denotes the total Cd or Pb concentration in the corresponding soil sample; and (4) in the case of high heavy metal tolerance, the plant biomass in soil polluted by heavy metals does not decrease remarkably.
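Because TF and EF are simple ratios, the screening in criteria (1)–(3) is easy to automate. The short Python sketch below is ours and purely illustrative: the concentration values are placeholders rather than the measured data, and the helper name screen is not from the study; only the Pharmacopoeia limits are taken from the text above.

```python
# Illustrative TF/EF screening for low-accumulating medicinal plants.
# Concentrations are placeholders (mg kg-1 dry weight), not measured data.

PB_LIMIT_MG_KG = 5.0   # Chinese Pharmacopoeia (2010) limit for Pb
CD_LIMIT_MG_KG = 0.3   # Chinese Pharmacopoeia (2010) limit for Cd

def screen(c_shoot, c_root, c_soil, limit):
    """Return (TF, EF, passes) for the low-accumulation criteria quoted above."""
    tf = c_shoot / c_root      # translocation factor
    ef = c_shoot / c_soil      # enrichment factor
    passes = (c_shoot <= limit) and (tf < 1.0) and (ef < 1.0)
    return tf, ef, passes

# Hypothetical example: an aboveground Pb level of 2.4 mg kg-1
tf, ef, ok = screen(c_shoot=2.4, c_root=240.0, c_soil=1500.0, limit=PB_LIMIT_MG_KG)
print(f"TF = {tf:.2f}, EF = {ef:.4f}, low-accumulating = {ok}")
```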
In a low-concentration Pb environment, the Pb levels in the aerial parts of catnip and Fineleaf Schizonepeta Herb were less than 5.0 mg kg−1 and 2.68 mg kg−1, respectively. The enrichment factors of catnip and Fineleaf Schizonepeta Herb meet the standard for low-accumulating medicinal plants, both being 0.01, and their translocation factors are 0.35 and 0.64, respectively. In a medium-concentration Pb environment, the aboveground Pb content of thyme is 2.39 mg kg−1, and the enrichment factor and translocation factor are 0.00 and 0.01, respectively; in a low-concentration Pb environment, the aboveground Pb content of thyme is 2.18 mg kg−1, the enrichment factor is 0.00, and the translocation factor is 0.09. It can be seen that thyme is a low-accumulating plant with respect to Pb.
3.1.4 Effects on amino acids in root exudates
3.1.4.1 Effects of heavy-metal stress on the total amino acid content. Under Cd stress of 1.0 mg kg−1 and 2.5 mg kg−1, the total amino acid levels secreted by catnip roots were 1.08 times and 1.01 times those of the control groups (Fig. 5), the total amino acid levels secreted by thyme roots were 1.02 times and 1.09 times those of the control groups, and the total amino acid levels secreted by Fineleaf Schizonepeta Herb roots were 1.34 times and 1.13 times those of the control groups. Under 500 mg kg−1 and 1500 mg kg−1 Pb stress, the total amino acid levels secreted by catnip roots were 1.22 times and 1.00 times those of the control groups (Fig. 6), the total amino acid levels secreted by thyme roots were 1.02 times and 1.09 times those of the control groups, and the total amino acid levels secreted by Fineleaf Schizonepeta Herb roots were 1.21 times and 0.91 times those of the control groups.
3.1.4.2 Effects of heavy-metal stress on the types of amino acids. Under Cd stress, the roots of catnip, thyme, and Fineleaf Schizonepeta Herb secreted six main amino acids (Fig. 5): Tyr, Phe, Lys, Pro, His, and Arg. Under both medium- and low-concentration Cd stress, the amount of proline secreted by the root systems was the highest, followed by phenylalanine, arginine, histidine, lysine, and tyrosine.
Under Pb stress, the roots of catnip, thyme, and Fineleaf Schizonepeta Herb secrete six main amino acids (Fig. 6): Tyr, Phe, Lys, Pro, His, and Arg. As shown in the figure, under any concentration of Pb stress, the roots produce proline in the highest amount, followed by phenylalanine, arginine, lysine, histidine, and tyrosine.
3.1.4.3 Effects on Pro under heavy-metal stress. Changes in the proline content are often regarded as an important indicator of whether the amino acid metabolism of a plant is being impaired. When plants are exposed to unfavorable conditions, such as high temperature, low temperature, drought, salinity, and air pollution, the free proline content in plant tissue increases significantly, which can give rise to a remarkable increase in the amino acid levels secreted by the root system. This is because the molecular structure and physical and chemical properties of proline can protect the spatial structures of enzymes, stabilize the membrane system, participate in the synthesis of chlorophyll, eliminate NH3 toxicity, and reduce the acidity of cells. A rise in the plant proline content is a critical adaptive mechanism for the remediation of heavy metal stress. 36 According to a report by Islam et al., 37 under heavy-metal stress in tobacco plants, the integrity of cellular membranes was restored by proline, and the enzymes of the AsA-GSH cycle were elevated due to proline. This indicates an efficient antioxidative defense system, which can defend plants against stress by improving the activities of both non-enzymatic and enzymatic antioxidants. Through eliminating ROS toxins, increasing the levels of GSH and AsA, and increasing the activities of GR, APX, POX, CAT, and SOD, this mechanism decreases the toxicity of heavy metals; it also reduces the transcription and/or translation levels and endogenous proline levels. 38 Furthermore, proline reacts with Cd 2+ in plants to form the non-toxic compound Cd 2+ -proline. 39 In addition, it supplies plants with the energy needed to grow and survive, improves their tolerance to stress, defends plant membranes against Cd 2+ -induced ion leakage and degradation, 40 and improves the water potential of leaf tissue by protecting cellular membranes from heavy metal oxidation. 41 Under low-concentration Pb stress, the amount of proline secreted by catnip roots was 1.10 times that of the control group, while under medium-concentration Pb stress it was 0.28 times that of the control group (Fig. 6). Under low-concentration Pb stress, the amount of proline secreted by the roots of Fineleaf Schizonepeta Herb reached its maximum value, 3.29 times that of the control group, and then decreased. Under low-concentration Cd stress, the amount of proline secreted by the roots of Fineleaf Schizonepeta Herb reached its maximum, 3.61 times that of the control group. The amount of proline secreted by the thyme root system was 1.91 times that of the control group under low-concentration Pb stress and 2.41 times that of the control group under medium-concentration Pb stress; under low-concentration Cd stress it was 1.33 times that of the control group, and under medium-concentration Cd stress it was 2.57 times that of the control group. In other words, for thyme, as the concentration of Cd or Pb increased, the amount of proline secreted by the root system also increased. Except for catnip under low-concentration Cd and medium-concentration Pb stress, where proline secretion was reduced compared with the control group, proline secretion was generally higher than that of the control group. Moreover, proline was secreted in larger amounts than any other amino acid.
This shows that proline makes a valuable contribution towards the heavy metal stress resistance mechanism of catnip, thyme, and Fineleaf Schizonepeta Herb, and the amount of proline secreted by the root of thyme correlates well with the heavy metal concentration.
3.1.4.4 Catnip amino acid secretion under heavy-metal stress.
At different concentrations of Pb and Cd, catnip roots secreted high levels of lysine and arginine in addition to proline. Under low-concentration Pb stress, the amount of lysine secreted by the root system was 1.05 times that of the control group, and under medium-concentration Pb stress it was 1.62 times that of the control. When catnip was subjected to different concentrations of Cd stress, its roots secreted more arginine, lysine, and histidine; the secreted amounts of the three amino acids reached maximum values and then decreased. Among them, the amount of arginine secreted was 3.03 times that of the control, which means that the level of arginine secreted by the root system increased significantly under low-concentration Cd stress. Catnip secretes more lysine and arginine at different Pb and Cd stress concentrations, which may be one of the important mechanisms explaining how catnip resists Pb and Cd stress.
3.1.4.5 Thyme amino acid secretion under heavy-metal stress. At different levels of Pb stress, the histidine levels secreted by thyme decreased sharply; the secretion levels of histidine under low-concentration and medium-concentration Pb stress were 0.54 and 0.33 times those of the control group, respectively. The sharp reduction in histidine may be one of the important mechanisms explaining thyme's resistance to Pb stress. At different levels of Cd stress, the thyme root system secreted more tyrosine in addition to more proline: under low-concentration Cd stress, the amount of tyrosine secreted by the root system was 1.09 times that of the control group, and under high-concentration Cd stress it was 1.30 times that of the control group.
3.1.4.6 Fineleaf Schizonepeta Herb amino acid secretion under heavy-metal stress. Under different levels of Pb and Cd stress, Fineleaf Schizonepeta Herb secreted more lysine in addition to more proline. Under low-concentration Pb stress, the lysine secretion was 1.09 times that of the control group, while under high-concentration Pb stress it was 1.20 times that of the control group. Fineleaf Schizonepeta Herb secretes more lysine at different levels of Pb stress, which may be one of the important mechanisms explaining its resistance to Pb stress.
3.1.5 Effects on NH3 content. The NH3 content is related to the secretion of proline. The molecular structure and physical and chemical properties of proline can protect the spatial structure of enzymes, stabilize the membrane system, participate in the synthesis of chlorophyll, eliminate NH3 toxicity, and reduce the acidity of cells. Under low-concentration Cd stress, the NH3 content of Fineleaf Schizonepeta Herb was 0.58 times that of the control group, while it was 0.62 times that of the control group under medium-concentration Cd stress. Under low-concentration Pb stress, the NH3 content secreted by Fineleaf Schizonepeta Herb was 0.51 times that of the control group, while it was 0.63 times that of the control group under medium-concentration Pb stress. When the content of proline secreted by Fineleaf Schizonepeta Herb increased, the NH3 content decreased significantly.
3.1.6 Effects of enzymes on the stress resistance of plants
3.1.6.1 Effects on enzymes under Cd stress. On the 10th day under Cd stress, the MDA content levels in the leaves of the three plants increased as the Cd concentration increased, and the MDA content levels in the three plants at different stress concentrations were almost the same (Fig. 7). The CAT and SOD contents in the leaves of the three plants showed increasing trends with an increase in the Cd concentration. Among the studied plants, SOD secretion by thyme and Fineleaf Schizonepeta Herb under low-concentration stress and under medium-concentration stress was very similar. The POD content levels in the leaves of the three plants presented an upward trend as the Cd concentration increased: POD secretion under low-concentration stress increased significantly compared with the control, and POD secretion under medium-concentration stress was comparable with that under low-concentration stress, with little difference between the levels. In addition, the POD content levels in the three plants were almost the same at each stress concentration.
On the 20th day under Cd stress, the MDA content levels in the leaves of the three plants went up as the Cd concentration increased; the MDA content of thyme increased significantly when the Cd concentration was at a medium level, and it was much higher than in the other two plants. The CAT content levels in the leaves of the three plants presented an upward trend. As the Cd concentration increased, the SOD content levels in the leaves of the three plants also presented an upward trend. Under stress at each concentration, SOD secretion by Fineleaf Schizonepeta Herb was higher than by the other two plants. The POD content levels in the leaves of the three plants showed an increasing trend along with increasing Cd concentration, and the amounts of POD secreted by the three plants under stress at each concentration were almost the same.
3.1.6.2 Effects on enzymes under Pb stress. On the 10th day under Pb stress, the MDA content levels in the leaves of the three plants went up along with increasing Pb concentration, and the content levels of MDA in the three plants at each concentration of Pb stress were almost the same (Fig. 8). As the Pb concentration increased, the CAT levels in the three plants changed in different ways: CAT secretion by catnip and thyme increased as the Pb concentration increased, while CAT secretion by Fineleaf Schizonepeta Herb first decreased and then increased. SOD secretion by the three plants increased along with increasing Pb concentration, and the SOD secretion levels of Fineleaf Schizonepeta Herb under various Pb concentrations were much higher than those of the other two plants. The POD secretion levels of the three plants all increased as the Pb concentration increased, and POD secretion by Fineleaf Schizonepeta Herb at various Pb concentrations was slightly higher than that of the other two plants. On the 20th day under Pb stress, the MDA content levels in the leaves of catnip and Fineleaf Schizonepeta Herb went up along with increasing Pb concentration, while the MDA content of thyme decreased and then increased; the MDA secretion by thyme was much higher than that of the other two plants. As the Pb concentration increased, CAT secretion levels increased for all plants except thyme (which first decreased and then increased), and the CAT secretion levels of thyme were much higher than those of catnip and Fineleaf Schizonepeta Herb. With an increase in Pb concentration, the SOD secretion levels of all three plants increased, and SOD secretion by Fineleaf Schizonepeta Herb was much higher than that of the other two plants. The secretion of POD by the three plants showed different trends as the Pb concentration increased: the secretion of POD by catnip first increased and then decreased, POD secretion by thyme increased, and POD secretion by Fineleaf Schizonepeta Herb first decreased and then increased. POD secretion by Fineleaf Schizonepeta Herb was higher than that of the other two plants.
In this study, the MDA content levels in the leaves of the three plants increased slightly with increases in Pb and Cd concentration, but no significant differences were found compared with the control group.
Our experiments showed that as the Pb and Cd concentrations increased, the levels of POD, CAT, and SOD in the leaves of the three plants generally increased compared with the control groups over the two time periods. Experiments by Hao et al. 25 showed that Cu, Mg, and Fe can increase the MDA levels over 3 time periods, and they can increase the SOD and POD activities.
4.1 Differences in plant heights and dry weights under heavy-metal stress
Under low-concentration Cd stress, the shoot and root dry weights of the three plants were much higher compared with the control groups, indicating that certain concentrations of Cd stress may promote the growth of catnip, thyme, and Fineleaf Schizonepeta Herb.
Under low-concentration Pb stress, the shoot and root dry weights of catnip tended to decrease. Thyme is a low-accumulating plant with respect to Pb. It can be clearly seen that at different concentrations of Pb stress, the plant heights and dry weights of thyme are higher than those of the other two medicinal plants.
According to a study by Hammami et al. 42 on the phytoremediation abilities of weeds (S. nigrum, Portulaca oleracea, Taraxacum officinale, and Abutilon theophrasti) in Cd-contaminated soil, the fresh weights and dry weights of the shoots and roots of each plant decreased as the Cd level in the soil increased. Manan et al. 35 found that O. stamineus appeared to grow well in contaminated soil and did not seem to be any different visually from plants grown in control soil. O. stamineus roots in contaminated soil were 30% longer than those of the control group, which indicated that the contaminated soil treatment did not have any adverse effects on O. stamineus root elongation. This shows the species-specific adaptation of different plants towards heavy-metal stress.
4.2.1 Total amounts of amino acids secreted by plant roots.
The current results show that under Pb stress at any concentration, the roots of catnip, thyme, and Fineleaf Schizonepeta Herb produce proline in the highest amounts, followed by phenylalanine, arginine, lysine, histidine, and tyrosine. Under stress at different Cd concentrations, the amounts of proline secreted by the root systems are still the highest, followed by phenylalanine, arginine, histidine, lysine, and tyrosine.
The trends of total amino acid secretion shown by catnip and Fineleaf Schizonepeta Herb are consistent. When the heavy-metal concentration is low, the amounts of amino acids secreted by the roots are high, while the amounts secreted at higher heavy-metal concentrations are low. The reason may be that some of the proteins in the plants decompose when the heavy-metal stress concentration is low; however, when the heavy-metal stress concentration is higher, the plant's own anti-stress mechanisms play a role, synthesizing a large number of enzymes to resist adverse stress. The trend of amino acid secretion by thyme is that as the concentration of heavy metal increases, the total amount of amino acids secreted by the root system also increases; this is probably because at higher concentrations of Pb stress, the plasma membranes of thyme root cells are damaged and root-cell amino acids are released. Past research has centered on the influence of amino acids on the abilities of plants to tolerate Cd. Experiments by Tang examining how Cd stress changes the amino acids secreted by different rice roots showed that a low concentration of Cd can promote total amino acid secretion and a high concentration of Cd can inhibit amino acid secretion. Moreover, Cd stress had almost no effect on the types of amino acids secreted by the root system, but it had a great influence on the amounts of secreted amino acids; 43 this is consistent with the conclusions of these experiments. Experiments by Liao et al. 44 showed that the total amount of amino acids in sugarcane decreased under Cd treatment at low concentrations, but it increased as the Cd concentration was increased.
4.2.2 Types of amino acids secreted by plant roots under Pb stress. Under Pb stress, the amino acid content levels of catnip changed: the amounts of Tyr, Lys, and Arg increased, while the levels of Phe and His decreased first and then increased, and the amount of Pro showed the opposite trend. In thyme, the amounts of Phe, Lys, and Arg decreased at first and then increased, Tyr and His decreased as the Pb concentration increased, and the amount of Pro increased with an increase in the Pb concentration. In Fineleaf Schizonepeta Herb, the amounts of Tyr and Phe decreased first and then increased; as the concentration of Pb increased, the levels of His and Arg decreased, but the amount of Lys increased, and the amount of Pro showed a trend of increasing first and then decreasing. The amino acids secreted by rice roots include Phe, Arg, His, and Tyr; what is different in our experiments is that there was no secretion of Lys at various levels of Cd stress.
4.2.3 Types of amino acids secreted by plant roots under Cd stress. The amino acid levels of catnip changed under Cd stress: the amounts of Tyr, Phe, Lys, His, and Arg increased at first and then decreased, and the Pro level decreased first and then increased. In thyme, the amounts of Phe, Lys, and His showed a trend of increasing first and then decreasing, the amounts of Tyr and Pro increased, and the level of Arg first decreased and then increased. In Fineleaf Schizonepeta Herb, the levels of Tyr, Phe, and Pro showed a trend of increasing first and then decreasing, while the amounts of Lys, His, and Arg decreased as the Cd concentration increased. Experiments by Liao et al. 44 found that amino acids such as Arg, Lys, Phe, His, Ile, Met, and Cys accumulate in sugarcane tissue under Cd stress. Among these, the levels of Arg and Lys increased as the concentration of the Cd treatment increased, while the amounts of Tyr and His decreased under Cd25 treatment, increased under Cd50 treatment, and increased significantly under Cd100 treatment. The levels of Phe under Cd25 and Cd50 stress decreased compared with the control group and increased under Cd100 stress. As the Cd stress concentration increased, the amount of Pro decreased; although it increased under Cd100 stress, it was still lower than that of the control group. Experiments by Tang et al. 43 showed that the amounts of various amino acids secreted by rice root systems presented a trend of first increasing and then decreasing as the Cd concentration increased. Research by Zoghlami et al. 45 showed that in tomato roots exposed to Cd, asparagine, glutamine, and branched-chain and aromatic amino acids (tryptophan, isoleucine, valine, and phenylalanine) accumulated significantly. Zemanová 46 found that, in the response of two Noccaea metallophyte species to Cd stress, phenylalanine, threonine, tryptophan, and ornithine levels increased, while alanine and glycine levels decreased.
4.2.4 Changes in the amounts of proline secreted by plant roots under Cd stress. The most common adaptive response when plants are exposed to a variety of metal ions, such as Al 3+ , Zn 2+ , Cd 2+ , and Cu 2+ , is the accumulation of proline. 47 In our experiments, Pro was the amino acid that the roots of the three medicinal plants secreted in the highest amounts. It played a crucial role in the mechanism of resistance of catnip, thyme, and Fineleaf Schizonepeta Herb to heavy metal stress, and the secretion levels could be correlated with the concentration of heavy metal. Except for the case of catnip under low-concentration Cd stress, where proline secretion was slightly reduced compared with the control, proline secretion in all other cases was higher than that of the control. Reports have shown that the accumulation of Cd leads to proline accumulation in wheat, barley, and mung bean. [48][49][50] However, Costa and Morel 51 found that Cd did not induce proline accumulation in lettuce, but it induced particular increases in the levels of lysine, asparagine, and methionine. Experiments by Yin 52 showed that the free proline content in coreopsis leaves is lower than that of a control, showing a general downward trend, whereas the amount of proline in the roots of Gaillardia increases as the Pb concentration increases.
In addition, the increase in Pro levels suggests a relationship between antioxidant enzymes and proline. Under heavy-metal stress, proline restored the cellular membrane integrity and increased the enzyme levels in the AsA-GSH cycle. This also shows that catnip, thyme, and Fineleaf Schizonepeta Herb have good tolerance to Pb and Cd.
Based on the existing literature, genotypic differences, differences in plant species and heavy-metal treatments, and even different parts of the plant result in different effects on amino-acid metabolisms. Generally speaking, these results suggest that amino acid accumulation is beneficial for Pb and Cd stress resistance.
4.2.5 Differences in amino acid levels between thyme and the other two medicinal plants. It can be clearly seen that the amounts of Arg, His, Lys, Phe, and Tyr secreted by the roots of thyme are lower than those of the other two plants under Pb stress, whereas the levels of Pro are much higher than those of the other two plants. This again reflects the important role of Pro in phytoremediation. In this experiment, SOD levels showed a tendency to increase when the concentration of exogenous Cd in the soil was increased, which indicated that induction by the heavy metal Cd was stronger and the ability to remove superoxide anions was enhanced.
4.3 Changes in leaf enzyme levels under Cd stress
4.3.3 Changes in CAT and POD secreted by plant leaves. The main function of CAT and POD is to remove H2O2 in organisms; however, because of its low affinity for the substrate, CAT removes H2O2 less efficiently than POD.
In these experiments, it is obvious that the secretion of POD was much higher. In terms of CAT and POD secretion, as the Cd concentration increased, the secretion of both increased gradually, reflecting the stress response of plants to adversity.
4.4 Changes in leaf enzyme levels under Pb stress
On the 10th day of Pb stress, the levels of MDA, CAT, SOD, and POD in the three plants all increased, and the CAT level of the low-accumulating plant thyme was much higher than those of the other two plants. On the 20th day of Pb stress, the levels of MDA, CAT, SOD, and POD of the three plants were still generally high compared with the control group, and the levels of MDA and CAT in the low-accumulating plant thyme were much higher than those of the other two plants. It is inferred from this that MDA and CAT play decisive roles in the anti-stress response of thyme.
Experiments by Michalak et al. 53 found that the MDA and SOD levels of cyanobacteria increased under heavy-metal stress. Zhang et al. 54 found that, compared with a control group, the activities of POD and SOD in plant leaves under heavy metal stress exhibited a fluctuating trend, while the activity of CAT increased along with the level of stress in K. candel but stayed the same in B. gymnorrhiza leaves. Studies by Choudhary et al. 55 showed that the levels of MDA and SOD in the cyanobacterium Spirulina platensis-S5 increased under heavy-metal stress. Hao et al. 26 found that Cu, Mg, and Fe can increase MDA levels over 3 time periods, also increasing the activities of SOD and POD. Experiments by Radić et al. 47 showed that antioxidant responses were observed under Zn and Al stress; both SOD and POD levels increased significantly. This is similar to our experiments. In addition, the secretion levels of the three enzymes were higher under Pb stress than under Cd stress, indicating that the effects of Pb stress on the plants were stronger than those of Cd stress.
Conclusions
Under Pb and Cd stress, the total levels of amino acids secreted by plant roots and the levels of each amino acid were studied and associated with heavy metal concentrations and plant species. Pb and Cd stress has little effect on the types of amino acids secreted by the root systems, but it has a greater impact on the amounts of the various types of amino acids that are secreted. Proline is essential in the resistance of plants to heavy metal stress. Plants alleviate Pb and Cd stress via increasing MDA and antioxidant enzyme (CAT, POD, and SOD) levels. Thyme is a low-accumulating plant with respect to Pb. These results are of great significance for understanding the chemical behaviors of heavy metals at root/soil interfaces under Cd and Pb stress and the detoxification mechanisms of medicinal plants.
Author contributions
Yang Zhi and Qixing Zhou conceived and designed the study; Mo Zhou conducted the research and wrote the initial paper; Yang Zhi and Mo Zhou revised the paper; Yueying Dai and Jialun Lv collected data; Yajun Li and Zehua Wu had primary responsibility for the nal content.
Conflicts of interest
There are no conflicts to declare. | 2020-12-17T09:13:28.119Z | 2020-11-27T00:00:00.000 | {
"year": 2020,
"sha1": "fc6b565e477d6a8773032aaecf204b4c7ae8ba14",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/d0ra08254f",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9195ac4315f4356e629f1b2b9a62f2386efdd7d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
132164782 | pes2o/s2orc | v3-fos-license | DEVELOPMENT OF FLUORESCENCE IMAGING LIDAR FOR BOAT-BASED CORAL OBSERVATION
A fluorescence imaging lidar system installed in a boat-towable buoy has been developed for the observation of reef-building corals. Long-range fluorescent images of the sea bed can be recorded in the daytime with this system. The viability of corals is clear in these fluorescent images because of the innate fluorescent proteins. In this study, the specifications and performance of the system are shown.
INTRODUCTION
Reef-building corals (hereafter corals) are mainly distributed in tropical shallow water (0-30 m in depth) and act as primary producers of coral reefs. Despite their importance, it has been reported that coral distribution areas are rapidly diminishing because of various marine environmental factors [1]. In addition, it is predicted that global climate change (such as ocean warming and ocean acidification) will contribute to the decline of corals [2]. Therefore, understanding their current status is important, and coral observation is considered an urgent requirement.
As corals are marine organisms, wide-area observations are technically difficult. In aerial surveillance, waves on the sea surface and the absorption and scattering of light in seawater interfere with coral observation. In underwater surveillance, the field of view (FOV) is narrow and navigation is slow. In this study, a boat-based surveillance technique has been developed as an improved method of coral observation beneath the sea surface.
Ordinary boat-based video observation has several weaknesses: the clarity of video images depends on the amount of solar radiation (e.g., cloudy conditions) because the observation is passive; video image resolution is degraded by blurring due to boat propulsion and motion; and checking coral viability is sometimes difficult because of short observation times and insufficient information.
In this study, a new active coral observation method has been developed to address the above mentioned problems.
Most corals have innate fluorescent proteins in the surface of their tissue, and under ultraviolet (UV) excitation they emit fluorescence with colors ranging from blue to yellow-green. Their fluorescence lifetime is approximately 1 to 3 ns. When corals die, the fluorescent proteins are degraded and no longer emit fluorescence. Thus, the detection of UV-excited fluorescence is a sign of live corals.
METHODOLOGY
Observing the UV-excited fluorescence of corals is preferably done at night in order to avoid background sunlight. However, surveying in the daytime is necessary because the boat operator must ensure sea traffic safety and safe boat operation over the shallow sea bottom in the surveyed coral reef area. Therefore, a fluorescence imaging lidar system has been developed for daytime operation.
In this system, the wavelength of the UV pulsed laser is 355 nm, the pulse width is approximately 9 ns, and the exposure time of the ICCD camera is approximately 100 ns. The effect of background sunlight is suppressed by this short, range-gated exposure.
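As a rough plausibility check (ours, not from the paper), the gate time quoted above fixes how thick a slab of water a single exposure can image. The sketch below works that out; the speed of light and the seawater refractive index are assumed constants, and only the 9 ns pulse width and 100 ns gate come from the text.

```python
# Back-of-the-envelope gate geometry for the fluorescence imaging lidar described above.
C_VACUUM = 2.998e8    # speed of light in vacuum, m/s (assumed constant)
N_SEAWATER = 1.34     # assumed refractive index of seawater
GATE_S = 100e-9       # ICCD exposure (gate) time from the text, s
PULSE_S = 9e-9        # laser pulse width from the text, s

v_water = C_VACUUM / N_SEAWATER                # light speed in water
depth_window = v_water * GATE_S / 2.0          # divide by 2 for the two-way path

print(f"One 100 ns gate spans roughly {depth_window:.1f} m of water depth")
# ~11 m per gate (slightly less once the 9 ns pulse width is subtracted), which is
# comparable to the 0-30 m reef zone, while sunlight arriving outside the short
# gate window is not integrated by the ICCD.
```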
RESULTS
Coral observations were carried out using this lidar system around Taketomi Island, Okinawa, Japan (24.33N, 124.09E). A large number of fluorescent images of the sea bed in the shallow coral reef area were successfully obtained. In addition, passive sea bed images, boat position data with an accuracy of 1 m, and bathymetry data with an accuracy of 0.1 m were simultaneously recorded using a video camera, DGPS, and sonar, respectively. Sample images from the video and the lidar are shown in Fig. 2.
Estimating coral viability is much easier using fluorescent images than video images because of the intensity of the coral fluorescence. The boat-based lidar surveillance around Taketomi Island (boat track ~14 km long) was recorded at a repetition rate of 5 Hz, and the total observation time was approximately 3 h.
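These survey figures imply a dense along-track sampling; the short calculation below (ours, using only the 5 Hz rate, ~3 h duration, and ~14 km track quoted above) makes that explicit.

```python
# Approximate along-track sampling implied by the survey parameters quoted above.
REP_RATE_HZ = 5       # lidar image repetition rate
DURATION_H = 3        # total observation time (approximate)
TRACK_KM = 14         # boat track length (approximate)

n_frames = REP_RATE_HZ * DURATION_H * 3600     # about 54,000 gated frames
spacing_m = TRACK_KM * 1000 / n_frames         # mean along-track spacing between frames
speed_kmh = TRACK_KM / DURATION_H              # mean boat speed during the survey

print(f"frames ~ {n_frames}, mean spacing ~ {spacing_m:.2f} m, boat speed ~ {speed_kmh:.1f} km/h")
```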
CONCLUSIONS
A fluorescence imaging lidar system installed in a boat-towable buoy has been developed, and sea bed fluorescent images were successfully obtained in the daytime. This study indicates that the observed lidar data are useful in surveying coral distributions. | 2019-04-26T14:21:20.514Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "8d3ec0f3cf4c5459c72106afab87d91c0426bce9",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2016/14/epjconf_ilrc2016_22002.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5da64e2e77b446f17350ef16fb7f6f70dceb99c8",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
119176473 | pes2o/s2orc | v3-fos-license | Twisting non-commutative $L_p$ spaces
The paper makes the first steps into the study of extensions ("twisted sums") of noncommutative $L^p$-spaces regarded as Banach modules over the underlying von Neumann algebra $\mathcal M$. Our approach combines Kalton's description of extensions by centralizers (these are certain maps which are, in general, neither linear nor bounded) with a general principle, due to Rochberg and Weiss saying that whenever one finds a given Banach space $Y$ as an intermediate space in a (complex) interpolation scale, one automatically gets a self-extension $ 0\longrightarrow Y\longrightarrow X\longrightarrow Y \longrightarrow 0. $ For semifinite algebras, considering $L^p=L^p(\mathcal M,\tau)$ as an interpolation space between $\mathcal M$ and its predual $\mathcal M_*$ one arrives at a certain self-extension of $L^p$ that is a kind of noncommutative Kalton-Peck space and carries a natural bimodule structure. Some interesting properties of these spaces are presented. For general algebras, including those of type III, the interpolation mechanism produces two (rather than one) extensions of one sided modules, one of left-modules and the other of right-modules. Whether or not one can find (nontrivial) self-extensions of bimodules in all cases is left open.
Introduction
In this paper we make the first steps into the study of extensions of noncommutative L p -spaces. An extension (of Z by Y ) is a short exact sequence of Banach spaces and (linear, continuous) operators

0 → Y → X → Z → 0    (1)

This essentially means that X contains Y as a closed subspace so that the corresponding quotient is (isomorphic to) Z. We believe that the convenient setting in studying extensions of L p -spaces is not that of Banach spaces, but that of Banach modules over the underlying von Neumann algebra M. Accordingly, one should require the arrows in (1) to be homomorphisms.
In this regard it is remarkable and perhaps a little ironic that, while the study of the module structure of general L p -spaces goes back to its inception, the only papers where one can find some relevant information about extensions, namely [14] and [15], deliberately neglected this point.
Let us summarize the main results and explain the organization of the paper. Section 1 contains some preliminaries. Section 2 deals with the tracial (semifinite) case. It is shown that whenever one has a reasonably "symmetric" self-extension of the commutative L p (R + ) (the usual Lebesgue space of p-integrable functions) one can get a similar self-extension of L p (M, τ ) for every semifinite von Neumann algebra (M, τ ). Our approach combines Kalton's description of extensions by centralizers (these are certain maps which are, in general, neither linear nor bounded) with a general principle, due to Rochberg and Weiss, that we can express by saying that whenever one finds a given Banach space Y as an intermediate space in a (complex) interpolation scale, one automatically gets a self-extension

0 → Y → X → Y → 0.

Thus for instance, considering L p (M, τ ) as an interpolation space between M and its predual M * one arrives at a certain self-extension of L p (M, τ ) that we regard as a kind of noncommutative Kalton-Peck space. Some interesting properties of these spaces are presented.
In Section 3 we leave the tracial setting and we consider L p -spaces over general (but σ-finite) algebras, including those of type III. In this case the interpolation trick still works but produces two (rather than one) extensions of one sided modules, one of left-modules and the other of right-modules. Whether or not one can find (nontrivial) self-extensions of bimodules in all cases is left open.
1. Preliminaries
1.1. Extensions. Let A be a Banach algebra. A quasi-Banach (left) module over A is a quasi-Banach space X together with a jointly continuous outer multiplication A × X → X satisfying the traditional algebraic requirements.
An extension of Z by Y is a short exact sequence of quasi-Banach modules and homomorphisms

0 → Y —ı→ X —π→ Z → 0    (2)

The open mapping theorem guarantees that ı embeds Y as a closed submodule of X in such a way that the corresponding quotient is isomorphic to Z. Two extensions 0 → Y → X i → Z → 0 (i = 1, 2) are said to be equivalent if there exists a homomorphism u : X 1 → X 2 making commutative the diagram

0 → Y → X 1 → Z → 0
    ‖    ↓u     ‖
0 → Y → X 2 → Z → 0

By the five-lemma [11, Lemma 1.1], and the open mapping theorem, u must be an isomorphism. We say that (2) splits if it is equivalent to the trivial sequence 0 → Y → Y ⊕ Z → Z → 0. This just means that Y is a complemented submodule of X, that is, there is a homomorphism X → Y which is a left inverse for the inclusion Y → X; equivalently, there is a homomorphism Z → X which is a right inverse for the quotient X → Z.
Operators and homomorphisms are assumed to be continuous. Otherwise we speak of linear maps and "morphisms".
Taking A = C one recovers extensions in the Banach space setting. Every extension of (quasi-) Banach modules is also an extension of (quasi-) Banach spaces. Clearly, if an extension of modules is trivial, then so is the underlying extension of (quasi-) Banach spaces. Simple examples show that the converse is not true in general. A Banach algebra A is amenable if every extension of Banach modules (2) in which Y is a dual module splits as long as it splits as an extension of Banach spaces. This is not the original definition, but an equivalent condition. The original definition reads as follows: A is amenable if every continuous derivation from A into a dual bimodule is inner. Here "derivation" means "operator satisfying the Leibniz rule" and has nothing to do with the derivations appearing in Section 1.4.
Every Banach space is a quasi-Banach space and it is possible that the middle space X in (2) is only a quasi-Banach space even if both Z and Y are Banach spaces (see [16, Section 4]). This will never occur in this paper, among other things because X will invariably be a quotient of a certain Banach space of holomorphic functions. Anyway, Kalton proved in [12] that if Z has nontrivial type p > 1 and Y is a Banach space, then X must be locally convex and so isomorphic to a Banach space. In particular, any quasi-norm giving the topology of X must be equivalent to a norm, hence to the convex envelope norm. If Z is super-reflexive the proof is quite simple; see [3].
1.2. Centralizers and the extensions they induce. Let us introduce the main tool in our study of extensions.
Definition 1. Let Z and Y be quasi-normed modules over the Banach algebra A and let Ỹ be another module containing Y in the purely algebraic sense. A centralizer from Z to Y with ambient space Ỹ is a homogeneous mapping Ω : Z → Ỹ having the following properties.
(a) It is quasi-linear, that is, there is a constant Q so that if f, g ∈ Z, then Ω(f + g) − Ω(f ) − Ω(g) ∈ Y and ‖Ω(f + g) − Ω(f ) − Ω(g)‖ Y ≤ Q(‖f ‖ Z + ‖g‖ Z ).
(b) There is a constant C so that if a ∈ A and f ∈ Z, then Ω(af ) − aΩ(f ) ∈ Y and ‖Ω(af ) − aΩ(f )‖ Y ≤ C‖a‖ A ‖f ‖ Z .
We denote by Q[Ω] the least constant for which (a) holds and by C[Ω] the least constant for which (b) holds.
We now indicate the connection between centralizers and extensions. Let Z and Y be quasi-Banach modules and let Ω : Z → Ỹ be a centralizer from Z to Y . Then

Y ⊕ Ω Z = {(g, f ) ∈ Ỹ × Z : g − Ω(f ) ∈ Y }

is a linear subspace of Ỹ × Z and the functional ‖(g, f )‖ Ω = ‖g − Ω(f )‖ Y + ‖f ‖ Z is a quasi-norm on it. Moreover, the map ı : Y → Y ⊕ Ω Z sending g to (g, 0) preserves the quasi-norm, while the map π : Y ⊕ Ω Z → Z given as π(g, f ) = f is open, so that we have a short exact sequence of quasi-normed spaces

0 → Y → Y ⊕ Ω Z → Z → 0    (3)

with relatively open maps. This already implies that Y ⊕ Ω Z is complete, i.e., a quasi-Banach space. Actually only quasi-linearity (a) is necessary here. The estimate in (b) implies that the multiplication a(g, f ) = (ag, af ) makes Y ⊕ Ω Z into a quasi-Banach module over A in such a way that the arrows in (3) become homomorphisms. Indeed, ‖ag − Ω(af )‖ Y ≤ ‖ag − aΩ(f )‖ Y + ‖aΩ(f ) − Ω(af )‖ Y ≤ ‖a‖ A ‖g − Ω(f )‖ Y + C[Ω]‖a‖ A ‖f ‖ Z . We will always refer to Diagram 3 as the extension (of Z by Y ) induced by Ω. It is easily seen that two centralizers Ω and Φ (acting between the same sets, say Z and Ỹ ) induce equivalent extensions if and only if there is a morphism h : Z → Ỹ such that ‖Ω(f ) − Φ(f ) − h(f )‖ Y ≤ K‖f ‖ Z for some constant K and every f ∈ Z. We write Ω ∼ Φ in this case and Ω ≈ Φ if the preceding inequality holds for h = 0, in which case we say that Ω and Φ are strongly equivalent. In particular Ω induces a trivial extension if and only if ‖Ω(f ) − h(f )‖ Y ≤ K‖f ‖ Z for some morphism h : Z → Ỹ . In this case we say that Ω is a trivial centralizer.
The corresponding definitions for right modules and bimodules are obvious. Thus, for instance, we define bicentralizers from Z to Y (which are now assumed to be Banach bimodules over the Banach algebra A) by requiring Ỹ to be also a bimodule and replacing the estimate in Definition 1(b) by ‖Ω(af b) − aΩ(f )b‖ Y ≤ C‖a‖ A ‖f ‖ Z ‖b‖ A . We insist that we are interested in the case of Banach spaces here, so one can assume Z and Y to be Banach spaces. However, the Ribe function ‖ · ‖ Ω will be only a quasi-norm on Y ⊕ Ω Z, even if it is equivalent to a true norm. See the paragraph closing Section 1.1 and [5, Appendix 1.9].
1.3. Push-outs and extensions. The push-out construction appears naturally when one considers two operators defined on the same space. Given operators α : Y → A and β : Y → B, the associated push-out diagram is

Y —α→ A
β↓       ↓β′
B —α′→ PO

Here the push-out space PO = (A ⊕ 1 B)/S is the quotient of the direct sum A ⊕ B (with the sum norm, say) by S, the closure of the subspace {(αy, −βy) : y ∈ Y }. The map α ′ is given by the inclusion of B into A ⊕ B followed by the natural quotient map A ⊕ 1 B → (A ⊕ 1 B)/S, so that α ′ (b) = (0, b) + S and, analogously, β ′ (a) = (a, 0) + S.
Suppose we are given an extension (2) and an operator t : Y → B. Consider the push-out of the couple (ı, t) and draw the corresponding arrows: − −−− → PO Clearly, ı ′ is an isomorphic embedding. Now, the operator π : X → Z and the null operator n : B → Z satisfy the identity πı = nt = 0, and the universal property of push-outs gives a unique operator ̟ : PO → Z making the following diagram commutative: Or else, just take ̟((x, b) + S) = π(x), check commutativity, and discard everything but the definition of PO. Elementary considerations show that the lower sequence in the preceding diagram is exact. That sequence will we referred to as the push-out sequence. The universal property of push-out diagrams yields: Lemma 1. With the above notations, the push-out sequence splits if and only if t extends to X, that is, there is an operator T : X → B such that T ı = t.
1.4. Complex interpolation and twisted sums. These lines explain the main connection between interpolation and twisted sums we use throughout the paper. General references are [26,7,18,15,4]. Let (X 0 , X 1 ) be a compatible couple of complex Banach spaces. This means that both X 0 and X 1 are embedded into a third topological vector space W and so it makes sense to consider its sum Σ = X 0 + X 1 = {w ∈ W : w = x 0 + x 1 } which we furnish with the norm w Σ = inf{ x 0 0 + x 1 : w = x 0 + x 1 } as well as the intersection ∆ = X 0 ∩ X 1 with the norm x ∆ = max{ x 0 , x }. We attach a certain space of analytic functions to (X 0 , X 1 ) as follows.
Then G is a Banach space under the norm g G = sup{ g(j + it) j : j = 0, 1; t ∈ R}. For θ ∈ [0, 1], define the interpolation space X θ = [X 0 , X 1 ] θ = {x ∈ Σ : x = g(θ) for some g ∈ G} with the norm x θ = inf{ g G : x = g(θ)}. We remark that [X 0 , X 1 ] θ is the quotient of G by ker δ θ , the closed subspace of functions vanishing at θ, and so it is a Banach space. Now, the basic result is the following.
In this way, for each θ ∈]0, 1[ we have a push-out diagram whose lower row is a self extension of X θ . The derivation associated with the preceding diagram is the map Ω : X θ → Σ obtained as follows: given x ∈ X θ we choose g = g x ∈ G (homogeneously) such that x = g(θ) and g G ≤ (1 + ǫ) x X θ for small ǫ > 0 and we set Ω(x) = g ′ (θ) ∈ Σ. (Note that Ω(x) lies in X θ at least for x ∈ ∆ = X 0 ∩ X 1 .) Homogeneously means that if g is the function attached to x and λ is a complex number, then the function attached to λx is λg -this makes Ω : X θ → Σ homogeneous. Needless to say, the map Ω depends on the choice of g. However, ifΩ(x) is obtained as the derivative (at θ) of anotherg ∈ G such thatg(θ) = x and g G ≤ M x , theng − g vanishes at θ, so (by Lemma 2) Lemma 3. The just defined map Ω is quasi-linear on X θ . The extension induced by Ω is (equivalent to) the push-out sequence in (7).
Proof.
That Ω is quasi-linear is straightforward from Lemma 2.
There is an obvious map ı : X θ → X θ ⊕ Ω X θ sending x to (x, 0). If f ∈ ker δ θ one has (δ ′ θ , δ θ )(f ) = (f ′ (θ), 0) = ıδ ′ θ (f ) and the universal property of the push-out construction yields an operator u making commutative the following diagram The preceding argument is closely related to the observation, due to Rochberg and Weiss [26], where the third space carries the obvious (infimum) norm.
An important feature of the derivation process is that if we start with a couple (X 0 , X 1 ) of Banach modules over an algebra A (this terminology should be self-explanatory by now), then the diagram (7) lives in the category of Banach modules and Ω is a centralizer over A.
2. The tracial (semifinite) case 2.1. Some special properties of centralizers on L p (R + ). Let X be a Köthe space on R + = [0, ∞). It is clear from the definition that X is an L ∞ (R + )-module under "pointwise" multiplication which turns out to be a submodule of L 0 (R + ), the space of all measurable functions f : R + → C.
Here we apply the usual convention of identifying functions agreeing almost everywhere. Let Φ : X → L 0 (R + ) be an L ∞ (R + )-centralizer on X. Then Φ is said to be: • Real if it takes real functions to real functions.
• Symmetric if (X is symmetric and) there is a constant S so that, whenever u is a measure- . Important examples of centralizers are given as follows (see [13], Section 3 and specially Theorem 3.1). Let ϕ : R 2 → R be a Lipschitz function. Then the map L p (R + ) → L 0 (R + ) given by (8) f is a (real, symmetric, lazy) centralizer on L p (R + ). Here r g is the so called rank-function of g ∈ L 0 (R + ) defined by r g (t) = λ{s ∈ R + : |g(s)| > |g(t)| or s ≤ t and |g(s)| = |g(t)|}, which arises in real interpolation.
For what this paper is concerned, the crucial result on L ∞ -centralizers is the following.
Theorem 1 (Kalton [15], Theorem 7.6). There is a (finite) constant K so that whenever 1 < p ≤ 2 and X is a p-convex and q-concave Köthe function space with p −1 + q −1 = 1 and Φ is a real centralizer on X with C[Φ] < 200/q then there is a pair of Köthe function spaces (X 0 , X 1 ) so that X = [X 0 , X 1 ] 1/2 (with equivalent norms) and if Ω : If Φ is symmetric, then X 0 and X 1 can be taken to be symmetric (that is, rearrangement invariant).
2.2. From commutative to noncommutative. Let M be a semifinite von Neumann algebra with a faithful, normal, semifinite (fns) trace τ , acting on a Hilbert space H. A closed densely defined operator on H is affiliated with M if its spectral projections belong to M. A closed densely defined operator x affiliated with M is called τ -measurable if, for any ǫ > 0, there exists a projection e ∈ M such that eH ⊂ D(x) and τ (1 − e) ≤ ǫ. We denote the set of all τ -measurable operators associated with a von Neumann algebra M by M. The so called measure topology on M is the least linear topology containing the sets {x ∈ M : there exists a projection e ∈ M such that τ (1 − e) < ǫ, xe ∈ M and xe < ǫ}, with ǫ > 0. Endowed with measure topology, strong sum, strong product and adjoint operation as involution, M becomes a topological *-algebra (see [21,6] for basic information). The trace τ has a natural extension to M + .
More general spaces of operators can be introduced as follows [8,22,19]. Let x be a measurable operator, so that τ e λ |x| < ∞ for some λ > 0, where e λ |x| denotes the spectral resolution of |x| corresponding to the indicator function of (λ, ∞). The generalized singular numbers of x is the function µ(x) : (0, ∞) → [0, ∞] given by An important feature of these spaces is that they are bimodules over M with the obvious outer multiplications.
The following result and its proof are modeled on [15,Theorem 8.3].
Then, for every semifinite von Neumann algebra (M, τ ) there is an M-bicentralizer Φ τ : L p (τ ) → M whose action on σ-elementary operators can be obtained as follows. Given a σ-elementary Proof. (a) Clearly, one can write Φ = Φ 1 + iΦ 2 with Φ 1 and Φ 2 real centralizers and we can assume that Φ is real and arising as a derivation. Precisely, we are assuming there is a couple of (symmetric) Köthe spaces on R + such that Set X = [X 0 , X 1 ] 1/2 , with the natural norm. This is a symmetric Köthe space on R + . The key point is that the formula holds for all semifinite algebras (M, τ ) -see [8,Theorem 3.2] and [23]. Fixing τ we want to export Φ from L p (R + ) to L p (τ ). In view of (9) one has [X 0 (τ ), X 1 (τ )] 1/2 = L p (τ ) (with equivalent norms) and we can consider the corresponding derivation. So we define a mapping Φ τ : Notice that if x = i t i e i is a σ-elementary operator and f = i t i 1 A i is as in the statement, then applying (9) to the restriction of τ to the subalgebra of M spanned by the projections (e i ) and to the subalgebra of L ∞ (R + ) generated by the functions 1 A i we see that there is an (almost) optimal representative of f having the form z → i g i (z)1 A i and also that g(z) = i g i (z)e i is then an (almost) optimal representative of x. Thus we may assume Φ Needless to say, the real content of Part (a) is that the map Φ τ just defined is an M-bicentralizer on X(τ ) = L p (τ ). This should be obvious by now, but let us record the proof for future reference. Take x ∈ X(τ ) and a, b ∈ M (that we regard as constant functions on S). We have g axb − ag x b ∈ ker δ 1/2 by the very definition. Moreover, (b) It suffices to check that if Ψ : L p (τ ) → M is a (say left) centralizer vanishing on every σ-elementary operator, then Ψ is bounded. First, let x ∈ L p (τ ) be self-adjoint. It is easy to see that x = ay, where y is a σ-elementary operator with y p ≤ 2 x p and a M ≤ 1. Hence (c) was proved during the proof of (a).
Although we are unable to describe Φ τ (x) when x is not elementary for general Φ, the following result applies to many centralizers appearing in nature. In particular, it applies to the centralizers given by (8) when ϕ depends only on the first variable, by just taking θ(t) = tϕ(log t) for t ∈ R + .
Proof. Let us first assume M abelian. By general representation results, M is isomorphic to L ∞ (µ) for some (countably additive, nonnegative, strictly localizable) measure space (S, Σ, µ). Moreover one can take the isomorphism χ : M → L ∞ (µ) in such a way that if a ∈ M is nonnegative and f = χ(a), then τ (a) = S f (s)dµ(s). It is easily seen that χ extends to a continuous homomorphism of *-algebras M → L 0 (µ) that we still call χ -the topology we consider on L 0 (µ) is that of convergence in measure on subsets of finite measure. Although χ is not in general surjective from M to L 0 (µ), it is an isometric isomorphism between L p (τ ) and L p (µ). If fact, if θ : The consequence of all this is that in order to prove Corollary 1 for abelian algebras it suffices to check that if θ and Φ are as in the statement of Corollary and µ is a (countably additive, nonnegative, strictly localizable) measure, the map Φ µ : Applying Theorem 2 to Φ with M = L ∞ (µ) and embedding M into L 0 (µ) as before, we get a centralizer Φ µ : if f takes countably many values. If besides a takes only countably many values we have If a ∈ L ∞ (µ) and f ∈ L p (µ) are arbitrary we may take sequences of σ-simple functions (a n ) and (f n ) such that f n → f in L p (µ), a n → a in L ∞ (µ) and f n − f ∞ → 0 as n → ∞. This implies that a n Φ µ (f n ) → aΦ µ (f ) and Φ µ (a n f n ) → Φ µ (af ) in L 0 (µ). As the unit ball of L p (µ) is closed in L 0 (µ) we get and Φ µ is a centralizer. By Theorem 2(b) we have Φ µ ≈ Φ µ .
Going back to general (semifinite) algebras, we want to see that if a ∈ M and x ∈ L p (τ ) are positive and commute (that is, each spectral projection of a commutes with every spectral projection of x; see [25, Chapter VIII, Section 5]), then Obviously, the map Φ τ : L p (M, τ ) → M sends L p (A, τ ) to A and have seen that this restriction acts as an A-centralizer on L p (A, τ ) and the centralizer constant depends only on θ through the centralizer constant of Φ, which proves (10). We finally prove that Φ τ ≈ Φ τ . Take a positive x ∈ L p (τ ) and write x = ay, where a and y are positive, commuting operators such that y is σ-elementary with y p ≤ 2 x p and a ∞ ≤ 1. One has Φ τ (x)−aΦ τ (y) p ≤ C2 x p , by (10) (1) h is · Σ -bounded.
(2) For each x ∈ ∆ the function z −→ τ (xh(z)) is continuous on S and analytic on S • .
We equip H with the norm h H = sup{ h(it) M , h(it + 1) 1 : t ∈ R}}. Note that the elements of H are in fact · Σ -analytic on S • .
Letting θ = 1/p ∈ (0, 1) we have that δ θ maps H onto L p (τ ) (without increasing the norm) and replacing G by H everywhere in the proof of Lemma 2 we see that the restriction of δ ′ θ to ker δ θ is a bounded operator onto L p (τ ) and we can form the push-out diagram (11) Please note that the above diagram lives in the category of bimodules over M. Also, as H contains the Calderón space G it is really easy to see that this new push-out extension is in fact the same one gets by using G.
Let us compute the extremals associated to the quotient δ θ : H → L p (τ ). Suppose f ∈ L p (τ ) is a positive operator with f p = 1. It is easily seen that the function h(z) = f pz belongs to H (although it is not in G in general) and also that h H = 1. Of course, h ′ (θ) = pf log f and thus, the derivation associated to Diagram 11 is given by (12) Ω p (f ) = pf log(|f |/ f p ) (f ∈ L p (τ )).
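For completeness, the elementary differentiation behind the formula just quoted can be written out; the following display is only this added computation, stated first for a positive f normalized in L p (τ ) and then rescaled by homogeneity.

```latex
% Derivative of the extremal h(z) = f^{pz} at \theta = 1/p, for f \ge 0 with \|f\|_p = 1.
\[
  h(z) = f^{pz} = e^{pz\log f}, \qquad
  h'(z) = p\,(\log f)\, e^{pz\log f} = p\, f^{pz}\log f ,
\]
\[
  \text{so}\quad h'(\theta) = h'(1/p) = p\, f \log f ,
  \qquad\text{and for general } f:\quad
  \Omega_p(f) = p\, f \log\bigl(|f|/\|f\|_p\bigr).
\]
```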
Let us denote the corresponding twisted sum L p (τ ) ⊕ Ωp L p (τ ) by Z p (τ ). Our immediate aim is to prove the following.
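Readers who want to get a feel for Ω p can experiment in the simplest commutative, finite-dimensional case, M = ℓ ∞ n with the standard trace, where L p (τ ) is just ℓ p n and Ω p is (up to the factor p) the classical Kalton–Peck map. The sketch below is ours and purely illustrative: it estimates the quasi-linearity and centralizer constants of Definition 1 numerically on random vectors; it is not part of any proof.

```python
# Numerical sanity check of Omega_p(f) = p * f * log(|f| / ||f||_p) on l_p^n.
import numpy as np

rng = np.random.default_rng(0)
p, n = 2.5, 200

def norm_p(f):
    return np.sum(np.abs(f) ** p) ** (1.0 / p)

def omega(f):
    """Kalton-Peck type centralizer on l_p^n, with omega(0) = 0 by convention."""
    nf = norm_p(f)
    out = np.zeros_like(f)
    if nf == 0:
        return out
    nz = f != 0
    out[nz] = p * f[nz] * np.log(np.abs(f[nz]) / nf)
    return out

q_ratios, c_ratios = [], []
for _ in range(500):
    f, g = rng.standard_normal(n), rng.standard_normal(n)
    a = rng.uniform(-1, 1, n)   # a in the unit ball of l_inf, acting by pointwise multiplication
    q_ratios.append(norm_p(omega(f + g) - omega(f) - omega(g)) / (norm_p(f) + norm_p(g)))
    c_ratios.append(norm_p(omega(a * f) - a * omega(f)) / norm_p(f))

print(f"max quasi-linearity ratio observed ~ {max(q_ratios):.2f}")
print(f"max centralizer ratio observed     ~ {max(c_ratios):.2f}")
# Both ratios stay bounded over the samples, even though omega itself is unbounded,
# which is exactly the behavior Definition 1 asks of a centralizer.
```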
Theorem 3. Z p (τ ) is a nontrivial self extension of L p (τ ) as long as M is infinite dimensional and 1 < p < ∞.
Proof. Needless to say Z p (τ ) is a bimodule extension. We shall prove that it doesn't split even as an extension of Banach spaces. As M is infinite dimensional there is a sequence (e i ) of mutually orthogonal projections having finite trace. Given E ⊂ M, we put Notice that Ω p maps L p (τ ) e to M e as an M e -centralizer and we have a commutative diagram On the other hand, the conditional expectation given by is a contractive projection of L p (τ ) whose range is L p (τ ) e . The immediate consequence of all this is that if the lower extension in Diagram 13 splits, then so does the upper one. Let us check that this is not the case. As M e is amenable (it is isometrically isomorphic to the algebra ℓ ∞ ) and L p (τ ) e is a dual bimodule over M e = ℓ ∞ (it is isometrically isomorphic to ℓ p ) we have that the upper row in (13) splits as an extension of Banach spaces if and only if it splits as an extension of Banach M e -modules. And this happens if and only if there is a morphism φ : L p (τ ) e → M e approximating Ω p in the sense that (14) Ω for some constant δ and every f ∈ L p (τ ) e . It is clear that every morphism φ : L p (τ ) e → M e has the form φ( i t i e i ) = i φ i t i e i for some sequence of complex numbers (φ i ). Taking f = e i in (14) we see that |φ i + log τ (e i )| ≤ δ. It follows that if (14) holds for some φ = (φ i ) then it must hold for φ i = − log τ (e i ), possibly doubling the value of δ. Fix n ∈ N and take f = n i=1 t i e i normalized in L p (τ ) in such a way that the nonzero summands in the norm of f agree:
For this f and
which makes impossible the estimate in (14).
2.4. Duality.
In this Section we extend Kalton-Peck duality results in [17] to all semifinite algebras by showing that for every trace τ the dual space of Z p (τ ) is isomorphic to Z q (τ ), where p and q are conjugate exponents. In order to achieve a sharp adjustment of the parameters, let us agree that, given p ∈ (1, ∞) and a Lipschitz function ϕ : R → C, the associated Kalton-Peck centralizer Φ p : L p (τ ) → M is defined by Φ p (f ) = f ϕ(p log(|f |/ f p ) and the corresponding Kalton-Peck space is Z ϕ p (τ ) = L p (τ ) ⊕ Φp L p (τ ). This is coherent with (12), where ϕ is the identity on R.
Theorem 4. Let p and q be conjugate exponents, ϕ a Lipschitz function, and τ a trace. Then Z^ϕ_q(τ) is isomorphic to the conjugate of Z^ϕ_p(τ) under the pairing (15), ⟨(x, y), (v, w)⟩ = τ(xw − yv).

Proof. The proof depends on an elementary inequality, labelled (16), valid for all s, t ∈ C. As ‖(x − Φ_q(y))w‖_1 ≤ ‖x − Φ_q(y)‖_q ‖w‖_p and, similarly, ‖y(v − Φ_p(w))‖_1 ≤ ‖y‖_q ‖v − Φ_p(w)‖_p, it suffices to get an estimate of the form

(17) |τ(Φ_q(y)w − yΦ_p(w))| ≤ M ‖y‖_q ‖w‖_p.
First, let us assume y and w are σ-elementary operators with ‖y‖_q = ‖w‖_p = 1 and representations y = Σ_i t_i y_i and w = Σ_j s_j w_j converging in L_q(τ) and L_p(τ), respectively. We may assume with no loss of generality that Σ_i y_i = Σ_j w_j = 1_M (summation in the σ(M, M_*) topology). Applying (16), and taking into account that if y and w are projections one has τ(yw) ≥ 0, we can estimate the left-hand side of (17), where L_ϕ denotes the Lipschitz constant of ϕ. By symmetry one also has |τ(Ω_q(y)w − yΩ_p(w))| ≤ 2e^{−1} q L_ϕ and so

(18) |τ(Φ_q(y)w − yΦ_p(w))| ≤ 4e^{−1} L_ϕ ‖y‖_q ‖w‖_p,

whenever y and w are σ-elementary operators. Now, if y and w are arbitrary it is easy to find sequences of σ-elementary operators (y_n) and (w_n) for which the relevant numerical sequences all converge to zero. This implies that ‖(Φ_q(y_n)w_n − y_nΦ_p(w_n)) − (Φ_q(y)w − yΦ_p(w))‖_1 → 0 and so (18) holds for every y and w. Therefore, going back to (15), we obtain the required bound. The remainder of the proof is quite easy: we have just seen that the map u : Z^ϕ_q(τ) → (Z^ϕ_p(τ))^* given by (u(x, y))(v, w) = τ(xw − yv) is bounded. On the other hand, the following diagram is commutative; here, the lower row is the adjoint of the extension induced by Φ_p. It follows that u is onto, and open.
2.5. The role of the trace. Theorem 3 cannot be extended to arbitrary centralizers. Actually, the following example shows that the behavior of Φ_τ may depend strongly on the trace τ.
(b) If τ is bounded away from zero on the projections of M, then Φ^+_τ is trivial, while Φ^−_τ is not (provided M is infinite dimensional).
Proof. (See the proof of Theorem 3.) Let Ψ : L_p(µ) → L_0(µ) be any centralizer. Let (S_i) be a sequence of disjoint measurable sets with finite and positive measure. Given E ⊂ L_0(µ), we write E_S for the set of those functions f in E that can be written as f = Σ_i t_i 1_{S_i} for some sequence of complex numbers (t_i). If Ψ maps L_p(µ)_S to L_0(µ)_S, then it defines an L_∞(µ)_S-centralizer on L_p(µ)_S. Moreover, if Ψ is trivial on L_p(µ) (as a quasi-linear map), then its restriction to L_p(µ)_S is trivial as an L_∞(µ)_S-centralizer.
(a) To check that Φ^+ is nontrivial on L_p(R_+), just take a sequence (A_i) with |A_i| = 2^{−i}. To check that Φ^− is nontrivial, take A_i with |A_i| = 1 for all i ∈ N.
(b) We may assume τ(e) ≥ 1 for every projection e ∈ M. Pick a positive, σ-elementary f normalized in L_p(τ), so that f = Σ_{i=1}^∞ f_i e_i, with f_i ≥ 0 and e_i disjoint projections. Obviously f_i ≤ 1 for every i and so Φ^+(f) = 0. It follows that Φ^+ is bounded on L_p(τ).
As Φ^+_τ + Φ^−_τ = Ω_p and Φ^+ is trivial, we see that Φ^− must be nontrivial, since Ω_p is nontrivial unless M is finite dimensional.
Type III algebras
In this Section we leave the comfortable tracial setting and face the problem of twisting arbitrary L_p spaces, including those built over type III von Neumann algebras. There are several constructions of these L_p spaces, none of them elementary. All provide bimodule structures on the resulting spaces which turn out to be equivalent in the end.
It is natural to ask for (nontrivial) self-extensions of L_p(M) in the category of Banach bimodules over M. Unfortunately we have been unable to construct such objects; nevertheless, we can still use the interpolation trick to obtain self-extensions as (one-sided) modules. In this regard the most suitable representation of L_p spaces is one due to Kosaki. For the sake of clarity, we can restrict here to σ-finite algebras so that we can take weights in M_*. So, let M be a von Neumann algebra and φ ∈ M_* a faithful positive functional. (We don't normalize φ because the restriction of a state to a direct summand is not a state; see Lemma 4(b) below.) We "include" M into M_* just by taking a ∈ M ↦ aφ ∈ M_*, thus starting the interpolation procedure with Σ = M_* as "ambient" space and ∆ = Mφ, to which the norm and σ(M, M_*) topology are transferred without further mention. Then the Kosaki (left) version of the space L_p(M, φ) is the complex interpolation space obtained from the couple (Mφ, M_*) at θ = 1/p. We emphasize that we are referring to Kosaki's construction [20,24,23] and not to that of Terp [27,28]. Recall that M_* carries a natural M-bimodule structure. The inclusion ·φ : M → M_* is, however, only a left-homomorphism: (ba) · φ = b(aφ). Asking for a two-sided homomorphism means that one should also have (ab) · φ = (aφ)b for all a, b ∈ M; in particular (take a = 1), bφ = φb for all b ∈ M, which happens if and only if φ is a trace.
(where θ = 1/p) and observe that every arrow here is a homomorphism of left M-modules. Let us denote by Z_p(M, φ) or Z_p(φ) the push-out space in the preceding diagram. This is coherent with the notation used in the tracial case. We have mentioned that there is also a right action of M on L_p(M, φ) which is compatible with the given left action and makes L_p(M, φ) into a bimodule. All known descriptions of that action are quite heavy and depend on Tomita-Takesaki theory. That action is in general incompatible with the arrows in the preceding diagram. Now, we are confronted with the problem of deciding whether the lower extension in Diagram 19 is trivial or not. The pattern followed in the proof of Theorem 3 cannot be used now because we have only a left multiplication in Z_p(M, φ). Then, for each p ∈ [1, ∞], L_p(N, φ|_N) embeds into L_p(M, φ) and there is an N-homomorphism on the latter space whose range is the former.
Proof. (a) We have assembled the hypotheses in order to guarantee the commutativity of the diagram. Here, ı : N → M is the inclusion map and the subscript indicates the preadjoint (in the Banach space sense); in particular ı_* is plain restriction. Indeed, for a ∈ N, one has ε_*(aφ) = a ε_*(φ) = aφ, so the left square commutes. As for the right one, taking a ∈ N, b ∈ M, we have ⟨ε(b)φ, a⟩ = ⟨φ, aε(b)⟩ = ⟨φ, ε(ab)⟩ = ⟨φ, ab⟩ = ⟨bφ, a⟩.
(b) In this case we can use the same diagram, just replacing ε by the projection P : M → N given by P(a) = eae, where e is the unit of N. Then P_* : N_* → M_* is given by ⟨P_*(ψ), b⟩ = ⟨ψ, ebe⟩.
The following step is the result we are looking for.
Lemma 5. With the same hypotheses as in Lemma 4, Z_p(N, φ|_N) is a complemented subspace of Z_p(M, φ) for every 1 < p < ∞.
Proof. We write the proof assuming (a). The other case requires only minor modifications that are left to the reader. Let us begin with the embedding of PO(N ) = Z p (N , φ| N ) into PO(M) = Z p (M, φ). Consider the diagram Here, (ε * ) • sends a given function f : S → N * to the composition ε * • f : S → N * → M * and the mappings from L p (N ) to L p (M) are all given by ı p . It is not hard to check that this is a commutative diagram. Therefore, we can insert an operator κ : PO(N ) → PO(M) making the resulting diagram commutative because of the universal property of the push-out square A similar argument shows the existence of an operator π : PO(M) → PO(N ) making commutative the diagram The arrows from L p (M) to L p (N ) are now given by ε p . Putting together the two preceding diagrams it is easily seen that π • κ is the identity on PO(N ).
Here is the main result about the twisting of Kosaki's L p . As we shall see later (Section 3.2) Z p (M, φ) doesn't depend on φ and so the conclusion of the following Theorem holds for any φ.
Theorem 5. Let M be an infinite dimensional von Neumann algebra. There is a faithful normal state φ for which the lower extension of the push-out diagram (21) is nontrivial.
Proof. The idea of the proof is to choose φ in such a way that its "centralizer subalgebra" M_φ = {a ∈ M : aφ = φa} is infinite dimensional. Now apply Lemma 5 to embed PO(M_φ, φ) as a complemented subspace (in fact as a "complemented subextension") of PO(M, φ), and note that the restriction of φ to M_φ is a (finite) trace by the very definition of M_φ.
The nonsplitting of PO(M φ , φ) is nothing but a particular case of Theorem 3 as for a finite trace τ one has M ⊂ L 1 (τ ) and, after identifying L 1 (τ ) with M * , the inclusion agrees with Kosaki's left method.
In order to find the required φ, let us decompose M = N ⊕ L, with N semifinite and L without direct summands of type I. (This can be done in several ways: for instance, taking N as the semifinite part and L as the type III part of M, or taking N as the discrete part and L as the continuous part; see [2, Section III.1.4].) By Lemma 5 we have an isomorphism PO(M, φ) = PO(N, φ|_N) ⊕ PO(L, φ|_L), and we can consider the two cases separately.
⋆ First, assume M has no direct summand of type I (so that it is either of type II or of type III). Then, if ψ is any faithful normal state on M, there is a faithful normal state φ (in the closure of the orbit of ψ under the inner automorphisms of M) whose centralizer subalgebra M_φ is of type II_1 ([10, Theorem 11.1]), and we are done.
⋆ Now, suppose M is semifinite and let us see that any φ works. Let τ be a (fns) trace on M and let us identify M_* with L_1(τ), so that we may consider φ as a τ-measurable operator on the ground Hilbert space. If φ is elementary, let us write it as φ = Σ_{i=1}^n t_i e_i, where the e_i are mutually orthogonal projections in M. Letting M_i = e_i M e_i, we see that ⊕_i M_i is an infinite dimensional subalgebra of M_φ, which is enough. Otherwise φ has infinite spectrum and its spectral projections already generate an infinite dimensional subalgebra of M_φ.
We want to see that if p, q ∈ (1, ∞) are conjugate exponents, then the conjugate of Z p (M, φ) ℓ (our former Z p (M, φ)) is well isomorphic to Z q (M, φ) r .
at least when f ∈ Mφ and g ∈ φM.
The following result is implicit in [26].
Theorem 6. Given all this paraphernalia, if p, q ∈ (1, ∞) are conjugate exponents, then the conjugate of Z_p(M, φ)_ℓ is isomorphic to Z_q(M, φ)_r. More precisely, there is an isomorphism of right Banach modules over M making the following diagram (23) commutative.

Proof. The estimate (24) shows that u(g′, g) acts as a bounded linear functional on Z_p(M, φ)_ℓ, with ‖u(g′, g) : Z_p(M, φ)_ℓ → C‖ ≤ M ‖(g′, g)‖_{−Ω^r_q}, at least when g is in ∆_r. This defines an operator making the following diagram commute, where ∆_r is treated as a submodule of L_q(M, φ)_r. By density u extends to an operator that we still call u fitting in (23). The five-lemma and the open mapping theorem guarantee that u is a linear homeomorphism. It remains to check that it is also a homomorphism of right M-modules. But for g ∈ ∆_r and f ∈ ∆_ℓ one has the required identity, and this completes the proof.
Change of state.
In this Section we prove that the extension L_p(M, φ) → Z_p(M, φ) → L_p(M, φ) is essentially independent of the reference state φ, in the following precise sense.
in which the vertical arrows are isomorphisms of left M-modules.
Proof. The proof is based on an idea explained and discarded by Kosaki in [20, p. 71]. We remark that our proof provides a very natural isometry between L p spaces based on two different states.
It will be convenient to consider two more spaces of analytic functions. The first one is the obvious adaptation of the space H appearing in Section 2.3 to the nontracial setting. So, given a faithful state φ ∈ M_*, we consider the couple (Mφ, M_*), and the space H = H(M, φ) of bounded functions H : S → M_* such that: (1) H is continuous on S and analytic on S° with respect to σ(M_*, M).
with identical norms. This is very easy to check, once we know that L_p(M, φ) is reflexive and agrees with the dual of the right space L_q(M, φ)_r, where q is the conjugate exponent of p. As Lemma 2 is true (with the same proof) replacing G by F or by H, we see that the lower extension in Diagram 19 does not vary after replacing G by F or by H. We shall use the notation F_0 = ker δ_θ and F_1 = ker δ_θ ∩ ker δ′_θ, and similarly for G and H. As we mentioned after Lemma 3, one has isomorphisms between the corresponding quotient spaces. It is important to realize how these quotient spaces arise as self-extensions of L_p = L_p(M, φ). We describe the details for the smaller space F; replacing it by G or H makes no difference. Recall that we have F_1 ⊂ F_0 ⊂ F and therefore an exact sequence 0 → F_0/F_1 → F/F_1 → F/F_0 → 0, where the arrows are the obvious maps. This becomes a self-extension of L_p after identifying F/F_0 with L_p through the factorization of the evaluation map δ_θ : F → L_p at θ = 1/p, while the identification of F_0/F_1 with L_p is provided by the factorization of the derivative δ′_θ : F_0 → L_p (at θ = 1/p), which is an isomorphism of left modules over M.
We conclude these prolegomena with the following observation. Let E(M, φ) denote the subspace of those F ∈ F(M, φ) having the form F(z) = f(z)φ, where f : S → M is continuous and analytic on the interior. It turns out that E(M, φ) is dense in F(M, φ). Indeed, the set of functions having the form F(z) = f(z)φ, with f as in [1, Lemma 4.2.3], is already a dense subspace of F(M, φ). Let ϕ : S → D be the function given by (6). Replacing G by F everywhere in the proof of Lemma 2, we see that F_0 = ϕF in the sense that the multiplication operator f ↦ ϕf is an isomorphism between F and F_0. Similarly, f ↦ ϕ²f is an isomorphism between F and F_1. It follows that E ∩ F_0 and E ∩ F_1 are dense in F_0 and in F_1, respectively. Now we need a bit of (relative) modular theory, for which we refer the reader to [20] or [24]. We fix two faithful states φ_0, φ_1 ∈ M_* and consider the Connes-Radon-Nikodým cocycle of φ_0 relative to φ_1. As it happens, t ↦ (Dφ_0 ; Dφ_1)_t is a strongly continuous path of unitaries in M and so

(26) t ↦ (Dφ_0 ; Dφ_1)_t φ_1

defines a continuous function from R to M_*. Now the point is that (26) extends to a function from the horizontal strip −iS = {z ∈ C : −1 ≤ ℑ(z) ≤ 0} to M_*, which we may denote by (Dφ_0 ; Dφ_1)_{(·)} φ_1, having the following properties: (a) For each x ∈ M, the function z ↦ ⟨(Dφ_0 ; Dφ_1)_{(z)} φ_1, x⟩ is continuous on −iS and analytic on its interior. (b) (Dφ_0 ; Dφ_1)_{(−i+t)} φ_1 = φ_0 (Dφ_0 ; Dφ_1)_t for every real t.
To complete the proof we have to prove that α is an isomorphism; that γ is an isomorphism then follows from the five-lemma. This is not automatic because I : F(φ_0) → H(φ_1) is not surjective. | 2016-02-01T15:40:38.000Z | 2014-07-02T00:00:00.000 | {
"year": 2014,
"sha1": "3028e2d0ce54994eb6547534da145b47df847143",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.aim.2016.02.029",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "3028e2d0ce54994eb6547534da145b47df847143",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
225059413 | pes2o/s2orc | v3-fos-license | Single-agent belantamab mafodotin for relapsed/refractory multiple myeloma: analysis of the lyophilised presentation cohort from the pivotal DREAMM-2 study
DREAMM-2 (NCT03525678) is an ongoing global, open-label, phase 2 study of single-agent belantamab mafodotin (belamaf; GSK2857916), a B-cell maturation antigen-targeting antibody-drug conjugate, in a frozen-liquid presentation in patients with relapsed/refractory multiple myeloma (RRMM). Alongside the main study, following identical inclusion/exclusion criteria, a separate patient cohort was enrolled to receive belamaf in a lyophilised presentation (3.4 mg/kg, every 3 weeks) until disease progression/unacceptable toxicity. Primary outcome was independent review committee-assessed overall response rate (ORR). Twenty-five patients were enrolled; 24 received ≥1 dose of belamaf. As of 31 January 2020, ORR was 52% (95% CI: 31.3–72.2); 24% of patients achieved very good partial response. Median duration of response was 9.0 months (2.8–not reached [NR]); median progression-free survival was 5.7 months (2.2–9.7); median overall survival was not reached (8.7 months–NR). Most common grade 3/4 adverse events were keratopathy (microcyst-like corneal epithelial changes, a pathological finding seen on eye examination [75%]), thrombocytopenia (21%), anaemia (17%), hypercalcaemia and hypophosphatemia (both 13%), neutropenia and blurred vision (both 8%). Pharmacokinetics supported comparability of frozen-liquid and lyophilised presentations. Single-agent belamaf in a lyophilised presentation (intended for future use) showed a deep and durable clinical response and acceptable safety profile in patients with heavily pre-treated RRMM.
Introduction
Despite improved outcomes with currently available therapies, including proteasome inhibitors (PIs), immunomodulatory agents and anti-CD38 monoclonal antibodies (mAbs), multiple myeloma (MM) remains a challenging disease that is incurable for most patients [1][2][3][4] . The typical MM clinical course includes frequent relapses and development of refractory disease 5 . With each successive line of treatment, the duration of response (DoR) and progression-free survival (PFS) get shorter 5,6 . Patients refractory to anti-CD38 mAbs have a poor prognosis and limited treatment options, with newer agents used in combination (such as selinexor plus dexamethasone) resulting in an overall response rate (ORR) of 26% in patients refractory to at least one PI, one immunomodulatory agent and daratumumab 7 . Thus, there remains a need for novel targets and therapies in relapsed/refractory MM (RRMM).
B-cell maturation antigen (BCMA), a member of the tumour necrosis factor receptor family, is expressed on the surface of all normal plasma cells and late-stage B cells, as well as on all malignant cells in all patients with MM 8,9 . BCMA promotes the maturation and long-term survival of normal plasma cells and is also essential for proliferation and survival of malignant plasma cells in MM 9 . Belantamab mafodotin (belamaf; GSK2857916) is a first-in-class, BCMA-targeted antibody-drug conjugate (ADC) consisting of a humanised, afucosylated anti-BCMA mAb fused to the cytotoxic payload monomethyl auristatin F (MMAF) by a protease-resistant maleimidocaproyl linker 10 . Belamaf specifically binds to BCMA and eliminates myeloma cells by a multimodal mechanism, including delivering mafodotin to BCMA-expressing malignant cells, thereby inhibiting microtubule polymerisation and inducing immune-independent ADC-mediated apoptosis; immune-dependent enhancement of antibody-dependent cellular cytotoxicity and phagocytosis; and release of markers characteristic of immunogenic cell death, a form of regulated cell death involving the release of a series of damage-associated molecular patterns (such as calreticulin and high-mobility group box 1) leading to an adaptive immune response [10][11][12][13] .
In the first-in-human, phase 1 DREAMM-1 study (NCT02064387), single-agent belamaf administered as a frozen-liquid presentation induced clinically meaningful (ORR: 60%; 95% confidence interval [CI]: 42.1-76.1), deep (54% of patients with a very good partial response [VGPR] or better) and durable responses (PFS: 12 months, 95% CI: 3.1-not estimable; DoR: 14.3 months, 95% CI: 10.6-not estimable) with median duration of follow-up of 12.5 months in patients previously treated with alkylators, PIs and immunomodulatory agents and refractory to the last line of therapy 14,15 . In a sub-group of patients previously treated with anti-CD38 mAbs, and refractory to both a PI and an immunomodulatory agent, an ORR of 38.5% was reported in patients receiving 3.4 mg/kg singleagent belamaf every 3 weeks (Q3W) 15 .
A refrigerated lyophilised powder presentation of belamaf was developed to improve supply chain robustness by eliminating frozen shipments and storage, and is the presentation intended for future clinical use. In order to gain clinical experience with the lyophilised presentation of belamaf, an independent, exploratory cohort of patients was included in the DREAMM-2 study to receive this alternative presentation. Herein, we report the analysis for this cohort.
Study design and treatment
The DREAMM-2 full study design has been reported previously 16 . In brief, this phase 2, open-label, two-arm, global, multicentre study consisted of a screening/baseline period after which patients in the main study were randomised to receive intravenous belamaf in a frozen-liquid presentation (2.5 or 3.4 mg/kg Q3W). An independent cohort of patients was enrolled to receive belamaf in a lyophilised presentation (3.4 mg/kg Q3W, selected on the basis of the results from the DREAMM-1 study 15 ). As per International Conference on Harmonisation Q5E (ICHQ5E) guidance 18 , the liquid and lyophilised drug products have been deemed comparable for the purpose of safety and efficacy as both are administered intravenously, have the same formulation, are essentially identical upon dilution for administration, and have been demonstrated to be analytically comparable through extensive biochemical and functional characterisation studies (including primary and higher-order structures, bioassay and binding assays), and stability testing. This new presentation was supplied as a refrigerated lyophilised powder to be reconstituted with water for injection prior to dilution in normal saline. It was administered intravenously over ≥30 min on Day 1 of each 3-week cycle, until disease progression or unacceptable toxicity. No systemic premedication was given unless deemed necessary by the investigator. Corticosteroid eye drops and preservativefree lubricant eye drops were used in both eyes to mitigate corneal events, a known toxic effect of MMAF 19 and commonly reported in DREAMM-1. At the discretion of patient and investigator, cooling eye masks could be applied from the start of belamaf infusion for approximately 1 h, and up to 4 h, as tolerated. Dose modifications (delays or reductions) were permitted to manage adverse events (AEs), or for medical or surgical and logistical reasons unrelated to treatment. Criteria for dose modifications and patient withdrawal from the study are shown in the study protocol (Supplementary Material). Patients in the lyophilised cohort followed the same assessments and procedures as in the main DREAMM-2 study 16 .
The study was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice guidelines following approval by ethics committees and institutional review boards at each study site. All patients provided written informed consent.
Patient population
Inclusion/exclusion criteria were the same for patients in the lyophilised study cohort and the main DREAMM-2 study 16 .
Key inclusion criteria
To be eligible for inclusion, patients had to be 18 years or older with an Eastern Cooperative Oncology Group performance status of 0-2 and a histologically or cytologically confirmed diagnosis of MM according to the International Myeloma Working Group (IMWG) criteria 20 . They must have undergone stem cell transplant (>100 days before enrolment) or been considered transplant-ineligible; had disease progression after ≥3 prior lines of anti-myeloma treatment; were refractory to both an immunomodulatory agent and a PI, and refractory and/or intolerant to an anti-CD38 mAb; and meet the criteria for adequate organ system function. Patients with mild or moderate renal impairment and history of cytopenias (without active conditions) were eligible.
Key exclusion criteria
Patients were excluded if they had received prior allogeneic stem cell transplant, BCMA-targeted therapy, had corneal epithelial disease at screening (except mild punctate keratopathy) or any serious and/or unstable medical, psychiatric disorder or other condition that could interfere with the patient's safety, ability to provide informed consent or compliance to the study procedures. Full inclusion and exclusion criteria are included in the Supplementary Material.
Endpoints and assessments
Analysis of the lyophilised cohort was an exploratory objective of the main DREAMM-2 study. Key endpoints were ORR (defined as the percentage of patients with a partial response or better, according to IMWG criteria) 20 assessed by independent review committee (IRC), clinical benefit rate (minimal response or better), time to response (TTR), time to best response, DoR, time to progression, PFS, OS and safety. Investigator-assessed ORR was also recorded and will be reported elsewhere. The safety profile of lyophilised belamaf was monitored with clinical and laboratory assessments, the reporting of AEs graded (with the exception of keratopathy) according to the Common Terminology Criteria for Adverse Events (2010, version 4.03; see Supplementary Material) 21 and the rate of discontinuations and dose adjustments. Keratopathy (defined as microcyst-like epithelial changes [MECs] to the corneal epithelium observed by eye examination, with or without symptoms), thrombocytopenia and infusion-related reactions (IRRs) were monitored as AEs of special interest (AESI). Baseline and subsequent eye examinations were performed pre-dose Q3W by an ophthalmologist or optometrist (full details are provided in the Supplementary Material). Corneal examinations and best-corrected visual acuity assessments (BCVA) were combined and graded on the basis of a keratopathy and visual acuity scale.
Pharmacokinetic analysis
The pharmacokinetic (PK) profile of belamaf was assessed by measurement of belamaf, total mAb (with and without the cytotoxic payload MMAF) and cysteinemaleimidocaproyl MMAF (cys-mcMMAF; the cytotoxic moiety released from belamaf) in plasma collected at Cycles 1 and 3 from all patients. The bioanalytical methods used to quantify concentrations of these analytes were selective, accurate and reproducible (data not shown). The assay methods for belamaf and total mAb measure both free and soluble BCMA-complexed molecules. Individual PK parameters were calculated using standard non-compartmental methods.
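For readers who want to reproduce this kind of non-compartmental summary, a minimal sketch is given below; the function and the sample profile are illustrative only and do not reflect the sponsor's validated PK analysis.

```python
import numpy as np

def nca_summary(times_h, conc):
    """Basic single-dose non-compartmental metrics: Cmax, tmax, AUC(0-last) by linear trapezoid, Ctrough."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc, dtype=float)
    auc = float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))   # linear trapezoidal rule
    return {
        "Cmax": float(c.max()),
        "tmax_h": float(t[c.argmax()]),
        "AUC_0_last": auc,
        "Ctrough": float(c[-1]),        # last sample before the next Q3W dose
    }

# Invented concentration-time profile for one patient over a 3-week (504 h) cycle:
times = [0.5, 2, 6, 24, 72, 168, 336, 504]
concs = [55.0, 48.0, 40.0, 30.0, 18.0, 9.0, 3.5, 1.2]
print(nca_summary(times, concs))
```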
Statistical methods
The full analysis population comprised all patients enrolled in the lyophilised cohort of DREAMM-2, regardless of treatment administration. All patients who received ≥1 dose of lyophilised belamaf were included in the safety population. The sample size for this cohort was chosen based on feasibility in order to gain clinical experience with the lyophilised presentation. The probability of observing an ORR of ≥20% was calculated retrospectively: assuming a true ORR of 33%, there would be a 95% probability of observing an ORR of ≥20% with 25 patients. For the ORR, two-sided exact 95% CIs were reported; 95% CIs are reported for other data. PFS, DoR and TTR were analysed using the Kaplan-Meier method. Descriptive statistics were used for efficacy endpoints, pre-treatment characteristics, AEs and PK parameters. All efficacy endpoints were assessed by the IRC. This study was overseen by an independent data monitoring committee. Direct comparisons to the main study were not intended or made, due to the nonrandomised nature of enrolment into the lyophilised cohort and the relatively small numbers of patients enrolled. Analyses were carried out using Statistical Analysis System software (version 9.4).
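The trial analyses were run in SAS 9.4; for orientation, the two headline calculations can be approximated with common open-source tools (statsmodels for the exact Clopper-Pearson interval, lifelines for Kaplan-Meier estimation). The duration-of-response values below are invented for illustration.

```python
from statsmodels.stats.proportion import proportion_confint
from lifelines import KaplanMeierFitter

# Exact (Clopper-Pearson) two-sided 95% CI for the ORR: 13 responders out of 25 patients.
low, high = proportion_confint(count=13, nobs=25, alpha=0.05, method="beta")
print(f"ORR = {13/25:.1%}, exact 95% CI = ({low:.1%}, {high:.1%})")    # ~(31.3%, 72.2%)

# Kaplan-Meier estimate of duration of response (months; 1 = progression/death observed).
months = [2.8, 4.1, 5.5, 6.9, 9.0, 9.3, 10.2, 11.0, 11.8, 12.4, 13.0]
events = [1,   1,   1,   0,   1,   0,   0,    0,    0,    0,    0]
kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=events, label="Duration of response")
print(kmf.median_survival_time_)
```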
Patient disposition and baseline characteristics
Between 5 December 2018 and 10 January 2019, 31 patients were screened for the lyophilised presentation cohort at 9 sites in the USA and Australia. Twenty-five patients were allocated to treatment with the lyophilised presentation of belamaf (full analysis population) and 24 received the allocated treatment (safety population; 1 patient instead received the frozen-liquid presentation; Fig. 1). At the data cut-off date (31 January 2020), patients had received a median of 3.5 treatment cycles (range: 1-17); median time on study treatment was 16.6 weeks (range: 3-60). Median duration of follow-up was 11.2 months (range: 1.8-14.5). At data cut-off, 17% (4/24) of patients were still receiving study treatment, and 83% (20/24) of patients had discontinued treatment (primary reason: progressive disease [67%]). Ten deaths were reported in this cohort: nine due to the disease under study and one due to another cause.
Baseline characteristics are presented in Table 1. At screening, patients had received a median of 5 prior lines of therapy (range: 3-11). As per the inclusion criteria, all patients had received prior treatment with, and upon analysis were refractory to, a PI, an immunomodulatory agent, and an anti-CD38 mAb (daratumumab). Patients with high-risk cytogenetics (per IMWG criteria) 20 , international staging system stage III disease and extramedullary disease were well represented.
Efficacy
The IRC-assessed ORR was 52% (95% CI: 31.3-72.2). A VGPR was seen in 24% (6/25) of patients (46% [6/13] of responders) (Fig. 2 and Table 2). The IRC-assessed clinical benefit rate (minimal response or better) was 56%.
Figure 1 footnotes: Patients could have more than one reason for exclusion. c Five patients were excluded due to pre-existing corneal disease, as specified in the study protocol. d The remainder of enrolled patients were included in the main DREAMM-2 study previously reported 16 ; two patients in the main study were rerandomised and counted twice (once per each randomisation). e One patient was randomised to the belamaf 3.4 mg/kg lyophilised presentation, but actually received the 3.4 mg/kg frozen-liquid presentation as the first dose, and never received the lyophilised presentation during the study.
Table 1 footnotes: The number of prior lines of therapy is derived as the number of prior anticancer regimens received by a patient as reported on the electronic case report form; combination therapy containing multiple components was counted as one regimen. d All patients were refractory to a PI and an immunomodulatory agent, and refractory and/or intolerant to an anti-CD38 mAb, as per eligibility criteria. Refractory was defined as disease that is non-responsive while on primary or salvage therapy or progressing ≤60 days after the last therapy.
Safety
Overall, 100% (24/24) of patients experienced ≥1 AE. The most common AEs (any grade) were keratopathy (MECs, changes to the corneal epithelium observed by eye examination with or without symptoms), thrombocytopenia, fatigue, blurred vision, dry eye, anaemia and back pain ( Table 3). The most common grade 3/4 AEs were keratopathy (MECs), thrombocytopenia, anaemia, hypercalcaemia, hypophosphatemia, neutropenia and blurred vision (Table 3). Serious AEs (SAEs) were reported in 63% of patients (Supplementary Table 1) and were considered treatment related in 17% of patients. There was one death due to an SAE (due to cardiac failure; unrelated to study treatment).
Median dose intensity was 2.32 mg/kg Q3W (range: 1.0-3.4), which was lower than intended due to the incidence of dose reductions and delays. Dose reductions and delays occurred in 58% (14/24) and 71% (17/24) of patients, respectively. Of those with dose reductions, 71% (10/14) of patients had a single dose reduction to 2.5 mg/ kg and 29% (4/14) had a second reduction to 1.92 mg/kg. In patients with dose delays, 59% (10/17) of patients had a single dose delay, 12% (2/17) had two dose delays and 29% (5/17) of patients had ≥3 dose delays. The median duration of dose delays was 21 days (range: 4-168). AEs leading to dose reductions (58%) and delays (79%) were common; 2 patients (8%) permanently discontinued treatment due to AEs (keratopathy [MECs] in 1 patient, cardiac failure in 1 patient). Permanent treatment discontinuation due to AEs was considered treatment related in 1 patient (4%). The most common AEs leading to dose reductions (occurring in ≥5% of patients) included keratopathy (MECs; in 46% of patients), thrombocytopenia (8%) and blurred vision (8%); patients could have more than one AE leading to dose reduction. Keratopathy (MECs; 75%) and blurred vision (25%) were the most common AEs leading to dose delays. Thrombocytopenia (which included thrombocytopenia, haematoma and platelet count decreased) was reported in 46% of patients, with 21% of patients experiencing grade 3/4 events (Table 3). IRRs (including terms IRR, pyrexia, transfusion reaction and chills occurring ≤24 h of infusion) occurred in 17% (4/24) of patients, with no grade 3/4 events. In patients with IRRs, the first occurrence was typically with first infusion (in 75% [3/4] patients); 2/4 patients experienced a single IRR and 2/4 had two IRRs; IRRs resolved in all patients. Although not protocol mandated, 46% of patients received at least one prophylactic pre-medication for IRRs, with 29% of patients receiving prophylactic pre-medication at Cycle 1. In terms of drug class, 33% of patients in the safety population received an analgesic (paracetamol), 42% received an antihistamine and 25% received a steroid as prophylactic pre-medication for IRRs.
Keratopathy (MECs) was the most frequent AE (96%); grade 1/2 (mild/moderate) events were recorded in 21% (5/24) patients and grade 3 (severe) events in 75% (18/24) patients. No grade 4 events occurred. In patients with grade ≥2 events (n = 21), the median time to onset of the first occurrence of keratopathy (MECs) was 23 days (range: 18-283). The onset of keratopathy (MECs) was reported in 58% of patients in the safety population at Cycle 1, in 83% of patients by Cycle 2, in 92% of patients by Cycle 4 and reached the maximum reported incidence of 96% after Cycle 10. At data cut-off, 52% (11/21) of patients with ≥grade 2 keratopathy (MECs) recovered from the first occurrence, with a median duration of first occurrence of 127 days (range: 23-278). Among the 11 patients who recovered, 5 recovered after treatment discontinuation, 5 recovered with dose delay or dose reduction, and 1 recovered without dose modification. At data cut-off, the first occurrence of ≥grade 2 keratopathy (MECs) had not resolved in the remaining 48% (10/21) of patients. Of these, 20% (2/10) of patients were on treatment, 50% (5/10) were no longer in follow up due to death or loss to follow-up and 30% (3/10) were still in follow-up. Keratopathy (MECs) was the most common AE leading to dose delays (75%) and reductions (46%). Dose delays due to keratopathy (MECs) began at Week 4, while dose reductions began later, at Week 7.
BCVA declined to 20/50 or worse in the better seeing eye at least once during or after the treatment period in 33% (8/24) of patients. Median time to onset for the first occurrence was 57 days (range: 39-146). As of the last follow-up, 100% (8/8) of patients had recovered (BCVA better than 20/50 in the better seeing eye), with a median time to recovery of 21.5 days. Two patients had a transient worsening of their vision (BCVA worse than or equal to 20/200) in one eye only; however, both patients saw an improvement in BCVA (i.e., returned to baseline during follow-up). In 1 patient, the event occurred 61 days after the last dose (treatment discontinued due to progressive disease) and had resolved 21 days later; in the other patient, the event occurred after the first dose but was resolved prior to administration of the second dose. This patient has remained on treatment without a further occurrence; follow-up is ongoing. No patients had a transient worsening of vision to 20/200 in their better seeing eye.
Among patients with keratopathy (MECs), 83% reported symptoms (including blurred vision or subjective dry eye) and/or had a decrease in BCVA (2 or more lines decline in the better seeing eye). Overall, blurred vision and dry eye were the most common patient-reported corneal symptoms (38 and 25%, respectively), and were generally <grade 3 (Table 3). Median time to first occurrence of blurred vision and dry eye was 26 days (range: 19-247) and 45 days (range: 2-66), respectively. Median duration of first occurrence was 43 days (range: 26-178) and 115 days (range: 4-173), respectively. As of last follow-up, blurred vision had resolved in 56% (5/9) of affected patients, and dry eye resolved in 33% (2/6) of patients.
Pharmacokinetics
Exposure measures (AUC and C max ) were generally similar for the three analytes (belamaf, total mAb and cys-mcMMAF) after administration of either the frozen-liquid or lyophilised presentations of belamaf (Table 4). In population PK analyses, presentation was not a significant factor for belamaf PK (data not shown). After accounting for key covariates, PK behaviour of the three analytes was similar after administration of the frozen-liquid and lyophilised presentations.
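Exposure summaries of this type are conventionally reported as geometric means with between-subject coefficients of variation (%CVb); the usual log-scale computation is sketched below with invented values.

```python
import numpy as np

def geo_mean_cvb(values):
    """Geometric mean and %CVb from log-scale mean and variance, as typically reported for PK exposure."""
    logs = np.log(np.asarray(values, dtype=float))
    gmean = float(np.exp(logs.mean()))
    cvb = float(100.0 * np.sqrt(np.exp(logs.var(ddof=1)) - 1.0))
    return gmean, cvb

print(geo_mean_cvb([812.0, 950.0, 690.0, 1105.0, 874.0]))   # illustrative AUC values only
```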
Discussion
In this exploratory cohort of patients with heavily pretreated RRMM, single-agent belamaf (3.4 mg/kg Q3W) in a lyophilised presentation demonstrated deep and durable anti-myeloma activity, with an ORR of 52%. Responses were deep, with 46% (6/13) of responders achieving a VGPR. ORRs were similar to those in patients with RRMM who were refractory to a PI and an immunomodulatory agent and exposed to anti-CD38 mAbs receiving single-agent belamaf 3.4 mg/kg Q3W in both the first-inhuman DREAMM-1 study (ORR: 38.5% in this sub-group of 13 patients) and the previously published main DREAMM-2 study (ORR: 35% at 13-month follow-up). The ORR reported in this study compares favourably with STORM, the only other clinical trial designed to prospectively evaluate an anti-myeloma treatment (combination selinexor plus dexamethasone) in patients refractory to at least one PI, one immunomodulatory agent, and daratumumab (as in DREAMM-2), in which an ORR of 26% was reported 7 . The STORM study recruited patients previously exposed to bortezomib, carfilzomib, lenalidomide, pomalidomide, daratumumab and an alkylating agent, a similar population to this DREAMM-2 cohort in which all patients were exposed to bortezomib, lenalidomide, pomalidomide and daratumumab, and 80% of patients were exposed to carfilzomib. The median DoR in this study was 9.0 months (95% CI: 2.8-NR) after median follow-up of approximately 11 months; a median DoR of 4.4 months was reported in STORM 7 , suggesting that clinical responses to belamaf are durable, as was the case in the DREAMM-1 study 15 . The median PFS in this patient cohort was 5.7 months, while median OS was not reached (95% CI: 8.7 months-NR) even at this later time point.
As in the main DREAMM-2 study, belamaf had an acceptable safety profile, with no new safety concerns identified with the lyophilised presentation 16 . Based on previous clinical experience with belamaf and literature reports of MMAF-containing ADCs, thrombocytopenia was an AESI 19 . In this study, while common, thrombocytopenia was considered self-limiting and did not lead to treatment discontinuation. IRRs, as expected for biological agents including belamaf, were common, but resolved in all patients. As expected, keratopathy (MECs) on eye examination was common, but events were generally limited to the epithelium (the superficial layer of the cornea) and rarely led to treatment discontinuation. Dry eye and blurred vision events were also common, but as with keratopathy (MECs), were effectively managed with dose delays and/or reductions and concomitant use of preservative-free lubricant eye drops. Corneal events associated with belamaf may be adequately managed by close liaison with eye care professionals and dose modifications (both delays and reductions), as clinically warranted. For patients with grade 1 events, treatment should be continued at the current dose (on the basis of the 2.5-mg/kg results from the main study) 17 . For grade 2 events, dosing should be withheld until corneal exam findings and changes in BCVA improve to a grade 1 event or better, when dosing should resume at the current dose. For grade 3 events, treatment should be withheld until corneal exam findings and changes in BCVA improve to grade 1 or better, when dosing should resume at a reduced dose of 1.9 mg/kg. Treatment should be permanently discontinued for grade 4 events.
Table 4 footnotes: AUC, area under the curve; C max, maximum observed plasma concentration; C trough, plasma concentration prior to next dose; cys-mcMMAF, cysteine-maleimidocaproyl monomethyl auristatin F; NQ, not quantifiable; t max, time of C max. Data presented as geometric mean (%CVb), except t max and C trough for cys-mcMMAF, presented as median (minimum-maximum). a Study population details, efficacy and safety analyses were previously reported 16 .
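The corneal-event dose-modification rules just described reduce to a small decision table; the sketch below simply restates them in code (grading per the protocol's keratopathy and visual acuity scale) and is not validated clinical software.

```python
def corneal_event_action(grade, current_dose_mg_per_kg):
    """Dosing action for a corneal event of a given grade, per the rules described above."""
    if grade <= 1:
        return ("continue", current_dose_mg_per_kg)
    if grade == 2:
        # Withhold until findings improve to grade 1 or better, then resume at the current dose.
        return ("withhold_then_resume", current_dose_mg_per_kg)
    if grade == 3:
        # Withhold until grade 1 or better, then resume at the reduced dose of 1.9 mg/kg.
        return ("withhold_then_resume", 1.9)
    return ("permanently_discontinue", None)   # grade 4

print(corneal_event_action(3, 2.5))            # ('withhold_then_resume', 1.9)
```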
The belamaf frozen-liquid presentation was primarily used in DREAMM-1 and in the main cohort of the pivotal DREAMM-2 study to evaluate safety and efficacy [14][15][16] . The refrigerated lyophilised presentation is intended for future clinical use, has been demonstrated to be analytically comparable to the liquid presentation, and is a more robust presentation since it eliminates the frozen shipment and storage requirements. From the patient perspective, either presentation of the drug product is essentially identical upon dilution for intravenous administration. After correction for covariates, no significant difference in PK behaviour was observed for the two presentations, and presentation was not a significant factor in the population PK and exposure-response analyses for belamaf. Belamaf is the first anti-BCMA agent with a multimodal mechanism of action, convenient dosing schedule and no requirement for combination with dexamethasone, making it potentially attractive for use in the real-world setting 22 . The data presented here, in combination with previously published data from DREAMM-1 and DREAMM-2, support single-agent lyophilised belamaf as a practical and effective treatment option for patients with heavily pre-treated RRMM. | 2020-10-25T13:05:18.740Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "de5de8cd2054818ae02d0e22fc61cf55c1049140",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41408-020-00369-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00a1f77ee61f4ad5b116f01afb700ceb9c364284",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270936130 | pes2o/s2orc | v3-fos-license | Tendances temporelles et impact pronostique des modalités de reperfusion chez les patients Tunisiens se présentant pour infarctus avec sus décalage du segment ST : Analyse sur 20 ans
ABSTRACT Introduction: Thanks to reperfusion therapies, the management of patients presenting with ST-segment elevation myocardial infarction (STEMI) has undergone a transformation over recent decades. Methods: Trends in reperfusion and mortality were analysed in this 20-year registry of patients presenting with STEMI in the Monastir region. Results: Of 1734 patients with STEMI, 1370 (79%) were men and the mean age was 60.3 ± 12.7 years. From 1998 to 2017, the use of primary percutaneous coronary intervention (PCI) increased from 12.5% to 48.3%, while the use of fibrinolysis decreased from 47.6% to 31.7% (p<0.001 for both). In-hospital mortality decreased from 13.7% during the 1998-2001 period to 5.4% during the 2014-2017 period (p=0.03). Long-term mortality (mean follow-up 49.4 ± 30.7 months) decreased significantly from 25.3% to 13% (p<0.001). In multivariate analysis, age, female sex, anemia at presentation, akinesia/dyskinesia of the infarcted territory and the use of plain old balloon angioplasty were independent predictors of long-term death, whereas primary PCI and pre-infarction angina were predictors of long-term survival. Conclusions: In this study spanning 1998 to 2017, STEMI reperfusion delays decreased concomitantly with an increase in the use of primary PCI. In-hospital and long-term mortality decreased significantly.
INTRODUCTION
In the last four decades, management of ST-elevation myocardial infarction (STEMI) has witnessed a significant transformation worldwide [1][2][3]. In Western countries, fibrinolytic therapy, antithrombotic treatments, and the implementation of primary percutaneous coronary intervention (PCI) networks led to a substantial decline in early and long-term mortality and complications [4][5][6][7]. The main modifications consisted of the rapid shift to primary PCI for over 90% of patients with STEMI, the introduction of clopidogrel and then of more potent anti-P2Y12 antiplatelet agents, and the implementation of new lipid-lowering drugs aiming at reducing atherosclerosis-related events [2,4,8]. In countries of the Middle East and North Africa (MENA) region, STEMI management is still affected, on the one hand, by somewhat random access to reperfusion therapies and territorial discrepancies in the distribution of coronary care units (CCU) and, on the other, by insufficient patient follow-up and secondary prevention. In the last few years, available publications from national registries and surveys have addressed the high cardiovascular risk profile that characterizes these patients and the dire short-term prognosis in this setting [9][10][11]. Reports concerning the progressive implementation of reperfusion therapeutics in STEMI over time and their potential long-term prognostic impact are clearly lacking. In this study, we sought to delineate temporal trends in reperfusion therapy utilization and in short- and long-term mortality in patients presenting for STEMI in the Monastir region (Tunisia) over a 20-year study period.
METHODS
The current study was carried out on data extracted from a single-center retrospective registry. All consecutive patients aged 18 and older presenting to a major tertiary care facility in the Monastir region (Tunisia) for STEMI between January 1998 and December 2017 were enrolled. The registry was updated in a regular fashion (every 1 to 2 years) by two senior academic staff for data integrity, patient follow-up and outcomes. Patients presenting with STEMI were admitted via the emergency department (ED) or via the regional emergency medical service (EMS). STEMI diagnosis was retained in the presence of significant ST-segment elevation (2 mm in precordial leads or 1 mm in frontal leads) in two contiguous leads on the electrocardiogram (ECG), or in the presence of a presumably new left bundle branch block, concomitant with prolonged (>20 minutes) chest pain or discomfort. For early STEMI management, the medical staff in our department, our regional EMS and the ED essentially implemented the European guidelines for the diagnosis and management of patients presenting with STEMI [12]. That is, patients presenting to the ED or to a nearby healthcare facility (in the Monastir Governorate area) were, whenever possible, swiftly transferred for primary PCI. When long transfer delays (>90 to 120 minutes) were expected, and in the absence of contraindications, fibrinolytic therapy was administered and patients were then referred as soon as possible to our department for reassessment and potential rescue PCI or diagnostic invasive coronary angiography (ICA). Notwithstanding, the registry included patients with STEMI managed with no reperfusion therapy (conservative treatment) in the early phase; reasons for this included late presentation, spontaneous reperfusion and very old age. With the exception of a negligible number, all patients received clopidogrel (loading dose of 300 or 600 mg and 75 mg/day thereafter), aspirin and intravenous unfractionated heparin or enoxaparin as recommended. In patients receiving fibrinolytic therapy, the thrombolytic agents used included streptokinase, alteplase and tenecteplase. In case of primary or rescue PCI, dilatation balloons, bare metal/drug-eluting stents, thrombus aspiration and GP IIb-IIIa inhibitors were used at the operator's discretion, depending on availability and in accordance with European guidelines. Symptom, transfer and reperfusion delays were estimated using ED and EMS records and during thrombolysis and PCI procedures. We opted to report the symptom-to-reperfusion delay, i.e., symptom-to-fibrinolysis or symptom-to-balloon delay, rather than door-to-needle or door-to-balloon time, due to the better availability of these data in patient files and to the good predictive value of total ischemic time for early outcomes in previous studies [13][14][15]. Risk profile and baseline demographic and clinical characteristics were specified for all patients upon admission. Routine laboratory tests were drawn in all patients according to local protocols. Creatinine clearance was calculated using the Modification of Diet in Renal Disease (MDRD) formula. For the purpose of the current study, chronic kidney disease was considered in patients presenting with a creatinine clearance ≤60 mL/min. In accordance with the World Health Organization criteria, anemia was defined as a hemoglobin level <13 g/dL in men and <12 g/dL in women [16]. Transthoracic echocardiography was performed in the first 48 hours and any time a complication was suspected. Parameters reported were left ventricle (LV) systolic and diastolic diameters, LV ejection fraction by the Simpson method, regional kinetic abnormalities, LV filling pressures and pulmonary pressures. Patients were observed in the coronary care unit (CCU) for at least 48 hours and put on adjunctive medications (beta blockers, statins, angiotensin converting enzyme inhibitors/angiotensin receptor blockers) as recommended by guidelines. Major bleeding was defined as any fatal bleeding, cerebral bleeding or overt bleeding mandating urgent transfusion. After discharge, patients were followed in outpatient clinics at one to two months, at 6 months, and biyearly thereafter. Patients lost to clinical follow-up were contacted by phone. The time between the index STEMI and last follow-up or death was documented.
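The renal and haematological definitions above translate directly into code; the MDRD variant shown below is the common IDMS-traceable 4-variable form and is our assumption, since the paper names only "the MDRD formula".

```python
def mdrd_egfr(scr_mg_dl, age_years, female, black=False):
    """4-variable (IDMS-traceable) MDRD estimate, in mL/min/1.73 m^2 (assumed variant)."""
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def has_ckd(clearance):
    return clearance <= 60.0                                   # definition used in this study

def has_anemia(hb_g_dl, female):
    return hb_g_dl < (12.0 if female else 13.0)                # WHO criteria

print(round(mdrd_egfr(1.4, 62, female=False), 1), has_ckd(55.0), has_anemia(11.8, female=True))
```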
Statistical analysis
For the current analysis, the overall study period was divided into five periods of four years each: Period 1, from 1998 to 2001, Period 2, from 2002 to 2005, Period 3, from 2006 to 2009, Period 4, from 2010 to 2013, and Period 5, from 2014 to 2017. Trends in risk profile, reperfusion therapy, and early and long-term mortality were determined. Categorical variables are expressed in absolute values and percentages. Continuous variables are expressed as means ± SD or by median value and interquartile range in case of non-normal distribution. The chi-square test was applied to compare categorical variables. One-way ANOVA or the Kruskal-Wallis test was applied to compare continuous variable means or medians between periods, as appropriate. Kaplan-Meier curves for long-term survival were represented according to the reperfusion strategy adopted (i.e., thrombolysis, primary PCI or no reperfusion). The log-rank test was applied for comparison of outcomes between the different reperfusion modalities. Independent long-term predictors of death or survival were determined using multivariable binary logistic regression applied to a variable set. Categorical variables chosen to be included in the multivariate model were determined using chi-square univariate analysis on long-term mortality (cut-off p for selection <0.2), in addition to other forced variables judged relevant for the model. Odds ratios and accompanying 95% confidence intervals were reported, and a p value <0.05 was set for statistical significance. Data collection and statistical analysis were performed using Statistical Package for Social Sciences (SPSS) V. 21 for Windows.
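The analysis pipeline described above (SPSS v21) can be sketched with open-source equivalents; the groups, covariates and counts below are placeholders, not study data.

```python
import numpy as np
from scipy.stats import chi2_contingency, f_oneway, kruskal
import statsmodels.api as sm

# Chi-square test for a categorical variable (e.g., diabetes) across the five study periods.
table = np.array([[40, 35, 50, 60, 55],       # with the characteristic (toy counts)
                  [80, 70, 85, 95, 90]])      # without
chi2, p_chi, dof, _ = chi2_contingency(table)

# One-way ANOVA (or Kruskal-Wallis for non-normal data) for a continuous variable across periods.
ages = [np.random.default_rng(i).normal(60, 12, 100) for i in range(5)]
_, p_anova = f_oneway(*ages)
_, p_kw = kruskal(*ages)

# Multivariable binary logistic regression on long-term death; exponentiated coefficients give ORs.
rng = np.random.default_rng(42)
X = sm.add_constant(rng.standard_normal((300, 3)))    # e.g., age, female sex, primary PCI (toy)
y = rng.integers(0, 2, 300)
fit = sm.Logit(y, X).fit(disp=0)
print(p_chi, p_anova, p_kw, np.exp(fit.params).round(2), np.exp(fit.conf_int()).round(2))
```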
DISCUSSION
The current study presents a unique depiction of major changes in epidemiological characteristics, clinical presentation, management, and early and long-term mortality in Tunisian patients presenting for STEMI at a major tertiary care facility over a 20-year period. To the best of our knowledge, this is the first large study from a North African country that focuses on trends in reperfusion modalities in STEMI with such a long inclusion period and clinical follow-up.
The first observation we report herein is the heavy burden of cardiovascular risk factors in Tunisian patients with STEMI, which remained overall stable throughout the study period. These findings remain in line with those from Gulf registries [17][18][19] and, to a lesser extent, with major Western STEMI registries, in which the prevalence of classical coronary risk factors, diabetes mellitus and tobacco smoking in particular, is lower than reported in the present study [6,8,20,21]. Another important fact is the tangible increase in the prevalence of female gender over the 20-year study period. This may have a substantial impact on informing the management strategies to be implemented, given the higher prevalence of comorbidities and the higher risk of complications and mortality in women compared with men documented in several reports [22][23][24].
It is remarkable that from 1998 to 2017, recourse to fibrinolysis decreased progressively along with a gradual increase in primary PCI use. This is related to several factors, including the progressive adoption of the European guidelines by the different protagonists involved in STEMI patient care, better logistics, and better availability of catheterization platforms and personnel. Primary PCI is nowadays the gold standard for STEMI management when performed in a timely fashion by experienced operators. Its superiority over fibrinolysis regarding survival, reinfarction, revascularization, and heart failure occurrence has been largely demonstrated [25]. Consistent with this, survival analysis in the current study demonstrated better long-term outcomes with primary PCI in comparison with fibrinolysis or no immediate reperfusion. Likewise, primary PCI use was found to be an independent predictor of better long-term survival in multivariable analysis. Nevertheless, fibrinolysis as a reperfusion therapy in acute STEMI remains a viable option in developing countries, especially when utilized as part of the pharmacoinvasive strategy and when new fibrin-specific agents are administered by the EMS [26,27].
In parallel with the shift in reperfusion modalities, we witnessed a decrease in symptom-to-reperfusion delays in the overall study population and for each reperfusion modality, a factor that has been proven to impact early and late outcomes in STEMI. Although the symptom-to-reperfusion delay does not strictly equal total ischemic time, it seems to be a good surrogate for the latter in clinical practice [28,29]. Furthermore, total ischemic time also depends on the time to first medical contact, a parameter that is highly dependent on the patient. Higher recourse to transport by EMS and better awareness of cardiovascular disease in the Tunisian population are possible explanations for the decreasing trend we observed in all these delays. It is also interesting to notice that, concomitantly with the decreasing trend in reperfusion delays, in-hospital complications and mortality declined. Indeed, between 1998 and 2017, in-hospital mortality was more than halved, with an absolute reduction of more than 8%. Although a causative relationship between the two phenomena may be evoked, such a conclusion should not be systematically drawn, owing to the multitude of other factors impacting early and long-term survival in patients with STEMI. The reduction in reperfusion delays as well as the adoption of evidence-based secondary prevention therapeutics were broadly investigated in European registries, with an obvious impact on early and late outcomes in patients presenting with STEMI [3,21,30]. Most of the long-term predictors of death identified in the current study have already been reported in previous studies, although with mixed findings in some instances, such as female gender and plain old balloon angioplasty [31,32]. One unique factor reported herein is the occurrence of pre-infarction angina, which was associated with better long-term survival in patients presenting for STEMI. Pre-infarction angina is frequently regarded as synonymous with ischemic preconditioning, a phenomenon found to be associated with smaller infarct size in animal as well as human studies [33,34]. Further investigation of the actual clinical significance of, and the factors associated with, ischemic preconditioning in our context is warranted.
Study limitations
Although highly informative about the STEMI risk profile, management and prognosis in the Tunisian context, our study entails several limitations that have to be acknowledged. First, the retrospective character of the study made it difficult to draw any firm conclusion regarding a cause-effect relationship between patients' characteristics and management on the one hand and early and late outcomes on the other. Some variables, such as vascular access route and type of stent implanted, were not reported herein owing to the lack of accuracy in reporting them in some patients' files. Although we insist that a great majority of patients received pharmacological therapeutics for secondary prevention according to contemporary guidelines, the prevalence of their use and their doses were not studied. Such therapeutics plausibly have a substantial effect on the improvement observed in the prognosis of patients presenting with STEMI in most international studies. Finally, long-term outcomes other than mortality (such as ischemia-driven revascularization or heart failure) were not studied.
CONCLUSIONS
By observing actual trends over 20 years in reperfusion strategies in STEMI patients in a Tunisian region, the current study demonstrated an upward evolution in primary PCI use concomitant with a significant reduction in reperfusion delays. These changes were associated with a decrease in early and long-term mortality in STEMI patients.
List of abbreviations
Table 1. Baseline characteristics according to study period. PCI, Percutaneous Coronary Intervention; STEMI, ST-elevation Myocardial Infarction.
Baseline characteristics of the study population according to study period are presented in Table 1. Prevalence of female gender in STEMI patients increased from 16.9% during Period 1 to 34.2% during Period 5 (p<0.001). Prevalence of classical cardiovascular risk factors remained relatively stable throughout the study periods, with the exception of dyslipidemia, which showed a moderate change.
Patients presented to the ED or to EMS with chest pain in a large majority of cases (Table 2). Recourse to fibrinolysis significantly decreased to 31.7% in Period 5 (p<0.001). Conversely, recourse to primary PCI increased over time from 12.5% in Period 1 to 48.3% in Period 5 (p<0.001). As for reperfusion delays (i.e., symptom-to-fibrinolysis delay and symptom-to-primary PCI delay), and irrespective of reperfusion therapy, they steadily decreased throughout the study period (Table 3).
Figure 1. Trends in reperfusion strategies according to study period. PPCI, primary percutaneous coronary intervention.
Table 2. Clinical characteristics on presentation according to study period.
Table 3. Trends in reperfusion modalities and delays over 20 years.
* Symptom-to-FMC delay was calculated for the overall study population. FMC, First Medical Contact; PCI, Percutaneous Coronary Intervention; POBA, Plain Old Balloon Angioplasty.
Table 4. In-hospital complications and mortality according to study period.
Table 5. Factors independently associated with long-term death in multivariate analysis.
Kaplan Meier curves for long-term survival according to reperfusion strategy. | 2024-07-04T15:03:22.790Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "0885327a8d5a26b5e05ef76ad83e98691744eb8b",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "6bd7c4453ba3ae3adb66e1befdd7690294115726",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
116058948 | pes2o/s2orc | v3-fos-license | A Parametric Study of Blast Damage on Hard Rock Pillar Strength
Pillar stability is an important factor for both safe working conditions and economic performance in underground mines. This paper discusses the effect of blast damage on the strength of hard rock pillars using numerical models through a parametric study. The results indicate that blast damage has a significant impact on the strength of pillars with larger width-to-height (W/H) ratios. The blast damage causes softening of the rock at the pillar boundaries, leading to yielding of the pillars in a brittle fashion beyond the blast damage zones. The models show that the decrease in pillar strength as a consequence of blasting is inversely correlated with increasing pillar height at a constant W/H ratio. Inclined pillars are less susceptible to blast damage, and the damage on the inclined sides has a greater impact on pillar strength than on the normal sides. A methodology to analyze the blast damage on hard rock pillars using FLAC3D is presented herein.
Introduction
The most common excavation method employed in hard rock mines is drilling and blasting. An inherent problem with this method is the damage to the periphery of the excavation induced by the blast. Perimeter control and smooth wall blasting have been implemented to reduce blast-induced damage where it is considered an issue, and this is common in underground civil construction. However, some level of blast damage is inevitable, and this leads to adverse consequences in the form of stability issues in the rock excavations.
Some of the earliest research [1][2][3] stated over-break as the only major consequence of the blast damage. It was defined as the unwanted loosening, dislocation, and disturbance of the rock mass beyond the limits of the intended excavation design. Figure 1 illustrates a clear distinction between the over-break and the blast damage [4]. This paper employs numerical models to consider blast-induced damage as the region beyond the final excavation boundary, where the rock has been damaged by blasting.
Numerous studies have been undertaken to determine the extent of the blast damage on rock excavations [5][6][7][8][9]. General theories were developed to determine the extent of the damage zone from small-scale and field-scale investigations based on explosive properties, borehole radius, and material properties. Kutter and Fairhurst [5] also found that the stress waves and the gas-generated fractures propagate along the maximum principal stress direction. Ouchterlony et al. [10] have done field studies to determine the blast-damaged zone based on explosive type, charge concentration, and charge diameter. It was concluded that the damaged zone can range from 0.25 to 2.0 m depending on the properties of the rock, such as intact rock mass, jointed rock mass, and heavily jointed rock mass. Hoek et al. [11] classified the rock damage from blasting by introducing a blast damage factor (D). A qualitative classification was presented with good blasting represented as D = 0 and poor blasting represented as D = 0.8. Sharifzadeh and Pal [12] attempted to quantify the blast damage factor by deducing a relationship between deformation modulus and intact rock modulus by taking the blasting effect into consideration. Torbica and Lapcevic [13] quantified the blast damage by reducing the geological strength index (GSI) by 10 units in the damaged rock zone. It was concluded that the results of the reduced-GSI method were equivalent to that of the D factor, showing that reduced GSI can be used as an alternative for determining the properties of the degraded rock mass.
There are no theoretical methods to account for excavation stability in tunnels and pillars considering blast damage. Therefore, numerical modeling is one of the most appropriate methods to observe the effect of blast damage on excavations when the model parameters, rock properties, and constitutive models are employed properly. Shen and Barton [14] described the damaged zone with increased jointing and evaluated the stress distribution around the tunnels with the help of the discontinuum code UDEC. Saiang [15] evaluated the properties of the rock in the blast-damaged zone with the discrete element code PFC2D. These properties were used in FLAC to perform a parametric study on the properties affecting the stress distribution of the tunnels. Blast damage thickness had a moderate effect, and deformation modulus had a high effect, on the stress distribution in the tunnels. Mitelman and Elmo [16] developed a hybrid finite element-discrete element modeling approach to study the induced damage in tunnels. Phase2 was used to demonstrate the blast effect on the tunnels where the degraded rock properties were used for only the specified thickness of the blast zone [13]. In these studies, the periphery of the tunnels was simulated by reducing the intact rock properties of the blast-affected zone.
Baharani et al. [17] conducted finite element analysis on pillars in a hypothetical case study considering the blasting effect. It was concluded that the slender pillars are prone to strain bursting, and the strength of the wider pillars is affected by drill and blast methods. It was also shown that the yielding of the pillar side walls is higher due to blast damage. Limited study has been conducted on pillars with blast damage. Therefore, in this paper, a parametric study of the damage factor and damage thickness over different pillar dimensions was conducted to understand the blast damage effect on the strength of the pillars.
Theoretical Background
Pillar design theories are predominantly concerned with the factor of safety, which is the ratio of pillar strength to the maximum pillar stress. This approach ignores aspects such as the following:
• The presence of geology and water;
• A weak roof or floor.
To overcome these other, often-ignored parameters, a higher factor of safety is adopted, such as 1.4 for hard rock pillars [18].
To date, many studies have been conducted to develop empirical approaches to the study of hard rock pillar strength [19,20] in different rock types. Lunder's approach [18] is considered one of the most prominent empirical approaches that has been used to design pillars. It is given as:

σp = K · UCS · (C1 + C2 κ)    (1)

where σp is the ultimate strength of the pillar (MPa), K is the pillar size factor, UCS is the uniaxial compressive strength of the intact rock (MPa), C1 and C2 are the empirical rock mass constants, and κ is the friction term, which is calculated from Cpav, the average pillar confinement, and Coeff, the coefficient of the pillar confinement.
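As an illustration only, Equation (1) can be evaluated in a few lines of Python using the commonly cited Lunder and Pakalnis constants (K = 0.44, C1 = 0.68, C2 = 0.52) and their confinement-based friction term; these values and the Cpav expression below are assumptions taken from the widely quoted published form, not figures reported in this paper.

```python
import math

def lunder_pillar_strength(ucs_mpa, w_over_h, k=0.44, c1=0.68, c2=0.52):
    """Sketch of the Lunder-Pakalnis pillar strength formula (Equation (1)).

    ucs_mpa  : uniaxial compressive strength of the intact rock (MPa)
    w_over_h : pillar width-to-height ratio
    The constants and the confinement expression below follow the commonly
    cited published form; they are assumptions, not values from this paper.
    """
    # average pillar confinement, Cpav
    cpav = 0.46 * (math.log10(w_over_h + 0.75)) ** (1.4 / w_over_h)
    # friction term, kappa
    kappa = math.tan(math.acos((1.0 - cpav) / (1.0 + cpav)))
    return k * ucs_mpa * (c1 + c2 * kappa)

# Example: UCS = 120 MPa, as used for the rock mass modeled later in this paper
for ratio in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"W/H = {ratio:3.1f} -> strength ~ {lunder_pillar_strength(120.0, ratio):5.1f} MPa")
```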
Past empirical studies were based on a specific database that does not consider the rock mass properties. To improve upon these empirical equations, studies were conducted with numerical models considering the fracture sets [21,22], but blast damage has yet to be considered.
Blast Damage Zone Properties
Rock properties play an important role in explaining the damage in the blast damage zone. Blasting creates fractures in the blast damage zone that reduce the strength of the rock mass. The most important rock properties affecting blast damage are massive rock modulus, cohesion, and friction [11].
The rock modulus in the blast damage zone is expressed as the degraded rock modulus (Erm), which depends on the intact rock modulus (Ei), damage factor (D), and GSI [23], and is given as follows:

Erm = Ei [0.02 + (1 − D/2)/(1 + e^((60 + 15D − GSI)/11))]

The most popular forms of numerical modeling have been developed with the Mohr-Coulomb or bilinear failure criterion. For example, the effects of blast damage on cohesion (C′) and friction (Ø′) were evaluated with the equivalent Mohr-Coulomb fit of Hoek et al. (2002), in which mb is the reduced value of the rock mass material constant, a and s are the rock structure constants, σci is the uniaxial compressive strength of the intact rock sample, and σ3max is the upper limit of the confining stress. The degraded rock modulus, cohesion, and friction are the key parameters employed in numerical models of the blast damage zone, derived by Equations (5)-(7), while the massive rock properties are the key parameters employed for the zone beyond the blast damage zone.
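For reference, the Python sketch below shows how the degraded-zone properties can be generated from D and GSI. It uses the standard Hoek-Diederichs modulus relation and the Hoek et al. (2002) equivalent Mohr-Coulomb fit; the input values (Ei, GSI, mi, σ3max) are illustrative assumptions, and the expressions should be checked against the original references before any real use.

```python
import math

def hoek_brown_constants(gsi, mi, d):
    """Generalized Hoek-Brown constants for a disturbed rock mass
    (standard published relations; assumed, not quoted from this paper)."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def degraded_modulus(ei, gsi, d):
    """Hoek-Diederichs rock mass modulus Erm (same units as Ei)."""
    return ei * (0.02 + (1.0 - d / 2.0) /
                 (1.0 + math.exp((60.0 + 15.0 * d - gsi) / 11.0)))

def mohr_coulomb_fit(sigma_ci, gsi, mi, d, sigma3_max):
    """Equivalent Mohr-Coulomb cohesion (MPa) and friction angle (degrees)
    from the Hoek et al. (2002) fit over 0 < sigma3 < sigma3_max."""
    mb, s, a = hoek_brown_constants(gsi, mi, d)
    s3n = sigma3_max / sigma_ci
    term = 6.0 * a * mb * (s + mb * s3n) ** (a - 1.0)
    phi = math.asin(term / (2.0 * (1.0 + a) * (2.0 + a) + term))
    coh = (sigma_ci * ((1.0 + 2.0 * a) * s + (1.0 - a) * mb * s3n)
           * (s + mb * s3n) ** (a - 1.0)) / (
        (1.0 + a) * (2.0 + a)
        * math.sqrt(1.0 + term / ((1.0 + a) * (2.0 + a))))
    return coh, math.degrees(phi)

# Illustrative inputs only (Ei = 30 GPa, GSI = 75, mi = 15, sigma3_max = 10 MPa)
for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    erm = degraded_modulus(ei=30e3, gsi=75, d=d)  # MPa
    c, phi = mohr_coulomb_fit(120.0, 75, 15, d, sigma3_max=10.0)
    print(f"D={d:4.2f}  Erm={erm / 1e3:5.1f} GPa  c'={c:4.2f} MPa  phi'={phi:4.1f} deg")
```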
Effect of Pillar Height
Pillar height has a considerable effect on the strength of hard rock pillars. The strength of the pillar decreases with the increase in pillar height at a constant pillar width-to-height (W/H) ratio [24]. Many studies have been conducted to understand the effect of size, which shows that at a constant W/H ratio, as the sample size increases, the strength of the sample decreases [25]. Alternatively, at a constant blast thickness, the blast would encompass a larger amount of a smaller pillar and, conversely, a smaller amount of a larger pillar. For example, a 0.25 m blast thickness on a pillar with a height of 1 m and a W/H ratio of 1 would lead to 44% pillar damage, while a blast of the same thickness on a pillar with a height of 2 m and a W/H ratio of 1 would lead to 23% pillar damage. Therefore, it is important to understand the different blast damage that occurs at different pillar heights at constant pillar W/H ratios.
Effect of Pillar Inclinations
Pillar inclinations lead to inclined loading conditions, and increasing pillar inclination reduces pillar strength [26,27]. In inclined pillars, the pillar sides towards the dip drive the pillar failure. While blasting affects all sides of the pillar similarly, for inclined pillars it would be interesting to understand which pillar sides are more susceptible to pillar strength reduction from blast damage. Therefore, the effect of blast damage on the strength of inclined pillars on all the pillar sides, including dip sides and strike sides, is also studied in this paper.
The scope of this paper is to understand the effects of blast damage on vertical as well as inclined hard rock pillars with use of numerical modeling. A parametric study has been conducted with pillar parameters such as W/H ratio, pillar height, and pillar inclination and with blast parameters such as blast damage factor and blast damage thickness.
Numerical Modeling
To develop an understanding of pillar behavior with blast damage, FLAC 3D 5.0 [28], a finite difference software package, was employed throughout the body of the work. A majority of the studies on ground control, and especially on pillar stability, have employed FLAC 3D for its inbuilt, well-developed constitutive models. The extent of blast damage in a pillar depends on the blast damage factor (D) and blast zone thickness (T). Therefore, these two parameters were varied to understand the response of the models. The results are presented either in normalized fashion or as actual values, depending on the factor analyzed. The normalized result is the ratio of the pillar strength as affected by blast damage to the pillar strength without blast damage.
Grid Generation
The model was created in a three-dimensional framework with the origin at the center of the pillar, as shown in Figure 2. The horizontal plane of the coordinate system is denoted by x and y axes and the vertical plane by the z axis. The model consists of pillar, main roof, and main floor. The model's extent in the vertical plane was three times the pillar width to ensure that there were no interaction effects of model boundaries on the pillar. The excavation surrounding the pillar was set at a 75% extraction ratio; therefore, the excavation width is equal to that of the pillar width.
Mesh Generation and Loading Rate
The mesh size and the loading rates play a critical role in developing the numerical models effectively and are dependent on each other. Mesh size depends on the area of concentration on the pillar that will be helpful in understanding the impact of blast damage on pillar behavior. The blast damage reported in the excavation ranges from 0.25 to 2 m [10]. Therefore, the minimum mesh size should be about 0.25 m × 0.25 m × 0.25 m or less, such that the blast zone thickness of 0.2 m can be analyzed on the pillars.
Loading rate can be either stress-controlled or strain-controlled. It is recommended to use strain-controlled loading rates to obtain reliable stress strain graphs [28]. The model run time and the stress strain curve are dependent on the loading rate. To understand the effect of loading rate, a pillar with a W/H ratio of 1.0 was loaded at three different loading rates. Table 1 shows that the model run time increases exponentially with decreasing loading rates. Figure 3a shows that with low loading rates, the models obtained good and reliable stress strain graphs.
The loading rate is largely dependent on the mesh size. Past numerical studies on pillars have adopted large meshes [29][30][31] and therefore employed higher loading rates. The relationship between the mesh size and loading rate was defined with the help of stress strain graphs. It was found that smaller mesh sizes needed very low loading rates, and as the mesh size increases, higher loading rates can be employed. Three mesh sizes were varied to understand the stress strain behavior at a 1 × 10⁻⁶ m/step loading rate. This loading rate serves well for 0.5 m and 0.25 m meshes. For the 0.125 m mesh, the loading rate seems to be too high, resulting in bumps in the stress strain curve (Figure 3b). The smaller mesh size also results in higher model run times. Optimization of the loading rate, mesh size, and model run time is required to obtain a good stress strain graph. Therefore, a 1 × 10⁻⁶ m/step loading rate is suitable for a mesh size of 0.25 m × 0.25 m × 0.25 m and has been used throughout this paper.
Boundary Conditions
Roller boundaries were applied on the x and y boundaries, which restricted displacement and velocity normal to the planes. These boundaries simulate the chain of pillars around the model. The main floor was pinned, restricting the displacements and the velocities both normal and parallel to the plane. The load was applied as uniform velocity on the top of the main roof to simulate the compressive loading on the pillars. The model was subjected to a vertical stress of 2.7 MPa with a vertical-to-horizontal stress ratio of 1:1, which simulates a mine of a depth of 100 m.
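As a quick check on the applied in-situ stress, a 100 m mining depth gives roughly 2.7 MPa of vertical stress for a typical hard rock unit weight of about 27 kN/m³; the unit weight used below is an assumed illustrative value, not one quoted in the paper.

```python
def vertical_stress_mpa(depth_m, unit_weight_kn_per_m3=27.0):
    """Overburden stress sigma_v = gamma * z, returned in MPa."""
    return unit_weight_kn_per_m3 * depth_m / 1000.0

sigma_v = vertical_stress_mpa(100.0)   # 2.7 MPa, as applied to the model
sigma_h = 1.0 * sigma_v                # vertical-to-horizontal stress ratio of 1:1
print(sigma_v, sigma_h)
```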
Material Properties
Material properties and failure criteria prove to be critical in developing realistic numerical models. The model comprises the main roof, main floor, and pillars, and as the focus of the paper is on the pillars; the main roof and the main floor are simulated as elastic materials. The pillars are best represented with the brittle Hoek-Brown criterion [29,32,33], which is established on the formation of brittle cracks at 0.3 to 0.5 times the uniaxial compressive strength, followed by shear failure in the pillars. Therefore, a bilinear strength envelope was used in which strength is equal to one third of the uniaxial compressive strength and is independent of friction at lower confinement, followed by friction hardening at higher confinement [33].
The bilinear strain-hardening/softening ubiquitous-joint model, an inbuilt FLAC 3D constitutive model, was used to simulate the bilinear rock strength behavior based on the Mohr-Coulomb failure criterion and strain softening as a function of deviatoric plastic strain [28]. The rock and joint properties for the constitutive model are obtained from Esterhuizen [29] using uniaxial compressive strength and Rock Mass Rating of 120 MPa and 70, respectively, and are shown in Tables 2 and 3.
Strain softening parameters are dependent on the model mesh size. These parameters are established by calibrating all of the numerical models to that of the theoretical results using the same element size throughout all of the models [28]. It was determined that while using a large mesh size, the softening should occur at very low plastic strain, and when a small mesh size is adopted, the softening occurs over a large plastic strain. Therefore, the mesh size was kept at 0.25 m × 0.25 m × 0.25 m with a loading rate of 1 × 10 −6 for all the models, and cohesion softening was performed to calibrate the numerical model results to that of the Lunder results [18].
To simulate the blast damage in the model, rock properties were changed to degraded rock properties in the blast-damaged zones with the help of FISH code (an inbuilt function in FLAC 3D for developing user-defined variables and functions), as shown in Figure 4 for a blast damage zone thickness of 0.5 m. The Young's modulus, Mohr-Coulomb cohesion, and Mohr-Coulomb friction were altered in the blast-damaged zones using Equations (4)-(6) for blast damage factors (D) of 0.25, 0.5, 0.75, and 1.0 and are presented in Table 4. A flowchart has been developed representing the step-by-step procedure for numerical modeling of pillars in FLAC 3D, as shown in Figure 5.
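The FISH routine itself is not reproduced in the paper. Purely to illustrate the logic described above, tagging zones whose centroids lie within the blast damage thickness of a pillar face and assigning them degraded properties, the Python-style sketch below uses hypothetical data structures and property names; it is not the actual FLAC 3D FISH API.

```python
def assign_blast_damage(zones, pillar_half_width, damage_thickness,
                        intact_props, degraded_props):
    """Give boundary zones degraded properties; keep intact values elsewhere.

    'zones' is a list of dicts with plan-view centroid coordinates; all names
    here are hypothetical stand-ins for the real model data structures."""
    for z in zones:
        # distance from the zone centroid to the nearest vertical pillar face
        dist_to_face = pillar_half_width - max(abs(z["x"]), abs(z["y"]))
        if 0.0 <= dist_to_face <= damage_thickness:
            z.update(degraded_props)   # e.g., Erm, c', phi' computed from D and GSI
            z["group"] = "blast_damage"
        else:
            z.update(intact_props)
            z["group"] = "massive_rock"
    return zones

# toy usage: 0.25 m damage skin on a 2 m wide pillar (half-width 1.0 m)
zones = [{"x": 0.9, "y": 0.1}, {"x": 0.1, "y": 0.2}]
assign_blast_damage(zones, 1.0, 0.25,
                    intact_props={"young_mpa": 24500.0},
                    degraded_props={"young_mpa": 8100.0})
```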
Model Calibration
The models were created at pillar W/H ratios of 0.5, 1.0, 1.5, and 2.0. The pillar height adopted in these models was about 4 m, which is similar to the pillar height in Lunder's database [18]. The strength results were obtained through the stress strain curves developed by FISH code. The model strength of the pillars was then calibrated to that of the theoretical results, as shown in Figure 6. It was observed that the difference between the model results and the theoretical results was less than 5%.
Effect of Blast Damage on Pillar Width-to-Height Ratio
Five W/H ratios were simulated: 0.5, 1.0, 1.5, 2.0, and 2.5. These models used four different blast damage factors (D) (0.25, 0.5, 0.75, and 1.0) and four different blast thicknesses (T) (0.25 m, 0.5 m, 0.75 m, and 1.0 m). For each W/H ratio, 16 models (four damage factors × four blast thicknesses) were simulated to understand the blast effect on pillar strength. Therefore, a total of 80 models were simulated to understand the blast effect at varying W/H ratios.
Assuming the blast damage on the pillars would decrease pillar strength, the results are presented in a normalized fashion, mainly for qualitative purposes, to understand the percentage decrease in pillar strength due to blasting when compared to the pillar strength with no damage effect. Pillar strength with no blast effect (i.e., a disturbance factor of zero and a damage thickness of zero) represents the baseline value. For example, if the pillar strength with no blast effect were 50 MPa, and the pillar strength with a 0.5 disturbance factor and a 0.5 m damage thickness were 35 MPa, then the normalized pillar strength ratio would be 1.0 for no blast effect and 0.7 for a pillar with a 0.5 disturbance factor and a 0.5 m damage thickness.
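A minimal helper reflecting the normalization described above (names are illustrative):

```python
def normalized_strength(damaged_strength_mpa, baseline_strength_mpa):
    """Ratio of damaged-pillar strength to the undamaged baseline strength."""
    return damaged_strength_mpa / baseline_strength_mpa

# worked example from the text: 35 MPa with D = 0.5, T = 0.5 m vs. 50 MPa baseline
print(normalized_strength(35.0, 50.0))   # 0.7
print(normalized_strength(50.0, 50.0))   # 1.0 (no blast effect)
```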
Assuming the aforementioned methodology, the results are shown in Table 5. Based on the results for the blast effect on the slender pillars (W/H < 0.8), the damage has little effect on the pillars' strength. This is due to the fact that the slender pillars fail in a brittle fashion, which starts from the center of the pillar [29]. The models show that the blasting had a more considerable effect on pillars with higher W/H ratios. With a blast damage thickness of 1 m and a disturbance factor of 0.75, the decrease in pillar strength was observed to be about 7% for a W/H ratio of 1, 16% for a W/H ratio of 1.5, 22% for a W/H ratio of 2.0, and 27% for a W/H ratio of 2.5.
It was observed that the 16% decrease in pillar strength of a pillar with a W/H ratio of 1.5 gives it an equivalent pillar strength to a pillar with a W/H ratio of 1.0 with no blast effect. Similarly, a 22% decrease in the pillar strength of a pillar with a W/H ratio of 2.0 gives it the same pillar strength as a pillar with a W/H ratio of 1.5 with no blast effect. Therefore, to account for blasting with a disturbance factor of 0.75 and a damage thickness of 1 m, 1 m could be added along all sides of the pillar. These normalized pillar strengths can be used to derive the thickness of the sides that must be left on the pillars to account for blasting.

The stress strain curves of the damaged pillars are shown in Figure 7. It can be observed that the modulus of the pillars decreases with an increase in disturbance factor and damage thickness. The stress strain curve of the pillar with no damage shows a point (40 MPa) where the modulus changes, which can be denoted as the pillar transitioning from brittle failure to shear failure. This transition occurs at 48 MPa for the pillar with blast damage at a disturbance factor of 0.5 and at 54 MPa for blast damage at a disturbance factor of 1.0. It can be deduced that with the increase in blast damage, the tendency toward brittle failure increases in the pillars. Finally, it can be observed that the pillar with a W/H ratio of 1.5, disturbance factor of 1.0, and damage thickness of 1 m undergoes complete brittle failure.

Damage caused by blasting on the excavation has been considered as a technique to combat excessive stress accumulated near underground excavations in highly stressed rock masses [15,[34][35][36]. Krauland and Soder [37] suggested that destressing and preconditioning practices can strategically create fractures in the rock mass near the excavation to soften the rock locally and transfer the stress away from the excavation boundaries. The models of blast damage show a similar effect on the pillars.
For this, five models were analyzed with a W/H ratio of 1.5. To understand the failure behavior of the pillars with blast damage, failure regions were observed from the plastic state plot to better describe the regions undergoing plastic flow. These plots can be used to determine which regions have undergone or are undergoing shear and tensile failure. The plots show the failure regions at the central section of the pillar, as shown in Figure 8. Three points on the stress strain curve in Figure 7 were selected that correspond to the plastic state plots in Figure 8. The points were selected at the following stages: before loading began, at the pillar failure initiation, and at the transition point between brittle and shear failure. The before-loading point shows the pillar with massive rock and degraded rock in the model, while the failure-initiation point shows the crack initiation in the pillar, and the transition point shows the total brittle failure in the pillar.
Based upon the plots in Figure 8b-d, several observations were made. In Figure 8b, the model plots have been extrapolated, showing massive rock mass and degraded rock mass at the central section of the pillar. Initially, failure regions were analyzed in the pillar with no damage. In the pillar without damage, the failure starts at the sides of the pillar (Figure 8c) and propagates through the whole side into brittle failure (Figure 8d). In Esterhuizen et al. [29], similar brittle failure has been described in the pillars at higher W/H ratios.
Next, the failure regions were analyzed in pillars with a disturbance factor of 0.5 and a blast damage thickness of 0.5 m. Figure 8b shows the central section of the pillar with two zones on each side as the degraded rock in the 0.5 m-damage-thickness model. In Figure 8c, it can be observed that the failure initializes in the massive rock, which can be ascribed to softening of the degraded rock due to blasting and the stresses getting transferred to the massive rock. In Figure 8d, it can be observed that the total brittle failure in the pillar has increased when compared to that of the pillar without damage, which can be attributed to the increase of the transition point from brittle failure to shear failure to 48 MPa. This therefore decreases the region available for shear failure, which ultimately decreases the strength of the pillar with blast damage.
For a pillar with a disturbance factor of 0.5 and a damage thickness of 1.0 m, Figure 8b shows the four zones on each side with degraded rock mass that account for the 1.0 m of blast damage thickness. In Figure 8c, it can be observed that failure initiation occurs in the massive rock as well as in the degraded rock, which can be attributed to the higher stress causing fracture initiation in both rock masses. The softening effect of blasting on the degraded rock is evident, and because the massive rock is farther from the pillar boundary, the stresses required to initiate fracture in the pillar affect both the massive rock mass and the degraded rock mass. In Figure 8d, at the transition point from brittle to shear failure in the pillar, it can be observed that the total brittle failure is greater than that in the pillar without damage or the pillar with a disturbance factor of 0.5 and a damage thickness of 0.5 m.
As the brittle failure in the pillar increases, the region undergoing shear failure decreases, which in turn decreases the overall strength of the pillar.
Next, the failure regions were analyzed in a pillar with a disturbance factor of 1.0 and a damage thickness of 0.5 m, which is shown in Figure 8b. In Figure 8c, it can be observed that the failure initiates in the massive rock mass due to the softening effect in the degraded rock mass caused by blasting where stresses get transferred to the massive rock. In Figure 8d, it is observed that the degraded rock mass is so weak that it does not provide any confinement to the core, which increases the brittle failure of the pillar and decreases the region for shear failure. Therefore, the overall strength of the pillar decreases significantly.
The pillar model with a disturbance factor of 1.0 and a damage thickness of 1.0 m is shown in Figure 8b. It can be observed that the fracture initiation is evident in the massive rock mass, which is beyond the very weak degraded rock mass. Figure 8c shows that the total brittle failure in the pillar is more than any of the pillars analyzed above. Due to the very low confinement, this pillar, with a W/H ratio of 1.5, disturbance factor of 1.0 and damage thickness of 1.0 m, has a lower strength than that of a pillar with a W/H ratio of 1.0.
Effect of Blast Damage on Pillar Height
All the models discussed to this point assumed a pillar height of 4 m. The effect of blast damage on a pillar with a W/H ratio of 1.5 at different pillar heights was also analyzed. Figure 9 shows pillars at three different heights (2 m, 4 m, and 6 m), each with a damage thickness of 0.5 m. Since this now incorporates different pillar heights, the strength of the normal pillars was analyzed at different pillar W/H ratios. Figure 10 shows that shorter pillar heights represent higher strength, while larger pillar heights lead to lower strength, in accordance with Kaiser [24]. The pillars with a W/H ratio of 1.5 and a damage factor (D) of 0.5 were analyzed at blast thicknesses (T) of 0.25 m, 0.5 m, 0.75 m, and 1 m. Figure 11 shows that the blast damage has a significant effect on pillars with lower pillar heights. It was also observed that at a blast thickness of 1.0 m, the strength of the pillars with different heights converges to a single point with 10% deviation. The pillars with a W/H ratio of 1.5 and a damage factor (D) of 0.5 were analyzed at blast thicknesses (T) of 0.25 m, 0.5 m, 0.75 m, and 1 m. Figure 11 shows that the blast damage has a significant effect on pillars with lower pillar heights. It was also observed that at a blast thickness of 1.0 m, the strength of the pillars with different heights converges to a single point with 10% deviation.
Effect of Blast Damage on Inclined Pillars
The effect of blast damage on inclined pillars (Figure 12) was evaluated next. The strength of the inclined pillars with a height of 4 m and a W/H ratio of 1.5 was evaluated with a blast damage factor of 0.5 at different blast thicknesses. It was determined that the strength of the inclined pillars is less susceptible to blast damage. Figure 13 shows the decrease in strength of the pillars at 0, 20, and 40 degrees of inclination with a W/H ratio of 1.5 and a damage factor of 0.5 at varying blast thicknesses. Given that inclined pillars have less strength than vertical pillars, the 10% decrease in pillar strength because of blasting in inclined pillars would lead to a significant strength reduction.
Conclusions
Based on the investigation of blast damage on hard rock pillars, the following conclusions can be drawn.
Damage factor and damage thickness are important features that need to be considered when evaluating pillar strength. The decrease in pillar strength is considerable in pillars with W/H ratios higher than 1.0. Pillar strength can decrease up to 7% for a W/H ratio of 1.0, 16% for a W/H ratio of 1.5, 22% for a W/H ratio of 2.0, and 27% for a W/H ratio of 2.5.
Assuming a constant blast damage in the region, the hard rock mine pillars with larger W/H ratios would have more strength than that of the pillars with smaller W/H ratios.
The numerical models with a W/H ratio of 0.5 show that pillar failure starts at the center of the pillar, which is also evident with the pillars with damage and a W/H ratio 0.5. Therefore, the slender pillars are prone to strain bursting with or without the damage due to blasting. As the pillar failure occurs from the center of the pillar, the strength of the pillar remains similar.
Numerical models revealed that brittle failure plays an important role in defining the strength of the pillars with a blast damage zone. Initiation of brittle failure beyond the damaged zone reduces the confined core, so the pillar fails at a relatively low stress, resulting in loss of pillar strength.
Numerical model results show that the degraded rock modulus in the blast damage zone is a significant parameter in softening the rock, resulting in transfer of the stresses inside the pillar. The blast damage zone, which is itself a critical factor, leads to an increase of the brittle failure in the pillar, resulting in loss of pillar strength.
Pillar height is an essential dimension in determining the strength of the pillars. Varying blast thicknesses on pillars with different heights at constant W/H ratios has a distinct impact on the reduction of pillar strength. Blast damage causes a substantial reduction in the strength of pillars with smaller heights and insignificant reduction for pillars with larger heights.
The strength of inclined pillars is less susceptible to blast damage. Blast damage on the dip sides of inclined pillars reduces the pillar strength more significantly than blast damage on the normal sides. When compared to vertical pillars, the inclined pillars have lower strength at a constant W/H ratio.
A methodology has been presented for analyzing the strength and failure characteristics of hard rock pillars. This methodology can help engineers generate models to analyze the failure mechanisms in these pillars due to blasting damage.
Considering the effect of blast damage on pillars could enhance mine safety and improve stability and, possibly, profitability.
| 2019-04-16T13:28:34.258Z | 2018-07-20T00:00:00.000 | {
"year": 2018,
"sha1": "a0fafe348301db1b09f81526eaa5c00151547897",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/11/7/1901/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "49f8a69877fde3a1784fb7754a0f23220b1492d5",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
211190302 | pes2o/s2orc | v3-fos-license | Spexin and a Novel Cichlid-Specific Spexin Paralog Both Inhibit FSH and LH Through a Specific Galanin Receptor (Galr2b) in Tilapia
Spexin (SPX) is a 14 amino acid peptide hormone that has pleiotropic functions across vertebrates, one of which is involvement in the brain-pituitary-gonad axis of fish. SPX(1) has been identified in each class of vertebrates, and a second SPX (named SPX2) has been found in some non-mammalian species. We have cloned two spexin paralogs, designated as Spx1a and Spx1b, from Nile tilapia (Oreochromis niloticus) that have varying tissue distribution patterns. Spx1b is a novel peptide only identified in cichlid fish, and is more closely related to Spx1 than Spx2 homologs as supported by phylogenetic, synteny, and functional analyses. Kisspeptin, Spx, and galanin (Gal) peptides and their corresponding kiss receptors and Gal receptors (Galrs), respectively, are evolutionarily related. Cloning of six tilapia Galrs (Galr1a, Galr1b, Galr2a, Galr2b, Galr type 1, and Galr type 2) and subsequent in vitro second-messenger reporter assays for Gαs, Gαq, and Gαi suggests that Gal and Spx activate Galr1a/Galr2a and Galr2b, respectively. A decrease in plasma follicle stimulating hormone and luteinizing hormone concentrations was observed with injections of Spx1a or Spx1b in vivo. Additionally, application of Spx1a and Spx1b to pituitary slices decreased the firing rate of LH cells, suggesting that the peptides can act directly at the level of the pituitary. These data collectively suggest an inhibitory mechanism of action against the secretion of gonadotropins for a traditional and a novel spexin paralog in cichlid species.
INTRODUCTION
Spexin (SPX1; also termed neuropeptide Q) was identified first by computational methods in humans (1), and then also by chemical methods in goldfish (2). These computational methods have been attempted mostly on the basis of the characteristics of the prohormones from which active neuropeptides are processed. The mature peptide sequence contains 14 amino acids that are flanked by monobasic and dibasic proteolytic cleavage sites. The mature spexin peptide was found to be identical in all tetrapods and elephant shark, and differs in only one amino acid (A 13 T) in piscine species. A paralog of spexin, termed Spx2, was later identified in non-mammalian species (3). In mammals, Spx was found to participate in inducing stomach contraction (1), inhibiting adrenocortical cell proliferation (4), postnatal hypoxia response (5), cardiovascular and renal modulation (6), nociceptive response (7), fatty acid absorption and weight regulation (8), and diabetes (9). In teleosts, functional studies of Spx1 mainly focused on its inhibitory role in the regulation of reproduction (10) and food intake (2,11). However, a recent study reported that spx1 knock-out zebrafish exhibited normal reproductive capability but higher food intake than wild type fish, an effect mediated via increased expression of the appetite stimulant, agouti-related peptide AgRP1 (12). The galanergic neurotransmission system is one of the newest described signaling systems. Today, the galanin family consists of galanin (Gal), galanin-like peptide (GalP), galanin-message associated peptide (GMAP), and alarin, and this family has been shown to be involved in a wide variety of biological and pathological functions (13). Three different types of galanin receptors have been described so far in mammals: galanin receptor 1, 2, and 3 (GALR1, GALR2, and GALR3) (14). All of them are members of the G protein-coupled receptor (GPCR) family and act through stimulation of various second messenger systems. The biological activity of GALR1 and GALR3 stimulation is linked to the activity of adenylate cyclase (AC) and cyclic AMP (cAMP) production, and stimulation of GALR2 receptor results in phospholipase C (PLC) activity (14). Recently, it was reported that SPX is a functional agonist for GALR2 and GALR3 in humans as well as Galr2a and Galr2b in zebrafish (15).
Reproduction is regulated by the hypothalamic-pituitary-gonadal (HPG) axis in all vertebrates. Hypothalamic axons secrete neuropeptides into the gonadotroph cells of the pituitary in order to mediate the expression and secretion of the gonadotropins follicle-stimulating hormone (FSH) and luteinizing hormone (LH) (16). The first identified neuropeptide that regulates this function was gonadotropin-releasing hormone (GnRH). The hypophysiotrophic type of GnRH regulates gonadotropins by neuroglandular and neurovascular anatomical connections in zebrafish (17,18). Recently, a plethora of neuropeptides involved in the control of reproduction were investigated. Most of them are stimulatory neuropeptides, such as neurokinin B (19), neuropeptide Y (20), secretoneurin (21), galanin (22), agouti-related peptide (23), kisspeptin (24), and melanocortin (25). However, the study of neuropeptides that relay their signal through inhibitory pathways is more challenging, and hence they are less studied. The best-known inhibitory pathway in fish reproduction is the dopaminergic system (26). In fish, as in mammals, dopamine D2 receptors transduce their signal through an inhibitory (Gαi) signaling pathway (27). Gαi signaling is involved in a variety of physiologic processes, including chemotaxis, neurotransmission, proliferation, hormone secretion, and analgesia (28).
Nile tilapia is one of the top principal aquaculture species, and is a suitable experimental model fish for reproductive endocrinological research on Perciformes, which is the most recently evolved and largest group of teleost fish that includes many other target aquaculture species. Our objective was to identify spexin in tilapia and clarify its role as a regulator in the HPG axis. This was accomplished by cloning two spexin and six Galr sequences, and performing in vitro and in vivo studies.
Fish Husbandry and Transgenic Lines
Sexually mature Nile tilapia (Oreochromis niloticus, Lake Manzala strain) were kept and bred in the fish facility unit at the Hebrew University in 500-L tanks at 26 °C and with a 14/10 h light/dark photoperiod regime. Fish were fed daily ad libitum with commercial fish pellets (Raanan fish feed, Israel).
We previously created transgenic tilapia lines by the adoption of a tol2 transposon-mediated approach and Gateway cloning technology (29). In the current study we used transgenic tilapia in which red fluorescent protein (RFP) expression is driven by the tilapia LHβ promoter, thus labeling LH gonadotrophs. The tagRFP-CAAX cassette used in the current study directs the fluorescent protein to the cell membranes. The use of tagRFP eliminates the aggregation problems associated with mCherry and results in a more uniform labeling of the cells.
All experimental procedures were in compliance with the Animal Care and Use guidelines of the Hebrew University and were approved by the local Administrative Panel on Laboratory Animal Care.
Total RNA was extracted from sexually mature female tilapia brain using TRIzol reagent (Life Technologies), and 5 µg was used as template for cDNA synthesis using Smart MMLV reverse transcriptase (Clontech). All cloning PCRs were performed with an initial denaturation at 94 °C for 2.5 min, followed by 30 cycles of denaturation at 94 °C for 30 s, annealing at each of the primers' specific Tm (Table 1) for 30 s, and extension at 72 °C for 90 s, and a final extension at 72 °C for 10 min using Advantage 2 polymerase mix (Clontech); specific primers were designed for cloning the putative spexin ligands and receptors (Table 1). The PCR products were ligated into pCRII-TOPO vector and cloned into competent DH5α E. coli cells. Plasmid DNA was isolated from overnight cultures by miniprep columns (QIAgen) and sequenced with T7 and SP6 primers.
Cloned tilapia spexin 1a and 1b sequences were submitted to GenBank under accession numbers MN399812 and MN399813, respectively. Cloned tilapia galanin receptors sequences were submitted to GenBank under accession numbers MN326828, MN326829, MN326830, MN326831, MN614146, and MN614147 for Galr1a, Galr1b, Galr2a, Galr2b, Galr type 1, and Galr type 2, respectively. Tissue samples were collected from three mature male and female tilapia. Total RNA was extracted from each of the following tissues: brain, pituitary gland, spleen, gills, kidneys, muscles, fat, ovaries/testes, retina, heart, caudal and front intestines, and liver. We dissected the brain into three parts, of which the anterior part contains the olfactory bulbs and preoptic area, the midbrain contains the optic tectum and hypothalamus, and the hindbrain contains the medulla oblongata and the cerebellum. cDNA samples were prepared from 2 µg of total RNA according to (24). The tissue expression patterns of Spx1a, Spx1b, Galr1a, Galr1b, Galr2a, and Galr2b in various tissues were analyzed by qPCR with the primer sets described in Table 2.
The cycling parameters consisted of pre-incubation at 95 °C for 10 min followed by 45 cycles of denaturation at 95 °C for 10 s, annealing at 60 °C for 30 s, and extension at 72 °C for 10 s, followed by a melting curve analysis (95 °C for 60 s, 65 °C for 60 s, 97 °C for 1 s).
Phylogenetic and Synteny Analyses
All sequences used for spexin and galanin receptors were identified from NCBI and Ensembl databases (Supplementary Table 1) and aligned by MUSCLE using MEGA7. Evolutionary analyses were conducted in MEGA7 (32). The evolutionary history of mature spexin peptide homologs was inferred using the Maximum Likelihood method based on the JTT matrix-based model, and an additional frequency distribution (JTT+F) was utilized for the receptors (33). The bootstrap consensus trees were inferred from 500 replicates to represent the evolutionary history of the taxa analyzed (34). The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test are shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically by applying Neighbor-Join and BioNJ algorithms to a matrix of pairwise distances estimated using a JTT model, and then selecting the topology with superior log likelihood value. The analysis involved 34 and 77 amino acid sequences and there were a total of 15 and 264 positions in the final datasets for spexins and Galrs, respectively. We examined and compared the genomic environment around spexin1 and spexin2 in zebrafish, Nile tilapia, and Burtoni. Cichlid orthologs were identified by tblastn query using zebrafish genes in NCBI genome browsers, and organized 5′-3′ on their respective chromosomes/linkage groups/scaffolds. Accession numbers can be found in Supplementary Table 2.
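As a toy illustration of the kind of pairwise-distance input that seeds such tree searches, the minimal Python sketch below computes percent identity and p-distance between aligned mature spexin peptides. It is not the Maximum Likelihood/JTT workflow actually used in MEGA7; the two sequences come from the Peptide Synthesis section, and any further entries added to the dictionary would be assumptions.

```python
from itertools import combinations

# Mature peptide sequences taken from the Peptide Synthesis section of this paper.
PEPTIDES = {
    "tilapia_Spx1a": "NWTPQAMLYLKGTQ",
    "tilapia_Spx1b": "NWTSQAILYLKGAQ",
}

def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two equal-length aligned sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    diffs = sum(1 for x, y in zip(a, b) if x != y)
    return diffs / len(a)

def identity_percent(a: str, b: str) -> float:
    """Percent identity, the complement of the p-distance."""
    return 100.0 * (1.0 - p_distance(a, b))

if __name__ == "__main__":
    for (name1, seq1), (name2, seq2) in combinations(PEPTIDES.items(), 2):
        print(f"{name1} vs {name2}: "
              f"{identity_percent(seq1, seq2):.1f}% identity "
              f"(p-distance {p_distance(seq1, seq2):.3f})")
```

Run on the two tilapia peptides, this reports three differing positions out of fourteen (positions 4, 7, and 13), in agreement with the comparison given in the Results.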
Peptide Synthesis
Tilapia spexin 1a (NWTPQAMLYLKGTQ-NH2), tilapia spexin 1b (NWTSQAILYLKGAQ-NH2) and human galanin (GWTLNSAGYLLGPHAVGNHRSFSNKNGLTS-NH2) were synthesized by GL Biochem. Peptides were synthesized by the automated solid-phase method by applying Fmoc active-ester chemistry, purified by HPLC to >95% purity and the carboxy terminus of each peptide was amidated. The peptides were dissolved to the desired concentration in fish saline (0.85% NaCl in DDW) for in vivo experiments.
In vivo Effect of Spx1a and Spx1b on FSH and LH Release
Adult female tilapia (body weight (BW) = 82.4 ± 21.7 g) were injected intraperitoneally with saline, tilapia spexin 1a, or spexin 1b at 10 µg/kg fish (n = 8-12 fish per group). The fish were bled from the caudal blood vessels into heparinized syringes at 2, 4, 8, and 24 h after injection. Blood was centrifuged (3,200 rpm for 30 min at 4 °C) to obtain plasma samples, which were stored at −20 °C until assayed. This time course is according to standard protocols used previously to test the effect of various hypothalamic neuropeptides on circulating levels of LH and FSH in tilapia (35,36).
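For readers who want to reproduce the dosing arithmetic, the sketch below converts the stated 10 µg/kg intraperitoneal dose into a per-fish peptide amount and injection volume. The working-stock concentration of 10 µg/mL in fish saline is a hypothetical value, not stated in the paper.

```python
def injection_plan(body_weight_g: float,
                   dose_ug_per_kg: float = 10.0,
                   stock_ug_per_ml: float = 10.0):
    """Return (peptide amount in µg, injection volume in mL) for one fish.

    dose_ug_per_kg is taken from the paper (10 µg/kg); stock_ug_per_ml is a
    hypothetical working-solution concentration used only for illustration.
    """
    body_weight_kg = body_weight_g / 1000.0
    amount_ug = dose_ug_per_kg * body_weight_kg
    volume_ml = amount_ug / stock_ug_per_ml
    return amount_ug, volume_ml

if __name__ == "__main__":
    # Mean female body weight reported in the experiment: 82.4 g.
    amount, volume = injection_plan(82.4)
    print(f"peptide: {amount:.3f} µg, volume: {volume:.3f} mL")
```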
ELISA for the Measurement of Tilapia FSH and LH
Plasma LH and FSH concentrations were measured by specific competitive ELISAs developed for tilapia (37) using primary antibodies against recombinant tilapia LHβ or FSHβ, respectively, and recombinant tilapia LHβα (38) or FSHβα (37) for the standard curves. The sensitivity was 2.43 and 1.52 ng/mL for LH and FSH, respectively. The inter-assay variation was 14.8 and 12.5%, and the intra-assay variation was 7.2 and 8% for LH and FSH, respectively.
Determination of Brain Spx and Galr Expression in Fed vs. Fasted Adult Tilapia by qPCR
An in vivo experiment in which sexually mature male tilapia (n = 24, BW = 41.7 ± 1.64) were divided into two groups (one fed ad libitum and the other fasted for 26 days) was previously performed (39). Total RNA from the brains of the fed and starved groups was used for real-time PCR expression analyses. In this study, the RNA generated from that experiment was used as template for generating cDNA and performing qPCR analysis of brain Spx1a, Spx1b, Galr2a, and Galr2b expression.
The qPCR cycling parameters consisted of pre-incubation at 95 °C for 10 min followed by 45 cycles of denaturation at 95 °C for 10 s, annealing at 60 °C for 30 s, and extension at 72 °C for 10 s, followed by a melting curve analysis (95 °C for 60 s, 65 °C for 60 s, 97 °C for 1 s). Reaction conditions were according to (19), and amplification was carried out on a LightCycler 96 (Roche Diagnostic International). Serial dilutions were prepared from a cDNA pool, and the efficiencies of the specific gene amplifications were compared by plotting Ct vs. log (template concentration). A dissociation-curve analysis was run after each real-time experiment to confirm the presence of a single product. To control for false-positives, a no reverse transcriptase negative control was run for each template and primer pair. Data were interpreted by the comparative cycle threshold method using 18S as a reference gene (19).
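A minimal sketch of the two calculations described here, amplification efficiency from the slope of a Ct vs. log(template) standard curve and relative expression by the comparative cycle threshold (2^-ddCt) method with 18S as the reference gene, is given below. All Ct values in the example are invented for illustration; they are not data from this study.

```python
import numpy as np

def amplification_efficiency(log10_template: np.ndarray, ct: np.ndarray) -> float:
    """Efficiency from a serial-dilution standard curve (Ct vs. log10 template).

    A slope of about -3.32 corresponds to ~100% efficiency: E = 10**(-1/slope) - 1.
    """
    slope, _intercept = np.polyfit(log10_template, ct, 1)
    return 10.0 ** (-1.0 / slope) - 1.0

def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_calib: float, ct_ref_calib: float) -> float:
    """Comparative Ct (2^-ddCt) fold change with a reference gene such as 18S."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calib = ct_target_calib - ct_ref_calib
    return 2.0 ** (-(d_ct_sample - d_ct_calib))

if __name__ == "__main__":
    # Hypothetical 10-fold serial dilution of a cDNA pool.
    dilutions = np.log10(np.array([1e0, 1e-1, 1e-2, 1e-3, 1e-4]))
    cts = np.array([18.1, 21.5, 24.8, 28.2, 31.6])
    print(f"efficiency ~ {amplification_efficiency(dilutions, cts):.2f}")
    # Hypothetical Ct values for a fasted sample vs. a fed calibrator.
    print(f"fold change ~ {relative_expression(24.0, 10.0, 22.5, 10.2):.2f}")
```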
Second Messenger Reporter Assays
Transient transfection, cell procedures, and stimulation protocols were generally performed as described previously (27,40). Briefly, the entire coding sequences of tilapia Galr1a, Galr1b, Galr2a, Galr2b, Galr type 1, and Galr type 2 were inserted into pcDNA3.1 (Invitrogen). Co-transfection of the receptors (3 µg/plate for each Galr) and a cAMP-response element-luciferase (CRE-Luc), serum response element (SRE-Luc; Invitrogen), or Gqi5 reporter plasmid (3 µg/plate) was carried out with the TransIT-X2 System (Mirus). The cells were serum-starved for 16 h, stimulated with various stimulants for 6 h, and then harvested and analyzed according to (27,40). Experiments were repeated a minimum of three times from independent transfections and each treatment was performed in triplicate wells. EC50 values were calculated from dose-response curves by means of computerized non-linear curve fitting on baseline-corrected (control) values using Prism version 6 software (GraphPad). When we started this project, the sequence of tilapia galanin was unknown, and hence we used the known human galanin. The first thirteen residues of human and tilapia galanins are identical (Figure 1), and since it has been shown that the N-terminal region of galanin is responsible for ligand binding, the effect of human galanin is probably similar to that of the tilapia peptide (41).
To confirm which of the novel receptors cloned in this study transduce effects through the inhibitory Gi signaling pathway, we used a plasmid that contains a chimeric G-protein, Gqi5, in which the five C-terminal amino acids of Gq were changed to those of Gi1/2 (obtained from Addgene, Inc., Cambridge, MA, USA), as described previously (42). Gqi5 is activated by Gi-linked GPCRs, but couples to the effector protein normally activated by Gq, phospholipase C. Since phospholipase C produces IP3 and actuates Ca2+ release, the inhibitory nature of Gi-linked receptors can be observed as stimulation via the SRE-luc reporter (43).
For electrophysiological recordings, slices were gently transferred to a chamber attached to the stage of an upright microscope (Axioskop FS, Zeiss, Oberkochen, Germany) continuously superfused with Ringer's saline at room temperature. Endocrine cells were viewed with a 40×, 0.8 numerical aperture, water immersion objective lens (Olympus, Munich, Germany). Patch pipettes were pulled from borosilicate glass capillaries (Hilgenberg, Germany) on a Narishige PP83 puller. Membrane currents were recorded using an on-cell patch technique (cell-attached recording), in which a patch electrode is attached to the cell but the membrane is not broken. The standard pipette solution contained (in mM): 135 potassium gluconate, 2 MgCl2, 1 CaCl2, 11 EGTA, 3 ATP (magnesium salt), and 10 HEPES (potassium salt), pH 7.25. Only fluorescently labeled cells from the pituitary were recorded. An Axoclamp-2B amplifier (Axon Instruments, Union City, CA) was used in Bridge mode. Experiments were controlled by a USB-6341 data acquisition board (National Instruments, USA) and WinWCP V5.3.7 (Strathclyde Electrophysiology Software, UK). Data were analyzed using pClamp 10 (Axon Instruments) and Microcal Origin 6.0 software.
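The firing-rate analysis reported later (instantaneous frequency per spike, and mean spike rate before and after peptide application) reduces to simple arithmetic on spike times. The sketch below shows one way to compute it; the spike times are hypothetical, and the actual recordings were analyzed in pClamp and Origin, not with this script.

```python
import numpy as np

def instantaneous_frequency(spike_times_s: np.ndarray) -> np.ndarray:
    """Instantaneous frequency (Hz) of each spike relative to the previous one."""
    isis = np.diff(spike_times_s)          # inter-spike intervals in seconds
    return 1.0 / isis

def mean_rate_in_window(spike_times_s: np.ndarray, t0: float, t1: float) -> float:
    """Mean firing rate (Hz) within the half-open window [t0, t1)."""
    n = np.count_nonzero((spike_times_s >= t0) & (spike_times_s < t1))
    return n / (t1 - t0)

if __name__ == "__main__":
    # Hypothetical spike times (s): tonic firing that pauses after peptide onset at t = 10 s.
    spikes = np.array([0.5, 1.1, 1.8, 2.4, 3.1, 3.9, 4.6, 5.2, 6.0, 6.8, 7.5,
                       8.3, 9.1, 9.8, 18.5, 21.0, 23.2])
    print("instantaneous frequency (Hz):",
          np.round(instantaneous_frequency(spikes), 2))
    print("rate before application:", mean_rate_in_window(spikes, 0, 10), "Hz")
    print("rate after application:", mean_rate_in_window(spikes, 10, 25), "Hz")
```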
Statistical Analyses
Data are presented as means ± SEM. All samples had equal variance, as determined by an unequal variance test performed using JMP 7.0 software. The significance of differences between group means of plasma gonadotropin levels and reporter assays was determined by ANOVA, followed by Tukey's test using GraphPad Prism 5.01 software (San Diego, CA). EC50 values of the receptor assays were calculated using log treatment vs. luciferase intensity on a non-linear regression curve using Prism.
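The EC50 fitting was done in Prism. As a rough open-source equivalent, and only as a sketch, the following code fits a four-parameter logistic (Hill) curve to baseline-corrected luciferase responses with SciPy and reports the EC50; the concentration-response values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, log_ec50, hill):
    """Four-parameter logistic: response as a function of log10(concentration)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - log_conc) * hill))

def fit_ec50(log_conc, response):
    """Fit the 4PL model and return (EC50 in molar units, fitted parameters)."""
    p0 = [min(response), max(response), float(np.median(log_conc)), 1.0]
    params, _cov = curve_fit(four_pl, log_conc, response, p0=p0, maxfev=10000)
    _bottom, _top, log_ec50, _hill = params
    return 10.0 ** log_ec50, params

if __name__ == "__main__":
    # Hypothetical baseline-corrected luciferase fold-induction values.
    conc_m = np.array([1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6])
    response = np.array([1.0, 1.1, 1.6, 3.0, 4.5, 5.0, 5.1])
    ec50, params = fit_ec50(np.log10(conc_m), response)
    print(f"EC50 ~ {ec50:.2e} M")
```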
Cloning and Tissue Distribution of Spexin1a, 1b, and Gal Receptors in Tilapia
The cloned full coding sequences of tilapia spexin 1a and 1b were 363 and 315 base pairs and coded preprohormones of 120 and 104 amino acids, respectively. Both mature tilapia spexin peptides were presumed to be 14 aa based on conserved peptide-flanking monobasic and dibasic cleavage sites (Figure 1). The amino acid sequence of Spx1b differs from that of orthologous piscine Spx1a at positions 4 (Pro to Ser), 7 (Met to Ile), and 13 (Thr to Ala). Both Spx1a and Spx1b are likely amidated at their C-termini due to the GRR motif. The amino acid sequence of Spx2 differs from that of Spx1 at positions 3 (Gly vs. Thr), 6 (Ser vs. Ala), 13 [Arg vs. Thr (piscine species) or Ala (shark and tetrapods)], and 14 (Tyr or His vs. Gln) (Figure 1).
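The inference that the mature peptides are 14 aa, flanked by basic cleavage sites and amidated via a GRR motif, can be illustrated with a very rough motif scan. The sketch below searches a toy preprohormone (an assumed signal-like stretch plus the tilapia Spx1a mature sequence, not the real cloned preprohormone) for dibasic convertase sites and a Gly-plus-dibasic amidation signal. Real prohormone processing prediction also involves signal peptide cleavage and convertase specificity, so this is illustrative only.

```python
import re

DIBASIC = re.compile(r"(KR|RR|KK|RK)")
AMIDATION = re.compile(r"G(?:KR|RR|KK|RK)")   # Gly amide donor followed by a dibasic site

def candidate_cleavage_sites(prepro: str):
    """Return 0-based positions of dibasic convertase motifs in a preprohormone."""
    return [m.start() for m in DIBASIC.finditer(prepro)]

def predict_amidated_peptides(prepro: str):
    """Rough prediction: a peptide ends where a Gly + dibasic motif follows,
    and starts just after the nearest upstream dibasic site (or at the start)."""
    peptides = []
    for m in AMIDATION.finditer(prepro):
        upstream = [p for p in candidate_cleavage_sites(prepro) if p + 2 <= m.start()]
        start = (upstream[-1] + 2) if upstream else 0
        peptides.append(prepro[start:m.start()])
    return peptides

if __name__ == "__main__":
    # Hypothetical toy preprohormone: signal-like stretch, a KR cleavage site,
    # the tilapia Spx1a mature sequence, then the GRR amidation/cleavage signal.
    toy = "MKTLLVLAVLA" + "KR" + "NWTPQAMLYLKGTQ" + "GRR" + "SSEEALQ"
    print("dibasic sites at:", candidate_cleavage_sites(toy))
    print("predicted amidated peptide(s):", predict_amidated_peptides(toy))
```

On this toy input the scan recovers exactly the 14-residue Spx1a sequence as the predicted amidated peptide.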
To shed light on the potential physiological roles of spexins/Galrs in tilapia, we examined their mRNA tissue distribution by real-time PCR (Supplementary Figure 1). The two spexins and their related receptors exhibit overlapping and distinctive patterns of expression in various tissues. Spx1a expression was detected primarily in the midbrain and ovary, whereas Spx1b was detected in pituitary, kidney, and all brain parts, but mostly in the anterior brain. Spx1b expression levels were a few orders of magnitude higher than that of Spx1a relative to 18S ribosomal subunit expression. The expression patterns of Galr1a and Galr2a were similar, where both were detected in most of the tissues with the highest expression observed in the kidney and head kidney. However, Galr1b was detected mostly in the anterior gut and anterior brain, with lower expression found in all other tissues except the gills. High expression of Galr2b was detected in the anterior and midbrain relative to the other tissues, where virtually no expression was detected.
Evolutionary History of Spexins and Gal Receptors
The phylogenetic analysis showed that the vertebrate SPX sequences fall into two distinct clades, with both cichlid spexins grouped with Spx1 (Figure 2A) and all Spx2 orthologs grouped together in a distinct clade. However, the second tilapia spexin paralog was more phylogenetically related with SPX1 than with SPX2 homologs. We also found that other cichlids, like zebra mbuna and Burtoni, contain two spexin paralogs that are more similar to orthologous spexin 1 sequences. Thus, we named the two cichlid spexins Spx1a and Spx1b.
The genomic organization of spexin paralogs between zebrafish, Nile tilapia, and Burtoni was determined. The genomic environment for spx1a was identified on zebrafish chromosome 4, Nile tilapia LG15, and Burtoni on an unplaced scaffold, and was highly conserved (Figure 2B). Spx1b was identified on Nile tilapia LG7 and an unplaced scaffold in Burtoni between golt1bb and ldhbb. Spx2 was identified on zebrafish chromosome 25 between gal and ldhd ( Figure 2C).
To determine the phylogenetic identity of the cloned Gal receptors we performed a phylogenetic analysis consisting of vertebrate Galr and Kissr sequences (Figure 3). The Galr and Kissr sequences formed two distinct clades. The Galr clade was subdivided into Galr1 and Galr2/3 clades, which, respectively, further formed distinct groups of Galr1a and Galr1b, and Galr3, Galr2a, and Galr2b. Tilapia Galr type 1 and 2 and medaka Galr type 1 belong to a sister group of Galr1.
Tilapia Spexins Suppress FSH and LH Release in vivo
We next aimed to evaluate the biological effects of the two spexin paralogs on the release of tilapia gonadotropins. After a single intraperitoneal administration of 10 µg/kg fish Spx1a, both FSH and LH plasma levels significantly decreased after 120 min, compared with the levels observed at 0 min, and stayed at this lower level even after 24 h (Figures 4A,C). Injection of Spx1b caused a gradual decrease in plasma levels of FSH and LH, with lower levels shown after 4 and 8 h (Figures 4B,D). In the control groups, plasma gonadotropin levels did not change for the whole experimental period.
Fasting Lowers Spexin and Galr2 mRNA Expression in Adult Tilapia
Most of the documented knowledge relates spexin to metabolic processes or feeding behaviors. Even though our study primarily focused on reproduction, we aimed to determine differences in spexin expression in adult tilapia that were fasted or fed for 26 days. Spx1a and Spx1b mRNA expression was significantly lower in the fasted vs. fed fish. Galr2a expression was significantly lower in fasted fish than in fed fish, and Galr2b expression was not affected (Figure 5).
Activation and Signaling of Tilapia Galanin Receptors
All six cloned, full-length tilapia Galrs were sub-cloned into the pcDNA3.1 expression vector and co-transfected into COS7 cells with a luciferase plasmid containing either cAMP response element (CRE), serum responsive element (SRE), or Gqi5 to evaluate their signal transduction pathway(s) upon treatment with tilapia Spx1a, Spx1b, or human galanin (Figure 6). Since each Galr subtype is coupled to different G proteins (Gα i for GalR1 and GalR3; Gα q/11 for GalR2) (14), we tested all the receptors using three different signal transduction pathways: CRE-luc for the activation of PKA/cAMP, SRE-luc for the activation of PKC/Ca2+ and MAPK, and Gqi5/SRE-luc for further verification of inhibition through Gα i . The 5 aa at the carboxyl-terminus of Gqi5 are sufficient to enable interaction with Gα i -coupled receptors (42). Ligand stimulation of a Gα i -coupled receptor is expected to trigger the following cascade: Gqi5, phospholipase C, protein kinase C, SRE-luciferase transcription, luciferase activity, light emission. For verification of the activation of Gqi5 through Gα i , we used the tilapia dopamine D2 receptor (27,40). A dose-dependent increase in luciferase activity confirmed that the tilapia D2 receptor relayed its activity through the inhibitory Gi (Supplementary Figure 4). Tilapia Spxs or hGAL did not increase CRE-luc or SRE-luc levels in cells expressing Galr1a or Galr1b, but hGAL activated Gqi5 signaling via Galr1a. Increasing concentrations of Gal resulted in significant increases in CRE-luc, SRE-luc, and Gqi5 activity in cells expressing Galr2a. Stimulation by high concentrations of both tilapia Spx paralogs resulted in an increase in Gqi5 in cells expressing Galr2a. Exposure to increasing concentrations of either Spx resulted in a dose-dependent increase in SRE-luc, CRE-luc, and Gqi5 activity in cells expressing Galr2b. Both Spx1a and Spx1b were also very efficient in driving transcription of SRE-luc in cells expressing Gqi5, suggesting that Galr2b relays its signal through Gi (EC50 = 1 nM). The EC50 values of Spxs and Gal for each receptor are summarized in Table 3.
Effect of Spexin1a on the Electrophysiological Activity of Pituitary LH Cells
Since we found that spexins activate Galr2a and 2b, and that the former is also expressed in the pituitary of tilapia, we sought to determine whether the peptides can act directly at the pituitary level. To this end, we utilized a pituitary slice preparation and measured the effect of Spx on the spike rates of identified LH cells. Figure 7 shows that upon application of 100 nM Spx1a, action potential generation ceased, and the spike rate gradually returned to the basal level upon washout of the peptide. Similar results were found for Spx1b (Supplementary Figure 5).
DISCUSSION
Spexin is a highly conserved peptide hormone that may have pleiotropic functions in different vertebrate species. Cloning of two spexin paralogs and six galanin receptors permitted phylogenetic, synteny, and functional analyses in Nile tilapia, supporting the involvement of the spexin/galanin receptor system in fish reproduction and metabolism. Evolutionary analyses support that cichlid species possess a novel form of Spx1, named Spx1b, but not Spx2 like in some other vertebrates.
Demonstration of Spx1a and Spx1b involvement in reproduction was shown in vivo and in vitro, and starvation limited the expression of both paralogs. Despite their differential tissue distribution and expression profiles, both Spx1a and Spx1b had similar biological effects, which are likely mediated through galanin receptor 2b. Phylogenetic and synteny analyses coupled with vertebrate ancestral chromosome (VAC) reconstruction supported that kiss, gal, and spx genes were located adjacent to each other on VAC D and individually arose by local tandem duplication prior to the first whole genome duplication event (15). This genomic proximity was conserved after two rounds of whole genome duplication (1R, 2R) and is observed in modern vertebrate genomes, where spx1 is found near kiss2, and spx2 is found near gal. Our synteny analysis supported this organization for Nile tilapia and Burtoni spx1, but not spx2. We initially had thought that the spexin-like gene flanked by golt1bb and ldhbb in tilapia was spx2, but this did not correspond with the genomic organization of zebrafish spx2, which is flanked by gal and ldhd. The genomic distance between Nile tilapia gal and ldhd is >34 million base pairs, and spx1b lies outside of this syntenic block between golt1bb and ldhbb. No zebrafish spexin-like gene was found between golt1bb and ldhbb, suggesting that this tilapia and Burtoni spexin paralog is not spx2. Additionally, Spx1b has 85-93% (12-13/14 aa) sequence conservation across all vertebrate Spx1 peptides, but only 57-71% (8-10/14 aa) conservation with the variable and meager Spx2 sequences. Phylogenetic analysis of mature spexin peptide sequences shows that cichlid Spx1b forms a sister clade to vertebrate Spx1, and is not placed with other teleost Spx1 homologs. Due to the highly conserved nature of short peptide sequences and a 20% divergence in amino acid sequence identity from Spx1, we propose that Spx1b is a novel spexin peptide found only in cichlid species.

FIGURE 7 | Effect of spexin1a on firing rate of LH cells in mature tilapia. Transgenic tilapia (LH-RFP) pituitaries were dissected for electrophysiological recording, and a cell-attached configuration was used to monitor action potentials without breaking the cell membrane. The pituitary slice was briefly exposed to Spx1a during the recording (yellow box). (A) A graphic representation of firing rate before (B), during (C), and after spexin application. Each dot represents an action potential along the time scale (X-axis) and the instantaneous frequency from the preceding action potential (Y-axis). Note that the firing rate decreased after Spx1a was applied, and after a few minutes the spike rate slowly rose back to the control level. (D) Two spikes with an example of the firing-rate measurement.
Spexin has been implicated in regulating the hypothalamicpituitary-gonad (HPG) axis in some fish species. Our in vivo experiments support that not only Spx1a, but also the novel form, Spx1b, inhibits LH and FSH release in tilapia. The first report on fish spexin in zebrafish and goldfish identified high and dynamic expression in the brain throughout the reproductive stages (10). Spexin inhibited LH release in goldfish pituitary cultures and in vivo, and an estradiol feedback mechanism on hypothalamic spexin expression was observed. Ovariectomized goldfish had increased hypothalamic spexin expression, and estrogen replacement returned expression to basal levels (10). A similar effect was seen in the spotted scat, where estradiol decreased spexin expression in a dose-dependent manner in vitro (46). However, zebrafish spx −/− knockouts displayed normal reproductive phenotypes, so spexin may only play a direct role in regulating the reproductive axis in certain fish species (12). With that being said, knockout models of reproductive hormones in fish species tend to reproduce normally [reviewed in (47)]. There are few reports on the role of spexin in reproduction, all of which have been carried out in fish; additional studies in non-piscine species are required to determine if spexin has a functionally conserved role along the HPG axis.
A distinctive feature of the teleost pituitary gland is that LH and FSH are synthesized by different gonadotrophe cells, which suggests a functional and differential regulation of gonadotropin release mechanisms. The cells of the teleost pituitary are distinctly organized and receive hypothalamic signals directly by nerve innervation as well as via neurovasculature (18). Teleost LH cells form dense networks throughout the pars distalis of the adenohypophysis, and FSH cells form small clusters that are more broadly distributed throughout the pars distalis (17). LH cells, but not FSH cells, are functionally coupled, as shown by perfusion of pituitary fragments exposed to GnRH with gap-junction blockers and by a patch-clamp technique (17). These organizational characteristics of the teleost pituitary complicate our understanding about the various ways by which the gonadotrophes decode stimulatory and inhibitory inputs. The principal stimulatory and inhibitory factors for LH release are gonadotropin-releasing hormone (GnRH) and dopamine (DA), respectively, but less is known about FSH release. LH and FSH cells are regulated by more than 20 neurohormones, but only a small portion of them function as inhibitory factors (47). In fish, the dopamine receptor (D2-R) is expressed on LH cells (48) and on GnRH neurons (26), suggesting that direct and indirect mechanisms of LH release inhibition exist. Gonadotrophs are excitable cells, and action potential frequency of LH and FSH cells were shown to be influenced by GnRH exposure (49). We performed electrophysiological measurements on LH cells in pituitary slices and observed that both Spx1a and Spx1b cause a reversible decrease in action potential frequency. Although the precise link between electrical activity and hormone release has yet to be determined, this finding does confirm that Spx can act directly at the level of the pituitary.
Spexin mRNA expression levels may also be correlated with reproductive development. Increases in LH and FSH mRNA expression is seasonal and corresponds to gonadal development (50,51). In goldfish, hypothalamic Spx1 expression significantly decreased as the breeding season progressed and GSI increased (10), potentially contributing to progressive LH release. A similar decrease was seen in the orange-spotted grouper over the course of oogenesis, where the highest hypothalamic Spx1 expression was observed prior to oocyte primary growth (11), and in the spotted scat, expression significantly decreased over the course of vitellogenesis (46). In zebrafish, however, Spx1 expression in the brain gradually increased from the ovarian primary growth stage, peaked in early-mid vitellogenesis, then returned to primary growth levels during final maturation (10). Given that Spx1 is negatively regulated by estrogen (10), which increases in circulation until final oocyte maturation, it seems reasonable that Spx expression correlates with reproductive development. Additionally, Spx1 treatment affects the expression of other reproductive factors, such as increasing gonadotropin inhibitory hormone (GnIH) and GnRH-III expression and decreasing GpHα and FSHβ expression in a sole (52).
Spexin has wide tissue distribution patterns in tetrapods and fish, suggesting that spexin has pleiotropic biological consequences in feeding behavior, metabolism, nociception, cardiovascular function, muscle motility, stress, depression, and anxiety (3). The localization of spexins at the cellular level informs these functions. In tilapia, the two spexin paralogs exhibited wide differential tissue distribution patterns, where Spx1a was primarily expressed in the midbrain (containing the hypothalamus in our preparations), and Spx1b was primarily expressed in the anterior brain, which contains the preoptic area. Similarly, in a detailed report of spexin neuron circuitry in zebrafish, Spx1 expression was restricted to the midbrain tegmentum and the hindbrain and Spx2 expression was found in the preoptic area; however, neither was detected in the hypothalamus (53). In goldfish, Spx1 was identified in the hypothalamus, thalamus, and medial longitudinal fasciculus; however, immunoreactivity was detected using heterologous antibodies that may have recognized both Spx1 and Spx2 (10). Even though our tissue distribution analysis is crude and unable to implicate any detailed biological function from expression, these data support the conservation of Spx1 expression, which differs from Spx1b localization. In order to determine if the neuronal circuits of spexins are conserved in fish, methodologies for determining specific spexin localization (IHC with custom antibodies, fluorescent in situ hybridization, etc.) are needed from additional species.

Spexin plays a role in feeding behavior and metabolism in fish and mammals [reviewed in (3)]. In the orange-spotted grouper (11), half-smooth tongue sole (46), and spotted scat (46), starvation increased Spx1 expression. We observed a decrease in brain Spx1a, Spx1b, and Galr2a expression in fasted adult tilapia. Brain Spx1 expression was also decreased in unfed Ya-fish (54). In goldfish, Spx1 injections decreased surface feeding and increased food rejection, and circulating Spx1 and brain and liver Spx1 mRNA expression increased after feeding (2,55). This effect was also observed in spx1−/− zebrafish, which had higher food intake compared to wildtype fish, and Spx1 administration suppressed agouti-related peptide (AgRP) expression (12). In support of this mechanism of satiety control, central administration of Spx1 in goldfish caused a decrease in expression of appetite stimulants (NPY, AgRP, and apelin) and increased expression of anorexigenic factors [proopiomelanocortin (POMC), cocaine- and amphetamine-regulated transcript (CART), cholecystokinin (CCK), melanin-concentrating hormone (MCH), and corticotropin-releasing hormone (CRH)] in different parts of the brain (2). Furthermore, intracerebroventricular injection of Spx1 inhibited feeding behavior induced by NPY and orexin. In addition to appetite factors, insulin was shown to increase circulating levels of Spx1 and its expression in the telencephalon, hypothalamus, and optic tectum (55). Therefore, local and central Spx1 might have a role in appetite control and energy homeostasis in fish.
Spexin mediates its effects by activating an inhibitory Gprotein (Gα i ) via galanin receptor (GalR) 2/3. Like the kisspeptin, galanin, and spexin peptides, the kisspeptin and galanin receptors are related. Ancestral forms of KissR, GalR1, and GalR2/3 were identified on different VACs with 1R and 2R expanding each clade, giving rise to four KissR (KissR1, 3, 2, 4), two GalR1 (1a, 1b), two GalR2 (2a, 2b) and GalR3 paralogs (15). Our phylogenetic analysis supports that of Kim et al. (15), showing that GalRs are divided into two major clades, namely GalR1 and GalR2/3, with KissRs forming a distinctive sister clade. We cloned two additional Galrs based on predicted NCBI sequences named Galr type 1 and type 2, which formed a sister group to GalR1 sequences, but were not activated by galanin or spexin (Supplementary Figure 2). It has been previously shown that the cognate receptor(s) for Spx1/Spx2 are GalR2/3, whereas GalR1 is the cognate receptor for galanin; teleosts do not possess GalR3 (15). Our second messenger reporter assays revealed that tilapia Spx1a and Spx1b activated Gα s (via CRE), Gα q (via SRE), and Gα i (via Gqi5) signaling pathways through Galr2b, but not Galr1a or Galr1b. Galanin was more efficacious than Spx1a or Spx1b in activating Gα s and Gα i via Galr2a, but did not activate Gα q (Figure 8). Determining inhibitory effects via Gα i is challenging, so we utilized a reporter plasmid that permits Gα i -coupled receptors to stimulate PLC (42). Human, Xenopus, and zebrafish GalRs were activated in a similar manner using an alternative expression system (15). GalR2 displays differential signaling preferences to either Gal or Spx, which induce different conformational changes in the receptor and bias intracellular signaling cascades (40). Spx quickly dissociated after G-protein signaling, whereas Gal binding was more stable and promoted arrestin-dependent receptor internalization. It has been shown that galanin and spexin have an inverse relationship in regards to LH release [reviewed in (15)], perhaps due in part to the receptor displaying biased agonism in order to decipher endogenous hormone signaling.
We have shown evolutionary and functional evidence that cichlid fish possess two paralogs of Spx1 but not Spx2, and that tilapia Spx1a and Spx1b can inhibit LH and FSH secretion via Galr2b. Why cichlids have evolved a second form of Spx1 and lost Spx2 remains unknown. One in ten teleosts are cichlids, making them the most species-rich group of vertebrates and excellent genetic models of adaptive radiation (56). Additionally, Nile tilapia is a major globally aquacultured species. Further research into the potential for manipulation of growth and reproductive processes with spexin or galanin receptor agonists/antagonists is warranted.
ETHICS STATEMENT
The animal study was reviewed and approved by Local Administrative Panel on Laboratory Animal Care of the Hebrew University. | 2020-02-20T09:03:24.299Z | 2019-11-24T00:00:00.000 | {
"year": 2020,
"sha1": "3ce410897d2632a4f13818ad346cad689fd1ad24",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2020.00071/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fffd57053945616a91c6d9c61c16376b102f4df",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
186525616 | pes2o/s2orc | v3-fos-license | Handling Tantrums in Children Aged 5-6 Years in TK Pembina
This case study describes the handling of tantrum behavior carried out at TK Pembina Malang in a child aged 5-6 years, where the parents' divorce resulted in inconsistent upbringing that led the child into tantrums. Tantrum is one of the characteristics of a problem child in emotional development whose handling is often inappropriate. The causes of tantrums in children include a rough family background, parents who are not yet mature and lack the readiness to care for children, parents who are unable to love their children, excessive feelings that are not handled properly, and children who are not ready to face new situations. The tantrum behavior was handled using the method of telling stories with moral messages, but the handling is still not optimal because of the lack of cooperation between the school and the child's family.
I. INTRODUCTION
Childhood is a critical time for human development; it is therefore appropriate that, during this period, stimulation is given according to the stages of development so as to minimize the developmental problems experienced by children, one of which concerns children's social-emotional development. Children with delays in socio-emotional development may later experience difficulties in making friends and maintaining good relationships with them, have a higher likelihood of delinquent or criminal records, face rejection by the surrounding environment, and, worse, drop out of school, have poor relationships with parents and peers, become unemployed, and suffer mental health problems [1], [2], [3], [4].
In the current era of globalization, the problems of child development are increasingly complex, especially those of children's emotional development. The case of a student assaulting a teacher to death in Sampang, Madura [5], the rampant brawls between students, one of which was a bloody brawl involving 40 vocational school students in Bekasi [6], and even a fight between members of the Indonesian Parliament [7] are all examples of failed emotional development. Emotional problems, especially tantrums, therefore need to be handled from an early age.
Previous research on handling emotional problems experienced by children at school describes several processes: first, setting the environment; second, giving direct instructions; and third and fourth, application and follow-up. Nonetheless, that study did not reveal in depth the causes of tantrums in children or suitable learning models for handling them. This research therefore examines further the causes of tantrums and the learning methods used in handling tantrums in children. TK Pembina in Malang City is an inclusive school where, during observations and interviews with teachers, some children were found to have problems in emotional development, one of which is the tantrum. LL is one of the children indicated to experience tantrums at the age of 5 years, an age at which a child should have begun to manage his emotions.
This study describes the state of the tantrums experienced by LL, the causes of LL's tantrums, and how the teacher applies the learning model in dealing with the tantrums that occur in LL at TK Pembina in Malang City. This study aims to serve as a reference for handling tantrums in children in kindergarten schools.
The rest of this paper is organized as follow: Section II describes the literature review. Section III describes the data and proposed methodology. Section IV presents the obtained results and following by discussion. Finally, Section V concludes this work.
II. LITERATURE REVIEW
This section presents the literature review.
A. Emotion
Emotion plays an important role in a child's life because emotions determine the child's ability to adapt to the environment. There are various types of problems related to children's emotions, including lack of affection, anxiety, hypersensitivity, phobias, tantrums, withdrawal, and many more. Reynold argues that children's emotional problems are usually caused by family background, physical and emotional neglect by the parents, parents who are not yet mature and lack the readiness to care for children, losing a loved one too early, parents who are unable to love their children, excessive feelings that are not handled properly, children who are not ready to face new situations, intimidation, disruption, and insecurity from other children, and physical disabilities.
B. Tantrum in children
Tantrum is one of the characteristics of problem children in their emotional development. Dewi [8] argues that the characteristics of tantrums are excessive anger, very strong fear, shame, and hypersensitivity. In excessive anger, for example, the child wants to harm himself and his belongings. A very strong fear can interfere with interaction with the environment. Furthermore, the child becomes embarrassed and withdraws from his environment; being hypersensitive, he is very sensitive, finds it difficult to overcome feelings of exclusion, and tends toward negativity and moodiness.
In general, there are several characteristics for recognizing that a child is demonstrating tantrum behavior. Dewi [8] revealed that these characteristics are as follows: the child looks sullen or irritable; attention, hugs, or other special affection do not seem to improve his mood; he tries to do something out of character or asks for something that he believes will not be granted; he escalates his demands by whining and does not want to accept "no" for an answer; and he continues by crying, screaming, shaking, hitting, or holding his breath. Zaviera in [9] also explained the characteristics of tantrums based on age groups, from under 3 years to 5 years and above. Under 3 years old, tantrums take the form of crying, biting, hitting, kicking, screaming, squealing, arching the back, throwing the body to the floor, flailing the hands, holding the breath, banging the head, and throwing objects.

At the age of 3-4 years, tantrums include the behaviors of children under 3 years old plus stomping, screaming, punching, slamming doors, criticizing, and whining.

At the age of 5 years and over, the forms of tantrum become increasingly widespread and include the first and second sets of behaviors coupled with cursing, swearing, hitting, self-criticizing, breaking items intentionally, and threatening.
Tavris in [10] views the form of tantrums through a formation process that can be divided into three stages, namely the trigger stage, the response stage, and the formation stage. The trigger stage occurs when a child is attacked, criticized, or yelled at by a parent or sibling with something that is painful or irritating. The child then responds to the criticism aggressively and destructively. If the aggressive behavior displayed by the child is rewarded by the attacker falling silent or stopping the criticism, then this tactic is considered successful. This is where children begin to learn to form tantrum behavior as a weapon against all forms of attack from their environment. Meanwhile, Tasmin [11] distinguishes the forms of tantrum behavior based on the tendencies that children display at different ages, namely less than three years, three to four years, and over five years.
Dryden [12] classifies tantrum behavior based on the direction of its aggressiveness: aggressiveness directed outward and aggressiveness directed at the self. In outward-directed aggressive behavior, for example, the child damages surrounding objects such as toys, household furniture, electronic items, and others. In addition to objects, aggressiveness is also shown in the form of violence toward parents, relatives, friends, and others, by cursing, spitting, hitting, scratching, kicking, and other actions intended to harm others. Aggressive behavior directed at the self includes scratching the skin until it bleeds, banging the head against the wall or the floor, slamming the body onto the floor, scratching the face, or forcing oneself to vomit or cough.
Tasmin [12] suggested that several factors can cause tantrums in children, such as the obstruction of a child's desire to get something or an unmet need, for example being hungry, and the inability of children to express or communicate themselves and their desires, so that the parents' response does not match the child's wishes. Inconsistent parenting is also one of the causes of tantrums, including when parents are too indulgent or too neglectful of children. Other causes are when children experience stress, insecurity, and discomfort, which can also trigger a tantrum.
The causes of tantrums are closely related to family conditions, such as children receiving too much criticism from family members, marital problems between the parents, interference by siblings when children are playing, emotional problems with one parent, competition with siblings, communication problems, and parents' lack of understanding of tantrums, responding to them as something distracting and distressing [13].
Children who experience tantrums can be handled in various ways depending on the characteristics and severity of the tantrum problems experienced by the child. In a previous study by Withey [1], one of the ways that can be used at school is to provide interventions carried out through several processes. First, setting the environment: the teacher can manage the learning environment, provide activities that the children like within the learning material, and give children the freedom to explore their knowledge. What also needs to be considered is that teachers must model the same behavior they teach to students; for example, if teachers teach children not to shout when calling their teacher, then the teacher should not shout when calling the students.

Second, giving direct instructions. The instructions must be in accordance with the child's developmental period. When children experience tantrums, they make movements that they think can make them calmer. Then, after they are calmer, the next step is to give attention and a sense of comfort so that the child is willing to talk, and the teacher can analyze the child's feelings, acknowledge them, and then choose actions to resolve the cause of the child's anger [14]. The provision of these interventions aims to help students develop self-regulation skills by helping them become more aware of emotions as they experience them. For this skill, the researchers suggest teaching students to stop and calm themselves through breathing [15]. The third and fourth steps are application and follow-up, after the environmental arrangements and direct instructions have been carried out.
III. DATA AND METHODOLOGY
This research is a case study with descriptive analysis techniques. Observation was carried out to understand the handling of emotional development problems, especially tantrum behavior, in children aged 5-6 years. Observations were carried out at TK Pembina, Kedungkandang, Malang City, where a child was indicated to have an emotional problem, namely the tantrum. Data collection techniques were observation and interviews.
Observations were made in the classroom during learning and in the school environment when LL played with his friends. Interviews were conducted with the classroom teacher about LL's daily life at school, and with LL's family about LL's daily life and the care provided at home.
IV. RESULTS AND DISCUSSION
Based on the results of the observation at TK Pembina Kedungkandang in Malang, there was a child whose daily mood at school was always gloomy, who often responded with rejection and irritability, and who often yelled and even hit people nearby, even though he was 6 years old, an age at which such behavior should have begun to decrease; but not in this child. The child's behavior tends to reflect that of children who are experiencing an emotional problem, namely tantrums.
From the data described, the researcher noted several findings related to the state of LL, the research subject experiencing tantrums, as follows. LL is a child who is quick to understand lessons, completes given tasks quickly and well, likes art activities, and has high curiosity, but LL's emotional control is very poor: LL is very easily provoked into hurting his friends. When LL finished a task quickly, he often interrupted a friend who was still working on the task, and in the end LL hit his friend and then cried. LL also often disturbs children who are being waited for by their parents; LL dislikes it when another child is accompanied by his or her parents. LL comes from a broken home and lives with his father every Tuesday to Friday, while from Saturday to Monday and on holidays LL lives with his mother. The parenting styles applied by the mother's family and the father's family differ. Interviews and observations revealed that, when in the mother's environment, LL receives abundant attention and all his wishes are fulfilled, but LL feels he receives less attention when with his father. This makes LL feel jealous of his friends who are always accompanied by their parents during learning activities. In the mother's environment LL is showered with attention and spoiled, while at his father's home LL feels he receives less attention because he is always taught to take care of his daily needs independently, so LL feels neglected. The tantrums appear when LL's wishes are not fulfilled, and they always happen when LL comes to school from his mother's house. LL's tantrums took the form of going berserk, annoying his friends, throwing things around him, and crying. LL feels jealous when his classmates are watched over by their parents. When students lack focus or when LL has a tantrum during learning, the teacher always tells moral stories using LL as the subject of the story. The researchers found that religious and moral values are always emphasized throughout the learning activities. The teacher then makes an agreement with all students not to disturb friends who are doing assignments, especially with LL. The teacher always uses experiences from home or from the child's daily activities as a learning medium, and builds LL's confidence by labelling him "big brother", making LL a leader on several occasions, and asking LL to help friends who have difficulty completing assignments.
LL's tantrums appear when his desires are not fulfilled and when he seeks attention from the teacher, his classmates, and even the parents of other students, and they always happen when LL comes to school from his mother's residence. The tantrums took the form of going berserk, annoying his friends, throwing things around him, and crying. The cause of LL's tantrums is the unfulfilled need for affection that LL urgently needs, because LL comes from a broken home and lives alternately with his grandparents on his father's side and with his mother. The different parenting styles adopted by the mother's family and the father's family are also a cause of LL's tantrums; the differing habits imposed by his parents leave LL confused. At school, LL has tantrums and reacts with direct jealousy toward friends who are always accompanied by their parents during learning and friends who receive more attention from their parents.
The formation of LL's tantrums occurs in three stages. The trigger stage occurs in class when LL finishes his task quickly and then invites another child to play, and is reprimanded by the teacher, so that LL becomes angry because his wish is not fulfilled. In the response stage, LL responds to the reprimand aggressively, such as by crying, hitting his friend, and even scattering the items around him. Finally, the formation stage occurs because of the parenting style provided by the mother: when LL wants something he always whines and his mother gives in, so that "whining" becomes a weapon whenever LL wants something, and LL also whines when he wants something at school.
Efforts to handle LL's tantrum behavior have been carried out by the classroom teacher at school but have not been effective because they are not supported by the family environment. At school, the teacher handles the tantrum problem by providing interventions carried out through several processes. In the first, the teacher establishes the environment: the teacher has provided activities that suit the child's interests within the learning materials, such as asking LL to lead the prayer before and after activities; in doing so, LL feels noticed by the teacher and his friends. On another occasion, LL was given an additional task of helping a friend to complete an assignment. Second, giving direct instructions. The instructions must be in accordance with the child's developmental period. When LL has a tantrum, he often interferes with his friends by hitting them and seeking attention; in giving direct instruction, the teacher calms LL, asks him why he is bothering his friend or making him cry, then gives attention to what LL feels and chooses actions to resolve the cause of the tantrum. The next process is application: because the teacher has understood LL's characteristics, the teacher can take the necessary actions. However, in the evaluation process the teacher does not discuss the case with the principal and other teachers. Teachers and principals should work together to deal with problems experienced by students. Note also that early intervention may be key to preventing the escalation of this tendency among older children, where tantrums can signal referrals for special education services [17].

V. CONCLUSION

Tantrum is a behavior that is universal and normal in children. The problem is that many parents respond inappropriately by treating it as something distracting and distressing. Responding wrongly to children's tantrums will greatly affect their subsequent development: instead of becoming disciplined and learning to solve the problems they face constructively, the child becomes increasingly destructive and aggressive. There is a connection between the child's emotional elements, such as frustration, dissatisfaction, and anger, and the tantrum. However, social elements appear to be more dominant in shaping tantrum behavior, such as competition with friends or relatives, parenting patterns, or the presence of strangers. The main causes of tantrums in LL are the parents' divorce and the different parenting styles applied by the father's family and the mother's family.
The emotional problem that occurred in the child, in this case the tantrum problem at TK Pembina Malang, was handled by using the method of telling stories with moral messages that make the child the subject of the story. When dealing with children's problems, the school and parents should work together. Things that can be done by the school are discussing these issues with the principal or other teachers to provide treatment appropriate to the child's problems, and giving the parents an understanding of LL's emotional development. Things that can be done by the parents include giving full affection and attention, harmonizing the parenting between the father and the mother, and communicating more actively with the school about the child's development.
"year": 2019,
"sha1": "bb85a673edb0c564887b6246fe00941a238eff62",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/icsie-18.2019.62",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1ea13e9702dd74e5bba03408227f7d5f30e35c7e",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
225291930 | pes2o/s2orc | v3-fos-license | Myocardial ischaemia caused by bilateral coronary ostial stenosis from pseudointimal membranes in a full root freestyle valve: a case report
Abstract Background Coronary artery ostial stenosis is a rare but well-known complication to aortic root replacement. The occurrence of this complication in patients with the Medtronic Freestyle bioprosthesis is poorly described. We report a case of late bilateral coronary ostial stenosis due to pseudointimal membranes within a Medtronic Freestyle bioprosthesis, resulting in acute coronary syndrome. Case summary In 2013, a 43-year-old male patient received a Medtronic Freestyle bioprosthesis as a full aortic root implantation due to endocarditis with root abscess. Preoperative coronary angiography was normal. The patient, who had no previous symptoms of coronary ischaemia, presented with severe chest pain and acute coronary syndrome in 2017. Coronary angiography and electrocardiogram-gated contrast-enhanced cardiac computed tomography showed bilateral coronary ostial stenosis. The patient was successfully treated with coronary artery bypass grafting. Intraoperative inspection revealed pseudointimal membranes covering the coronary ostia. Histology showed fibro-intimal thickening with areas of inflamed granulation tissue. Discussion Bilateral coronary ostial stenosis is a severe, potentially life-threatening condition, and a possible complication to implantation of the Medtronic Freestyle bioprosthesis as a full root. The phenomenon may occur late and should be distinguished from arteriosclerotic coronary artery disease.
Introduction
Aortic root replacement can be achieved by full root implantation of the porcine Medtronic Freestyle stentless bioprosthesis (FB), which includes coronary artery reimplantation. 1 Subsequent coronary ostial stenosis is a known complication; yet its exact morphology, aetiology, and frequency are poorly described. 2,3 Case reports indicate a high mortality rate (33% among reported cases) and additionally a risk of further complications, leaving mortality due to undiagnosed cases unknown. [4][5][6] We present a unique case of late bilateral coronary ostial stenosis in a full root FB occurring as late as 4 years after surgery. This report presents and discusses the intraoperative findings, with the aim to add to the understanding of pathophysiology and treatment of this unusual but potentially fatal complication.
Case presentation
A 43-year-old Pakistani man was admitted to Rigshospitalet, University Hospital of Copenhagen, in January 2013, with complex mechanical mitral valve and native aortic valve endocarditis, complicated by aortic root abscess. Medical history included heart failure after rheumatic fever with mitral stenosis and mild aortic insufficiency, treated with mitral valve replacement and a minor commissurotomy of his fused aortic valve cusps in 2009. The patient was otherwise healthy and had no risk factors for coronary artery disease. Preoperative coronary angiography was normal (Figure 1A and B). The patient was emergently operated with implantation of a biological mitral valve (St. Jude Medical, Epic, size 27 mm) and a full root FB (size 23 mm) in the aortic position. A FB was chosen since the tissue quality and irregularity of the revised root, including the presence of a rigid biological mitral valve prosthesis, did not allow for implantation of the rigid suture-ring of a stented (mechanical or biological) aortic valve prosthesis. Furthermore, a homograft was not available. Reimplantation of the coronary ostia was performed in a standard fashion with the button technique. The angle of the porcine coronary ostia is 90° relative to the centre of the aortic root, whereas the human coronary ostia assume an angle of 120°. The FB was therefore oriented so that the left coronary artery was reimplanted in the left porcine ostium, while the right coronary artery was reimplanted higher and further to the right in the right sinus of the FB, to avoid kinking or stretching of the coronary (Figure 2). The patient was discharged after 6 weeks of targeted intravenous antibiotic therapy, with atrial fibrillation and newly diagnosed diabetes mellitus.

Timeline

2007: A 37-year-old man presented with rheumatic mitral stenosis, mild aortic insufficiency, and pulmonary hypertension. He was referred to surgical treatment in his country of residence, which was not performed.

February 2009: Presented with heart failure, signs of endocarditis, and sepsis with excessive organ failure. Urgent surgery with implantation of a mechanical mitral valve and a minor commissurotomy of the aortic valve cusps.

January 2013: Admitted with mechanical mitral valve and native aortic valve endocarditis complicated by aortic root abscess; emergent surgery with implantation of a biological mitral valve and a full root Freestyle bioprosthesis.
Learning points
• Patients with full root implantation of Medtronic Freestyle bioprosthesis are at risk of bilateral coronary ostial stenosis, due to pseudointimal membranes.
• The pseudointimal membranes can result in myocardial ischaemia even years after surgery.
• Bilateral coronary ostial stenosis in Medtronic Freestyle bioprosthesis can be difficult to treat, with high risk of further complications or death.
In June 2017, the patient presented with chest oppression, pain in the left arm and hand, as well as shortness of breath. Physical examination revealed a discrete heart murmur along with an irregular rhythm. Lung auscultation revealed normal vesicular breath sounds and the extremities were without oedema. Apart from the chest oppression, the general condition was good and further physical examination was normal. Electrocardiogram showed atrial fibrillation and new ST depressions and T-wave inversions in I, II, V5, V6, and ST elevation in aVR (Figure 3). Troponin T samples showed increasing concentrations, reaching a maximum of 196 ng/L (normal range < 14 ng/L). A contrast-enhanced cardiac computed tomography (CT) showed significant left and right coronary ostial stenosis but no sign of atherosclerosis (Figure 4). Preoperative coronary angiography confirmed this finding, showing 90% stenosis in both ostia and otherwise normal coronary arteries (Figure 1C and D, Videos 1 and 2). Preoperative transthoracic echocardiogram showed a left ventricular ejection fraction of 50% and excellent prosthetic aortic and mitral valve function. Due to the risk of non-dilatable ostial tissue and stents protruding into the aortic lumen, the case was not found suitable for percutaneous coronary intervention (PCI). The patient underwent emergent reoperation with the aim of either replacing the FB or performing revascularization. Intraoperatively, after partial opening of the distal anastomosis between the FB and the ascending aorta, we found pseudointimal glass-like membranes, which covered the distal anastomosis between the FB and the ascending aorta as well as both coronary ostia, leaving the latter with high-grade stenoses (Figure 5). The membranous tissue was brittle, did not invade the FB, and could thus be peeled off the FB tissue in strips. Since the pseudointimal membranes extended from the anastomotic sites and into both coronary arteries, and since surgical detachment of the membrane tissue could only be done radically in the proximal parts of the vessels, we considered replacement of the FB and subsequent reimplantation to carry a considerable risk of dissection between the pseudointimal tissue and native tissue, and subsequent coronary occlusion by the remaining membrane tissue. Likewise, PCI could potentially also dislodge the membrane and cause occlusion. Therefore, after collecting a tissue sample of pseudointimal membrane, the patient had coronary artery bypass grafting (CABG) performed with separate venous grafts to the left anterior descending artery and the right coronary artery (RCA). The patient recovered uneventfully, apart from a minor procedure for an epigastric fascial rupture.
Acetylsalicylic acid 75 mg was added to previous medication and metoprolol was reduced to 25 mg. Histological examination of the explanted pseudointimal membranes showed fibro-intimal thickening with areas of inflamed granulation tissue. There was no sign of acute inflammation, calcifications, foreign bodies, or amyloidosis.
In September 2017, a follow-up cardiac CT and transthoracic echocardiography, performed due to complaints of chest pains, showed a small (2.3 mL) pseudoaneurysm arising from partial rupture of the last opened (distal) anastomosis between the FB and the aorta (Figure 6 and Video 3). Due to the patient's numerous previous reoperations and the small size of the pseudoaneurysm, we chose a conservative, non-surgical strategy. Repeat follow-up cardiac CT, performed 3 months later, showed reduction of the pseudoaneurysm to 1.4 mL. During follow-up in November 2019, the patient still suffered from exercise-induced chest pains. Myocardial perfusion scintigraphy from 2018 showed exercise-induced myocardial ischaemia of 6-8%, probably due to previous myocardial infarction. Coronary angiography from 2018 confirmed complete revascularization. Transthoracic echocardiography showed mild-to-moderate stenosis of the FB valve and a well-functioning biological mitral valve.
Discussion
Previously published cases describe acute coronary syndrome due to bilateral coronary ostial stenosis between 2 and 18 months after receiving the FB. 4-7 In contrast to this, our patient presented 4 years after surgery, suggesting another pathophysiology. Furthermore, this is the first case in which the mechanical cause of the bilateral coronary ostial stenosis has been identified in vivo, namely pseudointimal membranes covering the coronary ostia. This complication does not seem to be caused by atherosclerosis or by technical error, since the suture lines of the anastomoses were not involved in the stenosis formation.
The mortality rate of this complication is unknown due to the limited number of previous reports on the subject, and it is therefore not unlikely that patients may have suffered from this complication without diagnosis. In a literature search, we found four case reports describing five patients treated for bilateral coronary stenosis presenting between 2 and 18 months after aortic root replacement with FB. 4-7 In two patients, the primary treatment was CABG: one of these patients died postoperatively in the intensive care unit, 4 and one received PCI 1 year later due to unstable angina pectoris. 5 Two other patients received PCI as the primary treatment: one developed restenosis, 6 while the other recovered without subsequent restenosis. 7 The fifth patient died during coronary angiography. 4 The patient presented in the present report received CABG, which was complicated by pseudoaneurysm formation. Thus, only one out of six patients with FB-induced coronary stenosis went through uncomplicated treatment for coronary ostial stenosis. The exact causes of these complications of reintervention are unknown, but we suspect that dissection of the pseudointimal membranes may occur due to mechanical disturbance such as catheterization during PCI. This may lead to coronary artery obstruction, which could explain the high incidence of restenosis after PCI and one death during coronary angiography. Theoretically, from a revascularization point of view, treatment with CABG would seem a safer solution to avoid mechanical disturbance, but this approach naturally carries the risk of an open-heart reoperation and complications such as restenosis and pseudoaneurysms (Table 1).
The pathophysiological mechanism causing bilateral coronary ostial stenosis in the FB remains unknown, yet several theories have been suggested: (i) local pressure necrosis and subsequent intimal proliferation due to cannulation of the coronary ostia with cardioplegia catheters 8,9 ; (ii) a genetic predisposition for developing ostial coronary stenosis after aortic valve replacement 10 ; (iii) turbulence in the blood flow due to aortic valve replacement invoking intimal thickening and fibrous proliferation of the ostia 8 ; and (iv) an immunological reaction towards the FB causing coronary ostial stenosis. 4,5 Another potential mechanism for late-occurring coronary stenosis due to pseudointimal membranes could be local fibrosis induced by mechanical tension from inappropriate stretch or 'pull' of the coronary arteries during reimplantation. This mechanism would be comparable to other situations in which fibrosis occurs from stretch of the luminal surfaces of the heart and vessels, such as left atrial fibrosis secondary to mitral stenosis or regurgitation, or left ventricular fibroelastosis in aortic stenosis. We sought to elucidate this option by comparing the placement and orientation of the left and right coronary arteries before and after reimplantation in the FB, knowing well that this would only be indicative. As shown in Figure 1, the proximal part of the left coronary artery does not seem to change its spatial orientation significantly after surgery, whereas the RCA does seem to assume a more transverse orientation. As opposed to this, post-surgical 3D volume rendering CT images (Figure 7) show that both coronary arteries depart from the FB at almost right angles. Any surplus stretch, tension, or 'pull' during initial coronary reimplantation must be assumed to translate into more acute angulation between the coronaries and the root prosthesis. Since this does not seem to be the case, we think that this hypothetical mechanism for late occurrence of pseudointimal membranes remains speculative. The present report is not able to establish a causal association to support or to reject the above suggestions; however, the late presentation of the phenomenon in this case suggests that initial mechanical injury is not a likely cause. The endoluminal, non-invasive structure of the membranes, as well as the fibrotic and inflamed tissue, could be compatible with an immunological reaction towards the FB.
Because aortic root replacement, by virtue of the procedure, necessitates coronary reimplantation, previous aortic root replacement is a warning sign in patients with chest pain or other symptoms of ischaemic heart disease. Coronary ostial issues must always be considered, in addition to, and in the absence of, conventional atherosclerotic coronary disease and related risk factors. The clinical presentation of pseudointimal membranes is likely similar to that of other ostial issues, such as technical complications or atherosclerotic plaques, although the symptoms of pseudointimal membranes seem to present later. Whether the membranes form late postoperatively or are the result of gradual progression of an early process is at this point unknown. As we cannot know why or when membrane formation is initiated and how long it takes, we can only assume that the membranes build up gradually and that symptoms occur as the stenosis progresses. From a purely surgical point of view, it may therefore be advocated to reimplant the coronary arteries using large buttons, since any subsequent proliferative membrane formation would take a longer time to develop and to cover the ostia.
Conclusion
Full root Freestyle implantation with reimplantation of the coronary arteries may be complicated by bilateral ostial stenosis caused by pseudointimal membranes covering the coronary ostia. Mechanical disruption of the membranes during coronary angiography may cause dissection of the membranes and thus occlusion and ischaemia. Caution is therefore warranted for patients with reimplanted coronary ostia who present with symptoms of cardiac ischaemia. These patients do not necessarily have risk factors for coronary artery disease, and the complication may occur years after surgery. Further research is warranted to elucidate the pathophysiological mechanism causing bilateral coronary ostial stenosis in patients with FB, and whether this phenomenon may occur in other cases of coronary reimplantation.
Figure 7
Three-dimensional volume rendering from the computed tomography scan in Figure 4, showing the aortic root and coronary arteries from different angles. Note the almost perpendicular departure of the coronary arteries from the aortic root, suggesting no tension as a consequence of the reimplantation.
Lead author biography
Kirstine Bekke studied medicine at the University of Copenhagen, Denmark. She graduated in January 2020. Her main interests are cardiothoracic surgery and research. During her studies, she worked as a research assistant in the Department of Cardiothoracic Surgery, Rigshospitalet, University of Copenhagen, Denmark, and completed two clinical stays abroad, at Universitätsklinikum Münster, Germany, and Mbulu Hospital, Tanzania.
Supplementary material
Supplementary material is available at European Heart Journal -Case Reports online. | 2020-09-10T10:22:26.666Z | 2020-09-04T00:00:00.000 | {
"year": 2020,
"sha1": "d18f9f7655cdfde4ed06504c03c94960fb8fa685",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/ehjcr/ytaa136",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa4f2cb9ba370002a802bb61641be22568d2e3ee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
61153386 | pes2o/s2orc | v3-fos-license | Transcriptomic Analysis of the Brucella melitensis Rev.1 Vaccine Strain in an Acidic Environment: Insights Into Virulence Attenuation
The live attenuated Brucella melitensis Rev.1 (Elberg-originated) vaccine strain is widely used to control the zoonotic infection brucellosis in small ruminants, but the molecular mechanisms underlying the attenuation of this strain have not been fully characterized. Following their uptake by the host cell, Brucella replicate inside a membrane-bound compartment—the Brucella-containing vacuole—whose acidification is essential for the survival of the pathogen. Therefore, identifying the genes that contribute to the survival of Brucella in acidic environments will greatly assist our understanding of its molecular pathogenic mechanisms and of the attenuated virulence of the Rev.1 strain. Here, we conducted a comprehensive comparative transcriptome analysis of the Rev.1 vaccine strain against the virulent reference strain 16M in cultures grown under either normal or acidic conditions. We found 403 genes that respond differently to acidic conditions in the two strains (FDR < 0.05, fold change ≥ 2). These genes are involved in crucial cellular processes, including metabolic, biosynthetic, and transport processes. Among the highly enriched genes that were downregulated in Rev.1 under acidic conditions were acetyl-CoA synthetase, aldehyde dehydrogenase, cell division proteins, a cold-shock protein, GroEL, and VirB3. The downregulation of these genes may explain the attenuated virulence of Rev.1 and provide new insights into the virulence mechanisms of Brucella.
INTRODUCTION
Brucella are facultative intracellular bacteria that are responsible for brucellosis-a zoonotic infection that causes abortions and sterility in ruminants, pigs, dogs, and rodents, and a severely debilitating febrile illness in humans (Ko and Splitter, 2003;von Bargen et al., 2012). One factor that crucially contributes to the virulence of Brucella is their ability to survive within various host cells, where they are inaccessible to the humoral immune response of the host (Delrue et al., 2004). Following uptake by the host cells, Brucella create a unique, highly acidic intracellular niche-the Brucella-containing vacuole (BCV)-in which they reside and multiply (Celli, 2006;Starr et al., 2008). The acidification of the BCV is essential for inducing the major Brucella virulence determinant, the type-IV secretion system (T4SS; Porte et al., 1999;Boschiroli et al., 2002;Köhler et al., 2002;Ke et al., 2015), which is encoded by the virB locus in their chromosomes. As the T4SS system (and, especially, the proteins VirB3-6 and VirB8-11) plays a crucial role in inhibiting the host immune response and in the intracellular survival and replication of Brucella within the host cells (Comerci et al., 2001;den Hartigh et al., 2008;Ke et al., 2015;Smith et al., 2016), the ability of Brucella to survive within the acidic conditions of the BCV is key to their pathogenesis and can be used to study the underlying mechanisms (Roop et al., 2009). Porte et al. (1999) reported that the pH in phagosomes containing live Brucella suis decreases to 4.0 within 1 h following infection, and that this value persists for at least 5 h. Thus, one can assume the existence of a complex, transcription-level regulation network, which responds to specific cellular signals that enable the bacteria to survive in the acidic BCV environment. Indeed, two recent comparative transcriptome analyses employed RNA-seq to determine the changes in Brucella gene expression in cultures containing normal-pH media (namely, pH 7.3) versus those containing low-pH media (pH 4.4), thereby revealing novel molecular mechanisms leading to Brucella pathogenicity (Liu et al., 2015, 2016). Notably, one gene that was shown to play an important role in the resistance of Brucella to low-pH conditions is BMEI1329, which encodes a two-component response regulator gene in the transcriptional regulation pathway of Brucella melitensis (Liu et al., 2016).
Brucella melitensis, which infects goats and sheep mainly around the Mediterranean and the Persian Gulf, is the most pathogenic Brucella species for humans (Poester et al., 2013). Among the brucellosis vaccines used in high-prevalence regions, a widely used one utilizes the live attenuated Rev.1 B. melitensis strain (Avila-Calderón et al., 2013). This strain, originally developed from the virulent B. melitensis 6056 strain by Elberg and Herzberg in the mid-1950s, successfully protects and reduces abortions in small ruminants (Herzberg and Elberg, 1953;Banai, 2002), but it remains infectious for humans and causes abortions in small ruminants vaccinated during the last trimester of gestation. To improve brucellosis vaccines, we need to better understand the mechanisms underlying the virulence attenuation of the Rev.1 vaccine strain (as compared with that of other, pathogenic strains), but these mechanisms are yet unclear.
In a recent study, we sequenced and annotated the whole genome of the original Elberg B. melitensis Rev.1 vaccine strain (passage 101, 1970) and compared it to that of the virulent B. melitensis 16M strain (Salmon-Divon et al., 2018a,b). We found that, as compared with 16M, Rev.1 contains non-synonymous and frameshift mutations in important virulence-related genes-including genes involved in lipid metabolism, stress response, regulation, amino acid metabolism, and cell-wall synthesis-which we assumed are related to the attenuated virulence of this strain. In this study, we aimed to extend these findings to elucidate the intracellular survival mechanisms of the virulent 16M strain versus the vaccine Rev.1 strain. To this end, and in light of the importance of the acidic BCV environment for the virulence of Brucella species, we employed RNA-seq to comprehensively compare the transcriptome of the Rev.1 and 16M strains, each grown under either low-or normal-pH conditions, under the hypothesis that the gene expression patterns of the two strains will differ between the two conditions. Our analysis revealed several candidate genes that may be related to the attenuated virulence of Rev.1 and may, therefore, facilitate the design of improved brucellosis vaccines.
Bacteria Strains and Culture Conditions
Bacterial strains used in the present study were B. melitensis 16M (INRA Brucella Culture Collection)-the commonly used, virulent, wild-type biotype 1 strain-and the original attenuated B. melitensis Rev.1 vaccine strain (passage 101, 1970). For comparative assays, both the Rev.1 and 16M strains were cultured for 72 h on tryptic soy agar (TSA) plates at 37°C under 5% CO2. The low-pH treatment assay was performed as reported previously (Liu et al., 2016). Briefly, bacteria were grown with shaking for 24 h in 10 ml of a tryptic soy broth (TSB; pH 7.3) at 37°C, with an initial density of 1 × 10⁷ CFU/ml. The final bacterial densities were adjusted to 5 × 10⁸ CFU/ml (OD600 ≈ 0.4) before the low-pH treatment, in which 1 ml of the culture was centrifuged at 7000 × g, resuspended in a pH 4.4 TSB culture, and incubated for 4 h at 37°C. In the control group, bacteria were cultured in a pH 7.3 TSB and incubated at 37°C for 4 h. After incubation, cell cultures were collected and centrifuged at 7000 × g, and then the supernatants were removed and an RNA Protect Reagent (Qiagen, Hilden, Germany) was added to the pellets to prevent RNA degradation. Five different biological replicates were used for each strain under each type of condition (total 20 samples). All the work with Brucella strains was performed at a biosafety level 3 laboratory in the Kimron Veterinary Institute, Bet Dagan, Israel.
RNA Isolation
The total RNA of the 16M and Rev.1 strains was isolated using the RNeasy Mini Kit (Qiagen) with a DNase treatment (Qiagen). RNA was eluted from the column using RNase-free water. RNA quality was measured by Bioanalyzer (Agilent, Waldbronn, Germany). Libraries were prepared using the ScriptSeq RNA-Seq Library Preparation Kit (Illumina, Inc., San Diego, CA, United States). Library quantification for pooling was performed by Qubit [dsDNA high sensitivity (HS); Molecular Probes, Inc., Eugene, OR, United States]. The pool was size-selected using a 4% agarose gel. Library quality was measured by TapeStation (HS; Agilent). For RNA-seq, the NextSeq 500 high output kit V2 was used (Illumina, Inc.). Reads were single-end, 75 bp in length (∼10 million reads per sample). Sample denaturation and loading were conducted according to the manufacturer's instructions. Library preparation and RNA-seq were conducted at the Center for Genomic Technologies at the Hebrew University of Jerusalem, Jerusalem, Israel.
Reverse Transcriptase PCR (RT-PCR)
To confirm the RNA-seq results, five upregulated or downregulated genes from the RNA-seq analysis were selected, and RT-qPCR was used to verify the expression changes of these genes in both strains (16M and Rev.1) and conditions (low- and normal-pH). PCR primers were designed using Primer-BLAST (Ye et al., 2012) and are listed in Supplementary Table S1. Complementary DNA (cDNA) was obtained by reverse transcription of 850 ng of total RNA in a final reaction volume of 20 µl, containing 4 µl of qScript Reaction Mix and 1 µl of qScript Reverse Transcriptase (Quantabio, Beverly, MA, United States). Quantitative RT-PCR assays were purchased from Biosearch Technologies (Petaluma, CA, United States) and used according to the manufacturer's instructions. PCR reactions were conducted in a final reaction volume of 10 µl containing 20 ng of cDNA template, 5 µl of PerfeCTa SYBR Green FastMix, ROX (Quantabio), and 1 µl of primer mix. All reactions were run in triplicate and the reference gene 16S rRNA was amplified in a parallel reaction for normalization.
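The study does not state which quantification model was applied to the RT-qPCR data; the snippet below is a minimal sketch assuming the common 2^-ddCt approach, with 16S rRNA as the reference gene (as used in the study) and the normal-pH 16M samples as the calibrator. All Ct values, sample labels, and the choice of calibrator are illustrative assumptions, not data from this work.

```r
# Minimal sketch of relative quantification by the 2^-ddCt method (assumed model).
# 16S rRNA is the reference gene, as in the study; Ct values below are placeholders.
qpcr <- data.frame(
  sample    = c("16M_pH7.3", "16M_pH4.4", "Rev1_pH7.3", "Rev1_pH4.4"),
  ct_target = c(24.1, 22.6, 24.3, 25.9),   # Ct of the gene of interest (e.g., BMEII0027)
  ct_ref    = c(15.2, 15.4, 15.1, 15.3)    # Ct of the 16S rRNA reference
)

# dCt: normalize the target gene to the reference within each sample
qpcr$dCt <- qpcr$ct_target - qpcr$ct_ref

# ddCt: express each sample relative to a chosen calibrator (here, 16M at pH 7.3)
qpcr$ddCt <- qpcr$dCt - qpcr$dCt[qpcr$sample == "16M_pH7.3"]

# Relative expression (fold change versus the calibrator)
qpcr$rel_expr <- 2^(-qpcr$ddCt)
print(qpcr)
```

In practice, the triplicate technical replicates would be averaged per sample before this calculation.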
RNA-Seq Analysis
Following quality control with FastQC 1 , the reads were processed to trim adaptors and low-quality bases by using Trim Galore software 2 . The EDGE-pro v1.3.1 software (Magoc et al., 2013) was used with the default parameters to map reads to the B. melitensis 16M reference genome (GCF_000007125.1), filter out multialigned reads, and estimate the expression levels of each gene. To convert the EDGE-pro output to a count-table format, the "edgeToDeseq.perl" script (provided with the software) was used. Normalization and differential gene expression analysis were conducted with the edgeR and Limma R packages (Smyth, 2005), using as input the count table generated by EDGE-pro. Briefly, genes that did not show more than 1 count per million (CPM) mapped reads in at least three samples were filtered out. Then, a TMM normalization (Robinson and Oshlack, 2010) was applied, followed by voom transformation (Law et al., 2014). Linear models to assess differential gene expression were generated by fitting a model with a coefficient for all factor combinations (strain and low-pH treatment) and then extracting the comparisons of interest, which also included the interaction between strain and treatment effects. The aim of adding the interaction term in this experimental setup was to detect genes that respond differently to pH treatment in Rev.1 compared to 16M; we named these genes "interaction genes." Only genes that demonstrated a fold change ≥ 2 and an FDR ≤ 0.05 were considered significant. Sequencing reads from this study were deposited in the NCBI SRA repository under the accession number PRJNA498082. The significantly upregulated or downregulated genes were subjected to a gene ontology enrichment analysis using ClusterProfiler (Yu et al., 2012) with a cutoff of FDR < 0.05. To perform the gene ontology analysis, we first generated a database annotation package for B. melitensis 16M using the "makeOrgPackage" command from the AnnotationForge R package (Carlson, 2018). As input, we used the GO annotation, downloaded from QuickGO (Binns et al., 2009). Additional comparisons of the biological processes were performed with the Comparative GO web server (Fruzangohar et al., 2013) using all the upregulated and downregulated genes. Multidimensional scaling analysis (MDS) was conducted using the "plotMDS" command within the edgeR package. A heatmap of the 403 genes that respond differently to acidic conditions in the two strains was generated using the "heatmap3" R package (Zhao et al., 2014), employing 1-Pearson correlation as the distance measure and "complete" as the linkage method. Genes were categorized into five clusters based on the generated dendrogram, and genes within each cluster were characterized based on Clusters of Orthologous Groups (COGs) annotations. For this purpose, protein sequences of the clustered genes were searched against a local COG BLAST database, which was downloaded from NCBI using the reverse position-specific BLAST (RPS-BLAST) tool (Marchler-Bauer et al., 2013). The expectation value (E) threshold was set to 0.01 and the BLAST output was parsed using an updated version of the cdd2cog.pl script 3 to obtain the assignment statistics of the COGs. The number of genes within each heatmap cluster belonging to each COG assignment was calculated, and the ontologies with the highest number of genes were indicated in the heatmap.
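As a rough illustration of the workflow described above, the sketch below strings together the filtering, TMM normalization, voom transformation, and interaction model in edgeR/limma, followed by the hierarchical clustering used for the heatmap. The count table, sample layout, and object names are placeholders (the real input comes from EDGE-pro via the edgeToDeseq.perl script); the thresholds follow the text (CPM > 1 in at least three samples, FDR < 0.05, fold change ≥ 2, clusters cut at k = 5).

```r
library(edgeR)
library(limma)

# Placeholder input: a gene-by-sample count table (e.g., converted from the
# EDGE-pro output) and a sample sheet for the 20 libraries (2 strains x 2 pH x 5 reps)
counts  <- read.delim("edgepro_counts.tsv", row.names = 1)
samples <- data.frame(
  strain    = factor(rep(c("M16", "Rev1"), each = 10)),
  treatment = factor(rep(rep(c("pH4.4", "pH7.3"), each = 5), times = 2))
)

# Keep genes with more than 1 count per million in at least three samples,
# then apply TMM normalization
dge  <- DGEList(counts = counts)
keep <- rowSums(cpm(dge) > 1) >= 3
dge  <- calcNormFactors(dge[keep, , keep.lib.sizes = FALSE], method = "TMM")

# Design with main effects and a strain:treatment interaction; the interaction
# coefficient captures genes that respond differently to the acid treatment
# in Rev.1 compared with 16M ("interaction genes")
design <- model.matrix(~ strain * treatment, data = samples)
v   <- voom(dge, design)
fit <- eBayes(lmFit(v, design))

# Interaction genes at FDR < 0.05 and fold change >= 2 (|logFC| >= 1)
res <- topTable(fit, coef = ncol(design), number = Inf, sort.by = "P")
interaction_genes <- subset(res, adj.P.Val < 0.05 & abs(logFC) >= 1)

# Hierarchical clustering of the interaction genes with 1 - Pearson correlation
# as the distance and complete linkage, cut into five clusters as for the heatmap
expr     <- v$E[rownames(interaction_genes), ]
hc       <- hclust(as.dist(1 - cor(t(expr))), method = "complete")
clusters <- cutree(hc, k = 5)
table(clusters)
```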
Cell Infection Test
JEG-3 (ATCC® HTB-36™) human trophoblasts were grown in Eagle's Minimum Essential Medium (EMEM; ATCC® 30-2003™) with 10% fetal bovine serum. For intracellular replication experiments, 2 × 10⁵ cells were seeded in a 24-well plate and cultured overnight at 37°C under 5% CO2. Monolayers of cells were infected with the 16M or Rev.1 strains at a multiplicity of infection (MOI) of 500 (100 µl of bacterial suspension per well). To synchronize the infection, the infected plates were centrifuged at 400 × g for 5 min at room temperature, followed by a 75 min incubation at 37°C in an atmosphere containing 5% CO2. The cells were then washed three times with PBS and re-incubated for another 60 min in a medium containing 50 µg/ml gentamicin to eliminate extracellular bacteria, after which the number of internalized bacteria was measured (time zero of the culture). To assess the intracellular bacterial growth, the concentration of gentamicin was reduced to 5 µg/ml. To monitor the intracellular survival of the bacteria at various times post-infection, the infected cells were lysed for 10 min with 0.1% Triton X-100 in water and serial dilutions of the lysates were plated on TSA plates to enumerate the colony-forming units. Three identical wells were evaluated at each time for each strain. Experiments were repeated three times, independently.
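The serial-dilution plating described above is converted back to bacterial burden in the usual way; the short sketch below shows that back-calculation. The colony count, dilution factor, plated volume, and lysate volume are illustrative assumptions rather than values reported here.

```r
# Back-calculation of intracellular bacteria from serial-dilution plating
# (all numbers below are illustrative placeholders)
cfu_per_ml <- function(colonies, dilution, plated_ml) {
  colonies / (dilution * plated_ml)
}

cfu_ml   <- cfu_per_ml(colonies = 85, dilution = 1e-4, plated_ml = 0.1)  # CFU per ml of lysate
cfu_well <- cfu_ml * 0.5   # total CFU per well, assuming 0.5 ml of lysate per well
cat(sprintf("%.2e CFU/ml of lysate; %.2e CFU/well\n", cfu_ml, cfu_well))
```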
RESULTS
We used RNA-seq to conduct a comprehensive comparative transcriptomic analysis of the gene expression profiles of the Rev.1 (vaccine) and 16M (virulent) B. melitensis strains, grown either under low-pH conditions that mimic the intracellular niche of the BCV (pH 4.4; referred to here as the "low-pH" group) or normal-pH conditions (pH 7.3; "normal-pH" group). The raw sequence outputs for each group are presented in Table 1. An MDS analysis revealed four clusters, in which all bacterial samples within each cluster are closely related, emphasizing the high quality and reproducibility of the data (Figure 1). Below, we first report the genes that are differentially expressed (DE) between the Rev.1 and 16M strains, each grown under normal-pH conditions. Then, for each separate strain, we report the genes that are DE between bacteria grown under normal-pH conditions and those grown under low-pH conditions. Finally, we report possible interactions between the strain and its unique response to acidic conditions.
FIGURE 1 | Similarities between bacterial samples visualized using an MDS analysis. Relative distances between bacterial samples were projected onto a two-dimensional space using the "plotMDS" command implemented in the Limma R package (Smyth, 2005). Black, 16M normal-pH group samples; red, 16M low-pH group samples; orange, Rev.1 normal-pH group samples; blue, Rev.1 low-pH group samples.
Differential Gene Expression Between B. melitensis Rev.1 and 16M Grown Under Normal-pH Conditions
When both Rev.1 and 16M were grown under normal-pH conditions (pH 7.3 ), our comparative transcriptomic analysis revealed 242 genes that were DE (FDR < 0.05, fold change ≥ 2; Supplementary Table S2) between the two strains, of which 172 genes were upregulated and 70 genes were downregulated in Rev.1 versus 16M. The most enriched biological processes associated with the DE genes were transport-related processes (Figure 2), while the most enriched molecular functions were cation transmembrane transporter, oxidoreductase, hydrolase, and ATPase activities (Figure 3). Twelve of the 242 DE genes encode for proteins that were previously reported in a proteomic analysis to be overexpressed in Rev.1 versus 16M (Table 2), including BMEII0704 (which encodes bacterioferritin) and six genes that encode ABC transporters (Forbes and Gros, 2001).
Next, we compared the genes that we found to be DE between the two strains to a list of Brucella virulence genes obtained from the Brucella Bioinformatics Portal (Xiang et al., 2006). Out of the 212 B. melitensis virulence genes that were reported in the Brucella Bioinformatics Portal, our transcriptomic analysis indicated eight genes that were upregulated and eight genes that were downregulated in Rev.1 versus 16M (Table 3), including six genes that encode transporters, of which three are annotated as sugar transporters.
Differential Gene Expression Between B. melitensis 16M Grown Under Low- and Normal-pH Conditions
In total, 773 genes in the 16M strain were DE (FDR < 0.05, fold change ≥ 2) between bacteria grown under normal- and low-pH conditions, of which 374 were upregulated and 399 were downregulated in the low-pH group versus the normal-pH group (Supplementary Table S3). The most enriched biological processes within these DE genes were transport, oxidation-reduction, and nucleoside triphosphate biosynthetic processes (Figure 2), and the most enriched molecular functions were ion and cation transmembrane transporters, oxidoreductase activities, and transition metal ion binding (Figure 3). Recently, Liu et al. (2016) reported 113 genes that are DE (using FDR < 0.05 and fold change ≥ 8) between normal- and low-pH conditions in 16M. Of these genes, 104 were also annotated in our analysis, of which 72 were DE (∼70%; FDR < 0.05, fold change ≥ 2; Supplementary Table S4) between the two conditions, including 24 genes that were upregulated and 48 genes that were downregulated in the low-pH group versus the normal-pH group. Notably, the two-component response regulator BMEI1329, which is involved in the acid resistance of B. melitensis (Liu et al., 2016), was upregulated to a similar extent.
FIGURE 2 | Comparison of biological Gene Ontology (GO) enrichment of the B. melitensis 16M (left column) and Rev.1 (middle column) genes that were differentially expressed when the bacteria were grown under low-pH versus normal-pH conditions. The right column (16M-Rev.1) indicates genes that were differentially expressed in the normal-pH Rev.1 group compared with the normal-pH 16M group. The dot-plot displays two layers of information: significance of enrichment (p-value), which is represented by the color of the dot (highly enriched: red; lowly enriched: blue), and the "gene ratio," which is the degree of overlap between genes in the tested list and the genes associated with a GO term, represented by the size of each dot (large dots indicate a higher degree of overlap).
FIGURE 3 | Comparison of molecular Gene Ontology (GO) enrichment of the B. melitensis 16M (left column) and Rev.1 (middle column) genes that were differentially expressed when the bacteria were grown under low-versus normal-pH conditions. The right column (16M-Rev.1) indicates genes that were differentially expressed in the normal-pH group of Rev.1, as compared with the normal-pH group of 16M. The dot-plot displays two layers of information: significance of enrichment (p-value), which is represented by the color of the dot (highly enriched: red; lowly enriched: blue), and the "gene ratio," which is the degree of overlap between genes in the tested list and the genes associated with a GO term, represented by the size of each dot (large dots indicate a higher degree of overlap).
Differential Gene Expression Between B. melitensis Rev.1 Grown Under Normal- and Low-pH Conditions
In total, 1076 genes in the Rev.1 strain were DE (FDR < 0.05, fold change ≥ 2) between the low-pH and normal-pH groups, of which 519 genes were upregulated and 557 genes were downregulated in the low-pH versus the normal-pH group (Supplementary Table S5). The most enriched biological process within these DE genes was the nucleoside triphosphate biosynthetic process (Figure 2), and the most enriched molecular functions were rRNA binding and a structural constituent of the ribosome (Figure 3).
The Effects of Low-pH Conditions on Gene Expression in B. melitensis 16M and Rev.1: A Comparison
In total, 560 genes that were DE between the low-pH and normal-pH conditions were common to both 16M and Rev.1, while 213 and 516 of the DE genes were unique to either 16M or Rev.1, respectively (Figure 4). A comparison of the genes that uniquely changed their expression between the low-pH and normal-pH groups in Rev.1 to those that uniquely changed their expression between the two conditions in 16M, in relation to their GO categories, revealed that the main biological processes that were highly enriched in Rev.1 were translation, metabolic process, and transport (transmembrane, amino acid, carbohydrate, and protein), whereas multiple biological processes were enriched in 16M, including pathogenesis, cell division, cell cycle, and cell wall organization (Supplementary Table S6).
FIGURE 4 | The numbers of overlapping and unique genes in each strain are indicated in the plot. The diagram was generated using BioVenn (Hulsen et al., 2008).
Interaction Genes: Determination of a Possible Link Between Gene Expression, Environmental Stress, and a Specific Strain
In the analyses described above, we assumed that the two major parameters that could affect gene expression-the specific B. melitensis strain (Rev.1 versus 16M) and the environmental pH (4.4 versus 7.3)-are independent. Therefore, we adopted a naive approach and detected the effect of the acidic environment on gene expression in each strain separately, then compared the final list of DE genes. Our next step was to identify the potential dependency between strain and environmental pH, i.e., we sought to detect genes that respond differently to acidic stress in Rev.1 versus 16M. To this end, we added the "interaction" term to our statistical model, which revealed 403 genes that can be referred to as "interaction genes" (FDR < 0.05, fold change ≥ 2; Supplementary Table S7) and may potentially shed light on the attenuation mechanisms of Rev.1. Annotating these "interaction genes" revealed that the most enriched biological processes were related to metabolic, biosynthetic, and transport processes; the most enriched molecular functions were related to catalytic, hydrolase, nucleotide binding, oxidoreductase, and transporter activities; and the most enriched cellular compartments were related to integral components of the membrane (Supplementary Tables S8-S10). To identify genes that are potentially involved in the attenuation and survival of Rev.1, we searched the interaction genes for those that are associated with bacterial virulence and survival within the host and found four highly downregulated genes involved in metabolism processes and mitigation of acidic and oxidative stresses ( Table 4). A heatmap of the 403 detected interaction genes, categorized into five clusters, is presented in Figure 5, and the list of genes within each cluster is shown in Supplementary Table S11. Finally, we created a GO Network based on 133 interaction genes with FDR < 0.05 and fold change ≥ 2.8 ( Figure 6); as expected, the most enriched GO was related to transport and metabolic processes.
RT-qPCR Validation of the RNA-Seq Results
To ensure technical reproducibility and to validate the data generated from the RNA-seq experiment, we conducted a real-time qPCR analysis of five selected genes (BMEII0027, BMEII0591, BMEI1980, BMEII1116, and BMEI1040), from both strains (16M and Rev.1), grown under either low-or normal-pH conditions. The mRNA levels of all genes obtained by the RT-qPCR were in high accordance with those obtained by our RNA-seq analysis (Supplementary Table S1).
Differential Survival of B. melitensis 16M and Rev.1 Within JEG-3 Human Trophoblastic Cells
As shown by Porte et al. (1999), the acidic environment during the early phase of infection is necessary for the survival and multiplication of Brucella in host cells. Therefore, we investigated the ability of the virulent 16M and the attenuated Rev.1 strains to infect and replicate within the human trophoblastic cell line JEG-3. As expected, the number of bacteria recovered over time (at 4 and 24 h) was higher for 16M than for Rev.1 (Figure 7).
DISCUSSION
To elucidate the molecular mechanisms underlying the attenuation of the B. melitensis Rev.1 vaccine strain, we conducted a comparative transcriptomic analysis between Rev.1 and its virulent counterpart, 16M, each grown under either normal- or low-pH conditions. When the two strains were grown under normal-pH conditions, Rev.1 showed a marked upregulation, as compared with 16M, of various genes that encode ABC transporters-a large and widespread family of proteins (Garmory and Titball, 2004). ABC transporters, which export solutes, antibiotics, and extracellular toxins, play a role in various cellular processes, such as translational regulation and DNA repair (Garmory and Titball, 2004), and the number of ABC systems appears to depend upon the bacterial adaptation to its environment (Garmory and Titball, 2004;Tanaka et al., 2018). The upregulation of ABC transporter-related genes in the attenuated Rev.1 strain should be considered in light of two other findings. First, Rev.1 showed an upregulated expression of BMEII0704, which encodes for bacterioferritin (Table 2). Notably, iron plays an important role in the survival of pathogens within host cells (Collins, 2003), and during infection, macrophages actively export iron from the phagosome (which is the replicative niche of Brucella; Forbes and Gros, 2001); it was previously suggested that Rev.1 may have lost the ability to regulate bacterioferritin synthesis and degradation (Eschenbrenner et al., 2002). Second, Rev.1 showed a downregulated expression of BMEI1759 (Table 3), which encodes for the vitamin B12-dependent methyltransferase, MetH (Lestrate et al., 2000). As MetH is involved in methionine biosynthesis, its downregulation in Rev.1 may have impaired amino acid metabolism in this strain. Taken together, these findings suggest that an improper regulation of essential metabolic pathways, including iron and amino acid metabolism, may have affected the ABC transporter activity, leading to down/upregulation of specific transporters to compensate for the gain/loss of critical metabolites.
To further understand the molecular mechanisms underlying Rev.1 attenuation, we examined the "interaction genes," i.e., the main genes that are influenced differently by the acidic treatment in Rev.1 and in 16M. These interaction genes are probably the key genes involved in the attenuation of Rev.1 and are, therefore, of particular interest. Among these interaction genes, we found that acetyl-coenzyme A (acetyl-CoA) synthetase was significantly downregulated in the low-pH group of Rev.1, as compared with the low-pH group of 16M. Acetyl-CoA is the molecule by which glycolytic pyruvate enters the tricarboxylic acid cycle, it is a crucial precursor of lipid synthesis, and it acts as the sole donor of acetyl groups for acetylation (Pietrocola et al., 2015). Previous studies revealed a clear association between the metabolism of Brucella and its persistence in its hosts (Hong et al., 2000;Lestrate et al., 2000;Barbier et al., 2011). The significant downregulation of acetyl-CoA synthetase in Rev.1 may decrease acetyl-CoA production, thereby affecting crucial metabolic processes that may potentially have a major contribution to bacterial attenuation.
FIGURE 5 | Heatmap representing the expression profiles of the 403 interaction genes. Rows represent genes and columns represent bacterial samples. Red and blue pixels indicate upregulated and downregulated genes (Rev.1 versus 16M), respectively. The hierarchical clustering was generated using 1-Pearson correlation as the distance measure and "complete" as the linkage method. Genes were categorized into five clusters based on the generated dendrogram, and genes within each cluster were characterized based on Clusters of Orthologous Groups (COGs) annotations.
Pathogenic bacteria must deal with oxidative stress emanating from the host immune response during invasion and persistent infection (Cabiscol et al., 2000;Singh et al., 2013). We found several key interaction genes that were significantly downregulated in Rev.1, as compared with 16M, and which encode proteins involved in oxidative stress: aldehyde dehydrogenase (ALDH), the DNA starvation/stationary phase protection protein Dps, and a cold-shock protein (CSP). Prokaryotic and eukaryotic ALDHs metabolize endogenous and exogenous aldehydes to mitigate oxidative stress (Singh et al., 2013), and an upregulation of bacterial ALDH was shown to occur following exposure to environmental or chemical stressors. It was suggested that such an upregulation is a critical element in the response of bacteria to oxidative stress (Singh et al., 2013). Dps was shown to protect Escherichia coli from oxidative stress, UV and gamma irradiation, iron and copper toxicity, thermal stress, and acid and base shocks (Martinez and Kolter, 1997;Nair and Finkel, 2004;Karas et al., 2015). CSP-A activity was found to be associated with the ability of B. melitensis to resist acidic and H2O2 stresses, especially during the mid-log phase (Wang et al., 2014), and it was suggested that the Brucella CSP highly contributes to its virulence, most likely by facilitating its adaptation to the harsh environmental circumstances within the host (Wang et al., 2014). Taken together, it is possible that the downregulation of these key interaction genes in Rev.1 results in its inability to cope with oxidative stress in the host, thus contributing to bacterial attenuation.
FIGURE 6 | Gene Ontology (GO) network of genes responding differently to low-pH conditions in the Rev.1 versus 16M strains. Of the 403 interaction genes (FDR < 0.05, fold change ≥ 2) expressed differently in Rev.1 versus 16M, the 133 most significant genes (FDR < 0.05, fold change ≥ 2.8) were selected and their GO network was generated using the Comparative GO tool (Fruzangohar et al., 2013). The node sizes represent the level of GO enrichment.
As compared with Rev.1, 16M demonstrated a downregulation (FDR < 0.05, fold change ≥ 2) of interaction genes that encode for four proteins of the SUF system and the heat-shock protein IbpA, which were shown to be involved in the resistance to heat and oxidative stress (Kitagawa et al., 2000;Angelini et al., 2008;Outten, 2015). In E. coli, the SUF pathway plays a major role in preserving Fe-S cluster biosynthesis under oxidative stress conditions (Angelini et al., 2008;Outten, 2015). The small heat shock proteins (sHsps) IbpA and IbpB were previously suggested to be involved in the resistances to heat and oxidative stress, as overexpression of ibpA and ibpB in E. coli increased the resistance to heat and superoxide stress (Kitagawa et al., 2000). Our transcriptomic analysis revealed enhanced enrichment within the molecular function of oxidoreductase activity in the low-pH group of 16M, but not in the low-pH group of Rev.1. As oxidoreductase protects bacteria against oxidative stress (Lumppio et al., 2001), we assume that the enhanced expression of the SUF system and IbpA by Rev.1, as compared with 16M, compensates for its impaired oxidoreductase activity, thereby enabling survival within the harsh oxidative intracellular environment of the host.
As compared with the low-pH group of 16M, the low-pH group of Rev.1 showed a downregulation (FDR < 0.05, fold change ∼ 2) of five key interaction genes (BME_RS02910, BMEI0584, BMEI0583, BME_RS13825, and BMEI1943) that encode FtsZ, FtsA, FtsQ, FtsK, and DnaA, respectively, all of which participate in critical stages of the cell cycle (Margolin, 2005;Sherratt et al., 2010;van den Ent et al., 2008;Bell and Kaguni, 2013;Loose and Mitchison, 2014). This finding may indicate that, in the acidic conditions of the BCV, the replication capabilities of Rev.1 are reduced, which lowers its intracellular survival compared with 16M. The significantly lower intracellular survival of Rev.1 in trophoblasts (Figure 7) supports this conclusion.
FIGURE 7 | Bacterial burden, measured as colony-forming units (CFUs) over time. JEG-3 trophoblasts were infected with bacteria at an MOI of 500, and CFUs were determined 0, 4, and 24 h thereafter. (A) Boxplots representing the distribution of 16M (blue) and Rev.1 (yellow) bacterial count replicates, measured 0, 4, and 24 h following infection. P-values were calculated using a two-sample Wilcoxon test. (B) Scatter plot indicating the change in bacterial counts over time (means ± SD) following infection. Three replicate wells were evaluated at each time for each strain.
Under low-pH conditions, Rev.1 also showed a downregulation of the interaction gene BMEII1048 that encodes the molecular chaperone GroEL (FDR < 0.05, fold change ∼ 2). Molecular chaperones facilitate protein folding, preventing protein denaturation, and are involved in various cellular processes, including DNA replication, UV mutagenesis, bacterial growth, and RNA transcription (Maleki et al., 2016). Under acidic conditions, partially unfolded proteins may emerge (Mendoza et al., 2017) and molecular chaperones may stabilize them to prevent their acid-induced aggregation. Indeed, the Helicobacter pylori GroEL homolog, HSP60, was shown to be induced upon acid stress (Mendoza et al., 2017). Thus, the downregulation of the interaction gene encoding GroEL in Rev.1 under low-pH conditions may lead to the accumulation of partially unfolded and denatured proteins, leading to bacterial attenuation.
Finally, the interaction gene BMEII0027, which encodes for the T4SS protein VirB3, was upregulated (FDR < 0.05, fold change ∼ 2) in the low-pH group of 16M, as compared with the low-pH group of Rev.1. This may have a harmful effect on the survival of Rev.1 within host cells, as VirB3 was shown to be essential for Brucella virulence because, together with VirB4, VirB6, VirB8, and the N-terminus of VirB10, it comprises the inner membrane complex of the T4SS apparatus (Ke et al., 2015).
Notably, the interaction gene BMEII1116, which was upregulated in Rev.1 and encodes the HTH-type quorum-sensing-dependent transcriptional regulator VjbR, was shown to contribute to the virulence and survival of Brucella by regulating the expression of various virulence factors (Delrue et al., 2005;Weeks et al., 2010). It is possible that this interaction gene somewhat compensates for the lower expression of highly important virulence genes, such as VirB3, in Rev.1.
CONCLUSION
Through a comparative transcriptomic analysis, we revealed DE key genes involved in various crucial pathways, which are either upregulated or downregulated under acidic conditions in Rev.1, as compared with 16M. We suggest that these genes-and, especially, those mentioned in Table 4-are involved in the molecular mechanisms underlying Rev.1 attenuation, although further characterization through mutation and knockout experiments is required to conclusively determine the role of these genes in acid resistance and virulence attenuation of the B. melitensis Rev.1 strain.
AUTHOR CONTRIBUTIONS
MS-D and DK conceived and coordinated the study. DK conducted the bacteriology work, acidic experiments, acquired the samples, and extracted RNA. MS-D analyzed the data. TZ performed the real-time PCR validation experiments. All authors interpreted the data, drafted the manuscript, and approved the content for publication. | 2019-02-14T14:03:29.883Z | 2019-02-14T00:00:00.000 | {
"year": 2019,
"sha1": "ff938deb76e7d9a32f52f13db35373d227ca7fc2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.00250/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff938deb76e7d9a32f52f13db35373d227ca7fc2",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
236353205 | pes2o/s2orc | v3-fos-license | Ethephon as a potential tool to manage alternate bearing of ‘Fuji’ apple trees
INTRODUCTION
Apple is the second most produced temperate fruit tree in Brazil (Pasa et al., 2016), grown mainly in the states of Rio Grande do Sul (46.3%) and Santa Catarina (50.1%) (IBGE, 2016). While 'Gala' is the main cultivar planted in most growing areas, 'Fuji' represents 56% of the apples grown in the region of São Joaquim. This cultivar is described as susceptible to alternate bearing (Atay et al., 2013). Alternate bearing may be affected by cultivar (Monselise & Goldschmidt, 1982) and is characterized by large yields of small-sized fruit in "on" years and low yields, sometimes even no fruit, in "off" years (Guitton et al., 2011).
Alternate bearing is affected by several factors, among which the most important seems to be the influence of plant hormones, which either inhibit or induce flower bud initiation (Jonkers, 1979). The inhibitory effect of seed-derived gibberellins on flower bud initiation in apple is widely known. However, recent studies suggest that gibberellins influence, but do not control, this complex process (Schmidt et al., 2009). Thus, spraying gibberellins as a means to reduce flower bud initiation and achieve more regular crops may not be the best option. Managing the trees to initiate more flowers, instead of reducing flowering, seems more reasonable, since the high yields of "on" years would be maintained and the yields of "off" years increased.
An adequate chemical thinning program has the potential to reduce the biennial behavior of apples. However, cultivars with a strong natural tendency for alternate bearing may show an alternating habit even after a successful reduction of crop load by chemical thinning (McArtney et al., 2013). In this case, additional strategies might be required to manage alternate bearing. The application of ethephon coinciding with flower bud initiation has shown promising results in increasing return bloom and yield in pome fruit trees. Several studies have reported the efficiency of ethephon in increasing return bloom in apples (Duyvelshoff & Cline, 2013;McArtney et al., 2013) and pear (Einhorn et al., 2014), with varying rates, numbers of applications, times of application, and cultivars. Such results are not available under Brazilian conditions for 'Fuji', which, as mentioned before, is very susceptible to alternate bearing. Ethephon is also an alternative for chemical thinning in apple (Petri et al., 2018) and peach (Giovanaz et al., 2016), but its effect is highly dependent on climatic conditions before and after spraying.
Given the limited availability of information regarding the management of alternate bearing of 'Fuji' apple trees in Brazil, and the potential positive economic impact for apple growers of reducing its effects, the objective of this study was to investigate the effects of different rates of ethephon on return bloom, yield, and fruit quality attributes of 'Fuji' apple trees.
MATERIAL AND METHODS
The study was performed at the Experimental Station of São Joaquim/EPAGRI, located in São Joaquim, Santa Catarina State, Brazil (28º17'39''S, 49º55'56''W, at 1,415 m of altitude), during the growing seasons of 2014/2015 and 2015/2016. According to the Köppen-Geiger classification, the climate of the region is mesothermal humid (Cfb), i.e., a constantly humid temperate climate without a dry season and with a cool summer (Benez, 2005), and the average chill accumulation (temperatures below 7.2 ºC) is 900 hours. Climatic conditions during the experiment were recorded and are shown in Figure 1. According to the Brazilian soil classification system (Santos et al., 2013), the soil of the experimental field is a Cambissolo Húmico (Inceptisol). Eighteen-year-old 'Fuji Standard' apple trees grafted on M.9, trained to a central-leader system, were used as plant material, and two adjacent rows of 'Gala' were planted as pollinizers. Row spacing was 4 m and within-row spacing, i.e., between trees in the row, was 1.0 m (2,500 trees ha-1). Orchard management was performed according to the recommendations of the apple production system (Epagri, 2006). The experiment was arranged in a randomized complete block design with four replicates. Each replication consisted of three trees, but only the central one was used for evaluation, leaving one at each end as a border.
Treatments consisted of ethephon sprayed at different rates (300 mg L-1, 400 mg L-1, and 500 mg L-1) and an unsprayed control. Trees were sprayed in the "on" year (2014/2015 growing season) 30 days after full bloom. The date of full bloom was 09/25/2014. The source of ethephon was the commercial product Ethrel® (24% a.i., w:v; Bayer CropScience). A nonionic surfactant (Break-Thru, BASF Corp.) was added to all solutions at a rate of 0.05% (v:v). Trees were sprayed to runoff with a motorized handgun backpack sprayer (Stihl SR 450), with a flow rate of 2.64 L min-1 (spraying volume of approximately 1,000 L ha-1).
At commercial maturity, fruit were harvested according to starch-iodine index (4-5), flesh firmness (70-90 N), and soluble solids (11-12 °Brix), on 03/17/2015 and 03/13/2016. The total number of fruit per tree was counted and the fruit weighed (kg) with a digital scale (UR 1000 Light, URANO). From these data, yield per tree (kg), fruit weight (g), and estimated yield (ton ha-1) were calculated. Since ethephon may have a thinning effect on apples, in the season of treatment (2014/2015) the number of fruit thinned per tree was recorded approximately 45 DAFB. Return bloom was assessed in the year after the year of treatment, at full bloom (09/24/2015), from representative scaffolds (at least 100 flower clusters) of each tree. The total number of spurs and 1-year-old shoots (with and without flower clusters) was counted, and return bloom was expressed as the percentage of flowering spurs and 1-year-old shoots. In both growing seasons, samples of 15 fruit per replicate (tree) were taken at harvest for flesh firmness, soluble solids content, and starch-iodine index determination, according to the methodology described by Pasa et al. (2018).
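The derived yield variables follow directly from the per-tree counts and weights and the planting density of 2,500 trees ha-1; the sketch below illustrates the arithmetic with made-up values for a single tree.

```r
# Derived yield variables from per-tree harvest data (illustrative values)
fruit_count  <- 320      # fruit counted on the tree
yield_kg     <- 48.0     # total fruit weight per tree (kg)
trees_per_ha <- 2500     # planting density used in the trial

fruit_weight_g    <- yield_kg * 1000 / fruit_count   # mean fruit weight (g)
estimated_yield_t <- yield_kg * trees_per_ha / 1000  # estimated yield (ton ha-1)
c(fruit_weight_g = fruit_weight_g, estimated_yield_t = estimated_yield_t)
```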
The R software (R Core Team, 2014) was used to perform the statistical analyses. Data expressed as percentages were transformed by arcsin [square root (n + 1)] and count data by square root (n + 1). Data were analyzed for statistical significance by means of the F test and, when significant, regression analysis was performed.
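A minimal sketch of the statistical workflow just described, assuming a randomized complete block ANOVA (F test) on transformed data followed by polynomial regression on the ethephon rate when the treatment effect is significant. The data frame and its values are invented for illustration, and the percentage transformation is implemented here as the usual arcsine of the square root of the proportion, since the bracketed expression in the text is ambiguous as written.

```r
# Illustrative data: 4 blocks x 4 ethephon rates (mg/L), with percentage of
# flowering spurs and fruit count per tree (all values made up)
d <- data.frame(
  block       = factor(rep(1:4, times = 4)),
  rate        = rep(c(0, 300, 400, 500), each = 4),
  spurs_pct   = c(18, 22, 15, 20, 55, 60, 58, 52, 57, 61, 59, 63, 60, 64, 58, 62),
  fruit_count = c(90, 85, 80, 95, 150, 160, 145, 155, 158, 150, 162, 149, 155, 165, 160, 152)
)

# Transformations: arcsine square-root for percentages (as proportions),
# square root of (n + 1) for counts
d$spurs_tr <- asin(sqrt(d$spurs_pct / 100))
d$fruit_tr <- sqrt(d$fruit_count + 1)

# F test for the treatment effect in a randomized complete block design
fit_aov <- aov(spurs_tr ~ factor(rate) + block, data = d)
summary(fit_aov)

# When the F test is significant, regress the response on the ethephon rate
# (linear plus quadratic term, a common choice for rate-response trials)
fit_reg <- lm(spurs_tr ~ rate + I(rate^2), data = d)
summary(fit_reg)
```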
RESULTS AND DISCUSSION
Ethephon did not significantly affect yield components in the season of application (Table 1).
In the season following application, the percentage of flowering spurs was significantly increased by ethephon, regardless of the rate (Figure 2A). On the other hand, the percentage of flowering 1-year-old shoots was not affected (Figure 2B). A similar effect was observed for yield (Figure 2C), estimated yield (Figure 2D), and number of fruit per tree (Figure 2E).
Despite the greater crop load of ethephon-treated trees in the year following application, no differences in fruit size were observed (Figure 2F). Regardless of the treatment, fruit size was considerably small in this season, likely an effect of the late thinning performed in this block (~70 DAFB) owing to operational issues; the late thinning was, however, applied equally to all treatments. Fruit quality attributes did not differ among treatments in either growing season (Table 2).
Our results show that exogenous application of ethephon increases return bloom and yield of 'Fuji' apple trees. A similar effect was observed in other biennial apple cultivars, such as 'Golden Delicious' and 'York Imperial' (McArtney et al., 2013), and 'Redchief Delicious' (Bukovac et al., 2006).
McArtney et al. (2013) observed increased return bloom of spurs (43.3%) of 'Golden Delicious' in response to ethephon (560 mg L-1) sprayed 50 + 80 DAFB, while control trees had only 10.9% of spurs flowering. These authors also observed that the transition to floral development started 64 DAFB but peaked around 85 DAFB. When the transition is visible under the microscope (based on the doming of the axillary meristem), induction has already occurred.
Based on this information, one might think that substances to induce flowering should be sprayed around 60 DAFB. However, a study in apple at the genetic level showed that flower bud induction seems to occur around 30 DAFB, before the first visible morphological changes in the apical meristem occur (Hättasch et al., 2008). Indeed, our results show that buds are responsive to exogenous ethephon application at this time, also suggesting that flower bud induction occurs early in the season.
Regardless of the rate, the estimated yield of ethephon-treated trees was significantly greater than that of the control in the season following the treatment (the "off" year). Einhorn et al. (2014) observed greater return bloom and yield of 'D'Anjou' pears treated with ethephon at 300 mg L-1 87 DAFB. Bukovac et al. (2006) observed similar results with 'Redchief Delicious' apple in response to ethephon at 200 mg L-1, sprayed 21 + 42 DAFB and 21 + 42 + 63 DAFB. These authors also observed that, at the end of the six-year study, the mean yield per tree of ethephon-treated trees was similar to that of control trees, but ethephon reduced the variation in yield between "on" and "off" years. Even though mean yield was similar, achieving regular crops over the years has a dramatic impact on tree physiology and orchard management. For example, an orchard of 'Fuji' apples in an "off" year is very difficult to manage. Firstly, this cultivar is vigorous, so a low crop load means greater vegetative growth and consequently more summer pruning (or other strategies to control vegetative growth) is needed in order to allow good light penetration for flower bud formation and fruit color. Secondly, fruit of these trees may show a greater incidence of post-harvest physiological disorders such as bitter pit, which is more severe in vigorous trees (Jemriae et al., 2016). Ripening in climacteric fruit is associated with a large increase in ethylene production and can be induced by exogenous ethylene (Silva et al., 2012; Hiwasa et al., 2003). Since apples are climacteric, we might expect ripening changes following the application of ethephon. However, we did not observe such an effect in fruit of trees sprayed with ethephon, even in the season of application. This is probably because ethephon was sprayed early in the season (30 DAFB), when ripening is little affected by exogenous ethylene, since fruit are at early stages of development.
CONCLUSIONS
Collectively, our results show that ethephon, sprayed approximately 30 days after full bloom at rates varying from 300 to 500 mg L-1, reduces the alternate bearing behavior of 'Fuji Standard' apple trees by increasing return bloom and yield, without negatively affecting yield and fruit quality in the year of application. We strongly suggest that first-time applications be performed in small areas, since the response to plant growth regulators such as ethephon may be affected by climatic conditions, tree age, nutrition and sanitary conditions, among other factors, which vary among orchards.
The results found in the present study are new and promising as a means to reduce the negative effects of alternate bearing in 'Fuji' apple trees and to promote regular yields of high-quality fruit. While our results are promising, we encourage future studies to investigate the effects of ethephon and other compounds in other growing regions, as well as to test additional rates, application timings and influence on post-harvest behavior, among other potential implications.
Figure 1: Climatic conditions of the experimental field, from 2014 to 2016.
Figure 2: Return bloom and yield components of 'Fuji Standard' apple trees in the growing season of 2015/2016, in response to ethephon sprayed at different rates the previous season. Vertical bars represent SE (n = 4).
Table 1: Yield components of 'Fuji Standard' apple trees in response to ethephon sprayed 30 days after full bloom in the growing season of 2014/2015
Table 2: The effect of ethephon on fruit quality attributes of 'Fuji Standard' apples in the season of application (2014/2015) and the season following the application (2015/2016) | 2021-07-27T00:04:40.000Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "8c5d340b47b8d62b0ec8306a098ca1ee8a4d5f26",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rceres/a/zqr3XVTKyHfYb4z7SZPQcsQ/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9b42e5849a18c13175f0993c2a58e25c13b4a9bf",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
53084783 | pes2o/s2orc | v3-fos-license | Magnetohydrodynamic fluid flow and heat transfer over a shrinking sheet under the influence of thermal slip
We study the heat transfer of a Magneto Hydrodynamic (MHD) boundary layer flow of a Newtonian fluid over a pervious shrinking sheet under the influence of thermal slip. The flow allows electric current to pass through. The governing PDEs are transformed into self-similar ODEs via Lie group analysis. We study the variations in the dimensionless quantities like velocity and temperature of the flow in terms of the different parameters involved in the problem. We discuss the thickness of the boundary layers under the influence of various parameters involved in the flow. Numerical simulations are carried out to explain and support the results obtained.
Introduction
Due to a variety of applications in manufacturing industries and technological processes, such as wire drawing, the production of paper and glass-fiber, and metal and polymer processing, the flow of incompressible viscous fluids over stretching sheets has attracted considerable attention from researchers. An exact similarity solution in closed analytical form was later found by Crane [1], who considered the streaming flow of a Newtonian fluid. McLeod and Rajagopal [2] later proved the uniqueness of the solution established by Crane. For the same flow, Gupta and Gupta studied the transfer of heat and mass over a stretching surface [3]. The idea of a flow due to a stretching surface was extended to three dimensions by Wang [4]. The investigation of magnetohydrodynamic (MHD) flow is very interesting due to the promising effects of a magnetic field on the boundary layer. Pavlov took a uniform magnetic field into account and studied the MHD flow over a stretching surface to obtain exact similarity solutions [5]. Andersson investigated the MHD flow of an incompressible viscous fluid over a stretching sheet [6]. The MHD flow over a stretching permeable surface without and with blowing was studied by [7] and [8], respectively, where important contributions were made.
Because of its increasing applications to various engineering problems, the flow of an incompressible fluid due to a shrinking sheet has attracted much attention [9]. An analytical study of the flow of an MHD fluid past a shrinking surface was reported in [10]. In the presence of suction, Kandasamy and Khamis studied the effects of mass and heat transfer on MHD boundary layer flow over a shrinking sheet [11]. Fang and Zhang studied MHD flow over a shrinking surface and obtained an analytical solution [12]. In a later publication, they reported on the thermal boundary layer flow and calculated an exact analytic solution [13]. For MHD flow over a shrinking surface, Krishnendu studied the effects of a heat source subject to mass suction [14].
Lie group analysis is a technique applied to nonlinear differential equations to obtain similarity reductions. This particular analysis reduces the number of variables in the governing PDEs; consequently, the system of PDEs is converted into a self-similar system of ODEs. In this report, we consider a shrinking sheet and study the effects of mass and heat transfer under the influence of thermal slip. We use the Lie group technique to determine the self-similar solution.
The above-mentioned approach has been exploited for analyzing convection processes in many flow configurations arising in different branches of engineering and other applied sciences [15]. Ullah and Zaman used the Lie group approach to obtain similarity transformations for the flow of a tangent hyperbolic fluid over a stretching sheet subject to slip conditions [16]. Avramenko et al. used Lie group analysis to determine the symmetry characteristics of turbulent boundary layer flows [17]. The authors of [18] theoretically studied a descending bioconvection plume in a deep chamber filled with a fluid-saturated porous medium by exercising the Lie group method. The Lie group technique has been exploited to investigate mixed convective flow taking mass transfer into account [19]. Hamad et al. used Lie group analysis to explore the impact of thermal radiation and convective surface boundary conditions on a boundary layer flow [20]. Considering an inclined plate, Aziz et al. explored heat generation and a variable reactive index in a magnetohydrodynamic flow by using scaling group transformations [21]. A free-convection nanofluid flow across a chemically reactive horizontal plate in a permeable medium was investigated by Rashidi et al. [27]; they used the Lie group analysis technique to determine the solution.
In this report, we study the impacts of thermal source on magnetohydrodynamics (MHD) flow and the transfer of heat across a shrinking sheet taking thermal slip into account. We apply Lie group analysis to transform the governing partial differential equations into self-similar ordinary differential equations.
The report is organized as follows. In Section 2, we present the mathematical formulation. We use Lie group analysis in Section 3 to obtain similarity transformations for our mathematical formulation. Section 4 is devoted to a detailed study of the variations in the dimensionless velocity and temperature profiles for different values of the Hartmann number, the mass suction parameter and the slip parameter. Numerical simulations are presented to show geometrically the impacts of the different parameters on the velocity and temperature profiles. Finally, we conclude our work in Section 5.
Model
In this work, we consider a magnetohydrodynamic flow of a boundary layer Newtonian fluid that allows electric current to pass through in two dimensions.
The fluid is taken over a permeable shrinking sheet with an internal thermal source (heat generation or absorption). The heat transfer is investigated over a sheet coinciding with y = 0, and the flow is assumed to be confined to the region where y is strictly positive. The sheet lies along the horizontal axis, and the vertical axis is perpendicular to it.
In the presence of a uniform transverse magnetic field, the fundamental equations of continuity, momentum and energy for the steady two-dimensional flow, with the usual notations, are given below [14] (a reconstructed form is sketched after this paragraph). In these equations the horizontal and vertical velocity components, the kinematic viscosity and density of the fluid, the fluid's electrical conductivity and the applied magnetic field, the fluid temperature and the free-stream temperature, and the volumetric rate of heat generation, the thermal conductivity and the specific heat all appear with their usual notations.
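For a steady two-dimensional MHD boundary-layer flow with a volumetric heat source of the kind treated in [14], the continuity, momentum and energy equations are conventionally written in the following form, where u and v are the velocity components along and normal to the sheet, ν the kinematic viscosity, ρ the density, σ the electrical conductivity, B0 the applied magnetic field, T the temperature, T∞ the free-stream temperature, Q0 the volumetric rate of heat generation, κ the thermal conductivity and cp the specific heat; read this as a generic sketch in standard notation rather than as the authors' exact equations (1)-(3):

\[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0, \]
\[ u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = \nu\,\frac{\partial^{2} u}{\partial y^{2}} - \frac{\sigma B_{0}^{2}}{\rho}\,u, \]
\[ u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} = \frac{\kappa}{\rho c_{p}}\,\frac{\partial^{2} T}{\partial y^{2}} + \frac{Q_{0}}{\rho c_{p}}\,(T - T_{\infty}). \]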
Denoting the strictly positive shrinking constant and the temperature of the sheet as below, the boundary conditions in terms of the velocity components and the temperature may be expressed accordingly (a reconstructed form is sketched after this paragraph). Here, the wall mass suction of the sheet is prescribed and strictly positive, and the thermal slip factor is measured in (length)−1.
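In the same spirit, boundary conditions of the type described here are commonly written as below, where c > 0 is the shrinking constant, Tw the sheet temperature, vw > 0 the prescribed wall mass suction and D1(x) the thermal slip factor; these symbol names and the sign convention chosen for the suction velocity are assumptions of this sketch rather than notation confirmed by the source:

\[ u = -c\,x, \qquad v = -v_{w}, \qquad T = T_{w} + D_{1}(x)\,\frac{\partial T}{\partial y} \quad \text{at } y = 0, \]
\[ u \to 0, \qquad T \to T_{\infty} \quad \text{as } y \to \infty. \]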
Now we focus our attention on the non-dimensionalization of the system under consideration. This is done by introducing dimensionless quantities. We then plug the resulting scalings, given in (6), into the system given by Eqs. (1), (2) and (3) and, for the sake of simplicity, ignore the over bars. After a simple algebraic manipulation, the expressions for continuity, momentum and energy can be written in dimensionless form, and the scalings defined in Eq. (6) transform the boundary conditions (4) and (5) correspondingly. Now, to reduce the number of parameters and equations, the stream function ψ is chosen in such a way that it satisfies u = ∂ψ/∂y and v = −∂ψ/∂x.
We observe that, with this choice of stream function and the continuity of its second-order partial derivatives, the continuity equation (7) is satisfied identically; consequently, Eq. (7) reduces to an identity.
Our next step is to find the invariant solutions of the system described by (12) and (13) under a particular one-parameter continuous group. This is equivalent to determining the similarity solutions of the system given by (12) and (13). In this regard, we look for a group of transformations from the set of one-parameter scaling transformations, which is a simplified form of Lie group analysis.
Analysis
Our goal in this section is to apply the Lie group technique to determine similarity transformations. Consequently, the system of nonlinear PDEs will be converted into a self-similar system of ODEs. To do so, we introduce the scaling group of transformations (16) (see, for instance, [22,23]). The group parameter of Γ appears in (16), and the four exponents, for i = 1, 2, 3, 4, are arbitrary real numbers to be calculated. The introduction of the above scalings (16) transforms the independent and dependent variables (x, y, ψ, θ) into (x*, y*, ψ*, θ*).
Expanding the one-parameter group of transformations (22) by Taylor's series, treating the group parameter as very small and retaining only the leading-order terms in it, the transformations Γ are reduced to an elementary form (23). A simple algebraic manipulation of Eq. (23) leads from the one-parameter group of transformations (22) to the characteristic equation (24). Applying algebraic manipulation to Eq. (24), one may deduce the similarity transformations (25), where η is the similarity variable and the remaining quantities are the dependent variables. Our next task is to obtain the similarity equations. Introducing the transformations (25) into the governing equations (12) and (13), we arrive at the system of ordinary differential equations (26) and (27), where the primes denote derivatives with respect to the similarity variable η. Finally, expressing the boundary conditions (14) and (15) in terms of the similarity variables leads to the conditions (28).
Technique of solution
To find the solution of the system consisting of the nonlinear ODEs presented by (26) and (27) under the boundary conditions (28), we follow the finite difference code [26] together with
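Whatever the specific code used by the authors (the citation above is truncated), a boundary-value problem of this kind can also be solved with a simple shooting method. The R sketch below integrates a generic self-similar MHD shrinking-sheet system of the assumed form f''' + f f'' − f'^2 − M f' = 0 and θ'' + Pr(f θ' + Q θ) = 0, with f(0) = S, f'(0) = −1, f'(∞) = 0, θ(0) = 1 + δ θ'(0) and θ(∞) = 0; the equations, the parameter names (M, Pr, Q, S, δ) and the numerical values are assumptions chosen for illustration and may differ from the paper's exact system (26)-(28):

library(deSolve)

M <- 2; Pr <- 2; Q <- -0.2; S <- 2; delta <- 0.2   # illustrative parameter values only
eta <- seq(0, 6, length.out = 301)                 # truncated "infinity"; may need lengthening

rhs <- function(eta, y, parms) {
  f <- y[1]; fp <- y[2]; fpp <- y[3]; th <- y[4]; thp <- y[5]
  list(c(fp,
         fpp,
         fp^2 + M * fp - f * fpp,        # f''' from the assumed momentum equation
         thp,
         -Pr * (f * thp + Q * th)))      # theta'' from the assumed energy equation
}

# Step 1: shoot on f''(0) so that f'(eta_max) ~ 0; the momentum equation decouples from theta
res_fp <- function(a) {
  out <- ode(c(S, -1, a, 0, 0), eta, rhs, NULL)
  tail(out[, 3], 1)
}
a_star <- uniroot(res_fp, c(1, 4))$root            # bracket may need adjusting for other parameters

# Step 2: shoot on theta'(0), applying the thermal slip condition theta(0) = 1 + delta*theta'(0)
res_th <- function(b) {
  out <- ode(c(S, -1, a_star, 1 + delta * b, b), eta, rhs, NULL)
  tail(out[, 5], 1)
}
b_star <- uniroot(res_th, c(-3, 0))$root           # bracket may need adjusting

sol <- ode(c(S, -1, a_star, 1 + delta * b_star, b_star), eta, rhs, NULL)
colnames(sol) <- c("eta", "f", "fp", "fpp", "theta", "thetap")
head(sol)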
Discussion
In the following, we explore the impacts of distinct parameters of the system upon the velocity as well as the temperature of the flow. From the practical point of view, this has great significance. We vary one of the parameters and fix the remaining taking physical relevancy into account.
First, we study the impacts of the Hartmann number upon the velocity and temperature profiles of the flow (Figure 1). The impacts of the Prandtl number upon the temperature gradient at the sheet, for fixed values of the remaining parameters (2, 2 and 0.2), are depicted in Figure 6. We note that the temperature gradient at the sheet is negative for all values of the Prandtl number. This implies that there is no heat absorption at the sheet and that heat is transferred from the sheet.
The increase in the rate of heat transfer is related with the increase in the Prandtl number . This is of particular interest when one evaluates the rate of heat transfer from the sheet.
In Figure 7 we depict the skin friction in terms of the magnetic field parameter for several values of the mass suction parameter. One may observe that, for a given value of the magnetic field parameter, the skin friction increases with an increase in the mass suction parameter. From the momentum equation one may also observe that the skin friction is independent of the temperature field, so that the temperature has no impact on the skin friction.
Conclusion
We investigated the boundary layer flow of a magnetohydrodynamic (MHD) Newtonian fluid over a shrinking sheet with an internal thermal source. The fluid allows an electric current to pass through. We applied the Lie group approach to the system to obtain the similarity transformations and the resulting self-similar ODEs.
The impacts of the Hartmann number upon the profiles of the dimensionless velocity and temperature of the flow have been discussed. We have demonstrated that the temperature decreases and the velocity increases as the Hartmann number is increased.
The variations in the velocity and temperature of the flow have been discussed in terms of the mass suction parameter. We have shown that the dimensionless temperature of the flow decreases when one increases the Prandtl number, the thermal slip parameter and the source/sink parameter. By increasing the Prandtl number and the heat source parameter, the transfer of heat can be reinforced. To | 2018-11-11T01:39:44.598Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "31e0f3dbafbc3b5c1aaa7b589b0e1202499283eb",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844018305887/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31e0f3dbafbc3b5c1aaa7b589b0e1202499283eb",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
247466019 | pes2o/s2orc | v3-fos-license | Reduced CCR2 Can Improve the Prognosis of Sarcoma by Remodeling the Tumor Microenvironment
Background The tumor microenvironment (TME) plays a very important role in the development of sarcoma (SARC), but it is still unknown how to effectively regulate the TME. Aim Our study aims to identify core molecules that can concurrently regulate immune and stromal cells in TME as potential therapeutic targets. Methods and Results We used the ESTIMATE algorithm to score the immune and stromal components of 265 SARC samples and determined that increased immune and stromal components in TME were both associated with poor prognosis in SARC. Next, we identified differential genes that regulate both immune and stromal cells, and identified the core prognostic gene CCR2 through the protein–protein interaction (PPI) network, COX analysis, survival analysis, and GSEA enrichment analysis. Next, we calculated the content of infiltrating immune cells and stromal cells in tumors using the CIBERSORT and xcell algorithms, respectively. Using differential analysis and Spearman correlation analysis, we identified 12 immune cells and 7 stromal cells, including CD4+T cells, CD8+T cells, monocytes, macrophages, dendritic cells, NK cells, mesenchymal stem cells (MSC), Fibroblasts and Endothelial cells, all of which were regulated by CCR2. Conclusion Increased immune and stromal cell components were associated with poor prognosis in SARC, and CCR2 had a prognostic role in TME, regulating multiple immune and stromal cells, and was an important target for TME remodeling as well as immunotherapy in SARC.
Introduction
Sarcomas (SARC) are heterogeneous tumors originating from mesenchymal tissue and are malignant tumors commonly found in children and adolescents. [1][2][3] The five year survival rate of SARC patients is 50%, but it decreases sharply in patients with advanced disease; in addition, distant recurrence occurs in nearly half of SARC. 4,5 To date, the most common treatment for local SARC is surgery combined with radiotherapy and chemotherapy, but the recurrence rate after treatment is still as high as 50%; 6,7 therefore, new therapies and targets are needed to treat SARC.
The tumor microenvironment (TME) refers to the cellular environment in which tumor or tumor stem cells exist. 8 It has been widely recognized that the tumor microenvironment plays an important role in dynamically regulating tumor progression and influencing therapeutic outcomes. 9 TME not only maintains tumor cell survival and proliferation by resisting apoptosis and evading growth inhibition, but also modulates the response to therapy. 10 Therefore, TME plays an essential role within the therapeutic outcome and clinical prognosis of cancer patients.
In addition to tumor cells, the tumor microenvironment also includes stromal cells and immune cells. Infiltration by immune cells, especially macrophages, is related to poor prognosis in SARC 11 and has an important impact on local recurrence, distant metastasis and overall survival of SARC patients. 12 Mesenchymal stem cells (MSC) can significantly increase the number of cancer stem cells (CSC), promoting the production of SARC precursors and inducing a CSC-like state through epithelial-mesenchymal transition. 13 Therefore, finding a target that can modulate immune and stromal cells, enabling alteration of the SARC TME and leading to an improved prognosis of SARC, is critical.
In this article we used the ESTIMATE algorithm to estimate the level of stromal and immune cell infiltration in malignant tumor tissue, and our study shows that increased infiltration of stromal cells and immune cells correlated significantly with poor survival status in SARC patients. We then performed a series of analyses to identify a core prognostic marker, CCR2, related to both stromal and immune cells. We calculated the relative content of tumor-infiltrating immune cells with the CIBERSORT algorithm in order to subsequently analyze the immune cells associated with CCR2, and the relative content of tumor-infiltrating stromal cells with the xCell algorithm to find the stromal cells associated with CCR2. Our study shows that reduced CCR2 is closely associated with poor prognosis in SARC and with the ability to drive multiple immune and stromal cells, and that CCR2 is an important regulator for remodeling the SARC TME to improve tumor prognosis, providing an important target for SARC therapy.
Raw Data
This study used 265 SARC RNA-seq cases and the corresponding clinical data downloaded from the TCGA database. The 265 SARC RNA-seq cases include 2 normal samples and 263 tumor samples; the database is available at https://portal.gdc.cancer.gov/.
Generation of the ImmuneScore, StromalScore, and ESTIMATEScore
The ESTIMATE algorithm 14 was performed using the estimate package loaded in R language version 4.0.3 to evaluate the proportion of immune and stromal components in each sample, presented as three scores: ImmuneScore, StromalScore, and ESTIMATEScore. The ImmuneScore, StromalScore, and ESTIMATEScore are positively correlated with the amount of immune cells, stromal cells, and the sum of the two in the tumor, respectively; a high score means a high content.
Survival Analysis
Survival analysis was performed with the survival and survminer packages loaded in R. Survival time and status recorded in the clinical data were used for survival analysis of the 263 tumor samples; patients were grouped using the optimal cutoff value, survival curves were plotted with the Kaplan-Meier method, and the log-rank test was used for significance analysis, with p < 0.05 considered significant.
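A minimal sketch of this step using the same R packages; the data frame 'clin' and the column names (OS_time, OS_status, ImmuneScore) are hypothetical stand-ins for the TCGA clinical table:

library(survival)
library(survminer)

cut <- surv_cutpoint(clin, time = "OS_time", event = "OS_status",
                     variables = "ImmuneScore")              # optimal cutoff for grouping
grp <- surv_categorize(cut)                                  # labels each sample "high" or "low"

fit <- survfit(Surv(OS_time, OS_status) ~ ImmuneScore, data = grp)
survdiff(Surv(OS_time, OS_status) ~ ImmuneScore, data = grp) # log-rank test
ggsurvplot(fit, data = grp, pval = TRUE)                     # Kaplan-Meier curves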
Difference Analysis of Clinical Characteristics
Clinical data such as age and sex of TCGA-SARC samples were analyzed using Wilcoxon rank sum test in R language.
Generation of DGEs and Heatmap
The samples were divided into two groups based on the median values of the ImmuneScore and StromalScore, respectively, and differential gene expression analysis was performed using the limma package; genes with |log(FC)| > 1 and false discovery rate (FDR) < 0.05 between the high and low groups were considered DEGs. The pheatmap package was used to draw the heatmap of the DEGs.
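A minimal limma sketch of this step; 'expr' (a gene-by-sample expression matrix) and 'score_group' (a factor of "high"/"low" labels from the median split) are assumed objects, not code from the study:

library(limma)

design <- model.matrix(~ score_group)          # "low" taken as the reference level
fit    <- eBayes(lmFit(expr, design))          # columns of expr must match the order of score_group
tab    <- topTable(fit, coef = 2, number = Inf)

degs <- subset(tab, abs(logFC) > 1 & adj.P.Val < 0.05)   # |log FC| > 1 and FDR < 0.05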
Enrichment Analysis of GO and KEGG
Enrichment analyses of GO and KEGG were carried out on the 545 DEGs using the clusterProfiler, enrichplot and ggplot2 packages loaded in R. Terms with both p-value and q-value < 0.05 were regarded as significantly enriched.
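A sketch of the enrichment step with clusterProfiler; the gene-symbol vector 'deg_symbols' is hypothetical, and the ID conversion via org.Hs.eg.db is an assumption about how symbols were mapped to Entrez IDs:

library(clusterProfiler)
library(org.Hs.eg.db)

ids <- bitr(deg_symbols, fromType = "SYMBOL", toType = "ENTREZID", OrgDb = org.Hs.eg.db)

ego <- enrichGO(gene = ids$ENTREZID, OrgDb = org.Hs.eg.db, ont = "BP",
                pvalueCutoff = 0.05, qvalueCutoff = 0.05, readable = TRUE)
ekk <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa",
                  pvalueCutoff = 0.05, qvalueCutoff = 0.05)

head(as.data.frame(ego))
head(as.data.frame(ekk))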
PPI Network Construction
PPI networks were constructed with the STRING database, and nodes with an interaction confidence higher than 0.95 were used to reconstruct the PPI network in Cytoscape version 3.8.2. The top 20 DEGs ranked by the number of adjacent nodes were considered the core genes for the next step of the analysis.
Regression Analysis of Univariate COX
Univariate COX regression analysis was performed with the survival package loaded in R. The top 20 genes are displayed in the univariate COX regression analysis plots in order of increasing p-value.
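A sketch of the univariate Cox screen with the survival package; 'clin_expr' (clinical survival columns plus one expression column per candidate gene) and the gene names are hypothetical placeholders:

library(survival)

hub_genes <- c("CCR2", "CCL2", "IL6")                 # illustrative candidate genes only
uni_cox <- lapply(hub_genes, function(g) {
  fml <- as.formula(paste("Surv(OS_time, OS_status) ~", g))
  summary(coxph(fml, data = clin_expr))$coefficients  # HR, z and p-value for this gene
})
names(uni_cox) <- hub_genes
uni_cox                                               # rank the genes by p-value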
Survival Analysis of CCR2 as a Prognostic Marker for SARC
First, samples were separated into two groups in accordance with the median value of CCR2 expression levels. Subsequently, the Kaplan-Meier method was implemented to analyze the relationship between CCR2 expression level and Overall Survival, and this was used to plot survival curves.
Gene Enrichment Analysis
The hallmark gene sets downloaded from MSigDB were used as the target gene sets. Enrichment was carried out using GSEA software version 4.1.0. Gene sets that met both p-value and q-value < 0.05 were regarded as significantly enriched.
Immune Cell and Stromal Cell Infiltration
The CIBERSORT algorithm was applied to assess the content of immune cells in the tumor samples. Samples with p < 0.05 in the quality filtering were used for subsequent analysis. The relative content of stromal cells in the tumor samples was calculated using the xCell software package.
Analysis of Differences and Correlation
The median value of CCR2 gene expression was used as the grouping criterion; differences in immune and stromal cells were compared using the Wilcoxon test, and the correlations of CCR2 with immune and stromal cells were analyzed using the Spearman test. p < 0.05 was deemed significant.
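A sketch of the difference and correlation tests in R; 'cells' (a sample-by-cell-type matrix of estimated fractions) and 'ccr2' (a vector of CCR2 expression for the same samples) are assumed inputs:

ccr2_group <- ifelse(ccr2 > median(ccr2), "high", "low")   # median split described above

res <- sapply(colnames(cells), function(ct) {
  w <- wilcox.test(cells[ccr2_group == "high", ct], cells[ccr2_group == "low", ct])
  s <- cor.test(cells[, ct], ccr2, method = "spearman")
  c(wilcox_p = w$p.value, rho = unname(s$estimate), spearman_p = s$p.value)
})
t(res)                                                     # one row of results per cell type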
The Analysis Process of This Research
The flow of our research is illustrated in Figure 1. First, we downloaded the transcriptome RNA-seq data of 263 SARC cases from the TCGA database, then analyzed the TME composition of SARC and calculated the immune and stromal cell components of the SARC samples using the ESTIMATE algorithm, and we examined whether the immune and stromal components have any effect on the survival time of SARC patients. Next, we found DEGs that differed by both immune score and stromal score, and used the PPI network and COX regression analysis to find core genes with prognostic effects, obtaining CCR2. We then focused on several important analyses of CCR2, including survival analysis to determine its value as a prognostic marker for SARC and GSEA enrichment to analyze its function. Finally, we explored by two methods which immune and stromal cells this prognostic gene could regulate in SARC.
Association of Tumor Microenvironment Scores with Patient Survival and Clinical Traits
To identify the relationship between the content of the immune and stromal components of the TME and survival in SARC patients, survival analyses were carried out separately for the ImmuneScore, StromalScore and ESTIMATEScore using the Kaplan-Meier method. The ImmuneScore and StromalScore represent the relative content of the immune or stromal cell components in the TME, respectively, and the ESTIMATEScore is the sum of the ImmuneScore and StromalScore. As shown in Figure 2A-C, the ImmuneScore, StromalScore and ESTIMATEScore were strongly associated with patient survival, suggesting that the immune and stromal components of the TME in SARC patients play an important role in patient prognosis. In addition to the StromalScore, the ESTIMATEScore and ImmuneScore were also higher in older patients (Figure 2D, F and H). Interestingly, all scores were significantly higher in female patients than in male patients, demonstrating a greater difference in the components of the TME between female and male SARC patients (Figure 2E, G and I).
DEGs are Mainly Related to Immune Function
To explore the altered gene profile in the TME related to the immune and stromal components, we compared the high- and low-score subgroups (Figure 3A and B). In the comparison of samples with high and low ImmuneScore, we obtained 881 upregulated DEGs and 288 downregulated DEGs. Similarly, 1562 up-regulated genes and 2859 down-regulated genes were obtained for the StromalScore. Intersection analysis showed that there were 484 upregulated genes shared by the ImmuneScore-high and StromalScore-high subgroups, meaning that these DEGs are up-regulated with higher stromal and immune cell contents in the TME, and 61 genes downregulated in both low-score subgroups, for a total of 555 DEGs (Figure 3C and D).
Figure 2: Relationship between SARC patient scores and survival, age and sex. SARC patients were grouped separately using the best cutoff value for high and low groups for survival analysis: ESTIMATEScore groups with p=0.0065 by log-rank test (A), ImmuneScore groups with p=0.016 by log-rank test (B), StromalScore groups with p=0.00075 by log-rank test (C). The p-values of the ESTIMATEScore (D), ImmuneScore (F) and StromalScore (H) subgroups by age are 0.046, 0.031 and 0.11, respectively. By Wilcoxon rank sum test, the p-values of the ESTIMATEScore (E), ImmuneScore (G) and StromalScore (I) subgroups by gender are 0.00018, 0.0018 and 4.3e−05, respectively.
These DEGs may be the determinants of the dynamic changes in the TME. Gene Ontology (GO) enrichment analysis revealed that almost all of these DEGs are associated with immune-related biological processes, such as T cell activation and regulation, the regulation of immune effector processes, and lymphocyte proliferation and differentiation (Figure 3E and G). Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis also showed enrichment in chemokine signaling pathways, cytokine-cytokine receptor interactions, and cell adhesion molecule pathways (Figure 3F and H). Thus, the overall function of the DEGs maps onto immune-related activities, which also implies that immune factors are a major characteristic of the TME in SARC patients.
Identification of Core Prognostic Genes Affecting the Tumor Microenvironment
To identify core genes that play a critical role in the tumor microenvironment, we built a PPI network supported by the STRING database using Cytoscape software [National Institute of General Medical Sciences (NIGMS), USA]. Figure 4A shows the top 30 genes sorted by the number of significant nodes. Univariate COX regression analysis was then performed to identify the top 20 genes with prognostic significance according to p-value (Figure 4B). Finally, we performed intersection analysis to identify core genes with prognostic value. As shown in Figure 4C, the core gene with a prognostic role that can influence the composition of the tumor microenvironment in SARC is CCR2.
CCR2 is a receptor for monocyte chemoattractant protein-1, a chemokine known to specifically mediate monocyte chemotaxis. Survival analysis revealed that SARC patients with higher CCR2 expression survived longer than those with low CCR2 expression (Figure 4D), indicating that it is a beneficial factor for patient survival. Next, we explored its function in tumors using the MSigDB-defined tumor signature gene sets for enrichment, which showed enrichment of multiple immune-function gene sets in the CCR2 highly expressed group, such as the inflammatory response, the interleukin 2-STAT5 pathway, the interleukin 6-JAK-STAT3 pathway, and the interferon α and interferon γ responses (Figure 4E). However, in the low CCR2 expression group, gene sets associated with the cell cycle and senescence, such as the G2M checkpoint, the WNT β-catenin pathway, and the E2F transcription factor pathway, were enriched (Figure 4F). These results suggest that CCR2 may be an important target capable of influencing the prognosis of SARC patients by regulating TME composition.
Correlation of CCR2 with the Proportion of Tumor-Infiltrating Immune Cells
To further confirm the association between CCR2 expression and the immune microenvironment, we analyzed the proportions of tumor-infiltrating immune subpopulations with the CIBERSORT algorithm. The results of the correlation and difference analyses revealed that a total of 12 immune cell types were related to CCR2 expression (Figure 5A and B). Among them, nine immune cell types were positively correlated with CCR2 expression, including CD8 T cells, activated CD4 memory T cells, follicular helper T cells, regulatory T cells (Tregs), naive B cells, plasma cells, resting dendritic cells, M1 macrophages and monocytes. Three types of immune cells were negatively correlated with CCR2 expression, including M0 macrophages, M2 macrophages and resting NK cells (Figure 5C-N). These findings further support that the CCR2 expression level affects TME immune activities.
Correlation of CCR2 with the Proportion of Tumor-Infiltrating Stromal Cells
Next, we calculated the content of seven tumor-infiltrating stromal cell types for subsequent analysis using the xCell algorithm, and the results of the correlation and difference analyses revealed that all seven stromal cell types were correlated with CCR2 expression (Figure 6A and B). Among them, three stromal cell types were positively correlated with CCR2 expression, including endothelial cells, mv endothelial cells and ly endothelial cells; four stromal cell types were negatively correlated with CCR2 expression, including MSC, fibroblasts, pericytes and smooth muscle cells (Figure 6C-I). These results demonstrate that the level of CCR2 also has an effect on the stromal cells of the TME.
Discussion
In our research, we attempted to identify, by screening the TCGA database, the key genes that are closely associated with TME formation and patient prognosis in SARC. Through a series of bioinformatic analyses, the close association of CCR2 with TME formation and prognosis in SARC was confirmed. Further, we found that CCR2 was associated with 12 immune cell types, such as CD8 T cells, activated CD4 memory T cells, regulatory T cells (Tregs) and macrophages, and 7 stromal cell types, such as endothelial cells, MSC and fibroblasts. The TME plays an essential role in tumorigenesis and progression, so it is useful to explore potential therapeutic targets that could contribute to TME remodeling and facilitate the shift from a tumor-promoting to a tumor-suppressing TME. The important role the TME plays in dynamically regulating cancer progression and influencing therapeutic outcomes is now well known. 9 Immunotherapy has made tremendous progress in recent years, with immune checkpoint inhibitors targeting the PD1-PD1 ligand 1 (PDL1) axis and/or cytotoxic T lymphocyte-associated antigen 4 (CTLA4) having been approved as first-line agents in a variety of solid tumors. [15][16][17][18][19] However, the efficiency of immune checkpoint inhibitor therapy in SARC treatment is limited. 20,21 Therefore, targeting the sarcoma microenvironment is an attractive therapeutic approach, but its implementation may need to be combined with other therapeutic targets. Our analysis of SARC transcriptome data from the TCGA database revealed that increased immune and stromal components in the TME are associated with poor prognosis in SARC patients, and these results highlight the importance of exploring targets that regulate immune as well as stromal cell infiltration. Subsequently, our findings show that CCR2 is associated with tumor prognosis and is involved in the chemotaxis of multiple immune and stromal cells in the TME of SARC; therefore, CCR2 may be a therapeutic target and potential prognostic marker in the TME of SARC patients.
CCR2 is the receptor for chemokine (C-C motif) ligand 2 (CCL2); it is selectively expressed on the cell surface, participates in multiple signaling pathways and regulates cell migration. 22 When CCL2 binds to CCR2, it induces chemotactic activity and increases calcium influx. It has various effects on a wide variety of cells, including monocytes, macrophages, osteoclasts, basophils and endothelial cells, and it is also involved in multiple diseases. 23 Moreover, CCR2 has multiple effects in tumor progression, such as increasing tumor cell proliferation and invasiveness, as well as shaping the tumor microenvironment by increasing angiogenesis and the recruitment of immunosuppressive cells. 24 However, our results suggest that elevated CCR2 expression in SARC patients is associated with a better patient prognosis, which seems inconsistent with other cancers. Therefore, we speculate that CCR2 expression may exhibit differential anti-tumor versus tumor-promoting effects in different tumors. In the tumor microenvironment, CD4+ and CD8+ T cells act as central players in tumor growth. 25,26 Increased CD8+ lymphocyte infiltration is associated with a better prognosis in synovial SARC and with longer survival in angiosarcoma patients. 27 Our study showed that CCR2 positively correlated with CD8 T cells, activated CD4 memory T cells, and regulatory T cells (Tregs), while higher infiltration of CD8+ or FOXP3+ lymphocytes was related to favorable overall survival in patients; 28 we therefore hypothesized that increased CCR2 is associated with better prognosis by inducing increased infiltration of CD8+ or CD4+ lymphocytes. Besides, higher infiltration of CD163+ macrophages was associated with poorer progression-free survival in SARC, 28 while our study showed that high CCR2 expression was negatively correlated with M0 and M2 macrophages, in agreement with previous studies. Meanwhile, SARC immune checkpoint therapy requires CD4+ and CD8+ T cells, 29 and increased CCR2 could also increase the efficacy of immune checkpoint therapy. In addition to immune cells, stromal cells play an important role in tumor prognosis, such as MSCs. 30 MSCs promote osteosarcoma growth through the PI3K/Akt and Ras/Erk intracellular cascades and promote metastasis via CXCR4 signaling. 31 In addition, MSC-mediated STAT-3 pathway activation in osteosarcoma increases MMP2/9 and decreases E-cadherin expression, promoting tumor progression. 32 Our results show that increased CCR2 is negatively correlated with the content of MSCs, again demonstrating the beneficial prognostic role of CCR2.
In conclusion, in this study, through bioinformatic analysis of SARC samples in the TCGA database, we clarified that the immune and stromal components of the TME of SARC are closely associated with patient prognosis, and identified the core gene CCR2 as a marker capable of influencing the prognosis of SARC patients through the tumor microenvironment. Furthermore, we identified 12 immune cell types and 7 stromal cell types associated with CCR2 and found that its high expression was negatively correlated with M0 macrophages, M2 macrophages, MSC and other immune and stromal cells related to poor SARC prognosis, and positively correlated with the CD4+ and CD8+ T cells that make SARC prognosis better. Therefore, we suggest that CCR2 may be involved in the regulation of the TME in SARC and may be an effective target to improve the efficacy of SARC immunotherapy.
Conclusion
Increased immune and stromal cell components are associated with poor prognosis in SARC, and decreased CCR2 in TME helps drive multiple immune cells and stromal cells that contribute to poor prognosis in SARC, making CCR2 an important target for TME remodeling as well as SARC immunotherapy.
Ethics Statement
The patients involved in the database had obtained ethical approval. Our study was based on open-source data that users can download free of charge for research and use to publish articles; there were no ethical issues. This study was submitted to the Ethics Committee of the China-Japan Union Hospital of Jilin University for review, and it was deemed that ethics approval was not necessary. | 2022-03-16T15:22:07.329Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "c881df12a752647c90231fc612a84bec20d03120",
"oa_license": "CCBYNC",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8932926",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ed02b2f59977df7a15f1252eb0ff2252cd17ec11",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256316010 | pes2o/s2orc | v3-fos-license | Study of structural imbalances in agricultural engineering
The authors emphasize that the development of the agricultural machinery industry is inextricably linked with the development of agricultural production and, consequently, with changes in land use forms and the development of agricultural science, which, in turn, are conditioned by the current socio-political conditions and progress in the field of agricultural knowledge. It is argued that the impetus for the development of domestic agricultural machinery comes from agricultural holdings. The authors focus on the fact that state support and regulation of the agricultural machinery market are proposed through stimulating the renewal of the technical park by improving financial leasing, reducing the cost of medium- and long-term loans, and improving the procedures for putting equipment into operation. It is noted that, in order to sustain the renewal of the machine and tractor fleet and of high-tech equipment in agriculture, it is necessary to ensure the effectiveness of state programs supporting domestic agricultural enterprises, which will allow the restoration of their financial viability; this, in turn, will stimulate the development of the agricultural machinery market due to growing demand for technical means.
Introduction
Agricultural engineering is a knowledge-intensive industry; therefore, an important factor influencing its development is the level of scientific and technological progress. Mechanical engineering is one of the most important branches of the economy, since it supplies every production sector with machinery and equipment and the population with consumer goods. Consequently, the industrial potential of the country and its competitiveness in foreign markets depend on the level of development of mechanical engineering.
Mechanical engineering is an extremely important and complex intersectoral complex of the economy of Ukraine, which plays a leading role in the formation and improvement of the material and technical base, implements the achievements of scientific and technological progress, provides comprehensive mechanization and automation of production.However, today this industry, the state of which is one of the main indicators of the economic and industrial development of the country, is in a difficult economic situation.
Research on financial and economic activity at agricultural engineering enterprises has been carried out by scientists such as Abuselidze. The implementation of integration processes is an important element of the overall process of socio-economic development of a company. The main provisions of the integration strategy of a joint-stock company should follow from and be fully consistent with other aspects of strategic planning of JSC development (Hutsaliuk et al., 2020).
Under appropriate conditions, domestic enterprises need to improve and constantly adapt existing evaluation methods by creating a system of evaluation indicators -indicators that serve as criteria for determining the prospects for further economic and environmental expertise of environmentally oriented investment projects (Shvets et al., 2013).
In this context, the issue of ensuring the effectiveness of this mechanism is extremely important, as the action of the first international mechanism of low-carbon development has proved that the Kyoto mechanisms have not had a positive impact on the situation with carbon emissions in the world (Datsii et al., 2021).
To objectively identify the key factors of development of innovation processes in agricultural engineering, a list of indicators that allow assessing the state of the industry qualitatively and quantitatively and solving problems was formed during the study (Andriushchenko et al., 2021).
However, a comprehensive, systematic approach to the financial and economic activities of agricultural machinery enterprises has not yet received sufficient theoretical and practical justification.
The purpose of the article is the substantiation of theoretical and methodological provisions and the development of practical recommendations for the formation of a system of financial and economic activities of agricultural machinery enterprises.
Methods
In the process of work, the following research methods were used: historical and retrospective review -to characterize the processes of formation and development of machine-building enterprises in Ukraine; system, situational and process approaches to determining the essence of financial and economic management of agricultural machinery enterprises; abstractlogical; strategic analysis; statistical and technical-economic; economic and mathematical modeling.
Results and Discussion
Agricultural engineering traditionally occupies an important place in the structure of the machine-building complex of Ukraine. It focuses on the areas of agricultural production, and its placement is associated with the zonal specialization of agriculture.
The development of the agricultural machinery industry is inextricably linked with the development of agricultural production, changes in land use forms and the development of agricultural science, which, in turn, are conditioned by the current socio-political conditions and progress in the field of agricultural knowledge. The second factor influencing the development of agricultural machinery is scientific and technological progress, whose historical march makes it possible to technically improve both agricultural machinery and equipment and the technologies for their production.
In accordance with the state target economic program for the development of domestic mechanical engineering for the agro-industrial complex until 2020, agricultural engineering of Ukraine is a strategically important branch of the state economy, which forms and has a significant impact on production volumes, production costs and prices for basic types of food for the population of the country. The concept of the state target economic program for the development of domestic mechanical engineering for the agro-industrial complex noted: reduced volumes of equipment output; the closure of enterprises; low competitiveness of the industry's products; a physically and morally worn-out production base, etc. The situation has not changed since, and by some indicators it has even worsened (Table 1). In January-November 2020, compared to the same period in 2019, the production of tractors for agriculture and forestry increased by 6% and of disc harrows by 17.6%; the production of plows, seeders, cultivators and mowers was stable, while the production of spreaders increased 2.7 times, of sprayers by 41.6%, and of trailers by 27.4%. In monetary terms, in 2014 the sale of agricultural machinery products amounted to UAH 4.826 billion (3.7% of machine-building products), while in 2020 it reached UAH 9.26 billion (7.1%). That is, the dynamics of agricultural engineering have become more positive.
The mechanism of state support for agricultural machinery is based on a number of regulations. In particular, compensation for purchased Ukrainian agricultural machinery occurs in accordance with the Procedure for using the funds provided for in the state budget for partial compensation of the cost of agricultural machinery and equipment of domestic production:
1. Machine-building enterprises apply for participation in budget programs;
2. The Ministry forms a list of agricultural machinery, the cost of which will be partially compensated to farmers from the budget;
3. Agricultural producers pay machine builders for machinery and equipment included in the List through state banks and other banks with a state share of 75%: Oschadbank, Ukreximbank, Privatbank, Ukrgasbank;
4. The agricultural producer opens an account in one of these banks and submits an application and a package of documents: a copy of the payment order; an act of acceptance and transfer of the equipment; a certificate of state registration of the equipment (if the equipment is subject to mandatory state registration);
5. The state bank provides the Ministry with information on the amount of funds subject to partial compensation;
6. A special fund forms a register of agricultural producers and transfers funds to the state bank within the scope of open allocations from the register;
7. The bank transfers compensation funds for the equipment, within 20% of its cost, to the current accounts of the agricultural producers.
The government has decided to use budget funds to partially compensate farmers for the cost of Ukrainian agricultural machinery and equipment, with the expectation that this will increase the purchasing power of agricultural producers and help upgrade the technical park by reducing the cost of purchased machinery and equipment of Ukrainian production, and will also stimulate the production of machinery and equipment by Ukrainian agricultural machinery enterprises.
The Commission under the Ministry of Economic Development approved a list of agricultural machinery of Ukrainian production, 20% of the cost of which is compensated from the state budget. The list includes 40 Ukrainian manufacturers and almost 800 types and brands of machinery and equipment. Among them are, in particular, PJSC "Kharkiv Tractor Plant", SE "PO Southern Machine-Building Plant named after Makarova", LLC "NPP Belotserkovmaz", PJSC "Berdyansk harvesters", LLC "Orikhovselmash" and LLC "Soyuz-special equipment".
In 2020, compared with 2018, there was also a decrease in the numbers of the vast majority of types of agricultural machinery, although the list of machinery types showing a slight increase grew somewhat. In 2020, compared to 2018, an increase was observed in the numbers of the following types of equipment: sprinkler machines (+10.21%), roller harvesters (+7.36%), mowers (+2.67%), seeders (+2.38%), tractors (+1.75%) and combine harvesters (+0.63%).
Thus, in 2021, there is a positive trend towards increasing the number of technical means both in all agricultural enterprises and in farms, in particular.
Combine harvesters — imported (99.5% of the market): "Klaas" (25.3% of total combine imports to Ukraine), John Deere (17.4%) and Polesie (Gomselmash, Belarus) (15.1%); domestic (0.3%): "Slavutich", produced by the Kherson Machine-building Plant. Tractors — imported (88% of tractors in Ukraine): MTZ (47.2% of total imports) and "John Deere" (19%); domestic: 10.5%.
It is proved that "the purchasing power of Ukrainian enterprises for the purchase of equipment annually amounts to only UAH 5-7 billion with an annual market capacity of UAH 22-28 billion. That is, the technological need is covered by only 15-20%. Most agricultural enterprises are practically unable to purchase modern equipment and combined machines." In order to sustain the renewal of the machine and tractor fleet and of high-tech equipment in agriculture, it is necessary to ensure the effectiveness of state programs supporting domestic agricultural enterprises, which will allow the restoration of their financial viability; this, in turn, will serve as an impetus to the development of the primary and secondary agricultural machinery markets due to the growing demand for technical means.
Analysis of the functioning of machine-building enterprises in modern market conditions indicates that external changes are divided into two types: continuous and intermittent. Continuous changes in the environment occur slowly and are quite predictable; with external changes of this kind, the company has time to adapt to new problems and implement new opportunities (Table 3). As a result of the analysis, the composition of the most influential factors, the trends of their development, the nature of their impact on the enterprise, as well as possible responses of the enterprise, are established. Factors and conditions of the general external environment do not have a direct effect on the operational activities of the enterprise, but they determine strategically important decisions made by its management. The impact of these factors on the enterprise is manifested in the form of opportunities, the use of which can positively affect the activities of the enterprise, as well as threats, i.e. factors that, when realized, pose a danger to the enterprise.
Conclusions
During the study period, the volume of sales of industrial products decreased. The negative dynamics of industrial development indicate significant risks for the prospects of modernization of the national economy and for economic dynamics in general. "The main negative factors determining the downward indicators of the industry during the intensive deployment of the economic crisis are: the falling solvency of enterprises; the increase in the cost of production. The synergistic effect of the simultaneous action of these destructive factors has led to a crisis rate of decline in industry." Mechanical engineering largely depends on the development of the economy of Ukraine and the CIS countries; therefore, all the negative phenomena caused by the global crisis caused a fall in the production of machine-building products. During the period under study, the dynamics of machine-building production showed a steady slowing trend.
An important factor in the slowdown of industrial development was the decline in investment dynamics. By type of economic activity, mechanical engineering is the main manufacturer and supplier of high-tech products. Only a small number of industrial enterprises introduced innovations into their activities: carried out complex mechanization and automation of production, introduced new technological processes, or mastered the production of innovative types of products. The current level of investment in mechanical engineering does not meet the needs of structural renewal and extensive modernization of the industry's production base.
An important factor contributing to the growth or decrease of the level of variability of the external environment is the economic policy of the state, which stimulates or dampens business activity. The rise of domestic engineering is largely determined by the scale of technical re-equipment, but in many cases this is possible only with the use of imported materials and equipment.
Fig. 1. Mechanism of partial compensation of the cost of agricultural machinery and equipment of domestic production. Source: compiled according to the Ministry of Economy of Ukraine (2021).
Table 1. Production of certain types of machinery and equipment for agriculture and forestry for 2014-2020, units. Source: compiled according to the State Statistics Service.
Table 3. Analysis of macro-environment factors essential for machine-building enterprises | 2023-01-28T16:12:22.311Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "396e5cc8025256a10f41ac3ae3833cb63d61305e",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2022/30/e3sconf_interagromash2022_01037.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3e7507f83fe3c538c852a483c9eb5b672c1b1cfd",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
39401687 | pes2o/s2orc | v3-fos-license | PS102. Apathy in elderly depression and the antidepressant response
administered at baseline, weeks 1, 2, 4, and week 8. The dose of desvenlafaxine was fixed (50 mg/day) until week 4, after which it was flexible up to 100 mg/day, based on response and tolerability. Results: Montgomery Asberg Depression Scale scores significantly decreased from baseline (M=23.61, SD=5.51) to end of treatment (M=12.29, SD=8.41), p<.0001. Severity of illness, as measured by the Clinical Global Impression scale, as well as self-reported depressive symptom scores, significantly decreased from baseline to end of treatment (p<.0001). Improvements in quality of life (p<.0001), levels of perceived stress (p<.0001), coping styles (p<.0001), and work impairment (p<.01) were noted over the course of treatment. Conclusions: Overall results indicate that desvenlafaxine is effective in reducing depressive symptoms and improving functioning in patients with persistent depressive disorder. Further, results provide evidence of good safety and tolerability of desvenlafaxine in this population. These results support the further investigation of desvenlafaxine for this condition using larger, placebo-controlled, randomized controlled trials. PS101 Oral Ketamine for Treatment Resistant Major Depression – A double blind randomized controlled trial. Yoav Domany MD, Maya Bleich-Cohen PhD, Nadav Stoppelman PhD, Talma Hendler MD PhD, Ricardo Tarrasch PhD, Shaul Schreiber MD, Roi Meidan MD and Haggai Sharon, MD. Tel Aviv Sourasky Medical Center, Israel. Abstract. Background: Major depression is a devastating common disorder. Current pharmacotherapy relies on the monoaminergic theory and requires a substantial time for full therapeutic effect. Regrettably, about 40% of patients fail to attain remission, defined as treatment-resistant depression (TRD). Recently, intravenous ketamine has been shown to provide rapid, short-lived amelioration of TRD. We aimed to assess the clinical efficacy and safety of oral ketamine for TRD. Methods: In a double-blind, randomized, placebo-controlled trial, 27 TRD outpatients received either oral ketamine or placebo for 21 days. Patients were evaluated pre-trial and after 21 days. The main outcome measure was the change in Montgomery Asberg Depression Rating Scale (MADRS) score. Results: 14 subjects were randomized to the ketamine group and 13 to the placebo group. Of these, 12 and 9 respectively completed the study. No significant differences were obtained at time zero. A significant reduction of 13.4 points in the MADRS score was obtained after 21 days in the ketamine group (p=0.003), while a nonsignificant reduction of 2.9 was observed in the placebo group. Four subjects (33%) attained remission (MADRS ≤10) in the ketamine group compared to none in the placebo group. No serious side effects were reported. Conclusion: In this study, sub-anesthetic oral ketamine produced rapid amelioration of depressive symptoms in ambulatory TRD patients and was well tolerated. The results of this study suggest that oral ketamine may hold significant promise in the care of TRD.
PS102 Apathy in elderly depression and the antidepressant response Takahisa Shimano, Hajime Baba Juntendo University Koshigaya Hospital, Japan Abstract Background: Although apathy is a common symptom in late-life depression, the effect of antidepressant treatment on apathy in these patients is still unclear. The aim of the present study is to reveal differences in treatment response on apathy among classes of antidepressants. Methods: A total of 128 elderly inpatients (≥60 years old) with a DSM-IV major depressive disorder were recruited from Juntendo Koshigaya Hospital. Patients showing clinical evidence of dementia or with mini-mental state examination (MMSE) scores <24 were excluded. Finally 92 elderly patients were treated with selective serotonin reuptake inhibitors (SSRI, n=52) and serotonin and norepinephrine reuptake inhibitors (SNRI, n=40). We evaluated depressive symptoms using the Hamilton Depression Scale (HAM-D) and apathy using the Apathy Evaluation Scale Japanese version (AES-J) before and after 4 weeks of treatment. Responders were defined as patients with 50 percent improvement of each score with treatment. Result: There were no significant differences between SSRI and SNRI on responder rates of HAM-D and AES-J scores. Conclusion: The treatment response on apathy in patients with late-life depression was not different according to the class of antidepressant. The results with a larger dataset will be reported in the congress. PS103 Search for biomarkers for ketamine response from changes of cytokines in the patients with treatment resistant depression Tung-Ping Su1,2,3,4, Mu-Hong Chen2,3, Cheng-Da Li1, 2,3, Ya-Mei Bai1,2,3, Annie Chang2, Hui-Ju Wu, Wei-Chen Lin 2,3, Pei-Chi Tu2,4,5 1 Department of Psychiatry, National Yang-Ming University 2 Department of Psychiatry, Taipei Veterans General Hospital 3 Institute of Brain Science, National Yang-Ming University 4 Department of Medical Research, Taipei Veterans General Hospital 5 Institute of Philosophy of Mind and Cognition Abstract Objective: Increased levels of pro-inflammatory cytokines were reported to be associated with depression. The aim of this study is to search for biomarkers of the ketamine antidepressant response using levels of cytokines to account for and predict clinical response. Methods: We conducted a randomized, double-blind placebo-controlled study comparing the two single subanesthetic doses of ketamine infusion (0.5mg/kg & 0.2mg/kg) vs.
placebo (PBO) to see the primary behavioral outcome and the alterations of cytokine levels as the secondary outcome. The levels of cytokines such as CRP, IL2, IL6 and TNF α were measured at baseline, 240 mins, D2 (48 hrs), and D6 with concomitant mood ratings (HAMD-17 and MADRS), and their changes from baseline were assessed and correlated. Results: Repeated-measures ANOVA showed no significant group effect on these four cytokine levels (p=NS) but a significant time effect on IL2, IL6 and TNF α (p=0.034, 0.001 & 0.004 respectively). In that, we observed a minimal rate of decrease from baseline to 40 mins and 240 mins post-infusion in IL2 and IL6 (< 5%) and a moderate decrease in TNF α (10–15%). However, no correlations between the rate of decrease of cytokines and the rate of mood improvement, and no baseline or change-based cytokine predictors of responder rate (>=50% reduction of either HAMD-17 or MADRS from D2 to D4), were found. Nevertheless, if we divided the cytokine levels at the median into high and low level groups, only the baseline IL6 high level group (IL6 > 28953pg/ml) and the CRP low level group (CRP < 518ng/ml)
PS103
Search for biomarkers for ketamine response from changes of cytokines in the patients with treatment resistant depression | 2018-05-30T18:17:42.308Z | 2016-05-27T00:00:00.000 | {
"year": 2016,
"sha1": "2c268d92a79422f58097f2713710630de3063b48",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ijnp/article-pdf/19/Suppl_1/35/21604059/pyw043.102.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c268d92a79422f58097f2713710630de3063b48",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
12377623 | pes2o/s2orc | v3-fos-license | Comments Concerning the CFT Description of Small Objects in AdS
In this paper we resolve a contradiction posed in a recent paper by Horowitz and Hubeny. The contradiction concerns the way small objects in AdS space are described in the holographic dual CFT description.
The Apparent Contradiction
According to the Holographic Principle [1] [2] objects deep in the interior of a spatial region should have a description in terms of a holographic theory that in some sense lives on the region's boundary. A concrete realization of this idea has been given by the AdS/CFT duality [3][4] [5]. It has become a subject of active investigation to find out exactly how particular objects in AdS space are represented in the corresponding conformal field theory.
There are two apparently contradictory claims in the literature concerning this question. According to [6][7] the field theoretic representation of an object or event far from the AdS boundary is through nonlocal operators such as Wilson loops whose degree of nonlocality increases as the object recedes from the boundary. This follows from the usual UV/IR correspondence [8].
In apparent contradiction with this view, Horowitz and Hubeny in an extremely interesting paper [9] have presented evidence that local operators of very high dimensionality contain information about the size and shape of small objects. The resolution of this conflict will be seen to lie in an ambiguity in what we mean when we say that a bulk quantity is represented by a certain field theoretic quantity.
We will begin by very briefly describing the results of Horowitz and Hubeny. These authors consider objects in AdS$_5 \times S^5$ which are much smaller than the AdS radius of curvature and which are localized at a point on the 5-sphere. In addition they are also localized near the center of AdS in appropriately chosen coordinates. Using the gravitational dual theory they find that the expectation values of certain scalar operators of dimension n = l + 4 are of order $\Phi_n \sim e^{c\rho l}$ (1.1), where ρ is the size of the object in units of the AdS radius and c is a numerical constant which we will ignore. It follows from eq. (1.1) that for $l > \rho^{-1}$ the signal is appreciable and that by examining the l dependence the size of the object can be determined. This apparently contradicts [6][7].
Our conventions for describing AdS space are as follows: Global coordinates for AdS can be defined so that the metric is given by
$$ds^2 = \frac{R^2}{(1-r^2)^2}\left[-(1+r^2)^2\,dt^2 + 4\,dr^2 + 4r^2\,d\Omega^2\right], \qquad (1.2)$$
where the radial coordinate r runs from 0 to 1. Here, R is the radius of curvature of the space and $d\Omega^2$ represents the metric of a unit sphere. In the case of AdS$_5$ the sphere is a 3-sphere. These coordinates are especially useful when trying to recover infinite flat space in the limit R → ∞. Indeed the AdS space as defined above behaves in many respects like a finite cavity of size R with a reflecting boundary at r = 1. We will refer to the metric in eq. (1.2) as cavity coordinates.
Another coordinate system which is particularly useful when studying the properties of the CFT in flat space is given by
$$ds^2 = \frac{R^2}{y^2}\left(-d\tilde{t}^2 + dx^2 + dy^2\right), \qquad (1.3)$$
where $\tilde{t}, x$ label 4-dimensional Minkowski space and y is the 5th direction perpendicular to x. At time $\tilde{t} = 0$, the center of AdS can be taken to be the point x = 0, y = 1 in these co-ordinates.
The quantum field theory lives on the 4-dimensional boundary whose metric we take to be either $ds^2 = -dt^2 + d\Omega^2$ (1.4) in the case of cavity coordinates, or $ds^2 = -d\tilde{t}^2 + dx^2$ (1.5) for the flat space representation. As in ref. [8] we will be thinking of the boundary quantum field theory in Wilsonian terms. Thus we imagine the boundary field theory to be defined in terms of a bare set of degrees of freedom at some very small coordinate length scale δ_0. The details of the cutoff are not important but an example to keep in mind is the Hamiltonian lattice cutoff sometimes used to study QCD. In that case δ_0 would be the spatial lattice constant. We assume δ_0 is much smaller than any other length scale we will encounter. According to the UV/IR correspondence, a connection exists between the UV regulator length scale δ_0 of the QFT and an IR cutoff of the bulk theory. The bulk cutoff is implemented by replacing the AdS boundary at r = 1 or y = 0 with the surface r = 1 − δ_0 in global AdS or y = δ_0 in the Poincare patch.
There are two different large N limits of the QFT that have two different purposes. The first is the 't Hooft limit
$$N \to \infty, \qquad g^2 N = \text{fixed}.$$
From the bulk point of view this is the limit of classical gravity or classical string theory in an AdS space of fixed radius in string units;
$$\frac{R}{\ell_s} \sim (g^2 N)^{1/4} = \text{fixed}. \qquad (1.7)$$
In this limit the ratio of the size of a physical object to the AdS radius is fixed.
The limit of interest for analyzing the holographic principle as defined in [1][2] is a different one [10] [11]. For this purpose we take
$$N \to \infty, \qquad g^2 = \text{fixed}. \qquad (1.8)$$
In this limit the AdS radius becomes much larger than the size of any physical object. This is the limit discussed in [6] where it was claimed that objects at r = 0 or y = 1 should be represented by Wilson loops of roughly unit size.
What it Means to Represent
One reason for confusion is that different people may mean different things when they say a certain field theory quantity represents a corresponding bulk quantity. In order to resolve the paradox raised by the Horowitz-Hubeny result we need to have a clear understanding of what it means for a particular set of observables in the CFT to describe a particular set of circumstances in the AdS space.
The ability to find observables (hermitian operators) in the field theory to represent physical quantities in the bulk theory follows from the assumption that the Hilbert space of bulk states is the same as that of the boundary field theory. We argue that faithfully representing a given bulk quantity α by a field theory quantity A should mean more than just requiring a correspondence between their expectation values. Ideally we would like the probability distribution for α and A to be the same. For example, a faithful representation of a highly classical bulk quantity such as the size of a macroscopic object should involve a field theory quantity with very small fluctuation.
Let us suppose that the quantum state of the system determines a probability distribution P(α) centered at α_0 with a width ∆(α). Now consider a second state characterized by a second distribution P′(α) centered at α′_0. These two states are clearly distinguishable if the two distributions do not overlap. In particular two different macroscopic classical configurations of α should have negligible overlap in their probabilities. A minimum condition for A to faithfully represent α is that two probability distributions for A will not overlap if the two corresponding distributions do not overlap for α. In other words two configurations which disagree on the value of α must be represented by probability distributions in A which are almost orthogonal. We will regard this to be a minimal requirement for a faithful representation of a bulk variable by a corresponding holographic variable.
The Resolution
Let us now ask whether the high dimension operators Φ n are a faithful representation of the size of an object at the center of AdS space. From what we have said in the previous section the question comes down to whether or not the probability distributions for the Φ's are orthogonal or almost orthogonal for two classically distinguishable values of the size ρ. We emphasize again that we are working in a Wilsonian framework where it is assumed that the field theory is defined by a concrete regularized system.
The Φ_n's are defined to have vanishing vacuum expectation values. We must also specify a convention for normalizing them. We follow the same convention as in [9], namely the two point function $\langle \Phi_n(x)\Phi_n(x')\rangle$ is of order one at unit coordinate separation. Now consider the width of the probability distribution for Φ_n, in other words the fluctuation ∆ in Φ_n:
$$\Delta^2 = \langle \Phi_n^2 \rangle - \langle \Phi_n \rangle^2 .$$
Obviously if the difference in expectation values of Φ_n for two distinct configurations is much less than ∆ then these variables do not faithfully represent the variables they were intended to describe.
The scalar fields Φ_n are single-trace operators built from the fundamental scalars: X stands for the six fundamental scalars of maximally supersymmetric SU(N) Super Yang-Mills Theory, the trace is over the adjoint representation of SU(N), and X^l represents a polynomial of order l in the X's. The dimension of Φ_n is n = l + 4. The operators are normal ordered meaning that their vacuum expectation value has been subtracted out. They are normalized so that their two point function at unit coordinate separation is of order one. Now consider the fluctuation in Φ_n. This is given by the square root of the connected two point function at vanishing separation. In the continuum theory this fluctuation will be divergent. In the Wilsonian cutoff theory the fluctuation will be of order $\delta_0^{-n}$, which we assume is extremely large. Thus unless these operators are somehow further regulated the fluctuation is divergent. This is true for any local operator Φ. It means that Φ can not faithfully describe anything. A measurement of Φ gives completely random results in any state. This point was made forcefully in a famous paper by Bohr and Rosenfeld in the earliest days of quantum field theory. According to Bohr and Rosenfeld the correct observables for a quantum field theory are what we would today call "regulated" fields. This entails introducing a regulator scale δ chosen to be much larger than the cutoff scale δ_0. The observables are defined by some form of smearing or point-splitting of the composite operators Φ. This will be discussed further in the next section.
The regulated fluctuation in Φ_n is of order
$$\Delta \sim \delta^{-n}.$$
This follows from dimensional analysis and the fact that Φ_n has mass dimension n = l + 4. Evidently from eq. (1.1), the condition that the expectation value of Φ_n is larger than the fluctuation is
$$e^{c\rho l} \gtrsim \delta^{-n}.$$
Since ρ is defined to be the size of an object measured in units of R, it will vanish in the limit R → ∞. Thus we find that the inequality is satisfied only if
$$\delta \sim 1.$$
In other words the operators Φ_n must not only be regulated but the regulator scale has to be comparable to the coordinate distance of the object from the AdS boundary. The meaning of this is clear. For an operator to faithfully represent a property of a small object near the center of AdS, it must be non-local as described in [6][7].
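Spelling out the estimate (a compact restatement of the argument just given, using only the scalings already quoted above):
\[
\langle \Phi_n \rangle \sim e^{c\rho l}, \qquad \Delta \sim \delta^{-n}, \qquad n = l+4,
\]
\[
e^{c\rho l} \gtrsim \delta^{-(l+4)} \;\Longrightarrow\; \ln\frac{1}{\delta} \lesssim \frac{c\rho\, l}{l+4} < c\rho ,
\]
and since ρ → 0 as R → ∞, the regulator scale is forced to δ → 1, i.e. smearing over the whole boundary sphere.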
Regulating Φ
Granted that we must regulate the operators Φ, the question arises as to exactly how to do so. We will begin with an implicit construction. The problem with the unregulated operators is that they have large matrix elements connecting two very high energy states. In calculating their fluctuation most of the contribution comes from these high energy intermediate states. We can easily regulate the operators by simply throwing away the matrix elements between states whose energies differ by more than $\delta^{-1}$. Equivalently we can integrate the operators over time with a smooth test function with support over a time interval ∼ δ. By solving the equations of motion we can express the regulated operator in terms of operators at a fixed time. The result will be spatially nonlocal over a scale δ.
As we have seen in the previous section, δ must be ∼ 1 so that the regulated operator is nonlocal over the entire boundary sphere.
On the other hand smearing an operator over time will not change its expectation value in a time independent configuration. Thus, for such configurations, the local operator and its nonlocal counterpart have the same expectation value. This accounts for the results in [9]. However, for time dependent configurations such as those described in [6][7] only the nonlocal operator faithfully represents the relevant instantaneous property of a small object near r = 0.
The field theory of interest in this paper is a gauge theory in which all fundamental fields are in the adjoint representation. If the bare theory is a Hamiltonian lattice gauge theory, then any operator at a fixed time can be expressed in terms of generalized Wilson loops in which the Wilson loop is "decorated" with insertions of adjoint fields. The regulated operators will be expressible as linear superpositions of such Wilson loops of size δ.
Horowitz and Hubeny provide important information on how to decorate the Wilson loop in order to describe particular features of small objects.
Another important example concerns the "precursors" described in [6] [7]. Suppose that an event takes place near the center of AdS which results in the emission of a wave propagating towards the boundary. Bulk causality ensures that all local supergravity fields evaluated within a neighborhood of the boundary will retain their original expectation values until the wave itself arrives at the boundary. Therefore, for a period of time of order one, all local QFT operators corresponding to the bulk supergravity fields will retain their original expectation values carrying no information about the wave. At some time, when the wave arrives at the boundary (t = 0), some local operators in the QFT will begin to oscillate. According to the AdS/CFT correspondence, their expectation value at t = 0 will be given by the boundary data of the wave. Furthermore, their expectation value will be insensitive to the R → ∞ limit and proportional to the amplitude of the wave. Thus, in regulating the local operators at t = 0 so as to keep the signal bigger than their fluctuations, we only need to introduce a cutoff of order the width of the wave pulse. Now to find the non-local "precursors" describing the wave at an earlier time, when say the wave is at co-ordinate distance δ from the boundary, we use the equations of motion to express the regulated local operators at t = 0 in terms of operators at t = −δ. Then, the "precursors" will be spatially non-local over a scale δ. The results of [7] suggest that the resulting non-local operator will involve superpositions of Wilson loops of size δ.
A point worth mentioning involves the possibility of constructing operators with small fluctuation by spatially averaging Φ over Ω. It is not hard to see that this diminishes its fluctuation by a factor $\delta^{3/2}$. This would have no important effect on our conclusion.
Finally we want to emphasize that expectation values are not the observables of a system. The observables representing the results of measurements have uncertainties. A correct representation of a variable should not only represent its expectation value but also its entire probability distribution. The wild fluctuation of local fields makes them bad representations of weakly fluctuating positions and sizes of macroscopic objects. | 2014-10-01T00:00:00.000Z | 2000-11-17T00:00:00.000 | {
"year": 2000,
"sha1": "2819dfb80a033034fecc58bb5f789632d79d465f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2819dfb80a033034fecc58bb5f789632d79d465f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235489958 | pes2o/s2orc | v3-fos-license | Distilling effective supervision for robust medical image segmentation with noisy labels
Despite the success of deep learning methods in medical image segmentation tasks, the human-level performance relies on massive training data with high-quality annotations, which are expensive and time-consuming to collect. The fact is that there exist low-quality annotations with label noise, which leads to suboptimal performance of learned models. Two prominent directions for segmentation learning with noisy labels include pixel-wise noise robust training and image-level noise robust training. In this work, we propose a novel framework to address segmenting with noisy labels by distilling effective supervision information from both pixel and image levels. In particular, we explicitly estimate the uncertainty of every pixel as pixel-wise noise estimation, and propose pixel-wise robust learning by using both the original labels and pseudo labels. Furthermore, we present an image-level robust learning method to accommodate more information as the complements to pixel-level learning. We conduct extensive experiments on both simulated and real-world noisy datasets. The results demonstrate the advantageous performance of our method compared to state-of-the-art baselines for medical image segmentation with noisy labels.
Introduction
Image segmentation plays an important role in biomedical image analysis. With rapid advances in deep learning, many models based on deep neural networks (DNNs) have achieved promising segmentation performance [1]. The success relies on massive training data with high-quality manual annotations, which are expensive and time-consuming to collect. Especially for medical images, the annotations heavily rely on expert knowledge. The fact is that there exist lowquality annotations with label noise. Many studies have shown that label noise can significantly affect the accuracy of the learned models [2]. In this work, we address the following problem: how to distill more effective information on noisy labeled datasets for the medical segmentation tasks?
Many efforts have been made to improve the robustness of a deep classification model from noisy labels, including loss correction based on label transition matrix [3,4,5], reweighting samples [6,7] , selecting small-loss instances [8,9], etc. Although effective on image classification tasks, these methods cannot be straightforwardly applied to the segmentation tasks [10].
There are some deep learning solutions for medical segmentation with noisy labels. Previous works can be categorized into two groups. Firstly, some methods are proposed to against label noise using pixel-wise noise estimation and learning. For example, [11] proposed to learn spatially adaptive weight maps and adjusted the contribution of each pixel based on meta-reweighting framework. [10] proposed to train three networks simultaneously and each pair of networks selected reliable pixels to guide the third network by extending the co-teaching method. [12] employed the idea of disagreement strategy to develop label-noiserobust method, which updated the models only on the pixel-wise predictions of the two models differed. The second group of methods concentrates on imagelevel noise estimation and learning. For example, [13] introduced a label quality evaluation strategy to measure the quality of image-level annotations and then re-weighted the loss to tune the network. To conclude, most existing methods either focus on pixel-wise noise estimation or image-level quality evaluation for medical image segmentation.
However, when evaluating the label noise degree of a segmentation task, we not only judge whether image-level labels are noisy, but also pay attention to which pixels in the image have pixel-wise noisy labels. There are two types of noise for medical image segmentation tasks: pixel-wise noise and image-level noise. Despite the individual advances in pixel-wise and image-level learning, their connection has been underexplored. In this paper, we propose a novel two-phase framework PINT (Pixel-wise and Image-level Noise Tolerant learning) for medical image segmentation with noisy labels, which distills effective supervision information from both pixel and image levels.
Concretely, we first propose a novel pixel-wise noise estimation method and corresponding robust learning strategy for the first phase. The intuition is that the predictions under different perturbations for the same input would agree on the relative clean labels. Based on agreement maximization principle, our method relabels the noisy pixels and further explicitly estimates the uncertainty of every pixel as pixel-wise noise estimation. With the guidance of the estimated pixel-wise uncertainty, we propose pixel-wise noise tolerant learning by using both the original pixel-wise labels and generated pseudo labels. Secondly, we propose image-level noise tolerant learning for the second phase. For pixel-wise noise-tolerant learning, the pixels with high uncertainty tends to be noisy. However, there are also some clean pixels which show high uncertainty when they lie in the boundaries. If only pixel-wise robust learning is considered, the network will inevitably neglect these useful pixels. We extend pixel-wise robust learning to image-level robust learning to address this problem. Based on the pixel-wise uncertainty, we calculate the image-level uncertainty as the image-level noise estimation. We design the image-level robust learning strategy according to the original image-level labels and pseudo labels. Our image-level method could distill more effective information as the complement to pixel-level learning. Last, to show that our method improves the robustness of deep learning on noisy labels, we conduct extensive experiments on simulated and real-world noisy datasets. Experimental results demonstrate the effectiveness of our method.
Pixel-wise robust learning
Pixel-wise noise estimation. We study the segmentation tasks with noisy labels for 3D medical images. To satisfy the limitations of GPU memory, we follow the inspiration of the mean-teacher model [14]. We formulate the proposed PINT approach with two deep neural networks. The main network is parameterized by θ and the auxiliary network is parameterized by θ̃, which is computed as the exponential moving average (EMA) of θ. At training step t, θ̃ is updated as $\tilde{\theta}_t = \gamma\tilde{\theta}_{t-1} + (1-\gamma)\theta_t$, where γ is a smoothing coefficient. Fig.1 shows the pixel-wise noise tolerant learning framework.
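As a minimal sketch (not the authors' code), the EMA update of the auxiliary network can be written in PyTorch as follows; the function name and the decay value are illustrative.

import torch

@torch.no_grad()
def ema_update(aux_net, main_net, gamma=0.99):
    # theta_tilde_t = gamma * theta_tilde_{t-1} + (1 - gamma) * theta_t
    for p_aux, p_main in zip(aux_net.parameters(), main_net.parameters()):
        p_aux.mul_(gamma).add_(p_main, alpha=1.0 - gamma)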
For each mini-batch of training data, we generate synthetic inputs {X_m}, m = 1, · · · , M, on the same images with different perturbations. Formally, we consider a mini-batch of data (X, Y) sampled from the training set, where X = {x_1, · · · , x_K} are K samples, and Y = {y_1, · · · , y_K} are the corresponding noisy labels. In our study, we choose Gaussian noises as the perturbations. Afterwards, we perform M stochastic forward passes on the auxiliary network θ̃ and obtain a set of probability vectors {p_m}, m = 1, · · · , M, for each pixel in the input. In this way, we choose the mean prediction as the pseudo label of the v-th pixel: $\bar{p}_v = \frac{1}{M}\sum_{m=1}^{M} p^m_v$, where $p^m_v$ is the probability of the m-th auxiliary network pass for the v-th pixel. Inspired by the uncertainty estimation in Bayesian networks [15], we choose the entropy as the metric to estimate the uncertainty. When a pixel-wise label tends to be clean, it is likely to have a peaky prediction probability distribution, which means a small entropy and a small uncertainty. Conversely, if a pixel-wise label tends to be noisy, it is likely to have a flat probability distribution, which means a large entropy and a high uncertainty. As a result, we regard the uncertainty of every pixel as the pixel-wise noise estimation: $u_v = -\sum_{c}\bar{p}_{v,c}\log \bar{p}_{v,c}$, where u_v is the uncertainty of the v-th pixel, $\bar{p}_{v,c} = \mathbb{E}_m[p^m_{v,c}]$ is the mean predicted probability of class c, and E is the expectation operator. The relationship between label noise and uncertainty is verified in Experiments 3.2.
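A possible implementation of this estimation step is sketched below, assuming a PyTorch segmentation network whose softmax output has shape (B, C, H, W, D); the way perturbations are injected (additive Gaussian noise on the input), the function name, and the small constant inside the logarithm are our assumptions rather than details taken from the paper.

import torch

def pseudo_label_and_uncertainty(aux_net, x, M=4, noise_std=0.1):
    # Run M stochastic forward passes of the auxiliary (EMA) network on the
    # same input under different Gaussian perturbations.
    probs = []
    with torch.no_grad():
        for _ in range(M):
            x_noisy = x + noise_std * torch.randn_like(x)
            probs.append(torch.softmax(aux_net(x_noisy), dim=1))   # (B, C, H, W, D)
    p_mean = torch.stack(probs, dim=0).mean(dim=0)                  # mean prediction per pixel
    # Pseudo label: soft label is the mean probability; hard label is its argmax.
    pseudo_soft = p_mean
    pseudo_hard = p_mean.argmax(dim=1)
    # Pixel-wise uncertainty: entropy of the mean predicted distribution.
    u = -(p_mean * torch.log(p_mean + 1e-8)).sum(dim=1)             # (B, H, W, D)
    return pseudo_soft, pseudo_hard, u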
Pixel-wise loss. We propose pixel-wise noise tolerant learning. Considering that the pseudo labels obtained by predictions also contain noisy pixels and the original labels also have useful information, we train our segmentation network leveraging both the original pixel-wise labels and pseudo pixel-wise labels. For the v-th pixel, the loss is formulated by $L_v = \alpha_v L^{seg}_v + (1-\alpha_v) L^{pse}_v$, where $L^{seg}_v$ is the pixel-wise loss between the prediction of the main network $f_v$ and the original noisy label $y_v$; $L^{seg}_v$ adopts the cross-entropy loss and is formulated by $L^{seg}_v = -\sum_c y_{v,c}\log f_{v,c}$. $L^{pse}_v$ is the pixel-wise loss between the prediction $f_v$ and the pseudo label $\hat{y}_v$; $\hat{y}_v$ is equal to $\bar{p}_v$ for the soft label and is the one-hot version of $\bar{p}_v$ for the hard label. $L^{pse}_v$ is designed as a pixel-level mean squared error (MSE) and is formulated by $L^{pse}_v = \|f_v - \hat{y}_v\|^2$. $\alpha_v$ is the weight factor which controls the importance of $L^{seg}_v$ and $L^{pse}_v$. Instead of manually setting a fixed value, we provide an automatic factor $\alpha_v$ based on the pixel-wise uncertainty $u_v$. We introduce $\alpha_v$ as $\exp(-u_v)$. If the uncertainty takes a large value, this pixel-wise label is prone to be noisy; the factor $\alpha_v$ tends to zero, which drives the model to neglect the original label and focus on the pseudo label. In contrast, when the value of the uncertainty is small, this pixel-wise label is likely to be reliable; the factor $\alpha_v$ tends to one and the model will focus on the original label. The rectified pixel-wise total loss could be written as $L_{pixel} = \frac{1}{N}\sum_{v=1}^{N}\left[\alpha_v L^{seg}_v + (1-\alpha_v) L^{pse}_v\right]$, where N is the number of pixels.
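The uncertainty-weighted pixel-wise loss described above can be sketched as follows, reusing the soft pseudo label and uncertainty from the previous snippet; the choice of the soft pseudo label and the final averaging over pixels are assumptions about details the text leaves open.

import torch
import torch.nn.functional as F

def pixel_wise_loss(logits, noisy_label, pseudo_soft, u):
    # logits: (B, C, ...), noisy_label: (B, ...) integer mask,
    # pseudo_soft: (B, C, ...) mean prediction, u: (B, ...) pixel uncertainty.
    f = torch.softmax(logits, dim=1)
    # L_seg: cross-entropy against the original (possibly noisy) annotation.
    l_seg = F.cross_entropy(logits, noisy_label, reduction="none")   # (B, ...)
    # L_pse: mean squared error against the pseudo label, per pixel.
    l_pse = ((f - pseudo_soft) ** 2).mean(dim=1)                     # (B, ...)
    # alpha_v = exp(-u_v): low uncertainty -> trust the original label,
    # high uncertainty -> rely on the pseudo label instead.
    alpha = torch.exp(-u)
    return (alpha * l_seg + (1.0 - alpha) * l_pse).mean()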
Image-level robust learning
Image-level noise estimation. For our 3D volume, we regard every slice-level data as image-level data. Based on the estimated pixel uncertainty, the image-level uncertainty can be summarized as $U_i = \frac{1}{N_i}\sum_{v=1}^{N_i} u_v$, where $U_i$ is the uncertainty of the i-th image (i-th slice), v denotes the pixel and $N_i$ denotes the number of pixels in the given image. In this case, an image with small uncertainty tends to provide more information even if some of its pixels have noisy labels. The pipeline is similar to the pixel-wise framework and the differences lie in the noise estimation method and the corresponding robust total loss construction.
Image-level loss. For image-level robust learning, we train our segmentation network leveraging both the original image-level labels and pseudo image-level labels. For the i-th image, the loss is formulated by $L_i = \alpha_i L^{seg}_i + (1-\alpha_i) L^{pse}_i$, where $L^{seg}_i$ is the image-level cross-entropy loss between the prediction $f_i$ and the original noisy label $y_i$; $L^{pse}_i$ is the image-level MSE loss between the prediction $f_i$ and the pseudo label $\hat{y}_i$; the image-level pseudo label $\hat{y}_i$ is composed of the pixel-level $\hat{y}_v$. $\alpha_i$ is the automatic weight factor to control the importance of $L^{seg}_i$ and $L^{pse}_i$. Similarly, we provide the automatic factor $\alpha_i$ as $\exp(-U_i)$ based on the image-level uncertainty $U_i$. The rectified image-level total loss is expressed as $L_{image} = \frac{1}{I}\sum_{i=1}^{I}\left[\alpha_i L^{seg}_i + (1-\alpha_i) L^{pse}_i\right]$, where I is the number of images. Our PINT framework has two phases for training with noisy labels. In the first phase, we apply the pixel-wise noise tolerant learning. Based on the guidance of the estimated pixel-wise uncertainty, we can filter out the unreliable pixels and preserve only the reliable pixels. In this way, we distill effective information for learning. However, for segmentation tasks, some clean pixels also have high uncertainty when they lie in the marginal areas. Thus, we adopt the image-level noise tolerant learning for the second phase. Based on the estimated image-level uncertainty, we can learn from the images with relatively more information. That is, image-level learning enables us to investigate the easily neglected hard pixels based on the whole images. Image-level robust learning can be regarded as the complement to pixel-level robust learning.
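A corresponding sketch of the image-level (slice-level) weighting is given below, treating each slice along the last dimension of the volume as one image; the slicing axis and the final reductions are our assumptions.

import torch

def image_level_loss(l_seg, l_pse, u):
    # l_seg, l_pse, u: per-pixel maps of shape (B, H, W, D) produced as in the
    # pixel-wise sketch; each slice along the last dimension is one image.
    U = u.mean(dim=(1, 2))                    # (B, D): image-level uncertainty per slice
    L_seg_img = l_seg.mean(dim=(1, 2))        # (B, D): slice-level CE term
    L_pse_img = l_pse.mean(dim=(1, 2))        # (B, D): slice-level MSE term
    alpha = torch.exp(-U)                     # small uncertainty -> trust original labels
    return (alpha * L_seg_img + (1.0 - alpha) * L_pse_img).mean()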
Datasets and implementation details
Datasets. For synthetic noisy labels, we use the publicly available Left Atrial (LA) Segmentation dataset. We refer the readers to the Challenge [20] for more details. LA dataset provides 100 3D MR image scans and segmentation masks for training and testing. We split the 100 scans into 80 scans for training and 20 scans for testing. We randomly crop 112×112×80 sub-volumes as the inputs. All data are pre-processed by zero-mean and unit-variance intensity normalization. For real-world dataset, we have collected CT scans with 30 patients (average 72 slices / patient). The dataset is used to delineate the Clinical Target Volume (CTV) of cervical cancer for radiotherapy. Ground truths are defined as the reference segmentations generated by two radiation oncologists via consensus. Noisy labels are provided by the less experienced operators. 20 patients are randomly selected as training images and the remaining 10 patients are selected as testing images. We resize the images to 256 × 256 × 64 for inputs. Implementation details. The framework is implemented with PyTorch, using a GTX 1080Ti GPU. We employ V-net [16] as the backbone network and add two dropout layers after the L-5 and R-1 stage layers with dropout rate 0.5 [17]. We set the EMA decay γ as 0.99 referring to the work [14] and set batch size as 4. We use the SGD optimizer to update the network parameters (weight decay=0.0001, momentum=0.9). Gaussian noises are generated from a normal distribution. For the uncertainty estimation, we set M = 4 for all experiments to balance the uncertainty estimation quality and training efficiency. The effect of hyper-parameters M is shown in supplementary materials. Code will be made publicly available upon acceptance.
For the first phase, we apply the pixel-wise noise tolerant learning for 6000 iterations. At this time, the performance difference between different iterations is small enough in our experiments. The learning rate is initially set to 0.01 and is divided by 10 every 2500 iterations. For the second phase, we apply the imagelevel noise tolerant learning. When trained on noisy labels, deep models have been verified to first fit the training data with clean labels and then memorize the examples with false labels. Following the promising works [18,19], we adopt "high learning rate" and "early-stopping" strategies to prevent the network from memorizing the noisy labels. In our experiments, we set a high learning rate as lr=0.01 and the small number of iterations as 2000. All hyper-parameters are empirically determined based on the validation performance of LA dataset.
Results
Experiments on LA dataset. We conduct experiments on LA dataset with simulated noisy labels. We randomly select 25%, 50% and 75% training samples and further randomly erode/dilate the contours with 5-18 pixels to simulate the non-expert noisy labels. We train our framework with non-expert noisy annotations and evaluate the model by the Dice coefficient score and the average surface distance (ASD [voxel]) between the predictions and the accurate ground truth annotations [17]. We compare our PINT framework with multiple baseline frameworks. 1) V-net [16], which uses a cross-entropy loss to directly train the network on the noisy training data; 2) Reweighting framework [11]: a pixel-wise noise tolerant strategy based on the meta-reweight framework; 3) Tri-network [10]: a pixel-wise noise tolerant method based on a tri-network extended from the co-teaching method. 4) Pick-and-learn framework [13]: an image-level noise tolerant strategy based on image-level quality estimation. We use PNT to represent our PINT framework with only pixel-wise robust learning and INT to represent our PINT framework with only image-level robust learning. Our PINT framework contains two-phase pixel-wise and image-level noise tolerant learning. Table 1 illustrates the experimental results on the testing data. For the clean-annotated dataset, the V-net has the upper bound of average Dice 91.14% and average ASD 1.52 voxels. (1) We can observe that as the noise percentage increases (from clean labels to 25%, 50% and 75% noise rate), the segmentation performance of baseline V-net decreases sharply. In this case, the trained model tends to overfit to the label noise. When adopting a noise-robust strategy, the segmentation network begins to recover its performance. (2) For pixel-wise noise robust learning, we compare the Reweighting method [11] and our PNT with only pixel-wise distillation. Our method gains 2.92% improvement of Dice for 50% noise rate (83.24% vs 86.16%). For image-level noise robust learning, we compare Pick-and-learn [13] and our INT with only image-level distillation. Our method achieves 1.12% average gains of Dice for 75% noise rate (73.30% vs 74.42%). These results verify that our pixel-wise and image-level noise robust learning are effective. (3) We can observe that our PINT outperforms other baselines by a large margin. Moreover, compared to the PNT and INT methods, our PINT with both pixel-wise and image-level learning shows better performance, which verifies that our PINT can distill more effective supervision information.
Label noise and uncertainty. To investigate the relationship between pixel-wise uncertainty estimation and noisy labels, we illustrates the results of randomly selected samples on synthetic noisy LA dataset with 50% noise rate in Fig.2. The discrepancy between ground-truth and noisy label is approximated as the noise variance. We can observe that the noise usually exists in the areas with high uncertainty (shown in white color on the left). Inspired by this, we provide our pixel-wise noise estimation based on pixel-wise uncertainty awareness. Apart from noisy labels, pseudo labels also suffer from the noise effect. The best way for training robust model is to use both original noisy labels and pseudo labels. Furthermore, multiple examples are shown on the right. We observe that there are some clean pixels show high uncertainty when they lie in the boundaries. If only pixel-wise robust learning is considered, the network will neglect these useful pixels. Therefore, we propose image-level robust learning to learn from the whole images for distilling more effective information.
Visualization. As shown in Fig.3, we provide the qualitative results of the simulated noisy LA segmentation dataset and real-world noisy CTV dataset. For noisy LA segmentation, we show some random selected examples with 50% noise rate. Compared to the baselines, our PINT with both pixel-wise and image-level robust learning yields more reasonable segmentation predictions.
Experiments on real-world dataset. We explore the effectiveness of our approach on a real CTV dataset with noisy labels. Due to the lack of professional medical knowledge, the non-expert annotators often generate noisy annotations. The results are shown in Table 2. 'No noise' means we train the segmentation network with clean labels. The other methods including V-net, Re-weighting, Pick-and-learn, PNT, INT and PINT are the same as for LA segmentation. All the results show that our PINT with both pixel-wise and image-level robust learning can successfully recognize the clinical target volumes in the presence of noisy labels and achieves competitive performance compared to the state-of-the-art methods.
Conclusion
In this paper, we propose a novel framework PINT, which distills effective supervision information from both pixel and image levels for medical image segmentation with noisy labels. We explicitly estimate the uncertainty of every pixel as pixel-wise noise estimation, and propose pixel-wise robust learning by using both the original labels and pseudo labels. Furthermore, we present the image-level robust learning method to accommodate more informative locations as the complement to pixel-level learning. As a result, we achieve competitive performance on the synthetic noisy dataset and real-world noisy dataset. In the future, we will continue to investigate the joint estimation and learning of pixel and image levels for medical segmentation tasks with noisy labels. | 2021-06-22T01:15:56.167Z | 2021-06-21T00:00:00.000 | {
"year": 2021,
"sha1": "4c6ed20ec87a37063a348ff16afa3a833aed371d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2106.11099",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4c6ed20ec87a37063a348ff16afa3a833aed371d",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
241033294 | pes2o/s2orc | v3-fos-license | Autonomous Magnetic Navigation Framework for Active Wireless Capsule Endoscopy Inspired by Conventional Colonoscopy Procedures
In recent years, simultaneous magnetic actuation and localization (SMAL) for active wireless capsule endoscopy (WCE) has been intensively studied to improve the efficiency and accuracy of the examination. In this paper, we propose an autonomous magnetic navigation framework for active WCE that mimics the "insertion" and "withdrawal" procedures performed by an expert physician in conventional colonoscopy, thereby enabling efficient and accurate navigation of a robotic capsule endoscope in the intestine with minimal user effort. First, the capsule is automatically propelled through the unknown intestinal environment and a viable path is generated to represent the environment. Then, the capsule is autonomously navigated towards any point selected on the intestinal trajectory to allow accurate and repeated inspections of suspicious lesions. Moreover, we implement the navigation framework on a robotic system incorporated with advanced SMAL algorithms, and validate it in navigation in various tubular environments using phantoms and an ex-vivo pig colon. Our results demonstrate that the proposed autonomous navigation framework can effectively navigate the capsule in unknown, complex tubular environments with a satisfactory accuracy, repeatability and efficiency compared with manual operation.
procedure requires a long learning curve of the operator, the lack of skilled endoscopists and medical resources in rural areas has resulted in the urban-rural disparity in colorectal cancer incidence [5].
To address these issues, wireless capsule endoscopy (WCE) was introduced in 2000 as a painless and non-invasive tool to inspect the entire GI tract [6]. As shown in Fig. 1(b), the patient is required to undergo bowel cleansing and swallow a capsule containing tiny encapsulated cameras to capture images of the colon and rectum. However, the currently used WCE is passively pushed through the intestine by peristalsis, which results in a very long procedural time (the whole inspection of the GI tract takes about 8 ∼ 24 hours [7]) and the flexibility of examination is limited as the capsule cannot be intuitively positioned [7].
Active WCE is a concept of endowing WCE with active locomotion and precise localization to achieve autonomous navigation in the GI tract, which holds great promise to realize painless, efficient and accurate diagnosis and therapy with minimal manual operations [8]. In recent years, simultaneous magnetic actuation and localization (SMAL) technologies have been intensively studied for active WCE, which utilize the magnetic fields to actuate and locate the capsule at the same time [9] [10]. The actuating magnetic field can be generated by electromagnetic coils or external permanent magnets [11], resulting in different system design and control strategies. In view of a clinical translation of the SMAL technologies for active WCE, two fundamental questions need to be answered: 1) How to automatically propel a capsule through an unknown intestinal environment? 2) How to accurately control the capsule to reach a given point?
The first task focuses on the fast exploration of an unknown tubular environment by automatically propelling the robotic capsule, which we refer to as the "automatic propulsion" (AP) task; while the second task is similar to the trajectory following problem, where the environment is assumed known and the capsule is required to accurately follow a pre-defined trajectory, which we refer to as the "trajectory following" (TF) task.
Previous studies have separately dealt with the above two tasks. Several SMAL systems have been developed to address the AP task based solely on the magnetic feedback [12]- [15] or on a fusion of magnetic and visual feedback [4]. While these methods can determine the real-time movement of the capsule based on the estimated direction of the intestine, the actual moving trajectory of the capsule cannot be precisely controlled to allow repeated inspections at given points (e.g., suspicious lesions) for high-quality diagnosis. Other groups have exclusively focused on the TF task of a capsule to accurately follow manually defined trajectories [16]- [20]. However, the human intestinal environment is complex and unconstructed due to patient variability and unknown obstacles, so the simple, pre-defined trajectories may not be viable during clinical applications and would limit the flexibility of the examination.
In this work, we go beyond just investigating the AP or TF task for active WCE, and answer the two fundamental questions by providing a general framework for autonomous navigation of a capsule in unknown tubular environments. Our method is inspired by conventional colonoscopy procedures and combines efficient exploration of an unknown tubular environment and accurate tracking of given trajectories through a workflow that mimics the skills of an expert colonoscopist. Moreover, we develop a SMAL system based on an external sensor array and a reciprocally rotating magnetic actuator to implement the autonomous navigation framework, in order to achieve safe, efficient and accurate navigation of a capsule in the intestine. Our proposed framework is validated in realworld experiments on both phantoms and ex-vivo pig colons. The contributions are summarized as follows: • An autonomous navigation framework for active WCE is first proposed in this paper, which mimics an expert colonoscopist performing the "insertion" and "withdrawal" procedures in routine colonoscopy to achieve efficient exploration of an unknown tubular environment and accurate inspection of suspicious lesions with minimal user effort. • The proposed framework is implemented on a real robotic system, which is incorporated with adaptive magnetic localization and reciprocally rotating magnetic actuation-based AP and TF algorithms, and is validated in extensive navigation experiments in various tubular environments. • A thorough analysis of the experiment results is provided to compare different methods used in our framework in different tasks. The results demonstrate that our autonomous magnetic navigation method can achieve comparable accuracy and improved repeatability and efficiency compared to manual control.
II. METHODS
In this section, we first introduce the concept of autonomous navigation for active WCE that mimics the skills of an expert clinician in routine colonoscopy, and present a general workflow to realize efficient exploration and accurate inspection of the intestine with minimal user effort. Then, we introduce the magnetic actuation methods and describe the system and algorithms we use to implement the framework to navigate a capsule in unknown tubular environments.
A. Autonomous Navigation Framework for Active WCE
The conventional colonoscopy requires an experienced physician to perform the "insertion" and "withdrawal" procedures to manipulate the endoscope during the examination [21]. As shown in Fig. 2(d), the physician first pushes forward the endoscope from anus to cecum during the "insertion" procedure to find a viable path in the unknown intestine and perform preliminary detection of abnormalities, which usually takes a long time due to the unknown friction and shape of the intestinal environment. Subsequently, the "withdrawal" procedure is performed as shown in Fig. 2(e), during which the endoscope can be smoothly pulled back through the intestine and the entire intestine can be inspected carefully for high-quality diagnosis. This motivates us to design a general workflow for the autonomous robotic navigation of active WCE that mimics the insertion-withdrawal procedures in conventional colonoscopy, to first perform efficient exploration in the unknown intestinal environment and then conduct accurate navigation towards suspicious lesions. In other words, the AP and TF tasks are executed successively in our navigation framework to realize coarseto-fine navigation in an unknown tubular environment.
Fig. 3. Workflow of our proposed autonomous navigation framework for active WCE in a clinical setting. The capsule is actuated and localized with a robotic system to automate the "insertion" and "withdrawal" procedures in routine colonoscopy.
To better illustrate the proposed method, we compare our autonomous navigation framework for active WCE with the classic concepts in mobile robot navigation, as shown in Fig. 2(a-b), including the mapping, localization and path planning modules [22]. After the map of an unknown environment is generated, the mobile robot can navigate in the map by executing a planned path under motion control. In comparison, in the navigation framework for WCE (see Fig. 2(c)), during the AP step, the robotic capsule is actuated to explore the unknown intestinal environment, and a viable path is generated to represent the environment, as illustrated in Fig. 2(f). In the TF step, the robotic capsule can accurately navigate towards any selected point on the trajectory with control strategies, as shown in Fig. 2(g).
In view of a clinical integration, the AP and TF steps can be executed by a robotic system with minimal user effort through the workflow summarized in Fig. 3. Specifically, the capsule is first advanced through the entire intestine by the AP algorithm under the supervision of a physician. After the capsule reaches the end of the intestine (determined by the user), a smooth trajectory is generated to represent the intestine. This step mimics the "insertion" step in routine colonoscopy. Then, the user can select a set of suspicious points on the trajectory, and the capsule will be automatically controlled to reach the selected points using the TF algorithm. This step is associated with the "withdrawal" step in routine colonoscopy to facilitate accurate and repeated inspection of suspected lesions. Combining the advantages of the AP and TF techniques, this framework can realize both flexible and accurate navigation in the unknown intestinal environment, thereby enabling effective robotic capsule endoscopy with minimal manual operations.
B. Magnetic Actuation Methods
In this work, we consider a permanent magnet-based SMAL system to realize the autonomous navigation of the capsule. The magnetic actuation methods applied in existing permanent magnet-based SMAL systems for active WCE can be roughly classified into three categories: i) Dragging magnetic actuation (DMA), which directly uses the magnetic force generated by the magnetic actuator to drag or steer the capsule [4] [18] [19] [23]; ii) Continuously rotating magnetic actuation (CRMA), which uses a rotating magnetic field generated by a continuously rotating magnetic actuator for helical propulsion of a capsule with external thread in a tubular environment [13]- [15], [24]- [26], and iii) Reciprocally rotating magnetic actuation (RRMA), which uses a reciprocally rotating magnetic actuator to rotate a non-threaded capsule back and forth during propulsion in a tubular environment [27] [28]. Since the RRMA method [27] was introduced to reduce the risk of causing intestinal malrotation and enhance patient safety, and it was observed that the reciprocal motion of the capsule can help make the intestine stretch open to reduce the friction in narrow tubular environments compared with the DMA and CRMA methods, in this work, we apply the RRMA method in our autonomous magnetic navigation framework for safe and efficient actuation of the capsule.
C. SMAL System and Algorithms
The proposed autonomous navigation framework is implemented on a robotic system developed based on our previous work in [13], [15], [27] to allow closed-loop SMAL of a capsule. The design of the system is illustrated in Fig. 4(a), which uses an external spherical magnetic actuator controlled by a robotic arm to actuate a magnetic capsule inside the intestine, and the capsule is tracked by an external sensor array placed on the examination bed. This SMAL system only relies on the magnetic sensor data, which is not limited by line-of-sight compared with visual sensor-based systems [23]. Also, the use of the external sensor array for capsule localization can save the internal space of the capsule and reduce the power consumption compared with internal sensor-based systems [17], [19]. As illustrated in Fig. 4(b), the center line of the magnetic ring embedded in the capsule coincides with the principal axis of the capsule. The 6-D poses of the capsule and the actuator can be represented by their positions p_c, p_a, unit magnetic moments m_c, m_a, and unit rotation axes (heading directions) ω_c, ω_a [15].
1) Magnetic localization algorithm: In order to track the capsule in a large workspace, we adopt the adaptive magnetic localization algorithm in [29] to estimate the 6-D pose of the capsule in real time. As outlined in Algorithm 1, the capsule's 5-D pose is first initialized based on the measurements of all the sensors using the multiple objects tracking (MOT) method [30] (line 1). Subsequently, a sensor sub-array with the optimal layout is adaptively selected and activated from the entire sensor array based on the capsule's position to improve the localization accuracy and update frequency (lines 3-4). Then, after the 5-D pose of the capsule (p_c, m_c) is estimated using the MOT method (line 5), the heading direction of the capsule ω_c is estimated using the normal vector fitting (NVF) method [15].
Algorithm 1: Magnetic Localization Algorithm
Input: magnetic field measurements from the external sensor array B^(t), t = 1, 2, · · ·
Output: capsule's pose (p_c^(t), m_c^(t), ω_c^(t)), t = 1, 2, · · ·
2) Automatic propulsion algorithm: In order to control the movement of the actuator for the AP task, we use Algorithm 2 to calculate the desired pose of the actuator given the estimated capsule pose (p_c, m_c, ω_c) and velocity ṗ_c. Based on the method presented in our previous work [29], we adaptively change the actuator's heading direction according to the estimated moving speed of the capsule v_c to efficiently and robustly propel the capsule in complex-shaped environments (lines 2-3). However, different from [29], which uses CRMA, we employ RRMA [27] in this work to improve patient safety and reduce environmental resistance in the narrow tubular environment (line 5). [Algorithm 2 pseudocode not fully recovered; its line 5 calculates the capsule's moving speed v_c.]
3) Generation of the trajectory of the environment: After the AP step is finished, the Gaussian Mixture Model (GMM) based Expectation Maximization (EM) algorithm is used to cluster the points in the trajectory [15], and a smooth trajectory through these points is generated by cubic spline interpolation to represent the tubular environment as p traj (s), s ∈ [0.0, 1.0], where p traj (0) and p traj (1) represent the first and last points on the trajectory, respectively. The user is allowed to select any point on the trajectory to carry out repeated inspections in the following step.
4) Trajectory following algorithm: Finally, during the TF step, given the capsule's pose and velocity (p_c, ω_c, ṗ_c), intestinal trajectory p_traj, and user-selected goal point g, the system automatically actuates the capsule to reach the goal using the algorithm outlined in Algorithm 3. First, the desired trajectory is obtained by truncating the entire trajectory p_traj between the goal g and the point on p_traj closest to the capsule's current position p_c (Algorithm 3, lines 1-2). [The remaining Algorithm 3 pseudocode is not reproduced here; its output is the desired actuator pose (p_a^(t), m_a^(t), ω_a^(t)), t = 1, 2, · · · .]
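The trajectory-generation and truncation steps above can be sketched in a few lines of Python (the language the system's algorithms are implemented in, per Section III-A). This is only a minimal illustration under stated assumptions: the number of GMM components, the chord-length parameterization, the ordering of cluster centers by the time order of the localization samples, and the helper names are ours, not the authors'; the actual Algorithms 2-3 additionally compute the desired actuator pose, which is not reproduced here.

import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.interpolate import CubicSpline

def generate_trajectory(capsule_positions, n_clusters=20):
    # Cluster the recorded capsule positions with a GMM (fitted by EM) and
    # interpolate the cluster centers with a cubic spline, giving p_traj(s),
    # s in [0, 1].  Positions are assumed to be stored in time order and each
    # component is assumed to receive at least one sample.
    gmm = GaussianMixture(n_components=n_clusters).fit(capsule_positions)
    labels = gmm.predict(capsule_positions)
    order = np.argsort([np.mean(np.where(labels == k)[0]) for k in range(n_clusters)])
    centers = gmm.means_[order]
    # Parameterize the centers by normalized cumulative chord length.
    d = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(d))) / np.sum(d)
    return CubicSpline(s, centers)          # p_traj(s) -> 3-D position

def truncate_trajectory(p_traj, p_c, s_goal, n_samples=200):
    # Keep the part of the trajectory between the point closest to the current
    # capsule position p_c and the user-selected goal parameter s_goal.
    s = np.linspace(0.0, 1.0, n_samples)
    pts = p_traj(s)
    s_closest = s[np.argmin(np.linalg.norm(pts - p_c, axis=1))]
    lo, hi = sorted((float(s_closest), float(s_goal)))
    return p_traj(np.linspace(lo, hi, n_samples))   # desired waypoints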
III. EXPERIMENTS AND RESULTS
In order to investigate the feasibility of the proposed autonomous navigation framework, we conducted a set of real-world experiments to study the navigation performance of our system in unknown, complex tubular environments. The results are reported and discussed in this section.
A. System Setup
The real-world system setup for our navigation experiments can be seen in Fig. 4. An actuator consisting of a motor (RMDL-90, GYEMS) and a spherical permanent magnet (diameter 50mm, NdFeB, N42 grade) is installed at the end-effector of a 6-DoF serial robotic manipulator (5-kg payload, UR5, Universal Robots). The capsule (diameter 16mm, length 35mm) comprises a 3D-printed shell (Polylactic Acid, UP300 3D printer, Tiertime) and a permanent magnetic ring (outer diameter 12.8mm, inner diameter 9mm, and length 15mm, NdFeB, N38SH grade). The large external sensor array comprises 80 three-axis magnetic sensors (MPU9250, InvenSense) arranged in an 8 × 10 grid with a spacing of 6cm to cover the entire abdominal region of the patient. The output frequency of each sensor is 100Hz. Ten USB-I2C adaptors (Ginkgo USB-I2C, Viewtool), a USB-CAN adaptor (Ginkgo USB-CAN, Viewtool) and a network cable are used for data transmission. All proposed algorithms are implemented with Python and run on a desktop (Intel i7-7820X, 32GB RAM, Win10).
B. Evaluation of Different Magnetic Actuation Methods in the AP Task
As the first step in our autonomous navigation framework, it is important to ensure safe and efficient AP of the capsule to explore the unknown tubular environment. We first compare the performance of the system in the AP task when using three different magnetic actuation methods, i.e., DMA, CRMA and RRMA. The experiments were conducted in a PVC tube and an ex-vivo pig colon, as shown in Fig. 5. We carried out five instances for each test, and the position of the actuator relative to the capsule was kept unchanged during all the experiments. The results are presented in Table I and Movie S1 (please refer to Section III-E). We found that all three magnetic actuation methods can successfully propel the capsule through the straight PVC tube (see Fig. 5(a)) in the five trials; however, the moving speed of the capsule using rotating magnetic actuation methods is much faster (∼ 10 times) than that under dragging-based actuation. This is mainly because the rotating magnetic actuation can reduce the friction between the capsule and the tube to accelerate the exploration in the unknown tubular environment. In the experiments in the ex-vivo pig colon (see Fig. 5(b)), since the friction becomes larger, the capsule cannot be effectively propelled using DMA. While CRMA has a lower success rate of 40%, RRMA can still reach a success rate of 100% to advance the capsule through the environment with a speed of about 2.5mm/s. It is observed in the experiments that CRMA can occasionally cause malrotation of the intestinal wall, which may increase the environmental resistance and hinder the advancement of the capsule. By contrast, RRMA did not cause malrotation of the intestine in any of the experiments, which shows the potential of RRMA to reduce patient discomfort and improve patient safety [27]. Moreover, since the reciprocal rotation of the capsule helps make the intestine stretch open during propulsion, the environmental resistance can be reduced to make the capsule easily pass through the narrow tubular environment, which is important to improve the overall efficiency of the autonomous navigation.
C. Evaluation of Different Magnetic Actuation Methods in the TF Task
We further evaluate the performance of the system using different magnetic actuation methods in the TF task in the same environments as shown in Fig. 5. Five instances of each test were conducted, with the relative pose of the actuator to the capsule set the same as in the AP task. The desired trajectory for each tubular environment is manually specified. As can be seen in Table II and Movie S1 (please refer to Section III-E), the DMA method has the worst tracking accuracy in the PVC tube and cannot finish the trajectory following task in the ex-vivo pig colon. In contrast, we found that the CRMA and RRMA methods can successfully actuate the capsule to follow the pre-defined trajectories in both environments, and RRMA achieves the best tracking accuracy. This shows that it is difficult for the DMA method to overcome the large friction in narrow tubular environments to complete the TF task, while the rotating-based actuation methods can inherently reduce the friction. In addition, the continuous rotation would cause a shift in the capsule position during the actuation, which would reduce the tracking accuracy in the TF task. The results suggest that the RRMA method can realize both robust and accurate trajectory following of the capsule given the desired trajectory, which has the potential to realize repeated inspection of suspicious points in the intestine. According to the above analysis, it can be concluded that the RRMA method achieves the best performance in both the AP and TF tasks in terms of propulsion efficiency and tracking accuracy in the intestinal environment and has the potential to improve patient safety during the procedure.
D. Evaluation of the Overall Autonomous Magnetic Navigation Framework
In order to evaluate the performance of the overall autonomous magnetic navigation framework, we conducted the navigation experiments in five tubular environments with different complexities, including four PVC tubes with different shapes and lengths and an ex-vivo pig colon. Each experiment is composed of two steps, the "insertion" step and "withdrawal" step, as described in Section II-A. During the "insertion" step, AP under RRMA is applied to autonomously advance the capsule through the unknown tubular environment, and a smooth trajectory is generated. Then, several points are manually selected on the trajectory as suspicious lesions that need to be revisited by the capsule. Three different approaches are implemented and compared for the "withdrawal" step, including tele-operation, backward AP and TF. In the tele-operation mode, the user can observe the desired trajectory generated by AP and manually send motion commands to navigate the capsule. In the backward AP mode, the system will automatically propel the capsule along the inverse direction of the trajectory and does not use the trajectory generated by the forward AP. The TF mode is to automatically actuate the capsule using the method described in Section II-C. The experiment results are summarized in Table III. We first evaluate the system performance in the "insertion" step using the proposed AP method in different environments. As shown in Fig. 6(a-e) and Table III, we found that the capsule can be efficiently propelled through all the five unknown tubular environments with a moving speed ranging from 1mm/s to 5mm/s, and the system can successfully generate a smooth trajectory to represent each tubular environment.
[Fig. 6. Navigation experiments conducted in five different tubular environments to evaluate the performance of the overall autonomous magnetic navigation framework. (a-e) show the experiments during the "insertion" step, when the capsule is propelled through four PVC tubes with different shapes and lengths and an ex-vivo pig colon, respectively. (f-j) show the experiments during the "withdrawal" step, when the capsule is navigated towards the given goal points (marked in yellow) in the four PVC tubes and an ex-vivo pig colon, respectively.]
Then, we take a look at the "withdrawal" step (see Fig. 6(f-j)) to assess the system's ability to allow accurate navigation towards suspicious lesions. One goal point is set in Tube No.1 as it has a relatively short length. In the other tubular environments, two goal points are set and the capsule is required to reach the first goal, then reach the second goal, and finally revisit the first goal. We evaluate the three methods (i.e., tele-operation, backward AP and TF) based on the accuracy (distance between the final position of the capsule and the goal), repeatability (distance between the two final positions of the capsule navigating towards the same goal twice) and the average speed to achieve the goal. The quantitative results are summarized in Table III. We found that the three methods show similar accuracy in all five environments, with a tracking error of 2 ∼ 3mm. However, the repeatability of the manual operation is the worst among all three methods, especially in the ex-vivo pig colon, which may be due to the low accuracy of visual localization. Also, the tele-operation is tedious and time-consuming as it requires the user to continuously observe the screen and click the control panel. The backward AP method can automatically propel the capsule backward, but it can only make decisions based on the current environmental information and thus cannot quickly navigate towards the goal. In contrast, the TF method achieves the best repeatability and efficiency in all five tubular environments, as it can take advantage of the knowledge of the intestine gained during the "insertion" step to better control the movement of the capsule. The slight deterioration in the tracking accuracy of TF in some experiments (e.g., in tubes No.2 and No.4) can be attributed to the fact that the TF algorithm will produce an overshoot to reach the target as soon as possible in the navigation process, but this slight decrease in accuracy (less than 1mm) does not affect the overall navigation performance.
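For concreteness, the three evaluation metrics can be written out directly; the following minimal sketch is ours and simply encodes the definitions given above (for the average speed, the traveled distance divided by the elapsed time is assumed).

import numpy as np

def accuracy(p_final, p_goal):
    # Distance between the capsule's final position and the goal point.
    return float(np.linalg.norm(np.asarray(p_final) - np.asarray(p_goal)))

def repeatability(p_final_1, p_final_2):
    # Distance between the two final positions reached for the same goal.
    return float(np.linalg.norm(np.asarray(p_final_1) - np.asarray(p_final_2)))

def average_speed(distance_mm, elapsed_s):
    # Mean speed (mm/s) over the run towards the goal.
    return distance_mm / elapsed_s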
E. Video Demonstration
Video demonstration of the three magnetic actuation methods in the AP and TF tasks can be seen in Movie S1 1 . Video demonstration of the autonomous magnetic navigation in different tubular environments can be seen in Movie S2 2 .
IV. CONCLUSIONS
In this paper, we present a framework for autonomous magnetic navigation of active WCE in an unknown tubular environment inspired by the procedures of conventional colonoscopy, and describe its potential use in a clinical setting. The automatic propulsion (AP) and trajectory following (TF) of a magnetic capsule are performed by a robotic system to mimic the "insertion" and "withdrawal" techniques performed by an expert physician in routine colonoscopy, in order to allow for efficient and accurate navigation in unknown intestinal environments. Our method is implemented on a real robotic system and validated in extensive navigation experiments in phantoms and an ex-vivo pig colon. Our results preliminarily demonstrate that the reciprocally rotating magnetic actuation method used in our system can achieve satisfactory performance in both the AP and TF tasks, and the overall framework for autonomous magnetic navigation of active WCE can effectively navigate the capsule towards desired positions in unknown, complex tubular environments with minimal user effort, which has the potential to reduce the examination time and improve the diagnostic outcome for WCE.
This technology can potentially advance the field of medical robotics by providing a general solution to autonomous positioning of a medical robot in an unknown tubular environment, which may improve the usability and clinical adaptation of active locomotion technologies for different medical applications. | 2021-11-04T01:15:33.252Z | 2021-11-03T00:00:00.000 | {
"year": 2021,
"sha1": "ce5c3f34d4adb645ea7c066450e9c18b000af4f0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ce5c3f34d4adb645ea7c066450e9c18b000af4f0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
26841622 | pes2o/s2orc | v3-fos-license | Inference for modulated stationary processes
We study statistical inferences for a class of modulated stationary processes with time-dependent variances. Due to non-stationarity and the large number of unknown parameters, existing methods for stationary, or locally stationary, time series are not applicable. Based on a self-normalization technique, we address several inference problems, including a self-normalized central limit theorem, a self-normalized cumulative sum test for the change-point problem, a long-run variance estimation through blockwise self-normalization, and a self-normalization-based wild bootstrap. Monte Carlo simulation studies show that the proposed self-normalization-based methods outperform stationarity-based alternatives. We demonstrate the proposed methodology using two real data sets: annual mean precipitation rates in Seoul from 1771-2000, and quarterly U.S. Gross National Product growth rates from 1947-2002.
Introduction
In time series analysis, stationarity requires that dependence structure be sustained over time, and thus we can borrow information from one time period to study model dynamics over another period; see Fan and Yao [20] for nonparametric treatments and Lahiri [29] for various resampling and block bootstrap methods. In practice, however, many climatic, economic and financial time series are non-stationary and therefore challenging to analyze. First, since dependence structure varies over time, information is more localized. Second, non-stationary processes often require extra parameters to account for time-varying structure. One way to overcome these issues is to impose certain local stationarity; see, for example, Dahlhaus [15] and Adak [1] for spectral representation frameworks and Dahlhaus and Polonik [16] for a time domain approach.
In this article we study a class of modulated stationary processes (see Adak [1]) whose variance changes over time in an unknown manner. In the special case of σ i ≡ 1, (1.1) reduces to the stationary case. If σ i = s(i/n) for a Lipschitz continuous function s(t) on [0, 1], then (1.1) is locally stationary. For the general non-stationary case (1.1), the number of unknown parameters is larger than the number of observations, and it is infeasible to estimate σ i . Due to non-stationarity and the large number of unknown parameters, existing methods that are developed for (locally) stationary processes are not applicable, and our main purpose is to develop new statistical inference techniques. First, we establish a uniform strong approximation result which can be used to derive a self-normalized central limit theorem (CLT) for the sample mean X̄ of (1.1). For the stationary case σ i ≡ 1, by Fan and Yao [20], under mild mixing conditions, √n(X̄ − µ) ⇒ N(0, τ^2), where τ^2 = Σ_{k∈Z} γ_k and γ_k = Cov(e_i, e_{i+k}). (1.2) For the modulated stationary case (1.1), it is non-trivial whether √n(X̄ − µ) has a CLT without imposing further assumptions on σ i and the dependence structure of e i . Moreover, even when the latter CLT exists, it is difficult to estimate the limiting variance due to the large number of unknown parameters; see De Jong and Davidson [18] for related work assuming a near-epoch dependent mixing framework. Zhao [41] studied confidence interval construction for µ in (1.1) under a block-wise asymptotically equal cumulative variance assumption. The latter assumption is rather restrictive and essentially requires that block averages be asymptotically independent and identically distributed (i.i.d.). In this article, we deal with the more general setting (1.1). Under a strong invariance principle assumption, we establish a self-normalized CLT with the self-normalizing constant adjusting for time-dependent non-stationarity. The obtained CLT is an extension of the classical CLT for i.i.d. data or stationary time series to modulated stationary processes. Furthermore, we extend the idea to linear combinations of means over different time periods, which allows us to address inference regarding mean levels over multiple time periods.
Second, we study the wild bootstrap for modulated stationary processes. Since the seminal work of Efron [19], a great deal of research has been done on the bootstrap under various settings, ranging from bootstrapping for i.i.d. data in Efron [19], wild bootstrapping for independent observations with possibly non-constant variances in Wu [39] and Liu [30], to various block bootstrapping and resampling methods for stationary time series in Künsch [27], Politis and Romano [34], Bühlmann [12] and the monograph Lahiri [29]. With the established self-normalized CLT, we propose a wild bootstrap procedure that is tailored to deal with modulated stationary processes: the dependence is removed through a scaling factor, and the non-constant variance structure of the original data is preserved in the wild bootstrap data-generating mechanism. Our simulation study shows that the wild bootstrap method outperforms the widely used stationarity-based block bootstrap.
Third, we address change-point analysis. The change-point problem has been an active area of research; see Pettitt [32] for proportion changes in binary data, Horváth [25] for mean and variance changes in Gaussian observations, Bai and Perron [8] for coefficient changes in linear models, Aue et al. [6] for coefficient changes in polynomial regression with uncorrelated errors, Aue et al. [7] for mean change in time series with stationary errors, Shao and Zhang [37] for change-points for stationary time series and the monograph by Csörgő and Horváth [14] for more discussion. Most of these works deal with stationary and/or independent data. Hansen [24] studied tests for constancy of parameters in linear regression models with non-stationary regressors and conditionally homoscedastic martingale difference errors. Here we consider where J is an unknown change point. The aforementioned works mainly focused on detecting changes in mean while the error variance is constant. On the other hand, researchers have also realized the importance of the variance/covariance structure in change point analysis. For example, Inclán and Tiao [26] studied change in variance for independent data, and Aue et al. [5] and Berkes, Gombay and Horváth [10] considered change in covariance for time series data. To our knowledge, there has been almost no attempt to advance change point analysis under the non-constant variances framework in (1.3). Andrews [4] studied change point problem under near-epoch dependence structure that allows for non-stationary processes, but his Assumption 1(c) on page 830 therein essentially implies that the process has constant variance. The popular cumulative sum (CUSUM) test is developed for stationary time series and does not take into account the time-dependent variances. Using the self-normalization idea, we propose a self-normalized CUSUM test and a wild bootstrap method to obtain its critical value. Our empirical studies show that the usual CUSUM tests tend to over-reject the null hypothesis in the presence of non-constant variances. By contrast, the self-normalized CUSUM test yields size close to the nominal level. Fourth, we estimate the long-run variance τ 2 in (1.2). Long-run variance plays an essential role in statistical inferences involving time series. Most works in the literature deal with stationary processes through various block bootstrap and subsampling approaches; see Carlstein [13], Künsch [27], Politis and Romano [34], Götze and Künsch [21] and the monograph Lahiri [29]. De Jong and Davidson [18] established the consistency of kernel estimators of covariance matrices under a near epoch dependent mixing condition. Recently, Müller [31] studied robust long-run variance estimation for locally stationary process. For model (1.1), the error process {e i } is contaminated with unknown standard deviations {σ i }, and we apply blockwise self-normalization to remove non-stationarity, resulting in asymptotically stationary blocks.
Fifth, the proposed methods can be extended to deal with the linear regression model where U i = (u i,1 , . . . , u i,p ) are deterministic covariates, and β = (β 1 , . . . , β p ) ′ is the unknown column vector of parameters. For p = 2, Hansen [23] established the asymptotic normality of the least-squares estimate of the slope parameter under a fairly general framework of non-stationary errors. While Hansen [23] assumed that the errors form a martingale difference array so that they are uncorrelated, the framework in (1.4) is more general in that it allows for correlations. On the other hand, Hansen [23] allowed the conditional volatilities to follow an autoregressive model, hence introducing stochastic volatilities. Phillips, Sun and Jin [33] considered (1.4) for stationary errors, and their approach is not applicable here due to the unknown non-constant variances σ 2 i . In Section 2.6 we consider self-normalized CLT for the least-squares estimator of β in (1.4). In the polynomial regression case u i,r = (i/n) r−1 , Aue et al. [6] studied a likelihoodbased test for constancy of β in (1.4) for uncorrelated errors with constant variance. Due to the presence of correlation and time-varying variances, it is more challenging to study the change point problem for (1.4) and this is beyond the scope of this article.
The rest of this article is organized as follows. We present theoretical results in Section 2. Sections 3-4 contain Monte Carlo studies and applications to two real data sets.
Main results
For sequences {a n } and {b n }, write a n = O(b n ), a n = o(b n ) and a n ≍ b n , respectively, if |a n /b n | < c 1 , a n /b n → 0 and c 2 < |a n /b n | < c 3 , for some constants 0 < c 1 , c 2 , c 3 < ∞. For q > 0 and a random variable e, write e ∈ L q if ‖e‖ q := {E(|e| q )} 1/q < ∞.
The uniform approximations in (2.2) are generally called strong invariance principle. The two Brownian motions {B t } and {B * t } may be defined on different probability spaces and hence are not jointly distributed, which is not an issue because our argument does not depend on their joint distribution. To see how to use (2.2), under H 0 in (1.3), consider Theorem 2.1 below presents uniform approximations for F j and V 2 j . Define For any c ∈ (0, 1], the following uniform approximations hold: Theorem 2.1 provides quite general results under (2.2). We now discuss sufficient conditions for (2.2). Shao [36] obtained sufficient mixing conditions for (2.2). In this article, we briefly introduce the framework in Wu [40]. Assume that e i has the causal representation e i = G(. . . , ε i−1 , ε i ), where ε i are i.i.d. innovations, and G is a measurable function such that e i is well defined. Let {ε ′ i } i∈Z be an independent copy of {ε i } i∈Z . Assume 2) holds with ∆ n = n 1/4 log(n), the optimal rate up to a logarithm factor.
For linear process
(2.2) holds with ∆ n = n 1/4 log(n). For many nonlinear time series, ‖e i − e ′ i ‖ 8 decays exponentially fast and hence (2.8) holds; see Section 3.1 of Wu [40]. From now on we assume (2.2) holds with ∆ n = n 1/4 log(n).
Remark 2.1. If e i are i.i.d. with E(e i ) = 0 and e i ∈ L q for some 2 < q ≤ 4, the celebrated "Hungarian embedding" asserts that the partial sums Σ_{j=1}^{i} e j satisfy a strong invariance principle with the optimal rate o a.s. (n 1/q ). Thus, it is necessary to have the moment assumption e i ∈ L 8 in Proposition 2.1 in order to ensure strong invariance principles for both S i and S * i in (2.1) with approximation rate n 1/4 log(n). On the other hand, one can relax the moment assumption by loosening the approximation rate. For example, by Corollary 4 in Wu [40], assume e i ∈ L 2q for some q > 2 (along with suitable dependence conditions); then (2.2) holds with ∆ n = n 1/ min(q,4) log(n).
As shown in Examples 2.1-2.3 below, r n and r * n in (2.4) often have tractable bounds.
Example 2.1. If σ i is non-decreasing in i, then σ n ≤ r n ≤ 2σ n and σ 2 n ≤ r * n ≤ 2σ 2 n . If σ i is non-increasing in i, then r n = σ 1 and r * n = σ 2 1 . If σ i are piecewise constants with finitely many pieces, then r n , r * n = O(1).
Then r n , r * n = O(n 1−γ ). If γ = 1, we obtain a locally stationary case with the time window i/n ∈ [0, 1]; if γ ∈ [0, 1), we have the infinite time window [0, ∞) as n/n γ → ∞, which may be more reasonable for data with a long time horizon.
Self-normalized central limit theorem
In this section we establish a self-normalized CLT for the sample averageX. To understand how non-stationarity makes this problem difficult, elementary calculation shows where γ k = Cov(e 0 , e k ). In the stationary case σ i ≡ 1, under condition ∞ k=0 |γ k | < ∞, τ 2 n → τ 2 , the long-run variance in (1.2). For non-constant variances, it is difficult to deal with τ 2 n directly, due to the large number of unknown parameters and complicated structure. See De Jong and Davidson [18] for a kernel estimator of τ 2 n under a near-epoch dependent mixing framework.
To attenuate the aforementioned issue, we apply the uniform approximations in Theorem 2.1. Assume that (2.10) below holds. Note that the increments B i − B i−1 of standard Brownian motions are i.i.d. standard normal random variables. By (2.6), n(X − µ) is equivalent to N (0, τ 2 Σ 2 n ) in distribution. By (2.7), V n /Σ n → 1 in probability. By Slutsky's theorem, we have Proposition 2.2.
Proposition 2.2 is an extension of the classical CLT for i.i.d. data or stationary processes to modulated stationary processes. If X i are i.i.d., then n(X − µ)/V n ⇒ N (0, 1). In Proposition 2.2, τ 2 can be viewed as the variance inflation factor due to the dependence of {e i }. For stationary data, the sample variance V 2 n /n is a consistent estimate of the population variance. Here, for non-constant variances case (1.1), by (2.7) in Theorem 2.1, V 2 n /n can be viewed as an estimate of the time-average "population variance" Σ 2 n /n. So, we can interpret the CLT in Proposition 2.2 as a self-normalized CLT for modulated stationary processes with the self-normalizing term V n , adjusting for non-stationarity due to σ 1 , . . . , σ n and τ 2 , accounting for dependence of {e i }. Clearly, parameters σ 1 , . . . , σ n are canceled out through self-normalization. Finally, condition (2.10) is satisfied in Example 2.2 with γ > 3/4 and Example 2.3 with β > −1/4.
In classical statistics, the width of confidence intervals usually shrinks as sample size increases. By Proposition 2.2 and Theorem 2.1, the width of the constructed confidence interval for µ is proportional to V n /n or, equivalently, Σ n /n. Thus, a necessary and sufficient condition for shrinking confidence interval is , the contribution of a new observation is negligible relative to its noise level.
, the length of confidence interval is proportional to Σ n /n ≍ n β−1/2 . In particular, if c 1 < σ i < c 2 for some positive constants c 1 and c 2 , then Σ n /n achieves the optimal rate O(n −1/2 ). If σ i ≍ log(i), then Σ n /n ≍ log(n)/ √ n.
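As a concrete illustration of Proposition 2.2, a two-sided confidence interval for µ can be read off from the self-normalized CLT. The following minimal sketch is ours, not part of the paper's code; it assumes V_n^2 = Σ_i (X_i − X̄)^2 (consistent with the interpretation of V_n^2/n as a sample variance above) and takes a consistent long-run variance estimate τ̂^2 as given (see Section 2.5).

import numpy as np
from scipy.stats import norm

def self_normalized_ci(x, tau_hat, level=0.95):
    # CI for mu based on n*(xbar - mu)/V_n => N(0, tau^2), with the
    # self-normalizer V_n^2 = sum_i (x_i - xbar)^2 (assumed form).
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    v_n = np.sqrt(np.sum((x - xbar) ** 2))
    z = norm.ppf(0.5 + level / 2.0)
    half_width = z * tau_hat * v_n / n      # width is proportional to V_n / n
    return xbar - half_width, xbar + half_width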
The same idea can be extended to linear combinations of means over multiple time periods. Suppose we have observations from k consecutive time periods T 1 , . . . , T k , each of the form (1.1) with different means, denoted by µ 1 , . . . , µ k , and each having time-dependent variances. Let ν = β 1 µ 1 + · · · + β k µ k for given coefficients β 1 , . . . , β k . For example, if we are interested in mean change from T 1 to T 2 , we can take ν = µ 2 − µ 1 ; if we are interested in whether the increase from T 3 to T 4 is larger than that from T 1 to T 2 , we can let ν = (µ 4 − µ 3 ) − (µ 2 − µ 1 ). Proposition 2.3 below extends Proposition 2.2 to multiple means. Proposition 2.3. Let ν = β 1 µ 1 + · · · + β k µ k . For T j , denote its sample size n j and its sample averageX(j). Assume that (2.10) holds for each individual time period T j and, for simplicity, that n 1 , . . . , n k are of the same order. Then
Wild bootstrap for self-normalized statistic
Recall σ i e i in (1.1). Suppose we are interested in the self-normalized statistic H n = n(X̄ − µ)/V n . For problems with small sample sizes, it is natural to use the bootstrap distribution instead of the convergence H n ⇒ N (0, τ 2 ) in Proposition 2.2. Wu [39] and Liu [30] have pioneered the work on the wild bootstrap for independent data with non-identical distributions. We shall extend their wild bootstrap procedure to the modulated stationary process (1.1).
Define the self-normalized statistic based on the following new data: Clearly, ξ i inherits the non-stationarity structure of σ i e i by writing Thus, {e * i } is a white noise sequence with long-run variance one. By Proposition 2.2, the scaled version H n /τ ⇒ N (0, 1) is robust against the dependence structure of {e i }, so we expect that H * n should be close to H n /τ in distribution.
Theorem 2.2. Let the conditions in Proposition 2.2 hold. Further assume
Let τ̂ be a consistent estimate of τ . Denote by P * the conditional law given {e i }. Then Theorem 2.2 asserts that H * n behaves like the scaled version H n /τ , with the scaling factor τ̂ coming from the dependence of {e i }. Here we use the sample mean X̄ in (1.1) to illustrate a wild bootstrap procedure to obtain the distribution of n(X̄ − µ)/(τ V n ) in Proposition 2.2.
(i) Apply the method in Section 2.5 to X 1 , . . . , X n to obtain a consistent estimate τ̂ of τ . (ii) Subtract the sample mean X̄ from the data to obtain the centered residuals X i − X̄, i = 1, . . . , n. (iii) Generate i.i.d. random signs α 1 , . . . , α n (independent of the data) and multiply them with the residuals to form the bootstrap data ξ b i . (iv) Compute the self-normalized bootstrap statistic H b n , where τ̂ b is a long-run variance estimate (see Section 2.5) for the bootstrap data ξ b i . (v) Repeat (iii)-(iv) many times and use the empirical distribution of those realizations of H b n as the distribution of n(X̄ − µ)/(τ V n ).
The proposed wild bootstrap is an extension of that in Liu [30] for independent data to modulated stationary case, and it has two appealing features. First, the scaling factor τ makes the statistic independent of the dependence structure. Second, the bootstrap data-generating mechanism is adaptive to unknown time-dependent variances {σ 2 i }. For the distribution of α i in step (iii), we use P(α i = −1) = P(α i = 1) = 1/2, which has some desirable properties. For example, it preserves the magnitude and range of the data. As shown by Davidson and Flachaire [17], for certain hypothesis testing problems in linear regression models with symmetrically distributed errors, the bootstrap distribution is exactly equal to that of the test statistic; see Theorem 1 therein.
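A rough Python sketch of steps (i)-(v) is given below. It is our illustration rather than the authors' code: the residuals are centered at the sample mean, the signs α_i are Rademacher as described above, `long_run_var` stands in for the Section 2.5 estimator, and the exact self-normalization of H_n^b is not written out in the text, so dividing by the bootstrap sample's own root sum of squared deviations is an assumption.

import numpy as np

def wild_bootstrap_distribution(x, long_run_var, n_boot=1000, rng=None):
    # Approximate the law of n*(xbar - mu)/(tau*V_n) by the wild bootstrap.
    # long_run_var(y) should return an estimate of the long-run variance of y,
    # e.g. via the block method of Section 2.5.
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    eps = x - x.mean()                            # step (ii): centered residuals
    stats = []
    for _ in range(n_boot):
        alpha = rng.choice([-1.0, 1.0], size=n)   # step (iii): Rademacher signs
        xi = alpha * eps                          # bootstrap data xi_i^b
        v_b = np.sqrt(np.sum((xi - xi.mean()) ** 2))
        tau_b = np.sqrt(long_run_var(xi))         # step (iv): scale from dependence
        stats.append(n * xi.mean() / (tau_b * v_b))
    return np.array(stats)                        # step (v): empirical distribution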
For the purpose of comparison, we briefly introduce the widely used block bootstrap for a stationary time series {X i } with mean µ. By (1.2), √ n(X − µ) ⇒ N (0, τ 2 ). Suppose that we want to bootstrap the distribution of √ n(X − µ). Let k n , ℓ n , I 1 , . . . , I ℓn be defined as in Section 2.5 below. The non-overlapping block bootstrap works in the following way: (i) Take a simple random sample of size ℓ n with replacement from the blocks I 1 , . . . , I ℓn , and form the bootstrap data X b 1 , . . . , X b n ′ , n ′ = k n ℓ n , by pooling together X i s for which the index i is within those selected blocks.
(ii) Compute Ξ n = √ n ′ {X̄ b − E * (X̄ b )}. (iii) Repeat (i)-(ii) many times and use the empirical distribution of Ξ n 's as the distribution of √ n(X̄ − µ).
In step (ii), another choice is the studentized version Ξ n = √ n ′ {X̄ b − E * (X̄ b )}/τ̂ b , where τ̂ b is a consistent estimate of τ based on the bootstrap data. Assuming stationarity and k n → ∞, the blocks are asymptotically independent and share the same model dynamics as the whole data, which validates the above block bootstrap. We refer the reader to Lahiri [29] for detailed discussions. For a non-stationary process, the block bootstrap is no longer valid, because individual blocks are not representative of the whole data. By contrast, the proposed wild bootstrap is adaptive to unknown dependence and the non-constant variance structure.
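For comparison, the non-overlapping block bootstrap of steps (i)-(iii) can be sketched as follows (again our illustration; equal-length blocks are assumed and any leftover observations at the end of the series are dropped).

import numpy as np

def block_bootstrap_distribution(x, k_n, n_boot=1000, rng=None):
    # Non-overlapping block bootstrap for sqrt(n')*(xbar^b - E*(xbar^b)),
    # assuming stationarity: resample whole blocks with replacement and
    # recompute the centered mean.
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    l_n = len(x) // k_n                       # number of blocks
    blocks = x[: l_n * k_n].reshape(l_n, k_n)
    e_star = blocks.mean()                    # E*(xbar^b) = mean of the kept data
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, l_n, size=l_n)  # step (i): sample l_n blocks
        xb = blocks[idx].ravel()              # pooled bootstrap data, n' = k_n * l_n
        stats.append(np.sqrt(xb.size) * (xb.mean() - e_star))   # step (ii)
    return np.array(stats)                    # step (iii): empirical distribution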
Change point analysis: Self-normalized CUSUM test
To test a change point in the mean of a process {X i }, two popular CUSUM-type tests (see Section 3 of Robbins et al. [35] for a review and related references) are whereτ 2 is a consistent estimate of the long-run variance τ 2 of {X i }, and Here c > 0 (c = 0.1 in our simulation studies) is a small number to avoid the boundary issue. For i.i.d. data, j(1 − j/n) is proportional to the variance of S X (j), so T 1 n is a studentized version of T 2 n . For i.i.d. Gaussian data, T 1 n is equivalent to likelihood ratio test; see Csörgő and Horváth [14]. Assume that, under null hypothesis, for a standard Brownian motion {B t } t≥0 . The above convergence requires finitedimensional convergence and tightness; see Billingsley [11]. By the continuous mapping theorem, For the modulated stationary case (1.3), (2.15) is no longer valid. Moreover, since T 1 n and T 2 n do not take into account the time-dependent variances σ 2 i , an abrupt change in variances may lead to a false rejection of H 0 when the mean remains constant. For example, our simulation study in Section 3.3 shows that the empirical false rejection probability for T 1 n and T 2 n is about 10% for nominal level 5%. To alleviate the issue of non-constant variances, we adopt the self-normalization approach as in previous sections. Recall F j and V j in (2.3). For each fixed cn ≤ j ≤ (1 − c)n, by Theorem 2.1 and Slutsky's theorem, F j /V j ⇒ N (0, τ 2 ) in distribution, assuming the negligibility of the approximation errors. Therefore, the self-normalization term V j can remove the time-dependent variances. In light of this, we can simultaneously self-normalize the two terms j i=1 X i and n i=j+1 X i in (2.14) and propose the self-normalized test statistic Theorem 2.3. Assume (2.2) holds. Let δ n → 0 be as in (2.10). Under H 0 , we have By Theorem 2.3, under H 0 , T SN n is asymptotically equivalent to max cn≤j≤(1−c)n | T n (j)|. Due to the self-normalization, for each j, the time-dependent variances are removed and T n (j) ∼ N (0, 1) has a standard normal distribution. However, T n (j) and T n (j ′ ) are correlated for j = j ′ . Therefore, { T n (j)} is a non-stationary Gaussian process with a standard normal marginal density. Due to the large number of unknown parameters σ i , it is infeasible to obtain the null distribution directly. On the other hand, Theorem 2.3 establishes the fact that, asymptotically, the distribution of T SN n in (2.16) depends only on σ 1 , . . . , σ n and is robust against the dependence structure of {e i }, which motivates us to use the wild bootstrap method in Section 2.3 to find the critical value of T SN n .
(v) Based on ǫ i in (ii), use the wild bootstrap method in Section 2.3 to generate synthetic data ξ 1 , . . . , ξ n , and use (i)-(iv) to compute the bootstrap test statistic T b n based on the bootstrap data ξ 1 , . . . , ξ n . (vi) Repeat (v) many times and find (1 − α) quantile of those T b n s.
As argued in Section 2.3, the synthetic data-generating scheme (v) inherits the time-varying non-stationarity structure of the original data. Also, the statistic T SN n is robust against the dependence structure, which justifies the proposed bootstrap method. If H 0 is rejected, the change point is then estimated by Ĵ = argmax cn≤j≤(1−c)n |T n (j)|.
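The mechanics of the test can be illustrated schematically. The sketch below is ours: `classical_cusum` encodes the standard (assumed) form of the T 2 n -type scan with S X (j) = Σ_{i≤j} X i − (j/n) Σ_i X i , while the self-normalized statistic T SN n of (2.16) is not reproduced in the text above and so enters only through the placeholder `scan_stat`; the bootstrap loop follows steps (v)-(vi), with the residuals formed here by removing the overall mean for simplicity.

import numpy as np

def classical_cusum(x, tau_hat):
    # Assumed standard CUSUM scan: max_j |S_X(j)| / (tau_hat * sqrt(n)).
    # The change point would be estimated by the argmax over j.
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.cumsum(x) - np.arange(1, n + 1) / n * np.sum(x)
    return float(np.max(np.abs(s[:-1])) / (tau_hat * np.sqrt(n)))

def wild_bootstrap_critical_value(x, scan_stat, alpha=0.05, n_boot=500, rng=None):
    # Steps (v)-(vi): sign-flip the centered data, recompute the scan statistic,
    # and return its (1 - alpha) empirical quantile.  `scan_stat` is a stand-in
    # for the statistic T_n^SN in (2.16).
    rng = np.random.default_rng(rng)
    eps = np.asarray(x, dtype=float) - np.mean(x)
    stats = [scan_stat(rng.choice([-1.0, 1.0], size=len(eps)) * eps)
             for _ in range(n_boot)]
    return float(np.quantile(stats, 1 - alpha))

For example, with a long-run variance estimate tau_hat, one could call wild_bootstrap_critical_value(x, lambda y: classical_cusum(y, tau_hat)) and reject the null when the observed scan statistic exceeds the returned quantile.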
If there is no evidence to reject H 0 , we briefly discuss how to apply the same methodology to test H̃ 0 : σ 1 = · · · = σ J = σ J+1 = · · · = σ n , that is, whether there is a change point in the variances σ 2 i . By (1.1), we have (X i − µ) 2 = σ 2 i + σ 2 i ζ i , where ζ i = e 2 i − 1 has mean zero. Therefore, testing a change point in the variances σ 2 i of X i is equivalent to testing a change point in the mean of the new data X̃ i = (X i − X̄) 2 .
Long-run variance estimation
To apply the results in Sections 2.2-2.4, we need a consistent estimate of the long-run variance τ 2 . Most existing works deal with stationary time series through various block bootstrap and subsampling approaches; see Lahiri [29] and references therein. Assuming a near-epoch dependent mixing condition, De Jong and Davidson [18] established the consistency of a kernel estimator of Var( n i=1 X i ), and their result can be used to estimate τ 2 n in (2.9) for the CLT of √ n(X − µ). However, for the change point problem in Section 2.4, we need an estimator of the long-run variance τ 2 of the unobservable process {e i }, so the method in De Jong and Davidson [18] is not directly applicable.
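The display giving the blockwise self-normalized estimator does not appear in the text above, so the sketch below (ours) shows only the simpler stationarity-based block estimate of τ^2 that Section 3 later calls the "non-normalized version", built from D j = √k n [X̄(j) − X̄]; the paper's estimator additionally self-normalizes each block to remove the time-varying scale, which is not attempted here.

import numpy as np

def block_long_run_variance(x, k_n):
    # Batch-means style estimate of tau^2: average of D_j^2 over the l_n
    # non-overlapping blocks of length k_n, with
    # D_j = sqrt(k_n) * (block mean - overall mean of the kept data).
    x = np.asarray(x, dtype=float)
    l_n = len(x) // k_n
    blocks = x[: l_n * k_n].reshape(l_n, k_n)
    d = np.sqrt(k_n) * (blocks.mean(axis=1) - blocks.mean())
    return float(np.mean(d ** 2))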
Some possible extensions
The self-normalization approaches in Sections 2.2-2.5 can be extended to linear regression model (1.4) with modulated stationary time series errors. The approach in Phillips, Sun and Jin [33] is not applicable here due to non-stationarity. For simplicity, we consider the simple case that p = 2, U i = (1, i/n), and β = (β 0 , β 1 ) ′ . Hansen [23] studied a similar setting for martingale difference errors. Denote byβ 0 andβ 1 the simple linear regression estimates of β 0 and β 1 given bŷ Then simple algebra shows that The long-run variance τ 2 can be estimated using the idea of blockwise self-normalization in Section 2.5. Let k n , ℓ n and I j be defined as in Section 2.5. Then we proposê (ii) Based on Z 1 , . . . , Z n , obtainτ with block length k.
(iii) Repeat (i)-(ii) many times, compute empirical MSE(k) as the average of realizations of (τ̂ − 1) 2 , and find the optimal k by minimizing MSE(k).
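The block-length selection loop can be sketched as follows. Since step (i) of the procedure is not legible above, we assume for illustration that the pilot series Z 1 , . . . , Z n is drawn with unit long-run variance (e.g., i.i.d. standard normal), which is consistent with measuring MSE(k) against the target value 1; `block_long_run_variance` is the earlier sketch and stands in for whichever τ̂ estimator is being tuned.

import numpy as np

def select_block_length(n, candidate_ks, n_rep=200, rng=None):
    # Pick the block length k minimizing the empirical MSE of tau_hat around 1.
    rng = np.random.default_rng(rng)
    mse = {}
    for k in candidate_ks:
        errs = []
        for _ in range(n_rep):
            z = rng.standard_normal(n)                        # step (i), assumed
            tau_hat = np.sqrt(block_long_run_variance(z, k))  # step (ii)
            errs.append((tau_hat - 1.0) ** 2)
        mse[k] = float(np.mean(errs))                         # step (iii)
    return min(mse, key=mse.get), mse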
We find that the optimal block length k is about 12 for n = 120, about 15 for n = 240, about 20 for n = 360, 600 and about 25 for n = 1200. where φ is the standard normal density, and 1 is the indicator function. The sequences A1-A4 exhibit different patterns, with piecewise constancy for A1, a cosine shape for A2, a sharp change around time n/2 for A3 and a gradual downtrend for A4. Let ε i be i.i.d. N(0, 1). For e i , we consider both linear and nonlinear processes. Without loss of generality we examine coverage probabilities based on 10 3 realized confidence intervals for µ = 0 in (1.1). We compare our self-normalization-based confidence intervals to some stationarity-based methods. For (1.1), if we pretend that the error process {ẽ i = σ i e i } is stationary, then we can use (1.2) to construct an asymptotic confidence interval for µ. Under stationarity, the long-run variance τ 2 of {ẽ i } can be similarly estimated through the block method in Section 2.5 by using the non-normalized version D j = √ k n [X̄(j) − X̄] in (2.25); see Lahiri [29]. Thus, we compare two self-normalization-based methods and three stationarity-based alternatives: self-normalization-based confidence intervals through the asymptotic theory in Proposition 2.2 (SN) and the wild bootstrap (WB) in Section 2.3; stationarity-based confidence intervals through the asymptotic theory (1.2) (ST), non-overlapping block bootstrap (BB) and studentized non-overlapping block bootstrap (SBB) in Section 2.3. From the results in Table 1, we see that the coverage probabilities of the proposed self-normalization-based methods (columns SN and WB) are close to the nominal level 95% for almost all cases considered. By contrast, the stationarity-based methods (columns ST, BB and SBB) suffer from substantial undercoverage, especially when dependence is strong (θ = 0.8 in Table 1(a) and β = 2.1 in Table 1(b)). For the two self-normalization-based methods, WB slightly outperforms SN.
Size and power study
In (1.3), we use the same setting for σ i and e i as in Section 3.2. For mean µ i , we consider µ i = λ1 i>40 , λ ≥ 0, and compare the test statistics T 1 n , T 2 n in (2.13) and T SN n in (2.16). First, we compare their size under the null with λ = 0. The critical value of T SN n is obtained using the wild bootstrap in Section 2.4; for T 1 n and T 2 n , their critical values are based on the block bootstrap in Section 2.3. In each case, we use 10 3 bootstrap samples, nominal level 5%, and block length k n = 10, and summarize the empirical sizes (under the null λ = 0) in Table 2 based on 10 3 realizations. While T SN n has size close to 5%, T 1 n and T 2 n tend to over-reject the null, and the false rejection probabilities can be three times the nominal level of 5%. Next, we compare the size-adjusted power. Instead of using the bootstrap methods to obtain critical values, we use 95% quantiles of 10 4 realizations of the test statistics when data are simulated directly from the null model so that the empirical size is exactly 5%. Figure 1 presents the power curves for combinations {A1-A4} × {B1 with θ = 0.4; B2 with β = 3.0} with 10 3 realizations each. For A1, T SN n outperforms T 1 n and T 2 n ; for A2-A4, there is a moderate loss of power for T SN n . Overall, T SN n has power comparable to other two tests. In practice, however, the null model is unknown, and when one turns to the bootstrap method to obtain the critical values, the usual CUSUM tests T 1 n and T 2 n will likely over-reject the null as shown in Table 2. In summary, with such small sample size and complicated time-varying variances structure, T SN n along with the wild bootstrap method delivers reasonably good power and the size is close to nominal level. Finally, we point out that the proposed self-normalization-based methods are not robust to models with time-varying correlation structures. For example, consider the model e i = 0.3e i−1 + ε i for 1 ≤ i ≤ 60 and e i = 0.8e i−1 + ε i for 61 ≤ i ≤ n, where ε i are i.i.d. N(0, 1). With k n = 10, the sizes (nominal level 5%) for the three tests T SN n , T 1 n , T 2 n are 0.154, 0.196, 0.223 for A1. Future research directions include (i) developing tests for change in the variance or covariance structure for (1.1) (See Inclán and Tiao [26], Aue et al. [5] and Berkes, Gombay and Horváth [10] for related contributions); and (ii) developing methods that are robust to changes in correlations.
Annual mean precipitation in Seoul during 1771-2000
The data set consists of annual mean precipitation rates in Seoul during 1771-2000; see Figure 2 for a plot. The mean levels seem to be different for the two time periods 1771-1880 and 1881-2000. Ha and Ha [22] assumed the observations are i.i.d. under the null hypothesis. As shown in Figure 2, the variations change over time. Also, the autocorrelation function plot (not reported here) indicates strong dependence up to lag 18. Therefore, it is more reasonable to apply our self-normalization-based test that is tailored to deal with modulated stationary processes. With sample size n = 230, by the method in Section 3.1, the optimal block length is about 15. Based on 10 5 bootstrap samples as described in Section 2.4, we obtain the corresponding p-values 0.016, 0.005, 0.045, 0.007, with block length k n = 12, 14, 16, 18, respectively. For all choices of k n , there is compelling evidence that a change point occurred at year 1880. While our result is consistent with that of Ha and Ha [22], our modulated stationary time series framework seems to be more reasonable. Denote by µ 1 and µ 2 the mean levels over pre-change and post-change time periods 1771-1880 and 1881-2000. For the two sub-periods with sample sizes 110 and 120, the optimal block length is about 12. With k n = 12, applying the wild bootstrap in Section 2.3 with 10 5 bootstrap samples, we obtain 95% confidence intervals [121.7, 161.3] for µ 1 , [100.9, 114.3] for µ 2 . For the difference µ 1 − µ 2 , with optimal block length k n = 15, the 95% wild bootstrap confidence interval is [19.6, 48.2]. Note that the latter confidence interval for µ 1 − µ 2 does not cover zero, which provides further evidence for µ 1 ≠ µ 2 and the existence of a change point at year 1880.
Quarterly U.S. GNP growth rates during 1947-2002
The data set consists of quarterly U.S. Gross National Product (GNP) growth rates from the first quarter of 1947 to the third quarter of 2002; see Section 3.8 in Shumway and Stoffer [38] for a stationary autoregressive model approach. However, the plot in Figure 3 suggests a non-stationary pattern: the variation becomes smaller after year 1985 whereas the mean level remains constant. Moreover, the stationarity test in Kwiatkowski et al. [28] provides fairly strong evidence for non-stationarity with a p-value of 0.088. With the block length k n = 12, 14, 16, 18, we obtain the corresponding p-values 0.853, 0.922, 0.903, 0.782, and hence there is no evidence to reject the null hypothesis of a constant mean µ. Based on k n = 15, the 95% wild bootstrap confidence interval for µ is [0.66%, 1.00%]. To test whether there is a change point in the variance, by the discussion in the last paragraph of Section 2.4, we considerX i = (X i − X n ) 2 . With k n = 12, 14, 16, 18, the corresponding p-values are 0.001, 0.006, 0.001, 0.010, indicating strong evidence for a change point in the variance at year 1984. In summary, we conclude that there is no change point in the mean level, but there is a change point in the variance at year 1984.
for help on improving the presentation and Kyung-Ja Ha for providing us the Seoul precipitation data. Zhao's research was partially supported by NIDA Grant P50-DA10075-15. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIDA or the NIH. | 2013-02-01T09:27:16.000Z | 2013-02-01T00:00:00.000 | {
"year": 2013,
"sha1": "a8a330e8b5ac58502be057e6146a6748bdd65194",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.3150/11-bej399",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "a8a330e8b5ac58502be057e6146a6748bdd65194",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
233714786 | pes2o/s2orc | v3-fos-license | Magnetostriction of {\alpha}-RuCl3 flakes in the zigzag phase
Motivated by the possibility of an intermediate U(1) quantum spin liquid phase in out-of-plane magnetic fields and enhanced magnetic fluctuations in exfoliated {\alpha}-RuCl3 flakes, we study magneto-Raman spectra of exfoliated multilayer {\alpha}-RuCl3 in out-of-plane magnetic fields of -6 T to 6 T at temperatures of 670 mK - 4 K. While the literature currently suggests that bulk {\alpha}-RuCl3 is in an antiferromagnetic zigzag phase with R3bar symmetry at low temperature, we do not observe R3bar symmetry in exfoliated {\alpha}-RuCl3 at low temperatures. While we saw no magnetic field driven transitions, the Raman modes exhibit unexpected stochastic shifts in response to applied magnetic field that are above the uncertainties inferred from Bayesian analysis. These stochastic shifts are consistent with the emergence of magnetostrictive interactions in exfoliated {\alpha}-RuCl3.
I. INTRODUCTION
The quantum spin liquid (QSL) is a long sought-after non-classical phase characterized by a topological order parameter [1][2][3] . QSLs may be critical to the development of topologically protected quantum computing platforms because they may host non-local excitations with Anyonic statistics. Amongst possible candidates that could host such a topologically protected phase, α-RuCl 3 has been extensively studied over the past several years because it can be approximately described by the analytically solvable Kitaev honeycomb model 4 . Several experimental efforts have reported features consistent with a QSL phase, including half-quantized thermal Hall conductance plateaus 5 , a scattering continuum in Raman spectroscopy [6][7][8] , neutron scattering 9,10 , and nuclear magnetic resonance 11,12 . Despite numerous reports of possible QSL signatures, many fundamental questions remained unanswered, both theoretically and experimentally. For example, while the room temperature structure is now accepted as C2/m 13,14 , the low temperature symmetry of α-RuCl 3 remains in question. Early reports suggested trigonal P3 1 12 15 and monoclinic C2/m 15-17 symmetries, but a) Electronic mail: yunyip@ornl.gov b) Electronic mail: lawriebj@ornl.gov; This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doepublic-access-plan). more recent reports have suggested the presence of rhombohedral R3 18,19 symmetry. Further, it remains unclear whether the Kitaev model exchange parameters should be antiferromagnetic or ferromagnetic. Additionally, a recent density functional renormalization group (DMRG) calculation suggested that commonly reported QSL phases induced by in-plane magnetic fields are missing in the DMRG result, but a U(1) QSL phase can be stabilized by out-of-plane magnetic fields 20 . The number of antiferromagnetic zigzag phases that exist before the onset of the QSL phase also remains in question, as does the effect of sample-to-sample variability 21 between ostensibly identical flakes and between exfoliated flakes and bulk crystals.
The layered Van der Waals nature of α-RuCl 3 enables the possibility of heterostructure assembly, proximity effect engineering, and strain engineering [22][23][24][25][26] . Exfoliated flakes a few layers thick are air-stable 27,28 , greatly increasing the flexibility of the workflow. On the other hand, reduced dimensionality yields stronger order-parameter fluctuations and eventually suppression of long-range order 29 . Stacking faults may also open up additional hopping pathways to stabilize the QSL phase 27 . Magnetic fluctuations have been reported to persist 30 or be enhanced 27,28 in the few-layer to monolayer limit. Furthermore, strain gradients may induce synthetic gauge fields that locally tune topological phases 31,32 . A recent first-principles calculation suggests that 2% strain is enough to drive monolayer α-RuCl 3 from the AFM zigzag phase to the spin-polarized phase. Topological devices with well-patterned QSL puddles may be possible with appropriate gauge field engineering.
Raman spectroscopy is a flexible and powerful tool for resolving sample symmetry as well as microscopic electron-phonon and phonon-magnon interactions. In addition to the above described characterization of C2/m and R3 symmetry, Raman spectroscopy has been used to characterize temperature dependent hysteresis 19 , several magnon modes, and a possible Majorana mode 8,33 . Here, motivated by a possible intermediate U(1) QSL phase in out-of-plane magnetic fields 20 and larger tunability in exfoliated α-RuCl 3 , we study the Raman spectra of exfoliated α-RuCl 3 in out-of-plane magnetic fields in a Faraday geometry 8 at temperatures as low as T = 670 mK.
II. SAMPLE DETAILS AND EXPERIMENTAL SETUP
Bulk α-RuCl 3 single crystals were grown by vapor transport 34 . The α-RuCl 3 flakes were mechanically exfoliated onto a 300 nm SiO 2 film on a Si substrate. Due to signal-to-noise-ratio (SNR) constraints and large parameter sweeps, we focused on thick flakes with lateral dimensions of tens of microns. Variable temperature Raman spectra were acquired for temperatures of 3 K -300 K in a Montana Instruments closed-cycle cryostat with a Princeton Instruments Isoplane SCT-320 spectrograph, a Pixis 400BR Excelon camera, and a 2400 line/mm grating. The sample was probed in a backscattering configuration (beam path c*, where c* is the outer product of the crystallographic a and b axes illustrated in Figure 1 (a)) with a 532.03 nm excitation at 2.0 mW power and 45 sec integration time. The laser excitation was removed with Semrock 532 nm RazorEdge ultrasteep dichroic and long-pass edge filters with cutoff at 90 cm −1 .
The T = 4 K and 670 mK magnetic field dependent Raman spectra were acquired in a customized Leiden dilution refrigerator with free space optics access that is described in further detail elsewhere 35 . Raman spectra were recorded with an Andor Kymera 193 spectrograph with a Newton EMCCD DU970P-BV camera and a 2400 line/mm grating. The exposure time was 1800 seconds per spectrum. The sample was probed in a backscattering configuration with a 2 mW (200 uW), 532.2096 nm laser excitation at 4K (670 mK). The excitation was filtered by a set of 3 volume Bragg gratings (Optigrate, 1 volume Bragg beam splitter and 2 volume Bragg notch filters). The Faraday geometry (B c* and beam path) induces a θ F (B) = −25.60 • /T polarization rotation due to the beam propagation through the objective.
Initial temperature dependent Raman spectra acquired as flake 1 (illustrated in Figure 1 (b)) warmed from T = 3 K to 270 K are shown in Figure 1 (c) and (d). All the Raman modes here are consistent with those reported in the literature. Using peak assignments from the R3 space group 36 for low temperature, we identify E 1 g at 116 cm −1 , E 2 g at 164 cm −1 , E 3 g at 272 cm −1 , E 4 g at 296 cm −1 , and A 1 1 g at 313 cm −1 . We note that the low energy tail below 125 cm −1 is not negligible and results in distortion of the lineshape of the E 1 g Fano peak at 116 cm −1 in this dataset (this is not the case for the other datasets taken with 3 volume Bragg gratings).
Additionally, less reported α-RuCl3 Raman modes at 222 cm⁻¹ and 345 cm⁻¹ are observed. The 222 cm⁻¹ peak has been attributed to stacking faults 30 or defects 36. It has been reported that thin flakes are more prone to stacking faults than single crystals 21, which is consistent with the fact that the α-RuCl3 flakes studied here are in the thin crystal to thick flake limit. The 345 cm⁻¹ peak has been attributed to A_1g^2 due to its XX polarization (parallel polarization) nature 30 and to defects 36. Notably, Li et al. 36 reported that both the 222 cm⁻¹ peak and the 345 cm⁻¹ peak only showed up for blue (488 nm) excitation but not for red (633 nm) excitation.
III. MAGNETO-RAMAN SPECTROSCOPY
2. Almost all the peaks appear to fluctuate as a function of magnetic field, an effect that is clear in the magnified 164 cm⁻¹ and 313 cm⁻¹ modes shown in Figure 2 (b-c) and (e-f).
The first observation can be simply explained by the fact that the angle α between the polarization and the crystal a, b axes was not fixed during the magnetic field sweep. The angle rotates at θ_F(B) = −25.60°/T due to Faraday rotation in the objective. Since the 313 cm⁻¹ mode is sensitive to XX polarization only, it is known to have a cos(2α) dependence 14, where α is the angle between the polarization and the a axis. The observed magnetic-field dependence of the 313 cm⁻¹ peak intensity is consistent with this polarization-rotation effect. The same explanation could, at first glance, be true for the second observation: as reported in Mai et al. 14, the spectral weights of the modes from the A_g series and B_g series are a function of α. It is natural to consider the possibility that a similar effect may be present here. However, the measurement reported by Mai et al. 14 was performed at room temperature, and the irreducible representation of the space group C2/m was assumed. For the low temperature spectra reported here, the likely space group R3 yields phonon modes at similar energies, and the E_g modes remain doubly degenerate rather than being split. Fluctuations as a function of magnetic field have not previously been observed for low temperature polarization resolved Raman spectroscopy of single crystal α-RuCl3 37. We subsequently performed a polarization sweep at zero magnetic field on flake 3. Figure 2 (g) illustrates the normalized 164 cm⁻¹ mode as a function of the polarization angle. A clear 2-fold oscillation is observed. Fitting the 164 cm⁻¹ peak to a pair of closely spaced peaks yields Figure 2 (h). This angular dependence is different from the expected R3 angular independence 37. It is also different from the reported angular dependence for C2/m, which has a 4-fold symmetry 14. However, it is worth pointing out that fits of the peak position may be affected by the baseline, which could, in principle, be affected by the angle α. Figure 2 (i) illustrates a set of Raman spectra on flake 3 as a function of magnetic field at T = 4 K with a parallel XX polarization configuration and a constant polarization angle α with respect to the flake. This was done by compensating half of the θ_F(B) = −25.60°/T on both the excitation and collection paths. We note that the 345 cm⁻¹ peak on this flake appears at 359 cm⁻¹, consistent with previous reports that this peak is related to stacking faults or defects 36 that may vary significantly from flake to flake.
[Figure caption fragment: (c) Selected inferred peak positions using R3 peak assignment for spectra taken at T = 670 mK. (d) Selected inferred peak positions using C2/m peak assignment for spectra taken at T = 670 mK. (e) Selected inferred peak positions using R3 peak assignment for spectra taken at T = 4 K with parallel (XX) polarization.]
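To make the polarization bookkeeping above concrete, the short Python sketch below tracks how the in-plane polarization angle drifts with field through the objective and what rotation is applied when half of the Faraday rotation is compensated on each of the excitation and collection paths; the function and variable names are ours, not the authors', and only the quoted rotation rate is taken from the text.

THETA_F_DEG_PER_T = -25.60  # Faraday rotation in the objective, degrees per tesla (from the text)

def alpha_at_sample(alpha0_deg, b_tesla, compensated=False):
    # Angle between the polarization and the crystal a axis during a field sweep.
    # Without compensation alpha drifts by theta_F(B); with compensation it stays at alpha0.
    if compensated:
        return alpha0_deg
    return alpha0_deg + THETA_F_DEG_PER_T * b_tesla

def compensation_angles(b_tesla):
    # Rotation applied to hold alpha fixed: half of theta_F(B) is undone on the
    # excitation path and half on the collection path, as described for Figure 2 (i).
    half_correction = -0.5 * THETA_F_DEG_PER_T * b_tesla
    return {"excitation_path_deg": half_correction, "collection_path_deg": half_correction}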
IV. BAYESIAN INFERENCE
In order to better understand the Raman spectra shown above, we employed Bayesian inference techniques, considering both the modes from R3 with unsplit degeneracies and those from C2/m. For R3, as shown in Figure 3 (a), the model consists of 2 Fano lineshapes (E_g^1, E_g^2) and 5 Gaussian peaks (E_g^3, E_g^4, A_1g^1, A_3g, and R_1). To capture the baseline, we use a linear background with a broad Gaussian centered at ∼300 cm⁻¹. For C2/m, the peaks consist of the A_g series and the B_g series. The A_1g, B_1g, A_2g, and B_2g modes are fitted to Fano lineshapes and the rest to Gaussians. We fit the Raman spectra from 90 cm⁻¹ to 400 cm⁻¹ in a single step with SciPy. We then used the fitting result as the starting center of the distribution of our priors. The priors of the parameters are assumed to be Gaussian. The Bayesian inference was done using Hamiltonian Monte Carlo as implemented in PyMC3 38. For fast convergence we use a No-U-Turn (NUTS) sampler. We use 4 chains with 3,000 samples per chain (a sketch of this model setup follows the numbered observations below). It takes between 3 and 100 minutes for a spectrum to converge, depending on the complexity of the models. Figure 3 (b) shows the resulting reconstructed spectra from the posterior predictives using R3 peak assignments plotted with an original spectrum taken at T = 670 mK. Figure 3 (c) shows the resulting reconstructed spectra from the posterior predictives using C2/m peak assignments plotted with an original spectrum taken at T = 670 mK. Figure 4 shows the posterior distributions. Note that these histograms are the distributions of the posteriors at each field. Since the magnetic field was swept from B = 0 T to B = +6 T to B = −6 T to B = 0 T, the extrema are singly visited, B = 0 T is triply visited, and all the other B field values are doubly visited. No manual normalization was applied. We see: 1. While the uncertainty for each posterior peak position is different, most of the peaks shown here move above their highest density interval (HDI) 39.
2. While the models R3 and C2/m are very different, the posterior peak positions for 222 cm⁻¹, 313 cm⁻¹, and 341 cm⁻¹ are quantitatively similar within a dataset. This means that these peak positions are robust against bias from the details of the other peaks. The joint scatter plots of peak position against, for example, the baseline are essentially orthogonal. Hence the change of the peak position as a function of the magnetic field is not likely to be introduced by a polarization dependent baseline.
3. While polarization dependent frequency shifts exist for some of the modes, such as the 163 cm⁻¹ mode for C2/m symmetry, the 222 cm⁻¹ and 313 cm⁻¹ modes are known not to exhibit frequency shifts as a function of the polarization angle. Furthermore, the polarization induced frequency shift has a clear rotational symmetry that is missing here.
4. The 163 cm⁻¹ mode has 2 clear hysteresis loops in the R3 model, as shown in Figure 5; this behavior is less clear in the C2/m model. The posterior for the 163 cm⁻¹ mode in the C2/m model has a multimodal distribution, which is usually an indication of the model not converging well, possibly due to the complexity of the model, which includes multiple closely spaced Raman modes.
5. The trend is different in each sweep, even for the same flake. For example, the 310 cm⁻¹ peak has a positive slope as a function of magnetic field in the first sweep at T = 4 K, but not in the second sweep at T = 670 mK.
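Returning to the model-construction step described at the start of this section, the following is a minimal sketch in Python, not the authors' analysis code, of how a single spectrum could be modeled in that spirit: Fano and Gaussian peaks on a linear-plus-broad-Gaussian baseline, Gaussian priors centered on a preliminary least-squares fit, and NUTS sampling with PyMC3. The peak list shown, the prior widths, and all helper and parameter names are illustrative assumptions rather than values from the paper.

import numpy as np
import pymc3 as pm

def fano(w, amp, w0, gamma, q):
    # Fano lineshape; eps is the reduced detuning from the peak center w0
    eps = (w - w0) / (gamma / 2.0)
    return amp * (q + eps) ** 2 / (1.0 + eps ** 2)

def gauss(w, amp, w0, sigma):
    # Gaussian peak centered at w0
    return amp * pm.math.exp(-0.5 * ((w - w0) / sigma) ** 2)

def build_model(w, y, p0):
    # w: Raman shift axis (cm^-1), y: counts, p0: dict of least-squares estimates
    # used as prior centers; only a few peaks are listed here for brevity
    with pm.Model() as model:
        # Baseline: linear term plus a broad Gaussian near 300 cm^-1
        b0 = pm.Normal("b0", mu=p0["b0"], sigma=5.0)
        b1 = pm.Normal("b1", mu=p0["b1"], sigma=0.1)
        bg_amp = pm.HalfNormal("bg_amp", sigma=10.0)
        bg_center = pm.Normal("bg_center", mu=300.0, sigma=20.0)
        bg_width = pm.HalfNormal("bg_width", sigma=100.0)
        mu = b0 + b1 * w + gauss(w, bg_amp, bg_center, bg_width)

        # One block per phonon peak (Fano for the first two modes, Gaussian otherwise)
        for name, kind in [("Eg1", "fano"), ("Eg2", "fano"), ("A1g1", "gauss")]:
            amp = pm.HalfNormal(name + "_amp", sigma=2.0 * p0[name]["amp"])
            center = pm.Normal(name + "_center", mu=p0[name]["center"], sigma=2.0)
            width = pm.HalfNormal(name + "_width", sigma=2.0 * p0[name]["width"])
            if kind == "fano":
                q = pm.Normal(name + "_q", mu=p0[name]["q"], sigma=2.0)
                mu = mu + fano(w, amp, center, width, q)
            else:
                mu = mu + gauss(w, amp, center, width)

        noise = pm.HalfNormal("noise", sigma=float(np.std(y)))
        pm.Normal("obs", mu=mu, sigma=noise, observed=y)
    return model

# Hypothetical usage, with w, y, and p0 coming from the data and a SciPy pre-fit:
# with build_model(w, y, p0):
#     trace = pm.sample(draws=3000, chains=4, target_accept=0.9)  # NUTS by default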
V. DISCUSSION AND CONCLUSION
The variation in vibrational Raman modes as a function of magnetic field reported here is consistent with spin-phonon interactions and magnetostrictive effects. Recently, Gass et al. 40 and Schonemann et al. 41 reported contraction along the c* axis and along the in-plane direction in response to applied in-plane fields. Though not reported yet, it is natural to assume similar effects may occur in response to out-of-plane magnetic fields as well. While Raman shifts seemed to be negligible in a previous study of single crystals 8, magnetostrictive effects may be stronger in exfoliated flakes due to reduced rigidity. On the other hand, related materials like CrI3 have strong Raman shifts for out-of-plane magnetic fields of B = 1 T 42. A non-reciprocal magneto-phonon interaction was also reported for CrI3 in the form of a polarization rotation asymmetric with magnetic field 43, but little to negligible shift of the vibrational Raman modes was described in that report. In addition to CrI3, Raman frequency shifts in response to magnetic field have been observed in 15R-BaMnO3 44. The hysteretic shifts in α-RuCl3 Raman modes reported here are statistically significant, but highly variable. This could plausibly be due to variation in microscopic spin arrangement and strain that can result from substrate roughness 45 or from variations in flake geometry and thickness, and may also be affected by the exfoliation process. For example, in CrI3, Raman modes depend on the spin arrangement and are sensitive to spin flips 42. As the magnetic field is applied, the spins in α-RuCl3 flakes may re-orient, and the flake itself may deform mechanically. As the magnetic field is removed, the flake may settle in a new spin-mechanical configuration that is more stable than the initial configuration. This is consistent with the fact that we see a stronger Raman shift in the 4 K sweep than in the mK sweep, which was performed after the 4 K sweep.
Knowing that the Raman modes move around stochastically in a manner that may be linked to microscopic details, one may consider a device with multiple individually addressable, non-degenerate Majorana QSLs by patterning different local strain gradients in α-RuCl3 on pillars of different sizes. Hysteretic magnetostrictive effects may also be useful for neuromorphic or quantum memory applications, though substantial additional research is essential to corroborate such claims. Work was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center. Exfoliation (KX), variable temperature photoluminescence (BL) and modeling (LL) were performed at the Center for Nanophase Materials Sciences, which is a U.S. Department of Energy Office of Science User Facility. Student (MAF, BEL, YSP) research support was provided by the Department of Defense through the National Defense Science & Engineering Graduate Fellowship (NDSEG) and by the DOE Science Undergraduate Laboratory Internships (SULI) program. | 2021-05-05T01:31:36.355Z | 2021-05-03T00:00:00.000 |
"year": 2021,
"sha1": "27730fcf710456ce654adeef208c36cf1c32c3e1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "27730fcf710456ce654adeef208c36cf1c32c3e1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
56390480 | pes2o/s2orc | v3-fos-license | Evaluation Research as a Mechanism for Critical Inquiry and Knowledge Construction in Architectural and Urban Education
This paper responds to the misconceptions that continue to characterize the delivery of knowledge content in architectural courses. Based on reviewing the literature on pedagogy, the paper explores the value and benefits of introducing evaluation research as a mechanism for critical inquiry and knowledge construction in theory courses in architecture and urbanism. A framework is developed and employed to demonstrate how this type of learning can be incorporated. The development and implementation of a series of in-class and off-campus exercises in two different contexts reveal that structured actions and experiences help students control their learning experience while invigorating their understanding of the knowledge delivered in a typical lecture format. It is firmly believed this would offer students multiple learning opportunities while fostering their capabilities to shift from passive listeners to active learners and from knowledge consumers to knowledge producers.
Introduction
Discourses in architectural and urban education corroborate that a university's mission should advance a learning environment that cultivates exploration and critical thinking.Today, inquiry and investigation are viewed as activities central to architectural and urbanism pedagogy, presenting new opportunities for academics to strengthen undergraduate courses, to enhance their role in shaping education in architecture, and to improve the overall quality of pedagogy.Throughout the past two decades, influential literature was introduced to the academic community in architecture (UIA- UNESCO Charter, 1996;Boyer & Mitgang, 1996) indicating that architectural education does not take full advantage of the unique opportunities available in higher education institutions.Links between education, professional practice, and academic research are often oversimplified.Opportunities to enrich and strengthen professional education through exposure to research processes are missed.
This paper underscores the value of evaluation research as a form of inquiry-based learning (IBL).It argues for exposing students to primary source materials and for educating them about the production of knowledge.This is proposed to complement traditional teaching practices that emphasize secondary source information and knowledge consumption by offering students ready-made interpretations.Primary sources enable students to get close as possible to what actually happened or is happening during a historical event or time period.Evaluation research is an important paradigm that would invigorate future architects to think critically, be more culturally and environmentally responsive, and engage in knowledge production.
A Critical View of Knowledge Delivery and Acquisition
In traditional pedagogy, architecture students are typically encouraged to engage in site visits and walkthroughs in the built environment to observe different phenomena.Unfortunately, however, literature indicates that these visits and exercises are not structured with rigorous investigation or critical inquiry (Salama, 1995;Bose, 2007).Moreover, in large classes, the proposition of a site visit is often met with logistical difficulties and little opportunity for individual student mentoring.
While architectural educators strive to impart the requisite knowledge necessary for successful practice, their approaches often diverge depending on the educator's priorities and ideals.Therefore, what and how knowledge is transmitted has significant professional and social implications (Salama, 2009).In this respect, Rapoport introduced many questions regarding "knowledge about better environments," which are: "what is better, better for whom, and why is it better?"(Rapoport, 1994:35).Key idiosyncrasies that continue to characterize teaching practices in architecture and urbanism involve gaps between what and how.
When teaching any body of knowledge, educators tend to present it as facts, theories, and as a process of scientific criticism.Processes leading to an outcome are often hidden and internalized.There should be a distinction between the types of knowledge resulting from research in architecture; students should be given the opportunity to experience these types.The first type consists of research that tests accepted ideas and knowledge resulting from research that seeks to understand the future through a better understanding of the past.The second type comprises knowledge resulting from research that develops new hypotheses and visions and research that probes new ideas and principles that will shape the future.
Knowledge is usually presented to students in a retrospective way.Nevertheless, abstract and symbolic generalizations used to describe research results do not convey a sense of the behaviour of the phenomena they describe (Schon, 1988).Here, the term "retrospective" means extensive exposure to an architect's performance over time.Educators tend to offer students experiments in the form of hypothetical design projects that neglect many contextual variables.In this respect, learning from the actual environment should be introduced.It can provide students with opportunities to understand the practical realities and variables that affect real-life situations (Salama, 2008).This would foster their abilities to explore issues associated with the relationship between users and the buildings they use.
Evaluation Research and Inquiry Based Learning (IBL)
IBL is an instructional method developed during the 1960s that continues to characterize current interests in higher education (Ackoff, 1974;Salama, 2009).It was developed in response to the perceived failure of more traditional forms of instruction, in which students were required simply to memorize and reproduce instructional materials.Active and experiential learning are sub-forms of IBL, in which students' progress is assessed by how well they develop experiential, critical thinking and analytical skills, rather than how much knowledge they have acquired.
The value of active learning is evident since the amount of information retained by the students declines substantially after ten minutes (Bonwell, 1996).The results of research comparing lecturing versus discussion techniques indicate that students favour discussion methods over lecturing and the one-way mode of knowledge transfer.Experiential learning, on the other hand, refers to learning in which the learner is directly in touch with the realities being studied (Keeton & Tate, 1978).It is contrasted with learning in which students only read about, hear about, talk about, or write about realities they never experience as part of the learning process.
Mistakenly, some educators equate experiential learning only with off-campus or nonclassroom learning.In architectural and urbanism pedagogy, however, a class in history or theory might incorporate periods of student practice on theory exercises and critical thinking problems, rather than consist entirely of lectures about theories of architecture and the work of famous architects.Similarly, a class in human-environment interactions might involve critical analysis exercises about how people perceive and comprehend a built environment.Both classes might involve field visits to buildings and spaces where students engage closely with the environment, exploring culture, diversity, and people's behaviour while being part of that environment (Salama, 2006).All of these mechanisms involve an experiential learning component.
Evaluation is an area of research and a mental activity devoted to collecting, analysing, and interpreting information.Evaluation studies in architecture are intended to provide reliable, useful, and valid information, with overarching objectives that include developing a database about the quality of the built environment, identifying existing problems or needs and their characteristics, and providing a basis for predicting the quality of future environments (Preiser, 1989;Preiser & Vischer, 2005).
Assessment of environments as a generator of knowledge and a valuable research vehicle needs to be introduced in lecture courses, establishing a knowledge base about the built environment that can endow students with more control over the process of knowledge acquisition, assimilation, and utilization in future experiences. This argument corresponds with views John Habraken has articulated on this subject. Linking evaluation research and IBL, one can argue that architecture students need to be involved in evaluation processes that should be conducted objectively and systematically, but not through casual interviews or observations that may only reveal what is already known. In this context, they learn about problems and potentials of existing environments and how they meet people's needs, enhance and celebrate their activities, and foster desired behaviours and attitudes.
Evaluation Research: A Paradigm for Utilising the Built Environment as an Open Textbook
While different evaluation research exercises have been developed and implemented by the author in different contexts, the examples presented here are limited to a Socio-Behavioural Factors in Design elective course offered in the Master of Architecture program at Queen's University in Belfast.This was performed by assigning two major exercises; the first was "Contemplating Settings," and the second was "Procedural Evaluation."The two exercises adopted the concept of the built environment as an open textbook and as a teaching tool.
The number of students enrolled in class was 22.They were sensitized toward understanding key issues relating to research ethics through reading different documents adopted by the School Research Ethics Committee.Most importantly, they were to use unobtrusive photography and walkthrough in a manner that does not reveal people's personalities and identities or interfere with their activities in public spaces.
Contemplating Settings
In the first five weeks, students were introduced to a number of sociocultural and behavioural phenomena that included privacy, personal space, territoriality, crowding, and density.Examples describing these phenomena were displayed to students to illustrate what each phenomenon encompassed (Figure 1).The purpose was to complement knowledge acquired in lectures by exposing students to real-life conditions.They were required to take concepts underlying each phenomenon in abstract terms and to turn them into concrete terms through description and interpretation of the situations observed.
Students were to record and document cultural and behavioural phenomena by photographing selected settings.Two photographs that illustrated each phenomenon were required.A number of rules were established where photographs should be taken for a reallife situation to represent indoor or outdoor spontaneous settings.Students were required to write one statement describing the setting in physical, cultural, and/or behavioural terms.Contained in the structure of each statement were simple questions such as who is doing what, where, how, for how long, and with whom.Assessment criteria were delivered to students; these included how accurately their text and photographs reflected the meaning of the phenomena as discussed in the lectures and how their interpretations showed a scholarly understanding of the term and the selection of the setting.The overall quality of photographs and graphic layout of their submissions were important criteria for evaluating their work and assessing the overall learning outcomes.An important finding indicates that while all students were able to observe, document, and interpret the information, most of them could not phrase concise statements that described each setting.However, in a group discussion for debating in which students work among themselves with the facilitation of the author, they were able to recognize how people behave in a specific environmental situation.This included their body gestures, degrees of socialization, and how they attempt to control their environment, shape and transform the physical aspects of the setting to support their activities, and enhance their position in space, create views, or block distractions.
Procedural Evaluation and Assessing Spatial and Sustainable Design Characteristics
To introduce the procedural evaluation mechanism, a survey tool was devised, the purpose of which was to develop students' ability to have control over their learning by establishing links between spatial and sustainable design parameters of a building or a group of buildings.The exercise was conducted through self-guided tours.Checklists were provided to offer students a procedure for taking a structured walk through and around a building.The evaluation strategy in this context was considered to be impressionistic, which increases students' awareness by focusing on specific factors.
Students were divided into four groups, each of which conducted the exercise utilising the multiple category building appraisal tool.Four buildings in Belfast were selected based on their familiarity to the students: Students' Union and Professional Education Centre of Queen's University, University of Ulster College of Arts in Belfast, and Grove Wellbeing Centre.A number of key factors were identified under four categories: (a) planning and zoning, (b) landscaping, (c) designing, and (d) energy and waste.Checklists were phrased in the form of questions underlying each category.
• How effectively does the design of landscape items avoid the use of synthetic materials? (Consider the materials used for walkways and the asphalt pavements of the parking area.)
• Does the project introduce softscape elements like plants and shrubs? If so, how effective are they? (Consider their harmony with the existing natural environment.)
• How effectively is site furniture like seats, pergolas, and garbage boxes installed in and distributed within the site? (Consider their location, materials, and manufacturing.)
• How well are the routes around and within the site marked? Are the markings clear and easily understood? (Consider directional signs, their location, content, and material.)
• Are there any signs for environmental education purposes? If so, how effectively do they convey messages about appropriate behaviour?
• Are the pedestrian paths and other hardscape elements made of natural or recycled materials?
• Does the site have a reused water system, i.e., grey water? If so, how effective is it? (Consider capturing rainwater and reusing it for irrigation or other purposes.)
• How effectively does the project introduce native plants that require the least amount of watering?
Average Score (total/10) = ------
Provide photographs or other forms of illustrations that represent issues underlying sustainable landscape design.
A summary paragraph should be written describing how well landscape design deals with sustainability-related issues.
The process included the use of notes, sketches, diagrams, and verbal description.Table 1 illustrates an example data sheet used to conduct the evaluation.Questions were designed in a generic manner that reflected the essence of each category.Students' attention was drawn to the fact that the list of questions underlying each category was not exclusive and was introduced to help structure and guide their tours for the purpose of the exercise.
Numerical scores were assigned to the questions to represent the degree of appropriateness underlying each factor using a point scale method. Scores were averaged, and an overall score for the building was then computed. Students were required to develop a report that would consider the following:
• Description of the building appraised with the support of photographs and illustrations;
• Appraisal of the building using the checklists with numerical scores assigned for each question;
• Analysis of numerical ratings by computation of an average score for each category and for the overall score; and
• Writing comments based on students' impressions and understanding of the building.
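As an illustration of the appraisal arithmetic described above, the short Python sketch below computes a point-scale average for each category and an overall building score; the data structure, example numbers, and function name are hypothetical and are not part of the original exercise materials.

def appraise(scores_by_category):
    # scores_by_category maps each category, e.g. "planning and zoning",
    # "landscaping", "designing", "energy and waste", to the list of point-scale
    # scores given to its checklist questions
    category_averages = {
        category: sum(scores) / len(scores)
        for category, scores in scores_by_category.items()
    }
    overall_score = sum(category_averages.values()) / len(category_averages)
    return category_averages, overall_score

# Hypothetical example for one group's walkthrough of a single building:
# averages, overall = appraise({
#     "planning and zoning": [3, 4, 2, 4],
#     "landscaping": [4, 3, 3, 2, 4, 3, 2, 3, 4, 3],
#     "designing": [4, 4, 3],
#     "energy and waste": [2, 3, 3],
# })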
The findings point out that the students were able to make judgments about the built environment and give reasons for those judgments.Yet, students' analyses revealed shortcomings in their abilities to comment, whereas a few students could not express their concerns verbally and could not write an understandable reporting statement.Also, a smaller number of students was not able to recognize similarities and differences between the questions.However, they commented that checklists and survey tools for investigating the built environment helped them recognize exactly what to look for in the building and to understand relationships between different factors while comprehending the impact of one factor as opposed to others.
Other Contexts for Integrating Evaluation Research as an IBL Mechanism in a Classroom Setting
As a continuous effort to introduce IBL into theory courses, a series of tools were developed by the author and were implemented as exercises during his teaching in two different contexts, as follows: While the exercises were introduced in different grade levels, there was one shared aspect: the nature of the courses in which they were introduced.Specifically, the courses address personenvironment interactions and explore the relationship between human behaviour and different types of environments and the impact of those environments on individual, community, and societal attitudes.In essence, this reflects the amenability and implement-ability of the exercises on different levels and in different contexts.Despite the fact that each course is introduced in a context aimed at achieving specific objectives and learning outcomes, an integral component in the two courses is an intensive discussion of issues that pertain to ways in which information about sociocultural factors and environment-behaviour knowledge can be applied to design projects.It should be noted, however, that the objective here is not to compare the two, different contexts, but to illustrate the way in which IBL was introduced and implemented.The shared objectives of the courses offered in the two contexts can be exemplified as follows.
• To increase students' sensitivity to the built environment and to break any habits of taking the environment for granted.
• To acquaint students with particular knowledge of a variety of environments, including residential, work, learning, and urban.
• To enhance students' understanding of the core concepts regarding human-environment relations and how these concepts vary by different cultures and subcultures.
• To develop students' critical thinking abilities about the role of the built form in fostering, enhancing, or inhibiting cultural behaviours and attitudes.
The selected examples of exercises were envisioned to complement different types of knowledge offered to students in the typical lecture format.The instructor explained the exercises to the students and the way in which they are linked to the body of knowledge and experiences students have already gained in the course and in other courses.While some exercises were performed in groups of two or four, others were individual exercises based on the nature of each and the type of issues involved.Each exercise was followed by a class discussion moderated by the tutor in which all students have opportunities to voice their thoughts to the whole class.The following are three examples selected from a wide variety of exercises utilised as in-class, IBL mechanisms.
Culture and Environment: Relating Visual Attributes of Buildings to Culture
Purpose: The purpose of this exercise is to offer students the opportunity to translate their understanding of a building image into responses that relate culture to architecture and that link the built environment to the community within. Prior Knowledge: Students have been introduced to the dialectic relationship between culture and environment and how culture is manifested in human artefacts and buildings/built environments.The basic premise in this context is that culture appears in objects and in the environment as a result of people's interpretation of such an environment and is based on a set of values and beliefs.In essence, it adopts the view that any object embodies human choices and preferences. Requirements: Three different images that represent different cultures were presented.
Students were required to describe each image in one or two sentences only, think of what culture each image belongs to, and state at least three visual/formal attributes that influenced their answer (Figure 2).The exercise is conducted in 15 minutes and is performed in teams of two, as each two neighbouring students have to articulate an answer based on their agreement.
Recognition of Building Types: Relating Building Images to Functions and Users
Purpose: The purpose of this exercise is to develop students' visual perception abilities regarding how to recognize different building types based on their understanding of their visual characteristics and the messages they convey.
Prior Knowledge: Through a series of lecture presentations preceding this exercise, students were introduced to notions that pertain to expression in architecture; how buildings have certain characteristics that convey messages about the use, functions, and activities that take place inside them; and how they offer some clues about who uses them.
Requirements: Students were offered a sheet that includes 12 images of different buildings selected from different environments. They were required to look carefully at the images and then state the type, activity, and the age group for each of the images utilising the two left columns given in the sheet (Figure 3). The exercise is conducted in 45 minutes and is performed in teams of two, as each two neighbouring students are required to discuss the images and reach an agreement on identifying the building type, activity, and user type of each image.
Seeing and Verbalizing the Environment
Purpose: This exercise is developed to elicit evaluative comments about students' understanding of different environments. The aim is to help them recognize the importance of the terminology used by the public and the terminology used by architects and designers. Another aim is that students can express their concerns about different environmental settings and eventually be able to work toward improving existing environments or designing new environments.
Prior Knowledge: Students were introduced to the way in which buildings relate to the psychology of the users. Knowledge delivered and discussed prior to conducting this exercise included issues that pertain to the fact that in any given environment there are certain physical features that evoke good or bad feelings. It is critical for students, as users and as future designers and architects, to become aware of perceived environmental effects. This is a first step in understanding the delicate balance between different aspects of a built environment and their impact on people psychologically.
Requirements: Students were offered 6 images and were required to look at each of the images and consider which of the paired adjectives better describes them.They were to check the box closest to the more appropriate adjective in each line.If they thought neither adjective applied, they were to check the box in the middle (Figure 4).As well, they were required to write generic comments based on their understanding of each environmental setting shown in each image.The exercise was conducted individually and was performed over a period of 30 minutes; each student was expected to spend 5 minutes only on each image.After conducting each of the three exercises, students were asked to elaborate on what benefits they have gained out of their engagement and reflect on their experience.The findings point out that the students were able to make judgments about the built environment and to give reasons for those judgments through a wide spectrum of exercises.However, a few students could not recognize similarities and differences between the building images or fully comprehend the crux of each exercise.Nevertheless, they commented that utilising checklists and discussion tools for relating the content of the course to the exercises helped them recognize what to look for exactly in the building images.Students reported that they were excited during the discussions.In their comments, the majority felt that their experience of the buildings in a structured manner invigorated their understanding of many of the concepts typically delivered in a lecture format without exposure to generating discussions or debates in the classroom.As well, writing and presenting were considered important skills they needed to develop.The discussions that followed each exercise corroborated the value of introducing in-class, IBL mechanisms while creating an atmosphere amenable to responsive reflection and critical thinking.
Toward a New Form of Knowledge-Based Pedagogy By and large, the results of implementing evaluation research as a form of IBL were not exclusive; nevertheless, they accentuated the value of introducing assessment studies through structured interactive learning mechanisms, while utilising the built environment as an educational medium.Students developed a deeper understanding of the relationship between people and the settings they use and between spatial and sustainable design factors.They were able to focus on critical issues that go beyond those adopted in traditional teaching practices.
The two widely held conceptions of the built environment and the physical/objective, were embedded in the exercises.While the exercises emphasised knowledge acquisition based on students' perceptions and interpretations of the built environment driven by knowledge delivered in the classroom, they also attempted to develop students' understanding of how qualitative aspects of the built environment could be translated into quantifiable measures.The exercises helped students focus on specific aspects of the built environment that pertain to a specific knowledge content while conceiving the gaps between "what" and "how" types of knowledge.
A considerable portion of students' education is based on experience and active engagement. Students are typically encouraged to study the existing built environment and attempt to explain it through theories or typologies, always looking at outstanding examples. Underlying these theories, however, are assumptions about the built environment and the people associated with it, and usually these assumptions remain hidden. It is in this relationship where the lesson to be learnt lies. The incorporation of exercises similar to the ones presented would foster the establishment of links between the existing dynamic environments, the concepts and theories that supposedly explain them, and the resulting learning outcomes. Concomitantly, the contribution of evaluation research and IBL to architectural and urban pedagogy lies in the fact that the inherent, subjective, and hard-to-verify conceptual understanding of the built environment is complemented by structured, documented interpretation. This was performed in a systematic manner in a classroom or off-campus setting amenable to critical thinking and reflection.
The built environment is variable, diverse, and complex.Buildings, spaces, and settings are major components of this environment: designed, analysed, represented, built, and occupied.They are also experienced, perceived, and studied.They should be redefined as objects for learning and need to be transformed into scientific objects.It should be emphasized that in order for an object to be taught and learned, its components should be adapted to a specific pedagogic and cognitive orientation that introduces issues about specific bodies of knowledge.Evaluation research would thus achieve this desired end.
Ashraf M. Salama Ph.D., FHEA, FRSA -Professor of Architecture Founding Chair, Department of Architecture and Urban Planning, Qatar University Email address: asalama@gmail.com or asalama@qu.edu.qa
Figure 1 :
Figure 1: Different environmental settings assessed by the students.
Socio-Behavioural Factors in Design, First Year, M. Arch.-RIBA-II at the School of Planning, Architecture, and Civil Engineering--SPACE, Queen's University, Belfast (academic year 2008-2009). Community Design Workshop, Third Year, B. Arch., Department of Architecture and Urban Planning at Qatar University (academic years 2009-2010 and 2010-2011).
Figure 2 :
Figure 2: Relating visual attributes of buildings to culture.
Figure 3 :
Figure 3: Relating building images to functions, activities, and users.
Figure 4 :
Figure 4: Seeing and verbalizing the environment.
Table 1 :
Example category utilised in procedural evaluation.
Score
How effectively are the site features kept? (Consider levelling, excavations, and land filling.)
Does the landscape design integrate the site with the surrounding environment? (Is the site surrounded by fences? If so, consider their materials.)
"year": 2012,
"sha1": "52ca166a4d3b1a5d103d7b5a64d196fecf8dde6a",
"oa_license": "CCBY",
"oa_url": "https://journals.oslomet.no/index.php/formakademisk/article/download/435/478",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "52ca166a4d3b1a5d103d7b5a64d196fecf8dde6a",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
219940273 | pes2o/s2orc | v3-fos-license | Distribution of Patients at Risk for Complications Related to COVID-19 in the United States: Model Development Study
Background: Coronavirus disease (COVID-19) has spread exponentially across the United States. Older adults with underlying health conditions are at an especially high risk of developing life-threatening complications if infected. Most intensive care unit (ICU) admissions and non-ICU hospitalizations have been among patients with at least one underlying health condition. Objective: The aim of this study was to develop a model to estimate the risk status of the patients of a nationwide pharmacy chain in the United States, and to identify the geographic distribution of patients who have the highest risk of severe COVID-19 complications. Methods: A risk model was developed using a training test split approach to identify patients who are at high risk of developing serious complications from COVID-19. Adult patients (aged ≥ 18 years) were identified from the Walgreens pharmacy electronic data warehouse. Patients were considered eligible to contribute data to the model if they had at least one prescription filled at a Walgreens location between October 27, 2019, and March 25, 2020. Risk parameters included age, whether the patient is being treated for a serious or chronic condition, and urban density classification. Parameters were differentially weighted based on their association with severe complications, as reported in earlier cases. An at-risk rate per 1000 people was calculated at the county level, and ArcMap was used
Introduction
The first case of coronavirus disease (COVID-19) was detected in the United States on January 20, 2020 [1].The spread of the virus increased exponentially across the United States during the subsequent two months, with large outbreaks occurring in urban localities including New York City, the San Francisco Bay Area, Detroit, and New Orleans [2].
The Centers for Disease Control and Prevention (CDC) analyzed data from lab-confirmed COVID-19 cases in the United States from February 12 to March 28, 2020.This analysis found that older adults and individuals with underlying health conditions are at higher risk of developing life-threatening complications from COVID-19 [3].Among COVID-19 patients, 38% had one or more underlying health conditions, and the rates of hospitalization among these patients was disproportionately high.The majority of intensive care unit (ICU) admissions (78%) and non-ICU hospitalizations (71%) were patients with at least one underlying health condition.
Efforts to reduce mortality due to COVID-19 should include identifying and protecting patients who have the highest risk of developing severe complications from the disease.The purpose of this study was to develop a risk model to estimate the risk status for patients of a nationwide pharmacy chain in the United States and to identify the geographic distribution of patients who have the highest risk of severe COVID-19 complications.
Pharmacy Data
Adult patients (aged ≥18 years) were identified from the Walgreens electronic data warehouse.Patients were considered eligible to contribute data to the model if they had at least one prescription filled at a Walgreens location between October 27, 2019, and March 25, 2020.Eligible patients were assigned a risk score based on the sum of each patient's risk parameters including the following: an inferred diagnosis of a serious chronic condition based on a prescription fill within this period for certain specialty medications (Multimedia Appendix 1), an inferred diagnosis of a chronic condition that is deemed to put the patient at high risk of severe COVID-19 complications based on a prescription fill to treat these conditions (Multimedia Appendix 2), prescription fills which infer diagnosis of other chronic conditions, age, and urban density classification.Ethical approval was received from the Advarra Institutional Review Board (protocol number 35300).
Our team assigned a risk value to each parameter based on findings from recent COVID-19 studies [3,4].The risk score algorithm weighted parameters based on their association with complications from COVID-19 infection, such as hospitalization and death.Parameters shown to be associated with the greatest risk of severe COVID-19 complications were assigned the highest value possible, regardless of the presence of other risk factors.The highest risk parameters included a prescription fill within the study period for one of the high-risk specialty medications and being aged 80 years and above.
Prescription fills to treat high-risk chronic conditions and other chronic conditions not deemed high-risk were assigned a value based on hazard ratios published in the European Respiratory Journal [5].Patients with specific underlying health conditions are at high risk of developing severe complications from COVID-19 [3].The risk score for patients with chronic lung disease, diabetes mellitus, and cardiovascular disease was weighted higher than the risk for patients being treated for other chronic conditions that do not fall into one of these three disease states.Baseline risk is determined by the number of medications the patient is on, and whether that medication is for treatment of any chronic condition.Patients treated with medication for one or more of the three high-risk conditions in addition to being treated with additional chronic condition medications received a cumulative value for each category.For instance, a patient being treated for chronic lung disease, diabetes mellitus, and one additional high-risk maintenance medication would receive the following values for these conditions: 2.681 + 1.586 + 2.592 = 6.459.
Compounding evidence shows that the risk of developing severe complications from COVID-19 increases exponentially with age; therefore, the risk score was weighted more heavily for older patients.Observational evidence shows that the spread of COVID-19 occurs most rapidly in urban areas.For this reason, we weighted patients who live in densely populated urban areas with the greatest risk, followed by those in less dense urban, suburban, and rural settings.Counties categorized as rural contain a population density of <400 people per square mile, suburban encompasses population density between 400 and 5000 people per square mile, less dense urban includes counties with 5000 to 12,500 people per square mile, and urban encompasses population density over 12,500 people per square mile.Population data were acquired from Popstats 2019 (Syergos Technologies Inc).
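As an illustration, the Python sketch below encodes the parts of the weighting scheme quoted in the two paragraphs above; the cardiovascular-disease weight is not stated in this excerpt and is left as a placeholder, the handling of densities falling exactly on the 400, 5,000, and 12,500 people-per-square-mile boundaries is our assumption, and the function and argument names are hypothetical.

def chronic_condition_score(lung_disease, diabetes, other_chronic_meds,
                            cardiovascular=False, cardio_weight=None):
    # Cumulative value across condition categories; e.g. chronic lung disease +
    # diabetes mellitus + one additional high-risk maintenance medication gives
    # 2.681 + 1.586 + 2.592 = 6.459, matching the worked example above.
    score = 0.0
    if lung_disease:
        score += 2.681
    if diabetes:
        score += 1.586
    if other_chronic_meds:
        score += 2.592
    if cardiovascular and cardio_weight is not None:
        score += cardio_weight  # weight for cardiovascular disease is not given in this excerpt
    return score

def urban_density_class(people_per_sq_mile):
    # Urban density categories as defined in the text
    if people_per_sq_mile < 400:
        return "rural"
    if people_per_sq_mile <= 5000:
        return "suburban"
    if people_per_sq_mile <= 12500:
        return "less dense urban"
    return "urban"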
The risk model was developed using a training test split approach.The model was tested and validated using data for patients residing in one state (Georgia), and then applied to the full United States study cohort.Once cumulative risk values were calculated for each patient, the values were transformed to a maximum risk score of 10 to aid with interpretation using the following formula:
COVID-19 Surveillance Data
Real-time data of COVID-19 cases captured by the Johns Hopkins University Center for Systems Science and Engineering (CSSE) [2] was layered in the risk map to show where cases exist relative to the populations identified as being at high risk of severe complications from COVID-19.
Model Validation
The model was compared with current trends in COVID-19 cases.Without the availability of confirmed cases, the predictive value of this model is unknown [6].
Mapping
ArcMap (Esri) was used to depict the presence of patients identified as being at high risk for severe complications from COVID-19 and real-time COVID-19 cases.The at-risk rate per 1000 people is provided at the county level.County populations of fewer than 100 residents or fewer than 10 patients were excluded from the data set.The combined view shows where cases exist relative to the populations identified as high-risk.Additionally, testing locations, Walgreens store, and clinic locations are seen with a zoomed in view.The ArcGIS Online platform (Esri) was used to distribute this map publicly beginning April 16, 2020.
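As a small illustration of the county-level calculation described above, the Python sketch below computes the at-risk rate per 1,000 people and applies the stated exclusion criteria; the function name and the use of None for excluded counties are our own choices.

def at_risk_rate_per_1000(n_high_risk_patients, county_population, n_patients):
    # Counties with fewer than 100 residents or fewer than 10 patients were excluded
    if county_population < 100 or n_patients < 10:
        return None
    return 1000.0 * n_high_risk_patients / county_population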
Results
The study included 30,100,826 adults filling at least one specialty or maintenance medication during the study period.Table 1 shows the model inputs and parameters.Using a training test split approach, the model was tested and validated on 623,972 patients residing in Georgia and applied to the full US study cohort (N=30,100,826).
The average age of patients is 50 years, and the average patient has 2 to 3 comorbidities.Nearly 28% (8,285,408) Patient addresses were used to depict the distribution of risk status across the United States.These data were then compiled to depict a county-level risk status for each county for which we had sufficient data.A county-level at-risk rate was calculated per 1000 residents.The highest county-level risk category ranged from 265.1 to 375.0 high-risk residents per 1000.Furthermore, 8 risk ranges were assembled and color coded onto a county-level US map (Figure 1).The real-time Johns Hopkins University CSSE COVID-19 cases data are layered on top of the county-level risk status to facilitate a visual depiction of the presence of cases in relation to the county-level risk of residents at risk of suffering severe complications from COVID-19 [2].At the time of publication, the map depicts numerous counties, principally in less densely populated regions of the United States that have a high rate of vulnerable residents but have not yet had large numbers of COVID-19 cases.The interactive map depicting the US distribution of patients at risk for complications related to COVID-19 is publicly available for viewing [7].The county-level risk rates are recalculated and refreshed weekly, whereas the Johns Hopkins University CSSE case numbers are uploaded in real time.
Overview
This study shows that there are counties across the United States whose residents are at high risk of developing severe complications from COVID-19; many of these counties had not yet recorded many COVID-19 cases when the interactive map was released.Although transmission rates may differ among rural and urban areas, it is often the case that residents of rural counties have higher risk statuses and less access to health care resources.If disease transmission becomes rampant in a rural county with a high risk status, health care resources may become depleted quickly if a disproportionate number of its residents experience severe complications from the disease.This risk model utilizes data from approximately 10% of the US population.At the time of publication, this is the most comprehensive US model to depict county-level prognosis of COVID-19 infection [8].DeCaprio et al [9] modeled rates of COVID-19-related pneumonia and hospital admission using 1.5 million records from Medicare claims data from 2015 to 2016.Unlike medical claims data, our pharmacy claims data is accessible at a near real-time rate, which likely improves the precision of the model.Moreover, our data includes US adults aged 18 years and above, making our population estimates broader and more generalizable.
With the core data, Walgreens was able to implement proactive community outreach by pharmacists who offered home delivery to high-risk patients to ensure they had a sufficient supply of their medications without having to leave their homes.The pharmacists also inquired about patients' wellbeing during the pandemic and shelter-in-place orders, and they referred patients to community services as needed.Additionally, by publicly sharing deidentified county-level risk distributions, Walgreens and other organizations are able to plan and respond as COVID-19 begins to spread to areas that previously experienced little impact.
More importantly, our interactive map will serve to inform public officials and health care leaders of where there are highly vulnerable pockets of the population so that they may proactively prepare for the possibility of a disproportionately high number of patients with severe complications due to COVID-19.Many of these high-risk populations are in rural areas that have limited access to advanced health care services such as a hospital with respirators.Other maps have depicted the current availability of health care resources, such as ICU beds, compared to the amount that will be required in the event of a regional COVID-19 outbreak [10].Our county-level risk estimates may be used alongside data sets such as that produced by Moghadas et al [10] to improve the accuracy of anticipated health care resource needs.
Our interactive map will also aid in proactive planning and preparations among employers that are deemed critical, such as pharmacies and grocery stores, to prevent the spread of COVID-19 within their facilities.At the time of publication, the interactive map showed that it is relatively uncommon to see a county with a low rate of patients at risk for complications related to COVID-19, but a high rate of COVID-19 cases.This may be evidence of the differential presentation of SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) in individuals who are younger and have few comorbidities as compared to their counterparts.
Limitations
There is potential bias in the data source as it only includes Americans who have access to health care and can afford to purchase medication.The model would likely be strengthened if it represented less-advantaged individuals who are uninsured or underinsured, as well as those who are financially unable to afford their medications.Moreover, since our model relied on pharmacy data, not medical claims data, patient diagnoses were assumed based on the pharmaceutical treatment regimen.Finally, the model could not be externally validated because we did not have access to patient-level COVID-19 case data, which limited our ability to calculate the sensitivity and specificity of the risk model.
While the interactive map will be useful for multiple purposes, it is for informational purposes only and is not intended to provide medical advice or discourage social distancing or other health-related recommendations.Although Walgreens will take reasonable steps to update this map routinely with the latest available information, SARS-CoV-2 is a novel virus and its spread is rapid and unpredictable.We encourage everyone to visit the CDC's Coronavirus (COVID-19) webpage for the latest information and recommendations [11].We encourage the public to contact their health care provider to address any concerns and before taking any personal action in response to the information provided by the model or map.
Figure 1 .
Figure 1.Distribution of patients at risk for complications related to coronavirus disease (COVID-19) in the United States. | 2020-04-30T09:07:12.580Z | 2020-04-24T00:00:00.000 | {
"year": 2020,
"sha1": "795c2a4353c220e505338957fa814147cbe12243",
"oa_license": "CCBY",
"oa_url": "https://publichealth.jmir.org/2020/2/e19606/PDF",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6dc989d6926ea89cd5d4c3c20411d349f0de2b16",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234489079 | pes2o/s2orc | v3-fos-license | Heart rate variability in late pregnancy: exploration of distinctive patterns in relation to maternal mental health
Exploration of photoplethysmography (PPG), a technique that can be translated to the clinic, has the potential to assess the autonomic nervous system (ANS) through heart rate variability (HRV) in pregnant individuals. This novel study explores the complexity of mental health of individuals in a clinical sample responding to a task in late pregnancy, finding that those with several types of past or current anxiety disorders, greater trait anxiety, or greater exposure to childhood traumatic events had significantly different HRV findings from the others in the cohort. Lower high frequency (HF), a measure of parasympathetic activity, was found for women who met the criteria for a history of obsessive–compulsive disorder (OCD) (p = 0.004) compared with women who did not meet the criteria for OCD, and for women exposed to greater than five childhood traumatic events (p = 0.006) compared with those exposed to four or fewer childhood traumatic events. Conversely, higher low frequency (LF), a measure thought to be impacted by sympathetic system effects, and a higher LF/HF ratio were found for those meeting criteria for a panic disorder (p = 0.006), those meeting criteria for social phobia (p = 0.002), those with elevated trait anxiety (p = 0.006), or those with exposure to greater than five childhood traumatic events (p = 0.004). This study indicates further research is needed to understand the role of PPG in assessing ANS functioning in late pregnancy. Study of the impact of lower parasympathetic functioning and higher sympathetic functioning, separately and in conjunction, at baseline and in relation to tasks during late pregnancy has the potential to identify individuals who require more support and direct intervention.
Introduction
Mood and anxiety disorder episodes during pregnancy and the postpartum period are common, with 10-20% meeting full criteria for a major depressive disorder (MDD) episode 1 and 20% meeting criteria for one or more anxiety disorders 2 . Maternal mood and anxiety disorders during pregnancy and the postpartum period are associated with negative outcomes including: preterm birth and low birth weight; increased risk of physical and mental problems; and a higher risk of suicide during and beyond the postpartum period [3][4][5] . Well-functioning stress systems are critical to adapt to changes during pregnancy, delivery and postpartum, and dysregulation increases risk of and results from mood and anxiety disorder episodes 6 .
An individual's ability to react to change, adjust the stress response and then recover, has been shown to be an important indicator of long-term health 7 . The optimal stress response is neither too little nor too great. The repeated need for stress responses, particularly exaggerated or underperforming responses, over time, may result in a loss of ability to respond or, conversely, in responding in situations that do not require a response, both of which can be harmful and result in illness 6,7 . The concepts of allostatic load 8 , a state of homeostasis, and cacostatic load 7 , defective adaptation, provide a conceptual framework for understanding the "wear and tear" associated with stressors over time. This is a useful framework for the perinatal period as the stressors before pregnancy are then compounded by adaptations to the changes required during pregnancy, delivery, and the postpartum period. For example, women with greater trait anxiety are more likely to develop a depressive disorder postpartum 9 . Women with exposure to a greater number of childhood adverse life events have an increased risk of postpartum psychiatric disorders 10 . Despite knowing that some women are at higher risk in pregnancy for perinatal mental health disorders and related negative outcomes for mother and baby based on history, new tools are needed to identify at an individual level when there is defective adaptation resulting in mental health disorders. Another important consideration is that the majority of women receive all of their care, including mental health care, in obstetrical settings where the mental health services are often limited by time, specialized training, and lack of providers. This further complicates identifying those individual women who require more support, providing a greater need for new objective tools.
One tool that is thought to reflect emotional regulation and dysregulation during times of stress is heart rate variability (HRV), which consists of changes in the time intervals between consecutive heartbeats 11,12 . Photoplethysmography (PPG) has grown in popularity due to accessibility of use for evaluating the autonomic nervous system (ANS) with resulting measures that correspond to HRV [13][14][15] . For HRV measures to be a tool that can be widely utilized in perinatal women and be most clinically relevant, methods of assessment need to be conveniently implemented in maternal care settings such as obstetric offices and/or by patients at home, especially as some care has become virtual.
The nomenclature standards for HRV were initially suggested by the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology founded in 1996 and explained by Shaffer and Ginsberg (Supplement Table 5 with additional information adapted from Shaffer and Ginsberg) 11,16 . HRV is defined in the following two ways: first, time-domain measures quantify the amount of variability in measurements of interbeat intervals, the time period between heartbeats, and include the standard deviation in normal-to-normal R-R intervals (SDNN) and the root mean square of successive R-R interval differences (RMSSD). Second, frequency-domain measures estimate the distribution of absolute or relative power in frequency bands of heart period oscillations and indicate the signal energy found within a frequency band: high frequency (HF), low frequency (LF), and very low frequency (VLF). Frequency-domain power is reported as absolute power, calculated as ms squared divided by cycles per second (ms 2 /Hz), and as relative power, the percentage of total HRV power obtained by dividing the absolute power of a specific frequency band by the summed absolute power of the LF and HF bands. These values are thought to reflect the dynamic relationship of the systems that control the time between heartbeats, chiefly the parasympathetic nervous system (PNS) and sympathetic nervous system (SNS), but also the contributions of other systems such as the central nervous, endocrine, and respiratory systems 11 . For example, the PNS predominates at rest and withdraws when greater output is needed, but may also rebound with high levels of stress, causing PNS-dominated effects 11 . Different measures are thought to reflect different components, e.g., HF reflecting vagal modulation of heart rate 11 . Although LF and the LF/HF ratio are debated with regard to their meaning and importance and should not be considered purely a measure of the SNS, they have increasingly been studied and provide different information than the HF band 11,17 .
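As a minimal sketch of how these time- and frequency-domain measures can be derived from a series of interbeat intervals (Python with numpy/scipy; the function name, the 4 Hz resampling rate, and the use of Welch's method are our illustrative choices and not part of the study, which relied on the PPG device's built-in processing):

```python
import numpy as np
from scipy.signal import welch

def hrv_measures(rr_ms, fs_interp=4.0):
    """Illustrative HRV measures from a series of R-R (interbeat) intervals in milliseconds."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    # Time-domain measures
    sdnn = np.std(rr_ms, ddof=1)                    # SDNN: SD of normal-to-normal intervals
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # RMSSD: RMS of successive differences
    # Frequency-domain measures: resample the irregularly spaced tachogram onto an even grid
    t = np.cumsum(rr_ms) / 1000.0                   # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs_interp)
    rr_even = np.interp(t_even, t, rr_ms)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_interp,
                   nperseg=min(256, len(rr_even)))  # longer segments give better VLF resolution
    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return np.trapz(pxx[m], f[m])               # absolute power in ms^2
    vlf, lf, hf = band_power(0.0033, 0.04), band_power(0.04, 0.15), band_power(0.15, 0.40)
    return {"SDNN": sdnn, "RMSSD": rmssd, "VLF": vlf, "LF": lf, "HF": hf,
            "LF/HF": lf / hf,
            "LF_rel": 100 * lf / (lf + hf),         # relative power (% of LF + HF)
            "HF_rel": 100 * hf / (lf + hf)}
```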
HRV measures have been compared with other traditional compound metrics of cacostatic load and comparably indicate well-being, homeostasis, and dysregulation of stress responses 18 . Low HRV has been associated with greater mortality 19 . In non-pregnant populations, low or excessive as well as unstable HF are associated with psychiatric diagnoses 12,20,21 . In addition to HF, other HRV measures have also been associated with anxiety disorders. For example, total power, LF, SDNN, and the LF/HF ratio have been found to differ between those with higher anxiety, posttraumatic stress disorder (PTSD) or panic disorder and those with lower anxiety, without panic, and not having PTSD [22][23][24][25][26][27] . Several studies have gone further to study the links between HRV and prefrontal cortex (PFC) function 12,[28][29][30] . This is important, as both increased and decreased PFC functioning are linked to psychiatric diagnoses including MDD, bipolar disorder, obsessive-compulsive disorder (OCD), PTSD, and social phobia 20,[31][32][33][34] .
During pregnancy numerous ANS adaptations are required, such as sympathetic activation that compensates for systemic vasodilation and the decrease in mean arterial pressure 35 . Adaptation is required, as pregnancy progresses, to maintain eustasis, i.e., healthy homeostasis, reflected in changes in HRV, including increasing attenuation of stress responsiveness [36][37][38][39][40][41] . Although HRV has been studied in pregnancy 39,41-45 , HRV has not been studied in pregnant women with mood and different forms of anxiety disorders, which is necessary to understand if the ANS adaptations are different for these women. In the present study, HRV profiles obtained during gestational week 38 of the third trimester of pregnancy from PPG before and after exposure to a working memory task, a standardized task that can be administered in a maternal care clinic, were explored in a group of women whose psychiatric diagnoses were characterized.
Subjects
The current study was undertaken as part of the Biology, Affect, Stress, Imaging and Cognition (BASIC) cohort 46 , a population-based study at Uppsala University Hospital, Sweden. The primary aim of the BASIC cohort is to investigate correlates of affective symptoms during pregnancy and after childbirth. Women who registered for the routine ultrasound examination at the hospital around gestational week 17 were asked to participate. Exclusion criteria were age <18 years, not being able to adequately communicate in Swedish, protected identity, blood-borne infectious diseases, and non-viable pregnancies. The BASIC study collected data through web surveys sent out at the time of consent (around gestational week 17), at gestational week 32, as well as at 6 weeks, 6 months, and 12 months postpartum. An example of the surveys is the Edinburgh Postnatal Depression Scale (EPDS) 47 .
Participants were invited to the research laboratory of the Department of Obstetrics and Gynecology at approximately gestational week 38 for an in-person study visit beyond the surveys. Starting in 2014, the substudy in-person visits also included PPG before and after the working memory task, the Wechsler Digit Span Test (DST) 48 . The cohort was enriched with subjects who either had EPDS scores above 12 or scores below 6 to be able to compare subjects likely to have depression and those more likely to not have depression. We also enriched the cohort with women taking SSRIs, regardless of their current level of depressive or anxiety symptoms to reflect real-world care of perinatal women and supported by findings from the literature that have shown differences in several biological measures in those on SSRI treatment compared with untreated depressed women 49 . Between January 2010 and December 2018, 715 invitations were sent out for pregnancy test sessions with 349 pregnancy test sessions completed, half after 2014 (48.8% participation rate including half of those with elevated EPDS scores at 32 weeks) 46 .
The Mini-International Neuropsychiatric Interview (MINI) was administered in order to characterize women by current or past episodes of MDD, bipolar disorder (presence of a manic or hypomanic episode ever), or panic disorder, and by past history of the anxiety disorders agoraphobia, OCD, generalized anxiety disorder (GAD), and social phobia 50,51 . Participants filled out the EPDS and the State-Trait Anxiety Inventory for Adults (STAI), the latter previously used in pregnancy 52,53 . From EPDS scores at the 38-week visit a cutoff of >12 was utilized to determine those more likely to have a depressive disorder 47 . Three questions (EPDS-3A) can be utilized to assess anxiety, and cutoff scores of above 4 and 6 have been validated, with a score of 5 or greater recommended 54,55 . We used a cutoff score of 40 or greater for the STAI-Trait subscale for elevated trait anxiety and a cutoff of 12 or greater for the state (six items) subscale 53 . The STAI cutoff score above 40 is not only validated in pregnancy but also as a predictor of postpartum anxiety and mood states, so it is likely a measure of both current and ongoing anxiety 53 . The Life Incidence of Traumatic Events (LITE) was given to assess the number of exposures to traumatic events occurring during childhood (up to 18 years of age) 56 . Each individual was also characterized as to whether they had an elevated number, above 5, of traumatic events in childhood (75th percentile). Childhood trauma was the focus given the association of maternal childhood trauma with perinatal mood and anxiety disorders [57][58][59] . The LITE has been studied in its ability to assess for exposure to childhood traumas so that the cumulative effects can be studied in relation to other factors and outcomes 60 . While the LITE does not assess PTSD symptoms, it has been found to be correlated with posttraumatic stress, depression, and anxiety 60 . Also included in the analyses were treatment with SSRIs and personal characteristics such as age and BMI. Participants were determined to have severe delivery fear based on either self-report of being "terrified" of delivery or based on medical record notes of a scheduled visit to a fear of birth clinic.
The HRV portion of the visit was done at the end of the visit after individuals completed demographic information and answered questions such as medical history, including current medication usage. HRV was recorded using a PPG transducer (model PPG stress flow, provided by BioTekna, Marcon, Italy) 61 , with one electrode placed on each index finger. Studies have found this method of measuring pulse rate variability (PRV) to have sufficient accuracy for estimating HRV 62,63 . The participants were told to try to relax but were allowed to see their heart rate on the computer screen while recording. Two measurements, of 5 minutes each, were conducted, the second one following the DST. Both time-domain measures (RMSSD and SDNN) and frequency-domain measures (HF power, LF power, very low frequency (VLF) power, and total power) were calculated by the BioTekna device. There were no restrictions on food and caffeine intake or tobacco use before the visit.
A group of healthy non-pregnant controls, aged 22-42 years, with BMI within the range of 20-29 kg/m 2 , parity <4, and no systemic disease or current psychiatric conditions, was also invited to complete the same assessments as in the BASIC study, including having HRV measures recorded. The healthy controls had either never given birth or had not given birth during the past two years, and had completed breastfeeding at least 3 months before. The women had regular periods, were using combined contraceptives, an intrauterine device, or no contraceptives at all, and could not be suffering from premenstrual syndrome. To standardize for the menstrual cycle, the healthy non-pregnant participants were asked to come for the assessment on days 16-26 (luteal phase) of the menstrual cycle.
Ethical considerations
The study protocol has been approved by the Regional Ethical Review Board in Uppsala, Sweden (Dnr 2009/171) and conducted in accordance with the Declaration of Helsinki. Written informed consent was obtained from all participants to participate in the BASIC study, as well as when participating in the substudy prior to any testing.
Statistical analyses
To calculate percent change in HRV measures, each person's measure from before the task was subtracted from the value after the task and then divided by the measure before the task. For the STAI and EPDS results, missing data were imputed if only one response was missing.
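As a small illustration of this calculation (the variable names and numbers below are ours, not the study's):

```python
def percent_change(before, after):
    """Percent change of an HRV measure relative to its pre-task value."""
    return 100.0 * (after - before) / before

# e.g. HF of 250 ms^2 before the task and 300 ms^2 afterwards -> +20%
print(percent_change(250.0, 300.0))  # 20.0
```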
To test the associations between psychiatric diagnoses including MDD, panic disorder, GAD, OCD, bipolar disorder, social phobia, and agoraphobia (yes/no), symptoms (over/under cutoff), BMI, and age on the one hand, and HRV variables on the other, linear regressions were applied. Subjects were grouped based on the cutoffs into categorical variables that were included in the regression analyses. Individuals who met MINI criteria for a diagnosis were compared against all others who did not meet criteria for that diagnosis, regardless of other diagnoses for which they may have met criteria. HRV variables were plotted in histograms to test normality. For LF/HF and RMSSD, which were assessed as non-normally distributed, Mann-Whitney non-parametric testing was applied.
Fisher's exact test was applied for BMI and age in relation to the psychiatric diagnoses, fear of childbirth, STAI results, EPDS results, and exposure to trauma.
To adjust for multiple testing in the regression analyses, Bonferroni correction was applied to account for the multiple comparisons. Results with p values <0.00625 were considered significant, given eight HRV measures. The data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 26.0 (IBM SPSS, Armonk, NY).
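The cutoff follows directly from dividing the nominal alpha level by the eight HRV measures tested; a small illustration with hypothetical p values:

```python
alpha, n_measures = 0.05, 8
threshold = alpha / n_measures          # 0.00625, the cutoff quoted above
p_values = [0.004, 0.006, 0.03, 0.20]   # hypothetical p values, for illustration only
significant = [p for p in p_values if p < threshold]
print(threshold, significant)           # 0.00625 [0.004, 0.006]
```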
Demographics
One hundred and twenty-six subjects participated in the third trimester visit that included the HRV measures. The average age of subjects was 31.6 years (s.d. of 4.4) with a range of 21-43 years. One hundred and fourteen subjects, or 93%, reported having been born in Scandinavia. The remaining subjects were distributed evenly between having been born in Europe outside of Scandinavia, Asia, or Africa, or did not report where they were born. Almost 80% had a university-level education and 65% identified as working full-time at gestational week 17. By gestational week 32, 38% were working full-time, 25% working part-time, and 20% taking leave due to the pregnancy. The average BMI before pregnancy, based on the participants' self-reported weight, was 24.3 kg/m 2 (s.d. 4.4 kg/m 2 ) with a range of 17.7 to 39.8 kg/m 2 .
Prevalence of mood and anxiety diagnoses and respective risk factors
There were 42% who did not meet criteria for any psychiatric diagnosis from the MINI. Of these who did not meet criteria for a psychiatric diagnosis, 13% had past exposure to greater than five childhood traumatic events, 32% did not fill out the LITE, one was taking an SSRI, and three had EPDS scores above 12. There were 45% with a history of MDD, including 26% who had a diagnosis of MDD as their only diagnosis. Panic disorder was the next largest diagnosis with 15%, followed by 12% with a GAD. Despite the large percentage with a history of depression or anxiety disorders, only 6.4% were taking an SSRI at 17 and/or 32 weeks' gestation (medication information was missing for 24 women). Table 1 shows the distribution of diagnoses by individual; the table shows the number of women in the BASIC cohort included in these analyses with a diagnosis or combination of diagnoses as determined by the MINI, and the final column shows the number with each diagnosis. Note that in the "None" group, some individuals did not have LITE results, had greater than five childhood traumatic events, or had elevated EPDS scores; one was taking an SSRI. The average number of childhood traumatic events was 3.7 (s.d. 2.8), ranging from no childhood traumatic events to 11 childhood traumatic events for one person.
The average score on the STAI-Trait was 36.7 (s.d. 9.8). The average score on the STAI-State was 9.5 (s.d. 2.8) with a range of 6-19. The average score on the EPDS during the visit was 6.2 (s.d. 4.5) with a range of 0-18. There were 11% of subjects who had an elevated total EPDS score at the 38-week visit, but also an additional 15% who, despite scoring 12 or below, had an elevated score on the three anxiety questions of the EPDS and/or a positive screen for suicidal or self-harm thoughts.
Mean heart rate variability measures in the third trimester of pregnancy
HRV measures in comparison with other studies in pregnancy
The average and standard deviation of HRV measures from this study were similar to those reported in the literature for women in the third trimester (see Table 2). In response to the stressor, across all individuals the mean HF and RMSSD values increased from before to after the tasks, while the mean VLF and HR values were lower after the tasks.
HRV measures in pregnancy in comparison with nonpregnant subjects
Heart rate in pregnant women versus non-pregnant women from Uppsala was elevated as shown in Table 2, whereas HF power, LF power, RMSSD, and SDNN were lower. Table 3 presents the overall patterns of HRV measures per psychiatric diagnosis, scores on the EPDS, states and traits of anxiety from the STAI, severe fear of childbirth, whether taking an SSRI, and exposure to a greater number of childhood traumatic events. Those with p values <0.00625 were considered significant given eight HRV measures (dark red and blue in Table 3). P values <0.05 and <0.10 were also noted in Table 3 as lighter reds and blues. Supplement Tables 1 through 4 contain more details on the findings.
Heart rate variability measures by mood and anxiety diagnoses and their risk factors
After correction for multiple comparisons, at least one HRV measure differed between those with a specific psychiatric diagnosis (past or current) and all others (those not meeting criteria for that specific psychiatric diagnosis), including panic disorder, social phobia, and OCD. As shown in Table 3, lower HF, a measure of parasympathetic activity, was found for women who met criteria for a history of OCD (p = 0.004) compared with women who did not meet criteria for OCD, and for women exposed to greater than five childhood traumatic events (p = 0.006) compared with those exposed to four or fewer childhood traumatic events. Conversely, higher LF, a measure thought to be impacted by sympathetic system effects, and a higher LF/HF ratio were found for those meeting criteria for a panic disorder (p = 0.006) or social phobia (p = 0.002), those with elevated trait anxiety (p = 0.006), and those exposed to greater than five childhood traumatic events (p = 0.004).
In terms of current elevated distress symptoms, as measured by the EPDS at the 38-week visit, including separately the EPDS-3A, no findings were significant after controlling for multiple comparisons. When all with one or more MINI diagnoses, regardless of type of diagnosis, were included in one group and compared with those with no MINI diagnoses, there were no differences that remained statistically significant after consideration for multiple comparison. Any fear of birth was not statistically significant; there were 43 individuals with any fear and 82 not reporting fear. Fear of childbirth included fear of C-section, fear of vaginal delivery, and severe fear for any of the above. BMI was also included in the model for fear of childbirth only, since it was only significantly associated with this variable.
Even after adjusting for multiple comparisons, the change in VLF from before to after the tasks in those with SSRI use at 17 and/or 32 weeks of pregnancy was different from that in those not taking an SSRI.
Although BMI and age were both associated with differences in HRV measures, BMI was not associated with the diagnoses or related variables, with the exception of the fear of childbirth condition. Almost half of those who were overweight or obese had severe fear of delivery, as compared with only 28% of those with lower fear of delivery (Fisher's exact two-sided p value 0.046). Age was only a factor for a positive screening on the MINI for MDD; those 35 and older made up 35% of those with a diagnosis of MDD versus 16% of those without (Fisher's exact two-sided p value 0.021). Results reported in the tables for fear of childbirth are adjusted for BMI, and those for MDD are adjusted for age.
Discussion
In this study, HRV measures captured from PPG before and after exposure to the DST were explored in a clinical sample of pregnant women at gestational week 38. Women were characterized in terms of current and past depression, bipolar manic, and anxiety disorder episodes and risk factors for perinatal mood and anxiety disorders including trait anxiety and exposure to traumatic events during childhood. This study explores the complexity of mental health of a population-based sample of individuals, representing those seen in maternal care settings, showing the difficulty of defining cases and controls given the number of different diagnoses, risk factors, and co-morbidities, which reflects the situation in clinical settings. The primary finding was that those who meet criteria for several different anxiety disorders as well as women with higher trait anxiety or greater exposure to childhood traumatic events had evidence of ANS alterations compared with others in late pregnancy. Anxiety may be associated with greater ANS alterations than depression, or depression in this sample may be more heterogeneous, or not as severe; the anxiety findings are particularly relevant owing to one in five women having an anxiety disorder in pregnancy, whereas mood disorder episodes are more prevalent in the postpartum period 1,2 . Pregnancy is a known stressor with effects that extend beyond the perinatal period, with a well-known example being the emergence of diabetes and hypertension during pregnancy that are associated with increased cardiovascular risks later in life 35,64,65 . Mean HRV values across all pregnant women were lower compared with nonpregnant women and consistent with other studies in pregnant women, regardless of use of PPG or ECG for measurement 39,[41][42][43][44][45] .
Unique ANS alterations in late pregnancy based on mental health history and risk
The findings from this study of a population-based, maternal care, clinically relevant sample support the notion that women who meet criteria for certain diagnoses or risk factors have lower PNS activity, whereas individuals meeting criteria for other diagnoses or risk factors show greater SNS activity. Interestingly, those exposed to more childhood trauma events had HRV measures reflecting both lower PNS and higher SNS. Although similar findings have been found in nonpregnant individuals (i.e., relative imbalance of HF and LF with social phobia 66 ; lower HF and increased LF/HF ratio with PTSD [24][25][26]), this is the first time pregnant women with anxiety disorders were shown to have distinct HRV patterns. Those with OCD and those with exposure to childhood traumatic events had similar alterations in ANS functioning, supported by literature showing childhood trauma exposure has been associated with greater obsessive-compulsive symptoms, particularly in females 67 . Obsessive-compulsive symptoms are more commonly seen in perinatal patients 68,69 . Further study is needed to determine whether this increase in symptoms associates with ANS alterations in the perinatal period. Women with panic disorder in late pregnancy in the current study had higher LF, a slightly different finding than reported by Zhang et al. 27 in non-pregnant individuals with panic disorder, who had alteration in the ratio of HF and LF. In this study, SSRI users exhibited less change in VLF between before and after completing the memory task, compared with those not taking SSRIs. VLF is a measure that includes multiple components, including the renin-angiotensin system, and SSRI use may also indicate severity of the mood and anxiety disorder 70 . The literature supports the importance of considering SSRIs in relation to HRV alterations, but also the contradictions and complexity: HRV has been reported both to predict response to antidepressant medication and to not change with antidepressant treatment 71,72 . The results of this study, while preliminary, suggest that particularly in late pregnancy, the ANS alterations observed for different groups of psychiatric disorders are in some cases similar and in some slightly different than those observed outside the perinatal period; the impact of these ANS alterations needs to be further studied with regard to outcomes for mother and child.
Strengths, limitations, and future directions
One strength of this study is a population-based sample with comorbid conditions that reflects clinical settings. Table 1 shows the complexity of such samples, where many individuals have co-morbidities or other risk factors. This was also true for those without a MINI diagnosis. By comparing each group based on those that met the criteria versus those that did not meet the criteria, we have addressed each diagnosis separately, regardless of other diagnoses. However, it will be important for future studies to recruit even greater numbers with single disorders and risk factors in order to better describe distinct HRV alterations. In addition, as methods of diagnosis are also dependent on self-reports, some who do not meet full criteria may still have symptoms comparable to those who met MINI criteria for a diagnosis. A dimensional approach may be important given the complexity of clinical populations.
The study was limited in power by the sample size despite being larger than previous studies of HRV in pregnancy. Some true associations may have been missed owing to the sample size. It is possible that some significant findings are due to chance given the multiple measures and comparisons. We nevertheless also controlled for multiple comparisons and have only discussed the most robust findings. Although GAD in a nonpregnant population has been associated with differences in HF 73 , women with GAD in this study did not show significantly different HRV values. Although HF was significantly lower in non-pregnant individuals with GAD, all pregnant women had lower HF than non-pregnant women, and so larger sample sizes may be needed to reveal alterations in pregnant women with GAD versus pregnant women without a history of GAD. GAD in this patient population might also be more heterogeneous, and a larger sample size would also further enable subdivision into more homogeneous subgroups of generalized anxiety. Similarly, no robust RMSSD findings were found despite the robust HF findings; RMSSD and HF are thought to represent PNS activity from the vagus nerve on HRV and are usually correlated 11 . Given the respiratory changes, particularly in the third trimester, differences between RMSSD and HF may be exacerbated in late pregnancy. As Wang and Huang note, RMSSD may not correlate with HF power given the different impacts of respiration in general 74 ; more information is needed comparing RMSSD and HF in late pregnancy. Logan et al. 45 found opposite directions in HF and RMSSD pre and post a stretching exercise, further supporting that respiration is an important consideration. A larger sample size in future studies would be needed to validate our findings, assess whether additional findings become apparent, and further study the use of PPG in characterizing distinct psychiatric conditions. However, even with the sample size limitations there were some groups that had robust alterations compared to those without those risk factors or diagnoses.
This study demonstrated that PPG has potential for assessment of ANS functioning, but it also requires further study to continue to improve its use as a biomarker in the field of perinatal mental health. Several studies have compared HRV measured from PPG with other methods and found good agreement between methods 62,63 . PPG can be captured by mechanisms that could be utilized by patients in their natural environments and during day-to-day activities 62,63 . Some differences, though, especially in response to a physical stressor, have been identified and merit further investigation 75,76 . Yuda et al. 76 suggest PPG may be considered as its own biomarker, representing ANS activity and its impacts on pulse conduction. Further studies comparing findings from the behavioral laboratory with ECG in relation to PPG in maternal care settings and patients' homes are warranted.
The future role of HRV measurement using PPG during pregnancy, in particular for those with anxiety disorders, trait anxiety, and greater exposure to childhood traumas, depends on several factors. First, the methods must be standardized and a range of normal values established. This will require inclusion of pregnant and non-pregnant women by psychiatric conditions, such as by primary disorders alone and with co-morbidities. Given the consistency between HRV values during pregnancy in this and other studies, it may be possible to identify normal ranges of values. Normal values for HRV measures may need to be adjusted for factors including age and different hormonal states across the perinatal period 77,78 . As an example of the latter from the literature, trait anxiety was found to be associated with a greater decrease in HF and greater HF reactivity in the third trimester, along with a greater increase in VLF compared with the second trimester 41,42 . This discrepancy requires further exploration; it may reflect HRV differences between time points in the perinatal period but may also be due to methodological differences, as HRV in one study was measured during a stressor, whereas in the other HRV was taken at rest.
Second, longitudinal studies that target recruitment of a greater number of women with social phobia, panic disorder, OCD, trait anxiety, and exposure to a number of childhood traumas are also needed. Larger longitudinal studies will allow study of changes in each individual across the perinatal period. Larger longitudinal studies would also improve our understanding of the use of the DST as a tool for testing memory in relation to biomarkers, but also as a possible evidence-based stressor. Memory deficit is a complaint of many women that increases during the perinatal period but also is more associated with depression and anxiety disorders 79,80 . There is evidence that women may utilize different parts of the brain to accomplish working memory tasks during different time points in pregnancy, and these different processes may result in women with anxiety having more stress in navigating the task, particularly in late pregnancy 81 . Another example where a longitudinal study would be beneficial is in understanding the finding with regard to SSRI users exhibiting less change in VLF between before and after completing the memory task, compared with those not taking SSRIs. Longitudinal assessments of women who start, maintain, or stop SSRIs are needed to better understand how SSRIs may impact the ANS. Longitudinal assessments of PPG measures would allow for comparison not only between individuals but within individuals, to assess the changing balance in ANS functioning and when imbalance first presents. Similarly, longitudinal studies would also allow for comparison of the same subjects in terms of episodes of exacerbations of psychiatric conditions while pregnant and non-pregnant beyond the perinatal period. Further, they would also improve understanding of the association between ANS alterations in late pregnancy and maternal and child outcomes. Study of the impact of lower PNS functioning and higher SNS functioning separately and in conjunction at baseline and in relation to tasks during late pregnancy has the potential to identify dyads that require more support and to direct intervention.
Conclusions
This novel study explores the complexity of mental health disorders in women during late pregnancy, in relation to ANS functioning. The primary finding was that those who meet criteria for several different anxiety disorders as well as women with higher trait anxiety or greater exposure to childhood traumatic events had evidence of HRV alterations as measured by PPG compared with others in late pregnancy. Some HRV measures may point towards lower PNS activity in women with certain diagnoses or risk factors, others towards a shift in the balance of systems maintaining HRV, and still others towards an increase in SNS and other heightened stress responses. Continued research should be encouraged in the direction of determining the normal range of values for HRV measured by PPG. HRV is thought to reflect ANS and CNS functioning and larger studies may provide new information on ANS and CNS functioning in late pregnancy. PPG could eventually provide a tool that can be utilized in maternal care settings after larger studies further our understanding of how we can use each perinatal woman's ANS reaction to different tasks and stressors as an intermediate phenotype, to identify high-risk individuals in need of extra support. | 2021-05-14T13:48:20.011Z | 2021-05-14T00:00:00.000 | {
"year": 2021,
"sha1": "5d779d6b1bf568117717772d2ef8b9042c0c6558",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41398-021-01401-y.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0fa1ee5678cb3eb93afd8f548db5618ff3807d37",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25381852 | pes2o/s2orc | v3-fos-license | An Improved HPLC-DAD Method for Quantitative Comparisons of Triterpenes in Ganoderma lucidum and Its Five Related Species Originating from Vietnam
An HPLC-DAD method for the quality control of wild and cultivated Ganoderma lucidum (Linhzhi) and related species samples was developed and validated. The quantitative determination of G. lucidum and its related species using 14 triterpene constituents, including nine ganoderma acids (compounds 4–12), four alcohols (compounds 13–16), and one sterol (ergosterol, 17), is reported. The standard curves were linear over the concentration range of 7.5–180 µg/mL. The LOD and LOQ values for the analyses varied from 0.34 to 1.41 µg/mL and from 1.01 to 4.23 µg/mL, respectively. The percentage recovery of each reference compound was found to be from 97.09% to 100.79%, and the RSD (%) was less than 2.35%. The precision and accuracy ranged from 0.81%–3.20% and 95.38%–102.19% for intra-day, and from 0.43%–3.67% and 96.63%–103.09% for inter-day, respectively. The study disclosed in detail significant differences between the quantities of analyzed compounds in different samples. The total triterpenes in wild Linhzhi samples were significantly higher than in cultivated ones. The total constituent contents of the five related Linhzhi samples were considerably lower than those in the G. lucidum specimens, except for G. australe, whose constituent content outweighed wild Linhzhi's content by 4:1.
However, quantitative analyses of the constituents of the G. lucidum species collected in Vietnam have not been reported, and no data exist that compare the components of wild-harvested, cultivated, and other related Linhzhi species from Vietnam. In this study, a reverse-phase HPLC method was developed for the fingerprint analysis and simultaneous determination of 17 compounds, including a new lanostane triterpene (butyl lucidenate E2, 11) [19], uracil (1), 2,5-dihydroxybenzoic acid (3, gentisic acid) [20], 12 lanostane triterpene derivatives (compounds 4-10 and 12-16), adenosine (2), and ergosterol (17). The developed method was successfully applied to the quantification of 14 triterpenoids in six wild and four cultivated G. lucidum samples, and five related Ganoderma species.
Optimization of Sample Preparation Condition
Several methods for the extraction of the fruiting bodies of Ganoderma species were surveyed, including ultrasonication, refluxing, and maceration using methanol. The ultrasonication method was the most effective; therefore, we used this method to evaluate the effect of different solvents (100% methanol and 100% ethanol) on the amount of sample extracted. When 100% methanol was used, the content of the sample extracted was higher. To test the time necessary to accomplish the extraction, samples were prepared for 30, 60, 90, and 120 min. Since the amount of the sample extracted after 90 min was the same as that of the 120 min sample and higher than that of the 30 min sample, 90 min was selected as the optimal extraction time.
Selection of HPLC Conditions and Validation of the Developed Method
Although several HPLC methods have been reported for the determination of the constituents of Linhzhi samples [11,12,14,21,22], few constituents have been analyzed within the same study due to a lack of standard reference compounds. Our previous chemical investigation of G. lucidum from Vietnam resulted in the extraction and isolation of 17 compounds 1-17 (Figure 1). The chemical structures of the isolated compounds were identified using UV-Vis, IR, 1 H- and 13 C-NMR, and mass spectrometry, as well as by the comparison of these spectroscopic data with those reported in the literature, as a new lanostane triterpene (butyl lucidenate E2, 11) [19], adenosine (2), and two compounds isolated for the first time in G. lucidum, namely uracil (1) and gentisic acid (3) [20]. In addition, 11 lanostane steroids were identified, including lucidenic acid N (4), lucidenic acid E2 (5), ganoderic acid A (6), lucidenic acid A (7), ganoderic acid E (8), methyl lucidenate E2 (9), methyl lucidenate A (10), and butyl ganoderate A (12) [23], lucidadiol (13), ganodermanontriol (14), ganoderiol F (15), ganodermadiol (16), and ergosterol (17) [19,20,24]. The purities of the compounds were greater than 95%, as estimated using an HPLC-DAD method. Most Linhzhi triterpenoids contain a conjugated skeleton, and their UV absorption peaks are concentrated at 210, 237, 243, 253, and 255 nm [11,16]. The analytes were divided into three groups including lanostane triterpenoid-type alcohols and acids, sterol, and others including gentisic acid and adenosine. Based on the maximum absorption of the compounds, the detection wavelengths were set at 256 nm for the acids and their derivatives and 243 nm for the others. The retention times of the compounds in the analyzed samples were distinguished by comparing with those of each reference compound, which are shown in Table 1. The sample preparation conditions for the extraction of the compounds in the Ganoderma species were optimized, as described in Section 2.1. This study describes the results of the fingerprint analysis of the compounds 1-17 and the quality analysis of 14 of them (compounds 4-17). The chromatographic fingerprints of the Ganoderma species are shown in Figure 2, which are divided into three groups including (A) the wild Linhzhi group, (B) the cultivated Linhzhi group, and (C) the related species group. This HPLC-DAD method was validated for linearity, the limits of detection (LOD) and limits of quantitation (LOQ), recovery, and reproducibility. Each coefficient of correlation (r 2 ) was >0.999, as determined by least square analysis, suggesting good linearity between the peak area ratio and the compound concentration (Table 1). The LOD and LOQ were examined based on the lowest detectable peak in the chromatogram with a signal-to-noise (S/N) ratio of 3 and 10, respectively. Under our experimental conditions, we determined the LOD and LOQ for the 14 reference compounds, shown in Table 2. The values obtained for both the LOD and LOQ in these analyses were low enough to detect traces of the compounds in the crude extract. For the recovery, each reference compound was spiked into 1 g of each Ganoderma species at three levels, as described in the Experimental Section. The spiked samples were assayed, and the recoveries of each reference compound were found to be 97.09% to 100.79%, and the relative standard deviation (RSD, %) was less than 2.31% (Table 2).
The average recovery was calculated by the formula: R (%) = [(amount found in the spiked sample − amount found in the unspiked sample)/amount of standard spiked] × 100. Table 3 shows the intra-day and inter-day precision (%RSD) of this HPLC method. The precision and accuracy ranged from 0.81%-3.20% and 95.38%-102.19% for intra-day and from 0.40%-3.67% and 95.63%-103.09% for inter-day, respectively. The data demonstrate that the method was acceptable in terms of linearity, accuracy, and reproducibility.
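A brief sketch of the two calculations described above (spike recovery and relative standard deviation); the numerical values are illustrative only and are not taken from the paper's tables:

```python
import numpy as np

def recovery_percent(spiked_found, unspiked_found, amount_spiked):
    """Spike recovery: (amount found in spiked sample - amount in unspiked sample) / amount spiked x 100."""
    return 100.0 * (spiked_found - unspiked_found) / amount_spiked

def rsd_percent(values):
    """Relative standard deviation (%RSD) of replicate determinations."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

print(recovery_percent(spiked_found=148.2, unspiked_found=49.0, amount_spiked=100.0))  # ~99.2 %
print(rsd_percent([101.3, 99.8, 100.5, 98.9, 100.1]))                                  # intra-day %RSD
```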
Quantitative Comparison of Different Ganoderma Species
The amount of the chemical compounds within the samples was influenced by various factors such as the place of origin, type of study sample (cultivated or wild samples; different species of the same genus), and harvesting season. The variation of the lanostane triterpenoid alcohol or acid derivatives and ergosterol in the different Ganoderma species originating from the wild and cultivated collections from Vietnam was evaluated. The fingerprint analysis of the wild Linhzhi group (Figure 2A) showed a similarity across the chromatograms, and the 17 analytes were present in all the samples. Figure 2B,C show the differences among the wild Linhzhi group (A group), the cultivated Linhzhi group (B group), and the related Linhzhi species group (C group). In particular, the chromatograms of the five related Linhzhi species including G. sp, G. applanatum, G. australe, G. colossum, and G. subresinosum showed obvious differences. Compounds 1-3 were not distinctly separated using the developed method, so they were not quantified. This study focused on the simultaneous determination of the remaining 14 compounds 4-17 by using the developed HPLC-DAD method for all the samples (Groups A, B, and C), which are summarized in Tables 4-6, respectively. Each sample was analyzed in triplicate to ensure the reproducibility of the quantitative results. The comparison of the 14 compounds in the Linhzhi samples (both wild and cultivated samples) showed that the content of acids and their derivatives in all the samples is significantly higher than that of the alcohols. However, there are differences between the wild and cultivated samples. While the total amounts of the acids and alcohols in the wild samples vary from 2089.40 to 44,703.07 μg/g and 917.41 to 2498.68 μg/g, respectively, those from the cultivated samples fluctuate between 1003.83 and 1720.69 μg/g and 153.31 to 549.32 μg/g, respectively. Similarly, the total amount of the acids in G. australe outweighs that of the wild and cultivated Linhzhi samples and other related Linhzhi species (Tables 4-6), with a total amount of 19,999.28 μg/g. In addition, the amount of the two compounds 11 and 12 in the analyzed samples was below the LOQ except for TGau, which had 131.29 ± 1.31 µg/g. As shown in Tables 4-6 and Figure 3, the total amount of all the compounds in the wild samples was appreciably higher than in the cultivated ones. For example, the amount of compound 4 (lucidenic acid N) in the wild G. lucidum samples varied from 257.80-845.46 μg/g; however, this compound was not observed in GL2, and in the other cultivated samples (Group B) it fluctuated between 52.53-139.08 μg/g. Another good example is lucidenic acid E2 (5), which was found in the wild G. lucidum samples in a range from 319.47 to 1,766.75 μg/g in comparison with the cultivated G. lucidum samples in a range from 258.06 to 481.31 μg/g. In addition, the wild samples contained significantly more ganoderiol F (15) than the cultivated samples, which varied between 563.94 μg/g and 1,635.06 μg/g and between 65.03 μg/g and 226.71 μg/g, respectively. Interestingly, a different trend was observed for methyl lucidenate E2 (9), which was under the LOQ in the wild samples but was found in GL1, GL2, and GL3 in the range between 286.94 and 446.95 μg/g. On the whole, the samples from Bac Giang (VN16 and VN18) seemed to have a higher amount of the constituents than the specimens from Quang Nam (VN1, VN12, and VN13).
The total amount of constituents in the related Linhzhi samples (Group C), including TGau, TGLs, TGap, TGc, and TGs, was considerably lower than the amount in the Linhzhi samples of Groups A and B. The amount of constituents in TGau outweighed that of the others, as it was about 4 times as high as that of the wild Linhzhi samples (VN12). The proportion of constituents in the other species is different from that in G. lucidum. More specifically, the proportion of lucidenic acid E2 (5), which is one of the major compounds in G. lucidum, is low in TGap and TGs and is not found in TGau. Similarly, while almost all G. lucidum samples contain a considerable amount of ganodermanontriol, it was only seen in trace amounts in the TGLs, TGap, TGau, and TGs samples. In contrast, methyl lucidenate E2 (9), which is not observed in G. lucidum, is the major compound in the TGap, TGau, and TGc samples. It is noteworthy that the amount of the constituents in TGs was substantially smaller than in the others, and 50 percent was ergosterol.
Discussion
To date, several previous studies have reported using HPLC analytical methods for the analysis of Ganoderma lucidum and its related products. For example, Zhao et al. used HPLC for the determination of 9 triterpenes and sterols for the quality evaluation of G. lucidum [25]. In a study from Wang et al., an RP-HPLC method was developed for the determination of six ganoderic acids [22]. In 2004, Gao and coworkers reported the quantitative determination of 19 triterpene constituents, including six ganoderma alcohols and 13 ganoderic acids [11]. These studies and others focused only on the ganoderic acids and their derivatives [12,26]. Nucleosides, nucleobases, and polysaccharides were used for the qualitative and quantitative analyses of Ganoderma spp [27]. However, these studies are insufficient for a comprehensive evaluation of G. lucidum, and there is little data that compare G. lucidum from different origins or compare G. lucidum and its related species.
In this paper, we developed and optimized an HPLC-DAD method that allows for the specific identification of many terpenes. Fourteen triterpenes, including nine ganoderma acids 4-12, four alcohols 13-16, and one sterol (ergosterol, 17), were used for the quantitative determination of G. lucidum and its related species. Eight of the nine ganoderma acids (all but ganoderic acid A) had never been analyzed before. Two new lanostane triterpenes 11 and 12, which were recently discovered by our group and Lee Iksoo et al. [24], were used in the fingerprint analysis and quantitative determination for the first time [19,23]. Alcohols 13 and 16, which had never been examined quantitatively in previous studies, were used to evaluate G. lucidum and its related species chemically using HPLC-DAD. The Ganoderma species investigated in this study, except for G. applanatum [28], were studied quantitatively using HPLC for the first time. However, in the study by Liu et al., G. applanatum was evaluated by using five ganoderic acids. Moreover, two compounds, uracil and gentisic acid, which were found in G. lucidum for the first time, were confirmed using the HPLC fingerprint technique.
In comparison with the results of previous studies, this study showed both similarities and differences. In a study by Gao et al. [11], the amount of ganodermanontriol (14) in Japanese Linhzhi ranged widely from 19.2 to 235.3 μg/g. In our study, there was a wide variation of compound 14 in the wild, cultivated, and related Linhzhi samples, ranging from 129.31 to 394.10 μg/g, 50.85 to 208.34 μg/g, and 72.99 ± 1.88 μg/g (related Linhzhi species, TGc), respectively. Therefore, the amount of compound 14 in the Japanese samples was similar to the Vietnamese cultivated samples but was lower than that found in the Vietnamese wild Linhzhi. These results indicate that the amount of compound 14 may not depend on geographic factors but instead is affected by the cultivation conditions. Gao's study showed that the contents of ganoderiol F (15) ranged from 18.9 to 156.5 μg/g [11]. However, the Vietnamese wild samples contained 563.94-1635.06 μg/g of 15, and the Vietnamese cultivated samples contained 65.03-226.71 μg/g of 15. Both the wild and cultivated Linhzhi samples from Vietnam contain more compound 15 than the Linhzhi from Japan. Similar to the results found with compound 14, these results indicate that the amount of 15 is affected not only by geographic factors but also by cultivation conditions. In a study from the Yuan group, the content of ergosterol (17) from sporoderm-broken germinating spores of Linhzhi varied from 32 μg/g to 1202 μg/g in the cultivated Linhzhi from China [21], corresponding with the results from this study, as the content of 17 in the cultivated Linhzhi ranged from 135.14 μg/g to 795.96 μg/g. With regard to methyl lucidenate E2 (9), there was a huge difference in the amounts found among the wild and cultivated Linhzhi and its related species. While a quantitative determination of 9 in the wild species could not be made, its content was above 288 μg/g in the cultivated species. Interestingly, compound 9 was the major compound in related species of Linhzhi, as its content was at least 1023.84 μg/g across the species and was as high as 2499.52 μg/g in G. australe.
Chemicals and Reagents
Acetonitrile and methanol (MeOH) of analytical HPLC grade were purchased from Merck (Darmstadt, Germany). Phosphoric acid of analytical reagent grade was obtained from Sigma-Aldrich (St Louis, MO, USA). The other organic solvents and other chemical reagents were of analytical reagent grade.
Reference Compound Preparation
To determine the content of fourteen markers (compounds 4-17) of Linhzhi and related Linhzhi samples, the dried powders were used for extraction. The same amounts (about 1 g) of pulverized fruiting bodies were weighed and sieved through 50 mesh and then placed into a volumetric flask; methanol (10 mL) was added, the weight was accurately measured, and the samples were ultrasonically extracted for 90 min at 50 °C. The solution was cooled, weighed again, and the loss in weight was made up with methanol. The solution was filtered through a 0.45 µm membrane filter prior to HPLC analysis.
HPLC
Analytical HPLC was carried out on an LC-20A system (Shimadzu, Kyoto, Japan) consisting of an LC-20AD quaternary gradient pump, an autosampler, and an SPD-M20A diode array detector, controlled by LCsolution software (ver. 1.25). A Zorbax XDB C18 column (4.6 × 250 mm, 5 µm; Agilent Technologies, Inc., Santa Clara, CA, USA) was used. The binary gradient system consisted of 0.1% phosphoric acid in water (A) and acetonitrile (B), and separation was achieved using the following gradient program: 0 min, 4% B; 10 min, 11% B; 15 min, 30% B; 60 min, 45% B; 90 min, 85% B; 110 min, 100% B; 130-140 min, 100% B; and finally, reconditioning the column with 4% B isocratic for 10 min. The flow rate was 0.5 mL/min, the system operated at 40 °C, and the detection wavelengths were set at 243 and 256 nm for ganoderma alcohols and acids, respectively.
Method Validation
Every standard compound was accurately weighed and dissolved in 100% MeOH to prepare a stock solution of 1.0 mg/mL concentration. Working standard solutions of ganoderma alcohols and acids were prepared by repeated dilution with methanol to give eight concentrations (7.5-180 µg/mL). The eight concentrations of the 14 analytes were injected in triplicate, and the calibration curves were constructed by plotting the peak areas versus the concentrations of each analyte. The linearity was demonstrated by a correlation coefficient (r 2 ) greater than 0.999. The limit of detection (LOD) and the limit of quantification (LOQ) were determined based on signal-to-noise ratios (S/N) of 3:1 and 10:1, respectively. Intra- and inter-day variations were used to determine the precision of the developed method. The relative standard deviation (RSD) was taken as a measure of precision. Intra- and inter-day repeatability was determined by five analyses within one day and on five separate days, respectively. The recovery tests were prepared by mixing a powdered sample (1 g) with three concentration levels (25%, 50%, and 100%) of each compound. The mixture was then extracted following the sample solution preparation procedure described above for HPLC analysis. The extract solutions were filtered through a 0.45 µm membrane. The HPLC-DAD analysis experiments were performed in triplicate for each control level. Precision was determined by multiple analyses (n = 5) of quality control samples. All samples were then subjected to HPLC analysis to calculate the recovery rates.
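The following sketch illustrates, with made-up calibration data, how the linearity (least-squares fit and r 2 ) and the S/N-based LOD/LOQ described above could be computed; the noise level and the assumption that peak response is proportional to concentration are illustrative only, since the authors determined LOD and LOQ directly from the lowest detectable peaks.

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. peak area (arbitrary units)
conc = np.array([7.5, 15, 30, 45, 60, 90, 120, 180], dtype=float)
area = np.array([310, 625, 1240, 1850, 2480, 3710, 4950, 7430], dtype=float)

# Least-squares calibration line and coefficient of determination (r^2)
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# LOD/LOQ from the signal-to-noise criterion: concentration whose signal reaches 3x (10x) the baseline noise
noise = 12.0                 # assumed baseline noise, same units as the peak area
lod = 3 * noise / slope
loq = 10 * noise / slope
print(f"r^2 = {r2:.4f}, LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
```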
Statistical Analysis
The data were analyzed using the unpaired Student's t-test between the control and compounds. Data were compiled from three independent experiments, and values are expressed as mean ± SD.
Conclusions
This is the first time an HPLC-DAD method for the quantitative analysis of constituents in Ganoderma lucidum and its related species originating from Vietnam has been established. In the present work, we have reported for the first time the presence of lanostane triterpenes, ergosterol, uracil, adenosine, and gentisic acid in the Vietnamese G. lucidum and its related species. In particular, two new lanostanes, butyl lucidenate E2 and butyl ganoderate A, were reported for the first time in G. lucidum originating from Vietnam and its four related species using an HPLC-DAD method. In addition, the highest contents of methyl lucidenate E2 were found in G. australe, G. applanatum, and G. colossum, respectively. In the present study the profiles of the 17 compounds differed significantly among all the analyzed samples. It can also be concluded that the geographical distributions, growth conditions, and substrates might be the key to the differences in chemical composition. The present work provides an accurate and sufficient method for quantitative evaluation, which is suitable for the quality evaluation of Ganoderma products. | 2016-03-22T00:56:01.885Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "77c0d21741deda880157423f1fdf8fb7730f19f8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/20/1/1059/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77c0d21741deda880157423f1fdf8fb7730f19f8",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
46833121 | pes2o/s2orc | v3-fos-license | High-density amorphous ice: nucleation of nanosized low-density amorphous ice
The pressure dependence of the crystallization temperature of different forms of expanded high-density amorphous ice (eHDA) was scrutinized. Crystallization at pressures 0.05–0.30 GPa was followed using volumetry and powder x-ray diffraction. eHDA samples were prepared via isothermal decompression of very high-density amorphous ice at 140 K to different end pressures between 0.07–0.30 GPa (eHDA0.07–0.3). At 0.05–0.17 GPa the crystallization line Tx (p) of all eHDA variants is the same. At pressures >0.17 GPa, all eHDA samples decompressed to pressures <0.20 GPa exhibit significantly lower Tx values than eHDA0.2 and eHDA0.3. We rationalize our findings with the presence of nanoscaled low-density amorphous ice (LDA) seeds that nucleate in eHDA when it is decompressed to pressures <0.20 GPa at 140 K. Below ~0.17 GPa, these nanosized LDA domains are latent within the HDA matrix, exhibiting no effect on Tx of eHDA<0.2. Upon heating at pressures ⩾0.17 GPa, these nanosized LDA nuclei transform to ice IX nuclei. They are favored sites for crystallization and, hence, lower Tx. By comparing crystallization experiments of bulk LDA with the ones involving nanosized LDA we are able to estimate the Laplace pressure and radius of ~0.3–0.8 nm for the nanodomains of LDA. The nucleation of LDA in eHDA revealed here is evidence for the first-order-like nature of the HDA → LDA transition, supporting water’s liquid–liquid transition scenarios.
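The nanodomain size estimate quoted above rests on the Young-Laplace relation for a spherical nucleus, delta_p = 2*gamma/r; the sketch below only illustrates the form of that relation, and both the interfacial energy and the excess pressure are placeholder values, not numbers taken from the paper.

```python
# Young-Laplace relation for a spherical LDA nucleus embedded in an HDA matrix.
gamma = 0.02        # assumed LDA/HDA interfacial energy in J/m^2 (placeholder, for illustration only)
delta_p = 0.08e9    # assumed excess (Laplace) pressure in Pa (placeholder, for illustration only)
r = 2.0 * gamma / delta_p
print(f"r = {r * 1e9:.2f} nm")   # ~0.5 nm, of the same order as the ~0.3-0.8 nm quoted in the abstract
```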
entropy fluctuation at/near the proposed second critical point [11]. According to this hypothesis, stable and supercooled liquid water is a supercritical fluid of two states, low-density liquid (LDL) and high-density liquid (HDL), which are inseparable above the proposed second critical point [17]. Below the critical temperature, however, they would transform into each other discontinuously, involving a coexistence line (or related spinodal lines). The thermodynamics of such a two state model to understand water anomalies has very recently been described by Anisimov et al [18]. This interpretation also includes the assumption of LDA and HDA as vitrified forms of LDL and HDL, respectively.
Consequently, two distinct T g ( p ) lines representing the two different glass-to-liquid transitions are expected. This view is supported by computer simulations using the ST2 model [19] and experiments probing LDA's [20][21][22][23] and HDA's [24][25][26][27][28][29][30][31][32] glass-to-liquid transition utilizing differential scanning calorimetry, volumetry and dielectric relaxation spectroscopy at ambient and high pressure conditions. Hill et al applied small-angle neutron scattering to scrutinize structural changes in LDA upon slow heating [33]. Above 121 K, they could observe the onset of diffusive translational motion within the LDA sample, indicating a glass-to-liquid transition. Another recent study [34], using wide-angle x-ray scattering combined with x-ray photon-correlation spectroscopy, provides further evidence for the diffusive nature of molecular motions above the glass transition temperatures of both LDA and HDA, supporting Poole's hypothesis [10]. Nevertheless, it is uncertain whether the LLCP will ever be spotted directly or whether it will remain a virtual point in water's phase diagram, which can only be perceived from a distance [35].
The reason for this experimental inaccessibility of the p-T-region where the LLCP is expected (0.1 GPa, 220 K [36]; 0.027 GPa, 232 K [37]; 0.05 GPa, 223 K [38]; elaborately reviewed by Holten et al [39]) is the presence of fast crystallization kinetics within the borders of the homogeneous nucleation temperature T H ( p ) and the crystallization temperature of the amorphous ices T x ( p ). This p-T-region is often referred to as water's 'no man's land'. Note that the borders of this region are soft and highly dependent on the sample size and the experimental time scale [40].
Approaching the 'no man's land' from the amorphous ice states at the low temperature border, Seidl et al as well as Stern and Loerting could show the importance of appropriate sample pretreatment for shrinking water's 'no man's land' [41][42][43]. While Stern and Loerting scrutinized the crystallization behavior of VHDA and unannealed HDA (uHDA) in the intermediate pressure range 0.7-1.8 GPa [43], Seidl et al compared expanded HDA (eHDA) with uHDA with respect to stability against crystallization and the resulting crystallization products in the low pressure range 0.001-0.50 GPa [41,42]. The term uHDA describes the type of HDA which was discovered by Mishima et al [1] when they compressed hexagonal ice (I h ) to 1.6 GPa at 77 K. uHDA anneals to eHDA on warming below ~0.5 GPa [44]; eHDA can also be obtained via decompression of VHDA at 140 K [6]. The experimental strategy of Seidl et al is based on isobaric heating experiments and x-ray diffraction for characterization of the crystallization products. They observed that eHDA is more stable against crystallization than uHDA by up to 11 K. This discrepancy is especially pronounced at pressures ⩽0.20 GPa. Additionally, the analysis of crystallization products revealed that at pressures ⩽0.20 GPa uHDA always crystallizes to a mixture of ice phases with ice I h as the main share, whereas eHDA crystallizes to a single ice phase only. At pressures ranging from 0.30 GPa to 0.50 GPa, the qualitative difference between eHDA and uHDA considering the crystallization products disappears, while the lower crystallization temperature of uHDA compared to eHDA remains. Combining the results of their two studies [41,42], these findings favor the conjecture that eHDA, due to its apparent glassy nature, rather than uHDA, may be the low-temperature proxy of the proposed HDL of water. Consequently, employing eHDA, as well as VHDA [43], could enable further exploration of (so far) inaccessible p-T-regions within water's phase diagram in order to gather further evidence for or against the proposed LLCP scenario [10].
In that context, the present study focuses on the aspect of preparation of eHDA. One main question is whether eHDA, usually produced via decompression of VHDA at 140 K to an end pressure of 0.20 GPa [42], could become even more thermally stable against crystallization if it were prepared via decompression of VHDA to end pressures <0.20 GPa. Considering the phase diagram including (metastable) amorphous states in figure 1, the end pressure of decompression of VHDA (for preparation of eHDA) is limited by the spinodal of the HDA → LDA transition. The border between LDA and HDA was obtained by Mishima via decompression experiments of HDA at different temperatures [5]. Winkel et al conducted decompression experiments of VHDA in a pressure range 1.10-0.02 GPa at 140 K [6]. They located the quasi-discontinuous HDA → LDA transition at a pressure of ~0.06 GPa at 140 K. In the present study, this pressure is considered to be the ultimate limit of decompression for the preparation of eHDA. However, as we are going to show in the following, even at end pressures >0.06 GPa and <0.20 GPa, the proximity of the HDA → LDA spinodal during the preparation of eHDA has a significant influence on the nature of eHDA.
Apparatus
In the current study the same setup was used as employed by Seidl et al [41,42]. More precisely, a custom-made high-pressure piston cylinder with an 8 mm bore, together with a commercial 'universal material testing machine' (Zwick, model BZ100/TL3S), was utilized for both the high-pressure preparation of the sample and the subsequent in situ pressure-dependent crystallization experiments. Temperature control was accomplished using a Pt-100 temperature sensor, which was inserted in the respective bore in the piston cylinder. This experimental setup enables the simultaneous detection and control of piston displacement (corresponding to volume change), temperature and pressure. For temperature control, a Lakeshore temperature controller operated via a self-written LABVIEW program was used. Control of piston displacement and pressure was accomplished using the commercial software TESTXPERT 7.1 (Zwick). For further details see [45].
Preparation of eHDA samples and in situ crystallization experiments
All ice samples in the present study were prepared by pipetting 300 µl of ultrapure liquid water into a precooled container made of ~0.3 g indium foil, a convenient low-temperature lubricant that prevents undesirable phase transitions in the sample due to shock-wave heating [1]. This effect can occur if a piston sticks (due to friction within the bore) and is suddenly released upon applying increased pressure, leading to a rapid heating and pressure-release event. Therefore, the use of indium as a lubricant is indispensable [1]. eHDA samples for subsequent crystallization experiments were prepared via the following steps (see figure 1).
2.2.a. Preparation of uHDA via isothermal compression of hexagonal ice I h . In figure 1 this step is depicted by the horizontal arrow with a grey arrowhead. Hexagonal ice (big turquoise hexagon) is compressed from atmospheric pressure to 1.6 GPa. Following in essence the protocol by Mishima et al [1], decompression to 1.1 GPa is subsequently performed (T ~ 77 K; compression/decompression rate: 0.1 GPa min −1 ). This results in the amorphous matrix (grey ellipse) containing distorted I h nanocrystallites [41,42] (small turquoise hexagons in grey ellipse), see figure 1.

2.2.b. Preparation of VHDA via isobaric heating of uHDA. Figure 1 sketches the formation of VHDA: uHDA is isobarically heated from 77 K to 160 K and subsequently cooled to 140 K (p = 1.1 GPa; heating/cooling rate: ~2 K min −1 ), following the protocol by Loerting et al [3]. This step results in a denser amorphous matrix, essentially void of nanocrystalline domains [43], as indicated in figure 1 (red ellipse).
2.2.c. Preparation of eHDA via isothermal decompression of VHDA. In order to yield eHDA, we followed the protocol of Winkel et al [6]. VHDA is isothermally decompressed at 140 K to a certain end pressure between 0.07-0.30 GPa. The resulting different sorts of eHDA are referred to as eHDA 0.07-0.3 , depending on the respective end pressure, stated as a superscript (in GPa). This preparation step is visualized in figure 1 by a horizontal arrow, directed to the left. Differently colored arrowheads correspond to the different sorts of eHDA resulting from different end pressures (eHDA 0.3 : blue ellipse; eHDA 0.2 : green ellipse; eHDA 0.1 : orange ellipse with small yellow ellipses) (T = 140 K; decompression rate: 0.02 GPa min −1 ). The different sorts of eHDA differ in terms of their densities, i.e. eHDA 0.3 is denser than eHDA 0.1 [46]. Note that we assume the formation of nanosized LDA domains (small yellow ellipses) within eHDA 0.1 during the preparation process. The decompression temperature for preparation of eHDA 0.1 (140 K) is above both glass transition temperatures in the pressure range where the LDA nuclei form (0.20-0.10 GPa). In this pressure range the T g for HDA is 134 K at 0.10 GPa and 139 K at 0.20 GPa [28] and the T g for LDA is 132 K at 0.10 GPa and 127 K at 0.20 GPa [19]. In other words, at 140 K the amorphous samples are kept above their glass transition temperatures below 0.20 GPa. Considering the experimental conditions during the decompression of eHDA 0.1 , an incipient transition HDA → LDA (or even HDL → LDL) seems plausible (see HDA → LDA spinodal in figure 1). This subject will be discussed in more detail on the basis of our experimental results below. After the preparation of an eHDA sample, the in situ crystallization experiments were conducted as follows.

Figure 1 caption (fragment): … [5], whereas the line between HDA and VHDA was deduced from figure 3(b) in [6]. Note that the HDA-LDA line represents a downstroke transition, whereas the HDA-VHDA line represents an upstroke transition; neither is a binodal. Colored symbols and arrows represent the preparation route for eHDA, starting from hexagonal ice I h (turquoise hexagon) via uHDA (grey ellipse with small hexagons, denoting remnants of I h [41,42]) and VHDA (red ellipse). Depending on the end pressure of the decompression VHDA → eHDA, eHDA is referred to as eHDA 0.3 (blue), eHDA 0.2 (green) and eHDA 0.1 (orange with small yellow ellipses, denoting nanosized LDA domains). Adapted figure with permission from [40], Copyright 2016 by the American Physical Society.
2.2.d. Crystallization.
The eHDA samples are then quenched to 77 K and (de)compressed to the desired pressure. Upon varying the pressure at 77 K the nature of the sample is retained, i.e. eHDA 0.3 decompressed at 77 K to 0.10 GPa remains eHDA 0.3 [6,7].
The different sorts of ice are isobarically heated to temperatures T max ⩾ 150 K (in any case, T max > T x or T trans ) and subsequently cooled to 115 K (heating/cooling rate: ~2 K min −1 ) and quenched to ~80 K by pouring liquid nitrogen around the piston cylinder. For eHDA, these in situ crystallization experiments were conducted at 6 different pressures ranging from 0.05-0.30 GPa. In figure 2, the isobaric heating experiments are sketched by light red arrows marked at every studied pressure in a phase diagram of water (including metastable ice IX). We note that, e.g., eHDA 0.3 slowly relaxes towards eHDA 0.1 prior to crystallization upon heating at 0.10 GPa. That is, the superscript merely describes the sample history but does not indicate that eHDA 0.3 is actually the state just before crystallization.
2.2.e. Second isobaric heating step to T max .
To check for complete transition, the sample was heated isobarically to T max again at the same pressure as described in 2.2.d, applying a heating/cooling rate of ~2 K min −1 .
2.2.f. Quench recovery.
After reaching T max in 2.2.e, the sample was quenched to 77 K by pouring liquid nitrogen around the piston cylinder and subsequently releasing the pressure (T = 77 K; decompression rate: 0.02 GPa min −1 ).
Preparation of bulk LDA and I h samples for control experiments
As mentioned in section 2.2.c, we assume the formation of nanosized LDA nuclei in eHDA during the decompression of VHDA to pressures <0.20 GPa at 140 K. The presence of these LDA nuclei influences the crystallization temper ature of eHDA depending on the applied pressure during the crystallization experiment (see results in section 3). Therefore, we conducted control experiments on the pressure dependence of T x in phase transitions in bulk LDA. Furthermore, bulk ice I h samples were studied under pressure since we compare the phase transition temperatures obtained here with the ones obtained by Seidl et al on uHDA [41,42]. Since these samples contain nanocrystalline domains of ice I h , knowledge of the behavior of bulk ice I h is needed for reference. (Bulk) LDA samples for the respective crystallization studies at pressures 0.20-0.40 GPa were obtained as described for eHDA in section 2.2, except for step 2.2.c, where VHDA was isothermally decompressed to 0.01 GPa in order to yield LDA [6]. Isobaric heating experiments at pressures 0.20-0.50 GPa, scrutinizing phase transitions in (bulk) ice I h were done by isothermal (77 K) pre-compression of hexagonal ice to 0.70 GPa and decompression (0.1 GPa min −1 ) to the desired pressure, followed by the steps described in sections 2.2.d-2.2.f.
Definition of crystallization temperature T x
Volume change curves ΔV(T) are obtained by multiplying the vertical (uniaxial) piston displacement with the bore's cross section (the temperature dependence of the bore diameter (8 mm) was considered insignificant). The volume usually changes upon crystallization, so crystallization can be detected as a step in the ΔV(T) curves. To define the crystallization temperature T x , the same method as in [42] was applied. Specifically, the intersection of a straight line through the mid-temperature part and a straight line through the high-temperature part of the step-like expansion (or contraction) in a ΔV(T) curve, representing the crystallization, was defined as T x . In the case of a very rapid jump-like volume change at the transition, the temperature at the vertical edge was considered to be T x . Note that the crystallization temperatures according to this definition have to be considered as end temperatures. Alternatively, p(T) curves can also be used to define T x . Although the heating experiments are conducted isobarically, fast expansions (contractions) at the transition cause temporary pressure deviations because the response of the apparatus is not fast enough. Consequently, the temperature at the maximum pressure deviation can be considered as T x . However, in this study the T x values were obtained by evaluation of the ΔV(T) curves to be able to compare our results with the results from Seidl et al [42].

Figure 2 caption (fragment): … [47]), including stable phases of water and metastable ice IX. Solid lines depict measured phase boundaries between stable phases, the dot-dashed line indicates the hydrogen-(dis)ordering temperature for the ice III↔ice IX transition. Dashed lines depict estimated or extrapolated phase boundaries between stable phases, dotted lines indicate estimated or extrapolated borders between metastable phases. Red arrows represent isobaric heating experiments of eHDA in the current study. Adapted figure with permission from [42], Copyright 2015 by the American Physical Society.
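To make this T x construction concrete, the following minimal Python sketch fits one straight line through the steep mid-temperature part and one through the high-temperature plateau of a ΔV(T) step and returns their intersection. The synthetic curve, the window limits, and the function name are illustrative assumptions, not the data or code of this study.

```python
import numpy as np

def tx_from_dV(T, dV, mid_window, high_window):
    """T_x as the intersection of a line fitted through the steep
    mid-temperature part of the step and a line fitted through the
    high-temperature plateau of a dV(T) curve (an end temperature)."""
    def fit(window):
        mask = (T >= window[0]) & (T <= window[1])
        slope, intercept = np.polyfit(T[mask], dV[mask], 1)
        return slope, intercept

    (m1, b1), (m2, b2) = fit(mid_window), fit(high_window)
    # Intersection of dV = m1*T + b1 with dV = m2*T + b2
    return (b2 - b1) / (m1 - m2)

# Hypothetical curve with a step-like expansion centered near 150 K
T = np.linspace(130, 165, 351)
dV = 0.002 * (T - 130) + 0.05 / (1.0 + np.exp(-(T - 150) / 0.5))
print(tx_from_dV(T, dV, mid_window=(149, 151), high_window=(155, 165)))
```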
Apparatus correction
The piston displacement recorded by the machine does not only reflect the behavior of the ice samples but also contributions from the apparatus, especially the volume changes of the steel pistons. Hence, a correction of the volume curves was applied [42] utilizing isobaric heating experiments at four different pressures between 0.05-0.30 GPa, analogous to the step described in section 2.2.d, without ice samples, but with ~0.3 g indium foil. In good approximation, the resulting volume curves exhibit linear behavior. Therefore, straight lines were fit through the data points at temperatures ranging from 145-165 K. These linear functions were then subtracted from the raw ΔV(T) curves at each pressure (linear functions at intermediate pressures were obtained by linear interpolation). As a consequence, the volume curves shown in figure 3 only depict the behavior of the ice samples themselves.
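As an illustration of the apparatus correction described above, the sketch below fits straight lines to hypothetical empty-cell (indium only) ΔV(T) runs over 145-165 K, interpolates the fit coefficients to the sample pressure, and returns the baseline to subtract from a raw sample curve. All array contents and the function name are placeholders, not the calibration data of this study.

```python
import numpy as np

def apparatus_baseline(T_cal, dV_cal_by_p, p_cal, p_sample, fit_range=(145.0, 165.0)):
    """Linear apparatus correction: fit dV(T) of empty-cell runs at the
    calibration pressures, interpolate slope/intercept to the sample
    pressure, and return the baseline as a function of temperature."""
    mask = (T_cal >= fit_range[0]) & (T_cal <= fit_range[1])
    coeffs = np.array([np.polyfit(T_cal[mask], dV[mask], 1) for dV in dV_cal_by_p])
    slope = np.interp(p_sample, p_cal, coeffs[:, 0])
    intercept = np.interp(p_sample, p_cal, coeffs[:, 1])
    return lambda T: slope * T + intercept

# Hypothetical use: correct a raw sample curve measured at 0.12 GPa
p_cal = np.array([0.05, 0.10, 0.20, 0.30])                      # GPa, empty-cell runs
T_cal = np.linspace(140, 170, 301)
dV_cal_by_p = [0.001 * (1 + p) * (T_cal - 140) for p in p_cal]  # fake calibration data
baseline = apparatus_baseline(T_cal, dV_cal_by_p, p_cal, p_sample=0.12)
# dV_corrected = dV_raw - baseline(T_raw)
```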
Characterization of crystallization products
The quench-recovered samples were characterized using x-ray powder diffraction (Cu K α 1 radiation; diffractometer: Siemens D5000) in θ-θ geometry at ~80 K and subambient pressure (~10 −3 bar). In order to conduct a qualitative analysis of the crystallization products of a sample, at least two x-ray diffractograms for each sample were considered. One prominent intensity maximum for each resulting crystalline ice phase was chosen (ice IX: at 29.6°, ice I c : at 24.3°, ice V: at 30.9°). The intensities of these peak maxima were then summed up to a 'total intensity' for each diffractogram. The respective peak maximum intensities were then divided by the 'total intensity' in order to obtain polymorph fractions (a short numerical sketch of this bookkeeping is given below). Note that the stated percentage values are a rough approximation, because peak maximum intensities are not a direct measure of the quantity of the phases present. Texture effects and different scattering cross sections for different polymorphs prevent a more accurate assessment.

In order to test whether decompression to pressures lower than 0.10 GPa during the preparation of eHDA could lower T x even further (compared with eHDA ⩾0.2 ), eHDA 0.08 and eHDA 0.07 were prepared and isobarically heated at 0.30 GPa. As shown in figure 3(f), our assumption was confirmed by the experiments: T x of eHDA 0.07 is ~7 K lower than T x of eHDA ⩾0.2 .
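The numerical sketch below illustrates the peak-maximum bookkeeping of section 2.6 on a synthetic diffractogram. The peak positions follow the values quoted above, while the peak shapes, intensities, and the half-width of the search window are illustrative assumptions.

```python
import numpy as np

# One prominent reflection per phase (2-theta in degrees, Cu K-alpha1),
# as used for the rough polymorph-fraction bookkeeping described above.
PEAKS = {"ice IX": 29.6, "ice Ic": 24.3, "ice V": 30.9}

def polymorph_fractions(two_theta, intensity, half_width=0.15):
    """Approximate phase fractions from peak-maximum intensities of a single
    diffractogram; not a quantitative Rietveld-type analysis."""
    maxima = {}
    for phase, pos in PEAKS.items():
        window = (two_theta >= pos - half_width) & (two_theta <= pos + half_width)
        maxima[phase] = float(intensity[window].max())
    total = sum(maxima.values())
    return {phase: i / total for phase, i in maxima.items()}

# Hypothetical diffractogram: three Gaussian-like peaks on a flat background
tt = np.linspace(20, 35, 3001)
I = (800 * np.exp(-((tt - 29.6) / 0.1) ** 2)
     + 300 * np.exp(-((tt - 24.3) / 0.1) ** 2)
     + 50 * np.exp(-((tt - 30.9) / 0.1) ** 2) + 20)
print(polymorph_fractions(tt, I))
```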
Volumetric crystallization study of eHDA
Crystallization events can also be monitored by the pressure change Δp(T, p) during the (formally) isobaric heating experiments (figure 4). For reasons of clarity, only the curves of eHDA 0.1; 0.2; 0.3 are depicted. As already mentioned, the temperature at the maximum of the pressure increase can also be used to define the crystallization temperature T x . Figure 4 illustrates the crystallization behavior as mentioned above: at pressures 0.05-0.15 GPa the Δp-peaks of all three sorts of eHDA are aligned within a temperature interval of less than 1.5 K. Above 0.15 GPa, eHDA 0.2 and eHDA 0.3 remain aligned but the Δp-peaks of eHDA 0.1 are shifted to significantly lower temperatures.
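A minimal sketch of this alternative Δp-based reading of T x is given below; the nominal pressure and the synthetic pressure spike are hypothetical, chosen only to show the bookkeeping.

```python
import numpy as np

def tx_from_dp(T, p, p_nominal):
    """T_x read from a (formally) isobaric heating run as the temperature at
    which the deviation from the nominal pressure is largest."""
    dp = p - p_nominal
    return T[np.argmax(np.abs(dp))]

# Hypothetical run at a nominal pressure of 0.20 GPa with a spike near 152 K
T = np.linspace(130, 165, 701)
p = 0.20 + 0.004 * np.exp(-((T - 152.0) / 0.8) ** 2)
print(tx_from_dp(T, p, p_nominal=0.20))  # ~152 K
```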
In figure 5, the T x data collected from our isobaric heating experiments of eHDA are summarized: figure 5(a) depicts crystallization temperature as a function of pressure of eHDA 0.07-0.3 as extracted from figure 3 in comparison with uHDA (adapted from [42]). For repeated experiments, error bars were calculated from the difference of the highest and the lowest measured value at a certain pressure. Crystallization experiments of eHDA decompressed to 0.07 and 0.08 GPa were only conducted at 0.30 GPa, to exhibit the large difference of ~7 K between T x of eHDA 0.2; 0.3 and eHDA decompressed to pressures as low as 0.07 GPa. While T x ( p ) is well described for eHDA 0.2 and eHDA 0.3 by a linear function this is not the case for eHDA 0.1 . The kink of the T x ( p ) line of eHDA 0.1 around ~0.17 GPa indicates a change in the crystallization process. A similar kink, but for the T x ( p ) line of uHDA, was observed at ~0.25 GPa by Seidl et al [41,42]-see grey line in figure 5(a). They explained this effect with the presence of nanosized I h crystallization seeds within the amorphous matrix of uHDA at <0.25 GPa. At pressures >0.25 GPa these nuclei transform to ice IX upon isobaric heating, decreasing the slope of the T x ( p ) line significantly. In figure 5(b), this phase transition, identified by Seidl et al [41,42], is sketched by grey ellipses. Below 0.25 GPa the starting material of crystallization is uHDA with embedded I h nuclei (small turquoise hexagons). Above 0.25 GPa the I h nuclei have transformed to ice IX nuclei (small light blue squares). Figure 5(b) also contains a sketch of the microscopic picture of eHDA 0.1 and eHDA 0.3 derived from the results of the present study. Our interpretation, including the phase transition of nanoscaled seeds of LDA to seeds of ice IX in eHDA 0.1 , will be presented in detail in section 4.
XRD study of crystallization products of eHDA
A series of x-ray diffractograms is shown in figure 6(a). They were obtained for crystallized samples after isobaric heating at different pressures. The intensities are normalized to the highest peak in the respective diffractogram, resulting in 'relative intensity'. Peaks of high intensity indicating ice phases (I c/h , IX, V) are marked with roman numerals.
Based on the crystallization products, the studied pressure range (0.05-0.30 GPa) can be divided into three areas (see figure 6(b)). At 0.05 GPa, all studied sorts of eHDA (eHDA 0.1; 0.2; 0.3 ) crystallize to cubic ice, nowadays known as stacking-disordered ice I [48][49][50][51] (blue area). At pressures 0.10-0.25 GPa, mixtures of IX/I c occur upon isobaric heating (pink area). The amount of I c decreases with increasing pressure. At 0.30 GPa (green area), mixtures of IX/V emerge. The values of '% Ice I c '/'% Ice IX' given in figure 6(b) are approximations, as described in section 2.6. However, relative changes of the fractions with pressure are significant and valid. Thus, our method provides comprehensible insight into the three different crystallization modes that can be observed within the studied pressure range.

Figure 7 summarizes the results of the volumetric studies and the x-ray diffraction studies on eHDA 0.1 , eHDA 0.3 (present study) and uHDA [41,42] at 0.10 GPa and 0.30 GPa. The amorphous starting materials are sketched as ellipses, representing the microscopic picture of eHDA 0.1 derived from our results (section 4) and of uHDA [41,42]. T x for each amorphous material and pressure is marked as a horizontal line. The crystallization products (main component written first) are given above T x . Note that T x of uHDA at 0.10 GPa is considerably lower than T x of eHDA 0.1 , which is similar to T x of eHDA 0.3 . At 0.30 GPa, however, T x of uHDA is similar to T x of eHDA 0.1 but significantly lower than T x of eHDA 0.3 . That is, nanosized LDA domains in eHDA 0.1 at 0.10 GPa do not influence the crystallization temperature, whereas ice I h nuclei in uHDA do [41,42]. Furthermore, ice IX nuclei lower T x both for eHDA 0.1 and uHDA compared to eHDA 0.3 . To answer the question why nanoscaled LDA nuclei do not lower T x of eHDA 0.1 , crystallization studies of bulk LDA were conducted.

Displaced figure caption (fragment): … [42] for clarity. The microscopic picture we could derive from our experimental results is represented by the sketch. eHDA 0.1 contains nanosized LDA seeds that transform to ice IX seeds above ~0.17 GPa. uHDA contains ice I h seeds that transform to ice IX seeds above ~0.25 GPa. By contrast, eHDA 0.3 exhibits a linear T x ( p ) line throughout the studied pressure range, confirming its glassy nature.
Crystallization/polymorphic transition of (bulk) LDA/I h
Similar to the experiments scrutinizing eHDA, isobaric heating experiments and subsequent characterization by use of x-ray diffraction were done for bulk LDA and bulk I h . Crystallization temperatures T x (transformation temperatures T trans ) of LDA (I h ) were obtained from the respective ΔV(T) curves, as described in section 2.4. Figure 8 depicts the results of the crystallization experiments and the XRD measurements. T x (yellow) and T trans (turquoise) as a function of pressure for bulk LDA and bulk ice I h are shown. Additionally, the starting materials and resulting crystallization products are depicted by the respective symbols. The vertical dashed lines crossing T x ( p ) and T trans ( p ) indicate a change in the mechanism of the respective phase transition. Below ~0.37 GPa LDA (yellow ellipse) crystallizes to cubic ice I c (azure cube) upon heating, above ~0.37 GPa LDA crystallizes to ice IX (light blue square). Below ~0.45 GPa I h transforms to ice II (purple triangle) upon heating, above ~0.45 GPa I h transforms to ice IX (see phase diagram in figure 2). That is, the crystallization mechanism changes at ~0.37 GPa for bulk LDA, and at ~0.17 GPa for nanocrystalline LDA (kink for eHDA 0.1 in figures 5(a) and (b)). Similarly, the transformation mechanism for I h changes at ~0.45 GPa in the bulk, and at ~0.25 GPa in nanocrystalline I h (kink for uHDA in figures 5(a) and (b)). In both cases there is a downshift of ~0.20 GPa when comparing the change of mechanism in nanoscaled seeds with the bulk material.
Crystallization of eHDA
Based on the crystallization line T x ( p ) of eHDA 0.2 and eHDA 0.3 in figure 5(a), as well as the analysis of the resulting crystallization products in figure 6(b), we conclude that there is no significant difference between the nature of eHDA 0.2 and eHDA 0.3 , in neither the thermal stability against crystallization nor the crystallization mode, as both starting materials yield similar crystallization products. The presence of one main crystalline phase (and only marginal amounts of another phase) after crystallization indicates that both eHDA 0.2 and eHDA 0.3 can be regarded as glassy, in other words as the low-temperature proxy of HDL [28,31,42]. By contrast, the crystallization line T x ( p ) of eHDA 0.1 exhibits quite different behavior (see figure 5(a)). The measured T x values at pressures 0.05, 0.10 and 0.15 GPa can be connected by a straight line, whereas the data points from 0.20, 0.25 and 0.30 GPa can be connected by another straight line of decreased slope. Between 0.15 GPa and 0.20 GPa (in our diagram shown at ~0.17 GPa) eHDA 0.1 seems to change in a way that causes a significant effect on the crystallization behavior. Below ~0.17 GPa the T x ( p ) line of eHDA 0.1 exhibits a slope similar to the respective slopes of eHDA 0.2, 0.3 , but at pressures above ~0.17 GPa eHDA 0.1 shows significantly decreased thermal stability against crystallization by up to ~7 K. Apparently, the crystallization kinetics in eHDA 0.1 are enhanced at pressures above ~0.17 GPa.
We interpret our results in the following way: during the preparation of eHDA 0.1 (isothermal decompression of VHDA at 140 K, see section 2.2.c) domains of LDA nucleate upon decompression to 0.10 GPa within the eHDA matrix. eHDA 0.1 is decompressed well beyond the HDA-LDA binodal located at ~0.2 GPa [52] and close to the spinodal [5] shown in figure 1. This corresponds to the p-T regime, in which LDA is thermodynamically favored over HDA and, hence, nucleation is possible. At 140 K the rate of nucleation is sufficiently high to form a significant amount of nuclei larger than the critical radius at the time scale of minutes. However, at 140 K the rate of growth is still too low for significant growth of the nuclei in our experiments. Close to the spinodal the size of the critical cluster is rather small, probably just a few molecules of water [53], so that the critical cluster size can be exceeded easily in spite of slow kinetics.
As the LDA domains remain hidden in x-ray diffractograms (see [6]), we conclude that these domains must be nanoscaled. In our experimental setup the size limit for the detection of ice crystals is on the order of 10 nm, as estimated from the Debye-Scherrer equation considering the instrumental broadening and the signal-to-noise ratio for our typical measurements of 45 min. For crystallization experiments in the pressure range 0.05-0.15 GPa these LDA nuclei remain latent, showing no effect on the crystallization behavior compared to eHDA 0.2; 0.3 . This is because T x of LDA is ~140 K at 0.25-0.35 GPa (see figure 8, considering 0.20 GPa internal pressure of the LDA nanodomains as demonstrated below), and hence about the same as T x of eHDA. In other words, the presence of LDA domains does not enhance crystallization kinetics.
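The ~10 nm detection limit quoted above rests on the Scherrer relation between peak broadening and crystallite size. The sketch below shows such an estimate; the shape factor, the instrumental broadening, and the observed FWHM value are illustrative assumptions, not the parameters used in this study.

```python
import numpy as np

def scherrer_size(fwhm_obs_deg, fwhm_instr_deg, two_theta_deg,
                  wavelength_nm=0.15406, shape_factor=0.9):
    """Crystallite size (nm) from the Scherrer equation D = K*lambda/(beta*cos(theta)),
    with the instrumental broadening subtracted in quadrature."""
    beta = np.sqrt(np.radians(fwhm_obs_deg) ** 2 - np.radians(fwhm_instr_deg) ** 2)
    theta = np.radians(two_theta_deg) / 2.0
    return shape_factor * wavelength_nm / (beta * np.cos(theta))

# Illustrative numbers: a broad reflection near 24.3 deg 2-theta (Cu K-alpha1)
print(f"D ~ {scherrer_size(fwhm_obs_deg=0.9, fwhm_instr_deg=0.15, two_theta_deg=24.3):.1f} nm")
```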
This observation changes at pressures ⩾0.20 GPa. At higher pressures we witness a significant decrease of T x for eHDA 0.1 . We suggest a phase transition of the nanoscaled LDA nuclei to crystalline nuclei, which act as favored sites for crystal growth and thereby decrease T x of eHDA 0.1 . Considering the crystallization products of eHDA 0.1 (which do not differ significantly from the crystallization products of eHDA 0.2, 0.3 , see figure 6(b)) we suggest that LDA seeds transform to ice IX seeds. This observation resembles the one of Seidl et al [42], see figure 5(b). In their study, they proposed a phase transition of nanosized I h seeds (remnants after pressure-induced amorphization at 77 K [1]) in uHDA to ice IX seeds. At pressures ⩽0.25 GPa, T x for uHDA is up to 11 K lower than T x of eHDA 0.2 due to the presence of nanosized I h seeds. This effect diminishes after transformation of the I h nuclei to ice IX nuclei. At 0.30 GPa uHDA, eHDA 0.1 , eHDA 0.08 and eHDA 0.07 crystallize at 147 ± 1 K, and hence roughly 7 K lower than eHDA 0.2 and eHDA 0.3 (see figure 5(a)). The equality of T x of uHDA and T x of eHDA 0.07-0.1 at 0.30 GPa shows that in both cases the same crystallization mechanism is operative, namely growth of ice IX domains. Despite the different preparation history, at 0.30 GPa and just below T x , we consider uHDA and eHDA 0.07-0.1 to be identical. By contrast, for eHDA 0.2-0.3 the crystallization mechanism is different, namely crystallization of a homogeneous glassy matrix takes place in this case.
Comparison of phase transitions in the bulk and in nanosized nuclei
We scrutinized the phase transitions occurring in (bulk) LDA and (bulk) I h upon isobaric heating. In figure 8 we summarize our results: The T x line of LDA shows almost no pressure dependence at pressures 0.20-0.35 GPa (T x ~ 140 K). In this pressure range the resulting crystalline product is I c with marginal amounts of ice IX. The ratio of crystalline products reverses at pressures higher than 0.35 GPa, showing ice IX as the main product as well as marginal amounts of I c .
Therefore, we estimate the minimal pressure for the transition of (bulk) LDA to (bulk) ice IX to be ~0.37 GPa upon heating. In eHDA 0.1 , by comparison, we observe this transition of nanoscaled LDA seeds to ice IX seeds at a minimal pressure of ~0.17 GPa (see kink for eHDA 0.1 in figure 5(a)). Considering the lowest pressure necessary for the transition LDA → IX upon heating, there is a difference of about 0.20 GPa between (bulk) LDA and nanosized LDA domains in eHDA 0.1 .
Comparing the change of transition mechanism in (bulk) ice I h and in nanosized I h nuclei embedded in uHDA, a difference of ~0.20 GPa is also observable. A pressure of 0.45 GPa (vertical dashed line in figure 8) appears to be the minimum pressure necessary for the transition I h → IX in the bulk I h system upon heating. The corresponding transition of I h nanocrystallites in uHDA occurs at a minimal pressure of ~0.25 GPa [42] (see kink for uHDA in figure 5).
Summarizing the observations in bulk LDA and bulk I h : the proposed phase transitions on the nanometer scale within eHDA 0.1 (LDA → IX) and uHDA (I h → IX) could also be observed in the respective macroscopic systems. Nevertheless, the transitions were only observable at pressures at least ~0.20 GPa higher than the pressure at the kink in the T x line of eHDA 0.1 and uHDA, respectively. This pressure gap between the transitions in the bulk and on the nanoscale can be explained by the elevated internal pressure within a nanosized nucleus, which exceeds the external pressure because of the surface tension of the nucleus. In this context, the Laplace equation, Δp = 2σ/r, where the Laplace pressure Δp quantifies the difference between the internal pressure of a curved object and the external pressure, was used to estimate the size of an LDA or I h seed.
Therefore, Δp was assumed to be the pressure gap of ~0.20 GPa. A lower limit for the surface tension σ of LDA (or I h ) nuclei within an HDA matrix was taken from [53]. Espinosa et al calculated a surface tension of 29.8 mJ m −2 for I h nuclei within liquid water using the TIP4P/Ice model [53]. Based on the Laplace equation, the radius of a (spherical) LDA/I h seed within eHDA 0.1 /uHDA is then ~0.3 nm. Note that this is just a rough approximation. Instead of an ice I h seed in liquid water, in our case we actually have an LDA seed in HDA, or ultraviscous HDL [19,28]. Therefore, an exact value for the surface tension is unknown. As an upper limit we tentatively assume a surface tension of 75 mJ m −2 , corresponding to the liquid-vapor surface tension at 273 K [54,55]. The true surface tension of LDA within HDA is presumably considerably lower than this value. Under this premise a nucleus radius of 0.8 nm results. Assuming a spherical seed, it then contains ~100-200 water molecules.
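A short numerical sketch of this Laplace estimate is given below; it simply evaluates r = 2σ/Δp for the two surface-tension limits quoted above and is not meant as more than an order-of-magnitude check.

```python
def laplace_radius(delta_p_pa, sigma_j_per_m2):
    """Radius of a spherical nucleus whose Laplace pressure equals delta_p."""
    return 2.0 * sigma_j_per_m2 / delta_p_pa

delta_p = 0.20e9  # Pa, pressure gap between the nanoscale and bulk transitions
for label, sigma in [("lower limit (TIP4P/Ice, ice Ih in water)", 29.8e-3),
                     ("upper limit (liquid-vapor at 273 K)", 75e-3)]:
    r_nm = laplace_radius(delta_p, sigma) * 1e9
    print(f"{label}: r ~ {r_nm:.2f} nm")
# prints roughly 0.30 nm and 0.75 nm, i.e. the ~0.3-0.8 nm range quoted above
```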
The result shows a reasonable order of magnitude for the size of a single seed, another indirect hint for our proposed microscopic picture of eHDA 0.1 .
Conclusions
We have conducted a study on the pressure dependence of the crystallization temperature in eHDA samples of different preparation history. We conclude that the crystallization temperatures summarized in figure 5(b) show that different crystallization modes are operative for different samples. We argue that the observations can only be rationalized on the basis of LDA-nanodomains forming in eHDA <0.2 . We rule out a crystalline nature of the nanodomains, e.g. ice I h [41] or ice 0 [56], since crystalline domains would enhance crystallization at low pressures in contrast to our findings. Furthermore, these nanodomains transform to crystalline ice IX nanodomains above ~0.17 GPa. We want to emphasize the novelty as well as the exceptionality of our proposed microscopic picture describing the nature of eHDA 0.1 . It involves the nucleation of nanoscaled amorphous seeds within another (highly dense) amorphous matrix. Our study uncovers the nucleation of LDA in eHDA upon decompression of VHDA to pressures <0.20 GPa. The nucleation of LDA in eHDA is another, hitherto unknown piece of evidence for the first-order nature of the HDA → LDA transition, supporting scenarios including a liquid-liquid transition [9,10].
In fact, considering that 140 K is above the glass transition temperatures of both HDA and LDA, we actually interpret the observations on the basis of LDL nanodomains nucleating in HDL, i.e. one liquid nucleating in another. This interpretation requires that amorphous ices turn into ultraviscous liquids above T g , which is contested. While we regard the samples to be in the ultraviscous state [33,34] other researchers consider the sample to be glassy even above T g [57,58].
Furthermore, we want to emphasize the significance of our observation of the LDA → IX transition within nanosized domains in eHDA 0.1 , similar to the I h → IX transition in uHDA observed by Seidl et al [42]. In the present study we can show the different behavior of nanoscaled LDA domains in eHDA 0.1 compared to the behavior of nanoscaled I h in uHDA [42], pointing out the different nature of LDA and I h . Below ~0.17 GPa, LDA-nanodomains remain latent, whereas I h nanodomains significantly lower T x . However, above ~0.17 GPa these nanodomains transform to ice IX. These nanocrystallites enhance the crystallization kinetics, resulting in up to ~7 K lower T x values compared to homogeneous eHDA 0.2 . In contrast, I h nanocrystallites in uHDA decrease T x significantly (up to 11 K at ⩽0.25 GPa) compared to eHDA 0.2 , also below 0.17 GPa. When I h nanocrystallites transform to ice IX (above ~0.25 GPa), the effect diminishes. That is, ice I h nanocrystallites and LDA nanodomains have opposite effects on the crystallization kinetics up to a pressure of ~0.30 GPa.
Finally, conducting isobaric heating experiments probing bulk LDA and bulk I h enables us to estimate the size of the LDA/ice IX nuclei in eHDA 0.1 . Due to the elevated internal pressure within the LDA nuclei in eHDA 0.1 , the nanoscaled LDA → IX transition takes place at lower pressures compared to the bulk. Employing the Laplace equation, we can estimate the radius of a (spherical) LDA/I h seed within an eHDA 0.1 / uHDA matrix to be ~0.3-0.8 nm. | 2018-04-03T04:10:29.301Z | 2018-01-24T00:00:00.000 | {
"year": 2018,
"sha1": "1efabd34cda446bf7a27c56ed68a805eb9adc915",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1361-648x/aa9e76",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "b4fbf9fd5344eae7dacbf37b52aba64adbcf3a9d",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Materials Science"
]
} |
230108785 | pes2o/s2orc | v3-fos-license | Intraoperative real-time near-infrared optical imaging for the identification of metastatic brain tumors via microscope and exoscope
OBJECTIVE As chemotherapy and radiotherapy have developed, the role of a neurosurgeon in the treatment of metastatic brain tumors is gradually changing. Real-time intraoperative visualization of brain tumors by near-infrared spectroscopy (NIRS) is feasible. The authors aimed to perform real-time intraoperative visualization of the metastatic tumor in brain surgery using second-window indocyanine green (SWIG) with microscope and exoscope systems. METHODS Ten patients with intraparenchymal brain metastatic tumors were administered 5 mg/kg indocyanine green (ICG) 1 day before the surgery. In some patients, a microscope was used to help identify the metastases, whereas in the others, an exoscope was used. RESULTS NIRS with the exoscope and microscope revealed the tumor location from the brain surface and the tumor itself in all 10 patients. The NIR signal could be detected through normal brain parenchyma up to 20 mm deep. While the mean signal-to-background ratio (SBR) from the brain surface was 1.82 ± 1.30, it was 3.35 ± 1.76 from the tumor. The SBR of the tumor (p = 0.030) and the ratio of Gd-enhanced T1 tumor signal to normal brain (T1BR) (p = 0.0040) were significantly correlated with the tumor diameter. The SBR of the tumor was also correlated with the T1BR (p = 0.0020). The tumor was completely removed in 9 of the 10 patients, as confirmed by postoperative Gd-enhanced MRI. This was concomitant with the absence of NIR fluorescence at the end of surgery. CONCLUSIONS The SWIG technique allowed the metastatic tumors to be identified from the brain surface with both the microscope and exoscope systems. The Gd-enhanced T1 tumor signal may predict the NIR signal of the metastatic tumor, thus facilitating tumor resection.
We hypothesized that SWIG could localize the metastasis in the brain parenchyma in real time during surgery and enable identification of the margin and removal of the tumor en bloc, without cutting into it.
Patient Population
This prospective study was approved by the Fujita Health University Clinical Research Ethics Committee and the Saiseikai Yokohamashi Tobu Hospital. We obtained informed consent from all included patients. We started recruiting the participants in August 2019.
NIR Contrast Agent
We injected 5 mg/kg intravenous ICG (C 43 H 47 N 2 NaO 6 S 2 , Daiichi Sankyo) over 1-2 hours, 1 day before surgery. The dose and timing were based on previous reports on SWIG. 5,6 The dose was calculated as 5 mg/kg per patient and was diluted in 100-500 ml of normal saline solution.
NIR Imaging System
The VisionSense Iridium camera system and exoscope were used in 5 patients, and the KINEVO microscope system (Carl Zeiss AG) was used in the remaining 5 patients. We calculated the intensity of T1 Gd enhancement by drawing a region of interest (ROI) over the tumor and comparing it with the adjacent brain parenchyma.
The KINEVO microscope comprises a FLOW 800 application that utilizes a filter ranging from 700 nm to approximately 800 nm. The upper limit corresponded to the excitation wavelength of ICG. The resultant near-infrared (NIR) emission was passed through an 800-to 910-nm filter.
The VisionSense exoscope comprises a silicon image sensor with an open field of view of 19 × 14 cm at a 40-cm nominal imaging distance. The emission filter band in the visible light ranged from 400 to 700 nm, whereas that for the NIR was much narrower and ranged from 825 to 850 nm. The camera system featured a dual optical path design, thus allowing separate and independent use of white and NIR light. The presence of separate paths allowed acquisition of faint fluorescent images in the presence of strong white light. A heat map was used as an overlay on the visible-light image to provide quantitative fluorescence intensity.
Data Analysis
We obtained a background reading from the adjacent normal brain to generate a signal-to-background ratio (SBR). In addition, we used ROI analysis in ImageJ software (NIH) to quantitate the amount of fluorescence from the tissues. Because the FLOW 800 (KINEVO) and the Iridium (VisionSense) differ and the data generated by each system cannot be compared directly, we used the SBR to reduce system-specific errors. We drew 5 ROIs corresponding to the tumor lesions and normal tissue on the images and analyzed their average using ImageJ software. We conducted univariate analysis using the chi-square or Fisher's exact test for comparing the categorical variables and the unpaired t-test or Mann-Whitney rank-sum test and simple linear regression analysis for the continuous variables. We plotted the unadjusted survival curves by the Kaplan-Meier method, using log-rank tests to assess the significance; p < 0.05 was considered significant. We performed the statistical analysis using JMP 14.1.0 (SAS Institute Inc.).
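A minimal sketch of the SBR bookkeeping described above is given below; the ROI mean values and the function name are hypothetical placeholders, not measured data.

```python
import numpy as np

def sbr(tumor_roi_means, background_roi_means):
    """Signal-to-background ratio from the means of 5 tumor ROIs and
    5 adjacent normal-brain ROIs (mirrors the ImageJ-based averaging)."""
    return float(np.mean(tumor_roi_means) / np.mean(background_roi_means))

# Hypothetical ROI mean intensities (arbitrary camera units)
tumor = [182.0, 175.5, 190.2, 168.9, 177.3]
background = [54.1, 57.8, 52.6, 55.0, 58.3]
print(f"SBR = {sbr(tumor, background):.2f}")
```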
SWIG Shows NIR Fluorescence of Metastatic Tumor via the Microscope and Exoscope
There were 4 patients with lung cancer, 3 with colon cancer, 1 with gastric cancer, 1 with urinary tract cancer, and 1 with gall bladder cancer. In the brain, 7 tumors were located in the cerebrum and 2 in the cerebellum; the remaining lesion spread from the skin to the dura mater. While the average tumor volume was 17,563 ± 14,669 mm 3 , the average maximum tumor diameter was 45.0 ± 14.1 mm. The hematological test results, physical findings, and neurological findings did not reveal any side effects for 3 months after the administration of 5 mg/kg ICG. In addition, the average time from the administration of ICG to the surgery was 21.1 ± 1.7 hours (Table 1).
All patients underwent preoperative Gd-enhanced MRI, with the exception of a patient undergoing hemodialysis. NIR fluorescence of the metastatic tumor was identified in all patients, regardless of whether the microscope or exoscope was used during the surgery. The fundamental strategy of tumor resection involved dissecting the outer layers of the tumor, followed by en bloc resection with margins, without invading the tumor cavity. The distances from the brain surface to tumor were 13.0 ± 4.1 mm and 8.0 ± 8.7 mm in the exoscope and microscope groups, respectively (p = 0.31) ( Table 2).
Intraoperative Fluorescence SBR
The NIR signal was found to be confined to the tumor in all patients. Using SWIG with both the microscope and the exoscope, we determined the presence of NIR fluorescence in the metastatic tumors. The surgeons could visualize the signal following administration of 5 mg/kg ICG. In all patients, the metastatic tumor produced stronger NIR signals than the surrounding brain parenchyma. While the mean SBR from the brain surface was 1.82 ± 1.30, it was 3.35 ± 1.76 from the tumor. Hence, the brain surface SBR was 64% of the tumor SBR (Fig. 1). The depth from the brain surface to the outermost edge of the tumor on Gd-enhanced T1-weighted MRI was 10.56 ± 6.74 mm (range 0-20 mm). The deepest tumor (20 mm deep) could also be observed from the brain surface. Figure 2 shows the linear regression plot of the SBR of the NIR tumor signal versus the depth from the brain surface (p = 0.031, R 2 = 0.46). Figure 3 shows the SBR of the NIR tumor signal versus the maximum tumor diameter on preoperative T1-weighted MRI (p = 0.030, R 2 = 0.46). The time (in hours) from ICG infusion to visualization was not significantly associated with the SBR: the linear regression of the SBR of the NIR tumor signal versus the time from ICG infusion showed no significant relationship (p = 0.21, R 2 = 0.22). NIR spectroscopy (NIRS) did not reveal any residual tumor after resection. Moreover, postoperative MRI did not detect any residual enhanced lesion.
Ratio of Gd-Enhanced T1 Tumor Signal to Normal Brain on MRI
It has been reported that preoperative Gd enhancement may predict NIR fluorescence. 2 Here, we evaluated the relationships between preoperative MRI findings and the NIR signal. All patients in this study underwent Gd-enhanced MRI, except for 1 patient, who was receiving hemodialysis (case 4). On preoperative MRI, the average of 5 ROIs over the enhanced lesion and 5 ROIs over normal tissue was used. The mean ratio of Gd-enhanced T1 tumor signal to normal brain (T1BR) on MRI was 2.43 ± 1.77. We conducted linear regression analysis to evaluate the relationship between Gd enhancement on T1-weighted MR images (T1BR) and SBR. Figure 4 shows the linear regression plot of the SBR of the NIR tumor signal versus T1BR in 10 patients (p = 0.0020, R 2 = 0.77). In addition, we conducted linear regression analysis to evaluate the relationship between T1BR and the maximum diameter of the tumor. Figure 5 shows the linear regression plot of the maximum tumor diameter versus T1BR in the aforementioned 10 patients (p = 0.0040, R 2 = 0.48).
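For illustration, the sketch below runs the same kind of simple linear regression (here with scipy.stats.linregress) on hypothetical paired T1BR/SBR values; the numbers are placeholders and do not reproduce the patient data.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical paired values (one pair per patient):
# T1BR (Gd-enhanced T1 tumor signal / normal brain) vs intraoperative tumor SBR
t1br = np.array([1.2, 1.6, 1.9, 2.1, 2.4, 2.8, 3.3, 4.0, 5.6])
sbr  = np.array([1.8, 2.1, 2.5, 2.9, 3.1, 3.6, 4.0, 4.9, 6.3])

fit = linregress(t1br, sbr)
print(f"slope={fit.slope:.2f}, R^2={fit.rvalue**2:.2f}, p={fit.pvalue:.4f}")
```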
NIR Differences Between the KINEVO Microscope and VisionSense Exoscope
Finally, we examined the differences between the KINEVO and VisionSense systems. NIR fluorescence of the tumor could be observed from the brain surface in all patients. While the mean SBR from the brain surface was 1.58 (95% CI 1.2-2.0) with the KINEVO microscope, it was 2.4 (95% CI 0.1-5.0) with the VisionSense exoscope. In comparison, the mean SBRs from the tumor itself were 3.50 (95% CI 0.43-6.57) and 3.23 (95% CI 0.98-5.48) with the KINEVO and VisionSense systems, respectively. There was no substantial difference between the mean distances from the brain surface to the camera with the microscope (290 mm [95% CI 188-392 mm]) and exoscope (320 mm [95% CI 158-481 mm]) (p = 0.37). Furthermore, the differences in SBR from the brain surface (p = 0.62) and from the tumor itself (p = 0.90) were not significant. Moreover, the distance from the brain surface to the camera system did not differ significantly between the KINEVO and VisionSense systems (p = 0.39).
Case 1
A 74-year-old woman presented to our clinic with forgetfulness. She received a diagnosis of brain metastasis from colon cancer. The colon cancer had already been treated surgically and radiologically. The patient was administered 5 mg/kg ICG over 2 hours, 22 hours before the surgery. Preoperative Gd-enhanced MRI showed a ring-enhancing lesion with a maximum diameter of 40 mm in the left frontal lobe ( Fig. 6A and B).
The intracranial tumor was operated on using the KINEVO system, and neuronavigation was used to determine the area of the craniotomy and the location of the dural incision. NIR signal could be detected through the brain parenchyma and was reconfirmed by NIRS before the brain incision. The tumor was located at a depth of 17 mm from the surface and was identified with NIRS ( Fig. 6C and D). This was followed by identification of the signal at the tumor ( Fig. 6E and F). Following tumor resection with margins, there was no residual tumor and no NIR signal at the end of surgery ( Fig. 6G and H). Figure 6I and J shows the gross-total resection on postoperative Gd-enhanced MRI. The patient received stereotactic radiotherapy following the surgery, and there was no recurrence of symptoms for 8 months.

FIG. 1 caption: NIR SBR at different depths at the cortex and at the tumor. The NIR signal is smaller above the cortex; however, it is still visible. Bars represent the means, and error bars represent the SDs. There is a statistically significant difference between the two groups (p = 0.032).

FIG. 2 caption: Linear regression plot of SBR of NIR signal from the tumor versus the distance from the brain surface. The SBR decreases with depth (p = 0.031, R 2 = 0.46).
Case 6
A 77-year-old woman with lung cancer had been treated and followed at a nearby hospital. She presented with severe headache and gait disturbance and was referred to our institution. Gd-enhanced MRI showed an en plaque enhanced lesion with a maximum diameter of 70 mm in the medial side of the left frontal lobe along with falx and meningeal enhancement, suggesting brain metastasis and meningeal carcinomatosis (Fig. 7A-C). Her intracranial tumor was operated on using the VisionSense system with neuronavigation and NIRS in the same manner as in case 1. The tumor was deeply located along with the falx and was covered by the brain parenchyma (Fig. 7D). It could not be directly observed by the exoscope; however, NIRS could detect the tumor at the cortex (Fig. 7D-F). This tumor identification by NIR continued during surgery and clearly showed residual tumor at the tumor site (Fig. 7G-I). After tumor removal, no residual tumor was observed in either white bright light or NIR (Fig. 7J-L). Postoperative Gd-enhanced MRI showed no enhancing mass except for meningeal carcinomatosis (Fig. 7M-O). The patient's symptoms dramatically improved after surgery, and she was referred back to the nearby hospital for adjuvant radiotherapy.
Benefit of SWIG for the Resection of the Metastatic Tumor
Neuronavigation is now being used for the removal of intracranial tumors. It facilitates surgical planning before the operation; however, the brain becomes displaced after the dura mater is opened. This brain shift in turn reduces the accuracy of the navigated tumor location. Thus, the use of an NIR fluorescence technique as an adjunct for tumor removal is beneficial. Previously, researchers have reported on the use of fluorescein and 5-aminolevulinic acid (5-ALA) for glioma; more recently, SWIG with ICG has also been used.
This pilot study confirmed the usefulness of SWIG with ICG in metastatic brain tumors using an exoscope as well as a microscope. Fluorescein, 5-ALA, and ICG are the currently available fluorescent agents. ICG has been used for vascular angiography in patients with metastatic tumors. 7,8 Earlier, 25 mg ICG was intravenously administered to visualize the arterial, capillary, and venous flow. Lee et al. reported on the use of the SWIG technique for metastatic tumor treated with a VisionSense exoscope. 3 SWIG involves administration of 5 mg/kg ICG 24 hours before surgery. By comparison, 5-ALA helps visualize approximately 62% of metastatic tumors. 9 Kamp et al. cautioned that residual 5-ALA-induced fluorescence after complete macroscopic resection of a metastasis needs to be interpreted carefully because of the limited specificity for residual tumor tissue detection. 9 In that study, approximately 61.5% of 52 patients showed positive fluorescence for 5-ALA after complete resection under the microscope. In contrast, 42.9% (18 of 52 cases) showed residual fluorescence of 5-ALA in the resected cavity. Of these patients, 33.3% (6 of 18 cases) had signs of pathologically confirmed positive tumor cells. Thus, the false-positive rate for 5-ALA-induced fluorescence was 66.6%. Nonetheless, the pattern of fluorescence did not correlate with tumor histology. 9 Fluorescein was used in fluorescence-guided neurosurgery in one study. 10 Of the 95 patients with metastatic tumors, 95% showed positive fluorescence using fluorescein. However, 14% showed residual tumor on postoperative MRI. 11 Therefore, ICG-induced fluorescence seems to enable more feasible tumor identification in metastatic neurosurgery.
Dose of ICG and Intensity of NIR Signal
We followed the protocol published by Madajewski et al. and Lee et al. [2][3][4][5] Madajewski et al. reported on administration of a high dose of ICG (7.5 mg/kg) 24 hours prior to surgery to allow ICG accumulation in the areas of neoplasm in a flank tumor model. 5 This effect was confirmed by the dose studies in the flank tumor model 12 and in a murine intracranial tumor model. 13 Thus, we hypothesized that the accumulation of ICG occurs through the enhanced permeability and retention effect, in which ICG is retained in solid tumors because of enhanced vascular permeability due to defective vascular structures and an impaired lymphatic drainage system, breakdown of the blood-brain barrier by the tumor, and increased permeability mediators. Using a boosted light beam is likely to enhance the efficacy of a microscope. 14 Our findings demonstrated that both the microscope and the exoscope can be used for NIRS of metastatic tumors.
NIRS of the Tumor From the Brain Surface
According to previous reports, NIRS can detect intraaxial tumor at a mean depth of 13.5 ± 4.0 mm from the brain surface in cases of glioblastoma and 6.8 mm in cases of metastatic brain tumors. 2,3 However, we could visualize all tumors from the brain surface in this study. The depth from the brain surface to the outermost edge of the tumor on Gd-enhanced T1-weighted MRI was 10.56 ± 6.74 mm (range 0-20 mm). In addition, we could even observe the deepest tumor, with a depth of 20 mm from the brain surface, in case 7. ICG NIRS facilitated visualization of the intraparenchymal lesion through the brain surface, compared with 5-ALA and fluorescein, consistent with the findings of Lee et al. 3 Thus, the fluorescent signal was visible at the brain surface (64%) and showed a strong SBR (100%) in all cases.
However, an NIR signal can be visible under both a microscope and an exoscope. This in turn might improve the boost excitation of NIRS. In addition, a weak NIR signal can even help identify the tumor. While a stronger NIR signal can result in false-positive findings, a weaker signal produces false-negative results.
Microscope and Exoscope
Li et al. reported on the failure of the Leica M530 OH6 to detect ICG fluorescence at the cortex and tumor, with SBRs of 1.8 ± 0.18 and 1.7 ± 0.24, respectively. 14 The NIR FL800 OH6 (Leica) has a 300-to 400-W xenon lamp; however, the power of the laser was not known for the KINEVO system. This system was able to detect NIR fluorescence from the brain surface with an SBR of 1.97 ± 0.3. In contrast, the SBR from the tumor increased to a mean of 3.43 ± 0.65. Thus, the results obtained using the KINEVO system were much better than those obtained using the Leica OH6. Li et al. elucidated the postexcitation boost data with SBRs of 2.8 ± 0.32 and 2.1 ± 0.48 on cortex and tumor, respectively. 14 The boosted light was brighter than the original one. Future studies should investigate the differences in signals among the NIR systems. Thus, NIR fluorescence of the tumor can produce the SBRs from the brain surface (p = 0.22) and the tumor itself (p = 0.29) equally well with the KINEVO and VisionSense systems. While the mean SBR from the brain surface was 1.58 (95% CI 1.2-2.0) using the KINEVO system, it was 2.4 (95% CI 0.1-5.0) with VisionSense. In contrast, the mean SBRs from the tumor itself were 3.50 (95% CI 0.43-6.57) and 3.23 (95% CI 0.98-5.48) with the KINEVO and VisionSense systems, respectively. Both means are nested within the 95% CIs for the KINEVO and VisionSense systems, and both systems were more likely to produce similar measurements; however, an equivalency test is yet to be done.
Despite the small sample size, with the KINEVO and VisionSense systems, there was no significant difference between the distance from the brain surface to the camera system (KINEVO, 290 mm [95% CI 188-392 mm]; and VisionSense, 320 mm [95% CI 158-481 mm]; p = 0.74) and that to the tumor itself (p = 0.31). Both means are nested within the 95% CIs for the KINEVO and VisionSense systems, and both systems were likely to produce similar measurements. Thus, the KINEVO microscope can allow NIR fluorescence detection of a metastatic tumor. According to Li et al., on boost excitation, the Leica OH6 helped in tumor visualization by increasing the SBR. 14 SWIG could become widely used in brain tumor surgery if the microscope works well without boost excitation.
In future studies, the most appropriate conditions, including time to detection, NIRS machine type, and analysis of software of the various microscopes and exoscopes, should be examined.
T1BR and SBR
The T1BR has been associated with the intensity of the NIR signal. 4 Our data demonstrated correlations of the T1BR with the SBR (p = 0.0020, R 2 = 0.77) and with the maximum diameter of the tumor (p = 0.0040, R 2 = 0.48). These results suggest that the usefulness of SWIG can be predicted before surgery based on the T1BR on MRI. In addition, Gd enhancement on MRI is governed by vascular permeability. 15 This supports the hypothesis that accumulation of ICG occurs through the enhanced permeability and retention effect in areas of enhanced vascular permeability. 16,17 Future studies should aim to verify the available pathology for SWIG and elucidate the mechanism of ICG accumulation.
Possibility of False Positives With the NIR System
The cameras of the KINEVO and VisionSense systems have an automatic exposure feature that averages the pixel intensity to normalize the background and assign it a neutral gray; the camera tries to balance the exposure when set in automatic exposure mode. To avoid this, the gain and illumination can be fixed manually. Automatic exposure can cause high false-positive (i.e., NIR positive and pathology negative for tumor) and false-negative (i.e., NIR negative and pathology positive for tumor) findings at the margins. We therefore fixed the percentage of gain and illumination once the tumor, identified under both bright light and NIR fluorescence, was exposed, in order to avoid false-positive identification of the tumor margins. Previous reports identified 5.9% true-positive and 48.5% false-positive specimens of metastatic tumor. 3 However, our results showed that the tumor could be completely resected in 9 of 10 cases under bright light. In addition, NIR fluorescence could not be detected at the end of surgery, and no enhanced lesion was found on postoperative Gd-enhanced MRI. This demonstrates that the absence of an NIR signal corresponded to the absence of residual tumor. Lee et al. also hypothesized that the time from infusion until imaging can affect the false-positive margin of the tumor. 4 Nonetheless, there was no substantial association between the time from infusion to observation and the SBR in our study (p = 0.20).
Limitations of SWIG
We have not yet evaluated the pathology of all of the specimens and its diagnostic correlation with NIRS findings. Also, the sensitivity and specificity could not be calculated in this work. Future evaluations are needed to address these deficiencies.
ICG is neither a receptor-bound nor a receptor-specific agent; therefore, a lack of specificity is inherent to this method. As mentioned above, manually fixing the gain and illumination at the tumor site, and keeping these settings throughout the surgery, can mitigate the problems of camera gain and the autoexposure system. The SBR scores are relative values, not absolute values: the distance between the lens and the target differed every time the tumors were observed by NIR fluorescence, the relative scores differed under the autoexposure system, and an absolute value could not be determined during surgery. This was a pilot study to establish the protocol for the main study, and the small sample size was a major limitation. According to previous reports, there may be considerable potential to decrease the false-positive rate by controlling the autoexposure system. 3
Conclusions
This pilot study revealed the usefulness of the SWIG technique for operating on tumor metastases with a microscope and with an exoscope. NIRS with ICG can provide stronger fluorescence of the tumor relative to normal brain parenchyma. It can also help identify the location of the tumor from the brain surface during tumor resection. Limiting the autoexposure and fixing the gain and illumination manually at the tumor site may reduce the false-positive rate. Further evaluation is needed to solve the remaining problems and limitations for clinical use; however, the ICG-based SWIG technique may enable safe and complete tumor resection in the near future. | 2021-01-03T06:16:00.305Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "3ecadd4e0e1ab610c8bfbf523664cf2b7ce9d603",
"oa_license": null,
"oa_url": "https://thejns.org/downloadpdf/journals/neurosurg-focus/50/1/article-pE11.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7f39c09adf57f9adc391b4c61c23bcb36c777d4c",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199577813 | pes2o/s2orc | v3-fos-license | Tomographic analyses of the CMB lensing and galaxy clustering to probe the linear structure growth
In a tomographic approach, we measure the cross-correlation between the CMB lensing reconstructed from the Planck satellite and the galaxies of the photometric redshift catalogue based on the combination of the South Galactic Cap u-band Sky Survey (SCUSS), Sloan Digital Sky Survey (SDSS), and Wide-field Infrared Survey Explorer (WISE) data. We perform the analyses considering six redshift bins spanning the range of $0.1<z<0.7$. From the estimates of the galaxy auto-spectrum and the cross-spectrum, we derive with high significance the galaxy bias and the amplitude of the cross-correlation at each redshift bin. We have finally applied these tomographic measurements to estimate the linear structure growth using the bias-independent $\hat{D}_{G}$ estimator introduced by Giannantonio et al. 2016. We find that the amplitude of the structure growth with respect to the fiducial cosmology is $A_{D}=1.02\pm0.14$, in agreement with the predictions of $\Lambda$CDM model. We perform tests for consistency of our results, finding no significant evidence for systematic effects.
Introduction
Progress in the sensitivity of astronomical photometric surveys dedicated to the study of the large-scale structure (LSS) has been providing valuable information about the features of the Universe at several scales and redshifts [2][3][4][5][6]. The prospects of using LSS data to constrain cosmology are very promising. Several upcoming astronomical surveys will produce extensive photometric data covering a wide area of the sky, such as the Large Synoptic Survey Telescope (LSST) [7] and the Wide-Field Infrared Survey Telescope (WFIRST). On the other hand, the cosmic microwave background radiation (CMB) allows us to test the primordial characteristics of the Universe. However, before reaching us, the CMB photons are affected by inhomogeneities along their path, producing a range of secondary effects beyond the primary CMB temperature fluctuations at the last scattering surface [8][9][10]. The gravitational deflection of the CMB photons by the mass distribution along their path, namely weak gravitational lensing, is one of these secondary effects.
The CMB lensing has been investigated by several methods and experiments in the past [11][12][13][14][15]. Recently, through observations of the Planck satellite, it was possible not only to detect the lensing effect with high statistical significance but also to robustly reconstruct the lensing potential map over almost the full sky [16,17]. Such a reconstructed map contains unique information on the LSS, since it is related to the integral of all photon deflections between the last scattering surface and us.
Although the CMB lensing signal covers a broad redshift range, from local to high redshifts, it is not possible to obtain the evolution of the LSS along the line of sight using only the CMB lensing data. The cross-correlation between the CMB lensing map and another tracer of matter provides additional astrophysical and cosmological information. Several galaxy catalogs, such as those from the Wide-field Infrared Survey Explorer (WISE) [18],
Background
The gravitational lensing effect remaps the CMB temperature anisotropies by a 2D angular gradient of the lensing potential, $\alpha(\hat{n}) = \nabla\psi(\hat{n})$, where $\nabla$ is the 2D gradient operator on the sphere and $\psi(\hat{n})$ is the lensing potential. The 2D Laplacian of the lensing potential is related to the convergence $\kappa(\hat{n})$, which can be written as a function of the three-dimensional matter density contrast $\delta$ (see e.g. [36])

$$\kappa(\hat{n}) = \int_{0}^{\infty} dz\, W^{\kappa}(z)\, \delta(\chi(z)\hat{n}, z), \qquad (2.1)$$

where the lensing kernel $W^{\kappa}$ is

$$W^{\kappa}(z) = \frac{3\Omega_{m} H_{0}^{2}}{2c\,H(z)}\,(1+z)\,\chi(z)\,\frac{\chi_{*} - \chi(z)}{\chi_{*}}, \qquad (2.2)$$

where we are considering a flat universe, $c$ is the speed of light, $H(z)$ is the Hubble parameter at redshift $z$, and $H_{0}$ and $\Omega_{m}$ are the present-day Hubble parameter and matter density parameter, respectively. The comoving distances $\chi(z)$ and $\chi_{*}$ are evaluated at redshift $z$ and at the last scattering surface, $z_{*} \simeq 1090$, respectively. On the other hand, the galaxy overdensity $\delta_{g}$ from a galaxy catalogue with normalized redshift distribution $dn/dz$ also provides an estimate of the projected matter density contrast, given by

$$\delta_{g}(\hat{n}) = \int_{0}^{\infty} dz\, W^{g}(z)\, \delta(\chi(z)\hat{n}, z), \qquad (2.3)$$

where the galaxy kernel $W^{g}$ for a linear, deterministic and scale-independent galaxy bias $b(z)$ [37] is

$$W^{g}(z) = b(z)\,\frac{dn}{dz}. \qquad (2.4)$$

Under the Limber approximation [38], the two-point statistics in harmonic space of the galaxy-galaxy and galaxy-CMB lensing correlations become

$$C_{\ell}^{gg} = \int dz\, \frac{H(z)}{c}\,\frac{[W^{g}(z)]^{2}}{\chi^{2}(z)}\, P\!\left(k=\frac{\ell+1/2}{\chi(z)}, z\right), \qquad C_{\ell}^{\kappa g} = \int dz\, \frac{H(z)}{c}\,\frac{W^{\kappa}(z)\,W^{g}(z)}{\chi^{2}(z)}\, P\!\left(k=\frac{\ell+1/2}{\chi(z)}, z\right), \qquad (2.5)$$

where $P(k, z)$ is the matter power spectrum. The Limber approximation is quite accurate when $\ell$ is not too small ($\ell > 10$) [38], which is the regime considered in this work. Moreover, it is possible to rewrite equations 2.5 in terms of the linear growth function $D(z)$, since $P(k, z) = P(k, 0)\,D^{2}(z)$. Therefore,

$$C_{\ell}^{gg} = \int dz\, \frac{H(z)}{c}\,\frac{[W^{g}(z)]^{2}}{\chi^{2}(z)}\, D^{2}(z)\, P\!\left(\frac{\ell+1/2}{\chi(z)}, 0\right), \qquad C_{\ell}^{\kappa g} = \int dz\, \frac{H(z)}{c}\,\frac{W^{\kappa}(z)\,W^{g}(z)}{\chi^{2}(z)}\, D^{2}(z)\, P\!\left(\frac{\ell+1/2}{\chi(z)}, 0\right). \qquad (2.6)$$

Thus, by properly combining the two quantities of equation 2.6, it is possible to eliminate the bias dependence and break the degeneracy between the galaxy bias and the linear growth through the estimator introduced by [1]:

$$\hat{D}_{G} = \left\langle \frac{\hat{C}_{\ell}^{\kappa g}}{\slashed{C}_{\ell}^{\kappa g}} \sqrt{\frac{\slashed{C}_{\ell}^{gg}}{\hat{C}_{\ell}^{gg}}} \right\rangle_{\ell}. \qquad (2.7)$$

In the above equation, $\hat{D}_{G}$ depends on the observed and theoretical slashed correlation functions, the theoretical quantities $\slashed{C}^{gg}$ and $\slashed{C}^{\kappa g}$ being evaluated at $z = 0$ and therefore having the growth function removed. In order to obtain the theoretical predictions for the matter power spectrum $P(k, z)$, we use the public Boltzmann code CAMB 1 [39] with the Halofit [40] extension to nonlinear evolution. Throughout the paper, we use the Planck 2015 cosmology [41].
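As a rough numerical illustration of equations 2.2-2.6 (not the code used in this analysis), the Limber integrals can be evaluated with a few lines of NumPy once interpolators for $P(k,z)$, $\chi(z)$, $H(z)$ and $dn/dz$ are available, for instance from CAMB. All function names, signatures and default parameter values in the sketch below are assumptions of this illustration.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def limber_spectra(ells, z, pk, chi, hubble, dndz, bias,
                   omega_m=0.31, h0=67.7, z_star=1090.0):
    """Limber approximation (eq. 2.5) for C_ell^{gg} and C_ell^{kappa g}.

    Assumed (hypothetical) inputs:
      pk(k, z)   -- matter power spectrum in Mpc^3, with k in 1/Mpc
      chi(z)     -- comoving distance in Mpc (vectorized over z)
      hubble(z)  -- H(z) in km/s/Mpc (vectorized over z)
      dndz(z)    -- normalized redshift distribution of the galaxy bin
      bias       -- constant linear galaxy bias of the bin
    """
    chi_z = chi(z)
    chi_star = chi(z_star)

    # Galaxy kernel W^g(z) = b * dn/dz (eq. 2.4)
    w_g = bias * dndz(z)
    # CMB lensing kernel W^kappa(z) (eq. 2.2)
    w_k = (1.5 * omega_m * h0**2 * (1.0 + z) * chi_z
           * (chi_star - chi_z) / (chi_star * C_KMS * hubble(z)))

    c_gg, c_kg = [], []
    for ell in ells:
        k = (ell + 0.5) / chi_z                      # Limber wavenumber at each z
        p = np.array([pk(ki, zi) for ki, zi in zip(k, z)])
        kern = hubble(z) / C_KMS / chi_z**2 * p      # common integrand factor
        c_gg.append(np.trapz(kern * w_g * w_g, z))
        c_kg.append(np.trapz(kern * w_k * w_g, z))
    return np.array(c_gg), np.array(c_kg)
```

In practice, pk could be built from CAMB's matter power interpolator (whose native argument order is (z, k)), chi from the comoving radial distance and hubble from the Hubble parameter returned by a CAMB results object, although any equivalent inputs would work.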
The SCUSS is a u-band (354 nm) imaging survey using the 2.3m Bok telescope located on Kitt Peak, USA. The data products were released in 2015 containing calibrated singleepoch images, stacked images, photometric catalogs, and a catalogue of star proper motions. The released catalogue covers an area of approximately 4000 square degrees of the South Galactic Cap and overlaps roughly 75% of the area covered by the SDSS [45]. The detailed information about the SCUSS and the data reduction can be found in [45] and [46].
The SDSS is a multi-spectral imaging and spectroscopic redshift survey, encompassing an area of about 14000 square degrees. The SDSS uses a wide-field camera that is made up of 30 CCDs. The survey images in five broad bands, u, g, r, i, z, with limiting magnitudes at 95% completeness of 22.0, 22.2, 22.2, 21.3, and 20.5 mag, respectively. The data have been released publicly in a series of roughly annual data releases. Specifically, the photometric data from Data Release 10 (DR10) [47] are used to obtain the final catalogue considered in this paper [35].
WISE is an infrared astronomical space telescope that scanned the whole sky at 3.4, 4.6, 12, and 22 µm, bands known as W1, W2, W3, and W4, respectively. In September 2010, the frozen hydrogen cooling the telescope was depleted, and the survey continued as NEOWISE with the W1 and W2 bands. In order to properly match the official all-sky WISE catalogs with the SDSS data, a technique called forced photometry is used to measure model magnitudes of the SDSS objects in new coadds of WISE images. This provides an extensive extragalactic catalogue, resulting in a sample of more than 400 million sources.
The catalogue we use has been built using the 7 photometric bands ranging from the near-ultraviolet to the near-infrared. A local linear regression algorithm [6] is adopted, using a spectroscopic training set composed mostly of galaxies from the SDSS DR13 spectroscopy, in addition to several other surveys. The model magnitudes utilize the shape parameters from the SDSS r-band, and the SDSS star/galaxy separation is used to characterize the source type. The final catalogue contains ∼ 23.1 million galaxies 2 with ∼ 99% of the sources spanning the redshift interval $z \leq 0.9$ [35]. The multi-band information allows the photo-z's of the sources to be estimated more accurately and with less bias than the SDSS photometric redshifts, with an average bias of $\Delta z_{norm} = 2.28 \times 10^{-4}$ and a standard deviation of $\sigma_{z} = 0.019$.
In order to apply a tomographic approach, we split the full catalogue into six redshift bins of width $\Delta z = 0.1$ over $0.1 < z < 0.7$. We ignore the extreme redshift bins, where the fractional photo-z errors become large and the galaxy density becomes small. We use the positions of the sources to create a pixelized overdensity map for each redshift bin, using $\delta_{g}(\hat{x}) = (n_{g}(\hat{x}) - \bar{n})/\bar{n}$, where $n_{g}$ is the number of observed galaxies in a given pixel and $\bar{n}$ is the mean number of objects per pixel in the unmasked area, in the HEALPix scheme [48] with a resolution parameter $N_{side} = 512$. Figure 1 shows the overdensity maps in these six redshift bins, where the grey area indicates the masked regions. We further discard the stripes located in the galactic longitude range $180 < l < 330$ due to their low density, leaving about $f_{sky} = 0.08$ in each map for the analyses. The specifics of each bin are summarized in table 1. As discussed in section 2, we need the overall redshift distribution $dn/dz$ and the galaxy bias to connect the galaxy overdensity $\delta_{g}$ to the underlying matter overdensity $\delta$. However, we need to take into account the effect of the photometric redshift errors [49,50]. We can accurately reconstruct the true $dn/dz$ distribution by the convolution of the sample's photometric redshift distribution $dn/dz_{ph}$ with the catalogue's photo-z error function $p(z|z_{ph})$:

$$\frac{dn}{dz} = \int dz_{ph}\, \frac{dn}{dz_{ph}}\, p(z|z_{ph})\, W(z_{ph}), \qquad (3.1)$$

where $p(z|z_{ph})$ is parameterized as a Gaussian distribution with zero mean and dispersion $\sigma_{z}$, so that $p(z|z_{ph}) \propto \exp\left(-\frac{(z-z_{ph})^{2}}{2\,\sigma_{z}^{2}(1+z)^{2}}\right)$ with $\sigma_{z} = 0.019$ [35], and $W(z_{ph})$ is the window function, such that $W = 1$ for $z_{ph}$ in the selected interval and $W = 0$ otherwise. The redshift distribution for the total catalogue is shown as the solid black line in figure 2, while the distribution for each tomographic bin is shown as the dashed lines.

Table 1 (excerpt, number of galaxies per photo-z bin): 0.2-0.3: 3,178,981 (3.14 × 10^6); 0.3-0.4: 3,686,820 (3.64 × 10^6); 0.4-0.5: 5,155,408 (5.09 × 10^6); 0.5-0.6: 4,348,898 (4.29 × 10^6); 0.6-0.7: 2,101,281 (2.08 × 10^6); Total: 20,680,257 (2.04 × 10^7).
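A schematic construction of the per-bin overdensity map described above is sketched below with healpy; the coordinate arrays, the mask and the catalogue handling are placeholders rather than the actual pipeline.

```python
import numpy as np
import healpy as hp

def overdensity_map(ra_deg, dec_deg, mask, nside=512):
    """Build delta_g = (n_g - nbar) / nbar on a HEALPix grid.

    ra_deg, dec_deg : galaxy coordinates for one photo-z bin (placeholders)
    mask            : binary HEALPix map, 1 inside the unmasked survey area
    """
    npix = hp.nside2npix(nside)
    pix = hp.ang2pix(nside, ra_deg, dec_deg, lonlat=True)
    counts = np.bincount(pix, minlength=npix).astype(float)

    good = mask > 0
    nbar = counts[good].mean()          # mean galaxies per unmasked pixel
    delta = np.zeros(npix)
    delta[good] = counts[good] / nbar - 1.0
    return delta
```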
Planck CMB lensing
We consider the CMB lensing products of the Planck 2015 data release 3 . The lensing convergence map has been constructed based on quadratic estimators that exploit the statistical information introduced by weak lensing in the CMB data [51]. As an estimate of the CMB lensing, the Planck team [16] has provided the convergence field κ reconstructed using the minimum-variance (MV) combination of the estimators applied to the temperature (T) and polarization (P) of the SMICA foreground-cleaned maps. The total lensing signal measured from κ is detected at about 40σ.
The κ map released is band-limited to the multipole range $8 \leq \ell \leq 2048$. The reconstructed map covers about $f_{sky} \sim 67.3\%$ of the sky, masking regions contaminated by Galactic emission and point sources. Together with the released lensing products, the corresponding confidence mask and a set of 100 realistic simulations are available, which accurately incorporate the Planck noise levels and the statistical properties of κ [52]. The maps as well as the mask are provided at the HEALPix resolution parameter $N_{side} = 2048$. We use the HEALPix ud_grade routine to convert them to the resolution $N_{side} = 512$.
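For illustration only, reading a full-resolution convergence map and mask from disk and degrading them to the working resolution could look like the healpy sketch below; the file names are placeholders, and the actual Planck products may be distributed in a different format (e.g., as harmonic coefficients), in which case an extra conversion step would be needed.

```python
import healpy as hp

# Placeholder file names standing in for the Planck 2015 lensing products.
kappa_2048 = hp.read_map("planck_2015_kappa_nside2048.fits")
mask_2048 = hp.read_map("planck_2015_lensing_mask_nside2048.fits")

# Degrade both to the working resolution used for the cross-correlation.
kappa_512 = hp.ud_grade(kappa_2048, nside_out=512)
# For a binary mask, keep only pixels that remain essentially fully unmasked.
mask_512 = (hp.ud_grade(mask_2048, nside_out=512) >= 0.99).astype(float)
```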
Method
In this work, we use the angular power spectrum (APS) of the galaxy overdensity and the angular cross-power spectrum (CAPS) between the galaxy overdensity and the CMB convergence map to estimate the cosmic growth information at several redshifts. In this section, we describe the procedure followed in the analysis of these two datasets.
Estimator
The APS and CAPS estimates for incomplete sky coverage are affected by the mask, which introduces coupling between different modes [53]. Therefore, we use a pseudo-$C_{\ell}$ estimator based on the MASTER approach [54], which provides a very good correction for this effect, particularly on the large scales that are the regime considered here, as detailed below.
Let us denote the two fields as X and Y, with the auto-power spectrum corresponding to the case X = Y, and the true full-sky cross-(auto-)spectrum denoted as $C_{\ell}^{XY}$ ($C_{\ell}^{XX}$). The pseudo-cross-spectrum $\tilde{C}_{\ell}^{XY}$ measured on a fraction of the sky is

$$\tilde{C}_{\ell}^{XY} = \frac{1}{2\ell+1}\sum_{m} \tilde{X}_{\ell m}\,\tilde{Y}^{*}_{\ell m}, \qquad (4.1)$$

where $\tilde{X}_{\ell m}$ and $\tilde{Y}^{*}_{\ell m}$ are the spherical harmonic coefficients of the masked maps. The mask acts as a weight modifying the underlying harmonic coefficients, so that the pseudo-$C_{\ell}$ measured from the data can be related to the true spectrum by the mode-mode coupling matrix $M_{\ell\ell'}$ as

$$\langle \tilde{C}_{\ell}^{XY} \rangle = \sum_{\ell'} M_{\ell\ell'}\, C_{\ell'}^{XY}, \qquad (4.2)$$

where $M_{\ell\ell'}$ is determined by the geometry of the mask [55], given by

$$M_{\ell\ell'} = \frac{2\ell'+1}{4\pi}\sum_{\ell''}(2\ell''+1)\,\mathcal{W}_{\ell''}\begin{pmatrix} \ell & \ell' & \ell'' \\ 0 & 0 & 0 \end{pmatrix}^{2}. \qquad (4.3)$$

Here $\mathcal{W}_{\ell}$ is the APS of the mask when X = Y, while in the cross-correlation it corresponds to that of the two joint masks. In the cross-correlation analysis, we use the mask resulting from multiplying the κ mask with the $\delta_{g}$ mask for each redshift bin.
Depending on the size of the sky cut, the relation 4.2 cannot be directly inverted to obtain $C_{\ell}^{XY}$ because, in general, the coupling matrix is singular. To mitigate the coupling effect and also to reduce the errors on the resulting CAPS, it is appropriate to bin the power spectrum in $\ell$. We bin the power spectrum into linearly spaced band powers of width $\Delta\ell = 10$ in the range $10 < \ell < 512$. We tested different bin widths and found no significant influence on the results. While we set the lowest value of $\ell$ based on the $\ell_{min}$ of the Planck map and the accuracy of the Limber approximation, we impose a conservative cut removing scales $\ell > 70$ to avoid several effects that are significant at small scales and could affect our analysis, such as the non-linear galaxy bias and the thermal Sunyaev-Zel'dovich (tSZ) contamination in the κ map. The impact of this $\ell_{max} = 70$ choice on our results is explored below. An unbiased estimator of the true band powers $\hat{C}_{L}^{XY}$ is then given, in terms of the binned coupling matrix $K_{LL'}$, by

$$\hat{C}_{L}^{XY} = \sum_{L'} K^{-1}_{LL'} \sum_{\ell} P_{L'\ell}\,\tilde{C}_{\ell}^{XY}, \qquad (4.4)$$

where

$$K_{LL'} = \sum_{\ell} P_{L\ell} \sum_{\ell'} M_{\ell\ell'}\, F_{\ell'}\, B^{X}_{\ell'} B^{Y}_{\ell'}\, p^{2}_{\ell'}\, Q_{\ell' L'}. \qquad (4.5)$$

Here $L$ denotes the bandpower index, $P_{L\ell}$ is the binning operator and $Q_{\ell L}$ is its reciprocal, corresponding to a piece-wise interpolation. $B_{\ell}$ is the beam function of each observed field X and Y, $p_{\ell}$ is the pixel window function and $F_{\ell}$ is the effective filtering function. We do not need to debias the noise in the CAPS estimator, since the CMB lensing and the galaxy data are completely independent measurements and therefore have, in principle, uncorrelated noise. However, we correct the estimated APS, $\hat{C}_{\ell}^{gg}$, by subtracting the shot-noise term $N^{gg} = 1/\bar{n}$, where $\bar{n}$ is the average number density of galaxies per steradian.
The errors on the estimated auto-(cross-)spectrum are determined by [56]

$$\Delta\hat{C}_{\ell}^{XY} = \sqrt{\frac{\left(\hat{C}_{\ell}^{XY}\right)^{2} + \left(\hat{C}_{\ell}^{XX} + N^{XX}\right)\left(\hat{C}_{\ell}^{YY} + N^{YY}\right)}{(2\ell+1)\,\Delta\ell\, f_{sky}}}, \qquad (4.6)$$

where we assume that both fields behave as Gaussian random fields, and the APS incorporate the associated noise, which is $N^{gg}$ and $N^{\kappa}$ for the galaxy and CMB lensing maps, respectively. We need to take into account the errors associated with the APS and CAPS measurements of equation 4.6 to obtain the $\hat{D}_{G}$ estimator properly. Thus, we use a weighted average in the $\hat{D}_{G}$ calculation, as described in detail in Appendix A of [24].
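The full MASTER deconvolution of equations 4.2-4.5 is usually delegated to dedicated codes. Purely as an illustration of the binning step and of the Gaussian errors of equation 4.6, a simplified, f_sky-corrected version can be written as below; all inputs are assumed precomputed, and the function and variable names are illustrative, not part of any released pipeline.

```python
import numpy as np
import healpy as hp

def binned_cross_spectrum(map_x, map_y, mask, lmin=10, lmax=510, dl=10):
    """f_sky-corrected, ell-binned pseudo-C_ell (a simplified stand-in
    for the full MASTER deconvolution)."""
    fsky = mask.mean()
    cl = hp.anafast(map_x * mask, map_y * mask, lmax=lmax) / fsky
    ells = np.arange(cl.size)
    edges = np.arange(lmin, lmax + 1, dl)
    centers = 0.5 * (edges[:-1] + edges[1:])
    binned = np.array([cl[(ells >= lo) & (ells < hi)].mean()
                       for lo, hi in zip(edges[:-1], edges[1:])])
    return centers, binned, fsky

def knox_errors(cl_xy, cl_xx_tot, cl_yy_tot, centers, dl, fsky):
    """Gaussian error bars of eq. 4.6; the *_tot spectra include the noise."""
    return np.sqrt((cl_xy**2 + cl_xx_tot * cl_yy_tot)
                   / ((2.0 * centers + 1.0) * dl * fsky))
```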
Galaxy bias and lensing amplitude
We show the measurements of the $\ell$-binned APS (left panel) and the CAPS (right panel) in Figure 3. The six panels represent, from top to bottom, the estimates for each redshift bin. The error bars of the extracted APS and CAPS are calculated using expression 4.6.
Although the $\hat{D}_{G}$ estimator is bias-independent for a narrow redshift bin, we can use the observed APS and CAPS to estimate, respectively, the best-fit bias b and the amplitude of the cross-correlation $A = b A_{lens}$, where the latter is introduced for phenomenological reasons and $A_{lens}$ is the CMB lensing amplitude. Therefore, on average, $A_{lens}$ is expected to be 1 if the underlying cosmology conforms to the fiducial model, and the amplitude A should then take the same value as the galaxy bias b determined from the auto-correlation.
We assume that the bias does not evolve within each redshift bin, so that A and b are free parameters obtained by means of a Bayesian analysis assuming uninformative flat priors and a Gaussian likelihood,

$$\mathcal{L}(x|\theta) \propto \exp\left[-\frac{1}{2}\,(x-\mu(\theta))^{T}\,\mathcal{C}^{-1}\,(x-\mu(\theta))\right],$$

where $x$ is the extracted $\hat{C}^{gg}$ or $\hat{C}^{\kappa g}$, $\mu$ is the corresponding binned theoretical prediction for the parameters $\theta$, and $\mathcal{C}$ is the covariance matrix. The covariance matrix is assumed diagonal, with its elements computed from equation 4.6. In order to efficiently sample the parameter space, we use the Markov chain Monte Carlo (MCMC) method, employing the emcee 4 package [57]. We perform this analysis for each redshift bin and, as a comparison, also for the full sample spanning $0.1 < z < 0.7$. Our results are stable against the length of the chain as well as the initial walker positions. The best-fit bias and amplitude with their 1σ errors are reported in the captions of Figure 3, and the best-fit theoretical model with its 1σ uncertainties is shown as the solid lines and the gray shaded region, respectively. The significance of the detection is calculated as $S/N = \sqrt{\chi^{2}_{null} - \chi^{2}_{min}(\theta)}$, where $\chi^{2}_{null}$ is the $\chi^{2}(\theta = 0)$ and $\chi^{2}_{min}(\theta)$ is the value for the best fit. The parameter values, the S/N, and the $\chi^{2}_{min}$ for each redshift bin are summarized in Table 2.
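A minimal sketch of this one-parameter fit with emcee is given below. It assumes a precomputed binned data vector, its diagonal errors, and a unit-amplitude theory template; it is not the code used in this work, and the prior range and sampler settings are arbitrary illustrative choices.

```python
import numpy as np
import emcee

def log_posterior(theta, data, sigma, template, power):
    """Flat prior and Gaussian likelihood with a diagonal covariance.
    power = 2 when fitting the bias b to C^gg (model = b^2 * template),
    power = 1 when fitting A = b*A_lens to C^kg (model = A * template)."""
    amp = theta[0]
    if not 0.0 < amp < 10.0:              # uninformative flat prior
        return -np.inf
    model = amp**power * template
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

def fit_amplitude(data, sigma, template, power, nwalkers=32, nsteps=2000):
    p0 = 1.0 + 0.01 * np.random.randn(nwalkers, 1)
    sampler = emcee.EnsembleSampler(nwalkers, 1, log_posterior,
                                    args=(data, sigma, template, power))
    sampler.run_mcmc(p0, nsteps, progress=False)
    chain = sampler.get_chain(discard=nsteps // 2, flat=True)
    return chain.mean(), chain.std()      # posterior mean and 1-sigma spread
```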
In the tomographic analysis, we find the best-fit bias in agreement within 1σ with the values of the cross-correlation amplitude, indicating a lensing amplitude consistent with unity. For all the redshift bins, the best-fit bias has $S/N \sim 13$. Although the constraints using the CAPS are clearly weaker than in the APS case, we do find $S/N \sim 1.40-2.55$ in each bin. The reduced $\chi^{2}_{min}$ values indicate that our estimate of the covariance is realistic and that the model provides a good fit to the data; the only exception is the bias from the auto-correlation in the last two redshift bins, 0.5 < z < 0.6 and 0.6 < z < 0.7, with reduced $\chi^{2}_{min}$ slightly greater than 1. Figure 4 shows the $\hat{C}^{\kappa g}$ (right panel) and the $\hat{C}^{gg}$ (left panel) when considering the galaxy sample covering the redshift range from 0.1 to 0.7. The results are also summarized in table 2. In this case, we find that A < b at more than 3σ (including only statistical errors), unlike what is found in the tomographic analysis. This discrepancy might point toward effects such as scale-dependent bias, bias evolution within this interval, or stochasticity in the sample. The tension between b and A has also been reported by other authors. [1] found A(z) < b(z) by 2-3σ using the DES Science Verification galaxy data correlated with the CMB lensing from SPT and from Planck, in 5 redshift bins in the range 0.2 < z < 1.2 as well as for the full galaxy sample. Also, correlations between CFHTLenS galaxy density and Planck CMB lensing [20] and between the CFHTLenS shear and Planck CMB lensing show a lensing amplitude smaller than 1, although with modest significance [58].
Null tests
In order to check the validity of the cross-correlation estimate against the possibility of residual systematics or spurious signals, we perform a null hypothesis test of no correlation between the CMB lensing and the galaxy density maps. We do this by considering the cross-correlation of these two fields, being one of them the real map and the second one from simulations. As these maps do not contain a common cosmological signal, the mean correlation is expected to be consistent with zero.
We cross-correlate the real galaxy maps of each redshift bin with the 100 convergence simulations from the Planck 2015 release [16]. In addition, we cross-correlate the Planck CMB convergence map with 100 galaxy simulations constructed considering the corresponding best-fit bias, masks, shot noise, and the same properties of the galaxy number density of each redshift tomographic bin.
Figure 3 (caption): The panels refer to the photo-z bins, from low to high redshift (top to bottom). The points are the direct estimates, while the solid line is the fiducial cosmology rescaled by the best-fit galaxy bias (for the auto-spectra) and by the cross-correlation amplitude $A = bA_{lens}$ (for the cross-spectra). Both the amplitudes and the biases are reported in the captions with their 1σ errors. The best-fit theory was inferred using multipoles up to $\ell < 70$. The shaded grey region indicates the 1σ range around the best-fit theory.
Table 2 (caption): Summary of the results estimated from the APS and CAPS for the 5 redshift bins and for the sample between 0.1 < z < 0.7: the top half of the table shows the best-fit linear bias b from the galaxy auto-correlations, while the lower half shows the best-fit cross-correlation amplitudes $A = bA_{lens}$. The signal-to-noise (S/N) and the best-fit $\chi^{2}$ are also shown.
Figure 5 shows the cross-power spectrum estimated in both
cases, where the error bars were computed from the standard deviation of the simulated cross-power spectra divided by $\sqrt{N_{sim}}$, with $N_{sim} = 100$. Considering the covariance matrices obtained from these simulations, we calculate the $\chi^{2}$ and the probability-to-exceed (PTE) for the scales $10 \leq \ell \leq 70$ with $\nu = 6$ degrees of freedom. The results are displayed in Table 3. Although we find different values of the PTE for each test and redshift bin, no significant signal is detected in either case, and the cross-correlations are thus consistent with zero.
Figure 5 (caption): Null tests for the cross-power spectrum in the six redshift bins. The right panel ("κ Real × Galaxy Overdensity Simulations") shows the mean correlation between the Planck CMB convergence map and 100 galaxy overdensity simulations obtained considering the respective features of each redshift bin. The left panel shows the mean correlation between the galaxy overdensity and the 100 simulated Planck CMB lensing maps. The error bars are given by the standard deviation of the simulated cross-power spectra divided by √100.
Constraints of $\hat{D}_{G}$
We calculate the $\hat{D}_{G}$ estimator using the APS and CAPS extracted from the datasets. Figure 6 shows the result for each redshift bin with the corresponding 1σ error bars. The error bars for each redshift bin are estimated from the dispersion of $\hat{D}_{G}^{sim}$, established from the auto- and cross-spectra of 500 correlated Gaussian realizations [59,60] of κ and $\delta_{g}$ with statistical properties consistent with the data. The solid black line is the expectation in the fiducial Planck ΛCDM model, $D_{G}^{fid}(z)$. As the expected function $D_{G}^{fid}(z)$ is directly related to the cosmological parameters $\Omega_{m}\sigma_{8}H_{0}^{2}$, we use the Planck chains to randomly draw 3000 points and calculate the linear growth function for each cosmology. The gray shaded region around $D_{G}^{fid}(z)$ is the 2σ scatter for the 3000 cosmologies. It is worth mentioning that for each model $i$ we normalize the curve by multiplying by the factor $(\Omega_{m}\sigma_{8}H_{0}^{2})_{i}/(\Omega_{m}\sigma_{8}H_{0}^{2})_{fid}$, as in [1,24].
We can assess the amplitude of the linear growth function, $A_{D}$, with respect to the fiducial prediction by assuming the template shape of $D_{G}$ to be fixed by $D_{G}^{fid}(z)$ [1,24], such that

$$\hat{D}_{G}(z_{i}) = A_{D}\, D_{G}^{fid}(z_{i}), \qquad (5.2)$$

where for each tomographic bin we use the median of the redshift distribution as the input value of $z$.
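A compact sketch of how equation 2.7 and the template relation above could be evaluated from binned spectra is given below. The inverse-variance weighting is only a rough stand-in for the procedure of Appendix A in [24], and the fit of $A_{D}$ is shown as a simple weighted least squares rather than the MCMC used in the analysis; all inputs and names are assumptions of this illustration.

```python
import numpy as np

def dg_hat(ckg_obs, ckg_err, cgg_obs, cgg_err, ckg_slash, cgg_slash):
    """Bias-independent growth estimator of eq. 2.7 for one redshift bin.
    The 'slash' spectra are the theory templates with the growth removed;
    all arrays run over the ell band powers."""
    ratio = (ckg_obs / ckg_slash) * np.sqrt(cgg_slash / cgg_obs)
    # Rough per-bandpower variance from linear error propagation.
    var = ratio**2 * ((ckg_err / ckg_obs) ** 2 + 0.25 * (cgg_err / cgg_obs) ** 2)
    w = 1.0 / var
    return np.sum(w * ratio) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def fit_growth_amplitude(dg, dg_err, dg_fid):
    """Weighted least-squares amplitude A_D for D_G(z_i) = A_D * D_G^fid(z_i)."""
    dg, dg_err, dg_fid = map(np.asarray, (dg, dg_err, dg_fid))
    w = 1.0 / dg_err**2
    a_d = np.sum(w * dg * dg_fid) / np.sum(w * dg_fid**2)
    return a_d, 1.0 / np.sqrt(np.sum(w * dg_fid**2))
```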
We use the MCMC method with a flat prior to fit the amplitude. We find $A_{D} = 1.02 \pm 0.14$, in excellent agreement with the fiducial value $A_{D} = 1$. Similar analyses using other galaxy samples, at lower [24,25] and higher [1,61] redshifts than those considered in this work, also indicate agreement with the fiducial cosmology established by Planck, although the DES Science Verification galaxies revealed a mild ∼ 1.7σ discrepancy away from the fiducial cosmology. In this sense, our analysis is complementary, as we consider another survey that covers a different region of the sky and therefore extends the probe to other possible systematic effects and redshift intervals.
Figure 6 (caption): The linear growth factor estimated from the $\hat{D}_{G}$ estimator for the six redshift bins. The solid black line represents the theoretical growth function for the Planck fiducial cosmology. The 2σ scatter for 3000 cosmologies randomly drawn from the Planck chains is shown as the gray shaded region.
Consistency test
In order to avoid contamination from nonlinearities, we limited our analyses to scales $\ell \leq 70$ in all redshift bins. However, it is necessary to investigate the impact of this choice, since the scales subtended by the modes entering the nonlinear regime, that is, $\Delta^{2}(k_{NL}) = k_{NL}^{3} P_{linear}(k_{NL})/(2\pi^{2}) \approx 1$, vary for each of the redshift bins considered. In this way, we explore the variation of the $\hat{D}_{G}$ value for different choices of $\ell_{max}$ in each redshift bin. Effectively, the question is whether extending the range of scales would lead to a significant change in the observed value of $\hat{D}_{G}$. Figure 7 shows $\hat{D}_{G}$ as a function of the maximum multipole $\ell_{max}$ considered, for each redshift bin. The gray shaded region represents the 1σ error bar, estimated through the 500 correlated realizations described in the previous section. The dotted horizontal red line indicates the value found when considering scales up to $\ell \leq 70$. For better visualization, the x-axis range changes for each redshift bin, according to the scales where the nonlinearities become relevant.
We observe that for the first three redshift bins and the last one there are no significant deviations of $\hat{D}_{G}$ with $\ell_{max}$, making it unlikely that the inferred linear growth factor is affected by an inadequate inclusion of non-linear scales. However, for 0.4 < z < 0.5 and 0.5 < z < 0.6, we see a slight increase in the $\hat{D}_{G}$ value with $\ell_{max}$, although this behavior begins at scales where the nonlinearities are not expected to be relevant. Therefore, a closer investigation of possible systematics and effects in these redshift ranges is needed in order to determine the cause of this behavior. For the purpose of this work, this aspect does not significantly impact the overall result, since the shift in the $\hat{D}_{G}$ value agrees within 2σ with the estimated value (represented by the dotted red line).
Conclusions
Recent reports witness the increasing importance of measuring the growth of cosmic structures using the large, deep survey catalogues and CMB lensing data now available [1,24,25,61]. In fact, the linear structure growth factor as a function of redshift, $D_{G}(z)$, has the potential to discriminate between alternative models of cosmic acceleration. In this work, we presented a tomographic estimate of the linear growth factor by combining the auto- and cross-correlation of the Planck CMB convergence map, κ, and a galaxy density fluctuation map, $\delta_{g}$, where the $\delta_{g}$ map was constructed from the photometric catalogue based on multi-band data from SCUSS, SDSS, and WISE [35]. We performed detailed analyses in six redshift bins of width $\Delta z = 0.1$ in the redshift interval 0.1 < z < 0.7.
We studied the evolution of the linear galaxy bias, b, and of the amplitude of the cross-correlation, A, using the auto- and cross-angular power spectra, respectively. We found a significant detection of the best-fit parameters, although the galaxy clustering auto-correlation is fit with a higher S/N than the galaxy-CMB lensing correlation; the results are summarized in Table 2. We found that b and A are consistent with each other in all redshift bins. However, when using the full galaxy sample at 0.1 < z < 0.7, we do find A < b, with a tension of ∼ 3σ (including only statistical errors). This result may indicate effects such as stochasticity or an evolution of the bias in this redshift range [62].
In addition, we performed null tests to check whether our measured signal is contaminated by artifacts from the survey's systematics or other undesirable effects. To this end, we cross-correlated the real κ ($\delta_{g}$) map with a set of $\delta_{g}$ (κ) simulations that include noise; no significant signal is detected in the cases analyzed, which indicates that the cross-correlation is unlikely to be affected by such effects. The results are displayed in Figure 5.
By combining the auto- and the cross-correlation estimates, we measured the linear growth factor at different epochs of the Universe using the bias-independent estimator $\hat{D}_{G}$ introduced by [1]. Our main result, displayed in Figure 6, shows the measured linear structure growth factor in comparison with the expectation in the fiducial ΛCDM scenario. Our result is consistent with the fiducial model, with an amplitude of the linear growth function of $A_{D} = 1.02 \pm 0.14$, where $A_{D} = 1$ is the fiducial value (see equation 5.2). Moreover, we tested the stability of the $\hat{D}_{G}$ value against different choices of the range of angular scales used in the analysis. We found no significant shift in the $\hat{D}_{G}$ value with the different scale cuts, except in the bins 0.4 < z < 0.5 and 0.5 < z < 0.6, although this does not significantly affect our overall result. The details of these analyses are displayed in Figure 7.
The CMB lensing tomography is an efficient method to test the linear growth of cosmic structures and, by extension, to test dark energy scenarios and/or alternative gravity models. In the near future, CMB and galaxy surveys such as the Simons Observatory, CMB-S4, LSST, and WFIRST will produce comprehensive data and will enable a deep mapping of the galaxies and a high sensitivity in reconstructing the CMB lensing potential. Thus, we may expect that CMB lensing tomography, through analyses such as the one used here, will be fundamental for placing tighter bounds on the scenario that best explains the history of cosmic structure growth. | 2019-08-13T20:50:56.000Z | 2019-08-13T00:00:00.000 | {
"year": 2019,
"sha1": "47f68a7d9d46711561bafb6d69ae11481e0253e5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1908.04854",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "47f68a7d9d46711561bafb6d69ae11481e0253e5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236364672 | pes2o/s2orc | v3-fos-license | Documenting the conservation value of easements
Placing conservation easements on private lands could contribute greatly to biodiversity protection in the United States. However, a paucity of data prevents us from knowing to what extent this potential is met. We discuss best practices for baseline documentation reports and biodiversity surveys of properties that could help mitigate this data shortage and contribute to a national database on private land biodiversity. We then examine 49 private properties totaling 3,048 ha in Alabama and tally high priority (i.e., at‐risk) species that are recorded within this portfolio of land parcels protected by conservation easements. The number is 116 species in total, or 38 high‐priority species per 1,000 ha. Not only is the number of these documented at‐risk species per unit area high compared to the number documented from nearby Conecuh National Forest (~38 vs. ~5 per 1,000 ha), 92 of the species recorded from the private lands have not been recorded from the much larger Conecuh National Forest (33,993 ha). This emphasizes the opportunity for well‐positioned easements to complement and be a valuable addition to large networks of public lands.
| INTRODUCTION
Although the United States has a well-developed system of national parks and public conservation lands, the location of these conservation areas is mismatched to the location of unique biodiversity (Jenkins, van Houtan, Pimm, & Sexton, 2015). Because of this mismatch, the amount of habitat protected in the United States must be dramatically increased if species extinctions are to be averted (Fishburn, Kareiva, Gaston, & Armsworth, 2009;Wilcove, Rothstein, Dubow, Phillips, & Losos, 1998). The enhancement of habitat for at-risk species can be achieved by either expanding the amount of public land set aside for conservation or by establishing conservation practices on private lands. In this article, we focus on the opportunity associated with private lands-an opportunity that has been highlighted by several researchers who have noted large numbers of threatened and endangered species that are found only on private lands (Aycrigg et al., 2016;Groves et al., 2000;Scott et al., 2001).
Private lands can be purchased outright for the purpose of conservation, or they can be placed under an easement. Conservation easements are permanent restrictions on land use associated with a property deed. Easements may prohibit subdivision and development; they may limit the extent of logging or ranching; they may prohibit mining, or they may limit the total building footprint on a property. The idea is that these restrictions preserve critical habitat, while allowing private landowners to own, use, sell, and bequeath the land subject to easement restrictions. Thus, easements allow land to remain private, yet also be protected in perpetuity for biodiversity (Draper, 2004). As a supplement to governmentdesignated lands, conservation easements have become a prominent tool for protecting biodiversity in the United States (Armsworth & Sanchirico, 2008).
The potential conservation benefits of a well-designed portfolio of easements are enormous, since private lands provide habitat for 95% of the federally listed species in the United States (Hilty & Merenlender, 2003). Wellplaced easements could be geographically positioned to redress the mismatch in the location of public lands and biodiversity. For example, in 80% of the U.S.'s ecoregions, public conservation lands do no better than a random placement of the same amount of land in terms of protecting endangered species (Clancy et al., 2020). Ideally, easement placement would not be random, and hence should be able to substantially improve upon existing public conservation lands.
The predominant holders of conservation easements are nonprofit land trusts. Land trusts assess the conservation value of land before accepting it, and then, if they do end up holding the easement, they have a responsibility to monitor the easement to make sure the restrictions are not violated. There are now over 1,300 land trusts operating in the United States (Keiter, 2018). The most recent land trust census report (Land Trust Alliance, 2015) indicates that these land trusts currently conserve 6.8 million hectares with easements. Tax incentives have played a major role in facilitating the use of easements as a conservation tool. In particular, the Tax Reform Act of 1976 created federal tax incentives for granting conservation easements on privately owned property and donating that easement to a land trust or government agency to manage (Parker, 2002).
The need, the opportunity and the potential for private land conservation in the United States is firmly established. What is missing are empirical studies that document the occurrence of high priority conservation species on existing easement lands (Hilty & Merenlender, 2003). Simply noting that private lands fall within the range of endangered species is not adequate, because one might be skeptical of the quality of habitat on lands that have been ranched or partially logged. This is not to say easements have been neglected by researchers. However, the bulk of easement research has examined their effectiveness as measured by compliance with restrictions, the type of habitat they protect (not species presence), their strategic placement in a landscape sense (corridors or stepping stones) and the extent to which they target lands at high risk of conversion (Capano, Toivonen, Soutullo, & Di Minin, 2019;Copeland et al., 2013;Newburn, Reed, Berck, & Merenlender, 2005;Shumba et al., 2020). The few studies that actually include data on species presence (e.g., Pocewicz et al., 2011) focus on a small number of pre-selected species-not an inventory of all high-priority species. Conspicuously absent are published reports from on-the-ground biological surveys of easements.
Here we describe best practices for surveys of conservation easements, and then use these best practices to document the conservation value of 49 easements in Alabama. The question of what species are found on easements is both biologically important and important from the perspective of public policy (Farmer, Knapp, Meretsky, Chancellor, & Fischer, 2011). Tax deductions are given on the assumption of public good-in this case delivering conservation value. Every conservation easement grantor attempting to get a federal tax deduction requires a "baseline documentation report" that describes what of conservation value is found on the property. These reports and their associated surveys are conducted by professional biologists. One of the goals of this article is to highlight the opportunity for advancing conservation practice and science by making full use of the data in baseline documentation reports. If data from the surveys associated with these baseline reports were curated, they could contribute to a national assessment of biodiversity on private lands. However, to maximize the scientific value of such survey data, more attention will need to be given to descriptions of the field methods used, and to a clearer presentation of data than is usually demanded of baseline documentation reports. We show in this article how field surveys of conservation easement properties might adopt some best practices along with flexible standardization of methods to yield unprecedented data on private-land occurrences of rare or at-risk species.
| SELECTING ALABAMA FOR A CASE STUDY OF THE CONSERVATION VALUE OF EASEMENTS
We synthesized data and best-practice examples of data reporting from a portfolio of 49 conserved properties in Alabama totaling 3,048 ha of undeveloped land. As can be seen in the map presented as Figure S1, these properties tended to be clustered together, and sometimes were adjoining. The placement of these easements was based, in part, on their anticipated conservation value, as assessed by conservation nonprofits operating in Alabama (Atlantic Coast Conservancy and National Wild Turkey Federation). These organizations use a wide variety of sources to identify areas of potential conservation value (such as State Wildlife Action Plans, professional field experience, and the published literature).
We focused on Alabama because it is an iconic state in terms of having high biodiversity, a large number of federally endangered and threatened species, and an extraordinarily small percentage of public land. In other words, Alabama is a state that must rely on private land conservation and easements if it is to have any success at conserving its precious biodiversity. Moreover, Alabama harbors more federally listed species than any state in the lower 48, with the exception of California. If we list the top five states (excluding Hawaii) in terms of number of federally listed species, their ranking is: California at #1, Alabama at # 2, Florida at #3, Tennessee at #4, and Texas at #5 (see Figure 1).
One striking feature of Alabama is how startlingly little of its land has any form of conservation protection, public or private, given its high biodiversity and large number of threatened or endangered species (Figure 2). For contrast, California with 283 federally listed species has over half of its land managed for biodiversity protection, whereas Alabama with 143 listed species has less than one-twentieth of its land managed for conservation. One reason for the discrepancy is that 52.5% of California's land is public, whereas only 4.9% of Alabama's land is public land (https://headwaterseconomics.org/public-lands/ protected-lands/public-land-ownership-in-the-us/). The second reason is that only a very small percentage, 0.4%, of the private land in Alabama is under conservation easements. This is the second lowest percentage of lands under easement protection in the lower 48 states-only Mississippi has a lower rate of easement protection (0.36%).
| IDENTIFYING BEST PRACTICES FOR BASELINE REPORTS AND EASEMENT SURVEYS
Best practices for sampling biodiversity are well known and can be found in numerous articles and books on ecological methods (e.g., Hill, Fasham, Tucker, Shewry, & Shaw, 2005). Our goal here is to apply these best practices in the context of standard easement surveying, which is typically not viewed as a "sampling" exercise, but is instead simply an attempt to provide baseline documentation of the conservation value of land. In this sense, easement surveys are akin to rapid biodiversity assessments used around the world to establish priorities (Mittermeier, Myers, Thomsen, da Fonseca, & Olivieri, 1998). Our intent is to show how with only minor modifications, routine baseline surveys could provide scientifically valuable information on biodiversity in a form that would help the United States more effectively protect its biodiversity. Properties under consideration for conservation easements vary enormously in size, remoteness, access, habitat character and quality, and the species present. It would be foolish to dictate a rigid protocol to be followed identically on different properties. However, a menu of best practices could make data from easement surveys a valuable conservation resource. We propose the following best-practice guidelines: • Identify the spatial boundaries of the easement and estimate the percent of the area in different major habitat types within those boundaries. • For any taxonomic group being surveyed, describe the survey method and, most importantly, quantify the sampling effort (time spent observing, length of plant transects walked and sampled, number of stream seine hauls, etc.). • Examine (and ideally plot) the cumulative number of species observed as the sampling effort increases.
• To the extent possible, sample in multiple seasons in order to detect species that vary in their seasonal activities. • Employ targeted sampling for species of high conservation value that might reasonably be expected on the parcel. • Summarize for each easement property high priority conservation assets defined as all federally listed species, and all species with conservation status S1, S2, or S3 according to NatureServe.
| Spatial delineation and habitat summary
No easement survey has scientific value unless the spatial coordinates of the parcel are provided along with its total area. In addition, there should be a general description of the parcel's habitat. Ideally, the approximate percentage of the easement in each of its component major habitat types can be estimated using measuring tools such as those available on Google Earth Pro, or even visual inspection of aerial photographs. Examples of aquatic habitat and terrestrial habitat assessments are in Table 1.
F I G U R E 2 (caption): Percent of state land area in GAP 1, 2, or 3 status. GAP is the acronym for the "U.S. Geological Survey's Gap Analysis Project," where the word gap refers to gaps in protection of biodiversity. GAP status indicates the degree of protection given to the land. GAP 1 is permanent protection from conversion with management designed to maintain a natural state. GAP 2 is permanent protection from conversion with management designed to maintain a primarily natural state (which means some activities such as fire suppression may be allowed). GAP 3 provides permanent protection from conversion for the majority of the area, but with allowances for some low intensity uses such as selective logging, some vehicle traffic, and so forth. Because of their permanence, GAP 1, 2, and 3 represent significant conservation, with GAP 1 and 2 being the "gold standard" for conservation because of the absence of all commercial or exploitive land use. These data are from the Protected Areas Database of the United States (PAD-US) 1.4, which was released May 2016 by the U.S. Geological Survey (USGS) (Zhu et al., 2021).
Coarse land-cover products such as the National Land Cover Database (NLCD) do not pick up unique aquatic habitats or caves that contribute greatly to the biodiversity value of these lands. Moreover, many baseline surveys contain aerial photos of sufficient resolution that these photos plus "boots on the ground" promise much more accurate data than is available from NLCD.
| The importance of noting sampling effort
In addition to reporting the major habitat types, the sampling or survey methods should be described, along with the sampling effort devoted to each method. Sampling effort can be recorded as person hours spent searching for and listening for birds, meters of line transect walked, number of stream and river seine hauls, and so on. Without any quantification of sampling effort, it is impossible to interpret data on number of species or individuals observed. Examples of how to report sampling effort are given in Table 2. Note that the dates of sampling should be specified as well as the amount of sampling. Currently, baseline reports tend not to document sampling effort even though the effort is known. By recording sampling effort, it is possible to have a better sense of the possibility that sampling effort underlies differences among easements in reported biodiversity elements.
| Cumulative species number as a function of sampling effort
Species lists comprise the raw data of any survey. Often the construction of these lists will require the collaboration of experts at species identification. Whereas for some purposes simply tallying up "operational taxonomic units" is acceptable, for easements, it is key to establish species identities. This is especially the case because records of species may reveal extensions of known ranges that are important to document. Species counts are especially valuable if they are presented as a cumulative species curve as shown in Figure 3. When curves of cumulative number of species saturate or level off, as is the case for plants in Figure 3, it is an indication that the surveying has been relatively thorough and has likely captured most of the plant biodiversity on the parcel for that particular season. The accepted convention for these curves is that they are fit to one of two equations, both of which pass through the origin (meaning no survey, no species recorded): S_E = S_max E/(B + E) or S_E = S_max (1 - e^(-B E)), where S_E is the number of different species noted after E units of sampling (where E could be days, hours, meters walked, traps set, etc.), and B and S_max are constants to be fit to the data. S_max is the number of species expected if the sampling effort went to infinity. "Effort" (E in the above equations) need not be something as rigid as hours walking a line transect looking only for plants, or looking only for lizards. Effort can refer to hours multitasking and looking for birds, plants, and anything of conservation interest. As long as effort is aptly described, a species accumulation curve has scientific value. By plotting these curves, one can compare properties, seasons, survey methods, and different years and gain a sense of how rapidly and easily one accumulates species. They represent a rapid biodiversity assessment tool. In the absence of such a curve, at a minimum one should find some way of documenting thoroughness of sampling. For example, one might note, "we conducted five seine hauls for fish and stopped when four hauls in a row failed to capture any species we had not sampled in previous hauls."
T A B L E 1 (caption): Habitat Summaries from baseline easement surveys. (a) Aquatic habitats in an easement located in Escambia County, Alabama. (b) Terrestrial habitats in an easement located in Elmore County, Alabama.
Table 1(a), fragment: Older-appearing ponds with vegetative shallow areas contained more potential habitat for aquatic species than ponds with limited vegetation that appeared to be more recently constructed borrow pits.
Table 1(b): Habitat type, Percent area (%): Early successional forest and pine plantation, 88; Mature hardwood floodplain forest, 5; Seasonally inundated swamp, 7.
Table 2, fragments: In the Alabama River, two biologists in SCUBA searched for mussels. One biologist tended divers. Two biologists in mask and snorkel searched shallow areas. November 21, 2019 (5, 2, 10): In the Alabama River, two biologists in SCUBA searched for mussels. One biologist tended divers. Two biologists in mask and snorkel searched shallow areas. November 21.
F I G U R E 3 (caption, fragment): For this specific case, E is in "hours" of surveying, S_max is the maximum possible number of plant species, and PLANTS represents the number of different plant species detected as a function of hours spent surveying.
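As an illustration (not part of the original study), the two asymptotic accumulation-curve forms above can be fit to effort and species-count data with standard nonlinear least squares; the effort and species values used below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(effort, s_max, b):
    """S_E = S_max * E / (B + E); passes through the origin."""
    return s_max * effort / (b + effort)

def neg_exponential(effort, s_max, b):
    """S_E = S_max * (1 - exp(-B * E)); passes through the origin."""
    return s_max * (1.0 - np.exp(-b * effort))

# Hypothetical survey data: cumulative species count after E hours of effort.
effort = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)
species = np.array([12, 21, 33, 40, 44, 47, 48], dtype=float)

starting_guesses = {michaelis_menten: (60.0, 3.0), neg_exponential: (60.0, 0.2)}
for model, p0 in starting_guesses.items():
    params, _ = curve_fit(model, effort, species, p0=p0)
    print(f"{model.__name__}: S_max = {params[0]:.1f}, B = {params[1]:.2f}")
```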
| Sampling to account for seasonality
Many species vary seasonally in terms of their activity or visibility (Dybala, Truan, & Engilis Jr, 2015). Some species may be associated with the spring, others with the summer or fall, and perhaps some in the winter. As a result of seasonality, an easement property should be visited during multiple seasons. The merits of this are evident in Figure 4, which reports the accumulation of species from surveys of birds in a 35.68-ha easement in Taylor County, FL (30.149768, -83.929174). In this case, bird surveys were conducted on January 26, 2020, on April 30 and May 1, 2020, and on October 8, 2020. The winter survey was completed in about 5 hours by two professionals; the late spring/early summer survey was completed in 10.5 hr by two professionals, and the fall survey was completed in 4.5 hours by two professionals.
3.5 | Targeted sampling methods for unique species of special conservation value
While birds, plants, and fish in streams can be straightforward to sample, other species of conservation value may require highly specialized sampling methods. A good example of this is bats. Bats are among the most imperiled terrestrial vertebrates in North America due to a combination of disease (white-nose syndrome), habitat loss, and the increasing presence of wind turbines (Hammerson, Kling, Harkness, Ormes, & Young, 2017). As a result, documenting their occurrence on land protected by easements is a priority. Surveys for bats are challenging, because they are fast fliers, active at night, and unless captured by a trained expert, seldom can be visually identified to species. They will forage and migrate great distances and are constantly changing areas and habitat types for ecological and seasonal needs. Habitat requirements will also differ between sex and reproductive status. Although typical survey methods include mist netting, acoustics, habitat assessments, and hibernacula surveys, these methods may need to be implemented in a variety of ways, habitat types, height and orientation, and times of the year to successfully document certain species. All of these require special permits and certifications, and a level of expertise that involves years of training and experience. Only with such highly specialized and technically sophisticated sampling can bat presence or specific species on a site be reliably detected.
| Summarize what is of high conservation value on the land
Species lists and total number of species are useful indicators, but because of the limited and varied sampling efforts, these lists are likely to always be incomplete. However-if a species is documented on the land-and if that species is designated to be at risk, then that occurrence is significant. In other words, the best indicator of a property's conservation value is the presence of species that have been officially designated as conservation priorities by state and federal agencies, and by internationally accepted indices of vulnerability such as NatureServe designations (https://www.natureserve.org/). For this reason, baseline reports, and all reports of the species found on private lands, should highlight the presence of any species that are threatened, endangered, or a candidate for listing by the U.S. Fish and Wildlife Service (USFWS). These are species that are globally imperiled, and species for which any occurrence is of global importance. Also to be highlighted is the occurrence of any species assigned with the NatureServe at-risk categories of S1, S2, or S3 in the state of concern. These rankings indicate that in that state the species have been assessed as being critically imperiled (S1), imperiled (S2), or vulnerable (S3). Consistently, species receiving these ranks in a state are prioritized in state conservation plans. For each property that is surveyed, a best practice would be to summarize the conservation assets via a table such as Table 3 for an easement in Escambia County, Alabama. The value of best practices is that they allow anyone consulting baseline surveys to better interpret the absences of organisms of high conservation value. Absences could occur because sampling effort is insufficient, or because sampling occurs during the wrong seasons, or because sampling entailed methods that are unlikely to detect special species such as bats. When baseline reports follow the above best practices and make clear sampling effort in terms of quantity, timing, and method, the absence of high priority species can be better assessed. In addition, embracing best practices would allow one to return to the same site and resurvey, and better interpret any changes that are noted. The best practices we have outlined are a modest step toward standardization. They ensure essential background information for interpreting the data that are reported for private lands; they provide an inventory of species; and they may allow documentation of range shifts. However, because the methods and effort will vary from property to property, they are not sufficiently standardized to track population trends in the way breeding bird surveys or other highly regimented large-scale censuses allow one to estimate population trends (Hudson et al., 2017).
If one were to design an ideal national sampling protocol for temporal trends in the abundance of high priority species on private lands, it would demand far more standardization and uniformity than we have outlined above. Such standardization is not feasible for the enormous variety of land trusts engaged in conservation, with their varied staffs and resources. The one federal effort at a National Biological Survey (NBS) in 1993 was ill-fated for numerous reasons, one of which was the resistance to top-down imposition of federal scientists and standards (Krahe, 2012). Since the NBS effort was abandoned in 1996, there has been no renewed effort at developing a biological inventory for species that covered all lands (Krahe, 2012). Aggregating species records obtained from baseline surveys is not a substitute for the NBS envisioned in 1993, but it may be the only inventory of high priority species that is feasible in the near future. Even if one cannot make cross-easement comparisons or identify temporal trends with such an ad hoc collection of data, simply knowing where species of high conservation value have been documented is invaluable. This is especially the case in light of climate change, which is creating a demand for studies of the possibility of range shifts or range contractions for at-risk species. Using the database we envision, one could revisit sites where occurrences have been previously documented and see if the species of interest were still there.
TABLE 3 High value conservation assets in a 155-hectare property under a conservation easement in Escambia County, Alabama. S1, critically imperiled in Alabama; S2, imperiled in Alabama; S3, vulnerable; UR, under federal review in the candidate or petition process.
| HOW EFFECTIVE ARE 49 EASEMENTS IN ALABAMA AT PROTECTING HIGH PRIORITY SPECIES?
A useful way to visualize the value obtained from the conservation of additional properties is to graph the cumulative number of high priority species protected by the conservation easements as one goes from one easement, to two easements, to three easements, and so forth. This type of curve is analogous to the more standard species-area curve such as that depicted in Figure 3, only in this case it is not all species but rather just species designated as high conservation priority. Figure 5 shows these curves for the portfolio of 49 Alabama easements drawn in two different ways: by number of easements, or by cumulative area of easements. Both graphs show a staircase increase to 116 species, and it is likely that adding even more easement protection would drive the curves higher, especially if those easements were placed in different areas of the state or in different habitat types. Figure S1 in Supporting Information shows the locations of these 49 easements.
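As a concrete illustration of how such an accumulation curve is built, the following minimal Python sketch computes the cumulative number of unique high priority species as easements are added in rank order of area; the easement identifiers, areas, and species sets are invented placeholders, not the study data.

```python
# Minimal sketch of a high-priority-species accumulation curve (cf. Figure 5).
# The easement records below are hypothetical placeholders, not the study data.

easements = [
    # (easement_id, area_ha, high priority species documented on that easement)
    ("E01", 12.0, {"gopher tortoise", "wood stork"}),
    ("E02", 55.0, {"wood stork", "Bachman's sparrow"}),
    ("E03", 150.0, {"gopher tortoise", "panhandle lily"}),
]

# Rank-order easements from smallest to largest area, as in Figure 5a.
easements.sort(key=lambda record: record[1])

seen = set()
cumulative_area_ha = 0.0
for easement_id, area_ha, species in easements:
    cumulative_area_ha += area_ha
    newly_recorded = species - seen   # species not detected on any previous easement
    seen |= species
    print(f"{easement_id}: +{len(newly_recorded)} new species, "
          f"{len(seen)} cumulative species over {cumulative_area_ha:.0f} ha")
```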
The most striking result is that 116 high priority species are documented within only 3,048 ha in total; that is an astonishing 38 high priority species per 1,000 ha protected by easements.
FIGURE 5 Accumulation curves for high priority species in 49 Alabama easements. (a) The cumulative number of unique high priority species (Federally listed, under Federal review for listing, or S1, S2, and S3 priority status according to NatureServe) protected as easements are established. The horizontal axes represent cumulative area protected by this collection of easements. The easements are rank-ordered from smallest to largest. The cumulative number increases if, as an easement is added, a species is recorded that has not been detected on any of the previous easements. These data are from 15 easements established in 2014, 8 easements established in 2015, and 26 easements established in 2016. The raw data are in Table S1 in the Supporting Information. (b) The same cumulative accumulation of high priority species, only now no attention is paid to area, but rather just the accumulation of different easements, from the first easement to the last (the 49th).
To put this in context, it is useful to compare these 49 easements to federal and state conservation lands in Alabama, and to similarly focus on high priority species (S1, S2, S3 state rankings or threatened and endangered federally). For comparison's sake, we sought a large public conservation area, for which there is a history of biological surveys with catalogued data, that was closest in habitat attributes and geography to the 49 properties in our portfolio of parcels protected by easements. The Conecuh National Forest (hereafter CNF), which is 33,993 ha of forested habitat, shares many of the same soil types and habitats. Specifically, both CNF and the portfolio of easements include riparian forest, isolated wetlands, seepage bogs, sandhills, and upland pine forest (Graham et al., 2015). There are 159 high priority species documented for CNF (Alabama Department of Conservation and Natural Resources, State Lands Division, Natural Heritage Section: data request fulfilled October 29, 2020, and Alabama Natural Heritage Program, Auburn University, Alabama: data request fulfilled October 29, 2020). The list of these species is given in Table S2. The comparison of CNF species records to easement data is complicated because the sampling methods are not the same. The species list for CNF was obtained by uniting two data sets: Alabama Department of Conservation and Natural Resources, State Lands Division and the Alabama Natural Heritage Program, Auburn University. The observations that went into creating these data sets come from a variety of sources. One source is the concerted effort of biologists to report their findings voluntarily, and program staff scouring existing databases of university museums and herbaria. A second source is an effort by employees of the Natural Heritage Section to review research papers and museum specimens to contribute to the database. Finally, much of the data is acquired from annual reports filed by researchers who requested Scientific Collection Permits issued by the Alabama Wildlife and Freshwater Fisheries Division. This mix of methods does not lend itself to the easily interpreted cumulative species curves associated with adding easements to a portfolio of private land protection. Moreover, the State Lands Division database cannot be conveniently searched for date of first occurrence. The Heritage Program data does have a record of first occurrences that is readily obtained.
To gain some sense of the prospect of finding more species with more sampling, we used the Heritage Program records of first occurrences, and generated a cumulative species curve for CNF (Figure 6).
What is evident from Figures 5 and 6 is that the curves are still increasing. This means more sampling will likely uncover more species records for CNF, and adding more easements is likely to increase the number of high priority species on private land.
In spite of sampling ambiguities, it is still informative to compare the data from the 49 private properties with the data from CNF. The 49 private land properties averaged a significantly larger number of priority species per unit area than did the large national forest: 38 high priority species per thousand hectares for the private lands versus 5 high priority species per thousand hectares in the national forest. As a result, with only 9% of the area of the CNF, the 49 private properties reported over 70% of the number of high priority species documented from the much larger CNF. Moreover, these small private properties reported 92 high priority species not documented in the larger national forest (see Table S3). This observation makes clear the complementarity of public land conservation and private land conservation-private lands are harboring species not found on public lands, and vice versa.
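For clarity, the per-area comparison quoted above can be recomputed directly from the totals given in the text (116 species over 3,048 ha of easements; 159 species over 33,993 ha in the CNF), as in this short check:

```python
# Recomputing the densities and proportions quoted in the text.
easement_species, easement_area_ha = 116, 3_048
cnf_species, cnf_area_ha = 159, 33_993

print(easement_species / easement_area_ha * 1_000)  # ~38 high priority species per 1,000 ha
print(cnf_species / cnf_area_ha * 1_000)             # ~4.7 (roughly 5) per 1,000 ha
print(easement_area_ha / cnf_area_ha * 100)          # easements cover ~9% of the CNF area
print(easement_species / cnf_species * 100)          # ~73% of the CNF species count
```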
At first glance, it might seem surprising that small parcels of private land so dramatically outperform large public conservation areas. However, the intensity of sampling devoted to the 49 private properties likely greatly exceeds the intensity of sampling on public lands. Given more time for sampling, we might expect the contrast in species per thousand hectares between the CNF and the easement portfolio to diminish. Second, numerous studies have indicated that public lands often correspond to areas of low productivity and are sometimes sited opportunistically rather than for particular at-risk species (Scott et al., 2001). Public lands serve multiple functions such as recreation and watershed protection, whereas the easements in this study have been selected primarily for species targets.
The large number of high priority species might at first seem surprising. However, these parcels of land are a highly nonrandom sample. First, Alabama is a hotspot for threatened, and hence high priority, species. Second, these properties tended to include caves or habitats attractive to bats, as well as an abundance of aquatic habitats. Finally, professional biologists working for land trusts have conducted baseline surveys to document conservation value for all 49 of the easements in this study. We doubt that randomly selected private lands would yield so many high priority species. It is also worth noting that many, if not most, of these easements were at risk of habitat conversion, either because they overlie granite and rock deposits that are vulnerable to mining for construction aggregate, or because they are in areas expected to see development from nearby cities. The key result that warrants highlighting is this: the protection of private lands can secure habitats in which numerous high priority species are documented. Since Alabama, compared to other states, is deficient in both private land conservation and public land conservation (Figure 2), these data confirm the value of increasing the amount of land devoted to conservation in Alabama, especially given the species accumulation curves shown in Figure 5. Finally, the curves depicted in Figure 5 represent only the accumulation of at least one record for a species. In fact, as one adds land parcels under easement protection, one adds multiple occurrences of species. We know from basic population biology that multiple occurrences of threatened and endangered species can enhance their chances of survival (Mace et al., 2008). This additional benefit is being realized in the 49 properties that comprise this analysis. Specifically, the federally threatened gopher tortoise is found on two different parcels of land and the federally threatened wood stork is found on five different parcels of land.
| DISCUSSION
Conservation biologists have long emphasized the importance of private land for the protection of biodiversity in the United States (Rissman et al., 2007). There are four strategies available for protecting species-at-risk on private land: (a) acquisition of the land by government, (b) acquisition of land by nonprofit land trusts with conservation missions, (c) involuntary government regulations, and (d) voluntary easement agreements that protect the lands in perpetuity. Easements offer the advantage of cost-effectiveness and maintaining the lands in the hands of private citizens who can sell the land or leave the land to their children, and can conduct certain activities in accord with the easement terms (Korngold, 2010;Parker, 2002;Parker, 2004). For that reason, conservation land trusts have increasingly turned to easements as the primary tool for private land conservation.
The open question is how effective easements are at protecting parcels of land for which there are documented occurrences of high priority species (Merenlender, Huntsinger, Guthey, & Fairfax, 2004). Effectiveness entails four key facets: (a) is habitat degradation or conversion halted? (b) from a landscape ecology perspective, is the easement well situated? (c) is the land at risk of development or conversion? and (d) does an inventory of species document the presence of at-risk or high conservation value species? This article and the data we present address only the last dimension of effectiveness: are high priority species present? In some sense, this is the most fundamental question, since an answer of "No" would suggest the other facets of effectiveness might not even be worth examining. In addition, as mentioned in the introduction, inventories of directly observed high priority species on easement lands are lacking in the peer-reviewed literature.
Our results make clear that the easements surveyed in this study are highly effective when measured by the number of high priority species with documented occurrences. It is essential to extend this research to other states and habitat types so that we can determine the generality of our findings. There is certainly a possibility of conducting similar assessments across the nation, since professional biologists routinely visit easements in every state to provide baseline reports for land trusts. Something as simple as a PDF library of these reports could be searched for species occurrences. Even more useful would be an effort to moderately standardize the format of baseline survey reports. The Land Trust Alliance (LTA) describes itself "as the national leader in policy, standards, education and training" (https://www.landtrustalliance.org/what-we-do). The LTA formally accredits land trusts, holds annual meetings, and is the only national organization that reviews and establishes easement best practices. It was established in 1982 and has since grown to be a strong organization with a large membership representing all sizeable land trusts, and many of the professional biologists who conduct baseline surveys and easement monitoring. Consequently, it would make sense for the LTA, in collaboration with local universities and the Natural Heritage program, to promote this standardization. The data themselves could be housed in the National Conservation Easement Database (NCED) (see https://www.conservationeasement.us/about/). NCED is a national database of conservation easement information, compiling records from land trusts and public agencies. Spatial shape files are not always available, but the hectares protected and general location of most of the nation's easements can be found in NCED. It would not be much work to create an option of associating baseline survey data with each easement that is entered into the NCED. Another organization that could be helpful in initiating an easement data system is the National Center for Ecological Analysis and Synthesis (NCEAS). NCEAS hosted landmark studies on federal recovery plans and on habitat conservation plans, and in the same vein could initiate a synthesis of easement data that would get this effort underway (Clark, Hoekstra, Boersma, & Kareiva, 2002; James, 1999).
We hope with this article to prompt a conversation about the feasibility, possible pathways, and funding opportunities for curating data from baseline surveys of easements into a national inventory of biodiversity on private conservation lands. All of the best practices outlined in this article will enhance the quality of data associated with baseline surveys. However, two best practices are absolutely essential if the scientific value of baseline surveys is to be realized: (a) quantifying the amount of sampling effort that was involved in any survey, and (b) providing a clear description of the methods of surveying. In a time of rapid climate change and a nation with so much biodiversity on private land, the absence of these data means we are "flying blind" in terms of protecting our nation's priceless biodiversity. If we combined the threat of land conversion with these species surveys, we could evaluate the extent to which the placement of easements was optimal in terms of reducing extinction per dollar of cost (Newburn et al., 2005). A national database would also benefit conservation practice by potentially detecting range shifts, and by highlighting species that do not show up on any public or private conservation lands and that hence need special attention.
ACKNOWLEDGMENT Claire Hirashiki assisted with compiling data on state-bystate protected areas and federally listed species.
CONFLICT OF INTERESTS
The authors declare no financial or other conflicts of interest.
AUTHOR CONTRIBUTIONS Peter Kareiva: Helped design the standardized best practices and took the lead on writing the manuscript and analyzing the data. Mark Bailey: Contributed to the study design and writing, and surveyed easement parcels for species. Dottie Brown: Contributed to the study design and writing, and surveyed easement parcels for species, and had primary responsibility for bat surveys. Barbara Dinkins: Contributed to the study design and writing, and surveyed easement parcels for species. Lane Sauls: Contributed to the study design and writing, and surveyed easement parcels for species. Gena Todia: Contributed to the study design and writing, and surveyed easement parcels for species.
DATA AVAILABILITY STATEMENT
The Supporting Information lists the high priority conservation species reported in each easement, along with the location and area of the easement. It also includes the list of high priority species in the Conecuh National Forest.
These represent all of the original primary data in the paper, from which the major figures and conclusions are drawn. Any other data are from publicly available websites.
Unmanned aerial systems (UAS) operators' accuracy and confidence of decisions: Professional pilots or video game players?
Abstract Unmanned Aerial Systems (UAS) operations have outpaced current training regimes, resulting in a shortage of qualified UAS pilots. Three potential UAS operator groups were explored for suitability (i.e. video game players [VGP]; private pilots; professional pilots) and examined to assess levels of accuracy, confidence and confidence-accuracy judgements (W-S C-A) during a simulated civilian cargo flight. Sixty participants completed 21 decision tasks, which varied across three levels of danger/risk. Scales of Tolerance of Ambiguity, Decision Style and the NEO-PIR were also completed. Professional pilots and VGPs exhibited the highest level of decision confidence, with VGPs maintaining a constant and positive W-S C-A relationship across decision danger/risk. As decision danger/risk increased, confidence, accuracy and W-S C-A decreased. Decision danger also had a role to play in the confidence expressed when choosing to intervene or rely on automation. Neuroticism was negatively related, and conscientiousness positively related, to confidence. Intolerance of ambiguity was negatively related to W-S C-A. All groups showed higher levels of decision confidence in decisions controlled by the UAS in comparison to decisions where the operator manually intervened. VGPs displayed less overconfidence in decision judgements. Findings support the idea that VGPs could be considered a resource in UAS operation.
ABOUT THE AUTHOR Jacqueline M. Wheatcroft is a Chartered and Forensic Psychologist in the Institute for Psychology, Health & Society at the University of Liverpool, UK and is Chair of the British Psychological Society Division of Forensic Psychology Training Committee. Her research interests are in the enhancement of information, intelligence and evidence with a focus on process and procedural techniques to increase accuracy and appropriate confidence in information forms. She upholds interdisciplinary research and has published widely in areas that relate to security, law enforcement and legal process. Her work has contributed to governments, police, professional investigators and the courts.
PUBLIC INTEREST STATEMENT
The move to significant automation has been a feature of aviation over the last 40 years. This paper describes three potential unmanned aerial system (UAS) supervisor groups: video game players, private pilots and professional pilots, who made 21 decisions across three levels of dangerousness. As danger increased, levels of confidence, accuracy, and the relationship between decision accuracy and the confidence applied to those decisions all decreased. The dangerousness of the decision also affected how confident participants were when choosing to intervene or rely on the automation; confidence was lower when the operator chose to intervene. Understanding which potential supervisory group has the best skills to make the best decisions can help to improve UAS supervision. Overall, video game players were less overconfident in their decision judgements. The outcome supports the idea that this group could be a useful resource in UAS operation.
Introduction
Automation is the allocation of functions to machines that have in the past been executed by humans; the term is also used to refer to machines that perform, partially or fully, those functions (Funk et al., 1999). The move to significant automation has been a feature of aviation over the last 40 years. As such, the term automation captures a complex blend of technology interacting with human operators and carrying out a wide range of tasks (Civil Aviation Authority, 2016). From the removal of the flight engineer from the cockpit, whose function is now carried out by sophisticated full authority digital engine control (FADEC) computers, to the advanced stabilisation, guidance and navigation functions of modern aircraft flight control systems (FCS), the role of the crew in the cockpit has been transformed from "seat-of-the-pants" aviators to monitors of those systems, checking that they are functioning correctly. This function is considered to be so important that it warranted a study of its own by the UK regulator and airline industries (Civil Aviation Authority, 2013). The rapid development of automated technologies has moved the world of work and systems into emergent automation innovation challenges. The introduction of automation, to what are now referred to as "glass cockpits", provides numerous benefits, including increased vehicle trajectory precision (Murnaw, Sarter, & Wickens, 2001) and reduced crew workload. These benefits mainly manifest themselves in tasks that do not require the crew to be involved. However, when collaboration and cooperation between the crew and the automated system is required, problems can occur (Woods & Sarter, 2000). One key issue is that crews can become confused about the state and/or behaviour of the automation (Sarter & Woods, 1994, 1995). This can have fatal consequences, as demonstrated in the AF447 disaster, when three highly trained pilots were unable to identify that their aircraft was in a stall condition (a basic skill taught at the earliest stages of pilot training), at least partially because of the information from the aircraft's systems available to them (Bureau d'Enquêtes et d'Analyses, 2012). Crews can also become complacent about the ability of automation and fail to detect failures in the automatic systems (Parasuraman, Molloy, & Singh, 2009).
The challenges are most acute where the correct functioning of automatic systems is safety-critical. For example, many of the basic in-flight functions are carried out automatically with the crew monitoring the health of these functions, and ready to step in should any of them be identified as not performing correctly. The challenge of carrying out this supervisory task increases manifold when the crew are removed from the cockpit. The human loses vital sensory information (for example, engine pitch can be used as a surrogate for engine health or even thrust demand), whilst the aircraft loses a very powerful sensor and information processor. However, this is the very situation that unmanned aerial system (UAS) operators find themselves in when supervising typical missions for these vehicles. The supervisory task and assessment of the suitability of potential UAS operators thereby forms the basis of this paper.
A UAS can be defined as a powered vehicle that does not carry a human operator, can be operated autonomously or remotely, can be expendable or recoverable and can carry a lethal or nonlethal payload (Department of Defense, 2007). Since the 1970s aviation automation technology has proliferated. This has undoubtedly contributed to the continued excellent safety record enjoyed by air travel. However, with new technologies emerge new problems. For example, there has been a corresponding increase in errors caused by human-automation interaction; that is, human error (Prinzel, DeVries, Freeman, & Mikulka, 2001).
It has been recognised that UASs, and in particular those that have the capability to make certain high-order decisions independently (this agent will be referred to hereafter as "The Executive"), can reduce life cycle cost and serve as a force multiplier within the military and civilian world (Ruff, Calhoun, Draper, Fontejon, & Guilfoos, 2004). The success and growth in the use of automation and UASs does not eliminate humans from the system; instead, it transforms the human role from operator to supervisor. Such transformation means that the workload of the human supervisor is not necessarily reduced but instead requires cognitive resources and skills to be applied across a different set of tasks. For example, the supervisor must anticipate and understand the automation (Walliser, 2011) to ensure the UAS is free from errors and to effectively take control of malfunctions, if necessary (Ross, Szalma, Hancock, Barnett, & Taylor, 2008). Supervisors are thereby responsible for the allocation of functions between automatic and manual control, and whether the supervisor chooses to control the system automatically or manually can have an impact on the performance of the system (Lee & Moray, 1992). Human interaction is thus an integral part of UAS operations as part of the human-machine cooperative (Drury & Scott, 2008). However, it is anticipated that the benefits gained through the use of UASs can be increased through reductions in the supervisor-system ratio, with multiple UASs monitored by one supervisor (Ruff et al., 2004). Automated systems, coupled with the desire to operate them in ever-greater numbers with fewer supervisors, demand close examination of the human-machine relationship (Cring & Lenfestey, 2009).
One factor that may influence the efficacy of supervision is that of trust; perhaps one of the most important factors that enables automated systems to be used to their full potential (Lee & Moray, 1992). Trust in automation has been defined as the extent to which the supervisor is confident in and accurately willing to act on the basis of the recommendations, actions and decision of an artificially intelligent agent (the UAS Executive; Madsen & Gregor, 2000). Furthermore, trust has been characterised as "the attitude that an agent (the UAS Executive) will help achieve an individual's (supervisor's) goals in a situation characterised by uncertainty" (Lee & See, 2004, p. 51). This can be explained by the suggestion that a relationship embodied in trust leads to the effective use of resources, efficient cooperation and improved communication (Tajfel & Turner, 1986), while distrust produces an opposite conceptual framework (Toma & Butera, 2009; see also Tversky & Kahneman, 1974).
Yet, what is discussed less is the necessary concordance between a supervisor's own level of trust and confidence in decisions as they relate to accuracy. We will return to this point later. Ultimately, the capabilities and limitations of the UAS need to be understood in order that supervisors can effectively recognise and intervene when automation capabilities have been exceeded (Cring & Lenfestey, 2009). Accordingly, Lee and See's (2004) Appropriate Trust Framework states that trust calibration is essential for achieving appropriate dependence (i.e. where trust calibration refers to the match between the supervisor's level of trust in the automation and the automated aid's capabilities). If a supervisor's trust does not equal the true capabilities of the system then this may result in difficulties. For example, (a) in misuse (e.g. using it when it should not be used), (b) in an overreliance on the automation (e.g. paying less attention to important information) or (c) in disuse, such as the underuse of automation (e.g. ignoring alarms, turning off automated safety systems; Parasuraman & Riley, 1997).
The fatal consequences of the misuse of automation are evident from the crash of Eastern Flight 401 in the Florida Everglades, caused by the crew's failure to notice the disengagement of the autopilot and their poor monitoring of the aircraft's altitude (National Transportation Safety Board, 1973). Similarly, the consequences of the disuse of automation can be observed in the crash of Air France AF447 in 2009, caused by pilot error when the automatic Stall Warner was ignored because of conflicting air speed readings due to icing of the aircraft's air data system (Martins & Soares, 2012). These catastrophic situations demonstrate the importance of the operator's need to have appropriate confidence and trust in the automation available to them. However, these examples relate to accidents when the pilots were on board the aircraft. The issue becomes more relevant when a UAS is involved, as the supervisor lacks the proprioceptive cues available to pilots of manned aircraft (e.g. changes in engine noise or vibration that can indicate possible engine malfunctions; Drury & Scott, 2008). Indeed, Tvaryanas, Thompson, and Constable (2005) have shown that a significant number (n = 271) of UAS mishaps occurred in the last decade due to human factors. Further, an analysis of 16 UAS accidents by Glussich and Histon (2010) showed that, in many instances, common human deficiencies directly contributed to the loss of control of the automation and the aircraft.
The framework of automation use by Dzindolet, Beck, Pierce, and Dawe (2001) predicts that cognitive, motivational and social processes work together to cause misuse, disuse and inappropriate trust in automation and, indeed, many factors may affect each of these processes, impacting upon automation use. When forming trust judgements, supervisors of automated functions compare the perceived reliability of the automated aid to the reliability of manual operation in order to determine the perceived utility of the aid and the level of automation trust. If the perceived utility of the aid is high, trust in the automation is likely to be high and dependence on the automation expected. Conversely, if the perceived utility is low, trust will also be low and so self-reliance expected. Cognitive biases can impact upon the use of automation. For example, the number of tasks to be performed, intrinsic interest in the task, cognitive overhead, penalties for failure and rewards for completion, and so on, will affect the effort a supervisor will expend on any task and the likelihood of reliance on the automated aid. However, Lee and See (2004) found that high levels of trust in automation do not always result in misuse as long as the trust is appropriate. In support of this, individuals with high levels of trust in automation were more successful at detecting automation failure than those with low levels of trust. Furthermore, the self-confidence of a supervisor significantly influences how they interact with automation and the degree of trust instilled in it (Lee & Moray, 1992; Riley, 1994; Will, 1991). Individuals use and trust automation more when their confidence in their own ability is lower than their confidence in the automation, and vice versa (Lee & Moray, 1992; Riley, 1994). Thus, biases in self-confidence can have a substantial effect on the appropriate reliance on automation (Lee & See, 2004). That reliance may also be influenced by the degree of confidence one has in the automation, and thus some research demonstrates that individuals can tend to over rely on automation (Parasuraman & Manzey, 2010).
Automation bias, as it is termed, occurs when there is overconfidence in the automation system. It has been defined by Mosier and Skitka (1996) as "a heuristic replacement for vigilant information seeking and processing" (p. 205). This tendency to over rely on automation can negatively impact decision-making. For example, supervisors are likely to approve system decisions even when the system providing the information is unreliable (Cummings, 2004). Three main reasons for the occurrence of automation bias have been highlighted in the literature (Mosier & Skitka, 1996; Parasuraman & Manzey, 2010). First, automation may be deemed less cognitively demanding and thus a preferred choice, as individuals tend to opt for the option of least effort (a cognitive miser effect; Fiske & Taylor, 1991). Second, individuals tend to overestimate the correctness of automation, viewing it as holding knowledge superior to their own. For instance, information from automated systems has been rated as more accurate than information provided by humans (Dijkstra, Liebrand, & Timminga, 1998), a finding supported by research which suggests those with more expertise are less likely to rely on automation (Sanchez, Rogers, Fisk, & Rovira, 2014). Third, individuals may view automation as a diffusion of responsibility, resulting in feeling less accountable for the decision (Latane & Darley, 1970). Indeed, Skitka, Mosier, and Burdick (2000) found that increasing accountability reduced instances of automation bias. Hence, as supervisors need to be able to correctly allocate between automated and manual functions (Ross et al., 2008), it would be beneficial to examine what factors influence overconfidence in automation.
It has been suggested that the effect of self-confidence and reliance on automation (i.e. increased confidence) can be moderated by both the skill level of the supervisor and the risk associated with the decision to use or not to use automation (Riley, 1994). For instance, experience may impact how much confidence is placed on a decision. Indeed, Riley, Lyall, and Weiner (1993) found that pilots rely on automation more than novices. Further, decision confidence may also depend on the danger or risk associated with the decision. However, research regarding whether individuals rely on automation more or less with increased risk is mixed. For example, when the risk is low individuals show more confidence in automation, but when the risk is high individuals tend not to rely on automation, suggesting a reduction in confidence in automation when greater risk is involved (Perkins, Miller, Hashemi, & Burns, 2010). However, Lyons and Stokes (2011), in a study where supervisors were provided with the option to use either a human aid or an automated tool for decision-making, found that in conditions of high risk the human aid was relied on less, demonstrating a preference for the automated aid in high risk circumstances. Hence, confidence in automation may well vary according to associated risk. It is necessary, therefore, that both confidence in automated and manual decisions and the self-confidence of potential supervisors of automation be evaluated.
Currently, a wide range of individuals can legally operate a UAS. These range from professional pilots (e.g. Royal Air Force) to enlisted men (e.g. US Marine Corps) to private individuals (e.g. those who qualify for a UK Basic National UAS Certificate (BNUC-S), which allows them to fly aircraft up to 20 kg maximum take-off mass (MTOM) within visual line of sight (VLOS)). Certification requirements vary, however, depending upon classification. For example, larger systems such as Predator/Reaper or Global Hawk require formal training courses in UAS operations, tactical and theatre operations, battlespace awareness, threats, weapons and sensors. Smaller systems tend to perform less complex missions and require less formal training. However, the tempo of UAS operations, at least for larger (generally military at the time of writing) vehicles, has now outpaced current supervisor training regimes, resulting in a shortage of qualified UAS pilots. Surrogates need to be found to replace manned aircraft pilots as UAS supervisors, preferably recruits who would learn faster and be easier to train, in order to accelerate supervisor training and meet these new and pressing requirements (McKinley & McIntire, 2009). Indeed, the US Air Force has adopted aptitude requirements and a training syllabus (Undergraduate RPA Training or URT) for UAS pilot trainees with little or no prior flying experience (see Carretta, 2013; Rose, Barron, Carretta, Arnold, & Howse, 2014).
Nevertheless, it is possible that the ground control stations of UASs can be compared to traditional video game environments. This comparison can be made in the sense that, in a video game, the player is trying to achieve some goal (the aircraft mission) and interacts with the game via screens and inceptors that provide sufficient but limited information to allow this to happen (the aircraft sensor feed, displays and controllers). Thus, it is plausible to investigate whether video game experience and skills can be of particular benefit to UAS supervision. Indeed, video game players (VGP) who have no piloting experience may well be better suited to the role of UAS supervisor, as these individuals will tend not to base aviation decisions on proprioceptive cues available to pilots of manned aircraft (McKinley & McIntire, 2009). In addition, VGPs are argued to be able to track more targets (Castel, Pratt, & Drummond, 2005), and to have improved psychomotor skills (Griffith, Voloschin, Gibb, & Bailey, 1983), quicker reaction times (Yuji, 1996) and enhanced spatial skills (Dorval & Pepin, 1986). Importantly, many studies have found that the skills, abilities and other characteristics (SAOCs) of VGPs transfer to other cognitive tasks (Gopher, Weil, & Bareket, 1994; Green & Bavelier, 2007), although VGPs may lack tactical and/or operational awareness.
McKinley and McIntire (2009) compared VGPs, professional pilots and a control group that had no gaming or pilot experience on UAS cognitive tasks. It was found that VGPs and professional pilots did not significantly differ, but both were superior to the control group in aircraft control and landing skills. These findings suggest that VGPs possess skills that have direct application to UAS supervision, and that VGPs and professional pilots possess some skills relevant to the supervision of UASs. However, more research is needed to consolidate these outcomes across other measures. For example, it is important that self-confidence in decisions across decision-risk categories is associated with accurate responses. This paper aims to assess these measures across a range of potential UAS supervisors.
In addition, and in order to identify suitable recruits for supervisory roles, it is beneficial to look at various typologies of potential agents. As noted, there has been some research investigating the relationship different groups of potential supervisors have with the autonomous system operating the UAS with regard to levels of trust, what affects trust (Lee & See, 2004; Ruff et al., 2004) and the ability to effectively supervise a UAS (McKinley & McIntire, 2009). However, an extensive literature search found a vacuum of research focused on the supervisor's own levels of confidence and accuracy across potential UAS groups relative to decisions made. With this in mind, the present research focuses on four different groups of potential UAS supervisors and their confidence and accuracy across decision risk, including some comparison to broad psychological constructs. As Riley (1994) suggests, groups distinguished by their skill levels in aviation can differ in supervisory confidence and accuracy. The four groups examined by this research are: (a) a control group, individuals with no gaming or pilot experience; (b) VGPs, individuals who have been shown to possess cognitive abilities necessary for supervising a UAS; (c) private pilots, individuals who hold a private pilot's licence; and (d) professional pilots, individuals who are either instructors, commercial airline or military pilots.
The work here raises the notion that supervisor confidence is conceptually different from the trust examined in this context; confidence here is a qualifier associated with a particular decision. An individual who makes a decision associates it with a level of certainty (decision confidence) which arises from specific knowledge, with the decision built on reasoning; it is not synonymous with trust, which is largely based on intuitions (Muir, 1994; Shaw, 1997). This means an individual evaluates decisions and reports a level of confidence in those decisions that, ideally, correlates with correct performance (i.e. accuracy). Such a correlation is known as the within-subject confidence and accuracy (W-S C-A) relationship, a measure of metacognitive sensitivity that enables the expression of individual confidence in correct or incorrect responses (Wheatcroft & Ellison, 2012; Wheatcroft & Woods, 2010; Yeung & Summerfield, 2012). The measure assists the gauging of individual decision-making across a course of actions and so is very important where complexity may exist. Moreover, confidence has been related to decision success (i.e. increased accuracy; Bingi & Kasper, 1995), and overconfidence in wrong decisions can result in inappropriate, perhaps fatal, action. Adidam and Bingi (2000) note that if an individual has more confidence in their decisions they tend to allocate more resources (i.e. cognitive ability) to implementing the decision, though this work cannot necessarily be generalised to aerial settings. Nevertheless, pilots may have greater metacognitive sensitivity (W-S C-A) for supervising a UAS than non-pilots because skill levels have been shown to positively affect confidence and accuracy (Riley, 1994). However, it has also been suggested that simulation training (i.e. playing video games) can increase confidence in decision-making (Atinaja-Faller, Quigley, Banichoo, Tsveybel, & Quigley, 2010), implying VGPs may also exhibit high W-S C-A. This research will determine which potential UAS supervisory group is most metacognitively confidence-accuracy sensitive, and moreover, how this varies across decision risk categories.
On the note of decision danger/risk, research has suggested the difficulty of a decision task can influence confidence and accuracy; the easier a task, the greater the concordance between confidence and accuracy, and vice versa (Kebbell, Wagstaff, & Covey, 1996; Wheatcroft, Wagstaff, & Manarin, 2015). Decision difficulty, in the context of UAS supervision, can be induced by varying the potential danger/risk of the decision to be made, relevant given that decisions carrying dangerous implications can be more difficult to make (Riley, 1994). Thus, decision danger/risk may reduce individual confidence and affect decision-making variably across the potential UAS groups, whilst overconfidence can lead to risky (Krueger & Dickson, 1994) or inaccurate (Wheatcroft, Wagstaff, & Kebbell, 2004) decisions. The potential thereby exists for groups to be highly confident and wrong.
Finally, it is also useful to explore the personality typology of individuals across the potential groups, as certain groups may respond in particular ways due to stable, deep-seated predispositions (Chidester, Helmreich, Gregorich, & Geis, 1991). For example, those who have higher levels of ambiguity tolerance are more decisive and display greater confidence in choices (Ghosh & Ray, 1997; Maddux & Volkmann, 2010). The NEO-PIR is a general measure of five factors of personality (i.e. neuroticism, extraversion, openness to experience, agreeableness and conscientiousness). Measures of conscientiousness, openness to experience and agreeableness from the NEO-PIR have been shown to be positively related to pilot performance in manned aircraft (Barrick & Mount, 1991; Burke, 1995; Chidester, Kanki, Foushee, Dickinson, & Bowles, 1990; Siem & Murray, 1994), but as yet no work has considered these factors in a UAS context, as described, nor across group type. As research suggests that the five factors of the NEO-PIR have implications for aviation, these factors may also be important for UAS decision confidence and accuracy.
Rationale and aims
Potential UAS supervisor groups' confidence, accuracy and W-S C-A in decisions, related to personality constructs and examined together with the impact of decision danger, are of critical importance in this context, particularly as concordant confidence and accuracy are relevant to high-performing, successful decisions and vice versa.
The current study therefore explores accuracy, confidence and the W-S C-A relationship across groups (control, VGP, private, and professional pilots) to identify potential UAS supervisors' levels on these measures and in relation to decision style. In addition, the decisions to be made vary in danger/risk in order to reveal the impact that increased danger has on decision confidence and accuracy. The varied decision danger is designed to induce decision difficulty, which previous research has found to negatively impact confidence and accuracy (Kebbell et al., 1996; Wheatcroft et al., 2015). The present study will also assess whether groups differ in the decision confidence applied to manual decisions in comparison to automated decisions. Further, group personality traits will be assessed via standardised tools and related to confidence, accuracy and W-S C-A. Insight into these associations will help to determine traits useful for UAS supervision.
In the light of the exploratory nature of some of the above, two key non-directional hypotheses were formulated: (i) levels of confidence, accuracy and W-S C-A will differ across groups and levels of decision danger; and (ii) psychometric tests will reveal personality characteristics that differ across groups and in relation to confidence, accuracy and the W-S C-A relationship.
Importantly, two key directional hypotheses were also expressed: (iii) as decision danger increases, there will be a significant decrease in confidence, accuracy and W-S C-A across the groups; and (iv) decision confidence in automated and manual decisions will differ across groups and be negatively impacted by increased decision danger.
Participants
The sample consisted of four different groups (control, video game players (VGP), private, and professional pilots), each made up of 15 participants (i.e. 60 participants in total; 51 male, 9 female). The minimum age of any participant was 17, as this is the minimum legal age a person can hold a pilot's licence in the UK. There was no maximum age because holding a pilot's licence is determined by the ability to pass an appropriate medical (Civil Aviation Authority, 2010). The Control group consisted of participants with no gaming or pilot experience, recruited via the University of Liverpool (M = 39.4, SD = 18.8). The VGP group was recruited via the University of Liverpool (M = 21.7, SD = 2.9). The private pilot licence group was recruited from flying clubs in North West England (M = 45.1, SD = 16.1). The professional pilot group was identified in one of three ways: (1) holding an airline transport pilot's licence (ATPL), (2) holding an instructor rating, or (3) being a military pilot; members were recruited from various established flying institutions around North West England (M = 46, SD = 13.4). Opportunity sampling was employed. Any potential for representation bias and motivated responses was reduced by targeting a defined population and sample frame matched as keenly as possible. Participants' responses were kept confidential and were only identified by a number on their consent form and answer sheets.
Design
The independent variables are the UAS group and the level of potential danger of the decision (i.e. decision danger). The research employs a 4 (UAS Group: Control/VGP/Private Pilot/Professional Pilot) × 3 (Level of Decision Danger: Low/Medium/High) design, with UAS group varied between subjects and decision danger varied within subjects. The dependent variables are confidence, accuracy, within-subjects confidence-accuracy (W-S C-A) correlation scores, and psychometric scores.
Materials
A set of demographic questions asked for participant sex and age. A Tolerance of Ambiguity questionnaire was used to assess individual tolerance of ambiguity (Budner, 1961), and a Decision Style measure was administered (Need for Closure; Roets & Van Hiel, 2007). The NEO-PIR is a measure based on the five-factor model of personality and assesses the major factors (i.e. Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness). The NEO-PIR has strong support for reliability, construct and discriminant validity (see Piedmont & Weinstein, 1993).
To provide the participants with as realistic a scenario as possible on which to base their decisions, a series of video vignettes of typical scenarios that might be encountered during a flight was pre-recorded. To ensure ecological validity, this environment was modelled such that it contained information and displays typical of current UAS supervisory environments, with additional display elements to convey decisions being made by the Executive Agent (see Webster, Cameron, Fisher, & Jump, 2014). The supervisor's station information was displayed on a visual screen with four individual display units that showed: (1) An external view of the simulated (virtual) outside world. This emulated the view from a forward-looking camera mounted on the UAS in a good visual environment.
(2) A moving map display. This showed a real-time indication of the UAS's world plan-position and the current route planned by the Executive Agent.
(3) The "Basic Six" flight instruments. These instruments provide pilots of manned aircraft with the essential information required to conduct a flight. They are arranged in a standard configuration of 2 rows × 3 columns. The first row, moving left-to-right comprises: air speed indicator (ASI); attitude direction indicator (ADI) and altimeter. The second row moving left-to-right comprises: turn and slip indicator; horizontal situation indicator (HSI) and vertical speed indicator (VSI).
(4) The Aircraft Information Panel. Removing the pilot from the aircraft deprives the aircraft of a useful sensory system but also deprives the pilot of a number of valuable sensory cues that can be used to make decisions (engine noise, vibration etc.). This panel provided some limited information on the state of the aircraft (control surface positions, fuel remaining etc.) plus information concerning the communication status between the aircraft's Executive Agent and the relevant air traffic control (ATC) station. A number of these information messages were colour-coded to indicate the urgency with which they should be attended to (red = immediate action; orange = prepare to take action; green = no action required). Figure 1 shows the standard set of screens used to create the vignette videos.
The aircraft flight dynamics model was created using the multi-body dynamics software FLIGHTLAB [3] and was configured to be representative of a small general-aviation trainer aircraft. A piece of code was written to make the outputs of this model drive the visualisation of the outside world using the Microsoft FSX gaming software as the display engine. The other three displays were generated using Presagis' industry-standard VAPS (Virtual Avionics Prototyping Software) display creation software, with an additional specific piece of interface code being written to make the displays respond in the appropriate manner to the aircraft flight dynamic model's outputs. A Flight Briefing was drawn up to explain to participants what a UAS is, the overall mission of the flight, the goals of safety and performance, the capabilities and constraints of a UAS, and the standard operational procedures in aviation. An event and decision log was developed, and the strongest answer for all 21 decisions was identified and verified by two UAS pilot experts (Cronbach's Alpha = 1). The flight was separated into seven phases: taxy (the air vehicle manoeuvres on the ground to reach the runway threshold); take off (the air vehicle manoeuvres on the ground to line up with the runway centre-line, accelerates to a particular speed, rotates and becomes airborne); climb out (the vehicle achieves the desired climb attitude, heading and speed and continues in this mode until the desired cruise altitude is reached); cruise (the vehicle achieves the straight and level flight attitude and speed and follows heading autopilot commands to follow the pre-planned route); descent (in the vicinity of the destination airport, the vehicle begins a descent from the cruise altitude to achieve an altitude and inertial position that is suitable to begin the approach to the runway); approach (the air vehicle achieves the desired approach speed and configuration and lines up with the extended runway centre-line. It flies a heading such that it remains in this alignment and descends at an appropriate rate such that it is at approximately 50ft above ground level as it crosses the runway threshold); and landing (from the so-called 50ft "screen height", the air vehicle rate of descent is reduced such that it comes into contact with the runway surface at an acceptably low rate of descent). Each phase included three events, which varied in the potential danger/risk (low, medium, high) of the decision required. As such, there were 21 events, each requiring a decision. Every event had three options to choose from. There was always the option to (a) allow the autonomous system to control the UAS or (b) intervene and manually fly the vehicle. This increased the ecological validity of the experiment because, as with field operators, those supervisors taking part needed to balance the competing goals of safety and performance using automatic or manual control. The event and decision log provided a baseline measurement tool against which the research measurements (confidence, accuracy and W-S C-A) could be scored. Such methods and measures have been used successfully in previous research (Wheatcroft & Ellison, 2012; Wheatcroft & Woods, 2010). A Likert scale (ranging from 1, not at all confident, to 10, extremely confident) was used to record participant confidence in each of the decisions made.
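To illustrate how responses to such an event and decision log can be scored, the sketch below shows one possible representation of an event, a participant response, and accuracy scoring against the expert-verified strongest answer; the field names and example values are hypothetical and are not the authors' actual materials.

```python
from dataclasses import dataclass

@dataclass
class DecisionEvent:
    phase: str        # e.g. "taxy", "cruise", "landing"
    danger: str       # "low", "medium" or "high"
    options: tuple    # three options; always includes an automatic and a manual choice
    best_option: int  # index of the expert-verified strongest answer

@dataclass
class Response:
    chosen: int       # index of the option the participant selected
    confidence: int   # Likert rating, 1 (not at all confident) to 10 (extremely confident)

def score_by_danger(events, responses):
    """Proportion of correct decisions at each level of decision danger."""
    totals, correct = {}, {}
    for event, response in zip(events, responses):
        totals[event.danger] = totals.get(event.danger, 0) + 1
        if response.chosen == event.best_option:
            correct[event.danger] = correct.get(event.danger, 0) + 1
    return {danger: correct.get(danger, 0) / n for danger, n in totals.items()}

# Tiny illustrative use with two of the 21 events:
events = [DecisionEvent("cruise", "high", ("continue auto", "divert", "intervene"), 2),
          DecisionEvent("taxy", "low", ("continue auto", "intervene", "hold"), 0)]
responses = [Response(chosen=2, confidence=6), Response(chosen=1, confidence=8)]
print(score_by_danger(events, responses))   # e.g. {'high': 1.0, 'low': 0.0}
```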
Procedure
The investigation was approved by the University of Liverpool ethics committee. Participants were assigned to the appropriate group as defined by the criteria. The experiment was conducted with mixed groups of no more than 20 participants being tested at any one time. Participants were seated separately from each other in front of a projector and instructed not to confer. Participants were then given the information sheet to read, asked if they had any questions and, once satisfied that they understood the experiment, signed the consent form to proceed. Participants first completed the Demographic, Tolerance of Ambiguity, Decision Style and NEO-PIR questionnaires. All participants were given the opportunity to practise a short flight to familiarise them with the requirements of the study. For the main experiment, participants were given the Flight Briefing to read and instructed to supervise a UAS on a civilian cargo flight from A to B (Liverpool John Lennon International Airport to Blackpool International Airport) that was shown using the 33-min long vignette. This route was chosen as it was short enough to conduct the experiment several times whilst exposing the participants to, for example, several different kinds of airspace classification and scenarios that might be representative of a longer duration flight. During the flight procedure, 21 events required the participant to make a key decision. As each event arose, the flight was paused and the participants had 45 s to choose one of the three options presented to them. They had to select the one which they believed to be the best decision that met the terms of the briefing and then rate how confident they were in that decision. The simulation attempted as far as possible to mirror the context of decision-making required in this setting. Once the decision-making time had elapsed, the flight sequence was re-started from the paused condition until the next event occurred. This process was repeated until the flight had come to an end. Once the flight was completed, participants were fully debriefed to ensure they knew what they had taken part in, given the opportunity to ask questions, and reminded of their right to withdraw their data at any time. Ethical protocols were followed at all stages during the study.
Results
Participants' psychometric test scores, overall accuracy (i.e. the number of decisions made correctly) and confidence scores for the event and decision logs were calculated.
Psychometric data
First, one-way ANOVAs were conducted on the psychometric data to assess and compare each group on the psychometric measures (see Table 1). There was a significant effect for neuroticism, F(3, 56) = 3.10, p = .034, p < .05, and agreeableness, F(3, 56) = 3.06, p = .035, p < .05. Neuroticism was lower for professional pilots (M = 15.40, SD = 4.84) than for controls (M = 23.20, SD = 8.69), p = .022, p < .05. No other comparisons were found to be significant for neuroticism (p > .05). Agreeableness, however, was higher for professional pilots (M = 33.80, SD = 5.62) compared to VGPs (M = 28.13, SD = 4.93), p = .023, p < .05. No other comparisons for agreeableness were significant (p > .05). No other effects were observed; for example, the effect of tolerance of ambiguity A, F(3, 56), was not significant.

In order to establish whether the psychometric scores were related to accuracy, confidence and W-S C-A, Pearson's correlations were performed. No significant relationships between the psychometric data and accuracy were found (p > .05).
There was, however, a significant moderate negative relationship between neuroticism and confidence (r = −.415, p = .000, p < .001); as neuroticism increases, confidence decreases. A significant moderate positive relationship between conscientiousness and confidence (r = .374, p = .003, p < .01) was also found; as conscientiousness increases, so does confidence. Further, a significant weak negative relationship between tolerance of ambiguity A and W-S C-A was found (r = −.300, p = .019, p < .02); as the tolerance of ambiguity A score decreases (i.e. greater tolerance of ambiguity), the W-S C-A relationship increases.
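As a rough illustration of how these trait-outcome correlations might be computed, the sketch below uses scipy's Pearson correlation over per-participant scores. The data frame, its column names and the numbers in it are made-up toy values included only so the snippet runs; they are not the study's data.

```python
# Sketch of the correlational analysis between psychometric scores and the
# confidence / W-S C-A outcomes; one row per participant, toy values only.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "neuroticism":       [15, 23, 19, 27, 21, 18],
    "conscientiousness": [34, 28, 31, 25, 30, 33],
    "toa_a":             [44, 52, 48, 57, 50, 46],   # tolerance of ambiguity A score
    "confidence":        [180, 150, 165, 140, 158, 172],
    "ws_ca":             [0.35, 0.10, 0.22, -0.05, 0.18, 0.30],
})

for trait, outcome in [("neuroticism", "confidence"),
                       ("conscientiousness", "confidence"),
                       ("toa_a", "ws_ca")]:
    r, p = pearsonr(df[trait], df[outcome])
    print(f"{trait} vs {outcome}: r = {r:.3f}, p = {p:.3f}")
```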
Accuracy data
A 4 (UAS Group: control/VGP/private pilot/professional pilot) × 3 (Decision Danger: low/medium/high) repeated measures ANOVA was conducted to analyse the effect of decision danger and UAS group on accuracy (see Table 2).
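Because the design crosses a between-subjects factor (UAS group) with a within-subjects factor (decision danger), an analysis of this shape can be run as a mixed-design ANOVA. The sketch below shows one way this might be done in Python with the pingouin package; the long-format data frame, its column names and the toy accuracy values are assumptions for illustration, not the study's data or its original analysis software.

```python
# Sketch of a 4 (between: UAS group) x 3 (within: decision danger) design,
# assuming per-participant accuracy scores in long format (one row per
# participant x danger level); all values are toy data for illustration.
import pandas as pd
import pingouin as pg

groups = ["control", "vgp", "private", "professional"]
rows = []
for pid in range(1, 9):                       # two toy participants per group
    grp = groups[(pid - 1) // 2]
    for danger, base in zip(["low", "medium", "high"], [6, 5, 3]):
        rows.append({"participant": pid, "group": grp,
                     "danger": danger, "accuracy": base + (pid % 2)})
long_df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=long_df, dv="accuracy", within="danger",
                     subject="participant", between="group")
print(aov[["Source", "F", "p-unc"]])
```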
Confidence data
As before, a 4 × 3 repeated measures ANOVA was performed to examine the effect of decision danger and UAS group on confidence (see Table 3).
No difference in confidence was found between low and medium decision danger, p = .308, p > .05.
W-S C-A data
To establish if there were any significant effects of UAS group (control, VGP, private pilot, professional pilot) and level of decision danger (low, medium and high) for within-subjects confidence and accuracy (W-S C-A) correlations, it was first necessary to calculate each individual participant's W-S C-A score. The answer to each question was coded as correct or incorrect, and the confidence score for each question was recorded to generate a numerical relationship between confidence and accuracy for each participant (i.e. a point-biserial correlation). Table 4 illustrates.
Table 4. Means and standard deviations for overall W-S C-A and for W-S C-A at each level of decision danger (low, medium and high DD) by UAS group; standard deviations are shown in parentheses.

A further 4 × 3 repeated measures ANOVA was conducted to analyse the effect of decision danger and UAS group on W-S C-A.
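As a minimal sketch of the W-S C-A computation described above, the snippet below correlates one participant's 21 correct/incorrect codings with their 21 confidence ratings using scipy's point-biserial correlation; the two arrays are toy values for illustration, not study data.

```python
# Sketch of the per-participant W-S C-A computation: correlate the binary
# correct/incorrect coding of each of the 21 decisions with the 1-10
# confidence rating given for that decision (a point-biserial correlation).
# Values below are toy data only.
import numpy as np
from scipy.stats import pointbiserialr

correct    = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0])
confidence = np.array([8, 4, 7, 9, 5, 8, 7, 9, 3, 8, 9, 4, 7, 8, 9, 5, 8, 4, 7, 9, 5])

ws_ca, p_value = pointbiserialr(correct, confidence)
print(f"W-S C-A = {ws_ca:.3f} (p = {p_value:.3f})")

# Repeating this per participant (and, if needed, separately within each
# decision-danger level) yields scores of the kind summarised in Table 4.
```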
Between-subjects confidence-accuracy (B-S C-A)
In order to establish if confidence scores related to accuracy scores, a between-subjects Pearson's correlation was also conducted. A significant weak positive correlation was observed between confidence and accuracy (r = .272, p = .035, p < .05).
Decision type: Manual vs. automated data
One-way ANOVAs were conducted on decision confidence in manual and automated decisions for both high and low decision danger levels to assess differences between the groups (control, VGP, private pilots, professional pilots; see Table 5).
Manual choice-low decision danger
Significant differences were found between the groups in reported decision confidence in manual decisions made in conditions of low decision danger F(3, 56) = 11.385, p = .000, p < .001. Post hoc tests found that the VGP group (M = 18.5, SD = 2.1) was significantly more confident in their manual decisions than the control (M = 14.3, SD = 3.2), p = .000, p < .001. The professional pilot group (M = 19.2, SD = 1.1) was also significantly more confident in their decisions than the control (M = 14.3, SD = 3.2), p = .000, p < .001, and private pilots (M = 16.8, SD = 2.9), p = .035, p < .05. No other comparisons were significant.
Automated choice-low decision danger
Similarly to manual decisions, significant differences were found between groups in reported decision confidence for automated decision choices in low decision danger conditions, F(3, 56) = 5.086, p = .004, p < .01. Post hoc analysis found that professional pilots (M = 43.6, SD = 5.0) were more confident in automated decisions in low decision danger than both controls (M = 34.6, SD = 8.5), p = .009, p < .01, and private pilots (M = 38.1, SD = 4.4), p = .019, p < .02. No other comparisons were significant.
Automated choice-high decision danger
No effects were observed for confidence in automated decision choices in conditions of high decision danger, F(3, 56) = 1.417, p = .247, p > .05.
To investigate confidence across manual and automated decisions overall (regardless of group), a series of paired t-tests were conducted (see Table 6).
First, data were analysed to examine if decision confidence differed in high and low decision danger in automated and manual decisions. A Bonferroni correction was applied, p < .01.
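A hedged sketch of how such paired comparisons with a Bonferroni correction might be carried out is given below. The per-participant condition totals and the exact set of comparisons entered into the correction are assumptions made for the example (five comparisons would yield the p < .01 threshold reported above); they are not the study's data.

```python
# Sketch of paired t-tests on decision confidence across conditions with a
# Bonferroni-adjusted significance threshold; all values are toy data and the
# set of comparisons is assumed for illustration only.
import numpy as np
from scipy.stats import ttest_rel

# one summed-confidence value per participant per condition (toy values)
auto_low    = np.array([40, 38, 43, 36, 41, 39])
auto_high   = np.array([35, 33, 37, 31, 36, 34])
manual_low  = np.array([16, 15, 18, 14, 17, 15])
manual_high = np.array([18, 17, 20, 15, 19, 17])

comparisons = {
    "automated: low vs high danger":    (auto_low, auto_high),
    "manual: low vs high danger":       (manual_low, manual_high),
    "low danger: automated vs manual":  (auto_low, manual_low),
    "high danger: automated vs manual": (auto_high, manual_high),
    "overall: automated vs manual":     (auto_low + auto_high, manual_low + manual_high),
}
alpha = 0.05 / len(comparisons)   # Bonferroni correction: 0.05 / 5 = .01
for name, (a, b) in comparisons.items():
    t, p = ttest_rel(a, b)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}, significant at corrected alpha: {p < alpha}")
```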
Discussion
This study investigated accuracy, confidence and within-subjects confidence-accuracy (W-S C-A) relationships across UAS groups (control, VGPs, private and professional pilots) to identify factors relevant to potential UAS supervisors' decisions in a task where danger/risk was manipulated and where options to manually intervene or to allow the autonomous system to control the UAS were provided. Personality constructs were also considered.
Confidence
As predicted, the groups differed significantly in decision confidence. It was found that professional pilots and VGPs showed significantly higher confidence compared to the control group. One reason for the greater confidence of these two groups may well be the impact of prior experience, training and familiarity that both professional pilots and VGPs have with mission-based tasks which involve split-second reactive decisions that have implications in the real or virtual world. Indeed, it has been shown that experience and training, including simulation such as playing video games, can result in increased decision confidence (Atinaja-Faller et al., 2010; Chung & Monroe, 2000; Payne et al., 2002) and that familiarity can also result in increased decision confidence by enabling the illusion that individuals are accurately remembering important detail (Chandler, 1994).
Although not significant, the direction of the findings supports the possibility that professional pilots show greater levels of confidence because they are used to the possibility of real danger (M = 180.47), whereas VGPs operate within virtual danger (M = 172.53). As professional pilots frequently make aviation decisions that have potentially major implications across their professional and private lives and the lives of others, they thereby exhibit more confidence in such decisions. Conversely, private pilots, for example, do not fly as a career and are not used to the added stress of these types of issues, which may be expressed in confidence levels. The study thus lends some support to previous work (Chung & Monroe, 2000; Kebbell et al., 1996) which shows, for example, that as difficulty increases confidence would be expected to decrease. In turn, this may suggest that high danger levels can lead to problematic decision-making.
Furthermore, it is an interesting observation that professional pilots do exhibit more confidence across the decision dangers than do private pilots, again supporting the idea that experience, training and familiarity in dealing with high-impact decisions will effectively increase individual confidence. However, confidence without experience, training and relevant knowledge can be regarded as overconfidence. In fact, research suggests that when complex tasks are unfamiliar such overconfidence relative to accurate decision-making can occur (Wheatcroft et al., 2004). Professionals not only have experience but are highly regarded and relied upon for their expertise; it is therefore important that the professional label is coupled with the necessary training, particularly as it has been suggested that confidence can positively affect risk-taking behaviour (Krueger & Dickson, 1994). One can then be more confident that a decision made with high confidence will more likely be accurate.
Confidence in decision type: Manual vs. automated
A UAS supervisory role involves the allocation of functions between automated and manual control, which impacts how the system performs (Lee & Moray, 1992); hence, supervisors need to be able to apply the correct amount of confidence both to automated decisions and to those which require intervention. In this study, participants were always provided with the decision option to allow the autonomous system to control the UAS or to intervene and manually fly the vehicle, and confidence ratings for both these decision types were obtained. Data were analysed to assess whether groups would differ in how much confidence was applied to manual decisions in comparison to automated decisions, and vice versa, as a function of different levels of decision danger.
The results demonstrate support for the hypothesis that confidence in decisions to automate and manually intervene will differ between groups. Professional pilots were more confident in manual decisions in both high and low decision danger and automated decisions in low decision danger levels. Similarly to overall confidence, experience may be able to explain this finding, as professional pilots are more likely to have experience making both manual decisions and also using autonomous systems compared to other groups, therefore displaying higher levels of confidence in those decisions. Research has shown that pilots tend to rely more on automation in comparison to student populations (Riley et al., 1993), which would explain elevated confidence in automated decisions. Thus, it seems that experience may increase confidence in both automated and manual decisions.
However, it is interesting to note that no significant difference was found between groups in confidence in automated decisions in conditions of high decision danger. This demonstrates that all groups displayed similar levels of confidence in those decisions in these conditions, providing further support for the idea that increases in decision danger do impact on confidence, and that this is most evident in decisions to allow the autonomous system to control the UAS.
Further analysis examined the data collapsed across all groups. Supervisors were significantly more confident in decisions in low danger than in high decision danger, and this occurred in both automated and manual decisions. Conditions of high decision danger were characterised as encompassing more risk, consequently increasing decision complexity. In these conditions, it was found that decision confidence in the choice to use automation was reduced in comparison to decisions made in low decision danger. Hence, individuals felt less confident in decisions made by the UAS when in conditions of increased risk and complexity. Such a finding lends support to previous research which showed that in conditions of high risk automation is relied on less (Lyons & Stokes, 2011), as supervisors tend to display less confidence in the decision. Whilst in this study, for manual decisions, danger increased confidence.
All groups tended to place a higher degree of confidence in a decision when they chose to let the autonomous system control the UAS. This was observed in both high and low decision danger, and regardless of decision danger level in the collapsed data. As mentioned, confidence scores do not necessarily relate to accuracy; therefore these findings could demonstrate a tendency for supervisors to be overconfident in decisions made by autonomous systems, providing some support for automation bias (Mosier & Skitka, 1996). The idea that supervisors believe automation has superior knowledge, and that more confidence is consequently placed in its decisions, is not new (Dijkstra et al., 1998). The concept is further supported by the reduced confidence shown in manual decisions in this study, contrasting with previous research which argues that manual decisions are preferred (de Vries, Midden, & Bouwhuis, 2003; Lee & Moray, 1992).
Although it can be argued that confidence in automation is beneficial in that it reduces workload (Parasuraman, Cosenzo, & de Visser, 2009) overconfidence can also cause operators not to attend to conflicting data (Cummings, 2004) and ignore erroneous decisions (Mosier, Palmer, & Degani, 1992). Hence, it is imperative that supervisors are able to correctly discriminate between incorrect and correct autonomous decisions and place the appropriate confidence in the actual decisions taken.
W-S C-A (and B-S C-A)
Confidence, while very important, represents overconfidence if it is not correlated in the appropriate direction with decision accuracy. It is imperative, then, that when a supervisor exhibits a high level of confidence, a positive relationship exists between their assessment and the accuracy of the response. That is to say: the greater the confidence a supervisor expresses in their decision, the more accurate those answers should be. However, whilst correlation is a necessary condition, it is not a sufficient condition for causality. The supervisor may have more confidence in their decisions because they have learned through past experience that their decisions have high accuracy under similar conditions.
There was no group effect for W-S C-A. This suggests the metacognitive measure, as one might expect, requires a different skill set not necessarily afforded by past experience. Thus group membership does not significantly improve individual awareness of the accuracy or inaccuracy of judgements made (DePaulo & Pfeifer, 2006). However, there was a main effect of decision danger: largely, when decision danger was high, W-S C-A significantly decreased. This finding validates the categorisation of decision danger and reinforces that decisions which carry dangerous implications are harder to assess, judge and therefore make. A closer examination of the means showed that, although no interaction was observed, VGPs' W-S C-A remained relatively constant across the three decision danger categories, and this group produced the highest W-S C-A in the most difficult category. In comparison, other groups achieved roughly no correlation, or one which was negative (i.e. the control group), for the same category. To be able to maintain positive W-S C-A levels for high decision dangers is crucial, as these decisions are considered critical junctures at which things could go wrong. The observation that VGPs produce the highest correlation between confidence and accuracy for risk suggests that VGPs may be a good resource for UAS supervision, as, according to the findings here, this group are able to show the best awareness of the accuracy or inaccuracy of their decisions, particularly those characterised by high danger, and avoidance of overconfident ratings. However, a person's accuracy in risk assessment and in making the correct decisions is a key indicator of suitability for UAS operations, as confidence could improve with experience. There was also a significant positive relationship between confidence and accuracy (B-S C-A).
Accuracy
The groups did not differ in the accuracy of decisions. It is, however, plausible that this outcome could have been observed given the standardisation of the study. The groups were nonetheless separated by their experiences, and the finding that groups made reasonably equivalent decisions in terms of accuracy lends support to the Chung and Monroe (2000) finding that experience has no effect on accuracy. Indeed, Boot, Kramer, Simons, Fabiani, and Gratton (2008) suggest that any improved performance seen in video game players compared to non-players may either be due to practice in honing cognitive skills through the act of playing video games, or could be a result of self-selection to pursue video game playing given pre-existing skills. Video game players may have the capacity to develop relevant UAS skills through practice or may inherently possess the skills which drew them to game playing. However, the findings here do not support that view. Accuracy can be seen as a measure not only of whether the group members accurately identified the optimal response but also of whether a supervisor allows automatic control or takes control manually. The accuracy score can provide some insight into whether supervisor trust is appropriate to UAS capability. It could be said that as decision danger increases accuracy decreases, and that supervisor trust is thereby negatively affected and inappropriate. For example, either misuse (i.e. using automation for more than it should be used for: the danger is too high, so the individual allows automation to make the decision or carry out the action) or disuse (i.e. using manual control unnecessarily: the danger is considered too great to trust the automation, so the individual is more confident in their own ability) can occur more readily under such conditions. Of course, this suggestion would require further investigation. What can be said is that, given the best information and learning experience, the role of UAS supervisor is within the scope of non-pilot trained individuals; while the role requires new skill sets, and aviation experience may provide some slight advantage, it was certainly not found to be a significant factor here.
Broadly speaking, for decision danger the hypotheses were largely supported; accuracy, confidence and W-S C-A relationships reduced significantly as decision danger increased. The simple exception to this was that no differences were observed between low and medium decision danger for accuracy, confidence or W-S C-A. Therefore, a constant feature was that high decision danger impacted negatively on and across the accuracy, confidence and W-S C-A measures. It also had a role to play in the confidence expressed when choosing to intervene or rely on automation.
Personality constructs
It was predicted that the measured characteristics would differ across the groups, and in part this was supported. Professional pilots scored lower for neuroticism than the control group, which suggests the professional pilot group are more likely to address problems in an emotionally stable, calm, even-tempered and relaxed manner, and would be better equipped to cope with the stresses involved. Given the crew context professional pilots work in, they are much more likely to express these characteristics in an altruistic fashion where successful task completion takes priority. Moreover, the professional pilot group scored significantly higher on agreeableness than VGPs, indicating the latter would be more prone to competitiveness rather than helpfulness.
Of note is that, overall, neuroticism was negatively related to confidence; thus, individuals who score highly on this construct would be less able to control impulses and more susceptible to irrational thoughts and/or behaviour, which may well increase in intensity under stress. Screening professional pilots on the neuroticism construct is thereby very informative, as an inability to cope with stress can inhibit problem-solving, increase anxiety and make for less confident decisions (Michie, 2002). Further, the finding that professional pilots express significantly higher levels of confidence perhaps supports this idea; though the authors remain mindful that this higher level of confidence was not always reflected in accurate responses, particularly for decisions classified as high danger (see W-S C-A). Nonetheless, these findings suggest an advantage in selecting individuals for training who score low on neuroticism, as this can be considered vital for confidence in critical decision-making.
Correlational analysis across all participant groups showed that conscientiousness was positively related to confidence. The more planning, organisation and task focus an individual has, the more likely they are to express increased confidence; in this case, in the decisions made. This suggests that conscientiousness is a desirable trait for UAS supervisors to hold in order to exhibit increased decision confidence. Researchers have considered aptitude tests (Carretta, Rose, & Barron, 2015) and the utility of personality (Chappelle, McDonald, Heaton, Thompson, & Haynes, 2012) in the US Air Force, with the suggestion that other measures to supplement the current arrangements would be helpful.
What is as important is whether a relationship exists between confidence and accurate decisions. Here, intolerance of ambiguity (A) was negatively related to W-S C-A; in this case, lower scores on the tolerance of ambiguity (A) measure indicate greater tolerance of ambiguous contexts and tasks. Aviation is characterised by the need to make critical and potentially irreversible decisions during flight without the benefit of discussion and timed reflection. Therefore, greater tolerance of these kinds of situations is psychologically advantageous in that individuals can make confident and accurate decisions without the undue negative effects dissonance can bring. It follows that intolerant individuals would feel uncomfortable and be motivated to reduce this discomfort by making a decision that is inhibited by lowered confidence and an inability to accurately judge correctness. As such, individuals who have a greater tolerance of ambiguity will show increased W-S C-A relationships; such individuals are thereby an important resource in the role of UAS supervision, illustrating the importance of greater sensitivity in the assessment of efficacious decision-making. The W-S C-A correlation affords a starting point for metacognitive measurement in this context.
Of course, the participants knew the decisions had little real-life implication and thus the outcomes are generalisable only in this context. Despite this limitation the study attempted to maintain high verisimilitude as the simulation equipment was modelled on a UAS supervisory environment which incorporated the known requirements of supervisory control. Further the W-S C-A measure has been applied successfully in different contexts (Wheatcroft & Ellison, 2012;Wheatcroft & Woods, 2010). There are however traits other than those studied here that may be relevant to accuracy (Szalma & Taylor, 2011), confidence and decision-making in this important human machine interface.
It would therefore be beneficial to conduct further research into the impact of conditions containing even greater ecological validity. One way might be to instruct one group that they are indeed monitoring a UAS. Further, while this study was able to verify response accuracy against UAS supervisor expert decision logs there is scope to systematically increase the difficulty and complexity of events and to measure multiple decisions and/or sources of information. Indeed, the complexity of the factors and effects involved suggest that for any selection tools to be effective the optimal profiles would need to be developed separately for each level, type and so on (Szalma & Taylor, 2011). Latency to decisions could also be important to measure across groups and environments. Moreover, other groups (i.e. air traffic controllers who have experience of supervising multiples of aircraft) and group age may also be assessed.
The study adds to current literature which has the goal of developing ways of identifying and selecting appropriate personnel for specific tasks in this context. One might argue that focusing on SAOCs rather than specific VGP experience would result in a larger pool but this is yet to be established.
Conclusion
The Civil Aviation Authority (2012) recognises that there is currently no approved training course for the UAS supervisory role for vehicles above 20 kg MTOM. It does, however, express the view that a UAS supervisor need not have manned aircraft pilot experience, in recognition that a supervisor may require alternative skill sets. The findings here give ground to the idea that VGPs could be considered as a resource; indeed, VGPs displayed relatively constant W-S C-A across decision danger categorisations. VGPs exhibit some skills that may be required in successful UAS supervision, particularly as they are least likely to exhibit overconfidence in decision judgements. All groups displayed an increase in decision confidence in automated decisions, which can be problematic when unmatched against decision accuracy. The personality constructs measured suggest operators be selected for low neuroticism, high conscientiousness and tolerance of ambiguous contexts. It is important to note that, for supervisors to appropriately increase decision confidence, the experience gained in training, familiarity (simulated or real), mission-based tasks and time-limited decisions involving criticality will most likely need to be updated as part of continuous personal development.
"year": 2017,
"sha1": "397302207d01cdef3f4001a67e19df21c41c4c3c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311908.2017.1327628",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "d42361e1569d7b20f07bf51e368922e6d5aea092",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.